Who has the fastest F1 website in 2021? Part 1
In 2019 I did a performance review of F1 websites, and it was fun, so I figured I'd do it again, but bigger (definitely) and better (I hope). Turns out a lot has changed in the past two years, and, well, some things haven't.
Not interested in F1? It shouldn't matter. This is just a performance review of 10 recently-built/updated sites that have broadly the same goal, but are built by different teams, and have different performance issues.
- ➡️ Part 1: Methodology & Alpha Tauri
- Part 2: Alfa Romeo
- Part 3: Red Bull
- Part 4: Williams
- Part 5: Aston Martin
- Part 6: Ferrari
- Part 7: Haas
- Part 8: McLaren
- Bonus: Google I/O
- …more coming soon…
Methodology
I'm sticking to the same method as 2019, so we can compare between the years.
I'm going to put each site through WebPageTest, gathering the data in Chrome on a Moto G4 over a 3G connection.
Why test on 3G?
Alex Russell recently did an analysis of mobile devices and connection speeds and concluded that 'slow 4G' is a better baseline. However, I'm going to test on 'good 3G' to keep results comparable to the 2019 results.
Besides, I've been to the British Grand Prix, and, well, the Northamptonshire cell towers are ill-equipped to handle 140,000 people at once. Speeds grind to what feels more like 2G, and that's exactly the kind of time and place someone might visit an F1 website.
Why test on a 5-year-old phone?
If you look outside the tech bubble, a lot of users can't or don't want to pay for a high-end phone. To get a feel for how a site performs for real users, you have to look at mid-to-lower-end Android devices. And, unfortunately, new low-end phones perform about the same as a Moto G4.
The score
Each site will get a score, which is how long it takes to become interactive on a first load, plus the same for a second load (to measure caching efficiency). By "interactive", I mean meaningful content is displayed in a stable way, and the main thread is free enough to react to a tap/click.
There's some subjectivity there, so I'll try and justify things as I go along.
Issues with the test
I'm not comparing how 'good' the websites are in terms of design, features, etc. In fact, `about:blank` would win this contest. Thankfully no F1 teams have chosen `about:blank` as their website.
I'm only testing Chrome. Sorry. There's only one of me and I get tired. In fact, with 10 sites to get through, it's possible I'll miss something obvious, but I'll post the raw data so feel free to take a look.
Also, and perhaps most importantly, the results aren't a reflection of the abilities of the developers. We don't know how many were on each project, we don't know what their deadline was or any other constraints. My goal here is to show common performance issues that exist on real-world sites, how to identify them, and how to fix them.
Ok, that's enough waffle, let's GO, GO, GO!
Alpha Tauri
- Link
- First run: 16.7s (raw results)
- Second run: 5.4s (raw results)
- Total: 22.1s
- 2019 total
The video above shows how users would experience the site on a low-end phone on a good 3G connection.
Possible improvements
Here's what they could do to make major improvements to load performance:
- 7 second delay to content-render caused by CSS font tracker.
- 1 second delay to content-render caused by preload resource priority issues.
- 1 second delay to content-render caused by unnecessary SVG inlining.
- 5 second delay to primary image caused by unnecessary preloading.
- 1 second delay to primary image caused by poor image compression.
- 40+ second delay to content-blocking cookie modal caused by… a number of things.
Some of these delays overlap, so let's dive in:
This was a really interesting one to profile. More often than not, poor content-render performance is down to JavaScript in some way, but in this case it looks like someone has done the right thing and avoided render-blocking JS; instead, non-JS things have spoiled the performance party.
I use Chrome DevTools' "Performance" panel during development to measure page performance, and later use WebPageTest to test it on a low-end device. WebPageTest gives you a waterfall of resources, and since the page renders around the 17s mark, I focus on everything that loads before that (don't worry if you don't understand it yet):
In rows 1-10 I see:
- HTML, row 1 (213kB gzipped)
- 3 big images, rows 2, 3, 9 (~1MB)
- 4 CSS files, rows 4, 5, 6, 10 (~100kB)
- 2 JS files, rows 7, 8 (~40kB)
It's difficult to tell from our current tooling whether a JS file is contributing to render-blocking, but I looked at the source and it's non-blocking. The rest however…
Key issue: CSS font tracker
The main issue here is a CSS font tracker hosted on another server, and it accounts for 7 seconds of first-content delay, but it's really a combination of issues:
Parallelise sequential resources
That late-loading CSS on row 10 is a bad smell, so I took a closer look at the request in Chrome DevTools' network panel:
And there's the red flag in the 'initiator' column. We have some CSS, a render-blocking resource, which loads more CSS. The HTML contains:

```html
<link rel="stylesheet" href="autoptimize_etcetc.css" />
```

…which contains:

```css
@import url('//hello.myfonts.net/count/3ad3ad');
```
The browser is good at loading things in parallel, but it can only load what it knows about. In this case it doesn't know about the above resource until it loads the CSS that contains that line.
The ideal way to solve this is to delete that `@import` altogether, which would save 7 seconds, but that line is a requirement made by the owners of the web font, who use it to ensure sites have paid for font usage. I feel there are better ways for the font foundry to achieve this, but that isn't in the control of the site we're looking at. They could switch to open source fonts, which don't have these requirements & restrictions, but let's assume they can't do that, so we'll work around the problem.

We need to turn this from a sequence of requests into two requests in parallel, which we can do using `preload`:

```html
<link
  rel="preload"
  as="style"
  href="https://hello.myfonts.net/count/3ad3ad"
/>
```
This quick change would shave 3 seconds off the first-content delay. That isn't as good as saving the whole 7 seconds of course, because:
Avoid blocking resources on other servers
Back in the bad old days of HTTP/1.1, browsers had to set up a new HTTP connection for every in-parallel request, and were limited to between 2 and 8 connections per server (depending on the browser and version). This was extremely slow, especially if SSL was involved, since there's a chunk of up-front cost.
Because this limit was per-origin, you could work around the limit by adding more origins.
However, HTTP/2 came along and gave us massive parallelism across a single connection. You only pay the cost of connection setup once… if your resources are on the same server that is.
The requests on rows 1 & 10 have an extra thinner bit at the start, representing the various bits of connection setup. Row 1 has it because it's the first request to the site, and row 10 has it because it's to a different site.
That extra connection setup accounts for 5 seconds of blocking time. Using a preload tag would help start this connection earlier, but it can't eliminate the cost.
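If the render-blocking resource has to stay on another server, a `preconnect` hint is one way to start that connection setup as early as possible (a general technique, not something this site currently does):

```html
<link rel="preconnect" href="https://hello.myfonts.net" />
```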
Unfortunately what was 'best practice' in HTTP/1.1 times became 'worst practice' in HTTP/2 times. As a result, it's important to avoid hosting render-blocking content on other servers.
In this case, because it's a tracker, it can't just be moved to the site's own server, so we need another solution:
Load cross-origin font CSS async
Since we can't do anything else about it, the best thing we can do is remove the render-blocking nature of that separate connection. We can do that by moving all the `@font-face` related CSS, along with the `@import`, into its own stylesheet, and async-loading it in the `<head>`:
<link rel="preload" href="/font-css.css" as="style" />
<link
rel="stylesheet"
href="/font-css.css"
media="print"
onload="media='all'"
/>
This technique was developed by the Filament Group. Browsers will download print stylesheets ahead of time, but they won't block rendering. However, they'll download it at a low priority, so the `preload` is used to make it high priority. When the stylesheet has loaded, it changes its `media` to `all`, so it applies to the page.
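Since the `onload` part relies on JavaScript, it's worth pairing this with a fallback for browsers with scripting disabled. A minimal sketch:

```html
<noscript>
  <link rel="stylesheet" href="/font-css.css" />
</noscript>
```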
As a side-effect, fonts will display using fallbacks before the CSS loads. Make sure this looks ok, and make sure your `@font-face` rules use `font-display: swap` so this pattern continues once the CSS loads.
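For instance (the font name and path here are made up for illustration):

```css
@font-face {
  font-family: 'TeamFont';
  src: url('/fonts/team-font.woff2') format('woff2');
  /* Render fallback text immediately; swap the web font in when it arrives */
  font-display: swap;
}
```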
If you're not happy with `swap`, use the font loading API to get even more control over how the page displays while fonts are loading.
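For example, here's a rough sketch (again with a made-up font name) that keeps text in the fallback font until the web font has fully loaded, avoiding a mid-read swap:

```js
// Kick off the font download, then opt the page into using it.
document.fonts.load("1em 'TeamFont'").then(() => {
  document.documentElement.classList.add('fonts-loaded');
});
```

…paired with CSS that only applies `'TeamFont'` when the `fonts-loaded` class is present.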
And one last thing:
Preload fonts
Fonts only start downloading once the browser finds something on the page that needs them. This is efficient in some ways, as it avoids loading fonts it doesn't need. However, that also means they can start downloading pretty late.
If you're sure particular fonts are needed on the page, preload them to get the download starting earlier:
```html
<link
  rel="preload"
  href="/path/to/font.woff2"
  as="font"
  type="font/woff2"
  crossorigin
/>
```
The `crossorigin` bit is important, because font requests are CORS requests. Using `crossorigin` in a preload ensures the preload also uses CORS.
Phew, ok, next issue:
Key issue: Late modal
I don't have anything nice to say about 'cookie consent' modals. I think they're doing a huge amount of damage to the web exclusively, even though the problems they're trying to solve exist on other platforms too. Also, I don't think they solve the problems they're trying to solve. But hey, I'm not a lawyer, so I've mostly ignored them in this test, and haven't factored them into a site's score.
However, throwing up a modal after the user has been using the page for 30 seconds is an awful user experience. The only 'sensible' way to show one of these modals is to use a small-as-you-can-make-it bit of JS at the top of the page, so you can show it before anything else, and the user can get it out of the way early.
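A sketch of the shape of that (the cookie name and class are hypothetical):

```html
<script>
  // Inlined at the very top of <head>, so the check runs before
  // anything renders. 'consent' is a made-up cookie name.
  if (!/(^|; )consent=/.test(document.cookie)) {
    document.documentElement.classList.add('show-consent-modal');
  }
</script>
```

…with CSS keyed off that class, so the modal appears as soon as the page starts rendering, rather than 30 seconds in.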
Issue: Preload priority
There are two images at the top of the waterfall:
I was surprised to see images loading before CSS, since CSS is render-blocking but images aren't. As you can see from the darker bits of the response, which represent bytes being received, the image is taking bandwidth away from the CSS.
It turns out these are preloaded images:
<link rel="preload" href="alphatauri_desktop.jpg" as="image" />
<link rel="preload" href="alphatauri_tablet.jpg" as="image" />
I'm surprised that the browser sent the preload request before the CSS, but request priority is a really delicate balance between what the browser asks for and what the server chooses to send. Maybe putting the preload later in the source would help, or avoid the preload altogether and instead use an `<img>` to load the image (currently it's a CSS background).
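A sketch of that swap (the class names are made up; `object-fit` recreates the cover-style cropping a CSS background gives you):

```html
<div class="hero">
  <!-- An <img> the browser can discover and prioritise by itself -->
  <img class="hero-img" src="alphatauri_desktop.jpg" alt="Pierre Gasly" />
</div>
<style>
  .hero-img {
    width: 100%;
    height: 100%;
    object-fit: cover;
  }
</style>
```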
Update: Performance expert Andy Davies has encountered this priority issue before too, and told me why it happens, and… it's AppCache.
Preload requests always bypass AppCache, but other requests on the page don't. That means the browser sees the preload, and queues it up. Then it sees the more-important CSS request, but it doesn't know if it needs to request it, because it might be handled by AppCache, so it has to check if the site uses AppCache or not. This takes time, and during that time the less-important request goes through.
Here's the Chrome bug, and there's a fix in progress.
Issue: Unnecessary preloading
You might have spotted an issue in the last section, but if not, Chrome DevTools' console is here to help:
Those warnings are triggered by preloads like this:
<link rel="preload" href="alphatauri_desktop.jpg" as="image" />
<link rel="preload" href="alphatauri_tablet.jpg" as="image" />
These are images for the main carousel at the top of the page. The developers added these for really sensible reasons; the carousel is created with JavaScript, and it triggers the loading of the images, meaning those images would otherwise start downloading really late. Since these are at the top of the page, it's important that they load early, and preloading solves that.
However, they're preloading the desktop and tablet versions of the image, throwing away the benefit of responsive images.
To avoid this, the preloads could use `media`:
```html
<link
  rel="preload"
  media="(max-width: 399px)"
  href="alphatauri_tablet.jpg"
  as="image"
/>
<link
  rel="preload"
  media="(min-width: 400px)"
  href="alphatauri_desktop.jpg"
  as="image"
/>
```
Or, instead of using JS to load key images, use an `<img>`, which comes with all of the responsive image functionality built-in.
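Something like this (the widths in `srcset` and `sizes` are guesses for illustration):

```html
<img
  src="alphatauri_desktop.jpg"
  srcset="alphatauri_tablet.jpg 800w, alphatauri_desktop.jpg 1600w"
  sizes="100vw"
  alt="…"
/>
```

The browser picks the best candidate for the viewport and only downloads that one.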
Issue: Inlined secondary content
The HTML is 213kB, which is pretty big given the content. HTML downloads with really high priority because it contains core content, and tells the browser about other resources that are needed.
The darker areas of the response in the waterfall represents bytes being received, and as you can see the HTML downloads entirely before other resources start downloading.
I took a look at the source, and it's littered with large inline SVG, which could be optimised:
I optimised the SVG using SVGOMG, but a WebP version (which I created using Squoosh) seems like the better option.
But the real problem here is that it's inlined. "Inlining" means that, rather than having the resource in a separate file, it's included in the HTML. Inlining is great for removing the request/response overhead for blocking or key assets. The downside is that the browser has to download it all before it gets to the next thing in the markup.
In this case, it's a logo that appears far down the page, but it ends up using bandwidth that could be better used for urgent resources like the CSS.
If this was an `<img>`, the browser could be smarter about the priority of the download, and shave off around a second of the time-to-content.
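For instance (the file name is hypothetical; the inline SVG would move into its own file):

```html
<!-- A separate request the browser can schedule at low priority,
     since the logo sits far down the page -->
<img src="/img/logo.svg" alt="AlphaTauri logo" loading="lazy" />
```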
Avoid inlining, unless it's for content that would be render-blocking anyway, or if it's super tiny. If in doubt, profile it!
Issue: Large primary image
The first image that appears takes up most of the viewport, so it's important to get it in front of the user as soon as possible:
I'm recompressing the original here, so it's always going to be worse than the original due to generational loss. And remember, when it comes to use-cases like this, we're going for "quick and doesn't look bad" rather than "slow and perfect".
There's a little more smoothing around the arm than I'd usually go for, but on the site it's obscured by text, so it seems fine.
If I was building the site, I think I'd try and separate the picture of Gasly from the blue arrow overlay, and instead recreate that with SVG.
I used Squoosh to compress these images. Browser support for AVIF is limited to Chrome right now, but you can use `<picture>` to allow browsers to select the best format they support:
```html
<picture>
  <source type="image/avif" srcset="img.avif" />
  <img alt="…" src="img.jpg" />
</picture>
```
How fast could it be?
I wanted to get a feel for how fast these sites could be, so I created optimised versions to compare them to. I didn't have time to completely rebuild 10 sites of course, so I cut some corners. Here's what I did:
- Snapshotted the DOM after initial JS execution. Ideally sites should serve something like this, but with elements of interaction removed until the JavaScript can enhance those elements.
- Inlined required CSS. I discarded any CSS that wasn't selectable by the page.
- Removed JavaScript. A real version of the site would have JavaScript, but it shouldn't impact time-to-content in sites like this.
- Optimised key images.
I wrote a hacky script to automate some of this. It isn't 100% real, but it gives a fairly accurate feel for how the site could perform:
- Original
- Optimised
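For the curious, the DOM-snapshotting step of such a script could look something like this — a rough sketch using Puppeteer (not the actual script behind these numbers):

```js
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
// Let the initial JS run so the snapshot includes its output
await page.goto('https://www.example.com/', { waitUntil: 'networkidle0' });
const html = await page.content();
await browser.close();
// …then inline the required CSS, strip the scripts, and optimise key images.
```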
And that's it for now!
I'm not sure how many parts this series will be. It probably depends on how much there is to write about each site. The next part is definitely just about one team, because there's some very interesting JavaScript stuff going on…
- ➡️ Part 1: Methodology & Alpha Tauri
- Part 2: Alfa Romeo
- Part 3: Red Bull
- Part 4: Williams
- Part 5: Aston Martin
- Part 6: Ferrari
- Part 7: Haas
- Part 8: McLaren
- Bonus: Google I/O
- …more coming soon…