Who has the fastest F1 website in 2021? Part 5
This is part 5 in a multi-part series looking at the loading performance of F1 websites. Not interested in F1? It shouldn't matter. This is just a performance review of 10 recently-built/updated sites that have broadly the same goal, but are built by different teams, and have different performance issues.
- Methodology & Alpha Tauri
- Alfa Romeo
- Red Bull
- ➡️ Aston Martin
- …more coming soon…
Pretty good! This team was called Racing Point last year (and 2019), and their site was a bit of a performance disaster, but it looks like that's all fixed! In 2020 their car was a direct copy of the Mercedes, earning them the nickname Tracing Point, which is top-quality punnagement.
Cutting the mustard
I was particularly pleased to see this at the bottom of the page:
<script nomodule src="https://polyfill.io/v3/polyfill.min.js?…"></script>
They use Polyfill.io to bring in the required polyfills. Polyfill.io uses the User-Agent string to decide which polyfills are needed, and as a result the script is empty in modern browsers. To avoid making a request that results in nothing, the Aston Martin team used nomodule.
This is a loose form of feature-detection that my former BBC colleague Tom Maslen called "Cutting the mustard". Browsers that support modules just so happen to support the rest of the stuff that's required, so nomodule becomes a convenient feature-detect.
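To make the pattern concrete, here's a minimal sketch of how the module/nomodule split usually looks (the filenames are hypothetical, not taken from the Aston Martin site):

```html
<!-- Module-supporting browsers run this, and skip the nomodule script below -->
<script type="module" src="/js/app.mjs"></script>
<!-- Older browsers don't understand type="module", so they run this instead -->
<script nomodule src="/js/polyfills.js"></script>
```

Browsers that support modules ignore nomodule scripts, and browsers that don't support modules won't fetch a script with an unrecognised type, so each browser only downloads what it needs.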
- 3.5 second delay to content render caused by font foundry CSS. Covered in detail below.
- 1.25 second delay to content render caused by additional CSS on another server. This problem is covered in part 1, and the solution here is just to move the CSS to the same server as the page.
- Paragraph layout shift caused by late-loading fonts, which could be loaded earlier using some preload tags.
- 0.5 second delay to main image caused by poor image compression.
These delays overlap in some cases.
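For the font layout shift, those preload tags would look something like this (the font URL here is made up for illustration; the real one would come from the site's CSS):

```html
<link rel="preload" href="https://use.typekit.net/af/example.woff2" as="font" type="font/woff2" crossorigin />
```

Note the crossorigin attribute: font requests are CORS requests, so the preload has to match, otherwise the browser makes a second request rather than reusing the preloaded one. More on that later.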
Key issue: Font foundry CSS
Here's the start of the waterfall:
I see extra connections on rows 2, 3, 6, and 12. The CSS on row 3 belongs to the site, so that should be moved to the same server as the page to avoid that extra connection, as covered in part 1.
The lateness of 12 suggests it's being initiated by another blocking resource, and Chrome DevTools confirms it:
I'm not sure why it thinks it's on line -Infinity though. Anyway… If this seems familiar, it's because it's the same font foundry tracking issue we saw in part 1, and the best solution is to load this stuff async. But there are some new details here worth exploring:
The hello.myfonts.net CSS request on row 6 happens in parallel with the CSS on row 3, even though it's the CSS on row 3 that imports the CSS on row 6. This is because the developers have worked hard to limit the damage of those other-server requests:
<link rel="preconnect" href="https://p.typekit.net" crossorigin />
<link rel="preconnect" href="https://use.typekit.net" crossorigin />
<link rel="preload" href="//hello.myfonts.net/count/3cde9e" as="style" />
It's the preload that makes the request happen in parallel. This is great to see! It doesn't solve the problem as effectively as loading the font CSS async, but it's still much faster than what we saw on the Alpha Tauri site.
There's preconnect too. Preconnects are great if you don't know the full URL of an important other-site resource, but you do know its origin. In this case, some fonts come from use.typekit.net, and we can see the benefit of the preconnect:
See how the connection phase of the request happens really early on row 59? That's the preconnect in action. A full preload would have been better, but I guess they didn't have a way to know the URLs ahead of time. But wait…
The connection on row 12 happens late. But the page includes:
<link rel="preconnect" href="https://p.typekit.net" crossorigin />
…so what's going on? Why isn't that pre-connect happening? Well, unfortunately it is, but it isn't used.
Connections and credentials
By default, CORS requests are made without credentials, which means no cookies and other things that directly identify the user. If browsers sent no-credential requests down the same HTTP connection as credentialed requests, the whole thing would be pointless. Imagine if, halfway through a phone call, you put on a different voice and pretended to be someone else. The other person is unlikely to be fooled, because it's part of the same call. So, for requests to another origin, browsers use different connections for no-credential and credentialed requests.
Which brings us back to:
<link rel="preconnect" href="https://p.typekit.net" crossorigin />
<link rel="preconnect" href="https://use.typekit.net" crossorigin />
The crossorigin attribute tells the browser to make a no-credential connection, which is ideal for requests that use CORS. Font requests use CORS, so the second preconnect is doing the right thing. But the first preconnect is there to handle the @import, and @import requests in CSS don't use CORS; they're fully credentialed requests. The pre-connection still happens, but it isn't used. In fact, it's likely taking up bandwidth that could have been used elsewhere.
Unfortunately Chrome DevTools doesn't show this extra connection, so I dug into chrome://net-export/ to create a full log of browser network activity. Since this records everything the browser does on the network, I started a new instance of Chrome so I wasn't capturing too much noise from other tabs. Here are the key results:
This is pretty low-level stuff, so don't worry if it doesn't make sense; there's a lot of it I don't understand either. But what I can see is row 2691, which is a request to set up a connection to p.typekit.net, resulting in the socket in row 2695. The pm (privacy-mode) code in the connection identifies this as a no-credentials connection. This is the preconnect in action.
Then, in row 2872 we get the actual request for the CSS resource, and sadness, we get another socket in row 2883. This connection doesn't have the pm code, because it's for credentialed requests.
The solution here is simple: just remove the crossorigin attribute from that preconnect. All requests to that particular origin are credentialed, so that works fine. Sometimes you'll need two preconnects for the same origin: one for credentialed fetches, and another for no-credential fetches.
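To sketch that last point: if an origin served both kinds of request, you'd declare both connections up front (using p.typekit.net here purely for illustration; on this site only the credentialed one is actually needed):

```html
<!-- for credentialed requests, such as CSS pulled in via @import -->
<link rel="preconnect" href="https://p.typekit.net" />
<!-- for no-credential CORS requests, such as fonts -->
<link rel="preconnect" href="https://p.typekit.net" crossorigin />
```

The two tags look nearly identical, but because of the connection-pool split described above, they warm up two different connections.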
Issue: Main image compression
The image compression on the Aston Martin site is mostly very good. Someone clearly took time over it. Unfortunately, one place it isn't so good is the all-important main image of the page:
As usual, the WebP and AVIF have more smoothing, but this image sits underneath text, so you can get away with a lot of compression. Perhaps even more than above.
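Since the site is already serving WebP and AVIF, the fix is mostly a matter of re-encoding the sources at a lower quality. For reference, a format-negotiating main image usually looks something like this (filenames and dimensions are hypothetical):

```html
<picture>
  <source type="image/avif" srcset="/images/hero.avif" />
  <source type="image/webp" srcset="/images/hero.webp" />
  <img src="/images/hero.jpg" alt="The Aston Martin F1 car" width="1920" height="1080" />
</picture>
```

The browser picks the first source type it supports, and the width and height attributes let it reserve space before the image arrives, avoiding layout shift.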
How fast could it be?
Here's how fast it looks with the first-render unblocked and the image optimised.
It isn't worlds apart. The Aston Martin site is generally well-built.
Aston Martin slots in just behind Red Bull. It's incredibly close for the lead! But with five teams still to go, can anyone beat Red Bull to the title?