Earlier in the year I spoke at Halfstack Online, an online version of the Halfstack London conference I have spoken at for the past five years. The talk was called Programmatically Performant, and it was all about how, as developers, we should spend more time capturing web performance metrics from our sites so we can make informed, data-driven decisions about how to improve them. Having given the talk, I realised it might be useful to write a blog post on this topic as well, and that is how this post was born.
The different kinds of data
Web performance data about your site can be split into two kinds: synthetic data and real user data.
The first type of data we will collect is synthetic data. Synthetic data, as the name suggests, is data captured in a lab-like setting; usually this means running your tests from a server in an environment that has a consistent internet connection. Quite often, as developers, we will run these tests on platforms like AWS or Google Cloud Platform.
For both kinds of data there are five key metrics we should be looking at:
- Time to first byte (TTFB) — The time it takes for the browser to receive the first byte of the response
- First Input Delay (FID) — The time from a user first interacting with the page to the time when the browser can respond to that interaction
- First Contentful Paint (FCP) — The time until the browser first renders any content a user can see on their screen
- Largest Contentful Paint (LCP) — The time until the page’s main content has likely loaded
- Cumulative Layout Shift (CLS) — A measure of how much the layout shifts unexpectedly; this metric will highlight whether you have a problem with how your website loads assets.
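CLS is the least intuitive of the five, because it is an aggregate rather than a single timestamp. As a rough sketch (simplified from Chrome's definition), individual layout shifts are grouped into "session windows" — a window ends after a gap of more than one second between shifts, or once the window spans five seconds — and CLS is the total of the worst window. The function below illustrates that grouping; the shape of the input (`value` and `startTime` per shift) mirrors what the browser's layout-instability entries report, but this is an illustrative implementation, not the browser's.

```javascript
// Simplified sketch of CLS aggregation. Each shift has a score (`value`)
// and a `startTime` in milliseconds. Shifts are grouped into session
// windows (new window after a >1s gap, or when the window exceeds 5s),
// and CLS is the largest window total.
function cumulativeLayoutShift(shifts) {
  let maxWindow = 0;
  let windowValue = 0;
  let windowStart = 0;
  let prevTime = -Infinity;

  for (const { value, startTime } of shifts) {
    const gapTooLarge = startTime - prevTime > 1000;
    const windowTooLong = startTime - windowStart > 5000;
    if (gapTooLarge || windowTooLong) {
      // Start a new session window.
      windowValue = 0;
      windowStart = startTime;
    }
    windowValue += value;
    prevTime = startTime;
    maxWindow = Math.max(maxWindow, windowValue);
  }
  return maxWindow;
}
```

In practice you would not compute this yourself — libraries such as Google's web-vitals report the final score for you — but seeing the session-window logic makes it clearer why a page that shifts once badly can score worse than one that shifts several times far apart.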
Collecting Synthetic Data
To collect synthetic data we need an environment that will give us repeatable results. Usually this means a dedicated server where factors such as network speed and hardware can be kept constant between test runs.
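Even in a controlled environment, individual runs are noisy, so a common approach is to run the same test several times and report the median of each metric rather than trusting a single run. A small sketch of that aggregation step is below; the run shape (`ttfb`, `lcp` keys) is illustrative, not the output format of any particular tool.

```javascript
// Return the median of a list of numbers.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Collapse several synthetic runs into one summary: for each metric,
// take the median across runs. Assumes every run reports the same metrics.
function aggregateRuns(runs) {
  const summary = {};
  for (const metric of Object.keys(runs[0])) {
    summary[metric] = median(runs.map((run) => run[metric]));
  }
  return summary;
}
```

Taking the median rather than the mean means one unusually slow run (a cold cache, a scheduling hiccup on the test server) does not skew the reported numbers.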