Web Vitals, World Wide.
Where your users are from can make surprising differences to Core Web Vitals
Recently, one of the tools I have on this site, the robots.txt Testing & Validation Tool, has seen a big increase in reported Cumulative Layout Shift (CLS).
Which is strange, because nothing changed with the tool in the few weeks running up to this. What the heck?!?!
Make sure you gather all the metrics!
The chart above is from my Visualize Core Web Vitals History tool, which is powered by the CrUX history API.
The Chrome User Experience Report (CrUX) is an invaluable data source for Core Web Vitals, and it's a great source of truth. It's the one that surfaces in your Search Console account, and at the top of PageSpeed Insights tests. It's great for giving you a heads up that there IS a problem, but sometimes not quite granular enough to let you spot WHAT that problem is.
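As an aside, if you want to pull that CrUX history yourself, the History API behind the chart can be queried directly. Here's a rough sketch using fetch; the API key, origin and metric are placeholders, and it's worth checking the current API docs for the exact request and response shape:
// Rough sketch of querying the CrUX History API directly.
// CRUX_API_KEY and the origin are placeholders.
// (Run inside a module or an async function for top-level await.)
const CRUX_API_KEY = 'YOUR_API_KEY';
const res = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryHistoryRecord?key=${CRUX_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin: 'https://example.com',
      formFactor: 'PHONE',
      metrics: ['cumulative_layout_shift']
    })
  }
);
const history = await res.json();
// Each requested metric comes back with a timeseries per collection period.
console.log(history.record?.metrics);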
Fortunately, you can gather your own metrics. There are a ton of ways and tools to do that, with plenty of third-party services that can gather real user metrics (RUM), also frequently referred to as field data.
You should be looking to collect some other metrics along with the three main core metrics of Largest Contentful Paint (LCP), Interaction to Next Paint (INP) and CLS.
To really be able to dig in, the metrics I feel you need in addition are:
- Device Type: i.e. mobile / desktop, and I guess tablet.
- Connection Type: NetworkInformation.effectiveType is Chrome only and kinda lumpy, but can still be useful; it gives you some idea if a connection is as good as or better than 4g, or slower.
- Screen Dimensions: The width and height of the viewport.
- Location: Where the user is.
- Attribution: Some idea of the element where the issue is happening.
- Time to First Byte (TTFB): How long your server took to respond to the user's request.
I'm a big fan of DIY, so I like to spin stuff up myself. My go-to stack for this starts with the ever useful web-vitals JavaScript library at its core. This excellent library gives you a good deal of these out of the box: LCP, CLS, INP & TTFB metrics.
You can also get attribution, though you need to make sure you're using the attribution build. When collecting attribution, I'm not really interested in that data unless the experience wasn't good; I only need that level of info when things are wrong, so there's no need to send it if it doesn't matter.
I also add collection of the viewport dimensions and the user agent. Basically, it looks like this:
<script type="module">
  import {
    onTTFB,
    onFCP,
    onLCP,
    onCLS,
    onINP
  } from '/wv/web-vitals.attribution-4.2.4.js?module';

  function doCwvProcessing(metric) {
    // we get the user agent string, to enable device type analysis
    const ua = navigator.userAgent;
    if (!ua.includes('Lighthouse') && !ua.includes('Headless')) {
      // ^ don't want synthetic tests to pollute the data
      // effectiveType is not available in all browsers, so default to '-'
      let etype = '-';
      // get the viewport dimensions
      const w = Math.max(document.documentElement.clientWidth, window.innerWidth || 0);
      const h = Math.max(document.documentElement.clientHeight, window.innerHeight || 0);
      // get the connection type if available
      if (navigator.connection) {
        etype = navigator.connection.effectiveType;
      }
      // we only want to send attribution for metrics that are not 'good', so we default to null
      let attribution = null;
      // if the metric is not 'good', we want to send the attribution data
      if (metric.rating !== 'good') {
        // get the attribution data for the metric
        switch (metric.name) {
          case 'CLS':
            attribution = metric.attribution.largestShiftTarget;
            break;
          case 'INP':
            attribution = metric.attribution.interactionTarget;
            break;
          case 'LCP':
            attribution = metric.attribution.element;
            break;
        }
      }
      // send the data to the server
      const body = JSON.stringify({
        metricname: metric.name,
        metricvalue: metric.value,
        navigationId: metric.navigationId,
        navigationType: metric.navigationType,
        rating: metric.rating,
        id: metric.id,
        ua: ua,
        url: `${window.location.protocol}//${window.location.host}${window.location.pathname}${window.location.search}`,
        etype: etype,
        vwidth: w,
        vheight: h,
        attribution: attribution
      });
      (navigator.sendBeacon &&
        navigator.sendBeacon('{{ your endpoint }}', body)) ||
        fetch('{{ your endpoint }}', { body, method: 'POST', keepalive: true });
    }
  }

  onTTFB(doCwvProcessing);
  onFCP(doCwvProcessing);
  onLCP(doCwvProcessing);
  onCLS(doCwvProcessing);
  onINP(doCwvProcessing);
</script>
I have a simple PHP endpoint that it posts to; I use Mobile_Detect to get the device type from the User Agent, and ipinfo.io to get the location from the IP address. Geo-location from IP is very far from an exact science, but it works well enough to get a good idea.
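If PHP isn't your thing, the same enrichment is easy to sketch in other stacks. Here's a rough Node.js illustration of the idea (not my actual endpoint); the crude user-agent check stands in for Mobile_Detect, and IPINFO_TOKEN is an assumed environment variable holding an ipinfo.io token:
// Rough Node.js (18+) sketch of the enrichment step; the real endpoint is PHP.
// deviceTypeFromUA is a crude stand-in for Mobile_Detect.
function deviceTypeFromUA(ua) {
  if (/iPad|Tablet/i.test(ua)) return 'tablet';
  if (/Mobi|Android|iPhone/i.test(ua)) return 'mobile';
  return 'desktop';
}

// Look up an approximate location for the client IP via ipinfo.io.
async function locationFromIP(ip) {
  try {
    const res = await fetch(`https://ipinfo.io/${ip}/json?token=${process.env.IPINFO_TOKEN}`);
    if (!res.ok) return { country: '-', city: '-' };
    const info = await res.json();
    return { country: info.country || '-', city: info.city || '-' };
  } catch (e) {
    return { country: '-', city: '-' };
  }
}

// Enrich the beacon payload with device type and location before storing it.
async function enrichBeacon(payload, clientIP) {
  const geo = await locationFromIP(clientIP);
  return { ...payload, devicetype: deviceTypeFromUA(payload.ua || ''), ...geo };
}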
That script then posts this to a Logflare source; it's an excellent tool that ultimately stores your data, nicely partitioned, in BigQuery.
You can then create simple reports in the tool of your choice, Looker Studio for example.
Time to Dig in, or Where's Wally CLS
Now I have that data, I can begin to look deeper. Sure enough, querying the data for that URL showed some poor CLS scores, attributed to this element: #svelte>main.main.grow.pt-4.pb-4.mt-4>div.container.mx-auto.p-4>div.
That happens to be the breadcrumb section; enter the red herring. For tools, the crumb features a drop-down widget that lets users switch between the tools. Surely that's the kind of interaction that can cause shifts; it must be it, right? Nope, not this time. I was suspicious of that explanation because the crumb hadn't changed in a while, but naturally I tested again to confirm, not least because browser updates can mean things happen differently. Rock solid on both current and Canary Chrome.
So, I dug in a little deeper. Of the folks experiencing this shift, what were the common themes? They all had 4g as an effective connection type, so perhaps not a speed-related thing (at the time I used a web font, which could theoretically have been a cause of CLS if the font loaded late), but it didn't add up in testing.
What they did have in common was that most of them were from Poland, with a few in Arabic-speaking countries.
Why would they be getting CLS? Red herring number two: were they getting worse performance? Was Cloudflare not quite doing its job? To see whether CSS or web fonts were arriving late, I checked TTFB for Poland: nothing concerning. So what do users in Poland have in common?
I have heard a rumour that Polish as a language is quite popular…
It's coming from inside the house browser
When thinking about Core Web Vitals, we, or at least I, tend to only think about the implications of the HTML, CSS and JavaScript we send the user, and how we send it to them.
But browsers can sometimes do other useful things with our sites, like for example translate English into Polish, so the user has some idea of what you're going on about.
It turns out that different words and languages tend to lay out at different sizes, for example:
Home > Tools > Robots.txt Testing & Validator Tool
Becomes:
Strona główna > Narzędzia > Narzędzie do testowania i walidacji pliku Robots.txt
When translated by Google into Polish. That's quite a lot longer.
I replicated the screen sizes users had, and used the browser to translate the page into Polish, and hey presto! I could replicate the CLS scores they were receiving.
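If you want to spot this in your own field data, one option is to note whether the page looks like it has been machine-translated when the beacon fires. This sketch is based on my understanding that Chrome's translation rewrites the lang attribute on the html element, and that Google's translation tooling tends to add classes like translated-ltr; treat those signals as assumptions and verify them for your own setup:
// Rough heuristic: compare the page's declared language with what the
// browser is currently showing. The translated-ltr / translated-rtl classes
// and the lang rewrite are behaviours to verify, not guarantees.
function getTranslationInfo(originalLang = 'en') {
  const html = document.documentElement;
  const currentLang = (html.getAttribute('lang') || '').toLowerCase();
  const translatedClass = html.classList.contains('translated-ltr') ||
    html.classList.contains('translated-rtl');
  const langChanged = currentLang !== '' && !currentLang.startsWith(originalLang);
  return {
    translated: translatedClass || langChanged,
    lang: currentLang || '-'
  };
}

// e.g. fold it into the beacon body:
// const body = JSON.stringify({ ...otherFields, ...getTranslationInfo('en') });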
I have some feelings about this. Personally I don't think it's really in the spirit of what cumulative layout shift is trying to measure, which is unexpected shifts. A user's browser changing the text from one language to another feels more like an expected shift to me. Hopefully one day they'll exclude these kinds of browser actions from recording shifts.
Why now, has Polish changed?
Not that I'm aware of! I guess the exact translation Google produces can change over time, but this was more a seasonal thing: towards the end of the year, traffic from predominantly English-speaking countries had trailed off, while Polish visits had increased a little, so relatively they were a much greater proportion of visits. As Core Web Vitals are measured at the 75th percentile, that was enough for these experiences to suddenly be setting the 75th percentile score for CLS.
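To make the percentile point concrete, here's a tiny sketch of how the 75th percentile can move when the mix of visits changes. It uses a simple nearest-rank percentile, and the CLS values are invented purely for illustration:
// Simple 75th percentile (nearest-rank style) over a list of CLS values.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Invented example: mostly good CLS scores, plus a handful of poor ones
// from translated page views.
const mostlyEnglish = [0.01, 0.02, 0.02, 0.03, 0.03, 0.04, 0.05, 0.3];
const moreTranslated = [0.01, 0.02, 0.03, 0.04, 0.3, 0.32, 0.35, 0.4];
console.log(p75(mostlyEnglish));  // 0.04 -> still "good"
console.log(p75(moreTranslated)); // 0.32 -> "poor"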
How could you fix it?
It might seem like something that's not actionable, but that's not entirely true. I could, for example, offer a Polish version of the page, and use something like hreflang to help folks land on that.
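For completeness, hreflang is just a set of alternate link elements (it can also go in a sitemap or HTTP headers); a minimal sketch with placeholder URLs might look like this:
<link rel="alternate" hreflang="en" href="https://example.com/tools/robots-txt/" />
<link rel="alternate" hreflang="pl" href="https://example.com/pl/tools/robots-txt/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/tools/robots-txt/" />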
But there are some things that need to be weighed up. Core Web Vitals are part of trying to give an end user a good experience on your site. Being understandable and not having garbled text is something that I would place above a bit of shifting around (which I personally don't even feel is necessarily unexpected). So to do it properly, I would need to turn to a proficient translator to translate the page into Polish for me, something I'd be hard pushed to justify the cost of for a free tool Jamie Indigo and I put out there.
But for your site, or a specific URL on a site, it could be a good indication that you have an underserved market, and there's an opportunity to do better for them, something that might more than pay for itself.
So for me, it's about understanding that Core Web Vitals are a tool, some data to help you measure where you are, and not a goal. If you can understand WHY a score is funky, you get to make the informed decision as to whether it's something you need to tackle. Often the answer is yes.
Occasionally it's a case of c'est la vie, or should I say takie jest życie.
About the Author:
Dave Smart
Technical SEO Consultant at Tame the Bots.