Looking ahead to 2024: Building faster, more efficient web experiences

Updated on 2024-01-30

Author | Rick Viscomi

Translator | Knowing the Mountain

Curator | Ding Xiaoyun

The web is getting faster. Data from the HTTP Archive shows that more and more websites are passing the Core Web Vitals assessment of loading speed, interaction responsiveness, and layout stability.

Recently, the Chrome team released a retrospective report on the Web Vitals project, detailing progress in both the browser and the wider ecosystem. According to the Chrome team, improvements to Core Web Vitals have saved users the equivalent of about 10,000 years of waiting.

So, as we approach 2024, I want to take a closer look at how we can keep that momentum going and continue to make the web faster.

But there's a catch: the metric we use to measure interaction responsiveness is changing in 2024, and the new metric has surfaced a number of previously unnoticed responsiveness issues.

Will we be able to rise to this new challenge while sustaining the performance improvements of 2023? I think so, but we'll need to learn some new tricks.

To me, it goes without saying that performance matters. I've been working on and advocating for web performance optimization for the past 11 years, and it's sometimes naïve to assume that everyone (at least in my circles) feels the same way.

If we're going to continue to improve web performance, we need more developers and business leaders to agree that performance optimization is worth acting on.

So, let's talk about why you should optimize web performance.

Tammy Everts at performance.now() in November 2023.

Photo: schmitzoide@twitter

Last week, I had the opportunity to attend the performance.now() conference in Amsterdam. For many of us working on web performance, it has become an annual pilgrimage where we come together to push the boundaries of what the web can do. Tammy Everts, who served as co-chair and was one of the speakers at the conference, summed up the answer to this question perfectly in the slide above.

In 2016, Tammy published a book called "Time Is Money", in which she listed the reasons website owners might care about optimizing web performance:

- Bounce rate
- Cart size
- Conversion rate
- Revenue
- Dwell time
- Page views
- User satisfaction
- User retention
- Organic search traffic
- Brand recognition
- Productivity
- Bandwidth/CDN savings
- Competitive advantage

Based on decades of experience, numerous case studies, and neuroscience research, Tammy believes that improving web performance can have a positive impact on all of these.

Tammy also worked with Tim Kadlec to create WPO Stats, a collection of web performance case studies documenting years of direct correlations between performance improvements and better business outcomes.

For example, in one case study, Shopify saw 25% and 61% improvements in loading performance and layout stability respectively, along with a 4% reduction in bounce rate and a 6% increase in conversion rate. In another case study, "Obama for America" improved performance by 60% and saw a corresponding 14% increase in conversions. There are many more examples like this.

Happy users spend more money. If you look at a typical conversion funnel, fewer and fewer users make it to each deeper stage. Optimizing performance effectively "greases the funnel", driving conversions by giving users a smoother experience.

That's the business impact, but the case is even more fundamental than that: performance is about the user experience.

Measured by Google's Core Web Vitals, the modern web is the fastest it has ever been. To get a full picture, let's look at how we got here.

Percentage of websites passing the Core Web Vitals assessment over time (January to September 2023).

Source: HTTP Archive

At the beginning of 2023, 40.1% of websites passed the Core Web Vitals assessment for the mobile user experience. Since then, we've seen steady growth. As of September 2023, 42.5% of websites pass, an increase of 2.4 percentage points, or 6% in relative terms. That new high-water mark represents a lot of work being done across the web ecosystem.

It's a glass half full, glass half empty situation. You could point out that nearly half of all websites have measurably good performance and celebrate that; equally, you could note that more than half still don't meet the performance bar.

We can hold both views at once! The web has come a long way, and we can keep working to carry this momentum into 2024.

So, can we maintain the current rate and get another 6% of websites through the assessment? I think we can, but everything changes as the metric we use to evaluate page responsiveness changes.

Earlier this year, I wrote a blog post explaining that Interaction to Next Paint (INP) will become the new responsiveness metric in Google's Core Web Vitals, replacing First Input Delay (FID) in March 2024.

This is a very good change, because INP is more effective at catching unresponsive experiences. Still, far fewer websites have a good INP score than a good FID score, especially on mobile.

In the performance chapter of the 2022 Web Almanac, I wrote about what the Core Web Vitals pass rate would look like if INP were used instead of FID.

For the mobile experience, only 31.2% of sites would pass the assessment, 8.4 percentage points lower (a 21.2% relative drop). That was based on data from June 2022. So where do things stand now?

Percentage of websites with good INP versus FID scores, by device (September 2023).

Source: Chrome UX Report

In fact, the situation is much better! The gap on desktop has all but been eliminated, and the mobile experience now trails by only 6 percentage points (14.2% in relative terms).

But the fact remains: once INP takes effect, pass rates will drop significantly.

While it may seem like a step backwards at first glance, keep in mind that INP gives us a more accurate picture of how real users experience interaction responsiveness. The actual experience of the web hasn't changed; only the way we measure it has. A drop in pass rates does not mean that the web is getting slower.

As a result, I remain optimistic that we will continue to improve performance in 2024. It's just that when INP arrives, we'll need to recalibrate our expectations against the new baseline.

FID is the oldest metric in Core Web Vitals; it first appeared in the Chrome UX Report dataset in June 2018. As of today, only 5.8% of websites have FID issues on desktop or mobile. So I think it's fair to say that, by that measure, we haven't needed to worry much about interaction responsiveness.

INP challenges the complacency we've developed over the past five years. To meet it, we'll have to use web performance techniques we may rarely or never have used before, and we'll have to adopt some new tools.

A long task shown in the Chrome DevTools Performance panel.

Source: "Optimize long tasks" on web.dev

We're going to have to get used to views like this.

This is what a long task looks like in the Chrome DevTools Performance panel. The red striping marks the portion of the task beyond 50 milliseconds, which is what makes it "long". If the user tries to interact with the page during this time, the long task blocks the page from responding, so the user (and the INP metric) perceives the interaction as slow.

A long task being split up, as shown in the Chrome DevTools Performance panel.

Source: "Optimize long tasks" on web.dev

Solving this problem may require a web performance technique you've never tried before: splitting long tasks. The total amount of work completed is the same, but by adding yield points between the main chunks of work, the page can respond more quickly to user interactions that occur mid-task.
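The idea can be sketched in JavaScript (a sketch, not the article's own code; the yieldToMain helper and chunk size are assumptions):

```javascript
// Sketch of splitting a long task: do the same total work, but yield
// control back to the main thread between chunks so that pending user
// interactions can be handled sooner.
function yieldToMain() {
  // A zero-delay timeout is a widely supported way to yield.
  return new Promise(resolve => setTimeout(resolve, 0));
}

async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    // Yield between chunks: this is the "yield point" that breaks
    // one long task into several shorter ones.
    if (i + chunkSize < items.length) {
      await yieldToMain();
    }
  }
  return results;
}
```

Each awaited yield ends the current task, so an interaction that arrives midway runs before the next chunk instead of after all of them.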

Chrome is trying to address problematic long tasks with some experimental APIs. The first is the scheduler.yield() API, designed to give developers more control over splitting long tasks. It also ensures that the task's continuation is prioritized, so it resumes without being interrupted by other tasks.
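Because scheduler.yield() is experimental, real code needs a fallback; a hedged sketch of such a helper (the fallback choice is an assumption, not part of the API):

```javascript
// Prefer the experimental scheduler.yield(), which lets the task resume
// with priority; fall back to a zero-delay timeout in browsers (or
// runtimes) that don't support it yet.
async function yieldToMain() {
  if (globalThis.scheduler && typeof globalThis.scheduler.yield === 'function') {
    return globalThis.scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}
```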

Knowing which long tasks need to be split is a science in itself. To help with this, Chrome is also experimenting with the Long Animation Frame API. Like the Long Tasks API, it reports long rendering updates, which may contain multiple tasks. Crucially, it also exposes more actionable attribution for the work involved, including the source locations of the responsible scripts.

Much like tracking slow interactions in a profiling tool, developers can use the Long Animation Frame API in the field to track down the causes of slow INP. In aggregate, this data narrows down the root causes of common performance issues and saves developers from optimizing by trial and error.

These APIs are still experimental, but they offer powerful new capabilities that complement the existing toolkit for optimizing responsiveness. Even if it feels like we're merely clawing back the pass rates we had under the FID-centric assessment, the result is a genuinely faster web!

With INP replacing FID, it may seem that responsiveness will become the new bottleneck, but that's not the case. Loading performance, as measured by LCP, is and will continue to be the weakest link in Core Web Vitals assessments.

To pass the Core Web Vitals assessment, a website needs good performance on all three metrics. So, to keep moving forward, we need to focus on the metric with the most room for improvement.

HTTP Archive data from September 2023 shows that only 54.2% of websites have a good mobile LCP, compared with 64.1% for INP and 76.0% for CLS.

Developers have been talking about loading performance for as long as web performance has been a discipline. Since the days of simple HTML pages, we've accumulated a lot of knowledge about traditional techniques like back-end performance and image optimization. But the web has changed a lot since then. Websites have become more complex, with more third-party dependencies and richer, more sophisticated techniques for rendering content on the client. Modern problems require modern solutions.

In 2022, Philip Walton shared a way to break LCP time down into subparts: the time until the client starts receiving content (time to first byte, TTFB), the time until the LCP image starts loading (resource load delay), the time until the LCP image finishes loading (resource load duration), and the time until the LCP element is rendered (element render delay). By measuring which of these subparts is slowest, we can focus our attention on the most effective optimizations.
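This breakdown lends itself to a simple diagnostic; a sketch (the field names follow the subpart breakdown above, but the helper itself is an assumption):

```javascript
// Given LCP subpart durations in milliseconds, name the subpart where
// the most time is spent, i.e. the best optimization target.
function slowestLcpSubpart(subparts) {
  return Object.entries(subparts).reduce(
    (worst, current) => (current[1] > worst[1] ? current : worst)
  )[0];
}
```

For a page whose LCP time is dominated by waiting to even start the image request, this points at resource load delay rather than image optimization.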

The conventional wisdom says that to make your LCP image appear sooner, you should optimize the image itself: use a more efficient format, cache it for longer, resize it smaller, and so on. In terms of the LCP subparts, though, these only improve resource load duration. What about the others?

As I mentioned earlier, I attended the performance.now() conference. Another speaker there was Estela Franco, with whom I collaborated to share new data from real Chrome users, including where LCP time is typically spent.

Estela Franco presenting Chrome data on where LCP time is spent (November 2023).

Photo: Rick Viscomi

The image above shows a slide from Estela, with each LCP subpart expressed as a proportion of the mean LCP time. Here's the same data, in milliseconds:

Mean LCP subpart durations, grouped by LCP rating (October 2023).

Source: Chrome 119 beta internal data

The most surprising finding is that resource load duration is actually already the fastest LCP subpart; the slowest is resource load delay. So the biggest opportunity to speed up slow LCP images is to start loading them as early as possible. Again, the problem isn't how long images take to load; it's that we don't start loading them early enough.

Browsers are usually very good at discovering images in markup and loading them quickly. So why is this a problem? Because developers too often fail to make their LCP images discoverable.

I wrote about the LCP discoverability problem in the 2022 Web Almanac. There, I reported that 38.7% of mobile pages had image-based LCP elements that were not statically discoverable. Even in the latest HTTP Archive data, this number is still 36.0%.

A big part of this problem is lazy loading. I wrote about the negative performance effects of lazily loading LCP images back in 2021. Lazy loading isn't just the native loading="lazy" attribute; developers can also use JavaScript to set image sources dynamically. Last year, I reported that 17.8% of pages with LCP images lazily loaded them in some way; the latest HTTP Archive data shows a slight improvement, with 16.8% of pages doing so. A fast LCP isn't impossible with lazy loading, but it certainly doesn't help. LCP images should never be lazily loaded.

To be clear: lazy loading is good for performance, but only for non-critical content. Everything else, especially the LCP image, should load as early as possible.
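For example (a sketch; the file names are placeholders), the hero image that is likely to be the LCP element should load eagerly, while below-the-fold images can be lazy:

```html
<!-- LCP candidate: never lazy-load the hero image -->
<img src="hero.jpg" alt="Hero" width="1200" height="600">

<!-- Non-critical, below-the-fold content can be lazy-loaded -->
<img src="footer-map.jpg" alt="Map" loading="lazy" width="600" height="300">
```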

Client-side rendering is a different problem entirely. If the only markup you send to the client is a container that gets rendered by JavaScript, the browser can't load the LCP image until it is finally discovered in the DOM. A better (albeit debatable) solution is to switch to server-side rendering.

We also need to deal with LCP images declared in CSS, for example background-image: url("cat.gif"). These images aren't found by the browser's preload scanner, so unlike a regular img element, they can't be loaded early.

For these scenarios, you can also use a declarative preload to make the image explicitly discoverable to the browser.
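In its simplest form, such a preload might look like this (a minimal sketch, assuming the LCP background image is hero.jpg):

```html
<!-- Make a CSS background image discoverable by the preload scanner -->
<link rel="preload" as="image" href="hero.jpg">
```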

The browser will start loading the image as early as possible, but as long as rendering it depends on JavaScript or CSS, the problem just moves from load delay to render delay. Eliminating those dependencies by placing an img element directly in the HTML is the most straightforward way to avoid the delay.

So far, all of these LCP recommendations essentially address complexity we've introduced into our applications ourselves: lazily loaded LCP images, client-side rendering, and LCP background images. There are also relatively new techniques that can improve performance further, or even sidestep these delays entirely.

In last year's Web Almanac, I reported that 0.03% of pages used fetchpriority=high on their LCP images. This attribute hints to the browser that the image should be loaded at higher than default priority. In Chrome, images usually start at low priority by default, so this can give them a significant head start.

A lot has changed since last year! The latest HTTP Archive data shows that 9.25% of pages now use fetchpriority=high on their LCP images. That's a huge leap, thanks in large part to WordPress shipping fetchpriority support in version 6.3.
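Applying the hint is a one-attribute change (a sketch; the file name is a placeholder):

```html
<!-- Raise the request priority of the likely LCP image -->
<img src="hero.jpg" alt="Hero" fetchpriority="high" width="1200" height="600">
```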

There are also techniques that can deliver effectively instant navigations: the back/forward cache and speculative loading.

When the user clicks the back or forward button, a previously visited page is restored. If the page is stored in the browser's back/forward cache (also known as the bfcache), it loads instantly: the LCP image is already loaded, and the JavaScript needed to render it has already run. But not all pages are eligible. The unload event and the Cache-Control: no-store directive currently make a page ineligible for Chrome's bfcache, even when they are set by a third party.

Since I last reported on bfcache eligibility in the Web Almanac, unload usage has dropped from 17% to 12%, and no-store usage has dropped from 22% to 21%. As a result, more and more pages are eligible for this instant-loading cache, which benefits all Core Web Vitals.
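You can observe bfcache restores in the field via the pageshow event's persisted flag; a sketch (the logging is an assumption):

```javascript
// A pageshow event with persisted === true means the page was restored
// from the back/forward cache rather than loaded from scratch.
function isBfcacheRestore(event) {
  return event.persisted === true;
}

// In a browser, wire the check up to the pageshow event:
if (typeof window !== 'undefined') {
  window.addEventListener('pageshow', event => {
    if (isBfcacheRestore(event)) {
      console.log('Instant navigation: restored from bfcache');
    }
  });
}
```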

Another instant-navigation technique is speculative loading. Using the experimental Speculation Rules API, developers can hint to the browser that the user is likely to navigate to a given page next, and that the entire page should be prerendered in advance. The API also supports prefetching, a less aggressive way to improve loading performance; the downside is that prefetching only loads the document itself, not its subresources, so it's less likely than prerendering to deliver on the promise of "instant navigation".

The MDN documentation includes examples of such speculation rules.
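A minimal prerender rule can be sketched like this (the URL is a placeholder):

```html
<script type="speculationrules">
{
  "prerender": [
    {
      "source": "list",
      "urls": ["/next-page.html"]
    }
  ]
}
</script>
```

The browser may prerender /next-page.html in the background; "prefetch" rules use the same shape but fetch only the document.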

Both of these techniques make use of prerendering in different forms. With the bfcache, previously visited pages are kept in memory so they can be restored instantly from the history stack. With speculative loading, even pages the user hasn't visited yet can be prerendered. The end result is the same: instant navigation.

As more developers become aware of these challenges and opportunities, I hope we'll see the growth in Core Web Vitals pass rates continue beyond 2023.

The first hurdle is knowing whether your website has performance issues. The easiest way to check is PageSpeed Insights, which surfaces public Core Web Vitals data from the Chrome UX Report. Even if your site currently passes the assessment, keep an eye on its Interaction to Next Paint (INP) performance, as that becomes the official responsiveness metric in March 2024. You can also monitor performance with the Core Web Vitals report in Google Search Console. Better still, measure performance yourself with real-user monitoring, which gives you much more granular diagnostics about why something is slow.

The next hurdle is investing the time, effort, and even money to improve performance; but first, performance has to become a priority.

If your INP performance is poor, there may be a learning curve in optimizing long tasks with the help of documentation, techniques, and tools. When it comes to interaction responsiveness, FID gave us a false sense of security, but now we have the opportunity to find and fix issues that have been quietly frustrating our users.

Let's also not forget that LCP is the weakest link in the Core Web Vitals assessment: more websites have problems with LCP than with any other metric. The way we build web apps has changed a lot over the years, so we need to adapt our optimization techniques accordingly, with a focus on loading images sooner.

I hope this article has helped show both the progress we've made this year and the room still left for improvement. A 6% increase in websites passing the assessment is certainly cause for celebration, but most websites are still not fast enough, at least not yet.

If we maintain a rate of 6% per year, more than half of all websites will have a good mobile Core Web Vitals experience by 2026. Let's keep pushing CMSs, JavaScript frameworks, and third-party dependencies to get faster, and let's keep advocating for performance best practices in the web community. The next 6% in 2024 is just around the corner!
