[Request for feedback] Adding shared performance insights to Lighthouse #16462
Replies: 36 comments 87 replies
-
Well... first feedback, I guess :D My site fully supports HTTP/2 and HTTP/3 via Cloudflare, but I get the modern HTTP error, and it seems all page elements are reported as loaded over HTTP/1.1. So it's either a Lighthouse bug with the new metrics or some weird Cloudflare issue, because a separate test of the webpage gives it a green on HTTP/2 :/
-
Wow! This is a really great update. The only issue is that I'm seeing the HTTP error even though my site fully supports HTTP/2 and HTTP/3.
-
Some of the language used in the new audits is much more developer-centric than before. Your tool is used by normal website owners, not just developers, so please don't alienate the average website owner even more.
PageSpeed Insights is already hard for non-developers to understand; please don't make it worse.
-
I think the new audit is unfair. I have a homepage that currently passes with flying colours (100% across the board). With the new insights, issues arise. One issue it states is that I'm serving an oversized image in a small slot. What I did was combine a couple of pictures into one image to minimize requests, and I used it as a CSS sprite. Despite measuring my sprite with the correct dimensions, it's claiming the sprite can be optimized due to its "large size". I did this deliberately because I'm using an HTTP/1.1-only server and I'm trying to minimize connection requests for pictures.

For reference, here's the sprite: https://buy.ontariospeeddating.ca/r0/1/allvenues.jpg and here's the website the sprite is loaded into: https://buy.ontariospeeddating.ca/

If PageSpeed continues this way, then the people (think baby boomers and gen-X) who don't have new computers, and who want websites that serve their needs, won't be able to see them, because Google would hide those sites over these silly new "optimizations".
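For readers unfamiliar with the technique, a CSS sprite along the lines described above looks roughly like this (class names, slot dimensions, and offsets are hypothetical; only the file name comes from the comment):

```html
<!-- One combined image is fetched once; each slot crops a different region
     of it via background-position (the classic CSS sprite technique). -->
<style>
  .venue {
    width: 100px;                              /* displayed slot size (hypothetical) */
    height: 75px;
    background-image: url("allvenues.jpg");    /* the single combined sprite */
    background-repeat: no-repeat;
  }
  /* Each slot shows a different region of the same file. */
  .venue.first  { background-position: 0 0; }
  .venue.second { background-position: -100px 0; }
</style>
<div class="venue first"></div>
<div class="venue second"></div>
```

The trade-off the commenter describes is real: the sprite's intrinsic size is necessarily larger than any single slot, which is what the "oversized image" heuristic appears to react to.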
-
Dumbing the UI down is never good. My initial server response time went from 1.2 s to just "server responded slowly"...? I like numbers.
-
I think the HTTP/2 suggestion can be improved. Looking through the insights code, it seems to flag a page once it has 6 or more static resources served via HTTP/1.1, but it does not factor in how they are loaded. Picture this: a page whose only immediately needed resources are the HTML document itself and one picture above the fold, with the remaining pictures loading only as the user scrolls. That shouldn't be flagged, because only 2 resources are needed up front.
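The scenario described above can be sketched with native lazy loading, where only the document and the above-the-fold image are fetched up front (file names are hypothetical):

```html
<!-- Fetched immediately: the HTML document plus this one hero image. -->
<img src="hero.jpg" alt="Venue exterior" width="800" height="400">

<!-- Below-the-fold pictures: loading="lazy" defers these requests until
     the user scrolls near them, so they are not part of the initial load. -->
<img src="photo-1.jpg" loading="lazy" alt="Gallery photo 1" width="800" height="400">
<img src="photo-2.jpg" loading="lazy" alt="Gallery photo 2" width="800" height="400">
```

Whether the insight's resource count considers `loading="lazy"` images is exactly the question the commenter raises.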
-
A bit more emphasis on, and perhaps accurate tracking of, page size (network activity can sometimes be a lot more) would be great. We are currently seeing a month-on-month increase in page size despite a month-on-month increase in page loading speed. An excessive focus on speed may cause people to be cavalier about something that can be a source of instability and poor user experience.

Related: I'm starting to wonder if Lighthouse needs to adapt its metrics a little. The real-life user experience can vary enormously between a site that scores 93-100-100-100 and one that scores 100-100-100-100. Meanwhile, the difference between something in the 70s and the high 80s or even low 90s is often not so tangible.

I love the general direction of these insights. It gives people things to improve even when they are getting top results. It's also good to focus more on standards and quality in light of the general internet bloat. I can see this applying good, subtle pressure on organisations to up their game a bit. Nice work.
-
Once v13 (~October 2025) releases, will it show exactly what PageSpeed Insights shows today when toggled into Insight Mode? E.g. all the audits with id ending in
When reading the blog posts I thought only the new insights would remain, but after reading the code I understand that, for example,
-
Hello Lighthouse, and thank you for this exchange. I have a recurring issue with image optimization. For example, I had an image that was originally 88 KB. PageSpeed suggested compressing it to 60 KB. Great, I did that. But after retesting, it suggested compressing it further, to around 35 KB. I compressed it again, but it didn't stop there!! The next suggestion was 9 KB. Every time I optimize, it keeps flagging the image and asking for more compression, without any clear "endpoint" 🙄 (the same problem for every image on the website). The problem is that each time I compress further, the image loses more quality (especially if it's the image of the CEO XD), and I don't know when I've "done enough to satisfy the tool". We can't just keep squeezing quality forever. Speed is important, but so is the look of the site, especially when the site is already fast. It would be great if there were clearer guidelines, or a point where PageSpeed acknowledges "good enough" 🫡

Second, I'm using a WordPress block theme, and I noticed PageSpeed always flags the navigation and cover blocks (which are default core blocks) under render-blocking requests and network dependency tree issues. To test, I completely removed those blocks and just used my own custom min.css, which is super lightweight: only 4.7 KB minified, and it's the only CSS file I'm using now for the whole website, to get as far away from the "Network dependency tree" flag as I can. Even with that, it still flags it under both categories. The other day it even flagged a file of 0.9 KB!!!!!!!!!!! Honestly, I'm wondering... do you want us to stop using CSS completely? 😅😅 Because I don't see how I can possibly get lighter than that without turning the whole site into plain HTML! 🤔

Lastly, for the "LCP by phase" flag ("each phase has specific improvement strategies…"), it always flags the logo (an SVG in my case) as the LCP element. Maybe you could exclude typical logos from this check 🤗, since on 99% of websites the first image is a logo, I think.

If you fix at least the first two issues, I will thank you forever. Anyway, I'm thanking you in advance for reading this comment. Good luck, and have mercy on us (developers): I spend more hours fixing Google PageSpeed Insights than building other websites 😭
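On the tiny-stylesheet point above: one common workaround (a sketch, not a universal recommendation) is to inline a stylesheet that small, which removes the render-blocking request entirely at the cost of per-page caching:

```html
<head>
  <!-- Inlining a very small stylesheet (e.g. the ~4.7 KB file described
       above) avoids a separate render-blocking request. The trade-off is
       that the CSS is re-sent with every page instead of being cached. -->
  <style>
    /* contents of min.css pasted here at build time */
  </style>
</head>
```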
-
A great addition, but there may be a small issue with the tool recognising all types of responsive images. The corresponding HTML uses `picture` -> `source` elements with media queries. For a Moto G4-size screen, the browser should choose the appropriate image size based on the `sizes` attribute and the image widths listed in the `srcset` attribute, whilst accounting for the display pixel density. It seems that the Insights tool is only accounting for the displayed dimensions (399x300) and not screen pixel density. The existing Lighthouse audit does seem to get this right, or at least it never throws a warning for mobile image sizes. Link to the site in question: https://creativetouchrotherham.co.uk/
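A sketch of the kind of markup described above (file names and breakpoints are hypothetical). The relevant behaviour is that when choosing a candidate from `srcset`, the browser multiplies the CSS slot size by `devicePixelRatio`, so a 399 CSS-px slot on a 2x screen targets roughly 798 physical pixels:

```html
<picture>
  <!-- On narrow screens, pick from these width-described candidates.
       The browser computes: slot size (from sizes) x devicePixelRatio,
       then selects the smallest candidate that covers that width. -->
  <source media="(max-width: 600px)"
          srcset="photo-400.jpg 400w, photo-800.jpg 800w"
          sizes="100vw">
  <!-- Fallback for browsers without <picture> support. -->
  <img src="photo-800.jpg" alt="Creative Touch example" width="800" height="600">
</picture>
```

If an auditing tool compares the file's intrinsic width only against the displayed CSS dimensions, it will flag exactly this correct markup, which matches the commenter's report.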
-
It complains about
-
I use the https://pagespeed.web.dev/ website to test websites and pages, and most of the time the numbers I can achieve are in the green. But I have a life outside creating websites, and it would be good if there were more explanation and links. When there is a problem, I struggle to understand what I should do to improve the score. Please think of those who are interested in creating better websites but are not experts in every topic.
-
I got an LCP request discovery issue where I think I shouldn't. On my website, the LCP is generated by a background image built inline from a data URI, and that generation happens in a JavaScript tag right after the HTML body tag. PageSpeed Insights suggests I should add "fetch-priority=high", but how does that work with data URIs, and why would it be needed if the image can be pulled from the HTML itself? Also, the data URI is within the first 20 KB of code (uncompressed).
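For what it's worth, the attribute is spelled `fetchpriority` in HTML, and it applies to resources fetched over the network; a `data:` URI is decoded from the document itself, so there is no request to prioritise, which may be why the suggestion reads oddly here. A sketch for a network-served LCP image (URL hypothetical):

```html
<!-- fetchpriority="high" raises the request priority of a network-fetched
     resource. It has no effect on data: URIs, which involve no fetch. -->
<img src="https://example.com/hero.jpg" fetchpriority="high" alt="Hero">
```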
-
I ran my site again through PageSpeed Insights and the stats are awfully confusing. What I show here is the desktop results. It states the LCP element loaded in 0.3 seconds (the same time as FCP). It also states the Speed Index is 0.9 seconds. However, the new stat "Document request latency" goes red, claiming the response was served slowly. How is that claim true when pretty much everything loaded in 0.3 seconds and the rest of the indicators are green? I also managed to get the reported TTFB value through the PageSpeed API, which is roughly 160 ms (below the 800 ms threshold). So now I'm completely lost.
-
The naming in the new version is more difficult for me to understand than in the old version. At least in the old version I knew the cause and the solution; the new version leaves me at a loss.
-
I seem to be having issues with "Improve image delivery": in mobile mode, it's highlighting `picture` elements, with sources and a base `img` tag that are deliberately double width/double height for 2x-DPI screens, claiming the image is (exactly double) the displayed dimensions. See https://share.muckypuddle.com/6qup655A, and the corresponding element:
-
I would love to get more insight into which resources are causing the render delay phase of LCP. We regularly get very high LCP scores that have 80-90% of their duration in the render delay phase. That number doesn't necessarily correspond in any meaningful way to the estimated time of the render-blocking requests; sometimes it's significantly higher, sometimes significantly lower. I have tried using Chrome DevTools to give me a waterfall graph so I can diagnose the issue, but even when I set the CPU and network throttling to match Lighthouse, I can't ever seem to recreate the long LCP times; I regularly get about half the LCP in Chrome. Getting either a waterfall graph or just a list of resources causing the render delay would be incredibly helpful!
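One option for local debugging (not part of Lighthouse itself) is to observe LCP candidates directly in the page; the `web-vitals` library's attribution build can additionally break LCP into its phases, including render delay. A minimal sketch using only the native API:

```html
<script>
  // Logs each LCP candidate with its timing and the element chosen, which
  // can help correlate a long render-delay phase with specific content.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
</script>
```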
-
Will this have an impact on how the numeric performance score is calculated, or just on the suggestions and feedback provided?
-
Hello! Please take a look at:

So I preloaded all those fonts:

To me, this is misleading, because I did something to fix a "red" warning but made my LCP worse. I work on a performance plugin, and we know customers will see that red warning and think they need to fix it. But in doing so, they will make other things worse, and they won't understand why.
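For context, a font preload of the kind described looks like this (path hypothetical). Preloads are fetched early and at high priority, which is exactly how they can end up competing with the LCP image for bandwidth:

```html
<!-- Fonts preloaded cross-origin style must carry the crossorigin
     attribute even for same-origin files, or the preload is wasted. -->
<link rel="preload" href="/fonts/brand.woff2"
      as="font" type="font/woff2" crossorigin>
```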
-
What is it you're needing?
…On Sat, Aug 23, 2025, 8:13 p.m. Lucy Beer ***@***.***> wrote:
@connorjclark <https://github.com/connorjclark> any chance of getting
your feedback on this conflict between network dependency warning vs LCP
metric? Thanks! ☝️
-
Not sure if anyone posted this (I read through quite a bit here, but maybe I still missed it): WHY not just LEAVE both??? I just don't understand the point. Everyone here complaining about not seeing numbers, and about how the "new" insights are basically more complicated for the average user: I agree 100%. What GOOD reason is there for removing something that works and is fine? Just leave it. Add your "new", harder-to-understand version if you want, but PLEASE KEEP THE OLD VERSION! I'll go to GTmetrix (and will pay them) if the numbers/good version disappears and only this new, more complicated version is left. I hope everyone goes to GTmetrix in that case. It just makes no sense at all. Leave both. Make everyone happy. Simple.
-
LCP detection likely inaccurate on mobile PSI.

Site: https://www.schmidtkramer.com/

What PSI reports. Claimed LCP element:

This element is roughly 1600 px from the top of the mobile viewport, far outside the visible area on initial load. It shouldn't qualify as LCP under a normal interpretation of "largest above-the-fold content."

What is likely the real LCP. The hero section's background image is the first large visual element loaded and rendered. Relevant element:

I plan to apply resource hints (e.g., ). This seems like a case of false-positive LCP detection. Please correct me if I'm wrong.
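If the hero really is a CSS background image, one commonly suggested hint (a sketch; the URL is hypothetical) is an image preload, since background images are otherwise discovered only after the stylesheet is downloaded and parsed:

```html
<!-- Lets the browser start fetching the hero image before the CSS that
     references it has been parsed, moving its discovery earlier. -->
<link rel="preload" as="image" href="/images/hero.jpg" fetchpriority="high">
```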
-
PageSpeed reports that I should use WebP/AVIF images when the image is already in .avif format and compressed. How do you explain this? Thanks for your attention.
-
Thanks, I'll follow the discussion you mentioned.
-
Hi Connor Clark,

I would like to raise an issue regarding the inconsistency of PageSpeed test results. Whenever we run a speed test, the scores often vary, not only for us but for others as well. These scores reflect overall performance metrics such as LCP (Largest Contentful Paint), CLS (Cumulative Layout Shift), and others, not just the actual page load time. This can be confusing, especially for developers who are specifically trying to measure and optimize load speed only. If you could provide some clarity or a solution regarding this inconsistency, it would be greatly appreciated by the developer community.

Additionally, it would be extremely helpful if there were a separate metric or feature dedicated solely to measuring raw page load speed, without being influenced by broader optimization or SEO-related factors. Having a standalone page load score would allow for more accurate testing when services or features are focused only on improving load times. When a service promises only "page load improvements" without addressing full performance optimization or SEO, it becomes difficult for us to justify or understand the reported test results. Separating these metrics clearly would help avoid confusion and set proper expectations.

Thank you for considering this suggestion.

Best regards,
-
EDIT: Oct 10, 2025: https://developer.chrome.com/blog/lighthouse-13-0
A more detailed blog post about these upcoming changes: Lighthouse is moving to performance insight audits
The Performance panel in Chrome DevTools recently added insights to the trace view. These performance insights are powered by a trace analysis library that was designed to also run in Lighthouse. The performance insights are analogous to existing Lighthouse performance audits, but with some tweaks and consolidation. The goal is to offer the same performance advice across all our performance tools - Chrome DevTools, Lighthouse and PageSpeed Insights.
In Lighthouse, the set of performance audits that have been replaced by equivalent insight audits will be removed.
If you have any questions or feedback, please add a comment here.
Related issue: #16323