I'm trying to fix my website's performance issues flagged by a Lighthouse check.
The worst score is for CLS.
To solve the CLS problems on the site, all I need to do is move some media-query styles that are loaded at the bottom of <body> so that they load in <head> instead.
In other words, moving styles from one CSS file to another.
After I make those changes, the CLS score drops to 0, which is perfect.
BUT for some reason, once I do it, the LCP score triples!
No new requests, no additional CSS, no additional JS. And it still happens.
Has anyone faced this issue? I really have no idea what to do.
Thanks a lot.
If you are not loading CSS asynchronously, then linking a CSS file in the head makes it render-blocking, which can effectively increase the loading time.
So it's not the CLS influencing LCP, it's just that the solution to one problem is causing a new one.
Try moving the smallest portion of your CSS that still fixes the issue to a new file, and then load only that file in the head.
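One way to do that, as a rough sketch (the file names critical.css and main.css are placeholders; the preload/onload pattern assumes JavaScript is enabled, hence the noscript fallback):

<head>
  <!-- small, layout-critical rules load synchronously, so CLS stays at 0 -->
  <link rel="stylesheet" href="critical.css">
  <!-- everything else loads asynchronously, so it no longer blocks rendering (and LCP) -->
  <link rel="preload" href="main.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="main.css"></noscript>
</head>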
Related
I have a CLS shift on this site, and I can't figure out what is causing it.
Typically, Cumulative Layout Shifts are caused by not assigning width/height to images or by lazy-loading them, but that is not the case for this site. Also, all of my important CSS is inlined, so we don't have an issue with render-blocking. I'm really at a loss. Performance insights and Lighthouse are not giving me any clues.
I've tried taking 101vh off of the body/html, and I've tried taking 80/90vh off of the hero image wrapper (the image is absolutely positioned, so that is not an issue).
Does anyone have any clues for me?
I use the Chrome extensions below to find CLS issues.
Web Vitals: enable console logging for this extension, then in the Chrome DevTools console check the entries[n].LayoutShift.sources[n].LayoutShiftAttribution properties.
Core Web Vitals Annotations
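If you'd rather not install an extension, here is a minimal sketch of the same idea using the standard PerformanceObserver API, pasted straight into the DevTools console:

// log every layout shift along with the elements that moved
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.value is the shift's score; each source.node is an element that moved
    console.log('layout shift', entry.value, entry.sources.map((s) => s.node));
  }
}).observe({ type: 'layout-shift', buffered: true });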
I have a website that is very simple, but very long -- a lot of text that could be scrolled through. It's a documentation site, and considering the nature of the content (a lot of short similar entries) I decided to show everything at once, so the user could either scroll from entry to entry or navigate via a sidebar index. It's a common documentation model that I like (e.g. Underscore, Backbone, and LoDash).
The site is here: http://davidtheclark.github.io/scut/. You could look at the pre-production code here: https://github.com/davidtheclark/scut/tree/master/docs/dev.
And here's the problem: For a number of users this site consistently crashes their iOS browsers. Not all users (not me); but for those that do experience the crash, it seems to recur consistently. (The site may also crash some people's Android phones, I don't know: haven't heard from any Android users.) I am hoping someone can help me diagnose and possibly fix this problem.
Part of the difficulty I have is that I cannot reproduce the crash myself -- not on my own iOS devices, not on the Xcode simulators. Because the site is not at all resource-heavy (~70KB load) and involves very little JavaScript, and because of the effects of a few prior attempts to fix this, I'm guessing that the problem involves memory usage -- that iOS browsers are crashing because the site is demanding too much memory. But I'm not sure that's the issue, and if it is I'm not sure how I can fix it.
I'm not sure what to try next, and I'm hoping some savvy StackOverflow whizzes have advice. What is it about this site, which seems so simple and basic to my eyes, that is making it so memory-demanding that it is crashing browsers?
Is it just too long? Is there CSS that is too difficult to render? Are there JavaScript memory leaks?
I'm interested both for the sake of this particular site and so that I can learn to anticipate-and-prevent and/or diagnose-and-fix similar problems on other sites in the future.
Feel free to look at or contribute to the GitHub issue as well.
Addendum
Here are some things to know about the site that might be relevant:
The HTML doc is large relative to other sites' HTML docs. Unminified it looks to be ~225KB. (I notice that LoDash docs are even bigger -- does that site crash people's phones?)
The served HTML doc is minified.
Served CSS and JS are also minified.
The site uses Prism.js for syntax highlighting.
The site uses Overthrow to make the 2-scrolling-columns layout work on tablets.
<aside id="help-content"> is fixed and translated off-screen; it slides in when you click a [?] like the one by any utility's "use-name".
An iOS Crash Log
These look to me to be the potentially relevant lines of a crash report from an iPhone running Chrome and crashing on the site (I'm not sure whether they are actually relevant or not because I haven't developed iOS apps and don't know the ins-and-outs of these reports):
Free pages: 5674
Active pages: 117674
Inactive pages: 55121
Speculative pages: 3429
Throttled pages: 0
Purgeable pages: 0
Wired pages: 60906
File-backed pages: 23821
Anonymous pages: 152403
Compressions: 356216
Decompressions: 121241
Compressor Size: 16403
Uncompressed Pages in Compressor: 49228
Largest process: Chrome
[...]
Chrome <2a759438c2253e3baededaa0d13feb56> 166479 166479 200 [per-process-limit] (frontmost) (resume)
I think I fixed it!
The problem, as suspected, was rendering/painting caused by CSS layout. At phone size, I had been hiding the content of each entry until it was selected; and the method I had been using to hide the entries, and remove any trace of them from the layout, included position: absolute. I didn't initially use display: none because of the usual accessibility concerns about hiding content visually while keeping it available to screen readers. I threw those concerns aside and changed the layout so that the entries were hidden with display: none and shown with display: block -- and that seems to have fixed the crashing.
I think the absolute positioning was stacking a huge amount of content in the corner of the screen, and although it wasn't visible, it was demanding memory.
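A rough sketch of the difference (the class names are illustrative, not the site's actual ones):

/* before: visually hidden, but every entry stays in the render tree */
.entry {
  position: absolute;
  left: -9999px;
}

/* after: removed from the render tree entirely, so its memory can be released */
.entry { display: none; }
.entry.is-selected { display: block; }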
What clued me in to trying this was an answer to another related question, linked above by @tea_totaler: https://stackoverflow.com/a/14866503/2284669. It says:
What tends to help me a lot is to keep anything that is not visible at this time under display: none. This might sound primitive but actually does the trick. It's a simple way to tell the renderer of the browser that you don't need this element at this time and therefore releases memory. This allows you to create mile long vertical scrollers with all sorts of 3d effects as long as you hide elements that you are not using at this time.
I think that my other hiding method was not releasing memory, despite its other advantages (which were possibly irrelevant to this particular site anyway). I'm sure it became a problem only because the site was so long.
That's something to consider, though, when you want to hide an element: rendering/memory demands.
On my site it was caused by elements with the CSS property -webkit-backface-visibility: hidden.
Removing this property fixed all the crashes!
See iOS: Multiple divs with -webkit-backface-visibility:hidden crash browser when zooming.
I ran an audit with Chrome on the site. It suggested this:
Remove unused CSS rules (44)
44 rules (10%) of CSS not used by the current page.
css-built.min.css: 10% is not used by the current page.
audio, canvas, video
audio:not([controls])
[hidden]
abbr[title]
dfn
hr
mark
q
sub, sup
sup
sub
svg:not(:root)
figure
fieldset
legend
button[disabled], html input[disabled]
input[type=checkbox], input[type=radio]
input[type=search]
input[type=search]::-webkit-search-cancel-button, input[type=search]::-webkit-search-decoration
textarea
table
.older-docs
.older-docs>li
.older-docs>li:not(:last-child):after
*, :before, :after
fieldset
textarea
:not(pre)>code[class*=language-], pre[class*=language-]
:not(pre)>code[class*=language-]
.namespace
.token.regex, .token.important
.token.important
.older-docs
.changelog dt
.changelog>dt
.changelog>dt:after
.changelog>dd
.changelog-i-list
:target>.entry-body
.sub--h
.example--css.is-active
.preload .help-content-c
.help-content-c.is-active
.help-content.is-active
The Chrome task manager shows that the page takes up about twice as much total memory as other sites, such as Stack Overflow and Dropbox. I would recommend dividing the features into separate pages instead of one long page. Separating the features would improve the server's efficiency and the browser's load time and memory usage: there would be less JavaScript and CSS running on each page, and smaller amounts of data would be sent from the server. Having all the features on the home page is inefficient. For example, if a user only needed to look up how to make a Font Icon Label, they would still have to load the other sections of the page, which are not needed and take up memory.
Sorry for just making guesses, but I see two potential causes in your stylesheet that could be resulting in the crash:
1.) Using a data URL for background-image rendering, such as here:
.github,.source {
background-image: url("data:image/svg+xml;charset=US-ASCII,%3Csvg%20width%3D%22100%22%20height%3D%22100%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%3E%3Cpath%20d%3D%22M85.714%2050q0%2014.007-8.175%2025.195t-21.122%2015.485q-1.507.279-2.204-.391t-.698-1.674v-11.775q0-5.413-2.902-7.924%203.181-.335%205.72-1.004t5.246-2.176%204.52-3.711%202.958-5.859%201.144-8.398q0-6.752-4.408-11.496%202.065-5.078-.446-11.384-1.563-.502-4.52.614t-5.134%202.455l-2.121%201.339q-5.19-1.451-10.714-1.451t-10.714%201.451q-.893-.614-2.372-1.507t-4.66-2.148-4.799-.753q-2.455%206.306-.391%2011.384-4.408%204.743-4.408%2011.496%200%204.743%201.144%208.371t2.93%205.859%204.492%203.739%205.246%202.176%205.72%201.004q-2.232%202.009-2.734%205.748-1.172.558-2.511.837t-3.181.279-3.655-1.2-3.097-3.488q-1.06-1.786-2.706-2.902t-2.762-1.339l-1.116-.167q-1.172%200-1.618.251t-.279.642.502.781.725.67l.391.279q1.228.558%202.427%202.121t1.758%202.846l.558%201.283q.725%202.121%202.455%203.432t3.739%201.674%203.878.391%203.097-.195l1.283-.223q0%202.121.028%204.967t.028%203.013q0%201.004-.725%201.674t-2.232.391q-12.946-4.297-21.122-15.485t-8.175-25.195q0-11.663%205.748-21.512t15.597-15.597%2021.512-5.748%2021.512%205.748%2015.597%2015.597%205.748%2021.512z%22%2F%3E%3C%2Fsvg%3E");
background-repeat: no-repeat;
}
2.) -webkit-transition could also be the culprit. Read more here: https://stackoverflow.com/a/11833285/900132
Your HTML markup has some errors (such as a div tag inside an h1 tag) that should be fixed before you try to analyze a crash.
I suggest you run it through an HTML validator, for example http://validator.w3.org/check?uri=http%3A%2F%2Fdavidtheclark.github.io%2Fscut%2F&charset=%28detect+automatically%29&doctype=Inline&group=0
The div inside h1 apparently caused a cascade of errors that the validator had to suppress to continue.
When I have browser-crashing problems, HTML validation is always my first step. Then, if correcting the HTML didn't help, I look at what might be wrong with the JavaScript.
I just read this post and tried http://davidtheclark.github.io/scut/ on my iPad. Chrome crashes immediately, although it sometimes briefly shows the home page. Safari renders the home page and many other pages correctly, but clicking the "about > installation" link at the left makes it crash right away (well, once it displayed OK, but clicking again crashed it). All of this is pretty consistent.
The errors are indeed due to LowMemory and it's the browser process that uses the most memory. The crashes happen at around 150000 pages (4KB/page? => 600MB???).
That being said, I'm afraid I don't have an answer to your question. Hope it helps at least a little bit.
Kind regards,
/Sigiswald
Removing position: sticky fixed my mobile Safari crashing issues. Not sure exactly why.
body:before {
  position: -webkit-sticky;
  position: sticky;
}
In my case the crashing was caused by using CSS filter: blur(2px) to create a colored "glow" effect.
I fixed it by creating the glow in photoshop and using a PNG file to render the glow on my website.
This not only fixed the crash but created a nicer, more even glow that also didn't re-render in strange ways when zooming and scrolling.
I'm adding and removing HTML elements to make an infinite scroll, but Angular doesn't seem to be garbage-collecting straight away. Please have a look at the graph.
The memory climbs and climbs and then drops while scrolling.
And here is a sample of my code:
$scope.items = [/* an array of lots of items */];
$scope.itemsView.push($scope.items[i]);   // add an item as it scrolls into view
$scope.itemsView.splice(theIndex, 1);     // remove an item as it scrolls out of view
Any ideas?
It's not up to Angular to do the garbage collecting, it's only responsible for removing the HTML elements from the DOM. I can't see from your graph whether Angular is doing its job or not.
Did you try forcing a GC by pressing the garbage-bin icon at the bottom of Chrome DevTools? Chrome does a GC when it deems it necessary, not instantly, since it's a costly operation.
I'm working on a site with MODX Revolution. I'm really annoyed by the slow loading of pages. There's a 2-second wait for a page load on my localhost, and I have an SSD. I've been looking around to find out how to make page loads faster.
I do have a lot of getResources/Gallery calls (9 total) and two Wayfinder calls. I've read it had to do with those, so I got rid of all the getResources calls and changed them to custom snippets that do only what I need them to do: build a 3-4 item menu. It's still slow; that only shaved off a few hundred ms.
The Galleries (5) contain only 3-4 images each. I also use Babel, which checks every resource ID for its translation counterpart.
I'm wondering if it has anything to do with my WampServer (v2.2) settings...
Now that I've summed it all up, it does look like a heavy page. Will I get long page loads with any CMS this way?
Any help/hints/tips are appreciated!
You might want to cache all snippet tags by calling them without the exclamation mark ([[! ... ]] marks a call as uncached).
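For example, with one of the getResources calls (the parameter values here are just placeholders):

[[!getResources? &parents=`5` &limit=`4`]]   uncached: the snippet runs on every request
[[getResources? &parents=`5` &limit=`4`]]    cached: the snippet runs once, then its output is served from the cache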
Here is a blog about caching guidelines: http://www.markhamstra.com/modx-blog/2011/10/caching-guidelines-for-modx-revolution/
Here is a current discussion about speed performance: http://forums.modx.com/thread?thread=74902#dis-post-415390
I'm working on a website with reasonably heavy traffic and I'm looking into using a CSS sprite to reduce the number of image loads in its design.
Are there any advantages to using a CSS sprite besides reducing the amount of transmitted data? How much space do you really save? Is there a threshold where using sprites becomes worthwhile to a website?
UPDATE: Thank you for your responses. They are obviously all very carefully thought out and present good sources to verify your points. I feel much more capable of making an informed decision about using CSS sprites in my site design now.
The question is generally not about the amount of bandwidth it might save. It is more about lowering the number of HTTP requests needed to render a webpage.
Considering:
web browsers only do a few HTTP requests in parallel
doing an HTTP request means a round-trip to the server, which takes a lot of time
we have "fast" internet connections, which means we download quickly...
What takes time, when making lots of requests for small pieces of content (like images, icons, and the like), is the multiple round-trips to the server: you end up waiting for the request to travel and for the server to respond, instead of using that time to download data.
If we can minimize the number of requests, we minimize the number of round-trips to the server, and we use our high-speed connection better (we download one bigger file instead of waiting for many smaller ones).
That's why CSS sprites are used.
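A minimal sketch of the technique (the file name, sizes, and offsets are illustrative):

/* one combined image replaces many small icon requests */
.icon {
  background-image: url("sprite.png");
  background-repeat: no-repeat;
  width: 16px;
  height: 16px;
}

/* each icon just points at a different region of the same sheet */
.icon--home { background-position: 0 0; }
.icon--mail { background-position: -16px 0; }
.icon--user { background-position: -32px 0; }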
For more information, you can have a look at, for instance: CSS Sprites: Image Slicing's Kiss of Death
Fewer HTTP requests = faster loading overall. Yahoo and co. use this technique; if you can imagine the number of users they have, it saves a lot of bandwidth. Imagine 50 separate images for icons: that's 50 separate HTTP requests, as opposed to having just one CSS sprite containing all the images. That would save 49 HTTP requests, and you can multiply that by all the users of the site.
Actually, sprites are not used to reduce the amount of transmitted data (in most cases they slightly increase the amount of data transferred), but to reduce the number of requests made to the server.
HTTP requests in a browser are traditionally done in sequence, which means that one request will not start until the previous one has completed. Also, it is expensive to open a connection for a request. By limiting the number of requests made to the server, you increase the speed at which the elements load.
I think Yahoo has the best argument for CSS sprites. Besides, the whole page is worth reading:
http://developer.yahoo.com/performance/rules.html#num_http
Besides the performance enhancement of the overall page load from limiting the number of requests, image sprites can also make dynamically swapping images (for example, changing the background image of a nav item on hover) "perform" a little better, since all you do is change the x,y offset instead of the src, as sketched below.
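A sketch of that hover case (the selector, file name, and offset are illustrative):

/* both states live in one already-loaded image, so the hover swap is instant */
.nav-item { background: url("nav-sprite.png") 0 0 no-repeat; }
.nav-item:hover { background-position: 0 -40px; }  /* shift to the hover state */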
So I guess, to answer what threshold warrants using them: I'd say use them immediately, because of the potential loading improvements for each individual client.
In addition to reducing HTTP requests (as already noted), CSS sprites aren't dependent on JavaScript. This gives a few other advantages:
less code to maintain
easier cross-browser testing
can be coded inline via style attributes
no DOM hacking
no image preloading (so less administrivia -- "Oh wait, I need to preload that new nav button ... crap which .js file has my preloader?")
you can use css classes to apply it to several selectors
can be applied to any selector with the :hover pseudo-class, or to any element that can be wrapped with an anchor (not just imgs)
If you're not averse to DOM hacking, though, you can get some nifty animation effects just by pushing the X and Y values around, which makes it easier to animate lots of different states (like keypress or mouse click).
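A sketch of that idea (the element ID, frame count, and frame size are illustrative):

// step through sprite frames by shifting the background position
var el = document.getElementById('spinner');   // hypothetical element using the sprite
var frame = 0;
setInterval(function () {
  frame = (frame + 1) % 8;                     // assume 8 frames laid out 32px apart
  el.style.backgroundPosition = -(frame * 32) + 'px 0';
}, 100);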
There are a few interesting graphic production side effects as well:
fewer graphic production files
easier to do layout for buttons etc. directly in HTML (less need for PSD comps)
easier to make GUI changes without having to regenerate a ton of graphics
just that much tougher for image pirates to slurp your graphics