Lighthouse doesn't calculate the right results

I'm optimizing a website using Lighthouse, and I can see that it doesn't really calculate the results correctly.
Here are some points where I think the calculation is wrong.
First, although the website loads in around 200ms and, as you can see, the site is rendered on the first frame, in Lighthouse the values of "First Contentful Paint", "Time to Interactive", "Speed Index", and "Largest Contentful Paint" are always 1.8 seconds for every test (like a fake number), which is far too high compared with the actual load time.
The "Avoid long main-thread tasks" audit also reports times that are very large compared with the load time.
Second, I have kept the number of resources to a minimum and reduced their size as much as possible, but Lighthouse still shows suggestions for this.
How can I pass these audits?
After reading Graham's comment, I have some answers already.
However, even after I removed all the JS code and all the images, "First Contentful Paint" only dropped by 0.1s and the score is still 99. I'm not sure the Google team considers a website containing only a single line of text to be representative of modern websites. And of course, PWAs are not standard yet, and even with a PWA we still have to load the "full state" as well.

Related

How long does Google PageSpeed Insights take to update Field Data?

I'm improving my website's performance and testing its page speed with Google PSI. I've made all the corrections I can and I'm getting a 90+ score in PSI. All Lab Data attributes are green, but the Field Data is still not being updated. I'm just wondering how long Google PageSpeed Insights takes to update the Field Data. Also, once the Field Data does update, will it be the same as the Lab Data?
[PageSpeed Insights screenshot]
The data displayed is aggregated daily, so it should change from day to day.
However, the reason you do not see the results of your improvements instantly is that the data is taken over a rolling 28-day period.
After about 7 days you should start to see your score improve; after 28 days the report data will fully reflect the changes you made today.
Because of this delay, I would recommend capturing your own metrics using the Web Vitals library so you can see the results in real time. This also lets you verify that your changes work at all screen sizes and across browsers.
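As a rough sketch of that approach (assuming the current web-vitals package API; the '/vitals' endpoint is just a placeholder for wherever you collect your analytics):

```typescript
// Minimal sketch of real-user measurement with the web-vitals package.
// The '/vitals' endpoint is a placeholder: send the data wherever you collect analytics.
import { onCLS, onFCP, onINP, onLCP } from 'web-vitals';

function report(metric: { name: string; value: number }): void {
  // sendBeacon keeps working while the page unloads, so late metrics like CLS still arrive.
  navigator.sendBeacon('/vitals', JSON.stringify({ name: metric.name, value: metric.value }));
}

onFCP(report); // First Contentful Paint
onLCP(report); // Largest Contentful Paint
onCLS(report); // Cumulative Layout Shift
onINP(report); // Interaction to Next Paint
```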
Either that, or you could query the CrUX data set yourself on shorter timescales.
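If you go the CrUX route, here's a hedged sketch. I'm assuming the CrUX API's records:queryRecord endpoint (you need your own API key, and the origin shown is a placeholder); note the API still reflects the rolling 28-day window, it just lets you poll it programmatically:

```typescript
// Rough sketch of querying the Chrome UX Report (CrUX) API for one origin.
// Assumes the records:queryRecord endpoint; the origin and YOUR_API_KEY are placeholders.
async function fetchCruxLcpP75(origin: string, apiKey: string): Promise<void> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin, formFactor: 'PHONE' }),
    }
  );
  const data = await res.json();
  // The response carries per-metric histograms and percentiles, e.g. LCP's 75th percentile in ms.
  console.log(data.record?.metrics?.largest_contentful_paint?.percentiles?.p75);
}

fetchCruxLcpP75('https://example.com', 'YOUR_API_KEY').catch(console.error);
```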

Load testing example on CloudBees

I have a question about the load test example on this page: https://developer.cloudbees.com/bin/view/RUN/Monitoring .
In the load test results chart, could you please explain what happened during the "Load test temporarily halted" period? Why did the response times increase while the number of requests serviced decreased?
Thanks in advance
Leon
I believe this is an artefact (noise) of the graphing calculations. When the number of requests dropped dramatically, the request-time calculation was still closing over a time window that spanned back into the period when there were a lot of requests. So there was data saying "a lot of time is being spent in requests", but more current data said there were few requests, and the result was a temporary spike (you can see a dip where it partially corrects).
Also, it looks like a single data point - hence the straight line being interpolated up. Given there are few requests, it is also possible that there were some that did actually take a lot of time (perhaps there was a GC pause) - and that is more visible than it would otherwise be.
Over a longer period of time, and if you "zoom out" then it would probably appear to just be a glitch/quirk (for the purposes of this, I think you can ignore it).
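To illustrate the windowing effect with made-up numbers (these are not taken from the actual chart): once the request rate collapses, a slow sample that was invisible in the busy average suddenly dominates it.

```typescript
// Toy numbers (not from the CloudBees chart) showing why a windowed average of response
// times spikes when the request rate collapses: a rare slow sample suddenly dominates.
function meanResponseTimeMs(samples: number[]): number {
  return samples.reduce((sum, t) => sum + t, 0) / samples.length;
}

// Busy window: 1,000 requests around 50 ms, with one 2-second GC pause buried among them.
const busyWindow: number[] = [...new Array<number>(999).fill(50), 2000];
// Halted window: only 3 requests got through, one of which hit a similar 2-second pause.
const haltedWindow: number[] = [50, 60, 2000];

console.log(meanResponseTimeMs(busyWindow));   // ~52 ms: the outlier is invisible
console.log(meanResponseTimeMs(haltedWindow)); // ~703 ms: the same kind of outlier dominates
```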

Fast Application Switching Is Slow Application Switching in Mango

I have an issue I'm hoping someone can help with. I have an application that, for all intents and purposes, is working great. It's basically a picture viewer type of application for something very specific: roughly 500 pictures.
I have all the pictures set as Content and I load/unload one at a time. For the 500 pictures, I have a class that holds data about each picture: things like "place taken", "index", "short description", etc. I never need to insert or delete from this list, but I may need to make some changes to each individual item, like "user viewed this picture on..." (a date) or "favorite = true" (a boolean the user sets to mark a picture as a favorite).
When I deploy the app, this "picture metadata" is in an XML file. It is deserialized and saved to IsoStorage upon first run. A copy of it is maintained in memory, and that is what is used to run my whole app. I have 3 different pages that all use that data, which is set as a static property in app.xaml.cs. Upon Deactivated/Closing the data is serialized back to XML; upon relaunching it is deserialized. Everything works fine and fast, everywhere, including tombstoning.
The problem is resuming from deactivation when the app is not tombstoned. It can take up to 10-15 seconds, and it definitely returns with e.IsApplicationInstancePreserved set in Application_Activated (i.e. it's not tombstoned).
Launching fresh, it takes about 3-4 seconds for the app to start. Returning from tombstoning also takes about 3 seconds.
What I don't understand is why returning with e.IsApplicationInstancePreserved = true takes so long (and it won't let me pass certification). I've tested and found that with about 10 items in the List, FAS is incredibly snappy. With about 50 items in the List it's not immediate. With 100 items, it's the first time you can see "Resuming..." (yep, that word coming from FAS, not tombstoning). Where I have it, at 500 items in the List, FAS is painfully slow to watch, which makes it SAS.
It's interesting that in the emulator, FAS works perfectly fine, even with 1000 objects in memory. It's on an actual device where it is incredibly slow (Samsung Focus) in both debug and release mode.
Now I know the easy answer may be something like "why maintain a class with a list of 500 objects all the time?", but my whole architecture and user experience is based on having data about the pictures available on all three pages all the time. Linq is heavily used to report data everywhere.
Any thoughts or guidance on this situation?
Great to hear you found the problem.
"sounds a serialization/deserialization issue to me. Are you storing the list in a State 'variable' ? were the framework will serialize when leaving the app and deserialize it when you return"
Remember, your PC + Emulator is MUCH faster and more powerful than your phone's 1GHz single core ARM processor with 500MB RAM!
When your app is activated with e.IsApplicationInstancePreserved == true, then you shouldn't need to restore any state and you should be able to pick up where the user left off.
If you're saying that when your list catalogs 500 items, the performance is horribly slow when resuming the app, I wonder whether you're also keeping the 500 images in memory and your phone is having to swap all that data into your app's memory space. This will be further compounded if your app has multiple pages each containing copies of your 500 images!
I strongly encourage you to profile your app to measure its memory footprint and to see where the big memory, storage, IO and perf issues are.

What's a reasonable time for generating a web page?

I'm working on a web app (Rails 3 based), and I really don't like the time it takes to generate a page: depending on the displayed data it takes up to 2.5 and even 4 seconds.
So I was just wondering what a reasonable average time for generating a page is in your apps. Say you check the generation time, e.g. it's 750ms, and you think "OK, that should be fine even without caching". Or when you see 1.5s you think "Oh my God, the user won't wait that long and will leave the site".
There's a huge amount of research data regarding the time from query to rendering and the user's experience. I'd recommend reading this useit.com article. After all, Google integrated page speed into its results for a reason ;)
The 3 response-time limits are the same today as when I wrote about them in 1993 (based on 40-year-old research by human factors pioneers):
0.1 seconds gives the feeling of instantaneous response — that is, the outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation (direct manipulation is one of the key GUI techniques to increase user engagement and control — for more about it, see our Principles of Interface Design seminar).
1 second keeps the user's flow of thought seamless. Users can sense a delay, and thus know the computer is generating the outcome, but they still feel in control of the overall experience and that they're moving freely rather than waiting on the computer. This degree of responsiveness is needed for good navigation.
10 seconds keeps the user's attention. From 1–10 seconds, users definitely feel at the mercy of the computer and wish it was faster, but they can handle it. After 10 seconds, they start thinking about other things, making it harder to get their brains back on track once the computer finally does respond.
A 10-second delay will often make users leave a site immediately. And even if they stay, it's harder for them to understand what's going on, making it less likely that they'll succeed in any difficult tasks.
As a rule of thumb, always aim for a balance between optimization time and time gained. Don't spend days optimizing the hell out of one routine when your images aren't compressed correctly or your scripts/CSS aren't combined. Yes, faster is better, but a 90% gain in page generation from setting up a smart cache beats a 10% gain after a week of tweaking an algorithm.
Also, don't read too much into the first render time, when the framework has to load everything; instead, use stress testing, cached or not, to simulate various situations.
Now, some data: some of the latest sites I worked on used DotNetNuke, a huge open-source CMS, and ASP.NET MVC, where you are nearer to the metal. Average page time with average DB queries was 600-700 milliseconds for DotNetNuke. For ASP.NET MVC, it's 70-100 milliseconds... Users really like the second one :)
There's no 'right' answer to this - the faster the better. Personally I normally aim for < 200ms, although I know from experience that it can be quite difficult to achieve this in Rails on anything but simple apps. Try and figure out where your bottlenecks are and cache what you can.
Edit: There seems to be some confusion between page generation time and page render time. Obviously a quick page render is the goal, and on most sites doing things like reducing HTTP requests, gzipping CSS/JS are where you can get most of your quick wins. But if the page itself can take 4-5 seconds to generate, then you're probably right that your app is where you should start.
It depends on whether nothing is displayed for 2.5-4 seconds, or whether the user already sees (part of) the page from the start and it merely finishes loading completely after 2.5-4 seconds; in the latter case the user doesn't experience a 2.5-4 second load. Take the http://www.nytimes.com/ website: I see most of it right away, but according to the Web Inspector it takes 1.94 seconds to load completely.
And keep in mind that the speed will also depend on the browser, computer, internet connection. What's fast for you might be slower for others.
Measure your Apdex score and see how it is performing. That will give you a rough indication. From there, you can decide how you want to increase performance.
It also depends on what your site is: a system application for a business, or software as a service (SaaS)? If it's a system application, the users are forced to use it, so performance can be negotiated. If it's SaaS, then the lower your Apdex score, the more chance you have of losing your users' interest.
There are a few gems out there that measure performance and report on what your apdex is.
Here's a little more info: http://apdex.org/blog/?p=630
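For reference, the Apdex calculation itself is simple: responses at or below a target T count as "satisfied", responses between T and 4T as "tolerating", and the rest as "frustrated". A minimal sketch, with a 500ms target chosen purely as an example:

```typescript
// Minimal sketch of the Apdex calculation: samples at or under the target T are "satisfied",
// samples between T and 4T are "tolerating", the rest are "frustrated".
// Apdex = (satisfied + tolerating / 2) / total, giving a score between 0 and 1.
function apdex(responseTimesMs: number[], targetMs: number): number {
  const satisfied = responseTimesMs.filter(t => t <= targetMs).length;
  const tolerating = responseTimesMs.filter(t => t > targetMs && t <= 4 * targetMs).length;
  return (satisfied + tolerating / 2) / responseTimesMs.length;
}

// Example with a 500 ms target: three fast pages, one tolerable page, one slow page.
console.log(apdex([120, 300, 450, 1200, 2500], 500)); // (3 + 1 / 2) / 5 = 0.7
```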
My personal rule: no page should take more than 0.05 seconds to generate, or you are in trouble.
As long as you write proper code, you don't need to spend much time on optimization to stay under 0.05 seconds.
If you stick to giant frameworks, then you are out of luck.

Tracking impressions/visits per web page

I have a site with several pages for each company, and I want to show how each company's page is performing in terms of the number of people visiting the profile.
We have already made sure that bots are excluded.
Currently, we record each hit in a DB with either an insert (for the first request to a profile in a day) or an update (for subsequent requests to that profile the same day). But given that requests have grown from a few thousand per day to tens of thousands per day, these inserts/updates are causing major performance issues.
Assuming no JS solution, what would be the best way to handle this?
I am using Ruby on Rails, MySQL, Memcached, Apache, and HAProxy to run the whole show.
Any help will be much appreciated.
Thx
http://www.scribd.com/doc/49575/Scaling-Rails-Presentation-From-Scribd-Launch
You should start reading from slide 17.
I don't think performance is a problem if it's possible to build a solution like this for a website as big as Scribd.
Here are 4 ways to address this, from easy estimates to complex and accurate:
Track only a percentage (10% or 1%) of users, then multiply to get an estimate of the count.
After the first 50 counts for a given page, start updating the count only 1/13th of the time, incrementing by 13. This helps when a few pages account for most of the counts, while keeping small counts accurate. (Use 13 because it's hard to notice that the increment isn't 1.)
Save exact counts in a cache layer like memcache or local server memory and save them all to disk when they hit 10 counts or have been in the cache for a certain amount of time.
Build a separate counting layer that 1) always has the current count available in memory, 2) persists the count to its own tables/database, and 3) has calls that adjust both places (see the sketch after this list).
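Here is an illustrative sketch of the cache-and-flush idea behind options 3 and 4. It's not your stack (in Rails you would back this with Memcached/Redis and ActiveRecord instead); the HitCounter class, flush threshold and persist callback are all made up for illustration:

```typescript
// Illustrative sketch of the "buffer in memory, flush in batches" idea from options 3 and 4.
// In a Rails stack you would back this with Memcached/Redis and ActiveRecord instead; the
// HitCounter class, flush threshold and persist callback here are made up for illustration.
class HitCounter {
  private pending = new Map<string, number>(); // profileId -> hits not yet written to the DB

  constructor(
    private persist: (profileId: string, hits: number) => Promise<void>,
    private flushThreshold = 10 // one DB write per N hits instead of one per request
  ) {}

  async recordHit(profileId: string): Promise<void> {
    const count = (this.pending.get(profileId) ?? 0) + 1;
    if (count >= this.flushThreshold) {
      this.pending.delete(profileId);
      await this.persist(profileId, count); // single batched write
    } else {
      this.pending.set(profileId, count);
    }
  }

  // Call this on a timer (and at shutdown) so low-traffic profiles are not lost.
  async flushAll(): Promise<void> {
    for (const [profileId, hits] of this.pending) {
      await this.persist(profileId, hits);
    }
    this.pending.clear();
  }
}
```

The point is that the database sees one write per N hits (plus a periodic flush) rather than one write per request; with multiple app servers you can use Memcached's atomic increment as the shared in-memory layer instead of a per-process map.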
