WebPageTest's Lighthouse: field or lab?

Webpagetest.org offers its own Lighthouse test. It can run as part of the general WPT test or as a standalone test from the /lighthouse URL.
My question: is WPT's Lighthouse test a lab test or a field test?
I ask because of the following experience:
I fixed a CLS issue on a website. I could see the issue was gone when I tested the site with Lighthouse from Chrome DevTools and with PageSpeed Insights.
The PageSpeed Insights results showed it very clearly:
the field data still reported the CLS issue (correct, because it covers the past 28 days and I fixed the issue only today),
while the lab data reported no CLS issue (also correct, since lab data is collected in real time in a specific environment).
But when I then tested the website with WPT's Lighthouse, the CLS issue appeared again. That is why I suspect WPT's Lighthouse data is field data, and why I decided to ask.
PS: At first I thought of posting this question on https://webmasters.stackexchange.com/, but the most important tag, [webpagetest], doesn't exist there.

It was my mistake in terminology.
Lighthouse always runs lab tests, no matter where it is started from. PageSpeed Insights simply shows two different result sources (compare the sketch below):
Field data, from the CrUX database,
Lab data, from a Lighthouse run.
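To make the two sources concrete, here is a minimal Python sketch that pulls both from the PageSpeed Insights v5 API for one URL. The endpoint and the CLS keys (loadingExperience / CUMULATIVE_LAYOUT_SHIFT_SCORE for field, lighthouseResult / cumulative-layout-shift for lab) reflect my understanding of the current response shape; treat it as an assumption to verify rather than a definitive recipe.

```python
# Sketch: compare field (CrUX) vs lab (Lighthouse) CLS for one URL
# via the PageSpeed Insights v5 API. Keys are assumed, verify against the docs.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def field_vs_lab_cls(url, api_key=None):
    params = {"url": url, "category": "performance"}
    if api_key:
        params["key"] = api_key  # optional API key, raises the request quota
    data = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()

    # Field data: aggregated CrUX measurements from real users (last 28 days).
    field = (data.get("loadingExperience", {})
                 .get("metrics", {})
                 .get("CUMULATIVE_LAYOUT_SHIFT_SCORE", {}))
    print("field CLS (CrUX, reported on the API's own scale):", field.get("percentile"))

    # Lab data: a fresh Lighthouse run in a controlled environment.
    lab = (data.get("lighthouseResult", {})
               .get("audits", {})
               .get("cumulative-layout-shift", {}))
    print("lab CLS (Lighthouse):", lab.get("displayValue"))

if __name__ == "__main__":
    field_vs_lab_cls("https://example.com")  # placeholder URL
```

With this side-by-side output it is easy to see the situation described above: the field value lags behind a fix for up to 28 days, while the lab value reflects the page as it is right now.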

Related

Monitor site's pages via lighthouse score in an automated way

We have a task to gather Lighthouse metrics periodically (once a minute, for several pages).
We want to use the PageSpeed API.
Is there perhaps a paid version of it we could use in such a case?
What is the pricing policy, if one exists?
Thanks!
You can use paid services for this; for example, I'm working on pagespeed.green, and scheduled tests will be supported soon. It would be helpful to know your needs.
I highly suggest integrating Lighthouse with a reputable automation tool (such as Protractor) and auditing your web page through it. You can then run those Lighthouse + Protractor test cases from Jenkins/TeamCity and publish the report periodically. This way you can keep track of your website's performance absolutely free.
If you wonder how to integrate Lighthouse with Protractor, you can refer here.
It's not only Protractor; you can integrate Lighthouse with Puppeteer as well.
Let me know if you have any success.
Check "Run chrome lighthouse's audit from command line",
and run it on hot reload so that it runs on every reload. You can also add it to your build script and check the values.
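Building on the command-line suggestion above, here is a rough Python sketch of one way to run the lighthouse CLI on a schedule and pull a few numbers out of its JSON report. It assumes Node, the lighthouse package and Chrome are installed locally; the URL list and interval are placeholders, and none of this is part of the original answer.

```python
# Rough sketch: run the Lighthouse CLI periodically and log a few metrics.
# Assumes `npm install -g lighthouse` and a local Chrome; adjust flags as needed.
import json
import subprocess
import time

URLS = ["https://example.com/", "https://example.com/pricing"]  # pages to monitor
INTERVAL_SECONDS = 60

def run_lighthouse(url):
    # Ask Lighthouse for a JSON report on stdout, performance category only.
    out = subprocess.run(
        ["lighthouse", url,
         "--output=json", "--output-path=stdout",
         "--only-categories=performance",
         "--chrome-flags=--headless",
         "--quiet"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def main():
    while True:
        for url in URLS:
            report = run_lighthouse(url)
            perf = report["categories"]["performance"]["score"]
            lcp = report["audits"]["largest-contentful-paint"]["numericValue"]
            print(f"{time.strftime('%H:%M:%S')} {url} perf={perf} lcp_ms={lcp:.0f}")
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```

Note that a single Lighthouse run can easily take tens of seconds per page, so a strict once-a-minute cadence across several pages may require staggering the runs or executing them in parallel.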

predictionio not producing any predictions

I am trying out PredictionIO for the first time. I followed the installation instructions for Linux and developed several test engines. After repeatedly getting the following error on my own datasets, I decided to follow the movie 100k tutorial (https://github.com/PredictionIO/PredictionIO-Docs/blob/cbca03b1c2bad949db951a3a798f0080c48b3674/source/tutorials/movie-recommendation.rst). The same error persists even though Hadoop appears to be running correctly (and not in safe mode), and the engine says that it is running and that training is complete. The error that I am getting is:
predictionio.ItemRecNotFoundError: request: GET
/engines/itemrec/movie-rec/topn.json {'pio_n': 10, 'pio_uid': '28',
'pio_appkey':
'UsZmneFir39GXO9hID3wDhDQqYNje4S9Ea3jiQjrpHFzHwMEqCqwJKhtAziveC9D'}
/engines/itemrec/movie-rec/topn.json?pio_n=10&pio_uid=28&pio_appkey=UsZmneFir39GXO9hID3wDhDQqYNje4S9Ea3jiQjrpHFzHwMEqCqwJKhtAziveC9D
status: 404 body: {"message":"Cannot find recommendation for user."}
The rest of the tutorial runs as expected, just no predictions ever seem to appear. Can someone please point me in the right direction on how to solve this issue?
Thanks!
Several suggestions:
Check whether there is data in PredictionIO's database. I have seen jobs fail because there were some items in the database but no users and no user-to-item actions. Look into the Mongo appdata database; there should be collections named users, items and u2iActions. These collections are only created when you add the first user, item or u2iAction via the API. Unfortunately, the web interface does not make it clear whether the job completed successfully or not. (A quick check sketch follows this list.)
Check the logs: PredictionIO logs, and Hadoop logs if you use Hadoop jobs. See whether the model-training jobs completed (by the way, did you invoke "Train prediction model now" via the web interface?).
Verify that there is some data in predictionio_modeldata for your algorithm.
Even if the model is trained OK, there may still not be enough data to produce recommendations for a particular user. Try the "Random" algorithm to get the simplest recommendations, available for everyone, to check whether the system as a whole works.
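For the first suggestion, a small pymongo check along these lines can show whether the appdata collections hold anything at all. The host/port and database name are assumptions for a default local install (depending on configuration the database may be prefixed, e.g. predictionio_appdata, in line with the predictionio_modeldata database mentioned above); the collection names come from the suggestion itself.

```python
# Quick check (sketch): does PredictionIO's appdata database contain any
# users, items and user-to-item actions? Assumes MongoDB on localhost:27017.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
appdata = client["appdata"]  # may be "predictionio_appdata" in some setups

for name in ("users", "items", "u2iActions"):
    if name in appdata.list_collection_names():
        print(f"{name}: {appdata[name].count_documents({})} documents")
    else:
        print(f"{name}: collection missing (nothing imported yet?)")
```

If any of the three collections is empty or missing, the training job has nothing to work with, which matches the "Cannot find recommendation for user." response.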

Website makes IE crash and requires restart (!)

Website built on: Rails 3.0.3 & Heroku
Installed: Exception Notifier & New Relic
I am rewriting this question since my previous attempt was unclear and subjective; I hope this works better.
I have a website where users can perform calculations. Once in a while I get reports from users, through my (one-way) communication channel, that "the website crashes and tells me I need to restart IE, but it still doesn't work", which is pretty much as specific as the information gets.
I get no timestamps, so I cannot look for it in the logs (Heroku only keeps 2000 lines of logs), I get no exception notifications, and I cannot reproduce the error myself, so I would like your help with the following:
What would make a website crash in a way that tells the user to restart the browser? I have never even heard of that! What should I look for in the logs, if I can get timestamps for the errors?
Assuming it is a JavaScript problem (which seems likely), how could I troubleshoot this issue? What tools can I use? Firebug does not give me any errors.
Assuming it is an IE version issue, how can I test the application systematically (without installing/reinstalling different versions)? Are there any applications that can test a site in different browsers?
It seems to work for most users/combinations. Do you have an older version of IE installed and can you reproduce this error? Site: www.countcalculate.com (try any calculation).
Probably related to a very intensive loop. For some reason IE thinks it's appropriate to block the UI thread while JavaScript is executing, so the whole thing will freeze up if your JavaScript breaks.
I can't reproduce the issue, so I'd suggest trying to get more detailed reports from your customers.
The problem was (apparently) limited to IE8 & XP users. That combination conflicted with a bug in jQuery 1.6.2, according to http://bugs.jquery.com/ticket/9981.
Downgrading to 1.6.1 solved the problem.

ASP.Net MVC Website.. extremely slow after publishing

Hi
I've been working on a medium-sized MVC project. It works fine on localhost at a good speed. Each page retrieves a lot of server-side data, and I use a lot of jQuery to minimize the traffic to the server, but even so the pages load very slowly. On many events I retrieve JSON results to get a specific number from the database and make calculations; this data takes a long time to arrive on the live site, although on localhost it shows up immediately. Submitting pages also takes an awfully long time. I've published my project to GoDaddy's server, and my database is there as well. What could be making the project that slow? How can I minimize it? And why does it only happen when the website is online and not on localhost?
As such, the issue can be anywhere, and the only certain way to know is to instrument the code. I suggest adding simple logging traces with date-time stamps in your server code (logging should be configurable; any logging framework, including System.Diagnostics.Trace, supports this) and checking where the time is spent. For example, database round trips can be expensive.
If you don't find the culprit in the server-side code, i.e. the server is serving the request in reasonable time, then you have to look at performance over the network. Tools such as Fiddler (or Firefox) should help you here. Sometimes issuing too many requests from the browser is also problematic, because the browser may make only n concurrent requests, or the server may be configured to accept only n requests from a particular client; this can serialize requests and increase the total response time. These scenarios are hard to catch on localhost, because network latency there is almost zero.
You may also use a tool such as YSlow for related performance-improvement suggestions. But please do your investigation first, find the bottlenecks, and then ask for solutions to specific problems.
Run it in Chrome and turn on the developer tools. Expand the Console and watch for errors. From there you can also monitor the network calls to see which one is slow.
If the MVC app uses Entity Framework (which is based on LINQ), it will surely be slow, because LINQ is slow compared to the old ADO.NET.

Agresso payment creation via acrbatchinput

We're attempting to generate payments in an Agresso 5.5 system. The mechanism we've been told to use is to write new payment data into table acrbatchinput where it will be picked up and processed by a regular job running in agrbibat.dll. We have code that worked on a previous version of Agresso but following the upgrade our payments get rejected by the agrbibat job. Sometimes it generates useful messages in the log, sometimes it doesn't, and working through failures without good information is becoming a bit of a slog.
Is there some documentation we're missing? In particular, it would be useful to have a full list of the validation rules the job uses so we can implement them ourselves rather than trying to infer them from the log. I can't find any; there's not a lot for acrbatchinput on Google. Does this list or some other documentation exist? Is agrbibat something easily decompilable, e.g. .NET?
Thanks. The test system we have is running against Oracle on Solaris with the Agresso jobs hosted on Windows. We have limited access to the Oracle and Agresso systems because (I think!) the same Oracle server is hosting the live payment system, but I could probably talk finance into giving us agrbibat.dll if that might help. We're unlikely to get enough access to their servers to debug it in place.
It turns out that our problem is partly because the new test system we've been given access to wasn't set up correctly, so we might be able to progress this without extra information; we're waiting on the finance team here for input.
However, we're still interested in acrbatchinput or agrbibat documentation or information. You've missed the bounty I set, but ticks, votes and gratitude are still available.
I know this is an ancient old question, but here's my response anyway for anyone else that finds it.
The only documentation is the usual Agresso help files from within the desktop client. Meaningful information is only gleaned through trial and error, however!
The required fields differ depending on whether a given record is a GL, AP/AR or tax transaction. (That much, at least, is explained in the help.)
In addition to using the log file, it's often helpful to look at GL07's report output for errors.
