I am trying to track the page speed of certain URLs in my project each time a pull request is merged on GitHub, and to output the results as an HTML report or a JSON file. On the CI side I am going to use Jenkins. I have no prior knowledge of performance testing. I want to know the best approach to automate the speed test, integrate it with Jenkins, and output the result.
While researching on the internet, I noted a few possibilities for achieving this goal.
Installing the PageSpeed Insights (psi) Node package, writing a script that uses psi to fetch the speed of certain pages, and generating test reports for use with Jenkins. (Referred to this link by Oxagile; a rough sketch of such a script follows this list.)
Performance testing using JMeter and integrating it with Jenkins.
Performance analysis using Lighthouse. (Referred to this link by Timo Stollenwerk.)
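For the psi option, here is a minimal sketch of what such a script might look like. It assumes the psi npm package and made-up URLs and threshold; the exact response field names can differ between package/API versions.

```typescript
// Rough sketch only: assumes the `psi` npm package (a PageSpeed Insights wrapper);
// the response fields shown here may differ between package/API versions.
import psi from 'psi';
import { writeFileSync } from 'fs';

// Hypothetical pages to track.
const urls = ['https://example.com/', 'https://example.com/checkout'];

async function run() {
  const results: { url: string; performanceScore?: number }[] = [];
  for (const url of urls) {
    const { data } = await psi(url, { strategy: 'mobile' });
    results.push({
      url,
      performanceScore: data.lighthouseResult?.categories?.performance?.score,
    });
  }
  // Write a JSON report that the Jenkins job can archive as a build artifact.
  writeFileSync('psi-report.json', JSON.stringify(results, null, 2));
  // Fail the build if any page scores below an arbitrary threshold.
  if (results.some(r => (r.performanceScore ?? 0) < 0.8)) {
    process.exit(1);
  }
}

run().catch(err => { console.error(err); process.exit(1); });
```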
Choosing the right approach is very important, so I would be very grateful if anyone could suggest different approaches and the right one to use in my case (with examples if possible).
Thank you in advance.
After quite a bit of research, I found that sitespeed.io is the best solution for achieving this goal. It is a complete web performance tool that helps measure the performance of a website. It is well suited to running in continuous integration to find web performance regressions on commits, and to monitoring production and alerting on regressions.
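To give an idea of how this can look in practice, here is a rough sketch of a script a Jenkins job could run after each merge. It shells out to the sitespeed.io CLI; the flag names may vary by version, and budget.json is a hypothetical performance-budget file you would define yourself.

```typescript
// Rough sketch: run sitespeed.io against a few URLs and fail the build on a budget breach.
// Assumes sitespeed.io is installed (e.g. `npm i -D sitespeed.io`); flag names may vary by version.
import { spawnSync } from 'child_process';

// Hypothetical pages to track.
const urls = ['https://example.com/', 'https://example.com/checkout'];

const result = spawnSync('npx', [
  'sitespeed.io',
  ...urls,
  '-n', '3',                             // run each page a few times
  '--outputFolder', 'sitespeed-result',  // HTML/JSON output Jenkins can archive or publish
  '--budget.configPath', 'budget.json',  // hypothetical budget file holding your thresholds
], { stdio: 'inherit' });

// sitespeed.io should exit non-zero when the budget fails, which in turn fails the Jenkins build.
process.exit(result.status ?? 1);
```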
I am curious whether it is possible to automatically measure whether a test suite is flaky from the CircleCI interface. I would define flaky as: it fails, then passes on a re-trigger. Is this easy to do?
Not at the moment, as far as I'm aware. I've done extensive research on build insights in general, which included flaky-test analysis and monitoring, and finally decided to build my own tool. The good news is that, last I checked, they seem to be focusing on creating better insights tools in addition to what they currently have. They'll tell you all about it if you reach out to them.
In the interim, you have a few options:
Ask them how far away they are from supporting your idea of what a flaky test is (I'm hoping this point becomes outdated shortly as they work on it).
Consume their data through their API (which is decent enough), build your own tool, and crunch the numbers yourself (this is what I ended up doing, and it isn't too bad).
For example: generally speaking, a flaky test for my team is one that has failed more than a few times over a large timespan. Their API tells you whether a build failed, which test failed, and when and how. That gave me enough to work with and to decide whether to consider a given spec failure flaky or not. I'd assume your definition is similar, with perhaps the only difference being whether the build was re-triggered (I'm unsure whether they provide that info specifically, but you could use the workflow, commit, and build IDs to figure it out, e.g. by checking whether a new run has the same build ID).
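As a rough illustration of the "crunch the numbers yourself" approach, here is a sketch that pulls test metadata for a set of job numbers through CircleCI's v2 API and counts failures per test name. The endpoint and response fields are from memory and may differ; the project slug, token, job numbers, and threshold are all placeholders.

```typescript
// Rough sketch: count how often each test fails across a set of CircleCI jobs.
// Endpoint and field names are from memory (the v2 "test metadata" endpoint) and may differ.
const PROJECT_SLUG = 'gh/my-org/my-repo';      // placeholder
const TOKEN = process.env.CIRCLE_TOKEN ?? '';  // personal API token
const JOB_NUMBERS = [101, 102, 103];           // placeholder: jobs you want to analyze

interface TestItem { name: string; result: string; }

async function testsForJob(jobNumber: number): Promise<TestItem[]> {
  const res = await fetch(
    `https://circleci.com/api/v2/project/${PROJECT_SLUG}/${jobNumber}/tests`,
    { headers: { 'Circle-Token': TOKEN } },
  );
  const body = await res.json();
  return body.items ?? [];
}

async function main() {
  const failures = new Map<string, number>();
  for (const job of JOB_NUMBERS) {
    for (const t of await testsForJob(job)) {
      if (t.result === 'failure') {
        failures.set(t.name, (failures.get(t.name) ?? 0) + 1);
      }
    }
  }
  // A test that failed "more than a few times over a large timespan" gets flagged as flaky.
  for (const [name, count] of failures) {
    if (count >= 3) console.log(`possibly flaky: ${name} (${count} failures)`);
  }
}

main().catch(console.error);
```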
With that being said, I can't really answer the "how easy is it?" part of your question for certain. For me it was a relatively easy learning curve: go through their APIs, get familiar with them, run a couple of requests, look at the data, massage it, store it in a DB, then build a web interface around it. But I'm not sure how much familiarity and experience the people building the tool on your end have.
This is sort of an open-ended question/request (hope that's allowed).
On my team we are using Karate for API testing on our project, and we love it. The tests are easy to write and fairly understandable to people without coding backgrounds. The biggest problem we're facing is that these API tests have some inherent degree of flakiness (since the code we're testing makes calls to other systems). When running the tests locally on my machine, it's easy to see where a test failed. However, we're also using a Jenkins pipeline, and when the tests fail in Jenkins it's difficult to see why and how they failed. By default we get a message like this:
com.company.api.OurKarateTests > [crossdock] Find Crossdock Location.[1:7] LPN is invalid FAILED
com.intuit.karate.exception.KarateException
Basically, all this tells us is the file name and the starting line of the scenario that failed. We do have our pipeline set up so that we can pass in a debug flag and get more information. There are two problems with this, however: one is that you have to remember to set the flag on every commit you want information for; the other is that we go from having not enough information to far too much (reading through a 24 MB log of the whole build).
What I'm looking for is suggestions on how to improve this process, preferably without making changes to the Jenkins pipeline (another team manages it, and changes would likely take a long time). Though if changing the pipeline is the only way to do this, I'd like to know that. I'm willing to think outside the box and entertain unorthodox solutions (such as posting to a Slack integration).
We're currently on Karate version 0.9.3, but I plan to upgrade to 0.9.5 as part of this effort, and I've read a bit about the changes. Would the "ExecutionHook" be a good way to do this? I will be experimenting with it on my own a bit.
Have other teams/devs faced this issue? What were your solutions? Again, we really love Karate; we're just struggling with integrating it with Jenkins.
Aren't you using the Cucumber Reporting library as described here: https://github.com/intuit/karate/tree/master/karate-demo#example-report
If you do, you will get an HTML report with all traffic (and anything you print) inline with the test steps, and of course error traces. Most teams find this sufficient for troubleshooting builds, with no need to dig through logs.
Do try to upgrade as well, because we keep trying to improve the usefulness of the logs, and you may see improvements, for example when a failure happens in a JS block or in karate-config.js.
Otherwise, yes, the ExecutionHook would be a good thing to explore, but I would be really surprised if the HTML report does not give you what you need.
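If you do still want to experiment with the "post to Slack" idea from the question, one option is a small script, run after the tests, that reads the Cucumber-format JSON output Karate can produce and posts only the failed scenarios to an incoming webhook. This is just a rough sketch: it assumes the conventional Cucumber JSON layout, a typical output directory, and a hypothetical webhook URL, so adjust the paths to your build.

```typescript
// Rough sketch: read Karate's Cucumber JSON output and post failed scenarios to Slack.
// Assumes the conventional Cucumber JSON structure; the report directory and webhook are placeholders.
import { readdirSync, readFileSync } from 'fs';
import { join } from 'path';

const REPORT_DIR = 'target/surefire-reports';         // typical Karate output dir (may differ in your build)
const WEBHOOK = process.env.SLACK_WEBHOOK_URL ?? '';  // hypothetical Slack incoming webhook

const failures: string[] = [];
for (const file of readdirSync(REPORT_DIR).filter(f => f.endsWith('.json'))) {
  const features = JSON.parse(readFileSync(join(REPORT_DIR, file), 'utf8'));
  for (const feature of features) {
    for (const scenario of feature.elements ?? []) {
      const failedStep = (scenario.steps ?? []).find(
        (s: any) => s.result?.status === 'failed',
      );
      if (failedStep) {
        failures.push(
          `${feature.name} > ${scenario.name}: ${failedStep.result.error_message?.split('\n')[0]}`,
        );
      }
    }
  }
}

if (failures.length > 0 && WEBHOOK) {
  // Slack incoming webhooks accept a simple JSON payload with a "text" field.
  fetch(WEBHOOK, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: `Karate failures:\n${failures.join('\n')}` }),
  }).catch(console.error);
}
```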
At my workplace, I've been tasked with looking into some metrics that Jenkins provides, pulling them programmatically, and displaying them in some presentable format. The metrics that I need to pull are:
How many unit tests are passing? Failing? Skipping? The total % of passing?
How many integration tests are passing? Failing? Skipping? The total % of passing?
How many acceptance tests are passing? Failing? Skipping? The total % of passing?
How long does it take to execute the tests? To make the build?
How many tests are executed in the pipelines?
... the list goes on
Now I have only a very high-level (1,000 ft) understanding of Jenkins, and an even smaller understanding of the steps I need to take to make this program come to life. I am an intern without much programming experience, but after some research I learned that I can navigate the Jenkins API by adding '.../api' to the URL I want API elements for, and I know that I'm going to need to develop a plugin. Aside from that I don't have much direction at all. I don't know what environment I need to develop these plugins in (Maven? Never heard of it), I don't know what languages are supported (I only know C++, Java, and JS), and I don't know how to install a plugin or even find it on the Jenkins site. I feel like I'm drinking from a firehose with this task and need some guidance.
Does anyone have good guides, advice, tips, tricks, videos... anything that might help me get started with Jenkins plugin development? Any insight into how I might solve this problem would also be much appreciated.
Thanks so much.
First of all, there are tools out there that generate HTML reports. You can start there.
For example, an MSTest report (.trx) can be converted to HTML by TRXER
and can then be published using the HTML Publisher Plugin.
However, if you're set on building your own plugin, use NetBeans (I have tried it, and it works).
As for creating Jenkins graphs, you will have to google around and see.
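Also, before committing to a full plugin, note that most of the numbers you listed are already exposed by the Jenkins remote JSON API you mentioned (append /api/json to a job or build URL). Here is a rough sketch of pulling them from outside Jenkins; the job names and credentials are placeholders, and it assumes the jobs publish JUnit-style test results so the testReport endpoint exists.

```typescript
// Rough sketch: pull test counts and build duration from the Jenkins JSON API.
// Assumes the jobs publish test results (so .../testReport/api/json exists); names are placeholders.
const JENKINS_URL = 'https://jenkins.example.com';
const JOBS = ['unit-tests', 'integration-tests', 'acceptance-tests'];     // placeholder job names
const AUTH = 'Basic ' + Buffer.from('user:apitoken').toString('base64');  // placeholder credentials

async function getJson(path: string) {
  const res = await fetch(`${JENKINS_URL}${path}`, { headers: { Authorization: AUTH } });
  return res.json();
}

async function main() {
  for (const job of JOBS) {
    // Build-level info: result and duration (in milliseconds).
    const build = await getJson(`/job/${job}/lastCompletedBuild/api/json`);
    // Test-level info: pass/fail/skip counts, if a test report was published.
    const tests = await getJson(`/job/${job}/lastCompletedBuild/testReport/api/json`);
    const total = tests.passCount + tests.failCount + tests.skipCount;
    console.log(
      `${job}: ${tests.passCount}/${total} passed (${((tests.passCount / total) * 100).toFixed(1)}%), ` +
      `${tests.failCount} failed, ${tests.skipCount} skipped, build took ${(build.duration / 1000).toFixed(0)}s`,
    );
  }
}

main().catch(console.error);
```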
Does anybody know a good, solid CI service that provides the common feature of build parallelization but also supports JUnit reports?
The ones we have looked at so far (semaphoreapp, CircleCI, TravisCI, ...) are good but relatively useless for us, as we have to manually investigate which tests failed, since when, and how often, which negates a lot of the benefits of a hosted service.
Things that we're looking to know (all of which are provided by JUnit / Jenkins):
If the build failed, which test cases caused it?
Total number of failures / total number of tests (with trends, to better analyze things)
The individual track record of any test (so we know exactly when it broke, whether it's intermittent, ...)
You mentioned the most famous CI services, but there are alternatives where you can get a higher level of customization, such as installing plugins, fine-grained configuration, etc.
CloudBees and ClinkerHQ are both Jenkins offered as a service. You can also get very useful metrics (coverage, failures, graphs, execution times, etc.) thanks to Jenkins plugins and SonarQube. I think Jenkins and SonarQube are a perfect pair for you.
Notifications are very important too. You want to be notified when something is wrong. This feature is available on both.
Regards,
Antonio.
DISCLAIMER: I'm deeply involved in ClinkerHQ
I am thinking of putting automated regression tests in place for page load times. We have a few deployment scenarios, and I think we could use JMeter with Jenkins/Hudson integration, but I am not sure how to go about it or what the best practices are for implementing it.
Can you suggest an approach for implementing such regression tests?
Is there any better alternative to JMeter?
You can do that either with JMeter or with Grinder.
The Performance Plugin will do the trick for you:
Please check this answer
You can also use the Plot Plugin to visualize any data you want; for example, you can define your own key performance indicators (KPIs).
You can find more details in Plotting arbitrary data for a repository.
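If you also want a hard pass/fail gate in addition to the graphs, one simple pattern is to have the Jenkins job run a small script over the JMeter results file (a .jtl in CSV format by default) and fail the build when a page's average load time regresses past a threshold. A rough sketch, assuming the default CSV columns and an arbitrary made-up threshold:

```typescript
// Rough sketch: fail the build if any labelled request in a JMeter .jtl (CSV) exceeds a threshold.
// Assumes the default CSV header containing "label" and "elapsed" columns; the threshold is arbitrary.
import { readFileSync } from 'fs';

const THRESHOLD_MS = 2000; // arbitrary page-load budget
const lines = readFileSync('results.jtl', 'utf8').trim().split('\n');
const header = lines[0].split(',');
const labelIdx = header.indexOf('label');
const elapsedIdx = header.indexOf('elapsed');

// Average elapsed time per label (i.e. per sampled page/request).
// Naive comma split; fine for simple result files without embedded commas.
const sums = new Map<string, { total: number; count: number }>();
for (const line of lines.slice(1)) {
  const cols = line.split(',');
  const label = cols[labelIdx];
  const elapsed = Number(cols[elapsedIdx]);
  const entry = sums.get(label) ?? { total: 0, count: 0 };
  entry.total += elapsed;
  entry.count += 1;
  sums.set(label, entry);
}

let failed = false;
for (const [label, { total, count }] of sums) {
  const avg = total / count;
  console.log(`${label}: avg ${avg.toFixed(0)} ms over ${count} samples`);
  if (avg > THRESHOLD_MS) failed = true;
}
process.exit(failed ? 1 : 0);
```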