I am curious whether it is possible to automatically measure whether a test suite is flaky from the CircleCI interface. I would define flaky as: a test that fails and then passes on a re-trigger. Is this easy to do?
Not at the moment, as far as I'm aware. I've done extensive research on build insights in general, which included flaky test analysis and monitoring, and finally decided to build my own tool. The good news is that, last I checked, they seem to be focusing on creating better insights tools in addition to what they currently have. They'll tell you all about it if you reach out to them.
In the interim, you have a few options:
Ask them how far away they are from supporting your idea of what a flaky test is (I'm hoping this point gets outdated shortly as they work on it)
Consume their data through their API (which is decent enough), build your own tool, and crunch the numbers yourself (this is what I ended up doing and it isn't too bad)
For example: generally speaking, a flaky test for my team is one that failed more than a few times over a large timespan. Their API tells you whether a build failed, which test failed, when, and how. That gave me enough to work with and decide whether I consider a given spec failure flaky or not. I'd assume your definition is similar, with maybe the only difference being whether the build was re-triggered (I'm unsure if they provide that info specifically, but you could cross-reference the workflow, commit and build ID to figure it out, e.g. by checking whether a later run points at the same commit). A rough sketch of this approach is shown below.
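As an illustration, here is a minimal sketch of the "crunch the numbers yourself" approach in Python. The v1.1 recent-builds endpoint, the field names (vcs_revision, status) and the status values are assumptions that should be verified against CircleCI's API docs; the project slug and the CIRCLE_TOKEN environment variable are placeholders.

```python
"""Flag commits whose builds both failed and passed on CircleCI.

Sketch only: the endpoint, field names and status values are assumptions,
not verified against the current CircleCI API documentation.
"""
import os
from collections import defaultdict

import requests

PROJECT = "github/your-org/your-repo"  # placeholder project slug
URL = f"https://circleci.com/api/v1.1/project/{PROJECT}"


def fetch_recent_builds(limit=100):
    resp = requests.get(
        URL,
        params={"limit": limit, "circle-token": os.environ["CIRCLE_TOKEN"]},
    )
    resp.raise_for_status()
    return resp.json()  # expected: a list of build records


def flaky_commits(builds):
    """A commit looks flaky if it has at least one failed and one successful build."""
    outcomes = defaultdict(set)
    for build in builds:
        outcomes[build["vcs_revision"]].add(build["status"])
    return [sha for sha, statuses in outcomes.items()
            if "failed" in statuses and "success" in statuses]


if __name__ == "__main__":
    for sha in flaky_commits(fetch_recent_builds()):
        print(f"Possibly flaky behaviour on commit {sha}")
```

From there you can drill into the failed builds to see which specs failed, and store the results wherever suits you.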
With that being said, the "how easy is it?" part of your question is something I can't really answer for certain. It was a relatively easy learning curve to go through their APIs, get familiar with them, run a couple of requests, look at the data, massage it, store it in a DB, then build a web interface around it. But I'm not sure how much familiarity and experience the people building the tool on your end have.
I am trying to track the page speed of certain URLs of my project on each merge of a pull request in GitHub, and output the results as an HTML report or a JSON file. On the CI side, I am going to use Jenkins. I have no prior knowledge of performance testing. I want to know the best approach to automate the speed test, integrate it with Jenkins, and output the result.
While researching on the internet, I noted a few possibilities which could achieve this goal:
Installing "Page Speed Insights (psi) node package", creating the script that uses the psi for fetching the speed of certain pages, generating the test reports for use with Jenkins. (Referred to this link by Oxagile)
Performance testing using Jmeter and integrating with Jenkins.
Performance analysis using Lighthouse. (Referred to this link by Timo Stollenwerk)
Choosing the right approach is very important, so I would be very grateful if anyone could suggest different approaches, and the right one to use in my case (with examples if possible), to achieve this goal.
Thank you in advance.
After quite a bit of research, I found that sitespeed.io is the best solution for achieving this goal. It is a complete web performance tool that helps you measure the performance of a website. It is well suited to running in continuous integration to find web performance regressions on commits, and to monitoring production and alerting on regressions. A rough sketch of how a Jenkins job could invoke it follows.
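This is a minimal Python sketch of a script a Jenkins job could run after each merge; it assumes sitespeed.io is installed on the build agent, the URLs are placeholders, and the -n and --outputFolder flags should be double-checked against the sitespeed.io documentation.

```python
"""Run sitespeed.io against a few pages and keep the report as a build artifact.

Sketch only: assumes sitespeed.io is on PATH and that -n (iterations)
and --outputFolder behave as described in its documentation.
"""
import subprocess
import sys

URLS = [
    "https://example.com/",          # placeholder: the pages you care about
    "https://example.com/checkout",  # placeholder
]


def run_sitespeed(urls, output_dir="sitespeed-result"):
    cmd = ["sitespeed.io", "-n", "3", "--outputFolder", output_dir, *urls]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    # Jenkins can archive the output folder (HTML + JSON) as a build artifact
    # and mark the build as failed on a non-zero exit code.
    sys.exit(run_sitespeed(URLS))
```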
What is load testing? What techniques or processes are used for load testing? Why is my website very slow?
What is load testing?
Load testing a website is the process of subjecting the site to multiple simultaneous users (usually simulated via a testing tool) and measuring its performance, as in the toy sketch below.
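To make the idea concrete, here is a toy Python sketch that simulates a handful of concurrent users; real load tests use dedicated tools (JMeter, Gatling, Locust, and so on), and the URL and user count here are placeholders.

```python
"""Toy load test: N simulated users request a URL concurrently and we time them."""
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/"  # placeholder
USERS = 20                    # simulated concurrent users


def one_request(_):
    start = time.monotonic()
    status = requests.get(URL, timeout=30).status_code
    return status, time.monotonic() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_request, range(USERS)))
    durations = sorted(elapsed for _, elapsed in results)
    print(f"fastest {durations[0]:.2f}s, "
          f"median {durations[len(durations) // 2]:.2f}s, "
          f"slowest {durations[-1]:.2f}s")
```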
What techniques or processes are used for load testing?
There is no short answer to that. There are many processes, techniques and tools involved in website load testing.
Why is my website very slow?
Bandwidth limitations, CPU overload, poor code design, database locking, memory contention... it could be any of those, or a zillion other possibilities. You'll need to do some testing to narrow it down.
Judging by your questions, you've never done any performance testing before, so you might want to engage some help. My company as well as many others offer load testing services that can get answers for you quickly.
I'm looking for tools to monitor/test performance in rails, and I'm not having much luck finding anything particularly effective. I've read the rails 'performance' guide, but I use RSpec instead of Rake:Test, so I'm not particularly keen to use the rake:test framework.
So, what do folks use for performance testing in rails apart from the rake:test benchmarker? Any suggestions appreciated
Performance benchmarking is one of those things that you'll get different opinions about depending on who you ask. One thing I hear over and over is that you shouldn't obsess over performance early on. I'm not sure where you're at with your application, but this could be something to consider. After developing a rather large application, I can honestly say I agree with them. It's better to follow good practices when developing and save performance tuning for later. Best practices include things like indexing database columns.
For performance monitoring of live Rails applications, New Relic is one of the best tools out there*. The free plan is a little limited as it only provides 30 minutes of historical data, but the information it collects is priceless. Some of the cloud hosts like Heroku and Engine Yard are offering free upgrades to the bronze plan, which stores a week of data. Once you have information about your application, you can make educated decisions about where to focus your time.
* My opinion
When your app needs some performance testing, the default TestUnit-based performance benchmarking tests are a great start. However, you shouldn't stop there; consider using a variety of tools based on the nature of your application.
For example, analyzing production logs using a tool like request-log-analyzer is a great way to identify the real performance bottlenecks (the general idea is sketched below). Bullet is another great tool you can run in your development environment to identify inefficient database calls. For low-level benchmarking, Rails also gives you benchmark helper methods in models, controllers and views. These can be handy if you are focusing on tuning a specific part of your application.
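To illustrate the underlying idea (not the actual output of the tools named above), here is a rough Python sketch that aggregates request durations per path from a log file; the log format it parses is invented for the example, whereas request-log-analyzer understands real Rails log formats.

```python
"""Find the endpoints with the highest average request duration in a log.

Sketch only: expects made-up lines containing "path=<path> duration_ms=<n>".
"""
import re
from collections import defaultdict

LINE = re.compile(r"path=(?P<path>\S+)\s+duration_ms=(?P<ms>\d+)")


def slowest_endpoints(log_path, top=10):
    totals = defaultdict(lambda: [0, 0])  # path -> [total_ms, request_count]
    with open(log_path) as fh:
        for line in fh:
            match = LINE.search(line)
            if match:
                entry = totals[match.group("path")]
                entry[0] += int(match.group("ms"))
                entry[1] += 1
    averages = {path: total / count for path, (total, count) in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top]


if __name__ == "__main__":
    for path, avg_ms in slowest_endpoints("production.log"):  # placeholder path
        print(f"{avg_ms:8.1f} ms  {path}")
```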
It is also worth noting that RSpec is not the best tool for benchmarking performance (to date). In my opinion, trying to assert things like it should_take_less_than 50 stretches the idea of performance testing and tries to force it into the concept of BDD. Performance is less often about absolute expectations and more about identifying the slowest parts of your app and making them faster.
There are many online resources on the topic. I've found these railscasts to be a great starting point:
http://railscasts.com/episodes/368-miniprofiler (free)
http://railscasts.com/episodes/411-performance-testing (pro, requires subscription)
The software development team in my organization (which develops APIs - middleware) is gearing up to adopt at least one best practice at a time. The following are on the list:
Unit Testing (in its real sense),
Automated unit testing,
Test Driven Design & Development,
Static code analysis,
Continuous integration capabilities, etc..
Can someone please point me to a study that shows which 'best' practices, when adopted, have a better ROI and improve software quality faster? Is there a study out there?
This should help me (support my claim to) prioritize the implementation of these practices.
"a study that shows which 'best' practices when adopted have a better ROI, and improves software quality faster"
Wouldn't that be great! If there were such a thing, we'd all be doing it, and you'd simply read about it in DDJ.
Since there isn't, you have to make a painful judgement.
There is no "do X for an ROI of 8%". Some of the techniques require a significant investment. Others can be started for free.
Unit Testing (in its real sense) - Free - ROI starts immediately.
Automated unit testing - not free - requires automation.
Test Driven Design & Development - Free - ROI starts immediately.
Static code analysis - requires tools.
Continuous integration capabilities - inexpensive, but not free
You can't know the ROI. So you can only prioritize on investment. Some things are easier for people to adopt than others. You have to factor in your team's willingness to embrace the technique.
Edit. Unit Testing is Free.
"time spend coding the test could have been taken to code the next feature on the list"
True, testing means developers do more work, but support does less work debugging. I think this is not a 1:1 trade. A little more time spent writing (and passing) formal unit tests dramatically reduces support costs.
"What about legacy code?"
The point is that free is a matter of managing cost. If you add unit tests to legacy code, the cost isn't free. So don't do that. Instead, add unit tests as part of maintenance, bug-fixing and new development -- then it's free.
"Traning is an issue"
In my experience, it's a matter of a few solid examples, plus management demanding unit tests in addition to code. It doesn't require more than an all-hands meeting to explain that unit tests are required, and here are the examples. Then it requires everyone to report their status as "tests written / tests passed". You aren't 60% done; you've passed 232 out of 315 tests.
"it's only free on average if it works for a given project"
Always true, good point.
"require more time, time aren't free for the business"
You can either write bad code that barely works and requires a lot of support, or you can write good code that works and doesn't require a lot of support. I think that the time spent getting tests to actually pass reduces support, maintenance and debugging costs. In my experience, the value of unit tests for refactoring dramatically reduces the time to make architectural changes. It reduces the time to add features.
"I do not think either that it's ROI immediately"
Actually, one unit test has such a huge ROI that it's hard to characterize. The first test to pass becomes the one thing that you can really trust. Having just one trustworthy piece of code is a time-saver, because it's one less thing you have to spend a lot of time thinking about.
War Story
This week I had to finish a bulk data loader; it validates and loads the 30,000-row files we accept from customers. We have a nice library that we use for uploading some internally developed files. I wanted to use that module for the customer files. But the customer files are different enough that I could see that the library module's API wasn't really suitable.
So I rewrote the API, reran the tests and checked the changes in. It was a significant API change. Much breakage. Much grepping the source to find every reference and fix them.
After running the relevant tests, I checked it in. And then I reran what I thought was a not-closely-related test. Oops. It had a failure. It was testing something that wasn't part of the API, which had also broken. Fixed. Checked in again (an hour late).
Without basic unit testing, this would have broken in QA, required a bug report, required debugging and rework. Look at the labor: 1 hour of QA person to find and report the bug + 2 hours of developer time to reconstruct the QA scenario and locate the problem + 1 hour to determine what to fix.
With unit testing: 1 hour to realize that a test didn't pass, and fix the code.
Bottom Line. Did it take me 3 hours to write the test? No. But the project got three hours back for my investment in writing the test.
Are you looking for something like this?
The ROI of Software Process Improvement: A New 36-Month Case Study, by Capers Jones
Agile Practices with the Highest Return on Investment
You're assuming that the list you present constitutes a set of "best practices" (although I'd agree that it probably does, btw).
Rather than try to cherry-pick one process change, why not examine your current practices?
Ask yourself this:
Where are you feeling the most pain? What might you change to reduce/eliminate it?
Repeat until pain-free.
You don't mention code reviews in your list. For our team, this is probably what gave us the greatest ROI (yes, the investment was steep, but the return was even greater). I know Code Complete (the original version at least) mentioned statistics on the efficiency of reviews in finding defects vs. testing.
There are some references for ROI with respect to unit testing and TDD. See my response to this related question; Is there hard evidence of the ROI of unit testing?.
There is such a thing as a "local optimum". You can read about it in Goldratt's book The Goal. It says that an innovation is of any value only if it improves overall throughput. The decision to implement a new technology should be related to the critical paths inside your projects. If a technology speeds up a process that is already fast enough, it only creates an unnecessary backlog of finished modules, which does not necessarily improve the overall speed of project development.
I wish I had a better answer than the other answers, but I don't, because what I think really pays off is not conventional at present. That is, in design, to minimize redundancy. It is easy to say but takes experience.
In data it means keeping the data normalized, and when it cannot be, handling it in a loose fashion that can tolerate some inconsistency, not relying on tightly-bound notifications. If you do this, it simplifies the code a lot and reduces the need for unit tests.
In source code, it means if some of your "input data" changes at a very slow rate, you could consider code generation, as a way to simplify source code and get additional performance. If the source code is simpler, it is easier to review, and the need for testing it is reduced.
Not to be a grump, but I'm afraid, from the projects I've seen, there is a strong tendency to over-design, with way too many "layers of abstraction" whose correctness would not have to be questioned if they weren't even there.
One practice at a time is not going to give the best ROI. The practices are not independent.