Circle CI Analysis - circleci

I am curious whether it is possible to automatically measure, from the CircleCI interface, whether a test suite is flaky. I would define flaky as: fails and then passes on a re-trigger. Is this easy to do?

Not at the moment, as far as I'm aware. I've done extensive research on build insights in general, which included flaky-test analysis and monitoring, and finally decided to build my own tool. The good news is that, last I checked, they seem to be focusing on creating better insights tools in addition to what they currently have. They'll tell you all about it if you reach out to them.
In the interim, you have a few options:
Ask them how far they are from supporting your idea of what a flaky test is (I'm hoping this point gets outdated shortly as they work on it)
Consume their data through their API (which is decent enough), build your own tool, and crunch the numbers yourself (this is what I ended up doing, and it isn't too bad)
For example: generally speaking, a flaky test for my team is one that failed more than a few times over a large timespan. Their API tells you whether a build failed, which test failed, when and how. This gave me enough to work with and figure out whether to consider a given spec failure flaky. I'd assume your definition is similar, with maybe the only difference being whether the build was re-triggered (I'm unsure if they provide that info specifically, but you could cross-reference the workflow, commit and build IDs to figure it out; e.g. a re-run showing up as a new build for the same workflow and commit).
With that being said, the "how easy is it?" part of your question is something I can't really answer for certain. It was a relatively easy learning curve to go through their API, get familiar with it, run a couple of requests, look at the data, massage it, store it in a DB, and then build a web interface around it. But I'm not sure how much familiarity and experience the people building the tool on your end have.
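To make the "crunch the numbers yourself" step concrete, here is a minimal Ruby sketch of the grouping logic, assuming you have already pulled per-job test results from their API into simple records; the field names here are made up for illustration. A test counts as flaky when the same workflow has both a failing and a passing run of it (i.e. a re-trigger flipped the result).

# Hypothetical sketch: each record is assumed to look like
#   { workflow_id: "...", test_name: "...", passed: true/false }
# built from data you have already fetched via the CircleCI API.
def flaky_tests(records)
  records
    .group_by { |r| [r[:workflow_id], r[:test_name]] }
    .select { |_key, runs| runs.any? { |r| r[:passed] } && runs.any? { |r| !r[:passed] } }
    .map { |(workflow_id, test_name), _runs| { workflow_id: workflow_id, test_name: test_name } }
end

records = [
  { workflow_id: "wf-1", test_name: "checkout spec", passed: false },
  { workflow_id: "wf-1", test_name: "checkout spec", passed: true },  # re-trigger passed
  { workflow_id: "wf-1", test_name: "login spec",    passed: true },
]
p flaky_tests(records)  # flags "checkout spec" in workflow "wf-1" as flaky

From there you can aggregate over a timespan (e.g. flag anything that flips more than a few times a month) and store the results, as described above.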

Related

Karate API Test Debugging in Jenkins

This is sort of an open-ended question/request (hope that's allowed).
On my team we are using Karate for API testing on our project, which we love. The tests are easy to write and fairly understandable to people without coding backgrounds. The biggest problem we're facing is that these API tests have some inherent degree of flakiness (since the code we're testing makes calls to other systems). When running the tests locally on my machine, it's easy to see where a test failed. However, we're also using a Jenkins pipeline, and when the tests fail in Jenkins it's difficult to see why/how they failed. By default we get a message like this:
com.company.api.OurKarateTests > [crossdock] Find Crossdock Location.[1:7] LPN is invalid FAILED
com.intuit.karate.exception.KarateException
Basically, all this tells us is the file name and starting line of the scenario that failed. We do have our pipeline set up so that we can pass in a debug flag and get more information. There are two problems with this, however: one is that you have to remember to include the flag in every commit you want information on; the other is that we go from not enough information to far too much (reading through a 24 MB log of the whole build).
What I'm looking for is suggestions on how to improve this process, preferably without making changes to the Jenkins pipeline (another team manages it, and changes would likely take a long time). Though if changing the pipeline is the only way to do this, I'd like to know that. I'm willing to "think outside the box" and entertain unorthodox solutions (like posting to a Slack integration).
We're currently on Karate version 0.9.3, but I will probably upgrade to 0.9.5 as part of this effort. I've read a bit about the changes. Would the "ExecutionHook" thing be a good way to do this? I will be experimenting with it on my own a bit.
Have other teams/devs faced this issue? What were your solutions? Again we really love Karate, just struggling with the integration of it to Jenkins.
Aren't you using the Cucumber Reporting library as described here: https://github.com/intuit/karate/tree/master/karate-demo#example-report
If you do, you will get an HTML report with all traffic (and anything you print) in-line with the test steps, and of course error traces; most teams find this sufficient for build troubleshooting, with no need to dig through logs.
Do try to upgrade as well, because we keep trying to improve the usefulness of the logs; you may see improvements, for example when a failure happens in a JS block or in karate-config.js.
Otherwise, yes, the ExecutionHook would be a good thing to explore, but I would be really surprised if the HTML report does not give you what you need.

Need to execute a script in between a test case

I have a scenario with a RoR application and MySQL, and there is a workflow where:
the end user follows that workflow and registers her software
the software is local to the end user, running on her machine
in the middle of this workflow, the Rails app makes an HTTP request to this software and it responds
this handshake takes place between the Rails app and that software
updating a couple of entries in the DB
Now I have to write a test case for this:
that after this workflow is done,
the proper entries have been added to the DB
checking whether the workflow executed successfully
and that the handshake took place correctly, so the complete cycle is covered
I am looking for the best approach to take here.
For now, we have not prepared (nor are we planning) a comprehensive way of testing the entire app; we are just preparing a few important test cases, and this one is the first of its kind. So far we have been doing it manually.
Being lazy, we now want to automate this, and I am thinking of using Watir. I have a software simulator for the handshake; I could execute that simulator from the Watir script and get this whole cycle tested.
Does it sound reasonable for my watir/rb script to be:
executing the simulator script
checking the DB status
executing the workflow
stopping that script
checking the DB status again
Obviously, all the Ruby/Rails units involved here would have their own unit tests prepared separately, but I am interested in testing this whole cycle.
Any better suggestions, comments?
It's important to have tests at the unit AND functional level, IMO, so I think your general approach is good.
Watir, or Selenium/WebDriver would be good tools to use. If you don't already have an approach in mind, you should check out Cheezy's (Jeff Morgan's) page-object gem. It works with Watir-webdriver and Selenium-webdriver.
I like that you explicitly call out hitting the database to check for proper record creation. It's really important to hit the DB as a test oracle to ensure your system's working properly.
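For what it's worth, here is a rough Ruby/Watir sketch of the start-simulator / drive-workflow / check-DB / stop-simulator cycle the question outlines. It assumes the script runs with the Rails environment loaded (e.g. via rails runner) so ActiveRecord models are available; the simulator command, page fields and Registration model are placeholders, not real names from the project.

require "watir"

simulator_pid = spawn("ruby", "registration_simulator.rb")  # start the hand-shake simulator (placeholder command)
browser = Watir::Browser.new :chrome

begin
  before_count = Registration.count                         # DB state before the workflow (placeholder model)

  # drive the registration workflow through the UI
  browser.goto "http://localhost:3000/register"
  browser.text_field(name: "license_key").set "ABC-123"
  browser.button(name: "commit").click
  browser.div(class: "notice").wait_until(&:present?)

  # assertions: the workflow finished and the hand-shake updated the DB
  raise "no registration row created" unless Registration.count == before_count + 1
  raise "hand-shake not recorded" unless Registration.last.handshake_completed?
ensure
  browser.close
  Process.kill("TERM", simulator_pid)                        # stop the simulator
  Process.wait(simulator_pid)
end

Wrapping those checks in a proper test framework (RSpec, Test::Unit, or Cucumber with the page-object gem mentioned above) instead of bare raise calls would make failures easier to read.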
Don't want to start a philosophical debate, but I'll say that going down the road you're considering has been a time killer for me in the past. I'd strongly recommend spending the time refactoring your code into a structure that can be unit tested. One of the truly nice side effects of focusing on unit tests is that you end up creating a code base that follows the Principle of Single Responsibility, whether you realize it or not.
Also, consider skimming this debate about the fallacies of higher-level testing frameworks
Anyway, good luck, friend.

Should a proof-of-concept application have automated tests?

If I'm developing a proof-of-concept application, does it make sense to invest time in writing automated tests? This is for a personal project where I am the sole developer.
I see the only benefit of automated testing at this point as:
If the concept catches on, the tests already exist.
Some of the cons related to writing automated tests for this type of project could be:
It takes valuable time to write tests for an idea that might not be worthwhile to people.
At this level, time is better spent building a demonstration of your idea.
Can anyone provide pros and cons of investing time in writing automated tests for an application in its early stages?
This whole talk from the Google Testing Automation Conference is about your question:
http://www.youtube.com/watch?v=X1jWe5rOu3g
Basically, the conclusion is that it is more important to know you are building the right thing than to build something right (build the right "it", rather than build "it" right). The most important thing is to get a proof-of-concept through and make sure that it works and is liked. If people like your thing, then they will tolerate bugs; but if they don't like your thing, it can have no bugs and they still won't like it.
TDD is not really about testing; it's about designing. Doing TDD for your application will (probably) give it a better design than just designing by feel.
Your problem is: do you need a good design? Design is helpful for maintenance, and most devs doing TDD consider themselves in maintenance mode right after adding their first feature.
From a more pragmatic perspective: if you're the only dev, have very accurate specs, and will work on this code once and never return to it (nor send someone else back to it), I would say that making it work is enough.
But then, if your POC works, don't try to salvage anything from it; just redo it.
You can save time by doing an ugly POC and coming to the conclusion that your idea is not doable.
You can save time by doing an ugly POC and gaining a much better understanding of the domain you're trying to model.
You cannot save time by trying to get some lines of code out of a horrible codebase.
My best advice for estimating how much effort you should put into design (because overdesigning can be a big problem, too) is: try to estimate how long that code will live.
Reference: I would suggest you research the motto "Make it work, make it right, make it fast." The question you ask is about the first two points, but sooner or later you will ask yourself the same question about optimization (the third point).
There's no "right" answer. TDD can make your concept stronger, more resilient, easier to bang on, and help drive API development. It also takes time, and radical changes mean test changes.
It's rare you get to completely throw away "prototype" code in real life, though.
The answer depends entirely on what happens if you prove your concept. True Proof-of-Concept applications are thrown away regardless of the outcome, and the real application is written afterward if the PoC proved out. Those PoCs obviously don't need tests. But there are way too many "productized PoCs" out there. Those applications probably should have tests written right up front. The other answers you've received give you solid support for both positions, you just need to decide which type of PoC you're building.

Using cucumber for writing the functional requirements document for a Rails app

In the light of BDD, would it be a good idea to use Cucumber features and scenarios to write up the functional requirements document at the start of a new Rails project?
Probably not. This would be a case of BDUF (http://en.wikipedia.org/wiki/Big_Design_Up_Front).
It's unlikely you'll be able to think of all the scenarios up front. You should split the high-level requirements into features to help estimate and prioritise them, but leave the detailed scenario writing to just before you're ready to begin implementing each feature.
If you are not the one making all the decisions and someone thinks you need these, then it might appear to be a better choice than MS Word.
I actually joined a project with a million unimplemented features, so in theory we had loads of integration tests; just none of them were actually implemented. It's months later and we still have some; it's really difficult working in an environment where everything is failing at once. It's better to have one failing step at a time.
I also think features should be written by application developers who understand user-flow realities. I like the business or the client to explain things to me in a conversational style, a bit at a time, not their entire vision for world domination in 10,000 words; I will keep that for bedtime.

Which software development practices provide the highest ROI?

The software development team in my organization (which develops APIs, i.e. middleware) is gearing up to adopt at least one best practice at a time. The following are on the list:
Unit Testing (in its real sense),
Automated unit testing,
Test Driven Design & Development,
Static code analysis,
Continuous integration capabilities, etc.
Can someone please point me to a study that shows which 'best' practices, when adopted, have a better ROI and improve software quality faster? Is there such a study out there?
This should help me (support my claim to) prioritize the implementation of these practices.
"a study that shows which 'best' practices when adopted have a better ROI, and improves software quality faster"
Wouldn't that be great! If there was such a thing, we'd all be doing it, and you'd simply read it in DDJ.
Since there isn't, you have to make a painful judgement.
There is no "do X for an ROI of 8%". Some of the techniques require a significant investment. Others can be started for free.
Unit Testing (in its real sense) - Free - ROI starts immediately.
Automated unit testing - not free - requires automation.
Test Driven Design & Development - Free - ROI starts immediately.
Static code analysis - requires tools.
Continuous integration capabilities - inexpensive, but not free
You can't know the ROI. So you can only prioritize on investment. Some things are easier for people to adopt than others. You have to factor in your team's willingness to embrace the technique.
Edit. Unit Testing is Free.
"time spend coding the test could have been taken to code the next feature on the list"
True, testing means developers do more work, but support does less work debugging. I think this is not a 1:1 trade. A little more time spent writing (and passing) formal unit tests dramatically reduces support costs.
"What about legacy code?"
The point is that free is a matter of managing cost. If you add unit tests to legacy code, the cost isn't free. So don't do that. Instead, add unit tests as part of maintenance, bug-fixing and new development -- then it's free.
"Traning is an issue"
In my experience, it's a matter of a few solid examples, plus management demanding unit tests in addition to code. It doesn't require more than an all-hands meeting to explain that unit tests are required and here are the examples. Then it requires everyone to report their status as "tests written/tests passed". You aren't 60% done; you're at 232 out of 315 tests passing.
"it's only free on average if it works for a given project"
Always true, good point.
"require more time, time aren't free for the business"
You can either write bad code that barely works and requires a lot of support, or you can write good code that works and doesn't require a lot of support. I think that the time spent getting tests to actually pass reduces support, maintenance and debugging costs. In my experience, the value of unit tests for refactoring dramatically reduces the time to make architectural changes. It reduces the time to add features.
"I do not think either that it's ROI immediately"
Actually, one unit test has such a huge ROI that it's hard to characterize. The first test to pass becomes the one thing that you can really trust. Having just one trustworthy piece of code is a time-saver, because it's one less thing you have to spend a lot of time thinking about.
War Story
This week I had to finish a bulk data loader; it validates and loads 30,000-row files we accept from customers. We have a nice library that we use for uploading some internally developed files. I wanted to use that module for the customer files. But the customer files are different enough that I could see the library module's API wasn't really suitable.
So I rewrote the API, reran the tests and checked the changes in. It was a significant API change. Much breakage. Much grepping the source to find every reference and fix them.
After running the relevant tests, I checked it in. And then I reran what I thought was a not-closely-related test. Oops. It had a failure. It was testing something that wasn't part of the API, which had also broken. Fixed. Checked in again (an hour late).
Without basic unit testing, this would have broken in QA, required a bug report, and required debugging and rework. Look at the labor: 1 hour of a QA person's time to find and report the bug + 2 hours of developer time to reconstruct the QA scenario and locate the problem + 1 hour to determine what to fix.
With unit testing: 1 hour to realize that a test didn't pass, and fix the code.
Bottom Line. Did it take me 3 hours to write the test? No. But the project got three hours back for my investment in writing the test.
Are you looking for something like this?
The ROI of Software Process Improvement: A New 36-Month Case Study, by Capers Jones
Agile Practices with the Highest Return on Investment
You're assuming that the list you present constitutes a set of "best practices" (although I'd agree that it probably does, btw)
Rather than try to cherry-pick one process change, why not examine your current practices?
Ask yourself this:
Where are you feeling the most pain? What might you change to reduce/eliminate it?
Repeat until pain-free.
You don't mention code reviews in your list. For our team, this is probably what gave us the greatest ROI (yes, the investment was steep, but the return was even greater). I know Code Complete (the original version at least) cited statistics on the efficiency of reviews versus testing in finding defects.
There are some references for ROI with respect to unit testing and TDD. See my response to this related question; Is there hard evidence of the ROI of unit testing?.
There is such a thing as a "local optimum". You can read about it in Goldratt's book The Goal. It says that an innovation has value only if it improves overall throughput. The decision to implement a new technology should be related to the critical paths inside your projects. If the technology speeds up a process that is already fast enough, it only creates an unnecessary backlog of finished modules, which does not necessarily improve the overall speed of project development.
I wish I had a better answer than the others, but I don't, because what I think really pays off is not conventional at present: in design, minimize redundancy. That is easy to say but takes experience.
In data it means keeping the data normalized, and when it cannot be, handling it in a loose fashion that can tolerate some inconsistency, not relying on tightly-bound notifications. If you do this, it simplifies the code a lot and reduces the need for unit tests.
In source code, it means if some of your "input data" changes at a very slow rate, you could consider code generation, as a way to simplify source code and get additional performance. If the source code is simpler, it is easier to review, and the need for testing it is reduced.
Not to be a grump, but I'm afraid, from the projects I've seen, there is a strong tendency to over-design, with way too many "layers of abstraction" whose correctness would not have to be questioned if they weren't even there.
One practice at a time is not going to give the best ROI. The practices are not independent.
