Cucumber: tag scenario on the fly - jenkins

I am wondering if anyone's ever tried tagging scenarios on the fly.
Here's the use case:
We have hundreds of scenarios in the regression tests, and a test might fail because the API is down (which usually means it should pass the next time it runs), because the data changed (which means either the scripts are not robust enough and we need to fix them, or we need to change the data), or because the requirement changed (which means we need to change the scripts).
For the latter 2 cases, the same scenario should fail multiple times.
We need to tag the ones that require human intervention (either rewriting the scripts or changing the data) with @quarantine, and remove the @regression tag so they aren't run over and over when we know they would fail anyway.
I've not seen anyone do this. Is it doable in Cucumber without resorting to a complicated shell script?

Right now I've ended up using hooks: in an After scenario hook, I modify the feature file directly. A bit crude, but it works for now.
I am still looking for a better resolution.
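For the record, a minimal sketch of that hook, assuming a Cucumber version where the scenario object exposes its feature file location; the naive text substitution is a stand-in for real tag handling:

After do |scenario|
  next unless scenario.failed?
  # Assumption: scenario.location.file points at this scenario's .feature file.
  path = scenario.location.file
  text = File.read(path)
  # Naive retag: assumes @regression appears once, on this scenario's tag line,
  # and swaps it for @quarantine so the next regression run skips it.
  File.write(path, text.sub('@regression', '@quarantine'))
end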

Related

Should I be using random data in Capybara integration tests?

I'm writing a very long integration test for a wizard that has around 15 steps. Each of these steps has around 20 inputs/select boxes.
I started out using static data in my tests, but now I've begun to write stuff like selecting a random value from a select box and clicking a random radio button. This does seem more capable of catching bugs: for example, one of the buttons on the page might not be rendered correctly, so its value never gets saved to the database. That would never be caught by static data that selects the same option every time. Alternatively, I could manually write out every possible option that could be chosen, but that would take an eternity.
I hear that one of the main reasons not to use random data is that you cannot explicitly see the data used in your tests, which can make failing tests hard to diagnose.
Is the path I'm going down one to be avoided? Or is testing in this manner something that's generally done?
This is inherently a QA question rather than an automation one. You'll need to ask yourself and your team whether or not testing every single permutation is even worth the time and effort. Usually it is not. In my experience it's best to get information on the most common user journeys in your wizard and branch out from there. I would tackle those first from an automation standpoint and then move onto lower risk paths.
I like to use random data in certain low-risk areas that the devs confirm are relatively inconsequential (for example, a true/false radio box), and you can always make sure you are logging output properly to catch bugs.
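For example, a rough sketch of that, logging the RNG seed so a random run stays reproducible; the select box and its options are hypothetical:

seed = ENV.fetch('TEST_SEED', Random.new_seed).to_i
srand(seed)
puts "Capybara random seed: #{seed}"  # re-run with TEST_SEED=<seed> to reproduce a failure

# Pick any option from a hypothetical low-risk select box, but log the choice.
options = page.all('#contact_preference option').map(&:text)
choice = options.sample
puts "Selected contact preference: #{choice.inspect}"
select choice, from: 'contact_preference'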

Multiple steps workflow in cucumber

I'm developing an application in rails with cucumber.
The application includes a workflow that has multiple steps.
Some steps are:
A user imports files (3 different files),
another user makes some checks on the data that was imported,
another user inputs some parameters,
another user applies the parameters to the imported data,
etc.
The steps must be executed in the correct order, and it is necessary to run all the previous steps in order to execute each one; for example, to apply the parameters it is necessary to have the data imported and the parameters defined.
My problem is how to build cucumber scenarios/features in this situation.
I know that a scenario is not supposed to call the previous scenarios. But the only other idea I have is to create a very long scenario performing all these steps, and that doesn't make sense because it would be a scenario with more than two hundred steps.
Any thoughts on a pragmatic way of implementing cucumber in this kind of situation?
Many Tks
It sounds as if you have to perform everything every time.
Will every usage of your system include importing three files? Are there any cases where the user may only need to import two files? If the case is that there will always be three files imported, then you might abstract that step as
given the files are imported
Things that always have to be done may be combined into some generic setup. As the setup never changes, the details may not need to be mentioned explicitly.
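Behind the scenes, that single step can be backed by one step definition that performs the whole setup. A sketch, where ImportService and the file names are hypothetical stand-ins for the app's real import code:

Given(/^the files are imported$/) do
  %w[customers.csv orders.csv rates.csv].each do |name|
    ImportService.import(Rails.root.join('spec/fixtures', name))
  end
end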
My experience, though, is that at the beginning it is hard to separate scenarios, and it's tempting to do too much in a few scenarios with many steps. If you don't see any other way, start there. Then look at each scenario and see whether it can be split into two independent scenarios. The next step would be to see whether those two new scenarios can each be divided into two smaller, independent scenarios. It often turns out to be possible.
It is obviously always possible that Cucumber is not the tool you need, and that you would be better off with a unit test framework.

Can I reuse my integration test suite to profile a Rails app?

Most posts on Rails profiling recommend Ruby-Prof. To use Ruby-Prof I need to write at least one new test for each controller action, then manually compare the results to see what's taking the longest and might be a candidate for optimization.
This is good if I already know exactly what request I'm focusing on. It seems less good if I'm trying to identify the hot spots in the first place. Given that I already have a huge integration test suite covering all the app's functionality that I care about, it seems like what I really want to do is:
Run the entire test suite and capture the time spent in each controller action. (Or model method, or whatever level of granularity I want.)
Print two lists, of worst-case and average-case times in each controller action.
Sort each list and start investigating the longest-running controller actions, now using Ruby-Prof or other profiling tools to drill down into the call stack. The worst-case times will identify request params that might be problematic (i.e., trigger slow code on the backend), without my having to think of them all when I write the performance test.
Is there some reason people don't use the integration test suite in this way, rather than basically duplicating it with a second performance test suite? I have not seen it suggested. Before I write code to do something like this (presumably with a before_action in ApplicationController), is there already a tool for it?
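Here is a minimal sketch of what I have in mind, using Rails' built-in instrumentation rather than a before_action; dropped into a test helper, it would report per-action timings once the suite finishes:

require 'active_support/notifications'

timings = Hash.new { |hash, key| hash[key] = [] }

ActiveSupport::Notifications.subscribe('process_action.action_controller') do |_name, start, finish, _id, payload|
  action = "#{payload[:controller]}##{payload[:action]}"
  timings[action] << (finish - start) * 1000.0  # milliseconds
end

at_exit do
  timings.sort_by { |_action, times| -times.max }.each do |action, times|
    puts format('%-45s worst %8.1f ms  avg %8.1f ms', action, times.max, times.sum / times.size)
  end
end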
I think that the automated tests will not tell you anything about performance. You need real data: for example, your tests probably won't exercise indexes, but with 10,000 records and no index you may find a performance issue that never shows up against a tiny test database.
I need to write at least one new test for each controller action
Why would you performance test each controller action?
In my limited experience, performance testing was done after deploying the app and tested very specific things. I tested a chunk of code that was slow or code that I thought might be slow.
Also, if you use online performance tools it is not necessary to change your code. The online tool runs against an instance of the app that has been deployed.

Find unused code in a Rails app

How do I find what code is and isn't being run in production?
The app is well-tested, but there are a lot of tests exercising unused code, so that code still shows up as covered when I run the test suite. I'd like to refactor and clean up this mess; it keeps wasting my time.
I have a lot of background jobs, which is why I'd like the production env to guide me. Running on Heroku, I can spin up extra dynos to compensate for any performance impact from the profiler.
The related question How can I find unused methods in a Ruby app? was not helpful.
Bonus: metrics to show how often a line of code is run. Don't know why I want it, but I do! :)
Under normal circumstances the approach would be to use your test data for code coverage, but since you say parts of your code are tested yet not used on the production app, you could do something slightly different.
Just for clarity first: Don't trust automatic tools. They will only show you results for things you actively test, nothing more.
With the disclaimer behind us, I propose you use a code coverage tool (like rcov, or simplecov for Ruby 1.9) on your production app and measure the code paths that are actually used by your users. While these tools were originally designed for measuring test coverage, nothing stops you from using them to measure production coverage.
Under the assumption that during the measured time-frame all relevant code paths are visited, you can remove the rest. Unfortunately, this assumption will most probably not fully hold, so you will still have to apply your knowledge of the app and its inner workings when removing parts. This is even more important when removing declarative parts (like model references), as those are often not directly run but only used to configure other parts of the system.
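As an illustration only, a hedged sketch of the idea with Ruby's built-in Coverage API; dumping at exit is a simplification, and a real production setup would sample and persist far more carefully. The per-line counts it returns also happen to answer the bonus question:

require 'coverage'
Coverage.start  # must run before the application code is loaded

at_exit do
  Coverage.result.each do |file, counts|
    next unless file.start_with?(Rails.root.to_s)
    hits = counts.compact.sum  # nil entries are non-executable lines
    puts "#{file}: #{hits} line executions" if hits.positive?
  end
end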
Another approach, which could be combined with the above, is to refactor your app into distinct features that you can turn on and off. Then turn off the features suspected to be unused and check whether anybody complains :)
And as a final note: you won't find a magic tool to do your full analysis. That's because no tool can know whether a certain piece of code is used by actual users or not. The only thing that tools can do is create (more or less) static reachability graphs, telling you if your code is somehow called from a certain point. With a dynamic language like Ruby even this is rather hard to achieve, as static analysis doesn't bring much insight in the face of meta-programming or dynamic calls that are heavily used in a rails context. So some tools actually run your code or try to get insight from test coverage. But there is definitely no magic spell.
So given the high internal (mostly hidden) complexity of a Rails application, you will not get around doing most of the analysis by hand. The best advice would probably be to try to modularize your app and turn off certain modules to test if they are not used. This can be supported by proper integration tests.
Check out the coverband gem; it does exactly what you are searching for.
Maybe you can try rails_best_practices to check for unused methods and classes.
Here it is on GitHub: https://github.com/railsbp/rails_best_practices .
Put gem 'rails_best_practices' in your Gemfile and then run rails_best_practices . to generate the configuration file.
I had the same problem, and after exploring some alternatives I realized that I have all the info available out of the box: log files. Our log format is as follows
Dec 18 03:10:41 ip-xx-xx-xx-xx appname-p[7776]: Processing by MyController#show as HTML
So I created a simple script to parse this info
zfgrep Processing production.log*.gz | awk '{print $8}' > ~/tmp/action   # field 8 is the Controller#action
sort ~/tmp/action | uniq -c | sort -g -r > ~/tmp/histogram               # count occurrences, rank descending
This produced counts of how often a given controller#action was accessed.
4394886 MyController#index
3237203 MyController#show
1644765 MyController#edit
The next step is to compare it to the list of all controller#action pairs in the app (using rake routes output, or the same kind of script against the test suite).
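That comparison can be scripted as well; a rough sketch, run with rails runner, using the histogram file produced above:

routed = Rails.application.routes.routes.map { |route|
  controller, action = route.defaults.values_at(:controller, :action)
  "#{controller.camelize}Controller##{action}" if controller && action
}.compact.uniq

used = File.readlines(File.expand_path('~/tmp/histogram')).map { |line| line.split.last }
puts(routed - used)  # routed actions that never show up in the logs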
You already got the idea of marking suspicious methods as private (which may break your application).
A small variation I used in the past: add a small piece of code to every suspicious method to log its use. In my case it was a user popup: "You called an obsolete function - if you really need it, please contact IT".
After one year we had a good overview of what was really used (it was a business application, and there were functions needed only once a year).
In your case you should only log the usage. Everything that is not logged after a reasonable period is unused.
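A sketch of what that logging looks like in Ruby; the method names are hypothetical:

def log_obsolete_call(name)
  Rails.logger.info("OBSOLETE? #{name} called at #{Time.now}")
end

def yearly_report
  log_obsolete_call(__method__)  # drop this line once usage is confirmed
  # ... original body ...
end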
I'm not very familiar with Ruby and RoR, but I'd suggest a crazy guess:
add an after_filter which logs the name of the called method (grab it from the call stack) to a file,
deploy this to production,
wait for a while,
then remove all methods that are not in the log.
P.S. The solution with Alt+F7 in NetBeans or RubyMine is probably much better :)
Metaprogramming
Object#method_missing
Override Object#method_missing. Inside, log the receiving class and method, asynchronously, to a data store. Then manually call the original method with the proper arguments, based on the arguments passed to method_missing.
Object tree
Then compare the data in the data store to the contents of the application's object tree.
disclaimer: This will surely require significant performance and resource consideration. Also, it will take a little tinkering to get that to work, but theoretically it should work. I'll leave it as an exercise to the original poster to implement it. ;)
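For the flavor of it, a rough sketch with hypothetical names: since method_missing only fires for methods that don't exist, alias the suspicious method away so calls fall through, log, then forward to the preserved original.

class ReportBuilder
  alias_method :__orig_yearly_report, :yearly_report
  remove_method :yearly_report

  def method_missing(name, *args, &block)
    return super unless name == :yearly_report
    # In real use, push to a queue or data store asynchronously.
    Rails.logger.info("called #{self.class}##{name}")
    __orig_yearly_report(*args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    name == :yearly_report || super
  end
end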
Have you tried creating a test suite using something like Sahi? You could then record all your user journeys with it and tie those tests to rcov or something similar.
You do have to ensure you cover all user journeys, but after that you can look at what rcov spits out and at least start to prune out stuff that is obviously never covered.
This isn't a very proactive approach, but I've often used results gathered from New Relic to see if something I suspected of being unused had been called in production any time in the past month or so. The apps I've used this on have been pretty small, though, and it's decently expensive for larger applications.
I've never used it myself, but this post about the laser gem seems to talk about solving your exact problem.
Mark suspicious methods as private. If that does not break the code, check whether the methods are used inside the class. Then you can delete things.
It is not a perfect solution, but in NetBeans, for example, you can find usages of a method by right-clicking on it (or pressing Alt+F7).
So if a method is unused, you will see it.

TFS check-in rules

Hi, I'm in the middle of setting up a TFS server and I want to set some check-in rules.
For example, I want to be able to set rules about method length, complexity, and so on. I found NDepend very convenient; can I somehow use NDepend to run rules on the files being checked in?
I also want to be able to bypass the rules sometimes.
Are there any blogs or discussions around this? If it won't work with NDepend, are there any other tools or ways I can use?
I would be very careful about this. I worked at a place once that had strict method length rules. If Calculate(a,b,c) ended up 1.5 times the limit length, the devs would just move the last third of the function into Calculate2() and call it from Calculate(). All the active locals would become parameters, of course; sometimes there would be a dozen of them. The resulting mess passed the automated checks for method length but was definitely not better or more maintainable than the long method would have been.
Would it have been nice if the devs had spotted something refactorable in the middle of the method, pulled it out, and given it a good name? Yes, it would. But systems are all game-able, and the sorts of "dammit, I just want to check in and go home" changes made to comply with method length rules (among others) make the code worse. A lot worse.
Also, to bypass the rules: there is a way at check-in to say that you are bypassing and why.
