How can we set a default timeout for each test scenario in Cucumber (Java 8)?

I am planning to add a default timeout to all of my test scenarios (each of which has multiple Given, When, Then statements); I am not planning to add timeouts to individual Given, When, Then functions. More specifically, I am looking for a way to add a global timeout.
All of my test scenarios run in parallel (multiple threads).
I am using Java 8 and Appium.
Any leads would be much appreciated.
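To make it concrete, the kind of thing I'm imagining is sketched below. This is only a sketch: the hook annotations are from the older cucumber.api.java packages (they vary by Cucumber version), the 300-second limit is arbitrary, and interrupting the thread only helps if the steps block in interruptible calls.
import cucumber.api.java.After;
import cucumber.api.java.Before;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Global watchdog: fails a scenario by interrupting its thread once the
// timeout elapses. The ThreadLocal keeps one pending interrupt per scenario
// thread, so this stays safe when scenarios run in parallel.
public class TimeoutHooks {

    private static final long TIMEOUT_SECONDS = 300; // arbitrary global limit

    private static final ScheduledExecutorService WATCHDOG =
            Executors.newScheduledThreadPool(2);

    private static final ThreadLocal<ScheduledFuture<?>> PENDING =
            new ThreadLocal<>();

    @Before
    public void armWatchdog() {
        final Thread scenarioThread = Thread.currentThread();
        PENDING.set(WATCHDOG.schedule(
                scenarioThread::interrupt, TIMEOUT_SECONDS, TimeUnit.SECONDS));
    }

    @After
    public void disarmWatchdog() {
        ScheduledFuture<?> pending = PENDING.get();
        if (pending != null) {
            pending.cancel(false); // scenario finished in time
            PENDING.remove();
        }
    }
}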

Related

Jenkins - How to handle concurrent jobs that use a limited resource pool?

I'm trying to improve some of the testing procedures at work, and since I'm not an expert on Jenkins I was hoping you could point me in the right direction.
Our current situation is like this: we have a huge suite of E2E tests that run regularly. These tests rely on a limited pool of resources (AWS VMs used to run the tests). We have 2 test suites: a full-blown regression that consumes, at its peak, ~80% of those resources, and a much more lightweight smoke run that uses just 15% or so.
Right now I'm using the Lockable Resources plugin. When the Test Run step comes, it checks whether you are running a regression, and if you are it requests the single lock. If the lock is available, all good; if not, it waits until it becomes available before continuing. This lets me make sure there is never more than 1 regression running at a time, but it has a lot of gaps: a regression could be running while several smoke runs are triggered, which would exhaust the resource pool.
In a best-case scenario, what I would like to accomplish is some sort of conditional rules that decide whether the test execution step can go forward, based on something like this:
Only 1 regression can be running at the same time.
If a regression is running, allow only 1 smoke run to run in parallel.
If no regression is running, allow up to 5 or 6 smoke tests.
If 2 or more smoke tests are running, do not allow a regression to launch.
Would something like that be possible from a Jenkins pipeline? In this case I'm using the declarative pipeline with a bunch of helper Groovy code I've put together over time. My first idea is to see if there's a way to check whether a lockable resource is available (without actually requesting it yet) and then go through a bunch of if/then/else to set up the logic. But again, I'm not sure if there's a way to check a lockable resource's state or how many of a kind have already been requested.
Honestly, something this complex is probably outside of what Jenkins is supposed to handle, but I'm not sure, and I figured asking here would be a good start.
Thanks!
Create a declarative pipeline with steps that build individual jobs. Don't allow people to run the jobs ad-hoc, or when changes are pushed to the repository, and force a build schedule.
How can this solve your issue:
Only 1 regression can be running at the same time.
Put all these jobs in sequence in a declarative pipeline (see the sketch after this list).
If a regression is running allow only 1 smoke run to be run in parallel.
Put the smoke tests that are related to a regression just after that regression build, running them in parallel with each other, prior to the next regression build.
If no regression is running then allow up to 5 or 6 smoke tests.
See previous
If 2 or more smoke tests are running do not allow a regression to launch.
It will never happen if you run things in sequence.
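A rough declarative-pipeline sketch of that shape (stage and job names are placeholders):
pipeline {
    agent any
    stages {
        stage('Regression A') {
            steps { build job: 'regression-a' }
        }
        stage('Smoke runs for A') {
            // these run side by side, but never alongside a regression
            parallel {
                stage('Smoke 1') { steps { build job: 'smoke-1' } }
                stage('Smoke 2') { steps { build job: 'smoke-2' } }
            }
        }
        stage('Regression B') {
            steps { build job: 'regression-b' }
        }
    }
}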
You can create the pipeline manually, or use the coolness of Blue Ocean to get a graphical interface for putting the steps in sequence or in parallel:
https://jenkins.io/doc/tutorials/create-a-pipeline-in-blue-ocean/
The downside is that if one of those jobs fails, it will stop the build, but that is not necessarily a bad thing if the jobs are highly correlated.
Completely forgot to update this, but after reading and experimenting a bit more with the Lockable Resources plugin I found out you can have several resources under the same label and request a set quantity whenever a specific job starts.
I defined 5 resources and set the Jenkinsfile to check whether you are running the test suite with the regression parameter or not. If you are running a full regression it will try to request 4 locks, while a smoke test will only try to request 1. This way, when there aren't enough locks available, the job will wait until either enough of them become available or the timeout expires.
Here's a snippet from my Jenkinsfile:
stage('Test') {
  steps {
    // a regression asks for 4 of the 5 'server-farm-tokens', a smoke run for 1
    lock(resource: null, label: 'server-farm-tokens', quantity: getQuantityBySuiteType()) {
      <<whatever I do to run my tests here>>
    }
  }
}
resource has to be null due to a bug in Jenkins' declarative pipeline. If you're using the scripted one you can ignore that parameter.
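The quantity helper isn't shown above; a plausible definition, assuming the job declares a boolean regression parameter, would be:
// hypothetical helper: a full regression grabs 4 of the 5 tokens, a smoke run grabs 1
def getQuantityBySuiteType() {
    return params.regression ? 4 : 1
}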

How can I coordinate integration tests in a multi-container (Docker) system?

I have inherited a system that consists of a couple of daemons that asynchronously process messages. I am trying to find a clean way to introduce integration testing into this system with minimal impact/risk to the existing programs. Here is a very simplified overview of their responsibilities:
Process 1 polls a queue for messages, and inserts a row into a DB for each one it dequeues.
Process 2 polls the DB for rows inserted by Process 1, does some calculations, and then deposits a file into a directory on the host and sends an email.
These processes are quite old and complex, and I am strongly inclined to avoid modifying them in any way. What I would like to do is put each of them in a container, and also stand up the dependencies (queue, DB, mail server) in other containers. This part is straightforward, but what I'm unsure about is the best way to orchestrate these tests. Since these processes consume and generate output asynchronously I will need to poll or wait for the expected outcome (mail sent, file created).
Normally I would just write a series of tests in a single test suite in my language of choice (Java, Go, etc.), and make the setUp / tearDown hooks responsible for resetting the environment to the desired state. But because these processes have a lot of internal state, I am afraid I cannot successfully "clean up" properly after each distinct test. This would be a problem if, for example, one test failed to generate the desired output within a specific period of time, so I marked it as failed, but a subsequent test was falsely marked as passed because the original test case actually did output something (albeit much more slowly than anticipated) that was mistakenly attributed to the subsequent test. For these reasons I feel I need to recreate the world between each test.
In order to do this the only options I can see are:
Use a shell script to actually run my tests -- having it bring up the containers, execute a single test file, and then terminate my containers for each test.
Follow my usual pattern of setUp / tearDown in my existing test framework but call out to docker to terminate and start up the containers between each test.
Am I missing another option? Is there some kind of existing framework or pattern used for this sort of testing?
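For what it's worth, the second option can stay fairly small. A sketch assuming JUnit 4 and a hypothetical docker-compose.test.yml that describes the queue, DB, mail server and both daemons (the polling helper is left out for brevity):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Recreates the whole containerised world around every test, so that no
// internal daemon state (or a late-arriving output) can leak between cases.
public class DaemonPipelineIT {

    // Runs `docker-compose -f docker-compose.test.yml <args>` and fails on
    // a hang or a non-zero exit code.
    private static void compose(String... args) throws Exception {
        List<String> cmd = new ArrayList<>(Arrays.asList(
                "docker-compose", "-f", "docker-compose.test.yml"));
        cmd.addAll(Arrays.asList(args));
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (!p.waitFor(5, TimeUnit.MINUTES)) {
            p.destroyForcibly();
            throw new IllegalStateException("docker-compose timed out: " + cmd);
        }
        if (p.exitValue() != 0) {
            throw new IllegalStateException("docker-compose failed: " + cmd);
        }
    }

    @Before
    public void createWorld() throws Exception {
        compose("up", "-d");   // fresh queue, DB, mail server and daemons
    }

    @After
    public void destroyWorld() throws Exception {
        compose("down", "-v"); // drop the containers AND their volumes
    }

    @Test
    public void queuedMessageEventuallyProducesFileAndMail() throws Exception {
        // enqueue a message, then poll (with a deadline) for the file in the
        // shared directory and for the message on the test mail server
    }
}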

Automate testing of process flows in Rails

I am building an educational service, and it is a rather process-heavy application. I have a lot of user actions triggering various other actions, some immediate, some in the future.
For example, when a student completes a lesson for his day, the following should happen:
updating the progress count for his user-module record
checking if he has completed a particular module and progressing him to the next one (which in turn triggers more actions)
triggering immediate emails to other users
triggering FUTURE emails to himself (ongoing lesson plans)
creating a range of other objects (grading to-dos for teachers)
any other special case events
All these triggers are built into the observers of various objects, and the execution delayed using Sidekiq.
What is killing me is the testing, and the paranoia that I might break something whenever I push. In the past I did a lot of assertion and validation checks, and they were sufficient. For this project, I think this is not enough, given the elevated complexity.
As such, I would like to implement a testing framework, but after reading through the various options (RSpec, Cucumber), it is not immediately clear which I should invest my effort in, given my rather specific needs, especially around the observers and scheduled events.
Any advice and tips on what approach and framework would be the most appropriate? Would probably save my ass in the very near future ;)
Not that it matters, but I am using Rails 3.2 / Mongoid. Happy to upgrade if it works.
Testing can be a very subjective topic, with different approaches depending on the problems at hand.
I would say that given your primary need for testing end-to-end processes (often referred to as acceptance testing), you should definitely check out something like Cucumber or Steak. Both allow you to drive a headless browser and run through your processes. This kind of testing will catch any big show-stoppers and allow you to modify the system and be notified of breaks caused by your changes.
Unit testing, although very important and something that should always be used alongside acceptance tests, isn't for end-to-end testing. It's primarily for testing the output of specific methods in isolation.
A common pattern to use is called Test-Driven Development (TDD). In this, you write your acceptance tests first, in the "outer" test loop, and then code your app with unit tests as part of the "inner" test loop. The idea is that when you've finished the code in the inner loop, the outer loop should also pass, and you should have built up enough test coverage to be confident that any future change to the code will pass or fail the tests depending on whether the original requirements are still met.
And lastly, a test suite is something that should grow and change as your app does. You may find that whole sections of your test suite can (and maybe should) be rewritten depending on how the requirements of the system change.
Unit testing is a must. You can use RSpec or Test::Unit for that. It will give you at least 80% confidence.
Enable "render views" for controller specs. You will catch syntax errors and simple logical errors faster that way.There are ways to test sidekiq jobs. Have a look at this.
Once you are confident that you have enough unit tests, you can start looking into using cucumber/capybara or rspec/capybara for feature testing.
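To make both suggestions concrete, here is a sketch assuming RSpec plus Sidekiq's bundled test helpers; LessonCompleter and ProgressMailerWorker are made-up names:
# spec/spec_helper.rb
require 'sidekiq/testing'
Sidekiq::Testing.fake!        # enqueued jobs go to an in-memory queue

RSpec.configure do |config|
  config.render_views         # render templates in controller specs
end

# a spec for an observer-triggered job
describe 'completing a lesson' do
  it 'schedules the follow-up email' do
    # student and lesson built elsewhere (factories/fixtures)
    expect {
      LessonCompleter.new(student, lesson).call
    }.to change(ProgressMailerWorker.jobs, :size).by(1)

    ProgressMailerWorker.drain  # run the queued jobs inline
  end
end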

multiple instances of cucumber for 3000 scenarios

I have a test pack consisting of more than 3000 scenarios. The problem is that when I run the scenarios in one shot it takes approximately 10 hours to complete. I want to divide the scenarios into 4 blocks of approximately 750 scenarios each and run them in parallel in different windows/terminals (VMware). Is there a workaround?
This question has a selenium tag, so (assuming that's accurate) Selenium-Grid would be an option for setting up a distributed parallel testing environment.
orde mentions Selenium Grid, and that's one piece of the puzzle. The main benefit you get out of it is that you only have to identify your server hub in your code when creating a new Selenium instance.
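In code that looks roughly like this (the hub URL is a placeholder, and DesiredCapabilities is the Selenium 2/3-era API):
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridDriverFactory {
    // every parallel runner points at the same hub; the Grid hands each
    // session to whichever node has a free browser slot
    public static WebDriver create() throws Exception {
        return new RemoteWebDriver(
                new URL("http://your-hub-host:4444/wd/hub"),
                DesiredCapabilities.firefox());
    }
}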
The next thing would be to actually execute your 4 blocks of 750 scenarios at once. I'd recommend using a CI tool like Jenkins to accomplish that, and you'll have all your results together in Jenkins' web GUI.

optimize capybara times

I have a test suite for acceptance tests in my Rails app that uses pure Capybara (no Cucumber).
It has 220 examples and takes 21 minutes to finish. My non-JS driver is rack_test and my js_driver is capybara-webkit instead of Selenium.
I would like to improve the test times, but I have no idea whether there is a common bottleneck in this kind of testing.
Some ideas I have/had:
Change the Capybara server. It was using Mongrel as a fallback; the default is Thin. I installed Thin but I didn't get any speed improvement. It seems like Thin's advantage is concurrency, and tests don't have it.
Since I am cleaning the database between tests, before each example that touches a private part of my app (MOST of the examples are like this) I need to log in. That means logging into the app 200 times. Is there a way to maintain the session between examples in order to avoid logging in again and again?
There are two things that come to mind:
parallel_tests can improve your test speed if you run on multiple cores: https://github.com/grosser/parallel_tests
providing a backdoor login route for your test login can improve test speed by bypassing the login step (see the sketch after this list)
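For the login backdoor: if the app authenticates through Warden (as Devise apps do), no custom route is needed, because Warden ships test helpers that write the user straight into the rack_test session. A sketch:
# spec/spec_helper.rb
require 'warden'
Warden.test_mode!

RSpec.configure do |config|
  config.include Warden::Test::Helpers
  config.after { Warden.test_reset! }
end

# in an example, instead of filling in the login form 200 times:
#   login_as some_user          # e.g. a factory-built user
#   visit private_dashboard_path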
In general, acceptance tests are slow. That's why I use them only for testing critical user workflows, and I try to keep my whole test suite within a 5-minute range. I really think it's critical for your application's test suite to be fast. That's why I try to put a lot of logic outside of Rails, so that a test run completes within a second or less.
