How to know which tests were run together in parallel? - jenkins

My question in general is this: when I run automated tests in parallel, how can I know which of them ran at the same time?
My issue is this: I'm running tests in parallel (four at a time), but I have no idea which of them are executing together (at the same time) and which are waiting.
After my tests are executed there are some failures, and it could be that the cause of a failure is another test that was executing at the same time as the failed test. So I want to know which tests were executing together, so I can run them together again and debug the failed test.
Technologies I use: NUnit 3.0, C#, Selenium WebDriver, Jenkins
I'm glad to hear any possible solution (it doesn't matter if it's hard to apply).

For debug purposes, you can run the NUnit console with the --trace=Verbose option.
This will write a log to the directory you are executing in, which will look a little like this:
14:56:13.972 Info [13] TestWorker: Worker#4 executing TestsSuite1
14:56:13.973 Debug [13] WorkItemDispatcher: Directly executing TestA
14:56:13.976 Debug [12] WorkItemDispatcher: Directly executing TestB
14:56:13.976 Debug [16] WorkItemDispatcher: Directly executing TestC
14:56:13.980 Debug [12] WorkItemDispatcher: Directly executing TestD
14:56:13.982 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,False,0)
14:56:13.989 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,True,0)
I believe the numbers in the square brackets identify different threads, so here you can see that workers 12 and 16 were running TestC and TestD simultaneously.
Of course, it's never as simple as just seeing which tests were running concurrently, but this may help narrow issues down.
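For reference, the trace option is passed to the console runner on the command line. Assuming the nunit3-console runner and a test assembly named MyTests.dll (a made-up name), the invocation might look something like:

nunit3-console.exe MyTests.dll --trace=Verbose --workers=4

Since the log is written to the working directory, on Jenkins you may want to archive it as a build artifact so it survives the run.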

The simplest approach is to disable parallelism selectively until you find the combination that causes the problem.
Disable all parallelism by running with --workers=0. If the same tests fail, then you don't have a parallelism problem.
Selectively rerun the failed tests together, using the --test or --where option to select them. Edit the fixtures to enable/disable parallelism until you hit the combination that does the job.
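For example, parallelism in NUnit 3 is driven by attributes on the fixtures, so toggling it while debugging looks roughly like this (a minimal sketch; the fixture and test names are made up):

using NUnit.Framework;

// Allow this fixture to run in parallel with other fixtures.
[TestFixture]
[Parallelizable(ParallelScope.Self)]
public class CheckoutTests
{
    [Test]
    public void AddsItemToCart()
    {
        // ... test body ...
    }
}

// Force this fixture to run on its own while debugging
// (newer NUnit versions also offer [NonParallelizable] as a shorthand).
[TestFixture]
[Parallelizable(ParallelScope.None)]
public class PaymentTests
{
    [Test]
    public void ChargesCard()
    {
        // ... test body ...
    }
}

The suspect fixtures can then be rerun together with a filter, e.g. nunit3-console MyTests.dll --where "class =~ 'CheckoutTests' || class =~ 'PaymentTests'" (again, the names are placeholders).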

Related

cargo tests running in parallel even though --jobs 1 is specified

I'm running tests from the command line via:
cargo test --workspace --tests --jobs 1
I'm testing a server service, where each test starts and stops the server and interacts with it. So running in parallel will not work: I observe multiple tests trying to start the server at the same time. I've resorted to guarding against this, and I can see multiple tests attempting to start the server, with the guard eventually letting one proceed once the owning test stops the server. My understanding is that --jobs 1 prevents tests from running in parallel, which is exactly what I want. However, it seems not to be working. Is this a known issue? Have I done something wrong? Or did I misunderstand the usage of --jobs n?
I can provide more details, that lead me to this conclusion, if needed.
I found that in addition to --jobs 1 I needed to add -- --test-threads=1.
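For context: --jobs controls how many parallel jobs Cargo itself uses (mostly compilation), while each compiled test binary spawns its own threads, which are limited by --test-threads; that option belongs to the test harness, which is why it goes after the -- separator. The combined invocation would be:

cargo test --workspace --tests --jobs 1 -- --test-threads=1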

How to run a build in Travis when the build is in an infinite loop

I currently have a build of an application that is set to run indefinitely. It is designed to run on a Raspberry Pi as a service, so it will be running continuously.
Whenever I test it on Travis-CI, the infinite loop causes the job to error even though the project builds correctly, because the script never terminates. Is there any way to stop this error, or do I have to remove the run step from the .travis.yml?
language: cpp
compiler:
  - clang
  - g++
script:
  - make
  - cd main
  - ./jsonWeatherPrediction
I would expect it to error; I'm just not sure of a way to stop it without removing - ./jsonWeatherPrediction.
I don't know if this will help, but the build is located at https://travis-ci.org/DMoore12/json-weather-prediction
Thanks in advance :)
In most any reasonable CI workflow, the job should have a well-defined start and finish. The software you are testing may run forever, but your tests should not. So, first, I suggest rethinking how you run your build.
Looking at a build such as https://travis-ci.org/DMoore12/json-weather-prediction/jobs/474719832, I see that you are simply running your command (which raises a different question: the command prints the same output forever in a tight loop - is this the desired behavior?).
For testing, you need a different kind of behavior, one that can be tested (e.g., take input from STDIN or a command-line flag, print, and terminate).
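If you only want the job to prove that the binary starts and keeps running, one workaround (not a replacement for real tests) is to bound the run with GNU timeout and accept its timeout exit status, 124. A hypothetical script section:

script:
  - make
  - cd main
  # Kill the process after 30 seconds; exit code 124 means it timed out, which we treat as success here.
  - timeout 30 ./jsonWeatherPrediction || [ $? -eq 124 ]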

How to run non-parallel in more than one process?

I am using parallel_tests gem and more specifically parallel_rspec. I have 2 sets of tests that can't run in parallel as they interfere with the state of some other tests.
Currently I am doing
parallel_rspec spec --single 'spec/set_A'
I now need to also run set_B serially, but how do I ensure that it runs in its own process and not in set_A's process above?
I have tried parallel_rspec spec --single 'spec/set_A|set_B', but it runs both sets in a single process, which makes that process run for a really long time. Passing two separate --single flags also doesn't seem to achieve that.
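I don't know of a parallel_tests flag that does exactly this, but one workaround is to take both sets out of the parallel run and give each its own plain rspec process, run one after the other. A rough sketch, assuming the two directories can be excluded from the parallel run (for example via an --exclude-pattern option, if your parallel_tests version supports it):

# Parallel run for everything except the two serial sets
# (assumes --exclude-pattern is available in your parallel_tests version).
parallel_rspec spec --exclude-pattern 'spec/set_A|spec/set_B'
# Each serial set then gets its own single process, run one after the other.
bundle exec rspec spec/set_A
bundle exec rspec spec/set_B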

Jenkins + Ant and parallel scp/sshexec

I have a Jenkins build that uses Ant to do the heavy lifting.
First it fetches the code, tars it, scps it over, runs sshexec to extract it, and runs sshexec again to install it.
There are 2 production servers right now, so I used <for> from ant-contrib to run scp/sshexec in parallel. The for param is used to set a property which is then used in scp/sshexec, to avoid issues with @{} vs ${} notation.
However, that's not working as expected.
I either get:
connection reset
ssh-agent not present (from the production servers' sshd logs)
Windows sockets not found
scp writing the server it's connecting to twice (but that transfer succeeds)
The build always fails at the second scp/sshexec, which is strange, since the second connection should go to a different server.
Questions:
What am I doing wrong?
Or alternatively, how should I write that Ant script differently while still achieving parallelism?
This is the root cause:
The for param is used to set a property which is then used in scp/sshexec - to avoid issues with @{} vs ${} notation.
Ant properties are IMMUTABLE, so if a property is set to X in the first iteration, it stays X for all iterations of that loop!
So I either had to stick to serial execution, unsetting each property at the end of the <sequential> block, or use the @{} syntax and a parallel loop where possible. sshexec did accept the @{} syntax.
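A minimal sketch of the second approach, assuming ant-contrib's <for> task; the host names, username, key path, and command are all made up:

<!-- Run the install step against both servers in parallel. -->
<for param="host" list="prod1.example.com,prod2.example.com" parallel="true">
  <sequential>
    <!-- @{host} is expanded per iteration; no mutable ${property} is involved. -->
    <sshexec host="@{host}"
             username="deploy"
             keyfile="${user.home}/.ssh/id_rsa"
             trust="true"
             command="tar -xzf /tmp/app.tar.gz -C /opt/app"/>
  </sequential>
</for>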

Travis failing when Cucumber task pending?

As the title says, my travis-ci instance is telling me that my build is failing when I have cucumber step definitions that are pending (not undefined) - is there a way to stop this behaviour?
I can understand the reason why, but my pending steps are pending on purpose - I have no intention of implementing that feature RIGHT NOW, but I want to keep the step definition written down so I don't forget what I came up with. The reason I ask is that this behaviour (failing when I have a pending step) could end up masking real failures that actually matter.
There is a similar answer here which should be useful for you; the answer given is:
I think cucumber gives a non-zero process exit code either because of skipped or because of pending tests. Try to get it to not run any skipped, then any pending, then any skipped or pending tests and see what exit codes it gives. To see the exit code (in Unix), run it with something like:
cucumber ...args to select tests... ; echo $?
So basically you want to figure out what args you can provide to cucumber to have it pass, and then provide a .travis.yml that runs that command. An example rake task for cucumber, jasmine, and rspec is here:
task :travis do
  ["rspec spec", "rake jasmine:ci", "rake cucumber"].each do |cmd|
    puts "Starting to run #{cmd}..."
    system("export DISPLAY=:99.0 && bundle exec #{cmd}")
    raise "#{cmd} failed!" unless $?.exitstatus == 0
  end
end
And read the docs for more info on creating a .travis.yml.
If all of that doesn't work, just create a bash script that catches the output and returns 0 in the right cases. I really doubt this is required, though, since I'm sure there's an option to cucumber to make it ignore pending steps.
EDIT: the cucumber docs say:
Pending steps
When a Step Definition's Proc invokes the #pending method, the step is marked as yellow (as with undefined ones), reminding you that you have work to do. If you use --strict this will cause Cucumber to exit with 1.
Which seems to imply that Cucumber will exit with 0 for pending steps as long as you don't pass --strict.
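So the quickest check is probably to run the suite locally without --strict and confirm the exit status yourself, e.g. (a minimal sketch):

bundle exec cucumber
echo $?   # expect 0 when the only non-passing steps are pending and --strict is not set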
