cargo tests running in parallel even though --jobs 1 is specified - rust-cargo

I'm running tests from the command line via:
cargo test --workspace --tests --jobs 1
I'm testing a server service, where each test starts and stops the server and interacts with it. So running the tests in parallel will not work: I observe multiple tests trying to start the server at the same time. I've resorted to guarding against this; when multiple tests attempt to start the server, the guard eventually lets one through once the owning test stops the server. My understanding was that --jobs 1 prevents tests from running in parallel, which is exactly what I want. However, it seems not to be working. Is this a known issue? Have I done something wrong? Or did I misunderstand the usage of --jobs n?
I can provide more details, that lead me to this conclusion, if needed.

I found that, in addition to --jobs 1, I needed to add -- --test-threads=1.
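The distinction is that `--jobs` limits how many parallel *compilation* jobs Cargo runs, while the test harness inside each compiled test binary still uses multiple threads by default. `--test-threads=1` (or the `RUST_TEST_THREADS` environment variable) is what actually serializes test execution:

```shell
# --jobs 1          : one parallel *build* job (affects compilation only)
# -- --test-threads=1 : one thread inside the test harness (affects execution)
cargo test --workspace --tests --jobs 1 -- --test-threads=1

# Equivalent, via environment variable understood by the test harness:
RUST_TEST_THREADS=1 cargo test --workspace --tests
```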

Related

How to run a build in Travis when the build is in an infinite loop

I currently have a build of an application that is designed to run indefinitely. It runs on a Raspberry Pi as a service, so it will be running continuously.
Whenever I try to test it on Travis-CI, the infinite loop raises an error even though the project builds correctly, because the job never terminates. Is there any way to stop this error, or do I have to remove the ability to run the build from the .travis.yml?
language: cpp
compiler:
- clang
- g++
script:
- make
- cd main
- ./jsonWeatherPrediction
I would expect it to error; I'm just not sure of a way to stop it without removing - ./jsonWeatherPrediction
I don't know if this will help, but the build is located at https://travis-ci.org/DMoore12/json-weather-prediction
Thanks in advance :)
In almost any reasonable CI workflow, the job should have a well-defined start and finish. The software you are testing may run forever, but your tests should not. So, first, I suggest re-thinking how you run your build.
Looking at a build such as https://travis-ci.org/DMoore12/json-weather-prediction/jobs/474719832, I see that you are simply running your command (which raises a different question: the command prints the same output forever in a tight loop. Is this the desired behavior?).
For testing, you need a different kind of behavior: one that can be tested (e.g., take input from STDIN or a command-line flag, print a result, and terminate).

How to run non-parallel in more than one process?

I am using parallel_tests gem and more specifically parallel_rspec. I have 2 sets of tests that can't run in parallel as they interfere with the state of some other tests.
Currently I am doing
parallel_rspec spec --single 'spec/set_A'
I now have the need to also run set_B non-parallel, but how do I ensure that it runs in its own process and not in set_A's process above?
I have tried parallel_rspec spec --single 'spec/set_A|set_B', but it runs both sets in a single process, which makes that process run for a really long time. Passing two separate --single flags also doesn't seem to achieve that.
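One untested direction, assuming set_A and set_B only conflict with tests outside themselves (the --pattern flag exists in parallel_tests, but I have not verified this exact combination): launch each serial set as its own parallel_rspec invocation so each gets its own process, and run the two invocations concurrently:

```shell
# Hypothetical sketch: each set runs serially within its own process,
# concurrently with the other; `wait` blocks until both finish.
parallel_rspec spec --single 'spec/set_A' --pattern 'spec/set_A' &
parallel_rspec spec --single 'spec/set_B' --pattern 'spec/set_B' &
wait
```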

How to know which tests were run together in parallel?

My question in general is this: when I run automated tests in parallel how can I know which of them were run at the same time?
My issue is this: I'm running tests in parallel (4 at a time), but I have no idea which of them are executing together (at the same time) and which are waiting.
After my tests are executed there are some failures, and it could be that the cause of a failure is another test that was executing at the same time as the failed test. So I want to know which tests were executing together, so I can run them together again to debug the failure.
Technologies I use: NUnit 3.0, C#, Selenium WebDriver, Jenkins
I'm glad to hear any possible solution (it doesn't matter if it's hard to apply).
For debug purposes, you can run the NUnit console with the --trace=Verbose option.
This will write a log to the directory you are executing in, which will look a little like this:
14:56:13.972 Info [13] TestWorker: Worker#4 executing TestsSuite1
14:56:13.973 Debug [13] WorkItemDispatcher: Directly executing TestA
14:56:13.976 Debug [12] WorkItemDispatcher: Directly executing TestB
14:56:13.976 Debug [16] WorkItemDispatcher: Directly executing TestC
14:56:13.980 Debug [12] WorkItemDispatcher: Directly executing TestD
14:56:13.982 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,False,0)
14:56:13.989 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,True,0)
I believe the numbers in the square brackets identify different threads, so here you can see that workers 12 and 16 were running TestD and TestC simultaneously.
Of course, it's never as simple as just seeing which tests were running concurrently, but this may help narrow issues down.
The simplest approach is to disable parallelism selectively until you find the combination that causes the problem.
Disable all parallelism by running with --workers=0. If the same tests fail, then you don't have a parallelism problem.
Selectively rerun the failed tests together, using the --test or --where option to select them. Edit the fixtures to enable/disable parallelism until you hit the combination that does the job.
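The console invocations described above might look like this (the assembly name and the --where filter are hypothetical; adjust them to your project):

```shell
nunit3-console MyTests.dll --trace=Verbose   # write a verbose log of worker/thread activity
nunit3-console MyTests.dll --workers=0       # disable parallelism entirely
nunit3-console MyTests.dll --where "class =~ /TestsSuite1/"   # rerun a suspect subset
```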

Development dependencies in Dockerfile or separate Dockerfiles for production and testing

I'm not sure if I should create different Dockerfile files for my Node.js app: one for production without the development dependencies, and one for testing with the development dependencies included.
Or one file, which is basically the development Dockerfile.dev. The main difference between the two files is the npm install command:
Production:
FROM ...
...
RUN npm install --quiet --production
...
CMD ...
Development/Test:
FROM ...
...
RUN npm install
...
CMD ...
The question arises because I want to be able to run my tests inside the container via the docker run command. Therefore I need the test dependencies (typically devDependencies for me).
It seems a little odd to put dependencies not needed in production into the image. On the other hand, creating/maintaining a second Dockerfile.dev with just minor differences also seems wrong. So what is a good practice for this kind of problem?
No, you don't need to have different Dockerfiles and in fact you should avoid that.
The goal of docker is to ship your app in an immutable, well tested artifact (docker images) which is identical for production and test and even dev.
Why? Because if you build different artifacts for test and production, how can you guarantee that what you have already tested works in production too? You can't, because they are two different things.
Given all that, if by test you mean unit tests, then you can mount your source code inside a docker container and run the tests without building any docker images, and that's fine. Remember, you can build an image for tests, but that is terribly slow and makes development quite difficult, which is not good at all. Then, if your tests pass, you can build your app container safely.
But if you mean acceptance tests that actually need to run against your running application, then you should create one image for your app (only one) and run the tests in another container (mounting the test source code, for example) against that container. This obviously means that what you build for your app is different from what you npm install for your tests.
I hope this gives you an overview.
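The mount-and-test approach for unit tests described above can be sketched as follows (the image tag and paths are assumptions; adjust to your project):

```shell
# Run unit tests in a throwaway container with the source bind-mounted,
# so devDependencies never need to enter the production image.
docker run --rm -v "$PWD":/app -w /app node:18 \
  sh -c "npm install && npm test"
```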
Well, then you'll have to support several Dockerfiles that are almost identical. Instead, I recommend using a Node.js feature like the production profile. And one more recommendation regarding
RUN npm install --quiet --production
It is better to create a separate .sh file and do something like this instead:
ADD ./scripts/run.sh /run.sh
RUN chmod +x /*.sh
Also, think about starting to use Gulp.
UPD #1
By default, npm install installs devDependencies. To get around this, use npm install --production OR set the NODE_ENV environment variable to the value production.
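The two equivalent ways of skipping devDependencies mentioned above, side by side:

```shell
npm install --production          # skips devDependencies via flag
NODE_ENV=production npm install   # same effect via environment variable
```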
Putting the script lines in a separate file is good practice, so you don't have to change the Dockerfile often. If you need changes next time, you only have to update the script file and you're done. It also leaves room for additional setup steps in the future.

Start god process on server startup (Ubuntu)

I'm currently struggling with executing a simple command which I know works when I run it manually when logged in as either root or non-root user:
god -c path/to/app/queue_worker.god
I'm trying to run this when the server starts (I'm running Ubuntu 12.04), and I've investigated adding it to /etc/rc.local just to see if it runs. I know I can add it to /etc/init.d and then use update-rc.d but as far as I understand it's basically the same thing.
My question is how to run this command after everything has booted up, as cleanly as possible and without any fuss.
I'm probably missing something in the lifecycle of how everything is initialized, but I gladly welcome some education! Are there alternative ways or places to put this command?
Thanks!
You could write a bash script to determine when Apache has started and then set it to run as a cron job at a set interval...
#!/bin/sh
# Start the god worker once Apache is running.
# (Note: the Apache process is typically named apache2 on Ubuntu.)
if [ -n "$(pidof apache2)" ]
then
    # process was found: start the worker
    god -c path/to/app/queue_worker.god
fi
Of course, then you'll have a useless cron job running all the time, and you'll have to somehow flip a switch once it has run so it doesn't run again. This should give you an idea to start from.
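As an alternative to cron polling: Ubuntu 12.04 ships with Upstart, so a minimal job file could start god at boot without any polling. This is a hedged sketch; the file path, job name, and runlevels are assumptions to adjust for your setup:

```
# /etc/init/queue-worker.conf (hypothetical Upstart job)
description "god queue worker"
start on runlevel [2345]
stop on runlevel [016]
exec god -c /path/to/app/queue_worker.god
```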
