Duplicates in Jest tests memory snapshot

While running Jest in watch mode and inspecting memory with Chrome DevTools using this command:
node --inspect-brk ./node_modules/jest/bin/jest.js --watch --runInBand --logHeapUsage
I see duplicated content in the memory snapshot.
The tests use a lot of memory, so this may be related. How is that possible, and how can I prevent it?
To clarify: the more tests are run, the more duplicate entries there are.
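Jest also ships an experimental flag that may help confirm whether whole test environments are being retained between files. A minimal sketch (not part of the original question; --detectLeaks is experimental and can report false positives):
# drop --watch for a single diagnostic pass
node --inspect-brk ./node_modules/jest/bin/jest.js --runInBand --logHeapUsage --detectLeaks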

Related

How to run a build in Travis when the build is in an infinite loop

I currently have a build of an application that is set to run indefinitely. It is designed to run on a Raspberry Pi as a service, so it runs continuously.
Whenever I try to test it on Travis-CI, the infinite-loop portion raises an error even though the project builds correctly, because the script never terminates. Is there any way to stop this error, or do I have to remove the ability to run the build from the .travis.yml?
language: cpp
compiler:
- clang
- g++
script:
- make
- cd main
- ./jsonWeatherPrediction
I would expect it to error; I'm just not sure of a way to stop it without removing - ./jsonWeatherPrediction
I don't know if this will help, but the build is located at https://travis-ci.org/DMoore12/json-weather-prediction
Thanks in advance :)
In any reasonable CI workflow, a job should have a well-defined start and finish. The software you are testing may run forever, but your tests should not. So, first, I suggest rethinking how you run your build.
Looking at a build such as https://travis-ci.org/DMoore12/json-weather-prediction/jobs/474719832, I see that you are simply running your command (which raises a different question: the command prints the same output forever in a tight loop. Is this the desired behavior?).
For testing, you need a different kind of behavior, one that can be tested (e.g., take input from STDIN or a command-line flag, print the result, and terminate).
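If you do want the CI job to exercise the long-running binary anyway, a minimal sketch (my suggestion, not part of the original answer) is to bound the run with the coreutils timeout command in the script step. timeout exits with status 124 when the time limit expires, which the snippet below treats as a pass for a program that is supposed to loop forever:
# in .travis.yml's script step, replacing the bare ./jsonWeatherPrediction
timeout 30s ./jsonWeatherPrediction
status=$?
# 124 = still running when the limit hit (as designed); 0 = clean exit
if [ "$status" -eq 124 ] || [ "$status" -eq 0 ]; then exit 0; else exit "$status"; fi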

Java process killed due to out of memory after running job in container for an hour

I have set up a job for running automation tests in CircleCI (https://hub.docker.com/r/jiteshsojitra/docker-headless-vnc-container). It works fine, but after running tests for an hour it hits the memory limit and the running Java/Ant job is suddenly killed. So is there any way to increase the container memory so tests can run for 5-6 hours in the container, or is that a paid feature?
I tried putting - JAVA_OPTS: -Xms512m -Xmx1024m in the YAML script, but the overall container memory usage still reaches ~4 GB, as far as I can tell.
References:
https://circleci.com/gh/jiteshsojitra/zm-selenium/231
https://circleci.com/api/v1.1/project/github/jiteshsojitra/zm-selenium/231/output/106/0?file=true
Log trail:
BUILD FAILED
/headless/zm-selenium/build.xml:348: Java returned: 137
Total time: 76 minutes 26 seconds
Exited with code 137
Hint: Exit code 137 typically means the process is killed because it was running out of memory
Hint: Check if you can optimize the memory usage in your app
Hint: Max memory usage of this container is 4286337024
according to /sys/fs/cgroup/memory/memory.max_usage_in_bytes
We had this problem. It is a limit in CircleCI (or the VM, really). The only solution is to make your app use less memory.
fiskeben is right. I think the container you mention is a fork of our consol/docker-headless-vnc-container image, so you can add the following lines to the startup script:
# set correct java startup
export _JAVA_OPTIONS="-Duser.home=$HOME -Xmx${JVM_HEAP_XMX}m"
# add Docker JVM flags; these can probably be removed with JDK 9
export _JAVA_OPTIONS="$_JAVA_OPTIONS -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
Now you can set the environment variable JVM_HEAP_XMX to the number of megabytes your JVM should use, e.g.
docker run -e JVM_HEAP_XMX=512 ...
If you want to determine the size dynamically, take a look at the jvm_options.sh script.
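To check that the cap actually took effect inside the container (my addition, not part of the original answer), you can ask the JVM to print its resolved flags:
# prints the heap ceiling the JVM actually resolved, in bytes
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize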

How to know which tests were run together in parallel?

My question in general is this: when I run automated tests in parallel, how can I know which of them ran at the same time?
My issue is this: I'm running tests in parallel (4 at a time), but I have no idea which of them are executing together (at the same time) and which are waiting.
After my tests are executed there are some failures, and it could be that the cause of a failure is another test that was executing at the same time as the failed test. So I want to know which tests were executing together, so I can run them together again to debug the failure.
Technologies I use: NUnit 3.0, C#, Selenium WebDriver, Jenkins
I'd be glad to hear any possible solution (it doesn't matter if it's hard to apply).
For debug purposes, you can run the NUnit console with the --trace=Verbose option.
This will write a log to the directory you are executing in, which will look a little like this:
14:56:13.972 Info [13] TestWorker: Worker#4 executing TestsSuite1
14:56:13.973 Debug [13] WorkItemDispatcher: Directly executing TestA
14:56:13.976 Debug [12] WorkItemDispatcher: Directly executing TestB
14:56:13.976 Debug [16] WorkItemDispatcher: Directly executing TestC
14:56:13.980 Debug [12] WorkItemDispatcher: Directly executing TestD
14:56:13.982 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,False,0)
14:56:13.989 Debug [16] WorkItemDispatcher: Directly executing TestE(False,False,True,0)
I believe the numbers in the square brackets identify different threads, so here you can see that workers 12 and 16 were running TestC and TestD almost simultaneously.
Of course, it's never as simple as just seeing which tests were running concurrently, but this may help narrow issues down.
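For reference, the invocation described above might look like this (a sketch; MyTests.dll is a placeholder assembly name):
nunit3-console MyTests.dll --trace=Verbose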
The simplest approach is to disable parallelism selectively until you find the combination that causes the problem.
First, disable all parallelism by running with --workers=0. If the same tests fail, then you don't have a parallelism problem.
Then selectively rerun the failed tests together, using the --test or --where option to select them, and edit the fixtures to enable or disable parallelism until you hit the combination that reproduces the failure (a sketch follows).
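A sketch of such a rerun (the test names are hypothetical; --test takes a comma-separated list of fully qualified names and --workers caps the number of parallel worker threads):
nunit3-console MyTests.dll --workers=4 --test=MyApp.Tests.TestC,MyApp.Tests.TestD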

/home/travis/build.sh: line 41: $pid Killed (exit code 137)

In the Apache Jackrabbit Oak Travis build we have a unit test that makes the build error out:
Running org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT
/home/travis/build.sh: line 41: 3342 Killed mvn verify -P${PROFILE} ${FIXTURES} ${SUREFIRE_SKIP}
The command "mvn verify -P${PROFILE} ${FIXTURES} ${SUREFIRE_SKIP}" exited with 137.
https://travis-ci.org/apache/jackrabbit-oak/jobs/44526993
The test code can be seen at
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/test/java/org/apache/jackrabbit/oak/plugins/segment/HeavyWriteIT.java
What's the actual explanation for the error code? How could we work around or solve the issue?
Error code 137 usually comes up when a script gets killed due to exhaustion of available system resources; in this case, it's very likely memory. (137 is 128 + 9, i.e. the process received SIGKILL, typically from the kernel's out-of-memory killer.) The infrastructure this build is running on has some limitations due to the underlying virtualization that can cause these errors.
I'd recommend trying out our new infrastructure, which has more resources available and should give you more stable builds: http://blog.travis-ci.com/2014-12-17-faster-builds-with-container-based-infrastructure/
Usually a Killed message means that you are out of memory. Check your limits with ulimit -a or available memory with free -m, then try to increase your stack size, e.g. with ulimit -s 82768 or even more.
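To get those numbers into the build log where you can actually inspect them, one option (my sketch, reusing the commands above) is to print them right before the failing step:
# diagnostic lines for the .travis.yml script step, before the real command
ulimit -a   # per-process resource limits in effect inside the build VM
free -m     # total/used/free memory in megabytes
mvn verify -P${PROFILE} ${FIXTURES} ${SUREFIRE_SKIP}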

Jenkins does not stop cucumber tests when aborting (pressing the stop[x] button)

I have Calabash running iOS tests on Jenkins. When the job encounters failures, I sometimes manually abort the tests by pressing the stop [x] button within the job. The problem is that the next test in the feature file begins running even though I aborted. This behavior is not observed when launching the tests through the terminal: when exiting the cucumber test in the terminal, the simulator returns to the home screen and no other tests are launched.
I found a hook that might be useful:
After do |s|
  # Tell Cucumber to quit after this scenario is done - if it failed.
  Cucumber.wants_to_quit = true if s.failed?
end
However, there are times when I don't want it to stop just because one scenario failed. I feel like Jenkins needs to kill all processes, and it's not doing so.
If someone knows how to kill Calabash and its instances manually via the terminal after Jenkins has been instructed to abort, I would be interested in that too.
I tried:
ps aux | grep -i instruments | awk {'print $2'} | xargs kill -9
Unfortunately, that did not work, possibly for two reasons.
Reason one: grepping for instruments shows two or more processes:
20272 ?? S 0:00.00 sh -c xcrun instruments -w "iPhone 5 (8.1 Simulator)...
20273 ?? S 0:00.45 /Applications/Xcode.app/Contents/Developer/usr/bin/instruments -w iPhone 5]...
Should I switch awk to print column 1?
Or, reason two: am I not grepping the correct process?
Here is some of my version info:
calabash-ios version: 0.11.4
Calabash::Cucumber::MIN_SERVER_VERSION: 0.11.4
Xcode 6.1
You have to let Jenkins find all forked processes. Depending on the job type, you have to pass different environment entries into the forked process. This question is about the opposite (how to make Jenkins NOT stop processes), but the names of the possible environment entries are listed there. Just pass the environment entries below to each forked process and the process tree killer will find them (a minimal sketch follows the list):
BUILD_ID
HUDSON_SERVER_COOKIE
JENKINS_COOKIE
JENKINS_SERVER_COOKIE
HUDSON_COOKIE
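A minimal sketch of that (assuming your tests fork instruments from a wrapper shell script; exported variables are inherited by the children, and the process tree killer matches on them when you press abort):
# export the Jenkins build environment before forking the test process
export BUILD_ID HUDSON_SERVER_COOKIE JENKINS_COOKIE JENKINS_SERVER_COOKIE HUDSON_COOKIE
xcrun instruments -w "iPhone 5 (8.1 Simulator)" ...  # forked child now carries the cookies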
