Stop JMeter script after a certain time in a Jenkins build

While running a JMeter build in Jenkins, it runs indefinitely ("never stops") even though the duration is configured in the JMX file.
How do I stop the JMeter build after a certain time?
I have tried providing command-line arguments, thinking it isn't honoring the JMX configuration:
PATH/jmeter -Jjmeter.save.saveservice.output_format=xml -Jduration=60 -n -t Main.jmx -l Mainreport.csv
I expected the JMeter build to stop after 60 seconds, but it never ends; I monitored it for 30 minutes.

You need to read the value in the JMeter Scheduler's Duration field using the __P function, i.e. ${__P(duration,)}.
You can provide a default value as the second argument of __P; for example, for a default of 30:
${__P(duration,30)}
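Combined with the command from your question, a rough sketch of the whole setup (same paths and file names as above): enable the Scheduler in the Thread Group, put ${__P(duration,30)} in its Duration (seconds) field, and then -Jduration on the command line overrides the default of 30:
PATH/jmeter -Jjmeter.save.saveservice.output_format=xml -Jduration=60 -n -t Main.jmx -l Mainreport.csv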

Related

Limit the docker containers executed in a loop

I am very new to Docker. I have created a Dockerfile to build an image that executes Protractor tests.
That Dockerfile has an entry point that expects a parameter with the name of the test suite I want to execute.
It all runs very well when I provide suite names on the command line.
I have about 30 test suites. I am using another .sh file which filters out the suite names and, in a for loop, runs docker commands with different suite names.
Now I do not want to run all 30 suites simultaneously; I want to set a limit of, say, 6 at a time and keep the others waiting until one finishes.
I execute like this:
for (( i=0; i<${tests}; i++ )); do
    docker run -dit containername $testSuiteName
done
So how can I limit the maximum number of executions at a time?
There are going to be a number of ways of tackling this problem. Here
is one possible solution.
You can treat this as a shell scripting problem, rather than a Docker problem. For example, consider the following, which instead of docker run ... just uses sleep:
#!/bin/bash
# note: bash, not sh -- the script relies on bash features ((( )), RANDOM, wait -n)
let count=5
let tests=20
for (( i=0; i<tests; i++ )); do
    # placeholder for the real work: each job sleeps a random 0-9 seconds
    sleep $((RANDOM % 10)) &
    echo "started $!"
    let count--
    if (( count <= 0 )); then
        # at the concurrency limit: wait for any one background job (requires bash 4.3+)
        wait -n
        let count++
    fi
done
echo "waiting for remaining jobs"
wait
echo "all done"
This starts $count processes in parallel, and then waits for one to
exit. When a process exits, it immediately starts a new one. Once it
has started all the jobs, it simply waits for everything to finish.
Using this model, you would drop the -d from your docker run
command line, since you need the shell to track the background
processes. Instead of sleep ... &, you would run:
docker run containername $testSuiteName > /dev/null 2>&1 &
Note that I've dropped the -i and -t options here, since it looks like
you're running these tests non-interactively.
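Putting the two pieces together, a rough sketch of the adapted loop might look like this (the testSuiteNames array, the containername image, and the limit of 6 are placeholders based on your question; wait -n needs bash 4.3 or newer):
#!/bin/bash
let count=6                               # maximum number of suites running at once
for suite in "${testSuiteNames[@]}"; do
    # no -d/-i/-t: the shell backgrounds the container itself so wait can track it
    docker run containername "$suite" > /dev/null 2>&1 &
    echo "started $suite as $!"
    let count--
    if (( count <= 0 )); then
        wait -n                           # block until any one container finishes
        let count++
    fi
done
echo "waiting for remaining suites"
wait
echo "all done"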
A few more possible solutions:
If you run your tests in Jenkins, you can create a job that runs one test suite, set the maximum number of executors to 6, and start as many tests as you want. Jenkins won't run more than 6 jobs at a time.
This is, I think, the ideal and most correct approach, yet the most difficult one: use an orchestrator such as Kubernetes, which actually controls all your Docker containers. Unfortunately, I don't have a step-by-step guide for how to achieve it, but it really is the most professional way to tackle your problem.

Java process killed due to out of memory after running job in container for an hour

I have set up a job for running automation tests in CircleCI (https://hub.docker.com/r/jiteshsojitra/docker-headless-vnc-container). It works fine, but after running tests for an hour it reaches the memory limit and suddenly kills the running Java/Ant job. Is there any way to increase the container memory so the tests can run for 5-6 hours in the container, or is that a paid feature?
I tried putting JAVA_OPTS: -Xms512m -Xmx1024m in the YAML script, but the overall container memory usage still appears to reach ~4 GB.
References:
https://circleci.com/gh/jiteshsojitra/zm-selenium/231
https://circleci.com/api/v1.1/project/github/jiteshsojitra/zm-selenium/231/output/106/0?file=true
Log trail:
BUILD FAILED
/headless/zm-selenium/build.xml:348: Java returned: 137
Total time: 76 minutes 26 seconds
Exited with code 137
Hint: Exit code 137 typically means the process is killed because it was running out of memory
Hint: Check if you can optimize the memory usage in your app
Hint: Max memory usage of this container is 4286337024
according to /sys/fs/cgroup/memory/memory.max_usage_in_bytes
We had this problem. It is a limit in CircleCI (or the VM, really). The only solution is to make your app use less memory.
fiskeben is right. I think the container you mentioned is a fork of our consol/docker-headless-vnc-container image, so you can add the following lines to the startup script.
# set correct java startup
export _JAVA_OPTIONS="-Duser.home=$HOME -Xmx${JVM_HEAP_XMX}m"
# add Docker JVM flags; these can probably be removed with JDK 9
export _JAVA_OPTIONS="$_JAVA_OPTIONS -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
Now you can set the environment variable JVM_HEAP_XMX to the number of megabytes your JVM should use, e.g.
docker run -e JVM_HEAP_XMX=512 ...
If you want to determine the size dynamically, take a look at the jvm_options.sh script.
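If you don't want to hard-code the number, a rough sketch of deriving it from the container's cgroup limit (an assumption in the spirit of that script, not its actual contents; the 50% ratio is just a guess to leave headroom for the rest of the container) could look like:
#!/bin/bash
# read the container memory limit from cgroup v1 (the same hierarchy the CircleCI hint refers to)
limit_bytes=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
# give the JVM heap roughly half of the container limit (assumption; tune as needed)
heap_mb=$(( limit_bytes / 1024 / 1024 / 2 ))
export _JAVA_OPTIONS="-Duser.home=$HOME -Xmx${heap_mb}m"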

How do I capture output from one Rundeck step to be used in a later step?

I'm attempting to build, launch, and link a set of docker containers using Rundeck. In short (for those not familiar with docker), when an image is launched, it returns a container ID. I would like to use this container ID in the launching of subsequent jobs.
When run from the command line, it would look something like this (example only!!):
# docker run -Pd 23ABCD45
34DEF123
# docker run -Pd --link 34DEF123:host1 ABC123EF
321CB456
(note the use of the first return value in the second command line)
At this point, there would be two containers running. The second would be linked to the first by the --link option, and it would be addressable using the hostname host1 from inside the second container. To be fair, docker generates (or may be given) a specific container name which can be used in place of the container id. I would prefer to use the container ID to avoid the hassle of having to create/track unique names.
I would like to be able to capture the output of the first command (the container ID) so that it can be reused in the second command. Is this possible?
Edit: These images are being used for testing immediately following a
"docker build" (which also outputs a similar ID I would like to
include in my chain) and might be followed by "docker rm" and "docker
rmi" commands, so there are a number of uses for capturing this type
of output and carrying it through a related set of operations. This
is not just about launching/linking containers.
There is no direct Rundeck feature that allows you to pass the output from one job to another job as an input, but there are workarounds I've tried in the past, and I've settled on the second approach.
1. Use a file to pass data
Save the ID/output into a temp file in the first job.
Have the second job read that file.
Things might go wrong since you depend on a file, but careful code can mitigate that; a minimal sketch is shown below.
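A minimal sketch of this file-based approach, reusing the example image IDs from the question and a hypothetical temp-file path:
# first job/step: start the first container and save its ID
docker run -Pd 23ABCD45 > /tmp/container_id.txt
# second job/step: read the ID back and link the second container against it
CONTAINER_ID=$(cat /tmp/container_id.txt)
docker run -Pd --link "${CONTAINER_ID}:host1" ABC123EF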
2. Call two jobs using Rundeck CLI from another job
This is the approach I am using.
JobA prints out two random numbers:
echo $RANDOM;echo $RANDOM
JobB prints out the second random number produced by JobA, which is passed in as the option "number":
echo "$RD_OPTION_NUMBER is the number JobB received"
JobC calls JobA, saves the last line of its output to a variable, and passes it to JobB:
#!/bin/bash
# run JobA, wait for it to finish (-f), and keep only the last line of its output
OUTPUT_FROM_JOB_A=`run -f --id <ID of JobA> | tail -n 1`
# pass that value to JobB as its "number" option
run -f --id <ID of JobB> -- -number $OUTPUT_FROM_JOB_A
Output:
[5394] execution status: succeeded
Job execution started:
[5395] JobB <https://hostname:4443/project/Project/execution/show/5395>
6186 is the number JobB received
[5395] execution status: succeeded
This is just a primitive code sample; you can do a lot more with Python's subprocess module, or just use bash.

To use lsf bsub command without all the verbosity output

My problem is this: I have a bash script that does something and then calls 800 bsub jobs like this:
pids=
rm -f ~/.count-*
for i in `ls _some_files_`; do
    of=~/.count-${i}
    bsub -I "grep _something_ $i > $of" &
    pids="${!} ${pids}"
done
wait ${pids}
Then the script processes the output files $of and echoes the results.
The trouble is that I got a lot of lines like:
Job <7536> is submitted to default queue <interactive>.
<<Waiting for dispatch ...>>
<<Starting on hostA>>
It's actually 800 repetitions of the 3 lines above. Is there a way of suppressing these LSF lines?
I've tried in the loop above:
bsub -I "grep _something_ $i > $of" &> /dev/null
It does remove the LSF verbosity, but instead of submitting almost all 800 jobs at once and taking less than 4 minutes to run, it submits just a few jobs at a time and I have to wait more than an hour for the script to finish.
AFAIK, LSF's bsub doesn't seem to have an option to suppress all this verbosity. What can I do here?
You can suppress this output by setting the environment variable BSUB_QUIET to any value (including empty) before your bsub. So, before your loop, you could add:
export BSUB_QUIET=
Then if you want to return it back to normal you can clear the variable with:
unset BSUB_QUIET
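Applied to the loop from your question, a sketch would be (unchanged apart from the two BSUB_QUIET lines):
export BSUB_QUIET=    # suppress the "Job <...> is submitted..." chatter
pids=
rm -f ~/.count-*
for i in `ls _some_files_`; do
    of=~/.count-${i}
    bsub -I "grep _something_ $i > $of" &
    pids="${!} ${pids}"
done
wait ${pids}
unset BSUB_QUIET      # restore normal bsub output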
Hope that helps you out.
Have you considered using job dependencies and post-process the logfiles?
1) Run each "child" job (removing the "-I") and send its output to a separate output file. Each job should be submitted with a job name (see -J); the job names could form an array.
2) Your final job would be dependent on the children finishing (see -w).
Besides running concurrently across the cluster, another advantage of this approach is that your overall process is not susceptible to IO issues.
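A rough sketch of that dependency approach (the job names and the process_counts.sh post-processing script are placeholders; double-check the -w wildcard syntax against your LSF version):
# submit each grep as a named, non-interactive batch job
for i in `ls _some_files_`; do
    of=~/.count-${i}
    bsub -J "count_${i}" -o /dev/null "grep _something_ $i > $of"
done
# the final job only starts once every count_* job has finished
bsub -J "count_summary" -w 'done("count_*")' "./process_counts.sh"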

How do I increase timeout for a cronjob/crontab?

I have written a script that gets data from Solr whose date falls within a specified period, and I run the script as a daily cron job.
The problem is the cron job does not complete the task. If I run the script manually (for the same time period), it works well. If I reduce the specified time period, the script also runs fine from cron. So my guess is the cron job is timing out while running the script when there is a lot of data to process.
How do I increase the timeout for the cron job?
PS: The script I am running in the cron job is a bash script which runs a Python script.
Note that the ulimit -t solution suggested will limit the amount of CPU time used, not the amount of actual time that has passed.
From the bash manpage:
ulimit [-HSTabcdefilmnpqrstuvx [limit]]
...
-t The maximum amount of cpu time in seconds
And more importantly, cron doesn't impose any timeouts in the first place. It simply kicks off whatever process and moves on.
BTW: Sorry for posting this as an answer, but I don't have enough points to add comments yet.
You could try using ulimit -t [number of seconds] in the cron job before running the script.
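For example, a sketch of a crontab entry using that suggestion (the schedule and script path are placeholders, and as noted in the other answer this limits CPU time, not elapsed time):
# run the script daily at 02:00 with a CPU-time limit of one hour
0 2 * * * bash -c 'ulimit -t 3600; /path/to/your_solr_script.sh'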
