Limit CPUs with Jenkins and Docker plugin - docker

I've got a Jenkins setup using Docker containers as build slaves.
The containers are provisioned as agents via the Docker plugin - rather than the Jenkinsfiles instantiating the container.
The machine is an 8-thread i7 with 16 GB RAM and a 16 GB swapfile. Currently I'm running just a single container.
If the build in the container uses all 8 threads, the OS OOM-kills some of the GCC processes during the build. There is a mix of build tasks: for some I can explicitly control the number of make threads, while others query the system for the number of cores and go wide. When the maximum number of threads is active, the container appears to run out of RAM without throttling or stalling the work, and it also doesn't seem to make full use of the swap.
I want to limit the number of CPU cores the Docker slave is allowed to use, but I can't find a way to pass the --cpus=2 argument to the docker run command. The CPU shares argument doesn't appear to have the effect I want.
I'm happy to divvy up the resources explicitly from the container configuration in order to make the server more reliable, but I don't want it running into hard limits and getting OOM killed.

I am not quite sure how you start the Docker container, but if you do it with the help of the plugin, I am guessing that you make a call in your Jenkinsfile like myImage.inside("..."). In that case you should be able to add "--cpus=2" as part of the string argument. This works at least for us.
The drawback of this solution, however, is that all Jenkins jobs have to be updated, and you can never be sure that every creator of a Jenkins job remembers this. Hence we implemented a different solution, which should work for you if the one above does not. We basically created a script to act as a wrapper whenever anyone calls docker: it injects "--cpus=2" when "docker run" is called, and otherwise just forwards the call to Docker. We accomplished this as follows:
Put the following script in /home/jenkins/bin/docker ("docker" being the file, not a folder):
#!/bin/bash
# Wrapper around the real docker binary: inject --cpus=2 on "docker run".
echo COMMAND: $0 with arguments: "$@" >&2
if [ "$1" = 'run' ] ; then
    echo 'INFO: I found a "run" command, limiting to 2 cpus' >&2
    echo RUNNING: /usr/bin/docker "$1" --cpus=2 "${@:2}" >&2
    /usr/bin/docker "$1" --cpus=2 "${@:2}"
else
    echo RUNNING: /usr/bin/docker "$@" >&2
    /usr/bin/docker "$@"
fi
Make it executable
chmod a+x /home/jenkins/bin/docker
Modify the path by updating ~/.bashrc. (~/.bash_profile only takes effect when you log in, meaning that Jenkins will not pick it up.) Put "/home/jenkins/bin" FIRST in the path, e.g.
PATH=/home/jenkins/bin:$PATH
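To sanity-check the wrapper (a quick illustration, assuming the PATH change above is in place for the Jenkins user), confirm that the shim resolves first and that run commands pass through it:
which docker                  # should print /home/jenkins/bin/docker
docker run --rm alpine true   # the wrapper's INFO line on stderr confirms the injection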

Related

Running a Docker command in the background?

How would you run a daemon or background process in Docker? I've seen some suggestions, like this answer that launches supervisor from CMD.
However, I'm trying to test a server configuration tool that connects via SSH. So I need to launch the SSH daemon in the background, and then run my tool.sh to test connecting via SSH to its own container. I need to monitor my tool's output in order to verify it's working. What's the best way to accomplish this?
Is there any way to make a RUN command run in the background, like RUN /usr/sbin/sshd -D &, or would I have to have some wrapper script launched from CMD that does something like this?
#!/bin/bash
/usr/sbin/sshd -D &   # backgrounded, so the script can continue on to tool.sh
tool.sh
You can run a daemon inside a Docker container the same way as you would on a bare-metal Linux machine. The only hard part is getting it to start without the nice runlevel scripts to help.
How about this:
#!/bin/sh
# Poll until sshd is up, then run the test script.
run_script() {
    ssh_pids=0
    while [ ${ssh_pids} -lt 1 ]; do
        sleep 5
        ssh_pids=$(pgrep sshd | wc -l)
    done
    test.sh
}
run_script &
# Run sshd in the foreground; it keeps the container alive.
sshd -D > /dev/null 2>&1
I've used this trick before to do what you describe, and it's worked OK. It just backgrounds the call to run_script and proceeds to start sshd in non-daemonizing mode, piping its output to /dev/null. Meanwhile, run_script polls for sshd; when it finds it, it stops polling and runs test.sh, which should still have the terminal as its stdout. You'll probably need to send an external kill signal to stop the whole thing once test.sh is done.
If you don't like this tomfoolery, the other option would be to do as you described: write a wrapper script to use as the CMD/ENTRYPOINT, have it start sshd without the debug flag, and then start test.sh.
The advantage of doing it with the script I posted is that the container will stick around after test.sh is finished, so you can log in and poke around, while also making your script wait until the daemon is running.
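For reference, a minimal sketch of that wrapper option (hypothetical; it assumes sshd daemonizes itself when started without -D, and that test.sh is on the PATH):
#!/bin/sh
# Hypothetical CMD/ENTRYPOINT wrapper: start sshd detached, then run the test.
/usr/sbin/sshd   # without -D, sshd forks into the background
exec test.sh     # exec replaces the shell, so the container exits when test.sh finishes
Note that with this variant the container stops as soon as test.sh returns, which is exactly the trade-off described above.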

docker swarm service, pass args to a single replica in the swarm

I need to deploy a container that receives specific runtime args, say a target IP, to run a bash script against.
I can get the container up and running on a single Docker host and everything works just fine.
Since the load of the script is considerable and it takes some time to execute, it would be interesting to schedule, say, 50 replicas across 5 different hosts (each one with a different target IP), and for that Docker swarm seems the straightforward option.
Say the script is
test.sh
#!/bin/bash
TARGET_IP=$1
FOO=$(nmap -p- -oG - "$TARGET_IP")
echo "waiting 1s for every up port"
# whichever cut/awk/grep combination counts the open ports
x=$(echo "$FOO" | grep -o '/open/' | wc -l)
sleep "${x}s"
echo "end"
Dockerfile
FROM alpine:latest
RUN apk update
RUN apk add bash nmap
# the script needs to be in the image for ./test.sh to work
COPY test.sh .
docker build -t high-load-script .
(command for a single host)
docker run --rm -it high-load-script ./test.sh 172.0.0.1
target hosts are 172.0.0.1 to 172.0.0.100
Please excuse any obvious mistakes in the code, since I'm typing on my phone =)
My first idea was to make a simple web server that receives a GET/POST, runs the script, and kills the container so that this target doesn't get scanned again.
or
have a zombie script that waits until an env variable is defined, then calls the script and kills the container, something like:
#add to Dockerfile
ENTRYPOINT /zombie.sh
zombie.sh
#!/bin/bash
# assumes TARGET_IP is the env variable being waited for
while true; do
    if [ -n "$TARGET_IP" ]; then
        /test.sh "$TARGET_IP"
        exit 0
    else
        sleep 5s
    fi
done
How can I implement this using swarm to leverage the workload distribution?
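As a hedged sketch of the zombie-script idea combined with swarm (image and service names here are illustrative): since all replicas of one service share the same env, one option is to create one single-replica service per target, each with its own TARGET_IP:
for i in $(seq 1 100); do
    docker service create --name scan-$i \
        --restart-condition none \
        -e TARGET_IP=172.0.0.$i \
        high-load-script
done
Swarm then spreads the services across the 5 hosts, and --restart-condition none keeps a finished scan from being rescheduled.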

Docker - How to test if a service is running during image creation

I'm pretty green regarding Docker and find myself facing the following problem:
I'm trying to create a Dockerfile to generate an image with my company's software on it. During the installation of that software, the install process checks whether ssh is running with the following command:
if [ $(pgrep sshd | wc -l) -eq 0 ]; then
I should probably mention that I'm installing and starting open-ssh during that same process.
Can you check at all that a service is running during image creation?
I cannot ignore that step, as it is executed as part of a self-extracting mechanism.
Any pointer in the right direction would be appreciated.
An image cannot run services. In the Dockerfile you are just creating everything your container needs to run: installing databases or servers, copying config files, and so on. The last step in the Dockerfile is where you give instructions on what to do when you issue a docker run command; a script or command can be specified using CMD or ENTRYPOINT in the Dockerfile.
To answer your question: during the image creation process, you cannot check whether a service is running or not. When the container is started, Docker will execute the script or command specified in CMD or ENTRYPOINT. You can use that script to check whether your services are running and take the necessary action after that.
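A hedged sketch of such an entrypoint check (assuming sshd is the service in question, and that the real workload is passed in as CMD via "$@"):
#!/bin/sh
# start the service, wait until it is actually up, then hand control over
/usr/sbin/sshd
while [ "$(pgrep sshd | wc -l)" -eq 0 ]; do
    sleep 1
done
exec "$@"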
It is possible to run services during image creation, but only within a single RUN command: all processes are killed once a RUN command completes, so a service will not keep running between RUN commands. Each RUN command can, however, start services and use them.
If an image creation command needs a service, start the service and then run the command that depends on the service, all in one RUN command.
RUN sudo service ssh start \
&& ssh localhost echo ok \
&& ./install
The first line starts the ssh server and succeeds with the server running.
The second line tests if the ssh server is up.
The third line is a placeholder: the 'install' command can use the localhost ssh server.
In case the service fails to start, the docker build command will fail.
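One hedged caveat: for the ssh localhost check in the middle line to succeed non-interactively, the image usually needs a key pair and relaxed host-key checking set up in an earlier step, along these lines (paths illustrative):
RUN mkdir -p /root/.ssh \
 && ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa \
 && cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys \
 && printf 'Host localhost\n  StrictHostKeyChecking no\n' >> /root/.ssh/config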

Stop a docker container from within an alpine image

I'm trying to stop a docker container from within an alpine image:
> docker run -ti alpine sh
/ # poweroff
/ # poweroff -f
poweroff: Operation not permitted
/ # halt
/ # halt -f
halt: Operation not permitted
/ # whoami
root
Do you see what the issue is here?
You cannot stop a Docker container this way.
First, if poweroff worked (and it did in the past, due to a bug), it would shut down the entire host machine, because of how the poweroff binary works and how the power-off mechanism is designed between Linux and the hardware.
What you have to do in order to properly shut down your container is to quit the entrypoint process (exit in a shell), or send a signal to that process (e.g. docker stop sends SIGTERM to the running entrypoint, then kills it after a grace period).
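For illustration (the container name here is hypothetical), from the host:
docker stop mycontainer                    # SIGTERM, then SIGKILL after the grace period
docker kill --signal=SIGTERM mycontainer   # or send a specific signal yourself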
If you really want to shut down the host computer from within a container (why would you ever want to do that?), you can activate the --privileged option, which gives your in-container root full power over the host, and then do:
echo 1 > /proc/sys/kernel/sysrq; echo o > /proc/sysrq-trigger
Be careful: this will really shut down the host, and in a brutal manner. Writing 1 to sysrq activates the sysrq kernel feature, which allows making requests to the kernel with the SysRq key, but also through the sysrq-trigger file; o means poweroff.
Fedora Project - Sysrq
You need to terminate the sh process, simply with this:
exit
From within the container. Think of a container as an isolated process, not as a virtual machine.

Docker container CPU and Memory Utilization

I have a Docker container running with this command in my Jenkins job:
docker run --name="mydoc" reportgeneration:1.0 start=$START end=$END config=$myfile
This works very well. The image is created from a Dockerfile which executes a shell script via ENTRYPOINT.
Now I want to know how much CPU and memory this container has utilized. I am using a Jenkins job where, in the "execute shell command" step, I run the above docker run command.
I know about the 'docker stats' command. It works very well on my Ubuntu machine, but I want to run it via Jenkins, since my container runs via the Jenkins console. So here are the limitations I face:
I don't know of any way to stop the docker stats command. On the Ubuntu command line we hit 'ctrl+c' to stop it; how would I do that in Jenkins?
Even if I figure out a way to stop docker stats, once the 'docker run' command has finished, the container will no longer be active and will have exited. For an exited container, CPU and memory utilisation will be zero.
docker run 'image'
docker stats container id/name
With the above two lines, docker stats will only see an exited container, and I don't think docker stats will even work in the Jenkins console, as it cannot be stopped.
Is there any way I can get the container's resource utilization (CPU, memory) via the Jenkins console in a better way?
My suggestion is to not run docker stats interactively, but to have a shell script with a loop like this:
#!/bin/sh
# First, start the container
CONTAINER_ID=$(docker run -d ...)
# Then keep checking that it's running (with inspect)
while [ "$(docker inspect -f '{{.State.Running}}' $CONTAINER_ID 2>/dev/null)" = "true" ]; do
    # And while it's running, sample stats
    docker stats --no-stream $CONTAINER_ID
    sleep 1
done
# When the script reaches this point, the container has stopped.
# For example, let's clean it up (assuming you haven't used --rm in run).
docker rm $CONTAINER_ID
The condition checks whether the container is running, and docker stats --no-stream prints stats once and then exits, making it suitable for non-interactive use.
I believe you can use a variant of such a shell script (obviously updated to do something useful, rather than just starting the container and watching its stats) as a build step.
But if you need/want/have an interactive process that you want to stop, kill is the command you're looking for; Ctrl-C in a terminal just sends a SIGINT to the process.
You need to know a PID, of course. I'm not sure about Jenkins, but if you've just started a child process from a shell script with child-process & (e.g. docker stats &), then its PID is in the $! variable. Or you can try to find it using the pidof or ps commands, but that may be error-prone with concurrent jobs (unless they're all isolated).
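A minimal sketch of that backgrounding pattern (stats.log is an illustrative file name; $CONTAINER_ID is reused from the script above):
# stream stats to a file in the background and remember the child's PID
docker stats $CONTAINER_ID > stats.log &
STATS_PID=$!
# block until the container exits, then stop the stats stream
docker wait $CONTAINER_ID
kill $STATS_PID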
Here I've assumed that your Jenkins jobs are shell scripts that do the actual work. If your setup is different (e.g. if you use plugins so that Jenkins talks to Docker directly), things may be different and more complicated.
