Best way to stop Docker container in Jenkins

I have a CI server (Jenkins) which builds new Docker images.
Now I want to run a new Docker container when the build is successful.
But to do that I have to stop the previously running container.
What's the best way to do this?
localhost:5000/test/myapp:"${BUILD_ID}" is the name of my new image, so I'm using the build ID as the tag. First I thought of running:
docker stop localhost:5000/dbm/my-php-app:${BUILD_ID-1}
But this isn't a correct solution, because it goes wrong as soon as a build fails:
Build 1: success -> run container 1
Build 2: failed -> container 1 keeps running
Build 3: success -> stop container (3-1) = 2 --> wrong (container 2 isn't running)
What could be a solution? Proposals that change the tag idea are also welcome.

The docker stop command takes a container name or ID as a parameter, not an image name.
You would have to name your container when you run it:
# build the new image
docker build -t localhost:5000/test/myapp:"${BUILD_ID}" .
# remove the existing container
docker rm -f myjob && echo "container myjob removed" || echo "container myjob does not exist"
# create and run a new container
docker run -d --name myjob localhost:5000/test/myapp:"${BUILD_ID}"
Just replace myjob with a better-suited name in this example.
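A small variation on the same idea (my sketch, not part of the original answer): instead of relying on the || echo fallback, you can check whether the named container exists before removing it.
# remove the container only if it exists (myjob is the example name from above)
if docker container inspect myjob > /dev/null 2>&1; then
    docker rm -f myjob
fi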

Thanks to @Thomasleveil I found the answer to my question:
# build the new image
docker build -t localhost:5000/test/myapp:"${BUILD_ID}" .
# remove old container
SUCCESS_BUILD=`wget -qO- http://jenkins_url:8080/job/jobname/lastSuccessfulBuild/buildNumber`
docker rm -f "${SUCCESS_BUILD}" && echo "container ${SUCCESS_BUILD} removed" || echo "container ${SUCCESS_BUILD} does not exist"
# run new container
docker run -d -p 80:80 --name "${BUILD_ID}" localhost:5000/test/myapp:"${BUILD_ID}"
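One caveat with this approach (my observation, not part of the original answer): on the very first build there is no lastSuccessfulBuild yet, so the wget call returns nothing and docker rm -f would be called with an empty name. A hedged guard for that case:
# only try to remove the previous container if a build number was actually returned
SUCCESS_BUILD=$(wget -qO- http://jenkins_url:8080/job/jobname/lastSuccessfulBuild/buildNumber)
if [ -n "${SUCCESS_BUILD}" ]; then
    docker rm -f "${SUCCESS_BUILD}" || echo "container ${SUCCESS_BUILD} does not exist"
fi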

If you are using a Jenkins Pipeline and want to gracefully stop and remove containers, here is a solution.
First check whether any containers are present; if there are, stop and remove them. Here is the code:
def doc_containers = sh(returnStdout: true, script: 'docker container ps -aq').replaceAll("\n", " ")
if (doc_containers) {
    sh "docker stop ${doc_containers}"
}
The variable doc_containers stores the container IDs, and the empty check ensures that docker stop is not executed when no containers are present.
Here is the pipeline code:
stage('Clean docker containers') {
    steps {
        script {
            def doc_containers = sh(returnStdout: true, script: 'docker container ps -aq').replaceAll("\n", " ")
            if (doc_containers) {
                sh "docker stop ${doc_containers}"
            }
        }
    }
}
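The snippet above only stops the containers. Since the goal stated at the top of this answer is to stop and remove them, a hedged follow-up (my sketch) could run the removal in a plain shell step right after the stop:
# stop and then remove every container known to the daemon (adjust the listing/filter to your needs)
containers=$(docker container ps -aq | tr '\n' ' ')
if [ -n "$containers" ]; then
    docker stop $containers
    docker rm $containers
fi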

Related

How to kill a specific docker container in a single command

I've set up a Jenkins pipeline job in a Groovy script.
I am trying to build a Jenkins job which runs a Docker command on a remote server.
My Jenkins job is expected to connect to the remote server and perform
docker run -d -p 60:80 <image name>
so for that I have used the following Groovy script in the Jenkins pipeline job:
stage ('Deploy on App Server') {
    def dockrun = 'docker run -d -p 60:80 <image name>'
    sshagent(['dev-servr-crdntls']) {
        sh "ssh -o StrictHostKeyChecking=no ubuntu@xx.xxx.xx.xx ${dockrun}"
    }
}
This script runs perfectly fine: Jenkins connects to the remote server, runs the Docker command, and the app is running on port 60.
HOWEVER, as this is a Jenkins pipeline for CI/CD, the next time the build runs the job fails because port 60 is already assigned.
I want to kill whatever is occupying port 60 before running the docker run -d -p ... command. Any suggestions please?
You could use the following command to kill the running container that occupies a given port:
docker kill $(docker ps -qf expose=<port>)
Explanation:
The docker ps command lists containers and has a lot of useful options. One of them is the -f flag for filtering containers based on some properties. You can filter for the running container that occupies <port> by using -f expose=<port>. In addition, the -q flag can be used to output only the container ID. This output can be used as input to the docker kill command.
Edit:
Because the command mentioned above could potentially fail with an error if no container is running on the given port, the following command can be used to circumvent this problem:
docker kill $(docker ps -qf expose=<port>) 2> /dev/null || echo 'No container running on port <port>'
Now the command will either kill the container occupying <port>, if such a container exists and is running, or print No container running on port <port> (the echo part is optional).
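For the pipeline from the question, here is a hedged sketch of a deploy script that frees the host port first and then starts the new container. Note that it uses the publish filter for the host port 60, whereas the expose filter above matches the port exposed by the container; pick whichever matches your setup.
#!/bin/bash
# free host port 60, then start the new container (image name is the placeholder from the question)
PORT=60
docker kill $(docker ps -q -f "publish=$PORT") 2> /dev/null || echo "No container running on port $PORT"
docker run -d -p $PORT:80 <image name>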

Testing a tkinter-based function on Jenkins in a Docker container on AWS

I have Python code that passes all the tests on my local machine. The code uses tkinter and provides a GUI. However, none of the test functions actually opens the GUI (they do call tk.Tk(), though).
I created a Docker container locally and could use X11 forwarding to pass the tests in the "local" container as well.
Now I'm trying to run the tests on Jenkins, which I have set up on an EC2 instance. Jenkins is supposed to create a Docker container using the Dockerfile in my repository, and then call "docker run -e ... -v ..." (similar to what I had on my local computer) to run the tests. I understand my EC2 instance does not have a GUI, and therefore X11 forwarding is not as simple as it was on my computer. There should be a way for tests using a GUI to be checked through the Jenkins setup on AWS. Any help is appreciated.
EDIT
Here is the build script that I have on AWS; it creates the Docker container using the Dockerfile:
IMAGE_NAME="test-image"
CONTAINER_NAME="deidentifier_clinical"
echo "Check current working directory"
pwd
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo $DISPLAY
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY $IMAGE_NAME bash -c "cd /$CONTAINER_NAME;make test"
echo "Copy coverage.xml into Jenkins container"
rm -rf reports; mkdir reports
docker cp $CONTAINER_NAME:/deidentifier_clinical/htmlcov/* reports/.
echo "Cleanup"
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
docker rmi $IMAGE_NAME
This fails on the docker run line. The same script runs with no problem on my local computer after setting up X11 forwarding.
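No answer is recorded here, but a commonly used workaround on headless CI hosts (my suggestion, not from this thread) is to run the GUI tests under a virtual framebuffer such as Xvfb inside the container, so no X server or X11 forwarding is needed on the EC2 instance. Assuming the xvfb package is installed in the image, the docker run line could become something like:
# run the tests under a virtual X server inside the container (assumes xvfb is installed in the image)
docker run $IMAGE_NAME bash -c "cd /$CONTAINER_NAME && xvfb-run -a make test"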

Get return code of a Docker container run with --rm -d

If I docker run a container with some script inside using --rm and --detach, how can I find the RC (return code) of that container? I.e. whether the script inside the container finished successfully or failed?
Because of the --rm flag I can't see that container in docker ps --all after it finishes.
You can't, since you're explicitly asking Docker to clean up after the container. That cleanup includes all of the metadata, like the exit status.
On the other hand, if you're actively planning to check the status code anyway, you'll have the opportunity to do the relevant cleanup yourself.
CONTAINER_ID=$(docker run -d ...)
...
docker stop "$CONTAINER_ID" # if needed
docker wait "$CONTAINER_ID" # produces its exit status
CONTAINER_RC=$?
docker rm "$CONTAINER_ID"
if [ "$CONTAINER_RC" -ne 0 ]; then
echo "container failed" >&2
fi
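A hedged variation on the same idea: instead of relying on $? from docker wait, the exit status can also be read from the container metadata before removing it.
CONTAINER_ID=$(docker run -d ...)
docker wait "$CONTAINER_ID" > /dev/null    # block until the script inside finishes
CONTAINER_RC=$(docker inspect --format '{{.State.ExitCode}}' "$CONTAINER_ID")
docker rm "$CONTAINER_ID"
echo "container exited with status $CONTAINER_RC"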
The best way to check whether the script works is to first capture the script's output using command1 > everything.txt 2>&1.
And lastly, you can go inside the running container using docker exec -it <mycontainer> bash.
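Putting both suggestions together, a rough illustration (command1 and <mycontainer> are the placeholders from above):
# capture everything the script prints, then check its exit status and the log
command1 > everything.txt 2>&1
echo "command1 exited with status $?"
docker exec -it <mycontainer> bash   # inspect the running container interactively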

How to run a docker image in Jenkins

I know how to build a Docker image in Jenkins. Basically, calling docker.build("foo") from my Jenkinsfile is the equivalent of executing, from the command line, docker build -t foo .
My question is how I RUN the image. What's the equivalent of docker run foo, assuming that foo has a defined ENTRYPOINT?
I'm familiar with docker.image('foo').inside() { ... }, which allows me to run a shell script or something inside the container, but that's not what I'm looking for. I want to run the container from its ENTRYPOINT.
For running a Docker image from a Jenkinsfile you can use the following docker CLI command:
sh "docker run -it --entrypoint /bin/bash example"
It will start the Docker container (run the Docker image); you can then SSH to the host where Docker is running and use the docker ps command to list the running containers.
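If the goal is the plain equivalent of docker run foo using the image's own ENTRYPOINT (rather than overriding it with /bin/bash), a hedged sketch of the shell commands inside an sh step could look like this (foo_run is an illustrative container name):
# run the image detached from its own ENTRYPOINT and give the container a known name
docker run -d --name foo_run foo
# later: inspect or stop it by that name
docker logs foo_run
docker stop foo_run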
You can also have a look at .withRun, which starts the container in the background, runs the closure body while it is up, and stops and removes the container afterwards.
Run a normal command, or pass your entrypoint as an argument:
docker.image('python:2.7').withRun('-u root --entrypoint /bin/bash') {
    sh 'pip install version'
}

Jenkins docker - running a container, executing shell script etc

I'm trying to run docker containers in the Jenkins Pipeline.
I have the following in my Jenkinsfile:
stage('test') {
    steps {
        script {
            parallel (
                "gatling" : {
                    sh 'bash ./test-gatling.sh'
                },
                "python" : {
                    sh 'bash ./test-python.sh'
                }
            )
        }
    }
}
In the test-gatling.sh I have this:
#!/bin/bash
docker cp RecordedSimulation.scala denvazh/gatling:/RecordedSimulation.scala
docker run -it -m denvazh/gatling /bin/bash
ls
./gatling.sh
The ls command is there just for testing, but when it's executed it lists the files and folders of my GitHub repository rather than the files inside the denvazh/gatling container. Why is that? I thought the docker run -it [...] command would open the container so that commands could be run inside it.
================
Also, how do I run a container and just have it running, without executing any commands inside it (in the Jenkins Pipeline, of course)?
I'd like to run docker run -d -p 8080:8080 -t [my_container] and access it on port 8080. How do I do that?
If anyone has the same or a similar problem, here are the answers:
Use the docker exec [name of container] command, and to run any terminal commands inside a container, add /bin/bash -c "[command]" (see the sketch after this list).
To be able to access a container/app that is running on any port from a second container, start the second container with the --net=host parameter.
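Applying that advice to the test-gatling.sh from the question, a hedged rewrite (gatling_test is an illustrative container name; it assumes the image keeps running when started detached, otherwise give it a long-running command):
#!/bin/bash
# start the container detached, copy the simulation in, then run commands inside it
docker run -d --name gatling_test denvazh/gatling
docker cp RecordedSimulation.scala gatling_test:/RecordedSimulation.scala
docker exec gatling_test /bin/bash -c "ls && ./gatling.sh"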
