Upgrading a docker container in jenkins pipeline - docker

I have the following pipeline:
pipeline {
    environment {
        registry = "my-docker"
        registryCredential = 'dockerhubcredentials'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning our Git') {
            steps {
                git 'my-git'
            }
        }
        stage('Building Docker Image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploying Docker Image to Dockerhub') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Cleaning Up') {
            steps {
                sh "docker rmi --force $registry:$BUILD_NUMBER"
            }
        }
        stage('Upgrade docker') {
            steps {
                // sh docker stop *Current CONTAINERID* (How do I get it?)
                // sh docker run my-container:*NEW_BUILD_NUMBER*
            }
        }
    }
}
Now I'm trying to add the upgrade docker stage; all the other stages are working great.
How can I get the container ID of the currently running container so that I can stop it?
After stopping it I want to pull and start the new one (I'll need the new build number, possibly $BUILD_NUMBER + 1; I think I can manage that, but correct me if I'm wrong).
Is it good practice to upgrade a docker container from Jenkins? I couldn't find any examples, and it feels like a common automation process.

If we address the two steps you are attempting in the pipeline and implement them, your first two literal questions become moot, because neither affects the implementation.
First, for stopping the container:
// sh docker stop *Current CONTAINERID* (How do I get it?)
your pipeline never runs a container, so you have no container to stop, and you can safely skip this step.
Second, for running the new container:
// sh docker run my-container:*NEW_BUILD_NUMBER*
The new container will be sourced from your new image, and your new image is part of the object returned by the docker.build global variable method. Therefore, we can run your new container like:
// three methods available here
dockerImage.run([args[, command]])
dockerImage.withRun([args[, command]]) {…}
dockerImage.inside([args]) {…}
because you assigned the return value to dockerImage.
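For example, a minimal sketch of the upgrade stage using run() (the container name my-app and the preliminary cleanup step are illustrative assumptions, not part of your pipeline):
stage('Upgrade docker') {
    steps {
        script {
            // remove any previous instance first (hypothetical name 'my-app');
            // returnStatus keeps the step from failing if no container exists yet
            sh script: 'docker rm -f my-app', returnStatus: true
            // run() starts a detached container from the image built by docker.build
            dockerImage.run('--name my-app')
        }
    }
}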
For the third question, building a new Docker image and running a new container as part of a pipeline is absolutely common. You can add various other stages to your pipeline if you want, such as deploying the new image to a Kubernetes cluster.

If you are deploying this through a Jenkins pipeline, you'll usually want to directly specify the name for the container:
sh "docker run -d --name my-container $registry:$BUILD_NUMBER"
Once you have that name, you can clean up the old container using that name, without having to know the container ID:
sh 'docker stop my-container'
sh 'docker rm my-container'
The other important corollary is that you probably shouldn't docker rmi the newly built image before you docker run it; otherwise Docker will have to pull the image you just pushed, even though you had that exact image locally a minute ago. If you want to clean up old images, consider docker system prune to remove anything unused.
Putting it all together, the end of your pipeline would be:
environment {
    ...
    containerName = 'my-container'
}
stages {
    ...
    stage('Upgrade docker') {
        steps {
            sh script: "docker stop $containerName", returnStatus: true
            sh script: "docker rm $containerName", returnStatus: true
            sh "docker run -d --name $containerName $registry:$BUILD_NUMBER"
        }
    }
    stage('Cleaning up') {
        steps {
            sh "docker system prune --all --force"
        }
    }
}
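Note that returnStatus: true on the stop and rm steps keeps the very first run from failing when no container exists yet; the shell step just reports the non-zero exit code instead of aborting the build.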

Related

Run job inside docker container in specific agent Jenkins

I am trying to run Jenkins pipeline steps in a Docker container on a specific agent. I could use the docker agent to run it, but the container will run on a random agent, and I need it to run on a specific one.
Here's what I tried:
pipeline {
    agent { label 'agent-007' }
    stages {
        stage("Unit Tests") {
            agent { docker 'image-name' }
            steps {
                sh 'pwd'
                sh 'hostname'
            }
        }
    }
}
In the documentation on Specifying a Docker Label, it says that in the configuration for your Jenkins job you can specify which agent you want Docker to run on. In your case, you could set "Docker Label" to "agent-007" in your job configuration. You can also specify which Docker registry you want to pull from, which is really handy if you use Artifactory, for instance.
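If you would rather keep this in the Jenkinsfile, the Declarative docker agent also accepts a label parameter; a minimal sketch:
pipeline {
    agent none
    stages {
        stage("Unit Tests") {
            // run this stage's container on a node with the label 'agent-007'
            agent {
                docker {
                    image 'image-name'
                    label 'agent-007'
                }
            }
            steps {
                sh 'pwd'
                sh 'hostname'
            }
        }
    }
}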

Jenkins: Connect to a Docker container from a stage that is run with an agent (another Docker container)

I am in the process of reworking a pipeline to use the Declarative Pipeline approach so that I will be able to use Docker images on each stage.
At the moment I have the following working code, which performs integration tests connecting to a DB that runs in a Docker container.
node {
    // checkout, build, test stages...
    stage('Integration Tests') {
        docker.image('mongo:3.4').withRun(' -p 27017:27017') { c ->
            sh "./gradlew integrationTest"
        }
    }
}
Now with Declarative Pipelines the same code would look something like this:
pipeline {
    agent none
    stages {
        // checkout, build, test stages...
        stage('Integration Test') {
            agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }
            steps {
                script {
                    docker.image('mongo:3.4').withRun(' -p 27017:27017') { c ->
                        sh "./gradlew integrationTest"
                    }
                }
            }
        }
    }
}
Problem: the stage now runs inside a Docker container, and calling docker.image() leads to a docker: not found error in the stage (it looks for docker inside the openjdk image, which is now used as the agent).
Question: How to start a DB container and connect to it from a stage in Declarative Pipelines?
What you are essentially trying to use is Docker-in-Docker (DinD).
You are using a Jenkins agent that is created from a Docker image: agent { docker { image 'openjdk:11.0.4-jdk-stretch' } }.
Once that container is running, you are trying to execute a docker command inside it. The docker: not found error is expected, because no Docker CLI is installed there. You need to update the Dockerfile / create a custom image based on openjdk:11.0.4-jdk-stretch with the Docker client installed.
Once the client is installed, you need to volume-mount /var/run/docker.sock so that the client inside the container talks to the host's Docker daemon via the socket.
The user should be root or a privileged user to avoid permission-denied issues.
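As an illustration, a hedged sketch of the agent wiring (my-openjdk11-with-docker is a hypothetical custom image built from openjdk:11.0.4-jdk-stretch with the Docker client added):
agent {
    docker {
        // hypothetical image: openjdk:11.0.4-jdk-stretch plus the docker CLI
        image 'my-openjdk11-with-docker'
        // share the host daemon's socket with the container
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}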
So if I understand correctly, your tests need two things:
Java environment
DB connection
In this case, have you tried a different approach like Docker-in-Docker (DinD)?
You could have a custom image that uses docker:dind as a base image and contains your Java environment, use it in the agent section, and then the rest of the pipeline steps will be able to use the docker command as you expected.
In your example you are trying to run a container inside openjdk:11.0.4-jdk-stretch. If that image does not have Docker installed, you will not be able to execute docker; and even if it did, that would be running Docker inside Docker, which you generally should not.
So it depends on what you want.
Using multiple containers:
In this case you can combine multiple Docker images, but they are not dependent on each other:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
Running "sidecar" containers:
This example show you to use two containers simultaneously, which will be able to interacts each others:
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
Please refer to the official documentation -> https://jenkins.io/doc/book/pipeline/docker/
I hope it will help you.
I have had a similar problem, where I wanted to be able to use an off-the-shelf Maven Docker image to run my builds in, while also being able to build a Docker image containing the application.
I accomplished this by first starting the Maven container in which the build is to run, giving it access to the host's Docker endpoint.
Partial example:
docker run -v /var/run/docker.sock:/var/run/docker.sock maven:3.6.1-jdk-11
Then, inside the build-container, I download the Docker binaries and set the Docker host:
export DOCKER_HOST=unix:///var/run/docker.sock
wget -nv https://download.docker.com/linux/static/stable/x86_64/docker-19.03.2.tgz
tar -xvzf docker-*.tgz
cp docker/docker /usr/local/bin
Now I can run the docker command inside my build container.
As a (for me positive) side effect, any Docker image built inside a container in one step of the build will be available to subsequent steps, also running in containers, since the images are retained on the host.
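The same idea can be expressed with the Docker Pipeline plugin; a rough sketch, assuming the Docker CLI is available inside the image (for example via the download above) and using an illustrative image tag:
node {
    // mount the host's socket so docker commands inside the container reach the host daemon
    docker.image('maven:3.6.1-jdk-11').inside('-v /var/run/docker.sock:/var/run/docker.sock') {
        sh 'mvn -B package'
        // the built image is stored on the host, so later steps (and containers) can see it
        sh 'docker build -t my-app:snapshot .'
    }
}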

How can I build Docker images on Jenkins Pipeline, without changing permissions on the underlying Jenkins VM?

I want to use Jenkins Pipeline to build, push, and deploy my Docker image.
I get this:
Got permission denied while trying to connect to the
Docker daemon socket at unix:///var/run/docker.sock
Other questions on StackOverflow suggest sudo usermod -a -G docker jenkins, then restart Jenkins, but I do not have access to the machine running Jenkins -- and anyway, it seems strange that Jenkins Pipeline, which is built all around Docker, cannot run a basic Docker command.
How can I build my Docker?
pipeline {
    agent any
    stages {
        stage('deploy') {
            agent {
                docker {
                    image 'google/cloud-sdk:latest'
                    args '-v /var/run/docker.sock:/var/run/docker.sock'
                }
            }
            steps {
                script {
                    docker.build "gcr.io/myporject/mydockerimage:1"
                }
            }
        }
    }
}
The pipeline definition shown is trying to execute the docker build inside a Docker container (google/cloud-sdk:latest). Instead, you should do the following, given that the jenkins user on the host has permission to execute docker commands on the host:
pipeline {
    agent any
    stages {
        stage('deploy') {
            steps {
                script {
                    docker.build "gcr.io/myporject/mydockerimage:1"
                }
            }
        }
    }
}
There is nothing strange about Jenkins being unable to execute docker commands without proper permission, when Jenkins and Docker are installed and configured separately on the machine.
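For reference, the host-side fix that the advice quoted in the question points at looks roughly like this (it requires admin access to the Jenkins machine, which the asker lacks):
# run on the Jenkins host by an administrator
sudo usermod -aG docker jenkins   # allow the jenkins user to use the Docker socket
sudo systemctl restart jenkins    # restart so the new group membership takes effect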

Jenkins pipeline is removing the container on a remote daemon after deployment; I want to keep it running

I am trying to build and deploy my code using a Jenkins pipeline, using a remote Docker daemon for deployment.
Everything is working, but the Jenkins pipeline stops and removes all containers once the pipeline script ends. The server comes up for just 10 seconds, after which the container is stopped and removed.
stage('Deploy') {
    steps {
        script {
            docker.withServer('tcp://10.10.10.10:2375') {
                docker.withRegistry('https://registry.my.com/', 'jenkins-registry') {
                    docker.image('registry.my.com/image-my/my:latest').withRun(' -p 9090:80 -i -t --name harpal ') {
                        sh 'docker ps -a'
                    }
                }
            }
        }
    }
}
Output:
[Flights-Docker-POC] Running shell script
+ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6a4c5094a8d2 registry.my.com/image-my/my:latest "/usr/bin/supervisord" 6 hours ago Up Less than a second 0.0.0.0:9090->80/tcp harpal
[Pipeline] sh
[Flights-Docker-POC] Running shell script
+ docker stop 6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
+ docker rm -f 6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
6a4c5094a8d22179b364ee2d3b97e998a2c13e8b136c55816c0d8f838c17248b
Got the answer: it wasn't an issue related to the entry point in my image.
I was supposed to use the image.run() method instead of withRun(); withRun() internally calls run() and stops the container in the finally block of its implementation:
public <V> V withRun(String args = '', Closure<V> body) {
    docker.node {
        Container c = run(args)
        try {
            body.call(c)
        } finally {
            c.stop()
        }
    }
}
By the way, thank you guys for the help.
The script was supposed to be:
stage('Deploy') {
    steps {
        script {
            docker.withServer('tcp://10.10.10.10:2375') {
                docker.withRegistry('https://registry.my.com/', 'jenkins-registry') {
                    docker.image('registry.my.com/image-my/my:latest').run(' -p 9090:80 -i -t --name harpal ')
                }
            }
        }
    }
}
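If you later need to stop that container from another stage or build, note that run() returns a Container handle; a small sketch (the echo is illustrative):
script {
    docker.withServer('tcp://10.10.10.10:2375') {
        docker.withRegistry('https://registry.my.com/', 'jenkins-registry') {
            // run() returns a Container object with an id and a stop() method
            def c = docker.image('registry.my.com/image-my/my:latest').run(' -p 9090:80 -i -t --name harpal ')
            echo "Started container ${c.id}"
            // stop it later with c.stop(), or by name: sh 'docker stop harpal'
        }
    }
}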
I don't believe there is a way to keep it alive using that Docker plugin Groovy class, it's intended to remove the container after the run.
If you're just trying to launch Docker containers from Jenkins, just use shell commands to do a
sh 'docker run -p 9090:80 -i -t --name harpal registry.my.com/image-my/my:latest '
If you're trying to keep a container alive to debug it and look around, I usually add
sh 'sleep 30m'
Then go to the Docker machine and take a look around the container with
docker exec -it <ContainerID> bash

Jenkins pipeline is unable to terminate a docker container

I have a docker container that performs some tasks and is scheduled inside Jenkins pipeline like this:
pipeline {
    stages {
        stage('1') {
            steps {
                sh "docker run -i --rm test"
            }
        }
    }
}
If the pipeline is aborted somehow, by timeout or manually for example, the container won't stop and stays alive.
How do I configure it to be terminated along with the pipeline?
Docker version 17.06-ce
Hi Elessar, you can configure an always block in the post section. The commands inside always run regardless of whether the build is cancelled, fails, or succeeds.
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                // name the container so the post step can stop it by name
                sh "docker run -i --rm --name test test"
            }
        }
    }
    post {
        always {
            sh "docker stop test" // or something similar
        }
    }
}
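Note that the post section runs even when the build is aborted or times out, so the stop command still executes on a manual cancel.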
I hope this solves your problem!
