Is it possible to create and run Docker containers for CI/CD from within a running Jenkins Docker Container? So basically access Docker on the host server from within a running container.
On my host server (Ubuntu 19.04) Docker (version 19.03.3) is installed. With the following commands I create a Jenkins container that I intended to give access to the host's Docker daemon:
mkdir /home/myuser/Desktop/jenkins_home
docker run -dit --name jenkins -v /home/myuser/Desktop/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -p 8080:8080 jenkins/jenkins:lts
Within Jenkins I create a Pipeline that loads a Jenkinsfile from Git that looks like this:
pipeline {
    agent {
        docker {
            image 'ubuntu:19.04'
            args '-u root:sudo -p 3000:3000'
        }
    }
    stages {
        stage('Install') {
            steps {
                sh 'apt-get update'
                sh 'apt-get install -y curl'
                sh 'curl -sL https://deb.nodesource.com/setup_13.x | sh -'
                sh 'curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -'
                sh 'echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list'
                sh 'apt-get update'
                sh 'apt-get install -y nodejs yarn'
            }
        }
        stage('Build') {
            steps {
                sh './build.sh'
            }
        }
    }
}
When I run the Pipeline it fails as soon as Jenkins tries to pull the ubuntu:19.04 image; the error is docker: not found.
Somewhere the connection between my Jenkins container and the host's Docker daemon is misconfigured. What configuration is necessary to run Docker commands on the host server from within the container?
If you want to create and run Docker containers for CI/CD from a Jenkins container, this can be achieved with a shell step in the Jenkins job that runs an ssh command on the Docker host.
This requires that the Jenkins container's ssh public key is authorized on the Docker host, i.e. it must appear in the authorized_keys file there.
To make the same ssh keys available inside the Jenkins container, you can bind-mount them into it.
Example using docker-compose:
volumes:
  - /home/user/.ssh/id_rsa:/var/jenkins_home/.ssh/id_rsa
  - /home/user/.ssh/id_rsa.pub:/var/jenkins_home/.ssh/id_rsa.pub
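For context, a minimal docker-compose.yml sketch around those mounts (service name, image, and host paths are assumptions; adapt them to your setup):

```yaml
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - /home/user/jenkins_home:/var/jenkins_home
      - /home/user/.ssh/id_rsa:/var/jenkins_home/.ssh/id_rsa
      - /home/user/.ssh/id_rsa.pub:/var/jenkins_home/.ssh/id_rsa.pub
```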
This is an example content of a shell command used to launch and update containers on Docker host from a Jenkins job:
cat ./target/stack/installer-*.tar | ssh root@${DOCKER_HOST} \
    /home/user/Build-Server/remote-installer.sh
In the command above an installer archive is piped over ssh to the Docker host, where it deploys/updates containers. The remote-installer.sh script receives the file on standard input and unpacks it with tar:
TEMPDIR=`mktemp -d`
echo "unarchiving to $TEMPDIR"
tar xv -C "$TEMPDIR"
...
This works both when the Docker containers run on the same server as the Jenkins container and when they run on different servers.
Working in zsh (on a Mac), I used this docker run command:
docker run --name container2 --link container1:alias -v /Users/omerboucris/Desktop/Devops_Final_Project/jenkins-data:/var/jenkins_home -p 8090:8080 -d jenkins/jenkins:lts
I set up the Jenkins job and I'm trying to docker cp a local file, created at /workspace/job1/file.txt, to my host path:
/Users/omerboucris/Desktop/Devops_Final_Project/jenkins-data
Why do I have no access to my host? Jenkins runs as root only. If I print pwd in my job I get: /var/jenkins_home/workspace/MonitorJob
So how can I use docker cp? This command did not help:
docker cp d6dac560c25b:/var/jenkins_home/workspace/FinalProject_Devops/OurLandPage.jsp <HOME>/Desktop
Even trying to cd to my Desktop fails.
Thanks!
First of all, I couldn't find the answer here on SO (this is the closest post).
I have EC2 running Ubuntu. First I installed Jenkins, and then Docker.
It's not "Docker in Docker".
My project has a Jenkinsfile in which I run some docker commands.
It's supposed to use a docker container such as gradle, share a volume, and build the project.
The final .war should end up on the host file system.
The problem is: gradle inside the container can't write to the host's folder.
Here's my Jenkinsfile (one of countless tries):
#!/usr/bin/groovy
node {
    checkout scm
    stage 'Gradle'
    sh 'sudo docker run --rm -v "$PWD":/api -w /api gradle gradle clean build --stacktrace'
}
Stacktrace important line:
Caused by: org.gradle.api.UncheckedIOException: Failed to create parent directory '/api/.gradle' when creating directory '/api/.gradle/4.3/fileHashes'
Solved!
I started a throw-away container with the same -v argument:
$ docker run -it -v "$PWD":/api -w /api gradle bash
Then I checked the current user:
$ whoami
gradle
So, the solution is to run the container as root:
sh 'docker run -u root --rm -v "$PWD":/api -w /api gradle gradle clean build'
This is root inside the container only; the Jenkins process itself doesn't need root access.
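A hedged alternative that avoids root entirely, assuming the gradle image tolerates an arbitrary -u user: run the container with the host user's UID/GID so files written under $PWD stay owned by the invoking user.

```shell
# Sketch only (not verified against this exact image): build as the
# calling host user instead of root.
build_as_host_user() {
    docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/api -w /api \
        gradle gradle clean build
}
```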
I'm trying to run docker containers in the Jenkins Pipeline.
I have the following in my Jenkinsfile:
stage('test') {
    steps {
        script {
            parallel (
                "gatling" : {
                    sh 'bash ./test-gatling.sh'
                },
                "python" : {
                    sh 'bash ./test-python.sh'
                }
            )
        }
    }
}
In the test-gatling.sh I have this:
#!/bin/bash
docker cp RecordedSimulation.scala denvazh/gatling:/RecordedSimulation.scala
docker run -it -m denvazh/gatling /bin/bash
ls
./gatling.sh
The ls command is there just for test, but when it's executed it lists files and folders of my github repository, rather than the files inside the denvazh/gatling container. Why is that? I thought the docker run -it [...] command would open the container so that commands could be run inside it?
================
Also, how do I run a container and just have it running, without executing any commands inside it? (In the Jenkins Pipeline ofc)
I'd like to run: docker run -d -p 8080:8080 -t [my_container] and access it on port 8080. How do I do that...?
If anyone has the same or a similar problem, here are the answers:
Use the docker exec [name of container] command; to run a terminal command inside the container, append /bin/bash -c "[command]".
To access a container/app that is running on any port from a second container, start the second container with the --net=host parameter.
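A sketch combining both answers (container and image names are hypothetical; the docker calls themselves need a running daemon, so usage is shown as comments):

```shell
# Hypothetical helper: run a terminal command inside an already-running
# container, as described above.
run_in_container() {
    docker exec "$1" /bin/bash -c "$2"
}

# Usage (names are assumptions):
#   run_in_container gatling "ls /opt/gatling"
#   docker run -d --net=host my_image   # shares the host network stack,
#                                       # so it can reach host-published ports
```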
I would like to perform the following steps using Jenkins:
1- docker pull <image_name>
2- docker run -i -t <command>
I've installed the docker plugin in Jenkins, but is this possible? The documentation on the docker plugin page is very poor.
These steps are executed programmatically by the plugin.
Alternatively you can execute a script on a Jenkins slave with docker installed, in build->execute shell:
#!/bin/bash
# `docker images httpd` prints a header line plus one line per image,
# so fewer than 2 lines means the image is not present yet
export image=`docker images httpd|wc -l`
echo image $image
if [ "$image" -lt "2" ];
then
    docker pull httpd
fi
export container=`docker ps -a --filter "name=webcontainer"|wc -l`
echo container $container
if [ "$container" -gt "1" ];
then
    echo "Deleting webcontainer"
    docker rm -f webcontainer
fi
BUILD_ID=dontKillMe docker run -d -t -p 8888:80 --name webcontainer httpd
You can interact with the created container using:
docker exec -it webcontainer /bin/bash
These days (mid 2017, more than a year after the OP's question), you would use the inside directive of a Jenkins pipeline to pull a docker image and run some commands within it.
For instance (Using Jenkins Pipelines with Docker), using the Docker Pipeline plugin:
docker.image('ruby:2.3.1').inside {
    stage("Install Bundler") {
        sh "gem install bundler --no-rdoc --no-ri"
    }
    stage("Use Bundler to install dependencies") {
        sh "bundle install"
    }
}
I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
Push changes in my app to a private gitlab (running in docker). The app includes a Dockerfile and docker-compose.yml
Gitlab triggers a jenkins build (jenkins is also running in docker), which does some normal build stuff (e.g. run test)
Jenkins then needs to build a new docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the jenkins container has access to the host's docker, so that running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works ok, but there are two problems:
It really feels like a hack. But the only other options I've found is to use the docker plugin for jenkins, publish to a registry and then have some way of letting the host know it needs to restart. This is quite a lot more moving parts, and the docker-jenkins plugin required that the docker host is on an open port, which I don't really want to expose.
The jenkins Dockerfile includes groupadd -g 997 docker, which is needed to give the jenkins user access to docker. However, the GID (997) is the docker GID on the host machine, and is therefore not portable.
I'm not really sure what solution I'm looking for. I can't see any practical way to get around this approach, but it would be nice if there was a way to allow running docker commands inside the jenkins container without having to hard code the GID in the DockerFile. Does anyone have any suggestions about this?
My previous answer was more generic, telling how you can modify the GID inside the container at runtime. Now, by coincidence, someone from my close colleagues asked for a jenkins instance that can do docker development so I created this:
FROM bdruemen/jenkins-uid-from-volume
RUN apt-get -yqq update && apt-get -yqq install docker.io && usermod -g docker jenkins
VOLUME /var/run/docker.sock
ENTRYPOINT groupmod -g $(stat -c "%g" /var/run/docker.sock) docker && usermod -u $(stat -c "%u" /var/jenkins_home) jenkins && gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both, jenkins_home and docker.sock.
docker run -d -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock <IMAGE>
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: https://blog.container-solutions.com/running-docker-in-jenkins-in-docker
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
Please take a look at this docker file I just posted:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (host directory) with
stat -c '%g' <VOLUME-PATH>
Then the GID of the group of the container user is changed to the same value with
groupmod -g <GID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real GID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the GID, there might be other files in the container that are no longer accessible to the process, so you might need a
chgrp -R <GROUPNAME> <SOME-PATH>
before the gosu command.
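Putting the pieces above together, a sketch of such an ENTRYPOINT script (the paths and the jenkins user/group names are assumptions; groupmod, chgrp and gosu must exist in the image):

```shell
# The sketch is written to a file here only so its syntax can be checked;
# in a real image it would be baked in as the ENTRYPOINT.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
# 1. Extract the GID of the mounted host directory
VOL_GID=$(stat -c '%g' /var/jenkins_home)
# 2. Change the container group to the same GID
groupmod -g "$VOL_GID" jenkins
# 3. Re-own files that may no longer be accessible after the GID change
chgrp -R jenkins /var/jenkins_home
# 4. Drop root privileges and hand off to the real command
exec gosu jenkins "$@"
EOF
sh -n /tmp/entrypoint.sh && echo "syntax OK"
```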
You can also change the UID, see my answer here Changing the user's uid in a pre-build docker container (jenkins)
and maybe you want to change both to increase security.
I solved a similar problem in the following way.
Docker is installed on the host. Jenkins is deployed in the docker container of the host. Jenkins must build and run containers with web applications on the host.
Jenkins master connects to the docker host using REST APIs. So we need to enable the remote API for our docker host.
Log in to the host and open the docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following.
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
Reload and restart the docker service:
sudo systemctl daemon-reload
sudo service docker restart
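To verify the daemon now answers over TCP, a quick check run on the Docker host (port 4243 matches the ExecStart line above; the fallback message is just for illustration):

```shell
# Query the daemon's /version endpoint over the newly opened TCP port.
RESP=$(curl -s http://localhost:4243/version || echo "API not reachable")
echo "$RESP"
```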
Dockerfile for Jenkins:
FROM jenkins/jenkins:lts
USER root
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update
RUN apt-get -y --no-install-recommends install apt-transport-https \
apt-utils ca-certificates curl gnupg2 software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
RUN apt-get update && apt-get install -y docker-ce-cli docker-ce && \
apt-get clean && \
usermod -aG docker jenkins
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.6 docker-workflow:1.29 ansicolor"
Build jenkins docker image
docker build -t you-jenkins-name .
Run Jenkins
docker run --name you-jenkins-name --restart=on-failure --detach \
--network jenkins \
--env DOCKER_HOST=tcp://172.17.0.1:4243 \
--publish 8080:8080 --publish 50000:50000 \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
you-jenkins-name
Your web application's repository contains, at its root, a Jenkinsfile and a Dockerfile.
The Jenkinsfile for the web app:
pipeline {
    agent any
    environment {
        PRODUCT = 'web-app'
        HTTP_PORT = 8082
        DEVICE_CONF_HOST_PATH = '/var/web-app'
    }
    options {
        ansiColor('xterm')
        skipDefaultCheckout()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    //BRANCH_NAME = env.CHANGE_BRANCH ? env.CHANGE_BRANCH : env.BRANCH_NAME
                    deleteDir()
                    //git url: "git@<host>:<org>/${env.PRODUCT}.git", branch: BRANCH_NAME
                }
                checkout scm
            }
        }
        stage('Stop and remove old') {
            steps {
                script {
                    try {
                        sh "docker stop ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker image rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                }
            }
        }
        stage('Build') {
            steps {
                sh "docker build . -t ${env.PRODUCT}"
            }
        }
        // Run the app using the built docker image
        stage('Run new') {
            steps {
                script {
                    sh """docker run \
                        --detach \
                        --name ${env.PRODUCT} \
                        --publish ${env.HTTP_PORT}:8080 \
                        --volume ${env.DEVICE_CONF_HOST_PATH}:/var/web-app \
                        ${env.PRODUCT}"""
                }
            }
        }
    }
}