Accessing parent daemon from container in Jenkins - docker

We have a bunch of nodes running our jobs in Jenkins. I need to build two images from a Jenkins job. To do this, I've read that you should share the unix socket using a bind mount, which I've done like this:
agent {
    docker {
        image 'custom-alpine-with-docker'
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}
I then want to use it as follows:
stage('Build and push image(s)') {
    steps {
        dir("${WORKING_DIRECTORY}") {
            script {
                echo 'Building amd64 image'
                amd64image = docker.build("${IMAGE_NAME}:${BUILD_NUMBER}-amd64", "-f ./Dockerfile.amd64 .")
                echo 'Building arm32v7 image'
                arm32v7image = docker.build("${IMAGE_NAME}:${BUILD_NUMBER}-arm32v7", "-f ./Dockerfile.arm32v7 .")
            }
            script {
                docker.withRegistry("${DOCKER_REGISTRY_URL}", "${REPOSITORY_CREDENTIALS}") {
                    amd64image.push()
                    arm32v7image.push()
                }
            }
        }
    }
}
However, as soon as the build command is issued in the jenkins job, I get the following error:
time="2019-01-16T16:55:33Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
17:56:59 Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
So a simple search shows the source of this error is that the user trying to access the daemon is not in the docker group, but I don't understand how these group memberships work when sharing a daemon like this.
If I go to the node that failed the build, and check the users in the docker group, I get the following:
$ getent group docker
docker:x:126:inst,jenkins
So how do I allow the user running in the container on that host to access the same daemon?
Small update
I just tried it locally with docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker, and when I run docker ps inside the container and on my host, I see the same containers running.
The docker group on my development machine looks like this:
docker:x:999:overlord
So I'm guessing I need some special Jenkins solution for this to work...

I think I've solved this in a satisfactory way. Here's a step-by-step guide:
1. Ensure docker is installed in the container that needs to run it.
2. Create the docker group and jenkins user:
CMD DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
    groupmod --gid ${DOCKER_GID} ${DOCKER_GROUP} && \
    usermod -a -G ${DOCKER_GROUP} ${JENKINS_USER} && \
    gosu jenkins sh
It is important to note that here I fetch the group ID from the underlying system that runs docker. Since docker was already installed in my container and the group already existed, I modify the existing group to match the host's group ID, and finally I add the jenkins user to the docker group (${DOCKER_GROUP} and ${JENKINS_USER} are environment variables defined earlier in the Dockerfile). After the command has run, /etc/group inside the container should look something like this:
docker:x:999:jenkins
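For context, here's a hypothetical sketch of how such a Dockerfile could fit together; the base image and package names are assumptions, not my actual image:
# hypothetical sketch: Debian base, docker.io and gosu packages assumed
FROM debian:buster-slim
RUN apt-get update && apt-get install -y docker.io gosu && rm -rf /var/lib/apt/lists/*
# the variables referenced by CMD below
ENV DOCKER_GROUP=docker JENKINS_USER=jenkins
RUN useradd -m ${JENKINS_USER}
# at runtime (as root): align the docker group's GID with the mounted socket,
# add the jenkins user to it, then drop privileges
CMD DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
    groupmod --gid ${DOCKER_GID} ${DOCKER_GROUP} && \
    usermod -a -G ${DOCKER_GROUP} ${JENKINS_USER} && \
    gosu ${JENKINS_USER} sh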
3. In your pipeline, start the agent as follows:
agent {
    docker {
        image 'storemanager-build'
        args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
    }
}
By supplying the -u root flag, you override the jenkins user that Jenkins forces on you when you use the declarative pipeline. You have to be root for the CMD command to work, since only root can modify groups and users.
When the image is running, the command switches to the jenkins user, who is now allowed to access the underlying unix socket.
Here's an excerpt from my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'build-image'
            args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build jar') {
            steps {
                dir("${WORKING_DIRECTORY}") {
                    script {
                        if (isUnix()) {
                            sh './mvnw --batch-mode clean install'
                        } else {
                            bat 'mvnw.cmd --batch-mode clean install'
                        }
                    }
                }
            }
        }
        stage('Build and push image(s)') {
            steps {
                dir("${WORKING_DIRECTORY}") {
                    script {
                        amd64image = docker.build("${IMAGE_NAME}", "-f ./Dockerfile.amd64 .")
                        arm32v7image = docker.build("${IMAGE_NAME}", "-f ./Dockerfile.arm32v7 .")
                    }
                    script {
                        docker.withRegistry("${DOCKER_REGISTRY_URL}", "${STOREMANAGER_REPOSITORY_CREDENTIALS}") {
                            amd64image.push("${BUILD_NUMBER}-amd64")
                            arm32v7image.push("${BUILD_NUMBER}-arm32v7")
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            sh "chmod -R 777 ." // Jenkins can't clean built resources without this as we run the container as root
            cleanWs()
        }
    }
}
And the resources that helped me:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
https://github.com/jenkinsci/docker/issues/196
Hope this helps.

Related

dial unix /var/run/docker.sock: connect: permission denied - Docker daemon socket - Jenkins on Ubuntu(ec2)

I'm trying to build and then run a docker image on Jenkins. I have set up Jenkins on ubuntu on an AWS ec2 server. When I try to build, I get the permission denied error from the question title (dial unix /var/run/docker.sock: connect: permission denied).
For reference, I have also attached my JenkinsFile.
pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                echo 'Starting to build the docker-react-app.'
            }
        }
        stage('building docker image') {
            steps {
                sh 'sudo docker build -t docker-react-app .'
            }
        }
        stage('runing docker image') {
            steps {
                sh 'sudo docker run -dp 3001:3000 docker-react-app'
            }
        }
    }
}
I am using Jenkins with the default administrator account.
I have also added the jenkins user to the docker group, but it hasn't solved the issue. I verified the membership by running:
cat /etc/group | grep docker
Which outputs:
docker:x:998:ubuntu,jenkins
Let me know if you need any further information.
The error says that your current user can't access the docker engine, because you lack permission to access the unix socket used to communicate with the engine.
To solve this issue, run the following command in a shell:
sudo usermod -a -G docker $USER
Note: afterwards, do a complete restart of the machine and of Jenkins, then log completely out of your account and log back in.
For more info: https://docs.docker.com/engine/install/linux-postinstall/
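After logging back in, you can verify that the change took effect; these checks are just a suggestion:
# confirm the jenkins user is now in the docker group
id -nG jenkins
# confirm the socket is reachable without sudo
sudo -u jenkins docker ps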
Try running your pipeline scripts as the jenkins user on your target server.
Maybe you should add the jenkins user to the root group.

Docker: not found when running cmds in jenkinsfile

I am new to docker and CI. I am trying to create a Jenkinsfile that builds and tests my application, builds a docker image with the Dockerfile I've composed, and then pushes it to AWS ECR. The step I am stuck on is building the image with docker; I receive the error docker: not found. I downloaded the docker plug-in and configured it in the global tool configuration tab. Am I not adding it to tools correctly?
There was another post where you could use kubernetes to do that, however kubernetes no longer supports docker.
[screenshot: docker configured in the global tool configuration]
The error:
/var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: 1: /var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: docker: not found
[screenshot: permission denied error on /var/run/docker.sock]
def gv
containerVersion = "1.0"
appName = "foodcore"
imageName = appName + ":" + containerVersion
pipeline {
    agent any
    environment {
        CI = 'true'
    }
    tools {
        nodejs "node"
        docker "docker"
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                    CODE_CHANGES = gv.getGitChanges()
                }
            }
        }
        stage("build frontend") {
            steps {
                dir("client") {
                    sh 'npm install'
                }
            }
        }
        stage("build backend") {
            steps {
                dir("server") {
                    sh 'npm install'
                }
            }
        }
        stage("test") {
            when {
                expression {
                    script {
                        CODE_CHANGES == false
                    }
                }
            }
            steps {
                dir("client") {
                    sh 'npm test'
                }
            }
        }
        stage("build docker image") {
            when {
                expression {
                    script {
                        env.BRANCH_NAME.toString().equals('Main') && CODE_CHANGES == false
                    }
                }
            }
            steps {
                sh "docker build -t ${imageName} ."
            }
        }
        stage("push docker image") {
            when {
                expression {
                    env.BRANCH_NAME.toString().equals('Main')
                }
            }
            steps {
                sh 'aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin repoURI'
                sh 'docker tag foodcore:latest ...repoURI'
                sh 'docker push repoURI'
            }
        }
    }
}
Docker should be installed on the server where Jenkins is running. The docker plugin provided by Jenkins is just a tool to generate snippets for pipeline scripts; installing and configuring the plugin doesn't install a docker daemon. Please check whether docker is installed on the OS.
As we can see in the thread, you started getting permission denied on docker.sock.
The docker.sock permissions are lost whenever you restart the system or the docker service.
To make the fix persistent, set up a cron job that changes the permissions after each reboot:
@reboot chmod 777 /var/run/docker.sock
And when you restart docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can put it in a cron job as well, executing every 5 minutes.
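For instance, installing the 5-minute variant could look something like this sketch (adjust the schedule as needed):
# append a crontab entry that re-opens the socket permissions every 5 minutes
(crontab -l 2>/dev/null; echo "*/5 * * * * chmod 777 /var/run/docker.sock") | crontab -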
To use docker inside a Jenkins build, there are 2 methods:
1. Use the Jenkins docker plugin as described in the solution above.
2. Or install docker itself in the Jenkins container and mount the docker.sock file.
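A minimal sketch of the second method, assuming the official jenkins/jenkins image (note the docker CLI must still be installed inside the image for pipeline steps to find it):
# mount the host's docker socket into the Jenkins container
docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts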

Using Jekyll docker inside Jenkins

I'm trying to build a jekyll website via my Jenkins server (which runs inside a container) and I have a stage in my Jenkinsfile that looks like this:
stage('Building Website') {
    agent {
        docker {
            image 'jekyll/jekyll:builder'
        }
    }
    steps {
        sh 'jekyll --version'
    }
}
The very first time I run my job, it pulls the jekyll docker image and runs fine (although it fetches a bunch of gems before running jekyll, which doesn't happen when I run the container manually outside Jenkins), but subsequent jobs fail with this error:
jekyll --version
/usr/jekyll/bin/jekyll: exec: line 15: /usr/local/bundle/bin/jekyll: not found
Any ideas what I'm doing wrong here?
As you can see in the jenkins log file, jenkins runs docker with the -u 1000:1000 argument. Since this user does not exist in the jekyll/jekyll image, the command fails with the error .../bin/jekyll: not found.
Here is a sample Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'jekyll/jekyll:3.8'
            args '''
                -u root:root
                -v "${WORKSPACE}:/srv/jekyll"
            '''
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    cd /srv/jekyll
                    jekyll --version
                '''
            }
        }
    }
}
To add to the other answer, note that the containerized Jenkins does not contain the docker binary, so docker commands will still fail.
A few solutions:
1. Make a Dockerfile that inherits from the jenkins image and installs docker as well, creating a new image (see the sketch after these steps).
2. Manually install docker inside the container. This will work until you pull a new image, and then you'll have to do it all over again:
Open an interactive terminal into the jenkins container:
docker container exec -it -u root <container id> bash
Then install docker:
curl https://get.docker.com/ > dockerinstall && chmod 777 dockerinstall && ./dockerinstall
Exit the container and set permissions on docker.sock:
sudo chmod 666 /var/run/docker.sock
Finished!
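For the first option, a minimal Dockerfile sketch (the base image and the use of Docker's convenience script are assumptions; pin versions as appropriate):
FROM jenkins/jenkins:lts
USER root
# install docker inside the image so the CLI is available to builds
RUN curl -fsSL https://get.docker.com | sh
# switch back to the regular jenkins user
USER jenkins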

run jenkins pipeline agent with sudo

I have a Jenkins server running in a docker container, with access to docker on the host system; so far it is working well. Now I want to set up a pipeline testing a script inside a docker container.
Jenkinsfile:
pipeline {
    agent { docker 'nginx:1.11' }
    stages {
        stage('build') {
            steps {
                sh 'nginx -t'
            }
        }
    }
}
Error Message:
> + docker pull nginx:1.11
>
> Warning: failed to get default registry endpoint from daemon (Got
> permission denied while trying to connect to the Docker daemon socket
> at unix:///var/run/docker.sock: Get
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/info: dial unix
> /var/run/docker.sock: connect: permission denied). Using system
> default: https://index.docker.io/v1/
>
> Got permission denied while trying to connect to the Docker daemon
> socket at unix:///var/run/docker.sock: Post
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/images/create?fromImage=nginx&tag=1.11:
> dial unix /var/run/docker.sock: connect: permission denied
>
> script returned exit code 1
My problem is that jenkins needs to run the docker command with sudo, but how do I tell the agent to run the command with sudo?
I have faced the same issue. After analysing the console log, I found that the reason is that the Docker Jenkins Plugin starts a new container with a specific option -u 107:112:
...
docker run -t -d -u 107:112 ...
...
After trying many options, such as adding jenkins to the sudo group (it did not work because the jenkins user does not exist in the container) and adding USER root to the Dockerfile, none of them did the trick.
Finally, I found a solution: use args in the docker agent to overwrite the -u option. This is my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'ubuntu'
            args '-u root:sudo -v $HOME/workspace/myproject:/myproject'
        }
    }
    stages {
        stage("setup_env") {
            steps {
                sh 'apt-get update -y'
                sh 'apt-get install -y git build-essential gcc cmake make'
            }
        }
        stage("install_dependencies") {
            steps {
                sh 'apt-get install -y libxml2-dev'
            }
        }
        stage("compile_dpi") {
            steps {
                sh 'cd /myproject && make clean && make -j4'
            }
        }
        stage("install_dpi") {
            steps {
                sh 'cd /myproject && make install'
            }
        }
        stage("test") {
            steps {
                sh 'do some test here'
            }
        }
    }
    post {
        success {
            echo 'Do something when it is successful'
            bitbucketStatusNotify(buildState: 'SUCCESSFUL')
        }
        failure {
            echo 'Do something when it is failed'
            bitbucketStatusNotify(buildState: 'FAILED')
        }
    }
}
There may be a security issue here, but it is not a problem in my case.
I'd solve the problem differently: match the group ID inside the container to the GID of the docker socket you've mounted as a volume. I do this with an entrypoint that runs as root, looks up the GID of the socket, and if that doesn't match the GID inside the current container, does a groupmod to correct it. Then I drop privileges to the jenkins user to launch Jenkins. The entrypoint runs on every startup, but fairly transparently to the Jenkins app that is launched.
All the steps to perform this are included in this github repo: https://github.com/sudo-bmitch/jenkins-docker/
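A condensed sketch of that entrypoint idea (the repo above is more thorough; gosu and the exact paths here are assumptions):
#!/bin/sh
# entrypoint.sh - runs as root, fixes the docker group GID, then drops privileges
set -e
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    SOCK_GID=$(stat -c '%g' "$SOCK")
    # remap the container's docker group if it doesn't match the socket
    if [ "$(getent group docker | cut -d: -f3)" != "$SOCK_GID" ]; then
        groupmod -g "$SOCK_GID" docker
    fi
fi
# launch Jenkins as the unprivileged jenkins user
exec gosu jenkins "$@"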
You can work around that by:
1- In your Dockerfile add jenkins to the sudoers file:
RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
2- Add an extra step in your Jenkinsfile to give jenkins the right permissions to use docker:
pipeline {
    agent none
    stages {
        stage("Fix the permission issue") {
            agent any
            steps {
                sh "sudo chown root:jenkins /run/docker.sock"
            }
        }
        stage('Step 1') {
            agent {
                docker {
                    image 'nezarfadle/tools'
                    reuseNode true
                }
            }
            steps {
                sh "ls /"
            }
        }
    }
}
As others have suggested, the issue is that jenkins does not have permission to run docker containers. Let's go over the ways you could launch jenkins first, and then see what could be done in each of these ways.
1. running jenkins manually
Surely you could download & run jenkins with java as suggested here. In this method, you could do several things to allow your jenkins user to use docker:
a. give the jenkins user root access
I do not suggest this way; after all, you are giving your pipelines access to everything! You probably do not want this to happen.
b. add the jenkins user to the docker group
As explained here, you can manage docker as a non-root user: just add your user to the docker group and that's all. I recommend this if you know who is going to use docker (because, well, you are giving them root access in docker in a way).
c. make docker rootless
This is a new feature docker added to its arsenal recently. You can read what it implies here. To tell you the truth, I am not a fan of this feature! The reason is that I could not find a way to make it work for a user in a container (as you need to stop the docker service to make it happen), and I also had some difficulties configuring DNS when using rootless mode. But it should be fine if you are not in a container.
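For completeness, enabling rootless mode typically looks something like the following sketch (install script and paths follow docker's docs; the UID is an assumption):
# install rootless docker for the current non-root user
curl -fsSL https://get.docker.com/rootless | sh
# start the per-user daemon (on systemd hosts)
systemctl --user start docker
# point clients at the per-user socket (assuming UID 1000)
export PATH=$HOME/bin:$PATH
export DOCKER_HOST=unix:///run/user/1000/docker.sock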
2. running jenkins in docker
This method is actually more troublesome! I struggled with the ways I could use docker in the jenkins container, but in the end got the results I needed, so it was worth the effort.
To run docker in jenkins (which is itself a docker container) you have three ways:
1. use dind (docker in docker)
It is pretty straightforward: you run a dind image and connect docker in the jenkins container to it; without any special permission handling you can use docker at will (see the sketch after this list).
2. use dood (docker outside of docker)
Mount the docker socket path as a volume in the docker run script for your jenkins. Note that you need to use one of the two ways I explained above (in running jenkins manually) to be able to use docker; it can be a bit tricky, but it is possible.
3. run the agent as a docker container in a different environment & connect it as a remote agent in jenkins
Finally, it is possible to run the agent separately and connect the remote agent in jenkins. Although this does not exactly answer your question, it is a way you could use.
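To illustrate the first way, a sketch of wiring a dind daemon to a jenkins container (image names and the disabled-TLS port are assumptions for brevity; the jenkins image still needs the docker CLI installed):
# run the docker daemon in its own container
docker network create jenkins
docker run -d --name docker-dind --privileged --network jenkins \
    -e DOCKER_TLS_CERTDIR="" docker:dind
# point the jenkins container at that daemon
docker run -d --name jenkins --network jenkins \
    -e DOCKER_HOST=tcp://docker-dind:2375 jenkins/jenkins:lts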
Those were the ways to just run docker in jenkins. You will probably still have some issues after you run a container as an agent, like permission issues inside the agent container itself, most likely because of the agent's user (if you like, you can check that user with the command below):
docker exec -it [agent container id] whoami
e.g. in this sample the user in the agent is node:
agent {
    docker { image 'node:14-alpine' }
}
steps {
    sh 'npm i -g random'
}
This would throw an error because the node user does not have permission to install npm modules globally (I know, it is weird!).
So, as luongnv89 mentioned, you could change the user running the container like this:
agent {
    docker {
        image 'node:14-alpine'
        args '-u root'
    }
}
Hope this was helpful in understanding the whole picture. 😊
What worked for me was
node() {
    String jenkinsUserId = sh(returnStdout: true, script: 'id -u jenkins').trim()
    String dockerGroupId = sh(returnStdout: true, script: 'getent group docker | cut -d: -f3').trim()
    String containerUserMapping = "-u $jenkinsUserId:$dockerGroupId "
    docker.image('image')
          .inside(containerUserMapping + ' -v /var/run/docker.sock:/var/run/docker.sock:ro') {
        sh "..."
    }
}
This way the user in the container still uses the jenkins user ID and group ID, avoiding permission conflicts with shared data, but is also a member of the docker group inside the container, which is required to access the docker socket (/var/run/docker.sock).
I prefer this solution as it doesn't require any additional scripts or dockerfiles
I just had the same exact issue. You need to add jenkins user to docker group:
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
JENKINS_USER=jenkins

if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    sudo groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}
    sudo usermod -aG ${DOCKER_GROUP} ${JENKINS_USER}
fi

# Start Jenkins service
sudo service jenkins restart
After you run the above, pipelines can successfully start docker containers.
I might have found a reasonably good solution for this.
Setup
I run Jenkins as a container and use it to build containers on the dockerhost it's running on. To do this, I pass /var/run/docker.sock as a volume to the container.
Just to reiterate the disclaimer some other people already stated: Giving access to the docker socket is essentially like giving root access to the machine - be careful!
I assume that you've already installed docker into your Jenkins Image.
Solution
This is based on the fact that the docker binary is not in the first directory of $PATH. We basically place a shell script that runs sudo docker instead of the plain docker command (and passes the parameters along).
Add a file like this to your jenkins repository and call it docker_sudo_overwrite.sh:
#! /bin/sh
# This basically is a workaround to add sudo to the docker command, because aliases don't seem to work
# To be honest, this is a horrible workaround that depends on the order in $PATH
# This file needs to be placed in /usr/local/bin with execute permissions
sudo /usr/bin/docker "$@"
Then extend your Jenkins Dockerfile like this:
# Now we need to allow jenkins to run docker commands! (This is not elegant, but at least it's semi-portable...)
USER root
## allowing jenkins user to run docker without specifying a password
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker" >> /etc/sudoers
# Create our alias file that allows us to use docker as sudo without writing sudo
COPY docker_sudo_overwrite.sh /usr/local/bin/docker
RUN chmod +x /usr/local/bin/docker
# switch back to the jenkins-user
USER jenkins
This gives the jenkins service user the ability to run the docker binary as root with sudo (without providing a password). Then we copy our script to /usr/local/bin/docker which "overlays" the actual binary and runs it with sudo. If it helps, you can look at my example on Github.
Same issue here, where:
[...]
agent { docker 'whatever_I_try_doesnt_work'} # sudo, jenkins user in dockerroot group etc
[...]
So my workaround is to add it as one of the steps in the build stage of the pipeline, as follows:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo docker pull python:3.5.1'
            }
        }
    }
}

Docker Plugin for Jenkins Pipeline - No user exists for uid 1005

I'm trying to execute an SSH command from inside a Docker container in a Jenkins pipeline. I'm using the CloudBees Docker Pipeline Plugin to spin up the container and execute commands, and the SSH Agent Plugin to manage my SSH keys. Here's a basic version of my Jenkinsfile:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
When the SSH command runs, I get this error:
+ ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a
No user exists for uid 1005
I combed through the logs and realized the Docker Pipeline Plugin is automatically telling the container to run with the same user that is logged in on the host by passing a UID as a command line argument:
$ docker run -t -d -u 1005:1005 [...]
I decided to check what users existed in the host and the container by running cat /etc/passwd in each environment. Sure enough, the list of users was different in each. 1005 was the jenkins user on the host machine, but that UID didn't exist in the container. To solve the issue, I mounted /etc/passwd from the host to the container when spinning it up:
node {
    step([$class: 'WsCleanup'])
    docker.image('node').inside('-v /etc/passwd:/etc/passwd') {
        stage('SSH') {
            sshagent (credentials: [ 'MY_KEY_UUID' ]) {
                sh "ssh -vvv -o StrictHostKeyChecking=no ubuntu@example.org uname -a"
            }
        }
    }
}
The solution provided by @nathan-thompson is awesome, but in my case I was unable to find the user even in the /etc/passwd of the host machine! That means mounting the passwd file did not fix the problem. This question https://superuser.com/questions/580148/users-not-found-in-etc-passwd suggested some users are logged into the host using an identity provider like LDAP.
The solution was finding a way to add the proper line to the passwd file in the container. Calling getent passwd $USER on the host provides the passwd line for the Jenkins user running the container.
I added a step running on the node (and not the docker agent) to get the line and save it in a file. Then in the next step I mounted the generated passwd to the container:
stages {
    stage('Create passwd') {
        steps {
            sh """echo \$(getent passwd \$USER) > /tmp/tmp_passwd
            """
        }
    }
    stage('Test') {
        agent {
            docker {
                image '*******'
                args '***** -v /tmp/tmp_passwd:/etc/passwd'
                reuseNode true
                registryUrl '*****'
                registryCredentialsId '*****'
            }
        }
        steps {
            sh """ssh -i ********
            """
        }
    }
}
I just found another solution to this problem that I want to share. It differs from the existing solutions in that it allows running the complete pipeline on one agent, instead of per stage.
The trick is, instead of directly using an image, to refer to a Dockerfile (which may be built FROM the original) and then add the user (note: the ${nodeId} referenced in the groupmod line below presumably needs its own build arg):
# Dockerfile
FROM node

ARG jenkinsUserId=

RUN if ! id $jenkinsUserId; then \
        usermod -u ${jenkinsUserId} jenkins; \
        groupmod -g ${nodeId} jenkins; \
    fi
// Jenkinsfile
pipeline {
    agent {
        dockerfile {
            additionalBuildArgs "--build-arg jenkinsUserId=\$(id -u jenkins)"
        }
    }
}
agent {
    docker {
        image 'node:14.10.1-buster-slim'
        args '-u root:root'
    }
}
environment {
    SSH_deploy = credentials('e99988ea-6bdc-45fc-b9e1-536b875bcac7')
}
stage('build') {
    steps {
        sh '''#!/bin/bash
            eval $(ssh-agent -s)
            cat $SSH_deploy | tr -d '\r' | ssh-add -
            touch .env
            echo 'REACT_APP_BASE_API = "//172.22.132.115:8080"' >> .env
            echo 'REACT_APP_ADMIN_PANEL_URL = "//172.22.132.115"' >> .env
            yarn install
            CI=false npm run build
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'rm -rf /usr/local/src/build'
            scp -r -o StrictHostKeyChecking=no build root@172.22.132.115:/usr/local/src/
            ssh -t -o StrictHostKeyChecking=no root@172.22.132.115 'systemctl restart nginx'
        '''
    }
}
From the solution provided by Nathan Thompson, I modified it this way for a Jenkins docker build container which runs inside a Jenkins docker slave (docker in docker):
if (validated_parameters.custom_gradle_image) {
    docker.image(validated_parameters.custom_gradle_image).inside(" -v /etc/passwd:/etc/passwd -v /var/lib/jenkins/.ssh/:/var/lib/jenkins/.ssh/ ") {
        sshagent(['jenkins-git-io']) {
            sh "${gradleCommand}"
        }
    }
}
