I have a Jenkins server running in a Docker container, with access to Docker on the host system; so far it is working well. Now I want to set up a pipeline that tests a script inside a Docker container.
Jenkinsfile:
pipeline {
    agent { docker 'nginx:1.11' }
    stages {
        stage('build') {
            steps {
                sh 'nginx -t'
            }
        }
    }
}
Error Message:
> + docker pull nginx:1.11
>
> Warning: failed to get default registry endpoint from daemon (Got
> permission denied while trying to connect to the Docker daemon socket
> at unix:///var/run/docker.sock: Get
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/info: dial unix
> /var/run/docker.sock: connect: permission denied). Using system
> default: https://index.docker.io/v1/
>
> Got permission denied while trying to connect to the Docker daemon
> socket at unix:///var/run/docker.sock: Post
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/images/create?fromImage=nginx&tag=1.11:
> dial unix /var/run/docker.sock: connect: permission denied
>
> script returned exit code 1
My problem is that Jenkins needs to run the docker command with sudo, but how do I tell the agent to run the command with sudo?
I faced the same issue. After analysing the console log, I found that the reason is that the Docker Jenkins plugin starts the new container with a specific option, -u 107:112:
...
docker run -t -d -u 107:112 ...
...
I tried many options, such as adding jenkins to the sudo group (it did not work because the jenkins user does not exist in the container) and adding USER root to the Dockerfile, but none of them did the trick.
Finally I found a solution: use args in the docker agent to override the -u option. This is my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'ubuntu'
            args '-u root:sudo -v $HOME/workspace/myproject:/myproject'
        }
    }
    stages {
        stage("setup_env") {
            steps {
                sh 'apt-get update -y'
                sh 'apt-get install -y git build-essential gcc cmake make'
            }
        }
        stage("install_dependencies") {
            steps {
                sh 'apt-get install -y libxml2-dev'
            }
        }
        stage("compile_dpi") {
            steps {
                sh 'cd /myproject && make clean && make -j4'
            }
        }
        stage("install_dpi") {
            steps {
                sh 'cd /myproject && make install'
            }
        }
        stage("test") {
            steps {
                sh 'do some test here'
            }
        }
    }
    post {
        success {
            echo 'Do something when it is successful'
            bitbucketStatusNotify(buildState: 'SUCCESSFUL')
        }
        failure {
            echo 'Do something when it is failed'
            bitbucketStatusNotify(buildState: 'FAILED')
        }
    }
}
There may be a security issue here, but it is not a problem in my case.
I'd solve the problem differently: match the jenkins group id inside the container to the group id of the docker socket you've mounted as a volume. I do this with an entrypoint that runs as root, looks up the gid of the socket, and, if it doesn't match the gid inside the current container, runs a groupmod to correct it inside the container. Then I drop privileges to the jenkins user to launch Jenkins. The entrypoint runs on every startup, but fairly transparently to the Jenkins app that is launched.
All the steps to perform this are included in this github repo: https://github.com/sudo-bmitch/jenkins-docker/
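The core of that entrypoint can be sketched roughly as follows. This is a simplified sketch, not the repo's actual code: the jenkins user name, the gosu hand-off, and the /usr/local/bin/jenkins.sh path are assumptions based on the official Jenkins image.

```shell
#!/bin/sh
# Sketch: align the container's docker group gid with the mounted socket's gid,
# then drop privileges to the jenkins user before launching Jenkins.
SOCK="${DOCKER_SOCK:-/var/run/docker.sock}"

socket_gid() {
    # gid owning the given file/socket
    stat -c '%g' "$1"
}

if [ -S "$SOCK" ] && [ "$(id -u)" -eq 0 ]; then
    host_gid="$(socket_gid "$SOCK")"
    container_gid="$(getent group docker | cut -d: -f3)"
    if [ -n "$container_gid" ] && [ "$host_gid" != "$container_gid" ]; then
        # correct the mismatch inside the container only
        groupmod -g "$host_gid" docker
    fi
    if command -v gosu >/dev/null 2>&1; then
        exec gosu jenkins /usr/local/bin/jenkins.sh "$@"
    fi
fi
```

Because the check runs on every startup, the container keeps working even if the host's docker group id changes between restarts.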
You can work around that by:
1- In your Dockerfile add jenkins to the sudoers file:
RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
2- Add an extra step in your Jenkinsfile to give jenkins the right permissions to use docker:
pipeline {
    agent none
    stages {
        stage("Fix the permission issue") {
            agent any
            steps {
                sh "sudo chown root:jenkins /run/docker.sock"
            }
        }
        stage('Step 1') {
            agent {
                docker {
                    image 'nezarfadle/tools'
                    reuseNode true
                }
            }
            steps {
                sh "ls /"
            }
        }
    }
}
As others have suggested, the issue is that Jenkins does not have permission to run Docker containers. Let's first go over the ways you could launch Jenkins, and then see what can be done in each case.
1. running jenkins manually
You could download and run Jenkins with Java as suggested here. With this method, there are several things you can do to allow your jenkins user to use Docker:
a. give jenkins user root access:
I do not suggest this; after all, you would be giving your pipelines access to everything! You probably do not want that.
b. add jenkins user to docker group
As explained here, you can manage Docker as a non-root user: just add your user to the docker group and that's all. I recommend this if you know who is going to use Docker (because, in a way, you are giving them root access through Docker).
c. make docker rootless
This is a feature Docker added recently. You can read in detail what it implies here. To tell the truth, I am not a fan of it: I could not find a way to make it work for a user in a container (you need to stop the docker service to set it up), and I also had some difficulties configuring DNS in rootless mode. But it should be fine if you are not in a container.
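For option (b), the change itself is a one-liner; here is a small sketch to check whether a user is already in the docker group (a typical Linux setup is assumed, and the usermod only takes effect after the next login):

```shell
# Check current membership; suggest the usermod if it is missing.
u="$(id -un)"
if id -nG "$u" | grep -qw docker; then
    echo "$u is already in the docker group"
else
    # requires root; the new membership is picked up at the next login
    echo "$u is not in the docker group; run: sudo usermod -aG docker $u"
fi
```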
2. running jenkins in docker
This method is actually more troublesome! I struggled to find a way to use Docker in the Jenkins container, but in the end I got the results I needed, so it was worth the effort.
To run Docker from Jenkins (which is itself a Docker container), you have three options:
1. use dind (docker in docker)
It is pretty straightforward: you run the dind image and connect the Docker client in the Jenkins container to it; without any special permission handling you can use Docker at will.
2. use dood (docker outside of docker)
Mount the Docker socket as a volume in the docker run command for your Jenkins container. Note that you still need one of the two approaches explained above (under running Jenkins manually) to be able to use Docker; it can be a bit tricky, but it is possible.
3. run agent as a docker in a different environment & connect remote agent in jenkins
Finally, it is possible to run the agent separately and connect to it as a remote agent in Jenkins. This does not exactly answer your question, but it is an option you could use.
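For the dood variant in particular, the launch boils down to one extra -v flag when starting the Jenkins container. A rough sketch, assuming the official jenkins/jenkins:lts image (ports and volume name are just examples), guarded so it only acts where a host socket exists:

```shell
# Share the host's Docker socket with the containerized Jenkins (dood).
if [ -S /var/run/docker.sock ]; then
    docker run -d --name jenkins \
        -p 8080:8080 -p 50000:50000 \
        -v jenkins_home:/var/jenkins_home \
        -v /var/run/docker.sock:/var/run/docker.sock \
        jenkins/jenkins:lts
fi
```

Remember that whoever can reach that socket effectively has root on the host, which is why the permission handling above still matters.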
Those options cover just running Docker from Jenkins; you will probably hit further issues after running a container as an agent, such as permission problems inside the agent container itself, which are most likely caused by the agent's user (if you like, you can check the user with the command
docker exec -it [agent container id] whoami
). For example, in this sample the user in the agent is node:
agent {
    docker { image 'node:14-alpine' }
}
steps {
    sh 'npm i -g random'
}
This would throw an error, because the node user does not have permission to install an npm module globally (I know, it is weird!).
So, as luongnv89 mentioned, you could change the user running the container like this:
agent {
    docker {
        image 'node:14-alpine'
        args '-u root'
    }
}
Hope this was helpful in understanding the whole picture. 😊
What worked for me was
node() {
    String jenkinsUserId = sh(returnStdout: true, script: 'id -u jenkins').trim()
    String dockerGroupId = sh(returnStdout: true, script: 'getent group docker | cut -d: -f3').trim()
    String containerUserMapping = "-u $jenkinsUserId:$dockerGroupId"
    docker.image('image')
        .inside(containerUserMapping + ' -v /var/run/docker.sock:/var/run/docker.sock:ro') {
            sh "..."
        }
}
This way the user in the container still uses the jenkins user id and group id, avoiding permission conflicts with shared data, but is also a member of the docker group inside the container, which is required to access the Docker socket (/var/run/docker.sock).
I prefer this solution, as it doesn't require any additional scripts or Dockerfiles.
I just had the exact same issue. You need to add the jenkins user to the docker group:
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
JENKINS_USER=jenkins
if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    sudo groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}
    sudo usermod -aG ${DOCKER_GROUP} ${JENKINS_USER}
fi
# Start Jenkins service
sudo service jenkins restart
After you run the above, pipelines can successfully start Docker containers.
I might have found a reasonably good solution for this.
Setup
I run Jenkins as a container and use it to build containers on the dockerhost it's running on. To do this, I pass /var/run/docker.sock as a volume to the container.
Just to reiterate the disclaimer some other people already stated: Giving access to the docker socket is essentially like giving root access to the machine - be careful!
I assume that you've already installed docker into your Jenkins Image.
Solution
This is based on the fact that the docker binary is not in the first directory of $PATH. We basically place a shell script there that runs sudo docker instead of the plain docker command (and passes the parameters along).
Add a file like this to your jenkins repository and call it docker_sudo_overwrite.sh:
#!/bin/sh
# This is a workaround to add sudo to the docker command, because aliases don't seem to work.
# To be honest, it is a horrible workaround that depends on the order of $PATH.
# This file needs to be placed in /usr/local/bin with execute permissions.
sudo /usr/bin/docker "$@"
Then extend your Jenkins Dockerfile like this:
# Now we need to allow jenkins to run docker commands! (This is not elegant, but at least it's semi-portable...)
USER root
## allowing jenkins user to run docker without specifying a password
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker" >> /etc/sudoers
# Create our alias file that allows us to use docker as sudo without writing sudo
COPY docker_sudo_overwrite.sh /usr/local/bin/docker
RUN chmod +x /usr/local/bin/docker
# switch back to the jenkins-user
USER jenkins
This gives the jenkins service user the ability to run the docker binary as root with sudo (without providing a password). Then we copy our script to /usr/local/bin/docker which "overlays" the actual binary and runs it with sudo. If it helps, you can look at my example on Github.
Same issue here.
[...]
agent { docker 'whatever_I_try_doesnt_work' } // sudo, jenkins user in dockerroot group, etc.
[...]
So my workaround is to add it as one of the steps in the build stage of the pipeline, as follows:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo docker pull python:3.5.1'
            }
        }
    }
}
I'm trying to build and then run a docker image on Jenkins. I have set up Jenkins on ubuntu on an AWS ec2 server. When I try to build I get this error:
For reference, I have also attached my JenkinsFile.
pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                echo 'Starting to build the docker-react-app.'
            }
        }
        stage('building docker image') {
            steps {
                sh 'sudo docker build -t docker-react-app .'
            }
        }
        stage('running docker image') {
            steps {
                sh 'sudo docker run -dp 3001:3000 docker-react-app'
            }
        }
    }
}
I am using Jenkins with the default administrator account.
I have also added the Jenkins user to the docker group, but it hasn't solved the issue. I have verified the membership by running:
cat /etc/group | grep docker
Which outputs:
docker:x:998:ubuntu,jenkins
Let me know if you need any further information.
The error says that your current user can’t access the docker engine, because you’re lacking permissions to access the unix socket to communicate with the engine.
In order to solve this issue :
Run the command below in a shell:
sudo usermod -a -G docker $USER
NOTE: afterwards, do a complete restart of the machine, and restart Jenkins as well.
After this step, completely log out of your account and log back in.
For more info : https://docs.docker.com/engine/install/linux-postinstall/
Try to run your pipeline scripts as the jenkins user on your target server.
Maybe you should add the jenkins user to the root group.
This question is a follow-up to this question:
How to pass jenkins credentials into docker build command?
I am getting the SSH key file from the Jenkins credentials store in my Groovy pipeline and passing it into the docker build command via --build-arg, so that I can check out and build artifacts from private Git repos from within my Docker container.
Credentials store id: cicd-user, which works as expected for checking out my private repos from my Groovy Jenkinsfile:
checkout([$class: 'GitSCM',
    userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]
])
I access it and try to pass the same to docker build command:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
in Dockerfile I do
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But I am seeing the following issue:
Load key "/home/cicd-user/.ssh/id_rsa": invalid format
git@bitbucket.mycomp.co: Permission denied (publickey)
fatal: could not read from remote repository
In the past I have passed the SSH private key as a --build-arg from outside by cat'ing it, like below:
--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"
Should I do something similar?
--build-arg PRIV_KEY_FILE="$(cat $FILE)"
Any idea what might be going wrong, or where I should be looking to debug this correctly?
I ran into the same issue yesterday and I think I've come up with a workable solution.
Here are the basic steps I took, using the sshagent plugin to manage the ssh agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with.
The ssh agent (or alternatively the key) can be made available to specific build steps using the docker build command's --ssh flag. (Feature reference.) It's important to note that, at the current time, you need to set DOCKER_BUILDKIT=1 for this to work. If you forget to do this, the configuration seems to be ignored and the ssh connection will fail. Once that's set, the ssh agent can be forwarded into the docker build.
A cut-down look at the pipeline:
pipeline {
    agent {
        // ...
    }
    environment {
        // Necessary to enable Docker buildkit features such as --ssh
        DOCKER_BUILDKIT = "1"
    }
    stages {
        // other stages
        stage('Docker Build') {
            steps {
                // Start ssh agent and add the private key(s) that will be needed in docker build
                sshagent(['credentials-id-of-private-key']) {
                    // Make the default ssh agent (the one configured above) accessible in the build
                    sh 'docker build --ssh default .'
                }
            }
        }
        // other stages
    }
}
In the Dockerfile, it's necessary to explicitly give the lines that need it access to the ssh agent. This can be done by including --mount=type=ssh in the relevant RUN command.
For me, this looked roughly like this:
FROM node:14
# Retrieve bitbucket host key
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
...
# Mount ssh agent for install
RUN --mount=type=ssh npm i
...
With this configuration, the npm install was able to install a private git repo stored on Bitbucket by utilizing the SSH private key within docker build via sshagent.
After spending a week on this, I found a somewhat reasonable way to do it.
Just add
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
to your Dockerfile, and it will be able to install private packages as well.
Then pass your GIT_ACCESS_TOKEN (you can create one in your GitHub account settings with the proper permissions) when you build your image, like:
docker build --build-arg GIT_ACCESS_TOKEN=yourtoken -t imageNameAndTag .
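One caveat worth adding: a --build-arg is only visible in the Dockerfile after a matching ARG declaration, so the relevant part of the Dockerfile would look roughly like this (the commented npm line is just an illustrative consumer):

```dockerfile
# Declare the build arg before using it; it is empty otherwise.
ARG GIT_ACCESS_TOKEN
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
# From here on, the build can fetch private GitHub dependencies, e.g.:
# RUN npm install
```

Also be aware that build args end up in the image metadata (visible via docker history), so a token passed this way is not really secret.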
I'm trying to run a Python image in Jenkins to perform a series of unit tests with pytest, but I'm getting some strange behavior with Docker.
My Jenkinsfile pipeline is
agent {
    docker { image 'python:3.6-jessie' }
}
stages {
    stage('Run tests') {
        steps {
            withCredentials([
                string(credentialsId: 'a-secret', variable: 'A_SECRET')
            ]) {
                sh label: 'Install dependencies', script: 'pip install -r requirements.txt'
                sh label: 'Execute tests', script: 'pytest mytests.py'
            }
        }
    }
}
However, when I run the pipeline, Docker appears to be executing a very long instruction (with significantly more -e environment variables than I defined as credentials?), followed by cat.
The build then simply hangs and never finishes:
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 996:994
-w /var/lib/jenkins/workspace/myproject
-v /var/lib/jenkins/workspace/myproject:/var/lib/jenkins/workspace/myproject:rw,z
-v /var/lib/jenkins/workspace/myproject#tmp:/var/lib/jenkins/workspace/myproject#tmp:rw,z
-e ******** -e ******** python:3.6-jessie cat
When I SSH into my instance and run docker ps, I see
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
240d00459d92 python:3.6-jessie "cat" About a minute ago Up About a minute kind_wright
Why is Jenkins running cat? Why does Jenkins say I am not running inside a container, when it has clearly created a container for me? And most importantly, why are my pip install -r requirements and other steps not executing?
I finally figured this out. If you have empty global environment variables in your Jenkins configuration, you'll get a malformed docker run command, since Jenkins will write the command with your empty-string environment variable as docker run -e some_env_var=some_value -e = ...
This will cause the container to simply hang.
A telltale sign that this is happening is you'll get the error message:
invalid argument "=" for "-e, --env" flag: invalid environment variable: =
This is initially difficult to diagnose since Jenkins (rightfully) hides your actual credentials with ***, so the empty environment strings do not show up as empty.
You need to check your Jenkins global configuration and make sure you don't have any empty environment variables accidentally defined. If any exist, delete them and rerun.
We have a bunch of nodes running our jobs in Jenkins. I need to build two images from a Jenkins job. To do this, I've read that you should share the Unix socket using a bind mount, and I've done that like this:
agent {
    docker {
        image 'custom-alpine-with-docker'
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}
I then want to use it as follows:
stage('Build and push image(s)') {
    steps {
        dir("${WORKING_DIRECTORY}") {
            script {
                echo 'Building amd64 image'
                amd64image = docker.build("${IMAGE_NAME}:${BUILD_NUMBER}-amd64", "-f ./Dockerfile.amd64 .")
                echo 'Building arm32v7 image'
                arm32v7image = docker.build("${IMAGE_NAME}:${BUILD_NUMBER}-arm32v7", "-f ./Dockerfile.arm32v7 .")
            }
            script {
                docker.withRegistry("${DOCKER_REGISTRY_URL}", "${REPOSITORY_CREDENTIALS}") {
                    amd64image.push()
                    arm32v7image.push()
                }
            }
        }
    }
}
However, as soon as the build command is issued in the jenkins job, I get the following error:
time="2019-01-16T16:55:33Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied"
17:56:59 Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
So a simple search shows the source of this error is that the user trying to access the daemon is not in the docker group, but I don't understand how these group memberships work when sharing a daemon like this.
If I go to the node that failed the build, and check the users in the docker group, I get the following:
$ getent group docker
docker:x:126:inst,jenkins
So how do I allow the user running in the container on that host to access the same daemon?
Small update
I just did it locally using docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker, and when I run docker ps in the container and on my host, I see the same containers running.
Looking at the docker group on my development machine, it looks like this:
docker:x:999:overlord
So I'm guessing I need some special Jenkins-side solution for this to work.
I think I've solved this in a satisfactory way. Here's a step-by-step guide:
Ensure docker is installed in the container that needs to run it
Create the docker group and jenkins user:
CMD DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
groupmod --gid ${DOCKER_GID} ${DOCKER_GROUP} && \
usermod -a -G ${DOCKER_GROUP} ${JENKINS_USER} && \
gosu jenkins sh
It is important to note that here I fetch the group Id of the underlying system that runs docker. As I had already installed docker in my container and the group already existed, I modify the existing group to match the group id of the system. Finally I add the jenkins user to the docker group. Your /etc/group should look something like this in the container after it's run:
docker:x:999:jenkins
In your pipeline, start the agent as follows:
agent {
    docker {
        image 'storemanager-build'
        args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
    }
}
By supplying the -u root flag, you override the user jenkins that jenkins forces on you when you use the declarative pipeline. You have to use root for the CMD command to work and to be able to create the group.
When the image is running, the command will switch to a jenkins user that is allowed to access the underlying unix socket.
Here's an excerpt from my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'build-image'
            args '-u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build jar') {
            steps {
                dir("${WORKING_DIRECTORY}") {
                    script {
                        if (isUnix()) {
                            sh './mvnw --batch-mode clean install'
                        } else {
                            bat 'mvnw.cmd --batch-mode clean install'
                        }
                    }
                }
            }
        }
        stage('Build and push image(s)') {
            steps {
                dir("${WORKING_DIRECTORY}") {
                    script {
                        amd64image = docker.build("${IMAGE_NAME}", "-f ./Dockerfile.amd64 .")
                        arm32v7image = docker.build("${IMAGE_NAME}", "-f ./Dockerfile.arm32v7 .")
                    }
                    script {
                        docker.withRegistry("${DOCKER_REGISTRY_URL}", "${STOREMANAGER_REPOSITORY_CREDENTIALS}") {
                            amd64image.push("${BUILD_NUMBER}-amd64")
                            arm32v7image.push("${BUILD_NUMBER}-arm32v7")
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            sh "chmod -R 777 ." // Jenkins can't clean built resources without this as we run the container as root
            cleanWs()
        }
    }
}
And the resources that helped me:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
https://github.com/jenkinsci/docker/issues/196
Hope this helps.
I'm trying to build a jekyll website via my Jenkins server (which runs inside a container) and I have a stage in my Jenkinsfile that looks like this:
stage('Building Website') {
    agent {
        docker {
            image 'jekyll/jekyll:builder'
        }
    }
    steps {
        sh 'jekyll --version'
    }
}
The very first time I run my job, it pulls the jekyll Docker image and runs fine (although it fetches a bunch of gems before running jekyll, which doesn't happen when I run the container manually outside Jenkins), but subsequent jobs fail with this error:
jekyll --version
/usr/jekyll/bin/jekyll: exec: line 15: /usr/local/bundle/bin/jekyll: not found
Any ideas what I'm doing wrong here?
As you can see in the Jenkins log file, Jenkins runs docker with the -u 1000:1000 argument. Since this user does not exist in the jekyll/jekyll image, the command fails with the error .../bin/jekyll: not found.
Here is a sample Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'jekyll/jekyll:3.8'
            args '''
                -u root:root
                -v "${WORKSPACE}:/srv/jekyll"
            '''
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    cd /srv/jekyll
                    jekyll --version
                '''
            }
        }
    }
}
To add to the other answer, note that the containerized Jenkins doesn't contain the docker binary, so docker commands will still fail.
A few solutions:
Make a Dockerfile that inherits from the Jenkins image and installs Docker as well, creating a new image.
Manually install Docker inside the container. This will work until you pull a new image, and then you'll have to do it again.
Open an interactive terminal into the jenkins container
docker container exec -it -u root <container id> bash
Then install docker
curl https://get.docker.com/ > dockerinstall && chmod 777 dockerinstall && ./dockerinstall
Exit the container and set perms on docker.sock
sudo chmod 666 /var/run/docker.sock
Finished!