I am using Jenkins for my CI/CD and want to run most of my jobs in Docker.
I installed the "CloudBees Docker Custom Build Environment Plugin", which allows me to run my jobs inside a given Docker container, as below:
When I check logs I see this:
docker exec --tty --user 996:994 890fd5fc166283923e61ea515d5f49a149e508c231281c39dc05e14d6ab43a09
The uid 996 is the jenkins user on the host, and it does not even exist inside the container.
That is a problem because I can't do anything once inside the container (apt update, apt install).
Do you have any idea how, using that plugin, I can run as the real user inside the container? (In this case the user must be "node".)
Thanks
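One common workaround for the missing-uid problem (a sketch, not specific to that plugin; the image name and the uid/gid 996:994 from the log above are the only inputs, everything else is an assumption) is to bake a matching user into the image so the id the plugin passes with --user actually resolves:

```dockerfile
FROM node:latest

# Create a user whose uid/gid matches the host's jenkins user (996:994),
# so `whoami` and $HOME work when the job is exec'd as --user 996:994.
RUN groupadd -g 994 jenkins \
 && useradd -u 996 -g 994 -m -s /bin/bash jenkins
```

Note that apt update / apt install need root regardless of whether the uid resolves, so package installation still belongs in the Dockerfile (which builds as root), not in the job itself.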
I have used Docker extensively for a couple of years now, and we always just added the user to the docker group to use the client.
Recently, on Ubuntu 18.04 with Docker 18.09, I need to call newgrp docker in each shell before I get access to the Docker socket!?!
Why is it like this? It sucks!
This is needed within a Jenkins container. The easy workaround is to run usermod -g docker jenkins inside the container. However, this was NOT necessary a year ago :(
> Recently, on a ubuntu 18.04 using a docker 18.09 I need to call newgrp docker in each shell before I get access to the docker socket!?!

Have you logged out and back in again since adding yourself to the docker group?
> This is needed within a jenkins container. the easy workaround is to do a usermod -g docker jenkins inside the container. However, this was NOT necessary one year ago :(
Provided you mount the Docker socket into your Jenkins container, this should not be required. Which user does Jenkins run as?
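For the host-side case, the usual fix (a sketch; assumes sudo access, and note the -aG here versus the -g used above) is:

```shell
# -aG *appends* the supplementary group; plain -g would replace the
# user's primary group, which is rarely what you want
sudo usermod -aG docker "$USER"

# Group membership is only re-read at login, so log out and back in
# (or run `newgrp docker` as a stopgap for the current shell only)
id -nG   # after re-login, "docker" should appear in this list
```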
I am trying to configure a Jenkins slave as a Docker container. I have enabled the Docker API and the connection to the API works fine.
I have added the configuration for the Docker template and the Docker cloud, but it seems that my job does not start.
I can see the container getting created on my Docker node, but the job does not start.
[Docker cloud configuration screenshot]
[Docker template screenshot]
One thing to note: when I run the container manually on the Docker node and then try to ssh in using the same credentials that I am using in Jenkins, I can ssh into the container.
This message of "Jenkins doesn't have label XXXX" is rather misleading and unhelpful.
You think the problem is something you did wrong in your configuration, but when you find out what happened, it turns out to have nothing to do with Jenkins or how you set up the Docker plugin.
I ran into the same problem as you, and in my case the cause was the Docker installation I was using.
The steps I followed to fix it were:
(I was using CentOS 7, Jenkins 2.1.38, Docker 1.13.1.)
1) Go to your Jenkins logs (on CentOS they are in /var/log/jenkins.log).
2) Looking through the logs you are going to find the actual problem. For instance, for me it was this:
com.github.dockerjava.api.exception.NotFoundException: {"message":"driver failed programming external connectivity on endpoint happy_heyrovsky (cbfa0d43f8c89d2531323249468503be11e9dd603597a870530d28540c662695): exec: \"docker-proxy\": executable file not found in $PATH"}
As you can see, the problem is that Docker is not able to find docker-proxy. How do you fix this?
Go to /usr/libexec/docker and you will see docker-proxy-current. So what you have to do is create a symlink:
sudo ln -s docker-proxy-current docker-proxy
That's all. After making this change I executed my build on Jenkins and it worked.
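Putting the fix together (a sketch for that CentOS 7 setup; restarting the daemon afterwards is my assumption and may not be strictly required):

```shell
cd /usr/libexec/docker
sudo ln -s docker-proxy-current docker-proxy   # let docker find "docker-proxy"
sudo systemctl restart docker                  # pick up the change
```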
I've got a Jenkins declarative pipeline build that runs Gradle and uses a Gradle plugin to create a Docker image. I'm also using a dockerfile agent directive, so the entire thing runs inside a Docker container.

This was working great with Jenkins itself installed in Docker (I know, that's a lot of Docker). I had Jenkins installed in a Docker container on Docker for Mac, with -v /var/run/docker.sock:/var/run/docker.sock (DooD) per https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/. With this setup, the pipeline's Docker agent ran fine, and the docker build command within that agent ran fine as well. I assume Jenkins also mounted the Docker socket into its inner Docker container.
Now I'm trying to run this on Jenkins installed on an EC2 instance with Docker installed properly. The jenkins user has the docker group as its primary group, and it is able to run "docker run hello-world" successfully. My pipeline build starts the Docker agent container (based on the gradle image with various things added), but when Gradle attempts to run the docker build command, I get the following:
* What went wrong:
Execution failed for task ':docker'.
> Docker execution failed
Command line [docker build -t config-server:latest /var/lib/****/workspace/nfig-server_feature_****-HRUNPR3ZFDVG23XNVY6SFE4P36MRY2PZAHVTIOZE2CO5EVMTGCGA/build/docker] returned:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Is it possible to build docker images inside a docker agent using declarative pipeline?
Yes, it is.
The problem is not with Jenkins' declarative pipeline, but with how you're setting up and running things.
From the error above, it looks like there's a missing permission that needs to be granted.
Maybe if you share what your configuration looks like and how you're running things, more people can help.
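For reference, the usual Docker-outside-of-Docker wiring on a plain host looks roughly like this (a sketch; the docker:latest image and the gid lookup are illustrative assumptions, not taken from the question):

```shell
# Mount the host's Docker socket into the agent container, and join the
# socket's group so a non-root container user may write to it
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  docker:latest docker version
```

If that works but the pipeline agent still fails, the agent container is probably missing either the socket mount or the group membership.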
I want to run a docker image create command from a Jenkins job that runs natively on my machine, with Docker running in a VM.
I've installed the Docker Build Step plugin, but on the Manage Jenkins page the connection test fails when I try to configure Docker using the URL and version; it says:
Test Connection Something went wrong, cannot connect to
http://192.168.99.100:2376, cause: null
I've got this docker url from the docker-machine env command.
In the version field I tried both the REST API version and the Docker version, but with no success.
Is it possible to call Docker from outside the Docker VM, from a Jenkins job? If yes, how do I configure it in Jenkins?
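It is possible, but one thing to check first: 2376 is Docker's TLS port, so a plain http:// URL will generally be refused; the client needs the certificate settings that docker-machine reports. A quick sanity check from the host (a sketch, assuming the machine is named default):

```shell
# Load DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH into this shell
eval "$(docker-machine env default)"
env | grep DOCKER_   # shows the URL and cert directory the plugin also needs
docker version       # should now reach the daemon inside the VM
```

In the Jenkins plugin the Docker URL would then be https://192.168.99.100:2376, and the plugin needs to be given the client certificates from DOCKER_CERT_PATH (how exactly depends on the plugin version).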
I am new to Bamboo and am trying to get the following process flow working with Bamboo and Docker:
Developer commits code to a Bitbucket branch
Build plan detects the change
Build plan then starts a Docker container on a dedicated AWS instance where Docker is installed. In the Docker container a remote agent is started as well. I use the atlassian/bamboo-java-agent:latest docker container.
Remote agent registers with Bamboo
The rest of the build plan runs in the container
Container and agent get removed when the plan completes
I set up a test build plan, and in the plan my first task starts a Docker instance as follows:
sudo docker run -d --name "${bamboo.buildKey}_${bamboo.buildNumber}" \
-e HOME=/root/ -e BAMBOO_SERVER=http://x.x.x.x:8085/ \
-i -t atlassian/bamboo-java-agent:latest
The second task gets the source code and deploys it. The third task runs tests, and the fourth task shuts down the container.
There are other agents online on Bamboo as well, and my build plan sometimes uses those instead of the Docker container that I started as part of the build plan.
Is there a way for me to do the above?
I hope it all makes sense. I am truly new to this and any help will be appreciated.
We (Atlassian Build Engineering) have created a set of plugins to run Docker based agents in a cluster (ECS) that comes online, builds a single job and then exits. We've recently open sourced the solution.
See https://bitbucket.org/atlassian/per-build-container for more details.
First you need to make sure the "main" Docker container is not exiting when you run it.
Check with:
docker ps -a
You should see that its status is "Up", i.e. it is still running.
Now, assuming it is running, you can execute commands inside the container.
To get a shell inside the container:
docker exec -it containerName bash
To execute a command inside the container from outside it (the -it flags are only needed for interactive commands):
docker exec -it containerName commandToExecuteInsideTheContainer
You could also, as part of the container's Dockerfile, COPY a script into it that does something.
Then you can execute that script from outside the container using the approach above.
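The COPY-a-script approach can be sketched like this (report.sh and its path are hypothetical names, not from the question):

```shell
# In the image's Dockerfile:
#   COPY report.sh /usr/local/bin/report.sh
#   RUN chmod +x /usr/local/bin/report.sh

# Then, from outside the running container:
docker exec containerName /usr/local/bin/report.sh
```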
Hope this gives some insight.