Use docker command in jenkins container

I'm on CentOS, with Docker installed via yum, and I hit a common error when using docker inside a container.
My docker run command:
docker run -it -d -u root --name jenkins3 -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker docker.io/jenkins/jenkins
But I get an error when I run docker info inside the jenkins container:
/usr/bin/docker: 2: .: Can't open /etc/sysconfig/docker

Exposing the host's docker socket to your jenkins container will work with
-v /var/run/docker.sock:/var/run/docker.sock
but you will need to have the docker executable installed in your jenkins image via a Dockerfile.
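For example, here is a minimal sketch of such a Dockerfile, assuming a Debian-based jenkins base image where the docker.io client package is available (adjust the package name, or install the static docker CLI, if your base image differs):
FROM docker.io/jenkins/jenkins
USER root
# install only the docker client; the daemon still runs on the host via the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins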
It is likely the example you are looking at already uses the docker image (which has the executable installed). A quick Google search brings up https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/, whose example uses that image:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
Also note that the same post describes your exact issue with mounting the binary:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.

Related

Install Docker in Alpine Docker

I have a Dockerfile with a classic Ubuntu base image and I'm trying to reduce the size.
That's why I'm using an Alpine base.
In my Dockerfile, I have to install Docker, so it's Docker in Docker.
FROM alpine:3.9
RUN apk add --update --no-cache docker
This works well; I can run docker version inside my container, at least for the client. For the server, I get the classic Docker error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I know in Ubuntu after installing Docker I have to run
usermod -a -G docker $USER
But what about in Alpine? How can I avoid this error?
PS:
My first idea was to re-use the Docker socket by bind-mounting /var/run/docker.sock:/var/run/docker.sock for example and thus reduce the size of my image even more, since I don't have to reinstall Docker.
But as bind mounts are not allowed in a Dockerfile, do you know if my idea is possible and how to do it? I know it's possible with docker-compose, but I have to use a Dockerfile only.
Thanks
I managed to do it the easy way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker --privileged docker:dind sh
I am using this command on my test environment!
You can do that, and your first idea was correct: you just need to expose the docker socket (/var/run/docker.sock) to the "controlling" container. Do that like this:
host:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
<my_image>
host:~$ docker exec -u root -it <container id> /bin/sh
Now the container should have access to the socket (I am assuming here that you have already installed the necessary docker packages inside the container):
root@guest:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 my_image "/sbin/tini -- /usr/…" 8 minutes ago ...
Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.

jenkins in docker - Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I'm running https://hub.docker.com/r/jenkinsci/blueocean/ in docker. Trying to build a docker image in jenkins.
but I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Clearly the jenkins container does not have access to the docker daemon.
I confirmed this by running:
docker exec -it db4292380977 bash
docker images
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
"db4292380977" is the running container. It shows the same error.
Question:
how do I allow access to docker in the jenkins container?
The docker client is installed on the jenkinsci/blueocean image, but not the daemon. The docker client talks to a daemon (by default via the socket unix:///var/run/docker.sock) and needs one in order to work; you can read Docker Architecture for more info.
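A quick way to see which half is missing is docker version (standard docker CLI behaviour, not specific to this image): the Client section prints normally even when no daemon is reachable, while the Server section shows the connection error.
docker version
# Client: version and API version print fine
# Server: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?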
What you can do:
Use a docker-in-docker (DinD) image
The library docker image provides a way to run a Docker daemon in Docker; you can then use it from another container. For example, using the plain docker CLI:
docker run --name docker-dind --privileged -d docker:stable-dind
docker run --name jenkins --link=docker-dind -d jenkinsci/blueocean
docker exec jenkins docker -H docker-dind images
REPOSITORY TAG IMAGE ID CREATED SIZE
The Docker daemon runs in the docker-dind container and can be reached using that same hostname. You just need to point the docker client at the daemon host (-H docker-dind in the example; you can also use the DOCKER_HOST env variable as described in the docs).
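For example, here is a sketch of the same setup using the DOCKER_HOST variable instead of -H (this assumes the docker-dind container above is running and that its daemon listens on the default TCP port 2375 without TLS, as older docker:dind images did out of the box):
docker run --name jenkins --link=docker-dind -e DOCKER_HOST=tcp://docker-dind:2375 -d jenkinsci/blueocean
# every docker command inside the jenkins container now talks to the dind daemon
docker exec jenkins docker images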
Mount the host machine's /var/run/docker.sock in your container
As described in Herman Garcia's answer:
docker run -p 8080:8080 --user root \
-v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
You need to mount your local /var/run/docker.sock and run the container as the root user.
NOTE: this might be a security flaw so be careful who has access to the jenkins container
docker run -p 8080:8080 --user root \
-v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
you will be able to execute docker inside the container
➜ ~ docker exec -it gracious_agnesi bash
bash-4.4# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                               NAMES
c4dc85b0d88c   jenkinsci/blueocean   "/sbin/tini -- /usr/…"   18 seconds ago   Up 16 seconds   0.0.0.0:8080->8080/tcp, 50000/tcp   gracious_agnesi
Just try the same commands with sudo at the beginning.
For example
sudo docker images
sudo docker exec -it db4292380977 bash
To avoid using sudo in the future, you should run this command on your Unix OS:
sudo usermod -aG docker <your-user>
Replace <your-user> with the user you are currently using. Remember to log out and back in for this to take effect! More information is in the Docker installation docs.

How can I call docker daemon of the host-machine from a container?

Here is exactly what I need. I already have a project which starts up a particular set of docker images, and it works completely fine.
But I want to create another image whose purpose is to build this project from scratch, with all the dependencies inside. The problem is that, in order to create docker images during that build, we need to access the docker daemon running on the host machine from the building container.
Is there any way of doing this?
If you need to access docker on the host from inside a container, you can simply expose the Docker socket inside the container using a host mount (-v /host/path:/container/path on the docker run command line).
For example, if I start a new fedora container exposing the docker socket on my host:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock fedora bash
Then install docker inside the container:
[root@d28650013548 /]# yum -y install docker
...many lines elided...
I can now talk to docker on my host:
[root@d28650013548 /]# docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 530
Server Version: 17.05.0-ce
...
You can let the container access the host's docker daemon through the docker socket and "trick" it into having the docker executable without installing docker inside it, like this (with an Ubuntu Xenial container as the example):
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial
Inside it, you can launch any docker command, for example docker images, to check that it's working.
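Keep in mind that containers started this way run on the host's daemon, so they are siblings of your container rather than children. A quick check, using hello-world as an arbitrary test image:
# run this inside dockerInsideContainer; the image is pulled by, and the container runs on, the host's daemon
docker run hello-world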
If you see an error like this: docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory, you should install a package called libltdl7 inside the container. For example, you can create a Dockerfile for the container, or install it directly on run:
FROM ubuntu:xenial
RUN apt update && apt install -y libltdl7
or
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial bash -c "apt update && apt install -y libltdl7 && bash"
Hope it helps

Starting Jenkins in Docker Container

I want to run Jenkins in a Docker Container on Centos7.
I saw the official documentation of Jenkins:
First, pull the official jenkins image from the Docker repository.
docker pull jenkins
Next, run a container using this image and map the data directory from the container to the host; e.g. in the example below, /var/jenkins_home from the container is mapped to the jenkins/ directory under the current path on the host. Jenkins' port 8080 is also exposed to the host as 49001.
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
But when I try to run the docker container I get the following error:
/usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
Can someone tell me how to fix this problem?
The official Jenkins Docker image documentation says regarding volumes:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
This will store the jenkins data in /your/home on the host. Ensure that /your/home is accessible by the jenkins user in container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run.
This information is also found in the Dockerfile.
So all you need to do is ensure that the directory $PWD/jenkins is owned by UID 1000:
mkdir jenkins
chown 1000 jenkins
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
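Alternatively, as the quoted documentation mentions, you can keep the directory owned by your own user and pass that UID with -u instead of chown-ing it to 1000 (a sketch, assuming $PWD/jenkins is owned by the user running the command):
mkdir -p jenkins
docker run -d -p 49001:8080 -u $(id -u):$(id -g) -v $PWD/jenkins:/var/jenkins_home -t jenkins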
The newest Jenkins documentation says to use Docker 'volumes'. Docker is a bit tricky here: with the -v option, a full path means a bind mount, while a bare name means a named volume.
docker run -d -p 49001:8080 -v jenkins-data:/var/jenkins_home -t jenkins
This command will create a docker volume named "jenkins-data" and you will no longer see the error.
Link to manage volumes:
https://docs.docker.com/storage/volumes/
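If you want to pre-create or inspect that named volume, the standard docker volume commands work:
docker volume create jenkins-data    # optional: docker run creates it automatically if it does not exist
docker volume inspect jenkins-data   # shows where the data lives on the host (the Mountpoint field)
docker volume ls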

Boot2Docker to Google Compute Engine VM: saving Docker container

I am running Boot2Docker v1.0.1 on Windows, and wish to fire up a Docker container I have created on a Google Compute Engine VM.
In order to do so, I need to save the container and upload it to Google Cloud Storage.
I issue the following command:
docker save --output=mycontainer.tar mycontainer:latest
The command completes without error. However, I cannot find the mycontainer.tar file anywhere on my hard drive.
Does anyone have any experience with this? If not, is there a better way to run containers on GCE VMs?
You can run google/docker-registry locally to push your container images to GCS.
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
And then run it on GCE to pull your containers from GCS.
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
I understand that you are using boot2docker on Windows.
On a similar setup, using OSX and boot2docker 1.1.0, the following works:
docker save --output mycontainer.tar mycontainer:latest
As also does redirecting standard output:
docker save mycontainer:latest > mycontainer.tar
GCE now allows you to store docker images for your projects using the gcloud command.
You can now run:
$ gcloud preview docker push gcr.io/YOUR-PROJECT/IMAGE-NAME
Source: https://cloud.google.com/tools/container-registry/#pushing_to_the_registry
