Docker - Is volume mapping of socket file an override behavior? - docker

Below is a code snippet from the Jenkins image's Dockerfile, taken from here:
# Install Docker Engine
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee /etc/apt/sources.list.d/docker.list && \
apt-get update -y && \
apt-get purge lxc-docker* -y && \
apt-get install docker-engine=${DOCKER_ENGINE:-1.10.2}-0~trusty -y && \
usermod -aG docker jenkins && \
usermod -aG users jenkins
that installs Docker Engine within the Jenkins image. My understanding is that /var/run/docker.sock is created within the Jenkins container, due to the installation of Docker Engine.
Below is the volume mapping syntax taken from here:
volumes:
- jenkins_home:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
that launches the Jenkins container (above) on an EC2 host.
The EC2 host also has a Docker daemon running.
So there is a Docker daemon running on the EC2 host, and there is also a Docker daemon running within the Docker container (Jenkins).
With this syntax (/var/run/docker.sock:/var/run/docker.sock) in the docker-compose file (above) for the socket files,
does the Docker daemon within the Jenkins container override its own socket file with the socket file present on the EC2 host? If yes, what are its implications?

/var/run/docker.sock in the container is the host's Docker socket, and nothing else. This is because:
A Docker container doesn't run any programs other than what's explicitly started in its entrypoint and/or command, and that's almost always just a single application program.
You're presumably not going out of your way to start the Docker daemon, so it's installed but not running.
Unix socket files won't get created until a daemon starts up and calls bind(2) to attach a socket to that specific file.
The docker run -v option will always "push" the host's content into the container, and this happens before any of the container processes run.
So in the scenario you're describing, it can't be anything other than the host Docker socket, because there isn't a second Docker daemon.
Let's say for the sake of argument that you actually are starting a second daemon this way.
The order of operations here is (1) Docker sets up the container filesystem, (2) Docker starts running the entrypoint, (3) the entrypoint starts the daemon, (4) the daemon tries to create the socket file. At the point where the daemon starts up, its socket file will already exist. I believe that will cause the bind(2) call to fail with EADDRINUSE, and the daemon won't start up. Hopefully this will cause your container startup to fail.
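You can see the "push happens first" behavior without any daemon involved at all; this quick check with the stock ubuntu image shows the host's socket already in place for the very first process in the container:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ubuntu \
ls -l /var/run/docker.sock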
You could legitimately want to start a daemon in a container that publishes a Unix socket you want to access from the host. To make this work you need to mount a directory into the container, and point the daemon at it. It probably can't be /var/run on either side (there's a lot of stuff in the host's /var/run, and mounting the directory hides the existing contents of the container's /var/run, which you may also need). And it must be a directory, not the socket filename itself: if the path doesn't exist on the host, Docker will create an empty directory there, so something will already exist in the container at the socket's path and the daemon's bind(2) will fail.
So if you wanted to start a hypothetical foo daemon inside a container, it would look roughly like
# Bind mount a directory for the socket, then run the daemon in the
# foreground (don't daemonize; that keeps the container alive) and
# point its socket at the shared directory.
docker run \
--name foo \
-v $PWD/socket:/socket \
foo \
food --foreground --bind unix:///socket/foo.sock
On the host you'd need to set FOO_SOCKET_PATH=$PWD/socket/foo.sock or otherwise point to this specific file.
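If the hypothetical daemon spoke HTTP over that socket, a generic way to poke it from the host would be curl's --unix-socket option (the /status endpoint here is made up for illustration):

curl --unix-socket "$PWD/socket/foo.sock" http://localhost/status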

From the docs:
Docker Engine is a client-server application.
Please note that when you install docker-engine you install the Docker daemon (server) and the docker CLI (client).
This means that even if the Docker daemon isn't running, you will still be able to run docker CLI commands:
docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
The Jenkins image you shared doesn't have instructions to run Docker Engine, so I assume it's not running inside the container.
The /var/run/docker.sock:/var/run/docker.sock volume maps the Docker host's engine socket into the container.
So docker CLI commands run within the container control the Docker Engine running on the Docker host.
This makes sense if you do CI/CD on your host from within containerized Jenkins: Jenkins pipelines may use docker, docker-compose and docker swarm commands to run tests, build artifacts and deploy new versions of applications.
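As a rough sketch, a Jenkins job inside the container can then run ordinary docker commands, and they all act on the host's engine (the image name and test script below are made up for illustration):

# from a Jenkins job, against the host's Docker engine
docker build -t myapp:${BUILD_NUMBER:-dev} .
docker run --rm myapp:${BUILD_NUMBER:-dev} ./run-tests.sh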

Related

Run Docker as jenkins-agent, in a docker-container, as non-root user

Similar Questions
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
How to solve Docker permission error when trigger by Jenkins
https://github.com/jenkinsci/docker/issues/263
Dockerfile
FROM jenkins/jenkins:lts
USER root
RUN apt-get -qq update && apt-get -qq -y install --no-install-recommends curl
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker jenkins
USER jekins
Terminal command
docker run -p 8080:8080 -p 50000:50000 \
-v jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-ti bluebrown/docker-in-jenkins-in-docker /bin/bash
Inside the container
docker image ls
Output
Got permission denied while trying to connect to the Docker daemon
socket at unix:///var/run/docker.sock: Get
http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/json: dial unix
/var/run/docker.sock: connect: permission denied
When I comment out the last line of the Dockerfile, to run the instance as the root user,
# USER jenkins
I can access the Docker socket without issues, for obvious reasons. However, I think this is not a proper solution. That is why I want to ask if anyone has managed to access the Docker socket as a non-root user.
You've added the docker group to the Jenkins user inside the container. However, that will not necessarily work because the mapping of users and groups to uids and gids can be different between the host and container. That's normally not an issue, but with host volumes and other bind mounts into the container, the files are mapped with the same uid/gid along with the permissions. Therefore, inside the container, the docker group will not have access to the docker socket unless the gid happens to be identical between the two environments.
There are several solutions, including manually passing the host gid as the gid to use inside the container. Or you can get the gid of the host and build the image with that value hard coded in.
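The first option can be done entirely at run time, since docker run accepts numeric gids via --group-add (the image name below is a placeholder):

# give the container user the host gid that owns the socket
docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
--group-add "$(stat -c '%g' /var/run/docker.sock)" \
my-jenkins-image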
My preferred solution is to start an entrypoint as root, fix the docker group inside the container to match the gid of the mounted docker socket, and then switch to the Jenkins user to launch the app. This works especially well in development environments where control of uid/gids may be difficult. All the steps/scripts for this are in my repo: https://github.com/sudo-bmitch/jenkins-docker
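A minimal sketch of that entrypoint idea (the real, tested scripts are in the repo above; this assumes gosu is installed in the image and that no other group already uses the socket's gid):

#!/bin/sh
# entrypoint: run as root, align the container's docker group with the
# gid that owns the mounted socket, then drop to the jenkins user
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
groupmod -g "$SOCK_GID" docker
usermod -aG docker jenkins
exec gosu jenkins "$@"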
For production in a controlled environment, I try to get standardized uid/gid values, both on the host and in containers, for anything that mounts host volumes. Then I can run the container without the root entrypoint steps.
In your Dockerfile you are enabling docker access for the user jenkins but dropping down to the user jekins, not jenkins. Is this just a typo on this page?
I use this approach as you've described and it works correctly.

Airflow inside docker running a docker container

I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a Docker container. How do I do that? Do I need to install Docker on my Airflow container? And what is the next step after that? I have a YAML file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
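# note: 999 is assumed to match the docker group's gid on the host;
# adjust it so the mounted docker.sock is accessible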
RUN groupadd --gid 999 docker \
&& usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04, using a modified puckel/airflow Docker image that runs Airflow.
Things you will need to change in the Dockerfile:
Add USER root at the top of the Dockerfile:
USER root
Mounting the docker binary was not working for me, so I had to install the Docker binary in my container. Install Docker from Docker Inc. repositories:
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into the script directory in the folder where the Dockerfile is located. This starts the Docker daemon inside the Airflow container. Install the magic wrapper:
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add airflow as a user to the docker group, so that Airflow can run Docker jobs:
RUN usermod -aG docker airflow
Switch back to the airflow user:
USER airflow
In your Docker Compose file (or the command-line arguments to docker run), mount the Docker socket into the Airflow container:
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go!
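To sanity-check the result from inside the Airflow container, a couple of stock commands will do (hello-world is Docker's standard test image):

docker version
docker run --rm hello-world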
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by Docker; this depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will indicate what you need to attach.
Your Airflow container will then have full access to Docker running on the host.
So if you launch Docker containers, they will run on the host that runs the Airflow container.

Running docker command from docker container

I need to write a Dockerfile that installs Docker in container-a, because container-a needs to execute a docker command against container-b, which runs alongside container-a.
My understanding is you're not supposed to use "sudo" when writing the Dockerfile.
But I'm getting stuck: what user do I assign to the docker group? When you run docker exec -it, you are automatically root.
sudo usermod -a -G docker whatuser?
Also (and I'm trying this out manually inside container-a to see if it even works), you have to run newgrp docker to activate the group changes. Any time I do that, I end up in a new shell as if I had sudo'ed when I haven't. Does that make sense? The symptom is: when I go to exit the container, I have to exit twice (as if I had changed users).
What am I doing wrong?
If you are trying to run the containers alongside one another (not a container inside a container), you should mount the Docker socket from the host system and execute commands against the other containers that way:
docker run --name containera \
-v /var/run/docker.sock:/var/run/docker.sock \
yourimage
With the Docker socket mounted you can control Docker on the host system.
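For example, assuming container-b was started with --name containerb, container-a can address it through the shared socket (the echo command is just a stand-in for whatever container-b actually provides):

# from inside container-a
docker exec containerb echo "hello from a sibling"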

Installing systemd inside a ubuntu14.04 docker container - Is it possible?

I am trying to install and configure OpenStack (DevStack) inside a Docker container. While installing I am getting the following error:
"Failed to get D-Bus connection: No connection to service manager."
Later, I checked and found that it's because of a systemd problem. When I tried executing the command systemd:
$> systemd
I am getting the following output:
Trying to run as user instance, but the system has not been booted with systemd.
Here is the setup I am using:
Host machine OS: Ubuntu 14.04,
Docker version: Docker version 1.12.4, build 1564f02,
Docker container OS: Ubuntu 14.04
Can anyone help with this? Thanks in advance.
First of all, systemd expects /sys/fs/cgroup to be mounted. Additionally, you must make the container privileged, or else systemd will fail to start:
docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged -it --rm ubuntu
Then you can go ahead and run /bin/systemd --system --unit=basic.target from bash, and it should run normally (with some errors, of course, because Docker does not virtualize an entire system, and the library/ubuntu image contains no more than the minimum required to run properly).
After you have systemd running (semi-)properly, you can simply use docker stop to stop the container.
This post is based on my own research, a few weeks of it too, for a project I like to call initbuntu (originally I tried to get init running, but running systemd directly was the only solution after all my failed tries). The container will be available on Docker Hub as logandark/initbuntu, Soon™. For now, a broken copy (or not broken, I dunno) is available there at the time of posting.
Sources (kinda):
/sys/fs/cgroup: Here
systemd --system: A StackOverflow post I lost the link to.
Existing DevStack on Docker Project
First of all, you can get a preconfigured Dockerfile with DevStack Ocata/Pike on Docker here. The repository also contains further information on DevStack and containers.
Build Your Own Image
Running systemd in Docker is certainly possible and has been done before. I found Ubuntu 16.04 LTS is a good foundation for the Docker host as well as the base image.
Your systemd/DevStack Dockerfile needs this configuration, which also cleans up services you probably don't want inside a Docker container:
FROM ubuntu:16.04
#####################################################################
# Systemd workaround from solita/ubuntu-systemd and moby/moby#28614 #
#####################################################################
ENV container docker
# No need for graphical.target
RUN systemctl set-default multi-user.target
# Gracefully stop systemd
STOPSIGNAL SIGRTMIN+3
# Cleanup unneeded services
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
# Workaround for console output error moby/moby#27202, based on moby/moby#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
If you intend to run OpenStack/DevStack inside said container, it might save you lots of trouble to start it privileged instead of defining separate security capabilities and volumes:
docker run \
--name devstack \
--privileged \
--detach \
image
To get a bash inside your new systemd container try this:
docker exec \
--tty \
--interactive \
devstack \
bash
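Once inside, stock systemctl commands will confirm that systemd is actually managing the container; expect a state of degraded rather than running, given the services removed above:

systemctl is-system-running
systemctl list-units --type=service --state=running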
Systemd should work inside a properly configured container. You can run the container in privileged mode to run systemd.
"Systemd cannot run without SYS_ADMIN; fewer privileges than that won't work (see #2296 (comment)). Yes, it's possible to make it "easier" (a tool that automatically sets these), but it'll still need certain privileges."
See this GitHub issue.
After all, Docker is an application container: it runs the process you specify at run time, and after that process completes, the container exits. You may need an OS container or a virtual machine for your use case. See OS container vs. application container here.
In most cases the error message comes up because an installer program has tried to run "systemctl start <service>". Unlike init scripts, the systemctl command will not try to execute the start script directly; instead, it tries to contact the systemd daemon to execute the start sequence of the service. So all services have a common parent in the systemd daemon.
It can be quite overdone to run a systemd daemon inside a Docker container just to start a service. You could use the docker-systemctl-replacement script to overwrite /usr/bin/systemctl, in which case the target service is started without the help of a systemd daemon; it runs the ExecStart from the *.service file directly.
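A hedged sketch of that approach in a Dockerfile (assuming the project's standalone systemctl.py script has been copied into the build context; the exact filename depends on the project's layout):

# replace systemctl with the standalone script, so "systemctl start <service>"
# works without a running systemd daemon
COPY systemctl.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl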

How to run a Docker host inside a Docker container?

I have a Jenkins container running inside Docker and I want to use this Jenkins container to spin up other Docker containers when running integration tests etc.
So my plan was to install Docker in the container but this doesn't seem to work so well for me. My Dockerfile looks something like this:
FROM jenkins
MAINTAINER xxxx
# Switch user to root so that we can install apps
USER root
RUN apt-get update
# Install latest version of Docker
RUN apt-get install -y apt-transport-https
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
RUN sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
RUN apt-get update
RUN apt-get install -y lxc-docker
# Switch user back to Jenkins
USER jenkins
The jenkins image is based on Debian Jessie. When I start a bash terminal inside a container based on the generated image and run, for example:
docker images
I get the following error message:
FATA[0000] Get http:///var/run/docker.sock/v1.16/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
I suspect that this could be because the docker service is not started. But my next problem arises when I try to start the service:
service docker start
This gives me the following error:
mount: permission denied
I've tracked the error in /etc/init.d/docker down to this line:
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
So my questions are:
How do I actually start a Docker host inside a container? Or is this something that should be avoided?
Is there something special I need to do if I'm running Mac and boot2docker?
Perhaps I should instead link to the Docker on the host machine as described here?
Update: I've tried running the container as both user root and jenkins. sudo is not installed.
A simpler alternative is to mount the Docker socket and create sibling containers. To do this, install Docker in your image and run something like:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock myimage
In the container you should now be able to run docker commands as if you were on the host. The advantage of this method is that you don't need --privileged and you get to use the image cache from the host. The disadvantage is that you can see all running containers, not just the ones created from within this container.
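For instance, inside such a container the CLI talks straight to the host's daemon:

# run inside the container started above
docker ps    # lists ALL containers on the host, including this one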
1.- The first container you start (the one you launch the other one inside of) must be run with the --privileged=true flag.
2.- I think there is not.
3.- Using the privileged flag, you don't need to mount the Docker socket as a volume.
Check this project to see an example of all this.
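For the --privileged route, a minimal sketch using the official docker:dind image, which bundles its own daemon (newer versions of the image may additionally want TLS settings configured; check its documentation):

# run a real inner daemon, isolated from the host's
docker run -d --privileged --name inner-docker docker:dind
# talk to the INNER daemon via its own socket
docker exec inner-docker docker ps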
