Running Docker inside a Docker container: Cannot connect to the Docker daemon

I created a Dockerfile to run Docker inside Docker:
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    apt-key fingerprint 0EBFCD88
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get install -y docker-ce && \
    systemctl enable docker
After I launched my container and ran docker ps, I got:
"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Executing dockerd inside my container resulted in:
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.6.0: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
Please advise.

The recommendation I received for this was to use the -v parameter of docker run to map the host's Docker socket into the container, like this:
-v /var/run/docker.sock:/var/run/docker.sock
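For example, a full docker run command might look like this (a sketch; the image name my-docker-client is hypothetical and only needs the docker CLI installed):
# reuse the host's Docker daemon through the bind-mounted socket
docker run -v /var/run/docker.sock:/var/run/docker.sock my-docker-client docker ps
With this approach the docker CLI inside the container talks to the host's daemon, so any containers it starts are siblings of the current container rather than children.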

If you really want to run a Docker daemon inside another Docker container, you should use the existing images provided by Docker (https://hub.docker.com/_/docker) instead of creating your own base image: choose images tagged as dind (Docker in Docker) or <docker_version>-dind (like 18.09.0-dind). If you want to run your own image anyway (not recommended), don't forget to run it with the --privileged option: the daemon needs root-level access to iptables and other kernel facilities that an unprivileged container is denied, which is exactly why you get the Permission denied error.
Example with the official Docker images:
# run Docker container running Docker daemon
docker run --privileged --name some-docker -d docker:18.09.0-dind
# run hello-world Docker image inside the Docker container previously started
docker exec -i -t some-docker docker run hello-world
Nevertheless, I agree with @DavidMaze's comment and the blog post he referred to (Do not use Docker-in-Docker for CI): Docker-in-Docker should be avoided as much as possible.

Related

mkdir: cannot create directory ‘cpuset’: Read-only file system when running a "service docker start" in Dockerfile

I have a Dockerfile that extends the Apache Airflow 2.5.1 base image. What I want to do is be able to use docker inside my airflow containers (i.e. docker-in-docker) for testing and evaluation purposes.
My docker-compose.yaml has the following mount:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
My Dockerfile looks as follows:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker airflow
RUN service docker start
USER airflow
Basically:
Install docker.
Add the airflow user to the docker group.
Start the docker service.
Continue as airflow.
Unfortunately, this does not work. During RUN service docker start, I encounter the following error:
Step 11/12 : RUN service docker start
---> Running in 77e9b044bcea
mkdir: cannot create directory ‘cpuset’: Read-only file system
I have another Dockerfile for building a local jenkins image, which looks as follows:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker jenkins
RUN service docker start
USER jenkins
I.e. it is exactly the same, except that I am using the jenkins user. Building this image works.
I have not set any extraneous permission on my /var/run/docker.sock:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 18 17:14 /var/run/docker.sock
My questions are:
Why does RUN service docker start not work when building my airflow image?
Why does the exact same command in my jenkins Dockerfile work?
I've tried most of the answers to similar questions, e.g. here and here, but they have unfortunately not helped.
I'd rather avoid the chmod 777 /var/run/docker.sock solution if at all possible, and it should be possible, since my jenkins image builds correctly...
Just delete the RUN service docker start line.
The docker CLI tool needs to connect to a Docker daemon, which it normally does through the /var/run/docker.sock Unix socket file. Bind-mounting the socket into the container is enough to make the host's Docker daemon accessible; you do not need to separately start Docker in the container.
There are several issues with the RUN service ... line specifically. Docker has a kind of complex setup internally, and some of the things it does aren't normally allowed in a container; that's probably related to the "cannot create directory" error. In any case, a Docker image doesn't persist running processes, so if you were able to start Docker inside the build, it wouldn't still be running when the container eventually ran.
More conceptually, a container doesn't "run services", it is a wrapper around only a single process (and its children). Commands like service or systemctl often won't work the way you expect, and I'd generally avoid them in a Docker context.
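After deleting that line and rebuilding, the bind-mounted socket is all you need. A quick check might look like this (a sketch; airflow-worker stands in for whatever your compose service is actually called):
docker compose build
docker compose up -d
# the docker CLI inside the container talks to the host's daemon via the socket
docker compose exec airflow-worker docker ps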

docker container cannot authenticate with dockerhub to push image built inside of docker container

I am trying to build a Docker image inside of a Docker container.
I am attempting to use the authenticated Docker daemon on the host machine to push the image to Docker Hub.
I'm running the docker container with
docker run -v /var/run/docker.sock:/var/run/docker.sock bitcoin-s-build:latest
Docker on the host machine is authenticated correctly with Docker Hub. I can run docker push ... on the host machine and correctly push an image.
I would like to run the docker push ... in the docker container, and use the mounted socket to push the image to dockerhub.
When doing so I get this error:
...
#44 exporting manifest list sha256:14472c602ddb92ba1d7c3f8ab0715b807276eaedc16b10230e7f266b2115a3a0 done
#44 pushing layers
#44 pushing layers 0.5s done
#44 ERROR: authorization status: 401: authorization failed
#3 [linux/arm64 internal] load metadata for docker.io/library/ubuntu:latest
------
> exporting to image:
------
error: failed to solve: authorization status: 401: authorization failed
To be clear, I know the docker daemon on the host machine is properly authenticated.
Here is the Dockerfile I am using for the build:
FROM hseeberger/scala-sbt:17.0.2_1.6.2_2.13.8
WORKDIR /build
RUN apt-get update && apt-get install -y git \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# https://docs.docker.com/engine/install/debian/
RUN echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN cat /etc/apt/sources.list.d/docker.list
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin runit-systemd
RUN git clone --depth 1 https://github.com/bitcoin-s/bitcoin-s.git
WORKDIR "/build/bitcoin-s"
ENTRYPOINT ["sbt", "appServer/docker:publish"]
What am I doing wrong?
You need to run docker login inside the container you want to push from. docker login stores its credentials in the client's ~/.docker/config.json, not in the daemon, so mounting the host's socket does not carry the host's authentication into the container. You can also put all the commands into a shell script in the project so that everything runs automatically.
For example, a shell.sh:
docker login
docker build -t name .
docker tag name hubuser/name:tag
docker push hubuser/name:tag
Then in the Dockerfile use CMD ["sh", "shell.sh"] and run the container in interactive mode:
docker run -it -p 0000:0000 --name container imagename
I hope it will help you.
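Alternatively, instead of logging in inside the container, you could bind-mount the host's client configuration so the docker CLI in the container reuses the host's stored credentials. A sketch, which assumes the credentials really live in $HOME/.docker/config.json and are not delegated to a platform credential helper (as they are by default on macOS and Windows):
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/.docker/config.json:/root/.docker/config.json:ro \
  bitcoin-s-build:latest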

Got permission denied while trying to connect to the Docker daemon socket: without chmod

This question is related to this one, but I am trying to avoid solutions that make use of chmod. I can't change the permissions of /var/run/docker.sock inside the Dockerfile because it is a volume, and I am looking to avoid having to manually interfere with the environment. I am running on macOS.
I have a Dockerfile which installs the docker engine into a debian based container, and adds a user xyz to the group docker.
FROM debian
USER root
# https://docs.docker.com/engine/install/debian/
RUN apt-get update
RUN apt-get --yes install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | \
    gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo \
    "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get --yes install docker-ce docker-ce-cli containerd.io
RUN useradd xyz
RUN usermod -a -G docker xyz
RUN newgrp docker
USER xyz
This is my docker-compose.yml:
services:
  my_service:
    build:
      context: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: tail -f /dev/null
The user xyz gets created and added to the docker group, which, according to Docker's instructions here, should be enough to give xyz access to the Docker socket, but I still run into permission issues.
> docker compose exec my_service whoami
xyz
> docker compose exec my_service groups
xyz docker
> docker compose exec my_service docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
Hopefully this is reproducible for others - it would be good to know whether others experience the same issue.
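One way to see what is happening (a diagnostic sketch, not a fix): compare the numeric group that owns the socket inside the container with the groups the xyz user actually has. On Docker Desktop for Mac the socket is proxied from the Linux VM, so its group ID often does not match the docker group created in the image:
docker compose exec my_service stat -c '%g %G' /var/run/docker.sock   # gid and group name owning the socket
docker compose exec my_service id xyz                                 # groups assigned to xyz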

Can the Docker client (within a container) talk to the Docker daemon on EC2 using a UNIX socket?

As part of a Jenkins Docker image, I am supposed to install the Docker client (only) so that it can talk to the Docker daemon installed on the underlying EC2 instance.
By UNIX socket, I mean socket(AF_UNIX, ...).
Background
As per the instructions given here, I do not see the necessity to install the Docker daemon within the Jenkins image, because the author is using a UNIX socket to talk to the underlying Docker daemon running on the EC2 instance, as shown here.
My understanding is that installing the Docker client (only) within the Jenkins image would suffice to talk to the Docker daemon running on the EC2 instance, using the UNIX socket (/var/run/docker.sock).
1) Can the Docker client running in the Jenkins image communicate with the Docker daemon running on the underlying EC2 instance, given the mapping below?
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
2) How do I install the Docker client only, in the Jenkins image below?
FROM jenkins:1.642.1
# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive
# Official Jenkins image does not include sudo, change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the groupID used by AWS Linux ECS instance
ARG DOCKER_GID=497
# Create Docker Group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker compose
RUN groupadd -g ${DOCKER_GID:-497} docker
To use Docker in Jenkins, Jenkins must have access to the docker.sock.
What you are proposing here is a Docker-in-Docker approach, installing Docker inside the Jenkins container, but this is actually not necessary. You only need a valid Docker daemon, and for that reason the usual approach is to map /var/run/docker.sock from the host into the container.
Have a look at this amazing post https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
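To answer the second question directly: the client can be installed on its own via the docker-ce-cli package. A minimal sketch, assuming the Docker apt repository has already been configured as in the other Dockerfiles on this page:
RUN apt-get update && apt-get install -y docker-ce-cli
This gives you the docker command without the daemon, which is all you need once the host's socket is mounted into the container.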
You need to install Docker inside the Jenkins image and then bind-mount /var/run/docker.sock, so that you can run sidecar containers as explained in Jérôme Petazzoni's blog post on the subject. This is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get install -y \
        maven \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        lsb-release \
        software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
RUN apt-get update && \
    apt-get install -y \
        docker-ce \
        docker-ce-cli \
        containerd.io
RUN usermod -a -G docker jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
USER jenkins
WORKDIR /var/jenkins_home
Note: you can install your plugins during the build using the plugins.sh as explained here.
Build the jenkins image i.e.: docker build --rm -t so:58652650 .
Run the container mounting /var/run/docker.sock i.e.: docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock --entrypoint bash so:58652650
Inside the container, as the jenkins user, the docker commands should work as expected.
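For instance, a quick check might look like this (a sketch; any command that contacts the daemon will do):
docker version          # reports both the client and the host daemon's server version
docker run hello-world  # starts a sibling container via the mounted socket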

Docker in docker fails to start if container restarted

We are running a Docker build agent inside a Docker container.
It's based on Debian Jessie, and gets Docker directly from Docker as documented here.
The Docker daemon runs fine the first time you start the container, but not the second time (if you don't delete the container).
Dockerfile:
FROM debian:jessie
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
    && apt-get -y install -q \
        apt-transport-https \
        ca-certificates \
        software-properties-common \
        curl \
    && curl -fsSL https://yum.dockerproject.org/gpg | apt-key add - \
    && add-apt-repository \
        "deb https://apt.dockerproject.org/repo/ \
        debian-$(lsb_release -cs) \
        main" \
    && apt-get update \
    && apt-get install -y \
        docker-engine
CMD []
docker-compose.yml:
services:
  dockerTest:
    container_name: dockerTest
    privileged: true
    image: tomeinc/intel-docker-node:latest
    command: bash -c "service docker start && sleep 2 && docker ps"
To reproduce: build the Dockerfile with docker build -t test . and then run docker-compose up twice. The second time, docker ps will fail with
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Weirdly, if the container keeps running, you can manually start docker by running docker exec -it test /bin/bash and then executing service docker start and docker ps.
I'm not really sure how to approach debugging this, any suggestions are welcomed.
It turns out that Docker thought it and/or containerd was still running (it wasn't, but the PID files didn't get cleaned up).
Recommended starting approach to debugging issues: look at the log files. I am shocked by this revelation.
Anyway, adding rm /var/run/docker/libcontainerd/docker-containerd.pid /var/run/docker.pid to the start command before service docker start fixes it.
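Applied to the compose file above, the command would look something like this (a sketch; the -f flag is added so a clean first start doesn't fail on the missing PID files):
command: bash -c "rm -f /var/run/docker/libcontainerd/docker-containerd.pid /var/run/docker.pid && service docker start && sleep 2 && docker ps"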
