How can I install Docker inside an alpine container? - docker

How can I install Docker inside an Alpine container and run Docker images?
I was able to install it, but could not start Docker, and when running I get a "docker command not found" error.

Dockerfile for running the Docker CLI inside Alpine
FROM alpine:3.10
RUN apk add --update docker openrc
RUN rc-update add docker boot
Build the Docker image:
docker build -t docker-alpine .
Run the container (the host and the Alpine container will share the same Docker engine):
docker run -it -v "/var/run/docker.sock:/var/run/docker.sock:rw" docker-alpine:latest /bin/sh

All you need is to install the Docker CLI in an image based on Alpine and run the container with docker.sock mounted. This allows running sibling Docker containers using the host's Docker Engine. The approach is known as Docker-out-of-Docker and is considered a good alternative to running a separate Docker Engine inside a container (aka Docker-in-Docker).
Dockerfile
FROM alpine:3.11
RUN apk update && apk add --no-cache docker-cli
Build the image:
docker build -t alpine-docker .
Run the container mounting the docker.sock (-v /var/run/docker.sock:/var/run/docker.sock):
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine-docker docker ps
The command above should successfully run docker ps inside the Alpine-based container.
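To see the sibling behaviour in practice, open a shell in the container and start another container from there; it runs on the host's Docker Engine rather than inside the Alpine container (a quick check, assuming the alpine-docker image built above):
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine-docker sh
/ # docker run --rm alpine echo "hello from a sibling container"
The inner container is created and run by the host's Docker Engine, which is what "sibling" means here.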

Related

Docker Showing Error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I am trying to run the TeamCity CI server within Docker DinD (Docker in Docker) using a Dockerfile. I am using the official docker:19-dind image as the base image.
The main purpose is to create a DinD container and run TeamCity's official container within that DinD container. First of all, is that really possible using DinD?
The dockerfile is as follows:
.dockerignore
# Official Docker in Docker 19 version as base image.
FROM docker:19-dind AS base
# Create work directory
WORKDIR /teamcity-ci-server
# Command to check version
RUN docker --version
# Final image inherited from base image
FROM base as final
# Adding directory
WORKDIR /teamcity-ci-server
# Run commands to setup TeamCity CI Server
RUN docker pull jetbrains/teamcity-server \
&& docker images \
&& docker run -d --privileged --name teamcity-ci-server -p 5002:8111 jetbrains/teamcity-server
# Add volume mount for DinD
VOLUME /var/run/docker.sock:/var/run/docker.sock
# Exposing port
EXPOSE 5001
However, after running docker build -f .dockerignore -t teamcity-ci-server:v1 ., I am getting the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I believe this error appears because the Docker daemon is not running. I think I cannot run systemctl start docker since this image does not include systemd, so systemctl does not work here.
Does anyone know how to fix this issue that's happening within Docker DinD images?

Install Docker in Alpine Docker

I have a Dockerfile with a classic Ubuntu base image and I'm trying to reduce the size.
That's why I'm using Alpine base.
In my Dockerfile, I have to install Docker, so Docker in Docker.
FROM alpine:3.9
RUN apk add --update --no-cache docker
This works well; I can run docker version inside my container, at least for the client. For the server I get the classic Docker error saying:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I know in Ubuntu after installing Docker I have to run
usermod -a -G docker $USER
But what about in Alpine? How can I avoid this error?
PS:
My first idea was to re-use the Docker socket by bind-mounting /var/run/docker.sock:/var/run/docker.sock for example and thus reduce the size of my image even more, since I don't have to reinstall Docker.
But as bind mounts are not allowed in a Dockerfile, do you know if my idea is possible and how to do it? I know it's possible in Docker Compose, but I have to use a Dockerfile only.
Thanks
I managed to do that the easy way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker --privileged docker:dind sh
I am using this command on my test env!
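A quick way to confirm the socket mount is working: inside that shell, docker ps lists the containers running on the host, because the CLI talks to the host's daemon through the mounted socket:
/ # docker version
/ # docker ps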
You can do that, and your first idea was correct: you just need to expose the Docker socket (/var/run/docker.sock) to the "controlling" container. Do that like this:
host:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
<my_image>
host:~$ docker exec -u root -it <container id> /bin/sh
Now the container should have access to the socket (I am assuming here that you have already installed the necessary docker packages inside the container):
root@guest:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 my_image "/sbin/tini -- /usr/…" 8 minutes ago ...
Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.

Docker file not found error inside the container to create a new image

I need to create a container for which I'm able to create new images.
My first guess was to run Docker on Docker, but I found that the right
way to do this was using the --privileged argument so the container
has access to the Docker daemon.
For this I'm running the following command:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /home/user/container_data:/app/app -d -p 5100:5100 mcf2:latest
I'm using -v /home/user/container_data:/app/app because I'm creating the folders for the new images from
templates for Flask apps and saving them in that directory.
One of the files I'm creating from the templates is 'create_image.sh', which has the docker build statement, e.g.
'docker build -t new_container:latest .'
For that I'm running the following code inside the running container:
import subprocess
bash_path = 'app/classification_model/create_image.sh'
subprocess.call([bash_path], shell=True)
But I always get this error:
/bin/sh: 1: app/model/create_image.sh: docker: not found
But the file does exist; if I do ls in the container, 'app/' is in the list of folders.
I have also checked the bind directory, and
'/home/user/container_data/classification_model/create_image.sh'
does exist.
I have tried changing bash_path to
bash_path= '/app/classification_model/create_image.sh'
and
bash_path= '/app/app/classification_model/create_image.sh'
But I get the same error in all the cases.
EDIT:
I have changed the Docker file to:
From docker:dind
FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
...
...
And run again:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /home/user/container_data:/app/app -d -p 5100:5100 mcf2:latest
I'm still getting the same error:
/bin/sh: 1: docker: not found
You are mixing two things:
Docker in Docker
Docker in Docker with host Docker Socket
In both cases, Docker must be installed in the container; merely mounting -v /var/run/docker.sock:/var/run/docker.sock does not mean any container will be able to launch or run docker commands.
With the first option, containers are started as child containers.
With the second option, the container will have access to the Docker socket and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
Updated:
Docker's official dind image is Alpine-based, so you can install packages using apk instead of apt:
FROM docker:dind
RUN apk add --no-cache python3 python3-dev
https://pkgs.alpinelinux.org/packages
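For the second option (host Docker socket), a minimal end-to-end sketch might look like this, assuming the Dockerfile above is built with the hypothetical tag dind-python:
docker build -t dind-python .
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock dind-python sh
/ # docker ps
/ # python3 --version
docker ps here talks to the host daemon through the mounted socket, so any container started from this shell runs as a sibling on the host.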

can't list file in docker in docker (dind)

I face this strange issue and can't explain why.
$ docker run -d --name dind --privileged --net=host -v `pwd`:/app -w /app docker:stable-dind
fe66d6e7e5effcf15e439a332a2368fddab810e9bc8ac3445392c8e56b0aa38a
$ docker exec dind ls
Dockerfile
$ docker exec dind docker run -v `pwd`:/app -w /app alpine ls
$ docker exec dind docker build -t demo .
Sending build context to Docker daemon 521.7kB
Step 1/24 : FROM alpine
So why can't I see my files in the docker container which is running in docker?
Why can it read the Dockerfile with docker build, but not with docker run?
This is because pwd in your command is evaluated by your host shell, not inside the dind container, so the inner container gets the current directory of the host machine, not the /app directory of the dind container. You can prove it by changing the following command from:
$ docker exec dind docker run -v `pwd`:/app -w /app alpine ls
to
$ docker exec dind docker run -v /app:/app -w /app alpine ls
Then you will see Dockerfile in the output. (docker build works either way because the client sends the build context to the Docker daemon as an archive, so no bind mount is involved.)
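Another way to see the difference is to let the shell inside the dind container expand the working directory instead of the host shell; because the dind container was started with -w /app, pwd resolves to /app there:
$ docker exec dind sh -c 'docker run -v $(pwd):/app -w /app alpine ls'
Dockerfile
The single quotes stop the host shell from expanding $(pwd), so it is evaluated inside dind, where the files actually live.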

Airflow inside docker running a docker container

I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a Docker container. How do I do that? Do I need to install Docker on my Airflow container? And what is the next step after that? I have a yaml file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
RUN groupadd --gid 999 docker \
&& usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
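One caveat: the groupadd --gid 999 docker line only gives the airflow user access to the mounted socket if 999 matches the group ID that owns /var/run/docker.sock on the host; if it does not, adjust the --gid value. A quick way to check on the host:
# numeric group id that owns the socket
stat -c '%g' /var/run/docker.sock
# name and id of the docker group
getent group docker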
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04 and using a modified puckel/airflow Docker image that is running Airflow.
Things you will need to change in the Dockerfile
Add USER root at the top of the Dockerfile
USER root
Mounting the Docker binary was not working for me, so I had to install the
Docker binary in my Docker container.
Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet. Copy it into the script directory in the folder where the Dockerfile is located. This starts the Docker daemon inside the Airflow container.
Install the magic wrapper
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add the airflow user to the docker group so that airflow can run Docker jobs
RUN usermod -aG docker airflow
Switch to the airflow user
USER airflow
Docker Compose file or command-line arguments to docker run:
Mount the Docker socket from the host into the Airflow container for the Docker binary just installed
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go!
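Putting the steps above together, the Dockerfile changes look roughly like this (a sketch, assuming the puckel/docker-airflow base image and a wrapdocker script saved next to the Dockerfile in ./script):
FROM puckel/docker-airflow
# Switch to root to install Docker and the wrapper
USER root
# Install Docker from Docker Inc. repositories
RUN curl -sSL https://get.docker.com/ | sh
# Add the magic wrapper that starts the Docker daemon inside the container
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
# Let the airflow user run Docker jobs
RUN usermod -aG docker airflow
# Switch back to the airflow user
USER airflow
At run time, mount the host socket as described above (-v /var/run/docker.sock:/var/run/docker.sock).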
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by Docker. This depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will indicate what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers, they will run on the host running the airflow container.
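If the yaml file mentioned in the question is a Docker Compose file, the same mounts look roughly like this (a sketch; the image name and the library path are assumptions that depend on your setup):
version: "3"
services:
  airflow:
    image: airflow_image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /usr/bin/docker:/bin/docker:ro
      - /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro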
