I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a Docker container. How do I do that? Do I need to install Docker in my Airflow container? And what is the next step after that? I have a YAML file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
# Create a docker group whose GID matches the host's docker group, and add the airflow user to it
RUN groupadd --gid 999 docker \
 && usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
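One note on the --gid 999 above: for the mounted socket to be writable, that GID has to match the group that owns /var/run/docker.sock on the EC2 host. A quick way to check it on the host (standard Linux commands, shown only as a sanity check):
# GID of the docker group on the host
getent group docker
# GID that actually owns the socket you are mounting
stat -c '%g' /var/run/docker.sock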
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04 and uses a modified puckel/airflow Docker image to run Airflow.
Things you will need to change in the Dockerfile
Add USER root at the top of the Dockerfile
USER root
Mounting the docker binary was not working for me, so I had to install the Docker binary inside my Airflow container.
Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into a script directory in the folder where the Dockerfile is located. This script starts the Docker daemon inside the Airflow container.
Install the magic wrapper
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add the airflow user to the docker group so that Airflow can run Docker jobs.
RUN usermod -aG docker airflow
Switch back to the airflow user.
USER airflow
In your Docker Compose file (or via command-line arguments to docker run), mount the host's Docker socket into the Airflow container:
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go!
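Putting the pieces above together, a sketch of the modified Dockerfile might look like this (the wrapdocker path and the get.docker.com install step come from the steps above; adjust to your setup):
FROM puckel/docker-airflow
USER root

# Install Docker from Docker Inc. repositories
RUN curl -sSL https://get.docker.com/ | sh

# Install the magic wrapper that starts the Docker daemon inside the container
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker

# Add the airflow user to the docker group so Airflow can run Docker jobs
RUN usermod -aG docker airflow

# Switch back to the airflow user
USER airflow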
You can spin up docker containers from your airflow docker container by attaching volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by Docker; this depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will indicate what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers, they will run on the host running the airflow container.
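A quick way to see this sibling-container behaviour (the image and container name below are only for illustration):
# Inside the airflow container: start another container via the mounted socket
docker run -d --name sibling-test alpine sleep 60

# On the EC2 host itself: the new container shows up alongside the airflow container
docker ps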
Related
I am trying to run TeamCity CI Server within Docker DinD (Docker in Docker) using a Dockerfile. I am using the official docker:19-dind image as the base image.
The main purpose is to create a DinD container and run TeamCity's official container within that DinD container. First of all, is that really possible using DinD?
The Dockerfile (saved in a file named .dockerignore) is as follows:
# Official Docker in Docker 19 version as base image.
FROM docker:19-dind AS base
# Create work directory
WORKDIR /teamcity-ci-server
# Command to check version
RUN docker --version
# Final image inherited from base image
FROM base as final
# Adding directory
WORKDIR /teamcity-ci-server
# Run commands to setup TeamCity CI Server
RUN docker pull jetbrains/teamcity-server \
&& docker images \
&& docker run -d --privileged --name teamcity-ci-server -p 5002:8111 jetbrains/teamcity-server
# Add volume mount for DinD
VOLUME /var/run/docker.sock:/var/run/docker.sock
# Exposing port
EXPOSE 5001
However, after running docker build -f .dockerignore -t teamcity-ci-server:v1 ., I am getting the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I believe this error appears because the Docker daemon is not running. I think I cannot run systemctl start docker since this image does not use systemd, so systemctl does not work here.
Does anyone know how to fix this issue that's happening within Docker DinD images?
I have a Dockerfile with a classic Ubuntu base image and I'm trying to reduce the size.
That's why I'm using an Alpine base.
In my Dockerfile, I have to install Docker, so it's Docker in Docker.
FROM alpine:3.9
RUN apk add --update --no-cache docker
This works well: I can run docker version inside my container, at least for the client. For the server, I get the classic Docker error saying:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I know in Ubuntu after installing Docker I have to run
usermod -a -G docker $USER
But what about in Alpine? How can I avoid this error?
PS:
My first idea was to re-use the Docker socket by bind-mounting /var/run/docker.sock:/var/run/docker.sock for example and thus reduce the size of my image even more, since I don't have to reinstall Docker.
But as bind mounts are not allowed in a Dockerfile, do you know if my idea is possible and how to do it? I know it's possible in Docker Compose, but I have to use a Dockerfile only.
Thanks
I managed to do that the easy way
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker --privileged docker:dind sh
I am using this command on my test env!
You can do that, and your first idea was correct: you just need to expose the Docker socket (/var/run/docker.sock) to the "controlling" container. Do that like this:
host:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
<my_image>
host:~$ docker exec -u root -it <container id> /bin/sh
Now the container should have access to the socket (I am assuming here that you have already installed the necessary docker packages inside the container):
root@guest:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 my_image "/sbin/tini -- /usr/…" 8 minutes ago ...
Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.
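To make that security point concrete: anyone who can reach the mounted socket effectively has root on the host, because they can start a container that mounts the host filesystem. A small illustration (paths here are only for demonstration; don't do this anywhere you care about):
# From inside the container that has /var/run/docker.sock mounted:
docker run --rm -v /:/host alpine cat /host/etc/hostname
# The same trick reads (or writes) any file on the host.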
How can I install Docker inside an Alpine container and run Docker images?
I could install it, but could not start Docker, and when running it I get a "docker command not found" error.
Dockerfile for running docker-cli inside alpine
FROM alpine:3.10
RUN apk add --update docker openrc
RUN rc-update add docker boot
Build docker image
docker build -t docker-alpine .
Run the container (the host and the Alpine container will share the same Docker engine):
docker run -it -v "/var/run/docker.sock:/var/run/docker.sock:rw" docker-alpine:latest /bin/sh
All you need is to install Docker CLI in an image based on Alpine and run the container mounting docker.sock. It allows running sibling Docker containers using host's Docker Engine. It is known as Docker-out-of-Docker and is considered a good alternative to running a separate Docker Engine inside a container (aka Docker-in-Docker).
Dockerfile
FROM alpine:3.11
RUN apk update && apk add --no-cache docker-cli
Build the image:
docker build -t alpine-docker .
Run the container mounting the docker.sock (-v /var/run/docker.sock:/var/run/docker.sock):
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine-docker docker ps
The command above should successfully run docker ps inside the Alpine-based container.
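If you need to run more than one command, the same mount works for an interactive shell, for example:
# Open a shell in the Alpine image with the host's Docker socket mounted
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine-docker sh

# ...then, inside that shell, docker commands talk to the host's Docker Engine:
/ # docker images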
This is exactly what I need. I already have a project which starts up a particular set of Docker images, and it works completely fine.
But I want to create another image, specifically to build this project from scratch, with all the dependencies inside. So the problem is that, during the build, the building container needs to access the Docker daemon running on the host machine in order to create Docker images.
Is there any way of doing this?
If you need to access docker on the host from inside a container, you can simply expose the Docker socket inside the container using a host mount (-v /host/path:/container/path on the docker run command line).
For example, if I start a new fedora container exposing the docker socket on my host:
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock fedora bash
Then install docker inside the container:
[root@d28650013548 /]# yum -y install docker
...many lines elided...
I can now talk to docker on my host:
[root@d28650013548 /]# docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 530
Server Version: 17.05.0-ce
...
You can let the container access the host's Docker daemon through the Docker socket and "trick" it into having the docker executable available inside the container without installing Docker in it, like this (with an Ubuntu Xenial container as the example):
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial
Inside it, you can launch any docker command, for example docker images, to check that it's working.
If you see an error like docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory, you should install a package called libltdl7 inside the container. For example, you can create a Dockerfile for the container, or install it directly on run:
FROM ubuntu:xenial
RUN apt update && apt install -y libltdl7
or
docker run --name dockerInsideContainer -ti -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker ubuntu:xenial bash -c "apt update && apt install -y libltdl7 && bash"
Hope it helps
I want to run Jenkins in a Docker Container on Centos7.
I saw the official documentation of Jenkins:
First, pull the official jenkins image from the Docker repository.
docker pull jenkins
Next, run a container using this image and map the data directory from the container to the host; e.g. in the example below, /var/jenkins_home from the container is mapped to the jenkins/ directory under the current path on the host. Jenkins's port 8080 is also exposed to the host as 49001.
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
But when I try to run the docker container I get the following error:
/usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
Can someone tell me how to fix this problem?
The official Jenkins Docker image documentation says regarding volumes:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
This will store the jenkins data in /your/home on the host. Ensure that /your/home is accessible by the jenkins user in container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run.
This information is also found in the Dockerfile.
So all you need to do is ensure that the directory $PWD/jenkins is owned by UID 1000:
mkdir jenkins
chown 1000 jenkins
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -t jenkins
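Alternatively, as the quoted documentation mentions, you can pass a matching -u to docker run instead of chown'ing the directory to UID 1000. A sketch, assuming your own user already owns $PWD/jenkins:
mkdir -p jenkins
docker run -d -p 49001:8080 -v $PWD/jenkins:/var/jenkins_home -u $(id -u) -t jenkins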
The newest Jenkins documentation says to use Docker 'volumes'.
Docker is a bit tricky here: the difference between the two is that a full path name with the -v option means a bind mount, while just a name means a named volume.
docker run -d -p 49001:8080 -v jenkins-data:/var/jenkins_home -t jenkins
This command will create a docker volume named "jenkins-data" and you will no longer see the error.
Link to manage volumes:
https://docs.docker.com/storage/volumes/
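Named volumes are created automatically on first use; if you want to confirm that the volume exists and see where Docker stores it, the standard volume commands work as a quick check:
# List volumes; jenkins-data should appear after the docker run above
docker volume ls

# Show details, including the mountpoint on the host
docker volume inspect jenkins-data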