Dockerized GitLab: use the host's Docker for CI - docker

I'm currently learning about GitLab CI and deployment.
My gitlab instance runs in a docker container, and I would like to use the host's docker in order to build and deploy an image.
Is there a way to do this?

Yes. If docker is not installed in the image (the current gitlab/gitlab-ce image doesn't include it), you need to extend the image with an installation, e.g.
FROM gitlab/gitlab-ce:8.14.4-ce.0
ENV DOCKER_API_VERSION 1.23
RUN apt-get update && apt-get install -y docker.io
The ENV DOCKER_API_VERSION 1.23 is there to ensure API compatibility between the two installations. At the time of writing, the apt-get install gives you version 1.12.1; if the host runs the same version, you can leave out the environment variable. If the host runs 1.11, you'll need it (and if you have some other version, you'll get an error message telling you which API version to set).
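To check which API version the host daemon speaks before settling on a value, a quick sketch (run on the host; the format string is standard docker CLI templating):
# Print the API version of the local daemon
docker version --format '{{.Server.APIVersion}}'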
Build the image like this
docker build -t myrepo/myorg/mygitlab:8.14.4-ce.0 .
And then run it like this
docker run -d --name gitlab -v /var/run/docker.sock:/var/run/docker.sock myrepo/myorg/mygitlab:8.14.4-ce.0
You'll now have docker available from the container:
docker exec -it gitlab bash
$ docker ps
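A quick way to confirm that this really is the host's daemon and not an isolated one (a sketch, using the container name gitlab from the run command above):
# Run from the host: the listing should include the gitlab container itself,
# because the client inside is connected to the host daemon through the socket
docker exec gitlab docker ps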

Related

How to pre-pull docker images in a Dockerfile?

I need to dockerize an existing script which runs docker containers itself: this results in a docker-in-docker setup.
Currently, I am able to build a basic docker image with docker installed in it, along with my script's code dependencies. Unfortunately, each time I run this image, a new container is created from it and has to pull all the docker images needed to run my script (via an ENTRYPOINT script). This takes a lot of time and feels wrong.
I would like to be able to pre-pull the docker images required by my script inside the Dockerfile, so that all child containers do not need to do so.
The thing is, I cannot manage to launch the docker service in the Dockerfile, and it is needed to pull those images.
Am I doing things correctly? Should I completely revisit my approach? Or what should I adapt?
My Dockerfile:
FROM debian:buster
# Install docker
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com -o get-docker.sh
RUN sh ./get-docker.sh
# I tried the following, but none of them work, because the Docker daemon
# cannot be started or reached during the image build:
# RUN docker pull hello-world
# RUN dockerd && docker pull hello-world
# RUN service docker start && docker pull hello-world
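For comparison, a minimal sketch of the socket-mount pattern used elsewhere on this page, which sidesteps the build-time problem entirely: images pulled once by the host daemon are already in its cache when the script's container runs (the image name myorg/myscript is illustrative):
# Pull the required images once on the host so they land in the host daemon's cache
docker pull hello-world
# Run the script's container against the host daemon via the mounted socket;
# it sees the already-pulled images without pulling them again
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock myorg/myscript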

Airflow inside docker running a docker container

I have Airflow running on an EC2 instance, and I am scheduling some tasks that spin up a docker container. How do I do that? Do I need to install docker in my Airflow container? And what is the next step after that? I have a YAML file that I am using to spin up the container, and it is derived from the puckel/airflow Docker image.
I got a simpler solution working which just requires a short Dockerfile to build a derived image:
FROM puckel/docker-airflow
USER root
RUN groupadd --gid 999 docker \
&& usermod -aG docker airflow
USER airflow
and then
docker build -t airflow_image .
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /usr/bin/docker:/bin/docker:ro \
-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/x86_64-linux-gnu/libltdl.so.7:ro \
-d airflow_image
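Note that the group id 999 in that Dockerfile has to match the group that owns /var/run/docker.sock on the host, otherwise the airflow user still cannot use the mounted socket. A quick check on the host (a sketch, assuming a standard Linux setup):
# Numeric group id of the docker group on the host
getent group docker
# Group id that actually owns the socket being mounted
stat -c '%g' /var/run/docker.sock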
Finally resolved
My EC2 setup is running Ubuntu Xenial 16.04 and using a modified puckel/airflow docker image that is running airflow.
Things you will need to change in the Dockerfile:
Add USER root at the top of the Dockerfile:
USER root
Mounting the docker binary was not working for me, so I had to install the docker binary in my docker container. Install Docker from the Docker Inc. repositories:
RUN curl -sSL https://get.docker.com/ | sh
Search for the wrapdocker file on the internet and copy it into the script directory in the folder where the Dockerfile is located. It starts the docker daemon inside the airflow container. Install the magic wrapper:
ADD ./script/wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
Add airflow as a user to the docker group so that airflow can run docker jobs:
RUN usermod -aG docker airflow
Switch back to the airflow user:
USER airflow
In your docker-compose file or in the arguments to docker run, mount the docker socket from the host into the airflow container (see the run example below):
- /var/run/docker.sock:/var/run/docker.sock
You should be good to go!
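For the docker run variant of that last step, a sketch (the image name my-airflow-with-docker stands in for whatever you tag the modified image as):
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-airflow-with-docker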
You can spin up docker containers from your airflow docker container by attaching the relevant host paths as volumes to your container.
Example:
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v /path/to/bin/docker:/bin/docker:ro your_airflow_image
You may also need to attach some libraries required by docker. This depends on the system you are running Docker on. Just read the error messages you get when running a docker command inside the container; they will indicate what you need to attach.
Your airflow container will then have full access to Docker running on the host.
So if you launch docker containers, they will run on the host running the airflow container.
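You can verify this with a throwaway container (a sketch; the name task_test is arbitrary):
# Inside the airflow container: start a container from airflow's point of view
docker run -d --name task_test alpine sleep 300
# On the host: the same container shows up, because both commands
# talked to the same (host) daemon
docker ps --filter name=task_test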

gcloud docker not working on Compute Engine VM

I am trying to get docker images from Container Engine to run on a Compute Engine VM. On my laptop I can run gcloud docker pull gcr.io/projectid/image-tag
I just spun up a Debian VM on Compute Engine, but when I try to run any gcloud docker command I get ERROR: (gcloud.docker) Docker is not installed.
> gcloud --version
Google Cloud SDK 140.0.0
alpha 2017.01.17
beta 2017.01.17
bq 2.0.24
bq-nix 2.0.24
core 2017.01.17
core-nix 2017.01.17
gcloud
gsutil 4.22
gsutil-nix 4.22
> gcloud docker --version
ERROR: (gcloud.docker) Docker is not installed.
https://cloud.google.com/sdk/gcloud/reference/docker makes it seem like gcloud docker should work.
Am I supposed to install docker on the VM before running gcloud docker?
Intuitively, I tried to install docker with sudo apt-get install docker, but I was wrong: the actual docker package name is docker.io. So I restarted the process and it worked this way:
Install the docker package:
sudo apt-get install docker.io
Test that docker is working:
sudo gcloud docker ps
Pull your image from the image repository, e.g. gcr.io. If you don't have a particular tag, use the latest one.
sudo gcloud docker -- pull gcr.io/$PROJECT_NAME/$APPLICATION_IMAGE_NAME:latest
Run your image. Remember to specify the port mapping correctly: the first port is the one that will be exposed on the GCE instance and the second one is the port exposed internally by the docker container, e.g. EXPOSE 8000. For instance, in the following example my app is configured to work on port 8000, but it will be accessed by the public on the default www port, 80.
sudo docker run -d -p 80:8000 --name=$APPLICATION_IMAGE_NAME \
--restart=always gcr.io/$PROJECT_NAME/$APPLICATION_IMAGE_NAME:latest
The --restart flag will allow this container to be restarted every time the instance restarts.
I hope it works for you.
Am I supposed to install docker on the VM before running gcloud docker?
Yes. The error message is telling you that Docker needs to be installed on the machine for gcloud docker to work.
You can either install docker manually on your Debian VM, or launch a VM that comes with docker pre-installed, such as one running the Container-Optimized OS from Google.
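For the second option, a sketch of creating such a VM from the Container-Optimized OS image family (instance name and zone are placeholders):
gcloud compute instances create my-docker-vm \
  --zone us-central1-a \
  --image-family cos-stable \
  --image-project cos-cloud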

How to link binaries between docker containers

Is it possible to use docker to expose the binary from one container to another container?
For example, I have 2 containers:
centos6
sles11
I need both of these containers to have similar versions of git installed. Unfortunately the sles container does not have the version of git that I need.
I want to spin up a git container like so:
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER spuder
RUN apt-get update
RUN apt-get install -yq git
CMD /usr/bin/git
# ENTRYPOINT ['/usr/bin/git']
Then link the centos6 and sles11 containers to the git container so that they both have access to a git binary, without going through the trouble of installing it.
I'm running into the following problems:
You can't link a container to another, non-running container.
I'm not sure if this is how docker containers are supposed to be used.
Looking at the docker documentation, it appears that linked containers share environment variables and ports, but not necessarily access to each other's entrypoints.
How could I link the git container so that the cent and sles containers can access this command? Is this possible?
You could create a dedicated git container and expose the data it downloads as a volume, then share that volume with the other two containers (centos6 and sles11). Volumes are available even when a container is not running.
If you want the other two containers to be able to run git from the dedicated git container, then you'll need to install (or copy) that git binary onto the shared volume.
Note that volumes are not part of an image, so they don't get preserved or exported when you docker save or docker export. They must be backed up separately.
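A minimal sketch of such a backup, using the usual --volumes-from pattern (the container name gitcontainer and the /gitdata volume match the example below):
# Archive the contents of the shared /gitdata volume into the current host directory
docker run --rm --volumes-from gitcontainer -v $(pwd):/backup ubuntu \
  tar cvf /backup/gitdata.tar /gitdata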
Example
Dockerfile:
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
WORKDIR /gitdata
CMD git clone https://github.com/metalivedev/isawesome.git
Then run:
$ docker build -t gitimage .
# Create the data container, which automatically clones and exits
$ docker run -v /gitdata --name gitcontainer gitimage
Cloning into 'isawesome'...
# This is just a generic container, but what I do in the shell
# you could do in your centos6 container, for example
$ docker run -it --rm --volumes-from gitcontainer ubuntu /bin/bash
root@e01e351e3ba8:/# cd gitdata/
root@e01e351e3ba8:/gitdata# ls
isawesome
root@e01e351e3ba8:/gitdata# cd isawesome/
root@e01e351e3ba8:/gitdata/isawesome# ls
Dockerfile README.md container.conf dotcloud.yml nginx.conf
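And if you do want the other containers to run git itself, a rough sketch of copying the binary onto the shared volume; be aware that the binary's shared-library dependencies must also be satisfied in the consuming container, which is unlikely to hold across ubuntu, centos6 and sles11 without extra work:
# Copy the git binary from the git image onto the shared volume
docker run --rm --volumes-from gitcontainer gitimage cp /usr/bin/git /gitdata/
# Try it from another container sharing the volume
docker run -it --rm --volumes-from gitcontainer centos:6 /gitdata/git --version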

Docker client execution

I have a very basic question regarding docker.
I have a docker host installed on ubuntuA.
So, to test this from the client (UbuntuB), does docker need to be installed on the UbuntuB machine as well?
The more correct answer is that only the docker client needs to be installed on UbuntuB.
On UbuntuB, install the docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on ubuntuA (port 2375 is used since docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
See more details at http://docs.docker.com/articles/basics/
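For this to work, the daemon on ubuntuA has to listen on TCP in addition to the local socket, e.g. (a sketch for recent versions, where the daemon binary is dockerd; an unauthenticated TCP endpoint should only be exposed on a trusted network):
# On ubuntuA: serve the API on the local socket and on TCP port 2375
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375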
Yes, you have to install docker on both client and server.
