Run Docker in Docker on a dedicated server?

I want to run Docker on a dedicated server hosted by a service provider. It is not possible to install Docker directly on this server; Apache, Git, and a lot more are already installed. So I am trying to run Docker inside a container: I want to pull a Docker image from the GitLab registry and run it on a subdomain. I wrote a .gitlab-ci.yml, but I get an error message.
I found this answer:
You can't (*) run Docker inside Docker containers or images. You can't (*)
start background services inside a Dockerfile. As you say, commands like
systemctl and service don't (*) work inside Docker anywhere. And in any case
you can't use any host-system resources, including the host's Docker socket,
from anywhere in a Dockerfile.
How do I solve this problem?
$ sudo docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
ERROR: Job failed: exit code 1
.gitlab-ci.yml
image: ubuntu:latest

before_script:
  - apt-get update
  - apt-get install -y apt-transport-https ca-certificates curl software-properties-common
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - apt-key fingerprint 0EBFCD88
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

stages:
  - test

test:
  stage: test
  script:
    - apt-get update
    - apt-cache search docker-ce
    - apt-get install -y docker-ce
    - docker run -d hello-world

This .gitlab-ci.yml works for me.
image: ubuntu:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375/

before_script:
  - apt-get update
  - apt-get install -y apt-transport-https ca-certificates curl software-properties-common
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - apt-key fingerprint 0EBFCD88
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update

stages:
  - test

test:
  stage: test
  script:
    - apt-get install -y docker-ce
    - docker info
    - docker pull ...
    - docker run -p 8000:8000 -d --name ...
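If installing docker-ce on ubuntu:latest in every job feels too heavy, a leaner variant of the same idea is to run the job in the official docker image, which already ships the Docker CLI. This is a sketch under that assumption, not the original poster's exact setup:

image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375/
  # newer dind images enable TLS by default; an empty DOCKER_TLS_CERTDIR
  # keeps the plain tcp://docker:2375 endpoint above working
  DOCKER_TLS_CERTDIR: ""

stages:
  - test

test:
  stage: test
  script:
    - docker info
    - docker run hello-world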

The answer that you found is a little old. There are options to run systemd in a container, and it is also possible to run a systemctl replacement script.
However, I am not sure which application you actually want to install.
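For completeness, here is a hedged sketch of the systemd-in-a-container pattern mentioned above. The flags are the commonly used ones; my-systemd-image is a hypothetical placeholder for any image that actually ships systemd:

# Run systemd as PID 1 inside a container. Details vary by distro and
# cgroup version; --privileged is the blunt, classic way to allow it.
# "my-systemd-image" is a hypothetical image name.
docker run -d --name systemd-test \
  --privileged \
  --tmpfs /run --tmpfs /run/lock \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  my-systemd-image /sbin/init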

Related

Gitlab pipeline docker error: during connect: Get "http://docker:2375/v1.24/info": dial tcp: lookup docker on 172.31.0.2:53: no such host

One week ago the GitLab pipeline was working fine; the code is shown below. But then it suddenly started failing, although nothing was changed in the docker runner and no configuration was changed. The error is: ERROR: error during connect: Get "http://docker:2375/v1.24/info": dial tcp: lookup docker on 172.31.0.2:53: no such host
image:
  name: python:3.8
  entrypoint: [ "" ]

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:19.03.0-dind

before_script:
  # Install docker
  - apt update --assume-yes
  - apt install apt-transport-https ca-certificates curl software-properties-common --assume-yes
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - curl -O https://download.docker.com/linux/debian/dists/buster/pool/stable/amd64/containerd.io_1.4.3-1_amd64.deb
  - apt install ./containerd.io_1.4.3-1_amd64.deb
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
  - apt update --assume-yes
  - apt-cache policy docker-ce
  - apt install docker-ce --assume-yes
  - docker info
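One common cause of this error is that the docker:19.03.0-dind service container never came up, so the docker hostname is never registered; dind needs the runner to run service containers in privileged mode. A hedged sketch of registering a Docker-executor runner with that enabled, assuming you control the runner (the command prompts for the GitLab URL and registration token):

# --docker-privileged is required for the dind service to start
gitlab-runner register \
  --executor docker \
  --docker-image docker:latest \
  --docker-privileged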

Is there a way to run sbt-native-packager within Gitlab shared runner?

I've been trying to build Docker images in a GitLab shared runner. I build my application using the image "hseeberger/scala-sbt:11.0.6_1.3.10_2.11.12", and locally I build the Docker image with sbt-native-packager, which made me think that I need to use the dind service.
I'm currently having an issue where sbt-native-packager cannot locate the docker executable and fails to build the image. What am I missing here?
My script is as follows:
docker:
  stage: deploy
  image: "hseeberger/scala-sbt:11.0.6_1.3.10_2.11.12"
  services:
    - name: docker:dind
  script:
    - sbt docker:publishLocal
    - docker push registry.gitlab.com/groupName/moduleName
The following actually did the trick for me. It is quite heavy to install Docker in the runner every time, but that's the only way I could make it work.
docker:image:
  stage: release
  image: "hseeberger/scala-sbt:11.0.6_1.3.10_2.11.12"
  before_script:
    - apt-get update
    - apt-get install sudo
    - apt-get install apt-transport-https ca-certificates curl software-properties-common -y
    - curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get install docker-ce -y
    - sudo service docker start
    - docker login <to your account>
  script:
    - sbt docker:publishLocal
    - docker tag module:latest registry.gitlab.com/moduleGroup/module:0.1-SNAPSHOT
    - docker push registry.gitlab.com/moduleGroup/module
I've built and published Docker images that contain sbt, docker, and git, to simplify this common task.
Just use one of the built images, for example:
hnaderi/sbt-ci-image:openjdk-11.0.16_1.8.0_3.2.0_20.10-cli
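As a hedged sketch, the release job above could then shrink to the prebuilt image plus a dind service; the registry path is copied from the question and may need adjusting:

docker:image:
  stage: release
  image: hnaderi/sbt-ci-image:openjdk-11.0.16_1.8.0_3.2.0_20.10-cli
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    # disable dind's default TLS so the plain tcp endpoint works
    DOCKER_TLS_CERTDIR: ""
  script:
    - sbt docker:publishLocal
    - docker push registry.gitlab.com/groupName/moduleName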

How to push a docker image to Google Cloud Registry using GitLab ci pipeline

I am trying to push a Docker image to Google Container Registry via the GitLab CI pipeline.
The image builds, but when it's time to push it to the registry I get the following error.
denied: Token exchange failed for project 'nice-column-247216'. Caller
does not have permission 'storage.buckets.get'. To configure
permissions, follow instructions at:
https://cloud.google.com/container-registry/docs/access-control
.gitlab-ci.yml
stages:
  - security
  - quality
  - test
  - build
  - deploy

image: node:10.16.0

services:
  - mongo
  - docker:dind

.before_script_template: &before_docker_script
  before_script:
    - apt-get update
    - apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get -y install docker-ce
    - curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - echo "$SERVICE_ACCOUNT_KEY" > key.json
    - docker login -u _json_key --password-stdin https://eu.gcr.io < key.json

build:
  stage: build
  <<: *before_docker_script
  variables:
    DOCKER_IMAGE_TAG: 'eu.gcr.io/nice-column-247216/my-application'
  script:
    - docker build --cache-from "${DOCKER_IMAGE_TAG}" -t "${DOCKER_IMAGE_TAG}" .
    - docker push ${DOCKER_IMAGE_TAG}
As you can see, I am logging in to Docker via the JSON key. The service account behind this key has both the Storage Admin and Storage Object Viewer roles.
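The error text itself points at missing storage permissions rather than at the login step. If the roles shown in the console don't match what the key actually carries, one way to (re)grant them is via gcloud; SA_EMAIL below is a placeholder for the service account's email address:

# Grant the service account Storage Admin on the project backing eu.gcr.io
gcloud projects add-iam-policy-binding nice-column-247216 \
  --member "serviceAccount:SA_EMAIL" \
  --role roles/storage.admin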

Building and pushing a docker image from inside a container

Context: I am using repo2docker to build images containing experiments and then push them to a private registry.
I am dockerizing this whole pipeline (cloning the code of the experiment, building the image, pushing it) with docker-compose.
This is what I tried:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3-pip python3-dev git apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
RUN apt-get update && apt-get install docker-ce --yes
RUN service docker start
# more setup
ENTRYPOINT rqworker -c settings image_build_queue
Then I pass the jobs to the rqworker (the rqworker part works well).
But Docker doesn't start in my container, so I can't log in to the registry and can't build the image.
(Note that I need docker to run, but I don't need to run containers.)
The solution was to share the host's Docker socket, so the build actually happens on the host.
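A minimal sketch of what that looks like in docker-compose, assuming the image is built from the Dockerfile above. With the host socket mounted, the docker CLI inside the container talks to the host's daemon, so the Dockerfile's RUN service docker start line becomes unnecessary; installing the Docker client packages is enough:

version: "3"
services:
  builder:
    build: .
    volumes:
      # hand the host's Docker daemon to the container
      - /var/run/docker.sock:/var/run/docker.sock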

How to build a docker image in Jenkins running in Docker?

I just tried to build my test image for a Jenkins course and got this issue:
+ docker build -t nginx_lamp_app .
/var/jenkins_home/jobs/docker-test/workspace#tmp/durable-d84b5e6a/script.sh: 2: /var/jenkins_home/jobs/docker-test/workspace#tmp/durable-d84b5e6a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
But I've already configured the Docker socket in the docker-compose file for Jenkins, like this:
version: "2"
services:
jenkins:
image: "jenkins/jenkins:lts"
ports:
- "8080:8080"
restart: "always"
volumes:
- "/var/jenkins_home:/var/jenkins_home"
- "/var/run/docker.sock:/var/run/docker.sock"
But when I attach to the container, I also see "docker: not found" when I type the "docker" command... And I've changed the permissions on the socket to 777.
What can be wrong?
Thanks!
You are trying to achieve a Docker-in-Docker kind of thing. Mounting just the Docker socket will not make it work as you expect; you need to install the docker binary into the container as well. You can do this either by extending your Jenkins image/Dockerfile, or by creating (docker commit) a new image after installing the docker binary into it and using that image for your CI/CD. Try to integrate the RUN statement below into the extended Dockerfile or the container to be committed (it should work on an Ubuntu-based Docker image):
RUN apt-get update && \
    apt-get -y install apt-transport-https \
                       ca-certificates \
                       curl \
                       gnupg2 \
                       software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
    apt-get update && \
    apt-get -y install docker-ce
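A short sketch of how that RUN statement could be wrapped into an extended Jenkins image; the USER switches are needed because the base image drops to the jenkins user:

FROM jenkins/jenkins:lts
USER root
# paste the RUN instruction from above here
USER jenkins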
Ref - https://github.com/jpetazzo/dind
PS - It isn't really recommended (http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/)
Adding to that, you shouldn't mount the host's docker binary inside the container:
⚠️ Former versions of this post advised to bind-mount the docker
binary from the host to the container. This is not reliable anymore,
because the Docker Engine is no longer distributed as (almost) static
libraries.
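Given that warning, the usual alternative today is to install only the Docker client inside the image and keep using the mounted socket. A one-line sketch, assuming Docker's apt repository is already configured as in the RUN statement above (docker-ce-cli is the client-only package in that repository):

RUN apt-get update && apt-get -y install docker-ce-cli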
