GitLab CI/CD Docker-In-Docker Failing with Custom DIND Service

I've had CI/CD set up in our private GitLab instance for a while now to build some packages, create docker images from them, and push them to our internal registry. The configuration looks like this:
stages:
  - build
services:
  - docker:18.09.4-dind
image: localregistry/utilities/tools:1.0.5
build:
  stage: build
  script:
    - mvn install
    - docker build -t localregistry/proj/image:$VERSION .
    - docker push localregistry/proj/image:$VERSION
  tags:
    - docker
This has worked quite well up until today, when we started getting hit with rate limiting errors from Docker Hub. We're a large company, so this isn't entirely unexpected, but it prompted me to look at locally caching some of the docker images we use frequently. As a quick test, I pulled the docker:18.09.4-dind image, retagged it, pushed it to our local registry, and changed the line in the CI/CD configuration to:
services:
  - localregistry/utilities/docker:18.09.4-dind
To my surprise, when running the CI/CD job the service image appeared to start up fine, but I started having docker problems:
$ docker build -t localregistry/proj/image:$VERSION .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I spent the next hour or so examining the runner and the various docker environments that get executed there, trying to figure out what difference simply retagging the DIND image could make, but couldn't find anything. The only difference I could spot is that DOCKER_HOST=tcp://docker:2375 was set in the environment when using docker:18.09.4-dind, but not when using localregistry/utilities/docker:18.09.4-dind. Setting it explicitly didn't help either, triggering this message:
error during connect: Get http://docker:2375/v1.39/containers/json?all=1: dial tcp: lookup docker on 151.124.118.131:53: no such host
During that time the rate limit was lifted and I was able to switch back to the normally tagged version, but I can't see a reason why a locally tagged version wouldn't work; any ideas as to why this is?

I guess your whole problem would be solved by using an alias for your new docker dind image. Just replace the services section with the following:
services:
  - name: localregistry/utilities/docker:18.09.4-dind
    alias: docker
This makes your docker daemon (dind) service accessible under the hostname docker, which is the default hostname the docker client expects for the daemon.
See also extended docker configuration options in GitLab CI for more details.
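For reference, a minimal sketch of the adjusted configuration, keeping the image and registry paths from the question; setting DOCKER_HOST explicitly is optional once the alias is in place, but it makes the dependency on the docker hostname visible:
services:
  - name: localregistry/utilities/docker:18.09.4-dind
    alias: docker
variables:
  # 18.09-era dind typically listens on 2375 without TLS
  DOCKER_HOST: tcp://docker:2375
image: localregistry/utilities/tools:1.0.5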

Related

Jenkins build Docker container on remote host with dockerfile

I'm quite new to Jenkins and have spent two whole days twisting my head (and Google and Stack Overflow) around how to get a docker container built on a remote host (from the Jenkins host's perspective).
My setup:
Docker runs on a MacOS machine (aka the "remote host")
Jenkins runs as docker container on this machine
Bitbucket Cloud runs at Atlassian
PyCharm is my development tool - running on the MacOS machine
Everything works fine so far. Now, I want Jenkins to build a docker container (on the "remote host") containing my python demo.
I'm using a dockerfile in my project:
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD ["test.py"]
ENTRYPOINT ["python3"]
I'm trying to build a Jenkinsfile, and I'm expecting it to do 2 things:
Pull the repo
Build the docker image with the help of the dockerfile on the "remote host"
Docker is installed as a plugin and configured.
Docker is installed via Jenkins configuration.
The Docker remote host is set up in the "Cloud" section in Jenkins - the connection works (with the help of socat running as a docker container)
The Docker Host is set to the remote host IP and port 2376
I'm using a jenkins pipeline project.
The most promising thread about using remote hosts is of course https://www.jenkins.io/doc/book/pipeline/docker/#using-a-remote-docker-server
But using docker.withServer('tcp://192.168.178.26:2376') (in my case, locally, no credentials because not reachable from outside), I had no luck at all.
Most common error message: hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: org.jenkinsci.plugins.docker.workflow.Docker.withServer() is applicable for argument types: (java.lang.String, java.lang.String) values: [tcp://192.168.178.26:2376]
If I try to let Jenkins build it inside its own container with its own docker, it tells me /var/jenkins_home/workspace/dockerbuild#tmp/durable-6e12255b/script.sh: 1: /var/jenkins_home/workspace/dockerbuild#tmp/durable-6e12255b/script.sh: docker: not found
Strange, as I thought docker was installed. But I want to build on the remote host anyway.
In my eyes the most promising jenkinsfile is the following - but to be honest, I am totally lost at the moment and really need some help:
node {
    checkout scm
    docker.withServer('tcp://192.168.178.26:2376')
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
I appreciate any hint and am grateful for your help.
Best regards
Muhackl
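For what it's worth, the documented form of docker.withServer in the Docker Pipeline plugin takes a closure, so a sketch along those lines (untested, same host as above, no credentials) would look like:
node {
    checkout scm
    // Everything inside this closure talks to the remote daemon
    docker.withServer('tcp://192.168.178.26:2376') {
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        // Run the tests inside the freshly built image
        customImage.inside {
            sh 'make test'
        }
    }
}
The MissingMethodException in the question suggests the call did not match any of the plugin's signatures, all of which end in a closure.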

GitLab CI/CD - deploy images

I have a problem with GitLab CI/CD. I'm trying to build an image and run it on the server where I have my runner. My gitlab-ci.yaml:
image: docker:latest
services:
  - docker:dind
variables:
  TEST_NAME: registry.gitlab.com/pawelcyrklaf/learn-devops:$CI_COMMIT_REF_NAME
stages:
  - build
  - deploy
before_script:
  - docker login -u pawelcyrklaf -p examplepass registry.gitlab.com
build_image:
  stage: build
  script:
    - docker build -t $TEST_NAME .
    - docker push $TEST_NAME
deploy_image:
  stage: deploy
  script:
    - docker pull $TEST_NAME
    - docker kill $(docker ps -q) || true
    - docker rm $(docker ps -a -q) || true
    - docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME
My Dockerfile
FROM centos:centos7
RUN yum install httpd -y
COPY index.html /var/www/html/
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
EXPOSE 80
The Docker image builds successfully and is in the registry, and the deploy job also succeeds, but when I execute docker ps the container is not running.
I do all this same with this tutorial https://www.youtube.com/watch?v=eeXfb05ysg4
What am I doing wrong?
The job is scheduled in a container together with another service container that has docker inside it. It works: it starts your container, but after the job finishes the neighbouring docker service stops too. You then check and see no container on the host.
Try to remove:
services:
  - docker:dind
Also, check out the predefined list of CI variables. You can avoid hardcoding credentials and the image path.
P.S. You kill and rm all containers, so your CI will someday remove containers that are not managed by this repo...
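For example, a hedged sketch of how the hardcoded values above could be replaced with predefined variables (CI_REGISTRY, CI_REGISTRY_IMAGE and CI_JOB_TOKEN are provided by GitLab for projects with the container registry enabled):
variables:
  TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
This avoids committing a password to the repository; the job token is scoped to the pipeline that runs it.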
when i execute docker ps, i don't have running this image
You didn't mention how you checked for the running container, so consider the following:
Make sure you are physically checking on the right runner.
Since you didn't set any tags on the jobs, the first available runner will pick them up. You can see which runner executed the job on the job page.
Make sure your container is not down or finished.
To see all containers use docker ps -a — it shows all containers, even stopped ones. There will be an exit code from which you can determine the reason. Debug it with docker logs {container_id} (put the container_id without braces).
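For instance, on the runner host (the container id here is just a placeholder taken from the docker ps -a output):
# list every container, including stopped ones, along with its exit code
docker ps -a
# show the output of the suspect container to see why it stopped
docker logs a1b2c3d4e5f6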
Gitlab.com:
Not sure you can run a docker application in your Gitlab CI; try removing the -d option (which runs the container in the background) from your docker run command.
$ docker run -t -p 8080:80 --name gitlab_learn $TEST_NAME
If this does work, it will probably force the pipeline to never finish and it will drain your CI/CD minutes.
Self-hosted Gitlab:
Your Gitlab CI is meant to run actions to build and deploy your application, so it doesn't make sense to have your application running on the same instance your Gitlab CI runner does. Even if you want to run the app on the same instance, it shouldn't run in the same container as the runner, and to achieve this you should configure the Gitlab CI runner to use the docker daemon on the host.
Anyway, I would strongly recommend deploying somewhere other than where your Gitlab runner is running, and even better to a managed docker service such as Kubernetes or AWS ECS.
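One common way to let a Docker-executor runner drive the host's Docker is socket binding: instead of dind, the runner mounts the host's Docker socket into every job, so containers started by a job are created on the host and keep running after the job ends. A sketch of the relevant part of the runner's config.toml, assuming a Linux host and the default install paths:
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    # Jobs talk to the host daemon through this socket
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
With this setup the deploy job's docker run creates a container directly on the host, which is what the question is after, but it also means jobs can interfere with other containers on that host.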
You did not specify what your setup is, but based on the information in your question I can deduce that you're using gitlab.com (as opposed to a private GitLab instance) and a self-hosted runner with the Docker executor.
You cannot use a runner with Docker executor to deploy containers directly to the underlying Docker installation.
There are two ways to do this:
Use a runner with a shell executor as described in the YT tutorial you posted.
Prepare a helper image that will use SSH to connect to your server and run docker commands there (a rough sketch follows below).
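A rough sketch of that second approach, reusing the TEST_NAME variable from the question; DEPLOY_HOST, the deploy user and the SSH_PRIVATE_KEY CI variable are placeholders you would define yourself:
deploy_image:
  stage: deploy
  image: alpine:3.12
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    # The docker commands run on the target server, not on the runner
    - ssh -o StrictHostKeyChecking=no deploy@$DEPLOY_HOST "docker pull $TEST_NAME && docker rm -f gitlab_learn; docker run -dt -p 8080:80 --name gitlab_learn $TEST_NAME"
This keeps the runner's Docker executor untouched: the runner only needs network access to the server, and the container started over SSH lives on that server.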

Docker run in pipeline says `docker: Error response from daemon: authorization denied`

I am trying to set up a Bitbucket pipeline that uses a docker run statement, but the build fails with the following error message:
docker: Error response from daemon: authorization denied
Here is the pipeline configuration
pipelines:
  default:
    - step:
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t solc .
          # Test the solidity files in project
          - docker run solc
Question: I did not perform any operation requiring authorization, so why is the error message talking about authorization?
You are running docker commands in a shared environment. As of the time of this question, Bitbucket does not allow you to run docker run commands in that environment for security purposes. The docker commands you can run (as of the time of this question) are:
docker login
docker build
docker tag
docker pull
docker push
docker version
Docker is a client/server application. You are running the client commands and bitbucket has secured their environment on the dockerd daemon.
You can see the current capabilities of their docker integration from their documentation which has been extended since this question was first answered. As of the time of this update, it filters privileged containers and mounting host volumes outside of a predefined subdirectory.

What happens to docker images pulled / run in gitlab CI?

I'm experiencing strange problem with gitlab CI build.
My script looks like this:
- docker pull myrepo:myimage
- docker tag myrepo:myimage myimage
- docker run --name myimage myimage
It worked a few times, but then I started getting errors:
docker: Error response from daemon: Conflict. The container name "/myimage" is already in use by container ....
I logged on to the particular machine where that step was executed, and docker ps -a showed that the container was left on the build machine...
I expected gitlab CI build steps to be fully separated from the external environment by running them in docker containers, so that a build would not 'spoil' the environment for other builds. So I expected all images and containers created by the CI build to simply perish... which is not the case...
Is my gitlab somehow misconfigured, or is this expected behaviour, that docker images / containers exist in the context of the host machine and not within the docker image?
In my build, I use image docker:latest
No, your Gitlab is not misconfigured. Gitlab does clean its runners and executors (the docker image you run your commands in).
Since you are using DinD (Docker-in-Docker), any container you start or build is actually built on the same host and runs beside your job executor container, not 'inside' it.
Therefore you should clean up yourself; Gitlab has no knowledge of what you do inside your job, so it's not responsible for it.
I run various pipelines in the same situation you describe, so here are some suggestions:
job:
  script:
    - docker pull myrepo:myimage
    - docker tag myrepo:myimage myimage
    - docker run --name myimage myimage
  after_script:
    # Remove the container; if it already exited (since it's not run -d), ignore the error
    - docker rm -f myimage || true
    # Remove the pulled and retagged images
    - docker rmi -f myrepo:myimage myimage
Also (and I don't know your exact job of course) this could be shorter:
job:
  script:
    # Only pull if you want to 'refresh' any image that would be left behind
    - docker pull myrepo:myimage
    - docker run --name myimage myrepo:myimage
  after_script:
    # Remove the container; if it already exited (since it's not run -d), ignore the error
    - docker rm -f myimage || true
    # Remove the pulled image
    - docker rmi -f myrepo:myimage
The problem was that /var/run/docker.sock was mapped as a volume, which caused all docker commands to be invoked on the host, not inside the image. It's not a misconfiguration per se, but the alternative is to use the dind service: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor
It required 3 things to be done:
Add the following section to the gitlab ci config:
services:
  - docker:dind
Define the variable DOCKER_DRIVER: overlay2, either in the ci config or globally in config.toml
Load kernel module overlay (/etc/modules)
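Put together, the CI side of that change is small; a sketch (the overlay module itself still has to be loaded on the runner host, and DOCKER_DRIVER could alternatively go into the runner's config.toml via its environment setting):
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay2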

Jenkins mesosphere/jenkins-dind:0.3.1 and proxy

All,
I am using DCOS and the associated Jenkins.
My company is having a proxy for any external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in docker-in-docker mode.
I managed to reproduce the issue on one of the agent box.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the docker daemon does not have access to the proxy.
Do you know how to ensure that the dind is getting access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new Parameter on the Jenkins node (see the instructions here for an example of how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with Name env) but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
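As a quick check outside Jenkins, the same proxy variables can be handed to the dind container on an agent box; the proxy URL and NO_PROXY list are placeholders, and this assumes the image's wrapper.sh starts dockerd with the container's environment so it picks up the proxy settings for registry access:
sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  -e NO_PROXY=localhost,127.0.0.1 \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"
If that pull succeeds, the same variables can then be set on the Jenkins node configuration as described above.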
