I want to build a Docker image with a "ready to run" application in GitLab CI, so I have added this job to my .gitlab-ci.yml:
rebuild base docker:
  stage: prepare
  image: docker
  script:
    - docker build -t base_django environment
I want to use the official docker image for this (image: docker), and of course I have placed a Dockerfile in the environment directory. Unfortunately the job fails with:
Running with gitlab-runner 11.8.0 (4745a6f3)
on docker-auto-scale 72989761
Using Docker executor with image docker ...
Pulling docker image docker ...
Using docker image sha256:639de9917ae1f2e4a586893c9a6ea3f21fd774bc4037184ecac35f3153a293b5 for docker ...
Running on runner-72989761-project-9841176-concurrent-0 via runner-72989761-srm-1552402128-c63119b1...
Cloning repository...
Cloning into '/builds/*****/*****'...
Checking out a804a12f as master...
Skipping Git submodules setup
$ docker build -t base_django environment
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
You need a specific configuration to run Docker-in-Docker with GitLab CI. You can find more information here: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
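A minimal sketch of what that configuration typically looks like for the job above, assuming the documented non-TLS Docker-in-Docker defaults (adjust the DOCKER_HOST value to your runner setup):

rebuild base docker:
  stage: prepare
  image: docker
  # the dind service provides the Docker daemon that the job's docker client talks to
  services:
    - docker:dind
  variables:
    # assumption: DinD without TLS; with TLS enabled the daemon listens on tcp://docker:2376 instead
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker build -t base_django environment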
After pushing a commit to GitLab, the pipeline starts checking the new commit. The Build and Test stages run successfully, but the Deploy stage stops with the following error:
Running with gitlab-runner 12.3.0 (a8a019e0)
on gitlab-runner2 QNyj_HGG
Using Docker executor with image nexus.XXX.com/YYY/ZZZ-engines ...
Authenticating with credentials from /root/.docker/config.json
Pulling docker image nexus.XXX.com/YYY/ZZZ-engines ...
ERROR: Job failed: Error response from daemon: manifest for nexus.XXX.com/YYY/ZZZ-engines:latest not found: manifest unknown: manifest unknown (executor_docker.go:188:0s)
What could be the reason behind that?
I had the same problem. I solved it by rebuilding and republishing the Docker image that the GitLab CI file was referencing, and then re-running the pipeline.
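In shell terms, the fix was roughly the following (the registry path is taken from the error above; latest is the tag the CI file implicitly pulls):

docker build -t nexus.XXX.com/YYY/ZZZ-engines:latest .
docker push nexus.XXX.com/YYY/ZZZ-engines:latest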
There is no latest tag for that specific Docker image. Most likely you're building with a specific tag, e.g. v1.0:
docker build -t nexus.XXX.com/YYY/ZZZ-engines:v1.0 .
then using that image without a tag in your .gitlab-ci.yml:
image: nexus.XXX.com/YYY/ZZZ-engines
# OR
build-job:
  stage: build
  image: nexus.XXX.com/YYY/ZZZ-engines
  ...
To fix it, either specify a tag in your CI file, as in the snippet just below, or additionally tag the image with latest when building, as in the command after it:
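For the first option, the CI file references the tag explicitly (v1.0 here mirrors the example tag above):

image: nexus.XXX.com/YYY/ZZZ-engines:v1.0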
docker build -t nexus.XXX.com/YYY/ZZZ-engines:v1.0 -t nexus.XXX.com/YYY/ZZZ-engines:latest .
The GitLab Docker template has a nice example of how to automatically tag a build as latest on the default branch: https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Docker.gitlab-ci.yml
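The pattern in that template boils down to roughly this sketch, using GitLab's predefined CI variables (CI_REGISTRY_IMAGE, CI_COMMIT_BRANCH, CI_DEFAULT_BRANCH, CI_COMMIT_REF_SLUG); the real template has more options:

docker-build:
  stage: build
  image: docker
  services:
    - docker:dind
  script:
    # use "latest" on the default branch, the branch slug everywhere else
    - |
      if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
        tag="latest"
      else
        tag="$CI_COMMIT_REF_SLUG"
      fi
    - docker build -t "$CI_REGISTRY_IMAGE:$tag" .
    - docker push "$CI_REGISTRY_IMAGE:$tag"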
I'm using the Packer Docker builder with Ansible to create a Docker image (https://www.packer.io/docs/builders/docker.html).
I have a machine (client) which is meant to run build scripts. The Packer Docker build is executed with Ansible from this machine. The machine has the Docker client installed and is connected to a remote Docker daemon; the environment variable DOCKER_HOST is set to point at the remote Docker host. I'm able to test the connectivity, and everything works.
Now the problem is that when I execute the Packer Docker build, it errors out with:
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker612435850:/packer-files -d -i -t ubuntu:latest /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
==> docker: See 'docker run --help'.
It seems the Packer Docker builder is stuck talking to the local daemon.
Workaround: I renamed the docker binary and introduced a script called "docker" which sets DOCKER_HOST and invokes the original binary with the parameters passed through (sketched below).
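A minimal sketch of such a wrapper, assuming the real binary was renamed to docker-orig and with a placeholder daemon address:

#!/bin/sh
# hypothetical wrapper installed as "docker" on the PATH
export DOCKER_HOST=tcp://remote-docker-host:2375  # assumption: your remote daemon's address
exec docker-orig "$@"                             # forward all arguments to the renamed real binary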
Is there a better way to deal with this?
Packer's Docker builder doesn't work with remote hosts, since Packer uses the /packer-files volume mount to communicate with the container. This is vaguely expressed in the docs with:
The Docker builder must run on a machine that has Docker installed.
And explained in Overriding the host directory.
I am attempting to build a Docker image from a Dockerfile using a declarative pipeline in Jenkins. I've successfully added the 'jenkins' user to the docker group and can run 'docker run hello-world' as the jenkins user manually. However, when I attempt the build through the pipeline, I can't even run 'docker run hello-world':
From the pipeline:
[workspace] Running shell script
+ whoami
jenkins
[workspace] Running shell script
+ groups jenkins
jenkins : jenkins docker
[workspace] Running shell script
+ docker run hello-world
docker: Got permission denied while trying to connect to the Docker
daemon socket at unix:///var/run/docker.sock: Post
http://%2Fvar%2Frun%2Fdocker.sock/v1.30/containers/create: dial unix
/var/run/docker.sock: connect: permission denied.
Manually sshing into Jenkins and switching to the 'jenkins' user:
*********@auto-jenkins-01:~$ sudo su - jenkins -s /bin/bash
jenkins@auto-jenkins-01:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Some other useful information: Jenkins is running from a VM.
I needed to give the jenkins user group privileges on the Docker Unix socket by editing /etc/default/docker and adding:
DOCKER_OPTS=' -G jenkins'
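For the change to take effect, the Docker daemon (and Jenkins, so it picks up the new socket group) has to be restarted; a minimal sketch, assuming an Ubuntu-style init where /etc/default/docker is read (service names may differ on your distribution):

sudo service docker restart
sudo service jenkins restart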
I'm experiencing a strange problem with a GitLab CI build.
My script looks like this:
- docker pull myrepo:myimage
- docker tag myrepo:myimage myimage
- docker run --name myimage myimage
It worked a few times, but then I started getting errors:
docker: Error response from daemon: Conflict. The container name
"/myimage" is already in use by container ....
I logged on to the particular machine where that step was executed, and docker ps -a showed that the container was left behind on the build machine...
I expected GitLab CI build steps to be fully separated from the external environment by running them in Docker containers, so that one build would not 'spoil' the environment for other builds. In other words, I expected all images and containers created by a CI build to simply perish, which is not the case.
Is my GitLab somehow misconfigured, or is it expected behaviour that Docker images/containers exist in the context of the host machine rather than within the executor's Docker image?
In my build, I use the image docker:latest.
No, your GitLab is not misconfigured. GitLab does clean its runners and executors (the Docker image you run your commands in).
Since you are using DinD (Docker-in-Docker), any container you start or build is actually built on the same host and runs beside your job's executor container, not 'inside' it.
Therefore you should clean up yourself; GitLab has no knowledge of what you do inside your job, so it's not responsible for it.
I run various pipelines in the same situation you described, so here are some suggestions:
job:
  script:
    - docker pull myrepo:myimage
    - docker tag myrepo:myimage myimage
    - docker run --name myimage myimage
  after_script:
    # Remove the container by its --name; it has likely already exited (it's not run -d), so ignore errors.
    - docker rm -f myimage || true
    # Remove the pulled image and the local tag
    - docker rmi -f myrepo:myimage myimage
Also (and I don't know your exact job of course) this could be shorter:
job:
  script:
    # Only pull if you want to 'refresh' any image that would be left behind
    - docker pull myrepo:myimage
    - docker run --name myimage myrepo:myimage
  after_script:
    # Remove the container by its --name; it has likely already exited (it's not run -d), so ignore errors.
    - docker rm -f myimage || true
    # Remove the pulled image
    - docker rmi -f myrepo:myimage
The problem was that /var/run/docker.sock was mapped as a volume, which caused all docker commands to be invoked on the host, not inside the image. It's not a misconfiguration per se, but the alternative is to use the dind service: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor
It required three things to be done (a combined sketch follows the list):
1. Add the following section to the GitLab CI config:
   services:
     - docker:dind
2. Define the variable DOCKER_DRIVER: overlay2, either in the CI config or globally in config.toml.
3. Load the kernel module overlay (/etc/modules).
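Putting the first two items together, a minimal sketch of the resulting job (the image name and build command are placeholders, not from the original config; DOCKER_DRIVER could equally live in the runner's config.toml):

build:
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker build -t myimage .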
I am using Jenkins to build a project, but now my client wants builds to run inside a Docker image. I have installed Docker on the server and it's running on 172.0.0.1:PORT. I have installed the Docker plugin and assigned this TCP URL as the Docker URL. I have also created an image with the name jenkins-1.
In the project configuration I use the build environment "Build with Docker Container" and provide the image name; then in Build I put an Execute Shell step and build it.
But it gives this error:
Pull Docker image jenkins-1 from repository ...
$ docker pull jenkins-1
Failed to pull Docker image jenkins-1
FATAL: Failed to pull Docker image jenkins-1
java.io.IOException: Failed to pull Docker image jenkins-1
at com.cloudbees.jenkins.plugins.docker_build_env.PullDockerImageSelector.prepareDockerImage(PullDockerImageSelector.java:34)
at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:169)
at hudson.model.Build$BuildExecution.doRun(Build.java:156)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
at hudson.model.Run.execute(Run.java:1720)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
Finished: FAILURE
I have just run into the same issue. There is a 'Verbose' check-box in the build environment configuration (after selecting the 'Advanced...' link) that expands on the error details:
(screenshot: CloudBees plug-in Verbose option)
In my case I had run out of space downloading the build Docker images; expanding the EC2 volume resolved the issue.
But space is an ongoing problem, as Docker does not auto-clean images, so I ended up adding a manual cleanup step to the build:
docker volume ls -qf dangling=true | xargs -r docker volume rm
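Note that this line removes dangling volumes. If dangling images also accumulate, a similar step could be added (a suggestion beyond the original script; docker image prune is a standard Docker CLI command):

docker image prune -f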
Complete build script:
https://bitbucket.org/vk-smith/dotnetcore-api/src/master/ci-build.sh?fileviewer=file-view-default