I want to create a docker image, start it as a container (to configure database credentials etc.), commit those changes, tag it and push it to the container registry:
from .gitlab-ci.yml:
configure_db_image:
stage: docker_build
tags:
- docker-in-docker
script:
- docker login <gitlab-CI-CR> -u gitlab-ci-token -p $CI_JOB_TOKEN
- docker pull <gitlab-CI-CR>/db-template/db-template-image:latest
- docker tag <gitlab-CI-CR>/db-template/db-template-image:latest <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
# Remove the container if it exists already
- docker rm -f test-db-image-container || true
- docker create -i -p 5432:5432 --name test-db-image-container --env 'CREATE_ONLY_ON_FIRST_RUN=yes' --env 'DB_USER=user' --env 'DB_PASS=pass' --env 'DB_NAME=dbname' <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
- docker start -i test-db-image-container
- docker stop test-db-image-container
- docker commit test-db-image-container test-db-image
- docker tag test-db-image <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
- docker push <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
I don't see why, but despite the docker push, the image I pull back from the registry isn't configured. Where am I going wrong?
This works as described; the issue is with the Dockerfile of the parent image, which declares the filesystem path where the database changes are written as a Docker volume. Volume contents are not part of the image, so docker commit does not capture those changes and they don't persist when the image is pushed to the registry.
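One quick way to confirm this is to inspect the pulled image and check which paths are declared as volumes; anything written under those paths is not captured by docker commit. A small check you could run (the output shown is hypothetical; the actual path depends on the parent Dockerfile):
docker inspect --format '{{ json .Config.Volumes }}' <gitlab-CI-CR>/db-template/db-template-image:latest
# Hypothetical output for a PostgreSQL-based image: {"/var/lib/postgresql/data":{}}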
I am trying to run a process in GitLab CI that mimics the client's use case, to make sure our modifications do not disrupt it. This is the specific job that is failing.
docker-source:
stage: build
image: carlallen/docker:buildx
services:
- name: docker:dind
command: ["dockerd", "--host=tcp://0.0.0.0:2375"]
alias: 'docker'
script:
- echo "Building..."
- docker --version
- docker buildx
- docker buildx create --use --config buildkit.toml --driver-opt network=host --buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host' --name test_name
- docker run -d -p 5000:5000 --restart=always --name registry registry:2
- ./build-docker.sh
$ docker --version
Docker version 19.03.14, build 5eb3275
$ docker buildx
Usage: docker buildx [OPTIONS] COMMAND
Build with BuildKit
Options:
--builder string Override the configured builder instance
Management Commands:
imagetools Commands to work on images in registry
Commands:
bake Build from a file
build Start a build
create Create a new builder instance
du Disk usage
inspect Inspect current builder instance
ls List builder instances
prune Remove build cache
rm Remove a builder instance
stop Stop builder instance
use Set the current builder instance
version Show buildx version information
Run 'docker buildx COMMAND --help' for more information on a command.
$ docker buildx create --use --config buildkit.toml --driver-opt network=host --buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host' --name test_name
test_name
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker: error during connect: Post http://docker:2375/v1.40/containers/create?name=registry: dial tcp: lookup docker on XXX.XX.X.X:53: no such host.
See 'docker run --help'.
Thank you for the help!
Do not override the command or entrypoint for the docker:dind image. Use environment variables to control the behavior.
variables:
DOCKER_HOST: 'tcp://docker:2375'
DOCKER_TLS_CERTDIR: "" # disable tls, force use of port 2375
services:
- docker:dind
script:
- docker info # verify connection/server details
If this doesn't work, then you are probably using a self-hosted runner that is not configured correctly for use with docker-in-docker. You should follow the docker-in-docker guide and make sure your runner is set up according to the documentation.
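For reference, here is a minimal sketch of the relevant part of a self-hosted runner's config.toml for docker-in-docker; the name and URL are placeholders, and the settings that actually matter are the docker executor and privileged = true:
# /etc/gitlab-runner/config.toml (excerpt, placeholder values)
[[runners]]
  name = "dind-runner"                  # placeholder
  url = "https://gitlab.example.com/"   # placeholder
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true                   # required for the docker:dind service
    volumes = ["/cache"]                # add "/certs/client" here if you use TLS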
I have a job that starts Android UI tests on GitLab CI/CD. It somehow runs a container from the image android-uitests:1.0 from the registry. I don't know where or how GitLab CI/CD runs that image with a "docker run ..." command, but I need to extend that command and pass some variables (or arguments) to it.
Below is an example of the command I want GitLab to run:
docker run -d \
-t \
--privileged \
-e "SNAPSHOT_DISABLED"="true" \
-e "QTWEBENGINE_DISABLE_SANDBOX"=1 \
-e "WINDOW"="true" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-p 5555:5555 -p 5554:5554 -p 5901:5901 \
--name emulator \
android-uitest:1.0
This is the stage and its job that uses the image:
ui-tests:
image: registry.myproject:5000/android-uitests:1.0
stage: ui-tests
only:
- merge_requests
- schedules
when: manual
script:
- bash /run-emulator.sh
- adb devices
- adb shell input keyevent 82
- adb shell settings put global window_animation_scale 0 &
- adb shell settings put global transition_animation_scale 0 &
- adb shell settings put global animator_duration_scale 0 &
- ./gradlew mobile:connectedDevDebugAndroidTest -Pandroid.testInstrumentationRunnerArguments.package=app.online.tests.uitests
tags:
- androidtest
In other words, I want to configure that "under the hood" docker run command that runs my image.
Please tell me how to do that.
Considering you're using a Docker container, I'll assume you're running a GitLab Runner in Docker executor mode, which means the job essentially runs a script similar to this when you don't specify a Docker image for the CI job:
image: docker:19.03.13
variables:
DOCKER_TLS_CERTDIR: "/certs"
services:
- docker:19.03.13-dind
script:
- (your commands)
To understand what's going on, let's break it into multiple steps:
image: docker:19.03.13
(...)
services:
- docker:19.03.13-dind
What is docker:19.03.13-dind, and how does it differ from docker:19.03.13?
why is it a service instead of an image?
DIND means Docker-in-Docker. This part of GitLab's documentation explains it in further technical detail, but the important things to understand here are what a service is in the GitLab CI context and why an additional Docker image has to be specified when a Docker image is already being used as the default environment. When you declare a service in GitLab CI, you can use what it provides from inside the existing job container, e.g. to connect a PostgreSQL (database) container to a backend container you're building, without having to set up docker-compose or multiple containers yourself.
Running a docker service together with a docker image means you can use docker run directly within your job without any additional setup. This previous StackOverflow question explains it further.
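As a small illustration of the service idea mentioned above (a sketch only; the image tags, credentials and test command are placeholders, not taken from your project):
test-backend:
  image: node:18              # placeholder job image
  services:
    - postgres:15             # reachable from the job under the hostname "postgres"
  variables:
    POSTGRES_DB: mydb         # placeholder values read by the postgres image at startup
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
  script:
    - ./run-tests --db-host postgres --db-port 5432   # hypothetical test command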
Back to your code, instead of deploying your registry image as a job directly:
ui-tests:
image: registry.myproject:5000/android-uitests:1.0
You may want to first build your image and upload it to your registry:
image: docker:19.03.12
services:
- docker:19.03.12-dind
variables:
# Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: "/certs"
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $CI_REGISTRY_IMAGE:latest
This snippet builds the Dockerfile in the root folder of your repository and uploads the image to your GitLab private (or public, if your project is set as public) image registry. Now you can add a separate job that does specifically what you want:
Final example
image: docker:19.03.12
stages:
- build
- release
services:
- docker:19.03.12-dind
variables:
# Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: "/certs"
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
build:
stage: build
script:
- docker pull $CI_REGISTRY_IMAGE:latest || true
- docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $CI_REGISTRY_IMAGE:latest
deploy:
stage: release
script:
- docker pull $CI_REGISTRY_IMAGE:latest || true
- docker run -d -t --privileged -e "SNAPSHOT_DISABLED"="true" -e "QTWEBENGINE_DISABLE_SANDBOX"=1 -e "WINDOW"="true" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -p 5555:5555 -p 5554:5554 -p 5901:5901 --name emulator $CI_REGISTRY_IMAGE:latest
Why are you using $VARIABLES?
In case the environment variables sound confusing, here's the list of default environment variables GitLab generates for every job it creates.
The last example will result in a Docker container running on the same machine where your executor is registered, with the environment variables you have specified.
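If you want to verify that, you could (as a quick sketch, not part of the original answer) append something like this to the deploy job's script right after the docker run line:
- docker ps --filter name=emulator                         # the container started above should be listed
- docker exec emulator printenv SNAPSHOT_DISABLED WINDOW   # prints the values passed with -e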
If you need a practical example, you can use this gitlab-ci.yml of a project of mine as reference.
I have a simple project set up in GitLab CI/CD, using Docker to serve the site in a container, following this guide. But I get a "Container already in use..." error whenever a new job runs on a push event. How do I "push" the new code to my already running website without taking it down or killing the container?
# .gitlab-ci.yml
stages:
- build
job 1:
stage: build
tags:
- windows-test
script:
- docker build -t vuejs-cookbook/dockerize-vuejs-app .
- docker run -p 8080:80 --rm --name dockerize-vuejs-app-1 vuejs-cookbook/dockerize-vuejs-app
The container name is the same every time. Stop and remove the old container first.
Run docker stop dockerize-vuejs-app-1 and docker rm dockerize-vuejs-app-1 after docker build.
Besides that, I would suggest running your container detached (-d) with --restart always (docs).
docker build -t vuejs-cookbook/dockerize-vuejs-app .
docker stop dockerize-vuejs-app-1 || true   # || true so the first run doesn't fail when the container doesn't exist yet
docker rm dockerize-vuejs-app-1 || true
docker run -p 8080:80 -d --restart always --name dockerize-vuejs-app-1 vuejs-cookbook/dockerize-vuejs-app
I'm testing gitlab-ci and trying to generate an image on the registry from the Dockerfile.
I have the same code just to test:
#gitlab-ci
image: docker:latest
stages:
- build
- deploy
build_application:
stage: build
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-test
output:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker is running and the image is being pulled, but I cannot execute docker commands.
In my local environment, if I run:
docker run -it docker:latest
I end up inside the container, and when I run docker info I have the same problem. I had to fix it by running the container this way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest
but I do not know how to fix it in gitlab-ci. I configured my runner like this:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Maybe someone can point me in the right direction.
Thanks!
By default it is not possible to run docker-in-docker (DIND) (as a security measure).
This section in the Gitlab docs is your solution. You must use Docker-in-Docker.
After configuring your runner to use DIND your .gitlab-ci.yml will look like this:
#gitlab-ci
image: docker:latest
variables:
DOCKER_DRIVER: overlay2
services:
- docker:dind
before_script:
- docker info
stages:
- build
- deploy
build_application:
stage: build
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-test
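For completeness, here is a sketch of registering a runner capable of running this job; the URL, token and description are placeholders, and the flag that matters for docker:dind is --docker-privileged:
gitlab-runner register -n \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_TOKEN \
  --executor docker \
  --description "docker-in-docker runner" \
  --docker-image "docker:latest" \
  --docker-privileged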
I'm using docker-in-docker to host my containers as they work through the pipeline. The container I create from my code is set up with a volume to pass a gcloud key into the container. This works perfectly on my local machine, but on the gitlab-runner it doesn't link correctly.
From reading, this appears to be because it links a directory on the host to my container, rather than a directory on the dind host.
How do I link a directory that is inside dind to my container?
(Also, ignore any minor issues with tagging and such; this CI file is very early in development.)
The GitLab CI config is below:
image: docker:latest
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
SPRING_PROFILES_ACTIVE: gitlab-ci
CONTAINER_TEST_IMAGE: registry.gitlab.com/fdsa
CONTAINER_RELEASE_IMAGE: registry.gitlab.com/asdf
stages:
- build_test_image
- deploy
.docker_login: &docker_login | # This is an anchor
docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
build test image:
stage: build_test_image
script:
- *docker_login
- docker build -t $CONTAINER_TEST_IMAGE .
- docker push $CONTAINER_TEST_IMAGE
test run:
stage: deploy
script:
- *docker_login
- mkdir /key
- echo $GCP_SVC_KEY > /key/application_default_credentials.json
# BROKEN LINE HERE
- docker run --rm -v "/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
tags:
- docker
Background
Your problem comes from the fact that DIND runs ALL containers on your host (the top-level Docker engine). So when you mount a directory into your $CONTAINER_TEST_IMAGE (2nd-level Docker), that container in fact runs on your host through the mounted socket, and thus it looks for that directory on your Docker host.
I've had this same issue mounting tests in containers and solved it through linking volumes between containers.
Solution
In your case I think the docker cp command could solve your need to copy the /key/application_default_credentials.json file to the container.
Something like:
- docker run --name="myContainer" -d $CONTAINER_TEST_IMAGE
- docker cp /key/application_default_credentials.json myContainer:/.config/gcloud/application_default_credentials.json
- docker exec myContainer 'run_tests_or_whatever_command'
- docker rm -f myContainer
The other solution given is perfectly valid but I wanted to share my solution:
Apparently dind mounts the /builds directory so sub-containers can "see" its contents. So by placing the key under "./" it is visible to those containers. I use $(pwd) because docker run doesn't accept ~ or . as a volume source.
test run:
stage: deploy
script:
- *docker_login
- mkdir ./key
- echo $GCP_SVC_KEY > ./key/application_default_credentials.json
- docker run --rm -v "$(pwd)/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
tags:
- docker
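For reference, $(pwd) inside the job is the project checkout directory, which GitLab also exposes as the predefined variable $CI_PROJECT_DIR, so an equivalent form of that mount (same behaviour, just using the variable) would be:
- docker run --rm -v "$CI_PROJECT_DIR/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE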