In the second stage of my CI pipeline (after building a fresh Docker image), I'm using other Docker containers to test the new image, but I'm unable to send any requests between these containers. My CI config is as follows:
stages:
- build
- test
services:
- docker:dind
variables:
# Enable network-per-build to allow gitlab services to see one another
FF_NETWORK_PER_BUILD: "true"
DOCKER_DRIVER: overlay2
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: '/certs'
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
# IMAGE SETTINGS
NODE_ENV: "development"
API_URL: "http://danielgtaylor-apisprout:8000"
PORT: 8080
build:
stage: build
image: docker
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build -t ${CI_REGISTRY_IMAGE}:test .
- docker push ${CI_REGISTRY_IMAGE}:test
test:
stage: test
image: docker
services:
- docker:dind
- name: ${CI_REGISTRY_IMAGE}:test
alias: server
script:
- docker run --rm --name apisprout -d -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
- docker run --rm --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'
Running this, I receive the following error: ENOTFOUND server server:8080
I have also tried with a new bridge network:
test:
stage: test
image: docker
services:
- docker:dind
script:
- docker network create -d bridge mynet
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker run -d --network=mynet --hostname=server ${CI_REGISTRY_IMAGE}:test
- docker run --rm --network=mynet --hostname=apisprout --name apisprout -d -v $CI_PROJECT_DIR/v2-spec.yml:/api.yaml danielgtaylor/apisprout /api.yaml
- docker run --rm --network=mynet --name newman -v $CI_PROJECT_DIR:/etc/newman postman/newman run 'Micros V2.postman_collection.json'
But I receive the same error: ENOTFOUND server server:8080.
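One way to sanity-check name resolution on that user-defined network (a sketch, assuming busybox's wget is available) would be:

docker run --rm --network=mynet busybox wget -qO- http://server:8080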
I'm unable to run these docker run containers as GitLab services, because I don't believe attaching volumes to services is supported yet.
I'm also running this on GitLab.com shared runners, not a private runner.
Related
I use a GitLab Runner on an EC2 instance to build, test, and deploy Docker images to ECS.
I start my CI workflow with a "push/pull" logic: I build all my Docker images during the first stage and push them to my GitLab registry, then I pull them during the test stage.
I thought I could drastically improve the workflow time by keeping the images built during the build stage available between the build and test stages.
My gitlab-ci.yml looks like this:
stages:
- build
- test
- deploy
build_backend:
stage: build
image: docker
services:
- docker:dind
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
script:
- docker build -t backend:$CI_COMMIT_BRANCH ./backend
only:
refs:
- develop
- master
build_generator:
stage: build
image: docker
services:
- docker:dind
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
script:
- docker build -t generator:$CI_COMMIT_BRANCH ./generator
only:
refs:
- develop
- master
build_frontend:
stage: build
image: docker
services:
- docker:dind
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
script:
- docker build -t frontend:$CI_COMMIT_BRANCH ./frontend
only:
refs:
- develop
- master
build_scraping:
stage: build
image: docker
services:
- docker:dind
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
script:
- docker build -t scraping:$CI_COMMIT_BRANCH ./scraping
only:
refs:
- develop
- master
test_backend:
stage: test
needs: ["build_backend"]
image: docker
services:
- docker:dind
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
- DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
- mkdir -p $DOCKER_CONFIG/cli-plugins
- apk add curl
- curl -SL https://github.com/docker/compose/releases/download/v2.3.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
- chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
script:
- docker compose -f docker-compose-ci.yml up -d backend
- docker exec backend pip3 install --no-cache-dir --upgrade -r requirements-test.txt
- docker exec db sh mongo_init.sh
- docker exec backend pytest test --junitxml=report.xml -p no:cacheprovider
artifacts:
when: always
reports:
junit: backend/report.xml
only:
refs:
- develop
- master
test_generator:
stage: test
needs: ["build_generator"]
image: docker
services:
- docker:dind
before_script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
- DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
- mkdir -p $DOCKER_CONFIG/cli-plugins
- apk add curl
- curl -SL https://github.com/docker/compose/releases/download/v2.3.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
- chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
script:
- docker compose -f docker-compose-ci.yml up -d generator
- docker exec generator pip3 install --no-cache-dir --upgrade -r requirements-test.txt
- docker exec generator pip3 install --no-cache-dir --upgrade -r requirements.txt
- docker exec db sh mongo_init.sh
- docker exec generator pytest test --junitxml=report.xml -p no:cacheprovider
artifacts:
when: always
reports:
junit: generator/report.xml
only:
refs:
- develop
- master
[...]
gitlab-runner/config.toml:
concurrent = 5
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "Docker Runner"
url = "https://gitlab.com/"
token = ""
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "docker:19.03.12"
privileged = true
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/certs/client", "/cache"]
shm_size = 0
docker-compose-ci.yml:
services:
backend:
container_name: backend
image: backend:$CI_COMMIT_BRANCH
build:
context: backend
volumes:
- ./backend:/app
networks:
default:
ports:
- 8000:8000
- 587:587
- 443:443
environment:
- ENVIRONMENT=development
depends_on:
- db
generator:
container_name: generator
image: generator:$CI_COMMIT_BRANCH
build:
context: generator
volumes:
- ./generator:/var/task
networks:
default:
ports:
- 9000:8080
environment:
- ENVIRONMENT=development
depends_on:
- db
db:
container_name: db
image: mongo
volumes:
- ./mongo_init.sh:/mongo_init.sh:ro
networks:
default:
environment:
MONGO_INITDB_DATABASE: DB
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: admin
ports:
- 27017:27017
frontend:
container_name: frontend
image: frontend:$CI_COMMIT_BRANCH
build:
context: frontend
volumes:
- ./frontend:/app
networks:
default:
ports:
- 8080:8080
depends_on:
- backend
networks:
default:
driver: bridge
When I comment out context: in my docker-compose-ci.yml, Docker can't find my image, and indeed it is not kept between jobs.
What is the best Docker approach during CI to build -> test -> deploy?
Should I zip my Docker images and share them between stages using artifacts? That doesn't seem to be the most efficient way to do this.
I'm a bit lost about which approach I should use to perform such a common workflow in GitLab CI using Docker.
The best way to do this is to push the image to the registry and pull it in other stages where it is needed. You appear to be missing the push/pull logic.
You also want to make sure you're leveraging Docker build caching. You'll probably want to specify the cache_from: key in your compose file (a sketch follows the example below).
For example:
build:
stage: build
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
# pull latest image to leverage cached layers
- docker pull $CI_REGISTRY_IMAGE:latest || true
# build and push the image to be used in subsequent stages
- docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA # push the image
test:
stage: test
needs: [build]
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
# pull the image that was built in the previous stage
- docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker-compose up # or docker run or whatever
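For the compose side of the caching mentioned above, a sketch of the cache_from: key (service and image names are assumed from your file, and the latest tag must already exist in the registry for its layers to be reusable):

services:
  backend:
    image: backend:$CI_COMMIT_BRANCH
    build:
      context: backend
      # seed the build cache from the most recently pushed image
      cache_from:
        - $CI_REGISTRY_IMAGE/backend:latest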
Try mounting the "Docker Root Dir" as a persistent/NFS volume that is shared by the fleet of runners.
Docker images are stored under the "Docker Root Dir" path. You can find your Docker root by running:
# docker info
...
Storage Driver: overlay2
Docker Root Dir: /var/lib/docker
...
Generally, the default paths by OS are:
Ubuntu: /var/lib/docker/
Fedora: /var/lib/docker/
Debian: /var/lib/docker/
Windows: C:\ProgramData\DockerDesktop
MacOS: ~/Library/Containers/com.docker.docker/Data/vms/0/
Once properly mounted to all agents, you will be able to access all local docker images.
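A sketch of what this could look like in the runner's config.toml, assuming a hypothetical NFS mount at /mnt/nfs/docker shared by all runner hosts:

[runners.docker]
  privileged = true
  # /mnt/nfs/docker is a hypothetical shared mount; mapping it over
  # /var/lib/docker makes it the dind daemon's "Docker Root Dir"
  volumes = ["/mnt/nfs/docker:/var/lib/docker", "/certs/client", "/cache"]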
References:
https://docs.gitlab.com/runner
https://blog.nestybox.com/2020/10/21/gitlab-dind.html
I have a Node.js application that I need to deploy to an existing Kubernetes cluster.
The cluster is set up using kops on AWS.
I have created a .gitlab-ci.yml file for building Docker images.
So, whenever a change is pushed to either the master or develop branch, it builds the Docker image.
I have already followed the steps defined here to add an existing cluster.
Now I have to roll out an update to the existing Kubernetes cluster.
# This file is a template, and might need editing before it works on your project.
docker-build-master:
# Official docker image.
image: docker:latest
stage: build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
- docker push "$CI_REGISTRY_IMAGE:prod"
only:
- master
docker-build-dev:
# Official docker image.
image: docker:latest
stage: build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:dev" .
- docker push "$CI_REGISTRY_IMAGE:dev"
only:
- develop
For now, I am using a shared runner.
How can I integrate a Kubernetes deployment into GitLab CI/CD, so that after the image is built it is deployed to the cluster on AWS (created with kops)?
For the registry I am using GitLab's Container Registry, not Docker Hub.
Update
I changed the configuration to the below:
stages:
- docker-build
- deploy
docker-build-master:
image: docker:latest
stage: docker-build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
- docker push "$CI_REGISTRY_IMAGE:prod"
only:
- master
deploy-prod:
stage: deploy
image: roffe/kubectl
script:
- kubectl apply -f scheduler-deployment.yaml
only:
- master
docker-build-dev:
image: docker:latest
stage: docker-build
services:
- docker:dind
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
script:
- docker build --pull -t "$CI_REGISTRY_IMAGE:dev" .
- docker push "$CI_REGISTRY_IMAGE:dev"
only:
- develop
But now I am getting the error below.
roffe/kubectl with digest roffe/kubectl@sha256:ba13f8ffc55c83a7ca98a6e1337689fad8a5df418cb160fa1a741c80f42979bf ...
$ kubectl apply -f scheduler-deployment.yaml
error: the path "scheduler-deployment.yaml" does not exist
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
The file scheduler-deployment.yaml does exist in the root directory of the repository.
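One quick way to narrow this down (a sketch) is to list the job's working directory before kubectl runs:

deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - pwd && ls -la   # confirm the checkout contains scheduler-deployment.yaml
    - kubectl apply -f scheduler-deployment.yaml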
I have a job that runs Android UI tests on GitLab CI/CD. It somehow runs a container from the image android-uitests:1.0 from my registry. I don't know where and how GitLab CI/CD runs that image with "docker run ...", but I need to extend that command to pass some variables (or arguments) to it.
Below is an example of the command I want GitLab to run:
docker run -d \
-t \
--privileged \
-e "SNAPSHOT_DISABLED"="true" \
-e "QTWEBENGINE_DISABLE_SANDBOX"=1 \
-e "WINDOW"="true" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-p 5555:5555 -p 5554:5554 -p 5901:5901 \
--name emulator \
android-uitests:1.0
This is the stage and its job that uses the image:
ui-tests:
image: registry.myproject:5000/android-uitests:1.0
stage: ui-tests
only:
- merge_requests
- schedules
when: manual
script:
- bash /run-emulator.sh
- adb devices
- adb shell input keyevent 82
- adb shell settings put global window_animation_scale 0 &
- adb shell settings put global transition_animation_scale 0 &
- adb shell settings put global animator_duration_scale 0 &
- ./gradlew mobile:connectedDevDebugAndroidTest -Pandroid.testInstrumentationRunnerArguments.package=app.online.tests.uitests
tags:
- androidtest
In other words, I want to configure the "under the hood" docker run command that launches my image.
How can I do that?
Considering you're using a Docker container, I'll assume you're running a GitLab Runner in Docker executor mode, which means a job essentially runs a script similar to this when you don't specify a Docker image for the CI job:
image: docker:19.03.13
variables:
DOCKER_TLS_CERTDIR: "/certs"
services:
- docker:19.03.13-dind
script:
- (your commands)
To understand what's going on, let's break it into multiple steps:
image: docker:19.03.13
(...)
services:
- docker:19.03.13-dind
What's the difference between docker:19.03.13-dind and docker:19.03.13?
Why is it a service instead of an image?
DinD means Docker-in-Docker. This part of GitLab's documentation explains it in further technical detail, but what is important to understand here is what a service is in GitLab CI's context, and why an additional Docker image has to be specified when you're already using a Docker image as the default environment. When you declare a service on GitLab CI, you can use its functionality from inside the job's container, e.g. to connect a PostgreSQL (database) container to a backend container you're building, without having to set up docker-compose or manage multiple containers yourself.
Running the dind service together with a Docker image means you can use docker run directly within your job without any additional setup. This previous StackOverflow question explains it further.
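For example, a minimal sketch of a job with a database service (the image, credentials, and test script are illustrative assumptions, not taken from the project above):

test:
  image: node:16
  services:
    - postgres:13
  variables:
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
  script:
    # the service container is reachable under the hostname "postgres"
    - ./run-tests.sh "postgres://runner:runner@postgres:5432/postgres"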
Back to your code, instead of deploying your registry image as a job directly:
ui-tests:
image: registry.myproject:5000/android-uitests:1.0
You may want to first build your image and upload it to your registry:
image: docker:19.03.12
services:
- docker:19.03.12-dind
variables:
# Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: "/certs"
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $CI_REGISTRY_IMAGE:latest
This snippet will build your Dockerfile in the root folder of your repository and upload it to your Gitlab private (or public, if your project is set as public) image registry. Now you can specify an additional job specifically to do what you want:
Final example
image: docker:19.03.12
stages:
- build
- release
services:
- docker:19.03.12-dind
variables:
# Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: "/certs"
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
build:
stage: build
script:
- docker pull $CI_REGISTRY_IMAGE:latest || true
- docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $CI_REGISTRY_IMAGE:latest
deploy:
stage: release
script:
- docker pull $CI_REGISTRY_IMAGE:latest || true
- docker run -d -t --privileged -e "SNAPSHOT_DISABLED"="true" -e "QTWEBENGINE_DISABLE_SANDBOX"=1 -e "WINDOW"="true" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -p 5555:5555 -p 5554:5554 -p 5901:5901 --name emulator $CI_REGISTRY_IMAGE:latest
Why are you using $VARIABLES?
In case the environment variables sound confusing, here's the list of default environment variables GitLab generates for every job it creates.
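For instance (a trivial sketch), any job can inspect a couple of them directly:

script:
  - echo "$CI_REGISTRY_IMAGE"   # this project's registry path
  - echo "$CI_COMMIT_SHA"       # the commit this pipeline runs against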
The last example I cited will result in a Docker container running on the same machine where your executor is registered, with the environment variables you have specified.
If you need a practical example, you can use this gitlab-ci.yml of a project of mine as reference.
I want to upload my frontend to Sentry, but I need to get the dist folder out of a Docker image using docker commands. However, when I use image: getsentry/sentry-cli,
docker doesn't work, e.g. in before_script I get an error that docker doesn't exist.
sentry_job:
stage: sentry_job
image: getsentry/sentry-cli
services:
- docker:18-dind
before_script:
- docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.cz
script:
# script...
# Get the dist folder from the image
- mkdir frontend_dist
- docker run --rm -v $PWD/frontend_dist:/mounted --entrypoint="" $IMAGE /bin/sh -c "cp -r /frontend/dist /mounted"
- ls frontend_dist
tags:
- dind
How do I fix that?
To achieve what you want, you need to use a single job (to have the same build context) and specify docker:stable as the job image (along with docker:stable-dind as a service).
This setup is called docker-in-docker and this is the standard way to allow a GitLab CI script to run docker commands (see doc).
Thus, you could slightly adapt your .gitlab-ci.yml code like this:
sentry_job:
stage: sentry_job
image: docker:stable
services:
- docker:stable-dind
variables:
IMAGE: "${CI_REGISTRY_IMAGE}:latest"
before_script:
- docker login -u gitlab-ci-token -p "${CI_JOB_TOKEN}" registry.gitlab.cz
script:
- git pull "$IMAGE"
- mkdir -v frontend_dist
- docker run --rm -v "$PWD/frontend_dist:/mounted" --entrypoint="" "$IMAGE" /bin/sh -c "cp -v /frontend/dist /mounted"
- ls frontend_dist
- docker pull getsentry/sentry-cli
- docker run --rm -v "$PWD/frontend_dist:/work" getsentry/sentry-cli
tags:
- dind
Note: the docker pull commands are optional (they ensure Docker will use the latest version of the images).
Also, you may need to change the definition of the IMAGE variable.
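Alternatively, the dist folder can be extracted without running the container at all, using docker create plus docker cp (a sketch reusing the paths above; the container name extract is arbitrary):

docker create --name extract "$IMAGE"
docker cp extract:/frontend/dist ./frontend_dist
docker rm extract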
I'm testing GitLab CI and trying to build an image and push it to the registry from a Dockerfile.
I have this minimal code just to test:
#gitlab-ci
image: docker:latest
stages:
- build
- deploy
build_application:
stage: build
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
output:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker is running and the image is being pulled, but I cannot execute docker commands.
In my local environment, if I run:
docker run -it docker:latest
and then run docker info inside the container, I have the same problem. I had to fix it by running the container this way:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest
but I do not know how to fix it in GitLab CI. I configured my runner like this:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Maybe someone can point me in the right direction.
Thanks!
By default it is not possible to run Docker-in-Docker (DinD), as a security measure.
This section in the GitLab docs is your solution: you must use Docker-in-Docker, which requires a privileged runner.
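For reference, registering such a runner typically looks like this (a sketch; the URL and registration token are placeholders):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<your-token>" \
  --executor docker \
  --docker-image docker:latest \
  --docker-privileged \
  --docker-volumes "/certs/client"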
After configuring your runner to use DinD, your .gitlab-ci.yml will look like this:
#gitlab-ci
image: docker:latest
variables:
DOCKER_DRIVER: overlay2
services:
- docker:dind
before_script:
- docker info
stages:
- build
- deploy
build_application:
stage: build
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA