In a GitLab pipeline I want to spin up a service that lets me send k6 performance-test data to Datadog. So I defined a stage called "build" and a stage called "run tests".
In "build", I run a Docker container to set up the Datadog agent.
This setup is described in the k6 documentation: https://k6.io/docs/results-output/real-time/datadog/
In the "run tests" stage I want to connect to the Datadog agent that is running in the GitLab pipeline. However, this results in an error:
"lookup datadog on 127.0.0.11:53: no such host"
I have also tried running docker-compose from gitlab-ci.yml; in that case all services can connect to each other, so the original issue looks like a network issue.
Running the Datadog agent via Docker and running the tests via Docker on my local machine also works fine. In GitLab, however, it does not.
What would the best approach be in GitLab to build a service in one stage, and to use that service and its port in another stage?
gitlab-ci.yml
stages:
  - build
  - run tests

build datadog agent:
  stage: build
  image:
    name: docker:stable
  services:
    - name: docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script: |-
    DOCKER_CONTENT_TRUST=1 \
    docker run --rm -d \
      --name datadog666 \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /proc/:/host/proc/:ro \
      -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
      -e DD_SITE="datadoghq.eu" \
      -e DD_API_KEY=<API KEY> \
      -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=1 \
      -p 8125:8125/udp \
      datadog/agent:latest

run k6 tests:
  image:
    name: loadimpact/k6:latest
    entrypoint: ['']
  stage: run tests
  variables:
    FF_NETWORK_PER_BUILD: 1
  script:
    - k6 run -e K6_STATSD_ADDR=datadog:8125 -e K6_STATSD_ENABLE_TAGS=true --out statsd ./k6/tests/k6.js
Thanks!
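A likely cause: each GitLab CI job runs in a fresh environment, so a container started via dind in the "build" stage no longer exists when "run tests" runs. A minimal sketch of one way around this, assuming the agent can run as a job-level service (FF_NETWORK_PER_BUILD puts the job and its services on one per-build network, and job variables are passed through to service containers; $DATADOG_API_KEY is an assumed masked CI/CD variable):

run k6 tests:
  image:
    name: loadimpact/k6:latest
    entrypoint: ['']
  stage: run tests
  variables:
    FF_NETWORK_PER_BUILD: 1                # job and services share one network
    DD_SITE: "datadoghq.eu"                # passed through to the service container
    DD_API_KEY: $DATADOG_API_KEY           # assumed masked CI/CD variable
    DD_DOGSTATSD_NON_LOCAL_TRAFFIC: "true"
  services:
    - name: datadog/agent:latest
      alias: datadog                       # resolvable as "datadog" from the job container
  script:
    - k6 run -e K6_STATSD_ADDR=datadog:8125 -e K6_STATSD_ENABLE_TAGS=true --out statsd ./k6/tests/k6.js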
Related
I am trying to pass env variables to my Node.js Docker image while running it, as shown below.
stages:
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - touch env.txt
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - echo "AWS_ACCESS_KEY_ID"=$AWS_ACCESS_KEY_ID >> "env.txt"
    - echo "AWS_S3_BUCKET"=$AWS_S3_BUCKET >> "env.txt"
    - echo "AWS_S3_REGION"=$AWS_S3_REGION >> "env.txt"
    - echo "AWS_SECRET_ACCESS_KEY"=$AWS_SECRET_ACCESS_KEY >> "env.txt"
    - echo "DB_URL"=$DB_URL >> "env.txt"
    - echo "JWT_EXPIRES_IN"=$JWT_EXPIRES_IN >> "env.txt"
    - echo "OTP_EXPIRE_TIME_SECONDS"=$OTP_EXPIRE_TIME_SECONDS >> "env.txt"
    - echo "TWILIO_ACCOUNT_SID"=$TWILIO_ACCOUNT_SID >> "env.txt"
    - echo "TWILIO_AUTH_TOKEN"=$TWILIO_AUTH_TOKEN >> "env.txt"
    - echo "TWILLIO_SENDER"=$TWILLIO_SENDER >> "env.txt"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"
  environment:
    name: development
    url: 90900
  only:
    - master
I am running the command docker run --env-file env.txt, but it gives me the error docker: open env.txt: no such file or directory.
How can I solve this and pass multiple variables to my docker run command?
Which job is failing? In your deploy job, you create env.txt locally and use SSH to run docker on the remote host, but you never scp your local env.txt to $SERVER_USER@$SERVER_IP for the remote process to pick it up.
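A minimal sketch of that missing step, inserted before the remote docker run (the remote path is illustrative):

    # after building env.txt locally in the deploy job:
    - scp -i $ID_RSA -o StrictHostKeyChecking=no env.txt $SERVER_USER@$SERVER_IP:~/env.txt
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run --env-file ~/env.txt -d -p 8080:8080 --name my-app $TAG_COMMIT"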
I had the same issue using GitLab CI/CD, i.e. trying to inject env vars that were referenced in the project's .env file via the runner (Docker executor) into the resulting Docker container.
We don't want to commit any sensitive info to git, so one option is to save the variables in a file on the server and include them via the --env-file flag. But the GitLab runner creates a new container for every run, so this doesn't work: the host running the YAML script is ephemeral, not the actual server the GitLab runner was installed on.
The suggestion by @dmoonfire to scp the file over sounded like a good solution, but I couldn't get it to work to copy a file from outside into the GitLab runner: I'd need to copy the public key from the executor to the GitLab runner server, but the Docker executor is ephemeral.
I found the simplest solution was to use GitLab's CI/CD variable settings. Variables can be masked and restricted to protected branches or protected tags, and they get injected into the container so that your .env file can access them.
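A minimal sketch of that approach, assuming DB_URL and JWT_EXPIRES_IN are defined (and masked) under Settings > CI/CD > Variables:

deploy:
  script:
    # GitLab injects CI/CD variables into the job environment;
    # write the ones the app expects into a file for --env-file
    - echo "DB_URL=$DB_URL" >> env.txt
    - echo "JWT_EXPIRES_IN=$JWT_EXPIRES_IN" >> env.txt
    - docker run --env-file env.txt -d --name my-app $TAG_COMMIT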
The main reason I'm trying to use GitLab CI is to automate unit testing before deployment. I want to:
1. build my Docker images and push them to my image repository, then
2. ensure all my pytest unit tests pass, and finally
3. deploy to my production server.
However, my pytest command doesn't run at all if I include the -T flag as follows. It just instantly returns 0 and "succeeds", which is not correct because I have a failing test in there:
docker-compose exec -T web_service pytest /app/tests/ --junitxml=report.xml
On my local computer, I run the tests without the -T flag as follows, and it runs correctly (and the test fails as expected):
docker-compose exec web_service pytest /app/tests/ --junitxml=report.xml
But if I do that in GitLab CI, I get the error "the input device is not a TTY" if I omit the -T flag.
Here's some of my ".gitlab-ci.yml" file:
image:
  name: docker/compose:1.29.2
  # Override the entrypoint (important)
  entrypoint: [""]

# Must have this service
# Note: --privileged is required for Docker-in-Docker to function properly,
# but it should be used with care as it provides full access to the host environment
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  # DOCKER_HOST is essential
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  # First test that gitlab-runner has access to Docker
  - docker --version
  # Set variable names
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export MY_IMAGE=$IMAGE:web_service
  # Install bash
  - apk add --no-cache bash
  # Add environment variables stored in GitLab, to .env file
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  # Login to the Gitlab registry and pull existing images to use as cache
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    # Pull the image for the build cache, and continue even if this image download fails (it'll fail the very first time)
    - docker pull $MY_IMAGE || true
    # Build and push Docker images to the Gitlab registry
    - docker-compose -f docker-compose.ci.build.yml build
    - docker push $MY_IMAGE
  only:
    - master

test:
  stage: test
  script:
    # Pull the image
    - docker pull $MY_IMAGE
    # Start the containers and run the tests before deployment
    - docker-compose -f docker-compose.ci.test.yml up -d
    # TODO: The following always succeeds instantly with "-T" flag,
    # but won't run at all if I exclude the "-T" flag...
    - docker-compose -f docker-compose.ci.test.yml exec -T web_service pytest --junitxml=report.xml
    - docker-compose -f docker-compose.ci.test.yml down
  artifacts:
    when: always
    paths:
      - report.xml
    reports:
      junit: report.xml
  only:
    - master

deploy:
  stage: deploy
  script:
    - bash deploy.sh
  only:
    - master
I've found a solution here. It's a quirk with docker-compose exec. Instead, I find the container ID with $(docker-compose -f docker-compose.ci.test.yml ps -q web_service) and use docker exec --tty <container_id> pytest ...
In the test stage, I've made the following substitution:
test:
  stage: test
  script:
    # docker-compose -f docker-compose.ci.test.yml exec -T myijack pytest /app/tests/ --junitxml=report.xml
    - docker exec --tty $(docker-compose -f docker-compose.ci.test.yml ps -q web_service) pytest /app/tests --junitxml=report.xml
I have a job that starts Android UI tests on GitLab CI/CD. It runs a container from the image android-uitests:1.0 from my registry. I don't know where and how GitLab CI/CD runs that image with a "docker run ..." command, but I need to extend that command to pass some variables (or arguments) to it.
Here's an example of the command I want GitLab to run:
docker run -d \
  -t \
  --privileged \
  -e "SNAPSHOT_DISABLED"="true" \
  -e "QTWEBENGINE_DISABLE_SANDBOX"=1 \
  -e "WINDOW"="true" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  -p 5555:5555 -p 5554:5554 -p 5901:5901 \
  --name emulator \
  android-uitests:1.0
This is the stage and its job with the image:
ui-tests:
  image: registry.myproject:5000/android-uitests:1.0
  stage: ui-tests
  only:
    - merge_requests
    - schedules
  when: manual
  script:
    - bash /run-emulator.sh
    - adb devices
    - adb shell input keyevent 82
    - adb shell settings put global window_animation_scale 0 &
    - adb shell settings put global transition_animation_scale 0 &
    - adb shell settings put global animator_duration_scale 0 &
    - ./gradlew mobile:connectedDevDebugAndroidTest -Pandroid.testInstrumentationRunnerArguments.package=app.online.tests.uitests
  tags:
    - androidtest
In other words, I want to configure that "under the hood" docker run command that runs my image.
How can I do that?
Considering you're using a Docker container, I'll assume your GitLab Runner uses the Docker executor, which means you're essentially running something similar to this when you don't specify a Docker image for the CI job:
image: docker:19.03.13

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:19.03.13-dind

script:
  - (your commands)
To understand what's going on, let's break it into multiple steps:
image: docker:19.03.13
(...)
services:
  - docker:19.03.13-dind
What is docker:19.03.13-dind, and what's the difference from docker:19.03.13? And why is it a service instead of an image?
DIND means Docker-in-Docker. This part of GitLab's documentation explains it in further technical detail, but the important things to understand here are what a service is in GitLab CI's context, and why an additional Docker image has to be specified when a Docker image is already being used as the default environment. When you declare a service in GitLab CI, you can use its functionality from inside the existing container, e.g. connecting a PostgreSQL (database) container to a backend container you're building, without having to set up docker-compose or multiple containers, as sketched below.
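For illustration, a minimal sketch of that service pattern (image tags and credentials here are illustrative):

test-backend:
  image: node:18
  services:
    - postgres:15                # reachable from the job container as host "postgres"
  variables:
    POSTGRES_PASSWORD: example   # job variables are also passed to the service container
    DATABASE_URL: postgres://postgres:example@postgres:5432/postgres
  script:
    - npm test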
Using a docker service together with a docker image means you can use docker run directly within your job without any additional setup. This earlier StackOverflow question explains it further.
Back to your code, instead of deploying your registry image as a job directly:
ui-tests:
  image: registry.myproject:5000/android-uitests:1.0
You may want to first build your container and upload it to your registry:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
  - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  - docker push $CI_REGISTRY_IMAGE:latest
This snippet builds the Dockerfile in the root folder of your repository and uploads the image to your GitLab private (or public, if your project is set as public) image registry. Now you can add a job that does specifically what you want:
Final example
image: docker:19.03.12

stages:
  - build
  - release

services:
  - docker:19.03.12-dind

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

deploy:
  stage: release
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker run -d -t --privileged -e "SNAPSHOT_DISABLED"="true" -e "QTWEBENGINE_DISABLE_SANDBOX"=1 -e "WINDOW"="true" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -p 5555:5555 -p 5554:5554 -p 5901:5901 --name emulator $CI_REGISTRY_IMAGE:latest
Why are you using $VARIABLES?
In case the environment variables sound confusing, here's the list of predefined environment variables GitLab provides for every job it creates.
The last example will result in a Docker container running on the same machine where your executor is registered, with the environment variables you specified.
If you need a practical example, you can use this gitlab-ci.yml of a project of mine as reference.
I am configuring a very basic GitLab pipeline for Protractor tests.
There is only one service, using the Docker image selenium/standalone-chrome-debug.
I want to start the image within GitLab CI as a service, but with specific command-line options: "docker run -d -p 4444:4444 -p 5900:5900 selenium/standalone-chrome-debug".
I found out that it's possible to pass command-line arguments by creating a custom Docker image with CMD (details here: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2514), but I can't find a word about passing command-line options. Is it even possible?
stage: test
services:
  - name: selenium/standalone-chrome-debug
before_script:
  - removed as it is not relevant
script:
  - npm run $TEST_SUITE_NAME -- --host=selenium__standalone-chrome-debug
You can do it with docker or docker-compose. Change your service to docker:dind and try this:
stage: test
services:
  - name: docker:dind
before_script:
  - removed as it is not relevant
  - docker run -d -p 4444:4444 -p 5900:5900 selenium/standalone-chrome-debug
script:
  - npm run $TEST_SUITE_NAME -- --host=localhost:5900
I didn't test it...but I think it will work with some improvements.
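For completeness, GitLab CI can also override a service's entrypoint and command directly, which may avoid docker:dind entirely; a minimal sketch (the flags shown are illustrative, not options this image necessarily accepts):

stage: test
services:
  - name: selenium/standalone-chrome-debug
    alias: selenium
    # "command" replaces the image's CMD, like trailing arguments to "docker run"
    command: ["--some-flag", "--another-flag=value"]
script:
  - npm run $TEST_SUITE_NAME -- --host=selenium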
I'm getting the error docker: command not found while running the following CI script in gitlab-ci. The error happens during before_script for the deploy stage.
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

build:
  stage: build
  image: docker:latest
  script:
    - docker info
    - docker version
    - docker build --pull -t $SERVICE_NAME:$CI_COMMIT_REF_NAME .
    - docker image list
    - docker tag $SERVICE_NAME:$CI_COMMIT_REF_NAME $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME

test:
  image: docker:latest
  stage: test
  script:
    - docker pull $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME
    - docker image list
    - docker run $CI_REGISTRY_IMAGE/$SERVICE_NAME:$CI_COMMIT_REF_NAME npm test

deploy:
  image: google/cloud-sdk
  stage: deploy
  environment: Production
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/GCLOUD_KEYFILE.json
    - gcloud auth activate-service-account --key-file /tmp/GCLOUD_KEYFILE.json
    - rm /tmp/GCLOUD_KEYFILE.json
    - gcloud info
    - gcloud components list
  only:
    - master
I'm a bit confused, because I'm running docker-in-docker (docker:dind) as a service, so the docker command should be available to all stages (if I understand this correctly), yet it clearly isn't.
Is it due to an interaction with google/cloud-sdk?
You probably misunderstood what services mean. From the doc,
The services keyword defines just another docker image that is run during your job and is linked to the docker image that the image keyword defines.
What you need is a custom image for your Docker executor, based on the dind image and preinstalled with the gcloud SDK. You can build such an image with this Dockerfile:
FROM docker:latest

RUN apk add --no-cache \
    bash \
    build-base \
    curl \
    git \
    libffi-dev \
    openssh \
    openssl-dev \
    python \
    py-pip \
    python-dev

RUN pip install docker-compose fabric

RUN curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
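Once built and pushed somewhere the runner can pull from, jobs would reference that image instead of google/cloud-sdk; a sketch with a hypothetical registry path:

deploy:
  # hypothetical path for the image built from the Dockerfile above
  image: registry.example.com/ci/docker-gcloud:latest
  stage: deploy
  script:
    - docker version   # the docker CLI is now available
    - gcloud info      # and so is the gcloud SDK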
The question was asked almost 5 years ago, and I'm not sure whether the google/cloud-sdk image shipped without Docker binaries back then; I can't think of any other cause for a docker: command not found error than the binary not being in the standard location. Anyway, today (2022) google/cloud-sdk comes with docker and can interact with the docker service, and since I ended up here several times after running into problems trying to use docker:dind with google/cloud-sdk, I'll add the following:
It is possible to use docker from the google/cloud-sdk image; there is no need to create a custom image for your GitLab CI. The problem is that docker in google/cloud-sdk tries to connect to the socket at /var/run/docker.sock, as shown in the logs:
$ docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
You can also check in the logs of the docker:dind service that docker listens on a socket (not reachable from the job container) and on a TCP port (reachable via the hostname docker). So you just need to use the TCP port in your docker commands, either by setting the env variable DOCKER_HOST or by adding -H tcp://docker:2375, as in:
$ docker -H tcp://docker:2375 build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .
Step 1/8 : FROM python:latest
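Equivalently, a minimal sketch of the DOCKER_HOST variant (port 2375 assumes the dind service runs without TLS):

deploy:
  image: google/cloud-sdk
  services:
    - docker:dind
  variables:
    # point the docker CLI at the dind service's TCP port instead of the local socket
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker build -t gcr.io/$GCP_PROJECT_ID/test:$CI_COMMIT_SHORT_SHA .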
You forgot to specify the image at the top.
image: docker:latest

services:
  - docker:dind

...
Works for me! :)
See: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html