GitLab runner does not print output of docker run command

Is there any way to print the output of a docker run command on a GitLab runner?
I am running the following:
docker-build:
  # Official docker image.
  image: docker/compose:debian-1.26.2
  stage: build
  services:
    - docker:dind
  before_script:
    - apt update && apt install -y curl
    - docker-compose build
    - docker-compose up -d
    - DB_CONTAINER=`docker ps | grep server | awk '{print $NF}'`
    - until (docker logs $DB_CONTAINER 2>&1 | grep -e ' Running on'); do echo "Waiting for server..."; sleep 1; done # Wait for the DB to be ready
  script:
    - cd tests
    - docker build -t testimg .
    - docker -t -v $PWD/tests_integration_test1.py:/tests_integration_test1.py run testimg pytest tests_integration_test1.py
but nothing is printed to the GitLab CI log.
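One thing jumps out (my own observation, not confirmed in the thread): in the last script line, the -t and -v options are placed before the run subcommand, which the Docker CLI rejects with an "unknown shorthand flag" error, so the container never starts. The intended invocation would presumably be:

# Options for `docker run` must come after the subcommand, not before it
docker run -t -v "$PWD/tests_integration_test1.py:/tests_integration_test1.py" testimg pytest tests_integration_test1.py

With -t allocating a TTY, the pytest output should then appear in the job log.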

Related

GitLab CI/CD not taking latest code changes

I have used GitLab CI/CD to deploy changes to a private Docker Hub repo, and a DigitalOcean droplet runs the server in Docker, but the changes are not reflected in the container running on DigitalOcean. Here's the config file.
variables:
  IMAGE_NAME: codelyzer/test-repo
  IMAGE_TAG: test-app-1.0
stages:
  - test
  - build
  - deploy
run_tests:
  stage: test
  image: node:16
  before_script:
    - npm install jest
  script:
    - npm run test
build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASS
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker image prune -f &&
      docker ps -aq | xargs docker stop | xargs docker rm &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
The DigitalOcean server wasn't fetching the latest image from the repository, so I added a docker prune as an additional step:
deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker ps -aq | (xargs docker stop || true) | (xargs docker rm || true) &&
      docker system prune -a -f &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"
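For what it's worth, the likely root cause (my reading; the post doesn't state it explicitly): docker run never re-pulls a tag that already exists locally, so with the fixed tag test-app-1.0 the droplet keeps using its cached image. An explicit docker pull is a lighter fix than pruning everything; a sketch:

deploy:
  stage: deploy
  before_script:
    - chmod 400 $SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY root@159.89.175.212 "
      docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
      docker pull $IMAGE_NAME:$IMAGE_TAG &&
      (docker ps -aq | xargs -r docker stop) &&
      (docker ps -aq | xargs -r docker rm) &&
      docker run -d -p 5001:5001 $IMAGE_NAME:$IMAGE_TAG"

Here xargs -r (a GNU extension) skips the command when there are no containers, so the chain doesn't fail on a fresh droplet.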

CI/CD script for build & deploy docker image in aws EC2

Can I build, push (to the GitLab registry) and deploy the image (to AWS EC2) using this CI/CD configuration?
stages:
  - build
  - deploy
build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists
deploy:
  stage: deploy
  before_script:
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@18.0.0.82 "sudo docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; sudo docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; sudo docker-compose up -d"
  after_script:
    - sudo docker logout
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
After running the pipeline, the build job succeeds but the deploy job fails.
(screenshot: build succeeded)
(screenshot: deploy failed)
The configuration is supposed to build and deploy the image.
There are a couple of errors, but the overall pipeline seems good:
- You cannot use ssh-add without having the agent running.
- Why create the .ssh folder manually if afterwards you explicitly ignore the host key that would be stored under known_hosts?
- Using StrictHostKeyChecking=no is dangerous and not recommended.
In the before_script, add the following:
before_script:
  - eval `ssh-agent`
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - ssh-keyscan -H 18.0.0.82 >> ~/.ssh/known_hosts
Also, don't use sudo with your ubuntu user; better to add it to the docker group, or connect through SSH as a user that is already in the docker group.
In case you don't already have a docker group on your EC2 instance, now is a good moment to configure it:
Access your EC2 instance and create the docker group:
$ sudo groupadd docker
Add the ubuntu user to the docker group:
$ sudo usermod -aG docker ubuntu
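One caveat from general Linux behavior (not mentioned in the answer): group membership is evaluated at login, so the ubuntu user must start a new SSH session before the change takes effect. A quick check from any machine holding the key:

ssh ubuntu@18.0.0.82 "docker info"   # succeeding without sudo confirms the group change took effect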
Now change your script to:
script:
  - echo $CI_REGISTRY_PASSWORD > docker_password
  - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password
  - ssh ubuntu@18.0.0.82 "cat ~/tmp/docker_password | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; docker-compose up -d; docker logout; rm -f ~/tmp/docker_password"
Also, remember that in the after_script you aren't on the EC2 instance but inside the runner image, so you don't need to log out, though it would be good to kill the SSH agent.
Final job:
deploy:
  stage: deploy
  before_script:
    - eval `ssh-agent`
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -H 18.0.0.82 >> ~/.ssh/known_hosts
  script:
    - echo $CI_REGISTRY_PASSWORD > docker_password
    - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password
    - ssh ubuntu@18.0.0.82 "cat ~/tmp/docker_password | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY; docker pull $CI_REGISTRY_IMAGE${tag}; cd /home/crud_app; docker-compose up -d; docker logout; rm -f ~/tmp/docker_password"
  after_script:
    - kill $SSH_AGENT_PID
    - rm docker_password
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
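One small caveat I'd add (my observation, not part of the original answer): scp fails if the remote ~/tmp directory doesn't exist yet, so it may be worth creating it first:

script:
  - echo $CI_REGISTRY_PASSWORD > docker_password
  # Hypothetical safeguard: make sure the target directory exists before copying
  - ssh ubuntu@18.0.0.82 "mkdir -p ~/tmp"
  - scp docker_password ubuntu@18.0.0.82:~/tmp/docker_password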

Travis-ci stages - conditional logic

This is my travis.yml file, followed by the latest run: https://travis-ci.com/github/harryyy27/allies-art-club
sudo: required
services:
  - docker
stages:
  - name: before_deploy
    if: branch = master
  - name: before_install
    if: branch != master
  - name: scripts
    if: branch != master
before_install:
  - docker build -t harryyy27/allies_art_club/frontend -f ./client/Dockerfile.dev ./client
  - docker build -t harryyy27/allies_art_club/backend -f ./src/Dockerfile.dev ./src
scripts:
  - docker run -e CI=true harryyy27/allies_art_club/frontend npm test
  - docker run -e CI=true harryyy27/allies_art_club/backend npm test
before_deploy:
  - docker build -t harryyy27/aac-client ./client
  - docker tag harryyy27/aac-client registry.heroku.com/$HEROKU_APP/client
  - docker build -t harryyy27/aac-src ./src
  - docker tag harryyy27/aac-src registry.heroku.com/$HEROKU_APP/src
  - docker build -t harryyy27/aac-nginx ./nginx
  - docker tag harryyy27/aac-nginx registry.heroku.com/$HEROKU_APP/nginx
  # Log in to docker CLI
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  - curl https://cli-assets.heroku.com/install.sh | sh
  - echo "$HEROKU_API" | docker login -u "$HEROKU_USERNAME" --password-stdin registry.heroku.com
deploy:
  skip_cleanup: true
  provider: script
  script:
    docker ps -a;
    docker push harryyy27/aac-client;
    docker push registry.heroku.com/$HEROKU_APP/client;
    docker push harryyy27/aac-src;
    docker push registry.heroku.com/$HEROKU_APP/src;
    docker push harryyy27/aac-nginx;
    docker push registry.heroku.com/$HEROKU_APP/nginx;
    heroku container:release client src nginx --app $HEROKU_APP;
  on:
    branch: master
However, I'd like to know why my branch != master condition does not work for the before_install and scripts stages. It runs both of these stages even on the master branch after merging my pull request.
(I am aware of the other issues with this travis.yml; I have raised them as separate questions.)
Resolved this, though I set it up differently. I think the stages have to be defined with the jobs/include object: conditions in a top-level stages list only gate jobs explicitly assigned to those stages, while top-level before_install and scripts phases belong to the implicit default job, which runs regardless of branch.
See the new travis.yml:
sudo: required
language: generic
services:
  - docker
stages:
  - dev
  - prod
jobs:
  include:
    - stage: dev
      if: NOT(branch = master)
      scripts:
        - docker build -t harryyy27/allies_art_club/frontend -f ./client/Dockerfile.dev ./client
        - docker build -t harryyy27/allies_art_club/backend -f ./src/Dockerfile.dev ./src
        - docker run -e CI=true harryyy27/allies_art_club/frontend npm test
        - docker run -e CI=true harryyy27/allies_art_club/backend npm test
    - stage: prod
      if: branch = master
      before_deploy:
        - docker build -t harryyy27/aac-client ./client
        - docker tag harryyy27/aac-client registry.heroku.com/$HEROKU_APP/client
        - docker build -t harryyy27/aac-src ./src
        - docker tag harryyy27/aac-src registry.heroku.com/$HEROKU_APP/src
        - docker build -t harryyy27/aac-nginx ./nginx
        - docker tag harryyy27/aac-nginx registry.heroku.com/$HEROKU_APP/nginx
        # Log in to docker CLI
        - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
        - curl https://cli-assets.heroku.com/install.sh | sh
        - echo "$HEROKU_API" | docker login -u "$HEROKU_USERNAME" --password-stdin registry.heroku.com
      deploy:
        skip_cleanup: true
        provider: script
        script:
          docker ps -a;
          docker push harryyy27/aac-client;
          docker push registry.heroku.com/$HEROKU_APP/client;
          docker push harryyy27/aac-src;
          docker push registry.heroku.com/$HEROKU_APP/src;
          docker push harryyy27/aac-nginx;
          docker push registry.heroku.com/$HEROKU_APP/nginx;
          heroku container:release client src nginx --app $HEROKU_APP;
        on:
          branch: master
Note: I had to add language to avoid a Rakefile error. Best to use generic here, as using node_js will prompt Travis to look for a package.json, and a "make test" error will occur.

Getting error 'jq: command not found' in GitLab pipeline for Docker

I am building Docker images and deploying them to an AWS ECS service using a GitLab pipeline, but I'm getting the error 'jq: command not found' despite having apparently installed the jq package successfully (see the images).
(screenshot: the error)
(screenshot: jq package installation step status)
.gitlab-ci.yml file for reference:
image: docker:latest
variables:
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
stages:
  - build_dev
  - deploy_dev
before_script:
  - docker run --rm docker:git apk update
  - docker run --rm docker:git apk upgrade
  - docker run --rm docker:git apk add --no-cache curl jq
  - docker run --rm docker:git apk add python3 py3-pip
  - pip3 install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY
  - aws configure set aws_secret_access_key $AWS_SECRET_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_LOGIN_URI
build_dev:
  stage: build_dev
  only:
    - dev
  script:
    - docker build -t research-report-phpfpm .
    - docker tag research-report-phpfpm:latest $REPOSITORY_URI_LARAVEL:latest
    - docker push $REPOSITORY_URI_LARAVEL:latest
    - docker build -t research-report-nginx -f Dockerfile_Nginx .
    - docker tag research-report-nginx:latest $REPOSITORY_URI_NGINX:latest
    - docker push $REPOSITORY_URI_NGINX:latest
deploy_dev:
  stage: deploy_dev
  script:
    - echo $REPOSITORY_URI_LARAVEL:$IMAGE_TAG
    - TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" --region "${AWS_DEFAULT_REGION}")
    - NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URI_LARAVEL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
    - echo "Registering new container definition..."
    - aws ecs register-task-definition --region "${AWS_DEFAULT_REGION}" --family "${TASK_DEFINITION_NAME}" --container-definitions "${NEW_CONTAINER_DEFINTIION}"
    - echo "Updating the service..."
    - aws ecs update-service --region "${AWS_DEFAULT_REGION}" --cluster "${CLUSTER_NAME}" --service "${SERVICE_NAME}" --task-definition "${TASK_DEFINITION_NAME}"
NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URI_LARAVEL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
This command is not running inside the docker container where you are installing jq.
Your GitLab CI/CD configuration is running within a container tagged docker:latest.
You've made the docker:dind image available during runtime as a service container (presumably to avoid having to start dockerd manually).
You're then running commands in a container created from docker:git, which is separate from the context of this build.
You are also running these commands with --rm, which guarantees that the apk add you are running is lost after the statement.
Not having used GitLab pipelines myself, I can't be 100% sure, but I'd be 90% sure that one of these will resolve your issue:
1. Install the packages locally in the already-running job container:
   apk update && apk add curl jq python3 py3-pip
2. Don't use --rm.
3. Change the installation of jq from the docker:git container to docker:latest:
   docker run docker:latest apk update
   docker run docker:latest apk add curl jq python3 py3-pip
Given that the command:
NEW_CONTAINER_DEFINTIION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$REPOSITORY_URI_LARAVEL:$IMAGE_TAG" '.taskDefinition.containerDefinitions[0].image = $IMAGE | .taskDefinition.containerDefinitions[0]')
is actually running in the context of the pipeline (presumably within docker:latest), my bet is on option 1: you are running jq on the 'host', but installing jq inside a container within that host.
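For completeness, a sketch of option 1 applied to the original before_script (assuming, as the answer does, that the docker:latest job image is Alpine-based, so apk is available directly in the job):

before_script:
  # Install the tools into the job container itself, not a throwaway docker:git container
  - apk add --no-cache curl jq python3 py3-pip
  - pip3 install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY
  - aws configure set aws_secret_access_key $AWS_SECRET_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_LOGIN_URI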

GitLab CI/CD: building multiarch Docker images

I want an easy way to build multiarch Docker images in a GitLab runner. By easy, I mean that I would just have to add a .gitlab-ci.yml to my project and it would work.
Here is the .gitlab-ci.yml that I wrote. It builds a multiarch image using buildx and then pushes it to the GitLab registry:
image: cl00e9ment/buildx
services:
  - name: docker:dind
variables:
  PLATFORMS: linux/amd64,linux/arm64
  TAG: latest
before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
build:
  stage: build
  script:
    - docker buildx build --platform "$PLATFORMS" -t "${CI_REGISTRY_IMAGE}:${TAG}" . --push
The problem is that the linux/arm64 platform isn't available.
Here is how I built the cl00e9ment/buildx image (strongly inspired by snadn/docker-buildx).
Here is the Dockerfile:
FROM docker:latest
ENV DOCKER_CLI_EXPERIMENTAL=enabled
ENV DOCKER_HOST=tcp://docker:2375/
RUN mkdir -p ~/.docker/cli-plugins \
    && wget -qO- https://api.github.com/repos/docker/buildx/releases/latest | grep "browser_download_url.*linux-amd64" | cut -d : -f 2,3 | tr -d '"' | xargs wget -O ~/.docker/cli-plugins/docker-buildx \
    && chmod a+x ~/.docker/cli-plugins/docker-buildx
RUN docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
RUN docker context create buildx \
    && docker buildx create buildx --name mybuilder \
    && docker buildx use mybuilder
RUN docker buildx inspect --bootstrap
...and here is the .gitlab-ci.yml file used to build and push the cl00e9ment/buildx image:
image: docker:latest
services:
  - name: docker:dind
before_script:
  - docker login -u cl00e9ment -p "$DOCKER_HUB_TOKEN"
build:
  stage: build
  script:
    - docker build --add-host docker:`grep docker /etc/hosts | awk 'NR==1{print $1}'` --network host -t cl00e9ment/buildx .
    - docker run --add-host docker:`grep docker /etc/hosts | awk 'NR==1{print $1}'` --network host cl00e9ment/buildx docker buildx inspect --bootstrap
    - docker push cl00e9ment/buildx
test:
  stage: test
  script:
    - docker run --add-host docker:`grep docker /etc/hosts | awk 'NR==1{print $1}'` --network host cl00e9ment/buildx docker buildx inspect --bootstrap
So what's happening?
At the end of the build, in the Dockerfile, I run docker buildx inspect --bootstrap to list the available platforms. It gives linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6. So it's all good.
After that, I run it again (just after the build and just before the push) and it still gives linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6.
However, on the test stage, when the image is freshly downloaded from Docker Hub on a clean environment, it gives linux/amd64, linux/386.
Why?
There is a lot of outdated and incorrect information on building multiarch images on GitLab CI, unfortunately. This seems to change quite frequently as it's still an experimental feature. But as of the time of this post, this is how I got my multiarch build working on GitLab public runners (armv6, armv7, arm64, amd64):
First, one must build and push a Docker image containing the buildx binary. Here is the Dockerfile I am using for that:
FROM docker:latest
ARG BUILDX_VER=0.4.2
RUN mkdir -p /root/.docker/cli-plugins && \
    wget -qO ~/.docker/cli-plugins/docker-buildx \
        https://github.com/docker/buildx/releases/download/v${BUILDX_VER}/buildx-v${BUILDX_VER}.linux-amd64 && \
    chmod +x /root/.docker/cli-plugins/docker-buildx
The current GitLab runner image does not initialize the binfmt handlers correctly despite running the initialization code: https://gitlab.com/gitlab-org/gitlab-runner/-/blob/523854c8/.gitlab/ci/_common.gitlab-ci.yml#L91
So we have to do it in our pipeline. Following the comments in MR 1861 of the GitLab Runner code, we add the following magic sauce to our .gitlab-ci.yml:
before_script:
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
Then we can run the rest of our pipeline script with docker login, docker buildx create --use, docker buildx build --push ... and so on.
Now the runner is ready to build for multiple architectures.
My final .gitlab-ci.yml can be seen here: https://github.com/oofnikj/nuttssh/blob/multiarch/.gitlab-ci.yml
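Putting those pieces together, a minimal sketch of such a pipeline might look like the following (my-buildx-image is a placeholder for wherever you pushed the buildx-equipped image built from the Dockerfile above):

image: my-buildx-image  # placeholder name, not a published image
services:
  - docker:dind
before_script:
  # Re-register the qemu binfmt handlers on the runner (see the MR 1861 discussion)
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  - docker buildx create --use
build:
  stage: build
  script:
    - docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v6 --push -t "$CI_REGISTRY_IMAGE:latest" .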
OK, I think I know what's going on here: you need to call update-binfmts --enable somewhere to enable the extra formats provided by binfmt_misc.
I was able to get multiarch images working with buildx on gitlab-ci (after lots of searching) using this repo and its Docker images: https://gitlab.com/ericvh/docker-buildx-qemu
However, that repo depends on its own Docker image registry to build multiarch versions of itself, AND it depends on a gitlab-ci template repo for its CI. I'm not super confident in how this web of dependencies began, and the owner of that repo is far more skilled than me, but for my uses I've forked the repo and am now trying to modify its CI to be less dependent on external sources.
EDIT: For people from the future, this is the Dockerfile:
FROM debian
# Install Docker and qemu
# TODO Use docker stable once it properly supports buildx
RUN apt-get update && apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
    add-apt-repository "deb https://download.docker.com/linux/debian $(lsb_release -cs) stable" && \
    apt-get update && apt-get install -y \
        docker-ce-cli \
        binfmt-support \
        qemu-user-static
# Install buildx plugin
RUN mkdir -p ~/.docker/cli-plugins && \
    ARCH=`dpkg --print-architecture` && echo Running on $ARCH && curl -s https://api.github.com/repos/docker/buildx/releases/latest | \
    grep "browser_download_url.*linux-$ARCH" | cut -d : -f 2,3 | tr -d \" | \
    xargs curl -L -o ~/.docker/cli-plugins/docker-buildx && \
    chmod a+x ~/.docker/cli-plugins/docker-buildx
# Write version file
RUN printf "$(docker --version | perl -pe 's/^.*\s(\d+\.\d+\.\d+.*),.*$/$1/')_$(docker buildx version | perl -pe 's/^.*v?(\d+\.\d+\.\d+).*$/$1/')" > /version && \
    cat /version
And a stripped-down version of .gitlab-ci.yml:
build:
  image: docker:dind
  stage: build
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - |
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
  script:
    - docker build -t "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" -t "$CI_APPLICATION_REPOSITORY:latest" .
    - docker push "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"
    - docker push "$CI_APPLICATION_REPOSITORY:latest"
EDIT: Further, I've found that this GitLab CI configuration, which uses the image built above, can make use of the build cache:
stages:
  - build
variables:
  CI_BUILD_ARCHS: "linux/amd64,linux/arm/v6,linux/arm/v7"
  CI_BUILD_IMAGE: "registry.gitlab.com/gdunstone/docker-buildx-qemu"
build_master:
  image: $CI_BUILD_IMAGE
  stage: build
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  retry: 2
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Use docker-container driver to allow useful features (push/multi-platform)
    - update-binfmts --enable # Important: Ensures execution of other binary formats is enabled in the kernel
    - docker buildx create --driver docker-container --use
    - docker buildx inspect --bootstrap
  script:
    - >
      docker buildx build --platform $CI_BUILD_ARCHS
      --cache-from=type=registry,ref=$CI_REGISTRY_IMAGE/cache:latest
      --cache-to=type=registry,ref=$CI_REGISTRY_IMAGE/cache:latest
      --progress plain
      --pull --push
      --build-arg CI_PROJECT_PATH=$CI_PROJECT_PATH
      -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      -t "$CI_REGISTRY_IMAGE:latest" .
  only:
    - master
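To sanity-check the registry cache outside CI (a hypothetical debugging step, not from the thread), the same cache ref can be imported for a quick single-arch build; <group>/<project> below is a placeholder for your project path:

# Reuse the cache pushed by the pipeline for a local single-platform build
docker buildx build --platform linux/amd64 \
  --cache-from=type=registry,ref=registry.gitlab.com/<group>/<project>/cache:latest \
  --load -t test:local .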
