How to run a mounted shell-script inside a docker container?

I'm trying to run a mounted shell script inside a Docker container by following these steps:
1. build stage: build the Docker image.
2. test stage: mount a directory into the container at runtime, with a shell-script file inside.
3. test stage: run the shell-script file from inside the Docker container.
Could someone please explain how this should be done? See the line: #- ?? HERE I SHOULD RUN THE TEST: /test/check.sh ??
services:
  - docker:dind

stages:
  - build
  - test

before_script:
  - docker info

# Build the docker image
build:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login docker.example.com -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  only:
    - master
  script:
    - docker build -t our-docker .
    - docker save our-docker > our-docker.tar
  artifacts:
    paths:
      - our-docker.tar
    expire_in: 1 week
  stage: build

test:
  image: docker:latest
  only:
    - master
  script:
    - docker load < our-docker.tar
    - docker run --volume source="$(pwd)/test",target="/test" our-docker
    #- ?? HERE I SHOULD RUN THE TEST: /test/check.sh ??
  stage: test

First, there was an issue with the docker run command itself:
docker run --volume source="$(pwd)/test",target="/test" our-docker  # buggy
The source=…,target=… key/value syntax belongs to --mount, not to --volume. The syntax to set up a bind mount is:
either docker run -v "$PWD/test":"/test" our-docker
(-v being the short form of --volume),
or docker run --mount type=bind,source="$PWD/test",target="/test" our-docker.
(Note: I replaced "$(pwd)" above with the special shell variable "$PWD", which avoids spawning yet another process.)
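As a quick sanity check outside CI (a sketch, assuming the our-docker image is available locally and provides /bin/sh), you can verify that the bind mount works with a one-liner:
docker run --rm -v "$PWD/test":"/test" our-docker /bin/sh -c "ls -al /test"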
Next, you cannot just append the line /test/check.sh after the docker run line, because you need to run that command precisely within the context of docker run. To this end, you may want to use the pattern I proposed in this other SO thread: How do I set docker-credential-ecr-login in my PATH before anything else in GitLab CI (which contains more details/remarks about set -e, quotes and shell escaping in the context of that pattern).
Wrap-up
More precisely, could you try the following adaptation of your .gitlab-ci.yml? (I've added some ls commands that should help you debug your configuration):
services:
  - docker:dind

stages:
  - build
  - test

before_script:
  - docker info

# Build the docker image
build:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login docker.example.com -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
  only:
    - master
  script:
    - docker build -t our-docker .
    - docker save our-docker > our-docker.tar
  artifacts:
    paths:
      - our-docker.tar
    expire_in: 1 week
  stage: build

test:
  image: docker:latest
  # note: use /bin/sh below as this image doesn't provide /bin/bash
  only:
    - master
  script:
    - docker load < our-docker.tar
    - echo "$PWD"
    - ls
    - ls -Rhal test
    - |
      docker run --rm -v "$PWD/test":"/test" our-docker /bin/sh -c "
        set -ex
        ls -Rhal /test
        /test/check.sh
      "
  stage: test
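For completeness, a minimal test/check.sh could look like the following (a hypothetical placeholder; your real script would contain the actual checks). Keep in mind it must be executable (chmod +x test/check.sh), or be invoked as /bin/sh /test/check.sh instead:

#!/bin/sh
# placeholder test script: fail fast on any error
set -e
echo "running checks..."
# ... real assertions would go here ...
echo "all checks passed"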

Related

Pytest doesn't run with "docker-compose exec -T web_service pytest" in Gitlab CI Docker executor with Docker-Compose

The main reason I'm trying to use GitLab CI is to automate unit testing before deployment. I want to:
1. build my Docker images and push them to my image repository, then
2. ensure all my pytest unit tests pass, and finally
3. deploy to my production server.
However, my pytest command doesn't run at all if I include the -T flag as follows. It just instantly returns 0 and "succeeds", which is not correct because I have a failing test in there:
docker-compose exec -T web_service pytest /app/tests/ --junitxml=report.xml
On my local computer, I run the tests without the -T flag as follows, and it runs correctly (and the test fails as expected):
docker-compose exec web_service pytest /app/tests/ --junitxml=report.xml
But if I do that in GitLab CI, I get the error "the input device is not a TTY" if I omit the -T flag.
Here's some of my ".gitlab-ci.yml" file:
image:
  name: docker/compose:1.29.2
  # Override the entrypoint (important)
  entrypoint: [""]

# Must have this service
# Note: --privileged is required for Docker-in-Docker to function properly,
# but it should be used with care as it provides full access to the host environment
services:
  - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  # DOCKER_HOST is essential
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  # First test that gitlab-runner has access to Docker
  - docker --version
  # Set variable names
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export MY_IMAGE=$IMAGE:web_service
  # Install bash
  - apk add --no-cache bash
  # Add environment variables stored in GitLab, to .env file
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  # Login to the GitLab registry and pull existing images to use as cache
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    # Pull the image for the build cache, and continue even if this
    # download fails (it'll fail the very first time)
    - docker pull $MY_IMAGE || true
    # Build and push Docker images to the GitLab registry
    - docker-compose -f docker-compose.ci.build.yml build
    - docker push $MY_IMAGE
  only:
    - master

test:
  stage: test
  script:
    # Pull the image
    - docker pull $MY_IMAGE
    # Start the containers and run the tests before deployment
    - docker-compose -f docker-compose.ci.test.yml up -d
    # TODO: The following always succeeds instantly with the "-T" flag,
    # but won't run at all if I exclude the "-T" flag...
    - docker-compose -f docker-compose.ci.test.yml exec -T web_service pytest --junitxml=report.xml
    - docker-compose -f docker-compose.ci.test.yml down
  artifacts:
    when: always
    paths:
      - report.xml
    reports:
      junit: report.xml
  only:
    - master

deploy:
  stage: deploy
  script:
    - bash deploy.sh
  only:
    - master
I've found a solution here. It's a quirk with docker-compose exec. Instead, I find the container ID with $(docker-compose -f docker-compose.ci.test.yml ps -q web_service) and use docker exec --tty <container_id> pytest ...
In the test stage, I've made the following substitution:
test:
  stage: test
  script:
    # previously: docker-compose -f docker-compose.ci.test.yml exec -T web_service pytest /app/tests/ --junitxml=report.xml
    - docker exec --tty $(docker-compose -f docker-compose.ci.test.yml ps -q web_service) pytest /app/tests --junitxml=report.xml
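Equivalently (a sketch under the same setup), capturing the container ID in a variable first can make failures easier to diagnose:

  script:
    - CONTAINER_ID=$(docker-compose -f docker-compose.ci.test.yml ps -q web_service)
    # fail early if the service container isn't running
    - test -n "$CONTAINER_ID"
    - docker exec --tty "$CONTAINER_ID" pytest /app/tests --junitxml=report.xml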

Docker doesn't work when using getsentry/sentry-cli

I want to upload my frontend to Sentry, but I need to get the dist folder using docker commands. However, when I use image: getsentry/sentry-cli, docker doesn't work, and e.g. in before_script I get an error that docker doesn't exist:
sentry_job:
  stage: sentry_job
  image: getsentry/sentry-cli
  services:
    - docker:18-dind
  before_script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.cz
  script:
    # script...
    # Get the dist folder from the image
    - mkdir frontend_dist
    - docker run --rm -v $PWD/frontend_dist:/mounted --entrypoint="" $IMAGE /bin/sh -c "cp /frontend/dist /mounted"
    - ls frontend_dist
  tags:
    - dind
How do I fix that?
To achieve what you want, you need to use a single job (to keep the same build context) and specify docker:stable as the job image (along with docker:stable-dind as a service).
This setup is called Docker-in-Docker, and it is the standard way to allow a GitLab CI script to run docker commands (see the docs).
Thus, you could slightly adapt your .gitlab-ci.yml code like this:
sentry_job:
  stage: sentry_job
  image: docker:stable
  services:
    - docker:stable-dind
  variables:
    IMAGE: "${CI_REGISTRY_IMAGE}:latest"
  before_script:
    - docker login -u gitlab-ci-token -p "${CI_JOB_TOKEN}" registry.gitlab.cz
  script:
    - docker pull "$IMAGE"
    - mkdir -v frontend_dist
    # note: -r is needed because /frontend/dist is a directory
    - docker run --rm -v "$PWD/frontend_dist:/mounted" --entrypoint="" "$IMAGE" /bin/sh -c "cp -rv /frontend/dist /mounted"
    - ls frontend_dist
    - docker pull getsentry/sentry-cli
    - docker run --rm -v "$PWD/frontend_dist:/work" getsentry/sentry-cli
  tags:
    - dind
Note: the docker pull commands are optional (they ensure Docker will use the latest version of the images).
Also, you may need to change the definition of the IMAGE variable.
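For example (illustrative values, using GitLab's predefined CI variables), if your frontend image is tagged per branch rather than with :latest, the variable could instead be defined as:

  variables:
    IMAGE: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"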

Execute external bash script inside GitLab-ci Docker build

I would like to execute an external bash script (one that lives on the local machine) from gitlab-ci.yml, which uses the docker:stable image. Specifically, I would like to execute startup.sh, which is located outside the GitLab Docker image. Is this possible, or are there better options?
gitlab-ci.yaml
image: docker:stable

# Build script
variables:
  CI_DEBUG_TRACE: "true"
  DOCKER_DRIVER: overlay

before_script:
  - docker --version

build:
  services:
    - docker:dind
  script:
    - docker build --no-cache -t <tag> .
    - docker login -u root -p <pass> <registry>
    - docker tag ...
    - docker push ...
    - echo "build completed"
  stage: build
  tags:
    - <tag>

deploy_staging:
  stage: deploy
  script:
    - ./sh startup.sh
bash script
#!/bin/bash
docker login -u root -p <pass>
docker pull <image>
docker-compose up -d
I am not sure if this is the best practice for your use case, but a simple way to share files with the job's container is to add a volume in the runner's config.toml file.
Add this line to config.toml under [runners.docker]:
volumes = ["/cache", "/path/to/scripts:/root/scripts"]
and then, inside your .gitlab-ci.yml:
deploy_staging:
  stage: deploy
  script:
    - chmod +x /root/scripts/startup.sh
    - /root/scripts/startup.sh
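For context, that volumes entry sits in the runner configuration roughly like this (a sketch; the host path is illustrative and must point at the directory containing startup.sh):

# /etc/gitlab-runner/config.toml (on the host)
[[runners]]
  # ... other runner settings ...
  [runners.docker]
    volumes = ["/cache", "/path/to/scripts:/root/scripts"]

Note that editing config.toml affects every job that runner executes, so this approach fits self-hosted runners rather than shared ones.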

Gitlab CI how to run tests before building docker image

I have a Python-based repository and I am trying to set up GitLab CI to build a Docker image from my Dockerfile and push it to GitLab's registry.
Before building and pushing the Docker image to the registry, I want to run my unit tests with Python. Here is my current gitlab-ci.yml file, which only does the testing:
image: python:3.7-slim

before_script:
  - pip3 install -r requirements.txt

test:
  variables:
    DJANGO_SECRET_KEY: some-key-here
  script:
    - python manage.py test

build:
  # DO NOT KNOW HOW TO DO IT
I checked some templates from GitLab's website and found one for Docker:
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
However, neither of these works for me on its own, because I need Python for testing and Docker for building the image. Is there a way to do this with GitLab CI without creating a custom Docker image that has both Python and Docker installed?
I found out that I can create multiple jobs, each with its own image:
stages:
  - test
  - build

test:
  stage: test
  image: python:3.7-slim
  variables:
    DJANGO_SECRET_KEY: key
  before_script:
    - pip3 install -r requirements.txt
  script:
    - python manage.py test
  only:
    - master

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

Docker image with PHP, Composer and Docker installed

I use Gitlab.com for my CI, using their shared Docker runners. I have a project which requires PHP and Composer installed, while it also needs Docker to build an image of the project.
I've tried for hours to build a Docker image which has PHP, Composer and Docker installed, but I can't seem to figure it out.
For reference, my gitlab-ci.yml file looks like this:
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

composer:install:
  stage: build
  artifacts:
    paths:
      - /
    expire_in: 1 week
  script:
    - docker run --rm --interactive --tty --volume $PWD:/app composer install

build:image:
  stage: build
  dependencies:
    - composer:install
  script:
    - docker login registry.gitlab.com -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD
    - docker build -t registry.gitlab.com/accountname/projectname/develop .
    - docker push registry.gitlab.com/accountname/projectname/develop
Using the sample build script provided by Stefan below, I put together the following build file, which appears to work perfectly. It builds the project using the project's Dockerfile and pushes the resulting image to my GitLab repository.
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker run --rm --volume $PWD:/app composer install

build:image:
  stage: build
  script:
    - docker login registry.gitlab.com -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD
    - docker build -t registry.gitlab.com/accountname/projectname/develop .
    - docker push registry.gitlab.com/accountname/projectname/develop
You should use a two-job pipeline and use the artifacts feature provided by GitLab to pass the composer install result between jobs.
Your first job (composer:install) could use something like https://hub.docker.com/r/library/composer/ in its script section to install all Composer packages, then pass the result on to the build:image job that builds your Docker image.
This would roughly look like this:
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

composer:install:
  stage: composer
  artifacts:
    paths:
      - /
    expire_in: 1 week
  script:
    - docker run --rm --interactive --tty --volume $PWD:/app composer install

build:image:
  stage: build
  dependencies:
    - composer:install
  script:
    - docker build -t myimage:latest .
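A note on the artifacts block above: an artifacts path of / is unlikely to be accepted, since GitLab artifact paths must be relative to the project directory. In practice you would collect the Composer output, typically the vendor/ directory, e.g.:

  artifacts:
    paths:
      - vendor/
    expire_in: 1 week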
Where your Dockerfile could be something like this (based on this):
FROM php:7.0-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
