How to rebuild a docker image on push before CI script jobs

I want to generate a Dockerfile in a GitLab CI script and build it, then use this newly generated image in the build jobs. How can I do this? I tried using a global before_script, but that already starts in the default container. I need to do this outside of any container.

before_script runs before every job, so it's not what you want. But you can have a first job do the image build, taking advantage of the fact that each job can use a different Docker image. Building the image itself is covered in the manual.
Option A (uhm... sort of OK)
Have 2 runners, one with a shell executor (tagged shell) and one with a Docker executor (tagged docker). You would then have a first stage with a job dedicated to building the Docker image, using the shell runner.
image_build:
  stage: image_build
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry
  tags:
    - shell
The second job would then run on the runner with the Docker executor and use the newly created image:
job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
  tags:
    - docker
The problem with this is that the runner's user needs to be part of the docker group, which has security implications.
Option B (better)
The second option does the same but with only one runner, using the Docker executor. The Docker image is built within a running container (the gitlab/dind:latest image): the "Docker-in-Docker" approach.
stages:
  - image_build
  - test

image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry

job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
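For illustration, here is a minimal sketch of what the placeholder script lines could contain. The generated Dockerfile contents and the "generated" image name are assumptions; the CI_REGISTRY* variables are GitLab's predefined ones:

image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    # log in to the registry with the job's CI credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # generate the Dockerfile on the fly
    - echo "FROM alpine:latest" > Dockerfile
    - echo "RUN apk add --no-cache bash curl" >> Dockerfile
    # build and push the image, tagged by branch
    - docker build -t "$CI_REGISTRY_IMAGE/generated:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE/generated:$CI_COMMIT_REF_SLUG"

job_1 would then set image: $CI_REGISTRY_IMAGE/generated:$CI_COMMIT_REF_SLUG instead of the placeholder.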

Related

Recover docker image after a gitlab-ci run

Let's say I build a docker image and then run some CI build like this:
stages:
  - create_builder_image
  - test

Create Builder Image:
  stage: create_builder_image
  script:
    - export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    # build some stuff in the image
Then I want to push the resulting image, with the built output inside:
docker-package:
  stage: package
  script:
    - docker commit ?
    - docker push dockerhub:latest
That may not be possible at all.
Similar to In Gitlab CI/CD, how to commit and publish the docker container that is running our stages
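No answer here covers the commit itself, but as a hedged sketch of the workaround commonly used instead: export what the build produced as job artifacts and bake them into a fresh image in the package stage, rather than committing the running job container. The dist/ directory, the make build command and Dockerfile.package are assumptions for illustration:

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    - make build            # placeholder for the actual build
  artifacts:
    paths:
      - dist/               # assumed build output directory

docker-package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  script:
    # Dockerfile.package COPYs dist/ into a fresh image
    - docker build -f Dockerfile.package -t $GITLAB_IMAGE/app:latest .
    - docker push $GITLAB_IMAGE/app:latest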

Automate local deployment of docker containers with gitlab runner and gitlab-ci without privileged user

We have a prototype-oriented develop environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI / CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all documentation we find uses a cloud service or kubernetes cluster as target environment. However, we want to configure our GitLab runner in a way to deploy docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer).
Typically, our .gitlab-ci.yml looks something like this:
stages:
  - build
  - test
  - deploy

dockerimage:
  stage: build
  # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry

sometest:
  stage: test
  # uses the docker image from build stage to test the service

production:
  stage: deploy
  # should create a container from the above image on system of runner without privileged user
TL;DR: How can we configure our local GitLab Runner to deploy Docker containers locally from images defined in GitLab CI/CD, without privileged mode?
The build stage is usually the one where people use Docker-in-Docker (dind). To avoid the privileged mode you can use the kaniko executor image in GitLab.
Specifically, you would use the kaniko debug image like this:
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
You can find examples of how to use it in GitLab's documentation.
If you want to use that image in the deploy stage, you simply need to reference the created image. You could do something like this:
production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
With this method you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage. Usually you would just use the image you pushed to the container registry to deploy the container locally; the method above only runs the image inside the GitLab runner.
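For actually landing the container on the target machine, here is a rough sketch of the registry-based deployment hinted at above. It assumes a second runner with a shell executor registered on the target host and tagged deploy-host, whose user can talk to the Docker daemon (e.g. via the docker group, which is exactly the trade-off the question wants to avoid, so treat this as one option, not the answer):

production:
  stage: deploy
  tags:
    - deploy-host
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
    # "myservice" is a hypothetical container name
    - docker rm -f myservice || true
    - docker run -d --name myservice $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG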

Gitlab CI docker-in-docker deployment not running commands inside of the container

I am trying to set up a new build pipeline for one of our projects. In a first step, I build a new Docker image for the subsequent testing. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of inside the container.
Here's the contents of my gitlab-ci.yml:
stages:
  - build
  - analytics

variables:
  TEST_IMAGE_NAME: 'registry.server.de/testimage'

build_testing_container:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  except:
    - production
  allow_failure: true
What do I need to change to make gitlab runner execute the script commands inside the container it's successfully pulling?
UPDATE:
It's getting even more interesting:
I just changed the script to sleep for a while so I can attach to the container. When I run pwd from the CI script, it says /builds/namespace/project.
However, running pwd on the server via docker exec in the exact same container returns /app, as it is supposed to.
UPDATE2:
After some more research, I learned that GitLab executes four sub-steps for each build step:
1. Prepare: create and start the services.
2. Pre-build: clone, restore cache and download artifacts from previous stages. This runs on a special Docker image.
3. Build: the user build. This runs on the user-provided Docker image.
4. Post-build: create cache, upload artifacts to GitLab. This runs on a special Docker image.
It seems like in my case, step 3 isn't executed properly and the command is still running inside the gitlab runner docker image.
UPDATE3:
In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.
This is the usual behavior: the image keyword names the Docker image that the Docker executor runs to perform the CI tasks.
You can use the services keyword, which defines another Docker image that runs during your job and is linked to the image defined by the image keyword. This allows you to access the service image during build time.
Access can be done through a script or an entrypoint, for example:
In the Dockerfile of the image you are going to build, add the script you want to execute:
ADD exemple.sh /
RUN chmod +x exemple.sh
Then you can add the image as a service in gitlab-ci, and the script line becomes:
docker exec <container_name> /exemple.sh
This runs the script inside the service container. Alternatively, specify an entrypoint for the Docker image, and the script line becomes:
docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
Here's a reference: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html
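As an alternative to the exec approach, here is a hedged sketch that sidesteps the working-directory problem by running the command in the test image explicitly from a docker:stable job; the /out bind mount is an assumption so the report ends up in the job's working directory for artifact collection:

mess_detection:
  stage: analytics
  image: docker:stable
  services:
    - docker:dind
  script:
    # phpmd runs inside the test image, which keeps its own WORKDIR (/app)
    - docker run --rm -v "$PWD:/out" $TEST_IMAGE_NAME sh -c "vendor/bin/phpmd app html tests/md.xml --reportfile /out/mess_detection.html --suffixes php"
  artifacts:
    paths:
      - mess_detection.html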

Bitbucket Pipelines - steps - docker - can't find image

I'm building my pipeline to create a docker image, then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it what artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified)
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I remove all the steps. For whatever reason, step 4 doesn't have access to the docker image created in step 3. Any help is appreciated!
Your Docker images are not stored in the folder where you start the build, so they are not saved to the artifacts and are not available in the next step.
Even if they were (you could pack/unpack an image through docker save), you would probably run up against the size limits for artifacts, not to mention the time it takes to pack/unpack.
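For completeness, a hedged sketch of that save/load route, if the size and time costs were acceptable (docker-image.tar is an arbitrary filename):

- step:
    name: 3. Build Docker Image
    script:
      - export DOCKER_PROJECT_NAME=projectname
      - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
      # serialize the image into the clone directory so it can travel as an artifact
      - docker save --output docker-image.tar $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
    artifacts:
      - docker-image.tar
- step:
    name: 4. Push Docker Image to AWS
    script:
      # restore the image into this step's Docker daemon before tagging and pushing
      - docker load --input docker-image.tar
      - export DOCKER_PROJECT_NAME=projectname
      - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
      - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER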
I guess you'd be better off if you created a Dockerfile in your project yourself and combined steps 1 & 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses docker as a service, and your single step would then consist of building your project's Dockerfile and uploading to AWS. This also lowers your dependency on Bitbucket Pipelines, as the build logic then lives in your own Dockerfile.
The Docker image is not being passed from step 3 to step 4 as the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step as follows:
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER

gitlab runner using wrong docker image for build container

I set up a gitlab-ci-multi-runner on my VM.
In the build process I provide a Docker container with all the dependencies.
But when I run it, the runner uses a different (wrong) Docker image for its build container.
The messages look like that:
Running with gitlab-ci-multi-runner 9.2.1 (f0xxxx4) on runnerVM (f5xxxxf0)
Using Docker executor with image docker.com/xxx/xxx/docker-build:stable ...
Using docker image sha256:fe32xxx...xxxa63c for predefined container...
Pulling docker image docker.com/xxx/xxx/docker-build:stable ...
Using docker image docker.com/xxx/xxx/docker-build:stable ID=sha256:9608xxx...xxxdf09 for build container...
Can someone tell me why the runner uses a different Docker image for the build container?
Why is it not taking the predefined container (because that's the right one...)?
Here you can see my .gitlab-ci.yml:
image: docker.com/xxx/xxx/docker-build:stable

before_script:
  - echo "Before script"

after_script:
  - echo "After Script"

stages:
  - build
  - test
  - deploy

build_release:
  stage: build
  script:
    - sudo make all BUILD_TYPE=Release
  only:
    - master
  tags:
    - tag1

build_debug:
  stage: build
  script:
    - sudo make all BUILD_TYPE=Debug
  only:
    - develop
    - runner-test
  tags:
    - tag1
    - tag2
In your .gitlab-ci.yml you are referencing the complete URL to your container; it should however be in the format group/container, e.g. library/nginx.
Optionally, you may use a specific version, e.g. library/nginx:1.13.9.
For more information, see: https://docs.gitlab.com/ce/ci/docker/using_docker_images.html
I assume you are using the Docker executor. In that case the gitlab-ci-runner creates a container from the gitlab/gitlab-runner-helper image, which isolates the clone, cache and artifact steps from your VM's Docker environment. This image is the predefined container.
The stages themselves are performed inside containers of the image you specify for the job, or the image you specify globally for all jobs. This container is the build container.
The build container should be created from the image you specify with image at the top of your .gitlab-ci.yml. You can verify this by running
$ docker image ls | grep -E '(fe32|9608)'
on your VM; it shows the image names and tags of your predefined and build containers.
