Our project uses a multi-stage CI setup in which the first stage checks whether files like package-lock.json and Gemfile.lock have changed, compiles all these dependencies, and pushes the result to the GitLab container registry.
Using --cache-from with the image from the current mainline branch, this is quite fast, and Docker's layering mechanism avoids repeating steps.
Subsequent stages and jobs then use the Docker image pushed in the first stage as their image:.
Abbreviated configuration for readability:
stages:
- create_builder_image
- test
Create Builder Image:
stage: create_builder_image
script:
- export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
# do stuff to build the image, using cache to speed it up
- docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG
Run Tests:
image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
stage: test
script:
# do stuff in the context of the image build in the first stage
Unfortunately, when working on longer-running feature branches, we now sometimes see that the image used in the second stage is outdated: the job does not pull the latest version from the registry before starting, and subsequent jobs then complain about missing dependencies.
Is there anything I can do to force it to always pull the latest image for each job?
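One thing worth checking (an assumption, since the runner configuration is not shown in the question): the Docker executor only re-pulls images according to its pull policy. If the runner is configured with if-not-present, it will happily reuse a stale local copy of the branch-tagged image. Setting the policy to always in the runner's config.toml forces a fresh pull for every job, roughly like this:
[[runners]]
  executor = "docker"
  [runners.docker]
    # always pull the image from the registry before each job,
    # even if a local image with the same tag already exists
    pull_policy = "always"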
As already written in the comments, I would not use $CI_COMMIT_REF_SLUG for tagging, simply because it is not guaranteed that all pipelines run in the same order, and that alone can create issues, including the one you are currently experiencing.
I recommend using $CI_COMMIT_SHA instead, as it is bound to the pipeline. I would still rely on previous builds for caching, and I'll briefly outline my approach here:
stages:
- create_builder_image
- test
- deploy
Create Builder Image:
stage: create_builder_image
script:
# note: exports inside ( ... ) run in a subshell and would be lost, so use if/elif instead
- if docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG; then export DOCKER_CACHE_TAG=$CI_COMMIT_REF_SLUG; elif docker pull $GITLAB_IMAGE/builder:latest; then export DOCKER_CACHE_TAG=latest; fi
- docker build --cache-from $GITLAB_IMAGE/builder:$DOCKER_CACHE_TAG ...
# do stuff to build the image, using cache to speed it up
- docker push $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
Run Tests:
image: $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
stage: test
script:
# do stuff in the context of the image build in the first stage
Push image: # push the image under the current branch ref, since it is now known to work and others can then use it for caching
image: docker:20
stage: deploy
variables:
GIT_STRATEGY: none
script:
- docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
- docker tag $GITLAB_IMAGE/builder:$CI_COMMIT_SHA $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
- docker push $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
I know this adds extra build steps, but this way you can be sure you always have the image that belongs to the pipeline. You can still benefit from Docker's caching and layering, and as a bonus, the branch tag is only pushed once the tests have passed.
Furthermore, you can add a step before building the image to figure out whether you need a new image at all, as sketched below.
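A minimal sketch of such a gate, assuming the builder image only depends on the dependency lock files mentioned in the question (the rule and file list are illustrative):
Create Builder Image:
  stage: create_builder_image
  rules:
    # only rebuild the builder image when the dependency files change
    - changes:
        - package-lock.json
        - Gemfile.lock
  script:
    # same pull/build steps as above
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
Keep in mind that if this job is skipped, the $CI_COMMIT_SHA tag for that pipeline will not exist, so downstream jobs would need a matching rule or a fallback such as the branch tag pushed in the deploy stage.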
A very typical scenario with GitLab CI is to install a few packages you need for your jobs (linters, code coverage tools, deployment-specific helpers and so on) and to then run your actual stages/steps of a building, testing and deploying your software.
The Docker runner is a very neat and clean solution, but it seems very wasteful to always run the steps that install the base software. Normally, Docker is able to cache such layers, but with the way the GitLab Docker runner works, that doesn't happen.
We do realize that setting up another project to produce pre-configured Docker images would be one solution, but are there better ones? Basically, what we want to say is: "If the before section hasn't changed, you can reuse the image from last time, no need to reinstall wget or whatever."
Any solution like that out there?
You can use the container registry of your GitLab project, e.g.:
images:
stage: build
image: docker
services:
- docker:dind
script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY # login
# pull the current image or, in case the image does not exist yet, do not stop the script:
- docker pull $CI_REGISTRY_IMAGE:latest || true
# build with the pulled image as cache:
- docker build --pull --cache-from $CI_REGISTRY_IMAGE:latest -t "$CI_REGISTRY_IMAGE:latest" .
# push the final image:
- docker push "$CI_REGISTRY_IMAGE:latest"
This way, docker build will profit from the work done by the last run of the job; see the docs. You may also want to avoid unnecessary runs of this job with rules, for example only rebuilding when files that affect the image change, as sketched below.
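A minimal sketch of such a rule added to the job above (the file list is illustrative; adapt it to whatever your image actually depends on):
images:
  stage: build
  image: docker
  services:
    - docker:dind
  rules:
    # only run the image-build job when the image definition changes
    - changes:
        - Dockerfile
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE:latest -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"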
I have a repository that includes three parts: frontend, admin and server. Each contains its own Dockerfile.
After building the image I wanted to add a test for admin. My tests pass, but they take a lot of time because each stage pulls the base image and builds everything from scratch (around 8 minutes per stage). This is my .gitlab-ci.yml:
image: tmaier/docker-compose
services:
- docker:dind
stages:
- build
- test
build:
stage: build
script:
- docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
- docker-compose build
- docker-compose push
test:admin:
stage: test
script:
- docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin
I am not quite sure whether I need to push/pull images between stages or whether I should do that with artifacts/cache/something else. As I understood it, I only need to push/pull if I want to deploy my images to another server. I also added a docker-compose push, which runs through, but GitLab doesn't show me any images in my registry.
I have been researching a lot on this but most example code I found was only about a single docker container and they didn't make use of docker-compose.
Any ideas? :)
GitLab currently has no way to share Docker images between stages as artifacts; there has been an open feature request for this for three years.
You'll need to push the Docker image to the registry and pull it in the later stages that need it (or do everything related to the image in one stage), as sketched below.
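A rough sketch of that push/pull flow with docker-compose (this assumes your compose files tag the images with your registry path so that docker-compose push/pull target the GitLab registry; that part is not shown in the question):
build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    # build and push all service images so later stages can reuse them
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    # pull the images pushed by the build stage instead of rebuilding them
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml pull admin
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin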
Mark, could you show the files docker-compose.yml and docker-compose.test.yml?
Maybe you are pushing and pulling different images. BTW, try placing docker login in a before_script section so that it runs for all jobs.
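For example, a minimal sketch reusing the variables from your build job:
before_script:
  # a top-level before_script runs before every job's script,
  # so the registry login does not have to be repeated per job
  - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY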
Is it possible to pass Docker images built in an earlier job to a later job in CircleCI?
example
jobs:
build:
steps:
- checkout
# build image
deploy:
steps:
# deploy the image built earlier
I can't see how I can access the image in the deploy job without rebuilding it.
Each job can run on a different host, so to share the image you would need to push it to a registry from the job that builds it.
To reference the image that was pushed, you'll need an identifier that is known ahead of time. A good example of this is the CIRCLE_SHA1 environment variable, which you can use as the image tag:
jobs:
build:
machine: true
steps:
...
- run: |
docker build -t repo/app:$CIRCLE_SHA1 .
docker push repo/app:$CIRCLE_SHA1
test:
docker:
- image: repo/app:$CIRCLE_SHA1
steps:
...
I believe you can achieve this by persisting the image to a workspace and then attaching the workspace when you want to deploy it. See CircleCI's workspace documentation here: https://circleci.com/docs/workspaces
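Note that an image cannot be persisted directly; you would export it to a tar archive with docker save first and reload it later. A sketch under that assumption (job names and paths are illustrative):
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: |
          docker build -t repo/app:$CIRCLE_SHA1 .
          # export the built image to a file so it can travel via the workspace
          mkdir -p /tmp/workspace
          docker save -o /tmp/workspace/app.tar repo/app:$CIRCLE_SHA1
      - persist_to_workspace:
          root: /tmp/workspace
          paths:
            - app.tar
  deploy:
    machine: true
    steps:
      - attach_workspace:
          at: /tmp/workspace
      - run: |
          # reload the image built in the previous job, then deploy/push it
          docker load -i /tmp/workspace/app.tar
          docker push repo/app:$CIRCLE_SHA1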
I'm building my pipeline to create a Docker image and then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it which artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified)
image: atlassian/default-image:latest
options:
docker: true
pipelines:
branches:
dev:
- step:
name: 1. Install dotnet
script:
# Do things
- step:
name: 2. Install AWS CLI
script:
# Do some more things
- step:
name: 3. Build Docker Image
script:
- export DOCKER_PROJECT_NAME=projectname
- docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
artifacts:
- ./**
- step:
name: 4. Push Docker Image to AWS
script:
# Tag and push my docker image to ECR
- export DOCKER_PROJECT_NAME=projectname
- docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
- docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know these commands work, but only if I remove the separate steps and run everything in one. For whatever reason, step 4 doesn't have access to the Docker image created in step 3. Any help is appreciated!
Your Docker images are not stored in the folder where the build starts, so they are not saved as artifacts and are not available in the next step.
Even if they were (you could pack and unpack the image with docker save, as sketched below), you would probably run into the size limits for artifacts, not to mention the time it takes to pack and unpack.
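If you did want to go the save/load route anyway, a sketch would look roughly like this (size limits and pack/unpack time still apply):
- step:
    name: 3. Build Docker Image
    script:
      - export DOCKER_PROJECT_NAME=projectname
      - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
      # write the image into the clone directory so it is captured as an artifact
      - docker save -o image.tar $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
    artifacts:
      - image.tar
- step:
    name: 4. Push Docker Image to AWS
    script:
      - export DOCKER_PROJECT_NAME=projectname
      # restore the image from the artifact before tagging and pushing it
      - docker load -i image.tar
      - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
      - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER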
I guess you'd be better off creating a Dockerfile in your project yourself and combining steps 1 and 2 there (a sketch of that follows). Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your single step would consist of building your project's Dockerfile and uploading the image to AWS. This also lowers your dependency on Bitbucket Pipelines.
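A sketch of that Dockerfile idea (the file name is illustrative, and the install commands are placeholders for whatever your steps 1 and 2 actually do):
# Dockerfile.ci — a base image that bakes in what steps 1 and 2 install
FROM atlassian/default-image:latest
# placeholders for the actual .NET and AWS CLI installation commands
COPY install-dot-net install-aws-cli /tmp/
RUN /tmp/install-dot-net && /tmp/install-aws-cli
The pipeline's image: would then point at the image built from this file, and the remaining step only builds and pushes your project's own Dockerfile.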
The Docker image is not being passed from step 3 to step 4 because Docker images are not stored in the build directory, so they are never picked up as artifacts.
The simplest solution would be to combine all four of your steps into a single step, as follows:
image: atlassian/default-image:latest
options:
docker: true
pipelines:
branches:
dev:
- step:
script:
# Install dependencies
- ./install-dot-net
- ./install-aws-cli
# Build the Docker image
- export DOCKER_PROJECT_NAME=projectname
- docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
# Tag and push the Docker image to ECR
- export DOCKER_PROJECT_NAME=projectname
- docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
- docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
I want to generate a Dockerfile in a GitLab CI script, build it, and then use the newly generated image in later build jobs. How can I do this? I tried using a global before_script, but that already starts inside the default container. I need to do this outside of any container.
before_script runs before every job, so it's not what you want here. But you can have a first job do the image build and take advantage of the fact that each job can use a different Docker image. Building the image itself is covered in the manual.
Option A (uhm... sort of OK)
Have 2 runners, one with a shell executor (tagged shell) and one with a Docker executor (tagged docker). You would then have a first stage with a job dedicated to building the docker image. It would use the shell runner.
image_build:
stage: image_build
script:
- # create dockerfile
- # run docker build
- # push image to a registry
tags:
- shell
The second job would then run using the runner with docker executor and use this created image:
job_1:
stage: test
image: [image you created]
script:
- # your tasks
tags:
- docker
The problem with this is that the runner would need to be part of the docker group, which has security implications.
Option B (better)
The second option does the same but uses only one runner with the Docker executor. The Docker image is built inside a running container (the gitlab/dind:latest image), i.e. a "Docker in Docker" solution.
stages:
- image_build
- test
image_build:
stage: image_build
image: gitlab/dind:latest
script:
- # create dockerfile
- # run docker build
- # push image to a registry
job_1:
stage: test
image: [image you created]
script:
- # your tasks
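As a variation on option B (not part of the original answer): the same pattern is nowadays often written with a plain docker image plus a docker:dind service instead of gitlab/dind, roughly:
image_build:
  stage: image_build
  image: docker:20
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # generate the Dockerfile here, then build and push it to the project registry
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest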