A very typical scenario with GitLab CI is to install a few packages you need for your jobs (linters, code coverage tools, deployment-specific helpers and so on) and then run your actual stages/steps of building, testing and deploying your software.
The Docker runner is a very neat and clean solution, but it seems very wasteful to always run the steps that install the base software. Normally, Docker is able to cache such layers, but with the way the GitLab Docker runner works, that doesn't happen.
We realize that setting up another project to produce pre-configured Docker images would be one solution, but are there any better ones? Basically, what we want to say is: "If the before_script section hasn't changed, you can reuse the image from last time; no need to reinstall wget or whatever."
Any solution like that out there?
You can use the container registry of your GitLab project, e.g.:
images:
  stage: build
  image: docker
  services:
    - docker:dind
  script:
    # log in to the project's container registry:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # pull the current image; in case the image does not exist yet, do not stop the script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    # build with the pulled image as cache:
    - docker build --pull --cache-from $CI_REGISTRY_IMAGE:latest -t "$CI_REGISTRY_IMAGE:latest" .
    # push the final image:
    - docker push "$CI_REGISTRY_IMAGE:latest"
This way docker build will benefit from the work done by the last run of the job. See the docs. You may also want to avoid unnecessary runs with some rules.
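For instance, a minimal sketch of such a rule, assuming the job above is named images and the installed software is driven by the Dockerfile (the path is an assumption; adjust it to whatever determines your image contents):

images:
  rules:
    - changes:
        - Dockerfile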
I've been trying to wrap my head around some old CI/CD scripts my company wrote previously to deploy some applications. The GitLab pipeline has several stages, as seen at the beginning of the .gitlab-ci.yml file:
image: docker:stable

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_API: $CI_REGISTRY_IMAGE/career_api:latest
  CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_APP: $CI_REGISTRY_IMAGE/career_app:latest
  CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP: $CI_REGISTRY_IMAGE/career_dev_app:latest
  CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_API: $CI_REGISTRY_IMAGE/career_dev_api:latest
  # from https://storage.googleapis.com/kubernetes-release/release/stable.txt
  K8S_STABLE_VERSION_URL: https://storage.googleapis.com/kubernetes-release/release/v1.18.4/bin/linux/amd64/kubectl

stages:
  - prebuild
  - test
  - transform
  - build
  - deploy
Then, later on, the file specifies all these stages for a DEV and a MASTER branch. Specifically, I have trouble understanding the script in the prebuild stage of the dev branch:
prebuild_dev:
  stage: prebuild
  extends: .prebuildreq
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP || true
    - docker build -f Dockerfile --pull -t $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP --cache-from $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP .
    - docker push $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP
  only:
    refs:
      - dev
  tags:
    - testcicd
How I see it now is that the GitLab runner is run as a Docker container (signified by the image: docker and DOCKER_DRIVER: overlay2 at the beginning of the file). Then, in the prebuild stage, it does 4 steps:
1. Log in to the container registry with the predefined variables $CI_REGISTRY_USER, $CI_REGISTRY_PASSWORD, and $CI_REGISTRY.
2. Pull $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP from this registry. First question: what does || true do here?
3. Build a Dockerfile but also pull $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP? Second question: what is happening in this line?
4. Push the image (the pulled one or the built one?) back to the container registry.
Some help to understand this would be greatly appreciated.
docker pull $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP || true

I assume (not 100% sure) this is so the command doesn't fail the job when the image doesn't exist yet: || true forces the line to exit with status 0, so the script continues even if docker pull fails.
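For illustration, a minimal sketch of the difference (registry.example.com/group/app is a made-up image name):

# Without || true, a missing image aborts the job, because the script stops on a non-zero exit code:
docker pull registry.example.com/group/app:latest
# With || true, the line always exits with status 0, so the job continues even on the first run:
docker pull registry.example.com/group/app:latest || true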
Question 1

docker build -f Dockerfile --pull -t $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP --cache-from $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP .

docker build --pull always fetches the latest digest of the base image instead of using the local version. Think of it this way: even if the base image is already available on your build machine (e.g. a Jenkins build agent), it won't be reused; it is pulled again.

Note: --pull and --no-cache are boolean flags; you don't pass any values with them, unlike docker build -t or docker run -p.
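For illustration (myimage is a placeholder name):

docker build --pull --no-cache -t myimage .   # --pull and --no-cache take no value
docker run -p 8080:80 myimage                 # -p takes a value (host:container port mapping)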
Question 2

docker build -f Dockerfile --pull -t $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP --cache-from $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP .

$CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP is not pulling an image here; after -t it is the name the built image gets tagged with. After --cache-from it tells Docker that the layers of the previously pulled image may be reused as build cache.
Question 3

push image (The pulled one or the built one?) back to the container registry

The built one, since you tagged the built image with $CONTAINER_RELEASE_IMAGE_CAREER_GROWTH_DEV_APP.
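A minimal illustration of this tag-then-push relationship (the image name is a placeholder):

docker build -t registry.example.com/group/app:latest .   # the image just built gets this name
docker push registry.example.com/group/app:latest         # push resolves that name to the built image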
I have a repository that includes three parts: frontend, admin and server. Each contains its own Dockerfile.
After building the image I wanted to add a test for admin. My tests go through but take a lot of time, because each stage pulls the base image and builds everything from scratch (around 8 minutes per stage). This is my .gitlab-ci.yml:
image: tmaier/docker-compose

services:
  - docker:dind

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin
I am not quite sure if I need to push/pull images between stages or if I should do that with artifacts/cache/whatever. As I understand it, I only need to push/pull if I want to deploy my images to another server. I also added a docker-compose push, which runs through, but GitLab doesn't show me any images in my registry.
I have been researching this a lot, but most example code I found covered only a single Docker container and didn't make use of docker-compose.
Any ideas? :)
GitLab currently has no way to share Docker images between stages as artifacts; there has been an outstanding feature request for this for three years.
You'll need to push the Docker image to the Docker registry and pull it in the later stages that need it (or do everything related to the image in one stage).
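A sketch of how that push/pull could look for the pipeline above, assuming the compose file's image: entries point at the GitLab registry (the variables are the ones from the question):

build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    # pull what the build stage pushed instead of rebuilding from scratch:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml pull admin
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin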
Mark, could you show the files docker-compose.yml and docker-compose.test.yml? Maybe you are pushing and pulling different images. BTW, try placing docker login in a before_script section so that it applies to all jobs.
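A minimal sketch of that suggestion, reusing the variables from the question:

before_script:
  - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY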
I was previously using the shell executor for my GitLab runner to build my project. So far, I have set up the pipeline so that it runs whatever commands I have set in the gitlab-ci.yml file, seen below:
gitlab-ci.yml using shell runner
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now, I want to switch to a Docker image. I have reconfigured the runner to use a Docker image, and I specified the image in my new gitlab-ci.yml file, seen below. I followed the gitlab-ci Docker tutorial, and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0

before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
1. With my current gitlab-ci.yml file, how does this build a Docker image, and does it even build one? If it does, what does that mean? Currently the pipeline passed, but I have no idea if it ran in a Docker image or not (am I supposed to be able to tell?).
2. Also, let's say the Docker image was created, ran the tests, and the pipeline passed; it should then push the code to a new repository (not included in the yml file yet). From what I gathered, the image isn't being pushed, it's just the code, right? So what do I do with this created Docker image?
3. How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and the Dockerfile.
4. Do I need to surround all commands in the gitlab-ci.yml file with docker run <commands> or docker exec <commands>? Without including one of these 2 commands, it seems like it would just run on the server and not in a Docker image.
5. I've seen people specify an image in both the gitlab-ci.yml file and the Dockerfile. I have an Angular project, and I specified an image of image: node:8.10.0. In the Dockerfile, should I specify the same image? I've seen some projects where they are completely different, and I'm wondering what the use of both images is and whether picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular build steps into a Dockerfile and put the Docker operations inside your .gitlab-ci.yml instead of the Angular commands, like here:
stages:
  - build
  # - release
  # - deploy

.build_template: &build_definition
  stage: build
  image: docker:17.06
  services:
    - docker:17.06-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
    - docker push $CONTAINER_IMAGE

build_app_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
    DOCKERFILE: ./Dockerfile.app

build_nginx_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
    DOCKERFILE: ./Dockerfile
You can set up a few build jobs - for production, development, staging, etc.
Right next to your .gitlab-ci.yml you can put Dockerfile and Dockerfile.app - Dockerfile.app stands for building your Angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
Now with your app built, it can be served via a web server - which one is your choice, and each choice comes with its own configuration; I can't even scratch the surface here. That would be implemented in the Dockerfile - we usually use Nginx in our company.
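For instance, a minimal sketch of such an Nginx Dockerfile, assuming the Angular build output lands in dist/ (both the base image tag and the path are assumptions to adapt):

FROM nginx:stable
# serve the built Angular app from Nginx's default web root (dist/ is an assumed output path):
COPY dist/ /usr/share/nginx/html/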
From here on it's about releasing your images and deploying them. I've only covered how to build them in Docker, as that seems to be what the question is about.
If you want to deploy your image and run it somewhere - choose a provider - AWS, Heroku, your own infrastructure - have it your way, but this is far too much to cover in a single answer, so I'll leave it for another question once you specify where you'd like to deploy your newly built images and how you'd like to serve them. In our company we orchestrate things with Rancher, but there are multiple awesome, competing options in the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's "internal" registry only. In case you want to use your own registry, change the values accordingly:
# previous configs
script:
  - docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
from -u gitlab-ci-token to your login for the registry,
from $CI_JOB_TOKEN to your password,
from $CI_REGISTRY to your registry address.
Those values should be stored as GitLab CI secret variables and referenced via environment variables so that they are not saved in the repository.
Finally, your script might look like the one below in case you decide to protect these values. Refer to GitLab's official docs on how to add secret CI variables - a super easy task.
# previous configs
script:
  - docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs
I recently got into CI/CD, and a good starting point for me was GitLab, since they provide an easy interface for it. I got an idea of what pipelines and stages are, but I have run into some seemingly contradictory thoughts about GitLab CI running on Docker.
My app runs on Docker Compose. It contains (blah blah) that makes it easy to build & run containers. Each service in the Docker Compose file creates a single Docker container, except the php-fpm one, which can scale horizontally, so I can scale it later.
I will use that Docker Compose setup for production; I am currently using it in development, and I want to use it in the CI/CD pipelines too.
However, the .gitlab-ci.yml provides support for only one image, so I have to build it and push it to either the GitLab Registry or Docker Hub in order to pull it later in the CI/CD process.
How can I build my Docker Compose's service as a single image in order to push it to the Registry/Docker so I can pull it in the CI/CD?
My project contains a docker folder and a docker-compose.yml. In the docker folder, each service has its own separate directory (php-fpm, nginx, mysql, etc.) and each one (prepare yourself) contains a Dockerfile with build details, especially the php-fpm one (deps and libs are strong with this one)
Each service in the docker-compose.yml has a build context in each of their own folder.
If I was unclear, I can provide additional info.
However the .gitlab-ci.yml provides support for only one image
This is not true. From the official documentation:
Your image will be named after the following scheme:
<registry URL>/<namespace>/<project>/<image>
GitLab supports up to three levels of image repository names.
The following examples of image tags are valid:
registry.example.com/group/project:some-tag
registry.example.com/group/project/image:latest
registry.example.com/group/project/my/image:rc1
So the solution to your problem is simple - just build the individual images and push them to the GitLab container registry under different image names.
If you would like an example, my pipelines are set up like this:
.template: &build_template
  image: docker:stable
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest || true
    # ${CI_COMMIT_TAG+x} expands to "x" only when CI_COMMIT_TAG is set,
    # so -n tests whether this pipeline runs for a git tag:
    - if [ -n "${CI_COMMIT_TAG+x}" ];
      then docker build
      --cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
      --file $DOCKERFILE_NAME
      --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
      --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG
      --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
      else docker build
      --cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
      --file $DOCKERFILE_NAME
      --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
      --tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
      fi
    - docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
    - if [ -n "${CI_COMMIT_TAG+x}" ]; then
      docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG;
      fi
    - docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest

build:image1:
  <<: *build_template
  variables:
    IMAGE_NAME: image1
    DOCKERFILE_NAME: Dockerfile.1

build:image2:
  <<: *build_template
  variables:
    IMAGE_NAME: image2
    DOCKERFILE_NAME: Dockerfile.2
And you should be able to pull the same image using $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA in later pipeline jobs or your compose file (provided that the variables are passed to where you run your compose file).
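As a sketch, a compose file could then reference the pushed images like this (the service and image names follow the example above; the variables must be available in the environment where compose runs):

# docker-compose.yml
version: "3"
services:
  image1:
    image: ${CI_REGISTRY_IMAGE}/image1:${CI_COMMIT_SHA}
  image2:
    image: ${CI_REGISTRY_IMAGE}/image2:${CI_COMMIT_SHA}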
You don't need dind to run a docker-compose stack; you can run multiple docker-compose up commands. Passing -p $CI_JOB_ID as the compose project name keeps concurrent jobs from colliding.
acceptance_testing:
  stage: test
  before_script:
    - docker-compose -p $CI_JOB_ID up -d
  script:
    # "app" is a placeholder: docker-compose exec needs a service name to run the command in
    - docker-compose -p $CI_JOB_ID exec -T app /run/your/test/suite.sh
  after_script:
    - docker-compose -p $CI_JOB_ID down -v --remove-orphans || true
I think you are looking for something like this:
# .gitlab-ci.yml
image: docker

services:
  - docker:dind

build:
  script:
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker-compose up -d
Also good to know:
In Docker, what's the difference between a container and an image?
Building Docker images with GitLab CI/CD
I have a Drupal project which contains two images: one for the Drupal source code and another for the MySQL database.
I tagged them:
docker build -t registry.mysite.net/drupal/blog/blog_db:v1.3 mysql/db
docker build -t registry.mysite.net/drupal/blog/blog_drupal:v1.3 src/drupal
Here registry.mysite.net is the URL of the GitLab instance and can be found under the Container Registry settings,
drupal is the group name,
blog is the project name,
blog_db is the image for the database, mysql/db is the location of its Dockerfile, and likewise for the other image.
And then to push them to GitLab, use:
docker push registry.mysite.net/drupal/blog/blog_db:v1.3
docker push registry.mysite.net/drupal/blog/blog_drupal:v1.3
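If the push is rejected as unauthorized, a docker login against the registry is needed first (a sketch; it will prompt for your GitLab credentials):

docker login registry.mysite.net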
Hope this might help someone.
I've been trying to set up GitLab CI so that it can build a Docker image, and I came across the fact that DinD was initially enabled only for separate runners; a blog post suggested it would soon be enabled for shared runners as well.
Running DinD requires enabling privileged mode on the runner, which is set as a flag while registering the runner, but I couldn't find an equivalent mechanism for shared runners.
The shared runners are now capable of building Docker images. Here is the job that you can use:
stages:
  - build
  - test
  - deploy

# ...
# other jobs here
# ...

docker:image:
  stage: deploy
  image: docker:1.11
  services:
    - docker:dind
  script:
    - docker version
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    # push only for tags:
    - "[[ -z $CI_BUILD_TAG ]] && exit 0"
    - docker tag $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_BUILD_TAG
This job assumes that you are using the Container Registry provided by GitLab. It pushes the image only when the build commit is tagged with a version number.
Documentation for Predefined variables.
Note that you will need to cache, or generate as temporary artifacts, any dependencies of your service that are not committed in the repository. This is supposed to be done in other jobs; e.g. node_modules is generally not contained in the repository and must be cached from the build/test stage.
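As a sketch, such caching could look like this in .gitlab-ci.yml (the cache key is an assumption; any key that is stable across the jobs sharing node_modules works):

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/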