Share env variables between Docker-Compose and GitLab-CI - docker

Note: I've omitted some details, stages and settings from the following config files to make the post shorter and the question more "readable". Please comment if you believe essential details are missing, and I'll (re-) add them.
Now, consider a docker-compose project, described by the following config,
# docker-compose.yml
version: '3'
services:
  service_1:
    build: ./service_1
    image: localhost:8081/images/name_of_service_1
    container_name: name_of_service_1
  service_2:
    build: ./service_2
    image: localhost:8081/images/name_of_service_2
    container_name: name_of_service_2
Then, in the project's git repository, we have another file, for the GitLab continuous integration config,
# .gitlab-ci.yml
image: docker:latest

stages:
  - build
  - release

Build:
  stage: build
  script:
    - docker-compose build
    - docker-compose push

# Release by re-tagging some specific (not all) images with version num
Release:
  stage: release
  script:
    - docker pull localhost:8081/images/name_of_service_1
    - docker tag localhost:8081/images/name_of_service_1 localhost:8081/images/name_of_service_1:rel-18.04
Now, this works fine, but I find it frustrating that I must duplicate the image names in both files. The challenge here (in my opinion) is that the release stage does not release all the images that are part of the Compose project, because some are mock/dummy images meant purely for testing. Hence, I need to tag/push those images individually.
I would like to define the image names only once. I tried introducing a .env file, which docker-compose picks up automatically:
# .env
GITLAB=localhost:8081/images
SERVICE_1=name_of_service_1
SERVICE_2=name_of_service_2
This lets me update docker-compose.yml to use these variables and write image: "${GITLAB}/${SERVICE_1}" instead of the hard-coded names in the original file above. However, I'm unable to import these variables into .gitlab-ci.yml, and hence need to duplicate them.
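For reference, with those variables in place the compose file would look roughly like this (only the image: lines change; everything else stays as above):
# docker-compose.yml (using the variables from .env)
version: '3'
services:
  service_1:
    build: ./service_1
    image: "${GITLAB}/${SERVICE_1}"
    container_name: name_of_service_1
  service_2:
    build: ./service_2
    image: "${GITLAB}/${SERVICE_2}"
    container_name: name_of_service_2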
Is there a simple way to make docker-compose.yml and .gitlab-ci.yml share some (environment) variables?

I'm not sure what the problem is. You have an .env file in your project's root directory that contains all the variables you need in your release job. Why don't you just load them like this:
script:
  - source ./.env
  - docker pull "${GITLAB}/${SERVICE_1}"
  - docker tag "${GITLAB}/${SERVICE_1}" "${GITLAB}/${SERVICE_1}:rel-18.04"
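Put in context, the Release job from the question's .gitlab-ci.yml might then look roughly like this (a sketch, assuming the .env file is committed to the repository so it is present in the runner's checkout):
Release:
  stage: release
  script:
    - source ./.env   # or the POSIX form ". ./.env" if the runner's shell has no "source"
    - docker pull "${GITLAB}/${SERVICE_1}"
    - docker tag "${GITLAB}/${SERVICE_1}" "${GITLAB}/${SERVICE_1}:rel-18.04"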

There is no builtin way for .gitlab-ci.yml to load environment variables via an external file (e.g., your .env file).
You could try adding the environment variables to your .gitlab-ci.yml file itself (in addition to your .env file used locally).
This may feel like copy-pasting, but CI systems reproduce the environment you use locally through a config file (e.g., .gitlab-ci.yml for GitLab CI) that runs the same commands you run locally, so a little duplication there is normal.
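For example, mirroring the .env file from the question in .gitlab-ci.yml could look like this (a sketch):
variables:
  GITLAB: localhost:8081/images
  SERVICE_1: name_of_service_1
  SERVICE_2: name_of_service_2
Top-level variables: entries are exported as environment variables in every job, so both docker-compose (for substitution in docker-compose.yml) and the plain docker commands can use them; the duplication is then confined to .env and .gitlab-ci.yml rather than scattered across individual commands.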

Related

Proper way to build a CICD pipeline with Docker images and docker-compose

I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances and my end goal is to deploy the docker-compose.yml that my repo on GitHub has:
version: "3"
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub, but what is the point of that?
You would just be pushing an individual image. Even if you pull that image later on a different instance, in order to run the app with its different services you still need to run the containers with docker-compose, and you wouldn't have the compose file unless you pull it from the GitHub repo again or create it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
For this setup it's important to delete the volumes: that overwrite the image's content with a copy of the application code. You also shouldn't need an override command:. For the deployment you'd need to replace build: with image:.
version: "3.8"
services:
  db:
    # unchanged from the question
    image: postgres
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    image: registry.example.com/me/web:${WEB_TAG:-latest}
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_HOST: db
    # no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
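As a sketch, such an override file could be as small as this (Compose automatically merges docker-compose.override.yml with docker-compose.yml when both are present):
# docker-compose.override.yml -- kept on the development machine, not copied to the deployment system
version: "3.8"
services:
  web:
    build: .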
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each a date-stamped tag, e.g. registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
Finally, remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby onto the target machine and copy the code across manually. This is a well-proven deployment approach. I find Docker simpler, but it assumes that the code and all of its dependencies are actually in the image and don't need to be copied separately.

docker-compose: (Re)Build Dockerfile from inside docker-compose file?

Had a hard time googling this question, as most suggestions show how to do it through the command line, which I sadly do not have access to in this environment. Is it possible to do the equivalent of
docker-compose up --build --force-recreate
From inside a docker-compose file?
The environment you describe sounds similar to Kubernetes in a couple of ways, except that it's driven by a Docker Compose YAML file. The strategies that work for Kubernetes will work here too. In Compose there's no way to put "actions" in a YAML file, or flag that a service always needs to be rebuilt or recreated. It sounds like the only thing it's possible to do in your environment is run docker-compose up -d.
The trick that I'd use here is to change the image: for a container whenever you have a change you need to deploy. That means the image tag needs to be something unique; it could be a date stamp or source control ID.
version: '3.8'
services:
  myapp:
    image: registry.example.com/myapp:20220209
Now when you have a change to your application, you (or your CI system) need to build a new copy of it, offline, and docker push it to a registry. Then change this image: value, and push the updated file to the deployment system. Compose will see that it's only running version 20220208 from yesterday and recreate that specific container.
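As a sketch of that CI step, here is what it could look like as a GitLab CI job (GitLab CI being the system used elsewhere on this page; the registry name is a placeholder):
build image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA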
If you have the ability to specify environment variables, you can use that in the Compose setup
image: registry.example.com/myapp:${MYAPP_TAG:-latest}
to avoid having to physically modify the file.
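Compose also reads a .env file placed next to docker-compose.yml, so on the deployment system the tag can live in a tiny file rather than in the YAML itself, for example:
# .env on the deployment system
MYAPP_TAG=20220209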

Using docker-compose to build one of many dockerfiles in a repository

We can build a Dockerfile directly from the root of a git repository, e.g.
$ docker build https://github.com/docker/rootfs.git#container:docker
This takes a dockerfile located at the root of the git repository and uses the root as build context.
However, sometimes there are multiple Dockerfiles in one repository; for example, the onedrive Linux client has 5 Dockerfiles, and I want to build a specific one.
What is the correct way to do this in a docker-compose file? My setup currently looks as follows
services:
  onedrive:
    image: local-onedrive
    # I want to build the following file:
    # https://github.com/abraunegg/onedrive/blob/master/contrib/docker/Dockerfile-rpi
    build: https://github.com/abraunegg/onedrive.git#
    restart: unless-stopped
    environment:
      - ONEDRIVE_UID=${PUID}
      - ONEDRIVE_GID=${PGID}
    volumes:
      - ${CONFIG_DIR}/onedrive:/onedrive/conf
      - ${ONEDRIVE_DIR}:/onedrive/data
A main advantage would be that updating would be directly done from the author's repository.
The Docker documentation link you reference says:
When the URL parameter points to the location of a Git repository, the repository acts as the build context.
You should be able to use the docker build -f option or the Compose dockerfile: setting to specify an alternate Dockerfile, within that context directory.
build:
  context: 'https://github.com/abraunegg/onedrive.git#'
  dockerfile: contrib/docker/Dockerfile-rpi

Best practices for adding .env-File to Docker-Image during build in Gitlab CI

I have a Node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As always advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as Kubernetes clusters.
What I want to achieve is an automated build via GitLab CI for different environments (e.g. stage) depending on the commit branch (also named stage), meaning when I push to origin/stage I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI variables and created a variable named DOTENV_STAGE of type file with the contents of my local .env-stage file.
Now my problem is: how do I get that content as a .env file inside the Docker image that is going to be built by GitLab, given that the file is not yet a file in my repo but a variable instead?
I tried using cp (see below, also in the before_script section) to just copy the file to a .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind

build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env but none of them worked.
So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?
Thanks
You should avoid copying the .env file into the container altogether. Rather, feed it in from outside at runtime. There's a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in GitLab's CI/CD settings, then dump it to a .env file in the runner and feed it to the Docker Compose pipeline.
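A sketch of that flow, assuming a CI/CD variable named DOTENV_CONTENT that holds the raw text of the .env file (the name is made up for illustration; note that GitLab's masking has restrictions on multi-line values):
deploy:
  stage: deploy
  script:
    # write the variable out as a real .env file in the workspace
    - echo "$DOTENV_CONTENT" > .env
    # Compose picks it up through the env_file: entry shown above
    - docker-compose up -d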
After some more research I stumbled upon a support-forum entry on gitlab.com which exactly described my situation (unfortunately it has since been deleted), and it was solved with the same approach I was trying to use, namely this:
...
script:
  - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile.
An alternative approach I thought about after joyarjo's answer could be to use a ConfigMap in Kubernetes, but I haven't tried it yet.
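Untested, but a minimal sketch of that idea might look like this (names and values are placeholders): the ConfigMap replaces the baked-in .env file, and the Deployment injects it as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-stage-env
data:
  NODE_ENV: staging
  API_URL: https://stage.example.com
---
# referenced from the Deployment's container spec with:
#   envFrom:
#     - configMapRef:
#         name: myapp-stage-env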

Docker-compose for multi-stage build?

Let's say that we have a docker-compose.yml which builds a few services with Dockerfiles in different locations, like this:
version: '3.4'
services:
  service1:
    build: ./service1-dir
  service2:
    build: ./service2-dir
Let's say that in the Dockerfile of service1 we have some folder that we already copy in.
Can I pass this folder to the Dockerfile of service2?
In other words: can I use the multi-stage build technique to pass layers between different Dockerfiles, or must a multi-stage build live in a single Dockerfile?
You can split things up the way you describe; it'll probably be more robust to do things in two stages in a single Dockerfile, or to do the primary part of your build using host tools.
There are two essential parts to this. The first is that, in your docker-compose.yml file, if you specify both a build: description and an image: name, then Docker Compose will tag the built image for you. The second is that the Dockerfile COPY directive can copy content --from a preceding build stage or from an arbitrary other image. So if your docker-compose.yml says
version: '3'
services:
  service1:
    build: ./service1-dir
    image: me/service1
  service2:
    build: ./service2-dir
and service2-dir/Dockerfile says
COPY --from=me/service1 /app/somefile .
it will copy content from one image to the other.
The one challenge here is that docker-compose build doesn't specify the order the images are built in. If it builds service2 first, it will get old content from the previous build of service1 (or fail if it's the initial build). To do this reliably, you need to do something like
docker-compose build service1
docker-compose build
docker-compose up -d
If the build sequence isn't too complex, just including it in both Dockerfiles could make sense. It can also work to build whatever artifacts you need on the host and have your Dockerfiles copy them in as-is instead of building them themselves; this works especially well if that content is platform-neutral (Java .jar files, or HTML/JavaScript/CSS files from a Webpack build for a browser application).
