Use variables in docker-compose file image - docker

I would like to pull and push my images to different repositories depending on the environment I am deploying them to.
My .env file obviously only works in a single environment, and currently I have to use multiple docker-compose files to get around this.
Simply I would like to do something like this:
version: '2.1'
services:
  my_service:
    build:
      dockerfile: Dockerfile.somedockerfile
    image: ${docker_repo_host}:${docker_repo_port}/myimage:latest
Here I could pass in docker_repo_host and docker_repo_port dynamically when building. I am unsure whether there is a way to do this neatly, but any suggestion would be appreciated.

Export the two variables
export docker_repo_host=your_host
export docker_repo_port=your_port
Now run the docker-compose command
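The substitution Compose performs is the same one the shell does, so you can verify the resulting image reference before building. A quick sketch (the registry host and port below are hypothetical placeholders):

```shell
# Hypothetical values for illustration -- substitute your real registry.
export docker_repo_host=registry.example.com
export docker_repo_port=5000

# Compose reads these from the environment; the image line in the
# compose file resolves to the same string the shell produces here:
echo "${docker_repo_host}:${docker_repo_port}/myimage:latest"
# -> registry.example.com:5000/myimage:latest

# With the variables exported, build and push as usual:
# docker-compose build && docker-compose push
```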

Related

Can you specify the tag to pull when doing a docker-compose pull?

I have a docker-compose.yml file that kind of looks like this:
version: '3.5'
services:
  service1:
    image: service-one-image:develop
I know that develop is the tag that will be pulled if I do a docker-compose pull. But I would like to be able to pull a different tag without having to change every tag (they're all develop, so I could do a find and replace) in the yml file each time I need to switch tags. Is there a way to do that? Something like docker-compose pull --tag=release
This is possible! I found an answer here
Basically, you change the compose file to use a variable:
version: '3.5'
services:
  service1:
    image: service-one-image:$TAG
And on the command line you run TAG={value} docker-compose {command}
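Compose's substitution also supports shell-style defaults, so TAG can be made optional by writing the image line as service-one-image:${TAG:-develop}. A small sketch of how the default kicks in, using echo to stand in for what Compose resolves:

```shell
# ${TAG:-develop} falls back to "develop" when TAG is unset or empty.
unset TAG
echo "service-one-image:${TAG:-develop}"
# -> service-one-image:develop

# With TAG set, the default is ignored:
TAG=release
echo "service-one-image:${TAG:-develop}"
# -> service-one-image:release

# So both of these work:
#   docker-compose pull                  (pulls :develop)
#   TAG=release docker-compose pull      (pulls :release)
```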

Best practice - having multiple docker-compose files in a repo

I'm currently working on a fullstack web project that consists of the following components:
Database (MariaDB)
Frontend (Angular)
Backend (NodeJS)
Every component should be deployable through docker. For that I have a Dockerfile for each of them. I also defined a docker-compose in the repository root to deploy all of them together.
# current repo structure
|frontend/
  |src/
  |docker/
    -Dockerfile
    -docker-compose.yml
|backend/
  |src/
  |docker/
    -Dockerfile
    -docker-compose.yml
|database/
  |src/
  |docker/
    -Dockerfile
    -docker-compose.yml
-docker-compose.yml
Do you think this is good practice? I am unsure because I find my current structure kind of confusing. How do you handle it in similar projects?
docker-compose is designed to orchestrate multiple components of a project in one single place: the docker-compose file.
In your case, and as m303945 said, you don't need multiple docker-compose files. Instead, your main docker-compose.yml should reference the Dockerfile of each of your components. The file could contain something like this:
services:
  frontend:
    build:
      context: frontend
      dockerfile: docker/Dockerfile
  backend:
    build:
      context: backend
      dockerfile: docker/Dockerfile
  database:
    build:
      context: database
      dockerfile: docker/Dockerfile
You don't need multiple docker-compose files. If you want to run specific parts of the app together, for example only the database and backend, just run this command:
docker-compose -f docker-compose-file.yml up -d database backend
where database and backend are the service names in the docker-compose file.

Share env variables between Docker-Compose and GitLab-CI

Note: I've omitted some details, stages and settings from the following config files to make the post shorter and the question more "readable". Please comment if you believe essential details are missing, and I'll (re-) add them.
Now, consider a docker-compose project, described by the following config,
# docker-compose.yml
version: '3'
services:
  service_1:
    build: ./service_1
    image: localhost:8081/images/name_of_service_1
    container_name: name_of_service_1
  service_2:
    build: ./service_2
    image: localhost:8081/images/name_of_service_2
    container_name: name_of_service_2
Then, in the project's git repository, we have another file, for the GitLab continuous integration config,
# .gitlab-ci.yml
image: docker:latest

stages:
  - build
  - release

Build:
  stage: build
  script:
    - docker-compose build
    - docker-compose push

# Release by re-tagging some specific (not all) images with version num
Release:
  stage: release
  script:
    - docker pull localhost:8081/images/name_of_service_1
    - docker tag localhost:8081/images/name_of_service_1 localhost:8081/images/name_of_service_1:rel-18.04
Now, this works fine and all, but I find it frustrating how I must duplicate image names in both files. The challenge here (in my own opinion) is that the release stage does not release all images that are part of the compose, because some are pure mock/dummy-images purely meant for testing. Hence, I need to tag/push the images and containers individually.
I would like to be able to define the image names only once: I tried introducing the .env file, which is automatically imported by docker-compose.yml,
# .env
GITLAB=localhost:8081/images
SERVICE_1=name_of_service_1
SERVICE_2=name_of_service_2
This lets me update the docker-compose.yml to use these variables and write image: "${GITLAB}/${SERVICE_1}" instead of the hard-coded names in the original file above. However, I'm unable to import these variables into .gitlab-ci.yml, and hence need to duplicate them.
Is there a simple way to make docker-compose.yml and .gitlab-ci.yml share some (environment) variables?
I'm not sure what your problem is. So you have an .env file in your project's root directory that contains all the variables you need to be set in your release job? Why don't you just load them like this:
script:
  - source ./.env
  - docker pull "${GITLAB}/${SERVICE_1}"
  - docker tag "${GITLAB}/${SERVICE_1}" "${GITLAB}/${SERVICE_1}:rel-18.04"
There is no built-in way for .gitlab-ci.yml to load environment variables from an external file (e.g., your .env file).
You could add the environment variables to your .gitlab-ci.yml file itself (in addition to the .env file used locally).
This may seem like copy-pasting code, but CI systems reproduce the environment you run locally by means of a file (.gitlab-ci.yml for GitLab CI) containing the same commands you use locally, so it's okay.
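One common workaround is to export the .env file's contents at the start of the CI script, so both docker-compose and the shell read the same definitions. A sketch, assuming the .env file contains only plain KEY=value lines with no quotes, spaces, or multi-word values:

```shell
# Recreate the .env file from the question for the demo:
printf 'GITLAB=localhost:8081/images\nSERVICE_1=name_of_service_1\n' > .env

# Export every non-comment line; after this, the same names that
# docker-compose substitutes are available to plain shell commands:
export $(grep -v '^#' .env | xargs)

echo "${GITLAB}/${SERVICE_1}"
# -> localhost:8081/images/name_of_service_1
```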

How to set environment variable into docker container using docker-compose

I want to set credentials to use Google Translate Api Client so I have to set environment variable GOOGLE_APPLICATION_CREDENTIALS that value is path to credential file (from Google Cloud).
When I used docker build and docker run it was pretty easy.
I ran docker run with
--env GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/CryptoTraderBot-901d31d199ce.json and the environment variable was set.
Things got more difficult when I tried to set it in docker-compose. I have to use docker-compose because I need a few containers, so it is the only way to achieve this.
Based on Docker compose environment variables documentation I created my docker-compose.yml file that looks like this:
version: "3"
services:
  redis:
    image: redis:4-alpine
  crypto-bot:
    build: .
    depends_on:
      - redis
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS = /usr/src/app/CryptoTraderBot-901d31d199ce.json
I have also tried multiple variations of the path to the .json file, but none of them worked properly.
Have you got any idea how I can set it properly?
While writing this question I resolved the problem in a funny and easy way, but I thought I'd post the answer to help someone with a similar problem in the future.
All you have to do is remove the " " (space) on either side of the = sign, so the last two lines of docker-compose.yml should look like this:
environment:
  - GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/CryptoTraderBot-901d31d199ce.json
Docker Compose has a newer feature called secrets. You can bind the credentials like this:
services:
  secret-service:
    build:
      context: secret-service
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp-credentials
    secrets:
      - gcp-credentials

secrets:
  gcp-credentials:
    file: ./gcp-credentials.json
Reference: https://docs.docker.com/compose/compose-file/#secrets

docker-compose scaleable way to provide environment variables

I am searching for a scaleable solution to the problem of having numerous possible environments for a containerised app. Let's say am creating a web app and I have the usual deployment environments, develop, testing, production but I also have multiple instances of the app for different clients, client1, client2, client3 etc.
This quickly becomes a mess if I have to create separate docker-compose files:
docker-compose-client1-develop.yml
docker-compose-client1-testing.yml
docker-compose-client1-production.yml
docker-compose-client2-develop.yml
...
Breaking the client-specific configuration into a .env file and using Docker's variable substitution gets me most of the way there. I can now have one docker-compose.yml file and just do:
services:
  webapp:
    image: 'myimage:latest'
    env_file:
      - ./clients/${CLIENT}.env # client specific .env file
    environment:
      - DEPLOY # develop, testing, production
So now I just need the CLIENT and DEPLOY environment variables set when I run docker-compose up, which is fine, but I'm wondering about a convenient way to pass those environment variables to docker-compose. There's the potential (at least during development) for a decent amount of context-switching. Is there a tidy way to pass different CLIENT and DEPLOY env vars to docker-compose up every time I run it?
What you are trying to achieve is to set environment variables per command.
Are you running on Linux? Take a look at the env command. Just prepend it to your docker-compose command line like this:
env CLIENT=client1 DEPLOY=production docker-compose ...
On Windows, you may have to do something more complicated, but there could be simpler ways.
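A nice property of env is that the variables exist only for that single command and never leak into your shell session, which suits frequent context-switching. A quick sketch, with a subshell echo standing in for docker-compose:

```shell
# Variables are visible inside the wrapped command...
env CLIENT=client1 DEPLOY=production sh -c 'echo "$CLIENT-$DEPLOY"'
# -> client1-production

# ...but not in the calling shell afterwards:
echo "CLIENT is ${CLIENT:-unset}"
# -> CLIENT is unset
```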
Have you tried extending docker-compose files?
For instance you can have base docker-compose.yml file which is the production one and multiple extending files where you only change what needs to be overloaded:
docker-compose.dev.yml
version: '2'
services:
  webapp:
    env_file: path_to_the_file_env
Then you simply use both:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
Spinning up production is as easy as:
docker-compose up
I personally use this technique a lot in many of my projects.
