docker compose - ignore build context path - docker

I have docker-compose.yml file with build context property specified like this:
version: '3'
services:
  my-service:
    container_name: my-service
    image: my-service
    build:
      context: foo
    ports:
      - 8088:8088
  # other services
When I run docker-compose up locally, the build context exists and everything works fine. However, my CI server is configured to use the same docker-compose.yml file, but there is no build context there (images are copied as .tar archives via SSH and then loaded via docker load).
Now I've got an error:
ERROR: build path /foo either does not exist, is not accessible, or is
not a valid URL.
So I've tried to find a way to suppress looking for this build context when running docker-compose up (I don't want to build the images because they are already up to date), but docker-compose up --no-build does not work. Any ideas?

I posted your issue as a feature request on the docker-compose repository. Let's see how it progresses:
https://github.com/docker/compose/issues/7674
Meanwhile, you will have to work around this by modifying the CI script that runs docker-compose up --no-build so that it first does the mkdir -p ... that you need.
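For example, a CI-side sketch along those lines (foo is the context directory from the compose file above; the my-service.tar name is just an example):
docker load -i my-service.tar
# create the (empty) build context so the path check passes
mkdir -p foo
docker-compose up -d --no-build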

docker-compose.override.yml is a good solution in this case. You can override only the build block, and two independent files are not hard to maintain.
docker-compose.override.yml:
version: '3'
services:
  my-service:
    build:
      context: foo
docker-compose.yml:
version: '3'
services:
  my-service:
    container_name: my-service
    image: my-service
    ports:
      - 8088:8088
See https://docs.docker.com/compose/extends/
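With that split, local development picks up the override automatically, while the CI server passes only the base file and so never needs the build context (a sketch; the my-service.tar name is just an example):
# Local development: docker-compose merges docker-compose.yml and docker-compose.override.yml
docker-compose up -d
# CI: only the base file is used, so no build block and no context lookup
docker load -i my-service.tar
docker-compose -f docker-compose.yml up -d --no-build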

I had the same problem, and my solution, until "docker-compose config" provides a way to skip the directory-exists check, is to automatically create those directories that "docker-compose config" expects.
Here is a one-liner that does this:
egrep ' (context:|build)' < docker-compose.yml | sed -E 's/\s+\S+:\s*//' | xargs mkdir -p .
It's ugly, but I couldn't figure out another way. (The extends approach mentioned by dngnezdi is not a solution here, as I needed an automatic method.)

Running docker-compose up ends up with error "Service has neither an image nor a build context specified."

My microservices project structure is like this:
my-service-one/
- Dockerfile
- ...
my-service-two/
- Dockerfile
- ...
docker-compose.yml
As you can see, each service directory contains a Dockerfile. There is a docker-compose.yml in the root level.
The docker-compose.yml :
version: "3"
services:
service-one:
container_name: service-one
build:
dockerfile: ./my-service-one/Dockerfile
ports:
- "8081:8081"
service-two:
container_name: service-two
build:
dockerfile: ./my-service-two/Dockerfile
ports:
- "8082:8082"
Now I run docker-compose up -d from the root, and I end up with the error:
$ docker-compose up -d
ERROR: The Compose file is invalid because:
Service service-one has neither an image nor a build context specified. At least one must be provided.
My question is why does docker-compose think my service-one doesn't have a build context specified? Didn't I specify it already with:
build:
  dockerfile: ./my-service-one/Dockerfile
Why this error?
why does docker-compose think my service-one doesn't have a build context specified?
Weeeell, because you did not specify the build context.
Didn't I specify it already with:
No, you specified the dockerfile, not the context.
Why this error?
You have to specify the context so that docker knows what to build.
If you want to build with the context of current directory, you would do:
build:
  context: .
  dockerfile: ./my-service-two/Dockerfile
Maybe the context is inside my-service-two; I suspect you want to write:
build:
  context: ./my-service-two
  dockerfile: ./Dockerfile
or really just:
build: ./my-service-two
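Putting that together, a corrected docker-compose.yml for the layout in the question might look like this (a sketch; adjust the contexts if your Dockerfiles expect a different build root):
version: "3"
services:
  service-one:
    container_name: service-one
    build:
      context: ./my-service-one
      dockerfile: Dockerfile
    ports:
      - "8081:8081"
  service-two:
    container_name: service-two
    build:
      context: ./my-service-two
      dockerfile: Dockerfile
    ports:
      - "8082:8082"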
Provide a context property for both services in the build section, like this:
build:
  context: YOUR_DIRECTORY
  dockerfile: ./my-service-one/Dockerfile
YOUR_DIRECTORY is the place where the files for your project are located.
Most probably YOUR_DIRECTORY is already written in the child .yml files.
You have a couple of main approaches:
To copy and paste the context from the child .yml.
To produce the docker build using the child .yml with a command like:
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up --build

How to use secrets when building docker compose locally

I recently started using BuildKit to hide some env vars, and it worked great in prod via GitHub Actions!
My Dockerfile now is something like this:
# syntax=docker/dockerfile:1.2
...
RUN --mount=type=secret,id=my_secret,uid=1000 \
    MY_SECRET=$(cat /run/secrets/my_secret) \
    && export MY_SECRET
And my build command was something like this:
DOCKER_BUILDKIT=1 docker build \
    --secret id=my_secret,env="MY_SECRET"
And when I run this in my GitHub Actions workflow, it works perfectly.
But now, the problem is when I try to build it locally. When performing a docker-compose build it fails, of course, because I'm not passing in any secret, so my backend (Dockerfile) won't be able to read it from /run/secrets/.
What I've tried to do, so far, to accomplish the local build using docker-compose build:
1. Working with Docker secrets:
I basically tried doing:
$ docker swarm init
$ echo "my_secret_value" | docker secret create my_secret -
I thought that saving a secret would fix the problem, but it didn't work. I still got the same error message:
cat: can't open '/run/secrets/my_secret': No such file or directory
I also tried passing in the secret in my docker-compose file like the following, but that didn't work either:
version: '3'
services:
  app:
    build:
      context: "."
      args:
        - "MY_SECRET"
    secrets:
      - my_secret
secrets:
  my_secret:
    external: true
I also tried storing the secret in a local file, but that didn't work either; same error:
version: '3'
services:
  app:
    build:
      context: "."
      args:
        - "MY_SECRET"
    secrets:
      - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt
I also tried, following another answer, something like this:
args:
  - secret=id=my_secret,src=./my_secret.txt
But still got the same error:
cat: can't open '/run/secrets/my_secret': No such file or directory
What am I doing wrong to successfully perform a docker-compose build?
I'm aware that I could easily use two Dockerfiles, one to build locally and one to build in prod, but I just want to use BuildKit as it is, modifying only my docker-compose.yml file.
Does anyone have an idea about what I'm missing to be able to build locally reading from /run/secrets/?
Support for this was recently implemented in Compose v2. See the pull requests below.
https://github.com/docker/compose/pull/9386
https://github.com/compose-spec/compose-spec/pull/238
The provided example looks like this:
services:
  frontend:
    build:
      context: .
      secrets:
        - server-certificate
secrets:
  server-certificate:
    file: ./server.cert
So you are close, but you have to add the secrets key under the build key.
Also keep in mind that you have to use docker compose instead of docker-compose, in order to use v2 which is built into the docker client.
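Applied to the setup in the question, a minimal sketch might look like this (assuming the secret value lives in ./my_secret.txt next to the compose file, and building with docker compose build under v2):
services:
  app:
    build:
      context: .
      secrets:
        - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt
With this, the RUN --mount=type=secret,id=my_secret line in the Dockerfile above can read /run/secrets/my_secret during the build.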

Can you use the current project_name in a docker compose file?

I see lots of questions around setting/changing the COMPOSE_PROJECT_NAME or PROJECT_NAME using ENV variables.
I'm fine with the default project name, but I would like to reference it in my compose file.
version: "3.7"
services:
app:
build: DockerFile
container_name: app
volumes:
- ./:/var/app
networks:
- the-net
npm:
image: ${project_name}_app
volumes:
- ./:/var/app
depends_on:
- app
entrypoint: [ 'npm' ]
networks:
- the-net
npm here is arbitrary; hopefully the fact that it could be run as its own container, or in other ways, does not distract from the question.
Is it possible to reference the project name without setting it manually first?
Unfortunately it is not possible.
As alluded to, you can create a .env file and populate it with COMPOSE_PROJECT_NAME=my_name, but the config option does not present itself in your environment by default.
Unfortunately, the environment variable substitution in docker-compose is fairly limited, meaning we cannot use the available PWD env variable and greedy-match it at all:
$ cd ~
$ pwd
/home/tqid
$ echo "Base Dir: ${PWD##*/}"
Base Dir: tqid
When we use this reference, compose has issues:
$ docker-compose up -d
ERROR: Invalid interpolation format for "image" option in service "demo": "${PWD##*/}"
It's probably better to be explicit anyway. The COMPOSE_PROJECT_NAME is based on your directory, and if someone clones to a new folder it gets out of whack; including the .env file in source control provides a reusable and consistent place to reference the name.
https://docs.docker.com/compose/reference/envvars/#compose_project_name
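Concretely, a sketch of that .env approach (my_name is an arbitrary example): since Compose reads the .env file for variable substitution, you can interpolate the value yourself in the compose file.
# .env
COMPOSE_PROJECT_NAME=my_name
# docker-compose.yml (fragment)
services:
  npm:
    image: ${COMPOSE_PROJECT_NAME}_app
    entrypoint: [ 'npm' ]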
Using the same image as another container was what I was after ... reuse the image and change the entry point.
Specify the same build: options for both containers.
This seems inefficient, in that it will trigger the build sequence twice and docker images will list both of them. However, the way Docker's layer caching works, if identical RUN commands are run on identical input images, the resulting layer will simply be reused, and the two final images will have the same image ID; they will literally be the same image with two names.
The context in which I've run into this the most is a Python application where the same code base is used for a Django or Flask web server plus a Celery worker. The Docker-level setup is fairly language-independent, though: specify the same build: for both containers, and override the command: for the container(s) that need to do a non-default task.
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      REDIS_HOST: redis
  worker:
    build: .                  # <-- same as app
    command: npm run worker   # <-- overrides Dockerfile CMD
    environment:
      REDIS_HOST: redis
  redis:
    image: redis
It is also valid to specify build: and image: together in the docker-compose.yml file; this specifies the name of the image that will be built. It's frequently useful to explicitly specify this because you will need to point at a specific Docker Hub or other registry location to push the built image. If you do this, then you'll know the image name and don't need to depend on the context name.
version: '3.8'
services:
  app:
    build: .
    image: registry.example.com/my/app:${TAG:-latest}
  worker:
    image: registry.example.com/my/app:${TAG:-latest}
    command: npm run worker
You will need to manually docker-compose build in this setup. Compose's workflow doesn't have a way to specify that one container's build must run before a different container can start.
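As a rough sketch of the workflow for that second layout (the TAG value is only an example, matching the ${TAG:-latest} placeholder above):
TAG=v1.2.3 docker-compose build
TAG=v1.2.3 docker-compose push
TAG=v1.2.3 docker-compose up -d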

docker compose orphan containers warning

How do you deal with orphan containers when you have two independent projects and you want them to run at the same time, or at least to run docker-compose up -d without the --remove-orphans flag, when images are already built for another project?
The first docker-compose file:
version: '2'
services:
  applications:
    image: tianon/true
    volumes:
      - ../../:/var/www/vhosts/project1
  nginx:
    build: ./images/nginx
    image: project1/nginx:latest
    ports:
      - "80:80"
    volumes_from:
      - applications
    networks:
      appnet:
        aliases:
          - project1.app
          - admin.project1.app
  php:
    image: project1/php:latest
    ports:
      - "7778:7778"
    build:
      context: ./images/php
      dockerfile: Dockerfile
    volumes_from:
      - applications
    networks:
      - appnet
  mysql:
    image: project1/mysql:latest
    build: ./images/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - mysqldata:/var/lib/mysql
    networks:
      - appnet
    ports:
      - "33066:3306"
  workspace:
    image: project1/workspace:latest
    build:
      context: ./images/workspace
    volumes_from:
      - applications
    working_dir: /var/www/vhosts/project1
    networks:
      - appnet
networks:
  appnet:
    driver: "bridge"
volumes:
  mysqldata:
    driver: "local"
The second docker-compose file:
version: '2'
services:
  project2_applications:
    image: tianon/true
    volumes:
      - ../../:/var/www/vhosts/project2
  project2_nginx:
    build: ./images/nginx
    image: project2/nginx:latest
    ports:
      - "8080:80"
    volumes_from:
      - project2_applications
    networks:
      project2_appnet:
        aliases:
          - project2.app
          - admin.project2.app
  project2_php:
    image: project2/php:latest
    ports:
      - "7777:7777"
    build:
      context: ./images/php
      dockerfile: Dockerfile
    volumes_from:
      - project2_applications
    networks:
      - project2_appnet
  project2_mysql:
    image: project2/mysql:latest
    build: ./images/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - project2_mysqldata:/var/lib/mysql
    networks:
      - project2_appnet
    ports:
      - "33067:3306"
  project2_workspace:
    image: project2/workspace:latest
    build:
      context: ./images/workspace
    volumes_from:
      - project2_applications
    working_dir: /var/www/vhosts/videosite
    networks:
      - project2_appnet
networks:
  project2_appnet:
    driver: "bridge"
volumes:
  project2_mysqldata:
    driver: "local"
And now, when I have already built project1 and try to run docker-compose up -d for the second project, I see the warning:
WARNING: Found orphan containers (docker_workspace_1, docker_nginx_1, docker_php_1, docker_mysql_1, docker_memcached_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
I have a supposition that it's because container names for project1 should be more specific and I need to add some prefixes like I'm doing for project2, but project1 is in use by many other developers and I do not want to change it.
Is there any way to turn off orphan check?
And the second thing: it is just a warning message, but for some reason, after it appears, compose fails with the error:
ERROR: Encountered errors while bringing up the project.
To make it work I need to run docker-compose up -d --remove-orphans.
Compose uses the project name (which defaults to the basename of the project directory) internally to isolate projects from each other. The project name is used to create unique identifiers for all of the project's containers and other resources. For example, if your project name is myapp and it includes two services db and web, then Compose starts containers named myapp_db_1 and myapp_web_1 respectively.
You get the "Found orphan containers" warning because docker-compose detects some containers which belong to another project with the same name.
To prevent different projects from interfering with each other (and suppress the warning) you can set a custom project name by using any of the following options:
The -p command line option.
COMPOSE_PROJECT_NAME environment variable. This environment variable can also be set via an environment file (.env in the current working directory by default).
Top-level name element in the Compose file. Note: if you pass multiple files to docker-compose via the -f option, then the value from the last file will be used.
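For illustration, here is each of those options with an example project name (project2 is just a placeholder; the top-level name element requires a Compose version that supports it):
# 1. Command-line option:
docker-compose -p project2 up -d
# 2. Environment variable, for example in a .env file next to the compose file:
COMPOSE_PROJECT_NAME=project2
# 3. Top-level name element inside docker-compose.yml:
name: project2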
docker-compose takes the name of the directory it is in as the default project name.
You can set a different project name by using -p or --project-name.
https://docs.docker.com/compose/reference/#use--p-to-specify-a-project-name
I had a similar problem because my projects all had the docker/docker-compose.yml structure.
To build on other answers, I create a .env file for my docker-compose projects. I have a number of projects that all use a docker directory but are different projects.
Using docker-compose -p is a bit error-prone, so creating a .env file in the same directory as the docker-compose.yml:
-rw-rw-r-- 1 auser auser 1692 Aug 22 20:34 docker-compose.yml
-rw-rw-r-- 1 auser auser 31 Aug 22 20:44 .env
removes the overhead of having to remember -p.
In the .env file, I can now set the COMPOSE_PROJECT_NAME variable:
COMPOSE_PROJECT_NAME=myproject
On running:
docker-compose up -d
the COMPOSE_PROJECT_NAME is substituted without the use of -p.
Reference:
https://docs.docker.com/compose/env-file/
docker-compose up --remove-orphans
You can run this command to clean up the orphan containers, as specified in the warning.
If the orphaned containers are expected and not intended to be removed, you can set the COMPOSE_IGNORE_ORPHANS variable to true.
One option is to put it as a line into a .env file next to docker-compose.yml, like this:
COMPOSE_IGNORE_ORPHANS=True
Another option is to pass or set it as an environment variable.
sh:
COMPOSE_IGNORE_ORPHANS=True docker-compose up -d
or
export COMPOSE_IGNORE_ORPHANS=True
docker-compose up -d
cmd:
SET COMPOSE_IGNORE_ORPHANS=True&& docker-compose up -d
powershell:
$env:COMPOSE_IGNORE_ORPHANS = 'True'; & docker-compose up -d
TL;DR
You can also add a unique name: myproject to each of your compose files.
My journey
In case this helps anybody else scrounging around for help with the above issue (this is in support of the already good answers here):
I have several config files in the same directory
redis.yml
mariadb.yml
...
and I kept getting the same error about orphan containers when I ran
docker-compose -f <one of my configs>.yml up
As of now, you can simply put each yml file into a separate project. This is done using the command-line parameter "-p my_project_name", as has already been mentioned. BUT the name must be all lowercase!
This got me a little closer, but I also kept forgetting that to bring the container down using docker-compose I needed to include that parameter as well.
For example, to start the container:
docker-compose -p myproject -f redis.yml up -d
and to destroy the container:
docker-compose -p myproject -f redis.yml down
Today I found that I can simply add the name: bit into the yml config. Here is an example for redis:
version: '3.9'
name: redis
services:
  redis_0:
    ...
Now I can simply start the container with the following and don't have to worry about project names again:
docker-compose -f redis.yml <up/down>
This happens when your docker-compose file has been updated. I received a similar error on Docker startup and found out that another team member had updated the docker-compose.yml as part of a cleanup.
To fix this, I deleted the container group using the Delete button in Docker Desktop and started it again. That fixed the error for me.
As a complement to the existing answers: if you're using docker-compose with the -f option, to my surprise docker-compose will use the name of the parent folder of the first file passed via -f as the project name.
For example, assuming the following folder structure:
/
└── Users/
└── papb/
├── a.yml
└── foo/
└── b.yml
If you're in /Users and run docker-compose -f papb/a.yml -f papb/foo/b.yml:
The project name will be inferred as papb
Any relative paths you have in both files will be resolved against /Users/papb
If you're in /Users and run docker-compose -f papb/foo/b.yml -f papb/a.yml:
The project name will be inferred as foo
Any relative paths you have in both files will be resolved against /Users/papb/foo
If you're in /Users/papb and run docker-compose -f foo/b.yml -f a.yml:
The project name will be inferred as foo
Any relative paths you have in both files will be resolved against /Users/papb/foo
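If you rely on -f like this, passing an explicit project name avoids the surprise (a small sketch; myproject is just an example):
docker-compose -p myproject -f papb/a.yml -f papb/foo/b.yml up -d
The relative paths are still resolved against the first file's directory; only the project name changes.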

docker compose override a ports property instead of merging it

My docker compose configs look like this:
docker-compose.yml
version: '3.5'
services:
  nginx:
    ports:
      - 8080:8080
docker-compose.prod.yml
version: '3.5'
services:
  nginx:
    ports:
      - 80:80
Now, when I run the command docker-compose -f docker-compose.yml -f docker-compose.prod.yml up, nginx exposes two ports on the host machine, 8080 and 80, because Compose merges the ports properties:
version: '3.5'
services:
  nginx:
    ports:
      - 8080:8080
      - 80:80
Is there a way to override it? I want to expose only port 80.
This behaviour is documented at https://docs.docker.com/compose/extends/#adding-and-overriding-configuration
For the multi-value options ports, expose, external_links, dns, dns_search, and tmpfs, Compose concatenates both sets of values
Since the ports will be the concatenation of the ports in all your compose files, I would suggest creating a new docker-compose.dev.yml file which contains your development port mappings, removing them from the base docker-compose.yml file.
As Nikson says, you can name this docker-compose.override.yml to apply your development configuration automatically without chaining the docker-compose files. docker-compose.override.yml will not be applied if you manually specify another override file (e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml)
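A sketch of that layout might look like this (the image: nginx line is an assumption, since the question only shows the ports):
# docker-compose.yml (base, no ports)
version: '3.5'
services:
  nginx:
    image: nginx
# docker-compose.dev.yml (development ports only)
version: '3.5'
services:
  nginx:
    ports:
      - 8080:8080
Development then runs docker-compose -f docker-compose.yml -f docker-compose.dev.yml up (or names the dev file docker-compose.override.yml so plain docker-compose up picks it up), while production runs docker-compose -f docker-compose.yml -f docker-compose.prod.yml up and only port 80 is published.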
It isn't possible at the moment, but I found quite a good way to fix this issue using the yq command.
You need to remove the ports from the original file.
Example:
Be careful: this command will remove the nginx ports from your current docker-compose.yml (because of the -i option):
yq e -i 'del(.services.nginx.ports)' docker-compose.yml
You can execute this command in your deployment script or manually before running docker-compose up -d.
There's also an open issue on docker-compose that you may want to check once in a while.
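For example, a deployment-script sketch along those lines (assuming yq v4; docker-compose.deploy.yml is a hypothetical working copy so the original file stays untouched):
cp docker-compose.yml docker-compose.deploy.yml
yq e -i 'del(.services.nginx.ports)' docker-compose.deploy.yml
docker-compose -f docker-compose.deploy.yml -f docker-compose.prod.yml up -d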
Just keep the docker-compose.yml super simple and add the ports in another file docker-compose.develop.yml, then run it like docker-compose -f docker-compose.yml -f docker-compose.develop.yml up.
This way you can separate it from your docker-compose.override.yml file.
So you will have three files:
|- docker-compose.yml          # no ports specified
|- docker-compose.override.yml # ports 8080:8080
|- docker-compose.develop.yml  # ports 80:80
Refer to this post for a longer explanation: https://mindbyte.nl/2018/04/04/overwrite-ports-in-docker-compose.html
I've faced the same problem. The proposed solution with docker-compose.override.yml sounds pretty good and is also the official one.
However, for some of my own projects I've applied the ERB template engine so that a docker-compose.yml.erb file compiles for multiple environments. In short, I use:
COMPOSE_TEMPLATE_ENV=production erb docker-compose.yml.erb > docker-compose.yml
COMPOSE_TEMPLATE_ENV=production erb docker-compose.yml.erb > docker-compose-production.yml
And then I can use ENV['COMPOSE_TEMPLATE_ENV'] in my template, along with the full ERB syntax, so there is only one file to configure and no worries about pipelining the files properly. I've also written a short post about this approach.
Use a docker-compose.override.yml file for overriding properties, giving a clear separation of the properties that need to be overridden.
docker-compose.override.yml
Example:
version: '3.5'
services:
  nginx:
    ports:
      - 80:80
By default:
docker-compose up
will use your docker-compose.yml and docker-compose.override.yml files.
Reference: the docker-compose documentation on using multiple Compose files.
