I am starting out with Docker and I do not understand the difference between volumes and configs.
https://docs.docker.com/compose/compose-file/#volumes-top-level-element
https://docs.docker.com/compose/compose-file/#configs-top-level-element (added in version 3.3 of docker compose)
Should the configs property be used if, for example, a configuration file is shared between different services? In which cases does it apply?
For example, to share the Apache document root (/usr/local/apache2/htdocs) with volumes:
version: '3.8'
services:
  apache:
    image: httpd:2.4
    restart: always
    ports:
      - 8000:80
    volumes:
      - ./:/usr/local/apache2/htdocs
      - php-socket:/run/php
    depends_on:
      - php-fpm
    networks:
      - code
  php-fpm:
    image: php:7.4-fpm
    restart: always
    ports:
      - 9000:9000
    volumes:
      - ./:/usr/local/apache2/htdocs
      - ./.docker/php-fpm/zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
      - php-socket:/run/php
    networks:
      - code
volumes:
  php-socket:
networks:
  code:
What is the difference with the configs property?
version: '3.8'
services:
  apache:
    image: httpd:2.4
    restart: always
    ports:
      - 8000:80
    volumes:
      - php-socket:/run/php
    configs:
      - source: apache-www
        target: /usr/local/apache2/htdocs
    depends_on:
      - php-fpm
    networks:
      - code
  php-fpm:
    image: php:7.4-fpm
    restart: always
    ports:
      - 9000:9000
    volumes:
      - ./.docker/php-fpm/zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
      - php-socket:/run/php
    configs:
      - source: apache-www
        target: /usr/local/apache2/htdocs
    networks:
      - code
volumes:
  php-socket:
configs:
  apache-www:
    file: ./
networks:
  code:
The two examples above work, but I don't understand the difference between volumes and configs. Can anyone explain it to me? Thanks!
Configs were added for Swarm mode. They are immutable objects stored in the swarm that get pushed to a worker node when needed and mounted as a file in the container. They solve the problem that the file you want to mount may not exist on the worker node in the cluster.
If you aren't using Swarm mode, you likely don't need any of the v3 syntax and can stick to either v2 or the compose spec and mount the file directly as a volume. You'll find that newer versions of Compose have added compatibility features to handle compose files written for Swarm, effectively treating configs as volume mounts.
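For reference, a minimal sketch of how a config is typically used with a swarm stack (the config name and file path here are hypothetical):
version: "3.3"
services:
  apache:
    image: httpd:2.4
    configs:
      - source: httpd-conf
        target: /usr/local/apache2/conf/httpd.conf
configs:
  httpd-conf:
    file: ./httpd.conf
Deployed with docker stack deploy -c docker-compose.yml mystack, the file's contents are stored in the swarm itself and delivered to whichever node ends up running the task.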
It is the same thing.
The configs attribute is just syntactic sugar; it does the same thing as volumes.
Related
I have the following docker-compose configuration:
version: '2'
services:
  nginx:
    image: 'nginx:latest'
    expose:
      - '80'
      - '8080'
    container_name: nginx
    ports:
      - '80:80'
      - '8080:8080'
    volumes:
      - '/home/ubuntu/nginx.conf:/etc/nginx/nginx.conf'
    networks:
      - default
    restart: always
  inmates:
    image: 'xxx/inmates:mysql'
    container_name: 'inmates'
    expose:
      - '3000'
    env_file: './inmates.env'
    volumes:
      - inmates_documents_images:/data
      - inmates_logs:/logs.log
    networks:
      - default
    restart: always
  we19:
    image: 'xxx/we19:dev'
    container_name: 'we19'
    expose:
      - '3000'
    env_file: './we19.env'
    volumes:
      - we19_logs:/logs.log
    networks:
      - default
    restart: always
  desktop:
    image: 'xxx/desktop:dev'
    container_name: 'desktop'
    expose:
      - '3000'
    env_file: './desktop.env'
    volumes:
      - desktop_logs:/logs.log
    networks:
      - default
    restart: always
volumes:
  inmates_documents_images:
  inmates_logs:
  desktop_logs:
  we19_logs:
Assume I did docker-compose up -d --build.
Now the 4 containers (services) are running.
Now I want to update the ./desktop.env file with new content. Is there any way to recreate only the desktop container with the new env file, or is docker-compose restart necessary?
Basically, I'm trying to restart only the desktop container with the new env file while keeping the other three containers up and running without restarting them.
Extract from docker-compose up --help
[...]
If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
[...]
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]
[...]
The following command should do the trick in your case.
docker-compose up -d desktop
If not, see the documentation for other options you can use to meet your exact requirement (e.g. --force-recreate, --renew-anon-volumes, ...)
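For example, if Compose does not pick up the change on its own, forcing recreation of just that one service should work (a sketch, using the flags mentioned above):
docker-compose up -d --force-recreate desktop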
I am trying to learn Docker by reading the official documentation. I am on the Use Compose to develop locally task. Trying to compose MongoDB, I got an error:
The Compose file './docker-compose.dev.yml' is invalid because:
Unsupported config option for services.volumes: 'mongodb'
Here is the docker-compose.dev.yml file:
version: '3.8'
services:
  notes:
    build:
      context: .
    ports:
      - 8080:8080
      - 9229:9229
    environment:
      - SERVER_PORT=8080
      - DATABASE_CONNECTIONSTRING=mongodb://mongo:27017/notes
    volumes:
      - ./:/code
    command: npm run debug
  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
  volumes:
    mongodb:
    mongodb_config:
How can I make it work?
That's a small mistake on your part. The volumes section of the docker-compose.yaml file relates to all services, not to one in particular. Because of how YAML files are formatted, the indentation level matters a lot: in your example you didn't use the volumes parameter; instead you defined a service called volumes, and services don't have a parameter called mongodb.
You simply have to decrease the indentation level of the last three lines and it will work just fine:
version: '3.8'
services:
  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config:
I already have a docker-compose.yml file like this:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: dl-memcached
redis:
image: redis:alpine
container_name: dl-redis
mysql:
image: mysql:5.7.21
container_name: dl-mysql
restart: unless-stopped
working_dir: /application
environment:
- MYSQL_DATABASE=dldl
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./../:/application
ports:
- "8007:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: dl-phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=dl-mysql
- PMA_PORT=3306
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
restart: always
ports:
- 8002:80
volumes:
- /application
links:
- mysql
elasticsearch:
build: phpdocker/elasticsearch
container_name: dl-es
volumes:
- ./phpdocker/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- "8003:9200"
webserver:
image: nginx:alpine
container_name: dl-webserver
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./logs:/var/log/nginx:delegated
ports:
- "9003:80"
php-fpm:
build: phpdocker/php-fpm
container_name: dl-php-fpm
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
- ./../docker/php-fpm/certs/store_stock/:/usr/local/share/ca-certificates/
- ./logs:/var/log:delegated # nginx logs
- /application/var/cache
environment:
XDEBUG_CONFIG: remote_host=host.docker.internal
PHP_IDE_CONFIG: "serverName=dl"
node:
build:
dockerfile: dl/phpdocker/node/Dockerfile
context: ./../
container_name: dl-node
working_dir: /application
ports:
- "8008:3000"
volumes:
- ./../:/application:cached
tty: true
My goal is to have two isolated environments working at the same time on the same server with the same docker-compose file. I wonder if that's possible?
I want to be able to stop and update one environment while the other one is still running and receiving traffic.
Maybe I need another approach in my case?
There are a couple of problems with what you're trying to do. If your goal is to put things behind a load balancer, I think that rather than trying to start multiple instances of your project, a better solution would be to use the scaling features available to docker-compose. In particular, if your goal is to put some services behind a load balancer, you probably don't want multiple instances of things like your database.
If you combine this with a dynamic front-end proxy like Traefik, you can make the configuration largely automatic.
Consider a very simple example consisting of a backend container running a simple webserver and a traefik frontend:
---
version: "3"
services:
  webserver:
    build:
      context: web
    labels:
      traefik.enable: true
      traefik.port: 80
      traefik.frontend.rule: "PathPrefix:/"
  frontend:
    image: traefik
    command:
      - --api
      - --docker
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"
If I start it like this, I get a single backend and a single frontend:
docker-compose up
But I can also ask docker-compose to scale out the backend:
docker-compose up --scale webserver=3
In this case, I get a single frontend and three backend servers. Traefik will automatically discover the backends and will round-robin connections between them. You can download this example and try it out.
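As a rough sanity check (assuming the backend responds with something that identifies the container, such as its hostname), repeated requests to the frontend should rotate across the backends:
docker-compose up -d --scale webserver=3
for i in 1 2 3; do curl -s http://localhost/; done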
Caveats
There are a few aspects of your configuration that would need to change in order to make this work (and in fact, you would need to change them even if you were to create multiple instances of your project as you have proposed in your question).
Conflicting paths
Take for example the configuration of your webserver container:
volumes:
  - ./logs:/var/log/nginx:delegated
If you start two instances of this service, both containers will mount ./logs on /var/log/nginx. If they both attempt to write to /var/log/nginx/access.log, you're going to have problems.
The easiest solution here is to avoid bind mounts for things like log directories (and any other directories to which you will be writing), and instead use named docker volumes.
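A sketch of that change for the webserver service, using a hypothetical named volume called nginx-logs (each compose project gets its own prefixed copy of the volume, so separate instances no longer collide on a host path):
version: "3"
services:
  webserver:
    image: nginx:alpine
    volumes:
      - nginx-logs:/var/log/nginx
volumes:
  nginx-logs: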
Hardcoding container names
In some places, you are hardcoding the container name, like this:
mysql:
  image: mysql:5.7.21
  container_name: dl-mysql
This will cause problems if you attempt to start multiple instances of this project or multiple instances of the mysql container. Don't statically set the container name.
Deprecated links syntax
Your configuration is using the deprecated links syntax:
links:
  - mysql
Don't do that. In modern docker, containers on the same network can simply refer to each other by name. In other words, if your compose configuration has:
mysql:
  image: mysql:5.7.21
  restart: unless-stopped
  working_dir: /application
  environment:
    - MYSQL_DATABASE=dldl
    - MYSQL_USER=docker
    - MYSQL_PASSWORD=docker
    - MYSQL_ROOT_PASSWORD=docker
  volumes:
    - ./../:/application
  ports:
    - "8007:3306"
Other containers in your compose stack can simply use the hostname mysql to refer to this service.
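For example, a hypothetical application container in the same stack could be configured with nothing more than the service name in its connection string (DATABASE_URL is an illustrative variable name):
environment:
  - DATABASE_URL=mysql://docker:docker@mysql:3306/dldl
Note that inside the network you use the container port (3306), not the published host port (8007).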
You won't be able to run the same compose file twice on one host without changing the port mappings, because that would cause a port conflict. I'd recommend creating a base compose file and using extends to override the port mappings for each environment.
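A sketch of that layout (file names are hypothetical; extends is supported in the v2 file format and again in recent Compose releases). Keep the ports out of the base file, then add them per environment:
# docker-compose.env1.yml
version: "2"
services:
  webserver:
    extends:
      file: docker-compose.base.yml
      service: webserver
    ports:
      - "9003:80"
Each environment can then run under its own project name, e.g. docker-compose -p env1 -f docker-compose.env1.yml up -d.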
I am deploying a small stack onto a UCP.
One of the issues I am facing is naming the container for service1.
I need a static name for the container, since it's used by mycustomimageforservice2.
The container_name option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
I have to use version: 3 compose files.
version: "3"
services:
service1:
image: dockerhub/service1
ports:
- "8080:8080"
container_name: service1container
networks:
- mynet
service2:
image: myrepo/mycustomimageforservice2
networks:
- mynet
restart: on-failure
networks:
mynet:
What are my options?
You can't force a container name in compose, as it's designed to allow things like scaling a service (by updating the number of replicas), and that wouldn't work with fixed names.
One service can access another using its service name (http://serviceName:internalServicePort) instead, and Docker will do the rest for you (such as resolving it to an actual container address and load balancing between replicas).
This works with the default network type for stacks, which is overlay.
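For example, from inside the service2 container, service1 is reachable by its service name alone (a hypothetical check):
curl http://service1:8080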
You can address your problem by linking services in the docker-compose.yml file.
Something like:
version: "3"
services:
service1:
image: dockerhub/service1
ports:
- "8080:8080"
networks:
- mynet
service2:
image: myrepo/mycustomimageforservice2
networks:
- mynet
restart: on-failure
links:
- service1
networks:
mynet:
Using the links argument in your docker-compose.yml allows one service to access another by name; in this case, service2 establishes a connection to service1 thanks to the links parameter. I'm not sure why you use a network, but with the links parameter it would not be necessary.
The container_name option is ignored when deploying a stack in swarm mode, since container names need to be unique.
https://docs.docker.com/compose/compose-file/#container_name
If you do have to use version 3 but don't work with swarms, you can add --compatibility to your commands.
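For example (a sketch): docker-compose --compatibility up -d will attempt to translate the swarm-only keys in a v3 file into their non-swarm equivalents.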
Specify a custom container name, rather than a generated default name.
container_name: my-web-container
See it in the full docker-compose file:
version: '3.9'
services:
  node-ecom:
    build: .
    image: "node-ecom-image:1.0.0"
    container_name: my-web-container
    ports:
      - "4000:3000"
    volumes:
      - ./:/app:ro
      - /app/node_modules
      - /config/.env
    env_file:
      - ./config/.env
I am looking for guidance on the cleanest way to write a docker-compose.yml version 2 file that:
Has container state clearly separated from the container.
Has container state mounted to the host for simplicity (a single data location: simply back up /data on the host and you're done; I'm open to being wrong about this, see the questions below).
The app is a classic web app with MySQL and Redis for the backend, and a webserver behind a proxy that serves static assets directly. Some details like depends_on, environment variables, and the networks are intentionally left out.
Here is what I use at the moment:
version: "2"
services:
proxy:
build:
context: ./apps/nginx
ports:
- "80:80"
- "443:443"
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/web/assets:/var/www/assets:ro
- ./data/web/puma:/var/run/puma
web:
build:
context: ./apps/rails
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/web/assets:/srv/app/public/assets
- ./data/web/puma:/var/run/puma
db:
image: mysql:5.7
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/mysql:/var/lib/mysql
redis:
image: redis
volumes:
- /etc/localtime:/etc/localtime:ro
- ./data/redis:/data
Here is what I plan to use for the next release:
version: "2"
services:
proxy:
build:
context: ./apps/nginx
ports:
- "80:80"
- "443:443"
volumes_from:
- localtime
- web-assets-data:ro
- web-puma-data
web:
build:
context: ./apps/rails
volumes_from:
- localtime
- web-assets-data
- web-puma-data
db:
image: mysql:5.7
volumes_from:
- localtime
- db-data
redis:
image: redis
volumes_from:
- localtime
- redis-data
web-assets-data:
image: ubuntu:14.04
volumes:
- ./data/web/assets:/srv/app/public/assets
web-puma-data:
image: ubuntu:14.04
volumes:
- ./data/web/puma:/var/run/puma
db-data:
image: ubuntu:14.04
volumes:
- ./data/mysql:/var/lib/mysql
redis-data:
image: ubuntu:14.04
volumes:
- ./data/redis:/data
localtime:
image: ubuntu:14.04
volumes:
- /etc/localtime:/etc/localtime:ro
I think the benefits of the new version are:
It's clearer where the data is.
It's easier to share data among multiple containers (no need to remember the exact paths like in the current version).
So, my questions are:
Is it problematic to use different images between the container and its container-data? For example, should db-data use mysql:5.7 instead of ubuntu:14.04?
Is it correct to say that there's no way of having "data stored at a specific path on the host" with a top-level volumes: key?
What are the advantages and inconveniences of using a named volume (with a top-level "volumes" key)? Should I prefer a named volume over a host mount? Workflow comparisons would be nice.
Is it problematic to use different images between the container and its container-data?
Not at all, this is normal.
Is it correct to say that there's no way of having "data stored at a specific path on the host" with a top-level volumes: key?
Correct. The top-level volumes key is for named volumes, but you can't name host volumes.
What are the advantages and inconveniences of using a named volume (with a top-level "volumes" key)? Should I prefer using a named volume over a host mount? Workflow comparisons would be nice.
Named volumes let you use volume drivers, so you could have the data stored somewhere other than the local filesystem. However, named volumes need to be initialized with data, so you might have to add a script or something to do so.
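A short sketch contrasting the two forms (the service and volume names are illustrative):
version: "2"
services:
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql        # named volume, created and managed by docker
      # - ./data/mysql:/var/lib/mysql # host mount: the path on the host is explicit
volumes:
  db-data:
With the named volume, docker volume ls and docker volume inspect show where the data lives; with the host mount, you choose the path yourself and back it up directly.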