Previously I used volumes_from to mount multiple volume locations to multiple containers, like so:
app:
  image: mageinferno/magento2-nginx:1.11-1
  links:
    - phpfpm
  volumes_from:
    - appdata
  ports:
    - 8000:80
phpfpm:
  image: mageinferno/magento2-php:7.0-fpm-1
  links:
    - db
  volumes_from:
    - appdata
appdata:
  image: tianon/true
  volumes:
    - /var/www/html
    - ~/.composer:/var/www/.composer
    - ./html/app/code:/var/www/html/app/code
    - ./html/app/design:/var/www/html/app/design
However, in docker-compose version 3 when using native volume mounts, volumes_from is not available, which leads me to do something like this:
version: "3"
services:
  app:
    image: mageinferno/magento2-nginx:1.11-1
    links:
      - phpfpm
    volumes:
      - appdata:/var/www/html
      - ~/.composer:/var/www/.composer
      - ./html/app/code:/var/www/html/app/code
      - ./html/app/design:/var/www/html/app/design
    ports:
      - 8000:80
  phpfpm:
    image: mageinferno/magento2-php:7.0-fpm-1
    links:
      - db
    volumes:
      - appdata:/var/www/html
      - ~/.composer:/var/www/.composer
      - ./html/app/code:/var/www/html/app/code
      - ./html/app/design:/var/www/html/app/design
Is there any way I can reference the same group of volume mounts to multiple services, without defining them twice?
YAML supports "anchors" for re-using bits (from https://learnxinyminutes.com/docs/yaml/):
# YAML also has a handy feature called 'anchors', which let you easily duplicate
# content across your document. Both of these keys will have the same value:
anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name

# Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name

foo: &foo
  <<: *base
  age: 10

bar: &bar
  <<: *base
  age: 20
Here is a docker-compose version 3 example where an anchor is used for the environment variables.
The values are set the first time they are used, and then referenced in any additional services that use the same environment variables.
Note the use of &environment in setting the anchor, and *environment in referencing it.
version: '3'
services:
  ui:
    build:
      context: ./ui
    ports:
      - 80:80
      - 8080:8080
    networks:
      - cluster-net
    environment: &environment
      A_VAR: 'first-var'
      ANOTHER_VAR: 'second-var'
  api:
    build:
      context: ./api
    networks:
      - cluster-net
    environment: *environment
networks:
  cluster-net:
    driver: bridge
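The same anchor trick applies to the repeated volume lists from the original question. A sketch, assuming Compose file format 3.4+ (which permits top-level x-prefixed extension fields; the name x-appvolumes is illustrative):

```yaml
version: "3.4"

# Define the shared volume list once and anchor it.
x-appvolumes: &appvolumes
  - appdata:/var/www/html
  - ~/.composer:/var/www/.composer
  - ./html/app/code:/var/www/html/app/code
  - ./html/app/design:/var/www/html/app/design

services:
  app:
    image: mageinferno/magento2-nginx:1.11-1
    links:
      - phpfpm
    volumes: *appvolumes  # re-use the anchored list
    ports:
      - 8000:80
  phpfpm:
    image: mageinferno/magento2-php:7.0-fpm-1
    links:
      - db
    volumes: *appvolumes

volumes:
  appdata:
```

Note the anchor replaces the whole volumes list, so any per-service extra mount would still need its own list.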
Related
I have a docker compose similar to this one
services:
  service-a:
    image: "service-a"
    volumes:
      - /home/user/data:/data
  service-b:
    image: "service-b"
    volumes:
      - /home/user/data:/data
  service-c:
    image: "service-c"
    volumes:
      - /home/user/data:/data
I'd like to use a named volume for all 3 to simplify it and make it more manageable, like this
services:
  service-a:
    image: "service-a"
    volumes:
      - namedvolume:/data
  service-b:
    image: "service-b"
    volumes:
      - namedvolume:/data
  service-c:
    image: "service-c"
    volumes:
      - namedvolume:/data
volumes:
  namedvolume:
The thing is that I need to insert some files into namedvolume, so I'd like to have it bound to a local directory like it was before. How can I achieve this?
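One common way to do this (a sketch, not from the thread): declare the named volume with the local driver's bind options, so the volume is backed by the host directory. This assumes /home/user/data already exists on the host before the volume is first created:

```yaml
volumes:
  namedvolume:
    driver: local
    driver_opts:
      type: none   # no filesystem type; use a bind mount
      o: bind      # mount option: bind to an existing directory
      device: /home/user/data
```

The services keep referencing namedvolume:/data unchanged; the files placed in /home/user/data appear inside the volume.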
I have 2 services which use the same image:. What can I do to force docker-compose to generate 2 separate containers?
Thanks!
EDIT:
Full docker-compose:
version: "3.5"
services:
  database:
    container_name: proj-database
    env_file: ../orm/.env.${PROJ_ENV}
    image: postgres
    restart: always
    ports:
      - 5432:5432
    networks:
      - proj
  api:
    image: golang:1.17
    container_name: proj-api
    env_file: ../cryptoModuleAPI/.env.${PROJ_ENV}
    restart: always
    build: ../cryptoModuleAPI/
    links:
      - database:database
    ports:
      - 8080:8080
    volumes:
      - ../cryptoModuleAPI:/proj/api
      - ../orm:/proj/orm
    networks:
      - proj
  admin:
    image: golang:1.17
    container_name: proj-admin
    env_file: ../admin/.env.${PROJ_ENV}
    restart: always
    build: ../admin/
    links:
      - database:database
    ports:
      - 8081:8081
    volumes:
      - ../admin:/proj/admin
      - ../orm:/proj/orm
    networks:
      - proj
networks:
  proj:
    external:
      name: proj
I just run with docker-compose up
You misunderstand how the build and image directives work when used together.
Paraphrasing the docs (https://docs.docker.com/compose/compose-file/compose-file-v3/#build): if you specify image as well as build, then Compose names the built image with the value of the image directive.
Compose is going to build two images, both named the same thing. Only one will survive. I'm surprised your app spins up at all!
Provide a different name for the image directive of each service, or leave it out entirely.
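A minimal sketch of that fix for the file above; the image names proj-api and proj-admin are illustrative, not from the question:

```yaml
services:
  api:
    build: ../cryptoModuleAPI/
    image: proj-api:latest    # distinct name for this built image
    # ...other keys as in the original file
  admin:
    build: ../admin/
    image: proj-admin:latest  # distinct name, no longer clashes with api
    # ...other keys as in the original file
```

With distinct names, each docker-compose build produces its own image and neither overwrites the other.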
I am fairly new to Docker. I have been having issues for days now setting up docker-machine to share local files on my Windows PC through the use of volumes.
Basically, I am using this GitHub repo as a starting point: https://github.com/koutsoumposval/laravel-microservices. I noticed that when I do not use docker-machine, the files are shared using the 'volumes' configuration in my docker-compose file.
However, when I host the same project on the docker machine, the files do not show. I can see the top-level folders when I ssh into the docker machine, but they are all empty.
Also, I was able to get the local files to show up in the docker machine by using the 'COPY' directive in the Dockerfile, but I am not comfortable with this, as changes made to the local files are not automatically reflected in the docker machine.
So my question is: how can I synchronize the local files with the docker machine, since the 'volumes' directive is obviously not working? Also, please point me in the right direction if I am thinking about this in the wrong way.
DOCKER-COMPOSE.YML
version: '3'
services:
  proxy:
    image: traefik
    command: --web --docker --docker.domain=lm.local --docker.exposedbydefault=false --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
  order:
    build:
      context: order/php-apache
    volumes:
      - ../order:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:order.lm.local"
      - "traefik.backend=order"
    networks:
      - webgateway
      - web
    restart: always
  user:
    build:
      context: user/php-apache
    volumes:
      - ../user:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:user.lm.local"
      - "traefik.backend=user"
    networks:
      - webgateway
      - web
    restart: always
  inventory:
    build:
      context: inventory/php-apache
    volumes:
      - ../inventory:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:inventory.lm.local"
      - "traefik.backend=inventory"
    networks:
      - webgateway
      - web
    restart: always
  api:
    build:
      context: api-gateway/php-apache
    volumes:
      - ../api-gateway:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api.lm.local"
      - "traefik.backend=api"
    networks:
      - webgateway
      - web
    restart: always
networks:
  webgateway:
    driver: bridge
  web:
    external:
      name: traefik_webgateway
The image below shows the errors I am experiencing as a result of the local files not being copied to the virtual machine. The 'html' folder, which is supposed to contain the full microservice repo, is empty.
I want to know the equivalent of the configuration below for version 3 of docker-compose.yml. volumes_from is no longer valid, so am I supposed to skip the data volume and replace it with a top-level volumes key?
version: '2'
services:
  php:
    build: ./docker-files/php-fpm/.
    volumes_from:
      - data
    working_dir: /code
    links:
      - mysql
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - data
    links:
      - php
  data:
    image: tianon/true
    volumes:
      - .:/code
By default, named volumes let you share data between containers, but there can be trouble keeping the data in the same place on the host machine after containers are recreated. The local-persist Docker plugin can fix that.
To migrate to version 3 you need to:
1) install the local-persist Docker plugin (if you want to store volume data in a particular place on the host machine)
2) modify docker-compose.yml
version: '3'
services:
  php:
    build: ./docker-files/php-fpm/.
    volumes:
      - data:/code
    working_dir: /code
    links:
      - mysql
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - data:/code
    links:
      - php
  data:
    image: tianon/true
    volumes:
      - data:/code

# If you use the local-persist plugin
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /path/on/host/machine/

# Or, if you don't want to use the local-persist plugin
volumes:
  data:
You can also declare the volume as external, meaning it is created outside Compose on the host machine:
volumes:
  data:
    external: true  # Compose uses a pre-existing volume on the host machine
But you can't specify a path on the host machine for this volume.
I am looking for guidance on the cleanest way to make a docker-compose.yml version 2 that:
Has container state clearly separated from the container.
Has container state mounted to the host for simplicity (single data point: simply back up /data on the host and you're done; I'm open to being wrong about this, see questions below).
The app is a classic web app with MySQL and Redis databases for the backend, and with a webserver behind a proxy that serves static assets directly. Some details like depends_on, environment variables and the networks are intentionally left out.
Here is what I use at the moment:
version: "2"
services:
  proxy:
    build:
      context: ./apps/nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/web/assets:/var/www/assets:ro
      - ./data/web/puma:/var/run/puma
  web:
    build:
      context: ./apps/rails
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/web/assets:/srv/app/public/assets
      - ./data/web/puma:/var/run/puma
  db:
    image: mysql:5.7
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/mysql:/var/lib/mysql
  redis:
    image: redis
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/redis:/data
Here is what I plan to use for the next release:
version: "2"
services:
  proxy:
    build:
      context: ./apps/nginx
    ports:
      - "80:80"
      - "443:443"
    volumes_from:
      - localtime
      - web-assets-data:ro
      - web-puma-data
  web:
    build:
      context: ./apps/rails
    volumes_from:
      - localtime
      - web-assets-data
      - web-puma-data
  db:
    image: mysql:5.7
    volumes_from:
      - localtime
      - db-data
  redis:
    image: redis
    volumes_from:
      - localtime
      - redis-data
  web-assets-data:
    image: ubuntu:14.04
    volumes:
      - ./data/web/assets:/srv/app/public/assets
  web-puma-data:
    image: ubuntu:14.04
    volumes:
      - ./data/web/puma:/var/run/puma
  db-data:
    image: ubuntu:14.04
    volumes:
      - ./data/mysql:/var/lib/mysql
  redis-data:
    image: ubuntu:14.04
    volumes:
      - ./data/redis:/data
  localtime:
    image: ubuntu:14.04
    volumes:
      - /etc/localtime:/etc/localtime:ro
I think the benefits of the new version are:
It's clearer where the data is.
It's easier to share data among multiple containers (no need to remember the exact paths like in the current version).
So, my questions are:
Is it problematic to use different images between the container and its data container? For example, should db-data use mysql:5.7 instead of ubuntu:14.04?
Is it correct to say that there's no way of having "data stored at a specific path on the host" with a top-level volumes: key?
What are the advantages and inconveniences of using a named volume (with a top-level "volumes" key)? Should I prefer a named volume over a host mount? Workflow comparisons would be nice.
Is it problematic to use different images between the container and its data container?
Not at all, this is normal.
Is it correct to say that there's no way of having "data stored at a specific path on the host" with a top level volumes: key?
Correct. The top-level volumes key is for named volumes; you can't name host volumes.
What are the advantages and inconveniences of using a named volume (with a top-level "volumes" key)? Should I prefer using a named volume over a host mount? Workflow comparisons would be nice.
Named volumes let you use volume drivers, so you could have the data stored somewhere other than the local filesystem. However named volumes need to be initialized with data, so you might have to add a script or something to do so.
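For comparison, a minimal sketch of the db service from the question converted to a named volume with a top-level volumes key (v2 syntax; the db-data name matches the question's data container):

```yaml
version: "2"
services:
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql  # named volume instead of a host path
volumes:
  db-data:  # managed by Docker; inspect its location with `docker volume inspect`
```

The trade-off is exactly the one discussed above: Docker manages where the data lives, so backing up /data on the host no longer works directly, but volume drivers become available.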