How to apply the same volumes to multiple docker compose services? - docker

Let's suppose there are two services, each with several volumes defined, and most of those volumes are used by both services:
version: '3'
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./packages:/packages
      - ./node_modules:/node_modules
      - ./services/service1:/services/service1
    command: yarn service1:start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./packages:/packages
      - ./node_modules:/node_modules
      - ./services/service2:/services/service2
    command: yarn service2:start
Is there a way to prevent this duplication?
I would love to do something like this:
version: '3'
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - myVolumeList
      - ./services/service1:/services/service1
    command: yarn start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - myVolumeList
      - ./services/service2:/services/service2
    command: yarn start
myVolumeList:
  - ./package.json:/package.json
  - ./tsconfig.json:/tsconfig.json
  - ./packages:/packages
  - ./node_modules:/node_modules
Edit: I use docker compose for local development only. Bind mounts are great for me because changing source code files will automatically restart my services, so copying files once isn't enough.
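For what it's worth, Compose file format 3.4+ supports something close to the wished-for syntax via YAML anchors inside an extension field (Compose ignores top-level keys starting with `x-`). This is only a sketch against the file layout from the question; each shared mount string gets an anchor that the services alias:

```yaml
version: '3.4'

# Extension field: Compose ignores top-level "x-" keys, so it can
# hold anchors (&name) that the services below alias (*name).
x-common-volumes:
  - &vol-pkg ./package.json:/package.json
  - &vol-tsc ./tsconfig.json:/tsconfig.json
  - &vol-pkgs ./packages:/packages
  - &vol-mods ./node_modules:/node_modules

services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - *vol-pkg
      - *vol-tsc
      - *vol-pkgs
      - *vol-mods
      - ./services/service1:/services/service1
    command: yarn service1:start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - *vol-pkg
      - *vol-tsc
      - *vol-pkgs
      - *vol-mods
      - ./services/service2:/services/service2
    command: yarn service2:start
```

You still list one alias per shared mount (YAML has no way to splice a whole list into another list), but the paths themselves are defined exactly once.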

The code for your application should generally be in a Docker image. You can launch multiple containers from the same image, possibly with different command:. For example, you might write a Dockerfile like:
FROM node:lts-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY ./ ./
CMD yarn start
Having described this image, you can reference it in the docker-compose.yml, overriding the command: for each service:
version: '3'
services:
  service1:
    build: .
    command: 'yarn service1:start'
  service2:
    build: .
    command: 'yarn service2:start'
(Compose will probably try to build a separate image for each service, but because of Docker layer caching, "building" the service2 image will run very quickly and wind up with a second tag on the same image.)
This setup needs no bind-mounts at all, and if you push the built images to a Docker registry, you can run them on a system without the application code or even Node available.

Natively, you can use a shared named volume; maybe this solves your problem:
version: "3"
services:
  srv1:
    image: someimage
    volumes:
      - data:/data
  srv2:
    image: someimage
    volumes:
      - data:/data
volumes:
  data:
There's a plugin, https://github.com/MatchbookLab/local-persist (read it before use!), that lets you change the volume mountpoint.
Basically, install it: curl -fsSL https://raw.githubusercontent.com/MatchbookLab/local-persist/master/scripts/install.sh | sudo bash
Then create a volume:
docker volume create -d local-persist -o mountpoint=/data/images --name=images
Then use as many containers as you want:
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
If you want to use docker-compose, here's an example:
version: '3'
services:
  one:
    image: alpine
    working_dir: /one/
    command: sleep 600
    volumes:
      - data:/one/
  two:
    image: alpine
    working_dir: /two/
    command: sleep 600
    volumes:
      - data:/two/
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
Almost the same question here: docker volume custom mount point
This only works with docker-compose file version '2':
version: '2'
services:
  srv1:
    image: sometag
    volumes_from:
      - data
  srv2:
    image: sometag
    volumes_from:
      - data
  data:
    image: sometag
    volumes:
      - ./code-in-host:/code

Related

Can't install packages in docker container using Dockerfile

I'd like to install a package in a docker image, via a Dockerfile.
docker-compose.yml:
version: "3.5"
services:
  transmission:
    build:
      context: .
      dockerfile: Dockerfile
    image: ghcr.io/linuxserver/transmission
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - ./config/public:/config
      - /data:/data
    ports:
      - 60020:60020
      - 60010:60010
      - 60010:60010/udp
    restart: unless-stopped
    network_mode: host
Dockerfile:
RUN apk update
RUN apk add --no-cache flac
In the Dockerfile, I specify that I'd like to install the flac package.
After that I run docker-compose up -d, and sudo docker exec -it transmission bash to check whether it's present, but it's not.
What am I doing wrong?
Your Dockerfile isn't valid (if you've posted the whole file): it has no FROM line. You've also specified both build: and image: in your docker-compose file; that combination is used when you want to build an image and give it that tag once built.
What I think you're trying to accomplish is to add flac to the transmission image. To do that, you'd create a Dockerfile like this
FROM ghcr.io/linuxserver/transmission
RUN apk update
RUN apk add --no-cache flac
Then in your docker-compose file, you remove the image specification like this
version: "3.5"
services:
  transmission:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: transmission
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - ./config/public:/config
      - /data:/data
    ports:
      - 60020:60020
      - 60010:60010
      - 60010:60010/udp
    restart: unless-stopped
    network_mode: host

How to swap env file for another docker service

I have a docker-compose.yml
services:
  nextjs:
    container_name: next_app
    build:
      context: ./
    restart: on-failure
    command: npm run dev
    volumes:
      - ./:/app
      - /app/node_modules
      - /app/.next
    ports:
      - "3000:3000"
  cypress:
    image: "cypress/included:9.4.1"
    depends_on:
      - next_app
    environment:
      - CYPRESS_baseUrl=http://nextjs:3000
    working_dir: /e2e
    volumes:
      - ./e2e:/e2e
I want to change the env_file for next_app from the cypress service. I found a solution like this:
cypress:
  image: "cypress/included:9.4.1"
  depends_on:
    - next_app
  environment:
    - CYPRESS_baseUrl=http://nextjs:3000
  working_dir: /e2e
  volumes:
    - ./e2e:/e2e
  next_app:
    env_file: .env.test
But this solution does not work. Is it even possible?
Try something like cp .env #docker/.env
No. In Compose (or Docker, or even more generally in Linux/Unix) there is no way for one container (process) to specify environment variables for another.
You can think of a docker-compose.yml file as a set of instructions only for running containers. If you need a specific set of containers for a specific context – you don't normally need to run Cypress in production, but this is an integration-test setup – it's fine to write a separate Compose file just for that setup.
# docker-compose.cypress.yml
# Used only for integration testing
version: '3.8'
services:
  nextjs:
    build: .
    restart: on-failure
    ports:
      - "3000:3000"
    env_file: .env.test # <-- specific to this test-oriented Compose file
  cypress:
    build: ./e2e
    depends_on:
      - nextjs
    environment:
      - CYPRESS_baseUrl=http://nextjs:3000
docker-compose -f docker-compose.cypress.yml up --build
This can also be a case where using multiple Compose files together can be a reasonable option. You can define a "standard" Compose setup that only defines the main service, and then an e2e-test Compose file that adds the Cypress container and the environment settings.
# docker-compose.yml
version: '3.8'
services:
  nextjs:
    image: registry.example.com/nextjs:${NEXTJS_TAG:-latest}
    restart: on-failure
    ports:
      - '3000:3000'
# docker-compose.e2e.yaml
version: '3.8'
services:
  nextjs:
    # These add to the definitions in the base `docker-compose.yml`
    build: .
    env_file: .env.test
  cypress:
    # This is a brand new container for this specific setup
    depends_on: [nextjs]
    et: cetera # copy from question or previous Compose setup
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.e2e.yml \
  up --build

How to share docker volume between two services with one being the source of truth?

I have two services in my docker-compose:
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
    volumes:
      - "html:/usr/share/nginx/html/"
  php:
    env_file:
      - ".env"
    image: php:7-fpm
    volumes:
      - "html:/usr/share/nginx/html/"
volumes:
  html:
and a Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY public_html/* /usr/share/nginx/html/
but when I run docker-compose up --build it does not update the files in the volume. I have to delete the volume for the files inside public_html to be updated in both services.
The volumes in your docker-compose file take precedence over the files you added in the Dockerfile.
Those containers don't see the content you are trying to add in your Dockerfile; they see the content of the html volume, which lives on your host machine.
Those are two different techniques: mounting a volume vs. adding files to an image in a Dockerfile.
One solution, without using volumes might be to build both images every time:
PhpDockerfile content:
FROM php:7-fpm
COPY public_html/* /usr/share/nginx/html/
and the docker-compose.yml:
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
  php:
    env_file:
      - ".env"
    build:
      context: .
      dockerfile: PhpDockerfile
EDIT:
The second approach uses bind mounts instead of copying the files in the Dockerfile (quicker, since you don't have to rebuild each time; better for a development environment):
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
    volumes:
      - "./public_html/:/usr/share/nginx/html/"
  php:
    env_file:
      - ".env"
    image: php:7-fpm
    volumes:
      - "./public_html/:/usr/share/nginx/html/"
and then you can remove the
COPY public_html/* /usr/share/nginx/html/
from your Dockerfile.
Note that you might need to use the full path instead of a relative path in the docker-compose file.

docker-compose: max depth exceeded

I was using docker-compose, but when I tried to build again, this error appeared. I have built this docker-compose setup multiple times before:
ERROR: Service 'api' failed to build: max depth exceeded
I tried to execute docker system prune to clean my containers, but it didn't work.
docker-compose.yml
version: "3"
services:
  client:
    container_name: my_client
    image: mhart/alpine-node:12
    build: ./client
    restart: always
    ports:
      - "3000:3000"
    working_dir: /client
    volumes:
      - ./client:/client
    entrypoint: ["npm", "start"]
    links:
      - api
    networks:
      - my_network
  api:
    container_name: my_api
    build: ./api
    restart: always
    ports:
      - "9000:9000"
    environment:
      DB_HOSTNAME: mysql
    working_dir: /api
    volumes:
      - ./api:/api
    depends_on:
      - mysql
    networks:
      - my_network
  mysql:
    container_name: my_mysql
    build: ./db
    restart: always
    volumes:
      - /var/lib/mysql
      - ./db:/db
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=n
      - MYSQL_USER=n
      - MYSQL_PASSWORD=n
      - MYSQL_DATABASE=n
    networks:
      - my_network
    command: '--default-authentication-plugin=mysql_native_password'
networks:
  my_network:
    driver: bridge
this is the Dockerfile:
FROM mhart/alpine-node:12
WORKDIR /api
COPY package*.json /api/
RUN npm i -G nodemon
RUN npm install
COPY . /api/
EXPOSE 9000
CMD ["npm", "run", "dev"]
any help is appreciated.
So, I figured out I just needed to execute docker system prune -a to remove any stopped containers and unused images. Now --build works again.
This command deleted all my local Docker images related to my Dockerfile. After building it so many times, my local storage had reached a limit, thus the error max depth exceeded.
Max depth doesn't indicate an out-of-storage-capacity error (though a prune could accidentally fix it).
Rather it indicates that the api image that you were building had too many layers.
A plausible theory is that you have a recursion caused by having this in your compose file:
image: mhart/alpine-node:12
build: ./client
and this in a Dockerfile
FROM mhart/alpine-node:12
(I'm assuming the Dockerfile in ./client is also FROM the same image).
Your build is essentially adding a few layers onto your local mhart/alpine-node:12 image every time you run it (you can confirm by running docker history mhart/alpine-node:12).
If so, you should probably rename the image in your compose file.
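If that diagnosis fits, the minimal fix is to stop tagging the build result with the base image's own name. A sketch against the client service from the question (the name my_client:latest is a hypothetical choice; any tag other than the base image's works):

```yaml
services:
  client:
    container_name: my_client
    # Tag the build output with its own name instead of reusing
    # mhart/alpine-node:12; the Dockerfile's FROM then always starts
    # from the pristine upstream image rather than the last build.
    image: my_client:latest
    build: ./client
```

Omitting the image: line entirely also works; Compose will then auto-generate a tag like projectname_client.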

docker-compose and using local image with mounted volume

I have an image I create with this Dockerfile:
FROM mhart/alpine-node:latest
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY src /app
Now in docker-compose.yml I build this image
version: '3.7'
services:
  enginetonic:
    build:
      context: .
    image: enginetonic:compose
  mongodb:
    image: mongo:latest
    container_name: 'mongodb'
    ports:
      - 27017:27017
    restart: always
  monitor-service:
    image: enginetonic:compose
    container_name: monitorService
    command: nodemon monitor/monitor.js
    restart: on-failure
  #common services
  access-token-service:
    image: enginetonic:compose
    container_name: accessTokenService
    command: nodemon service/access-token-service/access-token-service.js
    restart: on-failure
    depends_on:
      - mongodb
In all the documentation I found about bind mounts and volumes, they are used with other docker commands, for example:
$ docker service create \
    --mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"' \
    --name myservice \
    <IMAGE>
How can I use volumes so that every service I start with nodemon sees file changes across the whole source tree?
I would do a volume map in docker-compose.yml like this:
volumes:
  - ./app/monitor:/path/to/your/workdir/monitor
And adjust the command to use a file monitor like nodemon, restarting the service whenever a file changes:
command: ["nodemon", "/path/to/your/workdir/monitor/monitor.js"]
You may need to adjust the nodemon arguments or configs based on what you need.
P.S. You do not need to tag or push your image; simply build it directly via the build: key in docker-compose.
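Putting those pieces together with the Compose file from the question, a monitor-service entry might look like the sketch below. The ./src host path is an assumption based on the COPY src /app line in the question's Dockerfile; adjust it to your actual layout:

```yaml
monitor-service:
  image: enginetonic:compose
  container_name: monitorService
  # Bind-mount the host sources over the image's /app (the Dockerfile's
  # WORKDIR) so nodemon sees edits immediately without a rebuild.
  volumes:
    - ./src:/app
  command: ["nodemon", "/app/monitor/monitor.js"]
  restart: on-failure
```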
