I'd like to install a package in a docker image, via a Dockerfile.
docker-compose.yml:
version: "3.5"
services:
transmission:
build:
context: .
dockerfile: Dockerfile
image: ghcr.io/linuxserver/transmission
container_name: transmission
environment:
- PUID=1000
- PGID=1000
volumes:
- ./config/public:/config
- /data:/data
ports:
- 60020:60020
- 60010:60010
- 60010:60010/udp
restart: unless-stopped
network_mode: host
Dockerfile:
RUN apk update
RUN apk add --no-cache flac
In the Dockerfile, I specify that I'd like to install the flac package.
After that I run docker-compose up -d, and sudo docker exec -it transmission bash to check whether it's present, but it's not.
What am I doing wrong?
Your Dockerfile isn't valid (if you've posted the whole file): it's missing a FROM instruction. You've also specified both build: and image: in your docker-compose file; that combination is used when you want to build an image and give it a specific tag once it's built.
What I think you're trying to accomplish is to add flac to the transmission image. To do that, you'd create a Dockerfile like this:
FROM ghcr.io/linuxserver/transmission
RUN apk update
RUN apk add --no-cache flac
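(As a side note, apk add --no-cache already fetches a fresh package index, so the separate apk update layer isn't strictly necessary; if you'd like one fewer layer, the Dockerfile could be trimmed to:)
FROM ghcr.io/linuxserver/transmission
RUN apk add --no-cache flac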
Then, in your docker-compose file, remove the image specification, like this:
version: "3.5"
services:
transmission:
build:
context: .
dockerfile: Dockerfile
container_name: transmission
environment:
- PUID=1000
- PGID=1000
volumes:
- ./config/public:/config
- /data:/data
ports:
- 60020:60020
- 60010:60010
- 60010:60010/udp
restart: unless-stopped
network_mode: host
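With that in place, force a rebuild and check the result. A minimal sketch of the commands, assuming the compose file and Dockerfile above (flac --version is just one way to verify the package made it into the container):
docker-compose up -d --build                    # rebuild the image and recreate the container
docker exec -it transmission flac --version     # confirm flac is present in the running container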
Related
Hello, I have a project in Elixir, and I'd like to know how I can set things up so that whenever I update my files locally, the changes are reflected in the Docker container, without having to run docker-compose up every time something is updated.
My Dockerfile:
FROM elixir:alpine
RUN apk add --update --no-cache curl py-pip
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
mix local.rebar --force
COPY mix.exs mix.lock ./
COPY config config
RUN mix do deps.get, deps.compile
COPY priv priv
COPY lib lib
COPY numbers.csv numbers.csv
COPY docker-entrypoint.sh docker-entrypoint.sh
EXPOSE 4000
docker-compose:
version: "3.7"
services:
app:
restart: on-failure
build: .
command: /bin/sh docker-entrypoint.sh
ports:
- "4000:4000"
depends_on:
- postgres-db
links:
- postgres-db
env_file:
- .env
postgres-db:
image: "postgres:12"
restart: always
container_name: "postgres-db"
environment:
POSTGRES_PASSWORD: ${DB_PASS}
POSTGRES_USER: ${DB_USER}
POSTGRES_DB: ${DB_NAME}
ports:
- "5432:5432"
Folder structure: (screenshot not included)
You should have another docker-compose file, called docker-compose.override.yml, that holds your setup for local development. In that file, you can use volumes to have local file updates reflected in the Docker container while it's running.
It will look something like this (look at the volumes part):
version: "3.8"
services:
db:
image: postgres:13.0
env_file:
- ./docker/dev.env
restart: always
ports:
- "5432:5432"
volumes:
- db-data:/var/lib/postgresql/data
spiritpay:
image: spiritpay:local
build:
context: .
dockerfile: ./Dockerfile
depends_on:
- db
stdin_open: true
tty: true
env_file:
- ./docker/dev.env
ports:
- "4000:4000"
- "4002:4002"
volumes:
- /opt/spiritpay/assets/node_modules
- ./assets:/opt/spiritpay/assets
- ./config:/opt/spiritpay/config:ro
- ./lib:/opt/spiritpay/lib:ro
- ./priv:/opt/spiritpay/priv
- ./test:/opt/spiritpay/test:ro
- ./mix.exs:/opt/spiritpay/mix.exs:ro
- ./mix.lock:/opt/spiritpay/mix.lock:ro
volumes:
db-data:
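For what it's worth, docker-compose reads docker-compose.yml and docker-compose.override.yml together by default, so a plain up is usually enough for local development; you can also list the files explicitly. A sketch:
docker-compose up -d                                                          # merges docker-compose.yml and docker-compose.override.yml automatically
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d    # the same thing, spelled out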
I have two services in my docker-compose:
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
    volumes:
      - "html:/usr/share/nginx/html/"
  php:
    env_file:
      - ".env"
    image: php:7-fpm
    volumes:
      - "html:/usr/share/nginx/html/"
volumes:
  html:
and a Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY public_html/* /usr/share/nginx/html/
but when I run docker-compose up --build it does not update the files in the volume. I have to delete the volume for the files inside public_html to be updated in both services.
The volumes in your docker-compose file take precedence over the files you have added in the Dockerfile.
Those containers don't take the content you are trying to add in your Dockerfile; they take the content from the html volume, which lives on your host machine.
Those are two different techniques: mounting a volume vs. adding files to an image in a Dockerfile.
One solution, without using volumes, might be to build both images every time:
PhpDockerfile content:
FROM php:7-fpm
COPY public_html/* /usr/share/nginx/html/
and the docker-compose.yml:
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
  php:
    env_file:
      - ".env"
    build:
      context: .
      dockerfile: PhpDockerfile
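Since both services now have a build: section, you would rebuild whenever public_html changes; a sketch of the command, assuming the compose file above:
docker-compose up --build    # rebuilds the web and php images, then recreates the containers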
EDIT:
The second approach uses volumes instead of adding the files in the Dockerfile (it will be quicker since you don't have to build each time, so it's better for a development environment):
version: '3.9'
services:
  web:
    build:
      context: .
    ports:
      - 8080:8080
    links:
      - php
    volumes:
      - "./public_html/:/usr/share/nginx/html/"
  php:
    env_file:
      - ".env"
    image: php:7-fpm
    volumes:
      - "./public_html/:/usr/share/nginx/html/"
and then you can remove the line
COPY public_html/* /usr/share/nginx/html/
from your Dockerfile.
Note that you might need to use the full path instead of a relative path in the docker-compose file.
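For example, if relative paths give you trouble, the bind mounts can be written with absolute host paths instead (a sketch; /home/user/project is just a placeholder for wherever your project lives):
volumes:
  - "/home/user/project/public_html/:/usr/share/nginx/html/"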
Let's suppose there are two services, and they have several volumes defined, but most of those volumes are used by both services:
version: '3'
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./packages:/packages
      - ./node_modules:/node_modules
      - ./services/service1:/services/service1
    command: yarn service1:start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./packages:/packages
      - ./node_modules:/node_modules
      - ./services/service2:/services/service2
    command: yarn service2:start
Is there a way to prevent this duplication?
I would love to do something like this:
version: '3'
services:
  service1:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - myVolumeList
      - ./services/service1:/services/service1
    command: yarn start
  service2:
    image: node:lts-alpine
    working_dir: /
    volumes:
      - myVolumeList
      - ./services/service2:/services/service2
    command: yarn start
myVolumeList:
  - ./package.json:/package.json
  - ./tsconfig.json:/tsconfig.json
  - ./packages:/packages
  - ./node_modules:/node_modules
Edit: I use docker-compose for local development only. Volumes are great for me because changing source code files automatically restarts my services, so copying the files once isn't enough.
The code for your application should generally be in a Docker image. You can launch multiple containers from the same image, possibly with a different command: for each. For example, you might write a Dockerfile like this:
FROM node:lts-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY ./ ./
CMD yarn start
Having described this image, you can reference it in the docker-compose.yml, overriding the command: for each service:
version: '3'
services:
  service1:
    build: .
    command: 'yarn service1:start'
  service2:
    build: .
    command: 'yarn service2:start'
(Compose will probably try to build a separate image for each service, but because of Docker layer caching, "building" the service2 image will run very quickly and wind up with a second tag on the same image.)
This setup needs no bind-mounts at all, and if you push the built images to a Docker registry, you can run them on a system without the application code or even Node available.
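If you want to confirm that the two builds really share layers, you can compare the image histories after building; a sketch (the exact image names Compose generates depend on your project name and Compose version, so check docker image ls first):
docker-compose build
docker image ls                  # find the two generated image names
docker history <first-image>     # both layer lists should be identical
docker history <second-image>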
Natively, you can do the following; maybe this solves your problem:
version: "3"
services:
srv1:
image: someimage
volumes:
- data:/data
srv2:
image: someimage
volumes:
- data:/data
volumes:
data:
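After bringing this up, you can check that both containers really do share one named volume; a sketch (Compose prefixes the volume name with the project name, assumed here to be myproject):
docker-compose up -d
docker volume inspect myproject_data   # shows the driver and the mountpoint shared by srv1 and srv2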
There's a plugin, https://github.com/MatchbookLab/local-persist (read it before use!), that lets you change the volume mountpoint.
Basically, install it: curl -fsSL https://raw.githubusercontent.com/MatchbookLab/local-persist/master/scripts/install.sh | sudo bash
Then create a volume:
docker volume create -d local-persist -o mountpoint=/data/images --name=images
Then use as many containers as you want:
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
If you want to use docker-compose, here's an example:
version: '3'
services:
  one:
    image: alpine
    working_dir: /one/
    command: sleep 600
    volumes:
      - data:/one/
  two:
    image: alpine
    working_dir: /two/
    command: sleep 600
    volumes:
      - data:/two/
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
Almost the same question here: docker volume custom mount point
This only works with docker-compose file version '2':
version: '2'
services:
  srv1:
    image: sometag
    volumes_from:
      - data
  srv2:
    image: sometag
    volumes_from:
      - data
  data:
    image: sometag
    volumes:
      - ./code-in-host:/code
I was using docker-compose, but when I tried to build it again, this error showed up (I have built this docker-compose setup multiple times before):
ERROR: Service 'api' failed to build: max depth exceeded
I tried to execute docker system prune to clean my containers, but it didn't work.
docker-compose.yml
version: "3"
services:
client:
container_name: my_client
image: mhart/alpine-node:12
build: ./client
restart: always
ports:
- "3000:3000"
working_dir: /client
volumes:
- ./client:/client
entrypoint: ["npm", "start"]
links:
- api
networks:
- my_network
api:
container_name: my_api
build: ./api
restart: always
ports:
- "9000:9000"
environment:
DB_HOSTNAME: mysql
working_dir: /api
volumes:
- ./api:/api
depends_on:
- mysql
networks:
- my_network
mysql:
container_name: my_mysql
build: ./db
restart: always
volumes:
- /var/lib/mysql
- ./db:/db
ports:
- "3307:3306"
environment:
- MYSQL_ROOT_PASSWORD=n
- MYSQL_USER=n
- MYSQL_PASSWORD=n
- MYSQL_DATABASE=n
networks:
- my_network
command: '--default-authentication-plugin=mysql_native_password'
networks:
my_network:
driver: bridge
This is the Dockerfile:
FROM mhart/alpine-node:12
WORKDIR /api
COPY package*.json /api/
RUN npm i -G nodemon
RUN npm install
COPY . /api/
EXPOSE 9000
CMD ["npm", "run", "dev"]
Any help is appreciated.
So, I figured it out: I just needed to execute docker system prune -a to remove stopped containers and unused images. Now --build is working again.
This command deleted all my local Docker images related to my Dockerfile. After building it so many times, my local storage had reached its limit, hence the error max depth exceeded.
Max depth doesn't indicate an out-of-storage-capacity error (though a prune could accidentally fix it).
Rather it indicates that the api image that you were building had too many layers.
A plausible theory is that you have a recursion caused by having this in your compose file:
image: mhart/alpine-node:12
build: ./client
and this in a Dockerfile
FROM mhart/alpine-node:12
(I'm assuming the Dockerfile in ./client is also FROM the same image).
Your build is essentially adding a few layers onto your local mhart/alpine-node:12 image every time you run it (you can confirm by running docker history mhart/alpine-node:12).
If so, you should probably rename the image in your compose file.
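For instance (a sketch, with my_client:latest as a hypothetical tag), giving the client service its own image name breaks the cycle: the build still starts FROM mhart/alpine-node:12, but its output no longer overwrites that tag:
services:
  client:
    container_name: my_client
    image: my_client:latest   # hypothetical tag; the build no longer overwrites mhart/alpine-node:12
    build: ./client
    # ...rest of the client service unchanged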
I have an image I create with a Dockerfile:
FROM mhart/alpine-node:latest
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY src /app
Now in docker-compose.yml I build this image:
version: '3.7'
services:
  enginetonic:
    build:
      context: .
    image: enginetonic:compose
  mongodb:
    image: mongo:latest
    container_name: 'mongodb'
    ports:
      - 27017:27017
    restart: always
  monitor-service:
    image: enginetonic:compose
    container_name: monitorService
    command: nodemon monitor/monitor.js
    restart: on-failure
  #common services
  access-token-service:
    image: enginetonic:compose
    container_name: accessTokenService
    command: nodemon service/access-token-service/access-token-service.js
    restart: on-failure
    depends_on:
      - mongodb
In all the documentation on bind mounts or volumes that I found, they are used with other docker commands.
Example:
$ docker service create \
    --mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"' \
    --name myservice \
    <IMAGE>
How do I use volumes so that the whole src/ directory is covered, and every service I start with nodemon reflects file changes made anywhere in the source code?
I would do a volume map in docker-compose.yml like this:
volumes:
  - ./app/monitor:/path/to/your/workdir/monitor
And adjust the command to use a file monitor, like nodemon, to restart the service whenever there are file changes:
command: ["nodemon", "/path/to/your/workdir/monitor/monitor.js"]
You may need to adjust the nodemon arguments or config based on what you need.
PS: you do not need to tag/push your image; simply build it directly via docker-compose's build: section.
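Putting it together for one of the services, a sketch assuming the WORKDIR /app and COPY src /app from the Dockerfile above (the anonymous /app/node_modules volume keeps the image's installed dependencies from being hidden by the bind mount, the same trick used in the spiritpay example earlier; adjust the paths to your layout):
monitor-service:
  image: enginetonic:compose
  container_name: monitorService
  command: nodemon monitor/monitor.js
  restart: on-failure
  volumes:
    - ./src:/app            # host source tree mounted over the copy baked into the image
    - /app/node_modules     # anonymous volume so the image's npm install results survive the mount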