I want to deploy some services on my server, and all of them will use nginx as the web server. Every project has its own .conf file, and I want to share all of them with the nginx container. I tried to use a named volume, but when it's used by more than one container the data gets replaced. I want to collect all these .conf files from the different containers and put them in one volume so the nginx container can read them. I also tried to use subdirectories in named volumes, but using namedVolumeName/path does not work.
Note: I'm using docker-compose in all projects.
version: "3.7"
services:
backend:
container_name: jzmimoveis-backend
image: paulomesquita/jzmimoveis-backend
command: uwsgi --socket :8000 --wsgi-file jzmimoveis/wsgi.py
volumes:
- nginxConfFiles:/app/nginx
- jzmimoveisFiles:/app/src
networks:
- jzmimoveis
restart: unless-stopped
expose:
- 8000
frontend:
container_name: jzmimoveis-frontend
image: paulomesquita/jzmimoveis-frontend
command: serve -s build/
volumes:
- nginxConfFiles:/app/nginx
networks:
- jzmimoveis
restart: unless-stopped
expose:
- 5000
volumes:
nginxConfFiles:
external: true
jzmimoveisFiles:
external: true
networks:
jzmimoveis:
external: true
For example, in this case I linked both the frontend and backend nginx files to the named volume nginxConfFiles, but when I run docker-compose up -d on this file, only one of the .conf files appears in the volume; I think it gets overwritten by the other container.
On the nginx container, you could probably point the shared volume at /etc/nginx/conf.d, and then use a different name for each project's conf file.
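For instance, a minimal sketch of the nginx side, reusing the external nginxConfFiles volume from the question (the image tag and port mapping here are placeholder assumptions):

nginx:
  image: nginx:stable-alpine
  volumes:
    - nginxConfFiles:/etc/nginx/conf.d:ro   # e.g. backend.conf, frontend.conf
  ports:
    - "80:80"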
Below is a proof of concept: three servers, each with a config file to contribute, and a proxy (your nginx) with the shared volume bound to /config:
version: '3'

services:
  server1:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config
      - ./server1.conf:/app/server1.conf
    command: sh -c "cp /app/server1.conf /config; tail -f /dev/null"
  server2:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config
      - ./server2.conf:/app/server2.conf
    command: sh -c "cp /app/server2.conf /config; tail -f /dev/null"
  server3:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config
      - ./server3.conf:/app/server3.conf
    command: sh -c "cp /app/server3.conf /config; tail -f /dev/null"
  proxy1:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config:ro
    command: tail -f /dev/null

volumes:
  deleteme_after_demo:
Let's create 3 config files to be included:
➜ echo "server 1" > server1.conf
➜ echo "server 2" > server2.conf
➜ echo "server 3" > server3.conf
then:
➜ docker-compose up -d
Creating network "deleteme_default" with the default driver
Creating deleteme_server2_1 ... done
Creating deleteme_server3_1 ... done
Creating deleteme_server1_1 ... done
Creating deleteme_proxy1_1 ... done
And finally, let's verify the config files are accessible from proxy container:
➜ docker-compose exec proxy1 sh -c "cat /config/server1.conf"
server 1
➜ docker-compose exec proxy1 sh -c "cat /config/server2.conf"
server 2
➜ docker-compose exec proxy1 sh -c "cat /config/server3.conf"
server 3
I hope it helps.
Cheers!
Note: think of mounting a volume exactly like using the Unix mount command. If the mount point already has content, you won't see it after mounting; you'll see the content of the mounted device instead (unless the volume was empty and newly created, in which case Docker initializes it from the existing content). Whatever you want to see there needs to be on the device already, or you need to copy it there afterward.
So, I did it by bind-mounting the files (the containers I used had no data at the mount point) and copying them with the start command. You could address it differently, e.g. by copying the config file into the mounted volume with an entrypoint script in your image.
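For instance, a minimal entrypoint sketch, assuming the image ships its conf at /app/myproject.conf and the shared volume is mounted at /config (both paths are placeholders):

#!/bin/sh
# hypothetical entrypoint.sh: publish this project's conf into the shared
# volume, then hand control to the container's original command
set -e
cp /app/myproject.conf /config/myproject.conf
exec "$@"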
A named volume is initialized when it's empty/new and a container is started using that volume. The initialization is from the image filesystem, and after that, the named volume is persistent and will retain the state from the previous use.
In this case, what you have is a race condition. The volume is sharing the files, but which image initializes the volume depends on which container compose happens to start first. The named volume is shared between multiple images; it's just that the content you want from each image is different.
For your use case, you may be better off putting some logic in the image build and entrypoint to save the files you want to mirror in the volume to a different location in the image on build, and then update the volume on container startup. By moving this out of the named volume initialization steps, you avoid the race condition, and allow the volume to be updated with future changes from the image. An example of this is in my base image with the save-volume you'd run in the Dockerfile, and load-volume you'd run in your entrypoint.
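A hand-rolled sketch of that pattern (illustrative paths, not the actual save-volume/load-volume scripts): save a pristine copy at build time, and refresh the volume from it on every start.

# Dockerfile: stash a copy of the conf files outside the volume mount point
RUN cp -a /app/nginx /opt/nginx-save

#!/bin/sh
# entrypoint.sh: update the mounted volume from the saved copy, then run the app
cp -a /opt/nginx-save/. /app/nginx/
exec "$@"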
As a side note, it's also a good practice to mount that named volume as read-only in the containers that have no need to write to the config files.
Related
I am trying to use a docker volume for the first time and I am having a hard time getting the container to share files with the host machine (Ubuntu). I can see the files my code writes inside the container using docker exec, but none of the files are in the volume under /var/lib/docker/volumes.
My Dockerfile
FROM node:16-alpine
RUN apk add dumb-init
RUN addgroup gp && adduser -S appuser -G gp
RUN mkdir -p /usr/src/app/logs
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . /usr/src/app/
RUN chown -R appuser:gp /usr/src/app/logs/
USER appuser
My docker-compose.yml
version: "3.6"
services:
my-service:
user: appuser
container_name: demou
build:
context: .
image: "myService"
working_dir: /usr/src/app
ports:
- 8080:8080 #
environment:
- NODE_VERSION=16
volumes:
- /logs:/logs/:rw
command: sh -c "dumb-init node src/server.js"
networks:
- Snet
# restart: always
volumes:
logs:
# driver: local
name: "logs"
networks:
Snet:
name: "Snetwork"
server.js doesn't do anything besides write a helloworld.txt file to the logs directory. When I run the app in the container, I don't see any errors or even a warning; the logs are just not available on the host machine where Docker keeps its volumes. What am I missing here?
Thanks
The compose file uses a bind mount (indicated by the leading / before logs):
...
services:
  my-service:
    ...
    volumes:
      - /logs:/logs/:rw
      # ^ this slash makes the mount a bind mount
...
We actually want to use a named volume by removing the leading /:
...
services:
  my-service:
    ...
    volumes:
      - logs:/logs/:rw
      # ^ no slash, will be interpreted as named volume,
      #   referencing the named volume "logs" defined below
...
volumes:
  logs:
    # driver: local
    name: "logs"
...
For more details, please refer to the relevant docker-compose file documentation.
As an aside: I had problems starting the docker-compose.yml file due to an invalid reference format. The image name must not include uppercase letters. So I had to change it to my-service. Even then, I was not able to build the my-service image due to missing files.
Here is a full docker-compose.yml that reproduces the desired behaviour; I used an alpine image with a simple script to write to the volume:
version: "3.6"
services:
my-service:
image: alpine:3.14.3
working_dir: /logs
volumes:
- logs:/logs/:rw
command: sh -c 'echo "Hello from alpine" > log.txt'
volumes:
logs:
name: logs
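To check the result, one can read the named volume back from a throwaway container (a sketch; the file name comes from the command above):

docker-compose up
docker run --rm -v logs:/logs alpine:3.14.3 cat /logs/log.txt
# prints: Hello from alpine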
You hint that you're trying to actually read the logs that come out, reasonably enough. For this use case you should use a Docker bind mount and not a named volume.
Where you specify
volumes:
  - /logs:/logs:rw
The first part (starting with a slash) is an absolute path on the host; if you ls / on the host system, outside a container, you should see the logs directory there. The second part is a path inside the container, which doesn't match what you've indicated in the Dockerfile. If you change it to
volumes:
  - ./logs:/usr/src/app/logs:rw
  # ^^     ^^^^^^^^^^^^^^^^^
making it a relative path on the host side and the intended directory on the container side, then you will be able to directly read the logs in a subdirectory of the directory containing the docker-compose.yml file. You can delete the volumes: block at the end of the file.
(For completeness, if the left-hand side of a volumes: entry doesn't contain a slash at all, it refers to a named volume specified in the top-level volumes: block; see also @Turing85's answer.)
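With the bind mount in place, the files show up directly on the host (a quick check, assuming server.js writes helloworld.txt as described in the question and permissions allow; see below):

docker-compose up -d
cat logs/helloworld.txt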
Permissions-wise, the container process must run as the same numeric user ID that owns the log directory. Any other directories that the container writes to must also have the same numeric owner. It doesn't matter if the code in the image is owned by root (in fact, it's better, because it prevents the code from being accidentally overwritten).
user: 1000   # matches host uid; try running `id -u`
volumes:     # or `ls -lnd logs`
  - ./logs:/usr/src/app/logs
Also consider setting your application to log to stdout, instead of a file. That avoids this problem, and you can use docker logs to read the log output. In more involved container environments like Kubernetes, there are standard ways to collect logs-to-stdout from containers, but it's much trickier to collect logs-to-files.
We use the docker image nginx:stable-alpine in a docker-compose setup:
core-nginx:
  image: nginx:stable-alpine
  restart: always
  environment:
    - NGINX_HOST=${NGINX_HOST}
    - NGINX_PORT=${NGINX_PORT}
    - NGINX_APP_HOST=${NGINX_APP_HOST}
  volumes:
    - ./nginx/conf/dev.template:/tmp/default.template
    - ./log/:/var/log/nginx/
  depends_on:
    - core-app
  command: /bin/sh -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$NGINX_APP_HOST' < /tmp/default.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
  ports:
    - 5001:5001
Logfiles are unlimited in size in this setup.
Can anybody provide some pointers on how to limit the size of access.log and error.log?
There are a couple of ways of tackling this problem.
Docker log driver
The Nginx container you're using, by default, configures the access.log to go to stdout and the error.log to go to stderr. If you were to remove the volume you're mounting on /var/log/nginx, you would get this default behavior, which means you would be able to manage logs via the Docker log driver. The default json-file log driver has a max-size option that would do exactly what you want.
With this solution, you would use the docker logs command to inspect the nginx logs.
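A sketch of the relevant compose fragment (the size and file-count values here are arbitrary choices, not defaults):

core-nginx:
  image: nginx:stable-alpine
  logging:
    driver: json-file
    options:
      max-size: "10m"   # rotate a container log file once it reaches 10 MB
      max-file: "3"     # keep at most three rotated files per container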
Containerized log rotation
If you really want to log to local files instead of using the Docker log driver, you can add a second container to your docker-compose.yml that:
- Runs cron
- Periodically calls a script to rename the log files
- Sends an appropriate signal to the nginx process
To make all this work:
- The cron container needs access to the nginx logs. Because you're storing the logs on a volume, you can just mount that same volume in the cron container.
- The cron container needs to run in the nginx pid namespace in order to send the signal. This is the --pid=container:... option to docker run, or the pid: option in docker-compose.yml.
For example, something like this:
version: "3"
services:
nginx:
image: nginx:stable-alpine
restart: always
volumes:
- ./nginx-logs:/var/log/nginx
- nginx-run:/var/run
ports:
- 8080:80
logrotate:
image: alpine:3.13
restart: always
volumes:
- ./nginx-logs:/var/log/nginx
- nginx-run:/var/run
- ./cron.d:/etc/periodic/daily
pid: service:nginx
command: ["crond", "-f", "-L", "/dev/stdout"]
volumes:
nginx-run:
In cron.d in my local directory, I have rotate-nginx-logs (mode 0755) that looks like this:
#!/bin/sh

pidfile=/var/run/nginx.pid
logdir=/var/log/nginx

if [ -f "$pidfile" ]; then
    echo "rotating nginx logs"
    for logfile in access error; do
        mv ${logdir}/${logfile}.log ${logdir}/${logfile}.log.old
    done
    kill -USR1 "$(cat "$pidfile")"
fi
With this configuration in place, the logrotate container will rename the logs once/day and send a USR1 signal to nginx, causing it to re-open its log files.
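You can exercise the rotation without waiting a day by invoking the script by hand (a sketch):

docker-compose exec logrotate /etc/periodic/daily/rotate-nginx-logs
ls nginx-logs/
# e.g. access.log  access.log.old  error.log  error.log.old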
My preference would in general be for the first solution (gathering logs with Docker and using Docker log driver options to manage log rotation), since it reduces the complexity of the final solution.
The docker-compose.yml file I am using is the following:
services:
  web:
    image: nginx:stable-alpine
    ports:
      - "8080-8130:80"
    volumes:
      - ${TEMPDIR}:/run
I launch 20 containers with the following command:
TEMPDIR=`mktemp -d` docker-compose up -d --scale web=20
All of the containers launch OK, but they all have the same temp volume mounted at /run, whereas I need each running container to have a unique temp volume. I understand that the problem is that the above yml file does variable substitution, whereas what I need is command substitution, but I could not find any way of doing that in the official reference, especially when launching multiple instances of the same docker image. Is there some way to fix this problem?
Maybe what you are looking for is tmpfs.
Mount a temporary file system inside the container. Can be a single value or a list.
You can use it simply like this:

services:
  web:
    image: nginx:stable-alpine
    ports:
      - "8080-8130:80"
    tmpfs: /run
Here are the references:
tmpfs on Docker
tmpfs on Docker compose
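Since Docker creates a separate tmpfs per container, each of the 20 replicas gets its own private /run. A quick sanity check (a sketch):

docker-compose up -d --scale web=20
docker-compose exec --index=1 web sh -c 'echo probe > /run/probe'
docker-compose exec --index=2 web ls /run/probe
# ls: /run/probe: No such file or directory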
I've got a docker-compose.yml like this:
db:
  image: mongo:latest
  ports:
    - "27017:27017"
server:
  image: artificial/docker-sails:stable-pm2
  command: sails lift
  volumes:
    - server/:/server
  ports:
    - "1337:1337"
  links:
    - db
server/ is relative to the folder of the docker-compose.yml file. However, when I docker exec -it CONTAINERID /bin/bash and check /server, it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in the Docker settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click "Apply") and re-enable it again (and click "Apply"). In the process, you will be prompted for your PC's new password. After this process, run your docker command (run or compose) again
Try using:
volumes:
  - ./server:/server
instead of server/ -- there are some cases where Docker doesn't like the trailing slash.
As per the Docker volumes documentation,
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
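In other words, the two forms behave quite differently (a sketch; the names are placeholders):

volumes:
  - /srv/site:/var/www/html   # absolute path: bind mount of a host directory
  - sitedata:/var/www/html    # bare name: Docker-managed named volume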
I had a similar issue when I wanted to mount a directory from the command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container started successfully, but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So I created a symlink under c:\Users\<username> pointing to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
If you are using Docker for Mac then you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount.
I don't know if other people have made the same mistake, but the host directory path has to start from /home.
My mistake was that in my docker-compose file I wrongly specified the following:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /Desktop/subfolder/subfolder2:/app/subfolder
The host path should have been the full path from /home, something like:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
  ...
  volumeConsumingService:
    container_name: production-001-volumeConsumingService
    hostname: production-001-volumeConsumingService
    image: group/production-001-volumeConsumingService
    build:
      context: .
      dockerfile: volumeConsumingService.Dockerfile
    depends_on:
      - anotherServiceDefinedEarlier
    restart: always
    volumes:
      - ../data/certbot/conf:/etc/letsencrypt   # mounting
      - ../data/certbot/www:/var/www/certbot    # not mounting
      - ../data/www/public:/var/www/public      # not mounting
      - ../data/www/root:/var/www/root          # not mounting
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - default
      - external
  ...
networks:
  external:
    name: routing
A workaround that seems to be working is to enforce a restart on the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
When the volumes are not mounted after a host reboot, adding a cron task to restart the service once should do.
In my case, the volume was empty because I used the wrong path format: the path must not be wrapped in double quotes.
If you have a relative or absolute path with spaces in it, you do not need double quotes around the path; any path with spaces will be understood, since docker-compose uses ":" as the delimiter and does not treat spaces specially.
Ways that do not work (the double quotes are the problem!):

volumes:
  - "MY_PATH.../my server":/server
  - "MY_PATH.../my server:/server" (I might have missed testing this one, not sure!)
  - "./my server":/server
  - ."/my server":/server
  - "./my server:/server"
  - ."/my server:/server"
Two ways that do work (no double quotes!):

volumes:
  - MY_PATH.../my server:/server
  - ./my server:/server
In my docker-compose.yml file, I first define a service for a data-only container with a volume of /data:
data:
image: library/ubuntu:14.04
volumes:
- /data
command: tail -F /dev/null
I then have a second service that has a Dockerfile.
test:
  build:
    context: .
    dockerfile: "Dockerfile"
  volumes_from:
    - data:rw
  depends_on:
    - data
In that Dockerfile I want to write to the /data volume that comes from the data service (e.g., RUN touch /data/helloworld.txt). But when I run docker-compose up and then exec into test to look at the contents of /data, the directory is empty. If I wait to touch /data/helloworld.txt until after the containers are running (e.g., via exec), then the file is present in the /data volume and accessible from either container.
Is there a way for a Dockerfile to make use of a volume from another container defined in docker-compose.yml?