I have an Nginx container set up which serves assets for a static website. The idea is for the webserver to always stay up, and overwrite the assets whenever they are recompiled. Currently the docker setup looks like this:
docker-compose.yml:
version: '3'
services:
  web:
    build: ./app
    volumes:
      - site-assets:/app/dist:ro
  nginx:
    build: ./nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - site-assets:/app:ro
      - https-certs:/etc/nginx/certs:ro
    depends_on:
      - web
volumes:
  site-assets:
  https-certs:
Web (asset-builder) Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ .
RUN npm run generate
Nginx Dockerfile:
FROM nginx:latest
RUN mkdir /app
COPY nginx.conf /etc/nginx/nginx.conf
The certbot container is managed separately and is not relevant to the problem I'm having, but the Nginx container does need to be able to mount the https-certs volume.
This setup seemed good, until I realized the site-assets volume would not be updated after first creation. The volume would need to be destroyed and re-created on each app deployment for this to work, requiring the Nginx container to be stopped to unmount the volume. So much for that approach.
Is there a way to manage application data in this setup without bringing the Nginx container down? Preferably, I would want to do this declaratively with a docker-compose file, avoid multiple application instances as this doesn't need to scale, and avoid using docker inspect to find the volume on the filesystem and modify it directly.
I hope there is a sane answer to this other than "It's a static site, why aren't you using Netlify or GitHub Pages?" :)
Here is an example that moves your npm run generate from image build time to container run time. It is a minimal example to illustrate how running the process at container run time makes updated content in the shared volume available both to the container that is already running and to containers started later.
With the following docker-compose.yml:
version: '3'
services:
  web:
    image: ubuntu
    volumes:
      - site-assets:/app/dist
    command: bash -c "echo initial > /app/dist/file"
    restart: "no"
  nginx:
    image: ubuntu
    volumes:
      - site-assets:/app:ro
    command: bash -c "while true; do cat /app/file; sleep 5; done"
volumes:
  site-assets:
We can launch it with docker-compose up in a terminal. Our nginx server will initially miss the data but the initial web service will launch and generate our asset (with contents initial):
❯ docker-compose up
Creating network "multivol_default" with the default driver
Creating volume "multivol_site-assets" with default driver
Creating multivol_web_1 ... done
Creating multivol_nginx_1 ... done
Attaching to multivol_nginx_1, multivol_web_1
nginx_1 | cat: /app/file: No such file or directory
multivol_web_1 exited with code 0
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
In another terminal we can update our asset (your npm run generate command):
❯ docker-compose run web bash -c "echo updated > /app/dist/file"
And now we can see our nginx service serving the updated content:
❯ docker-compose up
Creating network "multivol_default" with the default driver
Creating volume "multivol_site-assets" with default driver
Creating multivol_web_1 ... done
Creating multivol_nginx_1 ... done
Attaching to multivol_nginx_1, multivol_web_1
nginx_1 | cat: /app/file: No such file or directory
multivol_web_1 exited with code 0
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
nginx_1 | updated
nginx_1 | updated
nginx_1 | updated
nginx_1 | updated
^CGracefully stopping... (press Ctrl+C again to force)
Stopping multivol_nginx_1 ... done
Hope this was helpful to illustrate a way to take advantage of volume mounting at container run time.
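A sketch of how this could look applied to your setup (names and paths taken from your files; it assumes your generate script writes to /app/dist). Note that the :ro flag must be dropped so the builder can write into the volume, and that stale files from older builds are not removed automatically.
Web (asset-builder) Dockerfile, with the build deferred to run time:
FROM node:latest
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ .
# generate at container start instead of image build, so each run
# refreshes /app/dist, where the site-assets volume is mounted
CMD ["npm", "run", "generate"]
web service in docker-compose.yml (volume no longer read-only):
  web:
    build: ./app
    volumes:
      - site-assets:/app/dist
    restart: "no"
Each deployment is then roughly docker-compose build web && docker-compose run --rm web, while the nginx service keeps running and serves the refreshed files from the same volume.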
Related
After spending hours trying to make it happen, I just can't make it work. I'm desperate for help, as I couldn't find any questions related to my issue.
I've developed a Node.js web app for my university. The IT department needs me to prepare a Docker image shared on Docker Hub (although I chose GitHub Packages) and a docker-compose file so it can be run easily. I tried to host the app on my Raspberry Pi, but when I pull the image (with docker-compose.yaml, Dockerfile and .env present) it fails during the build process:
npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
and during compose up process:
pi@raspberrypi:~/projects $ docker-compose up
Starting mysql ... done
Starting backend ... done
Attaching to mysql, backend
backend | exec /usr/local/bin/docker-entrypoint.sh: exec format error
mysql | 2022-09-22 08:04:47+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
mysql | 2022-09-22 08:04:48+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mysql | 2022-09-22 08:04:48+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.30-1.el8 started.
mysql | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
backend exited with code 1
I executed bash inside my Docker container (on my dev machine) so I'm sure that /usr/src/app folder structure matches my app folder structure.
What's wrong with my solution? Should I provide more files than just docker-compose.yaml, Dockerfile and .env?
Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/app
COPY . ./
RUN npm i && npm cache clean --force
RUN npm run build
ENV NODE_ENV production
CMD [ "node", "dist/main.js" ]
EXPOSE ${PORT}
docker-compose.yaml:
version: "3.9"
services:
backend:
command: npm run start:prod
container_name: backend
build:
context: .
dockerfile: Dockerfile
image: ghcr.io/rkn-put/web-app/docker-backend/test
ports:
- ${PORT}:${PORT}
depends_on:
- mysql
environment:
- NODE_ENV=${NODE_ENV}
- PORT=${PORT}
- ORIGIN=${ORIGIN}
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
- DB_SYNCHRONIZE=${DB_SYNCHRONIZE}
- EXPIRES_IN=${EXPIRES_IN}
- SECRET=${SECRET}
- GMAIL_USER=${GMAIL_USER}
- GMAIL_CLIENT_ID=${GMAIL_CLIENT_ID}
- GMAIL_CLIENT_SECRET=${GMAIL_CLIENT_SECRET}
- GMAIL_REFRESH_TOKEN=${GMAIL_REFRESH_TOKEN}
- GMAIL_ACCESS_TOKEN=${GMAIL_ACCESS_TOKEN}
mysql:
image: mysql:latest
container_name: mysql
hostname: mysql
restart: always
ports:
- ${DB_PORT}:${DB_PORT}
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${DB_NAME}
- MYSQL_USER=${DB_USERNAME}
- MYSQL_PASSWORD=${DB_PASSWORD}
volumes:
- ./mysql:/var/lib/mysql
cap_add:
- SYS_NICE
Even if this is not a complete step-by-step solution, there are multiple things that you should fix (and understand), and then things should work.
You say: "but when I pull the image (with docker-compose.yaml, Dockerfile and .env present) it fails during build process". This is actually where the biggest confusion happens. If you pull, there should be no build anymore.
You build locally, you push with docker-compose push, and the image that you have on GitHub is ready to use. Because of this, on the target machine (where you want to run the project) you don't need to build anymore - therefore you don't need a Dockerfile anymore.
The docker-compose.yml that you deliver should not have the build section for your app. Only the image name so that docker-compose knows where to pull the image from.
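For example, a sketch of the delivered backend service could look like this (image name taken from your file; environment variables trimmed for brevity):
version: "3.9"
services:
  backend:
    image: ghcr.io/rkn-put/web-app/docker-backend/test
    container_name: backend
    ports:
      - ${PORT}:${PORT}
    depends_on:
      - mysql
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      # ...the rest of your variables as before
  # mysql service unchanged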
Locally (in your development environment) you should have the same docker-compose.yml without the build section, but also a docker-compose.override.yml file that should look like:
version: "3.9"
services:
backend:
build:
context: .
docker-compose automatically merges docker-compose.yml and docker-compose.override.yml when it finds the second one. That's also why it is important to not deliver the override file.
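Locally, the workflow then looks roughly like this (docker-compose config prints the effective, merged configuration, which is a quick way to verify the override is picked up):
docker-compose config   # inspect the merged docker-compose.yml + docker-compose.override.yml
docker-compose build    # build the image using the override's build section
docker-compose push     # push it to the registry referenced by image: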
This alone should make your application work on the target machine. Remember, all you need there is the docker-compose.yml (no build section) and the .env file.
Other points that you might want to address:
dockerfile: Dockerfile - not needed since that is the default
command: npm run start:prod - if you override it anyway, why not just put it this way in the Dockerfile? If you have a good reason to do this, then leave it
EXPOSE ${PORT} - you are not declaring PORT anywhere in your Dockerfile. Just make your app run on port 80 and expose port 80.
Read the docs and save yourself some typing: if the env variables have the same names as in .env, docker-compose is clever enough to pick them up if you only declare them (see the sketch after this list)
Don't expose the mysql ports on the host (${DB_PORT}:${DB_PORT}) unless you actually need to reach the database from outside Docker
Consider using a named volume for mysql instead of a folder. If you use a folder, maybe place it in a different location so that you don't delete it by mistake
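A minimal sketch of that environment shorthand (values are then taken from the environment in which Compose runs and/or the .env file, depending on your Compose version's precedence rules):
    environment:
      - NODE_ENV
      - PORT
      - DB_HOST
      # ...and so on; no =${VAR} needed when the names match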
I have two problems with a Flask app in Docker. The application works slowly and freezes after finishing the last request (for example: the first route works fine, but clicking the next link/page makes the app freeze; if I go to the homepage via the URL and load the page again, it works OK). Outside Docker the app works very fast.
The second problem is that Docker does not sync files in the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective and the code is in the image anyways), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
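A trimmed-down Compose file along these lines might look like the sketch below (it assumes the Dockerfile now ends with a CMD and the app listens on port 5000):
version: '3.3'
services:
  web:
    build: .
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always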
The second problem is that Docker does not sync files in the container after I change them.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
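For example, a quick sketch of that workflow (the entry file base_app.py comes from your Dockerfile; adjust the Flask entry point to your app):
docker-compose up -d flaskdb            # start only the database container
python3 -m venv venv && . venv/bin/activate
pip install -r requirements.txt
FLASK_APP=base_app.py flask run         # talks to Postgres on localhost:5432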
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
# v-------- left side: host path (matches COPY source directory)
- .:/base
# ^^^^-- right side: container path (matches WORKDIR/destination directory)
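A fuller sketch of a development-mode service along those lines (assuming the Flask dev server listens on port 5000):
  web:
    build: .
    volumes:
      - .:/base          # host project directory over the image's /base
    ports:
      - 5000:5000
    depends_on:
      - flaskdb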
When I use docker-compose up
the container exits with code 10 and says
Could not locate Gemfile or .bundle/ directory
but if I do docker run web entrypoint.sh
the Rails app seems to start without an issue.
What could be the cause of this inconsistent behavior?
Entrypoint.sh
#!/bin/bash
set -e
if [ -f tmp/pids/server.pid ]; then
rm tmp/pids/server.pid
fi
bundle exec rails s -b 0.0.0.0 -p 8080
Relevant part from the docker-compose file.
docker-compose.yml
...
web:
  build:
    context: "./api/"
    args:
      RUBY_VERSION: '2.7.2'
      BUNDLER_VERSION: '2.2.29'
  entrypoint: entrypoint.sh
  volumes:
    - .:/app
  tty: true
  stdin_open: true
  ports:
    - "8080:8080"
  environment:
    - RAILS_ENV=development
  depends_on:
    - mongodb
...
When you docker run web ..., you're running exactly what's in the image, no more and no less. On the other hand, the volumes: directive in the docker-compose.yml file replaces the container's /app directory with arbitrary content from the host. If your Dockerfile does RUN bundle install expecting to put content in /app/vendor in the image, the volumes: directive hides that.
You can frequently resolve problems like this by deleting volumes: from the Compose setup. Since you're running the code that's built into your image, this also means you're running the exact same image and environment you'll eventually run in production, which is a big benefit of using Docker here.
(You should also be able to delete the tty: and stdin_open: options, which aren't usually necessary, and the entrypoint: and those specific build: { args: }, which replicate settings that should be in the Dockerfile.)
(The Compose file suggests you're building a Docker image out of the api subdirectory, but then bind-mounting the current directory . -- api's parent directory -- over the image contents. That's probably the immediate cause of the inconsistency you see.)
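Putting those suggestions together, the web service could shrink to something like this sketch (it assumes the entrypoint and the Ruby/Bundler build args now live in api/Dockerfile):
web:
  build:
    context: "./api/"
  ports:
    - "8080:8080"
  environment:
    - RAILS_ENV=development
  depends_on:
    - mongodb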
My docker-compose.yml:
solr:
  image: solr:8.6.2
  container_name: myproject-solr
  ports:
    - "8983:8983"
  volumes:
    - ./data/solr:/var/solr/data
  networks:
    static-network:
      ipv4_address: 172.20.1.42
After bringing it up with docker-compose up -d --build, the solr container is down and the log (docker logs myproject-solr) shows this:
Copying solr.xml
cp: cannot create regular file '/var/solr/data/solr.xml': Permission denied
I've noticed that if I give full permissions on my machine to the data directory sudo chmod 777 ./data/solr/ -R and I run the Docker again, everything is fine.
I guess the issue comes from the solr user not existing on my machine, because Docker creates the data/solr folder as root:root. Since my ./data folder is gitignored, I cannot manage these folder permissions.
I'd like to know a workaround to manage permissions properly with the purpose of persisting data
It's a known "issue" with docker-compose: all files created by Docker engine are owned by root:root. Usually it's solved in one of the two ways:
Create the volume in advance. In your case, you can create the ./data/solr directory in advance, with appropriate permissions. You might make it accessible to anyone, or, better, change its owner to the solr user. The solr user and group ids are hardcoded inside the solr image: 8983 (Dockerfile.template)
mkdir -p ./data/solr
sudo chown 8983:8983 ./data/solr
If you want to avoid running additional commands before docker-compose, you can create an additional container which will fix the permissions:
version: "3"
services:
initializer:
image: alpine
container_name: solr-initializer
restart: "no"
entrypoint: |
/bin/sh -c "chown 8983:8983 /solr"
volumes:
- ./data/solr:/solr
solr:
depends_on:
- initializer
image: solr:8.6.2
container_name: myproject-solr
ports:
- "8983:8983"
volumes:
- ./data/solr:/var/solr/data
networks:
static-network:
ipv4_address: 172.20.1.42
There is a docker-compose-only solution :)
Problem
Docker mounts local folders with root permissions.
In Solr's docker image, the default user is solr - for a good reason: Solr commands should be run with this user (you can force them to run as root, but that is not recommended).
Most Solr commands require write permissions to /var/solr/, for data and logs storage.
In this context, when you run a solr command as the solr user, you are rejected because you don't have write permission to /var/solr/.
Solution
What you can do is first start the container as root to change the permissions of /var/solr/, and then switch to the solr user to run all the necessary Solr commands and start your Solr server.
In the example below, we use solr-precreate to create a default core and start solr.
version: '3.7'
services:
  solr:
    image: solr:8.5.2
    volumes:
      - ./mnt/solr:/var/solr
    ports:
      - 8983:8983
    user: root # run as root to change the permissions of the solr folder
    # Change permissions of the solr folder, create a default core and start solr as solr user
    command: bash -c "
      chown -R 8983:8983 /var/solr
      && runuser -u solr -- solr-precreate default-core"
Set with a Dockerfile
It's possibly not exactly what you wanted as the files aren't persisted when rebuilding the container, but it solves the 'rights' problem. Copy the files over and chown them with a Dockerfile:
FROM solr:8.7.0
COPY --chown=solr ./data /var/solr/data
This is more useful if you're trying to initialise a single core:
FROM solr:8.7.0
COPY --chown=solr ./core /var/solr/data/someCollection
It also has the advantage that you can create an image for reuse.
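For example (the image tag is illustrative):
docker build -t my-solr-core .
docker run -d -p 8983:8983 my-solr-core
You could also reference my-solr-core as the image: of the solr service in a Compose file.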
With a named volume
For persistence, you can also create a volume (in this case core) and copy the contents of a directory (also called core here), assigning the rights to the files on the way:
docker container create --name temp -v core:/data tianon/true || exit $?
tar -cf - --directory core --owner 8983 --group 8983 . | docker cp - temp:/data
docker rm temp
This was adapted from these answers:
https://github.com/moby/moby/issues/25245#issuecomment-365980572
https://stackoverflow.com/a/52446394
Then you can mount the named volume in your Docker Compose file:
version: '3'
services:
  solr:
    image: solr:8.7.0
    networks:
      - internal
    ports:
      - 8983:8983
    volumes:
      - core:/var/solr/data/someCollection
volumes:
  core:
    external: true
This solution persists the data without overriding the data on the host, doesn't need the extra build step, and can obviously be adapted for mounting the entire /var/solr/data folder.
It doesn't seem to matter that the mounted volume/directory doesn't have the correct rights (/var/solr/data/someCollection has owner root:root).
I want to deploy several services on my server, and all of them will use Nginx as the web server. Every project has its own .conf file, and I want to share all of them with the Nginx container. I tried to use named volumes, but when a volume is used by more than one container the data gets replaced. I want to get all these .conf files from the different containers into one volume so it can be read by the Nginx container. I also tried using subdirectories in named volumes, but using namedVolumeName/path does not work.
Note: I'm using docker-compose in all projects.
version: "3.7"
services:
backend:
container_name: jzmimoveis-backend
image: paulomesquita/jzmimoveis-backend
command: uwsgi --socket :8000 --wsgi-file jzmimoveis/wsgi.py
volumes:
- nginxConfFiles:/app/nginx
- jzmimoveisFiles:/app/src
networks:
- jzmimoveis
restart: unless-stopped
expose:
- 8000
frontend:
container_name: jzmimoveis-frontend
image: paulomesquita/jzmimoveis-frontend
command: serve -s build/
volumes:
- nginxConfFiles:/app/nginx
networks:
- jzmimoveis
restart: unless-stopped
expose:
- 5000
volumes:
nginxConfFiles:
external: true
jzmimoveisFiles:
external: true
networks:
jzmimoveis:
external: true
For example, in this case I linked both the frontend and backend Nginx files to the named volume nginxConfFiles, but when I run docker-compose up -d with this file, only one of the .conf files appears in the volume; I think it gets overwritten by the other container.
You could probably have the shared volume on the nginx container pointing to /etc/nginx/conf.d, and then use a different file name for each project's conf file.
Below a proof-of-concept, three servers with a config file to be attached on each one, and a proxy (your Nginx) with the shared volume bound to /config:
version: '3'
services:
  server1:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config
      - ./server1.conf:/app/server1.conf
    command: sh -c "cp /app/server1.conf /config; tail -f /dev/null"
  server2:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config
      - ./server2.conf:/app/server2.conf
    command: sh -c "cp /app/server2.conf /config; tail -f /dev/null"
  server3:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config
      - ./server3.conf:/app/server3.conf
    command: sh -c "cp /app/server3.conf /config; tail -f /dev/null"
  proxy1:
    image: busybox:1.31.1
    volumes:
      - deleteme_after_demo:/config:ro
    command: tail -f /dev/null
volumes:
  deleteme_after_demo:
Let's create 3 config files to be included:
➜ echo "server 1" > server1.conf
➜ echo "server 2" > server2.conf
➜ echo "server 3" > server3.conf
then:
➜ docker-compose up -d
Creating network "deleteme_default" with the default driver
Creating deleteme_server2_1 ... done
Creating deleteme_server3_1 ... done
Creating deleteme_server1_1 ... done
Creating deleteme_proxy1_1 ... done
And finally, let's verify the config files are accessible from proxy container:
➜ docker-compose exec proxy1 sh -c "cat /config/server1.conf"
server 1
➜ docker-compose exec proxy1 sh -c "cat /config/server2.conf"
server 2
➜ docker-compose exec proxy1 sh -c "cat /config/server3.conf"
server 3
I hope it helps.
Cheers!
Note: you should think of mounting a volume exactly the same way as using the Unix mount command. If you already have content inside the mount point, after the mount you will not see it any more; you will see the content of the mounted device instead (unless it was empty and first created here). Whatever you want to see there needs to already be on the device, or you need to move it there afterward.
So I did it by bind-mounting the files, because I had no data in the containers I used, and then copying them into the shared volume with the startup command. You could address it a different way, e.g. by copying the config file to the mounted volume with an entrypoint script in your image.
A named volume is initialized when it's empty/new and a container is started using that volume. The initialization is from the image filesystem, and after that, the named volume is persistent and will retain the state from the previous use.
In this case, what you have is a race condition. The volume is sharing the files, but which image gets used to initialize the volume depends on which container Compose happens to start first. The named volume is shared between multiple images; it's just the content that you want to be different.
For your use case, you may be better off putting some logic in the image build and entrypoint to save the files you want to mirror in the volume to a different location in the image on build, and then update the volume on container startup. By moving this out of the named volume initialization steps, you avoid the race condition, and allow the volume to be updated with future changes from the image. An example of this is in my base image with the save-volume you'd run in the Dockerfile, and load-volume you'd run in your entrypoint.
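A minimal sketch of that idea (not the actual save-volume/load-volume scripts; base image, paths and file names are illustrative): stash the conf files in a location the volume won't hide at build time, then copy them into the volume from the entrypoint on every start.
Dockerfile (in each project image):
FROM node:latest
COPY nginx/ /opt/volume-defaults/nginx/
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["serve", "-s", "build/"]
docker-entrypoint.sh:
#!/bin/sh
set -e
# refresh this project's conf files in the shared volume on every start
mkdir -p /app/nginx
cp /opt/volume-defaults/nginx/*.conf /app/nginx/
exec "$@"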
As a side note, it's also a good practice to mount that named volume as read-only in the containers that have no need to write to the config files.
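For example, in the Nginx service that only reads the configs (volume name from the question; mount path illustrative):
nginx:
  volumes:
    - nginxConfFiles:/etc/nginx/conf.d:ro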