I am trying to use docker volume for the first time and I am having a hard time getting the container to share files with the host machine (Ubuntu). I can see the files my code is writing inside the container using docker exec but none of the files are in the volume under /var/lib/docker/volumes.
My Dockerfile
FROM node:16-alpine
RUN apk add dumb-init
RUN addgroup gp && adduser -S appuser -G gp
RUN mkdir -p /usr/src/app/logs
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . /usr/src/app/
RUN chown -R appuser:gp /usr/src/app/logs/
USER appuser
My docker-compose.yml
version: "3.6"
services:
my-service:
user: appuser
container_name: demou
build:
context: .
image: "myService"
working_dir: /usr/src/app
ports:
- 8080:8080 #
environment:
- NODE_VERSION=16
volumes:
- /logs:/logs/:rw
command: sh -c "dumb-init node src/server.js"
networks:
- Snet
# restart: always
volumes:
logs:
# driver: local
name: "logs"
networks:
Snet:
name: "Snetwork"
server.js doesn't do anything besides writing a helloworld.txt file to the logs directory. When I run the app in the container, I don't see any errors or even warnings. It's just that the logs are not available on the host machine where Docker keeps its volumes. What am I missing here?
Thanks
The compose file uses a bind mount (indicated by the leading / before logs):
...
services:
  my-service:
    ...
    volumes:
      - /logs:/logs/:rw
      # ^ this slash makes the mount a bind mount
...
We actually want to use a named volume by removing the leading /:
...
services:
  my-service:
    ...
    volumes:
      - logs:/logs/:rw
      # ^ no slash, will be interpreted as a named volume,
      #   referencing the named volume "logs" defined below
...
volumes:
  logs:
    # driver: local
    name: "logs"
...
For more details, please refer to the relevant docker-compose file documentation.
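Once the named volume is in place, you can ask Docker where it lives on the host. A quick check (the Mountpoint shown is the typical location on a default Linux install):

$ docker volume inspect logs --format '{{ .Mountpoint }}'
/var/lib/docker/volumes/logs/_data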
As an aside: I had problems starting the services from the docker-compose.yml file due to an invalid reference format: the image name must not include uppercase letters, so I had to change it to my-service. Even then, I was not able to build the my-service image due to missing files.
Here is a full docker-compose.yml that reproduces the desired behaviour, I used an alpine with a simple script to write to the volume:
version: "3.6"
services:
my-service:
image: alpine:3.14.3
working_dir: /logs
volumes:
- logs:/logs/:rw
command: sh -c 'echo "Hello from alpine" > log.txt'
volumes:
logs:
name: logs
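Running this once and then reading the file back confirms the data lands in the named volume; a minimal check, assuming a default Linux install where the volume directory is only readable as root:

$ docker-compose up
$ sudo cat /var/lib/docker/volumes/logs/_data/log.txt
Hello from alpine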
You hint that you're trying to actually read the logs that come out, reasonably enough. For this use case you should use a Docker bind mount and not a named volume.
Where you specify
volumes:
  - /logs:/logs:rw
The first part (starting with a slash) is an absolute path on the host; if you ls / on the host system, outside a container, you should see the logs directory there. The second part is a path inside the container, which doesn't match what you've indicated in the Dockerfile. If you change it to
volumes:
  - ./logs:/usr/src/app/logs:rw
  # ^^      ^^^^^^^^^^^^^^^^^
making it a relative path on the host side and the intended directory on the container side, then you will be able to directly read the logs in a subdirectory of the directory containing the docker-compose.yml file. You can delete the volumes: block at the end of the file.
(For completeness, if the left-hand side of a volumes: entry doesn't contain a slash at all, it refers to a named volume specified in the top-level volumes: block; see also Turing85's answer above.)
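With that change, the logs show up directly in the project directory; a quick check, assuming server.js writes the helloworld.txt file mentioned in the question:

$ docker-compose up -d
$ cat logs/helloworld.txt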
Permissions-wise, the container process must run as the same numeric user ID that owns the log directory. Any other directories that the container writes to must also have the same numeric owner. It doesn't matter if the code in the image is owned by root (in fact, it's better, because it prevents the code from being accidentally overwritten).
user: 1000   # matches the host uid; check with `id -u` or `ls -lnd logs`
volumes:
  - ./logs:/usr/src/app/logs
Also consider setting your application to log to stdout, instead of a file. That avoids this problem, and you can use docker logs to read the log output. In more involved container environments like Kubernetes, there are standard ways to collect logs-to-stdout from containers, but it's much trickier to collect logs-to-files.
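For example, if server.js logged with console.log instead of writing to a file, you could follow the output with (demou is the container_name from the compose file above):

$ docker logs -f demou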
I have two problems with a Flask app in Docker. The application works slowly and freezes after finishing the last request (for example: the first route works fine, but after clicking another link/page the app freezes; if I go to the homepage via the URL and load the page again, it works OK). Outside Docker the app works very fast.
The second problem is that Docker does not sync files in the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
  web:
    build: .
    command: tail -f /dev/null
    volumes:
      - ${PWD}/app/:/usr/src/app/
    networks:
      - flask-network
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    networks:
      - flask-network
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always
networks:
  flask-network:
    driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective and the code is in the image anyways), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
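With those removals, the Compose file shrinks to something like this sketch (images and settings taken from the question; the web service now relies on the Dockerfile's CMD):

version: '3.3'
services:
  web:
    build: .
    ports:
      - 5000:5000
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ${PWD}/postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"
    restart: always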
The second problem is that Docker does not sync files in the container after I change them.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
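A minimal sketch of that workflow (assumptions: the app honors the standard PGHOST/PGPORT libpq variables, and FLASK_APP is base_app.py, per the Dockerfile above):

docker-compose up -d flaskdb        # start only the database container
python3 -m venv venv                # ordinary local virtual environment
. venv/bin/activate
pip install -r requirements.txt
PGHOST=localhost PGPORT=5432 FLASK_APP=base_app.py flask run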
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
  # v---------- left side: host path (matches COPY source directory)
  - .:/base
  #    ^^^^^--- right side: container path (matches WORKDIR/destination directory)
It seems I have misunderstood something about volumes. I have a docker-compose file with two services: jobs, a Flask API built from a Dockerfile (see below), and mongo, which uses the official MongoDB image.
I have two volumes: .:/code, which links my host working directory to the /code folder in the container, and a named volume mongodata.
version: "3"
services:
jobs:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: ${FLASK_ENV}
FLASK_APP: ${FLASK_APP}
depends_on:
- mongo
mongo:
image: "mongo:3.6.21-xenial"
restart: "always"
ports:
- "27017:27017"
volumes:
- mongodata:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
volumes:
mongodata:
Dockerfile for the jobs service:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=job-checker
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
Every time I remove these containers and re-run, everything is fine; I still have my data in the mongodata volume. But when I check the volume list, I can see that a new volume is created from .:/code with a long volume name, for example:
$ docker volume ls
DRIVER    VOLUME NAME
local     55c08cd008a1ed1af8345cef01247cbbb29a0fca9385f78859607c2a751a0053
local     abe9fd0c415ccf7bf8c77346f31c146e0c1feeac58b3e0e242488a155f6a3927
local     job-checker_mongodata
Here I ran docker-compose up, then removed the containers, then ran up again, so I have two volumes from my working folder.
Is it normal that every up creates a new volume instead of reusing the previous one?
Thanks
Hidden at the end of the Docker Hub mongo image documentation is a note:
This image also defines a volume for /data/configdb...
The image's Dockerfile in turn contains the line
VOLUME /data/db /data/configdb
When you start the container, you mount your own volume over /data/db, but you don't mount anything on the second path. This causes Docker to create an anonymous volume there, which is the volume you're seeing with only a long hex ID.
It should be safe to remove the extra volumes, especially if you're sure they're not attached to a container and they don't have interesting content.
This behavior has nothing to do with the bind mount in the other container; bind mounts never show up in the docker volume ls listing at all.
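If the leftover anonymous volumes bother you, you can review and remove dangling volumes (careful: this removes every volume not attached to a container, not just these two):

$ docker volume ls -qf dangling=true   # review the list first
$ docker volume prune

Alternatively, mounting a named volume over the second path, e.g. a hypothetical - mongoconfig:/data/configdb entry (with a matching name under the top-level volumes: block), stops a new anonymous volume from being created on each up.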
My docker-compose.yml:
solr:
  image: solr:8.6.2
  container_name: myproject-solr
  ports:
    - "8983:8983"
  volumes:
    - ./data/solr:/var/solr/data
  networks:
    static-network:
      ipv4_address: 172.20.1.42
After bringing it up with docker-compose up -d --build, the solr container is down and the log (docker logs myproject-solr) shows this:
Copying solr.xml
cp: cannot create regular file '/var/solr/data/solr.xml': Permission denied
I've noticed that if I give full permissions on my machine to the data directory (sudo chmod 777 ./data/solr/ -R) and run Docker again, everything is fine.
I guess the issue is that the solr user from the container doesn't exist on my machine, and Docker creates the data/solr folder as root:root. Since my ./data folder is gitignored, I cannot manage these folder permissions ahead of time.
I'd like to know a workaround to manage permissions properly so that the data persists.
It's a known "issue" with docker-compose: all files created by the Docker engine are owned by root:root. It's usually solved in one of two ways:
Create the volume in advance. In your case, you can create the ./data/solr directory in advance, with appropriate permissions. You might make it accessible to anyone, or, better, change its owner to the solr user. The solr user and group ids are hardcoded inside the solr image: 8983 (Dockerfile.template)
mkdir -p ./data/solr
sudo chown 8983:8983 ./data/solr
If you want to avoid running additional commands before docker-compose, you can add an additional container which will fix the permissions:
version: "3"
services:
initializer:
image: alpine
container_name: solr-initializer
restart: "no"
entrypoint: |
/bin/sh -c "chown 8983:8983 /solr"
volumes:
- ./data/solr:/solr
solr:
depends_on:
- initializer
image: solr:8.6.2
container_name: myproject-solr
ports:
- "8983:8983"
volumes:
- ./data/solr:/var/solr/data
networks:
static-network:
ipv4_address: 172.20.1.42
There is a docker-compose-only solution :)
Problem
Docker mounts local folders with root permissions.
In Solr's docker image, the default user is solr, for a good reason: Solr commands should be run with this user (you can force them to run as root, but that is not recommended).
Most Solr commands require write permissions to /var/solr/, for data and logs storage.
In this context, when you run a solr command as the solr user, you are rejected because you don't have write permission to /var/solr/.
Solution
What you can do is first start the container as root to change the permissions of /var/solr/, and then switch to the solr user to run all the necessary Solr commands and start your Solr server.
In the example below, we use solr-precreate to create a default core and start solr.
version: '3.7'
services:
  solr:
    image: solr:8.5.2
    volumes:
      - ./mnt/solr:/var/solr
    ports:
      - 8983:8983
    user: root # run as root to change the permissions of the solr folder
    # change permissions of the solr folder, create a default core and start solr as the solr user
    command: bash -c "
      chown -R 8983:8983 /var/solr
      && runuser -u solr -- solr-precreate default-core"
Set with a Dockerfile
It's possibly not exactly what you wanted as the files aren't persisted when rebuilding the container, but it solves the 'rights' problem. Copy the files over and chown them with a Dockerfile:
FROM solr:8.7.0
COPY --chown=solr ./data /var/solr/data
This is more useful if you're trying to initialise a single core:
FROM solr:8.7.0
COPY --chown=solr ./core /var/solr/data/someCollection
It also has the advantage that you can create an image for reuse.
With a named volume
For persistence, you can also create a volume (in this case core) and copy the contents of a directory (also called core here), assigning the rights to the files on the way:
docker container create --name temp -v core:/data tianon/true || exit $?
tar -cf - --directory core --owner 8983 --group 8983 . | docker cp - temp:/data
docker rm temp
This was adapted from these answers:
https://github.com/moby/moby/issues/25245#issuecomment-365980572
https://stackoverflow.com/a/52446394
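To confirm the copy worked before wiring the volume into Compose, you can list its contents from a throwaway container (the files should show numeric owner 8983):

docker run --rm -v core:/data alpine ls -ln /data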
Then you can mount the named volume in your Docker Compose file:
version: '3'
services:
  solr:
    image: solr:8.7.0
    networks:
      - internal
    ports:
      - 8983:8983
    volumes:
      - core:/var/solr/data/someCollection
volumes:
  core:
    external: true
This solution persists the data without overriding the data on the host, doesn't need the extra build step, and can obviously be adapted for mounting the entire /var/solr/data folder.
It doesn't seem to matter that the mounted volume/directory doesn't have the correct rights (/var/solr/data/someCollection has owner root:root).
I have .pem files to use in a lot of containers; however, I would like to store these files in a single volume called keys.
I create the volume:
docker run -v /data --name keys busybox
And add the files there:
docker cp JWT_PRIVATE_KEY.pem keys:/data/
Now, when I build the services that need those files, I want to copy them from keys:/data to my /api workdir.
This is my docker-compose:
version: '3'
services:
  my_api:
    container_name: my_api
    build: .
    ports:
      - "5555:5555"
    volumes:
      - keys:/data
    networks:
      - my-network
    env_file:
      - .env
volumes:
  keys:
networks:
  my-network:
    external: true
and this is my Dockerfile:
FROM node:lts-alpine
WORKDIR /api
COPY package.json /api
RUN yarn install
COPY . /api
RUN yarn build
COPY ./docker-entrypoint.sh /
EXPOSE 5555
RUN ["chmod", "+x", "/docker-entrypoint.sh"]
ENTRYPOINT ["/docker-entrypoint.sh"]
If you have something like a TLS key and certificate that exist outside of Docker space, it will generally be easier to inject it into the container using a bind mount than a named volume.
volumes:
  # this references a local directory holding the keys
  - ./keys:/data
In the setup you show above, you have a container named keys, with an anonymous volume mounted on /data, but this is separate from the volume with the Compose name of keys (and with a docker volume ls name that will be something like api_keys, starting with the name of the current directory).
If you really need to use a named volume here, probably the easiest way to copy data into it is to docker-compose run a temporary container:
docker-compose run \
  -v $PWD:/keys \
  my_api \
  sh -c 'cp /keys/* /data'
This should inherit the volumes: from the docker-compose.yml file (so mounting the volume on /data), but also adds a bind-mount from the host system. In that temporary container you copy the files from the host bind mount into the named volume mount, and then they'll persist.
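To verify the files persisted, you can list the volume from a fresh container (api_keys is an assumed full volume name here, following the directory-prefix rule mentioned above):

docker run --rm -v api_keys:/data alpine ls -l /data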
A named-volume setup makes more sense if you're using something like Let's Encrypt where the certificates can be obtained and managed entirely within Docker space.
I have 2 containers that I fire up using docker-compose up.
The first one I just pull from Docker Hub: nginx:stable.
The second one I build on top of the php image from the hub.
dockerfile
FROM composer:1.9.3
RUN mkdir /fatfree
RUN ["composer","require","bcosca/fatfree-core","--working-dir","/fatfree"]
FROM php:7.4-fpm
COPY --from=0 /fatfree /fatfree
I also tried VOLUME /fatfree in the above file to no avail.
docker-compose.yml
version: "3.7"
services:
webserver:
image: nginx:stable
ports:
- "80:8080"
volumes:
- ./www:/www
- fatfree:/fatfree
links:
- php
php:
build:
context: .
dockerfile: dockerfile
volumes:
- ./www:/www
- "fatfree:/fatfree"
volumes:
fatfree:
If I interpreted the Docker documentation correctly, my www/index.php should be able to see whatever is in /fatfree, but it doesn't. The folder itself shows up, but it appears empty.
If I run the image interactively (docker container run -i -t test bash), the /fatfree folder exists and has all the files I expect it to have.
There are plenty of stackoverflow questions asking how to achieve this, and they all seem to suggest that what I'm doing is actually ok, but it doesn't work, and I have no clue why.
Any suggestion is appreciated.
Your mapping is incorrect.
You want:
volumes:
  - /fatfree:/www
The first entry /fatfree refers to the path on your host machine.
The second entry /www refers to the path in the container.
In my example, your host's /fatfree directory (and content) will be mapped to the container's /www directory.
Change as desired.