I have PEM files to use in a lot of containers, and I would like to store these files in a single volume called keys.
I create the volume:
docker run -v /data --name keys busybox
And add the files there:
docker cp JWT_PRIVATE_KEY.pem keys:/data/
Now, when I build the services that need those files, I want to copy them from the keys volume's /data into my /api workdir.
This is my docker-compose:
version: '3'
services:
  my_api:
    container_name: my_api
    build: .
    ports:
      - "5555:5555"
    volumes:
      - keys:/data
    networks:
      - my-network
    env_file:
      - .env
volumes:
  keys:
networks:
  my-network:
    external: true
and this is my Dockerfile:
FROM node:lts-alpine
WORKDIR /api
COPY package.json /api
RUN yarn install
COPY . /api
RUN yarn build
COPY ./docker-entrypoint.sh /
EXPOSE 5555
RUN ["chmod", "+x", "/docker-entrypoint.sh"]
ENTRYPOINT ["/docker-entrypoint.sh"]
If you have something like a TLS key and certificate that exist outside of Docker space, it will generally be easier to inject them into the container using a bind mount than a named volume.
volumes:
  # this references a local directory holding the keys
  - ./keys:/data
In the setup you show above, you have a container named keys, with an anonymous volume mounted on /data, but this is separate from the volume with the Compose name of keys (and with a docker volume ls name that will be something like api_keys, starting with the name of the current directory).
If you really need to use a named volume here, probably the easiest way to copy data into it is to docker-compose run a temporary container:
docker-compose run \
  -v "$PWD:/keys" \
  my_api \
  sh -c 'cp /keys/* /data'
This inherits the volumes: from the docker-compose.yml file (so the volume is mounted on /data), but also adds a bind mount from the host system. In that temporary container you copy the files from the host bind mount into the named-volume mount, and then they'll persist. Note the cp runs through a shell so the wildcard is expanded inside the container, not on the host.
A named-volume setup makes more sense if you're using something like Let's Encrypt, where the certificates can be obtained and managed entirely within Docker space.
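As a sketch of that entirely-in-Docker setup (the certbot service and mount paths here are illustrative assumptions, not from the question), the certificate manager and the API would simply share the named volume:

```yaml
version: '3'
services:
  # hypothetical service that obtains/renews certificates inside Docker
  certbot:
    image: certbot/certbot
    volumes:
      - keys:/etc/letsencrypt
  my_api:
    build: .
    volumes:
      - keys:/data:ro   # read-only: the API only consumes the keys
volumes:
  keys:
```

Because both mounts reference the same named volume, nothing ever needs to be copied in from the host.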
I am trying to use a Docker volume for the first time and I am having a hard time getting the container to share files with the host machine (Ubuntu). I can see the files my code is writing inside the container using docker exec, but none of the files are in the volume under /var/lib/docker/volumes.
My Dockerfile
FROM node:16-alpine
RUN apk add dumb-init
RUN addgroup gp && adduser -S appuser -G gp
RUN mkdir -p /usr/src/app/logs
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . /usr/src/app/
RUN chown -R appuser:gp /usr/src/app/logs/
USER appuser
My docker-compose.yml
version: "3.6"
services:
  my-service:
    user: appuser
    container_name: demou
    build:
      context: .
    image: "myService"
    working_dir: /usr/src/app
    ports:
      - 8080:8080
    environment:
      - NODE_VERSION=16
    volumes:
      - /logs:/logs/:rw
    command: sh -c "dumb-init node src/server.js"
    networks:
      - Snet
    # restart: always
volumes:
  logs:
    # driver: local
    name: "logs"
networks:
  Snet:
    name: "Snetwork"
server.js doesn't do anything besides writing a helloworld.txt file to the logs directory. When I run the app in the container, I don't see any errors or even warnings. The logs are just not available on the host machine where Docker keeps its volumes. What am I missing here?
Thanks
The compose file uses a bind mount (indicated by the leading / before logs):
...
services:
  my-service:
    ...
    volumes:
      - /logs:/logs/:rw
      # ^ this slash makes the mount a bind mount
...
We actually want to use a named volume by removing the leading /:
...
services:
  my-service:
    ...
    volumes:
      - logs:/logs/:rw
      # ^ no slash, will be interpreted as a named volume,
      #   referencing the named volume "logs" defined below
...
volumes:
  logs:
    # driver: local
    name: "logs"
...
For more details, please refer to the relevant docker-compose file documentation.
As an aside: I had problems starting the docker-compose.yml file due to an invalid reference format. The image name must not include uppercase letters, so I had to change it to my-service. Even then, I was not able to build the my-service image due to missing files.
Here is a full docker-compose.yml that reproduces the desired behaviour, I used an alpine with a simple script to write to the volume:
version: "3.6"
services:
  my-service:
    image: alpine:3.14.3
    working_dir: /logs
    volumes:
      - logs:/logs/:rw
    command: sh -c 'echo "Hello from alpine" > log.txt'
volumes:
  logs:
    name: logs
You hint that you're trying to actually read the logs that come out, reasonably enough. For this use case you should use a Docker bind mount and not a named volume.
Where you specify
volumes:
  - /logs:/logs:rw
The first part (starting with a slash) is an absolute path on the host; if you ls / on the host system, outside a container, you should see the logs directory there. The second part is a path inside the container, which doesn't match what you've indicated in the Dockerfile. If you change it to
volumes:
  - ./logs:/usr/src/app/logs:rw
  #  ^^     ^^^^^^^^^^^^^^^^^
making it a relative path on the host side and the intended directory on the container side, then you will be able to directly read the logs in a subdirectory of the directory containing the docker-compose.yml file. You can delete the volumes: block at the end of the file.
(For completeness, if the left-hand side of a volumes: entry doesn't contain a slash at all, it refers to a named volume specified in the top-level volumes: block; see also @Turing85's answer.)
Permissions-wise, the container process must run as the same numeric user ID that owns the log directory. Any other directories that the container writes to must also have the same numeric owner. It doesn't matter if the code in the image is owned by root (in fact, it's better, because it prevents the code from being accidentally overwritten).
user: 1000      # matches host uid; try running `id -u`
volumes:        # or `ls -lnd logs`
  - ./logs:/usr/src/app/logs
Also consider setting your application to log to stdout, instead of a file. That avoids this problem, and you can use docker logs to read the log output. In more involved container environments like Kubernetes, there are standard ways to collect logs-to-stdout from containers, but it's much trickier to collect logs-to-files.
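A minimal sketch of that stdout approach, based on the compose file from the question (no log volume needed at all):

```yaml
services:
  my-service:
    build:
      context: .
    # no volumes: entry; write log lines with console.log / console.error
    command: sh -c "dumb-init node src/server.js"
```

With this, `docker logs demou` (or `docker-compose logs my-service`) shows the application output directly.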
I'm making an image to host a PHP application. I'm using COPY to populate /var/www/html with the app files and creating a VOLUME /var/www/html to be able to mount the dir on the host and edit files like config.
But:
When I mount the volume in docker-compose.yml, the directory is empty.
When I omit the volumes entry in docker-compose.yml and connect to the container shell, the directory /var/www/html is populated.
I have already read tons of examples and documentation but, sincerely, I don't know what is wrong.
dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./ocomon .
VOLUME /var/www/html/ocomon
docker-compose.yml:
version: '3.5'
services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ./volumes/ocomon/ocomon:/var/www/ocomon
    ports:
      - 4682:80
Assuming your host's directory is ${PWD}/www/html, then you need only provide the volumes value in docker-compose.yml and it should be:
Dockerfile:
FROM php:8.0-apache-buster
LABEL description="OCOMON 3.2 Frontend (noDB)"
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/html/ocomon
COPY ./www/html/ocomon .
and:
docker-compose.yml:
version: '3.5'
services:
  ocomon:
    image: ocomon:3.2
    container_name: ocomon
    volumes:
      - ${PWD}/www/html:/var/www/html
    ports:
      - 4682:80
Explanation
Dockerfile VOLUME creates a volume in the image. By following WORKDIR (which always creates the path if it does not exist) with VOLUME, you overwrite whatever was in (or created by) WORKDIR. Note this happens during image build.
Docker Compose volumes mounts a directory from the host into the container. The syntax is volumes: - ${HOST_PATH}:${CONTAINER_PATH}. Note this happens during container run.
It seems volumes are a point I have misunderstood. I have a docker-compose file with two services: jobs, a Flask API built from a Dockerfile (see below), and mongo, from the official MongoDB image.
I have two volumes: .:/code is linked from my host working directory to the /code folder in the container, and a named volume mongodata.
version: "3"
services:
  jobs:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: ${FLASK_ENV}
      FLASK_APP: ${FLASK_APP}
    depends_on:
      - mongo
  mongo:
    image: "mongo:3.6.21-xenial"
    restart: "always"
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
volumes:
  mongodata:
Dockerfile for the jobs service:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=job-checker
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
Every time I remove these containers and re-run, everything is fine; I still have my data in the mongodata volume. But when I check the volume list I can see that a new volume is created from - .:/code with a long volume name, for example:
$ docker volume ls
DRIVER VOLUME NAME
local 55c08cd008a1ed1af8345cef01247cbbb29a0fca9385f78859607c2a751a0053
local abe9fd0c415ccf7bf8c77346f31c146e0c1feeac58b3e0e242488a155f6a3927
local job-checker_mongodata
Here I ran docker-compose up, then removed the containers, then ran up again, so I have two volumes from my working folder.
Is it normal that every up creates a new volume instead of reusing the previous one?
Thanks
Hidden at the end of the Docker Hub mongo image documentation is a note:
This image also defines a volume for /data/configdb...
The image's Dockerfile in turn contains the line
VOLUME /data/db /data/configdb
When you start the container, you mount your own volume over /data/db, but you don't mount anything on the second path. This causes Docker to create an anonymous volume there, which is the volume you're seeing with only a long hex ID.
It should be safe to remove the extra volumes, especially if you're sure they're not attached to a container and they don't have interesting content.
This behavior has nothing to do with the bind mount in the other container; bind mounts never show up in the docker volume ls listing at all.
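If the anonymous volumes bother you, one option (a sketch; the mongoconfig volume name here is my own) is to mount a named volume over the image's second declared path as well, so nothing anonymous is created:

```yaml
services:
  mongo:
    image: "mongo:3.6.21-xenial"
    volumes:
      - mongodata:/data/db
      - mongoconfig:/data/configdb   # covers the image's second VOLUME path
volumes:
  mongodata:
  mongoconfig:
```

With both VOLUME paths explicitly mounted, repeated up/down cycles reuse the same two named volumes.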
I have a Python app and a Java app that I want to run inside containers.
I have a folder named pass-hash with:
--h2o-start: a folder containing a Dockerfile that I use to start h2o.jar, which starts a server for machine learning.
--model-generator: a folder containing passhash.py and a data.csv file.
The passhash.py app contains h2o.import_file("/var/my-data/data.csv"), which takes the data.csv file from the my-data folder I created in the container and generates a POJO file with it.
The h2o-start Dockerfile contains:
FROM openjdk:8
ADD h2o.jar h2o.jar
EXPOSE 54321
EXPOSE 54322
ENTRYPOINT ["java", "-jar", "h2o.jar"]
The model-generator Dockerfile contains:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
COPY data.csv /var/my-data
RUN chmod 777 /var/my-data/data.csv
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
The docker-compose.yml file contains:
version: "3"
services:
  h2o-start:
    image: milanpanic2/h2o-start
    build:
      context: ./h2o-start
    ports:
      - "54321:54321"
      - "54322:54322"
    volumes:
      - "home/data"
  model-generator:
    image: milanpanic2/model-generator
    build:
      context: ./model-generator
    ports:
      - "8080:8080"
    depends_on:
      - "h2o-start"
    volumes:
      - "csvdata:/var/my-data"
volumes:
  csvdata:
Docker volumes are designed to share folders between the host machine and Docker containers. If you copy any file into the volume location path on your host machine, it will automatically be available inside the container.
The syntax for a docker volume is as below:
-v /home/data:/data
In the above syntax, /home/data is a folder available on the host machine and /data is the folder available inside the docker container.
If you copy any file into the /home/data folder on the host machine, it will automatically be available inside the container's /data folder.
Hope this is clear to you.
If you are using docker-compose, then add a volumes tag as below:
volumes:
  - /home/data:/data
for example:
version: '3'
services:
  app:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - /home/data:/data
I don't know; that is the solution I came up with. Can you tell me a better solution for my problem? My problem is: I have a Python app that uses a data.csv file to generate a POJO machine learning model. When I give this Python app a path to the data file, it throws an exception that the file doesn't exist. I also have another app, written in Java, that uses the generated POJO file and gives predictions based on that data. The Java app also updates the data.csv file every day. I want each app (microservice) to run in a separate container, but I want them both to use data.csv.
To solve this, you need to use volumes.
Try the code below.
This is your docker-compose file:
version: "3"
services:
  h2o-start:
    image: milanpanic2/h2o-start
    build:
      context: ./h2o-start
    ports:
      - "54321:54321"
      - "54322:54322"
    volumes:
      - /home/data:/var/my-data
  model-generator:
    image: milanpanic2/model-generator
    build:
      context: ./model-generator
    ports:
      - "8080:8080"
    depends_on:
      - "h2o-start"
    volumes:
      - /home/data:/var/my-data
This is your Dockerfile:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
And where is your Java Dockerfile?
Now just create a default data.csv file and copy it to the /home/data location on your host machine, then run the application and let me know.
If you mean adding files during docker build, take a look at the ADD & COPY instructions.
In my docker-compose.yml file, I first define a service for a data-only container with a volume of /data:
data:
  image: library/ubuntu:14.04
  volumes:
    - /data
  command: tail -F /dev/null
I then have a second service that has a Dockerfile.
test:
  build:
    context: .
    dockerfile: "Dockerfile"
  volumes_from:
    - data:rw
  depends_on:
    - data
In that Dockerfile I want to write to the /data volume that comes from the data service (e.g., RUN touch /data/helloworld.txt). But when I run docker-compose up and then exec into test to look at the contents of /data, the directory is empty. If I wait to run touch /data/helloworld.txt until after the containers are running (e.g., via exec), then the file is present in the /data volume and accessible from either container.
Is there a way for a Dockerfile to make use of a volume from another container defined in docker-compose.yml?