I want to know if a Docker volume can be used by many Docker containers at once. If so, is the volume locked?
You can first create a named volume and then use it wherever you want, in one or many containers. The volume is not locked: with the default local driver it is just a directory on the host, so several containers can read and write it concurrently, and coordinating that access is up to the applications.
When you create a named volume, for example one called myvolume, the local driver is used unless you specify another one. Docker then creates a folder under /var/lib/docker/volumes, and your data persists in /var/lib/docker/volumes/myvolume/_data.
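If you want to confirm where a volume's data lives, docker volume inspect will print it once the volume exists (a quick check, using the example name above):
docker volume inspect myvolume
The output includes a "Mountpoint" field, e.g. /var/lib/docker/volumes/myvolume/_data.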
That was just background information, though; you don't need to manage that path yourself. You just have to create the volume with:
docker volume create myvolume
Then use the volume name as the source:
docker run -v myvolume:/yourdestinationpath ...
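Several containers can mount that same named volume at the same time; here is a minimal sketch (the paths and the alpine image are illustrative, not from the question):
# first container writes into the shared volume
docker run --rm -v myvolume:/data alpine sh -c 'echo hello > /data/greeting'
# second container reads the same file back
docker run --rm -v myvolume:/data alpine cat /data/greeting   # prints: hello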
If you use Docker Compose, the syntax is the same:
services:
  myservice:
    ...
    volumes:
      - myvolume:/yourdestinationpath
The key is that you're not using a bind mount, where the source is a concrete host path, but a named Docker volume.
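Side by side, the two forms look like this (the host path is illustrative):
docker run -v /home/me/data:/yourdestinationpath ...   # bind mount: source is a concrete host path
docker run -v myvolume:/yourdestinationpath ...        # named volume: source is a volume name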
This is a minimal example in which I mount a local directory inside the container. My local directory acts as a persistence volume that the container can read from and write to.
My current directory contains an input-file (this file will be read by the container).
The container cats the content of the input-file and appends it to the output-file (here I am faking your conversion).
// The initial working directory content
.
└── input-file
// Read the `input-file` and append its content to the `output-file` (both files are persisted on the host machine)
docker run -v $(pwd):/root/some-path ubuntu /bin/bash -c "cat /root/some-path/input-file >> /root/some-path/output-file"
// The outcome
.
├── input-file
└── output-file
I'm trying to understand volumes.
When I build and run this image with docker build -t myserver . and docker run -dp 8080:80 myserver, the web server on it prints "Hallo". When I change "Hallo" to "Huhu" in the Dockerfile and rebuild & run the image/container, it shows "Huhu". So far, no surprises.
Next, I added a docker-compose.yaml file that has two volumes. One volume is mounted on the existing path where the Dockerfile creates the index.html. The other is mounted on a new and unused path. I build and run everything with docker compose up --build.
On the first build, the web server prints "Hallo" as expected. I can also see the two volumes and their contents in the Docker GUI. The index.html that was written to the image is now present in the volume. (I guess the volume gets mounted before the Dockerfile can write to it.)
On the second build (swap "Hallo" with "Huhu" and run docker compose up --build again) I was expecting the web server to print "Huhu". But it prints "Hallo". So I'm not sure why the data on the volume was not overwritten by the Dockerfile.
Can you explain?
Here are the files:
Dockerfile
FROM nginx
# First build
RUN echo "Hallo" > /usr/share/nginx/html/index.html
# Second build
# RUN echo "Huhu" > /usr/share/nginx/html/index.html
docker-compose.yaml
services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - html:/usr/share/nginx/html
      - persistent:/persistent
volumes:
  html:
  persistent:
There are three different cases here:
When you build the image, it knows nothing about volumes. Whatever string is in that RUN echo line is stored in the image. Volumes are not mounted when you run the docker-compose build step, and the Dockerfile cannot write to a volume at all.
The first time you run a container with the volume mounted, and the first time only, if the volume is empty, Docker copies content from the mount point in the image into the volume. This only happens with named volumes and not bind mounts; it only happens on native Docker and not Kubernetes; the volume content is never updated at all after this happens.
The second time you run a container with the volume mounted, since the volume is already populated, the content from the volume hides the content in the image.
You routinely see setups that use named volumes to "pass through" content to the image (especially Node applications) or to "share files" with another container (frequently an Nginx server). These only work because Docker (and only Docker) automatically populates empty named volumes, and therefore they only work the first time. If you change your package.json, a Node application that mounts a volume over node_modules won't see the updates; if you change the static assets you're sharing with a web server, the named volume will hide those changes in both the application and HTTP-server containers.
Since the named-volume auto-copy only happens in this one very specific case, I'd try to avoid using it, and more generally try to avoid mounting anything over non-empty directories in your image.
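If you do need the volume to pick up the content of a rebuilt image, the only way with this mechanism is to make the volume empty again, for example (a sketch against the Compose file above; down -v deletes the named volumes):
docker compose down -v
docker compose up --build
After this, the now-empty html volume is seeded from the freshly built image, and the server prints "Huhu".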
I am checking the docker documentation on how to use named volumes to share data between containers.
In Populate a volume using a container it is specified that:
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.
So I did a simple example where:
I start a container which creates the volume and mounts it to a directory with existing files
I start a second container on which I mount the volume and indeed I can see the first container's files.
So far so good.
However, I wanted to see if it is possible to have pre-populated content from more than one container.
What I did was:
Create two simple images which have their respective configuration files in the same directory
FROM alpine:latest
WORKDIR /opt/test
RUN mkdir -p "/opt/test/conf" && \
echo "container from image 1" > /opt/test/conf/config_1.cfg
FROM alpine:latest
WORKDIR /opt/test
RUN mkdir -p "/opt/test/conf" && \
echo "container from image 2" > /opt/test/conf/config_2.cfg
Create a docker-compose file which defines a named volume that is mounted on both services:
services:
  test_container_1:
    image: test_image_1
    volumes:
      - test_volume:/opt/test/conf
    tty: true
  test_container_2:
    image: test_image_2
    volumes:
      - test_volume:/opt/test/conf
    tty: true
volumes:
  test_volume:
Started the services.
> docker-compose -p example up
Creating network "example_default" with the default driver
Creating volume "example_test_volume" with default driver
Creating example_test_container_2_1 ... done
Creating example_test_container_1_1 ... done
Attaching to example_test_container_1_1, example_test_container_2_1
According to the logs, container_2 was created first and pre-populated the volume. The volume was then mounted into container_1, but the only file available on the mount was /opt/test/conf/config_2.cfg, so config_1.cfg was effectively left out.
So my question is whether it is possible to have a volume populated with data from two or more containers.
The reason I want to explore this is so that I can have additional app configuration loaded from different containers, to support a multi-tenant scenario, without having to rework the app to read tenant configuration from different folders.
Thank you in advance
Once there is any content in a named volume at all, Docker will never automatically copy content into it. It will not merge content from two different images, update the volume if one of the images changes, or anything else.
I'd advise you to ignore the paragraph you quote in the Docker documentation. Assume any volume you mount into the container is initially empty. This matches the behavior you'll get with Docker bind-mounts (host directories), Kubernetes persistent volumes, and basically any other kind of storage besides Docker named volumes proper. Don't mount a volume over the content in your image.
If you can, restructure your application to avoid sharing files at all. One common use of named volumes is trying to republish static assets to a reverse proxy, for example; rather than using a named volume (which will never update itself) you can COPY the static assets into a dedicated web-server image. This avoids the various complexities of trying to use a volume here.
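For the static-assets case, that can be a multi-stage build; a minimal sketch, assuming a Node build that emits its output to /app/dist (the image names and paths are assumptions, not from the question):
# build stage produces the static assets
FROM node:20 AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# the web-server image gets its own copy of the assets; no volume involved
FROM nginx
COPY --from=build /app/dist /usr/share/nginx/html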
If you really don't have a choice in the matter, then you can approach this with dedicated code in both of the containers. The basic setup here is:
Have a data directory somewhere outside your application directory, and mount the volume there.
Include the original files in the image somewhere different.
In an entrypoint wrapper script, copy the original files into the data directory (the mounted volume).
Let's say, for the sake of argument, that you've installed the application into /opt/test, and the data directory will be /etc/test. The entrypoint wrapper script can be as short as:
#!/bin/sh
# Copy config files from the application tree into the config tree
# (overwriting anything that's already there)
cp /opt/test/* "$TEST_CONFIG_DIR"
# Run the main container command
exec "$#"
In the Dockerfile, you need to make sure that directory exists (and if you'll use a non-root user, that user needs permission to write to it).
FROM alpine
WORKDIR /opt/test
COPY ./ ./
ENV TEST_CONFIG_DIR=/etc/test
RUN mkdir "$TEST_CONFIG_DIR"
ENTRYPOINT ["./entrypoint.sh"]
CMD ["./my_app"]
Finally, in the Compose setup, mount the volume on that data directory (you can't use the environment variable, but consider the filesystem path part of the image's API):
version: '3.8'
volumes:
  test_config:
services:
  one:
    build: ./one
    volumes:
      - test_config:/etc/test
  two:
    build: ./two
    volumes:
      - test_config:/etc/test
You would be able to run, for example,
docker-compose run one ls /etc/test
docker-compose run two ls /etc/test
to see both sets of files appear there.
The entrypoint script is code you control. There's nothing especially magical about it beyond the final exec "$@" line to run the main container command. If you want to ignore files that already exist, for example, or if you have a way to merge in changes, then you can implement something more clever than a simple cp command.
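For example, a variant that only copies files that don't already exist in the volume could look like this (a sketch reusing the paths above):
#!/bin/sh
# Copy only the config files that aren't in the config tree yet
for f in /opt/test/*; do
  dest="$TEST_CONFIG_DIR/$(basename "$f")"
  [ -e "$dest" ] || cp "$f" "$dest"
done
# Run the main container command
exec "$@"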
Currently I am working on a task that requires creating 4-5 different Docker containers. The catch is that all these containers should use the same volume, mounted inside each container.
I am building the images using individual Dockerfiles and then running the containers from these images. The preferable way would be to use VOLUME in the Dockerfiles, but I am not sure exactly how to use it.
Here's a code snippet of a Dockerfile:
FROM [imagename]
WORKDIR /app
VOLUME /app/[container's folder]   # this folder is different for each container, e.g. VOLUME /app/sql
COPY shell.sh /app/[container's folder]/shell.sh
.
.
rest of code
After running the containers and opening a shell in them, each container's files are stored under its own path; e.g., as in the snippet above, shell.sh is present under /app/[container's folder].
But when I check another container, I can see that container's own files under its path, but I can't see the first container's files.
This is the structure I want:
app
|- sql
| |- shell-script files
|- tsdb
| |- shell-script files
|- kernel
| |- shell-script files
.
.
OR
/app/sql/shellsql.sh
/app/tsdb/shelltsdb.sh
/app/kernel/shellkernel.sh
Assuming you have the following VOLUME declarations for the containers:
VOLUME /app/container1
VOLUME /app/container2
VOLUME /app/container3
The way to share the volumes is as follows. Create three volumes from the command line:
docker volume create vol1
docker volume create vol2
docker volume create vol3
When running each container, mount all the volumes:
docker run -v vol1:/app/container1 -v vol2:/app/container2 -v vol3:/app/container3 <image1>
docker run -v vol1:/app/container1 -v vol2:/app/container2 -v vol3:/app/container3 <image2>
...
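To verify the sharing, you can list all three mount points from a throwaway container (a quick check; the alpine image is an assumption):
docker run --rm -v vol1:/app/container1 -v vol2:/app/container2 -v vol3:/app/container3 alpine ls -R /app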
Why are you not using docker-compose?
docker-compose is a better way to run all the containers with one command (you can also run a specific container). In docker-compose you can easily set up the configuration (volume mounts, links between containers, networks, etc.).
Please have a look at a sample docker-compose file; volumes_from references a data_volume service that owns the shared volume:
data_volume:
  image: busybox
  volumes:
    - /vol1
container1:
  build: ./container1
  volumes_from:
    - data_volume
container2:
  build: ./container2
  volumes_from:
    - data_volume
With the above example, both containers use the same volumes from data_volume, and you don't have to write a long command for each container. Note that volumes_from only exists in Compose file format versions 1 and 2; it was removed in version 3 (see the named-volume sketch after this example). You can also combine it with ordinary volume mounts, for example:
volumes:
  - /vol1:/app/container1
volumes_from:
  - data_volume
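For Compose file format version 3 and later, the same sharing is expressed with a top-level named volume instead of volumes_from; a sketch (the service names and mount path are illustrative):
services:
  container1:
    build: ./container1
    volumes:
      - data_volume:/app/data
  container2:
    build: ./container2
    volumes:
      - data_volume:/app/data
volumes:
  data_volume: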
I am trying to create a Jenkins and Nexus integration using a Docker Compose file, where my Jenkins image is extended with a few plugins via a Dockerfile and a volume is created under /var/lib/jenkins/.
VOLUME ["/var/lib/jenkins/"]
In the Compose file, I am trying to map the volume to the local directory /opt/jenkins/:
jenkins:
  build: ./jenkins
  ports:
    - 9090:8080
  volumes:
    - /opt/jenkins/:/var/lib/jenkins/
But nothing is copied to my persistence directory (/opt/jenkins/).
I can see all my Jenkins jobs created under a _data/jobs/ directory of some volume, not in the directory I mapped to /var/lib/jenkins/.
Can anyone explain why this is happening?
From the documentation:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
And in the mount a host directory as data volume:
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
So basically you are overlaying (hiding) anything that was in /var/lib/jenkins. Can your image function if those things are hidden?
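If the goal is to have Docker seed the storage with the image's /var/lib/jenkins content, a named volume (rather than a host path) triggers that copy on first use; a sketch, assuming a version 2+ Compose file:
services:
  jenkins:
    build: ./jenkins
    ports:
      - 9090:8080
    volumes:
      - jenkins_home:/var/lib/jenkins/
volumes:
  jenkins_home:
The trade-off is that the data then lives under /var/lib/docker/volumes rather than /opt/jenkins/.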
I would like to know if there's any way I can share a directory from my host machine with a Docker container (a shared volume) using the Dockerfile.
I understand that we can do that using volumes (the -v option) when using docker run. But I couldn't find any way to do it with an instruction in the Dockerfile.
I already tried the VOLUME instruction in the Dockerfile but couldn't succeed.
Here are some details about my environment:
[me@myHost new]$ tree -L 1
.
|-- docker-compose.yml
|-- Dockerfile
`-- Shared   // This is the directory I wish to share with my containers.
Until now, I was using the docker-compose.yml file to mount this directory:
volumes:
  - ./Shared:/shared   # "relative path on the host":"absolute path in the container"
But now, for various reasons, I need to mount it in the Dockerfile. I already tried the following, but it didn't work (it creates a new, empty volume at /shared):
VOLUME ./Shared:/shared
I could use docker run and save the image after making manual changes in the container, but I wish I could do this in the Dockerfile itself.
Thanks.
You can't mount a host directory using instructions in a Dockerfile; the Dockerfile describes a portable image and can't refer to paths on a particular host. You must do this with docker run, or with a proxy to docker run like docker-compose.
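For completeness, the docker run equivalent of the compose mount above would be (the image name is a placeholder):
docker run -v "$(pwd)/Shared:/shared" myimage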