By default Docker uses /var/lib/docker/volumes/ for any started container.
Is there any way to launch a new container and have it consume all the disk space it needs on a different, specified path on the host?
Basically, I want the volume root to be somewhere else.
For a specific container only, the simplest way, I think, is to use Docker volumes: create a Docker volume, then attach it to the container. The process running in the container writes to that volume, so it uses the disk you would like it to use.
There is more information on the following webpage:
https://docs.docker.com/storage/volumes/
You can define the volume path.
docker run -it --rm -v $PWD:/MyVolume ubuntu bash
This command mounts the current folder, i.e. the one you execute the command from.
Inside the container you'll find your files under /MyVolume.
jens@DESKTOP:~$ docker run -it --rm -v $PWD:/MyVolume ubuntu bash
root@71969d68099e:/# cd /MyVolume/
root@71969d68099e:/MyVolume# ls
But you can define any path:
docker run -it --rm -v /home/someuser/somevolumepath:/MyVolume ubuntu bash
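If the goal is a named volume whose data lives on a specific host path (say, a bigger disk) instead of under /var/lib/docker/volumes/, the local driver can bind one there. A minimal sketch, assuming /mnt/bigdisk is the host directory you want to use:
# Create a named volume backed by a specific host directory (hypothetical path):
docker volume create --driver local \
  --opt type=none --opt o=bind --opt device=/mnt/bigdisk myvolume
# Then mount it like any other named volume:
docker run -it --rm -v myvolume:/MyVolume ubuntu bash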
Almost the same is available in Docker Compose:
ports:
- "80:8080"
- "443:443"
volumes:
- $HOME/userhome/https_cert:/etc/nginx/certs
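For context, that fragment belongs inside a service definition; a fuller, hypothetical example (the nginx service and image names are assumptions) might look like:
services:
  nginx:
    image: nginx
    ports:
      - "80:8080"
      - "443:443"
    volumes:
      - $HOME/userhome/https_cert:/etc/nginx/certs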
Whenever I run the command below:
docker volume ls
I can see some volumes already created in my Docker engine.
DRIVER VOLUME NAME
local 5df9458932cd504e10b2b37856c434cbdf3876733684b100cbf390c965ac9581
local 6f7037bc33861a5e42a9f8bcd699f8184ff1916a297a718ccc4df5f369d07530
local 8a86c462020f35f1051b47c48555228a1df359251f2496c32ed45a9081bb1872
local 85ed838d2e081eddc672fd8ddb15bbb3eecc73adb270678c98b7c50a03ecb2fc
Why were those volumes created?
How can I find out what purpose they exist for?
If you start a Docker container with a volume that has no name or host mount point, Docker creates a unique name for it. These docs briefly mention anonymous volumes like this. Most likely, a Dockerfile had a VOLUME section and the container wasn't run with a corresponding --mount or -v flag to bind a local path to the container's volume.
Also see this devops stack exchange answer.
Here's an example of when an anonymous volume is created:
Dockerfile with anonymous volumes:
FROM alpine:3.9
VOLUME ["/root", "/test"]
Building/running container without mounting or otherwise naming the /root, /test volumes:
$ docker volume ls
DRIVER VOLUME NAME
$ docker build -t test .
$ docker run -it --rm -d --name volume-test test:latest sh
$ docker volume ls
DRIVER VOLUME NAME
local 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
local 05c903f47f3f3666e03ee06154ff54b23547a5cc65750ca18bb40be40ed4049c
local 6f595aada6ae7c9fb16831996c2bdd8d652bec55a7cedf96afef95aec8f4e6e1
local 7f54c9dbbec46acc5a843499c65a50e23a78baa884facd026704d0dcb0362c9e
local 47a791197d6164757b015df1e2aba48bac3999720ead6b5981820a3aaece4113
local 214155fe63200cc859c1eddd2b31aa990fd6eb7c8614aa02bd8b57690b0fe53e
Of course, you can always inspect the volumes to try to find out where they came from but this may or may not be useful for you:
docker inspect 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
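To see where an anonymous volume's data actually lives on the host, or to clean dangling volumes up, something like the following should work (the ID is the one from the listing above; paths will differ on your machine):
# Print just the host path backing the volume:
docker volume inspect --format '{{ .Mountpoint }}' 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
# Remove all volumes not referenced by at least one container (asks for confirmation):
docker volume prune
Note that docker run --rm also removes a container's anonymous volumes when the container is removed.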
I have a script used in production that does basically this:
mkfs.ext4 ... /dev/sdb1
mount /dev/sdb1 /folder
and so on
I have a Docker environment where I simulate my production environment. What I need now is the ability to use the same script in both environments. To do that, I need some way in Docker to have a /dev/sdb1 device and attach a volume to it, so that when I run the commands above, my volume ends up mounted at /folder.
I know this can be done easily with:
docker run -it -v <my volume>:/folder <tag> /bin/bash
But that way things are a little different inside the Docker container, and I would need to modify my script (in my case I have several scripts to change).
Is there a way to do something like:
docker run -it -v <my volume>:/dev/sdb1 <tag> /bin/bash
so that when in Docker I do:
mount /dev/sdb1 /folder
I mount my external volume to /folder in the container?
Have you tried running Docker with the privileges needed to mount?
If you launch the container with docker run --privileged or docker run --cap-add=SYS_ADMIN, /dev/sdb1 may be accessible from inside the container, so it becomes possible to run mount /dev/sdb1 /folder.
For further information about Docker container privileges, please see the Docker documentation on privileged mode and capabilities.
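A minimal sketch of that idea, assuming /dev/sdb1 exists on the host and my-sim-image is your simulation image (both are placeholder names):
# --privileged exposes host devices (including /dev/sdb1) and allows mount:
docker run -it --privileged my-sim-image /bin/bash
# Inside the container, the unmodified production script should now work:
mount /dev/sdb1 /folder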
I am looking for a way to create a Docker volume and put some data on it just before a specific container is started, since that container needs the configuration at startup.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a Compose file. If I were doing it manually, I could simply wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS image like Ubuntu works great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this within the container; which you are in from the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
# ls /app
my_file
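Since the update says this has to happen in a Compose file: one possible sketch is a short-lived seeding service that the real service waits on. Everything below is an assumption (service names, images, and the seed command are placeholders), and condition: service_completed_successfully needs a reasonably recent Compose version:
volumes:
  test_volume:

services:
  seed:
    image: ubuntu
    volumes:
      - test_volume:/app
    command: touch /app/my_file
  app:
    image: ubuntu
    depends_on:
      seed:
        condition: service_completed_successfully
    volumes:
      - test_volume:/app
    command: ls /app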
I have a named volume with stuff in it.
I would like to provide this volume the same way I provide a path: docker run -v /host:/path.in.docker.container works for paths, and I'd like to do the same with a volume I manually created and filled.
I know about --volumes-from, but how do I first connect the volume to the empty container?
You can create a volume with docker volume create (see the documentation), then mount that volume with the --volume/-v option of docker run, as in docker run -v volumename:/data -it my_image.
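A small end-to-end sketch of that flow (alpine is just a throwaway helper image here, and the names are placeholders):
# Create the named volume and pre-fill it with a helper container:
docker volume create myvolume
docker run --rm -v myvolume:/data alpine sh -c 'echo hello > /data/file'
# Mount the pre-filled volume into the real container:
docker run -v myvolume:/data -it my_image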
I have the following containers:
A data container, which is built directly on quay.io from a GitHub repo; it is basically a website.
An FPM container.
An NGINX container.
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (the data container), it is rebuilt (of course), and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to read the new content.
I started with a "backup approach" in which I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting or removing any service.
But the idea of moving the data from the data container onto the host doesn't really appeal to me, so I'm wondering whether there is a "Docker way", or simply a better way, of doing this.
Thanks!
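For concreteness, the backup approach described above might look roughly like this (the container name mustela-data and the path /home/mustela come from the update below; the rest are placeholders):
# Copy the site data out of the data container onto the host...
docker cp mustela-data:/home/mustela ./website-data
# ...and bind-mount that host directory into a serving container:
docker run -d -it --name nginx -v $PWD/website-data:/home/mustela ubuntu bash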
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Building the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files haven't been updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result:
hello.1 hello.2 hello.3 hello.4