why does docker have docker volumes and volume containers

Why does docker have docker volumes and volume containers? What is the primary difference between them? I have read through the Docker docs but couldn't really understand it well.

Docker volumes
You can use Docker volumes to create a new volume in your container and to mount a folder of your host to it. E.g. you could mount the folder /var/log of your Linux host into your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This creates a folder called /opt/my/app/log inside your container, backed by /var/log on your Linux host. You can use this to persist data or to share data between your containers.
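As a quick sanity check you can observe the same files from both sides of the mount. This is a minimal sketch: alpine is a stand-in image, and /tmp/host-logs stands in for /var/log so the demo doesn't touch real system logs.

```shell
# A host directory to share (stand-in for /var/log).
mkdir -p /tmp/host-logs

# Write a file through the container's view of the bind mount...
docker run --rm -v /tmp/host-logs:/opt/my/app/log:rw alpine \
    sh -c 'echo hello > /opt/my/app/log/from-container.txt'

# ...and read it back on the host: same directory, two views.
cat /tmp/host-logs/from-container.txt
```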
Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and has a folder called /some/data/to/share. You can share this folder with another container now:
docker run -d --volumes-from MyDataContainer some/image
This container will also see the same volume as in the previous command. You can share the volume between many containers as you could share a mounted folder of your host. But it will not pollute your host with data - everything is still encapsulated in isolated containers.
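To see the sharing in action, here is a sketch using alpine as a stand-in image (the volume path matches the example above; the container names and file are illustrative):

```shell
# Create the data volume container; it only has to exist, not run.
docker create -v /some/data/to/share --name MyDataContainer alpine

# One container writes into the shared volume...
docker run --rm --volumes-from MyDataContainer alpine \
    sh -c 'echo shared > /some/data/to/share/note.txt'

# ...and another container reads the same data. No host path is
# involved; the data stays inside Docker-managed storage.
docker run --rm --volumes-from MyDataContainer alpine \
    cat /some/data/to/share/note.txt
```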
My resources
https://docs.docker.com/userguide/dockervolumes/

Related

Adding volume after docker-compose up

I am using a multi-container Docker application in an EC2 Linux instance.
I have started it with: docker-compose -p myapplication up -d
I also have mounted my EFS under (in my EC2 host machine): /mnt/efs/fs1/
Everything is working fine at that point.
Now I need to access this EFS from one of my docker containers.
So I guess I have to add a volume to one of my containers, mapping /mnt/efs/fs1/ (on the host) to /mydestinationpath (in the container).
I can see my running containers IDs and images with: docker container ls
How can I attach the volume to my container?
Edit the docker-compose.yml file to add the volumes: entries you need, and re-run the same docker-compose up -d command. Compose will notice that some specific services' configurations have changed, and delete and recreate those specific containers.
Most of the configuration for a Docker container (image name and tag, environment variables, published ports, volume mounts, ...) can only be specified when the container is first created. At the same time, the general expectation is that there's nothing important in a container filesystem. So it's extremely routine to delete and recreate a container to change options like this, and Compose can automate it for you.
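Concretely, for the EFS example above, the addition would look something like this (the service name app and the image name are hypothetical; the volumes: entry is the relevant line):

```yaml
services:
  app:
    image: myapplication-image   # hypothetical image name
    volumes:
      # host path (EC2, where EFS is mounted) : path inside the container
      - /mnt/efs/fs1:/mydestinationpath
```

After saving, re-running docker-compose -p myapplication up -d recreates only the service whose configuration changed.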

docker share folder from container to host system

I have a container with code in it. This container runs on a production server. How can I share the folder with code in this container to my local machine? Maybe with a Samba server, then mounting (cifs) this folder with the code on my machine? Maybe with some examples...
Using
docker cp <containerId>:/file/path/within/container /host/path/target
you could copy some data out of the container. If the data in the container and on your machine need to be constantly in sync, I suggest you use a data volume to share a directory from your server with the container. This directory can then be shared from the server to your local machine by any method (e.g. sshfs).
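To make the one-off copy concrete, here is a small sketch (alpine as a stand-in image; the container name and file path are illustrative):

```shell
# Create a stopped container that contains a file.
docker run --name codebox alpine sh -c 'echo "print(1)" > /tmp/app.py'

# Copy the file out of the container onto the host.
docker cp codebox:/tmp/app.py /tmp/app.py
cat /tmp/app.py

# Clean up the throwaway container.
docker rm codebox
```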
The docker documentation about Manage data in containers shows how to add a volume:
$ docker run -d -P --name web -v /webapp training/webapp python app.py
A data volume will then be created and mounted in the container at /webapp. To expose a specific directory from your server instead, use the host:container form (e.g. -v /srv/webapp:/webapp).

Is there a way to start a sibling docker container mounting volumes from the host?

the scenario: I have a host that has a running docker daemon and a working docker client and socket. I have 1 docker container that was started from the host and has the docker socket mounted within it. It also has the docker client from the host mounted into it. So I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live inside the host filesystem to the sibling container that I want to spin up from the other docker sibling container. It is a problem because when issuing docker run, the docker daemon mounted inside the docker container is really watching the host filesystem. So I need access to the host file system from within the docker container which is trying to start another sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
-v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always resolved on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, so "-v" always refers to the daemon's file system.
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory
You can use --volumes-from, i.e.:
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from Mount volumes from the specified container(s)
You can use named volumes
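Putting the answer together: a -v path given from inside the container is interpreted by the daemon against the host filesystem, so you can mount host files into the sibling directly. A sketch (paths are illustrative; /data/config.yml must exist on the host, not in the calling container):

```shell
# Run from inside a container that has /var/run/docker.sock mounted.
# The -v source path refers to the HOST filesystem, even though this
# client itself runs in a container.
docker run --rm --name another_sibling \
    -v /data/config.yml:/etc/app/config.yml \
    alpine cat /etc/app/config.yml
```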

How can I create a docker volume container in specific directory?

I have an SSD drive mounted at /ssd. I'd like to create a docker volume container for a MySQL server that will be run in a container, and have it use this SSD drive for data storage. It seems that by default, docker creates data volume containers in /var/lib/docker. How can I force docker to use the SSD drive for the data volume container?
The point of containers is that they're decoupled from the host. There is no per-container way to choose where Docker stores its data on the local filesystem.
However, what I do for my databases is pass through a mount:
docker create -v /ssd:/path/to/data --name data_for_mysql <imagename> /bin/true
Then you can run your MySQL container with --volumes-from data_for_mysql, and it will write directly to the SSD.
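For example, pointing the volume at MySQL's data directory /var/lib/mysql makes the server store everything on the SSD (the container names are from the answer above; the MYSQL_ROOT_PASSWORD value is a placeholder):

```shell
# The data volume container maps the SSD onto MySQL's data directory.
docker create -v /ssd:/var/lib/mysql --name data_for_mysql alpine /bin/true

# The MySQL server reuses that volume; all data lands on /ssd.
docker run -d --name mysql --volumes-from data_for_mysql \
    -e MYSQL_ROOT_PASSWORD=changeme mysql
```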

Can docker containers share a directory amongst them

Is it possible to share a directory between docker instances to allow the different docker instances / containers running on the same server directly share access to some data?
You can mount the same host directory to both containers docker run -v /host/shared:/mnt/shared ... or use docker run --volumes-from=some_container to mount a volume from another container.
Yes, this is what "Docker volumes" are. See Managing Data in Containers:
Mount a Host Directory as a Data Volume
[...] you can also mount a directory from your own host into a container.
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This will mount the local directory, /src/webapp, into the container as the /opt/webapp directory.
[...]
Creating and mounting a Data Volume Container
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named Data Volume Container, and then to mount the data from it.
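A minimal sketch of the host-directory variant with two containers (alpine as a stand-in image; /tmp/shared is an illustrative host path):

```shell
# A host directory both containers will mount.
mkdir -p /tmp/shared

# Container A writes into the shared directory...
docker run --rm -v /tmp/shared:/mnt/shared alpine \
    sh -c 'echo hello > /mnt/shared/msg.txt'

# ...and container B reads the same file through its own mount.
docker run --rm -v /tmp/shared:/mnt/shared alpine cat /mnt/shared/msg.txt
```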