Can docker containers share a directory amongst them - docker

Is it possible to share a directory between Docker containers running on the same server, so that the different containers can directly access the same data?

You can mount the same host directory into both containers (docker run -v /host/shared:/mnt/shared ...), or use docker run --volumes-from=some_container to mount the volumes of another container.
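A minimal sketch of the first approach, assuming a host directory /host/shared and the alpine image (both placeholders):

```shell
# One container writes into the shared host directory...
docker run --rm -v /host/shared:/mnt/shared alpine \
    sh -c 'echo "hello from container 1" > /mnt/shared/note.txt'

# ...and a second container reads the same file
docker run --rm -v /host/shared:/mnt/shared alpine \
    cat /mnt/shared/note.txt
```

Both containers see the same files because they bind-mount the same host path.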

Yes, this is what "Docker volumes" are. See Managing Data in Containers:
Mount a Host Directory as a Data Volume
[...] you can also mount a directory from your own host into a container.
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This will mount the local directory, /src/webapp, into the container as the /opt/webapp directory.
[...]
Creating and mounting a Data Volume Container
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named Data Volume Container, and then to mount the data from it.
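The Data Volume Container pattern described in the quote can be sketched like this (container name, volume path, and image are placeholders):

```shell
# Create a named data volume container; it runs no long-lived process
docker create -v /shared-data --name datastore alpine /bin/true

# Other containers mount its volumes with --volumes-from
docker run --rm --volumes-from datastore alpine \
    sh -c 'echo "persisted" > /shared-data/file.txt'
docker run --rm --volumes-from datastore alpine \
    cat /shared-data/file.txt
```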

How to access files in host from a Docker Container?

I have a Docker Ubuntu Bionic container on an Ubuntu server host. From the container I can see that the host drive is mounted as /etc/hosts, which is not a directory. I tried unmounting and remounting at a different location, but it throws a permission denied error, even when I try as root.
So how do you access the contents of your host system?
Firstly, /etc/hosts is a networking file present on all Linux systems; it is not related to drives or to Docker.
Secondly, if you want to access part of the host filesystem inside a Docker container you need to use volumes. Using the -v flag in a docker run command you can specify a directory on the host to mount into the container, in the format:
-v /path/on/host:/path/inside/container
for example:
docker run -v /path/on/host:/path/inside/container <image_name>
Example, with container id 32162f4ebeb0 (both commands run from the host shell):
# copy a file out of the container
docker cp 32162f4ebeb0:/dir_inside_container/image1.jpg /dir_inside_host/image1.jpg
# copy a file into the container
docker cp /dir_inside_host/image1.jpg 32162f4ebeb0:/dir_inside_container/image1.jpg
Docker directly manages the /etc/hosts files in containers. You can't bind-mount a file there.
Hand-maintaining mappings of host names to IP addresses in multiple places can be tricky to keep up to date. Consider running a DNS server such as BIND or dnsmasq, or using a hosted service like Amazon's Route 53, or a service-discovery system like Consul (which incidentally provides a DNS interface).
If you really need to add entries to a container's /etc/hosts file, the docker run --add-host option or Docker Compose extra_hosts: setting will do it.
As a general rule, a container can't access the host's filesystem, except to the extent that the docker run -v option maps specific directories into a container. Also as a general rule you can't directly change mount points in a container; stop, delete, and recreate it with different -v options.
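If you do need extra host entries, a sketch of the --add-host option mentioned above (the host name and IP are placeholders):

```shell
# Adds "10.0.0.5  db.internal" to the container's /etc/hosts at startup
docker run --rm --add-host=db.internal:10.0.0.5 alpine cat /etc/hosts
```

The Docker Compose equivalent is an extra_hosts: list on the service.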
Run this command to link a local folder to the docker container:
docker run -it -v "$(pwd)":/src centos
pwd: the present working directory (any directory can be used), and
src: the path inside the container that pwd is linked to

Docker container to NAS Storage

I'm trying to mount a storage volume inside the container. As I would on Linux, I ran the command below:
mount 10.#.##.###:/nvol1 /tmp
This gives an "Access denied" error. I have added the container and host IPs to the storage to allow traffic from both the container and the host server, but I still cannot mount the storage volume in the container. Am I missing something? I'm using the CentOS operating system.
Edit Note: I have already mounted the storage on the Docker host successfully and shared it between the host filesystem and the container. However, this new test case requires mounting the storage volume directly in the container, not on the Docker host.
You won't be able to run a mount command from inside of the container without disabling some of the isolation that docker provides (otherwise an untrusted app could mount the host root filesystem and escape). Docker prevents this by removing various capabilities from the root user inside the container.
For an NFS mount, you would typically mount this as a volume into the container in one of two ways:
Mount the NFS directory on the host, and map the host directory into the container. This allows you to manage the volume directly on the host in addition to inside the container.
Mount the NFS directory as a volume directly into the container.
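For option 1, a minimal sketch (reusing the example NFS address from option 2 below; the host mount point is a placeholder, and CentOS needs the nfs-utils package installed):

```shell
# On the host: mount the NFS export
sudo mkdir -p /mnt/nvol1
sudo mount -t nfs 10.1.23.123:/nvol1 /mnt/nvol1

# Then bind-mount the host directory into the container
docker run -v /mnt/nvol1:/tmp your_image
```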
For option 2, you can define the volume with something like:
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.1.23.123,rw \
--opt device=:/nvol1 \
nvol1
$ docker run -v nvol1:/tmp your_image
Edit: to skip the docker volume create step, you can do this from a run command with the --mount option:
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo

docker share folder from container to host system

I have a container with code in it. This container runs on a production server. How can I share the folder with code in this container to my local machine? Maybe with a Samba server, and then mount (cifs) this folder with code on my machine? Maybe some examples...
Using
docker cp <containerId>:/file/path/within/container /host/path/target
you could copy some data from the container. If the data in the container and on your machine need to be in sync constantly I suggest you use a data volume to share a directory from your server with the container. This directory can then be shared from the server to your local machine with any method (e.g. sshfs)
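For example, if the shared directory on the server is /srv/app (a placeholder path), you could mount it on your local machine with sshfs:

```shell
# On your local machine: mount the server directory over SSH
mkdir -p ~/remote-app
sshfs user@production-server:/srv/app ~/remote-app

# ...edit files locally, then unmount when done
fusermount -u ~/remote-app    # Linux; use `umount ~/remote-app` on macOS
```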
The docker documentation about Manage data in containers shows how to add a volume:
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This creates a volume at /webapp inside the container; Docker stores its contents on the server, under /var/lib/docker/volumes by default.

How can I create a docker volume container in specific directory?

I have an SSD drive mounted at /ssd. I'd like to create a docker volume container for a MySQL server that will be run in a container, and have it use this SSD drive for data storage. It seems that by default, docker creates data volume containers in /var/lib/docker. How can I force docker to use the SSD drive for the data volume container?
The point of containers is that they're decoupled; there's no per-container way to choose where Docker keeps its internal volume storage.
However what I do for my databases is pass through mount:
docker create -v /ssd:/path/to/data --name data_for_mysql <imagename> /bin/true
Then you can run with --volumes-from data_for_mysql. It will write directly on the filesystem.
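Putting it together for MySQL, a sketch (the SSD path and the password are placeholders):

```shell
# Data container whose volume maps MySQL's data directory onto the SSD
docker create -v /ssd/mysql:/var/lib/mysql --name data_for_mysql mysql /bin/true

# MySQL container reusing that volume definition
docker run -d --volumes-from data_for_mysql \
    -e MYSQL_ROOT_PASSWORD=secret mysql
```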

why does docker have docker volumes and volume containers

Why does docker have docker volumes and volume containers? What is the primary difference between them. I have read through the docker docs but couldn't really understand it well.
Docker volumes
You can use Docker volumes to mount a folder of your host into a container. E.g. you could mount the folder /var/log of your Linux host into your container like this:
docker run -d -v /var/log:/opt/my/app/log:rw some/image
This would create a folder called /opt/my/app/log inside your container, backed by /var/log on your Linux host. You could use this to persist data or to share data between your containers.
Docker volume containers
Now, if you mount a host directory to your containers, you somehow break the nice isolation Docker provides. You will "pollute" your host with data from the containers. To prevent this, you could create a dedicated container to store your data. Docker calls this container a "Data Volume Container".
This container will have a volume which you want to share between containers, e.g.:
docker run -d -v /some/data/to/share --name MyDataContainer some/image
This container will run some application (e.g. a database) and has a folder called /some/data/to/share. You can share this folder with another container now:
docker run -d --volumes-from MyDataContainer some/image
This container will also see the same volume as in the previous command. You can share the volume between many containers as you could share a mounted folder of your host. But it will not pollute your host with data - everything is still encapsulated in isolated containers.
My resources
https://docs.docker.com/userguide/dockervolumes/
