Docker shared volumes create and attach problem

I have a question about Docker shared volumes.
I know that if I run a container with the -v option I create a volume that I can share with another container via --volumes-from:
docker run -d -v DataVolume1:/datavolume1 --name container1 image1:v1.0.0
docker run --name container2 --volumes-from container1 image2:v1.0.0
But I don't fully understand this behaviour. It seems as if the volume in container 1 is a master and the same volume in container 2 is a slave. So can container 2 write while container 1 only reads, or is it the other way around?
Why can't I use the -v option on all my containers, like this?
docker run -d -v DataVolume1:/datavolume1 --name container1 image1:v1.0.0
docker run -d -v DataVolume1:/datavolume1 --name container2 image2:v1.0.0
or create a volume with:
docker volume create --name DataVolume1
and then attach it to the two containers with:
docker run -d -v DataVolume1:/datavolume1 --name container1 image1:v1.0.0
docker run -d -v DataVolume1:/datavolume1 --name container2 image2:v1.0.0
Is there some problem because each -v recreates the volume and cuts the link with the previous container? Or something else?
With two "docker run -v" commands I could also specify a different mount path for the same volume, which would suit me better, but I have never seen anyone do it this way, so what's the problem?
Thanks in advance!
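For what it's worth, here is a quick sketch to test the named-volume approach yourself (image names and paths are just the placeholders from the question). In current Docker, -v DataVolume1:/path reuses an existing named volume rather than recreating it, so both containers should see the same data read-write:
docker volume create --name DataVolume1
docker run -d -v DataVolume1:/datavolume1 --name container1 image1:v1.0.0
docker run -d -v DataVolume1:/datavolume2 --name container2 image2:v1.0.0
docker exec container1 sh -c 'echo hello > /datavolume1/test.txt'
docker exec container2 cat /datavolume2/test.txt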

Related

Volume empty after container restart

I am finding that the volume is empty after restarting a container.
The command for the first start of the container is:
docker run -d --name ${dockerId} --memory="512m" --restart=always -v volume-${dockerId}:/app/public:Z -p ${port}:80 --network my-network ${dockerId}:v1
Any idea what can be done?
I found a great solution: using the Convoy volume plugin for Docker.
https://sysadmins.co.za/guide-to-setup-ranchers-convoy-volume-driver-for-docker-swarm-with-nfs/
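As a rough sketch, once the Convoy plugin from that guide is installed, a volume backed by it might be created and used like this (the volume and image names here are placeholders):
docker volume create --driver convoy --name volume-myapp
docker run -d --name myapp -v volume-myapp:/app/public -p 8080:80 myapp:v1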

Start a Docker volume and give it a name

I want to know how to start a Docker container with a named volume. I've tried this:
docker run -it --name container1 -v path:path --name volumename image bin/bash
But the container was also named "volumename".
How can I resolve this issue?
The second --name flag overrides the first, which is why your container ended up named "volumename": --name names the container, while a volume is named on the left-hand side of -v. First of all, if you don't have an existing volume, you have to create one using:
docker volume create --name [volume name]
For instance:
docker volume create --name namedvolume
Once you have done this, you can check whether it has been created. Just type:
docker volume ls
If it shows the volume you created, everything went right.
Last step: mount the volume:
docker run -v [volume name]:[container directory]
For instance, with a concrete directory:
docker run -it --name container -v data-volume:/data image /bin/bash
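To double-check that the volume is mounted where you expect, you can inspect the container (container name as in the example above):
docker inspect -f '{{ .Mounts }}' container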

Can we run docker inside a docker container which is running in a virtual-box of Ubuntu 18.04?

I want to run Docker inside another Docker container. My main container is running in a VirtualBox VM with Ubuntu 18.04 on my Windows 10 machine. When I try to run it, I get:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
How can I resolve this issue?
Yes, you can do this. Check the dind (Docker-in-Docker) image on Docker Hub for how to achieve it: https://hub.docker.com/_/docker
Your error indicates that either dockerd in the top-level container is not running, or you didn't mount docker.sock into the dependent container so it can communicate with the dockerd running in your top-level container.
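For reference, a minimal dind run as described on that page might look like this (the container name is a placeholder; the inner daemon needs --privileged and a moment to start):
docker run --privileged -d --name dind-test docker:dind
docker exec dind-test docker version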
I am running electric-flow in a docker container in my Ubuntu virtual-box using this docker command: docker run --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -i -t ecdocker/eflow-ce. Inside this docker container, I want to install and run docker so that my CI/CD pipeline in electric-flow can access and use docker commands.
From your description above, ecdocker/eflow-ce is your CI/CD solution container, and you just want to use the docker command inside it, so you do not need the dind solution. You can simply talk to the host's Docker server from within the container.
Something like the following:
docker run --privileged --name efserver --hostname=efserver -d -p 8080:8080 -p 9990:9990 -p 7800:7800 -p 7070:80 -p 443:443 -p 8443:8443 -p 8200:8200 -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -i -t ecdocker/eflow-ce
Compared to your old command:
Add --privileged
Add -v $(which docker):/usr/bin/docker, so you can use the docker client in the container.
Add -v /var/run/docker.sock:/var/run/docker.sock, so the client in the container can reach the host's docker daemon.
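Once the container is up, you can check that the client inside it talks to the host daemon; this should list the host's containers, including efserver itself:
docker exec -it efserver docker ps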

How to properly share a folder between a few docker containers in read mode?

I have Docker installed on top of a CentOS system.
I tried to use a volume, but each new container deletes (or hides) the content of the folder to be shared.
My volume is always empty after a docker run.
In order to create my containers, I use
docker run -dit --name $CONTAINER_NAME -p $PORT:8080 \
-v $VOLUME_PATH:/opt/conf/ \
$IMAGE_NAME
I am aiming to share a folder from the host between a few Docker containers (for READ access), and I also want to write into this folder from the host.
What is an elegant way to do that?
One solution is to use Data Volume Containers.
First, create a data volume container
docker run -d --name <data-volume-name> -v /<data-volume-name> ubuntu
You can add any data you want in this container.
Then create the containers that will share data by using the option --volumes-from.
Let's create container foo and container bar using the shared data container:
docker run -it --name foo --volumes-from=<data-volume-name> ubuntu
docker run -it --name bar --volumes-from=<data-volume-name> centos
Enjoy yourself
Each container in my example mounts the volume at /<data-volume-name>, directly under the root folder.
From either bar or foo you can see /<data-volume-name> in the filesystem.
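A quick way to see the sharing in action, assuming both containers are still running (names as in the example above):
docker exec foo sh -c 'echo hello > /<data-volume-name>/test.txt'
docker exec bar cat /<data-volume-name>/test.txt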
You can also use a named volume.
Create the volume:
docker volume create --name <volume-name>
Create containers foo and bar that will be mapped to the volume:
docker run -dit --name foo -v <volume-name>:/path/in/container/ <image-name>
docker run -dit --name bar -v <volume-name>:/path/in/container/ <image-name>
Anything one container writes to the volume will be visible to the other.
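Since the goal is containers that only read while the host writes, a host bind mount with the ro flag may be the most direct fit; this is a sketch reusing the variables from the question:
docker run -dit --name $CONTAINER_NAME -p $PORT:8080 \
-v $VOLUME_PATH:/opt/conf/:ro \
$IMAGE_NAME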

Share the same resource between multiple containers in Docker

I need to set up one container's volume to be used by multiple containers.
For example:
Container 1 (web app 1): volume path -v /var/www/html/
Container 2 (web app 2): volume path -v /var/www/html/
Container 3 (common files): volume path -v /var/www/html/
I need the common files in Container 3 to be usable by the other two containers.
How can I achieve this?
You should name your volumes so you can mount them by name instead of by container. So:
docker run -d --name web1 -v web1-html:/var/www/html web-img
docker run -d --name web2 -v web2-html:/var/www/html web-img
docker run -d --name common -v web1-html:/var/www/web1/html \
-v web2-html:/var/www/web2/html your-img
With the volumes your two web apps create today, you'll see them listed with a GUID under docker volume ls. By giving them a name, you can easily reuse those volumes in other containers.
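As a quick sanity check (names as in the commands above), a file written through the common container should be visible from web1:
docker exec common sh -c 'echo hello > /var/www/web1/html/index.html'
docker exec web1 cat /var/www/html/index.html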
