delete docker volume does not remove the local file - docker

I run an nginx container with the following: docker container run -d --name nginx3 -p 85:80 -v $(pwd):/usr/share/nginx/html nginx. When I then add files in the container volume (/usr/share/nginx/html), they are also added locally in the $(pwd) folder.
But when I remove the container, image, and volume with docker rm -vf $(docker ps -aq) && docker rmi -f $(docker images -aq) && docker volume prune, the files in my local $(pwd) folder are still there. Why were they not deleted when I removed the volume?

That's because docker volume prune deletes Docker-managed volumes, not directories bind-mounted from the host.
If you instead define a volume with docker volume create nginx_volume and then use
docker container run -d --name nginx3 -p 85:80 -v nginx_volume:/usr/share/nginx/html nginx
the volume, and the data stored in it, will be deleted when you remove it.
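A minimal check of this, reusing the names above (a rough sketch):
docker volume create nginx_volume
docker container run -d --name nginx3 -p 85:80 -v nginx_volume:/usr/share/nginx/html nginx
docker container rm -f nginx3    # the container is gone, the volume still exists
docker volume rm nginx_volume    # this removes the volume and its data under /var/lib/docker/volumes/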

You are not using a Docker volume, you are using a bind mount. This distinction is not obvious with the -v syntax, which is why Docker recommends the newer --mount syntax for new users:
This creates a bind mount from the host OS. Docker does not own it and therefore does not delete the folder when the mount is removed.
docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Further reading
This creates a Docker volume, which is managed by Docker, so all volume commands can be applied to it:
docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
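For example, the usual volume commands now apply to it (a quick sketch; myvol2 matches the example above):
docker volume ls                # myvol2 is listed here
docker volume inspect myvol2    # shows its mountpoint under /var/lib/docker/volumes/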
Further reading

Related

Docker mount CIFS in run command

I want to mount a CIFS share for my docker container to use. This already worked by explicitly creating a docker volume like so:
docker volume create --driver local --opt type=cifs --opt o=vers=2.0,username=xxxx,password=xxxx --opt device=//servername/Documents Documents
docker run -i --rm -v Documents:/mnt busybox ls -l /mnt
However, I want to avoid having to create explicit volumes. Since this has worked similarly for NFS volumes, I now want to mount the volume in the docker run command directly:
docker run -i --rm --mount type=volume,volume-driver=local,dst=/mnt,volume-opt=type=cifs,volume-opt=o=vers=2.0,volume-opt=o=username=xxxx,volume-opt=device=//servername/Documents busybox ls -l /mnt
Here I am getting 'permission denied' errors, which makes sense since I have not supplied the password. How can I specify the password? All my attempts so far have produced different error messages.
docker run -i --rm --mount type=volume,volume-driver=local,dst=/mnt,volume-opt=type=cifs,volume-opt=o=vers=2.0,volume-opt=o=username=xxxx,password=xxxx,volume-opt=device=//servername/Documents busybox ls -l /mnt
gives me unexpected key 'password' in 'password=xxxx'
docker run -i --rm --mount type=volume,volume-driver=local,dst=/mnt,volume-opt=type=cifs,volume-opt=o=vers=2.0,volume-opt=o=username=xxxx,volume-opt=o=password=xxxx,volume-opt=device=//servername/Documents busybox ls -l /mnt
gives me password=xxxx: invalid argument.
My feeling is I need to somehow escape the comma but do not know how.
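One thing that may work (a sketch, not verified against this exact share): --mount is parsed as CSV, so a comma inside the o= option list has to be protected by double-quoting that single volume-opt field and wrapping the whole --mount value in single quotes:
docker run -i --rm \
  --mount 'type=volume,dst=/mnt,volume-driver=local,volume-opt=type=cifs,volume-opt=device=//servername/Documents,"volume-opt=o=vers=2.0,username=xxxx,password=xxxx"' \
  busybox ls -l /mnt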

How to mount gluster volume to host folder in docker?

I have run my docker container like this:
docker run -v /sys/fs/cgroup:/sys/fs/cgroup -v /opt/doc:/opt/doc \
--privileged=true --net=host -itd --name=gluster gluster-docker
Then I mount a gluster volume to a folder in the container:
mount -t glusterfs 192.168.1.100:/documents /opt/doc
When I write data to /opt/doc on my real server, the data is not synced to /opt/doc in the container.
Is there any way to sync data between the container and the server after I have mounted the folder?
gluster-docker: https://github.com/gluster/gluster-containers
Finally, I found --mount in docker-ce 17.06.
mount --bind /data/fff /data/fff
mount --make-shared /data/fff
docker run -v /sys/fs/cgroup:/sys/fs/cgroup -v /opt/doc:/opt/doc \
--privileged=true --net=host --mount \
type=bind,source=/data/fff,target=/data/fff,bind-propagation=rshared \
-itd --name=gluster gluster-docker
Then I mount the gluster volume to the folder in the container:
mount -t glusterfs 192.168.1.100:/documents /data/fff
OK.
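To verify the host path really is shared before starting the container, a quick check can help (hedged; exact findmnt output varies by distro):
findmnt -o TARGET,PROPAGATION /data/fff
# should report "shared" after the mount --make-shared step above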
https://docs.docker.com/engine/admin/volumes/bind-mounts/
https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt

How to add directory to existing Data container in Docker?

I have made a data container in Docker with the directory /tmp:
sudo docker create -v /tmp --name datacontainer ubuntu
I would like to add another directory, such as /opt, to this existing data container.
How can I do this?
You cannot add a new data volume to an existing (created or running) container.
With Docker 1.9+, you would instead use docker volume create:
docker volume create --name my-tmp
docker volume create --name my-opt
Then you can mount those volumes in any container you want (when you run those containers, not when they are already running):
docker run -d -P \
-v my-tmp:/tmp \
-v my-opt:/opt \
--name mycontainer myimage
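If the old data container already holds data you want to keep, one way to copy it into the new volume is a throwaway container that mounts both (a sketch; it assumes the /tmp volume from the question):
docker run --rm --volumes-from datacontainer -v my-tmp:/new-tmp ubuntu \
  cp -a /tmp/. /new-tmp/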

Mount volume to host

I am currently using Boot2Docker on Windows. Is it possible to mount root to host?
Say that I'm using an Ubuntu image and I would like to mount / to the host. How can I do so?
I've been looking around and trying:
docker run -v /c/Users/ubuntu:/ --name ubuntu -dt ubuntu
But I ended up with an error:
docker: Error response from daemon: Invalid bind mount spec "/c/Users/ubuntu:/": volumeslash: Invalid specification: destination can't be '/' in '/c/Users/Leon/ubuntu:/'.
If I understand correctly, you are trying to mount the container's root as a volume? If that is the case, create a new directory inside the container and expose that one instead.
For example, in a Dockerfile:
RUN mkdir /something
VOLUME /something
As the Docker documentation says, the container directory must always be an absolute path such as /src/docs. The host-dir can either be an absolute path or a name value.
For more information read this: https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume and the part "Mount a host directory as a data volume" should give you a better understanding.
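Putting it together, a rough sketch (the image tag my-ubuntu and the host path are placeholders, not from the question):
docker build -t my-ubuntu .
docker run -dt --name ubuntu -v /c/Users/ubuntu:/something my-ubuntu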
The problem is with how you are specifying the path. See this example of mounting local folders to be used by a MongoDB container:
docker run --name container-name -v /Users/SKausha3/mongo/imageservicedb/data:/data -v /Users/SKausha3/mongo/imageservicedb/backup:/backup
c:/Users/SKausha3/mongo/imageservicedb/data is my local folder, but you have to remove 'c:' from the path.
Since you can't mount "/", one option is to add a "WORKDIR" to your Dockerfile; that way all subsequent commands will be relative to that dir and you won't have to modify anything!
FROM python:latest
WORKDIR /myapp
COPY appfile.py appfile.py
In your docker image, the "appfile.py" file will be at /myapp/appfile.py.
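You can then bind-mount a host folder over that working directory instead of over / (a sketch; the tag myimage is a placeholder):
docker build -t myimage .
docker run -dt -v /c/Users/ubuntu:/myapp myimage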
You cannot mount the '/' root directory of the container itself, but you can mount each of the folders inside the root directory into Docker volumes.
Create the volumes by running these commands one by one, or put them in a bash script:
docker volume create var
docker volume create usr
docker volume create tmp
docker volume create sys
docker volume create srv
docker volume create sbin
docker volume create run
docker volume create root
docker volume create proc
docker volume create opt
docker volume create mnt
docker volume create media
docker volume create libx32
docker volume create lib64
docker volume create lib32
docker volume create lib
docker volume create home
docker volume create etc
docker volume create dev
docker volume create boot
docker volume create bin
Then run this command
docker run -it -d \
--name=ubuntu-container \
--mount source=var,destination=/var \
--mount source=usr,destination=/usr \
--mount source=tmp,destination=/tmp \
--mount source=sys,destination=/sys \
--mount source=srv,destination=/srv \
--mount source=sbin,destination=/sbin \
--mount source=run,destination=/run \
--mount source=root,destination=/root \
--mount source=opt,destination=/opt \
--mount source=mnt,destination=/mnt \
--mount source=media,destination=/media \
--mount source=libx32,destination=/libx32 \
--mount source=lib64,destination=/lib64 \
--mount source=lib32,destination=/lib32 \
--mount source=lib,destination=/lib \
--mount source=home,destination=/home \
--mount source=etc,destination=/etc \
--mount source=boot,destination=/boot \
--mount source=bin,destination=/bin \
ubuntu:latest

How can I add a volume to an existing Docker container?

I have a Docker container that I've created simply by installing Docker on Ubuntu and doing:
sudo docker run -i -t ubuntu /bin/bash
I immediately started installing Java and some other tools, spent some time with it, and stopped the container by
exit
Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker run -v /somedir ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume.
All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume.
Is this indeed the only way to add a volume to an existing container?
You can commit your existing container (that is, create a new image from the container's changes) and then run it with your new mounts.
Example:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS                          PORTS   NAMES
5a8f89adeead   ubuntu:14.04   "/bin/bash"   About a minute ago   Exited (0) About a minute ago           agitated_newton
$ docker commit 5a8f89adeead newimagename
$ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash
If it's all OK, stop your old container, and use this new one.
You can also commit a container using its name, for example:
docker commit agitated_newton newimagename
That's it :)
There is no way to add a volume to a running container, but to achieve this objective you can use the commands below to copy data in and out:
Copy files/folders between a container and the local filesystem:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
For reference see:
https://docs.docker.com/engine/reference/commandline/cp/
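For example, copying a directory out of a container and back in could look like this (a sketch; the container name and paths are placeholders):
docker cp mycontainer:/var/www/html ./html-backup
docker cp ./html-backup/. mycontainer:/var/www/html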
I've successfully mounted the /home/<user-name> folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Next, replace the contents with something like this (you can copy the proper contents from another container with the right settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
"/mnt": {
"Source": "/home/<user-name>",
"Destination": "/mnt",
"RW": true,
"Name": "",
"Driver": "",
"Type": "bind",
"Propagation": "rprivate",
"Spec": {
"Type": "bind",
"Source": "/home/<user-name>",
"Target": "/mnt"
},
"SkipMountpointCreation": false
}
}
Restart the docker service: service docker restart
This works for me with Ubuntu 18.04.1 and Docker 18.09.0
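If you are unsure which directory under /var/lib/docker/containers belongs to your container, you can look up the full ID first (a small sketch; mycontainer is a placeholder name):
CONTAINER_ID=$(docker inspect --format '{{.Id}}' mycontainer)
sudo ls /var/lib/docker/containers/"$CONTAINER_ID"/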
Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish.
As he also points out
This will not work on filesystems which are not based on block devices.
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).
Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that
YMMV
Unfortunately, the switch to mount a volume is only available on the run command.
docker run --help
-v, --volume list Bind mount a volume (default [])
There is a way you can work around this though so you won't have to reinstall the applications you've already set up on your container.
Export your container
docker container export -o ./myimage.docker mycontainer
Import as an image
docker import ./myimage.docker myimage
Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash
A note for anyone using Docker Windows containers, after I had to search for this problem for a long time!
Conditions:
Windows 10
Docker Desktop (latest version)
using Docker Windows Container for image microsoft/mssql-server-windows-developer
Problem:
I wanted to mount a host directory into my Windows container.
Solution, as partially described here:
create docker container
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
go to command shell in container
docker exec -it <CONTAINERID> cmd.exe
create DIR
mkdir DirForMount
stop container
docker container stop <CONTAINERID>
commit container
docker commit <CONTAINERID> <NEWIMAGENAME>
delete old container
docker container rm <CONTAINERID>
create new container with new image and volume mounting
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME>
After this, I had solved the problem for Docker Windows containers.
My answer is a little different. You can stop your container, add the volume, and restart it. To do so, follow these steps:
docker volume create ubuntu-volume
docker stop <container-name>
sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash
You can stop and remove the container, add the new volume to the startup script, and recreate the container from the image. As long as the already existing volumes keep the data, you shouldn't experience any loss of information. This should also work the same way with a Dockerfile and Docker Compose.
E.g. (Solr image):
(initial script)
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Script with the second volume added:
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Use a symlink to an already mounted path:
ln -s Source_path target_path_which_is_already_mounted_on_the_running_docker
The best way is to copy all the files and folders to a directory on your local file system with: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
SRC_PATH is in the container
DEST_PATH is on the local host
Then do docker-compose down, attach a volume pointing at the same DEST_PATH, and run the Docker containers again with docker-compose up -d.
Add the volume as follows in docker-compose.yml:
volumes:
- DEST_PATH:SRC_PATH
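A fuller sketch of what that compose file could look like (the service name and image are placeholders):
# docker-compose.yml
services:
  myservice:
    image: myimage
    volumes:
      - ./DEST_PATH:/SRC_PATH   # host path : container path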
