Mount volume to host - docker

I am currently using Boot2Docker on Windows. Is it possible to mount root to host?
Say that I'm using an Ubuntu image and I would like to mount / to the host. How can I do so?
I've been looking around and trying:
docker run -v /c/Users/ubuntu:/ --name ubuntu -dt ubuntu
But I ended up with an error:
docker: Error response from daemon: Invalid bind mount spec "/c/Users/ubuntu:/": Invalid specification: destination can't be '/' in '/c/Users/Leon/ubuntu:/'.

If I understand correctly, you are trying to mount the container's root directory as a volume? If that is the case, create a new directory inside the container instead and expose that one.
For example, in your Dockerfile:
RUN mkdir /something
VOLUME /something
As the Docker documentation says, the container directory must always be an absolute path such as /src/docs. The host-dir can either be an absolute path or a name value.
For more information, read https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume; the section "Mount a host directory as a data volume" should give you a better understanding.
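For example, a minimal sketch (reusing the host path from your command) that mounts the folder into a subdirectory instead of /:
docker run -v /c/Users/ubuntu:/something --name ubuntu -dt ubuntu
Inside the container, the host files then show up under /something.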

The problem is how you are specifying the path. See this example of mounting a local volume to be used by a container for MongoDB:
docker run --name container-name -v /Users/SKausha3/mongo/imageservicedb/data:/data -v /Users/SKausha3/mongo/imageservicedb/backup:/backup
c:/Users/SKausha3/mongo/imageservicedb/data is my local folder, but you have to remove 'c:' from the path.

Since you can't mount "/", one option is to add a WORKDIR to your Dockerfile; that way all subsequent commands will be relative to that directory and you won't have to modify anything!
FROM python:latest
WORKDIR /myapp
COPY appfile.py appfile.py
In your docker image, the "appfile.py" file will be at the /myapp/appfile.py location.
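As a quick check (the mypythonapp tag is just an assumed name), you can build the image and list the working directory to confirm where the file ends up:
docker build -t mypythonapp .
docker run --rm mypythonapp ls /myapp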

You cannot mount the '/' root directory of the container, but you can mount each of the folders under the root directory into its own Docker volume.
Create the volumes by running these commands one by one, or use a loop like the one sketched after the list:
docker volume create var
docker volume create usr
docker volume create tmp
docker volume create sys
docker volume create srv
docker volume create sbin
docker volume create run
docker volume create root
docker volume create proc
docker volume create opt
docker volume create mnt
docker volume create media
docker volume create libx32
docker volume create lib64
docker volume create lib32
docker volume create lib
docker volume create home
docker volume create etc
docker volume create dev
docker volume create boot
docker volume create bin
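If you prefer a script, a small bash loop over the same volume names does the job:
for v in var usr tmp sys srv sbin run root proc opt mnt media libx32 lib64 lib32 lib home etc dev boot bin; do
  docker volume create "$v"
done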
Then run this command:
docker run -it -d \
--name=ubuntu-container \
--mount source=var,destination=/var \
--mount source=usr,destination=/usr \
--mount source=tmp,destination=/tmp \
--mount source=sys,destination=/sys \
--mount source=srv,destination=/srv \
--mount source=sbin,destination=/sbin \
--mount source=run,destination=/run \
--mount source=root,destination=/root \
--mount source=opt,destination=/opt \
--mount source=mnt,destination=/mnt \
--mount source=media,destination=/media \
--mount source=libx32,destination=/libx32 \
--mount source=lib64,destination=/lib64 \
--mount source=lib32,destination=/lib32 \
--mount source=lib,destination=/lib \
--mount source=home,destination=/home \
--mount source=etc,destination=/etc \
--mount source=boot,destination=/boot \
--mount source=bin,destination=/bin \
ubuntu:latest
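To confirm which volumes actually got attached, you can inspect the container's mounts (container name taken from the command above):
docker inspect --format '{{ json .Mounts }}' ubuntu-container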

Related

delete docker volume does not remove the local file

I run an nginx container with the following: docker container run -d --name nginx3 -p 85:80 -v $(pwd):/usr/share/nginx/html nginx. When I add files in the container volume (/usr/share/nginx/html), they also appear locally in the $(pwd) folder.
But when I remove the container, image, and volume with docker rm -vf $(docker ps -aq) && docker rmi -f $(docker images -aq) && docker volume prune, the files in my local $(pwd) folder are still there. Why were they not deleted when I removed the volume?
That's because docker volume prune deletes Docker volumes, not directories mounted from the host.
If you define a volume with docker volume create nginx_volume and then use
docker container run -d --name nginx3 -p 85:80 -v nginx_volume:/usr/share/nginx/html nginx
then the volume will be deleted when you prune it.
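For the bind mount in your command, the files under $(pwd) belong to the host and have to be removed manually. A short sketch (names from the example above) of removing the named-volume variant explicitly:
docker rm -f nginx3
docker volume rm nginx_volume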
You are not using a docker volume, you are using a bind mount. This was not very clear with the -v syntax, which is why docker recommends the newer --mount syntax for new users:
This creates a bind mount from the host OS. Docker does not own it and therefore does not delete the folder when you remove the container.
docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Further reading: https://docs.docker.com/storage/bind-mounts/
This creates a Docker volume, which is managed by Docker, so all of the volume commands can be applied to it:
docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
Further reading: https://docs.docker.com/storage/volumes/
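As a rough illustration (names from the example above), the named volume outlives the container and is only removed through the volume commands:
docker rm -f devtest        # the container is gone, myvol2 and its data remain
docker volume rm myvol2     # this is what actually deletes the data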

Dockerfile VOLUME command, data is lost when mounted as a bind mount

I created a Dockerfile and ran the container with a bind mount, but the contents are lost (no content):
FROM alpine:3.8
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
I expected "myvol" under $(pwd) to contain "greeting", but that is not the case:
root@default:/home/docker# docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
/myvol #
However, the same works fine if it is mounted the following way
docker@default:~$ docker run -it --name volDemo1 -v myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
1.txt greeting
/myvol # exit
Is this the expected behavior? Will the VOLUME instruction work only with named volumes and not with bind mounts?
This is how bind mounts work. They mount one folder at another path in the filesystem, and all access to the target path gets mapped directly back to the source directory.
What Docker provides for a named volume (your second example) is an initialization step when that named volume is empty on container creation. Docker copies all files, directories, and metadata such as file owner and permissions from the image filesystem into the named volume before the container is started. This only happens with named volumes, not host mounts or tmpfs mounts, and it only happens when the named volume is empty, so the volume will not be updated as you change the image.
You can make a named volume that mounts other directories on the host by passing additional options, giving you something between a host mount and default named volume, since they are both implemented with a bind mount. Three different examples of that are shown below:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
If your goal is to keep things simple for other users of the image, and potentially update the volume with new versions of the image, then you'll want to do this as part of your entrypoint script. I do this in my volume caching scripts included in my base image. You copy the volume directory to a safe location inside the image, and then on container startup, the entrypoint script will copy the files into the volume.
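A minimal sketch of that pattern, assuming the image keeps a pristine copy of the data at /myvol.orig and uses a hypothetical entrypoint.sh (this is not the author's actual script):
#!/bin/sh
# entrypoint.sh: seed an empty mounted /myvol from the copy baked into the image
if [ -z "$(ls -A /myvol 2>/dev/null)" ]; then
  cp -a /myvol.orig/. /myvol/
fi
exec "$@"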

How to copy a file or folder to a Docker container?

I tried to use
docker cp :
but it always says
Error: No such container:path: Docker container ID..
Can you tell me how to find the path of the Docker container (location)? I think the Windows file path is correct.
If you want to do it using a Dockerfile, you can use the COPY instruction. If instead you want to do it with the docker run command, you can do it the following way:
docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
or, if you want to use the -v shorthand for a docker volume:
docker run -d \
--name devtest \
-v myvol2:/app \
nginx:latest
Reference: https://docs.docker.com/storage/volumes/
In your case, as you want to mount a host directory into the container, it'll be the former.
You can copy files/folders into a container using the COPY or ADD instruction in a Dockerfile:
COPY <host-source> <container-dest>
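For instance, a minimal Dockerfile sketch (the ./html folder is just an assumed example):
FROM nginx:latest
COPY ./html/ /usr/share/nginx/html/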
Or else you can mount volumes with docker run or in the docker-compose.yml file.
E.g. with docker run:
docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
Eg. for docker-compose.yml file
volumes:
  - db-data:/var/lib/mysql/data
Volumes: https://docs.docker.com/storage/volumes/
docker-compose.yml: https://docs.docker.com/compose/compose-file/
To use docker cp, run it as shown:
docker cp ./path/to/local/folder/. ContainerName:/app/
You have to provide the container ID or name in place of ContainerName.
docker ps --filter status=running
The above command shows the running containers, from which you can pick the target container's name or ID (the first 4 or so characters of the ID are enough).
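Putting it together, assuming the running container is named devtest, the flow might look like this:
docker ps --filter status=running                  # find the container name or ID
docker cp ./path/to/local/folder/. devtest:/app/
docker exec devtest ls /app                        # check that the files arrived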

Docker volume backup error: Tar: MYCONTAINER_VOLUME: Cannot stat: No such file or directory

I'm trying to backup my volume as described here in the docker documentation: https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes
I'm running the command with the path to the volume:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/docker/volumes/MYCONTAINER_VOLUME
... and also trying with just the name of my volume
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar MYCONTAINER_VOLUME
but no matter what I get an error like: tar: MYCONTAINER_VOLUME: Cannot stat: No such file or directory
This volume was created and linked to the container with docker-compose, and it's using the local driver.
When I run docker volume ls I get:
DRIVER VOLUME NAME
local MYCONTAINER_VOLUME
Can someone please tell me what I'm doing wrong here?
I figured out what the issue was -
The last part of the command should be the path of the volume mounted in the CONTAINER, not the path of the volume on the HOST.
So basically, the formula for this command should be:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/MY_BACKUP.tar /PATH/INSIDE/CONTAINER/TO/VOLUME/data
... and this will create MY_BACKUP.tar in the current directory of the HOST.
Also, make sure to STOP the container before archiving the volume if it's something like postgres, as in my case.
Then, to restore the volume if you're using docker-compose (since I had trouble with this too because the documentation isn't specific to preexisting containers / volumes created this way)
1) STOP the container
2) Make sure MY_BACKUP.tar is in the root project directory of the HOST
3) run
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/MY_BACKUP.tar"
4) restart container
Hope this helps someone and I'm certainly open to any ideas to streamline this.
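For reference, the whole cycle condensed into one sketch (using the same placeholder names as above):
# backup
docker stop MYCONTAINER
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/MY_BACKUP.tar /PATH/INSIDE/CONTAINER/TO/VOLUME/data
# restore
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/MY_BACKUP.tar"
docker start MYCONTAINER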
The documentation assumes your container has a volume associated with it.
Meaning: your container was started with a volume.
Example:
$ docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
At the very least, check whether you have any volumes created with:
docker volume ls
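If the volume shows up there, inspecting the container tells you where it is mounted inside the container, which is the path to hand to tar (container name from the question):
docker inspect --format '{{ json .Mounts }}' MYCONTAINER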

How to add directory to existing Data container in Docker?

I have made a data container in Docker with the directory /tmp:
sudo docker create -v /tmp --name datacontainer ubuntu
I would like to add another directory, such as /opt, to this existing data container.
How can I do this?
You cannot add a new data volume to an existing (created or running) container.
With Docker 1.9+, you would instead use docker volume create:
docker volume create --name my-tmp
docker volume create --name my-opt
Then you can mount those volumes into any container you want (when you run those containers, not when they are already running):
docker run -d -P \
-v my-tmp:/tmp \
-v my-opt:/opt \
--name mycontainer myimage
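If you also want other containers to see the same data, similar to the old data-container pattern, you can reuse the named volumes or pull in an existing container's volumes with --volumes-from (names here are the ones from the example above):
docker run -d --volumes-from mycontainer --name anothercontainer myimage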
