Where are Docker volumes located? - docker

I need to know where Docker volumes are located when using Docker Machine on macOS.
The installation uses boot2docker, so a VM is running behind the scenes.
Example:
docker volume create test-data
docker inspect shows a path, but where can I find the specific (physical) location?

It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
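For example, here is roughly what that looks like (a sketch; the exact Mountpoint depends on your Docker version):
docker volume inspect test-data --format '{{ .Mountpoint }}'
# /var/lib/docker/volumes/test-data/_data
ls /var/lib/docker/volumes/test-data/_data
# ls: No such file or directory -- the path only exists inside the Linux VM, not on the macOS host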

macOS uses a virtual machine, so it's different from Linux, where you can access volumes directly under /var/lib/docker/volumes.
On macOS you have to connect to that VM to find your volumes.
Suppose you use persistent data volumes in Docker and you want to access them from the command line.
If your Docker host is Linux, that's not a problem; you can find Docker volumes under the /var/lib/docker/volumes path.
However, that's not the case when you use Docker for Mac.
Try cd /var/lib/docker/volumes from your macOS terminal; you'll get nothing.
You see, your Mac machine isn't the real Docker host. Docker for Mac runs a virtual machine and hides it from you to keep things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect to that VM.
To accomplish this, we need to use a serial terminal on the Mac. There's a terminal application called "screen" that's going to help us.
We need to "screen into" the Docker VM by executing:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you're inside Docker's VM and you can cd into the volumes directory by typing: cd /var/lib/docker/volumes
Profit, you got there!
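Once at the VM prompt, a typical session looks like this (a sketch; your volume names will match whatever you created):
cd /var/lib/docker/volumes
ls
# test-data
ls test-data/_data
# the actual files stored in the volume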
If you need to transfer files from your macOS host into the Docker host, you can refer to the File Sharing documentation.
Hope this helps!

If you have installed Docker using snap, then volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/

Location of volumes when using the official Docker install:
/var/lib/docker/volumes/
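For example, on a Linux host (the directory is owned by root, so you'll likely need sudo; a hedged sketch):
sudo ls /var/lib/docker/volumes/
# metadata.db  test-data
sudo ls /var/lib/docker/volumes/test-data/_data/
# the volume's contents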

Normally, if you want to "know" where a volume lives, you would want to map a volume to the local filesystem. When you create a named volume you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
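The output looks roughly like this (a sketch; the values depend on your setup, and on macOS or Windows the Mountpoint refers to a path inside the hidden VM, not on your host):
[
    {
        "CreatedAt": "2021-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/test-data/_data",
        "Name": "test-data",
        "Options": {},
        "Scope": "local"
    }
]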

Related

Where can Docker Compose volumes be found on a Windows host?

I have a docker-compose file with a volumes section for a given container:
video-streaming:
  image: video-streaming
  build:
    context: ./video-streaming
    dockerfile: Dockerfile-dev
  container_name: video-streaming
  volumes:
    - /tmp/history/npm-cache:/root/.npm:z
I'm running Docker on Windows and the image is Linux-based.
When I enter the container and add a file to /root/.npm, then stop the container and run it again, the file is still there, so this volume works. But the question is: where can I find its location on the Windows host?
You should find the volumes in C:\ProgramData\docker\volumes. The filename will be a hash, which you can check with docker inspect.
If not, then note that you are simply mounting a host directory /tmp/history/npm-cache to your container. This directory is your volume.
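You can confirm what the container actually mounted by inspecting it (a sketch; the output below is abbreviated and will differ on your machine):
docker inspect video-streaming --format '{{ json .Mounts }}'
# [{"Type":"bind","Source":"/tmp/history/npm-cache","Destination":"/root/.npm",...}]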
When using Docker for Windows, the question is whether you are using the old Docker Toolbox or the newer versions that use WSL/WSL2.
Docker Desktop configured for Linux containers and WSL/WSL2
The Docker engine is actually not running on Windows but inside the WSL instance; Docker Desktop just makes the docker commands available on Windows for ease of use.
So the volumes are probably inside that WSL instance (Linux).
You can find out what WSL instances you have by typing wsl -l in PowerShell.
Their file systems are available under the \\wsl$ path on Windows.
In your case, the volume is not named; it's in the exact location you specified for it:
/tmp/history/npm-cache, but inside the WSL instance that the Docker engine is installed on.
Through WSL
In PowerShell, run wsl ls /tmp/history; you should see npm-cache there.
The wsl command lets you pipe Linux commands that will run on the actual Linux WSL instance (the default one), which is probably the one running the Docker engine.
Alternatively, you can connect to that Linux instance by just typing wsl and going to that path: cd /tmp/history
Once inside the WSL instance, you can run explorer.exe . to open Windows Explorer at that location.
Notice that the path will always start with \\wsl$, so you can go to that path in Windows Explorer and see all of your WSL instances and their file systems; try searching for "npm-cache" in Explorer, you might find it.
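Putting the WSL route together in PowerShell (a sketch, assuming the default WSL instance is the one running the engine):
wsl -l
# lists your instances, e.g. docker-desktop, docker-desktop-data
wsl ls /tmp/history
# npm-cache
wsl                  # drop into the default instance
cd /tmp/history/npm-cache
explorer.exe .       # opens this directory in Windows Explorer via \\wsl$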
via Docker commands
docker volume ls will give you all of the available volumes. Yours is not named, so it's probably one of the 'UUID' ones. You can inspect each one to find its location (probably still inside the WSL instance):
docker volume inspect {the-uuid-of-the-volume}
Once you inspect it, you will see that each volume has a Mountpoint field which points to the location of the volume (inside the WSL instance).
Unnamed volumes are created with different permissions than your user, so you might need sudo to interact with them via the WSL terminal.
If you go through Windows Explorer on \\wsl$, you might not need extra permissions.
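An end-to-end sketch of that flow (the volume ID below is made up for illustration):
docker volume ls
# DRIVER    VOLUME NAME
# local     f2d1e3a0b4c5...
docker volume inspect f2d1e3a0b4c5... --format '{{ .Mountpoint }}'
# /var/lib/docker/volumes/f2d1e3a0b4c5.../_data   (a path inside the WSL instance)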

Access a Host Folder from a Docker Container without the run -v Option

I want to share access between my host (Ubuntu), or an NFS server, and a container or image (Ubuntu). I can't use the -v option, since the container is started by a program that only accepts the container name and runs it itself. Copying is not possible since the folder is big and the content might change regularly.
Mounting NFS inside the container throws the error "Protocol not supported" (done the same way as on the host).
So far I've gathered that a "hardcoded" mount is not possible for images and that NFS mounts might not work with Docker.
I'd be open to some "hacky" solutions as well if Docker doesn't support this.
Bind mounts (the docker run -v option) are the only way to do this. It's considered a major design goal and security feature of Docker that containers can't generally access the host filesystem, so it'd be a major bug if there was some way to bypass this isolation.
You need to change the calling code to include the -v option, or rebuild your image to embed the data you need (if it's read-only).
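For example, if the calling program ultimately runs something like docker run my-image, the fix is to add the bind mount there (a sketch with hypothetical paths and image name):
docker run -v /srv/shared-data:/data my-image
# /srv/shared-data on the host is now visible as /data inside the container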

How can I use a local file in a container?

I'm trying to create a container to run a program. I'm using a preconfigured image, and now I need to run the program. However, it's a machine-learning program and I need a dataset from my computer to run it.
The file is too large to be copied into the container. It would be best if the program running in the container looked up the dataset in a local directory on my computer, but I don't know how to do this.
Is there any way to make this reference with some Docker command? Or using a Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
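Since a dataset is typically only read by the training program, you can also mount it read-only by appending :ro to the same command:
docker run -v /Users/andy/mydata:/mnt/mydata:ro myimage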
Keep in mind, if you are using Docker for Mac or Docker for Windows there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory. So, you can mount files or directories on macOS using [...]
Update July 2019:
I've updated the documentation link and naming to be correct. These types of mounts are called "bind mounts". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mount paths on the host).

How to sync a local Mac directory with a native Docker container?

I am using native Docker for Mac and I have a small application running in a Docker container.
Currently I am manually copying the data from my Mac to the Docker container using the docker cp command.
I want to make this dynamic: I want to put the data in a local directory, which should get synced to the Docker container.
Example:
Mac local dir: users/vishnu/data/
which should get synced to
`<Docker-container-ID>:/opt/deploy/`
The container is already running and I should not release it; I can only stop and start. Is there a way? Thanks in advance.
Use a host-mounted volume:
When you docker run, add a -v /Users/vishnu/data:/opt/deploy parameter.
If you need to add a mounted volume to your existing container, use the Kitematic UI; it's easier that way. But in general, you should add this when you docker run.
...
Also, FYI: the idea that you can't delete a container is an anti-pattern with Docker. If you can't delete your container because it would cause too many problems, you're doing something wrong. https://derickbailey.com/2017/04/05/what-i-learned-by-deleting-all-of-my-docker-images-and-containers/
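Concretely, recreating the container with the mount would look roughly like this (a sketch; substitute your real container and image names):
docker stop <container-name>
docker rm <container-name>
docker run -d --name <container-name> -v /Users/vishnu/data:/opt/deploy <image>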

Shared folder in Docker on Windows. Not only the "C:/Users/" path

I'm new to Docker, I come from Vagrant.
I'm using Docker (1.9.1) inside my "D:/Works/something/DockerFirstTime" folder.
Now I create the machine with
docker-machine create first
and simple Dockerfile:
FROM ruby:2.2-onbuild
and simple Gemfile:
source 'https://rubygems.org'
gem 'rails'
Now with this command I want to use a shared folder, like in Vagrant, on the same hard drive as my Dockerfile:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install
But it doesn't work.
How do I do this?
I know that Docker only shares the /c/User/ folder, is that right?
How can I use the folder with my files and modify them with an editor in Windows, and then restart the server, like in a normal shell on a single PC or like in Vagrant?
This question and this question have a similar root problem: mounting a non-C:/ drive folder in boot2docker. I wrote an in-depth answer to the other question that provides the same information that is in the first half of @VonC's answer.
From the Docker docs:
All other paths come from your virtual machine's filesystem. [...] In the case of VirtualBox you need to make the host folder available as a shared folder in VirtualBox. Then, you can mount it using the Docker -v flag.
To get your folder mounted in a container:
This mounts your entire D:\ drive; you can simply change the file paths to be more granular and specific.
Share the directory with VBox:
This only needs to be done once.
In Windows CMD:
VBoxManage sharedfolder add "boot2docker-vm" --name "d-share" --hostpath "D:\"
Mount the shared directory in your VM:
This will need to be done each time you restart the VM.
In the Boot2Docker VM terminal:
mount -t vboxsf -o uid=1000,gid=50 d-share /d
To see sources and an explanation of how this works, see my full answer to the other similar question.
After this you can use the -v/--volume flag in Docker to mount this folder or any sub-folders or files into containers. If you mounted your whole D:\ drive, you can use that exact docker run command from your question and it should now work. If you mounted a specific part of your drive, you will have to change the paths to match.
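Concretely, after sharing and mounting D:\ as above, the exact command from the question should now succeed:
docker run -it -v //d/Works/something/DockerFirstTime:/usr/src/app -w /usr/src/app ruby:2.2 bundle install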
To edit in Windows and run in Docker:
Also from the Docker docs:
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time.
Since it is a VBox shared directory, you should be able to see changes made on the Windows side reflected in the boot2docker VM.
You may need to restart containers to see the changes actually appear; this depends on how the program running inside the container (in your case, Ruby) uses the files. If the files are compiled into an app when the container starts, for example, you will definitely need to restart the container to see the changes.
Note:
Beware the CR LF vs. LF line-ending difference when writing files on Windows and reading them on Linux. Make sure your text editor saves files with Unix line endings, or you may start to see errors caused by '^M' appended to the ends of your lines.
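A quick way to check for the problem from inside the VM or a container (a small sketch; the file name is hypothetical, and dos2unix may need to be installed first):
cat -v script.sh | head -n 1
# something^M   <- a trailing ^M means the file has CR LF endings
dos2unix script.sh   # converts CR LF to LF in place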
I know that Docker only shares the /c/User/folder, is that right?
It does, and it is able to do so because the VirtualBox VM used for providing a Linux host for docker is sharing C:\Users.
For docker to see another folder, you would need to:
use VBoxManage sharedfolder add "VM name" --name "sharename" --hostpath "D:\Works"
then mount /D/Works within a VM session, as mentioned in "share windows folder (other than c/Users/) with docker container (using docker windows client)", and mentioned in boot2docker:
mount -t vboxsf -o uid=1000,gid=50 sharename /some/mount/location
The issue with that last alternative is described in "Introduction to boot2docker" (scroll down to the "Shared folders" section):
The main issue with vboxsf is that it does not do any sort of caching, so when you are attempting to share a large number of small files (big git repos) or anything that is filesystem-read heavy (grunt), performance becomes a factor.
The best solution I have come up with so far is using Vagrant with a customized version of boot2docker with NFS support enabled, which requires very little "hacking" to get working, which is nice.
And a good enough selling point for me is the speed increase from using NFS instead of vboxsf; it's pretty staggering, actually.
This is the project that I have been using https://vagrantcloud.com/yungsang/boxes/boot2docker.
The magic sauce for the volume sharing is in this line:
config.vm.synced_folder ".", "/vagrant", type: "nfs"
Which tells Vagrant to share your current directory in to the boot2docker VM in the /vagrant directory, using NFS.
However, that project seems quite old and would need to be adapted in order to include the latest boot2docker.iso (docker 1.9.1).
