Where can Docker Compose volumes be found on the Windows host?

I have a docker-compose file with a volumes section for a given container:
video-streaming:
  image: video-streaming
  build:
    context: ./video-streaming
    dockerfile: Dockerfile-dev
  container_name: video-streaming
  volumes:
    - /tmp/history/npm-cache:/root/.npm:z
I'm running Docker on Windows and the image is Linux-based.
When I enter the container, add a file to /root/.npm, then close the container and run it again, the file is still there, so this volume works. But where can I find its location on the Windows host?

You should find the volumes in C:\ProgramData\docker\volumes. The name will be a hash, which you can check with docker inspect.
If not, then note that you are simply mounting a host directory /tmp/history/npm-cache to your container. This directory is your volume.
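For example, you can confirm where the bind mount actually points with docker inspect (a quick sketch; the container name video-streaming comes from the compose file above):
docker inspect video-streaming --format '{{ json .Mounts }}'
# On a Linux engine this typically shows Source /tmp/history/npm-cache and Destination /root/.npm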

When using Docker for Windows, the question is whether you are using the old Docker Toolbox or the newer versions that use WSL/WSL2.
Docker Desktop configured for Linux containers and WSL/WSL2
The Docker engine is actually not running on Windows but inside a WSL instance; Docker Desktop makes docker commands available on Windows for ease of use.
So the volumes are probably inside that WSL instance (Linux).
You can find out which WSL instances you have by typing wsl -l in PowerShell.
Their file systems are available under the \\wsl$ path on Windows.
In your case, the volume is not named; it's in the exact location you specified for it, /tmp/history/npm-cache, but inside the WSL instance that the Docker engine is installed on.
Through WSL
In PowerShell, run wsl ls /tmp/history; you should see npm-cache there.
The wsl command lets you pipe Linux commands that will be run on the actual Linux WSL instance (the default one), which is probably the one running the Docker engine.
Alternatively, you can connect to that Linux instance by just typing wsl and going to that path with cd /tmp/history.
Once inside the WSL instance, you can run explorer.exe . to open Explorer at that location (on Windows).
Notice that the path will always start with \\wsl$, so you can go to that path on Windows and see all of your WSL instances and their file systems; try searching for "npm-cache" in Explorer, and you might find it.
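Putting those steps together, a minimal sketch (assuming the default WSL distro is the one running the Docker engine):
wsl -l                    # list WSL instances
wsl ls /tmp/history       # should show npm-cache
wsl                       # open a shell in the default instance
cd /tmp/history
explorer.exe .            # open this folder in Windows Explorer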
Via Docker commands
docker volume ls will give you all of the available volumes. Yours is not named, so it's probably one of the 'UUID' ones. You can inspect each one to find its location (probably still inside the WSL instance):
docker volume inspect {the-uuid-of-the-volume}
Once you inspect it, you will see that each volume has a Mountpoint field, which points to the location of the volume (inside the WSL instance).
Unnamed volumes are created with permissions different from your user's, so you might need sudo to interact with them via the WSL terminal.
If it's through Windows File Explorer on \\wsl$, you might not need extra permissions.
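For illustration, a hedged example of that flow (the volume ID and Mountpoint shown are placeholders, not taken from the question):
docker volume ls
# DRIVER    VOLUME NAME
# local     9c0fbe7d2a...        <- an anonymous (UUID-named) volume
docker volume inspect 9c0fbe7d2a...
# ...
# "Mountpoint": "/var/lib/docker/volumes/9c0fbe7d2a.../_data",
# ...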

Docker volumes on WSL2 using Docker Desktop

I'm just trying out WSL 2 with Docker for Windows and I'm having an issue with mounted volumes:
version: "3.7"
services:
node:
build: .
container_name: node
hostname: node
volumes:
- ./app:/app
stdin_open: true
The container builds and starts well, and I can access it with docker exec nicely, but the /app folder inside the container isn't bound to my laptop's app folder. However, the right path is actually mounted on the running container:
(here I run pwd on the host to check that it matches what is mounted in the container)
➜ app pwd
/mnt/c/Users/willi/devspace/these/app
And a Portainer screenshot shows which paths are mounted where in the container, and everything matches.
The files I create in the app folder on the host are not visible in the app folder of the container, and vice versa. This is weird and I don't know how to debug it.
Additional info:
Windows 10 Pro 10.0.19041
Docker for Windows version : 2.3.0.4
docker version output in WSL : 19.03.12
docker-compose version : 1.26.2
Thanks
As @Pablo mentioned, the best practice seems to be using the WSL file system for mapping volumes.
Take a look at the Docker Documentation concerning WSL2:
Best practices
To get the best out of the file system performance when bind-mounting files:
Store source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path>) in the Linux filesystem, rather than the Windows filesystem.
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
If you have concerns about the size of the docker-desktop-data VHDX, or need to change it, take a look at the WSL tooling built into Windows.
If you have concerns about CPU or memory usage, you can configure limits on the memory, CPU, Swap size allocated to the WSL 2 utility VM.
To avoid any potential conflicts with using WSL 2 on Docker Desktop, you must uninstall any previous versions of Docker Engine and CLI installed directly through Linux distributions before installing Docker Desktop.
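To make that concrete, a hedged before/after sketch (the paths reuse the questioner's /mnt/c/Users/willi/devspace/these layout, and my-node-image is a placeholder image name):
# slow: project lives on the Windows drive, remoted into WSL via /mnt/c
docker run -v /mnt/c/Users/willi/devspace/these/app:/app my-node-image
# fast: project lives in the Linux (WSL) filesystem, e.g. under the WSL home directory
docker run -v ~/devspace/these/app:/app my-node-image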
Everything works perfectly now; it seems my problem was that my WSL distro was still on version 1. You can verify it with the command: wsl -l -v
  NAME                   STATE      VERSION
* docker-desktop-data    Stopped    2
  docker-desktop         Stopped    2
  Ubuntu-20.04           Running    2    <- This was at 1
Upgrade to WSL2
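A minimal sketch of that upgrade (the distro name Ubuntu-20.04 matches the wsl -l -v output above):
wsl --set-version Ubuntu-20.04 2
# you may need to restart Docker Desktop afterwards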

Where are Docker volumes located when running WSL using Docker Desktop?

I am running Windows Subsystem for Linux (WSL) with Ubuntu as the guest OS under Windows 10. Now I have installed Docker Desktop on the Windows host and enabled the WSL integration in the Docker settings. That works fine so far; I can access the Docker daemon running on the Windows host from my WSL Ubuntu client.
Now I am wondering where all the Docker volumes and other data are stored in this setup. Usually these are under /var/lib/docker, but it seems that when using WSL this is not the case. When running df -h I can see the following Docker-related lines:
/dev/sdd 251G 3.1G 236G 2% /mnt/wsl/docker-desktop-data/isocache
/dev/sdc 251G 120M 239G 1% /mnt/wsl/docker-desktop/shared-sockets
/dev/loop0 244M 244M 0 100% /mnt/wsl/docker-desktop/cli-tools
So they are somewhere on the Windows host it seems.
... but where?
When I create a volume named shared_data in docker, I can find it under
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\shared_data\_data
You can find WSL2 volumes under a hidden network share. Open Windows Explorer, and type \\wsl$ into the location bar. Hit enter, and it should display your WSL volumes, including the ones for Docker for Windows.
If you are wondering where on the Windows host the docker volumes are located, for me they seem to be at:
C:\Users\username\AppData\Local\Docker\wsl\data\ext4.vhdx
and
C:\Users\username\AppData\Local\Docker\wsl\distro\ext4.vhdx
Presumably, these are docker-desktop-data and docker-desktop respectively.
In theory, these WSL2 instances can be relocated to an alternate drive to free disk space as per this post; that is the standard method of exporting, unregistering, and re-importing an instance from a new location. This process is also described here (with regard to standard WSL instances).
(Caveat: I haven't done this with the Docker WSL2 instances myself yet, only for Ubuntu, using the method in the second link.)
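For reference, a hedged sketch of that export/unregister/import sequence (the D:\wsl target path is a placeholder; quit Docker Desktop and shut WSL down first):
wsl --shutdown
wsl --export docker-desktop-data D:\wsl\docker-desktop-data.tar
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data D:\wsl\docker-desktop-data D:\wsl\docker-desktop-data.tar --version 2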
Windows 10 + WSL2
I run Docker Desktop on Windows 10 + WSL2. Just make sure Docker Desktop is running, so the path is accessible as a network share.
I found my volume data under
\\wsl$\docker-desktop-data\data\docker\volumes
Note that you need to have Docker Desktop running before you will be able to discover those network directories.
Docker Desktop's WSL2 feature creates two new WSL2 distros, docker-desktop and docker-desktop-data, which can be seen with the command wsl -l -v
  NAME                   STATE      VERSION
* Ubuntu-18.04           Running    2
  docker-desktop         Running    2
  docker-desktop-data    Running    2
This is where the docker daemon actually runs and where you can find the data you are looking for.
The volumes in the wsl2 kernel are mapped as follows:
docker run -ti -v host_dir:/app amazing-container will get mapped to /mnt/wsl/docker-desktop-data/data/docker/volumes/host_dir/_data/
The above is the right path, even though docker volume inspect host_dir will tell you differently (/var/lib/docker/volumes/).
To conclude, the volumes are mapped to: /mnt/wsl/docker-desktop-data/data/docker/volumes/
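From a WSL shell you can sanity-check this mapping directly (a sketch; host_dir is the example volume name from above):
ls /mnt/wsl/docker-desktop-data/data/docker/volumes/                   # lists all volumes
ls /mnt/wsl/docker-desktop-data/data/docker/volumes/host_dir/_data/    # contents of the example volume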
Most answers on this topic are about the location from the Windows side. I needed to access the container log files (the issue is the same as for volumes) from my WSL distribution, so the Windows path \\wsl$ was not an option.
The files could be found on Windows in \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers.
From the WSL distribution, I could go to /mnt/wsl/docker-desktop-data/version-pack-data but it was empty.
I finally found a solution here:
From Windows, create a disk for docker-desktop-data:
net use w: \\wsl$\docker-desktop-data
From your WSL distribution, mount it at /mnt/docker:
sudo mkdir /mnt/docker
sudo mount -t drvfs w: /mnt/docker
Now you can access everything you want; in my case, log files:
ls -l /mnt/docker/version-pack-data/community/docker/containers/
total 0
drwxrwxrwx 4 root root 512 May 19 15:06 3f41ade0891c06725e828853524d73f185b415d035262f9c51d6b6e03654d505
In my case, I installed Docker Desktop on WSL2, Windows 10 Home. I found my image files in
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
All image files are stored there, separated into several folders with long string names. When I look into each folder, I can find the actual image files in the "diff" folders.
Although the terminal shows the path /var/lib/docker, that folder doesn't exist and the actual files are not stored there. I think there is no error; /var/lib/docker is just linked or mapped to the real folder, something like that.
On Windows, we also use mklink to link two folders; it is similar, right?
When using Docker with WSL, you can find volumes and other data under docker-desktop-data.
If you are running Docker on a Windows host using Docker Desktop, you can access the volumes at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (open this path from Windows Explorer and make sure the Docker engine is running).
When running the Docker Desktop app, the app creates its own Linux VM or uses WSL to run the Docker containers, and the path /var/lib/docker/volumes/ is from within that VM, I think. The volumes are created inside a mountable .vhdx file at
C:\Users\username\AppData\Local\Docker\wsl\distro\
but accessing this directly is tricky.
Ref: Google how to access WSL files from Windows.
Windows 10 + WSL2, Docker Desktop v4.13.1, free service tier, 2022-11-03:
I found my volumes at \\wsl$\docker-desktop-data\data\docker\volumes

Where are Docker volumes located?

I need to know where Docker volumes are located when using Docker Machine on macOS.
The installation uses boot2docker, so there is a VM working behind the scenes.
Example:
docker volume create test-data
docker inspect shows a path, but where can I find the specific (physical) location?
It’s inside the virtual machine and isn’t directly accessible from the host.
Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn’t be directly accessing files there, and you shouldn’t be routinely docker inspecting anything.
macOS uses a virtual machine; it's different from Linux, where you can access volumes at /var/lib/docker/volumes.
On macOS you should connect to the VM to find your volumes.
Suppose you use persistent data volumes in Docker and you want to access them from the command line.
If your Docker host is Linux, that's not a problem; you can find Docker volumes under the /var/lib/docker/volumes path.
However, that's not the case when you use Docker for Mac.
Try to cd /var/lib/docker/volumes from your macOS terminal and you'll get nothing.
You see, your Mac machine isn't a real Docker host. Docker for Mac runs a virtual machine and hides it from you to make things simple.
So, to access persistent volumes created by Docker for Mac, you need to connect to that VM.
In order to accomplish this, we need to use a serial terminal on Mac. There’s a terminal application called “screen” that’s going to help us.
We need to "screen into" the Docker driver by executing the command:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You should see a blank screen; just press Enter, and after a while you should see a command-line prompt.
Now you're inside Docker's VM and you can cd into the volumes directory by typing: cd /var/lib/docker/volumes
Profit, you got there!
If you need to transfer files from your macOS host into the Docker host, you can refer to File Sharing.
Hope this helps you!
If you have installed Docker using snap, then volumes are located at:
/var/snap/docker/common/var-lib-docker/volumes/
Location of volumes when using the official Docker install:
/var/lib/docker/volumes/
Normally, if you want to "know" where a volume lives, you would want to map a volume to the local filesystem. When you create a named volume you are just allocating "shared" storage. However, if you really need to know, run this command:
docker volume inspect test-data
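A hedged sketch of what the output of that command looks like (values are illustrative; note that the Mountpoint is a path inside the VM, not on macOS):
# [ { "CreatedAt": "...",
#     "Driver": "local",
#     "Mountpoint": "/var/lib/docker/volumes/test-data/_data",
#     "Name": "test-data",
#     "Scope": "local" } ]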

How can I use a local file in a container?

I'm trying to create a container to run a program. I'm using a preconfigured image and now I need to run the program. However, it's a machine learning program and I need a dataset from my computer to run it.
The file is too large to be copied into the container. It would be best if the program running in the container read the dataset from a local directory on my computer, but I don't know how I can do this.
Is there any way to set up this reference with some Docker command, or using a Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
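For the dataset case in the question, a hedged sketch (the host path and image name are placeholders, not taken from the question):
docker run -v /Users/andy/datasets:/data:ro my-ml-image
The program inside the container can then read the dataset under /data without it ever being copied into the image; the :ro suffix mounts the directory read-only, which is usually appropriate for input data.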
Keep in mind, if you are using Docker for Mac or Docker for Windows there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory. So, you can mount files or directories on macOS using.
Update July 2019:
I've updated the documentation link and naming to be correct. These types of mounts are called "bind mounts". The snippet about Docker for Mac or Windows no longer appears in the documentation, but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mounting paths on the host).

Linux+Docker - How to run the host's apps from inside a Docker container?

I want to know if Docker can run apps installed on the host inside the container, so that I don't need to install the app in each image, which wastes hard disk space.
I know Linux is different since it requires dependencies and packages locally, but I wonder if it is possible to use it like in a Windows VM.
In Windows Hyper-V, I did this by sharing the network folder containing portable apps with the container and running the apps from inside the Windows VM.
Thank you.
You can link a directory on your host containing the executables into your container; then it will be accessible inside the container. To do so, you can use volumes (Mount a host directory as a data volume): mount a host directory (here /tmp/foo) into your container (here /foo) and execute a script called foo.sh at the container path /foo/foo.sh:
mkdir /tmp/foo
echo -e '#!/bin/sh\n\necho foo' > /tmp/foo/foo.sh
docker run --rm -v /tmp/foo:/foo alpine sh /foo/foo.sh
=> foo
In the same way, you can add binaries from your host to your container... But I do not think that this is intended or should be done, because a container should work as a standalone, isolated "lightweight VM". You add an unnecessary dependency on your host machine, which does not seem like an elegant solution.
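If you do want to try it anyway, a hedged variant that mounts a single host binary read-only (mytool is a placeholder; statically linked binaries work best, since the container will not have the host's shared libraries):
docker run --rm -v /usr/local/bin/mytool:/usr/local/bin/mytool:ro alpine /usr/local/bin/mytool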
