Locating data volumes in Docker Desktop (Windows) - docker

I'm trying to learn docker at the moment and I'm getting confused about where data volumes actually exist.
I'm using Docker Desktop for Windows. (Windows 10)
In the docs they say that running docker inspect on the object will give you the source: https://docs.docker.com/engine/tutorials/dockervolumes/#locating-a-volume
$ docker inspect web
"Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
However, I don't see this; I get the following:
$ docker inspect blog_postgres-data
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/blog_postgres-data/_data",
        "Name": "blog_postgres-data",
        "Options": {},
        "Scope": "local"
    }
]
Can anyone help me? I just want to know where my data volume actually exists. Is it on my host machine? If so, how can I get the path to it?

I am on Windows + WSL 2 (Ubuntu 18.04).
Type this into the Windows File Explorer address bar:
For Docker version 20.10+: \\wsl$\docker-desktop-data\data\docker\volumes
For Docker Engine v19.03: \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
You will have one directory per volume.
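If you want to jump straight to a volume from a terminal, you can list the volume names and open the matching folder with Explorer; a small sketch for Docker 20.10+, using the volume name from the question:
docker volume ls
explorer.exe "\\wsl$\docker-desktop-data\data\docker\volumes\blog_postgres-data\_data"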

Your volume directory is /var/lib/docker/volumes/blog_postgres-data/_data, and /var/lib/docker lives inside the Docker VM's virtual disk, which is usually stored under C:\Users\Public\Documents\Hyper-V\Virtual hard disks. In any case, you can check it by looking in the Docker settings.
You can refer to these docs for info on how to share drives with Docker on Windows.
BTW, Source is the location on the host and Destination is the location inside the container in the following output:
"Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Updated to answer questions in the comment:
My main curiosity here is that sharing images etc is great but how do I share my data?
Volumes are designed for exactly this purpose (managing data for Docker containers). The data in a volume is persisted on the host filesystem and isolated from the life-cycle of any Docker container/image. You can share the data in a volume by:
Mounting a Docker volume to the host and reusing it:
docker run -v /path/on/host:/path/inside/container image
Then all your data will persist in /path/on/host; you could back it up, copy it to another machine, and re-run your container with the same volume.
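If you already use a named volume instead, a common way to back it up is to mount it into a throwaway container together with a host folder and tar it; a minimal sketch from a Linux/WSL shell, using the volume name from the question (the archive name is arbitrary):
# back up the volume contents into backup.tar.gz in the current directory
docker run --rm -v blog_postgres-data:/volume -v "$(pwd)":/backup alpine tar czf /backup/backup.tar.gz -C /volume .
# restore the archive into a (possibly freshly created) volume
docker run --rm -v blog_postgres-data:/volume -v "$(pwd)":/backup alpine tar xzf /backup/backup.tar.gz -C /volume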
Creating and mounting a data container:
Create a data container: docker create -v /dbdata --name dbstore training/postgres /bin/true
Run other containers based on this container using --volumes-from: docker run -d --volumes-from dbstore --name db1 training/postgres, then all data generated by db1 will persist in the volume of container dbstore.
For more information you could refer to the official Docker volumes docs.
Simply speaking, a volume is just a directory on your host containing your container's data, so you can use any method you used before to back up/share your data.
can I push a volume to docker-hub like I do with images?
No. A Docker image is something you can push to a Docker registry (such as Docker Hub), but data is not. You can back up/persist/share your data with any method you like, but pushing data to a Docker registry to share it does not make sense.
can I make backups etc?
Yes, as posted above :-)

For Windows 10 + WSL 2 (Ubuntu 20.04), Docker version 20.10.2, build 2291f61
Docker artifacts can be found in
DOCKER_ARTIFACTS == \\wsl$\docker-desktop-data\version-pack-data\community\docker
Data volumes can be found in
DOCKER_ARTIFACTS\volumes\[VOLUME_ID]\_data

\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
Worked for me as well (Windows 10 Home), great stuff.

I have found that my setup of Docker with WSL 2 (Ubuntu 20.04) uses this location on Windows 10:
C:\Users\Username\AppData\Local\Docker\wsl\data\ext4.vhdx
Where Username is your username.

When running Linux-based containers on a Windows host, the actual volumes are stored inside the Linux VM and are not directly available on the host's filesystem. For Windows containers running on Windows, they are under C:\ProgramData\Docker\volumes\.
Also, docker inspect <container_id> will list the container configuration; see the Mounts section for more details about the persistence layer.
Update:
Not applicable for Docker running on WSL.
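Whichever platform you are on, you can pull just the mount details out of docker inspect with a Go-template format string (a sketch; mycontainer is a placeholder name):
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' mycontainer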

If you have WSL 2 enabled, you can find it in File Explorer under \\wsl$\docker-desktop\mnt\host\wsl\docker-desktop-data\data\docker

You can find the volumes associated with the host at the path below for Docker Desktop (Windows):
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes

In my case, I installed Docker Desktop on WSL 2, Windows 10 Home. I found my image files in
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
\\wsl$\docker-desktop-data\version-pack-data\community\docker
Container, image, and volume info is all there.
All image files are stored there, separated into several folders with long string names. Looking into each folder, the actual image files are in the "diff" folders.
Although the terminal shows the path /var/lib/docker, that folder doesn't exist as such on the Windows side and the actual files are not stored there. This is not an error; /var/lib/docker is simply linked/mapped to the real folder above.
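If you want to see which overlay2 folders belong to a particular image, docker image inspect exposes them under GraphDriver (a sketch; mysql:latest is just an example image):
docker image inspect --format '{{ json .GraphDriver }}' mysql:latest
The LowerDir/UpperDir paths it prints are the /var/lib/docker/overlay2 paths inside the VM, which map to the \\wsl$ locations above.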

For me, I found my volumes in
\\wsl$\docker-desktop-data\data\docker\volumes\
using WSL 2 and Windows 21H1.

Mounting NTFS-based directories did not work for my purpose (MongoDB; as far as I'm aware the same holds for at least Redis and CouchDB): NTFS permissions did not allow the access such DBs need when running in containers. The following is a setup with named volumes on Hyper-V.
The approach starts an ssh server within a service, set up with docker-compose so that it starts automatically and uses public-key authentication between host and container. This way, data can be uploaded/downloaded via scp or sftp.
The full docker-compose.yml for a webapp + mongodb is below, together with some documentation on how to use the ssh service:
version: '3'
services:
  foo:
    build: .
    image: localhost.localdomain/${repository_name}:${tag}
    container_name: ${container_name}
    ports:
      - "3333:3333"
    links:
      - mongodb-foo
    depends_on:
      - mongodb-foo
      - sshd
    volumes:
      - "${host_log_directory}:/var/log/app"
  mongodb-foo:
    container_name: mongodb-${repository_name}
    image: "mongo:3.4-jessie"
    volumes:
      - mongodata-foo:/data/db
    expose:
      - '27017'
  #since mongo data on Windows only works within a HyperV virtual disk (as of 2019-4-3), the following allows upload/download of mongo data
  #setup: you need to copy your ~/.ssh/id_rsa.pub into $DOCKER_DATA_DIR/.ssh/id_rsa.pub, then run this service again
  #download (all mongo data): scp -r -P 2222 user@localhost:/data/mongodb [target-dir within /c/]
  #upload (all mongo data): scp -r -P 2222 [source-dir within /c/] user@localhost:/data/mongodb
  sshd:
    image: maltyxx/sshd
    volumes:
      - mongodata-foo:/data/mongodb
      - $DOCKER_DATA_DIR/.ssh/id_rsa.pub:/home/user/.ssh/keys/id_rsa.pub:ro
    ports:
      - "2222:22"
    command: user::1001
    #please note: using a named volume like this for mongo is necessary on Windows rather than mounting an NTFS directory.
    #mongodb (and probably most other databases) are not compatible with Windows-native data directories due to permissions issues.
    #this means that there is no direct access to this data; it needs to be dumped elsewhere if you want to reimport something.
    #it will however be persisted as long as you don't delete the HyperV virtual drive that the docker host is using.
    #on Linux and Docker for Mac it is not an issue; named volumes are directly accessible from the host.
volumes:
  mongodata-foo:
This is unrelated, but for a fully working example, the following script needs to be run before any docker-compose call:
#!/usr/bin/env bash
set -o errexit
set -o pipefail
set -o nounset
working_directory="$(pwd)"
host_repo_dir="${working_directory}"
repository_name="$(basename ${working_directory})"
branch_name="$(git rev-parse --abbrev-ref HEAD)"
container_name="${repository_name}-${branch_name}"
host_log_directory="${DOCKER_DATA_DIR}/log/${repository_name}"
tag="${branch_name}"
export host_repo_dir
export repository_name
export container_name
export tag
export host_log_directory
Update: Please note that you can also just use docker cp nowadays, so the sshd container outlined above is probably not necessary anymore, except if you need remote access to the file system running in a container under a Windows host.
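For example, to pull the Mongo data out of the running container without the sshd service (a sketch; mongodb-myproject stands in for whatever mongodb-${repository_name} expands to):
docker cp mongodb-myproject:/data/db ./mongodb-dump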

If you find \\wsl$ a pain to enter or remember, there's a more GUI-friendly method in Windows 10 release 2004 and onwards. With WSL 2, you can safely navigate to all the special WSL shares via the new Linux icon in File Explorer.
From there you can drill down to (e.g.) \docker-desktop-data\data\docker\volumes, as mentioned in other answers.
For more details, refer to Microsoft's official WSL filesystems documentation, which mentions these access methods. For the technically curious, Microsoft's deep dive video should answer a lot of questions.

If you're wondering where the data is actually located when you use a volume that points into the Docker "VM", like here:
version: '3.0'
services:
  mysql-server:
    image: mysql:latest
    container_name: mysql-server
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/docker/mysql/data:/var/lib/mysql
The "/opt/docker/mysql/data" or just the / is located in \\wsl$\docker-desktop\mnt\version-pack\containers\services\docker\rootfs
Hope it's helping :)

In Windows 11 with Docker Desktop v4.15.0 with WSL2 enabled, the path to navigate to the volumes folder is \\wsl.localhost\docker-desktop-data\data\docker\volumes

If you're on Windows and use Docker for Windows, then Docker works via a VM (MobyLinuxVM). Your volumes (like everything else) are in this VM! Here is how to find them:
# get a privileged container with access to Docker daemon
docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker alpine sh
# in a second PowerShell, run a container with full root access to MobyLinuxVM
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
# switch to host FS
chroot /host
# and then go to the volume you asked for
cd /var/lib/docker/volumes/YOUR_VOLUME_NAME/_data
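A commonly used shortcut for the same thing is the third-party nsenter1 image, which drops you straight into the VM's root mount namespace (assuming that image is still published):
docker run -it --rm --privileged --pid=host justincormack/nsenter1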

Each container has its own filesystem which is independent from the host filesystem. If you run your container with the -v flag you can mount volumes so that the host and container see the same data (as in docker run -v hostFolder:containerFolder).
The first output you printed describes such a mounted volume (hence mounts) where /var/lib/docker/volumes/fac362...80535/_data (host) is mounted to /webapp (container).
I assume you did not use -v, hence the folder is not mounted and is only accessible in the container filesystem, which you can find in /var/lib/docker/volumes/blog_postgres-data/_data. This data will be deleted if you remove the container along with its volumes (docker rm -v), so it might be a good idea to mount the folder.
As to where you can access this data from Windows: as far as I know, Docker for Windows uses the Bash subsystem in Windows 10. I would try running Bash for Windows 10 and going to that folder, or find out how to access the Linux folders from Windows 10. Check this page for an FAQ on the Linux subsystem in Windows 10.
Update: You can also use docker cp to copy files between host and container.

If you're using Windows, your Docker files (in this case your volumes) exist on a virtual machine that Docker uses on Windows, either Hyper-V or WSL. However, if you need to access those files, you can copy the container's files and store them locally on your machine, and access the data that way:
docker cp container_Id_Here:/var/lib/mysql path_To_Your_Local_Machine_Here

Related

docker run not syncing local folder in windows

I want to sync my local folder with that of a docker container. I am using a Windows system with the WSL 2 backend. I tried running the following command as per the instructions of a docker course instructor, but it didn't seem to sync:
docker run -v ${pwd}:\app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image
I faced a similar issue when I started syncing local folders with a docker container on my Windows system. The solution was actually quite simple: instead of using -v ${pwd}:\app:ro for your first volume, it should be -v ${pwd}:/app:ro. Notice the / instead of \. Since your docker container is a Linux container, the path must use /. The corrected command is shown below.
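Applied to the original command, the corrected version looks like this (PowerShell):
docker run -v ${pwd}:/app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image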
As @Sysix pointed out, docker will always overwrite the folder in the container with the one on the host (no matter whether it already existed or not). Only those files will be in that folder/volume that were created either on the host or in the container during runtime.
Learn more about bind mounts and volumes here.

I have created volumes with Docker Desktop, I see the volumes using "docker volume ls" but the folders in Windows are empty [duplicate]


Docker Volumes - Create options (Driver)

Description
The official Docker documentation is often not very helpful, and a lot of times things remain unclear even after reading through the relevant sections.
Many things are unclear, but in this question I just want to target these:
When running docker volume create:
--driver
--opt device
--opt type
When I run docker volume create --driver local --opt device=:/var/www/html/app --opt type=volume volumename, I actually do get a volume:
$ docker volume inspect customvolume
[
    {
        "CreatedAt": "2020-08-03T09:28:10Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/customvolume/_data",
        "Name": "customvolume",
        "Options": {
            "device": ":/var/www/html/customfolder",
            "type": "volume"
        },
        "Scope": "local"
    }
]
Trying to mount this new volume:
docker run --name test-with-volume \
  --mount source=customvolume,target=/var/www/html/app77 \
  my-app-only:latest
Error:
Error response from daemon: error while mounting volume '/var/lib/docker/volumes/customvolume/_data': failed to mount local volume: mount :/var/www/html/customfolder:/var/lib/docker/volumes/customvolume/_data: no such device.
Questions
Clearly the options allow you to do some unexpected things: I was able to create a volume at a custom location, but it is not mountable.
What are the options for type (with the difference of each explained) when using docker volume create? They are unclear to me.
The docker run --mount documentation talks about volume, bind, and tmpfs, but for docker volume create the docs only show examples, which use tmpfs, btrfs, and nfs.
When can you use device?
I thought this could be used to create a custom location for volume type (aka named volumes) on the source host (similar to how bind-mounts can be mounted)
I assumed I could use the 'recommended way of named volumes including a custom folder location' instead of host mounts (bind-mounts).
Finally, how could you setup a docker-compose.yml volume custom driver correctly as well.
I think the confusion lies in the fact that docker run --mount vs. docker volume create seems inconsistent, largely because of how unclear the Docker documentation is.
There are two main categories of data — persistent and non-persistent.
Persistent is the data you need to keep. Things like; customer records, financial data, research results, audit logs, and even some types of application log data. Non-persistent is the data you don’t need to keep.
Both are important, and Docker has solutions for both.
To deal with non-persistent data, every Docker container gets its own non-persistent storage. This is automatically created for every container and is tightly coupled to the lifecycle of the container. As a result, deleting the container will delete the storage and any data on it.
To deal with persistent data, a container needs to store it in a volume. Volumes are separate objects that have their lifecycles decoupled from containers. This means you can create and manage volumes independently, and they’re not tied to the lifecycle of any container. Net result, you can delete a container that’s using a volume, and the volume won’t be deleted.
This writable layer of local storage is managed on every Docker host by a storage driver (not to be confused with a volume driver). If you’re running Docker in production on Linux, you’ll need to make sure you match the right storage driver with the Linux distribution on your Docker host. Use the following list as a guide:
- Red Hat Enterprise Linux: use the overlay2 driver with modern versions of RHEL running Docker 17.06 or higher. Use the devicemapper driver with older versions. This applies to Oracle Linux and other Red Hat related upstream and downstream distros.
- Ubuntu: use the overlay2 or aufs drivers. If you're using a Linux 4.x kernel or higher, you should go with overlay2.
- SUSE Linux Enterprise Server: use the btrfs storage driver.
- Windows: Windows only has one driver and it is configured by default.
By default, Docker creates new volumes with the built-in local driver. As the name suggests, volumes created with the local driver are only available to containers on the same node as the volume. You can use the -d flag to specify a different driver. Third-party volume drivers are available as plugins. These provide Docker with seamless access to external storage systems such as cloud storage services and on-premises storage systems, including SAN or NAS.
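For example, creating a volume with the default local driver, which produces the inspect output shown below (a minimal sketch):
docker volume create myvol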
$ docker volume inspect myvol
[
    {
        "CreatedAt": "2020-05-02T17:44:34Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/myvol/_data",
        "Name": "myvol",
        "Options": {},
        "Scope": "local"
    }
]
Notice that the Driver and Scope are both local. This means the volume was created with the local driver and is only available to containers on this Docker host. The Mountpoint property tells us where in the Docker host’s filesystem the volume exists.
With bind mounts
version: '3.7'
services:
  maria_db:
    image: mariadb:10.4.13
    environment:
      MYSQL_ROOT_PASSWORD: Test123#123
      MYSQL_DATABASE: database
    ports:
      - 3306:3306
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data_mariadb/:/var/lib/mysql/
With volume mount
version: "3.8"
services:
web:
image: mariadb:10.4.13
volumes:
- type: volume
source: dbdata
target: /var/lib/mysql/
volumes:
dbdata:
Bind mounts explanation
Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.
tmpfs mounts explanation
Volumes and bind mounts let you share files between the host machine and container so that you can persist data even after the container is stopped. If you're running Docker on Linux, you have a third option: tmpfs mounts. When you create a container with a tmpfs mount, the container can create files outside the container's writable layer. As opposed to volumes and bind mounts, a tmpfs mount is temporary and only persisted in host memory. When the container stops, the tmpfs mount is removed, and files written there won't be persisted.
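A quick tmpfs sketch using the documented --mount type=tmpfs syntax (the container name and path are just examples); anything written to /app lives only in memory and disappears when the container stops:
docker run -d -it --name tmptest --mount type=tmpfs,destination=/app alpine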
Volume explanation
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker.
Recently I searched for something similar: how to force a docker volume into writing its data to a custom path that is actually the mount point of a persistent disk. There were 2 motives:
first, to avoid the docker volume being stuck inside the VM image's disk space;
second, to have the data outlive the docker volume itself (e.g. easy to reuse on another VM instance with a freshly created docker volume).
This seemed feasible by passing extra options to the standard local driver when executing docker volume create. For example, the command below makes the docker volume tmp-volume write into the device argument's value. Note that docker volume inspect still outputs a completely different but unused Mountpoint. It worked when Ubuntu was the host OS inside that VM instance:
docker volume create -d local --name tmp-volume \
  --opt device="/mnt/disks/disk-instance-test-volume" \
  --opt type="none" \
  --opt o="bind"
Maybe this overlaps with your use-case? I blogged the whole story in more detail here: https://medium.com/@francis.meyvis/how-to-force-a-docker-volume-on-a-gce-disk-45b59d4973e?source=friends_link&sk=0e71ef39db84f4cb0ecccc7cd0f3c254
Damith's detailed explanation about named-volumes vs bind-mounts is a good reference to read for anyone. To answer the question I had, he talked about 3rd party plugins so I had to investigate further.
There seems to be no way to use a custom location with a named volume in a default Docker installation (only bind mounts can do that), but there is a plugin that acts like named volumes with some extra functionality.
While this only partially answers some of the things I mentioned in the question (and some are still unclear), use this for reference if you want a named volume that acts like a bind mount.
Solution
For my particular use case, the Docker plugin local-persist seems to meet my requirements: it can 1) persist data when containers get deleted and 2) use a custom location.
Matchbooklab Docker local-persist
Installation:
Confirmed to work on an Ubuntu 20.04 installation.
Run this install script (note: there are also manual installation instructions at the GitHub link if you prefer to install it by hand):
curl -fsSL https://raw.githubusercontent.com/MatchbookLab/local-persist/master/scripts/install.sh | sudo bash
This will install local-persist and set up a startup script so that it monitors volumes.
Setup volume
Create a new local-persist volume:
docker volume create -d local-persist --opt mountpoint=/custom/path/on/host --name new-volume-name
Usage
Attach the volume to a container:
Newer --mount syntax:
docker run --name container-name --mount 'source=new-volume-name,target=/path/inside/container' imagename:version
-v syntax: (not tested - as shown in github readme)
docker run -d -v images:/path/inside/container/ imagename:version
Or with docker-compose.yml: (example shows v2; not tested yet)
version: '2'
services:
  one:
    image: alpine
    working_dir: /one/
    command: sleep 600
    volumes:
      - data:/one/
  two:
    image: alpine
    working_dir: /two/
    command: sleep 600
    volumes:
      - data:/two/
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
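To verify the custom location works, a quick sketch against the compose file above (assuming the plugin is installed):
docker-compose up -d
docker-compose exec one sh -c 'echo hello > /one/hello.txt'
# hello.txt should now be visible at the custom mountpoint on the host
ls /data/local-persist/data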

Docker Desktop for Windows - where exactly is the volume present? [duplicate]


Change file permissions in mounted folder inside docker container on Windows Host

Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use Docker with Linux containers, I highly recommend not using Docker for Windows at all and instead running the entire Docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine which starts a boot2docker instance inside a Virtualbox VM with a shared folder on /c/Users from which you can mount volumes into your containers. The permissions of said volumes are the ones the question is about. The VMs are stored under /c/Users/tom/.docker/
I chose to use the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the different hypervisors.
Original question
I am currently trying to set up phpMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found: Cannot call chown inside Docker container (Docker for Windows) and thought this might be somewhat related but it appears to apply only to MongoDB.
This is my docker-compose.yml
version: "3"
services:
pma:
image: (secrect company registry)/phpmyadmin
ports:
- 9090:80
volumes:
- /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and cd into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently:
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245: chmod 655 config.inc.php
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my Userhome but then vbox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
I had the same problem of not being able to change ownership even after using chown. And as I researched, it was because of NTFS volumes being mounted inside ext filesystem. So I used another approach.
The volumes internal to docker are free from these problems. So you can mount your file on an internal docker volume and then create a hard link to that file inside your local folder wherever you want:
sudo ln $(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>) <absolute_path_of_destination>
This way you can have your files in the desired place, inside docker and without any permission issues, and you will be able to modify the contents of the file as with a normal volume mount, thanks to the hard link.
Here is a working implementation of this process which mounts and links a directory. If you want to know the details, see the possible fix section in the issue.
EDIT
Steps to implement this approach:
Mount the file in an internal docker volume (also known as a named volume).
Before making the hard link, make sure the volume and the file are present there. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run which creates the required files and exits:
docker run --rm -itd \
  -v "<Project_name>_<volume_name>:/absolute/path" \
  <image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required files. Here, <Project_name> is the project name; by default it is the name of the folder the project lives in. <volume_name> is the same volume we want to use in our original container, and <image> can be the same image that is already being used in your original containers.
Create a hard link in your OS to the actual file location on your system. You can find the file location by taking the output of docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name> and appending /<my_file> to it. Linux users can use ln in a terminal, and Windows users can use mklink /h in a command prompt.
In step 3 we have not used /absolute/path since the <volume_name> refers to that location already, and we just need to refer to the file.
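Putting steps 2 and 3 together on Linux, a sketch (myproject_data and config.inc.php stand in for your own volume and file names):
# resolve the real location of the named volume on the docker host
vol_path="$(docker volume inspect --format '{{ .Mountpoint }}' myproject_data)"
# hard-link the file into the current folder
sudo ln "${vol_path}/config.inc.php" ./config.inc.php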
Try one of the following:
If you can rebuild the image (secret company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile, add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
Try to exec into the container as the root user explicitly:
docker exec -it -u root [container] bash
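You can also run the chmod directly in one shot (the same caveats about the mounted filesystem apply):
docker exec -it -u root [container] chmod 655 /var/www/public/config.inc.php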
