Docker: where is docker volume located for this compose file - docker

I was setting up some materials for a training when I came across this sample compose file:
https://github.com/dockersamples/example-voting-app/blob/master/docker-compose.yml
and I couldn't find out how this volume is mounted, on lines 48 and 49 of the file:
volumes:
  db-data:
Can someone explain to me where this volume lives on the host? I couldn't find it, and I don't want to leave any PostgreSQL data dangling around after the containers are gone. A similar thing happens with the networks:
networks:
  front-tier:
  back-tier:
Why does Docker Compose accept empty network definitions like this?

Finding the volumes
Volumes like this are internal to Docker and stored in the Docker store (which is usually all under /var/lib/docker). You can get a list of volumes:
$ docker volume ls
DRIVER              VOLUME NAME
local               1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
local               2f13b0cec834a0250845b9dcb2bce548f7c7f35ed9cdaa7d5990bf896e952d02
local               a3d54ec4582c3c7ad5a6172e1d4eed38cfb3e7d97df6d524a3edd544dc455917
local               e6c389d80768356cdefd6c04f6b384057e9fe2835d6e1d3792691b887d767724
You can find out exactly where the volume is stored on your system if you want to:
$ docker inspect 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465/_data",
        "Name": "1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465",
        "Options": {},
        "Scope": "local"
    }
]
Cleaning up unused volumes
As far as ensuring that nothing is left dangling, you can use the prune commands, in this case docker volume prune. It prints the following warning and lets you choose whether to continue:
$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N]
"Empty" definitions in docker-compose.yml
Compose accepts these "empty" definitions when you don't need to do anything other than declare that a volume or network should exist. That is, if you want it created but are fine with the default settings, there is no reason to specify any parameters.
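For illustration, the "empty" definitions from the compose file above are shorthand for spelling out the defaults explicitly; assuming a standard (non-swarm) setup, the following should be equivalent:

```yaml
volumes:
  db-data:          # same as the empty form: local driver, default options
    driver: local

networks:
  front-tier:       # Compose networks default to the bridge driver
    driver: bridge
  back-tier:
    driver: bridge
```

Spelling the defaults out only becomes necessary when you want something non-default, such as an NFS-backed volume or a different network driver.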

First method
List your volumes:
docker volume ls
Then run this command:
sudo docker inspect <volume-name> | grep Mountpoint | awk '{ print $2 }'
Second method
First run docker ps to get your container ID, then run:
docker inspect --format="{{.Mounts}}" $containerID
This will print the volume paths.
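Tying the two methods together, here is a small sketch of a helper that pulls the Mountpoint out of docker inspect output. The function name is made up for illustration, and it assumes the JSON layout shown in the first answer; jq or docker volume inspect --format '{{ .Mountpoint }}' would be more robust:

```shell
#!/bin/sh
# extract_mountpoint (hypothetical helper): read `docker inspect` output
# on stdin and print the value of its "Mountpoint" field.
extract_mountpoint() {
  grep '"Mountpoint"' | sed 's/.*"Mountpoint": "\([^"]*\)".*/\1/'
}

# Usage (requires a running Docker daemon):
#   docker volume inspect <volume-name> | extract_mountpoint
```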

Related

How to know which data is within a docker container to be able to map it correctly as a volume

I'm trying to install bluespice wiki and wanted to persist the data outside of the container.
https://hub.docker.com/r/bluespice/bluespice-free
To save some space on my host, I wanted to put some data on S3 storage. As far as I understood, the developers of the wiki have all the services directly within the container (webserver, etc.).
So my idea would be to have the important data (e.g. the webserver) on the host and the actual files (pictures, videos, posts) on S3.
Is this something that can be achieved? If so, how would I best approach it? Currently I don't understand how to find the correct paths.
my docker-compose file looks like this currently:
services:
  bluespice:
    container_name: Bluespice-Wiki
    image: bluespice/bluespice-free:3.2
    command: -H unix:///var/run/docker.sock
    restart: always
    volumes:
      - /mnt/s3fs:/data
This puts all data on the S3 storage, which is really slow and probably also not the smartest move.
So my understanding would be to create something like this:
volumes:
  - <localshare>:/data/webserver
  - <s3share>:/data/www/datafiles
Hope someone understands my problem :)
Here is a demonstration for better understanding:
After running:
# docker-compose up -d
You can use either of the commands below to see the volumes available for a container:
# docker inspect -f '{{ .Mounts }}' <your-container-id-or-name>
# docker inspect <your-container-id-or-name> | jq --raw-output .[].Mounts
For example I have mariadb container:
root@sys:/home/akshay/Documents/test2# docker inspect 00f70198a466 | jq --raw-output .[].Mounts
[
  {
    "Type": "volume",
    "Name": "2a583fc243a9a2bb80cf45a80e5befbdc88db3b14026ff2349f80345f58c9562",
    "Source": "/var/lib/docker/volumes/2a583fc243a9a2bb80cf45a80e5befbdc88db3b14026ff2349f80345f58c9562/_data",
    "Destination": "/var/lib/mysql",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
  }
]
Here you can see Source:, which is the actual path on your host, and Destination:, which is the absolute path inside your container.
Now, say you wish to store the data on cloud storage: first stop your container, mount your external storage on the host, and move/copy the contents of Source: there if needed.
Then in volumes you just have to set the path, for example:
# creating local directory
# mkdir mysql-data
# copy whatever contents are in the volume
# cp -r /var/lib/docker/volumes/2a583fc243a9a2bb80cf45a80e5befbdc88db3b14026ff2349f80345f58c9562/_data mysql-data
# we copied data to directory of our interest
# instead of keeping in /var/lib/docker/volumes/...../_data
docker-compose down
inside your docker-compose.yml
volumes:
  # local mount
  - "./mysql-data:/var/lib/mysql"
  # path to your remote storage, ex: upload directory
  - "/path/where/s3-bucket/mounted:/var/www/somesite/uploads/"
and then
# now we refer volumes in local directory
docker-compose up -d
If no volumes are available, just enter your container as below and find the absolute path of the directory whose data you want to persist outside of the container.
# with bash
# docker exec -it <your-container-id-or-name> bash
# or with shell
# docker exec -it <your-container-id-or-name> sh
# and then browse folders
# for example
root@sys:~# docker exec -it 00f70198a466 bash
root@00f70198a466:/# pwd
/
root@00f70198a466:/# ls
bin  boot  dev  docker-entrypoint-initdb.d  etc  home  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@00f70198a466:/# cd /var/lib/mysql/
root@00f70198a466:/var/lib/mysql# pwd
/var/lib/mysql
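Coming back to the BlueSpice question above, the split between fast local data and S3-backed uploads could be sketched roughly like this. Note that both container paths below are assumptions for illustration; verify the real data directories by browsing inside the container as demonstrated:

```yaml
services:
  bluespice:
    container_name: Bluespice-Wiki
    image: bluespice/bluespice-free:3.2
    restart: always
    volumes:
      # fast local storage for the bulk of the application data
      - ./bluespice-data:/data
      # S3-backed mount (e.g. via s3fs) only for large upload files;
      # /data/www/datafiles is a hypothetical path, not confirmed by the image
      - /mnt/s3fs:/data/www/datafiles
```

Docker applies the more specific mount on top of the broader one, so writes under /data/www/datafiles land on the S3 mount while everything else under /data stays on local disk.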

How to remove unnamed volumes when docker compose down?

I have a docker-compose file which describes several services. All services have volumes attached to them, but only one volume is named. When I run docker compose down I want the unnamed volumes to be deleted automatically, while docker compose up should still create any volumes that are missing.
services:
  service1:
    image: some/image:1
    volumes:
      - named-volume:/home/user1
  service2:
    image: some/image:2
    #volumes: not declared; volumes are named automatically with a hash
volumes:
  named-volume:
    name: volume-for-service1
The first time I run docker compose up I want to automatically create all volumes (named and unnamed) and when I run docker compose down I want that unnamed volumes to be deleted while the named one (volume-for-service1) to be preserved. Next time I run docker compose up it should only create the unnamed volumes as the named one already exists.
I have tried:
docker compose down -v which removed no volume
docker compose down --remove-orphans which removed no volume
docker compose down --rmi local which removed no volume
docker-compose down -v which removed the named volume
docker-compose down --remove-orphans which removed no volume
docker-compose down --rmi local which removed no volume
OS: Windows 10 x64
I don't quite get it. What command should I run to achieve desired results?
Try using the --renew-anon-volumes flag when bringing up the services,
and use --volumes when bringing down the services:
> docker-compose up --renew-anon-volumes
> docker-compose down --volumes
Refer to the docker-compose documentation:
-V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving
                           data from the previous containers.
-v, --volumes              Remove named volumes declared in the `volumes`
                           section of the Compose file and anonymous volumes
                           attached to containers.
https://docs.docker.com/compose/reference/down/
To prevent removing named volumes, you should define them as external in the config file:
volumes:
  volume-for-service1:
    name: volume-for-service1
    external: true
But you have to initially create them outside the config file somewhere else, either through:
docker volume create volume-for-service-1
or in a separate config file.
Reference: https://docs.docker.com/compose/compose-file/#external-1
I'm not aware of a way to remove unnamed volumes automatically, but you can match its hash and remove it with a small script.
To reuse your docker-compose.yml example, first you get the container name given the service name with:
docker-compose ps service2 # this is the one with unnamed volume in your example
Output could be something like:
NAME                 COMMAND                  SERVICE    STATUS
project-service2-1   "docker-entrypoint.s…"   service2   exited (0)
Then using the container name you can find its unnamed volume hash:
docker inspect -f '{{ (index .Mounts 0).Name }}' project-service2-1
Now before deleting the volume you need to bring the container down or the volume would be in use.
docker-compose down
docker volume rm $volume # replace the "volume" var with the inspect output
Now that we saw the steps, let's try to make it a little script (slightly adjusted):
service_name=service2 # set the variable accordingly
container_id=$(docker-compose ps $service_name --quiet)
volume_name=$(docker inspect -f '{{ (index .Mounts 0).Name }}' $container_id)
docker-compose down
docker volume rm -f $volume_name
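An alternative sketch: since auto-generated (anonymous) volume names are 64 lowercase hex characters, you can recognize them by pattern instead of inspecting each container. The helper name below is made up for illustration, and note that any user-created volume whose name happens to match the pattern would be caught too:

```shell
#!/bin/sh
# is_anonymous_volume (hypothetical helper): succeed if the given volume
# name looks auto-generated, i.e. exactly 64 lowercase hex characters.
is_anonymous_volume() {
  printf '%s\n' "$1" | grep -qE '^[0-9a-f]{64}$'
}

# Usage (requires Docker; run after `docker-compose down` so the
# anonymous volumes are no longer in use):
#   docker volume ls -q | while read -r v; do
#     is_anonymous_volume "$v" && docker volume rm "$v"
#   done
```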

How to mount a folder in a machine to a docker container in another machine? [duplicate]

I have a compose file with v3 where there are 3 services sharing/using the same volume. While using swarm mode we need to create extra containers & volumes to manage our services across the cluster.
I am planning to use NFS server so that single NFS share will get mounted directly on all the hosts within the cluster.
I have found below two ways of doing it but it needs extra steps to be performed on the docker host -
Mount the NFS share using "fstab" or "mount" command on the host & then use it as a host volume for docker services.
Use Netshare plugin - https://github.com/ContainX/docker-volume-netshare
Is there a standard way where I can directly use/mount an NFS share using docker compose v3 by performing only a few or no steps (I understand that the "nfs-common" package is required anyhow) on the docker host?
After discovering that this is massively undocumented, here's the correct way to mount an NFS volume using stack and docker compose.
The most important thing is that you need to be using version: "3.2" or higher. You will have strange and un-obvious errors if you don't.
The second issue is that volumes are not automatically updated when their definition changes. This can lead you down a rabbit hole of thinking that your changes aren't correct, when they just haven't been applied. Make sure you docker volume rm VOLUMENAME everywhere it could possibly be, as if the volume exists, it won't be re-validated.
The third issue is more of a NFS issue - The NFS folder will not be created on the server if it doesn't exist. This is just the way NFS works. You need to make sure it exists before you do anything.
(Don't remove 'soft' and 'nolock' unless you're sure you know what you're doing - this stops docker from freezing if your NFS server goes away)
Here's a complete example:
[root@docker docker-mirror]# cat nfs-compose.yml
version: "3.2"

services:
  rsyslog:
    image: jumanjiman/rsyslog
    ports:
      - "514:514"
      - "514:514/udp"
    volumes:
      - type: volume
        source: example
        target: /nfs
        volume:
          nocopy: true
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
[root@docker docker-mirror]# docker stack deploy --with-registry-auth -c nfs-compose.yml rsyslog
Creating network rsyslog_default
Creating service rsyslog_rsyslog
[root@docker docker-mirror]# docker stack ps rsyslog
ID             NAME                IMAGE                       NODE      DESIRED STATE   CURRENT STATE                     ERROR   PORTS
tb1dod43fe4c   rsyslog_rsyslog.1   jumanjiman/rsyslog:latest   swarm-4   Running         Starting less than a second ago
[root@docker docker-mirror]#
Now, on swarm-4:
root@swarm-4:~# docker ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS              NAMES
d883e0f14d3f   jumanjiman/rsyslog:latest   "rsyslogd -n -f /e..."   6 seconds ago   Up 5 seconds   514/tcp, 514/udp   rsyslog_rsyslog.1.tb1dod43fe4cy3j5vzsy7pgv5
root@swarm-4:~# docker exec -it d883e0f14d3f df -h /nfs
Filesystem           Size   Used   Available   Use%   Mounted on
:/docker/example     7.2T   5.5T   1.7T        77%    /nfs
root@swarm-4:~#
This volume will be created (but not destroyed) on any swarm node that the stack is running on.
root@swarm-4:~# docker volume inspect rsyslog_example
[
    {
        "CreatedAt": "2017-09-29T13:53:59+10:00",
        "Driver": "local",
        "Labels": {
            "com.docker.stack.namespace": "rsyslog"
        },
        "Mountpoint": "/var/lib/docker/volumes/rsyslog_example/_data",
        "Name": "rsyslog_example",
        "Options": {
            "device": ":/docker/example",
            "o": "addr=10.40.0.199,nolock,soft,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
root@swarm-4:~#
Depending on how I need to use the volume, I have the following 3 options.
First, you can create the named volume directly and use it as an external volume in compose, or as a named volume in a docker run or docker service create command.
# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=nfsvers=4,addr=nfs.example.com,rw \
    --opt device=:/path/to/dir \
    foo
Next, there is the --mount syntax that works from docker run and docker service create. This is a rather long option, and when you are embedding a comma-delimited option within another comma-delimited option, you need to pass some quotes (escaped so the shell doesn't remove them) to the command being run. I tend to use this for a one-off container that needs to access NFS (e.g. a utility container to set up NFS directories):
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com\",volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=nfs.example.com\",volume-opt=device=:/host/path \
foo
Lastly, you can define the named volume inside your compose file. One important note when doing this: the named volume only gets created once and is not updated with any changes. So if you ever need to modify the named volume, you'll want to give it a new name.
# inside a docker-compose file
...
services:
  example-app:
    volumes:
      - "nfs-data:/data"
  ...
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=nfs.example.com,rw
      device: ":/path/to/dir"
...
In each of these examples:
Type is set to nfs, not nfs4. This is because docker provides some nice functionality on the addr field, but only for the nfs type.
The o field holds the options that get passed to the mount syscall. One difference between the mount syscall and the mount command in Linux is that the device has the portion before the : moved into an addr option.
nfsvers is used to set the NFS version. This avoids delays as the OS tries other NFS versions first.
addr may be a DNS name when you use type=nfs, rather than only an IP address. Very useful if you have multiple VPC's with different NFS servers using the same DNS name, or if you want to adjust the NFS server in the future without updating every volume mount.
Other options like rw (read-write) can be passed to the o option.
The device field is the path on the remote NFS server. The leading colon is required. This is an artifact of how the mount command moves the IP address to the addr field for the syscall. This directory must exist on the remote host prior to the volume being mounted into a container.
In the --mount syntax, the dst field is the path inside the container. For named volumes, you set this path on the right side of the volume mount (in the short syntax) on your docker run -v command.
If you get permission issues accessing a remote NFS volume, a common cause I've encountered is containers running as root, with the NFS server set to root squash (changing all root access to the nobody user). You either need to configure your containers to run as a well known non-root UID that has access to the directories on the NFS server, or disable root squash on the NFS server.
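A minimal sketch of the non-root approach, assuming a hypothetical image and that UID/GID 1000 has write access on the NFS export (both values are examples, not requirements):

```yaml
services:
  example-app:
    image: example/app      # placeholder image
    user: "1000:1000"       # run as a UID/GID the NFS server accepts
    volumes:
      - "nfs-data:/data"
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=nfs.example.com,rw
      device: ":/path/to/dir"
```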
Yes, you can directly reference an NFS share from the compose file:
volumes:
  db-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=$SOMEIP,rw
      device: ":$PathOnServer"
And in an analogous way you could create an nfs volume on each host.
docker volume create --driver local --opt type=nfs --opt o=addr=$SomeIP,rw --opt device=:$DevicePath --name nfs-docker
My solution for AWS EFS, that works:
Create the EFS (don't forget to open NFS port 2049 in the security group)
Install the nfs-common package:
sudo apt-get install -y nfs-common
Check if your EFS works:
mkdir efs-test-point
sudo chmod go+rw efs-test-point
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
touch efs-test-point/1.txt
sudo umount efs-test-point/
ls -la efs-test-point/
the directory must be empty
sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [YOUR_EFS_DNS]:/ efs-test-point
ls -la efs-test-point/
the file 1.txt must exist
Configure docker-compose.yml file:
services:
  sidekiq:
    volumes:
      - uploads_tmp_efs:/home/application/public/uploads/tmp
  ...
volumes:
  uploads_tmp_efs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=[YOUR_EFS_DNS],nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
      device: [YOUR_EFS_DNS]:/
My problem was solved by changing the driver option type to nfs4.
volumes:
  my-nfs-share:
    driver: local
    driver_opts:
      type: "nfs4"
      o: "addr=172.24.0.107,rw"
      device: ":/mnt/sharedwordpress"
If you are using AutoFS too, in docker-compose you may add :shared to all paths, like this:
volumes:
  - /some/nfs/mounted:/path:shared
I found this a better approach for my case, thanks to a colleague. Our users were getting an error stating 'too many symbolic links'...
Cheers!

Hyperledger Fabric BYFN - Unable to find directory listed in docker-compose-base.yaml

I am looking at docker-compose-base.yaml line 27:
volumes:
  - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
  - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
  - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
  - orderer.example.com:/var/hyperledger/production/orderer
I can find the following 3 directories on my filesystem:
../channel-artifacts/genesis.block
../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/
but I cannot find a directory named orderer.example.com. I think this is not meant to be a directory but related to the
container_name: orderer.example.com
in some way. Could anyone explain the meaning of the last mapping:
- orderer.example.com:/var/hyperledger/production/orderer
it does not look like a local <-> docker directory mapping. Then what is it?
Before starting the container, it's normal that the orderer.example.com folder does not exist, since it will be created when the container starts.
Mounting the directory /var/hyperledger/production/orderer between the host and the container can be useful if your orderer runs in solo mode.
Indeed, if your container crashes (for example, your server restarts), you keep track of the blocks that the orderer has built. This makes it easier to restart the blockchain network (or to move the orderer to another server).
TL;DR: The first three
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
are examples of bind mounts. The last one
- orderer.example.com:/var/hyperledger/production/orderer
is a volume. It's confusing because bind mounts can be created using the volume syntax and hence also get referred to as volumes in documentation.
This is not related to
container_name: orderer.example.com
docker-compose-base.yaml is a base file that is used e.g., by docker-compose-e2e-template.yaml. In that file one can see volumes being defined:
volumes:
  orderer.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:
these volumes behave the same way as if one had created them using the docker volume create command. See https://docs.docker.com/engine/reference/commandline/volume_create/ to understand what that command does. It is a way to create persistent storage that 1. does not get deleted when the docker containers stop and exit and, 2. can be used to share data amongst containers.
To see a list of all volumes created by docker on the machine, run:
docker volume ls
To inspect a volume, run (to give an example):
$ docker volume inspect net_orderer.example.com
[
    {
        "CreatedAt": "2018-11-06T22:10:42Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/net_orderer.example.com/_data",
        "Name": "net_orderer.example.com",
        "Options": null,
        "Scope": "local"
    }
]
re: the original question:
Unable to find directory listed in docker-compose-base.yaml
You will not be able to find this directory. E.g., if you try listing the mountpoint above:
$ ls /var/lib/docker/volumes/net_orderer.example.com/_data
ls: /var/lib/docker/volumes/net_orderer.example.com/_data: No such file or directory
docker volumes are not created as regular directories on the host; on Docker Desktop for Mac or Windows, /var/lib/docker lives inside a VM, so the mountpoint is not directly visible from the host shell. The steps to "get" to a volume are quite complex in fact. See https://forums.docker.com/t/host-path-of-volume/12277/9 for details.

Is there a way to tag or name volume instances using docker compose?

When using docker compose, I find a lot of volume instances:
› docker volume ls
DRIVER              VOLUME NAME
local               4a34b9a352a459171137aac4c046a83f61e6e325b1df4b67dc2ddda8439a6427
local               6ce3e52ea363441b2c9d4b04c26b283d8b4cf631a137987da88db812a9a2d223
local               a7af289b29c833510f2201647266001e4746e206128dc63313fe894821fa044d
local               fb09475f75fe943671a4e73d76c09c27a4f592b8ddf62224fc4b20afa0095809
I'd like to tag or name them, and then reuse them if possible rather than recreating them each time.
Is that possible?
Those are anonymous container volumes, created when you define a volume without a name and without binding it to a host folder. This may come from a VOLUME definition in your Dockerfile, a docker run -v /dir ... rather than name:/dir, or a volumes entry in your docker-compose.yml with only the container directory. An example of a compose file that does a named mount is:
version: '2'
volumes:
  my-vol:
    driver: local
services:
  my-container:
    image: my-image
    volumes:
      - my-vol:/container/path
Once the anonymous volume has been created, there's no easy way to rename it. The easiest solution is to mount the anonymous volume along with your target named volume and do a copy, e.g.:
docker run -v 123456789:/source -v my-vol:/target --rm \
busybox cp -av /source/. /target/
Where 123456789 is the long name of your anonymous volume.
