Why is Docker filling up /var/lib/docker/overlay2?

Even though I have (apparently) successfully removed all Docker images and containers, the folder /var/lib/docker/overlay2 is still huge (152 GB). Why? How do I reclaim the used disk space?
I tried renaming the folder (in preparation for possibly removing it), but that caused subsequent docker pull commands to fail.
It seems hard to believe that Docker would need this much disk space just to be able to pull an image again later. Please enlighten me as to what is wrong, or why it has to be this way.
List of commands run which should show what I have tried and the current status:
$ docker image prune --force
Total reclaimed space: 0B
$ docker system prune --force
Total reclaimed space: 0B
$ docker image prune -a --force
Total reclaimed space: 0B
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ du -h --max-depth=1 /var/lib/docker/overlay2 | sort -rh | head -25
152G /var/lib/docker/overlay2
1.7G /var/lib/docker/overlay2/ys1nmeu2aewhduj0dfykrnw8m
1.7G /var/lib/docker/overlay2/ydqchhcaqokdokxzbh6htqa49
1.7G /var/lib/docker/overlay2/xmffou5nk3zkrldlfllopxcab
1.7G /var/lib/docker/overlay2/tjz58rjkote2c79veonb3s6qa
1.7G /var/lib/docker/overlay2/rlnr04hlcudgoh6ujobtsu2ck
1.7G /var/lib/docker/overlay2/r4ubwsmrorpr08k8o5rko9n98
1.7G /var/lib/docker/overlay2/q8x21c9enjhpitt365smkmn4e
1.7G /var/lib/docker/overlay2/ntr973uef37oweqlxr4kmaxps
1.7G /var/lib/docker/overlay2/mcyasqzo2gry5dvjxoao1opws
1.7G /var/lib/docker/overlay2/m2k4u58mob6e2db86qqu1e1f8
1.7G /var/lib/docker/overlay2/lizesless03kch8j7kpk89rcf
1.7G /var/lib/docker/overlay2/kmu7mjvsopr8o63onbsijb98j
1.7G /var/lib/docker/overlay2/khgjwqry5drdy0jbwf47gr2lb
1.7G /var/lib/docker/overlay2/gt70ur50vw3whq265vmpep7ay
1.7G /var/lib/docker/overlay2/c3tm1fcuekmdreowrfcso7nd4
1.7G /var/lib/docker/overlay2/7j93t64mt63arj6sewyyejwyo
1.7G /var/lib/docker/overlay2/3ftxvvg2xg02xuwcb3ut3dq89
1.7G /var/lib/docker/overlay2/0m3o3lw6b1ggs8m6z4uv6ueqf
1.4G /var/lib/docker/overlay2/r82rfxme096cq5pg1xz1z5arg
1.4G /var/lib/docker/overlay2/qric73hv1z3nx4k0zop3fvcm6
1.4G /var/lib/docker/overlay2/oyb0a5ab5h642y30s6hawj4r9
1.4G /var/lib/docker/overlay2/oqf9ltfoy36evnkuo8ga2uepl
1.4G /var/lib/docker/overlay2/ntuwvljxxzqs2oxhgg3enyo7x
1.4G /var/lib/docker/overlay2/l0oi2lxdrtg42hk2rznknqk0r
$ ls -l /var/lib/docker/overlay2
total 136
drwx------ 4 root root 72 Nov 20 13:03 00ep8i7v5bdmhqsxdoikslr19
drwx------ 4 root root 72 Feb 28 09:47 026x5e2xns6ui2acym19qfvl7
drwx------ 4 root root 72 Apr 2 19:20 032y8d31damevtfymq6yzkyi4
drwx------ 4 root root 72 Apr 23 13:42 03wwbyd4uge9u0auk94wwdlig
drwx------ 4 root root 72 Jan 15 12:46 04cy91a19owwqu9hyw6vruhzo
drwx------ 4 root root 72 Apr 2 14:44 051625a0f856b63ed67a3bc9c19f09fb1c90303b9536791dc88717cb7379ceeb
drwx------ 4 root root 72 Dec 3 19:56 059fk19uw70p6fqzei6wnj8s2
drwx------ 4 root root 72 Apr 21 15:03 059mddrhqegqhxv1ockejw9gs
drwx------ 4 root root 72 Nov 28 11:26 069dwkz92m8fao6whxnj4x9vp
drwx------ 4 root root 72 Feb 28 09:47 06h7qo5f70oyzaqgn1elbx5u8
drwx------ 4 root root 72 Dec 18 13:27 0756fd640036fa92499cfdcf4bcc3081d9ec16c25eebe5964d5e12d22beb9991
drwx------ 4 root root 72 Apr 20 11:32 09rk4gm6x2mcquc5cz0yvbawq
drwx------ 4 root root 72 Apr 2 19:55 09scfio3qvtewzgc5bdwgw4f6
drwx------ 4 root root 72 May 4 14:00 0ac2a09aa4a038981d37730e56dece4a3d28e80f261b68587c072b4012dc044a
drwx------ 4 root root 72 Feb 25 14:19 0c399f5c349ec61ac175525c52533b069a52028354c1055894466e9b5430fbc3
drwx------ 4 root root 72 May 4 14:00 0cac39b1382986a2d9a6690985792c04d03192337ea37ee06cb74f2f457b7bb7
drwx------ 4 root root 72 Mar 5 08:41 0czco1xx3148slgwf8imdrk33
drwx------ 4 root root 72 Apr 21 08:30 0gb2iqev9e7kr587l09u19eff
drwx------ 4 root root 72 Feb 20 18:03 0gknqh4pyg46uzi6asskbf8xk
drwx------ 4 root root 72 Jan 8 11:43 0gugiou3wqu53os4dageh77ty
drwx------ 4 root root 72 Jan 7 11:31 0i8fd5jet6ieajyl2uo1xj2ai
...
$ docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:27:04 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:25:42 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

You might have switched storage drivers somewhere along the way, so maybe Docker is cleaning up the data for the current driver but leaving the old overlay2 data as is (though I still can't understand why pulling images would fail).
Let's try this: run docker info and check what your storage driver is:
$ docker info
Containers: 0
Images: 0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
<output truncated>
If it is not overlay2 (it is in the output above), try switching to it, then prune your Docker images again and check whether that cleans up the folder.
Another possible solution is mentioned in this thread: people comment that clearing the container logs solves this problem, so try the following:
Remove all log files:
find /var/lib/docker/containers/ -type f -name "*.log" -delete
Restart docker daemon (or entire machine):
sudo systemctl restart docker
or
docker-compose down && docker-compose up -d
or
shutdown -r now

in preparation for a possible removal of the folder
If you are going to delete all data from the Docker directory anyway it is safe to:
Stop Docker Daemon
Remove the /var/lib/docker directory entirely
Restart Docker Daemon
Docker will then recreate any needed data directories.
You can also add:
"log-driver": "json-file",
"log-opts": {"max-size": "20m", "max-file": "3"},
to your /etc/docker/daemon.json to restrict log size and growth in the future or set the log-driver to "journald" to eliminate log files entirely.
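Assembled into a complete file, /etc/docker/daemon.json might look like this (a sketch using the sizes suggested above; adjust to taste and restart the daemon after editing):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "3"
  }
}
```

With these options each container keeps at most three rotated log files of 20 MB each.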

Thanks for your input and suggestions!
I believe that I am still using overlay2 as storage driver:
$ docker info
Client:
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.8
Storage Driver: overlay2
<output truncated>
I also cleared the logs and restarted the daemon and actually also the entire machine. The problem however remained.
In the end I solved it by stopping the daemon, removing the entire Docker directory, and restarting the daemon, as suggested above:
df -h
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker_old
sudo systemctl start docker
sudo rm -rf /var/lib/docker_old
df -h
I fear, however, that this is not a permanent solution and that the problem will come back, but hopefully this will last another year. :)

Try pruning everything, including volumes (the --volumes flag is the difference from the original poster's commands):
$ docker system prune --volumes
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N]
That freed up a bunch of space for me and solved my issue. I think the build cache was one of the culprits for me.

Two things will fill up /var/lib/docker/overlay2:
Docker images: clean those up with docker image prune -a. Note that any image not currently associated with a container will be deleted, so you will need to pull or build it again if you need it later.
Container Specific Changes: any write to the container filesystem that isn't going to another mount (like a volume) will cause a copy-on-write that is stored in the container specific layer. You can see these changes with docker diff on a container. Even a metadata change like file ownership, permissions, or a timestamp, can trigger this copy-on-write of the entire file.
Things that are not included in this directory:
Volumes: Named volumes will be stored in /var/lib/docker/volumes by default. You can still prune these with docker volume prune but make sure you have backed up any important data first. A better cleanup is to remove unused anonymous volumes with a command like:
docker volume ls -qf dangling=true | egrep '^[a-z0-9]{64}$' | \
xargs --no-run-if-empty docker volume rm
Container Logs: Container logs will be written to /var/lib/docker/containers. If these are taking up space, it's best to have docker automatically rotate those. See this answer for details on rotating logs.
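As a side note, the egrep pattern in the volume-cleanup command above matches only Docker's auto-generated anonymous volume names, which are exactly 64 lowercase hex characters; named volumes fall through the filter and are kept. A quick offline check of the pattern against sample names (the names here are taken from elsewhere on this page, not from a live daemon):

```shell
# Feed one anonymous-style 64-hex name and one named volume through the filter.
printf '%s\n' \
  "0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d" \
  "my-jenkins" |
  grep -E '^[a-z0-9]{64}$'
# Only the 64-character hex name survives; "my-jenkins" is filtered out.
```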

I had the same problem, /var/lib/docker/overlay2 was using 17 GB even after removing every docker image: docker image rm ...
When you stop a container, it is not automatically removed unless you started it with the --rm flag. Container sizes, including those of stopped containers, can be seen with: docker container ls -a -s
I managed to reclaim the space taken by stopped containers with: docker container prune
Nick's answer is perhaps even better, as it cleans up every unused Docker object:
docker system prune --volumes

Related

Why can I mount one image on a loopback device, but not a second one on another inside a container?

I'm following "Is it possible to mount an ISO inside a docker container?" to get a test running inside Docker, which is then to be used in CI. The script below is part of the test. What I don't get is why the first image mounts but the second does not. Since the first one mounts, this can't be a matter of permissions, capabilities, etc., as far as I know.
$ dd if=/dev/zero of=testfs-1.img bs=1M count=32
32+0 records in
32+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.0244791 s, 1.4 GB/s
$ dd if=/dev/zero of=testfs-2.img bs=1M count=32
32+0 records in
32+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.0242179 s, 1.4 GB/s
$ mkfs -F testfs-1.img
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: 1024/32768 done
Creating filesystem with 32768 1k blocks and 8192 inodes
Filesystem UUID: 7e752a1c-1f0c-4efb-8cd9-67f5922adf7b
Superblock backups stored on blocks:
8193, 24577
Allocating group tables: 0/4 done
Writing inode tables: 0/4 done
Writing superblocks and filesystem accounting information: 0/4 done
$ mkfs -F testfs-2.img
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: 1024/32768 done
Creating filesystem with 32768 1k blocks and 8192 inodes
Filesystem UUID: cdd08978-4a52-407b-81c6-98d908eadee8
Superblock backups stored on blocks:
8193, 24577
Allocating group tables: 0/4 done
Writing inode tables: 0/4 done
Writing superblocks and filesystem accounting information: 0/4 done
$ mkdir -p src/mnt-1/hidden-1 src/mnt-2/hidden-2
$ ls -la src/
total 0
drwxr-xr-x 1 root root 20 Jun 13 23:15 .
drwxrwxrwx 1 root root 90 Jun 13 23:15 ..
drwxr-xr-x 1 root root 16 Jun 13 23:15 mnt-1
drwxr-xr-x 1 root root 16 Jun 13 23:15 mnt-2
$ losetup -f
/dev/loop15
$ mount -o loop testfs-1.img src/mnt-1
$ losetup -f
/dev/loop16
$ mount -o loop testfs-2.img src/mnt-2
mount: src/mnt-2: failed to setup loop device for /builds/krichter-sscce/docker-losetup/testfs-2.img.
The test is from bup, in case anyone needs more background.
I'm using the image ubuntu:18.04 for the tests.
I can reproduce this with docker run --privileged -it ubuntu:18.04 and then inside the container executing
#!/bin/sh
dd if=/dev/zero of=testfs-1.img bs=1M count=32
dd if=/dev/zero of=testfs-2.img bs=1M count=32
mkfs -F testfs-1.img
mkfs -F testfs-2.img
mkdir -p src/mnt-1/hidden-1 src/mnt-2/hidden-2
ls -la src/
losetup -f
mount -o loop testfs-1.img src/mnt-1
losetup -f
mount -o loop testfs-2.img src/mnt-2
with bash. My Docker version is
> docker version
Client: Docker Engine - Community
Version: 19.03.11
API version: 1.40
Go version: go1.13.10
Git commit: 42e35e61f3
Built: Mon Jun 1 09:12:34 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 42e35e61f3
Built: Mon Jun 1 09:11:07 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
on an Ubuntu 20.04 host.
This is taken from GitHub and I take no credit for it, but it might help. Take a look at this link.
A similar issue with creating loop devices was fixed by Tony Fahrion following these steps.
Precondition: the Docker container must be running in --privileged mode.
LOOPDEV=$(losetup --find --show --partscan ${IMAGE_FILE})
# drop the first line, as this is our LOOPDEV itself; we only want the child partitions
PARTITIONS=$(lsblk --raw --output "MAJ:MIN" --noheadings ${LOOPDEV} | tail -n +2)
COUNTER=1
for i in $PARTITIONS; do
MAJ=$(echo $i | cut -d: -f1)
MIN=$(echo $i | cut -d: -f2)
if [ ! -e "${LOOPDEV}p${COUNTER}" ]; then mknod ${LOOPDEV}p${COUNTER} b $MAJ $MIN; fi
COUNTER=$((COUNTER + 1))
done
The trick seems to be related to the mknod command. As I said earlier, I hope it helps (it was just too long to put in a comment).

Recreate container on stop with docker-compose

I am trying to set up a multi-container service with docker-compose.
Some of the containers need to be restarted from a fresh container (eg. the file system should be like in the image) when they restart.
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system as it uses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply, as it only recreates the containers at the moment the command is run.
EDIT:
This is probably not a docker-compose issue, but more of a general Docker question: what is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand-new container from the same image. By restarted, I mean docker restart, or docker stop followed by docker start.
In docker, immutability typically refers to the image layers. They are immutable, and any changes are pushed to a container specific copy-on-write layer of the filesystem. That container specific layer will last for the lifetime of the container. So to have those files not-persist, you have two options:
Recreate the container instead of just restart it
Don't write the changes to the container filesystem, and don't write them to any persistent volumes.
You cannot do #1 with a restart policy, by its very definition: a restart policy gives you the same container filesystem with the application restarted. But if you use Docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result.
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem, or to a volume, then where? The answer is a tmpfs volume that is only stored in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line. Here's an example on the docker command line.
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough, it behaves as a normal container, let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory came back empty, and we can add data to it again as needed. The only downside is that the directory will be empty even if the image has content at that location. I've tried combinations of named volumes, and the --mount syntax with the volume-nocopy option set to 0, without luck. So if you need the directory to be initialized, you'll need to do that as part of your container's entrypoint/cmd, by copying from another location.
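In a compose file, the CLI demo above corresponds to something like the following (a sketch; the service name, image, and command are placeholders for your own):

```yaml
services:
  app:
    image: busybox
    command: sleep 3600
    tmpfs:
      - /data   # in-memory; contents are lost on every container restart
```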
To avoid persisting any changes to your containers, it is enough not to map any host directory into the container.
That way, every time a container is created (with docker run or docker-compose up), it starts with a fresh filesystem.
docker-compose down also removes the containers, deleting any data.
The best solution I have found so far is for the container itself to clean up when starting or stopping. I solve this by cleaning up at startup.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server

Is there a way to list files inside a docker volume?

Simple question: Is there a docker command to view the files inside a volume?
I run docker for windows which creates a MobyLinuxVM on my machine to run Docker. I can't get a remote desktop connection onto this machine like I can with an Ubuntu VM (which I also have running on my machine).
Therefore, I can't see a way to see what is inside my host volumes (as they are actually inside the MobyLinuxVM), where as if I ran docker on my Ubuntu VM I could remote onto the machine and take a look.
Therefore, is there a way I can run some sort of docker volume command to list what's inside each volume?
You can use a temporary container for this. I tend to use busybox for these temporary containers:
$ docker volume ls
DRIVER VOLUME NAME
local jenkins-home
local jenkins-home2
local jenkinsblueocean_jenkins-data
...
$ docker run -it --rm -v jenkins-home:/vol busybox ls -l /vol
total 428
-rw-r--r-- 1 1000 1000 327 Jul 14 2016 com.dabsquared.gitlabjenkins.GitLabPushTrigger.xml
-rw-r--r-- 1 1000 1000 276 Aug 17 2016 com.dabsquared.gitlabjenkins.connection.GitLabConnectionConfig.xml
-rw-r--r-- 1 1000 1000 256 Aug 17 2016 com.nirima.jenkins.plugins.docker.DockerPluginConfiguration.xml
drwxr-xr-x 28 1000 1000 4096 Aug 17 2016 config-history
-rw-r--r-- 1 1000 1000 6460 Aug 17 2016 config.xml
-rw-r--r-- 1 1000 1000 174316 Jun 2 18:50 copy_reference_file.log
-rw-r--r-- 1 1000 1000 2875 Aug 9 2016 credentials.xml
...
For a host volume, you can just replace the volume mount with the host directory name (fully qualified) in the docker run cli.
$ docker run -it --rm -v /path/on/host:/vol busybox ls -l /vol
This isn't a direct answer to the question (because it was asking about a docker command) but in case anyone arrives here like I did:
If you have Docker Desktop (on Windows at least) you can explore into a volume using the Docker Desktop GUI. Just click on the volume, then switch to the "Data" tab at the top.
Quick and easy if you are just wanting to take a look around or copy out a file.
Not sure how widely applicable this is, but if you have root access I've just discovered that you can browse the contents of a volume at /var/lib/docker/volumes/<VOLUME_NAME>/_data. VOLUME_NAME is as shown by docker volume ls.
I'm looking at an Ubuntu 18.04 VM running Docker 19.03.5 - YMMV.

Find out to which removed docker container a volume belonged to

Is there a way to associate existing docker volumes (located in /var/lib/docker/volumes) with containers?
One way to do this is docker inspect <container_id>, but that assumes the container still exists. How can you find out which container a volume belonged to when that container no longer exists?
Check docker volumes
$ ls -l /var/lib/docker/volumes/
total 72
drwxr-xr-x 3 root root 4096 Nov 14 14:27 0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d
drwxr-xr-x 3 root root 4096 Nov 29 14:29 my-jenkins
For more info about your volume you can run docker volume inspect, but that tells you nothing about what's actually inside it. The only way to know is to go into the volume folder and look.
So I'll check the "unnamed" volume:
$ ls -l /var/lib/docker/volumes/0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d/_data
...
drwx------ 2 999 ping 4096 Nov 14 14:27 pg_tblspc
drwx------ 2 999 ping 4096 Nov 14 14:27 pg_twophase
drwx------ 3 999 ping 4096 Nov 14 14:27 pg_xlog
-rw------- 1 999 ping 88 Nov 14 14:27 postgresql.auto.conf
-rw------- 1 999 ping 20791 Nov 14 14:27 postgresql.conf
-rw------- 1 999 ping 37 Nov 14 14:27 postmaster.opts
Normally you should be able to link the volume back to the old container this way: go inside and check everything that's in it. There isn't a better way at the moment. This is actually the answer to your question, but I'll give some more explanation to make it easier in the future.
The best way is to create named volumes. After deleting your container the volume will remain easy to recognize:
docker volume create --name my-jenkins
So in /var/lib/docker/volumes you'll see my-jenkins.
Now I start my jenkins container and link it with my named volume.
Everything which is in /var/jenkins_home will be stored in the named volume.
docker run -d -p 8080:8080 -v my-jenkins:/var/jenkins_home jenkins
I'll create a job in jenkins with the name firstjob. You'll see this job in my named docker volume.
$ ls -l /var/lib/docker/volumes/my-jenkins/_data/jobs/
total 4
drwxr-xr-x 3 dockrema dockrema 4096 Nov 29 14:47 firstjob
Now I will delete my container (id = fa1003894dbc). The container is gone:
$ docker rm -fv fa1003894dbc
A bit later, I want to reuse the named docker volume, which still exists, to start a new jenkins container that will immediately contain the job "firstjob".
$ docker run -d -p 8080:8080 -v my-jenkins:/var/jenkins_home jenkins
If you have an unnamed docker volume (created automatically with name 0f8018x9cf76b04x163b6xdf) you can use
docker run -d -v 0f8018x9cf76b04x163b6xdf:/var/jenkins_home jenkins
Now your jenkins will use everything inside that volume. (It's just not a named volume, which makes it harder to tell which kind of container it was linked to, but by looking at the volume folder's contents you can usually find out.)

Having trouble setting up a persistent data volume for a Docker image

I've been looking into setting up a data volume for a Docker container that I'm running on my server. The container is from this FreePBX image https://hub.docker.com/r/jmar71n/freepbx/
Basically I want persistent data so I don't lose my VoIP extensions and settings in the case of Docker stopping. I've tried many guides, ones here on stack overflow, and on the Docker manpages, but I just can't quite get it to work.
Can anyone help me with what commands I need to run in order to attach a volume to the FreePBX image I linked above?
You can do this by running a container with the -v option and mapping to a host directory - you just need to know where the container's storing the data.
Looking at the Dockerfile for that image, I'm assuming that the data you're interested in is stored in MySql. In the MySql config the data directory the container's using is /var/lib/mysql.
So you can start your container like this, mapping the MySql data directory to /docker/pbx-data on your host:
> docker run -d -t -v /docker/pbx-data:/var/lib/mysql jmar71n/freepbx
20b45b8fb2eec63db3f4dcab05f89624ef7cb1ff067cae258e0f8a910762fb1a
Use docker inspect to confirm that the mount is mapped as expected:
> docker inspect --format '{{json .Mounts}}' 20b
[{"Source":"/docker/pbx-data",
"Destination":"/var/lib/mysql",
"Mode":"","RW":true,"Propagation":"rprivate"}]
When the container runs it bootstraps the database, so on the host you'll be able to see the contents of the MySql data directory the container is using:
> ls -l /docker/pbx-data
total 28684
-rw-r----- 1 103 root 2062 Sep 21 09:30 20b45b8fb2ee.err
-rw-rw---- 1 103 messagebus 18874368 Sep 21 09:30 ibdata1
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile0
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile1
drwx------ 2 103 root 4096 Sep 21 09:30 mysql
drwx------ 2 103 messagebus 4096 Sep 21 09:30 performance_schema
If you kill the container and run another one with the same volume mapping, it will have all the data files from the previous container, and your app state should be preserved.
I'm not familiar with FreePBX, but if there is state being stored in other directories, you can find the locations in config and map them to the host in the same way, with multiple -v options.
Hi Elton Stoneman and user3608260!
Yes, you assumed correctly that the data is saved in MySQL (records, users, configs, etc.).
In Asterisk, however, all configuration is saved in '.conf' files and the like.
In this case, the files user3608260 is looking for are stored in '/etc/asterisk/*'.
Your answer works perfectly with one more option: -v /local_to_save:/etc/asterisk
the final docker command:
docker run -d -t -v /docker/pbx-data:/var/lib/mysql -v /docker/pbx-asterisk:/etc/asterisk jmar71n/freepbx
[Assuming /docker/pbx-asterisk is a host directory. ]
