I am trying to set up a multi-container service with docker-compose.
Some of the containers need to come back as a fresh container when they restart (e.g. the file system should be identical to the one in the image).
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system, as it reuses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply here, as it only recreates the containers when the command itself is run.
EDIT:
This is probably not a docker-compose issue, but more of a general docker question: What is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand-new container from the same image. By restarted, I mean the docker command docker restart, or docker stop followed by docker start.
In docker, immutability typically refers to the image layers. They are immutable, and any changes are pushed to a container-specific copy-on-write layer of the filesystem. That container-specific layer lasts for the lifetime of the container. So to have those files not persist, you have two options:
Recreate the container instead of just restart it
Don't write the changes to the container filesystem, and don't write them to any persistent volumes.
You cannot do #1 with a restart policy, by its very definition. A restart policy gives you the same container filesystem, with the application restarted. But if you use docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result.
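If migrating to swarm mode is an option, a minimal sketch of that (the service name and image are placeholders):

$ docker swarm init
$ docker service create --name web --restart-condition any nginx

When a task's container exits, swarm schedules a replacement task backed by a brand-new container, rather than restarting the old one.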
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem or to a volume, then where? The answer is a tmpfs mount, which is stored only in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line on the service; a compose sketch follows, and after that an example on the docker command line.
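A minimal compose sketch (the service name, image, and command are placeholders):

version: "3"
services:
  app:
    image: busybox
    command: sleep 3600
    tmpfs:
      - /data   # held in memory only; starts empty on every container start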
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough; it behaves like a normal container. Now let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory came back empty, and we can add data to it again as needed. The only downside is that the directory will be empty even if you have content in the image at that location. I've tried combinations of named volumes, and the mount syntax with the volume-nocopy option set to 0, without luck. So if you need the directory to be initialized, you'll need to do that as part of your container entrypoint/cmd by copying from another location; a sketch of that follows.
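A sketch of such an entrypoint, assuming the image ships its defaults at /data-default (a hypothetical path) and the tmpfs is mounted at /data:

#!/bin/sh
# seed the empty tmpfs from content baked into the image, then run the CMD
cp -a /data-default/. /data/
exec "$@"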
To avoid persisting any changes to your containers, it is enough not to map any host directory into the container.
That way, every time a container is created and run (with docker run or docker-compose up), it starts with a fresh file system.
docker-compose down also removes the containers, deleting any data.
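So a full reset between runs is simply (a sketch, run from the compose project directory):

$ docker-compose down    # removes the containers and their writable layers
$ docker-compose up -d   # creates brand-new containers from the images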
The best solution I have found so far is for the container itself to make sure it cleans up when starting or stopping. I solve this by cleaning up when starting.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
#!/bin/sh
set -e
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server
exec "$@"   # hand off to the container's CMD
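For context, the Dockerfile side might look like this sketch (the app/ directory and start.sh are hypothetical names; the paths match the script above):

FROM debian:stable-slim              # placeholder base image
COPY app/ /srv/template/             # app files baked into the image
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["./start.sh"]                   # hypothetical start command, exec'd by the entrypoint from /srv/server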
Related
How does docker create a file system for a Linux container? And how are permissions set up on the root file system?
I encountered a situation when starting a docker container on a particular machine with Ubuntu Server. For some reason, /tmp in the container doesn't have write permissions:
$ docker run -it python:3.11-slim-buster /bin/bash
root@5d5fefe9b9a2:/# ls -la /tmp
total 8
drwxr-xr-t 1 root root 4096 Jan 26 06:58 .
drwxr-xr-x 1 root root 4096 Jan 29 04:31 ..
Note that this has 755 permissions.
However, when I start the same docker image as a container on WSL, I get 777:
$ docker run -it python:3.11-slim-buster /bin/bash
root@201dfe147e5a:/# ls -la /tmp
total 8
drwxrwxrwt 1 root root 4096 Nov 16 06:56 .
drwxr-xr-x 1 root root 4096 Jan 29 04:36 ..
This was fine a few weeks ago on the Ubuntu machine. I recently moved all the files from /var/lib/docker to /docker because the partition mounted at /var was full. Would this have caused the behavior with the permissions of /tmp inside a container? If so, why? And how do I fix it? If not, what else would cause this and...how do I fix it?
Docker uses a so-called union file system for a running container. The recommended driver on Linux is called overlay2. The files and directories for each layer of an image are stored under /var/lib/docker/overlay2, assuming the default config. The directory structure for each layer is combined to create the final file system for the container. See https://docs.docker.com/storage/storagedriver/overlayfs-driver/ for more details.
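To see which storage driver the daemon uses and where a container's layers live, you can run something like this (output abridged; my-container is a placeholder name):

$ docker info --format '{{.Driver}}'
overlay2
$ docker inspect --format '{{json .GraphDriver.Data}}' my-container
{"LowerDir":"/var/lib/docker/overlay2/...","MergedDir":"/var/lib/docker/overlay2/.../merged","UpperDir":"...","WorkDir":"..."}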
As for the permissions of the files in the container, they are derived from the permissions of the files in that directory on the host file system. When I copied the files from /var/lib/docker to /docker, I failed to preserve ownership and permissions. My best guess is that umask was applied as each new file was created.
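For what it's worth, a copy that preserves ownership and permissions would have avoided this. A sketch, assuming the daemon is stopped during the move:

$ systemctl stop docker
$ cp -a /var/lib/docker/. /docker/   # -a preserves mode, ownership, timestamps, and links
$ systemctl start docker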
I'm working on a build step that handles common deployment tasks in a Docker Swarm Mode cluster. As this is a common problem for us and for others, we've shared this build step as a BitBucket pipe: https://bitbucket.org/matchory/swarm-secret-pipe/
The pipe needs to use the docker command to work with a remote Docker installation. This doesn't work, however, because the docker executable cannot be found when the pipe runs.
The following holds true for our test repository pipeline:
The docker option is set to true:
options:
  docker: true
The docker service is enabled for the build step:
main:
  - step:
      services:
        - docker
Docker works fine in the repository pipeline itself, but not within the pipe.
Pipeline log shows the docker path being mounted into the pipe container:
docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
radiergummi/swarm-secret-pipe:1.3.7@sha256:baf05b25b38f2a59b044e07f4ad07065de90257a000137a0e1eb71cbe1a438e5
The pipe is pretty standard and uses a recent Alpine image; nothing special in that regard. The PATH is never overwritten. Now for the fun part: If I do ls /usr/local/bin/docker inside the pipe, it shows an empty directory:
ls /usr/local/bin
total 16K
drwxr-xr-x 1 root root 4.0K May 13 13:06 .
drwxr-xr-x 1 root root 4.0K Apr 4 16:06 ..
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 docker
ls /usr/local/bin/docker
total 8K
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 .
drwxr-xr-x 1 root root 4.0K May 13 13:06 ..
ls: /usr/local/bin/docker/docker: No such file or directory
As far as I understand pipelines and Docker, /usr/local/bin/docker should be the docker binary file. Instead, it appears to be an empty directory for some reason.
What is going on here?
I've also looked at other, official pipes. They don't do anything differently, but they seem to use the docker command just fine (e.g. the Azure pipe).
After talking to BitBucket support, I solved the issue. As it turns out, if the docker context is changed, any docker command is sent straight to the remote docker binary, which (on our servers) lives at a different path than in BitBucket Pipelines!
Because we had changed the docker context before using the pipe, the docker instance mounted into the pipe still had the remote context set; it therefore looked for the docker binary at a different place, and the No such file or directory error was thrown.
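You can check which context the mounted client is using with docker context ls (output abridged and illustrative; the asterisk marks the active context):

$ docker context ls
NAME       DOCKER ENDPOINT
default    unix:///var/run/docker.sock
remote *   ssh://user@host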
TL;DR: Always restore the default docker host/context before passing control to a pipe, e.g.:
script:
  - export DEFAULT_DOCKER_HOST=$DOCKER_HOST
  - unset DOCKER_HOST
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - export DOCKER_HOST=$DEFAULT_DOCKER_HOST # <------ restore the default host
  - pipe: matchory/swarm-secret-pipe:1.3.16
Simple question: Is there a docker command to view the files inside a volume?
I run Docker for Windows, which creates a MobyLinuxVM on my machine to run Docker. I can't get a remote desktop connection onto this machine like I can with an Ubuntu VM (which I also have running on my machine).
Therefore, I can't see what is inside my host volumes (as they are actually inside the MobyLinuxVM), whereas if I ran docker on my Ubuntu VM I could remote onto the machine and take a look.
Therefore, is there a way I can run some sort of docker volume command to list what's inside each volume?
You can use a temporary container for this. I tend to use busybox for these temporary containers:
$ docker volume ls
DRIVER VOLUME NAME
local jenkins-home
local jenkins-home2
local jenkinsblueocean_jenkins-data
...
$ docker run -it --rm -v jenkins-home:/vol busybox ls -l /vol
total 428
-rw-r--r-- 1 1000 1000 327 Jul 14 2016 com.dabsquared.gitlabjenkins.GitLabPushTrigger.xml
-rw-r--r-- 1 1000 1000 276 Aug 17 2016 com.dabsquared.gitlabjenkins.connection.GitLabConnectionConfig.xml
-rw-r--r-- 1 1000 1000 256 Aug 17 2016 com.nirima.jenkins.plugins.docker.DockerPluginConfiguration.xml
drwxr-xr-x 28 1000 1000 4096 Aug 17 2016 config-history
-rw-r--r-- 1 1000 1000 6460 Aug 17 2016 config.xml
-rw-r--r-- 1 1000 1000 174316 Jun 2 18:50 copy_reference_file.log
-rw-r--r-- 1 1000 1000 2875 Aug 9 2016 credentials.xml
...
For a host volume, you can just replace the volume mount with the fully qualified host directory name in the docker run CLI:
$ docker run -it --rm -v /path/on/host:/vol busybox ls -l /vol
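To copy a file out rather than just list it, one option is a stopped temporary container plus docker cp (a sketch, reusing the jenkins-home volume from the example above):

$ docker create --name volpeek -v jenkins-home:/vol busybox
$ docker cp volpeek:/vol/config.xml ./config.xml
$ docker rm volpeek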
This isn't a direct answer to the question (because it was asking about a docker command) but in case anyone arrives here like I did:
If you have Docker Desktop (on Windows at least) you can explore into a volume using the Docker Desktop GUI. Just click on the volume, then switch to the "Data" tab at the top.
Quick and easy if you are just wanting to take a look around or copy out a file.
Not sure how widely applicable this is, but if you have root access I've just discovered that you can browse the contents of a volume at /var/lib/docker/volumes/<VOLUME_NAME>/_data. VOLUME_NAME is as shown by docker volume ls.
I'm looking at an Ubuntu 18.04 VM running Docker 19.03.5 - YMMV.
Is there a way to associate existing docker volumes (located in /var/lib/docker/volumes) with containers?
One way to do this is docker inspect <container_id>, but this assumes that the container exists. How can you find which container a volume belonged to when the container no longer exists?
Check docker volumes
$ ls -l /var/lib/docker/volumes/
total 72
drwxr-xr-x 3 root root 4096 Nov 14 14:27 0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d
drwxr-xr-x 3 root root 4096 Nov 29 14:29 my-jenkins
For more info about your volume you can run docker volume inspect, but that tells you nothing about what's really inside the volume. The only way to know is to go inside the volume folder and check.
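That said, docker volume inspect does at least tell you where the volume's data lives on the host, e.g.:

$ docker volume inspect --format '{{.Mountpoint}}' my-jenkins
/var/lib/docker/volumes/my-jenkins/_data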
So I'll check the "unnamed" volume:
$ ls -l /var/lib/docker/volumes/0f801819cf76b04b6794163b65df5d649bd795e23f4fc778f78db9ac60a0180d/_data
...
drwx------ 2 999 ping 4096 Nov 14 14:27 pg_tblspc
drwx------ 2 999 ping 4096 Nov 14 14:27 pg_twophase
drwx------ 3 999 ping 4096 Nov 14 14:27 pg_xlog
-rw------- 1 999 ping 88 Nov 14 14:27 postgresql.auto.conf
-rw------- 1 999 ping 20791 Nov 14 14:27 postgresql.conf
-rw------- 1 999 ping 37 Nov 14 14:27 postmaster.opts
Normally you should be able to link your volume back to the old container you used and check everything that's inside. There isn't a better way at the moment. This actually answers your question, but I'll give some more explanation to make it easier in the future.
The best way is to create named volumes. After deleting your container the volume will remain easy to recognize:
docker volume create --name my-jenkins
So in /var/lib/docker/volumes you'll see my-jenkins.
Now I start my jenkins container and link it with my named volume.
Everything which is in /var/jenkins_home will be stored in the named volume.
docker run -d -p 8080:8080 -v my-jenkins:/var/jenkins_home jenkins
I'll create a job in jenkins with the name firstjob. You'll see this job in my named docker volume.
$ ls -l /var/lib/docker/volumes/my-jenkins/_data/jobs/
total 4
drwxr-xr-x 3 dockrema dockrema 4096 Nov 29 14:47 firstjob
Now I will delete my container (id = fa1003894dbc). The container is gone:
$ docker rm -fv fa1003894dbc
A bit later, I want to reuse the named docker volume, which still exists, to start a new jenkins container that will immediately contain the job "firstjob".
$ docker run -d -p 8080:8080 -v my-jenkins:/var/jenkins_home jenkins
If you have an unnamed docker volume (created automatically with a name like 0f8018x9cf76b04x163b6xdf), you can use
docker run -d -v 0f8018x9cf76b04x163b6xdf:/var/jenkins_home jenkins
Now your jenkins will use everything inside that volume. (It's just not a named volume, which makes it harder to see which container it was linked with; but by checking the contents of the volume folder you will find it in most cases.)
I've been looking into setting up a data volume for a Docker container that I'm running on my server. The container is from this FreePBX image https://hub.docker.com/r/jmar71n/freepbx/
Basically I want persistent data so I don't lose my VoIP extensions and settings in the case of Docker stopping. I've tried many guides, ones here on stack overflow, and on the Docker manpages, but I just can't quite get it to work.
Can anyone help me with what commands I need to run in order to attach a volume to the FreePBX image I linked above?
You can do this by running a container with the -v option and mapping to a host directory - you just need to know where the container's storing the data.
Looking at the Dockerfile for that image, I'm assuming the data you're interested in is stored in MySQL. In the MySQL config, the data directory the container uses is /var/lib/mysql.
So you can start your container like this, mapping the MySql data directory to /docker/pbx-data on your host:
> docker run -d -t -v /docker/pbx-data:/var/lib/mysql jmar71n/freepbx
20b45b8fb2eec63db3f4dcab05f89624ef7cb1ff067cae258e0f8a910762fb1a
Use docker inspect to confirm that the mount is mapped as expected:
> docker inspect --format '{{json .Mounts}}' 20b
[{"Source":"/docker/pbx-data",
"Destination":"/var/lib/mysql",
"Mode":"","RW":true,"Propagation":"rprivate"}]
When the container runs it bootstraps the database, so on the host you'll be able to see the contents of the MySql data directory the container is using:
> ls -l /docker/pbx-data
total 28684
-rw-r----- 1 103 root 2062 Sep 21 09:30 20b45b8fb2ee.err
-rw-rw---- 1 103 messagebus 18874368 Sep 21 09:30 ibdata1
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile0
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile1
drwx------ 2 103 root 4096 Sep 21 09:30 mysql
drwx------ 2 103 messagebus 4096 Sep 21 09:30 performance_schema
If you kill the container and run another one with the same volume mapping, it will have all the data files from the previous container, and your app state should be preserved.
I'm not familiar with FreePBX, but if there is state being stored in other directories, you can find the locations in config and map them to the host in the same way, with multiple -v options.
Hi Elton Stoneman and user3608260!
Yes, you are assuming correctly: that data (records, users, configs, etc.) is saved in MySQL.
But in Asterisk, all configuration is saved in '.conf' files and similar.
In this case, the files user3608260 is looking for are stored in '/etc/asterisk/*'.
Your answer is perfect with one more mapping: -v /local_to_save:/etc/asterisk
The final docker command:
docker run -d -t -v /docker/pbx-data:/var/lib/mysql -v /docker/pbx-asterisk:/etc/asterisk jmar71n/freepbx
[Assuming /docker/pbx-asterisk is a host directory.]