Docker-compose recreating containers, lost data

In my attempt to extract some logs from a container, I edited my docker-compose.yml and added an extra mount pointing to those logs.
After running docker-compose up and recreating the respective container, I found out that all of the log files were gone, as the container was completely replaced (something which is quite obvious to me now).
Is there a way to recover the old container?
Also: the Docker volumes live under /var/lib/docker/volumes/, but where do the root file systems of containers live?
Here is a snippet of the docker-compose.yml:
version: '3.3'
services:
  some_app:
    image: some_image:latest
    restart: always
    volumes:
      - some_image_logs:/var/log
volumes:
  some_image_logs: {}
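As for where container root filesystems live: with the default overlay2 storage driver, image layers and each container's writable layer sit under /var/lib/docker/overlay2/. A minimal sketch for locating them for a specific container (the container name some_app_1 is only an assumption about what Compose generated):

# Path of the container's merged root filesystem (only mounted while the container runs)
docker inspect --format '{{ .GraphDriver.Data.MergedDir }}' some_app_1

# Path of the container's own writable layer (its changes on top of the image)
docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' some_app_1

Note that when docker-compose replaces a container, the old container's writable layer is removed along with it, which is why files written only inside the container cannot be found afterwards.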

Related

docker-compose create a directory instead of mounting the file

I have a very simple docker-compose.yml file where I use nginx and mount a file as a volume.
But every time I run the application, it creates a directory named .htpasswd instead of mounting the .htpasswd file I have locally.
This is the docker-compose.yml.
version: '3'
services:
  reverse:
    container_name: reverse
    hostname: reverse
    restart: unless-stopped
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/.htpasswd:/etc/nginx/conf.d/.htpasswd
Can someone help me fix this?
By default, if a bind mount points to a non-existent path, Docker will create a folder there. The solution in your case is to create the file on the host before running docker-compose.
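A minimal sketch of that fix (the user name someuser is just a placeholder; htpasswd comes from the apache2-utils package, or a plain touch works if an empty file is enough for now):

# Create the file on the host first, so Docker bind-mounts a file rather than creating a directory
mkdir -p ./nginx
htpasswd -c ./nginx/.htpasswd someuser
docker-compose up -d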
How are you running Docker? Here's an answer for one setup...
For Mac with Minikube/Hyperkit docker and Docker Compose
Since I'm not using Docker Desktop any longer, I've experienced numerous issues similar to the "docker in docker (dind)" paradigm with minikube...
The fix was to mount the host path into minikube and use absolute paths in the compose file; the easiest way was to mount the exact home path:
minikube mount $HOME:/Users/<you>
This command keeps running in the foreground, so leave it open.
docker-compose.yaml
volumes:
  - /Users/<you>/path/to/file.yaml:/somedir/file.yaml

How can I preserve volumes in docker?

I'm using docker-compose down, and my question is: how can I keep my volumes when I execute this command?
By default, docker-compose down should not remove any volume unless you add the --volumes (or -v) flag (see the docs). However, you can declare volumes as external, which will always prevent them from being deleted:
volumes:
  myapp:
    external: true
You can find this example in the official Docker volumes documentation.
The docker-compose down command stops containers and removes containers, networks, volumes, and images created by up.
By default, the only things removed are:
Containers for services defined in the Compose file
Networks defined in the networks section of the Compose file
The default network, if one is used
Networks and volumes defined as external are never removed.
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
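A minimal sketch of the full flow with an external volume (the volume name myapp matches the snippet above):

# Create the volume once, outside of Compose
docker volume create myapp

# Start and later tear down the stack; the external volume survives
docker-compose up -d
docker-compose down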

Command substitution in docker-compose.yml when scaling a service with multiple instances

The docker-compose.yml file I am using is the following:
services:
  web:
    image: nginx:stable-alpine
    ports:
      - "8080-8130:80"
    volumes:
      - ${TEMPDIR}:/run
I launch 20 containers with the following command:
TEMPDIR=`mktemp -d` docker-compose up -d --scale web=20
All of the containers launch OK, but they all have the same temp volume mounted at /run, whereas I need each running container to have a unique temp volume. I understand that the problem is that the above YAML file is doing variable substitution, whereas what I need is command substitution, but I could not find any way of doing that in the official reference, especially when launching multiple instances of the same Docker image. Is there some way to fix this problem?
Maybe what you are looking for is tmpfs.
Mount a temporary file system inside the container. Can be a single value or a list.
You can use it simply like this:
services:
  web:
    image: nginx:stable-alpine
    ports:
      - "8080-8130:80"
    tmpfs: /run
Here are the references:
tmpfs on Docker
tmpfs on Docker compose
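Because a tmpfs mount is private to each container, every one of the 20 scaled instances gets its own empty /run, which is what the question needs. If you also want to cap its size, the long mount syntax supports that; a sketch, assuming compose file format 3.6 or later (the size value is in bytes):

services:
  web:
    image: nginx:stable-alpine
    ports:
      - "8080-8130:80"
    volumes:
      - type: tmpfs
        target: /run
        tmpfs:
          size: 67108864   # 64 MiB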

Docker docker-compose volumes delete sibling folders

This is what my docker-compose file looks like:
version: '3.3'
services:
  portal:
    ports:
      - '8080:8080'
      - '8000:8000'
    environment:
      - 'revcycle.portal.logger.root=C:/tomcat/logs/'
    volumes:
      - /src/main/webapp/sampleFiles:/usr/local/tomcat/webapps/portal/sampleFiles:rw
    container_name: portal
    image: 'portal:latest'
docker-compose up creates the container successfully, but when I check the content of the Tomcat webapp, all the other sibling folders of sampleFiles are deleted.
Am I missing something with the volumes configuration?
The same happens when I use the IntelliJ IDEA Docker plugin's bind mounts in the run configuration.
It should be like this:
volumes:
  - /src/main/webapp/sampleFiles:/usr/local/tomcat/webapps/portal/sampleFiles
As far as I know, rw is for cases when you use volume drivers and similar features...
Also make sure that /src/main/webapp/sampleFiles is the host folder that contains what you need inside the Docker container, because it will essentially be mapped into the container and will replace the target folder.
This way the siblings of /usr/local/tomcat/webapps/portal/sampleFiles should stay intact. If not, try starting without the volumes part and verify that you can see the siblings.
Don't forget to run docker-compose down and docker-compose up -d whenever you change anything in the docker-compose.yaml file.
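A quick way to check both sides of the bind mount (a sketch; the paths and the service name portal come straight from the snippets above):

# What does the host folder actually contain?
ls /src/main/webapp/sampleFiles

# What ended up inside the running container?
docker-compose exec portal ls /usr/local/tomcat/webapps/portal/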

How fast do the files from a docker image get copied to a named volume after container initialization

I have a stack of containers that are sharing a named volume. The image that contains the files is built to contain code (multiple libraries, thousands of classes).
The issue I am facing is that when I deploy the stack to a docker swarm mode cluster, the containers initialize before the files are fully copied to the volume.
Is there a way to tell that the volume is ready and all files mounted have been copied? I would have assumed that the containers would only get created after the volume is ready, but this does not seem to be the case.
I have an install command that runs in one of the containers sharing that named volume and this fails because the files are not there yet.
version: '3.3'
services:
  php:
    image: code
    volumes:
      - namedvolume:/var/www/html
  web:
    image: nginx
    volumes:
      - namedvolume:/var/www/html
  install:
    image: code
    volumes:
      - namedvolume:/var/www/html
    command: "/bin/bash -c \"somecommand\""
volumes:
  namedvolume:
Or is there something I am doing wrong?
Thanks
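One common workaround, only as a sketch: have the install service poll for a file that the copy is expected to produce before running its command. The marker path /var/www/html/.copy-done is purely hypothetical; this is a heuristic and assumes the code image actually ships such a marker file:

services:
  install:
    image: code
    volumes:
      - namedvolume:/var/www/html
    # Wait until the hypothetical marker file appears in the shared volume, then run the install step
    command: /bin/bash -c "while [ ! -f /var/www/html/.copy-done ]; do sleep 1; done; somecommand"

Alternatively, the install command itself can retry until the files it needs are present.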
