docker "--volumes-from" files not updating to target containers - docker

There's a lot of discussion about Docker volumes, but the following doesn't seem to be addressed.
I have a simple container arrangement. I have one volume container that has configs. I have a second container that pulls those configs in using --volumes-from the first container. This all works fine, and I can launch multiple instances of the second container, and all of the files are mounted as expected.
docker run --name configs -d config_image
docker run -it --volumes-from configs --name app1 app_image
docker run -it --volumes-from configs --name app2 app_image
However, if the config files change in the configs volume (created by the Dockerfile), the "app" containers that have mounted this volume do not see the changes. I have to restart the app container to see the files. I have also tested writing to the volume, and this works, but changes by one container are not seen by the other (until it is restarted).
This seems to be working as advertised, just without real-time updates. I read something about inode mounting, and that this may be a problem with mount in general.
Would anyone know if this is possible or can shed some light on what may be happening?
Thanks.

Related

Is there a way to override the host's folder with the container's folder using volumes in Docker?

I'm fairly new to using Docker and Docker Compose (using Docker Compose for this particular problem). Here is what I know so far about the problem I am facing: when a volume is mounted and there are contents in both the host folder and the container's folder, the files inside the container's folder are hidden and the host's files are made available to the container instead.
I want to use it the other way round. I would like to make available the container's files (that were copied into the image in the Dockerfile) to the host folder.
Is there a way to do that?
(Screenshots of the Dockerfile and docker-compose.yml were attached to the original question.)
Thanks in advance! :)
I've come across the same thing many times and the way I go about it is as follows.
As the host volume will always take priority over the container filesystem, you have to copy the files out of the container to the host first, then volume-mount them back; this way you get what was there originally, plus any changes the container makes later.
The following is all pseudo code, but should hopefully illustrate the concept:
First run the main container:
docker run --rm -d --name my-container registry/image-name
Then copy the files you want from it to the local filesystem
docker cp my-container:/files/i/want ./files
Then stop the original container
docker stop my-container
Then mount them back into the container on the next run (note that docker run requires an absolute host path, hence the $(pwd)):
docker run --rm -d --name my-container -v "$(pwd)/files:/files/i/want" registry/image-name
Obviously you've mentioned compose there also, so just reflect the volume mapping into the compose file format; the copy steps will still need to be done with plain docker commands, in line with the above.
Note: I wrote the above commands from memory, but the concept is correct.
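If it helps, here is a minimal sketch of that mapping in compose format (the service name is my placeholder; the image and paths come from the commands above). Compose resolves ./files relative to the compose file, so a relative path is fine here:
services:
  my-container:
    image: registry/image-name
    volumes:
      - ./files:/files/i/want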

Making Docker "undeletable" volume

I have a docker named volume for database data. Now the thing is that when the database container is down and I (or anyone) run docker system prune it deletes all the unused containers, images and volumes including the one with database data. Is there a way to make the volume undeletable unless it is explicitly told to?
I suppose I can just mount a host directory to the container without making it a docker volume (and therefore without the risk of deleting it), but using docker volume seems like a cleaner way to do it.
When you run docker system prune it is going to wipe out everything that is unused. But if you bind mount a host directory instead, for example:
docker run -d -p 8080:8080 -p 1521:1521 -v /Users/noname_dev/programming/oracle-database:/u01/app/oracle -e DBCA_TOTAL_MEMORY=1024 oracle-database
then /Users/noname_dev/programming/oracle-database will still be there on your local machine, but the container will naturally be gone until you create it again.
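As an aside that goes beyond the original answer (and behavior varies by Docker version): recent Docker only prunes volumes when you pass --volumes explicitly, and you can label the volumes you care about and exclude them with a prune filter. A hedged sketch, where the volume name dbdata and the label keep=true are placeholders:
docker volume create --label keep=true dbdata
# prune everything unused except volumes labeled keep=true
docker system prune --volumes --filter "label!=keep=true"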

Is it possible to change the read-only/read-write status of a docker mount at runtime?

I have a dockerized application that uses the filesystem to store lots of state. The application code is contained in the docker image.
I am considering an update strategy which involves sharing the volume between two containers, while making sure that at most one container at a time can write to that filesystem.
The workflow would be:
start container A with /data mounted rw
start container B with /data mounted ro, and a newer version of the application
stop serving requests to container A
for container A, make the /data mount read-only
for container B, make the /data mount read-write
start serving requests to container B
You can re-mount your volume from inside the container in rw mode, like this:
mount -o remount,rw /mnt/data
The catch is that the mount syscall is not allowed inside Docker containers by default, so you would have to run the container in privileged mode:
docker run --privileged ...
or enable the SYS_ADMIN capability:
SYS_ADMIN: Perform a range of system administration operations.
docker run --cap-add=SYS_ADMIN --security-opt apparmor:unconfined
(note that I also had to add --security-opt apparmor:unconfined to make this work on Ubuntu).
Also, remounting the rw volume back to ro might be tricky, as some process(es) might have already opened files inside it for writing, in which case the remount will fail with an "is busy" error message.
But my guess is that you can just restart the container instead (as it would be the one running an old version of the app).
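Putting the pieces together, a rough sketch of the workflow from the question (the volume, container, and image names here are placeholders of mine):
docker volume create data-vol
# container A starts with /data read-write
docker run -d --name app-a --cap-add=SYS_ADMIN --security-opt apparmor:unconfined -v data-vol:/data app:v1
# container B starts with /data read-only and the newer app version
docker run -d --name app-b --cap-add=SYS_ADMIN --security-opt apparmor:unconfined -v data-vol:/data:ro app:v2
# after draining requests from A, flip the mounts
docker exec app-a mount -o remount,ro /data
docker exec app-b mount -o remount,rw /data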
Not exactly what the OP requested, but I've had a similar situation where I needed to get data OUT of a running container whose mount was read-only.
Other ways to extract the data would have taken too long.
My approach? Stash the container as an image and start a new container from that image with the mount as RW :D
Initial container start:
docker run -p 80:8080 --mount type=bind,source="C:\data-folder-local\",target=/data-folder-container-ro,readonly -d imageName:imageTag
Making an image from the container. You can stop this container before/after if you want.
docker commit -a "mud" -m "Damn, mount should be rw, stashing a snapshot to reuse." CONTAINER_ID_HERE snapshotImageName:snapshotImageTag
where CONTAINER_ID_HERE comes from the output of docker ps (https://docs.docker.com/engine/reference/commandline/ps/)
Start a new container from the image made, but this time mount with write rights!
docker run -p 80:8080 --mount type=bind,source="C:\data-folder-local\",target=/data-folder-container-rw -d snapshotImageName:snapshotImageTag
Write out files to the mount folder (on the local system) from within your container :D
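For example, to push files from somewhere inside the container into the writable mount (the source path /app/state here is a made-up placeholder):
docker exec NEW_CONTAINER_ID cp -r /app/state /data-folder-container-rw/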
Hope that helps somebody.

Howto run a Prestashop docker container with persistent data?

There is something I'm missing in many docker examples, and that is persistent data. Am I right to conclude that every container that is stopped will lose its data?
I got this Prestashop image running with it's internal database:
https://hub.docker.com/r/prestashop/prestashop/
You just run docker run -ti --name some-prestashop -p 8080:80 -d prestashop/prestashop
Well, then you've got your demo, but it's not very practical.
First of all I need to hook up an external MySQL container, but that one will also lose all its data if, for example, my server reboots.
And what about all the modules and themes that are going to be added to the prestashop container?
It has to do with volumes, but it is not clear to me how the host volumes need to be mapped correctly and what path on the host is normally chosen. /opt/prestashop or something?
First of all, I don't have any experience with PrestaShop; this is an example you can use for any docker container whose data you want to persist.
With the new version of docker (1.11) it's pretty easy to 'persist' your data.
First create your named volume:
docker volume create --name prestashop-volume
You will see this volume in /var/lib/docker/volumes:
prestashop-volume
After you've created your named volume, you can connect your container to it:
docker run -ti --name some-prestashop -p 8080:80 -d -v prestashop-volume:/path/to/what/you/want/to/persist prestashop/prestashop
(note that if you really want to persist everything, Docker will not let you mount a volume over / itself; mount the specific directories you need instead)
Now you can do what you want on your database.
When your container goes down or you delete your container, the named volume will still be there and you're able to reconnect your container with the named-volume.
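Reconnecting is just re-running the container with the same -v flag; the named volume and its data get picked up again:
docker rm -f some-prestashop
docker run -ti --name some-prestashop -p 8080:80 -d -v prestashop-volume:/path/to/what/you/want/to/persist prestashop/prestashop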
To make it even easier, you can create a cron job which creates a .tar of the content of /var/lib/docker/volumes/prestashop-volume/
When really everything is gone, you can restore your volume by recreating the named volume and untarring your .tar file into it.
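A sketch of that backup-and-restore cycle (note: with the default local driver the files actually sit in the _data subdirectory of the volume):
tar -czf prestashop-volume.tar.gz -C /var/lib/docker/volumes/prestashop-volume/_data .
# later, after everything is gone:
docker volume create --name prestashop-volume
tar -xzf prestashop-volume.tar.gz -C /var/lib/docker/volumes/prestashop-volume/_data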

How to mount a directory in a Docker container to the host?

Assume that I have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp inside the docker container.
My question
I need these log files to be automatically saved on the host too, so how can I mount /var/lib/myapp from the container to /var/lib/myapp on the host server (without removing the current container)?
Edit
I also see Docker - Mount Directory From Container to Host, but it doesn't solve my problem; I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time; you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: from the host to the container, not vice-versa. When you specify a host directory mounted as a volume in your container (for example: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a "regular ole" linux mount --bind, which means the host directory temporarily "overrides" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, it is effectively hidden for the lifetime of the container.
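You can see that override for yourself with a quick experiment (paths here are arbitrary); an empty host directory hides whatever the image had at the target path:
mkdir -p /tmp/empty
docker run --rm ubuntu ls /etc | head -3          # normally populated
docker run --rm -v /tmp/empty:/etc ubuntu ls /etc # empty: the bind mount hides the image's files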
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or, if you bring cron into your container, use a cron job to periodically copy those files to that host-directory volume mount).
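If you go the cron route, the copy job inside the container might look like this (assuming, hypothetically, that the host directory is mounted at /backup and cron is installed in the image):
# copy the app state to the host-mounted directory every 5 minutes
echo '*/5 * * * * cp -a /var/lib/myapp/. /backup/' | crontab -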
You could also use docker cp to move files from your container to your host. That is kinda hacky and definitely not something you should use in your infrastructure automation, but it works very well for one-off copies and debugging.
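For the container in the question, that one-off copy would be:
docker cp myapp:/var/lib/myapp ./myapp-logs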
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files, you could look into something like rsyslog to ship those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite direction: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see if your log shows up under /var/lib/docker/volumes/... for the volume associated with your container.
Or you can redirect the output of docker logs <containername> to a host file.
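For example, with the container name from the question:
docker logs myapp > myapp.log          # one-shot dump of the container's stdout/stderr
docker logs -f myapp >> myapp.log &    # or keep following the log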
For more examples, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)
