Is there a way to override the host's folder with the container's folder using volumes in Docker?

I'm fairly new to using Docker and Docker Compose (and I'm using Docker Compose for this particular problem). Here is what I know so far about the problem I am facing: when a volume is mounted and there are contents in both the host folder and the container's folder, the files inside the container's folder are hidden and the host's files are made available to the container instead.
I want to use it the other way round: I would like to make the container's files (those copied into the image in the Dockerfile) available in the host folder.
Is there a way to do that?
Here are a bunch of screenshots of my Dockerfile and Docker Compose to show my setup.
Dockerfile Screenshot
DockerCompose Screenshot
Thanks in advance! :)

I've come across the same thing many times and the way I go about it is as follows.
As the host volume will always take priority over the container filesystem, you have to copy the files out of the container to the host first, then volume-mount them back in. This way you get what was there originally, plus whatever the container might change in the future.
The following is all pseudo code, but should hopefully illustrate the concept:
First run the main container:
docker run --rm -d --name my-container registry/image-name
Then copy the files you want from it to the local filesystem:
docker cp my-container:/files/i/want ./files
Then stop the original container:
docker stop my-container
Then mount them back into the container on the next run (docker run needs an absolute host path, hence $(pwd)):
docker run --rm -d --name my-container -v $(pwd)/files:/files/i/want registry/image-name
Obviously you've mentioned Compose there also, so just reflect the volume mapping in the Compose format; the copy steps, however, will still need to be done via standard docker commands, in line with the above (a Compose sketch follows below).
Note: I wrote the above commands blind, but will check them over at lunch and correct any typos - the concept, though, is correct.
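For reference, a minimal docker-compose.yml sketch of the same mapping (service and image names are carried over from the pseudo commands above):

services:
  my-container:
    image: registry/image-name
    volumes:
      # ./files was populated by docker cp above, so this shadows the
      # container path with its original content
      - ./files:/files/i/want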

Related

How to work with the files from a docker container

I need to work with all the files from a docker container; my approach is to copy all of them from the container to my host.
I'm using the following docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container folders and files in my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the local files and folders. But I would like to know if there is a better approach - maybe accessing the container files directly from Java, instead of creating a local copy of them on my host.
Any ideas?
You can build a small server app inside your docker container which serves the information you need on an exposed port. That's how I would have done it.
Maybe I don't understand the question, but you can mount a volume when you run the container, rather than creating it and copying files out:
docker run -v /host/path:/container/path your_container
Any code in the container (e.g. Java) that modifies files at /container/path will be reflected on the host and won't need to be copied back in or out. Similarly, any modifications on the host filesystem will be seen in the container.
I don't think I can implement an API in the docker container
Yes you can. You bind a TCP port using the -p flag.
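For instance, a minimal sketch (the image name and port here are placeholders, not from the question):

# publish container port 8080 on host port 8080
docker run -d -p 8080:8080 --name my_api your_image_with_server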

Docker volume bind empty volume or convert files to folders

I'm running a container with the docker daemon's socket mounted so it can run a sibling container; in that container I try to run another container and mount a volume to access some data. However, in the sibling container, the volume is either empty or the file is converted to a folder...
Running the first container:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it example /bin/bash
root@3aa35965846a:/home/node/example# ls some_volume/
test.txt
root@3aa35965846a:/home/node/example# cat some_volume/test.txt
hello
# Running the second container
root@3aa35965846a:/home/node/example# docker run -v /home/node/example/some_volume/:/some_volume/ -it node:10 /bin/bash
root@6a84739fbb92:/# ls /some_volume/
* test.txt
root@6a84739fbb92:/# cat /some_volume/test.txt/
cat: /some_volume/test.txt/: Is a directory
The first time I run the second container the volume is empty; if I try to mount a file directly, it is converted to a folder; and after that, if I try to mount the folder as in the example above, it contains only the file I tried to mount earlier, which is now a folder.
How is this possible? If I try to mount a volume outside the first container I don't have any problem, so how can I fix this?
The first path in the docker run -v option is always on the host system. For example, if you
docker run -v /etc:/x busybox cat /x/shadow
it will dump out the host's encrypted password file, regardless of whether you ran this command directly from the host or from a container.
There isn't a way to share an arbitrary directory from one container to another. If the launching container knows something about its own directory structure (in particular that some directory was mounted from a specific host path or named volume) then it can replicate that to the other container, but that's not a generic answer. The other behaviors you're seeing are just a consequence of those directories not existing on the host system.
In general I would advise not using Docker for short-lived processes that principally interact with the outside world through the filesystem. Take whatever program you'd run in the other container, install it in your image's Dockerfile, and run it directly without going through Docker.
If you really can't avoid this workflow, the only thing I've found to work reliably is to docker create the container, docker cp files in, docker start it, and docker wait for it to finish. When it's done, docker cp the result out before docker rm it. That's a kind of painstaking workflow but it gets around the problem of the two containers not sharing any filesystem space.
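As a rough sketch of that workflow (the image name and paths are placeholders):

# create the container without starting it
docker create --name worker your-image
# copy the input files in
docker cp ./input/. worker:/data/in
# start it and block until it exits
docker start worker
docker wait worker
# copy the results out, then clean up
docker cp worker:/data/out ./results
docker rm worker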

Can I mount a Docker image as a volume in Docker?

I would like to distribute some larger static files/assets as a Docker image, so that it is easy for users to pull those optional files down the same way they would pull the app itself. But I cannot really find a good way to expose files from one Docker image to another. Is there a way to mount a Docker image itself (or a directory in it) as a volume in another Docker container?
I know that there are volume plugins I could use, but I could not find any with which I could do this or something similar.
It is possible to expose any directory of an image as a docker volume, but not the full image - at least not in a pretty or simple way.
If you want to expose a directory from your image as a docker volume, you can create a named volume:
docker volume create your_volume
docker run -d \
-it \
--name=yourcontainer \
-v your_volume:/dir_with_data_you_need \
your_docker_image
From this point, you'll have your_volume accessible, containing the data from the image your_docker_image.
The reason you cannot mount the whole image as a volume is that docker doesn't let you specify / as the source of a named volume. You'll get Cannot create container for service your-srv: invalid volume spec "/": invalid volume specification: '/' even if you try with docker-compose.
I don't know of any direct way.
You can use a folder on your host as a bridge to share things; this is an indirect way to achieve it.
docker run -d -v /any_of_your_host_folder:/your_assets_folder_in_your_image_container your_image
docker run -d -v /any_of_your_host_folder:/your_folder_of_your_new_container your_container_want_to_use_assets
For your_image, you need to add a CMD in the Dockerfile to copy the assets to your_assets_folder_in_your_image_container (the one you use as the volume), as CMD executes after the volume is mounted - see the sketch below.
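A hypothetical Dockerfile sketch for the assets image (paths are placeholders): COPY bakes the files into the image somewhere outside the volume path, and CMD copies them into the volume at runtime, after the mount has happened.

FROM busybox
# bake the assets into the image, outside the volume mount point
COPY assets/ /opt/assets/
# at container start, copy them into the mounted volume folder
CMD ["sh", "-c", "cp -r /opt/assets/. /your_assets_folder_in_your_image_container/"]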
This wastes a little time, but only the first time the assets container starts. Once it has run, the files from the assets container have in fact been copied to the host folder and no longer have anything to do with the assets image, so you can just delete the assets image and container - then no space is wasted.
Your aim is just to make the assets easy for other people to use, so why not give them a script that automatically fetches the image -> starts the container (whose CMD copies the files to the volume) -> deletes the image/container? The assets end up on the host, and people can then use that host folder as a volume for whatever comes next.
Of course, if a container could directly use another image's resources, that would be better than this solution. Anyway, this can be a solution, although not a perfect one.
You can add the docker sock as a volume, which will allow you to start one of your docker images from within your docker container.
To do this, add the two following volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/usr/bin/docker:/usr/bin/docker"
If you need to share files between the containers, map the volume /tmp:/tmp when starting both containers.
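In docker-compose form, that might look roughly like this (service and image names are placeholders):

services:
  launcher:
    image: your_image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
      # shared scratch space with the containers it starts
      - /tmp:/tmp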

Equivalent of -v in the Dockerfile?

So I want to mount my Docker container on my Windows PC using a Dockerfile. So far I have been able to do this using the following command:
docker run -v %userprofile%\mounted-docker\:/tmp/ container-name
This would mount /tmp/ from my Docker container into my C:\Users\USERNAME\mounted-docker\ folder. However, I can't seem to find the equivalent instruction in the Dockerfile documentation.
The only documentation is probably VOLUME in the Dockerfile documentation, which specifies:
Volumes on Windows-based containers: When using Windows-based containers, the destination of a volume inside the container must be one of:
a non-existing or empty directory
a drive other than C:
That's fine and all... but how exactly do I specify that? Let's say I want to mount either / or /tmp/ in a specified folder or drive, how do I do that?
The Dockerfile is used to build the image. To define how you'd like to run that image, you'll want to use a docker-compose.yml file.
In a Dockerfile, you cannot specify where a volume will be mounted from on the host. Doing so would open Docker up to malicious image exploits, where images from Docker Hub could mount the root filesystem and send private content to remote locations, or even perform a ransomware exploit. Specifying what elevated access a container can have is left up to the user running the image, via docker run or the docker-compose.yml file.
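As a sketch, the docker run command from the question translates to roughly this docker-compose.yml (the service name is a placeholder, and this assumes Compose picks up the USERPROFILE environment variable on Windows):

services:
  app:
    image: container-name
    volumes:
      # same mapping as -v %userprofile%\mounted-docker\:/tmp/
      - ${USERPROFILE}/mounted-docker:/tmp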

Change volume configuration in docker-compose without losing the data

My docker-compose has a data container which isn't mapped to a local directory on the host machine, and I want to change it from:
volumes:
- /var/www/html
to
volumes:
- /html:/var/www/html
But when I restart the container, it will remove the current data container and replace it with a new one.
I know that the old container is actually still there, but is there an easy way to do this without the creation of a new data container?
My docker-compose version is 1.7.1 (under boot2docker).
Thanks.
Try at your own risk:
1. Create your host directory /html as you wish.
2. Run docker inspect {container_name} | grep Source and grab your volume path on the host system. It'll be something like /var/lib/docker/volumes/abdb15a2eff[...]/_data.
3. Copy the content of that directory to your host directory.
4. Recreate the container as you wish.
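In command form, those steps might look something like this (the container name and volume hash are illustrative; note that under boot2docker the volume path lives inside the boot2docker VM, so run the copy there, e.g. via boot2docker ssh):

# 1. create the host directory
mkdir -p /html
# 2. find the volume's backing directory on the docker host
docker inspect my_data_container | grep Source
# 3. copy its contents into the new host directory
cp -a /var/lib/docker/volumes/<volume_hash>/_data/. /html/
# 4. recreate the container with the new volume mapping
docker-compose up -d --force-recreate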
One safe way to do this is to create a backup of the data from inside the Docker image. Then restore that backup to the directory on your host machine. The Docker Volumes Tutorial mentions a process like this near the bottom.
Here's how you'd do it:
First, mount a directory from your host machine into the container if you don't already have one mounted in. Maybe a volume like ./:/backup. Next, run a backup command like this:
docker-compose run service-name tar czvf /backup/html_data.tar.gz /var/www/html
Now you have html_data.tar.gz in your current directory. Extract it wherever you want and be on your way!
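To restore it into a host folder for the new mapping, something like this should work (assuming GNU tar; the archive stores paths relative to /, so the three leading components var/www/html are stripped):

# extract the backup's contents into ./html
mkdir -p html
tar xzvf html_data.tar.gz -C html --strip-components=3
# then start the service with the new /html:/var/www/html style mapping
docker-compose up -d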
(I'm assuming, based on the way you indicated your volumes, that you're using docker-compose. The process is similar for vanilla Docker.)
Alternate approach, with --volumes-from
Get the name (or hash) of the container with the data you want to copy. You can do this with docker ps. For this example, let's call it container1. Now run this command to back up its data:
docker run --rm --volumes-from container1 -v $(pwd):/backup ubuntu:latest tar czvf /backup/html_data.tar.gz /var/www/html
Note that the image you use (ubuntu:latest) is not important as long as it can tar things.
