I know this isn't common practice at all.
I'm trying to share a directory from a Docker image with a host directory.
Usually you do it the other way around, host directory -> container directory, with
docker run -v /hostdirectory/:/dockerimagedirectory/
Is there any way to get the inverted result, i.e. image directory -> host directory?
You can copy the directory out of the container first, then mount it back so it stays in sync with container changes:
docker cp <containerId>:/file/path/within/container /host/path/target
docker run -v /host/path/target:/file/path/within/container <image>
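If you want the host directory to start out with the image's contents, one approach is to seed it from a created (but never started) container first. A minimal sketch, using nginx's config directory as a stand-in example:
cid=$(docker create nginx)                          # create a container without starting it
docker cp "$cid":/etc/nginx /tmp/nginx-conf         # seed the host directory from the image
docker rm "$cid"
docker run -d -v /tmp/nginx-conf:/etc/nginx nginx   # mount it back; changes now sync both ways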
I am new to Docker volumes, and my use case is the following:
I have two different containers running on the same host, and both need to read/write files from it. It is my understanding that I should use Docker volumes, but before I try that, I want to make sure that I can delete files on the host filesystem from inside the containers (e.g. using a golang app).
You should use Docker volumes; they share a directory between the host and containers. For example, if you want to read/write files under /mnt, you can mount /mnt into the container:
docker run -it -v /mnt:/mnt ubuntu:latest touch /mnt/hello.log
Now /mnt/hello.log has been created, and you can edit the file from your host filesystem.
Then,
docker run -it -v /mnt:/mnt ubuntu:latest rm /mnt/hello.log
After the command above, /mnt/hello.log is deleted from the host filesystem as well, even though the rm ran inside the container.
You can do the same deletion from your golang app, like this:
os.Remove("/mnt/hello.log")
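To cover the two-container part of the question: both containers can bind-mount the same host directory, and either one can create or delete files in it. A minimal sketch:
# container A writes a file into the shared host directory
docker run --rm -v /mnt:/mnt ubuntu:latest sh -c 'echo data > /mnt/shared.log'
# container B reads it, then deletes it from the host
docker run --rm -v /mnt:/mnt ubuntu:latest sh -c 'cat /mnt/shared.log && rm /mnt/shared.log'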
I'm running a container with the host's Docker socket mounted so it can launch a sibling container. In that container I try to run another container and mount a volume to access some data; however, in the sibling container the volume is either empty or the file is converted to a folder...
Running the first container:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it example /bin/bash
root@3aa35965846a:/home/node/example# ls some_volume/
test.txt
root@3aa35965846a:/home/node/example# cat some_volume/test.txt
hello
Running the second container:
root@3aa35965846a:/home/node/example# docker run -v /home/node/example/some_volume/:/some_volume/ -it node:10 /bin/bash
root@6a84739fbb92:/# ls /some_volume/
test.txt
root@6a84739fbb92:/# cat /some_volume/test.txt/
cat: /some_volume/test.txt/: Is a directory
The first time I run the second container, the volume is empty. If I try to mount a file directly, it is converted to a folder, and after that, if I mount the folder as in the example above, it contains only the file I tried to mount earlier, now as a folder.
How is this possible? If I mount a volume outside the first container I don't have any problem, so how can I fix this?
The first path in the docker run -v option is always on the host system. For example, if you
docker run -v /etc:/x busybox cat /x/shadow
it will dump out the host's encrypted password file, regardless of whether you ran this command directly from the host or from a container.
There isn't a way to share an arbitrary directory from one container to another. If the launching container knows something about its own directory structure (in particular that some directory was mounted from a specific host path or named volume) then it can replicate that to the other container, but that's not a generic answer. The other behaviors you're seeing are just a consequence of those directories not existing on the host system.
In general I would advise not using Docker for short-lived processes that principally interact with the outside world through the filesystem. Take whatever program you'd run in the other container, install it in your image's Dockerfile, and run it directly without going through Docker.
If you really can't avoid this workflow, the only thing I've found to work reliably is to docker create the container, docker cp files in, docker start it, and docker wait for it to finish. When it's done, docker cp the result out before docker rm it. That's a kind of painstaking workflow but it gets around the problem of the two containers not sharing any filesystem space.
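A sketch of that sequence, with placeholder image and file names (the /work directory is assumed to exist in the image):
cid=$(docker create some-image)                  # create the container, but don't start it yet
docker cp ./input.txt "$cid":/work/input.txt     # copy the input files in
docker start "$cid"
docker wait "$cid"                               # block until the container exits
docker cp "$cid":/work/output.txt ./output.txt   # copy the result out
docker rm "$cid"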
I have mounted my USB devices to a docker container using docker run --privileged -v /dev/bus/usb:/dev/bus/usb -d ubuntu
Within the container, I would like to delete a few files from /dev/bus/usb/.
This results in the deletion of the files from the host as well, which is not what I want.
I would like to delete the files from the container, but keep them on the host.
Is there any way I can achieve this?
This is because you are using a shared volume, so when you delete files the deletion takes effect both inside your container and on the host.
Instead, you can write a little Dockerfile to build an image that contains a copy of your USB files, and not share the volume with the container. Note that COPY can only read files inside the build context, so stage them there first (e.g. cp -r /dev/bus/usb ./usb):
FROM ubuntu
# COPY cannot read absolute host paths such as /dev/bus/usb;
# it only sees files placed inside the build context.
COPY usb /path/for/your/copy
After that you can build your image:
docker build -t imagename .
And finally launch it:
docker run -d imagename
I have a docker container which has some data in, let's say, /opt/files: files A and B. How can I start that container and access these files on my host machine?
I'm using Docker for Windows (Hyper-V). When I start the container with:
docker run -it -v C:/tmp:/opt/files myImage
I see an empty folder on my Windows machine and inside the container. Any new files I create there are of course reflected on both sides, but how can I access files that are already in the container (e.g. because they were added in the Dockerfile)?
You can't share from inside the container to the host; the mounted host directory always hides whatever the image had at that path. There are two ways to do it:
Copy the files from the container:
docker cp <containerid>:<file_path_inside_container> localpath
Share a folder other than the one that already contains the files:
docker run -it -v C:/tmp:/opt/files_temp myImage
Then, from inside the container, copy the files from /opt/files to /opt/files_temp, as in the sketch below.
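A sketch of that second approach, assuming myImage has a shell (the image name and paths come from the question):
docker run -it -v C:/tmp:/opt/files_temp myImage /bin/bash
# then, inside the container:
cp -r /opt/files/. /opt/files_temp/   # the copies now appear in C:/tmp on the host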
Once your container is started, you can copy files from inside it to your host.
Use docker cp for this (https://docs.docker.com/engine/reference/commandline/cp/).
Example: docker cp CONTAINER:SRC_PATH DEST_PATH|-
I have a docker container running the etcd image by CoreOS, which can be found here: https://quay.io/repository/coreos/etcd. What I want to do is copy all the files saved in etcd's data directory to my local machine. I tried to connect to the container using docker exec -it etcd /bin/sh, but it seems there is no shell (/bin/bash, /bin/sh) in there, or at least none can be found on the $PATH. How can I either get into the container or get all of etcd's data files copied locally?
You can easily export the full contents of a container's filesystem:
docker export <CONTAINER ID> > /some_file.tar
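The result is a tar archive of the container's whole filesystem, which you can unpack locally and browse for the data directory:
docker export etcd > etcd.tar   # "etcd" is the container name from the question
mkdir etcd-fs
tar -xf etcd.tar -C etcd-fs     # the container's filesystem is now under ./etcd-fs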
Ideally you should use volumes so that all your data is stored outside the container. Then you can access those files like any other file.
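For example, a minimal sketch with the CoreOS etcd image; the binary path and data directory below are assumptions that may need adjusting for your image version:
# the host directory /var/lib/etcd-data will hold the data files
docker run -d --name etcd -v /var/lib/etcd-data:/etcd-data \
  quay.io/coreos/etcd /usr/local/bin/etcd --data-dir=/etcd-data
ls /var/lib/etcd-data   # the data files are now readable directly on the host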
Docker has the cp command for copying files between container and host:
docker cp <id>:/container/source /host/destination
You specify the container ID or name in the source, and you can flip the command round to copy from your host into the container:
docker cp /host/source <id>:/container/destination
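For example, with a hypothetical container named web:
docker cp web:/var/log/nginx/access.log ./access.log   # container -> host
docker cp ./nginx.conf web:/etc/nginx/nginx.conf       # host -> container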