How to copy a file to a docker volume

I have a docker volume called hadoop_conf created at the default location /var/lib/docker/volumes/hadoop_conf/_data/. I need to copy some files from the host machine to this directory so the container can read them. My user has permission to run docker commands but does not have sudo access, and copying into that directory requires sudo. Is there any alternative way to copy files to this directory, perhaps a docker command?

You can use docker cp to copy files from your host into a container that already has the hadoop_conf volume mounted; whatever you copy to the mount point ends up in the volume. You do not need sudo privileges for this.
https://docs.docker.com/engine/reference/commandline/cp/
docker cp myfile.txt mycontainer:/path/to/hadoop_conf_volume/
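If no running container has the volume mounted yet, one option is to start a throwaway container with hadoop_conf mounted, copy the files in through it, and then remove it; the files remain in the volume. This is only a minimal sketch, and the helper container name and the XML file names are placeholders:
docker run -d --name conf-helper -v hadoop_conf:/conf busybox sleep 3600
docker cp core-site.xml conf-helper:/conf/
docker cp hdfs-site.xml conf-helper:/conf/
docker rm -f conf-helper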

Related

How to get all files and folders of a docker container?

I have a server with a docker container running on it. I can see the container by running docker container ls and can find its image by running docker image ls.
I can also open the container using docker exec -it <container_hash> sh. All I need to do is make a zip file of the project's files.
So, how can I copy all the files of a running container and make a zip file of them on the server? Note that I use Ubuntu 20.04.
docker cp is your friend here:
Usage: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Copy files/folders between a container and the local filesystem
Use '-' as the source to read a tar archive from stdin
and extract it to a directory destination in a container.
Use '-' as the destination to stream a tar archive of a
container source to stdout.
Options:
-a, --archive Archive mode (copy all uid/gid information)
-L, --follow-link Always follow symbol link in SRC_PATH
So in your case you could use
docker cp <container_name>:/app /local/path/for/directory
This will copy the directory out to your local file system. From there you can use a utility to create an archive in whatever format you want.
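For example, assuming the project lives at /app inside the container (both the container name and the path are placeholders here), you could copy it out and archive it like this:
docker cp <container_hash>:/app ./app
tar -czf project.tar.gz ./app
Or, if the zip utility is installed on the server, zip -r project.zip ./app.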

WORKDIR as VOLUME

In my Dockerfile, I have a WORKDIR and I want to have it as a VOLUME, so that on the host there is a directory under /var/lib/docker/volumes/ with the same content as the WORKDIR.
How do I use the VOLUME instruction in the Dockerfile for this?
While you can mount a volume over the WORKDIR that you were using when building your image, the volume isn't available at build time. Volumes are only available for a container, not while building an image.
You can COPY files into the image to represent the content that will exist in the volume once a container is running, and use those temporary files to complete the building of the image. However, if a host directory is bind-mounted at that location, those exact files are hidden; if an empty named volume is mounted there instead, docker copies the image's content into the volume on first use.
To have a directory from the host machine mounted inside a container, you would pass a -v parameter (you can do multiple -v params for different directories or for individual files) to the docker run command that starts the container:
docker run -v /path/on/host:/full/path/inside/container your_image_name
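If the goal is specifically a directory under /var/lib/docker/volumes/ that mirrors the WORKDIR, a minimal sketch (the image and container names are illustrative, not from the question) is to declare the WORKDIR as a VOLUME and then ask docker where the resulting volume lives:
FROM alpine
WORKDIR /app
COPY . /app
VOLUME /app
Then build and run, and inspect the mount:
docker build -t myimage .
docker run -d --name mycontainer myimage sleep 3600
docker inspect -f '{{ range .Mounts }}{{ .Source }}{{ end }}' mycontainer
The last command prints the volume's _data path under /var/lib/docker/volumes/, though reading that path directly on the host still requires root.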

docker run -v works even without VOLUME or mkdir

What is the use of "VOLUME" or "RUN mkdir /m"?
Even if I do not specify either of these instructions in the Dockerfile, "docker run -v ${PWD}/m:/m" still works.
Inside a Dockerfile, VOLUME marks a directory as a mount point for an external volume. Even if the docker run command doesn't mount an existing folder into that mount point, docker will create a named volume to hold the data.
RUN mkdir /m does what mkdir does on any Unix system. It makes a directory named m at the root of the filesystem.
docker run -v ... binds a host directory to a path inside a container. It will work whether or not the mount point was declared as a volume in a Dockerfile, and it will also create the directory inside the container if it doesn't exist. So neither VOLUME nor RUN mkdir is strictly necessary before using that command, though they may be helpful to communicate the intent to the user.
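A quick way to see the difference (the image and container names here are illustrative): a VOLUME declaration makes docker create an anonymous volume even when no -v is passed, while a bind mount with -v works with or without it. Given a Dockerfile containing only:
FROM alpine
VOLUME /m
built as myvolimage, compare:
docker run -d --name withvol myvolimage sleep 60
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ end }}' withvol
docker run --rm -v ${PWD}/m:/m alpine ls /m
The inspect output shows the anonymous volume that was created automatically, and the last command works even though plain alpine has neither a VOLUME for /m nor a RUN mkdir /m.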

How to share data between the docker container and the host?

I tried to share data between the docker container and the host, for example by adding the parameter -v /Users/name/Desktop/Tutorials:/cntk/Tutorials to the docker run command, but I noticed that it also deletes all the files the container had in /cntk/Tutorials.
My question is how to make the same link but instead have all the files in /cntk/Tutorials copied to the host (at /Users/name/Desktop/Tutorials).
Thank you
Unfortunately, that is not possible; this is simply how mounting works in Linux.
It is not correct to say that the files were deleted. They are still present in the underlying image, but the act of mounting another directory at the same path has obscured them. They exist, but are not accessible in this condition.
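To convince yourself that nothing was deleted, you can start another container from the same image without the bind mount and list the directory (the image name here is a placeholder for whatever image the original container was started from):
docker run --rm <your_image> ls /cntk/Tutorials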
One way you can accomplish this is by mounting a volume into your container at a different path, and then copying the container's files to that path. Something like this.
Mount a host directory at a different path from the one where the container already has the files you are interested in.
docker run -v /Users/name/Desktop/Tutorials:/cntk/Tutorials2 [...]
Now execute a command that copies the files already in the docker image into the volume mounted from the outside host.
docker exec <container-id> cp -r /cntk/Tutorials /cntk/Tutorials2
Alternatively, the docker cp command allows you to copy files/folders on demand between the host and a container:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
docker cp ContainerName:/home/data.txt . <== copy from container to host
docker cp ./test.txt ContainerName:/test.txt <== copy from host to container
docker cp ContainerName:/test.txt ./test2.txt <== copy from container to host
For details run docker cp --help

how to map a local folder as the volume of a docker container or image?

I am wondering if I can map a docker volume to another folder on my Linux host. The reason I want to do this is that, if I am not mistaken, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I am thinking of changing it to a host folder I do have access to (for example /tmp/) when I create the image. I can modify the Dockerfile if this can be done before creating the image. Or must this be done after creating the image or after creating the container?
I found this article, which helped me use a local directory as the volume in docker:
https://docs.docker.com/engine/userguide/containers/dockervolumes/
The command I use when creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host, though they are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible; you'd need root access to run it. Note that with access to docker on the host, there are likely a lot of ways to obtain root privileges on the host. Creating a symlink to the target folder should be all such a tool needs to do (hard links are not allowed for directories).
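As a sketch of that idea, assuming the named volume is called test and /tmp/test-volume is just an example path:
ln -s /var/lib/docker/volumes/test/_data /tmp/test-volume
sudo ls /tmp/test-volume
Traversing /var/lib/docker still requires root, so the sudo (or equivalent) doesn't go away; the link only gives you a friendlier path.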
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.
