Docker mounting removes existing files in directory - docker

I have a Docker image whose content in the /workspace directory is:
# ls workspace
dir1 dir2
When I mount a volume onto the container's /workspace directory, this content is lost. Is there any way to retain these files?
# docker run command
docker run -d -t --net host --rm \
-v $PWD:/workspace -w /workspace/workspace_1 \
my_docker_image:latest /bin/bash
At this point the content of the directory has changed:
# ls workspace
hostdir1 hostdir2
What I want to have is
# ls workspace
dir1 dir2 hostdir1 hostdir2

Related

Cannot copy files from source folder to Docker image

I'm trying to copy a folder and its contents from a source to a Docker container using an image. I built the image from this Dockerfile:
RUN useradd -m -s /bin/bash user1 && \
ln -s /foo /home/user1/foo
RUN mkdir /foo && chown -R user1:user1 /foo
VOLUME /foo
After I build the Docker image, I run this command to create a container:
docker run -it \
--name container_name \
--mount "type=bind,source=$(pwd)/$FOLDER/container_foo,destination=/foo/" \
dockerimage:tag
The files from the source folder /foo aren't in /container_foo. I checked by running docker exec -it container_ID /bin/bash and confirmed that the files aren't there.
EDIT:
I just found out that a bind mount only goes one way: it exposes local host files/folders to the Docker container. It doesn't copy the files inside the Docker container out to the local folder when mounting. I removed the creation of the /foo directory from RUN and dropped the VOLUME instruction. Instead I did this in the Dockerfile:
COPY --chown=user1:user1 foo/ foo/
And I was able to copy the files from the source into the image. Now I just need to copy them from there into container_foo when doing the docker run ... command.
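A hedged sketch of that remaining step, assuming the COPY placed the files at /home/user1/foo inside the image (adjust the path to wherever your COPY actually put them) and that bash and cp are available in the image: have the container copy the baked-in files into the bind mount before starting the shell.
docker run -it \
--name container_name \
--mount "type=bind,source=$(pwd)/$FOLDER/container_foo,destination=/foo/" \
dockerimage:tag \
bash -c 'cp -a /home/user1/foo/. /foo/ && exec bash'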

Dockerfile VOLUME command: data is lost when mounted as a bind mount

I created a Dockerfile and ran the container with a bind mount, and the contents are lost (no content):
FROM alpine:3.8
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
I am expecting the "myvol" directory under "pwd" to contain "greeting", which is not the case:
root@default:/home/docker# docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
/myvol #
However, the same thing works fine if it is mounted the following way:
docker@default:~$ docker run -it --name volDemo1 -v myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
1.txt greeting
/myvol # exit
Is this the expected behavior? Will the VOLUME instruction work only with named volumes and not bind mounts?
This is how bind mounts work. They mount one folder at another path on the filesystem. All access to the target path gets mapped directly back to the source directory.
What Docker provides for a named volume (your second example) is an initialization step when that named volume is empty on container creation. Docker will copy all files, directories, and metadata such as file owner and permissions from the image filesystem into the named volume before the container is started. This only happens with named volumes and not host mounts or tmpfs mounts. And this only happens when the named volume is empty, so it will not be updated as you change the image.
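A quick, hedged way to see that difference, reusing the voldemo image from the question (the volume name demo_vol and the empty-dir directory are just illustrative names):
# named volume: Docker should copy the image's /myvol content in on first use
docker volume create demo_vol
docker run --rm -v demo_vol:/myvol voldemo ls /myvol
# bind mount of an empty host directory: nothing is copied in, /myvol appears empty
mkdir -p empty-dir
docker run --rm -v "$(pwd)/empty-dir":/myvol voldemo ls /myvol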
You can make a named volume that mounts other directories on the host by passing additional options, giving you something between a host mount and default named volume, since they are both implemented with a bind mount. Three different examples of that are shown below:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
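A hedged usage sketch for the pre-created test_vol above (the file name from-container is only an illustration): since the named volume is backed by /home/user/test, anything written through the volume shows up in that host directory.
# write a file through the bind-backed named volume
docker run --rm -v test_vol:/data busybox sh -c 'echo hi > /data/from-container'
# the file is visible directly in the host directory backing the volume
ls /home/user/test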
If your goal is to keep things simple for other users of the image, and potentially update the volume with new versions of the image, then you'll want to do this as part of your entrypoint script. I do this in my volume caching scripts included in my base image. You copy the volume directory to a safe location inside the image, and then on container startup, the entrypoint script will copy the files into the volume.
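A minimal sketch of that entrypoint pattern (not the author's actual caching scripts), reusing the voldemo image from the question; the staging path /myvol.orig, the entrypoint.sh name, and the empty-check are all assumptions made for illustration:
# Dockerfile additions (sketch)
RUN cp -a /myvol /myvol.orig
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["sh"]
# entrypoint.sh (sketch)
#!/bin/sh
# seed the mounted /myvol from the staged copy only when it is empty
if [ -z "$(ls -A /myvol 2>/dev/null)" ]; then
    cp -a /myvol.orig/. /myvol/
fi
# hand off to the container's command
exec "$@"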

Create directory for docker volume

I want to create a Docker volume and add directories and files to it without creating extra containers/images, or at least with minimal overhead. This would go into a script, so interactive sessions like -it bash won't do.
I can copy files with:
docker container create --name dummy -v myvolume:/root hello-world
docker cp c:\myfolder\myfile.txt dummy:/root/myfile.txt
docker rm dummy
How do I create an empty directory?
attempt 1
mkdir lol; docker cp ./lol dummy:/root/lol # cannot copy directory
attempt 2
docker commit [CONTAINER_ID] temporary_image
docker run --entrypoint=bash -it temporary_image
This requires pulling an image with bash.
This worked for me; you can try it out. I am doing exactly this, running it from a script:
VOL_NAME=temp_vol
docker volume create $VOL_NAME
docker run -v $VOL_NAME:/root --name helper busybox true
mkdir tmp
docker cp tmp helper:/root/dir0
docker cp tmp helper:/root/dir1
docker cp tmp helper:/root/dir2
rm -rf tmp
docker rm helper
# check volume
sudo ls /var/lib/docker/volumes/$VOL_NAME/_data
dir0 dir1 dir2

docker cp the content of a folder

I'm trying to copy the content of a folder (on my server) to my container:
docker cp sonatype-work-backup/* nexus:/sonatype-work/
So I want the content of sonatype-work-backup in the /sonatype-work/ directory of nexus. But it doesn't work with the *, and without the star it copies the directory sonatype-work-backup itself inside my sonatype-work directory. I can't perform a mv after that either.
You could just mount that directory into your container at run time:
docker run -v /sonatype-work-backup:/mnt --name nexus nexus-image
then
docker exec -it nexus bash
and just cp from /mnt to your desired folder.
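For instance, a hedged sketch (assuming cp is present in the nexus image, which it normally is):
# inside the container's shell:
cp -a /mnt/. /sonatype-work/
# or in one shot, without an interactive shell:
docker exec nexus cp -a /mnt/. /sonatype-work/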

What is the right way to add data to an existing named volume in Docker?

I was using Docker in the old way, with a volume container:
docker run -d --name jenkins-data jenkins:tag echo "data-only container for Jenkins"
But now I changed to the new way by creating a named volume:
docker volume create --name my-jenkins-volume
I bound this new volume to a new Jenkins container.
The only thing I have left is a folder containing the /var/jenkins_home of my previous Jenkins container (obtained by using docker cp).
Now I want to fill my new named volume with the content of that folder.
Can I just copy the content of that folder to /var/lib/docker/volumes/my-jenkins-volume/_data?
You can certainly copy data directly into /var/lib/docker/volumes/my-jenkins-volume/_data, but by doing this you are:
Relying on physical access to the docker host. This technique won't work if you're interacting with a remote docker api.
Relying on a particular aspect of the volume implementation which could change in the future, breaking any processes you have that rely on it.
I think you are better off relying on things you can accomplish using the docker api, via the command line client. The easiest solution is probably just to use a helper container, something like:
docker run -v my-jenkins-volume:/data --name helper busybox true
docker cp . helper:/data
docker rm helper
You don't need to start a container to add data to an already existing named volume; just create a container and copy the data there:
docker container create --name temp -v my-jenkins-volume:/data busybox
docker cp . temp:/data
docker rm temp
You can reduce the accepted answer to one line, e.g.:
docker run --rm -v `pwd`:/src -v my-jenkins-volume:/data busybox cp -r /src /data
Here are the steps for copying the contents of ~/data to a Docker volume named my-vol.
Step 1. Attach the volume to a "temporary" container. For that, run this command in a terminal:
docker run --rm -it --name alpine --mount type=volume,source=my-vol,target=/data alpine
Step 2. Copy the contents of ~/data into my-vol. For that, run these commands in a new terminal window:
cd ~/data
docker cp . alpine:/data
This will copy the contents of ~/data into the my-vol volume. After the copy, exit the temporary container.
You can add this Bash function to your .bashrc to copy files to an existing Docker volume without running a container:
# Usage: copy-to-docker-volume SRC_PATH DEST_VOLUME_NAME [DEST_PATH]
copy-to-docker-volume() {
    SRC_PATH=$1
    DEST_VOLUME_NAME=$2
    DEST_PATH="${3:-}"
    # create smallest Docker image possible
    echo -e 'FROM scratch\nLABEL empty=""' | docker build -t empty -
    # create temporary container to be able to mount the destination volume
    CONTAINER_ID=$(docker container create -v "${DEST_VOLUME_NAME}":/data empty cmd)
    # copy files to volume
    docker cp "${SRC_PATH}" "${CONTAINER_ID}":"/data/${DEST_PATH}"
    # remove temporary container
    docker rm "${CONTAINER_ID}"
}
Example
# create volume as destination
docker volume create my-volume
# create directory to copy
mkdir my-dir
echo "hello file1" > my-dir/my-file-1
# copy directory to volume
copy-to-docker-volume my-dir my-volume
# list directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# show file content on volume
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-1
# create another file to copy
echo "hello file2" > my-file-2
# copy file to directory on volume
copy-to-docker-volume my-file-2 my-volume my-dir
# list (updated) directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# check volume content
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-2
If you don't want to create a Docker container and you have privileged access to the Docker host, simply do this (on Linux systems):
docker volume create my_named_volume
sudo cp -rp . /var/lib/docker/volumes/my_named_volume/_data/
Furthermore, this also allows you to access the data while Docker is running or while the containers are stopped.
If you don't want to create a temporary helper container on Windows with Docker Desktop (backed by WSL 2), then copy the files to the location below:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\my-volume\_data
Here my-volume is the name of your named volume. Browse the above path from the address bar in your file explorer. This is an internal network path exposed by WSL on Windows.
Note: it might be better to use the Docker API as mentioned by larsks, but I have not faced any issues on Windows.
Similarly, on Linux, files can be copied to:
/var/lib/docker/volumes/my-volume/_data/
