I want to update a container. For testing, I want to create a copy of the corresponding volume and set up a new container for this new volume.
Is this as easy as doing cp -r volumeOld volumeNew?
Or do I have to pay attention to something?
To clone a Docker volume, you transfer the files from one volume to another: manually create the new volume, then spin up a container that copies the contents across.
Someone has already made a script for that, which you might use: https://github.com/gdiepen/docker-convenience-scripts/blob/master/docker_clone_volume.sh
If not, use the following commands (taken from the script):
# Substitute "old_volume" and "new_volume" with your real volume names
docker volume create --name new_volume
docker container run --rm -it \
-v old_volume:/from \
-v new_volume:/to \
alpine ash -c "cd /from ; cp -av . /to"
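You can then verify the copy by listing the new volume's contents from another throwaway container:
# list the cloned data to confirm the copy succeeded
docker container run --rm -v new_volume:/data alpine ls -la /data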
On Linux it can be as easy as copying a directory. Docker keeps volumes in /var/lib/docker/volumes/<volume_name>, so you can simply copy the contents of the source volume into a directory with another name:
# -p to preserve permissions
sudo cp -rp /var/lib/docker/volumes/source_volume /var/lib/docker/volumes/target_volume
If you want to copy volumes managed by docker-compose, you'll also need to copy the compose-specific labels when creating the new volume.
Otherwise docker-compose will complain with something like Volume already exists but was not created by Docker Compose.
Extending on the solution by MauriceNino, these lines worked for me:
# Substitute "proj1_vol1" and "proj2_vol2" with your real volume names
docker volume inspect proj1_vol1 # Look at labels of old volume
docker volume create \
--label com.docker.compose.project=proj2 \
--label com.docker.compose.version=2.2.1 \
--label com.docker.compose.volume=vol2 \
proj2_vol2
docker container run --rm -it \
-v proj1_vol1:/from \
-v proj2_vol2:/to \
alpine ash -c "cd /from ; cp -av . /to"
Btw, this also seems to be the only way to rename Docker volumes.
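For example, renaming a volume boils down to cloning it and removing the original (using the old_volume/new_volume placeholders from the first answer; for compose-managed volumes, also add the --label flags shown above when creating the new volume):
docker volume create new_volume
docker container run --rm -it \
-v old_volume:/from \
-v new_volume:/to \
alpine ash -c "cd /from ; cp -av . /to"
# remove the old volume once you have verified the copy
docker volume rm old_volume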
In my work I use this script to:
clone the container
clone all its volumes and copy contents from the old volumes to the new ones
run the new container (with an arbitrary new image)
reattach the new volumes to the new container at the same destinations as the old ones
However, the script makes some assumptions about the naming of the volumes, so please read the README instructions before applying it.
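If you only need the volume-cloning part, a minimal sketch (my own, not the script itself; it assumes the container, here called old_container as a placeholder, uses only named volumes and that the clones are simply prefixed with new_) could look like this:
# clone every named volume attached to old_container into new_<name>
for vol in $(docker inspect -f '{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}} {{end}}{{end}}' old_container); do
  docker volume create "new_${vol}"
  docker container run --rm \
    -v "${vol}:/from" \
    -v "new_${vol}:/to" \
    alpine ash -c "cd /from ; cp -av . /to"
done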
Related
Suppose I created a Docker volume like so:
docker volume create my-volume
The volume was then used by some container and data was written to it.
Is there any way to read the contents of the volume from the host machine without attaching it to a container? The answer should not include reading it from /var/lib/docker... as that path can change from machine to machine and from OS to OS.
So I am looking for a command like
docker cat my-volume:/path/inside/this/volume/file.txt
Is there any way to read the contents of the volume from the host machine without attaching it to a container?
No.
On the other hand, the recipe to read an individual file from a temporary container isn't that much more complicated than what you show:
docker run --rm -v my-volume:/my-volume -w /my-volume busybox \
cat ./path/inside/this/volume/file.txt
Instead of cat, you can run any other command; so if you wanted to copy the contents of the volume out to the local system, for example, you could similarly run
docker run --rm -v my-volume:/my-volume -w /my-volume busybox \
tar cf - . \
| tar xvf -
Trying to copy files from the container to the local folder first.
So, I have a custom Dockerfile with RUN mkdir /test1 && touch /test1/1.txt. I build my image from it, and I have created an empty folder at the local path /root/test1,
and docker run -d --name container1 -v /root/test1:/test1 Image:1
I want to copy files from the container to the local folder so I can reuse them later, but the empty local folder takes precedence over the container's /test1 and leaves it empty.
Could someone please help me here?
For example, I have built my own custom Jenkins image. On the first launch I need to copy all the configuration and changes from the container to the local folder, so that if I later delete the container and launch it again, I don't need to configure everything from scratch.
Thanks,
The relatively new --mount flag can be used in place of the -v/--volume flag. It's syntactically easier to understand and also more verbose (see https://docs.docker.com/storage/volumes/).
You can mount and copy with:
# an image that provides bash must be named before the shell; ubuntu here is just an example
docker run -i \
--rm \
--mount type=bind,source="$(pwd)"/root/test1,target=/test1 \
ubuntu \
/bin/bash << COMMANDS
cp <files> /test1
COMMANDS
where you need to adjust the cp command to your needs. I'm not sure if you need the "$(pwd)" part.
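For comparison, mounting a named volume (rather than a bind mount) with the same flag would look like this (my-volume is just a placeholder name):
docker run --rm --mount type=volume,source=my-volume,target=/data alpine ls -la /data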
Off the top of my head, without testing to confirm, I think it is
docker cp container1:/path/on/container/filename /path/on/hostmachine/
EDIT: Yes, that should work. Also, "container1" is used here because that was the container name provided in the example.
In general it works like this
container to host
docker cp containername:/containerpath/ /hostpath/
host to container
docker cp /hostpath/ containername:/containerpath/
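Applied to the example from the question, a possible workflow (the second container name, container2, is just an illustration) is to run the container once without the bind mount, copy the generated files out with docker cp, and only then mount the local copy:
# run once without the bind mount so /test1 keeps the files from the image
docker run -d --name container1 Image:1
# copy the container's /test1 contents to the local folder
docker cp container1:/test1/. /root/test1/
# later containers can bind-mount the now-populated local folder
docker run -d --name container2 -v /root/test1:/test1 Image:1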
I need to install a custom bundle in a dockerized servicemix image. To do so, I need to place some files in the /etc directory of the servicemix image.
Could anyone help me with this?
I've tried using the Dockerfile as follows:
But it simply doesn't work. I've looked through the documentation of the image, and the author says to use the command docker run --volumes-from servicemix-data -it ubuntu bash and inspect /servicemix, but it's empty.
Dockerfile:
FROM dskow/apache-servicemix
WORKDIR .
COPY ./docs /apache-servicemix/etc
...
Command suggested by the author:
docker run --volumes-from servicemix-data -it ubuntu bash
I was unfamiliar with this approach but, having looked at the source (link), I think this is what you want to do:
Create a container called servicemix-data that will become your volume:
docker run --name servicemix-data -v /servicemix busybox
Confirm this worked:
docker container ls --format="{{.ID}}\t{{.Names}}" --all
42b3bc4dbedf servicemix-data
...
Then you want to copy the files into this container:
docker cp ./docs servicemix-data:/servicemix/etc
Finally, run servicemix using this container (with your files) as the source for its data:
docker run \
--detach \
--name=servicemix \
--volumes-from=servicemix-data \
dskow/apache-servicemix
HTH!
Changes made inside the container will be lost unless they are committed back to the image.
You can use this Dockerfile https://hub.docker.com/r/mkroli/servicemix/dockerfile and add your COPY statement just before the ENTRYPOINT.
COPY ./docs /opt/apache-servicemix/etc
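If you prefer not to edit that Dockerfile directly, an alternative (my sketch, with my-servicemix as an assumed tag) is to extend the published image and add the COPY there; the ENTRYPOINT is inherited from the base image:
# write a small child Dockerfile next to the ./docs directory
cat > Dockerfile <<'EOF'
FROM mkroli/servicemix
COPY ./docs /opt/apache-servicemix/etc
EOF
docker build -t my-servicemix .
docker run --detach --name servicemix my-servicemix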
I would like someone to assist me in reading the docker run command below:
docker run --rm \
--volumes-from myredis \
-v $PWD/backup:/backup \
debian \
cp /data/dump.rdb /backup/
I know it dumps Redis, attaching the volumes from the container myredis and mounting the backup directory in my current working directory. As for the rest of the command, I am having trouble interpreting it.
Thanks.
This command creates a Redis backup: you are copying dump.rdb into the backup dir ($PWD/backup) on your host.
--rm means the container is removed after it runs; it's usually a good way to keep your environment clean, because you cannot reuse this container once it has finished its work.
debian is the name of the image that you are using.
"cp /data/dump.rdb /backup/" is the command that runs inside the container.
I was using Docker in the old way, with a volume container:
docker run -d --name jenkins-data jenkins:tag echo "data-only container for Jenkins"
But now I changed to the new way by creating a named volume:
docker volume create --name my-jenkins-volume
I bound this new volume to a new Jenkins container.
The only thing I have left is a folder containing the /var/jenkins_home of my previous Jenkins container (obtained by using docker cp).
Now I want to fill my new named volume with the content of that folder.
Can I just copy the content of that folder to /var/lib/jenkins/volume/my-jenkins-volume/_data?
You can certainly copy data directly into /var/lib/docker/volumes/my-jenkins-volume/_data, but by doing this you are:
Relying on physical access to the docker host. This technique won't work if you're interacting with a remote docker api.
Relying on a particular aspect of the volume implementation that could change in the future, breaking any processes you have that rely on it.
I think you are better off relying on things you can accomplish using the docker api, via the command line client. The easiest solution is probably just to use a helper container, something like:
docker run -v my-jenkins-volume:/data --name helper busybox true
docker cp . helper:/data
docker rm helper
You don't need to start a container to add data to an already existing named volume; just create a container and copy the data there:
docker container create --name temp -v my-jenkins-volume:/data busybox
docker cp . temp:/data
docker rm temp
You can reduce the accepted answer to one line using, e.g.
docker run --rm -v `pwd`:/src -v my-jenkins-volume:/data busybox cp -r /src/. /data
Here are the steps for copying the contents of ~/data to a Docker volume named my-vol.
Step 1. Attach the volume to a "temporary" container. For that, run this command in a terminal:
docker run --rm -it --name alpine --mount type=volume,source=my-vol,target=/data alpine
Step 2. Copy the contents of ~/data into my-vol. For that, run these commands in a new terminal window:
cd ~/data
docker cp . alpine:/data
This will copy the contents of ~/data into the my-vol volume. After the copy, exit the temporary container.
You can add this Bash function to your .bashrc to copy files to an existing Docker volume without running a container:
# Usage: copy-to-docker-volume SRC_PATH DEST_VOLUME_NAME [DEST_PATH]
copy-to-docker-volume() {
SRC_PATH=$1
DEST_VOLUME_NAME=$2
DEST_PATH="${3:-}"
# create smallest Docker image possible
echo -e 'FROM scratch\nLABEL empty=""' | docker build -t empty -
# create temporary container to be able to mount volume
CONTAINER_ID=$(docker container create -v "${DEST_VOLUME_NAME}":/data empty cmd)
# copy files to volume
docker cp "${SRC_PATH}" "${CONTAINER_ID}":"/data/${DEST_PATH}"
# remove temporary container
docker rm "${CONTAINER_ID}"
}
Example
# create volume as destination
docker volume create my-volume
# create directory to copy
mkdir my-dir
echo "hello file1" > my-dir/my-file-1
# copy directory to volume
copy-to-docker-volume my-dir my-volume
# list directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# show file content on volume
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-1
# create another file to copy
echo "hello file2" > my-file-2
# copy file to directory on volume
copy-to-docker-volume my-file-2 my-volume my-dir
# list (updated) directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# check volume content
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-2
If you don't want to create a container and you have privileged access to the Docker host, simply do this (on Linux systems):
docker volume create my_named_volume
sudo cp -rp . /var/lib/docker/volumes/my_named_volume/_data/
Furthermore, this also allows you to access the data while Docker is running or while the containers are stopped.
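If you would rather not hard-code the path under /var/lib/docker, you can ask Docker for the volume's mountpoint first:
# prints something like /var/lib/docker/volumes/my_named_volume/_data
docker volume inspect --format '{{ .Mountpoint }}' my_named_volume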
If you don't want to create a temporary helper container on Windows Docker Desktop (backed by WSL 2), then
copy the files to the location below:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\my-volume\_data
Here my-volume is the name of your named volume. Browse the above path from the address bar in File Explorer. This is an internal network share created by WSL on Windows.
Note: it might be better to use the Docker API as mentioned by larsks, but I have not faced any issues on Windows.
Similarly, on Linux, files can be copied to
/var/lib/docker/volumes/my-volume/_data/