Exporting a Docker container with contents of mounted volume - docker

I am trying to export a docker container that uses a mounted local volume as its root, which I run with docker run --privileged -v /path/to/local/files:/root --name cse303dev -it cse303 to mount that local directory.
What is the best way of exporting the container AND all of the contents of that local mounted directory into a simple tar file of some sort? And then how would I easily re-import this container and run it so that I can see and use all of those files copied from the local machine that exported it? Is this possible?

You almost never "export" a container per se. Containers are generally intended to be freely destroyed and recreated.
In your case you already have the data you care about stored outside the container (in a bind-mounted folder, which is the easy case), so you can just copy that directory tree wherever else you need it and then run a new copy of the container there.
# On the original host: archive the bind-mounted directory
(cd /path/to/local/files && tar cvzf ~/local-files.tar.gz .)
scp ~/local-files.tar.gz there:
ssh there
# On the new host: unpack it and run a fresh container on the copy
mkdir files
(cd files && tar xvzf ../local-files.tar.gz)
docker run --privileged -it -v "$PWD/files:/root" cse303
This is trickier if you're storing the data in a named volume. The Docker documentation describes how to back up the contents of a named volume and you'd have to go through that procedure.
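In outline, that procedure mounts the named volume and a host directory into a throwaway container and tars between them; a minimal sketch, with illustrative names ("mydata" is a hypothetical volume):
# Writes backup.tar of the volume "mydata" into the current host directory
docker run --rm \
    -v mydata:/data \
    -v "$PWD":/backup \
    busybox tar cvf /backup/backup.tar -C /data .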

If you want to export the entire contents of the container (including the mounted volumes, which might be a bad idea depending upon what you have mounted), then you want to run tar inside the container and pipe out the data to a file:
docker run --privileged -v /path/to/local/files:/root "${IMAGE_NAME}" \
    tar -cf - -C / --exclude=proc --exclude=sys . | gzip > myfile.tgz
I would highly recommend excluding /proc and /sys (as in the above example) or you will likely encounter issues.
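On the re-import side, since the image itself can be pulled or rebuilt, you typically only need to unpack the bind-mounted part of that archive onto the new host; a sketch, assuming the tarball from above and the image and mount point from the question:
mkdir files
# Extract only the /root subtree that was bind-mounted
tar xzf myfile.tgz -C files ./root
docker run --privileged -it -v "$PWD/files/root:/root" cse303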

Related

Backup /var/lib/docker without images?

I want to make a backup of all my containers and volumes, so the easiest way would be to copy /var/lib/docker to another location.
However this directory also includes all the images, and I don't want to include them since they all can easily be re-downloaded from public sources.
So how can I copy this directory while excluding the images?
You have to differentiate between backing up a container and backing up a volume:
Backing up a container means backing up its configuration: labels, environment variables, and so on.
You do that by committing the container as an image:
$ docker container commit <container-name/id> <name-of-new-image>
It is better to also give it some metadata:
$ docker container commit -m "very important container config state" -a "John Doe" <container-name/id> <name-of-new-image>
Backing up a volume
Let's say the volume of interest, <my-vol>, is bound to a container <other-container>, which may have been created like: docker container run --name other-container -v /my-vol <some-image> ...
So first you bind that volume to a newly created temporary container with the --volumes-from flag. With the -v option you additionally mount a local host path into that container:
$ docker container run --rm --volumes-from <other-container> \
    -v <dir/on/host>:<mountpath/in/container> \
    <ubuntu/centos/whatever-base-image> tar cvf <mountpath/in/container>/backup.tar /<my-vol>
After the command completes, the container stops, and because of the --rm option it is also deleted.
With all that, the steps are:
bind the volume to a temporary container
mount a host path into the container
make a tarball (or whatever kind of backup) of the volume inside the container
the container stops and is deleted after the backup command has finished
the backup tarball is left in the mounted directory on the container host.
see also: https://docs.docker.com/storage/volumes/
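The restore is the same pattern in reverse: mount the volume and the host directory holding the tarball into a temporary container and unpack there; a sketch with the same illustrative placeholders as above:
$ docker container run --rm --volumes-from <other-container> \
    -v <dir/on/host>:<mountpath/in/container> \
    <ubuntu/centos/whatever-base-image> tar xvf <mountpath/in/container>/backup.tar -C /
Since tar strips the leading slash when creating the archive, unpacking with -C / puts the files back under /<my-vol>.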
Shell Command
The other, not recommended, way would be to do it with OS-level commands:
shopt -s extglob
cp -r /var/lib/docker/!(image) your/path/backup
For that you have to stop all involved containers first to prevent read/write issues.
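For example (this stops every running container on the host, so use it with care):
docker stop $(docker ps -q)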

Is there any way to read contents of a Docker volume without attaching it to a container?

Suppose I created a Docker volume like so:
docker volume create my-volume
The volume was then used by some container and data was written to it.
Is there any way to read the contents of the volume from the host machine without attaching it to a container? The answer should not include reading it from /var/lib/docker..., as that path can change from machine to machine and from OS to OS.
So I am looking for a command like
docker cat my-volume:/path/inside/this/volume/file.txt
Is there any way to read the contents of the volume from the host machine without attaching it to a container?
No.
On the other hand, the recipe to read an individual file from a temporary container isn't that much more complicated than what you show:
docker run --rm -v my-volume:/my-volume -w /my-volume busybox \
cat ./path/inside/this/volume/file.txt
Instead of cat, you can run any other command; so if you wanted to copy the contents of the volume out to the local system, for example, you could similarly run
docker run --rm -v my-volume:/my-volume -w /my-volume busybox \
tar cf - . \
| tar xvf -

Docker volume is empty

When using the -v switch, the files from the container should be copied to the host volume, right? But it seems like the jenkins_home directory isn't created at all.
If I create the jenkins_home directory manually and then mount it, the directory is still empty.
I want to preserve the Jenkins configs so I can re-run the image later.
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
If you docker run -v jenkins_home:... where the first half of the -v option has no slashes in it at all, that syntax creates a named Docker volume; it isn't a bind mount.
If you docker run -v "$PWD/jenkins_home:..." then that host directory is mounted over the corresponding container directory. At startup time, nothing is ever copied into the host directory; if the host directory is empty, that empty directory gets mounted into the container, hiding everything that was in the image.
If you use the docker run -v named-volume:... syntax, and the named volume is empty, then in this case only, and only the very first time the container is run, the contents of the image are copied into the named volume. This doesn't work for bind mounts, and it doesn't work if there is already data in the volume (perhaps from a previous docker run). It also does not work in other container environments such as Kubernetes. I do not recommend relying on this behavior.
Probably the easiest way to make this work is to launch a one-off container to export the contents of the image, and then use bind-mount syntax:
cd jenkins_home
# Run a throwaway container from the image (--rm cleans it up when done)
# with /var/jenkins_home as its working directory (-w); it writes a tar
# file of that directory to stdout, which is unpacked on the host.
docker run --rm -w /var/jenkins_home jenkins/jenkins tar cf - . | tar xf -
# Now launch the container as normal
docker run -d -p ... -v "$PWD:/var/jenkins_home" jenkins/jenkins
Figured it out.
Turned out that by default it creates the volume in /var/lib/docker/volumes/jenkins_home/ instead of in the current directory.
Also, I had tried docker volume create jenkins_home before running the Docker image. So I'm not sure whether it was the -v jenkins_home:/var/jenkins_home or the earlier docker volume create that created the directory in /var/lib/docker/volumes/.
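Either way, you can ask Docker where a named volume actually lives on the host:
docker volume inspect -f '{{ .Mountpoint }}' jenkins_home
# typically prints /var/lib/docker/volumes/jenkins_home/_data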

Docker, mount all user directories to container

The -v option can mount directories into the container; for example, after mounting /home/me/my_code into the container, we can see that directory from inside it.
Currently, in my Dockerfile, the user is docker and the workspace is /home/docker. How can I mount all my directories in /home/me to /home/docker, so that when I enter the container it is convenient to run my tasks and explore files just like in /home/me?
While building an image through a Dockerfile, COPY or ADD is used to copy files with the necessary content into the image at build time (for example, installing npm binaries and the like).
Since you are looking for the flexibility of having the same local filesystem inside the container, you can try out "Bind Mounts".
bash-3.2$ docker run \
> -it \
> --name devtest \
> --mount type=bind,source=/Users/anku/,target=/app \
> nginx:latest \
> bash
root@c072896c7bb2:/#
root@c072896c7bb2:/# pwd
/
root@c072896c7bb2:/# cd app
root@c072896c7bb2:/app# ls
Applications Documents Library Music Projects PycharmProjects anaconda3 'iCloud Drive (Archive)' 'pCloud Drive' testrun.bash
Desktop Downloads Movies Pictures Public 'VirtualBox VMs' gitlab minikube-linux-amd64 starup.sh
root@c072896c7bb2:/app#
There are two kinds of mechanisms to manage persisting data.
Volumes are completely managed by Docker.
A bind mount mounts a file or directory on the host machine into the container. Any changes made from the host machine or from inside the container are synced.
I suggest going through Differences between --volume and --mount behavior.
Choose what works best for you.
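Applied to your case, a bind mount of your home directory onto the container user's home might look like this (the image name is just a placeholder):
docker run -it \
    --mount type=bind,source=/home/me,target=/home/docker \
    your-image \
    bash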

how to map a local folder as the volume the docker container or image?

I am wondering if I can map a volume in Docker to another folder on my Linux host. If I don't misunderstand, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I am thinking about changing that to a host folder I do have access to (for example /tmp/) when I create the image. I am able to modify the Dockerfile if this can be done before creating the image. Or must this be done after creating the image, or after creating the container?
I found this article which helps me to use a local directory as the volume in docker.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host. They are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible, but you'd need root access to run it. (Note that with access to Docker on the host, there are likely plenty of ways to gain root privileges on the host anyway.) Creating a hard link to the target folder should be all that's needed if both source and target are on the same filesystem.
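If you do have root access, an OS-level bind mount is one way to expose a named volume's backing directory at a friendlier path; a sketch, assuming the default storage location and a volume named "test":
sudo mount --bind /var/lib/docker/volumes/test/_data /tmp/test-volume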
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
    -v test:/source -v "$(pwd)/data":/target \
    busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may also need to run a chown -R $uid /target in a container to change everything to your uid on the host.
