Docker: filesystem changes not exporting - docker

TL;DR My docker save/export isn't working and I don't know why.
I'm using boot2docker for Mac.
I've created a Wordpress installation proof of concept, and am using BusyBox for both the MySQL data container and the main filesystem data container. I created these containers using:
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run -v /var/www/html --name=http_root -d busybox
Running docker ps -a shows two containers, both based on busybox:latest. So far so good. Then I create the Wordpress and MySQL containers, pointing to their respective data containers:
>docker run \
--name mysql_db \
-e MYSQL_ROOT_PASSWORD=somepassword \
--volumes-from wp_datastore \
-d mysql
>docker run \
--name=wp_site \
--link=mysql_db:mysql \
-p 80:80 \
--volumes-from http_root \
-d wordpress
I go to my URL (the boot2docker IP) and there's a brand new Wordpress application. I go ahead and set up the Wordpress site by adding a theme and some images. I then docker inspect http_root and sure enough the filesystem changes are all there.
I then commit the changed containers:
>docker commit http_root evilnode/http_root:dev
>docker commit wp_datastore evilnode/wp_datastore:dev
I verify that my new images are there. Then I save the images:
> docker save -o ~/tmp/http_root.tar evilnode/http_root:dev
> docker save -o ~/tmp/wp_datastore.tar evilnode/wp_datastore:dev
I verify that the tar files are there as well. So far, so good.
Here is where I get a bit confused. I'm not entirely sure if I need to, but I also export the containers:
> docker export http_root > ~/tmp/http_root_snapshot.tar
> docker export wp_datastore > ~/tmp/wp_datastore_snapshot.tar
So I now have 4 tar files:
http_root.tar (saved image)
wp_datastore.tar (saved image)
http_root_snapshot.tar (exported container)
wp_datastore_snapshot.tar (exported container)
I SCP these tar files to another machine, then proceed to build as follows:
>docker load -i ~/tmp/wp_datastore.tar
>docker load -i ~/tmp/http_root.tar
The images evilnode/wp_datastore:dev and evilnode/http_root:dev are loaded.
>docker run -v /var/lib/mysql --name=wp_datastore -d evilnode/wp_datastore:dev
>docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
If I understand correctly, containers were just created based on my images.
Sure enough, the containers are there. However, if I docker inspect http_root, and go to the file location aliased by /var/www/html, the directory is completely empty. OK...
So then I think I need to import into the new containers since images don't contain file system changes. I do this:
>cat http_root_snapshot.tar | docker import - http_root
I understand this to mean that I am importing a file system delta from one container into another. However, when I go back to the location aliased by /var/www/html, I see the same empty directory.
How do I export the changes from these containers?

Volumes live outside the container's filesystem layers, so they are not exported with the new image. The proper way to manage data in Docker is to use a data container and use a command like docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata or docker cp to back up the data and transfer it around. https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
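For the setup in the question, that pattern would look roughly like this (a sketch; the volume paths come from the question, the archive name is only an example):
# on the source host: archive the http_root volume into the current directory
docker run --rm --volumes-from http_root -v $(pwd):/backup busybox tar cvf /backup/http_root_data.tar /var/www/html
# copy http_root_data.tar to the target machine, recreate the container from the saved image, then unpack the data into its volume
docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
docker run --rm --volumes-from http_root -v $(pwd):/backup busybox tar xvf /backup/http_root_data.tar
The same pattern applies to wp_datastore with /var/lib/mysql.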

Related

To which directory should file volume be attached in Kommet Docker

I am running an installation of the Kommet platform, but I have trouble figuring out how to attach a volume related to uploaded file storage.
According to Kommet documentation I am supposed to run it with the following command:
docker run -t kommet/kommet -v data-volume:/var/lib/postgresql -d
This works fine and I can see that my database data is properly stored in my data volume. However, Kommet also allows for uploading files, and I cannot figure out where they are stored.
Is there an option to attach another volume to a specific location where uploaded files are kept?
According to the Running Kommet as Docker container doc, you must mount a volume at /usr/local/tomcat/webapps/filestorage, such as:
docker run -d --name myapp \
-v my-file-storage:/usr/local/tomcat/webapps/filestorage \
-t kommet/kommet
In your case, combined with the database volume, it would be something like:
docker run -d \
-v data-volume:/var/lib/postgresql \
-v file-volume:/usr/local/tomcat/webapps/filestorage \
-t kommet/kommet
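To confirm uploads actually land in the volume, a quick check could look like this (a sketch; it assumes the container was started with --name myapp as in the doc example):
# list uploaded files from inside the running container
docker exec myapp ls /usr/local/tomcat/webapps/filestorage
# show where Docker keeps the file-volume data on the host
docker volume inspect file-volume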

Migrating A Docker Volume to Podman

I used to have a Docker volume for mariadb, which contained my database. As part of migration from Docker to Podman, I am trying to migrate the db volume as well. The way I tried this is as follows:
1- Copy the content of the named docker volume (/var/lib/docker/volumes/mydb_vol) to a new directory I want to use for Podman volumes (/opt/volumes/mydb_vol)
2- Run the container with Podman:
podman run --name mariadb-service -v /opt/volumes/mydb_vol:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host mariadb
This successfully creates a container and initializes the database with the given environment variables. The problem is that the database in the container is empty! I tried changing the host-mounted path to /opt/volumes/mydb_vol/_data and the container path to /var/lib/mysql, both simultaneously and one at a time. None of them worked.
As a matter of fact, when I "podman exec -ti container_digest bash" inside the resulting container, I can see that the table files are present in the specified container directories, but the mysql shell says the database is empty!
Any idea how to properly migrate Docker volumes to Podman? Is this even possible?
I solved it by not treating the directory as a docker volume, but instead mounting it into the container:
podman run \
--name mariadb-service \
--mount type=bind,source=/opt/volumes/mydb_vol/data,destination=/var/lib/mysql \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=mysecret \
-e MYSQL_DATABASE=wordpress \
mariadb
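If the original data still sits in the Docker named volume, one way to seed the bind-mount path is to let Docker copy it out before switching to Podman (a sketch; mydb_vol and the target path follow the question, and the files may still need to be chowned to the mysql user inside the container):
# dump the Docker volume's contents into the directory Podman will bind-mount
mkdir -p /opt/volumes/mydb_vol/data
docker run --rm -v mydb_vol:/from -v /opt/volumes/mydb_vol/data:/to alpine cp -a /from/. /to/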

Exporting whole docker container for jenkins or just volume?

Create jenkins container and bind volume - jenkins-data
docker run --name myJenkins1 -p 8080:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home jenkins/jenkins:lts
make changes - update plugins, run builds etc
login to jenkins in browser etc
now export the whole container as a tar
docker export 2c8b996d3088 > jenkinsContainerAndVolume.tar
Since this includes the jenkins image, it seems quite large. I am going to need the jenkins image anyway, but wondered if there is a better practice or standard way to save just the volume data?
The docker export command doesn't save the container's volumes.
To backup the named volume you could use tar like this:
docker run -v jenkins-data:/dbdata -v $(pwd):/backup ubuntu tar zcvf /backup/backup.tar.gz /dbdata
In case you need to migrate this container with all its volumes to another host I use this script:
https://github.com/ricardobranco777/docker-volumes.sh
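To restore that archive into a jenkins-data volume on another host, the reverse works (a sketch; the archive and volume names follow the backup command above, and myJenkins2 is just an example name):
# unpack backup.tar.gz from the current directory back into the named volume
docker run --rm -v jenkins-data:/dbdata -v $(pwd):/backup ubuntu bash -c "cd / && tar xzvf /backup/backup.tar.gz"
# then start Jenkins against the restored volume
docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home jenkins/jenkins:lts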

Docker basics, how to keep installed packages and edited files?

Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
pulls verdaccio if it does not exist yet on my server and runs it on a specific port. -d detaches it so I can leave the terminal and keep it running, right?
docker exec -it --user root verdaccio /bin/sh
lets me open a shell inside the running container. However, any apk package I add and any file I edit would be lost if I rm the container and then run the image again. So what's the use of this? Can I keep the changes in the image?
As I need to edit the config.yaml that is present at /verdaccio/conf/config.yaml (in the container), is my only option to keep these changes to detach the data from the running instance? Is there another way?
V_PATH=/path/on/my/server/verdaccio
docker run -it --rm --name verdaccio -p 4873:4873 \
-v $V_PATH/conf:/verdaccio/conf \
-v $V_PATH/storage:/verdaccio/storage \
-v $V_PATH/plugins:/verdaccio/plugins \
verdaccio/verdaccio
However, this command would throw:
fatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml'
You can use docker commit to build a new image based on the container.
A better approach however is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes in it. This makes the process easily repeatable (for example if a new version of the base image comes out).
A further option is the use of volumes as you already mentioned.
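A minimal sketch of the Dockerfile approach, written as shell so it can be pasted as-is (the config.yaml next to the Dockerfile and the my-verdaccio tag are assumptions):
# bake a customized config into a new image based on verdaccio/verdaccio
cat > Dockerfile <<'EOF'
FROM verdaccio/verdaccio
COPY config.yaml /verdaccio/conf/config.yaml
EOF
docker build -t my-verdaccio .
docker run -d --name verdaccio -p 4873:4873 my-verdaccio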

How to port data-only volumes from one host to another?

As described in the Docker documentation on Working with Volumes there is the concept of so-called data-only containers, which provide a volume that can be mounted into multiple other containers, no matter whether the data-only container is actually running or not.
Basically, this sounds awesome. But there is one thing I do not understand.
These volumes (which do not explicitly map to a folder on the host for portability reasons, as the documentation states) are created and managed by Docker in some internal folder on the host (/var/lib/docker/volumes/…).
Supposed I use such a volume, and then I need to migrate it from one host to another - how do I port the volume? AFAICS it has a unique ID - can I just go and copy the volume and its according data-only container to a new host? How do I find out which files to copy? Or is there some support built-in to Docker that I did not discover yet?
The official answer is available in the section "Back up, restore, or migrate data volumes":
BACKUP:
sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
--rm: remove the container when it exits
--volumes-from DATA: attach to the volumes shared by the DATA container
-v $(pwd):/backup: bind-mount the current directory into the container, to write the tar file to
busybox: a small, simple image - good for quick maintenance
tar cvf /backup/backup.tar /data: creates an uncompressed tar file of all the files in the /data directory
RESTORE:
# create a new data container
$ sudo docker create -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
Extending the official answer from Docker docs and the top answer here, you can have the following functions in your .bashrc or .zshrc:
# backup files from a docker volume into /tmp/backup.tar.gz
function docker-volume-backup-compressed() {
  docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -czvf /backup/backup.tar.gz "${@:2}"
}
# restore files from /tmp/backup.tar.gz into a docker volume
function docker-volume-restore-compressed() {
  docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -xzvf /backup/backup.tar.gz "${@:2}"
  echo "Double checking files..."
  docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie ls -lh "${@:2}"
}
# backup files from a docker volume into /tmp/backup.tar
function docker-volume-backup() {
  docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -cvf /backup/backup.tar "${@:2}"
}
# restore files from /tmp/backup.tar into a docker volume
function docker-volume-restore() {
  docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -xvf /backup/backup.tar "${@:2}"
  echo "Double checking files..."
  docker run --rm -v /tmp:/backup --volumes-from "$1" busybox ls -lh "${@:2}"
}
Note that the backup is saved into /tmp, so you can move the backup file saved there between docker hosts.
There are also two pairs of backup/restore functions: one uses compression and debian:jessie, the other no compression but busybox. Favor compression if the files to back up are big.
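Usage then looks like this (the container name and path are only examples):
# archive /var/lib/mysql from the wp_datastore data container into /tmp/backup.tar.gz
docker-volume-backup-compressed wp_datastore /var/lib/mysql
# later, or on another host with the same data container recreated, restore it
docker-volume-restore-compressed wp_datastore /var/lib/mysql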
You can export the volume to a tar file and transfer it to another machine, then import the data with tar on the second machine. This does not rely on implementation details of the volumes.
# you can list shared directories of the data container
docker inspect <data container> | grep "/vfs/dir/"
# you can export data container directory to tgz
docker run --cidfile=id.tmp --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
# clean up: remove exited container used for export and temporary file
docker rm `cat id.tmp` && rm -f id.tmp
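On the second machine, the archive can be unpacked into a fresh data container in the same way (a sketch; the placeholders mirror the export commands above):
# create a new data container exposing the same volume path
docker run -v <volume path> --name <new data container> busybox true
# unpack the archive into its volume
gzip -dc volume.tgz | docker run -i --rm --volumes-from <new data container> ubuntu tar -xf - -C /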
I'll add another recent tool here from IBM which is actually made for volume migration from one container host to another. This is currently an ongoing project, so you may find a different version with additional features in the future.
Cargo was developed to migrate containers from one host to another host along with their data with minimal downtime. Cargo uses data federation capabilities of union filesystem to create a unified view of data (mainly the root file system) across the source and target hosts. This allows Cargo to start up a container almost immediately (within milliseconds) on the target host as the data from source root file system gets copied to target hosts either on-demand (using a copy-on-write (COW) partition) or lazily in the background (using rsync).
Important points are:
- a centralized server handles the migration process
The link to the project is given here:
https://github.com/nadgowdas/cargo
In case your machines are in different VPCs, or you want to copy from/to a local machine (like in my case), you can use dvsync, which I created. It's basically ngrok combined with rsync over SSH, packaged into two small (~25MB each) images. First, you start the dvsync-server on the machine you want to copy data from (you'll need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard):
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_VOLUME,target=/data,readonly \
quay.io/suda/dvsync-server
Then you can start the dvsync-client on the machine you want to copy the files to, passing the DVSYNC_TOKEN shown by the server:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
Once the copying is done, the client will exit. This works with the Docker CLI, Compose, Swarm and Kubernetes as well.
Here's a one-liner in case an SSH connection can be established between the machines:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c "cd /from ; tar -cf - . " | ssh <TARGET_HOST> 'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - " '
Credits go to Guido Diepen's post.
Just wrote the docker-volume-snapshot command for a similar use case. This command is based on tommasop's answer.
With the command:
Create snapshot
docker-volume-snapshot create <volume-name> snapshot.tar
Move snapshot.tar to another host
Restore snapshot
docker-volume-snapshot restore snapshot.tar <volume-name>
Adding an answer here as I don't have reputation to comment. While all the above answers have helped me, I imagine there may be others like me who are also looking to copy the contents of a backup.tar file into a named docker volume on the collaborator's machine. I don't see this discussed specifically above or in docker volumes documentation.
Why would you want to copy the backup.tar file into a named docker volume?
This could be helpful in a scenario where a named docker volume has been specified inside an existing docker-compose.yml file to be used by some of the containers.
Copying contents of backup.tar into a named docker volume
On host machine, follow the steps in accepted answer or docker volumes documentation to create a backup.tar file and push it to some repository.
Pull backup.tar into collaborator's machine from repository.
On collaborator's machine, create a temporary container and a named docker volume.
docker run -v named_docker_volume:/dbdata --name temp_db_container ubuntu /bin/bash
--name temp_db_container: create a container called temp_db_container
ubuntu /bin/bash: use an ubuntu image to build temp_db_container with a starting command of /bin/bash
-v named_docker_volume:/dbdata: mount the /dbdata folder of temp_db_container into a docker volume called named_docker_volume. We use this specifically named volume named_docker_volume to match the volume name specified in our docker-compose.yml file.
On collaborator's machine, Copy over the contents of backup.tar into the named docker volume.
docker run --rm --volumes-from temp_db_container -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
--volumes-from temp_db_container : temp_db_container container's /dbdata folder was mapped to named_docker_volume volume in previous step. So any file that gets stored in /dbdata folder will immediately get copied over to named_docker_volume docker volume.
-v $(pwd):/backup : map the local machine's present working directory to the /backup folder located inside temp_db_container
ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1" : Untar the backup.tar file and store the untarred contents inside /dbdata folder.
On collaborator's machine, remove the temporary container temp_db_container
docker rm temp_db_container
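If you don't need the intermediate container afterwards, the same result can be achieved with a single throwaway container (a sketch; the volume and archive names follow the steps above):
# populate named_docker_volume directly from backup.tar in the current directory
docker run --rm -v named_docker_volume:/dbdata -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"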
Adapted from the accepted answer, but gives more flexibility in that you can use it in a bash pipeline:
#!/bin/bash
if [ $# != 2 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup
exit 1
fi
if [ -t 1 ]; then
echo The output of the cmd is binary data "(tar)", \
and it should be redirected instead of printed to terminal
exit 1
fi
volume="$1"
path="$2"
exec docker run --rm --mount type=volume,src="$volume",dst=/mnt/volume/ alpine tar cf - . -C /mnt/volume/"$path"
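For example, assuming the script is saved as docker-volume-dump.sh (the name is arbitrary), it can feed a pipeline directly:
# compress the whole volume on the fly
./docker-volume-dump.sh my_volume . | gzip > my_volume.tar.gz
# or stream it straight to another machine
./docker-volume-dump.sh my_volume . | ssh user@otherhost 'cat > my_volume.tar'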
If you want to backup the volume periodically and incrementally, then you can use the following script:
#!/bin/bash
if [ $# != 3 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup /path/to/put/backup
exit 1
fi
volume="$1"
volume_path="$2"
path="$3"
if [[ "$path" =~ ^.*/$ ]]; then
echo "The 3rd argument shouldn't end in '/', otherwise rsync would not behave as expected"
exit 1
fi
container_name="docker-backup-rsync-service-$RANDOM"
docker run --rm --name="$container_name" -d -p 8738:873 \
--mount type=volume,src="$volume",dst=/mnt/volume/ \
nobodyxu/rsyncd
echo -e '\nStarting syncing...'
rsync --info=progress2,stats,symsafe -aHAX --delete \
"rsync://localhost:8738/root/mnt/volume/$volume_path/" "$path"
exit_status=$?
echo -e '\nStopping the rsyncd docker...'
docker stop -t 1 "$container_name"
exit $exit_status
It utilizes rsync's server and client functionality to directly sync the dir between volume and your host dir.
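Assuming the script is saved as docker-volume-rsync-backup.sh (the name and paths here are only examples), an incremental run looks like:
# sync the whole grafana_data volume into /backups/grafana_data on the host (requires rsync on the host)
./docker-volume-rsync-backup.sh grafana_data . /backups/grafana_data
Re-running the same command later only transfers files that changed since the previous run.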
I was dissatisfied with the answer using tar. I decided to take matters into my own hands. As I am going to be syncing the data often, and it's going to be big, I wanted specifically to use rsync. Using tar to send all the data every time would be just a waste of time and transfer.
After days spent working out how to get two remote docker containers to communicate, I finally got a solution using socat.
run two docker containers - one on the source host, the other on the destination host - each with the relevant volume mounted (the source volume and the destination volume).
run rsync --daemon on one of the containers; it will serve/receive data from its volume.
run docker exec <container with rsyncd> socat - TCP:localhost:rsync and docker exec <other container> socat TCP-LISTEN:rsync -, and connect the stdin and stdout of both of these together. So one socat connects to rsync --daemon and redirects data from/to stdout/stdin, the other socat listens on the rsync port (port 873) and redirects to/from stdin/stdout. Then connect them together, so basically we pipe data from one container port to the other.
then run the rsync client against the other volume, connecting to localhost:rsync, effectively connecting via the "socat pipe" to the rsync --daemon.
Basically, it works like this:
log "Running both destination and source containers"
src_did=$(
env DOCKER_HOST=$src_docker_host docker run --rm -d -i -v \
"$src_volume":/data:ro -w /data alpine_with_rsync_and_socat\
sleep infinity
)
dst_did=$(
env DOCKER_HOST=$dst_docker_host docker run --rm -d -i -v \
"$dst_volume":/data:rw -w /data alpine_with_rsync_and_socat \
sleep infinity
)
log "Running rsyncd on destination container"
env DOCKER_HOST=$dst_docker_host docker exec "$dst_did" sh -c "
cat <<EOF > /etc/rsyncd.conf &&
uid = root
gid = root
use chroot = no
max connections = 1
numeric ids = yes
reverse lookup = no
[data]
path = /data/
read only = no
EOF
rsync --daemon
"
log "Setup rsync socat forwarding between containers"
{
coproc { env DOCKER_HOST=$dst_docker_host docker exec -i "$dst_did" \
socat -T 10 - TCP:localhost:rsync,forever; }
env DOCKER_HOST=$src_docker_host docker exec -i "$src_did" \
socat -T 10 TCP-LISTEN:rsync,forever,reuseaddr - <&"${COPROC[0]}" >&"${COPROC[1]}"
} &
log "Running rsync on source that will connect to destination"
env DOCKER_HOST=$src_docker_host docker exec -e RSYNC_PASSWORD="$g_password" -w /data "$src_did" \
rsync -aivxsAHSX --progress /data/ rsync://root@localhost/data
Another really nice thing about this approach is that you can copy data between two remote hosts without ever storing the data locally. I also share the script, docker-rsync-volumes, that I've written around this idea. With that script, copying a volume between two remote hosts is as simple as docker-rsync-volumes --delete -f ssh://user@productionserver grafana_data -t ssh://user@backupserver grafana_data_backup.
This copies your volume from one server to another over ssh.
docker run --rm -v $VOLUME:/$VOLUME alpine tar -czv --to-stdout -C /$VOLUME . | ssh $REMOTEHOST "docker run --rm -i -v $VOLUME:/$VOLUME alpine tar xzf - -C /$VOLUME"
If you want to copy more than one volume matching a filter:
REMOTEHOST=root@123.123.123.123
Volumes=($(docker volume ls --filter "name=mailcow*" --format="{{.Name}}"))
for VOLUME in ${Volumes[@]}; do
docker run --rm -v $VOLUME:/$VOLUME alpine tar -czv --to-stdout -C /$VOLUME . | ssh $REMOTEHOST "docker run --rm -i -v $VOLUME:/$VOLUME alpine tar xzf - -C /$VOLUME"
done
