Docker: commit a container with its data

There is lots of documentation, but I'm still missing something. My goal is to run a one-time registry (2.0), push a couple of images to it, and export/commit the container.
I need to take it as a zip file to a machine without internet access.
Thing is, the images I pushed to the registry aren't kept. Whenever I import the registry to test, it comes up empty. I understand that commit/export will not work on mounted volumes, so how do I "disable" the volumes of the initial registry container?

I would rather suggest you decouple the image (registry v2) from the data for transport, by copying the needed images separately and then mounting them into the registry container when running it.
Kind of like this:
On the machine where you are preparing the registry, run a registry container using something like:
docker run -d \
--name registry \
--restart=always \
-e SEARCH_BACKEND=sqlalchemy \
-e STORAGE_PATH=/srv/docker-registry \
-v /srv/data/docker-registry:/srv/docker-registry \
-p 127.0.0.1:5002:5000 \
registry:2.0.0
Then tag your images as localhost:5002/repo-name/image-name (the host port mapped above) and execute
docker push localhost:5002/repo-name/image-name
After that, tar/zip/whatever /srv/data/docker-registry and do
docker save -o ~/docker-registry-v2 registry:2.0.0
Copy the two archives to the target machine,
docker load -i ~/docker-registry-v2
Untar/unzip/whatever the data archive and run the registry again with a similar run command as above, supplying the directory you unpacked it to as the first path after -v, for example:
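On the target machine, that could look something like this (a sketch assuming you unpacked the data archive to /srv/data/docker-registry again; use whatever directory you actually extracted it to):
docker run -d \
--name registry \
--restart=always \
-e SEARCH_BACKEND=sqlalchemy \
-e STORAGE_PATH=/srv/docker-registry \
-v /srv/data/docker-registry:/srv/docker-registry \
-p 127.0.0.1:5002:5000 \
registry:2.0.0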
With this technique, the repos and images in your registry will also survive container destroys and restarts.

Related

Storing local files in Docker Volume for sharing

I'm new to Docker, so this may be an obvious question that I'm just not using the right search terms to find an answer to, so my apologies if that is the case.
I'm trying to stand up a new CI/CD Pipeline using a purpose built container. So far, I've been using someone else's container, but I need more control over the available dependencies, so I need my own container. To that end, I've built a container (Ubuntu), and I have a local (host) directory for the dependencies, and another for the project I'm building. Both are connected to the container using Docker Volumes (-v option), like this.
docker run --name buildbox \
-v /projectpath:/home/project/ \
-v /dependencies:/home/libs \
buildImage buildScript.sh
Since this is going to eventually live in a Docker repo and be accessed by a GitLab CI/CD Pipeline, I want to store the dependencies directory in as small of a container as possible that I can push up to the Docker repo alongside my Ubuntu build container. That way I can have the Pipeline pull both containers, map the dependencies container to the build container (--volumes-from), and map the project to be built using the -v option; e.g.:
docker run --name buildbox \
-v /projectpath:/home/project/ \
--volumes-from depend_vol \
buildImage buildScript.sh
Thus, I pull buildImage and depend_vol from the Docker repo, run buildImage while attaching the dependencies container and project directory as volumes, then run the build script (and extract the build artifact when it's done). The reason I want them separate is in case I want to create different build containers that use common libraries, or if I want to create version specific dependency containers without having a full OS stored (I have plans for this).
Now, I could just start a lightweight generic container (like busybox) and copy everything into it, but I was wondering if there was simply a way to attach the volume and then store the contents in the image when the container shuts down. Everything I've seen about making a portable data store / volume starts with all the data already copied into the container.
But I want to take my local host dependencies directory and store it in a container. Is there a straightforward way to do this? Am I missing something obvious?
So this works, even if it's not quite what I was hoping for, since I'm still doing a lot of file copying (just with tarballs).
# Create a tarball of the files on the host to store, don't store the full path
tar -cvf /home/projectFiles.tar -C /home/projectFiles/ .
# Start a lightweight docker container (busybox) with a volume connection to the host (/home:/backup), then extract the tarball into the container
# cd to the drive root and untar the tarball
docker run --name libraryVolume \
-v /home:/backup \
busybox \
/bin/sh -c \
"cd / && mkdir /projectLibs && tar -xvf /backup/projectFiles.tar -C /projectLibs"
# Don't forget to commit the container image
docker commit libraryVolume libraryvolume
That's it. Then push to the repo.
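The push itself might look like this (the registry address is hypothetical; substitute your own):
docker tag libraryvolume registry.example.com/libraryvolume:latest
docker push registry.example.com/libraryvolume:latest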
To use it, pull the repo, then start the data volume:
docker run --name projLib \
-v /projectLibs \
--entrypoint "/bin/sh" \
libraryvolume
Then start the container (projBuild) that is going to reference the data volume (projLib).
docker run -it --name projBuild \
--volumes-from=projLib \
-v /home/mySourceCode:/buildProject \
--entrypoint /buildProject/buildScript.sh \
builderImage
Seems to work.

volumes not working with Datapower and docker

I am using the Docker DataPower image for local development. I am using this image:
https://hub.docker.com/layers/ibmcom/datapower/latest/images/sha256-35b1a3fcb57d7e036d60480a25e2709e517901f69fab5407d70ccd4b985c2725?context=explore
Datapower version: IDG.10.0.1.0
System: Docker for mac
Docker version 19.03.13
I am running the container with the following config
docker run -it \
-v $PWD/config:/drouter/config \
-v $PWD/local:/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower
When I create files in File Management or save a DataPower object configuration, I do not see the changes reflected in the directory on my machine.
I would also expect to be able to create files in my host directory and see them reflected in /drouter/config and /drouter/local in the container, as well as in the management GUI.
The volume mounts don't seem to be working correctly, or perhaps I misunderstand something about DataPower or Docker.
I have tried mounting volumes in other Docker containers under the same path and that works fine, so I don't think it's an issue with the file sharing settings in Docker.
The file system structure changed in version 10.0. There is some documentation in the IBM Knowledge Center showing the updated locations for config:, local:, etc., but the Docker Hub page has not been updated to reflect that yet.
Mounting the volumes like this fixed it for me:
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
It seems the container persists its configuration here instead, which is different from the instructions on Docker Hub.
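Putting the two together, the full run command from the question becomes (same ports and environment variables as before):
docker run -it \
-v $PWD/config:/opt/ibm/datapower/drouter/config \
-v $PWD/local:/opt/ibm/datapower/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 9022:22 \
-p 5554:5554 \
-p 8000-8010:8000-8010 \
ibmcom/datapower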

Docker basics, how to keep installed packages and edited files?

Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
pulls verdaccio if it does not exist yet on my server and runs it on a specific port. -d detaches it, so I can leave the terminal and keep it running, right?
docker exec -it --user root verdaccio /bin/sh
lets me open a shell in the running container. However, any apk package that I add would be lost if I rm the container and then run the image again, as would any edited file. So what's the use of this? Can I keep the changes in the image?
As I need to edit the config.yaml that is present at /verdaccio/conf/config.yaml (in the container), is my only option for keeping these changes to detach the data from the running instance? Is there another way?
V_PATH=/path/on/my/server/verdaccio; docker run -it --rm --name verdaccio \
-p 4873:4873 \
-v $V_PATH/conf:/verdaccio/conf \
-v $V_PATH/storage:/verdaccio/storage \
-v $V_PATH/plugins:/verdaccio/plugins \
verdaccio/verdaccio
However this command would throw
fatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml'
You can use docker commit to build a new image based on the container.
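For example (my-verdaccio is just a placeholder image name; stop and remove the original container before reusing the name verdaccio):
docker commit verdaccio my-verdaccio
docker run -d --name verdaccio -p 4873:4873 my-verdaccio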
A better approach however is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes in it. This makes the process easily repeatable (for example if a new version of the base image comes out).
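A minimal sketch of that approach, assuming you keep your customized config.yaml next to the Dockerfile (the image name my-verdaccio is a placeholder):
# write a Dockerfile that bakes your config into the verdaccio base image
cat > Dockerfile <<'EOF'
FROM verdaccio/verdaccio
COPY config.yaml /verdaccio/conf/config.yaml
EOF
docker build -t my-verdaccio .
docker run -d --name verdaccio -p 4873:4873 my-verdaccio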
A further option is the use of volumes as you already mentioned.

Docker: filesystem changes not exporting

TL;DR My docker save/export isn't working and I don't know why.
I'm using boot2docker for Mac.
I've created a Wordpress installation proof of concept, and am using BusyBox for both the MySQL data container and the main file system container. I created these containers using:
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run -v /var/www/html --name=http_root -d busybox
Running docker ps -a shows two containers, both based on busybox:latest. So far so good. Then I create the Wordpress and MySQL containers, pointing to their respective data containers:
>docker run \
--name mysql_db \
-e MYSQL_ROOT_PASSWORD=somepassword \
--volumes-from wp_datastore \
-d mysql
>docker run \
--name=wp_site \
--link=mysql_db:mysql \
-p 80:80 \
--volumes-from http_root \
-d wordpress
I go to my url (boot2docker ip) and there's a brand new Wordpress application. I go ahead and set up the Wordpress site by adding a theme and some images. I then docker inspect http_root and sure enough the filesystem changes are all there.
I then commit the changed containers:
>docker commit http_root evilnode/http_root:dev
>docker commit wp_datastore evilnode/wp_datastore:dev
I verify that my new images are there. Then I save the images:
> docker save -o ~/tmp/http_root.tar evilnode/http_root:dev
> docker save -o ~/tmp/wp_datastore.tar evilnode/wp_datastore:dev
I verify that the tar files are there as well. So far, so good.
Here is where I get a bit confused. I'm not entirely sure if I need to, but I also export the containers:
> docker export http_root > ~/tmp/http_root_snapshot.tar
> docker export wp_datastore > ~/tmp/wp_datastore_snapshot.tar
So I now have 4 tar files:
http_root.tar (saved image)
wp_datastore.tar (saved image)
http_root_snapshot.tar (exported container)
wp_datastore_snapshot.tar (exported container)
I SCP these tar files to another machine, then proceed to build as follows:
>docker load -i ~/tmp/wp_datastore.tar
>docker load -i ~/tmp/http_root.tar
The images evilnode/wp_datastore:dev and evilnode/http_root:dev are loaded.
>docker run -v /var/lib/mysql --name=wp_datastore -d evilnode/wp_datastore:dev
>docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
If I understand correctly, containers were just created based on my images.
Sure enough, the containers are there. However, if I docker inspect http_root, and go to the file location aliased by /var/www/html, the directory is completely empty. OK...
So then I think I need to import into the new containers since images don't contain file system changes. I do this:
>cat http_root_snapshot.tar | docker import - http_root
I understand this to mean that I am importing a file system delta from one container into another. However, when I go back to the location aliased by /var/www/html, I see the same empty directory.
How do I export the changes from these containers?
Volumes are not exported with the new image. The proper way to manage data in Docker is to use a data container, and to back the data up and transfer it around with a command like docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata, or with docker cp. See https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
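Applied to the containers from the question, a backup and restore round trip could look roughly like this (the archive name is arbitrary):
# on the source machine: archive the contents of the http_root volume
docker run --rm --volumes-from http_root -v $(pwd):/backup busybox tar cvf /backup/http_root_data.tar /var/www/html
# on the target machine: recreate the data container, then unpack the archive into its volume
docker run -v /var/www/html --name=http_root -d busybox
docker run --rm --volumes-from http_root -v $(pwd):/backup busybox sh -c "cd / && tar xvf /backup/http_root_data.tar"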

Docker - how can I copy a file from an image to a host?

My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a .zip produced by sbt dist in ../target/), but I think this question also applies to jars, binaries, etc.
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:/out $IMAGENAME /bin/cp ../target/blah.zip /out
but I'm running boot2docker in OSX and I don't know how to directly write to my mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script to extract blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure Docker functionality (no bash needed).
Edit: the container does not even have to be run in this solution.
Edit 2: thanks to @Jonathan Dumaine for --rm, so the container will be removed afterwards. I had just never tried it, because it sounded illogical to copy something from somewhere that has already been removed by the previous command, but I tried it and it works.
Edit 3: due to the comments we found out --rm is not working as expected; it does not remove the container, because the container never runs, so I added functionality to delete the created container afterwards (--name tc = temporary-container).
Edit 4: this error appeared, which seems like a bug in Docker, because t is in a-z and this did not happen a few months ago.
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to copy the file from running container to a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update: as noted by @Elytscha Smith, this only works if your image has bash built in.
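If the image only ships a plain sh (busybox/alpine style) rather than bash, the same idea should work like this:
docker run -v $PWD:/opt/mount --rm image:version sh -c "cp /source/file /opt/mount/"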
Not a direct answer to the question details, but in general, once you pulled an image, the image is stored on your system and so are all its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using $ docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
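For example, to hunt for a file such as blah.zip from the question directly in the layer store (requires root; the path assumes the overlay2 driver):
sudo find /var/lib/docker/overlay2 -name 'blah.zip' 2>/dev/null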
First, pull the Docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on macOS. I can assure you that scripts based on docker cp are portable: any command is relayed inside boot2docker, but the binary stream is relayed back to the docker command line client running on your Mac, so write operations from the docker client are executed inside the server and written back to the executing client instance.
I am sharing a backup script for Docker volumes with every Docker container I provide, and my backup scripts are tested both on Linux and on macOS with boot2docker. The backups can be easily exchanged between platforms. Basically I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my Jenkins container, which has the name jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between the Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
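A restore on the other side would be the same pattern in reverse, for example (same names and paths as the backup command above):
docker run --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox sh -c "cd / && tar xf /backup/JenkinsBackup-2015-07-09-14-26-15.tar"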
You could bind a local path on the host to a path on the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.
