How to get docker volumes from newly built image? - docker

I am building a Docker image with a SQLite database using Jenkins, and I believe I want to build the database from the Dockerfile and have it stored in a volume so that I can export the volume separately. I start the build with docker build -t FOO . but when I try to extract the volume data with:
docker run --rm --volumes-from FOO -v $(pwd):/backup busybox tar cvf /backup/backup.tar /opt/webapp
I get the error: No such container: FOO
This of course makes sense, because FOO is not a container, it's an image. But how do I get a container identifier? I can't just read whatever Docker outputs, because I am running this unattended in a Jenkins build.
I get the feeling I am going about this the wrong way. But what is the right way?

You need to run a container based on the FOO image:
docker run -d --name BAR FOO
And then you can access the volumes:
docker run --rm --volumes-from BAR ...
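If hard-coding a name is awkward in a Jenkins job, you can also capture the container ID that docker run prints and use that instead; a minimal sketch, assuming the image is tagged FOO and the volume lives at /opt/webapp:
# Start a throwaway container from the image and capture its ID
CID=$(docker run -d FOO)
# Back up its volumes into the current Jenkins workspace
docker run --rm --volumes-from "$CID" -v "$(pwd)":/backup busybox tar cvf /backup/backup.tar /opt/webapp
# Remove the container and its anonymous volumes
docker rm -v "$CID"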

Run the container you want to back up and do the backup directly in that container (if the image defines a custom entrypoint, override it with something like --entrypoint /bin/sh):
docker run --rm -v $(pwd):/backup FOO tar cvf /backup/backup.tar /opt/webapp
Or, if you must run your backup in a different container (e.g. backup utilities aren't included), you only need to create the FOO container, not run it:
docker create --name foo-vol FOO
docker run --rm --volumes-from foo-vol -v $(pwd):/backup \
busybox tar cvf /backup/backup.tar /opt/webapp
docker rm -v foo-vol
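In a Jenkins shell step, the same sequence can be wrapped so that any failure fails the build; a rough sketch (foo-vol is just an arbitrary container name, as above):
#!/bin/sh
set -e                               # fail the build step if any command fails
docker create --name foo-vol FOO     # create the container without starting it
docker run --rm --volumes-from foo-vol -v "$(pwd)":/backup \
    busybox tar cvf /backup/backup.tar /opt/webapp
docker rm -v foo-vol                 # clean up the container and its anonymous volumes
tar tvf backup.tar | head            # quick sanity check of the archive on the host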

Related

Rename Docker Volume on Docker Desktop

Is it possible to rename a docker volume? I want to change the volume names of the existing container. I see that config.json and hostconfig.json have the volume details in them.
docker run -di -p 8083:443 -v app_main_db_test_1:/var/lib/pgsql/data -v app_main_conf_test_1:/var/www/ ubuntu
I want to change app_main_db_test_1 to app_main_db, and app_main_conf_test_1 to app_main_conf.
As far as I know, there is currently no way to rename a Docker volume. There is an open GitHub issue, which indicates there is no built-in solution yet.
But there are a few useful workarounds. Since you say you are using Docker Desktop, you could check this comment:
docker volume create --name <new_volume>
docker run --rm -it -v <old_volume>:/from -v <new_volume>:/to alpine ash -c "cd /from ; cp -av . /to"
docker volume rm <old_volume>
This should do exactly what you are planning to do.
Figured it out, thanks to this GitHub post.
# Create new Volume for DB and copy files from old volume
docker volume create --name app_main_db
docker run --rm -it -v app_main_db_test_1:/from -v app_main_db:/to alpine ash -c "cd /from ; cp -av . /to"
# Create new Volume for conf and copy files from old volume
docker volume create --name app_main_conf
docker run --rm -it -v app_main_conf_test_1:/from -v app_main_conf:/to alpine ash -c "cd /from ; cp -av . /to"
# Start the container using new volumes
docker run -di -p 8083:443 -v app_main_db:/var/lib/pgsql/data -v app_main_conf:/var/www/ ubuntu
# Delete old volumes
docker volume rm app_main_db_test_1
docker volume rm app_main_conf_test_1
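If you want to double-check the copy before running the docker volume rm steps above, you can compare the old and new volumes from a throwaway container; a minimal sketch for the DB volume:
# List both volumes side by side before removing the old one
docker run --rm -v app_main_db_test_1:/from -v app_main_db:/to alpine \
    sh -c "ls -la /from && ls -la /to"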

Where is the file I mounted at run time to Docker?

I mounted my secret file secret.json at runtime into a local Docker container, and while it works, I can't seem to find this volume anywhere.
My Dockerfile looks like this and has no reference to the secret:
RUN mkdir ./app
ADD src/python ./app/src/python
ENTRYPOINT ["python"]
Then I ran
docker build -t {MY_IMAGE_NAME} .
docker run -t -v $PATH_TO_SECRET_FILE/:/secrets/secret.json \
-e MY_CREDENTIALS=/secrets/secret.json \
{MY_IMAGE_NAME} ./app/src/python/runner.py
This runs successfully locally but when I do
docker run --entrypoint "ls" {MY_IMAGE_NAME}
I don't see the secrets volume.
Also, if I run
docker volume ls
it doesn't have anything that looks like secrets.
Without the environment variable MY_CREDENTIALS the script won't run, so I am sure the secret file is mounted somewhere, but I can't figure out where it is. Any idea?
You are actually creating two separate containers with the commands you are running. The first docker run command creates a container from the image you built, with the volume mounted; the second command creates a new container from the same image, but without any volumes, since you don't define any in that command.
I'd suggest you give your container a name, like so:
docker run -t -v $PATH_TO_SECRET_FILE/:/secrets/secret.json \
-e MY_CREDENTIALS=/secrets/secret.json \
--name my_container {MY_IMAGE_NAME} ./app/src/python/runner.py
and then run exec on that container
docker exec -it my_container sh
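As a side note, -v with a host path creates a bind mount rather than a named volume, which is why nothing shows up in docker volume ls. To see where the file landed, you can inspect the container's mounts (my_container is the name used above), or just look inside the running container; a minimal sketch:
# Show all bind mounts and volumes attached to the container (run on the host)
docker inspect --format '{{ json .Mounts }}' my_container
# Or, from the shell opened with docker exec, check the mount point directly:
ls -la /secrets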

Docker volume backup error: Tar: MYCONTAINER_VOLUME: Cannot stat: No such file or directory

I'm trying to backup my volume as described here in the docker documentation: https://docs.docker.com/storage/volumes/#backup-restore-or-migrate-data-volumes
I'm running the command with the path to the volume:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/docker/volumes/MYCONTAINER_VOLUME
... and I also tried with just the name of my volume:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar MYCONTAINER_VOLUME
but no matter what I get an error like: tar: MYCONTAINER_VOLUME: Cannot stat: No such file or directory
This volume was created and linked to the container with docker-compose, and it's using the local driver.
When I run docker volume ls I get:
DRIVER VOLUME NAME
local MYCONTAINER_VOLUME
Can someone please tell me what I'm doing wrong with this?
I figured out what the issue was:
The last part of the command should be the path of the volume mounted in the CONTAINER, not the path of the volume on the HOST.
So basically, the formula for this command should be:
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu tar cvf /backup/MY_BACKUP.tar /PATH/INSIDE/CONTAINER/TO/VOLUME/data
... and this will create MY_BACKUP.tar in the current directory of the HOST.
Also, make sure to STOP the container before archiving the volume if it's running something like Postgres, as in my case.
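Putting it together, the backup step ends up looking roughly like this (MYCONTAINER and the in-container path are placeholders for your own setup):
docker stop MYCONTAINER                 # stop writes to the volume first
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu \
    tar cvf /backup/MY_BACKUP.tar /PATH/INSIDE/CONTAINER/TO/VOLUME/data
docker start MYCONTAINER                # bring the service back up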
Then, to restore the volume if you're using docker-compose (I had trouble with this too, because the documentation isn't specific to preexisting containers/volumes created this way):
1) STOP the container
2) Make sure MY_BACKUP.tar is in the root project directory of the HOST
3) run
docker run --rm --volumes-from MYCONTAINER -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/MY_BACKUP.tar"
4) Restart the container
Hope this helps someone and I'm certainly open to any ideas to streamline this.
The documentation assumes your container has a volume associated with it.
Meaning: your container was started with a volume.
Example:
$ docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest
Check at the very least if you do have volumes created with:
docker volume ls
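If the volume shows up there as a named volume (like myvol2 in the example above), you can also skip --volumes-from and mount it into the backup container by name; a minimal sketch:
# Mount the named volume read-only and archive it into the host's current directory
docker run --rm -v myvol2:/app:ro -v $(pwd):/backup ubuntu \
    tar cvf /backup/backup.tar /app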

Where is my data located when I backup a docker container volume?

I tried to run:
docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
Where is "/data" located? I cannot find it on my system.
The tar command is executed inside the container itself. In your particular case the data might be lost, because after the tar command ends, the container itself exits, and as you specified --rm, it is deleted.
If your volume container DATA has /data, the data is stored there. You can access it by linking to it from another container again, like:
docker run --rm --volumes-from DATA busybox ls /data

docker shared volume not working as described in the documentation

I am learning Docker, and according to the documentation a shared data volume is only destroyed when the last container holding a reference to it is removed with the -v flag. Nevertheless, in my initial tests this is not the behaviour I saw.
From the documentation:
Managing Data in Containers
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers.
I did the following:
1) docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2) docker run -d --volumes-from dbdata --name db1 ubuntu:14.04 /bin/bash
3) Created some files on the /dbdata directory
4) Exited the db1 container
5) docker run -d --volumes-from dbdata --name db2 ubuntu:14.04 /bin/bash
6) I could access the files created in item 3 and create some new files
7) Exited the db2 container
8) docker run -d --volumes-from dbdata --name db3 ubuntu:14.04 /bin/bash
9) I could access the files created in items 3 and 6 and create some new files
10) Exited the db3 container
11) Removed all containers without the -v flag
12) Created the dbdata container again, but the data was not there.
As stated in the user manual:
This allows you to upgrade, or effectively migrate data volumes between containers.
I wonder what I am doing wrong.
You are doing nothing wrong. In step 12, you are creating a new container with the same name. It has a different volume, which initially is empty.
Maybe the following example can illustrate what is happening (IDs and paths may vary on your system or across Docker versions):
$ docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
7c23cc1e6637e29f36c6cdd4c1461f6e1742b201e05227279ac3db55328da674
This runs a container that has a volume /dbdata and gives it the name dbdata. The ID is returned (yours will be different).
Now lets inspect the container and print the "Volumes" information:
$ docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e]
We can see that your /dbdata volume is located at /var/lib/docker/vfs/dir/248641...
Let's create some new data inside the container's volume:
$ docker run --rm --volumes-from dbdata ubuntu:14.04 /bin/bash -c "echo fuu >> /dbdata/test"
And check if it is available
$ docker run --rm --volumes-from dbdata -it ubuntu:14.04 cat /dbdata/test
fuu
Afterwards you delete the containers, without the -v flag.
$ docker rm dbdata
The dbdata container (with ID 7c23cc1e6637) is gone; however, the volume's data is still present on your filesystem, as you can see if you inspect the folder:
$ cat /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e/test
fuu
(Please note: if you delete the container with docker rm -v dbdata instead, the volume's files on your host filesystem will be deleted, and the above cat command would result in a No such file or directory error or similar.)
Finally, in step 12, you start a new container with a different volume and give it the same name: dbdata.
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2500731848fd6f2093243da3be064db79e76d731904e6f5349c3f00f054e5f8c
Inspection yields a different volume, which is initially empty.
docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/faffba00358060024026412203a1562125f73d2bdd69a2202483e858dda04740]
If you want to re-use the volume, you have to create a new container and import/restore the data from the filesystem into the data container. In your case, you should not delete the data container in the first place, as you want to reuse the volume from it.
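For completeness, if the data container was already deleted but the files are still under the old host path shown by docker inspect, one way to recover is to bind-mount that directory into a fresh data container and copy the files across; a rough sketch using the path from the example above:
# Recreate the data-only container with a new, empty /dbdata volume
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
# Copy the leftover files from the old volume's host directory into the new volume
docker run --rm --volumes-from dbdata \
    -v /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e:/old \
    ubuntu:14.04 cp -a /old/. /dbdata/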
