Data Container In Docker: Data Not Populated

Here is my question.
I need to read data from a volume inside my container. Instead of using an ADD command in my Dockerfile to copy this data directly into my container, I want to get it from a data placeholder, i.e. a container that holds the data.
So, I created this data container,
docker run -d -v /var/lib/ABC --name ABC_datastore busybox true
To my understanding, this should create a container ABC_datastore that contains the data from the /var/lib/ABC directory of the host on which I run this command. Am I wrong?
So if my understanding is correct, I can use this container in my main container,
docker run -i -t --volumes-from ABC_datastore --name="ABC_ins" -d ABC_img
This should populate /var/lib/ABC inside my ABC_ins with the right data, but it is not happening. The folder /var/lib/ABC inside my ABC_ins is empty.
I also tried to populate the data using,
docker run -d -v /var/lib/ABC --name ABC_datastore busybox true;
tar -c /var/lib/ABC | docker run -a stdin -i --volumes-from ABC_datastore busybox tar -xC /var/lib/ABC
No luck there either.
Any help will be appreciated. My final goal is to create a data container that will contain the actual data in /var/lib/ABC that can be used inside my container in that given path.

docker run -d -v /var/lib/ABC --name ABC_datastore busybox true
To my understanding, this should create a container ABC_datastore that contains the data from the /var/lib/ABC directory of the host on which I run this command. Am I wrong?
You need to tell Docker where you want to mount your volume inside the container, using the format -v /path/to/source:/path/to/destination.
Try:
docker run -d -v /var/lib/ABC:/var/lib/ABC --name ABC_datastore busybox true
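If the goal is then to consume that data from the application container, a minimal sketch of the whole flow could look like this (ABC_img and ABC_ins are the names from the question):
docker run -d -v /var/lib/ABC:/var/lib/ABC --name ABC_datastore busybox true
docker run -d -i -t --volumes-from ABC_datastore --name ABC_ins ABC_img
docker exec ABC_ins ls /var/lib/ABC   # should now list the files from the host's /var/lib/ABC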

Related

How to get inside docker container to see the mounted volume?

I am trying to build a simple Dockerfile based on a Debian image.
Also, I want to mount my local volume inside the docker container.
The problem I have is: how do I get inside the container to see the mounted volume?
$docker run -d -it bash --mount type=bind,source="$(pwd)",target=/app docker_test:latest
43db16a76d50f1da0f8589c9ec460080ccef40122c9bc54abad3230dbbfe7885
I believe this 43db16a.. is the container id. But when I try to attach to this container id, I get an error message saying you cannot attach to a stopped container. What am I missing here?
It works if I do
docker run -d -it --name test_docker1 --mount type=bind,source="$(pwd)"/,target=/app docker_test:latest
and then
docker attach d6bd3cc6dc667e742d0bb3c7fbec58935046c1bf7a2e0b6806d48817082c05be
Also, it works when I do
$docker run --rm -ti --mount type=bind,source="$(pwd)"/,target=/app docker_test:latest
In another terminal do a docker ps, look for the container running your image and copy its id, then do a docker exec -ti <container-id> bash. There you have a bash terminal inside the container and you can check the mounted volume.
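A sketch of that flow, reusing the names from the question:
docker run -d --name test_docker1 --mount type=bind,source="$(pwd)",target=/app docker_test:latest
docker ps                            # confirm the container is running and note its id/name
docker exec -ti test_docker1 bash    # open a bash shell inside the running container
ls /app                              # the bind-mounted host directory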

Difference between --volume and --volumes-from in Docker volumes

What is the exact difference between the two volume-related flags of docker run, -v and --volumes-from? It seems to me that they do the same job. Consider the following scenario.
First, let's create a volume named myvol using the command:
$ docker volume create myvol
Now create and run a container named c1 that uses myvol and get into its bash:
$ docker run -it --name c1 -v myvol:/data nginx bash
Let's create a file test.txt in the mounted directory of the container:
root@766f90ebcf37:/# touch /data/test.txt
root@766f90ebcf37:/# ls /data
test.txt
Using the -v flag:
Now create another container named c2 that also uses myvol:
$ docker run -it --name c2 -v myvol:/data nginx bash
As expected, the newly created container c2 also has access to all the files in myvol:
root@393418742e2c:/# ls /data
test.txt
Now doing the same thing with --volumes-from
Creating a container named c3 using volumes from container c1
$ docker run -it --name c3 --volumes-from c1 nginx bash
This results in the same thing in c3:
root@27eacbe25f92:/# ls /data
test.txt
The point is: if -v and --volumes-from work the same way, i.e. to share data between containers, then why are they different flags, and what can --volumes-from do that -v cannot?
The point is: if -v and --volumes-from work the same way, i.e. to share data between containers
-v and --volumes-from do not work the same way, but both of them let you share data between containers.
what can --volumes-from do that -v cannot?
E.g. it can mount another container's volumes without you having to know how those volumes are named or at which paths they are mounted. You can also append a suffix to the container name to set permissions, such as :ro or :rw.
More details here - Mount volumes from container (--volumes-from) section
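For example, a sketch reusing the c1 container from the question, mounting its volumes read-only:
docker run -it --name c4 --volumes-from c1:ro nginx bash
ls /data            # test.txt is visible
touch /data/a.txt   # fails with a read-only file system error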
The other question is: what can -v do that --volumes-from cannot?
E.g. it can mount named volumes, host directories, or tmpfs (the last of which cannot be shared between containers). With --volumes-from you cannot share data with the host directly.
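A few quick examples of those forms (a sketch; myvol is the volume from the question, and note that tmpfs needs the --tmpfs or --mount flag rather than -v):
docker run --rm -v myvol:/data busybox ls /data        # named volume
docker run --rm -v "$(pwd)":/data busybox ls /data     # bind-mount a host directory
docker run --rm --tmpfs /data busybox ls /data         # tmpfs mount, not shareable between containers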
The conclusion is: the purpose of --volume is to share data with the host (here you can find more use cases).
The purpose of --volumes-from is to share data between containers.
They both work nicely together.

How to re-mount a docker volume without overriding existing files?

When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$(pwd)/local":/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to circumvent that overhead for editing only one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
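A shorter variant of the same lookup, as a sketch, assuming a Docker version whose inspect output exposes .Mounts:
docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/etc/gitlab-runner" }}{{ .Source }}{{ end }}{{ end }}' gitlab-runner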
So the question is:
Is there a way to re-mount volumes defined in an image? Or to get the directory more easily than with my one-liner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
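A small sketch of that initialization difference (the names r1, r2, mydata, and ~/empty-dir are made up for illustration):
docker run -d --name r1 -v mydata:/etc/gitlab-runner gitlab/gitlab-runner
docker run --rm -v mydata:/v busybox ls /v       # shows whatever the image (or its entrypoint) put at /etc/gitlab-runner
mkdir -p ~/empty-dir
docker run -d --name r2 -v ~/empty-dir:/etc/gitlab-runner gitlab/gitlab-runner
docker run --rm -v ~/empty-dir:/v busybox ls /v  # empty: bind mounts are not initialized from the image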
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
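For completeness, the docker cp variant (assuming the container is named gitlab-runner as above, and that config.toml is the file you want):
docker cp gitlab-runner:/etc/gitlab-runner/config.toml ./config.toml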
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task into another container that you then start with the --volumes-from option, when that is possible in your case.
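For example, a sketch of editing the config through a helper container rather than touching /var/lib/docker directly (gitlab-runner is the container name used earlier):
docker run --rm -it --volumes-from gitlab-runner busybox vi /etc/gitlab-runner/config.toml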
Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
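The example Dockerfile from that documentation section looks roughly like this (reproduced here for reference):
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol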
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.

How to get docker volumes from newly built image?

I am building a Docker image with an SQLite database using Jenkins, and I believe I want to build the database from the Dockerfile and have it stored in a volume so that I can export the volume separately. I start the build as docker build -t FOO ., but when I try to extract the volume data as:
docker run --rm --volumes-from FOO -v $(pwd):/backup busybox tar cvf /backup/backup.tar /opt/webapp
I get the error: No such container: FOO
This of course makes sense because FOO is not a container, it's an image. But how do I get a container identifier? I can't just read whatever Docker outputs, because I am batch running this in a Jenkins build.
I get the feeling I am going about this the wrong way. But what is the right way?
You need to run a container based on the FOO image:
docker run -d --name BAR FOO
And then you can access the volumes:
docker run --rm --volumes-from BAR ...
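Putting it together for the Jenkins case, a sketch (FOO and BAR are the names from the question and answer):
docker run -d --name BAR FOO
docker run --rm --volumes-from BAR -v $(pwd):/backup busybox tar cvf /backup/backup.tar /opt/webapp
docker rm -f BAR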
Run the container you want to back up and do the backup directly in that container (if the image's entrypoint has been modified, override it with something like --entrypoint /bin/sh):
docker run --rm -v $(pwd):/backup FOO tar cvf /backup/backup.tar /opt/webapp
Or, if you must run your backup in a different container (e.g. backup utilities aren't included), you only need to create the FOO container, not run it:
docker create --name foo-vol FOO
docker run --rm --volumes-from foo-vol -v $(pwd):/backup \
busybox tar cvf /backup/backup.tar /opt/webapp
docker rm -v foo-vol

Docker shared volume not working as described in the documentation

I am learning Docker, and according to the documentation a shared data volume is only destroyed when the last container holding a reference to it is removed with the -v flag. Nevertheless, this is not the behaviour I saw in my initial tests.
From the documentation:
Managing Data in Containers
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers.
I did the following:
1. docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2. docker run -d --volumes-from dbdata --name db1 ubuntu:14.04 /bin/bash
3. Created some files in the /dbdata directory
4. Exited the db1 container
5. docker run -d --volumes-from dbdata --name db2 ubuntu:14.04 /bin/bash
6. I could access the files created in step 3 and create some new files
7. Exited the db2 container
8. docker run -d --volumes-from dbdata --name db3 ubuntu:14.04 /bin/bash
9. I could access the files created in steps 3 and 6 and create some new files
10. Exited the db3 container
11. Removed all containers without the -v flag
12. Created the dbdata container again, but the data was not there.
As stated in the user manual:
This allows you to upgrade, or effectively migrate data volumes between containers.
I wonder what I am doing wrong.
You are doing nothing wrong. In step 12, you are creating a new container with the same name. It has a different volume, which initially is empty.
Maybe the following example can illustrate what is happening (ids and paths may vary on your system or in other Docker versions):
$ docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
7c23cc1e6637e29f36c6cdd4c1461f6e1742b201e05227279ac3db55328da674
Run a container that has a volume /dbdata and give it the name dbdata. The Id is returned (your Id will be different).
Now lets inspect the container and print the "Volumes" information:
$ docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e]
We can see that your /dbdata volume is located at /var/lib/docker/vfs/dir/248641...
Let's create some new data inside the container's volume:
$ docker run --rm --volumes-from dbdata ubuntu:14.04 /bin/bash -c "echo fuu >> /dbdata/test"
And check if it is available
$ docker run --rm --volumes-from dbdata -it ubuntu:14.04 cat /dbdata/test
fuu
Afterwards, you delete the container without the -v flag:
$ docker rm dbdata
The dbdata container (with id 7c23cc1e6637) is gone; however, the volume data is still present on your filesystem, as you can see if you inspect the folder:
$ cat /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e/test
fuu
(Please note: if you use the -v flag and delete the container with docker rm -v dbdata, the files of the volume on your host filesystem will be deleted, and the above cat command would result in a No such file or directory message or similar.)
Finally, in step 12, you start a new container with a different volume and give it the same name: dbdata.
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2500731848fd6f2093243da3be064db79e76d731904e6f5349c3f00f054e5f8c
Inspection yields a different volume, which is initially empty.
docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/faffba00358060024026412203a1562125f73d2bdd69a2202483e858dda04740]
If you want to re-use the volume, you have to create a new container and import/restore the data from the filesystem into the data container. In your case, you should not delete the data container in the first place, as you want to reuse the volume from it.
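A sketch of what such a restore could look like, copying the old volume contents (the vfs path from the first inspect above; the path will differ on your system) into the new dbdata container's volume:
docker run --rm --volumes-from dbdata -v /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e:/old busybox \
cp -a /old/. /dbdata/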
