I am trying to create a busybox Docker container to save the logs of my Rails application, including the nginx and unicorn logs. To create that container, I use the following command:
docker run --name app-logs -v /logs busybox /bin/sh
However, the created container exits immediately with exit code 0:
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND     CREATED         STATUS                     PORTS   NAMES
75e2f2efdc77   busybox   "/bin/sh"   6 seconds ago   Exited (0) 4 seconds ago           app-logs
The command docker logs gives no output, and I can't figure out what the problem is.
Thanks in advance.
You need to run docker in the foreground if you want to use a shell.
-t=false : Allocate a pseudo-tty
-i=false : Keep STDIN open even if not attached
So
docker run -ti --name app-logs -v /logs busybox /bin/sh
Data + Volumes
If you want to keep what's called a data volume container, you need at least one container that holds a reference to the volume. There's no need to keep it running; an exited container is still saved on your system (docker ps -a).
docker run --name app-logs -v /logs busybox /bin/true
Then you can mount the data container's volume from your app containers:
docker run -d --volumes-from app-logs --name app busybox ruby yourapp.rb
The other way to achieve the same thing is to store the data on the host by mounting a host directory into each container:
docker run --name app -v /logs:/logs busybox ruby yourapp.rb
I've found storing data on the host, outside of Docker, to be beneficial when Docker's storage has issues: I can blow all the Docker data away, start again, and easily keep and re-mount all my stateful/app data that lives on the host.
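A rough sketch of that recovery workflow, re-using the illustrative command above:

docker run --name app -v /logs:/logs busybox ruby yourapp.rb
# Docker's local storage gets into a bad state: remove the containers
docker rm -f app
# /logs on the host is untouched, so a fresh container simply re-mounts it
docker run --name app -v /logs:/logs busybox ruby yourapp.rb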
Related
I run the following:
mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
chown -R 200 /Users/user.name/dockerVolume/nexus
docker run -d -p 8081:8081 --name nexus -v /some/dir/nexus-data:/nexus-data sonatype/nexus3
Now let's say I upload an artifact to Nexus and then stop the nexus container.
If I want another Nexus container running on port 8082, what Docker command do I run so that it uses the same volume as the one on port 8081 (so that when I run this container, it already contains the artifact I uploaded before)?
Basically, I want both Nexus containers to use the same storage, so that if I upload an artifact to one port, the other port will also have it.
I ran this command, but it didn't seem to work:
docker run --name=nexus2 -p 8082:8081 --volumes-from nexus sonatype/nexus3
A bind mount, which is what you're using as a "volume", has limited functionality compared to an explicit Docker volume.
I believe the --volumes-from flag only works with volumes managed by Docker.
To share the volume between containers with this flag, you can have Docker create a named volume for you as part of your run command.
Example:
$ docker run -d -p 8081:8081 --name nexus -v nexus-volume:/nexus-data sonatype/nexus3
The above command will create a Docker managed volume for you with the name nexus-volume. You can view the details of the created volume with the command $ docker volume inspect nexus-volume.
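The output will look roughly like this (the exact fields vary between Docker versions, and the Mountpoint is just what a default install typically reports):

[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/nexus-volume/_data",
        "Name": "nexus-volume",
        "Options": null,
        "Scope": "local"
    }
]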
Now when you want to run a second container with the same volume, you can use the --volumes-from flag as you intended.
So doing:
$ docker run --name=nexus2 -p 8082:8081 --volumes-from nexus sonatype/nexus3
should give you the desired behaviour.
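Equivalently, since nexus-volume is a Docker-managed named volume, you could skip --volumes-from and mount it by name in the second container (a sketch using the same names as above):

$ docker run --name=nexus2 -p 8082:8081 -v nexus-volume:/nexus-data sonatype/nexus3

Both containers then read and write the same /nexus-data contents.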
I'm having trouble creating an image of a docker redis container with the data in the redis database. At the moment I'm doing this:
docker pull redis
docker run --name my-redis -p 6379:6379 -d redis
redis-cli
127.0.0.1:6379> set hello world
OK
127.0.0.1:6379> save
OK
127.0.0.1:6379> exit
docker stop my-redis
docker commit my-redis redis_with_data
docker run --name my-redis2 -p 6379:6379 -d redis_with_data
redis-cli
127.0.0.1:6379> keys *
(empty list or set)
I'm obviously not understanding something pretty basic here. Doesn't docker commit create a new image from an existing container?
Okay, I've been doing some digging. The default redis image on Docker Hub uses a data volume, which is mounted at /data in the container. To share this volume between containers, you have to start a new container with the following arguments:
docker run -d --volumes-from <name-of-container-you-want-the-data-from> \
--name <new-container-name> -p 6379:6379 redis
Note that the order of the arguments is important, otherwise docker run will fail silently.
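With the container names from the question above, that would be something like:

docker run -d --volumes-from my-redis --name my-redis2 -p 6379:6379 redis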
docker volume ls
will tell you which data volumes Docker has already created on your computer. I haven't yet found a way to give these volumes a friendly name rather than a long random string.
I also haven't yet found a way to mount a data volume directly, other than using the --volumes-from flag.
Okay, I now have it working, but it's kludgey.
With
docker volume ls
docker volume inspect <id of docker volume>
you can find the path of the docker volume on the local file-system.
You can then mount this in a new container as follows:
docker run -d -v /var/lib/docker/volumes/<some incredibly long string>/_data:/data \
--name my-redis2 -p 6379:6379 redis
This is obviously not the way you're meant to do this. I'll carry on digging.
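A cleaner alternative (a sketch, with my-redis-data as an arbitrary volume name) is to create a named volume up front and mount it by name, which avoids the long random string entirely:

docker volume create --name my-redis-data
docker run -d -v my-redis-data:/data --name my-redis -p 6379:6379 redis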
I've put everything I've discovered up to now in a blog post: my blog post on medium.com
Maybe it will be useful for somebody.
Data stored inside a Docker container is not persistent: when the container is removed, the data is gone. To prevent this, you have to share a directory on the host machine with your container. When a new container mounts that directory, it picks the data back up.
You can read more about it in the Docker docs: https://docs.docker.com/engine/tutorials/dockervolumes/#data-volumes
From the redis container docs:
Run redis-server
docker run -d --name redis -p 6379:6379 dockerfile/redis
Run redis-server with persistent data directory. (creates dump.rdb)
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis
Run redis-server with persistent data directory and password.
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis redis-server /etc/redis/redis.conf --requirepass <password>
Source:
https://github.com/dockerfile/redis
Using a data volume and sharing the RDB file manually is not ugly; in fact, that's exactly what data volumes are designed for: separating the data from the container.
But if you really need/want to save the data into the image and share it that way, you can change the redis working directory from the /data volume to somewhere else:
Option 1 is to change --dir when starting the redis container:
docker run -d redis --dir /tmp
Then you can follow your original steps to create a new image. Note that only /tmp can be used with this method, due to a permission issue.
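For example, combined with the steps from the question (the container name my-redis-tmp is just illustrative), the flow could look like:

docker run -d --name my-redis-tmp -p 6379:6379 redis --dir /tmp
redis-cli set hello world
redis-cli save
docker stop my-redis-tmp
docker commit my-redis-tmp redis_with_data

When you run the committed image, start it with the same --dir /tmp argument so redis loads the saved dump.rdb from /tmp.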
Option 2 is creating a new image with changed WORKDIR:
FROM redis
RUN mkdir /opt/redis && chown redis:redis /opt/redis
WORKDIR /opt/redis
Then run docker build -t redis-new-image . and use this image to do your job.
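For example (the container name my-redis3 is illustrative):

docker run -d --name my-redis3 -p 6379:6379 redis-new-image

From there, the set/save/commit steps from the question should work, because dump.rdb is now written to /opt/redis, which (unlike /data) is part of the container's filesystem and is therefore captured by docker commit.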
When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$(pwd)/local":/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to avoid that overhead for editing just one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? OR to somehow get the directory easier than my oneliner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. The first I call an anonymous volume (it shows up as a very long uuid when you run docker volume ls). The second is a host volume, or bind mount, that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
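To see the named-volume behaviour concretely (runner-config is just an example name):

docker run -d --name test3 -v runner-config:/etc/gitlab-runner gitlab/gitlab-runner
docker exec test3 ls /etc/gitlab-runner/   # should show certs and config.toml, like test1
docker volume inspect runner-config        # shows where Docker keeps the data on the host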
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
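For reference, the docker cp variant (using the test1 container from the question) would be something like:

docker cp test1:/etc/gitlab-runner/config.toml ~/etc/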
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task in another container that you then start with the --volumes-from option, when that's possible in your case.
I'm not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
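For reference, the example Dockerfile that part of the documentation refers to looks roughly like this:

FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol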
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.
I have been trying to set up a graph database using orientdb, so I tried using volumes with the following command:
docker run -d -p 2424:2424 -p 2480:2480 -v config:/orientdb/config -v database:/orientdb/databases -v backup:/orientdb/backup -e ORIENTDB_ROOT_PASSWORD=mypasswdhere orientdb:latest
My main motive for using volumes was to keep the database data after I kill the container.
But I have been using this command frequently to start the server, and it has now hogged my disk space, so I guess it creates a new copy each time the command is executed.
Can someone point out the correct way to reuse existing volumes (and their stored data) in Docker, and how to clean up the redundant data created by running this command repeatedly?
You can create named volumes with docker volume create
$ docker volume create --name hello
$ docker run -d -v hello:/world busybox ls /world
That way, only one volume in /var/lib/docker/volumes will be used each time you launch that container.
See also "Mount a shared-storage volume as a data volume".
In the meantime, to remove dangling volumes:
docker volume ls -qf "dangling=true" | xargs docker volume rm
As far as I understand, you aren't re-using the container; instead, you start a new one each time.
After the first run, you can stop it and then restart it with the docker stop/start commands.
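For example (use whatever container ID docker ps -a reports for the orientdb container):

docker ps -a
docker stop <container-id>
docker start <container-id>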
I am learning Docker, and according to the documentation a shared data volume should only be destroyed when the last container holding a reference to it is removed with the -v flag. Nevertheless, in my initial tests this is not the behaviour I saw.
From the documentation:
Managing Data in Containers
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers.
I did the following:
1. docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2. docker run -d --volumes-from dbdata --name db1 ubuntu:14.04 /bin/bash
3. Created some files in the /dbdata directory
4. Exited the db1 container
5. docker run -d --volumes-from dbdata --name db2 ubuntu:14.04 /bin/bash
6. Could access the files created in step 3 and created some new files
7. Exited the db2 container
8. docker run -d --volumes-from dbdata --name db3 ubuntu:14.04 /bin/bash
9. Could access the files created in steps 3 and 6 and created some new files
10. Exited the db3 container
11. Removed all the containers without the -v flag
12. Created the dbdata container again, but the data was not there
As stated in the user manual:
This allows you to upgrade, or effectively migrate data volumes between containers.
I wonder what I am doing wrong.
You are doing nothing wrong. In step 12, you are creating a new container with the same name. It has a different volume, which initially is empty.
Maybe the following example can illustrate what is happening (IDs and paths will vary on your system and across Docker versions):
$ docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
7c23cc1e6637e29f36c6cdd4c1461f6e1742b201e05227279ac3db55328da674
This runs a container that has a volume /dbdata and gives it the name dbdata. The container ID is returned (yours will be different).
Now let's inspect the container and print the "Volumes" information:
$ docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e]
We can see that your /dbdata volume is located at /var/lib/docker/vfs/dir/248641...
Let's create some new data inside the container's volume:
$ docker run --rm --volumes-from dbdata ubuntu:14.04 /bin/bash -c "echo fuu >> /dbdata/test"
And check that it is available:
$ docker run --rm --volumes-from dbdata -it ubuntu:14.04 cat /dbdata/test
fuu
Afterwards, you delete the container without the -v flag:
$ docker rm dbdata
The dbdata container (with ID 7c23cc1e6637) is gone; however, the volume's data is still present on your filesystem, as you can see if you inspect the folder:
$ cat /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e/test
fuu
(Please note: if you delete the container with the -v flag, i.e. docker rm -v dbdata, the volume's files on your host filesystem will be deleted, and the above cat command will result in a No such file or directory message or similar.)
Finally, in step 12, you start a new container with a different volume and give it the same name: dbdata.
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2500731848fd6f2093243da3be064db79e76d731904e6f5349c3f00f054e5f8c
Inspection yields a different volume, which is initially empty.
docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/faffba00358060024026412203a1562125f73d2bdd69a2202483e858dda04740]
If you want to re-use the volume, you have to create a new container and import/restore the data from the filesystem into the data container. In your case, you should not delete the data container in the first place, as you want to reuse the volume from it.
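A rough sketch of such a restore, assuming the old volume's files are still sitting under the vfs path shown earlier (the placeholder ids stand for the directories reported by docker inspect):

# create a fresh data container; it gets a new, empty volume
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
# find out where the new volume lives on disk
docker inspect --format "{{ .Volumes }}" dbdata
# copy the old data from the orphaned volume directory into the new one
cp -a /var/lib/docker/vfs/dir/<old-volume-id>/. /var/lib/docker/vfs/dir/<new-volume-id>/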