How to save a docker redis container

I'm having trouble creating an image of a docker redis container with the data in the redis database. At the moment I'm doing this:
docker pull redis
docker run --name my-redis -p 6379:6379 -d redis
redis-cli
127.0.0.1:6379> set hello world
OK
127.0.0.1:6379> save
OK
127.0.0.1:6379> exit
docker stop my-redis
docker commit my-redis redis_with_data
docker run --name my-redis2 -p 6379:6379 -d redis_with_data
redis-cli
127.0.0.1:6379> keys *
(empty list or set)
I'm obviously not understanding something pretty basic here. Doesn't docker commit create a new image from an existing container?
Okay, I've been doing some digging. The default redis image on Docker Hub uses a data volume, which is mounted at /data in the container. In order to share this volume between containers, you have to start the new container with the following argument:
docker run -d --volumes-from <name-of-container-you-want-the-data-from> \
--name <new-container-name> -p 6379:6379 redis
Note that the order of the arguments is important, otherwise docker run will fail silently.
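You can also confirm that the image itself declares this volume (the exact output format may vary between Docker versions, but it will be something like this):
docker inspect --format '{{ .Config.Volumes }}' redis
map[/data:{}]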
docker volume ls
will tell you which data volumes Docker has already created on your machine. I haven't yet found a way to give these volumes a readable name rather than a long random string.
I also haven't yet found a way to mount a data volume directly; so far I've only used the --volumes-from option.
Okay. I now have it working, but it's kludgy.
With
docker volume ls
docker volume inspect <id of docker volume>
you can find the path of the docker volume on the local file-system.
You can then mount this in a new container as follows:
docker run -d -v /var/lib/docker/volumes/<some incredibly long string>/_data:/data \
--name my-redis2 -p 6379:6379 redis
This is obviously not the way you're meant to do this. I'll carry on digging.
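A cleaner alternative, as a rough sketch: create a named volume yourself and mount it by name, so you never have to deal with the long random string. The volume name my-redis-data below is just an example.
docker volume create --name my-redis-data
docker run -d -v my-redis-data:/data --name my-redis2 -p 6379:6379 redis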
I've put everything I've discovered up to now in a blog post: my blog post on medium.com
Maybe it will be useful for somebody.

Data inside a Docker container is not persistent on its own: when the container is removed and recreated, the data is gone. To prevent this, you have to share a directory on the host machine with your container. When the container is recreated, it will pick up the data from that host directory.
You can read more about it in the Docker docs: https://docs.docker.com/engine/tutorials/dockervolumes/#data-volumes
From the redis container docs:
Run redis-server
docker run -d --name redis -p 6379:6379 dockerfile/redis
Run redis-server with persistent data directory. (creates dump.rdb)
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis
Run redis-server with persistent data directory and password.
docker run -d -p 6379:6379 -v <data-dir>:/data --name redis dockerfile/redis redis-server /etc/redis/redis.conf --requirepass <password>
Source:
https://github.com/dockerfile/redis
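With the official redis image, a rough equivalent would be the following; /docker/host/dir is a placeholder for a directory on the host, and --appendonly yes makes Redis persist its data into the mounted /data directory:
docker run -d --name my-redis -p 6379:6379 \
-v /docker/host/dir:/data \
redis redis-server --appendonly yes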

Using a data volume and sharing the RDB file manually is not ugly; that is actually what data volumes are designed for: separating data from the container.
But if you really need or want to save the data into the image and share it that way, you can just change the Redis working directory from the volume /data to somewhere else:
Option 1 is changing --dir when starting the Redis container:
docker run -d redis --dir /tmp
Then you can follow your original steps to create a new image. Note that only /tmp can be used with this method due to permission issues.
Option 2 is creating a new image with changed WORKDIR:
FROM redis
RUN mkdir /opt/redis && chown redis:redis /opt/redis
WORKDIR /opt/redis
Then run docker build -t redis-new-image . and use this image to do your job.
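A minimal sketch of the full flow with that image (the container and image names are just examples):
docker build -t redis-new-image .
docker run -d --name tmp-redis -p 6379:6379 redis-new-image
docker exec tmp-redis redis-cli set hello world
docker exec tmp-redis redis-cli save
docker stop tmp-redis
docker commit tmp-redis redis_with_data
Because /opt/redis is not declared as a volume, dump.rdb ends up in the container's filesystem and is therefore included in the committed image.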

Related

How to attach an existing mysql volume to a new container?

I pulled the mysql image and I want to run a container with it, but instead of creating a new volume I'd like to use my existing mysql-db volume (the mysql tables are inside that volume).
What is the easiest way to do that?
I ran this command, but after 5 seconds the container is no longer running:
docker run --name db-server -d -e MYSQL_ROOT_PASSWORD=aaaa -v /mysql-db:/var/lib/mysql mysql
docker run --name ganesh-mysql -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mypasswd -d mysql:latest
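The important difference from the command in the question is that mysql-data (no leading slash) refers to a Docker-managed named volume, whereas /mysql-db is a bind mount to a host directory. Assuming the named volume already exists, you can confirm it and find its mountpoint with:
docker volume inspect mysql-data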

Docker container to use same Nexus volume?

I run the following:
mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
chown -R 200 /Users/user.name/dockerVolume/nexus
docker run -d -p 8081:8081 --name nexus -v /some/dir/nexus-data:/nexus-data sonatype/nexus3
Now let's say I upload an artifact to Nexus, and then stop the nexus container.
If I want another Nexus container running on port 8082, what Docker command do I run so that it uses the same volume as the one on port 8081 (so that when I run this container, it already contains the artifact I uploaded before)?
Basically, I want both Nexus containers to use the same storage, so that if I upload an artifact to one port, the other port will also have it.
I ran this command, but it didn't seem to work:
docker run --name=nexus2 -p 8082:8081 --volumes-from nexus sonatype/nexus3
A bind mount, which is what you're using as a "volume", has limited functionality compared to an explicit Docker volume.
I believe the --volumes-from flag only works with volumes managed by Docker.
In order to share the volume between containers with this flag, you can have Docker create a named volume for you with your run command.
Example:
$ docker run -d -p 8081:8081 --name nexus -v nexus-volume:/nexus-data sonatype/nexus3
The above command will create a Docker managed volume for you with the name nexus-volume. You can view the details of the created volume with the command $ docker volume inspect nexus-volume.
Now when you want to run a second container with the same volume, you can use the --volumes-from flag as you desire.
So doing:
$ docker run --name=nexus2 -p 8082:8081 --volumes-from nexus sonatype/nexus3
Should give you your desired behaviour.
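Alternatively (a sketch that reuses the nexus-volume name from above), you can skip --volumes-from and mount the same named volume by name in the second container:
docker run -d --name=nexus2 -p 8082:8081 -v nexus-volume:/nexus-data sonatype/nexus3
Keep in mind that two Nexus instances writing to the same data directory at the same time may conflict, so it is safest to run only one of them at a time.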

Why do I need to add a hostname to docker when creating a volume?

I'm creating a rabbitmq container with the -v option to add a volume. The weird part is that if I don't add --hostname, the container doesn't pick up the data from the volume. For example:
I create a volume like this:
docker volume create --name rabbit
Then I verify that the volume was created:
docker volume ls
Then I create the container like this:
docker run --name rabbitprueba -P -p 55555:15672 -d -v rabbit:/var/lib/rabbitmq rabbitmq:3.6.10-management
I browse to localhost:55555, log in with the user and password, and create a simple queue. Then I go back to my machine and stop and remove the container:
docker stop rabbitprueba
docker rm rabbitprueba
When I run the same command again:
docker run --name rabbitprueba -P -p 55555:15672 -d -v rabbit:/var/lib/rabbitmq rabbitmq:3.6.10-management
The queue that I created is gone, but if I repeat the same steps (stop and remove the container) and add --hostname to the command, the queue is not removed:
docker run --hostname rabbitprueba --name rabbitprueba -P -p 55555:15672 -d -v rabbit:/var/lib/rabbitmq rabbitmq:3.6.10-management
Why is this happening? Am I doing something wrong?
You are doing nothing wrong, but you are assuming the problem is with Docker. The problem is how RabbitMQ saves its data.
When you launch a RabbitMQ container using the command below:
docker run -it rabbitmq:latest
you will notice a line like this in the docker logs:
Database directory at /var/lib/rabbitmq/mnesia/rabbit@51267ba4cc9f is empty. Initialising from scratch...
On the next run:
Database directory at /var/lib/rabbitmq/mnesia/rabbit@5e9c67b4d6ed is empty. Initialising from scratch...
So you can see that it creates a directory based on the hostname. Now if I run:
docker run -it --hostname mymq rabbitmq
And the log would show
Database directory at /var/lib/rabbitmq/mnesia/rabbit@mymq is empty. Initialising from scratch...
So that is what is happening here. It is not a problem with the volume, just the way RabbitMQ works. You can pin this directory name using an environment variable, like below:
docker run -it -e "RABBITMQ_NODENAME=mq@localhost" rabbitmq
And logs would now show
Database directory at /var/lib/rabbitmq/mnesia/mq@localhost is empty. Initialising from scratch...
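Applying that to the command from the question, a sketch that keeps the database directory name stable without relying on --hostname would be:
docker run --name rabbitprueba -P -p 55555:15672 -d \
-e "RABBITMQ_NODENAME=rabbit@localhost" \
-v rabbit:/var/lib/rabbitmq rabbitmq:3.6.10-management
The data then always lands in /var/lib/rabbitmq/mnesia/rabbit@localhost inside the rabbit volume, regardless of the random container hostname.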

docker run creating a new data volume for each run

I would like to persist some configuration data from a container and am following the tutorial on data volumes.
I'm successfully running the app with:
docker run -it --privileged -v /app/config -p 8083:8083 myapp-ubuntu:2.2.2
Where -v /app/config is the directory inside the container that contains the config that should survive a container restart.
Running the container also creates a volume in /var/lib/docker/volumes.
# ls /var/lib/docker/volumes
5e60d70dc15bcc53aa13cfd84507b5758842c7743d43da2bfa2fc121b2f32479
However, if I kill the container and rerun it, no data is persisted and a new volume is created in /var/lib/docker/volumes:
# ls /var/lib/docker/volumes
5e60d70dc15bcc53aa13cfd84507b5758842c7743d43da2bfa2fc121b2f32479 (FIRST RUN)
82de3aa910bc38157a6dc20a516b770bd0264860ae83093d471212f69960d02a (SECOND RUN)
I would have expected these to be the steps for persisting the data. Am I missing something here?
I think you can solve it with named volumes:
docker run -it --privileged -v some_named_volume:/app/config -p 8083:8083 myapp-ubuntu:2.2.2
Or you can bake the configuration into the image with a Dockerfile COPY directive, as sketched below.
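A minimal sketch of the Dockerfile approach, assuming the configuration files live in a local config/ directory next to the Dockerfile (the image tag is just an example):
FROM myapp-ubuntu:2.2.2
COPY config/ /app/config/
Build it with docker build -t myapp-ubuntu:config . and run that image without the -v flag; the configuration is then baked into the image itself.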

docker shared volume not working as described in the documentation

I am now learning docker, and according to the documentation a shared data volume is only destroyed when the last container holding a reference to it is removed with the -v flag. Nevertheless, in my initial tests this is not the behaviour I saw.
From the documentation:
Managing Data in Containers
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers.
I did the following:
1. docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2. docker run -d --volumes-from dbdata --name db1 ubuntu:14.04 /bin/bash
3. Created some files in the /dbdata directory
4. Exited the db1 container
5. docker run -d --volumes-from dbdata --name db2 ubuntu:14.04 /bin/bash
6. Could access the files created in step 3 and created some new files
7. Exited the db2 container
8. docker run -d --volumes-from dbdata --name db3 ubuntu:14.04 /bin/bash
9. Could access the files created in steps 3 and 6 and created some new files
10. Exited the db3 container
11. Removed all containers without the -v flag
12. Created the dbdata container again, but the data was not there.
As stated in the user manual:
This allows you to upgrade, or effectively migrate data volumes between containers.
I wonder what I am doing wrong.
You are doing nothing wrong. In step 12, you are creating a new container with the same name. It has a different volume, which initially is empty.
Maybe the following example can illustrate what is happening (IDs and paths will vary on your system and across Docker versions):
$ docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
7c23cc1e6637e29f36c6cdd4c1461f6e1742b201e05227279ac3db55328da674
Run a container that has a volume /dbdata and give it the name dbdata. The ID is returned (yours will be different).
Now let's inspect the container and print the "Volumes" information:
$ docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e]
We can see that your /dbdata volume is located at /var/lib/docker/vfs/dir/248641...
Let's create some new data inside the container's volume:
$ docker run --rm --volumes-from dbdata ubuntu:14.04 /bin/bash -c "echo fuu >> /dbdata/test"
And check if it is available
$ docker run --rm --volumes-from dbdata -it ubuntu:14.04 cat /dbdata/test
fuu
Afterwards you delete the containers, without the -v flag.
$ docker rm dbdata
The dbdata container (with ID 7c23cc1e6637) is gone; however, its volume is still present on your filesystem, as you can see if you inspect the folder:
$ cat /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e/test
fuu
(Please note: if you use the -v flag and delete the container with docker rm -v dbdata, the files of the volume on your host filesystem will be deleted, and the above cat command would result in a No such file or directory message or similar.)
Finally, in step 12, you start a new container with a different volume and give it the same name: dbdata.
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2500731848fd6f2093243da3be064db79e76d731904e6f5349c3f00f054e5f8c
Inspection yields a different volume, which is initially empty.
docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/faffba00358060024026412203a1562125f73d2bdd69a2202483e858dda04740]
If you want to re-use the volume, you have to create a new container and import/restore the data from the filesystem into the data container; a sketch of such a backup/restore is shown below. In your case, you should not delete the data container in the first place, as you want to reuse its volume.
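A sketch of that backup/restore pattern from the same documentation page (run the backup before removing the original dbdata container; names and paths follow the example above):
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu:14.04 tar cvf /backup/dbdata.tar /dbdata
docker run -d -v /dbdata --name dbdata2 ubuntu:14.04 echo Data-only container for postgres
docker run --rm --volumes-from dbdata2 -v $(pwd):/backup ubuntu:14.04 bash -c "cd / && tar xvf /backup/dbdata.tar"
The first command archives the volume of the existing dbdata container into dbdata.tar in the current host directory; the last one unpacks it into the volume of the fresh dbdata2 container.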
