Migrating dockerized redis to another server - docker

I am new to both docker and redis. I have configured serverA to run redis inside docker. The redis database has been pre-seeded with a thousand key/value pairs. I can confirm the data has been persisted in this container. I then created a new docker image from this container and uploaded it to my docker repository.
On serverB, I pulled the "redis-preseeded" image and got it started. When I connect with the redis-cli tool and issue the 'info keyspace' command, the keyspace is empty, suggesting none of the data made it across. What am I doing wrong?

Are you using the official image for Redis?
https://hub.docker.com/_/redis/
Dockerfile for the redis:3.2.0-alpine release
It has a volume declared:
...
VOLUME /data
WORKDIR /data
...
Volumes are described in the documentation. In a nutshell, the authors are configuring Redis to store its data on an external disk volume that is maintained by the Docker engine. Not only is this more efficient, it also lets users keep the data separate from the container using it.
To labour the point: a snapshot of the container (a committed image) will contain none of the volume data (if you think about it, this is a good thing).
Once you understand these concepts then it becomes more obvious how data can be moved between servers:
Backup, Restore and Migrate data volumes
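For example, here is a minimal sketch of moving the volume data itself (it assumes the container on serverA is named redis-preseeded and that Redis has saved its data to /data; file and container names are hypothetical):

# on serverA: archive the contents of the /data volume
docker run --rm --volumes-from redis-preseeded -v "$(pwd)":/backup alpine tar czf /backup/redis-data.tar.gz -C /data .
# copy redis-data.tar.gz to serverB (e.g. with scp), then on serverB:
docker create --name redis redis:3.2.0-alpine
docker run --rm --volumes-from redis -v "$(pwd)":/backup alpine tar xzf /backup/redis-data.tar.gz -C /data
docker start redis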

When does Docker Redis store on a volume?

Does it store data on the volume on every command? I want to know when the Redis data actually gets backed up.
Redis persists data to disk based on its configuration. There are config options to persist after every command or periodically, and in different formats (RDB snapshots or an AOF append-only log).
To understand how Redis data gets backed up, please go through this link:
https://redis.io/docs/management/persistence/.
In your case, since Redis is running inside a Docker container, make sure to mount a volume that lives outside the container. Otherwise the data will be lost when the container is removed.
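For example, a minimal sketch (the image tag, volume name, and save thresholds are assumptions):

docker volume create redis-data
docker run -d --name redis -v redis-data:/data redis:3.2-alpine redis-server --save 60 1000 --appendonly yes
# --save 60 1000 takes an RDB snapshot when at least 1000 keys change within 60 seconds
# --appendonly yes additionally logs every write to an append-only file in /data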

Deploy a docker app using volume create

I have a Python app using a SQLite database (it's a data collector that runs daily by cron). I want to deploy it, probably on AWS or Google Container Engine, using Docker. I see three main steps:
1. Containerize and test the app locally.
2. Deploy and run the app on AWS or GCE.
3. Back up the DB periodically and download it to a local archive.
Recent posts (on Docker, StackOverflow and elsewhere) say that since 1.9, Volumes are now the recommended way to handle persisted data, rather than the "data container" pattern. For future compatibility, I always like to use the preferred, idiomatic method, however Volumes seem to be much more of a challenge than data containers. Am I missing something?
Following the "data container" pattern, I can easily:
Build a base image with all the static program and config files.
From that image create a data container image and copy my DB and backup directory into it (simple COPY in the Dockerfile).
Push both images to Docker Hub.
Pull them down to AWS.
Run the data and base images, using "--volumes-from" to refer to the data.
Using "docker volume create":
I'm unclear how to copy my DB into the volume.
I'm very unclear how to get that volume (containing the DB) up to AWS or GCE... you can't PUSH/PULL a volume.
Am I missing something regarding Volumes?
Is there a good overview of using Volumes to do what I want to do?
Is there a recommended, idiomatic way to backup and download data (either using the data container pattern or volumes) as per my step 3?
When you first use an empty named volume, it receives a copy of the image's data at the mount point (unlike a host-based volume, which completely overlays the mount point with the host directory). So you can initialize the volume contents in your main image, upload that image to your registry, and pull it down to your target host. On that host, create a named volume and point your container at it, and the volume will be populated with your initial data. It really is two commands at most: docker volume create <vol-name> and docker run -v <vol-name>:/mnt <image> (docker-compose makes these last steps even easier).
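As a sketch, a docker-compose.yml along these lines wires the named volume up for you (version 2 syntax; image and volume names are hypothetical):

# docker-compose.yml
version: "2"
services:
  app:
    image: me/myapp
    volumes:
      - app-data:/mnt
volumes:
  app-data: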
Retrieving the data from a container-based volume or a named volume is an identical process: you mount the volume in a container and run an export/backup to your outside location. The only difference is the command line: instead of --volumes-from <container-id> you use -v <vol-name>:/mnt. You can use the same process to import data into the volume as well, removing the need to initialize the app image with data in its volume.
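A minimal sketch of the export and import (the volume name and archive path are hypothetical):

# back up the named volume to the current directory
docker run --rm -v app-data:/mnt -v "$(pwd)":/backup alpine tar czf /backup/app-data.tar.gz -C /mnt .
# import data into the volume with the same pattern in reverse
docker run --rm -v app-data:/mnt -v "$(pwd)":/backup alpine tar xzf /backup/app-data.tar.gz -C /mnt
# for a container-based volume, swap "-v app-data:/mnt" for "--volumes-from <container-id>"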
The biggest advantage of the new process is that it clearly separates data from containers. You can purge all the containers on the system without fear of losing data, and every volume listed on the system has a meaningful name rather than a randomly assigned one. Lastly, named volumes can be mounted anywhere on the target, and you can pick and choose which volumes to mount if you have multiple data sources (e.g. config files vs databases).

Appropriate use of Volumes - to push files into container?

I was reading Project Atomic's guidance for images, which states that the two main use cases for using a volume are:
sharing data between containers
when writing large files to disk
I have neither of these use cases in my example using an Nginx image. I intended to mount a host directory as a volume in the path of the Nginx docroot in the container. This is so that I can push changes to the website's contents onto the host rather than addressing the container. I feel this approach is easier since I can, for example, just add my ssh key to the host once.
My question is, is this an appropriate use of a data volume and if not can anyone suggest an alternative approach to updating data inside a container?
One of the primary reasons for using Docker is to isolate your app from the server. This means you can run your container anywhere and get the same result. This is my main use case for it.
If you look at it from that point of view, having your container depend on files on the host machine for a deployed environment is counterproductive: running the same container on a different machine may produce different output.
If you do NOT care about that, and are just using docker to simplify the installation of nginx, then yes you can just use a volume from the host system.
Think about this though...
#Dockerfile
FROM nginx
# bake the site contents into the image at nginx's default docroot
COPY . /usr/share/nginx/html

#docker-compose.yml
web:
  build: .
You could then use docker-machine to connect to your remote server and deploy a new version of your software with two easy commands:
docker-compose build
docker-compose up -d
even better, you could do
docker build -t me/myapp .
docker push me/myapp
and then deploy with
docker pull me/myapp
docker run -d me/myapp
There are a number of ways to update data in containers. Host volumes are a valid approach and probably the simplest way to make your data available.
You can also copy files into and out of a container from the host. You may need to commit afterwards if you plan to stop and remove the running web host container.
docker cp /src/www webserver:/www
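If you do need to keep that change, the commit step might look like this (container and image names are hypothetical):

docker commit webserver me/webserver:updated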
You can copy files into a docker image at build time from your Dockerfile, which is the same process as above (copy and commit). Then restart the webserver container from the new image.
COPY src/www /www
But I think the host volume is a good choice.
docker run -v /src/www:/www webserver command
Docker data containers are also an option for mounted volumes but they don't solve your immediate problem of copying data into your data container.
If you ever find yourself thinking "I need to ssh into this container", you are probably doing it wrong.
I'm not sure I fully understand your request, but why do you need to do that to push files into the Nginx container?
My suggestion, and what Docker recommends, is to manage the volume in a separate docker container.
Data volumes
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.
Data volumes can be shared and reused among containers.
Changes to a data volume are made directly.
Changes to a data volume will not be included when you update an image.
Data volumes persist even if the container itself is deleted.
refer: Manage data in containers
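A minimal sketch of that data-volume-container pattern (works on reasonably recent Docker; names and paths are hypothetical):

# create a data container exposing the docroot as a volume
docker create -v /usr/share/nginx/html --name webdata busybox
# copy the site contents into the data container's volume
docker cp ./src/www/. webdata:/usr/share/nginx/html
# run nginx with the volume mounted from the data container
docker run -d --name web --volumes-from webdata -p 80:80 nginx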
As said, one of the main reasons to use docker is to always achieve the same result. A best practice is to use a data-only container.
With docker inspect <container_name> you can find the path of the volume on the host and update the data manually, but this is not recommended;
or you can retrieve data from an external source, like a git repository
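For example, the inspect lookup mentioned above might look like this (Docker 1.8+ template syntax; the container name is hypothetical):

docker inspect -f '{{ range .Mounts }}{{ .Source }} {{ end }}' web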

What's the correct way to take automatic backups of docker volume containers?

I am in the process of building a simple web application using NodeJS that persists data to a MySQL database and saves images that have been uploaded to it. With my current setup, I have 4 Docker containers: 1 for the NodeJS application, 1 for the MySQL server, 1 volume container for the MySQL data, and 1 volume container for the uploaded files.
What I would like to do is come up with a mechanism where I can periodically take backups of both volume containers automatically without stopping the web application.
Is it possible to do this and if so, what's the best way?
I have looked at the Docker Documentation on Volume management that covers backing up and restoring volumes, but I'm not sure that would work while the application is still writing data to the database or saving uploaded files.
To back up your database, I suggest using mysqldump, which is safer than a simple file copy.
For the volume backup you can also run a container, mount the volume, and tar the contents together.
In both cases you can use additional containers or process injection via docker exec.
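A minimal sketch of both approaches (the container names and the /uploads path are assumptions; the official mysql image sets MYSQL_ROOT_PASSWORD in the container's environment):

# dump the database without stopping anything, via docker exec
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql
# archive the uploads volume with a throwaway container
docker run --rm --volumes-from uploads-data -v "$(pwd)":/backup alpine tar czf /backup/uploads.tar.gz -C /uploads .

Both commands run from the host, so they can be scheduled with cron for automatic periodic backups.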

Mysql installed and persisting data in docker images

I am a newbie to Docker. I created a docker image with Java and MySQL installed in it. I tried running a normal Java application by copying it into the image, and it ran successfully with the expected result. After that I tried to run a Java application that uses a MySQL database: I created a database, executed the program, and it ran successfully with the expected output.
But when I stopped that container and tried to run it again, it required me to create a new database with the same name, and my existing data was lost. So every time it starts a fresh MySQL instance, and I need to create the database and tables again to store the data. Is there any way to save my data, so that every time I run the docker image it still has the previous data stored in the same database?
If you're just starting out with docker, I would recommend mounting a local directory in your container for the database data. This assumes you'll only be running your application on a single machine, but it is easier than creating separate containers for your data. To do this, you would do something like this:
# Add VOLUMEs to allow backup of config and databases
VOLUME ["/etc/mysql", "/var/lib/mysql"]
and do
$ docker run -v /path/to/local/etc:/etc/mysql -v /path/to/local/db:/var/lib/mysql your-image
That said, running mysql in docker is a decidedly non-trivial first step in the docker world. This is a good example:
https://github.com/tutumcloud/tutum-docker-mysql
There is a difference between an image and a container.
If you want to save the changes you make in your container to a new image, you should use
docker commit <container name or id> <optional image name>
If you just want to relaunch your container for more modifications before committing, use:
docker start <container name or id>
docker attach <container name or id>
To see a list of all containers:
docker ps -a
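Put together, the cycle might look like this (container and image names are hypothetical). One caveat: as noted earlier in this thread, docker commit does not capture data stored in a declared VOLUME, so this only works if your MySQL data lives in the container's writable layer.

docker ps -a                        # find your stopped container
docker start mydb                   # restart it; its writable layer (and data) is intact
docker commit mydb me/mydb:seeded   # bake the current state into a new image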
You have two ways of managing data in containers:
Data volumes: A data volume is a specially-designated directory within one or more containers that bypasses the Union File System to provide several useful features for persistent or shared data
Data volume containers: If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named Data Volume Container, and then to mount the data from it.
For more info see official user guide: https://docs.docker.com/userguide/dockervolumes/
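A quick sketch of the two options (the names and password are placeholders; the official mysql image requires MYSQL_ROOT_PASSWORD):

# option 1: a named data volume
docker volume create mysql-data
docker run -d --name db1 -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql
# option 2: a data volume container shared via --volumes-from
docker create -v /var/lib/mysql --name dbdata busybox
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret --volumes-from dbdata mysql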
