Does anyone know the best practice for sharing a database between containers in Docker?
What I mean is: I want to create multiple containers in Docker, and these containers will execute CRUD operations on the same database with the same identity.
So far, I have two ideas. One is to create a separate container that runs only the database. The other is to install the database directly on the host machine where Docker is installed.
Which one is better? Or is there another best practice for this requirement?
Thanks
It is hard to answer a 'best practice' question, because it's largely a matter of opinion, and opinions are off topic on Stack Overflow.
So I will give a specific example of what I have done in a serious deployment.
I'm running ELK (Elasticsearch, Logstash, Kibana). It's containerised.
For my data stores, I have storage containers. These storage containers pass through a local filesystem:
docker create -v /elasticsearch_data:/elasticsearch_data --name ${HOST}-es-data base_image /bin/true
I'm also using etcd and confd, to dynamically reconfigure my services that point at the databases. etcd lets me store key-values, so at a simplistic level:
CONTAINER_ID=`docker run -d --volumes-from ${HOST}-es-data elasticsearch-thing`
ES_IP=`docker inspect $CONTAINER_ID | jq -r .[0].NetworkSettings.Networks.dockernet.IPAddress`
etcdctl set /mynet/elasticsearch/${HOST}-es-0 ${ES_IP}
Because we register it in etcd, we can then use confd to watch the key-value store for changes and rewrite and restart our other container services accordingly.
I'm using haproxy for this sometimes, and nginx when I need something a bit more complicated. Both these let you specify sets of hosts to 'send' traffic to, and have some basic availability/load balance mechanisms.
That means I can be pretty lazy about restarting/moving/adding Elasticsearch nodes, because the registration process updates the whole environment. A mechanism similar to this is what's used for OpenShift.
So to specifically answer your question:
DB is packaged in a container, for all the same reasons the other elements are.
Volumes for DB storage are storage containers passing through local filesystems.
'finding' the database is done via etcd on the parent host, but otherwise I've minimised my install footprint. (I have a common 'install' template for docker hosts, and try and avoid adding anything extra to it wherever possible)
It is my opinion that the advantages of docker are largely diminished if you're reliant on the local host having a (particular) database instance, because you've no longer got the ability to package-test-deploy, or 'spin up' a new system in minutes.
(In the above example, I have literally rebuilt the whole thing in 10 minutes, and most of that was the docker pull transferring the images.)
It depends. A useful thing to do is to keep the database URL and password in an environment variable and provide that to Docker when running the containers. That way you will be free to connect to a database wherever it may be located. E.g. running in a container during testing and on a dedicated server in production.
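As a minimal sketch of that approach (the image name and variable names below are hypothetical):

```shell
# Connection details are supplied at run time, so the same image can point
# at a test database locally and a dedicated server in production.
export DATABASE_URL="postgres://db.internal:5432/myapp"
export DATABASE_PASSWORD="s3cret"

# Pass both variables through to the container (hypothetical image name):
docker run -d -e DATABASE_URL -e DATABASE_PASSWORD my-app-image

# Inside the container, the application just reads the environment:
sh -c 'echo "connecting to ${DATABASE_URL}"'
```

Switching environments then only requires changing the variables, not rebuilding the image.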
The best practice is to use Docker Volumes.
Official doc: Manage data in containers. This doc details how to deal with a DB and containers. The usual way of doing so is to put the DB into a data container (which is really a volume rather than a container); the other containers can then access this DB container (the volume) to CRUD (or more) the data.
Random article on "Understanding Docker Volumes"
Edit: I won't go into more detail, as the other answer is well made.
Related
As far as I understand, each Docker installation has some kind of unique ID. I can see it by executing docker system info:
$ docker system info
// ... a lot of output
ID: UJ6H:T6KC:YRIL:SIDL:5DUW:M66Y:L65K:FI2F:MUE4:75WX:BS3N:ASVK
// ... a lot of output
The question is: is it possible to get this ID from inside the container (by executing code inside the container), without mapping any volumes, etc.?
Edit:
Just to clarify the use case (based on the comments): we're sending telemetry data from Docker containers to our backend. We need to identify which containers share the same host. This ID would help us achieve that (it's a kind of machine ID). If there's any other way to identify the host, that would solve the issue as well.
No - unless you explicitly inject that information into the container (volumes, COPY, an environment variable, an ARG passed at build time and persisted in a file, etc.), or you fetch it via a GET request for example, that information is not available inside Docker containers.
You may open a console inside a container and search all files for that ID with grep -rnw '/' -e 'the-ID', but nothing will match the search.
On the other hand, any breakout from the container to the host would be a real security concern.
Edit, to answer the update on your question:
The docker host has visibility on the containers that are running. A much better approach would be to send the information you need from the host level rather than container level.
You could still send data directly from the containers and use the container ID, which is known inside the container, and correlate this telemetry with the data sent from the Docker host.
Yet another option, which is even better in my opinion, is to send that telemetry data to the stdout of the container. This info can then easily be collected on the Docker host via the logging driver and sent to the telemetry backend.
Often the hostname of the container is the container ID (not the ID you're talking about, but the ID you would use for e.g. docker container exec), so it's a fine identifier.
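A quick way to check this from inside any container, with no volumes or extra wiring (assuming the default hostname wasn't overridden with --hostname):

```shell
# Inside a container, the hostname defaults to the short container ID:
cat /etc/hostname
hostname

# So a telemetry payload could simply embed it:
echo "{\"container_id\": \"$(hostname)\"}"
```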
I am a total newbie with Docker. Unfortunately, I have made a change: I set a few environment variables from the GUI, and it astonishingly caused container re-creation! All PostgreSQL DBs have been lost.
So, two questions:
Why did it happen?
Is there a way to roll back? (There were no backups or anything else.)
There is a fairly broad set of changes that require deleting and recreating containers. As you've discovered, this includes changing environment variables; it also includes changing published ports, host-mapped directories, and the image underneath the container. In turn, the image will change if there's ever any sort of security update, software patch release, or just a new application build.
In short: deleting Docker containers is very common and you need to make sure the data gets preserved.
The standard way to do this is to mount some additional storage into the container. Docker provides a named volume system, but named volumes can be opaque and hard to manage; it's often easier to bind mount a host directory. (N.B.: the linked documentation advocates for named volumes; IME host directories are easier to inspect and manage with readily available non-Docker tools.) You need to look at each image's documentation to know where to attach the storage, but for the standard postgres image it is /var/lib/postgresql/data (see "Where To Store Data" at the end of the linked page). In plain Docker you could run
docker run \
-d \
-p 5432:5432 \
-v "$PWD/postgres:/var/lib/postgresql/data" \
postgres:11
but there's presumably a setting for that in your GUI tool.
Your previous data is probably lost. Docker doesn't keep snapshots of containers, and deleting a container actually deletes it and its underlying data. You still need to do things like take backups of your data in case Docker or some other part of your system fails.
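As a sketch of such a backup (the container name my-postgres, the user, and the database name are hypothetical), you can dump the database from the host with docker exec:

```shell
# Dump the database to a dated file on the host; container, user and
# database names here are hypothetical.
BACKUP_FILE="backup-$(date +%Y%m%d).sql"
docker exec my-postgres pg_dump -U postgres mydb > "$BACKUP_FILE"
```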
Let's say you are trying to dockerise a database (couchdb for example).
Then there are at least two assets you consider volumes for:
database files
log files
Let's further say you want to keep the db-files private but want to expose the log-files for later processing.
As far as I understand the documentation, you have two options:
First option
define managed volumes for both, log- and db-files within the db-image
import these into a second container (you will get both) and work with the logs
Second option
create data container with a managed volume for the logs
create the db-image with a managed volume for the db-files only
import logs-volume from data container when running db-image
Two questions:
Are both options really valid/possible?
What is the better way to do it?
br volker
The answer to question 1 is that yes, both are valid and possible.
My answer to question 2 is that I would consider a different approach entirely; which one to choose depends on whether or not this is a mission-critical system where data loss must be avoided.
Mission critical
If you absolutely cannot lose your data, then I would recommend that you bind mount a reliable disk into your database container. Bind mounting is essentially mounting a part of the Docker Host filesystem into the container.
So taking the database files as an example, you could imagine these steps:
Create a reliable disk, e.g. an NFS share, that is backed up on a regular basis
Attach this disk to your Docker host
Bind mount this disk into your database container, which then writes database files to this disk.
So following the above example, let's say I have created a reliable disk that is shared over NFS and mounted on my Docker Host at /reliable/disk. To use that with my database I would run the following Docker command:
docker run -d -v /reliable/disk:/data/db my-database-image
This way I know that the database files are written to reliable storage. Even if I lose my Docker Host, I will still have the database files and can easily recover by running my database container on another host that can access the NFS share.
You can do exactly the same thing for the database logs:
docker run -d -v /reliable/disk/data/db:/data/db -v /reliable/disk/logs/db:/logs/db my-database-image
Additionally you can easily bind mount these volumes into other containers for separate tasks. You may want to consider bind mounting them as read-only into other containers to protect your data:
docker run -d -v /reliable/disk/logs/db:/logs/db:ro my-log-processor
This would be my recommended approach if this is a mission critical system.
Not mission critical
If the system is not mission critical and you can tolerate a higher potential for data loss, then I would look at the Docker volume API, which is used precisely for what you want to do: managing and creating volumes for data that should live beyond the lifecycle of a container.
The nice thing about the docker volume command is that it lets you create named volumes, and if you name them well it can be quite obvious to people what they are used for:
docker volume create db-data
docker volume create db-logs
You can then mount these volumes into your container from the command line:
docker run -d -v db-data:/db/data -v db-logs:/logs/db my-database-image
These volumes will survive beyond the lifecycle of your container and are stored on the filesystem of your Docker host. You can use:
docker volume inspect db-data
to find out where the data is being stored, and back up that location if you want to.
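For scripting that, docker volume inspect accepts a --format template, so a backup sketch (the volume name db-data comes from the commands above) could be:

```shell
# Resolve the named volume's host directory, then archive its contents.
MOUNTPOINT=$(docker volume inspect --format '{{ .Mountpoint }}' db-data)
tar czf db-data-backup.tar.gz -C "$MOUNTPOINT" .
```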
You may also want to look at something like Docker Compose which will allow you to declare all of this in one file and then create your entire environment through a single command.
I am in the situation of running a simple PHP7.0, Redis and NGINX server in a single container.
This means I run php7.0-fpm, nginx, and redis as services.
But in the best practices I am reading:
# Run only one process per container
In almost all cases, you should only run a single process in a single container.
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers.
If that service depends on another service, make use of container linking.
Does this mean that it would be best to run one container with PHP7.0 and the application and another with nginx and another with redis?
#nwinkler in the comments is right; the recommendation is there for good reason. A couple of advantages of decoupling applications into multiple containers:
Build time
It is true that Docker does a hash check and does not rebuild image layers if nothing has changed, but this is limited by the layer structure (if layer X changes, all layers above X will be rebuilt). This means it starts getting painful as your images get bigger.
Containers are isolated
When you are attached to your nginx container, you can be pretty sure that any changes you make are not going to cause changes in your PHP container, and that's always a good practice.
Scalability
You need ten more Redis instances? Good, let's run ten more Redis containers.
In general, I would go for a Dockerfile for a base image in any scenario; in your case, one containing whatever all three of your containers (php, redis & nginx) share (third-party libs, tools, etc.). Then three Dockerfiles for building each image, and a bash script or docker-compose.yml for running the images inside containers.
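As a sketch of the run side in docker-compose.yml (the image tags, port, and host paths below are assumptions to adapt):

```yaml
version: "2"
services:
  php:
    image: php:7.0-fpm          # or your own image built from the shared base image
    volumes:
      - ./app:/var/www/html
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./app:/var/www/html
    depends_on:
      - php
  redis:
    image: redis
```

docker-compose up -d then starts all three, and each service can be scaled independently (e.g. docker-compose scale redis=10).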
I'm a bit confused about data-only docker containers. I read it's a bad practice to mount directories directly to the source-os: https://groups.google.com/forum/#!msg/docker-user/EUndR1W5EBo/4hmJau8WyjAJ
And I get how I make data-only containers: http://container42.com/2014/11/18/data-only-container-madness/
And I see somewhat similar question like mine: How to deal with persistent storage (e.g. databases) in docker
But what if I have a LAMP server setup, and I have everything nicely set up with data containers, not linking them 'directly' to my source OS, and make a backup once in a while...
Then someone comes by and restarts my server. How do I set up my Docker (data-only) containers again so I don't lose any data?
Actually, even though it was shykes who said it was considered a "hack" in that link you provide, note the date. Several eons worth of Docker years have passed since that post about volumes, and it's no longer considered bad practice to mount volumes on the host. In fact, here is a link to the very same shykes saying that he has "definitely used them at large scale in production for several years with no issues". Mount a host OS directory as a docker volume and don't worry about it. This means that your data persists across docker restarts/deployments/whatever. It's right there on the disk of the host, and doesn't go anywhere when your container goes away.
I've been using docker volumes that mount host OS directories for data storage (database persistent storage, configuration data, et cetera) for as long as I've been using Docker, and it's worked perfectly. Furthermore, it appears shykes no longer considers this to be bad practice.
Docker containers will persist on disk until they are explicitly deleted with docker rm. If your server restarts you may need to restart your service containers, but your data containers will continue to exist and their volumes will be available to other containers.
docker rm alone doesn't remove the actual data (which lives on in /var/lib/docker/vfs/dir)
Only docker rm -v would clear out the data as well.
The only issue is that, after a docker rm, a new docker run would re-create an empty volume in /var/lib/docker/vfs/dir.
In theory, you could redirect the new volume folders to the old ones with a symlink, but that supposes you noted which volumes were associated with which data container... before the docker rm.
It's worth noting that the volumes you create with "data-only containers" are essentially still directories on your host OS, just in a different location (/var/lib/docker/...). One benefit is that you get to label your volumes with friendly identifiers and thus you don't have to hardcode your directory paths.
The downside is that administrative work like backing up specific data volumes is a bit of a hassle now since you have to manually inspect metadata to find the directory location. Also, if you accidentally wipe your docker installation or all of your docker containers, you'll lose your data volumes.