How to move an application based on two linked docker containers to another host? - docker

I'm looking to move an application based on two linked containers to a different host. The first container comes from the official MySQL image, the second from the official WordPress image, and the WordPress container is linked to the MySQL container.
These containers have evolved over time, with different templates and data, and I'd like to migrate them to another host. Right now, if I stop the containers I only have to issue a docker start mysql and a docker start wp and all the context (links, ports, config, whatever...) is maintained. I don't have to specify which ports I expose or which links are in place.
What I would expect, though I don't know whether Docker offers it, is to:
export the containers
move them to the new host
import the containers on the new host
on the new host, issue a docker start mysql and a docker start wp
Is this possible? If not, what would be the way to get the exact same infrastructure up and running on another host?
I have tried using export/import and save/load. In both cases what you get back is an image, not a container.
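One approach that would fit this workflow, sketched below under the assumption that the containers are simply named mysql and wp and that WordPress publishes port 80: commit each container to an image, save the images to a tarball, copy it over, load it on the new host, and re-create the containers there. Links, port mappings and volume data do not travel with the images, so they have to be re-specified (and volumes backed up) by hand.

# On the old host: freeze each container's filesystem into an image
docker commit mysql mysql-migrated
docker commit wp wp-migrated
docker save mysql-migrated wp-migrated > app-images.tar

# Copy the tarball (and any volume data) to the new host
scp app-images.tar newhost:/tmp/

# On the new host: load the images and re-create the containers,
# re-specifying the links and ports that docker start used to remember
docker load < /tmp/app-images.tar
docker run -d --name mysql mysql-migrated
docker run -d --name wp --link mysql:mysql -p 80:80 wp-migrated

After this, docker start mysql and docker start wp behave on the new host just as they did on the old one.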

Related

Assign hostnames to exposed docker ports

Okay, so in Vagrant/VVV you can assign different hostnames to your different projects, so when you go to http://myproject-1.dev your website shows up.
This is very convenient if you are working on dozens of projects at the same time. As far as I know, such a thing is not possible in Docker (it can't touch the hosts file). My question is: is there something similar we can do in Docker? Some automated tool, maybe?
I'm using Docker for Windows.
Hostnames can map many containers together; in Docker Compose there's a hostname option, but that only applies within the Docker bridge network and is not visible to the host.
Docker isn't a VM (although on Windows it runs within one).
You can edit your hosts file to make the hypervisor reachable, but the intended approach is to forward host ports into the containers.
Use localhost, not a hostname.
If you prefer your Vagrant patterns, keep using Vagrant, but provision Docker containers from it, or use Docker Machine.
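A concrete sketch of that localhost/port-forwarding approach, with project names and ports invented for illustration: publish a distinct host port per project and, if you still want the Vagrant-style hostnames, point them at 127.0.0.1 in the Windows hosts file yourself.

# Publish each project's container port 80 on a different host port
docker run -d --name myproject-1 -p 8081:80 myproject-1-image
docker run -d --name myproject-2 -p 8082:80 myproject-2-image

# Browse to http://localhost:8081 and http://localhost:8082, or add entries like
# "127.0.0.1 myproject-1.dev" to C:\Windows\System32\drivers\etc\hosts
# and use http://myproject-1.dev:8081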

Putting databases in their own Docker containers?

I run a complex app with a database backend and many other things all in one container. I notice that Docker images for different database systems are available. When would I want to move something like a DB server to its own container, instead of running everything in the same container? The advantage I have now is that I can deploy everything at once, and I don't have to configure more than one container to get things talking.
Docker (the container manager) uses Linux container technology to provide process isolation; running multiple processes in a single container is a bad idea. Use one container to isolate each process, and use a data volume container to store the database data, since container state is not persistent by default.
Use docker-compose (formerly fig) to wire the two containers, db and web app, together; it will ease your management in the future!
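A minimal sketch of that split using plain docker commands; the image and container names are generic placeholders rather than the question's actual app.

# Data-only volume container so the database files outlive the db container
docker run --name dbdata -v /var/lib/mysql busybox true

# Database in its own container, reusing the volume from dbdata
docker run -d --name db --volumes-from dbdata -e MYSQL_ROOT_PASSWORD=secret mysql

# Application in a second container, linked to the database
docker run -d --name webapp --link db:db -p 80:80 my-webapp-image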

Docker linked containers, Docker Networks, Compose Networks - how should we now 'link' containers

I have an existing app that consists of 4 Docker containers running on the same host. They have been linked together using container links.
However, after some upgrades of Docker, the link behaviour has been deprecated and, it seems, changed. We are now having issues where containers lose their links to each other.
So Docker says to use the new networks feature instead of linked containers, but I can't see how this works.
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?
Or is the hosts file updated with the correct container names / IP addresses? Even after a docker restart?
I can't see in the docs how a container can find the location of another in its network?
Also, Compose looks to have a simple setup for linking containers and may automate some of this - would Compose be the way to go for defining multi-container apps? Or is it too soon to run it in production?
Does compose support multiple host configuration as well?
At some point in the future we will probably need to move one of the containers to a different host....
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?
No, you would now use the container names as their hostnames. The new network feature has no idea which ports will be used. Think of it as 2 computers plugged into the same network hub: both can address the other by its hostname.
Is the hosts file updated with the correct container names / IP addresses? Even after a docker restart?
Yes, the /etc/hosts file of every container that is part of a network is updated live by the Docker engine.
I can't see in the docs how a container can find the location of another in its network?
Using the container name. See the Connect containers section of the Work with network commands doc:
Once connected, the containers can communicate using another container’s IP address or name.
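A minimal sketch of that behaviour using the docker network commands; the network name, image names, and password are invented for the example.

# Create a user-defined network and attach two containers to it
docker network create mynet
docker run -d --name db --net mynet -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web --net mynet my-web-image

# From inside "web", the database container is reachable simply as "db"
docker exec web ping -c 1 db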
Also, Compose looks to have a simple setup for linking containers and may automate some of this - would Compose be the way to go for defining multi-container apps? Or is it too soon to run it in production?
Compose supports the new network feature as a beta through the --x-networking option. You should not use it in production yet (the current Compose version is 1.5).
Furthermore, the current implementation is a bit inconvenient: you must use the full container name, which is composed of the project name + _ + container name + _1. The documentation says the next version (the current one is 1.5) will improve this so that you should not have to worry about the project name to address containers.
Does compose support multiple host configuration as well?
Yes, in conjunction with Swarm, as detailed in the overlay network documentation.
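A rough sketch of that multi-host direction, assuming a Swarm of that era is already set up (with the key-value store it required); the network and image names are placeholders.

# Create an overlay network that spans the Swarm hosts
docker network create --driver overlay multihost-net

# Containers scheduled on different hosts can then join the same network
docker run -d --name db --net multihost-net -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web --net multihost-net my-web-image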

Replace Docker container with external service

I'm just getting started with Docker. I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit that host directly and have Docker be transparent, with port 3306 (or whatever port you're running MySQL on) forwarding requests to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients on another machine can run things against a service in Docker as if the service were running directly on the machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is calling out to RDS or another host), you should be able to make a new, simple mysql image that just has the MySQL client installed and forwards requests to RDS.
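What this describes is essentially the ambassador pattern: a small container that answers to the alias mysql and relays traffic to the real endpoint. A minimal sketch using socat as the relay; the RDS endpoint and image names are placeholders, and installing socat in an alpine container is only one way to get a forwarder.

# "mysql" ambassador container that relays port 3306 to the external RDS endpoint
docker run -d --name mysql alpine sh -c \
  "apk add --no-cache socat && socat TCP-LISTEN:3306,fork,reuseaddr TCP:mydb.example.us-east-1.rds.amazonaws.com:3306"

# Apache keeps its existing link; it still sees a container aliased "mysql" on port 3306
docker run -d --name apache --link mysql:mysql -p 80:80 my-apache-image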

How to design Docker routing and database Layer?

I have a few Django apps that I want to host on a single Docker host running CentOS. I want to have 3 layers:
network
application
database
network: I want to have an nginx container in the network layer routing requests to the different containers in the application layer. I plan to use a 1:1 port mapping on this container to expose port 80 on the host. Nginx will direct requests to the appropriate app in the application layer, running on ports 8001-8010.
application: I'll have several containers, each running a separate Django app under Gunicorn on ports 8001-8010.
database: one container running MySQL with a different database for each app. The MySQL container will have a data volume linked to it for persistence.
I understand you can link containers, but as I understand it, linking relies on the order in which the containers are started, i.e. how can you link nginx to several containers when they haven't been started yet?
So my questions are:
How do I connect the network layer to the application layer when the number/names of containers in the application layer are always changing? i.e. I might bring a new application online/offline. How would I update the nginx config, and what would the addressing look like?
How do I connect the application layer to the database layer? Do I have to use Docker linking? In my Django application code I need the hostname of the database to connect to. What would I put as the hostname of my Docker container? Would it be able to resolve?
Is there a reference architecture I could leverage?
1.) Docker does not support dynamic linking, but there are some tools that can do this for you; see this SO question.
2.) You could start your database container first and then link all application containers to it. Docker creates the container's hosts file at boot (statically; if your database container restarts and gets another IP, you need dynamic links, see above). When you link a container like this:
--link db:db
you can access that container with the hostname db.
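Putting the three layers together with plain docker commands might look roughly like this; the image names, ports, and volume path are assumptions made for illustration.

# Database layer: MySQL with a host volume for persistence
docker run -d --name db -v /srv/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql

# Application layer: each Django app linked to the database,
# so settings.py can use "db" as the database HOST
docker run -d --name app1 --link db:db my-django-app1   # gunicorn on 8001
docker run -d --name app2 --link db:db my-django-app2   # gunicorn on 8002

# Network layer: nginx linked to the apps, 1:1 mapping of port 80,
# with upstreams pointing at http://app1:8001 and http://app2:8002
docker run -d --name nginx -p 80:80 --link app1:app1 --link app2:app2 my-nginx-image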
I ended up using this solution:
https://github.com/blalor/docker-hosts
It allows you to refer to other containers on the same host by hostname. It is also dynamic, as the /etc/hosts file in each container gets updated as containers come and go.
