Docker Engine Swarm: Access Network from "outside" swarm

I'm working with a Docker (1.12) Engine Swarm with two nodes running in a swarm "bundle":
redis
my node.js application
They are both connected through their own network: docker network create -d overlay antispam
I use redis as an in-memory database, and I have another node.js application that pushes new data to this database at (usually) one-hour intervals. The data pusher is a short-lived application that takes some text files, parses them, and pushes their content to redis. There is no "schedule short-running containers" feature in Docker Swarm yet, so I need to run my application as a "normal" Docker container and thus need to access redis from "outside" of the swarm.
The obvious solutions would be:
... to make the redis port public, but that's an ugly solution.
... to convert the short-running app into a long-running one that usually just sleeps and wakes up every now and then. Call me old-school, but I don't like wasting resources on a process that just sleeps.
... to use RabbitMQ or MySQL (available as services in my "non-docker" environment) as a data relay: write my data there and have the application in the swarm read it out again. That would add another step of work.
Is there any other option to access the network of those two containers from outside?
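For reference, the "make the port public" option would look roughly like this (a sketch only; the exact service command is an assumption based on my setup above, not something I have deployed):

    # publish redis on the swarm routing mesh so it is reachable from outside the swarm
    docker service create --name redis --network antispam -p 6379:6379 redis
    # the short-lived pusher (image name hypothetical) then connects via any node's IP
    docker run --rm -e REDIS_HOST=<node-ip> my-data-pusher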

Related

Is it possible to switch port binding between docker containers without downtime?

Scenario:
There is a container running with image version 1.0, with container port 8080 exposed on host port 80. A new version of the image is available, and those versions need to be switched. No orchestration tool (Kubernetes, OpenShift, etc.) is running.
Is it possible to start a container with version 1.1, make sure it runs without problems, and then move the port binding over to it?
Please keep in mind that I want to keep it simple: no replication, etc.
Simply a Docker container with its port bound to localhost.
Questions:
1. Is it possible to switch the exposed port between containers without downtime?
2. If not, is there any mechanism in Docker (free edition) to do such a switch?
Without downtime, you'd need a second replica of the service up and running, and a proxy in front of that service that listens for user requests and routes from one to the other. Both Swarm Mode and Kubernetes provide this capability with similar tools: the exposed port is indirectly connected to the app via either an application reverse proxy, or some iptables rules and ipvs entries in the kernel.
Out of the box, recent versions of docker include support for Swarm Mode with nothing additional to install. You can run a simple docker swarm init to start a single node swarm cluster in less than a second. And then instead of docker-compose up you switch to docker stack deploy -c docker-compose.yml $stack_name to manage your projects with almost the same compose file. For swarm mode, you'll want to be on version 3 of the compose file syntax.
For a v3 syntax compose file in swarm mode that has no outage on an update, you'll want healthchecks defined in your image to monitor the application and report back when it's ready to receive requests. Then you'll want a deploy section of the compose file to either have multiple replicas for HA, or at least configure a single replica to have a "start-first" policy to ensure the new service is up before stopping the old one. See the compose docs for settings to adjust: https://docs.docker.com/compose/compose-file/#update_config
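A minimal sketch of such a compose file (the image, port, and healthcheck command are assumptions for illustration, not a definitive recipe):

    version: "3.7"
    services:
      web:
        image: myapp:1.1                # hypothetical application image
        ports:
          - "80:8080"
        healthcheck:                    # tells swarm when the container is ready for traffic
          test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
          interval: 10s
          timeout: 3s
          retries: 3
        deploy:
          replicas: 1
          update_config:
            order: start-first          # bring the new task up before stopping the old one
            failure_action: rollback

Deploying with docker stack deploy -c docker-compose.yml mystack (and re-running the same command for each new image tag) then replaces the running task only after the new one reports healthy.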
For an application-based reverse proxy in docker, I really do like traefik, but more to allow me to run multiple http-based container services with a single port opened. It lets me map requests to the right container based on hostname/path/http headers, while also providing features to migrate between versions by weighting which backend to use, so you can do more than simple round-robin load balancing during an upgrade.
There is no mechanism native to Docker that would allow you to replace one container with another with no interruption. On the other hand, the duration of the interruption can probably be measured in milliseconds; whether or not this is really an issue for you depends entirely on your application.
You can get the behavior you want by introducing a dynamic reverse proxy such as Traefik into your configuration. The proxy binds to host ports and handles requests from remote systems, then distributes those requests to one or more backend containers.
You can create and remove backend containers as you please, and as long as at least one is running your application will be available. For your specific use case, this means that you can start the new version of your application first, then retire the old one, all without any interruption in service.
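As a rough illustration of that reverse-proxy approach with Traefik (hostname, backend image, and label syntax are assumptions and may differ between Traefik versions):

    version: "3.7"
    services:
      traefik:
        image: traefik:v2.10
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
        ports:
          - "80:80"                     # the only port bound on the host
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      app:
        image: myapp:1.1                # hypothetical backend
        labels:
          - traefik.enable=true
          - traefik.http.routers.app.rule=Host(`app.example.com`)
          - traefik.http.services.app.loadbalancer.server.port=8080

Backend containers can then be started and removed at will; as long as at least one matching container is running, Traefik keeps routing requests to it.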

Can all docker swarm instances run on same machine?

I have a couple of Docker swarm questions (sorry for not splitting them up, but they are all closely related):
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers as regular?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service; in other words, the scale of a service can be greater than the number of actual machines. I have one testing swarm with a single machine and one with three, and they work the same way.
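A quick way to try this on a single machine (the image and replica counts are just for illustration):

    docker swarm init                                   # turn this machine into a one-node swarm
    docker service create --name web --replicas 3 -p 8080:80 nginx
    docker service ls                                   # shows 3/3 replicas, all on this node
    docker service scale web=5                          # scale beyond the number of machines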
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
What is the key difference between swarm mode and just running a number of containers as regular?
The key difference, afaik, is that when a task goes down, Docker automatically starts another one. You can also easily scale your services, which means you can have multiple tasks just by scaling your service up or down. With a plain container, when it goes down you have to start another one manually.
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of wildfly servers in a swarm, which are on the same network. I'm not sure about others, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link atm.
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them, I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
#namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to only run on a single machine, then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed over TCP/UDP.
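A small sketch of what that looks like in practice (image names are hypothetical):

    docker network create -d overlay backend
    docker service create --name api --network backend my-api-image
    docker service create --name web --network backend my-web-image
    # tasks of "web" reach the API over TCP as http://api:<port> via swarm DNS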
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that provide an HTTP response... I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :).
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes a container running something like Portainer/SwarmPit or controlling the Docker socket through a bind-mount or TCP can control the whole Swarm. This is how all docker management works :)
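For example, Portainer is typically started with the Docker socket bind-mounted into it (data volume omitted for brevity; the image name has changed across versions):

    docker run -d -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      portainer/portainer-ce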
For reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/

Putting databases in their own Docker containers?

I run a complex app with a database backend and many other things all in one container. I notice that Docker images for different database systems are available. When would I want to move something like a DB server to its own container, instead of running everything in the same container? The advantage I have now is that I can deploy everything at once, and I don't have to configure more than one container to get things talking.
Docker (or any container manager) uses Linux container technology to provide an abstraction. Running multiple processes in a single Docker container is a bad idea; use a container to isolate one process, and use a Docker volume (or data volume container) to store the database data (container state is not persistent by default).
Use docker-compose (formerly fig) to connect the two containers, db and web app; it will ease your management in the future!
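A minimal docker-compose.yml along those lines might look like this (image names and the connection string are assumptions):

    version: "2"
    services:
      db:
        image: postgres:13
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - db-data:/var/lib/postgresql/data   # keep database state in a named volume
      web:
        image: my-web-app                      # hypothetical application image
        environment:
          DATABASE_URL: postgres://postgres:example@db:5432/postgres
        depends_on:
          - db
    volumes:
      db-data: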

Recommended way to run a Docker Compose stack in production?

I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production - to have the whole stack begin on boot, and for containers to restart or be recreated if they crash. There aren't any volumes I care about and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using specifically docker-compose in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot up, but I don't think this is the way to do it with Docker containers, as they end up being ultimately supervised by the Docker daemon?
As a simple start I am thinking to just put restart: always on all my services and make an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case I don't have need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client tool, but when you run docker-compose up -d all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once and Docker takes care of the rest.
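In compose-file terms that is simply (service and image names are placeholders):

    services:
      web:
        image: my-django-app          # hypothetical image
        restart: unless-stopped       # restarted after crashes and host reboots, unless manually stopped

provided the Docker daemon itself is enabled to start at boot.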
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find more information about using docker-compose in production in the documentation. But, as they mention, Compose is primarily aimed at development and testing environments.
If you want to run your containers in production, I would suggest using a suitable container orchestration tool, such as Kubernetes.
If you can organize your Django application as a swarmkit service (Docker 1.11+), you can orchestrate the execution of your application with tasks.
Swarmkit has a restart policy (see swarmctl flags)
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
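With the regular docker CLI (rather than swarmctl), the equivalent restart policy can be expressed roughly like this; the image name and values are only illustrative:

    # restart failed tasks, at most 3 times within a 120s window, 5s apart
    docker service create --name web \
      --restart-condition on-failure \
      --restart-delay 5s \
      --restart-max-attempts 3 \
      --restart-window 120s \
      my-django-app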
You say that you use AWS, so why not use ECS, which is built for exactly what you're asking? You create an application, which is the pack of your five containers. You configure which EC2 instance types and how many instances you want in your cluster.
You just have to convert your docker-compose.yml to the specific Dockerrun.aws.json format, which is not hard.
AWS will start your containers when you deploy and also restart them in case of a crash.
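For a sense of the conversion, a stripped-down Dockerrun.aws.json (version 2, the multi-container format that runs on ECS) for one of the compose services might look like this; names, memory, and ports are placeholders:

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "web",
          "image": "my-django-app",
          "essential": true,
          "memory": 256,
          "portMappings": [
            { "hostPort": 80, "containerPort": 8000 }
          ]
        }
      ]
    }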

How could one use Docker Compose to synchronize container execution?

How could one use Docker Compose to synchronize container execution?
The problem I'm trying to solve is similar to Docker Compose wait for container X before starting Y. I use Docker Compose to launch several containers, all running on the same host, three of which are PostgreSQL, Liquibase, and a Web application servlet running in Tomcat. The PostgreSQL and Web application containers are both long running while the Liquibase container is ephemeral. The containers must not only start in order, but each container must also wait for the preceding container to be available or complete. In particular, the PostgreSQL server must be ready to process SQL commands before the Liquibase container runs, and the Liquibase schema migration task must complete before the Web application starts to ensure that the database schema is in a valid state.
I understand that I can achieve this synchronization using two wrapper "wait-for" scripts that poll for certain conditions (and this may be the only available option), the first of which would poll the availability of the PostgreSQL server to process commands while the second, which would run just prior to the Web application, could poll for the presence of a particular database object. However, like process synchronization, I think container synchronization is a common problem that can be addressed with more general inter-process communication and synchronization primitives like semaphores. Docker Compose would likely benefit the most from such synchronization mechanisms, but Docker containers might find them useful, too, for example, to establish multiple synchronization points within a container.
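For the PostgreSQL case, such a wrapper can be as small as the following sketch (the script name and environment variables are hypothetical):

    #!/bin/sh
    # wait-for-postgres.sh: block until the server accepts connections, then exec the real command
    until pg_isready -h "${DB_HOST:-postgres}" -p "${DB_PORT:-5432}" -q; do
      echo "waiting for postgres..."
      sleep 2
    done
    exec "$@"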
Until Docker Compose or Docker supports container synchronization primitives (similar to process synchronization primitives, but accessible from the shell), Dependencies for docker-compose with inotify is one of the better solutions that I've found to the Docker Compose container synchronization problem.
In addition to consul, etcd, and ZooKeeper, MQTT retained messages are another simple mechanism that Docker containers might use to coordinate activities. Mosquitto is a lightweight, open-source implementation of MQTT.
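A sketch of how a retained message could serve as a synchronization flag (broker hostname and topic are made up):

    # the migration container publishes a retained "done" flag when it finishes
    mosquitto_pub -h mqtt-broker -t myapp/db/migrated -m done -r
    # the web app container blocks until that flag exists, even if it starts much later
    mosquitto_sub -h mqtt-broker -t myapp/db/migrated -C 1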
I've come to the conclusion that Docker Compose is not the most appropriate tool for container synchronization. Tools like Kubernetes or Marathon facilitate more sophisticated container synchronization. What is the best Docker Linux Container orchestration tool? compares available container synchronization tools.
