docker-swarm vs. docker-compose on a single host in production

Is there a reason to use docker-swarm instead of docker-compose for deploying a single host in production?
I'm currently rewriting an existing application. My predecessors set up the application using docker-swarm, but I do not understand why: the application will only consist of a single host running a couple of services. These services will only supply some local information on the customer network via a REST API to a Kubernetes cluster (so there is no real load and no reason to add additional hosts).
I looked through the Docker website and could not find a reason to use docker-swarm to deploy a single host, apart from testing a deployment in a single-host dev environment.
Are there benefits to using docker-swarm compared to docker-compose regarding deployment, networking, etc.?

Docker Swarm and Docker Compose are fundamentally different animals. Compose is a tool that lets you define and configure a group of related containers, whereas Swarm is an orchestration tool that manages multiple Docker engines in a way that lets you treat them (somewhat) as a single unit. Swarm exposes an API that is mostly compatible with the Docker Remote API, which allows existing applications to use Swarm to scale horizontally without having to completely overhaul their existing interface to the container engine.
That said, much of the functionality in Docker Compose that overlaps with Docker Swarm has been added incrementally. Compose has grown over time, and the distinction between the two has narrowed a bit. Swarm was eventually integrated into the Docker engine, and Docker Stack was introduced, allowing compose.yml files to be read directly by Docker, without using Compose.
So the real question might be: what is the difference between docker compose and docker stack? Not a whole lot. Compose is actually a separate project, written in Python, that uses the Docker API under the hood. Stack does much the same thing as Compose but is integrated into Docker. Stack also expects pre-built images, while Compose will handle image builds for you, which makes Compose very handy for development.
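As a rough illustration (a minimal sketch; the stack name is arbitrary and the images must already be built or pulled for the stack case):

# Compose: reads docker-compose.yml, can build images locally, runs outside swarm mode
docker-compose up -d

# Stack: requires swarm mode and pre-built images, reads the same (version 3) file format
docker swarm init                                   # one-time setup; a single node is fine
docker stack deploy -c docker-compose.yml mystack   # "mystack" is an arbitrary stack name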
What you are dealing with might be a product of a time when these two tools were a lot more distinct. Docker Swarm is part of Docker, and it allows for easy scaling if needed (even if you don't need it now, it might be useful down the road). On the other hand, Compose (in my opinion, anyway) is much more useful for development situations where you are making frequent tweaks to your images and rebuilding.


Can all docker swarm instances run on same machine?

I have a couple of Docker swarm questions (Sorry for not splitting them up but they are all closely related):
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of containers as regular?
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
You can have just one machine in a swarm and run multiple tasks of the same service; in other words, the scale of a service can be higher than the number of actual machines. I have one testing swarm with a single machine and one with three, and they work the same way.
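For example, a minimal single-node sketch (service name and image are arbitrary):

docker swarm init                                      # turns this one machine into a one-node swarm
docker service create --name web --replicas 3 nginx    # three tasks of the same service on one node
docker service ls                                      # shows "web" with 3/3 replicas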
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
What is the key difference between swarm mode and just running a number of containers as regular?
The key difference, as far as I know, is that when a task goes down, Docker automatically starts another one to replace it. And you can easily scale your services, which means you can have multiple tasks just by scaling your service up or down. With a plain container, when it goes down you have to manually start another one.
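A quick sketch of that behaviour, continuing the hypothetical "web" service from above:

docker service scale web=5    # swarm starts extra tasks until 5 are running
docker service ps web         # if a task's container dies, a replacement shows up here automatically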
What are the options of communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of WildFly servers in a swarm, which are on the same network. I'm not sure about other options, but would love to find out. I've only read about RabbitMQ, but I can't seem to find the link at the moment.
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
#namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to only run on a single machine and then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed and TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that provide an HTTP response. I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :).
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes, a container running something like Portainer/SwarmPit, or anything controlling the Docker socket through a bind-mount or TCP, can control the whole swarm. This is how all Docker management tools work :)
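A hedged sketch of what that looks like in practice (the published port and the manager constraint are typical choices, not requirements):

# Run a management UI as a swarm service with access to a manager's Docker socket
docker service create --name portainer \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --publish 9000:9000 \
  portainer/portainer-ce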
For a reverse proxy, you would run a container-based proxy like Traefik, or Docker Flow Proxy (which sets up HAProxy for Docker and Swarm).
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/

When to use Docker-Compose and when to use Docker-Swarm

I'm trying to understand the differences or similarities between Docker-Compose and Docker-Swarm.
By reading the documentation I have understood that docker-compose provides a mechanism to bind different containers together so they work in collaboration, as a single service (I'm guessing it uses the same functionality as the --link option used to link two containers).
Also, my understanding of docker-swarm is that it allows you to manage a cluster of different docker hosts, each of which is running several container instances of some docker images. We could define connections as overlay networks between different containers in the swarm (even if they are spread across two docker hosts in the swarm) to connect them as a unit.
What I'm trying to understand is: has docker-swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?
Or is docker-compose still an integral part of the entire Docker family, and is it expected and advisable to use it to connect containers so they work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?
Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?
Besides, I also see it mentioned in the Docker documentation that --link is not recommended anymore and will be obsolete soon.
I'm a bit confused. Thanks a lot!
It will probably help to start with a few definitions:
docker-compose: Command used to configure and manage a group of related containers. It is a frontend to the same APIs used by the docker CLI, so you can reproduce its behavior with commands like docker run.
docker-compose.yml: Definition file for a group of containers, used by docker-compose and now also by swarm mode.
swarm mode: Used to manage a group of docker engines as a single entity and provide orchestration (constantly trying to correct any differences between the current state and the target state).
service: One or more containers running the same image and configuration within the swarm; multiple containers provide scalability.
stack: One or more services within a swarm, these may be defined using a DAB or a docker-compose.yml file.
bridge network: Network managed by a single docker engine where multiple containers may communicate with each other. You may have multiple networks managed by an engine, and containers can be attached to zero or more networks.
overlay network: Similar to a bridge network but spanning multiple docker engines. These require a key/value store to maintain their state. Swarm mode provides this, but if swarm mode is disabled, you may also use etcd, consul, or zookeeper.
links: a method to connect containers together that predates the bridged network. Its usage is no longer recommended.
classic swarm: A predecessor to the integrated swarm mode that runs as a container, allows multiple engines to appear as one, but does not provide orchestration or include its own k/v store.
To answer the questions:
has docker-swarm superseded docker-compose, and are overlay networks the new (recommended) way to connect containers?
Or is docker-compose still an integral part of the entire Docker family, and is it expected and advisable to use it to connect containers so they work in collaboration? If so, does docker-compose work with containers across different nodes in the swarm?
They provide different functionality and will both continue to serve a purpose. docker-compose cannot start containers inside swarm mode, but a newer version of the docker-compose.yml file (version 3) can be used to define a stack directly in swarm mode without using docker-compose itself. docker-compose is needed to manage containers outside of swarm mode, on a single docker engine, or with classic swarm.
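To make that concrete, here is a minimal sketch of a version 3 file that works both ways (image, service and stack names are placeholders):

# docker-compose.yml
version: "3"
services:
  web:
    image: nginx       # placeholder image
    deploy:            # the deploy section is used by "docker stack deploy" and ignored by docker-compose
      replicas: 2

Outside swarm mode you would run docker-compose up -d against this file; inside swarm mode, docker stack deploy -c docker-compose.yml demo, where "demo" is an arbitrary stack name.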
Or are overlay networks for connecting containers across different hosts in the swarm, while docker-compose is for creating internal links?
Besides, I also see it mentioned in the Docker documentation that --link is not recommended anymore and will be obsolete soon.
Starting with version 2 of the yml file, docker-compose connects multiple containers together by default with a new bridged network per project (the project name defaults to the directory name). With classic swarm, that would default to an overlay network using an external k/v store. And with a swarm mode stack, this would be an overlay network.
Using docker networks is the preferred way to have containers communicate with each other. You want a network per group of containers you wish to isolate from the rest of your docker environment. docker-compose automates this network creation, but you can also do it from the command line with docker network create.
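For example, a minimal sketch of doing the same thing by hand (the image and container names are placeholders):

docker network create backend                                            # user-defined bridge network
docker run -d --name db --network backend -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network backend myorg/app                     # hypothetical app image; it reaches "db" by name via built-in DNS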
Links have been largely replaced by docker networks with built-in DNS discovery. When you remove links from your docker-compose.yml, you may need to replace them with a depends_on section to enforce container startup order. Otherwise, there are very few scenarios where linking makes sense, and all the usage I've seen is from someone following outdated documentation.
compose or swarm or swarm overlay networks
You will find that you need to use all of the above if you're doing anything other than a demo on your laptop.
I deliberately separated out swarm & swarm overlay networks, because you need not use both, but you cannot get an overlay network without having a swarm underneath it.
Compose is for bringing up multiple containers together. It makes sense when they are related to each other, although they may not be. Let's suppose the typical case where the containers are for services that are related to each other; then you would want them to talk to each other in some way, yet control how they talk to each other using networks. For example, take a 3-tier app that has a webserver, appserver and db. Say all three components are dockerized and you are using compose to bring them up together, instead of running docker run .. three times with different parameters. All three would come up, but you want to control how they connect to each other. You want the webserver to be able to talk to the appserver, but not to the db directly, and you want the appserver to be able to reach (ping) both the db container and the webserver. All connections are two-way, but restricted to only those services that you want to be able to communicate with each other. For such an arrangement, you would typically set up two networks, say frontend and backend. The web and app containers are connected to the frontend network; the app and db containers are connected to the backend network. Because there is no common network between the db and web containers, they cannot touch (ping) each other, which is your intent.
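A minimal compose sketch of that two-network layout (image names are placeholders):

version: "2"
services:
  web:
    image: nginx               # placeholder webserver image
    networks:
      - frontend
  app:
    image: myorg/appserver     # hypothetical appserver image
    networks:
      - frontend
      - backend
  db:
    image: postgres            # placeholder db image
    networks:
      - backend
networks:
  frontend:
  backend: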
Now, if you want these three services to be able to run on your cluster of hundreds of machines, and you also want to scale across them, you need a network that spans multiple hosts. That is where overlay networking (in swarm) comes into the picture. Overlay networking is nothing but multi-host networking built over VXLAN technology. You do not have to know about VXLAN, except that it is a standard network technology supported in almost all modern networking infrastructure.
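In swarm mode, creating such a multi-host network is a single command; a quick sketch (names are arbitrary):

docker network create --driver overlay app-net                           # spans all swarm nodes
docker service create --name web --network app-net --replicas 3 nginx    # tasks land on any node, all on the same network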
I hope that clarifies.
Edit: I did not see that you got an answer already!
I think you have most of the understanding correct as to what each is, but some tweaking is required.
You're correct that docker-compose is there to bring up multi-container applications. Earlier, you used to run docker run .. to start every container. Modern applications embracing the microservices paradigm can be made up of dozens of services, and using docker run .. gets very tiresome very quickly. docker-compose lets you express all the containers, their properties, and how they connect to each other in a yaml or json file, so you can manage them more easily.
So, docker-compose is the container orchestration part in the docker ecosystem.
Links are different; they are just part of docker-compose or docker run commands, and are deprecated in favor of software-defined networks, of which overlay networks are just one type.
Swarm is the scheduling component in Docker. What is scheduling? It is nothing but figuring out where to "place" your containers in your cluster of docker hosts. You can have a cluster of hundreds of servers, and you may have hundreds of containers, each encapsulating a service for a dozen different applications. Now, how should these containers be distributed across your cluster of hundreds of servers? Should some containers be placed only on certain hosts because they satisfy particular criteria, or maybe they should be closer to (or farther from) other containers which are somehow related? All of this is part of the scheduling component, which is performed by Docker Swarm.
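A small sketch of what such placement decisions look like in (modern) swarm mode, using a hypothetical "ssd" node label:

docker node update --label-add ssd=true node1                 # tag a node
docker service create --name db \
  --constraint 'node.labels.ssd == true' postgres             # only schedule db tasks on tagged nodes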
I suggest you go through the getting started documentation on docker.com here: https://docs.docker.com/engine/getstarted-voting-app/

Sharing docker clusters

I thought a major benefit of Docker was the ability to deploy a single unit of work (a container) that is cheap, lightweight, and boots fast, instead of having to deploy a more expensive and heavy VM that boots slowly. But everywhere I look (e.g. AWS, Docker Cloud, IBM, Azure, Google Cloud, Kubernetes), deploying single containers is not an option. Instead, a single customer must deploy entire VMs that will run instances of the docker engine, which will then host clusters of containers.
Is there any CaaS that allows you to deploy only as few containers as you need? I thought many cloud provider companies would offer this service, coordinating the logistics of which containers submitted by which customers to group together and distribute among the companies' docker engines. I see this service is unnecessary for those customers that will be deploying enough containers that a full docker engine instance is necessary. But what about those customers that want the cheap option of only deploying a single container?
If this service is not available, I see Docker containers as no cheaper nor lighter in weight than full VMs. In both cases, you pay for a heavy VM. The only remaining benefit would be isolation of processes and the ability to quickly change them.
Again, is there any cloud service available to deploy only a single container?
As far as I can see, the problem here is the point of view of your approach, not Docker.
Any machine that runs a GNU/Linux distro can run the Docker daemon and therefore run your Docker containers.
There are solutions like Elastic Beanstalk that allow you to deploy docker containers with a high level of abstraction, making your "ops" part a little bit easier.
Nevertheless, I wonder how you are actually trying to deploy your application. What do you mean with:
"Instead, a single customer must deploy entire VMs that will run instances of the docker engine which will then host clusters of containers."?
For example, Kubernetes is a framework that allows you to deploy containers to other machines, so yes, you have to have a framework for that or, instead, use a framework-as-a-service such as Elastic Beanstalk.
I hope my answer helps!

Recommended way to run a Docker Compose stack in production?

I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production: the whole stack should start on boot, and containers should restart or be recreated if they crash. There aren't any volumes I care about, and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using docker-compose specifically in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so I don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot, but I don't think this is the way to do it with Docker containers, as they end up being supervised by the Docker daemon anyway?
As a simple start, I am thinking of just putting restart: always on all my services and making an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case, I don't need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client-side tool, but when you run docker-compose up -d, all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped, to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once, and Docker takes care of the rest.
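A minimal sketch of that setup (service name and image are placeholders):

# docker-compose.yml
version: "2"
services:
  web:
    image: mydjangoapp            # placeholder image for the Django app
    restart: unless-stopped       # survives crashes and daemon/host restarts

Then, on a systemd-based host, sudo systemctl enable docker ensures the daemon starts on boot, and a single docker-compose up -d is all you need; Docker restarts the flagged containers from then on.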
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find more information about using docker-compose in production in their documentation. But, as they mention, Compose is primarily aimed at development and testing environments.
If you want to use your containers in production, I would suggest you use a suitable container orchestration tool, such as Kubernetes.
If you can organize your Django application as a swarmkit service (Docker 1.11+), you can orchestrate the execution of your application with tasks.
Swarmkit has a restart policy (see swarmctl flags)
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
You say that you use AWS, so why don't you use ECS, which is built for exactly what you are asking? You create an application which is the pack of your five containers. You configure which and how many EC2 instances you want in your cluster.
You just have to convert your docker-compose.yml to the specific Dockerrun.aws.json format, which is not hard.
AWS will start your containers when you deploy, and also restart them in case of a crash.

Multiple site docker swarm with enforced topology

I am building a proof-of-concept docker swarm based application stack, intended to evolve a product which is currently deployed to many physical sites and backed by a distributed CDN. The docker compose system I've set up includes a number of different image types which I need to ensure are deployed to each physical location (for example, three copies of service A and two copies of service B at each site, each site being several collocated physical machines belonging to the docker swarm), and then others which are deployed only to a central origin location. I'd like to find a way to deploy this with constraints on where the image types end up on the swarm. Is this possible?
Short answer, yes.
Long answer:
Use docker-compose for managing your cluster; it will ease management.
After creating your swarm, you can make docker-compose use that swarm with:
docker-compose -H <docker-swarm-ip:port> up -d
And if you want a container/service to run on a specific host, add the following entry in docker-compose.yml under the service you want to run on that host:
environment:
  - "constraint:node==<host>"
This is the way I do it now.
I believe this is also available when you use the run command, though I've never tried it.
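Note that the constraint:node== environment trick above is the classic-swarm approach. If you are on the integrated swarm mode instead, the equivalent is a placement constraint in a version 3 compose file; a hedged sketch (the "site" node label and image are hypothetical):

version: "3"
services:
  service-a:
    image: myorg/service-a               # hypothetical image
    deploy:
      replicas: 3                        # three tasks, restricted to nodes labelled site=edge1
      placement:
        constraints:
          - node.labels.site == edge1    # label added beforehand with: docker node update --label-add site=edge1 <node>

For the "N copies at every site" requirement, you would typically define one such service per site label, since swarm-mode replica counts are swarm-wide rather than per site.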
