HAProxy with docker, consul and mesos, which one to choose?

In a microservice stack that uses Docker for container orchestration, Consul for service discovery and Mesos for container scheduling, there are two user-facing services (with a GUI) that need to be configured behind HAProxy for load balancing.
The question is at which level they should be load-balanced. There are LB implementations that support each use case: dockercloud-haproxy, fabio with Consul, and marathon-lb if DC/OS is in place.
What would be the selection criteria?

If your need is not that pressing, you might want to wait until Docker 1.12 is stabilised and use the included LB feature, which is pretty slick:
https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/
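For illustration, here is a minimal sketch of that built-in load balancing (the routing mesh), assuming a host running Docker 1.12+; the service name and image are just examples:

    # Initialize swarm mode and create a replicated service; the routing
    # mesh load-balances port 80 across all three tasks automatically.
    docker swarm init
    docker service create --name web --replicas 3 --publish 80:80 nginx

Every node in the swarm then answers on port 80 and spreads requests across the replicas, with no external HAProxy needed for the basic case.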


Can all docker swarm instances run on same machine?

I have a couple of Docker swarm questions (sorry for not splitting them up, but they are all closely related):
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of regular containers?
What are the options for communication between container instances (in swarm and in regular mode)? HTTP? Named pipes? Other?
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines, or can they all run on the same one? (if you have a limited amount of hardware and just want to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service; in other words, a service's scale can be higher than the number of actual machines. I have a testing swarm with a single machine and one with three, and they work the same way.
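As a quick sketch of a one-node swarm where the scale exceeds the node count (service name and image are illustrative):

    docker swarm init        # this machine is both manager and worker
    docker service create --name demo --replicas 5 nginx
    docker node ls           # shows a single node
    docker service ps demo   # all five tasks run on that one node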
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link
What is the key difference between swarm mode and just running a number of regular containers?
The key difference, as far as I know, is that when a task goes down, Docker automatically puts another task up. And you can easily scale your services, which means you can have multiple tasks just by scaling your service (up or down). With a regular container, when it goes down you have to start another one manually.
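A small sketch of both behaviours, assuming the demo service from above (the container ID is a placeholder you would look up yourself):

    docker service scale demo=8        # swarm reconciles to the declared replica count
    docker rm -f <task-container-id>   # kill one task's container by hand...
    docker service ps demo             # ...and swarm shows the failed task plus its replacement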
What are the options for communication between container instances (in swarm and in regular mode)? HTTP? Named pipes? Other?
I've currently only tested with a couple of WildFly servers in a swarm, which are on the same network. I'm not sure about other options, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link at the moment.
If using HTTP communication between containers on the same machine, will it be roughly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and portainer.io; for a list of them, I found this link
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
#namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP, etc. You could force two containers to only run on a single machine and then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed and TCP/UDP.
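To illustrate the TCP/UDP point, a sketch of two services talking over an overlay network via swarm's built-in DNS (my-api-image is a placeholder, not a real image):

    docker network create --driver overlay backend
    docker service create --name api --network backend my-api-image
    docker service create --name db  --network backend postgres
    # Inside an "api" task, the hostname "db" resolves to the db
    # service's virtual IP, so plain TCP works: e.g. psql -h db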
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against Docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that provide an HTTP response... I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :)
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes, a container running something like Portainer/SwarmPit, or anything controlling the Docker socket through a bind-mount or TCP, can control the whole Swarm. This is how all Docker management works :)
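For example, a management UI typically gets control of the engine by bind-mounting the Docker socket (image name as documented by Portainer at the time of writing; check for the current tag):

    docker run -d -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      portainer/portainer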
For a reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
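As a sketch of the Traefik variant (Traefik v1 flags shown; v2 changed the syntax, so treat the flags as indicative rather than definitive):

    docker network create --driver overlay proxy
    docker service create --name traefik \
      --constraint node.role==manager \
      --publish 80:80 --network proxy \
      --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
      traefik:1.7 --docker --docker.swarmmode --docker.watch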
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/

Docker swarm with consul

I am new to docker swarm and I have ambitions to deploy my application with it.
Docker swarm has its own discovery service, but I googled around and found people mentioning Consul as a discovery service.
My question is: what is the advantage of Consul? Why don't we just use the default discovery service?
Thanks,
Consul was used as a service discovery module in standalone Swarm (prior to Docker 1.12). Since Docker 1.12, however, swarm mode has been built in and comes with a default discovery service, so you don't need an external store.
The key point to note is that if you had a swarm with an external store like Consul, it would still hold some data/metadata that needs to be preserved. Hence the use case for Consul still exists.
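Roughly, the before/after looks like this (addresses are placeholders, and the pre-1.12 commands are simplified):

    # Standalone Swarm (pre-1.12): cluster membership lives in Consul
    docker run -d swarm manage consul://10.0.0.5:8500/nodes
    docker run -d swarm join --advertise=10.0.0.6:2375 consul://10.0.0.5:8500/nodes

    # Swarm mode (1.12+): membership and raft state are built in
    docker swarm init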
Let us first look at the scope of service discovery provided by Swarm and by Consul.
Swarm facilitates service discovery on your Docker network/infrastructure only, while Consul can be used with almost anything if you know how to use it, be it a monolithic application or a microservice; Consul gives you all of that in one place.
Secondly, even though Swarm is great at handling the load of a small infrastructure, it doesn't really cope with high production loads on a resource-heavy infrastructure. This is why other tools exist, for example Kubernetes, ECS, etc.
So, considering that you have an application which you know is going to grow, I would rather go for a solution that works well with whatever I may try in the future without having to change too much, and that scales well on any IaaS provider. Hope that helps.
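To make the scope difference concrete, here is a hedged sketch of registering an arbitrary process (containerized or not) with a local Consul agent. The service name, port and health-check URL are illustrative, and the consul services register subcommand needs a reasonably recent Consul; older versions load the JSON from the agent's config directory instead:

    cat > web.json <<'EOF'
    {
      "service": {
        "name": "web",
        "port": 8080,
        "check": { "http": "http://localhost:8080/health", "interval": "10s" }
      }
    }
    EOF
    consul services register web.json
    # Anything that can do DNS can now find it: web.service.consul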

Do I really need docker swarm?

I have a silly question regarding docker swarm.
I am thinking I can start a web application image in two containers, either on the same server or on two VM servers, and then start a load-balancer container pointing to the two web app containers through IP and port.
In this case, why do I need docker swarm for cluster management? What benefits does docker swarm bring?
I have read the docker documentation, but it only introduces what swarm is and how to use it. I cannot find an answer to why I have to use swarm.
Thanks
What does Swarm manage? It turns a pool of Docker hosts into a single, virtual Docker host.
Can swarm auto-start a container if the container dies? Yes it can, and so can the Docker daemon on each host (a sketch of both layers follows below).
Can swarm auto-create more nodes if resources run short? No, it cannot; it does not aim to provide this service. Nevertheless, you could program a node that starts up and runs containers when needed.
Which means, if traffic grows fast, we still have to manually create more nodes and deploy more containers? Yes, unfortunately.
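A sketch of those two restart layers (the image is just an example):

    # Daemon-level: restart this container on failure, one host only
    docker run -d --restart unless-stopped nginx

    # Swarm-level: keep two replicas running somewhere in the cluster,
    # even if a whole node dies (given spare capacity elsewhere)
    docker service create --name web --replicas 2 nginx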
Update: if needed, here is an answer that details how to deploy a Swarm cluster.

Using zookeeper for service discovery of mesos slaves running docker

I am trying to link two Docker containers using the Mesos/Marathon framework. As I understand it, there is no way to use the Docker link feature in Mesos/Marathon, so the way forward is to use service discovery. Since ZooKeeper is already in use, my question is how to use ZooKeeper for service discovery so that one container can talk to another.
For service discovery on Mesos/Marathon, you can use a proxy server (see https://mesosphere.github.io/marathon/docs/service-discovery-load-balancing.html) or a DNS server that derives settings from Mesos automatically (see https://github.com/mesosphere/mesos-dns).
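As a sketch of the Mesos-DNS route (the Marathon URL and the app definition are illustrative), you deploy through Marathon's REST API and the app becomes resolvable by name:

    cat > web.json <<'EOF'
    {
      "id": "/web",
      "cpus": 0.5,
      "mem": 256,
      "instances": 2,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "nginx",
          "network": "BRIDGE",
          "portMappings": [ { "containerPort": 80 } ]
        }
      }
    }
    EOF
    curl -X POST -H "Content-Type: application/json" \
      http://marathon.example.com:8080/v2/apps -d @web.json
    # With Mesos-DNS running, other containers can resolve web.marathon.mesos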
Although it is possible, I would reconsider using ZooKeeper as a centralized KV store for your configuration and service information. You could implement a daemon that queries and saves data in ZooKeeper in order to configure your containers' config files and live-patch them, but it's a complex solution (there are examples of this approach in this post from Pinterest, and in Hadoop's ZKFailoverController daemon). From my point of view there are better-suited solutions such as Consul or etcd, with daemon implementations like kelseyhightower/confd or consul-template.
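For instance, consul-template can render an HAProxy backend straight from Consul's service catalog and reload HAProxy on changes. A minimal sketch, assuming a service registered as "web" and a systemd-managed HAProxy:

    cat > haproxy.ctmpl <<'EOF'
    backend web
    {{range service "web"}}  server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    EOF
    consul-template \
      -template "haproxy.ctmpl:/etc/haproxy/haproxy.cfg:systemctl reload haproxy"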

How to configure a high-availability cluster of MariaDB and Redis in Mesos or CoreOS

In most tutorials, presentations and demos, only stateless services are presented, load-balanced either via DNS (SkyDNS, skydock, etc.) or via a reverse proxy such as HAProxy or Vulcand, configured with etcd or ZooKeeper.
Is there a best practice for deploying a cluster of MariaDB and Redis using:
CoreOS + fleet + Docker; or
Mesos + Marathon + Docker
Any other cluster management solution
How can one configure a Redis cluster and a MariaDB (Galera) cluster when the host running the master may change?
https://github.com/sheldonh/coreos-vagrant/tree/master/redis
http://www.severalnines.com/blog/how-deploy-galera-cluster-mysql-using-docker-containers
After posting the question, I was lucky and came across a few repositories that have achieved what I am looking for:
Redis
https://github.com/mdevilliers/docker-rediscluster - a Redis cluster with two Redis instances and three Redis Sentinel monitors. If the master fails, the Sentinels promote the slave to master. Mark has also created a project that configures HAProxy to use the promoted master - https://github.com/mdevilliers/redishappy
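For reference, the Sentinel side of such a setup is only a few lines of config. A minimal sketch with placeholder addresses (a quorum of 2 means two Sentinels must agree the master is down before failover starts):

    cat > sentinel.conf <<'EOF'
    sentinel monitor mymaster 10.0.0.10 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    EOF
    redis-sentinel sentinel.conf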
Percona/Galera cluster
An out-of-the-box working docker image - https://github.com/paulczar/docker-percona_galera
You could use CoreOS (or any other platform where Docker can run) and Kubernetes with SkyDNS integration; this would allow you to fetch the IP address of the master. Kubernetes also comes with a proxy (for service discovery) that sets environment variables in your pods, which you can access at runtime. I think the best way (and the way you need to go) is to use a service discovery tool like SkyDNS or something similar. Here is a simple Kubernetes example.
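A hedged sketch of that idea: a Kubernetes Service gives the Redis master a stable virtual IP, a DNS name and injected environment variables, regardless of which node the pod lands on (labels and names are illustrative):

    cat > redis-master-svc.yaml <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
    spec:
      selector:
        app: redis
        role: master
      ports:
        - port: 6379
    EOF
    kubectl create -f redis-master-svc.yaml
    # Pods can reach it as redis-master (DNS) and also see
    # REDIS_MASTER_SERVICE_HOST / REDIS_MASTER_SERVICE_PORT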
You could also do this with fleet and sidekicks, but I think Kubernetes makes some things a little easier for you and is better to use. It is just a little tricky to set up :)
I haven't used Mesos and Marathon so far, but I think they should be able to do this too. They (https://github.com/mesosphere/marathon#features) have all the tools you need to set up your cluster.
