Microservice Discovery With Docker And Consul

I'm interested in building microservices, but I'm getting a bit stuck on how service discovery should work when I've got multiple instances of a single microservice.
Suppose I've got an "OCR" app that reads text from an image.
Deploying that as 1 instance is easy, however, what if I want 50 instances of those?
I can run Docker Swarm to spin up those 50 instances, but how do I send a request to any one of them? I don't want to have to know the exact container name of a specific instance, and I don't care which one I get; as long as it's healthy, just send my request to any of the "OCR" containers.
How do I achieve this?
I've been looking into Consul and it seems very promising.
I especially like the HTTP API, although I'm a little unsure of how I would retrieve the URL for the service I'm interested in. Would I need to do that before every request to make sure I'm pointing to a healthy instance?
If I wanted to use Consul, what would the steps be in relation to Docker Swarm? Do I just need to register the service in Consul when the container starts up, and it will automatically get de-registered if it fails, right?
After that, all of my containers just need to be aware of where Consul is (and I guess I could stick a load balancer in front of it, in case I ever want to scale out Consul itself to a bunch of instances?).
Please let me know if I'm going completely in the wrong direction.
If anyone could also suggest any articles or books on this topic I'd appreciate it.
Thanks.

When you're using Docker Swarm Mode, you get service discovery with load balancing for free.
DNSRR (DNS round robin) is the key concept: https://docs.docker.com/engine/swarm/key-concepts/#load-balancing
Say you deploy OCR-app.
docker service create --network dev --name OCR-app --replicas 5 OCR-app:latest
The docker manager will in this case deploy OCR-app five times across the nodes of your swarm. Every other service which is part of the same docker network dev can reach OCR-app by its name, e.g. GET http://OCR-app:4000/do/something.
Internally, docker swarm uses round robin to forward the request automatically to one of the five replicas.
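For example, a minimal end-to-end sketch (the dev overlay network, the throwaway client service and port 4000 are assumptions for illustration):
docker network create --driver overlay dev
docker service create --network dev --name OCR-app --replicas 5 OCR-app:latest
# Any other service attached to the same "dev" network can resolve the service name:
docker service create --network dev --name client alpine sleep 1d
docker exec $(docker ps -q -f name=client) wget -qO- http://OCR-app:4000/do/something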

Related

Can all docker swarm instances run on same machine?

I have a couple of Docker swarm questions (Sorry for not splitting them up but they are all closely related):
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
Do I have to run swarm mode to be able to communicate between instances?
What is the key difference between swarm mode and just running a number of regular containers?
What are the options for communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
Is there any built in support for a message bus or similar in Docker?
Is there support for any consensus protocol in Docker?
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
Can a container list other containers, stop/restart some and start new ones? (to be able to function as a manager for other containers)
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
Background: What I'm trying to figure out is how I should go about building a microservice mesh using Docker. The containers will be running .NET Core. I'm not too keen on relying too heavily on Docker specifically, since it may not be the preferred tech in a couple of years. What can/should I do with Docker, and what can/should I do inside the containers? That's what I'm trying to figure out.
I've copied your questions and tried to answer them.
Do all instances in a swarm have to run on different machines or can they all run on the same? (if having limited amount of hardware and just wanting to try swarm mode)
You can have only one machine in a swarm and run multiple tasks of the same service; in other words, the scale of a service can be higher than the number of actual machines. I have a testing swarm with a single machine and one with three, and they work the same way.
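For example, a quick single-machine sketch (the test image is just an example):
docker swarm init
docker service create --name whoami --replicas 3 traefik/whoami
docker service ps whoami   # all three tasks run on the same (and only) node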
Do I have to run swarm mode to be able to communicate between instances?
You have to run Docker in swarm mode in order to create a service; please see this link.
What is the key difference between swarm mode and just running a number of regular containers?
The key difference, afaik, is that when a task goes down, Docker automatically brings up another task. You can also easily scale your services, which means you can have multiple tasks just by scaling your service up or down. As for a regular container - when it goes down, you have to manually start another one.
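A quick sketch of what that looks like in practice (the service name web and the image are just placeholders):
docker service create --name web --replicas 3 nginx:alpine
docker service scale web=10   # add tasks
docker service scale web=2    # remove tasks
# If a task dies, Swarm schedules a replacement automatically; with plain
# docker run you would have to restart the container yourself (or rely on --restart policies).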
What are the options for communication between instances of containers? (in swarm and in regular mode) http? named pipes? other?
I've currently only tested with a couple of wildfly servers in a swarm, which are on the same network. I'm not sure about others, but would love to find out. I've only read about RabbitMQ, but can't seem to find the link atm.
If using http communication between containers on same machine, will it be roughly similarly as fast as named pipes?
I can't say.
Is there any built in support for a message bus or similar in Docker?
I can't say.
Are there any GUIs for designing, managing, testing and/or debugging Docker swarms?
I've tested Rancher and Portainer.io; for a list of them, I found this link.
Can a container list other containers, stop/restart some and start new ones?
I'm not sure why you would want to do that, but I guess it's possible; see this link.
Can a container be given access to OS-features (Linux in my case) to configure for instance a reverse proxy or port forwarding on the WAN?
I can't say.
#namokarm did a great job, and I'm filling in the gaps:
Benefits of Swarm over docker run or docker-compose.
All communication between containers has to be TCP/UDP etc. You could force two containers to only run on a single machine, then bind-mount their socket so they skip the network, but that would be a bit of an anti-pattern. Swarm is designed for everything to be distributed and TCP/UDP.
In a few cases, such as PHP-FPM + Nginx, I recommend bundling both in the same container (against docker best practices, but trust me, it's easier than separate containers). This ensures they scale together (a 1-to-1 relationship) and stay fast, since they use local sockets to communicate. I only recommend this for a few setups like this, the other being ColdFusion + Nginx, because they are two parts of the same tool that provide an HTTP response... I don't recommend bundling images together in nearly all other cases, but I'm open to ideas :).
Rancher is no longer supporting Swarm. Portainer and SwarmPit are GUI options.
Yes a container running something like Portainer/SwarmPit or controlling the Docker socket through a bind-mount or TCP can control the whole Swarm. This is how all docker management works :)
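For example, a rough sketch of running such a management UI with the Docker socket bind-mounted (image name and port are examples; check the current Portainer docs for the exact command):
docker service create --name portainer \
  --constraint node.role==manager \
  --publish 9000:9000 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer-ce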
For reverse proxy, you would run a container-based proxy like Traefik or Docker Flow Proxy, which sets up HAProxy for Docker and Swarm.
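As a rough sketch of that pattern (Traefik v1-style flags and labels; the proxy network, hostnames and ports are assumptions, not a copy-paste recipe):
docker network create --driver overlay proxy
docker service create --name traefik --constraint node.role==manager \
  --publish 80:80 --network proxy \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  traefik:1.7 --docker --docker.swarmMode --docker.watch
# Backend services join the proxy network and describe themselves via labels:
docker service create --network proxy --name OCR-app \
  --label traefik.port=4000 \
  --label traefik.frontend.rule=Host:ocr.example.com \
  OCR-app:latest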
Many of these topics are discussed in my DockerCon talks: https://www.bretfisher.com/dockercon18/

Automatic self-configuration of an etcd cluster as a Docker swarm service

I want to find a way to deploy an etcd cluster as a Docker Swarm service that would automatically configure itself without any interaction. Basically, I think of something in spirit of this command:
docker service create --name etcd --replicas 3 my-custom-image/etcd
I'm assuming that the overlay network is configured to be secure and provides both encryption and authentication, so I believe I don't need TLS, not even --auto-tls. I don't want the extra headache of finding a way to provision the certificates, when this can be solved on another layer.
I need a unique --name for each instance, but I can get that from an entrypoint script that would use export ETCD_NAME=$(hostname --short).
The problem is, I'm stuck on initial configuration. Based on the clustering guide there are three options, but none seems to fit:
The DNS discovery scenario is closest to what I'm looking for, but Docker doesn't support DNS SRV records discovery at the moment. I can lookup etcd and I will get all the IPs of my nodes' containers, but there are no _etcd-server._tcp records.
I cannot automatically build ETCD_INITIAL_CLUSTER because while I know the IPs, I don't know the names of the other nodes and I'm not aware about any way to figure those out. (I'm not going to expose Docker API socket to etcd container for this.)
There is no preexisting etcd cluster, and while supplying the initial configuration URI from discovery.etcd.io is a possible workaround, I'm interested in not doing that. I'm aiming for a "just deploy a stack from this docker-compose.yml and it'll automatically do the right thing, no questions asked" no-brainer scenario.
Is there any trick I can pull?
As you have correctly said, you know the IPs of your nodes' containers,
so the suggested trick is to simply build the required etcd names as derivatives of each node's IP address:
inside each container, etcd is named using that particular container's IP, i.e. etcd-$ip;
ETCD_INITIAL_CLUSTER is populated using the other containers' IPs in a similar way.
The names could be as simple as etcd-$ip or, even better, we could use the netmask to keep only the host part of the node's IP on this network to make the names prettier.
In that case, in a simple 3-node configuration, one could end up with names like etcd-02, etcd-03, etc.
There are no specific requirements for the name attribute; it just needs to be unique and human-readable. Although it indeed looks like a trick, it might work.
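A hypothetical entrypoint sketch of that idea (the service name etcd, the tasks.etcd DNS entry Swarm provides for a service's tasks, and the standard etcd ports are assumptions; a real script would also need to handle the startup race where not all tasks are resolvable yet):
ip=$(hostname -i)
export ETCD_NAME="etcd-${ip}"
# Build the initial cluster list from every task IP behind tasks.etcd
peers=""
for peer_ip in $(getent hosts tasks.etcd | awk '{print $1}'); do
  peers="${peers}${peers:+,}etcd-${peer_ip}=http://${peer_ip}:2380"
done
export ETCD_INITIAL_CLUSTER="$peers"
exec etcd \
  --listen-peer-urls http://0.0.0.0:2380 \
  --listen-client-urls http://0.0.0.0:2379 \
  --initial-advertise-peer-urls "http://${ip}:2380" \
  --advertise-client-urls "http://${ip}:2379"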

Docker swarm with consul

I am new to Docker Swarm and I have ambitions to deploy my application with it.
Docker Swarm has its own discovery service, but I googled around and found people mentioning Consul as a discovery service.
My question is: what is the advantage of Consul? Why don't we just use the default discovery service?
Thanks,
Consul was used as a service discovery module in standalone Swarm (prior to Docker 1.12). However, Docker 1.12 introduced Swarm mode, which comes with a default discovery service, so you don't need an external store.
The key point to notice is that if you had a swarm with an external store like Consul, that store would still hold some data/metadata that needs to be preserved. Hence there is still a use for Consul.
Let us first look at the scope of service discovery provided by both swarm and Consul.
Swarm facilitates service discovery on your Docker network/infrastructure only, while Consul can be used with almost anything if you know how to use it; be it a monolithic application or a microservice, Consul gives you all of that in one place.
Secondly, even though Swarm is great for handling small infrastructure loads, it doesn't really cope well with high production loads on a resource-heavy infrastructure. This is why other tools exist, for example Kubernetes, ECS, etc.
So, considering that you have an application which you know is going to grow, I would rather go for a solution that works well with whatever I may try in the future without having to change too much, and that scales well on any IaaS provider. Hope that helps.
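If you do end up running Consul, the discovery flow the original question asks about looks roughly like this through its HTTP API (a sketch; the service name, port and agent address are examples):
# Register an instance with the local Consul agent:
curl -X PUT -d '{"Name": "ocr", "Port": 4000}' \
  http://localhost:8500/v1/agent/service/register
# Before sending a request, ask only for instances whose health checks pass:
curl http://localhost:8500/v1/health/service/ocr?passing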

Docker and Consul by example: need clarification

I'm learning Docker Swarm with Consul and found some issues I don't really understand. Basically, I created a Docker Swarm cluster (node-01 and node-02) with Consul service discovery. I then ran a multi-container application (an Express app with Mongo) and I can see it is running on node-02. In order to use it, I have to go in and find the IP address of my node-02 and then open the browser.
It works fine; it's just that I was expecting that I could go to some virtual IP (or DNS name) and that the Consul service (or Swarm) would then translate it to the correct IP address of node-02 in this example.
Next item is that when I log into Consul web UI, I was expecting to see the nodes under the 'nodes' menu, but that seems not to be the case. I was also expecting to get an overview of the 'applications' or 'services' I was running on the node-01 and node-02, but that is also not the case.
My questions are:
Can someone explain why I would need to manually find out on which node in the cluster my app is running? I cannot imagine this is done in larger deployments.
Can someone address why I don't see the 'nodes' and 'services' in the Consul UI?
Note: I tried to be as short as possible though I have been documenting the full setup in a blog post (with screenshots) for those who want to see more details. Go to blog post
Question 1
I would like to access the service without having to use the Swarm agent's IP address
Solution
It is feasible; you just need to start a reverse proxy such as nginx in a container (here are the official nginx images). At the startup of this container, use the --link option with the name of the application. Thus the IP address of the application container will be added to the /etc/hosts file of the reverse proxy container (remember to use --name and --hostname). Run this reverse proxy container on a specific node.
So the solution to get rid of the IP address issue is to deploy another container on a specific node (and thus a specific IP address)? Yes! But using --link will keep this setup scalable ;)
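A minimal sketch of that setup (names and the application image are made up; inside the proxy, the nginx config would simply proxy_pass to http://app:<port>):
docker run -d --name app --hostname app my-express-app
docker run -d --name proxy --link app:app -p 80:80 nginx
# --link writes app's IP into the proxy container's /etc/hosts,
# so nginx can reach it by the name "app".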
Question 2
I was expecting to see the nodes under the 'nodes' menu, but that seems not to be the case.
What do you mean? What did you expect? Do you need to query the key/value store?
Check this: https://github.com/vmudigal/microservices-sample
Microservices Sample Architecture

CoreOS Fleet, link redundant Docker container

I have a small service that is split into 3 Docker containers: one backend, one frontend, and a small logging part. I now want to start them using CoreOS and fleet.
I want to try to start 3 redundant backend containers, so the frontend can switch between them if one of them fails.
How do I link them? If I only use one, it's easy; I just give it a name, e.g. 'back', and link it like this:
docker run --name front --link back:back --link graphite:graphite -p 8080:8080 blurio/hystrixfront
Is it possible to link multiple ones?
The method you use will be somewhat dependent on the type of backend service you are running. If the backend service is http, then there are a few good proxy / load balancers to choose from:
nginx
haproxy
The general idea behind these is that your frontend service need only be introduced to a single entry point which nginx or haproxy presents. The tricky part with this, or any cloud service, is that you need to be able to introduce backend services, or remove them, and have them available to the proxy service. There are some good writeups for nginx and haproxy to do this. Here is one:
haproxy tutorial
The real problem here is that it isn't automatic. There may be some techniques that automatically introduce/remove backends for these proxy servers.
Kubernetes (which can be run on top of CoreOS) has a concept called 'Services'. Using this deployment method, you can create a 'service' and another thing called a 'replication controller', which provides the 'backend' docker processes for the service you describe. The replication controller can then be instructed to increase/decrease the number of backend processes. Your frontend accesses the 'service'. I have been using this recently and it works quite well.
I realize this isn't really a cut and paste answer. I think the question you ask is really the heart of cloud deployment.
As Michael stated, you can get this done automatically by adding a discovery service and binding it to the backend container. The discovery service will add the IP address (usually you'll want this to be the IP address of your private network, to avoid unnecessary bandwidth usage) and port to the etcd key-value store, which can then be read by the load balancer container to automatically update the load balancer with the available nodes.
There is a good tutorial by Digital Ocean on this:
https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
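The registration side of that pattern boils down to writing keys that confd watches; a sketch with made-up key names and etcdctl v2 syntax:
etcdctl set /services/back/back-1 '{"host": "10.1.2.3", "port": 8080}'
etcdctl set /services/back/back-2 '{"host": "10.1.2.4", "port": 8080}'
# confd watches /services/back/, regenerates the haproxy (or nginx) config
# from a template, and reloads the proxy whenever a key is added or removed.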
