What is the best way to do service discovery in a Docker environment?

I would like to hear about your experience with service discovery in a Docker environment.
We plan to have a multi-host Docker environment with Swarm.
The latest version of Docker provides an internal DNS and a round-robin feature.
Our idea is to use a Docker overlay network.
I believe that one overlay network per application is the way to go, so every environment will be segmented into its own subnet. Or is one big subnet for all applications better?
Internal service discovery (inside the overlay network) from one service to another is easy; Docker's internal DNS solves it, and we just need to use the --net-alias parameter.
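A minimal sketch of that internal case, assuming one overlay network per application (the network, alias and image names below are made up):

```bash
# One overlay network for the application.
docker network create -d overlay app1-net

# Two containers sharing the alias "api"; Docker's embedded DNS
# round-robins lookups of that alias between them.
docker run -d --net=app1-net --net-alias=api my-api-image
docker run -d --net=app1-net --net-alias=api my-api-image

# Any other container on app1-net can resolve the alias by name.
docker run --rm --net=app1-net busybox nslookup api
```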
But how can external service discovery be done, i.e. from another machine/service outside the overlay network?
Can you share your experience or your thoughts about it?
Regards

Related

Docker swarm - DNS service discovery, looking into the internals

First of all:
I have some limited experience with Docker Swarm (I have touched it in a production environment). I have read a lot about it, and I know concepts like veth, overlay, label pinning, VTEP, bridge and so on. I also know that Docker Swarm uses a distributed key-value store to manage the cluster.
However, there is something that I don't understand: service discovery/DNS/resolving service names.
How does it work? Where is this DNS server placed? Who takes care of resolving service names?
Is it possible to read the content of distributed key-value storage?
How does it work? Where is this DNS server placed? Who takes care of resolving service names?
It's embedded in the docker daemon itself. Every container that is part of a user-defined network sends its name-resolution requests to 127.0.0.11.
https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/
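A quick way to see this for yourself, assuming a user-defined network (the names demo-net and web below are made up):

```bash
# On a user-defined network, the container's resolver points at
# Docker's embedded DNS rather than the host's DNS servers.
docker network create demo-net
docker run -d --name web --network demo-net nginx

docker run --rm --network demo-net busybox sh -c \
  'cat /etc/resolv.conf; nslookup web'
# resolv.conf shows "nameserver 127.0.0.11", and the daemon behind it
# answers the lookup for "web" with the container's IP.
```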
Is it possible to read the content of distributed key-value storage?
Docker is using libkv. But I'm not sure if it's possible to bypass the docker daemon and access it.
https://github.com/docker/libkv
The DNS in Docker Swarm overlay networks works just like docker-compose bridge networks. Services resolve inside the same network by their service name. Other things like Swarm VIP and Routing Mesh make the whole solution slightly different but don't directly affect DNS resolution.
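For illustration, a small sketch of that behaviour in swarm mode (the network, service and image names are made up):

```bash
# An attachable overlay network and a replicated service on it.
docker network create -d overlay --attachable backend-net
docker service create --name api --replicas 3 --network backend-net my-api-image

# From any container attached to the same network, the service name
# resolves to a single virtual IP (VIP) that balances across replicas...
docker run --rm --network backend-net busybox nslookup api
# ...while "tasks.api" returns the individual task IPs instead.
docker run --rm --network backend-net busybox nslookup tasks.api
```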
The Swarm raft log isn't meant to be easily read, but it's little more than the definitions of the services and networks you create in Swarm. I've never needed to look at it directly in a production system.
Here's a 3.5-hour training video on all things Swarm (including lots of network details) from former Docker engineer Jerome Petazzoni and AJ Bowen.
Also, Laura Frank has some details from last year's DockerCon on how the raft log and consensus work, and might point to some tools if you want to look under the hood.

Cluster of forward proxies

I'm trying to figure out whether Docker Swarm or Kubernetes is a good choice for my use case.
Basically, I want to build a small cluster of forward proxies (via squid, nginx or a custom nodejs script), and be able to deploy/start/stop/purge them all together.
I should be able to access the proxy cluster via a single IP address, the manager should be able to load-balance requests across the nodes, and each proxy node must use a unique outgoing IP address.
I'm wondering:
Are Docker Swarm and/or Kubernetes the right way to go about it?
If so, should I set-up Docker Swarm and/or Kubernetes and its worker nodes (running the proxy) on a single dedicated server or separate virtual servers?
Is it also possible for all the cluster nodes to share file-system storage for caching, common config, etc.?
Any other tips to get this working would be appreciated.
Thanks!
Docker running in swarm mode should work well for this.
Run docker on a single dedicated server; I see no need for virtual servers. You could also run the swarm across multiple dedicated servers.
Docker secrets (https://docs.docker.com/engine/swarm/secrets/) work well for some settings and configurations. If you require significant storage, simply add a database service to your cluster.
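As a rough sketch of how a secret could carry one of those settings (the secret, service and image names are made up):

```bash
# Store a config value as a swarm secret and hand it to a service.
echo "some-proxy-setting" | docker secret create proxy_config -

docker service create --name proxy \
  --secret proxy_config \
  my-proxy-image
# Inside the containers the value appears as the file
# /run/secrets/proxy_config.
```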
Docker swarm mode fits your requirements quite well; requests are automatically balanced across your swarm and each service instance can be configured to have a unique address. You should check out the swarm mode tutorial: https://docs.docker.com/engine/swarm/swarm-tutorial/
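A minimal sketch of the single-entry-point part, assuming a squid-based image (the names and ports are made up, and giving each replica a unique outgoing IP is a separate concern):

```bash
docker swarm init

# Three proxy replicas behind one published port. The routing mesh
# answers on port 8080 of every node and balances across the replicas.
docker service create --name forward-proxy \
  --replicas 3 \
  --publish published=8080,target=3128 \
  my-squid-image
```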

service discovery in docker without using consul

I'm new to docker and microservices. I've started to decompose my web-app into microservices and currently, I'm doing manual configuration.
After some study, I came across docker swarm mode which allows service discovery. Also, I came across other tools for service discovery such as Eureka and Consul.
My main aim is to replace the IP addresses in curl calls with service names and to load-balance between multiple instances of the same service,
e.g. to turn curl http://192.168.0.11:8080/ into curl http://my-service.
I have to keep my services language independent.
Please suggest: do I need to use Consul with Docker Swarm for service discovery, or can I do it without Consul? What are the advantages?
With the new "swarm mode", you can use docker services to create clustered services across multiple swarm nodes. You can then access those same services, load-balanced, by using the service name rather than the node name in your requests.
This only applies to nodes within the swarm's overlay network. If your client systems are part of the same swarm, then discovery should work out-of-the-box with no need for any external solutions.
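A small sketch of that in-swarm case, reusing the asker's my-service name (the network and image names are made up):

```bash
# Both services join the same overlay network.
docker network create -d overlay app-net
docker service create --name my-service --replicas 2 --network app-net my-backend-image
docker service create --name client --network app-net my-client-image

# Inside any "client" task, this works and is load-balanced across
# the my-service replicas:
#   curl http://my-service:8080/
```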
On the other hand, if you want to be able to discover the services from systems outside the swarm, you have a few options:
For stateless services, you could use docker's routing mesh, which will make the service port available across all swarm nodes. That way you can just point at any node in the swarm, and docker will direct your request to a node that is running the service (regardless of whether the node you hit has the service or not).
Use an actual load balancer in front of your swarm services if you need to control routing or deal with different states. This could either be another docker service (e.g. HAProxy, Nginx) launched with the --mode global option to ensure it runs on all nodes (see the sketch after this list), or a separate load balancer like a Citrix NetScaler. You would need to have your service containers reconfigure the LB through their startup scripts or via provisioning tools (or add them manually).
Use something like consul for external service discovery. Possibly in conjunction with registrator to add services automatically. In this scenario you just configure your external clients to use the consul server/cluster for DNS resolution (or use the API).
You could of course just move your service consumers into the swarm as well. If you're separating the clients from the services in different physical VLANs (or VPCs etc) though, you would need to launch your client containers in separate overlay networks to ensure you don't effectively defeat any physical network segregation already in place.
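As a sketch of the second option above, a front-end proxy run as a global service (the service and network names are made up, and the proxy configuration that actually forwards traffic is omitted):

```bash
# One LB instance per node; external clients can hit port 80 on any node.
docker service create --name edge-lb \
  --mode global \
  --publish published=80,target=80 \
  --network app-net \
  nginx
# Because edge-lb sits on app-net, its config can refer to backend
# services by their service names.
```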
Service discovery (via DNS) has been built into Docker since version 1.12. When you create a custom network (like bridge, or overlay if you have multiple hosts) you can simply have the containers talk to each other by name, as long as they are part of the same network. You can also give containers an alias; lookups of an alias round-robin over the list of containers that share it. For a simple example see:
https://linuxctl.com/docker-networking-options-bridge
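A short sketch of the alias round-robin on a custom bridge network (the network, alias and image names are made up):

```bash
docker network create web-net

# Two containers share the alias "app"; lookups rotate between them.
docker run -d --network web-net --network-alias app my-web-image
docker run -d --network web-net --network-alias app my-web-image

docker run --rm --network web-net busybox nslookup app
```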
As long as you are using a user-defined bridge network and creating your containers inside that network, service discovery is available to you out of the box.
You will need help from other tools once your infrastructure starts to span multiple servers with microservices distributed across them.
Swarm is a good tool to start with; however, I would stick with Consul for my production loads if it comes to an IaaS provider like Amazon.

Service Fabric with Windows Containers

I was wondering how the containers running inside a service fabric cluster communicate with each other. I have two containers, one backend and one front end. I need to pass the IP address/DNS of the backend node to the front end container. Does anybody know how I can do this?
Other docker orchestration tools like swarm mode and Kubernetes use DNS and you just pass the service name.
As of now, you need to use environment variables to do this. However, we'll be adding DNS support in an upcoming release to make this easier.
There is specific documentation about container discovery here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-container
I have not tried it, but it seems to be what you are looking for, i.e. container-to-container discovery.

How to connect to containers on docker Overlay network from an external machine

Is there any known solution for enabling an external machine to connect to the containers on a docker swarm overlay network?
The question is legitimate (see the example below), but I do not know of a simple solution for it. I'll propose a possible solution offhand and will test and update later.
Suppose you have a docker overlay network with many Kafka brokers running on a couple of nodes (container hosts). All Kafka brokers communicate with each other beautifully.
When a Kafka client needs to access a Kafka broker, it connects to it (somehow, supposedly even through the Swarm service's external port), but that broker may reply that the data is on another broker, and here is that other broker's IP (on the overlay network)... meaning the Kafka client must be able to reach all Kafka brokers (on the overlay network).
You can do this easily if everything is containerized, but what if not?
You can do this with SDN, or with an offhand solution:
A container with two networks acting as a router, with one "leg" on the overlay network and the other L2-bridged to where that other VM or host lives, and route through it. You'd have to use a Swarm constraint to run it where the network you want to reach the overlay from is available. That should work!
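A very rough, untested sketch of that router idea (all names, subnets and addresses below are placeholders):

```bash
# Overlay network the containers live on, plus a macvlan network that
# is L2-adjacent to the external machine.
docker network create -d overlay --attachable app-net
docker network create -d macvlan \
  --subnet=192.168.50.0/24 -o parent=eth0 ext-net

# The "router" container gets a leg in each network and IP forwarding.
docker run -d --name overlay-router --cap-add NET_ADMIN \
  --sysctl net.ipv4.ip_forward=1 \
  --network app-net alpine tail -f /dev/null
docker network connect ext-net overlay-router

# On the external machine, route the overlay subnet via the router's
# ext-net address (return routes on the overlay side are needed too):
#   ip route add 10.0.1.0/24 via 192.168.50.2
```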
If someone has another clean/clear solution, I'm very interested too.
