I am a little confused about the difference between these two.
Docker swarm provides service discovery for the services that run in it.
In a microservice architecture, each microservice runs in its own container. Do I need a separate service discovery mechanism, such as the one provided by an API gateway or a framework like Eureka or Zookeeper?
Is there any added advantage to using a specific service discovery framework beyond what Docker Swarm provides?
Do I need a separate service discovery mechanism, such as the one provided by an API gateway or a framework like Eureka or Zookeeper?
If your microservices are deployed as Docker Swarm services within the same swarm, you don't need an additional service discovery mechanism.
Each service can reach any other by its service name.
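As a rough illustration (the network, service names, and images below are placeholders, not from the question), two swarm services attached to the same overlay network resolve each other by name through Swarm's built-in DNS:

    # Placeholder names and images, for illustration only
    docker network create --driver overlay my-net
    docker service create --name api --network my-net my-api-image
    docker service create --name web --network my-net my-web-image
    # From inside any "web" task, the "api" service resolves by name:
    #   curl http://api:8080/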
I am new to Docker Swarm. I deployed a Jenkins service on a Docker Swarm cluster with 3 manager and 2 worker nodes. I can access the service using the node port, but I want to access it from an outside network through an external load balancer. If anyone has a reference, please help me with this.
You specified an external load balancer, so you would do something like:
- Deploy HashiCorp Consul, either as part of your app stack or as a swarm service, to your swarm.
- Integrate your services with Consul so they publish their external IPs and ports to it (see the sketch after this list). The services would be set up with host-mode networking rather than Docker's ingress networking.
- Integrate your external load balancer with Consul so it can deliver traffic to the services.
- Point your external DNS at your external load balancer.
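As a hedged sketch of the registration step (the service name, address, port, and health-check URL below are placeholders), a service running with host-mode networking could register its externally reachable address with the local Consul agent through the agent HTTP API:

    # Placeholder values; adjust to your own service and addresses
    curl -X PUT http://localhost:8500/v1/agent/service/register \
      -d '{
            "Name": "jenkins",
            "Address": "10.0.1.15",
            "Port": 8080,
            "Check": { "HTTP": "http://10.0.1.15:8080/login", "Interval": "10s" }
          }'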
I have a Docker swarm mode orchestration on my servers, and as part of my business requirements I run a custom service discovery (it runs in the swarm too).
After starting, each service calls the register method on the service discovery and reports its contact information.
The service discovery can then reverse-proxy traffic and balance the load between instances using the registered IP and port.
My problem is that when an instance (running in a container) calls the discovery register method, its remote address is not the real one (i.e. it does not match hostname -i), so the service discovery cannot find it on the network.
Any ideas?
One option would be for the service discovery to also participate in the swarm. Then it should be able to find the instances running in containers that are part of the swarm.
Another would be to run the containers with --net=host, though this may defeat the reason for having them in a swarm in the first place.
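For the second option, a minimal sketch (the image name is a placeholder) of running the instance with host networking so the address it reports matches the host's:

    # Placeholder image; with --net=host the container shares the host's
    # network namespace, so the address the discovery service sees as the
    # remote address matches the host address the instance would report.
    docker run -d --net=host my-service-image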
We built a Docker Swarm cluster across several cloud providers.
Everything works, but we now have new constraints and need to restrict network communication between the cloud providers.
Is it possible to build a Docker Swarm cluster with "local load balancing"? What I mean by this question is, is it possible to use:
- one cloud provider for Swarm managers, with network access to Swarm workers;
- two cloud providers for Swarm workers, with network access to the Swarm managers, but no network access between these cloud providers?
In that case, would the load balancing still work if someone sends a web request to one of the workers?
Please find below a drawing of the targeted architecture.
I'm new to Docker and microservices. I've started to decompose my web app into microservices, and currently I'm doing manual configuration.
After some study, I came across Docker swarm mode, which provides service discovery. I also came across other service discovery tools such as Eureka and Consul.
My main aim is to replace the IP addresses in curl calls with service names and to load balance between multiple instances of the same service,
e.g. turn curl http://192.168.0.11:8080/ into curl http://my-service.
I have to keep my services language independent.
Please suggest: do I need to use Consul with Docker Swarm for service discovery, or can I do it without Consul? What are the advantages?
With the new "swarm mode", you can use docker services to create clustered services across multiple swarm nodes. You can then reach those services, load-balanced, by using the service name rather than a node name in your requests.
This only applies within the swarm's overlay network. If your client systems are part of the same swarm, then discovery works out of the box with no need for any external solution.
On the other hand, if you want to be able to discover the services from systems outside the swarm, you have a few options:
- For stateless services, you could use Docker's routing mesh, which makes the service port available across all swarm nodes. That way you can point at any node in the swarm, and Docker will direct your request to a node that is running the service (regardless of whether the node you hit has the service or not), as sketched below.
- Use an actual load balancer in front of your swarm services if you need to control routing or deal with different states. This could either be another Docker service (e.g. HAProxy, Nginx) launched with the --mode global option to ensure it runs on all nodes, or a separate load balancer like a Citrix NetScaler. You would need to have your service containers reconfigure the LB through their startup scripts or via provisioning tools (or add them manually).
- Use something like Consul for external service discovery, possibly in conjunction with Registrator to add services automatically. In this scenario you just configure your external clients to use the Consul server/cluster for DNS resolution (or use the API).
You could of course just move your service consumers into the swarm as well. If you're separating the clients from the services into different physical VLANs (or VPCs, etc.), though, you would need to launch your client containers in separate overlay networks to ensure you don't effectively defeat any physical network segregation already in place.
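As a minimal sketch of the routing-mesh option above (the service name, replica count, image, and ports are arbitrary examples):

    # Publish port 80 of the service on port 8080 of every swarm node
    docker service create --name web --replicas 3 \
      --publish published=8080,target=80 nginx
    # Any node in the swarm now answers on 8080 and forwards to a running task:
    #   curl http://<any-node-ip>:8080/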
Service discovery (via DNS) has been built into Docker since version 1.12. When you create a custom network (such as a bridge, or an overlay if you have multiple hosts), containers can simply talk to each other by name as long as they are part of the same network. You can also give containers an alias; Docker will round-robin across all containers sharing that alias. For a simple example see:
https://linuxctl.com/docker-networking-options-bridge
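A minimal illustration of that behaviour (the network name, container names, alias, and image are placeholders):

    # User-defined network with two containers sharing the alias "db"
    docker network create my-bridge
    docker run -d --name db1 --network my-bridge --network-alias db redis
    docker run -d --name db2 --network my-bridge --network-alias db redis
    # Other containers on my-bridge resolve "db" to both instances:
    docker run --rm --network my-bridge alpine nslookup db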
As long as you create your containers inside a user-defined bridge network, service discovery is available to you out of the box.
You will need help from other tools once your infrastructure starts to span multiple servers with microservices distributed across them.
Swarm is a good tool to start with; however, I would stick with Consul for production loads on any IaaS provider such as Amazon.
I am having trouble understanding the need for a separate service discovery server when we could register the slave node with the master node at slave start-up through whatever protocol. Hosting another service seems redundant to me.
Docker Swarm is there to create a cluster of hosts running Docker and schedule containers across the cluster.
It does not include service discovery, which is provided by a backend store such as etcd, Consul, or ZooKeeper.
The first problem: service registration and discovery is an infrastructure concern, not an application concern.
The second problem: implementing service registration and discovery when infrastructure and application implementation are mutually agnostic is tough.
DockerCon made that distinction clear this morning (Nov. 16th, 2015) with the "Docker Stack":
(Graphics from #laurelcomics)
Docker networking solves these problems by backing an interface (DNS) with pluggable infrastructure components that adhere to a common KV interface.
You can see consul.io used in:
"Easy routing and service discovery with Docker, Consul and nginx"
"Docker Overlay Networks: That was Easy"
"Docker DNS getaddrinfo ENOTFOUND"
That means:
Consul is a KV (Key/Value) store which can be plugged into Swarm in order to manage the service discovery aspect.
Swarm is the access layer, which is usually the layer that contains a gateway or routing component that allows others to actually reach your services.
(Image from the "Easy routing and service discovery with Docker, Consul and nginx" article written by Ladislav Gazo)
The goal is to isolate what is an infrastructure concern (Discovery service) in its own container, separate from a dev tool concern (Swarm).
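For reference, a hedged sketch of that setup, with standalone Swarm pointed at a Consul KV backend (the IPs, ports, and the progrium/consul image are assumptions, reflecting period examples rather than a prescribed configuration):

    # Consul as the discovery/KV backend
    docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap
    # Swarm manager using the consul:// discovery backend
    docker run -d -p 4000:4000 swarm manage -H :4000 consul://<consul-ip>:8500
    # Each node joins through the same backend
    docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500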