Quite possibly a very trivial question but I can't find anything in the documentation about a feature like this. As we know from the routing mesh documentation:
All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.
However, I would like some nodes not to participate in the routing mesh while still hosting tasks for the service.
The configuration I'm trying to achieve looks a bit like this:
I have a single service, hello-world, with three instances, one on each node.
In this example, I would like only node-1 and node-2 to expose the ingress network externally. However, when I visit 10.0.0.3, it still exposes ports 80 and 443, because the node has to be attached to the ingress network in order to run the hello-world container, and I would like this not to be the case.
In essence, I'd like to be able to run containers for a service that publishes ports 80 & 443 on 10.0.0.3 without being able to access it by visiting 10.0.0.3 in a web browser. Is there any way to configure this? Even if there's no container running on the node, it'll still forward traffic to a container that is running.
Thank you!
The short answer to your specific question is no, there is no supported way to selectively enable/disable the ingress network on specific nodes for specific overlay networks.
But based on what you're asking to do, the expected model for using only specific nodes for incoming traffic is to control which nodes receive the traffic, not to shut off ports on specific nodes...
In a typical 6-node swarm, you'd separate out your managers so they're protected in a different subnet from the DMZ (e.g. a subnet behind the workers). You'd use placement constraints to ensure your app workloads are only assigned to worker nodes, and those nodes are the only ones in the VLAN/Security Group/etc. that is accessible to user/client traffic.
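For example, a web service constrained to worker nodes might be created roughly like this (the service name and image are placeholders):
docker service create --name web \
  --constraint node.role==worker \
  --replicas 3 \
  --publish 80:80 --publish 443:443 \
  nginx:alpine
The published ports are still reachable on every node via the routing mesh; the external restriction comes from which nodes your firewall/LB actually sends traffic to.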
Most prod designs of Swarm recommend protecting your managers (which manage the orchestration and scheduling of containers, store secrets, etc.) from external traffic.
Why not put your proxies on the workers in a client-accessible network, and have those nodes be the only ones in the DMZ/behind the external LB?
Note that if you only allow firewall/LB access to some nodes (e.g. just 3 workers) then the other nodes that don't receive external incoming traffic are effectively not using their ingress networks, which achieves your desired result. The node that receives the external connection uses its VIP to route the traffic directly to the node that runs the published container port.
Related
I'm trying to implement a cluster of containerised applications in production using Docker in swarm mode.
Let me describe a very minimalist scenario.
All I have is just 5 AWS EC2 instances.
None of these nodes have a public IP assigned, and all have private IPs that are part of a subnet.
For example,
Manager Nodes
172.16.50.1
172.16.50.2
Worker Nodes
172.16.50.3
172.16.50.4
172.16.50.5
With the above infrastructure, I have created a docker swarm with the first node's IP (172.16.50.1) as the --advertise-addr, so that the other 4 nodes join the swarm as manager or worker with their respective tokens.
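For reference, that setup boils down to something like the following (tokens elided):
# on 172.16.50.1
docker swarm init --advertise-addr 172.16.50.1
docker swarm join-token manager   # prints the join command for the second manager
docker swarm join-token worker    # prints the join command for the workers
# on 172.16.50.2 (manager) and 172.16.50.3-5 (workers), run the printed command, e.g.:
docker swarm join --token <token> 172.16.50.1:2377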
I didn't want to overload the manager nodes by making them take on the role of worker nodes too. (Is this a good idea, or is it resource under-utilization?)
Since the nodes have 4 cores each, I'm hosting 9 replicas of my web application, distributed across the 3 worker nodes, each running 3 containers hosting my web app.
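A sketch of how such a service might be created (the image name and published port here are hypothetical):
docker service create --name webapp \
  --replicas 9 \
  --constraint node.role==worker \
  --publish 8080:80 \
  mywebapp:latest   # hypothetical image; swarm spreads the 9 tasks across the 3 workers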
Now with this setup in hand, how should I go about exposing the entire docker swarm cluster with a VIP (virtual IP) to the external world for consumption?
Please validate my thoughts below:
1. Should I set up a classic load balancer, such as an httpd, nginx, or HAProxy based reverse proxy with a public IP assigned, and have it balance the load across the above 5 nodes where our docker swarm is deployed?
One downside I see here is that this reverse proxy would be a single point of failure. Any ideas how it could be made fault-tolerant/highly available? Should I try an Anycast solution?
2. Going for an AWS ALB/ELB which would route the traffic to the above 5 nodes where our swarm is.
3. If keeping a separate load balancer is the way to go, then what are docker swarm's load balancing and service discovery really about?
What is docker swarm's answer for exposing a single virtual IP or host name to external clients so they can access services in the swarm cluster?
Docker swarm touts a lot about overlay networks, but I'm not sure how they relate to my issue of exposing the cluster via a VIP to clients on the internet. Should we always keep the load balancer aware of the IP addresses of the nodes that join the docker swarm later?
Please shed some light!
On further reading, I understand that the overlay network we create on the swarm manager node only serves inter-container communication.
The difference from the other networking modes like bridge, host, and macvlan is that those enable communication among containers within a single host, while the overlay network also facilitates communication among containers deployed on different hosts (even in different subnets), i.e. multi-host container communication.
With this knowledge in hand, my plan is to expose the swarm to the world via a single public IP assigned to a load balancer that would distribute requests to all the swarm nodes. This is just my understanding at a high level.
This is where I need your input and thoughts, please: what is the industry standard for handling this?
This is probably a simple question but I am wondering how others do this.
I have a Docker Swarm Mode cluster with 3 managers. Let's say they have the IPs
192.168.1.10
192.168.1.20
192.168.1.30
I can access my web container through any of the IPs, thanks to the internal routing mesh. But if the host whose IP I use to access my web container goes down, I obviously won't get routed to the container any more. So I need some kind of single DNS record or something that points me to an active address in my cluster.
How do you guys do this? Should I add all three IPs as A records in DNS? Or put a load balancer / reverse proxy in front of the cluster?
I suppose there is more than one solution, would be nice to hear how others solved this :-)
Thanks a lot
Assuming your web server was started something like this:
docker service create --replicas=1 -p 80:80 nginx:alpine
The cluster will attempt to reschedule that one replica should it go down for any reason. With only 1 replica, you may experience some downtime between the old task going away and the new task coming up somewhere else.
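Running more than one replica narrows that gap; assuming the service had been given a name such as web (the example above doesn't set one), it could be scaled with:
docker service scale web=3   # or: docker service update --replicas 3 web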
Part of the design goal of the routing mesh feature was indeed to simplify load-balancing configuration. Because all members of the cluster listen on port 80 in my example, your load balancer can simply be configured to listen to all three IPs. Assuming the load balancer can properly detect the one node going down, the task will get rescheduled somewhere in the cluster, and either of the two remaining nodes will be able to route traffic via the routing mesh.
I'm working on a project to set up a cloud architecture using docker-swarm. I know that with swarm I could deploy replicas of a service which means multiple containers of that image will be running to serve requests.
I also read that docker has an internal load balancer that manages this request distribution.
However, I need help in understanding the following:
Say I have a container that exposes a service as a REST API, or say it's a web app. And if I have multiple containers (replicas) deployed in the swarm, and I have other containers (running some apps) that talk to this HTTP/REST service.
Then, when I write those apps, which IP:PORT combination do I use? Is it any of the worker node IPs running these services? Will doing so take care of distributing the load appropriately, even amongst other workers/managers running the same service?
Or should I call the manager which in turn takes care of routing appropriately (even if the manager node does not have a container running this specific service)?
Thanks.
when I write those apps which IP:PORT combination do I use? Is it any of the worker node IP's running these services?
You can use any node that is participating in the swarm, even if no replica of the service in question exists on that node.
So you will use a Node:HostPort combination. The ingress routing mesh will route the request to an active container.
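For example (the image, published port, and node IP here are hypothetical):
docker service create --name api --replicas 3 --publish 8080:80 myapi:latest
curl http://10.0.1.12:8080/   # 10.0.1.12 can be any swarm node, even one not running an api task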
Will doing so take care of distributing the load appropriately even amongst other workers/manager running the same service?
The ingress routing mesh will load balance in round-robin fashion by default.
Clients could use DNS round robin to reach the service on the docker swarm nodes, but then the classic DNS caching problem occurs. To avoid that, we can use an external load balancer like HAProxy.
Some important additional information to the existing answer:
The advantage of using a proxy (HAProxy) in front of docker swarm is that the swarm nodes can reside on a private network that is accessible to the proxy server but is not publicly accessible. This will make your cluster more secure.
If you are using AWS VPC, you can create a private subnet and place your swarm nodes inside the private subnet and place the proxy server in public subnet which can forward the traffic to the swarm nodes.
When you access the HAProxy load balancer, it forwards requests to nodes in the swarm. The swarm routing mesh routes the request to an active task. If, for any reason the swarm scheduler dispatches tasks to different nodes, you don’t need to reconfigure the load balancer.
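As a rough illustration (the IPs, published port, and health checks here are assumptions, not a complete config), the HAProxy side can be as simple as a backend pointing at every swarm node's published port:
frontend http_in
    bind *:80
    mode http
    default_backend swarm_nodes
backend swarm_nodes
    mode http
    balance roundrobin
    server node1 10.0.1.11:8080 check
    server node2 10.0.1.12:8080 check
    server node3 10.0.1.13:8080 check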
For more details please read https://docs.docker.com/engine/swarm/ingress/
I'm new to docker and microservices. I've started to decompose my web-app into microservices and currently, I'm doing manual configuration.
After some study, I came across docker swarm mode which allows service discovery. Also, I came across other tools for service discovery such as Eureka and Consul.
My main aim is to replace the IP addresses in curl calls with a service name, and to load balance between multiple instances of the same service.
e.g. from curl http://192.168.0.11:8080/ to curl http://my-service
I have to keep my services language independent.
Please suggest: do I need to use Consul with docker swarm for service discovery, or can I do it without Consul? What are the advantages?
With the new "swarm mode", you can use docker services to create clustered services across multiple swarm nodes. You can then access those same services, load-balanced, by using the service name rather than the node name in your requests.
This only applies to nodes within the swarm's overlay network. If your client systems are part of the same swarm, then discovery should work out-of-the-box with no need for any external solutions.
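For example, assuming a user-defined overlay network and hypothetical image names:
docker network create --driver overlay appnet
docker service create --name my-service --network appnet --replicas 3 my-image:latest
docker service create --name client --network appnet my-client:latest
# inside a client task, "my-service" resolves to the service's virtual IP, e.g.:
#   curl http://my-service:8080/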
On the other hand, if you want to be able to discover the services from systems outside the swarm, you have a few options:
For stateless services, you could use docker's routing mesh, which will make the service port available across all swarm nodes. That way you can just point at any node in the swarm, and docker will direct your request to a node that is running the service (regardless of whether the node you hit has the service or not).
Use an actual load balancer in front of your swarm services if you need to control routing or deal with different states. This could either be another docker service (e.g. haproxy, nginx) launched with the --mode global option to ensure it runs on all nodes, or a separate load balancer like a Citrix NetScaler. You would need to have your service containers reconfigure the LB through their startup scripts or via provisioning tools (or add them manually).
Use something like consul for external service discovery. Possibly in conjunction with registrator to add services automatically. In this scenario you just configure your external clients to use the consul server/cluster for DNS resolution (or use the API).
You could of course just move your service consumers into the swarm as well. If you're separating the clients from the services in different physical VLANs (or VPCs etc) though, you would need to launch your client containers in separate overlay networks to ensure you don't effectively defeat any physical network segregation already in place.
Service discovery (via DNS) has been built into docker since version 1.12. When you create a custom network (like bridge, or overlay if you have multiple hosts), you can simply have the containers talk to each other by name as long as they are part of the same network. You can also give each container an alias; DNS will round-robin across the containers that share the same alias. For a simple example, see:
https://linuxctl.com/docker-networking-options-bridge
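A minimal sketch of that alias behaviour (container and network names are arbitrary):
docker network create mynet
docker run -d --name web1 --network mynet --network-alias web nginx:alpine
docker run -d --name web2 --network mynet --network-alias web nginx:alpine
docker run --rm --network mynet alpine nslookup web   # returns both container IPs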
As long as you are using a user-defined bridge network (not the default bridge) and creating your containers inside that network, service discovery is available to you out of the box.
You will need help from other tools once your infrastructure starts to span multiple servers with microservices distributed across them.
Swarm is a good tool to start with; however, I would prefer to stick with Consul when it comes to an IaaS provider like Amazon for my production loads.
Elasticsearch is designed to run in cluster mode, all I have to do is to define the relevant node IPs in the cluster via environment variable and as long as network connectivity is available it will connect and join the other nodes to the cluster.
I have 3 nodes, 1 is acting as the docker swarm manager and the other two are workers. I have initialized the manager and joined the worker nodes and everything looks ok from that standpoint.
Now I'm trying to run the elasticsearch container in a way that will allow me to join all nodes into the same elasticsearch cluster. However, I want the nodes to join using their overlay network interface, and that means I need to know the containers' internal IP addresses at the time of running the docker service create command. How can I do this? Do I have to use something like Consul to achieve this?
Some clarifications:
I need to know, at the time of service creation, the IP addresses (or DNS names) of all Elasticsearch participants so I can start the cluster correctly. This has to be at the time of creation and not afterwards. Also, as I understand it, I could expose ports 9200/9300 for all services, work with the external machine IPs, and get it to work, but I would like to use the overlay network for all of this communication (I thought this is what swarm mode is for).
Only a partial solution here.
So, when attaching your services to a custom overlay network, you do indeed have access to Docker's built-in service discovery feature. I'll detail the networking features of Docker Swarm mode before trying to tie them to your problem.
I'll be using the distinct terms service and task: a service could be elasticsearch, whereas a task is a single instance of that elasticsearch service.
Docker networking
The idea is that for each service you create, docker assigns a virtual IP (VIP) and a custom DNS alias. You can retrieve this VIP using the docker service inspect myservice command.
However, there are two modes for attaching a service to an overlay network, dnsrr and vip, selected with the --endpoint-mode option of docker service create.
The vip mode (I believe it is the default one, or at least the most used) assigns the virtual IP to the service's DNS alias. This means that doing an nslookup servicename would return a single VIP that, behind the scenes, is linked to one of your containers in a round-robin fashion. But there is also a special DNS alias that lets you access all of your instances' IPs (all of your tasks' IPs): tasks.myservice.
So in vip mode you can retrieve all of your task IPs using a simple nslookup tasks.myservice, where myservice is a service name.
The other mode is dnsrr. This mode simply gets rid of the VIP and connects the DNS alias to the different tasks (i.e. service instances) in a round-robin way. This way, you simply do an nslookup myservice to retrieve the different service instances' IPs.
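In command form, the difference looks roughly like this (service, network, and image names are placeholders):
# vip (default): one virtual IP, plus tasks.<name> to list the task IPs
docker service create --name myservice --network mynet my-image:latest
# from a container on the same network:
#   nslookup myservice         -> returns the single VIP
#   nslookup tasks.myservice   -> returns one address per running task
# dnsrr: the service name itself resolves to the individual task IPs
docker service create --name myservice2 --network mynet --endpoint-mode dnsrr my-image:latest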
Elasticsearch clustering
OK, so first of all, I'm not really familiar with the way elasticsearch lets you cluster. From what I understood from your question, when running the elasticsearch binary you need to give it, as a parameter, the addresses of all the other nodes it needs to cluster with.
So what I would do is create a custom Elasticsearch image, probably based on the one from the default library, to which I would add a custom entrypoint that first runs a script to retrieve the other tasks' IPs.
I believe that staying in vip mode is suitable for you, since the tasks.myservice DNS alias is available. You'll then need to parse the output to retrieve the task IPs (and probably remove your own). Then you'll be able to save them in a config file or environment variable, or use them as a runtime option for your elasticsearch binary.
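A very rough sketch of such an entrypoint (everything here, from the service-name variable to the Elasticsearch discovery setting, is an assumption to illustrate the idea, not a tested script):
#!/bin/sh
# Resolve the sibling task IPs via the tasks.<service> alias and pass them to
# Elasticsearch as its unicast host list (setting name assumes an ES 5.x-era image).
SERVICE_NAME="${SERVICE_NAME:-elasticsearch}"
MY_IP="$(hostname -i)"   # assumed to return the task's overlay IP
PEERS="$(getent hosts "tasks.${SERVICE_NAME}" | awk '{print $1}' | grep -v "^${MY_IP}$" | paste -sd, -)"
exec elasticsearch "-Ediscovery.zen.ping.unicast.hosts=${PEERS}"
Note that the first task to start will see no peers yet, so a real version would need retries or a minimum-node setting.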
Edit: to create a custom overlay network, you will need to use the docker network create command, and then use the --network option of docker service create.
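For example (network and image names are placeholders):
docker network create --driver overlay esnet
docker service create --name elasticsearch --network esnet --replicas 3 my-elasticsearch-image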
This answer is mainly based on the Swarm mode networking documentation.