I have my AWS load balancer set up to route any request with "/background" in the URL to the node "background". I have 4 nodes in total behind the load balancer, and it appears Swarm load balances as well. So while the AWS load balancer might route to the node "background", another node might process the HTTP request.
Each node has a web server on port 80 running the same application, but I want URLs with /background to be processed on the "background" node. Is this possible?
I've set up a Docker swarm with 2 nodes, 1 manager and 1 worker. I have 1 Docker image that runs a Jetty server on port 8080. When I start the service on the manager node with 4 replicas, I see 2 containers spawned on each node. When I attempt to access my application from the browser on the individual hosts, I notice that the request is received by only 1 of the containers on each node.
Without setting up an external load balancer (like HAProxy), is it possible to handle the requests in a round-robin fashion between the containers on a given host, such that my first request goes to one container and the next request goes to the other container on the same host?
Thanks
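For reference, a setup like the one described above might be created roughly as follows; the service and image names are assumptions, not taken from the question.

    # Create the service with 4 replicas; port 8080 is published through the
    # ingress routing mesh, so every node in the swarm listens on 8080.
    docker service create --name jetty-app --replicas 4 --publish published=8080,target=8080 my-jetty-image:latest

    # Check how the tasks are spread across the manager and the worker.
    docker service ps jetty-app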
I am running 2 services in AWS ECS Fargate. One runs nginx containers behind the Application Load Balancer, and the other runs a Node.js application. The Node application uses service discovery, and the nginx containers proxy to the "service discovery endpoint" of the Node application containers.
My issue is:
After scaling the Node application containers up from 1 to 2, nginx is unable to send requests to the newly spawned container; it only sends requests to the old container. After a restart/redeploy of the nginx containers, they are able to send requests to the new containers.
I tried a "0" DNS TTL for the service discovery endpoint, but I am facing the same issue.
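For context, the pattern that typically causes this looks roughly like the snippet below; the hostname and port are hypothetical, not taken from the question. The name inside the upstream block is resolved only once, when nginx starts.

    # Hypothetical nginx config illustrating the problem: the service discovery
    # hostname in the upstream block is resolved once at startup, so containers
    # registered later under the same name are never seen by nginx.
    upstream node_app {
        server nodeapp.local:3000;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://node_app;
        }
    }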
Nginx does not re-resolve DNS at runtime if your server is specified as part of an upstream group, or in certain other situations; see this SF post for more details. This means that Nginx never becomes aware of new containers being registered for service discovery.
You haven't posted your Nginx config, so it's hard to say what you can do there. For proxy_pass directives, some people suggest using variables to force runtime resolution.
Another idea might be to expose an HTTP endpoint from the Nginx container that listens for connections and reloads the Nginx config. This endpoint can then be triggered by a Lambda when new containers are registered (the Lambda is in turn triggered by CloudWatch Events). Disclaimer: I haven't tried this in practice, but it might work.
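As an illustration of the variable approach mentioned above, a minimal sketch follows; the service discovery hostname, port, and resolver address are assumptions (169.254.169.253 is the Amazon-provided VPC DNS endpoint).

    # Hypothetical nginx config: using a variable in proxy_pass (instead of an
    # upstream block) forces nginx to re-resolve the name at request time,
    # using the configured resolver.
    server {
        listen 80;

        # Re-resolve via the VPC DNS; valid=10s limits how long the answer is cached.
        resolver 169.254.169.253 valid=10s;

        location / {
            set $backend "http://nodeapp.local:3000";
            proxy_pass $backend;
        }
    }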
I have a web app running in a Docker container behind a load balancer, and in one of the responses I'm returning a callback URL. The problem is that the callback URL resolves to the Docker container name instead of the load balancer address (which is public).
How can I get the load balancer address to construct the callback URL?
I'm working on a project to set up a cloud architecture using Docker Swarm. I know that with Swarm I can deploy replicas of a service, which means multiple containers of that image will be running to serve requests.
I also read that docker has an internal load balancer that manages this request distribution.
However, I need help in understanding the following:
Say I have a container that exposes a service as a REST API, or say it's a web app. If I have multiple containers (replicas) deployed in the swarm, and I have other containers (running some apps) that talk to this HTTP/REST service:
Then, when I write those apps, which IP:PORT combination do I use? Is it any of the worker node IPs running these services? Will doing so take care of distributing the load appropriately, even amongst the other workers/managers running the same service?
Or should I call the manager, which in turn takes care of routing appropriately (even if the manager node does not have a container running this specific service)?
Thanks.
when I write those apps, which IP:PORT combination do I use? Is it any of the worker node IPs running these services?
You can use any node that is participating in the swarm, even if no replica of the service in question exists on that node.
So you will use the Node:HostPort combination. The ingress routing mesh will route the request to an active container.
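As a quick sketch (the service name, image, and ports are assumptions), publishing a port through the routing mesh looks roughly like this:

    # Port 8080 on every swarm node is mapped to port 80 in the containers
    # through the ingress routing mesh.
    docker service create --name web --replicas 3 --publish published=8080,target=80 nginx

    # Any node in the swarm will answer on 8080, even a node that runs no
    # replica of "web"; the mesh forwards the request to an active task.
    curl http://<any-node-ip>:8080/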
Will doing so take care of distributing the load appropriately, even amongst the other workers/managers running the same service?
The ingress routing mesh will do round robin by default.
Clients can then use DNS round robin to reach the service on the Docker swarm nodes, but the classic DNS caching problem will occur. To avoid that, use an external load balancer like HAProxy.
Some important additional information to the existing answer:
The advantage of using a proxy (HAProxy) in front of Docker Swarm is that the swarm nodes can reside on a private network that is accessible to the proxy server but is not publicly accessible. This makes your cluster more secure.
If you are using an AWS VPC, you can create a private subnet, place your swarm nodes inside the private subnet, and place the proxy server in a public subnet that can forward the traffic to the swarm nodes.
When you access the HAProxy load balancer, it forwards requests to nodes in the swarm. The swarm routing mesh routes the request to an active task. If, for any reason, the swarm scheduler dispatches tasks to different nodes, you don't need to reconfigure the load balancer.
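As an illustration only (the backend addresses are hypothetical private-subnet IPs and the published port is an assumption), the HAProxy side of this setup could look roughly like this:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http_in
        bind *:80
        default_backend swarm_nodes

    backend swarm_nodes
        balance roundrobin
        # Hypothetical private IPs of the swarm nodes. Any node can be listed,
        # because the routing mesh forwards each request to an active task.
        server node1 10.0.1.10:8080 check
        server node2 10.0.1.11:8080 check
        server node3 10.0.1.12:8080 check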
For more details, please read https://docs.docker.com/engine/swarm/ingress/
What's the best way to do load balancing for a Rails app in a non-AWS setup? Nginx/HAProxy currently seems like the best option.
A 2-node setup, where one node also has HAProxy on it:
Load balancer: nginx listens on ports 80/443 and proxies requests to HAProxy on port 8080 on the same server, which load balances between the nodes.
Nodes: nginx on each node listens for requests coming from HAProxy and processes them accordingly.
Vinny
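A minimal sketch of the setup described above; the node IPs and ports are placeholders, and the exact port assignments may need adjusting when HAProxy shares a machine with one of the app nodes.

    # nginx on the load-balancer node: listens on 80 and hands requests to
    # HAProxy on 8080 on the same machine.
    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    # haproxy.cfg on the same machine: balances between the two app nodes,
    # whose nginx instances listen on 8081 in this sketch.
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend rails_in
        bind 127.0.0.1:8080
        default_backend rails_nodes

    backend rails_nodes
        balance roundrobin
        server node1 10.0.0.11:8081 check
        server node2 10.0.0.12:8081 check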
Set up HAProxy on a dedicated top layer rather than on a shared node. This makes things easier to manage if you want to point your application at a single IP. The load is handled at the top layer, as opposed to a shared node that also carries the HAProxy configuration, and an HAProxy server on an independent setup will never be burdened by the application's own load.