Internal working of Netflix Zuul with Netflix Eureka - netflix-zuul

How exactly does Netflix Zuul work along with Eureka for routing and service discovery? From my understanding, a service instance registers/deregisters itself with Eureka with its IP and port. When a request arrives at Zuul (the gateway for end-user requests), it runs the request through its filters (pre/route/post) to decide whether it has to be routed. If a filter matches a particular request, does Zuul then send a request to Eureka asking for the list of IP addresses and ports of the service instances that can serve it?
If that is how it works, what kind of requests reach Zuul? If my filter name is "example", would it be HTTP GET ipAddr:port/example? (The IP address and port being those of the Zuul instance.)
I haven't been able to find any article that describes the exact flow of messages between the two services, in terms of the kinds of messages that are passed. It would be great if someone could share some resources as well so I can get a better understanding.
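For context, the kind of Zuul-side configuration I have in mind looks roughly like this (a minimal sketch; the registry URL, route name, and serviceId are hypothetical):

# Eureka client settings on the Zuul gateway (hypothetical registry URL)
eureka.client.serviceUrl.defaultZone=http://eureka-host:8761/eureka/
# Route: requests to /example/** on the gateway are forwarded to instances
# registered in Eureka under the serviceId "example-service"
zuul.routes.example.path=/example/**
zuul.routes.example.serviceId=example-service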

Related

Pointing an external URL to always go to same instance of a Docker container

I have the following use case that I would appreciate any input on. I have a Docker Swarm running with Traefik pointed at it for ingress and routing. Currently I have it as a single service defined to have 6 replicas of the same image, which works out to two containers on each of three nodes.
The containers basically host a GraphQL server, and my requirement is that, depending on which client the request is coming from, it always goes to the same specific container (i.e., replica). As an example, say I have user1 and user2 at client1 and user3 and user4 at client2: if user1 makes a request and it goes to replica1, then if user2 makes a request it MUST go to replica1 as well. So I could take a numeric hash of the client id (client1), mod it by 6, and decide which replica it goes to, and in that way make sure any subsequent call from any user of that client id goes to the same replica. The information about which client the call is coming from is encoded in a JWT token that the user sends with the request.
Any idea how I would go about changing my Docker Swarm setup to implement this? My best guess is to change the swarm to not be 6 replicas and instead define each container as a separate service with its own port. Then I could potentially point Traefik at nginx or something similar, which would receive the request, grab the JWT, decode it to find the client id, take a hash and then internally route it to the appropriate node:port combination.
But I feel like there must be a more elegant and simpler way of doing this. Maybe Traefik can facilitate this directly somehow, or Docker Swarm has some configuration I don't know about that could be used. Any ideas?
Edit: Just to clarify my use case, I'm not just looking for the same user to always go to the same container, but for the same type of user (the same client) to always go to the same container.
For this kind of routing you need to set up Traefik for sticky sessions.
With sticky sessions, Traefik adds a cookie to the response; that cookie is then used on subsequent requests to route them to the same backend instance.
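As a rough sketch (assuming Traefik v2 and a docker-compose file deployed to Swarm; the service name, image, container port, and cookie name are placeholders), stickiness is enabled on the service's load balancer via labels:

services:
  graphql:
    image: my-graphql-server            # placeholder image
    deploy:
      replicas: 6
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.graphql.rule=PathPrefix(`/`)"
        - "traefik.http.services.graphql.loadbalancer.server.port=4000"
        # Cookie-based sticky sessions: follow-up requests carrying the cookie
        # are routed back to the replica that served the first request
        - "traefik.http.services.graphql.loadbalancer.sticky.cookie=true"
        - "traefik.http.services.graphql.loadbalancer.sticky.cookie.name=graphql_replica"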

Confused between proxy and reverse proxy

I need a one-line explanation of proxy and reverse proxy. When I post any project on Freelancer they ask whether I want a backconnect rotating proxy, a reverse proxy, etc.
Anyone?
Proxy: it makes the request on behalf of the client. The server returns the response to the proxy, and the proxy forwards the response to the client. In fact, the server never "learns" who the client was (the client's IP address); it only knows the proxy. The client, however, definitely knows the server, since it essentially formats the HTTP request destined for the server, but just hands it to the proxy.
Reverse Proxy: it receives the request on behalf of the server. It forwards the request to the server, receives the response and then returns the response to the client. In this case, the client never "learns" who the actual server was (the server's IP address), with some exceptions; it only knows the proxy. The server may or may not know the actual client, depending on the configuration of the reverse proxy.
A pair of simple definitions would be:
Forward Proxy: Acting on behalf of a requestor (or service consumer)
Reverse Proxy: Acting on behalf of a service/content producer.
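To make the reverse-proxy case concrete, here is a minimal nginx sketch (the backend address is made up): clients connect to nginx and never see the backend's address.

# (goes inside the http {} block of nginx.conf)
server {
    listen 80;                            # clients talk to the proxy here
    location / {
        proxy_pass http://10.0.0.5:8080;  # the actual backend server, hidden from clients
    }
}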

When to use Application Load Balancer and Network Load Balancer

I'm new to AWS.
I started learning about the ALB and NLB. I know the ALB works with Layer 7 protocols and the NLB works with Layer 4 protocols.
Can anyone give a real-world example of the ALB and NLB? When should I use an ALB versus an NLB?
All web applications use TCP to make the connection between server and client anyway.
So does the ALB also use the TCP (Layer 4) protocol?
Then what is the difference between them? Can anyone please explain briefly?
In summary: an NLB only knows about TCP, while an ALB knows everything about the request.
An NLB can only route a request based on IP addresses and other TCP-packet info.
An ALB can route a request by looking at the content of it: what protocol is it using (HTTP, HTTPS)? What path is it trying to query (/api/v1, /api/v2)? What content-type is it requesting?
So, if you want requests for the v1 API endpoint to be routed to an autoscaling group of EC2 instances and requests for the v2 API endpoint routed to another group of instances, then your best option is the ALB because it allows you to configure rules that make your desired routing possible.
On the other hand, if you just want clients coming from Germany routed to one autoscaling group and clients from the USA to another group, the NLB should be sufficient, because you can set up rules that match the IP addresses of those countries.
TL;DR To load balance HTTP requests, use an ALB. For TCP/UDP load balancing, use an NLB.
An ALB (Application Load Balancer) understands HTTP. If you need to do HTTP-based routing (e.g., routing to different targets depending on the request path) you need to use an ALB.
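As an illustration of path-based routing, an ALB listener rule can forward /api/v1 traffic to one target group; a rough sketch with the AWS CLI (the ARNs below are placeholders):

# Forward /api/v1/* to the target group holding the v1 instances (placeholder ARNs)
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/abc123/def456 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/v1/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/v1-instances/xyz789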
Unique features of ALBs include:
HTTP path-based routing
HTTP header-based routing
Redirects
Lambda functions as targets
An NLB (Network Load Balancer) operates at the transport level (TCP/UDP). NLBs are more performant than ALBs because they don't need to parse HTTP messages.
NLBs support some unique features too:
Static IP
Elastic IP addresses
Preserving the source IP
You can see a full comparison of features on the Elastic Load Balancing features page.

API Gateway using Spring Cloud + Zuul + Consul : dynamic routing not working when using HTTPS

I'm currently working on an API gateway to centralize calls to REST APIs. We are using Spring Cloud (Edgware.SR3), with Zuul (1.3.0) for routing and Consul as the service registry for service discovery.
In a first version, the route to each service was registered in the gateway configuration using zuul.routes.myApiName.url, and it worked fine.
Then we wanted to use dynamic routing to allow multiple instances of each API.
I removed the zuul.routes.myApiName.url property for that purpose.
The problem is that my calls to the API through the gateway now return an error:
Bad Request This combination of host and port requires TLS.
Here is the configuration of the API that registers itself in Consul:
spring.cloud.discovery.enabled=true
spring.cloud.consul.host=#consul_host_ip#
spring.cloud.consul.port=#consul_host_port#
spring.cloud.consul.discovery.scheme=https
And here is the configuration of the Zuul route in the gateway:
zuul.host.socket-timeout-millis=60000
zuul.add-proxy-headers=false
zuul.ignored-services=*
zuul.routes.myApiName.path=/myApiName/**
zuul.routes.myApiName.serviceId=myApiName
The API is correctly registered in Consul, and the health check works over HTTPS:
HTTP GET https://hostname:port/health: 200 Output: {"description":"Composite Discovery Client","status":"UP"}
Certificates are also correctly configured, since I am able to call my API directly over HTTPS.
But it seems that Zuul's forwarding uses HTTP instead of HTTPS (I get the same error if I call my API directly, without the gateway, over HTTP).
I've been struggling with this for a while, so I'd like to know: is there a missing configuration to force Zuul to use HTTPS in the routed call to the API?
Thanks in advance!
Just configure
zuul.addProxyHeaders=false
since the proxy headers are added by Zuul while forwarding requests to the backend.
I've finally found the solution: I had to add this property:
ribbon.IsSecure=true
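For completeness, the relevant gateway properties end up looking roughly like this (same property names as in the question):

zuul.routes.myApiName.path=/myApiName/**
zuul.routes.myApiName.serviceId=myApiName
# Make Ribbon (the client-side load balancer Zuul uses) call the discovered instances over HTTPS
ribbon.IsSecure=true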

Routing to same instance of Backend container that serviced initial request

We have a multi-service architecture consisting of an HAProxy front end (we can change this to another proxy if required), a MongoDB database, and multiple instances of a backend app running under Docker Swarm.
Once an initial request is routed to an instance (container) of the backend app, we would like all future requests from mobile clients to be routed to the same instance. The backend app uses TCP sockets to communicate with a VoIP PBX.
Ideally we would like to control the number of instances of the backend app using the replicas key in the docker-compose file. However, if a container died and was recreated, we would need mobile clients to continue routing to the same container, because each container holds state info.
Is this possible with Docker Swarm? We are thinking that each instance of the backend app gets an identifier when created, which is then used to do some sort of path-based routing.
HAProxy has what you need. This article explains it all.
As a conclusion of the article, you may choose between two solutions:
IP source affinity to server, and application-layer persistence. The latter is stronger/better than the first, but it requires cookies.
Here is an extract from the article:
IP source affinity to server
An easy way to maintain affinity between a user and a server is to use the user's IP address: this is called source IP affinity.
There are a lot of issues with doing that, and I'm not going to detail them right now (TODO++: another article to write).
The only thing you have to know is that source IP affinity is the last method to use when you want to "stick" a user to a server.
Well, it's true that it will solve our issue as long as the user uses a single IP address, or never changes it during the session.
Application layer persistence
Since a web application server has to identify each user individually, to avoid serving one user's content to another, we may use this information, or at least try to reproduce the same behaviour in the load balancer, to maintain persistence between a user and a server.
The information we’ll use is the Session Cookie, either set by the load-balancer itself or using one set up by the application server.
What is the difference between Persistence and Affinity
Affinity: when we use information from a layer below the application layer to keep a client's requests going to a single server.
Persistence: when we use application-layer information to stick a client to a single server.
Sticky session: a session maintained by persistence.
The main advantage of persistence over affinity is that it's much more accurate, but sometimes persistence is not doable, so we must rely on affinity.
Using persistence, we mean that we're 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
Affinity configuration in HAProxy / Aloha load-balancer
The configuration below shows how to do affinity within HAProxy, based on client IP information:
frontend ft_web
  bind 0.0.0.0:80
  default_backend bk_web

backend bk_web
  balance source
  hash-type consistent # optional
  server s1 192.168.10.11:80 check
  server s2 192.168.10.21:80 check
Session cookie setup by the Load-Balancer
The configuration below shows how to configure HAProxy / Aloha load balancer to inject a cookie in the client browser:
frontend ft_web
  bind 0.0.0.0:80
  default_backend bk_web

backend bk_web
  balance roundrobin
  cookie SERVERID insert indirect nocache
  server s1 192.168.10.11:80 check cookie s1
  server s2 192.168.10.21:80 check cookie s2
