Amazon Load Balancer sticky sessions with ajp:8009 - amazon-elb

We configured an ELB with sticky sessions on the JSESSIONID cookie for two Tomcats (tomcat1 and tomcat2). The flow is: Apache HTTP Server -> ELB -> Tomcats.
AJP on port 8009 is configured on the Tomcat side; since AWS ELB offers no AJP option, the ELB listener is configured as tcp:8009.
So the Apache httpd.conf entry is (xxx.amazonaws.com is the ELB name):
BalancerMember ajp://xxx.amazonaws.com:8009
Somehow sticky sessions are not working: HTTP requests are sent to both Tomcat servers. Is it because of the protocol on the ELB side (tcp:8009)? We are not sure what is missing here. Any help appreciated!
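For context, a BalancerMember entry like the one above normally sits inside a <Proxy balancer://...> block together with a stickysession hint. Purely as an illustration (the balancer name and mount path below are placeholders, not taken from the question), the full httpd.conf fragment would look roughly like this:

# Illustrative sketch only; "tomcatcluster" and "/app" are placeholder names.
<Proxy "balancer://tomcatcluster">
    BalancerMember "ajp://xxx.amazonaws.com:8009"
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass "/app" "balancer://tomcatcluster/app"
ProxyPassReverse "/app" "balancer://tomcatcluster/app"

Note that with only one BalancerMember (the ELB), Apache-side stickiness has nothing to choose between; the routing decision happens at the ELB.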

Once you change it to TCP you lose sticky sessions; it is an ELB limitation. You might be able to get away with switching the protocol to HTTP, but on a port other than 80.
Unless I am mistaken, you may have to set up HAProxy or something else instead of the ELB, something that can do TCP with stickiness (see the sketch below).
It is well known that WebSockets + sticky sessions don't work on Amazon ELB.
https://forums.aws.amazon.com/thread.jspa?messageID=627367
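For what it's worth, a minimal HAProxy sketch of TCP-level stickiness for AJP, pinning each client IP to a backend via a stick table (the backend addresses are placeholders):

# Illustrative haproxy.cfg sketch; backend addresses are placeholders.
frontend ajp_in
    bind *:8009
    mode tcp
    default_backend tomcats

backend tomcats
    mode tcp
    balance roundrobin
    # remember which backend each source IP was sent to
    stick-table type ip size 200k expire 30m
    stick on src
    server tomcat1 10.0.0.11:8009 check
    server tomcat2 10.0.0.12:8009 check

Source-IP stickiness is cruder than cookie-based stickiness, but at the TCP level the proxy cannot see the JSESSIONID cookie inside the AJP stream.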

Related

HTTP website behind HTTPS Let's Encrypt NGINX Route

I can't figure out whether this is common practice or not. I want to create a website (running in a container) and have traffic forwarded to it from a wildcard on my domain; I want to secure it with Nginx Proxy Manager and Let's Encrypt managing the certificate.
Do I keep the website running on my internal server as plain HTTP:80 and forward traffic to it via Nginx? My current site is just a server-side Blazor web app.
I've seen other people do this, but it makes me wonder whether that is indeed secure: at some point, between Nginx and the internal server, the traffic is not encrypted.
I imagine it looks something like this:
Client connects securely to Nginx Proxy Manager (HTTPS)
Nginx Proxy Manager then decrypts and forwards to the Internal Website (HTTP)
Is my understanding correct?
Is this common practice, or is there a better way to achieve what I want?
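The flow described above is standard TLS termination at a reverse proxy. Expressed as a plain nginx server block rather than through Nginx Proxy Manager's UI (domain, certificate paths, and upstream address are placeholders), it might look like this:

# Illustrative sketch; server_name, cert paths and upstream address are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.5:80;              # plain HTTP to the internal container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # server-side Blazor uses WebSockets (SignalR), so pass the upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}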

Difference between ELB Protocol and Port number in AWS

In the ELB settings, both HTTP and TCP are available as listener protocols, and I am not sure what the difference would be if I set the listeners up as below.
ELB AAA
Load Balancer Protocol: HTTP
Load Balancer Port: 80
ELB BBB
Load Balancer Protocol: TCP
Load Balancer Port: 80
I believe it's the same thing, since HTTP runs on top of TCP.
I'm assuming you are referring to the older Classic Load Balancer, which has both HTTP and TCP options. The newer generations are segregated more clearly: the Application Load Balancer supports only HTTP/HTTPS, and the Network Load Balancer supports TCP/UDP (and TLS).
Classic Load Balancer
Refer to this link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
To understand the difference, you need to understand a little about Layers 4 and 7 of the OSI model.
When configured as HTTP:80, the ELB terminates the request. That is, the ELB accepts the HTTP request itself and then opens its own connection to forward it to the backend. This may seem trivial, but it is what allows the ELB to offer Layer 7 features such as sticky sessions and the X-Forwarded-For header.
When configured as TCP:80, the ELB does not inspect the request. It simply forwards the TCP stream to the backend, headers untouched. Because it operates at Layer 4 (the transport level), the ELB can be more performant, but it cannot read cookies or headers, so features like sticky sessions are not available.
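To make the two listener types concrete, here is a sketch of how the ELBs from the question might be created with the AWS CLI (the names and availability zone are placeholders):

# Illustrative sketch; load balancer names and availability zone are placeholders.
# Layer 7 listener: ELB terminates HTTP and can add stickiness, X-Forwarded-For, etc.
aws elb create-load-balancer --load-balancer-name AAA \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a

# Layer 4 listener: ELB just forwards the TCP stream untouched.
aws elb create-load-balancer --load-balancer-name BBB \
    --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80" \
    --availability-zones us-east-1a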

Spring-Security getRemoteAddress for app behind proxy

Is there a reason why Spring Security does not provide any way to look up the remote address when the application is located behind a proxy, e.g. a load balancer or an Apache httpd server? At the moment the WebAuthenticationDetails object stores the IP of the proxy. I saw that there are also solutions for finding the remote address via the X-Forwarded-For header. I am curious whether there is a reason why this is not provided.
If you use Tomcat, you could configure RemoteIpValve.
A Tomcat port of mod_remoteip, this valve replaces the apparent client remote IP address and hostname for the request with the IP address list presented by a proxy or a load balancer via a request header (e.g. "X-Forwarded-For").
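A minimal sketch of enabling the valve in Tomcat's conf/server.xml, inside the Host element (the internalProxies pattern is an assumption and must match your proxy's addresses):

<!-- Illustrative sketch; internalProxies is an assumed pattern for the proxy/ELB subnet. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       internalProxies="10\.\d{1,3}\.\d{1,3}\.\d{1,3}" />

With the valve in place, request.getRemoteAddr() returns the client IP taken from X-Forwarded-For, so WebAuthenticationDetails picks it up without any Spring Security changes.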

AWS Opsworks: Load balancing via https

I've set up two Rails server instances with an Elastic Load Balancer. I set up SSL via OpsWorks, and when I hit the IP of my instances over HTTPS, e.g. https://1.2.3.4, I can see the correct certificate.
However, when I hit the domain of the Elastic Load Balancer, the request times out (it loads endlessly).
How should I set up the ELB to properly forward HTTPS to my instances?
The answer was in the security groups. Besides allowing inbound HTTPS on port 443, you also have to set the outbound rules of the ELB's security group. Mine allowed outbound HTTP only, so accessing the instances over HTTPS failed. I added a new outbound rule allowing HTTPS to anywhere and it worked!
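For reference, the outbound rule described above, sketched with the AWS CLI (the security group ID is a placeholder):

# Illustrative sketch; sg-0123456789abcdef0 stands in for the ELB's security group.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 0.0.0.0/0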

OAuth with Elastic Load Balancer SSL

I am using AWS EC2 instances behind an ELB. I know that the ELB itself has SSL enabled. My EC2 instances do not support SSL.
Here is my problem: I need to implement some kind of authentication method such as OAuth.
Is there a way to authenticate users with ELB?
You can't do that on ELB.
I recommend you take a look at the ELB documentation http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/Welcome.html and this blog post http://harish11g.blogspot.com.br/2012/03/ssl-offloading-elastic-load-balancing.html
With Kong you can do one thing: terminate SSL on the ELB in front of Kong (trusted, you can use a free certificate via ACM) and use the "accept HTTP if already terminated" option of the OAuth2 plugin. But keep in mind the ELB listener will be (Secure TCP 443) -> (Secure TCP 8443, the HTTPS port exposed by Kong).
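As an illustration, that option can be enabled on the OAuth2 plugin through Kong's admin API roughly like this (the service name, admin URL, and choice of grant type are assumptions):

# Illustrative sketch; "my-service", the admin URL and the grant type are assumptions.
curl -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=oauth2" \
  --data "config.enable_password_grant=true" \
  --data "config.accept_http_if_already_terminated=true"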
