HTTPS traffic to specific IP addresses using ELB - amazon-elb

I have created a failover environment with Route 53 and two ELBs. Each ELB has multiple app servers attached. If I allow HTTPS traffic from everyone, the application is accessible, but when I restrict HTTPS traffic to a specific IP address range, the application is not accessible even from the IP addresses that have permission.

I assume that when you say you allow HTTPS traffic from specific IP addresses, you are doing it at the security group level. If so, make sure you modify the security groups of the load balancers and not the backend instances. The backend instances will receive traffic from the load balancer IPs (which would be your ELB subnets' CIDR blocks, or, if you are not being too restrictive, you can allow traffic from the VPC CIDR range).
Additionally ensure that the load balancers and the backend instances have different security groups.
ELB SG -> HTTP:80, SOURCE: your allowed IP address range
BACKEND SG -> HTTP:80, SOURCE: ELB SUBNET/VPC CIDR BLOCK
These rules would change slightly if you are using SSL termination, but the logic remains the same.
If you are not using security groups but something else, then we would have to check your specific configuration.
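To make the split concrete, here is a minimal boto3 sketch, assuming SSL terminates at the load balancer; the security group IDs and the allowed client range 203.0.113.0/24 are made up, so substitute your own:

import boto3

ec2 = boto3.client("ec2")

ELB_SG = "sg-0aaa111bbb222ccc3"      # made up: security group attached to the load balancer
BACKEND_SG = "sg-0ddd444eee555fff6"  # made up: security group attached to the app servers
ALLOWED_CIDR = "203.0.113.0/24"      # made up: the client range that should reach the site

# Load balancer SG: allow HTTPS only from the permitted client range
ec2.authorize_security_group_ingress(
    GroupId=ELB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": ALLOWED_CIDR}],
    }],
)

# Backend SG: allow HTTP only from the load balancer's security group
# (or, as described above, from the ELB subnet / VPC CIDR block instead)
ec2.authorize_security_group_ingress(
    GroupId=BACKEND_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ELB_SG}],
    }],
)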

Related

When to use Application Load Balancer and Network Load Balancer

I'm new to AWS.
I started learning about ALB and NLB. I know the ALB works with layer 7 protocols and the NLB works with layer 4 protocols.
Can anyone please explain real-world examples of ALB and NLB? When should each be used?
All web applications use TCP to make the connection between server and client, so does the ALB also use the TCP (layer 4) protocol?
Then what is the difference between them? Can anyone please explain briefly?
In summary: an NLB only knows about TCP, while an ALB knows everything about the request.
An NLB can only route a request based on IP addresses and other TCP-packet info.
An ALB can route a request by looking at the content of it: what protocol is it using (HTTP, HTTPS)? What path is it trying to query (/api/v1, /api/v2)? What content-type is it requesting?
So, if you want requests for the v1 API endpoint to be routed to an autoscaling group of EC2 instances and requests for the v2 API endpoint routed to another group of instances, then your best option is the ALB because it allows you to configure rules that make your desired routing possible.
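As a rough illustration of what such listener rules look like, here is a boto3 sketch; the listener and target group ARNs are placeholders, and the same rules can be created in the console or CloudFormation:

import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/placeholder"  # placeholder
V1_TARGETS = "arn:aws:elasticloadbalancing:region:account:targetgroup/api-v1/placeholder"     # placeholder
V2_TARGETS = "arn:aws:elasticloadbalancing:region:account:targetgroup/api-v2/placeholder"     # placeholder

# Send /api/v1/* to the first autoscaling group's target group
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/v1/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": V1_TARGETS}],
)

# Send /api/v2/* to the second group
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/v2/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": V2_TARGETS}],
)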
On the other hand, if you just want clients coming from Germany routed to one autoscaling group and clients from the USA to another, an NLB should be sufficient, because you can set up rules that match the IP address ranges of those countries.
TL;DR To load balance HTTP requests, use an ALB. For TCP/UDP load balancing, use an NLB.
An ALB (Application Load Balancer) understands HTTP. If you need to do HTTP-based routing (e.g., routing to different targets depending on the request path) you need to use an ALB.
Unique features of ALBs include:
HTTP path-based routing
HTTP header-based routing
Redirects
Lambda functions as targets
An NLB (Network Load Balancer) operates at the transport level (TCP/UDP). NLBs are more performant than ALBs because they don't need to parse HTTP messages.
NLBs support some unique features too:
Static IP
Elastic IP addresses
Preserving the source IP
You can see a full comparison of features on the Elastic load balancing features page.

F5 load balancer over HTTPS URL

I have a service exposed on 2 nodes; each node has an HTTPS URL for the service.
I want to put an F5 in front of these 2 HTTPS nodes. Is that possible?
Yes. First create an HTTPS monitor: some request that, when you get the right response back, marks the node as 'online'. Then create a pool with the two nodes (listing their IP and port number) and attach the monitor.
Then create a virtual server or two: I normally make an HTTP one with no pool and the built-in HTTPS redirect iRule, plus the HTTPS virtual server. Give them the same IP address and allow all source addresses (the source here is what allows the F5 to select it; you can restrict IP addresses further with the firewall policy).
Auto SNAT will make the F5 forward the request to the backend server, but with the source IP of the F5. If you care about the client IP address on the backend servers, they will need to look for the X-Forwarded-For header, and you'll need to add a profile to the HTTPS virtual server that inserts such a header and populates it with the client's 'real' source IP.
Then attach the pool and make sure the virtual server's firewall is open correctly. You'll also need to import the right certs and keys, create an SSL profile that matches the DNS name you want to point at this virtual server, and attach the SSL profile to the HTTPS VIP in the client SSL section. The server SSL section is normally ssl-insecure-compatible.
For the DNS name: hopefully your 2 nodes are named something like web1.example.com and web2.example.com, so the DNS name for the VIP should be web.example.com, and the SSL cert required would be for web.example.com, or *.example.com if you're feeling frisky.
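If you prefer scripting to the GUI, the monitor-plus-pool step might look roughly like this against the BIG-IP iControl REST API. This is only a sketch: the management address, credentials, and node IPs are made up, and exact payload fields can differ between TMOS versions.

import requests

BIGIP = "https://bigip.example.com"   # made-up management address
session = requests.Session()
session.auth = ("admin", "changeme")  # made-up credentials
session.verify = False                # typical self-signed management cert; verify properly in production

# Create a pool containing the two HTTPS nodes and attach the built-in https
# monitor, which marks a member 'online' only when the expected response comes back.
session.post(BIGIP + "/mgmt/tm/ltm/pool", json={
    "name": "web_pool",
    "monitor": "https",
    "members": [
        {"name": "192.0.2.11:443"},   # web1 (made up)
        {"name": "192.0.2.12:443"},   # web2 (made up)
    ],
})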

Routing to same instance of Backend container that serviced initial request

We have a multiservice architecture consisting of an HAProxy front end (we can change this to another proxy if required), a MongoDB database, and multiple instances of a backend app running under Docker Swarm.
Once an initial request is routed to an instance (container) of the backend app, we would like all future requests from mobile clients to be routed to the same instance. The backend app uses TCP sockets to communicate with a VoIP PBX.
Ideally we would like to control the number of instances of the backend app using the replicas key in the docker-compose file. However, if a container died and was recreated, we would require that mobile clients continue routing to the same container. The reason for this is that each container holds state info.
Is this possible with Docker Swarm? We are thinking that each instance of the backend app, when created, gets an identifier which is then used to do some sort of path-based routing.
HAProxy has what you need. This article explains it all.
As a conclusion of the article, you may choose from two solutions:
IP source affinity to server and Application layer persistence. The latter solution is stronger/better than the first but it requires cookies.
Here is an extract from the article:
IP source affinity to server
An easy way to maintain affinity between a user and a server is to use the user's IP address: this is called Source IP affinity.
There are a lot of issues doing that and I'm not going to detail them right now (TODO++: another article to write).
The only thing you have to know is that source IP affinity is the last method to use when you want to "stick" a user to a server.
Well, it's true that it will solve our issue as long as the user uses a single IP address and never changes it during the session.
Application layer persistence
Since a web application server has to identify each user individually, to avoid serving one user's content to another, we may use this information, or at least try to reproduce the same behavior in the load balancer, to maintain persistence between a user and a server.
The information we’ll use is the Session Cookie, either set by the load-balancer itself or using one set up by the application server.
What is the difference between Persistence and Affinity
Affinity: this is when we use information from a layer below the application layer to keep a client's requests going to a single server
Persistence: this is when we use application-layer information to stick a client to a single server
sticky session: a sticky session is a session maintained by persistence
The main advantage of persistence over affinity is that it's much more accurate, but sometimes persistence is not doable, so we must rely on affinity.
Using persistence, we mean that we’re 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
Affinity configuration in HAProxy / Aloha load-balancer
The configuration below shows how to do affinity within HAProxy, based on client IP information:
frontend ft_web
    bind 0.0.0.0:80
    default_backend bk_web

backend bk_web
    balance source
    hash-type consistent # optional
    server s1 192.168.10.11:80 check
    server s2 192.168.10.21:80 check
Session cookie setup by the Load-Balancer
The configuration below shows how to configure HAProxy / Aloha load balancer to inject a cookie in the client browser:
frontend ft_web
    bind 0.0.0.0:80
    default_backend bk_web

backend bk_web
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server s1 192.168.10.11:80 check cookie s1
    server s2 192.168.10.21:80 check cookie s2

Why IP is not pointing to Joomla main page

Given the following URL: http://domain/index.php, where index.php is the main webpage on a Joomla server, I want to reach the same page using the IP form, http://IP/index.php. I've tried that with several Joomla servers without success. What is happening?
I will try to keep this answer simple, yet understandable.
The relation between Internet domains and IP address is not necessarily one-to-one.
In shared hosting, a single IP address may be used by several domains (or hostnames).
A Host header, which is a part of the HTTP standard, is sent with the HTTP request. This allows the server to determine which site to serve.
When you are trying to access a domain for which you don't know the IP, a DNS lookup is performed, which provides the requested IP address.
An HTTP request is then sent to that IP with a Host header carrying the hostname (which contains the domain name).
If you are trying to access the IP directly, for example by typing it into a web browser's address bar, the value of the Host header will be the IP itself, and the server will have no indication of which domain you actually want.
It is possible to set up a default behavior for cases where the IP address is directly accessed, but it is highly likely that a shared host will not allow you to set it yourself.
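You can see the effect of the Host header yourself. This small sketch (with a made-up IP and domain) sends the same request to the same IP, once with an explicit Host header and once without:

import requests

IP = "203.0.113.10"     # made-up server IP shared by several sites
DOMAIN = "example.com"  # made-up domain hosted on that IP

# Browser-style request: DNS resolved the domain to the IP, and the Host
# header tells the server which of its sites you want.
with_host = requests.get("http://" + IP + "/index.php", headers={"Host": DOMAIN})

# Direct-IP request: the Host header is just the IP, so the server falls back
# to its default site, which is often not the Joomla site you expected.
without_host = requests.get("http://" + IP + "/index.php")

print(with_host.status_code, without_host.status_code)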

request.remote_ip returns wrong ip

I have logging on my website, and I see log entries for different people (with different User-Agent strings).
I'm sure they have different IPs, but all the log records have the same IP.
I use request.remote_ip to store it in the DB.
I don't have Apache as a front end; I just have Mongrel.
The question is: why are they the same?
If both users are behind the same proxy server or use the same internet provider, they may appear to have the same IP address. The IP that is seen at the web server is not the IP address of the individual PC; it's the address of the connection being used.
If you are using a load balancer, particularly a non-transparent load balancer, your server will see the IP address of the load balancer. Oftentimes the load balancer will put the original remote IP address into an HTTP header (typically X-Forwarded-For).
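The backend then recovers the client address from that header rather than from the socket peer; in Rails, request.remote_ip does something along these lines via its RemoteIp middleware when proxies are trusted. A minimal illustration (Python used only for brevity, assuming the common "client, proxy1, proxy2" header layout):

# Assumed header layout: "X-Forwarded-For: client, proxy1, proxy2"
def client_ip(remote_addr, x_forwarded_for=None):
    """Prefer the left-most X-Forwarded-For entry; fall back to the socket peer."""
    if x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# The socket peer is the load balancer; the header carries the real client:
print(client_ip("10.0.0.5", "198.51.100.7, 10.0.0.5"))  # -> 198.51.100.7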
