This seems to have something to do with the subnet/availability zone, but I'm new to using a VPC and it's eluding me.
VPC: 10.80.0.0/16
subnet: 10.80.1.0/24 (us-east-1b)
subnet: 10.80.2.0/24 (us-east-1a)
All instances are Windows Server 2012.
I have an internet-facing ELB created within my VPC (10.80.0.0/16). There is one instance added from AZ us-east-1a, which is on subnet 10.80.2.0/24. The instance is running IIS 7.5, with an app running on port 80 and /health.aspx set up for use as the ELB health check.
Internal traffic on the VPC is flowing normally (unrestricted). I can request health.aspx from this instance from another instance in us-east-1b (10.80.1.0/24). I can also copy files from one instance to another.
Outbound traffic is unrestricted. I can RDP to the instance (when connected to our VPN) and open a browser and request a web page and get it.
The ELB says the instance is healthy and I can see the requests to health.aspx in the IIS logs. Both the ELB and the instance are configured with a security group that allows 80 and 443.
But if I try to request {elb-url}/health.aspx over the open internet the request just times out. Similarly, with an elastic IP associated to the instance, a request to {elastic-ip}/health.aspx times out.
@Chris, thanks for the response... as it happens, I've already worked it out with some help from a friend. I'll post my findings here for posterity (in case anybody else is similarly confused about how ELB works).
This would be clearer with a diagram. But the summary is that in each availability zone, you need to create both a public and a private subnet. When you add availability zones to your ELB, you need to select the public subnet for the zone. This had already been done in us-east-1b before I got to this setup, and I had simply missed this nuance of ELB configuration. So for the new availability zone, I had to do this...
us-east-1c
private subnet 10.1.3.0/24 (using a NAT instance as its default route)
public subnet 10.1.4.0/24 (using the internet gateway as its default route)
Then my instance goes in the private subnet as expected.
And the linchpin of this whole thing is (drum roll...)
When I add us-east-1c to my ELB, I have to select the public subnet...10.1.4.0. Otherwise the instances will pass the health check (since the ELB can communicate with any instance within my entire VPC) but the responses from the servers cannot make it back out to the public internet.
This is what is so confusing. And I still don't fully understand it. The instance can make a request for, say, www.google.com. I can RDP to it and open a browser and get the web page. But a request from an outside host (like my laptop at my house) will die. Strange.
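For anyone scripting this instead of clicking through the console, here is a rough sketch of the same fix in Python with boto3. It is an illustration under assumptions, not the exact setup: the VPC ID, internet gateway ID, and load balancer name (vpc-11111111, igw-22222222, my-elb) are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    elb = boto3.client("elb", region_name="us-east-1")

    # The public subnet for us-east-1c. What makes it "public" is its route
    # table: 0.0.0.0/0 points at the internet gateway.
    public = ec2.create_subnet(VpcId="vpc-11111111",
                               CidrBlock="10.1.4.0/24",
                               AvailabilityZone="us-east-1c")["Subnet"]
    rt = ec2.create_route_table(VpcId="vpc-11111111")["RouteTable"]
    ec2.create_route(RouteTableId=rt["RouteTableId"],
                     DestinationCidrBlock="0.0.0.0/0",
                     GatewayId="igw-22222222")   # the VPC's internet gateway
    ec2.associate_route_table(RouteTableId=rt["RouteTableId"],
                              SubnetId=public["SubnetId"])

    # The linchpin: the ELB itself attaches to the PUBLIC subnet, even
    # though the instances it fronts live in the private one.
    elb.attach_load_balancer_to_subnets(LoadBalancerName="my-elb",
                                        Subnets=[public["SubnetId"]])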
PS: another note... make sure you have enough NAT instance capacity for your load. I think we ran into an issue where our NAT instance simply ran out of ports because too many web servers were trying to route outbound connections to 3rd party APIs through it. Quite honestly, I'm not good enough at this level of network/OS troubleshooting to be sure. But my theory is that our 8 instances of IIS were holding too many connections open to the NAT instance. We were also overloading the NIC on that micro instance. I upped us to two large instances, one per AZ, and things smoothed back out. Both NAT instances are humming and we're not seeing the hung processes in IIS anymore.
Debugging this kind of issue is always a challenge. I have a few ideas to suggest, based on what you have written (and that apply generally to this kind of problem), that come from dealing with this a number of times.
Have you checked both the security groups and network ACLs? Bear in mind that all network ACLs need to be specified in both directions, as they are stateless. Also bear in mind that ELBs are a bit unique in this regard. While they are associated with your VPC, they sometimes need extra rules to ensure connectivity. In the past I have debugged this by opening all network ACLs on all ports, then removing these rules until it has stopped working in order to identify where the block was.
Security groups should be checked too. They are stateful, but make sure the group on your load balancer allows traffic in from the web.
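To make the stateless / both-directions point concrete, here is a hedged boto3 sketch of the rule pair you would need for plain HTTP. The ACL ID and rule numbers are made up; the detail people forget is the outbound rule for the ephemeral ports that responses leave on.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Inbound: allow HTTP from anywhere.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-33333333", RuleNumber=100, Protocol="6",  # TCP
        RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
        PortRange={"From": 80, "To": 80})

    # Outbound: because the ACL is stateless, responses need their own
    # rule, covering the ephemeral port range clients connect from.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-33333333", RuleNumber=100, Protocol="6",
        RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
        PortRange={"From": 1024, "To": 65535})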
Have you checked that this isn't an application configuration problem? I don't know how IIS comes out of the box, but I would check that it is set up to respond to all hostnames.
Check the ELB isn't an internal one, as that wouldn't be publicly addressable.
You say the ELB is configured with the health check, but it's worth checking that you also have a listener set up for port 80. It's on a separate tab in the dashboard, and you will need it in addition to the health check for connectivity through the ELB.
Hope one of these tips is useful to you.
I am working with a Synology NAS, type DS716+II, DSM 6.1.4-15217 Update 2, on which Docker runs with a Jira container.
What I have been assigned to get working is access to Jira's web interface at, let's say, jira.synology.local, with synology being the server name.
I have read a lot about nginx and how it has been built in since DSM 6.X, but I can't seem to get it to work properly at all.
I can access Jira's web interface from another machine within the LAN via IP_OF_SYNOLOGY:PORT, so when setting up a reverse proxy on the server it should point to LOCALHOST:PORT, right? I have also tried using the actual IP instead of LOCALHOST, but without success.
I can access the interface of Synology itself not only via IP_OF_SYNOLOGY:PORT but also via DOMAINNAME.LOCAL if I set the domain name.
I really don't know what I'm missing, and I have tried everything I could think of. Does someone have experience with this?
If any information is missing, I'll gladly provide it. I'm fairly new to Synology, I have to admit. Thanks in advance!
So this has gotten zero responses, but I figure someone will probably have a similar "problem" in the future, so I will answer anyway.
I solved everything when I set up Active Directory. When you install AD, the DNS Server package is automatically installed too.
So we have JIRA running in a Docker container (on port, let's say, 12345) and I want to access it via the LAN on jira.domainname.
To do so we need DSM 6.X or higher (for nginx) and the DNS Server package installed. That's it.
In the DNS Server you will have to create a new master zone. You can freely choose the domain name, and the Master DNS server must be the IP of your Synology station, since it functions as the DNS server.
Then you want to edit the zone's Resource Records. Add an A Record Resource that maps a host name to the station's IP (for example, synology.domainname -> 192.168.0.200), and a CNAME Record Resource that points jira.domainname at that host name.
Now the last step in setting up the DNS server is to tell it what to do when there is no specific record for a query. For example, if you want to open jira.domainname in your browser, there is a specific record for that and the DNS server knows how to direct it. But if you want to open, say, google.com, the DNS server has no information on that and does not know what to do. So we tell the DNS server to forward the request if it has no records for it. To do so, enable the forwarders and put in the IP of your gateway/managed switch as primary and some public DNS server (8.8.8.8, one of Google's DNS servers) as secondary.
Please remember that jira.domainname stands for whatever domain name you chose, and 192.168.0.200 for the IP of your Synology station.
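At this point you can already sanity-check the zone from any LAN machine whose DNS points at the Synology, for example with this tiny Python check (host name and IP are the examples used throughout this answer):

    import socket

    # Should print 192.168.0.200 once the A/CNAME records are in place.
    print(socket.gethostbyname("jira.domainname"))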
So now the DNS server is completely set up. Next we want to take advantage of the built-in reverse proxy (which runs on nginx in the background). In DSM 6 you can find it under Control Panel > Application Portal > Reverse Proxy.
There you create a new reverse proxy rule: the source is the host name you want to type into the browser (jira.domainname on port 80) and the destination is where the service actually listens (localhost on port 12345).
With rules like this, several URLs can point to the same destination (your Synology, 192.168.0.200) but on different ports. That comes in very handy for applications running in Docker.
Now, if you are running this in a home setup or small office, you are probably working with a standard-issue commercial router, such as a FritzBox by AVM. Those are pretty good, but beware that some block so-called DNS rebinding, which means that DNS answers pointing to a local IP are not allowed. Since in this setup the DNS server (your Synology) and the destination JIRA (also your Synology) are on the same LAN, we have to create an exception. Other routers may not suppress those answers, but where they do, an exception is necessary.
The next step is to tell your gateway or managed switch to use the newly set up DNS server as the primary DNS server. On a FritzBox you can do so in the network settings: put in the IP of your DNS server and a secondary DNS server. The secondary is important as a fallback in case your DNS server stops working at some point.
Now that everything is set up, I would recommend restarting the router/managed switch, the Synology, and the workstation you are working on, to flush all caches. After that you can simply open your browser, type in jira.domainname, and JIRA should open up. You can also open a terminal/cmd and run nslookup jira.domainname to see whether it is being resolved correctly.
I really hope this will help someone at some point. If there are any additional questions, please feel free to comment or write me directly!
I know in airports, for example, I've connected to their AP and it pops up a browser window on my device to log in. Is it possible to do this with NodeMCU in Lua, or even with C firmware?
This can be accomplished by setting the DNS server for a connecting client [via DHCP] to a sort of DNS proxy. It doesn't need to be a fully featured DNS server; it only needs to either return a static DNS answer for any host name query, or forward the request to a real DNS server to resolve host names as usual.
The static answer effectively hijacks web requests at the DNS level by forging the DNS answer, causing all host names to resolve to the IP address of a local web server. That local web server ignores any URI details and serves a login prompt for every request. It must also maintain a list of client MAC addresses that have authenticated.
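To make the static-answer idea concrete, here is a minimal sketch of such a DNS hijacker in Python (not NodeMCU Lua, so purely an illustration of the protocol trick). It assumes a plain single-question query without EDNS, needs privileges to bind port 53, and the portal IP is a made-up placeholder.

    import socket

    PORTAL_IP = "10.0.0.1"   # made-up address of the local login web server

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))   # DNS; binding port 53 needs privileges

    while True:
        query, client = sock.recvfrom(512)
        # Echo the query's ID, then flags "standard response, no error",
        # one question, one answer, no authority/additional records.
        header = query[:2] + b"\x81\x80\x00\x01\x00\x01\x00\x00\x00\x00"
        question = query[12:]            # QNAME + QTYPE + QCLASS, verbatim
        answer = (b"\xc0\x0c"            # compression pointer to the QNAME
                  + b"\x00\x01\x00\x01"  # TYPE A, CLASS IN
                  + b"\x00\x00\x00\x3c"  # TTL: 60 seconds
                  + b"\x00\x04"          # RDLENGTH: 4 bytes follow
                  + socket.inet_aton(PORTAL_IP))
        sock.sendto(header + question + answer, client)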
NodeMCU does have a built-in DHCP server, as part of its built-in WiFi AP, but running both a web server and a DNS proxy in the ESP8266's limited memory would be a hell of a trick. I think two of them working cooperatively, interfaced over the SPI bus, might be workable... maybe even three, with one dedicated to maintaining the list of authenticated MACs, expiring them, etc.
Note that the only part of this I have done on an ESP8266 is some very simple web server functionality, so it's mostly theory. If you try it I'd be very interested in hearing about it. :-)
You might want to try out CaptiveIntraweb project (https://github.com/reischle/CaptiveIntraweb) which is based on NodeMCU.
There is also a thread (http://www.esp8266.com/viewtopic.php?f=32&t=3618) on the ESP8266 community forum that discusses the solution details.
So here is my issue: I have a website hosted from a virtual machine on my server, and I am using a dyndns service to point a URL to my IP. My ISP recently set up a new modem which unfortunately has its own built-in gateway and router. After fighting it to forward port 80, I tested it by trying to navigate to the site via the URL, and it didn't work. Then I tested it on my phone connected to the cell data network, and it worked! I am able to visit the site via the URL as long as I am not connected to my own network. I find this very weird and cannot figure out why.
I am able to view the site on my network by typing in the local IP of the server.
Any suggestions why this might be occurring?
Yes, this is a pain. Usually your modem won't route traffic from inside that's destined for its public IP address.
When you come from outside, the traffic hits the modem from the external line, and the port forwarding rules get applied, and the traffic reaches your web server. But those port forwarding rules don't get applied to internal traffic. You're trying to browse the web server on the modem, rather than on your server.
I did once find a modem that allowed forwarding of internal traffic, but that was a long time ago, and I haven't seen one like it since. What I do these days is to use the internal address when I'm on the internal network, and the external address when I'm not. For things that get scripted, I have a little function that determines whether I'm on my local network or not, and programmatically chooses the right way to address the server.
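Roughly, such a function can look like this Python sketch (the LAN range, internal IP, and dyndns host name are placeholders, not my real setup):

    import ipaddress
    import socket

    LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")   # placeholder LAN
    INTERNAL = "192.168.1.10"             # server's address on the LAN
    EXTERNAL = "myhost.dyndns.example"    # the public dyndns name

    def server_address():
        # Learn which local IP would be used to reach the outside world
        # (connecting a UDP socket sends no packets; it just picks a route).
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))
            my_ip = ipaddress.ip_address(s.getsockname()[0])
        # On the LAN the hairpin fails, so address the server directly.
        return INTERNAL if my_ip in LOCAL_NET else EXTERNAL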
This is because your router does not support hairpinning (or does not have it set up).
From the Cisco Support Community:
The term hairpinning comes from the fact that the traffic comes from one source into a router or similar devices, makes a U-turn and goes back the same way it came.
Visualize this and you see something that looks like a hairpin.
Hairpin NAT is a useful technique for accessing an internal server using a public IP. Since you are using a public IP to attempt to access a server in your network, the traffic will attempt to go out to the internet. In order to reach the server, the traffic will need to be redirected to the correct location.
The problem is in how you are doing your internal routing/DNS.
You can do a DNS lookup and a traceroute to see where the website name fails to resolve, and check whether pinging the domain (e.g. ping something.com) returns the public IP.
I resolved ours by doing policy routing on the website FQDN to go out through a different WAN link. It's working fine. This works for sites that have more than one WAN link terminating there.
The other way is to redo the DNS configuration on the internal network, so that internal clients resolve the domain straight to the server's local IP (split-horizon DNS).
When I visit my Rails 2.2 app on my remote server I receive the following value as my REMOTE_ADDR.
request.env['REMOTE_ADDR']: "75.184.124.93, 10.194.95.79"
What has me stumped is why there are two IPs. A quick check of my currently leased public IP confirms that my IP is 75.184.124.93.
So where is 10.194.95.79 coming from?
Is there something about how remote addresses are collected and reported in the HTTP headers spec that I'm missing? Is this expected, normal behavior?
It's definitely because of a reverse proxy.
Reverse proxies (I often use F5 BIG-IPs and Apache mod_proxy) usually append all the intervening IPs to the list so you can pick out the right ones in your code.
For example, you might want to find the public one to log to your webstats application, so there it is, right in REMOTE_ADDR. But you also have the internal IP(s), so you know which load balancer it came through, which internal server it's on, and so on, for internal network tracking.
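If you want to pick the public address out of such a list, here is a hedged sketch in Python (the sample value is the one from the question):

    import ipaddress

    def public_ips(remote_addr):
        # Split the comma-separated list and drop private/internal hops
        # (10.x.x.x, 172.16-31.x.x, 192.168.x.x, ...).
        return [ip.strip() for ip in remote_addr.split(",")
                if not ipaddress.ip_address(ip.strip()).is_private]

    print(public_ips("75.184.124.93, 10.194.95.79"))  # ['75.184.124.93']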
I'm developing an application where it seems likely that people will attempt to hide what their client IP address is behind a proxy server.
Is there a unified way to get the actual client IP address behind the proxy? Looking at the Ruby docs, they explicitly state that
request.remote_ip
and
request.remote_addr
both would return the proxy address and not the actual client IP, and I'm thrown by the "may contain" descriptions for the rest of the HTTP headers.
It depends on whether the proxy supports X-Forwarded-For. I'd run some tests to be sure that remote_ip isn't already what you're looking for - based on a quick glance at the code, it attempts to read the HTTP_X_FORWARDED_FOR header.
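One quick way to run such a test is to send a request with a forged X-Forwarded-For header and see what your app reports as the client address. A sketch in Python (the URL is a placeholder; requests is the third-party HTTP library):

    import requests

    # Hypothetical endpoint that echoes what the app saw as remote_ip.
    resp = requests.get("http://your-app.example/whoami",
                        headers={"X-Forwarded-For": "203.0.113.7"})
    print(resp.text)   # if remote_ip honors the header, expect 203.0.113.7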
I'm typing this from a machine that's behind a proxy. I'm not "hiding", it's how my organisation (and most others large enough to have a server) works. I don't have a fixed IP address: it's allocated dynamically. So I can't see how knowing my "current" IP address is going to help, since it'll be different tomorrow. Heck, I may be connected via a different proxy tomorrow (I work for a large organisation)!
At home, I have several machines connected through a router. Again, I don't have a fixed IP address: it's allocated dynamically by my ISP. It's a large ISP, so there's probably a proxy server somewhere upstream.
So I think what you want is not technically possible. What kind of application would make it "likely that people will attempt to hide what their client IP address is" anyway? What problem are you trying to solve?