What if a load balancer's idle timeout is longer than the web server's settings - amazon-elb

Question as in the title. If the connection between the LB and the web server is closed while the connection between the client and the LB is still active, would the LB re-forward the request to the server? We had a double-postback issue, and I'm guessing this could be the cause of it. Has anyone had a similar experience?

We had sporadic 502 errors from a load balancer, and I found this guide: https://aws.amazon.com/premiumsupport/knowledge-center/elb-alb-troubleshoot-502-errors/ which explains that the load balancer's idle timeout should be shorter than the target HTTP server's keep-alive timeout.
The load balancer receives a request and forwards it to the target. The target receives the request and starts to process it, but closes the connection to the load balancer too early. This usually occurs when the duration of the keep-alive timeout for the target is shorter than the idle timeout value of the load balancer. Make sure that the duration of the keep-alive timeout is greater than the idle timeout value.

Related

First http request in iOS networking is slow, subsequent requests are much faster

I'm experiencing slow response times for my first HTTP POST request to my server.
This happens both in Android and iOS networking libraries. (Volley on Android, and Alamofire in iOS).
First response is roughly 0.7s-0.9s, whereas subsequent requests are 0.2s.
I'm guessing this is due to the session being kept alive by the server, which eliminates the need to establish a new session for each request.
I figure I could make a dummy request when the app starts in order to open the session, but it doesn't seem very elegant.
I also control the server side (Node.js) so if any configuration needs to be done there I can also try it.
Investigating a little further, I tried sending an HTTPS CONNECT request before issuing the first "real" POST request, and the behavior replicates.
After 30 seconds or so, the connection is dropped (probably at the iOS URLSession level; the load balancer is configured to keep connections for 60 seconds).
In theory this makes sense, because setting up an HTTPS connection takes several packets (12 in total) and I'm on an intercontinental connection.
So my solution is to send a CONNECT request when I expect the user to send a regular request.
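For what it's worth, a minimal sketch of that warm-up idea with URLSession, assuming a hypothetical lightweight endpoint (https://api.example.com/ping is a placeholder): fire a cheap HEAD request early so the TCP and TLS handshakes are already done when the first real POST goes out.
import Foundation

// Warm up the connection: the response is ignored, the point is that the
// handshakes happen before the user's first real request.
func warmUpConnection() {
    var request = URLRequest(url: URL(string: "https://api.example.com/ping")!) // placeholder endpoint
    request.httpMethod = "HEAD"
    URLSession.shared.dataTask(with: request) { _, _, _ in
        // Nothing to do; the session now holds a warm connection to the host.
    }.resume()
}
Calling this when the app launches would be a natural place, as long as the load balancer's 60-second idle timeout has not expired before the real request is made.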

Sending SPDY requests results in "The request timed out" errors with NSURLSession in iOS

My iOS app loads images from an nginx HTTP server. After I send 400+ such requests the networking 'gets stuck' and all subsequent HTTP requests result in "The request timed out" error. I can make the images load again only when I restart the app.
Details:
I am using NSURLSession.sharedSession().dataTaskWithURL to send four hundred HTTP GET requests to jpeg files.
Requests are sent sequentially, one after another. The interval between requests is 10 ms.
Each previous unfinished request is cancelled with cancel() method of NSURLSessionDataTask object.
Interestingly:
I can only have this issue with HTTPS requests and when SPDY is enabled on the server.
Non-secure HTTP requests work fine.
Non-SPDY HTTPS requests work fine. I tested it by turning SPDY off on the server side, in the nginx config.
Problem appears both on iOS 8 and 9, on physical device and in the simulator. Both on Wi-Fi and LTE.
When I look at the nginx access logs, I can still see the 'stuck' requests coming in. Important nuance: the request log record appears at the exact moment the iOS app gives up on it, when the timeout period ends.
I was hoping to analyze the HTTP requests with Charles Proxy, but the problem cures itself when requests go through Charles. That is, everything works with Charles, much like the effect in quantum mechanics where the act of observing influences the outcome.
I was able to reproduce the issue when the iOS app connected to two different servers with vastly different nginx configurations. This probably means that the issue is not related to a particular nginx setup.
I analyzed the app using the "Activity Monitor" instrument. The number of threads it uses during the bulk HTTP requests jumps from 5 to 10. In comparison, when I send just a single HTTP request the number of threads jumps to 8. CPU load rarely goes above 30%.
What can be the cause of the issue? Can anyone recommend other ways or tools for analysing and debugging it?
Demo app
This demo app reproduces the issue 100% of the time for me.
https://github.com/exchangegroup/ImageLoadDemo
Versions and settings
My nginx config: http://pastebin.com/pYYjdxfP
OS X: 10.10.4 (14E46), iOS: 8 and 9, Xcode: 7.0 (7A218), nginx: 1.9.4
Not ideal workaround
I managed to keep requests working only if I create a new NSURLSession for each individual request and clear the previous session with finishTasksAndInvalidate or invalidateAndCancel.
// Request 1
let configuration = NSURLSessionConfiguration.defaultSessionConfiguration()
let session = NSURLSession(configuration: configuration)
let url = NSURL(string: "https://example.com/image1.jpg")! // placeholder image URL
session.dataTaskWithURL(url) { data, response, error in
    // handle the image data
}.resume()
// Request 2
// clear the previous session before creating a new one
session.finishTasksAndInvalidate()
let session2 = NSURLSession(configuration: configuration)
session2.dataTaskWithURL(url) { data, response, error in
    // handle the image data
}.resume()
One possibility is that iOS started sending the request, and then packet loss prevented the headers and request body from being fully delivered.
Another possibility that comes to mind is that your server may not be logging the request until it actually finishes trying to deliver it, which would make the time stamps in the server logs line up with when the connection was closed, rather than when it was opened. (IIRC, that's what Apache does; I haven't worked with nginx, so I can't speak for its behavior.) If that's the case, then this is just a simple connection stall. As for why it is stalling, I couldn't guess.
Does the problem occur exclusively for HTTPS traffic? If you can reproduce it with HTTP, you don't need Charles Proxy; just use OS X's "Internet Sharing" feature, and capture the packets with tcpdump or wireshark, listening on the bridge interface. If you can't reproduce it with HTTP, my money would be on a problem with fetching the CRLs or performing the OCSP check while validating the server's certificate.
Is your app ending up with a huge number of threads as a result of excessive async dispatching to new queues, by any chance? Because that could easily cause all sorts of odd misbehavior.
How long is the timeout? If it is too short, your app might simply be running up against performance limitations of the hardware while processing the results of 400 requests delivered in only four seconds.
Also, are you trying to schedule these requests simultaneously? Because I seem to recall reading about a bug that causes NSURLSession to hit a brick wall if you start too many tasks in a single session at the same time. You might try adding tasks only after the number of tasks in a session drops below some threshold and see if that fixes the problem.
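A rough sketch of that throttling idea, in case it helps; the class name, the threshold of 6, and the assumption that everything is driven from the main thread are made up for illustration:
import Foundation

final class ThrottledImageLoader {
    private let session = URLSession(configuration: .default)
    private let maxConcurrent = 6          // assumed threshold, tune as needed
    private var pending: [URL] = []
    private var running = 0

    // Call from the main thread; completions are bounced back to main as well.
    func enqueue(_ url: URL) {
        pending.append(url)
        startNextIfPossible()
    }

    private func startNextIfPossible() {
        guard running < maxConcurrent, !pending.isEmpty else { return }
        running += 1
        let url = pending.removeFirst()
        session.dataTask(with: url) { [weak self] data, _, _ in
            DispatchQueue.main.async {
                self?.running -= 1
                // Hand `data` to whoever displays the image, then start the next one.
                self?.startNextIfPossible()
            }
        }.resume()
    }
}
This keeps the number of tasks in flight inside one session bounded instead of starting all 400 at once, which is the situation the suspected NSURLSession limitation is about.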

Wildfly HornetQ Remote HTTP connection memory leak

I am running a Wildfly 8.2 instance with HornetQ messaging remotely accessible via HTTPS on port 8185.
For testing the connection I am running a client on the same machine connecting via https-remoting://localhost:8185
From the client's point of view everything works fine: connecting, sending/receiving messages and closing the connection.
On the server side everything works fine at first, too. However, after the period set in "connection-ttl" of the RemoteConnectionFactory has passed, the server logs the following lines:
2015-09-03 17:05:49,152 WARN [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /192.168.160.83:63937. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2015-09-03 17:05:49,154 INFO [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection
Final result after testing for a longer time (every 1-2 seconds clients are connecting, sending / receiving messages, closing the connection):
Wildfly consumes more and more heap memory and finally stops working with an OutOfMemoryError ...
As mentioned, the connections are always explicitly closed by the client, and at closing time no error is logged, neither on the client nor on the server side. It seems that the "hornetq-failure-check-thread" just didn't get informed that the connection was already closed.
Any help for this issue is appreciated!

How can I set a connection timeout on NSURLConnection

I know there is a setTimeoutInterval method on NSMutableURLRequest, but can I set a specific timeout for just the time it takes to reach and connect to the server?
No, you can't. The timeout is the time by which we expect the reply from the server. There is no way to tell how much of that time was spent connecting to the server and how much waiting for the server to reply.
An NSURLConnection will abort the connection with a timeout error if the connection is "idle" for longer than the specified duration set via setTimeoutInterval.
That means, if you start a request and the client did not receive anything from the server so far, you should get a timeout error in connection:didFailWithError: after that duration.
That also means, if you are in the middle of a connection sending/receiving data, and the server later hangs and the connection becomes "idle" for longer than the specified timeout, it will also abort the connection.
Whenever the connection has some progress, the timer will be reset.
You can tweak that behavior insofar as you can start your own timer that sends cancel to the connection after a specific duration. You could also monitor the progress, estimate how long the request will take to finish, and cancel it if that would take too long.
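A minimal sketch of that timer approach; the URL, the five-second figure and the empty delegate are placeholders:
import Foundation

final class ConnectionDelegate: NSObject, NSURLConnectionDataDelegate {
    func connectionDidFinishLoading(_ connection: NSURLConnection) { /* handle success */ }
    func connection(_ connection: NSURLConnection, didFailWithError error: Error) { /* handle failure or cancellation */ }
}

let request = URLRequest(url: URL(string: "https://example.com/resource")!) // placeholder URL
let delegate = ConnectionDelegate()
let connection = NSURLConnection(request: request, delegate: delegate)

// Our own watchdog: cancel after 5 seconds regardless of the request's
// timeoutInterval. Invalidate this timer from the delegate callbacks once
// the connection finishes or fails on its own.
let watchdog = Timer.scheduledTimer(withTimeInterval: 5, repeats: false) { _ in
    connection?.cancel()
}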

What could cause a socket to close quickly after every HTTP transaction?

I have a Delphi 6 application that talks to an external device that acts as an HTTP server. I am using the ICS TWSocket components for this application. I open up a socket to talk to the device and handle the necessary header and body crafting to talk to the server. In other words, I am not using the ICS HTTP client component but using the lower level TWSocket component and handling the necessary HTTP "handshaking" myself.
The headers I craft and send to the external device have the keep-alive flag set to TRUE. On my system, after I send anything to the external device, the connection will stay open continuously and will not close until approximately 30 seconds of inactivity occurs (30 seconds where I don't make any requests of the external device as an HTTP server). I don't know if the external device closes it or if Microsoft Windows does it. But the important point is that normally I can do multiple sends and the connection will stay open until I send nothing for about 30 seconds. This works fine and is what my code expects.
However, on some of my users' systems the socket is closing after every send. I do have code that checks for a closed socket and attempts a reconnect to the external device if necessary, but it does not expect to have to reconnect with each transaction.
My questions are:
Is there a system setting for sockets that might be causing this anomalous behavior on some users' systems?
If so, are there Windows API function calls I can use to query the offending parameter and then set it so that the socket closes after 30 seconds of inactivity, as expected, instead of after each transaction?
If so, can I, or how do I, do that in a manner that will not adversely affect any other programs running on the user's system?
The server is closing the socket. There are three possible reasons for this:
The client made an HTTP/1.0 request
The client set a Connection: close header in the request
The server does not support persistent connections
HTTP/1.0 did not support persistent connections, and the server would be correct in closing the socket after an HTTP/1.0 request.
HTTP/1.1 specifies that a connection is implicitly persistent, unless the client specifies a Connection: close header. The server would be correct in closing the connection if it receives this header. If the server does not support persistent connections, it would also be correct in closing the connection.
If you are using HTTP/1.1, you can force the connection to be persistent (as long as the server supports it) by sending a Connection: keep-alive header. You should then also send a Keep-Alive: timeout=<secs>, max=<max-requests> header, where <secs> and <max-requests> are integers representing the desired behaviour.
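To make that concrete, the request head would then look roughly like this; the path, host, and the timeout/max values are placeholders (written as a Swift string only for illustration, the Delphi code would send the same bytes over TWSocket):
let requestHead =
    "GET /status HTTP/1.1\r\n" +
    "Host: 192.168.0.10\r\n" +
    "Connection: keep-alive\r\n" +
    "Keep-Alive: timeout=30, max=100\r\n" +
    "\r\n"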
