I have a Delphi 6 application that talks to an external device that acts as an HTTP server. I am using the ICS TWSocket components for this application: I open a socket to the device and craft the necessary HTTP headers and body myself. In other words, I am not using the ICS HTTP client component but the lower-level TWSocket component, handling the HTTP "handshaking" myself.
The headers I craft and send to the external device have the keep-alive flag set. On my system, after I send anything to the device, the connection stays open and does not close until approximately 30 seconds of inactivity (30 seconds during which I make no requests of the device as an HTTP server). I don't know whether the external device closes it or Microsoft Windows does, but the important point is that normally I can do multiple sends and the connection stays open until I send nothing for about 30 seconds. This works fine and is what my code expects.
However, on some of my users' systems the socket closes after every send. I do have code that checks for a closed socket and reconnects to the external device if necessary, but it does not expect to reconnect on every transaction.
My questions are:
Is there a system setting for sockets that might be causing this anomalous behavior on some users' systems?
If so, are there Windows API function calls I can use to query the offending parameter and then set it so the connection closes after 30 seconds of inactivity instead of after each transaction?
If so, how do I do that in a manner that will not adversely affect other programs running on the user's system?
The server is closing the socket. There are three possible reasons for this:
The client made an HTTP/1.0 request
The client set a Connection: close header in the request
The server does not support persistent connections
HTTP/1.0 did not support persistent connections, and the server would be correct in closing the socket after an HTTP/1.0 request.
HTTP/1.1 specifies that a connection is implicitly persistent, unless the client specifies a Connection: close header. The server would be correct in closing the connection if it receives this header. If the server does not support persistent connections, it would also be correct in closing the connection.
If you are using HTTP/1.1, you can force the connection to be persistent (as long as the server supports it) by sending a Connection: keep-alive header. You should then also send a Keep-Alive: timeout=<secs>, max=<max-requests> header, where <secs> and <max-requests> are integers representing the desired behaviour.
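For example, a raw request of the following shape (the path and host are placeholders, and the header block must be terminated with a blank line) asks the server to keep the connection open for 30 seconds of idle time or up to 100 requests; the Keep-Alive header is advisory, and the server is free to ignore it or impose its own limits:

GET /status HTTP/1.1
Host: device.local
Connection: keep-alive
Keep-Alive: timeout=30, max=100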
The reference for this method only says what happens locally on the client and says nothing about what is potentially sent to the server. Apparently, our server has some trouble with receiving a lot of status code 499 from us when we cancel a request, but I can't find anything about how URLSession handles cancellation. Is there a standard cancel message in the HTTP protocol?
The client doesn’t send 499. Status codes are one-way. Rather, the client closes the network connection. The server records that dropped connection as a 499 status code in its logs.
If the server is HTTP/2 or later, the client may send an END_STREAM or RST_STREAM frame to cancel a single request without cancelling other requests on the same connection, or it may just drop the connection. Either way, you'll probably just see a 499 in your logs. There is little reason to care whether the request was cancelled or the connection dropped.
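For reference, client-side cancellation is a purely local operation; a minimal URLSession sketch (the URL is a placeholder):

import Foundation

let url = URL(string: "https://example.com/slow")!  // placeholder endpoint
let task = URLSession.shared.dataTask(with: url) { _, _, error in
    // A cancelled task completes with NSURLErrorCancelled; the client never
    // sends anything resembling a 499 over the wire.
    if let error = error as NSError?, error.code == NSURLErrorCancelled {
        print("request cancelled locally")
    }
}
task.resume()
task.cancel()  // closes the connection / resets the stream; there is no HTTP cancel message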
I am using the socket.io Node.js server library and the Swift client library. The majority of the time the client successfully reconnects to the server after a disconnection; intermittently, however, we see abrupt disconnections after which the client is never able to reconnect.
In the server logs, I see the client sending a connection attempt at the defined retry interval, but it never successfully establishes the connection, and then we get a ping timeout.
There is surprisingly little support for Socket.io, which makes this extremely difficult to solve.
I figured out a solution to our problem by forcing a new engine to be created in the client upon reconnections. When creating the SocketIOClient object, set the forceNew variable to true which allows the client to create a new engine and thus always successfully establishes the connection.
return SocketIOClient(socketURL: socketURL, config: [.forceNew(true)])
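For context, a minimal sketch of how this might be wired up, using the older socket.io-client-swift API shown above (the URL and event handling are placeholders):

import Foundation
import SocketIO

let socketURL = URL(string: "https://example.com")!  // placeholder server URL
// .forceNew(true) builds a fresh engine on every connect, so a client that was
// abruptly disconnected does not try to reuse a stale engine when it reconnects.
let socket = SocketIOClient(socketURL: socketURL, config: [.forceNew(true)])

socket.on(clientEvent: .connect) { _, _ in
    print("socket connected")
}
socket.connect()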
I'm experiencing slow response times for my first http POST request to my server.
This happens both in Android and iOS networking libraries. (Volley on Android, and Alamofire in iOS).
The first response takes roughly 0.7-0.9 s, whereas subsequent requests take about 0.2 s.
I'm guessing this is due to the connection being kept alive by the server, eliminating the need to establish a new connection on each request.
I figure I could make a dummy request when the app starts just to open the connection, but that doesn't seem very elegant.
I also control the server side (Node.js) so if any configuration needs to be done there I can also try it.
Investigating a little further, I tried sending an HTTPS CONNECT request before issuing the first "real" POST request, and the behavior replicates: the delay moves to the CONNECT.
After 30 seconds or so, the connection is dropped (probably at the iOS URLSession level; the load balancer is configured to keep connections for 60 seconds).
In theory this makes sense, because setting up an HTTPS connection takes several packets (12 in total) and I'm on an intercontinental connection.
So my solution is to send a CONNECT request when I expect the user to send a regular request.
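URLSession doesn't expose raw CONNECT, so one way to approximate this warm-up from the app is a cheap request fired ahead of time so the TCP and TLS handshakes complete before the real POST; a sketch with a hypothetical endpoint:

import Foundation

// Hypothetical warm-up: a HEAD request forces the TCP + TLS handshake to
// complete now, so the first real POST can reuse the already-warm connection
// (as long as it goes through the same URLSession and the pool keeps it open).
func warmUpConnection(to url: URL) {
    var request = URLRequest(url: url)
    request.httpMethod = "HEAD"
    URLSession.shared.dataTask(with: request) { _, _, _ in
        // The response is ignored; only the handshake side effect matters.
    }.resume()
}

warmUpConnection(to: URL(string: "https://api.example.com/ping")!)  // placeholder URL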
We're currently dealing with performance issues in our app, and we believe some of them might be related to the fact that the app and the underlying AFNetworking stack seem to ignore keep-alive on HTTP/1.1.
We got information from Apple that persistent connections are purged after 3, 6 or 30 seconds respectively, depending on iOS version and WiFi/WWAN connectivity, regardless of server-side keep-alive information.
While monitoring the connection handshakes on our servers, we noticed odd behavior: an SSL connection from our app on an iOS device is left open and not closed with a FIN packet. As soon as a new request is made from the app, the leftover connection from the previous request is THEN closed with a FIN packet and a new connection is created.
While we understand that iOS purges connections to keep battery consumption low, we wonder why it doesn't terminate the existing connection properly, instead deferring that termination to the start of a new request.
Could someone explain this behavior, and suggest solutions to avoid expensive SSL handshakes in connections which are covered by keep-alive under regular conditions?
I bumped into the same problem some weeks ago.
The solution was to force the web server to ignore the keep-alive HTTP header from iOS devices and close the connection immediately.
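For example, if the front end is nginx (an assumption; the directive below is nginx-specific), keep-alive can be disabled selectively for Safari-like user agents, which should include iOS devices, rather than for everyone; verifying against your actual user agents is advisable:

# Disable keep-alive only for Safari-like user agents;
# keepalive_timeout 0; would disable it for all clients instead.
keepalive_disable safari;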
This question is mostly an HTTP question. I am working on an iOS app, though the question is not specific to iOS.
I would like to use persistent connections, and have no problems doing so until an HTTP response uses chunked transfer encoding instead of an explicit Content-Length. The response itself works normally, and would be fine if I never needed to cancel it. But the response can take a while to complete (it can run for minutes and never sends the final 0-length chunk), and frequently I would like to cancel the request (and its response) and send a new request on the same connection.
With HTTP/1.1, how can I cancel a chunked response without closing the connection?
My current workaround is not to use persistent connections, but then I lose all their benefits, and initiating these requests becomes much slower.
You can't cancel it. There is nothing in the HTTP protocol that allows you to interrupt an HTTP response. You either read and discard the entire response or close the connection. You can issue another HTTP request on the same connection while the server is still sending the response, but you still have to process the entire response to the original request.