Keep-Alive not working properly on iOS

I am currently developing an application where we need some requests to hit our server as soon as possible. To speed up the request process we have to eliminate the handshake (as it adds extra latency) and keep a permanent connection.
The application uses the Alamofire framework to make all requests to our server, and the setup is the following:
We have a session manager set up with the default configuration and HTTP headers.
lazy var sessionManager: Alamofire.SessionManager = {
    let configuration = URLSessionConfiguration.default
    configuration.httpAdditionalHeaders = Alamofire.SessionManager.defaultHTTPHeaders
    let manager = Alamofire.SessionManager(configuration: configuration)
    return manager
}()
The session manager is persistent across all requests. Each request is made using the following code:
self.sessionManager.request(request.urlString, method: request.method, parameters: request.parameters)
    .responseJSON { [weak self] response in
        // Handle the response
    }
request.urlString is the URL of our server, "http://ourserver.com/example"
request.method is set to .post
request.parameters is a dictionary of parameters
The request works fine and we get a valid response. The problem arises with the keep-alive timeout, which our server sets to 300 seconds. The device holds the connection for a maximum of 30 seconds on WiFi and closes it almost instantly over GSM.
Server Debug
We did some debugging on our server and found the following results
Tests:
Test 1:
iPhone connects to the Internet via WiFi
Test 2:
iPhone connects to the Internet via 3G
Behaviour:
Both cases: the app makes an HTTP/1.1 request to a web server with “Connection: keep-alive”; the server (server IP = 10.217.81.131) responds with “Keep-Alive: timeout=300, max=99”.
The client side (test 1 – app over WiFi) sends a TCP FIN at the 30th second and the connection closes.
The client side (test 2 – app over 3G) sends a TCP FIN immediately (zero seconds) after it receives the HTTP/1.1 200 OK message for its first HTTP POST.
Test 1 logs on the server side:
At 23.101902 the app makes an HTTP/1.1 POST request to the server with “Connection: keep-alive”.
At 23.139422 the server responds HTTP/1.1 200 OK with “Connection: Keep-Alive” and “timeout=300” (300 seconds).
The Round-Trip Time (RTT) is reported as 333.82 ms (this highlights the margin of error we have on the following timestamps).
The app, however, closes the connection after about 30 seconds (approximate, given Internet transport variations: the difference between the 54.200863 and 23.451979 timestamps).
The test was repeated numerous times, with a time of approximately 30 seconds always observed.
Test 2 logs on the server side:
The app makes an HTTP/1.1 POST request.
The server responds HTTP OK with keep-alive accepted and set at 300 seconds.
The RTT is 859.849 ms.
The app closes the connection almost immediately, where "immediately" is 21.197918 − 18.747780 = 2.450138 seconds.
The tests were repeated while switching from WiFi to 3G and back, with the same results recorded.
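The Keep-Alive response header seen in these traces carries comma-separated key=value parameters (timeout=300, max=99). As a language-neutral illustration (Python, not part of the original setup), a minimal parser for this header:

```python
def parse_keep_alive(value: str) -> dict:
    """Parse an HTTP Keep-Alive header value like 'timeout=300, max=99'."""
    params = {}
    for part in value.split(","):
        # Each parameter is 'name=value'; numeric values become ints
        name, _, val = part.strip().partition("=")
        params[name.lower()] = int(val) if val.isdigit() else val
    return params

print(parse_keep_alive("timeout=300, max=99"))  # {'timeout': 300, 'max': 99}
```

The timeout parameter is only the server's offer; as the traces above show, the client is free to close the connection much earlier.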
Client Debug
Using WiFi
First Attempt (connection established)
Optional(
[AnyHashable("Content-Type"): text/html,
AnyHashable("Content-Encoding"): gzip,
AnyHashable("Content-Length"): 36,
AnyHashable("Set-Cookie"): user_cookieuser_session=HXQuslXgivCRKd%2BJ6bkg5D%2B0pWhCAWkUPedUEGyZQ8%2Fl65UeFcsgebkF4tqZQYzVgp2gWgAQ3DwJA5dbXUCz4%2FnxIhUTVlTShIsUMeeK6Ej8YMlB11DAewHmkp%2Bd3Nr7hJFFQlld%2BD8Q2M46OMRGJ7joOzmvH3tXgQtRqR9gS2K1IpsdGupJ3DZ1AWBP5HwS41yqZraYsBtRrFnpGgK0CH9JrnsHhRmYpD40NmlZQ6DWtDt%2B8p6eg9jF0xE6k0Es4Q%2FNiAx9S9PkhII7CKPuBYfFi1Ijd7ILaCH5TXV3vipz0TmlADktC1OARPTYSwygN2r6bEsX15Un5WUhc2caCeuXnmd6xy8sbjVUDn72KELWzdmDTl6p5fRapHzFEfGEEg2LOEuwybmf2Nt6DHB6o6EA5vfJovh2obpp4HkIeAQ%3D; expires=Sun, 08-Jan-2017 12:51:43 GMT; path=/,
AnyHashable("Keep-Alive"): timeout=300, max=100,
AnyHashable("Connection"): Keep-Alive,
AnyHashable("X-Powered-By"): PHP/5.3.10-1ubuntu3.11,
AnyHashable("Server"): Apache/2.2.22 (Ubuntu),
AnyHashable("Vary"): Accept-Encoding,
AnyHashable("Date"): Sun, 08 Jan 2017 10:51:43 GMT])
Second Attempt (within 30 sec, the connection is still alive)
Optional([AnyHashable("Content-Type"): text/html,
AnyHashable("Content-Encoding"): gzip,
AnyHashable("Content-Length"): 36,
AnyHashable("Keep-Alive"): timeout=300, max=99,
AnyHashable("Connection"): Keep-Alive,
AnyHashable("X-Powered-By"): PHP/5.3.10-1ubuntu3.11,
AnyHashable("Server"): Apache/2.2.22 (Ubuntu),
AnyHashable("Vary"): Accept-Encoding,
AnyHashable("Date"): Sun, 08 Jan 2017 11:00:18 GMT])
Then after 30 seconds the connection drops (FIN)
Using 3G
First Attempt
Optional([AnyHashable("Content-Type"): text/html,
AnyHashable("Content-Encoding"): gzip,
AnyHashable("Content-Length"): 36,
AnyHashable("Connection"): keep-alive,
AnyHashable("X-Powered-By"): PHP/5.3.10-1ubuntu3.11,
AnyHashable("Server"): Apache/2.2.22 (Ubuntu),
AnyHashable("Vary"): Accept-Encoding,
AnyHashable("Date"): Sun, 08 Jan 2017 11:04:31 GMT])
Then the connection drops almost instantly.

Now that I've looked at the code a second time, I think I see the problem. The underlying NSURLSession class defaults to ignoring the keep-alive header, because some servers "support" it but in practice break badly if you actually try to use it, IIRC.
If you want a session to support keep-alive, you have to explicitly set httpShouldUsePipelining (HTTPShouldUsePipelining in Objective-C) on the session configuration to true.
Note that there is still no guarantee that the connection will stay up, depending on how aggressively iOS decides to power-manage the radio, but at least you'll have a prayer. :-)

Related

Alamofire use configurable Caching

I'm using Alamofire 5 and have the requirement that some GET requests should be cached. If the data is older than 20 minutes, the real API should be hit.
What I found is to use the ResponseCacher. But I do not see a way to configure this per request and need some advice.
let responseCacher = ResponseCacher(behavior: .modify { _, response in
    let userInfo = ["date": Date()]
    return CachedURLResponse(
        response: response.response,
        data: response.data,
        userInfo: userInfo,
        storagePolicy: .allowed)
})

let configuration = URLSessionConfiguration.af.default
configuration.timeoutIntervalForRequest = 10
configuration.requestCachePolicy = .reloadRevalidatingCacheData

let session = Session(
    configuration: configuration,
    serverTrustManager: ServerTrustManager(evaluators: evaluators),
    cachedResponseHandler: responseCacher
)
If the backend is returning proper caching headers that you want to limit to a certain amount of time, adding a Cache-Control: max-age= header on the request may work.
If the backend isn't returning proper caching headers, using ResponseCacher is the way to go. You would modify the CachedURLResponse's response to include the proper Cache-Control header.
To elaborate on Jon's answer, the easiest way to achieve what you want is to just let the backend declare the cache semantics of this endpoint, then ensure that on the client side URLSession uses a URLCache (which is probably the default anyway), and let URLSession and the backend do the rest. This requires that you have control over the backend, though!
The more elaborate answer:
Here is an example of how a server may return a response with declared cache semantics:
URL: https://www.example.com/ Status Code: 200
Age: 238645
Cache-Control: max-age=604800
Date: Tue, 12 Jan 2021 18:43:58 GMT
Etag: "3147526947"
Expires: Tue, 19 Jan 2021 18:43:58 GMT
Last-Modified: Thu, 17 Oct 2019 07:18:26 GMT
Vary: Accept-Encoding
x-cache: HIT
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Length: 648
Content-Type: text/html; charset=UTF-8
Server: ECS (dcb/7EC7)
This server outputs practically the full range of what a server can declare regarding caching. The first eight headers (from Age to x-cache) declare the caching.
Cache-Control: max-age=604800, for example, declares that the data stays fresh for 604800 seconds. Given the date when the server created the data, the client can now check whether the data is still "fresh".
Expires: Tue, 19 Jan 2021 18:43:58 GMT means the same thing: it declares when the data becomes outdated by specifying a wall-clock time. This is redundant with the declaration above, but HTTP defines very clearly how clients should treat it.
An Age header is a hint that the response was actually delivered from a cache sitting between the client and the origin server. The age is an estimate of the data's age: the duration from when it was created on the origin to when it was delivered.
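The freshness check these headers drive can be sketched as follows (Python, purely for illustration; the numbers come from the example response above):

```python
def is_fresh(max_age: int, age: int, seconds_in_cache: int) -> bool:
    """A simplified HTTP freshness check in the spirit of RFC 7234.

    max_age:          value of Cache-Control: max-age
    age:              value of the Age header (estimated age on arrival)
    seconds_in_cache: how long the response has sat in the local cache
    """
    current_age = age + seconds_in_cache
    return current_age < max_age

# Example response: max-age=604800 (7 days), Age: 238645 (~2.8 days)
print(is_fresh(604800, 238645, 0))       # True: still fresh on arrival
print(is_fresh(604800, 238645, 400000))  # False: 238645 + 400000 > 604800
```

A real cache also factors in response delay and heuristics, but this captures the core arithmetic: estimated age versus declared lifetime.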
I don't want to go into detail about what every header means exactly and how a client and server should behave according to HTTP, since this is a very involved topic. What you basically have to do when you define an endpoint is define the duration of the "freshness" of the returned data.
The whole details: Hypertext Transfer Protocol (HTTP/1.1): Caching
Once you have come up with a good duration, web-application frameworks (like Rails, Spring Boot, etc.) give great help with declaring cache semantics out of the box. The framework will then output the corresponding headers in the response, more or less "automagically".
URLSession will automatically do the right thing according to the HTTP protocol (well, almost). That is, it will store the response in the cache, and when you perform a subsequent request it will first look for a suitable response in the cache and return that response if the data is still "fresh".
If the cached data is too old (according to the given response headers and the current date and time), it will try to get fresh data by sending the request to the origin server. Any upstream cache, or eventually the origin server, may then return fresh data. Your URLSession data task does all this transparently, without giving you a clue whether the data came from the cache or the origin server. And honestly, in most cases you don't need to know.
Declaring the cache semantics according to HTTP is very powerful and usually suits your needs. The client may also tailor its behaviour by specifying certain request headers, for example allowing even stale data to be returned, or ignoring any cached values, and much more.
Every detail may deserve a dedicated question and answer on SO.

Does not setting cache-control automatically enable caching even without conditional request?

For the following image: https://upload.wikimedia.org/wikipedia/commons/7/79/2010-brown-bear.jpg
There isn't any Cache-Control header. And based on here, even if you don't send anything it will use its default value, which is private. That being the case, doesn't URLSession need to perform a conditional request to make sure it's still valid?
Is there anything in the headers that allows it to make such a conditional request? I don't see Cache-Control, max-age, or Expires. The only things I see are Last-Modified & Etag, but those need to be validated against the server. Or does not specifying anything make it cache indefinitely?! I've already read this answer, but it doesn't discuss this scenario.
Yet it's being cached by URLSession (because if I turn off the internet, it still loads).
The only other thing I see is "Strict-Transport-Security": max-age=106384710.
Does that affect caching? I've already looked here and don't believe it should. From what I understand, the max-age for the HSTS key is only there to enforce access over HTTPS for a certain period of time. Once the max-age is reached, access through HTTP is also possible.
These are all the headers that I'm getting back:
Date : Wed, 31 Oct 2018 14:15:33 GMT
Content-Length : 215104
Access-Control-Expose-Headers: Age, Date, Content-Length, Content-Range, X-Content-Duration, X-Cache, X-Varnish
Via : 1.1 varnish (Varnish/5.1), 1.1 varnish (Varnish/5.1)
Age : 18581
Etag : 00e21950bf432476c91b811bb685b6af
Strict-Transport-Security : max-age=106384710; includeSubDomains; preload
Accept-Ranges : bytes
Content-Type : image/jpeg
Last-Modified : Fri, 04 Oct 2013 23:30:08 GMT
Access-Control-Allow-Origin : *
Timing-Allow-Origin : *
x-analytics : https=1;nocookies=1
x-object-meta-sha1base36 : 42tq5grg9rq1ydmqd4z5hmmqj6h2309
x-varnish : 60926196 48388489, 342256851 317476424
x-cache-status : hit-front
x-trans-id : tx08ed43bbcc1946269a9a3-005bd97070
x-timestamp : 1380929407.39127
x-cache : cp1076 hit/7, cp1090 hit/7
x-client-ip : 2001:558:1400:4e:171:2a98:fad6:2579
This question was asked because of this comment
doesn't the URLSession need to perform a conditional request to make sure its still valid?
The user-agent should be performing a conditional request, because of the
Etag: 00e21950bf432476c91b811bb685b6af
that is present. My desktop Chrome certainly performs the conditional request (and gets back 304 Not Modified).
But it's free not to
But a user-agent is perfectly free to decide on its own. It's perfectly free to look at:
Last-Modified: Fri, 04 Oct 2013 23:30:08 GMT
and decide that the resource is probably good for the next five minutes1. And if the network connection is down, it's perfectly reasonable and correct to display the cached version instead. In fact, your browser would show you web sites even while your 0.00336 Mbps dial-up modem was disconnected.
You wouldn't want your browser to show you nothing when it knows full well it can show you something. This becomes even more useful when we're talking about poor internet connectivity caused not by slow dial-up and servers that go down, but by mobile computing and metered data plans.
1I say five minutes because, in the early web, servers did not give cache hints, so browsers cached things without even being asked, and five minutes was a good number. And you used Ctrl+F5 (or was it Shift+F5, or Shift+Click, or Alt+Click) to force the browser to bypass the cache.
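A revalidating client sends the stored validators back to the server. A sketch (Python, illustrative only; the header values are from the response above) of building such a conditional request from a cached response:

```python
def conditional_headers(cached: dict) -> dict:
    """Build conditional-request headers from a cached response's validators.

    If the server answers 304 Not Modified, the cached body can be reused.
    """
    headers = {}
    if "Etag" in cached:
        headers["If-None-Match"] = cached["Etag"]
    if "Last-Modified" in cached:
        headers["If-Modified-Since"] = cached["Last-Modified"]
    return headers

cached = {
    "Etag": "00e21950bf432476c91b811bb685b6af",
    "Last-Modified": "Fri, 04 Oct 2013 23:30:08 GMT",
}
print(conditional_headers(cached))
```

Whether a user-agent actually sends these on every access, or only after some heuristic freshness window expires, is its own choice, as discussed above.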

Race between socket accept and receive

I am using NodeMCU on an ESP32 and recently came across an annoying problem. I refer to this example from the NodeMCU GitHub page:
-- a simple HTTP server
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
  conn:on("receive", function(sck, payload)
    print(payload)
    sck:send("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<h1> Hello, NodeMCU.</h1>")
  end)
  conn:on("sent", function(sck) sck:close() end)
end)
This doesn't seem to work in every case.
If I try it with telnet, there is no issue:
$ telnet 172.17.10.59 80
Trying 172.17.10.59...
Connected to 172.17.10.59.
Escape character is '^]'.
GET / HTTP/1.1
HTTP/1.0 200 OK
Content-Type: text/html
<h1> Hello, NodeMCU.</h1>
Connection closed by foreign host.
But when using wget, it hangs most of the time:
$ wget http://172.17.10.59/
--2017-05-12 15:00:09-- http://172.17.10.59/
Connecting to 172.17.10.59:80... connected.
HTTP request sent, awaiting response...
After some research, the root cause seems to be that the receive callback is registered after the first data has been received from the client. This doesn't happen when testing manually with telnet, but with a client like wget or a browser, the delay between connecting and receiving the first data seems to be too small to register the receive handler first.
I have looked into the NodeMCU code and there doesn't seem to be an easy way to work around this problem. Or am I missing something here?
In HTTP/1.0, you need a "Content-Length" header when there is a message body; without it, clients wait for the connection to close to know the body is complete.
For example:
"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\nContent-Length: 25\r\n\r\n<h1> Hello, NodeMCU.</h1>"
ref: https://www.w3.org/Protocols/HTTP/1.0/spec.html#Content-Length
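To make the Content-Length bookkeeping explicit, here is a small sketch (Python, illustrative only) that computes the header from the actual body bytes instead of hard-coding it:

```python
def http10_response(body: str, content_type: str = "text/html") -> bytes:
    """Build a minimal HTTP/1.0 response with a correct Content-Length."""
    payload = body.encode("utf-8")
    headers = (
        "HTTP/1.0 200 OK\r\n"
        f"Content-Type: {content_type}\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + payload

resp = http10_response("<h1> Hello, NodeMCU.</h1>")
print(resp)  # the 25-byte body yields "Content-Length: 25"
```

Computing the length from the encoded bytes (not the character count) matters once the body contains multi-byte UTF-8 characters.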

Jersey Client opens too many Connections

I ran into some problems with my Jersey REST API and a client.
This is how I'm using the methods on the server side:
@POST
@Path("/seed")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public Response addSeed(Seed seed) throws InterruptedException {
    if (!Validator.isValidSeed(seed)) {
        return Response.status(400).entity("{\"message\":\"Please verify your JSON!\", \"stat\":\"failed\"}")
                .build();
    }
    save(seed);
    return Response.status(200).build();
}
If I run a Jersey client in a while(true) loop, connections stay open and won't close. So I run into a problem: I have a lot of connections open and my network crashes, so I can't use my server any more. Only after the connections are closed can I connect to the server again.
This is the client:
ClientConfig config = new DefaultClientConfig();
Client client = Client.create(config);
WebResource service = client.resource(getBaseURI()).path("api/seed");
while (true) {
    ClientResponse cr = service.header("Content-Type", "application/json").post(ClientResponse.class, seed);
    System.out.println(cr);
    cr.close();
}
My questions are:
What can I do on the server side to prevent clients from opening new connections?
How can I specify a maximum number of connections?
And how should I implement the Jersey client so that it reuses open connections?
I don't know of a way to limit Jersey resources at the web-app level. If you upgrade to GlassFish EE, you can make your resources EJBs: @Stateless @StatelessDeployment(maxInstances=16)
The pile-up of connections could be because of keep-alive settings. In Tomcat 6 there are two attributes you can tune your connector with:
maxKeepAliveRequests, which defaults to 100. It is the maximum number of HTTP requests that can be pipelined until the connection is closed by the server. Setting this attribute to 1 disables HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting it to -1 allows an unlimited number of pipelined or keep-alive HTTP requests.
keepAliveTimeout, which defaults to connectionTimeout, which in turn defaults to 60,000 ms. It is the number of milliseconds this Connector will wait for another HTTP request before closing the connection.
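For example, both attributes go on the Connector element in Tomcat's server.xml. A sketch, assuming the default HTTP connector; the port and timeout values here are illustrative, not taken from the question:

```xml
<!-- server.xml: close idle keep-alive connections after 15 s,
     and allow at most 50 requests per connection -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="15000"
           maxKeepAliveRequests="50" />
```

Lowering these values makes the server reclaim idle connections faster at the cost of more handshakes for well-behaved clients that do reuse connections.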

use httpclient to send request over ip is slow

Now I come with a strange question. I send HTTP requests with HttpClient. When I use the domain name in the request, like dynamic.12306.cn, or I use the IP address and put the mapping in windows/system32/drivers/etc/hosts, like 122.227.2.27 dynamic.12306.cn, the request returns quickly. But if I only use the IP and don't put anything in hosts, it is very slow.
For the two cases above, I will show examples below:
Case 1 (fast): the request URL is https://dynamic.12306.cn/otsweb/main.jsp
or the request URL is https://122.227.2.27/otsweb/main.jsp with 122.227.2.27 dynamic.12306.cn added to hosts.
Case 2 (slow): the request URL is https://122.227.2.27/otsweb/main.jsp and nothing is added to hosts.
I enabled HttpClient's debug mode, and I found that with case 2 it is very slow to connect to the server.
The logs:
2013/03/17 10:19:10:665 CST [DEBUG] BasicClientConnectionManager - Get connection for route {s}->https://122.227.2.27
2013/03/17 10:19:11:234 CST [DEBUG] DefaultClientConnectionOperator - Connecting to 122.227.2.27:443
2013/03/17 10:19:20:796 CST [DEBUG] RequestAddCookies - CookieSpec selected: best-match
it costs several seconds to connect to the server.
But if I use case 1:
The logs:
2013/03/17 10:30:13:876 CST [DEBUG] BasicClientConnectionManager - Get connection for route {s}->https://dynamic.12306.cn
2013/03/17 10:30:14:403 CST [DEBUG] DefaultClientConnectionOperator - Connecting to dynamic.12306.cn:443
2013/03/17 10:30:14:499 CST [DEBUG] RequestAddCookies - CookieSpec selected: best-match
it connects to the server quickly.
Try listening to DNS queries while you are making the request. I had the same issue, and it turned out the hosting website appended a hostname right after the IP.