I have a piece of code that I have been trying to port. It works 100% fine on Windows using a WinHTTP implementation. On the iOS 7 simulator I am using NSURLSession, and regular HTTPS GET/POST requests seem to work fine.
Things start breaking down when I use a "streaming" HTTP request. In this case the content length is unknown, because the data streams in continuously.
I have a blocking synchronous call, shown below, that waits until the current request completes. When I use the first line, the synchronous loop exits after the delegate is hit. However, if I replace it with the commented-out second line, the synchronous loop hangs.
[m_pDelegate.session invalidateAndCancel];
// [m_pDelegate.session finishTasksAndInvalidate];
blockUntilOperationsComplete();
Eventually it will exit, and I do get my data callbacks. I believe the callbacks finally trigger MINUTES later because small keep-alive messages (16 bytes long) eventually overflow the buffer and trigger a delegate call. Is there a way to reduce the buffering threshold?
After wasting two days on this, I'll leave this for the next soul that comes by. There is no way to reduce this buffer through the existing NSURL* classes. It turns out that the current implementation (on iOS 7, and apparently it has been this way forever) buffers incoming chunked-encoded data until 512 bytes of chunked payload have accumulated, and only then do the callbacks occur - here is the important part - if the Content-Type is "text/html". After that, all subsequent traffic triggers callbacks in real time.
However, if the server changes the Content-Type header to "application/json", the data will not be buffered and your callbacks will fire as soon as something is actually received.
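For reference, here is a minimal delegate sketch (the class name and URL are mine, not from the original post) showing where the buffering is felt; whether didReceiveData fires immediately or only after ~512 bytes depends entirely on the server's Content-Type, as described above:

@interface StreamDelegate : NSObject <NSURLSessionDataDelegate>
@end

@implementation StreamDelegate
// Fires once per received chunk. With a chunked "application/json" response
// this is called as soon as bytes arrive; with "text/html" the first call
// waits until ~512 bytes of chunked payload have accumulated.
- (void)URLSession:(NSURLSession *)session
          dataTask:(NSURLSessionDataTask *)dataTask
    didReceiveData:(NSData *)data
{
    NSLog(@"received %lu bytes", (unsigned long)[data length]);
}
@end

// Usage: attach the delegate to a session and start the streaming request.
StreamDelegate *streamDelegate = [[StreamDelegate alloc] init];
NSURLSession *session =
    [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration defaultSessionConfiguration]
                                  delegate:streamDelegate
                             delegateQueue:nil];
[[session dataTaskWithURL:[NSURL URLWithString:@"https://example.com/stream"]] resume];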
Is there an easy way to get the time it took Indy to connect, and the time it took to receive data, in a TIdHTTP.Get() or TIdHTTP.Put() operation?
EDIT:
I want to get statistics to determine which timeouts are best to use for the ReceiveTimeout and ConnectTimeOut properties.
For timing a connect, you can use the OnStatus(hsConnecting) and OnStatus(hsConnected)/OnConnected events. Note that if a URL involves a hostname (which is the usual case), there is also an OnStatus(hsResolving) event that precedes the OnStatus(hsConnecting) event. However, DNS resolution does not play into ConnectTimeout handling at this time.
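For example, a minimal sketch (the form, field, and handler names are assumed) that times the connect phase using OnStatus and a TStopwatch:

uses
  System.Diagnostics, IdComponent;

procedure TForm1.HTTPStatus(ASender: TObject; const AStatus: TIdStatus;
  const AStatusText: string);
begin
  case AStatus of
    // hsResolving precedes hsConnecting when a hostname must be resolved
    hsConnecting: FConnectWatch := TStopwatch.StartNew;
    hsConnected:
      Memo1.Lines.Add(Format('Connect took %d ms',
        [FConnectWatch.ElapsedMilliseconds]));
  end;
end;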
For timing the receive, that is a bit trickier, since there are no events for detecting the end of sending a request, or the beginning/end of reading a response [1]. Also, a given HTTP request may involve multiple steps (redirects, authentication, etc.), which may in turn involve multiple disconnects/reconnects, since HTTP is a stateless protocol that does not depend on a persistent connection the way most other protocols do. So, about the only way I can think of to accomplish this is to attach an Intercept component to the TIdHTTP.Intercept property and then manually parse the HTTP messages as they are being exchanged.
[1] Actually, that is not entirely true. There is a TIdHTTP.OnHeadersAvailable event, which is fired after the HTTP response headers have been read and before the HTTP response body is read. So, if you don't care about the timing of the headers, you can at least use that event to start timing the receipt of the body data, and then stop the timer when Get()/Post() exits. Each step that requires TIdHTTP to repeat a request fires a new OnHeadersAvailable event, which you can use to reset your timer. That way, you end up with the time of the final response only.
However, note that ReceiveTimeout is a per-byte timeout, so an alternative might be to receive the HTTP response data into a custom TStream (or Indy's TIdEventStream), and then time the intervals between individual writes to that stream by overriding its Write() method (or using the OnWrite event).
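As a rough sketch of the footnoted approach (names are assumed, and TStopwatch stands in for whatever timing mechanism you prefer):

uses
  System.Diagnostics, IdHTTP, IdHeaderList;

procedure TForm1.HTTPHeadersAvailable(Sender: TObject;
  AHeaders: TIdHeaderList; var VContinue: Boolean);
begin
  // Restarted for every response in a redirect/authentication chain,
  // so only the final response body ends up being timed.
  FBodyWatch := TStopwatch.StartNew;
  VContinue := True;
end;

procedure TForm1.TimedGet;
var
  Body: string;
begin
  IdHTTP1.OnHeadersAvailable := HTTPHeadersAvailable;
  Body := IdHTTP1.Get('http://example.com/data');
  FBodyWatch.Stop;
  Memo1.Lines.Add(Format('Body received in %d ms',
    [FBodyWatch.ElapsedMilliseconds]));
end;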
How can I send multiple post requests using TIdHTTP at the same time?
lHTTP1.Post('http://'+cURL+'/build.php?',lParamList, ResponseContent);
lHTTP2.Post('http://'+cURL+'/build.php?',lParamList, ResponseContent);
lHTTP3.Post('http://'+cURL+'/build.php?',lParamList, ResponseContent);
I tried using three threads to do that, but there is a one second delay between every post message.
How can I send all the post messages in the same second?
Since TIdHTTP is a blocking component, using separate threads is the correct approach. The one-second delay on each post could be related to how the OS schedules threads, or to network delays, or you might be using a version of Indy that has internal delays (for instance, if an HTTP server sends a 3xx response to a POST request, TIdHTTP waits up to 5 seconds to make sure the server sends a proper response body, because some buggy servers do not). It is difficult to know where your delay is actually occurring; you will have to debug/profile your project to find out, as we can't do that for you.
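As a sketch of the one-TIdHTTP-per-thread pattern (the POST parameters are placeholders; cURL is taken from the question): each thread must own its own TIdHTTP instance, since the component is not safe to share across threads.

uses
  System.Classes, IdHTTP;

procedure PostInThread(const AURL: string);
begin
  TThread.CreateAnonymousThread(
    procedure
    var
      HTTP: TIdHTTP;
      Params: TStringList;
      Response: string;
    begin
      HTTP := TIdHTTP.Create(nil);
      Params := TStringList.Create;
      try
        Params.Add('action=build'); // placeholder POST parameters
        Response := HTTP.Post(AURL, Params);
      finally
        Params.Free;
        HTTP.Free;
      end;
    end).Start;
end;

// Fire all three posts without waiting for each other:
PostInThread('http://' + cURL + '/build.php?');
PostInThread('http://' + cURL + '/build.php?');
PostInThread('http://' + cURL + '/build.php?');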
Is it wrong to wait for EPOLLIN, read all data from the socket, and then immediately send the response?
Is it better to wait for EPOLLOUT before sending the response? If so - why? If not - what exactly is the purpose of EPOLLOUT?
I've seen some epoll examples that wait for EPOLLOUT and some that don't.
If you wait for EPOLLOUT, you are guaranteed that the next send will not block. That means it will accept at least one byte (admittedly quite a poor guarantee, but unfortunately that is all you get; you are never guaranteed that send will accept more than one byte).
You can do perfectly well without waiting for EPOLLOUT if either blocking is not an issue or the socket is non-blocking (in which case send would fail with EWOULDBLOCK rather than block). It certainly results in much less complicated code.
It's not wrong to do either.
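As a sketch of the usual non-blocking pattern (epfd and the caller's retry bookkeeping are assumed): try the send first, and only register interest in EPOLLOUT when the kernel buffer is actually full.

#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/epoll.h>
#include <sys/socket.h>

ssize_t try_send(int epfd, int fd, const char *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Socket buffer full: ask epoll to report when it drains. */
        struct epoll_event ev;
        memset(&ev, 0, sizeof ev);
        ev.events = EPOLLIN | EPOLLOUT;
        ev.data.fd = fd;
        epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
        return 0;  /* caller retries the unsent bytes on EPOLLOUT */
    }
    return n;  /* may be a partial write: caller resends from buf + n */
}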
This is an issue that's making me question my own sanity, but I'm posting the question in case it's something real rather than a problem of my own making.
I have an iOS app that is making use of the NSURLConnection class to send a request to a webserver. The object is instantiated and instructed to call back the delegate, which receives the corresponding notifications didReceiveResponse / didReceiveData / didFinishLoading / didFailWithError. Effectively the same code that is posted on Apple's dev page for using the class. The requests are all short POST transmissions with JSON data; the responses are also JSON-formatted, and come back from an Apache Tomcat Java Servlet.
For the most part it all works as advertised. The app sends a series of requests to the server in order to start a job and poll for partial results. Most of the exchanges are short, but the results can be up to about 100-200 KB when there are partial results available.
The individual pieces of data are handed back by the operating system in chunks of about 10 KB each, give or take. Transport is essentially instantaneous, as it is talking to a test server on the LAN.
However: after a few dozen polling operations, the rate of transport grinds to a near standstill. The sequence of response/data.../finished proceeds normally: the webserver has delivered its payload, but the iOS app receives chunks of exactly 2896 bytes with 20-30 seconds in between. It is the correct data, and waiting about five minutes for 130 KB of data confirms that everything is operating correctly.
Nothing I do seems to work around it. I tried switching to the "async" invocation method with a response block; same result. Talking to a remote website rather than my LAN test deployment gets the same result. Running in the simulator or on an iPhone gets the same result. The server returns Content-Length and doesn't try to do anything weird like keeping the connection alive.
Changing the frequency of the polling achieves little, unless I crank the delay between polls up to 50 seconds, at which point everything works fine, presumably because it only ends up polling once or twice.
A hypothesis that fits this observation is that the NSURLConnection object hangs around long after it has been released and chews up resources. Once a certain limit is hit, the transfer rate grinds to a near halt. If the slowed-down connection actually completes, subsequent connections work normally again, presumably because the old ones have been cleaned up.
So does this sound familiar to anyone?
My app communicates with a server over TCP, using AsyncSocket. There are two situations in which communication takes place:
The app sends the server something, the server responds. The app needs to read this response and do something with the information in it. This response is always the same length, e.g., a response is always 6 bytes.
The app is "idling" and the server initiates communication at some time (unknown to the app). The app needs to read whatever the server is sending (could be any number of bytes, but the first byte will indicate how many bytes are following so I know when to stop reading) and process this information.
The first situation is working fine. readDataToLength:withTimeout:tag: returns what I need, and I can do with it what I want. It's the second situation that I'm unsure how to implement. I can't use readDataToLength:withTimeout:tag:, since I don't know the length beforehand.
I'm thinking I could do something with readDataWithTimeout:tag:, setting the timeout to -1. That makes the socket listen constantly for anything coming in, I believe. However, that would probably interfere with data coming in as a response to something I sent out (situation 1): the app could no longer tell whether incoming data belongs to situation 1 or situation 2.
Can anybody here help me solve this?
Your error is in the network protocol design.
Unless your protocol carries this information, there is no way to distinguish a response from server-initiated communication, and network latency prevents the obvious time-based approach you've described from working reliably.
One simple way to fix the protocol in your case (if the server-initiated messages are always fewer than 255 bytes): add a 7th byte at the beginning of each response, with the value 0xFF.
This way you can read a single header byte with readDataToLength:1 withTimeout:-1 tag:.
On timeout, you simply retry until data arrives.
If the received value is 0xFF, you read 6 more bytes with readDataToLength:6 withTimeout:tag: and interpret them as the response to the request you sent earlier.
If it is any other value, you read that many bytes with readDataToLength:theValue withTimeout:tag: and process them as a server-initiated message.
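A hedged sketch of that dispatch logic using AsyncSocket's delegate callback (the tag constants and helper name are mine, not from the question):

enum { kTagHeader = 1, kTagResponse = 2, kTagServerMsg = 3 };

- (void)waitForHeader:(AsyncSocket *)sock
{
    // Keep a 1-byte read pending at all times; -1 disables the timeout.
    [sock readDataToLength:1 withTimeout:-1 tag:kTagHeader];
}

- (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag
{
    if (tag == kTagHeader) {
        uint8_t header = ((const uint8_t *)[data bytes])[0];
        if (header == 0xFF) {
            // 0xFF marks a response to one of our own requests: 6 bytes follow.
            [sock readDataToLength:6 withTimeout:-1 tag:kTagResponse];
        } else {
            // Anything else is a server-initiated message of 'header' bytes.
            [sock readDataToLength:header withTimeout:-1 tag:kTagServerMsg];
        }
    } else {
        // ...handle the kTagResponse / kTagServerMsg payload here...
        [self waitForHeader:sock]; // then go back to waiting for a header byte
    }
}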