How can I detect a dropped connection with CFStream/NSStream? (iOS)

Here's the setup:
I have an input stream created with CFReadStreamCreateForStreamedHTTPRequest. It has the kCFStreamPropertyHTTPAttemptPersistentConnection property set (I'm pooling connections). The server I'm connecting to is returning data with chunked encoding.
Everything works well in our normal case. My problem is that the server I'm talking to drops my connection if there are any problems. For the life of me I can't figure out how to detect the dropped connection.
In my stream's event callback I just see kCFStreamEventEndEncountered, but no error is ever reported, even though in Wireshark I can clearly see that the connections are getting a FIN long before the end-of-stream chunked-encoding marker is sent.
It seems like I should be getting a kCFURLErrorNetworkConnectionLost error event when the server drops the connection but I just get end-encountered.
Any suggestions greatly appreciated.

It took some digging but it turns out that this is, in fact, a problem with CFNetwork. I went back to the 10.4.10 Darwin sources and found this comment in CFHTTPFilter.c
// Premature end of stream. However, some servers send simply CRLF for their last chunk instead of 0CRLF; be tolerant of this.
So it seems that the built-in HTTP chunk decoder in iOS and OS X will consider a dropped connection orderly (and not report an error) as long as it successfully read the CRLF after the last chunked data block and there are no more bytes outstanding in the buffer.
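To make the tolerant behaviour concrete, here is a minimal sketch of a *strict* chunked-body decoder, i.e. one that treats a connection dropped before the terminal "0\r\n\r\n" chunk as an error. All names are illustrative; nothing here is part of any Apple API, and `buf` is assumed to be a NUL-terminated capture of the raw body:

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Returns 0 when the terminal "0\r\n\r\n" chunk is seen (orderly end),
 * -1 when the stream ends early -- the case the CFNetwork decoder
 * described above silently forgives. */
static int decode_chunked(const char *buf, size_t len, char *out, size_t *outlen)
{
    size_t pos = 0, written = 0;
    for (;;) {
        char *end;
        unsigned long chunk;
        /* A chunk must start with a hex size line. */
        if (pos >= len || !isxdigit((unsigned char)buf[pos]))
            return -1;                               /* no chunk-size line */
        chunk = strtoul(buf + pos, &end, 16);
        pos = (size_t)(end - buf);
        if (pos + 2 > len || memcmp(buf + pos, "\r\n", 2) != 0)
            return -1;                               /* size line lacks CRLF */
        pos += 2;
        if (chunk == 0) {                            /* terminal chunk */
            if (pos + 2 > len || memcmp(buf + pos, "\r\n", 2) != 0)
                return -1;
            *outlen = written;
            return 0;                                /* orderly end of body */
        }
        if (pos + chunk + 2 > len)
            return -1;                               /* FIN arrived mid-chunk */
        memcpy(out + written, buf + pos, chunk);
        written += chunk;
        if (memcmp(buf + pos + chunk, "\r\n", 2) != 0)
            return -1;                               /* data lacks trailing CRLF */
        pos += chunk + 2;
    }
}
```

Note that a body ending with a bare CRLF instead of "0\r\n\r\n" (the server quirk the Darwin comment mentions) also comes back as -1 here, which is exactly the strictness CFNetwork chose to relax.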

Related

Linux recv returns data not seen in Wireshark capture

I am receiving data through a TCP socket, and although this code has been working for years, I came across very odd behaviour while trying to integrate a new device (which acts as a server) into my system:
Before receiving the HTTP body of the response, the recv() system call gives me strange characters like '283' or '7b'.
I am debugging with gdb and can see that the variables hold these values right after recv() returns (so it is not just what printf shows me).
I always read byte by byte (one at a time) with recv(), and the returned value is always positive.
This first line of the received HTTP body cannot be seen in Wireshark (!) and is also not expected to be there. In Wireshark I see what I would expect to receive.
I changed the device that sends me the data and I still receive the exact same values.
I performed a clean debug build and also tried a release version of my program and still get the exact same values, so I assume these are not random values that happened to be in memory.
I am running Linux kernel 3.2.58 without the option to upgrade/update.
I am not sure what other information I should provide, and I have no idea what else to try.
Found it. The problem is that I did not take the Transfer-Encoding into consideration, which is chunked. I was lucky: older versions of Wireshark also showed these bytes in the payload, so other people had posted similar problems in the Wireshark forum.
Those "strange" bytes are the chunk-size lines of chunked transfer encoding: a hexadecimal number telling you how many payload bytes follow. When you have read that many bytes, you will receive another number that tells you whether you should continue reading (and, again, how many bytes follow); a chunk size of zero marks the end of the body. As far as I understood, this is useful when the data changes dynamically and you want to keep receiving its current value.
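In other words, '283' is not garbage but a hex length: 0x283 is 643, the size of the next chunk. A small illustrative helper (the function name and layout are assumptions, not part of any library) shows how such a size line is consumed:

```c
#include <stdlib.h>
#include <string.h>

/* Parse the hexadecimal chunk-size line at `p` ("283\r\n" -> 643) and
 * point `rest` at the first payload byte. Returns -1 if `p` does not
 * start with a hex size followed by CRLF. Assumes `p` is NUL-terminated. */
static long chunk_length(const char *p, const char **rest)
{
    char *end;
    long n = strtol(p, &end, 16);       /* hex digits of the size line */
    if (end == p || strncmp(end, "\r\n", 2) != 0)
        return -1;                      /* not a chunk-size line */
    *rest = end + 2;                    /* skip the CRLF */
    return n;
}
```

Reading byte-by-byte works fine with this scheme; you just have to interpret the size lines instead of treating them as part of the body.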

Detecting a closed connection

I have a Grails application which does the following:
When a request is received from the client side, the server starts creating a zip file for that request and sends it back to the client machine. Creating the zip file takes a very long time, and even if the connection between the client and the server is lost, the server keeps generating the zip file, for three days or so, using 100% of the CPU and sending the response somewhere, probably a dead end.
Looking for a way to resolve this, I found that socket programming is one way to detect the connection loss.
This question may look broad, but I just want to know the ways a lost connection can be detected, so that I can dig into that approach and find a solution.
Check whether the object holding the result of the connect() call is null.
You can also wrap a try/catch statement around the code that attempts the connection and print the exception if it fails, so you know where and what happened.
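The answer above only covers connection *setup*; the question is really about noticing a disconnect mid-job. At the socket level the usual technique (sketched here in C for concreteness; the function name is mine, and `client_fd` is assumed to be a connected TCP socket on a POSIX system) is to probe the socket between units of work instead of grinding on for days:

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Returns 1 if the peer has closed (FIN received) or the connection has
 * errored (e.g. RST), 0 if the connection still looks alive or is merely
 * idle. MSG_PEEK leaves any pending data in the queue; MSG_DONTWAIT
 * makes the probe non-blocking. */
static int peer_closed(int client_fd)
{
    char b;
    ssize_t n = recv(client_fd, &b, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n == 0)
        return 1;                                  /* orderly close from peer */
    if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
        return 1;                                  /* RST or other socket error */
    return 0;                                      /* alive, or just no data yet */
}
```

Two caveats: this only sees a FIN/RST that has actually arrived (a silently dead link needs TCP keepalive), and inside a servlet container such as the one under Grails you typically don't get the raw socket; there, the analogous signal is the IOException thrown when you write and flush partial output to the response stream of a disconnected client, so writing the zip incrementally would surface the disconnect too.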

iOS NSInputStream

I have run into a problem using NSInputStream.
I have a client app which connects to a server; the server then repeatedly sends messages to my client app over TCP, about one message per second. The server just broadcasts messages to its clients; each message is in XML format, and the server sends each message as one packet.
The problem is that when I read bytes from NSInputStream, the data gets truncated: instead of receiving one complete message, from time to time I get two separate pieces of data (partial XML). I am not able to debug it, because it has already happened by the time I read the bytes from NSInputStream.
I used Wireshark to analyse every packet I receive, and when it happens the data appears split there too; because this is TCP, the remaining data simply arrives in a later segment.
I have tried logging every partial piece of data; the sum of the partial pieces is always around 1600 bytes.
I have no idea how the server side was designed and implemented, but I do know that many people connect to that server and continuously receive broadcast messages from it.
Has anyone encountered this problem? Can anyone help? Is it possible that the data exceeds some maximum size and gets split?
This is not a problem per se. It is part of the design of TCP and also of NSInputStream. You may receive partial messages. It's your job to deal with that fact, wait until you receive a full message, and then process the completed message.
1600 bytes is a little strange. I would expect 1500 bytes, since that's the largest legal Ethernet packet (or, more likely, somewhere around 1472, a very common MTU minus some header overhead). Or I might expect a multiple of 1k or 4k due to buffering in NSInputStream. But none of that matters. You have to deal with the fact that you will not necessarily receive messages at their boundaries.
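The reassembly the answer calls for can be sketched as a small accumulator. This applies to any TCP read API (bytes read from NSInputStream would be fed in the same way); the `</message>` terminator is an assumed example, since the real protocol's end-of-document marker isn't given:

```c
#include <stdlib.h>
#include <string.h>

#define TERMINATOR "</message>"   /* assumed end-of-message marker */

typedef struct { char data[8192]; size_t len; } Accumulator;

/* Append a freshly-read fragment. Returns a malloc'd complete message
 * (caller frees) once one is available, NULL while the message is still
 * incomplete. Any bytes after the terminator are kept for the next call.
 * Sketch only: assumes text messages (no embedded NUL bytes). */
static char *feed(Accumulator *a, const char *frag, size_t n)
{
    if (a->len + n >= sizeof a->data)
        return NULL;                         /* overflow guard (sketch) */
    memcpy(a->data + a->len, frag, n);
    a->len += n;
    a->data[a->len] = '\0';
    char *end = strstr(a->data, TERMINATOR);
    if (!end)
        return NULL;                         /* message not complete yet */
    size_t msglen = (size_t)(end - a->data) + strlen(TERMINATOR);
    char *msg = malloc(msglen + 1);
    memcpy(msg, a->data, msglen);
    msg[msglen] = '\0';
    /* Shift the leftover bytes (start of the next message) to the front. */
    memmove(a->data, a->data + msglen, a->len - msglen);
    a->len -= msglen;
    return msg;
}
```

A length-prefixed protocol would use the same structure, just checking `a->len` against the announced length instead of searching for a terminator.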

My HTTP server's output is apparently invalid. How do I debug it?

Background: I have a custom HTTP server written in Erlang to stream stuff to an iPad app. I was using NSURLConnection - the standard high-level Apple way to consume HTTP content. However I was having problems with small chunks of data being buffered and not passed to my code immediately, so I was forced to switch to CFNetwork.
While NSURLConnection never complained, CFNetwork sometimes (~1 in 3 times) gives the following error and kills the connection:
The operation couldn’t be completed. (kCFErrorDomainCFNetwork error 303.)
According to Apple docs, this corresponds to "The HTTP server response could not be parsed."
This only occurs after the connection has been opened for a couple of seconds, and the response is being made.
I've taken a packet capture on the server with tshark. It's quite large, and contains UTF-8 as well as confidential details.
How can I go about verifying that it's a valid HTTP 1.1 chunked response?
Just to clarify, I've looked through the result of tcpick -C -yP -r ... and couldn't see anything immediately amiss, but I'm wondering if there's anything I can pass it through to confirm it's byte for byte valid.
It turns out the problem was sndbuf in the Erlang web server being set too low.
That was five hours wasted scanning a packet capture, implementing chunked HTTP on top of CFStream from scratch, and so on.
How can I go about verifying that it's a valid HTTP 1.1 chunked response?
By confirming whether chunked transfer encoding is implemented correctly.
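One way to make that confirmation byte-for-byte mechanical is to walk the captured body and report the offset of the first framing violation. This is a hypothetical helper, not an existing tool; `body` is assumed to be a NUL-terminated extract of the raw chunked response body:

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Returns -1 if `body` is a well-formed chunked stream ending in the
 * terminal 0-length chunk; otherwise the byte offset where the chunked
 * framing first breaks (useful for jumping to that point in a capture). */
static long first_framing_error(const char *body, size_t len)
{
    size_t pos = 0;
    for (;;) {
        char *end;
        unsigned long n;
        if (pos >= len || !isxdigit((unsigned char)body[pos]))
            return (long)pos;                    /* expected a hex size here */
        n = strtoul(body + pos, &end, 16);
        pos = (size_t)(end - body);
        if (pos + 2 > len || memcmp(body + pos, "\r\n", 2) != 0)
            return (long)pos;                    /* size line missing CRLF */
        pos += 2;
        if (n == 0)                              /* terminal chunk */
            return (pos + 2 <= len && memcmp(body + pos, "\r\n", 2) == 0)
                       ? -1 : (long)pos;
        if (pos + n + 2 > len)
            return (long)pos;                    /* body truncated mid-chunk */
        pos += n;
        if (memcmp(body + pos, "\r\n", 2) != 0)
            return (long)pos;                    /* chunk data missing CRLF */
        pos += 2;
    }
}
```

With a too-small sndbuf the server-side symptom would typically show up here as a truncated or mis-framed chunk at a reproducible offset.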

Indy TCPClient and rogue byte in InputBuffer

I am using the following few lines of code to write to and read from an external modem/router (aka "device") via IP.
TCPClient.IOHandler.Write(MsgStr);
TCPClient.IOHandler.InputBuffer.Clear;
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);
MsgStr is a string type which contains the text that I am sending to my device.
Buffer is declared as TIdBytes.
I can confirm that IOHandler.InputBufferIsEmpty returns True immediately prior to calling ReadBytes.
I'm expecting the first 10 bytes received to be very specific hence from my point of view I am only interested in the first 10 bytes received after I've sent my string.
The trouble I am having is that, when talking to certain devices, the first time I send a string after establishing a connection, a rogue (random) byte appears at the start of my Buffer output. The bytes following it are correct.
e.g. the 10 bytes I'm expecting might be #6A1EF1090#3, but what I get is .#6A1EF1090; in this example I have a full stop where there shouldn't be one.
If I try to send again, it works fine (i.e. the 2nd Write sent after a connection has been established). What's weird (to me) is that a socket sniffer doesn't show the random byte being returned. If I create my own "server" to receive the response and send something back, it works fine 100% of the time. Other software (i.e. not my software) communicates fine with the device (but of course I have no idea how it parses the data).
Is there anything I'm doing incorrectly above that would cause this, bearing in mind it only occurs the first time I call Write after establishing a connection?
Thanks
EDIT
I'm using Delphi 7 and Indy 10.5.8
UPDATE
OK. After much testing and looking, I am no closer to a solution. I am seeing two main scenarios: 1 - the first byte is missing, and 2 - an "introduced" byte at the start of the received packet. Using TIdLogEvent and TIdLogDebug, both show either the missing byte or the introduced initial byte as appropriate. So my ReadBytes statement above is consistently showing what Indy believes is there (in my opinion).
Also, to test it further, I downloaded and installed the ICS components. Unfortunately (or fortunately, depending on how you look at it) these didn't show the same issues as Indy: neither the missing first byte nor an introduced byte at the beginning. However, I have only done superficial testing; Indy produces the behaviour pretty much straight away, whereas ICS hasn't produced it at all yet.
If anyone is interested I can supply a small demo app illustrating the issue and the IP I connect to (it's a public IP, so anyone can access it). Otherwise, for now, I'll just have to work around it. I'm reluctant to switch to ICS: it may work fine in this instance, but given that this socket code is pretty much the whole crux of the program, it would be nasty to have to replace Indy with ICS entirely.
The last parameter (True) in
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);
causes the read to append to, rather than replace, the existing buffer content.
This requires that the size and content of the buffer are set up correctly first.
If the parameter is False, the buffer content is replaced by exactly the given number of bytes.
ReadBytes() does not inject rogue bytes into the buffer, so there are only two possibilities I can think of right now given the limited information you have provided:
The device really is sending an extra byte upon initial connection, like mj2008 suggested. If a packet sniffer is not detecting it, try attaching one of Indy's own TIdLog... components to your TIdTCPClient, such as TIdLogFile or TIdLogEvent, to verify what TIdTCPClient is actually receiving from the socket.
You have another thread reading from the same connection at the same time, corrupting the InputBuffer. Even a call to TIdTCPClient.Connected() will perform a read. Don't perform reads in multiple threads at the same time, if you are using threads.
