CocoaAsyncSocket set the buffer size - iOS

I have written a VB.NET server that communicates with a Silverlight client and an iOS client (using CocoaAsyncSocket).
I'm sending and receiving JSON data, and PDF documents encoded as Base64 strings.
When receiving encoded PDF documents on the client side I had some performance issues. This was easily fixed in the Silverlight client by adjusting the ReceiveBufferSize, and by setting the SendBufferSize on the server (both currently set to 65536). But in the iOS client I can't find anywhere to set the buffer size.
Receiving a document of about 6 MB takes 3-4 seconds in Silverlight, and 25-30 seconds on iOS.

I have found the problem, and it had nothing to do with the buffer size (CocoaAsyncSocket seems to handle that by itself). I had an NSLog call writing out all received strings, so it was the output to the console that slowed everything down. I thought that all NSLog calls were ignored when building the app for release, but that's not the case: it still prints everything out.
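A minimal Swift sketch of the usual fix, a log call that compiles away in release builds (the Objective-C equivalent is an NSLog macro wrapped in #ifdef DEBUG). debugLog is a made-up name, and this assumes the DEBUG compilation condition is defined only for debug configurations, which is Xcode's default for Swift projects:

// Debug-only logging: in a release build the body compiles to nothing,
// so large payloads are never formatted or written to the console.
func debugLog(_ message: @autoclosure () -> String) {
    #if DEBUG
    print(message())
    #endif
}

// Usage: thanks to @autoclosure, the interpolation below is never even
// evaluated in a release build.
// debugLog("received chunk: \(base64Chunk)")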

Related

Does AVPlayer support live footage served directly from a fragmented MP4 file?

Overview
I have a server generating a livestream of video that is exposed as a fragmented MP4 file.
That file is being served to an iOS simulator trying to play the video using react-native-video, which, I believe, uses AVPlayer under the hood.
The first request the simulator makes is a range request for bytes 0-1. I record the X-Playback-Session-Id and respond with 206 Partial Content, the two requested bytes, and Content-Range: bytes 0-1/*. According to the specification, a total length of * indicates that the value is unknown.
I then receive an error from the AVPlayer stating that the server is not correctly configured. According to the Apple docs this indicates the server does not support range requests.
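As a sanity check that the server really does honor range requests, here is a hedged Swift sketch of the two-byte probe AVPlayer opens with. The URL is a placeholder for the local streaming server, not a real address:

import Foundation

// Send the same "Range: bytes=0-1" probe AVPlayer starts with and print
// the status and Content-Range header the server answers with.
var request = URLRequest(url: URL(string: "http://localhost:8080/live.mp4")!)
request.setValue("bytes=0-1", forHTTPHeaderField: "Range")
let task = URLSession.shared.dataTask(with: request) { _, response, error in
    if let http = response as? HTTPURLResponse {
        print("status:", http.statusCode)  // expect 206 Partial Content
        print("Content-Range:", http.value(forHTTPHeaderField: "Content-Range") ?? "missing")
    } else if let error = error {
        print("request failed:", error)
    }
}
task.resume()
RunLoop.main.run(until: Date().addingTimeInterval(5))  // keep a command-line script alive long enough to see the reply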
I have implemented support for range requests. As an experiment, I set the Content-Range to report a very large size instead of * (bytes 0-1/17179869176), which works to an extent. The AVPlayer then follows through with multiple range requests for different byte ranges (0-17179869175), though sometimes it only requests a single range.
This buffers for a while and displays nothing until I stop the server (with a breakpoint); a short while after that, the video stops buffering (without closing any active connections) and plays what it has loaded so far. Given that this is a livestream, that's not acceptable.
Playing the livestream in Chrome or in an Android emulator works exactly as I'd expect: the video plays as soon as it gets the necessary data. But Chrome also does not require any byte-range support to be able to play a video.
I can understand that without a content length the AVPlayer is unable to make range requests, as it doesn't know where the file ends. However, as the media I'm exposing is a live stream, I don't have a meaningful content length to give it. So there must be something I can specify, either in headers on the server or in AVPlayer settings on the client, that states the video is a livestream and so cannot be handled through range requests, or that it must request chunks of footage at a time.
I've looked online and found some useful documents on livestreaming, though all of them revolve around HLS and m3u playlist files. However, changing the back end to generate m3u playlists and to decode the video to work out chunk durations correctly would probably take weeks or months more development time, and I don't understand why it would be necessary, given that I'm only exposing a single resolution of a single video file that does not need seeking, and that it works perfectly fine on Android.
After having spent so long on this and having come across so many hard-to-resolve issues, it's starting to feel like I've somehow gone down the wrong path and am going about this completely the wrong way.
My question is twofold:
Does AVPlayer support live footage served directly from a fragmented MP4 file?
If so, how do I implement it?

Long freeze on WebRTC iOS native SDK

I am working on an iOS app that uses WebRTC's native SDK to provide access to streams from different cameras. The codec used is H264/AVC.
Although most camera streams work perfectly fine, there are some that consistently freeze when the streams are first launched. It looks like the frames are not being decoded, but I am not sure how to go about fixing it.
When I enable debug logging, I see a lot of the following in WebRTC's logs:
(rtp_frame_reference_finder.cc:240): Generic frame with packet range [21170, 21170] has no GoP, dropping frame.
(rtp_frame_reference_finder.cc:240): Generic frame with packet range [21169, 21169] has no GoP, dropping frame.
(video_receive_stream.cc:699): No decodable frame in 200 ms, requesting keyframe.
(video_receive_stream.cc:699): No decodable frame in 200 ms, requesting keyframe.
When there is a freeze, VideoBroadcaster::OnFrame in video_broadcaster.cc is never called, which prevents the entire stream flow from starting. When I test in Xcode and pause/unpause the debugger, the stream will almost always start working: I see VideoBroadcaster::OnFrame getting fired and frames start being decoded. So somehow the pause/unpause process kicks off the stream and fixes the issue.
On the iOS SDK, the encoders are never set up. I have used the RTCVideoEncoderFactoryH264 encoder provided by the SDK. I have provided implementations of the RTCVideoEncoderFactory interface/protocol and also tried overriding the encoders in the SDK. In all of these cases, the createEncoder() function is never called. There are no issues with the decoder, however; it sets up correctly.
In the RTCInboundRTPVideoStream stats report, PLICount and NACKCount are steadily increasing. My understanding is that the receiver is letting the other peer know there is picture loss in the encoded video.
Since I don't know what exactly is preventing the frames from being decoded, I would like to restart the stream when I see PLICount or NACKCount increasing.
How can I do that without going through the whole SDP offer/answer process? The only way I see is to toggle the isEnabled flag on RTCMediaStreamTrack, but that doesn't fix the problem for me.
Are there any encoding/decoding parameters I can update to restart the stream?
What could be the reason for pausing/unpausing the debugger fixing the issue?
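For reference, a hedged Swift sketch of the isEnabled toggle mentioned above. remoteVideoTrack is a placeholder for wherever the app keeps its RTCVideoTrack, and the 0.5 s delay is an arbitrary choice; this is an experiment to try, not a confirmed fix:

import Foundation
import WebRTC

// Toggle the remote video track off and back on after a short delay,
// attempting to kick the decoder without an SDP offer/answer round trip.
func nudge(_ remoteVideoTrack: RTCVideoTrack) {
    remoteVideoTrack.isEnabled = false
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
        remoteVideoTrack.isEnabled = true
    }
}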

Document Collaboration

I am trying to make an iOS app which would involve 5 or so users connected to a single web document, with one of them editing it while the others receive updates in real time.
How can I make the app update its documents in real time (without the user having to click a "sync" button)? It should work like a shared Google Docs document: when one user makes a change, it is instantly reflected in all users' copies. But it should run natively on iOS, not through a web browser.
I am not asking for a full app schematic or any code; I only need a nudge in the right direction.
I would suggest that you keep a master copy of the document on your server (and by the way, you will need a server to make this work effectively). While the users edit a temporary version of the document stored locally on their iPhones, the server is notified of every change. Whenever the version on the server changes (i.e. the server's version no longer matches the one on a device), the server sends a message, using a protocol that you design yourself, to specify whether:
Content (text, image or something else) is added to the document
Content is removed from the document
Content is edited in the document
... You get the point
All you need are different ways to notify the devices of the different types of changes made to the server document. From those notifications, each user's temporary document can be updated according to the change made to the server's version, without having to constantly download the full document over and over. Every once in a while (or on manual user input), you can have the iPhone app request the full server document to make sure that all changes made on the iPhone are correct. A sketch of one way to model such change messages follows.
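A minimal Swift sketch of those change messages, assuming JSON transport and Swift 5.5+ (for Codable synthesis on enums with associated values); the case names and payloads are illustrative, not a fixed protocol:

import Foundation

// One possible shape for the change notifications described above.
enum DocumentChange: Codable {
    case insert(position: Int, text: String)                // content added
    case remove(position: Int, length: Int)                 // content removed
    case replace(position: Int, length: Int, text: String)  // content edited
}

// Encoding a change for transport; the receiving device decodes it and
// applies the same operation to its local copy of the document.
let change = DocumentChange.insert(position: 42, text: "hello")
if let payload = try? JSONEncoder().encode(change) {
    print("encoded \(payload.count) bytes")  // send payload to the server
}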
Use NSInputStream and NSOutputStream to receive and send messages to your server, and use an NSStreamDelegate to handle server events (its only instance method is an event-handling method). This guide is an excellent start if you really don't know anything about sending messages. You can send and receive NSData and NSString objects in which you can store your protocol.
As an example of such a protocol, an app that I have created that exchanges messages with a Windows server does the following (a Swift sketch follows the list):
When preparing the data to be sent from the iOS app, I first write 4 bytes to an NSData object containing the length of the data that follows, so that the server knows exactly how many bytes to read from the stream. I chose 4 bytes since that's the size of an unsigned int, which can represent very large numbers (and therefore very large data sizes).
I add the data to the NSData object. The data is in the form of a struct in my case. Really, you can send any type of data so long as you know how to parse it at the other end.
I send the NSData object.
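A hedged Swift sketch of that 4-byte length-prefix framing (the original is Objective-C with NSData, but the idea is identical). Big-endian byte order is an assumption; both ends must agree on it:

import Foundation

// Prepend a 4-byte big-endian length header to a payload, as in the
// steps above, so the receiver knows exactly how many bytes follow.
func frame(_ payload: Data) -> Data {
    var length = UInt32(payload.count).bigEndian
    var message = Data(bytes: &length, count: MemoryLayout<UInt32>.size)
    message.append(payload)
    return message
}

// The receiver reads exactly 4 bytes, decodes the length, then reads
// exactly that many payload bytes from the stream.
func payloadLength(fromHeader header: Data) -> Int {
    precondition(header.count >= 4, "need a complete 4-byte header")
    return header.prefix(4).reduce(0) { ($0 << 8) | Int($1) }
}

On the stream side you would write the framed message to the NSOutputStream and, in the NSStreamDelegate event handler, read the 4-byte header before the payload.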
Really, sending, receiving and parsing NSStream messages is very simple, but if you are writing client-server code for an iOS app for the first time, the process can seem daunting. I did simplify the process quite a bit; you also have to consider whether the server is ready to receive messages, has space available for messages to be written, and so on. But the guide that I linked to earlier, which is also right here, was quite helpful as I was writing my client-server app.
Hopefully these guidelines are general enough (and specific on the right topics) for your liking.

Indy TCPClient and rogue byte in InputBuffer

I am using the following few lines of code to write to and read from an external Modem/Router (aka device) via IP.
TCPClient.IOHandler.Write(MsgStr);
TCPClient.IOHandler.InputBuffer.Clear;
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);
MsgStr is a string type which contains the text that I am sending to my device.
Buffer is declared as TIdBytes.
I can confirm that IOHandler.InputBufferIsEmpty returns True immediately prior to calling ReadBytes.
I'm expecting the first 10 bytes received to be very specific hence from my point of view I am only interested in the first 10 bytes received after I've sent my string.
The trouble I am having is that, when talking to certain devices, the first response after establishing a connection (i.e. after the first string I send) contains a rogue (random) byte at the start of my Buffer output. The bytes following it are correct.
E.g. the 10 bytes I'm expecting might be #6A1EF1090#3 (Delphi notation: #6 and #3 are the ACK and ETX control characters), but what I get is .#6A1EF1090; in this example I have a full stop where there shouldn't be one.
If I try to send again, it works fine (i.e. the 2nd Write after the connection has been established). What's weird (to me) is that a socket sniffer doesn't show the random byte being returned. If I create my own "server" to receive the response and send something back, it works fine 100% of the time. Other software (i.e. not my software) communicates fine with the device, but of course I have no idea how they're parsing the data.
Is there anything I'm doing incorrectly above that would cause this, bearing in mind it only occurs the first time I use Write after establishing a connection?
Thanks
EDIT
I'm using Delphi 7 and Indy 10.5.8
UPDATE
OK. After much testing and looking, I am no closer to finding a solution. I am seeing two main scenarios: (1) the first byte is missing, and (2) an "introduced" byte appears at the start of the received packet. Using TIdLogEvent and TIdLogDebug shows either the missing byte or the introduced initial byte, as appropriate. So my ReadBytes statement above is consistently showing what Indy believes is there (in my opinion).
Also, to test further, I downloaded and installed the ICS components. Unfortunately (or fortunately, depending on how you look at it) they didn't show the same issues as Indy: no missing first byte and no introduced byte at the beginning. I have only done superficial testing, but Indy produces the behaviour pretty much straight away, whereas ICS hasn't produced it at all yet.
If anyone is interested I can supply a small demo app illustrating the issue and the IP I connect to (it's a public IP, so anyone can access it). Otherwise, for now I'll just have to work around it. I'm reluctant to switch to ICS: even though it may work fine in this instance, this socket code is pretty much the whole crux of the program, and it would be nasty to have to replace Indy entirely.
The last parameter (True) in
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);
causes the read to append to the existing buffer content instead of replacing it.
This requires that the size and content of the buffer are set up correctly first.
If the parameter is False, the buffer content is replaced by the given number of bytes.
ReadBytes() does not inject rogue bytes into the buffer, so there are only two possibilities I can think of right now, given the limited information you have provided:
The device really is sending an extra byte upon initial connection, as mj2008 suggested. If a packet sniffer is not detecting it, try attaching one of Indy's own TIdLog... components to your TIdTCPClient, such as TIdLogFile or TIdLogEvent, to verify what TIdTCPClient is actually receiving from the socket.
You have another thread reading from the same connection at the same time, corrupting the InputBuffer. Even a call to TIdTCPClient.Connected() will perform a read. Don't perform reads in multiple threads at the same time, if you are using threads.

SOAP Delphi client ends with a timeout for a 1MB call

We are developing a SOAP web service (Apache/PHP). Everything runs well for small calls, but with a 1 MB SOAP call (the HTTPS request size is 1 MB) our Delphi SOAP client stops with a timeout on every PC but one, and our PHP clients run well with default_socket_timeout=300 but stop with an "Error Fetching http headers" with default_socket_timeout=60.
How can we change the timeout for Delphi? In fact, this timeout seems to live in a Windows XP network library (wininet.dll, called by soaphttptrans.pas).
Thanks
Cédric
In fact it was a problem with the IE7 installation: it changes all the network timeouts.
PCs with IE6 have a 3600-second timeout; IE7 changes it to 30 seconds.
Use of InternetQueryOption() shows this, and InternetSetOption() helps to change it.
Big thanks to my workmate who hunted the bug for hours.
There's a MaxSinglePostSize in SOAPHttpTrans. I seem to recall having issues with that. It isn't a hard limit, but the class behaves differently (breaks the request into chunks for sending) depending on whether you're over or under it (32768 bytes by default). I expect you'll hit that size sooner on D2009/D2010 due to wide strings. It would be interesting to see if you run into trouble around that size. You can use Fiddler to capture some output (or hook into the OnBeforePost event and dump the serialized XML to a file yourself) and see whether that's where you run into trouble, instead of at the previously observed 1 MB.
But anyway, the THTTPReqResp class has options for SendTimeout and ReceiveTimeout. Try adjusting those.
Also... if you are using a Delphi version prior to Delphi 2007, you should update your SOAP libraries. There's a download somewhere... many bug fixes, including a nasty memory issue that will cause your app to be halted by DEP.
