Are WebSocket messages cached on iOS?

Strange, but I cannot find any information on this: if I write a [large] message to the WebSocket stream on iOS and execution gets back to my code, is the message already sent or somehow buffered?
I'm using the Starscream library, but it just uses CFStreams under the hood.

Looking at the source code for the Starscream library mentioned, the library appends the send operation to an NSOperationQueue:
private func dequeueWrite(..) {
    ...
    writeQueue.addOperation(operation)
}
and then immediately returns.
So when one of the send methods returns, for example:
open func write(data: Data, completion: (() -> ())? = nil)
The message will not yet have been sent.
But as you can see, you can pass a completion block to this method, which will be called when the whole message has been written to the underlying output stream. Note that this doesn't tell you anything about whether the message has actually been sent on the network, or whether the receiver has received it successfully.
To know whether the receiver has received and processed the message successfully, you need to wait for a response message - that is something you need to define in your application protocol.
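As a rough sketch of that pattern (assuming a connected Starscream WebSocket named socket; the message-ID bookkeeping and the onAck handler are invented for this example and are not part of Starscream's API):

import Foundation
import Starscream

// Hypothetical application-level acknowledgement flow; names are illustrative.
final class AckingSender {
    private let socket: WebSocket                     // assumed already connected
    private var pendingAcks: [String: () -> Void] = [:]

    init(socket: WebSocket) {
        self.socket = socket
    }

    // Sends a payload and remembers a handler to run once the server
    // answers with an application-level ack carrying the same messageID.
    func send(_ payload: Data, messageID: String, onAck: @escaping () -> Void) {
        pendingAcks[messageID] = onAck
        socket.write(data: payload) {
            // Called when the frame has been written to the output stream,
            // NOT when the server has received or processed it.
            print("message \(messageID) handed off to the output stream")
        }
    }

    // Call this from your receive handler when an ack message arrives.
    func handleAck(forMessageID messageID: String) {
        pendingAcks.removeValue(forKey: messageID)?()
    }
}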
Before using the Starscream library in production, you might want to report/fix some issues in it. While reviewing the send mechanism, I noticed that if the OutputStream buffer is full (WebSocket.swift line 1254), the library tries sending the rest of the buffer in a busy loop rather than waiting for a hasSpaceAvailable event. This may waste a lot of CPU cycles if you send a large message.
Also, it looks like the case when stream.write returns 0, indicating that the output buffer is full, is incorrectly handled as an error.
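To make the distinction concrete, a correct treatment of those return values could look roughly like the sketch below. This is written against Foundation's OutputStream as an illustration, not as a patch to Starscream, and the offset bookkeeping is simplified:

import Foundation

// Writes as much of `data` (starting at `offset`) as the stream accepts right now.
// Returns the number of bytes consumed; call again on a .hasSpaceAvailable event.
func writeSome(of data: Data, startingAt offset: Int, to stream: OutputStream) -> Int {
    return data.withUnsafeBytes { raw -> Int in
        guard let base = raw.baseAddress else { return 0 }
        let pointer = base.advanced(by: offset).assumingMemoryBound(to: UInt8.self)
        let written = stream.write(pointer, maxLength: data.count - offset)
        switch written {
        case -1:
            // A real error: inspect stream.streamError and fail the send.
            print("write failed: \(String(describing: stream.streamError))")
            return 0
        case 0:
            // The buffer is full - this is NOT an error. Keep the remaining
            // bytes and wait for .hasSpaceAvailable instead of spinning.
            return 0
        default:
            return written
        }
    }
}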

It probably uses
func CFWriteStreamWrite(_ stream: CFWriteStream!,
                        _ buffer: UnsafePointer<UInt8>!,
                        _ bufferLength: CFIndex) -> CFIndex
The write call returns "the number of bytes successfully written, 0 if the stream has been filled to capacity (for fixed-length streams), or -1 if either the stream is not open or an error occurs."
So yes, they are buffered. But I think that is the only option: a write function needs a buffer, because every socket has a maximum buffer size.

Related

Incomplete image sent over TCP using BlueSocket framework

I've been trying to send an image over TCP using Kitura BlueSocket.
Every time I try to decode the image from the Data object I get
Warning! [0x7fb6f205e400] Decoding incomplete with error code -1.
This is expected if the image has not been fully downloaded.
And indeed the image is usually only half-loaded. This is the code I use for reading the Data object:
func readData() -> Data {
    var receivedData = Data()
    do {
        try mySocket?.read(into: &receivedData)
    } catch {
        print("Error receiving data")
    }
    return receivedData
}
and this is how I decode the image:
func decodeImage(from: Data) {
    imageRead.image = UIImage(data: from)
}
and this code is used in the View Controller like so:
let imageData = networkManager.readData()
decodeImage(from: imageData)
I do not know why the image doesn't download fully.
You're working with a low-level socket. TCP just deals with packets; it doesn't know anything about "images" or "records" or anything else. Depending on various factors, you may see packets as small as a few hundred bytes to as large as a few kB.
.read(into:) returns you whatever packets have arrived so far. It doesn't know where the boundaries of your image are. That's up to you to determine. You need to loop until all the data you want to process has arrived (what that means completely depends on the protocol you've designed). You can't run this on the main queue; it'll block your UI. It needs to run on a background queue.
A very common socket-level protocol is to send the length first, and then send the data. That way the reader knows how much data to expect, and will know when the transfer is done. If the sender doesn't know the size when it starts transmitting, then you would typically use an "end-of-transfer" token of some sort, or use a protocol that chunks the data into blocks, and then has some marker to note "this is the last block." But in any case, you'll need to choose or design a protocol you want to use here. (Or use an existing system like HTTP rather than a low-level socket.)
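As one illustration of the length-prefix approach with BlueSocket, here is a rough sketch that loops on read(into:) until a complete message has arrived. The 4-byte big-endian length header is an assumption of this example (something you and the sender must agree on), and it assumes BlueSocket's read(into:) appends to the buffer and that remoteConnectionClosed reports a closed peer:

import Foundation
import Socket   // Kitura BlueSocket

enum TransferError: Error { case connectionClosed }

// Keeps reading until at least `count` bytes have been accumulated.
func readAtLeast(_ count: Int, from socket: Socket, into buffer: inout Data) throws {
    while buffer.count < count {
        let bytesRead = try socket.read(into: &buffer)
        if bytesRead == 0 && socket.remoteConnectionClosed {
            throw TransferError.connectionClosed
        }
    }
}

// Reads one length-prefixed message: 4-byte big-endian length, then the payload.
func readMessage(from socket: Socket) throws -> Data {
    var buffer = Data()
    try readAtLeast(4, from: socket, into: &buffer)
    let length = buffer.prefix(4).reduce(0) { ($0 << 8) | Int($1) }
    try readAtLeast(4 + length, from: socket, into: &buffer)
    return Data(buffer.dropFirst(4).prefix(length))
}

// Usage, off the main queue:
// DispatchQueue.global().async {
//     if let imageData = try? readMessage(from: clientSocket) {
//         DispatchQueue.main.async { imageRead.image = UIImage(data: imageData) }
//     }
// }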

Bidirectional gRPC stream sometimes stops processing responses after stopping and starting

In short
We have a mobile app that streams fairly high volumes of data to and from a server through various bidirectional streams. The streams need to be closed on occasion (for example when the app is backgrounded). They are then reopened as needed. Sometimes when this happens, something goes wrong:
- From what I can tell, the stream is up and running on the device's side (the status of both the GRPCProtocall and the GRXWriter involved is either started or paused)
- The device sends data on the stream fine (the server receives the data)
- The server seems to send data back to the device fine (the server's Stream.Send calls return as successful)
- On the device, the result handler for data received on the stream is never called
More detail
Our code is heavily simplified below, but this should hopefully provide enough detail to indicate what we're doing. A bidirectional stream is managed by a Switch class:
class Switch {
    /** The protocall over which we send and receive data */
    var protocall: GRPCProtoCall?

    /** The writer object that writes data to the protocall. */
    var writer: GRXBufferedPipe?

    /** A static GRPCProtoService as per the .proto */
    static let service = APPDataService(host: Settings.grpcHost)

    /** A response handler. APPData is the datatype defined by the .proto. */
    func rpcResponse(done: Bool, response: APPData?, error: Error?) {
        NSLog("Response received")
        // Handle response...
    }

    func start() {
        // Create a (new) instance of the writer
        // (A writer cannot be used on multiple protocalls)
        self.writer = GRXBufferedPipe()
        // Set up the protocall
        self.protocall = Switch.service.rpcToStream(withRequestWriter: self.writer!,
                                                    eventHandler: self.rpcResponse(done:response:error:))
        // Start the stream
        self.protocall?.start()
    }

    func stop() {
        // Stop the writer if it is started.
        if self.writer?.state == .started || self.writer?.state == .paused {
            self.writer?.finishWithError(nil)
        }
        // Stop the proto call if it is started
        if self.protocall?.state == .started || self.protocall?.state == .paused {
            protocall?.cancel()
        }
        self.protocall = nil
    }

    private var needsRestart: Bool {
        if let protocall = self.protocall {
            if protocall.state == .notStarted || protocall.state == .finished {
                // protocall exists, but isn't running.
                return true
            } else if writer?.state == .notStarted || writer?.state == .finished {
                // writer isn't running
                return true
            } else {
                // protocall and writer are running
                return false
            }
        } else {
            // protocall doesn't exist.
            return true
        }
    }

    func restartIfNeeded() {
        guard self.needsRestart else { return }
        self.stop()
        self.start()
    }

    func write(data: APPData) {
        self.writer?.writeValue(data)
    }
}
Like I said, heavily simplified, but it shows how we start, stop, and restart streams, and how we check whether a stream is healthy.
When the app is backgrounded, we call stop(). When it is foregrounded and we need the stream again, we call start(). And we periodically call restartIfNeeded(), e.g. when screens that use the stream come into view.
As I mentioned above, what happens occasionally is that our response handler (rpcResponse) stops getting called when the server writes data to the stream. The stream appears to be healthy (the server receives the data we write to it, and protocall.state is neither .notStarted nor .finished). But not even the log on the first line of the response handler is executed.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
Second question: How do we debug this? Everything we could think of that we can query for a status tells us that the stream is up and running, but it feels like the objc gRPC library keeps a lot of its mechanics hidden from us. Is there a way to see whether responses from the server do reach us but fail to trigger our response handler?
Third question: As per the code above, we use the GRXBufferedPipe provided by the library. Its documentation advises against using it in production because it doesn't have a push-back mechanism. To our understanding, the writer is only used to feed data to the gRPC core in a synchronised, one-at-a-time fashion, and since the server receives data from us fine, we don't think this is an issue. Are we wrong, though? Is the writer also involved in feeding data received from the server to our response handler? I.e. if the writer broke due to overload, could that manifest as a problem reading data from the stream, rather than writing to it?
UPDATE: Over a year after asking this, we have finally found a deadlock bug in our server-side code that was causing this behaviour on the client side. The streams appeared to hang because no communication sent by the client was handled by the server, and vice versa, but the streams were actually alive and well. The accepted answer provides good advice for how to manage these bidirectional streams, which I believe is still valuable (it helped us a lot!). But the issue was actually due to a programming error.
Also, for anyone running into this type of issue, it might be worth investigating whether you're experiencing this known issue where a channel gets silently dropped when iOS changes its network. This readme provides instructions for using Apple's CFStream API rather than TCP sockets as a possible fix for that issue.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
From what I can tell by looking at your code, the start() function seems to be right. In the stop() function, you do not need to call cancel() on self.protocall; the call will be finished by the preceding self.writer.finishWithError(nil).
needsRestart is where it gets a bit messy. First, you are not supposed to poll or set the state of protocall yourself; that state is managed by the library itself. Second, setting those states does not close your stream. It only pauses a writer, and if the app is in the background, pausing a writer is effectively a no-op. If you want to close a stream, you should use finishWithError to terminate the call, and maybe start a new call later when needed.
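In code, that advice boils down to something like the sketch below, adapted from the Switch class in the question. This is only an illustration of "finish the writer, then create a fresh call when needed" - the nil-based lifecycle tracking is an assumption of the example, not an official gRPC pattern:

func stop() {
    // Finishing the request writer ends the call cleanly;
    // no explicit cancel() on the proto call is needed here.
    if let writer = self.writer, writer.state == .started || writer.state == .paused {
        writer.finishWithError(nil)
    }
    self.writer = nil
    self.protocall = nil
}

func startIfNeeded() {
    // Rather than polling protocall.state, track the lifecycle yourself:
    // if there is no call object, we are stopped and may start a new one.
    guard self.protocall == nil else { return }
    self.start()
}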
Second question: How do we debug this?
One way is to turn on gRPC logging (GRPC_TRACE and GRPC_VERBOSITY). Another way is to set a breakpoint here, where the gRPC Objective-C library receives a gRPC message from the server.
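For what it's worth, those tracing flags are plain environment variables, so one way to enable them (besides the Xcode scheme editor) is to set them very early at app launch - a small sketch, with example values:

import Foundation

// Must run before the gRPC runtime initializes, e.g. at the top of
// application(_:didFinishLaunchingWithOptions:). The tracer list is an example.
setenv("GRPC_TRACE", "api,call_error,channel", 1)
setenv("GRPC_VERBOSITY", "DEBUG", 1)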
Third question: Is the writer also involved in feeding data received from server to our response handler?
No. If you create a buffered pipe and feed that as the request of your call, it only feeds data to be sent to the server. The receiving path is handled by another writer (which is in fact your protocall object).
I don't see where the usage of GRXBufferedPipe in production is discouraged. The known drawback of this utility is that if you pause the writer but keep writing data to it with writeValue, you end up buffering a lot of data without being able to flush it, which may cause memory issues.

QNX MsgReceive Pulse

I have a problem because I don't know how _pulse receiving works. If I have my data struct
typedef struct _my_data {
    msg_header_t hdr;
    int data;
} my_data_t;
and I am receiving only my msg, I can't tell whether it is a pulse
my_data_t msg;
...
rcvid = MsgReceive(g_Attach->chid, &msg, sizeof(msg), NULL);
when rcvid == 0. BUT how does a program know that it needs to send a _pulse in the form of my msg (the struct that I defined), or how does that otherwise work? In addition, is _IO_CONNECT a pulse? If yes, why doesn't it have rcvid == 0? - according to http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/n/name_attach.html
1 - _IO_CONNECT is not used for pulses. It is used for connect system calls to resource managers; example system calls are open(), close(), etc.
2 - You need to know whether the server or client is waiting on a pulse message or not. For pulse messages, the blocking function in the resource manager will be MsgReceivePulse() and the client will use MsgSendPulse().
MsgSend() is used for normal message and MsgSendPulse() is for sending pulse message.
Similarly MsgReceive() is used for receiving normal message and MsgReceivePulse() is used for receiving pulse messages. Please refer to the QNX documents for more detailed description.
The two variants have different parameters: the functions for pulse messages do not have any parameters for reply data, because pulses are small, non-blocking messages that do not wait for a reply, whereas the functions for normal messages have parameters for the reply data.
You need to create a channel and a connection, for example
chid = ChannelCreate(0);
int pid = getpid();
coid = ConnectAttach(0, pid, chid, 0, 0);
which attaches the channel to the connection.
Then, if you have two threads, from one thread you can call the MsgSend function, for example MsgSend(coid, &message, sizeof(message), &rmsg, sizeof(rmsg)); and in the other thread rcvid = MsgReceive(chid, (void*)&message, sizeof(message), NULL);

NSInputStream and NSOutputStream problems

I have an NSInputStream and an NSOutputStream between devices that are connected to each other over the network. When I write something to the output stream, the data is written until the NSStreamEventEndEncountered event occurs. I close the output stream, but on the other side (the input stream) the NSStreamEventEndEncountered event never occurs until I exit the view controller of the output stream. So:
1. Why doesn't the NSStreamEventEndEncountered event occur on the input stream even after the same event occurred on the output stream? (The output stream is even closed in that event.)
2. It is my understanding that, once you open the NSOutputStream, you can only write data once. Opening the output stream again after the NSStreamEventEndEncountered event (for example to write something new on some event) is not possible, right?
I probably need more info about your connection and how you're sending your data but let me try and answer your questions:
1.
You're not encountering an end of your inputstream because you never started reading from it. The outputstream finished writing because it probably encountered an end, just like you said.
Imagine Jacob (your output stream) delivering an envelope (your data) to his friend's house. Jacob puts the envelope on his friend's doormat and walks back to his own house. At this point Jacob's work is done, so he tells himself that he's done (in your case the output stream signals an NSStreamEventEndEncountered).
Jacob's friend George (your input stream) may or may not see the envelope, but either way he never looks at what is in it. So unless George takes the envelope and looks at what is inside, he can never tell himself that he has finished looking at it (in your case the input stream never signals an NSStreamEventEndEncountered).
2.
This actually depends on how you plan to use your output stream. If you plan to send data multiple times to the same device, why not leave the output stream open? You can write data as long as the socket is open and there is space available. If you do close the output stream, however, you will need to reopen it before you can write again.
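A rough sketch of that "leave it open and write whenever there is room" approach with a Foundation OutputStream might look like the following; the queueing and delegate wiring are assumptions of this example rather than the only correct setup:

import Foundation

final class PersistentWriter: NSObject, StreamDelegate {
    private let output: OutputStream
    private var queue: [Data] = []       // messages waiting for buffer space

    init(output: OutputStream) {
        self.output = output
        super.init()
        output.delegate = self
        output.schedule(in: .current, forMode: .default)
        output.open()                    // open once, reuse for many writes
    }

    func send(_ data: Data) {
        queue.append(data)
        flushIfPossible()
    }

    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        if eventCode.contains(.hasSpaceAvailable) {
            flushIfPossible()
        }
    }

    private func flushIfPossible() {
        while output.hasSpaceAvailable, let next = queue.first {
            let written = next.withUnsafeBytes { raw -> Int in
                guard let base = raw.baseAddress else { return 0 }
                return output.write(base.assumingMemoryBound(to: UInt8.self),
                                    maxLength: next.count)
            }
            if written <= 0 { return }                       // wait for the next event
            if written == next.count {
                queue.removeFirst()                          // whole message went out
            } else {
                queue[0] = Data(next.dropFirst(written))     // keep the remainder
            }
        }
    }
}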

Problem with connection.readln waiting for carriage return

I'm facing a problem with the TCP Indy connection's ReadLn method. I have no control over the other side that sends the data. When using the ReadLn method, the server-side application hangs (because the received data doesn't contain a carriage return). I'm trying the ReadString method, but without success.
Is there any suggestion for getting around this problem? Maybe I should be looking for another component rather than Indy.
I need to get data from the other client (TCP connection) without any information about the size of the received data and without a carriage return at the end of each frame.
You have to know how the data is being sent in order to read it properly. TCP is a byte stream, the sender needs to somehow indicate where one message ends and the next begins, either by:
- prefixing each message with its length
- putting unique delimiters in between each message
- pausing in time between each message
Indy can handle all of these possibilities, but you need to identify which one is actually being used first.
Worst-case scenario, use the CurrentReadBuffer() method, which returns a String of whatever raw bytes are available at that moment.
