Incomplete image sent over TCP using the BlueSocket framework - iOS

I've been trying to send an image over TCP using Kitura BlueSocket.
Every time I try to decode the image from the Data object I get
Warning! [0x7fb6f205e400] Decoding incomplete with error code -1.
This is expected if the image has not been fully downloaded.
And indeed the image is usually only half-loaded. This is the code I use to read the Data object:
func readData() -> Data {
    var receivedData = Data()
    do {
        try mySocket?.read(into: &receivedData)
    } catch {
        print("Error receiving data")
    }
    return receivedData
}
and this is how I decode image:
func decodeImage(from: Data) {
    imageRead.image = UIImage(data: from)
}
and this code is used in the View Controller like so:
let imageData = networkManager.readData()
decodeImage(from: imageData)
I do not know why the image doesn't download fully.

You're working with a low-level socket. TCP is a byte stream; it knows nothing about "images" or "records" or any other message boundaries. Depending on various factors, a single read may hand you anything from a few hundred bytes to a few kB.
.read(into:) returns whatever data has arrived so far. It doesn't know where the boundaries of your image are; that's up to you to determine. You need to loop until all the data you want to process has arrived (what that means depends entirely on the protocol you've designed). And don't run this on the main queue, because it will block your UI; it needs to run on a background queue.
A very common socket-level protocol is to send the length first, and then send the data. That way the reader knows how much data to expect, and will know when the transfer is done. If the sender doesn't know the size when it starts transmitting, then you would typically use an "end-of-transfer" token of some sort, or use a protocol that chunks the data into blocks, and then has some marker to note "this is the last block." But in any case, you'll need to choose or design a protocol you want to use here. (Or use an existing system like HTTP rather than a low-level socket.)
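As an illustration, here is a minimal sketch of the length-prefix approach on the reading side. It assumes the sender writes a 4-byte big-endian length before the image bytes, and that BlueSocket's read(into:) appends whatever has arrived so far to the passed buffer; run it on a background queue, not the main one.
// Minimal sketch (not production code): read a 4-byte big-endian length
// prefix, then keep reading until the whole payload has arrived.
func readFramedData() throws -> Data? {
    guard let socket = mySocket else { return nil }
    var buffer = Data()

    // Keep reading until the 4-byte length prefix is complete.
    while buffer.count < 4 {
        if try socket.read(into: &buffer) == 0 { return nil }   // peer closed
    }

    // Decode the expected payload length.
    let length = Int(buffer.prefix(4).reduce(UInt32(0)) { ($0 << 8) | UInt32($1) })

    // Keep reading until the whole payload has arrived.
    while buffer.count < 4 + length {
        if try socket.read(into: &buffer) == 0 { return nil }   // peer closed
    }

    // Strip the prefix and return exactly `length` bytes of image data.
    return buffer.dropFirst(4).prefix(length)
}
On the view-controller side you would call this from a background queue and hop back to the main queue before setting the image.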

Related

FileHandle doesn't free memory in iOS

I'm uploading a large file to a server. The file is split into chunks. I see high memory consumption when I call FileHandle.readData(ofLength:): the memory for each chunk is never deallocated, and after some time I get an EOM exception and a crash.
The profiler shows the problem is in FileHandle.readData(ofLength:) (see screenshots)
func nextChunk(then: @escaping (Data?) -> Void) {
    self.previousOffset = self.fileHandle.offsetInFile
    autoreleasepool {
        let data = self.fileHandle.readData(ofLength: Constants.chunkLength)
        if data == Constants.endOfFile {
            then(nil)
        } else {
            then(data)
            self.currentChunk += 1
        }
    }
}
The allocations tool is simply showing you where the unreleased memory was initially allocated. It is up to you to figure out what you subsequently did with that object and why it was not released in a timely manner. None of the profiling tools can help you with that. They can only point to where the object was originally allocated, which is only the starting point for your research.
One possible problem might be if you are creating Data-based URLRequest objects. That means that while the associated URLSessionTask requests are in progress, the Data is held in memory. If so, you might consider using a file-based uploadTask instead. That avoids holding the Data associated with the body of the request in memory.
Once you start using a file-based uploadTask, it raises the question of whether you need/want to break the file into chunks at all. A file-based uploadTask, even when sending very large assets, requires very little RAM at runtime. And, at some future point in time, you may even consider using a background session, so the uploads will continue even if the user leaves the app. The combination of these features may obviate the chunking altogether.
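If it helps, here's a minimal sketch of the file-based approach (uploadURL and fileURL are hypothetical placeholders, not from the question):
// Sketch of a file-based upload. Because the body is streamed from a file,
// the whole payload never has to sit in RAM the way a Data-based request does.
var request = URLRequest(url: uploadURL)
request.httpMethod = "POST"

let task = URLSession.shared.uploadTask(with: request, fromFile: fileURL) { data, response, error in
    if let error = error {
        print("Upload failed: \(error)")
    } else {
        print("Upload finished")
    }
}
task.resume()
For a background session you would use a delegate-based session instead of the completion handler, but the request itself looks the same.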
As you may have surmised, the autoreleasepool may be unnecessary. That is intended to solve a very specific problem (where one creates and releases autoreleased objects in a tight loop). I suspect your problem rests elsewhere.
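For reference, this is the kind of pattern autoreleasepool is designed for: draining the temporary, autoreleased objects created by each pass of a tight loop (fileHandle, chunkLength, and process(_:) are placeholders here, not names from the question):
// Sketch: drain autoreleased temporaries created on each loop iteration.
var done = false
while !done {
    autoreleasepool {
        let chunk = fileHandle.readData(ofLength: chunkLength)
        if chunk.isEmpty {
            done = true          // end of file
        } else {
            process(chunk)       // hypothetical per-chunk work
        }
    }
}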

Error when sending data between connected devices in Game Center

I am working on a local multiplayer, real-time game in Swift 5. In order to achieve the real-time gameplay, I am sending data back and forth between two devices with the function GKMatch.sendData(data:, to:, withDataSendingMethod:). It works fairly inconsistently, regardless of whether I use .reliable or .unreliable; however, the error it gives me when it is unable to send data is consistent. It is as follows:
2020-07-27 21:07:22.433631-0400 Teacher Brawl[19336:5244039] [ViceroyTrace] [ERROR] AGPSessionRecvFrom:1954 0x103f11600 sack: SEARCH FAILURE SERIAL NUMBER (0000000B) FROM (5682ABEE)...
Where Teacher Brawl is the name of the project.
I was wondering if anyone is able to provide insight as to why I am getting this error, as I do not fully understand it, being relatively new to Swift and newer to GameKit. The code I am using to send the data is shown below, and it is called any time there is a tap on the screen, which in the context of this game is fairly minimal. If you need any further details please let me know; I would be happy to provide them. All help is greatly appreciated, as the inconsistency of data sending has stopped any progress I can make on this game. :)
func sendButtons(button: String) {
    let sendableString: Data? = button.data(using: .utf8)
    do {
        try localMatch.send(sendableString!, to: localMatch.players, dataMode: .unreliable)
    } catch {
        print("")
    }
}
For reference, the variable localMatch is my variable for the GKMatch that was returned when both players joined the game.
This error message is common and shouldn't interfere with your game sending data. My app gets this error but still sends data fine.
If you are sending data to all players, you should use the built-in method func sendData(toAllPlayers data: Data, with mode: GKMatch.SendDataMode) throws. You should also send data reliably if it is not being sent very often. For debugging, you might want to print when data is sent and print any errors. Here is the full code you can try.
func sendButtons(button: String) {
    let sendableString: Data? = button.data(using: .utf8)
    do {
        try localMatch.sendData(toAllPlayers: sendableString!, with: .reliable)
        print("Data sent")
    } catch {
        print("Data not sent")
        print(error)
    }
}
If the data is not sent, check whether "Data sent" is printed and check for any printed errors.
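On the receiving device, the data arrives through the GKMatchDelegate callback. A minimal sketch of the receiving side (GameViewController is a hypothetical class; this assumes localMatch.delegate was set to it when the match started):
import GameKit

// Sketch: receive the button string sent by the other player.
extension GameViewController: GKMatchDelegate {
    func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
        if let button = String(data: data, encoding: .utf8) {
            print("Received button \"\(button)\" from \(player.displayName)")
        }
    }
}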

Are WebSocket messages cached on iOS?

Strangely, I cannot find any information on this: if I write a [large] message to the WebSocket stream on iOS and execution gets back to my code, has the message already been sent, or is it buffered somewhere?
I'm using the Starscream library, but it just uses CFStreams.
Looking at the source code of the Starscream library mentioned, the library appends the send operation to an NSOperationQueue:
private func dequeueWrite(..) {
    ...
    writeQueue.addOperation(operation)
}
and then immediately returns.
So when the one of the send methods returns, for example:
open func write(data: Data, completion: (() -> ())? = nil)
The message will not yet have been sent.
But as you can see, you can pass a completion block to this method, and it will be called when the whole message has been written to the underlying output stream. Note that this doesn't tell you anything about whether the message has actually been sent on the network, or whether the receiver has received it successfully.
To know whether the receiver has received and processed the message successfully, you need to wait for a response message; that is something you need to define in your application protocol.
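For example, using the write(data:completion:) signature above (a sketch; socket is assumed to be a connected Starscream WebSocket and imageData an arbitrary payload):
// Sketch: the completion fires once the whole message has been handed to the
// underlying output stream, not when the peer has received or processed it.
socket.write(data: imageData) {
    print("Message fully written to the output stream")
    // Any application-level acknowledgement still has to come back as a
    // separate message from the peer.
}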
Before using the Starscream library in production, you might want to report/fix some issues in it. While reviewing the send mechanism I noticed that if the OutputStream buffer is full on WebSocket.swift line 1254, the library tries to send the rest of the buffer in a busy loop rather than waiting for a hasSpaceAvailable event. This may waste a lot of CPU cycles if you send a large message.
Also, it looks like the case when stream.write returns 0, indicating that the output buffer is full, is incorrectly handled as an error.
It probably uses
func CFWriteStreamWrite(_ stream: CFWriteStream!,
                        _ buffer: UnsafePointer<UInt8>!,
                        _ bufferLength: CFIndex) -> CFIndex
The write call returns "The number of bytes successfully written, 0 if the stream has been filled to capacity (for fixed-length streams), or -1 if either the stream is not open or an error occurs."
So yes, messages are buffered. But I think that is the only option: a write function needs a buffer, because every socket has a maximum buffer size.
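If you handle those return values yourself, the distinction between 0 and -1 matters, as noted above. A rough sketch of honouring them (writeSome(_:from:to:) is a hypothetical helper, not part of Starscream):
import Foundation

// Sketch: write as much of `data` as the stream will accept right now.
// write(_:maxLength:) returns -1 on error, 0 when the buffer is full
// (retry after a .hasSpaceAvailable event, not in a busy loop), and
// otherwise the number of bytes it accepted.
func writeSome(_ data: Data, from offset: Int, to stream: OutputStream) -> Int? {
    return data.withUnsafeBytes { raw -> Int? in
        guard let base = raw.baseAddress?.assumingMemoryBound(to: UInt8.self) else { return offset }
        let written = stream.write(base + offset, maxLength: data.count - offset)
        if written < 0 { return nil }     // genuine stream error
        return offset + written           // 0 written just means the offset is unchanged
    }
}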

Bidirectional gRPC stream sometimes stops processing responses after stopping and starting

In short
We have a mobile app that streams fairly high volumes of data to and from a server through various bidirectional streams. The streams need to be closed on occasion (for example when the app is backgrounded). They are then reopened as needed. Sometimes when this happens, something goes wrong:
From what I can tell, the stream is up and running on the device's side (the state of both the GRPCProtoCall and the GRXWriter involved is either started or paused)
The device sends data on the stream fine (the server receives the data)
The server seems to send data back to the device fine (the server's Stream.Send calls return as successful)
On the device, the result handler for data received on the stream is never called
More detail
Our code is heavily simplified below, but this should hopefully provide enough detail to indicate what we're doing. A bidirectional stream is managed by a Switch class:
class Switch {

    /** The protocall over which we send and receive data */
    var protocall: GRPCProtoCall?

    /** The writer object that writes data to the protocall. */
    var writer: GRXBufferedPipe?

    /** A static GRPCProtoService as per the .proto */
    static let service = APPDataService(host: Settings.grpcHost)

    /** A response handler. APPData is the datatype defined by the .proto. */
    func rpcResponse(done: Bool, response: APPData?, error: Error?) {
        NSLog("Response received")
        // Handle response...
    }

    func start() {
        // Create a (new) instance of the writer
        // (A writer cannot be used on multiple protocalls)
        self.writer = GRXBufferedPipe()
        // Set up the protocall
        self.protocall = Switch.service.rpcToStream(withRequestWriter: self.writer!, eventHandler: self.rpcResponse(done:response:error:))
        // Start the stream
        self.protocall?.start()
    }

    func stop() {
        // Stop the writer if it is started.
        if self.writer?.state == .started || self.writer?.state == .paused {
            self.writer?.finishWithError(nil)
        }
        // Stop the protocall if it is started.
        if self.protocall?.state == .started || self.protocall?.state == .paused {
            protocall?.cancel()
        }
        self.protocall = nil
    }

    private var needsRestart: Bool {
        if let protocall = self.protocall {
            if protocall.state == .notStarted || protocall.state == .finished {
                // protocall exists, but isn't running.
                return true
            } else if writer?.state == .notStarted || writer?.state == .finished {
                // writer isn't running.
                return true
            } else {
                // protocall and writer are running.
                return false
            }
        } else {
            // protocall doesn't exist.
            return true
        }
    }

    func restartIfNeeded() {
        guard self.needsRestart else { return }
        self.stop()
        self.start()
    }

    func write(data: APPData) {
        self.writer?.writeValue(data)
    }
}
Like I said, heavily simplified, but it shows how we start, stop, and restart streams, and how we check whether a stream is healthy.
When the app is backgrounded, we call stop(). When it is foregrounded and we need the stream again, we call start(). And we periodically call restartIfNeeded(), e.g. when screens that use the stream come into view.
As I mentioned above, what happens occasionally is that our response handler (rpcResponse) stops getting called when the server writes data to the stream. The stream appears to be healthy (the server receives the data we write to it, and protocall.state is neither .notStarted nor .finished). But not even the log on the first line of the response handler is executed.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
Second question: How do we debug this? Everything we could think of that we can query for a status tells us that the stream is up and running, but it feels like the objc gRPC library keeps a lot of its mechanics hidden from us. Is there a way to see whether responses from the server do reach us but fail to trigger our response handler?
Third question: As per the code above, we use the GRXBufferedPipe provided by the library. Its documentation advises against using it in production because it doesn't have a push-back mechanism. To our understanding, the writer is only used to feed data to the gRPC core in a synchronised, one-at-a-time fashion, and since server receives data from us fine, we don't think this is an issue. Are we wrong though? Is the writer also involved in feeding data received from server to our response handler? I.e. if the writer broke due to overload, could that manifest as a problem reading data from the stream, rather than writing to it?
UPDATE: Over a year after asking this, we have finally found a deadlock bug in our server-side code that was causing this behaviour on client-side. The streams appeared to hang because no communication sent by the client was handled by server, and vice-versa, but the streams were actually alive and well. The accepted answer provides good advice for how to manage these bi-directional streams, which I believe is still valuable (it helped us a lot!). But the issue was actually due to a programming error.
Also, for anyone running into this type of issue, it might be worth investigating whether you're experiencing this known issue where a channel gets silently dropped when iOS changes its network. This readme provides instructions for using Apple's CFStream API rather than TCP sockets as a possible fix for that issue.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
From what I can tell by looking at your code, the start() function seems right. In stop(), you do not need to call cancel() on self.protocall; the call will already be finished by the preceding self.writer.finishWithError(nil).
needsRestart is where it gets a bit messy. First, you are not supposed to poll/set the state of the protocall yourself; that state is managed internally. Second, setting those states does not close your stream; it only pauses a writer, and if the app is in the background, pausing a writer is effectively a no-op. If you want to close a stream, you should use finishWithError to terminate the call, and maybe start a new call later when needed.
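Putting that together, a simpler stop() along these lines should be enough (a sketch reusing the question's own properties, not verified against the full app):
// Sketch: finishing the request writer ends the call, so there is no need
// to cancel() the protocall or poll its state afterwards.
func stop() {
    if self.writer?.state == .started || self.writer?.state == .paused {
        self.writer?.finishWithError(nil)
    }
    self.writer = nil
    self.protocall = nil
}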
Second question: How do we debug this?
One way is to turn on gRPC logging (GRPC_TRACE and GRPC_VERBOSITY). Another way is to set a breakpoint at the point where the gRPC Objective-C library receives a gRPC message from the server.
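If it helps, one way to turn that logging on (an assumption on my part; you can also set the variables in the Xcode scheme) is to set the environment variables in-process, e.g. early in application(_:didFinishLaunchingWithOptions:), before any gRPC call is made:
// Sketch: enable verbose gRPC C-core logging. These variables are read when
// the gRPC core initialises, so set them before making any calls.
setenv("GRPC_TRACE", "all", 1)
setenv("GRPC_VERBOSITY", "DEBUG", 1)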
Third question: Is the writer also involved in feeding data received from server to our response handler?
No. If you create a buffered pipe and feed it in as the request of your call, it only feeds data that will be sent to the server. The receiving path is handled by another writer (which is in fact your protocall object).
I don't see where the use of GRXBufferedPipe in production is discouraged. The known drawback of this utility is that if you pause the writer but keep writing data to it with writeValue, you end up buffering a lot of data without being able to flush it, which may cause memory issues.

How to write data to socket in BlackBerry?

I am sending data to the server twice. First I send "Hello world", and then I send "Server". The server receives both pieces of data in a single read, but it needs to read them in two separate reads.
In another case, I write data, then read data from the server, and then write data again. Here the server can read the first write, but it cannot read the second one. The server does read, write, read.
How do I overcome this issue? How do I write data to a socket in BlackBerry?
What you describe is how TCP is supposed to work by default. What you are seeing is the Nagle algorithm (RFC 896) at work, reducing the number of outbound packets being sent so they are processed as efficiently as possible. You may be issuing 2 writes in your code, but they are being transmitted together as 1 packet. Since TCP is a byte stream, the receiver should not make any assumptions about how many packets it gets. You have to delimit your packet data in a higher-level protocol, and the receiver has to process data according to that protocol. It has to handle cases where multiple packets arrive in a single read, a single packet arrives split across multiple reads, and everything in between, only processing packet data once it has been received in full and caching whatever is left over for subsequent reads to process when needed.
Hard to say without a little more detail, but it sounds like you're using 1-directional communication in the first case - i.e. the client writes, then writes again. There are any number of reasons that the server would receive the 2 writes as 1 read. Buffering on the client, somewhere in the wireless stack (or in the BES), buffering on the server side. All of those are legal with TCP/IP.
Without knowing anything more about your solution, have you thought about defining a small protocol - i.e. the client writes a known byte or bytes (like a 0 byte?) before sending the second write? Then the server can read, then recognize the delimiting byte, and say 'aha, this is now a different write from the client'?
As previously said, this is expected TCP behavior to save bandwidth. Note that to deliver your payload, TCP already adds a lot of overhead (e.g. destination port, sequence number, checksums...).
Instead of flushing the data, I'd recommend putting more work into your protocol. For example, you can define a header that contains the number of bytes to read, followed by the payload (the actual data).
The following code reads a protocol encoded as a string with the structure [length];[data]:
StringBuffer headerStr = new StringBuffer();
StringBuffer data = new StringBuffer();

// Read the header: everything up to the ';' delimiter.
char headerByte = dataInputStream.readChar();
while (headerByte != ';') {
    headerStr.append(headerByte);
    headerByte = dataInputStream.readChar();
}

// The header holds the number of characters to read.
int header = Integer.parseInt(headerStr.toString());
int charsRead = 0;

// Read exactly the number of characters indicated by the header.
while (charsRead < header) {
    data.append(dataInputStream.readChar());
    charsRead++;
}
For the first query, I guess you are using TCP. With UDP, each send arrives as a separate datagram, so the server would see the two messages individually (though UDP does not guarantee delivery or ordering).
Can you be more clear/elaborate on the second query?
I would try explicitly telling Connector.open to open up the stream as read_write. Then I would ensure that I flush my connections after each time I talked to the server.
SocketConnection connection = (SocketConnection) Connector.open(url, Connector.READ_WRITE);
OutputStream out = connection.openOutputStream();
// ... write to the server
out.flush();
Here is the approach I used to recover both strings (the sending side is sketched after this list).
On the sending device:
Create a header that contains details of the data, e.g. its length, data type, etc.
Prepend this header to the actual data and send it.
On the receiving device:
Read the header.
Retrieve the actual data length from the header.
Read data up to the length specified by the header.
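For the sending side, the framing can be as simple as prefixing the length, matching the [length];[data] format read by the code above. A sketch (in Swift, since the rest of this thread is iOS-focused; frame(_:) is a hypothetical helper):
// Sketch: build a "[length];[data]" message matching the reader above.
// The receiver reads up to ';' to get the length, then reads exactly that
// many characters.
func frame(_ message: String) -> Data? {
    return "\(message.count);\(message)".data(using: .utf8)
}

// Example: frame("Hello world") produces the bytes for "11;Hello world".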
