How big can the payload be when sending data via WatchConnectivity?

When sending data using the WatchConnectivity framework, either from the phone to the watch or vice-versa, how big can the payload be before the framework gives me the WCErrorCodePayloadTooLarge error?
I couldn't find the answer in Apple's documentation, and there doesn't seem to be much information about this on the internet at this time (in fact, googling WCErrorCodePayloadTooLarge gives just four results).
Has anyone tested this to find the answer? If this question doesn't get an answer, I will try to measure it myself and post the results.
So far, all the information I have is that it may support files larger than 30 MB. I suspect this because I take a lot of raw photos on my iPhone, which are usually ~36 MB each, and they always show up in my watch's Photos app.
For reference, WCSession's documentation has the following description of WCErrorCodePayloadTooLarge:
An error indicating that the item being sent exceeds the maximum size limit. This type of error can occur for both data dictionaries and files.
Available in watchOS 2.0 and later.

According to the private symbols WCPayloadSizeLimitApplicationContext, WCPayloadSizeLimitMessage, WCPayloadSizeLimitUserInfo, the limits (as of iOS 9.0.2) are:
65,536 bytes (65.5 KB) for a message
65,536 bytes (65.5 KB) for a user info transfer
262,144 bytes (262.1 KB) for an application context
I don't know why Apple wouldn't document this, other than that it can be difficult, when sending dictionaries through WatchConnectivity, to determine exactly how large they are. The acceptable sizes may also change over time.
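One rough way to sanity-check a dictionary against these limits before sending is to serialize it as a binary property list and measure the result. This is only an approximation of whatever the framework measures internally; the helper name and sample message below are my own, and the 65,536-byte figure is just the observed limit above:

import WatchConnectivity

// Hypothetical helper: estimates the on-the-wire size of a message
// dictionary by encoding it as a binary property list, which is
// roughly how WatchConnectivity transports it.
func estimatedPayloadSize(of message: [String: Any]) throws -> Int {
    let data = try PropertyListSerialization.data(fromPropertyList: message,
                                                  format: .binary,
                                                  options: 0)
    return data.count
}

// Assumes the session is supported, activated, and reachable.
let message: [String: Any] = ["type": "sync", "samples": Array(repeating: 0, count: 1000)]
if let size = try? estimatedPayloadSize(of: message), size < 65_536 {
    WCSession.default.sendMessage(message, replyHandler: nil, errorHandler: nil)
} else {
    // Too close to the observed limit; fall back to transferFile(_:metadata:).
}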
I couldn't find (and haven't personally observed) any maximum size limit when sending files, though I've noticed that transfers seem to get unreliable with large files (hundreds of MB).

Related

Bluetooth Low Energy data transmission on iOS

I've recently been working on a project that uses Bluetooth Low Energy. I implemented most of the communication protocol, but I started having concerns that I don't actually know how the data transmission works, and whether the solution I implemented will behave the same way with all devices.
My main concern is: what chunk of data is received when I get a notification from peripheral(_:didUpdateValueFor:error:)? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait until it has received all of it before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, say, 100 bytes each, can I assume that a single notification will always contain 100 bytes? Or could it contain the last 50 bytes of one chunk and the first 50 bytes of the next? That would be quite tricky, and it would be hard to detect where my frame begins.
I tried to find more information in Apple's documentation, but there is nothing about it.
My guess is that I always receive a single state of the characteristic. That would mean the chunking depends on the implementation on the peripheral side. But what if the characteristic value is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 bytes) piece of information, like the current battery level, the device name, or the current heartbeat. The idea is that a characteristic changes only when the underlying value changes. It was never designed to be a serial protocol. So your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
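As a minimal sketch of that chunking, assuming a connected CBPeripheral and a writable characteristic (the function name is mine, and real code would also need flow control and error handling):

import CoreBluetooth

// Writes `data` to `characteristic` in MTU-sized chunks. Illustrative
// only: production code should also pace writes via the
// peripheralIsReady(toSendWriteWithoutResponse:) delegate callback.
func writeChunked(_ data: Data,
                  to characteristic: CBCharacteristic,
                  on peripheral: CBPeripheral) {
    let chunkSize = peripheral.maximumWriteValueLength(for: .withoutResponse)
    var offset = 0
    while offset < data.count {
        let end = min(offset + chunkSize, data.count)
        peripheral.writeValue(data.subdata(in: offset..<end),
                              for: characteristic,
                              type: .withoutResponse)
        offset = end
    }
}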
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it does. So if your MTU can handle 100 bytes and you write 100 bytes, the other side will receive 100 bytes or nothing.
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
If you're able to target iOS 11+, do look at L2CAP, which is designed for serial protocols rather than using GATT.
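For a feel of what that looks like, here is a minimal sketch of the central side of an L2CAP channel; the class name and PSM value are illustrative (in practice you read the PSM from the peripheral):

import CoreBluetooth

final class SerialTransport: NSObject, CBPeripheralDelegate {
    // Request an L2CAP channel on a known PSM (iOS 11+). The PSM value
    // here is illustrative; real code reads it from the peripheral.
    func openChannel(on peripheral: CBPeripheral) {
        peripheral.openL2CAPChannel(CBL2CAPPSM(0x00C0))
    }

    // The channel exposes plain streams, which suit serial protocols
    // far better than GATT characteristics do.
    func peripheral(_ peripheral: CBPeripheral,
                    didOpen channel: CBL2CAPChannel?,
                    error: Error?) {
        guard let channel = channel, error == nil else { return }
        channel.inputStream.open()
        channel.outputStream.open()
        // Read and write these streams like any socket-style connection.
    }
}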
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I'm having trouble finding a link to it these days, however.)

iOS Bluetooth BLE read data maximum size

I have an iOS app that reads/writes on a BLE device. The device sends me data that is over 20 bytes long, and I can see it gets trimmed. Based on the following thread
Bluetooth LE maximum transmission size
it looks like iOS is trimming the data. That thread shows how to write larger payloads, but how do we read data larger than 20 bytes?
For anyone looking at this post years later like I am: we ran into this question as well at one point. I'd like to share some helpful hints for handling data larger than 20 bytes.
Since the data is larger than one packet can handle, you will need to send it in multiple packets. It helps significantly if your data ALWAYS ends with some sort of END byte. For us, the end byte encodes the size of the total byte array, so we can check it at the end of the read.
Create a loop that checks for a packet constantly and stops when it receives that end byte (it would also be good to have a timeout on that loop).
Make sure to clear the "buffer" when you start a new read.
It is nice to have an "isBusy" boolean to keep track of whether another function is currently waiting to read from the device. This prevents read overlaps. For us, if the port is currently busy, we wait a half second and try again.
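Putting those hints together, here is a minimal sketch in CoreBluetooth terms; it assumes the peripheral terminates each transfer with a known END byte, and the 0xFF terminator, class name, and length check are all illustrative:

import CoreBluetooth

final class ChunkAssembler: NSObject, CBPeripheralDelegate {
    private var buffer = Data()
    private var isBusy = false          // guards against overlapping reads
    private let endByte: UInt8 = 0xFF   // illustrative terminator value

    func beginRead() {
        buffer.removeAll()              // clear the buffer for a new read
        isBusy = true
        // A real implementation would also start a timeout here.
    }

    // Each notification delivers one characteristic value (at most MTU
    // bytes); append it and stop once the terminator shows up.
    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        guard isBusy, error == nil, let chunk = characteristic.value else { return }
        buffer.append(chunk)
        if chunk.last == endByte {
            isBusy = false
            // buffer now holds the full message; validate its length
            // against whatever the END byte encodes, then hand it off.
        }
    }
}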
Hope this helps!

LZMA SDK iOS to show progress

If anyone has used the iOS wrapper for the LZMA SDK available at https://github.com/mdejong/lzmaSDK and has been able to tweak it to report the progress of unarchiving, please help.
I am going to use this SDK in iOS to extract a 16 MB file, which uncompresses to a 150 MB file, and this takes around 40 seconds to complete. It would be good to have some kind of callback for showing the progress of decompression.
Help is greatly appreciated.
Thanks
So, I looked at this issue quite a bit recently, and honestly the best you are going to be able to do is look at the files in the specific tmp dir where decompression is going on, count them, and compare the count to a known total N. The problem with attempting to do this in the library is that it spans multiple runtimes, and the callback idea makes the code a mess. A callback would also not help that much because of the way 7z compression works: to decode, one needs to build up the decompression dictionary before specific files can be decompressed, and building up that dictionary takes a long time before the first file can even be written. So, if you put a "percent done" counter in your app, it would show 0% done for a long time, then jump to 50%, and then to 90 or 100%. Basically, it would not be that useful even if it were implemented.
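If you do go the file-counting route, a minimal Swift sketch might look like this; the directory path, polling interval, and expected file count are all illustrative assumptions (call it from the main thread so the timer has a run loop):

import Foundation

// Polls the temporary extraction directory and reports rough progress
// as extractedFiles / expectedFiles.
func pollExtractionProgress(tmpDir: String,
                            expectedFiles: Int,
                            onProgress: @escaping (Double) -> Void) -> Timer {
    return Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { timer in
        let files = (try? FileManager.default.contentsOfDirectory(atPath: tmpDir)) ?? []
        onProgress(min(Double(files.count) / Double(expectedFiles), 1.0))
        if files.count >= expectedFiles { timer.invalidate() }
    }
}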
You could try the C++ port of the latest LZMA SDK (15.06), which doesn't have the limitations of the C version described above. Memory allocation and I/O reads/writes can be tuned at runtime, and it also supports password-encrypted archives, smooth progress reporting, the Lzma and Lzma2 archive types, etc.
GitHub: https://github.com/OlehKulykov/LzmaSDKObjC

What is the best size for a buffer in BlackBerry?

In my application I need to read data from an input stream. I have set the buffer size for reading to 1024 bytes. But I have seen that some Android applications use a buffer size of 8192 (8 KB). Is there any specific advantage to increasing the buffer size in my application to 8 KB?
Any expert opinion will be much appreciated.
Edit: (I am using BB OS 6 and 7, and I am dealing with a network input stream.)
I can't say that I've found the universally best buffer size, but it seems to me that something in the range of 1KB to 8KB should be fine in most situations (for BlackBerry Java apps).
Keep in mind that if the amount of data is small (so you'd probably only need one or two buffers at 1KB-8KB), it's probably best just to use the IOUtilities method:
byte[] result = IOUtilities.streamToBytes(inputStream);
with which you don't need to actually pick a buffer size. But, if you know that result would be a large block of data, you're probably right in wanting to read one buffer at a time.
However, I would argue that the answer should almost always be obtained simply by building the app, and measuring performance with a few different values for byte buffer size. It's easy enough to change one constant, build, run and measure again, and then you're not guessing, or taking the advice of someone who doesn't know all the details of your app.
See here for information about BlackBerry Eclipse plugin memory analysis, and here for BlackBerry Eclipse plugin profiling.
These tools are found in Eclipse by selecting the Window menu, then Show View -> Other... -> BlackBerry -> BlackBerry Memory Statistics View, or BlackBerry Profiler View, while debugging.
This way, you can see how much memory, or processor, the network code is using during the call to retrieve data and populate your buffer.
More
BlackBerry InputStream to String conversion
This question was also asked in the official BlackBerry forum here:
http://supportforums.blackberry.com/t5/Java-Development/What-is-the-best-size-for-a-buffer-in-BlackBerry/td-p/2559417
The OP gave this clarification:
"I am reading from network. Once I establish socket connection with the server, the server will send me notifications one after the other. I need to read the notifications/data from the inputstream available in the socket connection. For this I have a background thread which checks anything is available in the inputstream and if something is available, it will read with the help of a buffer and then passes the read data to a StringBuffer."
Given this information, I have a different take, in that I think the BlackBerry network handling abstracts the Java application from the network buffer processing to the extent that the application buffer size will have little if any impact on the performance.
But be aware, this is only my opinion.
My response on that thread was as follows:
First thing to note is that the method "isAvailable()", in my experience, does not work correctly on OS 5.0 and earlier. It is fixed in OS 6 (at least from my testing).
Because isAvailable() was broken (and for other application reasons), what I have implemented for a socket connection is that each message is preceded by its length. So on the socket connection, I read the length of the next message, and then the actual data. This is done with no blocking - in other words, I read the entire message, regardless of size. I recommend you do the same. The message must exist in full somewhere, so it makes no difference whether it is in memory managed by the socket connection or in memory managed by you.
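The question is about BlackBerry Java, but the length-prefixed framing idea is platform-independent. As a minimal sketch, here it is in Swift against a generic InputStream, assuming a 4-byte big-endian length header (the header size and function names are illustrative, and the stream must already be open):

import Foundation

// Reads exactly `count` bytes, looping because a single read() call
// may return fewer bytes than requested.
func readFully(_ stream: InputStream, count: Int) -> Data? {
    var data = Data(count: count)
    var offset = 0
    while offset < count {
        let read = data.withUnsafeMutableBytes { buf -> Int in
            let base = buf.baseAddress!.advanced(by: offset)
            return stream.read(base.assumingMemoryBound(to: UInt8.self),
                               maxLength: count - offset)
        }
        if read <= 0 { return nil }     // stream closed or errored
        offset += read
    }
    return data
}

// Reads one message: a 4-byte big-endian length, then that many bytes.
// A real protocol should also sanity-check `length` against a maximum.
func readMessage(from stream: InputStream) -> Data? {
    guard let header = readFully(stream, count: 4) else { return nil }
    let length = header.reduce(0) { ($0 << 8) | Int($1) }
    return readFully(stream, count: length)
}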
Note also, until OS 6.0, when you did the read you would get all the data to fill the buffer you had - in other words it waited till the buffer was full. In OS 6.0 and later, the read can complete without giving you a full buffer.
In your case, you might be targeting OS 6.0 and later only, so you could use isAvailable() - create a buffer of that size, and read everything. I can't see that it makes any difference whether you have the bytes in memory managed by the socket or in memory managed by you.
But in fact, I would argue that the best approach is the one that makes your processing simplest. So for example, if you know that the next message is 200 bytes, then read 200 bytes, and then process that message. Then read the next message.
You could spend a lot of time attempting to manage your buffers to match the underlying socket buffers. I don't know exactly how the underlying BlackBerry socket processing code works, but it doesn't put data directly into your buffers. So let it manage its buffer size to optimize the network, and you manage your buffer size to optimize your processing. That will work best for everyone.

Maximum upload size on iOS / NSURLRequest / NSURLConnection

I'm uploading multi-gigabyte files from iPhones/iPods, and I noticed that NSURLConnection and friends incorrectly use signed 32-bit integers for the number of bytes in the upload (all of Apple's other APIs use (long long), a.k.a. (int64_t), allowing you to deal with any file that could exist).
When I try to upload files over 3 GB, streamed from a file, I get overflows in the data coming back from Apple - but this could be a problem with anything in the chain (web proxy, cache, server). I'm still debugging this, but in the meantime...
Apple docs don't mention a size limit on uploads - is there one?
Bizarrely, I get no problems with uploads up to 3 GB (even though the overflow ought to occur at 2 GB) - it's going over 3 GB that always overflows. (I've triple-checked my source to make sure I'm not using 32-bit types anywhere, so I'm fairly confident the problem is somewhere between iOS and the server(s).)
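For later readers: NSURLConnection has since been superseded, and a streamed upload through URLSession keeps the file on disk rather than in memory, with the task's byte counters (countOfBytesSent and friends) typed as Int64. A minimal sketch, with an illustrative endpoint and file path:

import Foundation

// Streams a multi-gigabyte file from disk; nothing here is limited
// to 32-bit byte counts.
let fileURL = URL(fileURLWithPath: "/path/to/huge-file.bin")               // illustrative
var request = URLRequest(url: URL(string: "https://example.com/upload")!)  // illustrative
request.httpMethod = "PUT"

let task = URLSession.shared.uploadTask(with: request, fromFile: fileURL) { _, response, error in
    // Inspect `response`/`error` to confirm the upload completed.
    print(error ?? response ?? "done")
}
task.resume()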
