When I try to send an object containing multiple images (converted to strings using Base64) as a STREAM payload, the onPayloadTransferUpdate() method reports a "Failure" result, and the devices (tested only with 2 devices connected) automatically disconnect after that. Is Google Nearby Connections not the right option for sending large amounts of bytes?
Nearby Connections should be able to handle that. There's no explicit size limit on STREAM payloads.
I would suggest chunking the bytes (e.g. sending a couple of KB at a time) and seeing if that helps. You can get into weird situations when you send an entire file at once, because the bytes are loaded into memory twice (once inside your app and once inside the Nearby process), which can cause out-of-memory errors. Binder, the interprocess communication layer on Android, also has a limited buffer for sending data between processes.
You can also save it as a temporary file and send it as a FILE payload, in which case we will handle the chunking for you.
Disclaimer: I work on Nearby Connections.
1) You don't need to Base64-encode the data for the sake of Nearby Connections -- your STREAM can have raw binary data, and that'll work just fine.
2) How big is the data you're sending, and at what byte offset does it fail (you can see this in the PayloadTransferUpdate you get with Status.ERROR)? It sounds like your devices are just getting disconnected.
3) What Strategy are you using?
4) If you still have discovery ongoing (i.e. you haven't called stopDiscovery()), try stopping that and then sending your Payload -- discovery is a heavyweight operation that can make it hard to reliably maintain connections between devices for long intervals.
I am writing a real-time game client. It works great on desktop, but as I come to implement it on iOS, the packets seem to cluster together and are only sent every half second to one second, which yields a very jumpy experience.
I am using the low-level sys/socket library, so a send simply looks like:
send(this->sock, bytes, byteslength, 0)
As I said, this behavior does not occur on a desktop system, but I assume there is some sort of power-saving logic going on here that wants to cluster the data into a single packet instead of sending it as soon as send is called. Certainly there must be a way to force an immediate send.
We are developing a typical CANbus networked system with what you could call a controller organizing a number of devices.
The devices need configuration, which the controller writes (and might also read back) using regular object dictionary items (currently in the manufacturer-specific range).
The devices also perform actions (commands) with more than 8 bytes of data, which we solve by having write-only items in the device object dictionary and relying on the regular segmentation/de-segmentation of SDOs. (I don't know if this is the CANopen way of doing things, but it seems reasonable.)
However, the device also produces events (say some sensor data passes a certain threshold) resulting in more than 8 bytes of asynchronous data coming up from the device. PDOs are meant to be used for sending async event data, but a PDO can only contain 8 bytes. The devices could write the data into an object dictionary item on the controller, but this doesn't seem like the CANopen way. Am I right?
The best we've come up with is to send a PDO to the controller, informing the controller that more data are available in the object dictionary on the device.
Anyone with a CANopen background who can weigh in on the best (CANopen) way of solving this?
Since I'm repeating 8 bytes a lot, we can safely assume that this network is not running CAN-FD.
The key to any sensible CAN network design is to consider real-time requirements, data priorities, bus load, and data amounts early on. If you find yourself with a chunk of data larger than 8 bytes, that strongly suggests something is wrong in the design: it should likely be split into several packages.
Generally, you shouldn't be using SDOs for data at all, since they come with overhead. That includes writes to the object dictionary, which also means SDO access. Block transfers etc. with SDO are meant for things like bootloaders or one-time configuration, not for live data traffic in operational mode. It can be done, but it is fishy.
You can in theory map data across several PDOs with PDO mapping, but all of this really sounds like an "XY problem": you are convinced that you need to transmit larger chunks of data and are looking for a way to do it. But step 1 is to look at the fundamental network data/design and see if you actually need those large chunks, or if it makes sense to split them into several. The ideal CANopen design is to have one PDO per type of data, when possible.
I am developing an app that needs to send large amounts of data between an iPhone and a device (it takes approximately 10 seconds to send the data), but I want to be able to cancel the data transfer at any time. I am aware that I can simply drop the connection to the device at any time with
centralManager.cancelPeripheralConnection(peripheral)
but that is not what I am actually looking for, as I want to stop sending data without terminating the Bluetooth connection.
Is there a way to terminate the data transmission without dropping the connection to the device?
The code for sending the data is as follows:
    for (var Hex: UInt8 = 0x01; Hex <= 0x14; Hex += 1) {
        var outbuffer = [UInt8](count: 16, repeatedValue: 0x00)
        outbuffer[0] = 0x68
        outbuffer[1] = Hex
        let data = NSData(bytes: outbuffer, length: 7)
        print("data\(data)")
        connectingPeripheral.writeValue(data, forCharacteristic: connectingCharacteristicPassword, type: CBCharacteristicWriteType.WithResponse)
    }
I figured I would go ahead and give my input on this. There is no way in CoreBluetooth to stop the transmission of a data packet that has already been written to the output buffer. The reason is simply that it is not needed and would be useless functionality. The only reason you are having this issue is that, in my opinion, your methodology is wrong. Do not put everything in a for-loop and push the data all at once. Instead, you should implement some sort of flow-control mechanism.
In Bluetooth LE there are two main ways of writing data to a peripheral: “Write Commands” and “Write Requests”. You can look at it a bit like the TCP vs UDP protocols. With write commands you are just sending data without knowing whether or not the data was received by the application on the other side of the Bluetooth link. With write requests you are sending data and letting the peripheral know that you want to be notified (ack’ed) that the data was in fact received. These two types are called CBCharacteristicWriteWithResponse and CBCharacteristicWriteWithoutResponse in CoreBluetooth. When writing data using CBCharacteristicWriteWithResponse (like you are doing in your code) you will get a peripheral:didWriteValueForCharacteristic:error: callback, which verifies that the data has arrived at the other side. At that point you have the option to send the next packet if you want to, but if you for some reason want to stop sending data, you can do that as well. Doing it this way, you are in control of the whole flow rather than simply pushing everything through a for-loop.
But wait, why would you ever want to use write commands then? Well, since write requests require the receiver to respond to the sender, data must be sent in both directions. In this case, since the ack is sent by the application layer, you have to wait for the next connection interval before the ack can be sent. This means that when sending large amounts of data you can only send one packet every two connection intervals, which gives you a very poor overall bit rate.
With write commands, since they are not ack’ed, you can send as many packets as possible within one connection event window. In most cases you should be able to send about 10-20 packets per connection window. But be aware that if you send too many packets you will fill the outgoing buffer and packets will be lost. So, something you can try is to send 9 packets with the WriteWithoutResponse type, followed by 1 packet with the WriteWithResponse type. After doing this you wait for the peripheral:didWriteValueForCharacteristic:error: callback, in which you can then send 10 more packets the same way. This way you will manage to send 10 packets every two connection intervals while still being able to control the flow better.
You can of course experiment with the ratio a bit, but remember that the buffer is shared between multiple applications on the iOS device so you don’t want to be too close to the limit.
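To make that concrete, here is a rough Swift sketch of the batched approach. Everything here (the class, the pre-chunked data, the batch size of 10) is just placeholder scaffolding to illustrate the idea, not code you must copy as-is:

    import CoreBluetooth

    // Sketch: send 9 unacknowledged packets plus 1 acknowledged packet per batch,
    // and only start the next batch once the acknowledged write is confirmed.
    final class BatchedWriter: NSObject, CBPeripheralDelegate {
        private var chunks: [Data]                 // data already split into ~20-byte pieces
        private let peripheral: CBPeripheral
        private let characteristic: CBCharacteristic
        private let batchSize = 10                 // 9 write commands + 1 write request

        init(chunks: [Data], peripheral: CBPeripheral, characteristic: CBCharacteristic) {
            self.chunks = chunks
            self.peripheral = peripheral
            self.characteristic = characteristic
            super.init()
            peripheral.delegate = self
        }

        func sendNextBatch() {
            let batch = Array(chunks.prefix(batchSize))
            guard !batch.isEmpty else { return }   // nothing left, or the transfer was cancelled
            chunks.removeFirst(batch.count)
            for (index, chunk) in batch.enumerated() {
                // The last packet of each batch is acknowledged; the rest are not.
                let type: CBCharacteristicWriteType =
                    (index == batch.count - 1) ? .withResponse : .withoutResponse
                peripheral.writeValue(chunk, for: characteristic, type: type)
            }
        }

        // Fired when the acknowledged packet of the current batch has been confirmed.
        func peripheral(_ peripheral: CBPeripheral,
                        didWriteValueFor characteristic: CBCharacteristic,
                        error: Error?) {
            guard error == nil else { return }     // decide how to recover or abort here
            sendNextBatch()
        }
    }

Calling sendNextBatch() once kicks off the transfer; to cancel mid-transfer, you simply don't queue another batch when the callback fires.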
With iOS CoreBluetooth, when sending a relatively large amount of data, it's important to break it up into 20-byte chunks and then write them one at a time to the peripheral object. This is pretty easy to do when using a WriteWithResponse characteristic: write 20 bytes, wait for the callback, write the next 20 bytes, and so on.
But what about with a WriteWithoutResponse characteristic? I need to send 1-2 kB of data as quickly as I can over BLE. WriteWithResponse is very inefficient at doing this, because it acks every 20-byte packet. Error correction and reliability are taken care of at my application layer, so I have no need for BLE to ack the data.
The issue is that WriteWithoutResponse does not give you a callback, because there is no way for CoreBluetooth to know when the data was actually written. So the question is: how do we properly space out sending a large amount of data using WriteWithoutResponse?
The only solution I've thought of is to do the following:
Get the connection interval and the number of packets that the link is capable of per connection interval.
Immediately write X packets of 20 bytes each, wait Y time, and repeat until there is no data left. (X = Number of packets per connection interval, Y = The connection interval)
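In code, the idea would look roughly like this (the 30 ms interval and 4 packets per interval are pure guesses on my part, and the peripheral and characteristic come from the usual CoreBluetooth setup), which is exactly where the problems below come in:

    import Foundation
    import CoreBluetooth

    // Rough sketch of the timed-burst idea; both pacing values are guesses.
    func sendTimed(chunks: [Data],                      // payload pre-split into 20-byte pieces
                   to peripheral: CBPeripheral,
                   characteristic: CBCharacteristic) {
        let connectionInterval: TimeInterval = 0.03     // assumed ~30 ms; not exposed by CoreBluetooth
        let packetsPerInterval = 4                      // assumed; depends on the peripheral
        var remaining = chunks

        _ = Timer.scheduledTimer(withTimeInterval: connectionInterval, repeats: true) { timer in
            guard !remaining.isEmpty else { timer.invalidate(); return }
            for chunk in remaining.prefix(packetsPerInterval) {
                peripheral.writeValue(chunk, for: characteristic, type: .withoutResponse)
            }
            remaining.removeFirst(min(packetsPerInterval, remaining.count))
        }
    }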
There are two glaring problems with this approach:
CoreBluetooth does not expose the connection interval to us (why??). So there are two options. The first is to guess: probably either a worst-case or average-case value, depending on your preferred connection parameters; I think iOS likes to pick 30 ms. But this is a bad idea because a central has the right to completely ignore the suggested parameters. The second is that you could have the peripheral store and transmit the agreed-upon CI to the iOS device. The issue with this is that you can't send the CI until the iOS device has finished discovering the services and characteristics and subscribed to the appropriate notification. So you'd have to either put in a somewhat arbitrary fixed delay after connection before sending the CI, or send a small amount of data from the iOS device notifying the peripheral that it is ready. Both create latencies and are pretty poor solutions.
We don't know how many packets per connection interval can be supported. There is a theoretical maximum of 6. But the average case is probably 4 or less. It is also dependent on the peripheral.
Of course, a great option for sending large amounts of data is to increase the MTU size beyond 20 bytes. But it seems few peripherals support this; ours does not.
Anyone have any insights on how to solve this?
If you are supporting iOS 11, CBPeripheral exposes the following property (documented on Apple's developer site):
@property(readonly) BOOL canSendWriteWithoutResponse;
This property lets you know whether the outgoing buffer is filled up or whether you can transmit more writes without response.
After each transmission, keep an eye on this property and on the CBPeripheralDelegate callback:
peripheralIsReadyToSendWriteWithoutResponse:
which should be sufficient to let you know when to send more data.
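A minimal sketch of that pattern (iOS 11+): these two methods would live in whatever class acts as the peripheral's delegate, and the chunks, peripheral, and characteristic properties are placeholders for your own setup.

    // Keep writing while CoreBluetooth reports room in its outgoing buffer,
    // and resume from the delegate callback once the buffer drains.
    func drainWriteQueue() {
        while let chunk = chunks.first, peripheral.canSendWriteWithoutResponse {
            peripheral.writeValue(chunk, for: characteristic, type: .withoutResponse)
            chunks.removeFirst()
        }
    }

    // CBPeripheralDelegate: called when the buffer can accept more data again.
    func peripheralIsReady(toSendWriteWithoutResponse peripheral: CBPeripheral) {
        drainWriteQueue()
    }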
I have developed an app that uploads a file to a server. Is there a limit on the size of a file uploaded using cellular data? I think Wi-Fi has no limit, but what about cellular data: does the OS limit the amount of data that can be used? If so, what is the size limit?
Your app will have no problem continuing to spray HTTP Post requests all over the internets as often as you tell it to, or whatever other exotic web magic you're invoking.
If you are referring to particular upload limits of individual cell phone data plans, then I think you'll have to read the fine print of every different contract with every different mobile provider to get an answer, and that answer will be "it varies".
Of course, if you are considering sending particularly large data sets back to a server, you're probably going to consider breaking them into small pieces and handling dropped packets gracefully anyway, so I'm assuming you're not asking "what's the biggest file I can push through completely intact".
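If you do end up chunking a large upload, a bare-bones sketch might look something like this (the URL, chunk size, and header name are all made up for the example, and retry/backoff for failed chunks is left as an exercise):

    import Foundation

    // Bare-bones sketch: split the payload and POST each piece separately.
    // https://example.com/upload and X-Chunk-Index are placeholders.
    func uploadInChunks(_ fileData: Data, chunkSize: Int = 512 * 1024) {
        let url = URL(string: "https://example.com/upload")!
        var offset = 0
        var chunkIndex = 0
        while offset < fileData.count {
            let end = min(offset + chunkSize, fileData.count)
            let chunk = fileData.subdata(in: offset..<end)
            let index = chunkIndex

            var request = URLRequest(url: url)
            request.httpMethod = "POST"
            request.setValue("\(index)", forHTTPHeaderField: "X-Chunk-Index")

            URLSession.shared.uploadTask(with: request, from: chunk) { _, _, error in
                // A dropped chunk should be re-queued here rather than silently ignored.
                if let error = error { print("chunk \(index) failed: \(error)") }
            }.resume()

            offset = end
            chunkIndex += 1
        }
    }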