I have an external BLE device to which I can read/write data up to 500-800 bytes.
I'm writing in chunks of 200 bytes, and reading is limited on the BLE device side to 20 bytes per packet.
After I connect and perform the first operation (read / writeWithResponse), it takes approx. 1 second, but every operation after that takes up to 1 minute; for example, while reading, each packet takes about 2 seconds to transfer.
If I disconnect and reconnect to the device, again only the first operation is fast.
The code is pretty simple: when writing, I just split the data and send it in chunks. Each subsequent chunk is sent right after I receive the didWriteValueForCharacteristic: callback.
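Roughly, the write logic is a minimal version of this pattern (simplified sketch; identifiers are illustrative):

```swift
import CoreBluetooth

final class ChunkedWriter: NSObject, CBPeripheralDelegate {
    private let chunkSize = 200
    private var pendingChunks: [Data] = []

    func send(_ data: Data, to characteristic: CBCharacteristic, on peripheral: CBPeripheral) {
        // Split the payload into 200-byte chunks.
        pendingChunks = stride(from: 0, to: data.count, by: chunkSize).map {
            data.subdata(in: $0 ..< min($0 + chunkSize, data.count))
        }
        writeNextChunk(to: characteristic, on: peripheral)
    }

    private func writeNextChunk(to characteristic: CBCharacteristic, on peripheral: CBPeripheral) {
        guard !pendingChunks.isEmpty else { return }
        peripheral.writeValue(pendingChunks.removeFirst(), for: characteristic, type: .withResponse)
    }

    // The next chunk is sent only after the previous write is acknowledged.
    func peripheral(_ peripheral: CBPeripheral, didWriteValueFor characteristic: CBCharacteristic, error: Error?) {
        guard error == nil else { return }
        writeNextChunk(to: characteristic, on: peripheral)
    }
}
```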
How can I improve the speed? Do you have any suggestions as to why only the first operation with the BLE module is fast?
I'm developing a sensor based on the ESP32-DevKit board that senses vibration with an accelerometer. The goal of the application/sensor is to store the accelerometer data for 20 s and then send all of it over BLE.
I'm currently using the ESP32 ADC (12-bit) at a fast sampling rate (10-100 kHz) to get an accurate signal. The next step is to store this signal, but it will take up almost 2 MB, so I don't know whether I can store it on the ESP32 and send it later via BLE (packet by packet); juggling that many tasks may end up degrading processing time and energy use.
The main points are:
Fast sampling rate / accurate signal.
Sending data to the phone with the lowest energy possible.
Using the ESP32-S2 to store 2 MB of data and resend it to the phone app.
Is there any possibility of doing what I want?
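For reference, the storage estimate works out roughly as follows (a back-of-the-envelope sketch assuming each 12-bit sample is padded to 2 bytes):

```swift
// Rough storage check for the capture described above.
let sampleRateHz = 50_000.0     // somewhere in the 10-100 kHz range
let bytesPerSample = 2.0        // a 12-bit sample stored in 2 bytes
let captureSeconds = 20.0
let totalBytes = sampleRateHz * bytesPerSample * captureSeconds
// 50 kHz * 2 B * 20 s = 2,000,000 bytes, i.e. roughly the 2 MB mentioned above.
```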
When storing the signal, have you considered compressing the data? If each accelerometer reading is very similar to the previous one, then just storing the difference might save a lot of space, especially if you use a variable-length format.
I have a project where I save GPS data; because it comes from comparatively slow-moving boats, the difference between two coordinates (taken every second or so) is very small, so there is no point storing the full coordinates.
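A minimal sketch of that kind of delta encoding, assuming 12-bit samples held in Int16 (the escape byte and field widths are illustrative, not a fixed format):

```swift
// Store each sample as a 1-byte difference from the previous one when it fits,
// otherwise emit an escape byte followed by the full 16-bit sample.
func deltaEncode(_ samples: [Int16]) -> [UInt8] {
    var out: [UInt8] = []
    var previous: Int16 = 0
    for sample in samples {
        let delta = Int(sample) - Int(previous)
        previous = sample
        if (-127...127).contains(delta) {
            out.append(UInt8(bitPattern: Int8(delta)))         // small change: one byte
        } else {
            out.append(0x80)                                    // escape marker (never a small delta)
            out.append(UInt8(truncatingIfNeeded: sample >> 8))  // high byte
            out.append(UInt8(truncatingIfNeeded: sample))       // low byte
        }
    }
    return out
}
```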
I'm currently working on a project that uses Bluetooth Low Energy. I've implemented most of the communication protocol, but I've started to have concerns that I don't actually know how the data transmission works and whether the solution I implemented will behave the same way with all devices.
My main concern is: what data chunk do I receive when I get a notification via peripheral(_:didUpdateValueFor:error:)? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait until it has received everything before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, let's say, 100 bytes each, can I assume that I will always get 100 bytes in a single notification? Or could it be the last 50 bytes of the previous chunk and the first 50 bytes of the next one? That would be quite tricky, and it would be hard to detect where my frame begins.
I tried to find more information in the Apple documentation, but there is nothing about it.
My guess is that I always receive a single state of the characteristic, which would mean that the chunking depends on the implementation on the peripheral side. But what if the characteristic's value is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 bytes) piece of information like current battery level, device name, or current heartbeat. The idea is that a characteristic will change only when the underlying value changes. It was never designed to be a serial protocol. So your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
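A minimal sketch of that chunking, assuming you already hold the CBPeripheral and the payload (the function name is illustrative):

```swift
import CoreBluetooth

// Split a payload into pieces no larger than the peripheral reports for this write type.
func chunks(of payload: Data, for peripheral: CBPeripheral) -> [Data] {
    let maxLen = peripheral.maximumWriteValueLength(for: .withResponse)
    return stride(from: 0, to: payload.count, by: maxLen).map {
        payload.subdata(in: $0 ..< min($0 + maxLen, payload.count))
    }
}
```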
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it. So if your MTU can handle 100 bytes, and you write 100 bytes, the other side will receive 100 bytes or nothing.
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
If you're able to target iOS 11+, do look at L2CAP channels, which are designed for serial protocols, rather than going through GATT.
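A rough sketch of opening such a channel (assuming the peripheral publishes a PSM you already know):

```swift
import CoreBluetooth

// Open an L2CAP channel instead of streaming over GATT (iOS 11+).
func openStream(on peripheral: CBPeripheral, psm: CBL2CAPPSM) {
    peripheral.openL2CAPChannel(psm)
    // The channel is delivered in peripheral(_:didOpen:error:); its
    // inputStream/outputStream can then be used like an ordinary socket pair.
}
```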
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I'm having trouble finding a link to it these days, however.)
I have an iOS app that reads from and writes to a BLE device. The device sends me data more than 20 bytes long, and I can see it getting truncated. Based on the following thread:
Bluetooth LE maximum transmission size
it looks like iOS is truncating the data. That thread shows how to write payloads larger than that, but how do we read data larger than 20 bytes?
For anyone looking at this post years later, like I am: we ran into this question as well at one point, and I'd like to share some helpful hints for handling data larger than 20 bytes.
Since the data is larger than one packet can handle, you will need to send it in multiple packets. It helps significantly if your data ALWAYS ends with some sort of END byte. For us, the end byte gives the size of the total byte array, so we can check it at the end of the read.
Create a loop that constantly checks for a packet and stops when it receives that end byte (it would also be good to have a timeout on that loop).
Make sure to clear the "buffer" when you start a new read.
It is nice to have an "isBusy" boolean to keep track of whether another function is currently waiting to read from the device. This prevents read overlaps. For us, if the port is currently busy, we wait a half second and try again.
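A minimal sketch of that buffer-and-end-byte scheme on the iOS side (the trailing length byte is our own convention, not part of BLE):

```swift
import CoreBluetooth

final class NotificationAssembler: NSObject, CBPeripheralDelegate {
    private var buffer = Data()
    var isBusy = false                 // set while a read is in flight, to avoid overlaps

    func beginRead() {
        buffer.removeAll()             // clear the buffer when starting a new read
        isBusy = true
    }

    func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic, error: Error?) {
        guard error == nil, let packet = characteristic.value else { return }
        buffer.append(packet)
        // Done when the trailing length byte matches how much we have collected.
        if let last = buffer.last, Int(last) == buffer.count {
            isBusy = false
            handleCompleteMessage(buffer)
        }
    }

    func handleCompleteMessage(_ data: Data) { /* parse the full payload here */ }
}
```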
Hope this helps!
I'm writing an iOS app for a device with a BLE module that advertises a few bytes of data on a consistent basis while it's connected. We are trying to estimate the power consumption of the BLE module so we can estimate the battery life for the device. I've scoured SO and Google looking for the appropriate way to estimate this, but I'm coming up empty. Is there a way to take the number of bytes that are being sent, multiplied by the frequency with which the data is sent and come up with a rough approximation of power consumption?
A typical BLE SoC (i.e. an all-in-one application + radio chip) typically consumes:
A few hundred nA while in deep sleep,
2 to 10 µA while an RTC tracks time (needed between radio events while advertising or connected),
10 to 30 mA while the CPU or radio runs (computing data, TX, RX); RX and TX power consumption is roughly the same.
The life of a BLE peripheral basically consists of 3 main states:
Being idle (not advertising, not connected). Most people would say the device is off; unless it has a physical power switch, though, it still consumes a few hundred nanoamps.
Advertising (before a connection takes place). The peripheral needs to be running approximately 5 ms out of every 50 ms. This is when your device actually uses the most power, because advertising requires sending many packets, frequently. Average consumption is in the 1-10 mA range.
Being connected. Here, consumption is application-dependent. If the application is mostly idle, the peripheral is still required to wake up periodically and send a packet each time in order to keep the connection alive; even if it has nothing useful to send, an empty packet goes out. Side effect: low duty-cycle applications basically get to transmit their packets for free.
So, to actually answer your question:
The length of your payload is not a problem (as long as you keep your packets short): we're talking about transmitting for 1 µs more per bit, while the rest of the handling (waking up, receiving the master's packet, etc.) keeps the chip awake for at least 200 µs;
What you actually call "continuous" is the key point: is it 5 Hz? 200 Hz? 3 kHz?
Let's say we send data at a 5 Hz rate. The estimate is then around 5 connection events every second, roughly 2 ms of CPU + radio time per connection event, so 10 ms of running time every second. Average consumption: about 200 µA (0.01 * 20 mA + 0.99 * 5 µA).
This calculation does not take a few other factors into account, though:
You should add the consumption of your sensors (gyros/accelerometers can eat a few mA),
You should consider on-board communication (I2C, SPI, etc.),
If your design actually uses two chips (one for the application talking to a radio module), consumption will roughly double.
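The duty-cycle arithmetic above, written out as a small helper (the currents are the ballpark figures from this answer, not measured values):

```swift
// Average current for a given radio/CPU duty cycle.
func averageCurrentmA(activeFraction: Double,
                      activeCurrentmA: Double = 20.0,            // CPU + radio running
                      sleepCurrentmA: Double = 0.005) -> Double { // ~5 µA RTC sleep
    return activeFraction * activeCurrentmA + (1.0 - activeFraction) * sleepCurrentmA
}

// 5 connection events/s at ~2 ms each -> active 10 ms per second:
let average = averageCurrentmA(activeFraction: 0.010)   // ≈ 0.205 mA, i.e. ~200 µA
```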
Say I have an InfiniBand or similar PCIe device and a fast Intel Core CPU, and I want to send, e.g., 8 bytes of user data over the IB link. Say also that there is no device driver or other kernel involvement: we're keeping this simple and just writing directly to the hardware. Finally, say that the IB hardware has previously been configured properly for the context, so it's just waiting for something to do.
Q: How many CPU cycles will it take the local CPU to tell the hardware where the data is and that it should start sending it?
More info: I want to get an estimate of the cost of using PCIe communication services compared to CPU-local services (e.g. using a coprocessor). What I'm expecting is that there will be a number of writes to registers on the PCIe bus, for example setting up the address and length of a packet, and possibly some reads and writes of status and/or control registers. I expect each of these to take several hundred CPU cycles, so I would expect the overall setup to take on the order of 1000 to 2000 CPU cycles. Would I be right?
I am just looking for a ballpark answer...
Your ballpark number is correct.
If you want to send an 8-byte payload using an RDMA write, first you will write the request descriptor to the NIC using programmed I/O, and then the NIC will fetch the payload using a PCIe DMA read. I'd expect both the PIO write and the DMA read to take between 200 and 500 nanoseconds each, although the PIO should be faster.
You can get rid of the DMA read and save some latency by putting the payload inside the request descriptor.
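As a sanity check on the ballpark, converting those latencies into cycles at an assumed clock speed:

```swift
// Rough cycle count for one PIO descriptor write plus one DMA payload fetch.
let cpuGHz = 3.0                 // assumed core clock
let pioNanoseconds = 250.0       // programmed-I/O write of the descriptor
let dmaReadNanoseconds = 350.0   // NIC fetching the payload over PCIe
let totalCycles = (pioNanoseconds + dmaReadNanoseconds) * cpuGHz
// ≈ 1800 cycles, in line with the 1000-2000 cycle estimate in the question.
```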