I have an iOS app that reads from and writes to a BLE device. The device sends me data longer than 20 bytes, and I can see it gets truncated. Based on the following thread
Bluetooth LE maximum transmission size
it looks like iOS is trimming the data. That thread shows how to write larger amounts of data, but how do we read information larger than 20 bytes?
For anyone looking at this post years later like I am: we ran into this question as well at one point, and I'd like to share some helpful hints for handling data larger than 20 bytes.
Since the data is larger than one packet can handle, you will need to send it in multiple packets. It helps significantly if your data ALWAYS ends with some sort of END byte. For us, the end byte gives the size of the total byte array, so we can check it at the end of the read.
Create a loop that constantly checks for a packet and stops when it receives that end byte (it would also be good to have a timeout on that loop).
Make sure to clear the "buffer" when you start a new read.
It is nice to have an "isBusy" boolean to keep track of whether another function is currently waiting to read from the device. This prevents overlapping reads. For us, if the port is currently busy, we wait half a second and try again.
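The exact callback API depends on your BLE stack, but the reassembly logic itself is small. Here is a minimal sketch in Java; PacketSource, the timeout handling, and the end-byte convention are assumptions for illustration, not a real BLE API:

    import java.io.ByteArrayOutputStream;

    // PacketSource stands in for however your BLE stack delivers
    // notification packets; it is a hypothetical interface for this sketch.
    interface PacketSource {
        byte[] nextPacket(); // next chunk (20 bytes or fewer), or null if none yet
    }

    public class BleMessageReader {
        // Collects packets into a buffer until the END byte arrives or the
        // timeout expires. Convention assumed here: the END byte is the last
        // byte of the message and holds the total message length, so we can
        // also verify that nothing was dropped.
        public static byte[] readMessage(PacketSource source, long timeoutMs)
                throws InterruptedException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream(); // fresh buffer per read
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                byte[] packet = source.nextPacket();
                if (packet == null) {
                    Thread.sleep(10); // nothing received yet; avoid a busy spin
                    continue;
                }
                buffer.write(packet, 0, packet.length);
                byte[] data = buffer.toByteArray();
                if (data.length > 0 && (data[data.length - 1] & 0xFF) == data.length) {
                    return data; // END byte seen and the size checks out
                }
            }
            return null; // timed out
        }
    }

The "isBusy" guard described above (retrying after half a second if another read is in progress) is left out for brevity.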
Hope this helps!
Related
I'm currently working on a project which uses Bluetooth Low Energy. I implemented most of the communication protocol; however, I started having concerns that I don't actually know how the data transmission works, and whether the solution I implemented will behave the same way with all devices.
So my main concern is: what data chunk is received when I get a peripheral(_:didUpdateValueFor:error:) notification? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait until it has received it all before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, let's say, 100 bytes each, can I assume that I will always get 100 bytes in a single notification? Or could it be the last 50 bytes of the previous chunk and the first 50 bytes of the next one? That would make it quite tricky to detect where my frame begins.
I tried to find more information in Apple's documentation, but there is nothing about it.
My guess is that I always receive a single state of the characteristic. That would mean the chunking depends on the implementation on the peripheral side. But what if the characteristic is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 byte) piece of information like the current battery level, the device name, or the current heartbeat. The idea is that a characteristic will change only when the underlying value changes. It was never designed to be a serial protocol. So your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
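For illustration, here is a minimal chunking sketch in Java; writeChunk is a hypothetical stand-in for your platform's characteristic write, and mtu for the value reported by maximumWriteValueLength(for:):

    import java.util.Arrays;
    import java.util.function.Consumer;

    public class Chunker {
        // Splits a payload into mtu-sized slices and hands each slice to
        // writeChunk; only the slicing logic is the point here.
        static void writeInChunks(byte[] payload, int mtu, Consumer<byte[]> writeChunk) {
            for (int offset = 0; offset < payload.length; offset += mtu) {
                int end = Math.min(offset + mtu, payload.length);
                writeChunk.accept(Arrays.copyOfRange(payload, offset, end));
            }
        }
    }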
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it. So if your MTU can handle 100 bytes, and you write 100 bytes, the other side will receive 100 bytes or nothing.
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
If you're able to target iOS 11+, do look at L2CAP, which is designed for serial protocols rather than using GATT.
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I am having trouble finding a link to it anymore, however.)
I’m here again with a new question; this time about PLC.
I'll start by saying that I'm new to PLCs and had never seen one until a couple of months ago.
I've been asked to write a program that reads, from Delphi, some data from a Siemens S7-300 PLC in order to archive it in a SQL Server database. I'm using the "libnodave" library.
The program is quite simple. I must check a bit, and when it is on I have to read the data from the PLC and then clear the bit. With the library mentioned above I can read and write without problems, but the data I have to read is stored in a group of bytes (about 60 bytes), so I have to read some bytes, skip some others, and read other bytes. Moreover, the bit I must test is at the end of this group of bytes.
So I read the entire group of bytes, put the data read into a group of variables, and then test the bit; if it is on, I store the data in the database.
To skip the bytes I don't need to read, I use statements like these:
// Skip 14 bytes we don't need
for i := 1 to 14 do
  daveGetU8(dc);
// Skip 6 words (12 more bytes)
for i := 1 to 6 do
  daveGetU16(dc);
My questions are these:
Is there a better way to read the data, skipping the bytes I don't need to read?
Is it more efficient to read the entire group of bytes and then test the bit, or is it better to do two separate reads?
I ask because I've read on the internet that read operations take some time, so it's better to perform as few reads as possible.
Eros
Communicating with a PLC involves some overhead.
You send a request and after some time you receive an answer.
Often the communication is through a serial line with limited bandwidth.
The timing then involves:
Time to send the request
Time for the PLC to respond
Time to transfer the response
It is difficult to give a definite answer to your questions, since we don't know how critical the timing is.
Anyway, polling only the flag byte seems like a reasonable way to go.
When the flag is set, read the entire block in one command and then clear the flag.
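A minimal sketch of that loop, in Java since only the structure matters here; readFlag, readBlock, clearFlag, and store are hypothetical wrappers around whatever libnodave binding you use, not the library's real API:

    public class PlcPoller {
        // Hypothetical wrappers around the PLC library calls.
        interface Plc {
            boolean readFlag();       // read just the flag bit (cheap)
            byte[] readBlock();       // read the whole ~60-byte block in one command
            void clearFlag();         // set the flag bit back to 0
            void store(byte[] block); // archive the block to the database
        }

        static void poll(Plc plc) throws InterruptedException {
            while (true) {
                if (plc.readFlag()) {               // poll the flag only
                    byte[] block = plc.readBlock(); // one full read when the flag is set
                    plc.clearFlag();                // tell the PLC the data was consumed
                    plc.store(block);
                }
                Thread.sleep(100); // poll interval; tune to your timing needs
            }
        }
    }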
Reading the data in small parts to avoid the gaps is probably more time-consuming than reading the entire block at once.
You can do the maths yourself, since you know the specifications.
Example:
Let's say the baud rate is 9600 baud. That means roughly 1 byte per millisecond of transfer time. The read command is about 10 bytes long and the block answer about 70 bytes (assuming the protocol is binary), so about 80 ms of transfer time; add a PLC response delay of about 50 ms and the block read takes about 130 ms. Reading only the flag, with its much shorter answer, adds up to about 70 ms.
Only you can say if the additional polling time of 70 ms is acceptable.
Edit: In a comment it is stated that the communication is via Ethernet on a 100+ Mbit/s line. In that case, I suggest reading all the data in one command and processing it in the PC. Timing is of little concern with that much bandwidth.
So I'm curious: is there any info on a router's config page or in its manual, or is there any way to measure its buffer? (I'm talking about the "memory" it has to keep packets in until it can transmit them.)
You should be able to measure your outgoing buffer by sending lots of (UDP) data out, as quickly as possible, and see how much goes out before data starts getting dropped. You will need to send it really fast, and have something at the other end to capture it as it comes in; your send speed has to be a lot faster than your receive speed.
Your buffer will be a little smaller than that, since in the time it takes you to send the data, at least the first packet will have left the router.
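For example, here is a rough sketch of the sending side in Java; the host, port, packet size, and count are assumptions to adapt, and you would pair it with a receiver that logs which sequence numbers actually arrive:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class BufferProbe {
        public static void main(String[] args) throws Exception {
            InetAddress target = InetAddress.getByName("receiver.example.com");
            DatagramSocket socket = new DatagramSocket();
            byte[] payload = new byte[1400]; // stay below a typical MTU
            for (int seq = 0; seq < 10000; seq++) {
                payload[0] = (byte) (seq >> 8); // 16-bit sequence number in the
                payload[1] = (byte) seq;        // first two bytes of each packet
                socket.send(new DatagramPacket(payload, payload.length, target, 9999));
            }
            socket.close();
        }
    }

The first missing sequence number on the receiving side gives a rough idea of where the buffer filled up.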
Note that you will be measuring the smallest buffer between you and the remote end; if you have, for example, a firewall and a router in two separate devices, you don't really know which buffer you are testing.
Your incoming buffer is a lot more difficult to test, since you can't fill it from the internet side: the data is delivered onward too quickly for the buffer to fill up.
In my application I need to read data from an input stream. I have set the read buffer size to 1024 bytes. But I have seen that some Android applications use a buffer size of 8192 (8 KB). Will there be any specific advantage if I increase the buffer size in my application to 8 KB?
Any expert opinion will be much appreciated.
Edit: I am using BB OS 6 and 7, and I am dealing with a network input stream.
I can't say that I've found the universally best buffer size, but it seems to me that something in the range of 1KB to 8KB should be fine in most situations (for BlackBerry Java apps).
Keep in mind that if the amount of data is small (so you'd probably only need one or two buffers at 1KB-8KB), it's probably best just to use the IOUtilities method:
byte[] result = IOUtilities.streamToBytes(inputStream);
with which you don't need to actually pick a buffer size. But, if you know that result would be a large block of data, you're probably right in wanting to read one buffer at a time.
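If you do read one buffer at a time, the usual pattern is a plain read loop; here is a minimal sketch, with 8192 standing in for whatever buffer size you settle on:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class StreamReader {
        static byte[] readAll(InputStream in) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];       // the buffer size in question
            int n;
            while ((n = in.read(buffer)) != -1) { // read up to 8 KB per call
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        }
    }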
However, I would argue that the answer should almost always be obtained simply by building the app, and measuring performance with a few different values for byte buffer size. It's easy enough to change one constant, build, run and measure again, and then you're not guessing, or taking the advice of someone who doesn't know all the details of your app.
See here for information about BlackBerry Eclipse plugin memory analysis, and here for BlackBerry Eclipse plugin profiling.
These tools are found in Eclipse by selecting the Window menu, then Show View -> Other... -> BlackBerry -> BlackBerry Memory Statistics View, or BlackBerry Profiler View, while debugging.
This way, you can see how much memory, or processor, the network code is using during the call to retrieve data and populate your buffer.
More
BlackBerry InputStream to String conversion
This question was also asked in the official BlackBerry forum here:
http://supportforums.blackberry.com/t5/Java-Development/What-is-the-best-size-for-a-buffer-in-BlackBerry/td-p/2559417
The OP gave this clarification:
"I am reading from network. Once I establish socket connection with the server, the server will send me notifications one after the other. I need to read the notifications/data from the inputstream available in the socket connection. For this I have a background thread which checks anything is available in the inputstream and if something is available, it will read with the help of a buffer and then passes the read data to a StringBuffer."
Given this information, I have a different take, in that I think the BlackBerry network handling abstracts the Java application from the network buffer processing to the extent that the application buffer size will have little if any impact on the performance.
But be aware, this is only my opinion.
My response on that thread was as follows:
First thing to note is that the method "isAvailable()", in my experience, does not work correctly on OS 5.0 and earlier. It is fixed in OS 6 (at least from my testing).
Because isAvailable() was broken (and for other application reasons), what I have implemented for a socket connection is that each message is preceded by a length. So in the socket connection, I read the length of the next message, and then the actual data. This is done with no blocking - in other words I read the entire message, regardless of size. I recommend you do the same. The message must exist in full somewhere, so it makes no difference whether it is in some memory managed by the socket connection or in some memory managed by you.
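A minimal sketch of that length-prefixed scheme in plain Java (the 4-byte length prefix is an assumption; use whatever your protocol defines):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class MessageReader {
        static byte[] readMessage(InputStream in) throws IOException {
            DataInputStream din = new DataInputStream(in);
            int length = din.readInt();       // length of the next message
            byte[] message = new byte[length];
            din.readFully(message);           // waits until the full message has arrived
            return message;
        }
    }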
Note also, until OS 6.0, when you did the read you would get all the data to fill the buffer you had - in other words it waited till the buffer was full. In OS 6.0 and later, the read can complete without giving you a full buffer.
In your case, you might be working with post-OS 6.0 devices only, so you could use isAvailable() - create a buffer of that size, and read everything. I can't see that it makes any difference whether you have the bytes in memory managed by the socket or in memory managed by you.
But in fact, I would argue that the best approach is the one that makes your processing simplest. So for example, if you know that the next message is 200 bytes, then read 200 bytes, and then process that message. Then read the next message.
You could spend a lot of time attempting to manage the buffers to match the underlying socket buffers. I don't know exactly how the underlying BlackBerry socket processing code works, but it doesn't put data directly into your buffers. So let it manage its buffer size to optimize the network, you manage your buffer size to optimize your processing. That will work best for everyone.
I heard the word "buffer" today after a long time, and I'm wondering if somebody can give a good overview of buffers and some examples of how they matter in today's world.
A buffer is generally a portion of memory that holds data that has not yet been fully committed to its intended device. In buffered I/O there is typically a fast device and a slow device. The devices themselves need not have disparate speeds; perhaps the interfaces between them differ, or perhaps producing the data takes more time than consuming it (or vice versa).
The idea is that you temporarily store the generated data in a buffer so that it is not lost when the slower device isn't ready to handle it. Once the device is ready, another buffer may take the current buffer's place, and the consuming device will process the data in the first buffer.
In this manner, the slower device receives the data at a moderated pace rather than the fire-hose that the original data source can be.
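A concrete everyday example is buffered file I/O in Java: each write lands in an in-memory buffer, and the slower disk only sees the data in larger batches:

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class BufferDemo {
        public static void main(String[] args) throws IOException {
            // Each write() lands in an 8 KB in-memory buffer; the disk only
            // sees data when the buffer fills or is explicitly flushed.
            BufferedOutputStream out = new BufferedOutputStream(
                    new FileOutputStream("demo.txt"), 8192);
            for (int i = 0; i < 100000; i++) {
                out.write('x'); // cheap: goes to memory, not straight to disk
            }
            out.close(); // flushes whatever is left in the buffer
        }
    }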