Unpack byte array from BLE packet - iOS

I'm developing an app using BLE where the iPhone is the peripheral and responds to write requests of type CBATTRequest from the central.
My understanding is that request.value (of type NSData) holds a byte array that I can unpack to read the packet number and the other fields. Given the size (in octets) and position of each field, how can I unpack and read each value, conceptually and technically? And how would I go about constructing/packing this same byte array as if I were preparing to send this request? I will have to pack data in the same manner for the response.

When you receive the data, it's probably in a CBATTRequest. The data is contained in the member value of type NSData, and the member length gives the length in bytes/octets.
CBATTRequest* request = ...;
NSData* value = request.value;
NSUInteger packetLen = value.length;
It then makes sense to cast this to a struct that corresponds to the structure of the packet:
typedef struct {
    unsigned char pktNo;
    unsigned char ctrlCmd;
    unsigned char txPowerRequest;
    unsigned char uuid[2];
    unsigned char txCnt;
    unsigned char userPayload[14];
} Packet;

const Packet* packet = (const Packet*)value.bytes;
Note that the packet is of variable length, so only part of userPayload is valid. The fixed fields before the payload (pktNo through txCnt) occupy 6 bytes, so the valid payload length is:
NSUInteger userPayloadLength = packetLen - 6;
Now you can easily access the members:
int packetNumber = packet->pktNo;
To construct a similar packet, you proceed the same way in reverse:
Packet response;
response.pktNo = ...;
response.ctrlCmd = ...;
int userPayloadLength = 5;
NSData* value = [NSData dataWithBytes:&response length:userPayloadLength + 6];
Bit 4 to 0 set to 0x01 for..
This most likely refers to a single octet, e.g. to ctrlCmd. To test it:
if (((packet->ctrlCmd >> 0) & 0x1f) == 0x01) ...
0x1f is the bit mask for 5 consecutive bits (bits 0 to 4). >> 0 doesn't do anything here but would be required if the bits were shifted; e.g. for a 5-bit field in bits 2 to 6 you would shift right by 2 before masking.
A typical UUID is 16 bytes long, so I assume byte indexes 13 and 12 refer to bytes 12 and 13 within a 16-byte UUID (as only two bytes are transmitted). The remaining bytes are probably fixed to the base Bluetooth UUID:
00000000-0000-1000-8000-00805F9B34FB
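If that is the case, here is a minimal Swift sketch of expanding the two transmitted bytes back into a full 128-bit UUID string; the byte order and substitution position are assumptions based on the standard short-UUID convention, not taken from your spec:
// A sketch, assuming the two bytes form a 16-bit short UUID that is
// substituted into the base Bluetooth UUID as 0000xxxx-0000-1000-8000-00805F9B34FB.
func fullUUID(fromShortUUIDBytes hi: UInt8, _ lo: UInt8) -> String {
    let short = UInt16(hi) << 8 | UInt16(lo)   // byte order assumed
    return String(format: "0000%04X-0000-1000-8000-00805F9B34FB", short)
}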

Related

Convert bytes to signed integers in lua 5.1.5

I'm looking for how to turn bytes into a signed int using Lua 5.1.5; so far I've only been able to find solutions for Lua 5.2 onward, and they are not backward compatible.
I have solutions for how to turn bytes into unsigned integers, like so:
payload_t.temperature=tonumber(utility.hex2str(string.sub(payload,32,33)),16)
First of all, I'll assume that you actually have a byte string rather than a hex string; if your string is a hex string, you can trivially convert it to a byte string using gsub:
function hex2bytes(str)
    -- assert that it is indeed a string of hex digit pairs
    assert(#str % 2 == 0 and not str:match"[^%x]")
    -- string.char converts each pair's numeric value into the actual byte;
    -- the parentheses drop gsub's second return value (the match count)
    return (str:gsub("%x%x", function(hex) return string.char(tonumber(hex, 16)) end))
end
Now, let's convert this byte string to an integer. I'll assume little endian (least significant byte first); should your string be big endian (most significant byte first) you'll have to reverse it using str:reverse() before you read it.
Reading an unsigned integer is pretty straightforward:
function bytes2uint(str)
    local uint = 0
    for i = 1, #str do
        uint = uint + str:byte(i) * 0x100^(i-1)
    end
    return uint
end
I'll assume your integers are stored using two's complement. In this case the upper half of the 2^n possible values (those with the first bit set, i.e. values >= 2^(n-1)) represent negative numbers, with 2^(n-1) itself standing for the most negative value, -2^(n-1). To recover the signed value, you simply subtract 2^n, the (exclusive) maximum value for the uint, from the unsigned value:
function bytes2int(str)
    local uint = bytes2uint(str)
    local max = 0x100 ^ #str
    if uint >= max / 2 then
        return uint - max
    end
    return uint
end
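For example, for a two-byte string with bytes 0x2E 0x02 (little endian), bytes2uint gives 0x2E + 0x02 * 0x100 = 558, and since 558 < 32768 bytes2int returns it unchanged; for bytes 0xFF 0xFF the unsigned value is 65535 >= 32768, so bytes2int returns 65535 - 65536 = -1.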

Extract 4 bits vs 2 bits of Bluetooth HEX Data: why the same method results in an error

This is a follow-on question to Extract 4 bits of Bluetooth HEX Data, which has an accepted answer. I want to understand why the method I was using (example below), which works, does not work when applied to that question.
To decode Cycling Power data, the first 2 bytes are the flags field, which is used to determine what capabilities the power meter provides.
guard let characteristicData = characteristic.value else { return -1 }
var byteArray = [UInt8](characteristicData)
// This is the output from the Sensor (In Decimal and Hex)
// DEC [35, 0, 25, 0, 96, 44, 0, 33, 229] Hex:{length = 9, bytes = 0x23001900602c0021e5} FirstByte:100011
// The first 2 bytes are the flags field
let flags = byteArray[1]<<8 + byteArray[0]
This concatenates the first 2 bytes into the flags value. After that, I mask the flags value to get the relevant bit position;
e.g. to get power balance, I do (flags & 0x01) > 0.
This method works and I'm a happy camper.
However, why is it that when I use this same method on Extract 4 bits of Bluetooth HEX Data it does not work? This is decoding Bluetooth FTMS data (different from the above).
guard let characteristicData = characteristic.value else { return -1 }
let byteArray = [UInt8](characteristicData)
let nsdataStr = NSData.init(data: (characteristic.value)!)
print("pwrFTMS 2ACC Feature Array:[\(byteArray.count)]\(byteArray) Hex:\(nsdataStr)")
PwrFTMS 2ACC Feature Array:[8][2, 64, 0, 0, 8, 32, 0, 0] Hex:{length = 8, bytes = 0x0240000008200000}
Based on the specs, the returned data has 2 characteristics, each of them 4 octets long. Doing
byteArray[3]<<24 + byteArray[2]<<16 + byteArray[1]<<8 + byteArray[0]
to join the first 4 bytes results in a wrong output to start the decoding.
edit: Added clarification
There is a problem with this code that you say works... it seems to work only accidentally:
let flags = byteArray[1]<<8 + byteArray[0]
This results in a UInt8, but the flags field in the first table is 16 bits. Note that byteArray[1] << 8 always evaluates to 0, because you are shifting all of the byte's bits out of the byte. It appeared to work because the only bit you were interested in happened to be in byteArray[0].
So you need to convert the bytes to a 16-bit (or larger) type first and then shift:
let flags = (UInt16(byteArray[1]) << 8) + UInt16(byteArray[0])
Now flags is a UInt16.
Similarly, when you combine 4 bytes, you need them to be 32-bit values before you shift. So
let flags = UInt32(byteArray[3]) << 24 +
            UInt32(byteArray[2]) << 16 +
            UInt32(byteArray[1]) << 8 +
            UInt32(byteArray[0])
but since that's just reading a 32-bit value from a sequence of bytes in little-endian byte order, and all current Apple devices (like the vast majority of modern computers) are little-endian machines, here is an easier way:
let flags = byteArray.withUnsafeBytes {
    $0.bindMemory(to: UInt32.self)[0]
}
In summary, in both cases you had been preserving only byte 0 in your shift-add, because the other shifts all evaluated to 0 after shifting the bits completely out of the byte. It just so happened that in the first case byte 0 contained the information you needed. In general, you must first promote the value to the size you need for the result, and then shift it.
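One caveat worth noting: bindMemory assumes the underlying buffer is suitably aligned for UInt32, which happens to hold for an array's own storage but is an easy trap with arbitrary byte offsets. Here is a sketch of an alternative (requires Swift 5.7 for loadUnaligned) that spells out both the alignment and the byte order instead of relying on the host being little endian:
let flags = byteArray.withUnsafeBytes { raw in
    // Read 4 bytes regardless of alignment, then normalize the byte
    // order explicitly rather than assuming a little-endian host.
    UInt32(littleEndian: raw.loadUnaligned(fromByteOffset: 0, as: UInt32.self))
}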

Converting byte value correctly

I am having a hard time getting the correct value that I need.
I get my characteristic values from:
func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor ...
I can read and print off the values with:
let values = characteristic.value
for val in values! {
    print("Value", val)
}
This gets me:
"Value 0" // probe state not important
"Value 46" // temp
"Value 2" // see below
The problem is that the temp is not 46.
Below is a snippet of the instructions on how I need to convert the bytes to get the actual temp.
The actual temp was around 558 ºF.
Here are a part of the instructions:
Description: temperature data that is valid only if the temperature stat is normal
byte[1] = (unsigned char)temp;
byte[2] = (unsigned char)(temp>>8);
byte[3] = (unsigned char)(temp>>16);
byte[4] = (unsigned char)(temp>>24);
I can't seem to get the correct temp. Please let me know what I am doing wrong.
According to the description, value[1] ... value[4] are the least significant to most significant bytes of the (32-bit integer) temperature, so this is how you would recreate that value from the bytes:
if let value = characteristic.value, value.count >= 5 {
    let tmp = UInt32(value[1]) + UInt32(value[2]) << 8 + UInt32(value[3]) << 16 + UInt32(value[4]) << 24
    let temperature = Int32(bitPattern: tmp)
}
The bit-fiddling is done in unsigned integer arithmetic to avoid an overflow. Assuming that the temperature is a signed value, this value is then converted to a signed integer with the same bit representation.
The instructions tell you the answer. You are getting 46 in byte 1 and 2 in byte 2. The instructions say to leave byte 1 alone, but byte 2 was packed as temp>>8, so to unpack it you multiply by 256 (because 2^8 is 256). Well, what is
46 + 256 × 2?
It is 558, just the result we're looking for.
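Putting the two answers together, here is a minimal sketch using the sample values from the question; copying into a [UInt8] array sidesteps the pitfall that a Data slice keeps its original indices, so the subscripts are always zero-based:
// A sketch, assuming the probe sent [0, 46, 2, 0, 0] as in the question.
let bytes = [UInt8](characteristic.value ?? Data())   // zero-based copy
if bytes.count >= 5 {
    let raw = UInt32(bytes[1])
            | UInt32(bytes[2]) << 8
            | UInt32(bytes[3]) << 16
            | UInt32(bytes[4]) << 24
    let temperature = Int32(bitPattern: raw)          // 46 + 2 * 256 = 558
}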

What is the meaning of this Queue Property (iOS Audio Queues)?

I want to write a player to play music. I see code like the following:
UInt32 size = 0;
AudioFileGetPropertyInfo(audioFile,
    kAudioFilePropertyMagicCookieData, &size, nil);
if (size > 0) {
    char* cookie = malloc(size);
    AudioFileGetProperty(audioFile,
        kAudioFilePropertyMagicCookieData, &size, cookie);
    AudioQueueSetProperty(audioQueue,
        kAudioQueueProperty_MagicCookie, cookie, size);
    free(cookie);
}
I don't know why the audio queue property needs to be set, or what kAudioQueueProperty_MagicCookie means. I can't find help in the documentation. Can someone point me in the right direction?
Actually, the magic cookie is more than just a signature: it holds some information about the encoder. The most useful items are "Maximum Bit Rate" and "Average Bit Rate", especially for a compressed format like kAudioFileMPEG4Type. For this specific type the magic cookie is the same as the "esds" box in an MPEG-4 data file. You can find the exact bit settings at:
http://xhelmboyx.tripod.com/formats/mp4-layout.txt
8+ bytes vers. 2 ES Descriptor box
= long unsigned offset + long ASCII text string 'esds'
- if encoded to ISO/IEC 14496-10 AVC standards then optionally use:
= long unsigned offset + long ASCII text string 'm4ds'
-> 4 bytes version/flags = 8-bit hex version + 24-bit hex flags
(current = 0)
-> 1 byte ES descriptor type tag = 8-bit hex value 0x03
-> 3 bytes extended descriptor type tag string = 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length = 8-bit unsigned length
-> 2 bytes ES ID = 16-bit unsigned value
-> 1 byte stream priority = 8-bit unsigned value
- Defaults to 16 and ranges from 0 through to 31
-> 1 byte decoder config descriptor type tag = 8-bit hex value 0x04
-> 3 bytes extended descriptor type tag string = 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length = 8-bit unsigned length
-> 1 byte object type ID = 8-bit unsigned value
- type IDs are system v1 = 1 ; system v2 = 2
- type IDs are MPEG-4 video = 32 ; MPEG-4 AVC SPS = 33
- type IDs are MPEG-4 AVC PPS = 34 ; MPEG-4 audio = 64
- type IDs are MPEG-2 simple video = 96
- type IDs are MPEG-2 main video = 97
- type IDs are MPEG-2 SNR video = 98
- type IDs are MPEG-2 spatial video = 99
- type IDs are MPEG-2 high video = 100
- type IDs are MPEG-2 4:2:2 video = 101
- type IDs are MPEG-4 ADTS main = 102
- type IDs are MPEG-4 ADTS Low Complexity = 103
- type IDs are MPEG-4 ADTS Scalable Sampling Rate = 104
- type IDs are MPEG-2 ADTS = 105 ; MPEG-1 video = 106
- type IDs are MPEG-1 ADTS = 107 ; JPEG video = 108
- type IDs are private audio = 192 ; private video = 208
- type IDs are 16-bit PCM LE audio = 224 ; vorbis audio = 225
- type IDs are dolby v3 (AC3) audio = 226 ; alaw audio = 227
- type IDs are mulaw audio = 228 ; G723 ADPCM audio = 229
- type IDs are 16-bit PCM Big Endian audio = 230
- type IDs are Y'CbCr 4:2:0 (YV12) video = 240 ; H264 video = 241
- type IDs are H263 video = 242 ; H261 video = 243
-> 6 bits stream type = 3/4 byte hex value
- type IDs are object descript. = 1 ; clock ref. = 2
- type IDs are scene descript. = 3 ; visual = 4
- type IDs are audio = 5 ; MPEG-7 = 6 ; IPMP = 7
- type IDs are OCI = 8 ; MPEG Java = 9
- type IDs are user private = 32
-> 1 bit upstream flag = 1/8 byte hex value
-> 1 bit reserved flag = 1/8 byte hex value set to 1
-> 3 bytes buffer size = 24-bit unsigned value
-> 4 bytes maximum bit rate = 32-bit unsigned value
-> 4 bytes average bit rate = 32-bit unsigned value
-> 1 byte decoder specific descriptor type tag
= 8-bit hex value 0x05
-> 3 bytes extended descriptor type tag string
= 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length
= 8-bit unsigned length
-> ES header start codes = hex dump
-> 1 byte SL config descriptor type tag = 8-bit hex value 0x06
-> 3 bytes extended descriptor type tag string = 3 * 8-bit hex value
- types are Start = 0x80 ; End = 0xFE
- NOTE: the extended start tags may be left out
-> 1 byte descriptor type length = 8-bit unsigned length
-> 1 byte SL value = 8-bit hex value set to 0x02
"
The magic cookie that comes from kAudioFilePropertyMagicCookieData starts at the ES descriptor: just ignore the first 4 bytes described in the map and the rest will be an exact match to the magic cookie.
A sample magic cookie would be like this:
03 80 80 80 22 00 00 00 04 80 80 80 14 40 15 00 18 00 00 00 FA 00 00 00 FA 00 05 80 80 80 02 12 08 06 80 80 80 01 02
Maximum bit rate is at offset 18 -> 0xFA00 (or 64,000)
Average bit rate is at offset 22 -> 0xFA00 (or 64,000)
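As a sketch of how you might pull those two fields out in code (Swift here; the offsets are specific to this cookie's layout, and MPEG-4 descriptor fields are big-endian):
// A sketch: read a big-endian UInt32 at a byte offset of the sample cookie above.
let cookie: [UInt8] = [0x03, 0x80, 0x80, 0x80, 0x22, 0x00, 0x00, 0x00,
                       0x04, 0x80, 0x80, 0x80, 0x14, 0x40, 0x15, 0x00,
                       0x18, 0x00, 0x00, 0x00, 0xFA, 0x00, 0x00, 0x00,
                       0xFA, 0x00, 0x05, 0x80, 0x80, 0x80, 0x02, 0x12,
                       0x08, 0x06, 0x80, 0x80, 0x80, 0x01, 0x02]

func bigEndianUInt32(in bytes: [UInt8], at offset: Int) -> UInt32 {
    return (0..<4).reduce(UInt32(0)) { ($0 << 8) | UInt32(bytes[offset + $1]) }
}

let maxBitRate = bigEndianUInt32(in: cookie, at: 18)   // 64000
let avgBitRate = bigEndianUInt32(in: cookie, at: 22)   // 64000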
According to the Apple documentation the magic cookie is read/write, but I had no luck changing the bit rate before creating or converting files.
Hope that helps someone.
The "magic cookie" is a file type signature consisting of a unique sequence of bytes at the beginning of the file, indicating the file format. The audio queue framework uses this information to determine how to decode or extract audio information from a file stream (instead of using or trusting the file name extension). The code you posted reads this set of bytes from the file, and passes it to the audio queue as a cookie. (It would be a mistake to let them be interpreted as PCM samples instead, for instance).

PGMidi changing pitch sendBytes example

I've been trying for two days to send a MIDI signal. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb};
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at the events using a MIDI tool I see a min of -8192 and a max of +8064, but I want to get -8192 to +8191.
Sorry if the question is simple.
Pitch bend data is offset to avoid any sign bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that, something like this Python code:
def EncodePitchBend(value):
    ''' return a 2-tuple containing (msb, lsb) '''
    if (value < -8192) or (value > 8191):
        raise ValueError
    value += 8192
    return (((value >> 7) & 0x7F), (value & 0x7F))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as @bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is at 8192 (0x2000), so I edited my answer to offset pitchValue by 8192.
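A minimal Swift sketch of the same encoding; note that in a raw MIDI pitch-bend message the status byte is followed by the LSB and then the MSB, so the array in the question would send the two data bytes in the wrong order:
// A sketch: encode a pitch-bend value in -8192...8191 as a 3-byte MIDI
// message. Adding 8192 maps the range to 0...16383, so the center
// position becomes 0x2000; each data byte carries only 7 bits.
func pitchBendMessage(_ value: Int, channel: UInt8 = 0) -> [UInt8] {
    precondition((-8192...8191).contains(value), "pitch bend out of range")
    let offset = UInt16(value + 8192)            // 0...16383
    let lsb = UInt8(offset & 0x7F)               // low 7 bits
    let msb = UInt8((offset >> 7) & 0x7F)        // high 7 bits
    return [0xE0 | (channel & 0x0F), lsb, msb]   // status, LSB, MSB
}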
