I'm developing an app using BLE where the iPhone is the peripheral and responds to write requests (of type CBATTRequest) from the central.
My understanding is that the request carries a byte array, accessible via request.value as NSData, that I can unpack to read the packet number and other fields. Given the size (in octets) and position of each field, how can I unpack and read each value, both conceptually and technically? And how would I go about constructing/packing this same byte array as if I were preparing to send this request? I will have to pack data in the same manner for the response.
When you receive the data, it's probably in a CBATTRequest. The data is contained in its value property of type NSData; the length property tells you the length in bytes/octets.
CBATTRequest* request = ...;
NSData* value = request.value;
NSUInteger packetLen = value.length;
It then makes sense to cast this to a struct that corresponds to the structure of the packet:
struct Packet {
    unsigned char pktNo;
    unsigned char ctrlCmd;
    unsigned char txPowerRequest;
    unsigned char uuid[2];
    unsigned char txCnt;
    unsigned char userPayload[14];
};
const struct Packet* packet = (const struct Packet*)value.bytes;
Note that packet is of variable length. So only part of the userPayload is valid. The valid length is:
int userPayloadLength = packetLen - 6;
Now you can easily access the members:
int packetNumber = packet->pktNo;
To construct a similar packet, you take the reverse approach:
struct Packet response;
response.pktNo = ...;
response.ctrlCmd = ...;
int userPayloadLength = 5;
NSData* value = [NSData dataWithBytes: &response length: userPayloadLength + 6];
Bit 4 to 0 set to 0x01 for..
This most likely is relative to a single octet, e.g. to ctrlCmd. To test it:
if (((packet->ctrlCmd >> 0) & 0x1f) == 0x01) ...
0x1f is the bit mask for 5 consecutive bits set (bit 0 to bit 4). >> 0 doesn't do anything here but would be required if the bits were shifted, e.g. for bits 2 to 6 you would need to shift right by 2.
A typical UUID is 16 bytes long. So I assume byte index 13 & 12 refers to bytes 12 and 13 within a 16 byte UUID (as only two bytes are transmitted). The remaining bytes are probably fixed to the base Bluetooth UUID:
00000000-0000-1000-8000-00805F9B34FB
I am having a hard time getting the correct value that I need.
I get my characteristic values from:
func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor ...
I can read and print off the values with:
let values = characteristic.value
for val in values! {
    print("Value", val)
}
This gets me:
"Value 0" // probe state not important
"Value 46" // temp
"Value 2" // see below
The problem is that the temp is not 46.
Below is a snippet of instructions on how I need to convert the byte to get the actual temp.
The actual temp was around 558 ºF.
Here are a part of the instructions:
Description: temperature data that is valid only if the temperature stat is normal
byte[1] = (unsigned char)temp;
byte[2] = (unsigned char)(temp>>8);
byte[3] = (unsigned char)(temp>>16);
byte[4] = (unsigned char)(temp>>24);
I can't seem to get the correct temp. Please let me know what I am doing wrong.
According to the description, value[1] ... value[4] are the least significant to most significant bytes of the (32-bit integer) temperature, so this is how you would recreate
that value from the bytes:
if let value = characteristic.value, value.count >= 5 {
    let tmp = UInt32(value[1]) + (UInt32(value[2]) << 8) + (UInt32(value[3]) << 16) + (UInt32(value[4]) << 24)
    let temperature = Int32(bitPattern: tmp)
}
The bit-fiddling is done in unsigned integer arithmetic to avoid
an overflow. Assuming that the temperature is a signed value,
this value is then converted to a signed integer with the same
bit representation.
The instructions tell you the answer. You are getting 46 in byte 1 and 2 in byte 2. The instructions say to leave byte 1 alone, but byte 2 was encoded as temp>>8, so to decode it you multiply by 256 (because 2^8 is 256). Well, what is
46 + 256 × 2?
It is 558, just the result we're looking for.
I have an array of UInts containing 16 elements and I need to convert it to a Data object of 16 bytes.
I am using the below code to convert, but it is converting it to 128 bytes instead of 16 bytes.
let numbers = stride(from: 0, to: salt.length, by: 2).map {
    strtoul(String(chars[$0 ..< min($0 + 2, chars.count)]), nil, 16)
}
/* numbers is a [UInt] array */
let data = Data(buffer: UnsafeBufferPointer(start: numbers, count: numbers.count))
/*Data returns 128 bytes instead of 16 bytes*/
Please correct me as to what I am doing wrong.
You can't convert 16 UInts to 16 bytes. A UInt is 8 bytes long on a 64 bit device, or 4 bytes on a 32 bit device. You need to use an array of UInt8s.
If you have an array of UInts as input you can't cast them to UInt8, but you can convert them:
let array: [UInt] = [1, 2, 3, 123, 255]
let array8Bit: [UInt8] = array.map{UInt8($0)}
I've stumbled onto an odd NSDecimalNumber behavior: for some values, invocations of integerValue, longValue, longLongValue, etc., return an unexpected value. Example:
let v = NSDecimalNumber(string: "9.821426272392280061")
v // evaluates to 9.821426272392278
v.intValue // evaluates to 9
v.integerValue // evaluates to -8
v.longValue // evaluates to -8
v.longLongValue // evaluates to -8
let v2 = NSDecimalNumber(string: "9.821426272392280060")
v2 // evaluates to 9.821426272392278
v2.intValue // evaluates to 9
v2.integerValue // evaluates to 9
v2.longValue // evaluates to 9
v2.longLongValue // evaluates to 9
This is using Xcode 7.3; I haven't tested using earlier versions of the frameworks.
I've seen a bunch of discussion about unexpected rounding behavior with NSDecimalNumber, as well as admonishments not to initialize it with the inherited NSNumber initializers, but I haven't seen anything about this specific behavior. Nevertheless there are some rather detailed discussions about internal representations and rounding which may contain the nugget I seek, so apologies in advance if I missed it.
EDIT: It's buried in the comments, but I've filed this as issue #25465729 with Apple. OpenRadar: http://www.openradar.me/radar?id=5007005597040640.
EDIT 2: Apple has marked this as a dup of #19812966.
Since you already know the problem is due to "too high precision", you could workaround it by rounding the decimal number first:
let b = NSDecimalNumber(string: "9.999999999999999999")
print(b, "->", b.int64Value)
// 9.999999999999999999 -> -8
let truncateBehavior = NSDecimalNumberHandler(roundingMode: .down,
scale: 0,
raiseOnExactness: true,
raiseOnOverflow: true,
raiseOnUnderflow: true,
raiseOnDivideByZero: true)
let c = b.rounding(accordingToBehavior: truncateBehavior)
print(c, "->", c.int64Value)
// 9 -> 9
If you want to use int64Value (i.e. -longLongValue), avoid using numbers with more than 62 bits of precision, i.e. avoid having more than 18 digits in total. Reasons explained below.
NSDecimalNumber is internally represented as a Decimal structure:
typedef struct {
signed int _exponent:8;
unsigned int _length:4;
unsigned int _isNegative:1;
unsigned int _isCompact:1;
unsigned int _reserved:18;
unsigned short _mantissa[NSDecimalMaxSize]; // NSDecimalMaxSize = 8
} NSDecimal;
This can be obtained using .decimalValue, e.g.
let v2 = NSDecimalNumber(string: "9.821426272392280061")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -18 (30717, 39329, 46888, 34892, 0, 0, 0, 0) 4
This means 9.821426272392280061 is internally stored as 9821426272392280061 × 10^(-18) — note that 9821426272392280061 = 34892 × 65536^3 + 46888 × 65536^2 + 39329 × 65536 + 30717.
Now compare with 9.821426272392280060:
let v2 = NSDecimalNumber(string: "9.821426272392280060")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -17 (62054, 3932, 17796, 3489, 0, 0, 0, 0) 4
Note that the exponent is reduced to -17, meaning the trailing zero is omitted by Foundation.
Knowing the internal structure, I now make a claim: the bug is because 34892 ≥ 32768. Observe:
let a = NSDecimalNumber(decimal: Decimal(
_exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
_mantissa: (65535, 65535, 65535, 32767, 0, 0, 0, 0)))
let b = NSDecimalNumber(decimal: Decimal(
_exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
_mantissa: (0, 0, 0, 32768, 0, 0, 0, 0)))
print(a, "->", a.int64Value)
print(b, "->", b.int64Value)
// 9.223372036854775807 -> 9
// 9.223372036854775808 -> -9
Note that 32768 × 65536^3 = 2^63 is the value just enough to overflow a signed 64-bit number. Therefore, I suspect that the bug is due to Foundation implementing int64Value as (1) convert the mantissa directly into an Int64, and then (2) divide by 10^|exponent|.
In fact, if you disassemble Foundation.framework, you will find that it is basically how int64Value is implemented (this is independent of the platform's pointer width).
But why isn't int32Value affected? Because internally it is just implemented as Int32(self.doubleValue), so no overflow issue can occur. Unfortunately a double only has 53 bits of precision, so Apple has no choice but to implement int64Value (which requires 64 bits of precision) without floating-point arithmetic.
I'd file a bug with Apple if I were you. The docs say that NSDecimalNumber can represent any value up to 38 digits long. NSDecimalNumber inherits those properties from NSNumber, and the docs don't explicitly say what conversion is involved at that point, but the only reasonable interpretation is that if the number is roundable to and representable as an Int, then you get the correct answer.
It looks to me like a bug in handling the sign-extension during the conversion somewhere, since intValue is 32-bit and integerValue is 64-bit (in Swift).
I've been trying for two days to send a MIDI signal. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb};
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at events using a MIDI tool I see min -8192 and max +8064. I want to get -8192 and +8191.
Sorry if question is simple.
Pitch bend data is offset to avoid any sign bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that, something like this Python code:
def EncodePitchBend(value):
''' return a 2-tuple containing (msb, lsb) '''
if (value < -8192) or (value > 8191):
raise ValueError
value += 8192
return (((value >> 7) & 0x7F), (value & 0x7f))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as @bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is at 8192 (0x2000), so I edited my answer to offset pitchValue by 8192.