I have an array of UInts containing 16 elements and I need to convert it to a Data object of 16 bytes.
I am using the below code to convert, but it is converting it to 128 bytes instead of 16 bytes.
let numbers = stride(from: 0, to: salt.length, by: 2).map {
    strtoul(String(chars[$0 ..< min($0 + 2, chars.count)]), nil, 16)
}
/* numbers is a [UInt] array */
let data = Data(buffer: UnsafeBufferPointer(start: numbers, count: numbers.count))
/* Data returns 128 bytes instead of 16 bytes */
Please correct me as to what I am doing wrong.
You can't convert 16 UInts to 16 bytes. A UInt is 8 bytes long on a 64-bit device, or 4 bytes on a 32-bit device. You need to use an array of UInt8s.
If you have an array of UInts as input you can't cast them to UInt8, but you can convert them:
let array: [UInt] = [1, 2, 3, 123, 255]
let array8Bit: [UInt8] = array.map { UInt8($0) }
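Putting both steps together for the original goal (a 16-byte Data from 16 parsed values), a minimal sketch — the sample values here are made up:

```swift
import Foundation

// Hypothetical 16 parsed values, each known to fit in a byte (0...255)
let numbers: [UInt] = [171, 205, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

let bytes = numbers.map { UInt8($0) } // traps at runtime if any value > 255
let data = Data(bytes)                // 16 bytes, not 128

print(data.count) // 16
```

Note that `UInt8($0)` traps if a value doesn't fit; use `UInt8(truncatingIfNeeded:)` if you would rather discard high bits.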
Related
This is a follow-on question to this SO question (Extract 4 bits of Bluetooth HEX Data), which has an accepted answer. I want to understand why the approach I was using (example below), which works, fails when applied to that question.
To decode Cycling Power data, the first 2 bytes are the flags, used to determine what capabilities the power meter provides.
guard let characteristicData = characteristic.value else { return -1 }
var byteArray = [UInt8](characteristicData)
// This is the output from the Sensor (In Decimal and Hex)
// DEC [35, 0, 25, 0, 96, 44, 0, 33, 229] Hex:{length = 9, bytes = 0x23001900602c0021e5} FirstByte:100011
/// First 2 bytes are the flags
let flags = byteArray[1]<<8 + byteArray[0]
This concatenates the first 2 bytes into the flags value. I then mask the flags value to test the relevant bit position.
eg: to get power balance, I do (flags & 0x01 > 0)
This method works and I'm a happy camper.
However, why is it that when I use this same method on SO Extract 4 bits of Bluetooth HEX Data it does not work? That question decodes Bluetooth FTMS data (different from the above).
guard let characteristicData = characteristic.value else { return -1 }
let byteArray = [UInt8](characteristicData)
let nsdataStr = NSData.init(data: (characteristic.value)!)
print("pwrFTMS 2ACC Feature Array:[\(byteArray.count)]\(byteArray) Hex:\(nsdataStr)")
PwrFTMS 2ACC Feature Array:[8][2, 64, 0, 0, 8, 32, 0, 0] Hex:{length = 8, bytes = 0x0240000008200000}
Based on the specs, the returned data has 2 characteristics, each of them 4 octet long.
doing
byteArray[3]<<24 + byteArray[2]<<16 + byteArray[1]<<8 + byteArray[0]
to join the first 4 bytes produces the wrong value to start the decoding from.
edit: Added clarification
There is a problem with this code that you say works... but it seems to work "accidentally":
let flags = byteArray[1]<<8 + byteArray[0]
This results in a UInt8, but the flags field in the first table is 16 bits. Note that byteArray[1] << 8 always evaluates to 0, because the shift moves every bit out of the 8-bit value. It appeared to work because the only bit you were interested in was in byteArray[0].
So you need to convert to a 16-bit (or larger) type first and then shift:
let flags = (UInt16(byteArray[1]) << 8) + UInt16(byteArray[0])
Now flags is UInt16
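To see the difference concretely, here is a small sketch with made-up byte values (the question's second byte is 0, which is exactly why the bug was invisible there):

```swift
let byteArray: [UInt8] = [0x23, 0x01] // hypothetical: second byte non-zero

// UInt8 arithmetic: << 8 shifts every bit out of the byte, so this is just byteArray[0]
let wrong = byteArray[1] << 8 + byteArray[0]                   // 35 (0x23)

// Promote to UInt16 first, then shift: both bytes survive
let flags = (UInt16(byteArray[1]) << 8) + UInt16(byteArray[0]) // 291 (0x0123)
```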
Similarly, when you combine 4 bytes, they need to be 32-bit values before you shift. So
let flags = UInt32(byteArray[3]) << 24
          + UInt32(byteArray[2]) << 16
          + UInt32(byteArray[1]) << 8
          + UInt32(byteArray[0])
But since that's just reading a 32-bit value from a sequence of bytes in little-endian byte order, and all current Apple devices (and the vast majority of modern computers) are little-endian machines, here is an easier way:
let flags = byteArray.withUnsafeBytes {
    $0.bindMemory(to: UInt32.self)[0]
}
In summary, in both cases, you had been only preserving byte 0 in your shift-add, because the other shifts all evaluated to 0 due to shifting the bits completely out of the byte. It just so happened that in the first case byte[0] contained the information you needed. In general, it's necessary to first promote the value to the size you need for the result, and then shift it.
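If you would rather spell out the byte order instead of relying on the host being little endian, one option (a sketch using the feature bytes from the question) is to load the raw value and convert it explicitly:

```swift
import Foundation

let byteArray: [UInt8] = [2, 64, 0, 0, 8, 32, 0, 0]

// First 4-octet field: load raw bits, then interpret them as little endian
let first = byteArray.withUnsafeBytes { $0.load(as: UInt32.self) }
let flags = UInt32(littleEndian: first) // no-op on little-endian hosts

// Second 4-octet field, at byte offset 4
let second = byteArray.withUnsafeBytes {
    UInt32(littleEndian: $0.load(fromByteOffset: 4, as: UInt32.self))
}

print(flags)  // 16386
print(second) // 8200
```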
I'm trying to implement Bluetooth FTMS (Fitness Machine).
guard let characteristicData = characteristic.value else { return -1 }
let byteArray = [UInt8](characteristicData)
let nsdataStr = NSData.init(data: (characteristic.value)!)
print("pwrFTMS 2ACC Feature Array:[\(byteArray.count)]\(byteArray) Hex:\(nsdataStr)")
Here is what's returned from the bleno server
PwrFTMS 2ACC Feature Array:[8][2, 64, 0, 0, 8, 32, 0, 0] Hex:{length = 8, bytes = 0x0240000008200000}
Based on the specs, the returned data has 2 characteristics, each of them 4 octet long.
I'm having trouble getting the 4 octets split so I can get it converted to binary and get the relevant Bits for decoding.
Part of the problem is that Swift removes the leading zeros. Hence, instead of getting 00 00 64 02, I'm getting 642. I tried the code below to pad with leading zeros, but since the result is formatted as a string, I can't convert it to binary using radix: 2.
let FTMSFeature = String(format: "%02x", byteArray[3]) +
                  String(format: "%02x", byteArray[2]) +
                  String(format: "%02x", byteArray[1]) +
                  String(format: "%02x", byteArray[0])
I've been banging my head on this for an entire day and went through multiple SO posts and Google searches to no avail.
How Can I convert:
From - [HEX] 00 00 40 02
To - [DEC] 16386
To - [BIN] 0100 0000 0000 0010
then I can get to Bit1 = 1 and Bit14 = 1
How can I convert:
From - [HEX] 00 00 40 02
To - [DEC] 16386
To - [BIN] 0100 0000 0000 0010
You can simply use the ContiguousBytes method withUnsafeBytes to load your bytes as a UInt32. Note that it will consume only as many bytes as the resulting type needs (4 bytes):
let byteArray: [UInt8] = [2, 64, 0, 0, 8, 32, 0, 0]
let decimal = byteArray.withUnsafeBytes { $0.load(as: UInt32.self) }
decimal // 16386
To convert from bytes to binary you just need to left-pad your resulting binary string with zeros. Note that your expected binary string has only 2 bytes, while a 32-bit unsigned integer has 4:
extension FixedWidthInteger {
    var binary: String {
        (0 ..< Self.bitWidth / 8).map {
            let byte = UInt8(truncatingIfNeeded: self >> ($0 * 8))
            let string = String(byte, radix: 2)
            return String(repeating: "0", count: 8 - string.count) + string
        }.reversed().joined(separator: " ")
    }
}
let binary = decimal.binary // "00000000 00000000 01000000 00000010"
To know if a specific bit is on or off you can do as follow:
extension UnsignedInteger {
    func bit<B: BinaryInteger>(at pos: B) -> Bool {
        precondition(0 ..< B(bitWidth) ~= pos, "invalid bit position")
        return (self & 1 << pos) > 0
    }
}
decimal.bit(at: 0) // false
decimal.bit(at: 1) // true
decimal.bit(at: 2) // false
decimal.bit(at: 3) // false
decimal.bit(at: 14) // true
If you need to get a value at a specific byte position, you can check this post.
byte 0: min_value (bits 0-3)
        max_value (bits 4-7)
Byte 0 should be the min and max values combined.
min and max values are both integers (in the 0-15 range).
I should convert them to 4-bit binary and combine them somehow? (how?)
E.g.
min_value=2 // 0010
max_value=3 // 0011
The result should be an Uint8, and the value: 00100011
You can use the shift left operator << to get the result you want:
result = ((min_value << 4) + max_value).toRadixString(2).padLeft(8, '0');
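That snippet is Dart (toRadixString/padLeft); since the rest of this page is Swift, the same idea in Swift might look like the sketch below. The variable names mirror the question, and per the expected result the min value lands in the high nibble:

```swift
let minValue: UInt8 = 2 // 0b0010
let maxValue: UInt8 = 3 // 0b0011

// min in the high nibble, max in the low nibble
let combined = (minValue << 4) | maxValue // 0b0010_0011 == 0x23

// splitting it back apart:
let high = combined >> 4   // 2
let low  = combined & 0x0F // 3
```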
I am trying to communicate with a Bluetooth laser tag gun that takes data in 20-byte chunks, which are broken down into 16-, 8- or 4-bit words. To do this, I made a UInt8 array and changed the values in there. The problem happens when I try to send the UInt8 array.
var bytes = [UInt8](repeating: 0, count: 20)
bytes[0] = commandID
if commandID == 240 {
commandID = 0
}
commandID += commandIDIncrement
print(commandID)
bytes[2] = 128
bytes[4] = UInt8(gunIDSlider.value)
print("Response: \(laserTagGun.writeValue(bytes, for: gunCControl, type: CBCharacteristicWriteType.withResponse))")
commandID is just a UInt8. This gives me the error, Cannot convert value of type '[UInt8]' to expected argument type 'Data', which I tried to solve by doing this:
var bytes = [UInt8](repeating: 0, count: 20)
bytes[0] = commandID
if commandID == 240 {
commandID = 0
}
commandID += commandIDIncrement
print(commandID)
bytes[2] = 128
bytes[4] = UInt8(gunIDSlider.value)
print("bytes: \(bytes)")
assert(bytes.count * MemoryLayout<UInt8>.stride >= MemoryLayout<Data>.size)
let data1 = UnsafeRawPointer(bytes).assumingMemoryBound(to: Data.self).pointee
print("data1: \(data1)")
print("Response: \(laserTagGun.writeValue(data1, for: gunCControl, type: CBCharacteristicWriteType.withResponse))")
With this, data1 just prints as 0 bytes, and I can see that laserTagGun.writeValue isn't actually doing anything by reading data from the other characteristics. How can I convert my UInt8 array to Data in Swift? Also, please let me know if there is a better way to handle 20 bytes of data than a UInt8 array. Thank you for your help!
It looks like you're trying hard to avoid a copy of the bytes. If not, just init a new Data with your bytes array:
let data2 = Data(bytes)
print("data2: \(data2)")
If you really want to avoid the copy, what about something like this?
let data1 = Data(bytesNoCopy: UnsafeMutableRawPointer(mutating: bytes), count: bytes.count, deallocator: .none)
print("data1: \(data1)")
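For packets this small, the copying route is usually the one to reach for. A sketch of the full flow — the field offsets are the question's, the command and gun ID values are made up, and the write call is commented out since it needs a live peripheral:

```swift
import Foundation

var bytes = [UInt8](repeating: 0, count: 20)
bytes[0] = 0xF0          // hypothetical command ID
bytes[2] = 128
bytes[4] = 7             // hypothetical gun ID from the slider

let data = Data(bytes)   // copies the 20 bytes
// laserTagGun.writeValue(data, for: gunCControl, type: .withResponse)

let roundTrip = [UInt8](data) // and back to an array, if needed
```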
I've stumbled onto an odd NSDecimalNumber behavior: for some values, invocations of integerValue, longValue, longLongValue, etc., return an unexpected value. Example:
let v = NSDecimalNumber(string: "9.821426272392280061")
v // evaluates to 9.821426272392278
v.intValue // evaluates to 9
v.integerValue // evaluates to -8
v.longValue // evaluates to -8
v.longLongValue // evaluates to -8
let v2 = NSDecimalNumber(string: "9.821426272392280060")
v2 // evaluates to 9.821426272392278
v2.intValue // evaluates to 9
v2.integerValue // evaluates to 9
v2.longValue // evaluates to 9
v2.longLongValue // evaluates to 9
This is using Xcode 7.3; I haven't tested with earlier versions of the frameworks.
I've seen a bunch of discussion about unexpected rounding behavior with NSDecimalNumber, as well as admonishments not to initialize it with the inherited NSNumber initializers, but I haven't seen anything about this specific behavior. Nevertheless there are some rather detailed discussions about internal representations and rounding which may contain the nugget I seek, so apologies in advance if I missed it.
EDIT: It's buried in the comments, but I've filed this as issue #25465729 with Apple. OpenRadar: http://www.openradar.me/radar?id=5007005597040640.
EDIT 2: Apple has marked this as a dup of #19812966.
Since you already know the problem is due to "too high precision", you could workaround it by rounding the decimal number first:
let b = NSDecimalNumber(string: "9.999999999999999999")
print(b, "->", b.int64Value)
// 9.999999999999999999 -> -8
let truncateBehavior = NSDecimalNumberHandler(roundingMode: .down,
                                              scale: 0,
                                              raiseOnExactness: true,
                                              raiseOnOverflow: true,
                                              raiseOnUnderflow: true,
                                              raiseOnDivideByZero: true)
let c = b.rounding(accordingToBehavior: truncateBehavior)
print(c, "->", c.int64Value)
// 9 -> 9
If you want to use int64Value (i.e. -longLongValue), avoid using numbers with more than 62 bits of precision, i.e. avoid having more than 18 digits in total. Reasons explained below.
NSDecimalNumber is internally represented as a Decimal structure:
typedef struct {
    signed int _exponent:8;
    unsigned int _length:4;
    unsigned int _isNegative:1;
    unsigned int _isCompact:1;
    unsigned int _reserved:18;
    unsigned short _mantissa[NSDecimalMaxSize]; // NSDecimalMaxSize = 8
} NSDecimal;
This can be obtained using .decimalValue, e.g.
let v2 = NSDecimalNumber(string: "9.821426272392280061")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -18 (30717, 39329, 46888, 34892, 0, 0, 0, 0) 4
This means 9.821426272392280061 is internally stored as 9821426272392280061 × 10⁻¹⁸ — note that 9821426272392280061 = 34892 × 65536³ + 46888 × 65536² + 39329 × 65536 + 30717.
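As a quick check of that arithmetic, reassembling the 16-bit limbs (least significant first, as stored in _mantissa) reproduces the digits:

```swift
// _mantissa words from the dump, least significant first
let limbs: [UInt64] = [30717, 39329, 46888, 34892]

// value = ((34892 * 65536 + 46888) * 65536 + 39329) * 65536 + 30717
let value = limbs.reversed().reduce(UInt64(0)) { $0 * 65536 + $1 }

print(value) // 9821426272392280061
```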
Now compare with 9.821426272392280060:
let v2 = NSDecimalNumber(string: "9.821426272392280060")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -17 (62054, 3932, 17796, 3489, 0, 0, 0, 0) 4
Note that the exponent is reduced to -17, meaning the trailing zero is omitted by Foundation.
Knowing the internal structure, I now make a claim: the bug is because 34892 ≥ 32768. Observe:
let a = NSDecimalNumber(decimal: Decimal(
    _exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
    _mantissa: (65535, 65535, 65535, 32767, 0, 0, 0, 0)))
let b = NSDecimalNumber(decimal: Decimal(
    _exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
    _mantissa: (0, 0, 0, 32768, 0, 0, 0, 0)))
print(a, "->", a.int64Value)
print(b, "->", b.int64Value)
// 9.223372036854775807 -> 9
// 9.223372036854775808 -> -9
Note that 32768 × 65536³ = 2⁶³ is the value just enough to overflow a signed 64-bit number. Therefore, I suspect that the bug is due to Foundation implementing int64Value as (1) convert the mantissa directly into an Int64, and then (2) divide by 10^|exponent|.
In fact, if you disassemble Foundation.framework, you will find that this is basically how int64Value is implemented (independent of the platform's pointer width).
But why int32Value isn't affected? Because internally it is just implemented as Int32(self.doubleValue), so no overflow issue would occur. Unfortunately a double only has 53 bits of precision, so Apple has no choice but to implement int64Value (requiring 64 bits of precision) without floating-point arithmetics.
I'd file a bug with Apple if I were you. The docs say that NSDecimalNumber can represent any value up to 38 digits long. NSDecimalNumber inherits those properties from NSNumber, and the docs don't explicitly say what conversion is involved at that point, but the only reasonable interpretation is that if the number is roundable to and representable as an Int, then you get the correct answer.
It looks to me like a bug in handling the sign-extension during the conversion somewhere, since intValue is 32-bit and integerValue is 64-bit (in Swift).