I'm trying to implement Bluetooth FTMS (Fitness Machine Service).
guard let characteristicData = characteristic.value else { return -1 }
let byteArray = [UInt8](characteristicData)
let nsdataStr = NSData.init(data: (characteristic.value)!)
print("pwrFTMS 2ACC Feature Array:[\(byteArray.count)]\(byteArray) Hex:\(nsdataStr)")
Here is what's returned from the bleno server
PwrFTMS 2ACC Feature Array:[8][2, 64, 0, 0, 8, 32, 0, 0] Hex:{length = 8, bytes = 0x0240000008200000}
Based on the specs, the returned data contains 2 fields, each of them 4 octets long.
I'm having trouble splitting out the 4 octets so I can convert them to binary and get the relevant bits for decoding.
Part of the problem is that Swift removes the leading zeros. Hence, instead of getting 00 00 64 02, I'm getting 642. I tried the code below to pad it with leading zeros, but since it's formatted as a string, I can't convert it to binary using radix: 2.
let FTMSFeature = String(format: "%02x", byteArray[3]) + String(format: "%02x", byteArray[2]) + String(format: "%02x", byteArray[1]) + String(format: "%02x", byteArray[0])
I've been banging my head on this for an entire day and have gone through multiple SO posts and Google searches to no avail.
How Can I convert:
From - [HEX] 00 00 40 02
To - [DEC] 16386
To - [BIN] 0100 0000 0000 0010
then I can get to Bit1 = 1 and Bit14 = 1
You can simply use the ContiguousBytes withUnsafeBytes method to load your bytes as a UInt32. Note that it will use only as many bytes as are needed to create the resulting type (4 bytes):
let byteArray: [UInt8] = [2, 64, 0, 0, 8, 32, 0, 0]
let decimal = byteArray.withUnsafeBytes { $0.load(as: UInt32.self) }
decimal // 16386
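Since the question's 8-byte value actually holds two 4-octet fields, the second one can be loaded the same way from byte offset 4. This is just a sketch: the variable names are illustrative, and it assumes the little-endian byte order that FTMS uses (which matches the native order on Apple platforms).
let machineFeatures = byteArray.withUnsafeBytes { $0.load(fromByteOffset: 0, as: UInt32.self) }
let targetSettingFeatures = byteArray.withUnsafeBytes { $0.load(fromByteOffset: 4, as: UInt32.self) }
machineFeatures         // 16386  (from bytes 2, 64, 0, 0)
targetSettingFeatures   // 8200   (from bytes 8, 32, 0, 0)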
To convert from bytes to binary you just need to left-pad your resulting binary string. Note that your expected binary string has only 2 bytes when a 32-bit unsigned integer should have 4:
extension FixedWidthInteger {
    var binary: String {
        (0 ..< Self.bitWidth / 8).map {
            let byte = UInt8(truncatingIfNeeded: self >> ($0 * 8))
            let string = String(byte, radix: 2)
            return String(repeating: "0", count: 8 - string.count) + string
        }.reversed().joined(separator: " ")
    }
}
let binary = decimal.binary // "00000000 00000000 01000000 00000010"
To know if a specific bit is on or off you can do as follows:
extension UnsignedInteger {
    func bit<B: BinaryInteger>(at pos: B) -> Bool {
        precondition(0..<B(bitWidth) ~= pos, "invalid bit position")
        return (self & 1 << pos) > 0
    }
}
decimal.bit(at: 0) // false
decimal.bit(at: 1) // true
decimal.bit(at: 2) // false
decimal.bit(at: 3) // false
decimal.bit(at: 14) // true
If you need to get a value at a specific byte position, you can check this post.
I am trying to communicate with a Bluetooth laser tag gun that takes data in 20-byte chunks, which are broken down into 16-, 8-, or 4-bit words. To do this, I made a UInt8 array and changed the values in there. The problem happens when I try to send the UInt8 array.
var bytes = [UInt8](repeating: 0, count: 20)
bytes[0] = commandID
if commandID == 240 {
    commandID = 0
}
commandID += commandIDIncrement
print(commandID)
bytes[2] = 128
bytes[4] = UInt8(gunIDSlider.value)
print("Response: \(laserTagGun.writeValue(bytes, for: gunCControl, type: CBCharacteristicWriteType.withResponse))")
commandID is just a UInt8. This gives me the error, Cannot convert value of type '[UInt8]' to expected argument type 'Data', which I tried to solve by doing this:
var bytes = [UInt8](repeating: 0, count: 20)
bytes[0] = commandID
if commandID == 240 {
    commandID = 0
}
commandID += commandIDIncrement
print(commandID)
bytes[2] = 128
bytes[4] = UInt8(gunIDSlider.value)
print("bytes: \(bytes)")
assert(bytes.count * MemoryLayout<UInt8>.stride >= MemoryLayout<Data>.size)
let data1 = UnsafeRawPointer(bytes).assumingMemoryBound(to: Data.self).pointee
print("data1: \(data1)")
print("Response: \(laserTagGun.writeValue(data1, for: gunCControl, type: CBCharacteristicWriteType.withResponse))")
With this, data1 just prints 0 bytes, and I can see that laserTagGun.writeValue isn't actually doing anything by reading data from the other characteristics. How can I convert my UInt8 array to Data in Swift? Also, please let me know if there is a better way to handle 20 bytes of data than a UInt8 array. Thank you for your help!
It looks like you're really trying to avoid a copy of the bytes. If not, then just init a new Data with your bytes array:
let data2 = Data(bytes)
print("data2: \(data2)")
If you really want to avoid the copy, what about something like this?
let data1 = Data(bytesNoCopy: UnsafeMutableRawPointer(mutating: bytes), count: bytes.count, deallocator: .none)
print("data1: \(data1)")
I'm working on an application that reads in some strings in Q Number format.
The Java implementation converts a string like this:
int i = Integer.parseInt("00801600", 16);
System.out.println("Number from Integer.parseInt = " + i); // i=8394240
float j = Integer.reverseBytes(i);
System.out.println("After Integer.reverseBytes = " + j); // j=1474560.0
float k = j / 65536; //TWO_POWER_OF_16 = 65536
System.out.println("After Q division = " + k); // k=22.5
I've played with a lot of combinations of Swift functions, and this is (hopefully) pretty close:
let i: Int = Int("00801600", radix: 16) ?? 0
let istr = "Number from Int = \(i)"
let j: Double = Double(i.byteSwapped)
let jstr = "After byte swapping = \(j)"
let k: Double = Double(j) / 65536.0
let kstr = "After Q division = \(k)"
Obviously, Int.byteSwapped isn't what I'm looking for. In my example above, j is where it all goes off the rails. The Java code produces 1474560.0, whereas my Swift code produces 6333186975989760.0.
A Java int is always 32 bits, so Integer.reverseBytes turns 0x00801600 into 0x00168000.
A Swift Int is 32 bits on 32-bit platforms and 64 bits on 64-bit platforms (which is most current platforms). So on a 32-bit platform, i.byteSwapped turns 0x00801600 into 0x00168000, but on a 64-bit platform, i.byteSwapped turns 0x0000000000801600 into 0x0016800000000000.
If you want 32 bits, be explicit:
1> let i = Int32("00801600", radix: 16)!
i: Int32 = 8394240
2> let j = Double(i.byteSwapped)
j: Double = 1474560
3> let k = j / 65536
k: Double = 22.5
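If it helps, the same steps can be wrapped in a small helper; this is only a sketch, and the function name is made up here:
// Mirrors the Java code above: parse the hex string, reverse the 4 bytes,
// then divide by 2^16 to get the Q16.16 value.
func q16Value(fromHex hex: String) -> Double? {
    guard let raw = UInt32(hex, radix: 16) else { return nil }
    return Double(raw.byteSwapped) / 65536.0
}

q16Value(fromHex: "00801600")   // 22.5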
You say that you're trying to implement Q encoded numbers, but the Java code you've shown doesn't really do that. It hard-codes the case of Q16 (by virtue of dividing by 65536, which is 2^16). Frankly, I'm not even sure how it's intended to work, but it doesn't.
0x00801600, when Q encoded with a numerator of size 16, represents 0x0080 / 0x1600, which is 128 / 5632, or about 0.0227. Even if you imagine that your input is swapped, 5632 / 128 is 44, not 22.5. So I don't see any interpretation under which this math works out.
To implement this in Swift (and in Java, for that matter), I would make a new QEncoded data type that stores an integer and the number of bits that count towards the numerator (the number of bits that count for the denominator can be inferred as the integer's bit width minus the former).
This approach is the most flexible, but it isn't particularly efficient (since it wastes one Int for the numeratorBitWidth for every instance). If you have so many of these that memory usage is a concern, you can use a more protocol oriented approach, which I detail in a second answer.
// A QEncoded binary number of the form Qm.n https://en.wikipedia.org/wiki/Q_%28number_format%29
struct QEncoded<I: BinaryInteger> {
    var i: I

    var numeratorBitWidth: Int // "m"
    var denominatorBitWidth: Int { return i.bitWidth - numeratorBitWidth } // "n"

    var numerator: I {
        return i >> denominatorBitWidth
    }

    var denominator: I {
        if denominatorBitWidth == 0 { return 1 }
        let denominatorMask: I = (1 << I(numeratorBitWidth)) - 1
        return i & denominatorMask
    }

    var ratio: Double { return Double(numerator) / Double(denominator) }

    var qFormatDescription: String {
        let (m, n) = (self.numeratorBitWidth, self.denominatorBitWidth)
        return (n == 0) ? "Q\(m)" : "Q\(m).\(n)"
    }

    init(bitPattern: I, numeratorBitWidth: Int, denominatorBitWidth: Int) {
        assert(numeratorBitWidth + denominatorBitWidth == bitPattern.bitWidth, """
            The number of bits in the numerator (\(numeratorBitWidth)) and denominator (\(denominatorBitWidth)) \
            must sum to the total number of bits in the integer \(bitPattern.bitWidth)
            """)
        self.i = bitPattern
        self.numeratorBitWidth = numeratorBitWidth
    }

    // Might be useful to implement something like this:
    // init(numerator: I, numeratorBits: Int, denominator: I, denominatorBits: Int) {
    //
    // }
}
Here's a little demo:
extension BinaryInteger {
    var binaryDescription: String {
        var binaryString = ""
        var internalNumber = self
        var counter = 0
        for _ in (1...self.bitWidth) {
            binaryString.insert(contentsOf: "\(internalNumber & 1)", at: binaryString.startIndex)
            internalNumber >>= 1
            counter += 1
            if counter % 4 == 0 {
                binaryString.insert(contentsOf: " ", at: binaryString.startIndex)
            }
        }
        return binaryString
    }
}
extension QEncoded {
    func test() {
        print("\(self.i.binaryDescription) with \(qFormatDescription) encoding is: \(numerator.binaryDescription) (numerator: \(numerator)) / \(denominator.binaryDescription) (denominator: \(denominator)) = \(ratio)")
    }
}
// ↙︎ This common "0_" prefix does nothing, it's just necessary because "0b_..." isn't a valid form
// The rest of the `_` denote the separation between the numerator and denominator, strictly for human understanding only (it has no impact on the code's behaviour)
QEncoded(bitPattern: 0b0__00111111 as UInt8, numeratorBitWidth: 0, denominatorBitWidth: 8).test()
QEncoded(bitPattern: 0b0_0_0111111 as UInt8, numeratorBitWidth: 1, denominatorBitWidth: 7).test()
QEncoded(bitPattern: 0b0_00_111111 as UInt8, numeratorBitWidth: 2, denominatorBitWidth: 6).test()
QEncoded(bitPattern: 0b0_001_11111 as UInt8, numeratorBitWidth: 3, denominatorBitWidth: 5).test()
QEncoded(bitPattern: 0b0_0011_1111 as UInt8, numeratorBitWidth: 4, denominatorBitWidth: 4).test()
QEncoded(bitPattern: 0b0_00111_111 as UInt8, numeratorBitWidth: 5, denominatorBitWidth: 3).test()
QEncoded(bitPattern: 0b0_001111_11 as UInt8, numeratorBitWidth: 6, denominatorBitWidth: 2).test()
QEncoded(bitPattern: 0b0_0011111_1 as UInt8, numeratorBitWidth: 7, denominatorBitWidth: 1).test()
QEncoded(bitPattern: 0b0_00111111_ as UInt8, numeratorBitWidth: 8, denominatorBitWidth: 0).test()
Which prints:
0011 1111 with Q0.8 encoding is: 0000 0000 (numerator: 0) / 0000 0000 (denominator: 0) = -nan
0011 1111 with Q1.7 encoding is: 0000 0000 (numerator: 0) / 0000 0001 (denominator: 1) = 0.0
0011 1111 with Q2.6 encoding is: 0000 0000 (numerator: 0) / 0000 0011 (denominator: 3) = 0.0
0011 1111 with Q3.5 encoding is: 0000 0001 (numerator: 1) / 0000 0111 (denominator: 7) = 0.14285714285714285
0011 1111 with Q4.4 encoding is: 0000 0011 (numerator: 3) / 0000 1111 (denominator: 15) = 0.2
0011 1111 with Q5.3 encoding is: 0000 0111 (numerator: 7) / 0001 1111 (denominator: 31) = 0.22580645161290322
0011 1111 with Q6.2 encoding is: 0000 1111 (numerator: 15) / 0011 1111 (denominator: 63) = 0.23809523809523808
0011 1111 with Q7.1 encoding is: 0001 1111 (numerator: 31) / 0011 1111 (denominator: 63) = 0.49206349206349204
0011 1111 with Q8 encoding is: 0011 1111 (numerator: 63) / 0000 0001 (denominator: 1) = 63.0
This is an alternate approach to my main answer. Read that one first.
This is a more protocol-oriented approach. It encodes the numeratorBitWidth at the type level, so each instance only has to have enough memory to store I. Unfortunately, this requires a new struct definition for every type of Q encoded integer you might want (there are 16 variants just for 16-bit integers alone: QEncoded1_15, QEncoded2_14, ... QEncoded15_1, QEncoded16_0).
protocol QEncoded {
    associatedtype I: BinaryInteger
    var i: I { get set }

    static var numeratorBitWidth: Int { get } // "m"
    static var denominatorBitWidth: Int { get } // "n"
}

extension QEncoded {
    static var denominatorBitWidth: Int { return I().bitWidth - Self.numeratorBitWidth }

    static var qFormatDescription: String {
        let (m, n) = (self.numeratorBitWidth, self.denominatorBitWidth)
        return (n == 0) ? "Q\(m)" : "Q\(m).\(n)"
    }

    var numerator: I {
        return i >> Self.denominatorBitWidth
    }

    var denominator: I {
        if Self.denominatorBitWidth == 0 { return 1 }
        let denominatorMask: I = (1 << I(Self.numeratorBitWidth)) - 1
        return i & denominatorMask
    }

    var ratio: Double { return Double(numerator) / Double(denominator) }
}
Example usage:
extension BinaryInteger {
    var binaryDescription: String {
        var binaryString = ""
        var internalNumber = self
        var counter = 0
        for _ in (1...self.bitWidth) {
            binaryString.insert(contentsOf: "\(internalNumber & 1)", at: binaryString.startIndex)
            internalNumber >>= 1
            counter += 1
            if counter % 4 == 0 {
                binaryString.insert(contentsOf: " ", at: binaryString.startIndex)
            }
        }
        return binaryString
    }
}
extension QEncoded {
    func test() {
        print("\(self.i.binaryDescription) with \(Self.qFormatDescription) encoding is: \(numerator.binaryDescription) (numerator: \(numerator)) / \(denominator.binaryDescription) (denominator: \(denominator)) = \(ratio)")
    }
}
struct QEncoded16_0: QEncoded {
    static let numeratorBitWidth = 16
    var i: UInt16
    init(bitPattern: I) { self.i = bitPattern }
}

struct QEncoded8_8: QEncoded {
    static let numeratorBitWidth = 8
    var i: UInt16
    init(bitPattern: I) { self.i = bitPattern }
}

struct QEncoded4_12: QEncoded {
    static let numeratorBitWidth = 4
    var i: UInt16
    init(bitPattern: I) { self.i = bitPattern }
}
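The output below was presumably produced by test calls along these lines (they aren't shown in the original, so the bit pattern is inferred from the output):
let pattern: UInt16 = 0b0011_1110_0000_1111
QEncoded16_0(bitPattern: pattern).test()
QEncoded8_8(bitPattern: pattern).test()
QEncoded4_12(bitPattern: pattern).test()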
Output:
0011 1110 0000 1111 with Q16 encoding is: 0011 1110 0000 1111 (numerator: 15887) / 0000 0000 0000 0001 (denominator: 1) = 15887.0
0011 1110 0000 1111 with Q8.8 encoding is: 0000 0000 0011 1110 (numerator: 62) / 0000 0000 0000 1111 (denominator: 15) = 4.133333333333334
0011 1110 0000 1111 with Q4.12 encoding is: 0000 0000 0000 0011 (numerator: 3) / 0000 0000 0000 1111 (denominator: 15) = 0.2
I want to take input from the user in binary. What I want is something like:
10101
11110
Then I need to perform a bitwise OR on these. I know how to take input and how to perform a bitwise OR; what I want to know is how to convert, because what I am currently using is not giving the right result. What I tried is below:
let aBits: Int16 = Int16(a)! //a is String "10101"
let bBits: Int16 = Int16(b)! //b is String "11110"
let combinedbits = aBits | bBits
Edit: I don't need decimal-to-binary conversion with radix, as my string already has only 0s and 1s.
A String can have up to 500 characters, like:
1001101111101011011100101100100110111011111011000100111100111110111101011011011100111001100011111010
This is beyond the Int limit; how do I handle that in Swift?
Edit 2: As per vacawama's answer, the code below works great:
let maxAB = max(a.count, b.count)
let paddedA = String(repeating: "0", count: maxAB - a.count) + a
let paddedB = String(repeating: "0", count: maxAB - b.count) + b
let Str = String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
I can have an array of up to 500 strings, and each string can have up to 500 characters. Then I have to take all possible pairs, perform a bitwise OR on each, and count the maximum number of 1's. Any idea how to make the above solution more efficient? Thank you.
Since you need arbitrarily long binary numbers, do everything with strings.
This function first pads the two inputs to the same length, and then uses zip to pair the digits and map to compute the OR for each pair of characters. The resulting array of characters is converted back into a String with String().
func binaryOR(_ a: String, _ b: String) -> String {
    let maxAB = max(a.count, b.count)
    let paddedA = String(repeating: "0", count: maxAB - a.count) + a
    let paddedB = String(repeating: "0", count: maxAB - b.count) + b
    return String(zip(paddedA, paddedB).map({ $0 == ("0", "0") ? "0" : "1" }))
}
print(binaryOR("11", "1100")) // "1111"
print(binaryOR("1000", "0001")) // "1001"
I can have array of upto 500 string and each string can have upto 500
characters. Then I have to get all possible pair and perform bitwise
OR and count maximum number of 1's. Any idea to make above solution
more efficient?
You will have to do 500 * 499 / 2 (which is 124,750 comparisons). It is important to avoid unnecessary and/or repeated work.
I would recommend:
Do an initial pass to loop through your strings to find out the length of the largest one. Then pad all of your strings to this length. I would keep track of the original length of each string in a tiny struct:
struct BinaryNumber {
    var string: String // padded string
    var length: Int    // original length before padding
}
Modify the binaryOR function to take BinaryNumbers and return Int, the count of "1"s in the OR.
func binaryORcountOnes(_ a: BinaryNumber, _ b: BinaryNumber) -> Int {
    let maxAB = max(a.length, b.length)
    return zip(a.string.suffix(maxAB), b.string.suffix(maxAB)).reduce(0) { total, pair in
        return total + (pair == ("0", "0") ? 0 : 1)
    }
}
Note: The use of suffix helps the efficiency by only checking the digits that matter. If the original strings had length 2 and 3, then only the last 3 digits will be OR-ed even if they're padded to length 500.
Loop and compare all pairs of BinaryNumbers to find largest count of ones:
var numbers: [BinaryNumber] // This array was created in step 1

maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
    for j in (i + 1) ..< numbers.count {
        let ones = binaryORcountOnes(numbers[i], numbers[j])
        if ones > maxOnes {
            maxOnes = ones
        }
    }
}
print("maxOnes = \(maxOnes)")
Additional idea for speedup
OR can't create more ones than were in the original two numbers, and the number of ones can't exceed the maximum length of either of the original two numbers. So, if you count the ones in each number when you are padding them and store that in your struct in a var ones: Int property, you can use that to see if you should even bother calling binaryORcountOnes:
maxOnes = 0
for i in 0 ..< (numbers.count - 1) {
    for j in (i + 1) ..< numbers.count {
        if maxOnes < min(numbers[i].ones + numbers[j].ones, numbers[i].length, numbers[j].length) {
            let ones = binaryORcountOnes(numbers[i], numbers[j])
            if ones > maxOnes {
                maxOnes = ones
            }
        }
    }
}
By the way, the length of the original string should really just be the minimum length that includes the highest order 1. So if the original string was "00101", then the length should be 3 because that is all you need to store "101".
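Putting step 1 together with the ideas above, building the padded array might look something like this. This is only a sketch: it extends the earlier struct with the suggested ones property and leading-zero trimming, and makeBinaryNumbers is a name introduced here, not part of the original answer.
struct BinaryNumber {
    var string: String   // padded string
    var length: Int      // original length (after trimming leading zeros)
    var ones: Int        // number of "1" digits, used for the early-out check
}

func makeBinaryNumbers(_ strings: [String]) -> [BinaryNumber] {
    // Trim leading zeros so `length` reflects the highest-order 1.
    let trimmed = strings.map { s -> String in
        let t = s.drop(while: { $0 == "0" })
        return t.isEmpty ? "0" : String(t)
    }
    let maxLength = trimmed.map { $0.count }.max() ?? 0
    return trimmed.map { s in
        BinaryNumber(string: String(repeating: "0", count: maxLength - s.count) + s,
                     length: s.count,
                     ones: s.filter { $0 == "1" }.count)
    }
}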
let number = Int(a, radix: 2)
The radix lets you parse the string as binary instead of as a decimal value.
You can use radix for converting your string. Once converted, you can do a bitwise OR and then check the nonzeroBitCount to count the number of 1's
let a = Int("10101", radix: 2)!
let b = Int("11110", radix: 2)!
let bitwiseOR = a | b
let nonZero = bitwiseOR.nonzeroBitCount
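For the sample inputs above, the results come out as follows (values shown for illustration):
String(bitwiseOR, radix: 2)   // "11111"
nonZero                       // 5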
As I already commented above, "10101" is actually a String, not a binary number, so "10101" | "11110" will not calculate what you actually need.
So what you need to do is convert both values to integers, perform the bitwise OR, and then convert the result back to a binary String (which is the format your data is in: "11111", not 11111).
let a1 = Int("10101", radix: 2)!
let b1 = Int("11110", radix: 2)!
var result = 21 | 30
print(result)
Output: 31
Now convert it back to binary string
let binaryString = String(result, radix: 2)
print(binaryString)
Output: 11111
--: EDIT :--
I'm going to give a basic example of how to calculate the bitwise OR manually, since the question specifically asks not to use the built-in conversion because the string is too large to be converted to an Int.
Algorithm: 1|0 = 1, 1|1 = 1, 0|0 = 0, 0|1 = 1
So what we do is fetch the characters from each String one by one, perform the | operation on each pair, and append the result to another String.
var str1 = "100101" // 37
var str2 = "10111" // 23
/// Result should be "110111" -> "55"
// #1. Make both string equal
let length1 = str1.characters.count
let length2 = str2.characters.count
if length1 != length2 {
let maxLength = max(length1, length2)
for index in 0..<maxLength {
if str1.characters.count < maxLength {
str1 = "0" + str1
}
if str2.characters.count < maxLength {
str2 = "0" + str2
}
}
}
// #2. Get the index and compare one by one in bitwise OR
// a) 1 - 0 = 1,
// b) 0 - 1 = 1,
// c) 1 - 1 = 1,
// d) 0 - 0 = 0
let length = max(str1.characters.count, str2.characters.count)
var newStr = ""
for index in 0..<length {
let charOf1 = Int(String(str1[str1.index(str1.startIndex, offsetBy: index)]))!
let charOf2 = Int(String(str2[str2.index(str2.startIndex, offsetBy: index)]))!
let orResult = charOf1 | charOf2
newStr.append("\(orResult)")
}
print(newStr)
Output: 110111 // 55
I would refer you to Understanding Bitwise Operators for more detail.
func addBinary(_ a: String, _ b: String) {
    var result = ""
    let arrA = Array(a)
    let arrB = Array(b)
    var lengthA = arrA.count - 1
    var lengthB = arrB.count - 1
    var sum = 0
    while lengthA >= 0 || lengthB >= 0 || sum == 1 {
        sum += (lengthA >= 0) ? Int(String(arrA[lengthA]))! : 0
        sum += (lengthB >= 0) ? Int(String(arrB[lengthB]))! : 0
        result = String((sum % 2)) + result
        sum /= 2
        lengthA -= 1
        lengthB -= 1
    }
    print(result)
}
addBinary("11", "1")
I have an array of UInts containing 16 elements and I need to convert it to a Data object of 16 bytes.
I am using the below code to convert, but it is converting it to 128 bytes instead of 16 bytes.
let numbers = stride(from: 0, to: salt.length, by: 2).map {
    strtoul(String(chars[$0 ..< min($0 + 2, chars.count)]), nil, 16)
}
/* numbers is a [UInt] array */
let data = Data(buffer: UnsafeBufferPointer(start: numbers, count: numbers.count))
/* Data returns 128 bytes instead of 16 bytes */
Please correct me as to what I am doing wrong.
You can't convert 16 UInts to 16 bytes. A UInt is 8 bytes long on a 64 bit device, or 4 bytes on a 32 bit device. You need to use an array of UInt8s.
If you have an array of UInts as input you can't cast them to UInt8, but you can convert them:
let array: [UInt] = [1, 2, 3, 123, 255]
let array8Bit: [UInt8] = array.map{UInt8($0)}
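So for the original goal of a 16-byte Data from 16 byte-sized values, the conversion plus a Data initializer is enough. A sketch, reusing the numbers array from the question and assuming every element fits in a byte:
let byteValues = numbers.map { UInt8($0) }   // traps if any value is > 255
let data = Data(byteValues)
data.count                                   // 16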
I'm trying to do some binary file parsing in Swift, and although I have things working, I have a situation with variable fields.
I have all my parsing working for the default case.
I grab:
1-bit field
1-bit field
1-bit field
11-bits field
1-bit field
(optional) 4-bit field
(optional) 4-bit field
1-bit field
2-bit field
(optional) 4-bit field
5-bit field
6-bit field
(optional) 6-bit field
(optional) 24-bit field
(junk data - up until byte buffer 0 - 7 bits as needed)
Most of the data uses only a certain set of optionals so I've gone ahead and started writing classes to handle that data. My general approach is to create a pointer structure and then construct a byte array from that:
let rawData: NSMutableData = NSMutableData(data: input_nsdata)
var ptr: UnsafeMutablePointer<UInt8> = UnsafeMutablePointer<UInt8>(rawData.mutableBytes)
bytes = UnsafeMutableBufferPointer<UInt8>(start: ptr, count: rawData.length - offset)
So I end up working with an array of [UInt8] and I can do my parsing in a way similar to:
let b1 = (bytes[3] & 0x01) << 5
let b2 = (bytes[4] & 0xF8) >> 3
return Int(b1 | b2)
Where I run into trouble is with the optional fields: because my data does not lie on byte boundaries, everything gets complicated. In an ideal world I would probably just work directly with the pointer and advance it by bytes as needed; however, there is no way that I'm aware of to advance a pointer by 3 bits, which brings me to my question:
What is the best approach to handle my situation?
One idea I had was to come up with various structures that reflect the optional fields, except I'm not sure how to create bit-aligned packed structures in Swift.
What is my best approach here? For clarification - the initial 1-bit fields determine which of the optional fields are set.
If the fields do not lie on byte boundaries then you'll have to keep
track of both the current byte and the current bit position within a byte.
Here is a possible solution which allows reading an arbitrary number of bits from a data array and does all the bookkeeping. The only restriction is that the result of nextBits() must fit into a UInt (32 or 64 bits, depending on the platform).
struct BitReader {
    private let data: [UInt8]
    private var byteOffset: Int
    private var bitOffset: Int

    init(data: [UInt8]) {
        self.data = data
        self.byteOffset = 0
        self.bitOffset = 0
    }

    func remainingBits() -> Int {
        return 8 * (data.count - byteOffset) - bitOffset
    }

    mutating func nextBits(_ numBits: Int) -> UInt {
        precondition(numBits <= remainingBits(), "attempt to read more bits than available")
        var bits = numBits    // remaining bits to read
        var result: UInt = 0  // result accumulator

        // Read remaining bits from current byte:
        if bitOffset > 0 {
            if bitOffset + bits < 8 {
                result = (UInt(data[byteOffset]) & UInt(0xFF >> bitOffset)) >> UInt(8 - bitOffset - bits)
                bitOffset += bits
                return result
            } else {
                result = UInt(data[byteOffset]) & UInt(0xFF >> bitOffset)
                bits = bits - (8 - bitOffset)
                bitOffset = 0
                byteOffset = byteOffset + 1
            }
        }
        // Read entire bytes:
        while bits >= 8 {
            result = (result << UInt(8)) + UInt(data[byteOffset])
            byteOffset = byteOffset + 1
            bits = bits - 8
        }
        // Read remaining bits:
        if bits > 0 {
            result = (result << UInt(bits)) + (UInt(data[byteOffset]) >> UInt(8 - bits))
            bitOffset = bits
        }
        return result
    }
}
Example usage:
let data : [UInt8] = ... your data ...
var bitReader = BitReader(data: data)
let b1 = bitReader.nextBits(1)
let b2 = bitReader.nextBits(1)
let b3 = bitReader.nextBits(1)
let b4 = bitReader.nextBits(11)
let b5 = bitReader.nextBits(1)
if b1 > 0 {
    let b6 = bitReader.nextBits(4)
    let b7 = bitReader.nextBits(4)
}
// ... and so on ...
And here is another possible implementation, which is a bit simpler and perhaps more efficient. It collects bytes into a UInt, and then extracts the result in a single step.
Here the restriction is that numBits + 7 must be less than or equal to the number of bits in a UInt (32 or 64). (Of course UInt can be replaced by UInt64 to make it platform independent.)
struct BitReader {
    private let data: [UInt8]
    private var byteOffset = 0
    private var currentValue: UInt = 0  // Bits which still have to be consumed
    private var currentBits = 0         // Number of valid bits in `currentValue`

    init(data: [UInt8]) {
        self.data = data
    }

    func remainingBits() -> Int {
        return 8 * (data.count - byteOffset) + currentBits
    }

    mutating func nextBits(_ numBits: Int) -> UInt {
        precondition(numBits <= remainingBits(), "attempt to read more bits than available")

        // Collect bytes until we have enough bits:
        while currentBits < numBits {
            currentValue = (currentValue << 8) + UInt(data[byteOffset])
            currentBits = currentBits + 8
            byteOffset = byteOffset + 1
        }
        // Extract result:
        let remaining = currentBits - numBits
        let result = currentValue >> UInt(remaining)
        // Update remaining bits:
        currentValue = currentValue & UInt(1 << remaining - 1)
        currentBits = remaining
        return result
    }
}
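Usage is the same as for the first version; a small example with made-up input bytes:
var bitReader = BitReader(data: [0b1011_0110, 0b0100_1110])
let flag  = bitReader.nextBits(1)    // 1
let field = bitReader.nextBits(11)   // 0b011_0110_0100 = 868
bitReader.remainingBits()            // 4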