I'm working on an application that reads in some strings in Q Number format.
The Java implementation converts a string like this:
int i = Integer.parseInt("00801600", 16);
System.out.println("Number from Integer.parseInt = " + i); // i=8394240
float j = Integer.reverseBytes(i);
System.out.println("After Integer.reverseBytes = " + j); // j=1474560.0
float k = j / 65536; //TWO_POWER_OF_16 = 65536
System.out.println("After Q division = " + k); // k=22.5
I've played with a lot of combinations of Swift functions, and this is (hopefully) pretty close:
let i: Int = Int("00801600", radix: 16) ?? 0
let istr = "Number from Int = \(i)"
let j: Double = Double(i.byteSwapped)
let jstr = "After byte swapping = \(j)"
let k: Double = Double(j) / 65536.0
let kstr = "After Q division = \(k)"
Obviously, Int.byteSwapped isn't what I'm looking for. In my example above, j is where it all goes off the rails. The Java code produces 1474560.0, whereas my Swift code produces 6333186975989760.0.
A Java int is always 32 bits, so Integer.reverseBytes turns 0x00801600 into 0x00168000.
A Swift Int is 32 bits on 32-bit platforms and 64 bits on 64-bit platforms (which is most current platforms). So on a 32-bit platform, i.byteSwapped turns 0x00801600 into 0x00168000, but on a 64-bit platform, i.byteSwapped turns 0x0000000000801600 into 0x0016800000000000.
If you want 32 bits, be explicit:
1> let i = Int32("00801600", radix: 16)!
i: Int32 = 8394240
2> let j = Double(i.byteSwapped)
j: Double = 1474560
3> let k = j / 65536
k: Double = 22.5
You say that you're trying to implement Q encoded numbers, but the Java code you've shown doesn't really do that. It hard-codes the case of Q16 (by virtue of dividing by 65536, which is 2^16), and frankly, I'm not even sure how it's intended to work; whatever the intent, it doesn't.
0x00801600, when Q encoded with a numerator of size 16, represents 0x0080 / 0x1600, which is 128 / 5632, or roughly 0.0227. Even if you imagine that your input is swapped, 5632 / 128 is 44, not 22.5. So I don't see any interpretation under which this math works out.
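For the record, checking that arithmetic in a playground (the two constants are just the high and low halves of the example value):
let high = 0x0080 // 128
let low = 0x1600 // 5632
Double(high) / Double(low) // ~0.0227
Double(low) / Double(high) // 44.0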
To implement this in Swift (and in Java, for that matter), I would make a new QEncoded data type that stores an integer and the number of bits that count towards the numerator (the number of bits that count towards the denominator can be inferred as the integer's total bit width minus the numerator's).
This approach is the most flexible, but it isn't particularly efficient (it spends an extra Int on numeratorBitWidth for every instance). If you have so many of these that memory usage is a concern, you can use a more protocol-oriented approach, which I detail in a second answer.
// A QEncoded binary number of the form Qm.n https://en.wikipedia.org/wiki/Q_%28number_format%29
struct QEncoded<I: BinaryInteger> {
var i: I
var numeratorBitWidth: Int // "m"
var denominatorBitWidth: Int { return i.bitWidth - numeratorBitWidth } // "n"
var numerator: I {
return i >> denominatorBitWidth
}
var denominator: I {
if denominatorBitWidth == 0 { return 1 } // no fractional bits
if numeratorBitWidth == 0 { return i } // all bits belong to the denominator
let denominatorMask: I = (1 << I(denominatorBitWidth)) - 1
return i & denominatorMask
}
var ratio: Double { return Double(numerator) / Double(denominator) }
var qFormatDescription: String {
let (m, n) = (self.numeratorBitWidth, self.denominatorBitWidth)
return (n == 0) ? "Q\(m)" : "Q\(m).\(n)"
}
init(bitPattern: I, numeratorBitWidth: Int, denominatorBitWidth: Int) {
assert(numeratorBitWidth + denominatorBitWidth == bitPattern.bitWidth, """
The number of bits in the numerator (\(numeratorBitWidth)) and denominator (\(denominatorBitWidth)) \
must sum to the total number of bits in the integer \(bitPattern.bitWidth)
""")
self.i = bitPattern
self.numeratorBitWidth = numeratorBitWidth
}
// Might be useful to implement something like this:
// init(numerator: I, numeratorBits: Int, denominator: I, denominatorBits: Int) {
//
// }
}
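For what it's worth, that commented-out initializer could look something like the sketch below. It's not part of the original answer; it assumes the caller passes values that actually fit in the given bit widths, and it packs the numerator into the high bits and the denominator into the low bits:
extension QEncoded {
    // Hypothetical convenience initializer (assumes the values fit in their bit widths):
    // packs `numerator` into the high bits and `denominator` into the low bits.
    init(numerator: I, numeratorBits: Int, denominator: I, denominatorBits: Int) {
        self.init(bitPattern: (numerator << denominatorBits) | denominator,
                  numeratorBitWidth: numeratorBits,
                  denominatorBitWidth: denominatorBits)
    }
}
// e.g. QEncoded(numerator: 3 as UInt8, numeratorBits: 4, denominator: 15, denominatorBits: 4)
// produces the same value as QEncoded(bitPattern: 0b0011_1111 as UInt8, numeratorBitWidth: 4, denominatorBitWidth: 4)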
Here's a little demo:
extension BinaryInteger {
var binaryDescription: String {
var binaryString = ""
var internalNumber = self
var counter = 0
for _ in (1...self.bitWidth) {
binaryString.insert(contentsOf: "\(internalNumber & 1)", at: binaryString.startIndex)
internalNumber >>= 1
counter += 1
if counter % 4 == 0 {
binaryString.insert(contentsOf: " ", at: binaryString.startIndex)
}
}
return binaryString
}
}
extension QEncoded {
func test() {
print("\(self.i.binaryDescription) with \(qFormatDescription) encoding is: \(numerator.binaryDescription) (numerator: \(numerator)) / \(denominator.binaryDescription) (denominator: \(denominator)) = \(ratio)")
}
}
// ↙︎ This common "0_" prefix does nothing; it's just necessary because "0b_..." isn't a valid form
// The rest of the `_`s denote the separation between the numerator and denominator, strictly for human understanding only (they have no impact on the code's behaviour)
QEncoded(bitPattern: 0b0__00111111 as UInt8, numeratorBitWidth: 0, denominatorBitWidth: 8).test()
QEncoded(bitPattern: 0b0_0_0111111 as UInt8, numeratorBitWidth: 1, denominatorBitWidth: 7).test()
QEncoded(bitPattern: 0b0_00_111111 as UInt8, numeratorBitWidth: 2, denominatorBitWidth: 6).test()
QEncoded(bitPattern: 0b0_001_11111 as UInt8, numeratorBitWidth: 3, denominatorBitWidth: 5).test()
QEncoded(bitPattern: 0b0_0011_1111 as UInt8, numeratorBitWidth: 4, denominatorBitWidth: 4).test()
QEncoded(bitPattern: 0b0_00111_111 as UInt8, numeratorBitWidth: 5, denominatorBitWidth: 3).test()
QEncoded(bitPattern: 0b0_001111_11 as UInt8, numeratorBitWidth: 6, denominatorBitWidth: 2).test()
QEncoded(bitPattern: 0b0_0011111_1 as UInt8, numeratorBitWidth: 7, denominatorBitWidth: 1).test()
QEncoded(bitPattern: 0b0_00111111_ as UInt8, numeratorBitWidth: 8, denominatorBitWidth: 0).test()
Which prints:
0011 1111 with Q0.8 encoding is: 0000 0000 (numerator: 0) / 0011 1111 (denominator: 63) = 0.0
0011 1111 with Q1.7 encoding is: 0000 0000 (numerator: 0) / 0011 1111 (denominator: 63) = 0.0
0011 1111 with Q2.6 encoding is: 0000 0000 (numerator: 0) / 0011 1111 (denominator: 63) = 0.0
0011 1111 with Q3.5 encoding is: 0000 0001 (numerator: 1) / 0001 1111 (denominator: 31) = 0.03225806451612903
0011 1111 with Q4.4 encoding is: 0000 0011 (numerator: 3) / 0000 1111 (denominator: 15) = 0.2
0011 1111 with Q5.3 encoding is: 0000 0111 (numerator: 7) / 0000 0111 (denominator: 7) = 1.0
0011 1111 with Q6.2 encoding is: 0000 1111 (numerator: 15) / 0000 0011 (denominator: 3) = 5.0
0011 1111 with Q7.1 encoding is: 0001 1111 (numerator: 31) / 0000 0001 (denominator: 1) = 31.0
0011 1111 with Q8 encoding is: 0011 1111 (numerator: 63) / 0000 0001 (denominator: 1) = 63.0
This is an alternate approach to my main answer; read that one first.
This is a more protocol-oriented approach. It encodes the numeratorBitWidth at the type level, so each instance only needs enough memory to store I. Unfortunately, this requires a new struct definition for every kind of Q encoded integer you might want (there are 16 variants for 16-bit integers alone: QEncoded1_15, QEncoded2_14, ... QEncoded15_1, QEncoded16_0).
protocol QEncoded {
associatedtype I: BinaryInteger
var i: I { get set }
static var numeratorBitWidth: Int { get } // "m"
static var denominatorBitWidth: Int { get } // "n"
}
extension QEncoded {
static var denominatorBitWidth: Int { return I().bitWidth - Self.numeratorBitWidth }
static var qFormatDescription: String {
let (m, n) = (self.numeratorBitWidth, self.denominatorBitWidth)
return (n == 0) ? "Q\(m)" : "Q\(m).\(n)"
}
var numerator: I {
return i >> Self.denominatorBitWidth
}
var denominator: I {
if Self.denominatorBitWidth == 0 { return 1 } // no fractional bits
if Self.numeratorBitWidth == 0 { return i } // all bits belong to the denominator
let denominatorMask: I = (1 << I(Self.denominatorBitWidth)) - 1
return i & denominatorMask
}
var ratio: Double { return Double(numerator) / Double(denominator) }
}
Example usage:
extension BinaryInteger {
var binaryDescription: String {
var binaryString = ""
var internalNumber = self
var counter = 0
for _ in (1...self.bitWidth) {
binaryString.insert(contentsOf: "\(internalNumber & 1)", at: binaryString.startIndex)
internalNumber >>= 1
counter += 1
if counter % 4 == 0 {
binaryString.insert(contentsOf: " ", at: binaryString.startIndex)
}
}
return binaryString
}
}
extension QEncoded {
func test() {
print("\(self.i.binaryDescription) with \(Self.qFormatDescription) encoding is: \(numerator.binaryDescription) (numerator: \(numerator)) / \(denominator.binaryDescription) (denominator: \(denominator)) = \(ratio)")
}
}
struct QEncoded16_0: QEncoded {
static let numeratorBitWidth = 16
var i: UInt16
init(bitPattern: I) { self.i = bitPattern }
}
struct QEncoded8_8: QEncoded {
static let numeratorBitWidth = 8
var i: UInt16
init(bitPattern: I) { self.i = bitPattern }
}
struct QEncoded4_12: QEncoded {
static let numeratorBitWidth = 4
var i: UInt16
init(bitPattern: I) { self.i = bitPattern }
}
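The original answer doesn't show the calls that produce the output below, but given the structs above they would look along these lines:
let pattern: UInt16 = 0b0011_1110_0000_1111
QEncoded16_0(bitPattern: pattern).test()
QEncoded8_8(bitPattern: pattern).test()
QEncoded4_12(bitPattern: pattern).test()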
Output:
0011 1110 0000 1111 with Q16 encoding is: 0011 1110 0000 1111 (numerator: 15887) / 0000 0000 0000 0001 (denominator: 1) = 15887.0
0011 1110 0000 1111 with Q8.8 encoding is: 0000 0000 0011 1110 (numerator: 62) / 0000 0000 0000 1111 (denominator: 15) = 4.133333333333334
0011 1110 0000 1111 with Q4.12 encoding is: 0000 0000 0000 0011 (numerator: 3) / 0000 1110 0000 1111 (denominator: 3599) = 0.0008335648791330925
I'm trying to implement Bluetooth FTMS (Fitness Machine Service).
guard let characteristicData = characteristic.value else { return -1 }
let byteArray = [UInt8](characteristicData)
let nsdataStr = NSData.init(data: (characteristic.value)!)
print("pwrFTMS 2ACC Feature Array:[\(byteArray.count)]\(byteArray) Hex:\(nsdataStr)")
Here is what's returned from the bleno server
PwrFTMS 2ACC Feature Array:[8][2, 64, 0, 0, 8, 32, 0, 0] Hex:{length = 8, bytes = 0x0240000008200000}
Based on the specs, the returned data has 2 characteristics, each of them 4 octets long.
I'm having trouble getting the 4 octets split so I can get it converted to binary and get the relevant Bits for decoding.
Part of the problem is that Swift will remove the leading zeros. Hence, instead of getting 00 00 64 02, I'm getting 642. I tried the code below to pad it with leading zeros, but since it's formatted as a string, I can't convert it to binary using radix: 2.
let FTMSFeature = String(format: "%02x", byteArray[3]) + String(format: "%02x", byteArray[2]) + String(format: "%02x", byteArray[1]) + String(format: "%02x", byteArray[0])
I've been banging my head on this for an entire day and went through multiple SO posts and Google searches to no avail.
How Can I convert:
From - [HEX] 00 00 40 02
To - [DEC] 16386
To - [BIN] 0100 0000 0000 0010
then I can get to Bit1 = 1 and Bit14 = 1
You can simply use ContiguousBytes' withUnsafeBytes method to load your bytes as a UInt32. Note that it will use only the number of bytes needed to create the resulting type (4 bytes):
let byteArray: [UInt8] = [2, 64, 0, 0, 8, 32, 0, 0]
let decimal = byteArray.withUnsafeBytes { $0.load(as: UInt32.self) }
decimal // 16386
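Note that load(as:) interprets the bytes in host byte order. That happens to be what you want here, since Apple platforms are little-endian and (to my knowledge) multi-octet GATT fields are transmitted little-endian as well, but you can make the assumption explicit by wrapping the load:
// Interpret the first 4 bytes as a little-endian UInt32, regardless of host byte order
let features = UInt32(littleEndian: byteArray.withUnsafeBytes { $0.load(as: UInt32.self) })
features // 16386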
To convert from bytes to binary you just need to pad your resulting binary string to the left. Note that your expected binary string has only 2 bytes when a 32-bit unsigned integer should have 4:
extension FixedWidthInteger {
var binary: String {
(0 ..< Self.bitWidth / 8).map {
let byte = UInt8(truncatingIfNeeded: self >> ($0 * 8))
let string = String(byte, radix: 2)
return String(repeating: "0",
count: 8 - string.count) + string
}.reversed().joined(separator: " ")
}
}
let binary = decimal.binary // "00000000 00000000 01000000 00000010"
To know if a specific bit is on or off you can do as follows:
extension UnsignedInteger {
func bit<B: BinaryInteger>(at pos: B) -> Bool {
precondition(0..<B(bitWidth) ~= pos, "invalid bit position")
return (self & 1 << pos) > 0
}
}
decimal.bit(at: 0) // false
decimal.bit(at: 1) // true
decimal.bit(at: 2) // false
decimal.bit(at: 3) // false
decimal.bit(at: 14) // true
If you need to get a value at a specific byte position you can check this post.
I am trying to create my own hashing framework/library, but I've stumbled across an issue. When I calculate the SHA256 hash of an empty string, the hash is calculated successfully, but when I calculate it for anything else, it fails. Can someone help me figure out why?
The empty-string hash below matches the value given by Wikipedia, as well as the results I get online and in Python.
let h = SHA256(message: Data("".utf8))
let d = h.digest()
// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
print(d)
But 'Hello world' does not
let h = SHA256(message: Data("Hello world".utf8))
let d = h.digest()
// ce9f4c08f0688d09b8061ed6692c1d5af2516c8682fad2d9a5d72f96ba787a80
print(d)
// Expected:
// 64ec88ca00b268e5ba1a35678a1b5316d212f4f366b2477232534a8aeca37f3c
I hope someone can help me. SHA256 implementation below:
/*
First 32 bits of the fractional parts of the
square roots of the first 8 primes 2..19.
*/
fileprivate let kSHA256H0: UInt32 = 0x6a09e667
fileprivate let kSHA256H1: UInt32 = 0xbb67ae85
fileprivate let kSHA256H2: UInt32 = 0x3c6ef372
fileprivate let kSHA256H3: UInt32 = 0xa54ff53a
fileprivate let kSHA256H4: UInt32 = 0x510e527f
fileprivate let kSHA256H5: UInt32 = 0x9b05688c
fileprivate let kSHA256H6: UInt32 = 0x1f83d9ab
fileprivate let kSHA256H7: UInt32 = 0x5be0cd19
/*
First 32 bits of the fractional parts of the
cube roots of the first 64 primes 2..311.
*/
fileprivate let kSHA256K: [UInt32] = [
0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc,
0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7,
0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3,
0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5,
0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
]
/// Shift the value of x n amount to the right.
/// - Parameters:
/// - x: The value to shift.
/// - n: The amount to shift by.
/// - Returns: The shifted value.
fileprivate func shiftRight(_ x: UInt32, _ n: UInt32) -> UInt32 { x >> n }
/// Rotate the value of x to the right by y bits.
/// - Parameters:
/// - x: The value to rotate.
/// - y: The amount to rotate by.
/// - Returns: The rotated value.
fileprivate func rotateRight(_ x: UInt32, _ y: UInt32) -> UInt32 { (x >> (y & 31)) | (x << (32 - (y & 31))) }
/// Split data into chunks of specified size.
/// - Note: This function will not pad or append data
/// to make sure all the chunks are equal in size.
/// - Parameters:
/// - data: The data to split.
/// - size: The size of a chunk.
/// - Returns: An array containing chunks of specified size (when able).
fileprivate func chunk(_ data: Data, toSize size: Int) -> [Data] {
stride(from: 0, to: data.count, by: size).map {
data.subdata(in: $0 ..< Swift.min($0 + size, data.count))
}
}
public class SHA256 {
/// The pre-processed data.
fileprivate let message: Data
fileprivate var hash = [
kSHA256H0, kSHA256H1, kSHA256H2, kSHA256H3,
kSHA256H4, kSHA256H5, kSHA256H6, kSHA256H7
]
public init(message: Data) {
self.message = Self.preProcess(message: message)
}
fileprivate static func preProcess(message: Data) -> Data {
let L = message.count * 8 // Original message length in bits.
var K = 0 // Required padding bits.
while (L + 1 + K + 64) % 512 != 0 {
K += 1
}
var padding = Data(repeating: 0, count: K / 8)
padding.insert(0x80, at: 0) // Insert 1000 0000 into the padding.
var length = UInt64(L).bigEndian
return message + padding + Data(bytes: &length, count: 8)
}
public func digest() -> Data {
let chunks = chunk(message, toSize: 64)
for chunk in chunks {
var w = [UInt32](repeating: 0, count: 64) // 64-entry message schedule array of 32-bit words.
// Copy the chunk into first 16 words w[0..15] of the schedule array.
for i in 0 ..< 16 {
let sub = chunk.subdata(in: i ..< i + 4)
w[i] = sub.withUnsafeBytes { $0.load(as: UInt32.self) }.bigEndian
}
// Extend the first 16 words into the remaining 48 words w[16..63] of the schedule array.
for i in 16 ..< 64 {
let s0 = rotateRight(w[i - 15], 7) ^ rotateRight(w[i - 15], 18) ^ shiftRight(w[i - 15], 3)
let s1 = rotateRight(w[i - 2], 17) ^ rotateRight(w[i - 2], 19) ^ shiftRight(w[i - 2], 10)
w[i] = s1 &+ w[i - 7] &+ s0 &+ w[i - 16]
}
// Create some working variables.
var a = hash[0]
var b = hash[1]
var c = hash[2]
var d = hash[3]
var e = hash[4]
var f = hash[5]
var g = hash[6]
var h = hash[7]
// Compress function main loop.
for i in 0 ..< 64 {
let S1 = rotateRight(e, 6) ^ rotateRight(e, 11) ^ rotateRight(e, 25)
let ch = (e & f) ^ (~e & g)
let T1 = h &+ S1 &+ ch &+ kSHA256K[i] &+ w[i]
let S0 = rotateRight(a, 2) ^ rotateRight(a, 13) ^ rotateRight(a, 22)
let maj = (a & b) ^ (a & c) ^ (b & c)
let T2 = S0 &+ maj
h = g
g = f
f = e
e = d &+ T1
d = c
c = b
b = a
a = T1 &+ T2
}
hash[0] &+= a
hash[1] &+= b
hash[2] &+= c
hash[3] &+= d
hash[4] &+= e
hash[5] &+= f
hash[6] &+= g
hash[7] &+= h
}
return hash.map {
var num = $0.bigEndian
return Data(bytes: &num, count: 4)
}.reduce(Data(), +)
}
}
Turns out, I was creating the wrong subdata to construct my UInt32s from when building the message schedule array (the first couple of lines in the digest() function).
The old one was
let sub = chunk.subdata(in: i ..< i + 4)
The new one is
let sub = chunk.subdata(in: i * 4 ..< (i * 4) + 4)
This resolves the issue: with the old range, the first 16 schedule words overlapped by three bytes instead of being consecutive 4-byte words. The empty string happened to work anyway because every byte after the leading 0x80 in its single padded chunk is zero, so the overlapping reads produced the same words.
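With that one-line change in place, re-running the failing case (hex-encoding the digest for readability) should now print the expected value:
let h = SHA256(message: Data("Hello world".utf8))
let hex = h.digest().map { String(format: "%02x", $0) }.joined()
print(hex)
// 64ec88ca00b268e5ba1a35678a1b5316d212f4f366b2477232534a8aeca37f3c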
I want to convert an integer from 0 to 65535 and for that I need a two-byte representation. I'm trying to divide it by 2, 8 times, and sum the powers of 2 when the remainder is one, and then cast that integer as a byte, but I'm having problems meeting the restrictions of a byte (256). The second byte will be the remainder of the 8th division, and I'm having problems casting that as a byte too.
The following is my code for the method described above:
method convertBin(i:int) returns (b:seq<byte>)
requires 0<=i<=65535;
{
var b1:=0;
var q:=i;
var j:=0;
while j<8
invariant 0<=j<=8 && (b1 as int)< power(2,j)
decreases 8-j
{
var p:int;
if(q%2==1){
p:=power(2, j);
b1:=b1 + p;
q:=q/2;
}
j:=j+1;
}
b1:=b1 as byte;
b:=[b1]+[q as byte];
}
To complete your example, you need stronger loop invariants. But you don't need a loop at all, since there's no reason to divide only by 2.
Here's doing it with byte as a subset type:
type byte = x | 0 <= x < 256
method convertBin(i: int) returns (b1: byte, b0: byte)
requires 0 <= i < 0x1_0000
ensures i == 256 * b1 + b0
{
b1, b0 := i / 256, i % 256;
}
And here's the same program, but with byte being a newtype:
newtype byte = x | 0 <= x < 256
method convertBin(i: int) returns (b1: byte, b0: byte)
requires 0 <= i < 0x1_0000
ensures i == 256 * b1 as int + b0 as int
{
b1, b0 := (i / 256) as byte, (i % 256) as byte;
}
Rustan
I'm trying to do some binary file parsing in Swift, and although I have things working, I have a situation with variable fields.
I have all my parsing working in the default case. I grab:
1-bit field
1-bit field
1-bit field
11-bits field
1-bit field
(optional) 4-bit field
(optional) 4-bit field
1-bit field
2-bit field
(optional) 4-bit field
5-bit field
6-bit field
(optional) 6-bit field
(optional) 24-bit field
(junk data, padding up to the byte boundary: 0-7 bits as needed)
Most of the data uses only a certain set of optionals so I've gone ahead and started writing classes to handle that data. My general approach is to create a pointer structure and then construct a byte array from that:
let rawData: NSMutableData = NSMutableData(data: input_nsdata)
var ptr: UnsafeMutablePointer<UInt8> = UnsafeMutablePointer<UInt8>(rawData.mutableBytes)
bytes = UnsafeMutableBufferPointer<UInt8>(start: ptr, count: rawData.length - offset)
So I end up working with a [UInt8] array, and I can do my parsing in a way similar to:
let b1 = (bytes[3] & 0x01) << 5
let b2 = (bytes[4] & 0xF8) >> 3
return Int(b1 | b2)
So where I run into trouble is with the optional fields: because my data does not lie specifically on byte boundaries, everything gets complicated. In an ideal world I would probably just work directly with the pointer and advance it by bytes as needed; however, there is no way that I'm aware of to advance a pointer by 3 bits, which brings me to my question.
What is the best approach to handle my situation?
One idea I had was to come up with various structures that reflect the optional fields, except I'm not sure how to create bit-aligned packed structures in Swift.
What is my best approach here? For clarification: the initial 1-bit fields determine which of the optional fields are set.
If the fields do not lie on byte boundaries then you'll have to keep track of both the current byte and the current bit position within a byte.
Here is a possible solution which allows you to read an arbitrary number of bits from a data array and does all the bookkeeping. The only restriction is that the result of nextBits() must fit into a UInt (32 or 64 bits, depending on the platform).
struct BitReader {
private let data : [UInt8]
private var byteOffset : Int
private var bitOffset : Int
init(data : [UInt8]) {
self.data = data
self.byteOffset = 0
self.bitOffset = 0
}
func remainingBits() -> Int {
return 8 * (data.count - byteOffset) - bitOffset
}
mutating func nextBits(numBits : Int) -> UInt {
precondition(numBits <= remainingBits(), "attempt to read more bits than available")
var bits = numBits // remaining bits to read
var result : UInt = 0 // result accumulator
// Read remaining bits from current byte:
if bitOffset > 0 {
if bitOffset + bits < 8 {
result = (UInt(data[byteOffset]) & UInt(0xFF >> bitOffset)) >> UInt(8 - bitOffset - bits)
bitOffset += bits
return result
} else {
result = UInt(data[byteOffset]) & UInt(0xFF >> bitOffset)
bits = bits - (8 - bitOffset)
bitOffset = 0
byteOffset = byteOffset + 1
}
}
// Read entire bytes:
while bits >= 8 {
result = (result << UInt(8)) + UInt(data[byteOffset])
byteOffset = byteOffset + 1
bits = bits - 8
}
// Read remaining bits:
if bits > 0 {
result = (result << UInt(bits)) + (UInt(data[byteOffset]) >> UInt(8 - bits))
bitOffset = bits
}
return result
}
}
Example usage:
let data : [UInt8] = ... your data ...
var bitReader = BitReader(data: data)
let b1 = bitReader.nextBits(1)
let b2 = bitReader.nextBits(1)
let b3 = bitReader.nextBits(1)
let b4 = bitReader.nextBits(11)
let b5 = bitReader.nextBits(1)
if b1 > 0 {
let b6 = bitReader.nextBits(4)
let b7 = bitReader.nextBits(4)
}
// ... and so on ...
And here is another possible implementation, which is a bit simpler and perhaps more efficient. It collects bytes into a UInt, and then extracts the result in a single step.
Here the restriction is that numBits + 7 must be less than or equal to the number of bits in a UInt (32 or 64). (Of course UInt can be replaced by UInt64 to make it platform independent.)
struct BitReader {
private let data : [UInt8]
private var byteOffset = 0
private var currentValue : UInt = 0 // Bits which still have to be consumed
private var currentBits = 0 // Number of valid bits in `currentValue`
init(data : [UInt8]) {
self.data = data
}
func remainingBits() -> Int {
return 8 * (data.count - byteOffset) + currentBits
}
mutating func nextBits(numBits : Int) -> UInt {
precondition(numBits <= remainingBits(), "attempt to read more bits than available")
// Collect bytes until we have enough bits:
while currentBits < numBits {
currentValue = (currentValue << 8) + UInt(data[byteOffset])
currentBits = currentBits + 8
byteOffset = byteOffset + 1
}
// Extract result:
let remaining = currentBits - numBits
let result = currentValue >> UInt(remaining)
// Update remaining bits:
currentValue = currentValue & UInt(1 << remaining - 1)
currentBits = remaining
return result
}
}
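Usage is the same as for the first version, reusing the data array from the earlier example (the field sizes here are just illustrative):
var bitReader = BitReader(data: data)
let flags = bitReader.nextBits(3) // a 3-bit field
let count = bitReader.nextBits(11) // followed by an 11-bit field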
I have a very long String (600+ characters) holding a big decimal value (yes I know, sounds like a BigInteger) and need the byte representation of this value.
Is there any easy way to achieve this with Swift?
static func decimalStringToUInt8Array(decimalString:String) -> [UInt8] {
...
}
Edit: Updated for Swift 5
I wrote you a function to convert your number string. This is written in Swift 5 (originally Swift 1.2).
func decimalStringToUInt8Array(_ decimalString: String) -> [UInt8] {
// Convert input string into array of Int digits
let digits = Array(decimalString).compactMap { Int(String($0)) }
// Nothing to process? Return an empty array.
guard digits.count > 0 else { return [] }
let numdigits = digits.count
// Array to hold the result, in reverse order
var bytes = [UInt8]()
// Convert array of digits into array of Int values each
// representing 6 digits of the original number. Six digits
// was chosen to work on 32-bit and 64-bit systems.
// Compute length of first number. It will be less than 6 if
// there isn't a multiple of 6 digits in the number.
var ints = Array(repeating: 0, count: (numdigits + 5)/6)
var rem = numdigits % 6
if rem == 0 {
rem = 6
}
var index = 0
var accum = 0
for digit in digits {
accum = accum * 10 + digit
rem -= 1
if rem == 0 {
rem = 6
ints[index] = accum
index += 1
accum = 0
}
}
// Repeatedly divide value by 256, accumulating the remainders.
// Repeat until original number is zero
while ints.count > 0 {
var carry = 0
for (index, value) in ints.enumerated() {
var total = carry * 1000000 + value
carry = total % 256
total /= 256
ints[index] = total
}
bytes.append(UInt8(truncatingIfNeeded: carry))
// Remove leading Ints that have become zero.
while ints.count > 0 && ints[0] == 0 {
ints.remove(at: 0)
}
}
// Reverse the array and return it
return bytes.reversed()
}
print(decimalStringToUInt8Array("0")) // prints "[0]"
print(decimalStringToUInt8Array("255")) // prints "[255]"
print(decimalStringToUInt8Array("256")) // prints "[1,0]"
print(decimalStringToUInt8Array("1024")) // prints "[4,0]"
print(decimalStringToUInt8Array("16777216")) // prints "[1,0,0,0]"
Here's the reverse function. You'll notice it is very similar:
func uInt8ArrayToDecimalString(_ uint8array: [UInt8]) -> String {
// Nothing to process? Return an empty string.
guard uint8array.count > 0 else { return "" }
// For efficiency in calculation, combine 3 bytes into one Int.
let numvalues = uint8array.count
var ints = Array(repeating: 0, count: (numvalues + 2)/3)
var rem = numvalues % 3
if rem == 0 {
rem = 3
}
var index = 0
var accum = 0
for value in uint8array {
accum = accum * 256 + Int(value)
rem -= 1
if rem == 0 {
rem = 3
ints[index] = accum
index += 1
accum = 0
}
}
// Array to hold the result, in reverse order
var digits = [Int]()
// Repeatedly divide value by 10, accumulating the remainders.
// Repeat until original number is zero
while ints.count > 0 {
var carry = 0
for (index, value) in ints.enumerated() {
var total = carry * 256 * 256 * 256 + value
carry = total % 10
total /= 10
ints[index] = total
}
digits.append(carry)
// Remove leading Ints that have become zero.
while ints.count > 0 && ints[0] == 0 {
ints.remove(at: 0)
}
}
// Reverse the digits array, convert them to String, and join them
return digits.reversed().map(String.init).joined()
}
Doing a round trip test to make sure we get back to where we started:
let a = "1234567890987654321333555777999888666444222000111"
let b = decimalStringToUInt8Array(a)
let c = uInt8ArrayToDecimalString(b)
if a == c {
print("success")
} else {
print("failure")
}
success
Check that eight 255 bytes is the same as UInt64.max:
print(uInt8ArrayToDecimalString([255, 255, 255, 255, 255, 255, 255, 255]))
print(UInt64.max)
18446744073709551615
18446744073709551615
You can use the NSData(bytes:length:) initializer to wrap an Int's underlying bytes in an NSData, and then copy the bytes from the NSData into a [UInt8] array.
Once you know that, the only thing left is to work out the size of your array. Darwin comes in handy there with the pow function. Here is an example:
func stringToUInt8(string: String) -> [UInt8] {
if var int = string.toInt() {
let power: Float = 1.0 / 16
let size = Int(floor(powf(Float(int), power)) + 1)
let data = NSData(bytes: &int, length: size)
var b = [UInt8](count: size, repeatedValue: 0)
data.getBytes(&b, length: size)
return b
}
return []
}
You can always do:
let bytes = [UInt8](decimalString.utf8)
If you want the UTF-8 bytes.
Provided you had division implemented on your decimal string, you could divide by 256 repeatedly. The remainder of the first division is your least significant byte.
Here's an example of division by a scalar in C (it assumes the length of the number is stored in A[0] and writes the result into the same array):
void div(int A[], int B)
{
int i, t = 0;
for (i = A[0]; i > 0; i--, t %= B)
A[i] = (t = t * 10 + A[i]) / B;
for (; A[0] > 1 && !A[A[0]]; A[0]--);
}
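For completeness, here is the same idea as a minimal Swift sketch (my own illustration, assuming a digit-array input rather than the C layout above): digits are stored most significant first, the array is repeatedly divided by 256, and the remainders become the bytes, least significant first.
func decimalDigitsToBytes(_ digits: [Int]) -> [UInt8] {
    var a = digits // base-10 digits, most significant first
    var bytes = [UInt8]()
    while !a.isEmpty {
        var quotient = [Int]()
        var rem = 0
        for d in a { // long division of the digit array by 256
            let t = rem * 10 + d
            quotient.append(t / 256)
            rem = t % 256
        }
        bytes.append(UInt8(rem)) // the remainder is the next least significant byte
        a = Array(quotient.drop(while: { $0 == 0 })) // trim leading zeros
    }
    return bytes.reversed()
}
decimalDigitsToBytes([2, 5, 6]) // [1, 0]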