Convert NSString to UInt8 in a function in Swift? - ios

I'm creating a Swift app in Xcode that sends a command to a BLE adapter in order to make the LEDs connected to it change to a different colour.
As I established from a reply to a previous post on SO, I have to send the command as an array of bytes, written as hex integers. I'm using the following code to do this:
let bytes : [UInt8] = [ 0x52, 0x13, 0x00, 0x56, 0xFF, 0x00, 0x00, 0x00, 0xAA ]
let data = NSData(bytes: bytes, length: bytes.count)
So, as suggested in that earlier answer, the values need to be in UInt8 form.
However, I want to use sliders as colour pickers in my app to set the R, G and B values of the LED strip connected to the BLE receiver. To do this I have created three sliders, one each for R, G and B, with a minimum value of 0 and a maximum of 255 (255 being FF in hex). I'm then using the following function to convert the slider values to hex form so I can plug them into the command above.
func colorToHex(input: Int) -> UInt8 {
    var st = NSString(format: "%2X", input)
    return st
}
The problem with this is the fact that I must return a UInt8 value back again. Since 'st' is an NSString, Xcode throws an error of 'NSString not convertible to UInt8'.
I'm fairly new to Swift. The question here is: how do I get the function to return a UInt8 value?
Any help would be greatly appreciated!

There is no need to use NSString or Int. If redSlider is your UISlider with minimum value 0 and maximum value 255 then you can just compute
let redByte = UInt8(redSlider.value)
and use that in your bytes array:
var bytes : [UInt8] = [ 0x52, 0x13, 0x00, 0x56, 0xFF, 0x00, 0x00, 0x00, 0xAA ]
bytes[0] = redByte // Assuming that the first array element is for red.
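Putting it together, a minimal sketch — assuming (verify against your adapter's protocol) that bytes 4, 5 and 6 of the command carry the red, green and blue values:
import UIKit

class ColorViewController: UIViewController {
    @IBOutlet var redSlider: UISlider!   // each slider configured with
    @IBOutlet var greenSlider: UISlider! // minimum 0, maximum 255
    @IBOutlet var blueSlider: UISlider!

    func makeCommand() -> NSData {
        var bytes: [UInt8] = [0x52, 0x13, 0x00, 0x56, 0xFF, 0x00, 0x00, 0x00, 0xAA]
        bytes[4] = UInt8(redSlider.value)   // assumed R, G, B positions;
        bytes[5] = UInt8(greenSlider.value) // adjust to match your
        bytes[6] = UInt8(blueSlider.value)  // adapter's documentation
        return NSData(bytes: bytes, length: bytes.count)
    }
}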

Just
func colorToHex(input: Int) -> UInt8 {
    return UInt8(input % (Int(UInt8.max) + 1))
}
NSString(format: "%2X", colorToHex(25)) // "19"
NSString(format: "%2X", colorToHex(254)) // "FE"
NSString(format: "%2X", colorToHex(255)) // "FF"
NSString(format: "%2X", colorToHex(256)) // "0"
If I were you, I would use NSString(format: "%02X", colorToHex(25)) // "19"
With "%2X" you get a leading space when the number has only one digit; "%02X" pads with a zero instead.
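For comparison, the padding behaviour of the two format specifiers (reusing colorToHex from above):
NSString(format: "%2X", colorToHex(9))    // " 9" — space-padded to width 2
NSString(format: "%02X", colorToHex(9))   // "09" — zero-padded to width 2
NSString(format: "%02X", colorToHex(254)) // "FE"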

Related

Calculate high byte and low byte in Swift iOS?

How do I calculate the high byte and low byte of a number?
Example:
let mValue = 26513
The hex representation of mValue is 0x6791.
How do I find the high and low byte of this number?
Update, for Swift: the solution below works for me:
let mVal = 26513                   // 0x6791
let highByte = (mVal >> 8) & 0xff  // 0x67
let lowByte = mVal & 0xff          // 0x91
print("highByte: \(highByte)\nlowByte: \(lowByte)")

How to map an UnsafePointer to another type?

I'm trying to convert an UnsafePointer<UInt16> to an UnsafePointer<Float>, and so far I've ended up with this solution:
let bufferSize = 1024
let buffer: UnsafePointer<UInt16> = ....
let tmp = UnsafeBufferPointer(start: buffer, count: bufferSize).map(Float.init)
let converted: UnsafePointer<Float> = UnsafePointer(tmp)
It works, but I have the feeling it's not an efficient way, since I'm creating an intermediate Array... Is there a better way to do that?
You can use withMemoryRebound to convert a pointer from one type to another:
buffer.withMemoryRebound(to: Float.self, capacity: 512) { converted -> Void in
    // use `converted` here
}
But be careful: MemoryLayout<Float>.size is 4 (i.e. 32 bits) and MemoryLayout<UInt16>.size is 2 (i.e. 16 bits), so the capacity is given in elements of the new type, and your Float buffer holds half as many elements as your UInt16 buffer — 512 here, not 1024.
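A self-contained sketch of the rebinding (note that rebinding reinterprets the raw bits as Float; it does not numerically convert each UInt16 the way map(Float.init) does):
let count = 1024
let u16Buffer = UnsafeMutablePointer<UInt16>.allocate(capacity: count)
defer { u16Buffer.deallocate() }
u16Buffer.initialize(repeating: 0x3F80, count: count) // arbitrary sample bit pattern

u16Buffer.withMemoryRebound(to: Float.self, capacity: count / 2) { floats in
    // `floats` views the same 2048 bytes as 512 Float values.
    // This reinterprets bit patterns; it does NOT convert values.
    print(floats[0])
}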

How to convert NSData to multiple type Ints

I obtain the magnetometer trim registers as NSData, which looks as follows:
<00001a1a 4f56f202 00000000 1dfd421b>
I need to convert it to Int8, UInt8, Int16, UInt16 depending on which byte I access.
From the sensor's docs:
s8 dig_x1;/**< trim x1 data */
s8 dig_y1;/**< trim y1 data */
s8 dig_x2;/**< trim x2 data */
s8 dig_y2;/**< trim y2 data */
u16 dig_z1;/**< trim z1 data */
s16 dig_z2;/**< trim z2 data */
s16 dig_z3;/**< trim z3 data */
s16 dig_z4;/**< trim z4 data */
u8 dig_xy1;/**< trim xy1 data */
s8 dig_xy2;/**< trim xy2 data */
u16 dig_xyz1;/**< trim xyz1 data */
The main problem is how to access a selected byte in the NSData so that I can convert it manually to Int8, UInt16, etc.
Generally, how should I approach such a problem? Should I look for a way to iterate over the NSData manually and convert each value myself?
You can convert data.bytes + offset to a pointer of the
appropriate type and then dereference the pointer:
let dig_x1 = UnsafePointer<Int8>(data.bytes).memory
let dig_y1 = UnsafePointer<Int8>(data.bytes + 1).memory
// ...
let dig_z1 = UnsafePointer<UInt16>(data.bytes + 4).memory
let dig_z2 = UnsafePointer<Int16>(data.bytes + 6).memory
// ...
(Note: Here it is assumed that all values in that binary blob are
properly aligned for their type.)
The data is in little-endian byte order, which is also what all
current iOS platforms use. To be on the safe side, convert
the data to host byte order explicitly:
let dig_z1 = UInt16(littleEndian: UnsafePointer(data.bytes + 4).memory)
let dig_z2 = Int16(littleEndian: UnsafePointer(data.bytes + 6).memory)
// ...
An alternative is to define a C structure in the bridging header file
struct MagnetometerData {
    int8_t dig_x1;
    int8_t dig_y1;
    int8_t dig_x2;
    int8_t dig_y2;
    uint16_t dig_z1;
    int16_t dig_z2;
    int16_t dig_z3;
    int16_t dig_z4;
    uint8_t dig_xy1;
    int8_t dig_xy2;
    uint16_t dig_xyz1;
};
and extract the data in one step:
var mdata = MagnetometerData()
data.getBytes(&mdata, length: sizeofValue(mdata))
This works (if there is no padding between the struct members)
because Swift preserves the layout of structures imported from C.
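In Swift 3 terms, the same struct extraction could look roughly like this (a sketch, assuming data is a Data value and MagnetometerData is visible through the bridging header):
var mdata = MagnetometerData()
withUnsafeMutableBytes(of: &mdata) { rawBuffer in
    // Copies min(data.count, struct size) bytes into mdata.
    _ = data.copyBytes(to: rawBuffer.bindMemory(to: UInt8.self))
}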
A possible Swift 3 implementation of the first approach is
let dig_x1 = ((data as NSData).bytes).load(as: Int8.self)
let dig_y1 = ((data as NSData).bytes + 1).load(as: Int8.self)
// ...
let dig_z1 = ((data as NSData).bytes + 4).load(as: UInt16.self)
let dig_z2 = ((data as NSData).bytes + 6).load(as: Int16.self)
// ...
Again, it is assumed that all values are properly aligned for their
type.
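On newer Swift (5.7 and later), Data.withUnsafeBytes combined with loadUnaligned avoids the alignment caveat altogether; a sketch:
let dig_x1 = data.withUnsafeBytes { $0.loadUnaligned(fromByteOffset: 0, as: Int8.self) }
let dig_z1 = data.withUnsafeBytes { $0.loadUnaligned(fromByteOffset: 4, as: UInt16.self) }
let dig_z2 = data.withUnsafeBytes { $0.loadUnaligned(fromByteOffset: 6, as: Int16.self) }
// loadUnaligned reads in host byte order; wrap the results in
// UInt16(littleEndian:) etc. if the data must be treated as
// little-endian on every platform.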

Calculating CoreMIDI Pitch Bend Values For iOS?

I need to hand assemble 14bit MIDI Pitch Bend values from raw UInt16 values in iOS. I'm wondering if anybody out there has had a chance to come up with an elegant solution? Here's where I'm at - I'll get a chance to test this probably later today, but if I hear back before then, great:
First, some MIDI preliminaries for anybody curious.
MIDI Pitch Bend is broken up into one Status Byte followed by two Data Bytes (it's a 14-bit controller). These two Data Bytes are associated with their Status Byte by each leading with a zero status bit, and the MIDI spec has them appearing in the order MSB -> LSB.
(Edit: Update, it's actually Status -> LSB -> MSB.)
(i.e. 1110 0000, 0111 1111, 0111 1111)
The challenge is how to break up an ARM/Intel 16-bit UInt16 into two 7-bit segments on iOS, in a way that makes sense for MIDI.
Please keep in mind that, because we're dealing with an unsigned integer, a 0 value is NOT neutral pitch bend but rather full pitch down, whereas neutral pitch bend is defined as 8192, and 16,383 is full pitch up.
So here's my best guess as to how to do this:
UInt16 msbAnd = base10ValueUInt16 & 16256; // clearing out LSB (16256 = 0x3F80)
UInt16 msbAndShift = msbAnd << 1; // shift into leading byte, with 0 status bit
UInt16 lsbAnd = base10ValueUInt16 & 127; // isolating LSB
UInt16 finalTwoBytePitchWord = msbAndShift | lsbAnd; // make UInt16 word
UInt16 finalTwoBytePitchWordFlipped = CFSwapInt16HostToBig(finalTwoBytePitchWord); // endian tweak
This code runs fine and seems to create the two Data Bytes with the required zero status bits and flips them around from little endian Intel/ARM which seems to be necessary for MIDI (MIDI is STATUS -> MSB -> LSB ): I can slap on the leading Status Byte with the appropriate MIDI channel later.
So, does this make sense? Has anybody come up with a more elegant solution? ( is there a Library I'm overlooking? ) ... I'll check back in later and also let folks know if this actually worked on the sampler I have to target it at.
Thanks
I think your code is close to right, but it's overly complicated. This question has nothing to do with iOS or endianness or ARM or Intel; it's just plain old C bit-twiddling. If you write the code correctly, it will work on any reasonable platform without modification. You don't need a library; it's only a couple lines of code.
It's best to work with MIDI on a byte-by-byte basis. You want a function that takes a 16-bit unsigned integer (which we'll trust has at most 14 bits worth of value) and returns two single-byte values, one with the most significant bits, one with the least significant bits.
Later on, when you send the message, you assemble the bytes in the appropriate order. According to the specification, pitch wheel messages are three bytes: STATUS, then LSB, then MSB. You have them backwards in your question!
The least-significant 7 bits are easy: just mask off those bits from the original value. The most-significant 7 bits are similar: mask off the next higher 7 bits from the original value, then shift them down.
It doesn't matter whether the 16-bit integers are little-endian or big-endian in memory on your machine; the compiler takes care of that.
Here's a function and a test tool.
#include <stdio.h>
#include <stdint.h> // for C standard uint8_t and uint16_t
// or, if you prefer, use unsigned char and unsigned short, or Byte and UInt16;
// they'll all work, although some are more portable than others

void encode14BitValue(uint16_t value, uint8_t *out_msb, uint8_t *out_lsb)
{
    uint16_t mask = 0x007F; // low 7 bits on; "(1 << 7) - 1" is arguably clearer
    *out_lsb = value & mask;
    *out_msb = (value & (mask << 7)) >> 7;
}

int main(int argc, const char * argv[])
{
    typedef struct {
        uint16_t in;
        uint8_t expected_msb;
        uint8_t expected_lsb;
    } test_case;

    test_case cases[] = {
        { 0x0000, 0x00, 0x00 },
        { 0x0001, 0x00, 0x01 },
        { 0x0002, 0x00, 0x02 },
        { 0x0004, 0x00, 0x04 },
        { 0x0008, 0x00, 0x08 },
        { 0x0009, 0x00, 0x09 },
        { 0x000F, 0x00, 0x0F },
        { 0x0010, 0x00, 0x10 },
        { 0x0011, 0x00, 0x11 },
        { 0x001F, 0x00, 0x1F },
        { 0x0020, 0x00, 0x20 },
        { 0x0040, 0x00, 0x40 },
        { 0x0070, 0x00, 0x70 },
        { 0x007F, 0x00, 0x7F },
        { 0x0080, 0x01, 0x00 },
        { 0x0081, 0x01, 0x01 },
        { 0x008F, 0x01, 0x0F },
        { 0x0090, 0x01, 0x10 },
        { 0x00FF, 0x01, 0x7F },
        { 0x0100, 0x02, 0x00 },
        { 0x0200, 0x04, 0x00 },
        { 0x0400, 0x08, 0x00 },
        { 0x0800, 0x10, 0x00 },
        { 0x1000, 0x20, 0x00 },
        { 0x1FFF, 0x3F, 0x7F },
        { 0x2000, 0x40, 0x00 },
        { 0x2001, 0x40, 0x01 },
        { 0x3FFF, 0x7F, 0x7F },
    };

    int passed = 1;
    for (int i = 0, c = sizeof(cases) / sizeof(cases[0]); i < c; i++) {
        uint8_t msb, lsb;
        encode14BitValue(cases[i].in, &msb, &lsb);
        if (cases[i].expected_msb != msb || cases[i].expected_lsb != lsb) {
            printf("failed: 0x%04hX expected 0x%02hhX 0x%02hhX got 0x%02hhX 0x%02hhX\n",
                   cases[i].in, cases[i].expected_msb, cases[i].expected_lsb, msb, lsb);
            passed = 0;
        }
    }
    return passed ? 0 : 1;
}
In your code, trying to pack the two bytes of result into one 16-bit integer just adds confusion. I don't know why you're doing that, since you're going to have to extract individual bytes again, whenever you send the MIDI anywhere else. That's where any worries about endianness come up, since your packing and unpacking code have to agree. You might as well not bother. I bet your code was incorrect, but your error in swapping MSB and LSB compensated for it.
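For completeness, a minimal Swift sketch of the same bit-twiddling (the 0xE0 status nibble is the pitch-bend status from the MIDI spec; the channel value is just an example):
func encode14BitValue(_ value: UInt16) -> (msb: UInt8, lsb: UInt8) {
    precondition(value <= 0x3FFF, "pitch bend values are at most 14 bits")
    let lsb = UInt8(value & 0x7F)        // low 7 bits
    let msb = UInt8((value >> 7) & 0x7F) // next 7 bits
    return (msb, lsb)
}

let (msb, lsb) = encode14BitValue(8192)           // neutral pitch bend
let channel: UInt8 = 0                            // example channel number
let message: [UInt8] = [0xE0 | channel, lsb, msb] // status, then LSB, then MSB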

Convert 4 bytes into a signed integer

I'm trying to parse a binary file in the browser. I have 4 bytes that represent a 32-bit signed integer.
Is there a straightforward way of converting this to a Dart int, or do I have to calculate the inverse of two's complement manually?
Thanks
Edit: I'm using this for the manual conversion:
int readSignedInt() {
  int value = readUnsignedInt();
  if ((value & 0x80000000) > 0) {
    // This is a negative number. Invert the bits and add 1
    value = (~value & 0xFFFFFFFF) + 1;
    // Add a negative sign
    value = -value;
  }
  return value;
}
You can use ByteData from the dart:typed_data library.
import 'dart:typed_data';

int fromBytesToInt32(int b3, int b2, int b1, int b0) {
  final int8List = Int8List(4)
    ..[3] = b3
    ..[2] = b2
    ..[1] = b1
    ..[0] = b0;
  // getInt32 defaults to big-endian; here index 0 holds the least
  // significant byte, hence Endian.little.
  return int8List.buffer.asByteData().getInt32(0, Endian.little);
}

void main() {
  assert(fromBytesToInt32(0x00, 0x00, 0x00, 0x00) == 0);
  assert(fromBytesToInt32(0x00, 0x00, 0x00, 0x01) == 1);
  assert(fromBytesToInt32(0xF0, 0x00, 0x00, 0x00) == -268435456);
}
Place the 4 bytes in a ByteData and extract the Int32 like this:
import 'dart:typed_data';

void main() {
  // b0...b3 are your four byte values; with getInt32's default
  // big-endian interpretation, list[0] is the most significant byte.
  final list = Int8List(4);
  list[0] = b0;
  list[1] = b1;
  list[2] = b2;
  list[3] = b3;
  int number = list.buffer.asByteData().getInt32(0);
}
John
I'm not exactly sure what you want, but maybe this code sample will give you some ideas:
import 'dart:math';

int bytesToInteger(List<int> bytes) {
  var value = 0;
  for (var i = 0, length = bytes.length; i < length; i++) {
    // Little-endian: bytes[i] contributes 256^i.
    value += bytes[i] * pow(256, i).toInt();
  }
  return value;
}
So let's say we have [50, 100, 150, 250] as our four bytes, interpreted little-endian; the end result is a 32-bit unsigned integer. I have a feeling this isn't exactly what you are looking for, but it might help you.
