Parse BLE manufacturer data - Objective-C - iOS

I'm working on a custom Bluetooth product; the manufacturer has embedded data in the advertisement packet. How do I effectively parse this data so it's usable within an iOS app?
I'm currently grabbing the data from the NSDictionary as follows:
NSData *rawData = [advertisement objectForKey:@"kCBAdvDataManufacturerData"];
The data in the packet is structured like so:
uint8_t compId[2];
uint8_t empty[6];
uint8_t temperature[2];
uint8_t rampRate[2];
uint8_t dutyFactor[2];
uint8_t alarms[2];
uint8_t statusFlag;
uint8_t speedRpm[2];
uint8_t vib[2];
uint8_t deviceTypeId;
uint8_t radioStatus;
uint8_t cycleTimer[2];
uint8_t batteryLevel;
My first thought was to convert it to a string and parse out the data I need, but this seems slow and really inefficient. There has to be a standard way developers deal with this. I'm still pretty green when it comes to bitwise operators. All the data is little-endian.

Certainly don't convert it to a string, as it isn't one, and you'll have issues with encoding.
Start by checking that the length of the data matches what you're expecting (26 bytes).
Use the bytes method to get a pointer to the data.
Add a function or method to combine two bytes into a 16-bit integer. You'll have to find out whether those 2-byte fields are signed or unsigned.
Something along these lines:
- (int)getWordFromBuffer:(const unsigned char *)bytes atOffset:(int)offset
{
    return bytes[offset] | (bytes[offset + 1] << 8);
}
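If a field turns out to be signed, a variant that sign-extends the assembled value would be needed; a minimal sketch (the method name here is made up):
- (int)getSignedWordFromBuffer:(const unsigned char *)bytes atOffset:(int)offset
{
    // assemble the little-endian value, then reinterpret the low 16 bits as signed
    return (int16_t)(bytes[offset] | (bytes[offset + 1] << 8));
}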
- (NSDictionary *)decodeData:(NSData *)data
{
    if (data.length != 26)
    {
        NSLog(@"wrong length %lu instead of 26", (unsigned long)data.length);
        return nil;
    }
    const unsigned char *bytes = (const unsigned char *)data.bytes;
    return @{
        @"compId": @([self getWordFromBuffer:bytes atOffset:0]),
        @"temperature": @([self getWordFromBuffer:bytes atOffset:8]),
        @"rampRate": @([self getWordFromBuffer:bytes atOffset:10]),
        // ...
    };
}
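For completeness, a sketch of how the decoder might be hooked up in the CoreBluetooth delegate, using the CBAdvertisementDataManufacturerDataKey constant rather than the raw string:
- (void)centralManager:(CBCentralManager *)central
 didDiscoverPeripheral:(CBPeripheral *)peripheral
     advertisementData:(NSDictionary *)advertisementData
                  RSSI:(NSNumber *)RSSI
{
    NSData *rawData = advertisementData[CBAdvertisementDataManufacturerDataKey];
    // only attempt to decode when the packet is present and has the expected length
    if (rawData.length == 26) {
        NSDictionary *decoded = [self decodeData:rawData];
        NSLog(@"decoded advertisement: %@", decoded);
    }
}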

Related

Binary hash representation to hex/ASCII in Objective-C

I would like to log a binary hash representation in the console, using a hex or ASCII representation. The algorithm is MD5, so the function is CC_MD5.
I get the binary hash representation via a Theos tweak, which is working well.
EDIT: this tweak intercepts the CC_MD5 call. The call is implemented in the method described below. When CC_MD5 is called, replaced_CC_MD5 intercepts the call.
The app being tested is a simple app I made myself; it uses this method to calculate the MD5 hash:
- (NSString *)md5:(NSString *)input
{
    const char *cStr = [input UTF8String];
    unsigned char digest[16];
    CC_MD5(cStr, (CC_LONG)strlen(cStr), digest); // This is the md5 call
    NSMutableString *output = [NSMutableString stringWithCapacity:CC_MD5_DIGEST_LENGTH * 2];
    for (int i = 0; i < CC_MD5_DIGEST_LENGTH; i++)
        [output appendFormat:@"%02x", digest[i]];
    return output;
}
The hashing is OK, and the app returns the correct hash for the input:
input = prova
MD5 digest = 189bbbb00c5f1fb7fba9ad9285f193d1
The function in my Theos tweak where I intercept the CC_MD5 function is below.
EDIT: data would be cStr, len would be strlen(cStr), and md would be digest.
static unsigned char * replaced_CC_MD5(const void *data, CC_LONG len, unsigned char *md) {
    CC_LONG dataLength = len;
    NSLog(@"==== START CC_MD5 HOOK ====");
    // hex of digest
    NSData *dataDigest = [NSData dataWithBytes:(const void *)md length:(NSUInteger)CC_MD5_DIGEST_LENGTH];
    NSLog(@"%@", dataDigest);
    // hex of string
    NSData *dataString = [NSData dataWithBytes:(const void *)data length:(NSUInteger)dataLength];
    NSLog(@"%@", dataString);
    NSLog(@"==== END CC_MD5 HOOK ====");
    return original_CC_MD5(data, len, md);
}
The log of dataString is OK: 70726f76 61, which is the hex representation of prova.
The log of dataDigest is e9aa0800 01000000 b8c00800 01000000 which is, if I understood correctly, the binary hash representation.
How can I convert this representation to get the MD5 hash digest?
In replaced_CC_MD5 you are displaying md before the call to original_CC_MD5, which is what sets its value. What you are seeing is therefore random data (or whatever was last stored in md).
Move the call to original_CC_MD5 to before the display statements and you should see the value you expect. (You'll of course need to save the result of the call in a local variable so you can return it in the return statement.)
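A sketch of that reordering, with original_CC_MD5 being the saved pointer to the real implementation as in the code above:
static unsigned char * replaced_CC_MD5(const void *data, CC_LONG len, unsigned char *md) {
    // call the real implementation first so md actually holds the digest
    unsigned char *result = original_CC_MD5(data, len, md);
    NSLog(@"==== START CC_MD5 HOOK ====");
    NSLog(@"%@", [NSData dataWithBytes:md length:CC_MD5_DIGEST_LENGTH]); // digest
    NSLog(@"%@", [NSData dataWithBytes:data length:(NSUInteger)len]);    // input
    NSLog(@"==== END CC_MD5 HOOK ====");
    return result;
}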

How to turn 4 bytes into a float in Objective-C from NSData

Here is an example of turning 4 bytes into a 32-bit integer in Objective-C. The function readInt grabs 4 bytes via the read function and then converts them into a single 32-bit int. Does anyone know how I would convert 4 bytes to a float? I believe it is big-endian. Basically I need a readFloat function. I can never grasp these bitwise operations.
EDIT:
I forgot to mention that the original data comes from Java's DataOutputStream class. The writeFloat function, according to the Java docs, does the following:
Converts the float argument to an int using the floatToIntBits method
in class Float, and then writes that int value to the underlying
output stream as a 4-byte quantity, high byte first.
This is Objective-C trying to extract the data written by Java.
- (int32_t)read {
    int8_t v;
    [data getBytes:&v range:NSMakeRange(length, 1)];
    length++;
    return ((int32_t)v & 0x0ff);
}
- (int32_t)readInt {
    int32_t ch1 = [self read];
    int32_t ch2 = [self read];
    int32_t ch3 = [self read];
    int32_t ch4 = [self read];
    if ((ch1 | ch2 | ch3 | ch4) < 0) {
        @throw [NSException exceptionWithName:@"Exception" reason:@"EOFException" userInfo:nil];
    }
    return ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));
}
OSByteOrder.h contains functions for reading, writing, and converting integer data.
You can use OSSwapBigToHostInt32() to convert a big-endian integer to the native representation, then copy the bits into a float:
NSData *data = [NSData dataWithContentsOfFile:@"/tmp/java/test.dat"];
int32_t bytes;
[data getBytes:&bytes length:sizeof(bytes)];
bytes = OSSwapBigToHostInt32(bytes);
float number;
memcpy(&number, &bytes, sizeof(bytes));
NSLog(@"Float %f", number);
[data getBytes:&myFloat range:NSMakeRange(locationWhereFloatStarts, sizeof(float))] ought to do the trick.
Given that the data comes from DataOutputStream's writeFloat() method, that is documented to use Float.floatToIntBits() to create the integer representation. intBitsToFloat() further documents how to interpret that representation.
I'm not sure if it's the same thing, but the xdr API seems like it might handle that representation. The credits on the man page refer to Sun Microsystems standards/specifications, so it seems likely it's related to Java.
So, it may work to do something like:
// At top of file:
#include <rpc/types.h>
#include <rpc/xdr.h>
// In some function or method:
XDR xdr;
xdrmem_create(&xdr, (char *)data.bytes + offset, data.length - offset, XDR_DECODE);
float f;
if (!xdr_float(&xdr, &f))
    /* handle error */;
xdr_destroy(&xdr);
If the data consists of a whole stream in eXternal Data Representation, then you would create one XDR stream for the whole task of extracting items from it, and use many xdr_...() calls between creating and destroying it to extract all of the items.

Converting NSData to an NSString returns random characters

I am working on a Bluetooth iOS project and have managed to get some data from the Bluetooth device.
However, I am struggling to convert this data into something useful, such as an NSString. Whenever I try to NSLog the NSString that was converted from the NSData received, it is a bunch of gibberish. The output is:
ēဥ၆䄀
The Bluetooth device is a heart monitor from a manufacturer in Asia, and they have provided the protocol reference on how to make calls to the device. This is one thing they mention in the protocol reference:
The PC send 16-byte packets to the device, then the device sent back the 16-byte packets. Except for some special commands, all others can use this communication mode.
Can anyone tell me what I am doing wrong? I have tried everything I know, including every single encoding in the apple docs as well as both initWithData and initWithBytes. Thanks!
- (void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error
{
    if (error)
    {
        NSLog(@"error in read is %@", error.description);
        return;
    }
    NSData *data = characteristic.value;
    NSString *myString = [[NSString alloc] initWithBytes:[data bytes] length:[data length] encoding:NSUTF16StringEncoding];
    NSLog(@"Value from device is %@", myString); // OUTPUT IS ēဥ၆䄀
}
What you have here is a string of raw data that can't be directly converted into a human readable string - unless you consider hex-representation to be human readable :)
To make sense of this data you need to either have a protocol specification at hand or prepare for hours (sometimes) days of reverse-engineering.
This byte-sequence can be composed of multiple values formatted in standard (float IEEE 754, uint8_t, uint16_t...) or even proprietary formats.
One important thing to consider when communicating with the outside world is endianness (i.e. does the most significant byte of a multi-byte value come first or last).
There are many ways to manipulate this data. To get the raw array of bytes you could do:
NSData *rxData = ...
const uint8_t *bytes = (const uint8_t *)[rxData bytes];
And then if (for example) the first byte tells you what type of payload the data holds, you can switch on it:
switch (bytes[0])
{
    case 0x00:
        // first byte 0x00: do the parsing
        break;
    case 0x01:
        // first byte 0x01: do the parsing
        break;
    // ...
    default:
        break;
}
Here would be an example of parsing data that consists of:
byte 0: byte holding some bit-coded flags
bytes 1,2,3,4: 32-bit float
bytes 5,6: uint16_t
bool bitFlag0;
bool bitFlag1;
bool bitFlag2;
bool bitFlag3;
uint8_t firstByte;
float theFloat;
uint16_t theInteger;
NSData *rxData = ...
const uint8_t *bytes = (const uint8_t *)[rxData bytes];
// getting the flags
firstByte = bytes[0];
bitFlag0 = firstByte & 0x01;
bitFlag1 = firstByte & 0x02;
bitFlag2 = firstByte & 0x04;
bitFlag3 = firstByte & 0x08;
// getting the float (bytes 1 to 4)
[[rxData subdataWithRange:NSMakeRange(1, 4)] getBytes:&theFloat length:sizeof(float)];
NSLog(@"the float is %.2f", theFloat);
// getting the unsigned integer (bytes 5 and 6)
[[rxData subdataWithRange:NSMakeRange(5, 2)] getBytes:&theInteger length:sizeof(uint16_t)];
NSLog(@"the integer is %u", theInteger);
One note: depending on the endianness you might need to reverse the 4 float bytes or the 2 uint16_t bytes before converting them. Converting these byte arrays can also be done with unions:
typedef union
{
    uint8_t b[4];
    float f;
} bytesToFloat;
and then:
bytesToFloat conv;
// float would be written on bytes b1 b2 b3 b4 in the protocol
conv.b[0] = bytes[1]; // or bytes[4] .. endianness!
conv.b[1] = bytes[2]; // or bytes[3] .. endianness!
conv.b[2] = bytes[3]; // or bytes[2] .. endianness!
conv.b[3] = bytes[4]; // or bytes[1] .. endianness!
theFloat = conv.f;
If for example you know that bytes 6 and 7 represent a uint16_t value, you can calculate it from the raw bytes:
value = (uint16_t)((bytes[6] << 8) + bytes[7]);
or (again, endianness):
value = (uint16_t)((bytes[7] << 8) + bytes[6]);
One more note: simply using sizeof(float) is a bit risky, since the C standard does not fix the size of float; when decoding a wire format it is safer to use explicitly sized types.
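Along the lines of that note, a small helper can pin down both the size and the byte order explicitly. A sketch (the helper name is made up, and it assumes the wire format is a little-endian 32-bit IEEE 754 float):
#include <libkern/OSByteOrder.h>
#include <string.h>

static float readLittleEndianFloat(const uint8_t *bytes, size_t offset)
{
    uint32_t raw;
    memcpy(&raw, bytes + offset, sizeof(raw)); // memcpy avoids unaligned access
    raw = OSSwapLittleToHostInt32(raw);        // no-op on little-endian hosts
    float f;
    memcpy(&f, &raw, sizeof(f));
    return f;
}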

Why does the following code return incorrect values from NSData?

I need to send data across the network as NSData. As the format may be determined only at runtime (e.g. message type, number of objects, types of objects), I am using the following code to pack/unpack the NSData.
To pack:
NSMutableData *data = [NSMutableData dataWithCapacity:0];
unsigned int _state = 66;
[data appendBytes:&_state length:sizeof(_state)];
To unpack (after receiving on a different iOS device):
void *buffer = malloc(255);
[data getBytes:buffer length:sizeof(unsigned int)];
unsigned int _state = (unsigned int)buffer;
....
I am using the buffer because eventually there will be many different ints/floats etc. stored in the NSData. The first int may determine the type of message, the second int the number of floats stored, and so on. I send and receive the data using Apple's Game Center APIs:
- (BOOL)sendData:(NSData *)data toPlayers:(NSArray *)playerIDs withDataMode:(GKMatchSendDataMode)mode error:(NSError **)error
-(void)match:(GKMatch *)match didReceiveData:(NSData *)data fromPlayer:(NSString *)playerID
But the problem is, when I unpack the single int, instead of getting 66 I get some random value like 401488960 or 399903824 (it's different each time I unpack, even though I am sending 66 each time). Why is the data incorrect? Am I unpacking incorrectly?
You are casting the pointer buffer to unsigned int: you are assigning the memory address to _state, not the value at that address. Use a pointer of the appropriate type (unsigned int *) instead, and dereference it:
unsigned int _state = *(unsigned int *)buffer;
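Extending that to the multi-field case the question describes, a sketch of unpacking several values in sequence (hypothetical layout; it assumes both devices agree on the types, order, and byte order of the fields):
const unsigned char *p = [data bytes];
unsigned int state;
memcpy(&state, p, sizeof(state));           // field 1: message type
p += sizeof(state);
unsigned int floatCount;
memcpy(&floatCount, p, sizeof(floatCount)); // field 2: number of floats
p += sizeof(floatCount);
float firstValue;
memcpy(&firstValue, p, sizeof(firstValue)); // field 3: first payload float
p += sizeof(firstValue);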

AES decryption \0 character - iOS

I have a problem: when I decrypt the data that is returned from my PHP page, if the length of the string is less than 16, the char \0 is appended to the string.
The original string is: 100000065912248
I decrypt the encrypted string with this function:
#define FBENCRYPT_ALGORITHM kCCAlgorithmAES128
#define FBENCRYPT_BLOCK_SIZE kCCBlockSizeAES128
#define FBENCRYPT_KEY_SIZE kCCKeySizeAES256
+ (NSData *)decryptData:(NSData *)data key:(NSData *)key iv:(NSData *)iv
{
    NSData *result = nil;
    // setup key
    unsigned char cKey[FBENCRYPT_KEY_SIZE];
    bzero(cKey, sizeof(cKey));
    [key getBytes:cKey length:FBENCRYPT_KEY_SIZE];
    // setup iv
    char cIv[FBENCRYPT_BLOCK_SIZE];
    bzero(cIv, FBENCRYPT_BLOCK_SIZE);
    if (iv) {
        [iv getBytes:cIv length:FBENCRYPT_BLOCK_SIZE];
    }
    // setup output buffer
    size_t bufferSize = [data length] + FBENCRYPT_BLOCK_SIZE;
    void *buffer = malloc(bufferSize);
    int length = (int)[data length];
    // do decrypt
    size_t decryptedSize = 0;
    CCCryptorStatus cryptStatus = CCCrypt(kCCDecrypt,
                                          FBENCRYPT_ALGORITHM,
                                          0,
                                          cKey,
                                          FBENCRYPT_KEY_SIZE,
                                          cIv,
                                          [data bytes],
                                          [data length],
                                          buffer,
                                          bufferSize,
                                          &decryptedSize);
    if (cryptStatus == kCCSuccess) {
        result = [NSData dataWithBytesNoCopy:buffer length:decryptedSize];
    } else {
        free(buffer);
        NSLog(@"[ERROR] failed to decrypt | CCCryptoStatus: %d", cryptStatus);
    }
    return result;
}
I send a nil "iv" parameter to the function and after i use "cIv" in function, and it contain this:
The result is exactly, but the length of string is 16 instead of 15 (string: 100000065912248). In fact, the last character is \0.
Why? how can i solve?
EDIT:
PHP encrypt function:
function encrypt($plaintext) {
    $key = 'a16byteslongkey!a16byteslongkey!';
    $base64encoded_ciphertext = base64_encode(mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, MCRYPT_MODE_CBC));
    $base64encoded_ciphertext = trim($base64encoded_ciphertext);
    return $base64encoded_ciphertext;
}
AES is a block cipher and encrypts/decrypts in blocks of 128 bits (16 bytes). So if the data is not a multiple of the block size, padding must be added. The most popular scheme, and the one supported by Apple, is PKCS7.
When interfacing with PHP one must consider padding and possible Base64 encoding.
The solution is to use the same padding on both sides, PHP and iOS.
AES always operates on 16 bytes; there is no option. So if you have 15 bytes, a byte is going to have to be added: that is padding. From what I understand (not much about PHP encryption), PHP does not do true PKCS7 padding and it is best to pad yourself. Look up PKCS7 on Wikipedia.
You should be OK with zero padding (the default) if you only operate on strings, but I would recommend PKCS#7 padding, if only for interoperability reasons.
With zero padding the plaintext is padded with 00-valued bytes, but only if required. This is different from PKCS#7 padding, which is always applied. After decryption you can use the trim function on the resulting plaintext; you should then get the original string back.
This obviously won't work on binary data, because it may end with a character that is removed by the trim function. Beware that trim in PHP seems to strip off 00 bytes. This is not a given; officially 00 is not whitespace, even though it is treated that way by many runtimes.
You have to remove the padding from the decrypted data:
function removePadding($decryptedText) {
    $strPad = ord($decryptedText[strlen($decryptedText) - 1]);
    $decryptedText = substr($decryptedText, 0, -$strPad);
    return $decryptedText;
}
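On the iOS side, if both ends agree on PKCS#7, CommonCrypto can validate and strip the padding itself: pass kCCOptionPKCS7Padding instead of 0 as the options argument of CCCrypt in the decryptData:key:iv: method above. A sketch of just that call (only correct if the PHP side really pads with PKCS#7 rather than zeros):
CCCryptorStatus cryptStatus = CCCrypt(kCCDecrypt,
                                      FBENCRYPT_ALGORITHM,
                                      kCCOptionPKCS7Padding, // CommonCrypto removes the padding
                                      cKey,
                                      FBENCRYPT_KEY_SIZE,
                                      cIv,
                                      [data bytes],
                                      [data length],
                                      buffer,
                                      bufferSize,
                                      &decryptedSize);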
