How to get bytes into the right order when they pass between different platforms? - ios

I'm developing an app which passes bytes over the network; the server declares that its byte order is big-endian. In my app, I wrap my data with a header that takes 2 bytes, and I assign the bytes as follows:
int length = [self.dataLengthHeader length];
if (length <= 255) {
    high = 0;
    low = length;
} else if (length == 256) {
    high = 1;
    low = 0;
} else {
    high = length / 256;
    low = length % 256;
}
Byte byte[] = {high, low};
NSLog(@"%hhu %hhu", high, low);
NSMutableData *dataToSend = [NSMutableData dataWithBytes:byte length:2];
For example
The first byte is 00(8 bits), the second is 05(8 bits)
When another app receives the header, it parses the 2-byte header into two ints (two NSIntegers would be better) to get the information about the real message.
NSData *twoBytes = [NSData dataWithBytes:payloadptr length:2];
NSData *low = [twoBytes subdataWithRange:NSMakeRange(1, 1)];
int lowP;
[low getBytes:&lowP length:sizeof(lowP)];
NSData *high = [twoBytes subdataWithRange:NSMakeRange(0, 1)];
int highP;
[high getBytes:&highP length:sizeof(highP)];
When I log the values, they turn out to be something like this: highP = 70074112, lowP = 365573.
I can never get the correct result; could anybody help me?
Any help would be appreciated!

Read about serialization.
You could roll your own using e.g. htonl(3) or endian(3).
You could use XDR with RPCGEN, or ASN.1.
You could use libs11n (in C++). You could also consider protocol buffers, etc...
Unless you have a lot of data or tight bandwidth constraints, you may consider using textual serialization formats like JSON (they are somewhat more flexible, easier to debug, etc.) or binary counterparts like BSON. Notice that sending data over a network is much slower than your CPU, so the overhead of textual serialization is generally lost in the noise (even if you compress it).
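For the 2-byte header in the question specifically, here is a minimal sketch of both directions using htons/ntohs from <arpa/inet.h> (assuming the length always fits in an unsigned 16-bit value; variable names reuse those from the question):
#include <arpa/inet.h>  // htons / ntohs

// Sending side: convert to network (big-endian) byte order before packing.
uint16_t length = (uint16_t)[self.dataLengthHeader length];
uint16_t beLength = htons(length);
NSMutableData *dataToSend = [NSMutableData dataWithBytes:&beLength length:sizeof(beLength)];

// Receiving side: read the two header bytes, then convert back to host order.
uint16_t beReceived = 0;
[twoBytes getBytes:&beReceived length:sizeof(beReceived)];
NSUInteger messageLength = ntohs(beReceived);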

Related

Zlib decompression method warning using ios 64bit Architecture

I am just updating one of my applications and I have run into a warning in my decompression method.
This is the warning I am experiencing
Implicit conversion loses integer precision: 'unsigned long' to 'uInt' (aka 'unsigned int')
This is the line of code it's happening on:
stream.avail_in = len - stream.total_in;
And this is what the whole method looks like
#pragma mark - ZLib Compression Methods
// Returns the decompressed version of the zlib compressed input data or nil if there was an error
- (NSData *)dataByDecompressingData:(NSData *)data {
    NSLog(@"%lu", (unsigned long)data.length);
    Byte *bytes = (Byte *)[data bytes];
    NSInteger len = [data length];
    NSMutableData *decompressedData = [[NSMutableData alloc] initWithCapacity:COMPRESSION_BLOCK];
    Byte *decompressedBytes = (Byte *)malloc(COMPRESSION_BLOCK);
    z_stream stream;
    int err;

    stream.zalloc = (alloc_func)0;
    stream.zfree = (free_func)0;
    stream.opaque = (voidpf)0;
    stream.next_in = bytes;

    err = inflateInit(&stream);
    CHECK_ERR(err, @"inflateInit");

    while (true) {
        stream.avail_in = len - stream.total_in;
        stream.next_out = decompressedBytes;
        stream.avail_out = COMPRESSION_BLOCK;
        err = inflate(&stream, Z_NO_FLUSH);
        [decompressedData appendBytes:decompressedBytes length:(stream.total_out - [decompressedData length])];
        if (err == Z_STREAM_END)
            break;
        CHECK_ERR(err, @"inflate");
    }

    err = inflateEnd(&stream);
    CHECK_ERR(err, @"inflateEnd");
    free(decompressedBytes);
    return decompressedData;
}
First off, you should not use stream.total_in. It may or may not be a large enough type for your application. It is always unsigned long. Use your own total input counter sized for your application and ignore stream.total_in.
Second, I'm guessing that your CHECK_ERR() aborts somehow. You should not abort in the event of a Z_BUF_ERROR. In that case, you can continue by simply providing more input and/or more output space.
Third, the problem here is that you need to pick a stream.avail_in that is assured to fit in unsigned. You should be comparing the amount of remaining input to the largest value of unsigned, e.g. UINT_MAX or (unsigned)0 - 1. If the remaining data is larger, use the max value and deduct that from the remaining input. If smaller or equal, use all of it and set the remaining input to zero.
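A sketch of how those three points might look when applied to the loop above (reusing the variable names from the method; a real implementation should also guard against iterations that make no progress):
#include <limits.h>

size_t totalIn = 0;  // our own input counter, instead of stream.total_in
while (true) {
    // Clamp the chunk we hand to zlib so it always fits in an unsigned int.
    size_t remaining = (size_t)len - totalIn;
    unsigned chunkIn = remaining > UINT_MAX ? UINT_MAX : (unsigned)remaining;
    stream.next_in = bytes + totalIn;
    stream.avail_in = chunkIn;
    stream.next_out = decompressedBytes;
    stream.avail_out = COMPRESSION_BLOCK;

    err = inflate(&stream, Z_NO_FLUSH);
    totalIn += chunkIn - stream.avail_in;  // what inflate actually consumed this pass
    [decompressedData appendBytes:decompressedBytes length:COMPRESSION_BLOCK - stream.avail_out];

    if (err == Z_STREAM_END)
        break;
    // Z_BUF_ERROR just means "give me more input and/or output space"; don't abort on it.
    if (err != Z_OK && err != Z_BUF_ERROR) {
        // handle the real error (e.g. clean up and return nil)
        break;
    }
}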

How to turn 4 bytes into a float in objective-c from NSData

Here is an example of turning 4 bytes into a 32-bit integer in Objective-C. The readInt function grabs 4 bytes from the read function and then converts them into a single 32-bit int. Does anyone know how I would convert 4 bytes to a float? I believe it is big-endian. Basically I need a readFloat function. I can never grasp these bitwise operations.
EDIT:
I forgot to mention that the original data comes from Java's DataOutputStream class. The writeFloat function, according to the Java doc, is:
Converts the float argument to an int using the floatToIntBits method
in class Float, and then writes that int value to the underlying
output stream as a 4-byte quantity, high byte first.
This is Objective-C trying to extract the data written by Java.
- (int32_t)read {
    int8_t v;
    [data getBytes:&v range:NSMakeRange(length, 1)];
    length++;
    return ((int32_t)v & 0x0ff);
}

- (int32_t)readInt {
    int32_t ch1 = [self read];
    int32_t ch2 = [self read];
    int32_t ch3 = [self read];
    int32_t ch4 = [self read];
    if ((ch1 | ch2 | ch3 | ch4) < 0) {
        @throw [NSException exceptionWithName:@"Exception" reason:@"EOFException" userInfo:nil];
    }
    return ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));
}
OSByteOrder.h contains functions for reading, writing, and converting integer data.
You can use OSSwapBigToHostInt32() to convert a big-endian integer to the native representation, then copy the bits into a float:
NSData *data = [NSData dataWithContentsOfFile:@"/tmp/java/test.dat"];
int32_t bytes;
[data getBytes:&bytes length:sizeof(bytes)];
bytes = OSSwapBigToHostInt32(bytes);
float number;
memcpy(&number, &bytes, sizeof(bytes));
NSLog(@"Float %f", number);
[data getBytes:&myFloat range:NSMakeRange(locationWhereFloatStarts, sizeof(float))] ought to do the trick.
Given that the data comes from DataOutputStream's writeFloat() method, then that is documented to use Float.floatToIntBits() to create the integer representation. intBitsToFloat() further documents how to interpret that representation.
I'm not sure if it's the same thing, but the xdr API seems like it might handle that representation. The credits on the man page refer to Sun Microsystems standards/specifications, so it seems likely it's related to Java.
So, it may work to do something like:
// At top of file:
#include <rpc/types.h>
#include <rpc/xdr.h>
// In some function or method:
XDR xdr;
xdrmem_create(&xdr, (char*)data.bytes + offset, data.length - offset, XDR_DECODE);
float f;
if (!xdr_float(&xdr, &f))
/* handle error */;
xdr_destroy(&xdr);
If the data consists of a whole stream in eXternal Data Representation, then you would create one XDR stream for the whole task of extracting items from it, and use many xdr_...() calls between creating and destroying it to extract all of the items.

Random 256bit key using SecRandomCopyBytes( ) in iOS

I have been using UUIDString as an encryption key for the files stored on my iPad, but the security review done on my app by a third party suggested the following.
With the launch of the application, a global database key is generated and stored in the keychain. During generation, the method UUIDString of the class NSUUID provided by the iOS is used. This function generates a random string composed of letters A to F, numbers and hyphens and unnecessarily restricts the key space, resulting in a weakening of the entropy.
Since the key is used only by application logic and does not have to be read, understood or processed by an individual, there is no need to restrict the key space to readable characters. Therefore, a random 256-bit key generated via SecRandomCopyBytes () should be used as the master key.
Now I have searched a lot and tried some code, but I haven't found exactly what is needed.
What I have tried:
NSMutableData* data = [NSMutableData dataWithLength:32];
int result = SecRandomCopyBytes(kSecRandomDefault, 32, data.mutableBytes);
NSLog(#"Description %d",result);
My understanding was that this should give me an integer which I should convert to an NSString and use as my key, but I am pretty sure that is not what is required here, and the above call always gives 0 as the result. I am completely lost here and any help is appreciated.
Thanks.
The result of SecRandomCopyBytes should always be 0, unless there is some error (which I can't imagine why that might happen), in which case the result would be -1. You're not going to convert that into an NSString.
The thing you're trying to get are the random bytes which are being written into the mutable bytes section, and that's what you'll be using as your "master key" instead of the UUID string.
The way I would do it would be:
uint8_t randomBytes[16];
int result = SecRandomCopyBytes(kSecRandomDefault, 16, randomBytes);
if (result == 0) {
    NSMutableString *uuidStringReplacement = [[NSMutableString alloc] initWithCapacity:16 * 2];
    for (NSInteger index = 0; index < 16; index++) {
        [uuidStringReplacement appendFormat:@"%02x", randomBytes[index]];
    }
    NSLog(@"uuidStringReplacement is %@", uuidStringReplacement);
} else {
    NSLog(@"SecRandomCopyBytes failed for some reason");
}
Using a UUIDString feels secure enough to me, but it sounds like your third party security audit firm is trying really hard to justify their fees.
EDITED: since I'm now starting to collect downvotes because of Vlad's alternative answer and I can't delete mine (as it still has the accepted checkmark), here's another version of my code. I'm doing it with 16 random bytes (which gets doubled in converting to Hex).
The NSData generated does not guarantee UTF-16 characters.
This method will generate a 32-byte UTF string, which is equivalent to 256 bits. (The advantage is that this is plain text and can be sent in GET requests, etc.)
Since the Base64 output is (4/3) × the length of the input, we can work out that the input required to generate a 32-character Base64 string is 24 bytes long. Note: Base64 may pad the end with one, two or no '=' characters if the input length is not divisible by 3.
With OSX 10.9 & iOS 7 you can use:
-[NSData base64EncodedDataWithOptions:]
This method can be used to generate your UUID:
+ (NSString *)generateSecureUUID {
    NSMutableData *data = [NSMutableData dataWithLength:24];
    int result = SecRandomCopyBytes(NULL, 24, data.mutableBytes);
    NSAssert(result == 0, @"Error generating random bytes: %d", result);
    NSString *base64EncodedData = [data base64EncodedStringWithOptions:0];
    return base64EncodedData;
}
A UUID is a 16-byte (128-bit) unique identifier, so you aren't using a 256-bit key here. Also, as @zaph pointed out, UUIDs use hardware identifiers and other inputs to guarantee uniqueness. These factors being predictable, they are definitely not cryptographically secure.
You don't have to use a UUID as an encryption key; instead I would go for base-64 or hexadecimal encoded data of 32 bytes, so you'll have your 256-bit cryptographically secure key:
/** Generates a 256 bits cryptographically secure key.
 *  The output will be a 44 characters base 64 string (32 bytes data
 *  before the base 64 encoding).
 *  @return A base 64 encoded 256 bits secure key.
 */
+ (NSString *)generateSecureKey
{
    NSMutableData *data = [NSMutableData dataWithLength:32];
    int result = SecRandomCopyBytes(kSecRandomDefault, 32, data.mutableBytes);
    if (result != noErr) {
        return nil;
    }
    return [data base64EncodedStringWithOptions:kNilOptions];
}
To answer the part about generating UUID-like (secure) random numbers, here's a good way, but remember these will be 128-bit keys only:
/** Generates a 128 bits cryptographically secure key, formatted as a UUID.
 *  Keep in mind that you won't have the same guarantee of uniqueness
 *  as you have with regular UUIDs.
 *  @return A cryptographically secure UUID.
 */
+ (NSString *)generateCryptoSecureUUID
{
    unsigned char bytes[16];
    int result = SecRandomCopyBytes(kSecRandomDefault, 16, bytes);
    if (result != noErr) {
        return nil;
    }
    return [[NSUUID alloc] initWithUUIDBytes:bytes].UUIDString;
}
Cryptography is great, but doing it right is really hard (it's easy to leave security holes). I cannot recommend enough the use of RNCryptor, which will push you toward good encryption standards, make sure you're not unsafely reusing the same keys, derive encryption keys from passwords correctly, etc.
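Since the review also calls for storing the generated key in the keychain, here is a minimal hedged sketch using SecItemAdd (the service and account strings are placeholders, and error handling is reduced to a log):
#import <Security/Security.h>

NSString *key = [self generateSecureKey];
NSDictionary *attributes = @{
    (__bridge id)kSecClass:       (__bridge id)kSecClassGenericPassword,
    (__bridge id)kSecAttrService: @"com.example.myapp",     // placeholder service name
    (__bridge id)kSecAttrAccount: @"databaseMasterKey",     // placeholder account name
    (__bridge id)kSecValueData:   [key dataUsingEncoding:NSUTF8StringEncoding],
};
OSStatus status = SecItemAdd((__bridge CFDictionaryRef)attributes, NULL);
if (status != errSecSuccess) {
    NSLog(@"Keychain write failed: %d", (int)status);
}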
And I tried this code with a length of 16 and 16 bytes:
uint8_t randomBytes[16];
NSMutableString *ivStr;
int result = SecRandomCopyBytes(kSecRandomDefault, 16, randomBytes);
if (result == 0) {
    ivStr = [[NSMutableString alloc] initWithCapacity:16];
    for (NSInteger index = 0; index < 8; index++) {
        [ivStr appendFormat:@"%02x", randomBytes[index]];
    }
    NSLog(@"uuidStringReplacement is %@", ivStr);
} else {
    NSLog(@"SecRandomCopyBytes failed for some reason");
}
Successful
Since the key usually needs to be UTF-8 encoded and "readable" - i.e. with no UTF-8 control characters - I decided to filter the randomly generated bytes produced by SecRandomCopyBytes so they only contain characters from the Basic Latin Unicode block.
/*!
 * @brief Generates NSData from a randomly generated byte array with a specific number of bits
 * @param numberOfBits the number of bits the generated data must have
 * @return the randomly generated NSData
 */
+ (NSData *)randomKeyDataGeneratorWithNumberBits:(int)numberOfBits {
    int numberOfBytes = numberOfBits / 8;
    uint8_t randomBytes[numberOfBytes];
    int result = SecRandomCopyBytes(kSecRandomDefault, numberOfBytes, randomBytes);
    if (result == 0) {
        return [NSData dataWithBytes:randomBytes length:numberOfBytes];
    } else {
        return nil;
    }
}
/*!
 * @brief Generates UTF-8 NSData from a randomly generated byte array with a specific number of bits
 * @param numberOfBits the number of bits the generated data must have
 * @return the randomly generated NSData
 */
+ (NSData *)randomKeyUTF8DataGeneratorWithNumberBits:(int)numberOfBits {
    NSMutableData *result = [[NSMutableData alloc] init];
    int numberOfBytes = numberOfBits / 8;
    while (result.length < numberOfBytes) {
        // Creates a random byte
        NSData *byte = [self randomKeyDataGeneratorWithNumberBits:8];
        int asciiValue = [[[NSString alloc] initWithData:byte encoding:NSUTF8StringEncoding] characterAtIndex:0];
        // Checks if the byte is a printable Basic Latin character
        if (asciiValue > 32 && asciiValue < 127) {
            [result appendData:byte];
        }
    }
    return result;
}
If you want to make your key a little more "readable" you can try and make it Base64 URL Safe
/*!
 * @brief Encodes a String Base 64 with URL and Filename Safe Alphabet
 * @discussion Base64url Encoding: the URL- and filename-safe Base64 encoding described in RFC 4648 (https://tools.ietf.org/html/rfc4648),
 *             Section 5 (https://tools.ietf.org/html/rfc4648#section-5)
 * @param string the string to be encoded
 * @return the encoded string
 */
+ (NSString *)base64URLandFilenameSafeString:(NSString *)string {
    NSString *base64String = string;
    base64String = [base64String stringByReplacingOccurrencesOfString:@"/"
                                                            withString:@"_"];
    base64String = [base64String stringByReplacingOccurrencesOfString:@"+"
                                                            withString:@"-"];
    return base64String;
}
Generate a UTF-8 256-bit key:
NSData *key = [self randomKeyUTF8DataGeneratorWithNumberBits:256];
NSString *UTF8String = [[NSString alloc] initWithBytes:[key bytes] length:key.length encoding:NSUTF8StringEncoding];
NSString *base64URLSafeString = [self base64URLandFilenameSafeString:UTF8String];

Convert NSData to a NSString returns random characters

I am working on a bluetooth iOS project and have managed to get some data from the bluetooth device.
However, I am struggling to convert this data into something useful, such as an NSString. Whenever I try to NSLog the NSString that was converted from the NSData received, it is a bunch of gibberish. The output is:
ēဥ၆䄀
The bluetooth device is a heart monitor from a manufacturer in Asia and they have provided the protocol reference on how to make calls to the device. This one thing they mention in the protocol reference:
The PC send 16-byte packets to the device, then the device sent back the 16-byte packets. Except for some special commands, all others can use this communication mode.
Can anyone tell me what I am doing wrong? I have tried everything I know, including every single encoding in the apple docs as well as both initWithData and initWithBytes. Thanks!
- (void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic
             error:(NSError *)error {
    if (error) {
        NSLog(@"error in read is %@", error.description);
        return;
    }
    NSData *data = characteristic.value;
    NSString *myString = [[NSString alloc] initWithBytes:[data bytes] length:[data length] encoding:NSUTF16StringEncoding];
    NSLog(@"Value from device is %@", myString); // OUTPUT IS ēဥ၆䄀
}
What you have here is a string of raw data that can't be directly converted into a human readable string - unless you consider hex-representation to be human readable :)
To make sense of this data you need to either have a protocol specification at hand or prepare for hours (sometimes) days of reverse-engineering.
This byte-sequence can be composed of multiple values formatted in standard (float IEEE 754, uint8_t, uint16_t...) or even proprietary formats.
One important thing to consider when communicating with the outside world is also endianness (ie: does the 'biggest' byte in multi-byte format come first or last).
There are many ways to manipulate this data. To get the raw array of bytes you could do:
NSData *rxData = ...
uint8_t *bytes = (uint8_t *)[rxData bytes];
And then if (for example) first byte tells you what type of payload the string holds you can switch like:
switch (bytes[0])
{
    case 0x00:
        // first byte 0x00: do the parsing
        break;
    case 0x01:
        // first byte 0x01: do the parsing
        break;
    // ...
    default:
        break;
}
Here would be an example of parsing data that consists of:
byte 0: byte holding some bit-coded flags
bytes 1,2,3,4: 32-bit float
bytes 5,6: uint16_t
bool bitFlag0;
bool bitFlag1;
bool bitFlag2;
bool bitFlag3;
uint8_t firstByte;
float theFloat;
uint16_t theInteger;

NSData *rxData = ...
uint8_t *bytes = (uint8_t *)[rxData bytes];

// getting the flags
firstByte = bytes[0];
bitFlag0 = firstByte & 0x01;
bitFlag1 = firstByte & 0x02;
bitFlag2 = firstByte & 0x04;
bitFlag3 = firstByte & 0x08;

// getting the float (bytes 1-4)
[[rxData subdataWithRange:NSMakeRange(1, 4)] getBytes:&theFloat length:sizeof(float)];
NSLog(@"the float is %.2f", theFloat);

// getting the unsigned integer (bytes 5-6)
[[rxData subdataWithRange:NSMakeRange(5, 2)] getBytes:&theInteger length:sizeof(uint16_t)];
NSLog(@"the integer is %u", theInteger);
One note: depending on the endianness you might need to reverse the 4 float bytes or the 2 uint16_t bytes before converting them. Converting these byte arrays can also be done with unions.
union bytesToFloat
{
uint8_t b[4];
float f;
};
and then:
union bytesToFloat conv;
// the float is written in bytes b1 b2 b3 b4 of the protocol
conv.b[0] = bytes[1]; // or bytes[4] .. endianness!
conv.b[1] = bytes[2]; // or bytes[3] .. endianness!
conv.b[2] = bytes[3]; // or bytes[2] .. endianness!
conv.b[3] = bytes[4]; // or bytes[1] .. endianness!
theFloat = conv.f;
If for example you know that byte6 and byte7 represent an uint16_t value you can calculate it from raw bytes:
value = (uint16_t)((bytes[6] << 8) + bytes[7]);
or (again - endianness):
value = (uint16_t)((bytes[7] << 8) + bytes[6]);
One more note: using simply sizeof(float) is a bit risky since float can be 32-bit on one platform and 64-bit on another.
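As a debugging aid, before trying to decode anything it often helps to look at the raw packet as hex (the "hex-representation" mentioned at the top). A small sketch of a helper (the method name is just illustrative):
// Logs every byte of an NSData object as two hex digits, e.g. "0a1b2c...".
+ (NSString *)hexStringFromData:(NSData *)data {
    const uint8_t *bytes = (const uint8_t *)data.bytes;
    NSMutableString *hex = [NSMutableString stringWithCapacity:data.length * 2];
    for (NSUInteger i = 0; i < data.length; i++) {
        [hex appendFormat:@"%02x", bytes[i]];
    }
    return hex;
}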

NSData Packet Interpretation

I have a fairly complex issue regarding the interpretation of packets in an app that I am making. A host app sends a packet to client apps with the following structure:
[Header of 10 bytes][peerID of selected client of variable byte length][empty byte][peerID of a client of variable byte length][empty byte][int of 4 bytes][peerID of client of variable byte length][empty byte][int of 4 bytes]
Here is a sample packet that is produced under this structure:
434e4c50 00000000 006a3134 31303837 34393634 00313233 38313638 35383900 000003e8 31343130 38373439 36340000 0003e8
Converted it looks like this:
CNLP j1410874964 1238168589 Ë1410874964 Ë
"CNLP j" is the packet header of 10 bytes. "1410874964" is the peerID of the selected client. "1238168589" is the peerID of another client. " Ë" has an int value of 1000. "1410874964" is the peerID of the other client (in this case, the selected client). " Ë" also has an int value of 1000. Basically, in this packet I am communicating 2 things - who the selected client is and the int value associated with each client.
My problem exists on the interpretation side (client side). To interpret this particular type of packet, I use the following method:
+ (NSMutableDictionary *)infoFromData:(NSData *)data atOffset:(size_t)offset
{
    size_t count;
    NSMutableDictionary *info = [NSMutableDictionary dictionaryWithCapacity:8];
    while (offset < [data length])
    {
        NSString *peerID = [data cnl_stringAtOffset:offset bytesRead:&count];
        offset += count;
        NSNumber *number = [NSNumber numberWithInteger:[data cnl_int32AtOffset:offset]];
        offset += 4;
        [info setObject:number forKey:peerID];
    }
    return info;
}
Typically, each of these packets range between 49 and 51 bytes. "offset" is set in a previous method to reflect the byte number after the packet header plus the empty byte after the selected player (in the case of the above packet, 21). "count" is initialized with a value of 1. In the case of this particular example, length is 51. The following method is passed the above arguments:
- (NSString *)cnl_stringAtOffset:(size_t)offset bytesRead:(size_t *)amount
{
    const char *charBytes = (const char *)[self bytes];
    NSString *string = [NSString stringWithUTF8String:charBytes + offset];
    *amount = strlen(charBytes + offset) + 1;
    return string;
}
This method is supposed to read through a variable length string in the packet, set the offset to the byte immediately after the empty byte pad behind the peerID string, and return the string that was read. "amount" is then set to the number of bytes the method read through for the string (this is becomes the new value of count after returning to the first method). "offset" and "count" are then added together to become the new "offset" - where interpretation of the int portion of the packet will begin. The above arguments are passed to the following method:
- (int)cnl_int32AtOffset:(size_t)offset
{
    const int *intBytes = (const int *)[self bytes];
    return ntohl(intBytes[offset / 4]);
}
This method is intended to return the 32-bit (4-byte) int value read at the current offset value of the packet. I believe that the problem exists in this method when the offset is a number that is not divisible by 4. In this case, the first int value of 1000 was correctly interpreted, and 32 was returned as the offset during the first iteration of the while loop. However, during the second iteration, the int value interpreted was 909377536 (obtained from reading bytes 36340000 in the packet instead of bytes 000003E8). This was likely due to the fact that the offset during this iteration was set to 47 (not divisible by 4). After interpreting the 32-bit int in the category above, 4 is added to the offset in the first method to account for a 4-byte (32-bit) int. If my intuition about an offset not divisible by 4 is correct, any suggestions to get around this problem are greatly appreciated. I have been looking for a way to solve this problem for quite some time and perhaps fresh eyes may help. Thanks for any help!!!
The unportable version (undefined behaviour for many reasons):
return ntohl(*(const int *)([self bytes]+offset));
A semi-portable version is somewhat trickier, but in C99 it appears that you can assume int32_t is "the usual" two's complement representation (no trap representations, no padding bits), thus:
// The cast is necessary to prevent arithmetic on void* which is nonstandard.
const uint8_t * p = (const uint8_t *)[self bytes]+offset;
// The casts ensure the result type is big enough to hold the shifted value.
// We use uint32_t to prevent UB when shifting into the sign bit.
uint32_t n = ((uint32_t)p[0]<<24) | ((uint32_t)p[1]<<16) | ((uint32_t)p[2]<<8) | ((uint32_t)p[3]);
// Jump through some hoops to prevent UB on "negative" numbers.
// An equivalent to the third expression is -(int32_t)~n-1.
// A good compiler should be able to optimize this into nothing.
return (n <= INT32_MAX) ? (int32_t)n : -(int32_t)(UINT32_MAX-n)-1;
This won't work on architectures without 8-bit bytes, but such architectures probably have different conventions for how things are passed over the network.
A good compiler should be able to optimize this into a single (possibly byte-swapped) load on suitable architectures.
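Folding the portable read back into the category method from the question, a sketch could look like this (keeping the original method name, and assuming the caller guarantees offset + 4 <= [self length]):
- (int)cnl_int32AtOffset:(size_t)offset
{
    // Read the 4 bytes at the exact offset; no alignment or divisibility-by-4 assumption.
    const uint8_t *p = (const uint8_t *)[self bytes] + offset;
    uint32_t n = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                 ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    return (n <= INT32_MAX) ? (int)n : -(int)(UINT32_MAX - n) - 1;
}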
