NSData Packet Interpretation - iOS

I have a fairly complex issue regarding the interpretation of packets in an app that I am making. A host app sends a packet to client apps with the following structure:
[Header of 10 bytes][peerID of selected client of variable byte length][empty byte][peerID of a client of variable byte length][empty byte][int of 4 bytes][peerID of client of variable byte length][empty byte][int of 4 bytes]
Here is a sample packet that is produced under this structure:
434e4c50 00000000 006a3134 31303837 34393634 00313233 38313638 35383900 000003e8 31343130 38373439 36340000 0003e8
Converted to text, it looks like this:
CNLP j1410874964 1238168589 Ë1410874964 Ë
"CNLP j" is the packet header of 10 bytes. "1410874964" is the peerID of the selected client. "1238168589" is the peerID of another client. " Ë" has an int value of 1000. "1410874964" is the peerID of the other client (in this case, the selected client). " Ë" also has an int value of 1000. Basically, in this packet I am communicating 2 things - who the selected client is and the int value associated with each client.
My problem exists on the interpretation side (client side). To interpret this particular type of packet, I use the following method:
+ (NSMutableDictionary *)infoFromData:(NSData *)data atOffset:(size_t)offset
{
    size_t count;
    NSMutableDictionary *info = [NSMutableDictionary dictionaryWithCapacity:8];
    while (offset < [data length])
    {
        NSString *peerID = [data cnl_stringAtOffset:offset bytesRead:&count];
        offset += count;
        NSNumber *number = [NSNumber numberWithInteger:[data cnl_int32AtOffset:offset]];
        offset += 4;
        [info setObject:number forKey:peerID];
    }
    return info;
}
Typically, each of these packets ranges between 49 and 51 bytes. "offset" is set in a previous method to reflect the byte number after the packet header plus the empty byte after the selected player (in the case of the above packet, 21). "count" is initialized with a value of 1. In this particular example, the length is 51. The following method is passed the above arguments:
- (NSString *)cnl_stringAtOffset:(size_t)offset bytesRead:(size_t *)amount
{
    const char *charBytes = (const char *)[self bytes];
    NSString *string = [NSString stringWithUTF8String:charBytes + offset];
    *amount = strlen(charBytes + offset) + 1;
    return string;
}
This method is supposed to read through a variable-length string in the packet, set the offset to the byte immediately after the empty pad byte behind the peerID string, and return the string that was read. "amount" is then set to the number of bytes the method read through for the string (this becomes the new value of "count" after returning to the first method). "offset" and "count" are then added together to become the new "offset", where interpretation of the int portion of the packet begins. The above arguments are passed to the following method:
- (int)cnl_int32AtOffset:(size_t)offset
{
    const int *intBytes = (const int *)[self bytes];
    return ntohl(intBytes[offset / 4]);
}
This method is intended to return the 32-bit (4-byte) int value read at the current offset value of the packet. I believe the problem exists in this method when the offset is a number that is not divisible by 4. In this example, the first int value of 1000 was correctly interpreted, and 32 was returned as the offset during the first iteration of the while loop. However, during the second iteration, the int value interpreted was 909377536, obtained by reading bytes 36340000 in the packet instead of bytes 000003E8. This was likely because the offset during this iteration was 47 (not divisible by 4): intBytes[offset / 4] truncates the division (47 / 4 = 11), so the read starts at byte 44 rather than byte 47.

After interpreting the 32-bit int in the category above, 4 is added to the offset in the first method to account for a 4-byte (32-bit) int. If my intuition about an offset not divisible by four is correct, any suggestions to get around this problem are greatly appreciated. I have been looking for a way to solve this problem for quite some time and perhaps fresh eyes may help. Thanks for any help!

The unportable version (undefined behaviour for many reasons):
return ntohl(*(const int *)([self bytes]+offset));
A semi-portable version is somewhat trickier, but in C99 it appears that you can assume int32_t is "the usual" two's complement representation (no trap representations, no padding bits), thus:
// The cast is necessary to prevent arithmetic on void* which is nonstandard.
const uint8_t * p = (const uint8_t *)[self bytes]+offset;
// The casts ensure the result type is big enough to hold the shifted value.
// We use uint32_t to prevent UB when shifting into the sign bit.
uint32_t n = ((uint32_t)p[0]<<24) | ((uint32_t)p[1]<<16) | ((uint32_t)p[2]<<8) | ((uint32_t)p[3]);
// Jump through some hoops to prevent UB on "negative" numbers.
// An equivalent to the third expression is -(int32_t)~n-1.
// A good compiler should be able to optimize this into nothing.
return (n <= INT32_MAX) ? (int32_t)n : -(int32_t)(UINT32_MAX-n)-1;
This won't work on architectures without 8-bit bytes, but such architectures probably have different conventions for how things are passed over the network.
A good compiler should be able to optimize this into a single (possibly byte-swapped) load on suitable architectures.
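For completeness, a sketch of how a fix might slot back into the asker's category, using memcpy to sidestep the alignment problem (this assumes <arpa/inet.h> for ntohl and <string.h> for memcpy):
- (int32_t)cnl_int32AtOffset:(size_t)offset
{
    uint32_t n;
    // memcpy has no alignment requirements, so any byte offset is fine.
    memcpy(&n, (const uint8_t *)[self bytes] + offset, sizeof(n));
    // Network byte order is big-endian; the conversion to int32_t is
    // implementation-defined (not undefined) and behaves as expected on iOS.
    return (int32_t)ntohl(n);
}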

Related

How to convert switch state into integer in iOS

I am using five switches for handling different types of notifications. To remember the state of the switches, I am thinking of converting the state of the five switches into an integer. For example, if my switches' status is 01010, then the integer should be 10. Please help me with how to achieve this.
First, extract each switch value and store it in a single string.
Now convert the string to a decimal/integer value like this:
NSString *binarystring = @"01010";
long decimalValue = strtol([binarystring UTF8String], NULL, 2);
NSLog(@"%ld", decimalValue);
Edit
Get all switch control values in a single string:
NSString *binarystring = [[NSString alloc] initWithFormat:@"%i%i%i%i%i", self.switch1.isOn, self.switch2.isOn, self.switch3.isOn, self.switch4.isOn, self.switch5.isOn];
(Why bother encoding your 5 switch values into a single integer? Storing 5 Booleans is not hard. That said, the question is how to do it...)
Important aside: BOOL values are not 0 and 1
Objective-C is a superset of C, and in the original C there was no Boolean type - instead it just used an integer type with the interpretation that 0 was false and anything else was true.
Objective-C defines BOOL as a signed char, that is an 8-bit signed integer type (as characters are just an integer type in C). So in Objective-C 0 is false, and -128..-1, 1..127 are all true. NO is defined as 0 and YES as 1, but various operations may result in other values.
To get a 0 or 1 from a BOOL b you can use the conditional operator:
b ? 1 : 0
However, the built-in logical operators by definition always return 0 or 1 and never any of the other possible values. The ! operator is logical not, and two nots get you back to where you started, so:
!!b
will also give you a 0 or 1.
In any code that takes a BOOL and tries to use it as a 0 or 1 you should really use one of the above (or an equivalent).
One way to solve it: using strings
Your question has been interpreted as using a string as an intermediary during the encoding. First assume the class has your five buttons stored in an instance variable as a simple array (it will allow us to loop):
const int kSWITCH_COUNT = 5; // let's not hard code it everywhere

@implementation MyClass
{
    Switch *switches[kSWITCH_COUNT];
}
then the string method goes something like:
- (void) stringMethod
{
    NSMutableString *binarystring = NSMutableString.new;
    // build up the string one value at a time; note the !! so we only get 0 or 1 values
    for (int ix = 0; ix < kSWITCH_COUNT; ix++)
        [binarystring appendFormat:@"%d", !!switches[ix].isOn];
    long decimalValue = strtol([binarystring UTF8String], NULL, 2);
    NSLog(@"Encoded: 0x%lx", decimalValue);
}
This method works, but it is rather a circuitous way of getting to the result - you have 5 integer (Boolean) values and you want to combine them into an integer, why involve strings?
A better way to solve it: using integers
(Objective-)C provides bitwise operators to do shifts, or, and, etc. operations which treat integer types as an ordered collection of bits - which is what they are on a computer.
The << operator shifts left, e.g. 0x1 << 1 produces 0x2, i.e. << 1 is equivalent to multiplication by 2. The | operator is bitwise or, e.g. 0x1 << 1 | 1 produces 0x3. The answer to your question now follows easily:
- (void) shiftMethod
{
    unsigned int encoded = 0;
    for (int ix = 0; ix < kSWITCH_COUNT; ix++)
        encoded = (encoded << 1) | !!switches[ix].isOn;
    NSLog(@"Encoded: 0x%x", encoded);
}
If you don't like shifts and ors you can use multiplication and addition:
encoded = encoded * 2 + !!switches[ix].isOn;
The above solves the problem directly, no converting to/from intermediate strings. It happens to be a lot faster as well, but in the overall scheme of an application neither approach is probably going to take a significant proportion of the execution time and you shouldn't select based on that.
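Since the point of the exercise is remembering the switch states, you will presumably also want to decode. A minimal sketch of the reverse operation, under the same switches-array assumption and assuming Switch has a settable on property (as UISwitch does); note switch 0 ended up in the highest of the 5 bits:
- (void) unshiftMethod:(unsigned int)encoded
{
    // Pull the bits back out in the order they were pushed in.
    for (int ix = 0; ix < kSWITCH_COUNT; ix++)
        switches[ix].on = (encoded >> (kSWITCH_COUNT - 1 - ix)) & 1;
}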
A Third Way
If you are going to wish to set/get the individual bits of an integer a lot you can use struct types with bit-field widths. These let you set/get the bits of an integer directly - no shifting etc. required - and you may find them useful, but they are rather "low level". Any good book on C will show you how to use these.
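For illustration, a small sketch of such a bit-field struct; the bit layout is implementation-defined, so this suits in-memory flags rather than data that crosses the network:
typedef struct {
    unsigned s0 : 1; // one bit per switch
    unsigned s1 : 1;
    unsigned s2 : 1;
    unsigned s3 : 1;
    unsigned s4 : 1;
} SwitchBits;

SwitchBits bits = {0};
bits.s2 = !!self.switch3.isOn; // set an individual bit directly, no shifting
BOOL thirdIsOn = bits.s2;      // and read it back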
HTH

How to turn 4 bytes into a float in Objective-C from NSData

Here is an example of turning 4 bytes into a 32-bit integer in Objective-C. The function readInt grabs 4 bytes via the read function and then converts them into a single 32-bit int. Does anyone know how I would convert 4 bytes to a float? I believe it is big-endian. Basically I need a readFloat function. I can never grasp these bitwise operations.
EDIT:
I forgot to mention that the original data comes from Java's DataOutputStream class. The writeFloat function, according to the Java docs:
Converts the float argument to an int using the floatToIntBits method
in class Float, and then writes that int value to the underlying
output stream as a 4-byte quantity, high byte first.
This is Objective-C trying to extract the data written by Java.
- (int32_t)read {
    int8_t v;
    [data getBytes:&v range:NSMakeRange(length, 1)];
    length++;
    return ((int32_t)v & 0x0ff);
}
- (int32_t)readInt {
    int32_t ch1 = [self read];
    int32_t ch2 = [self read];
    int32_t ch3 = [self read];
    int32_t ch4 = [self read];
    if ((ch1 | ch2 | ch3 | ch4) < 0) {
        @throw [NSException exceptionWithName:@"Exception" reason:@"EOFException" userInfo:nil];
    }
    return ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));
}
OSByteOrder.h contains functions for reading, writing, and converting integer data.
You can use OSSwapBigToHostInt32() to convert a big-endian integer to the native representation, then copy the bits into a float:
NSData *data = [NSData dataWithContentsOfFile:@"/tmp/java/test.dat"];
int32_t bytes;
[data getBytes:&bytes length:sizeof(bytes)];
bytes = OSSwapBigToHostInt32(bytes);
float number;
memcpy(&number, &bytes, sizeof(bytes));
NSLog(@"Float %f", number);
[data getBytes:&myFloat range:NSMakeRange(locationWhereFloatStarts, sizeof(float))] ought to do the trick.
Given that the data comes from DataOutputStream's writeFloat() method, then that is documented to use Float.floatToIntBits() to create the integer representation. intBitsToFloat() further documents how to interpret that representation.
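Putting those two documents together, a minimal readFloat sketch that builds on the asker's readInt (which already assembles the four bytes high byte first) and reinterprets the bits the way Float.intBitsToFloat does:
- (float)readFloat {
    int32_t bits = [self readInt];        // 4 bytes, high byte first
    float value;
    memcpy(&value, &bits, sizeof(value)); // reinterpret the bit pattern as a float
    return value;
}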
I'm not sure if it's the same thing, but the xdr API seems like it might handle that representation. The credits on the man page refer to Sun Microsystems standards/specifications, so it seems likely it's related to Java.
So, it may work to do something like:
// At top of file:
#include <rpc/types.h>
#include <rpc/xdr.h>
// In some function or method:
XDR xdr;
xdrmem_create(&xdr, (char*)data.bytes + offset, data.length - offset, XDR_DECODE);
float f;
if (!xdr_float(&xdr, &f))
    /* handle error */;
xdr_destroy(&xdr);
If the data consists of a whole stream in eXternal Data Representation, then you would create one XDR stream for the whole task of extracting items from it, and use many xdr_...() calls between creating and destroying it to extract all of the items.

Very big ID in JSON, how to obtain it without losing precision

I have IDs in a JSON file and some of them are really big, but they fit within the bounds of unsigned long long int.
"id":9223372036854775807,
How to get this large number from JSON using objectForKey:idKey of NSDictionary?
Can I use NSDecimalNumber? Some of these IDs fit into a regular integer.
Tricky. Apple's JSON code converts integers above 10^18 to NSDecimalNumber, and smaller integers to plain NSNumber containing a 64 bit integer value. Now you might have hoped that unsignedLongLongValue would give you a 64 bit value, but it doesn't for NSDecimalNumber: The NSDecimalNumber first gets converted to double, and the result to unsigned long long, so you lose precision.
Here's something that you can add as an extension to NSNumber. It's a bit tricky, because if you get a value very close to 2^64, converting it to double might get rounded to 2^64, which cannot be converted to 64 bit. So we need to divide by 10 first to make sure the result isn't too big.
- (uint64_t)unsigned64bitValue
{
    if ([self isKindOfClass:[NSDecimalNumber class]])
    {
        NSDecimalNumber *asDecimal = (NSDecimalNumber *)self;
        uint64_t tmp = (uint64_t)(asDecimal.doubleValue / 10.0);
        NSDecimalNumber *tmp1 = [[NSDecimalNumber alloc] initWithUnsignedLongLong:tmp];
        NSDecimalNumber *tmp2 = [tmp1 decimalNumberByMultiplyingByPowerOf10:1];
        NSDecimalNumber *remainder = [asDecimal decimalNumberBySubtracting:tmp2];
        return (tmp * 10) + remainder.unsignedLongLongValue;
    }
    else
    {
        return self.unsignedLongLongValue;
    }
}
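Hypothetical usage, assuming the category above is in scope and jsonData holds the raw JSON:
NSError *error = nil;
NSDictionary *json = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:&error];
uint64_t bigID = [json[@"id"] unsigned64bitValue]; // safe for NSNumber and NSDecimalNumber alike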
Or process the raw JSON string: look for '"id": number' (often with white space included), find the number, then overwrite it with the number quoted. You can put the data into a mutable data object and get a char pointer to it to overwrite in place.
[Entered using iPhone so a bit terse.]
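A rough sketch of that idea using NSRegularExpression rather than manual pointer work; the pattern and the "id" key shape are assumptions about the JSON:
NSMutableString *raw = [[NSMutableString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
NSRegularExpression *re = [NSRegularExpression regularExpressionWithPattern:@"\"id\"\\s*:\\s*(\\d+)"
                                                                    options:0
                                                                      error:NULL];
// Wrap each id in quotes so it parses as a string, preserving every digit.
[re replaceMatchesInString:raw options:0 range:NSMakeRange(0, raw.length) withTemplate:@"\"id\":\"$1\""];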

How to get bytes into the right order when they pass through different platforms?

I'm developing an app which passes bytes over the network; the server declares that its byte order is big-endian. In my app, I wrap my data with a header which takes 2 bytes. I assign the bytes as follows:
int length = [self.dataLengthHeader length];
if (length <= 255) {
    high = 0;
    low = length;
} else if (length == 256) {
    high = 1;
    low = 0;
} else {
    high = length / 256;
    low = length % 256;
}
Byte byte[] = {high, low};
NSLog(@"%hhu %hhu", high, low);
NSMutableData *dataToSend = [NSMutableData dataWithBytes:byte length:2];
For example, the first byte is 00 (8 bits) and the second is 05 (8 bits).
When another app receives the header, it parses the 2-byte header into two ints (two NSIntegers would be better) to get the information about the real message.
NSData *twoBytes = [NSData dataWithBytes:payloadptr length:2];
NSData *low = [twoBytes subdataWithRange:NSMakeRange(1, 1)];
int lowP;
[low getBytes:&lowP length:sizeof(lowP)];
NSData *high = [twoBytes subdataWithRange:NSMakeRange(0, 1)];
int highP;
[high getBytes:&highP length:sizeof(highP)];
When I log the values, it turns out to be something like this: highP = 70074112, lowP = 365573.
I can never get the correct result. Could anybody help me?
Any help would be appreciated!
Read about serialization.
You could roll your own using e.g. htonl(3) or endian(3).
You could use XDR with RPCGEN, or ASN.1.
You could use libs11n (in C++). You could also consider protocol buffers, etc...
Unless you have a lot of data or little bandwidth, you may consider using textual serialization formats like JSON (they are somewhat flexible, easier to debug, etc.) or binary counterparts like BSON. Notice that sending data over a network is much slower than your CPU, so the overhead of textual serialization is generally lost in the noise (even if you compress it).
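For the specific 2-byte header in the question, htons/ntohs from <arpa/inet.h> already do the high/low split and the reassembly; a sketch, reusing dataLengthHeader and twoBytes from the question. (Incidentally, the receiving code above copies only 1 byte into a 4-byte uninitialized int, which is why the upper bytes of highP and lowP are garbage.)
#include <arpa/inet.h> // htons / ntohs

// Sender: emit the length as a 2-byte big-endian (network order) header.
uint16_t header = htons((uint16_t)[self.dataLengthHeader length]);
NSMutableData *dataToSend = [NSMutableData dataWithBytes:&header length:sizeof(header)];

// Receiver: read both bytes, then swap back to host order.
uint16_t rawLength;
[twoBytes getBytes:&rawLength length:sizeof(rawLength)];
NSUInteger messageLength = ntohs(rawLength);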

Odd atoi(char *) issue

I'm experiencing a very odd issue with atoi(char *). I'm trying to convert a char into its numerical representation (I know that it is a number), which works perfectly fine 98.04% of the time, but it will give me a random value the other 1.96% of the time.
Here is the code I am using to test it:
int increment = 0, repetitions = 10000000;
for (int i = 0; i < repetitions; i++)
{
    char randomNumber = (char)rand()%10 + 48;
    int firstAtoi = atoi(&randomNumber);
    int secondAtoi = atoi(&randomNumber);
    if (firstAtoi != secondAtoi) NSLog(@"First: %d - Second: %d", firstAtoi, secondAtoi);
    if (firstAtoi > 9 || firstAtoi < 0)
    {
        increment++;
        NSLog(@"First Atoi: %d", firstAtoi);
    }
}
NSLog(@"Ratio Percentage: %.2f", 100.0f * (float)increment / (float)repetitions);
I'm using the GNU99 C language dialect in Xcode 4.6.1. The first if (for when the first number does not equal the second) never logs, so the two atoi calls return the same result every time; however, the results differ between runs. The "incorrect results" seemingly range from -1000 up to 10000. I haven't seen any above 9999 or any below -999.
Please let me know what I am doing wrong.
EDIT:
I have now changed how the character is set up:
char numberChar = (char)rand()%10 + 48;
char randomNumber[2];
randomNumber[0] = numberChar;
randomNumber[1] = 0;
However, I am using:
MAX(MIN((int)(myCharacter - '0'), 9), 0)
to get the integer value.
I really appreciate all of the answers!
atoi expects a string. You have not given it a string, you have given it a single char. A string is defined as some number of characters ended by the null character. You are invoking UB.
From the docs:
If str does not point to a valid C-string, or if the converted value would be out of the range of values representable by an int, it causes undefined behavior.
Want to "convert" a character to its integral representation? Don't overcomplicate things;
int x = some_char;
A char is an integer already, not a string. Don't think of a single char as text.
If I'm not mistaken, atoi expects a null-terminated string (see the documentation here).
You're passing in a single stack-based value, which does not have to be null-terminated. I'm extremely surprised it's even getting it right: it could read off hundreds of garbage characters into eternity if it never finds a null terminator. If you just want the number of a single char (as in, the numeric value of the char's human-readable representation), why don't you just do int numeric = randomNumber - 48?
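A minimal sketch of that subtraction approach applied to the test loop. Note the added parentheses: in the original (char)rand()%10, the cast binds tighter than %, so rand() is truncated to a char before the modulo, which can even go negative:
char randomNumber = (char)(rand() % 10) + '0'; // always a digit character '0'..'9'
int value = randomNumber - '0';                // its numeric value: no atoi, no buffer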
