iOS: convert IP address to integer and back

Say I have an NSString:
NSString *myIpAddress = @"192.168.1.1";
I want to convert this to an integer, increment it, and then convert it back to an NSString.
Does iOS have an easy way to do this other than using bit masks, shifting, and sprintf?

Something like this is what I do in my app:
NSArray *ipExplode = [string componentsSeparatedByString:@"."];
int seg1 = [ipExplode[0] intValue];
int seg2 = [ipExplode[1] intValue];
int seg3 = [ipExplode[2] intValue];
int seg4 = [ipExplode[3] intValue];
// Pack the four octets into a 32-bit integer (cast before shifting so the
// high octet never shifts into the sign bit of an int).
uint32_t newIP = 0;
newIP |= (uint32_t)(seg1 & 0xFF) << 24;
newIP |= (uint32_t)(seg2 & 0xFF) << 16;
newIP |= (uint32_t)(seg3 & 0xFF) << 8;
newIP |= (uint32_t)(seg4 & 0xFF) << 0;
newIP++;
// Unpack the octets back into dotted-decimal form.
NSString *newIPStr = [NSString stringWithFormat:@"%u.%u.%u.%u",
                      ((newIP >> 24) & 0xFF),
                      ((newIP >> 16) & 0xFF),
                      ((newIP >> 8) & 0xFF),
                      ((newIP >> 0) & 0xFF)];
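If you would rather not do the masking and shifting yourself, the BSD networking functions that ship with iOS can parse and format dotted-quad strings for you. A minimal sketch using inet_pton()/inet_ntop() from <arpa/inet.h>, applied to the question's myIpAddress:
#include <arpa/inet.h>

struct in_addr addr;
if (inet_pton(AF_INET, [myIpAddress UTF8String], &addr) == 1) {
    uint32_t ip = ntohl(addr.s_addr); // network to host byte order
    ip++;                             // increment the address
    addr.s_addr = htonl(ip);          // back to network byte order
    char buf[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &addr, buf, sizeof(buf));
    NSString *newIPStr = [NSString stringWithUTF8String:buf]; // @"192.168.1.2"
}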

Related

Base64Encoding of UIImage Doesn't Match

I have a UIImage and I want to encode it using base64, then send the string to our server.
Our server decodes it using atob(), and it can't do so properly.
After debugging, we found that the result of encoding/decoding with btoa()/atob() does not match NSData's base64EncodedStringWithOptions: when I convert the UIImage to NSData with UIImagePNGRepresentation() and then encode.
What's weird is that they do match when I read the UIImage directly as NSData using dataWithContentsOfFile: instead of converting from UIImage to NSData with UIImagePNGRepresentation().
My problem is that I'm supposed to use an image picker that returns a UIImage. I don't want to write the image to a file and then read it back as NSData; that's not efficient. Is there a way to solve this?
Try this for base64 encoding:
+ (NSString *)base64forData:(NSData *)theData
{
    const uint8_t *input = (const uint8_t *)[theData bytes];
    NSInteger length = [theData length];
    static char table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
    NSMutableData *data = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
    uint8_t *output = (uint8_t *)data.mutableBytes;
    NSInteger i;
    for (i = 0; i < length; i += 3) {
        // Accumulate up to three input bytes into a 24-bit value.
        NSInteger value = 0;
        NSInteger j;
        for (j = i; j < (i + 3); j++) {
            value <<= 8;
            if (j < length) {
                value |= (0xFF & input[j]);
            }
        }
        // Emit four output characters, padding with '=' past the end of input.
        NSInteger theIndex = (i / 3) * 4;
        output[theIndex + 0] = table[(value >> 18) & 0x3F];
        output[theIndex + 1] = table[(value >> 12) & 0x3F];
        output[theIndex + 2] = (i + 1) < length ? table[(value >> 6) & 0x3F] : '=';
        output[theIndex + 3] = (i + 2) < length ? table[(value >> 0) & 0x3F] : '=';
    }
    return [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
}
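On iOS 7 and later you can skip the hand-rolled encoder entirely and use NSData's built-in method (assuming image is the UIImage from your picker):
NSData *imageData = UIImagePNGRepresentation(image);
NSString *base64 = [imageData base64EncodedStringWithOptions:0];
Also note that UIImagePNGRepresentation() re-encodes the image, so its bytes (and therefore their base64) will generally not match the original file on disk. That would explain why the two encodings only agree when you read the file directly with dataWithContentsOfFile:.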

Convert image manipulation to 64-bit creates black lines

I inherited an image filter app, and I'm trying to update it. Apple required me to change the architecture to support 64-bit. On 64-bit phones the images come out with vertical black bars; 32-bit phones work as expected.
It seems like an issue with the old code assuming a 32-bit system, but how can I fix it?
I've narrowed it down to the following code, which applies an image curve:
NSUInteger *currentPixel = _rawBytes;
NSUInteger *lastPixel = (NSUInteger *)((unsigned char *)_rawBytes + _bufferSize);
while (currentPixel < lastPixel)
{
    SET_RED_COMPONENT_RGBA(currentPixel, _reds[RED_COMPONENT_RGBA(currentPixel)]);
    SET_GREEN_COMPONENT_RGBA(currentPixel, _greens[GREEN_COMPONENT_RGBA(currentPixel)]);
    SET_BLUE_COMPONENT_RGBA(currentPixel, _blues[BLUE_COMPONENT_RGBA(currentPixel)]);
    ++currentPixel;
}
Here are the macro definitions:
#define ALPHA_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 24)
#define BLUE_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 16)
#define GREEN_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 8)
#define RED_COMPONENT_RGBA(pixel) (unsigned char)(*pixel >> 0)
#define SET_ALPHA_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0x00FFFFFF) | ((unsigned long)value << 24)
#define SET_BLUE_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0xFF00FFFF) | ((unsigned long)value << 16)
#define SET_GREEN_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0xFFFF00FF) | ((unsigned long)value << 8)
#define SET_RED_COMPONENT_RGBA(pixel, value) *pixel = (*pixel & 0xFFFFFF00) | ((unsigned long)value << 0)
#define BLUE_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 24)
#define GREEN_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 16)
#define RED_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 8)
#define ALPHA_COMPONENT_ARGB(pixel) (unsigned char)(*pixel >> 0)
#define SET_BLUE_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0x00FFFFFF) | ((unsigned long)value << 24)
#define SET_GREEN_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0xFF00FFFF) | ((unsigned long)value << 16)
#define SET_RED_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0xFFFF00FF) | ((unsigned long)value << 8)
#define SET_ALPHA_COMPONENT_ARGB(pixel, value) *pixel = (*pixel & 0xFFFFFF00) | ((unsigned long)value << 0)
How should I change the above to work on either a 32 or 64 bit device? Do I need to include more code?
NSUInteger changes size between 32- and 64-bit devices. It used to be 4 bytes; now it's 8. The code assumes it's working with RGBA data, with a byte for each channel, so the increment of an 8-byte pointer is skipping over half the data.
Just be explicit about the size:
uint32_t * currentPixel = _rawBytes;
uint32_t * lastPixel = (uint32_t *)((unsigned char *)_rawBytes + _bufferSize);
and the calculation should work correctly on both types of device.
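If you want the macros to be explicit about width as well, a sketch of the red pair using fixed-width types (the other channels follow the same pattern; this assumes the RGBA layout above):
#include <stdint.h>

#define RED_COMPONENT_RGBA(pixel)            (uint8_t)(*(pixel) >> 0)
#define SET_RED_COMPONENT_RGBA(pixel, value) \
    (*(pixel) = (*(pixel) & 0xFFFFFF00u) | ((uint32_t)(value) << 0))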

extracting bits from NSData bytes

I would like to extract all of the bits from the following NSData bytes:
status data byte: <0011...
The result comes out as all zeros: 0000 0000 0000 0000. Could you please tell me how to do this correctly?
NSData *aData = [valueData subdataWithRange:NSMakeRange(0, 2)]; // 16-bit status
status = [self bitsToInt:aData];
NSString *aString = [NSString stringWithFormat:@"%d", status];
int value = [aString intValue];
NSLog(@"status value: %d", value);
unsigned thbit0 = (1 << 0) & value;
unsigned thbit1 = (1 << 1) & value;
unsigned thbit2 = (1 << 2) & value;
unsigned thbit3 = (1 << 3) & value;
unsigned thbit4 = (1 << 4) & value;
unsigned thbit5 = (1 << 5) & value;
unsigned thbit6 = (1 << 6) & value;
unsigned thbit7 = (1 << 7) & value;
unsigned thbit8 = (1 << 8) & value;
unsigned thbit9 = (1 << 9) & value;
unsigned thbit10 = (1 << 10) & value;
unsigned thbit11= (1 << 11) & value;
unsigned thbit12 = (1 << 12) & value;
..
- (int)bitsToInt:(NSData *)valueDa {
    const uint8_t *bytePtr = [valueDa bytes];
    // bytePtr is unsigned, so no sign correction is needed:
    // just combine the two bytes little-endian.
    return bytePtr[0] | (bytePtr[1] << 8);
}
You could work with a bit string instead of integer values and bit extraction.
Here is a simple decoder:
- (NSString *)getBitsFromData:(NSData *)data
{
    NSMutableString *result = [NSMutableString string];
    const uint8_t *bytes = [data bytes];
    for (NSUInteger i = 0; i < data.length; i++)
    {
        uint8_t byte = bytes[i];
        for (int j = 0; j < 8; j++)
        {
            [result appendString:(((byte >> j) & 1) ? @"1" : @"0")];
        }
    }
    return result;
}
Test:
NSString *test = @"test";
NSLog(@"%@", [self getBitsFromData:[test dataUsingEncoding:NSUTF8StringEncoding]]);
Result:
2015-07-29 11:37:31.768 Test[18342:9947704] 00101110101001101100111000101110
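Note that the decoder emits each byte's bits least-significant first ('t' is 0x74 = 01110100, which prints as 00101110). If you want the conventional most-significant-bit-first reading, a sketch of the inner loop:
for (int j = 7; j >= 0; j--)
{
    [result appendString:(((byte >> j) & 1) ? @"1" : @"0")];
}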

Convert uint8_t to NSString

Just started learning Objective-C and was trying to convert a byte array into a UTF-8 NSString, but I keep getting nil/null.
Here is the abbreviated code sample.
enum {
    TMessageType_CALL = 1,
    TMessageType_REPLY = 2,
    TMessageType_EXCEPTION = 3,
    TMessageType_ONEWAY = 4
};
int32_t VERSION_1 = 0x80010000;
int value = VERSION_1 | TMessageType_CALL;
uint8_t buff[4];
buff[0] = 0xFF & (value >> 24);
buff[1] = 0xFF & (value >> 16);
buff[2] = 0xFF & (value >> 8);
buff[3] = 0xFF & value;
// Convert buff to NSString with offset = 0, length = 4
I tried the following.
NSString *t = [[NSString alloc] initWithBytes:buff length:4 encoding:NSUTF8StringEncoding];
NSString *t1 = [NSString stringWithUTF8String:(char *)buff];
But both t and t1 return nil.
What is the right API to convert it correctly?
This conversion needs to be generic across writeI32(), writeI64(), writeString(), and writeDouble(). Here is the code for the rest.
- (void) writeI16: (short) value
{
    uint8_t buff[2];
    buff[0] = 0xFF & (value >> 8);
    buff[1] = 0xFF & value;
    [mTransport write: buff offset: 0 length: 2];
}
- (void) writeI64: (int64_t) value
{
    uint8_t buff[8];
    buff[0] = 0xFF & (value >> 56);
    buff[1] = 0xFF & (value >> 48);
    buff[2] = 0xFF & (value >> 40);
    buff[3] = 0xFF & (value >> 32);
    buff[4] = 0xFF & (value >> 24);
    buff[5] = 0xFF & (value >> 16);
    buff[6] = 0xFF & (value >> 8);
    buff[7] = 0xFF & value;
    [mTransport write: buff offset: 0 length: 8];
}
- (void) writeDouble: (double) value
{
    // spit out IEEE 754 bits - FIXME - will this get us in trouble on
    // PowerPC?
    [self writeI64: *((int64_t *) &value)];
}
- (void) writeString: (NSString *) value
{
    if (value != nil) {
        const char *utf8Bytes = [value UTF8String];
        size_t length = strlen(utf8Bytes);
        [self writeI32: length];
        [mTransport write: (uint8_t *) utf8Bytes offset: 0 length: length];
    } else {
        // instead of crashing when we get null, let's write out a zero
        // length string
        [self writeI32: 0];
    }
}
buff is an array of unsigned chars, so you could use this:
NSString *t = [NSString stringWithFormat:@"%s", buff];
As an alternative, you can get each character explicitly:
NSMutableString *t = [NSMutableString stringWithCapacity:4];
for (NSUInteger i = 0; i < 4; ++i)
    [t appendFormat:@"%c", buff[i]];
NSLog(@"%@", t);
The first option converts up to the first '\0' byte. The second option gives you every byte as a character, regardless of any terminating characters ('\0').
I'm not sure what useful information this will give you, but there you have it.
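For what it's worth, the buffer here is binary protocol data, not text: a lone 0x80 byte is not valid UTF-8 (hence the nil), and the embedded 0x00 stops %s early. If what you need is a printable representation rather than a decoded string, a hex dump may serve better (a sketch over the 4-byte buff above):
NSMutableString *hex = [NSMutableString stringWithCapacity:8];
for (NSUInteger i = 0; i < 4; i++) {
    [hex appendFormat:@"%02X", buff[i]];
}
NSLog(@"%@", hex); // 80010001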

Converting aligned array uint8[8] to double

I am facing a bit of a challenge trying to convert an aligned uint8[8] array to a double.
It was particularly easy to convert uint8[4] to long with bit operations, but I understand that a double can get messy in terms of the sign bit?
In Java I would simply use ByteBuffer.wrap(bytes).getDouble(), but I assume it's not that easy in C.
I tried to implement this code, but the last line gives the errors "Expression is not assignable" and "Shift count >= width of type":
long tempHigh = 0;
long tempLow = 0;
double sum = 0;
tempHigh |= buffer[0] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[1] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[2] & 0xFF;
tempHigh <<= 8;
tempHigh |= buffer[3] & 0xFF;
tempLow |= buffer[4] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[5] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[6] & 0xFF;
tempLow <<= 8;
tempLow |= buffer[7] & 0xFF;
sum |= ((tempHigh & 0xFFFF) <<= 32) + (tempLow & 0xFFFF);
How can this be done correctly, or how do I fix the errors I've made?
Thanks in advance.
double is a floating-point type; it doesn't support bitwise operations such as |.
You could do something like:
double sum;
memcpy(&sum, buffer, sizeof(sum));
But be aware of endianness issues.
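A self-contained version of that sketch (assuming the 8 bytes in buffer are already in host byte order):
#include <string.h>
#include <stdint.h>

double bytesToDouble(const uint8_t buffer[8])
{
    double sum;
    memcpy(&sum, buffer, sizeof sum); // reinterpret the raw bytes as a double
    return sum;
}
If the buffer is big-endian and you're on a little-endian host (as on iOS devices), reverse the bytes before the memcpy.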
The portable way to do it is to read out the sign, exponent, and mantissa values into integer variables with bitwise arithmetic, then call ldexp to apply the exponent.
OK, here's some code. Beware it might have mismatched parentheses or off-by-one errors.
unsigned char x[8]; // your input; code assumes little endian
long long mantissa = ((((((x[6] % 16) * 256LL + x[5]) * 256 + x[4]) * 256 + x[3]) * 256 + x[2]) * 256 + x[1]) * 256 + x[0];
int exp = x[7] % 128 * 16 + x[6] / 16 - 1023; // biased exponent field minus 1023
int sign = 1 - x[7] / 128 * 2;                // +1 or -1 from the top bit
double y = sign * ldexp(0x1p52 + mantissa, exp - 52); // ldexp is in <math.h>
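The 0x1p52 term supplies the significand's implicit leading bit before ldexp scales the 53-bit integer back down. Note that this sketch only handles normal, nonzero values; zero, subnormals, infinities, and NaN (exponent field all zeros or all ones) would need the usual special cases.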
How about a union? Write to the integer halves as you have been, and the double comes out automagically. Something like this:
union
{
    double sum;
    struct
    {
        uint32_t tempHigh; // fixed-width halves so the struct is
        uint32_t tempLow;  // exactly the size of the double
    } v;
} u;
u.v.tempHigh = 0;
u.v.tempLow = 0; // both halves must start zeroed before OR-ing
u.v.tempHigh |= buffer[0] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[1] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[2] & 0xFF;
u.v.tempHigh <<= 8;
u.v.tempHigh |= buffer[3] & 0xFF;
u.v.tempLow |= buffer[4] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[5] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[6] & 0xFF;
u.v.tempLow <<= 8;
u.v.tempLow |= buffer[7] & 0xFF;
printf("%f", u.sum);
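Keep in mind the union overlays the two halves in declaration order, so which half lands in the double's high word depends on the host's endianness and struct layout; the memcpy approach above makes the same assumption explicit and is usually the safer choice.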
