Decrypting using CCCrypt returns different results on iOS and macOS

Decrypting using CCCrypt returns different results on iOS (5, 6) and macOS 10.7.
The same code running on different platforms/architectures produces different output for the same input. Why, and how do I fix it?
I have debugged everything. All the variables hold the same values whether the code runs on the Mac or on the iPhone. The point where they diverge is:
ccStatus = CCCrypt(kCCDecrypt,             // decrypt
                   kCCAlgorithmAES128,     // AES-128
                   0,                      // options: no padding
                   rawAESKey,              // symmetric key
                   kCCKeySizeAES128,       // key size
                   iv,
                   dataIn, dataInBytesSize,     // input
                   dataOut, dataOutBytesSize,   // output
                   &clearTextSize);
At this point, dataOut holds different values depending on whether the code runs on the Mac or on the iPhone. ccStatus returns success in both cases.
Note:
Xcode Version 4.6.2 (4H1003)
iOS SDK 5, 6 - binary built as 32-bit
macOS SDK 10.7 - binary built as 64-bit
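For reference, here is a minimal, self-contained sketch of the same call (C++-style, with hypothetical buffer names and placeholder contents; none of the values come from the original project). Every length is declared as size_t, because that is the type CCCrypt expects, and it is 4 bytes in the 32-bit build and 8 bytes in the 64-bit build:

#include <CommonCrypto/CommonCryptor.h>
#include <cstdint>
#include <cstdio>

// Hypothetical fixed-size buffers; in the real app these come from elsewhere.
static uint8_t rawAESKey[kCCKeySizeAES128];                  // 16-byte AES key
static uint8_t iv[kCCBlockSizeAES128];                       // 16-byte IV
static uint8_t dataIn[64];                                   // ciphertext, a multiple of the block size
static uint8_t dataOut[sizeof(dataIn) + kCCBlockSizeAES128]; // room for the plaintext

bool decryptExample()
{
    size_t dataInBytesSize  = sizeof dataIn;   // size_t on both architectures
    size_t dataOutBytesSize = sizeof dataOut;
    size_t clearTextSize    = 0;

    CCCryptorStatus ccStatus = CCCrypt(kCCDecrypt,           // decrypt
                                       kCCAlgorithmAES128,   // AES-128
                                       0,                    // options: 0 = CBC, no padding
                                       rawAESKey, kCCKeySizeAES128,
                                       iv,
                                       dataIn, dataInBytesSize,
                                       dataOut, dataOutBytesSize,
                                       &clearTextSize);
    if (ccStatus != kCCSuccess) {
        std::fprintf(stderr, "CCCrypt failed: %d\n", (int)ccStatus);
        return false;
    }
    std::printf("decrypted %zu bytes\n", clearTextSize);
    return true;
}

With identical key, IV and ciphertext bytes and identical size_t lengths, both builds should produce the same output, so comparing the raw bytes of those inputs on each platform is a reasonable next step.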

Related

NRF52 java unsigned byte issue

I am using BLE with the nRF52 and sending a byte array command from Android with Java, like below:
byte[] header = new byte[]{
        (byte) 0x5f,
        (byte) 0xf0,
        (byte) 0xf1,
        (byte) 0xf2,
};
When I log these bytes to the console, I see:
[111,-16,-15,-14]
The device gets the command but doesn't send an answer.
However, if I send the same command from iOS with Swift, the command data shows up as:
[111,240,241,242]
and it works as expected.
What may cause this behavior? Could it be related to Java's lack of unsigned byte support?
You are on the right track. While your iOS app handles the received data as unsigned bytes, the Android app interprets them as signed bytes. You might want to treat them as unsigned values with the corresponding *Unsigned methods, for example Byte.compareUnsigned.
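To make the distinction concrete, here is a small C++ sketch of the same effect, using the 0xF0-0xF2 bytes logged in the question (Java's byte behaves like a signed 8-bit integer, Swift's UInt8 like an unsigned one; the bit pattern itself is identical either way):

#include <cstdint>
#include <cstdio>

int main()
{
    // The logged command bytes: 0xf0 reads as -16 when signed, 240 when unsigned.
    const uint8_t bytes[] = { 0xf0, 0xf1, 0xf2 };

    for (uint8_t b : bytes) {
        std::printf("as signed (Java byte): %4d   as unsigned (Swift UInt8): %3u\n",
                    (int)(int8_t)b, (unsigned)b);
    }
    return 0;
}

On the Java side, Byte.toUnsignedInt(b) (Java 8 and later) gives the 0-255 value directly whenever a byte needs to be printed or compared as unsigned.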

Crash in OpenEXR when trying to open an OpenEXR file on a 64-bit machine

I have included the OpenEXR source code (downloaded from http://www.openexr.com/downloads.html) in my project to decode OpenEXR images.
I have built my project for 64-bit (on Mac as well as on Windows) and am trying to open the following file:
https://github.com/openexr/openexr-images/blob/master/ScanLines/Blobbies.exr
It crashes inside the OpenEXR source while opening this file.
rgbaFile = new Imf::RgbaInputFile("Blobbies.exr");
rgbaFile->setFrameBuffer(pixel - datawindow.min.x - (y * exrDimension.mWidth), 1, mWidth);
rgbaFile->readPixels(datawindow.min.y, datawindow.max.y);
This file has a data window of (-20,-20) to (1020,1020) and a display window of (0,0) to (999,999).
I am able to open it properly with a 32-bit build of my project.
I have found the cause of the issue.
The issue is not in the OpenEXR API but in the parameters being passed to setFrameBuffer(), i.e.
pixel - datawindow.min.x - (y * exrDimension.mWidth)
Here, y is of type int32 while exrDimension.mWidth is of type unsigned int32. When the two are multiplied, y is converted to unsigned and the product is a very large unsigned int32 value. How that value then behaves in the expression passed to setFrameBuffer() depends on the architecture of the machine: on a 32-bit build it is effectively an int32, on a 64-bit build an int64.
On the 32-bit build the result wraps back to an int32, which happens to give the correct value; on the 64-bit build the value is incorrect, because y is negative and a negative offset cannot be represented in an unsigned int32.
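A small C++ sketch of the promotion described above, with illustrative values (y = -20 from the data window, and an assumed width of 1041); doing the arithmetic in a signed 64-bit type is one way to keep the intended negative offset on both architectures:

#include <cstdint>
#include <cstdio>

int main()
{
    int32_t  y      = -20;    // negative row index from the data window
    uint32_t mWidth = 1041;   // unsigned image width (value assumed for illustration)

    // y is converted to unsigned before the multiply, so instead of -20820
    // the product is a huge 32-bit value.
    uint32_t product = y * mWidth;
    std::printf("y * mWidth as uint32_t: %u\n", product);            // 4294946476

    // On a 64-bit build that unsigned value is zero-extended when widened,
    // so the "negative" offset is lost and the resulting pointer is bogus.
    int64_t offset64 = static_cast<int64_t>(product);
    std::printf("zero-extended to 64 bits: %lld\n", (long long)offset64);

    // Doing the arithmetic in a signed 64-bit type preserves the intent.
    int64_t fixed = static_cast<int64_t>(y) * static_cast<int64_t>(mWidth);
    std::printf("intended offset: %lld\n", (long long)fixed);        // -20820
    return 0;
}

On a 32-bit build the pointer arithmetic wraps modulo 2^32, which is why the incorrect intermediate value still happened to land on the right address.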

How to send an int between 32-bit and 64-bit processors on iOS

Pretty much the title: I send an int in a struct using GameKit, and on the receiving end the other device gets it.
Between 64-bit CPUs (iPhone 5S and newer) the number is received fine, but when an iPhone 5 (32-bit CPU) gets it, the int is received as 0. What's the correct way?
I've tried sending it as NSInteger and the results are the same.
I should add that I have the same issue with uint32_t:
When devices connect, each device trades random numbers. These numbers determine which player starts, and I'm using uint32_t for this; however, 32-bit CPUs still receive 0. For example:
I declare
uint32_t _ourRandomNumber;
Then, _ourRandomNumber = arc4random();
The numbers are then sent in a struct like this:
typedef struct {
    Message message;
    uint32_t randomNumber;
} MessageRandomNumber;
Using a method like this:
- (void)sendRandomNumber {
    MessageRandomNumber message;
    message.message.messageType = kMessageTypeRandomNumber;
    message.randomNumber = _ourRandomNumber;
    NSData *data = [NSData dataWithBytes:&message length:sizeof(MessageRandomNumber)];
    [self sendData:data];
}
When the 32-bit CPU receives it, in the receiving method:
Message *message = (Message*)[data bytes];
if (message->messageType == kMessageTypeRandomNumber) {
    MessageRandomNumber *messageRandomNumber = (MessageRandomNumber*)[data bytes];
    NSLog(@"Received random number:%d", messageRandomNumber->randomNumber);
}
The NSLog shows: Received random number:0
NSInteger is going to be 64-bit on a 64-bit platform and 32-bit on a 32-bit platform. If you don't care about 64-bit precision, you could always use an int32_t (or a u_int32_t if you want unsigned) type to explicitly just use a 32-bit value. It is generally wise to be explicit about data lengths when sending values between devices, which is what these types exist for (there's int8_t, int16_t, int32_t, and int64_t and their unsigned counterparts).
It's also worth mentioning that you need to be concerned about the byte order of the values (for anything larger than int8_t and u_int8_t) when sending them to arbitrary hardware. If you're only working with iOS devices, this isn't going to be an issue, however.
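Here is a minimal C++ sketch of the kind of explicit packing suggested above, using a hypothetical wire format (one message-type byte followed by a 4-byte big-endian payload) so the layout and byte order are the same no matter which CPU built the binary:

#include <arpa/inet.h>   // htonl / ntohl
#include <cstdint>
#include <cstdio>
#include <cstring>

// Pack: 1 byte of message type, then the random number in network byte order.
void packRandomNumber(uint8_t out[5], uint8_t messageType, uint32_t randomNumber)
{
    out[0] = messageType;
    uint32_t wire = htonl(randomNumber);       // fixed byte order on the wire
    std::memcpy(out + 1, &wire, sizeof wire);
}

uint32_t unpackRandomNumber(const uint8_t in[5])
{
    uint32_t wire;
    std::memcpy(&wire, in + 1, sizeof wire);
    return ntohl(wire);
}

int main()
{
    uint8_t buffer[5];
    packRandomNumber(buffer, 1 /* message type, assumed */, 0xDEADBEEF);
    std::printf("round trip: %u\n", unpackRandomNumber(buffer));  // 3735928559
    return 0;
}

Packing into a flat byte buffer like this also avoids relying on struct layout, which is not guaranteed to be identical across different builds.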

Trouble with phone number on iPad Mini

I am developing an app that shows a 10-digit number. When I run it on my iPhone 6 it shows fine: it displays 8183874201. But when I run it on my old iPad Mini it shows the number like this: -467821389.
The code I am running is:
var telefonoCasaStaff = self.timelineData[0].objectForKey("TelCasa") as Int
self.telCasaTextLabel.text = String(telefonoCasaStaff)
Any ideas?
Int is a 64-bit integer on 64-bit devices and a 32-bit integer on 32-bit devices.
8183874201 = 0x1E7CC0299 exceeds the range of 32-bit integers and is apparently truncated to 32 bits.
You could use Int64, but generally, storing phone numbers as integers does not make much sense; you should use strings instead.
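A short C++ sketch of the truncation described above, alongside the string alternative (narrowing an out-of-range signed integer is implementation-defined before C++20, so the exact mangled value can vary):

#include <cstdint>
#include <cstdio>
#include <string>

int main()
{
    // The 10-digit number from the question does not fit in 32 bits.
    int64_t phone = 8183874201LL;              // 0x1E7CC0299

    // A 32-bit Int can only keep the low 32 bits, so the number is mangled.
    int32_t truncated = static_cast<int32_t>(phone);
    std::printf("as 64-bit: %lld, squeezed into 32 bits: %d\n",
                (long long)phone, truncated);

    // Treating the phone number as text sidesteps the problem entirely.
    std::string phoneText = "8183874201";
    std::printf("as a string: %s\n", phoneText.c_str());
    return 0;
}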

iPad missing OpenGL extension string GL_APPLE_texture_2D_limited_npot

In my iOS game, I want to use the GL_APPLE_texture_2D_limited_npot extension when available to save memory (the game has NPOT textures, and in my current implementation I add padding to make them power-of-two).
I am testing on my iPad (first generation). Everything I have read so far says that all iOS devices that support OpenGL ES 2 (including the iPad) also support GL_APPLE_texture_2D_limited_npot (which is very good, since my game uses OpenGL ES 2). I have tested on my iPad and it does support it (I removed the padding and the images work if I set the wrap mode to GL_CLAMP_TO_EDGE), but the extension does not show up when I call glGetString(GL_EXTENSIONS). The code:
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
std::cout << extensions << "\n";
Results in:
GL_OES_depth_texture GL_OES_depth24 GL_OES_element_index_uint GL_OES_fbo_render_mipmap GL_OES_mapbuffer GL_OES_packed_depth_stencil GL_OES_rgb8_rgba8 GL_OES_standard_derivatives GL_OES_texture_float GL_OES_texture_half_float GL_OES_vertex_array_object GL_EXT_blend_minmax GL_EXT_debug_label GL_EXT_debug_marker GL_EXT_discard_framebuffer GL_EXT_read_format_bgra GL_EXT_separate_shader_objects GL_EXT_shader_texture_lod GL_EXT_texture_filter_anisotropic GL_APPLE_framebuffer_multisample GL_APPLE_rgb_422 GL_APPLE_texture_format_BGRA8888 GL_APPLE_texture_max_level GL_IMG_read_format GL_IMG_texture_compression_pvrtc
Why does this extension not show up in glGetString(GL_EXTENSIONS)? What is the proper way to check for it? Do all OpenGL ES 2 iOS devices really support it?
OpenGL ES 2.0 supports non-power-of-two textures in the core specification; there is no need for an extension. Here is the spec: http://www.khronos.org/registry/gles/specs/2.0/es_full_spec_2.0.25.pdf (page 69):
If wt and ht are the specified image width and height, and if either wt or ht are less than zero, then the error INVALID_VALUE is generated.
The maximum allowable width and height of a two-dimensional texture image must be at least 2^(k-lod) for image arrays of level zero through k, where k is the log base 2 of MAX_TEXTURE_SIZE and lod is the level-of-detail of the image array. It may be zero for image arrays of any level-of-detail greater than k. The error INVALID_VALUE is generated if the specified image is too large to be stored under any conditions.
Not a word about a power-of-two restriction (that is in the OpenGL ES 1.x standard).
And if you read the specification of the extension - http://www.khronos.org/registry/gles/extensions/APPLE/APPLE_texture_2D_limited_npot.txt - you'll notice that it is written against the OpenGL ES 1.1 spec.
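For checking and using this in practice, here is a minimal C++ sketch (assuming an active ES 2.0 context and the iOS ES2 header): a generic substring check against the extension string, plus the texture parameters under which NPOT textures work in core ES 2.0, which matches what the question observed with GL_CLAMP_TO_EDGE:

#include <OpenGLES/ES2/gl.h>   // iOS OpenGL ES 2.0 header
#include <cstring>

// Generic check for a token in the GL_EXTENSIONS string (simple substring test).
static bool hasExtension(const char *name)
{
    const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    return ext != nullptr && std::strstr(ext, name) != nullptr;
}

// In core ES 2.0, NPOT textures are usable as long as wrapping is
// CLAMP_TO_EDGE and no mipmapped minification filter is used.
static void setupNpotTexture(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

Something like hasExtension("GL_OES_texture_npot") would, for example, tell you whether a device additionally offers full NPOT support (mipmaps and repeat wrapping).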
