Memcpy crash only on iPhone 5s - ios

All,
I am facing a strange issue with the iPhone 5s. I am computing an HMAC-SHA1 in my app using a third-party library. The library uses memcpy, which I am not familiar with, since I don't do much memory-level programming in C. The code works perfectly fine on all iPhones except the 64-bit 5s. Below is the code where it crashes (5th line).
void SHA1Update(SHA1_CTX* context, unsigned char* data, unsigned int len)
{
    unsigned int i, j;
    j = (context->count[0] >> 3) & 63;
    if ((context->count[0] += len << 3) < (len << 3)) context->count[1]++;
    context->count[1] += (len >> 29);
    if ((j + len) > 63) {
        memcpy(&context->buffer[j], data, (i = 64 - j));
        SHA1Transform(context->state, context->buffer);
        for ( ; i + 63 < len; i += 64) {
            SHA1Transform(context->state, &data[i]);
        }
        j = 0;
    }
    else i = 0;
    memcpy(&context->buffer[j], &data[i], len - i);
}
Below are the crash details:
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x001a9c799cbfdcf9
Triggered by Thread: 0
Thread 0 Crashed:
0 libsystem_platform.dylib 0x0000000191c34dc8 _platform_memmove + 40
1 Anyvan 0x00000001001fc6a8 SHA1Update (sha1.c:128)
2 Anyvan 0x00000001000c7efc hmac_sha1 (hmac.c:73)
3 Anyvan 0x00000001000fe7cc -[LoginViewController callAuth:] (LoginViewController.m:360)
Any help will be highly appreciated.

This is crashing because you are using int, not because of memcpy. Try replacing the int with NSInteger and it should work. You can find more information here.
The iPhone 5s uses a 64-bit architecture while the others use 32-bit. One of the core differences between the two is how the system handles integers, as explained in the doc. Your code is using unsigned int, which does not mean the same thing on 32-bit and on 64-bit, and that results in a crash. You should change your ints i and j to NSInteger and I believe your code will work. Or, come to think of it, you could simply try replacing the word int with long.

I had this same problem when compiling a Twitter+OAuth library for 64-bit platforms. I resolved it by replacing "unsigned long" with "uint32_t" in 7 places in sha1.h and sha1.c. Like this:
#include <stdint.h>

typedef struct {
    uint32_t state[5];
    uint32_t count[2];
    unsigned char buffer[64];
} SHA1_CTX;
You also need to include the <stdint.h> header (at the top of sha1.h and maybe sha1.c) to make this work. uint32_t is always 32 bits, and I think that SHA1's transformation macros like R0 fail when operating on 64-bit data types, so changing to an explicit 32-bit data type fixes it. The answer about memcpy led me to this solution.
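To see why those macros can go wrong, here is a small sketch. It assumes the library is the widely used public-domain sha1.c, whose rotate-left macro looks like the one below (an assumption on my part, not confirmed in the post); with a 64-bit unsigned long the bits shifted out of the top are kept instead of wrapping around, so every round macro built on rol produces values with garbage above bit 31:

#include <stdint.h>
#include <stdio.h>

/* Sketch only: rol as defined in the common public-domain sha1.c. */
#define rol(value, bits) (((value) << (bits)) | ((value) >> (32 - (bits))))

int main(void)
{
    uint32_t      a32 = 0x80000001u;   /* what the code expects: 32-bit words       */
    unsigned long a64 = 0x80000001u;   /* what it gets on arm64 (LP64): 64-bit words */

    printf("%llx\n", (unsigned long long)rol(a32, 1)); /* 3         (top bit wraps around) */
    printf("%llx\n", (unsigned long long)rol(a64, 1)); /* 100000003 (top bit is kept)      */
    return 0;
}

On a 32-bit build, unsigned long is 32 bits wide, so both lines print 3 and the original code behaves; that is consistent with the crash only showing up on the 64-bit 5s.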

Remove support for arm64 if it's not that important, then set "Build Active Architecture Only" to NO (for both Debug and Release).
That did the trick for me. I didn't really need any 64-bit-specific functionality in my app, so I took it out.

void CSHA1::Update(UINT_8 *data, long len){}
I just changed len from UINT_32 to long and it worked; now it runs on the 4s, 5, 5s and 6.

Related

Is there a 8/16/32 checksum algorithm that will yield a result that is not equal to all FFs or zeros?

I would like to calculate a checksum (preferably 8-bit) that will yield a result that is not FF and not 0. It is to be used in a circular-log SPI flash file system for a microcontroller. In the file system, 0 marks the start of a record and FF indicates erased memory, so when I calculate the checksum I do not want the result to be confused with the start of a record or with unused memory.
I have looked at Fletcher's checksum, but that could still yield 0 as a result. Alternatively, I thought of using a 7-bit checksum and using the last bit to make sure I do not end up with a zero or FF result.
Does anyone know about such an implementation?
I ended up doing the following:
uint8_t CrcCalc(uint8_t* buffer, size_t len)
{
    // .... some calculation here with polynomial of own choice
}
uint8_t CrcCalcNon0orFF(uint8_t* buffer, size_t len)
{
    uint8_t tempCrc = CrcCalc(buffer, len);
    if (tempCrc == 0xFF) tempCrc++;
    if (tempCrc == 0) tempCrc++;
    return tempCrc;
}
The above can be extended to 16 and 32 bit problems as well.
I am not sure if it will satisfy the math purists, but it worked for me.
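For example, a 16-bit version would look roughly like this (a sketch; Crc16Calc is just a stand-in for whatever 16-bit checksum routine you actually choose):

#include <stdint.h>
#include <stddef.h>

/* Stand-in 16-bit checksum; substitute your own polynomial of choice. */
uint16_t Crc16Calc(const uint8_t* buffer, size_t len)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint16_t)(sum * 31u + buffer[i]);
    return sum;
}

uint16_t Crc16CalcNon0orFFFF(const uint8_t* buffer, size_t len)
{
    uint16_t tempCrc = Crc16Calc(buffer, len);
    if (tempCrc == 0xFFFF) tempCrc++;   /* 0xFFFF wraps to 0x0000...       */
    if (tempCrc == 0x0000) tempCrc++;   /* ...which gets bumped to 0x0001  */
    return tempCrc;
}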

OpenCL OutOfResources

I have an OpenCL Kernel that throws an OutOfResources exception when run.
Note: I am using Cloo for C#
I created a minimum working example of my problem and the kernel now looks like this:
__kernel void MinBug
(
    __global float * img,
    __global float * background,
    __global int * tau
)
{
    int neighbourhoodSize = tau[0];
    const int x = get_global_id(0);
    const int y = get_global_id(1);
    for (int i = -neighbourhoodSize; i <= neighbourhoodSize; i++)
    {
        for (int j = -neighbourhoodSize; j <= neighbourhoodSize; j++)
        {
            //...
        }
    }
}
For my original program, this runs fine when tau is small (e.g. 2, 10, 15), but once tau gets to around 27, it sometimes throws an exception. The minimum working example I created does not have this problem until tau gets near 300.
The specific error that I get in my C# program is
Cloo.OutOfResourcesComputeException: 'OpenCL error code detected:
OutOfResources.'
This always happens on the very next line after calling the Kernel.Execute() method.
What concept am I missing?
Thanks to Huseyin for his advice on installing the correct runtime.
I also needed to select the correct platform in the code.
On my computer I currently have three platforms: two of them seem to be associated with the CPU (an Intel i7), and one with the GPU (an NVIDIA GTX 660 Ti).
I tried explicitly running on the GPU and it ran out of juice, as you can see from the error message above.
When I specified the CPU
CLCalc.InitCL(Cloo.ComputeDeviceTypes.Cpu, 1);
it ran much better. Who'd have thought: my CPU seems to have more grunt than the GPU. Maybe that's a simplistic way to put it. It's also worth noting that my CPU supports a later version of OpenCL than the GPU.
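The question uses Cloo from C#, but for reference, the same platform/device enumeration looks roughly like this against the plain OpenCL C API (a sketch with error checking omitted; on macOS the header is <OpenCL/opencl.h> instead):

#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    /* Pick the first CPU device offered by any platform. */
    cl_device_id device = NULL;
    for (cl_uint i = 0; i < num_platforms && device == NULL; i++)
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    char name[256] = "none found";
    if (device != NULL)
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Selected device: %s\n", name);
    return 0;
}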

Buffer Overflow Not Overflowing Return Address

Below is the C code
#include <stdio.h>
#include <unistd.h>   /* for read() */

void read_input()
{
    char input[512];
    int c = 0;
    while (read(0, input + c++, 1) == 1);
}

int main ()
{
    read_input();
    printf("Done !\n");
    return 0;
}
In the above code, there should be a buffer overflow of the array 'input'. The file we give it has over 600 characters in it, all 2's (e.g. 2222222...; the ASCII code of '2' is 0x32). However, when executing the code with the file, no segmentation fault is thrown, which suggests the return address was never clobbered. Below is a screenshot of the memory of the input array in gdb; highlighted is the location of the saved ebp register, and it is clear that it was skipped when writing:
LINK
The writing of the characters continues after the saved ebp, which may be why no segmentation fault is shown. Please explain why this is happening, and how to get the overflow to reach the return address (and thus change the program counter on return).
This is tricky! Both input[] and c are on the stack, with c following the 512 bytes of input[]. Before you read the 513th byte, c = 0x00000201 (513). But since input[] is full, you read 0x32 (50) onto the low byte of c, which after the read is c = 0x00000232 (562): this is little endian, so the least significant byte comes first in memory (if this were a big-endian architecture it would be c = 0x32000201, and it would almost certainly segfault).
So you are actually jumping 562 - 513 = 49 bytes ahead; then there is the ++, which makes it 50. In fact you have exactly 50 bytes not overwritten with 0x32 (again, 0x3232ab64 is little endian; if you display memory as bytes instead of dwords you will see 0x64 0xab 0x32 0x32).
So you are writing into an unassigned stack area. It doesn't segfault because it is within the process's legal address space (up to the imposed limit) and it is not overwriting any vital information.
Nice example of how things can go horribly wrong without exploding! Is this a real-life example or an assignment?
Ah yes, for the second question: try declaring c before input[], or declaring c as static, so that it does not get overwritten.
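A minimal sketch of that last suggestion, assuming the goal is simply to keep the counter out of the overwritten region (stack layout is compiler-dependent, so this only illustrates the idea):

#include <stdio.h>
#include <unistd.h>

void read_input()
{
    char input[512];
    static int c = 0;   /* no longer lives on the stack right after input[],
                           so overflowing input[] cannot corrupt the counter */
    while (read(0, input + c++, 1) == 1);
}

int main ()
{
    read_input();
    printf("Done !\n");
    return 0;
}

With the counter intact, the writes stay sequential and should now run straight over whatever follows the buffer, typically the saved ebp and the return address, unless a stack protector intervenes.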

Alternatives to type casting when formatting NS(U)Integer on 32 and 64 bit architectures?

With the 64-bit version of iOS we can't use %d and %u anymore to format NSInteger and NSUInteger, because on 64-bit those are typedef'd to long and unsigned long instead of int and unsigned int.
So Xcode will throw warnings if you try to format NSInteger with %d. Xcode is nice to us and offers a replacement for those two cases, which consists of an l-prefixed format specifier and a typecast to long. Then our code basically looks like this:
NSLog(@"%ld", (long)i);
NSLog(@"%lu", (unsigned long)u);
Which, if you ask me, is a pain in the eye.
A couple of days ago someone on Twitter mentioned the format specifiers %zd to format signed variables and %tu to format unsigned variables on 32- and 64-bit platforms.
NSLog(@"%zd", i);
NSLog(@"%tu", u);
Which seems to work. And which I like more than typecasting.
But I honestly have no idea why those work. Right now both are basically magic values for me.
I did a bit of research and figured out that the z prefix means that the following format specifier has the same size as size_t. But I have absolutely no idea what the prefix t means. So I have two questions:
What exactly do %zd and %tu mean?
And is it safe to use %zd and %tu instead of Apple's suggestion to typecast to long?
I am aware of similar questions and Apple's 64-Bit Transition Guides, which all recommend the %lu (unsigned long) approach. I am asking for an alternative to type casting.
From http://pubs.opengroup.org/onlinepubs/009695399/functions/printf.html:
z
Specifies that a following [...] conversion specifier applies to a size_t or the corresponding signed integer type argument;
t
Specifies that a following [...] conversion specifier applies to a ptrdiff_t or the corresponding unsigned type argument;
And from http://en.wikipedia.org/wiki/Size_t#Size_and_pointer_difference_types:
size_t is used to represent the size of any object (including arrays) in the particular implementation. It is used as the return type of the sizeof operator.
ptrdiff_t is used to represent the difference between pointers.
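As a quick plain-C illustration of these two length modifiers (a sketch, independent of the Foundation types):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t    s = sizeof(double);   /* an object size        */
    int       a[4];
    ptrdiff_t d = &a[3] - &a[0];    /* a pointer difference  */

    printf("%zu %td\n", s, d);      /* prints "8 3" on typical platforms */
    return 0;
}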
On the current OS X and iOS platforms we have
typedef __SIZE_TYPE__ size_t;
typedef __PTRDIFF_TYPE__ ptrdiff_t;
where __SIZE_TYPE__ and __PTRDIFF_TYPE__ are predefined by the
compiler. For 32-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ int
and for 64-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ long int
(This may have changed between Xcode versions. Motivated by #user102008's
comment, I have checked this with Xcode 6.2 and updated the answer.)
So ptrdiff_t and NSInteger are both typedef'd to the same type:
int on 32-bit and long on 64-bit. Therefore
NSLog(#"%td", i);
NSLog(#"%tu", u);
work correctly and compile without warnings on all current
iOS and OS X platforms.
size_t and NSUInteger have the same size on all platforms, but
they are not the same type, so
NSLog(#"%zu", u);
actually gives a warning when compiling for 32-bit.
But this relation is not fixed in any standard (as far as I know), therefore I would
not consider it safe (in the same sense as assuming that long has the same size
as a pointer is not considered safe). It might break in the future.
The only alternative to type casting that I know of is from the answer to "Foundation types when compiling for arm64 and 32-bit architecture", using preprocessor macros:
// In your prefix header or something
#if __LP64__
#define NSI "ld"
#define NSU "lu"
#else
#define NSI "d"
#define NSU "u"
#endif
NSLog(#"i=%"NSI, i);
NSLog(#"u=%"NSU, u);
I prefer to just use an NSNumber instead:
NSInteger myInteger = 3;
NSLog(#"%#", #(myInteger));
This does not work in all situations, but I've replaced most of my NS(U)Integer formatting with the above.
According to Building 32-bit Like 64-bit, another solution is to define the NS_BUILD_32_LIKE_64 macro, and then you can simply use the %ld and %lu specifiers with NSInteger and NSUInteger without casting and without warnings.

Bitwise operations, wrong result in Dart2Js

I'm doing ZigZag encoding on 32bit integers with Dart. This is the source code that I'm using:
int _encodeZigZag(int instance) => (instance << 1) ^ (instance >> 31);
int _decodeZigZag(int instance) => (instance >> 1) ^ (-(instance & 1));
The code works as expected in the DartVM.
But in dart2js the _decodeZigZag function returns invalid results if I input negative numbers. For example -10: -10 is encoded to 19 and should be decoded back to -10, but it is decoded to 4294967286. If I run (instance >> 1) ^ (-(instance & 1)) in the JavaScript console of Chrome, I get the expected result of -10. That tells me that JavaScript should be able to run this operation properly with its number model.
But Dart2Js generate the following JavaScript, that looks different from the code I tested in the console:
return ($.JSNumber_methods.$shr(instance, 1) ^ -(instance & 1)) >>> 0;
Why does Dart2Js add an unsigned right shift by 0 to the function? Without the shift, the result would be as expected.
Now I'm wondering, is this a bug in the Dart2Js compiler or the expected result? Is there a way to force Dart2Js to output the right JavaScript code?
Or is my Dart code wrong?
PS: I also tested splitting up the XOR into other operations, but Dart2Js still adds the right shift:
final a = -(instance & 1);
final b = (instance >> 1);
return (a & -b) | (-a & b);
Results in:
a = -(instance & 1);
b = $.JSNumber_methods.$shr(instance, 1);
return (a & -b | -a & b) >>> 0;
For efficiency reasons dart2js compiles Dart numbers to JS numbers. JS, however, only provides one number type: doubles. Furthermore, bit operations in JS are always truncated to 32 bits.
In many cases (like cryptography) it is easier to deal with unsigned 32 bits, so dart2js compiles bit operations so that their result is an unsigned 32-bit number.
Neither choice (signed or unsigned) is perfect. Initially dart2js compiled to signed 32 bits, and this was only changed when we tripped over it too frequently. As your code demonstrates, this doesn't remove the problem; it just shifts it to different (hopefully less frequent) use cases.
Non-compliant number semantics have been a long-standing bug in dart2js, but fixing it will take time and potentially slow down the resulting code. In the short-term future, Dart developers (compiling to JS) need to know about this restriction and work around it.
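To illustrate with the question's numbers: the decoded bit pattern is the same either way, only its interpretation differs. A minimal sketch in C (my own example, using explicit 32-bit types):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t encoded = 19;   /* zigzag encoding of -10 */
    uint32_t decoded = (encoded >> 1) ^ (uint32_t)(-(int32_t)(encoded & 1));

    printf("%u\n", decoded);            /* 4294967286: the unsigned 32-bit view, as dart2js returns   */
    printf("%d\n", (int32_t)decoded);   /* -10: the signed view, matching what the DartVM gives       */
    return 0;
}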
Looks like I found equivalent code that outputs the right result. The unit tests pass on both the Dart VM and dart2js, and I will use it for now.
int _decodeZigZag(int instance) => ((instance & 1) == 1 ? -(instance >> 1) - 1 : (instance >> 1));
Dart2Js does not add a shift this time. I would still be interested in the reason for this behavior.
