I developed a Cordova application and it went through a security review; one of the findings is related to a plugin I use to make curl requests. The finding is graded as a High vulnerability.
The finding is as follows:
Signed integer sizeof at line 143 of [some file] specifies size of memory to allocate.
#ifdef __LP64__
#define CURL_SIZEOF_LONG 8
#else
#define CURL_SIZEOF_LONG 4
#endif
#define CurlchkszEQ(t, s) sizeof(t) == s ? 1 : -1
typedef char
__curl_rule_01__
[CurlchkszEQ(long, CURL_SIZEOF_LONG)];
This is Objective-C code and I am not familiar with it. So my question is: what is this vulnerability, and how do I fix it?
I cannot understand the logic of these macros and the way they work, so that I can write an mcp2515_init() function. If someone can explain them, I would be glad to listen. Thanks for your time.
#define true 1
#define false 0
#define True 1
#define False 0
typedef _Bool bool;
#define RESET(x) _XRS(x)
#define SET(x) _XS(x)
#define TOGGLE(x) _XT(x)
#define SET_OUTPUT(x) _XSO(x)
#define SET_INPUT(x) _XSI(x)
#define IS_SET(x) _XR(x)
#define PORT(x) _port2(x)
#define DDR(x) _ddr2(x)
#define PIN(x) _pin2(x)
#define _XRS(x,y) PORT(x) &= ~(1<<y)
#define _XS(x,y) PORT(x) |= (1<<y)
#define _XT(x,y) PORT(x) ^= (1<<y)
#define _XSO(x,y) DDR(x) |= (1<<y)
#define _XSI(x,y) DDR(x) &= ~(1<<y)
#define _XR(x,y) ((PIN(x) & (1<<y)) != 0)
#define _port2(x) PORT ## x
#define _ddr2(x) DDR ## x
#define _pin2(x) PIN ## x
I will try to break it down. What you have is a set of bit-manipulation macros for flipping/setting/clearing one bit on a port. They are generic macros, not specific to the SPI driver.
bit clear (reset)
#define _XRS(x,y) PORT(x) &= ~(1<<y)
ANDs port x with a mask that has every bit set except bit y, thereby clearing bit y.
bit set
#define _XS(x,y) PORT(x) |= (1<<y)
ORs the port with a mask that has only bit y set. This will set bit y and leave the others unaffected.
toggle
#define _XT(x,y) PORT(x) ^= (1<<y)
XORing a bit with 1 toggles its value; XORing with 0 keeps its state.
This next list of macros takes only one argument:
#define RESET(x) _XRS(x)
#define SET(x) _XS(x)
#define TOGGLE(x) _XT(x)
#define SET_OUTPUT(x) _XSO(x)
#define SET_INPUT(x) _XSI(x)
#define IS_SET(x) _XR(x)
so these macros do not look compatible with the two-argument versions above, as they take one parameter instead of two. But with the magic of macros it is possible to create an argument that expands to x,y; for instance
#define GPIO1_PIN4 1,4
#define POWER_LED GPIO1_PIN4
will make SET(POWER_LED); a legal statement (POWER_LED expands to the pair 1,4 before _XS is rescanned).
The ## operator
#define _port2(x) PORT ## x
is used to paste tokens together during macro expansion, forming new identifiers such as register names.
So, using all the macros, _XT(2,15); will expand to
PORT2 ^= (1<<15);
I am personally not a fan of this kind of macro, as it breaks how plain vanilla C works by hiding the assignments inside the macros; for instance, this style cannot set multiple bits on the same port with a single write. However, it is legal C and it is common in some embedded-systems development environments.
It would probably be easier to ignore the above macros and just write the configuration and data directly to the registers word by word.
I am running an application on an embedded (PowerPC 32 bit) system where there is a stack size limitation of 64K. I am experiencing some occasional crashes because of stack overflow.
I can build the application for a normal Linux system as well (with some minor changes to the code), so I can run it in emulation in my development environment.
I was wondering what the best way is to find the methods that exceed the stack size limit, and what the stack frame is when this happens (in order to perform some code refactoring).
I've already tried Callgrind (a Valgrind tool), but it seems not to be the right tool.
I'm looking for a tool rather than code changes, since it's a 200K-LOC project spread over 100 files.
The application is entirely written in C++03.
While it seems that there should be an existing tool for this, I would approach it by writing a small macro and adding it to the top of suspected functions:
char *__stack_root__;
#define GUARD_SIZE (64 * 1024 - 1024)
#define STACK_ROOT \
    char __stack_frame__; \
    __stack_root__ = &__stack_frame__;
#define STACK_GUARD \
    char __stack_frame__; \
    if (abs((int)(&__stack_frame__ - __stack_root__)) > GUARD_SIZE) { \
        printf("stack is about to overflow in %s: at %d bytes\n", __FUNCTION__, abs((int)(&__stack_frame__ - __stack_root__))); \
    }
And here is how to use it:
#include <stdio.h>
#include <stdlib.h>

void foo(int);

int main(int argc, char** argv) {
    STACK_ROOT; // this macro records the bottom of the thread's stack
    foo(10000);
    return 0;
}

void foo(int x) {
    STACK_GUARD; // this macro checks whether we're approaching the end of the memory available for the stack
    if (x > 0) {
        foo(x - 1);
    }
}
A couple of notes here:
this code assumes a single thread. If you have multiple threads, you need to keep track of individual __stack_root__ variables, one per thread; use thread-local storage for this
the absolute value of the pointer difference is taken so the macro works whether PowerPC grows its stack up or down (it can: this depends on your setup)
adjust GUARD_SIZE to your liking, but keep it smaller than the maximum stack size on the target
This is an example I've found:
#define kNumberOfViews (37)
#define kViewsWide (5)
#define kViewMargin (2.0)
Why can it not be like this?
#define kNumberOfViews 37
#define kViewsWide 5
#define kViewMargin 2.0
And what does the k in front mean? Is there a guide for it somewhere?
It is not really required in your example, but the use of parentheses in defines is a useful approach to make sure your define states exactly what you mean in the context of the define, and to protect it from side effects when used in code.
E.g.
#define VAR1 40
#define VAR2 20
#define SAVETYPING1 VAR1-VAR2
#define SAVETYPING2 (VAR1-VAR2)
Then in your code
foo(4*SAVETYPING1); // comes out as foo(140)
Is not the same as
foo(4*SAVETYPING2); // comes out as foo(80)
As for what the k prefix means: it is used for constants. There is plenty of discussion here on its origins:
Objective C - Why do constants start with k
#define SOME_VALUE 1234
It is a preprocessor directive. It means that, before your code is compiled, all occurrences of SOME_VALUE will be replaced by 1234. An alternative to this would be
const int kSomeValue = 1234;
For a discussion of the advantages of one over the other, see
#define vs const in Objective-C
As for the brackets: in more complex cases they are necessary, precisely because the preprocessor does copy-paste with #define. Consider this example:
#define BIRTH_YEAR 1990
#define CURRENT_YEAR 2015
#define AGE CURRENT_YEAR - BIRTH_YEAR
...
// later in the code
int ageInMonths = AGE * 12;
Here one might expect ageInMonths = 25 * 12, but instead it is computed as ageInMonths = 2015 - 1990 * 12 = 2015 - (1990 * 12). That is why the correct definition of AGE should have been
#define AGE (CURRENT_YEAR - BIRTH_YEAR)
As for naming conventions, AFAIK #define constants use all-caps names with underscores, while const constants use camelCase names with a leading k.
k is just a Hungarian-notation convention to indicate that it is a constant value. Personally I find it dumb, but it is a convention many people follow. It isn't required for the code to work at all.
I am not sure why the examples you saw had parens around them, but there is no need for parentheses around simple literal #define values like these.
All,
I am facing a strange issue with the iPhone 5S. I am computing an HMAC-SHA1 digest in my app using a third-party library. The library uses memcpy, which I am not familiar with, as I have not done much memory-level programming in C. The code works perfectly on all iPhones except the 64-bit 5S. Below is the code where it crashes (5th line).
void SHA1Update(SHA1_CTX* context, unsigned char* data, unsigned int len)
{
    unsigned int i, j;

    j = (context->count[0] >> 3) & 63;
    if ((context->count[0] += len << 3) < (len << 3)) context->count[1]++;
    context->count[1] += (len >> 29);
    if ((j + len) > 63) {
        memcpy(&context->buffer[j], data, (i = 64 - j));
        SHA1Transform(context->state, context->buffer);
        for ( ; i + 63 < len; i += 64) {
            SHA1Transform(context->state, &data[i]);
        }
        j = 0;
    }
    else i = 0;
    memcpy(&context->buffer[j], &data[i], len - i);
}
Below is the crash details
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x001a9c799cbfdcf9
Triggered by Thread: 0
Thread 0 Crashed:
0 libsystem_platform.dylib 0x0000000191c34dc8 _platform_memmove + 40
1 Anyvan 0x00000001001fc6a8 SHA1Update (sha1.c:128)
2 Anyvan 0x00000001000c7efc hmac_sha1 (hmac.c:73)
3 Anyvan 0x00000001000fe7cc -[LoginViewController callAuth:] (LoginViewController.m:360)
Any help will be highly appreciated.
This is crashing because of the integer types you are using, not because of memcpy itself. Try replacing the int with NSInteger and it should work. You can find more information here.
The iPhone 5S uses a 64-bit architecture, while the others use 32-bit. One of the core differences between the two is how the system handles integers, as explained in the doc. Your code uses unsigned int, which does not mean the same thing on 32-bit and 64-bit, and that results in a crash. You should change your ints called i and j to NSInteger and I believe your code will work. Or, come to think of it, you can simply try replacing the word int with long.
I had this same problem when compiling a Twitter+OAuth library for 64-bit platforms. I resolved it by replacing "unsigned long" with "uint32_t" in seven places in sha1.h and sha1.c, like this:
#include <stdint.h>
typedef struct {
uint32_t state[5];
uint32_t count[2];
unsigned char buffer[64];
} SHA1_CTX;
You also need to include the stdint.h header (at the top of sha1.h and maybe sha1.c) to make this work. uint32_t is always 32 bits, and I think that SHA1's transformation macros like R0 fail when operating on 64-bit data types, so changing to an explicit 32-bit data type fixes it. The answer about memcpy led me to this solution.
Remove support for arm64 if it's not that important, then set "Build Active Architecture Only" to NO (for both debug and release).
That did the trick for me; I didn't really need any 64-bit-specific functionality in my app, so I took it out.
void CSHA1::Update(UINT_8 *data, long len){}
I just changed len from UINT_32 to long, and it worked; now it runs on the 4S, 5, 5S, and 6.
With the 64-bit version of iOS we can't use %d and %u anymore to format NSInteger and NSUInteger, because on 64-bit those are typedef'd to long and unsigned long instead of int and unsigned int.
So Xcode will throw warnings if you try to format NSInteger with %d. Xcode is nice to us and offers a replacement for those two cases, which consists of an l-prefixed format specifier and a typecast to long. Then our code basically looks like this:
NSLog(@"%ld", (long)i);
NSLog(@"%lu", (unsigned long)u);
Which, if you ask me, is a pain in the eye.
A couple of days ago someone on Twitter mentioned the format specifiers %zd to format signed variables and %tu to format unsigned variables on both 32- and 64-bit platforms.
NSLog(@"%zd", i);
NSLog(@"%tu", u);
Which seems to work. And which I like more than typecasting.
But I honestly have no idea why those work. Right now both are basically magic values for me.
I did a bit of research and figured out that the z prefix means that the following format specifier has the same size as size_t. But I have absolutely no idea what the prefix t means, so I have two questions:
What exactly do %zd and %tu mean?
And is it safe to use %zd and %tu instead of Apple's suggestion of typecasting to long?
I am aware of similar questions and Apple's 64-Bit Transition Guides, which all recommend the %lu (unsigned long) approach. I am asking for an alternative to typecasting.
From http://pubs.opengroup.org/onlinepubs/009695399/functions/printf.html:
z
Specifies that a following [...] conversion specifier applies to a size_t or the corresponding signed integer type argument;
t
Specifies that a following [...] conversion specifier applies to a ptrdiff_t or the corresponding unsigned type argument;
And from http://en.wikipedia.org/wiki/Size_t#Size_and_pointer_difference_types:
size_t is used to represent the size of any object (including arrays) in the particular implementation. It is used as the return type of the sizeof operator.
ptrdiff_t is used to represent the difference between pointers.
On the current OS X and iOS platforms we have
typedef __SIZE_TYPE__ size_t;
typedef __PTRDIFF_TYPE__ ptrdiff_t;
where __SIZE_TYPE__ and __PTRDIFF_TYPE__ are predefined by the
compiler. For 32-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ int
and for 64-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ long int
(This may have changed between Xcode versions. Motivated by #user102008's
comment, I have checked this with Xcode 6.2 and updated the answer.)
So ptrdiff_t and NSInteger are both typedef'd to the same type:
int on 32-bit and long on 64-bit. Therefore
NSLog(@"%td", i);
NSLog(@"%tu", u);
work correctly and compile without warnings on all current
iOS and OS X platforms.
size_t and NSUInteger have the same size on all platforms, but
they are not the same type, so
NSLog(@"%zu", u);
actually gives a warning when compiling for 32-bit.
But this relation is not fixed in any standard (as far as I know), therefore I would
not consider it safe (in the same sense as assuming that long has the same size
as a pointer is not considered safe). It might break in the future.
The only alternative to type casting that I know of is from the answer to "Foundation types when compiling for arm64 and 32-bit architecture", using preprocessor macros:
// In your prefix header or something
#if __LP64__
#define NSI "ld"
#define NSU "lu"
#else
#define NSI "d"
#define NSU "u"
#endif
NSLog(@"i=%" NSI, i);
NSLog(@"u=%" NSU, u);
I prefer to just use an NSNumber instead:
NSInteger myInteger = 3;
NSLog(@"%@", @(myInteger));
This does not work in all situations, but I've replaced most of my NS(U)Integer formatting with the above.
According to Building 32-bit Like 64-bit, another solution is to define the NS_BUILD_32_LIKE_64 macro, and then you can simply use the %ld and %lu specifiers with NSInteger and NSUInteger without casting and without warnings.