How can I make a uint32 in F#?

I have a .NET API, from RabbitMQ, that takes a uint32
the method is:
void BasicQos(uint prefetchSize, ushort prefetchCount, bool global);
in C#:
channel.BasicQos(0, 1, false);
but in F#, the same line gives:
[FS0001] This expression was expected to have type 'uint32' but here has type 'int'
but I can't figure out how to declare a number as unsigned in F#; how can I do this?
UInt32 and UInt32Converter exist, but I couldn't see how to use them to convert or cast the value.

Check out the docs page on literals, but the short form is that you need the u suffix: 0u is an unsigned 32-bit integer literal (zero). You could also use the uint32 function to convert a signed 32-bit int to an unsigned one, but that is work done at runtime, as opposed to a compile-time value.
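For example, a minimal sketch (it assumes channel is the RabbitMQ.Client IModel from the question, and uses 1us, the uint16 literal that the ushort parameter needs):

// assumption: channel is the IModel from the question
channel.BasicQos(0u, 1us, false)                     // 0u : uint32 literal

// or convert a plain int at runtime:
let prefetchSize = 0
channel.BasicQos(uint32 prefetchSize, 1us, false)    // uint32 conversion at runtime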

Related

Invalid type conversion while using ANSI C

I am facing this problem when I try to build the code with an ANSI C compiler: I was practicing writing in C and dealing with its rules, and the compiler reports an invalid type conversion that I don't know how to fix.
This is the line that produces the error; it is a call through a pointer to a function:
((CanIf_FuncTypeCanSpecial)(entry->CanIfUserRxIndication))(
entry->CanIfCanRxPduHrhRef->CanIfCanControllerHrhIdRef,
entry->CanIfCanRxPduId,
CanSduPtr,
CanDlc,
CanId);
entry->CanIfUserRxIndication is declared as void *CanIfUserRxIndication;
and CanIf_FuncTypeCanSpecial is declared as
typedef void (*CanIf_FuncTypeCanSpecial)
(uint8 channel, PduIdType pduId, const uint8 *sduPtr, uint8 dlc, Can_IdType canId);
Every argument matches the parameter types of that function pointer type except the first one: entry->CanIfCanRxPduHrhRef->CanIfCanControllerHrhIdRef is an enum, not a uint8.
You can find the code on GitHub.
and the MISRA checker is also telling me this:
#1398-D (MISRA-C:2004 11.1/R) Conversions shall not be performed between a pointer to a function and any type other than an integral type
I tried to convert the enum to uint8 so that all of the arguments match what CanIf_FuncTypeCanSpecial takes, but nothing changed.
If I understand correctly, you are trying to cast an existing function to match a function pointer declaration that has a differing argument type. You could cast the arguments and call such a function directly, but function pointers may be stored and called anywhere in the program, and at those call sites the code would not know what to cast (the mismatched types may even differ in size), so this conversion is illegal.
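A minimal C sketch of the pattern that avoids the function pointer cast entirely; the type and function names below are hypothetical stand-ins for the AUTOSAR ones in the question:

#include <stdint.h>

typedef uint8_t  uint8;        /* stand-ins for the AUTOSAR typedefs */
typedef uint16_t PduIdType;
typedef uint32_t Can_IdType;

typedef void (*CanIf_FuncTypeCanSpecial)
    (uint8 channel, PduIdType pduId, const uint8 *sduPtr, uint8 dlc, Can_IdType canId);

typedef enum { HRH_CHANNEL_0, HRH_CHANNEL_1 } HrhIdEnum;   /* hypothetical enum */

static void UserRxIndication(uint8 channel, PduIdType pduId,
                             const uint8 *sduPtr, uint8 dlc, Can_IdType canId)
{
    (void)channel; (void)pduId; (void)sduPtr; (void)dlc; (void)canId;
}

int main(void)
{
    /* Store the callback with its real type, not void *, so no pointer cast is needed. */
    CanIf_FuncTypeCanSpecial cb = UserRxIndication;
    HrhIdEnum hrh = HRH_CHANNEL_1;
    uint8 sdu[8] = {0};

    /* Convert the enum argument, not the function pointer. */
    cb((uint8)hrh, (PduIdType)7, sdu, (uint8)8, (Can_IdType)0x123u);
    return 0;
}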

iOS 64 bit support with C framework libxml2 "implicit conversion loses integer precision"

My app uses libxml2, which contains the function:
xmlReadMemory(const char *buffer, int size, const char *URL, const char *encoding, int options)
I do have a "size" to send, but it is an NSUInteger.
This framework is written in C, so it expects me to send an int, which throws a warning in my application since I am now including arm64 as a valid architecture: "Implicit conversion loses integer precision: NSUInteger (aka 'unsigned long') to int". Is there a safe way to resolve this warning?
link to framework API: http://www.xmlsoft.org/html/libxml-parser.html
As long as you're confident that you'll never need to pass a value larger than INT_MAX (very unlikely here), just cast it to int by adding (int) in front of the argument in the function call. A buffer anywhere near that size (about 2 GB) is more than the memory on an iPhone, and would take hours to download over the cell network.
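If you want the cast to be provably safe, you can guard it first. A minimal C sketch (parse_buffer is a hypothetical wrapper; xmlReadMemory is the libxml2 function from the question):

#include <limits.h>
#include <stddef.h>
#include <libxml/parser.h>    /* declares xmlReadMemory */

/* Returns NULL when the buffer would not fit xmlReadMemory's int parameter. */
static xmlDocPtr parse_buffer(const char *buffer, size_t size)
{
    if (size > (size_t)INT_MAX)
        return NULL;          /* too large: refuse instead of silently truncating */
    return xmlReadMemory(buffer, (int)size, "noname.xml", NULL, 0);
}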

Alternatives to type casting when formatting NS(U)Integer on 32 and 64 bit architectures?

With the 64 bit version of iOS we can't use %d and %u anymore to format NSInteger and NSUInteger. Because for 64 bit those are typedef'd to long and unsigned long instead of int and unsigned int.
So Xcode will throw warnings if you try to format NSInteger with %d. Xcode is nice to us and offers a replacement for those two cases, which consists of an l-prefixed format specifier and a typecast to long. Then our code basically looks like this:
NSLog(@"%ld", (long)i);
NSLog(@"%lu", (unsigned long)u);
Which, if you ask me, is a pain in the eye.
A couple of days ago someone on Twitter mentioned the format specifiers %zd to format signed variables and %tu to format unsigned variables on 32 and 64 bit platforms.
NSLog(@"%zd", i);
NSLog(@"%tu", u);
Which seems to work. And which I like more than typecasting.
But I honestly have no idea why those work. Right now both are basically magic values for me.
I did a bit of research and figured out that the z prefix means the argument for the following format specifier has the same size as size_t. But I have absolutely no idea what the prefix t means. So I have two questions:
What exactly do %zd and %tu mean?
And is it safe to use %zd and %tu instead of Apple's suggestion to typecast to long?
I am aware of similar questions and Apple's 64-Bit Transition guides, which all recommend the %lu (unsigned long) approach. I am asking for an alternative to type casting.
From http://pubs.opengroup.org/onlinepubs/009695399/functions/printf.html:
z
Specifies that a following [...] conversion specifier applies to a size_t or the corresponding signed integer type argument;
t
Specifies that a following [...] conversion specifier applies to a ptrdiff_t or the corresponding unsigned type argument;
And from http://en.wikipedia.org/wiki/Size_t#Size_and_pointer_difference_types:
size_t is used to represent the size of any object (including arrays) in the particular implementation. It is used as the return type of the sizeof operator.
ptrdiff_t is used to represent the difference between pointers.
On the current OS X and iOS platforms we have
typedef __SIZE_TYPE__ size_t;
typedef __PTRDIFF_TYPE__ ptrdiff_t;
where __SIZE_TYPE__ and __PTRDIFF_TYPE__ are predefined by the
compiler. For 32-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ int
and for 64-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ long int
(This may have changed between Xcode versions. Motivated by @user102008's
comment, I have checked this with Xcode 6.2 and updated the answer.)
So ptrdiff_t and NSInteger are both typedef'd to the same type:
int on 32-bit and long on 64-bit. Therefore
NSLog(#"%td", i);
NSLog(#"%tu", u);
work correctly and compile without warnings on all current
iOS and OS X platforms.
size_t and NSUInteger have the same size on all platforms, but
they are not the same type, so
NSLog(#"%zu", u);
actually gives a warning when compiling for 32-bit.
But this relation is not fixed in any standard (as far as I know), therefore I would
not consider it safe (in the same sense as assuming that long has the same size
as a pointer is not considered safe). It might break in the future.
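If you want to verify what these types actually are on a given platform, a quick standard-C check (not part of the original answer) is:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* On 64-bit Apple platforms this prints 8, 8, 8; on 32-bit, 4, 4, 4. */
    printf("size_t: %zu, ptrdiff_t: %zu, long: %zu\n",
           sizeof(size_t), sizeof(ptrdiff_t), sizeof(long));
    return 0;
}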
The only alternative to type casting that I know of is from the answer to "Foundation types when compiling for arm64 and 32-bit architecture", using preprocessor macros:
// In your prefix header or something
#if __LP64__
#define NSI "ld"
#define NSU "lu"
#else
#define NSI "d"
#define NSU "u"
#endif
NSLog(#"i=%"NSI, i);
NSLog(#"u=%"NSU, u);
I prefer to just use an NSNumber instead:
NSInteger myInteger = 3;
NSLog(#"%#", #(myInteger));
This does not work in all situations, but I've replaced most of my NS(U)Integer formatting with the above.
According to Building 32-bit Like 64-bit, another solution is to define the NS_BUILD_32_LIKE_64 macro, and then you can simply use the %ld and %lu specifiers with NSInteger and NSUInteger without casting and without warnings.

MISRA 2004: 10.1/R warning

S32 pLeftX;
pLeftX = pos.x - 1; // Getting a MISRA 2004: 10.1/R warning for this.
Here, pos.x is of type int.
If pos.x is really int and S32 is a signed int type, then your static analyser is broken.
Implicit type conversion to a wider type of the same signedness is allowed by rule 10.1. If pos.x is int, then the expression pos.x - 1 is int minus int. The result is an int, which is signed. This is then implicitly converted to a 32-bit signed int, which is fine.
I would first look at how the tool is configured. What is the size of int? According to the C90 standard the size of int is implementation-defined and can be 16 bits or greater. Assuming that S32 is a 32-bit signed integer type, my understanding is that (see the sketch after this list):
If the size of int is 32 bits, then there is no problem.
If the size of int is greater than 32 bits, then the problem is an implicit conversion to a narrower type.
If the size of int is less than 32 bits, then the problem is that the right-hand side of the assignment is a complex expression whose result is implicitly converted to a wider type.
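A minimal C sketch of the defensive fix when the widths might differ (S32 and struct Pos here are stand-ins mirroring the question): cast the whole right-hand-side expression explicitly, so the checker sees no implicit conversion of a complex expression.

#include <stdint.h>

typedef int32_t S32;           /* assumption: S32 is a 32-bit signed typedef */
struct Pos { int x; };

S32 left_of(struct Pos pos)
{
    /* The explicit cast documents the int -> S32 conversion. */
    return (S32)(pos.x - 1);
}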

MAX / MIN function in Objective-C that avoids casting issues

I had code in my app that looks like the following. I got some feedback about a bug and, to my horror, when I put a debugger on it I found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another method built into iOS for performing MIN/MAX where the compiler would have warned about the mismatched types? It seems like this SHOULD have been a compile-time issue, not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested in FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
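A tiny C illustration of that wrap-around (the variable name is mine):

#include <stdio.h>

int main(void)
{
    unsigned int len = 5;          /* like test.length in the question */
    printf("%u\n", len - 10);      /* prints 4294967291, i.e. 2^32 - 5 */
    return 0;
}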
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(#"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32 bit platform (iOS), but in a 64 bit Mac app it would work just fine! (explanation: long is a 64 bit type there, so it can represent all 32 bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place casts in the right places, if necessary, and you shouldn't need to rewrite the MAX or MIN macros.
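For the case in the question, the underlying fix can also be written without touching the macro. A minimal C sketch (clamped_deficit is a hypothetical helper): do the subtraction in a signed type before comparing, which is exactly what the (int) cast in the question achieves.

#include <stddef.h>

/* length stands in for test.length (an unsigned value). */
static long clamped_deficit(size_t length)
{
    long diff = (long)length - 10;   /* subtract in a signed type */
    return diff > 0 ? diff : 0;      /* clamp at zero; no signed/unsigned mix */
}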
