MISRA 2004: 10.1/R warning - misra

S32 pLeftX;
pLeftX = pos.x - 1; // Getting a MISRA 2004: 10.1/R warning for this.
Here, pos.x is of type int.

If pos.x is really int and S32 is a signed int type, then your static analyser is broken.
Implicit conversion to a wider type of the same signedness is allowed by rule 10.1. If pos.x is an int, then the expression pos.x - 1 is int minus int. The result is an int, which is always signed. That result is then implicitly converted to a 32-bit signed integer, which is fine.
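A minimal C sketch of that reasoning, assuming S32 is a typedef for a 32-bit signed type (the typedef and struct below are illustrative, not from the original code):

typedef int S32;                /* illustrative; the real typedef comes from the project headers */
struct Pos { int x; int y; };

void example(struct Pos pos)
{
    S32 pLeftX;
    pLeftX = pos.x - 1;         /* int minus int yields int; converting that int to a 32-bit
                                   signed type of the same signedness is permitted by 10.1 */
    (void)pLeftX;               /* keep the compiler quiet about the unused result */
}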

I would first look at how the tool is configured. What is the size of int? According to the C90 standard the size of int is implementation-defined and can be 16 bits or greater. Assuming that S32 is a 32-bit signed integer type, my understanding is that:
If the size of int is 32 bits, then there is no problem.
If the size of int is greater than 32 bits, then the problem is that there is an implicit conversion to a narrower type.
If the size of int is less than 32 bits, then the problem is that the right-hand side of the assignment is a complex expression and its result is implicitly converted to a wider type.
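If the analyser still flags the line once the configuration is right, a common workaround is to make the conversion explicit, though rule 10.3's constraints on casting complex expressions also apply; this is only a sketch, reusing the declarations from the snippet above:

pLeftX = (S32)(pos.x - 1);      /* make the conversion to S32 explicit */

/* or split the statement so the arithmetic is done in S32 rather than int: */
pLeftX = pos.x;
pLeftX = pLeftX - 1;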

Related

how can I make a uint32 in F#?

I have a .NET API, from RabbitMQ, that takes a uint32.
the method is:
void BasicQos(uint prefetchSize, ushort prefetchCount, bool global);
in C#:
channel.BasicQos(0, 1, false);
but in F#, the same line gives:
[FS0001] This expression was expected to have type 'uint32' but here has type 'int'
but I can't figure out how to declare a number as unsigned in F#; how can I do this?
UInt32 and UInt32Converter exist but I couldn't see how I could use them to convert / cast, etc.
Check out the docs page on literals here, but the short form is that you need the u suffix: 0u is an unsigned 32-bit integral zero. You could also use the uint32 function to convert a signed 32-bit int to an unsigned 32-bit int, but this would be work done at runtime as opposed to a compile-time value.

Is there a constant for max/min int/double value in dart?

Is there a constant in Dart that tells us the max/min int/double value?
Something like double.infinity but instead double.maxValue?
For double there are
double.maxFinite (1.7976931348623157e+308)
double.minPositive (5e-324)
In Dart 1 there was no such value for int; the size of integers was limited only by available memory.
In Dart 2, int is limited to 64 bits, but it doesn't look like there are constants yet.
For dart2js, different rules apply:
When compiling to JavaScript, integers are therefore restricted to 53 significant bits because all JavaScript numbers are double-precision floating point values.
I found this in the dart_numerics package.
Here is the int64 max value:
const int intMaxValue = 9223372036854775807;
For Dart on the web it is 2^53 - 1:
const int intMaxValue = 9007199254740991;
"The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent integers between -(2^53 - 1) and 2^53 - 1." see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
No, Dart does not have a built-in constant for the max value of an int, but this is how to get the max value.
Because ints are signed in Dart, they have a range (inclusive) of [-2^31, 2^31-1] if 32-bit and [-2^63, 2^63 - 1] if 64-bit. The first bit in an int is called the 'sign-bit'. If the sign-bit is 1, the int is negative; if 0, the int is non-negative. In the max int, all the bits are 1 except the sign bit, which is 0. We can most easily achieve this by writing the int in hexadecimal notation (integers preceded with '0x' are hexadecimal):
int max = 0x7fffffff; // 32-bit
int max = 0x7fffffffffffffff; // 64-bit
In hexadecimal (a.k.a. hex), each hex digit specifies a group of 4 bits, since there are 16 hex digits (0-f), 2 binary digits (0-1), and 2^4 = 16. There is a compile error if more bits than the bitness are specified; if fewer bits than the bitness are specified, the hexadecimal integer is padded with 0's until the number of bits matches the bitness. So, to indicate that all the bits are 1 except for the sign-bit, we need bitness / 4 hex characters (e.g. 16 for 64-bit architecture). The first hex character represents the binary integer '0111' (7), which is 0x7, and all the other hex characters represent the binary integer '1111' (15), which is 0xf.
Alternatively, you could use bit-shifting, which I will not explain, but feel free to Google it.
int bitness = ... // presumably 64
int max = (((1 << (bitness - 2)) - 1) << 1) + 1;
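For what it's worth, the same construction can be checked in plain C; this is just a sketch assuming a 64-bit int64_t:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int bitness = 64;
    int64_t max_hex   = 0x7fffffffffffffff;                     /* sign bit 0, every other bit 1 */
    int64_t max_shift = (((INT64_C(1) << (bitness - 2)) - 1) << 1) + 1;
    printf("%" PRId64 "\n%" PRId64 "\n", max_hex, max_shift);   /* both print 9223372036854775807 */
    return 0;
}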
Use double.maxFinite and convert it to an int:
final maxValue = double.maxFinite.toInt();
final minValue = -double.maxFinite.toInt();

iOS 64 bit support with C framework libxml2 "implicit conversion loses integer precision"

My app uses libxml2, which contains the function:
xmlReadMemory(const char *buffer, int size, const char *URL, const char *encoding, int options)
I do have a "size" to send, but it is an NSUInteger.
This framework is written in C, so it expects me to send an int, which throws a warning in my application since I am now including arm64 as a valid architecture: "Implicit conversion loses integer precision: NSUInteger (aka 'unsigned long') to int". Is there a safe way to resolve this warning?
link to framework API: http://www.xmlsoft.org/html/libxml-parser.html
As long as you're confident that you'll never need to pass it a value larger than the largest 32 bit integer (very unlikely) just cast it to int by adding (int) in front of the parameter in the function call. An unsigned 32 bit value is over 4 GB, which is more than the memory on an iPhone, and would take HOURS to download over the cell network.
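A sketch of that cast with a defensive range check, using size_t in place of NSUInteger (parse_buffer and the file name are illustrative, not part of libxml2):

#include <limits.h>
#include <stddef.h>
#include <libxml/parser.h>

static xmlDocPtr parse_buffer(const char *buffer, size_t length)
{
    if (length > (size_t)INT_MAX) {
        return NULL;                /* refuse instead of silently truncating the size */
    }
    return xmlReadMemory(buffer, (int)length, "in-memory.xml", NULL, 0);
}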

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looks like the following. I got some feedback about a bug, and to my horror, when I put a debugger on it, I found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method to iOS for performing MIN/MAX where the compiler would have warned about mismatching types? Seems like this SHOULD have been a compile time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
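A tiny C sketch of that wrap-around, assuming a 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    unsigned int length = 5;        /* like [@"short" length] */
    printf("%u\n", length - 10);    /* prints 4294967291: the result wraps instead of going negative */
    return 0;
}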
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32 bit platform (iOS), but in a 64 bit Mac app it would work just fine! (explanation: long is a 64 bit type there, so it can represent all 32 bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place the casts in the right places if necessary, and you shouldn't need to rewrite the MAX or MIN macros.
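A plain-C sketch of the warning and the fix, using a simplified stand-in for the Foundation MAX macro (the real macro is more elaborate, but the signedness issue is the same):

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))    /* simplified stand-in for the Foundation macro */

int main(void)
{
    unsigned int length = 5;                 /* like test.length */
    int wrong = MAX(length - 10, 0);         /* -Wsign-compare warns here; typically stores -5 */
    int right = MAX((int)length - 10, 0);    /* cast first, so the math and comparison are signed: stores 0 */
    printf("%d %d\n", wrong, right);
    return 0;
}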

How to define -1 as a uint64 in a match clause?

let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like any other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent positive values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the 1st bit is not used as part of the number itself, only the other 63 are. That bit is used to say whether the number is negative. If the number is unsigned, then all bits including the 1st bit are used as part of the number and the number is always non-negative (ie: is positive or 0).
Well, you could assign it -1 and on most architectures the two's complement bit pattern would be stored. The signed and unsigned stuff is really only for type checking. There is no negative sign in hardware.
I have no idea if the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
    uint64_t x = -1;
    printf("0x%" PRIx64 "\n", x); // 0xffffffffffffffff
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
Don't have the F# tools installed at the moment so I cannot verify this.
If you want to go with a signed int:
-1: int64
but you can't match a negative number to a uint, as others have stated.
