Why can I go up to this number with an integer? - memory

I am learning C and I read in Kernighan & Ritchie's book that the int type was limited to a specific range, [-32767; 32767]. I tried to verify this assertion by writing the following program, which increments a variable count starting from 1 until it overflows into negative numbers.
#include <stdio.h>

int main(void) {
    int count = 1;
    while (count > 0) {
        count++;
        printf("%d\n", count);
    }
    return 0;
}
And surprisingly I got this output:
1
......
2147483640
2147483641
2147483642
2147483643
2147483644
2147483645
2147483646
2147483647 -> This is a lot more than 32767?!
-2147483648
I do not understand. Why do I get this output? And I doubt Mr. Ritchie made a mistake ;)

You're on a 32- or 64-bit machine, and the C compiler you are using has 32-bit ints. In two's complement binary, the largest positive value uses 31 bits, which is 2^31 - 1, or 2147483647, as you are observing.
Note that this doesn't violate K&R's claim that an int covers at least the range [-32767; 32767].

Shorts typically go from -32768 to 32767; 2^15 - 1 is the largest short.
Ints typically go from -2147483648 to 2147483647; 2^31 - 1 is the largest int.
Basically, ints are twice as wide as you thought.
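If you want to see what your own compiler actually uses, a quick check is to print the constants from <limits.h> (a minimal sketch; the exact values are platform-dependent):
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* These macros report the real limits for this compiler/platform. */
    printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX); /* typically -32768 .. 32767 */
    printf("int:   %d .. %d\n", INT_MIN, INT_MAX);   /* typically -2147483648 .. 2147483647 */
    printf("long:  %ld .. %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}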

Related

Dart double "bitwise not" is giving different result (~~-1 != -1)

So I am running Dart on DartPad, and I tried running the following code:
import 'dart:math';

void main() {
  print(~0);
  print(~-1);
  print(~~-1);
}
Which resulted in the following outputs
4294967295
0
4294967295
As you can see, inverting the bits of 0 results in the max number (I was expecting -1, as Dart uses two's complement), and inverting -1 results in 0, which creates the situation where inverting -1 twice does not give me -1 back.
It looks like it's ignoring the first bit when inverting 0; why is that?
Dart compiled for the web (which includes DartPad) uses JavaScript numbers and number operations.
One of the consequences of that is that bitwise operations (~, &, |, ^, <<, >> and >>> on int) only give 32-bit results, because that's what the corresponding JavaScript operations do.
For historical reasons, Dart chooses to give unsigned 32-bit results, not two's complement numbers. So ~-1 is 0 and ~0 is the unsigned 0xFFFFFFFF, not -1.
In short, that's just how it is.
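As an analogy only (written in C, not Dart), the web semantics correspond to performing the bitwise NOT and then reinterpreting the low 32 bits as an unsigned value:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Analogy for Dart-on-the-web: do the NOT, keep 32 bits, treat them as unsigned. */
    uint32_t a = (uint32_t)~INT64_C(0);  /* like Dart web's ~0  */
    uint32_t b = (uint32_t)~INT64_C(-1); /* like Dart web's ~-1 */
    printf("%" PRIu32 "\n", a);          /* 4294967295 */
    printf("%" PRIu32 "\n", b);          /* 0 */
    return 0;
}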

Is there a constant for max/min int/double value in dart?

Is there a constant in Dart that tells us what the max/min int/double value is?
Something like double.infinity, but instead double.maxValue?
For double there are:
double.maxFinite (1.7976931348623157e+308)
double.minPositive (5e-324)
In Dart 1 there was no such value for int; the size of integers was limited only by available memory.
In Dart 2, int is limited to 64 bits, but it doesn't look like there are constants for its limits yet.
For dart2js, different rules apply:
When compiling to JavaScript, integers are therefore restricted to 53 significant bits, because all JavaScript numbers are double-precision floating-point values.
I found this in the dart_numerics package.
Here is the int64 max value:
const int intMaxValue = 9223372036854775807;
For Dart on the web it is 2^53 - 1:
const int intMaxValue = 9007199254740991;
"The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent integers between -(2^53 - 1) and 2^53 - 1." See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER
No, Dart does not have a built-in constant for the max value of an int, but this is how to get the max value.
Because ints are signed in Dart, they have an inclusive range of [-2^31, 2^31 - 1] if 32-bit and [-2^63, 2^63 - 1] if 64-bit. The first bit in an int is called the sign bit. If the sign bit is 1, the int is negative; if 0, the int is non-negative. In the max int, all the bits are 1 except the sign bit, which is 0. We can most easily achieve this by writing the int in hexadecimal notation (integers preceded by '0x' are hexadecimal):
int max = 0x7fffffff; // 32-bit
int max = 0x7fffffffffffffff; // 64-bit
In hexadecimal (a.k.a. hex), each hex digit specifies a group of 4 bits: there are 16 hex digits (0-f), there are 2 binary digits (0-1), and 2^4 = 16. There is a compile error if more bits than the bitness are specified; if fewer bits than the bitness are specified, the hexadecimal integer will be padded with 0s until the number of bits matches the bitness. So, to indicate that all the bits are 1 except for the sign bit, we need bitness / 4 hex characters (e.g. 16 for a 64-bit architecture). The first hex character represents the binary value '0111' (7), which is 0x7, and all the other hex characters represent the binary value '1111' (15), or 0xf.
Alternatively, you could use bit-shifting, which I will not explain, but feel free to Google it.
int bitness = ... // presumably 64
int max = (((1 << (bitness - 2)) - 1) << 1) + 1;
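For comparison, the same bit-shifting trick can be checked in C (a quick sketch assuming 64-bit integers, not Dart code):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Build 0111...1 without ever shifting a 1 into the sign bit. */
    int bitness = 64;
    int64_t max = (((INT64_C(1) << (bitness - 2)) - 1) << 1) + 1;
    printf("%" PRId64 "\n", max);     /* 9223372036854775807 */
    printf("%d\n", max == INT64_MAX); /* 1 */
    return 0;
}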
Use double.maxFinite and convert it to int:
final maxValue = double.maxFinite.toInt();
final minValue = -double.maxFinite.toInt();

Is it a bug when I pass zero to the function luaO_ceillog2?

I'm reading the Lua source code, version 5.3, and I found that the function
int luaO_ceillog2 (unsigned int x) in the lobject.c file doesn't treat 0 specially. When 0 is passed to this function, it returns 32. Is this a bug? I was confused.
luaO_ceillog2 is a function that's only used internally. Its name implies that it calculates ceil (the smallest integer that's not less than the value) of log2 of the argument.
Mathematically, log_b(x) is only defined for positive x. So 0 is not a valid argument for this function; I don't think this counts as a bug.
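For illustration, here is a simplified C sketch in the spirit of a ceil-log2 function (not the actual Lua implementation): since the argument is unsigned, the x-- wraps around to UINT_MAX when x is 0, and the count comes out as 32:
#include <stdio.h>

/* Count how many times (x - 1) can be shifted right before reaching zero. */
static int ceillog2(unsigned int x) {
    int l = 0;
    x--;              /* wraps to 0xFFFFFFFF when x == 0 */
    while (x != 0) {
        l++;
        x >>= 1;
    }
    return l;
}

int main(void) {
    printf("%d\n", ceillog2(1)); /* 0 */
    printf("%d\n", ceillog2(8)); /* 3 */
    printf("%d\n", ceillog2(9)); /* 4 */
    printf("%d\n", ceillog2(0)); /* 32: 0 is outside the function's domain */
    return 0;
}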

iOS calculating sum of filesizes always negative

I've got a strange problem here, and I'm sure it's just something small.
I receive information about files via JSON (RestKit is doing a good job).
I write the file size of each file via Core Data to a local store.
Afterwards, within one of my view controllers, I need to sum up the file sizes of all files in the database. I fetch all files and then loop (for) over them to sum up the sizes.
The problem is: the result is always negative!
The Core Data entity filesize is of type Integer 32 (the file size is reported in bytes by the JSON).
I read the fetch result into an NSArray allPublicationsToLoad and then try to sum up. The objects in the NSArray, of type CDPublication, have a value filesize of type NSNumber:
for (int n = 0; n < [allPublicationsToLoad count]; n = n + 1)
{
    CDPublication* thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize = allPublicationsSize + [[thePub filesize] integerValue];
    sum = [NSNumber numberWithFloat:([sum floatValue] + [[thePub filesize] floatValue])];
}
Each single filesize of the CDPublication objects is positive and correct. Only the sum of all the file sizes is negative afterwards. There are around 240 objects right now, with filesize values between 4,000 and 234,645,434,123.
Can somebody please give me a hint in the right direction!?
Is it the problem that Integer 32 or NSNumber can't hold such a huge range?
Thanks
MadMaxApp
The NSNumber object can't hold such a huge number. Because of the way negative numbers are stored, the result is negative.
Negative numbers are stored using two's complement; this is done to make addition of positive and negative numbers easier. The range of numbers NSNumber can hold is split in two: the highest half (the int values for which the highest-order bit is equal to 1) is considered to be negative, and the lowest half (where the highest-order bit is equal to 0) are the normal positive numbers. Now, if you add sufficiently large numbers, the result will be in the highest half and thus be interpreted as a negative number. Here's an illustration for the 4-bit integer situation (32 bits works exactly the same, but there would be a lot more 0s and 1s to type ;))
With 4 bits you can represent this range of signed integers:
0000 (=0)
0001 (=1)
0010 (=2)
...
0111 (=7)
1000 (=-8)
1001 (=-7)
...
1111 (=-1)
The maximum positive integer you can represent is 7 in this case. If you add 5 and 4, for example, you get:
0101 + 0100 = 1001
1001 equals -7 when you represent signed integers like this (and not 9, as you would expect). That's the effect you are observing, but on a much larger scale (32 bits).
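Here is the same 4-bit example as a small C sketch (masking to 4 bits to emulate the wrap-around):
#include <stdio.h>

/* Keep only the low 4 bits and interpret them as a signed two's complement value. */
static int as_signed_4bit(int v) {
    v &= 0xF;                     /* keep 4 bits */
    return (v >= 8) ? v - 16 : v; /* bit patterns 8..15 represent -8..-1 */
}

int main(void) {
    printf("%d\n", as_signed_4bit(5 + 4)); /* -7, not 9 */
    return 0;
}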
Your only option to get correct results in this case is to increase the number of bits used to represent your integers so the result won't be in the negative-number range of bit combinations. So if 32 bits is not enough (like in your case), you can use a long long (64 bits).
[myNumber longLongValue];
I think this has to do with int overflow: very large integers get reinterpreted as negatives when they overflow the size of int (32 bits). Use longLongValue instead of integerValue:
long long allPublicationsSize = 0;
for (int n = 0; n < [allPublicationsToLoad count]; n++) {
    CDPublication* thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize += [[thePub filesize] longLongValue];
}
This is an integer overflow issue associated with use of two's complement arithmetic. For a 32-bit integer there are exactly 2^32 (4,294,967,296) possible integer values which can be expressed. When using two's complement, the most significant bit is used as a sign bit, which allows half of the numbers to represent non-negative integers (when the sign bit is 0) and the other half to represent negative numbers (when the sign bit is 1). This gives an effective range of [-2^31, 2^31 - 1], or [-2,147,483,648, 2,147,483,647].
To overcome this problem for your case, you should consider using a 64-bit integer. This should work well for the range of values you seem to be interested in using. Alternatively, if even 64-bit is not sufficient, you should look for big integer libraries for iOS.
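As a plain C sketch (with made-up file sizes, not your actual data), this shows both the 32-bit wrap-around and the 64-bit fix:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Hypothetical file sizes in bytes, chosen so the sum exceeds 2^31 - 1. */
    int32_t size_a = 2000000000;
    int32_t size_b = 1500000000;

    int64_t wide_sum   = (int64_t)size_a + size_b; /* widened before adding: 3500000000 */
    int32_t narrow_sum = (int32_t)wide_sum;        /* truncated back to 32 bits */

    printf("64-bit sum: %" PRId64 "\n", wide_sum);
    printf("32-bit sum: %" PRId32 "\n", narrow_sum); /* typically -794967296 */
    return 0;
}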

How to define -1 as a uint64 in a match clause?

let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like any other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent positive values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the 1st bit is not used as part of the number itself, only the other 63 are. That bit is used to say whether the number is negative. If the number is unsigned, then all bits including the 1st bit are used as part of the number and the number is always non-negative (ie: is positive or 0).
Well, you could assign it -1 and, on most architectures, the two's complement representation would be stored. The signed and unsigned distinction is really only there for type checking; there is no negative sign in hardware.
I have no idea if the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care:
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t x = -1;
    printf("0x%" PRIx64 "\n", x); // 0xffffffffffffffff
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, then you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
I don't have the F# tools installed at the moment, so I cannot verify this.
If you want to go with a signed int:
-1: int64
but you can't match a negative number to a uint, as others have stated.
