For an app, I need a 64-bit unsigned int. Looking at the Dart documentation, I did not see how exactly to go about declaring one.
Can anyone tell me how this is done? I will use this 64-bit unsigned int in bitwise operations.
Dart does not have a native unsigned 64-bit integer.
For many operations, you can just use the signed 64-bit integer that an int is, and interpret it as unsigned. It's the same bits. That won't work with division, though. (And if it's for the web, then an int is a JavaScript number, and you need to do something completely different).
The simplest general approach is to use a BigInt and use toUnsigned(64) after you do any operations on it.
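For example, a minimal sketch of that approach (the u64 helper name is just for illustration):

// Emulate unsigned 64-bit arithmetic with BigInt by re-masking to 64 bits
// after every operation.
BigInt u64(BigInt x) => x.toUnsigned(64);

void main() {
  var a = u64(BigInt.parse('FFFFFFFFFFFFFFFF', radix: 16)); // max uint64
  var b = u64(a + BigInt.one); // wraps around to 0
  var c = u64(a >> 4);         // logical shift, no sign extension
  print(b);                    // 0
  print(c.toRadixString(16));  // fffffffffffffff
}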
Just use fixnum
You can easily create an Int64 with Int64().
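A short sketch using package:fixnum; the calls shown (Int64(), toHexString(), shiftRightUnsigned()) reflect the package's API as I understand it:

import 'package:fixnum/fixnum.dart';

void main() {
  // Int64 is signed, but its bitwise operators work on the raw 64-bit
  // pattern, so it behaves like an unsigned value for masking and shifting.
  var x = Int64(1) << 63;                        // top bit set
  print(x.toHexString());                        // 8000000000000000
  print(x.shiftRightUnsigned(60).toHexString()); // 8
  print((x | Int64(0xFF)).toHexString());        // 80000000000000ff
}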
I need to do some bit operations in Ruby.
One of the values needs to be an unsigned int to produce the correct result.
However, when I add, the number is always a signed int.
How do I declare a variable as an unsigned int? I've searched, but nothing seems to answer my question.
Another post says I can't declare an unsigned variable in Ruby.
Changed question:
How do I do unsigned subtraction in Ruby on Rails?
I need to do a byte checksum, which requires unsigned arithmetic.
Types are dynamic in Ruby. Ruby implements integers in such a way that the signed/unsigned distinction is irrelevant: Ruby integers automatically extend into Bignum (arbitrary-length integers) when needed.
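For the byte-checksum case, a minimal sketch (the 8-bit mask and the two's-complement checksum formula are illustrative assumptions about what you need):

# Ruby's Integer is arbitrary precision, so "unsigned" behaviour is
# obtained by masking the result to the width you need.
bytes = [0x12, 0xA4, 0xFF, 0x03]  # example data
sum = bytes.sum & 0xFF            # keep only the low 8 bits
checksum = (0x100 - sum) & 0xFF   # unsigned subtraction, wraps into 0..255
puts format('0x%02X', checksum)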
I have looked everywhere, but I couldn't find any information on this topic.
Also, is there a Java-like Long / BigDecimal datatype in Dart?
That depends if you are running in the Dart VM or compiling to JavaScript.
On the Dart VM an int is arbitrary precision and has no limit.
When compiling to JavaScript you are actually using floating-point numbers, since that is all JavaScript supports. This means you are restricted to representing integers within the range -2^53 to 2^53.
Dart 2
For dart2js-generated JavaScript, Pixel Elephant's answer is still true:
within the range -2^53 to 2^53
Other execution platforms have fixed-size integers with 64 bits.
The BigInt type was added to typed_data:
Since Dart 2.0 will switch to fixed-size integers, we also added a BigInt class to the typed_data library. Developers targeting dart2js can use this class as well. The BigInt class is not implementing num, and is completely independent of the num hierarchy.
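In released Dart 2 SDKs, BigInt ended up being part of dart:core, so no extra import is needed. A minimal sketch of using it for values beyond 64 bits:

void main() {
  // BigInt is arbitrary precision, well beyond the 64-bit (or 2^53 on
  // the web) limits of int.
  var big = BigInt.parse('123456789012345678901234567890');
  print(big * big);
  print(big.bitLength); // 97
}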
Dart 1
There are also the packages
- https://pub.dartlang.org/packages/bignum
- https://pub.dartlang.org/packages/decimal
I prefer this one:
int hugeInteger = double.maxFinite.toInt();
How could I handle operations with a number like:
48534588306961133067968196965257961415756656521818848750723547477673457670019632882524164647651492025728980571833579341743988603191694784406703
Nothing that I've tried worked so far... unsigned long, long long, etc...
What you need is a library that provides support for operations on integers of arbitrary length. However, from what I've been able to find out, there are no such libraries written in Objective-C.
You are nevertheless in luck as Objective-C is a superset of C. That makes it possible for you to use C libraries such as those described in the answers to this somewhat dated SO question.
Also, since the Clang compiler supports C++ and combining Objective-C and C++ code, you can probably use something like big int.
Note that none of the built-in types is even close to being big enough to represent numbers with as many digits as your examples. The biggest available integer type is unsigned long long, if you don't need negative numbers, and its size is 8 bytes/64 bits, which gives you a range of 0-18446744073709551615, or 20 digits max.
You could use JKBigInteger instead; it is an Objective-C wrapper around the LibTomMath C library, and it is really easy to use and understand.
In your case:
JKBigInteger *bigInt = [[JKBigInteger alloc] initWithString:@"48534588306961133067968196965257961415756656521818848750723547477673457670019632882524164647651492025728980571833579341743988603191694784406703"];
You can try here: http://gmplib.org/
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface.
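A minimal C sketch of GMP in use (Objective-C is a superset of C, so this can live in a .m file as-is); the variable names are illustrative:

#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t a, b, sum;
    // Arbitrary-precision integers are initialized from strings.
    mpz_init_set_str(a, "48534588306961133067968196965257961415756656521818848750723547477673457670019632882524164647651492025728980571833579341743988603191694784406703", 10);
    mpz_init_set_ui(b, 12345);
    mpz_init(sum);
    mpz_add(sum, a, b);       // sum = a + b
    gmp_printf("%Zd\n", sum);
    mpz_clears(a, b, sum, NULL);
    return 0;
}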
I'm not a Pascal newbie, but I still don't understand why Delphi and Free Pascal usually declare parameters and return values as signed integers even where they should always be positive. For example:
Pos() returns an Integer. Can it ever be negative?
SetLength() declares its NewLength parameter as an Integer. Can a string have a negative length?
System.THandle is declared as Longint. Can a handle be negative?
There are many decisions like these in Delphi and Free Pascal. What considerations were behind them?
In Pascal, Integer (signed) is the base type. All other integer types are subranges of Integer. (This is not entirely true in Borland dialects, given LongInt in TP and Int64 in Delphi, but close enough.)
An important reason for this is that if the intermediate result of a calculation goes negative and you are calculating with unsigned integers, range-check errors will trigger; and since most older programming languages DON'T assume two's-complement integers, the result (with range checks off) might even be corrupt.
The THandle case is much simpler. Delphi didn't have a proper 32-bit unsigned type until D4, only a 31-bit Cardinal. (Since a 32-bit unsigned integer is not a subrange of Integer, the later unsigned ints are a subset of Int64, which moved the problem to UInt64, which was only added in D2010 or so.)
So in many places in the headers, signed types are used where the WinAPI uses unsigned types, probably to avoid the 32nd bit accidentally getting corrupted in those versions, and the custom stuck.
But the WinAPI case is different from the general case.
Added later: Some Pascal (and Modula-2/3) implementations circumvent this trap by making Integer larger than the word size and requiring all numeric types to declare a proper subrange, as in the program below.
The first choice preserves the primary assumption that everything is a subset of Integer, and the second allows the compiler to scale nearly everything back down to fit in registers, especially if the CPU has operations for larger-than-word arithmetic (like x86, where a 32-bit * 32-bit multiply gives a 64-bit result, or where word-size overflows can be detected using status bits, e.g. to generate range exceptions for adds without doing a full double-word add).
var
  x : 0..20;
  y : -10..10;
begin
  // any expression of x and y has a range -10..20
end.
Turbo Pascal and Delphi emulate an integer type twice the wordsize for their 16-bit and 32-bit offerings. The handling of the highest unsigned type is hacky at best.
Well, for a start THandle is declared incorrectly. It's unsigned in the Windows headers and should be so in Delphi. In fact I think this was corrected in a recent release of Delphi.
I'd imagine that the preference for signed over unsigned is largely historical and not particularly significant. However, I can think of one example where it is important. Consider the for loop:
for i := 0 to Count-1 do
If i is unsigned and Count is 0 then this loop runs from 0 to $FFFFFFFF which is not what you want. Using a signed integer loop variable avoids that problem.
Pascal is a victim of its syntax here. The equivalent C or C++ loop has no such trouble
for (unsigned int i=0; i<Count; i++)
due to the syntactic difference and use of a comparison operator as stopping condition.
This could also be the reason why Length() on a string or dynamic array returns a signed value. And so for consistency, SetLength() should accept signed values. And given that the return value of Pos() is used to index strings, it should be signed also.
Here's another Stack Overflow discussion of the topic: Should I use unsigned integers for counting members?
Of course, I'm speculating wildly here. Perhaps there was no design and just out of habit the precedent of using signed values was set and became enshrined.
Some string-related search functions return -1 when nothing is found.
I believe the reasoning behind this is that MaxInt is 2GB, which is the maximum size for strings in 32-bit Delphi. This is because a single process can have at most 2GB of memory.
There are many reasons for using signed integers, even some that might apply when you do not intend to return a negative value.
Imagine I write code that calls Pos, and I want to do math with the results. Would you rather have a negative result (Pos('x',s)-5) raise a range-check exception, underflow and become a very large unsigned number around 4 billion, or go negative, if Pos('x',s) returns 1? Either one is a source of problems for new users who seldom think about these cases, but the long-established tradition is that by using Integer results, it's your job to check for negative and zero results and not use them as string offsets. There is an advantage for beginning and for advanced programmers, in using Integer, and not having "negative" values roll under and become large unsigned values or raise range exceptions.
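A minimal sketch of that pattern in Delphi-style Pascal (the program and variable names are illustrative):

program PosDemo;
var
  S: string;
  Offset: Integer;
begin
  S := 'hello';
  // Pos returns 0 when the substring is not found; with a signed
  // Integer the arithmetic below simply goes negative and can be tested.
  Offset := Pos('x', S) - 5;
  if Offset < 1 then
    WriteLn('not a valid string index')
  else
    WriteLn(Copy(S, Offset, 5));
end.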
Secondly, remember that in beginning programming, one usually introduces Integer (signed) types long before one introduces unsigned types like Cardinal. Beginners often work with functions like Pos, and it makes sense to use the type that will create the least-unfriendly set of side effects. There are no negative side effects to having a range larger than the one you absolutely need (the range you probably need for Pos is 1 to maximum-string-length-in-delphi). There is zero benefit in 32-bit Delphi to using the Cardinal type for Pos, and there definitely ARE downsides to choosing it.
Once you get to 64-bit delphi, however, you could theoretically have strings LARGER than an Integer can hold, and moving to Cardinal wouldn't fix all your potential problems. However, the chance of anyone having a 2+ GB string is probably nil, and Delphi 64-bit compiler doesn't allow a >2 GB string, anyway. In my testing, I can achieve an almost 1 GB String in 64 bit Delphi. So the practical length limit for a Win64 string is about a billion (1073741814) characters, which is using nearly 2 GB of actual RAM. At that limit, I either get EIntOverflow or EAccessViolation, and it seems I am hitting Delphi run time library (RTL) bugs, not properly defined limits, so your mileage may vary.
Does it make a difference which one I use in Objective-C (particularly on iOS)? I assume it comes from inheriting from C and its types, as well as inheriting the types from Mac OS, which iOS was based on, but I don't know which one I should use:
unsigned char from...well..the compiler?
uint8_t from stdint.h
UInt8 from MacTypes.h
Byte from MacTypes.h
Bytef from zconf.h
I am aware that the various defs are there for portability reasons, and that using literals like unsigned char is not good future thinking (the size might change, and things will end up like the Windows API again). I'd like some advice on how to spot the best ones for my uses. Or a good tongue-lashing if I'm just being silly...
EDIT: Just for some more info: if I want something that will always be 1 byte, should I use uint8_t (it doesn't seem like it would change, with a name like that)? I'd like to think UInt8 wouldn't change either, but I see that the definition of UInt32 varies depending on whether or not the processor is 64-bit.
FURTHER EDIT: When I say byte, I specifically mean that I want 8 bits. I am doing pixel compression operations (32 bits -> 8 bits) for disk storage.
It makes no difference. Whichever you use, it will most probably end up being an unsigned char. If you want it to look nice, though, I suggest you use uint8_t from <stdint.h>.
None of them will change with the architecture. char is always 1 byte as per the C standard, and it would be insupportable from a user's point of view if, in some implementation, UInt8 suddenly became 16 bits long.
(It is not the case, however, that char is required to be 8 bits wide; it's only that if the name of a type suggests it's 8 bits long, any sensible implementation does indeed typedef it as such. Incidentally, a byte (which char is) is often an 8-bit unit, i.e. an octet.)
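A quick way to convince yourself is to print the sizes; a sketch assuming Foundation pulls in MacTypes.h for UInt8 and Byte:

#import <Foundation/Foundation.h>
#include <stdint.h>

int main(void) {
    @autoreleasepool {
        // On Apple platforms all of these are expected to be exactly one byte.
        NSLog(@"unsigned char: %zu", sizeof(unsigned char));
        NSLog(@"uint8_t:       %zu", sizeof(uint8_t));
        NSLog(@"UInt8:         %zu", sizeof(UInt8));
        NSLog(@"Byte:          %zu", sizeof(Byte));
    }
    return 0;
}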
As in every programming language derived from the C type model, Objective-C has a handful of equivalent options for declaring an 8-bit integer.
Why did I say equivalent? Because, as the OP correctly stated, all of those options are eventually typedef-ed to the unsigned char built-in compiler type. This is correct for now, and, practically speaking, nobody sane will change them to non-8-bit integers in the future.
So the actual question here is: what is the best order in which to prioritize considerations when choosing a type name for an 8-bit integer?
Code readability
In basically every codebase with C-language roots, primitive type names are a mess. Therefore, probably the most important factor is readability: a clear and uniquely identifiable intent behind choosing this specific type for this specific integer, for the majority of people who will read your code.
So let's look at those types from the point of view of an average Objective-C programmer who knows little about the C language.
unsigned char - what's this??? why is char ever meant to be signed???
uint8_t - ok, an unsigned 8-bit integer
UInt8 - hmm, the same as above
Byte - a signed or unsigned 8-bit integer?
Bytef - what's this? byte-float? what does that 'f' mean?
It's obvious here that unsigned char and Bytef aren't good choices.
Going further, you will notice another nuisance with the Byte type name: you can't say for sure whether it represents a signed or an unsigned integer, which could be extremely important when you're trying to understand the range of values the integer can hold (-128..127 or 0..255). This doesn't add to code readability either.
Uniform code style
We're now left with two type names: uint8_t and UInt8. How do we choose between them?
Again, looking at them through the eyes of an Objective-C programmer who uses type names like NSInteger and NSUInteger a lot, UInt8 looks much more natural, while uint8_t just looks like very low-level, daunting stuff.
Conclusion
Thus, we are eventually left with a single option: UInt8. It is clearly identifiable in terms of number of bits and range, and it looks familiar. So it's probably the best choice here.