Is there the equivalent of INT_MAX in Dart? [duplicate]

This question already has answers here:
Is there a constant for max/min int/double value in dart?
(5 answers)
Closed 2 years ago.
The question is in the title: is there a constant INT_MAX (the maximum value of an integer) in the Dart language?
I don't care what it is, I just want to use it as an initialization constant to, for example, find a minimum value in a List.
I note that there is a double.maxFinite which I could use as in
int i = double.maxFinite.toInt();
but that somehow seems wrong to me. Or is it?

There is no maximal integer value across Dart platforms.
On native platforms, the maximal value is 0x7FFFFFFFFFFFFFFF (2^63 - 1). There is no constant provided for it.
On web platforms, the maximal integer value is double.maxFinite.
If I had to do something which needed an initial maximal value (finding the minimal element of a list, perhaps), I'd prefer to start out with the first element, and throw on an empty input.
As a second choice, I'd use num for the accumulator and use double.infinity as starting value. Then I'd check at the end and do something useful if the value is still infinite.
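A minimal sketch of the first approach (starting from the first element; the function name is mine):

int minOf(List<int> values) {
  if (values.isEmpty) {
    throw ArgumentError('cannot take the minimum of an empty list');
  }
  var min = values.first; // start from a real element, no sentinel needed
  for (final v in values.skip(1)) {
    if (v < min) min = v;
  }
  return min;
}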

Related

How to generate the same random sequence from a given seed in Delphi [closed]

I want to generate the same random sequence (of numbers or characters) based on a given "seed" value.
Using the standard Randomize function does not seem to have such an option.
For example, in C# you can initialize the Random class with a seed value (Random seed c#).
How can I achieve something similar in Delphi?
You only need to assign a particular value to the RandSeed global variable.
Actually, I'm almost surprised you asked, because you clearly know of the Randomize function, the documentation for which states the following:
The random number generator should be initialized by making a call to Randomize, or by assigning a value to RandSeed.
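For example, a minimal console sketch (the seed value 42 is arbitrary):

program SeededRandom;

{$APPTYPE CONSOLE}

var
  i: Integer;
begin
  RandSeed := 42; // assign a fixed seed instead of calling Randomize
  for i := 1 to 5 do
    Write(Random(100), ' '); // prints the same five numbers on every run
  Writeln;
end.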
The question did not mention thread safety as a requirement, but the OP specified that in comments.
In general, pseudorandom number generators have internal state, in order to calculate the next number. So they cannot be assumed to be thread-safe, and require an instance per thread.
For encapsulated thread-safe random numbers, one alternative is to use a suitably good hash function, such as xxHash, and pass it a seed and a counter that you increment after each call in the thread.
There is a Delphi implementation of xxHash here:
https://github.com/Xor-el/xxHashPascal
For general use, it's easy to make several versions in 1, 2 or 3 dimensions as needed, and return either floating point in 0..1 or integers in a range.
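A sketch of that seed-plus-counter pattern (XXH32 stands in for whatever routine xxHashPascal actually exports; treat the name and signature as assumptions):

// Reproducible per-thread stream: hash (seed, counter), then increment.
// XXH32 here is a placeholder; check xxHashPascal for the real name/signature.
function NextRandom(Seed: Cardinal; var Counter: Cardinal): Double;
var
  H: Cardinal;
begin
  H := XXH32(@Counter, SizeOf(Counter), Seed); // hash the counter with the seed
  Inc(Counter);
  Result := H / 4294967296.0; // map the 32-bit hash to [0, 1)
end;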

How to get the maximum value of an Integer in Dart?

In Java we can just use Integer.MAX_VALUE, but in Dart there is no such member.
I am using int max = 1 << 32, but this doesn't work properly when compiling to JavaScript.
What is the best way to get the maximum integer value in the Dart language?
I was using the dart_numerics package in my app for another reason and found it there while typing.
There is no such method because the maximum int values are fixed.
According to the documentation:
While the language specifies an arbitrarily sized integer, for performance reasons the VM has three different internal integer representations: smi (rhymes with pie), mint, and bigint. Each representation is used to hold different ranges of integer numbers (see the table). The VM automatically switches between these representations behind the scenes as numbers grow and shrink in range.
You can read about that here: https://dart.dev/articles/archive/numeric-computation
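If you do need explicit bounds, a minimal sketch (these constant names are mine; the SDK does not define them):

// Native (VM/AOT): ints are 64-bit two's complement.
// NB: this literal does not compile for the web, where it is not
// exactly representable as a double.
const int maxNativeInt = 0x7FFFFFFFFFFFFFFF; // 2^63 - 1

// Web (dart2js): ints are doubles, so the largest safe integer is 2^53 - 1.
const int maxSafeWebInt = 9007199254740991;

void main() {
  print(maxNativeInt); // 9223372036854775807 on the VM
}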

Why is the output of {42.05 + 0.05} like this in Dart? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 1 year ago.
When I try this on DartPad, the output is 42.099999999999994 rather than 42.1. Can anyone explain?
This is expected behavior. Double numbers cannot represent all decimal fractions precisely, and neither 0.05 nor 42.05 are the exact values that the double values represent.
The exact values are:
42.0499999999999971578290569595992565155029296875
0.05000000000000000277555756156289135105907917022705078125
If you add these two exact values, the result again cannot be represented exactly as a double. The two closest representable doubles are:
42.099999999999994315658113919198513031005859375
42.10000000000000142108547152020037174224853515625
Of these, the former is closer to the correct result of the addition, so that is the double value chosen to represent that result.
This issue is not specific to Dart. All languages using IEEE-754 64-bit floating-point numbers will get the same result, and that is probably every language with a 64-bit floating-point type (C, C++, C#, Java, JavaScript, etc.).
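You can see this directly in DartPad; rounding for display is the usual workaround (a minimal sketch):

void main() {
  print(42.05 + 0.05);                      // 42.099999999999994
  print((42.05 + 0.05) == 42.1);            // false
  print((42.05 + 0.05).toStringAsFixed(2)); // 42.10, rounded for display
}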

How can I make a method take infinitely many arguments in Objective-C? [duplicate]

This question already has answers here:
How to create variable argument methods in Objective-C
(3 answers)
Closed 9 years ago.
I want to make a method that takes infinitely many arguments in Objective-C and adds these arguments together.
I'm assuming by "infinite" you mean "as many as I want, as long as memory allows".
Pass a single argument of type NSArray. Fill it with as many arguments as you wish.
If all arguments are guaranteed to be non-object datatypes - i.e. int, long, char, float, double, structs and arrays of them - you might be better off with an NSData. But identifying individual values in a binary blob will be trickier.
Since you want to add them up, I assume they're numbers. Are they all the same datatype? Then pass an array (an old-style C array) or a pointer, along with the number of elements.
EDIT: now that I think of it, the whole design is fishy. You want a method that takes an arbitrarily large number of arguments and adds them up. But the typing effort required for passing them into a function is comparable to that of summing them up. If you have a varargs function, Sum(a,b,c,d,e) takes less typing than a+b+c+d+e. If you have a container class (NSArray, NSData, etc), you have to loop through the addends; while you're doing that, you might as well sum them up.
That's not possible on a finite machine (that is, all existing computers).
If you're good with a variable, yet finite, number of arguments, there are C's ... variadic argument functions.
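A minimal sketch of such a nil-terminated variadic method (the class and selector names are mine):

#import <Foundation/Foundation.h>

@interface Adder : NSObject
// Nil-terminated, like +[NSArray arrayWithObjects:].
+ (double)sumOfNumbers:(NSNumber *)first, ... NS_REQUIRES_NIL_TERMINATION;
@end

@implementation Adder
+ (double)sumOfNumbers:(NSNumber *)first, ... {
    double sum = 0;
    va_list args;
    va_start(args, first);
    for (NSNumber *n = first; n != nil; n = va_arg(args, NSNumber *)) {
        sum += n.doubleValue;
    }
    va_end(args);
    return sum;
}
@end

// Usage: [Adder sumOfNumbers:@1, @2.5, @3, nil] returns 6.5.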

Recommended initialization values for numbers

Assume you have a variety of Number- or int-based variables that you want initialized to some default value. But using 0 could be problematic, because 0 is meaningful and could have side effects.
Are there any conventions around this?
I have been working in ActionScript lately and have a variety of value objects with optional parameters, so for most variables I set null, but for Numbers or ints I can't use null. An example:
package com.website.app.model.vo
{
    public class MyValueObject
    {
        public function MyValueObject (
            _id:String = null,
            _amount:Number = 0,
            _isPurchased:Boolean = false
        )
        { // Constructor
            if ( _id != null ) this.id = _id;
            if ( _amount != 0 ) this.amount = _amount;
            if ( _isPurchased != false ) this.isPurchased = _isPurchased;
        }

        public var id:String;
        public var amount:Number;
        public var isPurchased:Boolean;
    }
}
The difficulty is that using 0 in the above code might be problematic if the value is never changed from its initial value. It is easy to detect whether a variable has a null value, but detecting 0 may not be so easy, because 0 might be a legitimate value. I want to set a default value to make the parameter optional, but I also want to detect later in my code whether the value was changed from its default, without hard-to-debug side effects.
I suppose I could use something like -1 for a value. I was wondering if there are any well-known coding conventions for this kind of thing? I suppose it depends on the nature of the variable and the data.
This is my first Stack Overflow question. Hopefully the gist of it makes sense.
A lot of debuggers will use 0xdeadbeef for initializing registers. I always get a chuckle when I see that.
But, in all honesty, your question contains its own answer - use a value that your variable is not ever expected to become. It doesn't matter what the value is.
Since you asked in a comment, I'll talk a little bit about C and C++. For efficiency reasons, local variables and allocated memory are not initialized by default, but debug builds often do this to help catch errors. A common value used is 0xcdcdcdcd, which is reasonably unlikely. It has the high bit set and is either a rather large unsigned or a rather large negative signed number. As a pointer address it is odd, which will cause an alignment exception if used on anything but a char (though not on x86). It has no special meaning as a 32-bit floating-point number, so it isn't a perfect choice.
Occasionally you'll see a partially overwritten value in a variable, such as 0xcdcd0000 or 0x0000cdcd. These can be treated as suspicious at the very least.
Sometimes different values will be used depending on the allocating library or allocation area. That gives you a clue where a bad value may have originated (i.e., it itself wasn't initialized, but was copied from an uninitialized value).
The ideal value would be invalid no matter what alignment you read from memory, and invalid for all primitive types. It should also look suspicious to a human, so even if they do not know the convention they can suspect something is afoot. That's why 0xdeadbeef can be a good choice: the (hex-viewing) programmer will recognize it as the work of a human and not random chance. Note also that it is odd and has the high bit set, so it has that going for it.
The value -1 is often traditionally used as an "out of range" or "invalid" value to indicate failure or non-initialised data. Then again, that goes right down the pan if -1 is a semantically valid value for the variable...or you're using an unsigned type.
You seem to like null (and for a good reason), so why not just use it throughout?
In ActionScript you can only assign Number.NaN to variables that are typed Number, not int or uint.
That being said, because AS3 does not support named arguments you can always look at the arguments array (it's a built-in array that all functions have, unless you use the ...rest construct). If that array's length is less than the position of your numeric argument you know it wasn't passed in.
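A minimal sketch of the NaN-as-default approach (note that NaN != NaN, so isNaN() is the only reliable test):

public function MyValueObject(_id:String = null,
                              _amount:Number = NaN, // NaN marks "not provided"
                              _isPurchased:Boolean = false)
{
    if (_id != null) this.id = _id;
    if (!isNaN(_amount)) this.amount = _amount; // cannot compare NaN with ==
    if (_isPurchased) this.isPurchased = _isPurchased;
}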
I often use a maximum value for this. As you say, zero often is a valid value. Generally max-int, while theoretically valid, is safe to exclude. But not always; be careful.
I like 0xD15EA5ED, it's similar to 0xDEADBEEF but is usually more accurate when debugging.
