Related
I am learning Dart, and I defined a sum function to sum a list of numbers.
sum(numberList) => numberList.reduce((num a, num b) => a + b);
When I call it on a list of numbers:
main() {
  var nl = [4, 2, 4, 5, 9];
  print(sum(nl));
}
I got this error:
type '(num, num) => num' is not a subtype of type '(int, int) => int' of 'combine'
This confuses me: why can't a function defined for type num be called on a list of int? How do I fix this? And if the list nl comes from outside my code, how can I cast a List<int> to a List<num>? (It seems a List<int> is not a list of num? Puzzled.)
The problem is that you are calling reduce on a List<int>.
The type of that is int reduce(int Function(int, int) combine).
That means that the combine function argument must be a function returning an int.
You try to pass a function which returns a num, and that is not allowed.
You didn't catch that statically because you haven't typed the argument to sum.
Try changing it to:
num sum(List<num> numberList) => numberList.reduce((num a, num b) => a + b);
What you can do is cast the list to List<num> before passing it to sum:
print(sum(nl.cast<num>()));
I'm sorry I cannot give a real answer to your question, but here are some methods to solve your issue.
First of all, obviously, cast your list to a num list, either by specifying the generic type or using .cast:
var nl = [4, 2, 4, 5, 9];
print(sum(nl.cast<num>()));
var nl2 = <num>[4, 2, 4, 5, 9];
print(sum(nl2));
Or you might want to use an extension, which will work just fine for any list whose element type extends num:
extension SumList<T extends num> on List<T> {
  T sum() => reduce((a, b) => a + b);
}
var nl4 = [4, 2, 4, 5, 9];
print(nl4.sum());
With Xcode 11.1 if I run a playground with:
pow(10 as Double, -2) // 0.01
I get the same output using Float:
pow(10 as Float, -2) // 0.01
But if I try to use pow(Decimal, Int), as in:
pow(10 as Decimal, -2) // NaN
Does anybody know why?
Is there a better way to deal with positive and negative exponents with pow and Decimal? I need Decimal because it behaves as I expect with currency values.
EDIT: I know how to resolve that from a math perspective; I'd like to understand why it happens and/or whether it can be solved without adding to the cyclomatic complexity of my code (e.g. checking whether the exponent is negative and executing 1 / pow).
Well, algebraically, x^(-p) == 1/(x^(p))
So, convert your negative power to a positive power, and then take the reciprocal.
1/pow(10 as Decimal, 2) // 0.01
I think that this struct gives us an idea about the problem:
public struct Decimal {
    public var _exponent: Int32
    public var _length: UInt32 // length == 0 && isNegative -> NaN
    public var _isNegative: UInt32
    public var _isCompact: UInt32
    public var _reserved: UInt32
    public var _mantissa: (UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16)
    public init()
    public init(_exponent: Int32, _length: UInt32, _isNegative: UInt32, _isCompact: UInt32, _reserved: UInt32, _mantissa: (UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16))
}
The NaN condition should be satisfied only when length == 0 (and isNegative), but since UInt32 cannot represent fractional numbers, the condition ends up being satisfied...
That's simply how NSDecimal / NSDecimalNumber works: it doesn't do negative exponents. You can see a rather elaborate workaround described here:
https://stackoverflow.com/a/12095004/341994
As you can see, the workaround is exactly what you've already been told: look to see if the exponent would be negative and, if so, take the inverse of the positive root.
... if it can be solved without adding to the cyclomatic complexity of my code ...
extension Decimal {
    func pow(i: Int) -> Decimal {
        i < 0 ? 1.0 / Foundation.pow(self, -i) : Foundation.pow(self, i)
    }
}
It is really not so complex to use; from your example:
(10 as Decimal).pow(i: -2) // 0.01
Here is my point of view on this Apple function's implementation. Note the following examples:
pow(1 as Decimal, -2) // 1; (1 ^ any number) = 1
pow(10 as Decimal, -2) // NaN
pow(0.1 as Decimal, -2) // 100
pow(0.01 as Decimal, -2) // 10000
pow(1.5 as Decimal, -2) // NaN
pow(0.5 as Decimal, -2) // NaN
It seems that pow with Decimal doesn't handle fractional numbers except those based on powers of 10. So it deals with:
0.1 ^ -2 == (1/10) ^ -2 == 10 ^ 2 // It calculates this appropriately; it's base 10: 10, 100, 1000, ...
1.5 ^ -2 == (3/2) ^ -2 // (3/2) is a fractional number, so it is treated as a Double, not a Decimal; it returns NaN.
0.5 ^ -2 == (1/2) ^ -2 // (2) isn't a power of 10, so it is dealt with as (1/2) as it is, which is also a fractional number; it returns NaN.
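One way to probe this is to print what Decimal actually stores for these values, using its public exponent and significand properties (just a sketch that inspects the stored representation; it doesn't show what pow itself does):
import Foundation

// Print the exponent and significand Foundation stores for each literal.
// The string initializer is used to avoid the Double round-trip that a
// float literal would go through.
for s in ["10", "0.1", "0.01", "1.5", "0.5"] {
    let d = Decimal(string: s)!
    print(s, "-> significand:", d.significand, "exponent:", d.exponent)
}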
I think this could be a bug in the Swift compiler.
EDIT: This is a weird behaviour of Objective-C's NSDecimalNumber; see @matt's comment on this answer below.
As stated by @jawadAli in his answer:
Well, algebraically, x^(-p) == 1/(x^(p))
This formula is correct, therefore the following statements should be equal:
let ten: Decimal = 10
let one: Decimal = 1
let answer: Decimal = 0.01
pow(ten, -2) // NaN
one / pow(ten, 2) // 0.01
one / (ten * ten) // 0.01
answer // 0.01
Trying this with other data types results in 0.01.
I also tried to replicate this using other negative exponents with the Decimal data type, and it always seems to evaluate to NaN, with the exception of bases 1, 0, and -1:
(1...100).forEach {
    print(pow(-2, -$0)) // NaN
    print(pow(-1, -$0)) // Correct
    print(pow(0, -$0))  // Correct
    print(pow(1, -$0))  // Correct
    print(pow(2, -$0))  // NaN
}
I would suggest using a different data type for now.
In my DoCalcTimeDerivative, I need to take the cosine of one of the state-vector elements.
I do this with the following code:
Vector4<T> x = context.get_continuous_state_vector().CopyToVector();
T c0 = std::cos(x[0]);
However, I get the following error:
error: no matching function for call to ‘cos(Eigen::DenseCoeffsBase<Eigen::Matrix<Eigen::AutoDiffScalar<Eigen::Matrix<double, -1, 1> >, 4, 1, 0, 4, 1>, 1>::Scalar&)’
I've also tried using
const systems::VectorBase<T>& x = context.get_continuous_state_vector();
T c0 = std::cos(x[0]);
which similarly gives the following error:
error: no matching function for call to ‘cos(const Eigen::AutoDiffScalar<Eigen::Matrix<double, -1, 1> >&)’
This is strange, as I see std::cos and std::sin used in the examples, but I can't figure out why it works in the examples and not in mine.
Try this instead:
using std::cos;
Vector4<T> x = context.get_continuous_state_vector().CopyToVector();
T c0 = cos(x[0]);
The unqualified call lets argument-dependent lookup find Eigen's cos overload for AutoDiffScalar; std::cos only has overloads for the built-in floating-point types, which is why the qualified call fails when T is an AutoDiffScalar.
I'm learning F# and have an assignment where I have to treat a float as a coordinate. For example, the float 2.3 would be treated as a coordinate (2.3) where x is 2 and y is 3.
How can I split the float to calculate with it?
I am trying to make a function to calculate the length of a vector:
let lenOfVec (1.2, 2.3), using Pythagoras' theorem to get the length of the hypotenuse.
But I am already stuck at splitting up the float.
Hope someone can help!
With libraries as rich as those F#/.NET offer at your disposal, the task of splitting a float in two can be done with one short line of code:
let splitFloat (n: float) = n.ToString().Split('.') |> Array.map float
The library function ToString() converts the argument n (a float) to a string.
The library function Split('.') applied to this string converts it into an array of two strings: the digits before the decimal dot and the digits after it.
Finally, this array of two strings is converted by applying the library function float to each array element, with the help of another library function, Array.map, producing the array of the two sought floats.
Applied to an arbitrary float number, the outlined chain of conversions looks like:
123.456 --> "123.456" --> [|"123"; "456"|] --> [|123.0; 456.0|]
Stealing from a few other answers on here, something like this seems to work for a few examples:
open System
///Takes in a float and returns a tuple of the two parts.
let split (n: float) =
    let x = Math.Truncate(n)                        // integer part
    let bits = Decimal.GetBits(decimal n)           // four ints describing the decimal
    let count = BitConverter.GetBytes(bits.[3]).[2] // scale byte = number of digits after the point
    let dec = n - x                                 // fractional part
    let y = dec * Math.Pow(10., float count)        // shift the fraction 'count' digits to the left
    x, y
Examples:
2.3 -> (2.0, 3.0)
200.123 -> (200.0, 123.0)
5.23 -> (5.0, 23.0)
Getting the X is easy, as you can just truncate away the fractional part.
Getting the Y took input from this answer and this one.
I've stumbled onto an odd NSDecimalNumber behavior: for some values, invocations of integerValue, longValue, longLongValue, etc., return an unexpected value. Example:
let v = NSDecimalNumber(string: "9.821426272392280061")
v // evaluates to 9.821426272392278
v.intValue // evaluates to 9
v.integerValue // evaluates to -8
v.longValue // evaluates to -8
v.longLongValue // evaluates to -8
let v2 = NSDecimalNumber(string: "9.821426272392280060")
v2 // evaluates to 9.821426272392278
v2.intValue // evaluates to 9
v2.integerValue // evaluates to 9
v2.longValue // evaluates to 9
v2.longLongValue // evaluates to 9
This is using Xcode 7.3; I haven't tested using earlier versions of the frameworks.
I've seen a bunch of discussion about unexpected rounding behavior with NSDecimalNumber, as well as admonishments not to initialize it with the inherited NSNumber initializers, but I haven't seen anything about this specific behavior. Nevertheless there are some rather detailed discussions about internal representations and rounding which may contain the nugget I seek, so apologies in advance if I missed it.
EDIT: It's buried in the comments, but I've filed this as issue #25465729 with Apple. OpenRadar: http://www.openradar.me/radar?id=5007005597040640.
EDIT 2: Apple has marked this as a dup of #19812966.
Since you already know the problem is due to "too high precision", you could work around it by rounding the decimal number first:
let b = NSDecimalNumber(string: "9.999999999999999999")
print(b, "->", b.int64Value)
// 9.999999999999999999 -> -8
let truncateBehavior = NSDecimalNumberHandler(roundingMode: .down,
                                              scale: 0,
                                              raiseOnExactness: true,
                                              raiseOnOverflow: true,
                                              raiseOnUnderflow: true,
                                              raiseOnDivideByZero: true)
let c = b.rounding(accordingToBehavior: truncateBehavior)
print(c, "->", c.int64Value)
// 9 -> 9
If you want to use int64Value (i.e. -longLongValue), avoid using numbers with more than 62 bits of precision, i.e. avoid having more than 18 digits in total. Reasons are explained below.
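For example (a quick sketch checking that rule; the 19-digit value is the same one used above, and the 18-digit variant is mine):
import Foundation

// 18 significant digits: the mantissa still fits in a signed 64-bit value.
let fits = NSDecimalNumber(string: "9.99999999999999999")
print(fits.int64Value)       // 9
// 19 significant digits: the mantissa no longer fits, and the result is garbage.
let overflows = NSDecimalNumber(string: "9.999999999999999999")
print(overflows.int64Value)  // -8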
NSDecimalNumber is internally represented as a Decimal structure:
typedef struct {
    signed int _exponent:8;
    unsigned int _length:4;
    unsigned int _isNegative:1;
    unsigned int _isCompact:1;
    unsigned int _reserved:18;
    unsigned short _mantissa[NSDecimalMaxSize]; // NSDecimalMaxSize = 8
} NSDecimal;
This can be obtained using .decimalValue, e.g.
let v2 = NSDecimalNumber(string: "9.821426272392280061")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -18 (30717, 39329, 46888, 34892, 0, 0, 0, 0) 4
This means 9.821426272392280061 is internally stored as 9821426272392280061 × 10^-18; note that 9821426272392280061 = 34892 × 65536^3 + 46888 × 65536^2 + 39329 × 65536 + 30717.
Now compare with 9.821426272392280060:
let v2 = NSDecimalNumber(string: "9.821426272392280060")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -17 (62054, 3932, 17796, 3489, 0, 0, 0, 0) 4
Note that the exponent is reduced to -17, meaning the trailing zero is omitted by Foundation.
Knowing the internal structure, I now make a claim: the bug is because 34892 ≥ 32768. Observe:
let a = NSDecimalNumber(decimal: Decimal(
    _exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
    _mantissa: (65535, 65535, 65535, 32767, 0, 0, 0, 0)))
let b = NSDecimalNumber(decimal: Decimal(
    _exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
    _mantissa: (0, 0, 0, 32768, 0, 0, 0, 0)))
print(a, "->", a.int64Value)
print(b, "->", b.int64Value)
// 9.223372036854775807 -> 9
// 9.223372036854775808 -> -9
Note that 32768 × 65536^3 = 2^63 is the value just enough to overflow a signed 64-bit number. Therefore, I suspect that the bug is due to Foundation implementing int64Value as (1) convert the mantissa directly into an Int64, and then (2) divide by 10^|exponent|.
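Here is a rough sketch of that suspected algorithm, using the mantissa words printed above (my own reconstruction of the arithmetic, not Apple's code):
// Pack the four 16-bit mantissa words of 9.821426272392280061 into a
// 64-bit integer, reinterpret it as signed, then divide by 10^18
// (i.e. 10^|exponent|).
let words: [UInt64] = [30717, 39329, 46888, 34892]   // least-significant word first
var packed: UInt64 = 0
for word in words.reversed() {
    packed = packed << 16 | word
}
print(packed)                                 // 9821426272392280061
let wrapped = Int64(bitPattern: packed)       // exceeds Int64.max, wraps to -8625317801317271555
print(wrapped / 1_000_000_000_000_000_000)    // -8, matching int64Value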
In fact, if you disassemble Foundation.framework, you will find that it is basically how int64Value is implemented (this is independent of the platform's pointer width).
But why isn't int32Value affected? Because internally it is just implemented as Int32(self.doubleValue), so no overflow issue would occur. Unfortunately, a double only has 53 bits of precision, so Apple has no choice but to implement int64Value (which requires 64 bits of precision) without floating-point arithmetic.
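A quick check of that explanation (assuming the Int32(self.doubleValue) description above): the double approximation is roughly 9.82, so truncating it to Int32 gives 9 even for the problematic value:
import Foundation

let v = NSDecimalNumber(string: "9.821426272392280061")
print(v.doubleValue)           // ≈ 9.82142627239228
print(Int32(v.doubleValue))    // 9, truncated from the double
print(v.int32Value)            // 9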
I'd file a bug with Apple if I were you. The docs say that NSDecimalNumber can represent any value up to 38 digits long. NSDecimalNumber inherits those properties from NSNumber, and the docs don't explicitly say what conversion is involved at that point, but the only reasonable interpretation is that if the number is roundable to and representable as an Int, then you get the correct answer.
It looks to me like a bug in handling the sign-extension during the conversion somewhere, since intValue is 32-bit and integerValue is 64-bit (in Swift).