I am getting strange behavior: when I assign a single value in a jagged array, it changes the values of an entire column.
let testArray = Array.create 5 (Array.create 5 nan)
testArray.[3].[3] <- 1.0
The code above, instead of changing the value of a single cell, changes the value of the entire column.
val it : float [] [] =
  [|[|nan; nan; nan; 1.0; nan|]; [|nan; nan; nan; 1.0; nan|];
    [|nan; nan; nan; 1.0; nan|]; [|nan; nan; nan; 1.0; nan|];
    [|nan; nan; nan; 1.0; nan|]|]
This happens because you're not creating a 2-dimensional array (as I assume you expect), but rather you're creating exactly two arrays: one array with 5 nans in it, and another array with 5 references to the first array in it. Just to illustrate the matter, the following code is completely equivalent to yours:
let firstArray = Array.create 5 nan
let testArray = Array.create 5 firstArray
testArray.[3].[3] <- 1.0
So the line testArray.[3].[3] <- 1.0 actually changes only one element, the fourth one in firstArray, but when you then print out testArray, that same element shows up multiple times, because every slot of testArray references firstArray.
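To see the aliasing directly, you can compare two of the rows for reference identity (a quick check, not part of the original question):
// Both slots hold the same underlying array, so this prints "true".
printfn "%b" (obj.ReferenceEquals(testArray.[0], testArray.[1]))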
If you want to create an array containing five distinct arrays, use Array.init, which takes a function that creates each element rather than a ready-made element value:
let testArray = Array.init 5 (fun _ -> Array.create 5 nan)
Alternatively, you can use an array comprehension:
let testArray = [|for i in 1..5 -> Array.create 5 nan|]
Both will give you the same result, though the first one will have slightly better performance.
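As a quick sanity check (not from the original answer), with either version the assignment from the question now touches only one cell:
let testArray = Array.init 5 (fun _ -> Array.create 5 nan)
testArray.[3].[3] <- 1.0
// Only testArray.[3] contains a 1.0; the other four rows remain all nan.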
If you really need to work with two-dimensional arrays (rather than arrays of arrays), you might want to look at Array2D instead:
let testArray = Array2D.create 5 5 nan
testArray.[3,3] <- 1.0
I am trying to return a pair of sums using the let construct in SML. Every way I have tried only returns one value. I have tried creating a list using cons (::) and returning that, but it gives an error as well.
val t = [(3,4), (4,5), (5,6)];
fun sumPairs(nil) = 0
  | sumPairs((x,y)::zs) =
      let
        val sumFirst = x + sumPairs(zs)
        val sumSecond = y + sumPairs(zs)
      in
        (sumFirst, sumSecond) <how would I return this as a tuple or list?>
      end;
sumPairs(t);
The problem is not with (sumFirst, sumSecond) or with let specifically, but with the rest of your code.
The base case and the recursions say that sumPairs produces an int, not a pair of ints.
Because of this, there is a conflict when you try to produce a pair.
Your base case should be (0,0), not 0, since it must be a pair.
You also need to deconstruct the result from the recursion since that produces a pair, not an integer.
Like this:
fun sumPairs nil = (0, 0)
  | sumPairs ((x,y)::zs) =
      let
        val (sumFirst, sumSecond) = sumPairs zs
      in
        (x + sumFirst, y + sumSecond)
      end;
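With the example list t above, sumPairs t should now evaluate to (12, 15): 3 + 4 + 5 for the first components and 4 + 5 + 6 for the second.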
I'm learning F# and have an assignment where I have to treat a float as a coordinate. For example, the float 2.3 would be treated as the coordinate (2, 3), where x is 2 and y is 3.
How can I split the float to calculate with it?
I am trying to make a function to calculate the length of a vector:
let lenOfVec (1.2, 2.3), using the Pythagorean theorem to get the length of the hypotenuse.
But I am already stuck on splitting up the float.
Hope someone can help!
With libraries as rich as those F#/.NET offer at your disposal, splitting a float into two parts can be done with one short line of code:
let splitFloat (n: float) = n.ToString().Split('.') |> Array.map float
the library method ToString() converts the argument n (a float) to a string
the library method Split('.') applied to this string produces an array of two strings: the digits before the decimal dot and the digits after it
finally, this array of two strings is converted into the two sought floats by applying the library function float to each element with the help of another library function, Array.map
Applied to an arbitrary float, the outlined chain of conversions looks like this:
123.456 --> "123.456" --> [|"123"; "456"|] --> [|123.0; 456.0|]
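Tying this back to the vector-length question, here is a minimal sketch built on splitFloat (the name lenOfVec comes from the question; how the two parts are combined is an assumption):
// Hypothetical usage: treat the digits before the dot as x and the digits after it as y.
let lenOfVec (n: float) =
    let parts = splitFloat n
    sqrt (parts.[0] ** 2.0 + parts.[1] ** 2.0)

lenOfVec 2.3  // sqrt (2.0 ** 2.0 + 3.0 ** 2.0), roughly 3.61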
Stealing from a few other answers on here, something like this seems to work for a few examples:
open System
///Takes in a float and returns a tuple of the two parts.
let split (n: float) =
    let x = Math.Truncate(n)                         // integer part
    let bits = Decimal.GetBits(decimal n)
    let count = BitConverter.GetBytes(bits.[3]).[2]  // number of digits after the decimal point
    let dec = n - x                                  // fractional part
    let y = dec * Math.Pow(10., float count)         // shift the fraction to the left of the point
    x, y
Examples:
2.3 -> (2.0, 3.0)
200.123 -> (200.0, 123.0)
5.23 -> (5.0, 23.0)
Getting the X is easy, as you can just truncate the decimal part.
Getting the Y took input from this answer and this one.
I have two arrays, say arrayA and arrayB. arrayA has the elements [1, 2] and arrayB has the elements [3, 4]. Now I want to multiply and add the elements in these arrays like so: 1x3 + 2x4 = 11. How can I achieve this?
Here is a combo of zip, map and reduce:
let result = (zip([1,2], [3,4]).map { $0.0 * $0.1 }).reduce(0, +)
print(result) // 11
Zip makes a sequence of pairs based on the two arrays: (1,3), (2,4)
with map we iterate over the zipped sequence, producing a new element at each iteration
$0 means the element of the sequence at the current iteration. Since each element is a pair (because of zip), we can access the first sub-element of the pair with $0.0 and the second with $0.1.
finally (after map) we get an array of products; we just need to "reduce" it to a single number, summing all the resulting elements with reduce.
(0, +) means that reduce starts from 0 as the initial value and uses the + operator to accumulate the sum of all the elements.
Note that rather than chaining map and reduce (for multiplication and summation, respectively), you can apply reduce directly to the zipped sequence and have its closure sum the pair-wise products:
let a = [1, 2]
let b = [3, 4]
let result = zip(a,b).reduce(0) { $0 + $1.0 * $1.1 } // 11
Try this.
let A = [1,2]
let B = [3,4]
let C = zip(A, B).map {$0.0 * $0.1}
print(C) // [3, 8]
let sum = C.reduce(0, +)
print(sum) // 11
If I have two unknown values, let's say x and y, what is the best way to loop through all of the values between those values?
For example, given the values x = 0 and y = 5 I would like to do something with the values 0, 1, 2, 3, 4, and 5. The result could exclude 0 and 5 if this is simpler.
Using Swift's Range operator, I could do something like this:
for i in x...y {
// Do something with i
}
Except I do not know if x or y is the greater value.
The Swift documentation for Range Operators states:
The closed range operator (a...b) defines a range that runs from a to b, and includes the values a and b. The value of a must not be greater than b.
There are a number of solutions here. A pretty straightforward one is:
let diff = y - x
for i in 0...abs(diff) {
let value = min(x, y) + i
// Do something with value
}
Is there a better or more elegant way to achieve this?
I guess the most explicit way of writing it would be:
for i in min(a, b)...max(a, b) {
// Do something with i
}
To exclude the first and last value, you can increment your lower limit and use the Swift ..< syntax:
let lowerLimit = min(a, b) + 1
let upperLimit = max(a, b)
for i in lowerLimit..<upperLimit {
// Do something with i
}
I currently have the following code:
type Matrix(sourceMatrix:double[,]) =
    let rows = sourceMatrix.GetUpperBound(0) + 1
    let cols = sourceMatrix.GetUpperBound(1) + 1
    let matrix = Array2D.zeroCreate<double> rows cols
    do
        for i in 0 .. rows - 1 do
            for j in 0 .. cols - 1 do
                matrix.[i,j] <- sourceMatrix.[i,j]
    new (rows, cols) = Matrix( Array2D.zeroCreate<double> rows cols)
    new (boolSourceMatrix:bool[,]) = Matrix(Array2D.zeroCreate<double> rows cols)
        for i in 0 .. rows - 1 do
            for j in 0 .. cols - 1 do
                if(boolSourceMatrix.[i,j]) then matrix.[i,j] <- 1.0
                else matrix.[i,j] <- -1.0
My problem lies in the last constructor that takes a bool[,] parameter. The compiler isn't letting me get away with the two for loops I'm trying to use for initialization in this constructor. How can I make this work?
The easiest solution would be to just do this instead:
new (boolSourceMatrix) = Matrix(Array2D.map (fun b -> if b then 1.0 else -1.0) boolSourceMatrix)
The specific issue that you were running into is that the let-bound fields from the primary constructor aren't available in alternate constructors. To work around this, you could use an explicitly defined field, if you wanted. However, in this case it's better to take advantage of the additional functionality in the Array2D module.
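For completeness, here is a minimal sketch of the explicit-field workaround mentioned above (MatrixSketch is a hypothetical cut-down type, not the full Matrix class):
// Explicit `val` fields, unlike let-bound values, can be assigned from any constructor.
type MatrixSketch =
    val matrix : double[,]
    new (sourceMatrix: double[,]) = { matrix = Array2D.copy sourceMatrix }
    new (boolSourceMatrix: bool[,]) =
        { matrix = Array2D.map (fun b -> if b then 1.0 else -1.0) boolSourceMatrix }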