Writing code in F# in most cases results in very concise and intuitive code. This piece of code, however, looks somewhat imperative and awkward to me.
times is an array of float values
Lines inside the file times.csv always look like this:
Mai 06 2011 05:43:45 nachm.,00:22.99
Mai 04 2011 08:59:12 nachm.,00:22.73
Mai 04 2011 08:58:27 nachm.,00:19.38
Mai 04 2011 08:57:54 nachm.,00:18.00
average generates an average of the values, dropping the lowest and highest time
getAllSubsetsOfLengthN creates a sequence of all consecutive subsets of length n. Is there a 'nicer' solution to that? Or does already exist something like that inside the F# core?
bestAverageOfN finds the lowest average of all the subsets
let times =
File.ReadAllLines "times.csv"
|> Array.map (fun l -> float (l.Substring((l.LastIndexOf ':') + 1)))
let average set =
(Array.sum set - Array.min set - Array.max set) / float (set.Length - 2)
let getAllSubsetsOfLengthN n (set:float list) =
seq { for i in [0 .. set.Length - n] -> set
|> Seq.skip i
|> Seq.take n }
let bestAverageOfN n =
times
|> Array.toList
|> getAllSubsetsOfLengthN n
|> Seq.map (fun t -> t
|> Seq.toArray
|> average)
|> Seq.min
What I am looking for are nicer, shorter or easier solutions. Every useful post will be upvoted, of course :)
I guess getAllSubsetsOfLengthN can be replaced with Seq.windowed,
so bestAverageOfN will look like:
let bestAverageOfN n =
times
|> Seq.windowed n
|> Seq.map average
|> Seq.min
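To illustrate what Seq.windowed produces (a quick sketch with made-up values):

[| 22.99; 22.73; 19.38; 18.00 |]
|> Seq.windowed 3
|> Seq.toList
// [[|22.99; 22.73; 19.38|]; [|22.73; 19.38; 18.0|]]

Each window is an array, so the existing average function can be mapped over the windows directly.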
Without much thinking, there are some basic functional refactorings you can make. For example, in the calculation of bestAverageOfN, you can use function composition:
let bestAverageOfN n =
times
|> Array.toList
|> getAllSubsetsOfLengthN n
|> Seq.map (Seq.toArray >> average)
|> Seq.min
Other than this and the suggestion by desco, I don't think there is anything I would change. If you don't use your special average function anywhere in the code, you could write it inline as a lambda function, but that really depends on your personal preferences.
Just for the sake of generality, I would probably make times an argument of bestAverageOfN:
let bestAverageOfN n times =
times
|> Seq.windowed n
|> Seq.map (fun set ->
(Array.sum set - Array.min set - Array.max set) / float (set.Length - 2))
|> Seq.min
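Usage would then look like this (a sketch, assuming times is the float array parsed above):

times |> bestAverageOfN 5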
Since you mentioned regex for parsing your input, I thought I'd show you such a solution. It may well be overkill, but it is also a more functional solution, since regular expressions are declarative while substring manipulation is more imperative. Regex is also nice because it is easier to grow if the structure of your input changes; indexed substring manipulation can get messy, and I try to avoid it completely.
First, a couple of active patterns:
open System.Text.RegularExpressions
let (|Groups|_|) pattern input =
let m = Regex.Match(input, pattern)
if m.Success then
Some([for g in m.Groups -> g.Value] |> List.tail)
else
None
open System
let (|Float|_|) input =
match Double.TryParse(input) with
| true, value -> Some(value)
| _ -> None
Adopting @ildjarn's times implementation:
let times =
File.ReadAllLines "times.csv"
|> Array.map (function Groups @",.*?:(.*)$" [Float(value)] -> value)
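To see what the active patterns do on one of the sample lines (a quick sketch; note that Double.TryParse is culture-sensitive, so the result below assumes '.' is the decimal separator):

match "Mai 04 2011 08:59:12 nachm.,00:22.73" with
| Groups @",.*?:(.*)$" [Float(value)] -> printfn "%f" value // 22.730000
| _ -> printfn "no match"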
Since bestAverageOfN has already been covered, here's an alternative implementation of times:
let times =
File.ReadAllLines "times.csv"
|> Array.map (fun l -> l.LastIndexOf ':' |> (+) 1 |> l.Substring |> float)
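For one of the sample lines, that pipeline works out as follows (a sketch):

let l = "Mai 06 2011 05:43:45 nachm.,00:22.99"
l.LastIndexOf ':' |> (+) 1 |> l.Substring |> float // index of the last ':' plus 1, then "22.99", then 22.99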
Related
I am trying to filter out values from a sequence that are not in another sequence. I was pretty sure my code worked, but it is taking a long time to run on my computer, so now I am not sure and would like to see what the community thinks.
Code is below:
let statezip =
StateCsv.GetSample().Rows
|> Seq.map (fun row -> row.State)
|> Seq.distinct
type State = State of string
let unwrapstate (State s) = s
let neededstates (row:StateCsv) = Seq.contains (unwrapstate row.State) statezip
I am filtering by the neededstates function. Is there something wrong with the way I am doing this?
let datafilter =
StateCsv1.GetSample().Rows
|> Seq.map (fun row -> row.State,row.Income,row.Family)
|> Seq.filter neededstates
|> List.ofSeq
I believe it should filter the sequence down to the values for which neededstates returns true, since neededstates returns a bool. StateCsv and StateCsv1 have the exact same structure, although from different years.
Evaluation of contains on sequences and lists can be slow. For a case where you want to check for the existence of an element in a collection, the F# Set type is ideal. You can convert your sequences to sets using Set.ofSeq, and then run the logic over the sets instead. The following example uses the numbers from 1 to 10000 and then uses both sequences and sets to filter the result to only the odd numbers by checking that the values are not in a collection of even numbers.
Using Sequences:
let numberSeq = {0..10000}
let evenNumberSeq = seq { for n in numberSeq do if (n % 2 = 0) then yield n }
#time
numberSeq |> Seq.filter (fun n -> evenNumberSeq |> Seq.contains n |> not) |> Seq.toList
#time
This runs in about 1.9 seconds for me.
Using sets:
let numberSet = numberSeq |> Set.ofSeq
let evenNumberSet = evenNumberSeq |> Set.ofSeq
#time
numberSet |> Set.filter (fun n -> evenNumberSet |> Set.contains n |> not)
#time
This runs in only 0.005 seconds. Hopefully you can materialize your sequences to sets before performing your contains operation, thereby getting this level of speedup.
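Applied to the code from the question, the idea would look roughly like this (a sketch with plain tuples standing in for the CSV type provider rows, which are not reproduced here):

// Hypothetical stand-ins for StateCsv.GetSample().Rows and StateCsv1.GetSample().Rows
let statezipSet =
    [ "AL"; "AK"; "AZ" ] // the distinct states from the first file
    |> Set.ofList

let rows = [ ("AL", 52000, 2); ("XX", 48000, 3); ("AZ", 61000, 4) ]

let datafilter =
    rows
    |> List.filter (fun (state, _, _) -> Set.contains state statezipSet)
// [("AL", 52000, 2); ("AZ", 61000, 4)]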
I am basically trying to compute the factors of a BigInteger that are prime. I have two simple factorization functions; they both look like they should produce the same result in the way I used them below, but this is not the case. Can someone explain what is happening?
let lookupTable = new HashSet<int>(primes)
let isPrime x = lookupTable.Contains x
let factors (n:bigint) =
Seq.filter (fun x -> n % x = 0I) [1I..n]
let factors' (n:bigint) =
Seq.filter (fun x -> n % x = 0I) [1I..bigint(sqrt(float(n)))]
600851475143I
|> fun n -> bigint(sqrt(float(n)))
|> factors
|> Seq.map int
|> Seq.filter isPrime
|> Seq.max // produces 137
600851475143I
|> factors'
|> Seq.map int
|> Seq.filter isPrime
|> Seq.max // produces 6857 (the right answer)
Your functions are not equivalent. In the first function, the list of candidates goes to n, and the filter function also uses n for remainder calculation. The second function, however, also uses n for remainder calculation, but the candidates list goes to sqrt(n) instead.
To make the second function equivalent, you need to reformulate it like this:
let factors' (n:bigint) =
let k = bigint(sqrt(float(n)))
Seq.filter (fun x -> k % x = 0I) [1I..k]
Update, to clarify this somewhat:
In the above code, notice how k is used in two places: to produce the initial list of candidates and to calculate the remainder within the filter function. This is precisely the change I made to your code: my code uses k in both places, whereas your code uses k in one place but n in the other.
This is how your original function would look with k:
let factors' (n:bigint) =
let k = bigint(sqrt(float(n)))
Seq.filter (fun x -> n % x = 0I) [1I..k]
Notice how it uses k in one place, but n in the other.
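To see the difference concretely, compare the two functions from the question on a small number (a quick sketch):

factors (bigint (sqrt (float 100I))) |> Seq.toList // divisors of 10: [1I; 2I; 5I; 10I]
factors' 100I |> Seq.toList // divisors of 100 up to 10: [1I; 2I; 4I; 5I; 10I]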
In APL one can use a bit vector to select out elements of another vector; this is called compression. For example, 1 0 1 / 3 5 7 would yield 3 7.
Is there an accepted term for this in functional programming in general and F# in particular?
Here is my F# program:
let list1 = [|"Bob"; "Mary"; "Sue"|]
let list2 = [|1; 0; 1|]
[<EntryPoint>]
let main argv =
0 // return an integer exit code
What I would like to do is compute a new string[] which would be [|"Bob"; "Sue"|]
How would one do this in F#?
Array.zip list1 list2 // [|("Bob",1); ("Mary",0); ("Sue",1)|]
|> Array.filter (fun (_,x) -> x = 1) // [|("Bob", 1); ("Sue", 1)|]
|> Array.map fst // [|"Bob"; "Sue"|]
The pipe operator |> does function application syntactically reversed, i.e., x |> f is equivalent to f x. As mentioned in another answer, replace Array with Seq to avoid the construction of intermediate arrays.
I expect you'll find many APL primitives missing from F#. For lists and sequences, many can be constructed by stringing together primitives from the Seq, Array, or List modules, like the above. For reference, here is an overview of the Seq module.
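Concretely, the Seq-based variant of the same pipeline might look like this (a sketch):

Seq.zip list1 list2
|> Seq.filter (fun (_, x) -> x = 1)
|> Seq.map fst
|> Seq.toArray // [|"Bob"; "Sue"|]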
I think the easiest is to use an array sequence expression, something like this:
let compress (bits: int[]) (values: 'a[]) =
    [|
        for i = 0 to bits.Length - 1 do
            if bits.[i] = 1 then
                yield values.[i]
    |]
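With the arrays from the question, usage would be (a sketch):

compress list2 list1 // [|"Bob"; "Sue"|]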
If you only want to use combinators, this is what I would do:
Seq.zip bits values
|> Seq.choose (fun (bit, value) ->
if bit = 1 then Some value else None)
|> Array.ofSeq
I use Seq functions instead of Array in order to avoid building intermediate arrays, but using Array throughout would be correct too.
One might say this is more idiomatic:
Seq.map2 (fun l1 l2 -> if l2 = 1 then Some(l1) else None) list1 list2
|> Seq.choose id
|> Seq.toArray
EDIT (for the pipe lovers)
(list1, list2)
||> Seq.map2 (fun l1 l2 -> if l2 = 1 then Some(l1) else None)
|> Seq.choose id
|> Seq.toArray
Søren Debois' solution is good but, as he pointed out, we can do better. Let's define a function, based on Søren's code:
let compressArray vals idx =
Array.zip vals idx
|> Array.filter (fun (_, x) -> x = 1)
|> Array.map fst
compressArray ends up creating a new array at each of its three steps. This can take some time if the input arrays are long (1.4 seconds for 10M values in my quick test).
We can save some time by working on sequences and creating an array at the end only:
let compressSeq vals idx =
Seq.zip vals idx
|> Seq.filter (fun (_, x) -> x = 1)
|> Seq.map fst
This function is generic and will work on arrays, lists, etc. To generate an array as output:
compressSeq sq idx |> Seq.toArray
The latter saves about 40% of computation time (0.8s in my test).
As ildjarn commented, the function argument to filter can be rewritten to snd >> (=) 1, although that causes a slight performance drop (< 10%), probably because of the extra function call that is generated.
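For reference, that point-free variant would look like this (a sketch):

let compressSeq' vals idx =
    Seq.zip vals idx
    |> Seq.filter (snd >> (=) 1)
    |> Seq.map fst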
I have just solved problem 23 in Project Euler, in which I need a set to store all abundant numbers. F# has an immutable set; I can use Set.empty.Add(i) to create a new set containing the number i. But I don't know how to use an immutable set to do more complicated things.
For example, in the following code, I need to see if a number 'x' could be written as the sum of two numbers in a set. I resort to a sorted array and array's binary search algorithm to get the job done.
Please also comment on my style of the following program. Thanks!
let problem23 =
let factorSum x =
let mutable sum = 0
for i=1 to x/2 do
if x%i=0 then
sum <- sum + i
sum
let isAbundant x = x < (factorSum x)
let abuns = {1..28123} |> Seq.filter isAbundant |> Seq.toArray
let inAbuns x = Array.BinarySearch(abuns, x) >= 0
let sumable x =
abuns |> Seq.exists (fun a -> inAbuns (x-a))
{1..28123} |> Seq.filter (fun x -> not (sumable x)) |> Seq.sum
the updated version:
let problem23b =
let factorSum x =
{1..x/2} |> Seq.filter (fun i->x%i=0) |> Seq.sum
let isAbundant x = x < (factorSum x)
let abuns = Set( {1..28123} |> Seq.filter isAbundant )
let inAbuns x = Set.contains x abuns
let sumable x =
abuns |> Seq.exists (fun a -> inAbuns (x-a))
{1..28123} |> Seq.filter (fun x -> not (sumable x)) |> Seq.sum
This version runs in about 27 seconds, while the first runs in about 23 seconds (I've run both several times). So an immutable red-black tree actually is not much slower than a sorted array with binary search. The total number of elements in the set/array is 6965.
Your style looks fine to me. The different steps in the algorithm are clear, which is the most important part of making something work. This is also the tactic I use for solving Project Euler problems. First make it work, and then make it fast.
As already remarked, replacing Array.BinarySearch by Set.contains makes the code even more readable. I find that in almost all PE solutions I've written, I only use arrays for lookups. I've found that using sequences and lists as data structures is more natural within F#. Once you get used to them, that is.
I don't think using mutability inside a function is necessarily bad. I've optimized problem 155 from almost 3 minutes down to 7 seconds with some aggressive mutability optimizations. In general though, I'd save that as an optimization step and start out writing it using folds/filters etc. In the example case of problem 155, I did start out using immutable function composition, because it made testing and most importantly, understanding, my approach easy.
Picking the wrong algorithm is much more detrimental to a solution than using a somewhat slower immutable approach first. A good algorithm is still fast even if it's slower than the mutable version (cough hello captain obvious! cough).
Edit: let's look at your version
Your problem23b() took 31 seconds on my PC.
Optimization 1: use new algorithm.
//useful optimization: if m divides n, (n/m) divides n also
//you now only have to check m up to sqrt(n)
let factorSum2 n =
let rec aux acc m =
match m with
| m when m*m = n -> acc + m
| m when m*m > n -> acc
| m -> aux (acc + (if n%m=0 then m + n/m else 0)) (m+1)
aux 1 2
This is still very much in functional style, but using this updated factorSum in your code, the execution time went from 31 seconds to 8 seconds.
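A quick sanity check of factorSum2 on small inputs (a sketch; the expected values are the proper divisor sums, worked out by hand):

factorSum2 12 // 16 = 1 + 2 + 3 + 4 + 6
factorSum2 28 // 28 = 1 + 2 + 4 + 7 + 14 (a perfect number)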
Everything's still in immutable style, but let's see what happens when an array lookup is used instead of a set:
Optimization 2: use an array for lookup:
let absums() =
//create abundant numbers as an array for (very) fast lookup
let abnums = [|1..28128|] |> Array.filter (fun n -> factorSum2 n > n)
//create a second lookup:
//a boolean array where arr.[x] = true means x is a sum of two abundant numbers
let arr = Array.zeroCreate 28124
for x in abnums do
for y in abnums do
if x+y<=28123 then arr.[x+y] <- true
arr
let euler023() =
absums() //the array lookup
|> Seq.mapi (fun i isAbsum -> if isAbsum then 0 else i) //mapi: i is the position in the sequence
|> Seq.sum
//I always write a test once I've solved a problem.
//In this way, I can easily see if changes to the code breaks stuff.
let test() = euler023() = 4179871
Execution time: 0.22 seconds (!).
This is what I like so much about F#, it still allows you to use mutable constructs to tinker under the hood of your algorithm. But I still only do this after I've made something more elegant work first.
You can easily create a Set from a given sequence of values.
let abuns = Set (seq {1..28123} |> Seq.filter isAbundant)
inAbuns would therefore be rewritten to
let inAbuns x = abuns |> Set.mem x
Seq.exists would be changed to Set.exists
But the array implementation is fine too ...
Note that there is no need to use mutable values in factorSum, apart from the fact that it's incorrect since you compute the number of divisors instead of their sum:
let factorSum x = seq { 1..x/2 } |> Seq.filter (fun i -> x % i = 0) |> Seq.sum
Here is a simple functional solution that is shorter than your original and over 100× faster:
let problem23 =
let rec isAbundant i t x =
if i > x/2 then x < t else
if x % i = 0 then isAbundant (i+1) (t+i) x else
isAbundant (i+1) t x
let xs = Array.Parallel.init 28124 (isAbundant 1 0)
let ys = Array.mapi (fun i b -> if b then Some i else None) xs |> Array.choose id
let f x a = x-a < 0 || not xs.[x-a]
Array.init 28124 (fun x -> if Array.forall (f x) ys then x else 0)
|> Seq.sum
The first trick is to record which numbers are abundant in an array indexed by the number itself rather than using a search structure. The second trick is to notice that all the time is spent generating that array and, therefore, to do it in parallel.
I have this "learning code" I wrote for the morris seq in f# that suffers from stack overflow that I don't know how to avoid. "morris" returns an infinite sequence of "see and say" sequences (i.e., {{1}, {1,1}, {2,1}, {1,2,1,1}, {1,1,1,2,2,1}, {3,1,2,2,1,1},...}).
let printList l =
Seq.iter (fun n -> printf "%i" n) l
printfn ""
let rec morris s =
let next str = seq {
let cnt = ref 1 // Stack overflow is below when enumerating
for cur in [|0|] |> Seq.append str |> Seq.windowed 2 do
if cur.[0] <> cur.[1] then
yield!( [!cnt ; cur.[0]] )
cnt := 0
incr cnt
}
seq {
yield s
yield! morris (next s) // tail recursion, no stack overflow
}
// "main"
// Print the nth iteration
let _ = [1] |> morris |> Seq.nth 3125 |> printList
You can pick off the nth iteration using Seq.nth, but you can only get so far before you hit a stack overflow. The one bit of recursion I have is tail recursion, and it in essence builds a linked set of enumerators. That's not where the problem is. It's when "enum" is called on, say, the 4000th sequence (note that's with F# 1.9.6.16; the previous version topped out above 14000). It's because of the way the linked sequences are resolved. The sequences are lazy and so the "recursion" is lazy. That is, seq n calls seq n-1, which calls seq n-2, and so forth to get the first item (the very first # is the worst case).
I understand that [|0|] |> Seq.append str |> Seq.windowed 2 is making my problem worse, and I could triple the number of iterations I could generate if I eliminated it. Practically speaking the code works well enough. The 3125th iteration of morris would be over 10^359 characters in length.
The problem I'm really trying to solve is how to retain lazy evaluation without the iteration I can pick off being limited by stack size. I'm looking for the proper F# idiom to make the limit based on memory size instead.
Update Oct '10
After learning F# a bit better, learning a tiny bit of Haskell, and thinking about and investigating this problem for over a year, I can finally answer my own question. But as always with difficult problems, the problem starts with it being the wrong question. The problem isn't sequences of sequences; it's really the recursively defined sequence. My functional programming skills are a little better now, so it's easier to see what's going on in the version below, which still gets a stack overflow:
let next str =
Seq.append str [0]
|> Seq.pairwise
|> Seq.scan (fun (n,_) (c,v) ->
if (c = v) then (n+1,Seq.empty)
else (1,Seq.ofList [n;c]) ) (1,Seq.empty)
|> Seq.collect snd
let morris = Seq.unfold(fun sq -> Some(sq,next sq))
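For reference, tracing next on a short input (a quick sketch):

next [1; 1; 2] |> Seq.toList // [2; 1; 1; 2], i.e. "two 1s, one 2"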
That pipeline basically creates a really long chain of Seq processing function calls to generate the sequences. The Seq module that comes with F# is what can't follow the chain without using the stack. There's an optimization it uses for append and recursively defined sequences, but that optimization only works if the recursion is implementing an append.
So this will work
let rec ints n = seq { yield n; yield! ints (n+1) }
printf "%A" (ints 0 |> Seq.nth 100000);;
And this one will get a stackoverflow.
let rec ints n = seq { yield n; yield! (ints (n+1)|> Seq.map id) }
printf "%A" (ints 0 |> Seq.nth 100000);;
To prove the F# library was the issue, I wrote my own Seq module that implemented append, pairwise, scan and collect using continuations, and now I can generate and print the 50,000th sequence without a problem (it'll never finish since it's over 10^5697 digits long).
Some additional notes:
Continuations were the idiom I was looking for, but in this case, they had to go into the F# library, not my code. I learned about continuations in F# from Tomas Petricek's Real-World Functional Programming book.
The lazy list answer that I accepted held the other idiom; lazy evaluation. In my rewritten library, I also had to leverage the lazy type to avoid stackoverflow.
The lazy list version sort of works by luck (maybe by design, but that's beyond my current ability to determine): the active-pattern matching it uses while it's constructing and iterating causes the lists to calculate values before the required recursion gets too deep, so it's lazy, but not so lazy that it needs continuations to avoid stack overflow. For example, by the time the 2nd sequence needs a digit from the 1st sequence, it's already been calculated. In other words, the LL version is not strictly JIT-lazy for sequence generation, only for list management.
You should definitely check out
http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/FSharp.PowerPack/Microsoft.FSharp.Collections.LazyList.html
but I will try to post a more comprehensive answer later.
UPDATE
Ok, a solution is below. It represents the Morris sequence as a LazyList of LazyLists of int, since I presume you want it to be lazy in 'both directions'.
The F# LazyList (in the FSharp.PowerPack.dll) has three useful properties:
it is lazy (evaluation of the nth element will not happen until it is first demanded)
it does not recompute (re-evaluation of the nth element on the same object instance will not recompute it - it caches each element after it's first computed)
you can 'forget' prefixes (as you 'tail' into the list, the no-longer-referenced prefix is available for garbage collection)
The first property is common with seq (IEnumerable), but the other two are unique to LazyList and very useful for computational problems such as the one posed in this question.
Without further ado, the code:
// print a lazy list up to some max depth
let rec PrintList n ll =
match n with
| 0 -> printfn ""
| _ -> match ll with
| LazyList.Nil -> printfn ""
| LazyList.Cons(x,xs) ->
printf "%d" x
PrintList (n-1) xs
// NextMorris : LazyList<int> -> LazyList<int>
let rec NextMorris (LazyList.Cons(cur,rest)) =
let count = ref 1
let ll = ref rest
while LazyList.nonempty !ll && (LazyList.hd !ll) = cur do
ll := LazyList.tl !ll
incr count
LazyList.cons !count
(LazyList.consf cur (fun() ->
if LazyList.nonempty !ll then
NextMorris !ll
else
LazyList.empty()))
// Morris : LazyList<int> -> LazyList<LazyList<int>>
let Morris s =
let rec MakeMorris ll =
LazyList.consf ll (fun () ->
let next = NextMorris ll
MakeMorris next
)
MakeMorris s
// "main"
// Print the nth iteration, up to a certain depth
[1] |> LazyList.of_list |> Morris |> Seq.nth 3125 |> PrintList 10
[1] |> LazyList.of_list |> Morris |> Seq.nth 3126 |> PrintList 10
[1] |> LazyList.of_list |> Morris |> Seq.nth 100000 |> PrintList 35
[1] |> LazyList.of_list |> Morris |> Seq.nth 100001 |> PrintList 35
UPDATE2
If you just want to count, that's fine too:
let LLLength ll =
let rec Loop ll acc =
match ll with
| LazyList.Cons(_,rest) -> Loop rest (acc+1N)
| _ -> acc
Loop ll 0N
let Main() =
// don't do line below, it leaks
//let hundredth = [1] |> LazyList.of_list |> Morris |> Seq.nth 100
// if we only want to count length, make sure we throw away the only
// copy as we traverse it to count
[1] |> LazyList.of_list |> Morris |> Seq.nth 100
|> LLLength |> printfn "%A"
Main()
The memory usage stays flat (under 16M on my box)... hasn't finished running yet, but I computed the 55th length fast, even on my slow box, so I think this should work just fine. Note also that I used 'bignum's for the length, since I think this will overflow an 'int'.
I believe there are two main problems here:
Laziness is very inefficient so you can expect a lazy functional implementation to run orders of magnitude slower. For example, the Haskell implementation described here is 2,400× slower than the F# I give below. If you want a workaround, your best bet is probably to amortize the computations by bunching them together into eager batches where the batches are produced on-demand.
The Seq.append function is actually calling into C# code from IEnumerable and, consequently, its tail call doesn't get eliminated and you leak a bit more stack space every time you go through it. This shows up when you come to enumerate over the sequence.
The following is over 80× faster than your implementation at computing the length of the 50th subsequence but perhaps it is not lazy enough for you:
let next (xs: ResizeArray<_>) =
let ys = ResizeArray()
let add n x =
if n > 0 then
ys.Add n
ys.Add x
let mutable n = 0
let mutable x = 0
for i=0 to xs.Count-1 do
let x' = xs.[i]
if x=x' then
n <- n + 1
else
add n x
n <- 1
x <- x'
add n x
ys
let morris =
Seq.unfold (fun xs -> Some(xs, next xs)) (ResizeArray [1])
The core of this function is a fold over a ResizeArray that could be factored out and used functionally without too much performance degradation if you used a struct as the accumulator.
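Usage might look like this (a sketch; indexing is zero-based, so the 50th subsequence is at index 49):

morris |> Seq.nth 49 |> (fun xs -> xs.Count) // length of the 50th subsequence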
Just save the previous element that you looked at.
let morris2 data = seq {
let cnt = ref 0
let prev = ref (data |> Seq.nth 0)
for cur in data do
if cur <> !prev then
yield! [!cnt; !prev]
cnt := 1
prev := cur
else
cnt := !cnt + 1
yield! [!cnt; !prev]
}
let rec morrisSeq2 cur = seq {
yield cur
yield! morrisSeq2 (morris2 cur)
}
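Usage, for example (a sketch):

[1] |> morrisSeq2 |> Seq.nth 4 |> Seq.toList // [1; 1; 1; 2; 2; 1]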