Here's my code
let primes =
    let rec primesRec remain =
        let prime = Seq.head remain
        seq { yield Seq.head remain; yield! primesRec (Seq.filter (fun n -> n % prime <> 0) remain) }
    primesRec (Seq.initInfinite (fun i -> i + 2))
First of all, I know that this isn't as efficient as the sieve should be, but I think it should still be much faster than it is.
Seq.take 100 primes |> List.ofSeq
already takes a noticeable amount of time (<1s), and it completely freezes for 1000 (as in, I don't want to wait anymore).
So, as far as I can tell, this sequence is constructed by taking the first element of the remaining numbers and then recursing on the rest, with multiples of that element filtered out.
Am I wrong in thinking that this is quadratic in complexity? It just goes through all the existing primes for each new prime it has to generate.
I know that since I want the 1000th prime, it's actually quadratic in the size of that prime, which is around 8000, so that should still be around 100 000 000 iterations. Shouldn't that execute in under a second?
It's not memoizing the remain seq, so on every iteration it is calculating all the primes up to prime again.
Your theory about there being 100M iterations may be correct, but remember it's not just doing a little modulo operation: it's creating sequences and iterators, allocating memory, pushing stuff onto the stack, and so on.
I made this performant by caching the remain seq, as follows. You could also solve it by using a List instead of a seq.
let primes =
    let rec primesRec remain =
        let prime = Seq.head remain
        seq { yield Seq.head remain; yield! primesRec (Seq.filter (fun n -> n % prime <> 0) (Seq.cache remain)) }
    primesRec (Seq.initInfinite (fun i -> i + 2))
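For reference, here is a minimal sketch of the List-based alternative mentioned above (my own illustration: since lists are strict, it needs an explicit upper bound instead of an infinite sequence, and primesUpTo is a name I chose):

let primesUpTo limit =
    // Lists are strict, so each filtering pass runs exactly once and
    // nothing is recomputed when the result is traversed again.
    let rec primesRec remain =
        match remain with
        | [] -> []
        | prime :: rest ->
            prime :: primesRec (List.filter (fun n -> n % prime <> 0) rest)
    primesRec [2 .. limit]

For example, primesUpTo 8000 |> List.length should return the number of primes below 8000 quickly.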
Related
I have the following code to perform the Sieve of Eratosthenes in F#:
let sieveOutPrime p numbers =
    numbers
    |> Seq.filter (fun n -> n % p <> 0)

let primesLessThan n =
    let removeFirstPrime = function
        | s when Seq.isEmpty s -> None
        | s -> Some(Seq.head s, sieveOutPrime (Seq.head s) (Seq.tail s))
    let remainingPrimes =
        seq { 3 .. 2 .. n }
        |> Seq.unfold removeFirstPrime
    seq { yield 2; yield! remainingPrimes }
This is excruciatingly slow when the input to primesLessThan is remotely large: primesLessThan 1000000 |> Seq.skip 1000;; takes nearly a minute for me, though primesLessThan 1000000 itself is naturally very fast because it just returns a lazy sequence.
I did some playing around, and I think the culprit has to be that Seq.tail (in my removeFirstPrime) is doing something expensive. According to the docs, it generates a completely new sequence, which I could imagine being slow.
If this were Python and the sequence object were a generator, it would be trivial to ensure that nothing expensive happens at this point: just yield from the sequence, and we've cheaply dropped its first element.
LazyList in F# doesn't seem to have the unfold method (or, for that matter, the filter method); otherwise I think LazyList would be the thing I wanted.
How can I make this implementation fast by preventing unnecessary duplications/recomputations? Ideally primesLessThan n |> Seq.skip 1000 would take the same amount of time regardless of how large n was.
Recursive solutions and sequences don't go well together (compare the answers here, it's very much the same pattern you are using). You might want to inspect the generated code, but I'd just consider this a rule of thumb.
LazyList (as defined in FSharpX) does of course come with unfold and filter defined, it would have been quite bizarre if it didn't. Typically in F# code this sort of functionality is provided in separate modules rather than as instance members on the type itself, a convention that does seem to confuse most of those documentation systems.
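To make that concrete, here is an untested sketch of a LazyList version (assumptions: FSharpx.Collections is referenced, and its LazyList module provides ofSeq, filter, unfold, and the Nil/Cons active patterns with their usual signatures):

open FSharpx.Collections

let primesLessThan n =
    let removeFirstPrime = function
        | LazyList.Nil -> None
        | LazyList.Cons (p, rest) ->
            // LazyList memoizes its nodes, so each filtering pass runs at most once
            Some (p, LazyList.filter (fun m -> m % p <> 0) rest)
    let remainingPrimes =
        LazyList.ofSeq (seq { 3 .. 2 .. n })
        |> LazyList.unfold removeFirstPrime
    seq { yield 2; yield! remainingPrimes }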
As you probably know, Seq is a lazily evaluated collection. The sieve algorithm is about filtering out non-primes from a sequence so that you don't have to consider them again.
However, when you combine the sieve with a lazily evaluated collection, you end up doing the filtering of the same non-primes over and over again.
You see much better performance if you switch from Seq to Array or List, because the non-lazy aspect of those collections means that you only filter non-primes once.
One way to improve performance in your code is to introduce caching.
let removeFirstPrime s =
    let s = s |> Seq.cache
    match s with
    | s when Seq.isEmpty s -> None
    | s -> Some(Seq.head s, sieveOutPrime (Seq.head s) (Seq.tail s))
I implemented a LazyList that works a lot like Seq and that allows me to count the number of evaluations:
For all primes up to 2000:
Without caching: 14753706 evaluations
With caching: 97260 evaluations
Of course, if you really need performance, you use a mutable array implementation.
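For comparison, here is a minimal sketch of such a mutable implementation (my own illustration, not the code the metrics below were measured with):

let primesBelow limit =
    // Classic Sieve of Eratosthenes over a mutable flag array:
    // composite.[i] = true means i has been ruled out as a prime.
    let composite = Array.zeroCreate limit
    for candidate in 2 .. limit - 1 do
        if not composite.[candidate] then
            // mark every multiple of this prime, starting at its square
            for multiple in candidate * candidate .. candidate .. limit - 1 do
                composite.[multiple] <- true
    [| for i in 2 .. limit - 1 do
           if not composite.[i] then yield i |]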
PS. Performance metrics
Running 'seq' ...
it took 271 ms with cc (16, 4, 0), the result is: 1013507
Running 'list' ...
it took 14 ms with cc (16, 0, 0), the result is: 1013507
Running 'array' ...
it took 14 ms with cc (10, 0, 0), the result is: 1013507
Running 'mutable' ...
it took 0 ms with cc (0, 0, 0), the result is: 1013507
This is Seq with caching. Seq in F# has rather high overhead; there are interesting lazy alternatives to Seq, like Nessos.
List and Array run roughly similarly, but because of the more compact memory layout, the GC metrics are better for Array (10 cc0 collections for Array vs 16 cc0 collections for List). Seq has worse GC metrics in that it forced 4 cc1 collections.
The mutable implementation of the sieve algorithm has better memory and performance metrics by a large margin.
Lazy evaluation is a great boon for things like processing huge files that will not fit in main memory in one go. However, suppose there are some elements in the sequence that I want evaluated immediately, while the rest can be computed lazily - is there any way to specify that?
Specific problem: (in case that helps to answer the question)
Specifically, I am using a series of IEnumerables as iterators over multiple sequences; these sequences read data from files opened with BinaryReader streams (each sequence is responsible for reading the data from one of the files). MoveNext() on these must be called in a specific order, e.g. iter0, then iter1, then iter5, then iter3, and so on; this order is specified by another sequence, index = {0,1,5,3,....}. However, sequences being lazy, the evaluation naturally happens only when required. Hence the file reads (for the sequences right at the beginning that read from files on disk) happen as the IEnumerables advance, and this is causing an illegal file access: a file that is being read by one process is accessed again (as per the error message).
True, the illegal file access could be for other reasons, but having tried my best to debug other causes, a partially lazy evaluation seems worth trying out.
While I agree with Tomas's comment (you shouldn't need this if file sharing is handled properly), here's one way to eagerly evaluate the first N elements:
let cacheFirst n (items: seq<_>) =
    seq {
        use e = items.GetEnumerator()
        let i = ref 0
        yield!
            [
                while !i < n && e.MoveNext() do
                    yield e.Current
                    incr i
            ]
        while e.MoveNext() do
            yield e.Current
    }
Example
let items = Seq.initInfinite (fun i -> printfn "%d" i; i)
items
|> Seq.take 10
|> cacheFirst 5
|> Seq.take 3
|> Seq.toList
Output
0
1
2
3
4
val it : int list = [0; 1; 2]
Daniel's solution is sound, but I don't think we need another operator; Seq.cache covers most cases.
First cache your sequence:
let items = Seq.initInfinite (fun i -> printfn "%d" i; i) |> Seq.cache
Eager evaluation followed by lazy access from the beginning:
let eager = items |> Seq.take 5 |> Seq.toList
let cached = items |> Seq.take 3 |> Seq.toList
This will evaluate the first 5 elements once (during eager) but keep them cached for subsequent access.
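For instance, evaluating these in FSI produces output along these lines (a sketch; only the eager pass prints, because Seq.cache replays the memoized values afterwards):
0
1
2
3
4
val eager : int list = [0; 1; 2; 3; 4]
val cached : int list = [0; 1; 2]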
I have a CSV file with the following structure:
- The first line is a header row.
- The remaining lines are data lines, each with the same number of commas, so we can think of the data in terms of columns.
I have written a little script that goes through each line of the file and returns a sequence of tuples containing each column header and the length of the longest string of data in that column:
let getColumnInfo (fileName:string) =
    let delimiter = ','
    let readLinesIntoColumns (sr:StreamReader) = seq {
        while not sr.EndOfStream do
            yield sr.ReadLine().Split(delimiter) |> Seq.map (fun c -> c.Length)
    }
    use sr = new StreamReader(fileName)
    let headers = sr.ReadLine().Split(delimiter)
    let columnSizes =
        let initial = Seq.map (fun h -> 0) headers
        let toMaxColLengths (accumulator:seq<int>) (line:seq<int>) =
            let chooseBigger a b = if a > b then a else b
            Seq.map2 chooseBigger accumulator line
        readLinesIntoColumns sr |> Seq.fold toMaxColLengths initial
    Seq.zip headers columnSizes
This works fine on a small file. However, when it tries to process a large file (> 75 MB), it blows up fsi with a StackOverflowException. If I remove the line
Seq.map2 chooseBigger accumulator line
the program completes.
Now, my question is this : why is F# using up the stack? My understanding of sequences in F# is that the entire sequence is not held in memory, only the elements that are being processed. Therefore I expected that the lines that had already been processed would not remain on the stack. Where is my misunderstanding?
I think this is a good question. Here's a simpler repro:
let test n =
    [for i in 1 .. n -> Seq.empty]
    |> List.fold (Seq.map2 max) Seq.empty
    |> Seq.iter ignore
test creates a sequence of empty sequences, calculates the max by rows, and then iterates over the resulting (empty) sequence. You'll find that with a high value of n this causes a stack overflow, even though there aren't any values to iterate over at all!
It's a bit tricky to explain why, but here's a stab at it. The problem is that as you fold over the sequences, Seq.map2 is returning a new sequence which defers its work until it's enumerated. Thus, when you try to iterate through the resulting sequence, you end up calling back into a chain of computations n layers deep.
As Daniel explains, you can escape this by evaluating the resulting sequence eagerly (e.g. by converting it to a list).
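For instance, this variant of the repro above forces each intermediate row to a list inside the fold, so the chain of pending Seq.map2 computations never builds up (a sketch; List.ofSeq does the eager evaluation, and List.toSeq just converts back for the next step):

let testFixed n =
    [for i in 1 .. n -> Seq.empty]
    |> List.fold (fun acc s -> Seq.map2 max acc s |> List.ofSeq |> List.toSeq) Seq.empty
    |> Seq.iter ignore

Unlike test, testFixed runs in constant stack space regardless of n.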
EDIT
Here's an attempt to further explain what's going wrong. When you call Seq.map2 max s1 s2, neither s1 nor s2 is actually enumerated; you get a new sequence which, when enumerated, will enumerate both of them and compare the yielded values. Thus, if we do something like the following:
let s0 = Seq.empty
let s1 = Seq.map2 max Seq.empty s0
let s2 = Seq.map2 max Seq.empty s1
let s3 = Seq.map2 max Seq.empty s2
let s4 = Seq.map2 max Seq.empty s3
let s5 = Seq.map2 max Seq.empty s4
...
Then the call to Seq.map2 always returns immediately and uses constant stack space. However, enumerating s5 requires enumerating s4, which requires enumerating s3, etc. This means that enumerating s99999 will build up a huge call stack that looks sort of like:
...
(s99996's enumerator).MoveNext()
(s99997's enumerator).MoveNext()
(s99998's enumerator).MoveNext()
(s99999's enumerator).MoveNext()
and we'll get a stack overflow.
Your code contains so many sequences that it's hard to reason about. My guess is that's what's tripping you up. You can make this much simpler and more efficient (eagerness is not all bad):
let getColumnInfo (fileName:string) =
    let delimiter = ','
    use sr = new StreamReader(fileName)
    match sr.ReadLine() with
    | null | "" -> Array.empty
    | hdr ->
        let cols = hdr.Split(delimiter)
        let counts = Array.zeroCreate cols.Length
        while not sr.EndOfStream do
            sr.ReadLine().Split(delimiter)
            |> Array.iteri (fun i fld ->
                counts.[i] <- max counts.[i] fld.Length)
        Array.zip cols counts
This assumes all lines are non-empty and have the same number of columns.
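Usage is the same as for the original version (the file name here is just a placeholder):

getColumnInfo "data.csv"
|> Array.iter (fun (header, maxLen) -> printfn "%s: %d" header maxLen)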
You can fix your function by changing this line to:
Seq.map2 chooseBigger accumulator line |> Seq.toList |> seq
why is F# using up the stack? My understanding of sequences in F# is that the entire sequence is not held in memory, only the elements that are being processed. Therefore I expected that the lines that had already been processed would not remain on the stack. Where is my misunderstanding?
The lines themselves are not eating up your stack space. The problem is you've accidentally written a function that builds up a huge unevaluated computation (tree of thunks) that stack overflows when it is evaluated because it makes non-tail calls O(n) deep. This tends to happen whenever you build sequences from other sequences and don't force the evaluation of anything.
I need to check whether a sequence contains at least n elements. Just naively using Seq.length may not be good enough, as it will blow up on infinite sequences.
Getting fancier with something like ss |> Seq.truncate n |> Seq.length will work, but behind the scenes it would involve a double traversal of the argument sequence, element by element via IEnumerator's MoveNext().
The best approach I was able to come up with so far is:
let hasAtLeast n (ss: seq<_>) =
    let mutable result = true
    use e = ss.GetEnumerator()
    for _ in 1 .. n do result <- e.MoveNext()
    result
This involves only a single sequence traversal (more accurately, it performs e.MoveNext() n times) and correctly handles the boundary cases of empty and infinite sequences. I could further throw in a few small improvements, like explicit handling of the specific cases of lists, arrays, and ICollections, or some short-circuiting of the traversal, but I wonder if there is a more effective approach to the problem that I may be missing?
Thank you for your help.
EDIT: Having on hand 5 implementation variants of the hasAtLeast function (2 of my own, 2 suggested by Daniel, and one suggested by Ankur), I arranged a marathon between them. The results, a tie across all implementations, prove that Guvante is right: the simplest composition of existing algorithms is best, and there is no point in overengineering here.
Further factoring in readability, I'd use either my own pure F#-based
let hasAtLeast n (ss: seq<_>) =
    Seq.length (Seq.truncate n ss) >= n
or the fully equivalent Linq-based one suggested by Ankur, which capitalizes on .NET integration:
open System.Linq

let hasAtLeast n (ss: seq<_>) =
    ss.Take(n).Count() >= n
Here's a short, functional solution:
let hasAtLeast n items =
    items
    |> Seq.mapi (fun i x -> (i + 1), x)
    |> Seq.exists (fun (i, _) -> i = n)
Example:
let items = Seq.initInfinite id
items |> hasAtLeast 10000
And here's an optimally efficient one:
let hasAtLeast n (items:seq<_>) =
    use e = items.GetEnumerator()
    let rec loop n =
        if n = 0 then true
        elif e.MoveNext() then loop (n - 1)
        else false
    loop n
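Both versions stop enumerating as soon as the answer is known, so they are safe on infinite sequences. For example:

Seq.initInfinite id |> hasAtLeast 10000   // true, after 10000 MoveNext calls
Seq.empty<int> |> hasAtLeast 1            // false, after a single MoveNext call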
Functional programming breaks work up into small chunks that do very generic tasks, each doing one simple thing. Determining whether there are at least n items in a sequence is not a simple task.
You have already found both solutions to this "problem": composition of existing algorithms, which works for the majority of cases, and creating your own algorithm to solve the issue.
However, I have to wonder whether your first solution wouldn't work just fine. MoveNext() is certainly called only n times on the original sequence, Current is never called, and even if MoveNext() is called on some wrapper class, the performance implications are likely tiny unless n is huge.
EDIT:
I was curious, so I wrote a simple program to test the timing of the two methods. The truncate method was quicker both for a simple infinite sequence and for one that had a Sleep(1) in it. It looks like I was right that your corrected version was overengineering.
I think some clarification is needed on what is happening in those methods. Seq.truncate takes a sequence and returns a sequence; other than saving the value of n, it doesn't do anything until enumeration, and during enumeration it counts and stops after n values. Seq.length takes an enumeration and counts, returning the count when the enumeration ends. So the sequence is enumerated only once, and the overhead is a couple of method calls and two counters.
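A quick way to confirm the single traversal is to count how many elements the source actually produces (a sketch; the ref-cell style matches the other snippets here):

let pulls = ref 0
let source = Seq.initInfinite (fun i -> incr pulls; i)

printfn "%b" (Seq.length (Seq.truncate 5 source) >= 5)   // true
printfn "elements generated: %d" !pulls                   // elements generated: 5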
Using Linq this would be as simple as:
open System.Linq

let hasAtLeast n (ss: seq<_>) =
    ss.Take(n).Count() >= n
Note that F#'s Seq.take method, by contrast, blows up if there are not enough elements.
Example usage, showing that it traverses the seq only once and only up to the required number of elements:
seq { for i = 0 to 5 do
        printfn "Generating %d" i
        yield i }
|> hasAtLeast 4 |> printfn "%A"
Let's suppose I have n arrays, where n is a variable (some number greater than 2, usually less than 10).
Each array has k elements.
I also have an array of length n that contains a set of weights that dictate how I would like to linearly combine all the arrays.
I am trying to create a high-performance higher-order function in F# to combine these arrays.
How can I do this, so that I get a function that takes an array of arrays (arrs below is a sample), a weights array (weights), and then computes a weighted sum based on the weights?
let weights = [|.6;.3;.1|]

let arrs = [| [|.0453;.065345;.07566;1.562;356.6|] ;
              [|.0873;.075565;.07666;1.562222;3.66|] ;
              [|.06753;.075675;.04566;1.452;3.4556|] |]
Thanks for any ideas.
Here's one solution:
let combine weights arrs =
    Array.map2 (fun w -> Array.map ((*) w)) weights arrs
    |> Array.reduce (Array.map2 (+))
EDIT
Here's some (much needed) explanation of how this works. Logically, we want to do the following:
1. Apply each weight to its corresponding row.
2. Add together the weight-adjusted rows.
The two lines above do just that.
We use the Array.map2 function to combine corresponding weights and rows; the way that we combine them is to multiply each element in the row by the weight, which is accomplished via the inner Array.map.
Now we have an array of weighted rows and need to add them together. We can do this one step at a time by keeping a running sum, adding each array in turn. The way we sum two arrays pointwise is to use Array.map2 again, using (+) as the function for combining the elements from each. We wrap this in an Array.reduce to apply this addition function to each row in turn, starting with the first row.
Hopefully this is a reasonably elegant approach to the problem, though the point-free style admittedly makes it a bit tricky to follow. However, note that it's not especially performant; doing in-place updates rather than creating new arrays with each application of map, map2, and reduce would be more efficient. Unfortunately, the standard library doesn't contain nice analogues of these operations which work in-place. It would be relatively easy to create such analogues, though, and they could be used in almost exactly the same way as I've done here.
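For illustration, here is a sketch of what such an in-place variant could look like (my own code, specialized to float for simplicity; it accumulates into a single buffer instead of allocating a new array at each step):

let combineInPlace (weights: float[]) (arrs: float[][]) =
    // one accumulator row, reused across all weighted rows
    let acc = Array.zeroCreate arrs.[0].Length
    for row in 0 .. arrs.Length - 1 do
        let w = weights.[row]
        for col in 0 .. acc.Length - 1 do
            acc.[col] <- acc.[col] + w * arrs.[row].[col]
    acc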
Something like this did it for me:
let weights = [|0.6;0.3;0.1|]

let arrs = [| [|0.0453;0.065345;0.07566;1.562;356.6|] ;
              [|0.0873;0.075565;0.07666;1.562222;3.66|] ;
              [|0.06753;0.075675;0.04566;1.452;3.4556|] |]

let applyWeight x y = x * y

let rotate (arr:'a[][]) =
    Array.map (fun y -> (Array.map (fun x -> arr.[x].[y])) [|0..arr.Length - 1|]) [|0..arr.[0].Length - 1|]

let weightedarray = Array.map (fun x -> Array.map (applyWeight (fst x)) (snd x)) (Array.zip weights arrs)
let newarrs = Array.map Array.sum (rotate weightedarray)

printfn "%A" newarrs
By the way, the 0 preceding a float value is necessary.