why does Seq.isEmpty say not enough elements? - f#

nums is indeed seq<int> when I mouse over. Any idea what's going on?
The line with the nums lambda is intended to be the equivalent of C#'s DefaultIfEmpty LINQ method.
The general idea is to take a space-delimited line of numbers and write out which ones occur count times.
code:
open System

[<EntryPoint>]
let main argv =
    let tests = Console.ReadLine() |> int
    for i in [0..tests] do
        let (length, count) =
            Console.ReadLine()
            |> (fun s -> s.Split [|' '|])
            |> (fun split -> Int32.Parse(split.[0]), Int32.Parse(split.[1]))
        Console.ReadLine()
        |> (fun s -> s.Split [|' '|])
        |> Seq.map int
        |> Seq.take length
        |> Seq.groupBy (fun x -> x)
        |> Seq.map (fun (key, group) -> key, Seq.sum group)
        |> Seq.where (fun (_, countx) -> countx = count)
        |> Seq.map (fun (n, _) -> n)
        |> (fun nums -> if Seq.isEmpty nums then "-1" else String.Join(" ", nums))
        |> Console.WriteLine
    0 // return an integer exit code
Sample input:
3
9 2
4 5 2 5 4 3 1 3 4

So, sequences in F# use lazy evaluation. That means that when you use functions such as map, where, take etc, the results are not evaluated immediately.
The results are only evaluated when the sequence is actually enumerated. When you call Seq.isEmpty you trigger a call to MoveNext() which results in the first element of the result sequence being evaluated - in your case this results in a large chain of functions being evaluated.
In this case, the InvalidOperationException is actually being triggered by Seq.take which throws if the sequence doesn't have sufficient elements. This might surprise you coming from C# where Enumerable.Take will take up to the requested number of elements but could take fewer if you reach the end of the sequence.
If you want this behaviour in F#, you need to replace Seq.take with Seq.truncate.
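A minimal illustration of the difference (the values are made up for the example):
let xs = seq { 1 .. 3 }
xs |> Seq.truncate 5 |> Seq.toList   // [1; 2; 3]
xs |> Seq.take 5 |> Seq.toList       // throws InvalidOperationException: not enough elements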

Related

Trying to filter out values in a sequence that are not in another sequence

I am trying to filter out values from a sequence that are not in another sequence. I was pretty sure my code worked, but it is taking a long time to run on my computer, so now I am not sure and would like to see what the community thinks.
Code is below:
let statezip =
    StateCsv.GetSample().Rows
    |> Seq.map (fun row -> row.State)
    |> Seq.distinct

type State = State of string
let unwrapstate (State s) = s

let neededstates (row:StateCsv) = Seq.contains (unwrapstate row.State) statezip
I am filtering by the neededstates function. Is there something wrong with the way I am doing this?
let datafilter =
    StateCsv1.GetSample().Rows
    |> Seq.map (fun row -> row.State,row.Income,row.Family)
    |> Seq.filter neededstates
    |> List.ofSeq
I believe that it should keep only the rows for which neededstates returns true, since neededstates returns a bool. StateCsv and StateCsv1 have exactly the same structure, although from different years.
Evaluation of contains on sequences and lists can be slow. When you want to check whether an element exists in a collection, the F# Set type is ideal. You can convert your sequences to sets using Set.ofSeq and then run the logic over the sets instead. The following example takes the numbers from 0 to 10000 and, first with sequences and then with sets, filters the result down to the odd numbers by checking that each value is not in a collection of even numbers.
Using Sequences:
let numberSeq = {0..10000}
let evenNumberSeq = seq { for n in numberSeq do if (n % 2 = 0) then yield n }
#time
numberSeq |> Seq.filter (fun n -> evenNumberSeq |> Seq.contains n |> not) |> Seq.toList
#time
This runs in about 1.9 seconds for me.
Using sets:
let numberSet = numberSeq |> Set.ofSeq
let evenNumberSet = evenNumberSeq |> Set.ofSeq
#time
numberSet |> Set.filter (fun n -> evenNumberSet |> Set.contains n |> not)
#time
This runs in only 0.005 seconds. Hopefully you can materialize your sequences to sets before performing your contains operation, thereby getting this level of speedup.
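Applied to your code, that could look something like the sketch below, reusing statezip, unwrapstate and StateCsv from the question and materialising the set only once:
let statezipSet = statezip |> Set.ofSeq
let neededstates (row: StateCsv) = Set.contains (unwrapstate row.State) statezipSet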

GroupBy Year then take Pairwise diffs except for the head value then Flatten Using Deedle and F#

I have the following variable:
data:seq<(DateTime*float)>
and I want to do something like the following F# code but using Deedle:
data
|> Seq.groupBy (fun (k,v) -> k.Year)
|> Seq.map (fun (k,v) ->
    let vals = v |> Seq.pairwise
    let first = seq { yield v |> Seq.head }
    let diffs = vals |> Seq.map (fun ((t0,v0),(t1,v1)) -> (t1, v1 - v0))
    (k, diffs |> Seq.append first))
|> Seq.collect snd
This works fine using F# sequences but I want to do it using Deedle series. I know I can do something like:
(data:Series<DateTime, float>) |> Series.groupBy (fun k v -> k.Year)...
But then I need to take the within-group year diffs, except for the head value which should just be the value itself, and then flatten the results into one series. I am a bit confused by the Deedle syntax.
Thanks!
I think the following might be doing what you need:
ts
|> Series.groupInto
    (fun k _ -> k.Month)
    (fun m s ->
        let first = series [ fst s.KeyRange => s.[fst s.KeyRange] ]
        Series.merge first (Series.diff 1 s))
|> Series.values
|> Series.mergeAll
The groupInto function lets you specify a function that should be called on each of the groups.
For each group, we create a series with the differences using Series.diff and prepend a one-element series containing the first value using Series.merge.
At the end, we get all the nested series & flatten them using Series.mergeAll.
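For reference, a minimal usage sketch; the keys and values below are made up purely for illustration, and ts is assumed to be a Series<DateTime, float> built with Deedle's series builder:
open System
open Deedle

let ts =
    series [
        DateTime(2017, 1, 1) => 1.0
        DateTime(2017, 1, 5) => 4.0
        DateTime(2017, 2, 1) => 10.0
        DateTime(2017, 2, 3) => 16.0 ]
// Running the pipeline above on this ts should produce a series whose values are
// 1.0, 3.0, 10.0, 6.0 - each group's head value followed by the within-group differences.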

key based functional fold

I have map-reduce code in which each of the threads groups by some key, and then in the reduce part I merge the results. My current approach is to search for a specific key's index in the accumulator and then use mapi to update the combined result only for that key, leaving the rest unmodified:
let rec groupFolder sequence acc =
    match sequence with
    | (by:string, what) :: rest ->
        let index = acc |> Seq.tryFindIndex (fun (byInAcc, _) -> byInAcc.Equals(by))
        match index with
        | Some (idx) ->
            acc
            |> Seq.mapi (fun i (byInAcc, whatInAcc) ->
                if i = idx then (by, (what |> Array.append whatInAcc)) else byInAcc, whatInAcc)
            |> groupFolder rest
        | None ->
            acc
            |> Seq.append (seq { yield (by, what) })
            |> groupFolder rest
My question is, is it a more functional way to achieve just this?
As an example input to this reducer
let GroupsCommingFromMap =
    [| seq { yield! [| ("key1", [|1;2;3|]); ("key2", [|1;2;3|]); ("key3", [|1;2;3|]) |] },
       seq { yield! [| ("key1", [|4;5;6|]); ("key2", [|4;5;6|]); ("key3", [|4;5;6|]) |] } |];;

GroupsCommingFromMap
|> Seq.reduce (fun acc i -> acc |> groupFolder (i |> Seq.toList))

The expected result should contain all of key1..key3, each with the array 1..6.
From the code you posted, it is not very clear what you're trying to do. Could you include some sample inputs (together with the output that you would like to get)? And does your code actually work on any of the inputs (it has incomplete pattern match, so I doubt that...)
Anyway, you can implement key-based map reduce using Seq.groupBy. For example:
let mapReduce mapper reducer input =
    input
    |> Seq.map mapper
    |> Seq.groupBy fst
    |> Seq.map (fun (k, vs) ->
        k, vs |> Seq.map snd |> Seq.reduce reducer)
Here:
The mapper takes a value from the input sequence and turns it into a key-value pair. The mapReduce function then groups the values using the key.
The reducer is then used to reduce all values associated with each key.
This lets you create a word count function like this (using simple mapper that returns the word as the key with 1 as a value and reducer that just adds all the numbers):
"hello world hello people hello world".Split(' ')
|> mapReduce (fun w -> w, 1) (+)
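For the string above, this should evaluate to a sequence pairing each word with its number of occurrences:
// seq [("hello", 3); ("world", 2); ("people", 1)]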
EDIT: The example you mentioned does not really have a "mapper" part, but instead it has an array of arrays as input - so perhaps it is easier to write this directly using Seq.groupBy like this:
let GroupsCommingFromMap =
    [| [| ("key1", [|1;2;3|]); ("key2", [|1;2;3|]); ("key3", [|1;2;3|]) |]
       [| ("key1", [|4;5;6|]); ("key2", [|4;5;6|]); ("key3", [|4;5;6|]) |] |]
GroupsCommingFromMap
|> Seq.concat
|> Seq.groupBy fst
|> Seq.map (fun (k, vs) -> k, vs |> Seq.map snd |> Array.concat)
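With this input, the pairs for each key are grouped and their arrays concatenated, which matches the expected result described above:
// seq [("key1", [|1; 2; 3; 4; 5; 6|]); ("key2", [|1; 2; 3; 4; 5; 6|]); ("key3", [|1; 2; 3; 4; 5; 6|])]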

Problems Creating an Infinite Lazy List

I completed the seventh Euler problem* in F# but am not entirely happy with my
implementation. In the function primes I create a sequence that I estimated would contain the 10,001st prime number. When I tried using Seq.initInfinite to lazily generate the candidate primes my code just hung before throwing an out of memory exception.
Could someone advise me on replacing the literal sequence with a lazily-generated sequence which is short-circuited once the desired prime is found?
let isPrime n =
    let bound = int (sqrt (float n))
    seq { 2 .. bound } |> Seq.forall (fun x -> n % x <> 0)

let primeAsync n =
    async { return (n, isPrime n) }

let primes =
    {1..1000000}
    |> Seq.map primeAsync
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Array.filter snd
    |> Array.map fst
    |> Array.mapi (fun i el -> (i, el))
    |> Array.find (fun (fst, snd) -> fst = 10001)

primes
*"By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13. What is the 10,001st prime number?"
I think the problem is that Async.RunSynchronously isn't lazy and tries to evaluate the whole infinite sequence. Although there are better solutions for this, your algorithm is fast enough that you don't even need parallelization; this works perfectly:
open System

let isPrime n =
    let bound = n |> float |> sqrt |> int
    seq { 2 .. bound } |> Seq.forall (fun x -> n % x <> 0)

let prime =
    Seq.initInfinite ((+) 2)
    |> Seq.filter isPrime
    |> Seq.skip 10000
    |> Seq.head
The sequence gets 'reified' as soon as you feed it to Async.Parallel. If you want to minimise memory consumption, run the computation serially, or split it into lazy chunks with the elements of each chunk run in parallel.
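One way to get those lazy chunks is sketched below, reusing isPrime from the previous answer; the chunk size of 4096 is an arbitrary choice:
let primeChunked =
    Seq.initInfinite ((+) 2)
    |> Seq.chunkBySize 4096
    |> Seq.collect (fun chunk ->
        chunk
        |> Array.map (fun n -> async { return (n, isPrime n) })
        |> Async.Parallel
        |> Async.RunSynchronously
        |> Array.filter snd
        |> Array.map fst)
    |> Seq.item 10000   // zero-based index of the 10,001st prime
Only as many chunks as needed to reach the 10,001st prime are ever evaluated, so memory use stays bounded.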

How do I do in F# what would be called compression in APL?

In APL one can use a bit vector to select out elements of another vector; this is called compression. For example, 1 0 1 / 3 5 7 would yield 3 7.
Is there an accepted term for this in functional programming in general, and in F# in particular?
Here is my F# program:
let list1 = [|"Bob"; "Mary"; "Sue"|]
let list2 = [|1; 0; 1|]
[<EntryPoint>]
let main argv =
    0 // return an integer exit code
What I would like to do is compute a new string[] which would be [|"Bob"; "Sue"|].
How would one do this in F#?
Array.zip list1 list2 // [|("Bob",1); ("Mary",0); ("Sue",1)|]
|> Array.filter (fun (_,x) -> x = 1) // [|("Bob", 1); ("Sue", 1)|]
|> Array.map fst // [|"Bob"; "Sue"|]
The pipe operator |> does function application syntactically reversed, i.e., x |> f is equivalent to f x. As mentioned in another answer, replace Array with Seq to avoid the construction of intermediate arrays.
I expect you'll find many APL primitives missing from F#. For lists and sequences, many can be constructed by stringing together primitives from the Seq, Array, or List modules, like the above. For reference, here is an overview of the Seq module.
I think the easiest is to use an array sequence expression, something like this:
let compress (bits: int[]) (values: 'a[]) =
    [|
        for i = 0 to bits.Length - 1 do
            if bits.[i] = 1 then
                yield values.[i]
    |]
If you only want to use combinators, this is what I would do:
Seq.zip bits values
|> Seq.choose (fun (bit, value) ->
    if bit = 1 then Some value else None)
|> Array.ofSeq
I use Seq functions instead of Array in order to avoid building intermediate arrays, but an all-Array version would be correct too.
One might say this is more idiomatic:
Seq.map2 (fun l1 l2 -> if l2 = 1 then Some(l1) else None) list1 list2
|> Seq.choose id
|> Seq.toArray
EDIT (for the pipe lovers)
(list1, list2)
||> Seq.map2 (fun l1 l2 -> if l2 = 1 then Some(l1) else None)
|> Seq.choose id
|> Seq.toArray
Søren Debois' solution is good but, as he pointed out, we can do better. Let's define a function based on Søren's code:
let compressArray vals idx =
    Array.zip vals idx
    |> Array.filter (fun (_, x) -> x = 1)
    |> Array.map fst
compressArray ends up creating a new array in each of the 3 lines. This can take some time, if the input arrays are long (1.4 seconds for 10M values in my quick test).
We can save some time by working on sequences and creating an array at the end only:
let compressSeq vals idx =
    Seq.zip vals idx
    |> Seq.filter (fun (_, x) -> x = 1)
    |> Seq.map fst
This function is generic and will work on arrays, lists, etc. To generate an array as output:
compressSeq sq idx |> Seq.toArray
The latter saves about 40% of computation time (0.8s in my test).
As ildjarn commented, the function argument to filter can be rewritten to snd >> (=) 1, although that causes a slight performance drop (< 10%), probably because of the extra function call that is generated.
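Spelled out, that point-free variant could look like the following sketch (compressSeq' is just a made-up name for it), used here with the arrays from the question:
let compressSeq' vals idx =
    Seq.zip vals idx
    |> Seq.filter (snd >> (=) 1)
    |> Seq.map fst

compressSeq' list1 list2 |> Seq.toArray   // [|"Bob"; "Sue"|]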
