F# - Weight (Unit of Measure) Stones-Pounds

I have a csv file containing daily weights as follows:
Date,Name,Weight
11-Sep-2017,Alpha,9-1
13-Sep-2017,Alpha,8-13
15-Sep-2017,Alpha,8-11
Though I can successfully import them using CsvProvider, the weight column defaults to System.DateTime.
// Weight
[<Measure>] type lb
[<Literal>]
let input = "DayWeights.csv"
type Weights = CsvProvider<input, HasHeaders=true>
let data = Weights.GetSample()
for row in data.Rows do
printfn "Output: (%A, %A, %A)" row.Date row.Name row.Weight
Is it possible to create a Unit of Measure (UoM) to define "stlb" with the option to convert to lbs on import and, if so, how?

I don't think you could represent stones-pounds as a single numeric type, and units of measure can only be used on numeric types (although there is some discussion about changing this in future). This is because some of their features only make sense with numeric operations like addition and multiplication. The units themselves are multiplied and divided:
[<Measure>] type lb
2<lb> + 2<lb> // 4<lb>
2<lb> * 2<lb> // 4<lb ^ 2>
2<lb> / 2<lb> // 1
Instead of units of measure, if you want some kind of tag to know that a given value has a type of stones-pounds, you could create a single case discriminated union:
type StonesPounds = StonesPounds of int * int
// StonesPounds -> int<lb>
let convertToLb (StonesPounds (s, p)) = (s * 14 + p) * 1<lb>
StonesPounds (1, 2) |> convertToLb // 16<lb>
The downside of this compared to units of measure is that you have to manually pack and unpack these values in code before you can use the numbers, and there is a runtime cost for that too.
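For completeness, here is a minimal sketch of the conversion in the other direction, illustrating that pack/unpack cost (convertFromLb is an illustrative name, not part of the original answer):
// int<lb> -> StonesPounds
let convertFromLb (w: int<lb>) =
    let total = int w                      // strip the unit for plain integer arithmetic
    StonesPounds (total / 14, total % 14)

convertToLb (StonesPounds (9, 1))   // 127<lb>
convertFromLb 127<lb>               // StonesPounds (9, 1)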

I resolved the automatic conversion of the input weight column to System.DateTime as follows:
// http://fssnip.net/27
let lazySplit (sep:string) (str:string) =
match sep, str with
| ((null | ""), _) | (_, (null | "")) -> seq [str]
| _ ->
let n, j = str.Length, sep.Length
let rec loop p =
seq {
if p < n then
let i = match str.IndexOf(sep, p) with -1 -> n | i -> i
yield str.Substring(p, i - p)
yield! loop (i + j)
}
loop 0
let weight input =
input
|> (fun x -> lazySplit "/" x |> Seq.take 2 |> String.concat("-"))
let data = Weights.GetSample()
for row in data.Rows do
let stlbs = weight (string row.Weight)
printfn "Output: (%A, %A, %A)" row.Date row.Name stlbs
// Output: 11-Sep-2017,"Alpha","09-01")
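For reference, here is a hedged sketch of feeding that parsed "st-lb" string into the StonesPounds helper from the earlier answer to get total pounds (toPounds is an illustrative name, not part of the original answer):
let toPounds (stlb: string) =
    match stlb.Split('-') with
    | [| st; lb |] -> StonesPounds (int st, int lb) |> convertToLb
    | _ -> failwithf "unexpected weight format: %s" stlb

toPounds "09-01"   // 127<lb>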
Thanks to one and all for your expert help and guidance.

Related

F#, Deedle and OptionalValue: Object must implement IConvertible error

I'm facing trouble when I try to create missing values in a Frame and later perform operations with them. Here is a "working" sample:
open Deedle
open System.Text.RegularExpressions
do fsi.AddPrinter(fun (printer:Deedle.Internal.IFsiFormattable) -> "\n" + (printer.Format()))
module Frame =
    let mapAddCol col f frame = frame |> Frame.addCol col (Frame.mapRowValues f frame)
[ {|Desc = "A - 1.50ml"; ``Price ($)`` = 23.|}
{|Desc = "B - 2ml"; ``Price ($)`` = 18.5|}
{|Desc = "C"; ``Price ($)`` = 25.|} ]
|> Frame.ofRecords
(*
Desc Price ($)
0 -> A - 1.50ml 23
1 -> B - 2ml 18.5
2 -> C 25
*)
|> Frame.mapAddCol "Volume (ml)" (fun row ->
match Regex.Match(row.GetAs<string>("Desc"),"[\d\.]+").Value with
| "" -> OptionalValue.Missing
| n -> n |> float |> OptionalValue)
(*
Desc Price ($) Volume (ml)
0 -> A - 1.50ml 23 1.5
1 -> B - 2ml 18.5 2
2 -> C 25 <missing>
*)
|> fun df -> df?``Price ($/ml)`` <- df?``Price ($)`` / df?``Volume (ml)``
//error message: System.InvalidCastException: Object must implement IConvertible.
What is wrong with this approach?
Deedle internally stores a flag indicating whether a value is present or missing. This is typically exposed via the OptionalValue type, but the internal representation does not actually use this type.
When you use a function such as mapRowValues to generate new data, Deedle needs to recognize which data is missing. This happens only in somewhat limited cases. When you return OptionalValue<float>, Deedle actually produces a series where the type of the values is OptionalValue<float> rather than float (the type system does not let it do anything else).
For float values, the solution is just to return nan as your missing value:
|> Frame.mapAddCol "Volume (ml)" (fun row ->
match Regex.Match(row.GetAs<string>("Desc"),"[\d\.]+").Value with
| "" -> nan
| n -> n |> float )
This will create a new series of float values, which you can then access using the ? operator.

F# infinite stream of armstrong numbers

I'm trying to create an infinite Stream in F# that contains Armstrong numbers. An Armstrong number is one where the cubes of its digits add up to the number itself. For example, 153 is an Armstrong number because 1^3 + 5^3 + 3^3 = 153. So far, I have created several functions to help me do so. They are:
type 'a stream = Cons of 'a * (unit -> 'a stream);;
let rec upfrom n = Cons (n, fun() -> upfrom (n+1));;
let rec toIntArray = function
| 0 -> []
| n -> n % 10 :: toIntArray (n / 10);;
 
let rec makearmstrong = function
| [] -> 0
| y::ys -> (y * y * y) + makearmstrong ys;;
let checkarmstrong n = n = makearmstrong(toIntArray n);;
let rec take n (Cons(x,xsf)) =
match n with
| 0 -> []
| _ -> x :: take (n-1)(xsf());;
let rec filter p (Cons (x, xsf)) =
if p x then Cons (x, fun() -> filter p (xsf()))
else filter p (xsf());;
And finally:
let armstrongs = filter (fun n -> checkarmstrong n)(upfrom 1);;
Now, when I do take 4 armstrongs;; (or any number less than 4), this works perfectly and gives me [1;153;370;371], but if I do take 5 armstrongs;; nothing happens; it seems like the program freezes.
I believe the issue is that there are no numbers after 407 that are the sums of their cubes (see http://oeis.org/A046197), but when your code evaluates the equivalent of take 1 (Cons(407, filter checkarmstrong (upfrom 408))) it's going to force the evaluation of the tail and filter will recurse forever, never finding a matching next element. Also note that your definition of Armstrong numbers differs from, say, Wikipedia's, which states that the power the digits are raised to should be the number of digits in the number.
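To make that concrete, here is a minimal sketch that bounds the search with an arbitrary cutoff so the stream is finite and take cannot loop forever. It reuses the stream type and helpers from the question; takeWhileStream and the 10000 limit are illustrative additions, not part of the original code.
let rec takeWhileStream p (Cons (x, xsf)) =
    if p x then x :: takeWhileStream p (xsf()) else []

let armstrongsUpTo limit =
    upfrom 1
    |> takeWhileStream (fun n -> n <= limit)
    |> List.filter checkarmstrong

armstrongsUpTo 10000   // [1; 153; 370; 371; 407]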

F#: How do I split up a sequence into a sequence of sequences

Background:
I have a sequence of contiguous, time-stamped data. The data-sequence has gaps in it where the data is not contiguous. I want to create a method to split the sequence up into a sequence of sequences so that each subsequence contains contiguous data (i.e. split the input-sequence at the gaps).
Constraints:
The return value must be a sequence of sequences to ensure that elements are only produced as needed (cannot use list/array/cacheing)
The solution must NOT be O(n^2), probably ruling out a Seq.take - Seq.skip pattern (cf. Brian's post)
Bonus points for a functionally idiomatic approach (since I want to become more proficient at functional programming), but it's not a requirement.
Method signature
let groupContiguousDataPoints (timeBetweenContiguousDataPoints : TimeSpan) (dataPointsWithHoles : seq<DateTime * float>) : (seq<seq< DateTime * float >>)= ...
On the face of it the problem looked trivial to me, but even employing Seq.pairwise, IEnumerator<_>, sequence comprehensions and yield statements, the solution eludes me. I am sure that this is because I still lack experience with combining F#-idioms, or possibly because there are some language-constructs that I have not yet been exposed to.
// Test data
let numbers = {1.0..1000.0}
let baseTime = DateTime.Now
let contiguousTimeStamps = seq { for n in numbers ->baseTime.AddMinutes(n)}
let dataWithOccationalHoles = Seq.zip contiguousTimeStamps numbers |> Seq.filter (fun (dateTime, num) -> num % 77.0 <> 0.0) // Has a gap in the data every 77 items
let timeBetweenContiguousValues = (new TimeSpan(0,1,0))
dataWithOccationalHoles |> groupContiguousDataPoints timeBetweenContiguousValues |> Seq.iteri (fun i sequence -> printfn "Group %d has %d data-points: Head: %f" i (Seq.length sequence) (snd(Seq.hd sequence)))
I think this does what you want
dataWithOccationalHoles
|> Seq.pairwise
|> Seq.map(fun ((time1,elem1),(time2,elem2)) -> if time2-time1 = timeBetweenContiguousValues then 0, ((time1,elem1),(time2,elem2)) else 1, ((time1,elem1),(time2,elem2)) )
|> Seq.scan(fun (indexres,(t1,e1),(t2,e2)) (index,((time1,elem1),(time2,elem2))) -> (index+indexres,(time1,elem1),(time2,elem2)) ) (0,(baseTime,-1.0),(baseTime,-1.0))
|> Seq.map( fun (index,(time1,elem1),(time2,elem2)) -> index,(time2,elem2) )
|> Seq.filter( fun (_,(_,elem)) -> elem <> -1.0)
|> PSeq.groupBy(fst)
|> Seq.map(snd>>Seq.map(snd))
Thanks for asking this cool question
I translated Alexey's Haskell to F#, but it's not pretty in F#, and still one element too eager.
I expect there is a better way, but I'll have to try again later.
let N = 20
let data = // produce some arbitrary data with holes
seq {
for x in 1..N do
if x % 4 <> 0 && x % 7 <> 0 then
printfn "producing %d" x
yield x
}
let rec GroupBy comp (input:LazyList<'a>) : LazyList<LazyList<'a>> =
LazyList.delayed (fun () ->
match input with
| LazyList.Nil -> LazyList.cons (LazyList.empty()) (LazyList.empty())
| LazyList.Cons(x,LazyList.Nil) ->
LazyList.cons (LazyList.cons x (LazyList.empty())) (LazyList.empty())
| LazyList.Cons(x,(LazyList.Cons(y,_) as xs)) ->
let groups = GroupBy comp xs
if comp x y then
LazyList.consf
(LazyList.consf x (fun () ->
let (LazyList.Cons(firstGroup,_)) = groups
firstGroup))
(fun () ->
let (LazyList.Cons(_,otherGroups)) = groups
otherGroups)
else
LazyList.cons (LazyList.cons x (LazyList.empty())) groups)
let result = data |> LazyList.of_seq |> GroupBy (fun x y -> y = x + 1)
printfn "Consuming..."
for group in result do
printfn "about to do a group"
for x in group do
printfn " %d" x
You seem to want a function that has signature
('a -> bool) -> seq<'a> -> seq<seq<'a>>
I.e. given a function and a sequence, break the input sequence up into a sequence of sequences based on the result of the function.
Caching the values into a collection that implements IEnumerable would likely be simplest (albeit not exactly purist, but it avoids iterating the input multiple times; it will lose much of the laziness of the input):
let groupBy (f: 'a -> bool) (input: seq<'a>) =
    seq {
        let cache = ref (new System.Collections.Generic.List<'a>())
        for e in input do
            (!cache).Add(e)
            if not (f e) then
                yield !cache
                cache := new System.Collections.Generic.List<'a>()
        if (!cache).Count > 0 then
            yield !cache
    }
An alternative implementation could pass the cache collection (as seq<'a>) to the function, so it can see multiple elements when choosing the break points.
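A hedged sketch of that alternative, in which the predicate sees the elements cached so far together with the candidate element (groupWithContext and its signature are illustrative, not a reference implementation):
let groupWithContext (shouldBreak: seq<'a> -> 'a -> bool) (input: seq<'a>) =
    seq {
        let cache = System.Collections.Generic.List<'a>()
        for e in input do
            // close the current group before e if the predicate says the run should end here
            if cache.Count > 0 && shouldBreak (cache :> seq<'a>) e then
                yield Seq.ofList (List.ofSeq cache)   // snapshot the finished group
                cache.Clear()
            cache.Add(e)
        if cache.Count > 0 then
            yield Seq.ofList (List.ofSeq cache)
    }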
A Haskell solution, because I don't know F# syntax well, but it should be easy enough to translate:
type TimeStamp = Integer -- ticks
type TimeSpan = Integer -- difference between TimeStamps
groupContiguousDataPoints :: TimeSpan -> [(TimeStamp, a)] -> [[(TimeStamp, a)]]
There is a function groupBy :: (a -> a -> Bool) -> [a] -> [[a]] in the Prelude:
The group function takes a list and returns a list of lists such that the concatenation of the result is equal to the argument. Moreover, each sublist in the result contains only equal elements. For example,
group "Mississippi" = ["M","i","ss","i","ss","i","pp","i"]
It is a special case of groupBy, which allows the programmer to supply their own equality test.
It isn't quite what we want, because it compares each element in the list with the first element of the current group, and we need to compare consecutive elements. If we had such a function groupBy1, we could write groupContiguousDataPoints easily:
groupContiguousDataPoints maxTimeDiff list = groupBy1 (\(t1, _) (t2, _) -> t2 - t1 <= maxTimeDiff) list
So let's write it!
groupBy1 :: (a -> a -> Bool) -> [a] -> [[a]]
groupBy1 _ [] = [[]]
groupBy1 _ [x] = [[x]]
groupBy1 comp (x : xs@(y : _))
  | comp x y  = (x : firstGroup) : otherGroups
  | otherwise = [x] : groups
  where groups@(firstGroup : otherGroups) = groupBy1 comp xs
UPDATE: it looks like F# doesn't let you pattern match on seq, so it isn't too easy to translate after all. However, this thread on HubFS shows a way to pattern match sequences by converting them to LazyList when needed.
UPDATE2: Haskell lists are lazy and generated as needed, so they correspond to F#'s LazyList (not to seq, because the generated data is cached (and garbage collected, of course, if you no longer hold a reference to it)).
(EDIT: This suffers from a similar problem to Brian's solution, in that iterating the outer sequence without iterating over each inner sequence will mess things up badly!)
Here's a solution that nests sequence expressions. The imperative nature of .NET's IEnumerable<T> is pretty apparent here, which makes it a bit harder to write idiomatic F# code for this problem, but hopefully it's still clear what's going on.
let groupBy cmp (sq:seq<_>) =
let en = sq.GetEnumerator()
let rec partitions (first:option<_>) =
seq {
match first with
| Some first' -> //'
(* The following value is always overwritten;
it represents the first element of the next subsequence to output, if any *)
let next = ref None
(* This function generates a subsequence to output,
setting next appropriately as it goes *)
let rec iter item =
seq {
yield item
if (en.MoveNext()) then
let curr = en.Current
if (cmp item curr) then
yield! iter curr
else // consumed one too many - pass it on as the start of the next sequence
next := Some curr
else
next := None
}
yield iter first' (* ' generate the first sequence *)
yield! partitions !next (* recursively generate all remaining sequences *)
| None -> () // return an empty sequence if there are no more values
}
let first = if en.MoveNext() then Some en.Current else None
partitions first
let groupContiguousDataPoints (time:TimeSpan) : (seq<DateTime*_> -> _) =
groupBy (fun (t,_) (t',_) -> t' - t <= time)
Okay, trying again. Achieving the optimal amount of laziness turns out to be a bit difficult in F#... On the bright side, this is somewhat more functional than my last attempt, in that it doesn't use any ref cells.
let groupBy cmp (sq:seq<_>) =
let en = sq.GetEnumerator()
let next() = if en.MoveNext() then Some en.Current else None
(* this function returns a pair containing the first sequence and a lazy option indicating the first element in the next sequence (if any) *)
let rec seqStartingWith start =
match next() with
| Some y when cmp start y ->
let rest_next = lazy seqStartingWith y // delay evaluation until forced - stores the rest of this sequence and the start of the next one as a pair
seq { yield start; yield! fst (Lazy.force rest_next) },
lazy Lazy.force (snd (Lazy.force rest_next))
| next -> seq { yield start }, lazy next
let rec iter start =
seq {
match (Lazy.force start) with
| None -> ()
| Some start ->
let (first,next) = seqStartingWith start
yield first
yield! iter next
}
Seq.cache (iter (lazy next()))
Below is some code that does what I think you want. It is not idiomatic F#.
(It may be similar to Brian's answer, though I can't tell because I'm not familiar with the LazyList semantics.)
But it doesn't exactly match your test specification: Seq.length enumerates its entire input. Your "test code" calls Seq.length and then calls Seq.hd. That will generate an enumerator twice, and since there is no caching, things get messed up. I'm not sure if there is any clean way to allow multiple enumerators without caching. Frankly, seq<seq<'a>> may not be the best data structure for this problem.
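As a hedged aside, a consuming loop like the following (reusing the test data from the question) touches each inner sequence only once and so avoids that double enumeration:
dataWithOccationalHoles
|> groupContiguousDataPoints timeBetweenContiguousValues
|> Seq.iteri (fun i group ->
    let items = List.ofSeq group   // materialise each inner group exactly once
    printfn "Group %d has %d data-points: Head: %f" i (List.length items) (snd (List.head items)))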
Anyway, here's the code:
type State<'a> = Unstarted | InnerOkay of 'a | NeedNewInner of 'a | Finished
// f() = true means the neighbors should be kept together
// f() = false means they should be split
let split_up (f : 'a -> 'a -> bool) (input : seq<'a>) =
// simple unfold that assumes f captured a mutable variable
let iter f = Seq.unfold (fun _ ->
match f() with
| Some(x) -> Some(x,())
| None -> None) ()
seq {
let state = ref (Unstarted)
use ie = input.GetEnumerator()
let innerMoveNext() =
match !state with
| Unstarted ->
if ie.MoveNext()
then let cur = ie.Current
state := InnerOkay(cur); Some(cur)
else state := Finished; None
| InnerOkay(last) ->
if ie.MoveNext()
then let cur = ie.Current
if f last cur
then state := InnerOkay(cur); Some(cur)
else state := NeedNewInner(cur); None
else state := Finished; None
| NeedNewInner(last) -> state := InnerOkay(last); Some(last)
| Finished -> None
let outerMoveNext() =
match !state with
| Unstarted | NeedNewInner(_) -> Some(iter innerMoveNext)
| InnerOkay(_) -> failwith "Move to next inner seq when current is active: undefined behavior."
| Finished -> None
yield! iter outerMoveNext }
open System
let groupContigs (contigTime : TimeSpan) (holey : seq<DateTime * int>) =
split_up (fun (t1,_) (t2,_) -> (t2 - t1) <= contigTime) holey
// Test data
let numbers = {1 .. 15}
let contiguousTimeStamps =
let baseTime = DateTime.Now
seq { for n in numbers -> baseTime.AddMinutes(float n)}
let holeyData =
Seq.zip contiguousTimeStamps numbers
|> Seq.filter (fun (dateTime, num) -> num % 7 <> 0)
let grouped_data = groupContigs (new TimeSpan(0,1,0)) holeyData
printfn "Consuming..."
for group in grouped_data do
printfn "about to do a group"
for x in group do
printfn " %A" x
Ok, here's an answer I'm not unhappy with.
(EDIT: I am unhappy - it's wrong! No time to try to fix right now though.)
It uses a bit of imperative state, but it is not too difficult to follow (provided you recall that '!' is the F# dereference operator, and not 'not'). It is as lazy as possible, and takes a seq as input and returns a seq of seqs as output.
let N = 20
let data = // produce some arbitrary data with holes
seq {
for x in 1..N do
if x % 4 <> 0 && x % 7 <> 0 then
printfn "producing %d" x
yield x
}
let rec GroupBy comp (input:seq<_>) = seq {
let doneWithThisGroup = ref false
let areMore = ref true
use e = input.GetEnumerator()
let Next() = areMore := e.MoveNext(); !areMore
// deal with length 0 or 1, seed 'prev'
if not(e.MoveNext()) then () else
let prev = ref e.Current
while !areMore do
yield seq {
while not(!doneWithThisGroup) do
if Next() then
let next = e.Current
doneWithThisGroup := not(comp !prev next)
yield !prev
prev := next
else
// end of list, yield final value
yield !prev
doneWithThisGroup := true }
doneWithThisGroup := false }
let result = data |> GroupBy (fun x y -> y = x + 1)
printfn "Consuming..."
for group in result do
printfn "about to do a group"
for x in group do
printfn " %d" x

Help Needed Creating a Binary Tree Given Truth Table

First, in order to provide full disclosure, I want to point out that this is related to homework in a Machine Learning class. This question is not the homework question and instead is something I need to figure out in order to complete the bigger problem of creating an ID3 Decision Tree Algorithm.
I need to generate a tree similar to the following when given a truth table:
let learnedTree = Node(0,"A0", Node(2,"A2", Leaf(0), Leaf(1)), Node(1,"A1", Node(2,"A2", Leaf(0), Leaf(1)), Leaf(0)))
learnedTree is of type BinaryTree which I've defined as follows:
type BinaryTree =
| Leaf of int
| Node of int * string * BinaryTree * BinaryTree
ID3 algorithms take into account various equations to determine where to split the tree, and I've got all that figured out; I'm just having trouble creating the learned tree from my truth table. For example, if I have the following table:
A1 | A2 | A3 | Class
1 0 0 1
0 1 0 1
0 0 0 0
1 0 1 0
0 0 0 0
1 1 0 1
0 1 1 0
And I decide to split on attribute A1 I would end up with the following:
         (A1 = 1)   A1   (A1 = 0)

A2 | A3 | Class          A2 | A3 | Class
 0    0    1              1    0    1
 0    1    0              0    0    0
 1    0    1              0    0    0
                          0    1    1
Then I would split the left side and split the right side, and continue the recursive pattern until the leaf nodes are pure and I end up with a tree similar to the following based on the splitting.
let learnedTree = Node(0,"A0", Node(2,"A2", Leaf(0), Leaf(1)), Node(1,"A1", Node(2,"A2", Leaf(0), Leaf(1)), Leaf(0)))
Here is what I've kind of "hacked" together thus far, but I think I might be way off:
let rec createTree (listToSplit : list<list<float>>) index =
let leftSideSplit =
listToSplit |> List.choose (fun x -> if x.Item(index) = 1. then Some(x) else None)
let rightSideSplit =
listToSplit |> List.choose (fun x -> if x.Item(index) = 0. then Some(x) else None)
if leftSideSplit.Length > 0 then
let pureCheck = isListPure leftSideSplit
if pureCheck = 0 then
printfn "%s" "Pure left node class 0"
createTree leftSideSplit (index + 1)
else if pureCheck = 1 then
printfn "%s" "Pure left node class 1"
createTree leftSideSplit (index + 1)
else
printfn "%s - %A" "Recursing Left" leftSideSplit
createTree leftSideSplit (index + 1)
else printfn "%s" "Pure left node class 0"
Should I be using pattern matching instead? Any tips/ideas/help? Thanks a bunch!
Edit: I've since posted an implementation of ID3 on my blog at:
http://blogs.msdn.com/chrsmith
Hey Jim, I've been wanting to write a blog post implementing ID3 in F# for a while - thanks for giving me an excuse. While this code doesn't implement the algorithm in full (or correctly), it should be sufficient for getting you started.
In general you have the right approach - representing each branch as a discriminated union case is good. And like Brian said, List.partition is definitely a handy function. The trick to making this work correctly is all in determining the optimal attribute/value pair to split on - and to do that you'll need to calculate information gain via entropy, etc.
type Attribute = string
type Value = string
type Record =
{
Weather : string
Temperature : string
PlayTennis : bool
}
override this.ToString() =
sprintf
"{Weather = %s, Temp = %s, PlayTennis = %b}"
this.Weather
this.Temperature
this.PlayTennis
type Decision = Attribute * Value
type DecisionTreeNode =
| Branch of Decision * DecisionTreeNode * DecisionTreeNode
| Leaf of Record list
// ------------------------------------
// Splits a record list into an optimal split and the left / right branches.
// (This is where you use the entropy function to maximize information gain.)
// Record list -> Decision * Record list * Record list
let bestSplit (data : Record list) =
// Just group by weather, then by temperature
let uniqueWeathers =
List.fold
(fun acc item -> Set.add item.Weather acc)
Set.empty
data
let uniqueTemperatures =
List.fold
(fun acc item -> Set.add item.Temperature acc)
Set.empty
data
if uniqueWeathers.Count = 1 then
let bestSplit = ("Temperature", uniqueTemperatures.MinimumElement)
let left, right =
List.partition
(fun item -> item.Temperature = uniqueTemperatures.MinimumElement)
data
(bestSplit, left, right)
else
let bestSplit = ("Weather", uniqueWeathers.MinimumElement)
let left, right =
List.partition
(fun item -> item.Weather = uniqueWeathers.MinimumElement)
data
(bestSplit, left, right)
let rec determineBranch data =
if List.length data < 4 then
Leaf(data)
else
// Use the entropy function to break the dataset on
// the category / value that best splits the data
let bestDecision, leftBranch, rightBranch = bestSplit data
Branch(
bestDecision,
determineBranch leftBranch,
determineBranch rightBranch)
// ------------------------------------
let rec printID3Result indent branch =
let padding = new System.String(' ', indent)
match branch with
| Leaf(data) ->
data |> List.iter (fun item -> printfn "%s%s" padding <| item.ToString())
| Branch(decision, lhs, rhs) ->
printfn "%sBranch predicate [%A]" padding decision
printfn "%sWhere predicate is true:" padding
printID3Result (indent + 4) lhs
printfn "%sWhere predicate is false:" padding
printID3Result (indent + 4) rhs
// ------------------------------------
let dataset =
[
{ Weather = "windy"; Temperature = "hot"; PlayTennis = false }
{ Weather = "windy"; Temperature = "cool"; PlayTennis = false }
{ Weather = "nice"; Temperature = "cool"; PlayTennis = true }
{ Weather = "nice"; Temperature = "cold"; PlayTennis = true }
{ Weather = "humid"; Temperature = "hot"; PlayTennis = false }
]
printfn "Given input list:"
dataset |> List.iter (printfn "%A")
printfn "ID3 split resulted in:"
let id3Result = determineBranch dataset
printID3Result 0 id3Result
You can use List.partition instead of your two List.choose calls.
http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/manual/FSharp.Core/Microsoft.FSharp.Collections.List.html
(or now http://msdn.microsoft.com/en-us/library/ee353738(VS.100).aspx )
It isn't clear to me that pattern matching will buy you much here; the input type (list of lists) and processing (partitioning and 'pureness' check) doesn't really lend itself to that.
And of course when you finally get the 'end' (a pure list) you need to create a tree, and then presumably this function will create a Leaf when the input only has one 'side' and it's 'pure', but create a Node out of the left-side and right-side results for every other input. Maybe. I didn't quite grok the algorithm completely.
Hopefully that will help steer you a little bit. May be useful to draw up a few smaller sample inputs and outputs to help work out the various cases of the function body.
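To make the List.partition suggestion concrete, a one-line sketch against the variables in the question's createTree (not a full rewrite):
let leftSideSplit, rightSideSplit =
    listToSplit |> List.partition (fun x -> x.Item(index) = 1.)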
Thanks Brian & Chris! I was actually able to figure this out and I ended up with the following. This calculates the information gain for determining the best place to split. I'm sure there are probably better ways for me to arrive at this solution especially around the chosen data structures, but this is a start. I plan to refine things later.
#light
open System
let trainList =
[
[1.;0.;0.;1.;];
[0.;1.;0.;1.;];
[0.;0.;0.;0.;];
[1.;0.;1.;0.;];
[0.;0.;0.;0.;];
[1.;1.;0.;1.;];
[0.;1.;1.;0.;];
[1.;0.;0.;1.;];
[0.;0.;0.;0.;];
[1.;0.;0.;1.;];
]
type BinaryTree =
| Leaf of int
| Node of int * string * BinaryTree * BinaryTree
let entropyList nums =
let sumOfnums =
nums
|> Seq.sum
nums
|> Seq.map (fun x -> if x=0.00 then x else (-((x/sumOfnums) * Math.Log(x/sumOfnums, 2.))))
|> Seq.sum
let entropyBinaryList (dataListOfLists:list<list<float>>) =
let classList =
dataListOfLists
|> List.map (fun x -> x.Item(x.Length - 1))
let ListOfNo =
classList
|> List.choose (fun x -> if x = 0. then Some(x) else None)
let ListOfYes =
classList
|> List.choose (fun x -> if x = 1. then Some(x) else None)
let numberOfYes : float = float ListOfYes.Length
let numberOfNo : float = float ListOfNo.Length
let ListOfNumYesAndSumNo = [numberOfYes; numberOfNo]
entropyList ListOfNumYesAndSumNo
let conditionalEntropy (dataListOfLists:list<list<float>>) attributeNumber =
let NoAttributeList =
dataListOfLists
|> List.choose (fun x -> if x.Item(attributeNumber) = 0. then Some(x) else None)
let YesAttributeList =
dataListOfLists
|> List.choose (fun x -> if x.Item(attributeNumber) = 1. then Some(x) else None)
let numberOfYes : float = float YesAttributeList.Length
let numberOfNo : float = float NoAttributeList.Length
let noConditionalEntropy = (entropyBinaryList NoAttributeList) * (numberOfNo/(numberOfNo + numberOfYes))
let yesConditionalEntropy = (entropyBinaryList YesAttributeList) * (numberOfYes/(numberOfNo + numberOfYes))
[noConditionalEntropy; yesConditionalEntropy]
let findBestSplitIndex(listOfInstances : list<list<float>>) =
let IGList =
[0..(listOfInstances.Item(0).Length - 2)]
|> List.mapi (fun i x -> (i, (entropyBinaryList listOfInstances) - (List.sum (conditionalEntropy listOfInstances x))))
IGList
|> List.maxBy snd
|> fst
let isListPure (listToCheck : list<list<float>>) =
let splitList = listToCheck |> List.choose (fun x -> if x.Item(x.Length - 1) = 1. then Some(x) else None)
if splitList.Length = listToCheck.Length then 1
else if splitList.Length = 0 then 0
else -1
let rec createTree (listToSplit : list<list<float>>) =
let pureCheck = isListPure listToSplit
if pureCheck = 0 then
printfn "%s" "Pure - Leaf(0)"
else if pureCheck = 1 then
printfn "%s" "Pure - Leaf(1)"
else
printfn "%A - is not pure" listToSplit
if listToSplit.Length > 1 then // There are attributes we can split on
// Chose best place to split list
let splitIndex = findBestSplitIndex(listToSplit)
printfn "spliting at index %A" splitIndex
let leftSideSplit =
listToSplit |> List.choose (fun x -> if x.Item(splitIndex) = 1. then Some(x) else None)
let rightSideSplit =
listToSplit |> List.choose (fun x -> if x.Item(splitIndex) = 0. then Some(x) else None)
createTree leftSideSplit
createTree rightSideSplit
else
printfn "%s" "Not Pure, but can't split choose based on heuristics - Leaf(0 or 1)"

Handy F# snippets [closed]

There are already two questions about F#/functional snippets.
However what I'm looking for here are useful snippets, little 'helper' functions that are reusable. Or obscure but nifty patterns that you can never quite remember.
Something like:
open System.IO
let rec visitor dir filter=
seq { yield! Directory.GetFiles(dir, filter)
for subdir in Directory.GetDirectories(dir) do
yield! visitor subdir filter}
I'd like to make this a kind of handy reference page. As such there will be no right answer, but hopefully lots of good ones.
EDIT Tomas Petricek has created a site specifically for F# snippets http://fssnip.net/.
Perl style regex matching
let (=~) input pattern =
System.Text.RegularExpressions.Regex.IsMatch(input, pattern)
It lets you match text using let test = "monkey" =~ "monk.+" notation.
Infix Operator
I got this from http://sandersn.com/blog//index.php/2009/10/22/infix-function-trick-for-f - go to that page for more details.
If you know Haskell, you might find yourself missing infix sugar in F#:
// standard Haskell call has function first, then args just like F#. So obviously
// here there is a function that takes two strings: string -> string -> string
startsWith "kevin" "k"
//Haskell infix operator via backQuotes. Sometimes makes a function read better.
"kevin" `startsWith` "K"
While F# doesn't have a true 'infix' operator, the same thing can be accomplished almost as elegantly via a pipeline and a 'backpipeline' (who knew of such a thing??)
// F# 'infix' trick via pipelines
"kevin" |> startsWith <| "K"
Multi-Line Strings
This is pretty trivial, but it seems to be a feature of F# strings that is not widely known.
let sql = "select a,b,c \
from table \
where a = 1"
This produces:
val sql : string = "select a,b,c from table where a = 1"
When the F# compiler sees a back-slash followed by a carriage return inside a string literal, it will remove everything from the back-slash to the first non-space character on the next line. This allows you to have multi-line string literals that line up, without using a bunch of string concatenation.
Generic memoization, courtesy of the man himself
let memoize f =
let cache = System.Collections.Generic.Dictionary<_,_>(HashIdentity.Structural)
fun x ->
let ok, res = cache.TryGetValue(x)
if ok then res
else let res = f x
cache.[x] <- res
res
Using this, you could do a cached reader like so:
let cachedReader = memoize reader
Simple read-write to text files
These are trivial, but make file access pipeable:
open System.IO
let fileread f = File.ReadAllText(f)
let filewrite f s = File.WriteAllText(f, s)
let filereadlines f = File.ReadAllLines(f)
let filewritelines f ar = File.WriteAllLines(f, ar)
So
let replace f (r:string) (s:string) = s.Replace(f, r)
"C:\\Test.txt" |>
fileread |>
replace "teh" "the" |>
filewrite "C:\\Test.txt"
And combining that with the visitor quoted in the question:
let filereplace find repl path =
path |> fileread |> replace find repl |> filewrite path
let recurseReplace root filter find repl =
visitor root filter |> Seq.iter (filereplace find repl)
Update Slight improvement if you want to be able to read 'locked' files (e.g. csv files which are already open in Excel...):
let safereadall f =
use fs = new FileStream(f, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)
use sr = new StreamReader(fs, System.Text.Encoding.Default)
sr.ReadToEnd()
let split sep (s:string) = System.Text.RegularExpressions.Regex.Split(s, sep)
let fileread f = safereadall f
let filereadlines f = f |> safereadall |> split System.Environment.NewLine
For performance intensive stuff where you need to check for null
let inline isNull o = System.Object.ReferenceEquals(o, null)
if isNull o then ... else ...
Is about 20x faster than
if o = null then ... else ...
Active Patterns, aka "Banana Splits", are a very handy construct that let one match against multiple regular expression patterns. This is much like AWK, but without the high performance of DFA's because the patterns are matched in sequence until one succeeds.
#light
open System
open System.Text.RegularExpressions
let (|Test|_|) pat s =
if (new Regex(pat)).IsMatch(s)
then Some()
else None
let (|Match|_|) pat s =
let opt = RegexOptions.None
let re = new Regex(pat,opt)
let m = re.Match(s)
if m.Success
then Some(m.Groups)
else None
Some examples of use:
let HasIndefiniteArticle = function
| Test "(?: |^)(a|an)(?: |$)" _ -> true
| _ -> false
type Ast =
| IntVal of string * int
| StringVal of string * string
| LineNo of int
| Goto of int
let Parse = function
| Match "^LET\s+([A-Z])\s*=\s*(\d+)$" g ->
IntVal( g.[1].Value, Int32.Parse(g.[2].Value) )
| Match "^LET\s+([A-Z]\$)\s*=\s*(.*)$" g ->
StringVal( g.[1].Value, g.[2].Value )
| Match "^(\d+)\s*:$" g ->
LineNo( Int32.Parse(g.[1].Value) )
| Match "^GOTO \s*(\d+)$" g ->
Goto( Int32.Parse(g.[1].Value) )
| s -> failwithf "Unexpected statement: %s" s
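A quick usage sketch for the active patterns above:
"an apple" |> HasIndefiniteArticle   // true
Parse "LET A = 42"                   // IntVal ("A", 42)
Parse "GOTO 10"                      // Goto 10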
Maybe monad
type maybeBuilder() =
member this.Bind(v, f) =
match v with
| None -> None
| Some(x) -> f x
member this.Delay(f) = f()
member this.Return(v) = Some v
let maybe = maybeBuilder()
Here's a brief intro to monads for the uninitiated.
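A small usage sketch of the builder above (tryDivide is an illustrative helper):
let tryDivide x y = if y = 0 then None else Some (x / y)

let result =
    maybe {
        let! a = tryDivide 10 2
        let! b = tryDivide a 0   // None here short-circuits the whole computation
        return a + b
    }
// result = None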
Option-coalescing operators
I wanted a version of the defaultArg function that had a syntax closer to the C# null-coalescing operator, ??. This lets me get the value from an Option while providing a default value, using a very concise syntax.
/// Option-coalescing operator - this is like the C# ?? operator, but works with
/// the Option type.
/// Warning: Unlike the C# ?? operator, the second parameter will always be
/// evaluated.
/// Example: let foo = someOption |? default
let inline (|?) value defaultValue =
defaultArg value defaultValue
/// Option-coalescing operator with delayed evaluation. The other version of
/// this operator always evaluates the default value expression. If you only
/// want to create the default value when needed, use this operator and pass
/// in a function that creates the default.
/// Example: let foo = someOption |?! (fun () -> new Default())
let inline (|?!) value f =
match value with Some x -> x | None -> f()
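Usage sketch (values chosen arbitrarily):
Some 8080 |? 80             // 8080
None |? 80                  // 80
None |?! (fun () -> 80)     // 80; the default is only constructed when needed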
'Unitize' a function which doesn't handle units
Using the FloatWithMeasure function http://msdn.microsoft.com/en-us/library/ee806527(VS.100).aspx.
let unitize (f:float -> float) (v:float<'u>) =
LanguagePrimitives.FloatWithMeasure<'u> (f (float v))
Example:
[<Measure>] type m
[<Measure>] type kg
let unitize (f:float -> float) (v:float<'u>) =
LanguagePrimitives.FloatWithMeasure<'u> (f (float v))
//this function doesn't take units
let badinc a = a + 1.
//this one does!
let goodinc v = unitize badinc v
goodinc 3.<m>
goodinc 3.<kg>
OLD version:
let unitize (f:float -> float) (v:float<'u>) =
let unit = box 1. :?> float<'u>
unit * (f (v/unit))
Kudos to kvb
Scale/Ratio function builder
Again, trivial, but handy.
//returns a function which will convert from a1-a2 range to b1-b2 range
let scale (a1:float<'u>, a2:float<'u>) (b1:float<'v>,b2:float<'v>) =
let m = (b2 - b1)/(a2 - a1) //gradient of line (evaluated once only..)
(fun a -> b1 + m * (a - a1))
Example:
[<Measure>] type m
[<Measure>] type px
let screenSize = (0.<px>, 300.<px>)
let displayRange = (100.<m>, 200.<m>)
let scaleToScreen = scale displayRange screenSize
scaleToScreen 120.<m> //-> 60.<px>
Transposing a list (seen on Jomo Fisher's blog)
///Given list of 'rows', returns list of 'columns'
let rec transpose lst =
match lst with
| (_::_)::_ -> List.map List.head lst :: transpose (List.map List.tail lst)
| _ -> []
transpose [[1;2;3];[4;5;6];[7;8;9]] // returns [[1;4;7];[2;5;8];[3;6;9]]
And here is a tail-recursive version which (from my sketchy profiling) is mildly slower, but has the advantage of not throwing a stack overflow when the inner lists are longer than 10000 elements (on my machine):
let transposeTR lst =
let rec inner acc lst =
match lst with
| (_::_)::_ -> inner (List.map List.head lst :: acc) (List.map List.tail lst)
| _ -> List.rev acc
inner [] lst
If I was clever, I'd try and parallelise it with async...
F# Map <-> C# Dictionary
(I know, I know, System.Collections.Generic.Dictionary isn't really a 'C#' dictionary)
C# to F#
(dic :> seq<_>) //cast to seq of KeyValuePair
|> Seq.map (|KeyValue|) //convert KeyValuePairs to tuples
|> Map.ofSeq //convert to Map
(From Brian, here, with improvement proposed by Mauricio in comment below. (|KeyValue|) is an active pattern for matching KeyValuePair - from FSharp.Core - equivalent to (fun kvp -> kvp.Key, kvp.Value))
Interesting alternative
To get all of the immutable goodness, but with the O(1) lookup speed of Dictionary, you can use the dict operator, which returns an immutable IDictionary (see this question).
I currently can't see a way to directly convert a Dictionary using this method, other than
(dic :> seq<_>) //cast to seq of KeyValuePair
|> Seq.map (fun kvp -> kvp.Key, kvp.Value) //convert KeyValuePairs to tuples
|> dict //convert to immutable IDictionary
F# to C#
let dic = Dictionary()
map |> Map.iter (fun k t -> dic.Add(k, t))
dic
What is weird here is that FSI will report the type as (for example):
val it : Dictionary<string,int> = dict [("a",1);("b",2)]
but if you feed dict [("a",1);("b",2)] back in, FSI reports
IDictionary<string,int> = seq[[a,1] {Key = "a"; Value = 1; } ...
Tree-sort / Flatten a tree into a list
I have the following binary tree:
___ 77 _
/ \
______ 47 __ 99
/ \
21 _ 54
\ / \
43 53 74
/
39
/
32
Which is represented as follows:
type 'a tree =
| Node of 'a tree * 'a * 'a tree
| Nil
let myTree =
Node
(Node
(Node (Nil,21,Node (Node (Node (Nil,32,Nil),39,Nil),43,Nil)),47,
Node (Node (Nil,53,Nil),54,Node (Nil,74,Nil))),77,Node (Nil,99,Nil))
A straightforward method to flatten the tree is:
let rec flatten = function
    | Nil -> []
    | Node(l, a, r) -> flatten l @ a :: flatten r
This isn't tail-recursive, and I believe the @ operator causes it to be O(n log n) or O(n^2) with unbalanced binary trees. With a little tweaking, I came up with this tail-recursive O(n) version:
let flatten2 t =
let rec loop acc c = function
| Nil -> c acc
| Node(l, a, r) ->
loop acc (fun acc' -> loop (a::acc') c l) r
loop [] (fun x -> x) t
Here's the output in fsi:
> flatten2 myTree;;
val it : int list = [21; 32; 39; 43; 47; 53; 54; 74; 77; 99]
LINQ-to-XML helpers
namespace System.Xml.Linq
// hide warning about op_Explicit
#nowarn "77"
[<AutoOpen>]
module XmlUtils =
/// Converts a string to an XName.
let xn = XName.op_Implicit
/// Converts a string to an XNamespace.
let xmlns = XNamespace.op_Implicit
/// Gets the string value of any XObject subclass that has a Value property.
let inline xstr (x : ^a when ^a :> XObject) =
(^a : (member get_Value : unit -> string) x)
/// Gets a strongly-typed value from any XObject subclass, provided that
/// an explicit conversion to the output type has been defined.
/// (Many explicit conversions are defined on XElement and XAttribute)
/// Example: let value:int = xval foo
let inline xval (x : ^a when ^a :> XObject) : ^b =
((^a or ^b) : (static member op_Explicit : ^a -> ^b) x)
/// Dynamic lookup operator for getting an attribute value from an XElement.
/// Returns a string option, set to None if the attribute was not present.
/// Example: let value = foo?href
/// Example with default: let value = defaultArg foo?Name "<Unknown>"
let (?) (el:XElement) (name:string) =
match el.Attribute(xn name) with
| null -> None
| att -> Some(att.Value)
/// Dynamic operator for setting an attribute on an XElement.
/// Example: foo?href <- "http://www.foo.com/"
let (?<-) (el:XElement) (name:string) (value:obj) =
el.SetAttributeValue(xn name, value)
OK, this has nothing to do with snippets, but I keep forgetting this:
If you are in the interactive window, you hit F7 to jump back to the code window (without deselecting the code which you just ran...)
Going from code window to F# window (and also to open the F# window) is Ctrl Alt F
(unless CodeRush has stolen your bindings...)
Weighted sum of arrays
Calculating a weighted [n-array] sum of a [k-array of n-arrays] of numbers, based on a [k-array] of weights
(Copied from this question, and kvb's answer)
Given these arrays
let weights = [|0.6;0.3;0.1|]
let arrs = [| [|0.0453;0.065345;0.07566;1.562;356.6|] ;
[|0.0873;0.075565;0.07666;1.562222;3.66|] ;
[|0.06753;0.075675;0.04566;1.452;3.4556|] |]
We want a weighted sum (by column), given that both dimensions of the arrays can be variable.
Array.map2 (fun w -> Array.map ((*) w)) weights arrs
|> Array.reduce (Array.map2 (+))
First line: Partial application of the first Array.map2 function to weights yields a new function, Array.map ((*) w), which is applied (for each weight w) to each array in arrs.
Second line: Array.reduce is like fold, except it starts on the second value and uses the first as the initial 'state'. In this case each value is a 'line' of our array of arrays. So applying an Array.map2 (+) on the first two lines means that we sum the first two arrays, which leaves us with a new array, which we then (Array.reduce) sum again onto the next (in this case last) array.
Result:
[|0.060123; 0.069444; 0.07296; 1.5510666; 215.40356|]
Performance testing
(Found here and updated for latest release of F#)
open System
open System.Diagnostics
module PerformanceTesting =
let Time func =
let stopwatch = new Stopwatch()
stopwatch.Start()
func()
stopwatch.Stop()
stopwatch.Elapsed.TotalMilliseconds
let GetAverageTime timesToRun func =
Seq.initInfinite (fun _ -> (Time func))
|> Seq.take timesToRun
|> Seq.average
let TimeOperation timesToRun =
GC.Collect()
GetAverageTime timesToRun
let TimeOperations funcsWithName =
let randomizer = new Random(int DateTime.Now.Ticks)
funcsWithName
|> Seq.sortBy (fun _ -> randomizer.Next())
|> Seq.map (fun (name, func) -> name, (TimeOperation 100000 func))
let TimeOperationsAFewTimes funcsWithName =
Seq.initInfinite (fun _ -> (TimeOperations funcsWithName))
|> Seq.take 50
|> Seq.concat
|> Seq.groupBy fst
|> Seq.map (fun (name, individualResults) -> name, (individualResults |> Seq.map snd |> Seq.average))
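A hedged usage sketch for the module above; the two measured functions are arbitrary examples (note TimeOperations runs each one 100,000 times, so keep them small):
let results =
    PerformanceTesting.TimeOperationsAFewTimes
        [ "List.map", (fun () -> List.map ((+) 1) [1..1000] |> ignore)
          "Seq.map",  (fun () -> [1..1000] |> Seq.map ((+) 1) |> Seq.toList |> ignore) ]

results |> Seq.iter (fun (name, avgMs) -> printfn "%s: %f ms" name avgMs)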
DataSetExtensions for F#, DataReaders
System.Data.DataSetExtensions.dll adds the ability to treat a DataTable as an IEnumerable<DataRow> as well as unboxing the values of individual cells in a way that gracefully handles DBNull by supporting System.Nullable. For example, in C# we can get the value of an integer column that contains nulls, and specify that DBNull should default to zero with a very concise syntax:
var total = myDataTable.AsEnumerable()
.Select(row => row.Field<int?>("MyColumn") ?? 0)
.Sum();
There are two areas where DataSetExtensions are lacking, however. First, it doesn't support IDataReader and second, it doesn't support the F# option type. The following code does both - it allows an IDataReader to be treated as a seq<IDataRecord>, and it can unbox values from either a reader or a dataset, with support for F# options or System.Nullable. Combined with the option-coalescing operator in another answer, this allows for code such as the following when working with a DataReader:
let total =
myReader.AsSeq
|> Seq.map (fun row -> row.Field<int option>("MyColumn") |? 0)
|> Seq.sum
Perhaps a more idiomatic F# way of ignoring database nulls would be...
let total =
myReader.AsSeq
|> Seq.choose (fun row -> row.Field<int option>("MyColumn"))
|> Seq.sum
Further, the extension methods defined below are usable from both F# and from C#/VB.
open System
open System.Data
open System.Reflection
open System.Runtime.CompilerServices
open Microsoft.FSharp.Collections
/// Ported from System.Data.DatasetExtensions.dll to add support for the Option type.
[<AbstractClass; Sealed>]
type private UnboxT<'a> private () =
// This class generates a converter function based on the desired output type,
// and then re-uses the converter function forever. Because the class itself is generic,
// different output types get different cached converter functions.
static let referenceField (value:obj) =
if value = null || DBNull.Value.Equals(value) then
Unchecked.defaultof<'a>
else
unbox value
static let valueField (value:obj) =
if value = null || DBNull.Value.Equals(value) then
raise <| InvalidCastException("Null cannot be converted to " + typeof<'a>.Name)
else
unbox value
static let makeConverter (target:Type) methodName =
Delegate.CreateDelegate(typeof<Converter<obj,'a>>,
typeof<UnboxT<'a>>
.GetMethod(methodName, BindingFlags.NonPublic ||| BindingFlags.Static)
.MakeGenericMethod([| target.GetGenericArguments().[0] |]))
|> unbox<Converter<obj,'a>>
|> FSharpFunc.FromConverter
static let unboxFn =
let theType = typeof<'a>
if theType.IsGenericType && not theType.IsGenericTypeDefinition then
let genericType = theType.GetGenericTypeDefinition()
if typedefof<Nullable<_>> = genericType then
makeConverter theType "NullableField"
elif typedefof<option<_>> = genericType then
makeConverter theType "OptionField"
else
invalidOp "The only generic types supported are Option<T> and Nullable<T>."
elif theType.IsValueType then
valueField
else
referenceField
static member private NullableField<'b when 'b : struct and 'b :> ValueType and 'b:(new:unit -> 'b)> (value:obj) =
if value = null || DBNull.Value.Equals(value) then
Nullable<_>()
else
Nullable<_>(unbox<'b> value)
static member private OptionField<'b> (value:obj) =
if value = null || DBNull.Value.Equals(value) then
None
else
Some(unbox<'b> value)
static member inline Unbox =
unboxFn
/// F# data-related extension methods.
[<AutoOpen>]
module FsDataEx =
type System.Data.IDataReader with
/// Exposes a reader's current result set as seq<IDataRecord>.
/// Reader is closed when sequence is fully enumerated.
member this.AsSeq =
seq { use reader = this
while reader.Read() do yield reader :> IDataRecord }
/// Exposes all result sets in a reader as seq<seq<IDataRecord>>.
/// Reader is closed when sequence is fully enumerated.
member this.AsMultiSeq =
let rowSeq (reader:IDataReader) =
seq { while reader.Read() do yield reader :> IDataRecord }
seq {
use reader = this
yield rowSeq reader
while reader.NextResult() do
yield rowSeq reader
}
/// Populates a new DataSet with the contents of the reader. Closes the reader after completion.
member this.ToDataSet () =
use reader = this
let dataSet = new DataSet(RemotingFormat=SerializationFormat.Binary, EnforceConstraints=false)
dataSet.Load(reader, LoadOption.OverwriteChanges, [| "" |])
dataSet
type System.Data.IDataRecord with
/// Gets a value from the record by name.
/// DBNull and null are returned as the default value for the type.
/// Supports both nullable and option types.
member this.Field<'a> (fieldName:string) =
this.[fieldName] |> UnboxT<'a>.Unbox
/// Gets a value from the record by column index.
/// DBNull and null are returned as the default value for the type.
/// Supports both nullable and option types.
member this.Field<'a> (ordinal:int) =
this.GetValue(ordinal) |> UnboxT<'a>.Unbox
type System.Data.DataRow with
/// Identical to the Field method from DatasetExtensions, but supports the F# Option type.
member this.Field2<'a> (columnName:string) =
this.[columnName] |> UnboxT<'a>.Unbox
/// Identical to the Field method from DatasetExtensions, but supports the F# Option type.
member this.Field2<'a> (columnIndex:int) =
this.[columnIndex] |> UnboxT<'a>.Unbox
/// Identical to the Field method from DatasetExtensions, but supports the F# Option type.
member this.Field2<'a> (column:DataColumn) =
this.[column] |> UnboxT<'a>.Unbox
/// Identical to the Field method from DatasetExtensions, but supports the F# Option type.
member this.Field2<'a> (columnName:string, version:DataRowVersion) =
this.[columnName, version] |> UnboxT<'a>.Unbox
/// Identical to the Field method from DatasetExtensions, but supports the F# Option type.
member this.Field2<'a> (columnIndex:int, version:DataRowVersion) =
this.[columnIndex, version] |> UnboxT<'a>.Unbox
/// Identical to the Field method from DatasetExtensions, but supports the F# Option type.
member this.Field2<'a> (column:DataColumn, version:DataRowVersion) =
this.[column, version] |> UnboxT<'a>.Unbox
/// C# data-related extension methods.
[<Extension; AbstractClass; Sealed>]
type CsDataEx private () =
/// Populates a new DataSet with the contents of the reader. Closes the reader after completion.
[<Extension>]
static member ToDataSet(this:IDataReader) =
this.ToDataSet()
/// Exposes a reader's current result set as IEnumerable{IDataRecord}.
/// Reader is closed when sequence is fully enumerated.
[<Extension>]
static member AsEnumerable(this:IDataReader) =
this.AsSeq
/// Exposes all result sets in a reader as IEnumerable{IEnumerable{IDataRecord}}.
/// Reader is closed when sequence is fully enumerated.
[<Extension>]
static member AsMultipleEnumerable(this:IDataReader) =
this.AsMultiSeq
/// Gets a value from the record by name.
/// DBNull and null are returned as the default value for the type.
/// Supports both nullable and option types.
[<Extension>]
static member Field<'T> (this:IDataRecord, fieldName:string) =
this.Field<'T>(fieldName)
/// Gets a value from the record by column index.
/// DBNull and null are returned as the default value for the type.
/// Supports both nullable and option types.
[<Extension>]
static member Field<'T> (this:IDataRecord, ordinal:int) =
this.Field<'T>(ordinal)
Handling arguments in a command line application:
//We assume that the actual meat is already defined in function
// DoStuff (string -> string -> string -> unit)
let defaultOutOption = "N"
let defaultUsageOption = "Y"
let usage =
"Scans a folder for and outputs results.\n" +
"Usage:\n\t MyApplication.exe FolderPath [IncludeSubfolders (Y/N) : default=" +
defaultUsageOption + "] [OutputToFile (Y/N): default=" + defaultOutOption + "]"
let HandlArgs arr =
match arr with
| [|d;u;o|] -> DoStuff d u o
| [|d;u|] -> DoStuff d u defaultOutOption
| [|d|] -> DoStuff d defaultUsageOption defaultOutOption
| _ ->
printf "%s" usage
Console.ReadLine() |> ignore
[<EntryPoint>]
let main (args : string array) =
args |> HandlArgs
0
(I had a vague memory of this technique being inspired by Robert Pickering, but can't find a reference now)
A handy cache function that keeps up to max (key, reader(key)) pairs in a dictionary and uses a LinkedList to track the MRU keys
let Cache (reader: 'key -> 'value) max =
let cache = new Dictionary<'key,LinkedListNode<'key * 'value>>()
let keys = new LinkedList<'key * 'value>()
fun (key : 'key) -> (
let found, value = cache.TryGetValue key
match found with
|true ->
keys.Remove value
keys.AddFirst value |> ignore
(snd value.Value)
|false ->
let newValue = key,reader key
let node = keys.AddFirst newValue
cache.[key] <- node
if (keys.Count > max) then
let lastNode = keys.Last
cache.Remove (fst lastNode.Value) |> ignore
keys.RemoveLast() |> ignore
(snd newValue))
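Usage sketch (assumes open System.Collections.Generic for the Dictionary/LinkedList above; slowRead is an illustrative stand-in for an expensive lookup):
let slowRead (key: string) = System.IO.File.ReadAllText key
let cachedRead = Cache slowRead 100
cachedRead "C:\\Test.txt"   // reads the file
cachedRead "C:\\Test.txt"   // same key again - served from the cache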
Creating XElements
Nothing amazing, but I keep getting caught out by the implicit conversion of XNames:
#r "System.Xml.Linq.dll"
open System.Xml.Linq
//No! ("type string not compatible with XName")
//let el = new XElement("MyElement", "text")
//better
let xn s = XName.op_Implicit s
let el = new XElement(xn "MyElement", "text")
//or even
let xEl s o = new XElement(xn s, o)
let el = xEl "MyElement" "text"
Pairwise and pairs
I always expect Seq.pairwise to give me [(1,2);(3,4)] and not [(1,2);(2,3);(3,4)]. Given that neither exists in List, and that I needed both, here's the code for future reference. I think they're tail recursive.
//converts to 'windowed tuples' ([1;2;3;4;5] -> [(1,2);(2,3);(3,4);(4,5)])
let pairwise lst =
let rec loop prev rem acc =
match rem with
| hd::tl -> loop hd tl ((prev,hd)::acc)
| _ -> List.rev acc
loop (List.head lst) (List.tail lst) []
//converts to 'paged tuples' ([1;2;3;4;5;6] -> [(1,2);(3,4);(5,6)])
let pairs lst =
let rec loop rem acc =
match rem with
| l::r::tl -> loop tl ((l,r)::acc)
| l::[] -> failwith "odd-numbered list"
| _ -> List.rev acc
loop lst []
Naive CSV reader (i.e., won't handle anything nasty)
(Using filereadlines and List.transpose from other answers here)
///Given a file path, returns a List of row lists
let ReadCSV =
filereadlines
>> Array.map ( fun line -> line.Split([|',';';'|]) |> List.ofArray )
>> Array.toList
///takes list of col ids and list of rows,
/// returns array of columns (in requested order)
let GetColumns cols rows =
//Create filter
let pick cols (row:list<'a>) = List.map (fun i -> row.[i]) cols
rows
|> transpose //change list of rows to list of columns
|> pick cols //pick out the columns we want
|> Array.ofList //an array output is easier to index for user
Example
"C:\MySampleCSV"
|> ReadCSV
|> List.tail //skip header line
|> GetColumns [0;3;1] //reorder columns as well, if needs be.
Date Range
simple but useful list of dates between fromDate and toDate
let getDateRange fromDate toDate =
let rec dates (fromDate:System.DateTime) (toDate:System.DateTime) =
seq {
if fromDate <= toDate then
yield fromDate
yield! dates (fromDate.AddDays(1.0)) toDate
}
dates fromDate toDate
|> List.ofSeq
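Usage sketch:
getDateRange (System.DateTime(2017, 9, 11)) (System.DateTime(2017, 9, 15))
// a 5-element list: 11-Sep-2017 through 15-Sep-2017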
toggle code to sql
More trivial than most on this list, but handy nonetheless:
I'm always taking sql in and out of code to move it to a sql environment during development. Example:
let sql = "select a,b,c "
+ "from table "
+ "where a = 1"
needs to be 'stripped' to:
select a,b,c
from table
where a = 1
keeping the formatting. It's a pain to strip out the code symbols for the sql editor, then put them back again by hand when I've got the sql worked out. These two functions toggle the sql back and forth from code to stripped:
// reads the file with the code quoted sql, strips code symbols, dumps to FSI
let stripForSql fileName =
File.ReadAllText(fileName)
|> (fun s -> Regex.Replace(s, "\+(\s*)\"", ""))
|> (fun s -> s.Replace("\"", ""))
|> (fun s -> Regex.Replace(s, ";$", "")) // end of line semicolons
|> (fun s -> Regex.Replace(s, "//.+", "")) // get rid of any comments
|> (fun s -> printfn "%s" s)
then when you are ready to put it back into your code source file:
let prepFromSql fileName =
    File.ReadAllText(fileName)
    |> (fun s -> Regex.Replace(s, @"\r\n", " \"\r\n+\"")) // matches newline
    |> (fun s -> Regex.Replace(s, @"\A", " \""))
    |> (fun s -> Regex.Replace(s, @"\z", " \""))
    |> (fun s -> printfn "%s" s)
I'd love to get rid of the input file but can't even begin to grok how to make that happen. anyone?
edit:
I figured out how to eliminate the requirement of a file for these functions by adding a windows forms dialog input/output. Too much code to show, but for those who would like to do such a thing, that's how I solved it.
Pascal's Triangle (hey, someone might find it useful)
So we want to create a something like this:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
Easy enough:
let rec next = function
| [] -> []
| x::y::xs -> (x + y)::next (y::xs)
| x::xs -> x::next xs
let pascal n =
    seq { 1 .. n }
    |> Seq.scan (fun acc _ -> next (0::acc)) [1]
The next function returns a new list where each item[i] = item[i] + item[i + 1].
Here's the output in fsi:
> pascal 10 |> Seq.iter (printfn "%A");;
[1]
[1; 1]
[1; 2; 1]
[1; 3; 3; 1]
[1; 4; 6; 4; 1]
[1; 5; 10; 10; 5; 1]
[1; 6; 15; 20; 15; 6; 1]
[1; 7; 21; 35; 35; 21; 7; 1]
[1; 8; 28; 56; 70; 56; 28; 8; 1]
[1; 9; 36; 84; 126; 126; 84; 36; 9; 1]
[1; 10; 45; 120; 210; 252; 210; 120; 45; 10; 1]
For the adventurous, here's a tail-recursive version:
let rec next2 cont = function
| [] -> cont []
| x::y::xs -> next2 (fun l -> cont <| (x + y)::l ) <| y::xs
| x::xs -> next2 (fun l -> cont <| x::l ) <| xs
let pascal2 n =
    seq { 1 .. n }
    |> Seq.scan (fun acc _ -> next2 id <| 0::acc) [1]
Flatten a List
if you have something like this:
let listList = [[1;2;3;];[4;5;6]]
and want to 'flatten' it down to a single list so the result is like this:
[1;2;3;4;5;6]
it can be done thusly:
let flatten (l: 'a list list) =
    seq {
        for a in l do yield! a   // (equivalently: List.concat l)
    }
    |> List.ofSeq
List comprehensions for float
This [23.0 .. 1.0 .. 40.0] was marked as deprecated a few versions backed.
But apparently, this works:
let dl = 9.5 / 11.
let min = 21.5 + dl
let max = 40.5 - dl
let a = [ for z in min .. dl .. max -> z ]
let b = a.Length
(BTW, there's a floating point gotcha in there. Discovered at fssnip - the other place for F# snippets)
Parallel map
let pmap f s =
    seq { for a in s -> async { return f a } }
    |> Async.Parallel
    |> Async.Run
