I am learning F# by automating a few of my tasks with F# scripts. I run these scripts with "fsi/fsharpi --exec" from the command line, and I am using .NET Core for my work. One of the things I was looking for is how to profile my F# scripts. I am primarily looking to:
- See the overall time consumed by my entire script. I tried doing this with Stopwatch-style timing and it works well (a sketch of what I mean is below). Is there anything that can show the time for my various top-level function calls, or timings/counts per function call?
- See the overall memory consumption of my script.
- Find hot spots in my scripts.
Overall I am trying to understand the performance bottlenecks of my scripts.
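For context, this is roughly the Stopwatch-based wrapper I have been using by hand; time and doImport are just illustrative names, not my real functions:

open System.Diagnostics

let time label f =
    let sw = Stopwatch.StartNew()
    let result = f ()
    sw.Stop()
    printfn "%s took %d ms" label sw.ElapsedMilliseconds
    result

// Placeholder for one of my real top-level functions.
let doImport () = System.Threading.Thread.Sleep 100

time "doImport" doImport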
On a side note, is there a way to compile F# scripts to exe?
I recommend using BenchmarkDotNet for any benchmarking tasks (well, micro-benchmarks). Since it's a statistical tool, it accounts for many things that hand-rolled benchmarking will not. And just by applying a few attributes you can get a nifty report.
Create a .NET Core console app, add the BenchmarkDotNet package, create a benchmark, and run it to see the results. Here's an example that tests two trivial parsing functions, with one as the baseline for comparison, and informing BenchmarkDotNet to capture memory usage stats when running the benchmark:
open System
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

module Parsing =
    /// "123,456" --> (123, 456)
    let getNums (str: string) (delim: char) =
        let idx = str.IndexOf(delim)
        let first = Int32.Parse(str.Substring(0, idx))
        let second = Int32.Parse(str.Substring(idx + 1))
        first, second

    /// "123,456" --> (123, 456)
    let getNumsFaster (str: string) (delim: char) =
        let sp = str.AsSpan()
        let idx = sp.IndexOf(delim)
        let first = Int32.Parse(sp.Slice(0, idx))
        let second = Int32.Parse(sp.Slice(idx + 1))
        struct (first, second)

[<MemoryDiagnoser>]
type ParsingBench() =
    let str = "123,456"
    let delim = ','

    [<Benchmark(Baseline = true)>]
    member __.GetNums() =
        Parsing.getNums str delim |> ignore

    [<Benchmark>]
    member __.GetNumsFaster() =
        Parsing.getNumsFaster str delim |> ignore

[<EntryPoint>]
let main _ =
    let summary = BenchmarkRunner.Run<ParsingBench>()
    printfn "%A" summary
    0 // return an integer exit code
In this case, the results will show that the getNumsFaster function allocates 0 bytes and runs about 33% faster.
Once you've found something that consistently performs better and allocates less, you can transfer that over to a script or some other environment where the code will actually execute.
As for hotspots, your best tool is to actually run the script under a profiler like PerfView and look at CPU time and allocations caused by the script while it's executing. There's no simple answer here: interpreting profiling results correctly is challenging and time consuming work.
There's no way to compile an F# script to an executable for .NET Core. It's possible only on Windows/.NET Framework, but this is legacy behavior that is considered deprecated. It's recommended that you convert code in your script to an application if you'd like it to run as an executable.
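For example, a minimal sketch of that conversion might look like the following (doWork is just a placeholder for whatever your script currently does at the top level; the project itself can be created with something like "dotnet new console -lang F#"):

module Program

// Placeholder for the code that currently sits at the top level of the script.
let doWork () =
    printfn "script logic goes here"

[<EntryPoint>]
let main argv =
    doWork ()
    0 // exit code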
I've been using F# for nearly six months and was so sure that F# Interactive should have the same performance as compiled code that, when I finally bothered to benchmark it, I was convinced it was some kind of compiler bug. Though now it occurs to me that I should have checked here first before opening an issue.
For me it is roughly 3x slower and the optimization switch does not seem to be doing anything at all.
Is this supposed to be standard behavior? If so, I really got trolled by the #time directive. I have the timings for how long it takes to sum 100M elements on this Reddit thread.
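For reference, this is how I am invoking the #time directive in F# Interactive; the loop below is only a stand-in for the actual benchmark in the pastebin:

#time "on"

// Sum 100M integers; int64 just to avoid overflow in this stand-in.
let mutable total = 0L
for i in 1 .. 100000000 do
    total <- total + int64 i

#time "off"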
Update:
Thanks to FuleSnabel, I uncovered some things.
I tried running the example script from both fsianycpu.exe (which is the default F# Interactive) and fsi.exe, and I am getting different timings for the two runs: 134 ms for the former and 78 ms for the latter. Those two timings also correspond to the timings from the unoptimized and optimized binaries respectively.
What makes the matter even more confusing is that the first project I used to compile the thing is a part of the game library (in script form) I am making and it refuses to compile the optimized binary, instead switching to the unoptimized one without informing me. I had to start a fresh project to get it to compile properly. It is a wonder the other test compiled properly.
So basically, something funky is going on here and I should look into switching from fsianycpu.exe to fsi.exe as the default interpreter.
I tried the example code from the pastebin and I don't see the behavior you describe. These are the results from my performance run:
.\bin\Release\ConsoleApplication3.exe
Total iterations: 300000000, Outer: 10000, Inner: 30000
reduce sequence of list, result 450015000, time 2836 ms
reduce array, result 450015000, time 594 ms
for loop array, result 450015000, time 180 ms
reduce list, result 450015000, time 593 ms
fsi -O --exec .\Interactive.fsx
Total iterations: 300000000, Outer: 10000, Inner: 30000
reduce sequence of list, result 450015000, time 2617 ms
reduce array, result 450015000, time 589 ms
for loop array, result 450015000, time 168 ms
reduce list, result 450015000, time 603 ms
It's expected that Seq.reduce would be the slowest, the for loop the fastest and that the reduce on list/array is roughly similar (this assumes locality of list elements which isn't guaranteed).
I rewrote your code to allow for longer runs without running out of memory and to improve the cache locality of the data. With short runs, the uncertainty of measurement makes it hard to compare the data.
Program.fs:
module fs

let stopWatch =
    let sw = new System.Diagnostics.Stopwatch()
    sw.Start ()
    sw

let total = 300000000
let outer = 10000
let inner = total / outer

let timeIt (name : string) (a : unit -> 'T) : unit =
    let t = stopWatch.ElapsedMilliseconds
    let v = a ()
    for i = 2 to outer do
        a () |> ignore
    let d = stopWatch.ElapsedMilliseconds - t
    printfn "%s, result %A, time %d ms" name v d

[<EntryPoint>]
let sumTest(args) =
    let numsList = [1..inner]
    let numsArray = [|1..inner|]

    printfn "Total iterations: %d, Outer: %d, Inner: %d" total outer inner

    let sumsSeqReduce () = Seq.reduce (+) numsList
    timeIt "reduce sequence of list" sumsSeqReduce

    let sumsArray () = Array.reduce (+) numsArray
    timeIt "reduce array" sumsArray

    let sumsLoop () =
        let mutable total = 0
        for i in 0 .. inner - 1 do
            total <- total + numsArray.[i]
        total
    timeIt "for loop array" sumsLoop

    let sumsListReduce () = List.reduce (+) numsList
    timeIt "reduce list" sumsListReduce

    0
Interactive.fsx:
#load "Program.fs"
fs.sumTest [||]
PS. I am running on Windows with Visual Studio 2015. 32-bit vs 64-bit seemed to make only a marginal difference.
The Computer Language Benchmarks Game's F# entry for Threadring contains a seemingly useless line: if false then (). When I comment out this line, the program runs much faster (~2s vs ~55s for an input of 50000000) and produces the same result. How does this work? Why is this line there? What exactly is the compiler doing with what appears to be a no-op?
The code:
let ringLength = 503

let cells = Array.zeroCreate ringLength
let threads = Array.zeroCreate ringLength
let answer = ref -1

let createWorker i =
    let next = (i+1) % ringLength
    async { let value = cells.[i]
            if false then ()
            match value with
            | 0 -> answer := i+1
            | _ ->
                cells.[next] <- value - 1
                return! threads.[next] }

[<EntryPoint>]
let main args =
    cells.[0] <- if args.Length > 0 then int args.[0] else 50000000
    for i in 0 .. ringLength-1 do
        threads.[i] <- createWorker i
    let result = Async.StartImmediate(threads.[0])
    printfn "%d" !answer
    0
I wrote this code originally. I don't remember the exact reason I added the line, but I'm guessing that, without it, the optimizer would do something I thought was outside of the spirit of the benchmark game. The reason for using asyncs in the first place is to achieve tail-call continuation to the next async (which is what makes this perform so much better than C# mono).
- Jomo
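To illustrate the tail-call continuation Jomo mentions: inside an async workflow, return! hands control to the next computation without growing the stack. A minimal sketch (countdown is just an illustrative name, not part of the benchmark):

let rec countdown n : Async<unit> =
    async {
        if n = 0 then return ()
        else return! countdown (n - 1)
    }

// Completes without a stack overflow even for a large n, because each
// return! continues into the next async rather than returning through
// the caller's frame.
countdown 1000000 |> Async.RunSynchronously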
If the computation expression contains if false then (), then the asynchronous workflow gets translated a bit differently. With the line present, it uses async.Combine. Slightly simplified, the translated code looks like:
async.Delay(fun () ->
    let value = cells.[i]
    async.Combine
      ( async.Return(if false then ()),
        async.Delay(fun () ->
            match value with (...) ) ))
The translation inserts Combine because the (potentially) asynchronous computation done by the if expression needs to be combined with the code that follows. Now, if you delete the if, you get something like:
async.Delay(fun () ->
    let value = cells.[i]
    match value with (...) )
The difference is that now a lot more work is done immediately in the function passed to Delay.
EDIT: I thought this caused a difference because the code uses Async.StartImmediate instead of Async.Start, but that does not seem to be the case. In fact, I do not understand why the code uses asynchronous workflows at all...
EDIT II.: I'm not entirely sure about Mono, but it definitely does replicate in the F# interactive - there, the version with Combine is about 4 times slower (which is what I'd expect, because of the function allocation overhead).
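For anyone who wants to poke at the difference directly, here is a compilable approximation of the two shapes above, with a trivial body standing in for the real match; withCombine and withoutCombine are illustrative names only:

// The Combine version allocates an extra Async value and closure per step,
// which is the overhead described above.
let withCombine (cells: int[]) i =
    async.Delay(fun () ->
        let value = cells.[i]
        async.Combine(
            async.Return(if false then ()),
            async.Delay(fun () -> async.Return value)))

let withoutCombine (cells: int[]) i =
    async.Delay(fun () ->
        let value = cells.[i]
        async.Return value)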
As suggested in answers to a previous question, I tried using Erlang proplists to implement a prefix trie.
The code seems to work decently well... But, for some reason, it doesn't play well with the interactive shell. When I try to run it, the shell hangs:
> Trie = trie:from_dict(). % Creates a trie from a dictionary
% ... the trie is printed ...
% Then nothing happens
I see the new trie printed to the screen (i.e., the call to trie:from_dict() has returned), then the shell just hangs. No new > prompt comes up and ^g doesn't do anything (but ^c will eventually kill it off).
With a very small dictionary (the first 50 lines of /usr/share/dict/words), the hang only lasts a second or two (and the trie is built almost instantly)... But it seems to grow exponentially with the size of the dictionary (100 words takes 5 or 10 seconds, I haven't had the patience to try larger wordlists). Also, as the shell is hanging, I notice that the beam.smp process starts eating up a lot of memory (somewhere between 1 and 2 gigs).
So, is there anything obvious that could be causing this shell hang and incredible memory usage?
Some various comments:
I have a hunch that the garbage collector is at fault, but I don't know how to profile or create an experiment to test that.
I've tried profiling with eprof and nothing obvious showed up.
Here is my "add string to trie" function:
add([], Trie) ->
    [ stop | Trie ];
add([Ch|Rest], Trie) ->
    SubTrie = proplists:get_value(Ch, Trie, []),
    NewSubTrie = add(Rest, SubTrie),
    NewTrie = [ { Ch, NewSubTrie } | Trie ],
    % Arbitrarily decide to compress key/value list once it gets
    % more than 60 pairs.
    if length(NewTrie) > 60 ->
            proplists:compact(NewTrie);
       true ->
            NewTrie
    end.
The problem (amongst others? See my comment) is that you are always adding a new {Ch, NewSubTrie} tuple to your proplist tries, whether or not Ch already exists.
Instead of
NewTrie = [ { Ch, NewSubTrie } | Trie ]
you need something like:
NewTrie = lists:keystore(Ch, 1, Trie, {Ch, NewSubTrie})
You're not really building a trie here. Your end result is effectively a randomly ordered proplist of proplists that requires full scans at each level when walking the list. Tries typically get their ordering implicitly from position in the array (or list).
Here's an implementation that uses tuples as the storage mechanism. Calling set only rebuilds the root and direct path tuples.
(Note: you would probably have to make the pair a triple (adding a size field) to make delete work with any efficiency.)
I believe Erlang tuples are really just arrays (I think I read that somewhere), so lookup should be very fast, and modification is probably straightforward. Maybe this would be faster with the array module, but I haven't really played with it enough to know.
This version also stores an arbitrary value, so you can do things like:
1> c(trie).
{ok,trie}
2> trie:get("ab",trie:set("aa",bar,trie:new("ab",foo))).
foo
3> trie:get("abc",trie:set("aa",bar,trie:new("ab",foo))).
undefined
4>
Code (entire module). Note 2: it assumes lowercase, non-empty string keys.
-module(trie).
-compile(export_all).
-define(NEW,{ %% 26 pairs, to avoid cost of calculating a new level at runtime
{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
{undefined,nodepth},{undefined,nodepth}
}
).
-define(POS(Ch), Ch - $a + 1).
new(Key,V) -> set(Key,V,?NEW).
set([H],V,Trie) ->
    Pos = ?POS(H),
    {_,SubTrie} = element(Pos,Trie),
    setelement(Pos,Trie,{V,SubTrie});
set([H|T],V,Trie) ->
    Pos = ?POS(H),
    {SubKey,SubTrie} = element(Pos,Trie),
    case SubTrie of
        nodepth -> setelement(Pos,Trie,{SubKey,set(T,V,?NEW)});
        SubTrie -> setelement(Pos,Trie,{SubKey,set(T,V,SubTrie)})
    end.

get([H],Trie) ->
    {Val,_} = element(?POS(H),Trie),
    Val;
get([H|T],Trie) ->
    case element(?POS(H),Trie) of
        {_,nodepth} -> undefined;
        {_,SubTrie} -> get(T,SubTrie)
    end.
I have pieces of code like this in a project and I realize it's not
written in a functional way:
let data = Array.zeroCreate(3 + (int)firmwareVersions.Count * 27)
data.[0] <- 0x09uy //drcode
data.[1..2] <- firmwareVersionBytes //Number of firmware versions
let mutable index = 0
let loops = firmwareVersions.Count - 1
for i = 0 to loops do
    let nameBytes = ASCIIEncoding.ASCII.GetBytes(firmwareVersions.[i].Name)
    let timestampBytes = this.getTimeStampBytes firmwareVersions.[i].Timestamp
    let sizeBytes = BitConverter.GetBytes(firmwareVersions.[i].Size) |> Array.rev
    data.[index + 3 .. index + 10] <- nameBytes
    data.[index + 11 .. index + 24] <- timestampBytes
    data.[index + 25 .. index + 28] <- sizeBytes
    data.[index + 29] <- firmwareVersions.[i].Status
    index <- index + 27
firmwareVersions is a List that is part of a C# library. It has no knowledge (and should not have any) of how it will be converted into an array of bytes. I realize the code above is very non-functional, so I tried changing it like this:
let headerData = Array.zeroCreate(3)
headerData.[0] <- 0x09uy
headerData.[1..2] <- firmwareVersionBytes
let getFirmwareVersionBytes (firmware : FirmwareVersion) =
    let nameBytes = ASCIIEncoding.ASCII.GetBytes(firmware.Name)
    let timestampBytes = this.getTimeStampBytes firmware.Timestamp
    let sizeBytes = BitConverter.GetBytes(firmware.Size) |> Array.rev
    Array.concat [nameBytes; timestampBytes; sizeBytes]

let data =
    firmwareVersions.ToArray()
    |> Array.map (fun f -> getFirmwareVersionBytes f)
    |> Array.reduce (fun acc b -> Array.concat [acc; b])

let fullData = Array.concat [headerData; data]
So now I'm wondering if this is a better (more functional) way
to write the code. If so... why and what improvements should I make,
if not, why not and what should I do instead?
Suggestions, feedback, remarks?
Thank you
Update
Just wanted to add some more information.
This is part of some library that handles the data for a binary communication
protocol. The only upside I see of the first version of the code is that
people implementing the protocol in a different language (which is the case
in our situation as well) might get a better idea of how many bytes every
part takes up and where exactly they are located in the byte stream... just a remark.
(As not everybody understands English, but all our partners can read code.)
I'd be inclined to inline everything because the whole program becomes so much shorter:
let fullData =
    [| yield 0x09uy
       yield! firmwareVersionBytes
       for firmware in firmwareVersions do
           yield! ASCIIEncoding.ASCII.GetBytes(firmware.Name)
           yield! this.getTimeStampBytes firmware.Timestamp
           yield! BitConverter.GetBytes(firmware.Size) |> Array.rev
           yield firmware.Status |]
If you want to convey the positions of the bytes, I'd put them in comments at the end of each line.
I like your first version better because the indexing gives a better picture of the offsets, which are an important piece of the problem (I assume). The imperative code features the byte offsets prominently, which might be important if your partners can't/don't read the documentation. The functional code emphasises sticking together structures, which would be OK if the byte offsets are not important enough to be mentioned in the documentation either.
Indexing is normally accidental complexity, in which case it should be avoided. For example, your first version's loop could be for firmwareVersion in firmwareVersions instead of for i = 0 to loops.
Also, like Brian says, using constants for the offsets would make the imperative version even more readable.
How often does the code run?
The advantage of 'array concatenation' is that it does make it easier to 'see' the logical portions. The disadvantage is that it creates a lot of garbage (allocating temporary arrays) and may also be slower if used in a tight loop.
Also, I think perhaps your "Array.reduce(...)" can just be "Array.concat".
Overall I prefer the first way (just create one huge array), though I would factor it differently to make the logic more apparent (e.g. have a named constant HEADER_SIZE, etc.).
While we're at it, I'd probably add some asserts to ensure that e.g. nameBytes has the expected length.
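Putting those suggestions together, here is a rough sketch of what the functional version might look like with named size constants, Array.concat instead of Array.reduce, and length asserts. The FirmwareVersion record and getTimeStampBytes below are stand-ins for the C# library types used in the question, and the constant names are purely illustrative:

open System
open System.Text

// Stand-in for the C# FirmwareVersion class from the question.
type FirmwareVersion =
    { Name: string; Timestamp: DateTime; Size: int; Status: byte }

// Stand-in for this.getTimeStampBytes from the question (14 ASCII characters).
let getTimeStampBytes (ts: DateTime) : byte[] =
    Encoding.ASCII.GetBytes(ts.ToString("yyyyMMddHHmmss"))

let [<Literal>] NameSize = 8
let [<Literal>] TimestampSize = 14
let [<Literal>] SizeFieldSize = 4

let getFirmwareVersionBytes (fw: FirmwareVersion) =
    let nameBytes = Encoding.ASCII.GetBytes(fw.Name)
    let timestampBytes = getTimeStampBytes fw.Timestamp
    let sizeBytes = BitConverter.GetBytes(fw.Size) |> Array.rev
    // The asserts document (and check) the fixed field widths of the protocol.
    assert (nameBytes.Length = NameSize)
    assert (timestampBytes.Length = TimestampSize)
    assert (sizeBytes.Length = SizeFieldSize)
    Array.concat [ nameBytes; timestampBytes; sizeBytes; [| fw.Status |] ]

let buildPacket (firmwareVersionBytes: byte[]) (firmwareVersions: FirmwareVersion list) =
    let headerData = Array.append [| 0x09uy |] firmwareVersionBytes  // drcode + version count
    let body = firmwareVersions |> List.map getFirmwareVersionBytes |> Array.concat
    Array.append headerData body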