What is the best practice for "registering" the HttpClient in one place, so it can be reused from this Elmish update function instead of being created for every request?
let update message model =
    match message with
    | SetMessage s -> { model with x = s }
    | Loading -> { model with x = "Doing something long ..." }

let handleClick model dispatch _ =
    dispatch Loading
    async {
        let url = Uri "https://api.github.com"
        // FIXME: too expensive to do this on a per-update basis
        use httpClient = new HttpClient(BaseAddress = url)
        let! resp = httpClient.GetAsync "/users/srid" |> Async.AwaitTask
        let! s = resp.Content.ReadAsStringAsync() |> Async.AwaitTask
        dispatch (SetMessage s)
    } |> Async.Start
I feel like this would normally go in Startup.fs. I use a client-only Bolero web app, so this would look like:
builder.Services.AddSingleton<HttpClient>(new HttpClient (BaseAddress=apiBase))
But then the question becomes ... how do I access it from my program in F#? What is the idiomatic way?
Probably the best way would either be to add HttpClient as another field in your model or as another parameter to your update function.
let update (client: HttpClient) message model = // Your code
with the client created only once, for example:
let url = Uri "https://api.github.com"
let httpClient = new HttpClient(BaseAddress = url)
In general you shouldn't "do work" in your view or, by extension, in event handlers. Instead, you should use the Elmish Cmd module, something like this:
let update httpClient message model =
    match message with
    | SetMessage s ->
        { model with x = s }, Cmd.none
    | GetMessageAsync ->
        let cmd =
            let getHttp () =
                async {
                    let! resp = httpClient.GetAsync "/users/srid" |> Async.AwaitTask
                    return! resp.Content.ReadAsStringAsync() |> Async.AwaitTask
                }
            Cmd.OfAsync.perform getHttp () (fun s -> SetMessage s)
        { model with x = "Doing something long ..." }, cmd
let handleClick model dispatch _ =
    dispatch GetMessageAsync
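To connect this back to the original question about reaching the HttpClient registered in Program/Startup: one hedged possibility in a client-side Bolero app is to let the component inject the client and partially apply it to update. This is only a sketch; MyApp is a hypothetical component name, and Model, Message, init and view are the usual Elmish pieces assumed from your app.
open System.Net.Http
open Microsoft.AspNetCore.Components
open Elmish
open Bolero

type MyApp() =
    inherit ProgramComponent<Model, Message>()

    // The HttpClient registered with AddSingleton is supplied by Blazor's DI.
    [<Inject>]
    member val Http = Unchecked.defaultof<HttpClient> with get, set

    override this.Program =
        // Partially applying the injected client keeps update's Elmish signature intact.
        Program.mkProgram (fun _ -> init, Cmd.none) (update this.Http) view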
I have this code running with Fable Elmish and Fable Remoting to connect to a Suave server. I know the server works because of Postman, and there are variations of this code that do call the server:
let AuthUser model : Cmd<LogInMsg> =
    let callServer = async {
        let! result = server.RequestLogIn model.Credentials
        return result
    }
    let result = callServer |> Async.RunSynchronously
    match result with
    | LogInFailed x -> Cmd.ofMsg (LogInMsg.LogInRejected x)
    | UserLoggedIn x -> Cmd.ofMsg (LogInMsg.LogInSuccess x)
The callServer call on the let result line fails with Object(...) is not a function, but I don't understand why. Any help would be appreciated.
According to the Fable docs, Async.RunSynchronously is not supported, though I'm not sure whether that is what's causing your problem. In any case, you should structure your code so that you don't need to block on asynchronous computations. In the case of Elmish you can use Cmd.ofAsync to create a command out of an async; the messages returned by the async are dispatched when it completes.
let AuthUser model : Cmd<LogInMsg> =
    let ofSuccess result =
        match result with
        | LogInFailed x -> LogInMsg.LogInRejected x
        | UserLoggedIn x -> LogInMsg.LogInSuccess x
    let ofError exn = (* Message representing failed HTTP request *)
    Cmd.ofAsync server.RequestLogIn model.Credentials ofSuccess ofError
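In Elmish 3.x and later the same helper is exposed as Cmd.OfAsync.either; assuming the same ofSuccess and ofError handlers, the call becomes:
Cmd.OfAsync.either server.RequestLogIn model.Credentials ofSuccess ofError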
Hopefully this helps.
How do you do a simple await in F#?
In C# I have code like this:
await collection.InsertOneAsync(DO);
var r = await collection.ReplaceOneAsync(d => d.Id == DO.Id, DO);
So I created a let await = ... so that my F# code looks more like my C# code.
My current F# code is this:
let awaits (t: Threading.Tasks.Task) = t |> Async.AwaitTask |> Async.RunSynchronously
let await (t: Threading.Tasks.Task<'T>) = t |> Async.AwaitTask |> Async.RunSynchronously

let Busca (numero) =
    let c = collection.Find(fun d -> d.Numero = numero).ToList()
    c

let Insere (DO: DiarioOficial) =
    //collection.InsertOneAsync(DO) |> Async.AwaitTask |> Async.RunSynchronously
    collection.InsertOneAsync(DO) |> awaits

let Salva (DO: DiarioOficial) =
    //let r = collection.ReplaceOneAsync((fun d -> d.Id = DO.Id), DO) |> Async.AwaitTask |> Async.RunSynchronously
    let r = collection.ReplaceOneAsync((fun d -> d.Id = DO.Id), DO) |> await
    r
I want to have only one definition for await (awaits), but this is the best I could do, because in Insere the type is Task, while in Salva it is Task<'T>.
If I use only await, I get this compile error:
FS0001 The type 'Threading.Tasks.Task' is not compatible with the type 'Threading.Tasks.Task<'a>'
If I use only awaits, it compiles, but I lose the return value from the async Task.
I want to merge await and awaits into a single
let await = ...
How can I do this?
In F# we tend to use another syntax. It is described e.g. here: https://fsharpforfunandprofit.com/posts/concurrency-async-and-parallel/.
or here: https://learn.microsoft.com/en-us/dotnet/fsharp/tutorials/asynchronous-and-concurrent-programming/async
The idea when working with C# Tasks is to "convert" them to async with Async.AwaitTask.
You can probably do it another way, but this is the most straightforward.
There are two parts to writing async code in both F# and C#.
1. You need to mark the method or code block as asynchronous. In C#, this is done using the async keyword. The F# equivalent is to use the async { ... } block (which is an expression, but otherwise it is similar).
2. Inside an async method or async { ... } block, you can make non-blocking calls. In C#, this is done using await, and in F# it is done using let!. Note that this is not just a function call - the compiler handles it in a special way.
F# also uses Async<T> type rather than Task<T>, but those are easy to convert - e.g. using Async.AwaitTask. So, you probably want something like this:
let myAsyncFunction () = async {
    let! _ = collection.InsertOneAsync(DO) |> Async.AwaitTask
    let! r = collection.ReplaceOneAsync((fun d -> d.Id = DO.Id), DO) |> Async.AwaitTask
    // More code goes here
}
I used let! to show the idea, but if you have an asynchronous operation that returns unit, you can also use do!
do! collection.InsertOneAsync(DO) |> Async.AwaitTask
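Applied to the code from the question, a hedged sketch of Insere and Salva written as async workflows (reusing the collection and DiarioOficial from above) could look like this; no blocking await helper is needed:
let InsereAsync (DO: DiarioOficial) = async {
    // InsertOneAsync returns a plain Task, so use do!
    do! collection.InsertOneAsync(DO) |> Async.AwaitTask }

let SalvaAsync (DO: DiarioOficial) = async {
    // ReplaceOneAsync returns a Task carrying a result, so bind it with let!
    let! r = collection.ReplaceOneAsync((fun d -> d.Id = DO.Id), DO) |> Async.AwaitTask
    return r }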
A common example used to illustrate asynchronous workflows in F# is retrieving multiple webpages in parallel. One such example is given at: http://en.wikibooks.org/wiki/F_Sharp_Programming/Async_Workflows. The code is shown here in case the link changes in the future:
open System.Text.RegularExpressions
open System.Net

let download url =
    let webclient = new System.Net.WebClient()
    webclient.DownloadString(url : string)

let extractLinks html = Regex.Matches(html, @"http://\S+")

let downloadAndExtractLinks url =
    let links = (url |> download |> extractLinks)
    url, links.Count

let urls =
    [@"http://www.craigslist.com/";
     @"http://www.msn.com/";
     @"http://en.wikibooks.org/wiki/Main_Page";
     @"http://www.wordpress.com/";
     @"http://news.google.com/";]

let pmap f l =
    seq { for a in l -> async { return f a } }
    |> Async.Parallel
    |> Async.RunSynchronously

let testSynchronous() = List.map downloadAndExtractLinks urls
let testAsynchronous() = pmap downloadAndExtractLinks urls

let time msg f =
    let stopwatch = System.Diagnostics.Stopwatch.StartNew()
    let temp = f()
    stopwatch.Stop()
    printfn "(%f ms) %s: %A" stopwatch.Elapsed.TotalMilliseconds msg temp

let main() =
    printfn "Start..."
    time "Synchronous" testSynchronous
    time "Asynchronous" testAsynchronous
    printfn "Done."

main()
What I would like to know is how one should handle changes in global state, such as loss of a network connection. Is there an elegant way to do this?
One could check the state of the network prior to making the Async.Parallel call, but the state could change during execution. Assuming what one wanted to do was pause execution until the network was available again rather than fail, is there a functional way to do this?
First of all, there is one issue with the example - it uses Async.Parallel to run multiple operations in parallel, but the operations themselves are not implemented as asynchronous, so this does not avoid blocking an excessive number of threads in the thread pool.
Asynchronous. To make the code fully asynchronous, the download and downloadAndExtractLinks functions should be asynchronous too, so that you can use AsyncDownloadString of the WebClient:
let asyncDownload url = async {
    let webclient = new System.Net.WebClient()
    return! webclient.AsyncDownloadString(System.Uri(url : string)) }

let asyncDownloadAndExtractLinks url = async {
    let! html = asyncDownload url
    let links = extractLinks html
    return url, links.Count }

let pmap f l =
    seq { for a in l -> async { return! f a } }
    |> Async.Parallel
    |> Async.RunSynchronously
Retrying. Now, to answer the question - there is no built-in mechanism for handling errors such as network failure, so you will need to implement this logic yourself. The right approach depends on your situation. One common approach is to retry the operation a certain number of times and throw the exception only if it does not succeed after, e.g., 10 attempts. You can write this as a primitive that takes another asynchronous workflow:
let rec asyncRetry times op = async {
    try
        return! op
    with e ->
        if times <= 1 then return raise e
        else return! asyncRetry (times - 1) op }
Then you can change the main function to build a workflow that retries the download 10 times:
let testAsynchronous() =
    pmap (fun url -> asyncRetry 10 (asyncDownloadAndExtractLinks url)) urls
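If, as asked above, you would rather pause until the network is available again than fail, there is no built-in primitive for that either; one hedged sketch is a variation of asyncRetry that polls NetworkInterface.GetIsNetworkAvailable between attempts:
open System.Net.NetworkInformation

// Sketch: on failure, wait until the OS reports an available network,
// then retry the workflow indefinitely.
let rec asyncRetryWhenOnline op = async {
    try
        return! op
    with _ ->
        while not (NetworkInterface.GetIsNetworkAvailable()) do
            do! Async.Sleep 5000
        return! asyncRetryWhenOnline op }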
Shared state. Another problem is that Async.Parallel will only return once all the downloads have completed (if there is one faulty web site, you will have to wait). If you want to show the results as they come back, you will need something more sophisticated.
One nice way to do this is to use an F# agent - create an agent that stores the results obtained so far and can handle two messages - one that adds a new result and another that returns the current state. Then you can start multiple async tasks that send their result to the agent and, in a separate async workflow, you can use polling to check the current status (and e.g. update the user interface).
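A minimal sketch of such an agent (the message names here are just illustrative, and the result type is assumed to be the (url, link count) pair from above):
type AgentMsg<'T> =
    | AddResult of 'T
    | GetResults of AsyncReplyChannel<'T list>

// Agent that accumulates results and can report the current snapshot.
let resultsAgent : MailboxProcessor<AgentMsg<string * int>> =
    MailboxProcessor.Start(fun inbox ->
        let rec loop results = async {
            let! msg = inbox.Receive()
            match msg with
            | AddResult r -> return! loop (r :: results)
            | GetResults reply ->
                reply.Reply results
                return! loop results }
        loop [])

// Each download workflow posts its result as soon as it finishes:
//   resultsAgent.Post(AddResult (url, links.Count))
// and a separate polling workflow can ask for the state so far:
//   let! soFar = resultsAgent.PostAndAsyncReply GetResults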
I wrote an MSDN series about agents and also two articles for developerFusion that contain plenty of code samples with F# agents.
I'd like to write some code that runs a sequence of F# scripts (.fsx). The thing is, I could have literally hundreds of scripts, and if I do this:
let shellExecute program args =
    let startInfo = new ProcessStartInfo()
    do startInfo.FileName <- program
    do startInfo.Arguments <- args
    do startInfo.UseShellExecute <- true
    do startInfo.WindowStyle <- ProcessWindowStyle.Hidden
    //do printfn "%s" startInfo.Arguments
    let proc = Process.Start(startInfo)
    ()

scripts
|> Seq.iter (shellExecute "fsi")
it could put too much stress on my 2 GB system. Anyway, I'd like to run the scripts in batches of n, which also seems like a good exercise for learning Async (I guess that's the way to go).
I have started to write some code for that but unfortunately it doesn't work:
open System.Diagnostics

let p = shellExecute "fsi" @"C:\Users\Stringer\foo.fsx"
async {
    let! exit = Async.AwaitEvent p.Exited
    do printfn "process has exited"
}
|> Async.StartImmediate
foo.fsx is just a hello world script.
What would be the most idiomatic way of solving this problem?
I'd also like to figure out whether it's possible to retrieve a return code for each executed script and, if not, find another way. Thanks!
EDIT:
Thanks a lot for your insights and links! I've learned a lot.
I just want to add some code for running batches in parallel using Async.Parallel, as Tomas suggested. Please comment if there is a better implementation of my cut function.
module Seq =
    /// Returns a sequence of sequences of N elements from the source sequence.
    /// If the length of the source sequence is not a multiple
    /// of N, the last element of the returned sequence will have a length
    /// between 1 and N-1.
    let cut (count : int) (source : seq<'T>) =
        let rec aux s length = seq {
            if (length < count) then yield s
            else
                yield Seq.take count s
                if (length <> count) then
                    yield! aux (Seq.skip count s) (length - count)
        }
        aux source (Seq.length source)
let batchCount = 2

let filesPerBatch =
    let q = (scripts.Length / batchCount)
    q + if scripts.Length % batchCount = 0 then 0 else 1

let batchs =
    scripts
    |> Seq.cut filesPerBatch
    |> Seq.map Seq.toList
    |> Seq.map loop

Async.RunSynchronously (Async.Parallel batchs) |> ignore
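As an aside on the cut function above: newer FSharp.Core versions ship Seq.chunkBySize, which does essentially the same job (returning arrays rather than lazy sub-sequences), so a hedged alternative for the batching would be:
// Alternative to the hand-rolled cut, using the built-in chunking function.
let batches =
    scripts
    |> Seq.chunkBySize filesPerBatch
    |> Seq.map (Seq.toList >> loop)

Async.RunSynchronously (Async.Parallel batches) |> ignore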
EDIT2:
So I had some trouble getting Tomas's guard code working. I guess the f function has to be called in the AddHandler method, otherwise we lose the event forever... Here's the code:
module Event =
    let guard f (e:IEvent<'Del, 'Args>) =
        let e = Event.map id e
        { new IEvent<'Args> with
            member this.AddHandler(d) = e.AddHandler(d); f() //must call f here!
            member this.RemoveHandler(d) = e.RemoveHandler(d); f()
            member this.Subscribe(observer) =
                let rm = e.Subscribe(observer) in f(); rm }
The interesting thing (as mentioned by Tomas) is that it looks like the Exited event is stored somewhere when the process terminates, even though the process was not started with EnableRaisingEvents set to true.
When this property is finally set to true, the event is fired.
Since I'm not sure that this is the official specification (and I'm also a bit paranoid), I found another solution that consists of starting the process in the guard function, so we ensure the code works in either situation:
let createStartInfo program args =
    new ProcessStartInfo
        (FileName = program, Arguments = args, UseShellExecute = false,
         WindowStyle = ProcessWindowStyle.Normal,
         RedirectStandardOutput = true)

let createProcess info =
    let p = new Process()
    do p.StartInfo <- info
    do p.EnableRaisingEvents <- true
    p

let rec loop scripts = async {
    match scripts with
    | [] -> printfn "FINISHED"
    | script::scripts ->
        let args = sprintf "\"%s\"" script
        let p = createStartInfo "notepad" args |> createProcess
        let! exit =
            p.Exited
            |> Event.guard (fun () -> p.Start() |> ignore)
            |> Async.AwaitEvent
        let output = p.StandardOutput.ReadToEnd()
        do printfn "\nPROCESSED: %s, CODE: %d, OUTPUT: %A" script p.ExitCode output
        return! loop scripts
}
Notice I've replaced fsi.exe with notepad.exe so I can replay different scenarios step by step in the debugger and explicitly control the exit of the process myself.
I did some experiments and here is one way to deal with the problem discussed in the comments below my post and in the answer from Joel (which I think doesn't work currently, but could be fixed).
I think the specification of Process is that it can trigger the Exited event after we set the EnableRaisingEvents property to true (and will trigger the event even if the process has already completed before we set the property). To handle this case correctly, we need to enable raising of events after we attach a handler to the Exited event.
This is a problem, because if we use AwaitEvent, it blocks the workflow until the event fires. We cannot do anything after calling AwaitEvent from the workflow (and if we set the property before calling AwaitEvent, then we get a race condition). Vladimir's approach is correct, but I think there is a simpler way to deal with this.
I'll create a function Event.guard taking an event and returning an event, which allows us to specify some function that will be executed after a handler is attached to the event. This means that if we do some operation (which in turn triggers the event) inside this function, the event will be handled.
To use it for the problem discussed here, we need to change my original solution as follows. Firstly, the shellExecute function must not set the EnableRaisingEvents property (otherwise, we could lose the event!). Secondly, the waiting code should look like this:
let rec loop scripts = async {
    match scripts with
    | [] -> printf "FINISHED"
    | script::scripts ->
        let p = shellExecute fsi script
        let! exit =
            p.Exited
            |> Event.guard (fun () -> p.EnableRaisingEvents <- true)
            |> Async.AwaitEvent
        let output = p.StandardOutput.ReadToEnd()
        return! loop scripts }
Note the use of the Event.guard function. Roughly, it says that after the workflow attaches a handler to the p.Exited event, the provided lambda function will run (and will enable raising of events). However, we have already attached the handler to the event, so if this causes the event to fire immediately, we're fine!
The implementation (for both Event and Observable) looks like this:
module Event =
    let guard f (e:IEvent<'Del, 'Args>) =
        let e = Event.map id e
        { new IEvent<'Args> with
            member x.AddHandler(d) = e.AddHandler(d)
            member x.RemoveHandler(d) = e.RemoveHandler(d); f()
            member x.Subscribe(observer) =
                let rm = e.Subscribe(observer) in f(); rm }

module Observable =
    let guard f (e:IObservable<'Args>) =
        { new IObservable<'Args> with
            member x.Subscribe(observer) =
                let rm = e.Subscribe(observer) in f(); rm }
The nice thing is that this code is very straightforward.
Your approach looks great to me, I really like the idea of embedding process execution into asynchronous workflows using AwaitEvent!
The likely reason why it didn't work is that you need to set the EnableRaisingEvents property of the Process to true if you want it to ever trigger the Exited event (don't ask me why you have to do that, it sounds pretty silly to me!). Anyway, I made a couple of other changes to your code when testing it, so here is a version that worked for me:
open System
open System.Diagnostics

let shellExecute program args =
    // Configure the process to redirect output (so that we can read it)
    let startInfo =
        new ProcessStartInfo
            (FileName = program, Arguments = args, UseShellExecute = false,
             WindowStyle = ProcessWindowStyle.Hidden,
             RedirectStandardOutput = true)
    // Start the process
    // Note: We must enable raising events explicitly here!
    Process.Start(startInfo, EnableRaisingEvents = true)
Most importantly, the code now sets EnableRaisingEvents to true. I also changed the code to use a syntax where you specify properties of an object when constructing it (to make the code a bit more succinct) and I changed a few properties, so that I can read the output (RedirectStandardOutput).
Now, we can use the AwaitEvent method to wait until a process completes. I'll assume that fsi contains the path to fsi.exe and that scripts is a list of FSX scripts. If you want to run them sequentially, you could use a loop implemented using recursion:
let rec loop scripts = async {
    match scripts with
    | [] -> printf "FINISHED"
    | script::scripts ->
        // Start the process in the background
        let p = shellExecute fsi script
        // Wait until the process completes
        let! exit = Async.AwaitEvent p.Exited
        // Read the output produced by the process; the exit code
        // is available in the `ExitCode` property of `Process`
        let output = p.StandardOutput.ReadToEnd()
        printfn "\nPROCESSED: %s, CODE: %d\n%A" script p.ExitCode output
        // Process the rest of the scripts
        return! loop scripts }

// This starts the workflow on a background thread, so that we can
// do other things in the meantime. You need to add `ReadLine`, so that
// the console application doesn't quit immediately
loop scripts |> Async.Start
Console.ReadLine() |> ignore
Of course, you could also run the processes in parallel (or, for example, run 2 groups of them in parallel, etc.). To do that you would use Async.Parallel (in the usual way).
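For instance, here is a hedged sketch of the "2 groups in parallel" idea, reusing the loop function above (and assuming List.splitAt is available in your FSharp.Core):
// Split the scripts into two halves and run each half's sequential loop
// in parallel with the other half.
let firstHalf, secondHalf = List.splitAt (List.length scripts / 2) scripts
[ loop firstHalf; loop secondHalf ]
|> Async.Parallel
|> Async.Ignore
|> Async.RunSynchronously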
Anyway, this is a really nice example of using asynchronous workflows in an area where I haven't seen them used so far. Very interesting :-)
In response to Tomas's answer, would this be a workable solution to the race condition involved in starting the process, and then subscribing to its Exited event?
type Process with
    static member AsyncStart psi =
        let proc = new Process(StartInfo = psi, EnableRaisingEvents = true)
        let asyncExit = Async.AwaitEvent proc.Exited
        async {
            proc.Start() |> ignore
            let! args = asyncExit
            return proc
        }
Unless I'm mistaken, this would subscribe to the event prior to starting the process, and package it all up as an Async<Process> result.
This would allow you to rewrite the rest of the code like this:
let shellExecute program args =
    // Configure the process to redirect output (so that we can read it)
    let startInfo =
        new ProcessStartInfo(FileName = program, Arguments = args,
                             UseShellExecute = false,
                             WindowStyle = ProcessWindowStyle.Hidden,
                             RedirectStandardOutput = true)
    // Start the process
    Process.AsyncStart(startInfo)
let fsi = "PATH TO FSI.EXE"
let rec loop scripts = async {
    match scripts with
    | [] -> printf "FINISHED"
    | script::scripts ->
        // Start the process in the background
        use! p = shellExecute fsi script
        // Read the output produced by the process; the exit code
        // is available in the `ExitCode` property of `Process`
        let output = p.StandardOutput.ReadToEnd()
        printfn "\nPROCESSED: %s, CODE: %d\n%A" script p.ExitCode output
        // Process the rest of the scripts
        return! loop scripts
}
If that does the job, it's certainly a lot less code to worry about than Vladimir's Async.GetSubject.
What about a MailboxProcessor?
It is possible to simplify the version of Subject from the blog post: instead of returning an imitation of an event, getSubject can return a workflow.
The resulting workflow is itself a state machine with two states:
1. The event hasn't been triggered yet: all pending listeners are registered.
2. The value is already set: the listener is served immediately.
In code it looks like this:
type SubjectState<'T> = Listen of ('T -> unit) list | Value of 'T
The getSubject implementation is straightforward:
let getSubject (e : IEvent<_, _>) =
    let state = ref (Listen [])
    let switchState v =
        let listeners =
            lock state (fun () ->
                match !state with
                | Listen ls ->
                    state := Value v
                    ls
                | _ -> failwith "Value is set twice"
            )
        for l in listeners do l v
    Async.StartWithContinuations(
        Async.AwaitEvent e,
        switchState,
        ignore,
        ignore
    )
    Async.FromContinuations(fun (cont, _, _) ->
        let ok, v = lock state (fun () ->
            match !state with
            | Listen ls ->
                state := Listen (cont::ls)
                false, Unchecked.defaultof<_>
            | Value v ->
                true, v
        )
        if ok then cont v
    )
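A hedged usage sketch, reusing the createStartInfo and createProcess helpers from the EDIT2 above: build the subject before starting the process, so the Exited notification cannot be lost, and await it afterwards.
// Sketch: getSubject registers its handler immediately, so we can safely
// start the process afterwards and still observe Exited, even if the
// process finishes before we get around to awaiting the workflow.
let runScript script = async {
    let p = createStartInfo "fsi" (sprintf "\"%s\"" script) |> createProcess
    let exited = getSubject p.Exited   // handler attached before Start
    p.Start() |> ignore
    let! _ = exited                    // completes even if Exited has already fired
    return script, p.ExitCode }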