My understanding is that a workflow builder first "builds" the expression and then executes it. Given that it builds the expression first, it should be able to count the number of let! statements before actually executing, right? And then it should be able to inject some logging that monitors progress? So is it possible to rework the async builder to automatically report progress and kill the printfn redundancy below?
async {
    let! a = doSomething1 ()
    printfn "%d/%d" 1 4
    let! b = doSomething2 a
    printfn "%d/%d" 2 4
    let! c = doSomething3 b
    printfn "%d/%d" 3 4
    let! d = doSomething4 c
    printfn "%d/%d" 4 4
    return d
}
For loops, I guess just assume that the whole loop is a single step. Only top-level expressions count as steps here.
(Note if there's a way to do this without making a whole new workflow builder I guess that's fine too).
Note I've already gone through the path of a) making a "Task" iterator that just iterates tasks (but then you lose e.g. use handling, so it ended up being inadequate), and b) making a task counter, but that always had to be seeded and iterated manually so I'm hoping for something better.
As you tagged the question with monads, I'll start with a theoretical nitpick. What you want to do would not actually be a monad. The problem is that monads are required to satisfy certain laws (see the Haskell page on monads). For F#, this means that the following two snippets should mean the same thing:
let computation1 =
    async { let! x = m
            return x }

let computation2 = m
This would not be the case for the extension you suggest, because computation1 has one more let! than computation2. Now, I do not think this is actually a problem - the logging could still be useful (even if it may give different results than you'd expect in some cases).
Adding this feature to F# async is not quite as easy - the problem is that you'd need to define your own type that replaces (or wraps) the standard Async<'T>, because the type needs to store the number of steps. If you can store the number of steps somewhere else (e.g. in some mutable counter), then you just need to redefine the computation builder for async.
Here is a minimal example that does something like this - it just prints "step" for each let!:
// A custom computation builder that redirects all operations to
// the standard 'async' builder, but prints "step" in the Bind method
type LogAsyncBuilder() =
    member x.Bind(c1, f) = async {
        let! arg = c1
        printfn "step!"
        return! f arg }
    member x.Return(v) = async.Return(v)
    member x.ReturnFrom(c) = async.ReturnFrom(c)
// An instance of our custom computation builder
let logAsync = LogAsyncBuilder()
// Example that prints 'step' 4 times (for every Bind - let!)
let doSomething n = logAsync {
    return n + 10 }

logAsync {
    let! a = doSomething 0
    let! b = doSomething a
    let! c = doSomething b
    let! d = doSomething c
    return d }
|> Async.RunSynchronously
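If you do want the "n/total" output from the question, a minimal sketch (assuming you are willing to supply the total yourself, since the builder cannot count the let! bindings up front) is to parameterize the builder with the total and keep a counter inside it:

// Sketch only: the total (4 below) is supplied manually, and the counter is
// shared by every workflow built with this instance, so create a fresh
// builder per run if you need the numbers to start from 1 again.
type ProgressAsyncBuilder(total: int) =
    let counter = ref 0
    member x.Bind(c1, f) = async {
        let! arg = c1
        counter.Value <- counter.Value + 1
        printfn "%d/%d" counter.Value total
        return! f arg }
    member x.Return(v) = async.Return(v)
    member x.ReturnFrom(c) = async.ReturnFrom(c)

let progressAsync = ProgressAsyncBuilder(4)

The usage then mirrors the logAsync example above (progressAsync { let! ... }); the manually supplied total is exactly the piece of information the builder cannot infer for you, which is the limitation discussed earlier in this answer.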
You could use a tuple of type 'a * int * int to track the current result, the number of steps executed so far and the total number of steps. Then you could write a function that takes the current state and the next async function to execute, e.g.
// create the initial state
let startCount steps = ((), 0, steps)

let withCount af (a, c, steps) = async {
    let nc = c + 1
    let! res = af a
    do printfn "%d %d" nc steps
    return (res, nc, steps)
}
withCount takes a function which returns the next async operation, and the current state. It creates the next workflow, increments the number of executed steps and prints the status before returning the new state.
You can then use it like:
async {
    let init = startCount 4
    let! t = withCount doSomething init
    let! t2 = withCount doSomething2 t
    let! (r, _, _) = withCount doSomething3 t2
    return r
}
I'm trying to wrap my head around mon-, err, workflows in F# and while I think that I have a pretty solid understanding of the basic "Maybe" workflow, trying to implement a state workflow to generate random numbers has really got me stumped.
My non-completed attempt can be seen here:
let randomInt state =
    let random = System.Random(state)
    // Generate random number and a new state as well
    random.Next(0, 1000), random.Next()

type RandomWF (initState) =
    member this.Bind(rnd, rest) =
        let value, newState = rnd initState
        // How to feed "newState" into "rest"??
        value |> rest
    member this.Return a = a // Should I maybe feed "initState" into the computation here?

RandomWF(0) {
    let! a = randomInt
    let! b = randomInt
    let! c = randomInt
    return [a; b; c]
} |> printfn "%A"
Edit: Actually got it to work! Not exactly sure how it works though, so if anyone wants to lay it out in a good answer, it's still up for grabs. Here's my working code:
type RandomWF (initState) =
    member this.Bind(rnd, rest) =
        fun state ->
            let value, nextState = rnd state
            rest value nextState
    member this.Return a = fun _ -> a
    member this.Run x = x initState
There are two things that make it harder to see what your workflow is doing:
You're using a function type for the type of your monad,
Your workflow not only builds up the computation, it also runs it.
I think it's clearer to follow once you see how it would look without those two impediments. Here's the workflow defined using a DU wrapper type:
type Random<'a> =
    Comp of (int -> 'a * int)

let run init (Comp f) = f init

type Random<'a> with
    member this.Run(state) = fst <| run state this

type RandomBuilder() =
    member this.Bind(Comp m, f: 'a -> Random<_>) =
        Comp <| fun state ->
            let value, nextState = m state
            let comp = f value
            run nextState comp
    member this.Return(a) = Comp (fun s -> a, s)
let random = RandomBuilder()
And here is how you use it:
let randomInt =
    Comp <| fun state ->
        let rnd = System.Random(state)
        rnd.Next(0, 1000), rnd.Next()

let rand =
    random {
        let! a = randomInt
        let! b = randomInt
        let! c = randomInt
        return [a; b; c]
    }

rand.Run(0)
|> printfn "%A"
In this version you separately build up the computation (and store it inside the Random type), and then you run it passing in the initial state. Look at how types on the builder methods are inferred and compare them to what MSDN documentation describes.
Edit: Constructing a builder object once and using the binding as an alias of sorts is mostly convention, but it's well justified in that it makes sense for the builders to be stateless. I can see why having parameterized builders seems like a useful feature, but I can't honestly imagine a convincing use case for it.
The key selling point of monads is the separation of definition and execution of a computation.
In your case - what you want to be able to do is to take a representation of your computation and be able to run it with some state - perhaps 0, perhaps 42. You don't need to know the initial state to define a computation that will use it. By passing in the state to the builder, you end up blurring the line between definition and execution, and this simply makes the workflow less useful.
Compare that with the async workflow - when you write an async block, you don't make the code run asynchronously. You only create an Async<'a> object representing a computation that will produce a value of type 'a when you run it - but how you run it is up to you. The builder doesn't need to know.
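For instance, with the rand value defined above, the same computation can be run with different seeds without being redefined:

// One definition, two executions with different initial states.
rand.Run(0) |> printfn "%A"
rand.Run(42) |> printfn "%A"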
Is there any way to do it like in C/C#?
For example (C# style)
for (int i = 0; i < 100; i++)
{
    if (i == 66)
        break;
}
The short answer is no. You would generally use some higher-order function to express the same functionality. There are a number of functions that let you do this, corresponding to different patterns (so if you describe what exactly you need, someone might give you a better answer).
For example, the tryFind function returns the first value from a sequence for which a given predicate returns true, which lets you write something like this:
seq { 0 .. 100 } |> Seq.tryFind (fun i ->
    printfn "%d" i
    i = 66)
In practice, this is the best way to go if you are expressing some high-level logic and there is a corresponding function. If you really need to express something like break, you can use a recursive function:
let rec loop n =
    if n < 66 then
        printfn "%d" n
        loop (n + 1)
loop 0
A more exotic option (that is not as efficient, but may be nice for DSLs) is to define a computation expression that lets you write break and continue; as I said, though, this is not as efficient.
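For illustration, here is a minimal sketch of the break-only half of that idea (my own toy builder, not the computation expression the answer originally linked to): the loop body evaluates to a flag saying whether to keep iterating, and return () plays the role of break.

type BreakableBuilder() =
    member _.Zero() = true                       // no break requested: keep looping
    member _.Return(_: unit) = false             // `return ()` acts as break
    member _.Delay(f: unit -> bool) = f
    member _.Run(f: unit -> bool) = f () |> ignore
    member _.Combine(keepGoing, rest: unit -> bool) = keepGoing && rest ()
    member _.For(xs: seq<'a>, body: 'a -> bool) =
        use e = xs.GetEnumerator()
        let mutable go = true
        while go && e.MoveNext() do
            go <- body e.Current
        go

let breakable = BreakableBuilder()

// Prints 0..66 and then stops, like `break` in C#.
breakable {
    for i in 0 .. 100 do
        printfn "%d" i
        if i = 66 then return ()
}

Supporting continue as well needs a second flag (or a small union) threaded through the builder, which is part of why this approach costs more than a plain recursive loop.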
This is really ugly, but in my case it worked.
let mutable Break = false
while not Break do
    // doStuff
    if breakCondition then
        Break <- true
This is useful for do-while loops, because it guarantees that the loop is executed at least once.
I hope there's a more elegant solution. I don't like the recursive one, because I'm afraid of stack overflows. :-(
You have to change it to a while loop.
let (i, ans) = (ref 0, ref -1)
while (!i < 100 && !ans < 0) do
    if !i = 66 then
        ans := !i
    i := !i + 1
!ans
(This breaks when i gets to 66--but yes the syntax is quite different, another variable is introduced, etc.)
seq {
    for i = 0 to 99 do
        if i = 66 then yield ()
}
|> Seq.tryItem 0
|> ignore
Try this:
exception BreakException

try
    for i = 0 to 99 do
        if i = 66 then
            raise BreakException
with BreakException -> ()
I know that some folks don't like to use exceptions. But it has merits.
You don't have to think about a complicated recursive function. Of course you can do that, but sometimes it is unnecessarily bothersome and using an exception is simpler.
This method allows you to break halfway through the loop body. (The break "flag" method is simple too, but it only allows breaking at the end of the loop body.)
You can easily escape from nested loops (see the sketch below).
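For example, with the BreakException defined above, a single raise escapes both loops at once:

try
    for i = 0 to 9 do
        for j = 0 to 9 do
            if i * j = 42 then
                printfn "found %d * %d" i j
                raise BreakException
with BreakException -> ()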
For these kinds of problems you could use a recursive function.
let rec IfEqualsNumber start finish num =
    if start = finish then false
    elif start = num then true
    else
        let start2 = start + 1
        IfEqualsNumber start2 finish num
Recently I tried to solve a similar situation:
A list of, say, 10 pieces of data. Each of them must be queried against a RESTful server, getting a result for each.
let lst = [4;6;1;8]
The problem:
If there is a failed API call (e.g. network issue), there is no point making further calls as we need all the 10 results available. The entire process should stop ASAP when an API call fails.
The naive approach: use List.map()
lst |> List.map (fun x ->
    try
        use sqlComd = ...
        sqlComd.Parameters.Add("#Id", SqlDbType.BigInt).Value <- x
        sqlComd.ExecuteScalar() |> Some
    with
    | :? System.Data.SqlClient.SqlException as ex -> None
)
But as said, it's not optimal. When an API call fails, the remaining items keep being processed, doing work that is thrown away at the end anyway.
The hacky approach: use List.tryFindIndex()
Unlike map(), we must store the results somewhere in the lambda function. A reasonable choice is to use a mutable list. So when tryFindIndex() returns None, we know that everything was ok and we can start making use of the mutable list.
let myList = System.Collections.Generic.List<obj>()

let res = lst |> List.tryFindIndex (fun x ->
    try
        use sqlComd = ...
        sqlComd.Parameters.Add("#Id", SqlDbType.BigInt).Value <- x
        myList.Add(sqlComd.ExecuteScalar())
        false
    with
    | :? System.Data.SqlClient.SqlException as ex -> true
)
match res with
| Some _ -> printfn "Something went wrong"
| None -> printfn "Here are the 10 results..."
The idiomatic approach: use recursion
Not entirely idiomatic, though, as it uses an exception to stop the operation.
exception MyException of string

let rec makeCall lstLocal =
    match lstLocal with
    | [] -> []
    | head::tail ->
        try
            use sqlComd = ...
            sqlComd.Parameters.Add("#Id", SqlDbType.BigInt).Value <- head
            let temp = sqlComd.ExecuteScalar()
            temp :: makeCall tail
        with
        | :? System.Data.SqlClient.SqlException as ex -> raise (MyException ex.Message)

try
    let res = makeCall lst
    printfn "Here are the 10 results..."
with
| MyException _ -> printfn "Something went wrong"
The old-fashioned imperative approach: while...do
This still involves a mutable list; a sketch follows below.
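A rough sketch of what that imperative version might look like (the fetchOne parameter below is a placeholder standing in for the elided use sqlComd = ... block, so this is an illustration rather than the exact code):

// Stops issuing calls as soon as one throws; returns None on failure,
// Some results when every id was processed.
let fetchAllOrStop fetchOne (ids: int list) =
    let results = System.Collections.Generic.List<_>()
    let mutable failed = false
    let mutable remaining = ids
    while not failed && not (List.isEmpty remaining) do
        match remaining with
        | x :: rest ->
            try
                results.Add(fetchOne x)   // placeholder for the real query
                remaining <- rest
            with
            | :? System.Data.SqlClient.SqlException ->
                failed <- true
        | [] -> ()
    if failed then None else Some (List.ofSeq results)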
Does anyone know of 'prior art' regarding the following subject:
I have data that takes a decent amount of time to load: historical levels for various stocks.
I would like to preload it somehow, to avoid the latency when using my app.
However, preloading it all in one chunk at startup makes my app unresponsive at first, which is not user friendly.
So I would like to not load my data... unless the user is not requesting any and is just playing with what he already has, in which case I would like to fetch it little by little. So it is neither 'lazy' nor 'eager', more 'lazy when you need' and 'eager when you can', hence the acronym LWYNEWYC.
I have made the following which seems to work, but I just wonder if there is a recognized and blessed approach for such thing ?
let r = LoggingFakeRepo () :> IQuoteRepository
r.getHisto "1" |> ignore     // prints Getting histo for 1 when called

let rc = RepoCached (r) :> IQuoteRepository
rc.getHisto "1" |> ignore    // prints Getting histo for 1 the first time only

let rcc = RepoCachedEager (r) :> IQuoteRepository
rcc.getHisto "100" |> ignore // prints Getting histo 1..100 by itself BUT
                             // prints Getting histo 100 immediately when called
And the classes
type IQuoteRepository =
    abstract getUnderlyings : string seq
    abstract getHisto : string -> string

type LoggingFakeRepo () =
    interface IQuoteRepository with
        member x.getUnderlyings =
            printfn "getting underlyings"
            [1 .. 100] |> List.map string :> _
        member x.getHisto udl =
            printfn "getting histo for %A" udl
            "I am a historical dataset in a disguised party"

type RepoCached (rep : IQuoteRepository) =
    let memoize f =
        let cache = new System.Collections.Generic.Dictionary<_, _>()
        fun x ->
            if cache.ContainsKey(x) then cache.[x]
            else
                let res = f x
                cache.[x] <- res
                res

    let udls = lazy (rep.getUnderlyings)
    let gethistom = memoize rep.getHisto

    interface IQuoteRepository with
        member x.getUnderlyings = udls.Force()
        member x.getHisto udl = gethistom udl
type Message = string * AsyncReplyChannel<string>
type RepoCachedEager (rep : IQuoteRepository) =
    let udls = rep.getUnderlyings
    let agent = MailboxProcessor<Message>.Start(fun inbox ->
        let repocached = RepoCached (rep) :> IQuoteRepository
        let rec loop l =
            async {
                try
                    let timeout = if l |> List.isEmpty then -1 else 50
                    let! (udl, replyChannel) = inbox.Receive(timeout)
                    replyChannel.Reply(repocached.getHisto udl)
                    do! loop l
                with
                | :? System.TimeoutException ->
                    let udl::xs = l
                    repocached.getHisto udl |> ignore
                    do! loop xs
            }
        loop (udls |> Seq.toList))

    interface IQuoteRepository with
        member x.getUnderlyings = udls
        member x.getHisto udl = agent.PostAndReply(fun reply -> udl, reply)
I like your solution. I think using an agent to implement background loading with a timeout is a great way to go - agents can nicely encapsulate mutable state, so it is clearly safe, and you can encode the behaviour you want quite easily.
I think asynchronous sequences might be another useful abstraction (if I'm correct, they are available in FSharpX these days). An asynchronous sequence represents a computation that asynchronously produces more values, so they might be a good way to separate the data loader from the rest of the code.
I think you'll still need an agent to synchronize at some point, but you can nicely separate different concerns using async sequences.
The code to load the data might look something like this:
let loadStockPrices repo = asyncSeq {
    // TODO: Not sure how you detect that the repository has no more data...
    while true do
        // Get next item from the repository, preferably asynchronously!
        let! data = repo.AsyncGetNextHistoricalValue()
        // Return the value to the caller...
        yield data }
This code represents the data loader, and it separates it from the code that uses it. From the agent that consumes the data source, you can use AsyncSeq.iterAsync to consume the values and do something with them.
With iterAsync, the function that you specify as the consumer is asynchronous. It may block (e.g. using Sleep), and when it blocks, the source - that is, your loader - is also blocked. This is quite a nice implicit way to control the loader from the code that consumes the data.
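A rough sketch of such a consumer, assuming the AsyncSeq combinators (FSharpx at the time of writing, the FSharp.Control.AsyncSeq package today) and treating the agent as a plain MailboxProcessor:

open FSharp.Control

// Pulls values from the loader one at a time; because iterAsync awaits the body,
// the Sleep here also slows down how quickly the loader is asked for the next value.
let consume (agent: MailboxProcessor<'T>) (prices: AsyncSeq<'T>) =
    prices
    |> AsyncSeq.iterAsync (fun value -> async {
        agent.Post value
        do! Async.Sleep 50 })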
A feature that is not in the library yet (but would be useful) is a partially eager evaluator that takes an AsyncSeq<'T> and returns a new AsyncSeq<'T>, but obtains a certain number of elements from the source as soon as possible and caches them (so that the consumer does not have to wait when it asks for a value, as long as the source can produce values fast enough).
I am new to functional programming in general and started learning F# recently. I wanted to use an async workflow returning Async<'U option> to pick an item in a sequence. I found the nice Seq.pick function, but I am not sure how I could use it with an async workflow.
If that is not possible, is there an alternative to using an imperative-style program to pick the item from the list? The following is a modified variation of my program. Any feedback is highly appreciated.
let run = async {
    while not stopped do
        use! resource = acquireResourceLockAsync
        let! items = fetchItemsAsync 5
        let! item = Seq.pick returnIfLocked items
        let! status = performTaskAsync item
        do! updateStatusAsync status
        do! Async.Sleep 1000
}
Thanks in anticipation.
EDIT: Updated my question based on the answer by jpalmer. I noticed both Seq.filter and Seq.pick earlier and decided that Seq.pick will meet my need better, as I need the first item that I am able to lock. However, I forgot to change the return value of my function - instead of returning true, it should return Some(item). Now with that update, is there an elegant way to approach this without 1) blocking a thread to convert Async<'U option> to 'U and 2) resorting to an imperative style looping?
I am unclear exactly what you are trying to do. If you want to convert from Async<'T> to 'T without blocking, then you want to use let! in an async workflow. So the seq-like logic probably needs to be written as its own loop, as suggested below. If that doesn't help, then perhaps share more code, especially the intended types of items/item/returnIfLocked, as I'm unclear about what's async in your example.
let asyncPick f (s:seq<_>) =
    async {
        use e = s.GetEnumerator()
        let r = ref None
        while Option.isNone(!r) && e.MoveNext() do
            let! x = f e.Current
            r := x
        match !r with
        | Some z -> return z
        | None -> return failwith "no matching item found"
    }

let chooser ax =
    async {
        let! x = ax
        if x % 3 = 0 then
            return Some x
        else
            return None
    }

let s = seq { for i in 1..10 do yield async { return i } }

let main() =
    async {
        let! firstChosen = s |> asyncPick chooser
        return firstChosen
    }
    |> Async.RunSynchronously
    |> printfn "%d"
main()
It is important to look at the signature of the function you are using:
Seq.pick expects a function which returns option<'t>; you want to use Seq.filter, which takes a function that returns a bool.
You will still have another problem, though, in that you have Async<bool> - you will need to convert that to a normal bool, but you could do this inside your Seq.filter function.
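For completeness, a blocking sketch of that suggestion (isLockedAsync stands in for the question's returnIfLocked; Async.RunSynchronously here blocks a thread per item, which is exactly what the question's EDIT hopes to avoid):

let firstLocked (isLockedAsync: 'a -> Async<bool>) (items: seq<'a>) =
    items
    |> Seq.filter (fun item -> isLockedAsync item |> Async.RunSynchronously)
    |> Seq.tryHead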
I'd like to write some code that runs a sequence of F# scripts (.fsx). The thing is that I could have literally hundreds of scripts and if I do that:
let shellExecute program args =
    let startInfo = new ProcessStartInfo()
    do startInfo.FileName <- program
    do startInfo.Arguments <- args
    do startInfo.UseShellExecute <- true
    do startInfo.WindowStyle <- ProcessWindowStyle.Hidden
    //do printfn "%s" startInfo.Arguments
    let proc = Process.Start(startInfo)
    ()

scripts
|> Seq.iter (shellExecute "fsi")
it could put too much stress on my 2GB system. Anyway, I'd like to run the scripts in batches of n, which also seems like a good exercise for learning Async (I guess it's the way to go).
I have started to write some code for that but unfortunately it doesn't work:
open System.Diagnostics

let p = shellExecute "fsi" @"C:\Users\Stringer\foo.fsx"
async {
    let! exit = Async.AwaitEvent p.Exited
    do printfn "process has exited"
}
|> Async.StartImmediate
foo.fsx is just a hello world script.
What would be the most idiomatic way of solving this problem?
I'd also like to figure out whether it's doable to retrieve a return code for each executed script, and if not, find another way. Thanks!
EDIT:
Thanks a lot for your insights and links! I've learned a lot.
I just want to add some code for running batches in parallel using Async.Parallel, as Tomas suggested. Please comment if there is a better implementation for my cut function.
module Seq =
    /// Returns a sequence of sequences of N elements from the source sequence.
    /// If the length of the source sequence is not a multiple of N, the last
    /// element of the returned sequence will have a length between 1 and N-1.
    let cut (count : int) (source : seq<'T>) =
        let rec aux s length = seq {
            if length < count then yield s
            else
                yield Seq.take count s
                if length <> count then
                    yield! aux (Seq.skip count s) (length - count)
        }
        aux source (Seq.length source)
let batchCount = 2

let filesPerBatch =
    let q = scripts.Length / batchCount
    q + if scripts.Length % batchCount = 0 then 0 else 1

let batchs =
    scripts
    |> Seq.cut filesPerBatch
    |> Seq.map Seq.toList
    |> Seq.map loop

Async.RunSynchronously (Async.Parallel batchs) |> ignore
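As an aside, current versions of FSharp.Core ship Seq.chunkBySize, which does essentially what cut does (it returns arrays rather than sub-sequences), so today the helper could be replaced with:

let batchs =
    scripts
    |> Seq.chunkBySize filesPerBatch
    |> Seq.map (Array.toList >> loop)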
EDIT2:
So I had some trouble getting Tomas's guard code working. I guess the f function has to be called in the AddHandler method, otherwise we lose the event forever... Here's the code:
module Event =
    let guard f (e:IEvent<'Del, 'Args>) =
        let e = Event.map id e
        { new IEvent<'Args> with
            member this.AddHandler(d) = e.AddHandler(d); f() // must call f here!
            member this.RemoveHandler(d) = e.RemoveHandler(d); f()
            member this.Subscribe(observer) =
                let rm = e.Subscribe(observer) in f(); rm }
The interesting thing (as mentioned by Tomas) is that it looks like the Exited event is stored somewhere when the process terminates, even though the process has not started with EnableRaisingEvents set to true.
When this property is finally set to true, the event is fired.
Since I'm not sure that this is the official specification (and also a bit paranoid), I found another solution that consists of starting the process in the guard function, so we ensure that the code will work in either situation:
let createStartInfo program args =
    new ProcessStartInfo
        (FileName = program, Arguments = args, UseShellExecute = false,
         WindowStyle = ProcessWindowStyle.Normal,
         RedirectStandardOutput = true)

let createProcess info =
    let p = new Process()
    do p.StartInfo <- info
    do p.EnableRaisingEvents <- true
    p

let rec loop scripts = async {
    match scripts with
    | [] -> printfn "FINISHED"
    | script::scripts ->
        let args = sprintf "\"%s\"" script
        let p = createStartInfo "notepad" args |> createProcess
        let! exit =
            p.Exited
            |> Event.guard (fun () -> p.Start() |> ignore)
            |> Async.AwaitEvent
        let output = p.StandardOutput.ReadToEnd()
        do printfn "\nPROCESSED: %s, CODE: %d, OUTPUT: %A" script p.ExitCode output
        return! loop scripts
}
Notice I've replaced fsi.exe with notepad.exe so I can replay different scenarios step by step in the debugger and explicitly control the exit of the process myself.
I did some experiments and here is one way to deal with the problem discussed in the comments below my post and in the answer from Joel (which I think doesn't work currently, but could be fixed).
I think the specification of Process is that it can trigger the Exited event after we set the EnableRaisingEvents property to true (and will trigger the event even if the process has already completed before we set the property). To handle this case correctly, we need to enable raising of events after we attach a handler to the Exited event.
This is a problem, because if we use AwaitEvent it will block the workflow until the event fires. We cannot do anything after calling AwaitEvent from the workflow (and if we set the property before calling AwaitEvent, then we get a race condition). Vladimir's approach is correct, but I think there is a simpler way to deal with this.
I'll create a function Event.guard taking an event and returning an event, which allows us to specify some function that will be executed after a handler is attached to the event. This means that if we do some operation (which in turn triggers the event) inside this function, the event will be handled.
To use it for the problem discussed here, we need to change my original solution as follows. Firstly, the shellExecute function must not set the EnableRaisingEvents property (otherwise, we could lose the event!). Secondly, the waiting code should look like this:
let rec loop scripts = async {
    match scripts with
    | [] -> printf "FINISHED"
    | script::scripts ->
        let p = shellExecute fsi script
        let! exit =
            p.Exited
            |> Event.guard (fun () -> p.EnableRaisingEvents <- true)
            |> Async.AwaitEvent
        let output = p.StandardOutput.ReadToEnd()
        return! loop scripts }
Note the use of the Event.guard function. Roughly, it says that after the workflow attaches a handler to the p.Exited event, the provided lambda function will run (and will enable raising of events). However, we already attached the handler to the event, so if this causes the event to fire immediately, we're fine!
The implementation (for both Event and Observable) looks like this:
module Event =
    let guard f (e:IEvent<'Del, 'Args>) =
        let e = Event.map id e
        { new IEvent<'Args> with
            member x.AddHandler(d) = e.AddHandler(d)
            member x.RemoveHandler(d) = e.RemoveHandler(d); f()
            member x.Subscribe(observer) =
                let rm = e.Subscribe(observer) in f(); rm }

module Observable =
    let guard f (e:IObservable<'Args>) =
        { new IObservable<'Args> with
            member x.Subscribe(observer) =
                let rm = e.Subscribe(observer) in f(); rm }
Nice thing is that this code is very straightforward.
Your approach looks great to me, I really like the idea of embedding process execution into asynchronous workflows using AwaitEvent!
The likely reason why it didn't work is that you need to set the EnableRaisingEvents property of the Process to true if you want it to ever trigger the Exited event (don't ask me why you have to do that, it sounds pretty silly to me!). Anyway, I made a couple of other changes to your code when testing it, so here is a version that worked for me:
open System
open System.Diagnostics
let shellExecute program args =
    // Configure the process to redirect output (so that we can read it)
    let startInfo =
        new ProcessStartInfo
            (FileName = program, Arguments = args, UseShellExecute = false,
             WindowStyle = ProcessWindowStyle.Hidden,
             RedirectStandardOutput = true)
    // Start the process
    // Note: We must enable raising of events explicitly here!
    Process.Start(startInfo, EnableRaisingEvents = true)
Most importantly, the code now sets EnableRaisingEvents to true. I also changed the code to use a syntax where you specify properties of an object when constructing it (to make the code a bit more succinct) and I changed a few properties, so that I can read the output (RedirectStandardOutput).
Now, we can use the AwaitEvent method to wait until a process completes. I'll assume that fsi contains the path to fsi.exe and that scripts is a list of FSX scripts. If you want to run them sequentially, you could use a loop implemented using recursion:
let rec loop scripts = async {
    match scripts with
    | [] -> printf "FINISHED"
    | script::scripts ->
        // Start the process in the background
        let p = shellExecute fsi script
        // Wait until the process completes
        let! exit = Async.AwaitEvent p.Exited
        // Read the output produced by the process; the exit code
        // is available in the `ExitCode` property of `Process`
        let output = p.StandardOutput.ReadToEnd()
        printfn "\nPROCESSED: %s, CODE: %d\n%A" script p.ExitCode output
        // Process the rest of the scripts
        return! loop scripts }

// This starts the workflow on a background thread, so that we can
// do other things in the meantime. You need to add `ReadLine`, so that
// the console application doesn't quit immediately
loop scripts |> Async.Start
Console.ReadLine() |> ignore
Of course, you could also run the processes in parallel (or for example run 2 groups of them in parallel etc.) To do that you would use Async.Parallel (in the usual way).
Anyway, this is a really nice example of using asynchronous workflows in an area where I haven't seen them used so far. Very interesting :-)
In response to Tomas's answer, would this be a workable solution to the race condition involved in starting the process, and then subscribing to its Exited event?
type Process with
    static member AsyncStart psi =
        let proc = new Process(StartInfo = psi, EnableRaisingEvents = true)
        let asyncExit = Async.AwaitEvent proc.Exited
        async {
            proc.Start() |> ignore
            let! args = asyncExit
            return proc
        }
Unless I'm mistaken, this would subscribe to the event prior to starting the process, and package it all up as an Async<Process> result.
This would allow you to rewrite the rest of the code like this:
let shellExecute program args =
    // Configure the process to redirect output (so that we can read it)
    let startInfo =
        new ProcessStartInfo(FileName = program, Arguments = args,
                             UseShellExecute = false,
                             WindowStyle = ProcessWindowStyle.Hidden,
                             RedirectStandardOutput = true)
    // Start the process
    Process.AsyncStart(startInfo)
let fsi = "PATH TO FSI.EXE"
let fsi = "PATH TO FSI.EXE"

let rec loop scripts = async {
    match scripts with
    | [] -> printf "FINISHED"
    | script::scripts ->
        // Start the process in the background
        use! p = shellExecute fsi script
        // Read the output produced by the process; the exit code
        // is available in the `ExitCode` property of `Process`
        let output = p.StandardOutput.ReadToEnd()
        printfn "\nPROCESSED: %s, CODE: %d\n%A" script p.ExitCode output
        // Process the rest of the scripts
        return! loop scripts
}
If that does the job, it's certainly a lot less code to worry about than Vladimir's Async.GetSubject.
What about a MailboxProcessor?
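A rough sketch of what that suggestion might look like, reusing shellExecute and fsi from Tomas's answer above (his shellExecute returns the started Process); the blocking WaitForExit keeps the sketch short but ties up the agent's thread:

let runner = MailboxProcessor<string>.Start(fun inbox -> async {
    while true do
        let! script = inbox.Receive()
        let p = shellExecute fsi script
        p.WaitForExit()
        printfn "PROCESSED: %s, CODE: %d" script p.ExitCode })

// scripts |> List.iter (fun s -> runner.Post s)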
It is possible to simplify the version of Subject from the blog post: instead of returning an imitation of an event, getSubject can return a workflow.
The resulting workflow itself is a state machine with two states:
1. The event wasn't triggered yet: all pending listeners should be registered.
2. The value is already set: a listener is served immediately.
In code it will appear like this:
type SubjectState<'T> = Listen of ('T -> unit) list | Value of 'T
The getSubject implementation is trivial:
let getSubject (e : IEvent<_, _>) =
    let state = ref (Listen [])

    let switchState v =
        let listeners =
            lock state (fun () ->
                match !state with
                | Listen ls ->
                    state := Value v
                    ls
                | _ -> failwith "Value is set twice")
        for l in listeners do l v

    Async.StartWithContinuations(
        Async.AwaitEvent e,
        switchState,
        ignore,
        ignore)

    Async.FromContinuations(fun (cont, _, _) ->
        let ok, v =
            lock state (fun () ->
                match !state with
                | Listen ls ->
                    state := Listen (cont::ls)
                    false, Unchecked.defaultof<_>
                | Value v ->
                    true, v)
        if ok then cont v)
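A hedged usage sketch tying this back to the process question (my own illustration, assuming p already has its StartInfo configured): call getSubject before starting the process, so the Exited event is captured even if the process finishes before the workflow awaits it.

let awaitExit (p: System.Diagnostics.Process) = async {
    p.EnableRaisingEvents <- true
    let exited = getSubject p.Exited      // subscribes to Exited right away
    p.Start() |> ignore
    let! _ = exited                       // completes even if Exited already fired
    return p.ExitCode }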
)