I've spent hours combing through documentation and tutorials, but I can't figure out how to use ReactiveX to poll an external resource (or anything else, for that matter) at an interval. Below is some code I wrote to get information from a REST API at an interval.
open System
open System.Net.Http
open System.Reactive.Linq

module MyObservable =

    let getResources =
        async {
            use client = new HttpClient()
            let! response = client.GetStringAsync("http://localhost:8080/items") |> Async.AwaitTask
            return response
        } |> Async.StartAsTask

    let getObservable (interval: TimeSpan) =
        let f () = getResources.Result
        Observable.Interval(interval)
        |> Observable.map (fun _ -> f ())
To test this out, I tried subscribing to the observable and waiting five seconds. It does receive something every second for five seconds, but getResources is only evaluated the first time; the same result is then reused at each interval. How can I modify this so that the REST call is made at each interval, instead of the result of the first call being used over and over again?
let mutable res = Seq.empty
getObservable (new TimeSpan(0, 0, 1))
|> Observable.subscribe (fun x -> res <- Seq.append res [x])
|> ignore

Threading.Thread.Sleep(5000)
Don't use a Task. Tasks are what we call "hot", meaning that if you have a value of type Task in your hand, it means that the task is already running, and there is nothing you can do about it. In particular, this means you cannot restart it, or start a second instance of it. Once a Task is created, it's too late.
In your particular case it means that getResources is not "a way to start a task", but just "a task". Already started, already running.
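A quick way to see the difference (this snippet is mine, not from your code; paste it into fsi):

open System.Threading.Tasks

// A Task value starts running the moment it is created.
let hot = Task.Run(fun () -> printfn "task: runs as soon as it is created")

// An async value does nothing until it is explicitly started.
let cold = async { printfn "async: runs only when started" }
Async.RunSynchronously cold   // runs it once
Async.RunSynchronously cold   // the same value can be run again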
If you want to start a new task every time, you have two alternatives:
First (the worse alternative), you could make getResources a function rather than a value, which you can do by giving it a parameter:
let getResources () =
    async { ...
And then call it with that parameter:
let f () = getResources().Result
This will run the getResources function afresh every time you call f(), which will create a new Task every time and start it.
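For completeness, here is that first alternative with the body from your question filled in; each call to getResources() builds and starts a fresh Task:

// Same body as before, now behind a unit parameter so each call starts a new Task.
let getResources () =
    async {
        use client = new HttpClient()
        let! response = client.GetStringAsync("http://localhost:8080/items") |> Async.AwaitTask
        return response
    } |> Async.StartAsTask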
Second (a better option), don't use a Task at all. You're creating a perfectly good async computation and then turning it into a Task only to block on getting its result. Why? You can block on an async's result just as well!
let getResources = async { ... }

let getObservable interval =
    let f () = getResources |> Async.RunSynchronously
    ...
This works, even though getResources is not a function, because asyncs, unlike Tasks, are what we call "cold". This means that, if you have an async in your hand, it doesn't mean that it's already running. async, unlike Task, represents not an "already running" computation, but rather "a way to start a computation". A corollary is that you can start it multiple times from the same async value.
One way to start it is via Async.RunSynchronously as I'm doing in my example above. This is not the best way, because it blocks the current thread until the computation is done, but it's equivalent to what you were doing with accessing the Task.Result property, which also blocks until the Task is done.
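Putting the pieces together, the whole fix amounts to this (same endpoint as in your question):

open System
open System.Net.Http
open System.Reactive.Linq

// A cold async: nothing runs until it is started.
let getResources = async {
    use client = new HttpClient()
    let! response = client.GetStringAsync("http://localhost:8080/items") |> Async.AwaitTask
    return response
}

// Starts the async afresh on every tick, so the REST call happens at each interval.
let getObservable (interval: TimeSpan) =
    Observable.Interval(interval)
    |> Observable.map (fun _ -> getResources |> Async.RunSynchronously)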
I'm trying to scrape some websites that need to run their JavaScript before the document has all the data I'm interested in. I'm trying to open a WebBrowser and wait for the document to load, but I can't get the data when I try to switch back to the thread the WebBrowser is on. Running it without switching back to that thread gives casting errors. =(
What's stopping the async from switching threads? How do I fix this problem?
Script
open System
open System.Windows.Forms
open System.Threading
let step a = do printfn "%A" a
let downloadWebSite (address: Uri) (cont: HtmlDocument -> 'a) =
    let browser = new WebBrowser()
    let ctx = SynchronizationContext.Current
    browser.DocumentCompleted.Add (fun _ ->
        printfn "Document Loaded")
    async {
        do step 1
        do browser.Navigate(address)
        do step 2
        let! _ = Async.AwaitEvent browser.DocumentCompleted
        do step 3
        do! Async.SwitchToContext ctx
        do step 4
        return cont browser.Document }

let test =
    downloadWebSite (Uri "http://www.google.com") Some
    |> Async.RunSynchronously
Output
>
1
2
Document Loaded
3
# It just hangs here. I have to manually interrupt fsi.
- Interrupt
>
4
The problem with your approach is that RunSynchronously blocks the very thread that Async.SwitchToContext ctx then tries to switch back to in order to run the rest of the asynchronous computation.
When using F# Interactive, there is one main thread which runs in the F# Interactive and handles the user interactions. This is the thread that can use Windows Forms controls, so you correctly create WebBrowser outside of async. The waiting for DocumentCompleted happens on a thread pool thread (which runs the async workflow), but when you try to switch back to the main thread, it is already blocked by Async.RunSynchronously.
You can avoid blocking the thread by running a loop that calls Application.DoEvents to process events on the main thread (which will also allow it to run the rest of your async). Your downloadWebSite stays the same, but now you wait using:
let test =
    downloadWebSite (Uri "http://www.google.com") Some
    |> Async.Ignore
    |> Async.StartAsTask

while not test.IsCompleted do
    System.Threading.Thread.Sleep(100)
    System.Windows.Forms.Application.DoEvents()
This is a bit of a hack - and there might be a better way of structuring this if you do not really need to wait for the result (e.g. just return a task and wait before running the next command), but this should do the trick.
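For example, the "just return a task" variant could look like this (my sketch; it relies on fsi's default Windows Forms event loop to keep pumping messages between commands):

// Kick off the download without blocking; keep the Task around.
let pending =
    downloadWebSite (Uri "http://www.google.com") Some
    |> Async.StartAsTask

// ...run other fsi commands; inspect pending.IsCompleted, and read
// pending.Result once the document has finished loading.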
I am implementing the Gossip Algorithm, in which multiple actors spread a gossip at the same time in parallel. The system stops when each of the actors has heard the gossip 10 times.

Now, I have a scenario in which I check the listen count of the recipient actor before sending the gossip to it. If the listen count is already 10, the gossip is not sent to the recipient actor. I am doing this with a synchronous call to get the listen count.
def get_message(server, msg) do
  GenServer.call(server, {:get_message, msg})
end

def handle_call({:get_message, msg}, _from, state) do
  listen_count = hd(state)
  {:reply, listen_count, state}
end
The program runs well at the start, but after some time GenServer.call fails with a timeout error like the following. After some debugging, I realized that GenServer.call goes dormant and never reaches the corresponding handle_call. Is this behavior expected when using synchronous calls? Since all actors are independent, shouldn't the GenServer.call invocations run independently, without waiting for each other's responses?
02:28:05.634 [error] GenServer #PID<0.81.0> terminating
** (stop) exited in: GenServer.call(#PID<0.79.0>, {:get_message, []}, 5000)
** (EXIT) time out
(elixir) lib/gen_server.ex:774: GenServer.call/3
Edit: the following code reproduces the error when run in an iex shell.
defmodule RumourActor do
  use GenServer

  def start_link(opts) do
    {:ok, pid} = GenServer.start_link(__MODULE__, opts)
    {pid}
  end

  def set_message(server, msg, recipient) do
    GenServer.cast(server, {:set_message, msg, server, recipient})
  end

  def get_message(server, msg) do
    GenServer.call(server, :get_message)
  end

  def init(opts) do
    state = opts
    {:ok, state}
  end

  def handle_cast({:set_message, msg, server, recipient}, state) do
    :timer.sleep(5000)
    c = RumourActor.get_message(recipient, [])
    IO.inspect(c)
    {:noreply, state}
  end

  def handle_call(:get_message, _from, state) do
    count = tl(state)
    {:reply, count, state}
  end
end
Open an iex shell, load the above module, and start two processes using:
a = RumourActor.start_link(["", 3])
b = RumourActor.start_link(["", 5])
Produce the error by creating the deadlock condition mentioned by Dogbert in the comments. Run the following calls within a short time of each other:
cb = RumourActor.set_message(elem(a,0), [], elem(b,0))
ca = RumourActor.set_message(elem(b,0), [], elem(a,0))
Wait for 5 seconds. Error will appear.
A gossip protocol is a way of dealing with asynchronous, unknown, unconfigured (random) networks that may be suffering intermittent outages and partitions and where no leader or default structure is present. (Note that this situation is somewhat unusual in the real world and out-of-band control is always imposed on systems in some way.)
With that in mind, let's change this to be an asynchronous system (using cast) so that we are following the spirit of the concept of chatty gossip style communication.
We need a digest of messages that counts how many times each message has been received, a record of messages that have already passed the magic number (so we don't re-send one if it arrives very late), and a list of processes enrolled in our system so we know to whom we are broadcasting:
(The following example is in Erlang because I just trip over Elixir syntax ever since I stopped using it...)
-module(rumor).

-record(s,
        {peers  = []         :: [pid()],
         digest = #{}        :: #{message_id() => non_neg_integer()},
         dead   = sets:new() :: sets:set(message_id())}).

-type message_id() :: zuuid:uuid().
Here I am using a UUID, but it could be whatever. An Erlang reference would be fine for a test case, but since gossip isn't useful within an Erlang cluster, and references are unsafe outside the originating system, I'm jumping to the assumption that this is for a networked system.
We will need an interface function that allows us to tell a process to inject a new message into the system. We will also need an interface function that sends a message between two processes once it is already in the system. Then we will need an inner function that broadcasts messages to all the known (subscribed) peers. Ah, that means we need a greeting interface so that peer processes can notify each other of their presence.
We will also want a way to have a process tell itself to keep broadcasting over time. How long to set the interval on retransmission is not actually a simple decision -- it has everything to do with network topology, latency, variability, etc (you would actually probably occasionally ping peers and develop some heuristic based on the latency, drop peers that seem unresponsive, and so on -- but we're not going to get into that madness here). Here I'm just going to set it for 1 second because that is an easy to interpret interval for humans observing the system.
Note that everything below is asynchronous.
Interfaces...
insert(Pid, Message) ->
    gen_server:cast(Pid, {insert, Message}).

relay(Pid, ID, Message) ->
    gen_server:cast(Pid, {relay, ID, Message}).

greet(Pid) ->
    gen_server:cast(Pid, {greet, self()}).

make_introduction(Pid, PeerPid) ->
    gen_server:cast(Pid, {make_introduction, PeerPid}).
That last function is going to be our way as testers of the system to cause one of the processes to call greet/1 on some target Pid so they start to build a peer network. In the real world something slightly different usually goes on.
Inside our gen_server callback for receiving a cast we will get:
handle_cast({insert, Message}, State) ->
    NewState = do_insert(Message, State),
    {noreply, NewState};
handle_cast({relay, ID, Message}, State) ->
    NewState = do_relay(ID, Message, State),
    {noreply, NewState};
handle_cast({greet, Peer}, State) ->
    NewState = do_greet(Peer, State),
    {noreply, NewState};
handle_cast({make_introduction, Peer}, State) ->
    NewState = do_make_introduction(Peer, State),
    {noreply, NewState}.
Pretty simple stuff.
Above I mentioned that we would need a way for this thing to tell itself to resend after a delay. To do that we are going to send ourselves a naked message to "redo_relay" after a delay using erlang:send_after/3 so we are going to need a handle_info/2 to deal with it:
handle_info({redo_relay, ID, Message}, State) ->
    NewState = do_relay(ID, Message, State),
    {noreply, NewState}.
Implementation of the message bits is the fun part, but none of this is terribly tricky. Forgive the do_relay/3 below -- it could be more concise, but I'm writing this in a browser off the top of my head, so...
do_insert(Message, State = #s{peers = Peers, digest = Digest}) ->
    MessageID = zuuid:v1(),
    NewDigest = maps:put(MessageID, 1, Digest),
    ok = broadcast(MessageID, Message, Peers),
    ok = schedule_resend(MessageID, Message),
    State#s{digest = NewDigest}.

do_relay(ID,
         Message,
         State = #s{peers = Peers, digest = Digest, dead = Dead}) ->
    case maps:find(ID, Digest) of
        {ok, Count} when Count >= 10 ->
            NewDigest = maps:remove(ID, Digest),
            NewDead = sets:add_element(ID, Dead),
            ok = broadcast(ID, Message, Peers),
            State#s{digest = NewDigest, dead = NewDead};
        {ok, Count} ->
            NewDigest = maps:put(ID, Count + 1, Digest),
            ok = broadcast(ID, Message, Peers),
            ok = schedule_resend(ID, Message),
            State#s{digest = NewDigest};
        error ->
            case sets:is_element(ID, Dead) of
                true ->
                    State;
                false ->
                    NewDigest = maps:put(ID, 1, Digest),
                    ok = broadcast(ID, Message, Peers),
                    ok = schedule_resend(ID, Message),
                    State#s{digest = NewDigest}
            end
    end.
broadcast(ID, Message, Peers) ->
    Forward = fun(P) -> relay(P, ID, Message) end,
    lists:foreach(Forward, Peers).

schedule_resend(ID, Message) ->
    _ = erlang:send_after(1000, self(), {redo_relay, ID, Message}),
    ok.
And now we need the social bits...
do_greet(Peer, State = #s{peers = Peers}) ->
    case lists:member(Peer, Peers) of
        false -> State#s{peers = [Peer | Peers]};
        true  -> State
    end.

do_make_introduction(Peer, State) ->
    ok = greet(Peer),
    do_greet(Peer, State).
So what did all of the horribly untypespecced stuff up there do?
It avoided any possibility of a deadlock. The reason deadlocks are so, well, deadly in peer systems is that anytime you have two identical processes (or actors, or whatever) communicating synchronously, you have created a textbook case of a potential deadlock.
Any time A has a synchronous message headed toward B while B has a synchronous message headed toward A, you have a deadlock. There is no way to create two identical processes that call each other synchronously without creating a potential deadlock. In massively concurrent systems, anything that might happen almost certainly will eventually, so you're going to run into this sooner or later.
Gossip is intended to be asynchronous for a reason: it is a sloppy, unreliable, inefficient way to deal with a sloppy, unreliable, inefficient network topology. Making calls instead of casts not only defeats the purpose of gossip-style message relay, it also pushes you into deadlock territory by changing the nature of the protocol from asynchronous to synchronous.
GenServer.call has a default timeout of 5000 milliseconds. So what is probably happening is that the message queue of the actor fills up with messages, and by the time it gets around to the call, the caller has already timed out.
You can handle timeout using a try...catch:
try do
  c = RumourActor.get_message(recipient, [])
catch
  :exit, _reason ->
    # handle the timeout; fall back to a marker value
    :timeout
end
Eventually the called actor will get to the call message and reply, and that reply will arrive as an unexpected message at the first process; you'll need to handle it in handle_info. So one option is to swallow the timeout in the catch block and handle the late reply (the rumour) in handle_info.
Also, this will significantly degrade performance if there are many processes waiting to time out for 5 seconds before moving on. You could deliberately reduce the timeout and handle the reply in handle_info, but that effectively reduces to using cast and handling the reply from the other process.
Your blocking call needs to be broken into two non-blocking calls. So if A is making a blocking call to B, then instead of waiting for the reply, A can ask B to send its state to a given address (A's address) and move on.
Then A will handle that message separately and reply if necessary.
A.fun1():
    body of A before blocking call
    result = blockingcall()
    do things based on result

needs to be divided into:

A.send():
    body of A before blocking call
    nonblockingcall(A.receive)   # A.receive is where B should send results
    do other things

A.receive(result):
    do things based on result
Good day,
I have a gen_server process which does some long-running state-updating tasks periodically in
handle_info:
handle_info(trigger, State) ->
    NewState = some_long_running_task(),
    erlang:send_after(?LOOP_TIME, self(), trigger),
    {noreply, NewState}.
But while such a task is running, the whole server becomes unresponsive, and any call to it results in a timeout crash:
my_gen_server:status().
** exception exit: {timeout,{gen_server,call,[my_gen_server,status]}}
in function gen_server:call/2
How is it possible to avoid blocking the gen_server?
And when one calls my_gen_server:status() at any time, the result should be something like:
{ok, task_active}
Execute the long-running task in a separate process. Let this process inform the gen_server of its progress with the task (if the task's progress can be tracked), or let the process complete the task or fail, but at least inform the gen_server of the result.
Let the gen_server be linked with the process doing this long-running task, and let the gen_server know the PID or registered name, so that in case of exit signals it can distinguish the death of that important process from the rest.
%% NB: for the {'EXIT', ...} clause below to be reached, the server must
%% trap exits, e.g. by calling process_flag(trap_exit, true) in init/1.
handle_info(trigger, State) ->
    Pid = spawn_link(?MODULE, some_long_running_task, [State]),
    NewState = save_pid(Pid, State),
    {noreply, NewState};
handle_info({'EXIT', SomePid, _}, State) ->
    case lookup_pid(State) == SomePid of
        false -> %% some other process
            {noreply, State};
        true ->
            %% our process has died
            %% what do we do now?
            %% spawn another one?
            %% that's your decision to take
            ....
            ....
            {noreply, State}
    end;
handle_info({finished, TaskResult}, State) ->
    ..... %% update state (binding NewState) etc.
    erlang:send_after(?LOOP_TIME, self(), trigger),
    {noreply, NewState}.

some_long_running_task(ServerState) ->
    .... do work
    .... return results, e.g. send {finished, Results} back to the server
This call does not lead to a crash, but simply to an exception which can be caught:
status() ->
    try gen_server:call(my_gen_server, status)
    catch
        exit:{timeout, _} -> {ok, task_active}
    end.
However, the call will remain in the server's queue, and after it finishes handling the current message, it will send a reply message: {ServerRef, Reply}, which should be discarded by the calling process.
The only way to avoid blocking of any process in Erlang (whether gen_server or not) is not to run blocking tasks on it. So another alternative could be to run your long tasks on a different process which only talks to your server, so nobody cares that it's blocked.
I am doing F# programming, I have some special requirements.
I have 3 class instances; each has to run for one hour every day, from 9:00AM to 10:00AM. I want to control them from the main program, starting them at the same time and stopping them at the same time. The following is my code to start them at the same time, but I don't know how to stop them at the same time.
#light
module Program

open ClassA
open ClassB
open ClassC

let A = new ClassA.A("A")
let B = new ClassB.B("B")
let C = new ClassC.C("C")

let task = [ async { return A.jobA("A") };
             async { return B.jobB("B") };
             async { return C.jobC("C") } ]

task |> Async.Parallel |> Async.RunSynchronously |> ignore
Does anyone know how to stop all 3 class instances at 10:00AM? Please show me your code.
Someone told me that I can use async with cancellation tokens, but since I am calling instance of classes in different modules, it is difficult for me to find suitable code samples.
Thanks,
The jobs themselves need to be stoppable, either by having a Stop() API of some sort, or by being cooperatively cancellable via CancellationTokens or whatnot, unless you're just talking about some job that spins in a loop and you'll just thread-abort it eventually? We need more info about what "stop" means in this context.
As Brian said, the jobs themselves need to support cancellation. The cancellation programming model that works best with F# is based on CancellationToken, because F# propagates the CancellationToken automatically through asynchronous workflows.
To implement cancellation, your job methods will need to take an additional argument:
type A() =
    member x.JobA(str, cancellationToken: CancellationToken) =
        for i in 0 .. 10 do
            cancellationToken.ThrowIfCancellationRequested()
            someOtherWork()
The idea is that you call ThrowIfCancellationRequested frequently during the execution of your job. If cancellation has been requested, the method throws and the operation stops. Once you do this, you can write an asynchronous workflow that gets the current CancellationToken and passes it to the JobA member when calling it:
let task =
    [ async { let! tok = Async.CancellationToken
              return A.JobA("A", tok) };
      async { let! tok = Async.CancellationToken
              return B.JobB("B", tok) } ]
    |> Async.Parallel
    |> Async.Ignore  // combine into a single Async<unit> so Async.Start can run it
Now you can create a new token using CancellationTokenSource and start the workflow. When you then cancel the token source, it will automatically stop any jobs running as part of the workflow:
let src = new CancellationTokenSource()
Async.Start(task, cancellationToken = src.Token)
// To cancel the job:
src.Cancel()
You asked this question on hubfs.net, and I'll repeat my answer here: try using Quartz.NET. You'd just implement IInterruptableJob in A, B, and C, defining how they stop, and then schedule another job at 10:00AM to stop the others.
Quartz.NET has a nice tutorial, FAQ, and lots of examples. It's pretty easy to use for simple cases like this, yet very powerful if you ever need more complex scheduling, monitoring jobs, logging, etc.
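For what it's worth, here is a rough F# sketch of the wrapper idea, assuming the Quartz.NET 2.x API (where IInterruptableJob still exists; later versions replaced it with cancellation tokens). JobAWrapper and the work-slicing loop are hypothetical, with A.jobA standing in for the job from the question:

open Quartz

// Hypothetical wrapper: a trigger fires Execute at 9:00AM; a second "stopper"
// job scheduled at 10:00AM calls scheduler.Interrupt on this job's JobKey,
// which invokes Interrupt() below and sets the cooperative stop flag.
type JobAWrapper() =
    let mutable interrupted = false   // cooperative stop flag set by Interrupt()
    interface IJob with
        member x.Execute(context: IJobExecutionContext) =
            while not interrupted do
                A.jobA("A")           // assumes one call does a bounded slice of work
    interface IInterruptableJob with
        member x.Interrupt() =
            interrupted <- true

The point of the wrapper is that "stop" becomes cooperative: the job checks the flag between slices of work, the same shape as the CancellationToken approach above, just driven by Quartz's scheduler instead of your own timing code.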