In F#, do agent ReplyChannels/AsyncReplyChannels cross threads? - f#

For example: I start agent (1) on the UI thread, then start agent (2) on a background thread, send a message from (1) to (2) that includes a ReplyChannel/AsyncReplyChannel, and then send a reply from (2) back to (1). Is that enough to ensure cross-thread communication between the background-thread agent and the UI-thread agent?
Other approaches to communicating with an agent on a specific thread seem to prefer raising events on that thread's SynchronizationContext, and say nothing about using ReplyChannels...

When you talk about agents, do you mean MailboxProcessors?
If so, posting messages and using a reply channel is sufficient for communication between two agents/threads.
Please note that the communication will NOT happen directly between the UI thread and the background thread you mentioned; each agent/MailboxProcessor has its own thread.
If you use PostAndReply from the UI thread, you don't need a SynchronizationContext, because the call blocks internally until the reply arrives.
If you use PostAndAsyncReply, an async F# workflow is returned; in that case you still need to use the SynchronizationContext.
For that, please refer to Async.SwitchToContext: https://msdn.microsoft.com/en-us/visualfsharpdocs/conceptual/async.switchtocontext-method-%5Bfsharp%5D

Related

How to understand dart async operation?

As we know, Dart is a single-threaded language. According to the documentation, we can use Future/Stream to implement an async operation, which sends the time-consuming operation to the Event Queue.
What confuses me is where the Event Queue runs. Does it run on the Dart thread? If so, won't it block the app?
Another question: is the Event Queue a FIFO queue? Say I have two operations, a networking request that needs one minute and a click event, and both are sent to the Event Queue.
Will the click event then be blocked by the networking request, since the queue is FIFO?
So where does the event queue run?
Thank you very much!
One thing to note is that asynchronous and multithreading are two different things. Dart uses Futures and async/await to achieve asynchronicity, but Dart is still inherently a single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
As a side note, it's worth noting that Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.

Is there any reason to share a dispatch queue?

I'm working on some code whose original developer I can't contact.
He passes one class a reference to another class's serial dispatch queue, and I'm not seeing any purpose to this rather than just creating another dispatch queue (each class is a singleton).
It's not creating issues but I'd like to understand it further, so any insight into the positive implications is appreciated, thank you.
Edit: I suggest reading all the answers here; they explain a lot.
It's actually not a good idea to share queues in this fashion (and no, not because they are expensive - they're not, quite the converse). The rationale is that it's not clear to anyone but a queue's creator just what the semantics of the queue are. Is it serial? Concurrent? High priority? Low priority? All are possible, and once you start passing around internal queues that were actually created for the benefit of a specific class, an external caller can schedule work on them that causes a mutual deadlock or otherwise interacts with the other items on the queue in unexpected ways - say, caller A expected concurrent behavior while caller B assumed it was a serial queue without any of the "gotchas" that concurrent execution implies.
Queues should therefore, wherever possible, be hidden implementation details of a class. The class can export methods for scheduling work against its internal queue as necessary, but the methods should be the only accessors since they are the only ones who know for sure how to best access and schedule work on that specific type of queue!
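To make that encapsulation point concrete, here is a minimal sketch in Java rather than GCD (a single-threaded ExecutorService stands in for a serial queue, and the class and method names are made up): the queue never leaves the class, and callers can only submit work through methods that know its semantics.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch: the "queue" stays a private implementation detail.
    public final class AccountStore {
        // Serial by construction: one worker thread, FIFO order.
        private final ExecutorService queue = Executors.newSingleThreadExecutor();
        private long balance = 0;

        // The only way outside code can touch the internal state.
        public void deposit(long amount) {
            queue.execute(() -> balance += amount);
        }

        public void printBalance() {
            queue.execute(() -> System.out.println("balance = " + balance));
        }

        public void shutdown() {
            queue.shutdown();
        }

        public static void main(String[] args) {
            AccountStore store = new AccountStore();
            store.deposit(100);
            store.printBalance(); // runs after the deposit, in queue order
            store.shutdown();
        }
    }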
If it's a serial queue, then they may be intending to serialize access to some resource shared between all objects that share it.
Dispatch queues are somewhat expensive to create, and tie up system resources. If you can share one, the system runs more efficiently. For that matter, if your app does work in the background, using a shared queue allows you to manage a single pool of tasks to be completed. So yes, there are good reasons for using a shared dispatch queue.

Check if pthread thread is blocking

Here's the situation: I have a thread running that is partially controlled by code that I don't own. I started the thread, so I have its thread ID, but then I passed it off to some other code. I need to be able to tell, from another thread that I am in control of, whether that other code has currently caused the thread to block. Is there a way to do this in pthreads? I think I'm looking for something equivalent to the getState() method in Java's Thread class (http://download.oracle.com/javase/6/docs/api/java/lang/Thread.html#getState()).
Edit: It's OK if the solution is platform-dependent. I've already found a solution for Linux using the /proc file system.
You could write wrappers for some of the pthreads functions, which would simply update some state information before/after calling the original functions. That would allow you to keep track of which threads are running, when they're acquiring or holding mutexes (and which ones), when they're waiting on which condition variables, and so on.
Of course, this only tells you when they're blocked on pthreads synchronization objects -- it won't tell you when they're blocking on something else.
Before you hand the thread off to some other code, set a flag protected by a mutex. When the thread returns from the code you don't control, clear the flag protected by the mutex. You can then check, from wherever you need to, whether the thread is in the code you don't control.
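As a rough illustration of that flag idea, here is a sketch in Java rather than C (a pthreads version would guard the boolean with a pthread_mutex_t); all names here are made up.

    // Hypothetical sketch of the "flag protected by a lock" pattern.
    public class HandOffTracker {
        private final Object lock = new Object();
        private boolean inUncontrolledCode = false;

        // Called by the thread we started, around the code we don't own.
        public void runUncontrolled(Runnable codeWeDontOwn) {
            synchronized (lock) { inUncontrolledCode = true; }
            try {
                codeWeDontOwn.run();
            } finally {
                synchronized (lock) { inUncontrolledCode = false; }
            }
        }

        // Called from any other thread to ask "is it still in there?".
        public boolean isInUncontrolledCode() {
            synchronized (lock) { return inUncontrolledCode; }
        }
    }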
From outside the code, there is no distinction between blocked and not-blocked. If you literally checked the state of the thread, you would get nonsensical results.
For example, consider two library implementations.
A: We do all the work in the calling thread.
B: We dispatch a worker thread to do the work. The calling thread blocks until the worker is done.
In both cases A and B the code you don't control is equally making forward progress. Your 'getstate' idea would provide different results. So it's not what you want.

Blackberry HTTPConnection best practices

I am developing a project for BlackBerry. The application works with the network and sends/receives data via HTTP. Currently I use a queue and a queue manager. The manager starts a background thread that runs in a while (true) loop, checking the queue for new transactions to send to the server. If the queue is not empty, the transaction is executed; otherwise the manager sleeps for 200 ms.
A transaction is processed as follows:
- Another thread is started (using a Runnable) that opens the network connection, while the first thread waits in a loop for that thread to finish or for a timeout that we set.
- If the connection is established, another thread is started (using a Runnable) that calls getResponseCode(), while the first thread again waits in a loop for it to finish or time out.
Before this we show a popup window with a rotating wait image, and afterwards it is removed; this is synchronized via Application.getEventLock().
It is sometimes unstable: the thread sleeps for a long time and ignores the timeout-waiting loop.
I would like to know how valid such an approach is, what your advice and best practices are, and what your experience is.
I use 4.5, 4.6, 4.7 and 5.0.
The lock returned by Application.getEventLock() should only be used for code that modifies the UI or UI components - it's the lock used by the event dispatcher. You should not be using it for background tasks such as HTTP processing. If you want to synchronize that code, it would be best to just create your own lock object.
You do not need that many threads. Your EDT (event dispatch thread, a.k.a. the main thread) should insert the job (some Runnable class) into a queue and use wait/notify to inform a dedicated worker thread, which is responsible for the network transactions, to check the queue.
The worker thread will be responsible for opening connection, writing to connection and reading from it.
For information about the wait/notify mechanism, check out:
A simple scenario using wait() and notify() in java
Since you can't update the UI from the worker thread, once the network transaction is completed you can update the UI layer using invokeLater.
For more details go to http://www.blackberry.com/developers/docs/5.0.0api/net/rim/device/api/system/Application.html#invokeLater(java.lang.Runnable)
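Here is a minimal sketch of that queue-plus-wait/notify worker in standard Java (names are illustrative; on BlackBerry's CLDC you would use a Vector rather than a generic LinkedList, and the UI update in the final comment would go through Application.invokeLater):

    import java.util.LinkedList;
    import java.util.Queue;

    // Sketch: one dedicated worker thread consumes queued network jobs.
    public class NetworkWorker extends Thread {
        private final Queue<Runnable> jobs = new LinkedList<Runnable>();
        private volatile boolean running = true;

        // Called from the event dispatch thread to enqueue a transaction.
        public void submit(Runnable job) {
            synchronized (jobs) {
                jobs.add(job);
                jobs.notify();            // wake the worker instead of polling
            }
        }

        public void shutdown() {
            running = false;
            synchronized (jobs) { jobs.notify(); }
        }

        public void run() {
            while (running) {
                Runnable job;
                synchronized (jobs) {
                    while (running && jobs.isEmpty()) {
                        try {
                            jobs.wait();  // no 200 ms polling loop needed
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    if (!running) return;
                    job = jobs.poll();
                }
                job.run();                // open connection, read response, etc.
                // ...then hand the result to the UI, e.g. via invokeLater.
            }
        }
    }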
You can set a timeout on the HTTPConnection itself, but if you don't want to rely on that mechanism, you can schedule a TimerTask that will execute after some time and handle the timeout in case no response is received.
Once the response is received all you need to do is cancel the TimerTask so that the timeout will not be triggered.
Check out http://www.blackberry.com/developers/docs/4.0api/java/util/TimerTask.html
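A minimal sketch of that TimerTask timeout idea in plain Java (the names and the 10-second value are illustrative, not BlackBerry API; real code would close the open connection or notify the waiting thread inside the timeout handler):

    import java.util.Timer;
    import java.util.TimerTask;

    // Schedule a timeout task; cancel it as soon as the response arrives.
    public class TimeoutExample {
        public static void main(String[] args) throws InterruptedException {
            Timer timer = new Timer();
            TimerTask timeoutTask = new TimerTask() {
                public void run() {
                    System.out.println("Timed out waiting for the response");
                    // close the connection / notify the waiting code here
                }
            };
            timer.schedule(timeoutTask, 10000); // fire after 10 seconds

            // ... perform the HTTP request; when the response is received:
            boolean responseReceived = simulateRequest();
            if (responseReceived) {
                timeoutTask.cancel();   // the timeout never fires
            }
            timer.cancel();             // release the timer thread
        }

        private static boolean simulateRequest() throws InterruptedException {
            Thread.sleep(500);          // stand-in for the real network call
            return true;
        }
    }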

BackgroundWorker Thread - C#

I have to do 3 async operations in parallel inside a Windows NT Service (using .NET 2.0/C#). For that I am using the BackgroundWorker component.
Is it a good option/approach?
For continuous operation, I am calling RunWorkerAsync() again in the RunWorkerCompleted event.
Please advise.
Usually BackgroundWorker is used for long-running operations. If you just need to execute 3 tasks in parallel, you can use simple threads.
Usually, BackgroundWorker is used when you need RunWorkerCompleted to perform updates in a GUI. At least that's how I've been using it. It sounds like you want this to run constantly, so why not use a regular worker thread?
The biggest issue I see is that BackgroundWorker uses the .NET thread pool, so if you really want this to run continuously, you've just guaranteed to use up one of the thread pool threads, of which there are a limited number available.
I wouldn't be using backgroundworkers for the job you're describing here.
The BackgroundWorker is meant to be used for 'long'-running operations while keeping the UI responsive; a service has no UI, so that pattern doesn't apply.
For a service I would be using a timer and three threads.
Let the timer check that the three threads still exist and restart them or report errors when needed. The three threads can do their own work in a while loop (don't forget to add a sleep(0) in there).
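The question is about .NET, but the shape of that suggestion is easy to show. The following is a rough Java sketch of a watchdog timer that restarts missing workers; the worker names and the 5-second check interval are arbitrary choices, not anything from the original post.

    import java.util.Timer;
    import java.util.TimerTask;

    // Sketch: three worker threads plus a timer that watches them.
    public class ServiceSkeleton {
        private static Thread startWorker(final String name) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    while (!Thread.currentThread().isInterrupted()) {
                        // ... do this worker's unit of work ...
                        try {
                            Thread.sleep(0);   // yield, as suggested above
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            }, name);
            t.start();
            return t;
        }

        public static void main(String[] args) {
            final Thread[] workers = {
                startWorker("worker-1"), startWorker("worker-2"), startWorker("worker-3")
            };
            Timer watchdog = new Timer("watchdog", true);
            watchdog.schedule(new TimerTask() {
                public void run() {
                    for (int i = 0; i < workers.length; i++) {
                        if (!workers[i].isAlive()) {
                            // report the error and restart the dead worker
                            workers[i] = startWorker("worker-" + (i + 1));
                        }
                    }
                }
            }, 5000, 5000);   // check every 5 seconds
        }
    }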
