Are synchronous functions in Dart executed atomically? - dart

I understand that Dart is single-threaded and that within an isolate a function call is popped from the event loop queue and executed. There seem to be two cases: async and sync.
a) Async: An asynchronous function will run without interruption until it gets to the await keyword. At this point, it may release control of the instruction pointer or continue its routine. (i.e. async functions can be but are not required to be interrupted on await)
b) Sync: All instructions from setup -> body -> teardown are executed without interruption. If this is the case, I would say that synchronous functions are atomic.
I have an event listener that may have multiple calls in the event loop queue. I think I have two options.
Using the Synchronized package
a) Async version:
import 'package:synchronized/synchronized.dart';
final Lock _lock = Lock();
...
() async {
  await _lock.synchronized(() async {
    if (_condition) {
      _signal.complete(status);
      _condition = !_condition;
    }
  });
}
b) Sync version:
() {
  if (_condition) {
    _signal.complete(status);
    _condition = !_condition;
  }
}
From my understanding of the Dart concurrency model, these are equivalent. I prefer b) because it is simpler. However, this requires that there cannot be a race condition between two calls to my sync event handler. I have used concurrency in languages with a GIL and with multithreading, but not with the event-loop paradigm.

a) Async: An asynchronous function will run without interruption until it gets to the await keyword. At this point, it may release control of the instruction pointer or continue its routine. (i.e. async functions can be but are not required to be interrupted on await)
await always yields. It's equivalent to setting up a Future.then() callback and returning to the Dart event loop.
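To illustrate that equivalence, here is a minimal sketch (g() is a hypothetical stand-in for any asynchronous call):
Future<int> g() async => 2;

// Written with await: execution suspends here and control returns
// to the event loop until g()'s future completes.
Future<int> f() async {
  final x = await g();
  return x + 1;
}

// Roughly the same thing written with then(): the callback runs
// once g()'s future completes, after returning to the event loop.
Future<int> fWithThen() {
  return g().then((x) => x + 1);
}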
For your simple example, there's no reason to use _lock.synchronized(). Synchronous code cannot be interrupted, and isolates (as their name implies) don't share memory. You would want some form of locking mechanism if your callback did asynchronous work and you needed to prevent concurrent asynchronous operations from being interleaved.
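By contrast, here is a minimal, hypothetical sketch of the case that last sentence describes: a handler that does asynchronous work between reading and writing shared state, where synchronized() keeps two overlapping calls from interleaving around the await (the _balance field and the artificial delay are made up for illustration):
import 'package:synchronized/synchronized.dart';

final Lock _lock = Lock();
int _balance = 0;

Future<void> deposit(int amount) async {
  await _lock.synchronized(() async {
    final current = _balance;
    // The await below is a suspension point; without the lock, another
    // call to deposit() could read the stale _balance here.
    await Future<void>.delayed(const Duration(milliseconds: 10));
    _balance = current + amount;
  });
}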

Related

How to restore runOn Scheduler used in previous operator?

Folks, is it possible to obtain the currently used Scheduler within an operator?
The problem I have is that Mono.fromFuture() is executed on a native thread (the AWS CRT HTTP client in my case). As a result, all subsequent operators are also executed on that thread, and later code that needs the thread's context class loader finds that it is null. I realize that I can call .publishOn(originalScheduler) after .fromFuture(), but I don't know which scheduler was used to materialize the Mono returned by my function.
Is there an elegant way to deal with this?
fun myFunction(): Mono<String> {
    return Mono.just("example")
        .flatMap { value ->
            Mono.fromFuture {
                // invocation of a 3rd-party library that executes the Future on a thread created in native code
            }
        }
        .map {
            val resource = Thread.currentThread().getContextClassLoader().getResources("META-INF/services/blah_blah")
            // NullPointerException because Thread.currentThread().getContextClassLoader() returns null
            resource.asSequence().first().toString()
        }
}
It is not possible, because there's no guarantee that there is a Scheduler at all.
The place where the subscription is made and the data starts flowing could simply be a Thread. There is no mechanism in Java that allows an external actor to submit a task to an arbitrary thread (you have to provide the Runnable at Thread construction).
So no, there's no way of "returning to the previous Scheduler".
Usually, this shouldn't be an issue at all. If your code is reactive, it should also be non-blocking and thus able to "share" whichever thread it currently runs on with other computations.
If your code is blocking, it should off-load the work to a blocking-compatible Scheduler anyway, which you should choose explicitly. Typically: publishOn(Schedulers.boundedElastic()). This is also true for CPU-intensive tasks, by the way.

When would a queue consider a task is completed?

In the following code, when would queueT (a serial queue) consider “task A” to be completed?
The moment when aNetworkRequest switched to another thread?
Or in the doneInAnotherQueue block? ( commented // 1)
In other words, when would “task B” be executed?
let queueT = DispatchQueue(label: "com.test.a")

queueT.async { // task A
    aNetworkRequest.doneInAnotherQueue() { // completed in another thread possibly
        // 1
    }
}

queueT.async { // task B
    print("It's my turn")
}
It would be much better if you could explain the mechanism by which a queue considers a task to be completed.
Thanks in advance.
In short, the first example starts an asynchronous network request, so the async call “finishes” as soon as that network request is submitted (but does not wait for that network request to finish).
I am assuming that the real question is that you want to know when the network request is done. Bottom line, GCD is not well suited for managing dependencies between tasks that are, themselves, asynchronous requests. Dispatching the initiation of a network request to a serial queue is undoubtedly not going to achieve what you want. (And before someone suggests using semaphores or dispatch groups to wait for the asynchronous request to finish, note that this can solve the tactical issue, but it is a pattern to be avoided because it uses resources inefficiently and, in edge cases, can introduce deadlocks.)
One pattern is to use completion handlers:
func performRequestA(completion: @escaping () -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { object in
        ...
        completion()
    }
}
Now, in practice, we would generally use the completion handler with a parameter, perhaps even a Result type:
func performRequestA(completion: @escaping (Result<Foo, Error>) -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { result in
        guard ... else {
            completion(.failure(error))
            return
        }
        let foo = ...
        completion(.success(foo))
    }
}
Then you can use the completion handler pattern, to process the results, update models, and perhaps initiate subsequent requests that are dependent upon the results of this request. For example:
performRequestA { result in
    switch result {
    case .failure(let error):
        print(error)
    case .success(let foo):
        // update models or initiate the next step in the process here
        break
    }
}
If you are really asking how to manage dependencies between asynchronous tasks, there are a number of other, elegant patterns (e.g., Combine, custom asynchronous Operation subclass, the forthcoming async/await pattern contemplated in SE-0296 and SE-0303, etc.). All of these are elegant solutions for managing dependencies between asynchronous tasks, controlling the degree of concurrency, etc.
We probably would need to better understand the nature of your broader needs before we made any specific recommendations. You have asked the question about a single dispatch, but the question probably is best viewed from a broader context of what you are trying to achieve. For example, I'm assuming you are asking because you have multiple asynchronous requests to initiate: Do you really need to make sure that they happen sequentially and lose all the performance benefits of concurrency? Or can you allow them to run concurrently and you just need to know when all of the concurrent requests are done and how to get the results in the correct order? And might you have so many concurrent requests that you might need to constrain the degree of concurrency?
The answers to those questions will probably influence our recommendation of how to best manage your multiple asynchronous requests. But the answer is almost certainly not a GCD queue.
You can do a simple check:
let queueT = DispatchQueue(label: "com.test.a")

queueT.async { // task A
    DispatchQueue(label: "com.test2.a").async { // create another queue inside
        for i in 0..<6 {
            print(i)
        }
    }
}

queueT.async { // task B
    for i in 10..<20 {
        print(i)
    }
}
You'll get different output on each run. This means that, yes, once the work is handed off to another thread, the original task is considered done.
A GCD work item is complete when the closure you pass returns. So for your example, I'm going to rewrite it to make the function calls and parameters more explicit (rather than using trailing closure syntax).
queueT.async(execute: {
    // This is a function call that takes a closure parameter. When this
    // function returns, this closure will continue. Whether that is before or
    // after running closureParameter is an internal detail of doneInAnotherQueue.
    aNetworkRequest.doneInAnotherQueue(closureParameter: { ... })

    // At this point, the closure is complete. What doneInAnotherQueue() does with
    // its closure is its business.
})
Assuming that doneInAnotherQueue() executes its closure parameter "sometime in the future", your task B will likely run before that closure runs (it may not; it's really a race at that point, but probably). If doneInAnotherQueue() blocks on its closure before returning, then closureParameter will definitely run before task B.
There is absolutely no magic here. The system has no idea what doneInAnotherQueue does with its parameter. It may never run it. It may run it immediately. It may run it sometime in the future. The system just calls doneInAnotherQueue() and passes it a closure.
I rewrote async in normal "function with parameters" syntax to make it even more clear that async() is just a function, and it takes a closure parameter. It also isn't magic. It's not part of the language. It's just a normal function in the Dispatch framework. All it does is take its parameter, put it on a dispatch queue, and return. It doesn't execute anything. There are just closures that get put on queues, scheduled, and executed.
Swift is in the process of adding structured concurrency, which will add more language-level concurrency features that will allow you to express much more advanced things than the simple primitives provided by GCD.
Your task A returns straight away; the act of dispatching work to another queue returns immediately. Think of the block (the trailing closure) after doneInAnotherQueue as just an argument to the doneInAnotherQueue function, no different from passing an Int or a String. You pass that block along and then return immediately at the closing brace of task A.

how to use suspendCoroutine to turn java 7 future into kotlin suspending function

What is the best approach to wrap java 7 futures inside a kotlin suspend function?
Is there a way to convert a method returning Java 7 futures into a suspending function?
The process is pretty straightforward for arbitrary callbacks or Java 8 CompletableFutures, as illustrated, for example, here:
* https://github.com/Kotlin/kotlin-coroutines/blob/master/kotlin-coroutines-informal.md#suspending-functions
In these cases, there is a hook that is triggered when the future is done, so it can be used to resume the continuation as soon as the value of the future is ready (or an exception is triggered).
Java 7 futures however don't expose a method that is invoked when the computation is over.
Converting a Java 7 future to a Java 8 completable future is not an option in my codebase.
Of course, I can create a suspend function that calls future.get(), but this would be blocking, which defeats the overall purpose of using coroutine suspension.
Another option would be to submit a runnable to a new single-thread executor and, inside the runnable, call future.get() and invoke a callback. This wrapper makes the code look non-blocking from the consumer's point of view (the coroutine can suspend), but under the hood we are still writing blocking code, and we are creating a new thread just for the sake of blocking it.
A Java 7 Future is blocking. It is not designed for asynchronous APIs and does not provide any way to install a callback that is invoked when the future completes. This means there is no direct way to use suspendCoroutine with it, because suspendCoroutine is designed for use with asynchronous, callback-based APIs.
However, if your code is, in fact, running under JDK 8 or a newer version, there is a good chance that the actual Future instance you have at run-time also implements the CompletionStage interface. You can try to cast it to CompletionStage and use the ready-made CompletionStage.await extension from the kotlinx-coroutines-jdk8 module of the kotlinx.coroutines library.
Of course Roman is right that a Java Future does not let you provide a callback for when the work is done.
However, it does give you a way to check if the work is done, and if it is, then calling .get() won't block.
Luckily for us, we also have a cheap way to divert a thread to quickly do a poll check via coroutines.
Let's write that polling logic and also vend it as an extension method:
import java.util.concurrent.Future
import kotlinx.coroutines.delay

suspend fun <T> Future<T>.wait(): T {
    while (!isDone)
        delay(1) // or whatever you want your polling frequency to be
    return get()
}
Then to use:
fun someBlockingWork(): Future<String> { ... }

suspend fun useWork() {
    val result = someBlockingWork().wait()
    println("Result: $result")
}
So we have millisecond-response time to our Futures completing without using any extra threads.
And of course you'll want to add some upper bound to use as a timeout so you don't end up waiting forever. In that case, we can update the code just a little:
suspend fun <T> Future<T>.wait(timeoutMs: Int = 60000): T? {
    val start = System.currentTimeMillis()
    while (!isDone) {
        if (System.currentTimeMillis() - start > timeoutMs)
            return null
        delay(1)
    }
    return get()
}
You should now be able to do this by creating another coroutine in the same scope that cancels the Future when the coroutine is cancelled.
withContext(Dispatchers.IO) {
    val future = getSomeFuture()
    coroutineScope {
        val cancelJob = launch {
            suspendCancellableCoroutine<Unit> { cont ->
                cont.invokeOnCancellation {
                    future.cancel(true)
                }
            }
        }
        future.get().also {
            cancelJob.cancel()
        }
    }
}

Invoking async function without await in Dart, like starting a thread

I have two functions
callee() async {
  // do something that takes some time
}

caller() async {
  await callee();
}
In this scenario, caller() waits until callee() finishes. I don't want that. I want caller() to complete right after invoking callee(). callee() can complete whenever it likes in the future; I don't care. I just want to start it, like starting a thread, and then forget about it.
Is this possible?
When you call the callee function, it returns a Future. The await then waits for that future to complete. If you don't await the future, it will eventually complete anyway, but your caller function won't be blocked on waiting for that. So, you can just do:
caller() {
  callee(); // Ignore returned Future (at your own peril).
}
If you do that, you should be aware of what happens if callee fails with an error. That would make the returned future complete with that error, and if you don't listen on the future, that error is considered "uncaught". Uncaught errors are handled by the current Zone and the default behavior is to act like a top-level uncaught error which may kill your isolate.
So, remember to handle the error.
If callee can't fail, great, you're done (unless it fails anyway, then you'll have fun debugging that).
Actually, because of the risk of just forgetting to await a future, the highly recommended unawaited_futures lint requires that you don't just ignore a returned future, and instead wants you to write unawaited(callee()); to signal that it's deliberate. (The unawaited function can be imported from package:meta and will be available from the dart:async library in SDK version 2.14.)
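For example, a minimal sketch of the lint-friendly form (assuming a recent SDK where unawaited is exported from dart:async):
import 'dart:async' show unawaited;

caller() {
  // Explicitly signal that not awaiting this future is deliberate;
  // this satisfies the unawaited_futures lint.
  unawaited(callee());
}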
The unawaited function doesn't handle errors though, so if you can have errors, you should do something more.
You can handle the error locally:
caller() {
  callee().catchError((e, s) {
    logErrorSomehow(e, s);
  });
}
(Since null safety, this code only works if the callee() future has a nullable value type. From Dart 2.14 you'll be able to use callee().ignore() instead; until then, you can use callee().then((_) => null, onError: (e, s) => logErrorSomehow(e, s)).)
or you can install an error handling zone and run your code in that:
runZoned(() {
  myProgram();
}, onError: logErrorSomehow);
See the runZoned function and its onError parameter.
Sure, just omit the await. That way callee() is still invoked immediately, but as soon as it reaches an asynchronous operation, the rest of its work is scheduled on the event queue for later execution, and caller() continues immediately afterwards.
This isn't like a thread, though. As mentioned, the remaining processing is enqueued on the event queue, which means it won't be executed until the current task and all previously enqueued tasks have completed.
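Here is a minimal sketch of that ordering (the delayed call is just a stand-in for real asynchronous work):
callee() async {
  print("callee: before await"); // runs synchronously when callee() is called
  await Future.delayed(Duration(milliseconds: 100));
  print("callee: after await");  // runs later, from the event queue
}

caller() async {
  callee(); // no await
  print("caller: done");         // printed before "callee: after await"
}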
If you want real parallel execution, you need to use isolates.
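For illustration, here is a minimal sketch of spawning an isolate (heavyWork and its loop are hypothetical placeholders for your own CPU-bound code):
import 'dart:isolate';

// Runs in its own isolate, truly in parallel with the main isolate.
void heavyWork(SendPort replyTo) {
  var sum = 0;
  for (var i = 0; i < 100000000; i++) {
    sum += i;
  }
  replyTo.send(sum);
}

caller() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(heavyWork, receivePort.sendPort);
  // Pick up the result whenever it arrives; caller() itself is not blocked.
  receivePort.first.then((result) => print('heavyWork finished: $result'));
}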
See also
https://www.dartlang.org/articles/event-loop/
https://api.dartlang.org/stable/1.16.1/dart-isolate/dart-isolate-library.html
https://pub.dartlang.org/packages/isolate

Launching multiple async futures in response to events

I would like to launch a fairly expensive operation in response to a user clicking on a canvas element.
mouseDown(MouseEvent e) {
  print("entering event handler");
  var future = new Future<int>(expensiveFunction);
  future.then((int value) => redrawCanvas(value));
  print("done event handler");
}

expensiveFunction() {
  for (int i = 0; i < 1000000000; i++) {
    // do something insane here
  }
}

redrawCanvas(int value) {
  // do stuff here
  print("redrawing canvas");
}
My understanding of M4 Dart is that this Future constructor should launch expensiveFunction asynchronously, i.e. on a different thread from the main one. And it does appear to work this way: "done event handler" is immediately printed to my output window in the IDE, and some time later "redrawing canvas" is printed. However, if I click on the element again, nothing happens until my expensiveFunction from the previous click has finished running.
How do I use futures to simply launch a compute-intensive function on a new thread, such that I can have multiple of them queued up in response to multiple clicks, even if the first future is not complete yet?
Thanks.
As mentioned in a different answer, Futures are just a "placeholder for a value that is made available in the future". They don't necessarily imply concurrency.
Dart has a concept of isolates for concurrency. You can spawn an isolate to run some code in a parallel thread or process.
dart2js can compile isolates into Web Workers. A Web Worker can run in a separate thread.
Try something like this:
import 'dart:isolate';
expensiveOperation(SendPort replyTo) {
  var result = doExpensiveThing(); // the expensive work goes here
  replyTo.send(result);
}

main() async {
  var receive = new ReceivePort();
  var isolate = await Isolate.spawn(expensiveOperation, receive.sendPort);
  var result = await receive.first;
  print(result);
}
(I haven't tested the above, but something like it should work.)
Event Loop & Event Queue
You should note that Futures are not threads. They do not run concurrently, and in fact, Dart is single-threaded. All Dart code runs in an event loop.
The event loop is a loop that runs as long as the current Dart isolate is alive. When you call main() to start a Dart application, the isolate is created, and it stays alive until the main method has completed and all items on the event queue have been processed.
The event queue is the set of all functions that still need to finish executing. Because Dart is single-threaded, all of these functions need to run one at a time. So when one item in the event queue is completed, another one begins. The exact timing and scheduling of the event queue is way more complicated than I can explain here.
Therefore, asynchronous processing is important to prevent the single thread from being blocked by some long running execution. In a UI, a long process can cause visual jankiness and hinder your app.
Futures
Futures represent a value that will be available sometime in the Future, hence the name. When a Future is created, it is returned immediately, and execution continues.
The callback associated with that Future (in your case, expensiveFunction) is "started" by being added to the event queue. When control returns to the event loop, the callback runs, and as soon as it can after that, so does the code registered with then().
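As a quick, self-contained illustration of that ordering (a sketch, not part of the original example):
main() {
  print("before scheduling");
  Future<int>(() {
    print("running the Future's computation"); // runs from the event queue
    return 42;
  }).then((value) => print("then: $value"));
  print("after scheduling"); // printed before the computation runs
}

// Output order: before scheduling, after scheduling,
// running the Future's computation, then: 42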
Streams
Because your Futures are by definition asynchronous, and you don't know when they return, you want to queue up your callbacks so that they remain in order.
A Stream is an object that emits events that can be subscribed to. When you write canvasElement.onClick.listen(...) you are asking for the onClick Stream of MouseEvents, which you then subscribe to with listen.
You can use Streams to queue up events and register a callback on those events to run the code you'd like.
What to Write
import 'dart:async';
import 'dart:html';

main() {
  // Used to add events to a stream.
  var controller = new StreamController<Future>();

  // Pause when we get an event so that we take one value at a time.
  StreamSubscription subscription;
  subscription = controller.stream.listen((_) => subscription.pause());

  var canvas = new CanvasElement();
  canvas.onClick.listen((MouseEvent e) {
    print("entering event handler");
    var future = new Future<int>(expensiveFunction);

    // Resume the subscription after redrawCanvas has run.
    controller.add(future.then(redrawCanvas).then((_) => subscription.resume()));
    print("done event handler");
  });
}
expensiveFunction() {
  for (int i = 0; i < 1000000000; i++) {
    // do something insane here
  }
}

redrawCanvas(int value) {
  // do stuff here
  print("redrawing canvas");
}
Here we are queuing up our redrawCanvas callbacks by pausing after each mouse click, and then resuming after redrawCanvas has been called.
More Information
See also this great answer to a similar question.
A great place to start reading about Dart's asynchrony is the first part of this article about the dart:io library and this article about the dart:async library.
For more information about Futures, see this article about Futures.
For Streams information, see this article about adding to Streams and this article about creating Streams.
