I am trying to understand how I should port my Java chess engine to Dart.
I understand that I should use Isolates and/or Futures to run my engine in parallel with the GUI, but how can I force the engine to terminate the search?
In Java I just set a boolean that was shared between the engine thread and the GUI thread.
You should send a message to the isolate, telling it to stop. You can simply do something like:
port.send('STOP');
To be clear, isolates and futures are two different things, and you use them differently.
Use an isolate when you want some code to truly run concurrently, in a separate "isolated memory heap". An isolate is like a mini program, running separately from your main program. You send isolates messages, and you can receive messages from isolates.
Use a future when you want to be notified when a value is available later. "Later" is defined as "a future tick in the event loop". Each isolate has its own event loop. It's important to understand that just asking a Future to run a function doesn't make the function run in parallel. It just puts the function onto the event loop to be run "later".
Answering the implied question 'how can I get a long-running task in an isolate to cease running?' rather than the more explicitly asked 'how can I cause an isolate to terminate, release its resources and generally cease to be?':
Break the long running task up into smaller, shorter running units.
Execute each unit with a Future. Chain futures as appropriate.
Provide a flag that each unit should check before executing its logic. If the flag is set, bail.
Listen for a 'stop' message and set the flag if/when received.
Splitting the main processing task up into Futures allows processing of the stop message to get onto the event queue ahead of later units of the main task, as in the sketch below.
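For illustration, here is a minimal sketch of that pattern in Dart. The names (engineMain, searchChunk) and the 'STOP' message protocol are invented for this example, not part of any API:

import 'dart:isolate';

// Hypothetical engine entry point: searches in small chunks and checks a
// stop flag between chunks.
void engineMain(SendPort replyTo) {
  final commands = ReceivePort();
  replyTo.send(commands.sendPort); // give the GUI a way to reach us

  var stopped = false;
  commands.listen((msg) {
    if (msg == 'STOP') {
      stopped = true; // step 4: set the flag when told to stop
      commands.close(); // let the isolate shut down once its queue drains
    }
  });

  void searchChunk(int depth) {
    if (stopped) return; // step 3: bail if the flag is set
    // ... search one bounded slice of the game tree here (step 1) ...
    Future(() => searchChunk(depth + 1)); // step 2: queue the next unit
  }

  searchChunk(1);
}

Future<void> main() async {
  final fromEngine = ReceivePort();
  await Isolate.spawn(engineMain, fromEngine.sendPort);
  final SendPort toEngine = await fromEngine.first;
  // ... later, when the GUI wants the search to end:
  toEngine.send('STOP');
}

Because each unit is queued as a Future rather than run in one long synchronous loop, the 'STOP' message gets a chance to be delivered between units.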
There is now iso.Isolate.kill()
WARNING: This method is experimental and not handled on every platform yet.
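For completeness, a minimal sketch of using it (the entry point is a placeholder):

import 'dart:isolate';

void engineMain(dynamic message) {
  // ... long-running search ...
}

Future<void> main() async {
  final isolate = await Isolate.spawn(engineMain, null);
  // Forcibly terminate, without waiting for the isolate's next event:
  isolate.kill(priority: Isolate.immediate);
}

Note that kill() is abrupt: the isolate gets no chance to clean up, which is why the message-and-flag approach above is usually the more graceful option.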
Related
In the introductory section of the Concurrency chapter of "The Swift Programming Language" I read:
When an asynchronous function resumes, Swift doesn’t make any guarantee about which thread that function will run on.
This surprised me. It seems odd, comparing for example with waiting on a semaphore in pthreads, that execution can jump between threads.
This leads me to the following questions:
Why doesn't Swift guarantee resuming on the same thread?
Are there any rules by which the resuming thread could be determined?
Are there ways to influence this behaviour, for example make sure it's resumed on the main thread?
EDIT: My study of Swift concurrency & subsequent questions above were triggered by finding that a Task started from code running on the main thread (in SwiftUI) was executing its block on another thread.
It helps to approach Swift concurrency with some context: Swift concurrency attempts to provide a higher-level approach to working with concurrent code. It represents a departure from what you may already be used to with threading models, low-level management of threads, and concurrency primitives (locking, semaphores), so that you don't have to spend time thinking about low-level management yourself.
From the Actors section of TSPL, a little further down on the page from your quote:
You can use tasks to break up your program into isolated, concurrent pieces. Tasks are isolated from each other, which is what makes it safe for them to run at the same time…
In Swift Concurrency, a Task represents an isolated bit of work which can be done concurrently, and the concept of isolation here is really important: when code is isolated from the context around it, it can do the work it needs to without having an effect on the outside world, or be affected by it. This means that in the ideal case, a truly isolated task can run on any thread, at any time, and be swapped across threads as needed, without having any measurable effect on the work being done (or the rest of the program).
As @Alexander mentions in the comments above, this is a huge benefit, when done right: when work is isolated in this way, any available thread can pick up that work and execute it, giving your process the opportunity to get a lot more work done, instead of waiting for particular threads to become available.
However: not all code can be so fully isolated that it runs in this manner; at some point, some code needs to interface with the outside world. In some cases, tasks need to interface with one another to get work done together; in others, like UI work, tasks need to coordinate with non-concurrent code to have that effect. Actors are the tool that Swift Concurrency provides to help with this coordination.
Actors help ensure that tasks run in a specific context, serially relative to other tasks which also need to run in that context. To continue the quote from above:
…which is what makes it safe for them to run at the same time, but sometimes you need to share some information between tasks. Actors let you safely share information between concurrent code.
… actors allow only one task to access their mutable state at a time, which makes it safe for code in multiple tasks to interact with the same instance of an actor.
Besides using Actors as isolated havens of state as the rest of that section shows, you can also create Tasks and explicitly annotate their bodies with the Actor within whose context they should run. For example, to use the TemperatureLogger example from TSPL, you could run a task within the context of TemperatureLogger as such:
Task { @TemperatureLogger in
// This task is now isolated from all other tasks which run against
// TemperatureLogger. It is guaranteed to run _only_ within the
// context of TemperatureLogger.
}
The same goes for running against the MainActor:
Task { @MainActor in
// This code is isolated to the main actor now, and won't run concurrently
// with any other @MainActor code.
}
This approach works well for tasks which may need to access shared state, and need to be isolated from one another, but: if you test this out, you may notice that multiple tasks running against the same (non-main) actor may still run on multiple threads, or may resume on different threads. What gives?
Tasks and Actors are the high-level tools in Swift concurrency, and they're the tools that you interface with most as a developer, but let's get into implementation details:
Tasks are actually not the low-level primitive of work in Swift concurrency; Jobs are. A Job represents the code in a Task between await statements, and you never write a Job yourself; the Swift compiler takes Tasks and creates Jobs out of them.
Jobs are not themselves run by Actors, but by Executors. Again, you never instantiate or use an Executor directly yourself; however, each Actor has an Executor associated with it, which actually runs the jobs submitted to that actor.
This is where scheduling actually comes into play. At the moment there are two main executors in Swift concurrency:
A cooperative, global executor, which schedules jobs on a cooperative thread pool, and
A main executor, which schedules jobs exclusively on the main thread
All non-MainActor actors currently use the global executor for scheduling and executing jobs, and the MainActor uses the main executor for doing the same.
As a user of Swift concurrency, this means that:
If you need a piece of code to run exclusively on the main thread, you can schedule it on the MainActor, and it will be guaranteed to run only on that thread
If you create a task on any other Actor, it will run on one (or more) of the threads in the global cooperative thread pool
And if you run against a specific Actor, the Actor will manage locks and other concurrency primitives for you, so that tasks don't modify shared state concurrently
With all of this, to get to your questions:
Why doesn't Swift guarantee resuming on the same thread?
As mentioned in the comments above — because:
It shouldn't be necessary (as tasks should be isolated in a way that the specifics of "which thread are we on?" don't matter), and
Being able to use any one of the available cooperative threads means that you can continue making progress on all of your work much faster
However, the "main thread" is special in many ways, and as such, the @MainActor is bound to using only that thread. When you do need to ensure you're exclusively on the main thread, you use the main actor.
Are there any rules by which the resuming thread could be determined?
The only rule for non-@MainActor-annotated tasks is: the first available thread in the cooperative thread pool will pick up the work.
Changing this behavior would require writing and using your own Executor, which isn't quite possible yet (though there are some plans on making this possible).
Are there ways to influence this behaviour, for example make sure it's resumed on the main thread?
For arbitrary threads, no — you would need to provide your own executor to control that low-level detail.
However, for the main thread, you have several tools:
When you create a Task using Task.init(priority:operation:), it defaults to inheriting from the current actor, whatever actor that happens to be. This means that if you're already running on the main actor, the task will continue using the current actor; but if you aren't, it will not. To state explicitly that you want the task to run on the main actor, you can annotate its operation:
Task { @MainActor in
// ...
}
This will ensure that regardless of what actor the Task was created on, the contained code will only run on the main actor.
From within a Task: regardless of the actor you're currently on, you can always submit a job directly onto the main actor with MainActor.run(resultType:body:). The body closure is already annotated as @MainActor, and will guarantee execution on the main thread.
Note that creating a detached task never inherits from the current actor, so a detached task is guaranteed to be scheduled through the global executor instead.
My study of Swift concurrency & subsequent questions above were triggered by finding that a Task started from code running on the main thread (in SwiftUI) was executing its block on another thread.
It would help to see specific code here to explain exactly what happened, but two possibilities:
You created a Task that was not explicitly @MainActor-annotated, and it happened to begin execution on the current thread. However, because it wasn't bound to the main actor, it happened to get suspended and resumed by one of the cooperative threads.
You created a Task which contained other Tasks within it, which may have run on other actors, or were explicitly detached tasks — and that work continued on another thread
For even more insight into the specifics here, check out Swift concurrency: Behind the scenes from WWDC2021, which @Rob linked in a comment. There's a lot more to the specifics of what's going on, and it may be interesting to get an even lower-level view.
If you want insights into the threading model underlying Swift concurrency, watch WWDC 2021 video Swift concurrency: Behind the scenes.
In answer to a few of your questions:
Why doesn't Swift guarantee resuming on the same thread?
Because, as an optimization, it can often be more efficient to run it on some thread that is already running on a CPU core. As they say in that video:
When threads execute work under Swift concurrency they switch between continuations instead of performing a full thread context switch. This means that we now only pay the cost of a function call instead. …
You go on to ask:
Are there any rules by which the resuming thread could be determined?
Other than the main actor, no, there are no assurances as to which thread it uses.
(As an aside, we’ve been living with this sort of environment for a long time. Notably, GCD dispatch queues, other than the main queue, make no such guarantee that two blocks dispatched to a particular serial queue will run on the same thread, either.)
Are there ways to influence this behaviour, for example make sure it's resumed on the main thread?
If we need something to run on the main actor, we simply isolate that method to the main actor (with the @MainActor designation on either the closure, the method, or the enclosing class). Theoretically, one can also use MainActor.run {…}, but that is generally the wrong way to tackle it.
As we know, Dart is a single-threaded language, so according to the documentation we can use Future/Stream to implement an async operation: the time-consuming operation is sent to the Event Queue.
What confuses me is where the Event Queue runs. Does it run on the Dart thread? If so, won't it block the app?
Another question: is the Event Queue a FIFO queue? Say I have two operations, a networking request that needs one minute and a click event, and both are sent to the Event Queue.
Will the click event be blocked by the networking request, given that the queue is FIFO?
So where does the event queue run?
Thank you very much!
One thing to note is that asynchronicity and multithreading are two different things. Dart uses Futures and async/await to achieve asynchronicity, but Dart is still inherently a single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
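A small runnable sketch of that behaviour (the delays merely simulate a slow request and a fast click):

void main() {
  // A slow 'network request' and a quick 'click event', queued in that order.
  Future.delayed(Duration(minutes: 1), () => print('network request done'));
  Future.delayed(Duration(milliseconds: 1), () => print('click handled'));
  // 'click handled' prints long before 'network request done': a pending
  // future doesn't hold its place in line, so the event loop is free to
  // process whichever event completes first.
}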
As a side note, it's worth noting that Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.
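A minimal sketch of that message passing (the names are illustrative):

import 'dart:isolate';

// Runs in its own memory heap; nothing here is shared with main().
void worker(SendPort replyTo) {
  replyTo.send('hello from the child isolate');
}

Future<void> main() async {
  final inbox = ReceivePort();
  await Isolate.spawn(worker, inbox.sendPort);
  print(await inbox.first); // hello from the child isolate
}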
I might have the wrong idea of Isolate and Future. Please help me to clear it up. Here is my understanding of both subjects.
Isolate: An Isolate runs code in its own event loop, and each event may run smaller tasks in a nested microtask queue.
Future: A Future is used to represent a potential value, or error, that will be available at some time in the future.
My confusions are:
The docs say an Isolate has its own event loop? I feel like having its own event queue makes more sense to me; am I wrong?
Is a Future run asynchronously on the main Isolate? I'm assuming the future's task is actually placed at the end of the event queue, so that it will be executed by the loop at some later point. Correct me if I'm wrong.
Why use an Isolate when there is Future? I've seen some examples using an Isolate instead of a Future for heavy tasks. But why? It only makes sense to me if futures execute asynchronously on the main isolate's queue.
A Future is a handle that allows you to get notified when async execution is completed.
Async execution uses the event queue and code is executed concurrently within the same thread.
https://webdev.dartlang.org/articles/performance/event-loop
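A tiny example: the future's callback is deferred, not run in parallel:

void main() {
  Future(() => print('ran later, on the same thread'));
  print('ran first'); // synchronous code finishes before the queued task runs
}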
Dart code is by default executed in the root isolate.
You can start up additional isolates that usually run on another thread.
An isolate can be either loaded from the same Dart code the root isolate was started with (with a different entry-point than main() https://api.dartlang.org/stable/2.0.0/dart-isolate/Isolate/spawn.html) or with different Dart code (loaded from some Dart file or URL https://api.dartlang.org/stable/2.0.0/dart-isolate/Isolate/spawnUri.html).
Isolates don't share any state and can only communicate using message passing (SendPort/ReceivePort). Each isolate has its own event queue.
https://webdev.dartlang.org/articles/performance/event-loop
An Isolate runs Dart code on a single thread. Synchronous code like
print('hello');
is run immediately and can't be interrupted.
An Isolate also has an Event Loop that it uses to schedule asynchronous tasks on. Asynchronous doesn't mean that these tasks are run on a separate thread. They are still run on the same thread. Asynchronous just means that they are scheduled for later.
The Event Loop runs the tasks that are scheduled in what is called an Event Queue. You can put a task in the Event Queue by creating a future like this:
Future(() => print('hello'));
The print('hello') task will get run when the other tasks ahead of it in the Event Queue have finished. All of this happens on the same thread, that is, the same Isolate.
Some tasks don't get added to the Event Queue right away, for example
Future.delayed(Duration(seconds: 1), () => print('hello'));
which only gets added to the queue after a delay of one second.
So far everything I've been talking about gets done on the same thread, the same Isolate. Some work may actually get done on a different thread, though, like IO operations. The underlying framework takes care of that. If something expensive like reading from disk were done on the main Isolate thread then it would block the app until it finished. When the IO operation finishes the future completes and the update with the result is added to the Event Queue.
When you need to do CPU intensive operations yourself, you should run them on another isolate so that it doesn't cause jank in your app. The compute property is good for this. You still use a future, but this time the future is returning the result from a different Isolate.
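A sketch of that, assuming a Flutter app (compute comes from package:flutter/foundation.dart; fib is just a stand-in for your expensive work):

import 'package:flutter/foundation.dart';

// Must be a top-level (or static) function so it can be sent to the isolate.
int fib(int n) => n < 2 ? n : fib(n - 1) + fib(n - 2);

Future<void> crunch() async {
  final result = await compute(fib, 40); // runs on a separate isolate
  print('fib(40) = $result'); // the future completes back on this isolate
}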
Further study
Futures - Isolates - Event Loop
Dart asynchronous programming: Isolates and event loops
Are Futures in Dart threads?
The Event Loop and Dart
Flutter/Dart non-blocking demystify
The Engine architecture
Single Thread Dart, What? — Part 1
Single Thread Dart, What? — Part 2
Flutter Threading: Isolates, Future, Async And Await
The Fundamentals of Zones, Microtasks and Event Loops in the Dart Programming Language
An introduction to the dart:io library
What thread / isolate does flutter run IO operations on?
In one sentence we could say,
Isolates: Dart is single-threaded, but it is capable of doing multi-threaded work using Isolates (separate processes).
Future: a Future is a result that is returned when Dart has finished some asynchronous work. The work is generally done in that single thread.
An Isolate can be compared to a Thread, even though Dart is not multithreaded. It has its own memory and event loop, whereas Futures share the same memory as the caller.
Dart is able to spawn standalone processes, called Isolates (web workers in dart2js), which do not share memory with the main program but are able to run computations asynchronously in another process (effectively a thread of sorts) without blocking the main thread.
A Future is run inside the Isolate that called it, not necessarily the main isolate.
I recommend this article, which explains it better than I can.
TLDR: https://medium.com/flutter-community/isolates-in-flutter-a0dd7a18b7f6
Let's understand async-await first and then go into isolates.
import 'dart:convert';
import 'dart:io';

const filename = 'data.json'; // assumed path, for the sake of the example

void main() async {
  // Read some data.
  final fileData = await _readFileAsync();
  final jsonData = jsonDecode(fileData);

  // Use that data.
  print('Number of JSON keys: ${jsonData.length}');
}

Future<String> _readFileAsync() async {
  final file = File(filename);
  final contents = await file.readAsString();
  return contents.trim();
}
We want to read some data from a file, then decode that JSON and print the number of JSON keys. We don't need to go into the implementation details here; the walkthrough below describes how it works.
When we click on the Place Bid button (from the article's example), it sends a request to _readFileAsync, all of which is Dart code that we wrote. But this function, _readFileAsync, executes code using the Dart Virtual Machine/OS to perform the I/O operation, which itself runs on a different thread: the I/O thread.
This means the code in the main function runs inside the main isolate. When the code reaches _readFileAsync, it hands the work over to the I/O thread, and the Main Isolate waits until the code is completely executed or an error occurs. This is what the await keyword does.
Now, once the contents of the file are read, control returns to the main isolate and we start parsing the String data as JSON and print the number of keys. This is pretty straightforward. But suppose the JSON parsing were a very big operation, with a huge JSON payload that we start manipulating to conform to our needs. Then this work would happen on the Main Isolate. At that point the UI could hang, making our users frustrated.
Now let's get back to isolates.
Dart uses the Isolate model for concurrency. An Isolate is essentially a wrapper around a thread. But threads, by definition, can share memory, which might be easy for the developer but makes code prone to race conditions and locks. Isolates, on the other hand, cannot share memory and instead rely on a message-passing mechanism to talk with each other.
Using isolates, Dart code can perform multiple independent tasks at once, using additional cores if they’re available. Each Isolate has its own memory and a single thread running an event loop.
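Tying this back to the JSON example, here is a hedged sketch of pushing a heavy parse onto its own isolate (the entry point and message shape are invented for illustration):

import 'dart:convert';
import 'dart:isolate';

// Runs in a separate isolate: parses the JSON and sends the result back.
void parseJson(List<Object> args) {
  final replyTo = args[0] as SendPort;
  final raw = args[1] as String;
  replyTo.send(jsonDecode(raw)); // the heavy work happens off the main isolate
}

Future<void> main() async {
  const raw = '{"a": 1, "b": 2}'; // imagine this is a huge payload
  final inbox = ReceivePort();
  await Isolate.spawn(parseJson, [inbox.sendPort, raw]);
  final jsonData = await inbox.first;
  print('Number of JSON keys: ${jsonData.length}');
}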
Hope this helps clear up someone's doubts.
I'm writing an Erlang application that requires actively polling some remote resources, and I want the process that does the polling to fit into the OTP supervision trees and support all the standard facilities like proper termination, hot code reloading, etc.
However, the two default behaviours, gen_server and gen_fsm, seem to only support callback-based operation. I could abuse gen_server to do it through calls to self, or abuse gen_fsm by having a single state that always loops to itself with a timeout of 0, but I'm not sure that's safe (i.e. that it doesn't exhaust the stack or accumulate unread messages in the mailbox).
I could make my process into a special process and write all that handling myself, but that effectively makes me reimplement the Erlang equivalent of the wheel.
So is there a behavior for code like this?
loop(State) ->
    NewState = do_stuff(State), % act without waiting to be called
    loop(NewState).
And if not, is there a safe way to trick default behaviours into doing this without exhausting the stack or accumulating messages over time or something?
The standard way of doing that in Erlang is by using erlang:send_after/3. See this SO answer and also this example implementation.
Is it possible that you could employ an essentially non-OTP-compliant process? Although to be a good OTP citizen you ideally want to make your long-running processes into gen_servers and gen_fsms, sometimes you have to look beyond the standard-issue rule book and consider why the rules exist.
What if, for example, your supervisor starts your gen_server, and your gen_server spawns another process (let's call it the active_poll process), and they link to each other so that they have a shared fate (if one dies the other dies)? The active_poll process is now indirectly supervised by the supervisor that spawned the gen_server, because if it dies, so will the gen_server, and they will both get restarted. The only problem you really have to solve now is code upgrade, but this is not too difficult: your gen_server gets a code_change callback when the code is to be upgraded, and it could simply send a message to the active_poll process, which can make an appropriate fully qualified function call, and bingo, it's running the new code.
If this doesn't suit you for some reason and/or you MUST use gen_server/gen_fsm/similar directly...
I'm not sure that writing a 'special process' really gives you very much. If you wrote a special process correctly, such that it is in theory compliant with OTP design principles, it could still be ineffective in practice if it blocks or busy-waits in a loop somewhere and doesn't invoke sys when it should. So you really have at most a small optimisation over using gen_server/gen_fsm with a zero timeout (or with an async message handler which does the polling and sends a message to self to trigger the next poll).
If whatever you are doing to actively poll can block (such as a blocking socket read, for example), this is really big trouble, as a gen_server, gen_fsm or special process will all be stopped from fulfilling their usual obligations (which they would usually be able to do either because the callback returns, in the case of gen_server/gen_fsm, or because receive is called and the sys module invoked explicitly, in the case of a special process).
If what you are doing to actively poll is non-blocking, though, you can do it, but if you poll without any delay then it effectively becomes a busy wait. (It's not quite one, because the loop will include a receive call somewhere, meaning the process will yield, giving the scheduler a voluntary opportunity to run other processes, but it's not far off, and it will still be a relative CPU hog.) If you can have a 1 ms delay between each poll, that makes a world of difference versus polling as rapidly as you can. It's not ideal, but if you MUST, it'll work. So use a timeout (as big as you can without it becoming a problem), or have an async message handler which does the polling and sends a message to self to trigger the next poll.
I've been looking at Lua and lvm.c. I'd very much like to implement an interface that lets me control the VM interpreter state.
Cooperative multitasking from within Lua would not work for me (user-contributed code).
The debug hook gets me only about 50% of the way there (instruction execution limits), but it raises an exception which just crashes the running Lua code, and I need to be able to tweak it even further.
I want to create a system where tens of thousands of Lua user scripts are running. Individual threads would not work, and execution limits would cause headaches for beginning developers. I'm going to control execution speeds too, but ultimately
while true do
end
will execute forever, and I really don't care that it does.
Any ideas, help or other implementations that I could look at?
EDIT: This is not about sandboxing; pretend I'm an expert in that field for this conversation.
EDIT: I do not want to use a coroutine-based controller implemented in Lua code run inside the VM.
EDIT: I want to run one thread and manage a large number of user-contributed Lua scripts; an external process-level control mechanism would not scale at all.
You can search for Lua Sandbox implementations; for example, this wiki page and SO question provide some pointers. Note that most of the effort in sandboxing is focused on not allowing you to execute bad code, but not necessarily on preventing infinite loops. For better control you may need to combine Lua sandboxing with something like LXC or cpulimit. (not relevant based on the comments)
If you are looking for something Lua-based, lightweight, but not necessarily 100% foolproof, then you can try running your client code in a separate coroutine and set a debug hook on that coroutine that will be triggered every N-th line. In that hook you can check if the process you are running has exceeded its quotas. You also need to take care of new coroutines started, as those need to have their own hooks set (you either need to disable coroutine.create/wrap or replace them with something that sets the debug hook you need).
The code in this case may look like:
local coro = coroutine.create(client_func)
debug.sethook(coro, debug_hook, "l", 1000) -- hook on every line, plus every 1000th VM instruction
It's not foolproof, because it may block on some IO operation and the debug hook will not help there.
[Edit based on updated question and comments]
Between "no lua code coroutine based controller" and "no external process control mechanism" I don't think you are left with much choice. It may be that your only option is to run one VM per user script and somehow give ticks to those VMs (there was a recent question on SO on this, but I can't find it). Before going this route, I would still try to do this with coroutines (which should scale to tens of thousands easily; Tir claims supporting 1M active users with coroutine-based architecture).
The mechanism would roughly look like this: you install the debug hook as shown above, and from that hook you yield back to your controller, which then decides what other coroutine (user script) to resume. I have this very mechanism working in the Lua debugger I've been developing (although it only does it for one client script). This doesn't protect you from IO calls that can block, and for that you may still need a watchdog at the VM level to see if it's been blocked for longer than needed.
If you need to serialize and deserialize running code fragments that preserve upvalues and such, then Pluto is probably your only option.
Look at implementing lua_lock and lua_unlock.
http://www.lua.org/source/5.1/llimits.h.html#lua_lock
Take a look at lulu. It is a Lua VM written in Lua, for Lua 5.1.
For newer versions you need to do some work, but then you really can make a scheduler.
Take a look at this:
https://github.com/amilamad/preemptive-task-scheduler-for-lua
I maintain this project. It's a non-blocking preemptive scheduler for running Lua code, suitable for long-running game scripts.