OpenMP tasks - knowing the number of unexecuted tasks in real time

I would like to know if, in applications parallelized with OpenMP tasks, threads can figure out the number of tasks that were created but not executed yet (presumably held in the scheduling queue, ready for execution). Is there any such call?
For example, the call
get_num_non_executed_tasks()
would return the number of tasks that were created so far, but not yet executed.

No, there is no way in OpenMP to find out how many tasks were created but not yet started.

Related

Why does Swift not resume an asynchronous function on the same thread it was started?

In the introductory section of the Concurrency chapter of "The Swift Programming Language" I read:
When an asynchronous function resumes, Swift doesn’t make any
guarantee about which thread that function will run on.
This surprised me. It seems odd, compared for example with waiting on a semaphore in pthreads, that execution can jump between threads.
This leads me to the following questions:
Why doesn't Swift guarantee resuming on the same thread?
Are there any rules by which the resuming thread could be
determined?
Are there ways to influence this behaviour, for example make sure it's resumed on the main thread?
EDIT: My study of Swift concurrency & subsequent questions above were triggered by finding that a Task started from code running on the main thread (in SwiftUI) was executing its block on another thread.
It helps to approach Swift concurrency with some context: Swift concurrency attempts to provide a higher-level way of working with concurrent code, and represents a departure from the threading models you may already be used to (low-level management of threads, concurrency primitives such as locks and semaphores, and so on), so that you don't have to spend time thinking about those low-level details.
From the Actors section of TSPL, a little further down on the page from your quote:
You can use tasks to break up your program into isolated, concurrent pieces. Tasks are isolated from each other, which is what makes it safe for them to run at the same time…
In Swift Concurrency, a Task represents an isolated bit of work which can be done concurrently, and the concept of isolation here is really important: when code is isolated from the context around it, it can do the work it needs to without having an effect on the outside world, or be affected by it. This means that in the ideal case, a truly isolated task can run on any thread, at any time, and be swapped across threads as needed, without having any measurable effect on the work being done (or the rest of the program).
As @Alexander mentions in the comments above, this is a huge benefit when done right: when work is isolated in this way, any available thread can pick up that work and execute it, giving your process the opportunity to get a lot more work done, instead of waiting for particular threads to become available.
However: not all code can be so fully isolated that it runs in this manner; at some point, some code needs to interface with the outside world. In some cases, tasks need to interface with one another to get work done together; in others, like UI work, tasks need to coordinate with non-concurrent code to have that effect. Actors are the tool that Swift Concurrency provides to help with this coordination.
Actors help ensure that tasks run in a specific context, serially relative to other tasks which also need to run in that context. To continue the quote from above:
…which is what makes it safe for them to run at the same time, but sometimes you need to share some information between tasks. Actors let you safely share information between concurrent code.
… actors allow only one task to access their mutable state at a time, which makes it safe for code in multiple tasks to interact with the same instance of an actor.
Besides using Actors as isolated havens of state as the rest of that section shows, you can also create Tasks and explicitly annotate their bodies with the Actor within whose context they should run. For example, to use the TemperatureLogger example from TSPL, you could run a task within the context of TemperatureLogger as such:
Task { @TemperatureLogger in
    // This task is now isolated from all other tasks which run against
    // TemperatureLogger. It is guaranteed to run _only_ within the
    // context of TemperatureLogger.
}
The same goes for running against the MainActor:
Task { @MainActor in
    // This code is isolated to the main actor now, and won't run concurrently
    // with any other @MainActor code.
}
This approach works well for tasks which may need to access shared state, and need to be isolated from one another, but: if you test this out, you may notice that multiple tasks running against the same (non-main) actor may still run on multiple threads, or may resume on different threads. What gives?
Tasks and Actors are the high-level tools in Swift concurrency, and they're the tools that you interface with most as a developer, but let's get into implementation details:
Tasks are actually not the low-level primitive of work in Swift concurrency; Jobs are. A Job represents the code in a Task between await statements, and you never write a Job yourself; the Swift compiler takes Tasks and creates Jobs out of them.
Jobs are not themselves run by Actors, but by Executors, and again, you never instantiate or use an Executor directly yourself. However, each Actor has an Executor associated with it that actually runs the jobs submitted to that actor.
This is where scheduling actually comes into play. At the moment there are two main executors in Swift concurrency:
A cooperative, global executor, which schedules jobs on a cooperative thread pool, and
A main executor, which schedules jobs exclusively on the main thread
All non-MainActor actors currently use the global executor for scheduling and executing jobs, and the MainActor uses the main executor for doing the same.
As a user of Swift concurrency, this means that:
If you need a piece of code to run exclusively on the main thread, you can schedule it on the MainActor, and it will be guaranteed to run only on that thread
If you create a task on any other Actor, it will run on one (or more) of the threads in the global cooperative thread pool
And if you run against a specific Actor, the Actor will manage locks and other concurrency primitives for you, so that tasks don't modify shared state concurrently (see the sketch just after this list)
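As an illustration of that last point, here is a minimal sketch (the Counter actor and the 0..<100 loop are made up for the example, and the code assumes it runs inside an async context):
actor Counter {
    var value = 0   // mutable state, protected by the actor

    func increment() {
        value += 1
    }
}

let counter = Counter()

// One hundred concurrent tasks, all mutating the same state. The actor
// serializes the increments, so the final value is always 100.
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<100 {
        group.addTask { await counter.increment() }
    }
}
print(await counter.value)   // 100
The individual increments may run on any cooperative-pool thread; the actor guarantees only that they never overlap.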
With all of this, to get to your questions:
Why doesn't Swift guarantee resuming on the same thread?
As mentioned in the comments above — because:
It shouldn't be necessary (as tasks should be isolated in a way that the specifics of "which thread are we on?" don't matter), and
Being able to use any one of the available cooperative threads means that you can continue making progress on all of your work much faster
However, the "main thread" is special in many ways, and as such, the #MainActor is bound to using only that thread. When you do need to ensure you're exclusively on the main thread, you use the main actor.
Are there any rules by which the resuming thread could be determined?
The only rule for non-@MainActor-annotated tasks is: the first available thread in the cooperative thread pool will pick up the work.
Changing this behavior would require writing and using your own Executor, which isn't quite possible yet (though there are plans to make it possible).
Are there ways to influence this behaviour, for example make sure it's resumed on the main thread?
For arbitrary threads, no — you would need to provide your own executor to control that low-level detail.
However, for the main thread, you have several tools:
When you create a Task using Task.init(priority:operation:), it defaults to inheriting the current actor, whatever that happens to be. This means that if you're already running on the main actor, the task will continue using the main actor; but if you aren't, it will not. To state that you want the task to run on the main actor, you can annotate its operation explicitly:
Task { @MainActor in
    // ...
}
This will ensure that regardless of what actor the Task was created on, the contained code will only run on the main actor.
From within a Task: regardless of the actor you're currently on, you can always submit a job directly onto the main actor with MainActor.run(resultType:body:). The body closure is already annotated as @MainActor, and is guaranteed to execute on the main thread.
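A sketch of that second tool (self-contained; showResult is a made-up function):
func showResult(_ text: String) async {
    // We may be on any cooperative-pool thread here...
    await MainActor.run {
        // ...but this closure is @MainActor-isolated and is guaranteed
        // to execute on the main thread.
        print("on main: \(text)")
    }
}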
Note that a detached task never inherits from the current actor, so a detached task is guaranteed to be implicitly scheduled through the global executor instead.
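For example (a sketch):
Task { @MainActor in
    // This outer task inherits the main actor, but the detached task
    // below does not: it starts on the cooperative thread pool even
    // though it is created from main-actor code.
    Task.detached {
        print("running on a cooperative pool thread")
    }
}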
My study of Swift concurrency & subsequent questions above were triggered by finding that a Task started from code running on the main thread (in SwiftUI) was executing its block on another thread.
It would help to see specific code here to explain exactly what happened, but two possibilities:
You created a Task that was not explicitly @MainActor-annotated, and it happened to begin execution on the current thread. However, because it wasn't bound to the main actor, it got suspended and resumed on one of the cooperative threads
You created a Task which contained other Tasks within it, which may have run on other actors, or were explicitly detached tasks — and that work continued on another thread
For even more insight into the specifics here, check out Swift concurrency: Behind the scenes from WWDC 2021, which @Rob linked in a comment. There's a lot more to the specifics of what's going on, and it may be interesting to get an even lower-level view.
If you want insights into the threading model underlying Swift concurrency, watch WWDC 2021 video Swift concurrency: Behind the scenes.
In answer to a few of your questions:
Why doesn't Swift guarantee resuming on the same thread?
Because, as an optimization, it can often be more efficient to run it on some thread that is already running on a CPU core. As they say in that video:
When threads execute work under Swift concurrency they switch between continuations instead of performing a full thread context switch. This means that we now only pay the cost of a function call instead. …
You go on to ask:
Are there any rules by which the resuming thread could be determined?
Other than the main actor, no, there are no assurances as to which thread it uses.
(As an aside, we’ve been living with this sort of environment for a long time. Notably, GCD dispatch queues, other than the main queue, make no such guarantee that two blocks dispatched to a particular serial queue will run on the same thread, either.)
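A quick sketch of that GCD behaviour (the queue label is arbitrary):
import Foundation

let serial = DispatchQueue(label: "com.example.serial")
// The two blocks never run at the same time, but GCD is free to
// execute them on different worker threads.
serial.async { print("first block:", Thread.current) }
serial.async { print("second block:", Thread.current) }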
Are there ways to influence this behaviour, for example make sure it's resumed on the main thread?
If we need something to run on the main actor, we simply isolate that method to the main actor (with the @MainActor designation on either the closure, the method, or the enclosing class). Theoretically, one can also use MainActor.run {…}, but that is generally the wrong way to tackle it.
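For example, a sketch of the method-level designation (ViewModel and its status property are made up):
class ViewModel {
    var status = ""   // hypothetical UI-facing state

    @MainActor
    func updateStatus(_ text: String) {
        // Always runs on the main thread; callers that aren't already
        // on the main actor must `await` this method.
        status = text
    }
}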

Use of serialized target queues for Concurrent queues in iOS

I was going through this excellent blog post about target queues in iOS
(http://www.humancode.us/2014/08/14/target-queues.html)
and I could not help but wonder why we need such a mechanism. In the example, a serialised target queue is specified for a custom concurrent queue. Can we not achieve the same by executing the blocks from the original concurrent queue on a serialised queue instead?
What's the point of having a serialised target queue for a concurrent queue?
If I got you right, you're asking why someone would run a serial task on a concurrent queue.
You would need that kind of behaviour when most tasks on some resource can be performed concurrently (that is, simultaneously), but some tasks are, by nature, unsafe to perform concurrently with others.
The most common example is the readers/writers problem. Here you are accessing, say, some file-system resource. Reading it, even from different threads, is fine: every reader gets exactly what it needs. But at some point the file's contents need to be updated. Modifying them while someone reads leads to unpredictable results: the reader is not guaranteed to get the right, expected data (it may see part of the old version and part of the new). Even worse, there can be two writers (if the file's contents are changed both by the application user and from some central storage over the network), and the result will be some crazy mix of the two versions; in fact, the file can even end up corrupted.
Hence the necessity for each writer to wait until all other tasks have finished (no one reads, no one writes), and for each reader to wait until no writing takes place (no one writes, no matter how many readers).
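The answer describes the pattern generically; as a concrete illustration, here is a minimal Swift sketch using GCD's barrier flag (a different mechanism from the blog post's target queues, but one common way to implement readers/writers; all names are made up):
import Foundation

let queue = DispatchQueue(label: "com.example.file", attributes: .concurrent)
var contents = "v1"

func read() -> String {
    // Readers run concurrently with one another.
    queue.sync { contents }
}

func write(_ new: String) {
    // The barrier waits for in-flight readers, runs alone,
    // then lets readers (and other writers) resume.
    queue.async(flags: .barrier) { contents = new }
}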
Wikipedia has a nice article on this one. I haven't run into any other practical situation where you would need this, but I believe there are more of them.
Hope this answers your question.

How to terminate a long running isolate

I am trying to understand how I should port my Java chess engine to Dart.
I have understood that I should use Isolates and/or Futures to run my engine in parallel with the GUI, but how can I force the engine to terminate the search?
In Java I just set a boolean that was shared between the engine thread and the GUI thread.
You should send a message to the isolate, telling it to stop. You can simply do something like:
port.send('STOP');
To be clear, isolates and futures are two different things, and you use them differently.
Use an isolate when you want some code to truly run concurrently, in a separate "isolated memory heap". An isolate is like a mini program, running separately from your main program. You send isolates messages, and you can receive messages from isolates.
Use a future when you want to be notified when a value is available later. "Later" is defined as "a future tick in the event loop". Each isolate has its own event loop. It's important to understand that just asking a Future to run a function doesn't make the function run in parallel. It just puts the function onto the event loop to be run "later".
Answering the implied question 'how can I get a long-running task in an isolate to cease running?' rather than the more explicitly asked 'how can I cause an isolate to terminate, release its resources and generally cease to be?':
Break the long running task up into smaller, shorter running units.
Execute each unit with a Future. Chain futures as appropriate.
Provide a flag that each unit should check before executing its logic. If the flag is set, bail.
Listen for a 'stop' message and set the flag if/when received.
Splitting the main processing task up into Futures allows the handling of the stop message to get onto the event queue in between units of the main task.
There is now iso.Isolate.kill() (with iso an import prefix for dart:isolate).
WARNING: This method is experimental and not handled on every platform yet.

How can I check from my code if there's something enqueued in Sidekiq?

When certain conditions are met, I'd like to schedule a worker to run a particular job in 5 minutes. The thing is, if the same conditions are met again, I want to check whether something is already scheduled to run. If such a worker is scheduled, I don't want to enqueue it again; if there isn't one, it should be queued. I hope you understand what I'm trying to do. Can it be achieved? If yes, how?
Sounds like you want to use or implement a simple persisted lock. The code that enqueues the job can first check for the availability of the lock, acquire and enqueue if available, skip if not. The enqueued job can be responsible for releasing the lock. You'll want to account for failure, like adding a lock timeout. The redis-mutex gem may be a useful implementation of this idea.
Best practices promote jobs that are idempotent. This means that you should be writing them in such a way that it should be safe to run them more than once. Any subsequent call doesn't change the result of the first call. You achieve this by writing logic that does the proper checks, and acts accordingly. Since you don't provide a description of what your worker does, I can't be more specific.
For an example, here is a link to Sidekiq's FAQ: Make your workers idempotent and transactional.
The benefit of this approach is that you're working with the convenient abstraction of scheduled workers, instead of fighting against it.

Does OmnithreadLibrary support "work stealing"?

Work stealing is for example available in the Fork / Join framework on the Java platform. (See How is the fork/join framework better than a thread pool?) - is something similar possible with the OmniThreadLibrary?
Work stealing: worker threads that run out of things to do can steal tasks from other
threads that are still busy.
I don't know if I would call this technique "work stealing", but OmniThreadLibrary does indeed keep all your cores busy when executing the Fork/Join abstraction.
When you use Fork/Join, you send a task into the computation pool by calling Compute. When you call Value to get the result of a subcomputation, or Await to wait for a subcomputation to finish, and the subcomputation has not yet completed its work, Value/Await will take another task from the computation pool and execute it. When this new task is finished, it will again check whether the subcomputation has completed its work and, if not, it will process the next subtask.
This mechanism is further described on the OmniThreadLibrary wiki.
EDIT
I don't think the Fork/Join approach should be called "work stealing". In the OmniThreadLibrary implementation, a work item is never assigned to a thread until the thread starts executing it. And once a thread starts executing it, nobody can steal it, as there would be no purpose in that.
