In Linux, with POSIX threads, is it possible to hint to the scheduler to schedule a particular thread? The actual scenario is that I have a process which is a replica of another process. For deterministic execution, the follower process needs to acquire its locks in the same order as the leader process.
So, for example, say that in the leader process, mutex a is locked first by thread 2, then by thread 3, then by thread 4. The follower must execute in the same order. So if, in the follower, thread 3 reaches mutex a first, I want thread 3 to tell the scheduler: OK, I'm giving up my time slice, please schedule thread 2 instead. I know this could be achieved by modifying the scheduler, but I do not want to do that; I want to be able to control this from a user-space program.
In any system, Linux or Windows, POSIX or not, if you have to ask this sort of question then I'm afraid that your app is heading for a dark place :(
Even if thread 3 were to yield, say with sleep(0), an interrupt straight afterwards might well just schedule thread 3 back on again, preempting thread 2, or the OS might run thread 3 straight away on another free core and it could reach the mutex first.
You have to make your app work correctly (maybe not optimally), independently of the OS scheduling/dispatching algorithms. Even if you get your design to work on a test box, you will end up having to test your system on every combination of OS/hardware to ensure that it still works without deadlocking or performing incorrectly.
Fiddling with scheduling algorithms, thread priorities etc. should only be done to improve the performance of your app, not to try and make it work correctly or to stop it locking up!
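If the follower really must replay the leader's locking order, encode that order in your own synchronization rather than in scheduling hints. A minimal pthreads sketch (the ordered_lock wrapper and the ticket numbers replayed from the leader's log are my own invention, not a standard API):

    #include <pthread.h>

    /* Hypothetical wrapper: each critical section is entered in ticket
     * order, where each thread's ticket is replayed from the leader's
     * lock-order log. */
    typedef struct {
        pthread_mutex_t m;
        pthread_cond_t  cv;
        unsigned long   next_turn;   /* ticket allowed to lock next */
    } ordered_lock;

    void ordered_lock_acquire(ordered_lock *l, unsigned long my_turn)
    {
        pthread_mutex_lock(&l->m);
        while (l->next_turn != my_turn)        /* not our turn: block here, */
            pthread_cond_wait(&l->cv, &l->m);  /* scheduling order is irrelevant */
        /* mutex stays held: this is the critical section */
    }

    void ordered_lock_release(ordered_lock *l)
    {
        l->next_turn++;                  /* hand the lock to the next ticket */
        pthread_cond_broadcast(&l->cv);
        pthread_mutex_unlock(&l->m);
    }

Whichever thread the scheduler happens to run first simply blocks until its ticket comes up, so correctness no longer depends on the scheduler at all.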
Rgds,
Martin
First off, I'd like to clarify that I'm not talking about concurrency here. I fully understand that having multiple threads modify the UI at the same time is bad and can cause race conditions, deadlocks, and other bugs, but that's separate from my question.
I'd like to know why MacOS/iOS forces the main thread (ID 0, first thread, whatever) to be the thread on which the GUI must be used/updated/created on.
See here, related: "on OSX/iOS the GUI must always be updated from the main thread, end of story."
I understand that you only ever want a single thread doing the actual updating of the GUI, but why does that thread have to be ID 0?
(this is background info; TL;DR below)
In my case, I'm making a rust app that uses a couple of threads to do things:
engine - does processing and calculations
ui - self-explanatory
program/main - monitors other threads and generally synchronizes things
I'm currently doing something semi-unsafe and creating the UI on its own thread, which works since I'm on Windows, but the API is explicitly marked as BAD to use, and it's not cross-compatible with MacOS/iOS for obvious reasons (and I want it to be as compatible as possible).
The UI/engine threads (there may be more in the future) are semi-unstable and could crash/exit early, outside of my control (external code). This has happened before, so I want a graceful shutdown if anything goes wrong, hence the 'main' thread monitoring (among the other things it does).
I am aware that I could just make Thread 0 the UI thread and move the program to another thread, but the app will immediately quit when the main thread quits, which means that if the UI crashes, the whole thing just aborts (and I don't want this). Essentially, I need my main function on the main thread, since I know it won't suddenly exit and abort the whole app abruptly.
TL;DR
Overall, I'd like to know three things:
Why does MacOS/iOS enforce the GUI being on Thread 0 (ignoring the thread-safety concerns outlined above)?
Are there any ways to bypass this (use a different thread for the GUI), or will I simply need to sacrifice those platforms (and possibly others I'm unaware of)?
Would it be possible to do something like have the UI run as a separate process, and have it share some memory/communicate with the main process, using safe, simple Rust?
P.S. I'm aware of this question; it's relevant but doesn't really answer my questions.
Why does MacOS/iOS enforce the GUI being on Thread 0?
Because it's been that way for over 30 years now (since NeXTSTEP), and changing it would break just about every program out there. Almost every Cocoa app assumes this and relies on it regularly, not just for the main thread, but also for the main run loop, the main dispatch queue, and now the main actor. External UI events (which come from other processes, like the window manager) are delivered on thread 0. NSDistributedNotifications are delivered on thread 0. Signal handling; the list goes on. Yes, it is certainly possible for Darwin (which underlies Cocoa) to be rewritten to allow this. That's not going to happen. I'm not sure what other answer you want.
Would it be possible to do something like have the UI run as a separate process, and have it share some memory/communicate with the main process, using safe, simple Rust?
Absolutely. See XPC, which is explicitly for this purpose (communicating, not sharing memory; don't share memory, that's a mess). See sys-xpc for the Rust interface.
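For a feel of what that involves, here is a rough C sketch of the underlying XPC client API that the Rust bindings wrap; the service name is made up and error handling is omitted:

    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <xpc/xpc.h>

    int main(void)
    {
        /* Connect to a (hypothetical) helper registered as a Mach service. */
        xpc_connection_t conn =
            xpc_connection_create_mach_service("com.example.ui-helper", NULL, 0);

        /* Replies and errors from the other process arrive here. */
        xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
            if (xpc_get_type(event) == XPC_TYPE_DICTIONARY)
                printf("reply: %s\n", xpc_dictionary_get_string(event, "status"));
        });
        xpc_connection_resume(conn);

        /* Messages are serialized dictionaries, not shared memory. */
        xpc_object_t msg = xpc_dictionary_create(NULL, NULL, 0);
        xpc_dictionary_set_string(msg, "command", "redraw");
        xpc_connection_send_message(conn, msg);

        dispatch_main();   /* keep the process alive for replies */
    }

The practical win for your case: if the UI helper process crashes, your main process just sees the connection error on its event handler instead of being torn down with it.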
I want to have a controller that somehow runs 3 processes to run the robot's code.
I am trying to simulate a humanoid soccer robot in Webots. To run our robot's code, we run 3 processes: one for the servomotors' power management, another for image processing and communications, and the last one for motion control.
Now I want to have a controller that somehow lets me simulate something like this, or at least something similar. Does anyone have any idea how I can do this?
Good news: the Webots API is thread-safe :-)
Generally speaking, I would not recommend using multiple threads, because programming with threads is a big source of issues. So, if you have any possibility of merging your threads into a single-threaded application, that's the way to go!
If you do want to keep the three-way split, the best solution is certainly to create a single controller running your 3 threads, and to synchronize them with the main thread (thread 0).
The tricky part is dealing correctly with time management and the simulation steps. One solution could be to set the Robot.synchronization field to FALSE and to use the main thread to call the wb_robot_step(duration) function every duration milliseconds of real time.
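As a rough C sketch of that single-controller layout (the thread bodies and TIME_STEP value are placeholders for your own code):

    #include <pthread.h>
    #include <webots/robot.h>

    #define TIME_STEP 32   /* ms; match your world's basicTimeStep */

    static void *power_loop(void *arg)  { /* servo power management   */ return NULL; }
    static void *vision_loop(void *arg) { /* image processing + comms */ return NULL; }
    static void *motion_loop(void *arg) { /* motion control           */ return NULL; }

    int main(void)
    {
        wb_robot_init();

        pthread_t workers[3];
        pthread_create(&workers[0], NULL, power_loop,  NULL);
        pthread_create(&workers[1], NULL, vision_loop, NULL);
        pthread_create(&workers[2], NULL, motion_loop, NULL);

        /* Only the main thread advances simulated time; the workers
         * read sensors and write actuator commands between steps. */
        while (wb_robot_step(TIME_STEP) != -1) {
            /* synchronize with the workers here */
        }

        wb_robot_cleanup();
        return 0;
    }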
Note, related but not the same: iPhone - Grand Central Dispatch main thread
I've failed at this question many times, so here's source code:
While on the main thread:

    dispatch_async(dispatch_get_main_queue(), ^{
        NSString *str = @"Interrupt myself to do something.";
        NSLog(@"%@", str);
    });
I'm just curious: when a thread switches, it stores its registers in thread-local storage, switches context, resumes from its new spot at the Program Counter (which I assume is within a copy of the program that simply uses a different stack and registers), and then it "goes back" to the main thread.
When it interrupts itself, I'm just wondering what decides when it should, and what happens to the Thread Local stuff.
I've read up on this a little, but I'm still wrapping my head around the fact that programs are not continuous. They're just "something to do in small chunks" whenever the OS decides to run a chunk of a process, or its chunks (threads).
I am self-taught, which might add to my lack of register/asm knowledge that may be standard to any scholar.
Thanks. The code should help; this is iOS-specific, but I believe the question/answer applies to any language going from main-to-main.
Since every past attempt has resulted in lengthy answers that ignore the reason I'm asking this, I will iterate one last time...
This is for the SAME thread. Main-to-main. Does it really just stop itself, move the program counter elsewhere, run, then end at the block? Also, don't these things usually change at branches (if/for blocks and the like)?
Pointing me in the right direction works too, but like I said, the question has previously been misread.
It is hard to answer your question specifically without access to the internals of GCD, but generically the answer is no: simply adding a unit of work to a dispatch queue will not immediately interrupt the executing code.
As you suggest, context switches are expensive; not only in terms of state saving and restoration, but the processor will also need to dump the instruction pipeline, resulting in wasted cycles.
Typically the operating system will keep executing the current task until it suspends (e.g. waits on a network or other IO operation) or perhaps is interrupted by some external event (pressing the home key on the phone), but there are also time limits to prevent a runaway task from locking the whole device. (This is pre-emptive multitasking, as opposed to co-operative multitasking, where the task needs to relinquish the CPU itself.)
With dispatch_async there is no guarantee of when the code will execute in relation to the current code block. The code block may not even be next in the queue - other threads may have added other units of work to the queue before this one.
I think the thing that's confusing you is the use of dispatch_async( dispatch_get_main_queue()), which submits code to run on a queue on the main thread.
Using dispatch_async on the main queue:
When you call dispatch_async( dispatch_get_main_queue()), it adds a unit of work to the main queue, which runs its jobs on the main thread.
If you make this call from the main thread, the result is the same: the work gets added to the main queue for later processing. The catch is that the system doesn't check the main queue for work to do until your code returns.
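You can see this ordering with a few lines of plain C against libdispatch (the same call the Objective-C snippet above makes; assumes macOS and clang's blocks extension). The "after" line always prints before the block runs, because the block can't start until control returns to the main run loop:

    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        printf("before dispatch_async\n");

        dispatch_async(dispatch_get_main_queue(), ^{
            /* Queued, not run: waits until the main thread is free. */
            printf("inside the block\n");
            exit(0);
        });

        /* Always prints before the block runs. */
        printf("after dispatch_async\n");

        dispatch_main();   /* hand the main thread over to the queue */
    }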
Think of this as a one-cook kitchen. As the cook works, he puts trays of dishes in the dishwashing area. He doesn't stop to do dishes until he gets to a breaking point in what he's currently doing. At that point he takes a tray of dishes, loads it into the dishwasher, and then goes back to cooking.
The cook knows that he has to check for dishes each time he gets to a breaking point, and then completes a dishwashing task before returning to cooking.
Using dispatch_async on a background queue:
A dispatch_async call to a background queue is like a 2-person kitchen. There is a dishwasher working at the same time. The cook puts a tray of dishes into the dishwashing station (the queue) and the dishwasher (the other thread) picks up that task as soon as it's finished with its previous tasks, while the cook continues to cook.
The above assumes a machine with multiple processors, which is the norm these days. Each processor can do work at the same time without having to juggle multiple tasks.
If you are running on a single-core system with preemptive multitasking, submitting tasks to separate threads/background queues has the same effect as having multiple processors, but now the OS has to do a juggling act. There's only one person in the kitchen, but he wears multiple hats. The person does the cook job until the OS shouts "Switch!"; the cook jots down notes on what he was doing (saves state), jumps into the dish pit and starts washing dishes, and keeps washing until the OS yells "Switch!" again, at which point the worker again saves state, switches to the next role, and picks that role (cook) up where it left off.
Multi-tasking is more costly on a single-core system because each time the worker switches roles, it has to save the current state, then load the saved state for the other role, and continue. Those context switches take time.
I've looked at the BlackBerry API (5.0) and I can't find any way of serially executing a batch of tasks. I know BlackBerry has a limit on the number of threads it will launch, so I don't want to launch 7 of them if the user clicks through things fast enough, but I cannot find anything like a thread pool.
Is there an easy fix for this or do I have to create a data structure?
If you just want to execute a bunch of tasks serially on a single thread and order isn't important, you could create a Timer object (which has its own thread), then add each task to it as a TimerTask. If you schedule each with a delay of 0 or 1, it will essentially run the task as soon as possible. And since a Timer has only a single thread, if you schedule multiple tasks concurrently, it ensures that only one runs at a time.
Incidentally, I was talking to a RIM engineer at the BlackBerry Developer Conference this year and he said that as of OS 5.0 there are no longer limits to the number of threads -- so this is becoming less and less of a concern.
I've tested Jeff Heaton's Thread Pool example on 4.5 and it works. (http://www.informit.com/articles/article.aspx?p=30483&seqNum=1).
I am developing a program in C++ and I have to implement a cron job. It should be executed every hour, and also every 24 hours, for different reasons. My first idea was to make an independent pthread and have it sleep for 1h each time. Is this correct? I mean, is it really efficient to have a thread that is asleep more than awake? What are the disadvantages of having a sleeping thread?
I would tend to prefer to have such a task run via cron or the system scheduler, since it runs at pre-determined intervals rather than in response to some environmental event. The program should just 'do' whatever it needs to do, and be executed by the operating system as needed. This also makes it easy to change the frequency of execution - just change the scheduling, rather than needing to rebuild the app or expose extra configurability.
That said, if you really, really wanted to do it that way, you probably would not sleep for the whole hour; you would sleep in multiples of some smaller time frame (perhaps five minutes, or whatever seems appropriate) and keep a variable holding the 'last run' time so you know when to run again.
Sleep() calls typically won't be exceptionally accurate about how long the thread actually ends up sleeping; it depends on what other threads have tasks waiting, etc.
There is no performance impact from having a thread sleep for a long time, aside from the fact that it will occupy some memory without doing anything useful. Maybe if you did this with thousands of threads there would be some slowdown in the OS's thread management, but it doesn't look like you are planning to do that.
A practical disadvantage of having a thread sleep for a long time is that you can't do anything with it. If you, for example, want to tell the thread that it should stop because the application wants to shut down, the thread can only get this message after the sleep. So your application either needs a very long time to shut down, or the thread will need to be stopped forcefully.
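One way around that limitation is to sleep on a condition variable with a timeout instead of a plain sleep(), so a shutdown request can wake the thread immediately. A sketch with pthreads (do_hourly_work is a stand-in for your real job):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
    static bool stopping = false;

    static void do_hourly_work(void) { /* your job goes here */ }

    static void *cron_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!stopping) {
            struct timespec deadline;
            clock_gettime(CLOCK_REALTIME, &deadline);
            deadline.tv_sec += 3600;   /* next run in one hour */

            /* Returns at the deadline, or as soon as `wake` is signalled. */
            int rc = pthread_cond_timedwait(&wake, &lock, &deadline);
            if (rc == ETIMEDOUT && !stopping) {
                pthread_mutex_unlock(&lock);
                do_hourly_work();
                pthread_mutex_lock(&lock);
            }
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

Shutdown then becomes: take the lock, set stopping, signal wake. Note that pthread_cond_timedwait measures against CLOCK_REALTIME by default, so the system-time scenarios raised in the answer below apply; pthread_condattr_setclock(CLOCK_MONOTONIC) is one way to sidestep them.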
My first idea was to make an independent pthread and have it sleep for 1h each time.
I see no problem.
Is this correct? I mean, is it really efficient to have a thread asleep more than awake?
As long as a thread sleeps and doesn't wake up, the OS won't even bother with its existence.
On the other hand, if the thread sleeps for most of its life, why have a dedicated thread at all? Why can't another thread (e.g. the main thread) check the time and start a thread to do the cron job?
What are the disadvantages of having a sleeping thread?
None, except that the sleeping thread cannot be easily unblocked. That is a concern if one needs a proper shutdown of the application, and it is why it can be a better idea to have another (busy) thread check the time and start the cron job when needed.
When designing your solution keep these scenarios in mind:
At 07:03 the system time is reset to 06:59. What happens one minute later?
At 07:03 the system time is moved forward to 07:59. What happens one minute later?
At 07:59 the system time is moved forward to 08:01. Is the 08:00-job ever executed?
The answers to those questions will tell you a lot about how you should implement your solution.
The performance of the solution should not be an issue. A single sleeping thread will use minimal resources.