I have a query related to RTOS. We are using Nucleus RTOS, but my question is generic. Suppose a task is executing and its preemption is disabled; does the task then become atomic in nature? What I am asking is: once preemption is disabled for a task, can an ISR still interrupt it? Disabling preemption means that no other task can preempt the task that is currently executing, so can an ISR still cause an interruption in this situation? Does disabling preemption also mean disabling interrupts?
Disabling thread preemption does not imply disabling interrupts; it merely means that rescheduling will not occur. If, however, you disable interrupts, you disable both.
The RTOS documentation should be clear on this - consult the documentation for whatever call you are using to lock the scheduler.
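To make the distinction concrete, here is a minimal FreeRTOS-flavoured sketch (the question is about Nucleus, which has its own equivalent calls; the function and variable names below are made up for illustration). Suspending the scheduler only prevents task switches, while a critical section masks interrupts as well:

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"

    static volatile uint32_t taskSharedCounter;
    static volatile uint32_t isrSharedCounter;

    void vUpdateSharedCounters(void)
    {
        /* Scheduler locked: other tasks cannot preempt us, but ISRs still run. */
        vTaskSuspendAll();
        {
            taskSharedCounter++;   /* safe against other tasks, NOT against ISRs */
        }
        xTaskResumeAll();

        /* Critical section: maskable interrupts are disabled as well. */
        taskENTER_CRITICAL();
        {
            isrSharedCounter++;    /* safe against other tasks AND maskable ISRs */
        }
        taskEXIT_CRITICAL();
    }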
Related
Going through the manual for FreeRTOS, I encountered a sentence where it mentions:
It is important to note that the end of a time slice is not the only
place that the scheduler can select a new task to run; as will be
demonstrated throughout this book, the scheduler will also select a
new task to run immediately after the currently executing task enters
the Blocked state, or when an interrupt moves a higher priority task
into the Ready state.
I am confused about the way preemption works in FreeRTOS. Consider a task A with priority 1 that is in the RUNNING state. Also consider a task B with higher priority 2 that enters the READY state while task A is in the middle of its time slice.
Q1: What type of interrupt is the manual talking about?
Q2: Is an interrupt the only way to take task B to the READY state while task A is in the RUNNING state?
Q3: If the answer to Q2 is no, when does the task switch occur if it is not interrupt driven? Is it after the time slice ends, or immediately in the middle of the time slice without waiting for it to end?
You describe the following situation where you have two tasks with different priorities:
Task A with priority 1 (lower), currently in RUNNING state
Task B with priority 2 (higher), entering READY state
In this situation, it's important to ask yourself what scenarios could have led to it in the first place.
The general rule of thumb when dealing with tasks of different priorities in FreeRTOS is that the higher priority task will be given all the available time, unless it cannot run because it is SUSPENDED, BLOCKED (waiting for a queue, semaphore or mutex), or in a delay (which also counts as BLOCKED). Therefore, in your case, task A would never enter the RUNNING state unless task B was previously either SUSPENDED or BLOCKED.
To answer your questions:
Q1: What type of interrupt is the manual talking about?
I'd assume they're talking about a situation where task B is in a blocked state because it is waiting for a semaphore/queue and you "give a semaphore" / "send to queue" from an interrupt. Examples of this happening: an IO level interrupt giving a semaphore, a UART interrupt pushing a received byte into a queue.
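As a rough sketch of that case (the handler name vUartRxISR and the handle xRxSemaphore are made-up illustrations, not code from the book), an ISR can unblock a higher priority task like this:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xRxSemaphore;   /* created elsewhere with xSemaphoreCreateBinary() */

    /* Hypothetical UART receive interrupt handler. */
    void vUartRxISR(void)
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        /* Moves the task blocked on the semaphore to the READY state. */
        xSemaphoreGiveFromISR(xRxSemaphore, &xHigherPriorityTaskWoken);

        /* If that task has a higher priority than the interrupted one,
           request a context switch on exit from the ISR. */
        portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
    }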
Q2: Is an interrupt the only way to take task B to the READY state while task A is in the RUNNING state?
I'd say no. Other examples that come to mind (apart from the interrupt cases mentioned above; see the sketch after this list):
Task B is SUSPENDED and task A decides to resume task B. When you do so, task B should resume execution immediately and take all the available time from this point on, unless it again enters SUSPENDED or BLOCKED state.
Task B is waiting for a mutex held by task A and task A releases it.
Task B is waiting on a semaphore/queue and task A "gives semaphore", "sends to queue".
Task B was in a delay and that delay ended.
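As promised above, here is a minimal sketch of the semaphore case between two tasks (task and handle names are made up for illustration). The moment the lower priority task gives the semaphore, the higher priority task becomes READY and preempts it immediately, mid-slice:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xSignal;   /* created elsewhere with xSemaphoreCreateBinary() */

    /* Higher priority task B: stays BLOCKED until the semaphore is given. */
    void vTaskB(void *pvParameters)
    {
        for (;;)
        {
            xSemaphoreTake(xSignal, portMAX_DELAY);   /* BLOCKED here */
            /* ... handle the event ... */
        }
    }

    /* Lower priority task A: only runs while B is blocked. */
    void vTaskA(void *pvParameters)
    {
        for (;;)
        {
            /* ... do background work ... */
            xSemaphoreGive(xSignal);   /* B becomes READY and preempts A right here */
        }
    }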
Q3: If the answer to Q2 is no, when does the task switch occur if it is not interrupt driven? Is it after the time slice ends, or immediately in the middle of the time slice without waiting for it to end?
I already listed possible examples above. To say it again: when you have two tasks with different priorities, then unless the higher priority task enters the BLOCKED or SUSPENDED state, it will take all available time from the lower priority task. While technically you can still speak of "time slices" in this case, all of the slices will be assigned to and consumed by the higher priority task. Therefore, speaking of "time slicing" only really makes sense when you have two or more tasks running at the same priority, in which case the time should be split between them evenly (unless they get BLOCKED or SUSPENDED).
I'm completely new to FreeRTOS. I have two tasks: the first one must run continuously in a loop, and the second one should run only after an interrupt. After the second one is done, execution should return to the first one, which needs to start from the beginning (this is important because the first task collects data, and if it continued from the point where it was interrupted, the data would be garbage).
Can I use a semaphore for this, or is there something better? Thank you in advance.
It is not clear what you are asking or what you want to use the semaphore for. Protecting data access by both the interrupt and the first task? Or maybe signaling the first task? From what I can make out it sounds like you want to have a lower priority task running continuously, then when an interrupt occurs have the interrupt handler unblock a higher priority task that will then preempt the lower priority task and execute. Then when it finishes and blocks again the scheduler will naturally continue running the lower priority task. I'm confused by your statement that if you continue executing from where it was interrupted you will get trash though - interrupts always return to where they interrupted.
The most efficient way of unblocking a task from an interrupt would be a direct-to-task notification. I would also recommend reading some of the generic FreeRTOS documentation and books available on the FreeRTOS.org site.
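A rough sketch of that pattern, assuming the interrupt handler and task names (they are not from the question):

    #include "FreeRTOS.h"
    #include "task.h"

    static TaskHandle_t xSecondTask;   /* stored when the second (higher priority) task is created */

    /* Hypothetical interrupt handler that wakes the second task. */
    void vMyISR(void)
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        vTaskNotifyGiveFromISR(xSecondTask, &xHigherPriorityTaskWoken);
        portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
    }

    /* Second (higher priority) task: blocked until the ISR notifies it. */
    void vSecondTask(void *pvParameters)
    {
        for (;;)
        {
            ulTaskNotifyTake(pdTRUE, portMAX_DELAY);   /* BLOCKED until notified */
            /* ... process the event, then loop back and block again;
               the first task then resumes and can restart its collection loop ... */
        }
    }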
A couple of quotes from the manual.
Quoting man 3 pthread_mutex_unlock:
None of the mutex functions is a cancellation point, not even pthread_mutex_lock, in spite of the fact that it can suspend a thread for arbitrary durations. This way, the status of mutexes at cancellation points is predictable, allowing cancellation handlers to unlock precisely those mutexes that need to be unlocked before the thread stops executing.
But one paragraph later it is written that:
The mutex functions are not async-signal safe. What this means is that they should not be called from a signal handler. In particular, calling pthread_mutex_lock or pthread_mutex_unlock from a signal handler may deadlock the calling thread.
OK, so the manual orders me to unlock mutexes in a cleanup handler, but prohibits me from unlocking mutexes in a signal handler. Well, quoting man 3 pthread_cancel:
On Linux, cancellation is implemented using signals.
Ah. So a thread is cancelled by receiving a signal.
Doesn’t this make a cancellation cleanup handler actually a signal handler? Or maybe rather, I dunno, the cleanup handler is being called from a signal handler whose default action is to call functions installed by pthread_cleanup_push? One cannot deny that these cleanup handlers are called when a signal is being received.
But this would make the manual contradict its own statements…
How should I understand this properly?
The fact that cancellation is handled using signals on Linux is an implementation detail. It doesn't mean you're only allowed to use async-signal-safe functions in cancellation cleanup handlers.
At least for deferred cancellation at a cancellation point, POSIX doesn't limit the functions that can be called, so the implementation has to make that work.
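A minimal sketch of that: pthread_cond_wait() is a cancellation point, and a cleanup handler pushed with pthread_cleanup_push() unlocks the mutex if the thread is cancelled while waiting (names are illustrative):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    /* Cleanup handler: runs if the thread is cancelled while holding the mutex. */
    static void unlock_mutex(void *arg)
    {
        pthread_mutex_unlock(arg);
    }

    static void *worker(void *unused)
    {
        pthread_mutex_lock(&lock);
        pthread_cleanup_push(unlock_mutex, &lock);

        /* pthread_cond_wait() is a cancellation point; if pthread_cancel()
           takes effect here, the mutex is reacquired and the cleanup
           handler above unlocks it before the thread terminates. */
        while (!ready)
            pthread_cond_wait(&cond, &lock);

        pthread_cleanup_pop(1);   /* pop and run the handler: unlocks the mutex */
        return NULL;
    }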
Work stealing is, for example, available in the Fork/Join framework on the Java platform (see How is the fork/join framework better than a thread pool?). Is something similar possible with the OmniThreadLibrary?
Work stealing: worker threads that run out of things to do can steal tasks from other
threads that are still busy.
I don't know if I would call this technique "work stealing", but OmniThreadLibrary does indeed keep all your cores busy when executing the Fork/Join abstraction.
When you use Fork/Join, you send a task into the computation pool by calling Compute. When you call Value to get the result of a subcomputation, or Await to wait for a subcomputation to finish, and that subcomputation has not yet completed its work, Value/Await will take another task from the computation pool and execute it. When this new task is finished, it will again check whether the subcomputation has completed its work, and if not it will process the next subtask.
This mechanism is further described on the OmniThreadLibrary wiki.
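OmniThreadLibrary is a Delphi library, so the following is only a language-agnostic sketch (in C, with made-up names) of the "help while waiting" mechanism described above, not OTL code: instead of blocking, the waiter keeps pulling other tasks from the pool until the awaited one is done.

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>

    typedef struct task {
        void (*run)(struct task *);
        atomic_bool done;
        struct task *next;
    } task_t;

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static task_t *pool_head;          /* simple linked-list "computation pool" */

    static task_t *pool_pop(void)
    {
        pthread_mutex_lock(&pool_lock);
        task_t *t = pool_head;
        if (t)
            pool_head = t->next;
        pthread_mutex_unlock(&pool_lock);
        return t;
    }

    /* "Await": while the awaited subcomputation is unfinished, do not block;
       pop and execute other tasks from the pool instead. */
    static void await(task_t *awaited)
    {
        while (!atomic_load(&awaited->done)) {
            task_t *other = pool_pop();
            if (other)
                other->run(other);     /* the run callback is expected to set other->done */
            else
                sched_yield();         /* nothing to help with */
        }
    }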
EDIT
I don't think the Fork/Join approach should be called "work stealing". In the OmniThreadLibrary implementation, a work item is never assigned to a thread until the thread starts executing it. And once a thread starts executing it, nobody can steal it, as there would be no purpose in doing so.
When does one use pthread_cancel and not pthread_kill?
I would use neither of those two but that's just personal preference.
Of the two, pthread_cancel is the safer way to terminate a thread, since the thread is only supposed to be affected when it has enabled cancellation with pthread_setcancelstate() (and, with deferred cancellation, only at a cancellation point).
In other words, it shouldn't disappear while holding resources in a way that might cause deadlock. The pthread_kill() call sends a signal to the specific thread, and this is a way to affect a thread asynchronously for reasons other than cancelling it.
Most of my threads tend to be in loops doing work anyway and periodically checking flags to see if they should exit. That's mostly because I was raised in a world where pthread_kill() was dangerous and pthread_cancel() didn't exist.
I subscribe to the theory that each thread should totally control its own resources, including its execution lifetime. I've always found that to be the best way to avoid deadlock. To that end, I simply use mutexes for communication between threads (I've rarely found a need for true asynchronous communication) and a flag variable for termination.
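A minimal sketch of that flag-based approach (names are illustrative):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool stop_requested = false;   /* set by another thread to request exit */

    static void *worker(void *unused)
    {
        for (;;) {
            /* Periodically check the termination flag under the mutex. */
            pthread_mutex_lock(&state_lock);
            bool stop = stop_requested;
            pthread_mutex_unlock(&state_lock);
            if (stop)
                break;                    /* exit cleanly; the thread releases its own resources */

            /* ... do one unit of work ... */
        }
        return NULL;
    }

    /* Called by the controlling thread instead of pthread_cancel()/pthread_kill(). */
    static void request_stop(void)
    {
        pthread_mutex_lock(&state_lock);
        stop_requested = true;
        pthread_mutex_unlock(&state_lock);
    }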
You cannot "kill" a thread with pthread_kill(). If you try to send SIGTERM or SIGKILL to a thread with pthread_kill(), the signal's default action applies to the whole process, so it will terminate the entire process.
I subscribe to the theory that the PROGRAMMER, and not the THREAD (nor the API designers), should totally control the software in all aspects, including which threads cancel which.
I once worked in a firm where we developed a server that used a pool of worker threads and one special master thread that had the responsibility to create, suspend, resume and terminate the worker threads at any time it wanted. Of course the threads used some sort of synchronization, but it was of our design and not some API-enforced dogmas. The system worked very well and efficiently!
This was under Windows. Then I tried to port it to Linux and stumbled over pthreads' stupid "theories" about how wrong it is to suspend another thread, etc. So I had to abandon pthreads and directly use the native Linux system calls (clone()) to implement the threads for our server.