Going through the FreeRTOS manual, I came across the following passage:
It is important to note that the end of a time slice is not the only
place that the scheduler can select a new task to run; as will be
demonstrated throughout this book, the scheduler will also select a
new task to run immediately after the currently executing task enters
the Blocked state, or when an interrupt moves a higher priority task
into the Ready state.
I am confused about how preemption works in FreeRTOS. Consider a task A with priority 1 that is in the RUNNING state, and a task B with higher priority 2 that enters the READY state while task A is in the middle of its time slice.
Q1: What type of interrupt is the manual talking about?
Q2: Is an interrupt the only way to move task B to the READY state while task A is in the RUNNING state?
Q3: If the answer to Q2 is no, when would the task switch occur if it is not interrupt driven? Is it after the time slice ends, or does it happen immediately in the middle of the time slice without waiting for the end?
You describe the following situation where you have two tasks with different priorities:
Task A with priority 1 (lower), currently in RUNNING state
Task B with priority 2 (higher), entering READY state
In this situation, it's important to ask yourself what scenarios could lead to it in the first place.
The general rule of thumb when dealing with tasks of different priorities in FreeRTOS is that the higher priority task will be given all the available time, unless it cannot run because it is SUSPENDED, BLOCKED (waiting for a queue, semaphore or mutex) or in a delay (which also counts as the BLOCKED state). Therefore, in your case, task A would never enter the RUNNING state unless task B was previously either SUSPENDED or BLOCKED.
To answer your questions:
Q1: What type of interrupt is the manual talking about?
I'd assume they're talking about a situation where task B is in the Blocked state waiting for a semaphore or queue and you "give the semaphore" / "send to the queue" from an interrupt. Examples of this happening: an I/O pin level interrupt giving a semaphore, or a UART interrupt pushing a received byte into a queue.
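A minimal sketch of what that typically looks like (the handler and handle names below are made up for illustration, not taken from the manual):

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xEventSemaphore;  /* created elsewhere with xSemaphoreCreateBinary() */

void vExternalPin_IRQHandler(void)         /* hypothetical vendor ISR name */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Unblock task B, which is pending on this semaphore. */
    xSemaphoreGiveFromISR(xEventSemaphore, &xHigherPriorityTaskWoken);

    /* Request a context switch on ISR exit if a higher priority task was woken,
       so task B runs as soon as the interrupt returns. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}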
Q2: Is an interrupt the only way to move task B to the READY state while task A is in the RUNNING state?
I'd say no. Other examples that come to mind, apart from the interrupt cases mentioned above (a minimal code sketch follows this list):
Task B is SUSPENDED and task A decides to resume it. When you do so, task B resumes execution immediately and takes all the available time from that point on, unless it again enters the SUSPENDED or BLOCKED state.
Task B is waiting for a mutex held by task A and task A releases it.
Task B is waiting on a semaphore/queue and task A "gives semaphore", "sends to queue".
Task B was in a delay and that delay ended.
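A minimal sketch of the non-interrupt cases (handle names are made up for illustration; each unblocking call below can make the higher priority task B Ready and trigger a switch mid time slice):

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

/* Handles assumed to be created during start-up. */
extern TaskHandle_t      xTaskBHandle;
extern SemaphoreHandle_t xSharedMutex;

void vTaskA(void *pvParameters)            /* the lower priority task */
{
    (void)pvParameters;
    for (;;)
    {
        xSemaphoreTake(xSharedMutex, portMAX_DELAY);
        /* ... work on the shared resource ... */

        /* Either call moves task B to the Ready state from task context.
           Because B has the higher priority, the scheduler switches to it
           inside the call - no interrupt, no end of time slice needed. */
        xSemaphoreGive(xSharedMutex);      /* B was BLOCKED waiting for the mutex */
        vTaskResume(xTaskBHandle);         /* or: B was SUSPENDED and is resumed here */
    }
}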
Q3: If the answer to Q2 is no, when would the task switch occur if it is not interrupt driven? Is it after the time slice ends, or does it happen immediately in the middle of the time slice without waiting for the end?
I already listed possible examples above. To state it again: when you have two tasks with different priorities, then unless the higher priority task gets into the BLOCKED or SUSPENDED state, it will take all the available time away from the lower priority task. While technically you can still speak of "time slices" in this case, all of the slices will be assigned to and consumed by the higher priority task. Therefore, speaking of "time slicing" only really makes sense when you have two or more tasks running at the same priority, in which case time should be split between them evenly (unless they get BLOCKED or SUSPENDED).
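For completeness, a minimal sketch of the case where time slicing does apply (two tasks created at the same priority; this assumes configUSE_PREEMPTION and configUSE_TIME_SLICING are both 1 in FreeRTOSConfig.h, and the task functions are simple for(;;) loops defined elsewhere):

#include "FreeRTOS.h"
#include "task.h"

extern void vTaskA(void *pvParameters);
extern void vTaskB(void *pvParameters);

int main(void)
{
    /* Same priority (1): the scheduler shares tick-length time slices
       between A and B. Create one of them at priority 2 instead and it
       would take all of the CPU time whenever it is Ready. */
    xTaskCreate(vTaskA, "A", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(vTaskB, "B", configMINIMAL_STACK_SIZE, NULL, 1, NULL);

    vTaskStartScheduler();  /* only returns if there is not enough heap */
    for (;;) {}
}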
Related
I'm completely new to FreeRTOS. I have two tasks: the first one must run continuously in a loop, and the second one should run only after an interrupt. After the second one is done, execution should return to the first one, which needs to start from the beginning (this is important because the first task collects data, and if it continued from the place where it was interrupted I would get garbage).
Can I use a semaphore for this, or is there something better? Thank you in advance.
It is not clear what you are asking or what you want to use the semaphore for. Protecting data access by both the interrupt and the first task? Or maybe signaling the first task? From what I can make out it sounds like you want to have a lower priority task running continuously, then when an interrupt occurs have the interrupt handler unblock a higher priority task that will then preempt the lower priority task and execute. Then when it finishes and blocks again the scheduler will naturally continue running the lower priority task. I'm confused by your statement that if you continue executing from where it was interrupted you will get trash though - interrupts always return to where they interrupted.
The most efficient way of unblocking a task from an interrupt would be a direct-to-task notification. I would also recommend reading some of the generic FreeRTOS documentation and books available on the FreeRTOS.org site.
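A minimal sketch of that approach (the ISR and task names below are made up for illustration): the interrupt handler unblocks a higher priority worker task with a direct-to-task notification, and that task preempts the continuously running lower priority task until it blocks again.

#include "FreeRTOS.h"
#include "task.h"

extern TaskHandle_t xWorkerTaskHandle;     /* the higher priority task, created at start-up */

void vSomePeripheralISR(void)              /* hypothetical interrupt handler */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    vTaskNotifyGiveFromISR(xWorkerTaskHandle, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

void vWorkerTask(void *pvParameters)       /* higher priority: preempts the data-collection task */
{
    (void)pvParameters;
    for (;;)
    {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* block here until the ISR notifies */
        /* ... handle the event; the lower priority task then resumes and
           can restart its data collection from the beginning ... */
    }
}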
I'm using TaskRouter to create a Workspace, Tasks, Queues, Workers and Workflows.
When a Task enters a queue I need to perform some operations that may take up to a minute before I want the task to move to the next queue, even if there are 0 available resources in its current queue.
Is there a way to manually update the Task/Call to put it in another queue? Or is there a Workflow configuration that prevents the Task from moving to the next queue until a certain amount of time has passed or specific conditions have been met?
TaskRouter engineer here!
Have a look at Workflow Timeouts in the docs. They allow the Task to sit in a target for a period of time before falling down to the next target (which may or may not move its Queue as well, depending on how you configure the next Target).
You also mentioned not waiting before moving on to the next Queue. For this you'd use a skip_if expression, which, if it evaluates to true, immediately skips to the next target regardless of the timeout.
Note, related but not the same: iPhone - Grand Central Dispatch main thread
I've failed at this question many times, so here's source code:
While on the main thread
dispatch_async( dispatch_get_main_queue(), ^{ NSString * str = @"Interrupt myself to do something."; } );
I'm just curious: when a thread switches, it stores its registers in Thread Local Storage, switches context, runs from its new spot in the Program Counter (which I assume is within a copy of the program that simply uses a different stack and registers), then it "goes back" to the main thread.
When it interrupts itself, I'm just wondering what decides when it should, and what happens to the Thread Local stuff.
I've read up on this a little, but I'm still wrapping my head around the fact that programs are not continuous. They're just "something to do in small chunks when the OS decides to run a chunk of a process, or its chunks (threads)".
I am self-taught, which might add to my lack of register/asm knowledge that may be standard to any scholar.
Thanks. The code should help; this is iOS specific, but I believe the answer/question applies to any language going from main-to-main.
Since every past attempt has resulted in lengthy answers that ignore the reason I'm asking this, I will reiterate one last time....
This is for the SAME thread. Main-to-main. Does it really just stop itself, move the program counter elsewhere, go, then end at the block? Also, don't these things usually change at branches (if/for, and blocks too)?
Pointing me in the right direction works too, but like I said, previously the question was misread.
It is hard to answer your question specifically without having access to the internals of GCD, but generically, the answer is no, simply adding a unit of work to a dispatch queue will not immediately interrupt the executing code.
As you suggest, context switches are expensive, not only in terms of state saving and restoration, but also because the processor needs to dump the instruction pipeline, resulting in wasted cycles.
Typically the operating system will keep executing the current task until it suspends (e.g. waits on a network or other IO operation) or perhaps is interrupted by some external event (pressing the home key on the phone), but there are also time limits to prevent a runaway task from locking up the whole device. (This is pre-emptive multi-tasking, as opposed to co-operative multitasking where the task needs to relinquish the CPU.)
With dispatch_async there is no guarantee of when the code will execute in relation to the current code block. The code block may not even be next in the queue - other threads may have added other units of work to the queue before this one.
I think the thing that's confusing you is the use of dispatch_async( dispatch_get_main_queue()), which submits code to run on a queue on the main thread.
Using dispatch_async on the main queue:
When you call dispatch_async( dispatch_get_main_queue()), it adds a unit of work to the main queue, which runs its jobs on the main thread.
If you run this call from the main thread, the results are the same. The work gets added to the main queue for later processing.
When you make this call from the main thread, the system doesn't check the main queue for work to do until your code returns.
Think of this as a one-cook kitchen. As the cook works, he puts trays of dishes in the dishwashing area. He doesn't stop to do dishes until he gets to a breaking point in what he's currently doing. At that point he takes a tray of dishes, loads it into the dishwasher, and then goes back to cooking.
The cook knows that he has to check for dishes each time he gets to a breaking point, and then completes a dishwashing task before returning to cooking.
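Concretely, here's a minimal sketch using GCD's plain C API in a command-line tool (the deferred_work name is made up; dispatch_async_f is the block-free C variant of dispatch_async, and dispatch_main() stands in for an app's main run loop). The enqueued function only runs after the submitting code has finished:

#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

static void deferred_work(void *context)
{
    (void)context;
    printf("2: runs only after the submitting code has returned control to the queue\n");
    exit(0);  /* end the demo once the deferred work has run */
}

int main(void)
{
    /* Enqueue work on the main queue from the main thread... */
    dispatch_async_f(dispatch_get_main_queue(), NULL, deferred_work);

    /* ...and keep executing: nothing interrupts us here. */
    printf("1: still running, the enqueued work has not preempted us\n");

    dispatch_main();  /* hand control to the main queue so it can run its jobs */
}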
Using dispatch_async on a background queue:
A dispatch_async call to a background queue is like a 2-person kitchen. There is a dishwasher working at the same time. The cook puts a tray of dishes into the dishwashing station (the queue) and the dishwasher (the other thread) picks up that task as soon as it's finished with its previous tasks, while the cook continues to work on cooking.
The above assumes a machine with multiple processors, which is the norm these days. Each processor can do work at the same time without having to juggle multiple tasks.
If you are running on a single-core system with preemptive multitasking, submitting tasks to separate threads/background queues has the same effect as if there were multiple processors, but now the OS has to do a juggling act. There's only one person in the kitchen, but he wears multiple hats. The person is doing the cook job, and the OS shouts "Switch!" The cook jots down notes on what he was doing (saves state) and then jumps into the dish-pit and starts washing dishes, and keeps washing dishes until the OS yells "Switch!" again, and the worker again saves state, switches to the next role, and picks up that role (cook) where it was left off.
Multi-tasking is more costly on a single-core system because each time the worker switches roles, it has to save the current state, then load the saved state for the other role, and continue. Those context switches take time.
Does the following function block on its running core?
wait(Sec) ->
receive
after (1000 * Sec) -> ok
end.
A great answer will detail the internal workings of Erlang and/or the CPU.
The process which executes that code will block, but the scheduler which is currently running that process will not. The code you posted is equivalent to a yield, but with a timeout.
The Erlang VM scheduler for that core will continue to execute other processes until that timeout fires and that process will be scheduled for execution again.
Short answer: this will block only the current (lightweight) process, and will not block the whole VM. For more details you should read about the Erlang scheduler. A nice description comes from the book "Erlang Programming" by Francesco Cesarini and Simon Thompson.
...snip...
When a process is dispatched, it is assigned a number of reductions
it is allowed to execute, a number which is reduced for every
operation executed. As soon as the process enters a receive clause
where none of the messages matches or its reduction count reaches
zero, it is preempted. As long as BIFs are not being executed, this
strategy results in a fair (but not equal) allocation of execution
time among the processes.
...snip...
Nothing Erlang-specific here; it's a pretty classical situation: timeouts can only fire on a system clock interrupt. Same answer as above: that process is blocked waiting for the clock interrupt; everything else keeps working just fine.
There is another discussion to be had about the actual time the process is going to wait, which is not perfectly precise because it depends on the clock period (and that is system dependent), but that's another topic.
In Linux, with POSIX threads, is it possible to hint to the scheduler to schedule a particular thread? The actual scenario is that I have a process which is a replica of another process. For deterministic execution, the follower process needs to acquire the locks in the same order as the leader process.
So for example, say that in the leader process, mutex a is locked first by thread 2, then by thread 3 and then by thread 4. The follower must execute in the same order. So if, in the follower, thread 3 encounters mutex a first, I want thread 3 to say to the scheduler: OK, I'm giving up my time slice, please schedule thread 2 instead. I know this can be achieved by modifying the scheduler, but I do not want that; I want to be able to control this from a user space program.
In any system, Linux, Windows, POSIX or not, if you have to ask this sort of question then I'm afraid that your app is heading for a dark place :(
Even if thread 3 were to yield, say with sleep(0), an interrupt straight after might well just schedule thread 3 back on again, preempting thread 2, or the OS might run thread 3 straightaway on another free core and it could get to the mutex first.
You have to make your app work correctly (maybe not optimally), independently of the OS scheduling/dispatching algorithms. Even if you get your design to work on a test box, you will end up having to test your system on every combination of OS/hardware to ensure that it still works without deadlocking or performing incorrectly.
Fiddling with scheduling algorithms, thread priorities etc. should only be done to improve the performance of your app, not to try and make it work correctly or to stop it locking up!
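For what it's worth, here's a minimal sketch (illustrative code, not from the original answer) of one way to enforce the leader's recorded order purely in user space: a turn variable guarded by a condition variable, so each follower thread sleeps until it is its turn to take mutex a.

#include <pthread.h>

static pthread_mutex_t order_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  order_cond = PTHREAD_COND_INITIALIZER;
static int next_thread = 2;               /* leader's recorded order: 2, then 3, then 4 */

/* Each follower thread calls this before taking mutex a. */
static void wait_for_my_turn(int my_id)
{
    pthread_mutex_lock(&order_lock);
    while (next_thread != my_id)          /* not our turn yet: sleep, do not spin or yield */
        pthread_cond_wait(&order_cond, &order_lock);
    pthread_mutex_unlock(&order_lock);
}

/* Called once a thread has taken mutex a, to let the next thread proceed. */
static void pass_turn(int next_id)
{
    pthread_mutex_lock(&order_lock);
    next_thread = next_id;
    pthread_cond_broadcast(&order_cond);  /* wake all waiters; only the right one continues */
    pthread_mutex_unlock(&order_lock);
}

This behaves the same no matter how the OS schedules the threads, which is exactly the property described above.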
Rgds,
Martin