Thread pools and context switching (tasks)?

This is quite a general computer science question and not specific to any OS or framework.
So I am a little confused by the overhead associated with switching tasks on a thread pool. In many cases it doesn't make sense to give every job its own dedicated thread (we don't want to create more threads than the hardware can usefully run), so instead we wrap these jobs in tasks which can be scheduled to run on a thread. We set up a pool of threads and then dynamically assign the tasks to run on threads taken from the thread pool.
I am just a little confused (I can't find an in-depth answer) about the overhead associated with switching tasks on a specific thread (in the thread pool). A Dr. Dobb's article (quoted below) states that this switching incurs overhead, but I need a more in-depth answer to what is actually happening (a citable source would be fantastic :)).
By definition, SomeWork must be queued up in the pool and then run on
a different thread than the original thread. This means we necessarily
incur queuing overhead plus a context switch just to move the work to
the pool. If we need to communicate an answer back to the original
thread, such as through a message or Future or similar, we will incur
another context switch for that.
Source: http://www.drdobbs.com/parallel/use-thread-pools-correctly-keep-tasks-sh/216500409?pgno=1
What components of the thread are actually switching? The thread itself isn't actually switching, just the data that is specific to the thread. What is the overhead associated with this (more, less, or the same as a full thread context switch)?

Let's first clarify five key concepts here and then discuss how they correlate in a thread pool context:
Thread:
In brief, a thread can be described as a program execution context, given by the code that is being run, the data in the CPU registers, and the stack. When a thread is created, it is assigned the code that should be executed in that thread's context. In each CPU cycle the thread has an instruction to execute, and the CPU registers and stack are in a given state.
Task:
Represents a unit of work. It is the code that is assigned to a thread to be executed.
Context switch (from Wikipedia):
The process of storing and restoring the state (context) of a thread so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU and is an essential feature of a multitasking operating system. As explained above, the context consists of the code that is being executed, the CPU registers, and the stack.
What is context switched is the thread. A task represents only a piece of work that can be assigned to a thread to be executed. At a given moment a thread may be executing a task.
Thread Pool (from Wikipedia):
In computer programming, a thread pool is where a number of threads are created to perform a number of tasks, which are usually organized in a queue.
Thread Pool Queue:
Where tasks are placed to be executed by threads in the pool. This data structure is a shared piece of memory where threads may compete to enqueue/dequeue tasks, which may lead to contention in high-load scenarios.
Illustrating a thread pool usage scenario (a minimal C sketch follows this list):
In your program (eventually running in the main thread), you create a task and schedule it to be executed in the thread pool.
The task is queued in the thread pool queue.
When a thread from the pool becomes free, it dequeues a task from the queue and starts to execute it.
If there are no free CPUs to run a thread from the pool, the operating system at some point (depending on the thread scheduler policy and thread priorities) will stop that thread from executing, context switching to another thread.
The operating system can stop the execution of a thread at any time, context switching to another thread and returning later to continue where it stopped.
The overhead of context switching grows as the number of active threads competing for CPUs grows. Thus, ideally, a thread pool tries to use the minimum number of threads necessary to occupy all available CPUs in a machine.
If your tasks have no code that blocks anywhere, context switching is minimized because no more threads are used than there are available CPUs on the machine.
Of course, if you have only one core, your main thread and the thread pool will compete for the same CPU.
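To make the scenario concrete, here is a minimal sketch of such a pool in C with POSIX threads (the names task_t, pool_t and pool_submit are mine, not a standard API): a fixed set of worker threads blocks on a condition variable and dequeues work from a shared FIFO queue, so no task ever gets a thread of its own.

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct task {
        void (*fn)(void *);    /* the unit of work to execute */
        void *arg;
        struct task *next;
    } task_t;

    typedef struct {
        pthread_mutex_t lock;      /* protects the queue below */
        pthread_cond_t  has_work;  /* signaled when a task is enqueued */
        task_t *head, *tail;       /* FIFO queue of pending tasks */
    } pool_t;

    /* Each pool thread runs this loop for the lifetime of the pool. */
    static void *worker(void *p)
    {
        pool_t *pool = p;
        for (;;) {
            pthread_mutex_lock(&pool->lock);
            while (pool->head == NULL)       /* queue empty: block, don't spin */
                pthread_cond_wait(&pool->has_work, &pool->lock);
            task_t *t = pool->head;          /* dequeue in FIFO order */
            pool->head = t->next;
            if (pool->head == NULL)
                pool->tail = NULL;
            pthread_mutex_unlock(&pool->lock);

            t->fn(t->arg);                   /* run the task on this pool thread */
            free(t);
        }
        return NULL;
    }

    /* Enqueue a task; the lock is the shared spot where contention can occur. */
    void pool_submit(pool_t *pool, void (*fn)(void *), void *arg)
    {
        task_t *t = malloc(sizeof *t);
        t->fn = fn;
        t->arg = arg;
        t->next = NULL;
        pthread_mutex_lock(&pool->lock);
        if (pool->tail)
            pool->tail->next = t;
        else
            pool->head = t;
        pool->tail = t;
        pthread_cond_signal(&pool->has_work);
        pthread_mutex_unlock(&pool->lock);
    }

Note that a worker blocks in pthread_cond_wait instead of spinning when the queue is empty, and otherwise runs tasks back to back; ideally such a pool is created with roughly as many workers as there are CPUs.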

The article probably talks about the case in which work is posted to the pool and the result is being waited for. Running a task on the thread pool in general does not incur any context-switching overhead.
Imagine queueing 1000 work items. A thread-pool thread will execute them one after the other, all without a single context switch in between.
Switching happens due to waiting/blocking.
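On Linux you can check this claim empirically (a sketch; RUSAGE_THREAD is Linux-specific): run many work items back to back on one thread and compare the thread's context-switch counters before and after.

    #define _GNU_SOURCE               /* for RUSAGE_THREAD (Linux-specific) */
    #include <stdio.h>
    #include <sys/resource.h>

    static volatile long sink;

    static void work_item(void)       /* a small, non-blocking unit of work */
    {
        for (int i = 0; i < 1000; i++)
            sink += i;
    }

    int main(void)
    {
        struct rusage before, after;
        getrusage(RUSAGE_THREAD, &before);

        for (int i = 0; i < 1000; i++)    /* 1000 work items, none of them block */
            work_item();

        getrusage(RUSAGE_THREAD, &after);
        /* Voluntary switches happen when we block; expect 0 here. */
        printf("voluntary:   %ld\n", after.ru_nvcsw  - before.ru_nvcsw);
        /* Involuntary switches are preemptions; usually few for a short run. */
        printf("involuntary: %ld\n", after.ru_nivcsw - before.ru_nivcsw);
        return 0;
    }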

Related

Why is there only limited usage of thread pools in TensorFlow-Federated?

TFF's threading libraries start a new thread from ThreadRun by default, and the only usage (as of TFF 0.42.0) of the optional ThreadPool parameter is in the implementation of a single executor. Why is this the case?
After conferring with some people who were close to the implementation, the understanding we came to was:
The issue with totally general usage of thread pools in TFF is that, if used incorrectly, we may be courting deadlock. We need FIFO scheduling in the thread pool itself, and FIFO-compatible usage in the runtime (if you need the result of a computation, you need to know it will be started before you start waiting).
When implementing the first usages of thread pools in the TF executor, we reasoned ourselves into believing the following statement is true: at the leaf executors (that is, as long as an executor doesn't have any children), this FIFO-compatible programming is guaranteed by the stateful executor interface. That is, if you need a value, you know it has already been created (otherwise the executor wouldn't be able to resolve it), so as long as the thread pool is FIFO, it will be ready before you execute. Either the creating function already pushed a function onto this FIFO queue, or it created the value directly, so you can push yourself onto the FIFO queue, no sweat.
Due to the difficulty, we haven't really tried to reason too hard about how / whether we might be able to make similar statements about executors which have children (and these children may be pushing work onto the queue; AFAIK we don't currently make any guarantees about how we do this, but I could imagine reasoning about a similar invariant step-by-step 'up the stack'). Thus we have only considered it safe so far to inject thread pool usage at leaf executors. The fact that we don't have this in the XLAExecutor yet is simply due to lack of use.
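To illustrate the hazard outside of TFF (using the hypothetical pool_t/pool_submit sketch from the answer to the first question above, with a pool of exactly one worker thread; none of this is TFF's actual API): a task that blocks on the result of work queued behind itself starves the pool.

    /* Assume pool_t / pool_submit from the sketch above, and a pool
       created with exactly ONE worker thread. */
    #include <pthread.h>

    static pthread_mutex_t m    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
    static int child_done = 0;

    static void child(void *arg)          /* produces the "result" */
    {
        (void)arg;
        pthread_mutex_lock(&m);
        child_done = 1;
        pthread_cond_signal(&done);
        pthread_mutex_unlock(&m);
    }

    static void parent(void *pool)
    {
        /* child is queued BEHIND the currently running parent... */
        pool_submit(pool, child, NULL);

        pthread_mutex_lock(&m);
        while (!child_done)               /* ...and parent now blocks the   */
            pthread_cond_wait(&done, &m); /* pool's only thread, so child   */
        pthread_mutex_unlock(&m);         /* never runs: deadlock.          */
    }

This is exactly the FIFO-compatible usage rule stated above: you may only wait on a value whose producing task is already running or already finished.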

Which one is the performance cores in instruments?

When I'm running a profile in Instruments on an iPhone X with the A11 CPU. This CPU has two performance cores and four efficiency cores.
May I ask if there is a way to tell which ones are the performance cores? And as for the main thread, will GCD put main-thread tasks more on the performance cores rather than the efficiency ones?
I'm very interested to understand how this actually works.
GCD doesn't know anything about different kinds of cores, and GCD also doesn't decide which code runs on which core.
GCD decides which queue gets a thread of which thread pool and which code is scheduled to run next on the thread of the queue.
Deciding when a thread will run and on which core it will run is done by the thread scheduler of the kernel. The kernel also decides how many threads are available in which GCD thread pool.
The main thread is just a thread like any other thread. How much CPU time a thread gets depends on its own priority level, the amount of other threads, their priority levels, and the amount of workload scheduled for each of them.
As the A11 allows all six cores to be active at the same time, the kernel decides which thread gets a high-performance core and which one just a low-performance one. High-priority threads and threads with a high computation workload (those that want to run very often and usually use up their full runtime quantum when running) are preferred for high-performance cores. Low-priority threads and threads with little computation workload (those that want to run infrequently and very often yield or block before their runtime quantum has been used up) are preferred for low-performance cores. In theory every thread can run on any core, as it would be wasteful to leave cores unused while threads are waiting to run, yet low-power cores are generally preferred, as that reduces power consumption and increases battery runtime.
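While you cannot pick a core directly, you can hint the scheduler through QoS classes, which feed into the priority mechanism described above. A sketch using the C GCD and pthread QoS APIs on Apple platforms (which core actually runs the work remains the kernel's decision):

    #include <dispatch/dispatch.h>
    #include <pthread/qos.h>
    #include <stdio.h>

    int main(void)
    {
        /* High-QoS work: the scheduler tends to favor performance cores. */
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
            printf("user-initiated work\n");
        });

        /* Low-QoS work: the scheduler tends to keep this on efficiency cores. */
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0), ^{
            printf("background work\n");
        });

        /* A raw pthread can also tag itself with a QoS class: */
        pthread_set_qos_class_self_np(QOS_CLASS_UTILITY, 0);

        dispatch_main();  /* park the main thread so the async blocks can run */
    }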

Why is `pthread_mutex_lock` needed when `pthread_mutex_trylock` is there?

pthread_mutex_trylock detects deadlocks and doesn't block, so why would you even "need" pthread_mutex_lock?
Perhaps when you deliberately want the thread to block? But in that case it may result in a deadlock?
pthread_mutex_trylock does not detect deadlocks.
You can use it to avoid deadlocks, but you have to do that by wrapping your own code around it, effectively making multiple calls to pthread_mutex_trylock in a loop with a time-out, after which your thread releases all the resources it holds.
In any case, you can avoid deadlocks even with pthread_mutex_lock if you just follow the simple rule that all threads allocate resources in the same order.
You use pthread_mutex_lock if you just want to efficiently wait until the resource is available, without having to spin on the mutex, something which is often very inefficient. Properly designed multi-threaded applications have no need for the pthread_mutex_trylock variant.
Locks should only be held for the absolute minimum time to do the work and, if that's too long, you can generally redesign things so the lock time is less (such as by using the mutex to only copy data to a thread's local data areas, and having the long-running bit work on that after the mutex is released).
The pseudo-code (written as real C; sched_yield() is the portable yield, since there is no pthread_yield() in POSIX threads, though one is sometimes provided as a non-portable extension):
    while (pthread_mutex_trylock(&mtx) != 0)
        sched_yield();
will continue to run your thread, polling for the lock to become available.
At worst, when no other thread of the same priority is runnable, the yield returns immediately and the loop chews up the rest of its quantum every time through the scheduler cycle.
And at best, it will still activate the thread once per scheduler cycle just to see if the mutex can be obtained.
Whereas:
    pthread_mutex_lock(&mtx);
will most likely totally pause your thread until the lock is made available, since the kernel will move it to a wait queue until the current lock holder releases the mutex.
That's probably the major reason why you should prefer pthread_mutex_lock to pthread_mutex_trylock.
Perhaps when you deliberately want the thread to block?
Yup, exactly in this case. But you can mimic pthread_mutex_lock() behavior with something like this:
    while (pthread_mutex_trylock(&mtx))
        sched_yield();  /* pthread_yield() is non-portable; sched_yield() is the POSIX call */
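For completeness, POSIX also provides pthread_mutex_timedlock(), which gives the bounded waiting that a trylock-with-timeout loop approximates, but blocks properly instead of spinning. A minimal sketch (the helper name and the timeout value are mine):

    #include <errno.h>
    #include <pthread.h>
    #include <time.h>

    /* Try to take the mutex, but give up after 'seconds' (hypothetical helper). */
    int lock_with_timeout(pthread_mutex_t *mtx, int seconds)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);  /* timedlock uses CLOCK_REALTIME */
        deadline.tv_sec += seconds;

        int rc = pthread_mutex_timedlock(mtx, &deadline);
        if (rc == ETIMEDOUT) {
            /* Could not acquire in time: release your other resources and
               retry later, instead of deadlocking forever. */
            return -1;
        }
        return rc;  /* 0 on success */
    }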

Does pthread_exit kill a thread.. I mean free the stack allocated to it?

I want to create a lot of threads for writing, and after the writing is done I call pthread_exit... But when I call pthread_exit, do I free up the stack or do I still consume it?
In order to avoid resource leaks, you have to do one of these two things:
Make sure some other thread calls pthread_join() on the thread.
Create the thread as 'detached', which can be done either by setting the proper pthread attribute passed to pthread_create, or by calling the pthread_detach() function.
Failure to do so will often result in the entire stack "leaking" in many implementations.
The system allocates underlying storage for each thread (thread ID, thread return value, stack), and this will remain in the process space (and not be recycled) until the thread has terminated and has been joined by another thread.
If you don't care how a thread terminates, a detached thread is a good choice.
For detached threads, the system recycles their underlying resources automatically after the thread terminates.
source article: http://www.ibm.com/developerworks/library/l-memory-leaks/
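A minimal sketch of both options (error handling omitted; the function names are mine):

    #include <pthread.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* ... do the writing ... */
        return NULL;   /* same effect as pthread_exit(NULL) in the start routine */
    }

    int main(void)
    {
        pthread_t joined, detached;

        /* Option 1: join, so the stack and bookkeeping are reclaimed here. */
        pthread_create(&joined, NULL, worker, NULL);
        pthread_join(joined, NULL);

        /* Option 2: create detached; resources are reclaimed automatically
           when the thread terminates, but it can never be joined. */
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&detached, &attr, worker, NULL);
        pthread_attr_destroy(&attr);

        pthread_exit(NULL);  /* end main's thread without killing the process */
    }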

stack management in CLR

I understand the basic concepts of stack and heap, but it would be great if anyone could resolve the following confusions:
Is there a single stack for the entire application process, or is a new stack created for each thread started in the process?
Is there a single heap for the entire application process, or is a new heap created for each thread started in the process?
If stacks are created for each thread, then how does the process manage the sequential flow of threads (and hence stacks)?
There is a separate stack for every thread. This is true not only for the CLR, and not only for Windows, but for pretty much every OS or platform out there.
There is a single heap for every application domain. A single process may run several app domains at once. A single app domain may run several threads.
To be more precise, there are usually two heaps per domain: one regular and one for really large objects (like, say, a 64K array).
I don't understand what you mean by "sequential flow of threads".
One stack for each thread, all threads share the same heaps.
There is no 'sequential flow' of threads. A thread is an operating system object that stores a copy of the processor state. The processor state includes the register values. One of them is ESP, the stack pointer. Another really important one is EIP, the instruction pointer. When the operating system switches between threads, it stores the processor state in the current thread object and reloads the state from the thread object for the thread that was selected to run next. The processor now simply continues executing where it left off previously.
Getting a thread started is perhaps now easy to understand as well. The operating system allocates a megabyte of memory for the stack and initializes the ESP register to point to that memory. It then sets the EIP register to the address of the method where the thread should start executing: in C#, the value of the ThreadStart delegate.
Each thread must have its own stack; that's where local variables and parameters are held, along with the return addresses of earlier function calls.
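You can see the per-thread stacks directly (a C sketch; the same holds for CLR threads): print the address of a local variable from several threads, and each address lands in a different stack region.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static void *show_stack(void *arg)
    {
        int local = 0;  /* lives on this thread's private stack */
        printf("thread %ld: &local = %p\n", (long)(intptr_t)arg, (void *)&local);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        for (intptr_t i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, show_stack, (void *)i);
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        return 0;
    }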
