About pthread_cond_wait - pthreads

For the following code:
void f1()
{
    pthread_mutex_lock(&mutex);        // LINE1 (thread3 and thread4)
    pthread_cond_wait(&cond, &mutex);  // LINE2 (thread1 and thread2)
    pthread_mutex_unlock(&mutex);
}

void f2()
{
    pthread_mutex_lock(&mutex);
    pthread_cond_signal(&cond);        // LINE3 (thread5)
    pthread_mutex_unlock(&mutex);
}
Assume thread1 and thread2 are waiting at LINE2, thread3 and thread4 are blocked at LINE1. When thread5 executes LINE3, which threads will run first? thread1 or thread2? thread3 or thread4?

When thread5 signals the condition variable, either thread1 or thread2, or both, will be released from waiting, and will then wait until the mutex can be locked... which won't be until after thread5 unlocks it.
When thread5 then unlocks the mutex, one of the threads waiting to lock the mutex will be able to do so. My reading of POSIX reveals only that the order in which threads waiting to lock will proceed is unspecified, though higher-priority threads may be expected to run first. How things are scheduled is largely system-dependent.
If you need threads to run in a particular order, then you need to arrange that for yourself.
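For illustration, here is a minimal sketch of one way to arrange an order yourself, using a shared turn counter (the turn variable and run_in_order() function are hypothetical, not part of the question's code): each thread waits on the condition variable until the counter reaches its own id, then hands the turn to the next id.
#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int turn = 0; /* id of the thread that may proceed next */

void run_in_order(int my_id)
{
    pthread_mutex_lock(&mutex);
    while (turn != my_id)              /* re-check after every wakeup */
        pthread_cond_wait(&cond, &mutex);
    /* ... this thread's work ... */
    turn++;                            /* hand the turn to the next id */
    pthread_cond_broadcast(&cond);     /* wake all waiters; they re-check turn */
    pthread_mutex_unlock(&mutex);
}
A broadcast is used rather than a signal because any of the waiting threads might be the one whose turn is next.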

Related

Prioritization of one thread to process a variable when it is shared by 2 threads and protected by pthread_mutex_lock

Could anyone please suggest how to optimize the application code below using pthread_mutex_lock?
Let me describe the situation:
I have 2 threads sharing a global shared-memory variable. The variable shmPtr->status is protected with a mutex lock in both functions. Although there is a sleep(1/2) inside the "for loop" in the task1 function, I can't access shmPtr->status in task2 when required, and have to wait until the "for loop" is finished in the task1 function. It takes around 50 seconds for shmPtr->status to become available to the task2 function.
I am wondering why the task1 function is not releasing the mutex lock despite the sleep(1/2) line. I don't want to wait to process shmPtr->status in the task2 function. Please advise.
thr_id1 = pthread_create(&p_thread1, NULL, (void *)execution_task1, NULL);
thr_id2 = pthread_create(&p_thread2, NULL, (void *)execution_task2, NULL);

void execution_task1()
{
    for (int i = 0; i < 100; i++)
    {
        // 100 lines of application code running here
        pthread_mutex_lock(&lock);
        shmPtr->status = 1;  // shared memory variable
        pthread_mutex_unlock(&lock);
        sleep(1/2);
    }
}

void execution_task2()
{
    // 100 lines of application code running here
    pthread_mutex_lock(&lock);
    shmPtr->status = 0;  // shared memory variable
    pthread_mutex_unlock(&lock);
    sleep(1/2);
}
Regards,
NK
I am wondering why the task1 function is not releasing the mutex lock even with a sleep(1/2).
There is no reason to think that the thread running execution_task1() in your example fails to release the mutex, though you would know for sure if you appropriately tested the return value of its pthread_mutex_unlock() call. Rather, the potential problem is that it reacquires the mutex before any other thread contending for it has an opportunity to acquire it.
It seems plausible that a call to sleep(1/2) is ineffective at preventing that. 1/2 is an integer division, evaluating to 0, so you are performing a sleep(0). That probably does not cause the calling thread to suspend at all, and it may not even cause the thread to yield the CPU.
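If a half-second pause really were wanted, it would have to be spelled differently; for example (a sketch, not a fix for the underlying synchronization problem):
#include <time.h>

/* sleep(1/2) is sleep(0); a real half-second pause needs nanosleep(). */
struct timespec half_second = { .tv_sec = 0, .tv_nsec = 500000000L };
nanosleep(&half_second, NULL);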
More generally, sleep() is never a good solution for any thread-synchronization problem.
If you are running on a multi-core system, however, and maybe even if you aren't, then freezing out other threads by such a mechanism seems unlikely if the function really does execute a hundred lines of code between releasing the mutex and trying to lock it again. If that's what you think you see then look harder.
If you really do need to force a thread to allow others a chance to acquire the mutex, then you could perhaps set up a fair queueing system as described in Implementing a FIFO mutex in pthreads. For a case such as you describe, however, with one long-running thread needing occasionally to yield to other, quicker tasks, you could consider introducing a condition variable on which that long-running thread can suspend, and an atomic counter by which it can determine whether it should do so:
#include <stdatomic.h>

// ...

pthread_cond_t yield_cv = PTHREAD_COND_INITIALIZER;
_Atomic unsigned int waiting_thread_count = ATOMIC_VAR_INIT(0);

void execution_task1() {
    for (int i = 0; i < 100; i++) {
        // ...
        pthread_mutex_lock(&lock);
        if (waiting_thread_count > 0) {
            pthread_cond_wait(&yield_cv, &lock);
            // spurious wakeup ok
        }
        // ... critical section ...
        pthread_mutex_unlock(&lock);
    }
}

void execution_task2() {
    // ...
    waiting_thread_count += 1;  // atomic increment; safe w/o a mutex
    pthread_mutex_lock(&lock);
    waiting_thread_count -= 1;
    pthread_cond_signal(&yield_cv);  // no problem if no-one is presently waiting
    // ... critical section ...
    pthread_mutex_unlock(&lock);
}
Using an atomic counter relieves the program of having to protect that counter with its own mutex, which could just shift the problem instead of solving it. Threads use the counter to announce upcoming attempts to acquire the mutex; that intent is then visible to the long-running thread, which can suspend on the CV to let another thread acquire the mutex.
The short-running threads then acknowledge acquiring the mutex by decrementing the counter. They must do so before releasing the mutex, else the long-running thread might cycle around, acquire the mutex, and read the counter before it is decremented, thus erroneously blocking on the CV when no additional signal can be expected.
Although CVs can be subject to spurious wakeup, that does not present a serious problem for this approach. If the long-running thread wakes spuriously from its wait on the CV, then the worst that happens is that it performs one more iteration of its main loop, then waits again.

Why is it a good idea to hold a pthread mutex when signaling or broadcasting a condition? [duplicate]

I read somewhere that we should lock the mutex before calling pthread_cond_signal and unlock the mutex after calling it:
The pthread_cond_signal() routine is used to signal (or wake up) another thread which is waiting on the condition variable. It should be called after mutex is locked, and must unlock mutex in order for pthread_cond_wait() routine to complete.
My question is: isn't it OK to call pthread_cond_signal or pthread_cond_broadcast methods without locking the mutex?
If you do not lock the mutex in the codepath that changes the condition and signals, you can lose wakeups. Consider this pair of processes:
Process A:
pthread_mutex_lock(&mutex);
while (condition == FALSE)
pthread_cond_wait(&cond, &mutex);
pthread_mutex_unlock(&mutex);
Process B (incorrect):
condition = TRUE;
pthread_cond_signal(&cond);
Then consider this possible interleaving of instructions, where condition starts out as FALSE:
Process A                            Process B
pthread_mutex_lock(&mutex);
while (condition == FALSE)
                                     condition = TRUE;
                                     pthread_cond_signal(&cond);
pthread_cond_wait(&cond, &mutex);
The condition is now TRUE, but Process A is stuck waiting on the condition variable - it missed the wakeup signal. If we alter Process B to lock the mutex:
Process B (correct):
pthread_mutex_lock(&mutex);
condition = TRUE;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
...then the above cannot occur; the wakeup will never be missed.
(Note that you can actually move the pthread_cond_signal() call to after the pthread_mutex_unlock(), but this can result in less optimal scheduling of threads; and you've necessarily locked the mutex already in this code path, due to changing the condition itself.)
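That alternative ordering would look like this (a sketch of the same Process B; it remains correct because the condition itself is still modified under the mutex):
/* Process B (also correct, possibly less optimal scheduling): */
pthread_mutex_lock(&mutex);
condition = TRUE;              /* shared state still changed under the mutex */
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&cond);    /* signal after releasing the mutex */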
According to this manual :
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
The meaning of the predictable scheduling behavior statement was explained by Dave Butenhof (author of Programming with POSIX Threads) on comp.programming.threads and is available here.
caf, in your sample code, Process B modifies condition without locking the mutex first. If Process B simply locked the mutex during that modification, and then still unlocked the mutex before calling pthread_cond_signal, there would be no problem --- am I right about that?
I believe intuitively that caf's position is correct: calling pthread_cond_signal without owning the mutex lock is a Bad Idea. But caf's example is not actually evidence in support of this position; it's simply evidence in support of the much weaker (practically self-evident) position that it is a Bad Idea to modify shared state protected by a mutex unless you have locked that mutex first.
Can anyone provide some sample code in which calling pthread_cond_signal followed by pthread_mutex_unlock yields correct behavior, but calling pthread_mutex_unlock followed by pthread_cond_signal yields incorrect behavior?

pthread condition signal does not lock mutex

My mutex seems to be unlocked.
My code looks like this (not actual code) (using pthread):
thread
{
    int id = ...;
    // locked additional mutex _m2
    mutex_lock(&_m);
    varx = valuex;  // irrelevant
    print("th%d signaling listener", id);
    cond_signal(&_c);
    print("th%d signaled listener", id);
    mutex_unlock(&_m);
    // unlocked additional mutex _m2
}

listener
{
    tc = 0;
    mutex_lock(&_m);
    while (tc < threadcount)
    {
        cond_wait(&_c, &_m);
        print("working");
        tc++;
        work;
    }
    mutex_unlock(&_m);
}
Normal (predicted) output:
th0 signaling listener;
working;
th0 signaled listener;
th1 signaling listener;
working;
th1 signaled listener;
My output:
0 signaling listener;
working;
0 signaled listener;
1 signaling listener;
1 signaled listener;
..so the thread skipped (the listener does not execute, nor lock _m) straight to printing its output.
I've profiled it with helgrind (full) and I have no errors, but my app stops at the listener because, according to it, the listener is waiting for all threads to finish.
Notes:
The listener is joinable.
The additional mutex _m2 does not help.
The threads are detached. I have about 800 detached threads (to avoid stack problems, at most 50 run simultaneously, limited by a semaphore).
The code worked for 3-4 threads.
pthread_cond_signal() does not unlock any mutex. It's not supposed to (no mutex is passed to it). If at least one thread is waiting on the condition variable signalled, that thread will be scheduled when it can re-acquire the mutex that it passed to pthread_cond_wait().
In your case your listener appears to be waiting on a different condition (_s) than the other thread signals (_c).
If you fix that problem, you also have the problem that you don't seem to have any shared state between the thread that waits on the condition variable and the thread(s) that signal it. It looks like your tc counter should actually be a shared variable, protected by the _m mutex. Your threads would then do:
pthread_mutex_lock(&_m);
tc++;
if (tc >= threadcount)
    pthread_cond_signal(&_c);
pthread_mutex_unlock(&_m);
and the listener would do:
pthread_mutex_lock(&_m);
while (tc < threadcount)
    pthread_cond_wait(&_c, &_m);
pthread_mutex_unlock(&_m);
The listener would then only continue once all of the threads have hit the signalling code, which seems to be what you're after.
Alternatively, you could use pthread_barrier_wait(), which appears to be what you're implementing.
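A sketch of the barrier approach, assuming the thread count is known up front (threadcount + 1 counts the worker threads plus the listener):
pthread_barrier_t barrier;

/* once, before starting the threads: */
pthread_barrier_init(&barrier, NULL, threadcount + 1);

/* in each worker thread and in the listener: */
pthread_barrier_wait(&barrier);   /* returns only when all parties have arrived */

/* once, after all threads are done with it: */
pthread_barrier_destroy(&barrier);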

pthreads wait and signal doubts linux

Before calling pthread wait, we lock a mutex so that no other code can change the condition at the same time. wait then unlocks the mutex and waits for the signal.
Say, in some other thread, I had locked the same mutex and, after that, had called signal, and then unlocked the mutex.
When the signal is done, the waiting thread wakes up and acquires the mutex again.
Thread1                    Thread2
{                          {
    lock(mutex);               lock(mutex);
    wait(mutex);               signal(mutex);
    unlock(mutex);             unlock(mutex);
}                          }
Say the three Thread1 statements are enclosed in a while(1) loop. Then assume that Thread2 locks the mutex, signals it, and unlocks the mutex, and then doesn't end, but goes to sleep.
So will the value of the condition variable be changed permanently? If the three statements of Thread1 are running in an infinite loop, will it never wait, and just find that the signal has been given? When the wait call returns, does it set the value of the condition variable back to its initial value?
If yes, can I use the create, destroy, or initialize methods on the variables to set the value back? If yes, how? What exactly do these functions do?
Thanks,
pthread_cond_signal() will always wake at least one thread that is currently waiting on that condition variable in pthread_cond_wait(). If the same thread or a different thread then calls pthread_cond_wait() again, it will block and wait for another signal.
This means that pthread condition variables must always be paired with some kind of shared data, protected by the mutex that is held when calling pthread_cond_wait(). Before calling pthread_cond_wait(), the thread must check the shared data to see if the condition it wants to wait for has occurred - if not, it shouldn't wait.
The simplest example of such shared data might be a global flag. In your example:
int flag = 0;

Thread 1 {
    pthread_mutex_lock(&mutex);
    while (!flag)
        pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);
}

Thread 2 {
    pthread_mutex_lock(&mutex);
    flag = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
}
You can see here that when the condition is "reset" is entirely under your control - for example you could have Thread 1 set flag = 0; before it calls pthread_mutex_unlock().
The shared state is often more complex than a simple flag - for example, you might have a producer thread call pthread_cond_wait() while there is no room in a shared buffer.
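For instance, such a producer might look like this (a sketch; buf, count, N, item, not_full and not_empty are hypothetical names, not from the question):
pthread_mutex_lock(&mutex);
while (count == N)                    /* buffer full: wait for a consumer */
    pthread_cond_wait(&not_full, &mutex);
buf[count++] = item;                  /* add the item to the shared buffer */
pthread_cond_signal(&not_empty);      /* wake a consumer waiting for data */
pthread_mutex_unlock(&mutex);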

pthread conditional variable

I'm implementing a thread with a queue of tasks. As soon as the first task is added to the queue, the thread starts running it.
Should I use pthread condition variable to wake up the thread or there is more appropriate mechanism?
If I call pthread_cond_signal() when the other thread is not blocked by pthread_cond_wait() but rather doing something, what happens? Will the signal be lost?
Semaphores are a good fit if and only if your queue is already thread-safe. Also, some semaphore implementations may be limited by a maximum counter value, though it is unlikely you would overrun it.
The simplest correct way to do this is the following:
pthread_mutex_t queue_lock;
pthread_cond_t not_empty;
queue_t queue;

push()
{
    pthread_mutex_lock(&queue_lock);
    queue.insert(new_job);
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&queue_lock);
}

pop()
{
    pthread_mutex_lock(&queue_lock);
    if (queue.empty())
        pthread_cond_wait(&not_empty, &queue_lock);
    job = queue.pop();
    pthread_mutex_unlock(&queue_lock);
}
From the pthread_cond_signal Manual:
The pthread_cond_broadcast() and pthread_cond_signal() functions shall have no effect if there are no threads currently blocked on cond.
I suggest you use Semaphores. Basically, each time a task is inserted in the queue, you "up" the semaphore. The worker thread blocks on the semaphore by "down"'ing it. Since it will be "up"'ed one time for each task, the worker thread will go on as long as there are tasks in the queue. When the queue is empty the semaphore is at 0, and the worker thread blocks until a new task arrives. Semaphores also easily handle the case when more than 1 task arrived while the worker was busy. Notice that you still have to lock access to the queue to keep inserts/removes atomic.
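A sketch of that approach with POSIX semaphores (the queue and queue_lock are assumed from the answer above; sem_init's initial value of 0 means the queue starts empty):
#include <semaphore.h>

sem_t tasks_available;
sem_init(&tasks_available, 0, 0);   /* shared between threads, initial count 0 */

/* producer: */
pthread_mutex_lock(&queue_lock);
queue.insert(new_job);              /* queue access still needs the lock */
pthread_mutex_unlock(&queue_lock);
sem_post(&tasks_available);         /* "up": one more task to consume */

/* worker: */
sem_wait(&tasks_available);         /* "down": blocks while the count is 0 */
pthread_mutex_lock(&queue_lock);
job = queue.pop();
pthread_mutex_unlock(&queue_lock);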
The signal will be lost, but you want the signal to be lost in that case. If there is no thread to wakeup, the signal serves no purpose. (If nobody is waiting for something, nobody needs to be notified when it happens, right?)
With condition variables, lost signals cannot cause a thread to "sleep through a fire". Unless you actually code a thread to go to sleep when there's already a fire, there is no need to "save a signal". When the fire starts, your broadcast will wake up any sleeping threads. And you would have to be pretty daft to code a thread to go to sleep when there's already a fire.
As already suggested, semaphores should be the best choice. If you need a fixed-size queue just use 2 semaphores (as in classical producer-consumer).
In artyom's code, it would be better to replace the "if" with a "while" in the pop() function, to handle spurious wakeups.
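The corrected pop() would then read (same pseudocode style as above):
pop()
{
    pthread_mutex_lock(&queue_lock);
    while (queue.empty())                        /* loop guards against spurious wakeup */
        pthread_cond_wait(&not_empty, &queue_lock);
    job = queue.pop();
    pthread_mutex_unlock(&queue_lock);
}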
No effect.
If you check how pthread_cond_signal is implemented, you will see that the condition variable uses several counters to check whether there are any waiting threads to wake up, e.g. in glibc-nptl:
/* Are there any waiters to be woken? */
if (cond->__data.__total_seq > cond->__data.__wakeup_seq) {
    ...
}
