Consider the following program flow:
pthread_rwlock_rdlock( &mylock);
... compute a lot, maybe be the target of a pthread_cancel() ...
pthread_rwlock_unlock( &mylock);
That is going to leave the lock read-locked if the thread is canceled.
It appears that the "right" thing to do is to use pthread_cleanup_push() and pthread_cleanup_pop() and do the unlock inside my cleanup function, but there doesn't seem to be a valid order for the function calls:
void my_cleanup(void *arg) { pthread_rwlock_unlock(&mylock); }
...
pthread_cleanup_push( my_cleanup, 0);
/* A */
pthread_rwlock_rdlock( &mylock);
... compute a lot, maybe be the target of a pthread_cancel() ...
pthread_cleanup_pop( 1);
That appears nearly correct, except that if the pthread_cancel() hits at "A", the cleanup handler will unlock mylock while it is not yet locked, leading to undefined behavior.
The whole answer may be:
void my_cleanup(void *arg) { pthread_rwlock_unlock(&mylock); }
...
pthread_setcancelstate( PTHREAD_CANCEL_DISABLE, &oldstate);
pthread_cleanup_push( my_cleanup, 0);
pthread_rwlock_rdlock( &mylock);
pthread_setcancelstate( oldstate, 0);
... compute a lot, maybe be the target of a pthread_cancel() ...
pthread_cleanup_pop( 1);
but at that point it seems like I'm wrapping some well-defined primitives in bandages.
So is there a better idiom for this?
Unless you allow asynchronous cancellation, a pending pthread_cancel() will be processed only at known cancellation points. Therefore it should be safe to do the push just after successfully locking, and the pop just before unlocking.
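Under deferred cancellation that ordering looks like this (a minimal sketch of the idiom just described; it relies on the premise above that no cancellation point can fire between the successful lock and the push):

void my_cleanup(void *arg) { pthread_rwlock_unlock(&mylock); }
...
pthread_rwlock_rdlock( &mylock);
pthread_cleanup_push( my_cleanup, 0);   /* handler registered only while the lock is held */
... compute a lot, maybe be the target of a pthread_cancel() ...
pthread_cleanup_pop( 1);                /* pops and runs my_cleanup, which unlocks */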
I took a look at the glibc/nptl source and I don't see a better way than what you suggested. There's not even enough state in a rwlock for you to "cheat" and trust that the unlock side will know whether you got the lock or not. If you didn't acquire the lock but someone else did, a spurious unlock will corrupt the state rather than return an error.
In fact, offhand I can't find a reason why a well-placed SIGCANCEL couldn't interrupt pthread_rwlock_rdlock itself and leave the lll_lock (glibc internal) lock held, causing all other access to the rwlock to hang. So your PTHREAD_CANCEL_DISABLE might be even more important. Add one around the cleanup as well if you really plan to create/cancel many of these threads. It would be worth writing a test program to stress-test it.
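A sketch of that extra guard (a fragment only; remember that pthread_cleanup_push() and pthread_cleanup_pop() are macros that must pair lexically in the same scope, and oldstate is the variable from the question's code):

pthread_setcancelstate( PTHREAD_CANCEL_DISABLE, &oldstate);
pthread_cleanup_pop( 1);                /* runs my_cleanup -> pthread_rwlock_unlock */
pthread_setcancelstate( oldstate, 0);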
I have a specific task routine which performs some operations in a specific order, and these operations handle a few volatile variables. A specific interrupt updates these volatile variables asynchronously, so the task routine should restart if such an interrupt occurs. Normally FreeRTOS would simply resume the task, but that would produce wrong derived values, hence the requirement to restart the routine. I also cannot keep the task routine inside a critical section, because I must not miss any interrupts.
Is there a way in FreeRTOS to achieve this? Something like a vTaskRestart API. I could delete the task and re-create it, but that adds a lot of memory-management complications which I would like to avoid. Currently my only option is to add checks in the routine on a flag to see if a context switch has occurred and, if yes, restart; else continue.
Googling did not fetch any clue on this. It seems people have never faced such a problem, or maybe this design is poor. In the FreeRTOS forums, the few who asked for a task restart didn't seem to have this same problem. Stack Overflow didn't have a result on freertos + task + restart, so this could be the first post with this tag combination ;)
Can someone please tell me if this is directly possible in FreeRTOS?
You can use a semaphore for this purpose. If you decide to use a semaphore, follow the steps below.
Firstly, you should create a binary semaphore.
The semaphore must be given in the interrupt routine with
xSemaphoreGiveFromISR( Example_xSemaphore, &xHigherPriorityTaskWoken );
And you must try to take the semaphore in the task:
void vExample_Task( void * pvParameters )
{
    for( ;; )
    {
        if( xSemaphoreTake( Example_xSemaphore, Example_PROCESS_TIME ) == pdTRUE )
        {
            /* The ISR gave the semaphore: the volatile inputs have changed,
               so restart the routine from the beginning. */
        }
    }
}
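A slightly fuller sketch of the restart pattern (the compute_step_*() and publish_results() names are illustrative, not FreeRTOS APIs): between computation steps, poll the semaphore with a zero block time and restart from the top of the loop if the ISR has signalled that the inputs changed.

void vExample_Task( void * pvParameters )
{
    for( ;; )
    {
        compute_step_1();   /* hypothetical first stage of the routine */
        if( xSemaphoreTake( Example_xSemaphore, 0 ) == pdTRUE )
        {
            continue;       /* inputs changed mid-computation: start over */
        }
        compute_step_2();   /* hypothetical second stage */
        if( xSemaphoreTake( Example_xSemaphore, 0 ) == pdTRUE )
        {
            continue;
        }
        publish_results();  /* derived values are now consistent */
    }
}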
For this purpose you should use a queue, and use the queue peek function to look at your volatile data.
I'm using this because I have a real-time timer, and this way I make the time available to all my tasks without any blocking.
Here is how it goes:
Declare the queue:
xQueueHandle RTC_Time_Queue;
Create the queue of 1 element:
RTC_Time_Queue = xQueueCreate( 1, sizeof(your volatile struct) );
Overwrite the queue every time your interrupt occurs (the third argument lets the ISR request a context switch if a higher-priority task was woken):
xQueueOverwriteFromISR(RTC_Time_Queue, (void*) &time, &xHigherPriorityTaskWoken);
And from any other task, peek the queue:
xQueuePeek(RTC_Time_Queue, (void*) &TheTime, 0);
The 0 at the end of xQueuePeek means you don't want to wait if the queue is empty. Peeking does not remove the value from the queue, so it will be there every time you peek and the code will never block.
Also, you should avoid having a variable accessed directly from both the ISR and the RTOS code, as you may get unexpected corruption.
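Putting the pieces together, a minimal sketch (the RTC_Time_t type, the hardware-read call, and the task/ISR names are illustrative, and portYIELD_FROM_ISR() is port-specific):

#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

typedef struct { uint8_t hours, minutes, seconds; } RTC_Time_t;   /* illustrative */

static xQueueHandle RTC_Time_Queue;

void RTC_Queue_Init( void )
{
    RTC_Time_Queue = xQueueCreate( 1, sizeof( RTC_Time_t ) );
}

void RTC_Interrupt_Handler( void )
{
    RTC_Time_t time = RTC_ReadHardware();           /* hypothetical hardware read */
    portBASE_TYPE xHigherPriorityTaskWoken = pdFALSE;
    xQueueOverwriteFromISR( RTC_Time_Queue, (void*) &time, &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

void vClockDisplayTask( void * pvParameters )
{
    RTC_Time_t TheTime;
    for( ;; )
    {
        /* Non-blocking read of the latest time; the value stays in the queue. */
        if( xQueuePeek( RTC_Time_Queue, (void*) &TheTime, 0 ) == pdTRUE )
        {
            /* ... use TheTime ... */
        }
    }
}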
Assuming a thread successfully calls pthread_mutex_lock, is it still possible that a call to pthread_mutex_unlock in that same thread will fail? If so, can you actually do something about it besides abort the thread?
if (pthread_mutex_lock(&m) == 0)
{
    // got the lock, let's do some work
    if (pthread_mutex_unlock(&m) != 0) // can this really fail?
    {
        // ok, we have a lock but can't unlock it?
    }
}
From this page, possible errors for pthread_mutex_unlock() are:
[EINVAL]
The value specified by mutex does not refer to an initialised
mutex object.
If the lock succeeded then this is unlikely to fail.
[EAGAIN]
The mutex could not be acquired because the maximum number of
recursive locks for mutex has been exceeded.
Really? For unlock?
The pthread_mutex_unlock() function may fail if:
[EPERM]
The current thread does not own the mutex.
Again, if lock succeeded then this also should not occur.
So my thoughts are: if the lock succeeds, then in this situation the unlock should never fail, making the error check and subsequent handling code pointless.
Well, before you cry "victory": I ended up on this page looking for the reason one of my programs failed on a pthread_mutex_unlock (on HP-UX, not Linux).
if (pthread_mutex_unlock(&mutex) != 0)
    throw YpException("unlock %s failed: %s", what.c_str(), strerror(errno));
This failed on me after many millions of happy executions.
errno was EINTR, although I have just now found out that I should be checking the return value rather than errno. Nevertheless, the return value was NOT 0, and I can mathematically prove that at that spot I own a valid lock.
So let's just say your theory is under stress, although more research is required ;-)
From the man page for pthread_mutex_unlock:
The pthread_mutex_unlock() function may fail if:
EPERM
The current thread does not own the mutex.
These functions shall not return an error code of [EINTR].
If you believe the man page, it would seem that your error case cannot happen.
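If you accept that reasoning, a common compromise (a sketch, assuming a default, non-recursive mutex that was locked successfully) is to fold the check into an assert, which documents the invariant and compiles away in release builds:

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void do_work(void)
{
    int rc = pthread_mutex_lock(&m);
    assert(rc == 0);
    /* ... critical section ... */
    rc = pthread_mutex_unlock(&m);
    assert(rc == 0);    /* should be unreachable for a correctly used mutex */
    (void) rc;          /* avoid an unused-variable warning under NDEBUG */
}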
I'm using pthreads that don't allocate any local variables. For reasons I won't go into here, I need a pthread_cancel() option, and the threads I'm writing should be able to support it (no resources to clean up, OK to stop execution at any point). At the moment, I have a problem because pthread_cancel returns before the pthread is actually finished running, causing problems for shared resources I want to touch only after thread cancellation.
Is there any way I can know when my pthread has well and truly concluded? Is there perhaps a function for this I haven't found, or a parameter I'm not familiar with?
Would
pthread_cancel(thread_handle);
pthread_join(thread_handle, NULL);
do the trick, or is that not guaranteed (since thread_handle may already be invalid)?
I'm pretty new to pthreads, so best practices welcome (beyond "don't use pthread_cancel()," which I've already learned :P ).
The kernel.org manual page actually does exactly this. It's safe:
s = pthread_cancel(thr);
if (s != 0)
    handle_error_en(s, "pthread_cancel");

/* Join with thread to see what its exit status was */
s = pthread_join(thr, &res);
if (s != 0)
    handle_error_en(s, "pthread_join");
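Once pthread_join() has returned, you can also check the status it filled in (the res from the snippet above) to confirm the cancellation took effect before touching your shared resources:

if (res == PTHREAD_CANCELED)
    printf("Thread was canceled\n");    /* shared state is safe to touch now */
else
    printf("Thread terminated normally\n");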
Until you call pthread_join on a joinable thread, its tid remains valid. If the thread is joinable (which it must be for pthread_cancel to be safe), then the thread_handle must still be valid.
If the thread was detached, it wouldn't even be safe to call pthread_cancel. What if the thread terminated just as you called it?
OpenCL doesn't have a global barrier that will stop all threads, so I'm trying to create a work around with the following code:
void barrier(__global uint* scratch) {
    uint nThreads = get_global_size(0);
    atom_inc(scratch);
    /* this loop never terminates */
    while (scratch[0] < nThreads) {
        continue;
    }
}
The idea is that each thread loops until all of them increment that one piece of memory.
However, the value a thread reads from scratch[0] never changes after the first read, so the loop runs forever. I know the counter is being incremented, because it has the correct value when I read it back on the host.
Is the global memory being locally cached? What's going on here?
Found the problem: the order in which work groups are executed is implementation-defined. This means that some threads might start only after others have finished.
In the code I gave, the work groups that start first loop forever, waiting for the others to hit the 'barrier'. And the work groups that would start later never start at all, because they're waiting for the first ones to finish.
If the implementation (I'm on a Radeon 5750, using Stream SDK 2.2) executes all work groups concurrently, then it probably wouldn't be an issue. But that's not the case for my setup.
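For what it's worth, the usual workaround is to let the host provide the global barrier: end the kernel where the barrier would be and enqueue a second kernel for the remaining work. A host-side sketch (error checking omitted; queue, phase1_kernel, phase2_kernel, and global are assumed to be set up already), relying on the fact that an in-order command queue runs enqueued kernels one after another:

/* On an in-order queue, the boundary between the two enqueues acts as a
   global barrier: no phase2 work-item starts until all of phase1 is done. */
clEnqueueNDRangeKernel(queue, phase1_kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
clEnqueueNDRangeKernel(queue, phase2_kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
clFinish(queue);   /* wait for both phases to complete */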
I have the following piece of code in thread A, which blocks using pthread_cond_wait()
pthread_mutex_lock(&my_lock);
if ( false == testCondition )
    pthread_cond_wait(&my_wait, &my_lock);
pthread_mutex_unlock(&my_lock);
I have the following piece of code in thread B, which signals thread A
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_cond_signal(&my_wait);
pthread_mutex_unlock(&my_lock);
Provided there are no other threads, would it make any difference if pthread_cond_signal(&my_wait) were moved out of the critical section, as shown below?
pthread_mutex_lock(&my_lock);
testCondition = true;
pthread_mutex_unlock(&my_lock);
pthread_cond_signal(&my_wait);
My recommendation is typically to keep the pthread_cond_signal() call inside the locked region, but probably not for the reasons you think.
In most cases, it doesn't really matter whether you call pthread_cond_signal() with the lock held or not. Ben is right that some schedulers may force a context switch when the lock is released if there is another thread waiting, so your thread may get switched away before it can call pthread_cond_signal(). On the other hand, some schedulers will run the waiting thread as soon as you call pthread_cond_signal(), so if you call it with the lock held, the waiting thread will wake up and then go right back to sleep (because it's now blocked on the mutex) until the signaling thread unlocks it. The exact behavior is highly implementation-specific and may change between operating system versions, so it isn't anything you can rely on.
But all of this looks past what should be your primary concern, which is the readability and correctness of your code. You're not likely to see any real-world performance benefit from this kind of micro-optimization (remember the first rule of optimization: profile first, optimize second).

However, it's easier to think about the control flow if you know that the set of waiting threads can't change between the point where you set the condition and send the signal. Otherwise, you have to think about things like "what if thread A sets testCondition=TRUE and releases the lock, and then thread B runs and sees that testCondition is true, so it skips the pthread_cond_wait() and goes on to reset testCondition to FALSE, and then finally thread A runs and calls pthread_cond_signal(), which wakes up thread C because thread B wasn't actually waiting, but testCondition isn't true anymore". This is confusing and can lead to hard-to-diagnose race conditions in your code. For that reason, I think it's better to signal with the lock held; that way, you know that setting the condition and sending the signal are atomic with respect to each other.
On a related note, the way you are calling pthread_cond_wait() is incorrect. It's possible (although rare) for pthread_cond_wait() to return without the condition variable actually being signaled, and there are other cases (for example, the race I described above) where a signal could end up awakening a thread even though the condition isn't true. In order to be safe, you need to put the pthread_cond_wait() call inside a while() loop that tests the condition, so that you call back into pthread_cond_wait() if the condition isn't satisfied after you reacquire the lock. In your example it would look like this:
pthread_mutex_lock(&my_lock);
while ( false == testCondition ) {
    pthread_cond_wait(&my_wait, &my_lock);
}
pthread_mutex_unlock(&my_lock);
(Note that pthread_cond_wait() must be passed the same mutex that protects the condition, my_lock in this case.)
The thread waiting on the condition variable should keep the mutex locked, and the other thread should always signal with the mutex locked. This way, you know the other thread is waiting on the condition when you send the signal. Otherwise, it's possible the waiting thread won't see the condition being signaled and will block indefinitely waiting on it.
Condition variables are typically used like this:
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int go = 0;
void *threadproc(void *data) {
    printf("Sending go signal\n");
    pthread_mutex_lock(&lock);
    go = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}
int main(int argc, char *argv[]) {
    pthread_t thread;
    pthread_mutex_lock(&lock);
    printf("Waiting for signal to go\n");
    pthread_create(&thread, NULL, &threadproc, NULL);
    while (!go) {
        pthread_cond_wait(&cond, &lock);
    }
    printf("We're allowed to go now!\n");
    pthread_mutex_unlock(&lock);
    pthread_join(thread, NULL);
    return 0;
}
This is also valid as far as POSIX is concerned (you are not required to hold the mutex while signaling):
void *threadproc(void *data) {
    printf("Sending go signal\n");
    go = 1;
    pthread_cond_signal(&cond);
    return NULL;
}
However, consider what's happening in main:
while (!go) {
    /* Suppose a long delay happens here, during which the signal is sent */
    pthread_cond_wait(&cond, &lock);
}
If the delay described by that comment happens, pthread_cond_wait misses the signal and may be left waiting forever. This is why you want to signal with the mutex locked.
Both are correct; however, for responsiveness, most schedulers hand control to another thread when a lock is released. If you don't signal before unlocking, your waiting thread A is not yet on the ready list, and thus it will not be scheduled until B is scheduled again and calls pthread_cond_signal().
The Open Group Base Specifications Issue 7, IEEE Std 1003.1, 2013 Edition (which, as far as I can tell, is the official pthreads specification) says this on the matter:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be
called by a thread whether or not it currently owns the mutex that
threads calling pthread_cond_wait() or pthread_cond_timedwait() have
associated with the condition variable during their waits; however, if
predictable scheduling behavior is required, then that mutex shall be
locked by the thread calling pthread_cond_broadcast() or
pthread_cond_signal().
To add my personal experience: I was working on an application that had code where the condition variable was destroyed (and the memory containing it freed) by the thread that was woken up. We found that on a multi-core device (an iPad Air 2) pthread_cond_signal() could actually crash sometimes if it was outside the mutex lock, because the waiter woke up and destroyed the condition variable before pthread_cond_signal() had completed. This was quite unexpected.
So I would definitely veer towards the 'signal inside the lock' version; it appears to be safer.
Here is a nice write-up about condition variables: Techniques for Improving the Scalability of Applications Using POSIX Thread Condition Variables (see the 'Avoiding the Mutex Contention' section, point 7).
It says the second version may have some performance benefits, because the woken thread does not have to contend for a mutex that the signaling thread still holds.