iOS: trouble using pthread_mutexattr_t

I am porting an existing C++ project to Objective-C++ and came across this mutex code. I am not sure what is being done here, nor whether it is correct. To initialize some kind of multithreading lock mechanism (called "CriticalSection"), the following is done:
#include <pthread.h>
pthread_mutex_t cs;
pthread_mutexattr_t attr;
Later in the code:
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(&cs, &attr);
To enter the "lock" there is:
pthread_mutex_lock(&cs);
To leave the "lock":
pthread_mutex_unlock(&cs);
pthread_mutex_destroy(&cs);
My question (since I have no clue how this is supposed to be done) is: does this look like a correct implementation?
I ask because I encounter problems that look like the locking mechanism simply does not work (bad memory access errors, corrupt pointers in situations where the "CriticalSection" was used).

It's correct except that you don't want to destroy the mutex when unlocking. Only destroy the mutex when you can guarantee that all threads have finished with it.
For example:
class CriticalSection
{
public:
    CriticalSection()
    {
        // Create a recursive mutex so the same thread may Enter() more than once.
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&mutex, &attr);
        pthread_mutexattr_destroy(&attr);  // the attribute object is no longer needed
    }
    ~CriticalSection() { pthread_mutex_destroy(&mutex); }  // destroy exactly once, here

    void Enter() { pthread_mutex_lock(&mutex); }
    void Leave() { pthread_mutex_unlock(&mutex); }

private:
    pthread_mutex_t mutex;
};
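As a usage note, one convenient way to drive Enter()/Leave() from C++ is a small RAII guard, so the lock is released even on early returns or exceptions. A minimal sketch, assuming the CriticalSection class above; the ScopedLock name and the global instance are mine, not part of the original code:
// Hypothetical RAII wrapper around the CriticalSection class above.
class ScopedLock
{
public:
    explicit ScopedLock(CriticalSection &cs) : cs_(cs) { cs_.Enter(); }
    ~ScopedLock() { cs_.Leave(); }
private:
    CriticalSection &cs_;
    ScopedLock(const ScopedLock &);            // non-copyable
    ScopedLock &operator=(const ScopedLock &); // non-assignable
};

CriticalSection g_cs;   // one lock protecting some shared state

void TouchSharedState()
{
    ScopedLock lock(g_cs);  // Enter() here
    // ... read or modify the shared data ...
}                           // Leave() runs in the destructor, even on early return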

Related

How to explain this glibc modification on libpthread?

We found an issue while upgrading Yocto from 1.7 to 2.2.
The app below behaves differently on newer glibc: it always gets stuck in pthread_cond_destroy on glibc 2.25, while on glibc 2.20 it finishes fine.
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

void *thread1(void *);
void *thread2(void *);

int main(void)
{
    pthread_t t_a;
    pthread_t t_b;
    pthread_create(&t_a, NULL, thread1, (void *)NULL);
    pthread_create(&t_b, NULL, thread2, (void *)NULL);
    sleep(1);
    pthread_cond_destroy(&cond);
    pthread_mutex_destroy(&mutex);
    exit(0);
}

void *thread1(void *args)
{
    pthread_cond_wait(&cond, &mutex);
    return NULL;
}

void *thread2(void *args)
{
    return NULL;
}
After investigation I found a glibc commit here:
https://sourceware.org/git/?p=glibc.git;a=commit;f=sysdeps/arm/nptl/bits/pthreadtypes.h;h=ed19993b5b0d05d62cc883571519a67dae481a14, which seems to be the reason.
We noticed the comment on __pthread_cond_destroy; it mentions:
A correct program must make sure that no waiters are blocked on the
condvar when it is destroyed.
It must also ensure that no signal or broadcast are still pending to
unblock waiters.
and destruction that is concurrent with still-active waiters is
probably neither common nor performance critical.
From my perspective it introduces a defect, but the glibc team seems to have a good reason. Could anybody explain why this modification is necessary?
From the standard:
Attempting to destroy a condition variable upon which other threads
are currently blocked results in undefined behavior.
From my perspective it brings in a defect.
Your program depended on undefined behavior. Now you pay the price of doing so.
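For reference, one way to restructure the test so it no longer relies on undefined behavior is to wait with the mutex held and a predicate, signal the waiter, and join both threads before destroying anything. A rough sketch of that idea (the done flag and the joins are my additions, not the original test):
#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int done = 0;  /* predicate guarded by the mutex */

void *thread1(void *args)
{
    pthread_mutex_lock(&mutex);
    while (!done)                        /* wait with the mutex held, re-check the predicate */
        pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

void *thread2(void *args)
{
    pthread_mutex_lock(&mutex);
    done = 1;
    pthread_cond_signal(&cond);          /* wake the waiter */
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t t_a, t_b;
    pthread_create(&t_a, NULL, thread1, NULL);
    pthread_create(&t_b, NULL, thread2, NULL);
    pthread_join(t_a, NULL);             /* no thread may still be using the objects... */
    pthread_join(t_b, NULL);
    pthread_cond_destroy(&cond);         /* ...so destruction is now well defined */
    pthread_mutex_destroy(&mutex);
    return 0;
}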

Why does the child thread block in my code?

I was trying to test pthreads using a mutex:
#include <stdio.h>
#include <pthread.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int global = 0;

void thread_set();
void thread_read();

int main(void){
    pthread_t thread1, thread2;
    int re_value1, re_value2;
    int i;
    for(i = 0; i < 5; i++){
        re_value1 = pthread_create(&thread1, NULL, (void*)&thread_set, NULL);
        re_value2 = pthread_create(&thread2, NULL, (void*)&thread_read, NULL);
    }
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
    /* sleep(2); */ // without it the 5 iterations couldn't finish
    printf("exsiting\n");
    exit(0);
}

void thread_set(){
    pthread_mutex_lock(&mutex1);
    printf("Setting data\t");
    global = rand();
    pthread_mutex_unlock(&mutex1);
}

void thread_read(){
    int data;
    pthread_mutex_lock(&mutex1);
    data = global;
    printf("the value is: %d\n", data);
    pthread_mutex_unlock(&mutex1);
}
Without the sleep(), the code won't finish the 5 iterations:
Setting data the value is: 1804289383
the value is: 1804289383
Setting data the value is: 846930886
exsiting
Setting data the value is: 1804289383
Setting data the value is: 846930886
the value is: 846930886
exsiting
It only works if I add the sleep() to the main thread. I think it should work without the sleep(), because join() waits for each child thread to terminate.
Can anyone tell me why this happens?
Your use of mutex objects looks fine, but this loop
for(i = 0; i < 5; i++){
    re_value1 = pthread_create(&thread1, NULL, (void*)&thread_set, NULL);
    re_value2 = pthread_create(&thread2, NULL, (void*)&thread_read, NULL);
}
is asking for trouble, because you are reusing the same thread handles thread1 and thread2 on every iteration of the loop. Each call to pthread_create overwrites the handle from the previous iteration, so you lose track of the earlier threads and can never join them. You should use a separate pthread_t per thread you create if you want reliable behaviour.
You are also not checking the return values from pthread_create(), which would be a good idea. In summary, I would use a separate pthread_t per thread, or move the pthread_join calls inside the loop so that the threads are guaranteed to have finished before the next call to pthread_create().
Finally, the function signature for functions passed to pthread_create() is
void *thread_function(void *);
and not
void thread_function()
like you have in your code.
You are creating 10 threads (5 iterations of two threads each), but only joining the last two you create (as mathematician1975 notes, you're re-using the thread handle variables, so the values from the last iteration are the only ones you have available to join on). Without the sleep(), it's quite possible that the scheduler has not started executing the first 8 threads before you hit exit(), which automatically terminates all threads, whether they have had a chance to run yet or not.
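Putting both answers together, a corrected structure might look something like the sketch below: proper thread-function signatures, one pthread_t per created thread, return values checked, and every thread joined before exiting. The array names are mine; this is just one way to arrange it.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int global = 0;

void *thread_set(void *arg)                  /* matches the required void *(*)(void *) */
{
    pthread_mutex_lock(&mutex1);
    printf("Setting data\t");
    global = rand();
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

void *thread_read(void *arg)
{
    pthread_mutex_lock(&mutex1);
    printf("the value is: %d\n", global);
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

int main(void)
{
    pthread_t setters[5], readers[5];        /* one handle per created thread */
    int i;

    for (i = 0; i < 5; i++) {
        if (pthread_create(&setters[i], NULL, thread_set, NULL) != 0 ||
            pthread_create(&readers[i], NULL, thread_read, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
    }
    for (i = 0; i < 5; i++) {                /* join all ten threads before exiting */
        pthread_join(setters[i], NULL);
        pthread_join(readers[i], NULL);
    }
    printf("exiting\n");
    return 0;
}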

POSIX threading on iOS

I've started experimenting with POSIX threads on the iOS platform. Coming from NSThread it's pretty daunting.
Basically, in my sample app I have a big array filled with type mystruct. Every so often (very frequently) I want to perform a task with the contents of one of these structs in the background, so I pass it to detachnewthread to kick things off.
I think I have the basics down, but I'd like to get a professional opinion before I attempt to move on to more complicated stuff.
Does what I have here seem OK, and could you point out anything missing that could cause problems? Can you spot any memory management issues etc.?
struct mystruct
{
    pthread_t thread;
    int a;
    long c;
};

void *DoStuffWithMyStruct(void *threadid);

void detachnewthread(mystruct *str)
{
    if(str)
    {
        int rc;
        // printf("In detachnewthread: creating thread %d\n", str->soundid);
        rc = pthread_create(&str->thread, NULL, DoStuffWithMyStruct, (void *)str);
        if (rc)
        {
            printf("ERROR; return code from pthread_create() is %d\n", rc);
            //exit(-1);
        }
    }
}

void *DoStuffWithMyStruct(void *threadid)
{
    mystruct *sptr = (mystruct *)threadid;
    // do stuff with the data in the struct
    pthread_detach(sptr->thread);
    return NULL;
}
One potential issue would be how the storage for the passed in structure mystruct is created. The lifetime of that variable is very critical to its usage in the thread. For example, if the caller of detachnewthread had that declared on the stack and then returned before the thread finished, it would be undefined behavior. Likewise, if it were dynamically allocated, then it is necessary to make sure it is not freed before the thread is finished.
In response to the comment/question: The necessity of some kind of mutex depends on the usage. For the sake of discussion, I will assume it is dynamically allocated. If the calling thread fills in the contents of the structure prior to creating the "child" thread and can guarantee that it will not be freed until after the child thread exits, and the subsequent access is read/only, then you would not need a mutex to protect it. I can imagine that type of scenario if the structure contains information that the child thread needs for completing its task.
If, however, more than one thread will be accessing the contents of the structure and one or more threads will be changing the data (writing to the structure), then you probably do need a mutex to protect it.
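To make the ownership question concrete: one common pattern is to heap-allocate the structure in the caller, fill it in before pthread_create, and let the thread free it when it is done, so nothing can release it too early. A rough sketch under those assumptions (simplified, and not the poster's actual code; the struct here carries only the data fields):
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct mystruct
{
    int a;
    long c;
};

void *DoStuffWithMyStruct(void *arg)
{
    struct mystruct *sptr = (struct mystruct *)arg;
    /* ... work with sptr->a and sptr->c ... */
    free(sptr);                          /* the thread owns the struct and releases it */
    return NULL;
}

void detachnewthread(int a, long c)
{
    pthread_t tid;
    struct mystruct *str = (struct mystruct *)malloc(sizeof *str);
    if (str == NULL)
        return;
    str->a = a;                          /* fill in everything before the thread starts */
    str->c = c;
    if (pthread_create(&tid, NULL, DoStuffWithMyStruct, str) != 0) {
        printf("ERROR: pthread_create failed\n");
        free(str);                       /* no thread was started, so free it here */
        return;
    }
    pthread_detach(tid);                 /* never joined; the thread cleans up after itself */
}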
Try using Apple's Grand Central Dispatch (GCD), which will manage the threads for you. GCD provides the capability to dispatch work, via blocks, to various queues that are managed by the system. Some of the queue types are concurrent, serial, and of course the main queue where the UI runs. Based upon the CPU resources at hand, the system will manage the queues and the necessary threads to get the work done. A simple example, which shows how you can nest calls to different queues, looks like this:
__block MYClass *blockSelf = self;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [blockSelf doSomeWork];
    dispatch_async(dispatch_get_main_queue(), ^{
        [blockSelf.textField setStringValue:@"Some work is done, updating UI"];
    });
});
__block MYClass *blockSelf = self is used simply to avoid the retain cycles associated with how blocks work.
Apple's docs:
http://developer.apple.com/library/ios/#documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html
Mike Ash's Q&A blog post:
http://mikeash.com/pyblog/friday-qa-2009-08-28-intro-to-grand-central-dispatch-part-i-basics-and-dispatch-queues.html

Odd behavior when creating and cancelling a thread in close succession

I'm using g++ version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) and libpthread v. 2-11-1. The following code simply creates a thread running Foo(), and immediately cancels it:
void* Foo(void*){
    printf("Foo\n");
    /* wait 1 second, e.g. using nanosleep() */
    return NULL;
}

int main(){
    pthread_t thread;
    int res_create, res_cancel;
    printf("creating thread\n);
    res_create = pthread_create(&thread, NULL, &Foo, NULL);
    res_cancel = pthread_cancel(thread);
    printf("cancelled thread\n);
    printf("create: %d, cancel: %d\n", res_create, res_cancel);
    return 0;
}
The output I get is:
creating thread
Foo
Foo
cancelled thread
create: 0, cancel: 0
Why the second Foo output? Am I abusing the pthread API by calling pthread_cancel right after pthread_create? If so, how can I know when it's safe to touch the thread? If I so much as stick a printf() between the two, I don't have this problem.
I cannot reproduce this on a slightly newer Ubuntu. Sometimes I get one Foo and sometimes none. I had to fix a few things to get your code to compile (missing headers, a missing call to some sleep function implied by a comment, and string literals that were not closed), which indicates you did not paste the actual code that reproduced the problem.
If the problem is indeed real, it might indicate some thread-cancellation problem in glibc's I/O library. It looks a lot like two threads doing an fflush(stdout) on the same buffer contents. That should never happen normally, because the I/O library is thread safe. But what if there is some cancellation scenario like this: the thread holds the mutex on stdout and has just done a flush, but has not yet updated the buffer to clear the output. Then it is cancelled before it can do that, and the main thread flushes the same data again.
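If the goal is simply to create and cancel in close succession, it may also help to join the thread right after cancelling, so you know exactly when it is gone, and to check whether it was actually cancelled. A small sketch of that pattern (the headers and the nanosleep call are my assumptions, filling in the comment from the original code):
#include <pthread.h>
#include <stdio.h>
#include <time.h>

void *Foo(void *arg)
{
    printf("Foo\n");
    struct timespec ts = {1, 0};
    nanosleep(&ts, NULL);            /* sleep 1 second; also a cancellation point */
    return NULL;
}

int main(void)
{
    pthread_t thread;
    void *result = NULL;

    if (pthread_create(&thread, NULL, &Foo, NULL) != 0)
        return 1;
    pthread_cancel(thread);          /* request cancellation */
    pthread_join(thread, &result);   /* wait until the thread has really finished */

    if (result == PTHREAD_CANCELED)
        printf("thread was cancelled\n");
    else
        printf("thread finished normally\n");
    return 0;
}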

Locking mutex in one thread and unlocking it in the other

Will this code be correct and portable?
void* aThread(void*)
{
    while(conditionA)
    {
        pthread_mutex_lock(mutex1);
        // do something
        pthread_mutex_unlock(mutex2);
    }
}

void* bThread(void*)
{
    while(conditionB)
    {
        pthread_mutex_lock(mutex2);
        // do something
        pthread_mutex_unlock(mutex1);
    }
}
In the actual application I have three threads - two for adding values to an array and one for reading them. And I need the third thread to display the contents of the array right after one of the other threads adds a new item.
It is not. If thread A reaches pthread_mutex_unlock(mutex2) before thread B has locked mutex2, you are facing undefined behavior. In any case, you must not unlock a mutex that was locked by another thread.
The pthread_mutex_lock Open Group Base Specification says so:
If the mutex type is PTHREAD_MUTEX_NORMAL [...] If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results.
As user562734's answer says, the answer is no - you cannot unlock a mutex that was locked by another thread.
To achieve the reader/writer synchronisation you want, you should use condition variables - pthread_cond_wait(), pthread_cond_signal() and related functions.
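For the "display right after an item is added" requirement, the usual shape of a condition-variable solution looks something like this sketch (the names and the pending counter are placeholders for your actual array bookkeeping):
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t item_added = PTHREAD_COND_INITIALIZER;
int pending = 0;                          /* items added but not yet displayed */

void writer_adds_value(int value)
{
    pthread_mutex_lock(&lock);
    /* append value to the shared array here */
    pending++;
    pthread_cond_signal(&item_added);     /* wake the display thread */
    pthread_mutex_unlock(&lock);
}

void *display_thread(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (pending == 0)              /* re-check the predicate after every wakeup */
            pthread_cond_wait(&item_added, &lock);
        /* read and display the newly added items here */
        pending = 0;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}
Note that each mutex lock and unlock here happens on the same thread, which is what POSIX requires.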
Some implementations do allow the locking and unlocking threads to be different.
#include <iostream>
#include <thread>
#include <mutex>
using namespace std;

mutex m;

void f() { m.lock();   cout << "f: mutex is locked"   << endl; }
void g() { m.unlock(); cout << "g: mutex is unlocked" << endl; }

int main()
{
    thread tf(f);  /* start f */
    tf.join();     /* wait for f to terminate */
    thread tg(g);  /* start g */
    tg.join();     /* wait for g to terminate */
}
This program prints
f: mutex is locked
g: mutex is unlocked
System is Debian Linux, gcc 4.9.1, -std=c++11.
