Pthread join priority - pthreads

Let me show you the function first:
for (i = 0; i < 3; i = i + 2) {
    pthread_create(&thread1, NULL, &randtrack, (void *)&rnum_array[i]);
    pthread_create(&thread2, NULL, &randtrack, (void *)&rnum_array[i+1]);
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
}
print final result here;
My understanding is that after the two threads are created, the parent thread will block at join(thread1). What if thread2 actually comes back earlier than thread1? How can I make the longer-running thread always be the last one waited on?
Thanks

If thread2 finishes and thread1 hasn't, you'll continue waiting until thread1 finishes. Then you'll wait until thread2 finishes, which will complete more or less instantaneously. The order in which you wait for the threads won't matter (unless the threads try to interact with each other directly, such as by calling pthread_kill or pthread_join on each other).
Update: Your design is completely wrong for what you're actually trying to do. You want to do this:
Create a structure to track the work that needs to be done. It should be protected by a mutex and track how many threads are currently working and which work unit needs to be assigned next.
When you create the threads, have them run a function that acquires the mutex, grabs the next unit of work, increments the number of threads running, and then does the work.
When a thread completes a work unit, it should acquire the mutex, decrement the number of threads running, and see if there's more work to do. When there's no work to do, the thread should terminate.
You can now wait for all threads to terminate, which will only happen when all the work is done. This eliminates the loop over the work units.
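A minimal sketch of that design in C (names like work_ctx and do_work and the unit count are illustrative, not from your code; the running-thread counter from the first step is omitted here because pthread_join already provides the wait):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 2
#define NUM_UNITS   4

typedef struct {
    pthread_mutex_t lock;
    int next_unit;              /* next work unit to hand out */
} work_ctx;

static void do_work(int unit) {
    printf("processing unit %d\n", unit);  /* stand-in for the real work */
}

static void *worker(void *arg) {
    work_ctx *ctx = arg;
    for (;;) {
        pthread_mutex_lock(&ctx->lock);
        int unit = (ctx->next_unit < NUM_UNITS) ? ctx->next_unit++ : -1;
        pthread_mutex_unlock(&ctx->lock);
        if (unit < 0)
            return NULL;        /* no work left: terminate */
        do_work(unit);          /* do the work outside the lock */
    }
}

int main(void) {
    work_ctx ctx = { PTHREAD_MUTEX_INITIALIZER, 0 };
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, &ctx);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);  /* returns only when all work is done */
    /* print final result here */
    return 0;
}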
And please learn a very important general rule -- threads are just the things that get work done. What you want your code to focus on is doing the work, not how it will be done. Try to wait for work to be done, not for threads to be done.

Is serial queue faster than synchronized block?

I have two ways to achieve thread safety. I have implemented option 1, but my supervisor wants me to implement option 2.
Option 1:
@Synchronized
fun doSomething(task: Task) {
    // task.do()
}
Option 2:
fun doSomething(task: Task) {
    serialQueue.add(task)
}
Which approach is faster, and why? And why do two versions exist for a single purpose?
In the first option the calling thread acquires the lock on the object that doSomething is called on, then executes the task, holding the lock until it’s done. If another thread wants to execute the same method on the same instance, that thread has to wait for the tasks run by other threads to finish before it can execute the method.
In the second option a thread drops the task in a queue. The thread isn’t held up waiting while the task executes.
Which option to use can be affected by several things, like how long the task takes, how important it is for the task to get done right now as opposed to letting it work through the queue, and how much other work the waiting thread could be doing for you instead.
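For concreteness, here is a GCD-flavored sketch of option 2's semantics (illustrative only; the question's serialQueue could be any serial executor, and the queue label is made up):

#import <Foundation/Foundation.h>

// One serial queue: tasks run one at a time, in FIFO order.
static dispatch_queue_t serialQueue;

// Option 2 sketch: the caller returns immediately; the queue runs the task later.
static void doSomething(dispatch_block_t task) {
    dispatch_async(serialQueue, task);
}

int main(void) {
    serialQueue = dispatch_queue_create("com.example.tasks", DISPATCH_QUEUE_SERIAL);
    doSomething(^{ NSLog(@"task 1"); });  // enqueue and move on; no lock held
    doSomething(^{ NSLog(@"task 2"); });  // runs only after task 1 finishes
    dispatch_sync(serialQueue, ^{});      // drain the queue before exiting (demo only)
    return 0;
}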

Clarifications on dispatch_queue, reentrancy and deadlocks

I need some clarification on how dispatch queues are related to reentrancy and deadlocks.
Reading this blog post Thread Safety Basics on iOS/OS X, I encountered this sentence:
All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue.
So, what is the relationship between reentrancy and deadlock? Why, if a dispatch_queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call?
In my understanding, you can have a deadlock using dispatch_sync only if the thread you are running on is the same thread the block is dispatched onto.
A simple example is the following. If I run this code on the main thread, it deadlocks, since dispatch_get_main_queue() grabs the main thread as well:
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"Deadlock!!!");
});
Any clarifications?
All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue.
So, what is the relationship between reentrancy and deadlock? Why, if a dispatch_queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call?
Without having read that article, I imagine that statement was in reference to serial queues, because it's otherwise false.
Now, let's consider a simplified conceptual view of how dispatch queues work (in some made-up pseudo-language). We also assume a serial queue, and don't consider target queues.
Dispatch Queue
When you create a dispatch queue, basically you get a FIFO queue, a simple data structure where you can push objects on the end, and take objects off the front.
You also get some complex mechanisms to manage thread pools and do synchronization, but most of that is for performance. Let's simply assume that you also get a thread that just runs an infinite loop, processing messages from the queue.
void processQueue(queue) {
    for (;;) {
        waitUntilQueueIsNotEmptyInAThreadSafeManner(queue);
        block = removeFirstObject(queue);
        block();
    }
}
dispatch_async
Taking the same simplistic view of dispatch_async yields something like this...
void dispatch_async(queue, block) {
    appendToEndInAThreadSafeManner(queue, block);
}
All it is really doing is taking the block, and adding it to the queue. This is why it returns immediately, it just adds the block onto the end of the data structure. At some point, that other thread will pull this block off the queue, and execute it.
Note that this is where the FIFO guarantee comes into play. The thread pulling blocks off the queue and executing them always takes them in the order in which they were placed on the queue. It then waits until that block has fully executed before taking the next block off the queue.
dispatch_sync
Now, another simplistic view of dispatch_sync. In this case, the API guarantees that it will wait until the block has run to completion before it returns. In particular, calling this function does not violate the FIFO guarantee.
void dispatch_sync(queue, block) {
    bool done = false;
    dispatch_async(queue, { block(); done = true; });
    while (!done) { }
}
Now, this is actually done with semaphores, so there are no CPU loops and boolean flags, and it doesn't use a separate block, but we are trying to keep it simple. You should get the idea.
The block is placed on the queue, and then the function waits until it knows for sure that "the other thread" has run the block to completion.
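For reference, a closer (still simplified) sketch using a real semaphore, as described above; my_dispatch_sync is an illustrative name, not the actual implementation:

void my_dispatch_sync(dispatch_queue_t queue, dispatch_block_t block) {
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    dispatch_async(queue, ^{
        block();
        dispatch_semaphore_signal(done);  // mark the block as finished
    });
    // Block the calling thread until the queue has run the block.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}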
Reentrancy
Now, we can get a reentrant call in a number of different ways. Let's consider the most obvious.
block1 = {
    dispatch_sync(queue, block2);
}
dispatch_sync(queue, block1);
This will place block1 on the queue, and wait for it to run. Eventually the thread processing the queue will pop block1 off, and start executing it. When block1 executes, it will put block2 on the queue, and then wait for it to finish executing.
This is one meaning of reentrancy: when you re-enter a call to dispatch_sync from another call to dispatch_sync.
Deadlock from reentering dispatch_sync
However, block1 is now running inside the queue's for loop. That code is executing block1, and will not process anything more from the queue until block1 completes.
Block1, though, has placed block2 on the queue, and is waiting for it to complete. Block2 has indeed been placed on the queue, but it will never be executed. Block1 is "waiting" for block2 to complete, but block2 is sitting on a queue, and the code that pulls it off the queue and executes it will not run until block1 completes.
Deadlock from NOT reentering dispatch_sync
Now, what if we change the code to this...
block1 = {
    dispatch_sync(queue, block2);
}
dispatch_async(queue, block1);
We are not technically reentering dispatch_sync. However, we still have the same scenario, it's just that the thread that kicked off block1 is not waiting for it to finish.
We are still running block1, waiting for block2 to finish, but the thread that will run block2 must finish with block1 first. This will never happen because the code to process block1 is waiting for block2 to be taken off the queue and executed.
Thus reentrancy for dispatch queues is not technically reentering the same function, but reentering the same queue processing.
Deadlocks from NOT reentering the queue at all
In its simplest (and most common) case, let's assume [self foo] gets called on the main thread, as is common for UI callbacks.
- (void)foo {
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Never gets here
    });
}
This doesn't "reenter" the dispatch queue API, but it has the same effect. We are running on the main thread. The main thread is where the blocks are taken off the main queue and processed. The main thread is currently executing foo and a block is placed on the main-queue, and foo then waits for that block to be executed. However, it can only be taken off the queue and executed after the main thread gets done with its current work.
This will never happen, because the main thread will not progress until foo completes, but foo will never complete until that block it is waiting for runs... which will not happen.
In my understanding, you can have a deadlock using dispatch_sync only if the thread you are running on is the same thread the block is dispatched onto.
As the aforementioned example illustrates, that's not the case.
Furthermore, there are other scenarios that are similar, but not so obvious, especially when the sync access is hidden in layers of method calls.
Avoiding deadlocks
The only sure way to avoid deadlocks is to never call dispatch_sync (that's not exactly true, but it's close enough). This is especially true if you expose your queue to users.
If you use a self-contained queue, and control its use and target queues, you can maintain some control when using dispatch_sync.
There are, indeed, some valid uses of dispatch_sync on a serial queue, but most are probably unwise, and should only be done when you know for certain that you will not be 'sync' accessing the same or another resource (the latter is known as deadly embrace).
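One common mitigation (not from this answer, but a widely used pattern) is to tag your own queue with dispatch_queue_set_specific and only dispatch_sync when you are not already on it. A sketch, with an illustrative key and helper name:

static void *kQueueKey = &kQueueKey;  // unique address used as a tag

dispatch_queue_t makeTaggedQueue(void) {
    dispatch_queue_t q = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_set_specific(q, kQueueKey, kQueueKey, NULL);
    return q;
}

void safeSync(dispatch_queue_t q, dispatch_block_t block) {
    if (dispatch_get_specific(kQueueKey) == kQueueKey)
        block();                  // already on the queue: run inline, no deadlock
    else
        dispatch_sync(q, block);  // not on the queue: safe to wait
}

Note that this only catches direct self-dispatch on that one queue; it does nothing for deadlock cycles through multiple queues.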
EDIT
Jody, thanks a lot for your answer. I really understood all of your stuff. I would like to put more points... but right now I cannot. 😢 Do you have any good tips in order to learn this under-the-hood stuff? – Lorenzo B.
Unfortunately, the only books on GCD that I've seen are not very advanced. They go over the easy surface level stuff on how to use it for simple general use cases (which I guess is what a mass market book is supposed to do).
However, GCD is open source. Here is the webpage for it, which includes links to their svn and git repositories. However, the webpage looks old (2010) and I'm not sure how recent the code is. The most recent commit to the git repository is dated Aug 9, 2012.
I'm sure there are more recent updates; but not sure where they would be.
In any event, I doubt the conceptual framework of the code has changed much over the years.
Also, the general idea of dispatch queues is not new, and has been around in many forms for a very long time.
Many moons ago, I spent my days (and nights) writing kernel code (worked on what we believe to have been the very first symmetric multiprocessing implementation of SVR4), and then when I finally breached the kernel, I spent most of my time writing SVR4 STREAMS drivers (wrapped by user space libraries). Eventually, I made it fully into user space, and built some of the very first HFT systems (though it wasn't called that back then).
The dispatch queue concept was prevalent in every bit of that. Its emergence as a generally available user-space library is only a somewhat recent development.
Edit #2
Jody, thanks for your edit. So, to recap: a serial dispatch queue is not reentrant, since it could produce an invalid state (a deadlock). On the contrary, a reentrant function will not produce it. Am I right? – Lorenzo B.
I guess you could say that, because it does not support reentrant calls.
However, I think I would prefer to say that the deadlock is the result of preventing invalid state. If anything else occurred, then either the state would be compromised, or the definition of the queue would be violated.
Core Data's performBlockAndWait
Consider -[NSManagedObjectContext performBlockAndWait]. It's non-asynchronous, and it is reentrant. It has some pixie dust sprinkled around the queue access so that the second block runs immediately, when called from "the queue." Thus, it has the traits I described above.
[moc performBlock:^{
    [moc performBlockAndWait:^{
        // This block runs immediately, and to completion, before returning.
        // However, `dispatch_async`/`dispatch_sync` would deadlock.
    }];
}];
The above code does not "produce a deadlock" from reentrancy (but the API can't avoid deadlocks entirely).
However, depending on who you talk to, doing this can produce invalid (or unpredictable/unexpected) state. In this simple example, it's clear what's happening, but in more complicated parts it can be more insidious.
At the very least, you must be very careful about what you do inside a performBlockAndWait.
Now, in practice, this is only a real issue for main-queue MOCs, because the main run loop is running on the main queue, so performBlockAndWait recognizes that and immediately executes the block. However, most apps have a MOC attached to the main queue, and respond to user save events on the main queue.
If you want to watch how dispatch queues interact with the main run loop, you can install a CFRunLoopObserver on the main run loop, and watch how it processes the various input sources in the main run loop.
If you've never done that, it's an interesting and educational experiment (though you can't assume what you observe will always be that way).
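If you want to try it, a minimal observer sketch looks something like this (logging every activity is noisy, which is rather the point of the experiment):

CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
    kCFAllocatorDefault,
    kCFRunLoopAllActivities,  // observe every phase of the run loop
    true,                     // repeat, don't fire once
    0,                        // order
    ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
        NSLog(@"run loop activity: %lu", (unsigned long)activity);
    });
CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);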
Anyway, I generally try to avoid both dispatch_sync and performBlockAndWait.

GCD: What happens when two threads want to execute blocks on the main thread at the same time

I am using GCD in my iOS app. I have three threads: the main thread, thread 2, and thread 3.
The following code is executed on thread 2:
dispatch_async(dispatch_get_main_queue(), ^{ code block 1 ...
so code block 1 will be executed on the main thread. What happens if the following code is executed on thread 3 before code block 1 finishes running:
dispatch_async(dispatch_get_main_queue(), ^{ code block 2 ...
Will block 2 wait until block 1 terminates?
How can I find answers to such questions? Shall I read Apple's documentation or do some experiments myself? What kind of experiments can I do?
The document you want is the Concurrency Programming Guide. In particular you want the section on Dispatch Queues, and somewhat more importantly you want the section on Migrating Away from Threads.
You should not think of yourself as having three threads (in fact, you may not). You may have three blocks. You may have three queues. How and if these are dispatched to threads is an internal implementation detail.
In GCD, the word "dispatch" means "place on a queue." When a block reaches the front of a system queue, it will be eligible to run on an available thread. Queues may feed into other queues, but eventually they have to tie to one of the system queues (otherwise they would never execute).
The main queue is a serial queue. Like other serial queues, each block must complete before the next block is allowed to run (this is why you can starve or deadlock the main queue if you're not careful). There are also concurrent queues, which only require that each block be started before the next is considered.
But the key is to remember that these are just FIFO queues that you can put blocks onto.
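A tiny experiment (a sketch, not from the question) that makes the serial FIFO behavior visible: block 2 always logs after block 1 ends, even though it was enqueued while block 1 was still running.

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"block 1 start");
        [NSThread sleepForTimeInterval:1.0];  // simulate slow work on the main queue
        NSLog(@"block 1 end");
    });
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"block 2");  // always logs after "block 1 end": the main queue is serial
    });
});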
The answer lies in the fact that the main queue is a serial queue. That is, in your example, block 2 will wait for block 1 to finish before it can be executed. Careful though: if your first block gets blocked on something or waits for a long time, the execution of block 2 might be delayed for a long time or even indefinitely.
For a simple example you can run, you can refer to my answer on this question here: https://stackoverflow.com/a/20683252/1387258
What's happening there is:
The collectionView is requested to reload its data on the main thread from a different thread.
The collectionView (for example) is then requested to add a new section, a bunch of items, and such.
Now, this is vital if your second block depends on your first block. That is, you may need to first invalidate your layout before you can add new items to it.
A second scenario could be: you need to change the layout of your collection view before you can update its contents.
How can I find answers to such questions?
I'm going to recommend you try various things, like what I've suggested above. The main thread is for UI updates and such only. Try experimenting there, and good luck.

is there a way that the synchronized keyword doesn't block the main thread

Imagine you want to do many things in the background of an iOS application, and you code it properly so that you create threads (for example using GCD) to execute this background activity.
Now what if you need, at some point, to update a variable, but this update can occur on any of the threads you created?
You obviously want to protect that variable, and you can use the @synchronized keyword to create the locks for you, but here is the catch (extract from the Apple documentation):
The @synchronized() directive locks a section of code for use by a single thread. Other threads are blocked until the thread exits the protected code—that is, when execution continues past the last statement in the @synchronized() block.
So that means that if you synchronize on an object and two threads are writing to it at the same time, even the main thread will block until both threads are done writing their data.
An example of code that will showcase all this:
// Create the background queue
dispatch_queue_t queue = dispatch_queue_create("synchronized_example", NULL);
// Start working in new thread
dispatch_async(queue, ^{
    // Synchronize that shared resource
    @synchronized(sharedResource_)
    {
        // Write things on that resource
        // If more than one thread accesses this piece of code:
        // all threads (even main thread) will block until task is completed.
        [self writeComplexDataOnLocalFile];
    }
});
// won’t actually go away until queue is empty
dispatch_release(queue);
So the question is fairly simple: how can we overcome this? How can we safely add locks on all the threads EXCEPT the main thread, which, we know, doesn't need to be blocked in this case?
EDIT FOR CLARIFICATION
As some of you commented, it does seem logical (and this is clearly what I thought at first when using synchronized) that only the threads trying to acquire the lock should block until they are done.
However, tested in a real situation, this doesn't seem to be the case, and the main thread also seems to suffer from the lock.
I use this mechanism to log things in separate threads so that the UI is not blocked. But when I do intense logging, the UI (main thread) is clearly highly impacted (scrolling is not as smooth).
So, two options here: either the background tasks are so heavy that even the main thread gets impacted (which I doubt), or synchronized also blocks the main thread while performing the lock operations (which I'm starting to reconsider).
I'll dig a little further using the Time Profiler.
I believe you are misunderstanding the following sentence that you quote from the Apple documentation:
Other threads are blocked until the thread exits the protected code...
This does not mean that all threads are blocked; it just means that all threads trying to synchronise on the same object (sharedResource_ in your example) are blocked.
The following quote is taken from Apple's Thread Programming Guide, which makes it clear that only threads that synchronise on the same object are blocked.
The object passed to the @synchronized directive is a unique identifier used to distinguish the protected block. If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other. If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.
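A small sketch illustrating the distinction (lockA and lockB are illustrative names):

NSObject *lockA = [NSObject new];
NSObject *lockB = [NSObject new];

// These two never block each other: different lock objects.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    @synchronized(lockA) { /* critical section A */ }
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    @synchronized(lockB) { /* critical section B */ }
});

// This one contends with the first: same lock object, so one waits for the other.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    @synchronized(lockA) { /* critical section A again */ }
});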
Update: If your background threads are impacting the performance of your interface then you might want to consider putting some sleeps into the background threads. This should allow the main thread some time to update the UI.
I realise you are using GCD but, for example, NSThread has a couple of methods that will suspend the thread, e.g. +sleepForTimeInterval:. In GCD you can probably just call sleep().
Alternatively, you might also want to look at changing the thread priority to a lower priority. Again, NSThread has the setThreadPriority: for this purpose. In GCD, I believe you would just use a low priority queue for the dispatched blocks.
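For example, a sketch using the question's method name:

// Run the heavy work at background priority so the scheduler favors the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    [self writeComplexDataOnLocalFile];
});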
I'm not sure if I understood you correctly; @synchronized doesn't block all threads, but only the ones that want to execute the code inside the block. So the solution probably is: don't execute the code on the main thread.
If you simply want to avoid having the main thread acquire the lock, you can do this (and wreak havoc):
dispatch_async(queue, ^{
    if (![NSThread isMainThread])
    {
        // Synchronize that shared resource
        @synchronized(sharedResource_)
        {
            // Write things on that resource
            // If more than one thread accesses this piece of code:
            // all threads (even main thread) will block until task is completed.
            [self writeComplexDataOnLocalFile];
        }
    }
    else
        [self writeComplexDataOnLocalFile];
});

pthread_cond_signal wakes more than one thread on a multiprocessor system

This is an excerpt from the pthread_cond_wait man page:
Some implementations, particularly on a multi-processor, may sometimes cause multiple threads to wake up when the condition variable is signaled simultaneously on different processors.
In general, whenever a condition wait returns, the thread has to re-evaluate the predicate associated with the condition wait to determine whether it can safely proceed, should wait again, or should declare a timeout.
My question: what is the meaning of "predicate" here? Does it mean that I need to create one more variable apart from the condition variable passed to pthread_cond_wait, or does it refer to the same variable that has been passed to pthread_cond_wait?
Yes, you need an additional variable like int done_flag; to use like this:
pthread_mutex_lock(&mutex);
while (!done_flag) pthread_cond_wait(&cond, &mutex);
/* do something that needs the lock held */
pthread_mutex_unlock(&mutex);
/* do some other stuff that doesn't need the lock held */
Of course it often might not be a flag but rather a count, or some other type of variable, with a more complicated condition to test.
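For completeness, the signalling side of that sketch sets the flag under the mutex and then signals:

pthread_mutex_lock(&mutex);
done_flag = 1;               /* change the predicate first... */
pthread_cond_signal(&cond);  /* ...then wake a waiter */
pthread_mutex_unlock(&mutex);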
This might be useful. You can use pthread_kill to wake a particular thread.
sigset_t _fSigMask; // global sigmask
We do this before creating our threads. Threads inherit their mask from the thread that creates them. We use SIGUSR1 to signal our threads. Other signals are available.
sigemptyset(&_fSigMask);
sigaddset(&_fSigMask, SIGUSR1);
sigaddset(&_fSigMask, SIGSEGV);
pthread_sigmask(SIG_BLOCK, &_fSigMask, NULL); // block these signals so sigwait can receive them
Then, to sleep a thread:
int nSig;
sigwait(&_fSigMask, &nSig);
Then, to wake the thread YourThread:
pthread_kill(YourThread, SIGUSR1);
By the way, during our testing, sleeping and waking our threads this way was about 40x faster than using condition variables.
