NSThread argument problem - iOS

[NSThread detachNewThreadSelector:@selector(addressLocation:) toTarget:self withObject:parameter];
[self addressLocation:parameter];
Should these two statements do the same thing? Because one of them (the second one) gives me an accurate result, and the other consistently gives me a random location off the coast of Africa. From what I have read, they should both do the same thing: execute addressLocation with the argument 'parameter.' The only difference is the thread, but it is accessing a global volatile variable, so that shouldn't matter, should it?

Threads are much more complicated than that. When you call detachNewThreadSelector, you are creating a new thread, but there's no simple way for you to know when the work on that thread completes. It could complete before the next line of code in the calling thread runs, or many seconds later.
If you create the thread first, you can then use performSelector:onThread:withObject:waitUntilDone: and you should get the same result as if you used [self addressLocation:parameter]. That won't do you a lot of good, though, because your main thread will be doing nothing while you wait for the result.
There are lots of ways to get data back from a thread -- I like to call performSelectorOnMainThread from the secondary thread to send the data back to the main thread, for example.
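For illustration, a minimal sketch of that round trip might look like this (computeLocationFrom: and handleLocationResult: are made-up helper names, not APIs from the question):

// Hypothetical sketch: do the work on a secondary thread, then hand the
// result back to the main thread for UI work.
- (void)startLookup:(id)parameter {
    [NSThread detachNewThreadSelector:@selector(addressLocation:)
                             toTarget:self
                           withObject:parameter];
}

- (void)addressLocation:(id)parameter {
    @autoreleasepool {
        id result = [self computeLocationFrom:parameter]; // hypothetical helper
        [self performSelectorOnMainThread:@selector(handleLocationResult:)
                               withObject:result
                            waitUntilDone:NO];
    }
}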
I would read up on Grand Central Dispatch to see if it suits your needs.

Related

Clarifications on dispatch_queue, reentrancy and deadlocks

I need some clarification on how dispatch queues are related to reentrancy and deadlocks.
Reading this blog post Thread Safety Basics on iOS/OS X, I encountered this sentence:
All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue.
So, what is the relationship between reentrancy and deadlock? Why, if a dispatch_queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call?
In my understanding, you can have a deadlock using dispatch_sync only if the thread you are running on is the same thread the block is dispatched onto.
A simple example is the following. If I run this code on the main thread, dispatch_get_main_queue() grabs the main queue (which runs on the main thread as well), and I end up in a deadlock.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"Deadlock!!!");
});
Any clarifications?
All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue.

So, what is the relationship between reentrancy and deadlock? Why, if a dispatch_queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call?
Without having read that article, I imagine that statement was in reference to serial queues, because it's otherwise false.
Now, let's consider a simplified conceptual view of how dispatch queues work (in some made-up pseudo-language). We also assume a serial queue, and don't consider target queues.
Dispatch Queue
When you create a dispatch queue, basically you get a FIFO queue, a simple data structure where you can push objects on the end, and take objects off the front.
You also get some complex mechanisms to manage thread pools and do synchronization, but most of that is for performance. Let's simply assume that you also get a thread that just runs an infinite loop, processing messages from the queue.
void processQueue(queue) {
    for (;;) {
        waitUntilQueueIsNotEmptyInAThreadSafeManner(queue);
        block = removeFirstObject(queue);
        block();
    }
}
dispatch_async
Taking the same simplistic view of dispatch_async yields something like this...
void dispatch_async(queue, block) {
    appendToEndInAThreadSafeManner(queue, block);
}
All it is really doing is taking the block, and adding it to the queue. This is why it returns immediately, it just adds the block onto the end of the data structure. At some point, that other thread will pull this block off the queue, and execute it.
Note that this is where the FIFO guarantee comes into play. The thread pulling blocks off the queue and executing them always takes them in the order they were placed on the queue. It then waits until a block has fully executed before taking the next block off the queue.
dispatch_sync
Now, another simplistic view of dispatch_sync. In this case, the API guarantees that it will wait until the block has run to completion before it returns. In particular, calling this function does not violate the FIFO guarantee.
void dispatch_sync(queue, block) {
    bool done = false;
    dispatch_async(queue, { block(); done = true; });
    while (!done) { }
}
Now, this is actually done with semaphores, so there are no CPU-burning loops or boolean flags, and it doesn't use a separate block, but we are trying to keep it simple. You should get the idea.
The block is placed on the queue, and then the function waits until it knows for sure that "the other thread" has run the block to completion.
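A slightly closer sketch, still simplified and still not the real libdispatch implementation, replaces the busy-wait with a semaphore:

// Simplified sketch only -- not the actual libdispatch implementation.
void my_dispatch_sync(dispatch_queue_t queue, dispatch_block_t block) {
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    dispatch_async(queue, ^{
        block();
        dispatch_semaphore_signal(done);  // wake the waiting caller
    });
    // Park the calling thread until the queue has run the block.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}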
Reentrancy
Now, we can get a reentrant call in a number of different ways. Let's consider the most obvious.
block1 = {
    dispatch_sync(queue, block2);
}
dispatch_sync(queue, block1);
This will place block1 on the queue, and wait for it to run. Eventually the thread processing the queue will pop block1 off, and start executing it. When block1 executes, it will put block2 on the queue, and then wait for it to finish executing.
This is one meaning of reentrancy: when you re-enter a call to dispatch_sync from another call to dispatch_sync.
Deadlock from reentering dispatch_sync
However, block1 is now running inside the queue's for loop. That code is executing block1, and will not process anything more from the queue until block1 completes.
Block1, though, has placed block2 on the queue, and is waiting for it to complete. Block2 has indeed been placed on the queue, but it will never be executed. Block1 is "waiting" for block2 to complete, but block2 is sitting on a queue, and the code that pulls it off the queue and executes it will not run until block1 completes.
Deadlock from NOT reentering dispatch_sync
Now, what if we change the code to this...
block1 = {
    dispatch_sync(queue, block2);
}
dispatch_async(queue, block1);
We are not technically reentering dispatch_sync. However, we still have the same scenario, it's just that the thread that kicked off block1 is not waiting for it to finish.
We are still running block1, waiting for block2 to finish, but the thread that will run block2 must finish with block1 first. This will never happen because the code to process block1 is waiting for block2 to be taken off the queue and executed.
Thus reentrancy for dispatch queues is not technically reentering the same function, but reentering the same queue processing.
Deadlocks from NOT reentering the queue at all
In its simplest (and most common) case, let's assume [self foo] gets called on the main thread, as is common for UI callbacks.
- (void)foo {
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Never gets here
    });
}
This doesn't "reenter" the dispatch queue API, but it has the same effect. We are running on the main thread. The main thread is where blocks are taken off the main queue and processed. The main thread is currently executing foo, a block is placed on the main queue, and foo then waits for that block to be executed. However, the block can only be taken off the queue and executed after the main thread finishes its current work.
This will never happen, because the main thread will not progress until foo completes, and foo will never complete until that block it is waiting for runs... which will not happen.
In my understanding, you can have a deadlock using dispatch_sync only if the thread you are running on is the same thread the block is dispatched onto.
As the aforementioned example illustrates, that's not the case.
Furthermore, there are other scenarios that are similar, but not so obvious, especially when the sync access is hidden in layers of method calls.
Avoiding deadlocks
The only sure way to avoid deadlocks is to never call dispatch_sync (that's not exactly true, but it's close enough). This is especially true if you expose your queue to users.
If you use a self-contained queue, and control its use and target queues, you can maintain some control when using dispatch_sync.
There are, indeed, some valid uses of dispatch_sync on a serial queue, but most are probably unwise, and it should only be done when you know for certain that you will not be 'sync' accessing the same or another resource (the latter is known as a deadly embrace).
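As a rule of thumb, the safe pattern is dispatch_async plus an async callback; a minimal sketch, where workQueue, doExpensiveWork, and updateUIWithResult are hypothetical names:

// Sketch: async work plus an async callback never blocks either queue.
dispatch_async(workQueue, ^{
    id result = doExpensiveWork();                 // runs on workQueue
    dispatch_async(dispatch_get_main_queue(), ^{
        updateUIWithResult(result);                // runs later on the main queue
    });
});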
EDIT
Jody, thanks a lot for your answer. I really understood all of your stuff. I would like to put more points... but right now I cannot. 😢 Do you have any good tips in order to learn this under-the-hood stuff? – Lorenzo B.
Unfortunately, the only books on GCD that I've seen are not very advanced. They go over the easy surface level stuff on how to use it for simple general use cases (which I guess is what a mass market book is supposed to do).
However, GCD is open source. Here is the webpage for it, which includes links to their svn and git repositories. However, the webpage looks old (2010) and I'm not sure how recent the code is. The most recent commit to the git repository is dated Aug 9, 2012.
I'm sure there are more recent updates; but not sure where they would be.
In any event, I doubt the conceptual framework of the code has changed much over the years.
Also, the general idea of dispatch queues is not new, and has been around in many forms for a very long time.
Many moons ago, I spent my days (and nights) writing kernel code (worked on what we believe to have been the very first symmetric multiprocessing implementation of SVR4), and then when I finally breached the kernel, I spent most of my time writing SVR4 STREAMS drivers (wrapped by user space libraries). Eventually, I made it fully into user space, and built some of the very first HFT systems (though it wasn't called that back then).
The dispatch queue concept was prevalent in every bit of that. Its emergence as a generally available user-space library is only a somewhat recent development.
Edit #2
Jody, thanks for your edit. So, to recap: a serial dispatch queue is not reentrant, since it could produce an invalid state (a deadlock). On the contrary, a reentrant function will not produce it. Am I right? – Lorenzo B.
I guess you could say that, because it does not support reentrant calls.
However, I think I would prefer to say that the deadlock is the result of preventing invalid state. If anything else occurred, then either the state would be compromised, or the definition of the queue would be violated.
Core Data's performBlockAndWait
Consider -[NSManagedObjectContext performBlockAndWait:]. It's synchronous, and it is reentrant. It has some pixie dust sprinkled around the queue access so that the second block runs immediately when called from "the queue." Thus, it has the traits I described above.
[moc performBlock:^{
    [moc performBlockAndWait:^{
        // This block runs immediately, and to completion before returning.
        // The analogous dispatch_async/dispatch_sync nesting would deadlock.
    }];
}];
The above code does not "produce a deadlock" from reentrancy (but the API can't avoid deadlocks entirely).
However, depending on who you talk to, doing this can produce invalid (or unpredictable/unexpected) state. In this simple example, it's clear what's happening, but in more complicated parts it can be more insidious.
At the very least, you must be very careful about what you do inside a performBlockAndWait.
Now, in practice, this is only a real issue for main-queue MOCs, because the main run loop is running on the main queue, so performBlockAndWait recognizes that and immediately executes the block. However, most apps have a MOC attached to the main queue, and respond to user save events on the main queue.
If you want to watch how dispatch queues interact with the main run loop, you can install a CFRunLoopObserver on the main run loop, and watch how it processes the various input sources in the main run loop.
If you've never done that, it's an interesting and educational experiment (though you can't assume what you observe will always be that way).
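If you want to try that experiment, a minimal observer can be installed with something like this sketch:

// Sketch: log main run loop activity. Install once, e.g. early at startup.
CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
    kCFAllocatorDefault,
    kCFRunLoopAllActivities,
    true,    // repeats
    0,       // order
    ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
        NSLog(@"main run loop activity: %lu", (unsigned long)activity);
    });
CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
CFRelease(observer);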
Anyway, I generally try to avoid both dispatch_sync and performBlockAndWait.

iOS blocks are called on what thread?

I'm learning about blocks from a Stanford video. I'm now at the part which explains Core Data. The teacher mentions something about:
- (void)openWithCompletionHandler:(void (^)(BOOL success))completionHandler;
He said that the completion handler block will be called on the thread which called the method. So basically the method runs async, but the block gets called on the calling thread, let's assume main.
So my question is: do all blocks run on the thread from where the method call was made? To illustrate why I ask this question, I have an Async class which does requests to a server.
The format of all these methods is like this:
- (void)getSomething:(id<delegateWhatever>)delegate {
    // go to a background thread using GCD...
    // get the result from the server...
    // go back to the main thread and call the delegate method...
}
When I use blocks, do I not need to worry about going back to the main thread, since they will be called where the call was made?
Hope this is clear,
Thanks in advance
If something runs asynchronously, you should read the documentation to know on which thread, e.g., the completion block will be executed. If it is your code, you are in charge here: you can use the global GCD queues, you can create your own queue and execute it there, or whatever...
In general, a block behaves like a function or a method call: it is executed on the thread that calls it. It is even possible for the same block to be executed from two different threads at the same time.
And just to be clear: even if you are using blocks, you still need to take care of going back to the main thread, if it is necessary, of course.
Nothing forces blocks to be called on a particular thread, so it depends on the specific method whether or not you need to worry about its callback being on the main thread. (In practice I don't remember ever seeing a library where a method called on the main thread would not call its completion handler also on the main thread. But you still need to read the documentation of the specific library and method you are using, as always.)
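When the documentation doesn't guarantee the callback thread, a common defensive pattern is to hop back explicitly; a sketch, where getSomethingWithCompletion: and updateUIWithResult: are hypothetical names:

// Defensive pattern: force UI work onto the main queue, whatever thread
// the completion block happens to be invoked on.
[service getSomethingWithCompletion:^(id result) {   // hypothetical API
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithResult:result];            // hypothetical method
    });
}];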

Is this the right way to compare two GCD Queues?

Following an earlier question on SO, I'm now looking to compare two different Grand Central Dispatch queues to try and determine if the current code is being run on the main thread or not. My question is simply: is this a valid way of achieving this? Or are there some pitfalls of doing this that I haven't considered?
if (dispatch_get_current_queue() != dispatch_get_main_queue()) {
    // We are currently on a background queue
} else {
    // We are on the main queue
}
Cheers
Comparing the current queue against the main queue is not a valid way to check whether you are running on the main thread.
Use [NSThread isMainThread] or pthread_main_np() to explicitly check whether you are on the main thread if that is what you want to know.
You can be on the main thread without the current queue being the main queue, and you can be on the main queue without the current thread being the main thread (the latter only if dispatch_main() has been called, but still).
In recent releases this is documented explicitly in the CAVEATS section of the dispatch_get_main_queue(3) manpage:
The result of dispatch_get_main_queue() may or may not equal the result of dispatch_get_current_queue() when called on the main thread. Comparing the two is not a valid way to test whether code is executing on the main thread. Foundation/AppKit programs should use [NSThread isMainThread]. POSIX programs may use pthread_main_np(3).
In general you should avoid using queue pointer comparison to influence program logic. Dispatch queues exist in a dependency tree (the target queue hierarchy) and comparing individual leaves in that tree without taking their interdependency into account does not provide sufficient information to make safe decisions.
If you really need program logic based on queue interdependency, use the dispatch_get_specific(3)/dispatch_queue_set_specific(3) APIs which are target-queue aware and much more explicit.
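A minimal sketch of that approach (the key and queue names are illustrative):

// Tag the queue with a key, then test for the key instead of comparing queues.
static void *kMyQueueKey = &kMyQueueKey;   // illustrative key

dispatch_queue_t myQueue = dispatch_queue_create("com.example.myqueue", NULL);
dispatch_queue_set_specific(myQueue, kMyQueueKey, kMyQueueKey, NULL);

// Later, from anywhere:
if (dispatch_get_specific(kMyQueueKey) != NULL) {
    // We are on myQueue, or on a queue that targets it.
} else {
    // We are somewhere else.
}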

Clarifications needed for concurrent operations, NSOperationQueue and async APIs

This is a two-part question. I hope someone can reply with a complete answer.
NSOperations are powerful objects. They can be of two different types: non-concurrent or concurrent.
The first type runs synchronously. You can take advantage of non-concurrent operations by adding them to an NSOperationQueue, which creates one or more threads for you; the result is that the operation runs in a concurrent manner. The only caveat regards the lifecycle of such an operation: when its main method finishes, it is removed from the queue. This can be a problem when you deal with async APIs.
Now, what about concurrent operations? From Apple doc
If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the calling thread—you must write additional code to start the operation asynchronously. For example, you might spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start method starts the task and returns immediately and, in all likelihood, before the task is finished.
This is almost clear to me: they run asynchronously, but you must take the appropriate actions to ensure that they do.
What is not clear to me is the following. The doc says:
Note: In OS X v10.6, operation queues ignore the value returned by isConcurrent and always call the start method of your operation from a separate thread.
What does it really mean? What happens if I add a concurrent operation to an NSOperationQueue?
Then, in this post Concurrent Operations, concurrent operations are used to download some HTTP content by means of NSURLConnection (in its async form). Operations are concurrent and included in a specific queue.
UrlDownloaderOperation * operation = [UrlDownloaderOperation urlDownloaderWithUrlString:url];
[_queue addOperation:operation];
Since NSURLConnection requires a run loop, the author shunts the start method to the main thread (so I suppose adding the operation to the queue spawned a different one). In this manner, the main run loop can invoke the delegate methods included in the operation.
- (void)start
{
    if (![NSThread isMainThread])
    {
        [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _isExecuting = YES;
    [self didChangeValueForKey:@"isExecuting"];

    NSURLRequest *request = [NSURLRequest requestWithURL:_url];
    _connection = [[NSURLConnection alloc] initWithRequest:request
                                                  delegate:self];
    if (_connection == nil)
        [self finish];
}

- (BOOL)isConcurrent
{
    return YES;
}

// delegate methods here...
My question is the following: is this thread safe? The run loop listens for sources, but the invoked methods are called on a background thread. Am I wrong?
Edit
I've completed some tests on my own based on the code provided by Dave Dribin (see 1). I've noticed, as you wrote, that the callbacks of NSURLConnection are called on the main thread.
Ok, but now I'm still very confused. I'll try to explain my doubts.
Why include, within a concurrent operation, an async pattern whose callbacks are called on the main thread? Shunting the start method to the main thread lets the callbacks execute on the main thread, but then what about queues and operations? Where do I take advantage of the threading mechanisms provided by GCD?
Hope this is clear.
This is kind of a long answer, but the short version is that what you're doing is totally fine and thread safe since you've forced the important part of the operation to run on the main thread.
Your first question was, "What happens if I add a concurrent operation in a NSOperationQueue?" As of iOS 4, NSOperationQueue uses GCD behind the scenes. When your operation reaches the top of the queue, it gets submitted to GCD, which manages a pool of private threads that grows and shrinks dynamically as needed. GCD assigns one of these threads to run the start method of your operation, and guarantees this thread will never be the main thread.
When the start method finishes in a concurrent operation, nothing special happens (which is the point). The queue will allow your operation to run forever until you set isFinished to YES and do the proper KVO willChange/didChange calls, regardless of the calling thread. Typically you'd make a method called finish to do that, which it looks like you have.
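For reference, such a finish method is commonly written like this (a sketch, assuming _isExecuting and _isFinished ivars back the KVO-compliant properties, as in the start method above):

// Common sketch of a concurrent operation's finish method.
- (void)finish
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _isExecuting = NO;
    _isFinished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}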
All this is fine and well, but there are some caveats involved if you need to observe or manipulate the thread on which your operation is running. The important thing to remember is this: don't mess with threads managed by GCD. You can't guarantee they'll live past the current frame of execution, and you definitely can't guarantee that subsequent delegate calls (i.e., from NSURLConnection) will occur on the same thread. In fact, they probably won't.
In your code sample, you've shunted start off to the main thread so you don't need to worry much about background threads (GCD or otherwise). When you create an NSURLConnection it gets scheduled on the current run loop, and all of its delegate methods will get called on that run loop's thread, meaning that starting the connection on the main thread guarantees its delegate callbacks also happen on the main thread. In this sense it's "thread safe" because almost nothing is actually happening on a background thread besides the start of the operation itself, which may actually be an advantage because GCD can immediately reclaim the thread and use it for something else.
Let's imagine what would happen if you didn't force start to run on the main thread and just used the thread given to you by GCD. A run loop can potentially hang forever if its thread disappears, such as when it gets reclaimed by GCD into its private pool. There are some techniques floating around for keeping the thread alive (such as adding an empty NSPort to its run loop), but they don't apply to threads created by GCD, only to threads you create yourself and can guarantee the lifetime of.
The danger here is that under light load you actually can get away with running a run loop on a GCD thread and think everything is fine. Once you start running many parallel operations, especially if you need to cancel them midflight, you'll start to see operations that never complete and never deallocate, leaking memory. If you wanted to be completely safe, you'd need to create your own dedicated NSThread and keep the run loop going forever.
In the real world, it's much easier to do what you're doing and just run the connection on the main thread. Managing the connection consumes very little CPU and in most cases won't interfere with your UI, so there's very little to gain by running the connection completely in the background. The main thread's run loop is always running and you don't need to mess with it.
It is possible, however, to run an NSURLConnection entirely in the background using the dedicated thread method described above. For an example, check out JXHTTP, in particular the classes JXOperation and JXURLConnectionOperation.

Is there a way that the synchronized keyword doesn't block the main thread?

Imagine you want to do many things in the background of an iOS application, and you code it properly so that you create threads (for example using GCD) to execute this background activity.
Now what if, at some point, you need to update a variable, but this update can occur on any of the threads you created?
You obviously want to protect that variable, and you can use the @synchronized keyword to create the locks for you, but here is the catch (extract from the Apple documentation):
The @synchronized() directive locks a section of code for use by a single thread. Other threads are blocked until the thread exits the protected code—that is, when execution continues past the last statement in the @synchronized() block.
So that means if you synchronize on an object and two threads are writing to it at the same time, even the main thread will block until both threads are done writing their data.
An example of code that will showcase all this:
// Create the background queue
dispatch_queue_t queue = dispatch_queue_create("synchronized_example", NULL);

// Start working in a new thread
dispatch_async(queue, ^{
    // Synchronize on that shared resource
    @synchronized(sharedResource_)
    {
        // Write things on that resource.
        // If more than one thread accesses this piece of code:
        // all threads (even the main thread) will block until the task is completed.
        [self writeComplexDataOnLocalFile];
    }
});

// Won't actually go away until the queue is empty
dispatch_release(queue);
So the question is fairly simple: how do we overcome this? How can we safely add locks on all the threads EXCEPT the main thread, which, we know, doesn't need to be blocked in this case?
EDIT FOR CLARIFICATION
As some of you commented, it does seem logical (and this was clearly what I thought at first when using synchronized) that only the threads trying to acquire the lock should block until they are done.
However, tested in a real situation, this doesn't seem to be the case and the main thread seems to also suffer from the lock.
I use this mechanism to log things in separate threads so that the UI is not blocked. But when I do intense logging, the UI (main thread) is clearly highly impacted (scrolling is not as smooth).
So there are two options here: either the background tasks are so heavy that even the main thread gets impacted (which I doubt), or @synchronized also blocks the main thread while performing the lock operations (which I'm starting to reconsider).
I'll dig a little further using the Time Profiler.
I believe you are misunderstanding the following sentence that you quote from the Apple documentation:
Other threads are blocked until the thread exits the protected code...
This does not mean that all threads are blocked; it just means all threads that are trying to synchronize on the same object (the sharedResource_ in your example) are blocked.
The following quote is taken from Apple's Thread Programming Guide, which makes it clear that only threads that synchronise on the same object are blocked.
The object passed to the @synchronized directive is a unique identifier used to distinguish the protected block. If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other. If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.
Update: If your background threads are impacting the performance of your interface then you might want to consider putting some sleeps into the background threads. This should allow the main thread some time to update the UI.
I realise you are using GCD but, for example, NSThread has a couple of methods that will suspend the thread, e.g. +sleepForTimeInterval:. In GCD you can probably just call sleep().
Alternatively, you might also want to look at changing the thread priority to a lower priority. Again, NSThread has the setThreadPriority: method for this purpose. In GCD, I believe you would just use a low-priority queue for the dispatched blocks.
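In GCD terms, that might look like the following sketch, reusing the shared resource from the question:

// Sketch: do the logging work on a low-priority global queue so the
// scheduler favors the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    @synchronized(sharedResource_)
    {
        [self writeComplexDataOnLocalFile];
    }
});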
I'm not sure if I understood you correctly: @synchronized doesn't block all threads, but only the ones that want to execute the code inside the block. So the solution probably is: don't execute the code on the main thread.
If you simply want to avoid having the main thread acquire the lock, you can do this (and wreak havoc):
dispatch_async(queue, ^{
    if (![NSThread isMainThread])
    {
        // Synchronize on that shared resource
        @synchronized(sharedResource_)
        {
            // Write things on that resource.
            // If more than one thread accesses this piece of code:
            // all background threads will block until the task is completed.
            [self writeComplexDataOnLocalFile];
        }
    }
    else
    {
        [self writeComplexDataOnLocalFile];
    }
});
