Why doesn't NSThread's cancel method call pthread_cancel?

Apple's documentation, and all the open source implementations I can find, are in agreement that thread cancellation should be handled entirely by the user. That is, [thread cancel] just sets a BOOL property on the receiver. It's then up to the user's implementation to check [NSThread currentThread].isCancelled periodically and, if YES, prematurely return from the thread's main method.
Okay, fair enough. But why not instead rely on pthread_cancel, which has dozens of built-in cancellation points already implemented? Surely this would yield more responsive thread cancellation. NSThread could easily be extended to have a cancellationBlock property, or some other mechanism for user-defined cancellation behavior that NSThread itself could run as a pthread cleanup handler (pthread_cleanup_push / pthread_cleanup_pop).
Did Apple think this would be too complicated for people to use or something? The rest of the NSThread API is basically 1:1 with the pthreads API, so I'm curious why a different (honestly naive) path was chosen here.
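For concreteness, the cooperative pattern the documentation expects looks roughly like this (a sketch; threadMain, hasMoreWork and processNextChunk are hypothetical names):

// Cooperative cancellation: the thread's entry point polls isCancelled
// and returns early when it is set. Nothing is torn down automatically.
- (void)threadMain
{
    @autoreleasepool {
        while ([self hasMoreWork]) {                     // hypothetical
            if ([[NSThread currentThread] isCancelled]) {
                break;                                   // bail out early
            }
            [self processNextChunk];                     // hypothetical
        }
    }
}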

Related

What does isConcurrent mean for NSOperation running from NSOperationQueue?

Because NSOperationQueue always runs tasks on a new thread,
I'm confused about the role of isConcurrent when an NSOperation runs from an NSOperationQueue.
Say I have two subclasses of NSOperation, both running an async process, both launched from an NSOperationQueue, and in both I override isCancelled, isExecuting, isFinished and isReady.
What will be the difference if in one I override isConcurrent to always return YES and in the other to always return NO?
Who actually calls isConcurrent? How does the logic change depending on whether it is NO or YES?
It's a legacy method, used before OS X v10.6 and before iOS 4, i.e. before the introduction of GCD for NSOperationQueue dispatch.
From the doc
Operation queues usually provide the threads used to run their operations. In OS X v10.6 and later, operation queues use the libdispatch library (also known as Grand Central Dispatch) to initiate the execution of their operations. As a result, operations are always executed on a separate thread, regardless of whether they are designated as concurrent or non-concurrent operations. In OS X v10.5, however, operations are executed on separate threads only if their isConcurrent method returns NO. If that method returns YES, the operation object is expected to create its own thread (or start some asynchronous operation); the queue does not provide a thread for it.
If you're running OS X >= 10.6 or iOS >= 4, you can safely ignore it.
As a confirmation of this fact, from the doc of isConcurrent
In OS X v10.6 and later, operation queues ignore the value returned by this method and always start operations on a separate thread.
I have added a more concise answer now; I've left this here for the time being as it was part of a discussion...
OK, I was quite happy with my use of the isConcurrent method until today!
I read the sentence:
In OS X v10.6 and later, operation queues ignore the value returned by
this method and always start operations on a separate thread.
As a warning relating to QA1712, pointing out that for concurrent operations the start method can now be called on a different thread to the one that queued the operation, which is a change in 10.6 and iOS 4.
I didn't read this as indicating that the isConcurrent method is ignored completely by the queue and has no purpose at all, just that it no longer affects the thread on which start is called. I may have misunderstood this.
I also misunderstood the original question as a more general one about concurrent operations and the isConcurrent flag, and the accepted answer as effectively saying
The isConcurrent flag can be ignored since 10.6 and iOS 4
I'm not sure this is correct.
If I understand the original question now, to paraphrase:
Given a correctly built concurrent NSOperation, does the isConcurrent flag itself actually alter the execution of the operation at all?
I guess it's hard to say for all possible setups, but we can say that:
It's not deprecated, and it's normal for Apple to deprecate methods that are no longer useful.
The documentation consistently refers to the method as being a required override.
Perhaps isConcurrent is effectively deprecated, but as it's only a single BOOL flag it's not worth the effort to deprecate it in the docs. Or perhaps it does nothing now, but Apple has kept it for possible future use and expects you to override it as described.
I created a quick test project with an NSOperation that overrode isConcurrent and main only; isConcurrent was not called at any stage. It was a very simple test though. I assume you may have tested it also? I assumed that perhaps, if the NSOperationQueue didn't call it, NSOperation's default implementation of start might.
So where does that leave us? Obviously it's no trouble to implement it and return YES to satisfy the documented requirements. However, from my perspective, I think it's too much of a leap to go from the caveat regarding 10.6 and iOS 4.0 to saying that it can be safely ignored now.
My Original Answer...
isConcurrent is not a legacy method and is not ignored by NSOperationQueue. The documentation as quoted in the other answers is a little unclear and easily misunderstood.
isConcurrent = YES means the operation provides its own means of concurrency. Or to put it another way, the operation "isAlreadyConcurrent" and doesn't need the NSOperationQueue to provide and manage the concurrency. As NSOperationQueue is no longer providing the concurrency, you need to tell it when the operation isFinished or whether it isCancelled (etc.), hence the need to override these methods.
A common example is an NSOperation that manages an NSURLConnection. NSURLConnection has its own mechanism for running in the background, so it doesn't need to be made concurrent by NSOperationQueue.
An obvious question is then: "Why put an already concurrent operation into an NSOperationQueue?" It's so the operation can benefit from the other features of the NSOperationQueue like dependencies etc.
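To make this concrete, here's a minimal sketch of such a concurrent operation (the class and method names are illustrative, not Apple's):

// Minimal concurrent-operation skeleton. The operation provides its own
// concurrency (e.g. an NSURLConnection scheduled on a run loop) and tells
// the queue about state changes via KVO.
@interface MyConcurrentOperation : NSOperation
@end

@implementation MyConcurrentOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start
{
    if ([self isCancelled]) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Kick off the asynchronous work here; when it completes,
    // call -completeOperation from whatever thread the work finishes on.
}

- (void)completeOperation
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}
@end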
The misleading part of the documentation is referring only to what thread the start method of an NSOperation is called on. The change caused a problem discussed in QA1712.
Paraphrasing the original question to:
Given a correctly built concurrent NSOperation, does the value returned by isConcurrent actually alter the execution of the operation at all?
Whilst we could try to understand how the isConcurrent flag is used by NSOperation, NSOperationQueue, or other parts of the operating system, it would be a mistake to rely on any information we discovered.
The purpose of the flag is described only as:
Return YES if the operation runs asynchronously with respect to the current thread or NO if the operation runs synchronously on whatever thread started it.
As long as you respond correctly, it should not be important who calls the method or how it affects any logic inside the Apple frameworks.
Additional Notes:
The documentation does make reference to a change in the way that NSOperationQueue used this value before and after OSX 10.6 and iOS 4.0.
In OS X v10.6, operation queues ignore the value returned by isConcurrent and always call the start method of your operation from a separate thread. In OS X v10.5, however, operation queues create a thread only if isConcurrent returns NO. ...
This change caused an issue described in QA1712.
This commentary is commonly interpreted to imply that the isConcurrent flag is no longer used after 10.6 and iOS 4, and can be ignored. I do not agree with this interpretation as the method has not been formally deprecated and is still documented as a required override for concurrent operations.
Also, the fact that its use has changed in the past is a reminder that it could change again in the future, so we should respond correctly even if we suspect it does not currently have any effect.

What is the best networking solution for a complex multithreaded app?

I have a streaming iOS app that captures video to Wowza servers.
It's a beast, and it's really finicky.
I'm grabbing configuration settings from a PHP script that outputs JSON.
Now that I've implemented that, I've run into some strange threading issues. My app connects to the host, says it's streaming, but never actually sends packets.
Getting rid of the remote-configuration NSURLConnection delegate (the request itself is properly formatted) fixes the problem. So I'm thinking some data is getting corrupted across threads, or something like that.
What will help me is knowing:
Are NSURLConnection delegate methods called on the main thread?
Will nonatomic data be vulnerable in a delegate method?
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
Have you looked at AFNetworking?
http://www.raywenderlich.com/30445/afnetworking-crash-course
https://github.com/AFNetworking/AFNetworking
It's quite robust and helps immensely with the threading, and there are several good tutorials.
Are NSURLConnection delegate methods called on the main thread?
Yes: the completion callbacks are delivered on the main thread, provided you started the connection on the main thread.
Will nonatomic data be vulnerable in a delegate method?
Generally, collection values (such as arrays) are vulnerable when shared across multiple threads; for scalar values the worst you should usually see is a race condition.
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
I feel it's better to use GCD for handling your threads, and asynchronous retrieval using NSURLConnection should be helpful. There are a few network libraries available to handle the boilerplate code for you, such as AFNetworking and ASIHTTPRequest (although the latter is quite old now).
Are NSURLConnection delegate methods called on the main thread?
Delegate methods can be executed on an NSOperationQueue or on a thread. If you do not explicitly schedule the connection, it will use the thread on which it receives the start message. This can be the main thread, but it can also be any other secondary thread, provided that thread has a run loop.
You can set the thread (indirectly) with the method
- (void)scheduleInRunLoop:(NSRunLoop *)aRunLoop forMode:(NSString *)mode
which takes the run loop you retrieved from the current thread. A run loop is associated with a thread in a 1:1 relationship. That is, in order to choose the thread on which the delegate methods will be executed, you need to be executing on that thread, retrieve its run loop with [NSRunLoop currentRunLoop], and send scheduleInRunLoop:forMode: to the connection.
Setting up a dedicated secondary thread requires that this thread have a running run loop. Ensuring this is not always straightforward and usually requires a small "hack" to keep the run loop alive.
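Roughly, such a setup might look like this (a sketch; the request property and the delegate wiring are assumed):

// Hypothetical sketch: a dedicated worker thread whose run loop delivers
// the connection's delegate callbacks. The empty mach port keeps the
// run loop alive while no other input sources are attached.
- (void)networkThreadMain
{
    @autoreleasepool {
        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
        [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];

        NSURLConnection *connection =
            [[NSURLConnection alloc] initWithRequest:self.request   // assumed property
                                            delegate:self
                                    startImmediately:NO];
        [connection scheduleInRunLoop:runLoop forMode:NSDefaultRunLoopMode];
        [connection start];

        [runLoop run];   // delegate callbacks arrive on this thread
    }
}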
Alternatively, you can use method
- (void)setDelegateQueue:(NSOperationQueue *)queue
in order to set the queue on which the delegate methods will be executed. Which thread is actually used to execute them is then undetermined.
You must not use both approaches: schedule on a thread or on a queue, not both. Please consult the documentation for more information.
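For illustration, a minimal sketch of the queue-based approach (the request and the delegate object are assumed to exist):

// Deliver the delegate callbacks on an operation queue instead of a run loop.
NSOperationQueue *delegateQueue = [[NSOperationQueue alloc] init];

NSURLConnection *connection =
    [[NSURLConnection alloc] initWithRequest:request
                                    delegate:self
                            startImmediately:NO];
[connection setDelegateQueue:delegateQueue];
[connection start];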
Will nonatomic data be vulnerable in a delegate method?
You should always synchronize access to shared resources, even for integers. On certain multiprocessor systems it is not even guaranteed that accesses to a shared integer are safe; you would have to use memory barriers on both threads to guarantee that.
You might utilize serial queues (either NSOperationQueue or dispatch queue) to guarantee safe access to shared resources.
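For example, a minimal sketch of funnelling access to a shared value through a private serial queue (the names are illustrative):

// All reads and writes of the shared value go through one private serial
// queue, so no two threads ever touch it at the same time.
dispatch_queue_t syncQueue =
    dispatch_queue_create("com.example.sync", DISPATCH_QUEUE_SERIAL);
__block NSInteger sharedCount = 0;

// Write (fire-and-forget):
dispatch_async(syncQueue, ^{
    sharedCount += 1;
});

// Read (synchronous, so the caller gets a consistent snapshot):
__block NSInteger snapshot = 0;
dispatch_sync(syncQueue, ^{
    snapshot = sharedCount;
});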
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
Utilize queues, as mentioned, then you don't have to deal with threads. "Grabbing data" is not only a threading problem ;)
If you prefer a more specific answer you would need to describe your problem in more detail.
To answer your first question: The delegate methods are called on the thread that started the asynchronous load operation for the associated NSURLConnection object.

How to lock an NSLock on a specific thread

I have a property @property NSLock *myLock
And I want to write two methods:
- (void) lock
and
- (void) unlock
These methods lock and unlock myLock respectively, and they need to do this regardless of which thread or queue called them. For instance, thread A might call lock but queue B might be the one calling unlock. Both of these methods should work appropriately without reporting that I am trying to unlock a lock from a different thread/queue than the one that locked it. Additionally, they need to do this synchronously.
It is rare anymore that NSLock is the right tool for the job. There are much better tools now, particularly with GCD; more on that below.
As you probably already know from the docs, but I'll repeat for those reading along:
Warning: The NSLock class uses POSIX threads to implement its locking behavior. When sending an unlock message to an NSLock object, you must be sure that message is sent from the same thread that sent the initial lock message. Unlocking a lock from a different thread can result in undefined behavior.
That's very hard to implement without deadlocking if you're trying to lock and unlock on different threads. The fundamental problem is that if lock blocks the thread, then there is no way for the subsequent unlock to ever run on that thread, and you can't unlock on a different thread. NSLock is not for this problem.
Rather than NSLock, you can implement the same patterns with dispatch_semaphore_create(). These can be safely updated on any thread you like. You can lock using dispatch_semaphore_wait() and you can unlock using dispatch_semaphore_signal(). That said, this still usually isn't the right answer.
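A minimal sketch of that semaphore-based replacement, matching the lock/unlock methods from the question (the class name is illustrative):

// Unlike NSLock, a binary semaphore may be signalled ("unlocked") from a
// different thread or queue than the one that waited ("locked").
@interface CrossThreadLock : NSObject
- (void)lock;
- (void)unlock;
@end

@implementation CrossThreadLock {
    dispatch_semaphore_t _sema;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _sema = dispatch_semaphore_create(1);   // 1 = initially "unlocked"
    }
    return self;
}

- (void)lock   { dispatch_semaphore_wait(_sema, DISPATCH_TIME_FOREVER); }
- (void)unlock { dispatch_semaphore_signal(_sema); }
@end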
Most resource contention is best managed with an operation queue or dispatch queue. These provide excellent ways to handle work in parallel, manage resources, wait on events, implement producer/consumer patterns, and otherwise do almost everything that you would have done with an NSLock or NSThread in the past. I highly recommend the Concurrency Programming Guide as an introduction to how to design with queues rather than locks.

iOS. Do NSURLConnection and UIView's setNeedsDisplay rely on GCD for asynchronous behavior?

I am doing a lot of GCD and asynchronous rendering and data-retrieval work lately, and I really need to nail down my mental model of how asynchronous work is done.
I want to focus on setNeedsDisplay and the NSURLConnectionDelegate suite of methods.
Is it correct to call setNeedsDisplay asynchronous? I often call it via dispatch_async(dispatch_get_main_queue(), ^{ ... }), which confuses me.
The NSURLConnectionDelegate callbacks are described as asynchronous, but aren't they actually run on the main thread/run loop rather than concurrently? I am a bit fuzzy on the distinction here.
More generally, in the modern iOS era of GCD, what are the best practices for making GCD and these methods play nicely together? I'm just looking for general guidelines here, since I use them regularly and am just trying not to get myself into trouble.
Cheers,
Doug
No, you generally don't call setNeedsDisplay asynchronously. But if you're invoking it from a queue other than the main queue (which I would guess you are), then note that you should never do UI updates from background queues; you always run those on the main queue. So this looks like the very typical pattern of dispatching a UI update from a background queue to the main queue.
NSURLConnection is described as asynchronous because, unless you used sendSynchronousRequest, your app continues immediately while the connection progresses. The fact that the delegate events arrive on the main queue is not incompatible with the notion that the connection, itself, is asynchronous. Personally, I would have thought it bad form if I had some delegate methods that were not being called from the same queue from which the process was initiated, unless that was made fairly explicit via the interface.
To the question of your question's title, whether NSURLConnection uses GCD internally, versus another concurrency technology (NSOperationQueue, threads, etc.), that's an internal implementation issue that we, as application developers, don't generally worry about.
To your final, follow-up question regarding guidelines, I'd volunteer the general rule I alluded to above. Namely, all time-consuming processes that would block your user interface should be dispatched to a background queue, but any subsequent UI updates required by that background work should be dispatched back to the main queue. That's the most general rule of thumb I can think of that encapsulates why we generally do concurrent programming and how to do so properly.
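A minimal sketch of that rule of thumb (loadImageFromNetwork and imageView are hypothetical names):

// Time-consuming work goes to a background queue; the resulting UI
// update is dispatched back to the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [self loadImageFromNetwork];   // hypothetical, blocking
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;               // UI work stays on main
        [self.imageView setNeedsDisplay];           // any setNeedsDisplay call also belongs here
    });
});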

dispatch_async and [NSURLConnection sendSynchronousRequest]

There are various questions around this topic, and lots of advice saying NOT to use sendSynchronousRequest within dispatch_async, because it blocks the thread, and GCD will spawn lots of new worker threads to service all the synchronous URL requests.
Nobody seems to have a definitive answer as to what iOS 5's [NSURLConnection sendAsynchronousRequest:queue:completionHandler:] does behind the scenes.
One post I read states that it 'might' optimise, and it 'might' use the run loop - but certainly won't create a new thread for each request.
When I pause the debugger while using sendAsynchronousRequest:queue:completionHandler:, the stack trace shows that it is actually calling sendSynchronousRequest, and I still have tons of threads created when I use the async method instead of the sync method.
Yes, there are other benefits to using the async call, which I don't want to discuss in this post.
All I'm interested in is performance / thread / system usage, and whether I'm worse off using the sync call inside dispatch_async instead of using the async call.
I don't need advice on using iOS 4 async calls either; this is purely for educational purposes.
Does anyone have any insightful answers to this?
Thanks
This is actually open source. http://libdispatch.macosforge.org/
It is very unlikely that you will be able to manage the worker threads more efficiently than Apple's implementation. In this context "asynchronous" doesn't mean select/poll, it just means that the call will return immediately. So it is not surprising that the implementation is spawning threads.
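For comparison, the two patterns under discussion look roughly like this (the request is assumed to exist):

// Pattern A: synchronous request wrapped in dispatch_async; the worker
// thread servicing the global queue blocks for the whole request.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData *data = [NSURLConnection sendSynchronousRequest:request
                                         returningResponse:&response
                                                     error:&error];
    // handle data / error ...
});

// Pattern B: the iOS 5 convenience API; the completion handler is
// delivered on whichever queue you pass in.
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // handle data / error ...
}];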
As the previous answer stated, both GCD (libdispatch) and the blocks runtime (libBlocksRuntime) are open source. Internally, iOS/OS X manage a global pool of pthreads, plus any app-specific pools you create in your user code. Since there's a 1:N mapping between OS threads and dispatch tasks, you don't have to (and shouldn't) worry about thread creation and disposal under the hood. To that end, a GCD task doesn't use any time in an app's run loop after invocation, as it's punted to a background thread.
Most kinds of NSURL operations are I/O-bound; there's no amount of concurrency voodoo that can disguise this, but if Apple's own async implementation uses its synchronous counterpart in the background, that probably suggests it's already highly optimized. Contrary to what the previous answer said, libdispatch does use scalable I/O internally (kqueue on OS X/iOS/BSD), and if you hand-rolled something you'd have to deal with file-descriptor readiness yourself to get similar performance.
While possible, the return on time invested is probably marginal. Stick with Apple's implementation and stop worrying about threads!

Resources