How to lock an NSLock on a specific thread - ios

I have a property @property NSLock *myLock
And I want to write two methods:
- (void) lock
and
- (void) unlock
These methods lock and unlock myLock respectively and they need to do this regardless of what thread or queue called them. For instance, thread A might have called lock but queue B might be the one calling unlock. Both of these methods should work appropriately without reporting that I am trying to unlock a lock from a different thread/queue that locked it. Additionally, they need to do this synchronously.

It is rare anymore that NSLock is the right tool for the job. There are much better tools now, particularly with GCD; more on that later.
As you probably already know from the docs, but I'll repeat for those reading along:
Warning: The NSLock class uses POSIX threads to implement its locking behavior. When sending an unlock message to an NSLock object, you must be sure that message is sent from the same thread that sent the initial lock message. Unlocking a lock from a different thread can result in undefined behavior.
That's very hard to implement without deadlocking if you're trying to lock and unlock on different threads. The fundamental problem is that if lock blocks the thread, then there is no way for the subsequent unlock to ever run on that thread, and you can't unlock on a different thread. NSLock is not for this problem.
Rather than NSLock, you can implement the same patterns with dispatch_semaphore_create(). These can be safely updated on any thread you like. You can lock using dispatch_semaphore_wait() and you can unlock using dispatch_semaphore_signal(). That said, this still usually isn't the right answer.
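For completeness, here is a minimal sketch of what those lock/unlock methods could look like with a semaphore instead of NSLock (assuming ARC; the class name and ivar are illustrative, not from the question):

#import <Foundation/Foundation.h>

// A semaphore created with a count of 1 behaves like a mutex that may be
// signaled from any thread or queue, unlike NSLock.
@interface MyResource : NSObject
- (void)lock;
- (void)unlock;
@end

@implementation MyResource {
    dispatch_semaphore_t _semaphore;
}

- (instancetype)init {
    if ((self = [super init])) {
        _semaphore = dispatch_semaphore_create(1);   // 1 == "unlocked"
    }
    return self;
}

- (void)lock {
    // Blocks the calling thread or queue until the semaphore is available.
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
}

- (void)unlock {
    // Safe to call from a different thread or queue than the one that locked.
    dispatch_semaphore_signal(_semaphore);
}
@end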
Most resource contention is best managed with an operation queue or dispatch queue. These provide excellent ways to handle work in parallel, manage resources, wait on events, implement producer/consumer patterns, and otherwise do almost everything that you would have done with an NSLock or NSThread in the past. I highly recommend the Concurrency Programming Guide as an introduction to how to design with queues rather than locks.

Related

Multi-thread data access issue, @synchronized & serial queue

As you may have experienced, accessing non-thread-safe variables is a big headache. On iOS one simple solution is the @synchronized keyword, which adds a lock to ensure the data is accessed by only one thread at a time. The disadvantages are as below:
Locking too often will greatly reduce app performance, especially when invoked on the main thread.
Deadlock can occur when the logic becomes complex.
Based on the above considerations, we prefer to use a serial queue instead: each thread-safe critical operation is appended to the end of the queue. It is a good solution, but the problem is that all access interfaces would then have to be designed in an asynchronous style, rather than a synchronous one like the following:
-(id)objectForKey:(NSString *)key;
The people who invoke this class are reluctant to design it this way. Anyone who has experience in this area, please share and discuss.
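One common compromise, sketched below (not from the answer above; the class name, queue label, and ivars are hypothetical), is to keep the serial queue private and expose a synchronous interface by wrapping reads in dispatch_sync, so callers keep the familiar -objectForKey: style:

#import <Foundation/Foundation.h>

@interface ThreadSafeCache : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)object forKey:(NSString *)key;
@end

@implementation ThreadSafeCache {
    dispatch_queue_t _accessQueue;     // private serial queue guarding _storage
    NSMutableDictionary *_storage;
}

- (instancetype)init {
    if ((self = [super init])) {
        _accessQueue = dispatch_queue_create("com.example.cache.access", DISPATCH_QUEUE_SERIAL);
        _storage = [NSMutableDictionary dictionary];
    }
    return self;
}

- (id)objectForKey:(NSString *)key {
    __block id result = nil;
    // Synchronous for the caller, but still serialized with every other access.
    // (Calling this from _accessQueue itself would deadlock.)
    dispatch_sync(_accessQueue, ^{ result = self->_storage[key]; });
    return result;
}

- (void)setObject:(id)object forKey:(NSString *)key {
    // Writes don't need a return value, so they can stay asynchronous.
    dispatch_async(_accessQueue, ^{ self->_storage[key] = object; });
}
@end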
The final solution is to use NSUserDefaults to store small data; large cached data goes into files we maintain ourselves.
Per the Apple docs, the advantage of NSUserDefaults is that it is thread safe and synchronizes to disk periodically.

What is the difference between 'thread' and 'queue' in iOS development? [duplicate]

I am new to iOS development. Now I am quite confused about the two concepts: "thread" and "queue". All I know is that they both are about multithread programming. Can anyone interpret those two concepts and the difference between them for me?
Thanks in advance!
How NSOperationQueue and NSThread Work:
NSThread:
iOS developers have to write code for the work they want to perform, as well as for the creation and management of the threads themselves.
iOS developers have to plan carefully how threads will be used.
iOS developers have to manage possible problems like thread reusability, locking, etc. themselves.
Threads also consume more memory.
NSOperationQueue:
The NSOperation class is an abstract class which encapsulates the code and data associated with a single task.
Developers need to subclass NSOperation, or use one of its system-defined subclasses, to perform the task.
Add operations into NSOperationQueue to execute them.
The NSOperationQueue runs operations on threads it manages; execution order takes readiness, priorities, and dependencies into account rather than being strictly the order in which operations were added.
Operation queues handle all of the thread management, ensuring that operations are executed as quickly and efficiently as possible.
An operation queue executes operations either directly by running them on secondary threads or indirectly using GCD (Grand Central Dispatch).
It takes care of all of the memory management and greatly simplifies the process.
If you don't want to use an operation queue, you can also execute an operation manually by calling its start method, though doing so can make your code more complex.
How To Use NSThread And NSOperationQueue:
NSThread:
Though operation queues are the preferred way to perform tasks concurrently, depending on the application there may still be times when you need to create custom threads.
Threads are still a good way to implement code that must run in real time.
Use threads for specific tasks that cannot be implemented in any other way.
If you need more predictable behavior from code running in the background, threads may still offer a better alternative.
NSOperationQueue:
Use NSOperationQueue when you have more complex operations you want to run concurrently.
NSOperation allows for subclassing, dependencies, priorities, cancellation, and supports a number of other higher-level features.
NSOperation actually uses GCD under the hood so it is as multi-core, multi-thread capable as GCD.
Now you should be aware of the advantages and disadvantages of NSThread and NSOperation; a small sketch of both follows below. You can use either of them as per the needs of your application.
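As a rough sketch of the difference in everyday use (the -doWork selector is just a placeholder):

// NSThread: you create and manage the thread yourself.
[NSThread detachNewThreadSelector:@selector(doWork) toTarget:self withObject:nil];

// NSOperationQueue: you hand the work to a queue and it manages the threads.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock:^{
    // background work goes here
}];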
Before you read my answer you might want to consider reading this - Migrating away from Threads
I am keeping the discussion theoretical as your question does not have any code samples. Both these constructs are required for increasing app responsiveness & usability.
A message queue is a data structure for holding messages from the time they're sent until the time the receiver retrieves and acts on them. Generally queues are used as a way to 'connect' producers (of data) & consumers (of data).
A thread pool is a pool of threads that do some sort of processing. A thread pool will normally have some sort of thread-safe queue (refer message queue) attached to allow you to queue up jobs to be done. Here the queue would usually be termed 'task-queue'.
So in a way a thread pool could exist at your producer end (generating data) or at your consumer end (processing the data), and the way to 'pass' that data would be through queues. Why the need for this 'middleman'?
It decouples the systems. Producers do not know about consumers & vice versa.
The Consumers are not bombarded with data if there is a spike in Producer data. The queue length would increase but the consumers are safe.
Example:
In iOS the main thread, also called the UI thread, is very important because it is in charge of dispatching events to the appropriate widget, and this includes the drawing events; it is basically the UI that the user sees and interacts with.
If you touch a button on screen, the UI thread dispatches the touch event to the app, which in turn sets the button's pressed state and posts a request to the event queue. The UI thread dequeues the request and notifies the widget to redraw itself.
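In GCD terms, a very common form of this producer/consumer hand-off is to produce data on a background queue and consume it on the main (UI) queue; a minimal sketch (fetchData and updateUIWithData: are placeholder methods):

// Producer: do the expensive work off the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [self fetchData];
    // Consumer: hop back onto the main (UI) queue to use the result.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithData:data];
    });
});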

Is there any reason to share a dispatch queue?

I'm working on some code that I can't contact the original developer.
He passes one class a reference to another class's serial dispatch queue, and I don't see any purpose to it; he could just create another dispatch queue (each class is a singleton).
It's not creating issues but I'd like to understand it further, so any insight into the positive implications is appreciated, thank you.
Edit: I suggest reading all the answers here, they explain a lot.
It's actually not a good idea to share queues in this fashion (and no, not because they are expensive; quite the converse, they're not). The rationale is that it's not clear to anyone but a queue's creator just what the semantics of the queue are. Is it serial? Concurrent? High priority? Low priority? All are possible, and once you start passing around internal queues that were really created for the benefit of a specific class, an external caller can schedule work on them that causes a mutual deadlock or otherwise behaves in an unexpected fashion alongside the other items on that queue, because caller A knew to expect concurrent behavior while caller B was thinking it was a serial queue, without any of the "gotchas" that concurrent execution implies.
Queues should therefore, wherever possible, be hidden implementation details of a class. The class can export methods for scheduling work against its internal queue as necessary, but the methods should be the only accessors since they are the only ones who know for sure how to best access and schedule work on that specific type of queue!
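A sketch of that idea (the class name, queue label, and method are illustrative): the queue stays private and the class only exports a method for scheduling work on it.

@interface DataStore : NSObject
- (void)performUpdate:(void (^)(void))updateBlock;
@end

@implementation DataStore {
    dispatch_queue_t _workQueue;   // private; callers never see or share it
}

- (instancetype)init {
    if ((self = [super init])) {
        _workQueue = dispatch_queue_create("com.example.datastore.work", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

// The only way callers can put work on the queue; its serial semantics
// remain an implementation detail of this class.
- (void)performUpdate:(void (^)(void))updateBlock {
    dispatch_async(_workQueue, updateBlock);
}
@end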
If it's a serial queue, then they may be intending to serialize access to some resource shared between all objects that share it.
Dispatch queues are somewhat expensive to create, and tie up system resources. If you can share one, the system runs more efficiently. For that matter, if your app does work in the background, using a shared queue allows you to manage a single pool of tasks to be completed. So yes, there are good reasons for using a shared dispatch queue.

What is the best networking solution for a complex multithreaded app?

I have a streaming iOS app that captures video to Wowza servers.
It's a beast, and it's really finicky.
I'm grabbing configuration settings from a php script that shoots out JSON.
Now that I've implemented that, I've run into some strange threading issues. My app connects to the host, says its streaming, but never actually sends packets.
Getting rid of the remote configuration NSURLConnection (which I've made sure is properly formatted) delegate fixes the problem. So I'm thinking either some data is getting misconstrued across threads or something like that.
What will help me is knowing:
Are NSURLConnection delegate methods called on the main thread?
Will nonatomic data be vulnerable in a delegate method?
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
Have you looked at AFNetworking?
http://www.raywenderlich.com/30445/afnetworking-crash-course
https://github.com/AFNetworking/AFNetworking
It's quite robust and helps immensely with the threading, and there are several good tutorials.
Are NSURLConnection delegate methods called on the main thread?
Yes; on request completion it gives a callback on the main thread, provided you started the connection on the main thread.
Will nonatomic data be vulnerable in a delegate method?
Generally collection values (like arrays) are vulnerable when accessed from multiple threads; the rest shouldn't cause anything worse than a race condition.
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
I feel it's better to use GCD for handling your threads, and asynchronous retrieval using NSURLConnection should be helpful. There are a few networking libraries available to handle the boilerplate code for you, such as AFNetworking and ASIHTTPRequest (although the latter is a bit old).
Are NSURLConnection delegate methods called on the main thread?
Delegate methods can be executed on an NSOperationQueue or on a thread. If you do not explicitly schedule the connection, it will use the thread where it receives the start message. This can be the main thread, but it can also be any other secondary thread, which must then have a run loop.
You can set the thread (indirectly) with the method
- (void)scheduleInRunLoop:(NSRunLoop *)aRunLoop forMode:(NSString *)mode
which sets the run loop that you retrieved from the current thread. A run loop is associated with a thread in a 1:1 relationship. That is, in order to choose the thread where the delegate methods shall be executed, you need to execute on that thread, retrieve the run loop from the current thread, and send scheduleInRunLoop:forMode: to the connection.
Setting up a dedicated secondary thread requires that the thread have a run loop. Ensuring this is not always straightforward and requires a "hack".
Alternatively, you can use method
- (void)setDelegateQueue:(NSOperationQueue *)queue
in order to set the queue where the delegate methods will be executed. Which thread is actually used to execute the delegate methods is then undetermined.
You must not use both methods: schedule the connection either on a thread OR on a queue. Please consult the documentation for more information.
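For example, a sketch of the queue-based approach (the URL is a placeholder and self is assumed to implement the NSURLConnection delegate methods):

// Delegate callbacks will arrive on this operation queue, not on the main thread.
NSURLRequest *request = [NSURLRequest requestWithURL:
                            [NSURL URLWithString:@"https://example.com/config.json"]];
NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request
                                                              delegate:self
                                                      startImmediately:NO];
NSOperationQueue *delegateQueue = [[NSOperationQueue alloc] init];
[connection setDelegateQueue:delegateQueue];
[connection start];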
Will nonatomic data be vulnerable in a delegate method?
You should always synchronize access to shared resources, even for integers. On certain multiprocessor systems it is not even guaranteed that access to a shared integer is safe; you would have to use memory barriers on both threads to guarantee that.
You might utilize serial queues (either NSOperationQueue or dispatch queue) to guarantee safe access to shared resources.
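A minimal sketch of that approach, guarding a hypothetical counter with a private serial queue so that reads and writes can never interleave:

dispatch_queue_t syncQueue = dispatch_queue_create("com.example.counter.sync", DISPATCH_QUEUE_SERIAL);
__block NSInteger counter = 0;

// Write: enqueue the mutation; no other access can run at the same time.
dispatch_async(syncQueue, ^{ counter += 1; });

// Read: dispatch_sync returns only after every earlier block has finished.
__block NSInteger snapshot = 0;
dispatch_sync(syncQueue, ^{ snapshot = counter; });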
When dealing with a complex threaded app, what are the best practices for grabbing data from the web?
Utilize queues, as mentioned, then you don't have to deal with threads. "Grabbing data" is not only a threading problem ;)
If you prefer a more specific answer you would need to describe your problem in more detail.
To answer your first question: The delegate methods are called on the thread that started the asynchronous load operation for the associated NSURLConnection object.

Are all methods in an iOS app usually in a single thread? (for race condition prevention)

We can have many handlers: touches handler, UIControl handler (buttons, sliders), performSelector, CADisplayLink, NSTimer events, Gesture Recognizer, accelerometer handler, and UIView animation completion block, and some other ones.
Are all of them in the same thread? That is, only one of them can be running at the same time?
Can some other method or handler be part of another thread and therefore can create race conditions?
In general, you'll find that most simple applications on iOS tend to perform almost every action on the main thread. As you noted, the instant that you bring multithreading into the picture you add another set of tricky issues to watch out for. Many developers don't want to bother with this added complexity, or are unfamiliar with GCD or threading in general, so they avoid doing anything on a background thread or GCD queue.
Several of the items you list in your question involve interactions with UIKit, and in general those interactions must occur on the main thread (iOS 4.x added the ability to perform some drawing functions in the background, though). You receive touch and other related events on the main thread. If you wish to update most aspects of an interface, the safe way to do that is by performing these updates on the main thread.
Timers (NSTimer, CADisplayLink) can have their updates be fired on a background thread by attaching them to an NSRunLoop operating on that background thread. You rarely see people do this, but it can be done. Usually, people configure timers on the main run loop, which causes callbacks to be delivered on the main thread.
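A sketch of that rarely used pattern (the selector names are placeholders): spin up a thread, schedule the timer on that thread's run loop, and keep the run loop alive.

// Somewhere in setup code:
NSThread *timerThread = [[NSThread alloc] initWithTarget:self
                                                selector:@selector(runTimerThread)
                                                  object:nil];
[timerThread start];

// Entry point for the background thread:
- (void)runTimerThread {
    @autoreleasepool {
        // Scheduled on the *current* (background) run loop, so timerFired: runs on this thread.
        [NSTimer scheduledTimerWithTimeInterval:1.0
                                         target:self
                                       selector:@selector(timerFired:)
                                       userInfo:nil
                                        repeats:YES];
        [[NSRunLoop currentRunLoop] run];   // keep the run loop alive so the timer keeps firing
    }
}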
When performing animations, the animations themselves will run on a background thread (you see that they don't stop while you're blocking the main thread with something else), but you'll almost always receive a completion block or callback on the main thread when they're done. If I remember correctly, there are one or two exceptions to this and they are noted as such in Apple's documentation. Having these callbacks trigger on the main thread is a safe approach when dealing with developers who might not realize what's going on behind the scenes.
All that said, there are very good reasons to want to add multithreading to your application. Because all user interface updates and touch interactions occur on the main thread, if you have something that is computationally expensive or that simply will take a lot of time to perform, if you run this on your main thread you'll appear to have frozen your application. This is a terrible user experience, so you want to move this task onto a background thread so that the user can keep interacting with your application while this is going on. Additionally, more and more iOS devices are shipping every day with multiple cores in them, and balancing your work load across these cores and being efficient with this hardware requires some degree of concurrent processing.
People have written books about best practices when making code multithreaded, and you can find a lot of questions about this here, so I won't go into too much detail. What I can tell you is that you should read Apple's Concurrency Programming Guide and watch the WWDC videos from the last two years that deal with Grand Central Dispatch. With GCD, Apple has made it a lot easier to add multithreading to your application in an efficient and (relatively) safe manner, and I encourage you to look into this for your own applications.
For example, I have an open source iOS application that performs detailed rendering of molecular structures. I render each frame on a background GCD queue because sometimes they take more than 1/60th of a second to process, and in those cases they'd cause touch events to be dropped and the interface to stutter if this was all on the main thread. Additionally, I've seen up to a 40% performance boost by doing this when running on the newer multicore devices. To avoid race conditions, I wrap interactions with shared data structures and contexts in serial dispatch queues so that only one action can be using a resource at a time, no matter what thread a particular block is running on. This only required the addition of a few lines of code, but the performance and user experience benefits were huge.

Resources