iOS Equivalent of Pulse and Wait for Threading

I am looking for the iOS equivalent of .NET's Monitor.Pulse and Monitor.Wait threading pattern. Essentially, I want a background thread to lie dormant until a flag is set, which "kicks" the thread into action.
It is an alternative to a loop with Thread.Sleep(). The flag can be set on a different thread than the background thread doing the processing. Thanks!

There are a variety of different mix-and-match thread APIs available on iOS and OS X. What are you using to create your thread?
The simplest suggestion is to use a Grand Central Dispatch (GCD) semaphore.
Setup code:
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
// Then make sure your thread has access to this semaphore
Thread code:
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
// Will block forever until the semaphore is triggered
Trigger code:
dispatch_semaphore_signal(semaphore);
An even better suggestion: GCD already manages its own thread pool, so just take advantage of it instead of spinning up your own thread. It's very easy to use dispatch_async to run some code in a background thread.

I'm not familiar with this pattern in .NET, but it is generally a bad idea to purposefully block a thread unless that thread genuinely cannot continue, and it sounds like you want it waiting most of the time, or at least for long periods.
That is bad for the scheduler and wasteful of system resources.
You can use NSThread's +detachNewThreadSelector:... method to create a new thread, and techniques like NSObject's -performSelectorOnMainThread:... to keep the logic for creating and maintaining such a thread in a single place.

Related

How to make sure that OperationQueue tasks are always getting executed on background(global) thread in swift?

I ran into a question from a non-iOS developer while describing the flow of OperationQueue. We already know that with OperationQueue we can start as many threads as we want to execute tasks synchronously or asynchronously.
But in practice, some people want proof that the OperationQueue executes in the background and not on the UI (main) thread.
I just want to demonstrate that when an operation queue starts, it begins executing in the background.
I have already shown that when we set a qos for an OperationQueue we create, it accepts the same parameters as a global queue's qos, namely: default, userInitiated, userInteractive, background, and utility.
So that already seems like a good demonstration in code that all the OperationQueue operations mentioned run on a global (background) thread, unless we use OperationQueue.main.
As Shadowrun said, you can add assertions for Thread.isMainThread. Also, if you ever want to add test that you are not on the main queue, you can add a precondition:
dispatchPrecondition(.notOnQueue(.main))
But it should be noted that the whole purpose of creating an operation queue is to get things off the main thread. E.g., the old Concurrency Programming Guide: Operation Queues says [emphasis added]:
Operations are designed to help you improve the level of concurrency in your application. Operations are also a good way to organize and encapsulate your application’s behavior into simple discrete chunks. Instead of running some bit of code on your application’s main thread, you can submit one or more operation objects to a queue and let the corresponding work be performed asynchronously on one or more separate threads.
This is not to say that operation queues cannot contribute to main thread responsiveness problems. You can wait from the main thread for operations added to an operation queue (which is obviously a very bad idea). Or, if you neglect to set a reasonable maxConcurrentOperationCount and have thread explosion, that can introduce all sorts of unexpected behaviors. Or you can entangle the main thread with operation queues through a misuse of semaphores or dispatch groups.
But operations on an operation queue (other than OperationQueue.main) simply do not run on the main thread. Technically, you could get yourself in trouble if you started messing with the target queue of the underlying queue, but I can’t possibly imagine that you are doing that. If you are having main thread problems, your problem undoubtedly rests elsewhere.

Thread Pool: DispatchQueue.main.async

I've worked in Java, and am pretty clear on how threads and thread pools work there.
I was wondering if anyone can explain how threads are created and allocated space in the thread pool in Swift.
Also, does
DispatchQueue.main.async {
// some code
}
create a new thread, or asynchronously execute the task?
Thanks in advance =)
Queues and threads are separate concepts. Queues are ordered (sometimes prioritized) sequences of blocks to execute. As (mostly) an implementation detail, blocks must be scheduled onto threads in order to execute, but this is not the major point of them.
So DispatchQueue.main.async dispatches (appends) a block to the main queue. The main queue is serial and somewhat special in that it is promised to also be run exclusively on the main thread (as noted by Paulw11). It also promises to be associated with the main runloop. Understanding this "appends a block to a queue" concept is critical, because it has significant impact on how you design things in queues vs how you design things in threads. async does not mean "start running this now." It means "stick this on a queue, but don't wait for it."
As a good example of how the designs can be different, placing something on a queue doesn't mean it will ever run (even without bugs or deadlocks). It is possible and useful to suspend queues so that it stops scheduling blocks. It's possible to tie queues to other queues so that when a queue "schedules" something, it just puts it onto another queue rather than executing it. There are lots of things you can do with queues unrelated to "run things in the background." You can attach completion handlers to blocks. You can use groups to wait on collections of blocks. GCD is a way of thinking about concurrency. Parallelism is just a side benefit. (A great discussion of this concept is Concurrency is not parallelism by Rob Pike. It's in Go, but the concepts still apply.)
If you call DispatchQueue.main.async while running on the main queue, then that block is absolutely certain to not execute until the current block finishes. In UIKit and AppKit, "the current block finishes" often means "you return from a method that was called by the OS." While not implemented this way, you can pretend that every time you're called from the OS, it was wrapped in a call to DispatchQueue.main.async.
This is also why you must never call DispatchQueue.main.sync (note sync) from the main queue. That block will wait for you to return, and you'll wait until the block finishes. A classic deadlock.
As a rule, the thread pool is not your business in iOS. It is an implementation detail. Occasionally you need to think about it for performance reasons, but if you are thinking too much about it, you probably are designing your concurrency incorrectly.
If you're coming from Java, you definitely want to read Migrating Away From Threads in the Concurrency Programming Guide. It's the definitive resource for how to rethink thread-based patterns in queues.
Your code queues the block of code on the main queue (DispatchQueue.main) and returns immediately (.async), before executing the code.
You do not have control over which thread is used by the queue. Even if you create your own queue:
let serialQueue = DispatchQueue(label: "queuename")
serialQueue.async {
...
}
you do not know which thread your code will be running on.
Update:
As correctly stated by Paulw11 in the comment,
... if you dispatch a task on the main queue, it is guaranteed to execute on the main thread. If you dispatch a task on any other queue, you don't know which thread it will execute on; it may execute on the main thread or some other thread.

Suspending and resuming Grand Central Dispatch thread

I am using Grand Central Dispatch to run a process in the background. I want to know how I can suspend, resume, and stop that background thread. I have tried
dispatch_suspend(background_thread);
dispatch_resume(background_thread);
but these functions don't help me; it keeps on running. Can someone please help me?
You seem to have some confusion. Direct manipulation of threads is not part of the GCD API. The GCD object you normally manipulate is a queue, not a thread. You put blocks in a queue, and GCD runs those blocks on any thread it wants.[1]
Furthermore, the dispatch_suspend man page says this:
The dispatch framework always checks the suspension status before executing a block, but such changes never affect a block during execution (non-preemptive).
In other words, GCD will not suspend a queue while the queue is running a block. It will only suspend a queue while the queue is in between blocks.
I'm not aware of any public API that lets you stop a thread without cooperation from the function running on that thread (for example by setting a flag that is checked periodically on that thread).
If possible, you should break up your long-running computation so that you can work on it incrementally in a succession of blocks. Then you can suspend the queue that runs those blocks.
[1] Except the main queue. If you put a block on the main queue, GCD will only run that block on the main thread.
You are describing a concurrent processing model, where different processes can be suspended and resumed. This is often achieved using threads, or in some cases coroutines.
GCD uses a different model, one of partially ordered blocks where each block is sequentially executed without pre-emption, suspension or resumption directly supported.
GCD semaphores do exist, and may suit your needs; however, creating general cooperating concurrent threads with them is not the goal of GCD. Otherwise, look at a thread-based solution using NSThread or even POSIX threads.
Take a look at Apple's Migrating Away from Threads to see if your model is suited to migration to GCD, but not all models are.

NSOperationQueue, concurrent operation and thread

I'm developing a sort of fast image scan application.
In the -captureOutput:didOutputSampleBuffer:fromConnection: method I pick up the CVPixelBuffer and hand it to an NSOperation subclass. The NSOperation takes the buffer and transforms it into an image saved on the filesystem.
The -isConcurrent method of the NSOperation returns YES. After the operation is created, it is added to an NSOperationQueue.
Everything runs fine except for one issue that is affecting the scan frame rate.
Using the time profiler I've found that some of the NSOperations run on the same thread as the AVCaptureVideoDataOutput delegate queue that I've created:
dispatch_queue_t queue = dispatch_queue_create("it.CloudInTouchLabs.avsession", DISPATCH_QUEUE_SERIAL);
[dataOutput setSampleBufferDelegate:(id)self queue:queue];
When an operation runs on the same thread as the AV session queue, it affects the frame rate. It happens only with some of them; probably GCD is deciding under the hood to dispatch onto already-active threads. The only way I've found to solve the issue is to create a separate thread and pass it to the individual operations, forcing them to run on it.
Is there another way to force operations to run on a different thread?
[EDIT]
Following #impcc's suggestions I made some tests.
The results are pretty interesting, if somewhat inconsistent.
The test platform is an iPhone 5 connected in debug mode to an MBP with 16 GB of RAM and a quad-core i7. The session has a 60 fps output, tested with the algorithm in the RosyWriter Apple sample code.
AVSession serial queue targeted to the high-priority queue
26 fps, where the queue's thread is sometimes shared with the operations' thread. The time taken inside the delegate methods averages 0.02 s.
14 fps with a thread created just for the operations: if the -main method is not called on this thread, -start is forced to perform on that specific thread. This thread is created once and kept alive with a dummy port. The time taken inside the delegate is 0.008 s.
AVSession concurrent queue targeted to the high-priority queue
13.5 fps, where the queue's thread is sometimes shared with the operations' thread. The time taken inside the delegate methods averages 0.02 s, substantially equal to the serial-queue counterpart.
14 fps with a thread created just for the operations: if the -main method is not called on this thread, -start is forced to perform on that specific thread. This thread is created once and kept alive with a dummy port. The time taken inside the delegate is 0.008 s.
Conclusion
Serial vs. concurrent queue doesn't seem to make a big difference; however, concurrent is not OK for me because of the timestamps I need to take to preserve the sequence of single pictures. What surprises me is the frame drop when using an ad-hoc thread: even though the time taken inside the delegate method is considerably less, the frame rate drops by about 10 fps. Just trusting GCD, the frame rate is better, but the delegate methods take more than twice as long to finish, probably because the AV queue is sometimes also used by the NSOperations. I can't really see why the time taken inside the delegate doesn't seem to be correlated with the fps. Shouldn't faster be better?
On further investigation, it really seems that what is stealing time is the process of adding (and probably executing) operations on the queue.
I think you might be misunderstanding the meaning of isConcurrent. This is totally understandable, since it's incredibly poorly named. When you return YES from -isConcurrent what it really means is "I will handle any concurrency needs associated with this operation, and will otherwise behave asynchronously." In this situation, NSOperationQueue is free to call -start on your operation synchronously on the thread from which you add the operation. The NSOperationQueue is expecting that, because you've declared that you will manage your own concurrency, that your -start method will simply kick off an asynchronous process and return right away. I suspect this is the source of your problem.
If you implemented your operation by overriding -main then you almost certainly want to be returning NO from isConcurrent. To make matters more complicated, the behaviors associated with isConcurrent have changed over the years (but all that is covered in the official docs.) A fantastic explanation of how to properly implement a returns-YES-from-isConcurrent NSOperation can be found here.
The way I'm reading your question here, it doesn't sound like your NSOperation subclass actually needs to "manage its own concurrency", but rather that you simply want it to execute asynchronously, "on a background thread", potentially "concurrently" with other operations.
In terms of assuring the best performance for your AVCaptureVideoDataOutputSampleBufferDelegate callbacks, I would suggest making the queue you pass in to -setSampleBufferDelegate:queue: be (itself) concurrent and that it target the high-priority global concurrent queue, like this:
dispatch_queue_t queue = dispatch_queue_create("it.CloudInTouchLabs.avsession", DISPATCH_QUEUE_CONCURRENT);
dispatch_set_target_queue(queue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
[dataOutput setSampleBufferDelegate:(id)self queue:queue];
Then you should make the delegate callback methods as lightweight as possible -- nothing more than packaging up the information you need to make the NSOperation and add it to the NSOperationQueue.
This should ensure that the callbacks always take precedence over the NSOperations. (It's my understanding that NSOperationQueue targets either the main queue (for the NSOperationQueue that's associated with the main thread and run loop) or the default priority background queue.) This should allow your callbacks to fully keep up with the frame rate.
Another important thing to take away here (that another commenter was getting at) is that if you're doing all your concurrency using GCD, then there is only one special thread -- the main thread. Other than that, threads are just a generic resource that GCD can (and will) use interchangeably with one another. The fact that a thread with a thread ID of X was used at one point to service your delegate callback, and at another point was used to do your processing is not, in and of itself, indicative of a problem.
Hope this helps!

Why does Apple suggest dispatching OpenGL commands on a serial background queue when this inevitably leads to crashes?

They suggest:
When using GCD, use a dedicated serial queue to dispatch commands to
OpenGL ES; this can be used to replace the conventional mutex pattern.
I don't understand this recommendation. There is this conflict that I cannot solve:
When the app delegate of an app receives the -applicationWillResignActive call, it must immediately stop calling any OpenGL function.
If the app continues to call an OpenGL function after -applicationWillResignActive returned, the app will crash.
If I follow Apple's recommendation to call OpenGL functions in a serial background queue, I am faced with this seemingly unsolvable problem:
1) After I receive -applicationWillResignActive I must immediately stop calling any further OpenGL functions.
2) But because the serial queue is in the middle of processing a block of code in the background, sometimes the block of code would finish executing after -applicationWillResignActive returns, and the app crashes.
Here is an illustration showing the concurrent "blocks". The main thread receives a full stop message and has to prevent further calls to OpenGL ES. But unfortunately these happen in a background queue which cannot be halted in flight of working on a block:
|_____main thread: "STOP calling OpenGL ES!"_____|
_____|_____drawing queue: "Draw!"_____|_____drawing queue: "Draw!"_____|
Technically I found no way to halt the background queue instantly and avoid further calls to OpenGL in the background. A submitted block of code once running keeps running.
The only solution I found was to NOT call OpenGL ES functions in the background. Instead, call them on the main thread to guarantee that they will never be called after the app lost access to the GPU.
So if it is okay to call OpenGL ES functions in the background, how do you ensure that they never get called after the app resigned active?
Just wait in applicationWillResignActive for the queue to finish all enqueued operations using a dispatch group or a similar mechanism.
You can find an example in the documentation:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
// Add a task to the group
dispatch_group_async(group, queue, ^{
// Some asynchronous work
});
// Do some other work while the tasks execute.
// When you cannot make any more forward progress,
// wait on the group to block the current thread.
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
// Release the group when it is no longer needed.
dispatch_release(group);
Beyond Sven's suggestion of using a dispatch group, I've gone a simpler route in the past by using a synchronous dispatch into your rendering serial queue within -applicationWillResignActive:
// Tell whatever is generating rendering operations to pause
dispatch_sync(openGLESSerialQueue, ^{
[EAGLContext setCurrentContext:context];
glFinish();
// Whatever other cleanup is required
});
The synchronous dispatch here will block until all actions in the serial queue have finished, then it will run your code within that block. If you have a timer or some other source that's triggering new rendering blocks, I'd pause that first, in case it puts one last block on the queue.
As an added measure of safety, I employ glFinish() which blocks until all rendering has finished on the GPU (the PowerVR GPUs like deferring rendering as much as they can). For extremely long-running rendered frames I've occasionally seen crashes due to frame rendering still going even after all responsible OpenGL ES calls have finished. This prevents that.
I use this within a few different applications, and it pretty much eliminates crashes due to rendering when transitioning to the background. I know others use something similar with my GPUImage library (which also uses a serial background dispatch queue for its rendering) and it seems to work well for them.
The one thing you have to be cautious of, with this or any other solution that blocks until a background process is done, is that you open yourself up to deadlocks if you're not careful. If you have any synchronous dispatches back to the main queue within blocks on your background queue, you're going to want to remove those or try to make them asynchronous. If they're waiting on the main queue, and the main queue is waiting on the background queue, you'll freeze up quite nicely. Usually, it's pretty straightforward to avoid that.
