How does AFNetworking's NSRunLoop stop? - afnetworking

I want to learn about NSRunLoop by studying the AFNetworking source code. The AFNetworking developers add a mach port to the dedicated thread's run loop to keep that thread alive, but how does the AF thread ever stop? I know the AF operation sets its finished state (via KVO) when the URL connection fails or finishes loading, but there is no place where the mach port is removed from the run loop, so how can the run loop stop?
It assigns nil to the connection and closes the output stream, but it doesn't remove the mach port, so I guess the run loop is still running.
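For context, here is a minimal Swift sketch of the keep-alive pattern the question is describing (AFNetworking's real implementation is Objective-C; the thread name below is illustrative, not theirs): a dummy mach port is attached to the thread's run loop so that run() always has at least one input source and never returns on its own.

import Foundation

// Rough sketch (not AFNetworking's actual code) of the keep-alive trick described above:
// attach a dummy mach port to the thread's run loop so run() always has an input source.
let networkThread = Thread {
    autoreleasepool {
        let runLoop = RunLoop.current
        runLoop.add(NSMachPort(), forMode: .default)   // dummy source keeps the loop alive
        runLoop.run()                                  // blocks here, servicing sources indefinitely
    }
}
networkThread.name = "com.example.networking"          // illustrative name
networkThread.start()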

Related

Scheduling execution of blocks in Objective C

I'm creating an application in Objective-C where I have two threads:
The main thread, which is woken up from sleep and called into asynchronously by a module above it
The callback block (thread), whose execution is asynchronous and depends on an external module "M" sending a notification.
On my main thread, I want to wait for the callback to come in before I start doing my tasks. So I tried using dispatch_group_enter and dispatch_group_wait(FOREVER) on the main thread, while calling dispatch_group_leave on the callback thread. This ensured that when the main thread is the first to execute, things happen as they are supposed to, i.e. the main thread waits for the callback to come in and unblock it before performing its tasks.
However, I'm seeing a race condition where the callback block sometimes gets called first and is stuck on dispatch_group_leave (since at this point the main thread has not called dispatch_group_enter).
Is there a different GCD construct I can use for this purpose?
The “main thread” is the thread which handles UI, system events, notifications, etc. We never block that thread. Blocking it results in a horrible UX where the app will appear to freeze, and your app may even be terminated by the “watchdog” process, which kills apps that it thinks are frozen. In some cases, the app will deadlock.
So, if you really mean “main thread”, then the answer is that you would never “wait” on that thread (or otherwise block it). The pattern is to have your background thread do what it needs, and then dispatch model/UI updates back to the main thread with GCD (or submit your notification and let the main thread process it).
If you want a UX where the user is not allowed to interact with the UI while this background process is underway, you would present something in your UI that makes that clear. A common pattern is a dimming/blurring view that covers the whole view, often with a UIActivityIndicatorView (i.e., a spinner), and when the task dispatched to the background queue is done (or have the notification handler do that), you’d then remove that dimming/blurred view and the spinner and update the UI accordingly.
But you never block the main thread by waiting.
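A minimal, self-contained sketch of that pattern, with performLongRunningWork standing in for whatever the slow task actually is (all names below are illustrative, not from the question): the slow work runs on a background queue, and only the completion hops back to the main queue.

import Foundation

// Illustrative stand-in for the slow work; any blocking call would go here.
func performLongRunningWork() -> String {
    Thread.sleep(forTimeInterval: 2)
    return "result"
}

// Do the slow work off the main thread, then hop back to the main queue
// for the model/UI update. The main thread is never blocked.
func startBackgroundWork(completion: @escaping (String) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let result = performLongRunningWork()
        DispatchQueue.main.async {
            completion(result)   // update the UI / remove the dimming view here
        }
    }
}

// Usage (e.g. from a button handler): show the spinner, then
// startBackgroundWork { result in /* hide spinner, update UI with result */ }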

What is the difference between sync and async input sources for run loops?

What does it mean that input sources such as ports deliver events to a run loop asynchronously, while timers deliver events synchronously?
Does the timer block the thread?
The Threading Programming Guide: Run Loops says:
A run loop receives events from two different types of sources. Input sources deliver asynchronous events, usually messages from another thread or from a different application. Timer sources deliver synchronous events, occurring at a scheduled time or repeating interval. Both types of source use an application-specific handler routine to process the event when it arrives.
A timer only blocks the thread while the timer's closure or selector method is running. As soon as you return from that, the thread is no longer blocked. So make sure to get in and out as quickly as possible.
For example, if you have scheduled a timer to fire in 10 seconds, and the code in the timer’s handling closure/selector takes 100 msec to run, then the thread is not blocked until the timer fires, and then only for 100 msec. Same with repeating timers.
Bottom line, as long as you’re not doing anything too computationally expensive in your timer handler, there’s nothing to worry about. And if you do need to do anything that might block for any material amount of time, then either have your timer handler asynchronously dispatch that relevant code to some background queue, or just schedule a GCD timer to run on a background queue directly, bypassing Timer altogether.
But for most Timer use-cases, this just isn’t an issue.
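If you do need the timer work kept off the main thread entirely, here is a rough sketch of the GCD timer approach mentioned above (the queue label and interval are illustrative): the timer fires on a background queue, so even a slow handler never touches the main thread.

import Foundation

// GCD timer on a background queue (a sketch; label and interval are illustrative).
let timerQueue = DispatchQueue(label: "com.example.timer")
let timer = DispatchSource.makeTimerSource(queue: timerQueue)
timer.schedule(deadline: .now() + 10, repeating: 10)   // first fire in 10 s, then every 10 s
timer.setEventHandler {
    // Do the (possibly expensive) periodic work here, off the main thread.
}
timer.resume()   // timers start suspended; keep a strong reference to the timer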

Understanding dispatch_queues and synchronous/asynchronous dispatch

I'm an Android engineer trying to port some iOS code that uses 5 SERIAL dispatch queues. I want to make sure I'm thinking about things the right way.
dispatch_sync to a SERIAL queue is basically using the queue as a synchronized queue: only one thread may access it, and the block that gets executed can be thought of as a critical region. It happens immediately on the current thread; it's the equivalent of
get_semaphore()
queue.pop()
do_block()
release_semaphore()
dispatch_async to a serial queue performs the block on another thread and lets the current thread return immediately. However, since it's a serial queue, it promises that only one of these asynchronous threads is going to execute at a time (the next call to dispatch_async will wait until all other threads are finished). That block can also be thought of as a critical region, but it will occur on another thread. So it's the same code as above, but it's passed to a worker thread first.
Am I off in any of that, or did I figure it out correctly?
This feels like an overly complicated way of thinking of it and there are lots of little details of that description that aren't quite right. Specifically, "it happens immediately on the current thread" is not correct.
First, let's step back: The distinction between dispatch_async and dispatch_sync is merely whether the current thread waits for it or not. But when you dispatch something to a serial queue, you should always imagine that it's running on a separate worker thread of GCD's own choosing. Yes, as an optimization, sometimes dispatch_sync will use the current thread, but you are in no way guaranteed of this fact.
Second, when you discuss dispatch_sync, you say something about it running "immediately". But it's by no means assured to be immediate. If a thread does dispatch_sync to some serial queue, then that thread will block until (a) any block currently running on that serial queue finishes; (b) any other queued blocks for that serial queue run and complete; and (c) obviously, the block that the thread itself dispatched runs and completes.
Now, when you use a serial queue for synchronization, e.g. for thread-safe access to some object in memory, that synchronization process is often very quick, so the waiting thread will generally be blocked only for a negligible amount of time while its dispatched block (and any prior dispatched blocks) finish. But in general, it's misleading to say that it will run immediately. (If it could always run immediately, then you wouldn't need a queue to synchronize access.)
Now, your question talks about a "critical region", by which I assume you mean some bit of code that must be synchronized, in order to ensure thread-safety or for some other reason like that. So, when running this code to be synchronized, the only question re dispatch_sync vs dispatch_async is whether the current thread must wait. A common pattern, for example, is to dispatch_async writes to some model (because there's no need to wait for the model to update before proceeding), but dispatch_sync reads from that model (because you obviously don't want to proceed until the read value is returned).
A further optimization of that sync/async pattern is the reader-writer pattern, where concurrent reads are permissible but concurrent writes are not. Thus, you'll use a concurrent queue, dispatch_barrier_async the writes (achieving serial-like behavior for the writes), but dispatch_sync the reads (enjoying concurrent performance with respect to other read operations).
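A minimal Swift sketch of that reader-writer pattern, wrapping a dictionary (the type name and queue label are made up for illustration):

import Foundation

final class SynchronizedStore {
    private var storage: [String: Int] = [:]
    private let queue = DispatchQueue(label: "com.example.store", attributes: .concurrent)

    // Reads run concurrently with other reads; they wait only for any in-flight write.
    func value(forKey key: String) -> Int? {
        return queue.sync { storage[key] }
    }

    // Writes use a barrier, so they run exclusively, giving serial-like behavior for writes.
    func setValue(_ value: Int, forKey key: String) {
        queue.async(flags: .barrier) {
            self.storage[key] = value
        }
    }
}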
To pick nits, dispatch_sync doesn't necessarily run the code on the current thread, but if it doesn't, it still blocks the current thread until the task completes. The distinction is only potentially important if you're relying on thread IDs or thread-local storage.
But otherwise, yes, unless I missed something subtle.

Clear process flow of NSRunLoop

I've been digging for more than a day... Apple, Google, SlideShare and Stack Overflow. But NSRunLoop is still not clear to me.
Every thread has a run loop by default. The application's main thread has the main run loop.
1. If the main run loop gets input events, does it create a new thread to execute them? Is another run loop created then? How do multiple threads and multiple run loops work together and communicate?
2. If a run loop has no input events/tasks, it sleeps. When does a run loop end?
3. Why should I care about run loops?
4. Where can I use them?
What am I missing that keeps me from understanding the life cycle?
Let's look at your list:
Wrong. Threads do not have a built-in run loop; it has to be created and run manually.
A run loop doesn't create other threads; it executes the event handler immediately, on its own thread. That is why heavy tasks on the main thread lock up the interface (on iPhone, the UI runs on the main thread). Run loops can communicate with each other with the help of mach ports.
A run loop sleeps until the first event comes in, then wakes up, processes it, and ends. The only exception is a timer: a timer firing does not cause the run loop to exit. You need to start the run loop again after each event (i.e. run it in a loop); if you call run, that loop is already built in.
You can use it to create threads that must monitor something or execute work periodically. For example, you can create a thread, run a run loop on it, and then other threads can execute its selectors via performSelector. This gives you a background request processor that doesn't require creating a new thread each time.
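A hedged Swift sketch of that long-lived worker-thread pattern (the class name, thread name, and selector are illustrative): the run loop is kept alive with a dummy port, the same trick described in the AFNetworking question above, and other threads hand it work via perform(_:on:with:waitUntilDone:).

import Foundation

final class Worker: NSObject {
    // The worker thread keeps its run loop alive with a dummy port.
    private let thread = Thread {
        RunLoop.current.add(NSMachPort(), forMode: .default)
        RunLoop.current.run()
    }

    override init() {
        super.init()
        thread.name = "com.example.worker"
        thread.start()
    }

    // Runs on the worker thread, serviced by its run loop.
    @objc private func handle(_ message: NSString) {
        print("processing \(message) on \(Thread.current)")
    }

    // Callable from any thread; the selector is queued onto the worker's run loop.
    func submit(_ message: String) {
        perform(#selector(handle(_:)), on: thread, with: message as NSString, waitUntilDone: false)
    }
}

let worker = Worker()
worker.submit("hello")   // handled on the worker thread, not the caller's thread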

How to stop/cancel/suspend/resume tasks on GCD queue

How does one stop background queue operations? I want to stop them on some screens in our app, and on some screens they should automatically resume. Also, how does one pass a queue around in iOS?
I mean that while the user is browsing the app, we run background work on a dispatch_queue_t, but it never stops or resumes anywhere in the code. So how does one suspend and resume a queue?
To suspend a dispatch queue, it is simply queue.suspend() (dispatch_suspend(queue) in Objective-C). That doesn't affect any tasks currently running, but merely prevents new tasks from starting on that queue. Also, you obviously only suspend queues that you created (not global queues, not main queue).
To resume a dispatch queue, it is queue.resume() (or dispatch_resume(queue) in Objective-C). There's no concept of “auto resume”, so you'd just have to manually resume it when appropriate.
To pass a dispatch queue around, you simply pass the DispatchQueue object that you created (or the dispatch_queue_t object that you created when you called dispatch_queue_create() in Objective-C).
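A tiny sketch of that, assuming a queue you created yourself (the label is illustrative):

import Foundation

// Suspend/resume only works on queues you create; global/main queues can't be suspended.
let backgroundQueue = DispatchQueue(label: "com.example.background")

backgroundQueue.async {
    // some long-running work...
}

backgroundQueue.suspend()   // new blocks won't start; a block already running keeps going
// ... later, e.g. when the relevant screen reappears ...
backgroundQueue.resume()    // queued blocks can start again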
In terms of canceling tasks queued on dispatch queues, this is a capability that was introduced in iOS 8. You can call item.cancel() on a DispatchWorkItem (or dispatch_block_cancel(block) on a dispatch_block_t object in Objective-C). This cancels queued blocks/items that have not started, but does not stop ones that are already underway. If you want to be able to interrupt a dispatched block/item, you have to periodically examine item.isCancelled (or dispatch_block_testcancel() in Objective-C).
See https://stackoverflow.com/a/38372384/1271826 for examples on canceling dispatch work items.
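As a rough sketch of that cooperative-cancellation idea (the queue label and loop body are just placeholders):

import Foundation

let workQueue = DispatchQueue(label: "com.example.work")

// Declared first so the block can consult its own cancellation state.
var item: DispatchWorkItem!
item = DispatchWorkItem {
    for i in 0 ..< 1_000 {
        if item.isCancelled { return }   // bail out promptly if canceled mid-flight
        // ... one chunk of work for iteration i ...
        _ = i
    }
    item = nil   // break the block -> item reference cycle once the work finishes
}

workQueue.async(execute: item)
// Later, e.g. when the user leaves the screen:
item?.cancel()   // stops it from starting, or is seen via isCancelled if already running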
If you want to cancel tasks, you might also consider using operation queues, i.e. OperationQueue (NSOperationQueue in Objective-C). Its cancelable operations have been around for a while and you're likely to find lots of examples online. It also supports constraining the degree of concurrency with maxConcurrentOperationCount (whereas with dispatch queues you can only choose between serial and concurrent, and controlling concurrency more than that requires a tiny bit of effort on your part).
If using operation queues, you suspend and resume by changing the suspended property of the queue. And to pass it around, you just pass the NSOperationQueue object you instantiated.
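A small sketch of the OperationQueue variant (the queue name and concurrency limit are illustrative):

import Foundation

let operationQueue = OperationQueue()
operationQueue.name = "com.example.operations"
operationQueue.maxConcurrentOperationCount = 2    // cap the degree of concurrency

let operation = BlockOperation {
    // some cancelable background work...
}
operationQueue.addOperation(operation)

operationQueue.isSuspended = true     // stop starting new operations ("suspended" in Objective-C)
operationQueue.isSuspended = false    // resume
operation.cancel()                    // cancel one operation...
operationQueue.cancelAllOperations()  // ...or everything still queued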
Having said all of that, I'd suggest you expand your question to elaborate what sort of tasks are running in the background and articulate why you want to suspend them. There might be better approaches than suspending the background queue.
In your comments, you mention that you were using NSTimer, a.k.a. Timer in Swift. If you want to stop a timer, call timer.invalidate(). Create a new NSTimer when you want to start it again.
Or if the timer is really running on a background thread, GCD “dispatch source timers” do this far more gracefully. With a GCD timer, you can suspend/resume it just like you suspend/resume a queue, just using the timer object instead of the queue object.
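For example, a rough sketch of a suspendable GCD timer (the label and interval are illustrative):

import Foundation

// A dispatch source timer can be paused and resumed like a queue.
let timerQueue = DispatchQueue(label: "com.example.polling")
let pollTimer = DispatchSource.makeTimerSource(queue: timerQueue)
pollTimer.schedule(deadline: .now(), repeating: 5)   // fire now, then every 5 s
pollTimer.setEventHandler {
    // periodic background work...
}
pollTimer.resume()     // sources start suspended, so resume once to start it

pollTimer.suspend()    // pause when the screen disappears
pollTimer.resume()     // resume when it comes back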
You can't pause / cancel when using a GCD queue. If you need that functionality (and in a lot of general cases even if you don't) you should be using the higher level API - NSOperationQueue. This is built on top of GCD but it gives you the ability to control how many things are executing at the same time, suspend processing of the queue and to cancel individual / all operations.

Resources