Synchronizing UI, MKMapView and CLLocationManager delegates - ios

I have a ViewController that is set as delegate of MKMapView and CLLocationManager. This ViewController also receives actions from UI buttons.
I have a state machine that needs to be updated whenever some UI action happens or when a new location is available (whether it comes from MKMapView or CLLocationManager)
Obviously, the state machine update must be atomic, but I can't seem to find a good mechanism to do so.
Here's a snippet of the code:
m_locationSerialQueue = dispatch_queue_create("locationSerialQueue_ID", NULL);
m_fsmSerialQueue = dispatch_queue_create("fsmSerialQueue_ID", NULL);
...
- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation
{
    dispatch_sync(m_locationSerialQueue, ^{
        ...
        [self updateFSMWithAction:LocationUpdated];
    });
}

- (void)mapView:(MKMapView *)mapView didUpdateUserLocation:(MKUserLocation *)userLocation
{
    dispatch_sync(m_locationSerialQueue, ^{
        ...
        [self updateFSMWithAction:LocationUpdated];
    });
}

- (IBAction)someButton:(id)sender
{
    ...
    [self updateFSMWithAction:buttonPressed];
}

- (void)updateFSMWithAction:(enum action_t)action
{
    dispatch_sync(m_fsmSerialQueue, ^{
        ...
    });
}
I logged [NSThread currentThread] and every log shows the same thread (that is, neither MKMapView nor CLLocationManager calls its delegate methods from a different thread; there seems to be only one thread), which, unless I am missing something, defeats any thread-based synchronization mechanism.
EDIT_Nov19: I added more code to show the use of serial GCD queues; however, I occasionally get deadlocks (most of the time when interacting with the UI), which is presumably because everything is happening on a single thread, right?
According to what I read (and understood) from the doc, it should be valid to do the equivalent of:
dispatch_sync(firstSerialQueue, ^{
    dispatch_sync(secondSerialQueue, ^{
        ...
    });
});
But maybe I'm missing something about GCD? I'm new to it, and I would actually prefer to use standard primitives like threads, mutexes, and semaphores. Mostly, though, I'm interested in a generic solution, because I doubt I'm the first one to run into this kind of issue.
Are there any good practices to follow in order to synchronize delegates?
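For reference, nested dispatch_sync calls onto two distinct serial queues, as in the snippet above, are legal on their own; the textbook GCD deadlock is a dispatch_sync that targets the queue which is already executing the current block. A minimal, purely illustrative sketch (the queue name is hypothetical):
dispatch_queue_t queue = dispatch_queue_create("com.example.deadlockDemo", NULL); // serial

dispatch_sync(queue, ^{
    // We are now executing on 'queue'.
    dispatch_sync(queue, ^{
        // Never reached: the outer block can't finish until this inner block
        // runs, and the inner block can't start until the outer block finishes.
        NSLog(@"never printed");
    });
});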
EDIT_Nov21: I've been trying to imagine how a Cocoa app works, drawing a parallel with a basic C program on any OS, and I think I'm beginning to understand what could be happening.
A C program has a main(...) function; when the app is launched, the OS creates a process (not going into details) and then calls main(...). When main returns, the OS undoes what it did to create the process.
Cocoa takes over main and enters a loop (which is only exited if the app tells Cocoa it will exit).
Most likely this master loop handles I/O (touch, mouse, display), and when there are events (touches), it automagically calls the view controller associated with the current view. I say automagically because there must be a lot happening under the hood; for instance, any time a view is added to the view hierarchy, it's probably registered with some internal queue of that master loop.
The master loop then goes through its queue and checks whether there's something to handle; if there is, it calls the appropriate functions. So far, this could all be one big thread.
Now, some objects (like CLLocationManager) must have their own threads, but apparently, instead of calling their delegates from those threads, they somehow add the calls to some queue on the master loop, which then goes on to call the functions. This would explain why all my functions are called from the same thread (this might seem obvious to people experienced with Cocoa, but I find it quite surprising and counter-intuitive).
Is it a convention to call all delegates on the 'master loop'?
I would imagine that the master loop also handles the display updates, and can thus process queued operations in one go (does it stop accepting requests, or does it work on a copy of the queue? It probably does something like that to avoid a corrupted display update), resulting in a sort of "atomic" update.
If that's the case, then it would make sense for the calls to the delegates to be scheduled onto the master loop, in case a UI update is made.
However, if CLLocationManager can arrange for its delegate to be called from within the master loop, why couldn't the UI objects do the same under the hood? They are probably adding operations to some queue of the master loop anyway.
EDIT_Nov21_2:
Maybe the delegate calls are scheduled onto the master loop they are attached to precisely so that people don't have to use mutexes, etc. to synchronize them? Although that's a possibility, it doesn't seem consistent with my logs. I'll have to keep digging.

Related

What is the block that CFRunLoopPerformBlock() handles?

I'm currently learning the runloop mechanism in iOS. After reading Run, RunLoop, Run! and the CFRunLoop source code, I'm still confused about how it really works. One of my confusions is about the CFRunLoopPerformBlock() function. Many articles mention that this function enqueues the block and executes it on the next run loop iteration, but my question is: what does the block mean here?
Let's say I have a very simple CustomViewController.
- (void)viewDidLoad
{
    [super viewDidLoad];

    UIView *redView = [[UIView alloc] initWithFrame:CGRectMake(0, 50, 100, 100)];
    redView.backgroundColor = [UIColor redColor];
    [self.view addSubview:redView];
}
Apparently there's no block syntax in this code. Will viewDidLoad be called by CFRunLoopPerformBlock()? If not, how is this snippet handled by the runloop?
Apparently there's no block syntax in this code. Will viewDidLoad be called by CFRunLoopPerformBlock()? If not, how is this snippet handled by runloop?
viewDidLoad has practically nothing to do with CFRunLoopPerformBlock. viewDidLoad is just a method that is called on our view controller when its view has been loaded, but before it’s been presented in the UI, to give us a chance to configure our UI.
So what is the run loop? It is just a loop that is constantly running, checking for various events (events, timers, etc.). It’s running behind the scenes in every iOS app, though we rarely interact directly with it nowadays. (The exception might be when we start certain types of timers, we add them to the main run loop. But that’s about it nowadays.) But when we return from methods like viewDidLoad, we’re yielding control back to the run loop.
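As a concrete example of one of those rare direct interactions (illustrative only; the timerFired: selector is assumed to exist on the class), a timer can be added to the main run loop by hand so it keeps firing in the common modes, e.g. while a scroll view is tracking:
NSTimer *timer = [NSTimer timerWithTimeInterval:1.0
                                         target:self
                                       selector:@selector(timerFired:)
                                       userInfo:nil
                                        repeats:YES];
// A scheduled timer would only be added to the default mode; adding it to the
// common modes keeps it firing during UI tracking.
[[NSRunLoop mainRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];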
what does the block mean here?
A “block” (also known as a “closure” in Swift) is just a piece of code to be run, when that code is stored in a variable or used as a parameter of a method. The CFRunLoopPerformBlock function effectively says, “here is some code to run during the next iteration of the run loop”. The third parameter of that function is the code to be run, and is the “block” of code (in Objective-C it starts with ^{ and ends with the final }). For information about Objective-C blocks, see Apple's Blocks Programming Topics or Programming with Objective-C: Working with Blocks.
All of this having been said, it’s worth noting that one wouldn’t generally use CFRunLoopPerformBlock. If we want to dispatch a piece of code to be run, we’d generally now use Grand Central Dispatch (GCD). For example, here is a call to dispatch_async, which takes two parameters, a queue and a block:
dispatch_async(dispatch_get_main_queue(), ^{
    self.label.text = @"Done";
});
Again, everything from the ^{ to the } is part of that second parameter, which is the block. This code says “add this block of code that updates the text of the label to the main queue.”
According to Apple documentation,
This method enqueues a block object on a given runloop to be executed as the runloop cycles in specified modes.
This method enqueues the block only and does not automatically wake up the specified run loop. Therefore, execution of the block occurs the next time the run loop wakes up to handle another input source. If you want the work performed right away, you must explicitly wake up that thread using the CFRunLoopWakeUp function.
You can pass a block of code to it like this:
CFRunLoopPerformBlock(CFRunLoopGetMain(), kCFRunLoopCommonModes, ^{
    // your code goes here
});
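As the documentation quoted above notes, the block is only enqueued; if the target run loop belongs to another thread that may be asleep, you would pair the call with CFRunLoopWakeUp. A small illustrative helper (the function name is ours, not an API):
#import <CoreFoundation/CoreFoundation.h>

// Enqueue a block on a given run loop and wake that run loop, so the block
// runs promptly instead of waiting for the next unrelated event or timer.
static void EnqueueBlockAndWake(CFRunLoopRef targetRunLoop, void (^block)(void)) {
    CFRunLoopPerformBlock(targetRunLoop, kCFRunLoopCommonModes, block);
    CFRunLoopWakeUp(targetRunLoop);
}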

How to make a function atomic in Swift?

I'm currently writing an iOS app in Swift, and I encountered the following problem: I have an object A. The problem is that while there is only one thread for the app (I didn't create separate threads), object A gets modified when
1) a certain NSTimer() triggers
2) a certain observeValueForKeyPath() triggers
3) a certain callback from Parse triggers.
From what I know, all three of the above work kind of like a software interrupt: as the code runs, if the NSTimer() / observeValueForKeyPath() / Parse callback fires, the current code gets interrupted and execution jumps to the corresponding code. This is not a race condition (since there's just one thread), and I don't think something like this https://gist.github.com/Kaelten/7914a8128eca45f081b3 can solve this problem.
There is a specific function B called in all three cases to modify object A, so I'm thinking if I can make this function B atomic, then this problem is solved. Is there a way to do this?
You are making some incorrect assumptions. None of the things you mention interrupt the processor. Cases 1 and 2 both operate synchronously: the timer won't fire, and observeValueForKeyPath won't be called, until your code finishes and your app services the event loop.
Atomic properties and other synchronization techniques are only meaningful for concurrent (multi-threaded) code. If memory serves, atomic applies only to properties, not to other methods/functions.
I believe Parse uses completion blocks that are run on a background thread, in which case your #3 *is* using separate threads, even though you didn't realize you were doing so. This is the only case in which you need to worry about synchronization. In that case the simplest thing is to bracket your completion-block code inside a call to dispatch_async(dispatch_get_main_queue(), ...), which makes all the code in that block run on the main queue, avoiding concurrency issues entirely.
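For illustration, in Objective-C (matching the rest of this page, although the question itself is in Swift; the query and the update method name below are hypothetical), that pattern looks roughly like:
[someParseQuery findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    // This completion block arrives on a background thread, so hop back to
    // the main queue before touching the shared object.
    dispatch_async(dispatch_get_main_queue(), ^{
        // Object A is now only ever mutated on the main queue, alongside the
        // timer and KVO callbacks, so no extra locking is needed.
        [self updateObjectAWithResults:objects];
    });
}];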

iOS: Background Threads / Multithreading?

If a second method is called from a method that is running on a background thread, is the second method automatically run on that same thread, or does it happen back on the main thread?
Note: I want my second method to be handled in the background, but since I update the UI inside it, would doing the following be the right way to do it:
- (void)firstMethod {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Do stuff in background
        ...
        // Call a second method (assume it'll run in this background thread as well)
        [self secondMethod];
    });
}

// Second method
- (void)secondMethod {
    // Do heavy lifting here
    ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI here
        ...
    });
}
Update
Oh, I totally forgot to mention that this method loads suggestions into the view (think keyboard suggestions). Since every key tap would call this method, I only want it to run once the user has finished typing. The way I'm approaching it is by allowing a 0.2 s delay between keys: if a new key tap falls within that delay, it cancels the previous call and schedules a new one. That way, if the user types the word "the", it doesn't run suggestions for "t", "th", and "the". If the user is typing quickly, we can assume they don't want suggestions until they've stopped typing (letting the call go through after the 0.2 s delay); if they type slowly, they probably are looking for suggestions along the way.
So when calling my secondMethod I do the following:
[NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(secondMethod) object:nil];
[self performSelector:@selector(secondMethod) withObject:nil afterDelay:0.2];
The problem is it's not being called (I'm assuming this method defaults to being performed on the main thread?)
Generally speaking, nothing is going to hop between threads without being pretty explicit about it; certainly something as trivial as just calling a method isn't. Your code seems fine. Just remember not to access mutable state from more than one queue at once. For example, if the heavy lifting uses instance variables, make sure -firstMethod doesn't get called twice in a row: it would spawn off two async calls to -secondMethod, and they'd step all over each other's data. If that's a problem, create a serial dispatch queue instead of using a global one, as sketched below.
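A minimal sketch of that serial-queue variant (the class and queue names are illustrative):
#import <Foundation/Foundation.h>

// A private serial queue so that overlapping calls to -firstMethod cannot run
// the heavy lifting in -secondMethod concurrently.
@interface SuggestionController : NSObject
- (void)firstMethod;
@end

@implementation SuggestionController {
    dispatch_queue_t _workQueue;
}

- (instancetype)init {
    if ((self = [super init])) {
        _workQueue = dispatch_queue_create("com.example.suggestions", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)firstMethod {
    dispatch_async(_workQueue, ^{
        [self secondMethod];   // stays on the serial queue, one call at a time
    });
}

- (void)secondMethod {
    // heavy lifting here, serialized with any other work on _workQueue ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI updates still hop back to the main queue
    });
}

@end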

objective-c, possible to queue async NSURLRequests?

I realize this question sounds contradictory. I have several async requests going out in an application. The situation is that the first async request is an authentication request, and the rest will use an access token returned by the successful authentication request.
The two obvious solutions would be:
run them all synchronously, and risk blocking the UI (bad choice)
run them async, and put requests 2-N in the completion handler of the first one (not practical)
The trouble is that the subsequent requests may be handled anywhere in the project, at anytime. The failure case would be if the 2nd request was called immediately after the 1st authentication request was issued, and before the access token was returned.
My question thus is: is there any way to queue up async requests, or to somehow say not to issue them until the first request returns successfully?
EDIT:
Why (2) is not practical: the first request is an authentication request, happening when the app loads. The 2nd+ may occur right away, in which case it is practical, but they may also occur in a completely separate class or in any other part of a large application. I can't essentially put the entire application in the completion handler. Other API requests may occur in other classes, at any time, even 1-2 days later after many other things have happened.
SOLUTION:
// pseudo code using a semaphore to block all other API calls until the
// authentication call has returned

// at start of auth
_semaphore = dispatch_semaphore_create(0);

// at start of API calls
if (_accessToken == nil && ![_apiCall isEqualToString:@"auth"]) {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
}

// at end of auth, with the auth token
dispatch_semaphore_signal([[SFApi Instance] semaphore]);
_accessToken = ...;
This sounds like a case where you'd want to use NSOperation's dependencies.
From apple docs:
Operation Dependencies
Dependencies are a convenient way to execute operations in a specific order. You can add and remove dependencies for an operation using the addDependency: and removeDependency: methods. By default, an operation object that has dependencies is not considered ready until all of its dependent operation objects have finished executing. Once the last dependent operation finishes, however, the operation object becomes ready and able to execute.
Note that in order for this to work, you must subclass NSOperation "properly" with respect to KVO compliance:
The NSOperation class is key-value coding (KVC) and key-value observing (KVO) compliant for several of its properties. As needed, you can observe these properties to control other parts of your application.
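A minimal sketch of the dependency wiring itself (using NSBlockOperation just to show the mechanism; a real asynchronous network call would need the concurrent NSOperation subclass mentioned above, so the operation doesn't finish before its completion handler runs):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *authOperation = [NSBlockOperation blockOperationWithBlock:^{
    // authenticate and store the access token
}];

NSBlockOperation *apiOperation = [NSBlockOperation blockOperationWithBlock:^{
    // uses the access token; will not start until authOperation has finished
}];

[apiOperation addDependency:authOperation];

[queue addOperation:authOperation];
[queue addOperation:apiOperation];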
You can't really have it both ways-- there's no built-in serialization for the NSURLConnection stuff. However, you are probably already funneling all of your API requests through some common class anyway (presumably you're not making raw network calls willy-nilly all over the app).
You'll need to build the infrastructure inside that class that prevents the execution of the later requests until the first request has completed. This suggests some sort of serial dispatch queue that all requests (including the initial auth step) are funneled through. You could do this via dependent NSOperations, as is suggested elsewhere, but it doesn't need to be that explicit. Wrapping the requests in a common set of entry points will allow you to do this any way you want behind the scenes.
In cases like this I always find it easiest to write the code synchronously and get it running on the UI thread first, correctly, just for debugging. Then, move the operations to separate threads and make sure you handle concurrency.
In this case the perfect mechanism for concurrency is a semaphore: the authentication operation signals the semaphore when it is done, and all the other operations block on it. Once authentication is done, the floodgates are open.
The relevant functions are dispatch_semaphore_create() and dispatch_semaphore_wait() from the Grand Central Dispatch documentation: https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/doc/uid/TP40008079-CH2-SW2
Another excellent solution is to create a queue with a barrier:
A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.
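A hedged sketch of that barrier approach (the queue name is illustrative; note that barriers only behave this way on a custom concurrent queue, not on one of the global queues):
dispatch_queue_t apiQueue = dispatch_queue_create("com.example.api", DISPATCH_QUEUE_CONCURRENT);

// The authentication request is the barrier: nothing submitted after it will
// start until it has completed. The block must perform (or wait for) the auth
// request synchronously for the barrier to cover it.
dispatch_barrier_async(apiQueue, ^{
    // perform the auth request and store the token
});

// Ordinary requests submitted afterwards can run concurrently with one another,
// but always after the barrier block has finished.
dispatch_async(apiQueue, ^{
    // request that needs the token
});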
Looks like you got it running with a semaphore, nicely done!
Use blocks... 2 ways that I do it:
First, a block inside of a block...
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        // this won't run until the first one has finished
        [myCommKit adjustSomething:thingToAdjust withCallback:^(ReturnCode returnCode, NSDictionary *successCode) {
            if (successCode) {
                // this won't run until both the first and then the second one finished
            }
        }];
    }
}];
// don't be confused.. anything down here will run instantly!!!!
Second way is a method inside of a block
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        [self doNextThingAlsoUsingBlocks];
    }
}];
Either way, any time I do async communication with my server I use blocks. You have to think differently when writing code that communicates with a server. You have to force things to go in the order you want and wait for the return success/fail before doing the next thing. And getting used to blocks is the right way to think about it. It could be 15 seconds between when you start the block and when it gets to the callback and executes the code inside. It could never come back if they're not online or there's a server outage.
Bonus way.. I've also sometimes done things using stages:
switch (serverCommunicationStage) {
    case FIRST_STAGE:
    {
        serverCommunicationStage = FIRST_STAGE_WAITING;
        // either have a block in here or call a method that has a block
        [ block {
            // in the callback of this async call
            serverCommunicationStage = SECOND_STAGE;
        }];
        break;
    }
    case FIRST_STAGE_WAITING:
    {
        // this just waits for the first step to complete
        break;
    }
    case SECOND_STAGE:
    {
        // either have a block in here or call a method that has a block
        break;
    }
}
Then in your draw loop or somewhere keep calling this method. Or set up a timer to call it every 2 seconds or whatever makes sense for your application. Just make sure to manage the stages properly. You don't want to accidentally keep calling the request over and over. So make sure to set the stage to waiting before you enter the block for the server call.
I know this might seem like an older school method. But it works fine.

@synchronized block versus GCD dispatch_async()

Essentially, I have a set of data in an NSDictionary, but for convenience I'm setting up some NSArrays with the data sorted and filtered in a few different ways. The data will be coming in via different threads (blocks), and I want to make sure there is only one block at a time modifying my data store.
I went through the trouble of setting up a dispatch queue this afternoon, and then randomly stumbled onto a post about @synchronized that made it seem like pretty much exactly what I want to be doing.
So what I have right now is...
// a property on my object
@property (assign) dispatch_queue_t matchSortingQueue;

// in my object init
_matchSortingQueue = dispatch_queue_create("com.asdf.matchSortingQueue", NULL);

// then later...
- (void)sortArrayIntoLocalStore:(NSArray*)matches
{
    dispatch_async(_matchSortingQueue, ^{
        // do stuff...
    });
}
And my question is, could I just replace all of this with the following?
- (void)sortArrayIntoLocalStore:(NSArray*)matches
{
    @synchronized (self) {
        // do stuff...
    }
}
...And what's the difference between the two anyway? What should I be considering?
Although the functional difference might not matter much to you, it's what you'd expect: if you @synchronized then the thread you're on is blocked until it can get exclusive execution. If you dispatch to a serial dispatch queue asynchronously then the calling thread can get on with other things and whatever it is you're actually doing will always occur on the same, known queue.
So they're equivalent for ensuring that a third resource is used from only one queue at a time.
Dispatching could be a better idea if, say, you had a resource that is accessed by the user interface from the main queue and you wanted to mutate it. Then your user interface code doesn't explicitly need to use @synchronized, hiding the complexity of your threading scheme within the object quite naturally. Dispatching will also be a better idea if you've got a central actor that can trigger several of these changes on different actors; that'll allow them to operate concurrently.
Synchronising is more compact and a lot easier to step-debug. If what you're doing tends to be two or three lines and you'd need to dispatch it synchronously anyway, then it feels like going to the effort of creating a queue isn't worth it, especially when you consider the implicit costs of creating a block and moving it over onto the heap.
In the second case you would block the calling thread until "do stuff" was done. Using queues and dispatch_async you will not block the calling thread. This would be particularly important if you call sortArrayIntoLocalStore from the UI thread.
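For completeness, a common hybrid of the two (a sketch; the _sortedMatches ivar and the accessor name are illustrative): writes stay asynchronous on the serial queue from the question, while reads use dispatch_sync on the same queue, so callers always get a consistent snapshot and only block for as long as the already-queued work takes.
- (void)sortArrayIntoLocalStore:(NSArray *)matches
{
    dispatch_async(_matchSortingQueue, ^{
        // mutate the sorted/filtered arrays here
    });
}

- (NSArray *)sortedMatchesSnapshot
{
    __block NSArray *snapshot;
    dispatch_sync(_matchSortingQueue, ^{
        snapshot = [_sortedMatches copy];   // _sortedMatches is an illustrative ivar
    });
    return snapshot;
}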
