Managing asynchronous calls to a web API in iOS

I am fetching data (news articles) in JSON format from a web service. The fetched data needs to be converted to an Article object and that object should be stored or updated in the database. I am using Alamofire for sending requests to the server and Core Data for database management.
My approach to this was to create a DataFetcher class for fetching JSON data and converting it to Article objects:
class DataFetcher {
    var delegate: DataFetcherDelegate?

    func fetchArticlesFromUrl(url: String, andCategory category: ArticleCategory) {
        //convert json to articles
        //send articles to delegate
        getJsonFromUrl(url) { (json: JSON?, error: NSError?) in
            if error != nil {
                print("An error occurred while fetching json: \(error)")
            }
            if json != nil {
                let articles = self.getArticleFromJson(json!, andCategory: category)
                self.delegate?.receivedNewArticles(articles, fromCategory: category)
            }
        }
    }
After I fetch the data I send it to the DataImporter class to store it in the database:
func receivedNewArticles(articles: [Article], fromCategory category: ArticleCategory) {
    //update the database with new articles
    //send articles to delegate
    delegate?.receivedUpdatedArticles(articles, fromCategory: category)
}
The DataImporter class sends the articles to its delegate that is in my case the ViewController. This pattern was good when I had only one API call to make (that is fetchArticles), but now I need to make another call to the API for fetching categories. This call needs to be executed before the fetchArticles call in the ViewController.
This is the viewDidLoad method of my viewController:
override func viewDidLoad() {
    super.viewDidLoad()
    self.dataFetcher = DataFetcher()
    let dataImporter = DataImporter()
    dataImporter.delegate = self
    self.dataFetcher?.delegate = dataImporter
    self.loadCategories()
    self.loadArticles()
}
My questions are:
What is the best way to ensure that one call to the API gets executed before the other one?
Is the pattern that I implemented good, given that I need to make a different method for each API call?

What is the best way to ensure that one call to the API gets executed before the other one?
If you want to ensure that two or more asynchronous functions execute sequentially, you should first remember this:
If you implement a function which calls an asynchronous function, the calling function becomes asynchronous as well.
An asynchronous function should have a means to signal the caller that it has finished.
If you look at the network function getJsonFromUrl - which is an asynchronous function - it has a completion handler parameter which is one approach to signal the caller that the underlying task (a network request) has finished.
Now, fetchArticlesFromUrl calls the asynchronous function getJsonFromUrl and thus becomes asynchronous as well. However, in your current implementation it has no means to signal the caller that its underlying task (getJsonFromUrl) has finished. So, you first need to fix this, for example by adding an appropriate completion handler parameter and ensuring that it will eventually be called from within the body.
The same is true for your functions loadArticles and loadCategories. I assume these are asynchronous and require a means to signal the caller that the underlying task has finished - for example, by adding a completion handler parameter.
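For illustration, a minimal sketch of fetchArticlesFromUrl with such a completion handler added, written in the Swift 2-era style of the question (the error handling is deliberately simplified):
func fetchArticlesFromUrl(url: String, andCategory category: ArticleCategory, completion: ([Article]?, NSError?) -> ()) {
    getJsonFromUrl(url) { (json: JSON?, error: NSError?) in
        if let error = error {
            // Signal failure to the caller instead of only printing it.
            completion(nil, error)
            return
        }
        if let json = json {
            let articles = self.getArticleFromJson(json, andCategory: category)
            // Signal success together with the converted articles.
            completion(articles, nil)
        }
    }
}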
Once you have a number of asynchronous functions, you can chain them - that is, they will be called sequentially:
Given two asynchronous functions:
func loadCategories(completion: (AnyObject?, ErrorType?) -> ())
func loadArticles(completion: (AnyObject?, ErrorType?) -> ())
Call them as shown below:
loadCategories { (categories, error) in
    if let categories = categories {
        // do something with categories:
        ...
        // Now, call loadArticles:
        loadArticles { (articles, error) in
            if let articles = articles {
                // do something with the articles
                ...
            } else {
                // handle error:
                ...
            }
        }
    } else {
        // handle error
        ...
    }
}
Is the pattern that I implemented good, given that I need to make a different method for each API call?
IMHO, you should not merge two functions into one where one performs the network request and the other processes the returned data. Just keep them separate. The reason is, you might want to explicitly specify the "execution context" - that is, the dispatch queue - where you want the code to be executed. Usually, Core Data, CPU-bound functions and network functions should not or cannot share the same dispatch queue - possibly also due to concurrency constraints. Because of this, you may want to control where your code executes through a parameter which specifies a dispatch queue.
If processing the data may take perceivable time (e.g. > 100 ms), don't hesitate to execute it asynchronously on a dedicated queue (not the main queue). Chain several asynchronous functions as shown above.
So, your code may consist of four asynchronous functions, network request 1, process data 1, network request 2, process data 2. Possibly, you need another function specifically for storing the data into Core Data.
Other hints:
Unless there's a parameter which can be set by the caller and which explicitly specifies the "execution context" (e.g. a dispatch queue) where the completion handler should be called, it is preferable to submit the call of the completion handler to a concurrent global dispatch queue. This performs better and avoids deadlocks. This is in contrast to Alamofire, which by default calls its completion handlers on the main thread, which is prone to deadlocks and performs suboptimally. If you can configure the queue where the completion handler will be executed, please do so.
Prefer to execute functions and code on a dispatch queue which is not associated with the main thread - e.g. not the main queue. In your code, it seems, the bulk of the data processing will be executed on the main thread. Just ensure that UIKit methods execute on the main thread.
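As a rough sketch of both hints combined, in the Swift 2-era GCD style of the question (processData is a placeholder, not a function from the question; the body uses the same elided ... style as the snippets above):
func processData(data: NSData,
                 completionQueue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                 completion: ([Article]?, ErrorType?) -> ()) {
    // The heavy lifting runs on a global concurrent queue, never the main queue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
        let articles = ... // parse `data` into [Article] here (elided)
        // By default the completion handler is also called on a concurrent global
        // queue; a caller that updates UI can pass dispatch_get_main_queue() instead.
        dispatch_async(completionQueue) {
            completion(articles, nil)
        }
    }
}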

Related

Can asynchronous operations be used with `progress` on OperationQueue?

Starting in iOS13, one can monitor the progress of an OperationQueue using the progress property. The documentation states that only operations that do not override start() count when tracking progress. However, asynchronous operations must override start() and not call super() according to the documentation.
Does this mean asynchronous operations and progress are mutually exclusive (i.e. only synchronous operations can be used with progress)? This seems like a massive limitation if this is the case.
In my own project, I removed my override of start() and everything appears to work okay (e.g. dependencies are only started when isFinished is set to true on the dependent operation internally in my async operation base class). BUT, this seems risky since Operation explicitly states to override start().
Thoughts?
Documentation references:
https://developer.apple.com/documentation/foundation/operationqueue/3172535-progress
By default, OperationQueue doesn’t report progress until totalUnitCount is set. When totalUnitCount is set, the queue begins reporting progress. Each operation in the queue contributes one unit of completion to the overall progress of the queue for operations that are finished by the end of main(). Operations that override start() and don’t invoke super don’t contribute to the queue’s progress.
https://developer.apple.com/documentation/foundation/operation/1416837-start
If you are implementing a concurrent operation, you must override this method and use it to initiate your operation. Your custom implementation must not call super at any time. In addition to configuring the execution environment for your task, your implementation of this method must also track the state of the operation and provide appropriate state transitions.
Update: I ended up ditching my AsyncOperation for a simple SyncOperation that waits until finish() is called (using a semaphore).
/// A synchronous operation that automatically waits until `finish()` is called.
open class SyncOperation: Operation {
    private let waiter = DispatchSemaphore(value: 0)

    /// Calls `work()` and waits until `finish()` is called.
    public final override func main() {
        work()
        waiter.wait()
    }

    /// The work of the operation. Subclasses must override this function and call `finish()` when their work is done.
    open func work() {
        preconditionFailure("Subclasses must override `work()` and call `finish()`")
    }

    /// Finishes the operation.
    ///
    /// The work of the operation must be completed when called. Failing to call `finish()` is a programmer error.
    final public func finish() {
        waiter.signal()
    }
}
You ask:
Does this mean asynchronous operations and progress are mutually exclusive (i.e. only synchronous operations can be used with progress)? This seems like a massive limitation if this is the case.
Yes, if you implement start, you have to add the operation’s child Progress to the queue’s parent progress yourself. (It is a little surprising that they did not have the base operation update the progress by observing the isFinished KVO, but it is what it is. Or they could have used the becomeCurrent(withPendingUnitCount:)-resignCurrent pattern, and then this fragile behavior would not exist.)
But I would not abandon asynchronous operations solely because you want their Progress. By making your operation synchronous, you would unnecessarily tie up one of the very limited number of worker threads for the duration of the operation. That is the sort of decision that seems very convenient, might not introduce immediate problems, but longer-term might introduce problems that are exceedingly hard to identify when you unexpectedly exhaust your worker thread pool.
Fortunately, adding our own child Progress is exceedingly simple. Consider a custom operation with its own child Progress:
class TestOperation: AsynchronousOperation {
    let progress = Progress(totalUnitCount: 1)

    override func main() {
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [self] in
            progress.completedUnitCount = 1
            finish()
        }
    }
}
And then, while adding them to your queue, add the operation’s progress as a child of the operation queue’s Progress:
class ViewController: UIViewController {
    @IBOutlet weak var progressView: UIProgressView!

    let queue: OperationQueue = ...

    override func viewDidLoad() {
        super.viewDidLoad()

        queue.progress.totalUnitCount = 10
        progressView.observedProgress = queue.progress

        for _ in 0 ..< 10 {
            queue.progress.becomeCurrent(withPendingUnitCount: 1)
            queue.addOperation(TestOperation())
            queue.progress.resignCurrent()
        }
    }
}
It is trivial to add the Progress of your own, custom, asynchronous, Operation subclasses to the operation queue’s Progress. Or, you might just create your own parent Progress and bypass the progress of the OperationQueue entirely. But either way, it is exceedingly simple and there is no point in throwing the baby (asynchronous custom Operation subclass) away with the bathwater.
If you want, you could simplify the calling point even further, e.g., define typealias for operations with Progress:
typealias ProgressOperation = Operation & ProgressReporting

extension OperationQueue {
    func addOperation(progressOperation: ProgressOperation, pendingUnitCount: Int64 = 1) {
        progress.addChild(progressOperation.progress, withPendingUnitCount: pendingUnitCount)
        addOperation(progressOperation)
    }
}

class TestOperation: AsynchronousOperation, ProgressReporting {
    let progress = Progress(totalUnitCount: 1)

    override func main() { ... }
}
And then when adding operations:
queue.progress.totalUnitCount = 10
progressView.observedProgress = queue.progress

for _ in 0 ..< 10 {
    queue.addOperation(progressOperation: TestOperation())
}
You are combining two different but related concepts: asynchrony and concurrency.
An OperationQueue always dispatches Operations onto a separate thread, so you do not need to explicitly make them asynchronous, and there is no need to override start(). You should ensure that your main() does not return until the operation is complete. This means blocking if you perform asynchronous tasks such as network operations.
It is possible to execute an Operation directly. In the case where you want concurrent execution of those operations you need to make them asynchronous. It is in this situation that you would override start().
If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the calling thread—you must write additional code to start the operation asynchronously. For example, you might spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start method starts the task and returns immediately and, in all likelihood, before the task is finished.
Most developers should never need to implement concurrent operation objects. If you always add your operations to an operation queue, you do not need to implement concurrent operations. When you submit a nonconcurrent operation to an operation queue, the queue itself creates a thread on which to run your operation. Thus, adding a nonconcurrent operation to an operation queue still results in the asynchronous execution of your operation object code. The ability to define concurrent operations is only necessary in cases where you need to execute the operation asynchronously without adding it to an operation queue.
In summary, make sure your operations are synchronous and do not override start() if you want to take advantage of progress.
Update
While the normal advice is not to try and make asynchronous tasks synchronous, in this case it is the only thing you can do if you want to take advantage of progress. The problem is that if you have an asynchronous operation, the queue cannot tell when it is actually complete. If the queue can't tell when an operation is complete then it can't update progress accurately for that operation.
You do need to consider the impact on the thread pool of doing this.
The alternative is not to use the inbuilt progress feature and create your own property that you update from your tasks.
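A minimal sketch of that alternative: keep a Progress object you own and bump it from each operation's completionBlock (the queue, the work inside the operations, and the unit counts below are all illustrative):
let queue = OperationQueue()
let workItems = ["a", "b", "c"]                             // placeholder work
let overallProgress = Progress(totalUnitCount: Int64(workItems.count))
// e.g. progressView.observedProgress = overallProgress

for item in workItems {
    let operation = BlockOperation {
        print("processing \(item)")                         // placeholder for the real work
    }
    operation.completionBlock = {
        // completionBlock fires once the operation is finished (this also works for
        // asynchronous Operation subclasses). Bump our own progress on the main
        // queue so we never race on completedUnitCount.
        DispatchQueue.main.async {
            overallProgress.completedUnitCount += 1
        }
    }
    queue.addOperation(operation)
}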

When would a queue consider a task is completed?

In the following code, when would queueT (serial queue) consider “task A” is completed?
The moment when aNetworkRequest switched to another thread?
Or in the doneInAnotherQueue block? ( commented // 1)
In another word, when would “task B” be executed?
let queueT = DispatchQueue(label: "com.test.a")
queueT.async { // task A
    aNetworkRequest.doneInAnotherQueue() { // completed in another thread possibly
        // 1
    }
}
queueT.async { // task B
    print("It's my turn")
}
It would be much better if you could explain the mechanism by which a queue considers a task completed.
Thanks in advance.
In short, the first example starts an asynchronous network request, so the async call “finishes” as soon as that network request is submitted (but does not wait for that network request to finish).
I am assuming that the real question is that you want to know when the network request is done. Bottom line, GCD is not well suited for managing dependencies between tasks that are, themselves, asynchronous requests. Dispatching the initiation of a network request to a serial queue is undoubtedly not going to achieve what you want. (And before someone suggests using semaphores or dispatch groups to wait for the asynchronous request to finish, note that this can solve the tactical issue, but it is a pattern to be avoided because it is an inefficient use of resources and, in edge cases, can introduce deadlocks.)
One pattern is to use completion handlers:
func performRequestA(completion: @escaping () -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { object in
        ...
        completion()
    }
}
Now, in practice, we would generally use the completion handler with a parameter, perhaps even a Result type:
func performRequestA(completion: @escaping (Result<Foo, Error>) -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { result in
        guard ... else {
            completion(.failure(error))
            return
        }
        let foo = ...
        completion(.success(foo))
    }
}
Then you can use the completion handler pattern, to process the results, update models, and perhaps initiate subsequent requests that are dependent upon the results of this request. For example:
performRequestA { result in
    switch result {
    case .failure(let error):
        print(error)
    case .success(let foo):
        // update models or initiate next step in the process here
    }
}
If you are really asking how to manage dependencies between asynchronous tasks, there are a number of other, elegant patterns (e.g., Combine, custom asynchronous Operation subclass, the forthcoming async/await pattern contemplated in SE-0296 and SE-0303, etc.). All of these are elegant solutions for managing dependencies between asynchronous tasks, controlling the degree of concurrency, etc.
We probably would need to better understand the nature of your broader needs before we made any specific recommendations. You have asked the question about a single dispatch, but the question probably is best viewed from a broader context of what you are trying to achieve. For example, I'm assuming you are asking because you have multiple asynchronous requests to initiate: Do you really need to make sure that they happen sequentially and lose all the performance benefits of concurrency? Or can you allow them to run concurrently and you just need to know when all of the concurrent requests are done and how to get the results in the correct order? And might you have so many concurrent requests that you might need to constrain the degree of concurrency?
The answers to those questions will probably influence our recommendation of how to best manage your multiple asynchronous requests. But the answer is almost certainly is not a GCD queue.
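As one illustration of the concurrent case described above: if the requests may run concurrently and you only need to know when they have all finished, a DispatchGroup can provide that notification. This is a sketch only, reusing the performRequestA signature from above; requestCount is a placeholder:
let group = DispatchGroup()
let resultsQueue = DispatchQueue(label: "com.example.results") // serializes access to `results`
var results: [Result<Foo, Error>] = []

for _ in 0 ..< requestCount {                   // `requestCount` is a placeholder
    group.enter()
    performRequestA { result in
        resultsQueue.async {
            results.append(result)              // collected in completion order, not request order
            group.leave()
        }
    }
}

group.notify(queue: .main) {
    // Runs once every request has called its completion handler.
    print("all \(results.count) requests finished")
}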
You can do a simple check
let queueT = DispatchQueue(label: "com.test.a")
queueT.async { // task A
    DispatchQueue(label: "com.test2.a").async { // create another queue inside
        for i in 0..<6 {
            print(i)
        }
    }
}
queueT.async { // task B
    for i in 10..<20 {
        print(i)
    }
}
You'll get different output on each run. This means that yes, once the work is handed off to another thread, the task is considered done.
A GCD work item is complete when the closure you pass returns. So for your example, I'm going to rewrite it to make the function calls and parameters more explicit (rather than using trailing closure syntax).
queueT.async(execute: {
    // This is a function call that takes a closure parameter. When this
    // function returns, this closure will continue. Whether that is before or
    // after running closureParameter is an internal detail of doneInAnotherQueue.
    aNetworkRequest.doneInAnotherQueue(closureParameter: { ... })
    // At this point, the closure is complete. What doneInAnotherQueue() does with
    // its closure is its business.
})
Assuming that doneInAnotherQueue() executes its closure parameter "sometime in the future", then your task B will likely run before that closure runs (it may not; it's really a race at that point, but probably). If the doneInAnotherQueue() blocks on its closure before returning, then closureParameter will definitely run before task B.
There is absolutely no magic here. The system has no idea what doneInAnotherQueue does with its parameter. It may never run it. It may run it immediately. It may run it sometime in the future. The system just calls doneInAnotherQueue() and passes it a closure.
I rewrote async in normal "function with parameters" syntax to make it even more clear that async() is just a function, and it takes a closure parameter. It also isn't magic. It's not part of the language. It's just a normal function in the Dispatch framework. All it does is take its parameter, put it on a dispatch queue, and return. It doesn't execute anything. There are just closures that get put on queues, scheduled, and executed.
Swift is in the process of adding structured concurrency, which will add more language-level concurrency features that will allow you to express much more advanced things than the simple primitives provided by GCD.
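To give a flavor of that direction, sequencing two dependent requests could eventually read roughly like this with the async/await syntax proposed in SE-0296 (a sketch against the proposal, not shipping API at the time of this answer; performRequestA and performRequestB are hypothetical async versions of the tasks):
func performBothRequests() async throws {
    let a = try await performRequestA()          // suspends until request A finishes, without blocking a thread
    let b = try await performRequestB(using: a)  // starts only after A has completed
    print(a, b)
}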
Your task A returns straight away. Dispatching work to another queue is synchronous. Think of the block (the trailing closure) after 'doneInAnotherQueue' as just an argument to the doneInAnotherQueue function, no different to passing an Int or a String. You pass that block along and then you return immediately with the closing brace from task A.

How do I ensure my DispatchQueue executes some code on the main thread specifically?

I have a singleton that manages an array. This singleton can be accessed from multiple threads, so it has its own internal DispatchQueue to manage read/write access across threads. For simplicity we'll say it's a serial queue.
There comes a time where the singleton will be reading from the array and updating the UI. How do I handle this?
Which thread my internal dispatch queue uses is not known, right? It's just an implementation detail I'm not supposed to worry about? In most cases this seems fine, but in this one specific function I need to be sure it uses the main thread.
Is it okay to do something along the lines of:
myDispatchQueue.sync { // Synchronize with internal queue to ensure no writes/reads happen at the same time
    DispatchQueue.main.async { // Ensure that it's executed on the main thread
        for item in internalArray {
            // Pretend internalArray is an array of strings
            someLabel.text = item
        }
    }
}
So my questions are:
Is that okay? It seems weird/wrong to be nesting dispatch queues. Is there a better way? Maybe something like myDispatchQueue.sync(forceMainThread: true) { ... }?
If I DID NOT use DispatchQueue.main.async { ... }, and I called the function from the main thread, could I be sure that my internal dispatch queue will execute it on the same (main) thread as what called it? Or is that also an "implementation detail" where it could be, but it could also be called on a background thread?
Basically I'm confused that threads seem like an implementation detail you're not supposed to worry about with queues, but what happens on the odd chance when you DO need to worry?
Simple example code:
class LabelUpdater {
    static let shared = LabelUpdater()

    var strings: [String] = []
    private let dispatchQueue: DispatchQueue

    private init() {
        dispatchQueue = DispatchQueue(label: "com.sample.me.LabelUpdaterQueue")
    }

    func add(string: String) {
        dispatchQueue.sync {
            strings.append(string)
        }
    }

    // Assume for sake of example that `labels` is always the same length as `strings`
    func updateLabels(_ labels: [UILabel]) {
        // Execute in the queue so that no read/write can occur at the same time.
        dispatchQueue.sync {
            // How do I know this will be on the main thread? Can I ensure it?
            for (index, label) in labels.enumerated() {
                label.text = strings[index]
            }
        }
    }
}
Yes, you can nest a dispatch to one queue inside a dispatch to another queue. We frequently do so.
But be very careful. Just wrapping an asynchronous dispatch to the main queue with a dispatch from your synchronizing queue is insufficient. Your first example is not thread safe: the array that you are accessing from the main thread might be mutating on your synchronization queue.
This is a race condition, because you potentially have multiple threads (your synchronization queue's thread and the main thread) interacting with the same collection. Rather than having the block you dispatch to the main queue interact with the objects directly, you should make a copy of them, and that copy is what you reference inside the dispatch to the main queue.
For example, you might want to do the following:
func process(completion: @escaping (String) -> Void) {
    syncQueue.sync {
        let result = ... // note, this runs on the thread associated with `syncQueue` ...
        DispatchQueue.main.async {
            completion(result) // ... but this runs on the main thread
        }
    }
}
That ensures that the main queue is not interacting with any internal properties of this class, but rather just the result that was created in this closure passed to syncQueue.
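Applied narrowly to the updateLabels(_:) method from the question (and setting aside, for a moment, the advice below about not driving UI from the singleton), that copy-then-dispatch idea might look like this sketch:
func updateLabels(_ labels: [UILabel]) {
    dispatchQueue.sync {
        // Copy while we are synchronized with all writers ...
        let snapshot = strings
        // ... and hand only the copy to the main queue for the UI work.
        DispatchQueue.main.async {
            for (index, label) in labels.enumerated() where index < snapshot.count {
                label.text = snapshot[index]
            }
        }
    }
}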
Note, all of this is unrelated to it being a singleton. But since you brought up the topic, I’d advise against singletons for model data. It’s fine for sinks, stateless controllers, and the like, but not generally advised for model data.
I’d definitely discourage the practice of initiating UI controls updates directly from the singleton. I’d be inclined to provide these methods completion handler closures, and let the caller take care of the resulting UI updates. Sure, if you want to dispatch the closure to the main queue (as a convenience, common in many third party API), that’s fine. But the singleton shouldn’t be reaching in and update UI controls itself.
I’m assuming you did all of this just for illustrative purposes, but I added this word of caution to future readers who might not appreciate these concerns.
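For completeness, the caller side of that completion-handler approach might look roughly like this (a sketch; someLabel is borrowed from an earlier snippet, and hanging process(completion:) off the shared instance is purely illustrative):
LabelUpdater.shared.process { result in
    // `process` dispatched this closure to the main queue (see the sketch above),
    // so it is safe to touch UIKit here.
    someLabel.text = result
}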
Try using OperationQueue (Operations), as they do have states:
isReady: it’s prepared to start
isExecuting: the task is currently running
isFinished: the process has completed
isCancelled: the task was cancelled
Operation Queue benefits:
Determining execution order
Observing their states
Cancelling operations
Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The NSOperation API is more flexible in that respect, giving the developer control over the operation’s life cycle.
https://developer.apple.com/documentation/foundation/operationqueue
https://medium.com/#aliakhtar_16369/concurrency-in-swift-operations-and-operation-queue-part-3-a108fbe27d61

Swift - How to get an unknown number of asynchronous operations to execute one after the other (synchronously)?

I have an unknown number of asynchronous operations that need to be executed one after another because they are updating the same resource. At the end of all the executions, I want a single point of completion to be notified that they're all complete.
e.g. I have a basket that has an unknown number of eggs in it (numEggs). I need to call api.removeEgg(eggID:fromBasket:completion:) numEggs times - but I don't want the subsequent egg to be removed until the previous egg is completely removed, as they cannot modify the basket at the same time. When all of the eggs are removed, the client code should be notified once.
What is the best mechanism to achieve this, given that the number of tasks is unknown? I've attempted to use DispatchGroup, but it seems the asynchronous tasks are kicked off at the same time. OperationQueue would work the same way.
NOTE: This is not a duplicate of this question: Calling asynchronous tasks one after the other in Swift (iOS)
The difference is that I do not know the number of asynchronous tasks that need to be completed one-after-the-other. In the referenced post, the type of the tasks are known at compile time and can simply be chained - I don't know until runtime how many asynchronous tasks I'll need to execute.
You can add a dependency on a certain operation that has to happen first.
For example:
let operation = RandomOperation()
let laterOperation = RandomOperation()
laterOperation.addDependency(operation)
What it does is that laterOperation waits to start until the previous operation finishes.
Since it seems that you have an unknown number of operations to take care of, I recommend you make a custom "group operation" class. All the custom class does is hold a queue, which is an OperationQueue, and a variable holding the operations.
At the end, you add the operations to the queue with
queue.addOperations(_:waitUntilFinished:)
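A sketch of that idea for the egg example: build one operation per egg at runtime, chain each one to the previous one, and add a final operation that runs when everything is finished. EggRemovalOperation is a hypothetical asynchronous Operation subclass that wraps api.removeEgg(eggID:fromBasket:completion:) and only reports isFinished once the network call completes:
let queue = OperationQueue()
var operations: [Operation] = []
var previous: Operation?

for eggID in eggIDs {
    let op = EggRemovalOperation(eggID: eggID, basket: basket) // hypothetical wrapper operation
    if let previous = previous {
        op.addDependency(previous)      // each removal waits for the one before it
    }
    operations.append(op)
    previous = op
}

let done = BlockOperation {
    // Runs only after every removal operation has finished.
    print("All eggs removed")
}
operations.forEach { done.addDependency($0) }
operations.append(done)

queue.addOperations(operations, waitUntilFinished: false)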
Recursive function:
func removeAllEggs(eggIDs: [String], completion: @escaping () -> ()) {
    var ids = eggIDs
    let id = ids.last
    Alamofire.request(endpoint, method: .post, parameters: parametes, encoding: JSONEncoding.default, headers: headers).responseData { response in
        // check response etc...
        if ids.count > 1 {
            ids.removeLast()
            removeAllEggs(eggIDs: ids, completion: completion)
        } else {
            // you just sent the last item
            return completion()
        }
    }
}
and to use it:
func callItHere() {
    removeAllEggs(eggIDs: allEggs) {
        // Done! poof! all gone!
    }
}

Swift function does not return soon enough

I'm trying to move my app over to MVC. I have a Parse query which I've moved over to a function in my model class; the function returns a Bool.
When the button in my ViewController below is pressed the model function 'parseQuery' should be run, return a bool and then I need to use that bool to continue. At the moment, the if statement is executed before the function has completed so it always detects false.
How can I ensure that the if statement is completed once the function has completed?
@IBAction func showAllExpiredUsers(sender: AnyObject) {
    var success = searchResults.parseQuery()
    if success {
        print("true")
    } else {
        print("false")
    }

    //I have also tried:
    searchResults.parseQuery()
    if searchResults.parseQuery() {
        print("true")
    } else {
        print("false")
    }
}
You have a few options, but the issue is due to asynchronous calls.
Does Parse expose the same function with a completion block?
If yes, then you place the processing of the Bool inside the completion block, which is called when the async task is completed.
If not, which I doubt, you could create an NSOperationQueue with a maxConcurrentOperationCount of 1 (so it is serial) and dispatch the calls onto the queue with
func addOperationWithBlock(_ block: () -> Void)
This is called on the queue. You would need to store the success bool globally so that you can access it inside the second queued block operation, to check the success state.
Update:
I haven't used parse, but checking the documentation for findObjectsInBackgroundWithBlock (https://parse.com/docs/ios/guide#queries) it takes a completion block where you can process the result, update your bool.
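Sketching that out for the code in the question: give parseQuery a completion parameter, call it from inside Parse's block, and move the if statement into the closure at the call site (the PFQuery class name and where the results are stored are only illustrative):
// In the model class:
func parseQuery(completion: (Bool) -> Void) {
    let query = PFQuery(className: "ExpiredUsers")   // illustrative class name
    query.findObjectsInBackgroundWithBlock { (objects: [PFObject]?, error: NSError?) -> Void in
        if error == nil {
            // store `objects` on the model here, then report success
            completion(true)
        } else {
            completion(false)
        }
    }
}

// In the view controller:
@IBAction func showAllExpiredUsers(sender: AnyObject) {
    searchResults.parseQuery { success in
        if success {
            print("true")
        } else {
            print("false")
        }
    }
}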
I'm not sure what you are trying to do. You don't need to have the success state of the query. You can check
if error == nil {
    // do stuff
} else {
    // error occurred - print("error \(error!.localizedDescription)")
}
Check the example.
What you need to understand is threading. The async task provides a completion block because its asynchronous, it gets dispatched onto another thread for processing. I'm not sure how much you know about threading but there is something called a thread pool. This thread pool is accessed by Queues. The thread pool is managed by the OS, and makes sure available threads can be used by queues that need work done. As users interact with an application, this (and all UI work) is done on the main thread.
So whenever some processing is going to interfere with possible interaction or UI updates, it should be dispatched (Grand Central Dispatch) or queued (NSOperationQueue, built on top of GCD) off of the main thread.
Anyway, this is why the findObjectsInBackgroundWithBlock call is dispatched off the main thread, because otherwise it would block the main thread until its done, ruining the experience for the user. Also, if the main thread is blocked for more than 1 minute (last time I checked), the OS's watchdog will kill your process.
So yeah, assigning a boolean to the return of the function would capture the value returned before the completion block is done. The completion block is where you code some stuff to be done after the function completes. So the query gets dispatched onto another thread and starts processing, and the thread that sent this work off for processing continues with the rest of its execution. So checking the boolean directly after wouldn't work, because the other thread isn't complete. Even if the other thread finished in time, what is connecting the background thread with the main thread?
This is the beauty of blocks (function pointers), it's a lot cleaner and optimized and keeps code compact. The old way, which is still is use for some older frameworks, is delegates, which detaches the calling code with the callback and adds a delegate dependency. Blocks are beautiful.
It's also important to note that completion blocks don't always get called on the main thread. In many cases it's up to you to dispatch the work back to the main thread, to handle any UI work that needs to be done with the objects available inside the completion block.
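For example (Swift 2-era GCD), inside a completion block that arrives on a background thread, hop back to the main queue before touching UIKit; someBackgroundCompletion and statusLabel here are only placeholders:
someBackgroundCompletion { result in
    dispatch_async(dispatch_get_main_queue()) {
        // UI work must happen on the main thread.
        self.statusLabel.text = "Done: \(result)"
    }
}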
The query likely takes some time to run and should be run in a background thread with a callback function to handle the response WHEN it completes.
Look at the documentation.
Specifically looking at the query.findObjectsInBackgroundWithBlock code:
var query = PFQuery(className: "GameScore")
query.whereKey("playerName", equalTo: "Sean Plott")
query.findObjectsInBackgroundWithBlock {
    (objects: [PFObject]?, error: NSError?) -> Void in
    if error == nil {
        // The find succeeded.
        print("Successfully retrieved \(objects!.count) scores.")
        // Do something with the found objects
        if let objects = objects as? [PFObject] {
            for object in objects {
                print(object.objectId)
            }
        }
    } else {
        // Log details of the failure
        print("Error: \(error!) \(error!.userInfo!)")
    }
}
The above code will execute the query and run the code in the block when it gets the results from Parse. This is known as an asynchronous task, for more information check out this guide
