I need to run some code only after several HTTP requests used to gather data have finished.
I've read a lot of documentation and found out that I should use GCD and dispatch groups:
1. Create a group with dispatch_group_create()
2. For each request:
   - Enter the group with dispatch_group_enter()
   - Run the request
   - When receiving a response, leave the group with dispatch_group_leave()
3. Wait with dispatch_group_wait()
4. Release the group with dispatch_release()
Still, I'm not sure whether this approach has any pitfalls, or whether there is a better way to wait for parallel requests to finish.
The code below seems to work well:
// Just send a request and call the whenFinished closure when done
func sendRequest(url: String, whenFinished: () -> Void) {
    let request = NSMutableURLRequest(URL: NSURL(string: url)!)
    let task = NSURLSession.sharedSession().dataTaskWithRequest(request, completionHandler: {
        (data, response, error) -> Void in
        whenFinished()
    })
    task.resume()
}
let urls = ["http://example.com?a",
"http://example.com?b",
"http://example.com?c",
"http://example.com?d",
"http://invalid.example.com"]
var fulfilledUrls: Array<String> = []
let group = dispatch_group_create();
for url in urls {
dispatch_group_enter(group)
sendRequest(url, {
() in
fulfilledUrls.append(url)
dispatch_group_leave(group)
})
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
for url in fulfilledUrls { println(url) }
Yup, this is the basic idea. Ideally, though, you would use dispatch_group_notify instead of dispatch_group_wait: dispatch_group_wait blocks the calling thread until the group completes, whereas dispatch_group_notify calls a block when the group completes without blocking the calling thread in the interim.
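For example, a minimal sketch of the notify-based variant, reusing the sendRequest helper and the urls array from the question; the final block runs on the main queue once every request has left the group:

let group = dispatch_group_create()
var fulfilledUrls: Array<String> = []

for url in urls {
    dispatch_group_enter(group)
    sendRequest(url) {
        fulfilledUrls.append(url)
        dispatch_group_leave(group)   // balance every enter with exactly one leave
    }
}

// Runs asynchronously once the group empties; the calling thread is never blocked.
dispatch_group_notify(group, dispatch_get_main_queue()) {
    for url in fulfilledUrls { println(url) }
}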
I call a method on the native iOS side from a Kotlin/Native framework. The method does its work asynchronously and responds with some data on a different thread than the one it was called from.
I want the response callback to be invoked on that original thread as well. Below is the code:
func makePostRestRequest(url: String, body: String) {
    let thread1 = Thread.current
    let request = NetworkServiceRequest(url: url,
                                        httpMethod: "POST",
                                        bodyJSON: body)
    NetworkService.processRequest(requestModel: request) { [weak self] (response: Any?, requestStatus: RequestStatus) in
        // This is a different thread than "thread1". The code below needs to execute on "thread1".
        if requestStatus.result == .fail {
            knResponseCallback.onError(errorResponse: requestStatus.error)
        } else {
            knResponseCallback.onSuccess(response: response)
        }
    }
}
I have tried to solve this in two ways.
One is semaphores: I blocked execution after the network call, and when the result came back in the network request's callback, I saved it in a variable and signaled the semaphore. After that I simply called knResponseCallback with the saved response variable.
The other is run loops: I grabbed RunLoop.current, started it in a mode, and in the network request's callback I called NSObject's perform(_:on:with:waitUntilDone:), which dispatches the work onto that thread.
The problem with both is that they are blocking calls: RunLoop.run and semaphore.wait both block the thread they are called on.
Is there a way to dispatch some work onto a particular thread from another thread without blocking that thread?
You need to create a queue to send the request and use that same queue to handle the response (a serial DispatchQueue guarantees serialized execution on that queue, even though GCD does not pin it to one fixed thread). Something like this should work for you:
let queue = DispatchQueue(label: "my-thread")

queue.async {
    let request = NetworkServiceRequest(url: url,
                                        httpMethod: "POST",
                                        bodyJSON: body)
    NetworkService.processRequest(requestModel: request) { [weak self] (response: Any?, requestStatus: RequestStatus) in
        queue.async {
            if requestStatus.result == .fail {
                knResponseCallback.onError(errorResponse: requestStatus.error)
            } else {
                knResponseCallback.onSuccess(response: response)
            }
        }
    }
}
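If you want to verify that the response handling really does land back on that queue, a small optional sketch (assuming iOS 10+/macOS 10.12+, where dispatchPrecondition is available):

let queue = DispatchQueue(label: "my-thread")

queue.async {
    // ... issue the request as above; then, inside the response block:
    queue.async {
        // Traps if this block is not running on `queue`.
        dispatchPrecondition(condition: .onQueue(queue))
        // hand the result to knResponseCallback here
    }
}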
I have a few asynchronous, network tasks that I need to perform on my app. Let's say I have 3 resources that I need to fetch from a server, call them A, B, and C. Let's say I have to finish fetching resource A first before fetching either B or C. Sometimes, I'd want to fetch B first, other times C first.
Right now, I just have a long-chained closure like so:
func fetchA() {
    AFNetworking.get(completionHandler: {
        self.fetchB()
        self.fetchC()
    })
}
This works for now, but the obvious limitation is that I've hard-coded the order of execution into fetchA's completion handler. Now, say I want fetchC to run only after fetchB has finished in that completion handler; I'd have to go change my implementation of fetchB...
Essentially, I'd like to know if there's some magic way to do something like:
let orderedAsync = [fetchA, fetchB, fetchC]
orderedAsync.executeInOrder()
where fetchA, fetchB, and fetchC are all async functions, but fetchB won't execute until fetchA has finished and so on. Thanks!
You can use a serial DispatchQueue mixed with a DispatchGroup, which will ensure that only one execution block runs at a time.
let serialQueue = DispatchQueue(label: "serialQueue")
let group = DispatchGroup()

group.enter()
serialQueue.async {   // call this whenever you need to add a new work item to your queue
    fetchA {
        // in the completion handler call
        group.leave()
    }
}

serialQueue.async {
    group.wait()
    group.enter()
    fetchB {
        // in the completion handler call
        group.leave()
    }
}

serialQueue.async {
    group.wait()
    group.enter()
    fetchC {
        group.leave()
    }
}
Or, if you are allowed to use a 3rd-party library, use PromiseKit; it makes handling and especially chaining async methods far easier than anything GCD provides. See the official GitHub page for more info.
You can wrap an async method that takes a completion handler in a Promise and chain them together like this:
Promise.wrap { fetchA(completion: $0) }.then { valueA -> Promise<TypeOfValueB> in
    return Promise.wrap { fetchB(completion: $0) }
}.then { valueB in
    // use valueB here
}.catch { error in
    // handle error
}
Also, all errors are propagated through your promises.
You could use a combination of DispatchGroup and DispatchSemaphore to perform the asynchronous code blocks in sequence.
The DispatchGroup maintains the enter/leave balance so it can notify you when all the tasks are completed.
A DispatchSemaphore with value 1 makes sure only one block of work executes at a time.
Sample code, where fetchA, fetchB, and fetchC are functions that take a closure (completion handler):
// Create DispatchQueue
private let dispatchQueue = DispatchQueue(label: "taskQueue", qos: .background)
// value 1 indicates only one task will be performed at once
private let semaphore = DispatchSemaphore(value: 1)

func sync() -> Void {
    let group = DispatchGroup()

    group.enter()
    self.dispatchQueue.async {
        self.semaphore.wait()
        fetchA() { (modelResult) in
            // success or failure handler
            // semaphore signal to remove wait and execute next task
            self.semaphore.signal()
            group.leave()
        }
    }

    group.enter()
    self.dispatchQueue.async {
        self.semaphore.wait()
        fetchB() { (modelResult) in
            // success or failure handler
            // semaphore signal to remove wait and execute next task
            self.semaphore.signal()
            group.leave()
        }
    }

    group.enter()
    self.dispatchQueue.async {
        self.semaphore.wait()
        fetchC() { (modelResult) in
            // success or failure handler
            // semaphore signal to remove wait and execute next task
            self.semaphore.signal()
            group.leave()
        }
    }

    group.notify(queue: .main) {
        // Perform any task once all the intermediate tasks (fetchA(), fetchB(), fetchC()) are completed.
        // This block of code will be called once all the enter and leave statement counts are matched.
    }
}
}
Not sure why other answers are adding unnecessary code; what you are describing is already the default behavior of a serial queue:
let fetchA = { print("a starting"); sleep(1); print("a done") }
let fetchB = { print("b starting"); sleep(1); print("b done") }
let fetchC = { print("c starting"); sleep(1); print("c done") }

let orderedAsync = [fetchA, fetchB, fetchC]

let queue = DispatchQueue(label: "fetchQueue")
for task in orderedAsync {
    queue.async(execute: task)   // notice "async" here
}

print("all enqueued")
sleep(5)
"all enqueued" will print immediately, and each task will wait for the previous one to finish before it starts.
FYI, if you added attributes: .concurrent to your DispatchQueue initialization, then they wouldn't be guaranteed to execute in order. But even then you can use the .barrier flag when you want things to execute in order.
In other words, this would also fulfill your requirements:
let queue = DispatchQueue(label: "fetchQueue", attributes: .concurrent)
for task in orderedAsync{
queue.async(flags: .barrier, execute: task)
}
I'm trying to retrieve the XML from an RSS feed, get the links for each article, and then extract info from those articles. I'm using AEXML to get the XML, and ReadabilityKit for link extraction.
I'm successfully pulling the links from the XML, but the parser call on Readability never executes. I don't want this on the main thread, as it blocks all UI, but so far that's the only way I've made it work. Code is below (with the dispatch-to-the-main-queue call removed):
func retrieveXML() {
    let request = NSURLRequest(URL: NSURL(string: "<XML URL HERE>")!)
    let task = NSURLSession.sharedSession().dataTaskWithRequest(request) {
        (data, response, error) in
        if data == nil {
            print("\n\ndataTaskWithRequest error: \(error)")
            return
        }
        do {
            let xmlDoc = try AEXMLDocument(xmlData: data!)
            for child in xmlDoc.root.children {
                if let postURL = child["id"].value {
                    let url = NSURL(string: postURL)
                    let parser = Readability(url: url!)
                    let title = parser.title()
                    print("TITLE: \(title)")
                }
            }
        } catch {
            print(error)
        }
    }
    task.resume()
}
Thanks for reporting. The new version is available in CocoaPods and Carthage with a new async API. The sync API has been removed from the project.
Readability.parse(url: articleUrl, { data in
    let title = data?.title
    let description = data?.description
    let keywords = data?.keywords
    let imageUrl = data?.topImage
    let videoUrl = data?.topVideo
})
Thanks for your contribution! For more info please check README https://github.com/exyte/ReadabilityKit
The problem is that Readability is deadlocking. You're calling it from an NSURLSession completion block (which runs on the session's serial delegate queue by default), but Readability blocks that queue with a semaphore until its own network request completes. Readability is therefore blocking the very thread that is supposed to send the semaphore signal, so the signal never arrives.
You can fix this by asynchronously dispatching the code that instantiates Readability to a separate queue (e.g. a global queue):
dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)) {
    let url = NSURL(string: postURL)
    let parser = Readability(url: url!)
    let title = parser.title()
    print("TITLE: \(title)")
}
It looks like that API has been updated to run asynchronously, so if you get the latest version, this deadlocking issue is eliminated and the asynchronous dispatch above is no longer needed. You'll obviously have to adopt the completion-handler pattern of the updated API, as sketched below.
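For illustration, a minimal sketch of how the loop inside retrieveXML might look with the asynchronous API shown above; the exact signature of parse(url:_:), and whether it takes an NSURL/URL or a string, should be checked against the ReadabilityKit README:

for child in xmlDoc.root.children {
    if let postURL = child["id"].value {
        if let articleUrl = NSURL(string: postURL) {
            Readability.parse(url: articleUrl, { data in
                // The completion fires asynchronously, so the session's delegate
                // queue is never blocked and the deadlock cannot occur.
                print("TITLE: \(data?.title)")
            })
        }
    }
}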
I'm rather new at Swift and have been doing some research on how to answer this question myself, since I want to learn, but I am completely stumped.
I have a function which requests data from a server; after the data is received, a completion handler is executed which parses the data. Within that completion handler, another function is called which is itself passed a completion handler.
For some reason, the inner function call appears to be skipped, and it only finishes after the first completion handler has fully executed. This might make more sense with the code below:
func loadSites(forceDownload: Bool){
    self.inspectionSites = MyData.getLocallyStoredInspectionSites()

    if self.inspectionSites.count < 1 || forceDownload {
        self.http.requestSites({(sitesAcquired, jsonObject) -> Void in
            guard sitesAcquired else {
                SwiftOverlays.removeAllBlockingOverlays()
                MyAlertController.alert("Unable to acquire sites from server or locally")
                return
            }

            let result = jsonObject
            for (_,subJson):(String, JSON) in result!.dictionaryValue {
                let site = InspectionSite()
                site.name = subJson[self.currentIndex]["name"].string!
                site.city = subJson[self.currentIndex]["city"].string!
                site.address = subJson[self.currentIndex]["address"].string!
                site.state = subJson[self.currentIndex]["state"].string!
                site.zip = subJson[self.currentIndex]["zip"].stringValue
                site.siteId = subJson[self.currentIndex]["id"].string!

                objc_sync_enter(self) // SAW A STACKOVERFLOW POST WITH THIS, THOUGHT IT MIGHT HELP
                MyLocation.geoCodeSite(site, callback: {(coordinates) -> Void in
                    print("YO!!!! GEOCODING SITE!")
                    self.localLat = coordinates["lat"]!
                    self.localLon = coordinates["lon"]!
                })
                objc_sync_exit(self)

                for type in subJson[self.currentIndex]["inspection_types"] {
                    let newType = InspectionType()
                    newType.name = type.1["name"].string!
                    newType.id = type.1["id"].string!
                    site.inspectionTypes.append(newType)
                }

                site.lat = self.localLat
                print("HEYY!!!! ASSIGNING COORDS")
                site.lon = self.localLon

                let address = "\(site.address), \(site.city), \(site.state) \(site.zip)"
                site.title = site.name
                site.subtitle = address

                MyData.persistInspectionSite(site)
                self.currentIndex++
            }

            self.inspectionSites = MyData.getLocallyStoredInspectionSites()
            SwiftOverlays.removeAllBlockingOverlays()
            self.showSitesOnMap(self.proteanMap)
        })
    } else {
        SwiftOverlays.removeAllBlockingOverlays()
        self.showSitesOnMap(self.proteanMap)
    }
}
I added those print statements which print "YOOO" and "HEYYY" just so I could see what was being executed first, and "HEYY" always comes first. I just need to make sure that the geocoding always happens before the object is persisted. I saw a Stack Overflow post which mentioned objc_sync_enter(self) for synchronous operation, but I'm not even sure if it's what I need.
This is the function which geocodes the site (in case it helps):
class func geoCodeSite(site: InspectionSite, callback: ((coordinates: Dictionary<String, String>)->Void)?) {
    let geocoder = CLGeocoder()
    let address: String = "\(site.address), \(site.city), \(site.state) \(site.zip)"
    print(address)

    geocoder.geocodeAddressString(address, completionHandler: {(placemarks, error) -> Void in
        if error != nil {
            print("Error", error)
        }
        if let placemark = placemarks?.first {
            MyLocation.mLat = String(stringInterpolationSegment: placemark.location!.coordinate.latitude)
            MyLocation.mLon = String(stringInterpolationSegment: placemark.location!.coordinate.longitude)
            MyLocation.coordinates = ["lat": mLat, "lon": mLon]
            print(MyLocation.coordinates)
            callback?(coordinates: MyLocation.coordinates)
        }
    })
}
I think the behaviour you're seeing is expected. You have two levels of asynchronous methods:
- requestSites
- geoCodeSite
Since the geoCodeSite method is also asynchronous, its callback is executed well after the line:
MyData.persistInspectionSite(site)
So your problem is how to wait until all the InspectionSites have been geocoded before persisting them, right?
Dispatch groups can be used to detect when multiple asynchronous events have finished, see my answer here.
How to Implement Dispatch Groups
dispatch_groups are used to fire a callback when multiple async callbacks have finished. In your case, you need to wait for all geoCodeSite async callbacks to complete before persisting your site.
So create a dispatch group, fire off your geoCodeSite calls, and implement the group's notify callback, inside of which you can persist your geocoded sites.
var myGroup = dispatch_group_create()
dispatch_group_enter(myGroup)
...
fire off your geoCodeSite async callbacks
...
dispatch_group_notify(myGroup, dispatch_get_main_queue(), {
// all sites are now geocoded, we can now persist site
})
Don't forget to call dispatch_group_leave(myGroup) inside the closure of geoCodeSite! Otherwise the dispatch group will never know when your async calls finish.
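To make that concrete, here is a minimal sketch (same Swift 2 GCD API as above) of how the loop in loadSites could be restructured, with persistence moved into the notify block; the property and helper names are taken from the question:

let myGroup = dispatch_group_create()
var geocodedSites: [InspectionSite] = []

for (_, subJson): (String, JSON) in result!.dictionaryValue {
    let site = InspectionSite()
    // ... populate name, city, address, inspection types, title, subtitle, etc. as before ...

    dispatch_group_enter(myGroup)
    MyLocation.geoCodeSite(site, callback: { coordinates in
        site.lat = coordinates["lat"]!
        site.lon = coordinates["lon"]!
        geocodedSites.append(site)
        dispatch_group_leave(myGroup)   // balances the enter above
    })
}

dispatch_group_notify(myGroup, dispatch_get_main_queue()) {
    // Every geocode callback has fired, so persisting is now safe.
    for site in geocodedSites {
        MyData.persistInspectionSite(site)
    }
    SwiftOverlays.removeAllBlockingOverlays()
    self.showSitesOnMap(self.proteanMap)
}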
I am writing an application that depends on data from various sites/services, and it involves performing calculations based on data from these different sources to produce an end product.
I have written an example class with two functions below that gather data from the two sources. I have chosen to keep the functions separate, because sometimes we apply different authentication methods depending on the source, but in this example I have stripped them down to their simplest form. Both functions use Alamofire to fire off and handle the requests.
I then have an initialisation function which says: if we have successfully gathered data from both sources, load another nib file; otherwise wait up to four seconds, and if no response has been returned by then, load a server-error nib file.
I've tried to make this example as simple as possible. Essentially, this is the kind of logic I would like to follow. Unfortunately, it does not work in its current implementation.
import Foundation

class GrabData {

    var data_source_1: String?
    var data_source_2: String?

    init() {
        // get data from source 1
        get_data_1 { data_source_1 in
            println("\(data_source_1)")
        }

        // get data from source 2
        get_data_2 { data_source_2 in
            println("\(data_source_2)")
        }

        var timer = 0
        while timer < 5 {
            if (data_source_1 == nil) && (data_source_2 == nil) {
                // do nothing unless 4 seconds has elapsed
                if timer == 4 {
                    // load server error nib
                }
            } else {
                // load another nib, and start manipulating data
            }
            // sleep for 1 second
            sleep(1)
            timer = timer + 1
        }
    }

    func get_data_1(completionHandler: (String) -> ()) -> () {
        if let datasource1 = self.data_source_1 {
            completionHandler(datasource1)
        } else {
            var url = "http://somewebsite.com"
            Manager.sharedInstance.request(.GET, url).responseString {
                (request, response, returnedstring, error) in
                println("getting data from source 1")
                let datasource1 = returnedstring
                self.data_source_1 = datasource1
                completionHandler(datasource1!)
            }
        }
    }

    func get_data_2(completionHandler: (String) -> ()) -> () {
        if let datasource2 = self.data_source_2 {
            completionHandler(datasource2)
        } else {
            var url = "http://anotherwebsite.com"
            Manager.sharedInstance.request(.GET, url).responseString {
                (request, response, returnedstring, error) in
                println("getting data from source 2")
                let datasource2 = returnedstring
                self.data_source_2 = datasource2
                completionHandler(datasource2!)
            }
        }
    }
}
I know that I could nest the second closure inside the first within the init function; however, I don't think that would be best practice, and I am actually pulling from more than 2 sources, so the closures would end up nested n levels deep.
Any help in figuring out the best way to check whether multiple data sources gave a valid response, and to handle that appropriately, would be much appreciated.
Better than that looping process, which blocks the thread, you could use a dispatch group to keep track of when the requests are done. "Enter" the group before issuing each request, "leave" the group when the request finishes, and set up a "notify" block/closure that will be called when all of the group's tasks are done.
For example, in Swift 3:
let group = DispatchGroup()

group.enter()
retrieveDataFromURL(url1, parameters: firstParameters) {
    group.leave()
}

group.enter()
retrieveDataFromURL(url2, parameters: secondParameters) {
    group.leave()
}

group.notify(queue: .main) {
    print("both requests done")
}
Or, in Swift 2:
let group = dispatch_group_create()

dispatch_group_enter(group)
retrieveDataFromURL(url1, parameters: firstParameters) {
    dispatch_group_leave(group)
}

dispatch_group_enter(group)
retrieveDataFromURL(url2, parameters: secondParameters) {
    dispatch_group_leave(group)
}

dispatch_group_notify(group, dispatch_get_main_queue()) {
    print("both requests done")
}
The other approach is to wrap these requests in an asynchronous NSOperation subclass (making them cancelable, giving you control over constraining the degree of concurrency, etc.), but that's more complicated, so you might want to start with the dispatch groups shown above.
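For completeness, a minimal sketch of that operation-based approach in Swift 3. It assumes the same hypothetical retrieveDataFromURL helper used above (taking a URL, a parameters dictionary, and a zero-argument completion closure), and the DataRequestOperation name is made up for the example; a production version would also cancel the in-flight request and synchronize its state changes:

import Foundation

// An Operation that only finishes when the asynchronous request's completion fires.
class DataRequestOperation: Operation {
    private let url: URL
    private let parameters: [String: Any]

    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    init(url: URL, parameters: [String: Any]) {
        self.url = url
        self.parameters = parameters
    }

    override func start() {
        if isCancelled {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        retrieveDataFromURL(url, parameters: parameters) {
            self.finish()
        }
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}

// Usage: constrain concurrency and get a "both done" hook, much like group.notify above.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2

let op1 = DataRequestOperation(url: url1, parameters: firstParameters)
let op2 = DataRequestOperation(url: url2, parameters: secondParameters)

let bothDone = BlockOperation {
    print("both requests done")
}
bothDone.addDependency(op1)
bothDone.addDependency(op2)

OperationQueue.main.addOperation(bothDone)
queue.addOperations([op1, op2], waitUntilFinished: false)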