I have a table view that uses an NSFetchedResultsController to populate its data. When I refresh the table, I need to update all 50+ items, so I do the following: I make a call to the server, which returns JSON data; store the "media" objects in an array; loop through that array and store each object to Core Data individually (on a background thread); then reload the table. This works, but there is a major issue.
Sometimes the save step takes 7+ seconds because I loop through a large array and store each object to Core Data individually. While this step is executing, any other data I fetch from the server is delayed tremendously: I can't get new data back until the save process is complete. I'm confused, because the save is supposed to run on a background thread and not block other server calls.
Why does saving data to Core Data in the background delay my response times? Is there a better approach to storing large arrays in Core Data without disrupting other responses?
//Refreshing User Table method
class func refreshUserProfileTable(callback: (error: NSError?) -> Void) {
    //getProfile fetches data from server
    ProfileWSFacade.getProfile(RequestManager.userID()!) { (profile, isLastPage, error) -> () in
        DataBaseManager.sharedInstance.saveInBackground({ (backgroundContext) in
            let mediaList = profile?["media"] as? Array<JSONDictionary>
            if let mediaList = mediaList {
                //Response time is delayed while this loop is executing
                for media in mediaList {
                    DataBaseManager.sharedInstance.storeObjectOfClass(Media.self, dict: media, context: backgroundContext)
                }
            }
        }, completion: {
            callback(error: error)
        })
    }
}
//MARK: - Core Data methods

//Save-in-background method in the database manager
func saveInBackground(
    block: (backgroundContext: NSManagedObjectContext) -> Void,
    completion: (Void -> Void)? = nil)
{
    let mainThreadCompletion = {
        if let completion = completion {
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                completion()
            })
        }
    }
    backgroundContext.performBlock { () -> Void in
        guard RequestManager.userID() != nil else {
            mainThreadCompletion()
            return
        }
        block(backgroundContext: self.backgroundContext)
        if RequestManager.userID() != nil {
            _ = try? self.backgroundContext.save()
            DataBaseManager.sharedInstance.save()
        }
        mainThreadCompletion()
    }
}
//Stores a class object
func storeObjectOfClass<T: NSManagedObject where T: Mappable>(
    entityClass: T.Type,
    dict: JSONDictionary,
    context: NSManagedObjectContext? = nil) -> T
{
    let context = context ?? mainManagedObjectContext
    let predicate = NSPredicate(format: "%K LIKE %@", entityClass.primaryKey(), entityClass.primaryKeyFromDict(dict))
    let requestedObject = DataBaseManager.createOrUpdateFirstEntity(
        entityType: T.self,
        predicate: predicate,
        context: context) { (entity) -> () in
            entity.populateFromDictionary(dict)
    }
    return requestedObject
}
//Creates or updates a Core Data entity
class func createOrUpdateFirstEntity<T: NSManagedObject>(
    entityType entityType: T.Type,
    predicate: NSPredicate,
    context: NSManagedObjectContext,
    entityUpdateBlock: (entity: T) -> ()) -> T
{
    guard DataBaseManager.sharedInstance.doPersistentStoreAvailible() else { return T() }
    let desc = NSEntityDescription.entityForName(String(entityType), inManagedObjectContext: context)!
    let existingEntityRequest = NSFetchRequest()
    existingEntityRequest.entity = desc
    existingEntityRequest.predicate = predicate
    let requestedObject = try? context.executeFetchRequest(existingEntityRequest).first
    if let requestedObject = requestedObject as? T {
        entityUpdateBlock(entity: requestedObject)
        return requestedObject
    } else {
        let newObject = T(entity: desc, insertIntoManagedObjectContext: context)
        entityUpdateBlock(entity: newObject)
        return newObject
    }
}
I found out that .performBlock follows FIFO order: blocks are executed in the order in which they were added to the context's internal queue (SO link). Because of that, the next REST call's handler had to wait until the first block had finished saving and run its callback. The actual response time wasn't slow; it only looked slow because the save was queued ahead of it, FIFO.
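To make that concrete, here is a minimal sketch (hypothetical blocks, not from the app): both closures target the same private-queue context, so the second cannot start until the first has finished, even if the first takes several seconds.
// Both blocks are enqueued on the same serial queue owned by backgroundContext,
// so they run one after another, in the order they were submitted.
backgroundContext.performBlock {
    // block #1: the long-running save for the profile refresh
}
backgroundContext.performBlock {
    // block #2: the handler for the next server response, queued behind block #1
}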
The solution was to use a separate NSManagedObjectContext for profile loading, rather than the one shared by all background calls.
let profileContext: NSManagedObjectContext

//Instead of calling saveInBackground, we call saveInProfileContext, which won't block other REST calls.
func saveInProfileContext(
    block: (profileContext: NSManagedObjectContext) -> Void,
    completion: (Void -> Void)? = nil)
{
    let mainThreadCompletion = {
        if let completion = completion {
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                completion()
            })
        }
    }
    profileContext.performBlock { () -> Void in
        guard RequestManager.userID() != nil else {
            mainThreadCompletion()
            return
        }
        block(profileContext: self.profileContext)
        if RequestManager.userID() != nil {
            _ = try? self.profileContext.save()
            DataBaseManager.sharedInstance.save()
        }
        mainThreadCompletion()
    }
}
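For reference, profileContext can be any private-queue context that is separate from backgroundContext; a minimal sketch, assuming the manager owns a persistentStoreCoordinator property (Swift 2 era API):
// Sketch only: the profile context gets its own private queue, so its
// performBlock queue is independent of backgroundContext's queue.
// `persistentStoreCoordinator` is an assumed property of the manager.
lazy var profileContext: NSManagedObjectContext = {
    let context = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
    context.persistentStoreCoordinator = self.persistentStoreCoordinator
    return context
}()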
Here is my code. I have extra entities with nil attributes in Core Data: when I delete the app and run it for the first time, I get one saved object with nil attributes fetched from Core Data.
class RepositoryEntity {

    private var context = (UIApplication.shared.delegate as! AppDelegate).persistentContainer.viewContext

    func fetchRepositories() -> [RepositoryEntity] {
        do {
            return try context.fetch(RepositoryEntity.fetchRequest())
        } catch(let error) {
            print("error: ", error.localizedDescription)
        }
        return []
    }

    func saveObject(repo: Repository, onSuccess: () -> Void, onFailure: (_ error: String) -> Void) {
        let repoEntity = RepositoryEntity(context: self.context)
        repoEntity.fullName = repo.fullName
        repoEntity.dateCreated = repo.dateCreated
        repoEntity.url = repo.url
        repoEntity.language = repo.language
        repoEntity.repoDescription = repo.repoDescription
        repoEntity.id = repo.id

        let ownerEntity = OwnerEntity(context: self.context)
        ownerEntity.ownerName = repo.owner.ownerName
        ownerEntity.avatarUrl = repo.owner.avatarUrl
        repoEntity.addToOwner(ownerEntity)

        // Save the data
        do {
            try context.save()
            onSuccess()
        } catch(let error) {
            onFailure("Something happened. Try again later.")
            print(error.localizedDescription)
        }
    }

    func deleteRepository(repo: Repository, onSuccess: () -> Void, onFailure: (_ error: String) -> Void) {
        let repositories = fetchRepositories()
        guard let deletableRepo = repositories.first(where: { $0.id == repo.id }) else { return }
        self.context.delete(deletableRepo)
        do {
            try context.save()
            onSuccess()
        } catch(let error) {
            onFailure("Something happened. Try again later.")
            print(error.localizedDescription)
        }
    }
}
"When I delete the app and run it for the first time, I get one saved object with nil attributes fetched from Core Data."
When you write "I get one saved object...": what object? A RepositoryEntity? And how do you know you have a saved object: by calling fetchRepositories()? I can only assume that's the case (as opposed to having an empty OwnerEntity).
In that case, the problem is that, to call fetchRepositories(), you need to create an instance of RepositoryEntity. So, even when you start with zero objects, as soon as you call fetchRepositories() you already have at least one.
Change:
func fetchRepositories() -> [RepositoryEntity]
to:
static func fetchRepositories() -> [RepositoryEntity]
and call it on the type:
RepositoryEntity.fetchRepositories()
Do the same for deleteRepository.
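For reference, here is a minimal sketch of the two methods reworked as statics; the private var context property has to be dropped, so the context is looked up inside each method (the lookup mirrors the one in the question, the rest is illustrative).
// Sketch only: static helpers, so calling them never creates (and therefore
// never inserts) a RepositoryEntity instance.
static func fetchRepositories() -> [RepositoryEntity] {
    let context = (UIApplication.shared.delegate as! AppDelegate).persistentContainer.viewContext
    do {
        return try context.fetch(RepositoryEntity.fetchRequest())
    } catch(let error) {
        print("error: ", error.localizedDescription)
    }
    return []
}

static func deleteRepository(repo: Repository, onSuccess: () -> Void, onFailure: (_ error: String) -> Void) {
    let context = (UIApplication.shared.delegate as! AppDelegate).persistentContainer.viewContext
    guard let deletableRepo = fetchRepositories().first(where: { $0.id == repo.id }) else { return }
    context.delete(deletableRepo)
    do {
        try context.save()
        onSuccess()
    } catch(let error) {
        onFailure("Something happened. Try again later.")
        print(error.localizedDescription)
    }
}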
I have an issue with using DispatchGroup (as was recommended here) together with a Firestore snapshot listener.
In my example I have two functions. The first is called by the view controller and should return an array of objects to display in the view.
The second gets a child object from Firestore for each member of that array. Both of them must execute asynchronously, and the second one is called in a loop.
So I used a DispatchGroup to wait until all executions of the second function have completed before triggering the UI update. Here is my code (see the commented section):
/// Async function that returns all tables with active sessions (if any)
class func getTablesWithActiveSessionsAsync(completion: @escaping ([Table], Error?) -> Void) {
    let tablesCollection = userData
        .collection("Tables")
        .order(by: "name", descending: false)
    tablesCollection.addSnapshotListener { (snapshot, error) in
        var tables = [Table]()
        if let error = error {
            completion(tables, error)
        }
        if let snapshot = snapshot {
            for document in snapshot.documents {
                let data = document.data()
                let firebaseID = document.documentID
                let tableName = data["name"] as! String
                let tableCapacity = data["capacity"] as! Int16
                let table = Table(firebaseID: firebaseID, tableName: tableName, tableCapacity: tableCapacity)
                tables.append(table)
            }
        }
        // Get active sessions for each table.
        // Run completion only when the last one is processed.
        let dispatchGroup = DispatchGroup()
        for table in tables {
            dispatchGroup.enter()
            DBQuery.getActiveTableSessionAsync(forTable: table, completion: { (tableSession, error) in
                if let error = error {
                    completion([], error)
                    return
                }
                table.tableSession = tableSession
                dispatchGroup.leave()
            })
        }
        dispatchGroup.notify(queue: DispatchQueue.main) {
            completion(tables, nil)
        }
    }
}
/// Async function that returns the table session for a table, or nil if no active session is open
class func getActiveTableSessionAsync(forTable table: Table, completion: @escaping (TableSession?, Error?) -> Void) {
    let tableSessionCollection = userData
        .collection("Tables")
        .document(table.firebaseID!)
        .collection("ActiveSessions")
    tableSessionCollection.addSnapshotListener { (snapshot, error) in
        if let error = error {
            completion(nil, error)
            return
        }
        if let snapshot = snapshot {
            guard snapshot.documents.count != 0 else { completion(nil, error); return }
            // some other code
        }
        completion(nil, nil)
    }
}
Everything works fine until the moment the snapshot changes, because the second function uses a snapshotListener. When the data changes, the following closure gets called:
DBQuery.getActiveTableSessionAsync(forTable: table, completion: { (tableSession, error) in
    if let error = error {
        completion([], error)
        return
    }
    table.tableSession = tableSession
    dispatchGroup.leave()
})
And it fails on the dispatchGroup.leave() step, because at that moment the group is already empty:
Thread 1: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
All the dispatchGroup.enter() and dispatchGroup.leave() calls have already happened by this point; this closure was invoked separately by the listener.
I tried to find a way to check whether the DispatchGroup is empty so that leave() isn't called, but I did not find any native solution.
The only similar solution I've found is in the following answer, but it looks too hacky and I'm not sure it would work properly.
Is there any way to check whether a DispatchGroup is empty? According to this answer there is no way to do it, but perhaps something has changed in the last two years.
Is there any other way to fix this issue and keep the snapshotListener in place?
For now I've implemented a workaround: a counter.
I don't feel it's the best solution, but at least it works for now.
// Get active sessions for each table.
// Run completion only when the last one is processed.
var counter = tables.count
for table in tables {
    DBQuery.getActiveTableSessionAsync(forTable: table, completion: { (tableSession, error) in
        if let error = error {
            completion([], error)
            return
        }
        table.tableSession = tableSession
        counter = counter - 1
        if (counter <= 0) {
            completion(tables, nil)
        }
    })
}
I have the following generic function that works: it correctly creates the objects, and I know they are saved into Core Data because if I do a fetch request right after, I get the object I just created. However, the object itself doesn't appear to be a valid Core Data object (it shows as an x-coredata fault). Is there any way around this so I don't have to do a fetch request right after decoding an object? Many thanks.
func decode<T: Decodable>(data: Data?, objectType: T.Type, save: Bool = true, completionHandler: @escaping (T) -> ())
{
    guard let d = data else { return }
    do
    {
        let privateContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        privateContext.parent = SingletonDelegate.shared.context
        let root = try JSONDecoder(context: privateContext).decode(objectType, from: d)
        if save
        {
            try privateContext.save()
            privateContext.parent?.performAndWait
            {
                do
                {
                    if let p = privateContext.parent
                    {
                        try p.save()
                    }
                }
                catch
                {
                    print(error)
                }
            }
        }
        DispatchQueue.main.async
        {
            completionHandler(root)
        }
    }
    catch
    {
        print(error)
    }
}

extension CodingUserInfoKey
{
    static let context = CodingUserInfoKey(rawValue: "context")!
}

extension JSONDecoder
{
    convenience init(context: NSManagedObjectContext)
    {
        self.init()
        self.userInfo[.context] = context
    }
}
A Core Data fault is a valid Core Data object; it just hasn't been retrieved from the backing store into memory yet.
To reduce memory use, Core Data only fetches the full object when you access one of its properties. This fetch is automatic and effectively transparent to your code.
This means you don't need to do anything special; you can just use the managed object.
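As a quick illustration, here is a minimal sketch (a hypothetical Person entity with a name attribute, and an assumed context; not from the question's code):
// The fetch may hand back a fault (a lightweight placeholder); reading any
// attribute makes Core Data load the row for you, transparently.
let request: NSFetchRequest<Person> = Person.fetchRequest()
if let person = (try? context.fetch(request))?.first {
    print(person.isFault)      // often true right after the fetch
    print(person.name ?? "")   // accessing a property fires the fault
    print(person.isFault)      // false: the object is now fully loaded
}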
I'm using Xcode 9 and Swift 4. I start a download of multiple JSON files while the application is in the foreground. The application then parses these files and saves them to Core Data. This works well when the application is in the foreground. However, if the application is in the background, the files still download correctly, but the data is not parsed and saved to Core Data; only when the user returns to the foreground do the parsing and saving continue.
I have Background Modes turned on: Background Fetch and Remote Notifications.
I have around 10 functions similar to the one below, which processes the JSON files concurrently:
func populateStuff(json: JSONDictionary) -> Void {
    let results = json["result"] as! JSONDictionary
    let stuffData = results["Stuff"] as! [JSONDictionary]
    let persistentContainer = getPersistentContainer()
    persistentContainer.performBackgroundTask { (context) in
        for stuff in stuffData {
            let newStuff = Stuff(context: context)
            newStuff.stuffCode = stuff["Code"] as? String
            newStuff.stuffDescription = stuff["Description"] as? String
            do {
                try context.save()
            } catch {
                fatalError("Failure to save context: \(error)")
            }
        }
    }
}
func getPersistentContainer() -> NSPersistentContainer {
    let persistentContainer = NSPersistentContainer(name: "MyProjectName")
    persistentContainer.loadPersistentStores { (_, error) in
        if let error = error {
            fatalError("Failed to load core data stack: \(error.localizedDescription)")
        }
    }
    persistentContainer.viewContext.automaticallyMergesChangesFromParent = true
    persistentContainer.viewContext.mergePolicy = NSMergePolicy.mergeByPropertyObjectTrump
    return persistentContainer
}
Can anyone advise me on why this might happen and how to overcome it?
TIA
Use the beginBackgroundTask(withName:expirationHandler:) method:
func populateStuff(json: JSONDictionary) -> Void {
    // Perform the task on a background queue.
    DispatchQueue.global().async {
        // Request the task assertion and save the ID.
        self.backgroundTaskID = UIApplication.shared.beginBackgroundTask(withName: "Finish Network Tasks") {
            // End the task if time expires.
            UIApplication.shared.endBackgroundTask(self.backgroundTaskID!)
            self.backgroundTaskID = UIBackgroundTaskInvalid
        }
        // Parse the JSON files
        let results = json["result"] as! JSONDictionary
        let stuffData = results["Stuff"] as! [JSONDictionary]
        let persistentContainer = getPersistentContainer()
        persistentContainer.performBackgroundTask { (context) in
            for stuff in stuffData {
                let newStuff = Stuff(context: context)
                newStuff.stuffCode = stuff["Code"] as? String
                newStuff.stuffDescription = stuff["Description"] as? String
                do {
                    try context.save()
                } catch {
                    fatalError("Failure to save context: \(error)")
                }
            }
            // End the task assertion once the save work has finished.
            // (performBackgroundTask runs asynchronously, so ending the task
            // outside this block could cut the work short.)
            UIApplication.shared.endBackgroundTask(self.backgroundTaskID!)
            self.backgroundTaskID = UIBackgroundTaskInvalid
        }
    }
}
Calling this method gives you extra time to perform important tasks. Notice the call to endBackgroundTask(_:) once the work is done; it lets the system know that you are finished. If you do not end your tasks in a timely manner, the system terminates your app.
I am using a Master-Detail Application. The master screen is a dashboard; selecting an item moves to the detail screen, where I trigger an Alamofire request in the background.
Below is the snippet
class APIManager: NSObject {

    class var sharedManager: APIManager {
        return _sharedManager
    }

    private var requests = [Request]()

    // Cancel any ongoing download
    func cancelRequests() {
        if requests.count > 0 {
            for request in requests {
                request.cancel()
            }
        }
    }

    func getData(completion: (dataSet: [Data]?, error: NSError?) -> Void) {
        let request = Alamofire.request(.GET, "http://request")
            .response { (request, response, data, error) in
                dispatch_async(dispatch_get_main_queue(), {
                    if (error == nil) {
                        if let response = data, data = (try? NSJSONSerialization.JSONObjectWithData(response, options: [])) as? [NSDictionary] {
                            var dataSet = [Data]()
                            for (_, dictionary) in data.enumerate() {
                                let lat = dictionary["Latitude"]
                                let lng = dictionary["Longitude"]
                                let shuttleID = dictionary["ID"] as! Int
                                let data = Data(lat: lat!, long: lng!, id: shuttleID)
                                dataSet.append(data)
                            }
                            completion(dataSet: dataSet, error: nil)
                        }
                    } else {
                        completion(dataSet: nil, error: error)
                    }
                })
        }
        requests.append(request)
    }
}
I have a singleton API manager class, and from the detail view controller I call the getData() function. Everything works fine.
But when I push and pop repeatedly, I see a rapid increase in memory, and after 10-15 attempts I get a memory warning. In the AppDelegate I handle the warning by showing an alert and adding an 8-second delay timer, but after 20-25 attempts the app still crashes because of it.
In viewWillDisappear I also cancel any ongoing requests, but I haven't been able to stop the memory warnings. When I comment out the part where I make the request, I see no issues and memory consumption stays low.
I welcome ideas.
The problem is that you never remove the requests you append to the member variable requests.
You need to remove each request when you either cancel it or when it completes successfully.
Make the following modifications:
func cancelRequests() {
    if requests.count > 0 {
        for request in requests {
            request.cancel()
        }
    }
    requests.removeAll() // Delete all cancelled requests
}
also
func getData(completion: (dataSet: [Data]?, error: NSError?) -> Void) {
    let request = Alamofire.request(.GET, "http://request")
        .response { (request, response, data, error) in
            dispatch_async(dispatch_get_main_queue(), {
                if (error == nil) {
                    if let response = data, data = (try? NSJSONSerialization.JSONObjectWithData(response, options: [])) as? [NSDictionary] {
                        var dataSet = [Data]()
                        for (_, dictionary) in data.enumerate() {
                            let lat = dictionary["Latitude"]
                            let lng = dictionary["Longitude"]
                            let shuttleID = dictionary["ID"] as! Int
                            let data = Data(lat: lat!, long: lng!, id: shuttleID)
                            dataSet.append(data)
                        }
                        requests.removeObject(request)
                        completion(dataSet: dataSet, error: nil)
                    }
                } else {
                    requests.removeObject(request)
                    completion(dataSet: nil, error: error)
                }
            })
    }
    requests.append(request)
}
Add this handy Array extension for removing items to your code:
// Swift 2 Array extension
extension Array where Element: Equatable {

    mutating func removeObject(object: Element) {
        if let index = self.indexOf(object) {
            self.removeAtIndex(index)
        }
    }

    mutating func removeObjectsInArray(array: [Element]) {
        for object in array {
            self.removeObject(object)
        }
    }
}
On analysis, I found that the memory warning was not due to the Alamofire request; it was due to MKMapView. Loading an MKMapView and zooming in and out consumes a lot of memory. So I applied the following fix in viewWillDisappear.
override func viewWillDisappear(animated: Bool) {
    super.viewWillDisappear(animated)
    self.applyMapViewMemoryFix()
}

func applyMapViewMemoryFix() {
    switch (self.mapView.mapType) {
    case MKMapType.Hybrid:
        self.mapView.mapType = MKMapType.Standard
    case MKMapType.Standard:
        self.mapView.mapType = MKMapType.Hybrid
    default:
        break
    }
    self.mapView.showsUserLocation = false
    self.mapView.delegate = nil
    self.mapView.removeFromSuperview()
    self.mapView = nil
}
Courtesy - Stop iOS 7 MKMapView from leaking memory