I'm optimising my first iOS app before it hits the store, and noting methods that take seemingly large amounts of time. I have a fairly simple master-detail app where entities from the Core Data SQLite store are shown in a UITableView; tapping one brings up a detail view where the user can mark it as a favorite (setting a BOOL flag in that object to YES). As soon as they hit the Favorite button, I call save: on the NSManagedObjectContext to ensure their changes are reflected immediately, and in case of an unscheduled terminate, etc.
This save operation is currently taking around 205ms when testing on my iPhone 4S. There are around 4,500 entries in the database, each with a few strings, and a few boolean values (wrapped in NSNumbers).
First question: should it take 200ms to make this change? I'm only setting one boolean value, then saving the context, but I've never used Core Data before so I don't know if this is about normal.
Second question: the code I'm using is below – am I doing something wrong in the code to make the method take this long to execute?
- (IBAction)makeFavorite:(id)sender
{
    [self.delegate detailViewControllerDidMakeFavorite];
    [_selectedLine setIsLiked:[NSNumber numberWithBool:YES]];
    [_selectedLine setIsDisliked:[NSNumber numberWithBool:NO]];
    NSError *error;
    if (![[[CDManager sharedManager] managedObjectContext] save:&error])
        NSLog(@"Saving changes failed: %@, %@", error, [error userInfo]);
}
Perhaps I'm worrying over nothing (I am still a relatively new programmer), but on a wider note, 200ms is enough for me to at least try to address this issue, right? :)
Consider UIManagedDocument. It automatically handles saving in a background context. I especially recommend it if you are on iOS 6. If you are not passing object IDs around, or merging with other contexts, then you should be able to use it fairly easily and reliably.
Your simple use case seems tailor made for it.
1) Should saving a single boolean change take 200 ms?
Yes, it might take this long. You are performing an I/O operation, and according to the documentation:
When Core Data saves a SQLite store, SQLite updates just part of the store file. Loss of that partial update would be catastrophic, so you may want to ensure that the file is written correctly before your application continues. Unfortunately, doing so means that in some situations saving even a small set of changes to an SQLite store can take considerably longer than saving to, say, an XML store.
2) am I doing something wrong in the code to make the method take this long to execute?
No. You are saving directly to the store (under the assumption that you have no parent context).
3) Is 200 ms enough for me to at least try to address this issue?
Yes. 200 ms is a noticeable time for a human, and it will be felt. You could try to perform the save in the background, but this is unsafe according to the documentation, or you could move the save to the end of the entire object-editing flow.
My advice would be to read up and see if you could make some compromises in your context architecture (your Core Data stack structure).
From my experience, saving in the background is not that bad.
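For reference, one common way to move the actual SQLite write off the main thread is a parent/child context setup: the UI edits a main-queue child, saving the child is a cheap in-memory push, and the private-queue parent performs the disk write on its own queue. A minimal sketch, assuming a `storeCoordinator` already exists (that name is mine, not from the question):

```objective-c
#import <CoreData/CoreData.h>

// Private-queue parent owns the persistent store; disk I/O happens on its queue.
NSManagedObjectContext *parent =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
parent.persistentStoreCoordinator = storeCoordinator; // assumed to exist

// Main-queue child is what the UI talks to.
NSManagedObjectContext *main =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
main.parentContext = parent;

// Saving the child only pushes changes up to the parent in memory, so it's fast.
NSError *error = nil;
if ([main save:&error]) {
    [parent performBlock:^{
        NSError *parentError = nil;
        [parent save:&parentError]; // the slow SQLite write, off the main thread
    }];
}
```

The trade-off is that the data isn't on disk until the parent's asynchronous save completes, so an abrupt termination in that window can lose the last change.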
Related
I am having a major problem with my application's speed when processing updates on a background thread. Instruments shows that almost all of this time is spent inside performBlockAndWait:, where I am fetching the objects which need updating.
My updates may come in by the hundreds depending on the amount of time offline, and the approach I am currently using is to process them individually; i.e. a fetch request to pull out the object, update, then save.
It sounds slow, and it is. The problem is that I don't want to load everything into memory at once, so I need to fetch the objects individually as I go; I also save as I go, to ensure that if there is an issue with a single update it won't mess up the rest.
Is there a better approach?
I hit similar slow performance when upserting a large collection of objects. In my case I'm willing to keep the full change set in memory and perform a single save so the large volume of fetch requests dominated my processing time.
I got a significant performance improvement from maintaining an in-memory cache mapping my resources' primary keys to NSManagedObjectIDs. That allowed me to use existingObjectWithID:error: rather than a fetch request for an individual object.
I suspect I might do even better by collecting the primary keys for all resources of a given entity description, issuing a single fetch request for all of them at once (batching those results as necessary), and then processing the changes to each resource.
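A sketch of that objectID cache, under the assumption that each resource has a string primary key (the `idCache` and `objectForKey` names are illustrative, not from the answer):

```objective-c
#import <CoreData/CoreData.h>

// Illustrative cache: maps each resource's primary key to its NSManagedObjectID.
NSMutableDictionary<NSString *, NSManagedObjectID *> *idCache =
    [NSMutableDictionary dictionary];

// Resolve an object without issuing a fetch request when its ID is cached.
NSManagedObject *objectForKey(NSString *primaryKey,
                              NSManagedObjectContext *context,
                              NSMutableDictionary *cache)
{
    NSManagedObjectID *objectID = cache[primaryKey];
    if (objectID) {
        NSError *error = nil;
        // Registered-object / row-cache lookup; avoids a SQL fetch in the common case.
        NSManagedObject *object = [context existingObjectWithID:objectID error:&error];
        if (object) return object;
    }
    // Fall back to a fetch request here, then store the resulting
    // objectID in the cache for next time.
    return nil;
}
```

Note that objectIDs should be permanent before caching them; obtain permanent IDs for newly inserted objects (or cache only after a save).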
You may benefit from using NSBatchUpdateRequest assuming you're targeting iOS 8+ only.
These guys have a great example of it, but the TL;DR is basically:
Example: Say we want to update all unread instances of MyObject to be marked as read:
NSBatchUpdateRequest *req = [[NSBatchUpdateRequest alloc] initWithEntityName:@"MyObject"];
req.predicate = [NSPredicate predicateWithFormat:@"read == %@", @(NO)];
req.propertiesToUpdate = @{
    @"read" : @(YES)
};
req.resultType = NSUpdatedObjectsCountResultType;
NSBatchUpdateResult *res = (NSBatchUpdateResult *)[context executeRequest:req error:nil];
NSLog(@"%@ objects updated", res.result);
Note the above example is taken from the aforementioned blog, I didn't write the snippet.
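One caveat worth adding: batch updates run directly against the persistent store and bypass the in-memory context, so any already-loaded objects will be stale afterwards. On iOS 9+ you can ask for the updated objectIDs and merge them back; a sketch (my addition, not from the linked blog):

```objective-c
// Ask for the IDs of the updated rows instead of just a count.
req.resultType = NSUpdatedObjectIDsResultType;

NSError *error = nil;
NSBatchUpdateResult *res = (NSBatchUpdateResult *)[context executeRequest:req error:&error];

// Fold the store-level changes back into the in-memory context (iOS 9+).
[NSManagedObjectContext mergeChangesFromRemoteContextSave:@{NSUpdatedObjectsKey : res.result}
                                             intoContexts:@[context]];
```

On iOS 8, where mergeChangesFromRemoteContextSave:intoContexts: isn't available, you'd instead iterate the returned objectIDs and call refreshObject:mergeChanges: on each registered object.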
I've been wrestling with temporary Core Data objects within my iOS app for a fair few months now. I use UIManagedDocument, which may or may not complicate things a little. The problem I have is that when views try to save URIs for objects during state encoding for restoration, I hit problems whenever newly created objects have objectIDs that are temporary.
Previously I'd tried to force save the UIManagedDocument with the following
NSError *saveError = nil;
BOOL bSuccess = [document.managedObjectContext save:&saveError];
[document updateChangeCount:UIDocumentChangeDone];
[document savePresentedItemChangesWithCompletionHandler:^(NSError *errorOrNil) {
    // ...
}];
I thought this was helping to fix the temporary objectIDs, and it was definitely forcing the save to the store/disk (which shouldn't be necessary when using the more automated UIManagedDocument), but I have since discovered that newly created objects on document.managedObjectContext were still left with temporary objectIDs even after this.
Last night I discovered that the following brute-force addition, done after the save has occurred in savePresentedItemChangesWithCompletionHandler:'s completion block, seemed to fix up the temporary objectIDs I was still seeing.
[document.managedObjectContext reset];
This presumably discards the entire context and forces everything to be refreshed with the new permanent IDs once the save has completed. I presume this requires at least some of the SQLite database to be reloaded from disk, so it isn't really an ideal solution.
Finally I discovered that there may be another solution, one that doesn't require brute-force saving of the UIManagedDocument: instead, do the following on any newly created NSManagedObject
NSError *obtainError = nil;
BOOL bObtainSuccess = [object.managedObjectContext obtainPermanentIDsForObjects:@[object] error:&obtainError];
This does seem to do what it says on the tin. If I test for objects being temporary even just a second or so later, they all come back permanent. However, if I try to test whether they're permanent immediately after calling obtainPermanentIDsForObjects:, as follows:
NSError *obtainError = nil;
BOOL bObtainSuccess = [object.managedObjectContext obtainPermanentIDsForObjects:@[object] error:&obtainError];
assert(![[object objectID] isTemporaryID]);
then the assert fires, i.e. the object still has a temporary ID, even though the obtainPermanentIDsForObjects: method returned YES and left obtainError as nil.
This is all done on the main thread, without any context performBlock:. Given the configuration of UIManagedDocument, though, I think this should be correct.
Has anyone got any thoughts on this? For now it seems to be OK as long as I don't check immediately, almost as if there's some threading to the operation, which makes me wonder whether it should be done on a different thread...
Thanks for your time
I'm building a tab bar application for iPhone, and I'm using Core Data with two UIManagedDocuments. In the first tab I write data to the database, and in the second I read it into a UITableView via an NSFetchedResultsController.
At the start of the application, if I write data first and then read the results, it works fine: results appear in the second tab immediately. However, if I read some data first and then write something to the database, the results appear in the second tab with a considerable delay (almost a minute). If there is a synchronization problem between the two NSManagedObjectContexts or the two UIManagedDocuments, why does it work in the first case? And is there any solution for this delay?
The way that you can ensure that your UIManagedDocument is up to date is to make sure you're saving your changes properly. Given the information you've shown above, I'm not really sure about how you're managing your documents or your managedObjectContexts. There are just too many factors that could be affecting this to be able to give you a 100% concrete answer.
So without knowing what your code looks like and without knowing how you're managing your context, the only thing I can do is give you what I use in my own projects. This may or may not help you, but it has helped me, more times than one, when it comes to handling Core Data via UIManagedDocument.
When it comes to Context:
I use a singleton to manage UIManagedDocument. I do this because I don't want to have to deal with what you're talking about above: having more than one managedObjectContext. When you start dealing with multiple contexts, you have the issue where the data will not be consistent unless you manage all of your contexts properly. If you save on one but don't update the other, then your data can become out of sync. You also have to make sure that each context is working on the proper thread; the Apple docs are a great resource for understanding the whys and hows of why this even matters.
The point, though, is that this is one of the bigger problems of working with UIManagedDocument, one that isn't as bad when you're working with pure Core Data and a SQLite persistent store. The main reason I've found is how UIManagedDocument actually saves to its UIDocument store: it is very unpredictable about when it wants to save. This makes knowing when your UIManagedDocument will actually persist your data a shot in the dark, and you end up having to do all kinds of stupid stuff just to make sure it is always readily available.
I hold the belief (one that many, maybe rightfully so, consider ignorant) that working with Core Data is hard, and that UIManagedDocument makes it easier than working without it. That said, I don't like it when working with something as simple as UIManagedDocument starts to get complicated, so I use the one thing that has always kept it simple: a singleton, shared instance of a single UIManagedDocument, so that I only ever have one managedObjectContext to work with.
When it comes to saving:
Whenever I make any significant change to a model (create, update, delete, edit), I always make sure to call [document updateChangeCount:UIDocumentChangeDone]. I do this because I do not use the undo manager (NSUndoManager) when working with UIManagedDocument, simply because I haven't needed it yet, and because I hate all the "workaround" garbage you have to do with it.
Working only on the Main Thread:
Whenever I do anything with my UIManagedDocument or Core Data, I always make sure it's on the main thread. I think I've already said it once, but I'll say it again: working in threads is helpful when you need it, but only when you actually understand threading in general. I like working in threads, but they come at a cost of complexity that makes me not want to use them with Core Data. That being the case, I tend to stay strictly on the main thread, as this keeps things simple and easy (for me).
Saving the Document
When I absolutely need to make sure that the UIManagedDocument is "saved" ( written to disk ), I have 2 methods that I wrote and use that are always readily available for me to call: saveDocument and forceSaveDocument.
The first one (saveDocument) merely checks the context for changes. If it has any, it then checks to see if we have any newly inserted objects. When insertedObjects are found, it obtains the permanent IDs for those items. You can think of this one as a good way to ensure that your Core Data model is up to date and that your managed context is in a safe state, so that when your document is actually saved, everything is saved in the state it needs to be in (your IDs are realized, your contexts are clean, and what you are about to save represents the state of your model once all work has been completed on it).
Its big brother, forceSaveDocument, actually calls saveDocument first, again to make sure that your model/context is saved and proper. If that returns successful (YES), it then does the real saving and writes the document to disk by means of saveToURL:forSaveOperation:completionHandler:.
Some Code (hopefully it helps):
Here are those 2 methods in case it helps:
-(BOOL)saveDocument {
    NSManagedObjectContext *context = self.document.managedObjectContext;
    if (!context.hasChanges) return YES;

    NSSet *inserts = [context insertedObjects];
    if ([inserts count] > 0) {
        NSError *error = nil;
        // Obtain permanent IDs for newly inserted objects before the document saves.
        if (![context obtainPermanentIDsForObjects:[inserts allObjects] error:&error]) {
            [self reportError:error];
            return NO;
        }
    }
    return YES;
}

-(void)forceSaveDocument {
    if ([self saveDocument]) {
        [self.document saveToURL:self.document.fileURL
                forSaveOperation:UIDocumentSaveForOverwriting
               completionHandler:self.onSaveBlock];
    }
}
General Rules/Guidelines
Overall, these are the guidelines I follow (and that have worked for me for about 3 years now) when working with UIManagedDocument and Core Data. I'm sure there are better ones out there from guys/gals much smarter than me, but these are what I use. The benefit I get out of them is that I have to worry less about managing my data, which gives me more freedom to work on everything else:
Use a singleton to manage my UIManagedDocument until the need for multiple threads becomes absolutely necessary, then migrate over to using multiple contexts (I've never needed to do this yet, but then again I try to keep things simple)
Always call updateChangeCount:UIDocumentChangeDone when I make any change to a model. It is very lightweight and has little impact. If anything, it will help ensure your document stays up to date and never gets too out of sync with your data.
Don't use undo manager unless you actually need it ( I have yet to need it )
Use saveDocument/forceSaveDocument sparingly, and only when absolutely necessary (deletes are a good reason to use it; or if you create a new item on one view controller and need it on the next one, but can't wait for Core Data and the document to sync up. It's kind of like kicking it in the ass and saying "I object to you saving whenever you want - save now lol..")
Final Thoughts
All of the above is my own belief and understanding. It comes from a lot of research, reading, and being a pain in the ass when it comes to wanting to do things right, all while keeping it simple. Anyone can write a complex solution, but I think the fundamental question is always: do you really need the complexity, or do you just need it to work so you can focus on more complex issues?
I'm sure the above is way more than you probably wanted, and may even add more questions than you have. If you need some links and resources, let me know and I'll try to throw a few together.
Either way, hope that helps.
I pretty much know why my iPad app is crashing, but I'm having trouble coming up with a scheme to get around the scenario. The app is a jigsaw puzzle app. I've got a working version that's more stable than the one in the app store, but I still have a pesky problem I can't quite lick.
The root of the problem is a clash between user activity and automated saves. The save essentially stores the state of the puzzle as property list. The property list contains, among other things, a compilation of all the palettes in the puzzle, and for each palette, the details of all the pieces on that palette. It works well, except that user activity could change these details. A palette is essentially a UIView containing puzzle pieces as subviews. A user can move pieces around on a palette or move them from palette to palette.
I have the save process working in two phases. The first phase is kicked off by a timer. At regular intervals, this phase checks to see if there is some user activity that warrants a save. It sets a property abortSave to NO and then triggers a nonrepeating timer to wait for another period of time before starting phase two.
In phase two, the save takes place as long as abortSave is NO.
Meanwhile, if the user performs any operation that affects the save, abortSave is set to YES. The idea is that the delay between phase 1 and phase 2 is longer than it takes to perform a user operation, so if abortSave is NO, then it should be safe to do a save.
This process has eliminated 95% or so of the crashes, but I'm still getting crashes.
Of course, for decent performance of the app, the user activity as well as the save operation take place in background threads.
The type of circumstance I am running into is usually a mutation during fast enumeration, or something like that. Essentially, some user action is making a change during the save process. If I copy the object being fast enumerated and then work on the copy, it doesn't help. Sometimes the error will happen on the copy statement. If the object is an array, I don't use fast enumeration but use a regular for loop to work through the array. That helps a bit.
I hope this question isn't too generic. I suppose I could post some code, but I'm not sure how helpful it really would be. And I don't want to needlessly clutter the question.
One thing that I have not done yet, would be to use a flag working the other way:
saveProcessActive set to YES right before the save happens and set to NO when it finishes. Then all the user actions would have to be stalled if saveProcessActive is YES. The problem with this scenario is that it would result in a delay of the user action, potentially visible to the user, but maybe any delay is insignificant. It would only need to be as long as the save takes until its next check of abortSave. The aborted save process would then turn saveProcessActive to NO when it acknowledged the abort request. Is there a better solution?
Making a copy of the current game state in memory should be a fast action. When you want to save, make that copy, and then hand it to your background queue to save it with dispatch_async(). Doing it this way gets rid of all the concurrency issues because each piece of data is only ever accessed on a single queue.
EDIT: Here is how I've typically addressed such issues (untested):
- (void)fireSave:(NSTimer *)timer {
    // Snapshot the model on the main thread, then hand the copy to the background queue.
    id thingToSave = [self.model copyOfThingToSave];
    dispatch_async(self.backgroundSavingSerialQueue, ^{
        [self writeToDisk:thingToSave];
    });
}
- (void)saveLater {
    [self.timer invalidate];
    self.timer = [NSTimer scheduledTimerWithTimeInterval:5
                                                  target:self
                                                selector:@selector(fireSave:)
                                                userInfo:nil
                                                 repeats:NO];
}
Now, anywhere you modify data, you call [self saveLater]. Everything here is on the main thread except for writeToDisk: (which is passed a copy of the data). Since writeToDisk: always runs on its own serial queue, it also avoids race conditions, even if you ask it to save faster than it can.
You will need to synchronize access to the data, both while saving and while altering it during normal play. As writing to a file will likely take longer than making a copy, to minimize lock time you should make a copy while you hold the lock, then release the lock and write the data to disk. There are a few ways to do this, but the easiest is an @synchronized block:
-(void) save
{
    NSDictionary *old = self.data;
    NSDictionary *new;
    @synchronized(old) {
        new = [old copy];
    }
    [self writeData:new];
}
And remember to synchronize changes too:
-(void) updateDataKey:(id)key toValue:(id)val
{
    NSMutableDictionary *old = self.data;
    @synchronized(old) {
        old[key] = val;
    }
}
data obviously doesn't need to be an NSMutableDictionary; it was just a convenient example.
Is there any way to add transactionality to NSUserDefaults? I would need something like the well known begin - commit - revert functions on database handlers, thus I could revert a modification on the user defaults in some cases. Of course other users of this user defaults would be blocked from writing during the transaction.
Note 1: the synchronize method of the above class does not do this, because:
according to the documentation, it is also called from time to time by the framework
there is no "revert"
Note 2: I saw dictionaryRepresentation and registerDefaults, with which I could implement my own transaction mechanism (holding a copy of the old defaults in memory, or even saved to a plist, during the transaction). But maybe there is a ready-made solution for this?
My use case:
I have a wizard-like flow of screens where the user can edit some settings on each screen. As of the current implementation these settings are stored immediately in the defaults as the user moves to the next screen of the wizard. Now this wizard can be interrupted by some other events (even the user can choose to exit/cancel the wizard at any screen) and in this case I would like to roll back the modifications.
One possible solution is to defer setting the values until the end of your wizard. This can easily be done using, for example, a proxy that records the messages sent to it and then replays them on the real NSUserDefaults. Recording the messages should be pretty simple:
- (void) forwardInvocation: (NSInvocation*) invocation
{
    [invocation retainArguments]; // keep object arguments alive until replay
    [invocations addObject:invocation];
}
Where invocations is a mutable array. Replaying the messages back is also simple:
- (void) replayOnTarget: (id) target
{
    for (NSInvocation *op in invocations)
        [op invokeWithTarget:target];
}
This way the wizard does not have to know anything about the transactions. It would get the recording proxy instead of the expected NSUserDefaults instance and send the messages as usual. After the calling code knows the wizard succeeded, it can replay the messages from the proxy on the shared user defaults. (I have added some sample code on GitHub.)
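For completeness, here is a minimal sketch of such a recording proxy (the class name and the target-class lookup are my assumptions, not the linked sample). A forwarding proxy also needs methodSignatureForSelector:, since the runtime asks for a signature before it can construct the NSInvocation:

```objective-c
#import <Foundation/Foundation.h>

// Hypothetical recording proxy: captures messages for later replay.
@interface RecordingProxy : NSProxy {
    NSMutableArray *invocations;
    Class targetClass; // used only to look up method signatures
}
- (instancetype)initWithTargetClass:(Class)cls;
- (void)replayOnTarget:(id)target;
@end

@implementation RecordingProxy

- (instancetype)initWithTargetClass:(Class)cls {
    // NSProxy has no -init, so we don't call super here.
    invocations = [NSMutableArray array];
    targetClass = cls;
    return self;
}

// The runtime needs a signature before it can build the NSInvocation.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
    return [targetClass instanceMethodSignatureForSelector:sel];
}

- (void)forwardInvocation:(NSInvocation *)invocation {
    [invocation retainArguments]; // keep arguments alive until replay
    [invocations addObject:invocation];
}

- (void)replayOnTarget:(id)target {
    for (NSInvocation *op in invocations)
        [op invokeWithTarget:target];
}

@end
```

Usage would be something like `id defaultsProxy = [[RecordingProxy alloc] initWithTargetClass:[NSUserDefaults class]]`, handed to the wizard in place of the real defaults, then `[proxy replayOnTarget:[NSUserDefaults standardUserDefaults]]` on success.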
Maybe this is overkill, but since the recording proxy is generic and can be used in other cases, maybe it's not bad. The same thing can also be done using blocks:
[transaction addObject:[^{
[defaults setObject:… forKey:…];
} copy]];
Where transaction is a mutable array, again. When the wizard succeeds, you would simply iterate over the array and execute the stored blocks.