Need help figuring out how to avoid crashes in iOS

I pretty much know why my iPad app is crashing, but I'm having trouble coming up with a scheme to get around the scenario. The app is a jigsaw puzzle app. I've got a working version that's more stable than the one in the app store, but I still have a pesky problem I can't quite lick.
The root of the problem is a clash between user activity and automated saves. The save essentially stores the state of the puzzle as a property list. The property list contains, among other things, a compilation of all the palettes in the puzzle and, for each palette, the details of all the pieces on that palette. It works well, except that user activity can change these details while the save is in progress. A palette is essentially a UIView containing puzzle pieces as subviews. A user can move pieces around on a palette or move them from palette to palette.
I have the save process working in two phases. The first phase is kicked off by a timer. At regular intervals, this phase checks to see if there is some user activity that warrants a save. It sets a property abortSave to NO and then triggers a nonrepeating timer to wait for another period of time before starting phase two.
In phase two, the save takes place as long as abortSave is NO.
Meanwhile, if the user performs any operation that affects the save, abortSave is set to YES. The idea is that the delay between phase 1 and phase 2 is longer than it takes to perform a user operation, so if abortSave is NO, then it should be safe to do a save.
This process has eliminated 95% or so of the crashes, but I'm still getting crashes.
Of course, for decent performance of the app, the user activity as well as the save operation take place in background threads.
The type of circumstance I am running into is usually a mutation during fast enumeration, or something like that. Essentially, some user action is making a change during the save process. If I copy the object being fast enumerated and then work on the copy, it doesn't help. Sometimes the error will happen on the copy statement. If the object is an array, I don't use fast enumeration but use a regular for loop to work through the array. That helps a bit.
I hope this question isn't too generic. I suppose I could post some code, but I'm not sure how helpful it really would be. And I don't want to needlessly clutter the question.
One thing that I have not done yet would be to use a flag working the other way:
saveProcessActive is set to YES right before the save happens and set to NO when it finishes. All user actions would then have to be stalled while saveProcessActive is YES. The problem with this scenario is that it would delay the user action, potentially visibly, but maybe any delay is insignificant. The delay would only need to last as long as it takes the save to reach its next check of abortSave. The aborted save process would then set saveProcessActive to NO when it acknowledged the abort request. Is there a better solution?

Making a copy of the current game state in memory should be a fast action. When you want to save, make that copy, and then hand it to your background queue to save it with dispatch_async(). Doing it this way gets rid of all the concurrency issues because each piece of data is only ever accessed on a single queue.
EDIT: Here is how I've typically addressed such issues (untested):
- (void)fireSave:(NSTimer *)timer {
    // Snapshot the model on the main thread...
    id thingToSave = [self.model copyOfThingToSave];
    // ...and hand only the copy to the background serial queue.
    dispatch_async(self.backgroundSavingSerialQueue, ^{
        [self writeToDisk:thingToSave];
    });
}
- (void)saveLater {
    // Coalesce rapid-fire changes: restart the timer on every call.
    [self.timer invalidate];
    self.timer = [NSTimer scheduledTimerWithTimeInterval:5
                                                  target:self
                                                selector:@selector(fireSave:)
                                                userInfo:nil
                                                 repeats:NO];
}
Now, anywhere you modify data, you call [self saveLater]. Everything here is on the main thread except for writeToDisk: (which is passed a copy of the data). Since writeToDisk: always runs on its own serial queue, it also avoids race conditions, even if you ask it to save faster than it can.
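For instance, a hypothetical mutation method on the model-owning controller could look like this (movePiece:toPalette: and the types are made up for illustration, not taken from the question):
- (void)movePiece:(PuzzlePiece *)piece toPalette:(Palette *)palette {
    // Mutate the model on the main thread only...
    [self.model movePiece:piece toPalette:palette];
    // ...then schedule a coalesced save a few seconds from now.
    [self saveLater];
}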

You will need to synchronize access to the data, both while saving and while altering it during normal play. As writing to a file will likely take longer than making a copy, to minimize lock time you should make a copy while you hold the lock, then release the lock and write the data to disk. There are a few ways to do this, but the easiest is an @synchronized block:
- (void)save
{
    NSMutableDictionary *data = self.data;
    NSDictionary *snapshot;
    @synchronized (data) {
        // Copy quickly while holding the lock...
        snapshot = [data copy];
    }
    // ...then do the slow disk write outside the lock.
    [self writeData:snapshot];
}
And remember to synchronize changes too:
- (void)updateDataKey:(id)key toValue:(id)val
{
    NSMutableDictionary *data = self.data;
    @synchronized (data) {
        data[key] = val;
    }
}
data obviously doesn't need to be an NSMutableDictionary; it was just a convenient example.

Related

How to optimize performance of Results change listeners in Realm (Swift) with a deep hierarchy?

We've been using Realm (Swift binding, currently version 3.12.0) since the earliest days of our project. In some early versions before 1.0, Realm provided change listeners for Results without actually providing changeSets.
We used this a lot in order to find out whether a specific Results list changed.
Later the Realm team replaced this API with changeSet-providing methods. We had to switch and are now misusing this API just to find out whether anything in a specific List changed (inserts, deletions, modifications).
Together with RxSwift we wrote our own implementation of Results change listening which looks like this:
public var observable: Observable<Base> {
    return Observable.create { observer in
        let token = self.base.observe { changes in
            if case .update = changes {
                observer.onNext(self.base)
            }
        }
        observer.onNext(self.base)
        return Disposables.create(with: {
            observer.onCompleted()
            token.invalidate()
        })
    }
}
When we now want to have consecutive updates on a list we subscribe like so:
someRealm.objects(SomeObject.self).filter(<some filter>).rx.observable
    .subscribe(<subscription code that gets called on every update>)
    // dispose code missing
We wrote the extension on RealmCollection so that we can subscribe to List type as well.
The concept is equal to RxRealm's approach.
So now in our App we have a lot of filtered lists/results that we are subscribing to.
As the data grows, we notice significant performance losses between writing something to the DB and seeing the change reflected visually.
For example:
Let's say we have a Car Realm Object class with some properties and some 1-to-n and some 1-to-1 relationships. One of the properties is a Bool, namely isDriving.
Now we have a lot of cars stored in the DB and a bunch of change listeners with different filters listening to changes of the cars collection (collection observers listening for changeSets in order to find out if the list was changed).
If I take one car from some list and set its isDriving property from false to true (important: we do writes in the background), ideally the change listener fires fast and I get a nearly immediate, correct response to my write on the main thread.
Added with edit on 2019-06-19:
Let's make the scenario still a little more real:
Let's change something further down the hierarchy, say the tire manufacturer's name. Let's say a Car has a List<Tire>, a Tire has a Manufacturer, and a Manufacturer has a name.
Now we're still listening to Results collection changes with some more or less complex filters applied.
Then we change the name of a Manufacturer which is connected to one of the tires, which is connected to one of the cars in that filtered list.
Can this still be fast?
Obviously, as the results/lists that change listeners are attached to get longer, Realm's internal change detection takes longer to calculate the differences and fires later.
So after a write we see the changes - in the worst case - much later.
In our case this is not acceptable. So we are thinking through different scenarios.
One scenario would be to stop using .observe on lists/results and switch to Realm.observe, which fires every time anything in the Realm changes. That is not ideal, but it is fast because the change calculation process is skipped.
My question is: What can I do to solve this whole dilemma and make our app fast again?
The crucial thing is the threading. We always write in the background by design, so the writes themselves should be very fast, but then those changes need to synchronize to the other threads where Realms are open.
In my understanding that happens after the change detection for all Results has run through; is that right?
So when I read on another thread, the data is only fresh after the thread sync, which happens after all notifications were sent out. But I'm not currently sure whether the sync happens before that - which would be better - as I haven't tested it yet.

Multiple UIManagedDocument For Read & Write

I'm building a tab bar application for iPhone and I'm using Core Data with two UIManagedDocuments. In the first tab, I write the data to the database, and in the second I read it into a UITableView with an NSFetchedResultsController.
At the start of the application, if I write data first and then read results, it works fine. Results appear in the second tab immediately. However, if I read some data first and then write something to the database, results appear in the second tab with considerable delay (almost 1 minute). If there is a synchronization problem between the two NSManagedObjectContexts or the two UIManagedDocuments, how does it work in the first case? And is there any solution for this delay?
The way that you can ensure that your UIManagedDocument is up to date is to make sure you're saving your changes properly. Given the information you've shown above, I'm not really sure about how you're managing your documents or your managedObjectContexts. There are just too many factors that could be affecting this to be able to give you a 100% concrete answer.
So without knowing what your code looks like and without knowing how you're managing your context, the only thing I can do is give you what I use in my own projects. This may or may not help you, but it has helped me - more times than once - when it comes to handling Core Data via UIManagedDocument.
When it comes to Context:
I use a singleton to manage UIManagedDocument. I do this because I don't want to have to deal with what you're talking about above - having more than one managedObjectContext. When you start dealing with multiple contexts, you have the issue where the data will not be consistent unless you manage all of your contexts properly. If you save on one but don't update the other, then your data can become out of sync. You also have to make sure that each context is working on the proper thread - the Apple docs are a great resource for understanding the whys and hows of why this even matters.
The point is, though, that managing multiple contexts is one of the biggest problems with working with UIManagedDocument, even if it isn't as bad as when you're working with pure Core Data and a SQL persistent store. The main reason I've found is how UIManagedDocument actually saves to its UIDocument store: it is very unpredictable about when it wants to save. This makes knowing when your UIManagedDocument will actually persist your data a freaking shot in the dark. You end up having to do all kinds of stupid stuff just to make sure that it is always readily available.
I hold the belief (one that many, maybe rightfully so, consider ignorant) that working with Core Data is hard, and that UIManagedDocument makes it easier than working without it at all. That being said, I don't like it when something as simple as UIManagedDocument begins to get complicated - so I use the one thing that has always kept it simple: a singleton, shared instance of a single UIManagedDocument, so that I only ever have one managedObjectContext to work with.
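A minimal sketch of that kind of singleton (the DocumentManager name, sharedManager, and the "Database" file name are mine, not from the question):
#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

@interface DocumentManager : NSObject
@property (nonatomic, strong, readonly) UIManagedDocument *document;
+ (instancetype)sharedManager;
@end

@implementation DocumentManager

+ (instancetype)sharedManager {
    static DocumentManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[DocumentManager alloc] init];
    });
    return shared;
}

- (instancetype)init {
    if ((self = [super init])) {
        // One document, one context, for the whole app.
        NSURL *docsURL = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory
                                                                 inDomains:NSUserDomainMask] lastObject];
        _document = [[UIManagedDocument alloc] initWithFileURL:
                         [docsURL URLByAppendingPathComponent:@"Database"]];
    }
    return self;
}

@end

// Everywhere else, the one and only context is
// [DocumentManager sharedManager].document.managedObjectContext
// (the document still has to be created/opened once before use).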
When it comes to saving:
Whenever I make any significant change to a model (create, update, delete, edit), I always make sure to call [document updateChangeCount:UIDocumentChangeDone]. I do this because I do not use the undo manager (NSUndoManager) when working with UIManagedDocument - simply because I haven't needed it yet, plus I hate all the "workaround" garbage you have to do with it.
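For example (illustrative only; the Photo entity and its title attribute are made up):
// Insert or edit a managed object as usual...
Photo *photo = [NSEntityDescription insertNewObjectForEntityForName:@"Photo"
                                             inManagedObjectContext:document.managedObjectContext];
photo.title = @"Sunset";
// ...then tell the document it has unsaved changes so it autosaves at a convenient time.
[document updateChangeCount:UIDocumentChangeDone];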
Working only on the Main Thread:
Whenever I do anything with my UIManagedDocument or Core Data, I always make sure it's on the main thread. I think I've already said it once, but I'll say it again: working with threads is helpful when you need it, but only if you actually understand threading in general. I like working with threads, but they come at a cost of complexity that makes me not want to use them with Core Data. That being the case, I tend to stay strictly on the main thread, as this keeps things simple and easy (for me).
Saving the Document
When I absolutely need to make sure that the UIManagedDocument is "saved" ( written to disk ), I have 2 methods that I wrote and use that are always readily available for me to call: saveDocument and forceSaveDocument.
The first one (saveDocument) merely checks the context for changes. If it has any, it then checks to see if there are any newly inserted objects. When insertedObjects are found, it obtains permanent IDs for those items. You can think of this one as a good way to ensure that your Core Data model is up to date and that your managed object context is in a safe state, so that when your document is actually saved, everything is saved in the state it needs to be in (your IDs are realized, your context is clean, and what you are about to save represents the state of your model once all work on it has been completed).
Its big brother, forceSaveDocument, actually calls saveDocument first - again, to make sure that your model/context is saved and proper. If that returns successful (YES), it then does the real saving and writes the document to disk by means of saveToURL:forSaveOperation:completionHandler:.
Some Code (hopefully it helps):
Here are those 2 methods in case it helps:
- (BOOL)saveDocument {
    NSManagedObjectContext *context = self.document.managedObjectContext;
    if (!context.hasChanges) return YES;

    NSSet *inserts = [context insertedObjects];
    if ([inserts count] > 0) {
        NSError *error = nil;
        if (![context obtainPermanentIDsForObjects:[inserts allObjects] error:&error] && error) {
            [self reportError:error];
            return NO;
        }
    }
    return YES;
}
- (void)forceSaveDocument {
    if ([self saveDocument]) {
        [self.document saveToURL:self.document.fileURL
                forSaveOperation:UIDocumentSaveForOverwriting
               completionHandler:self.onSaveBlock ? self.onSaveBlock : nil];
    }
}
General Rules/Guidelines
Overall, these are the guidelines I follow (and they have worked for me for about 3 years now) when working with UIManagedDocument and Core Data. I'm sure there are better ones out there from guys/gals much smarter than me, but these are what I use. The benefit I get out of them is that I have to worry less about managing my data, which gives me more freedom to work on everything else:
Use a singleton to manage my UIManagedDocument until multiple threads are absolutely necessary - then migrate over to using multiple contexts (I've never needed to do this yet - but then again I try to keep things simple)
Always call updateChangeCount:UIDocumentChangeDone when I make any change to a model. It is very lightweight and has little impact. If anything, it will help ensure your document stays up to date and never gets too out of sync with your data.
Don't use the undo manager unless you actually need it (I have yet to need it)
Use saveDocument/forceSaveDocument sparingly, and only when absolutely necessary (deletes are a good reason to use it. Or if you create a new item on one view controller and need it on the next one but can't wait for Core Data and the document to sync up - it's kind of like kicking it in the ass and saying "I object to you saving whenever you want - save now lol..")
Final Thoughts
All of the above is my own belief and understanding. It comes from a lot of research, reading, and being a pain in the ass when it comes to wanting to do things right, all while keeping them simple. Anyone can write a complex solution - but I think the fundamental question is always: do you really need the complexity, or do you just need it to work so you can focus on more complex issues?
I'm sure the above is way more than you probably wanted, and may even add more questions than you have. If you need some links and resources, let me know and I'll try to throw a few together.
Either way, hope that helps.

"Search As You Type" results taking too long

I'm implementing a "search as you type" search against a Core Data DB. It's working great with NSFetchedResultsController. But now I have gotten a feature request to order the results by distance from the user (it's a shop list).
Say the user types "e" into the search; there are about 7,000 results, the iOS device takes 2-3 seconds to order them by distance, and in the meantime the UI is stuck.
I thought about sending the sort request to a different thread, but then what will I show the user? Also, what happens if I send a request and then the user types another letter? If they type and delete a couple of times, I will have many requests on many threads taking up computing power.
Any ideas with solving this problem?
First of all, analyse which part of your code is the most problematic via Instruments -> Time Profiler. Maybe you have a problem inside the code that can be resolved by rewriting.
If that doesn't help, I suggest the following:
1) Easiest: remove the "search while typing" feature :)
2) Add a delay of about 1 second between typing and starting the search - while the user is typing you don't search, and once they stop typing, you do (see the sketch after this list).
3) Change the DB model - for example, add regions for shops, and if the user is in a region, search only for shops in that region. Just look more closely at your DB model and think about how you can improve it.
4) Search in the background and show an activity indicator while the search runs.
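A minimal sketch of the delayed-search idea from point 2, using performSelector:withObject:afterDelay: (performSearch is an illustrative method name, not from the question):
- (void)searchBar:(UISearchBar *)searchBar textDidChange:(NSString *)searchText {
    // Cancel any search scheduled by a previous keystroke...
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:@selector(performSearch)
                                               object:nil];
    // ...and schedule a new one that only runs if the user pauses for a second.
    [self performSelector:@selector(performSearch) withObject:nil afterDelay:1.0];
}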
There are a few things you can do to optimize the user experience here.
One option is to present the user with a loading indicator (e.g. https://github.com/jdg/MBProgressHUD), and create an NSOperation for the sort, which you can process in the background.
An NSOperationQueue here would have the benefit of being able to cancel the task, for example if the user continues to type extra letters in.
For example:
// Interface
@property (nonatomic, strong) NSOperationQueue *sortQueue;

// Implementation
self.sortQueue = [[NSOperationQueue alloc] init];
[self.sortQueue addOperationWithBlock:^{
    // Sort results here (background thread)
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI here (main thread)
    });
}];
If the user entered more text, you can cancel the pending sort by doing:
[self.sortQueue cancelAllOperations];
And queueing a new sort.

NSManagedObjectContext's Save method performance

I'm optimising my first iOS app before it hits the store, and noting methods which seem to take large amounts of time. I have a fairly simple master-detail app where entities from the Core Data SQLite store are shown in a UITableView; tapping one brings up a detail view where the user can mark it as a favorite (setting a BOOL flag in that object to YES). As soon as they hit the Favorite button, I call the NSManagedObjectContext's save: to ensure their changes are reflected immediately, and in case of an unscheduled termination, etc.
This save operation is currently taking around 205ms when testing on my iPhone 4S. There are around 4,500 entries in the database, each with a few strings, and a few boolean values (wrapped in NSNumbers).
First question: should it take 200ms to make this change? I'm only setting one boolean value, then saving the context, but I've never used Core Data before so I don't know if this is about normal.
Second question: the code I'm using is below – am I doing something wrong in the code to make the method take this long to execute?
- (IBAction)makeFavorite:(id)sender
{
    [self.delegate detailViewControllerDidMakeFavorite];
    [_selectedLine setIsLiked:[NSNumber numberWithBool:YES]];
    [_selectedLine setIsDisliked:[NSNumber numberWithBool:NO]];
    NSError *error = nil;
    if (![[[CDManager sharedManager] managedObjectContext] save:&error])
        NSLog(@"Saving changes failed: %@, %@", error, [error userInfo]);
}
Perhaps I'm worrying over nothing (I am still a relatively new programmer), but on a wider note, 200ms is enough for me to at least try to address this issue, right? :)
Consider UIManagedDocument. It automatically handles saving in a background context. I especially recommend it if you are on iOS 6. If you are not passing object IDs around, or merging with other contexts, then you should be able to use it fairly easily and reliably.
Your simple use case seems tailor made for it.
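If you go that route, the create-or-open dance looks roughly like this (a sketch; document is assumed to be a UIManagedDocument already initialized with a file URL):
if (![[NSFileManager defaultManager] fileExistsAtPath:[document.fileURL path]]) {
    // First launch: create the underlying store on disk.
    [document saveToURL:document.fileURL
       forSaveOperation:UIDocumentSaveForCreating
      completionHandler:^(BOOL success) {
          // document.managedObjectContext is ready to use here.
      }];
} else if (document.documentState == UIDocumentStateClosed) {
    [document openWithCompletionHandler:^(BOOL success) {
        // document.managedObjectContext is ready to use here.
    }];
}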
1) Should the save of one change of boolean value take 200 ms?
Yes, it might take that long. You are performing an I/O operation, and according to the documentation:
When Core Data saves a SQLite store, SQLite updates just part of the store file. Loss of that partial update would be catastrophic, so you may want to ensure that the file is written correctly before your application continues. Unfortunately, doing so means that in some situations saving even a small set of changes to an SQLite store can take considerably longer than saving to, say, an XML store.
-
2) am I doing something wrong in the code to make the method take this long to execute?
No. You are saving directly to the store (under the assumption that you have no parent context).
-
3) Are 200ms enough for me to at least try to address this issue?
Yes. 200ms is a noticeable amount of time for a human and will be felt. You could try to perform the save in the background, but doing so naively is unsafe according to the documentation; or you could move the save to the end of the entire object-editing session.
My advice would be to read up and see if you can make some compromises in your context architecture (your Core Data stack structure).
From my experience, saving in the background is not that bad.
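One common way to make background saving safe is a parent/child context setup, roughly like this (a sketch, not from the answers above; coordinator is assumed to be your NSPersistentStoreCoordinator):
// A private-queue parent context owns the persistent store coordinator
// and does the slow disk write off the main thread.
NSManagedObjectContext *writer =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
writer.persistentStoreCoordinator = coordinator;

// The main-queue context used by the UI is a child of the writer.
NSManagedObjectContext *main =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
main.parentContext = writer;

// Saving the main context is now a fast in-memory push to its parent;
// the parent then writes to disk in the background.
NSError *error = nil;
if ([main save:&error]) {
    [writer performBlock:^{
        NSError *writeError = nil;
        [writer save:&writeError];
    }];
}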

NSUserDefaults transactionality

Is there any way to add transactionality to NSUserDefaults? I would need something like the well-known begin/commit/revert functions of database handles, so that I could revert a modification to the user defaults in some cases. Of course, other users of the defaults would be blocked from writing during the transaction.
Note 1: the synchronize method of the above class does not do this, because:
according to the docs, it is also called from time to time by the framework
there is no "revert"
Note 2: I saw dictionaryRepresentation and registerDefaults, with which I could implement my own transaction mechanism (holding a copy of the old defaults in memory, or even saving it to a plist during the transaction). But maybe there is a ready-made solution for this?
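For reference, a rough sketch of that snapshot-and-restore idea from Note 2 (not a ready-made solution; the restore logic is mine):
// Take a snapshot of the current defaults before the wizard starts.
// Note: dictionaryRepresentation includes values from other domains
// (e.g. NSGlobalDomain), so in practice you may want to snapshot only your own keys.
NSDictionary *snapshot = [[NSUserDefaults standardUserDefaults] dictionaryRepresentation];

// ...the wizard writes to NSUserDefaults as usual...

// On cancel, remove keys added since the snapshot and restore the old values.
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSDictionary *current = [defaults dictionaryRepresentation];
for (NSString *key in current) {
    if (snapshot[key] == nil) {
        [defaults removeObjectForKey:key];
    }
}
[snapshot enumerateKeysAndObjectsUsingBlock:^(NSString *key, id value, BOOL *stop) {
    [defaults setObject:value forKey:key];
}];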
My use case:
I have a wizard-like flow of screens where the user can edit some settings on each screen. As of the current implementation these settings are stored immediately in the defaults as the user moves to the next screen of the wizard. Now this wizard can be interrupted by some other events (even the user can choose to exit/cancel the wizard at any screen) and in this case I would like to roll back the modifications.
One possible solution is to defer setting the values until the end of your wizard. This can be easily done for example using a proxy that will record the messages sent to it and then replay them on the real NSUserDefaults. Recording the messages should be pretty simple:
- (void)forwardInvocation:(NSInvocation *)invocation
{
    // Keep the arguments alive until the invocation is replayed later.
    [invocation retainArguments];
    [invocations addObject:invocation];
}
Where invocations is a mutable array. Replaying the messages back is also simple:
- (void)replayOnTarget:(id)target
{
    for (NSInvocation *op in invocations)
        [op invokeWithTarget:target];
}
This way the wizard does not have to know anything about the transactions. It would get the recording proxy instead of the expected NSUserDefaults instance and send the messages as usual. After the calling code knows the wizard succeeded, it can replay the messages from the proxy on the shared user defaults. (I have added some sample code on GitHub.)
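Usage might look roughly like this (RecordingProxy and the wizard's userDefaults property are hypothetical names; if the proxy is NSObject-based it must also override methodSignatureForSelector: so that forwardInvocation: gets called for NSUserDefaults selectors):
RecordingProxy *proxy = [[RecordingProxy alloc] init];
wizard.userDefaults = proxy;   // the wizard talks to the proxy as if it were NSUserDefaults

// ...the wizard runs, and the proxy records the setObject:forKey: calls...

if (wizardSucceeded) {
    [proxy replayOnTarget:[NSUserDefaults standardUserDefaults]];
}
// If cancelled, just throw the proxy away; nothing was ever written.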
Maybe this is overkill, but since the recording proxy is generic and can be used in other cases, maybe it’s not bad. Same thing can also be done using blocks:
[transaction addObject:[^{
    [defaults setObject:… forKey:…];
} copy]];
Where transaction is a mutable array, again. When the wizard succeeds, you would simply iterate over the array and execute the stored blocks.
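Committing is then just a loop over the stored blocks (a sketch; transaction is the mutable array from the snippet above):
for (void (^change)(void) in transaction) {
    change();   // apply the recorded change to the real defaults
}
[transaction removeAllObjects];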
