I'm having issues with my NSFetchedResultsController that seemed to be fixed by turning includesPendingChanges on, and this worries me. (I've since found this not to be true; including pending changes does not help.)
What happens is that my Fetched Results Controller correctly fetches and displays my objects on a full API refresh. Stepping through, I have confirmed that this full API refresh's temporary context is saved and merged without error.
However, if I then return to a view that performs this same fetch request from a different navigation flow, only one partially complete object is returned and displayed, rather than the complete set of objects. If I then do a full API refresh at this point, the refresh displays the correct objects.
I believe my issue may be that mergeChangesFromContextDidSaveNotification is not propagating to the main context, but from my logs and stepping through, it seems to me that it is saving correctly. I'm not sure where to go from here. While includesPendingChanges fixes the symptom, I don't believe it fixes the underlying issue.
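For reference, here is the merge pattern I would expect to be in play (a minimal sketch in Swift; temporaryContext and mainContext are assumed names, not actual code from my project):

    import CoreData

    // Observe saves of the temporary (background) context and fold them
    // into the main context on the main context's queue.
    let token = NotificationCenter.default.addObserver(
        forName: .NSManagedObjectContextDidSave,
        object: temporaryContext,
        queue: nil
    ) { notification in
        mainContext.perform {
            mainContext.mergeChanges(fromContextDidSave: notification)
        }
    }
    // Keep `token` alive for as long as the merging should happen.

If the equivalent observer were registered against the wrong context, or deallocated early, the main context would never see the merged objects, which would match these symptoms.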
As some added information, I'm using this framework for my Core Data Management: https://github.com/vokalinteractive/CoreDataManager-iOS
I am turning on includesPendingChanges by adding the following on line 156 of VIFetchedResultsController.m:
    [fetchRequest setIncludesPendingChanges:YES];
Edit: Whatever fluke caused me to believe that setting includesPendingChanges to YES fixed the problem is no longer present. It turns out this project had some seriously hairy memory management, and after spending all day cleaning that up, I'm still no closer to making this work. I'm at the same point, though: it looks like the changes are not propagating to the main data store, or aren't sticking.
As a hint, as I mentioned in my comment below, even when I specify includesPendingChanges:YES, logging out my fetched objects with:
    [self.fetchedResultsController.fetchedObjects description]
ends with "includesPendingChanges: NO". Any idea what could cause this?
Perhaps you got it the wrong way round. The default of includesPendingChanges is YES, as this is the behavior you usually want. That your UI updates get corrupted is sort of expected.
Setting this attribute to NO is really just meant to facilitate fetches with NSDictionaryResultType, especially those including simple aggregations that can be handled at the database level.
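For illustration, this is the kind of fetch where NO makes sense (a sketch in Swift; the Order entity and amount attribute are made up, and context is an NSManagedObjectContext you already have):

    // Aggregate computed entirely at the SQL level; unsaved (pending)
    // changes are invisible to the database, hence includesPendingChanges = NO.
    let request = NSFetchRequest<NSDictionary>(entityName: "Order")
    request.resultType = .dictionaryResultType
    request.includesPendingChanges = false

    let sum = NSExpressionDescription()
    sum.name = "totalAmount"
    sum.expression = NSExpression(forFunction: "sum:",
                                  arguments: [NSExpression(forKeyPath: "amount")])
    sum.expressionResultType = .doubleAttributeType
    request.propertiesToFetch = [sum]

    if let results = try? context.fetch(request) {
        print(results)  // e.g. [{ totalAmount = 1234.5 }]
    }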
I suspect this is by no means an error on your part, but a possible flaw in the framework you are using. In any case, what you identify as an issue is really just expected behavior.
See the two short paragraphs in the documentation where this is explained quite well.
The code I'm working on makes heavy use of TFDMemTable instances, and of clones of those tables created with CloneCursor.
Sometimes, under specific conditions that I am unable to identify, the source table and its clone become out of sync: the data may differ between them, and so may the record count.
Calling Refresh on the cloned table puts things back in order.
From my understanding, CloneCursor is used to address the same underlying memory where the data is stored, meaning alterations to the underlying data through either of the two pointers should be reflected in the other table, while still allowing separate filtering / record positioning per "view". So how can it possibly go out of sync?
I built a small simulator, where I can insert / delete / filter records in either the table or its clone, and observe the impact on the other one. Changes were reflected correctly.
Another downside of Refresh is that it slows execution tremendously if overused.
Has anyone faced similar issues or found explanations / documentation regarding this matter?
Edit:
To clarify what I mean by "out of sync": reading a value from the table using FieldByName will return X prior to Refresh, and Y post-Refresh. I was not able to reproduce this behavior in the simulator mentioned above.
I've spent the better part of a workday trying to solve this.
Background
I have a simple core data model, with books and reading sessions. The books have covers (images) that are stored as binary data with "Allows External Storage".
On iOS 11.4 and below, everything works fine all the time. When I save a new session everything gets updated properly.
Problem
Since iOS 12, when I create a new reading session and link it to the book, about every second time Core Data generates a SQL statement that also updates the book cover field, sometimes producing a bad reference (to the file on disk). This often results in the cover being nil when the app restarts, and it almost always creates a duplicate copy of the cover on disk (as can be seen in the Simulator's _EXTERNAL_DATA folder).
In-memory context and objects remain correct, though (and everything in the UI is therefore OK), until the app is restarted; then the cover is often nil.
iOS 12 specific
On iOS 12, I can deterministically reproduce the error in the simulator and on physical devices, and users have reported the error as well. I cannot reproduce the error on iOS 11.4, and no users reported it prior to iOS 12.
Steps taken
I've enabled "-com.apple.CoreData.ConcurrencyDebug 1", so it shouldn't be that I'm accessing anything from the wrong queue. I've also enabled "-com.apple.CoreData.SQLDebug 3" so that I can see exactly what gets written.
I've made sure the Book instance (and therefore the cover) is not modified by my code before the association with the new Session, by checking hasChanges just before I do newSession.book = book and context.save() (see the sketch after this list).
To be 100% sure I'm not touching the cover property on any thread, I've short-circuited my getters and setters for that property. No improvement.
I've tried using objectID to request an instance of the book just before the association and save. No improvement.
I've even tried the option where the context keeps strong references to all registered objects (retainsRegisteredObjects), just to make sure it was not some kind of memory management issue. No improvement.
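For context, the failing step reduced to its essentials (a sketch of what I'm doing; Book, Session, bookObjectID and context stand in for my real names):

    do {
        // Re-request a fresh instance via its objectID (as mentioned above).
        let book = try context.existingObject(with: bookObjectID) as! Book
        assert(!book.hasChanges, "book must be clean before the association")

        let session = Session(context: context)
        session.book = book  // the only change I intend to make
        try context.save()   // SQLDebug still shows the cover column in the UPDATE
    } catch {
        print("Save failed: \(error)")
    }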
Question
Any ideas for next steps?
Status update
This is a defect in iOS 12. See the accepted answer below for a detailed description of a reasonable workaround.
Update: The underlying Core Data issue appears to be resolved in iOS 12.1 (verified in beta 4). We will keep the workaround described below in our app, and won't be recommending using the External Storage option any time soon.
After talking to Apple engineers and filing the Radar mentioned above, we couldn’t wait around for a fix, so we took the hit and switched to storing files on the filesystem and managing it directly ourselves.
Another alternative that we considered was migrating our model so that External Storage is no longer allowed for BLOBs, but I don't know what impact that would have had on performance. I was also worried about a model migration at a time when this part of iOS seems unstable, especially after reading stories like this one in the past: Core Data: don’t store large files as binary data – Alexander Edge – Medium
It wasn't too much of a pain to implement local storage ourselves. You just need a unique identifier for each record that you can use to create a filename, so you can map files to records. We added an extension to our managed object subclass with methods for reading, writing and deleting the files. Now, instead of calling e.g. article.photo = image.pngData(), we call something like article.savePhoto(image.pngData()), and we do similarly when we want to retrieve the image. You can also add some code to these methods to support backwards compatibility with any images that are currently stored in Core Data.
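A sketch of the kind of extension we ended up with (Article, uniqueID and the method names are illustrative, not our real API):

    import CoreData

    extension Article {

        // One file per record, keyed by a unique identifier attribute
        // that is assumed to exist on the entity.
        var photoURL: URL {
            let documents = FileManager.default.urls(for: .documentDirectory,
                                                     in: .userDomainMask)[0]
            return documents.appendingPathComponent("photo-\(uniqueID).png")
        }

        func savePhoto(_ data: Data?) {
            guard let data = data else {
                try? FileManager.default.removeItem(at: photoURL)  // nil clears the photo
                return
            }
            try? data.write(to: photoURL, options: .atomic)
        }

        func loadPhoto() -> Data? {
            // nil if the file was never written (or has been cleaned up).
            return try? Data(contentsOf: photoURL)
        }
    }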
Deletion was a little trickier because our objects are deleted from multiple places in the code, including cascading deletes. In the end I opted to do it in the managed object's prepareForDeletion method, but it is not ideal. There is plenty of discussion of how best to implement this here: cocoa - How to handle cleanup of external data when deleting unsaved Core Data objects? - Stack Overflow
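Roughly, the cleanup hook has this shape (same caveats as in that discussion: it also fires for never-saved objects, and the file is gone even if the deletion is later rolled back):

    class Article: NSManagedObject {
        override func prepareForDeletion() {
            super.prepareForDeletion()
            // Delete the sidecar file along with the record.
            try? FileManager.default.removeItem(at: photoURL)
        }
    }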
Finally, to prevent our app from crashing when a non-Optional binary attribute has disappeared because of this bug, I override awakeFromFetch in my managed object subclass to ensure that any required attributes are not nil; if they are, I set them to a placeholder image so that they can be saved without validation failing.
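Something along these lines (a sketch; cover is the attribute from my model, and placeholderImageData() is a hypothetical helper):

    override func awakeFromFetch() {
        super.awakeFromFetch()
        // Change processing is disabled inside awakeFromFetch, so use the
        // primitive accessors to patch up the missing required attribute.
        // placeholderImageData() is a hypothetical helper returning a small PNG.
        if primitiveValue(forKey: "cover") == nil {
            setPrimitiveValue(placeholderImageData(), forKey: "cover")
        }
    }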
I've been having this issue at random times for quite a while. I will physically be looking at my Firebase console and see that I have deleted a piece of data; then in code I will call print(snapshot.ref) and see the correct reference (copied and pasted into the browser to double-check, too), yet somehow when I try to get the values of the snapshot or iterate over its children, the snapshot contains old data that is no longer in the database.
    ref.child(users).child("Employees").observeSingleEvent(of: FIRDataEventType.value, with: { (snapshot) in
        print(snapshot)
        for child in snapshot.children {
            self.nameList.append((child as AnyObject).value)
        }
    })
So here my database looks like this (the picture is cut off, but there are no children under it):
Yet somehow when I print snapshot I get:
    Snap (Employees) {
        0 = "";
        1 = "name1";
        2 = "name1";
    }
This has been frustrating me for a while. It seems like it could have something to do with old snapshot values somehow being stored locally, or with somehow not seeing the most up-to-date version of the database. If it matters, I have similar calls to .observeSingleEvent in this file; the one pasted above is nested within another. Even if it were a synchronization problem, I still don't know how that could make the printed value the old one.
Any help would be so so appreciated.
This behavior is apparently by design. It's so strange that I actually contacted Firebase Support about it, and was told that they'd consider revising either the behavior or the docs, but couldn't promise a date and I should monitor their Release Notes URL for updates to it.
It makes a little sense if you consider it from the SDK point of view. You're calling observeSingleEvent. To Firebase this means it should only call you ONE TIME. Developers would probably find it confusing if a method with that name produced more than one callback, right?
But if you have persistence enabled, things get a little weird. Just like with observeEventOfType, Firebase will give you the on-disk value immediately so you get the fastest UI update, but then it will call the server for a fresher value to be sure it has the latest data from then on. The problem is, since you're telling it not to call you back again, it will remember that fresher data (so you WILL see it in the future, which is why it seems confusing) but it won't tell you that it has arrived.
What I've discovered through some trial and error is that the instinctive drive to use observeSingleEvent may be misguided with Firebase anyway. Both iOS and Android use "recycler" view mechanisms for table/collection views, such that only a handful of items are actually in memory at a time anyway, even on screens with a lot of data. Beyond this built-in efficiency from the platform, Firebase itself seems to work just fine even managing many dozens of in-memory refs at a time. In my apps, I've taken to just using observeEventOfType for all of my use cases, except where I have a very specific, and not theoretical-efficiency-related, reason to use observeSingleEvent. The performance impact has been minimal, and the data then works much more the way you expect.
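For illustration, the long-lived observer pattern looks like this (a sketch using the newer SDK names DataEventType/DataSnapshot, and assuming nameList is [String]; with the older FIR-prefixed SDK the shape is identical):

    // Fires once with the cached value, then again each time fresher
    // data arrives from the server.
    let handle = ref.child("Employees").observe(.value) { snapshot in
        self.nameList = snapshot.children.compactMap {
            ($0 as? DataSnapshot)?.value as? String
        }
    }

    // Detach when the screen goes away, e.g. in deinit:
    ref.child("Employees").removeObserver(withHandle: handle)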
I have a client app with Core Data as its back end. It's simple enough, with two entities.
UPDATE: Using CloudKit as the sync service. I'm not really sure what is going on there, except that I can query and get results in case things don't work automatically. The problem, as I've noticed with most third-party sync-service providers, is that 95% of the time they all work. It's when I test with more than a few devices / simultaneous calls that some undesired change comes in.
This question is more about iOS and Core Data than the actual syncing architecture.
There are times when there is definite sync data loss. I really can't tell when or how; that I'm still figuring out. Sometimes the initial sync takes a long time (if there's existing data) and the user might close the app (some people double-press the home button and actually close apps!).
But no matter what I do, sometimes I miss an object, sometimes an attribute.
So I saw NSAsynchronousFetchRequest, and I thought it might give me an option to check whether all the local data (Core Data) is okay, and to see if there's anything missing.
Perhaps I could use a simple predicate to see if some managedObject.title == nil and fetch its identifier, collect those faulty objects, and request the data for them from the server of truth. Is this a good use of NSAsynchronousFetchRequest?
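Something like this sketch is what I have in mind ("Item", "title" and "identifier" stand in for my model's real names; backgroundContext and syncService are hypothetical):

    import CoreData

    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.predicate = NSPredicate(format: "title == nil")

    let asyncFetch = NSAsynchronousFetchRequest(fetchRequest: request) { result in
        let faulty = result.finalResult ?? []
        let identifiers = faulty.compactMap { $0.value(forKey: "identifier") as? String }
        // Ask the server of truth for fresh copies of just these objects.
        // syncService.refetch(identifiers)
    }

    // Execute on the context's queue; the completion block fires when done.
    backgroundContext.perform {
        _ = try? backgroundContext.execute(asyncFetch)
    }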
If yes, when during the lifetime of the app would this be good?
I'm thinking maybe after applicationDidEnterBackground would be a good time? Then, if I do get to that point, I'll need a good way to manage Core Data in the background!
If no, well... I really don't know what to do then.
I'm trying to actually do this, and will update with my results.
UPDATE: Question updated to reflect the use of Cloudkit
I'm having an issue with an Umbraco site of mine: for some reason some of the nodes are timing out when I try to click on them in the back-end of the site.
The front-end works fine and there aren't any slowdown issues there; however, I'm unable to edit these same nodes in the back-end, as the system seems to just hang. This is making it incredibly difficult to debug, as I have no idea what properties are actually causing the problems. What's strange is that I can create a node of the same document type and enter some dummy values, and that works fine, yet I can't seem to edit the existing nodes.
I've tried republishing the entire site, republishing the individual nodes, and deleting the umbraco.config file, and nothing has worked up to this point.
What's also interesting is that if I close down the browser, the system seems to stop hanging and I can log in and try again.
Has anyone encountered this before or know where to begin?
Thanks
I have encountered something similar. The longer you work with Umbraco, the slower it becomes, and if you check the memory usage in Chrome's task manager, you can see that certain actions upon nodes bump the memory usage up a little further. The answer is just to close down the tab and open a new one.
I have reported this, and Umbraco cannot replicate it. However, I do think that this is possibly due to a package installed into Umbraco, maybe uComponents. It's very difficult to pinpoint.
Update:
If you can access some nodes but not others, then this is actually slightly easier to debug. I would check what similarities the nodes that timeout have.
Are they all of the same document type?
Do they all use the same data type?
I would guess that the nodes in question are using a data type that is performing an operation when the node is loading, and that operation is timing out. For example, do you have any data types that load data from the database, like enums? Do you have any datatypes that load data from a web service?
Do you have any usercontrol data types wrapped in the UserControlWrapper data type? These would be somewhere to check.
Finally, check:
The database's [umbracoLog] table: any Umbraco-specific errors will be listed there.
The computer's event viewer: this will show any unhandled errors.
My money's on a database timeout.