My CoreData model has an entity that has an Image attribute.
I have always managed the images for these entities by storing them on the file system and keeping just a reference to the file (its path) in the Core Data attribute.
However I have recently shifted to using child managed contexts for handling editing (so that I can easily discard changes if the user should choose to cancel editing).
This is all well and good, but I now have an issue tracking image changes. Specifically, if the user changes the image I can no longer just delete the old file (I don't want orphaned files building up on the file system) and replace it with the new one, because if the user then cancels the changes the old file is already gone.
As I see it I have two options:
I track the image changes in my business layer and only remove any old images once the context is saved, or conversely delete any new images if the context is discarded/cancelled.
I change my image attribute to a Binary Data type (checking 'Allows External Storage') and let Core Data manage the data... in which case everything should just work.
Looking for any guidance as to which is the better and, importantly, more performant approach?
Or any other alternate solutions/options...
Thanks!
The first approach would be better. If the save is discardable, it makes sense to do it that way. And unless the images are generally small it's usually better to keep them external.
A good place to delete old images is probably in the managed object's willSave() method. Check changedValues() to see whether the image attribute was modified; note that its dictionary holds the new values, while committedValues(forKeys:) still returns the last-saved ones. If the old path differs from the current value, delete the old file.
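A minimal sketch of that in Swift, assuming a hypothetical Item subclass with a string imagePath attribute:

```swift
import CoreData

// Hypothetical entity subclass; the class and attribute names are illustrative.
final class Item: NSManagedObject {
    @NSManaged var imagePath: String?

    override func willSave() {
        super.willSave()
        // changedValues() holds the *new* values for modified keys;
        // committedValues(forKeys:) still returns the last-saved ones.
        guard changedValues().keys.contains("imagePath") else { return }
        let oldPath = committedValues(forKeys: ["imagePath"])["imagePath"] as? String
        if let oldPath = oldPath, oldPath != imagePath {
            try? FileManager.default.removeItem(atPath: oldPath)
        }
    }
}
```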
To handle rolling back changes, a couple of possibilities come to mind.
Handle this in whatever code rolls back the change: look at the about-to-be-rolled-back instance and remove its new image file (sketched after this list).
Always put new images in NSTemporaryDirectory() and use willSave() to move them to a permanent location when saving changes. Then you don't need to do anything on a rollback-- you can let iOS handle clearing the temporary directory for you.
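For the first option, the cancel path can sweep the editing context before discarding it. A sketch, reusing the hypothetical Item class above:

```swift
// Before rolling back, delete any image file introduced during the edit.
func cancelEditing(in context: NSManagedObjectContext) {
    for case let item as Item in context.insertedObjects.union(context.updatedObjects) {
        let oldPath = item.committedValues(forKeys: ["imagePath"])["imagePath"] as? String
        // A path that differs from the last-saved one belongs to a new,
        // about-to-be-discarded image file.
        if let newPath = item.imagePath, newPath != oldPath {
            try? FileManager.default.removeItem(atPath: newPath)
        }
    }
    context.rollback() // discards the pending changes themselves
}
```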
I'm in the middle of adding an "offline mode" feature to an app I'm currently working on. Basically the idea is that users should be able to make changes to the data, for example, edit the description of an item, without being connected to the internet, and the changes should survive between app launches.
Each change would normally result in an API request when working online, but the situation is different in offline mode.
Right now this is implemented by storing all data coming from the API in a Core Data database that acts as a cache. Entities that can be edited by the user have, in addition to their normal attributes, the following ones:
locallyCreated - the object was created offline
locallyDeleted - the object was deleted offline
locallyUpdated - the object was updated offline
This makes it possible to look for new/deleted/updated objects and send corresponding API requests when doing sync.
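At sync time, finding the pending changes is then just a predicate over those flags, roughly (the entity name here is illustrative):

```swift
import CoreData

// Fetch every object with pending offline changes; "Item" is illustrative.
func pendingChanges(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.predicate = NSPredicate(
        format: "locallyCreated == YES OR locallyDeleted == YES OR locallyUpdated == YES")
    return try context.fetch(request)
}
```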
This worked well for creating and deleting objects. However, one disadvantage I found with this approach is that when new data is retrieved from the API, all local changes (i.e. attributes of objects marked as locally updated) are lost, which means they have to be stored separately somehow.
What would be the best way to approach this problem?
Since you have your locallyUpdated key, the obvious answer is to modify your code that imports server changes, so that it doesn't overwrite changes to any object marked as changed. One way or another you need to avoid overwriting those changes, and you're already keeping a record of which objects have changes, so you already have the tools for a basic solution.
But you'll soon run into the complexity of syncing data. What if the local object has changes on one key, but the incoming data from the server has changes on a different key? You can't resolve that just by knowing that the local copy has changed somehow. Maybe you decide that the server always wins, or that the local copy always wins. Those are easy, if they make sense for your app. If you need to merge changes though, you have some work ahead of you. You would need to record not only a Boolean value indicating that changes were made, but also a list of which keys had changed. This can get complicated, but it's the nature of data syncing.
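A sketch of such a merge, assuming you extend the scheme with a hypothetical changedKeys attribute (e.g. a transformable array of edited key names):

```swift
import CoreData

// Apply incoming server values, letting local edits win for any key the
// user touched; "changedKeys" is a hypothetical transformable attribute
// storing an array of edited key names.
func apply(serverValues: [String: Any], to object: NSManagedObject) {
    let changedKeys = object.value(forKey: "changedKeys") as? [String] ?? []
    for (key, value) in serverValues {
        guard !changedKeys.contains(key) else { continue } // local edit wins
        object.setValue(value, forKey: key)
    }
}
```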
The new version of our app is a complete redo, different in almost every way. Reading about migration, I think I'd definitely fall into the heavyweight camp (added relationships), but I'm not sure I actually need most of the data, and I'm not sure I have the chops for a complex migration. That being said, I realized users can save favorite stories and I'd like to preserve some of that data (but not the whole NSManagedObject, since the story entity is completely different now). But that's all I need: just a few pieces of data from a few NSManagedObjects.
I've got the detection set up with -isConfiguration:compatibleWithStoreMetadata: and have deleted the old store completely with -removeItemAtURL: as a worst-case scenario, but is there any way to fetch a couple of things before I delete it and build new objects?
Yes, you can stand up the old store with the old model and then fetch data out of it and manually insert it into the new stack. Then delete the old store afterwards. The actual workflow would be:
New model is a completely separate file
New store is a completely separate file (new file name)
Then:
Look for the old file name on disk. If it doesn't exist, just stand up the new stack.
Stand up old stack.
Stand up new stack.
Copy data from old stack to new stack.
Save new stack.
Release old stack and delete old store file from disk.
No migration needed.
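A sketch of that workflow in Swift; the file, model, entity, and attribute names are all illustrative:

```swift
import CoreData

// Stand up a stack for a given model and store file.
func makeStack(modelURL: URL, storeURL: URL) throws -> NSManagedObjectContext {
    let model = NSManagedObjectModel(contentsOf: modelURL)! // force-unwrap for brevity
    let psc = NSPersistentStoreCoordinator(managedObjectModel: model)
    _ = try psc.addPersistentStore(ofType: NSSQLiteStoreType, configurationName: nil,
                                   at: storeURL, options: nil)
    let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    context.persistentStoreCoordinator = psc
    return context
}

func migrateFavorites(oldModelURL: URL, oldStoreURL: URL,
                      newModelURL: URL, newStoreURL: URL) throws {
    // If there's no old file, there's nothing to copy.
    guard FileManager.default.fileExists(atPath: oldStoreURL.path) else { return }

    let oldContext = try makeStack(modelURL: oldModelURL, storeURL: oldStoreURL)
    let newContext = try makeStack(modelURL: newModelURL, storeURL: newStoreURL)

    // Copy just the pieces you care about; entity/attribute names are illustrative.
    let request = NSFetchRequest<NSManagedObject>(entityName: "Story")
    request.predicate = NSPredicate(format: "isFavorite == YES")
    for old in try oldContext.fetch(request) {
        let new = NSEntityDescription.insertNewObject(forEntityName: "Story",
                                                      into: newContext)
        new.setValue(old.value(forKey: "title"), forKey: "title")
    }
    try newContext.save()
    try FileManager.default.removeItem(at: oldStoreURL)
}
```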
I added the old model file. Then, in the PSC method, I detect whether the store is compatible and, if not, stand up the old stack, initializing the model and PSC pointing at the old model I just added, and then create the context. I added the old class files for the entity I need and fetched just the favorites, and it worked! I know I can create new objects and insert them into the new stack, but what's the best way to save the new stack and release the old one when done? I still seem to be able to fetch even after removeItemAtURL:. And is the app delegate the best place for all this?
First, the AppDelegate is NOT the place for this. Create a DataController that subclasses NSObject. Put all your Core Data code there and then pass that DataController around.
Next, you are not looking for a migration state. You are looking for files on disk with NSFileManager. If the old file exists then stand up the old store and the new store, copy data over. Then remove the old file.
To release the old stack, just set the references (MOM, PSC, and MOC) to nil; ARC will remove them from memory. (That also explains why you could still fetch after -removeItemAtURL:. While the coordinator is alive it keeps the SQLite file open, so delete the file only after releasing the stack.)
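A sketch of what that DataController might look like; the property names are hypothetical:

```swift
import CoreData

// Owns both stacks during the one-time copy; all names are hypothetical.
final class DataController: NSObject {
    var oldContext: NSManagedObjectContext?
    private var oldCoordinator: NSPersistentStoreCoordinator?

    func releaseOldStack(storeURL: URL) throws {
        // Drop every strong reference; ARC deallocates the MOC, the PSC,
        // and (through it) the MOM.
        oldContext = nil
        oldCoordinator = nil
        // Delete the file only after the stack is gone; a live coordinator
        // keeps the SQLite file open, which is why fetches still work
        // right after removeItem(at:).
        try FileManager.default.removeItem(at: storeURL)
    }
}
```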
I am using the MagicalRecord library to simplify Core Data in my project.
Here I have a situation where I have to discard all changes made to the default context and prevent them from being saved to the database.
The problem is that I am not using any MagicalRecord method that performs a save operation, so nothing is saved to the database, which is fine. But the changes remain in the current context.
How do I clear all the changes made to the current context or root context?
Thanks,
Pratik
Don't use the default context for changes you are not sure will eventually be persisted. The easiest way to do this is to create a new context. With MagicalRecord, creating a new context will automatically merge your changes into the default context when you save it. If you don't want to keep the changes in your new context, just release it, along with any objects that use that context, and those changes will be discarded. You don't have to go and manually undo everything. When you take advantage of multiple contexts, you will have a lot less work to do.
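A sketch of that pattern in plain Core Data terms (MagicalRecord's context helpers wrap the same parent/child idea); the entity name is hypothetical:

```swift
import CoreData

// Make edits in a throwaway child of the default context.
func edit(in defaultContext: NSManagedObjectContext) {
    let scratch = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    scratch.parent = defaultContext

    let draft = NSEntityDescription.insertNewObject(forEntityName: "Note", into: scratch)
    draft.setValue("unsaved text", forKey: "body")

    let userTappedSave = false // illustrative
    if userTappedSave {
        try? scratch.save() // pushes changes up into the default context
    }
    // Otherwise just let `scratch` go out of scope; the default context
    // never sees the draft and there is nothing to undo manually.
}
```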
I have an Image : NSManagedObject that has two properties: NSString* localPath and NSString* remoteUrl.
When I save the object by calling save:&error on the managed object context, I want it to download the file and when the download fails, I want the save operation to fail too.
Because I have a deeply nested DB structure with multiple references to my Image Entity it would be complicated to find all my images to trigger the download manually.
Is this possible, and if so, how can I cancel the save or delete operation so that it fails?
If it's bad practice to do this in the Model, where should I do this?
It's probably possible to do what you describe but it would be an incredibly bad idea. Downloading images can take a long time. Saving changes in Core Data can already take a while. Since saving will affect every instance that needs an image, you'd be taking a potentially long operation and turning it into a ridiculously, insanely, excessively long operation. Saving wouldn't complete until every image download had finished, and that's an extremely unreasonable dependency.
You'd be much, much, much better off having image downloading and saving changes completely decoupled from each other. Download images separately. If an object's image is unavailable, use a placeholder of some kind.
Instead of having save: start the download process (which, by the way, saves the entire managed object context, not just a single object), I would start the download first. If the download succeeds, you can write the image to disk, update localPath, and save your changes; if it fails, you don't need to save at all.
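A sketch of that order of operations, using the question's Image entity; the destination path and error handling are illustrative:

```swift
import CoreData

// The question's entity, translated to Swift.
final class Image: NSManagedObject {
    @NSManaged var localPath: String?
    @NSManaged var remoteUrl: String?
}

// Download first; only touch the context (and save) once the file is on disk.
func fetchAndStore(_ image: Image, in context: NSManagedObjectContext) {
    guard let remote = image.remoteUrl, let url = URL(string: remote) else { return }
    URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            return // download failed, so no save happens at all
        }
        let dest = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
            .appendingPathComponent(url.lastPathComponent)
        try? FileManager.default.moveItem(at: tempURL, to: dest)
        context.perform {
            image.localPath = dest.path
            try? context.save() // reached only after a successful download
        }
    }.resume()
}
```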
I think that MVCS (Model View Controller Service / Model View Controller Store) might be of interest to you. You could move your logic to the Store layer. It would perform image download asynchronously and create NSManagedObject if download completed successfully.
You can find some information about it at: MVCS - Model View Controller Service and https://softwareengineering.stackexchange.com/questions/184396/mvcs-model-view-controller-store
I've got an application that stores products in a Core Data file. These products include images as "Transformable" data.
Now I tried adding some attributes using lightweight migration. When I tested this with a small database it worked well, but when I use a really large one of nearly 500 MB the application usually crashes because of low memory. Does anybody know how to solve this problem?
Thanks in advance!
You'll have to use one of the other migration options. The automatic lightweight migration process is really convenient to use. But it has the drawback that it loads the entire data store into memory at once. Two copies, really, one for before migration and one for after.
First, can any of this data be re-created or re-downloaded? If so, you might be able to use a custom mapping model from the old version to the new one. With a custom mapping model you can indicate that some attributes don't get migrated, which reduces memory issues by throwing out that data. Then when migration is complete, recreate or re-download that data.
If that's not the case... Apple suggests a multiple pass technique using multiple mapping models. If you have multiple entity types that contribute to the large data store size, it might help. Basically you end up migrating different entity types in different passes, so you avoid the overhead of loading everything at once.
If that's not the case (e.g. the bloat is all from instances of a single entity type), well, it's time to write your own custom migration code. This will involve setting up two Core Data stacks, one with the existing data and one with the new model. Run through the existing data store, creating new objects in the new store. If you do this in batches you'll be able to keep memory under control. The general approach would be (sketched in code after these steps):
Create new instances in the new model and copy attributes only. You can't set up relationships yet because related objects might not exist in the new data store. Keep a mutable dictionary mapping NSManagedObjectIDs from the old store to the new one, for use in the next step. To keep memory use low:
As soon as you have created a destination store object, free up the memory for the source object by using refreshObject:mergeChanges: with NO for the second argument.
Every 10 instances (or 50, or whatever) save changes on the destination managed object context and then reset it. The interval is a balancing act-- do it too often and you'll slow down unnecessarily, do it too rarely and memory use rises.
Do a second pass where you set up relationships in the destination store. For each source object,
Find the corresponding destination object, using the object ID map you created
Run through the source object's relationships. For each one, find the corresponding destination object, also using the object ID map.
Set the destination object's relationship based on the result.
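A sketch of the attribute-copy pass, with a hypothetical Product entity and a batch interval of 50:

```swift
import CoreData

// Pass 1: copy attributes only, keeping memory flat.
func copyAttributes(from oldContext: NSManagedObjectContext,
                    to newContext: NSManagedObjectContext)
    throws -> [NSManagedObjectID: NSManagedObjectID] {
    var idMap: [NSManagedObjectID: NSManagedObjectID] = [:] // old ID -> new ID, for pass 2
    let request = NSFetchRequest<NSManagedObject>(entityName: "Product")
    request.fetchBatchSize = 50

    for (index, source) in try oldContext.fetch(request).enumerated() {
        let dest = NSEntityDescription.insertNewObject(forEntityName: "Product",
                                                       into: newContext)
        for key in source.entity.attributesByName.keys {
            dest.setValue(source.value(forKey: key), forKey: key)
        }
        try newContext.obtainPermanentIDs(for: [dest]) // keep IDs valid across reset()
        idMap[source.objectID] = dest.objectID

        // Turn the source back into a fault to release its memory.
        oldContext.refresh(source, mergeChanges: false)

        if index % 50 == 49 { // save-and-reset interval; tune as needed
            try newContext.save()
            newContext.reset()
        }
    }
    try newContext.save()
    return idMap
}
```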
While you are at it consider why your data store is so big. Are you storing a bunch of binary data blobs in the data store? If so, make sure you're using the "Allows external storage" option in the new model.