It is possible to perform a lightweight migration that changes a one-to-one relationship into a one-to-many relationship. The schema is correctly updated, i.e. A->B becomes A->>B.
But the one-to-one reference from A to B is lost. I was expecting to still be able to access B after the migration, but it is now a zombie that nothing references.
Do I have to create a mapping model for this to work?
Creating a mapping model will trigger a heavyweight migration. Heavyweight migrations are very slow and memory intensive; if you are running on iOS, you do not want to do that.
If the automatic migration is failing then you probably want to do an export/import type migration instead of a heavy migration.
Essentially, for an export/import migration you walk through your existing Core Data store and export it to some other format (I like to use JSON), then import it back into the new data model.
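As a rough sketch of the export half (the method name and date handling here are my own assumptions; relationships need an encoding scheme of your own, such as natural keys, and are omitted):

// Export every entity's attributes to JSON. Relationship encoding and
// binary attributes (which would need base64) are skipped in this sketch.
- (NSData *)exportStoreToJSONFromContext:(NSManagedObjectContext *)context error:(NSError **)error {
    NSMutableDictionary *payload = [NSMutableDictionary dictionary];
    NSManagedObjectModel *model = context.persistentStoreCoordinator.managedObjectModel;
    for (NSEntityDescription *entity in model.entities) {
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:entity.name];
        NSArray *objects = [context executeFetchRequest:request error:error];
        if (!objects) return nil;
        NSMutableArray *rows = [NSMutableArray arrayWithCapacity:objects.count];
        for (NSManagedObject *object in objects) {
            NSMutableDictionary *row = [NSMutableDictionary dictionary];
            for (NSString *name in entity.attributesByName) {
                id value = [object valueForKey:name];
                if ([value isKindOfClass:[NSDate class]]) {
                    // JSON has no date type; store an epoch timestamp instead
                    value = @([(NSDate *)value timeIntervalSince1970]);
                }
                if (value) row[name] = value;
            }
            [rows addObject:row];
        }
        payload[entity.name] = rows;
    }
    return [NSJSONSerialization dataWithJSONObject:payload options:0 error:error];
}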
If you are on the desktop, then you can definitely use a heavy migration. The desktop has far more memory available, a faster CPU, etc.
Update
I thought of that, but it is complicated. I have to export when the app starts, then programmatically change the model version of the xcdatamodeld to the next version, then let the lightweight migration do its work, then import. On each launch after that I have to check whether the model version is the latest so I know whether to migrate or not.
Yes, it is complicated, although that is not how you would do it. The other option is to perform a heavyweight migration, which risks blowing out memory, tripping the watchdog, and taking longer, all of which creates a poor user experience.
If the lightweight migration can't do something, then a heavy migration or a manual migration is required. A heavy migration (with a mapping model) was not designed for iOS; it is a carry-over from the OS X days. Yes, you can get it to work, but it is not performant.
Doing a manual migration is not that difficult. As part of your start-up procedure you should be checking for a migration event anyway. If a migration event occurs, you proceed into the manual migration by standing up the old stack using the old model; NSManagedObjectModel has methods to resolve this. Then you export to JSON, stand up the new stack, and import.
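A minimal sketch of that start-up check, assuming a storeURL pointing at the existing SQLite store:

// Detect whether the store on disk still matches the current model; if not,
// resolve whichever old model version wrote it so the old stack can be built.
NSError *error = nil;
NSDictionary *metadata =
    [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType
                                                               URL:storeURL
                                                             error:&error];
NSManagedObjectModel *currentModel = [NSManagedObjectModel mergedModelFromBundles:nil];
if (![currentModel isConfiguration:nil compatibleWithStoreMetadata:metadata]) {
    // Migration event: find the old model version matching the store's metadata.
    NSManagedObjectModel *oldModel =
        [NSManagedObjectModel mergedModelFromBundles:nil forStoreMetadata:metadata];
    // Stand up a temporary stack on oldModel, export to JSON,
    // then build the real stack on currentModel and import.
}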
Related
Is it possible with Realm to do linear, more focused, self-contained migrations, sort of how Rails does it?
For instance, as I currently understand it, if the Person class changes twice (so two migrations), when migration 2 gets added and the final schema changes, migration 1 will require revision so that it migrates to the final schema.
Is there a way to migrate to an intermediate schema (what the schema used to be when migration 1 was all there was) in between migrations?
I realize that it would be less efficient, since there would need to be transient tables created and extra work done instead of just migrating to the most recent schema. However, it's less development time spent modifying old migrations, cognitive overhead, test complexity, etc.
There are many scenarios in which linear migrations in Realm are supported, but the case you've described is not one of them. The reason Realm can perform migrations without having to keep a full history of all schemas, as Core Data does through its xcdatamodeld bundle, is that Realm has access to both the current schema (on disk) and the target schema (the in-memory model classes).
To support the use case you're requesting, you'd have to keep all previous versions of your schema in your app so that Realm could know which tables to create at each intermediate migration step. Not only is this more work for you as a Realm user, it's a design anti-pattern that would undoubtedly lead to less efficient, longer migrations.
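To make the contrast concrete, here is roughly what a versioned migration block looks like in Realm's Objective-C API (the class and property names are hypothetical). Notice that every branch writes against the final schema; there is never an intermediate schema for a standalone migration to target:

RLMRealmConfiguration *config = [RLMRealmConfiguration defaultConfiguration];
config.schemaVersion = 2; // the current version
config.migrationBlock = ^(RLMMigration *migration, uint64_t oldSchemaVersion) {
    [migration enumerateObjects:@"Person" block:^(RLMObject *oldObject, RLMObject *newObject) {
        if (oldSchemaVersion < 1) {
            // What "migration 1" did, expressed against the *final* schema.
            newObject[@"fullName"] = oldObject[@"name"];
        }
        if (oldSchemaVersion < 2) {
            // What "migration 2" does; no intermediate tables ever exist.
            newObject[@"age"] = @0;
        }
    }];
};
[RLMRealmConfiguration setDefaultConfiguration:config];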
I hope this makes sense, and I'm happy to explain this further if you'd like.
For more information, please refer to Realm's migration documentation which covers a lot more: https://realm.io/docs/objc/latest/#migrations
I am currently building a Core Data migration for an app which averages 200k-500k rows of data per entity. There are currently 15 entities in the Core Data model.
This is the 7th migration I have built for this app. All of the previous ones have been simple (add one or two columns) migrations, which have not been any trouble and have not needed mapping models.
This Migration
The migration we are working on is fairly sizeable in comparison to previous migrations and adds a new entity between two existing entities. This requires a custom NSEntityMigrationPolicy which we have built to map the new entity relationships. We also have a *.xcmappingmodel, which defines the mapping between model 6 and the new model 7.
We have implemented our own subclass of NSMigrationManager (as per http://www.objc.io/issue-4/core-data-migration.html + http://www.amazon.com/Core-Data-Management-Pragmatic-Programmers/dp/1937785084/ref=dp_ob_image_bk).
The Problem
Apple uses the migrateStoreFromURL method of NSMigrationManager to migrate the store; however, this method seems to be built for small-to-medium dataset sizes which do not overload memory.
We are finding that the app crashes from memory overload (at 500-600 MB on iPad Air/iPad 2) because the following Apple method does not periodically release memory during the data transfer:
[manager migrateStoreFromURL:sourceStoreURL type:type options:nil
            withMappingModel:mappingModel toDestinationURL:destinationStoreURL
             destinationType:type destinationOptions:nil error:error];
Apple's Suggested Solution
Apple suggests that we divide the *.xcmappingmodel up into a series of mapping models for individual entities (https://developer.apple.com/library/ios/documentation/cocoa/conceptual/CoreDataVersioning/Articles/vmCustomizing.html#//apple_ref/doc/uid/TP40004399-CH8-SW2). This would work neatly with the progressivelyMigrateURL methods defined in the above NSMigrationManager subclasses. However, we are not able to use this method, as one entity alone will still lead to a memory overload simply due to the size of that single entity.
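For reference, the multi-pass approach looks roughly like this; the mapping model names are placeholders, and each compiled *.cdm would cover a subset of the entities:

// One migration pass per mapping model; each pass only touches the
// entities that its mapping model describes.
for (NSString *name in @[@"Model6to7-Part1", @"Model6to7-Part2"]) {
    @autoreleasepool {
        NSURL *mappingURL = [[NSBundle mainBundle] URLForResource:name withExtension:@"cdm"];
        NSMappingModel *mapping = [[NSMappingModel alloc] initWithContentsOfURL:mappingURL];
        NSMigrationManager *manager =
            [[NSMigrationManager alloc] initWithSourceModel:sourceModel
                                           destinationModel:destinationModel];
        NSError *error = nil;
        [manager migrateStoreFromURL:sourceStoreURL type:NSSQLiteStoreType options:nil
                    withMappingModel:mapping toDestinationURL:destinationStoreURL
                     destinationType:NSSQLiteStoreType destinationOptions:nil error:&error];
    }
}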
My guess would be that we need to write our own migrateStoreFromURL method, but I would like to keep this as close to what Apple intended as possible. Has anyone done this before and/or have any ideas for how we could achieve this?
The short answer is that heavy migrations are not good for iOS and should be avoided at literally any cost. They were never designed to work on a memory-constrained device.
Having said that, a few questions for you before we discuss a resolution:
Is the data recoverable? Can you download it again or is this user data?
Can you resolve the relationships between the entities without having the old relationship in place? Can it be reconstructed?
I have a few solutions but they are data dependent, hence the questions back to you.
Update 1
The data is not recoverable and cannot be re-downloaded. The data is formed from user activity within the application over a period of time (reaching up to 1 year into the past). The relationships are also not reconstructible unless we store them before we lose access to the old relationships.
OK, what you are describing is the worst case and therefore the hardest case. Fortunately, it isn't unsolvable.
First, heavy migration is not going to work. We must write code to solve this issue.
First Option
My preferred solution is to do a lightweight migration that only adds the new relationship between the (now) three entities; it does not remove the old relationship. This lightweight migration will occur at the SQLite level and will be very quick.
Once that migration has completed, we iterate over the objects and set up the new relationship based on the old one. This can be done as a background process, or it can be done piecemeal as the objects are used, etc. That is a business decision.
Once that conversion has been completed, you can then do another migration, if needed, to remove the old relationship. This last step is not necessary, but it does help keep the model clean.
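Here is a rough sketch of that conversion pass; the entity and relationship names (EntityA, Intermediate, oldB, a, b) are stand-ins for your real model, and the batching keeps the memory footprint flat:

// Walk every A that still has the old direct link to B, insert the new
// intermediate object, wire it up, and clear the old link in chunks.
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"EntityA"];
request.predicate = [NSPredicate predicateWithFormat:@"oldB != nil"];
request.resultType = NSManagedObjectIDResultType; // object IDs survive -reset
NSError *error = nil;
NSArray *objectIDs = [context executeFetchRequest:request error:&error];

NSUInteger processed = 0;
for (NSManagedObjectID *objectID in objectIDs) {
    @autoreleasepool {
        NSManagedObject *a = [context objectWithID:objectID];
        NSManagedObject *b = [a valueForKey:@"oldB"];
        NSManagedObject *middle =
            [NSEntityDescription insertNewObjectForEntityForName:@"Intermediate"
                                          inManagedObjectContext:context];
        [middle setValue:a forKey:@"a"];
        [middle setValue:b forKey:@"b"];
        [a setValue:nil forKey:@"oldB"]; // marks this object as converted
        if (++processed % 500 == 0) {
            [context save:&error];  // flush a chunk to disk
            [context reset];        // drop materialized objects to cap memory
        }
    }
}
[context save:&error];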
Second Option
Another option which has value is to export and re-import the data. This has the added value of setting up code to back up the user's data in a format that is readable on other platforms. It is fairly simple to export the data out to JSON and then set up an import routine that pulls the data into the new model along with the new relationship.
The second option has the advantage of being cleaner but requires more code as well as a "pause" in the user's activities. The first option can be done without the user even being aware there is a migration taking place.
If I understand this correctly, you have one entity that is so big that migrating it alone causes the memory overload. In that case, how about splitting the migration of that one entity into several steps, handling only some of its properties in each migration pass?
That way you won't need to write your own code, but you can still benefit from the "standard" code.
I'm in the process of a manual core data migration and keep running into Cocoa Error 134140: NSMigrationMissingMappingModelError. I've noticed this happens any time I make any change to the model, even something as small as marking a property as optional. So far, the only solution I've found when this happens is to delete my mapping model and create a new mapping model. Are there any better, less tedious solutions?
There's a menu option to resolve this. If you update your model anytime after creating your mapping model just do the following:
Select the mapping model.
Choose Editor -> Refresh Data Models.
This happens because:
The migration map identifies the model files by the entity hashes, and
When you change an entity, you change its hash.
When you change the model, the map no longer matches it, and migration fails because no matching map can be found.
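If you want to see the mismatch directly, you can compare the hashes recorded in the map with the edited model; this sketch assumes mappingModel and destinationModel are already loaded:

// Which destination entities were edited after the mapping model was built?
for (NSEntityMapping *entityMapping in mappingModel.entityMappings) {
    NSData *recorded = entityMapping.destinationEntityVersionHash;
    NSData *current =
        destinationModel.entityVersionHashesByName[entityMapping.destinationEntityName];
    if (![recorded isEqual:current]) {
        NSLog(@"%@ no longer matches the map", entityMapping.destinationEntityName);
    }
}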
The workaround is to not mess with migration until you've nailed down what the new model looks like. Then create the map with the final version of the model. If you can't finalize the new model and need to work on migration, you've already discovered the necessary procedure.
Tom is correct, but I would take it one step further: I would not do a manual/heavy migration, ever. If it cannot be done with a lightweight migration, consider doing an export/import instead. It will be faster and more performant than a heavy migration.
My standard recommendation is to keep your changes small enough so that you can always do a lightweight migration.
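For reference, lightweight migration itself is just two options passed when adding the store (coordinator and storeURL assumed):

// Opting in to automatic lightweight migration at store-add time.
NSDictionary *options = @{ NSMigratePersistentStoresAutomaticallyOption : @YES,
                           NSInferMappingModelAutomaticallyOption : @YES };
NSError *error = nil;
[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:nil
                                    URL:storeURL
                                options:options
                                  error:&error];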
Update on Import/Export
Heavyweight migration is a hold-over from OS X, where memory was cheap. It should not be used on iOS. So what is the right answer?
My recommendation to people is to handle it on your own. Lightweight migration if at all possible, even if it requires walking through several models to get from A to B. However in your case that does not sound possible.
So the second option is export/import. It is very easy to export Core Data out to JSON. I even did a quick example in a Stack Overflow post about it.
First, you stand up the old model against the current store. This involves finding the right model version and manually loading it using [[NSManagedObjectModel alloc] initWithContentsOfURL:], pointing at the right model version. There are details on how to find the right model version in my book (grin).
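A rough sketch of that step; the model and version file names are placeholders for your actual .momd contents:

// Load the specific old model version (a .mom inside the compiled .momd)
// and stand up a stack against the existing store with it.
NSURL *momdURL = [[NSBundle mainBundle] URLForResource:@"MyModel" withExtension:@"momd"];
NSURL *oldVersionURL = [momdURL URLByAppendingPathComponent:@"MyModel 2.mom"];
NSManagedObjectModel *oldModel =
    [[NSManagedObjectModel alloc] initWithContentsOfURL:oldVersionURL];

NSPersistentStoreCoordinator *coordinator =
    [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:oldModel];
NSError *error = nil;
[coordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                    URL:storeURL options:nil error:&error];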
Then export the current store out to JSON. That should be fairly quick. However, don't do this in your -applicationDidFinish… for obvious reasons.
Step two is to load up the new Core Data stack with the "current" model and import that JSON. Since the JSON is in a known format, you can import it fairly easily.
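The import half is the mirror image. This sketch assumes an export shaped like { EntityName: [row, ...] } and leaves attribute type conversion (dates, binary) and relationship re-linking to your own format:

// Re-insert every exported row into the new stack.
NSError *error = nil;
NSDictionary *payload = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:&error];
[payload enumerateKeysAndObjectsUsingBlock:^(NSString *entityName, NSArray *rows, BOOL *stop) {
    for (NSDictionary *row in rows) {
        NSManagedObject *object =
            [NSEntityDescription insertNewObjectForEntityForName:entityName
                                          inManagedObjectContext:newContext];
        [row enumerateKeysAndObjectsUsingBlock:^(NSString *key, id value, BOOL *innerStop) {
            [object setValue:value forKey:key];
        }];
    }
}];
[newContext save:&error];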
This will allow you to control the entire experience and avoid the issues that heavy migration has.
I have a rather large Core Data-based database schema (~20 entities, over 140 properties) that is undergoing large changes as it migrates from our 1.x code base over to our 2.x code base.
I'm very familiar with performing lightweight migrations, but I'm a bit flummoxed with this particular migration because there's a few entities that used to store related objects as transformable attributes on the entity itself, but now I want to migrate those to actual entities.
This seems like a perfect example of when you should use a heavy migration instead of a lightweight one, but I'm not too happy about that either. I'm not familiar with heavy migrations; one of the entities undergoing this array -> modeled-relationship conversion takes up ~90% of the rows in the database; these databases tend to be more than 200 MB in size; and I know a good portion of our customers are using iPad 1s. That, combined with the repeated warnings in Apple's documentation and Marcus Zarra's (excellent) Core Data book regarding the speed and memory usage of heavy migrations, makes me very wary and has me searching for another way to handle this situation.
WWDC 2010's "Mastering Core Data" session 118 (slides here, requires login; the 9th-to-last slide, titled 'Migration Post-Processing', is what I'm referring to) mentions a way to work around this: performing the migration, then using the store metadata to flag whether or not the custom post-processing you want to perform has been completed. I'm thinking this might be the way to go, but it feels a bit hacky (for lack of a better word) to me.
I'm also worried about leaving attributes hanging around that are, in practice, deprecated. For example, if I move entity foo's barArray attribute into a relationship between entity foo and entity bar, and I nil out barArray, barArray still exists as an attribute that can be written to and read from. A potential way to solve this would be to signal that these attributes are deprecated by prefixing their names with "deprecated", as well as overriding the accessors to assert if they're used. But with KVO there's no guaranteed compile-time solution that will prevent people from using them, and I loathe leaving 'trap code' around, especially since said 'trap code' will have to stay around as long as I potentially have customers who still need to migrate from 1.0.
This turned into more of a brain dump than I intended, so for sake of clarity, my questions are:
1) Is a heavy migration a particularly poor choice with the constraints I'm working under? (business-critical app, lack of experience with heavy migrations, databases over 200 MB in size, tens of thousands of rows, customers using iPad 1s running iOS 5+)
2) If so, is the migration post-processing technique described in session 118 my best bet?
3) If so, how can I right away/eventually eliminate those 'deprecated' attributes so that they are no longer polluting my code base?
My suggestion is to stay away from heavy migration, full stop. It is too expensive on iOS and will most likely lead to an unacceptable user experience.
In this situation I would do a lazy migration. Create a lightweight migration whose new model adds the associated objects as real entities, while leaving the old transformable attributes in place.
Then do the migration but don't move any data yet.
Change the accessor for that new relationship so that it first checks the old transformable; if the transformable is populated, it pulls the data out, copies it over to the new relationship, and then nils out the transformable.
Doing this will cause the data to move over as it is being used.
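A rough sketch of such an accessor in the NSManagedObject subclass; "bars" (the new relationship), "legacyBars" (the old transformable, assumed to hold an array of dictionaries), and the entity name "Bar" are all hypothetical:

// Lazy-migration accessor: first touch of the new relationship drains
// the old transformable into real objects, then clears it.
- (NSSet *)bars {
    [self willAccessValueForKey:@"bars"];
    NSSet *bars = [self primitiveValueForKey:@"bars"];
    [self didAccessValueForKey:@"bars"];

    NSArray *legacy = [self valueForKey:@"legacyBars"];
    if (legacy.count > 0) {
        [self setValue:nil forKey:@"legacyBars"]; // clear first to avoid re-entry
        NSMutableSet *migrated = [NSMutableSet setWithCapacity:legacy.count];
        for (NSDictionary *archived in legacy) {
            NSManagedObject *bar =
                [NSEntityDescription insertNewObjectForEntityForName:@"Bar"
                                              inManagedObjectContext:self.managedObjectContext];
            [bar setValuesForKeysWithDictionary:archived]; // copy archived fields across
            [migrated addObject:bar];
        }
        [self setValue:migrated forKey:@"bars"]; // Core Data's dynamic to-many setter
        bars = migrated;
    }
    return bars;
}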
Now there are some issues with this design.
If you want to use predicates against those new objects, it is going to be ... messy. You will want to do a two-pass fetch, i.e. fetch with a predicate that does not touch the new objects, and then do a second fetch once they are in memory so that the transformable data gets moved over.
I have a Rails project that uses MongoDB. The issue I am having is with records (documents) created from a previous model (I'm getting klass errors, but only for the older records).
Is there a quick way to fix those MongoDB documents the Rails way, using some command?
Or is there a command I can run with Mongoid to open the specific model up in Mongo, so I can poke at the documents manually (removing unneeded associations)?
The concept of schema migration would need to exist in Mongoid, and I don't think it does. If you have made simple changes like renaming or removing fields, then you can easily do that with an update statement; for anything more complicated you will need to write code.
The code you write will most likely need to go down to the driver level to alter the documents, since the mapping layer is no longer compatible with them.
In general you need to be careful when you make schema changes in your objects, since the server doesn't have that concept and can't enforce them. It is ultimately up to your code, or the framework you are using, to maintain compatibility.
This is generally an issue when you use a mapping system without doing batch upgrades to keep everything at the same schema version from the mapping layer's perspective.