Do inferred mapping models always result in lightweight migrations?

We've had a few cases of our app seemingly having trouble migrating user data when an inferred mapping model has been used. The app has taken too long to complete the migration, and the migration has failed. Yes, we shouldn't be migrating during launch!
I'd like to know whether it's possible that an inferred mapping model might not result in a lightweight migration. All the accounts I've read suggest that inferred mappings are necessarily lightweight, but I've not seen a strong statement that this is guaranteed.
The situation where we've had a problem involved deleting a property from the schema that was stored as external binary data (Allows External Storage was ticked in the schema editor). I wondered if this particular migration, with its model inferred automatically, might still require a heavy migration in which the whole database is drawn into memory.
Is there a way to tell if a specific inferred migration is heavyweight?

Unless you yourself define a custom mapping model, the migration is by definition lightweight. This is the only possible interpretation of the definitions of "lightweight" and "custom" migration in the documentation.
This is independent of the migration failures you have seen. Perhaps some of your changes necessitate a custom migration, which would explain why the lightweight migration fails.
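For what it's worth, you can ask Core Data up front whether an inferred mapping exists between two model versions, and you must opt in to lightweight migration when adding the store. A minimal Swift sketch, assuming you've already loaded both model versions yourself (all names here are placeholders):

import CoreData

// Ask Core Data whether it can infer a mapping between two model versions.
// If this throws (returns nil here), a custom mapping model -- and
// therefore a heavier migration, or an export/import -- would be required.
func canMigrateByInference(from source: NSManagedObjectModel,
                           to destination: NSManagedObjectModel) -> Bool {
    return (try? NSMappingModel.inferredMappingModel(forSourceModel: source,
                                                     destinationModel: destination)) != nil
}

// Opting in to lightweight migration when adding the store: both options
// must be set for Core Data to infer the mapping and migrate in place.
func addStore(to coordinator: NSPersistentStoreCoordinator, at storeURL: URL) throws {
    let options: [AnyHashable: Any] = [
        NSMigratePersistentStoresAutomaticallyOption: true,
        NSInferMappingModelAutomaticallyOption: true
    ]
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: options)
}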

Related

Can you migrate with Realm in a more focused and linear fashion?

Is it possible with Realm to do linear, more focused, self-contained migrations, sort of like how Rails does it?
For instance, as I currently understand it, if the Person class changes twice (so, two migrations), then when migration 2 is added and the final schema changes, migration 1 will require revision so that it migrates to the final schema.
Is there a way to migrate to an intermediate schema (what the schema used to be when migration 1 was all there was) in between migrations?
I realize that it would be less efficient, since transient tables would need to be created and extra work done instead of just migrating to the most recent schema. However, it would mean less development time spent modifying old migrations, less cognitive overhead, less test complexity, etc.
There are many scenarios in which linear migrations in Realm are supported, but the case you've described is not one of them. The reason Realm can perform migrations without having to keep a full history of all schemas, the way Core Data does through its xcdatamodeld bundle, is that Realm only needs access to the current schema (on disk) and the target schema (the in-memory model classes).
To support the use case you're requesting, you'd have to keep all the previous versions of your schema in your app so that Realm could know which tables to create at intermediate migration steps. Not only is this more work for you as a Realm user, but it's a design anti-pattern which would undoubtedly lead to less efficient and longer migrations.
I hope this makes sense, and I'm happy to explain this further if you'd like.
For more information, please refer to Realm's migration documentation which covers a lot more: https://realm.io/docs/objc/latest/#migrations
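For reference, this is roughly what Realm's single, monolithic migration block looks like in Swift (the Objective-C API is analogous). The Person type and its properties are made-up examples; note how the oldSchemaVersion < 1 branch has to be written against the final schema, which is exactly the revision burden the question describes:

import RealmSwift

// A sketch of Realm's migration model: one block receives the old schema
// version found on disk and must bring the data all the way up to the
// current in-memory model classes. Person, firstName, lastName, and
// fullName are hypothetical.
let config = Realm.Configuration(
    schemaVersion: 2,
    migrationBlock: { migration, oldSchemaVersion in
        if oldSchemaVersion < 1 {
            // "Migration 1", but written against the *current* schema
            // rather than the intermediate one that existed at the time.
            migration.enumerateObjects(ofType: "Person") { oldObject, newObject in
                let first = oldObject!["firstName"] as? String ?? ""
                let last = oldObject!["lastName"] as? String ?? ""
                newObject!["fullName"] = "\(first) \(last)"
            }
        }
        if oldSchemaVersion < 2 {
            // "Migration 2": Realm adds and removes columns automatically,
            // so simple property additions often need no code here.
        }
    }
)
Realm.Configuration.defaultConfiguration = config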

Light migration one-to-one to one-to-many

It is possible to make a lightweight migration of a one-to-one relationship to a one-to-many relationship. The schema is correctly updated, i.e. A->B becomes A->>B.
But the one-to-one reference in A->B is lost. I was expecting to be able to access B after the migration, but it is now a zombie that nothing references.
Do I have to create a mapping model for this to work?
Creating a mapping model will trigger a heavyweight migration. Heavy migrations are very slow and memory intensive; if you are running on iOS, you do not want to do that.
If the automatic migration is failing, then you probably want to do an export/import style migration instead of a heavy migration.
Essentially, for an export/import migration you walk through your existing Core Data store and export it out to some other format (I like to use JSON), then import it back into the new data model.
If you are on the desktop, then you can definitely use a heavy migration. The desktop has far more memory available, a faster CPU, etc.
Update
I thought of that, but it is complicated. I have to first export when the app starts, then change the model version of the xcdatamodeld to the next version programmatically, then let the lightweight migration do its work, then import. For each app launch after that, I have to check whether the model version is the latest so I know whether to migrate or not.
Yes, it is complicated, although that is not how you would do it. The other option is to perform a heavyweight migration, which risks exhausting memory, tripping the watchdog, and takes longer, thus creating a poor user experience.
If the lightweight migration can't do something, then a heavy migration or a manual migration is required. A heavy migration (with a mapping model) is not designed for iOS; it is a carry-over from the OS X days. Yes, you can get it to work, but it is not performant.
Doing a manual migration is not that difficult. As part of your start-up procedure, you should be checking for a migration event anyway. If a migration event occurs, you proceed into the manual migration by standing up the old stack using the old model; NSManagedObjectModel has methods to resolve this. Then you export to JSON, stand up the new stack, and import.
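A rough sketch of that flow in Swift, to make the shape of it concrete. Everything here is illustrative: the model-version names, the Person entity, and the JSON shape are hypothetical placeholders you'd replace with your own:

import CoreData

// Detect the "migration event", stand up the old stack, and export.
// (All names and paths are placeholders.)
func exportIfMigrationNeeded(storeURL: URL) throws -> Data? {
    // 1. Is the store on disk compatible with the current model?
    let metadata = try NSPersistentStoreCoordinator.metadataForPersistentStore(
        ofType: NSSQLiteStoreType, at: storeURL, options: nil)
    let currentModel = NSManagedObjectModel.mergedModel(from: nil)!
    if currentModel.isConfiguration(withName: nil,
                                    compatibleWithStoreMetadata: metadata) {
        return nil  // No migration event; load the stack normally.
    }

    // 2. Stand up the old stack by loading the old model version directly.
    let oldModelURL = Bundle.main.url(forResource: "MyModel v1",
                                      withExtension: "mom",
                                      subdirectory: "MyModel.momd")!
    let oldModel = NSManagedObjectModel(contentsOf: oldModelURL)!
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: oldModel)
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL, options: nil)
    let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    context.persistentStoreCoordinator = coordinator

    // 3. Walk the old store and export to JSON. Real code would wrap this
    //    in context.perform and cover every entity and attribute.
    var rows: [[String: Any]] = []
    let request = NSFetchRequest<NSManagedObject>(entityName: "Person")
    for object in try context.fetch(request) {
        rows.append(["firstName": object.value(forKey: "firstName") ?? NSNull()])
    }
    return try JSONSerialization.data(withJSONObject: rows)
}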

Missing Mapping Model after editing the model

I'm in the process of a manual Core Data migration and keep running into Cocoa error 134140, NSMigrationMissingMappingModelError. I've noticed this happens any time I make any change to the model, even something as small as marking a property as optional. So far, the only solution I've found when this happens is to delete my mapping model and create a new one. Are there any better, less tedious solutions?
There's a menu option to resolve this. If you update your model at any time after creating your mapping model, just do the following:
Select the mapping model.
Choose Editor -> Refresh Data Models.
This happens because:
The mapping model identifies the source and destination models by their entity version hashes, and
when you change an entity, you change its hash.
So when you change the model, the map no longer matches it, and migration fails because no matching mapping model can be found.
The workaround is to not mess with migration until you've nailed down what the new model looks like. Then create the map with the final version of the model. If you can't finalize the new model and need to work on migration, you've already discovered the necessary procedure.
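You can observe this hash matching directly: NSManagedObjectModel exposes the per-entity version hashes that mapping models are keyed on. A small Swift sketch, assuming you've loaded the two model versions yourself:

import CoreData

// List the entities whose version hashes differ between two model
// versions. Any entity listed here is enough to invalidate a mapping
// model built against the old version, yielding
// NSMigrationMissingMappingModelError at migration time.
func changedEntities(from oldModel: NSManagedObjectModel,
                     to newModel: NSManagedObjectModel) -> [String] {
    let oldHashes = oldModel.entityVersionHashesByName
    return newModel.entityVersionHashesByName.compactMap { name, hash in
        oldHashes[name] == hash ? nil : name
    }
}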
Tom is correct, but I would take it one step further: I would not do a manual/heavy migration, ever. If it cannot be done in a lightweight migration, consider doing an export/import. It will be faster and more performant than a heavy migration.
My standard recommendation is to keep your changes small enough so that you can always do a lightweight migration.
Update on Import/Export
Heavyweight migration is a hold-over from OS X, where memory was cheap. It should not be used on iOS. So what is the right answer?
My recommendation to people is to handle it on your own: lightweight migration if at all possible, even if it requires walking through several models to get from A to B. However, in your case that does not sound possible.
So the second option is export/import. It is very easy to export Core Data out to JSON. I even did a quick example in a Stack Overflow post about it.
First, you stand up the old model and the current store. This involves finding the right model version and manually loading it using [[NSManagedObjectModel alloc] initWithContentsOfURL:], pointing at the right model version. There are details on how to find the right model version in my book (grin).
Then export the data in the current store out to JSON. That should be fairly quick. However, don't do this in your -applicationDidFinish.. for obvious reasons.
Step two is to stand up the new Core Data stack with the "current" model and import that JSON. Since the JSON is in a known format, you can import it fairly easily.
This will allow you to control the entire experience and avoid the issues that heavy migration has.
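The import half of that flow, sketched in Swift. The entity name and keys continue the hypothetical export format used above; your real format will differ:

import CoreData

// Import the exported JSON into a context backed by the *new* model.
// Assumes `context` was stood up against the current model version and
// that the JSON is an array of dictionaries in the known export format.
func importPeople(from jsonData: Data, into context: NSManagedObjectContext) throws {
    guard let rows = try JSONSerialization.jsonObject(with: jsonData)
            as? [[String: Any]] else { return }
    for row in rows {
        let person = NSEntityDescription.insertNewObject(forEntityName: "Person",
                                                         into: context)
        // Copy only the keys that still exist in the new model.
        person.setValue(row["firstName"] as? String, forKey: "firstName")
    }
    try context.save()
}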

How to fix records when models have been updated with MongoDB

I have a Rails project that uses MongoDB. The issue I am having is with records (documents) made from a previous version of a model (I'm getting klass errors, but just for the older records).
Is there a quick way to fix those MongoDB documents the Rails way, using some command?
Or is there a command I can run with Mongoid to open the specific model up in Mongo, so I can poke at the documents manually (removing unneeded associations)?
The concept of schema migration would need to exist in Mongoid, and I don't think it does. If you have made simple changes like renaming or removing fields, then you can easily handle that with an update statement, but for anything more complicated you will need to write code.
The code you write will most likely need to go down to the driver level to alter the documents, since the mapping layer is no longer compatible with them.
In general you need to be careful when you make schema changes to your objects, since the server doesn't have the concept of a schema and can't enforce one. It is ultimately up to your code, or the framework you are using, to maintain compatibility.
This is generally an issue whenever you use a mapping system without doing batch upgrades to keep everything at the same schema version, at least from the mapping layer's perspective.

Are there tools for Rails to validate referential integrity of the database?

Applications have bugs, or acquire them when updated; some stay hidden and only get detected months or years later, producing orphaned records, keys pointing nowhere, etc., even with proper test suites.
Although Rails doesn't enforce referential integrity at the database level (and for some good reasons discussed elsewhere it will stay like that), it would still be nice to have tools that can check whether the database is in a consistent state.
Since the models describe what 'should be', wouldn't it be possible for an offline tool to validate the integrity of all the data? It could be run regularly, before backing up data, or just for the sake of the developers' good sleep.
Is there anything like this?
I don't know of such a tool. At least you are aware of the referential integrity hazards, so why make yourself suffer? Just use foreign key references in the first place, as dportas suggested.
To use it in a migration, add something like this:
execute('ALTER TABLE users ADD FOREIGN KEY (language_id) REFERENCES languages(id)')
to make the language_id column of users reference a valid row in the languages table.
Depending on your DBMS, this will even automatically create an index for you. There are also plugins for Rails (check out pg_on_rails) which define easy-to-use alias functions for those tasks.
Checking the integrity only on the backup file is pointless, as the error has already occurred by then and your data may already be messed up. (I've been there.)
On the other hand, when using foreign key constraints as stated above, every operation which would mess up the integrity will fail.
Think of it as going to the dentist when you feel pain (= having surgery) versus brushing your teeth once with a magical toothpaste that guarantees your teeth will be fine for the rest of your life.
Another thing to consider: an error in your application will be much easier to locate, because an exception will be raised right at the code which tries to insert the corrupt data.
So please, use foreign key constraints. You can easily add those statements to your existing database.
How about using the DBMS to enforce RI? You don't need Rails to do that for you.
