Link two objects in Core Data - iOS

I am new to Core Data and I created 2 tables, Night and Session. I managed to create a new object of Night and a new object of Session. When I try this code:
Session * session = [NSEntityDescription insertNewObjectForEntityForName:@"Session" inManagedObjectContext:[[DataManager sharedManager] managedObjectContext]];
Night * night = [NSEntityDescription insertNewObjectForEntityForName:@"Night" inManagedObjectContext:[[DataManager sharedManager] managedObjectContext]];
night.sessions = [NSSet setWithObject:session];
the session gets attached to the night, and the cool thing is that when I fetch this night I can get its sessions using:
currentNight.sessions
But I can't see this link in the DB tables :(
UPDATE:
I mean that when I write night.sessions = [NSSet setWithObject:session]; I expect to see it in the DB table (yes, in the DB.sqlite file).
I thought that I should see something there...

Core Data is not a relational database. It creates a structure of its own: it defines the database table structure according to your managed objects. For debugging, you can see which queries Core Data is firing against SQLite. This will show you how Core Data gets data out of these two tables.
Go to Product -> Edit Scheme, then from the left panel select Run yourApp.app and go to the main panel's Arguments tab.
There you can add an Argument Passed On Launch.
You should add -com.apple.CoreData.SQLDebug 1
Press OK and you are all set.
The next time you run, it will log all the queries it executes to fetch data from your tables.

It's not clear to me what your question is. But:
A context is a scratchpad. Its contents will not be moved to the persistent store until you -save:. If you drop into the filing system and inspect your persistent store outside of your app without having saved, your changes will not be recorded there.
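For example (a minimal sketch, assuming the DataManager singleton from the question and generated Night/Session subclasses):
NSManagedObjectContext *context = [[DataManager sharedManager] managedObjectContext];

Session *session = [NSEntityDescription insertNewObjectForEntityForName:@"Session"
                                                 inManagedObjectContext:context];
Night *night = [NSEntityDescription insertNewObjectForEntityForName:@"Night"
                                              inManagedObjectContext:context];
night.sessions = [NSSet setWithObject:session];

// Nothing reaches the .sqlite file until the context is saved.
NSError *saveError = nil;
if (![context save:&saveError]) {
    NSLog(@"Save failed: %@", saveError);
}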
For all of the stores the on-disk format is undefined and implementation dependent. So inspecting them outside of Core Data is not intended to show any specific result.
Anecdotally, if you're using a SQLite store then you should look either for a column on the session table named after the inverse relationship (something like ZNIGHT) that holds the row ID of the linked night, or, for a many-to-many, a separate join table whose rows pair the two row IDs. Core Data stores relationships with appropriately named columns and direct row IDs, which are something SQLite supplies implicitly. It does not use an explicit foreign/primary key relationship.
To emphasise the point: that's an implementation-specific detail of Core Data. It's no better defined than the exact ARM assembly LLVM will emit for a particular piece of code. It's as helpful to have a sense of it as it is to know how the CPU tends to cache or branch predict, but you shouldn't expect to be able to take the SQLite file and use it elsewhere, or in any way interact with it other than via Core Data.

Related

Core Data fetch predicate nil check failing/unexpected results?

I have a Core Data layer with several thousand entities, constantly syncing to a server. The sync process uses fetch requests to check for deleted_at for the purposes of soft-deletion. There is a single context performing save operations in a performBlockAndWait call. The relationship mapping is handled by the RestKit library.
The CoreDataEntity class is a subclass of NSManagedObject, and it is also the superclass for all our different core data object classes. It has some attributes that are inherited by all our entities, such as deleted_at, entity_id, and all the boilerplate fetch and sync methods.
My issue is some fetch requests seem to return inconsistent results after modifications to the objects. For example after deleting an object (setting deleted_at to the current date):
[CoreDataEntity fetchEntitiesWithPredicate:[NSPredicate predicateWithFormat:#"deleted_at==nil"]];
Returns results with deleted_at == [NSDate today]
I have successfully worked around this behavior by additionally looping through the results and removing the entities with deleted_at set, however I cannot fix the converse issue:
[CoreDataEntity fetchEntitiesWithPredicate:[NSPredicate predicateWithFormat:#"deleted_at!=nil"]];
Is returning an empty array in the same conditions, preventing a server sync from succeeding.
I have confirmed deleted_at is set on the object, and the context save was successful. I just don't understand where to reset whatever cache is causing the outdated results?
Thanks for any help!
Edit: Adding a little more information, it appears that once one of these objects becomes corrupted, the only way to get it to register is to modify the value again. Could this be some sort of Core Data index not updating when a value is modified?
Update: It appears to be a problem with RestKit https://github.com/RestKit/RestKit/issues/2218
You are apparently using some syntactic sugar extension to Core Data. I suppose that in your case it is SheepData, right?
fetchEntitiesWithPredicate: is implemented there as follows:
+ (NSArray*)fetchEntitiesWithPredicate:(NSPredicate*)aPredicate
{
    return [self fetchEntitiesWithPredicate:aPredicate inContext:[SheepDataManager sharedInstance].managedObjectContext];
}
Are you sure that [SheepDataManager sharedInstance].managedObjectContext receives all the changes that you are making to your objects? Does it receive save notifications, or is it a child context of your saving context?
Try to replace your fetch one-liner with this:
[<your saving context> performBlockAndWait:^{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"CoreDataEntity"];
    request.predicate = [NSPredicate predicateWithFormat:@"deleted_at==nil"];
    NSArray *results = [<your saving context> executeFetchRequest:request error:NULL];
}];
First, after a save have you looked in the store to make sure your changes are there? Without seeing your entire Core Data stack it is difficult to get a solid understanding of what might be going wrong. If you are saving and you see the changes in the store, then the question turns to your contexts: how they are built and when. If you are dealing with sibling contexts, that could be causing your issue.
More detail is required as to how your core data stack looks.
Yes, the changes are there. As I mentioned in the question, I can loop through my results and remove all those with deleted_at set successfully
That wasn't my question. There is a difference between looking at objects in memory and looking at them in the SQLite file on disk. The questions I have about this behavior are:
Are the changes being persisted to disk before you query for them again?
Are you working with multiple contexts and potentially trying to fetch from a stale sibling?
Thus my questions about on disk changes and what your core data stack looks like.
Threading
If you are using one context, are you using more than one thread in your app? If so, are you using that context on more than one thread?
I can see a situation where if you are violating the thread confinement rules you can be corrupting data like this.
Try adding an extra attribute deleted that is a bool with a default of false. Then the attribute is always set and you can look for entities that are either true or false depending on your needs at the moment. If the value is true then you can look at deleted_at to find out when.
Alternatively try setting the deleted_at attribute to some old date (like perhaps 1 Jan 1980), then anything that isn't deleted will have a fixed date that is too old to have been set by the user.
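A minimal sketch of that flag approach (the attribute name markedDeleted and the context variable are assumptions here; an attribute literally named deleted can collide with NSManagedObject's own -isDeleted accessor):
// Sketch only: "markedDeleted" is an assumed Boolean attribute with a default of NO.
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"CoreDataEntity"];
request.predicate = [NSPredicate predicateWithFormat:@"markedDeleted == YES"];

NSError *error = nil;
NSArray *softDeleted = [context executeFetchRequest:request error:&error];
// Flip the predicate to markedDeleted == NO for the live objects; deleted_at then
// only tells you when a flagged object was removed.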
Edit: There is likely some issue with deleted_at having never been touched on some entities that is confusing the system. It is also possible that you have set the fetch request to return dictionary-style results (NSDictionaryResultType), in which case recent changes will not be reflected in the fetch results.

How to sync data from web service with Core Data?

I'm trying to sync my data from a web service in a simple way. I download my data using AFNetworking, and using a unique identifier on each object, I want to either insert, delete or update that data.
The problem is that with Core Data you have to actually insert objects into the NSManagedObjectContext to instantiate NSManagedObjects. Like this:
MyModel *model = (MyModel *)[NSEntityDescription insertNewObjectForEntityForName:@"MyModel" inManagedObjectContext:moc];
model.value = [jsonDict objectForKey:@"value"];
So when I get the data from the web service, I insert them right away in Core Data. So there's no real syncing going on: I just delete everything beforehand and then insert what's being returned from my web service.
I guess there's a better way of doing this, but I don't know how. Any help?
You are running into the classic insert/update/delete paradigm.
The answer is, it depends. If you get a chunk of json data then you can use KVC to extract the unique ids from that chunk and do a fetch against your context to find out what exists already. From there it is a simple loop over the chunk of data, inserting and updating as appropriate.
If you do not get the data in a nice chunk like that then you will probably need to do a fetch for each record to determine whether it is an insert or an update. That is far more expensive and should be avoided. Batch fetching beforehand is recommended.
Deleting is just about as expensive as fetching/updating since you need to fetch the objects to delete them anyway so you might as well handle updating properly instead.
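A rough sketch of that batched lookup (jsonObjects, moc and the identifier attribute are assumptions; identifier matches the attribute used in the update below):
// One fetch for the whole chunk instead of one fetch per record.
NSArray *incomingIDs = [jsonObjects valueForKey:@"identifier"];   // KVC pulls every id out of the JSON array

NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"MyModel"];
request.predicate = [NSPredicate predicateWithFormat:@"identifier IN %@", incomingIDs];

NSError *error = nil;
NSArray *existingObjects = [moc executeFetchRequest:request error:&error]; // everything that already exists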
Update
Yes there is an efficient way of building the dictionary out of the Core Data objects. Once you get your array of existing objects back from Core Data, you can turn it into a dictionary with:
NSArray *array = ...; //Results from Core Data fetch
NSDictionary *objectMap = [NSDictionary dictionaryWithObjects:array forKeys:[array valueForKey:@"identifier"]];
This assumes that you have an attribute called identifier in your Core Data entity. Change the name as appropriate.
With that one line of code you now have all of your existing objects in a NSDictionary that you can then look up against as you walk the JSON.
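For example, the walk over the JSON could look roughly like this (again a sketch; objectMap is the dictionary built above, jsonObjects and moc are assumed, and value is the attribute from the question):
for (NSDictionary *json in jsonObjects) {
    NSManagedObject *object = objectMap[json[@"identifier"]];
    if (object == nil) {
        // Not in the store yet -> insert
        object = [NSEntityDescription insertNewObjectForEntityForName:@"MyModel"
                                              inManagedObjectContext:moc];
        [object setValue:json[@"identifier"] forKey:@"identifier"];
    }
    // Existing or freshly inserted -> update the attributes
    [object setValue:json[@"value"] forKey:@"value"];
}

NSError *error = nil;
[moc save:&error];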
The easiest thing to do is to map the JSON onto an entity that corresponds to it. Once you've mapped it, determine whether an object matching the entity's ID already exists; if so, fetch that entity and merge the changes. If not, create a new entity in Core Data and map the JSON onto it.
I'm building an app where I do client-side syncing with Evernote. They keep a syncUpdate number on all of their objects and at the server level. So when I start my sync I check whether my client's syncUpdate count is less than the server's. If so, I know I am out of sync. If my updateCount is at 400 and the server is at 410, I tell the server to provide me with all objects between updateCount 400 and 410. Then I check whether I already have each object or not and perform my update/create.
Every time an object is modified on the server, that object's updateCount is incremented along with the server's.
The server also keeps a time stamp of the last update, which I can check against also.

What does NSManagedObjectContext save do in terms of SQLite?

I'm porting some iOS persistence functionality to Android and trying to understand save(), in order to replicate the functionality in Android (pure SQLite).
Documentation says:
save:
Attempts to commit unsaved changes to registered objects to the receiver’s parent store.
Doesn't help a lot.
I know that iOS uses SQLite so this has to translate to SQLite somehow.
Looks like save is an upsert: it will insert the data if it is not there yet, and otherwise update it.
If this is true (also if not, if the question is still valid) - how is it determined which row to update? I don't see how to add a unique constraint in Xcode, so if I have e.g.:
id | name | price
1 | apple | 2.0
2 | lemon | 1.0
with "id" being the internal row id,
and I get new model data "lemon" -> 3.0, when I update the moc, how does the database know that it has to update this row?:
2 | lemon | 1.0
In SQLite I would add a unique constraint on the name, but I don't know how it's done in iOS.
I'm not an iOS dev, sorry for a possibly super-ignorant or super-strange question.
Thanks.
It is really difficult to discuss Core Data in terms of databases because it is not a database. It uses one to persist data but that is just about it.
Looks like save is an upsert - will insert the data if not there yet, and otherwise update.
An NSManagedObjectContext is the current state of not just one object (or row in database terms) but multiple. So when you ask the NSManagedObjectContext to 'save' it is saving the state of all the objects in the context. If an object is new, it will be the equivalent of an insert. If the object already exists, it will be the equivalent of an update. However, if at some point an object is deleted, the 'save' method will also remove the object from the SQLite database. The 'save' method specifically saves the state of the NSManagedObjectContext.
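You can see this for yourself before a save (a small sketch, assuming moc is your NSManagedObjectContext):
// A context tracks pending work for many objects at once; -save: commits all of it.
NSLog(@"will be inserted: %@", moc.insertedObjects);   // equivalent of INSERTs
NSLog(@"will be updated:  %@", moc.updatedObjects);    // equivalent of UPDATEs
NSLog(@"will be deleted:  %@", moc.deletedObjects);    // equivalent of DELETEs

NSError *error = nil;
if (![moc save:&error]) {
    NSLog(@"save failed: %@", error);
}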
If this is true (also if not, if the question is still valid) - how is it
determined which row to update? I don't see how to add a unique constraint in Xcode
That is because Core Data handles the unique identity of objects. There is no default 'id' column to place a unique identifier. However, you can create an attribute (i.e. column/field) to hold a unique identifier if the database will be persisted across many devices, which I personally had to do at one time since the 'objectID' is not practical to use. In Android, you will have to maintain the unique identity of each row yourself unless you opt to use auto incrementation.
when I update the moc, how does the database know that it has to
update this row?
At one point or another, you ask the NSManagedObjectContext to insert a new instance of an "Entity" (i.e. a new row in its table):
NSManagedObject *managedObject = [NSEntityDescription insertNewObjectForEntityForName:@"EntityName" inManagedObjectContext:managedObjectContext];
To update an entity, you could retrieve it by using:
NSManagedObject *managedObject = [managedObjectContext objectWithID:managedObject.objectID];
Make any adjustments and then 'save' the NSManagedObjectContext. The objectID is the unique identifier that was automatically assigned when the object was inserted. Core Data handles the boilerplate code of inserting and updating rows, so you end up with the abstract version seen in the examples. If you save a few NSManagedObjects and open the SQLite file, you will find that it is very similar to any other database, apart from a few Core Data specific fields that it uses for management.
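A small sketch of that flow, using the lemon example from the question (the entity and attribute names here are placeholders):
NSManagedObject *lemon = [NSEntityDescription insertNewObjectForEntityForName:@"EntityName"
                                                       inManagedObjectContext:managedObjectContext];
[lemon setValue:@"lemon" forKey:@"name"];
[lemon setValue:@1.0 forKey:@"price"];
[managedObjectContext save:NULL];          // row is written; lemon.objectID is now permanent

// Later: look the same object up by its ID, change it, and save again
NSManagedObject *sameLemon = [managedObjectContext objectWithID:lemon.objectID];
[sameLemon setValue:@3.0 forKey:@"price"];
[managedObjectContext save:NULL];          // Core Data updates exactly that row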
I would suggest creating a new Master Detail Application project, run it in the simulator, save a couple entries, and open the SQLite file. You can find it in
/Users/<username>/Library/Application Support/iPhone Simulator/<iOS Version>/Applications/<Application UDID>/Documents/
Opening the SQLite file will show you that the database Core Data maintains is very similar to any other SQLite database and may help out with understanding the processes.
I don't know the following to be true, but I think I'm not far off.
An NSManagedObjectContext has a reference to objects (NSManagedObject) that are composed using the data from the SQLite database. These objects all have the objectID property, which is a unique identifier to the row in the SQLite database allowing you to uniquely, even between contexts, identify an object/row. When you change an object's property, this doesn't actually change anything in the database. The context knows about the changes, and when you call save:, it will go to the database and update all the records.
This is always an UPDATE, as you have to call -[NSEntityDescription insertNewObjectForEntityForName:inManagedObjectContext:] to get a reference to an object. At that point, a record is already inserted and it is given an objectID.
NSManagedObjectContext is kind of a representation of the data model. It comes from the framework called Core Data. By using Core Data, we do not manipulate the SQLite database directly, which means we do not write any SQL queries; we just perform all updates, inserts, and deletes on the NSManagedObjectContext. And when we call save(), the NSManagedObjectContext tells the database which rows were updated, which were deleted, and which were inserted. Here is another question which might help you understand more about NSManagedObjectContext.

Optimistic locking support in NSIncrementalStore subclass

I am implementing a custom NSIncrementalStore subclass which uses a relational database for persistent storage. One of the things that I still struggle with is the support for optimistic locking.
(feel free to skip this lengthy description right to my question below)
I analyzed how Core Data's SQLite incremental store approaches this problem by examining SQL logs produced by it and came up with following conclusions:
Each entity table in the database has a Z_OPT column which indicates the number of times a particular instance of this entity (row) has been modified, starting from 1 (initial insertion).
Each time a managed object is modified, Z_OPT value in its corresponding database row is incremented.
The store maintains cache (referred to as row cache in Core Data docs) of NSIncrementalStoreNode instances, each having a version property equal to Z_OPT value returned by previous SELECT or UPDATE SQL query on managed object's row.
When a managed object is returned from NSManagedObjectContext (e.g. by executing NSFetchRequest on it), MOC creates snapshot of this object which contains this version number.
When the object is modified or deleted, Core Data makes sure that it has not been modified or deleted outside the context by comparing the versions of the cached row and the object snapshot. All of this happens when -save: is called on the context that the object belongs to. If the versions are different, then a merge conflict is detected and handled based on the set merging policy.
When the MOC is being saved, the -newValuesForObjectWithID:withContext:error: method is called for each modified/deleted object, which in turn returns an NSIncrementalStoreNode with a version number. This version is then compared to the snapshot's version and if they are different, the save fails with appropriate merge conflicts (at least with the default merge policy).
This simple use case works properly with my store since -newValuesForObjectWithID:withContext:error: checks the row cache first which is enough if the object was concurrently modified in other context using the same store instance. If this is the case, then the cache contains updated row with higher version number which is enough to detect a conflict.
But how can I detect that the underlying database has been modified outside my store, possibly by another application or another store instance using the same database file? I know this is an infrequent edge case but Core Data handles it properly and I would prefer to do the same.
Core Data's store uses SQL queries like these to update/delete object's row:
UPDATE ZFOO SET Z_OPT=Y, (...) WHERE (...) AND Z_OPT=X
DELETE FROM ZFOO WHERE (...) AND Z_OPT=X
where:
X - version number last known to the store (from cache)
Y - new version number
If such a query fails (no rows affected), the row is updated in the store's cache and its version is compared against the one previously cached.
My question is: how can a custom NSIncrementalStore inform Core Data that an optimistic locking failure has occurred for some updated/deleted/locked objects? It is only the store that is able to tell that, when it handles the NSSaveChangesRequest passed to its -executeRequest:withContext:error: method.
If the underlying database does not change under the store, then conflicts are detected, since Core Data calls -newValuesForObjectWithID:withContext:error: on each modified/deleted/locked object prior to executing the save changes request on the store. I was not able to find any way for NSIncrementalStore to inform Core Data that an optimistic locking failure has occurred after it started to handle the save request. Is there some undocumented way to do that? Core Data seems to throw some exception in that case which is then magically translated into a failed save request with an NSError listing all the conflicts. I am only able to mimic that partly by returning nil from -executeRequest:withContext:error: and creating the error message myself. I think there must be a way to use the standard Core Data conflict handling mechanism in this scenario as well.
I realize that this is not an answer to your question, but I will try to give you my point of view on Core Data and how it correlates to databases:
(1st level cache)
NSPesistentStoreCoordinator + NSPersistentStore == A single connection to the database
(2nd level cache)
NSManagedObjectContext == cache over the connection holding changes
So, to my understanding your issue is that you have multiple connections to your store, each making changes, but you have no central version control over your records.
Your store will receive a -executeRequest:withContext:error: with NSSaveRequestType
You will then be responsible for verifying that the record versions match. If they do not, you report a version mismatch between your connection (level 1) and your store.
To be able to do this your store must report changes across all connections to it (a ConnectionManager), or it might offer hooks into the changes performed on it.
I'm no SQLite expert, but the SQLite API does have something to offer in that area:
update hook
commit hook
changes
total changes
(I have no experience in setting these kinds of hooks, but if Core Data uses them it will not show in the debug logs.)
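For illustration, registering an update hook on a connection you open yourself might look like this (a sketch only; it does not attach to the connection Core Data's built-in SQLite store holds internally):
#include <sqlite3.h>
#include <stdio.h>

// Called for every INSERT, UPDATE or DELETE performed through this connection.
static void changeLogger(void *context, int operation, const char *database,
                         const char *table, sqlite3_int64 rowid)
{
    // operation is SQLITE_INSERT, SQLITE_UPDATE or SQLITE_DELETE
    printf("op %d on %s.%s rowid %lld\n", operation, database, table, (long long)rowid);
}

static void installChangeLogging(const char *path)
{
    sqlite3 *connection = NULL;
    if (sqlite3_open(path, &connection) == SQLITE_OK) {
        sqlite3_update_hook(connection, changeLogger, NULL);
    }
}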
You can report these errors by setting the error pointer (NSError **) and filling it with the same information the Core Data coordinator would set (create merge conflicts and set their information as needed).
Note that an optimistic locking failure will only occur during -executeRequest:withContext:error:
(unless you have a rogue connection to the store, one that is not tracked by the manager.
To support this behaviour your manager might need to verify each record as it is committed for a save [huge performance cost], or use some hooks into the changes recently made to records.)
To handle multiple connections to your store you might need to have a shared cache of NSIncrementalStoreNode, keyed by the store url:
static @{
    url1 : actualCacheMapping1,
    url2 : actualCacheMapping2,
    ...
}
Each save a connection makes to the store will be verified against the actual cache for that store URL.
Hope this makes some sense to you.
My question is: how can a custom NSIncrementalStore inform Core Data that optimistic locking failure has occurred for some updated/deleted/locked objects? It is only the store that is able to tell that when it handles NSSaveChangesRequest passed to it its -executeRequest:withContext:error: method.
In an NSIncrementalStore, NSIncrementalStoreNodes represent the store snapshots. The version property of the node is the optimistic locking primitive. The persistent store is responsible for detecting optimistic locking failures at the store level, while the managed object context can detect them higher up. An optimistic locking failure at the store level might happen if the system the store is talking to was changed by something else, and there is a conflict between that system's state and the representation of that state in the persistent store. For example, if the store was communicating with a web service and the web service data was changed by another user, etc.
If an optimistic locking failure is detected in your store implementation during a save, your store is responsible for creating NSMergeConflict objects describing it. These will be propagated up by the NSPersistentStoreCoordinator.
[[NSMergeConflict alloc] initWithSource:managedObject newVersion:newVersion oldVersion:oldVersion cachedSnapshot:inMemorySnapshot persistedSnapshot:storedSnapshot];
Snapshot dictionaries should include all modelled attribute property names as keys along with their values. This does not include relationships. For some stores, using the values from the reference objects or NSIncrementalStoreNodes may suffice as long as they only include the modelled attribute property names as keys (and those are easy to get from the entity description).
Once these objects have been created, create an NSError in the NSCocoaErrorDomain with the code NSPersistentStoreSaveConflictsError. The userInfo object should contain the key NSPersistentStoreSaveConflictsErrorKey, which should contain an array of the NSMergeConflict objects. Return that from the save request, and the NSPersistentStoreCoordinator will be responsible for finding a resolution. Remember, you should not generate merge conflicts for conflicts between the state of objects in the NSManagedObjectContext and your store, only for conflicts between whatever in-memory or cached state is in your store and wherever the data is kept or persisted (like a web service, or database, etc.)
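Put together, the end of a save-handling -executeRequest:withContext:error: might look roughly like this (a sketch; conflictingObjects, the version numbers and the two snapshot dictionaries stand for whatever your store tracked for the failed rows):
NSMutableArray *conflicts = [NSMutableArray array];
for (NSManagedObject *object in conflictingObjects) {
    NSMergeConflict *conflict =
        [[NSMergeConflict alloc] initWithSource:object
                                     newVersion:newVersion          // version found in the database
                                     oldVersion:oldVersion          // version your cache last saw
                                 cachedSnapshot:inMemorySnapshot    // attribute name -> value
                              persistedSnapshot:storedSnapshot];    // attribute name -> value
    [conflicts addObject:conflict];
}

if (conflicts.count > 0) {
    if (error != NULL) {
        *error = [NSError errorWithDomain:NSCocoaErrorDomain
                                     code:NSPersistentStoreSaveConflictsError
                                 userInfo:@{ NSPersistentStoreSaveConflictsErrorKey : conflicts }];
    }
    return nil;   // the coordinator turns this into the usual merge-conflict handling
}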

Core Data: delete all objects of an entity type, ie clear a table

This has been asked before, but no solution described that is fast enough for my app needs.
In the communications protocol we have set up, the server sends down a new set of all customers every time a sync is performed. Earlier, we had been storing this as a plist. Now we want to use Core Data.
There can be thousands of entries. Deleting each one individually takes a long time. Is there a way to delete all rows in a particular table in Core Data?
delete from customer
This call in sqlite happens instantly. Going through each one individually in Core Data can take 30 seconds on an iPad1.
Is it reasonable to shut down Core Data, i.e. drop the persistence store and all managed object contexts, then drop into sqlite and perform the delete command against the table? No other activity is going on during this process so I don't need access to other parts of the database.
Dave DeLong is an expert at, well, just about everything, and so I feel like I'm telling Jesus how to walk on water. Granted, his post is from 2009, which was a LONG time ago.
However, the approach in the link posted by Bot is not necessarily the best way to handle large deletes.
Basically, that post suggests to fetch the object IDs, and then iterate through them, calling delete on each object.
The problem is that when you delete a single object, it has to go handle all the associated relationships as well, which could cause further fetching.
So, if you must do large scale deletes like this, I suggest adjusting your overall database so that you can isolate tables in specific core data stores. That way you can just delete the entire store, and possibly reconstruct the small bits that you want to remain. That will probably be the fastest approach.
However, if you want to delete the objects themselves, you should follow this pattern...
Do your deletes in batches, inside an autorelease pool, and be sure to pre-fetch any cascaded relationships. All these, together, will minimize the number of times you have to actually go to the database, and will, thus, decrease the amount of time it takes to perform your delete.
In the suggested approach, which comes down to...
Fetch ObjectIds of all objects to be deleted
Iterate through the list, and delete each object
If you have cascade relationships, you will encounter a lot of extra trips to the database, and IO is really slow. You want to minimize the number of times you have to visit the database.
While it may initially sound counterintuitive, you want to fetch more data than you think you want to delete. The reason is that all that data can be fetched from the database in a few IO operations.
So, on your fetch request, you want to set...
[fetchRequest setRelationshipKeyPathsForPrefetching:@[@"relationship1", @"relationship2", ..., @"relationship3"]];
where those relationships represent all the relationships that may have a cascade delete rule.
Now, when your fetch is complete, you have all the objects that are going to be deleted, plus the objects that will be deleted as a result of those objects being deleted.
If you have a complex hierarchy, you want to prefetch as much as possible ahead of time. Otherwise, when you delete an object, Core Data is going to have to go fetch each relationship individually for each object so that it can manage the cascade delete.
This will waste a TON of time, because you will do many more IO operations as a result.
Now, after your fetch has completed, then you loop through the objects, and delete them. For large deletes you can see an order of magnitude speed up.
In addition, if you have a lot of objects, break it up into multiple batches, and do it inside an auto release pool.
Finally, do this in a separate background thread, so your UI does not block. You can use a separate MOC, connected to the persistent store coordinator, and have the main MOC handle DidSave notifications to remove the objects from its context.
While this looks like code, treat it as pseudo-code...
NSManagedObjectContext *deleteContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
// Get a new PSC for the same store
deleteContext.persistentStoreCoordinator = getInstanceOfPersistentStoreCoordinator();

// Each call to performBlock executes in its own autoreleasepool, so we don't
// need to explicitly use one if each chunk is done in a separate performBlock
__block void (^block)(void) = ^{
    NSError *error = nil;
    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"MyEntity"];

    // Only fetch the number of objects to delete this iteration
    fetchRequest.fetchLimit = NUM_ENTITIES_TO_DELETE_AT_ONCE;

    // Prefetch all the cascade relationships
    fetchRequest.relationshipKeyPathsForPrefetching = prefetchRelationships;

    // Don't need all the properties
    fetchRequest.includesPropertyValues = NO;

    NSArray *results = [deleteContext executeFetchRequest:fetchRequest error:&error];
    if (results.count == 0) {
        // Didn't get any objects for this fetch
        if (nil == results) {
            // Handle error
        }
        return;
    }

    for (MyEntity *entity in results) {
        [deleteContext deleteObject:entity];
    }

    [deleteContext save:&error];
    [deleteContext reset];

    // Keep deleting objects until they are all gone
    [deleteContext performBlock:block];
};

[deleteContext performBlock:block];
Of course, you need to do appropriate error handling, but that's the basic idea.
Fetch in batches if you have so much data to delete that it will cripple memory.
Don't fetch all the properties.
Prefetch relationships to minimize IO operations.
Use autoreleasepool to keep memory from growing.
Prune the context.
Perform the task on a background thread.
If you have a really complex graph, make sure you prefetch all the cascaded relationships for all entities in your entire object graph.
Note, your main context will have to handle DidSave notifications to keep its context in step with the deletions.
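That notification handling might look roughly like this (a sketch; mainContext is assumed to be your main-queue context and deleteContext the background context from above):
[[NSNotificationCenter defaultCenter] addObserverForName:NSManagedObjectContextDidSaveNotification
                                                  object:deleteContext
                                                   queue:nil
                                              usingBlock:^(NSNotification *note) {
    // Fold the background deletions into the main context on its own queue.
    [mainContext performBlock:^{
        [mainContext mergeChangesFromContextDidSaveNotification:note];
    }];
}];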
EDIT
Thanks. Lots of good points. All well explained except, why create the
separate MOC? Any thoughts on not deleting the entire database, but
using sqlite to delete all rows from a particular table? – David
You use a separate MOC so the UI is not blocked while the long delete operation is happening. Note, that when the actual commit to the database happens, only one thread can be accessing the database, so any other access (like fetching) will block behind any updates. This is another reason to break the large delete operation into chunks. Small pieces of work will provide some chance for other MOC(s) to access the store without having to wait for the whole operation to complete.
If this causes problems, you can also implement priority queues (via dispatch_set_target_queue), but that is beyond the scope of this question.
As for using sqlite commands on the Core Data database, Apple has repeatedly said this is a bad idea, and you should not run direct SQL commands on a Core Data database file.
Finally, let me note this. In my experience, I have found that when I have a serious performance problem, it is usually a result of either poor design or improper implementation. Revisit your problem, and see if you can redesign your system somewhat to better accommodate this use case.
If you must send down all the data, perhaps query the database in a background thread and filter the new data so you break your data into three sets: objects that need modification, objects that need deletion, and objects that need to be inserted.
This way, you are only changing the database where it needs to be changed.
If the data is almost brand new every time, consider restructuring your database where these entities have their own database (I assume your database already contains multiple entities). That way you can just delete the file, and start over with a fresh database. That's fast. Now, reinserting several thousand objects is not going to be fast.
You have to manage any relationships manually, across stores. It's not difficult, but it's not automatic like relationships within the same store.
If I did this, I would first create the new database, then tear down the existing one, replace it with the new one, and then delete the old one.
If you are only manipulating your database via this batch mechanism, and you do not need object graph management, then maybe you want to consider using sqlite instead of Core Data.
iOS 9 and later
Use NSBatchDeleteRequest. I tested this in the simulator on a Core Data entity with more than 400,000 instances and the delete was almost instantaneous.
// fetch all items in entity and request to delete them
let fetchRequest = NSFetchRequest(entityName: "MyEntity")
let deleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)
// delegate objects
let myManagedObjectContext = (UIApplication.sharedApplication().delegate as! AppDelegate).managedObjectContext
let myPersistentStoreCoordinator = (UIApplication.sharedApplication().delegate as! AppDelegate).persistentStoreCoordinator
// perform the delete
do {
    try myPersistentStoreCoordinator.executeRequest(deleteRequest, withContext: myManagedObjectContext)
} catch let error as NSError {
    print(error)
}
Note that the answer that @Bot linked to and that @JodyHagins mentioned has also been updated to this method.
Really your only option is to remove them individually. I do this with a ton of objects and it is pretty fast. Here is a way someone does it by loading only the managed object IDs, which avoids unnecessary overhead and makes it faster.
Core Data: Quickest way to delete all instances of an entity
Yes, it's reasonable to delete the persistent store and start from scratch. This happens fairly quickly. What you can do is remove the persistent store (via the persistent store URL) from the persistent store coordinator, and then use that URL to delete the database file from disk. I did it using NSFileManager's removeItemAtURL:.
Edit: one thing to consider: make sure to disable/release the current NSManagedObjectContext instance, and to stop any other thread which might be doing something with an NSManagedObjectContext that is using the same persistent store. Your application will crash if a context tries to access the persistent store.
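A rough sketch of that teardown (psc is assumed to be your NSPersistentStoreCoordinator, with all contexts already stopped):
NSPersistentStore *store = [psc.persistentStores firstObject];
NSURL *storeURL = store.URL;

NSError *error = nil;
if ([psc removePersistentStore:store error:&error]) {
    // On newer iOS versions the -wal and -shm sidecar files should be removed as well.
    [[NSFileManager defaultManager] removeItemAtURL:storeURL error:&error];

    // Recreate an empty store at the same location.
    [psc addPersistentStoreWithType:NSSQLiteStoreType
                      configuration:nil
                                URL:storeURL
                            options:nil
                              error:&error];
}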
