I'm using Core Data as my local app storage manager.
Sometimes I save BLOB data (images, videos, etc.), and I notice that the app size increases, which is expected.
The problem is that when I delete some of that data, the app size doesn't shrink.
I've downloaded the app's container and noticed that the appname.sqlite and appname.sqlite-wal are still large.
Does anyone know why this is happening?
Note: I'm using Core Data with CloudKit via NSPersistentCloudKitContainer, in case that helps.
Under Core Data is an SQLite database. The documentation for SQLite's VACUUM command explains what is happening:
https://www.sqlite.org/lang_vacuum.html
When content is deleted from an SQLite database, the content is not usually erased but rather the space used to hold the content is marked as being available for reuse.
You could try connecting to the underlying SQLite database and running VACUUM on it directly.
How to VACUUM a Core Data SQLite db?
But, in your case, I'd follow @matt's suggestion of not using BLOBs this way.
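If you control the store setup, Core Data can also run VACUUM for you when the store is added. A minimal sketch, assuming your container and model are named "AppName" (the name is a placeholder):

```swift
import CoreData

// Sketch: ask Core Data to run VACUUM on the SQLite store as it is
// loaded, reclaiming pages freed by deleted rows. The model name is
// a placeholder for your own.
let container = NSPersistentCloudKitContainer(name: "AppName")
if let description = container.persistentStoreDescriptions.first {
    // NSSQLiteManualVacuumOption tells the SQLite store to vacuum
    // itself when it is opened.
    description.setOption(true as NSNumber, forKey: NSSQLiteManualVacuumOption)
}
container.loadPersistentStores { _, error in
    if let error = error { fatalError("Store failed to load: \(error)") }
}
```

Note that vacuuming on every launch has a startup cost proportional to the database size, so you may prefer to do this only occasionally.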
The situation
I am using Core Data's "Allows External Storage" option to store compressed images and small audio files in Core Data. Performance benchmarks have shown that this is actually quite performant. I am also using Core Data's NSPersistentCloudKitContainer to sync my database with iCloud.
"Allows External Storage" automatically saves files that are bigger than ~500 KB (?) to the file system and stores only a reference in the database. This works nicely. For instance, a 1 MB image file is stored as an external record and takes up the expected 1 MB of iCloud storage after syncing. Files smaller than ~500 KB are not stored as external records (CKAssets), but as binaryData in the database record.
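For reference, the "Allows External Storage" checkbox in the model editor has a programmatic equivalent. A small sketch using a hypothetical imageData attribute:

```swift
import CoreData

// Programmatic equivalent of ticking "Allows External Storage" in the
// model editor: Core Data then decides per value whether the blob lives
// inside the database or in a hidden external-data directory next to it.
let imageData = NSAttributeDescription()
imageData.name = "imageData" // hypothetical attribute name
imageData.attributeType = .binaryDataAttributeType
imageData.allowsExternalBinaryDataStorage = true
```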
The problem:
For some reason a 0.47MB binary data file that is stored directly in the database will take up about 4.3MB of iCloud storage. That is 9x of the expected amount. Inspecting the binary data stored in the record shows that the binary data itself has the expected size of only 0.47MB (CloudKit Dashboard). Also, I have verified that the local app bundle only grows by the expected 0.47MB. Thus, how can those additional 3.8MB of consumed iCloud storage be explained? In contrast, audio and image files that are larger than ~500KB are stored as external records and take up the correct amount of iCloud storage.
Please look at this annotated image for a better understanding:
Image that illustrates the problem (CloudKit Dashboard)
Ideas / Workarounds / What I tried:
I could try to find a way to always store files as CKAssets/external records (e.g., by lowering the size threshold for external storage to 0.01 MB), if that is possible.
Could the Write-Ahead-Log (WAL) of SQLite be involved in creating huge temporary sqlite-wal files? I tried limiting the WAL journal size and the local sqlite-wal is small, so I don't think that this is where the problem lies. Unless there is an independent iCloud WAL file that I don't know about.
I would be glad if anyone could help me with this issue. Thanks in advance!
I have an iOS app that uses Core Data and ParcelKit to sync with Dropbox. However, the Dropbox Datastore API only allows records of 100 KB, so it will not sync images that I store in the database. Is there any workaround other than storing images as separate files with their filenames stored in the database? That approach is a little fragile, since the user can alter the contents of the image folder, thus breaking the link to the database.
You should not store large images in the Core Data persistent store. Apple recommends that you should only store small images, such as thumbnails, perhaps 20K max. If you go beyond that, performance will eventually degrade significantly.
Thus, you cannot really avoid storing the images in separate files and storing their name/location in Core Data. This is the recommended pattern.
I do not see why you think this is fragile. Presumably you will store the images in the app sandbox, where there is no way the user can fiddle with them unless the iPhone is jailbroken.
The Dropbox sync should be managed independently from this setup.
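A minimal sketch of the recommended pattern (the .jpg extension and helper names are just for illustration): write the image bytes into the sandbox and keep only the file name in the Core Data record.

```swift
import Foundation

// Write the image data into the app's Documents directory and return
// the generated file name; store that string in your Core Data entity.
func storeImage(_ data: Data) throws -> String {
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let name = UUID().uuidString + ".jpg"
    try data.write(to: documents.appendingPathComponent(name))
    return name
}

// Load the image data back using the file name saved in Core Data.
func loadImage(named name: String) throws -> Data {
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    return try Data(contentsOf: documents.appendingPathComponent(name))
}
```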
FYI, Dropbox just killed the Datastore API and will take it offline in 2016. :-(
You should monitor this ParcelKit Issue:
Dropbox Datastore is Deprecated #34
https://github.com/overcommitted/ParcelKit/issues/34
I am developing an app in Xcode 5 for iOS 7. I have some data stored in Core Data. My requirement is to upload that data to Rackspace. What's the best way to do this?
Where can I find the .sqlite file associated with Core Data?
The SQLite file is wherever you put it. There's no magic to it; you have to tell Core Data exactly where you want the file. You do this when you call addPersistentStoreWithType:configuration:URL:options:error:. The URL argument is the location of the SQLite file.
If you try to use the file directly, make sure that:
You shut down your Core Data stack completely first, so that all unsaved data has been flushed to disk. That means no managed objects, managed object contexts, or persistent store coordinators anywhere in memory.
You also grab the SQLite journal files. If your store file is named Foo.sqlite, they will be named Foo.sqlite-wal and Foo.sqlite-shm and will be located in the same directory. If you don't copy these files, most or all of your data will be missing.
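A small sketch of grabbing all three files together (the paths here are placeholders):

```swift
import Foundation

// When copying a Core Data SQLite store by hand, take all three files;
// the -wal file in particular can hold most of the recent data.
func copyStore(at storeURL: URL, to directory: URL) throws {
    let fm = FileManager.default
    for suffix in ["", "-wal", "-shm"] {
        let source = URL(fileURLWithPath: storeURL.path + suffix)
        // The -wal and -shm files may not exist if the store was
        // checkpointed and closed cleanly, so check first.
        if fm.fileExists(atPath: source.path) {
            try fm.copyItem(at: source,
                            to: directory.appendingPathComponent(source.lastPathComponent))
        }
    }
}
```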
However, simply uploading the file is not a good solution for syncing data. To sync, you'd have to download a copy of the data, load it, and compare every object in the file with every object already on the phone. It's not impossible, but it's definitely much more difficult than necessary. There are many options that can simplify the process, including full-service providers like Parse, SDKs that let you use one of a variety of back ends like Ensembles.io, and others.
I'm importing data into Core Data and find that the save operation is slow. Using the iOS simulator, I watch the sqlite-wal file grow and grow until it's over 7 GB in size.
I'm importing approx 5000 records with about 10 fields. This isn't a lot of data.
Each object I'm inserting has a to-one relation to various other objects (6 relations total). All of those records combined equal less than 20 fields. There are no images or any binary data or anything that I can see that would justify why the resulting size of the WAL file is so huge.
I read the SQLite docs describing the -wal file and I don't see how this can happen. The source data isn't more than 50 MB.
My app is multi-threaded.
I create a managed object context in the background thread that performs the import (creates and saves the core data objects).
Without writing all the code out here, has anyone encountered this? Does anyone have a thought on what I should be checking? The code isn't super simple, and all the parts would take time to post, so let's start with general ideas.
I'll credit anyone who gets me going in the right direction.
Extra info:
I've disabled the undo manager for the context as I don't need that (I think it's nil by default on iOS but I explicitly set it to nil).
I only call save after the entire loop is complete and all managed objects are in RAM (RAM usage goes up to 100 MB, by the way).
The loop and the creation of the Core Data objects take only 5 seconds or so. The save takes almost 3 minutes as it writes to the -wal file.
It seems my comment to try using the old rollback (DELETE) journal mode rather than WAL journal mode fixed the problem. Note that there seem to be a range of problems when using WAL journal mode, including the following:
this problem
problems with database migrations when using the migratePersistentStore API
problems with lightweight migrations
Perhaps we should start a Core Data WAL problems page and get a comprehensive list and ask Apple to fix the bugs.
Note that the default mode on OS X 10.9 and iOS 7 is now WAL mode. To change it back, add the following option:
@{ NSSQLitePragmasOption : @{ @"journal_mode" : @"DELETE" } }
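In Swift, the same pragma can be passed through an NSPersistentStoreDescription. A sketch with a placeholder store URL:

```swift
import CoreData

// Switch the store back to the rollback (DELETE) journal so that no
// -wal file is used. The store URL is a placeholder for your own.
let storeURL = URL(fileURLWithPath: "/path/to/AppName.sqlite")
let description = NSPersistentStoreDescription(url: storeURL)
description.setOption(["journal_mode": "DELETE"] as NSDictionary,
                      forKey: NSSQLitePragmasOption)
```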
All changed pages of a transaction get appended to the -wal file.
If you are importing multiple records, you should, if possible, use a single transaction for the entire import.
SQLite cannot do a full WAL checkpoint while some other connection is reading the database (which might just be some statement that you forgot to close).
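A single-transaction import can be sketched like this, assuming a hypothetical Record entity with a name attribute; calling save() exactly once means SQLite writes one transaction instead of thousands:

```swift
import CoreData

// Build all objects in one context and save once, so the entire
// import becomes a single SQLite transaction. Entity and attribute
// names are hypothetical.
func importRecords(_ rows: [[String: Any]],
                   into context: NSManagedObjectContext) throws {
    context.undoManager = nil // no undo bookkeeping during bulk import
    for row in rows {
        let record = NSEntityDescription.insertNewObject(forEntityName: "Record",
                                                         into: context)
        record.setValue(row["name"], forKey: "name")
    }
    try context.save() // one transaction for the whole import
}
```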
So, I have been using MagicalRecord to develop an iPad app, and recently, after moving to an auto-migrating store, I have been experiencing some issues. I need to sync my .db file from one device over to another, so I need all of the data to be in the .db, but it seems like with WAL journaling mode enabled (the default for MagicalRecord auto-migrating stores), no matter how I save, it only persists the changes to either the .db-wal or the .db-shm files. I switched to a normal SQLite store and everything worked fine. So, my question is, with WAL journaling enabled, do I need to do anything special to actually get Core Data to save to the main database, or will I just have to disable it?
Change the journal mode. You have the MagicalRecord source, after all. Change the SQLite journal mode to DELETE, and the journal file will be deleted after every transaction. Disabling journaling entirely is a really bad idea; don't do that. But using a different journal mode should be fine.
Core Data does not offer any API for manipulating the journal once the persistent store is open. SQLite is an implementation detail, and Core Data doesn't expose the internal SQLite details. The closest you can get is the options parameter when setting up the Core Data stack, which is where you can change the journal mode (and where MR changes it).
The -wal file is part of the database; you must synchronize it together with the .db file.
Alternatively, you can copy the data to the main database file by executing a checkpoint.
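A sketch of a manual TRUNCATE checkpoint using the SQLite C API directly; this is only safe while nothing else (including Core Data) has the store open:

```swift
import SQLite3

// Merge the -wal file back into the main database file with a TRUNCATE
// checkpoint, which also resets the -wal file to zero length.
func checkpoint(databaseAt path: String) -> Bool {
    var db: OpaquePointer?
    guard sqlite3_open(path, &db) == SQLITE_OK else { return false }
    defer { sqlite3_close(db) }
    // TRUNCATE waits for readers to finish, copies every WAL frame
    // back into the database, then truncates the -wal file.
    return sqlite3_wal_checkpoint_v2(db, nil, SQLITE_CHECKPOINT_TRUNCATE,
                                     nil, nil) == SQLITE_OK
}
```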