Is it efficient to use Realm for cache? (iOS)

My project currently stores its data as local files.
While refactoring, I am switching to Realm for caching: I cache the data that the next view controller needs.
But Realm operates on the main thread. Every time I enter the next view controller, the main thread is active (reads, writes and other CRUD operations). I wonder whether this is an efficient approach or a resource-intensive operation.
Is using Realm for caching a good practice?
Local FileManager caching -> Realm caching

Realm operates on the main thread by default, which can be a problem if you have many reads and writes happening at once: the main thread can become blocked, leading to a poor user experience. However, Realm also supports background threads, so you can perform reads and writes off the main thread and keep the UI responsive. Moving Realm operations to a background thread is a good way to overcome this bottleneck. Another important factor to consider is the size of the data you need to cache. Realm can handle large amounts of data, but if your cache is extremely large, a different solution, such as a file-based cache, might be more efficient.
Realm's caching mechanism is built in, which makes it more efficient for data with a high read-write ratio; with file-based storage you have to handle the caching mechanism manually. Realm can be a great solution for caching data in iOS, but it's important to understand its limitations and trade-offs, especially around performance and data size. I suggest you profile your app, measure the performance of the different solutions, and choose the one that best fits your needs.
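As a rough sketch of what that looks like with RealmSwift (assuming a recent version with the @Persisted syntax; the CachedItem model and queue choice are illustrative, not from the question):

import RealmSwift

// Hypothetical cache model, for illustration only.
final class CachedItem: Object {
    @Persisted(primaryKey: true) var id: String
    @Persisted var payload: Data
}

func cache(_ items: [CachedItem]) {
    // Unmanaged objects may cross threads, so the Realm can be opened
    // on a background queue and the main thread never blocks on the write.
    DispatchQueue.global(qos: .utility).async {
        autoreleasepool {
            do {
                let realm = try Realm()
                try realm.write {
                    realm.add(items, update: .modified)
                }
            } catch {
                print("Cache write failed: \(error)")
            }
        }
    }
}

Reads for display can usually stay on the main thread; it is the bulk writes that are worth moving off it.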
Hope it helps and let me know if you have any other questions.

Related

Best practices: when does the mobile DB need clearing in a social iOS app?

For example, we have an iOS application that looks like Facebook, with pull-to-refresh and incremental loading, and we use Realm for it.
If you never delete data from the database, it will grow indefinitely, which will affect performance.
When is it worth cleaning the database? Should it be cleaned completely, or only partially? What are the best practices?
What are the best practices with Realm?
It's hard to recommend any single best practice since all of this depends on your app, the size of the data it manages, and what would be a good user experience.
It's definitely a good idea to periodically clear out extremely old entries in a local cache database. But deleting a lot of data can also hurt performance, so it's not a good idea to do it too aggressively. There's nothing wrong with having a large local database, if it means it's providing a better user experience. Realm's zero-copy mechanism means it's very efficient, even when there's a lot of content.
I recommend you take a look at your app, and work out how old the data needs to be before the user will absolutely not care about it anymore. Depending on how much data your app downloads, this could be up to a day, a week or a month. Conversely, you could also set a size tolerance, so when the database grows to a size you don't like, you can then make the app start looking at shrinking it.
Since deletes can be time-consuming, I'd definitely recommend you do any sort of cleaning on a background thread. You could easily set up a small class that runs an NSOperation periodically (maybe once a week), queries for the oldest data in your database, and then deletes it.
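A minimal sketch of such a cleanup operation (RealmSwift assumed; the FeedEntry model and the 30-day cutoff are illustrative):

import Foundation
import RealmSwift

// Hypothetical model with a timestamp field.
final class FeedEntry: Object {
    @Persisted var createdAt: Date
}

final class CacheCleanupOperation: Operation {
    override func main() {
        guard !isCancelled else { return }
        autoreleasepool {
            do {
                let realm = try Realm()
                // Delete everything older than 30 days (tune this to your app).
                let cutoff = Calendar.current.date(byAdding: .day, value: -30, to: Date())!
                let stale = realm.objects(FeedEntry.self).filter("createdAt < %@", cutoff)
                try realm.write {
                    realm.delete(stale)
                }
            } catch {
                print("Cleanup failed: \(error)")
            }
        }
    }
}

// Run on a background queue, e.g. scheduled once a week.
let cleanupQueue = OperationQueue()
cleanupQueue.qualityOfService = .background
cleanupQueue.addOperation(CacheCleanupOperation())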

Is it acceptable to load Realm objects on the main UI thread?

We are adopting (Swift) Realm as a data store in our iOS app and we are really pleased with it so far. We have a question about the design for retrieving and storing objects with Realm across multiple threads:
Is it acceptable to load objects on the main UI thread?
We know about the constraint that objects loaded with Realm cannot be shared between threads.
We are also not seeing any performance issues yet, but our approach so far has been to load all kinds of resources on background threads.
In the case where we load and filter some data and register a notification block, we don't see problems with using the main UI thread. But how would we handle a situation where, for example, we want to display all data in a table view?
Is it acceptable to load objects on the main UI thread?
Yes, in most cases it is acceptable and fast enough. It wouldn't be acceptable if reading from the database could block the user, but as there is no concept like faults, read access is always predictably fast.
Only if you have a really complex object graph, where you need heavy pre-processing to be able to display the objects in the UI, would it make sense to employ a background thread and/or caching to guarantee a good user experience.
In the case where we load and filter some data and register a notification block, we don't see problems with using the main UI thread. But how would we handle a situation where, for example, we want to display all data in a table view?
A UITableView only requests the cells that are currently visible on screen and reuses the view containers. A Realm collection is similarly lazy: when you don't filter it, it doesn't increase memory pressure, because you only get object accessors for the objects you actually pull out of it. There is no need for pagination as long as you rely on the built-in Realm Results or List collections. Only if you need to apply a custom, complex filter in a way that Realm doesn't support might it be necessary to process it on a background thread.
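For illustration, here is a sketch of a table view driven directly by a lazy Results collection (RealmSwift assumed; the Message model and cell identifier are hypothetical):

import UIKit
import RealmSwift

final class Message: Object {
    @Persisted var text: String
    @Persisted var date: Date
}

final class MessagesViewController: UITableViewController {
    private var results: Results<Message>!
    private var token: NotificationToken?

    override func viewDidLoad() {
        super.viewDidLoad()
        let realm = try! Realm()
        // Results are lazy: objects are materialized only as cells request them.
        results = realm.objects(Message.self).sorted(byKeyPath: "date", ascending: false)
        // Reload when the underlying data changes.
        token = results.observe { [weak self] _ in
            self?.tableView.reloadData()
        }
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        results.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Assumes a "Cell" prototype is registered with the table view.
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = results[indexPath.row].text
        return cell
    }
}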

CoreData - (performance) considerations for high-frequency data

Background
We have an app that receives sensor data at 100 Hz. Each sensor reading contains three floats. Occasionally (at most once per second) some other metadata may be received that needs to be saved as well. The UI displays the latest 1000 sensor values in a graph. There are no undo requirements - all received data must be saved to file. Each session lasts for at least 10 min, but may (in rare circumstances, and mostly by mistake) be up to an hour.
Current approach
Model: SensorData has a many-to-one relationship with Session. MetaData has a many-to-one relationship with Session.
CoreData: Set up a UIManagedDocument to handle CoreData. One MOC on the main thread with a child MOC on a private queue. The child MOC creates the objects and adds them to the object graph. Every 100th data point, the child MOC is saved. Once the session ends, the main MOC is saved to the PSC.
Edit: The problem I have with the current approach is that saving in the child MOC lags behind, which means not all data has been processed when the session ends, and processing time increases with run time.
Questions
Is it feasible to use CoreData as storage mechanism at ~100 Hz, or should I look at some alternative (like saving to a csv-file)?
What considerations must I take to ensure proper/optimal performance?
I have had performance issues with saves taking a long time and blocking UI. How can I avoid this? I.e. what saving policy should I use?
Drawbacks and advantages of current approach?
I think Core Data can do this.
You could use Marcus Zarra's approach of three contexts to make sure the actual save also happens in the background.
RootContext (background) saves to persistent store ---> is parent of
MainContext (main thread) to update the UI ---> is parent of one or more
WorkerContext (background) to create new data from sensor
You could then actually save more frequently in the background to the persistent store directly without impacting UI responsiveness. This should also improve memory usage. Saving the worker context will push the changes to the UI which can be updated accordingly.
For performance, make sure you batch your saves - with three floats I would estimate every 1,000 to 5,000 records or so (you need to experiment to find the optimal value).
Turn off the undo manager. (context.undoManager = nil)
Another consideration would be to maybe think hard about what you want to show in the UI and perhaps calculate values to display on the fly and send that to the UI, rather than have the UI rely on the entire session's data set to update itself.
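For illustration, a sketch of that three-context stack in Swift (the names and the NSPersistentContainer-based setup are assumptions; the question's original setup used UIManagedDocument):

import CoreData

final class SensorStack {
    let container: NSPersistentContainer
    let rootContext: NSManagedObjectContext   // background; saves to the store
    let mainContext: NSManagedObjectContext   // main thread; drives the UI

    init(modelName: String) {
        container = NSPersistentContainer(name: modelName)
        container.loadPersistentStores { _, error in
            if let error = error { fatalError("Store failed: \(error)") }
        }
        rootContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        rootContext.persistentStoreCoordinator = container.persistentStoreCoordinator
        rootContext.undoManager = nil          // no undo requirements

        mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
        mainContext.parent = rootContext
        mainContext.undoManager = nil
    }

    // Worker contexts are children of the main context: saving one pushes
    // changes up to the UI without touching the persistent store.
    func newWorkerContext() -> NSManagedObjectContext {
        let ctx = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        ctx.parent = mainContext
        ctx.undoManager = nil
        return ctx
    }
}

Saving then cascades: save the worker context to update the UI, and periodically save the main and root contexts so batches reach disk in the background.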
I have come up against exactly this issue, in an elaboration of this project.
My task is to record live sensor data from (for example) Core Motion and Core Location at rates up to 100Hz whilst simultaneously running a smoothly animating interface which can involve any of Core Graphics, Core Animation, OpenGL and live video. There are ~20-40 separate data items to track, mostly doubles but one or two strings, and they do not all arrive at the same sync rate.
Any hold-up during saves, however slight, will have an immediate hit on the interface.
I was interested to compare using Core Data against writing directly to a SQL database (using sqlite3). My personal experience so far (this is a work in progress) is that the SQL approach is much better suited to this type of problem than Core Data. In fact it's not really what Core Data was optimised for (which is rather to manage complex document object models with undo, persistence and efficient faulting). The Core Data model almost assumes that persistent saves will be prohibitively slow (for example, saving to iCloud), and much of its engineering is designed to offer solutions to that problem.
I have tried various Core Data patterns, backgrounding, parent/child contexts, sync, async, batching saves ... and invariably I find a noticeable stutter whenever a persistent save actually occurs.
The SQL approach, on the other hand, is simple to understand, efficient and completely free of noticeable glitches.
It may well be that I have not arrived at the optimal Core Data pattern for this problem (and I will be digging deeper into this, as it is an interesting edge case). However, I would definitely suggest a look at the direct-to-SQL approach if that makes sense for you in your broader app context.
In slightly different data-streaming use-cases (for example, a 250-500Hz signal delivered over bluetooth) I have opted for the kind of signal-processing tricks used by audio interfaces - ring buffers, queues and callbacks can become very useful as your data rate goes up. At some point the data rate will get too high for a database-writing process to keep up: then - as you suggest - saving directly to file will be more efficient. You can always read the data back out of files at some later point and populate the database (or core data) when sampling is not taking place.
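For illustration, a minimal fixed-capacity ring buffer of the kind used in those audio-style pipelines (a sketch, not production code; it is not synchronized for concurrent access):

// Overwriting ring buffer: the high-rate producer never blocks,
// and a slower consumer drains samples in batches.
struct RingBuffer<T> {
    private var storage: [T?]
    private var head = 0   // next write index
    private var count = 0

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    // Overwrites the oldest sample once full, so a 100 Hz (or faster)
    // sensor callback can always write immediately.
    mutating func write(_ value: T) {
        storage[head] = value
        head = (head + 1) % storage.count
        count = min(count + 1, storage.count)
    }

    // Drains buffered samples oldest-first, e.g. from a background
    // writer that flushes batches to file or database.
    mutating func drain() -> [T] {
        let start = (head - count + storage.count) % storage.count
        let values = (0..<count).compactMap { storage[(start + $0) % storage.count] }
        count = 0
        return values
    }
}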
Matt Gallagher made a nice comparison of Core Data and databases.
It's a fairly old piece, but the patterns haven't changed, so it is still relevant. There's also a useful little (and similarly aged) discussion here on the benefits of flat files over database writing with high-frequency streams.

Core Data: what concurrency model to use?

I am developing an iOS app which will gather large amounts of data from several sources (up to tens of thousands of objects, but simple objects, no images) and save it to my own database using Core Data. I then analyse this data and display the results to the user.
I want to know whether there is any benefit to using a main-queue NSManagedObjectContext, or if a private one is enough.
I also want to know what the benefit is of having several NSManagedObjectContexts, or if one is enough.
The concurrency model I am currently using has only one private-queue NSManagedObjectContext connected to a persistent store coordinator. All the data analysis is performed on the private queue and then I simply pass the analysed data to the main queue to display it. On older devices (iPhone 4) my application can sometimes crash when too much data is being loaded (i.e. downloaded from the external databases) at the same time. Is this related to my choice of concurrency model?
Your current approach sounds fine. You only need a main thread context if you want the main thread to interact with the data, and in your case you don't so that's fine.
Your memory management is effectively unrelated and is more tied to how many things you have going on at once (it sounds like one) and how many objects you try to keep in main memory at any one time (it sounds like many) instead of faulting them out to the data store. This is what you need to look at / work on. Instruments can help you see how many objects you're keeping in memory.
At least call refreshObject:mergeChanges: with NO for merge changes to fault out any objects that you aren't using.
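In Swift that looks roughly like this (a sketch; the function name and the processed batch are illustrative):

import CoreData

// Fault analysed objects back out of memory so the context doesn't keep
// tens of thousands of managed objects resident at once.
func faultOut(_ processed: [NSManagedObject], in context: NSManagedObjectContext) {
    context.perform {
        for object in processed {
            // mergeChanges: false turns the object back into a fault and
            // releases its row data (unsaved changes would be discarded).
            context.refresh(object, mergeChanges: false)
        }
    }
}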
Also, remember that you're working on a mobile device and that processing up to tens of thousands of objects is a job better handled by a server...

Multi-threaded data access issue: @synchronized & serial queue

As you may have experienced, accessing non-thread-safe variables is a big headache. On iOS one simple solution is the @synchronized keyword, which adds an NSLock to ensure the data can only be accessed by one thread at a time. The disadvantages are:
Locking too much will reduce app performance greatly, especially when invoked on the main thread.
Deadlocks can occur when the logic becomes complex.
Based on the above considerations, we prefer to use a serial queue: each thread-safe critical operation is appended to the end of the queue. It is a great solution, but the problem is that all access interfaces must then be designed in an async style, instead of a synchronous one like the following.
-(id)objectForKey:(NSString *)key;
The people who invoke this class are reluctant to design their call sites this way. If anyone has experience in this area, please share and discuss.
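One common middle ground, sketched below in Swift (the original interface is Objective-C; the class and queue names are illustrative), is to keep reads synchronous with queue.sync while writes stay asynchronous, so call sites keep the familiar objectForKey-style API:

import Foundation

final class ThreadSafeCache {
    private var storage: [String: Any] = [:]
    private let queue = DispatchQueue(label: "com.example.cache") // serial

    // Synchronous read: blocks the caller only for the length of the
    // dictionary lookup. (Calling this from the queue itself would deadlock.)
    func object(forKey key: String) -> Any? {
        queue.sync { storage[key] }
    }

    // Fire-and-forget write; ordering is preserved by the serial queue.
    func set(_ value: Any, forKey key: String) {
        queue.async { self.storage[key] = value }
    }
}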
The final solution is to use NSUserDefaults to store small data; large cache data goes in files maintained by ourselves.
Per the Apple docs, the advantage of NSUserDefaults is that it is thread-safe and performs its synchronization work periodically.
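For small values that really is as simple as it sounds (the key here is illustrative):

import Foundation

// UserDefaults is documented as thread-safe, so small values can be
// read and written from any thread without extra locking.
UserDefaults.standard.set("abc123", forKey: "lastSyncToken")
let token = UserDefaults.standard.string(forKey: "lastSyncToken")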
