Memory Management & Performance with EventKit - iOS

In a calendar app I display events fetched via the EventKit API. I fetch events from EKEventStore and display them in daily, weekly, and monthly views, as lists, etc. Now I am running into some performance problems on the iPhone 4.
The performance problems are mainly speed-related. It takes several seconds, for example, to collapse or expand all table view sections (representing dates) to show the rows (representing events). It also takes 5-8 seconds to reload the table for the editing / export interface. I would have to check Instruments to give more details.
So far there have been no memory issues.
My strategy right now is to minimise the memory footprint. I keep arrays in memory, but they contain only the eventIdentifier, a short string. I can then retrieve events with the EKEventStore method eventWithEventIdentifier:. I suspect that these repeated lookups are the reason for the performance hit.
Two alternatives come to mind:
1. Use EKEvent objects instead of identifiers. However, I believe this makes memory usage unpredictable: some events contain a lot of text, so the amount of data kept in memory is unbounded, and the period for which events have to be shown could also be very long.
2. Port everything to Core Data, maybe with the original EKEvent objects stored as transformable attributes. This would be a major refactoring, but I could take advantage of NSFetchedResultsController and its optimisation features.
I have tried 1 and 2 - performance is still bad!
What is your experience? Have you seen performance issues with repeated calls to the EKEventStore database? What would be your advice?
UPDATE:
Instruments reports that the table view's reloadData indeed takes quite long (1.5 seconds). I am not sure why, because the state of the table view (which sections are collapsed) and all the data are loaded beforehand, and that code is efficient.
I am not calculating any cell heights (this has sometimes been reported to force the entire table to load before display). The same lag appears when I call
[tableView beginUpdates];
[tableView endUpdates];
in order to animate the collapsing of the sections.
Note: maybe the title of this question should be changed to eliminate the EventKit part.

While I have never used EventKit, I can give you some suggestions:
NSFetchedResultsController is a great thing, especially when your data is going to change after your table view has initially been loaded. The NSFetchedResultsController monitors your data for changes, and the NSFetchedResultsControllerDelegate protocol allows for incremental updates to your UITableView rather than reloading the entire UITableView when the data that populates your table changes. In general, avoid doing a full reloadData on your UITableView when possible.
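A minimal sketch of that delegate boilerplate (the tableView property name and animation constants are illustrative assumptions, not part of the question's code):

- (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
    [self.tableView beginUpdates];
}

- (void)controller:(NSFetchedResultsController *)controller
   didChangeObject:(id)anObject
       atIndexPath:(NSIndexPath *)indexPath
     forChangeType:(NSFetchedResultsChangeType)type
      newIndexPath:(NSIndexPath *)newIndexPath {
    switch (type) {
        case NSFetchedResultsChangeInsert:
            // Only the affected row is touched, not the whole table.
            [self.tableView insertRowsAtIndexPaths:@[newIndexPath]
                                  withRowAnimation:UITableViewRowAnimationAutomatic];
            break;
        case NSFetchedResultsChangeDelete:
            [self.tableView deleteRowsAtIndexPaths:@[indexPath]
                                  withRowAnimation:UITableViewRowAnimationAutomatic];
            break;
        case NSFetchedResultsChangeUpdate:
            [self.tableView reloadRowsAtIndexPaths:@[indexPath]
                                  withRowAnimation:UITableViewRowAnimationAutomatic];
            break;
        case NSFetchedResultsChangeMove:
            [self.tableView deleteRowsAtIndexPaths:@[indexPath]
                                  withRowAnimation:UITableViewRowAnimationAutomatic];
            [self.tableView insertRowsAtIndexPaths:@[newIndexPath]
                                  withRowAnimation:UITableViewRowAnimationAutomatic];
            break;
    }
}

- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
    [self.tableView endUpdates];
}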
If drawing your UITableView is really slow, chances are the source of the issue lies somewhere in the tableView:cellForRowAtIndexPath: method (almost guaranteed, although other culprits could be other UITableViewDataSource methods such as tableView:titleForHeaderInSection: or tableView:titleForFooterInSection:). If you are reading from disk, not reusing cached UITableViewCells, or doing heavy drawing in these methods, it can cause your UITableView to be extremely slow to reload and scroll.
If you find that you are reading from the disk or performing some other slow operation in tableView:cellForRowAtIndexPath:, consider performing one read from the disk before your UITableView is drawn (i.e. in viewDidLoad) and caching the results in an NSArray. Then your UITableViewDataSource methods can just read from the NSArray instead of the disk.
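A minimal sketch of that pattern; the cachedEvents property, the loadEventsFromStore helper, and the "EventCell" identifier are assumptions for illustration, and the cell class is assumed to be registered:

- (void)viewDidLoad {
    [super viewDidLoad];
    // One expensive read up front, off the critical scrolling path.
    self.cachedEvents = [self loadEventsFromStore];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:@"EventCell"
                                        forIndexPath:indexPath];
    // Cheap in-memory lookup instead of hitting the store for every cell.
    EKEvent *event = self.cachedEvents[indexPath.row];
    cell.textLabel.text = event.title;
    return cell;
}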

The problem is your row animations and (probably) recalculating sections. This is pretty common, and it is a UITableView thing rather than an EventKit or Core Data thing. More profiling should give you some ideas of how you can optimize your sections to be more performant.
Look at your implementation of the UITableViewDataSource methods that involve sections:
- numberOfSectionsInTableView:
- tableView:numberOfRowsInSection:
- tableView:sectionForSectionIndexTitle:atIndex:
- tableView:titleForHeaderInSection:
- tableView:titleForFooterInSection:
You are probably doing things in those methods that are relatively heavy. Maybe to build your sections you have to look at all of your data (this is common). If that's the case, come up with a reasonable strategy for caching the section information and invalidating it when the event store changes.
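A minimal sketch of that caching strategy, assuming hypothetical eventsGroupedByDay and titleForDay: helpers plus sectionTitles/sectionRowCounts properties; EKEventStoreChangedNotification is the notification EventKit posts when the store's contents change:

- (void)viewDidLoad {
    [super viewDidLoad];
    [self rebuildSectionCache];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(storeChanged:)
                                                 name:EKEventStoreChangedNotification
                                               object:self.eventStore];
}

- (void)storeChanged:(NSNotification *)note {
    // Invalidate and rebuild the cache only when the data actually changed.
    [self rebuildSectionCache];
    [self.tableView reloadData];
}

- (void)rebuildSectionCache {
    // One pass over the data; the data source methods below become O(1).
    NSMutableArray *titles = [NSMutableArray array];
    NSMutableArray *rowCounts = [NSMutableArray array];
    for (NSArray *eventsForDay in [self eventsGroupedByDay]) {
        [titles addObject:[self titleForDay:eventsForDay]];
        [rowCounts addObject:@(eventsForDay.count)];
    }
    self.sectionTitles = titles;
    self.sectionRowCounts = rowCounts;
}

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return self.sectionTitles.count;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [self.sectionRowCounts[section] integerValue];
}

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
    return self.sectionTitles[section];
}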

Related

iOS UITableView: What is the difference between the tableView:willDisplayCell:forRowAtIndexPath: and tableView(_:prefetchRowsAt:) methods?

Both of these functions can be used for fetching paginated data as the user scrolls through the list. But are there any significant advantages of one over the other? Which is better for implementing pagination?
willDisplayCell is called when the cell is about to be displayed, and it is left to the developer to handle corner cases such as repeated displays. It is intended for cheap state-update operations on the cells in question, not for prefetching data.
prefetchRowsAt, on the other hand, is called well ahead of time and gives you 'breathing room' to kick off potentially expensive operations that your cells depend on.
The docs for prefetchRowsAt state:
The table view calls this method on the main dispatch queue as the user scrolls, providing the index paths for cells it is likely to display in the near future.
Use your implementation of this method to start any expensive data loading operations. Always load your data asynchronously and forward the results to your table's data source object. Table views do not call this method for cells they require immediately, so your data source object must also be able to fetch the data itself.
Prefetch operations can also be cancelled by the UITableView when it calls the tableView(_:cancelPrefetchingForRowsAt:) method on your prefetching data source.
So, yea, there's quite a difference. One option requires a whole lot more work than the other.
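To make the difference concrete, here is a minimal Objective-C sketch of the prefetching side (iOS 10+). ImageLoader and its loadDataForIndexPath: / cancelLoadForIndexPath: methods are hypothetical stand-ins for whatever expensive work your cells depend on:

@interface FeedPrefetcher : NSObject <UITableViewDataSourcePrefetching>
@end

@implementation FeedPrefetcher

// Called well before the rows are shown; start async work here and hand the
// results to the table's regular data source when they arrive.
- (void)tableView:(UITableView *)tableView
        prefetchRowsAtIndexPaths:(NSArray<NSIndexPath *> *)indexPaths {
    for (NSIndexPath *indexPath in indexPaths) {
        // ImageLoader is a hypothetical helper for the expensive load.
        [[ImageLoader sharedLoader] loadDataForIndexPath:indexPath];
    }
}

// Called when the user scrolls away; drop work that is no longer needed.
- (void)tableView:(UITableView *)tableView
        cancelPrefetchingForRowsAtIndexPaths:(NSArray<NSIndexPath *> *)indexPaths {
    for (NSIndexPath *indexPath in indexPaths) {
        [[ImageLoader sharedLoader] cancelLoadForIndexPath:indexPath];
    }
}

@end

// Hook-up: the prefetch data source is separate from the regular data source.
// tableView.prefetchDataSource = prefetcher;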
From the Apple documentation, I think this is the difference you are looking for.
prefetchRowsAt: The table view provides the index paths for cells it is likely to display in the near future. Use your implementation of this method to start any expensive data loading operations.
Vs
willDisplayCell: Sent just before the table view uses the cell to draw a row. This method gives the delegate a chance to override state-based properties set earlier by the table view, such as selection and background color.

Performance of fetching Core Data entries

For example, I have 10,000 entries in a database, and I need to display them in a UITableView.
So I should set up the Core Data stack, create an NSFetchRequest, and an NSFetchedResultsController.
Then I could access these entries in cellForRowAtIndexPath: with the NSFetchedResultsController method objectAtIndexPath:.
The questions are:
1. Will the NSFetchedResultsController load all these objects lazily, only after the user actually scrolls the UITableView to the corresponding cells?
2. Is it enough just to set the NSFetchRequest's fetchBatchSize to a number equal to the cell count on screen?
3. Should I use a separate NSManagedObjectContext with a background thread for loading these objects? In that case, how will the objectAtIndexPath: method work when it is called from the main UI thread?
4. Should I even worry about these things when I have just 10k entries?
The NSFetchedResultsController takes care of all your concerns for you.
Lazy Loading? Indeed, the FRC will try to be as lazy as possible. Nothing for you to do. Actually, it is optimized for scrolling, speed, performance, memory usage, you name it.
Batch Size? Irrelevant. You can just forget about it. If you feel like it, set up a performance test and compare with and without a batch size. I predict you will find no difference, or a negligible one (if anything at all, maybe at extreme scrolling speeds).
Separate NSManagedObjectContext? Absolutely not. The NSFetchedResultsController is supposed to be used on the main thread.
Concern with the number of records? You do not have to worry about this at all for the above reasons, especially lazy loading. I have had great results with hundreds of thousands of records.
1. If you set the batch size correctly.
2. I go for about twice the number of cells on screen.
3. Don't bother, just run it off the main UI MOC.
4. Try it and see. Core Data can handle that volume, but there are still performance optimisations you can do that depend on things that aren't in your question.
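For reference, a minimal sketch of the setup being discussed; the entity name, sort key, and context property are assumptions for illustration:

NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Entry"];
request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"createdAt"
                                                          ascending:YES]];
// Roughly the number of rows visible on screen; Core Data then faults in
// objects one batch at a time as the table scrolls.
request.fetchBatchSize = 20;

self.fetchedResultsController =
    [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                        managedObjectContext:self.mainContext // main-queue context
                                          sectionNameKeyPath:nil
                                                   cacheName:nil];
self.fetchedResultsController.delegate = self;

NSError *error = nil;
if (![self.fetchedResultsController performFetch:&error]) {
    NSLog(@"Fetch failed: %@", error);
}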

UICollectionView is very slow when inserting or deleting many items

I'm trying to insert and delete a large number of items (say, 20,000) in a collection view, and the operation takes a very long time.
The test fixture I created is composed of the following:
UICollectionView with no configuration besides a data source.
Default UICollectionViewFlowLayout.
Data source that returns either 10K or 30K items depending on a BOOL variable.
Button to toggle that variable. When set to YES, 20K items are added to the data source (just by changing the count returned from numberOfItemsInSection:) and insertItemsAtIndexPaths: is called with 20K index paths. When set to NO, deleteItemsAtIndexPaths: is called with 20K index paths.
Cell configuration in the data source does nothing besides dequeuing a default UICollectionViewCell and returning it.
Running this on simulator, which should be faster than any device, yields the following timings:
Insertion of 20K items: 220ms.
Deletion of the same 20K items: 1100ms.
This is, by all means, horribly slow, especially when performed on the main thread.
Here's a screenshot from Instruments, showing the hotspots in UICollectionView's internal implementation (specifically, _computeItemUpdates).
I've noticed that the use of reloadData instead of inserting or updating the items is way faster (~20ms), probably because no animations are triggered so there's no need to compute the position of each item and section for animation purposes.
Any ideas on how to overcome this would be appreciated.
Expand _computeItemUpdates. If anything it is calling is your code, then yes, you can speed it up.
An example would be: if you are using a custom layout, you could ask it to calculate the new positions on a background thread and then call insert/delete when that operation finishes.
You could also be smart about it and only call insert/delete for the ranges that are currently visible, then after the rearrange animation finishes call reloadData; it shouldn't look too different from a user's perspective.
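A minimal sketch of the "fall back to reloadData for huge changes" idea, assuming a showAll flag and an itemCount property that back numberOfItemsInSection: (the 500-item threshold is an arbitrary assumption to tune with profiling):

- (void)toggleItems {
    NSInteger oldCount = self.itemCount;
    self.showAll = !self.showAll;
    NSInteger newCount = self.showAll ? 30000 : 10000;
    self.itemCount = newCount;

    NSInteger delta = labs(newCount - oldCount);
    if (delta > 500) {
        // Too many changes to animate cheaply; reloadData skips the
        // per-item update computation entirely.
        [self.collectionView reloadData];
        return;
    }

    // Small change: animate only the affected index paths (all in section 0 here).
    NSMutableArray *indexPaths = [NSMutableArray arrayWithCapacity:delta];
    NSInteger start = MIN(oldCount, newCount);
    for (NSInteger i = start; i < start + delta; i++) {
        [indexPaths addObject:[NSIndexPath indexPathForItem:i inSection:0]];
    }
    if (newCount > oldCount) {
        [self.collectionView insertItemsAtIndexPaths:indexPaths];
    } else {
        [self.collectionView deleteItemsAtIndexPaths:indexPaths];
    }
}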

Do you put NSFetchRequest into cellForRowAtIndexPath?

It seems somewhat slow to me to perform an NSFetchRequest in cellForRowAtIndexPath:.
How do you handle this? Is it more memory- and time-efficient to perform it in viewDidLoad, cache the results in a dictionary, and use that in cellForRowAtIndexPath:?
It is a bad idea for performance reasons, as @Paulw11 mentioned in his comment. Additionally, you will execute fetch requests more often than actually needed (while scrolling the table view back and forth), because cellForRowAtIndexPath: is called each time a cell is reused.
I would recommend using NSFetchedResultsController. It is designed specifically to show Core Data records in UITableView. It allows batch fetching (doesn't fetch all the objects, but only the ones that need to be displayed). Also you will be able to easily track changes in your model.
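For illustration, with an NSFetchedResultsController in place the cell method becomes a cheap in-memory lookup; the "RecordCell" identifier and the "title" attribute are assumptions:

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"RecordCell"
                                                            forIndexPath:indexPath];
    // No fetch request here: the controller hands back an already-fetched
    // managed object, faulting it in lazily if needed.
    NSManagedObject *record = [self.fetchedResultsController objectAtIndexPath:indexPath];
    cell.textLabel.text = [record valueForKey:@"title"];
    return cell;
}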

iOS NSFetchedResultsController Insert Section Redundant Cycling Issue

So I am using a UICollectionView with an NSFetchedResultsController with a cache (leaning on Ash Furrow's implementation: https://github.com/AshFurrow/UICollectionView-NSFetchedResultsController). Everything is functional, but I'm getting a sluggish response due to what seems like redundant cycling of the UICollectionViewController delegates.
This is what I'm doing:
Every few seconds, I am inserting a new entity into the managedObjectContext. Each entity ends up in its own section based on the value of the section key attribute when fetched. Each entity has 22 different attributes that end up being presented in 22 new cells (rows) for the entity's section.
This is what is happening:
My NSFetchedResultsController delegates appear to be working as expected (they capture the single section entry). When I end up running the batch updates, it calls cellForItemAtIndexPath for only the new entity (as expected). BUT (and this is MY ISSUE) the UICollectionViewController cycles through numberOfItemsInSection and sizeForItemAtIndexPath (I have different-sized cells for the 22 new cells) for every single existing section. Isn't this cached? Don't we already know this? Any ideas why this is happening? I'm a noob, so am I misunderstanding something fundamental here? Am I fundamentally implementing the sections and rows of the collectionView incorrectly?
And the end result is that the extra cycles become very expensive as the number of entities/sections increases, and the UI quickly becomes too staggeringly sluggish to be acceptable/usable.
Ideas?
UPDATE: REASONING FOUND
So I guess I needed to delve deeper into the implications of using custom cell formatting calls in my delegate. I don't really understand why this needs to be so costly (calling it on every row, not just those in view), but it is.
This SO thread talks about it:
heightForRowAtIndexPath being called for all rows & how many rows in a UITableView before performance issues?
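One common mitigation on the table view side is to give the table view a cheap estimate so the real height is only computed for rows that actually come on screen; a minimal sketch, where the estimate value and the calculatedHeightForRowAtIndexPath: helper are assumptions:

- (CGFloat)tableView:(UITableView *)tableView
        estimatedHeightForRowAtIndexPath:(NSIndexPath *)indexPath {
    // A rough guess is enough; it avoids the upfront pass over every row.
    return 44.0;
}

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
    // The expensive per-row calculation now only runs for visible rows.
    return [self calculatedHeightForRowAtIndexPath:indexPath];
}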
