Simperium updates "freezing" the device until finished

Hi Simperium Developers.
I am building an iOS app using your data platform. As I get deeper into final testing I am seeing an issue: as the amount of Core Data grows, the app "freezes" while it is being updated.
The issue is most apparent when a user who has been using the iPad version of the app moves to another device and syncs the large dataset to it for the first time, but I see similar behaviour for small changes too.
I see the following in the log:
Simperium enqueuing 253 object requests (PeopleModel)
Simperium enqueuing 301 object requests (SeatsModel)
Simperium enqueuing 139 object requests (KeywordModel)
etc.
I have approximately 20 entities in total.
The app then appears to hang the user interface until I see:
Simperium finished processing index for PeopleModel
Simperium finished processing index for SeatsModel
Simperium finished processing index for KeywordModel
There can be a wait of many minutes between the enqueuing and finished-processing messages.
If I turn on verbose logging I only see additional information as each object is enqueued; no errors or warnings.
Is there anything I can do or change in my iOS app so that the rest of the app can continue processing?
HELP !
Cheers
Steve
Hi Mike,
Thanks for the quick reply; adding useWebsockets appears to make things slightly better. On the smaller updates it definitely helped. But when I do a main sync (i.e. the scenario where a user is syncing a dataset that already exists in the cloud, for example when they link a new device and the data already exists), I see the same "freezing" behaviour, and I then received this CRASH:
Simperium websocket failed with error Error Domain=org.lolrus.SocketRocket Code=2145 "Error writing to stream" UserInfo=0x28b85f60 {NSLocalizedDescription=Error writing to stream}
2013-05-13 19:29:56.898 MeetingPad[652:907] *** -[SRWebSocket send:]: message sent to deallocated instance 0x1d172bc0
I tried the test 3 more times but I didn't see the websocket error above again... so possibly not relevant.
My Core Data store is about 2.4 MB in size; here is the full console output. Any further thoughts would be appreciated. I would be happy to pop up a message to the user simply saying "Syncing, please wait", but I'm unsure how to know when the sync has finished so that I can remove the message.
2013-05-13 20:09:37.367 MeetingPad[897:907] Init DropBox
2013-05-13 20:09:37.575 MeetingPad[897:907] Init MPIAPHelper
2013-05-13 20:09:37.588 MeetingPad[897:1803] NETWORK REACHABLE!
2013-05-13 20:09:37.676 MeetingPad[897:907] Simperium error: bucket list not loaded. Ensure Simperium is started before any objects are fetched.
2013-05-13 20:09:37.702 MeetingPad[897:907] Simperium error: bucket list not loaded. Ensure Simperium is started before any objects are fetched.
2013-05-13 20:09:37:928 MeetingPad[897:907] Simperium didn't find an existing auth token
2013-05-13 20:09:37:956 MeetingPad[897:907] Simperium starting...
2013-05-13 20:09:37:960 MeetingPad[897:907] Simperium loaded 21 entity definitions
2013-05-13 20:09:37.977 MeetingPad[897:907] Simperium managing 0 ActionLinks82 object instances
2013-05-13 20:09:38.000 MeetingPad[897:907] Simperium managing 1 Relationship1Model82 object instances
2013-05-13 20:09:38.004 MeetingPad[897:907] Simperium managing 0 AttendeeModel82 object instances
2013-05-13 20:09:38.010 MeetingPad[897:907] Simperium managing 0 ClipModel82 object instances
2013-05-13 20:09:38.015 MeetingPad[897:907] Simperium managing 0 ShapesModel82 object instances
2013-05-13 20:09:38.020 MeetingPad[897:907] Simperium managing 0 SeatModel82 object instances
2013-05-13 20:09:38.026 MeetingPad[897:907] Simperium managing 0 AgendaItemModel82 object instances
2013-05-13 20:09:38.031 MeetingPad[897:907] Simperium managing 0 PointsModel82 object instances
2013-05-13 20:09:38.036 MeetingPad[897:907] Simperium managing 0 AgendaItemVersionModel82 object instances
2013-05-13 20:09:38.040 MeetingPad[897:907] Simperium managing 0 Relationship2Model82 object instances
2013-05-13 20:09:38.044 MeetingPad[897:907] Simperium managing 0 ImagesModel82 object instances
2013-05-13 20:09:38.054 MeetingPad[897:907] Simperium managing 1 SeatingPlanModel82 object instances
2013-05-13 20:09:38.075 MeetingPad[897:907] Simperium managing 0 NoteLink82 object instances
2013-05-13 20:09:38.080 MeetingPad[897:907] Simperium managing 0 RecordingModel82 object instances
2013-05-13 20:09:38.085 MeetingPad[897:907] Simperium managing 0 ActionsModel82 object instances
2013-05-13 20:09:38.089 MeetingPad[897:907] Simperium managing 0 KeywordLinks82 object instances
2013-05-13 20:09:38.093 MeetingPad[897:907] Simperium managing 0 PeopleLinks82 object instances
2013-05-13 20:09:38.098 MeetingPad[897:907] Simperium managing 0 EvernoteDeletions82 object instances
2013-05-13 20:09:38.105 MeetingPad[897:907] Simperium managing 6 StylesModel82 object instances
2013-05-13 20:09:38.107 MeetingPad[897:907] Simperium managing 1 NotesModel82 object instances
2013-05-13 20:09:38.111 MeetingPad[897:907] Simperium managing 0 PeopleModel82 object instances
2013-05-13 20:09:38:116 MeetingPad[897:907] Simperium didn't find an existing auth token
2013-05-13 20:09:54:931 MeetingPad[897:907] Simperium authenticating: https://auth.simperium.com/1/wqqewweqeqw-disabeqweqwilities-33we2/authorize/
2013-05-13 20:09:56:010 MeetingPad[897:907] Simperium authentication success!
2013-05-13 20:09:56.150 MeetingPad[897:907] Reachability Flag Status: -R ------- networkStatusForFlags
2013-05-13 20:09:56:151 MeetingPad[897:907] Simperium starting network managers...
2013-05-13 20:09:56.151 MeetingPad[897:907] Opening Connection...
2013-05-13 20:09:57:855 MeetingPad[897:907] Simperium processing 106 objects from index (ClipModel82)
2013-05-13 20:09:57:878 MeetingPad[897:907] Simperium enqueuing 106 object requests (ClipModel82)
2013-05-13 20:09:58:069 MeetingPad[897:907] Simperium processing 290 objects from index (AttendeeModel82)
2013-05-13 20:09:58:124 MeetingPad[897:907] Simperium enqueuing 290 object requests (AttendeeModel82)
2013-05-13 20:09:58:173 MeetingPad[897:907] Simperium processing 223 objects from index (ShapesModel82)
2013-05-13 20:09:58:175 MeetingPad[897:907] Simperium processing 28 objects from index (ImagesModel82)
2013-05-13 20:09:58:176 MeetingPad[897:907] Simperium processing 47 objects from index (Relationship2Model82)
2013-05-13 20:09:58:196 MeetingPad[897:907] Simperium enqueuing 47 object requests (Relationship2Model82)
2013-05-13 20:09:58:205 MeetingPad[897:907] Simperium enqueuing 28 object requests (ImagesModel82)
2013-05-13 20:09:58:232 MeetingPad[897:907] Simperium processing 100 objects from index (Relationship1Model82)
2013-05-13 20:09:58:234 MeetingPad[897:907] Simperium enqueuing 223 object requests (ShapesModel82)
2013-05-13 20:09:58:256 MeetingPad[897:907] Simperium enqueuing 100 object requests (Relationship1Model82)
2013-05-13 20:09:58:276 MeetingPad[897:907] Simperium processing 250 objects from index (ActionLinks82)
2013-05-13 20:09:58:322 MeetingPad[897:907] Simperium processing 251 objects from index (SeatingPlanModel82)
2013-05-13 20:09:58:348 MeetingPad[897:907] Simperium enqueuing 250 object requests (ActionLinks82)
2013-05-13 20:09:58:378 MeetingPad[897:907] Simperium enqueuing 251 object requests (SeatingPlanModel82)
2013-05-13 20:09:58:383 MeetingPad[897:907] Simperium processing 155 objects from index (RecordingModel82)
2013-05-13 20:09:58:412 MeetingPad[897:907] Simperium enqueuing 155 object requests (RecordingModel82)
2013-05-13 20:09:58:442 MeetingPad[897:907] Simperium processing 24 objects from index (StylesModel82)
2013-05-13 20:09:58:449 MeetingPad[897:907] Simperium enqueuing 24 object requests (StylesModel82)
2013-05-13 20:09:58:471 MeetingPad[897:907] Simperium processing 232 objects from index (NoteLink82)
2013-05-13 20:09:58:482 MeetingPad[897:907] Simperium processing 289 objects from index (ActionsModel82)
2013-05-13 20:09:58:486 MeetingPad[897:907] Simperium processing 248 objects from index (NotesModel82)
2013-05-13 20:09:58:500 MeetingPad[897:907] Simperium processing 295 objects from index (PeopleLinks82)
2013-05-13 20:09:58:593 MeetingPad[897:907] Simperium enqueuing 232 object requests (NoteLink82)
2013-05-13 20:09:58:655 MeetingPad[897:907] Simperium enqueuing 248 object requests (NotesModel82)
2013-05-13 20:09:58:686 MeetingPad[897:907] Simperium enqueuing 295 object requests (PeopleLinks82)
2013-05-13 20:09:58:696 MeetingPad[897:907] Simperium enqueuing 289 object requests (ActionsModel82)
2013-05-13 20:09:59:987 MeetingPad[897:907] Simperium processing 481 objects from index (PointsModel82)
2013-05-13 20:10:00:073 MeetingPad[897:907] Simperium processing 275 objects from index (SeatModel82)
2013-05-13 20:10:00:080 MeetingPad[897:907] Simperium processing 267 objects from index (KeywordLinks82)
2013-05-13 20:10:00:088 MeetingPad[897:907] Simperium processing 220 objects from index (PeopleModel82)
2013-05-13 20:10:00:092 MeetingPad[897:907] Simperium enqueuing 481 object requests (PointsModel82)
2013-05-13 20:10:00:450 MeetingPad[897:907] Simperium enqueuing 220 object requests (PeopleModel82)
2013-05-13 20:10:00:461 MeetingPad[897:907] Simperium enqueuing 267 object requests (KeywordLinks82)
2013-05-13 20:10:00:461 MeetingPad[897:907] Simperium enqueuing 275 object requests (SeatModel82)
2013-05-13 20:13:29.671 MeetingPad[897:907] Opening Connection...
2013-05-13 20:13:30:598 MeetingPad[897:907] Simperium processing 100 objects from index (Relationship1Model82)
2013-05-13 20:13:30:706 MeetingPad[897:907] Simperium processing 250 objects from index (ActionLinks82)
2013-05-13 20:13:30:743 MeetingPad[897:907] Simperium processing 290 objects from index (AttendeeModel82)
2013-05-13 20:13:30:804 MeetingPad[897:907] Simperium processing 106 objects from index (ClipModel82)
2013-05-13 20:13:30:806 MeetingPad[897:907] Simperium processing 47 objects from index (Relationship2Model82)
2013-05-13 20:13:30:846 MeetingPad[897:907] Simperium processing 275 objects from index (SeatModel82)
2013-05-13 20:13:30:858 MeetingPad[897:907] Simperium processing 28 objects from index (ImagesModel82)
2013-05-13 20:13:30:870 MeetingPad[897:907] Simperium processing 155 objects from index (RecordingModel82)
2013-05-13 20:13:30:948 MeetingPad[897:907] Simperium processing 223 objects from index (ShapesModel82)
2013-05-13 20:13:30:955 MeetingPad[897:907] Simperium processing 267 objects from index (KeywordLinks82)
2013-05-13 20:13:30:977 MeetingPad[897:907] Simperium processing 481 objects from index (PointsModel82)
2013-05-13 20:13:31:011 MeetingPad[897:907] Simperium processing 232 objects from index (NoteLink82)
2013-05-13 20:13:31:053 MeetingPad[897:907] Simperium processing 289 objects from index (ActionsModel82)
2013-05-13 20:13:31:062 MeetingPad[897:907] Simperium processing 295 objects from index (PeopleLinks82)
2013-05-13 20:13:31:076 MeetingPad[897:907] Simperium processing 24 objects from index (StylesModel82)
2013-05-13 20:13:31:338 MeetingPad[897:907] Simperium processing 220 objects from index (PeopleModel82)
2013-05-13 20:14:34.901 MeetingPad[897:907] Opening Connection...

The sheer amount of data you were syncing across a large number of buckets exposed some performance problems. These have been fixed with this commit.
In particular:
NSNotifications for added/changed objects were being triggered very aggressively while indexing. Since most people don't seem to make use of these anyway (during indexing), they've been removed for now. They're still fired when objects are added/changed otherwise.
The storage of metadata that tracks pending relationships has been moved from NSUserDefaults to metadata directly on the NSPersistentStore.
The resolution of pending relationships has been moved to its own GCD queue, since it touches the database potentially very frequently during indexing.
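To illustrate that last point, here is a minimal sketch (not Simperium's actual code; the class name, queue label, and method names are invented) of the general pattern: heavy per-object work runs on a private serial GCD queue, and the main queue is only used for UI updates, for example dismissing a "Syncing, please wait" overlay once processing finishes.

#import <Foundation/Foundation.h>

// Illustrative only: keep expensive index processing off the main queue so the UI stays responsive.
@interface IndexProcessor : NSObject
- (void)processIndexObjects:(NSArray *)objects completion:(void (^)(void))completion;
@end

@implementation IndexProcessor {
    dispatch_queue_t _processingQueue;
}

- (instancetype)init {
    if ((self = [super init])) {
        // Private serial queue: database-heavy work (e.g. resolving pending
        // relationships) runs here instead of blocking the main thread.
        _processingQueue = dispatch_queue_create("com.example.index-processing", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)processIndexObjects:(NSArray *)objects completion:(void (^)(void))completion {
    dispatch_async(_processingQueue, ^{
        for (id object in objects) {
            // ... expensive per-object work goes here, off the main queue ...
        }
        // Hop back to the main queue for UI work, e.g. removing the "Syncing, please wait" overlay.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion();
        });
    });
}

@end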

Related

Neo4J save query performance (GrapheneDB)

I have created a .NET application that uses a Neo4j graph database (with GrapheneDB as the provider). I am having performance issues when I save a new graph object. I am not keeping a history of the graph, so each time I save I first delete the old one, including nodes and relationships, and then save the new one. I have not indexed my nodes yet; I don't think this is the problem, because loading several of these graphs at a time is very fast.
My save method steps through each branch and merges the nodes and relationships. (I left the relationships out of each step for cleanliness). After the full query is created the code is executed in one shot.
merge the root node 37 and node 4
merge type1 node 12-17 with 4
merge type2 node 18-22 with 4
merge 2 with 37
merge 7-11 with 2
merge 5 with 37 (creates relationships)
merge 23-26 with 5
merge 6 with 37 (creates relationships)
merge 30-27 with 6
Nodes 2, 4, 5, 6 can have 100-200 leaf nodes each. I have about 100 of these graphs in my database. This save can take the server 10-20 seconds in production and sometimes times out.
I have tried saving a different way; it takes longer but doesn't time out as frequently. I create groups of nodes first, with each node storing the root id 37, and each group is created in a separate execution. After the nodes are created I create relationships by selecting the child nodes and the root node. This splits the query up into separate, smaller queries.
How can I improve the performance of this save? Loading 30 of these graphs takes 3-5 seconds. I should also note that the save got significantly less performant as more data was added.
Since you delete all the nodes (and their relationships) beforehand, you should not be using MERGE at all, as that requires a lot of scanning (without the relevant indexes) to determine whether each node already exists.
Try using CREATE instead (as long as the CREATEs avoid creating duplicates).

Update huge Firebase database as a transaction

I am using the Firebase database for an iOS application and maintain a huge database. For some events I use Cloud Functions to update several sibling nodes simultaneously as a transaction. However, some nodes contain a huge number of child nodes (possibly one million). Is it a bad idea to expand a huge number of records in a Cloud Function?
Firebase does have limits on the size of data it can GET and POST. Take a look at the Data Tree section of this page: https://firebase.google.com/docs/database/usage/limits
It mentions the maximum depth and the size limits of objects:
Maximum depth of child nodes 32
Maximum size of a string 10 MB
If your database has millions of records, you should use the query parameters and limit each request to smaller subsections of your data.
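As a rough sketch of what that looks like from an iOS client (the "events" path and the page size of 500 are made up; the calls are the standard Firebase Realtime Database iOS API), you can page through a large node in bounded chunks instead of reading it all in one request:

#import <FirebaseDatabase/FirebaseDatabase.h>

// Somewhere inside a method of your own class:
FIRDatabaseReference *ref = [[FIRDatabase database] referenceWithPath:@"events"]; // hypothetical path
// Fetch only the first 500 children rather than the whole node.
FIRDatabaseQuery *firstPage = [[ref queryOrderedByKey] queryLimitedToFirst:500];

[firstPage observeSingleEventOfType:FIRDataEventTypeValue withBlock:^(FIRDataSnapshot *snapshot) {
    for (FIRDataSnapshot *child in snapshot.children) {
        NSLog(@"%@ -> %@", child.key, child.value);
    }
    // For the next page, remember the last key you saw and start the next
    // query from it with queryOrderedByKey / queryStartingAtValue: (omitted here).
}];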

Adobe Analytics evars, props & events in App Measurement 2.0.0

I have known Adobe Analytics/SiteCatalyst for a while now (and I know all the "don't combine props and success events" rules), but I am still confused about the results I see in my reports: what are those numbers telling me exactly?
Background: I stumbled across the idea of "page view success events", but I am not sure whether this is still state of the art.
For my example I use one prop and one eVar, which contain exactly the same value (prop = eVar).
Prop + page views + visits + instances + orders
Result: 0 page views < 100 visits < 120 instances (orders not selectable)
My interpretation: this prop is set in an s.tl() call, so no page views are attributed to it (?). It was set 120 times in 100 sessions, so some sessions triggered the prop more than once. Success metrics (purchase metrics) cannot be combined with props.
eVar + page views + visits + instances + orders
Result: 20 orders < 100 visits < 120 instances < 6,000 page views
My interpretation: the variable was set in the same s.tl() call as the prop above, which is why visits and instances match. After this variable was set, 20 orders were triggered. Furthermore, after the s.tl() call that set the variables, the 100 sessions triggered 6,000 additional s.t() calls (?).
I guess it must depend somehow on the sequence of s.t() and s.tl() calls, but I am not sure. I would be very glad if someone could shed some light :)
eVars persist data, so the 6,000 page views are all page views that occurred after the eVar was set, until it expired (by default at the end of the visit).
Page views count only s.t() calls; instances count the number of times the variable was set, in both s.t() and s.tl() calls.

The Result of Batch Processing in CloudKit is "Limit Exceeded"

In CloudKit, I tried to save a large number of records by batch processing. However, my app crashed with the following error:
Error pushing local data: <CKError 0x15a69e640: "Limit Exceeded"
(27/1020); "Your request contains 561 items which is more than the
maximum number of items in a single request (400)">
This is my code:
CKModifyRecordsOperation *modifyRecordsOperation =
    [[CKModifyRecordsOperation alloc] initWithRecordsToSave:localChanges
                                          recordIDsToDelete:localDeletions];
modifyRecordsOperation.savePolicy = CKRecordSaveAllKeys;

modifyRecordsOperation.modifyRecordsCompletionBlock = ^(NSArray *savedRecords, NSArray *deletedRecordIDs, NSError *error) {
    if (error) {
        NSLog(@"[%@] Error pushing local data: %@", self.class, error);
    }
};
[privateDatabase addOperation:modifyRecordsOperation];
When fetching records, it seems they can all be obtained by setting resultsLimit on a CKQueryOperation:
https://stackoverflow.com/questions/24191999/cloudkit-count-records
https://stackoverflow.com/questions/26324643/cloudkit-your-request-contains-more-than-the-maximum-number-of-items-in-a-singl
https://forums.developer.apple.com/thread/11121
When I want to save a large number of records in a batch using CKModifyRecordsOperation, is there any way to eliminate the limit?
I'm afraid there is no way to eliminate the limit. And you can't count on it being 400 either - the server may decide to reject a request of any size.
Instead, handle the CKErrorLimitExceeded error as Apple suggests: by refactoring the operation into multiple smaller batches (i.e. multiple CKModifyRecordsOperations).
To ensure the speed of fetching and saving records, the server may reject large operations. When this occurs, a block reports the CKErrorLimitExceeded error. Your app should handle this error, and refactor the operation into multiple smaller batches.
Source: CKModifyRecordsOperation Class Reference
So to summarize:
Attempt a CKModifyRecordsOperation with the batch of records you want to save.
If the operation returns CKErrorLimitExceeded, split the batch of records into multiple smaller batches and submit them as multiple CKModifyRecordsOperations. (A simple split would be to just divide the batch in half, but this depends on factors like whether you have CKReferences amongst the new records in the batch.)
If any of those new CKModifyRecordsOperations fail with CKErrorLimitExceeded, split their records... and so on and so forth.
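A minimal sketch of that approach (the method name saveRecords: is made up, privateDatabase is assumed to be the CKDatabase from the question, and the simple halving strategy ignores CKReference ordering concerns):

// Inside the same class that owns privateDatabase (a CKDatabase property is assumed):
- (void)saveRecords:(NSArray<CKRecord *> *)records {
    CKModifyRecordsOperation *op =
        [[CKModifyRecordsOperation alloc] initWithRecordsToSave:records recordIDsToDelete:nil];
    op.savePolicy = CKRecordSaveAllKeys;

    __weak typeof(self) weakSelf = self;
    op.modifyRecordsCompletionBlock = ^(NSArray *savedRecords, NSArray *deletedRecordIDs, NSError *error) {
        if ([error.domain isEqualToString:CKErrorDomain] &&
            error.code == CKErrorLimitExceeded &&
            records.count > 1) {
            // The batch was too large for a single request: split it in half and resubmit each half.
            NSUInteger half = records.count / 2;
            [weakSelf saveRecords:[records subarrayWithRange:NSMakeRange(0, half)]];
            [weakSelf saveRecords:[records subarrayWithRange:NSMakeRange(half, records.count - half)]];
        } else if (error) {
            NSLog(@"Error pushing local data: %@", error);
        }
    };
    [self.privateDatabase addOperation:op];
}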

Neo4j: Monitoring and logging Indexing as it happens

I have a Neo4j database with roughly 417 million nodes, 780 million relationships and 2.6 billion properties.
As creating indexes takes a considerable amount of time, is there any way in Neo4j to trace and monitor the progress of index creation?
In the Neo4j browser, use the command
:SCHEMA
to get information about the indexes, including whether they are online or still being populated.
Use
:SCHEMA await
to wait for indexes to be built.

Resources