What is the best strategy to synchronize Parse objects across the application?
Take Twitter as an example: they have many Tweet objects, and the same tweet can be shown in multiple places, say viewController1 and viewController2, so it is not efficient for both of them to hold deep copies of the same Parse object.
When I increase the likeCount of Tweet_168 in viewController2, how should I update the likeCount of Tweet_168 in viewController1?
I created a singleton container class (TweetContainer) so that every Parse request goes through it, and it checks whether the incoming objectIds are already in the container:
A) If an objectId is already present, it updates the existing object's fields and discards the new object (to keep a single deep copy of each Parse object).
B) If it is not, it adds the new object.
(This process is fast, as I'm using hashmaps.)
This container holds deep copies of those objects and gives shallow copies (references) to viewControllers, so editing a Tweet in one viewController results in its update in all viewControllers!
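A minimal sketch of what I mean, in Swift and assuming the Parse iOS SDK (the names TweetContainer and canonicalize are illustrative, not Parse API):

import Parse

// Hypothetical singleton: keeps one canonical PFObject per objectId and
// hands out references to it, so an edit anywhere is visible everywhere.
final class TweetContainer {
    static let shared = TweetContainer()
    private var tweets: [String: PFObject] = [:]  // objectId -> canonical deep copy

    private init() {}

    // Route every fetched tweet through here before displaying it.
    func canonicalize(_ incoming: PFObject) -> PFObject {
        guard let id = incoming.objectId else { return incoming }
        if let existing = tweets[id] {
            // Case A: refresh the fields of the instance everyone already
            // holds, then let the duplicate be deallocated.
            for key in incoming.allKeys {
                if let value = incoming[key] {
                    existing[key] = value
                }
            }
            return existing
        }
        // Case B: first time this objectId is seen; register it.
        tweets[id] = incoming
        return incoming
    }
}

View controllers then hold the reference returned by canonicalize(_:) instead of their own copies.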
Taking it one step further, let's say Tweet objects have pointers to Author objects. When an Author object is updated (say, an image change), I want every Tweet that points to it to be updated. I can create a new AuthorContainer with the same strategy and give shallow copies to the Tweet objects in TweetContainer.
In an ideal world I could propagate every update to the cloud and refresh every object from the cloud before showing it to the user, but that's feasible neither bandwidth- nor latency-wise.
I am creating a Kafka Streams application, which receives different JSON objects from different topics, and I want to implement some kind of wait function, but I'm not sure how best to implement it.
To simplify the problem I'll use simplified entities in the following section; I hope they describe the problem well.
In one of my streams I receive car objects, and every car has an id. In a second stream I receive person objects; every person also has a car id and is assigned to a car via this id.
I want my Kafka Streams application to read from both input streams (topics) and enrich each car object with the four persons that have the same car id. A car object should only be forwarded to the next downstream processor once all four persons are included in it.
I have planned to create one input stream for the car objects and one for the person objects, parse the JSON data into the internal object representation, merge both streams, and apply a selectKey function on the merged stream to extract the keys from the entities.
After that I would push the data into a custom transform function, which has a state store included. Inside this transform function I would store every arriving car object in the state store under its id. As soon as new person objects arrive, I would add them to the respective car object in the state store (please ignore the case of late-arriving cars here). As soon as four persons are in a car object, I would forward it to the next stream function and remove it from the state store.
Would this be a suitable approach? I'm not sure about scalability: I have to make sure that, when running multiple instances, car and person objects with the same id are processed by the same application instance. I would use the selectKey function for this; would that work?
Thanks!
The basic design looks sound to me.
However, selectKey() by itself will not be sufficient, because transform() (in contrast to DSL operators) does not trigger auto-repartitioning. Thus, you need to repartition manually via through().
stream.selectKey(...)                  // key every record by its car id
      .through("user-created-topic")   // manual repartitioning: same key -> same partition -> same instance
      .transform(...);                 // stateful processing backed by a state store
https://docs.confluent.io/current/streams/upgrade-guide.html#auto-repartitioning
So I am a bit confused about the number of copies Core Data keeps of every managed object. First it stores a copy in the row cache, which it uses to fulfill faults. Then for every object it also keeps a snapshot of the unmodified object as well as the actual data for the object (assuming it's not a fault). That's 3 copies of one object, so I assume I am misunderstanding something. This would mean migrations would need 4 times the size of the objects in the original database, since Core Data also has to create a new destination stack. I assume Core Data is smart and may do things like copy-on-write under the hood and not create a snapshot unless the object is actually modified.
Could someone please explain what is wrong with my thought process? Also, is it true that the row cache is shared by different managed object contexts created from the same persistent store coordinator, or is there a row cache for each context?
There are at least 2 main collection types used in Realm:
List
Results
The relevant description from the documentation on a Results object says:
Results is an auto-updating container type in Realm returned from object queries.
Because I want my UITableView to respond to any changes on the Realm Object Server, I really think I want my UITableView to be backed by a Results object. In fact, I think I would always want a Results object to back my UI for this reason. This is only reinforced by the description of a List object in the documentation:
List is the container type in Realm used to define to-many relationships.
Sure seems like a List is focused on data modeling... So, being new to Realm and just reading the API, I'm thinking the answer is to use the Results object, but the tutorial (Step 5) uses the List object while the RealmExamples sample code uses Results.
What am I missing? Should I be using List objects to back my UITableViews? If so, what are the reasons?
Short answer: use a List if one already exists that closely matches what you want to display in your table view, otherwise use a Results.
If the data represented by a List that's already stored in your Realm corresponds to what you want to display in your table view, you should certainly use that to back it. Lists have an interesting property in that they are implicitly ordered, which can sometimes be helpful, like in the tutorial you linked to above, where a user can reorder tasks.
Results contain the results of a query in Realm. Running this query typically has a higher runtime overhead than accessing a List; how much higher depends on the complexity of the query and the number of items in the Realm.
That being said, mutating a List has performance implications too, since it writes to the file in an atomic fashion. So if this is something that will change frequently, a Results is likely a better fit.
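To make the distinction concrete, a minimal Swift sketch with hypothetical Task/TaskList models:

import RealmSwift

// A List models a stored, implicitly ordered to-many relationship...
class Task: Object {
    @objc dynamic var title = ""
}

class TaskList: Object {
    let tasks = List<Task>()  // order is persisted, so users can reorder
}

// ...while a Results is computed from a query and auto-updates:
let realm = try! Realm()
let sortedTasks = realm.objects(Task.self).sorted(byKeyPath: "title")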
You should use Results<> to back your UITableView, as Results is auto-updating. A List is used to link child models within a Realm model, whereas Results is used to query Realm objects. You should also add a Realm notification token so you know when the Results are updated and can take the necessary action (reload the table view, etc.). See here for Realm notifications: https://realm.io/docs/swift/latest/#notifications
P.S. The data in that example is just static and no changes are observed
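For the notification part, a minimal sketch assuming a hypothetical Task model; observe(_:) returns a token that must be retained:

import UIKit
import RealmSwift

class Task: Object {
    @objc dynamic var title = ""
}

class TasksViewController: UITableViewController {
    let realm = try! Realm()
    var results: Results<Task>!
    var token: NotificationToken?

    override func viewDidLoad() {
        super.viewDidLoad()
        results = realm.objects(Task.self)
        // Fires on any change to the matching objects, local or synced.
        token = results.observe { [weak self] _ in
            self?.tableView.reloadData()
        }
    }

    deinit {
        token?.invalidate()
    }
}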
I'd like to be able to get a unique ID for an object so that I can later test another object against it, without having to keep the first object. Say I have an NSArray of NSDictionaries that describes some sort of playlist (not the best way to store data, but for argument's sake). It's too big to store, and the user may have many of them; every time the user uses the application I just download it again. I want to be able to offer the user the ability to continue the playlist from his last location, but only if the playlist is exactly the same (say it's some sort of feed that updates regularly but always has the same count). Is there some quick way to condense that object into a small 'fingerprint' object, and then test the new playlist's fingerprint against it? Obviously it doesn't have to be perfect (otherwise it would be the whole object), just unique enough to test 'likely' equality. (Note: I'd prefer a solution that works for any NSObject, not just an NSArray of NSDictionaries.)
My naive approach was to just concatenate the first three letters of all of the NSStrings in each NSDictionary, but that seems silly.
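One standard approach, sketched below, is to hash a serialized form of the object. This assumes the playlist is plist-compatible (strings, numbers, arrays, dictionaries) and that serialization is deterministic for your data; for arbitrary NSObjects you could archive with NSKeyedArchiver instead:

import Foundation
import CryptoKit

// Serialize the object graph to bytes, then hash the bytes. Equal content
// yields an equal fingerprint; a changed feed almost certainly will not.
func fingerprint(of playlist: Any) -> String? {
    guard let data = try? PropertyListSerialization.data(
        fromPropertyList: playlist, format: .binary, options: 0
    ) else { return nil }
    return SHA256.hash(data: data)
        .map { String(format: "%02x", $0) }
        .joined()
}

You then store only the short hex string and compare it against the fingerprint of the freshly downloaded playlist to decide whether to offer "continue".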
I am developing a game app where I am using an NSCountedSet as my character's inventory, and the inventory changes dynamically from view to view.
In other words:
the user can buy items in view 1 and add them to the inventory, then the user switches to view 2 and uses some items, which should then be removed from the inventory, and so on.
My questions are:
1. How can I write and read an NSCountedSet efficiently to a plist?
2. Is the best approach to write the data to disk as view 1 closes and then reread the data as view 2 opens? Or is there a way I can read the data once when the app launches, make all the changes, and then save the data back when the app terminates?
The data consists of strings and numbers only and is small in amount.
The following are snippets from my code:
- (void) initInventory
{
    //initialize the inventory with some string objects
    [Inventory addObject:@"x"];
    [Inventory addObject:@"y"];
    [Inventory addObject:@"z"];
}

- (void) addItemToInventory:(NSString*)ItemName
{
    //add the object passed in to the inventory
    [Inventory addObject:ItemName];
}

- (void) removeItemFromInventory:(NSString*)ItemName
{
    //remove the object passed in from the inventory
    [Inventory removeObject:ItemName];
}
1. How can I write and read an NSCountedSet efficiently to a plist? ...The data consists of strings and numbers only and is small in amount.
You can just record it as an array of alternating strings and numbers, where each number is the count of the preceding string. For a small set, you should not need to worry about the performance of this operation.
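A sketch of that alternating layout in Swift (the function names are illustrative):

import Foundation

// Flatten to [item, count, item, count, ...] -- all plist-friendly types.
func plistArray(from inventory: NSCountedSet) -> [Any] {
    var array: [Any] = []
    for item in inventory {
        array.append(item)
        array.append(inventory.count(for: item))
    }
    return array
}

// Rebuild the counted set by re-adding each item `count` times.
func countedSet(from array: [Any]) -> NSCountedSet {
    let set = NSCountedSet()
    var index = 0
    while index + 1 < array.count {
        if let count = array[index + 1] as? Int {
            for _ in 0..<count {
                set.add(array[index])
            }
        }
        index += 2
    }
    return set
}

Writing is then just (plistArray(from: inventory) as NSArray).write(to: url, atomically: true).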
2. Is the best approach to write the data to disk as view 1 closes and then reread the data as view 2 opens? Or is there a way I can read the data once when the app launches, make all the changes, and then save the data back when the app terminates?
You can pass it (the model) from one view controller to the next and, in many cases, just share the same model instance. Whether it makes sense to dispose of it depends on whether you still need a reference and how often that information is needed. So best practice depends on memory and on your ability to keep the data correct. For example, you may opt to share the instance to avoid unnecessary I/O and to keep the data synchronized, but you should avoid holding thousands of objects you won't need anytime soon.
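A sketch of the sharing idea with hypothetical view controllers; each screen receives the same NSCountedSet instance instead of round-tripping through disk:

import UIKit

class ShopViewController: UIViewController {
    var inventory: NSCountedSet!  // injected by whoever presents this screen

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if let battle = segue.destination as? BattleViewController {
            battle.inventory = inventory  // same instance, so it stays in sync
        }
    }
}

class BattleViewController: UIViewController {
    var inventory: NSCountedSet!
}

You would then save the flattened array once, e.g. in applicationDidEnterBackground(_:), rather than on every view transition.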
If your data were not small, you should consider something like Core Data instead (3 values is extra-tiny).