Let me preface this by saying I know the pros and cons of using CloudKit and that I am trying to work around one of those issues. However, since CloudKit is more or less free, that's what I'm using. I am aware this might be easier with a different framework.
For simplicity's sake, let's say I'm making a social media image-sharing app, I want to track photo likes, and my framework is CloudKit. I'm trying to figure out the most efficient way to track likes for publicly shared photos.
A few of the options I've mulled over:
Every like is a new record in the public database with a back reference to the photo. Since CloudKit has no aggregate queries, in order to display the number of likes for a photo I need to query all the records for a given reference and count them. If this number gets very large, I'm iterating over the cursor. This seems quick and accurate to write, but potentially pretty slow when displaying a lot of photos.
Likes are individual records in the private database, and I update the aggregate in a single record per photo in the public database at the same time. Getting the total like count is now a one-record query. Determining whether a user has already liked something also becomes easier with a smaller private database of likes. This route sounds like the fastest, but it is potentially inaccurate if multiple users are liking the same photo. Also, deleting a user and all their private likes leaves my aggregates unchanged; I'd need some process updating the aggregates.
I'd love any advice I can get, thanks!
Your second approach is the best, and it can be made accurate. When you read a record, change it, and save it, CloudKit checks whether the record has changed in the meantime. So when two users update the same photo at the same time, one of them will get a CloudKit error saying the record has changed; just fetch the latest version and try the update again until you succeed.
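For illustration, here is a minimal sketch of that fetch-modify-save retry loop in Swift, assuming a public-database photo record with an integer likeCount field (the field name and record setup are placeholders, not from the question):

    import CloudKit

    // Sketch: increment a hypothetical "likeCount" field on a photo record,
    // retrying whenever CloudKit reports that the server copy changed first.
    func incrementLikeCount(for photoID: CKRecord.ID,
                            in database: CKDatabase = CKContainer.default().publicCloudDatabase,
                            completion: @escaping (Error?) -> Void) {
        database.fetch(withRecordID: photoID) { record, error in
            guard let record = record else {
                completion(error)
                return
            }
            let current = record["likeCount"] as? Int64 ?? 0
            record["likeCount"] = current + 1

            database.save(record) { _, saveError in
                if let ckError = saveError as? CKError, ckError.code == .serverRecordChanged {
                    // Another client saved first; fetch the fresh record and retry.
                    incrementLikeCount(for: photoID, in: database, completion: completion)
                } else {
                    completion(saveError)
                }
            }
        }
    }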
No matter what solution you choose, if you want to update the likes after a user is removed, you need a process that does this.
I am fairly new to Power Apps, and am trying to make a batch data entry form.
I am prototyping this now, and while I think in theory it should be working, I keep running into technical errors.
The data source I'm using is Google Sheets. For prototyping purposes, there are three columns: item_id, item, and recorded_value.
For this app, it will be pulling a list of standard values into a gallery, where the input values can then be selected.
The approach I have taken is to create a gallery, which is added to a collection using the code below:
ClearCollect(
    collection,
    ForAll(
        Filter(Gallery1.AllItems, true),
        {
            item: t_item.Text,
            item_id: t_item_id.Text,
            recorded_value: t_recorded_value.Text
        }
    )
)
This is then uploaded to Google Sheets. I have found "success" using the two methods below:
ForAll(
    collection,
    Patch(
        records,
        Defaults(records),
        { item: item, item_id: item_id, recorded_value: recorded_value }
    )
)
or
Collect(records, collection)
I would say overall I am seeing 2 main issues in the testing:
The initial 'collect' seems to fail to capture items on occasion. I don't know if it is cache related or what, but unless I scroll all the way down it seems to leave some fields blank (maybe not an issue in real use, but it seems odd).
Uploading of records seems to take excruciatingly long in some cases. While initially it was just straight up crashing due to the problems in issue 1, I have found that it will sometimes get to, say, item 85 before sitting for a minute or so and then going through the rest of the list. For just 99 items it is taking several minutes to upload.
Ultimately I am looking to know whether there is a better approach for what I am doing. I basically just want to take a maximum of 99 rows and paste them onto the table, but it feels really inefficient right now due to the looping nature of the function. I am not sure if this is more of a Power Apps or Google Sheets issue, but any advice would be appreciated.
From everything I could research, it seems like a batch upload of records like this is going to be time consuming nearly any way you approach it.
However, I was able to come up with a workaround which more or less eliminates the problem.
Instead of uploading each individual record, I am concatenating all records in the collection into a single cell through a variable, using delimiters to differentiate the rows/columns (set the variable with the Concat function, then Patch the variable to the data source).
This method allows all of the data to be stored nearly instantaneously.
After that I am just going to perform some basic ETL through Python to transform the data into a more standard format and load it into SQL Server, which is fairly trivial to do.
I recommend that others looking to take a 'batch insert' approach try something similar, as it will now take users essentially a second to load records rather than several minutes.
We are developing a social app with Firebase (Swift / iOS).
We face the problem that we have two data trees and have to calculate the delta without generating heavy data traffic.
Example:
We have a structure cars and a structure user.
The cars structure contains 100 different vehicle models.
The user structure contains all vehicle models that have already been driven by the user.
We now want to implement a high-performance solution in order to determine all the vehicles that have not yet been driven by a user without downloading the whole tree structure.
The number of users and the number of vehicles are growing steadily.
Does anyone have a solution approach or idea in which direction we need to think?
love, alex
I think the key to effectively using Firebase is data duplication. So if you want to display a list of cars the user has and hasn't driven, create a separate table containing only the information displayed in that list, like the path to an image and the make & model, using unique IDs as the keys to entries in that table. You wouldn't need to know things like top speed and price until they tap into details, right? (I'm making some assumptions here, though.)
Then, simply get the list of unique IDs for the cars the user has already driven, and manipulate your offline model accordingly.
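To make that concrete, here is a rough Swift sketch against the Realtime Database. It assumes an illustrative layout with a flat carSummaries node (carID → display info) and users/<uid>/drivenCarIds (carID → true); all the node names here are made up:

    import FirebaseDatabase

    // Sketch: compute the "not yet driven" set client-side by downloading only
    // the lightweight summary keys and the user's driven-car IDs, not the full tree.
    func fetchUndrivenCarIDs(forUser uid: String,
                             completion: @escaping (Set<String>) -> Void) {
        let root = Database.database().reference()

        root.child("carSummaries").observeSingleEvent(of: .value) { carsSnapshot in
            let allIDs = Set(carsSnapshot.children.allObjects
                .compactMap { ($0 as? DataSnapshot)?.key })

            root.child("users").child(uid).child("drivenCarIds")
                .observeSingleEvent(of: .value) { drivenSnapshot in
                    let drivenIDs = Set(drivenSnapshot.children.allObjects
                        .compactMap { ($0 as? DataSnapshot)?.key })

                    // Set difference gives the cars the user has not driven yet.
                    completion(allIDs.subtracting(drivenIDs))
                }
        }
    }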
Right now I'm using an external server to manage data duplication, which propagates a write operation to other places in the database when necessary. I'm on my phone right now, but I think Ray Wenderlich has an article about this.
What would be the better approach to let a user search for other users who use the app (using Parse.com as the backend):
Import all the data in the _User table, then filter it in the app when using the UISearchBar
Querying Parse for the search term and loading the results into the table view
There is no "right" answer. It depends entirely on how you define "better."
Option 1 likely produces superficially the best user experience, in the sense that filtering a list on the fly looks a lot more responsive. But you have to schedule downloading the user list for when the user isn't already trying to search.
Option 2 is likely more efficient: less bandwidth, less storage. But the user has to be online to search, and you probably can't do a "real time" filter unless you're on a fast network.
There may be other factors also. I didn't want to expose a list of users, for example, so I went for option 2.
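If it helps, option 2 might look something like this sketch with the Parse iOS SDK; the username field and the prefix-match regex are my own assumptions:

    import Parse

    // Sketch: query Parse directly for the search term (option 2).
    func searchUsers(matching term: String,
                     completion: @escaping ([PFUser]?, Error?) -> Void) {
        guard let query = PFUser.query() else {
            completion(nil, nil)
            return
        }
        // Case-insensitive prefix match on a hypothetical "username" field.
        let pattern = "^" + NSRegularExpression.escapedPattern(for: term)
        query.whereKey("username", matchesRegex: pattern, modifiers: "i")
        query.limit = 25
        query.findObjectsInBackground { objects, error in
            completion(objects as? [PFUser], error)
        }
    }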
I am working on an application which uses the Foursquare and other server APIs for getting information from the internet. But I have to use some of this data when the application is not connected to the internet, so I need a method which easily saves this data from the internet and stores it on the "disk" as a cache in case the phone loses connectivity. Basically I want to store some of my model classes like:
VenueCategory contains a name, id, images(~10), weather reports for 7 days, venues.
A Venue contains images, rating, name, category, categoryImage, address, phone number and open hours schedule.
A weather report contains date, max and min temperature, wind, ....
I am thinking of 3 methods, but I don't know which is the best for my problem; maybe you can give me better ideas.
Database
Pro:
I get a nice representation of my data.
Cons:
It is hard to modify if the application is live.
I don't need a new table for the venue category; a table is too much for 1 record inside it.
I have to do a lot of queries, insertions, deletions, updates, etc.
Serialization
It is easy if I can find a nice way: I just write the whole class to disk and read it back from disk. (I've never tried it.)
Plist: (just like the database)
My final question is: which do you think is the best, and why? Do you have a better idea?
The simplest way to approach this (IMO) is to have your DTOs conform to NSCoding, serialize them using NSKeyedArchiver, and deserialize them using NSKeyedUnarchiver.
You can use AutoCoding for that, which automatically implements the required methods in NSCoding with no effort at all.
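As a rough illustration of the manual route (not AutoCoding), a trimmed-down Venue with just two of its fields might look like this; the real model would also carry images, rating, address, and so on:

    import Foundation

    // Sketch: a DTO that conforms to NSCoding so it can be archived to disk.
    class Venue: NSObject, NSCoding {
        let name: String
        let rating: Double

        init(name: String, rating: Double) {
            self.name = name
            self.rating = rating
        }

        required init?(coder: NSCoder) {
            guard let name = coder.decodeObject(forKey: "name") as? String else { return nil }
            self.name = name
            self.rating = coder.decodeDouble(forKey: "rating")
        }

        func encode(with coder: NSCoder) {
            coder.encode(name, forKey: "name")
            coder.encode(rating, forKey: "rating")
        }
    }

    // Archive the cache to disk when data arrives, unarchive when offline.
    func cacheURL() -> URL {
        FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("venues.archive")
    }

    func saveVenues(_ venues: [Venue]) throws {
        let data = try NSKeyedArchiver.archivedData(withRootObject: venues,
                                                    requiringSecureCoding: false)
        try data.write(to: cacheURL())
    }

    func loadVenues() throws -> [Venue]? {
        let data = try Data(contentsOf: cacheURL())
        return try NSKeyedUnarchiver.unarchiveTopLevelObjectWithData(data) as? [Venue]
    }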
Have you considered using CoreData? That was the first solution that came to my mind.
CoreData is (by default) backed by SQLite and is, in my opinion, the de facto standard for persisting objects in iOS.
I am struggling to improve the search speed of my iOS app, which uses Core Data. Can anyone help or suggest alternative solutions to improve my search speed? I've listed the details of my situation below.
Project Details
I am currently creating a data reference app which uses Core Data with a preloaded SQLite database. I want to be able to search on one or more attributes of an entity which could contain over 100,000 records and return results quickly.
The best results I have achieved so far (searching is still quite slow, though) are from loading a view with a search display controller and setting the fetch limit (currently 100) for the fetch request of the fetched results controller. I've also used search scopes to simplify the predicates. I do use the CONTAINS keyword in my predicates, but I am not sure how to implement the suggestion in session 137 of WWDC 2010, what keywords I should be storing, or how many I should store.
Here is a link to one of my classes,
http://pastebin.com/cHHicc1s
Thank you for your time and help.
Regards
Jing Jing Tao
You may want to normalize an existing attribute into a new attribute and then index it. Remove the CONTAINS from your predicate and instead use >= and < comparisons. Also, normalize the search text the same way so that the comparisons match. Apple documents all this in the 'Derived Property' example and in the WWDC 2010 session # 118 video.
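A small Swift sketch of that idea, assuming the entity stores a pre-computed nameNormalized attribute (lowercased, diacritics stripped, kept in sync with name, and indexed); the entity and attribute names here are mine:

    import CoreData
    import Foundation

    // Normalize text the same way the stored attribute was normalized, so the
    // comparison is a cheap binary match instead of a locale-aware scan.
    func normalized(_ text: String) -> String {
        return text.folding(options: [.caseInsensitive, .diacriticInsensitive],
                            locale: nil)
    }

    // Prefix search using >= and < on an indexed, pre-normalized attribute,
    // instead of a slow CONTAINS[cd] over the raw attribute.
    func prefixSearchRequest(for term: String) -> NSFetchRequest<NSManagedObject> {
        let lower = normalized(term)
        let upper = lower + "\u{FFFF}"   // everything starting with `lower` sorts below this

        let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
        request.predicate = NSPredicate(format: "nameNormalized >= %@ AND nameNormalized < %@",
                                        lower, upper)
        request.fetchLimit = 100
        return request
    }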
If you are doing large searches on attributes, you should create indexes. You can do this in Xcode when you define the model. Click on the entity, and right under where you specify the entity name, you can create additional indexes.
Note, however, that you will incur additional file size overhead, and inserts/deletes will also take a bit more time. But, your searches will be very fast.