What would be the better approach to let a user search for other users of the app (using Parse.com as the backend)?
1. Import all the data in the _User table, then filter it in the app when using the UISearchBar.
2. Query Parse for the search term and load the results into the table view.
There is no "right" answer. It depends entirely on how you define "better."
Option 1 likely produces the best user experience on the surface, in the sense that filtering a list on the fly feels a lot more responsive. But you have to schedule downloading the user list for when the user isn't already trying to search.
Option 2 is likely more efficient: less bandwidth, less storage. But the user has to be online to search, and you probably can't do a "real time" filter unless you're on a fast network.
There may be other factors also. I didn't want to expose a list of users, for example, so I went for option 2.
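If you go with option 2, here is a minimal sketch of the query side using the Parse iOS SDK. The field name and the result limit are my assumptions; adjust them to your schema:

    import Parse

    // Hypothetical helper: searches the _User table for usernames containing
    // the typed term. Debounce calls from the UISearchBar so you don't fire
    // a request on every keystroke.
    func searchUsers(matching term: String, completion: @escaping ([PFUser]) -> Void) {
        guard let query = PFUser.query() else { return }
        query.whereKey("username", contains: term) // substring match; assumed field name
        query.limit = 25                           // keep responses small for the table view
        query.findObjectsInBackground { objects, _ in
            completion((objects as? [PFUser]) ?? [])
        }
    }

Pair it with a short debounce (say 300 ms) and cancel the in-flight query when the text changes, and the "real time" feel gets close to option 1 on a decent network.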
I have a Rails inventory app that is available to global users, allowing them to enter their own inventory information and query those of others.
a British person in London adds 10 units of "bicycle" to the inventory table
a Japanese person adds 2 units of 自転車 (bicycle in Japanese)
a Vietnamese person adds 5 units of xe dap (bicycle in Vietnamese)
The British person can query 'bicycle' and it will output all bicycles in the system (17 units) and can show the details of each in their original language, without the users classifying them beforehand. Likewise, the Japanese person can query '自転車', which will show all bicycles.
How can this be done?
The globalize gem requires users to manually translate each record, so it's not the right fit. I've heard about machine learning and deep learning, but I don't know if they're the right solution for this.
If Stack Overflow is not the right place to ask this, where should I ask? Quora does not allow long questions.
Machine learning does not seem like a proper solution here: it's a complex field, and without prior experience it would be hard to learn enough to apply it to a real-life problem.
Here are a few solutions you could implement today. Each has its ups and downs, and you will have to weigh those against your requirements yourself.
Since I don't have enough information about your system, I'll generalize to something that's likely close.
Solutions:
1. Define a limited number of items for your system, like Bike, and add them to a config file or an items database, each item having its unique ID. When users add something, they have to select from your list. Have an Other item as a catch-all, and maybe provide a note field so users can add anything that identifies the item (see the sketch below).
2. Similar to the above, but you give users a way to add new items into the system: you start with, say, 10 standard items, every user can add items to the site (moderated), and other users then have access to them.
3. Have a solid search system in place, like Elasticsearch (or anything else). When a user creates an item, index it in the language it was entered in, then use the Google Translation API (or another translation service) to translate it into all the languages you need and index those for search as well.
I think solution 1 is the best if you are able to implement it, followed by solution 2.
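To make solution 1 concrete: the point is that every inventory record stores a language-neutral item ID, and localized names are just display/search data attached to that ID. A toy sketch (in Swift, purely illustrative since your app is Rails; the same shape works in any stack, and all names are made up):

    // Hypothetical catalog: IDs are language-neutral, names are per-locale.
    struct CatalogItem {
        let id: Int
        let names: [String: String] // locale code -> localized name
    }

    let catalog = [
        CatalogItem(id: 1, names: ["en": "bicycle", "ja": "自転車", "vi": "xe dap"])
    ]

    // A query in any language resolves to the same canonical ID, so
    // "bicycle" and "自転車" both find every record tagged with item 1.
    func itemID(matching query: String) -> Int? {
        catalog.first { item in
            item.names.values.contains { $0.caseInsensitiveCompare(query) == .orderedSame }
        }?.id
    }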
We are developing a social app with Firebase (swift / iOS).
We face the problem that we have two data trees and have to calculate the delta between them without generating high data traffic.
Example:
We have a cars structure and a user structure.
The cars structure contains 100 different vehicle models.
The user structure contains all vehicle models the user has already driven.
We now want a high-performance way to determine all the vehicles a user has not yet driven, without downloading the whole tree structure.
The number of users and the number of vehicles are growing steadily.
Does anyone have an approach to a solution, or an idea of the direction we should be thinking in?
love, alex
I think the key to using Firebase effectively is data duplication. So if you want to display a list of cars the user has and hasn't driven, create a separate table containing only the information displayed in that list (like the path to an image, and the make & model), using unique IDs as the keys to entries in that table. You wouldn't need to know things like top speed and price until they tap into details, right? (I'm making some assumptions here, though.)
Then, simply get the list of unique IDs for the cars the user already has driven, and manipulate your offline model accordingly.
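A minimal sketch of that read-and-diff step, assuming (my invention) a lightweight /cars summary node keyed by car ID and a /users/<uid>/drivenCars node holding the IDs the user has driven:

    import FirebaseDatabase

    // Hypothetical paths: /cars/<carID> (summary data only) and
    // /users/<uid>/drivenCars/<carID>: true
    func fetchUndrivenCarIDs(uid: String, completion: @escaping (Set<String>) -> Void) {
        let root = Database.database().reference()
        root.child("cars").observeSingleEvent(of: .value) { carsSnap in
            let allIDs = Set(carsSnap.children.compactMap { ($0 as? DataSnapshot)?.key })
            root.child("users/\(uid)/drivenCars").observeSingleEvent(of: .value) { drivenSnap in
                let driven = Set(drivenSnap.children.compactMap { ($0 as? DataSnapshot)?.key })
                completion(allIDs.subtracting(driven)) // the delta, computed client-side
            }
        }
    }

With 100 cars this stays cheap because the summary node carries only list data; the full detail for a car lives elsewhere and is fetched per car on demand.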
Right now I'm using an external server to manage data duplication; it propagates a write operation to other places in the database when necessary. I'm on my phone right now, but I think Ray Wenderlich has an article about this.
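Whether the propagation runs on a server (via the Admin SDK) or on the client, the core of it is Firebase's multi-location update, which writes one logical change to every duplicated copy atomically. A client-side sketch with made-up node names:

    import FirebaseDatabase

    // Hypothetical fan-out: one rename touches both the full record and
    // the lightweight summary copy in a single atomic write.
    func renameCar(id: String, to name: String) {
        let fanOut: [String: Any] = [
            "cars/\(id)/name": name,
            "carSummaries/\(id)/name": name
        ]
        Database.database().reference().updateChildValues(fanOut)
    }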
Let me preface by saying I know the pros and cons of using CloudKit, and that I am trying to work around one of those issues. Since CloudKit is more or less free, that's what I'm using, though I am aware this might be easier with a different framework.
For simplicity's sake, let's say I'm making a social media image-sharing app; I want to track photo likes, and my framework is CloudKit. I'm trying to figure out the most efficient way to track likes for publicly shared photos.
A few of the options I've mulled over:
Every like is a new record in the public database with a back reference to the photo. Since CK has no aggregate queries, in order to display the number of likes for a photo I need to query all the records for a given reference and count them. If this number gets very large, I'm iterating over the cursor. This seems quick and accurate to write, but potentially pretty slow when displaying a lot of photos.
Likes are individual records in the private database, and I update the aggregate in a single record per photo in the public DB at the same time. Getting the total like count is now a one-record query. Determining whether a user has already liked something also becomes easier with a smaller private DB of likes. This route sounds like the fastest, but it is potentially inaccurate if multiple users are liking the same photo. Also, deleting a user and all their private likes leaves my aggregates unchanged; I'd need some process updating the aggregates.
I'd love any advice I can get, thanks!
Your second approach is the best, and it can be accurate. When you read a record, change it, and save it, CloudKit checks whether the data changed in the meantime. So when two users update one photo at the same time, one of them will get a CloudKit error saying the data has changed; just fetch the latest version and try the update again until you succeed.
No matter what solution you choose, if you want to update the likes after a user is removed, you need a process that does this.
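A minimal sketch of that fetch-modify-save loop, assuming a hypothetical likeCount field on the public photo record:

    import CloudKit

    // Hypothetical "likeCount" field; retries on conflict until the save sticks.
    func incrementLikes(on recordID: CKRecord.ID,
                        in db: CKDatabase = CKContainer.default().publicCloudDatabase) {
        db.fetch(withRecordID: recordID) { record, _ in
            guard let record = record else { return }
            let current = (record["likeCount"] as? NSNumber)?.intValue ?? 0
            record["likeCount"] = NSNumber(value: current + 1)
            db.save(record) { _, error in
                if let ckError = error as? CKError, ckError.code == .serverRecordChanged {
                    // Someone else won the race; fetch the fresh record and retry.
                    incrementLikes(on: recordID, in: db)
                }
            }
        }
    }

For batches, CKModifyRecordsOperation with the .ifServerRecordUnchanged save policy gives you the same conflict detection with more control.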
iOS newb here, working on an app that pulls a JSON feed from a web server.
Is it best practice to create an API (in PHP, in my case) that spills all the records, which could be thousands, into a JSON feed and have the iOS app handle all of them (though displaying only one screen at a time)?
Or is it best practice to limit the results in the JSON feed to, say, ten or one hundred, and then have some user action in the iOS app draw down the next batch?
The first would seem more desirable given the reusable-cell concept; however, sending huge numbers of records is bound to be slower and/or problematic from a web-traffic and memory-management point of view.
On the other hand, the second seems really complicated. How would you know which page to pull down based on iOS-style gestures?
Looking to learn best practice on this as it seems to be common for many apps.
Many thanks for any suggestions.
It probably depends on your use case. If you have a lot of data, you probably don't want to load it all at once: at a certain point it takes too much time, and you may have to do it more than once if you don't persist the data or if you want to refresh.
In that case, a good API is designed to be iterated over, e.g. with limit/offset parameters. For example, as the user scrolls a list of items and reaches the bottom, you fetch the next batch of items.
In the other case, where one call is fast enough to deliver all the data at once, there is no reason to make it more complicated, as you said, and you can still add pagination afterwards.
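A minimal client-side sketch, assuming a hypothetical PHP endpoint that accepts limit and offset query parameters:

    import Foundation

    // Hypothetical endpoint and parameter names; match them to your PHP API.
    func fetchPage(offset: Int, limit: Int = 50,
                   completion: @escaping (Data?) -> Void) {
        var components = URLComponents(string: "https://example.com/api/items.php")!
        components.queryItems = [
            URLQueryItem(name: "limit", value: String(limit)),
            URLQueryItem(name: "offset", value: String(offset))
        ]
        URLSession.shared.dataTask(with: components.url!) { data, _, _ in
            completion(data) // decode the JSON and append to your data source on the main queue
        }.resume()
    }

A common trigger is tableView(_:willDisplay:forRowAt:): when the row about to appear is within a handful of the last loaded row, request the next offset. No special gesture handling is needed; the table view's normal scrolling drives it.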
I am struggling to improve the search speed of my iOS app, which uses Core Data. Can anyone help, or suggest alternative solutions to improve my search speed? I've listed the details of my situation below.
Project Details
I am currently creating a data-reference app that uses Core Data with a preloaded SQLite database. I want to be able to search on one or more attributes of an entity, which could contain over 100,000 records, and return results quickly.
The best result I have achieved so far (searching is still quite slow, though) is to load a view with a search display controller and set the fetch limit (currently 100) for the fetch request of the fetched results controller. I've also used search scopes to simplify the predicates. I do use the CONTAINS keyword in my predicates, but I am not sure how to implement the suggestion in session 137 of WWDC 2010, which keywords I should be storing, or how many I should store.
Here is a link to one of my classes,
http://pastebin.com/cHHicc1s
Thank you for your time and help.
Regards
Jing Jing Tao
You may want to store a normalized copy of an existing attribute as a new attribute, then index it. Remove the CONTAINS from your predicate and instead use >= and < comparisons. Also normalize the search text, so that the comparison matches the stored values. Apple documents all of this in the 'Derived Property' example and in the WWDC 2010 session #118 video.
If you are doing large searches on attributes, you should create indexes. You can do this in Xcode when you define the model. Click on the entity, and right under where you specify the entity name, you can create additional indexes.
Note, however, that you will incur additional file-size overhead, and inserts/deletes will take a bit more time. But your searches will be very fast.
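A minimal sketch of the normalize-and-range-scan idea, assuming a hypothetical indexed normalizedName attribute that you populate once at import time:

    import Foundation

    // Fold case and diacritics once, at import time, and store the result in an
    // indexed attribute (hypothetically "normalizedName") alongside the original.
    func normalized(_ text: String) -> String {
        text.folding(options: [.caseInsensitive, .diacriticInsensitive], locale: .current)
    }

    // Prefix search as an indexed range scan instead of CONTAINS:
    // "bik" matches everything in the half-open range ["bik", "bil").
    func rangePredicate(for searchText: String) -> NSPredicate {
        let lower = normalized(searchText)
        guard let last = lower.unicodeScalars.last,
              let bumped = Unicode.Scalar(last.value + 1) else {
            return NSPredicate(format: "normalizedName >= %@", lower)
        }
        let upper = String(lower.unicodeScalars.dropLast()) + String(bumped)
        return NSPredicate(format: "normalizedName >= %@ AND normalizedName < %@",
                           lower, upper)
    }

This covers prefix matching; for matching a word anywhere in the text, you would additionally store per-record keywords and run the same range scan against them, which is what the WWDC suggestion amounts to.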