Recommender system result curation - machine-learning

I want to ask whether there is some sort of curation algorithm that arranges and delivers results from a recommender system to a user.
For example, how does Twitter recommend feeds to users? Is there some algorithm that does that, or does Twitter simply sort tweets by the highest number of interactions (also taking the time posted into account)?

No, it is not simply sorted by interaction counts.
Recommendation systems are typically built around content-based filtering or collaborative filtering, ranking items according to the user's viewing and interaction history.
One approach is to compute the correlation between the user's interaction history and the content available on Twitter, and then recommend the items that correlate most strongly.
Cosine similarity (or cosine distance) can also be used to measure how close a piece of content is to the user's interaction profile; a sketch of this idea is below.
You should also explore recommendation systems based on other measures, such as Pearson correlation, weighted averages, etc.
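
To make the cosine-similarity idea concrete, here is a minimal sketch (not Twitter's actual pipeline) that ranks candidate tweets by their similarity to a profile built from tweets the user has interacted with; the feature vectors and function names are illustrative assumptions.

```python
# A minimal sketch (not Twitter's actual pipeline): rank candidate tweets by
# cosine similarity to a profile built from the tweets a user interacted with.
# The feature vectors and names here are illustrative assumptions.
import numpy as np

def cosine_scores(profile, candidates):
    """Cosine similarity between one profile vector and each row of `candidates`."""
    profile = profile / (np.linalg.norm(profile) + 1e-12)
    candidates = candidates / (np.linalg.norm(candidates, axis=1, keepdims=True) + 1e-12)
    return candidates @ profile

def recommend(history_vectors, candidate_vectors, top_k=10):
    """Return indices of the top-k candidates closest to the user's interaction profile."""
    profile = history_vectors.mean(axis=0)            # the user's "view stats" profile
    scores = cosine_scores(profile, candidate_vectors)
    return np.argsort(scores)[::-1][:top_k]

# Toy example: 5 tweets the user engaged with, 100 candidates, 64-dim features.
rng = np.random.default_rng(0)
print(recommend(rng.random((5, 64)), rng.random((100, 64)), top_k=5))
```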

Related

How to take user characteristics into account while building a recommendation system?

I'm learning recommender systems. I have users' age, profession, gender, etc. available in the dataset, along with their interaction scores with the available videos. I also have certain characteristics of each video, like genre, number of views, etc. I am thinking of applying collaborative filtering to this particular dataset, but most examples online ignore this information, even though these characteristics can help explain why a certain user rated a certain video the way they did and can help produce more accurate predictions. Are there algorithms that take this information into account?
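
Since LightFM comes up elsewhere on this page, here is a hedged sketch of one family of algorithms that does use such side information: hybrid matrix factorization, in which user features (age, gender, profession) and item features (genre) get embeddings alongside the IDs. The toy data and feature names below are my own assumptions.

```python
# A hedged sketch of hybrid matrix factorization with the LightFM library.
# User and video features are embedded alongside the IDs, so demographics and
# genre influence the predictions. All data below is a toy illustration.
import numpy as np
from lightfm import LightFM
from lightfm.data import Dataset

users = ["u1", "u2", "u3"]
videos = ["v1", "v2", "v3", "v4"]
user_feats = {"u1": ["age:20s", "gender:f", "job:student"],
              "u2": ["age:30s", "gender:m", "job:engineer"],
              "u3": ["age:20s", "gender:m", "job:student"]}
video_feats = {"v1": ["genre:comedy"], "v2": ["genre:drama"],
               "v3": ["genre:comedy"], "v4": ["genre:news"]}
ratings = [("u1", "v1", 5), ("u1", "v2", 2), ("u2", "v2", 4), ("u3", "v3", 5)]

dataset = Dataset()
dataset.fit(users, videos,
            user_features={f for fs in user_feats.values() for f in fs},
            item_features={f for fs in video_feats.values() for f in fs})
interactions, weights = dataset.build_interactions(ratings)
uf = dataset.build_user_features((u, fs) for u, fs in user_feats.items())
vf = dataset.build_item_features((v, fs) for v, fs in video_feats.items())

model = LightFM(no_components=16, loss="warp")
model.fit(interactions, user_features=uf, item_features=vf, epochs=30)

# Score every video for user "u1" (internal index 0, per insertion order).
scores = model.predict(0, np.arange(len(videos)), user_features=uf, item_features=vf)
print(sorted(zip(videos, scores), key=lambda x: -x[1]))
```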

Converting "Implicit" user interactions to "Explicit" user ratings for recommender systems

I'm currently in the process of building a recommendation system with implicit data (e.g. clicks, views, purchases); however, much of the research I've looked at seems to skip the step of "aggregating implicit data". For example, how do you aggregate multiple clicks and purchases over time into a single user rating (as is required for a standard matrix factorization model)?
I've been experimenting with several Matrix Factorization based methods, including Neural Collaborative Filtering, Deep Factorization Machines, LightFM, and Variational Autoencoders for Collaborative Filtering. None of these papers seem to address the issue of aggregating implicit data. They also do not discuss how to weight different types of user events (e.g. clicks vs purchase) when calculating a score.
For now I've been using a confidence score approach (the confidence score corresponds to the count of events) as outlined in this paper: http://yifanhu.net/PUB/cf.pdf. However, this approach doesn't address incorporating other types of user events (other than clicks), nor does it address negative implicit feedback (e.g. a ton of impressions with zero clicks).
Anyway, I'd love some insight on this topic! Any thoughts at all would be hugely appreciated!
One method for building a recommendation system from implicit feedback is Bayesian Personalized Ranking (BPR). I have also written an article on how it can be implemented using TensorFlow.
There's no "right" answer to the question of how to turn implicit feedback into explicit ratings. The answer depends on business requirements: if the task is to increase the click rate, you should work with clicks; if the task is to increase conversions, you need to work with purchases.

Options on evaluating Recommender System

I've created a recommender system that works this way:
- each user selects some filters, and a score is generated based on those filters
- each user is clustered using k-means based on those scores
- whenever a user receives a recommendation, I use Pearson's correlation to find which user in the same cluster is most strongly correlated with them (sketched below)
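
A minimal sketch of the pipeline described above, assuming the filter-based scores are simply a numeric user-by-score matrix; the shapes, number of clusters, and helper name are illustrative assumptions:

```python
# Minimal sketch: filter scores -> k-means clusters -> Pearson correlation
# within the cluster. All data and parameters here are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.random((200, 8))               # 200 users x 8 filter-based scores

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
labels = kmeans.labels_

def most_correlated_neighbor(user_idx):
    """Index of the most Pearson-correlated user within the same cluster."""
    same_cluster = np.where(labels == labels[user_idx])[0]
    same_cluster = same_cluster[same_cluster != user_idx]
    corrs = [np.corrcoef(scores[user_idx], scores[j])[0, 1] for j in same_cluster]
    return same_cluster[int(np.argmax(corrs))]

print(most_correlated_neighbor(0))
```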
My problem is that I'm not really sure what the best way to evaluate this system would be. I've seen that one way to do it is by hiding some values of the dataset, but that doesn't apply in my case because I'm not predicting scores.
Are there any metrics or techniques I could use?

How to predict edges in the social network?

I need to build a recommendation system that would predict friends for users in a social graph. The number of users is around 1,500,000. I thought of creating all possible pairs of users and then calculating metrics such as the Jaccard distance for each pair, but doing that for a 1,500,000 × 1,500,000 matrix seems infeasible. What approaches exist to handle this many nodes?
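
One common way to keep this tractable (my suggestion, not something stated in the question) is to score only friend-of-friend candidate pairs rather than all pairs, so the full 1,500,000 × 1,500,000 matrix is never materialized. A toy sketch using Jaccard similarity over friend sets:

```python
# Toy sketch: score only friend-of-friend candidates with Jaccard similarity.
# The adjacency data below is illustrative.
friends = {
    1: {2, 3},
    2: {1, 3, 4},
    3: {1, 2},
    4: {2},
}

def jaccard(a, b):
    """Jaccard similarity between two friend sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def candidate_scores(user):
    """Score only users reachable in two hops (friends of friends)."""
    candidates = set().union(*(friends[f] for f in friends[user])) - friends[user] - {user}
    return sorted(((c, jaccard(friends[user], friends[c])) for c in candidates),
                  key=lambda x: -x[1])

print(candidate_scores(1))   # e.g. suggests user 4 via mutual friend 2
```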

Architecture & Essential Components of StumbleUpon's Recommendation Engine

I would like to know how StumbleUpon recommends articles to its users.
Is it using a neural network or some other machine-learning algorithm? Is it recommending articles based on what the user 'liked', or simply recommending articles based on the tags in the interests area? By tags I mean something like item-based collaborative filtering, etc.
First, I have no inside knowledge of S/U's Recommendation Engine. What I do know I've learned from following this topic for the last few years, from studying the publicly available sources (including StumbleUpon's own posts on their company site and on their blog), and of course from being a user of StumbleUpon.
I haven't found a single source, authoritative or otherwise, that comes anywhere close to saying "here's how the S/U Recommendation Engine works." Still, this is arguably the most successful Recommendation Engine ever: the statistics are striking, with S/U reported to account for over half of all referral traffic on the Internet, substantially more than Facebook, despite having only a fraction of Facebook's registered users (15 million versus 800 million). What's more, S/U is not really a site with a Recommendation Engine, like, say, Amazon.com; the site itself is a Recommendation Engine. There is also a substantial volume of discussion and gossip among the fairly small group of people who build Recommendation Engines, and if you sift through it, I think it's possible to reliably discern the types of algorithms used, the data sources supplied to them, and how these are connected in a working data flow.
The description below refers to my Diagram at bottom. Each step in the data flow is indicated by a roman numeral. My description proceeds backwards--beginning with the point at which the URL is delivered to the user, hence in actual use step I occurs last, and step V, first.
salmon-colored ovals => data sources
light blue rectangles => predictive algorithms
I. A Web Page recommended to an S/U user is the last step in a multi-step flow
II. The StumbleUpon Recommendation Engine is supplied with data (web pages) from three distinct sources:
- web pages tagged with topic tags matching your pre-determined Interests (topics a user has indicated as interests, which can be viewed/revised by clicking the "Settings" tab in the upper right-hand corner of the logged-in user page);
- Socially Endorsed Pages (*pages liked by this user's Friends*); and
- Peer-Endorsed Pages (*pages liked by similar users*).
III. Those sources in turn are results returned by StumbleUpon's predictive algorithms (Similar Users refers to users in the same cluster, as determined by a clustering algorithm, perhaps k-means; a rough sketch of steps III-V follows after step V).
IV. The data fed to the Clustering Engine to train it consists of web pages annotated with user ratings.
V. This data set (web pages rated by StumbleUpon users) is also used to train a supervised classifier (e.g., a multi-layer perceptron or a support-vector machine). The output of this supervised classifier is a class label applied to a web page not yet rated by any user.
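
To make steps III-V more concrete, here is a purely speculative sketch on synthetic data (my own guesses at the components, not StumbleUpon's actual code): k-means groups users by their page-rating vectors to yield the "Similar Users" of step III, and a supervised classifier trained on rated pages (step V) assigns labels to pages nobody has rated yet.

```python
# Speculative sketch of steps III-V, on synthetic data; not StumbleUpon's code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Steps III-IV: cluster users on their ratings of a shared pool of pages.
ratings = rng.integers(-1, 2, size=(1000, 300))       # 1000 users x 300 pages: -1/0/+1
user_clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(ratings)

def similar_users(user_idx):
    """Users in the same cluster; their liked pages become peer-endorsed candidates."""
    return np.where(user_clusters == user_clusters[user_idx])[0]

# Step V: classify not-yet-rated pages using features of already-rated ones.
page_features = rng.random((300, 50))                  # e.g. text/tag features per page
page_labels = (ratings.mean(axis=0) > 0).astype(int)   # crude "liked overall" label
clf = SVC().fit(page_features, page_labels)

new_pages = rng.random((10, 50))                       # pages nobody has rated yet
print(similar_users(0)[:5], clf.predict(new_pages))
```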
The single best source I have found that discusses S/U's Recommendation Engine in the context of other Recommender Systems is this BetaBeat post.
