Improve Mahout suggestions

I'm looking for a way to improve Mahout suggestions (from an item-based recommender; the data sets are originally user/item/weight) using an 'external' set of data.
Assume we already have recommendations: a number of users have each been suggested a number of items.
It's also possible to receive feedback from these suggested users in binary form: 'no, not for me' or 'yes, I was suggested this because I know about these items'; that is, a 1/0 from each suggested user.
What's the right way to use this kind of data? Is there any approach built into Mahout? If not, what approach would be suitable for training on this data and using that information in the next rounds?

It's unfortunate that the explicit user feedback comes only as 0/1; if it were graded (strongly disagree to strongly agree), it could be treated like any other user rating in the input.
Anyway, you can introduce this user feedback into your initial training set, with the recommended score as the weight for '1' feedback, or 1 minus the recommended score for '0' feedback, and retrain your model.
It would be nice to add a third option, 'neutral', that does nothing, to avoid noise in the data (e.g., if the recommended score is 0.5 and the user disagrees, you would still add it as 0.5 regardless) and model overfitting.
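A minimal sketch of that retraining step (plain Java; the user,item,weight CSV layout and all names here are assumptions for illustration, not part of the original answer):

import java.io.FileWriter;
import java.io.PrintWriter;

public class FeedbackFolder {

  // Appends one feedback row to the training set as a weighted preference:
  // positive feedback keeps the recommended score as the weight,
  // negative feedback uses 1 - recommended score instead.
  static void appendFeedback(PrintWriter out, long userId, long itemId,
                             double recommendedScore, boolean positive) {
    double weight = positive ? recommendedScore : 1.0 - recommendedScore;
    out.printf("%d,%d,%.4f%n", userId, itemId, weight);
  }

  public static void main(String[] args) throws Exception {
    try (PrintWriter out = new PrintWriter(new FileWriter("training.csv", true))) {
      appendFeedback(out, 1L, 42L, 0.8, true);   // user 1 confirmed item 42
      appendFeedback(out, 2L, 42L, 0.8, false);  // user 2 rejected item 42
    }
  }
}

After appending, retrain the recommender on the updated file as usual.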

Boolean data IS ideal, but here you have two actions: "like" and "dislike".
The newest way to use this is with indicators and cross-indicators. You want to recommend things that are liked, so for that data you create an indicator. However, it is quite likely that a user's pattern of "dislikes" can also be used to recommend likes; for this you need to create a cross-indicator.
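In matrix terms (a sketch; the notation is mine, not the answer's): let $A$ be the user-by-item matrix of the primary action ("like") and $B$ the user-by-item matrix of the secondary action ("dislike"). The indicator is then $\mathrm{LLR}(A^{\top}A)$ and the cross-indicator is $\mathrm{LLR}(A^{\top}B)$, where $\mathrm{LLR}(\cdot)$ keeps only the entries that pass a log-likelihood ratio significance test, filtering out noise.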
The latest Mahout 1.0 snapshot has the tools you need in spark-itemsimilarity. It can take two actions, one primary and the other secondary, and will create an indicator matrix and a cross-indicator matrix. You index and query these using a search engine, where the query is a user's history of likes and dislikes. The search will return an ordered list of recommendations.
By using cross-indicators you can begin to use many different actions a user takes in your app. The process of creating cross-indicators will find important correlations between the two actions; in other words, it will find the "dislikes" that lead to specific "likes". You can do the same with page views, applying tags, viewing categories, almost any recorded user action.
The method requires Mahout, Spark, Hadoop, and a search engine like Solr. It is explained under "How to use Multiple User Actions" here: http://mahout.apache.org/users/recommender/intro-cooccurrence-spark.html

Related

Content-based vs. collaborative filtering?

Content-based filtering (CBF): works on the basis of product/item attributes. Say user_1 has ordered (or liked) some items in the past. We then need to identify the relevant features of those ordered items and compare them with other items in order to recommend a new one. One well-known model for finding similar items based on a feature set is a random forest or a decision tree.
Collaborative filtering (CLF): uses user behavior. Say user_1 has ordered (or liked) some items in the past. We then find similar users: users who ordered/liked the same items in the past can be considered similar. We can now recommend some of the items ordered by similar users, based on scores. One well-known model for finding similar users is KNN.
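A minimal sketch of the user-KNN idea (plain Java; the toy ratings are hypothetical):

import java.util.Map;

public class UserKnn {

  // Cosine similarity between two users' item->rating maps; the users with
  // the highest similarity become the neighbors whose items get recommended.
  static double cosine(Map<Integer, Double> u, Map<Integer, Double> v) {
    double dot = 0, nu = 0, nv = 0;
    for (Map.Entry<Integer, Double> e : u.entrySet()) {
      Double w = v.get(e.getKey());
      if (w != null) dot += e.getValue() * w;
      nu += e.getValue() * e.getValue();
    }
    for (double w : v.values()) nv += w * w;
    return (nu == 0 || nv == 0) ? 0 : dot / (Math.sqrt(nu) * Math.sqrt(nv));
  }

  public static void main(String[] args) {
    Map<Integer, Double> user1 = Map.of(1, 5.0, 2, 3.0);
    Map<Integer, Double> user2 = Map.of(1, 4.0, 3, 2.0);
    System.out.println(cosine(user1, user2)); // ~0.767
  }
}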
Question: Say I have to find similar users not based on their behavior (as described for CLF) but on user-profile features like nationality/height/weight/language/salary, etc. Would that be considered CBF or CLF?
A second, related doubt: neither CBF nor CLF will work for a new user in the system, since he has not done any activity yet. Is that correct? And is the same true when the system is newly launched, since we won't have much data?
You can think of the content-based approach as a regression problem in which you have your x_i's as data points and their corresponding y_i's as the ratings given by the user.
You have correctly described CLF: it uses a user-item matrix from which it creates item-item or user-user matrices and then recommends products/items based on those matrices.
But in content-based filtering you need to build a vector for each user. E.g., say we want to create a vector for a Netflix user. This vector can include features like how many movies the user has watched, which genres he/she likes, whether he is a critical rater, etc., plus some of the features you mentioned, like average salary. This vector will have a y_i, which will be the rating. These kinds of recommendation systems are known as content-based, and this answers your first question.
Coming to your second question, about how to recommend items when a new user/item enters the picture: this is known as the cold-start problem. In that case you can use the user's geographical location to pick the top items watched by people in his country and recommend based on those. Once he starts rating those top items, both CLF and content-based filtering can work as they normally do.
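A minimal sketch of that cold-start fallback (plain Java; the per-country lists are hypothetical):

import java.util.List;
import java.util.Map;

public class ColdStart {

  // Precomputed top items per country, e.g. from overall watch counts.
  static final Map<String, List<String>> TOP_BY_COUNTRY = Map.of(
      "IN", List.of("movie-12", "movie-7", "movie-31"),
      "US", List.of("movie-3", "movie-12", "movie-9"));

  // A new user has no activity yet, so fall back to the most popular
  // items in the user's country until real ratings arrive.
  static List<String> recommendForNewUser(String countryCode) {
    return TOP_BY_COUNTRY.getOrDefault(countryCode, List.of());
  }

  public static void main(String[] args) {
    System.out.println(recommendForNewUser("IN"));
  }
}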

Mahout recommendations on two events on similar items

I am trying to solve a problem in Mahout. The question is: we have users and courses, and a user can view a course or take a course. If a user is viewing a course frequently, then I have to recommend taking the course. I have data with userid and itemid, and there are no preferences associated with them.
EX:
1 2
1 7
2 4
2 8
3 5
4 6
In the first column, 1 is the userid, and in the second column, 2 is the course id. The twist is that the second column can hold a view or/and a completion of a particular course. Suppose courseA, when viewed, has id 2, and the same courseA, when taken, has id 7 for user 1. If a user other than user 1 comes and views courseA, then I have to predict that courseA should be taken. The problem here is that if all users are viewing a course but not taking it, then user-based recommendation in Mahout will fail, because from a business perspective we have to tell them that the course they are viewing should be taken. Do I need to factorize my dataset here, or which algorithm is best suited for this kind of problem?
One problem is that viewing may not predict (and certainly won't predict as well) that the user wants to take the course. You should look at the new cross-cooccurrence recommender stuff in Mahout v1. It's part of a complete revamp of Mahout on Spark using a new Scala DSL and a built-in optimizer for linear algebra. The command-line job you are looking for is spark-itemsimilarity, and it can ingest your user and item ids directly without translating them into cardinal non-negative numbers.
The algo takes the action you know you want to recommend (user takes a course); these are the strongest "indicators" that can be used in your recommender. Then it finds correlated views, i.e., views that led to the user taking that course. This is done with the spark-itemsimilarity job, which can take two actions at a time, finding correlations, filtering out noise, and producing two "indicators". From the job you get two sparse matrices: each row is an item from the "user takes a course" action dataset, and the values are an ordered list of item ids that are most similar. The first output will be items similar by other people taking the course; the second will be items similar by other people viewing and then taking the course.
Input uses application-specific IDs. You can leave your data mixed if you include a filter term that identifies the action. It looks something like this:
user-id-1,item-id1,user-took-class
user-id-1,item-id2,user-viewed-class-page
user-id-1,item-id5,user-viewed-class-page
...
The output is text-delimited (think CSV, but you can control the format) and consists entirely of item-id tokens; by default it looks like this:
item-id-1,item-id-100 item-id-200 item-id-250 ...
This is an item id, a comma, and an ordered list of similar items separated by spaces. Index this with a search engine and use the current user's history of action 1 to query against the primary indicator, and the user's history of action 2 against the secondary cross-cooccurrence indicator. These can be indexed together as two fields of the same doc, so there is only one query against two fields. This also gives you a server that is as scalable as Solr or Elasticsearch. You just create the data models with Mahout, then index and query them with a search engine.
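A minimal sketch of that one-query/two-fields lookup using SolrJ (the core name "indicators", the field names "indicator" and "cross_indicator", and the history strings are all assumptions for illustration):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RecQuery {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/indicators").build();
    // The current user's history: action 1 (took course), action 2 (viewed page).
    String took = "item-id-100 item-id-250";
    String viewed = "item-id-7 item-id-42";
    SolrQuery q = new SolrQuery();
    // One query ORs the user's history against both indicator fields.
    q.setQuery("indicator:(" + took + ") OR cross_indicator:(" + viewed + ")");
    q.setRows(10);
    QueryResponse rsp = solr.query(q);
    // The ranked docs are the recommendations.
    rsp.getResults().forEach(doc -> System.out.println(doc.getFieldValue("id")));
    solr.close();
  }
}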
Mahout docs: http://mahout.apache.org/users/recommender/intro-cooccurrence-spark.html
Presentation on the theory and other things you can do with these techniques: http://www.slideshare.net/pferrel/unified-recommender-39986309
Using this technique you can take virtually the entire user clickstream, recorded as separate actions, and use it to make better recommendations. The actions don't even have to be on the same items. You can use the user's search-term history, for instance, and get a cross-cooccurrence indicator. In this case the output would contain search terms that lead users to take the course, and so your query would be the current user's search-term history.

Apache Mahout: modified AbstractSimilarity to incorporate a trust network. Need suggestions

I have modified the AbstractSimilarity class / UserSimilarity method with the following:
// Boost the similarity if user2 is in user1's trust network.
Collection<Long> c = multiMap.get(user1);
if (c != null && c.contains(user2)) {
    result = result + 0.50;
}
I use the Epinions dataset, which has two files: one with userid, itemid, rating, and one with a user-user trust network, which is stored in the multimap above. The rating set is in the DataModel.
Finally: I would like to add a value (e.g. +0.50) to a user's similarity if he is in the trust network of the user who asks for the recommendations.
Would it be better to use two datamodels?
Thanks
You've hit upon a very interesting topic in recommenders: multi-modal or multi-action recommenders. They solve the problem of having several actions by the same users, and of how to use all available data to recommend the primary action. For instance, how to recommend purchases with purchase AND page-view data.
Using Epinions is good intuition on your part. The problem is that there may be no correlation between trust and rating for an individual user. The general technique to use here is to correlate the two bits of data by using a multi-action indicator. Just adding a weight may have little or no effect and can, on your own real-world data, even produce a negative effect.
The Mahout 1.0 snapshot has a new spark-itemsimilarity CLI job (you can use it like a library too) that takes two actions and correlates the second with the first, producing two "indicator" outputs. The primary action is the one you want to recommend; in this case, recommending people that an individual might like. The secondary action may be anything but must have user IDs in common; in Epinions it's the trust action. The Epinions data is actually what was used to test this technique.
Running both inputs through spark-itemsimilarity will produce an "indicator-matrix" and a "cross-indicator-matrix". These are the core of any "cooccurrence" recommender. If you want to learn more about this technique, I'd suggest bringing it up on the Mahout mailing list: user@mahout.apache.org

How to build an efficient ItemBasedRecommender in Mahout?

I am building an item-based recommender system for 10 million users who rate categories over 20 possible categories (news categories like politics, sport, etc.).
I would like each one of them to be recommended at least one other category which they don't know (no rating).
I ran a GenericUserBasedRecommender and asked for recommendations for each user, but it looks extremely slow: maybe 1000 users processed per minute.
My questions are:
1- Can I run this same GenericUserBasedRecommender on Hadoop, and would it really be faster? I have seen and run an ItemBasedRecommender from the command line on a cluster, but I would rather run a user-based one.
1.5- I saw many users not getting a single recommendation. What is the algorithm's criterion for determining whether a user gets a recommendation? I thought it could be that the users who don't get recommendations are the ones who only gave a single rating, but I don't understand why.
2- Is there another, smarter way to deal with my problem? Maybe some clustering solution instead of recommendation? I don't exactly see how.
3- Finally, am I right when I say that the algorithms that have no command line are not to be used with Hadoop?
Thank you for your answers.
Sometimes you won't get recommendations for certain items or users because there are few items over which they overlap. It could also be a case where the user's data is 'enough', but his behaviour/usage patterns are very unique and/or disagree with popular trends in the data.
You could perhaps try a LogLikelihood- or Tanimoto-based ItemSimilarity.
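A minimal sketch of that suggestion with the Taste API (the ratings file name is an assumption):

import java.io.File;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

public class LlrItemBased {
  public static void main(String[] args) throws Exception {
    DataModel model = new FileDataModel(new File("ratings.csv"));
    // LLR ignores rating values and works on cooccurrence counts, which often
    // behaves better on sparse data; TanimotoCoefficientSimilarity is a
    // drop-in alternative.
    ItemSimilarity similarity = new LogLikelihoodSimilarity(model);
    GenericItemBasedRecommender rec = new GenericItemBasedRecommender(model, similarity);
    rec.recommend(1L, 3).forEach(item ->
        System.out.println(item.getItemID() + " " + item.getValue()));
  }
}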
Another thing you could look into is a matrix-factorization-based model. You could use the ALSWR factorizer to generate recommendations. This method decomposes the original user-item matrix into a user-feature matrix and an item-feature matrix, reducing the dimensionality, and then reconstructs the matrix of the same rank that is closest to the original. You might lose some data with this method, but the missing values in the user-item matrix are imputed, and you get estimated preference/recommendation values.
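A minimal sketch of that factorization approach in the Taste API (the file name, feature count, lambda, and iteration count are assumptions to tune):

import java.io.File;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.svd.ALSWRFactorizer;
import org.apache.mahout.cf.taste.impl.recommender.svd.SVDRecommender;
import org.apache.mahout.cf.taste.model.DataModel;

public class AlsWrExample {
  public static void main(String[] args) throws Exception {
    DataModel model = new FileDataModel(new File("ratings.csv"));
    // 10 latent features, lambda = 0.05 regularization, 20 iterations.
    ALSWRFactorizer factorizer = new ALSWRFactorizer(model, 10, 0.05, 20);
    SVDRecommender recommender = new SVDRecommender(model, factorizer);
    // Estimated preferences are imputed from the reconstructed low-rank matrix.
    recommender.recommend(1L, 5).forEach(item ->
        System.out.println(item.getItemID() + " " + item.getValue()));
  }
}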
If you have the features and not just implicit ratings, you could probably experiment with clustering techniques; perhaps start with hierarchical clustering.
I did not quite get your last question.

Mahout: recommend for a type of people

I am a newbie learning Mahout.
I learned that there are five recommenders in Mahout: user-based, item-based, ...
The dataset I used is MovieLens 100K.
I am thinking of implementing a movie recommender a little different from the user-based one, i.e., instead of taking a user id as input to recommend movies to only one user, I want to take user demographic information, e.g., age range, gender, occupation, and zip code.
But the problem is: how do I create my own user-similarity method (the original one takes two long user ids as parameters), and how do I combine the u.user file and the u.data file together?
I understand your question now. I think the simplest thing is to temporarily create a dummy user with the demographic properties you are querying for, and then recommend for that dummy user.
Yes, you would have to write a UserSimilarity that implements whatever similarity rule you want on top of the demographic data.
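A minimal sketch of such a UserSimilarity (the demographic profile encoding and field order are assumptions; in practice you'd load the profiles from u.user):

import java.util.Collection;
import java.util.Map;
import org.apache.mahout.cf.taste.common.Refreshable;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.similarity.PreferenceInferrer;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class DemographicSimilarity implements UserSimilarity {

  // Hypothetical lookup: user id -> encoded profile,
  // e.g. [ageBucket, gender, occupation, zipPrefix].
  private final Map<Long, int[]> profiles;

  public DemographicSimilarity(Map<Long, int[]> profiles) {
    this.profiles = profiles;
  }

  @Override
  public double userSimilarity(long userID1, long userID2) throws TasteException {
    int[] a = profiles.get(userID1);
    int[] b = profiles.get(userID2);
    if (a == null || b == null) {
      return Double.NaN; // unknown user: no opinion
    }
    // Fraction of matching demographic attributes, mapped onto [-1, 1].
    int matches = 0;
    for (int i = 0; i < a.length; i++) {
      if (a[i] == b[i]) {
        matches++;
      }
    }
    return 2.0 * matches / a.length - 1.0;
  }

  @Override
  public void setPreferenceInferrer(PreferenceInferrer inferrer) {
    // Not applicable: similarity is based on demographics, not preferences.
  }

  @Override
  public void refresh(Collection<Refreshable> alreadyRefreshed) {
    // Profiles are static in this sketch; nothing to refresh.
  }
}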
Maybe there is another solution.
I implemented my own Rescorer to deal with the u.user file and the input (gender, age range, ...). If each piece of information matches, then I put the corresponding user id into a FastIDSet.
Then, in the rescore method, I check whether the current user id is in the FastIDSet; if yes, I augment the score.
In my own Recommender, I use PlusAnonymousUserDataModel to get a temp id and call the method recommend(id, howMany, rescorer).
However, after I tried a different dataset file, I get 0 recommended items.
I am wondering whether this is the right way to use PlusAnonymousUserDataModel.
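For reference, a minimal sketch of PlusAnonymousUserDataModel usage (file name, item ids, and neighborhood size are assumptions; one common cause of 0 results is building the recommender on the base model instead of the wrapping model):

import java.io.File;
import org.apache.mahout.cf.taste.impl.model.GenericUserPreferenceArray;
import org.apache.mahout.cf.taste.impl.model.PlusAnonymousUserDataModel;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.model.PreferenceArray;

public class AnonymousUserExample {
  public static void main(String[] args) throws Exception {
    DataModel base = new FileDataModel(new File("u.data"));
    PlusAnonymousUserDataModel plus = new PlusAnonymousUserDataModel(base);

    // Give the anonymous dummy user some preferences before recommending.
    PreferenceArray prefs = new GenericUserPreferenceArray(2);
    prefs.setUserID(0, PlusAnonymousUserDataModel.TEMP_USER_ID);
    prefs.setItemID(0, 50L);
    prefs.setValue(0, 4.0f);
    prefs.setUserID(1, PlusAnonymousUserDataModel.TEMP_USER_ID);
    prefs.setItemID(1, 181L);
    prefs.setValue(1, 5.0f);
    plus.setTempPrefs(prefs);

    // Build everything on the wrapping model so the temp user is visible.
    PearsonCorrelationSimilarity sim = new PearsonCorrelationSimilarity(plus);
    GenericUserBasedRecommender rec = new GenericUserBasedRecommender(
        plus, new NearestNUserNeighborhood(10, sim, plus), sim);
    rec.recommend(PlusAnonymousUserDataModel.TEMP_USER_ID, 5).forEach(item ->
        System.out.println(item.getItemID() + " " + item.getValue()));

    plus.clearTempPrefs();
  }
}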
