This example shows how to use matrix factorization to build a recommendation system. It is particularly suitable for a dataset with only two related IDs, such as a user ID and the ID of a product the corresponding user has purchased.
Based on this example, I prepared input data like the following:
[UserId] [ProductId]
3 1
3 15
3 23
5 9
5 1
8 2
8 1
...
Then I set the column names and created the TextLoader:
var reader = ctx.Data.TextReader(new TextLoader.Arguments()
{
    Separator = "tab",
    HasHeader = true,
    Column = new[]
    {
        new TextLoader.Column("Label", DataKind.R4, 0),
        new TextLoader.Column("UserId", DataKind.U4, new [] { new TextLoader.Range(0) }, new KeyRange(0, 100000)),
        new TextLoader.Column("ProductId", DataKind.U4, new [] { new TextLoader.Range(1) }, new KeyRange(0, 300))
    }
});
It works great: it recommends a list of products that the target user may purchase, each with an individual score. However, it doesn't work for a new customer that wasn't in the initial input data, say UserId 1: the prediction gives a score of NaN.
Retraining the model would be the obvious answer, but it seems wasteful to retrain the model every time new data comes in. I think there must be a way to update the existing model, but I cannot find the relevant documentation, APIs, or a sample anywhere. I ended up leaving a question on the official ML.NET GitHub repository, but I've had no answers so far.
The question is very simple, in a nutshell: how can I update a trained model in ML.NET? A link to a relevant source of information would be greatly appreciated too.
In this particular example, because of the task being performed, you are limited to the set of observations the model was trained on and can only make predictions for that set. As you mentioned, a good way to go about it would be to retrain. I haven't tried this myself, but you might want to try one of the following:
Run the Fit function again, using the new data as input. The model should not only persist its previous training but also retrain using the additional data you provide.
Save the model to a file, load the persisted model, then run the Fit function as above.
As of 2021:
The retraining process is described in detail here: https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/retrain-model-ml-net
I am trying to update a pre-trained decision tree model with new data points in the following way, but I get a new model that seems to be built entirely on the new data points rather than a combination of the trained model and the new data points.
Is there anything I missed?
// setup trainer
DecisionTreeClassificationTrainer trainer =
    new DecisionTreeClassificationTrainer(maxDepth, minImpurity);

// dataset builder over the cache holding the new data points
DatasetBuilder<Integer, double[]> datasetBuilder =
    new CacheBasedDatasetBuilder<>(ignite, dataCache);

// update the previously trained model with the new data
Model mdl = trainer.updateModel(
    (DecisionTreeNode) prevMdl,
    datasetBuilder,
    featureExtractor,
    labelExtractor
);

return mdl;
}
For now, the ML module doesn't support updates for decision trees. The problem is the tree structure: we haven't come up with a good approach for deleting branches during a model update.
Model update works well for other, non-tree-based algorithms.
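For what it's worth, here is a minimal sketch of what an update could look like with a non-tree-based trainer such as k-NN, reusing the datasetBuilder, featureExtractor, and labelExtractor from the question. The trainer class, the newDataBuilder variable, and the exact update/updateModel signature are assumptions and vary between Ignite versions, so treat this as a sketch rather than the definitive API:

// Sketch only: assumes the featureExtractor/labelExtractor from the question and an
// Ignite ML version exposing the extractor-based update methods on DatasetTrainer.
KNNClassificationTrainer trainer = new KNNClassificationTrainer();

// Initial training on the original data.
KNNClassificationModel mdl = trainer.fit(datasetBuilder, featureExtractor, labelExtractor);

// Later: update the existing model with a dataset builder over the new data points only
// (newDataBuilder would be another CacheBasedDatasetBuilder over the new points).
KNNClassificationModel updatedMdl =
    trainer.update(mdl, newDataBuilder, featureExtractor, labelExtractor);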
Let's say I have a large dataset from an online gaming platform (like Steam) which has 'date, user_id, number_of_hours_played, no_of_games', and I have to write a model to predict how many hours a user will play in the future for a given date. Now, user_id has a large number of unique values (in the millions). I know that for categorical data we can use one-hot encoding, but I am not sure what to do when there are millions of unique classes. Also, please suggest whether any other method could be used to preprocess the data.
Using the user id directly in the model is not a good idea, since, as you said, it would result in a large number of features, and also in overfitting, since you would get one id per line (if I understood your data correctly). It would also make your model useless for a new user id, and you would have to retrain the model each time you get a new user.
What I would recommend in the first place is to drop this variable and try to build a model with only the other variables.
Another idea you could try is to perform a clustering of the users based on other features, and then pass the cluster as a feature instead of the user id, as in the sketch below; I don't know whether this is a good idea in your case, since I don't know what kind of data you have.
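As an illustration only, here is a rough sketch of that idea using k-means from Apache Commons Math 3. The library choice, the per-user aggregates (e.g. average hours played and number of games), and all names are assumptions, not something taken from your data:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.Clusterable;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

public class UserClustering {

    // Wraps a user id together with its aggregated features (e.g. {avgHours, noOfGames}).
    static class UserPoint implements Clusterable {
        final long userId;
        final double[] features;
        UserPoint(long userId, double[] features) { this.userId = userId; this.features = features; }
        @Override public double[] getPoint() { return features; }
    }

    // Returns a userId -> clusterIndex map; the cluster index replaces user_id as a feature.
    public static Map<Long, Integer> clusterUsers(Map<Long, double[]> userFeatures, int k) {
        List<UserPoint> points = new ArrayList<>();
        for (Map.Entry<Long, double[]> e : userFeatures.entrySet())
            points.add(new UserPoint(e.getKey(), e.getValue()));

        KMeansPlusPlusClusterer<UserPoint> clusterer = new KMeansPlusPlusClusterer<>(k);
        List<CentroidCluster<UserPoint>> clusters = clusterer.cluster(points);

        Map<Long, Integer> clusterOfUser = new HashMap<>();
        for (int c = 0; c < clusters.size(); c++)
            for (UserPoint p : clusters.get(c).getPoints())
                clusterOfUser.put(p.userId, c);
        return clusterOfUser;
    }
}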
Also, you mention making a prediction for a given date. The data you described doesn't quite suggest that, but if you have the number of hours played across multiple dates, this is closer to a time series prediction problem, which is different from a 'classic' regression problem.
I am trying out Mahout and wondering about the input data model for the non-distributed version.
The file data model has to follow the format: userid, itemid, userPreference.
The problem is that I don't have these user preference values and would have to precompute them.
Does Mahout have any method to do that?
I found an article, http://www.codeproject.com/Articles/620717/Building-A-Recommendation-Engine-Machine-Learning, whose author does not seem to have user preference values either, but he used org.apache.mahout.cf.taste.hadoop.item.RecommenderJob -s SIMILARITY_COOCCURRENCE to compute recommendations from {userid, questionid}.
From what I can tell, Mahout seems to compute preference values from the data and then compute the recommendations; am I correct in this case?
If you don't have user preference values, maybe you don't need them. Mahout offers an implementation for recommending items for users without having preference values. This is called Boolean preferences. Basically you just know that some user likes some item, but you don't know how much. Sometimes this is fine.
Below is sample code showing how this can be done. Basically only the first line differs, where you specify that your data model is of type GenericBooleanPrefDataModel. With boolean data you can then use two types of similarity measures: LogLikelihoodSimilarity and TanimotoCoefficientSimilarity. Both can be used to compute user-based and item-based recommendations.
// Boolean data model: we only know which items a user likes, not how much.
DataModel model = new GenericBooleanPrefDataModel(
    GenericBooleanPrefDataModel.toDataMap(new FileDataModel(new File("FILE_NAME"))));
UserSimilarity similarity = new LogLikelihoodSimilarity(model);
UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

List<RecommendedItem> recommendations = recommender.recommend(1, 10);
for (RecommendedItem recommendation : recommendations) {
    System.out.println(recommendation);
}
The other alternative is to compute the preference values outside Mahout and feed the data model to some other user-based or item-based algorithm. But as far as I know, Mahout does not offer an implementation for computing preference values.
You can define a preference value for your data model (it depends on your data). For example, if your items are tracks that users listen to, the preference value can be defined as the number of times user1 has listened to trackA. A preference value should then be defined for every unique userid-itemid pair; a small sketch of computing such counts follows the example below.
An example of the data model:
userid,itemid,preference
1,1,3
1,2,5
...
5,1,2
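As a small illustration, here is one way such counts could be computed outside Mahout. The file names and the raw input format (one "userid,itemid" line per play event) are assumptions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;

public class BuildPreferences {
    public static void main(String[] args) throws Exception {
        // Count how many times each "userid,itemid" pair occurs in the raw event log.
        Map<String, Integer> counts = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("listen_events.csv"))) {
            String line;
            while ((line = in.readLine()) != null)
                counts.merge(line.trim(), 1, Integer::sum);
        }
        // Write "userid,itemid,preference" lines that Mahout's FileDataModel can read.
        try (PrintWriter out = new PrintWriter("preferences.csv")) {
            for (Map.Entry<String, Integer> e : counts.entrySet())
                out.println(e.getKey() + "," + e.getValue());
        }
    }
}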
The scenario is like this:
I am trying to make a recommender using Apache Mahout, and I have some sample preference data (user, item, preference value) for generating the similarity matrix and determining item-item similarities. But the actual preference data is much larger than the sample preference data. The item IDs present in the actual preference data are all present in the sample preference data as well, but there are far fewer user IDs in the sample data than in the actual data.
Now, when I try to run the recommender on the actual data, it keeps giving me an error that a user ID does not exist, because it was not present in the sample data. How can I inject new user IDs and their preferences into the Mahout recommender so that it can generate recommendations for any user on the fly based on item-item similarity? Or, if there is any other way to generate recommendations for a new user, please suggest it.
Thanks.
If you think your sample data is complete enough for computing the item-item similarities, why not precompute them and store them in a Collection<GenericItemSimilarity.ItemItemSimilarity> corrMatrix = new ArrayList<GenericItemSimilarity.ItemItemSimilarity>();? From this you can then create your ItemSimilarity with ItemSimilarity similarity = new GenericItemSimilarity(corrMatrix);.
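For example, a short sketch (the item ids and similarity values below are just placeholders):

Collection<GenericItemSimilarity.ItemItemSimilarity> corrMatrix =
    new ArrayList<GenericItemSimilarity.ItemItemSimilarity>();
// One entry per precomputed item pair.
corrMatrix.add(new GenericItemSimilarity.ItemItemSimilarity(1L, 2L, 0.85));
corrMatrix.add(new GenericItemSimilarity.ItemItemSimilarity(1L, 3L, 0.10));
ItemSimilarity similarity = new GenericItemSimilarity(corrMatrix);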
I don't think it is a good idea to compute the item-item similarities from only a sample of your data based on the preference values, because you might be missing a lot of useful data. If computing them on the fly is too slow, you can always precompute them, store them in a database, and load them when needed.
If you are still getting this error, then you are probably using your sample data model in the recommender class, or using a UserSimilarity to compute the item similarities.
If you want to add new users, you can either use Mahout's FileDataModel and update the file periodically to include the new users (I think you can create a new file with some suffix, but I am not sure), or extend one of the in-memory DataModel implementations, which are otherwise immutable, by implementing the setPreference() and removePreference() methods. You can find more about this in the book Mahout in Action.
EDIT: I have an implementation of a MutableDataModel that extends AbstractDataModel. I can share it with you if you want.
I am trying to build a recommendation engine using Mahout that gives recommendations based solely on item-to-item similarity, without taking user preferences (i.e., ratings) into account. The item similarities are calculated by some other process external to Mahout and saved to a file. So far, I have determined that I can use the class:
GenericBooleanPrefItemBasedRecommender
...to pick items, which the documentation says is "appropriate for use when no notion of preference value exists in the data." However, the class still takes as input:
(DataModel dataModel, ItemSimilarity similarity)
I know I can use the ItemSimilarity class to supply the item-to-item similarity values, but what is my data model in this case? I have no preferences, which seems to be exactly what the data model represents. How do I work around this, or am I looking at the wrong thing here?
Here is simple code showing how you can create an instance of your DataModel using GenericBooleanPrefDataModel:
DataModel model = new GenericBooleanPrefDataModel(GenericBooleanPrefDataModel.toDataMap(new FileDataModel(new File("YOUR_FILE_NAME"))));
However, even if you have a data model with preference values, as long as your custom implementation of ItemSimilarity does not use those preference values, you will get the desired result.
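For example, a minimal sketch that wires a boolean data model together with externally computed similarities (precomputedSimilarities stands for your collection of GenericItemSimilarity.ItemItemSimilarity entries, as described above):

// Boolean (preference-less) data model built from a "userID,itemID" file.
DataModel model = new GenericBooleanPrefDataModel(
    GenericBooleanPrefDataModel.toDataMap(new FileDataModel(new File("YOUR_FILE_NAME"))));

// Item-item similarities computed outside Mahout, wrapped as an ItemSimilarity.
ItemSimilarity similarity = new GenericItemSimilarity(precomputedSimilarities);

// This recommender ignores preference values and ranks items by the supplied similarities.
GenericBooleanPrefItemBasedRecommender recommender =
    new GenericBooleanPrefItemBasedRecommender(model, similarity);
List<RecommendedItem> recommendations = recommender.recommend(1, 10);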
Best,
Dragan
Simply use a GenericBooleanPrefDataModel.