Time-Sensitive Collaborative Filtering - machine-learning

I am trying to use collaborative filtering to recommend items to users based on their past purchases. I have created a user vector representing each user's usage, and for each item A an item vector whose entries are the probability of item B given A. The objective is to capture, in the item vectors, which items are sold together. Now I need to find the time at which these recommendations should be presented. As the items I am recommending are of periodic use, timing is very important.
So I am trying to explore constraint-based recommendations to make my recommendations time-sensitive. The approach I am considering is to create a time-sensitive constraint based on the last date of purchase and the average consumption rate. The problem is that creating constraints at the user level will become computationally expensive.
I would appreciate suggestions on this approach, or on any better way to implement it. In short, I want to develop a recommendation engine using customers' usage data for items that are consumed and need to be purchased again. The output should be a list of recommendations as well as the timing for presenting each recommendation to the user.
Thanks

The way I see it, there are two basic options you can pursue. On the one hand, the temporal features can be incorporated as additional side information, turning the system into a kind of hybrid recommender; the Python package "lightfm" is a good example of this.
On the other hand, the problem can be modeled as a time-series problem. A well-known paper dealing with next-basket recommendation is "A Dynamic Recurrent Model for Next Basket Recommendation". Here too, there are already implementations on GitHub.
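If you stick with the simple constraint-based idea from the question, the per-user timing computation can also stay cheap. Below is a minimal sketch, assuming purchase dates are available per user and item and sorted in ascending order (the class and method names are hypothetical), that derives the next recommendation date from the last purchase date and the average inter-purchase interval:

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import java.util.List;

    public class RepurchaseTimer {

        // Estimate when the user will need the item again, based on the
        // average gap between their past purchases of that item.
        // Assumes purchaseDates is sorted in ascending order.
        public static LocalDate nextRecommendationDate(List<LocalDate> purchaseDates) {
            if (purchaseDates.size() < 2) {
                return null; // not enough history to estimate a consumption rate
            }
            LocalDate first = purchaseDates.get(0);
            LocalDate last = purchaseDates.get(purchaseDates.size() - 1);
            long avgGapDays = ChronoUnit.DAYS.between(first, last) / (purchaseDates.size() - 1);
            // Expected next purchase = last purchase + average inter-purchase interval.
            return last.plusDays(avgGapDays);
        }
    }

Since this is a constant amount of work per (user, item) pair, it can run in a nightly batch job rather than per request, which should keep the user-level constraints tractable.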

Related

Automate solving of customer technical issues (Production L3 tickets)

I want to develop an app/software which understands text from various inputs and makes decisions according to it. Furthermore, if at any point the system gets confused, the user can manually supply the output, and from then on the system must learn to give that output in such scenarios. Basically, the system must learn from its past experience. The job I want to handle with this system is the mundane job of resolving customer technical problems (Production L3 tickets).
The first input would be the customer's problem with the order (e.g., the state in which the order is stuck and the state into which they want it pushed), and the second input would be the current state of the order (data retrieved for that order from multiple database tables). For these two inputs, the output would be the desired action to take, such as updating certain columns and firing an XML message for that order.
The tools I think would be required are a natural language processing (NLP) library for understanding the text, and machine learning to learn from past confusing scenarios.
If you want to use Java libraries for your NLP pipeline, have a look at OpenNLP;
you will find a lot of basic support there.
Then there is deeplearning4j, which offers a lot of neural network implementations in Java.
As you want a dynamic model which can learn from past experiences rather than a static one, deeplearning4j gives you a number of neural network implementations to play with.
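For instance, here is a minimal OpenNLP sketch for classifying a ticket's text into an action. The model file ticket-actions.bin and the example ticket text are hypothetical; you would first have to train a document categorizer on past tickets labelled with the action that resolved them:

    import java.io.FileInputStream;
    import opennlp.tools.doccat.DoccatModel;
    import opennlp.tools.doccat.DocumentCategorizerME;
    import opennlp.tools.tokenize.SimpleTokenizer;

    public class TicketClassifier {
        public static void main(String[] args) throws Exception {
            // Hypothetical model trained on past tickets labelled with the action taken.
            DoccatModel model = new DoccatModel(new FileInputStream("ticket-actions.bin"));
            DocumentCategorizerME categorizer = new DocumentCategorizerME(model);

            String ticket = "Order stuck in PICKING, customer wants it moved to SHIPPED";
            String[] tokens = SimpleTokenizer.INSTANCE.tokenize(ticket);

            // categorize(String[]) is the OpenNLP 1.8+ signature.
            double[] outcomes = categorizer.categorize(tokens);
            System.out.println("Suggested action: " + categorizer.getBestCategory(outcomes));
        }
    }

Retraining the model with the manually corrected cases would give you the learn-from-experience loop you describe.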
Hope this helps!

Recommender System: Is it content-based filtering?

Can someone please help me clarify this?
I am currently using collaborative filtering (ALS), which returns a recommendation list with scores corresponding to the recommended items. In addition, I am boosting the scores (+0.1) if an item contains a tag that matches what the user has specified they prefer, such as "romantic movies". To me, this is a hybrid collaborative approach, since it boosts the collaborative filtering results with content-based filtering (please correct me if I am wrong).
Now, what if I took the same approach without doing collaborative filtering? Would it be considered content-based filtering, since I would still be recommending dishes based on the content and attributes of each dish, matching what the user has specified they like (such as "romantic movies")?
The reason I'm confused is that the content-based filtering I have seen applies an algorithm such as Naive Bayes, whereas this approach would be closer to a simple search over the items' content.
I'm not sure you can do what you suggest, because without CF you have no score to boost.
You are indeed using a hybrid, much the same as the Universal Recommender. To do purely content-based recommendations you have to implement two methods:
Personalized recommendations: here you look at the content of the items the user preferred and find items that have similar content. This can be done by using something like the Mahout spark-rowsimilarity job to create a model of item: list-of-similar-items, then indexing the results with a search engine and using the user's preferred item IDs as the query. This is being added to the Universal Recommender.
"People who liked this also liked these": these are items similar to one being viewed, for example, and are the same for all users. They are not personalized and so are useful even for anonymous users with no history. This can be done with the same indexed ids as above but using the items similar to the one being viewed as the query. One might think to use only the similar items themselves but by using them as a query you can put the categorical boost in the search engine query and have boosted items returned. This already works in the Universal Recommender but the similar items are not in the model yet.
That said, mixing content with collaborative filtering will almost surely give better results, since CF works better when the data is available. The only time to rely on content-based recommendations alone is when your catalog consists of one-off items that never get enough CF interactions, or when you have rich content with a short lifetime, like breaking news.
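For reference, the boosting step described in the question is just a post-processing pass over the CF output, which is why it needs a score to start from. A minimal sketch, with the +0.1 boost and the preferred tag taken from the question and everything else hypothetical:

    import java.util.Map;
    import java.util.Set;

    public class TagBooster {
        // Add a fixed boost to CF scores for items carrying a tag the user prefers.
        public static void boost(Map<Long, Double> cfScores,
                                 Map<Long, Set<String>> itemTags,
                                 Set<String> preferredTags,
                                 double boost) {
            for (Map.Entry<Long, Double> entry : cfScores.entrySet()) {
                Set<String> tags = itemTags.getOrDefault(entry.getKey(), Set.of());
                // e.g. preferredTags = {"romantic movies"}, boost = 0.1
                if (tags.stream().anyMatch(preferredTags::contains)) {
                    entry.setValue(entry.getValue() + boost);
                }
            }
        }
    }

Without the CF scores, the same pass degenerates into filtering or ranking items purely by tag match, which is the simple content search the question alludes to.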
BTW, anyone who wants to help add the pure content-based part to the Universal Recommender can contact its new maintainers at ActionML.com.

Transaction Fact Table approach

I'm working on a financial data mart structure,
and I have some doubts about the better approach to take.
The source system database, Dynamics AX 2009, has three tables for customer transactions:
One table for open transactions, where the customer still needs to pay for the service/product;
One table for settled transactions, which holds what the customer has already paid;
Finally, a table that has all customer transactions, holding transactions from open to settled as well as other transactions, such as customer-to-bank or ledger-account transactions.
I have thought of two options. The first is to maintain a fact table representing each of the three tables: a fact for open transactions, a fact for settled transactions, and a fact for any customer transaction.
The second is to create a single fact table to hold all transactions; to do so I would have to do a full join on the three tables.
I'm not sure about either approach. The first seems to just copy the tables from production and create the proper dimensions.
With the second I would create a massive fact table in which data would constantly change, as open transactions are deleted from the source system when they are settled.
Another doubt: should I create the fact table with an SCD (slowly changing dimension) structure to maintain historical data (start date, end date, flag)?
It's hard to say from the information given whether this needs to be one or more Fact tables. However, the key point which you should use to decide is whether all of the information is at the same granularity. Consider the grain of your intended Fact table(s) and you should find an answer for whether you need one table or multiple tables.
If all of the information sits at the same grain - i.e. all of the same dimensions apply to all of the measures you are considering putting into the same Fact table - then they can probably all live in the same Fact table. If you're finding that some of the Dimensions wouldn't apply to some of the measures then you probably need to re-think your design. Either you might need multiple Fact tables, or you might need to take all of your measures down to the lowest grain and combine hierarchies into single Dimensions if you currently have them split across multiple Dimensions.
While it's been mentioned that having measures in separate cubes could make it difficult to compare things, keep in mind that you don't need one cube per Fact table. You can have multiple Fact tables in a single cube, and sometimes this is very helpful when you need to be able to compare measures which share some Dimensions but not others. This is far, far better than forcing data which does not have the same grain into one Fact table.
Also, it sounds like what you're trying to model is the sales ledger of an organisation. I'd suggest having a dig around via Google as you may well be able to find materials discussing dimensional data warehouse design for sales ledger structures, rather than reinventing the wheel. If you don't have a decent understanding of the accounting concepts you're trying to model I would especially recommend looking for a reference schema to work from, or failing that doing some reading up on accountancy concepts (and sales ledgers specifically). Understanding the account structure should help you understand what the grain of your Fact table(s) needs to be, how to model the Dimensions, and so on.
This is a really helpful abridged version of Kimball's modelling techniques, which discusses grain and the different types of Fact table, amongst many other topics:
http://www.kimballgroup.com/wp-content/uploads/2013/08/2013.09-Kimball-Dimensional-Modeling-Techniques11.pdf
I think you should just use one fact table (one cube) and use a dimension to differentiate between open/settled/etc. transactions. That's what dimensions are for: they help you categorize your measures and get a specific view of them. This approach will also open up many more possibilities for creating knowledge with your cube. With separate cubes for open/settled/etc. transactions, it would be harder or impossible to set this data in contrast.
Since the data is changing constantly, you should consider updating your fact table at set intervals and rebuilding your cube when needed.
Whether you use SCD really depends on the data you process and what it is used for. Is there a business case calling for it? Is there a technical use?
I think this is something you have to decide on your own.

Mahout trust-aware collaborative filtering

I'm trying to develop a trust-aware collaborative filtering approach. I have two Epinions datasets: one with who trusts whom, <ID_truster, ID_trusted>, and one with ratings, <ID_truster, ITEM, RATING>.
How can I make (user-user based) recommendations using only the ratings from the people a user trusts?
At the moment I only make recommendations using the second dataset, taking every user into consideration.
Thank you
The closest thing I can think of is to use a user-neighborhood-based approach and only include trusted users in the neighborhood. You would need to write some extra code for that, to disqualify untrusted users by returning a very negative similarity value for them. Look at the UserSimilarity interface.
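A minimal sketch of that idea, wrapping an existing similarity in a decorator; the trust map built from the first dataset is hypothetical, while UserSimilarity, PreferenceInferrer, and Refreshable are Mahout's (Taste) interfaces:

    import java.util.Collection;
    import java.util.Map;
    import java.util.Set;
    import org.apache.mahout.cf.taste.common.Refreshable;
    import org.apache.mahout.cf.taste.common.TasteException;
    import org.apache.mahout.cf.taste.similarity.PreferenceInferrer;
    import org.apache.mahout.cf.taste.similarity.UserSimilarity;

    public class TrustAwareSimilarity implements UserSimilarity {

        private final UserSimilarity delegate;     // e.g. a Pearson correlation similarity
        private final Map<Long, Set<Long>> trusts; // ID_truster -> set of ID_trusted

        public TrustAwareSimilarity(UserSimilarity delegate, Map<Long, Set<Long>> trusts) {
            this.delegate = delegate;
            this.trusts = trusts;
        }

        @Override
        public double userSimilarity(long userID1, long userID2) throws TasteException {
            Set<Long> trusted = trusts.get(userID1);
            if (trusted == null || !trusted.contains(userID2)) {
                return -1.0; // untrusted users get the worst possible similarity
            }
            return delegate.userSimilarity(userID1, userID2);
        }

        @Override
        public void setPreferenceInferrer(PreferenceInferrer inferrer) {
            delegate.setPreferenceInferrer(inferrer);
        }

        @Override
        public void refresh(Collection<Refreshable> alreadyRefreshed) {
            delegate.refresh(alreadyRefreshed);
        }
    }

Passing this wrapper to the neighborhood (e.g. NearestNUserNeighborhood) keeps untrusted users out of the neighborhood without touching the rest of the pipeline.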

Apache Mahout: can we combine User-Item and Item-Item?

I am new to Mahout, and am still playing around with it.
My question is: is it appropriate to combine Item-Item and User-Item?
My use case is a social networking application that tries to recommend something to the current user based on the user's own historical data (with higher priority), combines that with the recommendation results from the current user's friends' historical data (with lower priority), and displays the result as an ordered rating list.
The reason is that, for example, a new user might not have much historical data in the system, so we can recommend something based on his friends' historical data. Once the user accumulates enough historical data, the recommendations should be based more on that.
Is it appropriate to design system in this way?
Thank you for your time,
George
This is fairly simple to write. You can create recommendations for the user and then combine them with recommendations for the other users. A simple version of this logic would be additive: merge the lists of recommendations by adding the scores for items that appear in both lists. For example, you could add the N friends' recs together, then add N times the user's own recs, and take the recommendations from this combined list.
This doesn't exist in the project per se, but it's quite easy to write a method that does this on the List<RecommendedItem> that comes back from recommend(), as in the sketch below.
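A minimal sketch of that merge, assuming Mahout's RecommendedItem/GenericRecommendedItem and the simple additive weighting described above (each friend's recs weighted 1, the user's own recs weighted N):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.mahout.cf.taste.impl.recommender.GenericRecommendedItem;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;

    public class RecMerger {

        // Sum weighted scores per item: each friend's recs count once,
        // the user's own recs count N times (N = number of friends).
        public static List<RecommendedItem> merge(List<RecommendedItem> own,
                                                  List<List<RecommendedItem>> friendsRecs) {
            Map<Long, Float> scores = new HashMap<>();
            float ownWeight = Math.max(1, friendsRecs.size()); // don't zero out a friendless user
            for (RecommendedItem item : own) {
                scores.merge(item.getItemID(), ownWeight * item.getValue(), Float::sum);
            }
            for (List<RecommendedItem> recs : friendsRecs) {
                for (RecommendedItem item : recs) {
                    scores.merge(item.getItemID(), item.getValue(), Float::sum);
                }
            }
            List<RecommendedItem> merged = new ArrayList<>();
            scores.forEach((id, score) -> merged.add(new GenericRecommendedItem(id, score)));
            merged.sort(Comparator.comparing(RecommendedItem::getValue).reversed());
            return merged;
        }
    }

As the user accumulates history, their own list grows and increasingly dominates the merged scores, which matches the priority shift the question describes.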
