I have a stream of user-item pairs. I hold a block based on the last 6M records and update it every minute. What I don't like is that between these rebuilds some important data may go unused. For example, a new user has joined the system, but the model doesn't know about them yet. I've found the class PlusAnonymousConcurrentUserDataModel, which allows adding a few extra entries to the model to get more accurate recommendations. The documentation, however, proposes a more constrained usage scenario, in which I have to:
allocate a temporary user
add the extra data
get the recommendation
and then release the user and the extra data
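As far as I understand it, the documented pattern looks roughly like this (my own sketch based on the Javadoc; the data file, similarity choice, and item IDs are just placeholders):

    import java.io.File;
    import java.util.List;

    import org.apache.mahout.cf.taste.impl.model.GenericUserPreferenceArray;
    import org.apache.mahout.cf.taste.impl.model.PlusAnonymousConcurrentUserDataModel;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.model.PreferenceArray;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;

    public class AnonymousUserExample {
      public static void main(String[] args) throws Exception {
        // Placeholder input file; in my case this would be the block of recent records.
        DataModel delegate = new FileDataModel(new File("ratings.csv"));
        PlusAnonymousConcurrentUserDataModel plusModel =
            new PlusAnonymousConcurrentUserDataModel(delegate, 100); // up to 100 concurrent anonymous users

        GenericItemBasedRecommender recommender =
            new GenericItemBasedRecommender(plusModel, new LogLikelihoodSimilarity(plusModel));

        Long anonymousUserId = plusModel.takeAvailableUser();   // 1. allocate a temporary user
        try {
          PreferenceArray tempPrefs = new GenericUserPreferenceArray(2);
          tempPrefs.setUserID(0, anonymousUserId);
          tempPrefs.setItemID(0, 123L);                          // placeholder item ids
          tempPrefs.setValue(0, 1.0f);
          tempPrefs.setUserID(1, anonymousUserId);
          tempPrefs.setItemID(1, 456L);
          tempPrefs.setValue(1, 1.0f);
          plusModel.setTempPrefs(tempPrefs, anonymousUserId);    // 2. add the extra data

          List<RecommendedItem> recs = recommender.recommend(anonymousUserId, 10); // 3. get recommendations
          System.out.println(recs);
        } finally {
          plusModel.releaseUser(anonymousUserId);                // 4. release the user and the extra data
        }
      }
    }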
Is it OK to use this class to collect data incrementally until the model is actually rebuilt by the timer? What is the right way to do this? It seems that PlusAnonymousConcurrentUserDataModel is intended for somewhat different purposes.
This part of Mahout is very old and is being deprecated. I think it is not even in the 0.14.0 build; you would have to build it from source.
Mahout now uses a whole new technology for recommending. The new algorithm is called Correlated Cross-Occurrence (CCO). The old method you are using does not make use of real-time input in the way you have outlined. CCO can recommend to anonymous users that have not been built into the model, as long as there is behavioral data for them in some form.
The architecture to implement CCO requires a datastore (a DB) and a KNN engine (a search engine) to serve model queries. These are all packaged together in Apache PredictionIO plus the Universal Recommender template.
Community support for the Universal Recommender itself can be found here: https://groups.google.com/forum/#!forum/actionml-user or on the mailing lists of the other projects.
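Once PredictionIO and the Universal Recommender are deployed, a model query is just an HTTP POST against the engine's query server. A rough sketch (the host, user id, and result count are placeholders, and 8000 is only the PredictionIO default query port):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class UrQuery {
      public static void main(String[] args) throws Exception {
        // Ask the deployed Universal Recommender for 10 items for user "u-123".
        String body = "{\"user\": \"u-123\", \"num\": 10}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8000/queries.json")) // default PredictionIO query port
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON list of scored item ids
      }
    }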
I want to develop an app/software which understands text from various inputs and makes decisions according to it. Furthermore, if at any point the system gets confused, a user can manually supply the correct output, and from then on the system must learn to give that output in such scenarios. Basically, the system must learn from its past experience.

The job I want to handle with this system is the mundane job of resolving customer technical problems (production L3 tickets). The input in this case would be the customer's problem with an order (e.g. the state in which the order is stuck and the state to which they want it pushed), and the second input would be the current state of the order (data retrieved for that order from multiple DB tables). For these two inputs, the output would be the desired action to take, such as updating certain columns and firing an XML message for that order. The tools I think would be required are a natural language processing (NLP) library for understanding the text, and machine learning so the system can learn from past confusing scenarios.
If you want to use Java libraries for your NLP pipeline, have a look at OpenNLP.
You get a lot of basic support there.
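For example, a minimal sketch of pulling tokens out of a ticket description with OpenNLP (the ticket text is made up; the trained components such as SentenceDetectorME, POSTaggerME, or NameFinderME follow the same pattern but load a pre-trained model file first):

    import opennlp.tools.tokenize.SimpleTokenizer;

    public class OpenNlpExample {
      public static void main(String[] args) {
        // Rule-based tokenization needs no model file, so it runs out of the box.
        String ticket = "Order 4711 is stuck in state SHIPPED and should be moved to DELIVERED.";
        String[] tokens = SimpleTokenizer.INSTANCE.tokenize(ticket);
        for (String token : tokens) {
          System.out.println(token);
        }
      }
    }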
Then there is Deeplearning4j, which offers a lot of neural network implementations in Java.
Since you want a dynamic model that can learn from past experience rather than a static one, there are a number of neural network implementations in Deeplearning4j that you can play with.
Hope this helps!
I'm looking for some advice / guidance --
I'm working on a recommendation engine / personnel assistance app, using Mahout as the framework -
What I want is for new users of the app to begin by answering 5 questions, and to use the answers from those questions to influence the recommendations -- essentially feeding the answers in as user preferences.
I'm just not sure how to incorporate this into my code; I'm not even sure where to begin looking. I've been Googling, but none of the search results really address this...
Any suggestions / advice / guidance will be greatly appreciated
Thanks
I did just that with the new spark-itemsimilarity implementation about a year ago. You'll need a search engine to serve the recommendation queries, because Mahout doesn't have a server. I'd suggest using the new "Universal Recommender" engine template with PredictionIO. It uses Mahout to calculate the model and Elasticsearch to serve it.
https://templates.prediction.io/PredictionIO/template-scala-parallel-universal-recommendation
PredictionIO is a framework of integrated components that provides an event server (for event storage), integration with Hadoop/HDFS, Spark, and HBase, and a REST or SDK API. All you do is install it and add the template as a plugin engine. This gives you fairly advanced recommendation queries with multiple-event ingestion, a hybrid content-based method to tune results, and several ways of using popular items as backfill when no other recommendations can be made. It also uses real-time user actions for recommendations.
This last bit is super important if you want to have your users go through some training, because they will see the benefit of the training in real time. Check this site, where I did exactly what you are talking about: https://guide.finderbots.com (notice the "Trainer"). It presents you with movies and asks for a thumbs up or down for as many as you care to rate; then, when you ask for recommendations, they will be based on the real-time preferences of the user. You need to create an account first so we have a user id.
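Under the hood, each thumbs up/down (or, in your case, each of the five onboarding answers) is just an event posted to PredictionIO's event server the moment it happens. A rough sketch against the REST API (the event name, ids, and access key are placeholders; 7070 is only the default event-server port, and the access key comes from `pio app new`):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SendPreferenceEvent {
      public static void main(String[] args) throws Exception {
        // One onboarding answer expressed as a "like" event for the item the user chose.
        String event = "{"
            + "\"event\": \"like\","
            + "\"entityType\": \"user\","
            + "\"entityId\": \"new-user-42\","
            + "\"targetEntityType\": \"item\","
            + "\"targetEntityId\": \"question-3-answer-b\""
            + "}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:7070/events.json?accessKey=YOUR_ACCESS_KEY"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(event))
            .build();
        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 201 when the event is accepted
      }
    }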
The way I created the list for the trainer was by clustering popular items. By clustering I mean clustering based on the users who preferred the items. Clustering produces items that are differentiated, because they belong to different clusters and therefore different sets of users tended to like them, and the popular ones are more likely to be known by users going through training. Both are good properties for a trainer list.
I'm using Apache Mahout as a recommendation engine. It's great, but I'm running into an issue that I'm not sure how to fix, and may or may not even be fixable...
All the recommendation data is computed and stored in memory. When I restart my machine I lose all that data and have to re-compute it. Is there a way to save what is in memory and then put it back into memory when the machine boots back up? I realize that I may not be asking this question using the correct terminology or even describing the mechanisms at work correctly, but in essence I just want to be able to restart my machine without losing all the data as the computations take a long time to complete.
Any help getting me pointed in the right direction on how to solve this would be appreciated. I'm not necessarily looking for a Mahout-specific solution, just some help understanding the general problem... I'm in new territory here.
Thanks,
Mark
For recommendations, you may store the item-item similarities and load them during init. I do not know of an existing implementation in the Mahout distribution. But to quote a discussion from December 2011 on the mailing list:
A model for item-based collaborative filtering simply consists of the precomputed item similarities. We currently support such a precomputation only as a Hadoop job, but it should be a matter of an hour to create a class that precalculates the item similarities sequentially using an ItemBasedRecommender. You can either store these similarities in the database and load them via MySQLJDBCInMemoryItemSimilarity/SQL92JDBCInMemoryItemSimilarity, or you can write them to a .csv file and load them via FileItemSimilarity.
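A rough sketch of that sequential precomputation plus reload, assuming a FileDataModel as input and log-likelihood as the similarity measure (both are my own choices here; file names and ids are placeholders):

    import java.io.File;
    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.mahout.cf.taste.impl.common.LongPrimitiveIterator;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
    import org.apache.mahout.cf.taste.impl.similarity.file.FileItemSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

    public class PrecomputeSimilarities {
      public static void main(String[] args) throws Exception {
        DataModel model = new FileDataModel(new File("ratings.csv"));
        ItemSimilarity similarity = new LogLikelihoodSimilarity(model);

        // 1. Precompute all pairwise item similarities and persist them as CSV lines of
        //    "itemID1,itemID2,similarity". This is O(n^2) in the number of items, which is
        //    fine for modest catalogues; for large ones use the Hadoop job mentioned above.
        List<Long> itemIds = new ArrayList<>();
        for (LongPrimitiveIterator it = model.getItemIDs(); it.hasNext();) {
          itemIds.add(it.nextLong());
        }
        try (PrintWriter out = new PrintWriter(new File("item-similarities.csv"))) {
          for (int i = 0; i < itemIds.size(); i++) {
            for (int j = i + 1; j < itemIds.size(); j++) {
              double sim = similarity.itemSimilarity(itemIds.get(i), itemIds.get(j));
              if (!Double.isNaN(sim)) {
                out.println(itemIds.get(i) + "," + itemIds.get(j) + "," + sim);
              }
            }
          }
        }

        // 2. After a restart, rebuild the recommender from the saved file instead of recomputing.
        ItemSimilarity precomputed = new FileItemSimilarity(new File("item-similarities.csv"));
        GenericItemBasedRecommender recommender = new GenericItemBasedRecommender(model, precomputed);
        System.out.println(recommender.recommend(1L, 10)); // placeholder user id
      }
    }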
I have the following problem and was thinking I could use machine learning but I'm not completely certain it will work for my use case.
I have a data set of around a hundred million records containing customer data including names, addresses, emails, phones, etc., and would like to find a way to clean this customer data and identify possible duplicates in the data set.
Most of the data has been manually entered using an external system with no validation so a lot of our customers have ended up with more than one profile in our DB, sometimes with different data in each record.
For instance, we might have 5 different entries for a customer John Doe, each with different contact details.
We also have the opposite case, where multiple records that represent different customers match on key fields like email. For instance, when a customer doesn't have an email address but the data-entry system requires one, our consultants will use a random email address, resulting in many different customer profiles sharing the same email address; the same applies to phones, addresses, etc.
All of our data is indexed in Elasticsearch and stored in a SQL Server database. My first thought was to use Mahout as the machine learning platform (since this is a Java shop), and maybe HBase to store the data (just because it fits the Hadoop ecosystem; I'm not sure whether it would be of any real value). But the more I read about it, the more confused I am about how it would work in my case. For starters, I'm not sure what kind of algorithm I could use, since I'm not sure what category this problem falls into: could I use a clustering algorithm or a classification algorithm? And of course certain rules will have to be defined about what constitutes a profile's uniqueness, i.e. which fields.
The idea is to deploy this initially as a customer-profile de-duplication service of sorts that our data-entry systems can use to validate and detect possible duplicates when entering a new customer profile, and in the future perhaps to develop it into an analytics platform for gathering insight about our customers.
Any feedback will be greatly appreciated :)
Thanks.
There has actually been a lot of research on this, and people have used many different kinds of machine learning algorithms for it. I've personally tried genetic programming, which worked reasonably well, but I still prefer to tune the matching manually.
I have a few references for research papers on this subject. Stack Overflow doesn't want too many links, but here is bibliographic info that should be sufficient to find them via Google:
Unsupervised Learning of Link Discovery Configuration, Andriy Nikolov, Mathieu d’Aquin, Enrico Motta
A Machine Learning Approach for Instance Matching Based on Similarity Metrics, Shu Rong, Xing Niu, Evan Wei Xiang, Haofen Wang, Qiang Yang, and Yong Yu
Learning Blocking Schemes for Record Linkage, Matthew Michelson and Craig A. Knoblock
Learning Linkage Rules using Genetic Programming, Robert Isele and Christian Bizer
That's all research, though. If you're looking for a practical solution to your problem, I've built an open-source engine for this type of deduplication, called Duke. It indexes the data with Lucene and then searches for matches before doing a more detailed comparison. It requires manual setup, although there is a script that can use genetic programming (see the reference above) to create a setup for you. There's also a guy who wants to make an Elasticsearch plugin for Duke (see thread), but nothing's done so far.
Anyway, that's the approach I'd take in your case.
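To illustrate the general shape of that approach (this is not Duke's API, just a toy sketch; the sample records, blocking key, and threshold are made up, and it uses Apache Commons Text for the fuzzy comparison):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    import org.apache.commons.text.similarity.JaroWinklerSimilarity;

    public class NaiveDedupSketch {

      record Customer(String id, String name, String email, String phone) {}

      public static void main(String[] args) {
        List<Customer> customers = List.of(
            new Customer("1", "John Doe", "john.doe@example.com", "555-0100"),
            new Customer("2", "Jon Doe",  "j.doe@example.com",    "555-0100"),
            new Customer("3", "Jane Roe", "jane@example.com",     "555-0199"));

        // Blocking: only compare records that share a cheap candidate key (here the phone),
        // so you never do a full n^2 comparison over 100M records. Known placeholder values
        // (like the "random" emails your consultants enter) should be excluded from blocking keys.
        Map<String, List<Customer>> blocks =
            customers.stream().collect(Collectors.groupingBy(Customer::phone));

        // Detailed comparison inside each block with a fuzzy string measure.
        JaroWinklerSimilarity nameSimilarity = new JaroWinklerSimilarity();
        for (List<Customer> block : blocks.values()) {
          for (int i = 0; i < block.size(); i++) {
            for (int j = i + 1; j < block.size(); j++) {
              Customer a = block.get(i);
              Customer b = block.get(j);
              double score = nameSimilarity.apply(a.name(), b.name());
              if (score > 0.9) { // threshold you would tune against manually labelled pairs
                System.out.printf("Possible duplicate: %s <-> %s (name score %.2f)%n",
                    a.id(), b.id(), score);
              }
            }
          }
        }
      }
    }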
I just came across a similar problem, so I did a bit of Googling and found a library called Dedupe (a Python library):
https://dedupe.io/developers/library/en/latest/
The documentation for this library details common problems and solutions when de-duplicating entries, as well as papers in the de-duplication field. So even if you are not using it, the documentation is still worth reading.
I have an application which will require a "dynamic business rules" engine. Some of the business rules change very frequently. Some of them apply only to a limited set of business accounts. For example: my customer has a process where they qualify stores based on their size, number of salespeople, number of products, location, etc. But they manage different accounts, and each account gives different "weights" to each attribute.
How do I implement this engine using Ruby? I know Java has Drools, but I find Drools annoying and complex, and I'd prefer not to have to use JRuby...
Regards,
Rubem
If you're sure a rule engine is what you need, you will need to find one you can use in Ruby. A quick Google search brought up Rools (http://rools.rubyforge.org/) and Ruby Rules (http://xircles.codehaus.org/projects/ruby-rules). I'm not sure of the status of either project, though. Using JRuby with Drools might be your best bet, but then again, I'm a Java developer and a big Drools advocate. :)
Without knowing all the details, it's a little hard to say how this should be implemented. It also depends on how you want the rules to be updated. One approach is to write a collection of rules similar to this: "if a store exists with more than 50 salespeople and the store hasn't had its weight updated to reflect that, then update the store's weight." However, in some ways that is comparable to hardcoding.
A better approach might be to create Weight objects with criteria that need to be met for the weight to apply. Then you could write one rule that matches on both Weights and Stores: "if a Store exists that matches a Weight's criteria and the Store doesn't already have that Weight assigned to it, then add the Weight to the Store." Then the business folks could just create and update Weights, possibly in a database with a web front end, instead of maintaining rules.
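A rough sketch of that data-driven shape, written in plain Java only because that's what I work in (the attribute names, criteria, and weight values are made up); the same structure translates directly to Ruby objects:

    import java.util.List;
    import java.util.function.Predicate;

    public class WeightSketch {

      // Each Weight carries the criteria that must be met for it to apply, so business
      // users maintain Weights as data instead of editing rules.
      record Weight(String name, double value, Predicate<Store> criteria) {}

      record Store(String name, int salesPeople, int products) {}

      public static void main(String[] args) {
        List<Weight> weights = List.of(
            new Weight("large-sales-team", 1.5, s -> s.salesPeople() > 50),
            new Weight("broad-catalogue",  1.2, s -> s.products() > 1000));

        Store store = new Store("Acme Downtown", 60, 400);

        // The single generic "rule": apply every Weight whose criteria the Store matches.
        double score = weights.stream()
            .filter(w -> w.criteria().test(store))
            .mapToDouble(Weight::value)
            .sum();

        System.out.println(store.name() + " scores " + score);
      }
    }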