Should I use Mahout for this? - mahout

I want to recommend items that are tagged and categorized into three price categories (cheap, regular, and expensive). I know that recommendation can be achieved with Mahout, but here is why I am not sure how to use it:
Mahout's recommenders are based on other users' opinions, but all of the items I want to recommend are new ones that don't have any preferences set yet.
Is Mahout the right tool for this? Is this content-based recommendation (which Mahout doesn't support yet?), or should I use classification?
Thanks!

Since I've never built a recommender system, don't take this answer too seriously (no one else has answered, so I'll try).
A recommendation system has to be built on some already known (or at least partially known) data. If you have only new (unseen) items, one possibility is to use a clustering algorithm to group them into clusters.
If those clusters turn out to be reasonable, they can then be used to train a recommendation system.
Mahout is just a tool that implements various ML methods; you can use other tools like Weka, R, etc.
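A minimal sketch of that clustering idea in Python, assuming each item is described by its tags; the item names, tags, and choice of k here are invented for illustration:

```python
# Cluster new items by their tag vectors, then suggest items from the
# same cluster as an item the user has shown interest in.
# All item names and tags are hypothetical examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

items = {
    "item_a": "outdoor camping cheap",
    "item_b": "outdoor hiking regular",
    "item_c": "kitchen cooking expensive",
    "item_d": "kitchen baking cheap",
}

names = list(items)
X = CountVectorizer().fit_transform(items[n] for n in names)

# Two clusters purely for illustration; pick k by inspection or a metric.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def same_cluster(item):
    """Items sharing a cluster with `item` are candidate recommendations."""
    target = labels[names.index(item)]
    return [n for n, l in zip(names, labels) if l == target and n != item]

print(same_cluster("item_a"))  # e.g. ['item_b']
```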

If you have no data at all about a new user, there is really nothing any system can do to make recommendations; there is zero input that would differentiate the person from anyone else.
Good systems should, however, be able to do something reasonable after the first input is available.
This is not a classifier problem by nature, no. It is also not a clustering problem, other answers notwithstanding.
The price categories are not core to any recommendation process you would use. Presumably you have other data; what is it? That's what matters.
Finally, whether or not to use Mahout is partly a matter of taste. You would use it if you want to work in Java and Hadoop, and in turn you would only consider Hadoop if you had very large input; few people have that much data (at least ~10M data points).
(Well, not quite: my recommender pieces in Mahout pre-date Hadoop and are meant for online, smaller-scale applications. You might indeed be interested in those if you are working in Java.)
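To make "something reasonable after the first input" concrete, here is a minimal content-based sketch in Python (not Mahout) that ranks items by tag similarity to the one item the user has touched; the item names and tags are hypothetical:

```python
# Content-based ranking: score items by cosine similarity of their
# tag vectors to the item the user just interacted with (the "first input").
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

item_tags = {
    "new_phone":    "electronics mobile expensive",
    "budget_phone": "electronics mobile cheap",
    "coffee_maker": "kitchen appliance regular",
}

names = list(item_tags)
X = TfidfVectorizer().fit_transform(item_tags[n] for n in names)

liked = "new_phone"  # the user's first known interaction
scores = cosine_similarity(X[names.index(liked)], X).ravel()

ranked = sorted(zip(names, scores), key=lambda p: -p[1])
print([n for n, s in ranked if n != liked])  # most similar items first
```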

Related

Using NLP or other algorithms to match two strings

The goal of my project is to correctly match medications. I have a large catalog at my disposal for this purpose. However, the medications do not appear there in exactly the same spelling: sometimes additional information has been added, and sometimes parts of the prescription are abbreviated.
I was already able to implement a possible algorithm using the Levenshtein distance (token_set_ratio).
Because the additional information is sometimes long, this algorithm assigns the wrong medications, so I wanted to ask whether there are better algorithms for comparing strings. For example, does it make sense to use machine learning algorithms or NLP technology? This is a relatively new area for me, and I would appreciate any ideas or inspiration.
This sounds like a classic deduplication task. For example, have a look at dedupe. This tool lets you annotate training examples and learns when two items refer to the same thing. It can be used with as few as 10 training samples and has an active learning approach implemented.
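Before moving to a learned approach, it may also be worth tuning the fuzzy scoring itself. Here is a minimal sketch using rapidfuzz, which provides token_set_ratio; the medication strings below are invented:

```python
# Fuzzy-match a noisy prescription string against a clean catalog using
# token_set_ratio; names below are invented for illustration.
from rapidfuzz import fuzz, process

catalog = [
    "Ibuprofen 400 mg tablets",
    "Paracetamol 500 mg",
    "Amoxicillin 250 mg capsules",
]
query = "Ibuprofen 400mg tabl. N2"

best, score, idx = process.extractOne(query, catalog,
                                      scorer=fuzz.token_set_ratio)
# Pick a score threshold below which matches are flagged for manual review.
print(best, score)
```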

best way to whittle down features in a classification machine learning model

I have 66 features which I'm using to create a classification machine learning model in Python. However, to prevent issues like overfitting, I was wondering what the best way to reduce the number of features would be. I have read about PCA, but I am not sure whether any good methodology exists for reducing features, and whether any tools exist in sklearn to help facilitate this.
Thanks.
The first thing you should do is read through the documentation of scikit-learn's feature selection methods.
Every method has its pros and cons, and which one is best (if there even is a single best one) depends on the specific use case.
That being said, the methods offered in scikit-learn are by no means exhaustive. Discussing the different choices and elaborating on an appropriate method may be better suited to platforms like Cross Validated.
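As a starting point, here is a minimal sketch of two of those scikit-learn methods, a univariate filter and a model-based selector; the 66-feature matrix is randomly generated for illustration:

```python
# Two common sklearn feature-selection approaches on a synthetic
# 66-feature classification problem (data is random, for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import (SelectFromModel, SelectKBest,
                                       mutual_info_classif)

X, y = make_classification(n_samples=500, n_features=66,
                           n_informative=10, random_state=0)

# 1) Univariate filter: keep the 20 features with highest mutual information.
X_kbest = SelectKBest(mutual_info_classif, k=20).fit_transform(X, y)

# 2) Model-based: keep features whose random-forest importance
#    exceeds the mean importance.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0)).fit(X, y)
X_model = selector.transform(X)

print(X_kbest.shape, X_model.shape)
```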

Transfer Learning for small datasets of structured data

I am looking to implement machine learning for problems built on small data sets related to expense approvals in a specific supply chain domain. Typically, labelled data is unavailable.
I was looking to build a model on one data set for which I have labelled data and then use that model in similar contexts, where the feature set is very similar but not identical. The expectation is that this provides a starting point for recommendations while labelled data is gathered in the new context.
I understand this is the essence of transfer learning. Most of the examples I have read in this domain deal with image data sets; is there any guidance on how this can be leveraged for small data sets using standard tree-based classification algorithms?
I can’t really speak to tree-based algos; I don’t know how to do transfer learning with them. But for deep learning models, the customary method for transfer learning is to load a pretrained model, retrain the last layer of the network on your new data, and then optionally fine-tune the rest of the network.
If you don’t have much data to go on, you might look into creating synthetic data.
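A minimal Keras sketch of that recipe, assuming you already have a network trained on the source dataset; the file name "source_model.keras" and the variables X_target/y_target are hypothetical placeholders:

```python
# Transfer learning sketch: reuse everything but the last layer of a
# source-trained network as a frozen feature extractor, then train a
# new head on the small target dataset.
import tensorflow as tf

base = tf.keras.models.load_model("source_model.keras")  # hypothetical

# Everything up to (but not including) the old output layer becomes a
# frozen feature extractor; assumes the penultimate layer is a vector.
feature_extractor = tf.keras.Model(base.inputs, base.layers[-2].output)
feature_extractor.trainable = False

inputs = tf.keras.Input(shape=base.input_shape[1:])
x = feature_extractor(inputs, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # new last layer
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(X_target, y_target, epochs=10)  # your small target dataset

# To fine-tune afterwards, unfreeze and recompile with a low learning rate:
# feature_extractor.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss="binary_crossentropy")
```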
raghu, I believe you are looking for a kernel method when you say "abstraction layer" in deep learning. There are several ML algorithms that support kernel functions. With kernel functions you might be able to do it, but using them might be more complex than solving your original problem. I would lean toward Tdoggo's suggestion of using a decision tree.
Sorry, I want to add a comment, but they won't allow me, so I posted a new answer.
OK, with tree-based algos you can do just what you said: train the tree on one dataset and apply it to another, similar dataset. All you would need to do is change the terms/nodes in the second tree.
For instance, let’s say you have a decision tree trained for filtering expenses for a construction company. You will outright deny any reimbursements for workboots, because workers should provide those themselves.
You want to use the trained tree on your accounting firm, and so instead of workboots, you change that term to laptops, because accountants should be buying their own.
Does that make sense, and is that helpful to you?
After some research, we have decided to proceed with random forest models, with the intuition that trees in the original model that use common features will form the starting point for decisions.
As we gain more labelled data in the new context, we will start replacing the original trees with new trees that comprise (a) only new features and (b) combinations of old and new features.
This has provided reasonable results in initial trials.
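A minimal scikit-learn sketch of that incremental idea using warm_start, which grows additional trees on the new-context data while keeping the original ones; it assumes the feature columns line up, and make_classification stands in for the real expense data:

```python
# Random-forest "transfer" via warm_start: fit on the source data, then
# grow extra trees on target-context data while keeping the old trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_src, y_src = make_classification(n_samples=1000, n_features=20,
                                   random_state=0)
X_tgt, y_tgt = make_classification(n_samples=100, n_features=20,
                                   random_state=1)

rf = RandomForestClassifier(n_estimators=100, warm_start=True,
                            random_state=0)
rf.fit(X_src, y_src)             # trees trained on the source context

rf.set_params(n_estimators=150)  # add 50 more trees...
rf.fit(X_tgt, y_tgt)             # ...trained only on the target data

print(len(rf.estimators_))       # 150: old and new trees vote together
```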

Research papers classification on the basis of title of the research paper

Dear all, I am working on a project in which I have to categorize research papers into their appropriate fields using the titles of the papers. For example, if the phrase "computer network" occurs somewhere in the title, then the paper should be tagged as related to the concept "computer network". I have 3 million titles of research papers, so I want to know how I should start. I have tried to use tf-idf but could not get useful results. Does someone know about a library to do this task easily? Kindly suggest one. I shall be thankful.
If you don't know the categories in advance, then it's not classification but clustering. Basically, you need to do the following:
Select an algorithm.
Select and extract features.
Apply the algorithm to the features.
Quite simple. You only need to choose the combination of algorithm and features that fits your case best.
When talking about clustering, there are several popular choices. K-means is considered one of the best and has an enormous number of implementations, even in libraries not specialized in ML. Another popular choice is the Expectation-Maximization (EM) algorithm. Both of them, however, require an initial guess about the number of clusters. If you can't predict the number of clusters even approximately, other algorithms, such as hierarchical clustering or DBSCAN, may work better for you (see the discussion here).
As for features, the words themselves normally work fine for clustering by topic. Just tokenize your text, then normalize and vectorize the words (see this if you don't know what all that means).
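For example, a minimal k-means sketch over paper titles with scikit-learn; the titles and the choice of k are made up, and with 3 million titles you would use MiniBatchKMeans instead:

```python
# Cluster paper titles: tf-idf vectorize, then k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "A survey of computer network protocols",
    "Routing in wireless computer networks",
    "Deep learning for image classification",
    "Convolutional networks for vision tasks",
]

X = TfidfVectorizer(stop_words="english").fit_transform(titles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for title, label in zip(titles, labels):
    print(label, title)
```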
Some useful links:
Clustering text documents using k-means
NLTK clustering package
Statistical Machine Learning for Text Classification with scikit-learn and NLTK
Note: all links in this answer are about Python, since it has really powerful and convenient tools for this kind of task, but if you prefer another language, you will most probably be able to find similar libraries for it too.
For Python, I would recommend NLTK (Natural Language Toolkit), as it has some great tools for converting your raw documents into features you can feed to a machine learning algorithm. To start out, you can try a simple word frequency model (bag of words) and later move on to more complex feature extraction methods (string kernels). If you have labels, you can then use SVMs (Support Vector Machines) to classify the data using LibSVM (the best SVM package).
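A minimal supervised sketch of that bag-of-words-plus-SVM idea, using scikit-learn's LinearSVC (which wraps liblinear rather than LibSVM); the titles and labels are made up, and a real run needs many labelled titles per field:

```python
# Supervised variant: bag-of-words features and a linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

titles = [
    "Congestion control in computer networks",
    "A computer network intrusion study",
    "Protein folding with neural models",
    "Gene expression analysis methods",
]
fields = ["networks", "networks", "bioinformatics", "bioinformatics"]

clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(titles, fields)

print(clf.predict(["Measuring latency in computer network backbones"]))
```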
Given that you do not know the number of categories in advance, you could use a tool called OntoGen. The tool basically takes a set of texts, does some text mining, and tries to discover clusters of documents. It is a semi-supervised tool, so you must guide the process a little, but it does wonders. The final product of the process is an ontology of topics.
I encourage you to give it a try.

Machine learning/information retrieval project

I’m reading towards an M.Sc. in Computer Science and have just completed the first year of the course (it is a two-year course). Soon I have to submit a proposal for the M.Sc. project, and I have selected the following topic:
“Suitability of machine learning for document ranking in information retrieval systems”. Researchers have been using various machine learning algorithms for ranking documents, so in the first phase of the project I will do a complete literature survey and identify the advantages and disadvantages of current approaches. In the second phase of the project I will propose a new (modified) algorithm to overcome the limitations of current approaches.
My question is whether this type of project is suitable as an M.Sc. project. Moreover, if somebody has an interesting idea in the information retrieval field, would you be willing to share it with me?
Thanks
Ranking is always the hardest part of any information retrieval system. I think it is a very good topic, but you have to take care, as early as possible, to define the scope of the work. You will probably not be able to develop a new IR engine from scratch, but rather build a prototype based on, e.g., Apache Lucene.
Currently there are many datasets, including the Stack Overflow data dump, that provide all the information you need to define a rich feature vector (number of points, timestamps, topics mined from previous questions, popularity of a tag, etc.) for your machine learning ranking algorithm. In this part of the work you could, e.g., classify types of features (user-specific features, semantic features such as a software name in the title) and perform a series of experiments to learn which features are most important and which are not for a given dataset.
A second direction for such a project could be how to perform the learning efficiently. The motivation is the sheer quantity of data on the web and in community forums, and the fact that forums change over time (important if you use community-specific features), e.g., changes in technologies, new software releases, etc.
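As a concrete version of those feature-importance experiments, here is a minimal pointwise learning-to-rank sketch; the feature names and the synthetic relevance labels are placeholders:

```python
# Pointwise learning-to-rank sketch: regress relevance labels on a
# hand-built feature vector, then inspect which features matter.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["answer_score", "answer_age_days",
                 "tag_popularity", "user_reputation"]
X = rng.random((500, len(feature_names)))
y = X[:, 0] * 2 + X[:, 3] + rng.normal(0, 0.1, 500)  # fake relevance

model = GradientBoostingRegressor(random_state=0).fit(X, y)
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```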
There are many other topics related to search and machine learning. The best idea is to search scholar.google.com for recent survey papers on ranking, machine learning, and search to learn what the state of the art is. The very next step would be to talk with your M.Sc. supervisor.
Good luck!
Everything you said is good and should be done, but you forgot the most important part:
Prove that your algorithm is better and/or faster than other algorithms, with good experiments and maybe some statistics (p-value, confidence interval).
If you do that and convince people that your algorithm is useful, you surely will not fail :)
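A minimal sketch of that kind of statistical check, comparing per-query evaluation scores (e.g. NDCG) of a baseline ranker and your new ranker with a paired test; the score arrays are placeholders:

```python
# Paired significance test on per-query scores of two rankers.
import numpy as np
from scipy import stats

baseline = np.array([0.61, 0.55, 0.70, 0.66, 0.59, 0.62, 0.58, 0.64])
proposed = np.array([0.65, 0.58, 0.71, 0.69, 0.63, 0.61, 0.62, 0.68])

t, p = stats.ttest_rel(proposed, baseline)
diff = proposed - baseline
ci = stats.t.interval(0.95, len(diff) - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
print(f"p-value: {p:.4f}, 95% CI for mean improvement: {ci}")
```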
