Predefined Multilabel Text Classification - machine-learning

Friends,
We are trying to work on a problem where we have a dump of reviews, with no ratings, in a .csv file. Each row in the .csv is one review given by a customer of a particular product, let's say a TV.
Here, I want to classify each review into the pre-defined categories below, given by the domain expert for that product:
Quality
Customer Support
Positive Feedback
Price
Technology
Some reviews are as below:
Bought this product recently, feeling a great product in the market.
Was waiting for this product since long, but disappointed
The built quality is not that great
LED screen is picture perfect. Love this product
Damm! bought this TV 2 months ago, guess what, screen showing a straight line, poor quality LED screen
This has very complicated options, documentation of this TV is not so user-friendly
I cannot use my smart device to connect to this TV. Simply does not work
Customer support is very poor. I don't recommend this
Works great. Great product
Now, with the above reviews from different customers, how do I categorize them into the given buckets? (You can call it multilabel classification, named entity recognition, information extraction with sentiment analysis, or anything else.)
I have tried word-frequency-based NLP approaches (in R) and looked at Stanford NLP (https://nlp.stanford.edu/software/CRF-NER.shtml), among others, but could not arrive at a concrete solution.
Can anybody please guide me on how to tackle this problem? Thanks!

Most NLP frameworks will handle multi-label classification. Word counts by themselves in R are unlikely to be very accurate. A Python library you can explore is spaCy. Commercial APIs from Google, AWS and Microsoft can also be used. You will need quite a few labelled examples per category for training. Feel free to post your code and the problem or performance gap you see for further help.
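As a starting point, here is a minimal, hedged sketch using scikit-learn (one option alongside spaCy or the commercial APIs, not the only way to do this). It assumes you first hand-label a reasonable number of reviews per category; the four labelled reviews below are only placeholders taken from your examples.

```python
# Minimal multi-label sketch with scikit-learn; real training needs many
# labelled reviews per category, these four are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

reviews = [
    "The built quality is not that great",
    "LED screen is picture perfect. Love this product",
    "Customer support is very poor. I don't recommend this",
    "Works great. Great product",
]
labels = [
    ["Quality"],
    ["Quality", "Positive Feedback"],
    ["Customer Support"],
    ["Positive Feedback"],
]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # one binary column per category

# One binary classifier per category over TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(reviews, y)

new_review = ["bought this TV 2 months ago, screen showing a straight line"]
pred = clf.predict(new_review)
# With enough training data this prints the predicted category tuples.
print(mlb.inverse_transform(pred))
```

Each review can receive zero, one or several of the expert's categories, which matches the multilabel framing in the question.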

Related

Data prediction from previous data history using AI/ML

I am looking for solutions where I can automatically approve or disapprove different supplier invoices based on historical data.
Let's say, I got an invoice from an HP laptop supplier and based on the previous data, I have to approve or reject that invoice.
Basically, I want to make a decision or prediction from the historical data that is already available, using artificial intelligence, machine learning or any other cloud service.
This isn't a direct question, but you can start by looking into various methods of classification. There is a huge amount of material available online. Try reading about K-Nearest Neighbors, Naive Bayes, K-means, etc. to get an idea of how algorithms in the machine learning domain work. Once you understand what is written in the documentation, start implementing them. You will face a lot of problems which you can search online, and I'm sure you will find most of them answered here on this portal.
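For illustration only, here is a hedged sketch of that classification framing applied to invoice approval, using Naive Bayes from scikit-learn; the feature columns and toy values are hypothetical.

```python
# Rough sketch: treat approve-vs-reject as binary classification over
# historical invoices. Columns and rows below are made up for illustration.
import pandas as pd
from sklearn.naive_bayes import GaussianNB

# Toy stand-in for historical invoices; "approved" is the known past decision.
history = pd.DataFrame({
    "amount":      [900, 4500, 1200, 7000, 650, 5200],
    "days_to_due": [30,   15,   30,   10,  45,   20],
    "item_count":  [2,     8,    3,   12,   1,    9],
    "approved":    [1,     0,    1,    0,   1,    0],
})

X = history[["amount", "days_to_due", "item_count"]]
y = history["approved"]

# Naive Bayes is one of the simple classifiers mentioned above; categorical
# fields (e.g. supplier) would need encoding before being used as features.
model = GaussianNB()
model.fit(X, y)

new_invoice = pd.DataFrame({"amount": [1100], "days_to_due": [25], "item_count": [3]})
print("approve" if model.predict(new_invoice)[0] == 1 else "reject")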

How does a PC/phone recognize a person from one picture?

Recently I have been studying facial recognition with OpenCV, and I'm trying some simple examples based on what I've learned.
I'm considering using it for a front-door setting.
Nowadays some buildings or apartments use facial recognition to prevent intruders. When someone joins (a company or a housing complex), they require the person's picture. As far as I know, they require just one picture.
I didn't pay attention to that before, but now I'm very curious about it.
Famous algorithms such as PCA and LDA use machine learning, so they increase the success rate. To use machine learning, they need as many sample images as can be provided. That's why I'm curious: buildings or companies require just one picture, yet they can recognize each person, and their accuracy is very good. How can this happen? Is there any other algorithm besides PCA or LDA?
Thanks for reading!
As far as I know, this hasn't been achieved yet. So I don't think they can develop software that recognizes a person using only one picture.
It is most likely that they teach the algorithm with the authorized person's pictures. So if that one picture does not match the trained ones, the algorithm can flag an intrusion.
Edit:
As linuxqwerty pointed out, those commercial products are already trained on huge datasets.
As a result of this training, the algorithm learns to extract features from all those sample faces.
The algorithm then knows almost every kind of feature a human face can have.
For example: thickness of eyebrows, distance between eyes, roundness of chin... These are only the features a human can name; the algorithm can extract thousands of them.
It can store faces as a representation of those features.
So now we have commercial software which can represent faces as binary codes with a lot of digits.
Coming back to your question:
The apartment or company bought this software.
They included the picture of the authorized person.
What the software does is simply convert that picture into something like a thousand-digit password.
So that person has a unique password which the system can reproduce only from his face.
To sum up:
The learning part was achieved using big face databases.
Thanks to the learning part, the recognition part can be done using only one picture.
PS: Corrections are welcome.
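To illustrate the "one enrolled picture" idea, here is a hedged sketch using the open-source face_recognition library (one possible tool, not necessarily what commercial systems use; the file names are placeholders). The heavy learning happened when the library's embedding model was trained on a large face dataset; at the door you only compare embeddings.

```python
# One enrolled photo -> one embedding; recognition = comparing embedding
# distances. File names below are placeholders.
import face_recognition

# Enrolment: one photo of the authorised person -> one 128-d embedding.
enrolled_image = face_recognition.load_image_file("authorised_person.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled_image)[0]

# At the door: embed every face in the camera frame and compare distances.
frame = face_recognition.load_image_file("door_camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    distance = face_recognition.face_distance([enrolled_encoding], encoding)[0]
    # The library's commonly used threshold is around 0.6; tune for your setup.
    print("match" if distance < 0.6 else "intruder", f"(distance={distance:.2f})")
```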
I happened to read about facial recognition before, when I wanted to do it as my semester project, and of course I had heard of and considered using OpenCV as well.
Your question is simple: companies or homes that use facial recognition usually use a very well-developed product, which normally includes well-programmed facial recognition. As we are talking about security here, companies will normally buy these security products; only if they just want a tool to deter intruders, with less focus on practical usage and recognition accuracy, might they opt for free facial recognition software.
So when I talk about well-programmed facial recognition, I mean it was trained on huge databases (the photos to be recognized that you mentioned); the training is done before the software is officially launched, during the development stage. Good facial recognition software requires both careful, complete programming and huge photo databases (taken at different ambient light intensities, with different facial features like hair styles and spectacles) to train it.
Therefore, the accuracy of the software does not depend solely on the number of pictures given while the software is in use, provided that it is well-programmed in the first place. Thanks, and I hope that answers your question.
ps: recognize is spelled this way (US); recognise (UK) =)

Applying AI, recommendation or machine learning techniques to search feature

I'm new to the area of AI, machine learning, recommendation engines and data mining however would like to find a way to get into the area.
I'm working on a conference room booking application which will recommend meeting rooms to employees at what it calculates to be the most suitable time and location. The recommendations are based on criteria which an employee will enter before submitting a search. The criteria can include meeting attendees (who can be in different locations and time zones), room capacity (based on attendees) and the types of equipment required.
The recommendation engine will take time zones and locations into consideration and recommend one or more meeting rooms, depending on whether employees are in different buildings/geographical regions.
Can anyone recommend recommendation-engine, machine learning or AI techniques which I could apply to solve this problem? I'm new to this area, so all suggestions are greatly appreciated.
This looks more like an optimization problem. You have some hard constraints and some preferences. Look at linear programming. Also google constraint-based scheduling; there are several tutorials.
Just a warning: this is in general an NP-hard problem, so unless you are trying to solve it for a small number of participants, you will need to use some heuristics and approximations. If you want to go a little bit overboard, there is a Coursera class on optimization running right now.
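For a feel of the constraint-solving angle, here is a hedged sketch using Google OR-Tools CP-SAT (one solver option; the rooms, capacities and preference scores are invented): hard constraints filter rooms, and the soft preferences become the objective.

```python
# Pick one room satisfying capacity/equipment constraints while maximising a
# preference score (e.g. timezone fit). All data below is made up.
from ortools.sat.python import cp_model

rooms = ["Dublin-3F", "London-1F", "NYC-2F"]          # hypothetical rooms
capacity = {"Dublin-3F": 8, "London-1F": 4, "NYC-2F": 12}
has_video = {"Dublin-3F": True, "London-1F": False, "NYC-2F": True}
preference = {"Dublin-3F": 5, "London-1F": 3, "NYC-2F": 2}

attendees = 6
need_video = True

model = cp_model.CpModel()
pick = {r: model.NewBoolVar(f"pick_{r}") for r in rooms}

# Hard constraints: exactly one room, enough seats, required equipment.
model.Add(sum(pick[r] for r in rooms) == 1)
for r in rooms:
    if capacity[r] < attendees or (need_video and not has_video[r]):
        model.Add(pick[r] == 0)

# Soft preferences become the objective.
model.Maximize(sum(preference[r] * pick[r] for r in rooms))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([r for r in rooms if solver.Value(pick[r])])  # -> ['Dublin-3F']
```

Scheduling the time slot would add more variables and constraints in the same style, which is where the NP-hardness warning above starts to bite.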

Recommendation rules for sorting a list based on a profile

I'm working on a site that needs to present a set of options that have no particular order. I need to sort this list based on the customer who is viewing it. I thought of doing this by generating recommendation rules and sorting the list so the options the customer is most likely to like appear at the top. Furthermore, I think it would be cool if, when the confidence in the recommendation is high, I could tell the customer why I'm recommending it.
For example, let's say we have an ice cream joint with a website where customers can register and order online. The customer information contains basic info like gender, DOB, address, etc. My goal is to mine previous orders made by customers to generate rules of the format
feature -> flavor
where feature would be information either from the profile or from the order itself (for example, we might ask how many people you are expecting to serve, their ages, etc.).
I would then pull the rules that apply to the current customer and use the ones with higher confidence to rank items at the top of the list.
My question is: what is the best standard algorithm to solve this? I have some experience with apriori and initially thought of using it, but since I'm interested in having only one consequent, I now think other alternatives might be better suited. In any case I'm not that knowledgeable about machine learning, so I'd appreciate any help and references.
This is a recommendation problem.
First, the apriori algorithm is no longer the state of the art in recommendation systems (a related discussion is here: Using the apriori algorithm for recommendations).
Check out Chapter 9, Recommendation Systems, of the book Mining of Massive Datasets below. It's a good tutorial to start with.
http://infolab.stanford.edu/~ullman/mmds.html
Basically you have two different approaches: content-based and collaborative filtering. The latter can be done with an item-based or a user-based approach. There are also methods that combine the two to get better recommendations.
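To make the collaborative-filtering idea concrete, here is a hedged item-based sketch with NumPy and scikit-learn (the order matrix is invented): each flavour is scored for the current customer by how similar it is to flavours they have already ordered.

```python
# Item-based collaborative filtering: rank flavours for one customer using
# flavour-to-flavour cosine similarity over past orders (toy data).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

flavours = ["vanilla", "chocolate", "strawberry", "pistachio"]
# Rows = customers, columns = flavours, values = number of past orders.
orders = np.array([
    [3, 0, 1, 0],
    [0, 4, 0, 2],
    [2, 1, 0, 0],
    [0, 3, 0, 3],
])

item_sim = cosine_similarity(orders.T)   # flavour-to-flavour similarity
customer = orders[2]                     # current customer's order history
scores = item_sim @ customer             # weight similarities by that history

ranking = [flavours[i] for i in np.argsort(-scores)]
print(ranking)                           # sorted list to show this customer
```

Content-based filtering would instead score flavours from profile features (gender, DOB, party size), which maps more directly onto the feature -> flavor rules you describe.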
Some further readings that might be useful:
A recent survey paper on recommendation systems:
http://arxiv.org/abs/1006.5278
Amazon item-to-item collaborative filtering: http://www.cs.umd.edu/~samir/498/Amazon-Recommendations.pdf
Matrix factorization techniques: http://research.yahoo4.akadns.net/files/ieeecomputer.pdf
Netflix challenge: http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/
Google news personalization: http://videolectures.net/google_datar_gnp/
Some related stackoverflow topics:
How to create my own recommendation engine?
Where can I learn about recommendation systems?
How do I adapt my recommendation engine to cold starts?
Web page recommender system

Using Artificial Intelligence (AI) to predict Stock Prices

I have a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between.
Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based on something like the S&P 500 price. The components I think would make sense in the model are:
user
direction (long/short)
market direction
sector of stock
The thought is that some users are better in bull markets than in bear markets (and vice versa), some are better at shorts than at longs, and then there are combinations of the above. I can automatically tag the market direction and sector (based on the market at the time and the equity being recommended).
The thought is that I could present a series of screens that let me rank each individual recommendation by displaying the available data: absolute, market and sector out-performance over a specific time period. I would follow a detailed checklist for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows.
I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings.
NOW here is the crux: I want to create some sort of machine learning algorithm that can identify patterns over time, so that as recommendations stream into the application we maintain a ranking for that stock (i.e. similar to a correlation coefficient) of how likely that recommendation (together with the past series of recommendations) is to affect the price.
Now here is the super crux. I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even a push in the right direction to get started...
My hope is to implement this in F#, impress my friends with a new skill set in F# and an implementation of machine learning, and potentially end up with something (application / source) I can include in a tech portfolio or blog.
Thank you for any advice in advance.
I have an MBA, and teach data mining at a top grad school.
The term project this year was to predict stock price movements automatically from news reports. One team had 70% accuracy, on a reasonably small sample, which ain't bad.
Regarding your question, a lot of companies have made a lot of money on pairs trading (find a pair of assets that normally correlate, and buy/sell the pair when they diverge). See the writings of Ed Thorp, author of Beat the Dealer. He's accessible and kinda funny, if not curmudgeonly. He ran a good hedge fund for a long time.
There is probably some room for using data mining to predict companies that will default (be unable to make debt payments), shorting† them, and using the proceeds to buy shares in companies less likely to default. Look into survival analysis. Search Google Scholar for "predict distress" etc. in finance journals.
Also, predicting companies that will lose value after an IPO (and shorting them. edit: Facebook!). There are known biases, in academic literature, that can be exploited.
Also, look into capital structure arbitrage. This is when the value of the stocks in a company suggest one valuation, but the value of the bonds or options suggest another value. Buy the cheap asset, short the expensive one.
Techniques include survival analysis, sequence analysis (Hidden Markov Models, Conditional Random Fields, Sequential Association Rules), and classification/regression.
And for the love of God, please read Fooled By Randomness by Taleb.
† Shorting a stock usually involves calling your broker (one you have a good relationship with) and borrowing some shares of a company. Then you sell them to some poor bastard. Wait a while; hopefully the price has gone down, then you buy the shares back and return them to your broker.
My Advice to You:
There are several Machine Learning/Artificial Intelligence (ML/AI) branches out there:
http://www-formal.stanford.edu/jmc/whatisai/node2.html
I have only tried genetic programming, but in the "learning from experience" branch you will find neural nets. GP/GA and neural nets seem to be the most commonly explored methodologies for the purpose of stock market predictions, but if you do some data mining on Predict Wall Street, you might be able to utilize a Naive Bayes classifier to do what you're interested in doing.
Spend some time learning about the various ML/AI techniques, get a small data set and try to implement some of those algorithms. Each one will have its strengths and weaknesses, so I would recommend that you try to combine them using a Naive Bayes classifier (or something similar).
My Experience:
I'm working on the problem for my Master's thesis, so I'll pitch my results using genetic programming: www.twitter.com/darwins_finches
I started live trading with real money on 09/09/09... yes, it was a magical day! I post the GP's predictions before the market opens (see the timestamps on Twitter) and I also place the orders before the market opens. The profit for this period has been around 25%; we've consistently beaten the Buy & Hold strategy and we're also outperforming the S&P 500 with stocks that are under-performing it.
Some Resources:
Here are some resources that you might want to look into:
Max Dama's blog: http://www.maxdama.com/search/label/Artificial%20Intelligence
My blog: http://mlai-lirik.blogspot.com/
AI Stock Market Forum: http://www.ai-stockmarketforum.com/
Weka is a data mining tool with a collection of ML/AI algorithms: http://www.cs.waikato.ac.nz/ml/weka/
The Chatter:
The general consensus amongst "financial people" is that artificial intelligence is a voodoo science: you can't make a computer predict stock prices, and you're sure to lose your money if you try. Nonetheless, the same people will tell you that just about the only way to make money on the stock market is to build and improve your own trading strategy and follow it closely.
The idea of AI algorithms is not to build Chip and let him trade for you, but to automate the process of creating strategies.
Fun Facts:
RE: monkeys can pick better than most experts
Apparently rats are pretty good too!
I understand monkeys can pick better than most experts, so why not an AI? Just make it random and call it an "advanced simian Mersenne twister AI" or something.
Much more money is made by the sellers of "money-making" systems than by the users of those systems.
Instead of trying to predict the performance of companies over which you have no control, form a company yourself and fill some need by offering a product or service (yes, your product might be a stock-predicting program, but something a little less theoretical is probably a better idea). Work hard, and your company's own value will rise much quicker than any gambling you'd do on stocks. You'll also have plenty of opportunities to apply programming skills to the myriad of internal requirements your own company will have.
If you want to go down this long, dark, lonesome road of trying to pick stocks you may want to look into data mining techniques using advanced data mining software such as SPSS or SAS or one of the dozen others.
You'll probably want to use a combination of technical indicators and fundamental data. The data will more than likely be highly correlated, so a feature-reduction technique such as PCA will be needed to reduce the number of features.
Also keep in mind your data will constantly have to be updated, trimmed, shuffled around because market conditions will constantly be changing.
I've done research on this for a grad-level class, and I was somewhat successful at picking whether a stock would go up or down the next day, but the number of stocks in my data set was fairly small (200) and it was over a very short time frame with consistent market conditions.
What I'm trying to say is that what you want to code has been done in very advanced ways in software that already exists. You should be able to input your data into one of these programs and, using regression, decision trees or clustering, do what you want to do.
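As a hedged sketch of that pipeline (synthetic placeholder data, not real market data): indicator-style feature columns go through PCA to collapse correlated features, then a simple decision tree predicts next-day direction.

```python
# Feature reduction (PCA) + classifier for next-day direction. The feature
# matrix and labels are random placeholders, so accuracy stays near chance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))            # 30 placeholder feature columns
                                          # (in practice: technical/fundamental
                                          # indicators, often highly correlated)
y = (rng.random(500) > 0.5).astype(int)   # 1 = up next day, 0 = down (placeholder)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                 # collapse correlated indicators
    DecisionTreeClassifier(max_depth=5),
)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```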
I have been thinking of this for a few months.
I am thinking about Random Matrix Theory/Wigner's distribution.
I am also thinking of Kohonen self-learning maps.
These comments on speculation and past performance apply to you as well.
I recently completed my master's thesis on deep learning and stock price forecasting. Basically, the current approach seems to be LSTMs and other deep learning models. There are also 10-12 technical indicators (TIs) based on moving averages that have been shown to be highly predictive for stock prices, especially indices such as the S&P 500, NASDAQ, DJI, etc. In fact, there are libraries such as pandas_ta for computing various TIs.
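Here is a compressed, hedged sketch of that approach: moving-average-style indicators from pandas_ta feeding a small Keras LSTM that predicts next-day direction. The prices are a synthetic random walk purely to keep the example self-contained, so the accuracy will be near chance.

```python
# pandas_ta indicators -> sliding windows -> small LSTM for next-day direction.
# Prices are a synthetic random walk; swap in real OHLC data in practice.
import numpy as np
import pandas as pd
import pandas_ta as ta
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 1000)), name="close")

# A few moving-average-based indicators as features.
feats = pd.concat([ta.sma(close, length=10),
                   ta.ema(close, length=20),
                   ta.rsi(close, length=14)], axis=1).dropna()
target = (close.shift(-1) > close).astype(int).loc[feats.index]  # 1 = up tomorrow

# Shape into (samples, timesteps, features) windows for the LSTM.
window = 20
X = np.stack([feats.values[i - window:i] for i in range(window, len(feats))])
y = target.values[window:]

model = Sequential([LSTM(32, input_shape=(window, feats.shape[1])),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # near chance on random-walk data, as expected
```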
I represent a group of academics that are trying to predict stocks in a general form that can also be applied to anything, even the rating of content.
Our algorithm, which we describe as truth seeking, works as follows.
Basically, each participant has their own credence rating: the higher your credence or credibility, the more your vote counts. Credence is worked out by how close each vote is to the credence-weighted average; you get a better credence value the closer you are to the average vote that has already been adjusted for credence.
For example, let's say that everyone is predicting where a stock's value will be in 30 days' time (a futures option). People who predict close to the average get a better credence. The key here is that the individual doesn't know what the average is; only the system does. The system is tweaked further by weighting the guesses, so the target spot that generates the best credence is set by the votes that are already endowed with more credence. So the smartest people (those who have been historically accurate) define the sweet spot that is then used to decide who gets more credence.
The system can also adjust over time. For example, when you find out the actual value, the people who guessed it can be rewarded with a higher credence. In cases where you can't know the future outcome, you can still take into account whether the weighted average changes in the future. People can be rewarded even more if they spotted the trend early. The point is that we don't even need to know the outcome; the fact that the weighted rating changed later is enough to reward people who bet early on the sweet spot.
Such a system can be used to rate anything from stock prices and currency exchange rates to content itself.
One such implementation asks people to vote with two parameters: their actual vote and an "assurity" percentage, which expresses how confident the participant is in their vote. In this way, a person with a high credence does not need to risk downgrading it when they are not sure of their bet; the bet can still be incorporated, it just won't sway the sweet spot as much if a low assurity is used. In the same vein, if the guess lands directly on the sweet spot with a low assurity, they won't gain as much benefit as they would have with a high assurity.
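Here is a toy sketch of how I read the credence scheme (my own simplification, not the group's actual implementation): votes are combined by credence-weighted averaging, and each participant's credence moves with how close their vote was to that hidden sweet spot, scaled by their assurity.

```python
# Toy credence-weighted consensus with a simple credence update rule.
# The update formula is an assumption made for illustration.
def weighted_consensus(votes, credence):
    total = sum(credence.values())
    return sum(votes[p] * credence[p] for p in votes) / total

def update_credence(votes, credence, target, assurity, rate=0.1):
    # 'assurity' scales both the possible credence gain and the possible loss.
    new = {}
    for p, vote in votes.items():
        closeness = 1.0 / (1.0 + abs(vote - target))   # 1 = spot on, -> 0 far off
        new[p] = credence[p] * (1 + rate * assurity[p] * (2 * closeness - 1))
    return new

votes = {"alice": 102.0, "bob": 95.0, "carol": 120.0}   # predicted price in 30 days
credence = {"alice": 2.0, "bob": 1.0, "carol": 0.5}
assurity = {"alice": 0.9, "bob": 0.5, "carol": 1.0}

consensus = weighted_consensus(votes, credence)         # the hidden "sweet spot"
credence = update_credence(votes, credence, consensus, assurity)
print(round(consensus, 2), {p: round(c, 2) for p, c in credence.items()})
```

When the realised value becomes known, the same update can be run against that value instead of the consensus, which matches the reward-for-accuracy step described above.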
