I have a data set as below:
Code | Description
AB123 | Cell Phone
B467A | Mobile Phone
12345 | Telephone
WP9876 | Wireless Phone
SP7654 | Satellite Phone
SV7608 | Sedan Vehicle
CC6543 | Car Coupe
I need to create an automated grouping based on the Code and Description. Let's assume I already have a lot of such data classified into groups 0-99. Whenever a new record comes in with a Code and Description, the machine learning algorithm needs to classify it automatically based on the previously available data.
Code | Description | Group
AB123 | Cell Phone | 1
B467A | Mobile Phone | 1
12345 | Telephone | 1
WP9876 | Wireless Phone | 1
SP7654 | Satellite Phone | 1
SV7608 | Sedan Vehicle | 2
CC6543 | Car Coupe | 3
Can this be achieved with some level of accuracy? Currently this process is completely manual. If there are any ideas or references for this, please help me with them.
Try reading up on supervised learning. You need to provide labels for your training data so that the algorithms know what the correct answers are and can generate appropriate models for you.
Then you can "predict" the output classes for your new incoming data using the generated model(s).
Finally, you may wish to circle back and check the accuracy of the predicted results. If you then enter the correct labels for the newly received and predicted data, that data can be used to train your model(s) further.
Yes, it's possible with supervised learning. You pick a model and "train" it with the data you already have. The model/algorithm then "generalizes" from the known data to previously unseen data.
What you specify as a group would be called a class or "label", which needs to be predicted based on the two input features (code/description). Whether you feed these features in directly or preprocess them into more abstract features that suit the algorithm better depends on which algorithm you choose.
If you have no experience with Machine Learning, you might start with learning some basics while testing already implemented algorithms in tools such as RapidMiner, Weka or Orange.
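To make this concrete, here is a minimal sketch using scikit-learn (just one possible tool); the tiny training set is the example table from the question, and the TF-IDF + logistic regression choice is only an illustration, not a recommendation:

# Minimal supervised-learning sketch (assumes scikit-learn is installed).
# The few rows below are just the examples from the question; in practice
# you would load all of your already-classified records (groups 0-99).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

codes        = ["AB123", "B467A", "12345", "WP9876", "SP7654", "SV7608", "CC6543"]
descriptions = ["Cell Phone", "Mobile Phone", "Telephone", "Wireless Phone",
                "Satellite Phone", "Sedan Vehicle", "Car Coupe"]
groups       = [1, 1, 1, 1, 1, 2, 3]

# Combine the two input features into one text field and let TF-IDF turn it
# into numeric features the classifier can work with.
texts = [code + " " + desc for code, desc in zip(codes, descriptions)]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, groups)

# Predict the group for a new, unseen record.
print(model.predict(["XY111 Smart Phone"]))

New labels you confirm later can simply be appended to the training data and the model refitted, which is the "circle back" step mentioned above.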
I don't think machine learning methods are the most appropriate solution to this problem, because text-based machine learning algorithms tend to be quite complicated. From the examples you provided, I'm not sure how the Code field relates to the groups, so the approach below only looks at the Description.
I think the simplest way of solving, or attempting to solve, this problem is the following, which can be implemented in many free programming languages such as Python. Each description can be stored as a string. For every string that belongs to a particular group, store all of its substrings in a list (i.e. if 'Phone' is your string, the substrings are 'P', 'h', 'Ph', ..., 'e'; see the question "Substrings of a string using Python" for how to implement this). Then, for each substring, check which ones are unique to a certain group, and keep only substrings over a certain length (say 3 characters, to get rid of random letter combinations) as your classification criteria. When new data arrives, check whether its description contains a substring that is unique to a certain group. With this, for instance, you would be able to classify all objects in group 1 based on whether their description contains the word 'phone'.
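For illustration, here is a rough Python sketch of that idea; the helper names and the tiny labelled set below are made up, and the minimum substring length is just the 3-character rule mentioned above:

from collections import defaultdict

def substrings(text, min_len=3):
    """All substrings of text that are at least min_len characters long."""
    return {text[i:j] for i in range(len(text))
            for j in range(i + min_len, len(text) + 1)}

def build_signatures(labelled):
    """labelled: list of (description, group); returns substrings unique to one group."""
    seen = defaultdict(set)                      # substring -> set of groups it occurs in
    for description, group in labelled:
        for sub in substrings(description.lower()):
            seen[sub].add(group)
    return {sub: grps.pop() for sub, grps in seen.items() if len(grps) == 1}

def classify(description, signatures):
    for sub, group in signatures.items():
        if sub in description.lower():
            return group
    return None                                  # unknown -> needs manual review

signatures = build_signatures([("Cell Phone", 1), ("Mobile Phone", 1),
                               ("Sedan Vehicle", 2), ("Car Coupe", 3)])
print(classify("Wireless Phone", signatures))    # prints 1, via substrings of "phone"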
It's hard to provide concrete code that solves your problem exactly without knowing which languages you are familiar with or can use, but I hope the sketch above helps anyway. Yves
I have successfully finished the speech-to-text conversion program, and the results typically show 80-90% confidence in the transcript.
Now, is there a way (a service) that can be used, or is usually used, to improve the confidence in the transcript?
For example:
I want to search for a name in the directory
.
.
Sunil Chauhan
Sunit Chavan
Sumit Chawhan
.
.
All three names above are valid (as in, they exist). But Sunit is less common than Sunil or Sumit, and all of them have very similar surnames.
A human listener can tell the difference, but how do we handle the text response from Google speech recognition, which returns the most common Sunil/Sumit with the most common Chauhan when the speaker actually said Sunit Chavan?
Is there an available AI or ML service which can be used in such cases?
I have some user chat data categorised into various categories. The problem is that there are a lot of algorithm-generated categories; please see the example below:
Message | Category
I want to play cricket | Play cricket
I wish to watch cricket | Watch cricket
I want to play cricket outside | Play cricket outside
As you can see, the categories (essentially phrases) are extracted from the text itself.
In my data there are 10,000 messages with approximately 4,500 unique categories.
Is there any suitable algorithm which can give me good prediction accuracy in such cases?
Well, I habitually use OpenNLP's DocumentCategorizer for tasks like this, but Stanford CoreNLP, I think, does some similar stuff. OpenNLP uses maximum entropy for this, but there are many ways to do it.
First, some thoughts on the number of unique labels. With roughly 4,500 categories for 10,000 messages you only have a few samples per class, and that is generally a bad thing: your classifier is going to give sucky results no matter what it is if you try to do it the way you are implying, because of overlap and/or underfitting. So here's what I've done before in a similar situation: separate the concepts into different thematic classifiers, then assemble the best score from each. For example, based on what you wrote above, you may be able to detect OUTSIDE vs INSIDE with one classification model, and WATCHING CRICKET vs PLAYING CRICKET with another. At runtime, you would pass the text into both classifiers and take the best hit from each to assemble a single category. Pseudo code:
// Load one document-categorizer model per thematic concept.
DoccatModel outOrIn = new DoccatModel(modelThatDetectsOutsideOrInside);
DoccatModel cricketMode = new DoccatModel(modelThatDetectsPlayingOrWatchingCricket);
String stringToDetectClassOf = "Some dude is playing cricket outside, he sucks";
// Classify the same text with both models and stitch the best categories together.
// (With real OpenNLP you would wrap each DoccatModel in a DocumentCategorizerME
// and call categorize() / getBestCategory().)
String outOrInCat = outOrIn.classify(stringToDetectClassOf);
String cricketModeCat = cricketMode.classify(stringToDetectClassOf);
String best = outOrInCat + " " + cricketModeCat;
you get the point I think.
Also some other random thoughts:
- Use a text index to explore the amount of data you get back to figure out how to break up the categories.
- You want a few hundred examples for each model
Let me know if you want some code examples using OpenNLP, if you are doing this in Java.
I am trying to solve the following problem: given a text file containing a bunch of biological information, find the one gene which is {up/down}regulated. For this I have many such files (60K) and have annotated some of them (1,000) with which gene is {up/down}regulated.
Conditions -
Many sentences in the file mention some gene name, and some of them also have neighboring text that can help one decide whether this is indeed the gene being modulated.
Some files have NO gene modulated at all, but they still contain gene mentions.
Given this, I wanted to ask (having absolutely no background in ML): what sequence learning algorithm/tool should I use that can take my annotated (training) data (probably after converting the text to vectors somehow!) and build a good model on which I can then test more files?
Example data -
Title: Assessment of Thermotolerance in preshocked hsp70(-/-) and
(+/+) cells
Organism: Mus musculus
Experiment type: Expression profiling by array
Summary: From preliminary experiments, HSP70 deficient MEF cells display moderate thermotolerance to a severe heatshock of 45.5 degrees after a mild preshock at 43 degrees, even in the absence of hsp70 protein. We would like to determine which genes in these cells are being activated to account for this thermotolerance. AQP has also been reported to be important.
Keywords: thermal stress, heat shock response, knockout, cell culture, hsp70
Overall design: Two cell lines are analyzed - hsp70 knockout and hsp70 rescue cells. 6 microarrays from the (-/-)knockout cells are analyzed (3 Pretreated vs 3 unheated controls). For the (+/+) rescue cells, 4 microarrays are used (2 pretreated and 2 unheated controls). Cells were plated at 3k/well in a 96 well plate, covered with a gas permeable sealer and heat shocked at 43degrees for 30 minutes at the 20 hr time point. The RNA was harvested at 3hrs after heat treatment
Here my main gene is hsp70, and it is down-regulated (deducible from hsp70(-/-) or "HSP70 deficient"). Many other gene names, such as AQP, are also mentioned.
There could be another file with no gene modified at all. In fact, more files have no actual gene modulation than those that do, and all of them contain gene name mentions.
Any ideas would be great!
If you have no background in ML, I suggest buying a product like this one, this one or this one. These products were in development for decades, with team budgets in the millions.
What you are trying to do is not that simple. For example, a lot of papers contain negative statements, first citing the original statement from another paper and then negating it. In your example, how are you going to handle this:
AQP has also been reported to be important by Doe et al. However, this study suggest that this might not be the case.
Also, if you are looking into a large corpus of biomedical research papers, or for that matter any corpus of research papers, you will find tons of papers that suggest something, for example that a gene is or is not up-regulated, and then one paper published in Cell saying that all previous research was mistaken.
To make matters worse, gene/protein names are not that stable. Besides a few famous ones like P53, there are a bunch of run-of-the-mill ones that are initially thought to be one gene but later turn out to be two different things. When this happens, there are two ways the community handles it: either both genes get new names (usually with some designator at the end), or, if the split is uneven, the larger class retains the original name and the second one gets a new name. To compound the problem, not all researchers get the memo instantly after such a split, so there is still a stream of publications using the old name.
These are just two simple problems; there are hundreds of them.
If you are doing this for personal enrichment, here are some suggestions:
Build a language model on biomedical papers. Existing language models are usually built from news-wire sources or from social media data. All three kinds of corpora claim to be written in English, but in reality they are three different languages with their own grammar and vocabulary.
Look into things like embeddings and word2vec (a rough gensim sketch follows this list).
Look into Kaggle competitions; this is a somewhat popular topic there.
Subscribe to KDD and BIBM, or find their proceedings in a nearby library. There are hundreds of papers on this subject.
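For the embeddings/word2vec point above, here is a rough gensim sketch (gensim is assumed to be installed, and the two token lists below only stand in for a real tokenized corpus of biomedical abstracts):

# Minimal word2vec sketch with gensim; the "corpus" is a stand-in for
# tokenized sentences from biomedical papers.
from gensim.models import Word2Vec

corpus = [
    ["hsp70", "deficient", "cells", "display", "thermotolerance"],
    ["aqp", "reported", "to", "be", "important", "for", "heat", "shock", "response"],
]

# vector_size/epochs are the gensim 4.x parameter names (older versions use size/iter).
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, epochs=50)

# Words that occur in similar contexts end up close together in the embedding space.
print(model.wv.most_similar("hsp70", topn=3))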
We want to identify the address fields in a document. To identify the address fields, we converted the document to OCR output using Tesseract. From the Tesseract output we want to check whether a string contains an address field or not. What is the right strategy to solve this problem?
It is not possible to solve this problem using regex, because address fields differ across documents and countries.
We tried NLTK for classifying the words, but it does not work well for address fields.
Required output
I am staying at 234 23 Philadelphia - Contains address field <234 23 Philadelphia>
I am looking for a place to stay - Does not contain an address
Please provide your suggestions to solve this problem.
As in many ML problems, there are multiple possible solutions, and the important part (and the one that commonly has the greater impact) is not which algorithm or model you use, but feature engineering, data preprocessing and standardization, and things like that. The first solution that comes to my mind (and it's just an idea, I would test it and see how it performs) is:
Get your training set examples and list the N most commonly used words across all examples (that's your vocabulary). Every word in this list is represented by a number (its list index).
Transform your training examples: read every training example and change its representation, replacing every word by its number in the vocabulary.
Finally, for every training example, create a feature vector of the same size as the vocabulary, where each entry is 0 (the corresponding word does not appear in the example) or 1 (it does), or the count of how many times the word appears (again, this is feature engineering).
Train multiple classifiers, varying algorithms, parameters, training set sizes, etc., and do cross-validation to choose your best model.
And from there keep the standard ML workflow...
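To make that concrete, here is a rough scikit-learn sketch of the same workflow (scikit-learn is only one option, and the four example strings with their labels are made-up placeholders):

# Bag-of-words + classifier + cross-validation sketch (assumes scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I am staying at 234 23 Philadelphia",
         "I am looking for a place to stay",
         "Our office is at 10 Downing Street London",
         "Please call me tomorrow morning"]
labels = [1, 0, 1, 0]          # 1 = contains an address, 0 = does not

# CountVectorizer builds the "N most used words" vocabulary and the count
# feature vectors described in the steps above in a single step.
model = make_pipeline(CountVectorizer(), MultinomialNB())

# Cross-validation to compare models/parameters (only 2 folds because the toy set is tiny).
print(cross_val_score(model, texts, labels, cv=2))

model.fit(texts, labels)
print(model.predict(["I am staying at 234 23 Philadelphia"]))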
If you are interested in just checking YES or NO, and not in extracting the complete address, one simple solution can be NER (named entity recognition).
You can check whether the text contains a location or not.
For example:
import nltk

# Requires the NLTK data packages "punkt", "averaged_perceptron_tagger",
# "maxent_ne_chunker" and "words" (install them with nltk.download(...)).
def check_location(text):
    # ne_chunk returns a tree; named-entity chunks are subtrees with a label.
    for chunk in nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text))):
        if hasattr(chunk, "label"):
            if chunk.label() == "GPE" or chunk.label() == "GSP":
                return "True"
    return "False"

text = "I am staying at 234 23 Philadelphia."
print(text + " - " + check_location(text))

text = "I am looking for a place to stay."
print(text + " - " + check_location(text))
Output:
# I am staying at 234 23 Philadelphia. - True
# I am looking for a place to stay. - False
If you want to extract the complete address as well, you will need to train your own model.
You can check: NER with NLTK, CRF++.
You're right. Using regex to find an address in a string is messy.
There are APIs that will attempt to extract addresses for you. These APIs are not always guaranteed to extract addresses from strings, but they will do their best. One example of a street address extraction API is from SmartyStreets. Documentation here and demo here.
Something to consider is that even your example (I am staying at 234 23 Philadelphia) doesn't contain a full address. It's missing a state or ZIP code field. This makes it very difficult to programmatically determine whether there is an address. Once a state or ZIP code is added to that sample string (I am staying at 234 23 Philadelphia PA), it becomes much easier to programmatically determine whether the string contains an address.
Disclaimer: I work for SmartyStreets
A better method for this task could be as follows:
Train your own custom NER model (extending spaCy's pre-trained model, or building your own CRF++ / CRF-biLSTM model if you have annotated data), or use pre-trained models like spaCy's large model, geopandas, etc.
Define a weighted score mechanism based on your problem statement.
For example, let's assume every address block has 3 important components: a street address, a telephone number and an email id.
Text that has all three of them would get a score of 33.33% + 33.33% + 33.33% = 100%.
To identify whether it's an address field or not, you can look at the percentage of spaCy's location tags (GPE, FAC, LOC, etc.) out of the total tokens in the text, which gives a good estimate of how many location tokens are present. Then run a regex for postal codes, and match the found city names against the 3-4 words just before the found postal code; if there's an overlap, you have correctly identified a postal code and hence an address field (got your 33.33% score!).
For telephone numbers, certain checks and regexes could do it, but an important criterion would be to perform these phone checks only if an address field was found in the preceding text.
For emails/web addresses you could again perform basic regex checks, and finally add all three scores into a cumulative value.
An ideal address would get a score of 100, while one with missing fields will yield 66%, etc. The rest of the text would get a score of 0.
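As a simplified illustration of that scoring idea (not the exact scheme above: the spaCy model name "en_core_web_sm", the regexes and the thresholds are all assumptions you would replace or tune):

import re
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed small English model

def address_score(text):
    doc = nlp(text)
    score = 0.0

    # ~33%: share of location-like entity tokens (GPE/LOC/FAC) among all tokens.
    loc_tokens = sum(len(ent) for ent in doc.ents if ent.label_ in {"GPE", "LOC", "FAC"})
    if len(doc) and loc_tokens / len(doc) > 0.05:      # threshold is arbitrary
        score += 33.33

    # ~33%: a phone-number-like pattern (very rough regex).
    if re.search(r"\+?\d[\d\s().-]{7,}\d", text):
        score += 33.33

    # ~33%: an email address (again a rough regex).
    if re.search(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", text):
        score += 33.33

    return score

print(address_score("Contact us at 234 Market St, Philadelphia PA 19106, "
                    "+1 215 555 0100, info@example.com"))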
Hope it helped! :)
Why do you say regular expressions won't work?
Basically, define all the different forms of address you might encounter in the form of regular expressions. Then, just match the expressions.
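For what it's worth, a tiny Python illustration of that approach; the single pattern below only covers a narrow US-style "number + street name + suffix" form and is nowhere near exhaustive:

import re

# Illustrative only: one pattern for a "number + street name + suffix" address.
STREET = re.compile(
    r"\b\d{1,5}\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\s+"
    r"(?:Street|St|Avenue|Ave|Road|Rd|Boulevard|Blvd|Lane|Ln|Drive|Dr)\b")

for text in ["Our office is at 10 Downing Street",
             "I am looking for a place to stay"]:
    match = STREET.search(text)
    print(text, "->", match.group(0) if match else "no address found")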
I have a small question which has been baffling me for a while. I have a dataset with interesting features, but some of them are dimensionless quantities; I've tried using z-scores on them, but that made things worse. These are:
Timestamps (like YYYYMMDDHHMMSSMis); I am currently taking the last 9 characters of these.
User IDs (in a hash-like form); how do I extract meaning from them?
IP addresses (you know what those are); I only extract the first 3 characters.
City (has an ID like 1, 15, 72); how do I extract meaning from this?
Region (same as city); should I extract meaning from this or just leave it out?
The rest of the features are prices, widths and heights, which I understand. Any help or insight would be much appreciated. Thank you.
Timestamps can be transformed into Unix timestamps, which are reasonable natural numbers.
User IDs/Cities/Regions are nominal values, which have to be encoded somehow. The most common approach is to create as many "dummy" dimensions as there are possible values. So if you have 100 cities, you create 100 dimensions and put a "1" only in the one representing a particular city (and 0 in the others).
IPs should rather be removed, or reduced to some small grouping (based on the network part of the address, with the same nominal-to-dummy transformation as above).
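A small pandas sketch of those three transformations (the column names, the toy values and the first-octet IP grouping are only illustrative choices):

import pandas as pd

df = pd.DataFrame({
    "timestamp": ["20240131093000123", "20240201174500987"],   # YYYYMMDDHHMMSS + ms
    "city":      [1, 15],
    "region":    [3, 3],
    "ip":        ["192.168.1.10", "10.0.0.7"],
    "price":     [12.5, 30.0],
})

# 1) Timestamp string -> Unix timestamp (seconds since 1970-01-01).
ts = pd.to_datetime(df["timestamp"], format="%Y%m%d%H%M%S%f")
df["unix_ts"] = (ts - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")

# 2) Nominal city/region IDs -> one "dummy" 0/1 column per possible value.
df = pd.get_dummies(df, columns=["city", "region"])

# 3) IPs: either drop them, or reduce them to a coarse group (here the first
#    octet) and dummy-encode that group as well.
df["ip_group"] = df["ip"].str.split(".").str[0]
df = pd.get_dummies(df.drop(columns=["ip", "timestamp"]), columns=["ip_group"])

print(df.head())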