More samples in AutoML did not yield better results - machine-learning

I uploaded more samples into AutoML, but this did not yield better results. How can I improve the model's performance?

There are many factors that affect model performance, and more training data does not necessarily lead to better results. Make sure the number of training documents per label meets the minimum required on the data import page (a quick way to check the per-label counts is sketched after this list). If you're not happy with the quality levels, you can go back to earlier steps to improve the quality:
Consider adding more documents to any labels with low quality.
You may need to add different types of documents. For example, longer or shorter documents, documents by different authors that use different wording or style.
You can clean up labels.
Consider removing labels altogether if you don't have enough training documents.
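A quick way to sanity-check the per-label counts before re-importing is sketched below; the labels, documents, and threshold are placeholders, so substitute the actual minimum shown on the data import page.

```python
from collections import Counter

# Hypothetical (label, document) training pairs; replace with your own export.
training_docs = [
    ("invoice", "Please find attached invoice #123 ..."),
    ("invoice", "Your invoice for March is overdue ..."),
    ("complaint", "The product arrived damaged ..."),
]

MIN_DOCS_PER_LABEL = 10  # use the minimum shown on the data import page

counts = Counter(label for label, _ in training_docs)
for label, n in sorted(counts.items()):
    status = "OK" if n >= MIN_DOCS_PER_LABEL else "needs more documents"
    print(f"{label}: {n} documents ({status})")
```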

Related

In ML, using RNN for an NLP project, is it necessary for DATA redundancy?

Is it necessary to repeat similar template data... The meaning and context are the same, but the smaller details vary. If I remove these redundancies, the dataset is very small (in the hundreds), but if data like these are included, it easily crosses a thousand. Which is the right approach?
SAMPLE DATA
This is actually not a question suited for Stack Overflow, but I'll answer anyway:
You have to think about how the emails (or whatever your data is) will look in real-life usage: do you want to detect any kind of spam, or just spam similar to what your sample data shows? If the former, your dataset is simply not suited for this problem, since it does not contain enough varied samples. When you think about it, each of the sentences is essentially the same, because the company name isn't really valuable information and will probably not be learned as a feature by your RNN. So the information is almost the same. And since every input sample runs through the network multiple times (once each epoch), it doesn't really help to have almost the same sample multiple times.
So you shouldn't let one kind of almost identical sample dominate your dataset.
But as I said: if you primarily want to filter out "Dear customer, we wish you a ..." you can try it with this dataset, although you wouldn't really need an RNN to detect that. If you want to detect all kinds of spam, you should look for a new dataset, since ~100 unique samples are not enough. I hope that was helpful!
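If you do decide to thin out the near-identical templates, one rough approach is to deduplicate on a normalised version of the text. The sketch below masks company names with a guessed regex; what actually varies in your data may be different.

```python
import re

emails = [
    "Dear customer, we wish you a happy new year from ACME Corp.",
    "Dear customer, we wish you a happy new year from Globex Inc.",
    "Your parcel could not be delivered, click here to reschedule.",
]

def normalise(text: str) -> str:
    # Mask capitalised names followed by a company suffix, then lowercase,
    # so templates differing only in the company name collapse to one key.
    text = re.sub(r"\b[A-Z][a-zA-Z]*\s+(?:Corp|Inc|Ltd|GmbH)\.?", "<company>", text)
    return re.sub(r"\s+", " ", text.lower()).strip()

seen = set()
deduplicated = []
for mail in emails:
    key = normalise(mail)
    if key not in seen:
        seen.add(key)
        deduplicated.append(mail)

print(len(deduplicated), "of", len(emails), "emails kept after deduplication")
```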

Practical advice on dealing with very long inputs using LSTM model?

I built a character-level LSTM model on text data, but ultimately I'm looking to apply this model to very long text documents (such as a novel), where it's important to understand contextual information, such as where in the novel the text appears.
For these large-scale NLP tasks, is the data usually cut into smaller pieces and concatenated with metadata - such as position within the document, detected topic, etc. - to be fed into the model? Or are there more elegant techniques?
Personally, I have not used LSTMs at the level of depth you are trying to attain, but I do have some suggestions.
One solution to your problem, which you mentioned above, could be to simply split your document into smaller pieces and analyze them that way. You'll probably have to be creative.
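For the splitting approach, a minimal sketch of cutting a long text into fixed-size character chunks and attaching a simple position feature might look like this (the chunk size and the position encoding are arbitrary choices, not recommendations):

```python
def chunk_with_position(text: str, chunk_size: int = 1000):
    """Yield (chunk, relative_position) pairs, where relative_position is the
    chunk's start offset divided by the document length (0.0 .. 1.0)."""
    length = len(text)
    for start in range(0, length, chunk_size):
        yield text[start:start + chunk_size], start / length

novel = "Call me Ishmael. " * 200  # stand-in for a long document
for chunk, pos in chunk_with_position(novel, chunk_size=500):
    # pos can be concatenated to the model input as position metadata
    print(round(pos, 2), len(chunk))
```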
Another solution that I think might be of interest to you is to use a Tree LSTM model in order to get the level of depth you want. Here's the link to the paper. Using the tree model, you could feed in individual characters or words at the lowest level and then feed them upward to higher levels of abstraction. Again, I am not completely familiar with the model, so don't take my word for it, but it could be a possible solution.
Adding a few more ideas, on top of the answer posted by bhaskar, that are used to handle this problem.
You can use an attention mechanism, which is designed to deal with long-term dependencies. For a long sequence the network tends to forget information, and its next prediction may not depend on all of the sequence information held in its cell, so the attention mechanism helps it find reasonable weights for the characters it depends on. For more info you can check this link.
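To make the attention idea concrete, here is a bare-bones dot-product attention in plain NumPy; the shapes and the use of the last hidden state as the query are illustrative assumptions, not a full implementation:

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """query: (d,), keys/values: (seq_len, d). Returns the weighted summary
    of `values` plus the attention weights over the sequence."""
    scores = keys @ query / np.sqrt(query.shape[0])   # (seq_len,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over positions
    return weights @ values, weights

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(120, 64))   # e.g. one LSTM state per character
context, weights = dot_product_attention(hidden_states[-1], hidden_states, hidden_states)
print(weights.shape, context.shape)          # (120,), (64,)
```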
There is a lot of research on this problem; this is a very recent paper on it.
You can also break up the sequence and use a seq2seq model, which encodes the features into a low-dimensional space that the decoder then expands again. This is a short article on this.
My personal advice is to break up the sequence and then train on it, because a sliding window over the complete sequence is usually able to capture the correlation between segments.
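For the break-the-sequence route, a sliding-window split over the character sequence might look like this (window and stride values are arbitrary):

```python
def sliding_windows(sequence, window=200, stride=100):
    """Yield overlapping windows so neighbouring chunks share context."""
    for start in range(0, max(len(sequence) - window + 1, 1), stride):
        yield sequence[start:start + window]

text = "a very long document " * 100
windows = list(sliding_windows(text, window=200, stride=100))
print(len(windows), "overlapping training segments")
```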

What is the amount of training data needed for additional Named Entity Recognition with spaCy?

I'm using the spaCy module to find named entities in input text. I am training the model to predict medical terms. I currently have access to 2 million medical notes, and I wrote a program that annotates them.
I cross-reference the medical notes against a pre-defined list of ~90 thousand terms, which is used for the annotation task. At the current pace, it takes about an hour and a half to annotate 10,000 notes. The way annotation currently works, about 90% of the notes end up with no annotations (I'm working on getting a better list of cross-reference terms), so I take the ~1,000 annotated notes and train the model on these.
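(For context, dictionary-style annotation like the one described above is often done with something like spaCy's PhraseMatcher; the term list and label below are invented for illustration, not the actual ~90k-term list.)

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
# Hypothetical term list; in practice this would be the full dictionary.
terms = ["tachycardia", "myocardial infarction", "hypertension"]

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("MEDICAL_TERM", [nlp.make_doc(t) for t in terms])

note = nlp.make_doc("Patient presents with tachycardia and a history of hypertension.")
entities = [(note[s:e].start_char, note[s:e].end_char, "MEDICAL_TERM")
            for _, s, e in matcher(note)]
print(entities)  # character offsets usable as spaCy training annotations
```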
I have checked, and the model does respond to annotated terms it has seen before (for example, the term tachycardia has appeared in the annotations, and the model will sometimes pick it up when it shows up in new text).
This background might not be too relevant to my particular question, but I thought I would give a small bit of background to my current position.
I was wondering if anyone who has successfully trained a new entity in spaCy could give me some insight into their personal experience in the amount of training that was necessary to have at least somewhat reliable entity recognition.
Thanks!
I trained the named entity recognizer for Greek from scratch because no data was available, so I'll try to give you a summary of the things I noticed in my case.
I trained the NER with Prodigy annotation tool.
The answer to your question from my personal experience depends on the following things:
The number of labels you want your recognizer to be able to predict. It makes sense that as the number of labels (possible outputs) increases, it gets harder for your neural network to distinguish them, so the amount of data you need increases.
How different the labels are. For example, the GPE and LOC tags are quite close and often used in the same context, so the neural network confused them a lot at the beginning. It is advisable to provide more data for labels that are close to each other.
The way of training. Pretty much there are two possibilities here:
Fully annotated sentences. This means that you tell your neural network that there are no missing tags in your annotations (a minimal example of this setup is sketched after this list).
Partially annotated sentences. This means that you tell your neural network that your annotations are correct, but that some tags may be missing. This makes it harder for the network to rely on your data, and for this reason more data needs to be provided.
Hyper-parameters. It is really important to fine tune your network in order to get the maximum out of your dataset.
The quality of the dataset. If the dataset is representative of the things you are going to ask your network to predict, less data is required. However, if you are building a more general model (one that would answer correctly in different contexts), more data is needed.
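As a reference point, a minimal fully annotated training loop in spaCy v3 might look like the sketch below; the entity label, sentences, and hyper-parameters are placeholders, not the setup used for the Greek model.

```python
import random
import spacy
from spacy.training import Example

# Fully annotated examples: every entity in each sentence is labelled.
TRAIN_DATA = [
    ("The patient developed tachycardia overnight.",
     {"entities": [(22, 33, "CONDITION")]}),
    ("No sign of hypertension was recorded.",
     {"entities": [(11, 23, "CONDITION")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
ner.add_label("CONDITION")

optimizer = nlp.initialize()
for epoch in range(20):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer, drop=0.3, losses=losses)
    print(epoch, losses)

# With this little data the model may or may not pick up the term.
print(nlp("She has a history of tachycardia.").ents)
```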
For the Greek model, I tried to predict among 6 labels that were distinct enough, I provided around 2,000 fully annotated sentences, and I spent a great amount of time fine-tuning.
Results: 70% F-measure, which is quite good for the complexity of the task.
Hope it helps!

How to build a good training data set for machine learning and predictions?

I have a school project to make a program that uses the Weka tools to make predictions on football (soccer) games.
Since the algorithms are already there (the J48 algorithm), I just need the data. I found a website that offers football game data for free and tried it in Weka, but the predictions were pretty bad, so I assume my data is not structured properly.
I need to extract the data from my source and format it another way in order to make new attributes and classes for my model. Does anyone know of a course/tutorial/guide on how to properly create your attributes and classes for machine learning predictions? Is there a standard that describes the best way of choosing the attributes of a data set for training a machine learning algorithm? What's the approach on this?
here's an example of the data that I have at the moment: http://www.football-data.co.uk/mmz4281/1516/E0.csv
and here is what the columns mean: http://www.football-data.co.uk/notes.txt
The problem may be that your data set is too small. Suppose you have ten variables and each has a range of 10 values: there are 10^10 possible configurations of these variables. It is unlikely your data set is that large, let alone covers all of the possible configurations. The trick is to narrow the variables down to the most relevant ones to avoid this large potential search space.
A second problem is that certain combinations of variables may be more significant than others.
The J48 algorithm attempts to find the most relevant variable using entropy at each level in the tree. Each path through the tree can be thought of as an AND condition: V1==a & V2==b ...
This covers the significance due to joint interactions. But what if the outcome is a result of A&B&C OR W&X&Y? The J48 algorithm will find only one of them, and it will be the one where the first variable selected has the most overall significance when considered alone.
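J48 is Weka's implementation of C4.5, so the exact splits will differ, but the "one path = one AND condition" behaviour can be seen with any decision tree learner. Here is a rough scikit-learn sketch on made-up match features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Made-up features: home form, away form, goal difference over the last 5 games.
X = rng.normal(size=(200, 3))
# Made-up target: 1 = home win, 0 = otherwise (purely for illustration).
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, criterion="entropy").fit(X, y)
# Each root-to-leaf path prints as a chain of AND conditions on the features.
print(export_text(tree, feature_names=["home_form", "away_form", "goal_diff"]))
```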
So, to answer your question, you need to not only find a training set which will cover the most common variable configurations in the "general" population but find an algorithm which will faithfully represent these training cases. Faithful meaning it will generally apply to unseen cases.
It's not an easy task. Many people and much money are involved in sports betting. If it were as easy as selecting the proper training set, you can be sure it would have been found by now.
EDIT:
It was asked in the comments how you find the proper algorithm. The answer is the same way you find a needle in a haystack. There is no set rule. You may be lucky and stumble across it, but in a large search space you won't ever know if you have. This is the same problem as finding the optimum point in a very convoluted search space.
A short-term answer is to
Think about what the algorithm can really accomplish. The J48 (and similar) algorithms are best suited for classification where the influence of the variables on the result are well known and follow a hierarchy. Flower classification is one example where it will likely excel.
Check the model against the training set. If it does poorly on the training set, then it will likely perform poorly on unseen data. In general, you should expect the model's performance on the training set to exceed its performance on unseen data.
The algorithm needs to be tested with data it has never seen. Testing against the training set, while a quick elimination test, will likely lead to overconfidence.
Reserve some of your data for testing; Weka provides a way to do this. The best-case scenario would be to build the model on all cases except one (the leave-one-out approach) and then see how the model performs on average across these held-out cases (a sketch of this follows below).
But this assumes the data at hand are not in some way biased.
A second pitfall is to let the test results bias the way you build the model. For example, trying different model parameters until you get an acceptable test response. With J48 it's not easy for this bias to creep in, but if it did, you would have just used your test set as an auxiliary training set.
Continue collecting more data and testing for as long as possible. Even after all of the above, you still won't know how useful the algorithm is unless you can observe its performance on future cases. When what appears to be a good model starts behaving poorly, it's time to go back to the drawing board.
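As a concrete version of the held-out and leave-one-out advice above, here is a scikit-learn sketch (Weka exposes equivalent cross-validation options); the features and outcomes are random placeholders:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))          # placeholder match features
y = rng.integers(0, 2, size=60)       # placeholder outcomes

model = DecisionTreeClassifier(max_depth=3)
# Leave-one-out: train on all matches except one, test on the one left out.
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print("mean leave-one-out accuracy:", scores.mean())
```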
Surprisingly, there are a large number of fields (mostly in the soft sciences) which fail to see the need to verify the model with future data. But this is a matter better discussed elsewhere.
This may not be the answer you are looking for but it is the way things are.
In summary,
1. The training data set should cover the 'significant' variable configurations.
2. You should verify the model against unseen data.
Identifying (1) and doing (2) are the tricky bits. There is no cut-and-dried recipe to follow.

Twitter data topical classification

So I have a data set which consists of tweets from various news organizations. I've loaded it into RapidMiner, tokenized it, and produced some n-grams of it. Now I want to be able to have RapidMiner automatically classify my data into various categories based on the topic of the tweets.
I'm pretty sure RapidMiner can do this, but according to the research I've done into it, I need a training data set to be able to show RapidMiner how I want things classified. So I need a training data set, though given the categories I wanted to classify things into, I might have to create my own.
So my questions are these:
1) Is there a publicly available training data set for Twitter data that focuses on the topic of the tweet, as opposed to sentiment analysis?
2) If there isn't one publicly available, how can I create my own? My idea was to go through the tweets themselves and associate the tokens and n-grams with the categories I want. One concern I have is that I won't be able to manually classify enough tweets to create a training data set comprehensive enough to get a good accuracy rate from the automatic classifier.
3) Any general advice for topical classification of text data would be great. This is the first time that I've done a project like this, and I'm sure there are things I could improve on. :)
There may be training corpora that work for you, but you need to say what your topics or categories are in order to identify one. The fact that this is Twitter may be relevant, but the data source is likely to matter much less for the classification accuracy you will achieve than the topic does. So if you take the infamous 20 newsgroups data set, it is likely to work on Twitter as well, but only if the categories you are after are the 20 categories from that data set. If you want to classify cats vs. dogs or Android vs. iPhone, you need to find a data set for that.
In most cases you will have to create initial labels manually, which is, as you say, a lot of work. One workaround might be to start with something simpler, like a keyword search, to create subsets of your tweets that you know deal with a particular category. Then you build the model on top of that and hope that it generalizes to identify the same categories even when the original keywords do not occur.
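A rough sketch of that keyword-bootstrapping idea, with invented keywords, categories, and tweets, might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

KEYWORDS = {
    "sports": ["match", "goal", "league"],
    "politics": ["election", "senate", "minister"],
}

tweets = [
    "Late goal decides the league match",
    "Senate votes on the new bill before the election",
    "Minister resigns after the vote",
    "Another match, another hat-trick",
]

# Weak labels: assign a category if one of its keywords occurs in the tweet.
labelled = []
for tweet in tweets:
    for category, words in KEYWORDS.items():
        if any(w in tweet.lower() for w in words):
            labelled.append((tweet, category))
            break

texts, labels = zip(*labelled)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Who will win the league this year?"]))
```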
Alternatively, depending on your application (and if you actually want to build an application), you may as well start with only a small data set and accept that classification will be poor at first. Then you generate classifications, show them to the users of your app, and collect some form of explicit or implicit feedback on the classification (e.g. users can flag tweets as incorrectly classified). This way you improve your training corpus and periodically update your model.
Finally, if you do not know what your topics are and you want RapidMiner to identify them, you may want to try clustering as opposed to classification. Just create a few clusters and look at the top words for each cluster. They may well be quite dissimilar and describe what the respective clusters are about.
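And a minimal version of the clustering route, inspecting the top terms per cluster (the cluster count and tweets are arbitrary):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "Breaking: earthquake hits the coastal region",
    "Stocks rally as markets react to earnings",
    "Championship final goes to extra time",
    "Central bank hints at interest rate cut",
    "Striker scores twice in cup final",
    "Aftershocks continue after the earthquake",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, centre in enumerate(kmeans.cluster_centers_):
    top = centre.argsort()[::-1][:3]   # highest-weighted terms per cluster
    print(f"cluster {i}:", ", ".join(terms[j] for j in top))
```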
I believe your third question may be a bit broad for Stack Overflow and is probably better answered by a textbook.
