Prediction with LightSide does not work - machine-learning

I am working with the machine learning workbench LightSide for my MA thesis. I have successfully trained some models, and now I would like to use a trained model to predict new data. However, when I try to do so, the system stops after a few seconds with the pop-up message "prediction has been stopped" and no hint as to why. It happens with different data sets, as well as different algorithms used for training...
Has anyone encountered the same problem and found a solution for it?
Thank you for your help :)
edit: I tried to export the feature table to WEKA and train models there, but WEKA gets lost in an endless training loop. I assume it has to do with the built-in UNIGRAM feature I use from LightSide. But I am still no closer to predicting on new data...
edit II: LightSide throws an error saying that one feature is not part of the model, when in fact it is

If you have a huge feature space, then LightSIDE might end up saying the prediction has stopped. This happened to me when I tried to use a column (e.g. problem_name) that had too many distinct values as one of the features during the feature extraction phase. I ended up computing another column (e.g. problem_difficulty, derived from problem_name) that had far fewer values, and LightSIDE could then predict new data successfully.
Thanks to Dr. Rose for the solution.
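A minimal sketch of that workaround in pandas, assuming you prepare the CSV outside LightSIDE before importing it; the column names (problem_name, problem_difficulty) and the mapping are just the hypothetical examples from the answer:

```python
import pandas as pd

# Toy stand-in for the data that would be imported into LightSIDE;
# in practice this would come from your own CSV.
df = pd.DataFrame({
    "text": ["answer one", "answer two", "answer three"],
    "problem_name": ["quadratic_eq_01", "addition_03", "quadratic_eq_02"],
    "label": [0, 1, 0],
})

# problem_name has too many distinct values to be a useful feature,
# so map it to a coarser, lower-cardinality column.
difficulty_map = {
    "quadratic_eq_01": "hard",
    "quadratic_eq_02": "hard",
    "addition_03": "easy",
}
df["problem_difficulty"] = df["problem_name"].map(difficulty_map).fillna("medium")

# Drop the high-cardinality column before feature extraction in LightSIDE.
df = df.drop(columns=["problem_name"])
df.to_csv("training_data_recoded.csv", index=False)
```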

Related

How to test model on PascalVOC 2012 or COCO test sets when no annotations are provided for them?

I am new to the field of computer vision, so I apologise if the question is inappropriate in any way.
I have created a segmentation model using the PascalVOC 2012 data set, and so far I have only been able to test it on the train and val data sets. Now I want to test my model on the test set; however, it does not provide any annotations, so I am unsure what I could do to measure my model's performance on the test data.
I have noticed that other data sets, such as COCO, do not provide the annotations for the test data.
I am curious how researchers who have published papers on models trained on these data sets have tested them on the test data in such cases, and what I can do to do the same.
Thanks in advance for your help!
The main reason many of the major datasets do not release the test annotations is to prevent people from reporting unreliable results caused by overfitting to the test set.
For model selection and "informal" evaluation, you should split your training set into a train and a validation split, and evaluate on the latter while training only on the former.
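For example, a minimal split with scikit-learn; the arrays here are random placeholders standing in for your real samples and annotations:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for your real images/annotations.
X = np.random.rand(100, 32)             # 100 samples, 32 features each
y = np.random.randint(0, 21, size=100)  # hypothetical labels (21 VOC classes)

# Hold out 20% of the training data as a validation split.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)
# Train only on (X_train, y_train); evaluate informally on (X_val, y_val).
```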
So how is it that researchers report results on the test set in their papers?
Once you have a definitive model you want to evaluate, you can upload your results to an evaluation server; this way you can benchmark yourself against the state of the art without having explicit access to the test set.
Examples:
For the COCO dataset you can find the guidelines here on how to upload your results (for each task).
For the CityScapes dataset you can submit your results through this page.
As a side note: VOC2012 is pretty old, so you may also be able to find the test set if you really need it. Take a look at this mirror by Joseph Redmon, for example.

Machine learning where labelling of training data might not be 100% accurate

I have a dataset consisting of people who have diabetes and people who do not. Using this data, I want to train a model that calculates a risk probability for people with unknown diabetes status. I know that the majority of people who have not been diagnosed with diabetes in the training data do not have diabetes, but it is likely that some of them have undiagnosed diabetes.
This appears to present a catch-22. I want to identify people who are at risk or who potentially have undiagnosed diabetes, yet I know that some of the people in my training dataset are incorrectly labelled as not having diabetes simply because they have not yet been diagnosed. Has anyone encountered such a problem? Can one still proceed on the basis that there may be some incorrectly labelled data, if it only accounts for a small percentage of the data?
There might be several approaches to solving your problem.
First - it might not be a problem after all. If the mislabeled data accounts for only a small part of your training set, it might not matter. In fact, there are cases where adding mislabeled data or just random noise improves the robustness and generalization power of a classifier.
Second - you might want to train the classifier on the training set and then inspect the data points for which the classifier gave an "incorrect" classification. It is possible that the classifier was actually right and is pointing you to incorrectly labeled data. These points can subsequently be checked manually, if that is feasible.
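A rough sketch of that second idea, assuming a simple scikit-learn classifier and tabular features (the data here is random and only illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: X is the feature matrix, y the (possibly noisy) 0/1 labels.
X = np.random.rand(500, 10)
y = np.random.randint(0, 2, size=500)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Training points where the classifier disagrees with the given label,
# and does so confidently, are candidates for manual review.
proba = clf.predict_proba(X)[:, 1]
pred = (proba >= 0.5).astype(int)
suspect = np.where((pred != y) & (np.abs(proba - 0.5) > 0.4))[0]
print("Indices to review for possible mislabeling:", suspect)
```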
Third - you can filter the data up front using methods like consensus filters. This article might be a good starting point for your research on the topic: Identifying Mislabeled Training Data - C.E. Brodley and M.A. Friedl.

CIFAR10 example running time

I have been learning ML using TensorFlow for a few weeks, following the tutorials given on the TensorFlow website (here). I started training the model, and it has been running on a system with the following specifications (the snapshot was taken before training started, so it shows minimal usage).
It has completed more than 200,000 steps. For how long should it keep running, or is there anything I am missing here?
Also, a similar question was asked on the forum here. I could not find any reference on the TensorFlow website that says you have to terminate training yourself once you reach the desired loss. Even if that is the case, how do I determine the loss value at which I can stop training?
There is no fixed loss value at which you can say you have the best model. It depends on how simple or complex the training data is: sometimes 200k steps might be more than enough, and sometimes not. Too many iterations can lead to over-fitting, and too few to under-fitting. What you can do is use a validation and test set to evaluate the model and decide when to stop.
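One common way to make that stopping decision concrete is early stopping on a validation set. A hedged sketch with tf.keras follows; the CIFAR-10 tutorial itself uses a different, lower-level training loop and a different network, so this only illustrates the idea:

```python
import tensorflow as tf

# Small illustrative model on CIFAR-10; the tutorial's actual network differs.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop when validation loss has not improved for 3 epochs,
# instead of waiting for a fixed step count or a magic loss value.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True
)
model.fit(x_train, y_train, validation_split=0.1,
          epochs=50, callbacks=[early_stop])
```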

Incorporating user feedback in a ML model

I have developed an ML model for a binary (0/1) NLP classification task and deployed it in a production environment. The model's prediction is displayed to users, and the users have the option to give feedback (whether the prediction was right or wrong).
How can I continuously incorporate this feedback into my model? From a UX standpoint you don't want a user to correct/teach the system more than two or three times for a specific input; the system should learn fast, i.e. the feedback should be incorporated "fast". (Google's Priority Inbox does this in a seamless way.)
How does one build this "feedback loop" through which my system can improve? I have searched a lot on the net but could not find relevant material; any pointers would be of great help.
Please don't say "retrain the model from scratch, including the new data points". That is surely not how Google and Facebook build their smart systems.
To further explain my question - think of Google's spam detector, their Priority Inbox, or their recent "smart replies" feature. It is well known that they have the ability to learn from / incorporate user feedback quickly.
Such a system incorporates the user feedback fast (i.e. the user has to teach the system the correct output at most 2-3 times per data point before it starts giving the correct output for that data point), AND it maintains its old learnings, so it does not start giving wrong outputs on older data points (where it was giving the right output earlier) while incorporating the learning from the new data point.
I have not found any blog/literature/discussion that explains in detail how to build such a "feedback loop" in ML systems.
I hope my question is a little clearer now.
Update: Some related questions I found are:
Does the SVM in sklearn support incremental (online) learning?
https://datascience.stackexchange.com/questions/1073/libraries-for-online-machine-learning
http://mlwave.com/predicting-click-through-rates-with-online-machine-learning/
https://en.wikipedia.org/wiki/Concept_drift
Update: I still don't have a concrete answer, but such a recipe does exist. Read the section "Learning from the feedback" in the blog post Machine Learning != Learning Machine, where Jean talks about "adding a feedback ingestion loop to the machine". The same idea appears here, here, and here.
There could be a couple of ways to do this:
1) You can use the feedback you get from the user to train only the last layer of your model, keeping the weights of all other layers intact. Intuitively, in the case of a CNN for example, this means you keep extracting features with your model but slightly adjust the classifier to account for the peculiarities of your specific user (see the sketch after this list).
2) Another way is to have a global model (trained on your large training set) and a simple, user-specific logistic regression. For the final prediction, you combine the results of the two. See this paper by Google on how they do it for their Priority Inbox.
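A rough sketch of option 1), assuming a Keras model; the layer sizes, the 300-dimensional feature vectors, and the feedback batch are all placeholders, not anything prescribed by the question:

```python
import numpy as np
import tensorflow as tf

# base_model stands in for your already-trained classifier.
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(300,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid", name="classifier_head"),
])

# Freeze everything except the final classification layer.
for layer in base_model.layers[:-1]:
    layer.trainable = False

base_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="binary_crossentropy")

# Placeholder batch of user-corrected examples; only the last layer is updated.
feedback_X = np.random.rand(8, 300)
feedback_y = np.random.randint(0, 2, 8)
base_model.fit(feedback_X, feedback_y, epochs=3, batch_size=8, verbose=0)
```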
Build a simple, light model (or models) that can be updated per piece of feedback. Online machine learning gives a number of candidates for this.
Most good online classifiers are linear, in which case you can have a couple of them and achieve non-linearity by combining them via a small, shallow neural net.
https://stats.stackexchange.com/questions/126546/nonlinear-dynamic-online-classification-looking-for-an-algorithm
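A minimal sketch of the online-learning route with scikit-learn; the feature vectors here are random placeholders, whereas in practice they would come from your NLP pipeline:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Linear model that supports incremental updates via partial_fit.
clf = SGDClassifier()

# Initial training on the existing labelled data (placeholders).
X_init = np.random.rand(1000, 50)
y_init = np.random.randint(0, 2, 1000)
clf.partial_fit(X_init, y_init, classes=[0, 1])

def incorporate_feedback(x_vector, correct_label):
    """Update the model with one user-corrected example."""
    clf.partial_fit(x_vector.reshape(1, -1), [correct_label])

# Each piece of user feedback becomes a tiny incremental update.
incorporate_feedback(np.random.rand(50), 1)
```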

Use feedback or reinforcement in machine learning?

I am trying to solve a classification problem. Many classical approaches seem to follow a similar paradigm: train a model on some training set and then use it to predict the class labels of new instances.
I am wondering if it is possible to introduce some feedback mechanism into the paradigm. In control theory, introducing a feedback loop is an effective way to improve system performance.
Currently a straightforward approach on my mind is this: first we start with an initial set of instances and train a model on them. Then, each time the model makes a wrong prediction, we add the wrongly predicted instance to the training set. This is different from blindly enlarging the training set because it is more targeted. It can be seen as a kind of negative feedback in the language of control theory.
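A small sketch of that loop, assuming a scikit-learn classifier and a stream of new instances whose true labels eventually become known; all the data here is random and only illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder pools: a small labelled seed set and a stream of new instances.
X_train = np.random.rand(100, 20)
y_train = np.random.randint(0, 2, 100)
X_stream = np.random.rand(500, 20)
y_stream = np.random.randint(0, 2, 500)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for x, y_true in zip(X_stream, y_stream):
    if model.predict(x.reshape(1, -1))[0] != y_true:
        # Only the instances the model got wrong are fed back into training.
        X_train = np.vstack([X_train, x])
        y_train = np.append(y_train, y_true)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```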
Is there any research going on with the feedback approach? Could anyone shed some light?
There are two areas of research that spring to mind.
The first is Reinforcement Learning. This is an online learning paradigm that allows you to get feedback and update your policy (in this instance, your classifier) as you observe the results.
The second is active learning, where the classifier gets to select examples from a pool of unlabelled examples to have them labelled. The key is to have the classifier choose the examples for labelling that best improve its accuracy, by picking the examples that are most difficult under the current classifier hypothesis.
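A minimal sketch of that idea via uncertainty sampling, assuming a probabilistic scikit-learn classifier; the labelled seed set and unlabelled pool are random placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Small labelled seed set plus a pool of unlabelled examples (placeholders).
X_labelled = np.random.rand(50, 10)
y_labelled = np.random.randint(0, 2, 50)
X_pool = np.random.rand(1000, 10)

clf = LogisticRegression(max_iter=1000).fit(X_labelled, y_labelled)

# Uncertainty sampling: pick the pool examples whose predicted probability
# is closest to 0.5 under the current hypothesis and send them for labelling.
proba = clf.predict_proba(X_pool)[:, 1]
uncertainty = -np.abs(proba - 0.5)
query_indices = np.argsort(uncertainty)[-10:]
print("Ask an annotator to label pool items:", query_indices)
```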
I have used such feedback in every machine-learning project I have worked on. It allows you to train on less data (so training is faster) than by selecting data randomly, and the model accuracy also improves faster than with randomly selected training data. I work on image processing (computer vision) data, so one other kind of selection I do is to add clustered false (wrong) examples instead of adding every single false example. This is because I assume there will always be some failures, so my criterion for data worth adding is failures that cluster in the same area of the image.
I saw this paper some time ago, which seems to be what you are looking for.
They basically model classification problems as Markov decision processes and solve them using the ACLA algorithm. The paper is much more detailed than what I could write here, but ultimately they get results that outperform a multilayer perceptron, so this looks like a pretty efficient method.
