Recursive feature elimination with cross validation

I don't know why, with an increasing number of features, the accuracy could decrease.
The following figure from the sklearn example shows that if we use more than the (top) 3 features, we not only fail to improve the accuracy, but actually decrease it.
Could anyone please explain or give some references?
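For context, a minimal sketch along the lines of the sklearn example (the exact dataset and estimator in the docs may differ):

    # Sketch of an RFECV run similar to the sklearn docs example.
    # RFECV recursively drops the weakest feature and scores each
    # feature count with cross-validation.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFECV
    from sklearn.model_selection import StratifiedKFold
    from sklearn.svm import SVC

    # Only 3 informative features plus redundant/noise ones, so
    # CV accuracy can peak early and then decline.
    X, y = make_classification(n_samples=500, n_features=25,
                               n_informative=3, n_redundant=2,
                               random_state=0)

    rfecv = RFECV(estimator=SVC(kernel="linear"), step=1,
                  cv=StratifiedKFold(5), scoring="accuracy")
    rfecv.fit(X, y)

    print("Optimal number of features:", rfecv.n_features_)
    # Mean CV accuracy per feature count (older sklearn versions
    # expose this as grid_scores_ instead of cv_results_).
    print(rfecv.cv_results_["mean_test_score"])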

Related

How to reduce false negatives in image anomaly detection?

I'm currently working on a quality inspection project and I need to develop a program that can detect irregular parts. The problem I'm facing is that I don't have many irregular samples (only seven against more than 3,000 regular ones). I tried CNNs, but due to the unbalanced number of samples the model detects everything as regular, so the approach I'm exploring is to use anomaly detection algorithms. I also tried autoencoders, but as the differences between regular and irregular parts are minimal, I couldn't get any good results.
So far, the approach that gave me the best results is Local Outlier Factor in combination with feature extractors (HOG). The only problem with this one is that even after tuning the algorithm's parameters it still gives me false positives (normal samples labeled as irregular), which for this application is not acceptable. Is there anything I can add to the process to eliminate the false positives, or can you recommend another approach? I'd really appreciate any help :)
Use the focal loss function since you have imbalanced data, or you can try data augmentation techniques as well.
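For reference, a minimal sketch of a binary focal loss in TensorFlow/Keras (following Lin et al., 2017; alpha and gamma are the usual tunable knobs):

    import tensorflow as tf

    def binary_focal_loss(gamma=2.0, alpha=0.25):
        """Focal loss (Lin et al., 2017) for a binary classifier.
        Down-weights easy examples so training focuses on the hard,
        rare class -- e.g. 7 irregular vs 3,000 regular samples."""
        def loss(y_true, y_pred):
            y_true = tf.cast(y_true, y_pred.dtype)
            eps = tf.keras.backend.epsilon()
            y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
            # p_t = predicted probability of the true class
            p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
            alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
            return -tf.reduce_mean(
                alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
        return loss

    # model.compile(optimizer="adam",
    #               loss=binary_focal_loss(gamma=2.0, alpha=0.25))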

Should I adapt loss weights for misclassified samples during epochs?

I am using FCN (Fully Convolutional Networks) and trying to do image segmentation. When training, there are some areas which are mislabeled, and further training doesn't help much to make them go away. I believe this is because the network learns some features which might not be completely correct ones, but because there are enough correctly classified examples, it is stuck in a local minimum and can't get out.
One solution I can think of is to train for an epoch, then validate the network on the training images, and then adjust the weights for the mismatched parts to penalize the mismatch more in the next epoch.
Intuitively, this makes sense to me, but I haven't found any writing on it. Is this a known technique? If yes, what is it called? If no, what am I missing (what are the downsides)?
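To make the proposal concrete, here is a rough, purely illustrative sketch of what I have in mind (PyTorch-style; model, images, labels, optimizer, and num_epochs are placeholders):

    import torch
    import torch.nn.functional as F

    # One loss weight per pixel; boosted wherever the net is still wrong.
    weights = torch.ones_like(labels, dtype=torch.float)  # (N, H, W)

    for epoch in range(num_epochs):
        logits = model(images)                             # (N, C, H, W)
        per_pixel = F.cross_entropy(logits, labels,
                                    reduction="none")      # (N, H, W)
        loss = (weights * per_pixel).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        with torch.no_grad():
            mismatched = logits.argmax(1) != labels  # still-wrong pixels
            weights[mismatched] *= 1.5               # penalize them more next epoch
            weights /= weights.mean()                # keep the loss scale stable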
It highly depends on your network structure. If you are using the original FCN, then due to the pooling operations the segmentation performance on the boundaries of your objects is degraded. There have been quite a few variants of the original FCN for image segmentation, although they didn't go the route you're proposing.
To name a couple of examples: one approach is to use a Conditional Random Field (CRF) on top of the FCN output to refine the segmentation. You may search for the relevant papers to get more of an idea. In some sense it is close to your idea, but the difference is that the CRF is separate from the network, acting as a post-processing step.
Another very interesting work is U-Net. It employs an idea similar to residual networks (ResNet), enabling high-resolution features from lower levels to be integrated into higher levels to achieve more accurate segmentation.
This is still a very active research area, so you may bring about the next breakthrough with your own idea. Who knows! Have fun!
First, if I understand correctly, you want your network to overfit your training set? That's generally something you don't want to happen, because it would mean that while training, your network has found some "rules" that enable it to get great results on your training set, but it hasn't been able to generalize, so when you give it new samples it will probably perform poorly. Moreover, you never mention any test set. Have you divided your dataset into training and test sets?
Secondly, to give you something to look into: the idea of penalizing more where you don't perform well made me think of something called "AdaBoost" (it might be unrelated). This short video might help you understand what it is:
https://www.youtube.com/watch?v=sjtSo-YWCjc
Hope it helps.
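If you want to try it quickly, a minimal scikit-learn sketch (illustrative data only):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)

    # AdaBoost re-weights training samples after each round so the next
    # weak learner focuses on the examples the ensemble currently gets
    # wrong -- the same "penalize mistakes more" intuition, applied
    # across learners rather than across epochs of one network.
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))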

How to improve classification accuracy for machine learning

I have used the extreme learning machine for classification and found that my classification accuracy is only around 70%, which led me to use an ensemble method: creating more classification models, so that test data is classified based on the majority vote of the models. However, this method only increases classification accuracy by a small margin. May I ask what other methods can be used to improve the classification accuracy of a 2-dimensional, linearly inseparable dataset?
Your question is very broad... There's no way to help you properly without knowing the real problem you are treating. But some general methods to improve classification accuracy are:
1 - Cross Validation: Split your training dataset into groups, always hold one group out for prediction, and rotate the groups on each run. Then you will know which data trains a more accurate model (see the sketch after this list).
2 - Cross Dataset: The same as cross validation, but using different datasets.
3 - Tuning your model: Basically, change the parameters you're using to train your classification model (I don't know which classification algorithm you're using, so it's hard to help more).
4 - Improve, or use (if you're not already using it), the normalization process: Discover which techniques (changing the geometry, colors, etc.) will provide more concise data for training.
5 - Understand the problem you're treating more deeply... Try to implement other methods to solve the same problem. There is almost always more than one way to solve a problem, and you may not be using the best approach.
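A minimal scikit-learn sketch of point 1, using the iris data purely for illustration:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # 5-fold cross-validation: the data is split into 5 groups; each
    # group takes a turn as the held-out set while the rest trains.
    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
    print("fold accuracies:", scores)
    print("mean accuracy:", scores.mean())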
Enhancing a model's performance can be challenging at times. I'm sure a lot of you would agree if you've found yourself stuck in a similar situation: you try all the strategies and algorithms you've learnt, yet you fail to improve the accuracy of your model, and you feel helpless and stuck. This is where many data scientists give up. Let's dig deeper and check out the proven ways to improve the accuracy of a model (a short tuning sketch follows the list):
Add more data
Treat missing and Outlier values
Feature Engineering
Feature Selection
Multiple algorithms
Algorithm Tuning
Ensemble methods
Cross Validation
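As an illustration of combining algorithm tuning with cross validation, a minimal scikit-learn sketch (the dataset and parameter grid are just examples):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Algorithm tuning + cross validation in one step: score each
    # parameter combination with 5-fold CV and keep the best one.
    param_grid = {"C": [0.1, 1, 10, 100],
                  "gamma": ["scale", 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                          cv=5, scoring="accuracy")
    search.fit(X, y)
    print("best params:", search.best_params_)
    print("best CV accuracy:", search.best_score_)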
If you feel the information is lacking, this link should help you learn more: https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/
Sorry if the information I've given is less than satisfactory.

Increase dimensionality for attributes using Weka?

I'm working on replicating results from a paper and when the authors are describing their setup for SVM they say this:
To increase the dimensionality of our feature vectors to be better
suited to SVMs, we expanded the feature space by taking the polynomial
combinations of degree less than or equal to 2, of all features. This
increased the number of features from 12 to 91.
How would you do this in the GUI version of Weka?
I really can't figure out what setting they changed to increase the number of attributes by 79. I've looked through the internet, the Weka documentation, and even just clicked around in the GUI, but I can't find any functionality that would do this.
Thank you for your help!
It seems the authors of the paper do not really understand how SVMs work. Simply train an SVM with a polynomial kernel of degree 2 and you will get the same expressive power.
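I can't point to a Weka GUI filter for this, but for reference, here is what both routes look like in scikit-learn; note that the polynomial combinations of degree <= 2 of 12 features, bias term included, give exactly 91 columns, matching the paper:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.svm import SVC

    X = np.random.rand(100, 12)          # 12 original features
    y = np.random.randint(0, 2, 100)     # dummy labels for illustration

    # Explicit expansion, as described in the paper: all polynomial
    # combinations of degree <= 2 (1 bias + 12 linear + 78 quadratic = 91).
    X_poly = PolynomialFeatures(degree=2).fit_transform(X)
    print(X_poly.shape)                  # (100, 91)

    # Equivalent expressive power without materializing the 91 columns:
    # a degree-2 polynomial kernel computes the same inner products implicitly.
    clf = SVC(kernel="poly", degree=2).fit(X, y)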

What should I look for in the analysis of the attached signals?

I'm looking to analyze and compare the following 'signals' (plots not reproduced here; an edit linked better renderings labeled "oscillations good" and "oscillations bad"):
What you see are plots of neuron activations from a type of artificial neural network plotted against time. Each line in the plot is a neuron's activation over time which can have a value between -1 and 1.
In the first plot, the activities are stable and consistent, while the second exemplifies more chaotic activity (for want of a better term): some kind of destructive interference seems to occur every so often.
Anyhow, I would like to do some kind of 'clever' analysis, but since signal analysis is really not my strong point, I thought I'd ask for some advice here...
EDIT: Let me clarify a bit. Ultimately, I would like to characterize the data. This could, for example, involve pinpointing correlations between the individual signals contained in each plot. I would like to measure 'regularity' or data invariance: in the above examples, the upper plot is more regular than the lower plot. I could therefore compute the variance of each signal and take that as a measure, but I was wondering whether some more comprehensive signal-processing technique would be better suited (I'm not sure). In fact, I'm not even sure signal processing is what I really want, now that I think about it. Perhaps some kind of wavelet or Fourier analysis...
For those interested, I am working on the computational modelling of worm locomotion.
You should consult some good books on nonlinear time series analysis. For instance, a measure of the regularity of your signal could be the Lyapunov spectrum. Another possibility would be entropy. If you are interested in the correlation between signals, you could use transfer entropy or Granger causality; for neurons it would also be good to look at measures of phase synchronization. The Bayesian tools could also be worth trying.
But, most importantly, you first need a proper question about what you really want to know. Once you've got that, it is far easier to pick the right tool.
And one final hint: look for tools outside the engineering community. Their tools are mostly linear, but you are dealing with a highly nonlinear system. Wavelets, FFT and the like are useful if you don't know anything about your signal and want another perspective on it, but they are not suited to your kind of problem.
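If you want a cheap, concrete starting point before diving into that literature, permutation entropy (Bandt & Pompe, 2002) is one simple entropy-based regularity measure; a minimal numpy sketch:

    import math
    import numpy as np

    def permutation_entropy(x, m=3, tau=1):
        """Normalized permutation entropy (Bandt & Pompe, 2002).
        Measures how evenly the ordinal patterns of m consecutive
        samples are distributed: near 0 for a very regular signal,
        near 1 for noise."""
        x = np.asarray(x)
        n = len(x) - (m - 1) * tau
        counts = {}
        for i in range(n):
            # ordinal pattern = ranking of the m samples in this window
            pattern = tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
            counts[pattern] = counts.get(pattern, 0) + 1
        probs = np.array(list(counts.values()), dtype=float) / n
        return -np.sum(probs * np.log(probs)) / math.log(math.factorial(m))

    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 20 * np.pi, 2000))   # stable oscillation
    irregular = regular + rng.normal(0, 0.5, 2000)       # noisy version
    print(permutation_entropy(regular))     # low  -> regular
    print(permutation_entropy(irregular))   # high -> irregular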
