How to perform the downsampling and upweighting technique? - machine-learning

I am referring to Google's machine learning Data Prep course, specifically this lecture https://developers.google.com/machine-learning/data-prep/construct/sampling-splitting/imbalanced-data about solving the class imbalance problem. The technique mentioned is to first downsample and then upweight. The lecture covers the theory, but I couldn't find its practical implementation. Can someone guide me?

Upweighting is done to keep the probabilities provided by probabilistic classifiers calibrated, so that the output of the predict_proba method can be directly interpreted as a confidence level.
Python implementations of the two calibration methods are provided here - https://scikit-learn.org/stable/auto_examples/calibration/plot_calibration.html#sphx-glr-auto-examples-calibration-plot-calibration-py
More details about probability calibration are provided here - https://scikit-learn.org/stable/modules/calibration.html
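For the practical side of the question itself, here is a minimal sketch of downsample-then-upweight with NumPy and scikit-learn. The synthetic data, the downsampling factor of 20, and the choice of LogisticRegression are all placeholder assumptions; any estimator whose fit method accepts sample_weight would work the same way:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_min = rng.normal(1.0, 1.0, size=(100, 2))      # minority class
    X_maj = rng.normal(-1.0, 1.0, size=(10000, 2))   # majority class

    # Step 1: downsample the majority class by a factor of 20.
    factor = 20
    keep = rng.choice(len(X_maj), size=len(X_maj) // factor, replace=False)
    X_maj_down = X_maj[keep]

    X = np.vstack([X_min, X_maj_down])
    y = np.concatenate([np.ones(len(X_min)), np.zeros(len(X_maj_down))])

    # Step 2: upweight the downsampled class by the same factor, so the
    # loss still reflects the original class proportions and the model's
    # predicted probabilities stay calibrated.
    sample_weight = np.where(y == 0, float(factor), 1.0)

    clf = LogisticRegression().fit(X, y, sample_weight=sample_weight)

The key point from the lecture is that the example weight equals the factor by which you downsampled, so the training loss behaves as if the full majority class were still present.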

Related

Are there examples of using reinforcement learning for text classification?

Imagine a binary classification problem like sentiment analysis. Since we have the labels, can't we use the gap between the actual and predicted values as the reward for RL?
I wish to try Reinforcement Learning for Classification Problems
Interesting thought! To my knowledge, it can be done.
Imitation Learning - At a high level, this means observing sample trajectories performed by an agent in the environment and using them to predict the policy for a given state configuration. I prefer Probabilistic Graphical Models for the prediction, since they give me more interpretability in the model. I have implemented a similar algorithm from this research paper: http://homes.soic.indiana.edu/natarasr/Papers/ijcai11_imitation_learning.pdf
Inverse Reinforcement Learning - A similar method, developed by Andrew Ng at Stanford, for finding the reward function from sample trajectories; the recovered reward function can then be used to frame the desirable actions.
http://ai.stanford.edu/~ang/papers/icml00-irl.pdf
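To make the question's reward idea concrete, here is a toy REINFORCE-style sketch for binary classification, treating each example as a one-step episode. The synthetic data, the ±1 reward, and the learning rate are arbitrary assumptions for illustration, not taken from the papers above:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # true labels

    w = np.zeros(3)   # parameters of a Bernoulli policy P(action=1 | x)
    lr = 0.1
    for epoch in range(200):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))                 # policy probabilities
        actions = (rng.random(len(p)) < p).astype(float)   # sample actions
        reward = np.where(actions == y, 1.0, -1.0)         # label-based reward
        # REINFORCE update: reward times the gradient of log pi(action | x),
        # which for a Bernoulli policy with sigmoid logits is (action - p) * x.
        grad_logp = (actions - p)[:, None] * X
        w += lr * (reward[:, None] * grad_logp).mean(axis=0)

In practice this converges to roughly the same decision boundary as logistic regression, just with much higher variance, which is why RL is rarely the first choice when the labels are already available.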

Difference between SVM implementations

I am trying to implement an SVM in RapidMiner. However, I am presented with several SVM implementations: libsvm, mySVM, JMySVM, Particle Swarm Optimization-based SVM, and Evolutionary SVM. Now, I know the basic differences between the implementations, but what are their advantages and disadvantages, so I can decide which one to use?
I am not finding much information about this online, and I would like to avoid trying them all just to see which one gives the best results. So I would like to know in which situations I should use each of them.
First of all, you seem to be confusing implementations with algorithms. As far as I know, libsvm, mySVM, and JMySVM are standard implementations which solve the SVM optimization problem with algorithms such as sequential minimal optimization (SMO).
In contrast, the other SVMs you mentioned (additionally) use less common approaches like particle swarm optimization or evolutionary algorithms for the optimization. Such methods usually give you a good approximation with little effort, which might be advantageous for large-scale problems (though I admit I don't know the exact motivation for their invention).
If you are looking for the SVM model that is common in machine learning and related fields, I would suggest you try libsvm. Alternatively, you can have a look at the collection here.
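If you just want to try the libsvm solver quickly outside RapidMiner, note that scikit-learn's SVC is a Python wrapper around libsvm; a minimal sketch (with synthetic placeholder data) could look like this:

    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = SVC(kernel="rbf", C=1.0)   # SVC is backed by libsvm's SMO solver
    clf.fit(X, y)
    print(clf.score(X, y))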

How to approach a machine learning programming competition

Many machine learning competitions are held on Kaggle, where you are given a training set of features and labels, and a test set whose labels are to be predicted using a model built from the training set.
It is pretty clear that supervised learning algorithms like decision trees, SVMs, etc. are applicable here. My question is: how should I start approaching such problems? Should I begin with a decision tree, an SVM, or some other algorithm, or is there some other approach? In other words, how do I decide?
So, I had never heard of Kaggle until reading your post. Thank you so much; it looks awesome. Upon exploring their site, I found a portion that will guide you well. On the competitions page (click "all competitions"), you'll see Digit Recognizer and Facial Keypoints Detection, both of which are competitions held for educational purposes, with tutorials provided (a tutorial isn't available for Facial Keypoints Detection yet, as the competition is in its infancy). In addition to the general forums, each competition also has its own forum, which I imagine is very helpful.
If you're interested in the mathematical foundations of machine learning and are relatively new to it, may I suggest Bayesian Reasoning and Machine Learning. It's no cakewalk, but it's much friendlier than its counterparts, without a loss of rigor.
EDIT:
I found the tutorials page on Kaggle, which seems to be a summary of all of their tutorials. Additionally, scikit-learn, a python library, offers a ton of descriptions/explanations of machine learning algorithms.
This cheat sheet, http://peekaboo-vision.blogspot.pt/2013/01/machine-learning-cheat-sheet-for-scikit.html, is a good starting point. In my experience, using several algorithms at the same time can often give better results, e.g. logistic regression and SVM, where the results of each one are given a predefined weight. And test, test, test ;)
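As one way to illustrate that weighted-combination idea, here is a minimal sketch with scikit-learn's VotingClassifier; the 1:2 weights and the toy data are arbitrary placeholders to tune on your own problem:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression()), ("svm", SVC(probability=True))],
        voting="soft",    # average predicted probabilities...
        weights=[1, 2],   # ...with a predefined weight per model
    )
    ensemble.fit(X, y)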
There is No Free Lunch in data mining. You won't know which methods work best until you try lots of them.
That being said, there is also a trade-off between understandability and accuracy in data mining. Decision trees and kNN tend to be understandable, but less accurate than SVMs or random forests. Kaggle looks for high accuracy over understandability.
It also depends on the number of attributes. Some learners can handle many attributes well, like SVMs, whereas others become slow with many attributes, like neural nets.
You can shrink the number of attributes by using PCA, which has helped in several Kaggle competitions.
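For instance, a hedged sketch of shrinking the attributes with PCA before an SVM; the component count of 10 and the synthetic data are placeholder assumptions:

    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=50,
                               n_informative=10, random_state=0)
    model = make_pipeline(PCA(n_components=10), SVC())  # reduce, then classify
    print(cross_val_score(model, X, y, cv=5).mean())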

General SVM implementation

I want to learn about general SVM implementations which solve the QP problem directly for training. Initially, I do not want to learn Sequential Minimal Optimization (SMO)-style algorithms, which overcome the QP matrix size issue. Can anyone please give me some references for learning a pure, general SVM implementation in a programming language like C, C++, or Java? That way I can understand the basic issues in SVMs, and it will help me later in learning other optimized SVM algorithms.
This blog post by Mathieu Blondel explains how to solve the SVM problem both with and without kernels using a generic QP solver in Python (in this case he is using CVXOPT).
The source code is published on this gist and is very simple to understand thanks to the numpy array notation for n-dimensional arrays (in this case, mostly 2D matrices and 1D vectors).
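In the same spirit as that post, here is a minimal sketch of the soft-margin SVM dual solved as a generic QP with CVXOPT. This is an illustration under stated assumptions (linear kernel only, labels in {-1, +1}), not Blondel's exact code:

    import numpy as np
    from cvxopt import matrix, solvers

    def svm_qp(X, y, C=1.0):
        # Dual problem: minimize (1/2) a^T (yy^T * K) a - 1^T a
        #               subject to 0 <= a_i <= C and y^T a = 0
        n = X.shape[0]
        K = X @ X.T                                    # linear kernel matrix
        P = matrix(np.outer(y, y) * K)
        q = matrix(-np.ones(n))
        G = matrix(np.vstack([-np.eye(n), np.eye(n)])) # box constraints on alpha
        h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
        A = matrix(y.reshape(1, -1).astype(float))     # equality constraint y^T a = 0
        b = matrix(0.0)
        alpha = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])
        w = (alpha * y) @ X                            # recover primal weights
        sv = alpha > 1e-6                              # support vectors
        b0 = np.mean(y[sv] - X[sv] @ w)                # bias from the support vectors
        return w, b0

Because the QP matrix P is n-by-n, this approach scales poorly with the number of training examples, which is exactly the matrix size issue that SMO-style algorithms were designed to overcome.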
You could check some of the resources mentioned here. It is also advisable to have a look at the existing code. One of the most popular implementations, LIBSVM, is open-source, so you can study the implementation.

How does HOG feature descriptor training work?

There don't seem to be any implementations of HOG training in OpenCV, and there are few sources about how HOG training works. From what I gathered, HOG training can be done in real time. But what are the requirements for training? How does the training process actually work?
As with most computer vision algorithms, Google Scholar is your friend :) I would suggest reading a few papers on how it works. Here is one of the most referenced papers on HOG for you to start with.
Another tip when researching in computer vision is to note the authors of the papers you find interesting, and to try to find their websites. They tend to have implementations of their algorithms, as well as rules of thumb on how to use them. Also, look up the references that are cited in the paper about your algorithm. This can be very helpful in acquiring the background knowledge needed to truly understand how the algorithm works and why.
Your terminology is a bit mixed up. HOG is a feature descriptor. You can train a classifier using HOG, which can in turn be used for object detection. OpenCV includes a people detector that uses HOG features and an SVM classifier. It also includes CascadeClassifier, which can use HOG, and which is typically used for face detection.
There is a program in OpenCV called opencv_traincascade, which lets you train a cascade object detector and gives you the option to use HOG features. There is also a function in the Computer Vision System Toolbox for MATLAB called trainCascadeObjectDetector, which does the same thing.
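As a quick illustration of the pre-trained route mentioned above, here is a sketch of OpenCV's built-in HOG-plus-SVM people detector in Python; the image path is a hypothetical placeholder:

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("people.jpg")                   # placeholder image path
    rects, weights = hog.detectMultiScale(img, winStride=(8, 8))
    for (x, y, w, h) in rects:                       # draw the detections
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)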