Random Forests with a Customized Loss Function - machine-learning

I am a complete beginner in the field of machine learning. For a project, I have to use a customized loss function in random forest classification. I have used scikit-learn till now, so suggestions on implementing this through scikit-learn would be most helpful.

Loss functions (Gini impurity and entropy, in the case of classification trees) are implemented in the _tree.pyx Cython file in scikit-learn (they're called criteria in the source). You can start by modifying or adding to these functions. If you add your custom loss function (criterion) to the Cython file, you also need to expose it in the tree.py Python file (look at the CRITERIA_CLF and CRITERIA_REG dictionaries).
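For orientation, the exposure step looks roughly like this (a sketch based on the scikit-learn source; the exact module and names vary across versions, and MyCriterion / "my_loss" are hypothetical placeholders for your compiled custom criterion):

    # In sklearn/tree/tree.py (sketch; names vary by version):
    CRITERIA_CLF = {"gini": _tree.Gini,
                    "entropy": _tree.Entropy,
                    "my_loss": _tree.MyCriterion}  # hypothetical custom Cython criterion

After rebuilding the package, the new criterion can then be selected by name, e.g. RandomForestClassifier(criterion="my_loss"), since the forest passes the criterion string down to its trees.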

Related

How to extract and transfer learnt parameters in SVM (scikit)?

I have trained an SVM image classifier using sklearn. The assignment requirement is to make a separate "prediction.py" script which takes an image and classifies it. Generally this is done with clf.predict(), but how can I get the values of the learnt coefficients so that I can transfer them to the prediction.py script?
The scikit-learn documentation addresses this; see https://scikit-learn.org/stable/modules/model_persistence.html
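A minimal sketch of that persistence route using joblib's dump/load; load_digits stands in here for the actual image features:

    import joblib  # older scikit-learn versions ship this as sklearn.externals.joblib
    from sklearn import svm
    from sklearn.datasets import load_digits

    # Train and save the fitted classifier in the training script.
    X, y = load_digits(return_X_y=True)  # placeholder for your image features
    clf = svm.SVC().fit(X, y)
    joblib.dump(clf, "svm_model.pkl")

    # prediction.py would then reload the fitted model and classify:
    clf = joblib.load("svm_model.pkl")
    print(clf.predict(X[:1]))  # feature vector of one image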

Is there a way of using cosine similarity instead of dot product in Python (sklearn/keras)?

I just started using Sklearn (MLPRegressor) and Keras (Sequential, with Dense layers).
Today I read this paper describing how using cosine similarity instead of the dot product improves performance. Basically, it says that if we replace f(w^T x) with f((w^T x)/(|x||w|)), i.e. we don't just feed the dot product to the activation function but normalize it first, we get better and quicker performance.
Is there a way of doing this in Python, specifically for MLPRegressor in sklearn (or another library), or in Keras? (Maybe TensorFlow?)
Sklearn uses prebuilt networks, so no. I also don't think it's possible in Keras, as it uses prebuilt layers.
It can certainly be implemented in TensorFlow, though; note that in TF you can define layers explicitly.
For example, in this snippet you would add the normalization at line 25: divide the output rows tf.nn.sigmoid(tf.matmul(X, w_1)) by the appropriate norms of the input rows (which you can get using tf.nn.l2_normalize with dim=1).
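As a rough illustration (TensorFlow 1.x style; the names X and w_1 and the shapes are assumptions, mirroring the snippet the answer refers to):

    import tensorflow as tf

    X = tf.placeholder(tf.float32, shape=[None, 4])  # input features
    w_1 = tf.Variable(tf.random_normal([4, 8]))      # first-layer weights

    # Normalizing the input rows and the weight columns before the matmul
    # turns each dot product w^T x into the cosine similarity (w^T x)/(|x||w|).
    X_hat = tf.nn.l2_normalize(X, dim=1)    # unit-length input rows
    w_hat = tf.nn.l2_normalize(w_1, dim=0)  # unit-length weight columns
    h_1 = tf.nn.sigmoid(tf.matmul(X_hat, w_hat))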

Can linear classification take non-binary targets?

I'm following a TensorFlow example that takes a bunch of features (real estate related) and "expensive" (i.e. house price) as the binary target.
I was wondering if the target could take more than just a 0 or 1. Let's say 0 (not expensive), 1 (expensive), 3 (very expensive).
I don't think this is possible, as the logistic regression model has asymptotes approaching 0 and 1.
This might be a stupid question, but I'm totally new to ML.
I think I found the answer myself. From Wikipedia:
First, the conditional distribution y|x is a Bernoulli distribution rather than a Gaussian distribution, because the dependent variable is binary. Second, the predicted values are probabilities and are therefore restricted to (0,1) through the logistic distribution function because logistic regression predicts the probability of particular outcomes.
Logistic regression is defined for binary classification tasks (for more details, please see logistic_regression). For multi-class classification problems, you can use the softmax classification algorithm. The following tutorial shows how to write a softmax classifier with the TensorFlow library.
Softmax_Regression in Tensorflow
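For concreteness, here is a minimal softmax-regression sketch in TensorFlow 1.x style (the dimensions are assumptions; the three classes match the question's not expensive / expensive / very expensive labels):

    import tensorflow as tf

    n_features, n_classes = 10, 3
    X = tf.placeholder(tf.float32, [None, n_features])
    y = tf.placeholder(tf.int64, [None])  # integer class labels 0, 1, 2

    W = tf.Variable(tf.zeros([n_features, n_classes]))
    b = tf.Variable(tf.zeros([n_classes]))
    logits = tf.matmul(X, W) + b  # one logit per class instead of a single score

    # Softmax cross-entropy generalizes the logistic loss to multiple classes.
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    predicted_class = tf.argmax(logits, axis=1)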
However, if your data set is linearly non-separable (most of the time this is the case with real-world datasets), you have to use an algorithm that can handle nonlinear decision boundaries. Algorithms such as a neural network or an SVM with kernels would be a good choice. The following IPython notebook shows how to create a simple neural network in TensorFlow.
Neural Network in Tensorflow
Good Luck!

Machine Learning: after training, how exactly does it get a prediction? - opencv

So after you have a machine learning algorithm trained, with your layers, nodes, and weights, how exactly does it go about getting a prediction for an input vector? I am using a multilayer perceptron (neural network).
From what I currently understand, you start with the input vector to be predicted. You send it to the hidden layer(s), where each node takes the sum of the products of the incoming values and its weights (found in training), adds its bias term, and runs the result through the same activation function used in training. You repeat this for each hidden layer, then do the same for the output layer. Each node in the output layer then gives one of your predictions.
Is this correct?
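In NumPy terms, the pass described above would look something like this (a sketch; W1, b1, W2, b2 are assumed to be the weights and biases found in training):

    import numpy as np

    def sigmoid(z):  # stand-in for whatever activation was used in training
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x, W1, b1, W2, b2):
        h = sigmoid(W1 @ x + b1)     # hidden layer: weighted sums plus biases, then activation
        return sigmoid(W2 @ h + b2)  # output layer repeats the same computation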
I got confused when using OpenCV to do this, because the guide says that when you use the predict function:
If you are using the default cvANN_MLP::SIGMOID_SYM activation function with the default parameter values fparam1=0 and fparam2=0, then the function used is y = 1.7159*tanh(2/3 * x), so the output will range from [-1.7159, 1.7159] instead of [0, 1].
However, the documentation also states that when training, SIGMOID_SYM uses the activation function:
f(x) = beta*(1 - e^{-alpha x})/(1 + e^{-alpha x})
where alpha and beta are user-defined parameters.
So, I'm not quite sure what this means. Where does the tanh function come into play? Can anyone clear this up please? Thanks for the time!
The documentation where this is found is here:
The reference to the tanh is under the function description for predict.
The reference to the activation function is next to the S-shaped graph in the top part of the page.
Since this is a general question, and not code specific, I did not post any code with it.
I would suggest that you read up on the algorithm you are using or plan to use. To be honest, there is no one definitive algorithm for a problem; you have to explore what features you have and what you need.
How an algorithm performs prediction depends entirely on the choice of algorithm. A support vector machine (SVM) learns by fitting hyperplanes in the feature space, using a metric such as distance, and the learnt model is then used for prediction. KNN, on the other hand, uses a simple nearest-neighbour measurement for prediction.
Please do more work on what exactly you need, and read through the research papers to get a proper understanding. There is no magic involved in prediction, just mathematical formulations.
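As for where the tanh comes in: the two formulas in the OpenCV documentation describe the same function, since

    (1 - e^{-alpha x}) / (1 + e^{-alpha x}) = tanh(alpha x / 2)

so f(x) = beta*(1 - e^{-alpha x})/(1 + e^{-alpha x}) = beta*tanh(alpha x / 2). Matching this against the quoted y = 1.7159*tanh(2/3 * x) gives beta = 1.7159 and alpha = 4/3, which is apparently what the defaults fparam1=0 and fparam2=0 get substituted with. The predict function therefore applies the same SIGMOID_SYM activation used in training, just written in its tanh form.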

How do I update a trained model (weka.classifiers.functions.MultilayerPerceptron) with new training data in Weka?

I would like to load a model I trained before and then update this model with new training data. But I found this task hard to accomplish.
I have learnt from Weka Wiki that
Classifiers implementing the weka.classifiers.UpdateableClassifier interface can be trained incrementally.
However, the regression model I trained uses the weka.classifiers.functions.MultilayerPerceptron classifier, which does not implement UpdateableClassifier.
Then I checked the Weka API and it turns out that no regression classifier implements UpdateableClassifier.
How can I train a regression model in Weka, and then update the model later with new training data after loading the model?
I have some data-mining experience in Weka as well as in scikit-learn and R, and as far as I know updatable regression models do not exist in Weka. Some R libraries do support updating regression models (take a look at this linear regression example: http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html), and scikit-learn supports incremental updates for a few estimators via partial_fit, so if you are free to switch data-mining tools this might help you out.
If you need to stick to Weka, then I'm afraid you would probably need to implement such a model yourself; but since I'm not a complete Weka expert, please check with the people on the Weka mailing list (http://weka.wikispaces.com/Weka+Mailing+List).
The SGD classifier implementation in Weka (weka.classifiers.functions.SGD) implements UpdateableClassifier and supports multiple loss functions, among them two that are meant for linear regression, namely the epsilon-insensitive and Huber loss functions.
Therefore you can train a linear regression model incrementally with SGD, as long as one of these two loss functions is used to minimize the training error.
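As an aside, scikit-learn offers the same pattern through partial_fit; a minimal sketch with made-up data, using the Huber loss mentioned above:

    import numpy as np
    from sklearn.linear_model import SGDRegressor

    rng = np.random.RandomState(0)
    X1, y1 = rng.randn(100, 5), rng.randn(100)  # initial training batch
    X2, y2 = rng.randn(50, 5), rng.randn(50)    # new data arriving later

    model = SGDRegressor(loss="huber")
    model.partial_fit(X1, y1)  # initial fit
    model.partial_fit(X2, y2)  # later update of the already-trained model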
