I want to plot a learning curve for my regression problem, using RMSE as the scoring metric.
How do I set a different "scoring" parameter? It seems that "scoring" is deprecated in the learning-curve plotting function...
Can anyone help me, or recommend another function?
In the latest version (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html) there is a scoring parameter.
If you mean the plot_learning_curve function from https://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html#sphx-glr-auto-examples-model-selection-plot-learning-curve-py, you could change it like this:
def plot_learning_curve(estimator, title, X, y, axes=None, ylim=None, cv=None,
                        n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5),
                        scoring=None):  # pass any scikit-learn scorer string here
    ...
    train_sizes, train_scores, test_scores, fit_times, _ = \
        learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs,
                       train_sizes=train_sizes,
                       return_times=True, scoring=scoring)
    ...
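To get RMSE specifically, you can then pass one of scikit-learn's built-in scorer strings when calling it. A minimal sketch, assuming an estimator and data X, y are already defined ('neg_root_mean_squared_error' exists in recent scikit-learn versions; on older ones use 'neg_mean_squared_error' and take the square root yourself):
# Plot a learning curve scored by RMSE (negated, per scikit-learn's convention).
plot_learning_curve(estimator, "Learning curve (RMSE)", X, y,
                    cv=5, scoring='neg_root_mean_squared_error')
Note that scikit-learn scorers follow a "higher is better" convention, so the returned scores are negated RMSE values; flip the sign if you want the plotted curve to show the RMSE itself.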
I am trying to separate background from signal where it is known that the quantity x^2 - y^2 is the physical reason why the background and signal are different. If I provide x and y as input variables, the BDT has a hard time figuring out how to achieve the separation. Is a BDT unable to do squares?
No, a binary decision tree is unable to take squares of input features. Given input features x, y, it will try to approximate the desired function by subdividing the x,y plane along vertical and horizontal lines. Let us take a look at an example: I fit a decision tree classifier to a square grid of points, and plot the decision boundary.
from sklearn.tree import DecisionTreeClassifier
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-5.5, 5.5, 1)
y = np.arange(-5.0, 6.0, 1)
xx, yy = np.meshgrid(x,y)
#the function we want to learn:
target = xx.ravel()**2 - yy.ravel()**2 > 0
data = np.c_[xx.ravel(), yy.ravel()]
#Fit a decision tree:
clf = DecisionTreeClassifier()
clf.fit(data, target)
#Plot the decision boundary:
xxplot, yyplot = np.meshgrid(np.arange(-7, 7, 0.1),
np.arange(-7, 7, 0.1))
Z = clf.predict(np.c_[xxplot.ravel(), yyplot.ravel()])
# Put the result into a color plot
Z = Z.reshape(xxplot.shape)
plt.contourf(xxplot, yyplot, Z, cmap=plt.cm.hot)
# Plot also the training points
plt.scatter(xx.ravel(), yy.ravel(), c=target, cmap=plt.cm.flag)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Decision boundary for a binary decision tree learning a function x**2 - y**2 > 0")
plt.show()
Here you can see what kind of boundaries a decision tree can learn: piecewise-rectangular ones. They are not going to approximate your function well, especially in regions with few training points. Since you know that x^2 - y^2 is the quantity that determines the answer, you can just add it as a new feature instead of trying to learn it, for example:
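Continuing the toy example above, a minimal sketch of what that looks like (data_aug and clf_aug are just illustrative names):
# Append the known physics-motivated quantity x**2 - y**2 as a third feature.
data_aug = np.c_[xx.ravel(), yy.ravel(), xx.ravel()**2 - yy.ravel()**2]
clf_aug = DecisionTreeClassifier()
clf_aug.fit(data_aug, target)
# A single split on the new feature is now enough to separate the two classes.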
I'm in a situation where I need to train a model to predict a scalar value, and it's important that the predicted value has the same sign as the true value, while keeping the squared error minimal.
What would be a good choice of loss function for that?
For example:
Let's say the predicted value is -1 and the true value is 1. The loss between the two should be a lot greater than the loss between 3 and 1, even though the squared errors of (3, 1) and (-1, 1) are equal.
Thanks a lot!
This turned out to be a really interesting question - thanks for asking it! First, remember that you want your loss function to be defined entirely in terms of differentiable operations, so that you can back-propagate through it. This means that arbitrary ad-hoc logic won't necessarily do. To restate your problem: you want a differentiable function of two variables that increases sharply when the two variables have different signs, and more slowly when they share the same sign. Additionally, you want some control over how sharply these values increase relative to one another, so we want something with two configurable constants. I started constructing a function that met these needs, but then remembered one you can find in any high-school geometry textbook: the elliptic paraboloid!
The standard formulation doesn't meet the requirement of sign-agreement symmetry, so I had to introduce a rotation. The surface produced by the code below is the result. Note that it increases more sharply when the signs don't agree, less sharply when they do, and that the constants controlling this behaviour are configurable. The code below is all that was needed to define and plot the loss function. I don't think I've ever used a geometric form as a loss function before - really neat.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib
from matplotlib import cm

def elliptic_paraboloid_loss(x, y, c_diff_sign, c_same_sign):
    # Compute a rotated elliptic paraboloid.
    t = np.pi / 4
    x_rot = (x * np.cos(t)) + (y * np.sin(t))
    y_rot = (x * -np.sin(t)) + (y * np.cos(t))
    z = ((x_rot**2) / c_diff_sign) + ((y_rot**2) / c_same_sign)
    return z

c_diff_sign = 4
c_same_sign = 2

a = np.arange(-5, 5, 0.1)
b = np.arange(-5, 5, 0.1)

loss_map = np.zeros((len(a), len(b)))
for i, a_i in enumerate(a):
    for j, b_j in enumerate(b):
        loss_map[i, j] = elliptic_paraboloid_loss(a_i, b_j, c_diff_sign, c_same_sign)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') was removed in newer matplotlib
X, Y = np.meshgrid(a, b)
surf = ax.plot_surface(X, Y, loss_map, cmap=cm.coolwarm,
                       linewidth=0, antialiased=False)
plt.show()
From what I understand, your current loss function is something like:
loss = mean_squared_error(y, y_pred)
What you could do is add another component to your loss: one that penalizes negative numbers and does nothing with positive numbers, with a coefficient controlling how much you want to penalize it. For that, we can use something like a negatively-shaped ReLU, i.e. a ReLU applied to the negated input.
Let's call this component "Neg_ReLU". Then your loss function will be:
loss = mean_squared_error(y, y_pred) + Neg_ReLU(y_pred)
So for example, if your result is -1, then the total error would be:
mean_squared_error(1, -1) + 1
And if your result is 3, then the total error would be:
mean_squared_error(1, 3) + 0
(Note that Neg_ReLU(3) = 0 and Neg_ReLU(-1) = 1.)
If you want to penalize the negative values more, you can add a coefficient:
coeff_negative_value = 2
loss = mean_squared_error(y, y_pred) + coeff_negative_value * Neg_ReLU(y_pred)
Now the negative values are more penalized.
We can build the negative-ReLU component like this:
tf.nn.relu(tf.math.negative(value))
So, summarizing, your total loss will be:
coeff = 1
Neg_ReLU = tf.nn.relu(tf.math.negative(y_pred))
total_loss = mean_squared_error(y, y_pred) + coeff * Neg_ReLU
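For completeness, here is a minimal sketch of the same idea wrapped as a custom loss you can pass to Keras's model.compile. It assumes TensorFlow 2.x and, as above, that only negative predictions should be penalized; the name sign_penalized_mse is just illustrative:
import tensorflow as tf

def sign_penalized_mse(coeff=1.0):
    # MSE plus a penalty on negative predictions.
    def loss(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        # relu(-y_pred) is zero for positive predictions and |y_pred| for negative ones
        neg_relu = tf.reduce_mean(tf.nn.relu(tf.math.negative(y_pred)))
        return mse + coeff * neg_relu
    return loss

# model.compile(optimizer='adam', loss=sign_penalized_mse(coeff=2.0))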
I have a neural network in torch7 and would like to check how the momentum of the network is developing, because I want to modify/reduce it: I need the velocity term in order to do some extra processing with the values.
So I have something like the following code:
for t = 1, params.num_iterations do
    local x, losses = optim.adam(feval, img, optim_state)
    img = postProccess(img, content_imageprep, params)
    print(velocity) -- how?
end
And would like to see what the velocity is doing. Anybody know how to do this?
Printing the optim_state gives me the following output
v : CudaTensor - size: 1327104
m : CudaTensor - size: 1327104
learningRate : 10
denom : CudaTensor - size: 1327104
t : 4
but I'm not sure whether any of these terms represents the velocity, and if so which one. Anybody know?
You won't find the value of the momentum in the state argument but in the config argument (which is absent in your function call, so the momentum coefficients take their default values, i.e. 0.9 for beta1 and 0.999 for beta2). As for the tensors you printed: in Adam, m is the exponential moving average of the gradients (the closest thing to a velocity term) and v is the exponential moving average of the squared gradients.
Have a look at the source code https://github.com/torch/optim/blob/master/adam.lua#L24
I need to solve the equation G * p = 0.
I know the matrix G; how can I find the matrix p subject to ||p|| = 1?
Currently I am solving it in OpenCV as follows:
Mat w, u, EigenVectors;
SVD::compute(A, w, u, EigenVectors);
Mat p = EigenVectors.row(EigenVectors.rows-1);
I want to know how I can ensure the condition ||p|| = 1.
Also, what is the significance and meaning of the other rows/columns of EigenVectors (the transposed matrix returned by the SVD)?
I believe you can use cv::SVD::solveZ(). It finds a unit-length solution x of a singular linear system A * x = 0
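Either way, the idea behind your current code is sound: the unit-norm minimizer of ||G * p|| is the right singular vector associated with the smallest singular value, and singular vectors are unit-norm by construction, so the ||p|| = 1 condition comes for free. A quick NumPy illustration of the same idea (the G below is just a random placeholder):
import numpy as np

G = np.random.randn(8, 4)      # placeholder for your known matrix G
U, s, Vt = np.linalg.svd(G)
p = Vt[-1]                     # right singular vector for the smallest singular value
print(np.linalg.norm(p))       # 1.0 -- singular vectors are unit-norm by construction
print(np.linalg.norm(G @ p))   # the smallest achievable ||G p|| over unit-norm p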
It looks like you need to use the method of Lagrange multipliers.
As far as I know, OpenCV has no ready-to-use tools for that.
A good example for MATLAB: Lagrange Multipliers
I want to implement a simple SVM classifier, in the case of high-dimensional binary data (text), for which I think a simple linear SVM is best. The reason for implementing it myself is basically that I want to learn how it works, so using a library is not what I want.
The problem is that most tutorials go up to an equation that can be solved as a "quadratic problem", but they never show an actual algorithm! So could you point me either to a very simple implementation I could study, or (better) to a tutorial that goes all the way to the implementation details?
Thanks a lot!
Some pseudocode for the Sequential Minimal Optimization (SMO) method can be found in this paper by John C. Platt: Fast Training of Support Vector Machines using Sequential Minimal Optimization. There is also a Java implementation of the SMO algorithm, which was developed for research and educational purposes (SVM-JAVA).
Other commonly used methods to solve the QP optimization problem include:
constrained conjugate gradients
interior point methods
active set methods
But be aware that some mathematical background is needed to understand these things (Lagrange multipliers, Karush–Kuhn–Tucker conditions, etc.).
Are you interested in using kernels or not? Without kernels, the best way to solve these kinds of optimization problems is through various forms of stochastic gradient descent. A good version is described in http://ttic.uchicago.edu/~shai/papers/ShalevSiSr07.pdf and that has an explicit algorithm.
The explicit algorithm does not work with kernels but can be modified; however, it would be more complex, both in terms of code and runtime complexity.
Have a look at liblinear, and for non-linear SVMs at libsvm.
The following paper "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM" top of page 11 describes the Pegasos algorithm also for kernels.It can be downloaded from http://ttic.uchicago.edu/~nati/Publications/PegasosMPB.pdf
It appears to be a hybrid of coordinate descent and subgradient descent. Also, line 6 of the algorithm is wrong. In the predicate the second appearance of y_i_t should be replaced with y_j instead.
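For the kernel-free case mentioned in the earlier answer, the core update is short enough to sketch here. This is a minimal illustration of the linear Pegasos sub-gradient step (the optional projection onto the ball of radius 1/sqrt(lam) and the kernelized/mini-batch variants are omitted), assuming X is an (n, d) NumPy array and y holds labels in {-1, +1}:
import numpy as np

def pegasos_linear(X, y, lam=0.01, n_iters=100000, seed=0):
    # Stochastic sub-gradient descent on the regularized hinge-loss objective.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)              # pick one training example at random
        eta = 1.0 / (lam * t)            # step-size schedule from the paper
        if y[i] * X[i].dot(w) < 1:       # margin violated: hinge term contributes
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                            # only the regularizer contributes
            w = (1 - eta * lam) * w
    return w

# predictions are sign(X @ w); a bias can be added by appending a constant feature to X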
I would like to add a small supplement to the answer about Platt's original work.
There is a somewhat simplified version presented in the Stanford lecture notes, but the derivation of all the formulas has to be found elsewhere (e.g. these random notes I found on the Internet).
If it's OK to deviate from the original implementations, I can propose my own variation of the SMO algorithm, which follows.
import numpy as np

class SVM:
    def __init__(self, kernel='linear', C=10000.0, max_iter=100000, degree=3, gamma=1):
        self.kernel = {'poly': lambda x, y: np.dot(x, y.T)**degree,
                       'rbf': lambda x, y: np.exp(-gamma*np.sum((y - x[:, np.newaxis])**2, axis=-1)),
                       'linear': lambda x, y: np.dot(x, y.T)}[kernel]
        self.C = C
        self.max_iter = max_iter

    # Clip the step so that both lambdas stay inside the box [0, C]^2.
    def restrict_to_square(self, t, v0, u):
        t = (np.clip(v0 + t*u, 0, self.C) - v0)[1]/u[1]
        return (np.clip(v0 + t*u, 0, self.C) - v0)[0]/u[0]

    def fit(self, X, y):
        self.X = X.copy()
        self.y = y * 2 - 1                  # map {0, 1} labels to {-1, +1}
        self.lambdas = np.zeros_like(self.y, dtype=float)
        self.K = self.kernel(self.X, self.X) * self.y[:, np.newaxis] * self.y

        for _ in range(self.max_iter):
            for idxM in range(len(self.lambdas)):
                # pick a random partner index and solve the 2-variable subproblem analytically
                idxL = np.random.randint(0, len(self.lambdas))
                Q = self.K[[[idxM, idxM], [idxL, idxL]], [[idxM, idxL], [idxM, idxL]]]
                v0 = self.lambdas[[idxM, idxL]]
                k0 = 1 - np.sum(self.lambdas * self.K[[idxM, idxL]], axis=1)
                u = np.array([-self.y[idxL], self.y[idxM]])
                t_max = np.dot(k0, u) / (np.dot(np.dot(Q, u), u) + 1E-15)
                self.lambdas[[idxM, idxL]] = v0 + u * self.restrict_to_square(t_max, v0, u)

        # bias computed from the support vectors (lambdas > 0)
        idx, = np.nonzero(self.lambdas > 1E-15)
        self.b = np.sum((1.0 - np.sum(self.K[idx]*self.lambdas, axis=1)) * self.y[idx]) / len(idx)

    def decision_function(self, X):
        return np.sum(self.kernel(X, self.X) * self.y * self.lambdas, axis=1) + self.b
In simple cases it performs not much worse than sklearn.svm.SVC; I have posted the comparison images, together with the code that generates them, on GitHub.
I used quite a different approach to derive the formulas, so you may want to check my preprint on ResearchGate for details.
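For reference, a minimal usage sketch of the class above on a toy problem (this assumes binary 0/1 labels, since fit() maps them to -1/+1, and passes a small max_iter so it runs quickly; the toy data here is purely illustrative):
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like labels in {0, 1}

model = SVM(kernel='rbf', C=10, max_iter=100, gamma=1)
model.fit(X, y)

pred = (model.decision_function(X) > 0).astype(float)
print("training accuracy:", (pred == y).mean())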