What does this notation mean: <x1, x2>? - machine-learning

I was reading about k-strong convexity and came across an equation like this:
f(x_1) >= f(x_2) + <\nabla f(x_2), x_1 - x_2> + (k/2) * ||x_1 - x_2||^2
Could someone explain what the notation < , > means?
Sorry for the errors with latex.

<x, y> is the dot (scalar) product of x and y:
<x, y> = y' * x
where y' is the transpose of y. For more, you can read up on the scalar (inner) product.
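For example, in NumPy (just an illustration to go with the definition above; the vectors are made-up values):
import numpy as np
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
# <x, y> = y' * x = the sum over i of x_i * y_i
print(np.dot(x, y))     # 32.0
print(np.sum(x * y))    # same value, written out element-wise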

I think this notation comes from quantum mechanics, especially matrix mechanics: <x1| is called the bra vector and |x2> is called the ket vector. Please check a quantum mechanics book, for example the book on quantum mechanics by Dirac.

Related

Maxima numerical integration syntax

I'm trying to obtain a numerical solution to the following integral:
[image of the integral]
The correct answer is -0.324 + 0.382i but as seen below I am not getting a numerical answer and would appreciate help with the Maxima syntax.
[image of the Maxima input and output]
Perhaps related to why I am not getting a numerical output are two specific questions:
I read that e and i in Maxima need to be preceded by % in input but should these also appear as %e and %i as seen in the Maxima output?
Why is dy missing at the end of the integral in the Maxima output?
Thank you!
Looks to me like your input is okay; however, the function to compute numerical approximations to integrals is named quad_qags. (There are actually several related functions. See ?? quad_ for more info.) Also, a wrinkle here is that the integrand is a complex-valued function (of a real variable), and quad_qags can only work on real-valued integrands, so we'll have to work around it. Here's how I would arrange it.
myintegrand: exp(%i*(1 + %i*y))/(1 + %i*y + 1/(1 + %i*y));
result_realpart: quad_qags (realpart (myintegrand), y, 0, 6);
result_imagpart: quad_qags (imagpart (myintegrand), y, 0, 6);
result: result_realpart[1] + %i*result_imagpart[1];
I get 0.3243496676292901*%i + 0.3820529930785175 as the final result. That's a little different from what you said; maybe a minus sign went missing? or there's a missing or extra factor of %i?
A quick approximation
0.1 * lsum (x, x, float (rectform (makelist (ev (myintegrand, y = k/10), k, 0, 60))));
seems to show the result from quad_qags is reasonable.
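If it helps, the same split into real and imaginary parts can be reproduced outside Maxima, e.g. with SciPy (a quick sketch, reusing the integrand and the 0..6 range from the quad_qags calls above):
import numpy as np
from scipy.integrate import quad
def integrand(y):
    z = 1 + 1j * y
    return np.exp(1j * z) / (z + 1 / z)   # exp(%i*(1+%i*y)) / (1 + %i*y + 1/(1+%i*y))
re_part, _ = quad(lambda y: integrand(y).real, 0, 6)
im_part, _ = quad(lambda y: integrand(y).imag, 0, 6)
print(re_part + 1j * im_part)   # should land close to the quad_qags result above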

Analytic solution to an equation including the error function in Maxima

Maxima does not seem to come up with an analytic solution to this equation which includes the error function. The independent variable here is "p" and the dependent variable to be solved for is "x".
[image of the equation]
(%i3) solveexplicit:true$ ratprint:false$ fpprintprec:6$
(%i4) eqn: (sqrt(%pi)*(25*2^(3/2)*p-25*sqrt(2))*erf(1/(25*2^(3/2)*x))*x+1)/(25*p) = 0.04;
(%i5) solve (eqn, x);
(%o5) []
(%i6) eqn, [p=2,x=0.00532014],numer;
(%o6) 0.04=0.04
Any help or pointing in the right direction is appreciated.
As far as I know, Maxima can't solve equations containing erf. You can get a numerical result via find_root:
(%i5) find_root (eqn, x, 0.001, 0.999), p=2;
(%o5) 0.005320136894034347
As for symbolic solutions, I worked with the equation a little bit. One can get it into the form erf(something/x)*x = otherstuff, or equivalently erf(y) = somethingelse*y where y = something/x and somethingelse = otherstuff/something if I'm not mistaken. I don't know anything in particular about equations of that form, but perhaps you can find something.
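For what it's worth, the numerical root can also be cross-checked outside Maxima, e.g. in Python with SciPy (a quick sketch of the p = 2 case, using the same bracket [0.001, 0.999] as find_root above):
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf
def eqn(x, p=2.0):
    lhs = (np.sqrt(np.pi) * (25 * 2**1.5 * p - 25 * np.sqrt(2))
           * erf(1 / (25 * 2**1.5 * x)) * x + 1) / (25 * p)
    return lhs - 0.04
print(brentq(eqn, 0.001, 0.999))   # ~0.0053201, matching find_root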
Yes, solve can only do polynomials. I used the series expansion for small values of x and the accuracy is good enough.
(%i11) seriesE: 1$
termE: erf(x)$
for p: 1 unless p > 3 do
(termE: diff (termE, x)/p,
seriesE: seriesE + subst (x=0, termE)*x^p)$
seriesE;
(%o11) -(2*x^3)/(3*sqrt(%pi))+(2*x)/sqrt(%pi)+1
However, I then get the message "Expression longer than allowed by the configuration setting!"

Normalize a feature in this table

This has become quite a frustrating question, but I've asked in the Coursera discussions and they won't help. Below is the question:
I've gotten it wrong 6 times now. How do I normalize the feature? Hints are all I'm asking for.
I'm assuming x_2^(2) is the value 5184, unless I am supposed to add the x_0 column of 1's, which they don't mention here but he certainly mentions in the lectures when talking about creating the design matrix X; in that case x_2^(2) would be the value 72. Assuming one or the other is right (I'm playing a guessing game), what should I use to normalize it? He talks about 3 different ways to normalize in the lectures: one using the maximum value, another using the range (the difference between max and min), and another using the standard deviation. They want an answer correct to the hundredths. Which one am I supposed to use? This is so confusing.
...use both feature scaling (dividing by the
"max-min", or range, of a feature) and mean normalization.
So for any individual feature f:
f_norm = (f - f_mean) / (f_max - f_min)
e.g. for x2, the (midterm exam)^2 column = {7921, 5184, 8836, 4761}:
> x2 <- c(7921, 5184, 8836, 4761)
> mean(x2)
6676
> max(x2) - min(x2)
4075
> (x2 - mean(x2)) / (max(x2) - min(x2))
0.306 -0.366 0.530 -0.470
Hence norm(5184) = -0.366
(using R language, which is great at vectorizing expressions like this)
I agree it's confusing they used the notation x2 (2) to mean x2 (norm) or x2'
EDIT: in practice everyone calls the built-in scale(...) function, which does much the same thing (note that by default it scales by the standard deviation rather than the max-min range).
It's asking you to normalize the value of the second feature for the second training example (the second row of the second column), using both feature scaling and mean normalization. Therefore,
(5184 - 6675.5) / 4075 = -0.366
Usually we normalize all of them to have zero mean and lie in [-1, 1].
You can do that easily by dividing by the maximum of the absolute value and then removing the mean of the samples.
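A small NumPy sketch of both ideas (the range-based mean normalization from the R example above, and a zero-mean variant scaled into [-1, 1]; here the centering is done first so the result stays exactly within [-1, 1]):
import numpy as np
x2 = np.array([7921.0, 5184.0, 8836.0, 4761.0])
# mean normalization with range (max - min) scaling, as in the R example
x2_range = (x2 - x2.mean()) / (x2.max() - x2.min())
print(x2_range)                          # [ 0.306 -0.366  0.530 -0.470] (rounded)
# zero mean, then divide by the largest absolute deviation to land in [-1, 1]
centered = x2 - x2.mean()
print(centered / np.abs(centered).max())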
"I'm assuming x_2^(2) is the value 5184": is this because it's the second item in the list and it uses the subscript _2? x_2 is just a variable identifier in maths; it applies to all rows in the list. Note that the highest raw mid-term exam result (i.e. the one that is not squared) goes down on the final test, and the lowest raw mid-term result increases the most for the final exam result. Theta is a fixed value, a coefficient, so somewhere your normalisation of the x_1 and x_2 values must become (EDIT: not negative, less than 1) in order to allow for this behaviour. That should hopefully give you a starting basis, by identifying where the pivot point is.
I had the same problem; in my case the issue was that, as the average, I was using the maximum x2 value (8836) minus the minimum x2 value (4761) divided by two, instead of the sum of all x2 values divided by the number of examples.
For the same training set, I got the question as
Q. What is the normalized feature x^(3)_1?
Thus, the 3rd training example and 1st feature works out to 94 in the table above.
Now, normalized form is
x = (x - mean(x's)) / range(x)
Values are :
x = 94
mean = (89 + 72 + 94 + 69) / 4 = 81
range = 94 - 69 = 25
Normalized x = (94 - 81) / 25 = 0.52
I'm taking this course at the moment, and a really trivial mistake I made the first time I answered this question was using a comma instead of a dot in the answer, since I did it by hand and in my country we use a comma to denote decimals, e.g. 0,52 instead of 0.52.
So the second time I tried, I used a dot and it worked fine.

How does fitEllipse work in OpenCV?

I am working with OpenCV and I need to understand how the function fitEllipse exactly works. I looked at the code at (https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/shapedescr.cpp) and I know it uses least squares to determine the likely ellipses. I also looked at the paper given in the documentation (Andrew W. Fitzgibbon, R. B. Fisher. A Buyer's Guide to Conic Fitting. Proc. 5th British Machine Vision Conference, Birmingham, pp. 513-522, 1995.)
But I cannot understand the algorithm exactly. For example, why does it need to solve the least-squares problem 3 times? Why is bd initialized to 10000 before the first SVD (I guess it is just an arbitrary value for the initialization, but why can this value be arbitrary)? Why do the values in Ad need to be negative before the first SVD?
Thank you!
Here is Matlab code; it might help.
function [Q,a]=fit_ellipse_fitzgibbon(data)
% function [Q,a]=fit_ellipse_fitzgibbon(data)
%
% Ellipse specific fit, according to:
%
% Direct Least Square Fitting of Ellipses,
% A. Fitzgibbon, M. Pilu and R. Fisher. PAMI 1996
%
%
% See Also:
% FIT_ELLIPSE_LS
% FIT_ELLIPSE_HALIR
[m,n] = size(data);
assert((m==2||m==3)&&n>5);
x = data(1,:)';
y = data(2,:)';
D = [x.^2 x.*y y.^2 x y ones(size(x))]; % design matrix
S = D'*D; % scatter matrix
C(6,6)=0; C(1,3)=-2; C(2,2)=1; C(3,1)=-2; % constraints matrix
% solve the generalized eigensystem
[V,D] = eig(S, C);
% find the only negative eigenvalue
[n_r, n_c] = find(D<0 & ~isinf(D));
if isempty(n_c),
warning('Error getting the ellipse parameters, will do LS');
[Q,a] = fit_ellipse_ls(data); %
return;
end
% the parameters
a = V(:, n_c);
[A B C D E F] = deal(a(1),a(2),a(3),a(4),a(5),a(6)); % deal is slow!
Q = [A B/2 D/2; B/2 C E/2; D/2 E/2 F];
end % fit_ellipse_fitzgibbon
Fitzgibbon's solution has some numerical stability issues, though. See the work of Halir for a solution to this.
It is essentially a least-squares solution, but specifically designed so that it will produce a valid ellipse, not just any conic.
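If Python is more convenient, here is a rough, untested NumPy/SciPy transcription of the Matlab sketch above (same constraint-matrix signs and the same negative-eigenvalue selection; this is not OpenCV's actual fitEllipse code):
import numpy as np
from scipy.linalg import eig
def fit_ellipse_fitzgibbon(points):
    # points: (N, 2) array of x, y coordinates with N > 5
    x, y = points[:, 0], points[:, 1]
    # design matrix for the conic A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    S = D.T @ D                                  # scatter matrix
    C = np.zeros((6, 6))                         # constraint matrix
    C[0, 2] = C[2, 0] = -2.0
    C[1, 1] = 1.0
    # generalized eigenproblem S*a = mu*C*a
    eigvals, eigvecs = eig(S, C)
    eigvals = np.real(eigvals)
    # keep the finite negative eigenvalue, as in the Matlab code
    mask = np.isfinite(eigvals) & (eigvals < 0)
    if not mask.any():
        raise RuntimeError("no ellipse-specific solution; fall back to plain least squares")
    a = np.real(eigvecs[:, mask][:, 0])
    A, B, Cc, Dd, E, F = a                       # conic coefficients (Cc/Dd avoid name clashes)
    Q = np.array([[A, B/2, Dd/2], [B/2, Cc, E/2], [Dd/2, E/2, F]])
    return Q, a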

Implementing a linear, binary SVM (support vector machine)

I want to implement a simple SVM classifier, in the case of high-dimensional binary data (text), for which I think a simple linear SVM is best. The reason for implementing it myself is basically that I want to learn how it works, so using a library is not what I want.
The problem is that most tutorials go up to an equation that can be solved as a "quadratic problem", but they never show an actual algorithm! So could you point me either to a very simple implementation I could study, or (better) to a tutorial that goes all the way to the implementation details?
Thanks a lot!
Some pseudocode for the Sequential Minimal Optimization (SMO) method can be found in this paper by John C. Platt: Fast Training of Support Vector Machines using Sequential Minimal Optimization. There is also a Java implementation of the SMO algorithm, which was developed for research and educational purposes (SVM-JAVA).
Other commonly used methods to solve the QP optimization problem include:
constrained conjugate gradients
interior point methods
active set methods
But be aware that some math knowledge is needed to understand these things (Lagrange multipliers, Karush–Kuhn–Tucker conditions, etc.).
Are you interested in using kernels or not? Without kernels, the best way to solve these kinds of optimization problems is through various forms of stochastic gradient descent. A good version is described in http://ttic.uchicago.edu/~shai/papers/ShalevSiSr07.pdf and that has an explicit algorithm.
The explicit algorithm does not work with kernels but can be modified; however, it would be more complex, both in terms of code and runtime complexity.
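To make the linear case concrete, here is a minimal Pegasos-style SGD sketch (my own summary of the update rule in that paper, not the authors' code; lam is the regularization parameter, and labels are assumed to be in {-1, +1}):
import numpy as np
def pegasos_linear_svm(X, y, lam=0.01, n_iter=100000, seed=0):
    # X: (n, d) array of examples, y: labels in {-1, +1}; returns the weight vector w
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)                   # pick one random example
        eta = 1.0 / (lam * t)                 # step-size schedule from the paper
        if y[i] * X[i].dot(w) < 1:            # margin violated: hinge-loss subgradient
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                 # only the regularizer contributes
            w = (1 - eta * lam) * w
        # optional projection onto the ball of radius 1/sqrt(lam), as in the paper
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w
Prediction is then just the sign of np.dot(x_new, w); a bias term can be handled by appending a constant feature.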
Have a look at liblinear, and for non-linear SVMs at libsvm.
The following paper, "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM" (top of page 11), describes the Pegasos algorithm also for kernels. It can be downloaded from http://ttic.uchicago.edu/~nati/Publications/PegasosMPB.pdf
It appears to be a hybrid of coordinate descent and subgradient descent. Also, line 6 of the algorithm is wrong. In the predicate the second appearance of y_i_t should be replaced with y_j instead.
I would like to add a little supplement to the answer about Platt's original work.
There is a somewhat simplified version presented in the Stanford lecture notes, but the derivation of all the formulas has to be found somewhere else (e.g. these random notes I found on the Internet).
If it's OK to deviate from the original implementations, I can propose my own variation of the SMO algorithm, which follows.
import numpy as np

class SVM:
    def __init__(self, kernel='linear', C=10000.0, max_iter=100000, degree=3, gamma=1):
        self.kernel = {'poly': lambda x, y: np.dot(x, y.T)**degree,
                       'rbf': lambda x, y: np.exp(-gamma*np.sum((y - x[:, np.newaxis])**2, axis=-1)),
                       'linear': lambda x, y: np.dot(x, y.T)}[kernel]
        self.C = C
        self.max_iter = max_iter

    def restrict_to_square(self, t, v0, u):
        # clip the step t so that the updated pair of multipliers stays inside the square [0, C]^2
        t = (np.clip(v0 + t*u, 0, self.C) - v0)[1]/u[1]
        return (np.clip(v0 + t*u, 0, self.C) - v0)[0]/u[0]

    def fit(self, X, y):
        self.X = X.copy()
        self.y = y * 2 - 1                          # map {0, 1} labels to {-1, +1}
        self.lambdas = np.zeros_like(self.y, dtype=float)
        self.K = self.kernel(self.X, self.X) * self.y[:, np.newaxis] * self.y   # K[i, j] = y_i * y_j * k(x_i, x_j)
        for _ in range(self.max_iter):
            for idxM in range(len(self.lambdas)):
                idxL = np.random.randint(0, len(self.lambdas))
                Q = self.K[[[idxM, idxM], [idxL, idxL]], [[idxM, idxL], [idxM, idxL]]]
                v0 = self.lambdas[[idxM, idxL]]
                k0 = 1 - np.sum(self.lambdas * self.K[[idxM, idxL]], axis=1)
                u = np.array([-self.y[idxL], self.y[idxM]])
                t_max = np.dot(k0, u) / (np.dot(np.dot(Q, u), u) + 1E-15)
                self.lambdas[[idxM, idxL]] = v0 + u * self.restrict_to_square(t_max, v0, u)
        idx, = np.nonzero(self.lambdas > 1E-15)     # support vectors
        self.b = np.sum((1.0 - np.sum(self.K[idx]*self.lambdas, axis=1)) * self.y[idx]) / len(idx)

    def decision_function(self, X):
        return np.sum(self.kernel(X, self.X) * self.y * self.lambdas, axis=1) + self.b
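A minimal usage sketch on a made-up toy dataset (labels in {0, 1}, which fit maps to {-1, +1}; parameter values are just for illustration):
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = SVM(kernel='linear', C=10.0, max_iter=100)
model.fit(X, y)
pred = (model.decision_function(X) > 0).astype(int)
print('training accuracy:', (pred == y).mean())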
In simple cases it works not much worse than sklearn.svm.SVC; a comparison is shown below (I have posted the code that generates these images on GitHub).
I used quite a different approach to derive the formulas; you may want to check my preprint on ResearchGate for details.
