How does the output value of step 1 result in the value of "0.582"?
I am looking at an example of the usage of backpropagation in order to get a basic understanding of it. However, I fail to understand how the value "0.582" is produced as the output in the example below.
EDIT:
I have attempted the feed-forward pass myself, which results in an output value of "0.5835...".
Now I am unsure whether the example's output value is correct, or whether the method I have used is correct.
EDIT2:
My FF Calculation
f(x) = 1/(1+e^-x)

NodeJ = f( 1*W1j + 0.4*W2j + 0.7*W3j )
NodeJ = f( 1(0.2) + 0.4(0.3) + 0.7(-0.1) ) = f(0.25)
NodeJ = 1/(1+e^-0.25) = 0.562...

NodeI = f( 1*W1i + 0.4*W2i + 0.7*W3i )
NodeI = f( 1(0.1) + 0.4(-0.1) + 0.7(0.2) ) = f(0.25)
NodeI = 1/(1+e^-0.25) = 0.562...

NodeK = f( NodeJ*Wjk + NodeI*Wik )
NodeK = f( 0.562(0.1) + 0.562(0.5) ) = f(0.3372)
NodeK = 1/(1+e^-0.3372) = 0.5835

Output = 0.5835
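For what it's worth, re-running the arithmetic in a small Python sketch (my own check, using the weights from the example) suggests where the two values diverge: the NodeI sum is 1(0.1) + 0.4(-0.1) + 0.7(0.2) = 0.20, not 0.25, and with NodeI = f(0.20) ≈ 0.5498 the output comes out to ≈ 0.582.

import math

def f(x):
    # logistic sigmoid: f(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

node_j = f(1*0.2 + 0.4*0.3 + 0.7*(-0.1))   # f(0.25) ~ 0.5622
node_i = f(1*0.1 + 0.4*(-0.1) + 0.7*0.2)   # f(0.20) ~ 0.5498
node_k = f(node_j*0.1 + node_i*0.5)        # ~ 0.582
print(node_j, node_i, node_k)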
I'm trying to fit a line using a quadratic polynomial, but because the fit produces continuous values, the integer conversion (needed for CartesianIndex) rounds them off, and I lose data at those pixels.
I tried the method here. So I get the new y values as follows:
using Images, Polynomials, Plots, ImageView

img = load("jTjYb.png")
img = Gray.(img)
img = img[end:-1:1, :]              # flip vertically so y grows upward
nodes = findall(img .> 0)           # pixels on the line
xdata = map(p -> p[2], nodes)       # columns
ydata = map(p -> p[1], nodes)       # rows
f = fit(xdata, ydata, 2)            # quadratic fit
ydata_new = round.(Int, f.(xdata))  # integer rows for CartesianIndex
new_line_fitted_img = zeros(size(img))
new_line_fitted_img[CartesianIndex.(ydata_new, xdata)] .= 1   # (row, col) order
imshow(new_line_fitted_img)
which results in a chopped line, as below,
whereas I was expecting a continuous line, as it was in pre-processing.
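To make the cause concrete, a tiny Python sketch (values assumed purely for illustration): wherever the rounded fit jumps by more than one row between adjacent x positions, the skipped rows are never set, which chops the line.

ys = [2.1, 3.9, 5.8, 7.6]        # continuous fit values at x = 1..4
rows = [round(y) for y in ys]    # integer rows for CartesianIndex
print(rows)                      # [2, 4, 6, 8] -- rows 3, 5 and 7 stay empty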
Do you expect the following: the raw image, the fitted polynomial, and their superposition? (Images omitted.)
Code:
using Images, Polynomials

img = load("img.png");
img = Gray.(img)

# cubic polynomial evaluated elementwise; coefficient order matches f.coeffs (ascending)
fx(data, dCoef, cCoef, bCoef, aCoef) = @. data^3 * aCoef + data^2 * bCoef + data * cCoef + dCoef;

function fit_poly(img::Array{<:Gray, 2})
    img = img[end:-1:1, :]        # flip so y grows upward
    nodes = findall(img .> 0)
    xdata = map(p -> p[2], nodes)
    ydata = map(p -> p[1], nodes)
    f = fit(xdata, ydata, 3)      # cubic fit
    xdt = unique(xdata)
    xdt, fx(xdt, f.coeffs...)
end;
function draw_poly!(X, y)
    # shift y to positive indices if the fit went below 1
    the_min = minimum(y)
    if the_min < 0
        y .-= the_min - 1
    end
    initialized_img = Gray.(zeros(maximum(X), maximum(y)))
    initialized_img[CartesianIndex.(X, y)] .= 1
    # fill vertical gaps where consecutive y values jump by 2 or more
    dif = diff(y)
    for i in eachindex(dif)
        the_dif = dif[i]
        if abs(the_dif) >= 2
            segment = the_dif ÷ 2
            initialized_img[i, y[i]:y[i]+segment] .= 1
            initialized_img[i+1, y[i]+segment+1:y[i+1]-1] .= 1
        end
    end
    rotl90(initialized_img)
end;
X, y = fit_poly(img);
y = convert(Vector{Int64}, round.(y));
draw_poly!(X, y)
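A note on the design choice: the gap-filling loop in draw_poly! splits each vertical jump between adjacent x positions in half, attaching the lower half of the gap to column i and the upper half to column i+1, so the drawn curve stays connected without altering the fitted values.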
In pycocotools, in the cocoeval.py script, there is a COCOeval class, and in this class there is an accumulate function for calculating precision and recall. Does anyone know what the npig variable is? Is it something like negatives/positives?
I ask because I saw this formula for recall: Recall = (True Positive)/(True Positive + False Negative).
Can I just use the precision and recall variables inside the dictionary self.eval to get the precision and recall of the model I'm testing, and plot a precision-recall curve?
And is the variable scores the F1 score?
I also don't really understand what T, R, K, A, M are, or what happens with them.
How can I print precision and recall in the terminal?
def accumulate(self, p = None):
    '''
    Accumulate per image evaluation results and store the result in self.eval
    :param p: input params for evaluation
    :return: None
    '''
    print('Accumulating evaluation results...')
    tic = time.time()
    if not self.evalImgs:
        print('Please run evaluate() first')
    # allows input customized parameters
    if p is None:
        p = self.params
    p.catIds = p.catIds if p.useCats == 1 else [-1]
    T = len(p.iouThrs)
    R = len(p.recThrs)
    K = len(p.catIds) if p.useCats else 1
    A = len(p.areaRng)
    M = len(p.maxDets)
    precision = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories
    recall = -np.ones((T,K,A,M))
    scores = -np.ones((T,R,K,A,M))
    # create dictionary for future indexing
    _pe = self._paramsEval
    catIds = _pe.catIds if _pe.useCats else [-1]
    setK = set(catIds)
    setA = set(map(tuple, _pe.areaRng))
    setM = set(_pe.maxDets)
    setI = set(_pe.imgIds)
    # get inds to evaluate
    k_list = [n for n, k in enumerate(p.catIds) if k in setK]
    m_list = [m for n, m in enumerate(p.maxDets) if m in setM]
    a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]
    i_list = [n for n, i in enumerate(p.imgIds) if i in setI]
    I0 = len(_pe.imgIds)
    A0 = len(_pe.areaRng)
    # retrieve E at each category, area range, and max number of detections
    for k, k0 in enumerate(k_list):
        Nk = k0*A0*I0
        for a, a0 in enumerate(a_list):
            Na = a0*I0
            for m, maxDet in enumerate(m_list):
                E = [self.evalImgs[Nk + Na + i] for i in i_list]
                E = [e for e in E if not e is None]
                if len(E) == 0:
                    continue
                dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])
                # different sorting method generates slightly different results.
                # mergesort is used to be consistent as Matlab implementation.
                inds = np.argsort(-dtScores, kind='mergesort')
                dtScoresSorted = dtScores[inds]
                dtm = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds]
                dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet] for e in E], axis=1)[:,inds]
                gtIg = np.concatenate([e['gtIgnore'] for e in E])
                npig = np.count_nonzero(gtIg==0)
                if npig == 0:
                    continue
                tps = np.logical_and(dtm, np.logical_not(dtIg))
                fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg))
                tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)
                fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float)
                for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):
                    tp = np.array(tp)
                    fp = np.array(fp)
                    nd = len(tp)
                    rc = tp / npig
                    pr = tp / (fp+tp+np.spacing(1))
                    q = np.zeros((R,))
                    ss = np.zeros((R,))
                    if nd:
                        recall[t,k,a,m] = rc[-1]
                    else:
                        recall[t,k,a,m] = 0
                    # numpy is slow without cython optimization for accessing elements
                    # use python array gets significant speed improvement
                    pr = pr.tolist(); q = q.tolist()
                    for i in range(nd-1, 0, -1):
                        if pr[i] > pr[i-1]:
                            pr[i-1] = pr[i]
                    inds = np.searchsorted(rc, p.recThrs, side='left')
                    try:
                        for ri, pi in enumerate(inds):
                            q[ri] = pr[pi]
                            ss[ri] = dtScoresSorted[pi]
                    except:
                        pass
                    precision[t,:,k,a,m] = np.array(q)
                    scores[t,:,k,a,m] = np.array(ss)
    self.eval = {
        'params': p,
        'counts': [T, R, K, A, M],
        'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
        'precision': precision,
        'recall': recall,
        'scores': scores,
    }
    toc = time.time()
    print('DONE (t={:0.2f}s).'.format( toc-tic))
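Two things can be read directly off the code above: npig = np.count_nonzero(gtIg==0) is the number of non-ignored ground-truth instances, i.e. the TP + FN denominator in your recall formula (not "negative-positive"); and scores holds the sorted detection confidences (ss[ri] = dtScoresSorted[pi]), not an F1 score. T, R, K, A, M are simply the lengths of iouThrs, recThrs, catIds, areaRng and maxDets. To print or plot precision/recall you can index into self.eval after evaluate() and accumulate(); a minimal sketch (the instance name coco_eval and the chosen indices are my assumptions for illustration):

import numpy as np
import matplotlib.pyplot as plt

# Assumes coco_eval is a COCOeval on which evaluate() and accumulate()
# have already been run. Shapes (from the 'counts' above):
#   precision: [T, R, K, A, M] = iouThrs x recThrs x catIds x areaRng x maxDets
#   recall:    [T, K, A, M]
precision = coco_eval.eval['precision']
recall_thresholds = coco_eval.params.recThrs

# Slice: IoU threshold 0.50 (index 0), category 0, area range 'all'
# (index 0), largest maxDets setting (index -1).
pr = precision[0, :, 0, 0, -1]
valid = pr > -1                      # -1 marks absent categories

print('AP@0.50, category 0: {:.3f}'.format(pr[valid].mean()))

plt.plot(np.asarray(recall_thresholds)[valid], pr[valid])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-recall curve (IoU = 0.50)')
plt.show()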
I am taking the Machine Learning course on Coursera by Andrew Ng. I have written code for logistic regression in Octave, but it is not working. Can someone help me?
I have taken the dataset from the following link:
Titanic survivors
Here is my code:
pkg load io;
[An, Tn, Ra, limits] = xlsread("~/ML/ML Practice/dataset/train_and_test2.csv", "Sheet2", "A2:H1000");
# As per the CSV file, we read columns 1 to 7. The 8th column is Survived, which is what we are going to predict.
X = [An(:, [1:7])];
Y = [An(:, 8)];
X = horzcat(ones(size(X,1), 1), X);
# Initializing theta values as zero for all
#theta = zeros(size(X,2),1);
theta = [-3;1;1;-3;1;1;1;1];
learningRate = -0.00021;
#learningRate = -0.00011;
# Step 1: Calculate the hypothesis
function g_z = estimateHypothesis(X, theta)
  z = theta' * X';
  z = z';
  e_z = -1 * power(2.72, z);   # note: this line is the error fixed in the follow-up below
  denominator = 1 .+ e_z;
  g_z = 1 ./ denominator;
endfunction
# Step 2: Calculate the cost function
function cost = estimateCostFunction(hypothesis, Y)
  log_1 = log(hypothesis);
  log_2 = log(1 .- hypothesis);
  y1 = Y;
  term_1 = y1 .* log_1;
  y2 = 1 .- Y;
  term_2 = y2 .* log_2;
  cost = term_1 + term_2;
  cost = sum(cost);
  # no. of rows
  m = size(Y, 1);
  cost = -1 * (cost/m);
endfunction
# Step 3: Update theta values using gradient descent
function updatedTheta = updateThetaValues(_X, _Y, _theta, _hypothesis, learningRate)
  #s1 = _X * _theta;
  #s2 = s1 - _Y;
  #s3 = _X' * s2;
  # no. of rows
  #m = size(_Y, 1);
  #s4 = (learningRate * s3)/m;
  #updatedTheta = _theta - s4;
  s1 = _hypothesis - _Y;
  s2 = s1 .* _X;
  s3 = sum(s2);
  # no. of rows
  m = size(_Y, 1);
  s4 = (learningRate * s3)/m;
  updatedTheta = _theta .- s4';
endfunction
costVector = [];
iterationVector = [];
for i = 1:1000
  # Step 1
  hypothesis = estimateHypothesis(X, theta);
  #disp("hypothesis");
  #disp(hypothesis);
  # Step 2
  cost = estimateCostFunction(hypothesis, Y);
  costVector = vertcat(costVector, cost);
  #disp("Cost");
  #disp(cost);
  # Step 3: update theta values
  theta = updateThetaValues(X, Y, theta, hypothesis, learningRate);
  iterationVector = vertcat(iterationVector, i);
endfor
function plotGraph(iterationVector, costVector)
  plot(iterationVector, costVector);
  ylabel('Cost Function');
  xlabel('Iteration');
endfunction

plotGraph(iterationVector, costVector);
This is the graph I get when plotting the cost function against the number of iterations.
I am tired of adjusting theta values and the learning rate. Can someone help me solve this problem?
Thanks.
I had made a mathematical error: I should have used either power(2.72, -z) or exp(-z), but instead I used -1 * power(2.72, z). Now I'm getting a proper curve.
Thanks.
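To see why that sign placement matters, a small numpy sketch (illustrative values, my own addition): the broken form amounts to 1/(1 - e^z), which is not a probability at all, while the correct sigmoid stays in (0, 1).

import numpy as np

z = np.array([-2.0, 0.5, 2.0])
correct = 1.0 / (1.0 + np.exp(-z))   # sigmoid: always in (0, 1)
broken = 1.0 / (1.0 - np.exp(z))     # what -1 * power(2.72, z) leads to
print(correct)                       # ~ [0.119 0.622 0.881]
print(broken)                        # ~ [ 1.157 -1.541 -0.157] -- negative "probabilities"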
I understand what this wiki page says (http://en.wikipedia.org/wiki/Multinomial_logistic_regression), but I don't know how to derive the update rules for stochastic gradient descent. Sorry to ask this here (it is really about machine learning theory rather than actual implementation). Could someone provide a solution with an explanation? Thanks in advance!
I happened to write code to implement softmax; I referred mostly to the page http://ufldl.stanford.edu/wiki/index.php/Softmax_Regression
This is the code I wrote in MATLAB; I hope it helps.
function y = sigmoid_multi(weight, x, class_index)
  %% weight: feature_dim * class_num
  %% x: feature_dim * 1
  %% class_index: scalar
  s = eps;   % seed with eps to avoid division by zero (and avoid shadowing built-in sum)
  class_num = size(weight, 2);
  for i = 1:class_num
    s = s + exp(weight(:,i)' * x);
  end
  y = exp(weight(:,class_index)' * x) / s;
end
function g = gradient(train_patterns, train_labels, weight)
  m = size(train_patterns, 2);
  class_num = size(weight, 2);
  g = zeros(size(weight));
  for j = 1:class_num
    for i = 1:m
      % gradient of the softmax objective (see the UFLDL page):
      % grad_{w_j} J = -(1/m) * sum_i (1{y_i == j} - p_j(x_i)) * x_i
      indicator = double(train_labels(i) == j);
      p = sigmoid_multi(weight, train_patterns(:,i), j);
      g(:,j) = g(:,j) + (indicator - p) * train_patterns(:,i);
    end
  end
  g = -(g/m);
end
function J = object_function(train_patterns, train_labels, weight)
  m = size(train_patterns, 2);
  J = 0;
  for i = 1:m
    J = J + log( sigmoid_multi(weight, train_patterns(:,i), train_labels(i)) + eps );
  end
  J = -(J/m);
end
function weight = multi_logistic_train(train_patterns, train_labels, alpha)
  %% weight: feature_dim * class_num
  %% train_patterns: feature_dim * sample_num
  %% train_labels: 1 * sample_num
  %% alpha: scalar learning rate
  class_num = length(unique(train_labels));
  m = size(train_patterns, 2);  %% sample_num
  n = size(train_patterns, 1);  %% feature_dim
  weight = rand(n, class_num);
  for i = 1:40
    J = object_function(train_patterns, train_labels, weight);
    fprintf('objective function value: %f\n', J);
    weight = weight - alpha * gradient(train_patterns, train_labels, weight);
  end
end
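For the stochastic update rule the question actually asks about, here is a minimal Python sketch of my own (not from the post above): for a single sample (x, y), the gradient of -log p_y with respect to weight column w_j is (p_j - 1{y == j}) * x, and SGD steps against it.

import numpy as np

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sgd_step(W, x, y, alpha):
    # One stochastic update for multinomial logistic regression.
    # W: (feature_dim, class_num), x: (feature_dim,), y: int class label.
    p = softmax(W.T @ x)         # class probabilities, shape (class_num,)
    p[y] -= 1.0                  # p - one_hot(y): per-sample gradient factor
    W -= alpha * np.outer(x, p)  # W[:, j] -= alpha * (p_j - 1{y == j}) * x
    return W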
I am writing JPEG compression in Scilab (an equivalent of MATLAB) using the function imdct. That function uses the DCT function from OpenCV, and I don't know which equation the dct function uses.
lenna by imdct (image omitted)
lenna by my_function (image omitted)
You can compare "lenna by imdct", produced by the internal function, with "lenna by my_function", produced by my own Scilab function.
Here is my Scilab code:
function vystup = dct_rovnice(vstup)
  // vstup = input block, vystup = output coefficients
  [M,N] = size(vstup)
  for u = 1:M
    for v = 1:N
      cos_celkem = 0;  // running sum of the cosine terms
      for m = 1:M
        for n = 1:N
          pom = double(vstup(m,n));
          cos_citatel1 = cos(((2*m) * u * %pi)/(2*M));
          cos_citatel2 = cos(((2*n) * v * %pi)/(2*N));
          cos_celkem = cos_celkem + (pom * cos_citatel1 * cos_citatel2);
        end
      end
      // normalization factors
      c_u = 0;
      c_v = 0;
      if u == 1 then
        c_u = 1 / sqrt(2);
      else
        c_u = 1;
      end
      if v == 1 then
        c_v = 1 / sqrt(2);
      else
        c_v = 1;
      end
      vystup(u,v) = (2/sqrt(N*M)) * c_u * c_v * cos_celkem;
    end
  end
endfunction
function vystup = dct_prevod(vstup)
  // vstup = input image (Y, Cb, Cr planes), vystup = DCT coefficients per 8x8 block
  Y  = vstup(:,:,1);
  Cb = vstup(:,:,2);
  Cr = vstup(:,:,3);
  [rows,columns] = size(vstup)
  vystup = zeros(rows,columns,3)
  for y = 1:8:rows-7
    for x = 1:8:columns-7
      blok_Y  = Y(y:y+7, x:x+7)
      blok_Cb = Cb(y:y+7, x:x+7)
      blok_Cr = Cr(y:y+7, x:x+7)
      blok_dct_Y  = dct_rovnice(blok_Y)
      blok_dct_Cb = dct_rovnice(blok_Cb)
      blok_dct_Cr = dct_rovnice(blok_Cr)
      vystup(y:y+7, x:x+7, 1) = blok_dct_Y
      vystup(y:y+7, x:x+7, 2) = blok_dct_Cb
      vystup(y:y+7, x:x+7, 3) = blok_dct_Cr
    end
  end
  vystup = uint8(vystup)
endfunction
The equation I used (image omitted) corresponds to what the code computes:
F(u,v) = (2/sqrt(M*N)) * C(u) * C(v) * sum_{m=1..M} sum_{n=1..N} f(m,n) * cos(2*m*u*pi/(2*M)) * cos(2*n*v*pi/(2*N)), with C(1) = 1/sqrt(2) and C(u) = C(v) = 1 otherwise.
The issue seems to be the use of a different normalization of the resulting coefficients.
Per its documentation, the OpenCV library computes the forward 2-D transform as Y = C * X * C', where the N x N basis matrix C (N = 8 in your case, indices 0-based) is defined as
C(j,k) = sqrt(alpha_j / N) * cos(pi * (2*k + 1) * j / (2*N)), with alpha_0 = 1 and alpha_j = 2 for j > 0.
Take care: there are several definitions of the DCT (DCT-I, DCT-II, DCT-III and DCT-IV, normalized and un-normalized).
Moreover, have you tried the Scilab built-in function dct (from FFTW), which can be applied straightforwardly to images?
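As a cross-check, one can compare a hand-written DCT against a library implementation; a small Python sketch using SciPy's orthonormal DCT-II (the JPEG convention) as the reference, with an arbitrary test block:

import numpy as np
from scipy.fftpack import dct

# Orthonormal 2-D DCT-II (the JPEG convention) of an 8x8 block,
# as a reference for validating a hand-written implementation.
block = np.arange(64, dtype=float).reshape(8, 8)
ref = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# For an 8x8 block the DC term equals 8 * the block mean.
print(ref[0, 0], 8 * block.mean())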