How to perform power() function on RGB matrix in matlab - image-processing

In a project, I need to apply the power() function to an RGB matrix in a MATLAB GUI program, but MATLAB keeps returning an error message.
Below is the code and the error message
img_src = getappdata(handles.figure_pjimage, 'img_src');
R=img_src(:,:,1);
G=img_src(:,:,2);
B=img_src(:,:,3);
C = 12;
gamma = 0.8;
R1 = C * power(R, gamma);
G1 = C * power(G, gamma);
B1 = C * power(B, gamma);
R2 = power((R1 / C), (1/gamma));
G2 = power((G1 / C), (1/gamma));
B2 = power((B1 / C), (1/gamma));
disp(max(R2));
new_img = cat(3,R2,G2,B2);
axes(handles.axes_dst);
imshow(new_img);
And here is the error message
Integers can only be raised to positive integral powers.
However, when I try the power() function in the command window, it works:
>> A = [2,2
2,2]
A =
2 2
2 2
>> power(A,0.4)
ans =
1.3195 1.3195
1.3195 1.3195
Please tell me if any of you have a solution. Thanks.

Your RGB matrices are probably stored in an integer format such as uint8 or uint16, as this is the output format of the image import functions for a lot of file types. Since power refuses to violate the integer format definition, which it would for fractional powers, it throws the error.
So basically you only have to change the three channel-extraction lines to:
R = double( img_src(:,:,1) );
G = double( img_src(:,:,2) );
B = double( img_src(:,:,3) );
and your code should work as desired.
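If you would rather keep the values in the usual display range, a minimal alternative sketch (assuming img_src is uint8 and the Image Processing Toolbox is available for im2double):
img = im2double(img_src);   % rescales uint8 values into [0, 1] as double
gamma = 0.8;
new_img = img .^ gamma;     % element-wise fractional power now works
imshow(new_img);            % doubles in [0, 1] display directly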

Related

Octave fminunc doesn't converge

I'm trying to use fminunc in Octave for a logistic problem, but it doesn't work. It says that I didn't define variables, but actually I did. If I define the variables directly in the cost function, and not in the main script, it doesn't give any error, but then the function doesn't really work: the exitFlag is equal to -3 and it doesn't converge at all.
Here's my function:
function [jVal, gradient] = cost(theta, X, y)
X = [1,0.14,0.09,0.58,0.39,0,0.55,0.23,0.64;1,-0.57,-0.54,-0.16,0.21,0,-0.11,-0.61,-0.35;1,0.42,0.45,-0.41,-0.6,0,-0.44,0.38,-0.29];
y = [1;0;1];
theta = [0.8;0.2;0.6;0.3;0.4;0.5;0.6;0.2;0.4];
jVal = 0;
jVal = costFunction2(X, y, theta); %this is another function that gives me jVal. I'm quite sure it is
%correct because I use it also with other algorithms and it
%works perfectly
m = length(y);
xSize = size(X, 2);
gradient = zeros(xSize, 1);
sig = X * theta;
h = 1 ./(1 + exp(-sig));
for i = 1:m
    for j = 1:xSize
        gradient(j) = (1/m) * sum(h(i) - y(i)) .* X(i, j);
    end
end
Here's my main:
theta = [0.8;0.2;0.6;0.3;0.4;0.5;0.6;0.2;0.4];
options = optimset('GradObj', 'on', 'MaxIter', 100);
[optTheta, functionVal, exitFlag] = fminunc(@cost, theta, options)
When I run it:
optTheta =
0.80000
0.20000
0.60000
0.30000
0.40000
0.50000
0.60000
0.20000
0.40000
functionVal = 0.15967
exitFlag = -3
How can I resolve this problem?
You are not in fact using fminunc correctly. From the documentation:
-- fminunc (FCN, X0)
-- fminunc (FCN, X0, OPTIONS)
FCN should accept a vector (array) defining the unknown variables,
and return the objective function value, optionally with gradient.
'fminunc' attempts to determine a vector X such that 'FCN (X)' is a
local minimum.
What you are passing is not a handle to a function that accepts a single vector argument. Instead, what you are passing (i.e. @cost) is a handle to a function that takes three arguments.
You need to 'convert' this into a handle to a function that takes only one input, and does what you want under the hood. The easiest way to do this is by 'wrapping' your cost function into an anonymous function that only takes one argument, and calls the cost function in the appropriate way, e.g.
fminunc( @(t) cost(t, X, y), theta, options )
Note: This assumes X and y are defined in the scope where you do this 'wrapping' business
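A minimal usage sketch (with made-up data, and assuming cost actually uses its theta, X, y arguments rather than overwriting them, as the posted version does):
X = [1 0.14 0.09; 1 -0.57 -0.54; 1 0.42 0.45];   % example design matrix
y = [1; 0; 1];
theta0 = zeros(size(X, 2), 1);
options = optimset('GradObj', 'on', 'MaxIter', 100);
[optTheta, functionVal, exitFlag] = fminunc(@(t) cost(t, X, y), theta0, options)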

How to find accumulator matrix for line in an image?

I am a newbie in the field of CV and IP. I was writing the Hough transform algorithm for finding lines. I am not getting what is wrong with this code, in which I'm trying to build the accumulator array.
numRowsInBW = size(BW,1);
numColsInBW = size(BW,2);
%length of the diagonal of image
D = sqrt((numRowsInBW - 1)^2 + (numColsInBW - 1)^2);
%number of rows in the accumulator array
nrho = 2*(ceil(D/rhoStep)) + 1;
%number of cols in the accumulator array
ntheta = length(theta);
H = zeros(nrho,ntheta);
%this means the particular pixel is white,
%i.e. an edge pixel
[allrows allcols] = find(BW == 1);
for i = (1 : size(allrows))
y = allrows(i);
x = allcols(i);
for th = (1 : 180)
d = floor(x*cos(th) - y*sin(th));
H(d+floor(nrho/2),th) += 1;
end
end
I'm applying this to a simple image, but the accumulator I get does not match the expected result (images omitted).
I am not able to find the mistake. Please help me. Thanks in advance.
There are several issues with your code. The main issue is here:
ntheta = length(theta);
% ...
for i = (1 : size(allrows))
% ...
for th = (1 : 180)
d = floor(x*cos(th) - y*sin(th));
% ...
th seems to be an angle in degrees, but cos and sin expect radians, so cos(th) is meaningless here. Use cosd and sind instead.
Another issue is that th iterates from 1 to 180, but there is no guarantee that ntheta is 180. So, loop as follows instead:
for i = 1 : size(allrows)
% ...
for j = 1 : numel(theta)
th = theta(j);
% ...
and use th as the angle, and j as the index into H.
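Putting these fixes together, a rough sketch of the voting loop might look like the following (assuming rhoStep = 1 and theta given in degrees; ceil(nrho/2) is used as the offset so the index stays at least 1):
for i = 1 : numel(allrows)
    y = allrows(i);
    x = allcols(i);
    for j = 1 : numel(theta)
        th = theta(j);                       % angle in degrees
        d = floor(x*cosd(th) - y*sind(th));  % rho bin (question's formula)
        H(d + ceil(nrho/2), j) = H(d + ceil(nrho/2), j) + 1;
    end
end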
Finally, given your image and your expected output, you should apply some edge detection first (Canny, for example). Maybe you already did this?
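If not, a one-line sketch with the Image Processing Toolbox (assuming a grayscale image I) would be:
BW = edge(I, 'canny');   % binary edge map to feed into the loop above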

Fortran: matrix from symbolic functions

I need to transform a set of symbolic equations defining relations between \vec(a) = (a,b,c) and \vec(x) = (x,y), e.g.
a = 1./2 * x
b = -1./2 * x
c = 1./2 * y
into a matrix form so that I get the matrix A, when I write \vec(a) = A * \vec(x):
/ a \   /  1./2    0   \
| b | = | -1./2    0   | * / x \
\ c /   \   0     1./2 /   \ y /
Now the problem is, that the whole things needs to be in Fortran: reading the equations and transforming them to the matrix A.
I have found the module fparser (https://www.sourceforge.net/projects/fparser/) to evaluate symbolic math expressions, but I could use some help figuring out how to build these matrices most efficiently without doing too much string parsing...
An approach (workaround?) in 100% pure Fortran might be to evaluate the equations once per unit vector x = e_i, so that each call yields the i-th column of A...
! calc.f90
program main
    implicit none
    real avec( 3 ), xvec( 2 ), A( 3, 2 )
    integer i
    do i = 1, size(xvec)
        xvec = 0 ; xvec(i) = 1.0   ! probe with the i-th unit vector
        call calc()
        A(:,i) = avec              ! each probe yields one column of A
    enddo
    do i = 1, size(avec)
        print *, A(i,:)
    enddo
contains
    subroutine calc()
        real a, b, c, x, y
        x = xvec(1)
        y = xvec(2)
        include 'eq.inc'
        avec = [ a, b, c ]
    end subroutine
end program
eq.inc:
a = 1./2 * x
b = -1./2 * x
c = 1./2 * y
$ gfortran calc.f90 && ./a.out
0.500000000 0.00000000
-0.500000000 -0.00000000
0.00000000 0.500000000
Although it was long ago, I want to post what helped me solve the issue:
I used fparser (http://fparser.sourceforge.net/).

Incrementation in Lua

I am playing a little bit with Lua.
I came across the following code snippet that has unexpected behavior:
a = 3;
b = 5;
c = a-- * b++; // some computation
print(a, b, c);
Lua runs the program without any error but does not print 2 6 15 as expected. Why?
-- starts a single line comment, like # or // in other languages.
So it's equivalent to:
a = 3;
b = 5;
c = a
Lua doesn't increment and decrement with ++ and --. -- will instead start a comment.
There is no -- or ++ in Lua, so you have to use a = a + 1 or a = a - 1 instead.
If you want 2 6 15 as the output, try this code:
a = 3
b = 5
c = a * b
a = a - 1
b = b + 1
print(a, b, c)
Your original code will give
3 5 3
because the 3rd line is evaluated as c = a.
Why? Because in Lua, comments start with --. Therefore, c = a-- * b++; // some computation is evaluated in two parts:
expression: c = a
comment: * b++; // some computation
There are a few problems in your Lua code:
a = 3;
b = 5;
c = a-- * b++; // some computation
print(a, b, c);
One, Lua does not currently support incrementation; you have to write it out, e.g.:
c = a * b
a = a - 1
b = b + 1
Two, -- in Lua is a comment, so using a-- just translates to a, and the comment is * b++; // some computation.
Three, // does not work in Lua, use -- for comments.
Also it's optional to use ; at the end of every line.
You can do the following:
local default = 0
local max = 100
while default < max do
default = default + 1
print(default)
end
EDIT: Using SharpLua (a Lua implementation in C#), incrementing/decrementing can be done in shorthand like so:
a+=1 --increment by some value
a-=1 --decrement by some value
In addition, multiplication/division can be done like so:
a*=2 --multiply by some value
a/=2 --divide by some value
The same method can be used if adding, subtracting, multiplying or dividing one variable by another, like so:
a+=b
a-=b
a/=b
a*=b
This is much simpler and tidier and I think a lot less complicated, but not everybody will share my view.
Hope this helps!

Gradient in continuous regression using a neural network

I'm trying to implement a regression NN that has 3 layers (1 input, 1 hidden and 1 output layer with a continuous result). As a basis I took a classification NN from the coursera.org class, but changed the cost function and gradient calculation to fit a regression problem (and not a classification one):
My nnCostFunction now is:
function [J grad] = nnCostFunctionLinear(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
m = size(X, 1);
a1 = X;
a1 = [ones(m, 1) a1];
a2 = a1 * Theta1';
a2 = [ones(m, 1) a2];
a3 = a2 * Theta2';
Y = y;
J = 1/(2*m)*sum(sum((a3 - Y).^2))
th1 = Theta1;
th1(:,1) = 0; %set bias = 0 in reg. formula
th2 = Theta2;
th2(:,1) = 0;
t1 = th1.^2;
t2 = th2.^2;
th = sum(sum(t1)) + sum(sum(t2));
th = lambda * th / (2*m);
J = J + th; %regularization
del_3 = a3 - Y;
t1 = del_3'*a2;
Theta2_grad = 2*(t1)/m + lambda*th2/m;
t1 = del_3 * Theta2;
del_2 = t1 .* a2;
del_2 = del_2(:,2:end);
t1 = del_2'*a1;
Theta1_grad = 2*(t1)/m + lambda*th1/m;
grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
Then I use this function with the fmincg algorithm, but fmincg stops after the first few iterations. I think my gradient is wrong, but I can't find the error.
Can anybody help?
If I understand correctly, your first block of code (shown below) -
m = size(X, 1);
a1 = X;
a1 = [ones(m, 1) a1];
a2 = a1 * Theta1';
a2 = [ones(m, 1) a2];
a3 = a2 * Theta2';
Y = y;
is to get the output a(3) at the output layer.
Ng's slides about NNs compute a(3) differently from what your code presents (the slide is not reproduced here):
in the middle/output layer, you are not applying the activation function g, e.g., a sigmoid function.
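For reference, a sketch of the forward pass with the activation included on the hidden layer (keeping a linear output for regression, as the answers below suggest) might be:
a1 = [ones(m, 1) X];
z2 = a1 * Theta1';
a2 = [ones(m, 1) 1 ./ (1 + exp(-z2))];   % sigmoid activation on the hidden layer
a3 = a2 * Theta2';                       % linear output for regression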
In terms of the cost function J without regularization terms, Ng's slides give the logistic cost
J = -(1/m) * sum( y .* log(a3) + (1 - y) .* log(1 - a3) )
so I don't understand how you can compute it using:
J = 1/(2*m)*sum(sum((a3 - Y).^2))
because you are not including the log function at all.
Mikhaill, I've been playing with a NN for continuous regression as well, and had similar issues at some point. The best thing to do here would be to test the gradient computation against a numerical calculation before running the model. If that's not correct, fmincg won't be able to train the model. (Btw, I discourage you from using the numerical gradient for training, as the time involved is much bigger.)
Taking into account that you took this idea from Ng's Coursera class, I'll implement a possible solution for you to try, using the same notation for Octave.
% Cost function without regularization.
J = 1/(2*m)*sum((a3-Y).^2);
% In case it's needed, regularization term is added (i.e. for Training).
if (reg==true);
J=J+lambda/2/m*(sum(sum(Theta1(:,2:end).^2))+sum(sum(Theta2(:,2:end).^2)));
endif;
% Derivatives are computed for layer 2 and 3.
d3 = a3 - Y;
d2=d3*Theta2(:,2:end);
% Theta grad is computed without regularization.
Theta1_grad=(d2'*a1)./m;
Theta2_grad=(d3'*a2)./m;
% Regularization is added to grad computation.
Theta1_grad(:,2:end)=Theta1_grad(:,2:end)+(lambda/m).*Theta1(:,2:end);
Theta2_grad(:,2:end)=Theta2_grad(:,2:end)+(lambda/m).*Theta2(:,2:end);
% Unroll gradients.
grad = [Theta1_grad(:) ; Theta2_grad(:)];
Note that, since you have taken out all the sigmoid activation, the derivative calculation is quite simple and results in a simplification of the original code.
Next steps:
1. Check this code to understand if it makes sense to your problem.
2. Use gradient checking to test gradient calculation (a sketch follows below).
3. Finally, use fmincg and check you get different results.
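A minimal central-difference check, assuming nnCostFunctionLinear and its inputs as defined above, might look like:
epsilon = 1e-4;
numgrad = zeros(size(nn_params));
for p = 1:numel(nn_params)
    e = zeros(size(nn_params));
    e(p) = epsilon;
    loss1 = nnCostFunctionLinear(nn_params - e, input_layer_size, ...
                hidden_layer_size, num_labels, X, y, lambda);
    loss2 = nnCostFunctionLinear(nn_params + e, input_layer_size, ...
                hidden_layer_size, num_labels, X, y, lambda);
    numgrad(p) = (loss2 - loss1) / (2 * epsilon);   % central difference
end
[J grad] = nnCostFunctionLinear(nn_params, input_layer_size, ...
                hidden_layer_size, num_labels, X, y, lambda);
disp(norm(numgrad - grad) / norm(numgrad + grad));  % should be tiny, e.g. < 1e-6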
Try including the sigmoid function when computing the second-layer (hidden layer) values, and avoid the sigmoid when calculating the target (output) value.
function [J grad] = nnCostFunction1(nnParams, ...
inputLayerSize, ...
hiddenLayerSize, ...
numLabels, ...
X, y, lambda)
Theta1 = reshape(nnParams(1:hiddenLayerSize * (inputLayerSize + 1)), ...
hiddenLayerSize, (inputLayerSize + 1));
Theta2 = reshape(nnParams((1 + (hiddenLayerSize * (inputLayerSize + 1))):end), ...
numLabels, (hiddenLayerSize + 1));
Theta1Grad = zeros(size(Theta1));
Theta2Grad = zeros(size(Theta2));
m = size(X,1);
a1 = [ones(m, 1) X]';
z2 = Theta1 * a1;
a2 = sigmoid(z2);
a2 = [ones(1, m); a2];
z3 = Theta2 * a2;
a3 = z3;
Y = y';
r1 = lambda / (2 * m) * sum(sum(Theta1(:, 2:end) .* Theta1(:, 2:end)));
r2 = lambda / (2 * m) * sum(sum(Theta2(:, 2:end) .* Theta2(:, 2:end)));
J = 1 / ( 2 * m ) * (a3 - Y) * (a3 - Y)' + r1 + r2;
delta3 = a3 - Y;
delta2 = (Theta2' * delta3) .* sigmoidGradient([ones(1, m); z2]);
delta2 = delta2(2:end, :);
Theta2Grad = 1 / m * (delta3 * a2');
Theta2Grad(:, 2:end) = Theta2Grad(:, 2:end) + lambda / m * Theta2(:, 2:end);
Theta1Grad = 1 / m * (delta2 * a1');
Theta1Grad(:, 2:end) = Theta1Grad(:, 2:end) + lambda / m * Theta1(:, 2:end);
grad = [Theta1Grad(:) ; Theta2Grad(:)];
end
Normalize the inputs before passing them to nnCostFunction.
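For example, a hypothetical z-score normalization (mu, sigma, Xnorm are illustrative names):
mu = mean(X);
sigma = std(X);
Xnorm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma);   % zero mean, unit variance per column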
In accordance with the Week 5 Lecture Notes guideline for a Linear System NN, you should make the following changes to the initial code (a combined sketch follows the list):
Remove num_labels or make it 1 (in reshape() as well)
No need to convert y into a logical matrix
For a2 - replace the sigmoid() function with tanh()
In the d2 calculation - replace sigmoidGradient(z2) with (1-tanh(z2).^2)
Remove the sigmoid from the output layer (a3 = z3)
Replace the cost function in the unregularized portion with the linear one: J = (1/(2*m))*sum((a3-y).^2)
Create predictLinear(): use the predict() function as a basis; replace sigmoid with tanh() for the first-layer hypothesis, remove the second sigmoid for the second-layer hypothesis, remove the line with the max() function, and use the output of the hidden-layer hypothesis as the prediction result
Verify your nnCostFunctionLinear() on the test case from the lecture notes
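A hedged sketch of the core of those changes (names follow the course notation; this is not a complete function):
a1 = [ones(m,1) X];
z2 = a1 * Theta1';
a2 = [ones(m,1) tanh(z2)];            % tanh() instead of sigmoid()
a3 = a2 * Theta2';                    % linear output layer (a3 = z3)
J = (1/(2*m)) * sum((a3 - y).^2);     % linear (squared-error) cost
d3 = a3 - y;
d2 = (d3 * Theta2(:,2:end)) .* (1 - tanh(z2).^2);   % tanh gradient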
