Fortran: matrix from symbolic functions - parsing

I need to transform a set of symbolic equations defining relations between \vec(a) = (a,b,c) and \vec(x) = (x,y), e.g.
a = 1./2 * x
b = -1./2 * x
c = 1./2 * y
into matrix form, so that I get the matrix A when I write \vec(a) = A * \vec(x):
/ a \   /  1./2   0    \   / x \
| b | = | -1./2   0    | * \ y /
\ c /   \   0     1./2 /
Now the problem is that the whole thing needs to be in Fortran: reading the equations and transforming them into the matrix A.
I have found the module fparser (https://www.sourceforge.net/projects/fparser/) to evaluate symbolic math expressions, but I could use some help figuring out how to build these matrices most efficiently without doing too much string parsing...

An approach (workaround?) in 100% pure Fortran might be...
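The idea (a sketch that relies only on the equations being linear in x and y with no constant term): for \vec(a) = A * \vec(x), the i-th column of A is the image of the i-th unit vector,

A(:,i) = A * e_i,   with e_1 = (1,0), e_2 = (0,1)

so evaluating the equations once per unit vector recovers A column by column, with no string parsing at all: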
! calc.f90
program main
  implicit none
  real avec(3), xvec(2), A(3,2)
  integer i

  ! evaluate the equations once per unit vector; each run yields one column of A
  do i = 1, size(xvec)
     xvec = 0 ; xvec(i) = 1.0
     call calc()
     A(:,i) = avec
  enddo

  do i = 1, size(avec)
     print *, A(i,:)
  enddo

contains

  subroutine calc()
    real a, b, c, x, y
    x = xvec(1)
    y = xvec(2)
    include 'eq.inc' ! the equations, included as plain Fortran source
    avec = [a, b, c]
  end subroutine

end
eq.inc:
a = 1./2 * x
b = -1./2 * x
c = 1./2 * y
$ gfortran calc.f90 && ./a.out
0.500000000 0.00000000
-0.500000000 -0.00000000
0.00000000 0.500000000

Although it was long ago, I want to post what helped me solve the issue:
I used fparser (http://fparser.sourceforge.net/).

Related

High-order derivatives in Lua?

I was messing around with higher-order derivatives in Lua (I haven't tested this in any other language) and found that the lower you set delta, the sooner it will output 0.0. Look at this code and then the output.
Why does this happen?
function derivate (f, delta)
  delta = delta or 1e-4
  return function(x) return (f(x + delta) - f(x)) / delta end
end
c = derivate ( math.cos )
-- c is now -sinx
d = derivate (c)
-- d is now -cosx
e = derivate (d)
-- e is now sinx
f = derivate (e)
-- f is now cosx
print(math.cos(1.23 ), c(1.23) , d(1.23) , e(1.23) , f(1.23) )
output :
0.3342377271245 -0.94250551224695 -0.3341434795523 0.94257934790676 0.0
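A sketch of the likely cause (assuming Lua numbers are IEEE doubles, the default): each nesting divides by delta once, so f above divides a fourth-order difference of math.cos by delta^4 = 1e-16, while the differences in the numerator shrink below the ~2.2e-16 relative resolution of a double and cancel to exactly 0.0. Passing a larger delta through the existing second parameter trades some truncation error for far less cancellation:
c = derivate(math.cos, 1e-2)
d = derivate(c, 1e-2)
e = derivate(d, 1e-2)
f = derivate(e, 1e-2)
print(f(1.23)) -- now close to math.cos(1.23) ~ 0.334 rather than 0.0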

"Bitwise AND" in Lua

I'm trying to translate some code from C to Lua and I'm facing a problem.
How can I translate a Bitwise AND in Lua?
The source C code contains:
if ((command&0x80)==0)
...
How can this be done in Lua?
I am using Lua 5.1.4-8
Implementation of bitwise operations in Lua 5.1 for non-negative 32-bit integers
OR, XOR, AND = 1, 3, 4 -- operator codes: oper % (1 + bitA + bitB) reproduces each truth table
function bitoper(a, b, oper)
  local r, m, s = 0, 2^31
  repeat
    s, a, b = a+b+m, a%m, b%m -- strip the current bit from a and b; s-a-b equals m*(1 + bitA + bitB)
    r, m = r + m*oper % (s-a-b), m/2 -- adds m exactly when the operator's truth table gives 1
  until m < 1
  return r
end
print(bitoper(6,3,OR)) --> 7
print(bitoper(6,3,XOR)) --> 5
print(bitoper(6,3,AND)) --> 2
Here is a basic, isolated bitwise-and implementation in pure Lua 5.1:
function bitand(a, b)
  local result = 0
  local bitval = 1
  while a > 0 and b > 0 do
    if a % 2 == 1 and b % 2 == 1 then -- test the rightmost bits
      result = result + bitval -- set the current bit
    end
    bitval = bitval * 2 -- shift left
    a = math.floor(a/2) -- shift right
    b = math.floor(b/2)
  end
  return result
end
usage:
print(bitand(tonumber("1101", 2), tonumber("1001", 2))) -- prints 9 (1001)
Here's an example of how I bitwise-AND a value with the constant 0x8000:
result = (value % 65536) - (value % 32768) -- bitwise and 0x8000
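The same remainder trick generalizes to testing any single bit: value % 2^(n+1) keeps bits 0..n and value % 2^n keeps bits 0..n-1, so their difference isolates bit n. A small helper built on that identity (testbit is just an illustrative name, not a standard function):
-- true if bit n (0-based) of a non-negative integer is set
function testbit(value, n)
  return (value % 2^(n+1)) - (value % 2^n) > 0
end
print(testbit(0x80, 7)) --> true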
In case you use Adobe Lightroom Lua, the Lightroom SDK contains the LrMath.bitAnd() method for the "bitwise AND" operation:
-- x = a AND b
local a = 11
local b = 6
local x = import 'LrMath'.bitAnd(a, b)
-- x is 2
And there are also LrMath.bitOr(a, b) and LrMath.bitXor(a, b) methods for "bitwise OR" and "bitwise XOR" operations.
This answer is specifically for Lua 5.1.x, where there is no native bitwise operator, so you need a library that provides bit.band() (e.g. LuaBitOp, which is also built into LuaJIT). You can use
if (bit.band(command, 0x80) == 0) then
...
In Lua 5.3 and onwards it's very straightforward, since the language has native bitwise operators...
print(5 & 6)
hope that helped 😉

How can I fix this issue with my Mandelbrot fractal generator?

I've been working on a project that renders a Mandelbrot fractal. For those of you who know, it is generated by iterating through the following function, where c is the point on the complex plane:
function f(c, z) return z^2 + c end
Iterating through that function produces the following fractal (ignore the color):
When you change the function to this, (z raised to the third power)
function f(c, z) return z^3 + c end
the fractal should render like so (again, the color doesn't matter; image source: uoguelph.ca):
However, when I raised z to the power of 3, I got an image extremely similar to the one for z raised to the power of 2. How can I make the fractal render correctly? This is the code where the iterations are done (the variables real and imaginary simply scale the screen from -2 to 2):
-- loop through each pixel, col = column, row = row
-- note: here z holds the real part and c the imaginary part of the iterate
local real = (col - zoomCol) * 4 / width
local imaginary = (row - zoomRow) * 4 / width
local z, c, iter = 0, 0, 0
while math.sqrt(z^2 + c^2) <= 2 and iter < maxIter do
  local zNew = z^2 - c^2 + real
  c = 2*z*c + imaginary
  z = zNew
  iter = iter + 1
end
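The loop above hardcodes the expansion of the square: zNew = z^2 - c^2 + real and c = 2*z*c + imaginary are exactly (z + ci)^2 plus the constant, which is why changing an exponent elsewhere has no effect. Raising the power needs a new expansion; from (x + yi)^3 = (x^3 - 3xy^2) + (3x^2y - y^3)i, the cubic update (keeping the same variable names, as a sketch) would be:
local zNew = z^3 - 3*z*c^2 + real
c = 3*z^2*c - c^3 + imaginary
z = zNew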
So I recently decided to remake a Mandelbrot fractal generator, and it was MUCH more successful than my attempt last time, as my programming skills have increased with practice.
I decided to generalize the Mandelbrot function using recursion, for anyone who wants it. So, for example, you can do f(z, c) = z^2 + c or f(z, c) = z^3 + c.
Here it is for anyone that may need it:
function raise(r, i, cr, ci, pow, br, bi)
  br = br or r -- remember the base z on the first call;
  bi = bi or i -- squaring every step would compute z^(2^(pow-1)), not z^pow
  if pow == 1 then
    return r + cr, i + ci
  end
  -- one complex multiplication of the accumulator (r, i) by the base (br, bi) per power
  return raise(r*br - i*bi, r*bi + i*br, cr, ci, pow - 1, br, bi)
end
and it's used like this:
r, i = raise(r, i, CONSTANT_REAL_PART, CONSTANT_IMAG_PART, POWER)
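A quick sanity check with hand-picked values (not from the original post): (1 + 2i)^3 = -11 - 2i, so with a zero constant term the call should return those two parts:
print(raise(1, 2, 0, 0, 3)) --> -11  -2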

How to perform the power() function on an RGB matrix in MATLAB

In a project, I need to apply the power() function to an RGB matrix in a MATLAB GUI program, but MATLAB keeps returning an error message.
Below is the code and the error message
img_src = getappdata(handles.figure_pjimage, 'img_src');
R=img_src(:,:,1);
G=img_src(:,:,2);
B=img_src(:,:,3);
C = 12;
gamma = 0.8;
R1 = C * power(R, gamma);
G1 = C * power(G, gamma);
B1 = C * power(B, gamma);
R2 = power((R1 / C), (1/gamma));
G2 = power((G1 / C), (1/gamma));
B2 = power((B1 / C), (1/gamma));
disp(max(R2));
new_img = cat(3,R2,G2,B2);
axes(handles.axes_dst);
imshow(new_img);
And here is the error message
Integers can only be raised to positive integral powers.
However, when I try to use the power() function in the command window, it works:
>> A = [2,2
2,2]
A =
     2     2
     2     2
>> power(A,0.4)
ans =
    1.3195    1.3195
    1.3195    1.3195
Please tell me if any of you know the solution, thanks.
Your RGB matrices are probably of an integer class such as uint8 or uint16, as that is what image import functions return for a lot of file types. Since power() refuses to produce a result that would violate the integer class, which a fractional power would, it throws the error.
So basically you only have to change lines 2-4 to:
R = double( img_src(:,:,1) );
G = double( img_src(:,:,2) );
B = double( img_src(:,:,3) );
and your code should work as desired. (One display detail: imshow() treats double images as scaled to [0,1], so show new_img/255, or convert with im2double() up front, if the result looks washed out.)
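As a side note, once R is a double the two power() calls are algebraically inverse, C and all:

R2 = ((C * R.^gamma) / C).^(1/gamma) = R

so disp(max(R2)) should print the maximum of the original channel, which is an easy way to check the round trip.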

Gradient in continuous regression using a neural network

I'm trying to implement a regression NN that has 3 layers (1 input, 1 hidden and 1 output layer with a continuous result). As a basis I took a classification NN from a coursera.org class, but changed the cost function and gradient calculation so as to fit a regression problem (and not a classification one):
My nnCostFunction now is:
function [J grad] = nnCostFunctionLinear(nn_params, ...
                                         input_layer_size, ...
                                         hidden_layer_size, ...
                                         num_labels, ...
                                         X, y, lambda)

  Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                   hidden_layer_size, (input_layer_size + 1));
  Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                   num_labels, (hidden_layer_size + 1));

  m = size(X, 1);

  a1 = X;
  a1 = [ones(m, 1) a1];
  a2 = a1 * Theta1';
  a2 = [ones(m, 1) a2];
  a3 = a2 * Theta2';

  Y = y;

  J = 1/(2*m)*sum(sum((a3 - Y).^2))

  th1 = Theta1;
  th1(:,1) = 0; % set bias = 0 in reg. formula
  th2 = Theta2;
  th2(:,1) = 0;
  t1 = th1.^2;
  t2 = th2.^2;
  th = sum(sum(t1)) + sum(sum(t2));
  th = lambda * th / (2*m);

  J = J + th; % regularization

  del_3 = a3 - Y;
  t1 = del_3'*a2;
  Theta2_grad = 2*(t1)/m + lambda*th2/m;

  t1 = del_3 * Theta2;
  del_2 = t1 .* a2;
  del_2 = del_2(:,2:end);
  t1 = del_2'*a1;
  Theta1_grad = 2*(t1)/m + lambda*th1/m;

  grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
Then I use this function in the fmincg algorithm, but in the first iterations fmincg ends its work. I think my gradient is wrong, but I can't find the error.
Can anybody help?
If I understand correctly, your first block of code (shown below) -
m = size(X, 1);
a1 = X;
a1 = [ones(m, 1) a1];
a2 = a1 * Theta1';
a2 = [ones(m, 1) a2];
a3 = a2 * Theta2';
Y = y;
is to get the output a(3) at the output layer.
Ng's slides about NNs have the below configuration to calculate a(3); it's different from what your code presents.
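In the class notation, the forward pass on those slides applies the sigmoid g at every layer:

z(2) = Theta(1) * a(1),   a(2) = g(z(2))
z(3) = Theta(2) * a(2),   a(3) = g(z(3)) = h(x)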
In the middle/output layer, you are not applying the activation function g, e.g. a sigmoid function.
In terms of the cost function J without regularization terms, Ng's slides have the below formula:
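That formula is the cross-entropy cost of the class, written here in plain notation (sum over the m examples and the K output units):

J(Theta) = -(1/m) * sum_{i=1..m} sum_{k=1..K} [ y_k(i)*log((h(x(i)))_k) + (1 - y_k(i))*log(1 - (h(x(i)))_k) ]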
I don't understand why you can compute it using:
J = 1/(2*m)*sum(sum((a3 - Y).^2))
because you are not including the log function at all.
Mikhaill, I've been playing with a NN for continuous regression as well, and had similar issues at some point. The best thing to do here would be to test the gradient computation against a numerical calculation before running the model. If that's not correct, fmincg won't be able to train the model. (Btw, I discourage you from using the numerical gradient for training, as the time involved is much bigger.)
Taking into account that you took this idea from Ng's Coursera class, I'll implement a possible solution for you to try, using the same notation for Octave.
% Cost function without regularization (the 1/(2*m) factor matches the gradients below).
J = 1/(2*m)*sum(sum((a3-Y).^2));

% In case it's needed, a regularization term is added (i.e. for training).
if (reg==true);
  J = J + lambda/2/m*(sum(sum(Theta1(:,2:end).^2)) + sum(sum(Theta2(:,2:end).^2)));
endif;

% Derivatives are computed for layers 2 and 3.
d3 = (a3.-Y);
d2 = d3*Theta2(:,2:end);

% Theta grad is computed without regularization.
Theta1_grad = (d2'*a1)./m;
Theta2_grad = (d3'*a2)./m;

% Regularization is added to the grad computation.
Theta1_grad(:,2:end) = Theta1_grad(:,2:end) + (lambda/m).*Theta1(:,2:end);
Theta2_grad(:,2:end) = Theta2_grad(:,2:end) + (lambda/m).*Theta2(:,2:end);

% Unroll gradients.
grad = [Theta1_grad(:) ; Theta2_grad(:)];
Note that, since you have taken out all the sigmoid activation, the derivative calculation is quite simple and results in a simplification of the original code.
Next steps:
1. Check this code to understand if it makes sense for your problem.
2. Use gradient checking to test the gradient calculation (the formula is sketched after this list).
3. Finally, use fmincg and check that you get different results.
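For step 2, the standard check taught in the same class compares each analytic gradient component against a symmetric difference of the cost:

grad(i) ≈ ( J(theta + eps*e_i) - J(theta - eps*e_i) ) / (2*eps),   eps ≈ 1e-4

where e_i is the i-th unit vector in parameter space; the analytic and numerical values should agree to several significant digits.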
Try including the sigmoid function to compute the second (hidden) layer values, and avoid the sigmoid when calculating the target (output) value:
function [J grad] = nnCostFunction1(nnParams, ...
                                    inputLayerSize, ...
                                    hiddenLayerSize, ...
                                    numLabels, ...
                                    X, y, lambda)

  Theta1 = reshape(nnParams(1:hiddenLayerSize * (inputLayerSize + 1)), ...
                   hiddenLayerSize, (inputLayerSize + 1));
  Theta2 = reshape(nnParams((1 + (hiddenLayerSize * (inputLayerSize + 1))):end), ...
                   numLabels, (hiddenLayerSize + 1));

  Theta1Grad = zeros(size(Theta1));
  Theta2Grad = zeros(size(Theta2));

  m = size(X,1);

  a1 = [ones(m, 1) X]';
  z2 = Theta1 * a1;
  a2 = sigmoid(z2);   % sigmoid in the hidden layer ...
  a2 = [ones(1, m); a2];
  z3 = Theta2 * a2;
  a3 = z3;            % ... but a linear output layer

  Y = y';

  r1 = lambda / (2 * m) * sum(sum(Theta1(:, 2:end) .* Theta1(:, 2:end)));
  r2 = lambda / (2 * m) * sum(sum(Theta2(:, 2:end) .* Theta2(:, 2:end)));

  J = 1 / (2 * m) * (a3 - Y) * (a3 - Y)' + r1 + r2;

  delta3 = a3 - Y;
  delta2 = (Theta2' * delta3) .* sigmoidGradient([ones(1, m); z2]);
  delta2 = delta2(2:end, :);

  Theta2Grad = 1 / m * (delta3 * a2');
  Theta2Grad(:, 2:end) = Theta2Grad(:, 2:end) + lambda / m * Theta2(:, 2:end);
  Theta1Grad = 1 / m * (delta2 * a1');
  Theta1Grad(:, 2:end) = Theta1Grad(:, 2:end) + lambda / m * Theta1(:, 2:end);

  grad = [Theta1Grad(:) ; Theta2Grad(:)];
end
Normalize the inputs before passing them to nnCostFunction.
In accordance with the Week 5 Lecture Notes guideline for a linear-system NN, you should make the following changes to the initial code:
1. Remove num_labels or make it 1 (in reshape() as well).
2. No need to convert y into a logical matrix.
3. For a2 - replace the sigmoid() function with tanh().
4. In the d2 calculation - replace sigmoidGradient(z2) with (1-tanh(z2).^2).
5. Remove the sigmoid from the output layer (a3 = z3).
6. Replace the cost function in the unregularized portion with a linear one: J = (1/(2*m))*sum((a3-y).^2).
7. Create predictLinear(): use the predict() function as a basis; replace sigmoid with tanh() for the first-layer hypothesis, remove the second sigmoid for the second-layer hypothesis, remove the line with the max() function, and use the output of the hidden-layer hypothesis as the prediction result.
8. Verify your nnCostFunctionLinear() on the test case from the lecture notes.
