Is it possible to vectorize this calculation in numpy?

Can the following expression of numpy arrays be vectorized for speed-up?
k_lin1x = [2*k_lin[i]*k_lin[i+1]/(k_lin[i]+k_lin[i+1]) for i in range(len(k_lin)-1)]
Is it possible to vectorize this calculation in numpy?

x1 = k_lin
x2 = np.roll(k_lin, -1)  # shift elements one position to the left, so x2[i] == x1[i+1]
s = len(k_lin) - 1
result1 = x2[:s] + x1[:s]  # the denominator: everything but the last (wrapped-around) element
result2 = x2[:s] * x1[:s]  # the numerator (before the factor of 2)
# in one line
result = 2 * x2[:s] * x1[:s] / (x2[:s] + x1[:s])
The last element is excluded from the calculation by slicing with [:s], and np.roll shifts the array so that x2[0] == x1[1], x2[1] == x1[2], and so on. Note that np.roll returns a new array rather than shifting in place, so its result must be assigned. This is only a quick demo of the approach; look up numpy.roll for the details. Instead of slicing x2 with [:s] you could also simply drop its last element, since after the roll it wraps around and is useless for the calculation.
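For comparison, here is a minimal self-contained sketch (with made-up example values for k_lin) that gets the same result with plain adjacent slices instead of np.roll, which avoids the wrap-around element entirely:
import numpy as np

k_lin = np.array([1.0, 2.0, 4.0, 8.0])  # hypothetical example data

# original list comprehension from the question
k_lin1x = [2 * k_lin[i] * k_lin[i + 1] / (k_lin[i] + k_lin[i + 1])
           for i in range(len(k_lin) - 1)]

# vectorized: k_lin[:-1] pairs each element with its successor k_lin[1:]
k_lin1x_vec = 2 * k_lin[:-1] * k_lin[1:] / (k_lin[:-1] + k_lin[1:])

print(np.allclose(k_lin1x, k_lin1x_vec))  # True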

Related

Drawing a Matrix

I'm trying to generate a random map using a matrix but I don't really know how. Here is the
function for the matrix. wMap and hMap are the width and height, and mapSprites is a table containing some ground sprites. Also, how can I draw the matrix? I'm sorry if this is too much of a question, but I'm really in need of some help.
function buildMap(wMap, hMap)
    for i = 1, wMap do
        mt[i] = {}
        for j = 1, hMap do
            mt[i][j] = math.random(mapSprites)
        end
    end
end
Generating a random map in any programming language will use two core concepts: the language's random function and nested for loops, two of them in the case of a map/matrix/2D array.
The first problem is that mt may or may not be initialized outside the function. This function assumes the variable exists outside of the function, and each time the function is called it will overwrite mt (or initialize it on the first call) with random values.
The second problem is that the width, wMap, and height, hMap, of the map are in the wrong order, as maps/matrices/2D arrays conventionally iterate first over the height (y dimension) and then the width (x dimension).
The last problem is that mapSprites also has to be declared outside the function (which is not clear from your code snippet), and when passed as the single argument to math.random it must be a number, the highest possible value the random function can generate, not a table. You can read more about math.random here: http://lua-users.org/wiki/MathLibraryTutorial
Consider this function I wrote that makes those adjustments and adds variables for the minimum and maximum random value. Of course, you can remove these to fit your intended purposes.
function buildMap(wMap, hMap)
    local minRand = 10
    local maxRand = 20
    for y = 1, hMap do
        matrix[y] = {}
        for x = 1, wMap do
            matrix[y][x] = math.random(minRand, maxRand)
        end
    end
end
I suggest you use this function as inspiration for your future iterations. You could make minRand and maxRand parameters, or have the function return matrix rather than manipulating an already-declared matrix variable outside of the function.
Best of luck!
EDIT:
Regarding your second question: look back at the section I wrote about nested for loops. This will be crucial to "drawing" your map. I believe you have the building blocks to resolve this issue yourself, as there isn't enough context about what "drawing" looks like. Here is a fundamentally similar function, based on my previous one, that prints the map:
function printMap(matrix)
    for i = 1, #matrix do
        for j = 1, #matrix[i] do
            io.write(matrix[i][j] .. " ")
        end
        io.write("\n")
    end
end
For choosing a random sprite, I recommend creating a table of sprites and saving the sprite's index in the matrix. You can then draw the map in the same kind of nested loop: iterate over the matrix and draw the sprite whose index is stored at each cell, at a screen position given by the matrix position (x and y in the loop) times the size of a sprite.
local sprites, mt = {}, {}
local spriteWidth, spriteHeight = 16, 16 -- Width and height of sprites

function buildMap(wMap, hMap)
    mt = {}
    for i = 1, wMap do
        mt[i] = {}
        for j = 1, hMap do
            mt[i][j] = math.random(#sprites) -- We choose a random sprite index (#sprites is the length of the sprites table)
        end
    end
end

function love.load()
    sprites = {
        love.graphics.newImage('sprite1.png'),
        love.graphics.newImage('sprite2.png'),
        -- ...
    }
    buildMap(10, 10) -- pass the desired map width and height (example values)
end

function love.draw()
    for y, row in ipairs(mt) do
        for x, spriteIndex in ipairs(row) do
            -- x - 1, because we want to start at 0, 0, but Lua table indexing starts at 1
            love.graphics.draw(sprites[spriteIndex], (x - 1) * spriteWidth, (y - 1) * spriteHeight)
        end
    end
end

How to detect contiguous images

I am trying to detect when two images correspond to a chunk that matches the other image but there is no overlap.
That is, suppose we have the Lenna image:
Someone unknown to me has split it vertically in two and I must know if both pieces are connected or not (assume that they are independent images or that one is a piece of the other).
A:
B:
The positive part is that I know the order of the pieces, the negative part is that there may be other images and I must know which of them fit or not to join them.
My first idea was to check whether the MAE between the last row of A and the first row of B is low.
import numpy as np

def mae(a, b):
    # try small horizontal shifts of a and keep the lowest mean absolute error
    min_mae = 256
    for i in range(-5, 5, 1):
        a_s = np.roll(a, i, axis=1)
        value_mae = np.mean(abs(a_s - b))
        min_mae = min(min_mae, value_mae)
    return min_mae

if mae(im_a[im_a.shape[0] - 1:im_a.shape[0], ...], im_b[0:1, ...]) < threshold:
    # join images a and b
    pass
The problem is that this is not a very robust metric.
I have done the same using the horizontal derivative, as well as applying various smoothing filters, but I find myself in the same situation.
Is there a way to solve this problem?
Your method seems like a decent one. Even on visual inspection it looks reasonable:
Top (Bottom row expanded)
Bottom (Top row expanded)
Diff of the images:
It might even be more clear if you also check neighboring columns, but this already looks like the images are similar enough.
Code
import cv2
import numpy as np

# load images
top = cv2.imread("top.png")
bottom = cv2.imread("bottom.png")

# gray
tgray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)
bgray = cv2.cvtColor(bottom, cv2.COLOR_BGR2GRAY)

# expand rows: broadcast the boundary row of each image over a full-size array,
# then keep the first 100 rows for display
texp = tgray
bexp = bgray
trow = np.zeros_like(texp)
brow = np.zeros_like(bexp)
trow[:] = texp[-1, :]
brow[:] = bexp[0, :]
trow = trow[:100, :]
brow = brow[:100, :]

# check the difference both ways and keep the element-wise minimum (guards against uint8 wrap-around)
ldiff = trow - brow
rdiff = brow - trow
diff = np.minimum(ldiff, rdiff)

# show
cv2.imshow("top", trow)
cv2.imshow("bottom", brow)
cv2.imshow("diff", diff)
cv2.waitKey(0)

# save
cv2.imwrite("top_out.png", trow)
cv2.imwrite("bottom_out.png", brow)
cv2.imwrite("diff_out.png", diff)

Trouble understanding pseudocode for Two-Pass Connected Components

I'm having some trouble understanding Wikipedia's pseudocode for connected-component labeling using the two-pass algorithm. Here's the pseudocode:
algorithm TwoPass(data) is
    linked = []
    labels = structure with dimensions of data, initialized with the value of Background

    First pass
    for row in data do
        for column in row do
            if data[row][column] is not Background then
                neighbors = connected elements with the current element's value
                if neighbors is empty then
                    linked[NextLabel] = set containing NextLabel
                    labels[row][column] = NextLabel
                    NextLabel += 1
                else
                    Find the smallest label
                    L = neighbors labels
                    labels[row][column] = min(L)
                    for label in L do
                        linked[label] = union(linked[label], L)

    Second pass
    for row in data do
        for column in row do
            if data[row][column] is not Background then
                labels[row][column] = find(labels[row][column])

    return labels
My problem is with the line linked[NextLabel] = set containing NextLabel. NextLabel is never initialized, yet it is used. Also, what does "set containing NextLabel" mean? I'm really confused by this code.
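(For reference, here is a minimal runnable sketch in Python, my own reading of the pseudocode rather than anything from Wikipedia. It makes the implicit pieces explicit: NextLabel starts at 1, "set containing NextLabel" is the singleton equivalence class of a brand-new label, and linked/union/find amount to a disjoint-set structure over labels.)
import numpy as np

def two_pass(data, background=0):
    # two-pass connected-component labeling, 4-connectivity
    labels = np.zeros(data.shape, dtype=int)  # same dimensions as data, initialized with Background
    parent = {}                               # disjoint-set forest over labels (plays the role of linked)
    next_label = 1                            # the initialization the pseudocode leaves implicit

    def find(x):
        # representative label of x's equivalence class
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # keep the smaller label as the root

    rows, cols = data.shape
    # first pass: provisional labels plus recorded equivalences
    for r in range(rows):
        for c in range(cols):
            if data[r, c] == background:
                continue
            neighbors = []  # already-visited neighbors (north, west) with the same value
            if r > 0 and data[r - 1, c] == data[r, c]:
                neighbors.append(labels[r - 1, c])
            if c > 0 and data[r, c - 1] == data[r, c]:
                neighbors.append(labels[r, c - 1])
            if not neighbors:
                parent[next_label] = next_label  # "linked[NextLabel] = set containing NextLabel"
                labels[r, c] = next_label
                next_label += 1
            else:
                labels[r, c] = min(neighbors)
                for n in neighbors:
                    union(labels[r, c], n)

    # second pass: replace every provisional label with its class representative
    for r in range(rows):
        for c in range(cols):
            if data[r, c] != background:
                labels[r, c] = find(labels[r, c])
    return labels

# example: two separate blobs of 1s get labels 1 and 2
grid = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]])
print(two_pass(grid))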

Why isn't Python OpenCV HoughP Transform able to identify all the spaced lines?

When lines are spaced 1 px apart, the HoughLinesP transform in Python OpenCV doesn't mark all the points.
I used:
cv2.HoughLinesP(img,1,np.pi/180,400)
Theoretically it should work fine whether the lines are dashed or not. In this case it doesn't mark all the lines if they are at the same height.
HoughP Transform Sample Output
The green lines indicate the white lines that were identified.
I changed the parameters to this:
cv2.HoughLinesP(img,1,np.pi/180,10,10,10)
And got this output; as you can see, the detection is still missing some parts. It's unclear how, for a straight line, a shorter segment is marked but not a longer one.
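(An aside, not from the original post: in the OpenCV Python bindings the fifth positional argument of cv2.HoughLinesP is the optional output array lines, so a positional call like the one above may not set what you expect. minLineLength and maxLineGap are safer passed as keywords, as the edited code below also does.)
lines = cv2.HoughLinesP(img, rho=1, theta=np.pi/180, threshold=10,
                        minLineLength=10, maxLineGap=10)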
*** After the method suggested by Robert:
Input Image:
Here is the code:
import numpy as np
import cv2
import time

img = cv2.imread("in.PNG")
img2 = np.abs(img)  # copy of the original image to draw on
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)  # note: thresh1 is computed but not passed to HoughLinesP
lines = cv2.HoughLinesP(img, rho=1, theta=1*np.pi/180, threshold=10,
                        minLineLength=10, maxLineGap=10)
N = lines.shape[0]
print(lines)
for i in range(N):
    x1 = lines[i][0][0]
    y1 = lines[i][0][1]
    x2 = lines[i][0][2]
    y2 = lines[i][0][3]
    cv2.line(img2, (x1, y1), (x2, y2), (0, 255, 0), 1)
#cv2.imshow("Window", thresh1)
cv2.imwrite("out.PNG", img2)

How to estimate a "simple" Nonlinear Regression + Parameter Constraints + AR residuals?

I am new to this site so please bear with me. I want to estimate the nonlinear model shown in the link: https://i.stack.imgur.com/cNpWt.png while imposing the constraints a > 0, b > 0 and gamma1 in [0,1] on the parameters.
In the nonlinear model [1] the independent variable is x(t), the dependent variables are R(t) and F(t), and ξ(t) is the error term.
An example of the dataset can be seen here: https://i.stack.imgur.com/2Vf0j.png (68 rows of a time series).
To estimate the nonlinear regression I use the nls() function with no problem as shown below:
NLM1 = nls(Xt ~ (a*Rt - b*Ft)/(1 - gamma1*Rt), start = list(a = 10, b = 10, gamma1 = 0.5), algorithm = "port", lower = c(0,0,0), upper = c(Inf,Inf,1), data = temp2)
I want to estimate NLM1 while also allowing for an AR(1) process in the residuals.
Basically I want the same step up as going from lm() to gls(). My problem is that in the gnls() function I don't know how to put constraints on the model parameters a, b, gamma1, so the model estimates wrong values for them.
nls() has options for lower and upper bounds; I can't do the same in gnls().
In gnls() I need to add the constraints, something like nls()'s lower = c(0,0,0), upper = c(Inf,Inf,1):
NLM1_AR1 = gnls(model = Xt ~ (a*Rt - b*Ft)/(1 - gamma1*Rt), data = temp2, start = list(a = 13, b = 10, gamma1 = 0.5), correlation = corARMA(p = 1))
Does anyone know how to do this?
Thank you
