How do I compute an averaged profile in OpenCV

I call an averaged profile the 1D signal obtained by averaging along the rows or columns of a rectangular image:
Px := Σ(y=1,H) I(x, y) / H
and
Py := Σ(x=1,W) I(x, y) / W
I couldn't find this in the API, maybe because I'm not searching with the right terminology.
I don't want a box filter, just one value per row/column. The sum instead of the average would be equally good.

You can use the reduce function: http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#reduce
using namespace cv;
// Average over the rows (dim = 0): the result is a single row, one value per column
Mat mean_over_rows;
reduce(input_mat, mean_over_rows, 0, CV_REDUCE_AVG);
// Average over the columns (dim = 1): the result is a single column, one value per row
Mat mean_over_cols;
reduce(input_mat, mean_over_cols, 1, CV_REDUCE_AVG);
You can use the CV_REDUCE_SUM flag instead if you just want the summed projection along the desired axis.
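For a summed projection, keep in mind that sums of an 8-bit image easily overflow the source type, so the destination depth usually has to be widened through reduce's optional dtype argument. A minimal sketch, reusing input_mat from above:
// Sum over the rows: one value per column, stored as 32-bit floats to avoid overflow
Mat sum_over_rows;
reduce(input_mat, sum_over_rows, 0, CV_REDUCE_SUM, CV_32F);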

Related

Why is the x variable tensor reshaped with -1 in the MNIST tutorial for tensorflow?

I'm following the TensorFlow tutorial
Initially x is defined as
x = tf.placeholder(tf.float32, shape=[None, 784])
Later on it reshapes x, and I'm trying to understand why.
To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels.
x_image = tf.reshape(x, [-1,28,28,1])
What does -1 mean in the reshaping vector and why is x being reshaped?
1) What does -1 mean in the reshaping vector
From the documentation of reshape:
If one component of shape is the special value -1, the size of that
dimension is computed so that the total size remains constant. In
particular, a shape of [-1] flattens into 1-D. At most one component
of shape can be -1.
This is a standard feature, also available in NumPy. Basically it means: "I do not want to calculate this dimension myself, so infer it for me." In your case each flattened row of 784 values becomes one 28 x 28 x 1 image, and since a batch of N examples has N * 784 = N * 28 * 28 * 1 values in total, the -1 is inferred as N, the batch size.
2) Why is x being reshaped
They are planning to use convolution for image classification, so they need some spatial information. The current data is one-dimensional, so they transform it to four dimensions. I do not know the point of the fourth dimension, because in my opinion they could have used only (x, y, color), or even (x, y). Try modifying their reshape and convolution; most probably you will get similar accuracy.
why 4 dimensions
TensorFlow's conv2d operation expects a 4-dimensional tensor with dimensions corresponding to batch, height, width and channels:
[batch, in_height, in_width, in_channels]

Explanation of rho and theta parameters in HoughLines

Can you give me a quick definition of the rho and theta parameters in OpenCV's HoughLines function?
void cv::HoughLines ( InputArray image,
OutputArray lines,
double rho,
double theta,
int threshold,
double srn = 0,
double stn = 0,
double min_theta = 0,
double max_theta = CV_PI
)
The only thing I found in the doc is:
rho: Distance resolution of the accumulator in pixels.
theta: Angle resolution of the accumulator in radians.
Does this mean that if I set rho=2 then half of my image's pixels will be ignored ... a kind of stride=2?
I have searched for this for hours and still haven't found a place where it is neatly explained. But picking up the pieces, I think I've got it.
The algorithm goes over every edge pixel (result of Canny, for example) and calculates ρ using the equation ρ = x * cosθ + y * sinθ, for many values of θ.
The actual step of θ is defined by the theta parameter, so if you use the usual math.pi / 180.0 value, the algorithm will compute ρ 180 times for just one edge pixel in the image. If you use a larger theta, there are fewer calculations, fewer accumulator columns/buckets, and therefore fewer distinct lines that can be found.
The other parameter, rho, defines how "fat" a row of the accumulator is. With a value of 1, you are saying that you want the number of accumulator rows to equal the largest ρ possible, which is the diagonal of the image you're processing. So if for two values of θ you get close values of ρ, they will still go into separate accumulator buckets, because you are going for precision. For a larger value of rho, those two ρ values might end up in the same bucket, which will ultimately give you more lines, because more buckets will accumulate a large vote count and therefore exceed the threshold.
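For illustration, a typical call looks like the sketch below; the edge image and the threshold of 150 are placeholder assumptions, not values from the question.
using namespace cv;
Mat edges;                    // binary edge map, e.g. the output of Canny()
std::vector<Vec2f> lines;     // each detected line is returned as (rho, theta)
// rho = 1 pixel and theta = 1 degree are the usual, finest-grained choices.
// Larger values merge nearby (rho, theta) candidates into the same accumulator
// bucket; no input pixels are skipped either way.
HoughLines(edges, lines, 1, CV_PI / 180, 150);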
Some helpful resources:
http://docs.opencv.org/3.1.0/d6/d10/tutorial_py_houghlines.html
https://www.mathworks.com/help/vision/ref/houghtransform.html
https://www.youtube.com/watch?v=2oGYGXJfjzw
To detect lines with the Hough transform, the best approach is to represent lines with an equation in two parameters, rho and theta. The equation is the following:
x·cos(θ) + y·sin(θ) = ρ
where (x, y) are the coordinates of a point on the line, ρ is the distance from the origin to the line, and θ is the angle of the line's normal.
This (θ, ρ) parameterization makes the detection less position-dependent than the y = a*x + b form (which, for instance, cannot represent vertical lines).
The theta and rho arguments of HoughLines give the discretization step for these two parameters.

Armadillo ifft dimension

Does Armadillo's ifft support a "dim" argument like Octave/MATLAB's?
The Armadillo documentation reads "If given a matrix, the transform is done on each column vector of the matrix". So, no, it does not support dim, but you can treat Armadillo's ifft() as Octave/MATLAB's ifft() with dim = 1 (columns).
If you instead want to perform ifft() over the rows, you can just transpose your input matrix...
cx_mat X; // This is the matrix you want to call ifft() on; ifft() expects complex input.
// Use .st() (plain transpose) rather than .t(), which would also conjugate a complex matrix.
// The second .st() is optional, but it gives the Y matrix the same dimensions as X.
cx_mat Y = ifft(X.st()).st();
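A small end-to-end sketch of the idea (the matrix size and contents are arbitrary, and .st() is used so complex values are not conjugated):
#include <armadillo>
using namespace arma;

int main()
{
    mat X(4, 8, fill::randu);      // four signals stored as rows
    cx_mat F = fft(X.st());        // transpose so each row becomes a column, then transform
    cx_mat Y = ifft(F).st();       // inverse transform, then transpose back
    // Y is numerically equal to X (up to a tiny imaginary part).
    return 0;
}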

OpenCV: Efficient Difference-of-Gaussian

I am trying to implement difference of Gaussians (DoG) for a specific case of edge detection. As the name of the algorithm suggests, it is actually fairly straightforward:
Mat g1, g2, result;
Mat img = imread("test.png", CV_LOAD_IMAGE_COLOR);
GaussianBlur(img, g1, Size(1,1), 0);
GaussianBlur(img, g2, Size(3,3), 0);
result = g1 - g2;
However, I have the feeling that this can be done more efficiently. Can it perhaps be done in fewer passes over the data?
The question here has taught me about separable filters, but I'm too much of an image processing newbie to understand how to apply them in this case.
Can anyone give me some pointers on how one could optimise this?
Separable filters work in the same way as normal Gaussian filters, but they are faster than applying the full 2D kernel, especially for larger kernel sizes. The filter kernel can be formed analytically and separated into two 1-dimensional vectors, one horizontal and one vertical.
For example, consider the filter
1 2 1
2 4 2
1 2 1
This filter can be separated into a horizontal vector (H) 1 2 1 and a vertical vector (V) 1 2 1; the 2D kernel is their outer product. These two 1D filters are then applied to the image one after the other: H along the rows and V along the columns. Applying them in sequence gives the same result as the full 2D Gaussian blur. Below is a function that performs a separable Gaussian blur.
Mat sepConv(Mat input, int radius)
{
    Mat dst, dst2;
    int ksize = 2 * radius + 1;
    double sigma = radius / 2.575;
    // getGaussianKernel() already returns a ksize x 1 column vector
    Mat gau = getGaussianKernel(ksize, sigma, CV_32FC1);
    // filter vertically with the 1D kernel ...
    filter2D(input, dst2, -1, gau);
    // ... then transpose, filter vertically again (i.e. horizontally in the
    // original orientation) and transpose back
    filter2D(dst2.t(), dst, -1, gau);
    return dst.t();
}
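For reference, a rough usage sketch that ties this back to the question; the file name and the radii are arbitrary placeholders, not values from the original post:
Mat img = imread("test.png", CV_LOAD_IMAGE_COLOR);
// Difference of two separable blurs with different radii
Mat dog = sepConv(img, 1) - sepConv(img, 2);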
One more way to speed up the Gaussian blur is to use the FFT. FFT-based convolution is much faster than the separable kernel method when the kernel (and image) are large.
A quick Google search provided me with the following function:
Mat Conv2ByFFT(Mat A, Mat B)
{
    // A and B are expected to be single-channel floating-point matrices
    Mat C;
    // reallocate the output array if needed
    C.create(abs(A.rows - B.rows) + 1, abs(A.cols - B.cols) + 1, A.type());
    Size dftSize;
    // compute the size of the DFT transform
    dftSize.width = getOptimalDFTSize(A.cols + B.cols - 1);
    dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);
    // allocate temporary buffers and initialize them with 0's
    Mat tempA(dftSize, A.type(), Scalar::all(0));
    Mat tempB(dftSize, B.type(), Scalar::all(0));
    // copy A and B to the top-left corners of tempA and tempB, respectively
    Mat roiA(tempA, Rect(0, 0, A.cols, A.rows));
    A.copyTo(roiA);
    Mat roiB(tempB, Rect(0, 0, B.cols, B.rows));
    B.copyTo(roiB);
    // now transform the padded A & B in-place;
    // use the "nonzeroRows" hint for faster processing
    dft(tempA, tempA, 0, A.rows);
    dft(tempB, tempB, 0, B.rows);
    // multiply the spectrums;
    // the function handles packed spectrum representations well
    mulSpectrums(tempA, tempB, tempA, 0);
    // transform the product back from the frequency domain.
    // Even though all the result rows will be non-zero,
    // we only need the first C.rows of them, so we
    // pass nonzeroRows == C.rows
    dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, C.rows);
    // now copy the result back to C
    tempA(Rect(0, 0, C.cols, C.rows)).copyTo(C);
    // all the temporary buffers are deallocated automatically
    return C;
}
Hope this helps. :)
I know this post is old, but the question is interesting and may interest future readers. As far as I know, a DoG filter is not separable, so there are two solutions left:
1) Compute both convolutions by calling GaussianBlur() twice, then subtract the two images.
2) Build a kernel by computing the difference of two Gaussian kernels, then convolve it with the image once.
About which solution is faster:
Solution 2 seems faster at first sight because it convolves the image only once.
But it does not involve a separable filter. On the contrary, the first solution involves two separable filters and may be faster in the end. (I do not know how the OpenCV function GaussianBlur() is optimised and whether it uses separable filters or not, but it is likely.)
However, if one uses the FFT technique to convolve, the second solution is surely faster.
If anyone has any advice to add or wishes to correct me, please do.
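A minimal sketch of solution 2, assuming an input image img loaded elsewhere; the kernel size and the two sigma values are arbitrary placeholders:
using namespace cv;
// Build the DoG kernel as the difference of the outer products of two 1D
// Gaussian kernels, then apply it in a single filter2D() pass.
int ksize = 9;                                   // must be large enough for the bigger sigma
Mat g1 = getGaussianKernel(ksize, 1.0, CV_32F);  // ksize x 1 column vectors
Mat g2 = getGaussianKernel(ksize, 2.0, CV_32F);
Mat dogKernel = g1 * g1.t() - g2 * g2.t();       // 2D DoG kernel (not separable)
Mat result;
filter2D(img, result, CV_32F, dogKernel);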

How to smooth a cyclic column vector

This is an OpenCV2 question.
I have a matrix representing a closed space curve.
cv::Mat_<Point3f> points;
I want to smooth it (using, for example a Gaussian kernel).
I have tried using:
cv::Mat_<Point3f> result;
cv::GaussianBlur(points, result, cv::Size(4 * sigma, 1), sigma, sigma, cv::BORDER_WRAP);
But I get the error:
Assertion failed (columnBorderType != BORDER_WRAP)
What is the best way to convolve a cyclic vector in OpenCV? ("Best" should take into account space and time requirements.)
I found a way. I repeat the matrix, then blur, then extract a range.
GaussianBlur(repeat(points, 3, 1), result, cv::Size(0, 0), sigma);
int rows = points.rows;
// Range's end is exclusive, so this keeps exactly the middle copy
result = Mat(result, Range(rows, 2 * rows), Range::all());
This requires extra work (and extra space?).
Edit: I now manually expand points by copying (wrapping) as many points as are required by the kernel, then crop off the extra points afterwards. This is similar to the above, but wastes less space and time.
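A rough sketch of that approach, assuming the curve is stored as an N x 1 Mat_<Point3f> as in the question; the padding size and the helper name are my own:
using namespace cv;
Mat_<Point3f> smoothClosedCurve(const Mat_<Point3f>& points, double sigma)
{
    // copyMakeBorder() supports BORDER_WRAP even though GaussianBlur() does not,
    // so wrap-pad the top and bottom with copies of the other end of the curve.
    int pad = cvRound(3 * sigma) + 1;            // enough rows to cover the kernel radius
    Mat padded, blurred;
    copyMakeBorder(points, padded, pad, pad, 0, 0, BORDER_WRAP);
    // Smooth only along the column: kernel width 1, height 2*pad + 1
    GaussianBlur(padded, blurred, Size(1, 2 * pad + 1), 0, sigma);
    // Crop the padding away; the result has the same size as the input
    return Mat_<Point3f>(blurred.rowRange(pad, pad + points.rows).clone());
}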
