Column sum of OpenCV matrix elements - opencv

I need to compute the sum of the elements in each column separately.
This is what I'm using at the moment (the matrix to be summed is cross_corr):
Mat cross_corr_summed;
for (int i = 0; i < cross_corr.cols; i++)
{
    double column_sum = 0;
    for (int k = 0; k < cross_corr.rows; k++)
    {
        column_sum += cross_corr.at<float>(k, i);
    }
    cross_corr_summed.push_back(column_sum);
}
The problem is that my program takes quite a long time to run, and this loop is one of the suspected causes.
Can you suggest a faster implementation?
Thanks!

You need cv::reduce:
cv::reduce(cross_corr, cross_corr_summed, 0, CV_REDUCE_SUM, CV_32F); // dim = 0 reduces to a single row; use a float output type for float input

If you know that your data is continuous and single-channel, you can access the matrix data directly:
int width = cross_corr.cols;
float* data = (float*)cross_corr.data;
Mat cross_corr_summed;
for (int i = 0; i < width; i++)
{
    double column_sum = 0;
    for (int k = 0; k < cross_corr.rows; k++)
    {
        column_sum += data[i + k * width];
    }
    cross_corr_summed.push_back(column_sum);
}
which will be faster than your use of .at<float>(). In general I avoid .at() whenever possible, because it is slower than direct access.
Also, although cv::reduce() (suggested by Andrey) is much more readable, I have found it to be slower than even your implementation in some cases.

Mat originalMatrix;
Mat columnSum;
for (int i = 0; i < originalMatrix.cols; i++)
    columnSum.push_back(cv::sum(originalMatrix.col(i))[0]);

Related

Matching problems when using OpenCV's matchShapes function

I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it isn't possible to match on color or anything similar; feature detectors like SIFT also don't work because the object can be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh = 80;
double ans = 0, result = 0;
// Preprocess pictures
cvtColor(scene, imagegray1, CV_BGR2GRAY);
cvtColor(Template, imagegray2, CV_BGR2GRAY);
GaussianBlur(imagegray1, imagegray1, Size(5,5), 2);
GaussianBlur(imagegray2, imagegray2, Size(5,5), 2);
Canny(imagegray1, imageresult1, thresh, thresh*2);
Canny(imagegray2, imageresult2, thresh, thresh*2);
vector<vector<Point> > contours1;
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy1, hierarchy2;
// Template
findContours(imageresult2, contours2, hierarchy2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
// Scene
findContours(imageresult1, contours1, hierarchy1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
imshow("template", Template);
double helper = INT_MAX;
int idx_i = 0, idx_j = 0;
// Match all contours with each other
for (int i = 0; i < contours1.size(); i++)
{
    for (int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // keep the best-matching contour
        if (ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}
// draw the best contour
drawContours(scene, contours1, idx_i,
    Scalar(255,255,0), 3, 8, hierarchy1, 0, Point());
When I use a scene in which only the template is located, I get a good matching result.
But when there are more objects in the picture, I have trouble detecting the object.
I hope someone can tell me what the problem with my code is. Thanks
You have a huge number of contours in the second image (nearly one per letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even very small contours can match the shape you are looking for.
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if (contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour that way, or improving the Canny step in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for.

OpenCV MultiBandBlender doesn't work

I am trying to blend my images into a panorama with MultiBandBlender, but it returns a black pano. FeatherBlender works fine. What am I doing wrong?
cv::Mat blendImages(const std::vector<cv::Point> &corners, std::vector<cv::Mat> images)
{
    std::vector<cv::Size> sizes;
    for (int i = 0; i < images.size(); i++)
        sizes.push_back(images[i].size());
    float blend_strength = 5;
    cv::Size dst_sz = cv::detail::resultRoi(corners, sizes).size();
    float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
    cv::Ptr<cv::detail::Blender> blender = cv::detail::Blender::createDefault(cv::detail::Blender::MULTI_BAND);
    //cv::detail::FeatherBlender* fb = dynamic_cast<cv::detail::FeatherBlender*>(blender.get());
    //fb->setSharpness(1.f/blend_width);
    cv::detail::MultiBandBlender* mb = dynamic_cast<cv::detail::MultiBandBlender*>(blender.get());
    mb->setNumBands(static_cast<int>(ceil(log(blend_width)/log(2.)) - 1.));
    blender->prepare(corners, sizes);
    for (int i = 0; i < images.size(); i++)
    {
        cv::Mat image_s;
        images[i].convertTo(image_s, CV_16SC3);
        blender->feed(image_s, cv::Mat::ones(image_s.size(), CV_8UC1), corners[i]);
    }
    cv::Mat pano;
    cv::Mat panoMask = cv::Mat::ones(dst_sz, CV_8UC1);
    blender->blend(pano, panoMask);
    return pano;
}
Three possible causes:
1. Try keeping all image_s and masks in a vector, and feed with the following structure:
for (int i = 0; i < images_s.size(); ++i)
    blender->feed(images_s[i], masks[i], corners[i]);
2. Don't initialize panoMask to ones before blending.
3. Make sure the corners are well defined.
Actually, I can't compile your code with OpenCV 2.4 because of the blender.get function; there is no such function in my build of OpenCV 2.4.
Anyway, if you wish to make a panorama, you'd better not use the resultRoi function; you need boundingRect. I suppose it is really hard to get all horizontally aligned images for one panorama.
Also, look at my answer here. It demonstrates how to use MultiBandBlender.
Hey, I was getting the same black pano while using the MultiBand blender in OpenCV. The issue was resolved by changing
cv::Mat::ones(image_s.size(), CV_8UC1)
to
cv::Mat::ones(image_s.size(), CV_8UC1)*255
This is because Mat::ones initializes all the pixels to a value of 1, so we need to multiply it by 255 in order to get a proper full-weight (white) mask.
And, thanks, your issue solved my problem :)

FFT and accelerometer data: why am I getting this output?

I have read various posts here at StackOverflow regarding the execution of FFT on accelerometer data, but none of them helped me understand my problem.
I am executing this FFT implementation on my accelerometer data array in the following way:
int length = data.size();
double[] re = new double[256];
double[] im = new double[256];
for (int i = 0; i < length; i++) { // assumes length <= 256
    re[i] = data[i];               // was input[i], which is never defined
}
FFT fft = new FFT(256);
fft.fft(re, im);
float outputData[] = new float[256];
for (int i = 0; i < 128; i++) {
    outputData[i] = (float) Math.sqrt(re[i] * re[i] + im[i] * im[i]);
}
I plotted the contents of outputData (left), and also used R to perform the FFT on my data (right).
What am I doing wrong here? I am using the same code for executing the FFT that I see in other places.
EDIT: Following the advice of @PaulR to apply a windowing function, and the link provided by @BjornRoche (http://baumdevblog.blogspot.com.br/2010/11/butterworth-lowpass-filter-coefficients.html), I was able to solve my problem. The solution is pretty much what is described in that link. This is my graph now: http://imgur.com/wGs43
The low frequency artefacts are probably due to a lack of windowing. Try applying a window function.
The overall shift is probably due to different scaling factors in the two different FFT implementations - my guess is that you are seeing a shift of 24 dB, which (as a power ratio) corresponds to a difference in scaling by a factor of 256.
Because all your data on the left are above zero, the signal has a large DC component from the frequency-analysis point of view. After the FFT, that DC component dominates the spectrum, which is why it looks so huge. For your scenario, you only need to remove the DC component and keep the signal above 0 Hz (the AC part); that makes sense.

OpenCV column index skips rows when using 16-bit PGM (C++ interface)

Having a strange problem. I wrote a couple of functions to convert from a Mat into a 2D int array and vice versa. I first wrote 3-channel 8-bit versions, which work fine, but the 16-bit grayscale versions seem to be skipping indices in one of the dimensions.
Basically every second row is blank (only every second one is written to). The only thing I can think of is that it has something to do with the 16-bit representation.
The following is the code:
// Convert a Mat image to a standard int array
void matToArrayGS(cv::Mat imgIn, unsigned int **array)
{
    int i, j;
    for (i = 0; i < imgIn.rows; i++)
    {
        for (j = 0; j < imgIn.cols; j++)
            array[i][j] = imgIn.at<unsigned int>(i, j);
    }
}

// Convert an array into a grayscale Mat image
void arrayToMatGS(unsigned int **arrayin, cv::Mat imgIn)
{
    int i, j;
    for (i = 0; i < imgIn.rows; i++)
    {
        for (j = 0; j < imgIn.cols; j++)
            imgIn.at<unsigned int>(i, j) = arrayin[i][j];
    }
}
I can't help thinking it has something to do with the 16-bit representation in Mat, but I can't find info on this. It's also strange that it works fine in one dimension and not the other.
Does anyone have an idea?
Thanks in advance
I think this is caused by the "unsigned int" usage: at<unsigned int> steps 4 bytes per element, but a 16-bit grayscale image only has 2 bytes per pixel, so the reads and writes drift off the intended elements. Try "unsigned short" for a 16-bit grayscale image.

Problem assigning values to Mat array in OpenCV 2.3 - seems simple

Using the new API for OpenCV 2.3, I am having trouble assigning values to a Mat array (or say image) inside a loop. Here is the code snippet I am using:
int paddedHeight = 256 + 2*padSize;
int paddedWidth  = 256 + 2*padSize;
int n = 266; // padded height or width
cv::Mat fx = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);
cv::Mat fy = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);
float value = -n/2.0f;
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
        fx.at<cv::Vec2d>(i,j) = value++;
    value = -n/2.0f;
}
value = -n/2.0f; // reset for the second loop
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
        fy.at<cv::Vec2d>(i,j) = value;
    value++;
}
Now in the first loop, as soon as j = 133, I get an exception which seems to be related to the depth of the image. I can't figure out what I am doing wrong here.
Please advise! Thanks!
You are accessing the data as 2-component double vectors (using .at<cv::Vec2d>()), but you created the matrices to contain only 1-component doubles (using CV_64FC1). Either create the matrices with two components per element (CV_64FC2) or, what seems more appropriate for your code, access the values as plain doubles using .at<double>(). It explodes exactly at j = 133 because that is half the width of your image: a row of single doubles, when treated as 2-component vectors, is only half as wide.
Or maybe you can merge these two matrices into one with two components per element, depending on how you are going to use them later. In that case you can also merge the two loops and really set a 2-component vector:
cv::Mat f = cv::Mat(paddedHeight, paddedWidth, CV_64FC2);
float yValue = -n/2.0f;
for (int i = 0; i < n; i++)
{
    float xValue = -n/2.0f;
    for (int j = 0; j < n; j++)
    {
        f.at<cv::Vec2d>(i,j)[0] = xValue++;
        f.at<cv::Vec2d>(i,j)[1] = yValue;
    }
    ++yValue;
}
This might produce a better memory accessing scheme if you always need both values, the one from fx and the one from fy, for the same element.
