Converting matches from 8-bit 4 channels to 64-bit 1 channel in OpenCV

I have a vector of Point2f which comes from an image with color space CV_8UC4, and I need to convert them to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) through the 5-point algorithm, instead of using the findFundamentalMat included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is a CV_8UC4, BGRA image. When I tested the function, both BGRA and greyscale images produce valid matrices from the mathematical point of view, but if I pass a greyscale image instead of a color one, it takes way longer to calculate. That makes me think I'm not doing something correctly in one of the two cases.
I read around that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1 like in this case), you should normalize the intensity value. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f>imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
{
imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact I'm not passing a Mat, but a vector of Point2f instead, seems to create no problems, as it compiles and executes properly.
Is it the case I should store the matches in a Mat?

I am not sure what you mean by a vector of Point2f in some color space, but if you want to convert a vector of points into a vector of points of another type, you can use any standard C++/STL function like copy(), assign() or insert(). For example:
copy(floatPoints.begin(), floatPoints.end(), doublePoints.begin());
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());
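For instance, here is a minimal, self-contained sketch of the insert() approach, converting std::vector<cv::Point2f> to std::vector<cv::Point2d> (the variable names are just for illustration; cv::Point2d is assumed to be the 64-bit type you want):
#include <opencv2/core/core.hpp>
#include <vector>

int main()
{
    // Hypothetical source points in float precision.
    std::vector<cv::Point2f> floatPoints;
    floatPoints.push_back(cv::Point2f(1.5f, 2.5f));
    floatPoints.push_back(cv::Point2f(3.0f, 4.0f));

    // insert() converts each element on the fly, because cv::Point2f
    // is implicitly convertible to cv::Point2d.
    std::vector<cv::Point2d> doublePoints;
    doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());

    // Note: the copy() variant needs the destination sized in advance, e.g.
    // std::vector<cv::Point2d> dst(floatPoints.size());
    // std::copy(floatPoints.begin(), floatPoints.end(), dst.begin());
    return 0;
}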

No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that those are RxCx4 values (where R and C are the number of rows and columns), while in a CV_64F matrix you will have RxC values only. So, you need to be more clear on how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get an RxCx4 matrix.
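For example, a minimal sketch of that conversion on a hypothetical BGRA image; the optional 1/255 scale factor is only an assumption about how you might want to normalize the intensities:
#include <opencv2/core/core.hpp>

int main()
{
    // Hypothetical 8-bit, 4-channel (BGRA) image.
    cv::Mat points1(480, 640, CV_8UC4, cv::Scalar(10, 20, 30, 255));

    // Same channel count, wider element type: RxCx4 stays RxCx4.
    cv::Mat points64;
    points1.convertTo(points64, CV_64FC4);

    // Optional: scale intensities from [0, 255] into [0, 1] while converting.
    cv::Mat points64norm;
    points1.convertTo(points64norm, CV_64FC4, 1.0 / 255.0);
    return 0;
}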
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points that is not associated with any particular color space; they are just coordinates in the image axes. So, they represent the same 2D points in a grey, RGB or HSV image. Consequently, the execution time of findEssentialMat doesn't depend on the image color space. Getting the points may, though.
That said, I think your input for findEssentialMat is ok (the function takes care of the vectors and converts them into its internal representation). In these cases, it is very useful to draw the points on your image to debug the code, as in the sketch below.
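For example, a minimal debugging sketch along those lines, assuming img1 is the cv::Mat that the points in imgpts1 were detected on (that name is not from your code):
// Draw every matched point on a copy of the image to check them visually.
cv::Mat debug = img1.clone();
for (size_t i = 0; i < imgpts1.size(); ++i)
    cv::circle(debug, cv::Point(cvRound(imgpts1[i].x), cvRound(imgpts1[i].y)),
               3, cv::Scalar::all(255), -1);
cv::imshow("points on image 1", debug);
cv::waitKey(0);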

Related

I am confused by the OpenCV sepFilter2D function's kernelX and kernelY

I don't know how to use sepFilter2D properly. I'm confused by the function parameters, such as kernelX and kernelY, in the OpenCV sepFilter2D function.
vector<double> filter1; //row vector
sepFilter2D(src, convolvedImg, CV_64FC3, filter1, filter1, Point(-1, -1), 0.0, BORDER_DEFAULT);
//filter1 = [0.00443305 0.0540056 0.242036 0.39905 0.242036 0.0540056 0.00443305]
As you might be aware, the operation of convolution is widely used in image processing. It involves a 2D filter, usually small in size (e.g. 3x3 or 5x5), and the short explanation is that you overlay the filter at each position, multiply the values in the filter with the values in the image and add everything together. The Wikipedia page presents this operation in much more detail.
Just to get a sense for this, assume you have an MxN image and a UxV filter. For each pixel, you have to apply the filter once, so you have to perform M·N·U·V multiplications and additions.
Some filters have a nice property called separability. You can achieve the same effect as a UxV 2D filter by applying a horizontal filter of size V once and then a vertical filter of size U. Now you have M·N·U + M·N·V = M·N·(U+V) operations, which is more efficient. For example, a 7x7 filter costs 49 operations per pixel applied directly, but only 7 + 7 = 14 as two 1D passes.
The sepFilter2D function does exactly this: it applies a horizontal and a vertical 1D filter. The full function signature is:
void sepFilter2D(InputArray src, OutputArray dst, int ddepth, InputArray kernelX, InputArray kernelY, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT )
where:
src is your initial image and the filtered image ends up in dst;
ddepth is the desired depth of the destination image;
kernelX and kernelY are the horizontal and vertical 1D kernels described above;
anchor is the kernel origin (the default means the center);
delta is a value added to the destination image to offset its brightness;
borderType is the method used around the borders.
Use the Mat data structure to declare the kernels. (I'm not sure about vector; I'm not near my PC right now. I'll check later.)
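For what it's worth, here is a hedged sketch with a Mat kernel, using the 7-tap values from your comment (the input file name and the CV_64F depth choice are assumptions; the destination keeps the source's channel count, ddepth only sets the element depth):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png");   // hypothetical input image
    if (src.empty()) return 1;

    // 1D Gaussian-like kernel from the question, stored as a 1x7 row Mat.
    cv::Mat kernel = (cv::Mat_<double>(1, 7) <<
        0.00443305, 0.0540056, 0.242036, 0.39905,
        0.242036, 0.0540056, 0.00443305);

    // Apply the same 1D kernel horizontally (kernelX) and vertically (kernelY),
    // which is equivalent to convolving with the full 7x7 outer-product kernel.
    cv::Mat dst;
    cv::sepFilter2D(src, dst, CV_64F, kernel, kernel,
                    cv::Point(-1, -1), 0.0, cv::BORDER_DEFAULT);
    return 0;
}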

Issue with drawContours OpenCV c++

I have code in Python and I am porting it to C++. I am getting a weird issue with the drawContours function in OpenCV C++.
self.contours[i] = cv2.convexHull(self.contours[i])
cv2.drawContours(self.segments[object], [self.contours[i]], 0, 255, -1)
This is the function call in Python; the value -1 for the thickness parameter is used for filling the contour, and the result looks like this:
I am doing exactly the same in C++:
cv::convexHull(cv::Mat(contour), hull);
cv::drawContours(this->objectSegments[currentObject], cv::Mat(hull), -1, 255, -1);
but this is the resulting image:
(please look carefully to see the convex hull points; they are not easily visible). I am getting only the points and not the filled polygon. I also tried using fillPoly, like:
cv::fillPoly(this->objectSegments[currentObject],cv::Mat(hull),255);
but it doesn't help.
Please help me fix the issue. I am sure that I am missing something very trivial but couldn't spot it.
The function drawContours() expects to receive a sequence of contours, each contour being a "vector of points".
The expression cv::Mat(hull) you use as a parameter returns the matrix in an incorrect format, with each point being treated as a separate contour -- that's why you see only a few pixels.
According to the documentation of cv::Mat::Mat(const std::vector<_Tp>& vec) the vector passed into the constructor is used in the following manner:
STL vector whose elements form the matrix. The matrix has a single column and the number of rows equal to the number of vector elements.
Considering this, you have two options:
Transpose the matrix you've created (using cv::Mat::t())
Just use a vector of vectors of Points directly
The following sample shows how to use the vector directly:
cv::Mat output_image; // Work image (must be allocated with the right size/type before drawing into it)
typedef std::vector<cv::Point> point_vector;
typedef std::vector<point_vector> contour_vector;
// Create with 1 "contour" for our convex hull
contour_vector hulls(1);
// Initialize the contour with the convex hull points
cv::convexHull(cv::Mat(contour), hulls[0]);
// And draw that single contour, filled
cv::drawContours(output_image, hulls, 0, 255, -1);
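The same reasoning should apply to the cv::fillPoly attempt from the question: it also expects a list of polygons rather than a single cv::Mat of points, so (as a sketch, reusing the hulls vector from above) something like this ought to work:
// fillPoly also takes a vector of point vectors, one entry per polygon.
cv::fillPoly(output_image, hulls, 255);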

Converting a Matrix to a Vector in openCV

I have a 16x16 pixel image. How can I put it into a 1x256 matrix and then convert it back to a 16x16 image using OpenCV?
I tried reshape, but it didn't seem to work: when I print image.cols and image.rows I still get 16 and 16. Also, sometimes the image is not continuous, so reshape won't work.
Btw, I need this for coding a neural network classifier.
// create matrix for the result
Mat image1x256(Size(256,1), image.type());
// use reshape function
Mat image16x16 = image1x256.reshape(image.channels(), 16);
// copy the data from your image to new image
image.copyTo(image16x16);
Since image16x16 and image1x256 are just different pointers to the same data, copying the data into one of them actually changes both.
Note that the reshape function creates a new header (i.e. a new smart pointer) that may be used instead of the old one, but it does not change the properties of the original header, which still exists and can be used.
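As a sketch of the round trip that also guards against the non-continuous case mentioned in the question (reshape throws when asked to change the number of rows of non-continuous data), assuming image is the 16x16 Mat:
// Clone first if the data is not continuous (e.g. the Mat is an ROI view),
// then flatten to a single row of 256 elements and reshape back to 16 rows.
cv::Mat src = image.isContinuous() ? image : image.clone();
cv::Mat row256 = src.reshape(src.channels(), 1);     // 1x256 header over the same data
cv::Mat back16 = row256.reshape(src.channels(), 16); // 16x16 again
// Note: reshape returns new headers; the original image keeps reporting 16x16.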

Opencv Transforming Image

I am new to OpenCV. I want to transform two images, a src and a dst image. I am using cv::estimateRigidTransform() to calculate the transformation matrix, and after that cv::warpAffine() to transform from dst to src. When I compare the new transformed image with the src image it is almost the same (transformed), but when I take the absolute difference of the new transformed image and the src image, there is a lot of difference. What should I do, given that my dst image has some rotation and translation factor as well? Here is my code:
cv::Mat transformMat = cv::estimateRigidTransform(src, dst, true);
cv::Mat output;
cv::Size dsize = leftImageMat.size(); //This specifies the output image size--change needed
cv::warpAffine(src, output, transformMat, dsize);
Src Image
destination Image
output image
absolute Difference Image
Thanks
You have some misconceptions about the process.
The method cv::estimateRigidTransform takes as input two sets of corresponding points, and then solves a set of equations to find the transformation matrix. The output transformation maps the src points to the dst points (exactly, or as closely as possible when an exact match is not achievable - for example with float coordinates).
If you apply estimateRigidTransform to two images, OpenCV first finds matching pairs of points using some internal method (see the opencv docs).
cv::warpAffine then transforms the src image to dst according to the given transformation matrix. But any (well, almost any) transformation is a lossy operation. The algorithm has to estimate some data, because it isn't directly available. This process is called interpolation: using known information, you calculate the unknown values. Some info regarding image scaling can be found on the wiki; the same rules apply to other transformations - rotation, skew, perspective... Obviously this doesn't apply to translation.
Given your test images, I would guess that OpenCV takes the lampshade as the reference. From the difference it is clear that the lampshade is transformed best. By default OpenCV uses linear interpolation for warping, as it's the fastest method, but you can set a more advanced method for better results - again, consult the opencv docs.
Conclusion:
The result you got is pretty good if you bear in mind that it's the result of an automated process. If you want better results, you'll have to find another method for selecting corresponding points, or use a better interpolation method (see the sketch below). Either way, after the transform the diff will not be 0. It is virtually impossible to achieve that, because a bitmap is a discrete grid of pixels, so there will always be some gaps that need to be estimated.
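As an illustration of the interpolation point, here is a minimal sketch re-running the warp from the question with cubic interpolation; the src/dst names follow the question's code, and diffing against dst is an assumption about which pair you want to compare:
// Estimate the affine transform and warp with cubic instead of the
// default bilinear interpolation.
cv::Mat transformMat = cv::estimateRigidTransform(src, dst, true);
cv::Mat output;
cv::warpAffine(src, output, transformMat, dst.size(), cv::INTER_CUBIC);

// The residual will still be non-zero, but usually smaller.
cv::Mat diff;
cv::absdiff(output, dst, diff);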

1D histogram opencv with double values

I'm trying to create a histogram using OpenCV. I have a 32-bit image that came out of a blurring operation, so I just know that the values are in the range [-0.5, 0.5], but I don't know anything else about the starting data.
The problem is that I don't understand how to set the parameters to compute such a histogram.
The code I wrote is:
int numbins=1000;
float range[]={-0.5, 0.5};
float *ranges[]={range};
CvHistogram *hist=cvCreateHist(1, &numbins, CV_HIST_ARRAY, ranges, 1);
cvCalcHist(&img, hist);
where img is the image I want the histogram of. If I try to print the histogram I just get a black picture, while with the same function I get a correct histogram if I use a greyscale 8-bit image.
Have you looked at the calcHist example? Also, the camshiftdemo makes heavy use of histograms.
Are you normalizing the histogram output with normalize before display (camshiftdemo shows how to do this)? Values near 0 will appear black when displayed, but when normalized to, say, 0-255 they will show up nicely.
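In case it helps, here is a hedged sketch with the C++ calcHist API for a float image in [-0.5, 0.5], followed by the normalize step mentioned above (the random test image is just a stand-in for the blurred data):
#include <opencv2/opencv.hpp>

int main()
{
    // Stand-in for the 32-bit blurred image with values in [-0.5, 0.5].
    cv::Mat img(256, 256, CV_32F);
    cv::randu(img, cv::Scalar(-0.5), cv::Scalar(0.5));

    int histSize = 1000;                      // number of bins
    float range[] = { -0.5f, 0.5f };          // the known value range
    const float* ranges[] = { range };
    int channels[] = { 0 };

    cv::Mat hist;
    cv::calcHist(&img, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    // Scale bin counts to [0, 255] so the histogram is visible when drawn.
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
    return 0;
}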
