Example code for SIFT in OpenCV 3? - opencv

I'm new to OpenCV and I'm trying to use SIFT to extract key points from a grayscale image, but I'm failing to compile the code. There seems to be no clear help on the internet for how to use SIFT. Please help. Thanks.
while (true)
{
    Mat myFrame;
    Mat grayFrame;
    capture.read(myFrame);
    cvtColor(myFrame, grayFrame, CV_BGR2GRAY);
    vector<Vec2f> outputArray;
    vector<KeyPoint> keypoint;
    Feature2D EXTRACTOR;
    Mat descriptors;
    EXTRACTOR.detectAndCompute(grayFrame, outputArray, keypoint, descriptors);
}

vector<KeyPoint> keyVector;
Mat outputDescriptor;
Ptr<SIFT> detector = SIFT::create();
detector->detect(grayFrame, keyVector);
// here grayFrame is the grayscale version of the original frame/image
If you want to get the descriptors of the key points, use:
detector->compute(grayFrame, keyVector, outputDescriptor);
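Putting the answer together, a minimal self-contained sketch could look like the following. This assumes OpenCV 3 built with the opencv_contrib modules, where SIFT lives in the xfeatures2d module; "frame.png" is a placeholder file name, not from the original post:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int main()
{
    // Load an image directly as grayscale (placeholder file name).
    cv::Mat grayFrame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (grayFrame.empty())
        return 1;

    // In OpenCV 3 + contrib, SIFT is created via the xfeatures2d factory.
    cv::Ptr<cv::xfeatures2d::SIFT> detector = cv::xfeatures2d::SIFT::create();

    std::vector<cv::KeyPoint> keyVector;
    cv::Mat descriptors;

    // Detect keypoints and compute their descriptors in one call.
    // Note the second argument is an optional mask, not an output array.
    detector->detectAndCompute(grayFrame, cv::noArray(), keyVector, descriptors);

    return 0;
}
```

The original question's code fails partly because Feature2D is an abstract base class that cannot be instantiated directly, and partly because an output vector was passed where detectAndCompute expects the mask argument.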

Related

object and contour detection in an image

I am new to image processing and trying to get the contours of the apples in these images. To do so, I use OpenCV, but I do not get a proper contour detection. I want the algorithm to also be able to get the contours of other objects, so it should not be limited to apples (= circles).
Original picture
If i follow the instructions there are 4 steps to be taken.
Open the image file
Convert the file to grayscale
Do some processing (blur, erode, dilate, you name it)
Get the contours
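The four steps above could be sketched roughly as follows. This is only a sketch: "apples.jpg", the blur kernel size, the morphology settings, and the use of Otsu thresholding are placeholder assumptions, not tuned values from the original post:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // 1. Open the image file (placeholder file name)
    cv::Mat image = cv::imread("apples.jpg", cv::IMREAD_COLOR);
    if (image.empty())
        return 1;

    // 2. Convert the file to grayscale
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

    // 3. Some processing: blur, then a morphological open (erode + dilate)
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(gray, gray, cv::MORPH_OPEN, kernel);

    // 4. Get the contours (findContours expects a binary image)
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    return 0;
}
```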
The first point that confuses me is the grayscale conversion.
I did:
Mat image;
Mat HSVimage;
Mat Grayimage;
image = imread(imageName, IMREAD_COLOR); // Read the file
cvtColor(image, HSVimage, COLOR_BGR2HSV);
Mat chan[3];
split(HSVimage, chan);
Grayimage = chan[2];
First question:
Is this a correct choice, or should I just read the file in grayscale, or use YUV?
I only use 1 channel of the HSV image; is this correct?
I tried a lot of processing methods, but there are so many that I lost track. The best result I got was when I used a threshold and an adaptiveThreshold.
threshold(Grayimage, Grayimage,49, 0, THRESH_TOZERO);
adaptiveThreshold(Grayimage, Tresholdimage, 256, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 23, 4);
The result i get is:
result after processing
But findContours does not find a closed object. So I get:
contours
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(Tresholdimage, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0, 0));
I tried Hough circles,
vector<Vec3f> circles;
HoughCircles(Tresholdimage, circles, HOUGH_GRADIENT, 2.0, 70);
and i got:
hough circles ok
So I was a happy man, but as soon as I tried the code on another picture, I got:
second picture original
hough circles wrong
I can experiment with the HoughCircles function; I see there are a lot of possibilities. But I can only detect circles with it, so it is not my first choice.
I am a newbie at this. My questions are:
Is this a correct way, or is it better to use techniques like blob detection or ML to find the objects in the picture?
If it is a correct way, what functions should I use to get better results?
Regards,
Peter

Converting OpenCV Mat to Libfreenect2 Frame for Registration

I need to reconstruct a PointCloud using libfreenect2 registration::apply with color/depth images. The problem is that I pre-saved the color & depth images as PNG files. Both the color & depth images were obtained using libfreenect2::Frame and converted to an OpenCV Mat for imwrite.
Mat(rgb->height, rgb->width, CV_8UC4, rgb->data).copyTo(bgrImage);
Mat(depth->height, depth->width, CV_32FC1, depth->data).copyTo(depthImage);
I tried the following method to get my color Frame and it works. The problem is that when I try to do the same thing with the depth image, issues emerge.
Mat color_CV8UC3 = cv::imread(colorFilename);
cvtColor(color_CV8UC3, color_CV8UC4, CV_BGR2BGRA); //Convert to BGRX
libfreenect2::Frame rgb(color_CV8UC4.cols, color_CV8UC4.rows, 4, color_CV8UC4.data);
Mat depthImg = cv::imread(depthFilename);
depthImg.convertTo(depthFrame, CV_32FC1, 1.0/255.0); //Convert to 32FC1
libfreenect2::Frame depth(depthFrame.cols, depthFrame.rows, 4, depthFrame.data);
I tested the conversion of cv::Mat to libfreenect2::Frame by re-converting the libfreenect2::Frame back for both rgb and depth; while I got the same image back for rgb, that wasn't true for the depth image.
Mat(rgb.height, rgb.width, CV_8UC4, rgb.data).copyTo(rgb_image);
Mat(depth.height, depth.width, CV_32FC1, depth.data ).copyTo(depth_image);
imshow("Depth", depth_image);
imshow("Color", rgb_image);
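The round-trip check described above can be sketched as follows (a sketch only, assuming the libfreenect2 headers are available; the 512×424 size and the fill values are placeholders; bytes-per-pixel is 4 both for BGRX color and for 32-bit float depth):

```cpp
#include <libfreenect2/frame_listener.hpp>
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder Mats standing in for real sensor data.
    cv::Mat color(424, 512, CV_8UC4, cv::Scalar(0, 0, 0, 255));
    cv::Mat depthMat(424, 512, CV_32FC1, cv::Scalar(1000.0f));

    // Wrap the Mats as libfreenect2 Frames. The Frame constructor takes
    // (width, height, bytes_per_pixel, data) and does not copy, so the
    // Mats must outlive the Frames.
    libfreenect2::Frame rgb(color.cols, color.rows, 4, color.data);
    libfreenect2::Frame depth(depthMat.cols, depthMat.rows, 4, depthMat.data);

    // Re-wrap the Frame data as Mats; both should match the originals
    // bit-for-bit, since no conversion happened in between.
    cv::Mat rgb_back(rgb.height, rgb.width, CV_8UC4, rgb.data);
    cv::Mat depth_back(depth.height, depth.width, CV_32FC1, depth.data);

    return 0;
}
```

Note that the wrapping itself is lossless; the mismatch in the question most likely comes from reading the depth PNG back as an 8-bit 3-channel image and rescaling it, which discards the original float values.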
Depth Image - Loaded & Converted to 32FC1
Depth Image - After Converted to Frame & Reconverted Back
Thanks for any assistance provided and please give any feedback as this is my first time posting a question here.

what is the meaning of Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );?

Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 ); is a part of some code. What it really does is create an object of type Mat called drawing, but I don't really understand what Mat::zeros is. Please help me; I am new to OpenCV and C++.
It creates a Mat object filled with zeros (i.e. a black image) that has the same size as canny_output, 8-bit depth and 3 channels.
For more information
Mat::zeros
As the official documentation says: here.
This line creates a Mat filled with zeros, of the same size as your canny_output Mat, with type CV_8UC3.
For more explanation of the datatype : here
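To make this concrete, a small sketch using a hypothetical 2×3 size in place of canny_output.size():

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Same idea as the line in question, but with an explicit size instead
    // of canny_output.size(): a 2x3 image, 8 bits per channel, 3 channels,
    // every byte initialized to zero -- i.e. a small black BGR image.
    cv::Mat drawing = cv::Mat::zeros(cv::Size(3, 2), CV_8UC3);

    std::cout << drawing.rows << "x" << drawing.cols
              << ", channels: " << drawing.channels() << std::endl;
    return 0;
}
```

Matching canny_output's size matters because drawing functions like drawContours expect the destination image to have the same dimensions as the image the contours came from.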

doubts regarding the output of Canny and findContours in OpenCV

I am using the standard flow to process an image and have just found that I cannot understand the meaning of the contours generated by Canny and findContours.
Here is the image:
And after canny:
After findContours, it has 4 contours, so I drew the 4 contours out.
That is the confusing part: why does it have 4 contours instead of 2? From the Canny output, we can only see 2 contours: the outside one and the inside one.
Could someone clear my doubts?
Thanks
Deryk
code is here:
Mat src = imread("images/andgate.png");
Mat gray;
cvtColor(src, gray, CV_BGR2GRAY);
Mat bw;
Canny(gray, bw, 100, 200);
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy2;
findContours(bw, contours2, hierarchy2,CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

Bug in cv::Orb?

I recently found some very strange behavior in OpenCV's ORB descriptor.
cv::Mat grey; //greyscale image
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
cv::ORB detector;
detector(grey,cv::Mat(),keypoints,descriptors);
The above code consistently crashes, if given an image containing no potential keypoints (a black image, for example), with the error
OpenCV Error: Assertion failed (m.dims >= 2) in Mat, file /Users/user/slave/ios_framework/src/opencv/modules/core/src/matrix.cpp, line 268
I found that to fix the problem I could do the following
cv::Mat grey;
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
cv::ORB detector;
detector(grey,cv::Mat(),keypoints);
if(keypoints.size() > 0)
{
detector(grey,cv::Mat(),keypoints,descriptors,true);
}
This first detects keypoints and then generates their descriptors only if any keypoints were detected. I am using OpenCV 2 as a .framework on iOS.
Is this a bug in OpenCV? If not, what am I doing wrong? If so, are there any versions in which it is fixed?
I just ran this code
cv::Mat grey = cv::Mat::zeros(100, 100, CV_8UC1);
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
cv::ORB detector;
detector(grey,cv::Mat(),keypoints,descriptors);
with OpenCV 2.4.1 without problems.
Did you debug into your code to see where exactly the assertion fails?
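For reference, the same guard can be written against the later cv::ORB::create factory API (a sketch; in newer OpenCV versions the functor-style call used above was replaced by explicit detect/compute methods):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // An all-black image: ORB will find no keypoints here.
    cv::Mat grey = cv::Mat::zeros(100, 100, CV_8UC1);

    cv::Ptr<cv::ORB> detector = cv::ORB::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;

    detector->detect(grey, keypoints);
    if (!keypoints.empty())
    {
        // Only compute descriptors when something was actually detected,
        // mirroring the workaround from the question.
        detector->compute(grey, keypoints, descriptors);
    }
    return 0;
}
```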
