How to use the convexHull function of OpenCV in iOS Objective-C?

I am new to OpenCV and have been trying to use the convexHull function from the OpenCV library in an app (written in Objective-C). I need to know the input format of the function arguments; it's pretty confusing. Also, does this function return the points in a sequence? For example, if I use addLineToPoint to draw a UIBezierPath of this hull, is that possible?

Some sample code for you:
std::vector<cv::Point> points;
// fill that vector with your points
std::vector<cv::Point> hull;
if (points.size()) {
    cv::convexHull(points, hull);
}

cv::Size size = cv::Size(w, h);
// some size for the matrix where you will draw your convex hull
cv::Mat hullMask = cv::Mat::zeros(size, CV_8UC1);

int hull_count = (int)hull.size();
if (hull_count) {
    const cv::Point* hull_pts = &hull[0];
    cv::fillPoly(hullMask, &hull_pts, &hull_count, 1, cv::Scalar(255));
}
This code will create the convex hull and draw it.
You can find the complete documentation for cv::convexHull in the OpenCV reference. It returns the hull points in sequential order around the hull; the direction is controlled by the "clockwise" argument, and by default the order is counter-clockwise.
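Since the hull comes back already ordered, a minimal sketch of drawing it as a closed outline looks like this (C++; the UIBezierPath case is the same idea: moveToPoint for the first vertex, addLineToPoint for the rest, then close the path). The function name drawHullOutline is just illustrative:
#include <opencv2/imgproc.hpp>
#include <vector>

void drawHullOutline(cv::Mat& canvas, const std::vector<cv::Point>& hull)
{
    if (hull.empty()) return;
    // convexHull returns the vertices in order around the hull, so connecting
    // consecutive points (and closing the loop) gives the outline
    std::vector<std::vector<cv::Point>> polys(1, hull);
    cv::polylines(canvas, polys, /*isClosed=*/true, cv::Scalar(255), 2);
}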

Related

Compute fundamental matrix with 8 point algorithm

I need to write an own implementation of computing the fundamental matrix between two images based on the corresponding image coordinates without using OpenCV.
Is it possible to describe this algorithm in its simplest form, as a simple and straightforward formula, in terms of the following function?
FMatrixEightPoint()
Input arguments:
points1(x,y) - pixel coordinates in the first image, corresponding to points2 in the second image
points2(x,y) - pixel coordinates in the second image, corresponding to points1 in the first image
Output:
F - the fundamental matrix between the first image and the second image
Yes, it is possible to describe the algorithm in the mentioned form.
If you were to use OpenCV, you could simply call findFundamentalMat, which also provides the 8-point method for computing the fundamental matrix.
The following example (in C++) is taken from the OpenCV documentation, adapted to use the 8-point algorithm instead of the documentation's RANSAC variant:
// Example: estimation of the fundamental matrix using the 8-point algorithm
int point_count = 8; // must be >= 8
vector<Point2f> points1(point_count);
vector<Point2f> points2(point_count);

// initialize the points here ...
for( int i = 0; i < point_count; i++ )
{
    points1[i] = ...;
    points2[i] = ...;
}

Mat fundamental_matrix =
    findFundamentalMat(points1, points2, CV_FM_8POINT);
If you want to write your own function, it would look like this (pseudocode, not valid code):
Matrix findFundamentalMat(Array points1, Array points2)
{
    Matrix fundamentalMatrix;
    // compute the fundamental matrix from points1 and points2,
    // or delegate to OpenCV's findFundamentalMat
    return fundamentalMatrix;
}
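For completeness, here is a minimal sketch of what such a function could look like with the plain (un-normalized) 8-point algorithm, using OpenCV only for the SVD. This is an assumption-laden sketch, not the OpenCV implementation; in practice you would normalize the coordinates first (Hartley normalization) for numerical stability:
#include <opencv2/core.hpp>
#include <vector>

cv::Mat fundamentalEightPoint(const std::vector<cv::Point2f>& points1,
                              const std::vector<cv::Point2f>& points2)
{
    CV_Assert(points1.size() >= 8 && points1.size() == points2.size());

    // build the linear system A * f = 0, one row per correspondence,
    // derived from x2^T * F * x1 = 0
    cv::Mat A((int)points1.size(), 9, CV_64F);
    for (int i = 0; i < (int)points1.size(); ++i) {
        double x1 = points1[i].x, y1 = points1[i].y;
        double x2 = points2[i].x, y2 = points2[i].y;
        double row[9] = { x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0 };
        for (int j = 0; j < 9; ++j) A.at<double>(i, j) = row[j];
    }

    // f is the right singular vector with the smallest singular value
    cv::Mat w, u, vt;
    cv::SVD::compute(A, w, u, vt, cv::SVD::FULL_UV);
    cv::Mat F = vt.row(8).reshape(0, 3).clone();

    // enforce the rank-2 constraint by zeroing the smallest singular value
    cv::SVD::compute(F, w, u, vt);
    w.at<double>(2) = 0.0;
    return u * cv::Mat::diag(w) * vt;
}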

OpenCV matchShapes() output value

How do I use the value that OpenCV's matchShapes returns? We implemented the OpenCV matchShapes function to compare two images, specifically their shapes, but now that we have the result we are confused about how to interpret these values.
The code is
- (bool) someMethod:(UIImage *)image :(UIImage *)temp {
    RNG rng(12345);
    cv::Mat src_base, hsv_base;
    cv::Mat src_test1, hsv_test1;
    src_base = [self cvMatWithImage:image];
    src_test1 = [self cvMatWithImage:temp];
    int thresh = 150;
    double ans = 0, result = 0;
    Mat imageresult1, imageresult2;

    cv::cvtColor(src_base, hsv_base, cv::COLOR_BGR2HSV);
    cv::cvtColor(src_test1, hsv_test1, cv::COLOR_BGR2HSV);

    std::vector<std::vector<cv::Point>> contours1, contours2;
    std::vector<Vec4i> hierarchy1, hierarchy2;

    Canny(hsv_base, imageresult1, thresh, thresh * 2);
    Canny(hsv_test1, imageresult2, thresh, thresh * 2);

    findContours(imageresult1, contours1, hierarchy1, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    for (int i = 0; i < contours1.size(); i++)
    {
        //cout << contours1[i] << endl;
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(imageresult1, contours1, i, color, 1, 8, hierarchy1, 0, cv::Point());
    }

    findContours(imageresult2, contours2, hierarchy2, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    for (int i = 0; i < contours2.size(); i++)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(imageresult2, contours2, i, color, 1, 8, hierarchy2, 0, cv::Point());
    }

    for (int i = 0; i < contours1.size(); i++)
    {
        ans = matchShapes(contours1[i], contours2[i], CV_CONTOURS_MATCH_I1, 0);
        cout << " " << ans << endl;
    }
    std::cout << "The answer is " << ans << endl;

    if (ans <= 20) {
        return true;
    }
    return false;
}
The output values are
0.225069
0.234417
0
7.63599
0
7.06392
0.335966
0.211358
0.327552
0.842969
0.761659
0.614039
See my comment on imoutidi's answer. Here is a visual explanation:
The first column shows the two original images, the second the Canny edges. The third column is an arbitrary selection of detected shapes that have the same index in both images. As you can see, it is not even guaranteed that they correspond to the same image parts as a human would see them; what you end up comparing here are different triangles, which says little about the overall shape similarity. The two shape arrays are not even the same size, since there are more structures in the bottom drawing (such as small shapes between a thick line). The fourth column shows the last shape in each array; that is the best bet you have for comparing the images. In this example, I get a value of 0.0920794532771 for their similarity.
If I understand your question correctly, you want to know what the return value of matchShapes() stands for.
In your case, given the two contours (shapes), the function returns a similarity metric: a small value indicates that the two shapes are similar and a large value that they are not.
A good explanation is here: http://docs.opencv.org/3.1.0/d5/d45/tutorial_py_contours_more_functions.html (check the third paragraph).
Also check out the documentation: http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
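As a sketch of a more robust comparison than pairing contours by index (which is what makes the values above hard to interpret), one could compare only the dominant contour of each image. The helper names below are illustrative, not part of the original code:
#include <opencv2/imgproc.hpp>
#include <vector>

static size_t largestContourIndex(const std::vector<std::vector<cv::Point>>& contours)
{
    size_t best = 0;
    double bestArea = -1.0;
    for (size_t i = 0; i < contours.size(); ++i) {
        double area = cv::contourArea(contours[i]); // pick the contour with the largest area
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return best;
}

// values near 0 mean "very similar", larger values mean "less similar"
double compareDominantShapes(const std::vector<std::vector<cv::Point>>& contours1,
                             const std::vector<std::vector<cv::Point>>& contours2)
{
    return cv::matchShapes(contours1[largestContourIndex(contours1)],
                           contours2[largestContourIndex(contours2)],
                           CV_CONTOURS_MATCH_I1, 0);
}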

get OpenCV Mat variables of rgb and depth information from pcl openni grabber

I'm using the PCL OpenNI grabber to get point clouds from Kinect cameras, but I also want to get OpenCV Mat variables for the RGB and depth information.
Does anyone know how to achieve this?
Thanks a lot!
I just found that PCL has its own OpenNI2 wrapper that provides the color and depth images directly. We can write a callback function like:
void PclProcessor::image_cb1_ (const boost::shared_ptr<pcl::io::Image>& rgb1, const boost::shared_ptr<pcl::io::DepthImage>& depth1, float reciprocalFocalLength)
{
    if (refreshK1)
    {
        C1 = Mat(rgb1->getHeight(), rgb1->getWidth(), CV_8UC3);
        rgb1->fillRGB(C1.cols, C1.rows, C1.data, C1.step);
        cvtColor(C1, C1, CV_RGB2BGR);

        D1 = Mat(depth1->getHeight(), depth1->getWidth(), CV_32F);
        depth1->fillDepthImage(D1.cols, D1.rows, (float *)D1.data, D1.step);

        refreshK1 = false;
    }
    imshow("camera 1 color", C1);
    imshow("camera 1 depth", D1);
    cv::waitKey(0);
}
In this case I get the color image right; however, the depth image does not look right.
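A likely reason (an assumption, since the rest of the pipeline is not shown): imshow() displays a CV_32F image by mapping the range [0, 1] to [0, 255], while the OpenNI depth values are in millimeters, so the window saturates to white. Rescaling the depth to 8-bit before displaying usually fixes the display; showDepth is just an illustrative helper name:
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

void showDepth(const cv::Mat& D1) // D1 is the CV_32F depth Mat filled in the callback above
{
    double minVal = 0.0, maxVal = 0.0;
    cv::minMaxLoc(D1, &minVal, &maxVal);                    // find the actual depth range
    cv::Mat depth8u;
    if (maxVal > 0.0)
        D1.convertTo(depth8u, CV_8UC1, 255.0 / maxVal);     // map [0, max] to [0, 255]
    else
        depth8u = cv::Mat::zeros(D1.size(), CV_8UC1);
    cv::imshow("camera 1 depth", depth8u);
}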

Approximating a contour with rotated rectangles

After some color detection and binary thresholding, I use the following code to find the contours and draw them onto the image:
using (MemStorage stor = new MemStorage())
{
    Contour<Point> contours = img.FindContours(
        Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST,
        stor);
    for (; contours != null; contours = contours.HNext)
    {
        Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * poly, stor);
        img.Draw(currentContour, new Bgr(255, 255, 255), 1);
        Rectangle currentrect = currentContour.BoundingRectangle;
        img.Draw(currentrect, new Bgr(255, 255, 255), 2);
    }
}
My problem is, as I expected, that if the contour is a rectangle that is rotated in the image, the bounding rectangle does not change its orientation to fit the rotation. Is there another way to accomplish this? Any help would be greatly appreciated.
Yes, there is another way to accomplish this. You can use
contour.GetConvexHull(ORIENTATION.CV_CLOCKWISE);
Using Moments, you can then easily get the orientation and adjust the rectangle accordingly.
The method you are looking for is:
PointCollection.MinAreaRect(points);
A worked example is here:
http://www.emgu.com/wiki/index.php/Minimum_Area_Rectangle_in_CSharp
Complete documentation (which has little more than the above) is located here:
http://www.emgu.com/wiki/files/2.4.0/document/html/0d5fd148-0afb-fdbf-e995-6dace8c8848d.htm
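For reference, Emgu's PointCollection.MinAreaRect is, as far as I know, a thin wrapper around OpenCV's minAreaRect; in the C++ API the equivalent sketch would be (rotatedBoundingBox is an illustrative name):
#include <opencv2/imgproc.hpp>
#include <vector>

cv::RotatedRect rotatedBoundingBox(const std::vector<cv::Point>& contour)
{
    // minimum-area rectangle, rotated to follow the contour's orientation
    cv::RotatedRect box = cv::minAreaRect(contour);

    // to draw it, take the four corners and connect them, e.g.:
    // cv::Point2f corners[4];
    // box.points(corners);
    // for (int i = 0; i < 4; ++i)
    //     cv::line(img, corners[i], corners[(i + 1) % 4], cv::Scalar(255, 255, 255), 2);

    return box;
}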

OpenCV C++/Obj-C: goodFeaturesToTrack inside specific blob

Is there a quick solution to restrict the ROI to just the area inside the contour of the blob I'm interested in?
My ideas so far:
Using the boundingRect, but it contains too much stuff I don't want to analyse.
Applying goodFeaturesToTrack to the whole image and then looping through the output coordinates to eliminate the ones outside my blob's contour.
Thanks in advance!
EDIT
I found what I need: cv::pointPolygonTest() seems to be the right thing, but I'm not sure how to implement it …
Here's some code:
// ...
IplImage forground_ipl = result;
IplImage *labelImg = cvCreateImage(forground.size(), IPL_DEPTH_LABEL, 1);
CvBlobs blobs;
bool found = cvb::cvLabel(&forground_ipl, labelImg, blobs);
IplImage *imgOut = cvCreateImage(cvGetSize(&forground_ipl), IPL_DEPTH_8U, 3);

if (found) {
    cvb::CvBlob *greaterBlob = blobs[cvb::cvGreaterBlob(blobs)];
    cvb::cvRenderBlob(labelImg, greaterBlob, &forground_ipl, imgOut);
    CvContourPolygon *polygon = cvConvertChainCodesToPolygon(&greaterBlob->contour);
}
"polygon" contains the contour I need.
goodFeaturesToTrack is implemented this way:
- (std::vector<cv::Point2f>)pointsFromGoodFeaturesToTrack:(cv::Mat &)_image
{
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(_image, corners, 100, 0.01, 10);
    return corners;
}
So next I need to loop through the corners and check each point with cv::pointPolygonTest(), right?
You can create a mask over your interest region:
EDIT
How to make a mask:
Mat mask(origImg.size(), CV_8UC1);
mask.setTo(Scalar::all(0));

// here I assume your contour is extracted with findContours,
// and is stored in a vector<vector<Point>>,
// and that you know which contour is the blob;
// if that's not the case, use fillPoly instead of drawContours()
Scalar color(255, 255, 255); // white; the mask is single-channel anyway
drawContours(mask, contours, contourIdx, color, CV_FILLED); // CV_FILLED (-1) fills the interior so the whole blob region is masked
// fillPoly(Mat& img, const Point** pts, const int* npts,
//          int ncontours, const Scalar& color)
And now you're ready to use it. BUT, look carefully at the result - I have heard about some bugs in OpenCV regarding the mask parameter for feature extractors, and I am not sure if it's about this one.
// note the mask parameter:
void goodFeaturesToTrack(InputArray image, OutputArray corners, int maxCorners,
                         double qualityLevel, double minDistance,
                         InputArray mask = noArray(), int blockSize = 3,
                         bool useHarrisDetector = false, double k = 0.04)
This will also improve the speed of your application - goodFeaturesToTrack eats a huge amount of time, and if you apply it only to a smaller region, the overall gain is significant.
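Putting the pieces together, a minimal sketch (names like cornersInsideBlob and blobContour are illustrative; blobContour would be the polygon extracted above, converted to cv::Point):
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Point2f> cornersInsideBlob(const cv::Mat& grayImage,
                                           const std::vector<cv::Point>& blobContour)
{
    // filled mask of the blob region
    cv::Mat mask = cv::Mat::zeros(grayImage.size(), CV_8UC1);
    std::vector<std::vector<cv::Point>> contours(1, blobContour);
    cv::drawContours(mask, contours, 0, cv::Scalar(255), -1); // -1 = filled

    // corners are only searched where the mask is non-zero
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(grayImage, corners, 100, 0.01, 10, mask);
    return corners;
}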
