OpenCV: vehicle tracking using optical flow

I have implemented optical flow to track vehicles on the road, and it turned out to be very slow.
My code uses these functions:
cvGoodFeaturesToTrack
cvFindCornerSubPix
cvCalcOpticalFlowPyrLK
How do I make this tracking fast and efficient?
My code is:
#include "highgui.h"
#include "cv.h"
#include "cxcore.h"
#include <iostream>
using namespace std;
const int MAX_CORNERS = 500;
int main()
{
CvCapture* capture=cvCreateFileCapture("E:\cam1.avi");
IplImage* img_A;// = cvLoadImage("image0.png", CV_LOAD_IMAGE_GRAYSCALE);
IplImage* img_B;// = cvLoadImage("image1.png", CV_LOAD_IMAGE_GRAYSCALE);
img_A=cvQueryFrame(capture);
IplImage* imgA = cvCreateImage( cvGetSize(img_A), 8, 1 );
IplImage* imgB = cvCreateImage( cvGetSize(img_A), 8, 1 );
cvNamedWindow( "ImageA", CV_WINDOW_AUTOSIZE );
cvNamedWindow( "ImageB", CV_WINDOW_AUTOSIZE );
cvNamedWindow( "LKpyr_OpticalFlow", CV_WINDOW_AUTOSIZE );
while(1)
{
int couter=0;
for(int k=0;k<20;k++)
{
img_B=cvQueryFrame(capture);
}
//cvCvtColor(imgA,imgA,CV_BGR2GRAY);
//cvCvtColor(imgB,imgB,CV_BGR2GRAY);
// Load two images and allocate other structures
/*IplImage* imgA = cvLoadImage("image0.png", CV_LOAD_IMAGE_GRAYSCALE);
IplImage* imgB = cvLoadImage("image1.png", CV_LOAD_IMAGE_GRAYSCALE);*/
CvSize img_sz = cvGetSize( img_A );
int win_size = 10;
IplImage* imgC = cvCreateImage( cvGetSize(img_A), 8, 1 );
cvZero(imgC);
// Get the features for tracking
IplImage* eig_image = cvCreateImage( img_sz, IPL_DEPTH_32F, 1 );
IplImage* tmp_image = cvCreateImage( img_sz, IPL_DEPTH_32F, 1 );
int corner_count = MAX_CORNERS;
CvPoint2D32f* cornersA = new CvPoint2D32f[ MAX_CORNERS ];
cvCvtColor(img_A,imgA,CV_BGR2GRAY);
cvCvtColor(img_B,imgB,CV_BGR2GRAY);
cvGoodFeaturesToTrack( imgA, eig_image, tmp_image, cornersA, &corner_count ,0.05, 5.0, 0, 3, 0, 0.04 );
cvFindCornerSubPix( imgA, cornersA, corner_count, cvSize( win_size, win_size ) ,cvSize( -1, -1 ), cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );
// Call Lucas Kanade algorithm
char features_found[ MAX_CORNERS ];
float feature_errors[ MAX_CORNERS ];
CvSize pyr_sz = cvSize( imgA->width+8, imgB->height/3 );
IplImage* pyrA = cvCreateImage( pyr_sz, IPL_DEPTH_32F, 1 );
IplImage* pyrB = cvCreateImage( pyr_sz, IPL_DEPTH_32F, 1 );
CvPoint2D32f* cornersB = new CvPoint2D32f[ MAX_CORNERS ];
/*int jk=0;
for(int i=0;i<imgA->width;i+=10)
{
for(int j=0;j<imgA->height;j+=10)
{
cornersA[jk].x=i;
cornersA[jk].y=j;
++jk;
}
}
*/
cvCalcOpticalFlowPyrLK( imgA, imgB, pyrA, pyrB, cornersA, cornersB, corner_count,
cvSize( win_size, win_size ), 5, features_found, feature_errors,
cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3 ), 0 );
// Make an image of the results
for( int i=0; i < corner_count; i++ )
{
if( features_found[i]==0|| feature_errors[i]>550 )
{
//printf("Error is %f/n",feature_errors[i]);
continue;
}
//printf("Got it/n");
CvPoint p0 = cvPoint( cvRound( cornersA[i].x ), cvRound( cornersA[i].y ) );
CvPoint p1 = cvPoint( cvRound( cornersB[i].x ), cvRound( cornersB[i].y ) );
cvLine( imgC, p0, p1, CV_RGB(255,0,0), 2 );
cout<<p0.x<<" "<<p0.y<<endl;
}
cvShowImage( "LKpyr_OpticalFlow", imgC );
cvShowImage( "ImageA", imgA );
cvShowImage( "ImageB", imgB );
//cvCopyImage(imgB,imgA);
delete[] cornersA;
delete[] cornersB;
cvWaitKey(33);
}
return 0;
}

I might be going a bit over the line here, but I would suggest you check out OpenTLD. OpenTLD (aka Predator) is one of the most efficient tracking algorithms. Zdenek Kalal implemented OpenTLD in MATLAB, and George Nebehay has made a very efficient C++ OpenCV port of it.
It's very easy to install, and the tracking is really efficient.
OpenTLD uses a Median Flow tracker for tracking and implements a P-N learning algorithm. In this YouTube video, Zdenek Kalal shows the use of OpenTLD.
If you just want to implement a Median Flow tracker, follow this link: https://github.com/gnebehay/OpenTLD/tree/master/src/mftracker
If you want to use it in Python, I have made a Median Flow tracker and also a Python port of OpenTLD, but the Python port isn't very efficient.
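If you only need the core idea, below is a minimal sketch of the forward-backward consistency check at the heart of the Median Flow tracker: track points A to B, then B back to A, and keep a point only if it returns close to where it started. This is a sketch against the OpenCV C++ API, not OpenTLD's actual code; the 1-pixel threshold and the default LK parameters are assumptions.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Keep only points that survive a forward-backward round trip.
std::vector<cv::Point2f> fbFilter(const cv::Mat& prevGray, const cv::Mat& gray,
                                  const std::vector<cv::Point2f>& pts)
{
    std::vector<cv::Point2f> fwd, back, kept;
    std::vector<uchar> s1, s2;
    std::vector<float> e1, e2;
    cv::calcOpticalFlowPyrLK(prevGray, gray, pts, fwd, s1, e1);   // A -> B
    cv::calcOpticalFlowPyrLK(gray, prevGray, fwd, back, s2, e2);  // B -> A
    for (size_t i = 0; i < pts.size(); ++i)
    {
        float dx = pts[i].x - back[i].x, dy = pts[i].y - back[i].y;
        if (s1[i] && s2[i] && std::sqrt(dx * dx + dy * dy) < 1.0f)
            kept.push_back(fwd[i]);   // reliable point; keep its new position
    }
    return kept;
}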

First of all, to track a car you have to somehow detect it (using color segmentation or background subtraction, for example). Once the car is detected, you have to track it (track some points on it) using cvCalcOpticalFlowPyrLK. I didn't find any code in your post that is responsible for car detection.
Take a look at this article and this one. Your idea should be the same.
Also, your code is a bit wrong. For example, why do you call cvGoodFeaturesToTrack in the main loop? You should call it once, before the loop, to detect good features to track (a rough sketch of that restructuring follows below). Note, however, that this will also detect non-cars.
Take a look at the default OpenCV example: OpenCV/samples/cpp/lkdemo.cpp.
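Here is a rough sketch of the detect-once-then-track structure, using the OpenCV C++ API (which builds and manages the LK pyramids internally). The file name and the 50-point re-detection threshold are placeholder assumptions:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("cam1.avi");   // placeholder file name
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Re-detect only when too few features are left (threshold is a guess)
        if (!prevGray.empty() && prevPts.size() < 50)
            cv::goodFeaturesToTrack(prevGray, prevPts, 500, 0.05, 5.0);

        if (!prevPts.empty())
        {
            std::vector<cv::Point2f> nextPts;
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

            std::vector<cv::Point2f> kept;
            for (size_t i = 0; i < status.size(); ++i)
                if (status[i])
                {
                    cv::line(frame, prevPts[i], nextPts[i], cv::Scalar(0, 0, 255), 2);
                    kept.push_back(nextPts[i]);  // carry tracked points forward
                }
            prevPts = kept;
        }

        cv::imshow("LKpyr_OpticalFlow", frame);
        if (cv::waitKey(33) == 27)
            break;
        prevGray = gray.clone();
    }
    return 0;
}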

Related

Regarding a specific Object Detection in OpenCV using WebCam and comparing it with an input Image

I am new to OpenCV and want to develop a program which takes the camera input and compares it with a known image of an object, supplied as a .jpg file. If the webcam input matches the stored image up to a certain level of accuracy, a message should be displayed saying that the required object has been found.
E.g.: if I hold a computer cable in front of the webcam, it needs to be detected and compared to the image of the computer cable I have fed into the program.
I've tried many techniques and find template matching to be effective, as mentioned in the following link:
Real-time template matching - OpenCV, C++
However, after drawing the rectangle and getting the roiImg, I want to compare its likeness with a known image on my disk (in the OpenCV working directory). For this I am trying to convert roiImg and my other images to HSV format and get 4 values according to the histogram-comparison algorithms.
I have tried to combine the two codes, but it doesn't seem to work: roiImg is created at runtime, so it cannot be compared with the other two images loaded using imread (see the note after the code).
#include <iostream>
#include "opencv2/opencv.hpp"
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <sstream>

using namespace cv;
using namespace std;

Point point1, point2; /* vertical points of the bounding box */
int drag = 0;
Rect rect; /* bounding box */
Mat img, roiImg; /* roiImg - the part of the image in the bounding box */
int select_flag = 0;
bool go_fast = false;

Mat mytemplate;
Mat src_base, hsv_base;
Mat src_test1, hsv_test1;
Mat src_test2, hsv_test2;
Mat hsv_half_down;

///------- template matching -------------------------------------------------
Mat TplMatch( Mat &img, Mat &mytemplate )
{
    Mat result;
    matchTemplate( img, mytemplate, result, CV_TM_SQDIFF_NORMED );
    normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
    return result;
}

///------- Localizing the best match with minMaxLoc ---------------------------
Point minmax( Mat &result )
{
    double minVal, maxVal;
    Point minLoc, maxLoc, matchLoc;
    minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
    matchLoc = minLoc;
    return matchLoc;
}

///------- tracking ------------------------------------------------------------
void track()
{
    if (select_flag)
    {
        //roiImg.copyTo(mytemplate);
        // select_flag = false;
        go_fast = true;
    }
    // imshow( "mytemplate", mytemplate ); waitKey(0);
    Mat result = TplMatch( img, mytemplate );
    Point match = minmax( result );
    rectangle( img, match, Point( match.x + mytemplate.cols, match.y + mytemplate.rows ),
               CV_RGB(255, 255, 255), 0.5 );
    std::cout << "match: " << match << endl;

    /// latest match is the new template
    Rect ROI = cv::Rect( match.x, match.y, mytemplate.cols, mytemplate.rows );
    roiImg = img( ROI );
    roiImg.copyTo(mytemplate);
    imshow( "roiImg", roiImg ); //waitKey(0);

    // Compare roiImg with a known image to calculate resemblance
    /* Method          Base-Base   Base-Half   Base-Test 1   Base-Test 2
       Correlation     1.000000    0.930766    0.182073      0.120447
       Chi-square      0.000000    4.940466    21.184536     49.273437
       Intersection    24.391548   14.959809   3.889029      5.775088
       Bhattacharyya   0.000000    0.222609    0.646576      0.801869
       For the Correlation and Intersection methods, the higher the metric, the
       more accurate the match. As we can see, the match base-base is the highest
       of all, as expected, and the match base-half is the second best (as
       predicted). For the other two metrics, the lower the result, the better
       the match. The matches between test 1 and test 2 with respect to the base
       are worse, which again was expected. */
    src_base = imread("roiImg");
    src_test1 = imread("Samarth.jpg");
    src_test2 = imread("Samarth2.jpg");
    //double l2_norm = cvNorm( src_base, src_test1 );

    /// Convert to HSV
    cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
    cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );
    cvtColor( src_test2, hsv_test2, COLOR_BGR2HSV );
    hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ),
                              Range( 0, hsv_base.cols - 1 ) );

    /// Using 50 bins for hue and 60 for saturation
    int h_bins = 50; int s_bins = 60;
    int histSize[] = { h_bins, s_bins };
    // hue varies from 0 to 179, saturation from 0 to 255
    float h_ranges[] = { 0, 180 };
    float s_ranges[] = { 0, 256 };
    const float* ranges[] = { h_ranges, s_ranges };
    // Use the 0-th and 1-st channels
    int channels[] = { 0, 1 };

    /// Histograms
    MatND hist_base;
    MatND hist_half_down;
    MatND hist_test1;
    MatND hist_test2;

    /// Calculate the histograms for the HSV images
    calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
    normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );
    calcHist( &hsv_half_down, 1, channels, Mat(), hist_half_down, 2, histSize, ranges, true, false );
    normalize( hist_half_down, hist_half_down, 0, 1, NORM_MINMAX, -1, Mat() );
    calcHist( &hsv_test1, 1, channels, Mat(), hist_test1, 2, histSize, ranges, true, false );
    normalize( hist_test1, hist_test1, 0, 1, NORM_MINMAX, -1, Mat() );
    calcHist( &hsv_test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false );
    normalize( hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat() );

    /// Apply the histogram comparison methods
    for( int i = 0; i < 4; i++ )
    {
        int compare_method = i;
        double base_base = compareHist( hist_base, hist_base, compare_method );
        double base_half = compareHist( hist_base, hist_half_down, compare_method );
        double base_test1 = compareHist( hist_base, hist_test1, compare_method );
        double base_test2 = compareHist( hist_base, hist_test2, compare_method );
        printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n",
                i, base_base, base_half, base_test1, base_test2 );
    }
    printf( "Done \n" );
}

///------- MouseCallback function ----------------------------------------------
void mouseHandler(int event, int x, int y, int flags, void *param)
{
    if (event == CV_EVENT_LBUTTONDOWN && !drag)
    {
        /// left button clicked. ROI selection begins
        point1 = Point(x, y);
        drag = 1;
    }
    if (event == CV_EVENT_MOUSEMOVE && drag)
    {
        /// mouse dragged. ROI being selected
        Mat img1 = img.clone();
        point2 = Point(x, y);
        rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 3, 8, 0);
        imshow("image", img1);
    }
    if (event == CV_EVENT_LBUTTONUP && drag)
    {
        point2 = Point(x, y);
        rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
        drag = 0;
        roiImg = img(rect);
        roiImg.copyTo(mytemplate);
        // imshow("MOUSE roiImg", roiImg); waitKey(0);
    }
    if (event == CV_EVENT_LBUTTONUP)
    {
        /// ROI selected
        select_flag = 1;
        drag = 0;
    }
}

///------- Main() ---------------------------------------------------------------
int main()
{
    int k;

    /// open webcam
    VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;

    /* /// open video file
    VideoCapture cap;
    cap.open( "Wildlife.wmv" );
    if ( !cap.isOpened() )
    { cout << "Unable to open video file" << endl; return -1; } */

    /* /// Set video to 320x240
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240); */

    cap >> img;
    GaussianBlur( img, img, Size(7,7), 3.0 );
    imshow( "image", img );

    while (1)
    {
        cap >> img;
        if ( img.empty() )
            break;

        // Flip the frame horizontally and add blur
        cv::flip( img, img, 1 );
        GaussianBlur( img, img, Size(7,7), 3.0 );

        if ( rect.width == 0 && rect.height == 0 )
            cvSetMouseCallback( "image", mouseHandler, NULL );
        else
            track();

        imshow("image", img);
        // waitKey(100); k = waitKey(75);
        k = waitKey(go_fast ? 30 : 10000);
        if (k == 27)
            break;
    }
    return 0;
}
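A note on why the comparison fails: imread takes a file name, so imread("roiImg") looks for a file literally named "roiImg" on disk and returns an empty Mat. The runtime ROI is already in memory and can be used directly. A minimal sketch of the fix inside track() (the empty-check is an addition):

// In-memory ROI instead of a disk read; reference images still come from disk.
src_base = roiImg.clone();
src_test1 = imread("Samarth.jpg");
src_test2 = imread("Samarth2.jpg");
if (src_base.empty() || src_test1.empty() || src_test2.empty())
    return;   // bail out instead of crashing in cvtColor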
If you want to detect an object in a live feed, detecting it in each frame is not efficient. You detect it once, and after that you track it, so the process involves both detection and tracking.
For detection you have to segment the object from the rest of the scene. OpenCV provides many algorithms for segmenting an object from the background based on color (color-based detection); other than color, you can also use the object's shape (shape-based segmentation).
You can use the LK optical flow algorithm as a starting point for tracking.
Additionally, you can use template matching, CAMShift, or a Median Flow tracker, etc. to obtain quick results. Which of these algorithms works best depends on the scale changes of the object and the lighting changes in the feed. OpenCV has sample programs for all of the above. A rough sketch of the detect-then-track split follows below.
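Here is a rough sketch of that pipeline against the OpenCV 2.4 C++ API, with MOG2 background subtraction standing in for the segmentation step (an assumption; color- or shape-based segmentation would slot into the same place). The file name and thresholds are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("traffic.avi");   // placeholder file name
    cv::BackgroundSubtractorMOG2 bg;       // 2.4-style API; 3.x would use
                                           // cv::createBackgroundSubtractorMOG2()
    cv::Mat frame, gray, prevGray, fgMask;
    std::vector<cv::Point2f> pts;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        bg(frame, fgMask);                     // 1. segment moving objects
        cv::erode(fgMask, fgMask, cv::Mat());  //    knock out single-pixel noise

        if (prevGray.empty() || pts.size() < 20)
        {
            // 2. (re)seed features only inside the foreground mask
            cv::goodFeaturesToTrack(gray, pts, 200, 0.05, 5.0, fgMask);
        }
        else
        {
            // 3. track the seeded points with pyramidal LK
            std::vector<cv::Point2f> next;
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, pts, next, status, err);
            std::vector<cv::Point2f> kept;
            for (size_t i = 0; i < status.size(); ++i)
                if (status[i])
                {
                    cv::line(frame, pts[i], next[i], cv::Scalar(0, 0, 255), 2);
                    kept.push_back(next[i]);
                }
            pts = kept;
        }

        prevGray = gray.clone();
        cv::imshow("tracking", frame);
        if (cv::waitKey(30) == 27)
            break;
    }
    return 0;
}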

using a custom haar cascade classifier

I have created a Haar cascade classifier for detecting a hand, with 1000 positive images and 2000 negative images. The XML file was created using convert_cascade.c from the OpenCV samples. Now I am using the following code for detection, but the assert statement is failing with the error
"assertion failed: cascade && storage && capture, line 21", which is the assertion call itself. I know that an assertion fails when the expression evaluates to zero. So, any idea what could be wrong with the classifier? Storage and capture should be working fine anyway. (A diagnostic sketch follows after the code.)
#include <stdio.h>
#include "opencv/cv.h"
#include "opencv/highgui.h"

CvHaarClassifierCascade *cascade;
CvMemStorage *storage;

void detect( IplImage *img );

int main( )
{
    CvCapture *capture;
    IplImage *frame;
    int key;
    char *filename = "haar3.xml"; // name of my classifier

    cascade = ( CvHaarClassifierCascade* )cvLoad( filename, 0, 0, 0 );
    storage = cvCreateMemStorage(0);
    capture = cvCaptureFromCAM(0);
    assert( cascade && storage && capture );

    cvNamedWindow("video", 1);
    while(1) {
        frame = cvQueryFrame( capture );
        detect(frame);
        key = cvWaitKey(50);
    }

    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    cvDestroyWindow("video");
    cvReleaseHaarClassifierCascade(&cascade);
    cvReleaseMemStorage(&storage);
    return 0;
}

void detect(IplImage *img)
{
    int i;
    CvSeq *object = cvHaarDetectObjects(
        img,
        cascade,
        storage,
        1.5,                  // scale factor
        2,                    // min neighbours
        1,                    // flags
        // CV_HAAR_DO_CANNY_PRUNING,
        cvSize( 24, 24 ),     // min size
        cvSize( 640, 480 ) ); // max size

    for( i = 0; i < ( object ? object->total : 0 ); i++ )
    {
        CvRect *r = ( CvRect* )cvGetSeqElem( object, i );
        cvRectangle( img,
                     cvPoint( r->x, r->y ),
                     cvPoint( r->x + r->width, r->y + r->height ),
                     CV_RGB( 255, 0, 0 ), 2, 8, 0 );
        //printf("%d,%d\nnumber =%d\n", r->x, r->y, object->total);
    }
    cvShowImage( "video", img );
}
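One way to narrow this down is to split the compound assertion so the failing handle identifies itself; in practice it is usually the cascade pointer that is NULL, when cvLoad cannot find or parse the XML file. A sketch against the code above:

cascade = ( CvHaarClassifierCascade* )cvLoad( filename, 0, 0, 0 );
storage = cvCreateMemStorage(0);
capture = cvCaptureFromCAM(0);
/* Check each handle separately instead of asserting all three at once. */
if ( !cascade ) fprintf( stderr, "cvLoad failed: bad path or cascade format\n" );
if ( !storage ) fprintf( stderr, "cvCreateMemStorage failed\n" );
if ( !capture ) fprintf( stderr, "cvCaptureFromCAM failed: no camera 0\n" );
assert( cascade && storage && capture );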

Tracking features

I want to track an object across 2 images (Shot A, Shot B).
I know the location of the object in the first shot (Shot A), but I don't know its location in the second shot (Shot B).
Shot A has multiple objects, so in order to track a specific one, I am selecting the ROI of the image containing the object I want to track. The problem is how to track that object's features in Shot B while keeping the same ROI size. Can I track the object's features across the whole of image B without selecting an ROI?
This is the code I have. Currently it selects the same ROI in Shot B as in Shot A, but sometimes the object inside the Shot A ROI is not inside the Shot B ROI.
IplImage* imgA = cvLoadImage("52783180_RAW_OVR1.jpg",CV_LOAD_IMAGE_GRAYSCALE);
cvSetImageROI(imgA, cvRect(2300, 1700, 1000,1200));
cvNamedWindow("SHOTA",0);
cvShowImage("SHOTA", imgA);
//cvWaitKey(0);
CvSize img_sz = cvGetSize( imgA );
int win_size = 10;
IplImage* imgB = cvLoadImage("52783180_RAW_OVR2.jpg",CV_LOAD_IMAGE_GRAYSCALE);
cvSetImageROI(imgB, cvRect(2300, 1700, 1000,1200));
cvNamedWindow("SHOTB",0);
cvShowImage("SHOTB", imgB);
IplImage* imgC=cvLoadImage("52783180_RAW_OVR2.jpg",CV_LOAD_IMAGE_UNCHANGED);
cvSetImageROI(imgC, cvRect(2300, 1700, 1000,1200));
//cvNamedWindow("SHOTA",0);
//cvShowImage("SHOTA", imgA);
IplImage* eig_image = cvCreateImage( img_sz, IPL_DEPTH_32F, 1 );
IplImage* tmp_image = cvCreateImage( img_sz, IPL_DEPTH_32F, 1 );
int corner_count = MAX_CORNERS;
CvPoint2D32f* cornersA = new CvPoint2D32f[ MAX_CORNERS ];
//cvSetImageROI(imgA, cvRect(2300, 1700, 1000,1200));
cvGoodFeaturesToTrack(
imgA,
eig_image,
tmp_image,
cornersA,
&corner_count,
0.01,
5.0,
0,
3,
0,
0.04
);
//cvResetImageROI(imgA);
cvFindCornerSubPix(
imgA,
cornersA,
corner_count,
cvSize(win_size,win_size),
cvSize(-1,-1),
cvTermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS,20,0.03)
);
// Call the Lucas Kanade algorithm
//
char features_found[ MAX_CORNERS ];
float feature_errors[ MAX_CORNERS ];
CvSize pyr_sz = cvSize( imgA->width+8, imgB->height/3 );
IplImage* pyrA = cvCreateImage( pyr_sz, IPL_DEPTH_32F, 1 );
IplImage* pyrB = cvCreateImage( pyr_sz, IPL_DEPTH_32F, 1 );
CvPoint2D32f* cornersB = new CvPoint2D32f[ MAX_CORNERS ];
cvCalcOpticalFlowPyrLK(
imgA,
imgB,
pyrA,
pyrB,
cornersA,
cornersB,
corner_count,
cvSize( win_size,win_size ),
5,
features_found,
feature_errors,
cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, .3 ),
0
);
// Now make some image of what we are looking at:
//
float sum=0;
for( int i=0; i<corner_count; i++ ) {
if( features_found[i]==0|| feature_errors[i]>550 ) {
// printf("Error is %f/n",feature_errors[i]);
continue;
}
sum+=(cornersA[i].x-cornersB[i].x)*(cornersA[i].x-cornersB[i].x)+(cornersA[i].y-cornersB[i].y)*(cornersA[i].y-cornersB[i].y);
// printf("Got it/n");
CvPoint p0 = cvPoint(
cvRound( cornersA[i].x ),
cvRound( cornersA[i].y )
);
CvPoint p1 = cvPoint(
cvRound( cornersB[i].x ),
cvRound( cornersB[i].y )
);
cvLine( imgC, p0, p1, CV_RGB(255,0,0),2 );
}
cvResetImageROI(imgC);
sum=sum/corner_count;
printf("%f\n",sum);
cvNamedWindow("ImageA",0);
cvNamedWindow("ImageB",0);
cvNamedWindow("LKpyr_OpticalFlow",0);
cvShowImage("ImageA",imgA);
cvShowImage("ImageB",imgB);
cvShowImage("LKpyr_OpticalFlow",imgC);
cvWaitKey(0);
Problem solved by using a mask in cvGoodFeaturesToTrack instead of cvSetImageROI; a sketch of the fix follows below.
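A sketch of how that mask-based fix might look with the same coordinates (not the asker's exact code): both shots stay full-size, only feature detection is confined to the rectangle, and cvCalcOpticalFlowPyrLK is then free to find the points anywhere in Shot B. eig_image, tmp_image, cornersA, and corner_count are allocated as before, but at the full image size:

IplImage* imgA = cvLoadImage("52783180_RAW_OVR1.jpg", CV_LOAD_IMAGE_GRAYSCALE);
IplImage* imgB = cvLoadImage("52783180_RAW_OVR2.jpg", CV_LOAD_IMAGE_GRAYSCALE);
/* no cvSetImageROI: the images keep their full size */

IplImage* mask = cvCreateImage( cvGetSize(imgA), 8, 1 );
cvZero( mask );
cvRectangle( mask, cvPoint(2300, 1700), cvPoint(2300 + 1000, 1700 + 1200),
             cvScalarAll(255), CV_FILLED, 8, 0 );

/* detection is confined to the rectangle via the mask argument (was 0) */
cvGoodFeaturesToTrack( imgA, eig_image, tmp_image, cornersA, &corner_count,
                       0.01, 5.0, mask, 3, 0, 0.04 );
/* cvCalcOpticalFlowPyrLK then searches the whole of Shot B as usual */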

Real-time face detection in OpenCV

I am trying to write some simple real-time face detection code, but somehow it doesn't work. (I tried the face detection code on a still image and it works, but with the code below I get a grey image on screen and the detection fails.)
Here is the code I have tried (it prints 'face detected!' once to the output window):
CvHaarClassifierCascade *cascade;
CvMemStorage *storage;
char *face_cascade = "haarcascade_frontalface_alt2.xml";
CvRect* r;
const CvArr* img_size;
IplImage *grayscale;

void detectFacialFeatures( IplImage *img )
{
    grayscale = cvCreateImage(cvGetSize(img), 8, 1);
    cvCvtColor(img, grayscale, CV_BGR2GRAY);

    CvMemStorage* storage = cvCreateMemStorage(0);
    cvClearMemStorage( storage );
    cvEqualizeHist(grayscale, grayscale);

    cascade = ( CvHaarClassifierCascade* )cvLoad( face_cascade, 0, 0, 0 );
    CvSeq* faces = cvHaarDetectObjects(grayscale, cascade, storage, 1.1, 3,
                                       CV_HAAR_DO_CANNY_PRUNING, cvSize( 50, 50 ) );
    if(faces)
    {
        printf("face detected!");
        r = ( CvRect* )cvGetSeqElem( faces, 0 );
        cvRectangle( img, cvPoint( r->x, r->y ),
                     cvPoint( r->x + r->width, r->y + r->height ),
                     CV_RGB( 255, 0, 0 ), 1, 8, 0 );
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    int c;
    IplImage* color_img;
    CvCapture* cv_cap = cvCreateCameraCapture(0);
    cvSetCaptureProperty(cv_cap, CV_CAP_PROP_FRAME_WIDTH, 640);
    cvSetCaptureProperty(cv_cap, CV_CAP_PROP_FRAME_HEIGHT, 480);
    cvNamedWindow("Video", 1);              // create window
    for(;;) {
        color_img = cvQueryFrame(cv_cap);   // get frame
        if(color_img == 0)
            break;
        cvFlip(color_img, 0, 1);            // mirror image
        detectFacialFeatures(color_img);
        cvShowImage("Video", color_img);    // show frame
        c = cvWaitKey(10);                  // wait 10 ms or for key stroke
        if(c == 27)
            break;                          // if ESC, break and quit
    }
    /* clean up */
    cvReleaseCapture( &cv_cap );
    cvDestroyWindow("Video");
}
Try it without calling cvFlip and cvEqualizeHist.
Look at the result of each operation (cvFlip, cvCvtColor, cvEqualizeHist) by passing it to cvShowImage; it's possible that the result of one of these operations is the grey image.
Also, you don't have to load the Haar classifier each time you try to find a face; load it once, at the beginning. Operations on files are slow, so this should make your code faster, as in the sketch below.
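A sketch of that reordering; only the changed parts of main are shown, and detectFacialFeatures would no longer call cvLoad or cvCreateMemStorage per frame:

/* Load the classifier and allocate storage once, before the capture loop. */
cascade = ( CvHaarClassifierCascade* )cvLoad( face_cascade, 0, 0, 0 );
storage = cvCreateMemStorage(0);
if( !cascade || !storage )
    return -1;

CvCapture* cv_cap = cvCreateCameraCapture(0);
for(;;) {
    IplImage* color_img = cvQueryFrame(cv_cap);
    if( !color_img )
        break;
    detectFacialFeatures(color_img);   /* now only converts, equalizes, detects */
    cvShowImage("Video", color_img);
    if( cvWaitKey(10) == 27 )
        break;
}
cvReleaseCapture( &cv_cap );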

OpenCV usage: cvFindDominantPoints

Does anyone know how to use the cvFindDominantPoints API of OpenCV? I basically have a 1-channel, binary image from which I get a set of contours. Judging from the image, I seem to be getting the correct contours. Now I am selecting one of these contours to get the dominant points of. This contour has about 60 vertices. However, the API call to cvFindDominantPoints is giving me a sequence of points (about 15) that do not even lie on the contour; they are quite far from it. Any insight?
My usage:
CvSeq *dominantpoints = cvFindDominantPoints(targetSeq, tristorage, CV_DOMINANT_IPAN, 7, 9, 9, 150);
#include "cv.h"
#include "highgui.h"
CvSeq* contours = 0;
CvSeq* dps = 0;
int main( int argc, char** argv )
{
int i, idx;
CvPoint p;
CvMemStorage* storage_ct = cvCreateMemStorage(0);
CvMemStorage* storage_dp = cvCreateMemStorage(0);
IplImage* img = cvLoadImage("contour.bmp", CV_LOAD_IMAGE_GRAYSCALE);
cvNamedWindow( "image" );
cvShowImage( "image", img );
cvFindContours( img, storage_ct, &contours, sizeof(CvContour),
CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE );
dps = cvFindDominantPoints( contours, storage_dp, CV_DOMINANT_IPAN, 7, 20, 9, 150 );
contours = cvApproxPoly( contours, sizeof(CvContour), storage_ct, CV_POLY_APPROX_DP, 3, 1 );
printf("found %d DPs and %d Contours \n", dps->total, contours->total );
for ( i = 0; i < dps->total; i++)
{
idx = *(int *) cvGetSeqElem(dps, i);
p = *(CvPoint *) cvGetSeqElem(contours, idx);
cvDrawCircle( img, p , 1, cvScalarAll(255) );
printf("%d %d %d\n", idx, p.x, p.y);
}
cvDrawContours(img, contours, cvScalarAll(100), cvScalarAll(200), 100 );
cvNamedWindow( "contours" );
cvShowImage( "contours", img );
cvWaitKey(0);
cvReleaseMemStorage( &storage_ct );
cvReleaseMemStorage( &storage_dp );
cvReleaseImage( &img );
return 0;
}
