How to find the closest object to the camera? - opencv

I am working on an algorithm which can find the point where an object is closest to the camera. I have found some help online using meanshift and have pasted the code below. The way the code works is: it uses a track_window to track the closest object. The problem is that I don't know how to find the location of the closest object.
What I have tried:
I can make the track_window sweep the full image and thus narrow down the area of interest, but that takes too long and I still won't know the exact location.
Could someone help me find the closest object to the camera?
Here is my code:
/*
* back_project.cpp
*
* Created on: Mar 8, 2015
* Author: sushrut
*/
#include "backProject.h"
using namespace cv;
using namespace std;
/// Global Variables
Mat src; Mat hsv; Mat hue;
int bins = 25;
Rect track_window = cv::Rect(250, 250, 50, 50);
int x=1, y=1, wr=1, hr=1;
/// Function Headers
void Hist_and_Backproj(int, void* );
/** @function main */
int back_proj_main()
{
VideoCapture cap(0);
if(!cap.isOpened())
{
cout << "Cannot open the first web cam" << endl;
return -1;
}
while(true)
{
Mat src;
bool bSuccess = cap.read(src);
if(!bSuccess)
{
cout << "Cannot read a frame from video stream 1" << endl;
break;
}
/// Transform it to HSV
cvtColor( src, hsv, CV_BGR2HSV );
/// Use only the Hue value
hue.create( hsv.size(), hsv.depth() );
int ch[] = { 0, 0 };
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
/// Create Trackbar to enter the number of bins
const char* window_image = "Source image";
namedWindow( window_image, CV_WINDOW_AUTOSIZE );
createTrackbar("* Hue bins: ", window_image, &bins, 180, Hist_and_Backproj );
Hist_and_Backproj(0, 0);
// create a rectangle on the image
rectangle(src, Point(x,y), Point(x+wr, y+hr), Scalar(0, 250, 0));
/// Show the image
imshow( window_image, src );
if (waitKey(30) == 27) //wait for 'esc' key press for 30ms. If 'esc' key is pressed, break loop
{
cout << "esc key is pressed by user" << endl;
break;
}
}
// TODO release the capture
return 0;
}
/**
* @function Hist_and_Backproj
* @brief Callback to Trackbar
*/
void Hist_and_Backproj(int, void* )
{
MatND hist;
int histSize = MAX( bins, 2 );
float hue_range[] = { 0, 180 };
const float* ranges = { hue_range };
/// Get the Histogram and normalize it
calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
/// Get Backprojection
MatND backproj;
calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );
/// Draw the backproj
imshow( "BackProj", backproj );
// code to create a meanshift
meanShift(backproj,track_window, TermCriteria(TermCriteria::COUNT|TermCriteria::EPS, 10, 1));
// update the values of the rectangle
x = track_window.x;
y = track_window.y;
wr = track_window.width;
hr = track_window.height;
cout << "values: " << x << ", " << y << ", " << wr << ", " << hr << endl;
/// Draw the histogram
int w = 400; int h = 400;
int bin_w = cvRound( (double) w / histSize );
Mat histImg = Mat::zeros( w, h, CV_8UC3 );
for( int i = 0; i < bins; i ++ )
{ rectangle( histImg, Point( i*bin_w, h ), Point( (i+1)*bin_w, h - cvRound( hist.at<float>(i)*h/255.0 ) ), Scalar( 0, 0, 255 ), -1 ); }
imshow( "Histogram", histImg );
}
References:
http://www.opencv.org.cn/opencvdoc/2.3.1/html/doc/tutorials/imgproc/histograms/back_projection/back_projection.html

Generally, it is impossible to determine which object is closest to the camera if you have only one frame and a single camera-object relative position.
You should process a video sequence and track feature points to determine the positions of objects in the coordinate frame of the camera.
You should study some material on multiple view geometry, for example the lectures of Professor Daniel Cremers on YouTube. The book by Hartley and Zisserman is also very good.
Update after reading the comment
It is unclear what points you mean. Are they pixel coordinates of the object in the image or points in 3D Euclidean space?
In the first case you can proceed as follows. Back projection returns an image in which each pixel contains the probability of belonging to the object. You can threshold this image to throw out very low probabilities, filter out noise, and refine the result with morphological operators (erosion, closing), thus getting a blob of pixels that most probably belongs to your object. Then you can find its bounding rectangle. If you do not want to use back projection because it doesn't always find the object, you can use the 2D feature detectors and matchers from OpenCV. They can estimate the locations of some "extreme" points belonging to the object.
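If it helps, here is a minimal sketch of that pipeline, assuming backproj is the single-channel back-projection image computed above; the threshold value and kernel size are arbitrary choices you would tune:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Hypothetical helper: turn a back-projection image into the bounding box of
// the most probable object blob. Returns an empty Rect if nothing survives.
Rect objectFromBackproj(const Mat& backproj)
{
    Mat mask;
    // 1. Throw away very low probabilities.
    threshold(backproj, mask, 50, 255, THRESH_BINARY);
    // 2. Filter noise and close small holes with morphological operators.
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    erode(mask, mask, kernel);
    morphologyEx(mask, mask, MORPH_CLOSE, kernel);
    // 3. Keep the largest remaining blob and return its bounding rectangle.
    std::vector<std::vector<Point> > contours;
    findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    double bestArea = 0;
    Rect best;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double a = contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = boundingRect(contours[i]); }
    }
    return best;
}
The returned rectangle could then be used, for example, to initialize track_window for meanShift instead of the hard-coded one.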
In the second case the task looks like SLAM (Simultaneous Localization and Mapping). It is a more complex subject, and I again refer you to multiple view geometry and SLAM courses. This task also relies heavily on 2D feature points.

Related

Regarding a specific Object Detection in OpenCV using WebCam and comparing it with an input Image

I am new to OpenCV and want to develop a program which takes the camera input and compares it with a known image of an object, supplied as a .jpg file. If the webcam input matches the fed-in image up to a certain level of accuracy, a message should be displayed saying that the required object has been found.
E.g.: if I hold a computer cable in front of the webcam, it needs to be detected and compared to the image of the computer cable I have fed into the program.
I've tried many techniques and find template matching to be effective, as mentioned in the following link:
Real-time template matching - OpenCV, C++
However, after drawing the rectangle and getting roiImg, I want to compare its likeness with a known image on my disk (in the OpenCV working directory). For this I am trying to convert roiImg and my other images to HSV format and get 4 values according to the histogram comparison methods.
I have tried to combine the two pieces of code, but it doesn't seem to work: roiImg is created at runtime and cannot be compared with the other two images read with imread.
#include <iostream>
#include "opencv2/opencv.hpp"
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <sstream>
using namespace cv;
using namespace std;
Point point1, point2; /* vertical points of the bounding box */
int drag = 0;
Rect rect; /* bounding box */
Mat img, roiImg; /* roiImg - the part of the image in the bounding box */
int select_flag = 0;
bool go_fast = false;
Mat mytemplate;
Mat src_base, hsv_base;
Mat src_test1, hsv_test1;
Mat src_test2, hsv_test2;
Mat hsv_half_down;
///------- template matching -----------------------------------------------------------------------------------------------
Mat TplMatch( Mat &img, Mat &mytemplate )
{
Mat result;
matchTemplate( img, mytemplate, result, CV_TM_SQDIFF_NORMED );
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
return result;
}
///------- Localizing the best match with minMaxLoc ------------------------------------------------------------------------
Point minmax( Mat &result )
{
double minVal, maxVal;
Point minLoc, maxLoc, matchLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
matchLoc = minLoc;
return matchLoc;
}
///------- tracking --------------------------------------------------------------------------------------------------------
void track()
{
if (select_flag)
{
//roiImg.copyTo(mytemplate);
// select_flag = false;
go_fast = true;
}
// imshow( "mytemplate", mytemplate ); waitKey(0);
Mat result = TplMatch( img, mytemplate );
Point match = minmax( result );
rectangle( img, match, Point( match.x + mytemplate.cols , match.y + mytemplate.rows ), CV_RGB(255, 255, 255), 0.5 );
std::cout << "match: " << match << endl;
/// latest match is the new template
Rect ROI = cv::Rect( match.x, match.y, mytemplate.cols, mytemplate.rows );
roiImg = img( ROI );
roiImg.copyTo(mytemplate);
imshow( "roiImg", roiImg ); //waitKey(0);
//Compare the roiImg with a know image to calculate resemblence
/*Method Base - Base Base - Half Base - Test 1 Base - Test 2
Correlation 1.000000 0.930766 0.182073 0.120447
Chi-square 0.000000 4.940466 21.184536 49.273437
Intersection 24.391548 14.959809 3.889029 5.775088
Bhattacharyya 0.000000 0.222609 0.646576 0.801869
For the Correlation and Intersection methods, the higher the metric, the more accurate the match. As we can see,
the match base-base is the highest of all as expected. Also we can observe that the match base-half is the second best match (as we predicted).
For the other two metrics, the less the result, the better the match. We can observe that the matches between the test 1 and test 2 with respect
to the base are worse, which again, was expected.)*/
src_base = imread("roiImg");
src_test1 = imread("Samarth.jpg");
src_test2 = imread("Samarth2.jpg");
//double l2_norm = cvNorm( src_base, src_test1 );
/// Convert to HSV
cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );
cvtColor( src_test2, hsv_test2, COLOR_BGR2HSV );
hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ), Range( 0, hsv_base.cols - 1 ) );
/// Using 50 bins for hue and 60 for saturation
int h_bins = 50; int s_bins = 60;
int histSize[] = { h_bins, s_bins };
// hue varies from 0 to 179, saturation from 0 to 255
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };
const float* ranges[] = { h_ranges, s_ranges };
// Use the 0-th and 1-st channels
int channels[] = { 0, 1 };
/// Histograms
MatND hist_base;
MatND hist_half_down;
MatND hist_test1;
MatND hist_test2;
/// Calculate the histograms for the HSV images
calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );
calcHist( &hsv_half_down, 1, channels, Mat(), hist_half_down, 2, histSize, ranges, true, false );
normalize( hist_half_down, hist_half_down, 0, 1, NORM_MINMAX, -1, Mat() );
calcHist( &hsv_test1, 1, channels, Mat(), hist_test1, 2, histSize, ranges, true, false );
normalize( hist_test1, hist_test1, 0, 1, NORM_MINMAX, -1, Mat() );
calcHist( &hsv_test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false );
normalize( hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat() );
/// Apply the histogram comparison methods
for( int i = 0; i < 4; i++ )
{
int compare_method = i;
double base_base = compareHist( hist_base, hist_base, compare_method );
double base_half = compareHist( hist_base, hist_half_down, compare_method );
double base_test1 = compareHist( hist_base, hist_test1, compare_method );
double base_test2 = compareHist( hist_base, hist_test2, compare_method );
printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base_half , base_test1, base_test2 );
}
printf( "Done \n" );
}
///------- MouseCallback function ------------------------------------------------------------------------------------------
void mouseHandler(int event, int x, int y, int flags, void *param)
{
if (event == CV_EVENT_LBUTTONDOWN && !drag)
{
/// left button clicked. ROI selection begins
point1 = Point(x, y);
drag = 1;
}
if (event == CV_EVENT_MOUSEMOVE && drag)
{
/// mouse dragged. ROI being selected
Mat img1 = img.clone();
point2 = Point(x, y);
rectangle(img1, point1, point2, CV_RGB(255, 0, 0), 3, 8, 0);
imshow("image", img1);
}
if (event == CV_EVENT_LBUTTONUP && drag)
{
point2 = Point(x, y);
rect = Rect(point1.x, point1.y, x - point1.x, y - point1.y);
drag = 0;
roiImg = img(rect);
roiImg.copyTo(mytemplate);
// imshow("MOUSE roiImg", roiImg); waitKey(0);
}
if (event == CV_EVENT_LBUTTONUP)
{
/// ROI selected
select_flag = 1;
drag = 0;
}
}
///------- Main() ----------------------------------------------------------------------------------------------------------
int main()
{
int k;
///open webcam
VideoCapture cap(0);
if (!cap.isOpened())
return 1;
/* ///open video file
VideoCapture cap;
cap.open( "Wildlife.wmv" );
if ( !cap.isOpened() )
{ cout << "Unable to open video file" << endl; return -1; }*/
/*
/// Set video to 320x240
cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);*/
cap >> img;
GaussianBlur( img, img, Size(7,7), 3.0 );
imshow( "image", img );
while (1)
{
cap >> img;
if ( img.empty() )
break;
// Flip the frame horizontally and add blur
cv::flip( img, img, 1 );
GaussianBlur( img, img, Size(7,7), 3.0 );
if ( rect.width == 0 && rect.height == 0 )
cvSetMouseCallback( "image", mouseHandler, NULL );
else
track();
imshow("image", img);
// waitKey(100); k = waitKey(75);
k = waitKey(go_fast ? 30 : 10000);
if (k == 27)
break;
}
return 0;
}
If you want to detect an object in a live feed, detecting the object in every frame is not efficient. You detect it once, and from then on you track it, so the process involves both detection and tracking.
For detection you have to segment the object from the rest of the scene. OpenCV provides many algorithms for segmenting an object from the background based on color (color-based detection); a sketch follows below. Other than color, you can use the object's shape to segment it from the background (shape-based segmentation).
You can use the Lucas-Kanade (LK) optical flow algorithm as a starting point for tracking.
Additionally, you can use template matching, CAMShift, the median flow tracker, etc., to obtain quick results. All of the above algorithms behave differently depending on scale changes of the object and lighting changes in the feed. OpenCV has sample programs for the algorithms above.
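For instance, the color-based detection step could look roughly like the following sketch; the HSV bounds and the minimum blob area are placeholder values you would tune for your object:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    Mat frame, hsv, mask;
    // Placeholder HSV range; adjust the bounds to the actual color of the object.
    Scalar lower(35, 80, 80), upper(85, 255, 255);
    while (cap.read(frame))
    {
        cvtColor(frame, hsv, CV_BGR2HSV);
        inRange(hsv, lower, upper, mask);            // keep only pixels inside the color range
        erode(mask, mask, Mat());                    // remove speckle noise
        dilate(mask, mask, Mat());
        std::vector<std::vector<Point> > contours;
        findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); i++)
            if (contourArea(contours[i]) > 500)      // ignore tiny blobs
                rectangle(frame, boundingRect(contours[i]), Scalar(0, 255, 0), 2);
        imshow("color-based detection", frame);
        if (waitKey(30) == 27) break;                // Esc quits
    }
    return 0;
}
Once a bounding box has been found this way, it can be handed to one of the trackers above instead of re-running the segmentation on every frame.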

How to get better results with OpenCV face recognition Module

I'm trying to use OpenCV's face recognition module to recognize 2 subjects from a video. I cropped 30 face images of the first subject and 20 face images of the second subject from the video and I use these as my training set.
I've tested all three approaches (Eigenfaces, Fisherfaces and LBP histograms), but I'm not getting good results with any of them. Sometimes the first subject is classified as the second subject and vice versa, sometimes false detections are classified as one of the two subjects, and sometimes other people in the video are classified as one of the two subjects.
How can I improve performance? Would enlarging the training set help improve the results? Are there any other packages I can consider that perform face recognition in C++? I think it should be an easy task, as I'm trying to recognize only two different subjects.
Here is my code (I'm using OpenCV 2.4.7 on Windows 8 with VS2012):
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/contrib/contrib.hpp"
#include <iostream>
#include <stdio.h>
#include <fstream>
#include <sstream>
#define EIGEN 0
#define FISHER 0
#define LBPH 1
using namespace std;
using namespace cv;
/** Function Headers */
void detectAndDisplay( Mat frame , int i,Ptr<FaceRecognizer> model);
static Mat toGrayscale(InputArray _src) {
Mat src = _src.getMat();
// only allow one channel
if(src.channels() != 1) {
CV_Error(CV_StsBadArg, "Only Matrices with one channel are supported");
}
// create and return normalized image
Mat dst;
cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
return dst;
}
static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
std::ifstream file(filename.c_str(), ifstream::in);
if (!file) {
string error_message = "No valid input file was given, please check the given filename.";
CV_Error(CV_StsBadArg, error_message);
}
string line, path, classlabel;
while (getline(file, line)) {
stringstream liness(line);
getline(liness, path, separator);
getline(liness, classlabel);
if(!path.empty() && !classlabel.empty()) {
images.push_back(imread(path, 0));
labels.push_back(atoi(classlabel.c_str()));
}
}
}
/** Global variables */
String face_cascade_name = "C:\\OIM\\code\\OIM2 - face detection\\Debug\\haarcascade_frontalface_alt.xml";
//String face_cascade_name = "C:\\OIM\\code\\OIM2 - face detection\\Debug\\NewCascade.xml";
//String face_cascade_name = "C:\\OIM\\code\\OIM2 - face detection\\Debug\\haarcascade_eye_tree_eyeglasses.xml";
String eyes_cascade_name = "C:\\OIM\\code\\OIM2 - face detection\\Debug\\haarcascade_eye_tree_eyeglasses.xml";
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;
string window_name = "Capture - Face detection";
RNG rng(12345);
/** @function main */
int main( int argc, const char** argv )
{
string fn_csv = "C:\\OIM\\faces_org.csv";
// These vectors hold the images and corresponding labels.
vector<Mat> images;
vector<int> labels;
// Read in the data. This can fail if no valid
// input filename is given.
try {
read_csv(fn_csv, images, labels);
} catch (cv::Exception& e) {
cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
// nothing more we can do
exit(1);
}
// Quit if there are not enough images for this demo.
if(images.size() <= 1) {
string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
CV_Error(CV_StsError, error_message);
}
// Get the height from the first image. We'll need this
// later in code to reshape the images to their original
// size:
int height = images[0].rows;
// The following lines create an Eigenfaces model for
// face recognition and train it with the images and
// labels read from the given CSV file.
// This here is a full PCA, if you just want to keep
// 10 principal components (read Eigenfaces), then call
// the factory method like this:
//
// cv::createEigenFaceRecognizer(10);
//
// If you want to create a FaceRecognizer with a
// confidence threshold, call it with:
//
// cv::createEigenFaceRecognizer(10, 123.0);
//
//Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
#if EIGEN
Ptr<FaceRecognizer> model = createEigenFaceRecognizer(10,2000000000);
#elif FISHER
Ptr<FaceRecognizer> model = createFisherFaceRecognizer(0, 200000000);
#elif LBPH
Ptr<FaceRecognizer> model =createLBPHFaceRecognizer(1,8,8,8,200000000);
#endif
model->train(images, labels);
Mat frame;
//-- 1. Load the cascades
if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; };
if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; };
// Get the frame rate
bool stop(false);
int count=1;
char filename[512];
for (int i=1;i<=517;i++){
sprintf(filename,"C:\\OIM\\original_frames2\\image%d.jpg",i);
Mat frame=imread(filename);
detectAndDisplay(frame,i,model);
waitKey(0);
}
return 0;
}
/** @function detectAndDisplay */
void detectAndDisplay( Mat frame ,int i, Ptr<FaceRecognizer> model)
{
std::vector<Rect> faces;
Mat frame_gray;
cvtColor( frame, frame_gray, CV_BGR2GRAY );
equalizeHist( frame_gray, frame_gray );
//-- Detect faces
//face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 1, 0|CV_HAAR_SCALE_IMAGE, Size(10, 10) );
for( size_t i = 0; i < faces.size(); i++ )
{
Rect roi = Rect(faces[i].x,faces[i].y,faces[i].width,faces[i].height);
Mat face=frame_gray(roi);
resize(face,face,Size(200,200));
int predictedLabel = -1;
double confidence = 0.0;
model->predict(face, predictedLabel, confidence);
//imshow("gil",face);
//waitKey(0);
#if EIGEN
int M=10000;
#elif FISHER
int M=500;
#elif LBPH
int M=300;
#endif
Point center( faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5 );
if ((predictedLabel==1)&& (confidence<M))
ellipse( frame, center, Size( faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar( 0, 0, 255 ), 4, 8, 0 );
if ((predictedLabel==0)&& (confidence<M))
ellipse( frame, center, Size( faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 0), 4, 8, 0 );
if (confidence>M)
ellipse( frame, center, Size( faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar( 0, 255, 0), 4, 8, 0 );
Mat faceROI = frame_gray( faces[i] );
std::vector<Rect> eyes;
//-- In each face, detect eyes
eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CV_HAAR_SCALE_IMAGE, Size(30, 30) );
for( size_t j = 0; j < eyes.size(); j++ )
{
Point center( faces[i].x + eyes[j].x + eyes[j].width*0.5, faces[i].y + eyes[j].y + eyes[j].height*0.5 );
int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 );
//circle( frame, center, radius, Scalar( 255, 0, 0 ), 4, 8, 0 );
}
}
//-- Show what you got
//imshow( window_name, frame );
char filename[512];
sprintf(filename,"C:\\OIM\\FaceRecognitionResults\\image%d.jpg",i);
imwrite(filename,frame);
}
Thanks in advance,
Gil.
First, as commented, increase the number of samples if possible. Also include the variations (illumination, slight pose changes, etc.) that you expect to occur in the video. However, especially for Eigenfaces/Fisherfaces, simply adding many more images will not increase performance by itself. Sadly, the best number of training samples depends on your data.
The more important point is that the difficulty of the problem depends entirely on your video. If your video contains variations in illumination and pose, then you can't expect purely appearance-based methods (e.g. Eigenfaces) or texture descriptors (LBP) to be successful. First, you might want to detect faces. Then:
You might want to estimate the face pose and warp it to frontal; check out the Active Appearance Model and Active Shape Model
Use histogram equalization to attenuate the illumination problem
Fitting an ellipse to the detected face region will help against background noise (a rough sketch of the last two steps follows below).
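A rough sketch of the last two steps (histogram equalization plus an elliptical mask), assuming faceGray is the grayscale face crop coming from detectMultiScale; the ellipse axes are values you would tune:
#include <opencv2/opencv.hpp>
using namespace cv;

// Hypothetical preprocessing applied to each detected face crop, both for the
// training images and for the faces passed to model->predict().
Mat preprocessFace(const Mat& faceGray)
{
    Mat face;
    resize(faceGray, face, Size(200, 200));   // same size as used for prediction above
    equalizeHist(face, face);                 // attenuate illumination differences
    // Keep only an elliptical region so background pixels do not enter the model.
    Mat mask = Mat::zeros(face.size(), CV_8UC1);
    ellipse(mask, Point(100, 100), Size(80, 95), 0, 0, 360, Scalar(255), -1);
    Mat masked = Mat::zeros(face.size(), face.type());
    face.copyTo(masked, mask);
    return masked;
}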
Of course, there are many other methods available in the literature; the steps above are implemented in OpenCV and commonly known.
Hope it helps.

Generating a bird's eye / top view with OpenCV

I'm trying to generate a bird's eye view from an image. For the camera intrinsics and distortions, I'm using hard-coded values that I retrieved from a driving simulator with a camera mounted on its roof.
The basis for the code is from "Learning OpenCV Computer Vision with the OpenCV Library", Pg 409.
When I run the code on an image containing a chessboard with 3 inner corners per row and 4 inner corners per column, my bird's eye view is upside down. I need the image to be turned into a right-side-up bird's eye view, because I need the homography matrix for another function call.
Here are the input and output images, and the code I'm using:
Input image:
Corners detected:
Output Image/bird's eye (upside down!):
The code:
#include <highgui.h>
#include <cv.h>
#include <cxcore.h>
#include <math.h>
#include <vector>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[]) {
if(argc != 4) return -1;
// INPUT PARAMETERS:
//
int board_w = atoi(argv[1]); //inner corners per row
int board_h = atoi(argv[2]); //inner corners per column
int board_n = board_w * board_h;
CvSize board_sz = cvSize( board_w, board_h );
//Hard coded intrinsics for the camera
Mat intrinsicMat = (Mat_<double>(3, 3) <<
418.7490, 0., 236.8528,
0.,558.6650,322.7346,
0., 0., 1.);
//Hard coded distortions for the camera
CvMat* distortion = cvCreateMat(1, 4, CV_32F);
cvmSet(distortion, 0, 0, -0.0019);
cvmSet(distortion, 0, 1, 0.0161);
cvmSet(distortion, 0, 2, 0.0011);
cvmSet(distortion, 0, 3, -0.0016);
IplImage* image = 0;
IplImage* gray_image = 0;
if( (image = cvLoadImage(argv[3])) == 0 ) {
printf("Error: Couldn’t load %s\n",argv[3]);
return -1;
}
gray_image = cvCreateImage( cvGetSize(image), 8, 1 );
cvCvtColor(image, gray_image, CV_BGR2GRAY );
// UNDISTORT OUR IMAGE
//
IplImage* mapx = cvCreateImage( cvGetSize(image), IPL_DEPTH_32F, 1 );
IplImage* mapy = cvCreateImage( cvGetSize(image), IPL_DEPTH_32F, 1 );
CvMat intrinsic (intrinsicMat);
//This initializes rectification matrices
//
cvInitUndistortMap(
&intrinsic,
distortion,
mapx,
mapy
);
IplImage *t = cvCloneImage(image);
// Rectify our image
//
cvRemap( t, image, mapx, mapy );
// GET THE CHESSBOARD ON THE PLANE
//
cvNamedWindow("Chessboard");
CvPoint2D32f* corners = new CvPoint2D32f[ board_n ];
int corner_count = 0;
int found = cvFindChessboardCorners(
image,
board_sz,
corners,
&corner_count,
CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS
);
if(!found){
printf("Couldn’t aquire chessboard on %s, "
"only found %d of %d corners\n",
argv[3],corner_count,board_n
);
return -1;
}
//Get Subpixel accuracy on those corners:
cvFindCornerSubPix(
gray_image,
corners,
corner_count,
cvSize(11,11),
cvSize(-1,-1),
cvTermCriteria( CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1 )
);
//GET THE IMAGE AND OBJECT POINTS:
// We will choose chessboard object points as (r,c):
// (0,0), (board_w-1,0), (0,board_h-1), (board_w-1,board_h-1).
//
CvPoint2D32f objPts[4], imgPts[4];
imgPts[0] = corners[0];
imgPts[1] = corners[board_w-1];
imgPts[2] = corners[(board_h-1)*board_w];
imgPts[3] = corners[(board_h-1)*board_w + board_w-1];
objPts[0].x = 0; objPts[0].y = 0;
objPts[1].x = board_w -1; objPts[1].y = 0;
objPts[2].x = 0; objPts[2].y = board_h -1;
objPts[3].x = board_w -1; objPts[3].y = board_h -1;
// DRAW THE POINTS in order: B,G,R,YELLOW
//
cvCircle( image, cvPointFrom32f(imgPts[0]), 9, CV_RGB(0,0,255), 3); //blue
cvCircle( image, cvPointFrom32f(imgPts[1]), 9, CV_RGB(0,255,0), 3); //green
cvCircle( image, cvPointFrom32f(imgPts[2]), 9, CV_RGB(255,0,0), 3); //red
cvCircle( image, cvPointFrom32f(imgPts[3]), 9, CV_RGB(255,255,0), 3); //yellow
// DRAW THE FOUND CHESSBOARD
//
cvDrawChessboardCorners(
image,
board_sz,
corners,
corner_count,
found
);
cvShowImage( "Chessboard", image );
// FIND THE HOMOGRAPHY
//
CvMat *H = cvCreateMat( 3, 3, CV_32F);
cvGetPerspectiveTransform( objPts, imgPts, H);
Mat homography = H;
cvSave("Homography.xml",H); //We can reuse H for the same camera mounting
/**********************GENERATING 3X4 MATRIX***************************/
// LET THE USER ADJUST THE Z HEIGHT OF THE VIEW
//
float Z = 23;
int key = 0;
IplImage *birds_image = cvCloneImage(image);
cvNamedWindow("Birds_Eye");
// LOOP TO ALLOW USER TO PLAY WITH HEIGHT:
//
// escape key stops
//
while(key != 27) {
// Set the height
//
CV_MAT_ELEM(*H,float,2,2) = Z;
// COMPUTE THE FRONTAL PARALLEL OR BIRD'S-EYE VIEW:
// USING HOMOGRAPHY TO REMAP THE VIEW
//
cvWarpPerspective(
image,
birds_image,
H,
CV_INTER_LINEAR | CV_WARP_INVERSE_MAP | CV_WARP_FILL_OUTLIERS
);
cvShowImage( "Birds_Eye", birds_image );
imwrite("/home/lee/bird.jpg", birds_image);
key = cvWaitKey();
if(key == 'u') Z += 0.5;
if(key == 'd') Z -= 0.5;
}
return 0;
}
The homography result seems correct. Since you're mapping the camera's z-axis to the world's y-axis, the image resulting from the bird's eye view (BEV) remap is upside down.
If you really need the BEV image oriented like the camera shot, you can use H' = Ty * Rx * H, where Rx is a 180 degree rotation around the x-axis, Ty is a translation along the y-axis and H is your original homography. The translation is required since the rotation remaps your old BEV onto the negative side of the y-axis.
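A rough sketch of composing those matrices with the C++ API; here H is the 3x3 homography as a cv::Mat and outHeight is an assumed output-image height (in pixels) used for the translation:
#include <opencv2/opencv.hpp>
using namespace cv;

// Hypothetical helper: flip a bird's-eye homography right side up.
Mat flipHomography(const Mat& H, double outHeight)
{
    Mat Hd;
    H.convertTo(Hd, CV_64F);
    // 180 degree rotation around the image x-axis: y -> -y.
    Mat Rx = (Mat_<double>(3, 3) <<
        1,  0, 0,
        0, -1, 0,
        0,  0, 1);
    // Translation along y so the flipped view lands back in positive coordinates.
    Mat Ty = (Mat_<double>(3, 3) <<
        1, 0, 0,
        0, 1, outHeight,
        0, 0, 1);
    return Ty * Rx * Hd;   // H' = Ty * Rx * H
}
The returned matrix can then be used for the perspective warp in place of the original H.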

OpenCV 2.4.X slow on square detection with WebCam vs OpenCV 2.1.X

I have tried to port the square detection sample to OpenCV 2.4.1-2.4.4, but the results seem very slow. I was keen to move to newer versions of OpenCV because of the new functionality they provide, but the code runs much slower than before.
My OpenCV code for versions 2.4.X is:
// The "Square Detector" program.
// It loads several images sequentially and tries to find squares in
// each image
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <math.h>
#include <string.h>
using namespace cv;
using namespace std;
int thresh = 50, N = 11;
const char* wndname = "Square Detection Demo";
// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
static double angle( Point pt1, Point pt2, Point pt0 )
{
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
// returns sequence of squares detected on the image.
// the sequence is stored in the specified memory storage
static void findSquares( const Mat& image, vector<vector<Point> >& squares )
{
squares.clear();
Mat pyr, timg, gray0(image.size(), CV_8U), gray;
// down-scale and upscale the image to filter out the noise
pyrDown(image, pyr, Size(image.cols/2, image.rows/2));
pyrUp(pyr, timg, image.size());
vector<vector<Point> > contours;
// find squares in every color plane of the image
for( int c = 0; c < 3; c++ )
{
int ch[] = {c, 0};
mixChannels(&timg, 1, &gray0, 1, ch, 1);
// try several threshold levels
for( int l = 0; l < N; l++ )
{
// hack: use Canny instead of zero threshold level.
// Canny helps to catch squares with gradient shading
if( l == 0 )
{
// apply Canny. Take the upper threshold from slider
// and set the lower to 0 (which forces edges merging)
Canny(gray0, gray, 0, thresh, 5);
// dilate canny output to remove potential
// holes between edge segments
dilate(gray, gray, Mat(), Point(-1,-1));
}
else
{
// apply threshold if l!=0:
// tgray(x,y) = gray(x,y) < (l+1)*255/N ? 255 : 0
gray = gray0 >= (l+1)*255/N;
}
// find contours and store them all as a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Point> approx;
// test each contour
for( size_t i = 0; i < contours.size(); i++ )
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);
// square contours should have 4 vertices after approximation
// relatively large area (to filter out noisy contours)
// and be convex.
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if( approx.size() == 4 &&
fabs(contourArea(Mat(approx))) > 1000 &&
isContourConvex(Mat(approx)) )
{
double maxCosine = 0;
for( int j = 2; j < 5; j++ )
{
// find the maximum cosine of the angle between joint edges
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
// if cosines of all angles are small
// (all angles are ~90 degrees) then write quadrangle
// vertices to resultant sequence
if( maxCosine < 0.3 )
squares.push_back(approx);
}
}
}
}
}
// the function draws all the squares in the image
static void drawSquares( Mat& image, const vector<vector<Point> >& squares )
{
for( size_t i = 0; i < squares.size(); i++ )
{
const Point* p = &squares[i][0];
int n = (int)squares[i].size();
polylines(image, &p, &n, 1, true, Scalar(0,255,0), 3, CV_AA);
}
imshow(wndname, image);
}
int main()
{
VideoCapture cap;
cap.open(0);
Mat frame,image;
namedWindow( "Square Detection Demo", 1 );
vector<vector<Point> > squares;
for(;;)
{
cap >> frame;
if( frame.empty() ){
break;
}
frame.copyTo(image);
if( image.empty() )
{
cout << "Couldn't load image" << endl;
continue;
}
findSquares(image, squares);
drawSquares(image, squares);
//imshow("Window", image);
int c = waitKey(1);
if( (char)c == 27 )
break;
}
return 0;
}
You can see that the code is a simple mix of webcam visualization and the squares sample code, both provided with OpenCV 2.4.X.
However, the equivalent code for OpenCV version 2.1, which I will put below, is a lot faster:
#include <cv.h>
#include <highgui.h>
int thresh = 50;
IplImage* img = 0;
IplImage* img0 = 0;
CvMemStorage* storage = 0;
// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
double angle( CvPoint* pt1, CvPoint* pt2, CvPoint* pt0 )
{
double dx1 = pt1->x - pt0->x;
double dy1 = pt1->y - pt0->y;
double dx2 = pt2->x - pt0->x;
double dy2 = pt2->y - pt0->y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
// returns sequence of squares detected on the image.
// the sequence is stored in the specified memory storage
CvSeq* findSquares4( IplImage* img, CvMemStorage* storage )
{
CvSeq* contours;
int i, c, l, N = 11;
CvSize sz = cvSize( img->width & -2, img->height & -2 );
IplImage* timg = cvCloneImage( img ); // make a copy of input image
IplImage* gray = cvCreateImage( sz, 8, 1 );
IplImage* pyr = cvCreateImage( cvSize(sz.width/2, sz.height/2), 8, 3 );
IplImage* tgray;
CvSeq* result;
double s, t;
// create empty sequence that will contain points -
// 4 points per square (the square's vertices)
CvSeq* squares = cvCreateSeq( 0, sizeof(CvSeq), sizeof(CvPoint), storage );
// select the maximum ROI in the image
// with the width and height divisible by 2
cvSetImageROI( timg, cvRect( 0, 0, sz.width, sz.height ));
//cvSetImageROI( timg, cvRect( 0,0,50, 50 ));
// down-scale and upscale the image to filter out the noise
cvPyrDown( timg, pyr, 7 );
cvPyrUp( pyr, timg, 7 );
tgray = cvCreateImage( sz, 8, 1 );
// find squares in every color plane of the image
for( c = 0; c < 3; c++ )
{
// extract the c-th color plane
cvSetImageCOI( timg, c+1 );
cvCopy( timg, tgray, 0 );
// try several threshold levels
for( l = 0; l < N; l++ )
{
// hack: use Canny instead of zero threshold level.
// Canny helps to catch squares with gradient shading
if( l == 0 )
{
// apply Canny. Take the upper threshold from slider
// and set the lower to 0 (which forces edges merging)
cvCanny( tgray, gray, 0, thresh, 5 );
// dilate canny output to remove potential
// holes between edge segments
cvDilate( gray, gray, 0, 1 );
}
else
{
// apply threshold if l!=0:
// tgray(x,y) = gray(x,y) < (l+1)*255/N ? 255 : 0
cvThreshold( tgray, gray, (l+1)*255/N, 255, CV_THRESH_BINARY );
}
// find contours and store them all as a list
cvFindContours( gray, storage, &contours, sizeof(CvContour),
CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0) );
// test each contour
while( contours )
{
// approximate contour with accuracy proportional
// to the contour perimeter
result = cvApproxPoly( contours, sizeof(CvContour), storage,
CV_POLY_APPROX_DP, cvContourPerimeter(contours)*0.02, 0 );
// square contours should have 4 vertices after approximation
// relatively large area (to filter out noisy contours)
// and be convex.
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if( result->total == 4 &&
cvContourArea(result,CV_WHOLE_SEQ,0) > 1000 &&
cvCheckContourConvexity(result) )
{
s = 0;
for( i = 0; i < 5; i++ )
{
// find minimum angle between joint
// edges (maximum of cosine)
if( i >= 2 )
{
t = fabs(angle(
(CvPoint*)cvGetSeqElem( result, i ),
(CvPoint*)cvGetSeqElem( result, i-2 ),
(CvPoint*)cvGetSeqElem( result, i-1 )));
s = s > t ? s : t;
}
}
// if cosines of all angles are small
// (all angles are ~90 degrees) then write quadrangle
// vertices to resultant sequence
if( s < 0.3 )
for( i = 0; i < 4; i++ )
cvSeqPush( squares,
(CvPoint*)cvGetSeqElem( result, i ));
}
// take the next contour
contours = contours->h_next;
}
}
}
// release all the temporary images
cvReleaseImage( &gray );
cvReleaseImage( &pyr );
cvReleaseImage( &tgray );
cvReleaseImage( &timg );
return squares;
}
// the function draws all the squares in the image
void drawSquares( IplImage* img, CvSeq* squares )
{
CvSeqReader reader;
IplImage* cpy = cvCloneImage( img );
int i;
// initialize reader of the sequence
cvStartReadSeq( squares, &reader, 0 );
// read 4 sequence elements at a time (all vertices of a square)
for( i = 0; i < squares->total; i += 4 )
{
CvPoint pt[4], *rect = pt;
int count = 4;
// read 4 vertices
CV_READ_SEQ_ELEM( pt[0], reader );
CV_READ_SEQ_ELEM( pt[1], reader );
CV_READ_SEQ_ELEM( pt[2], reader );
CV_READ_SEQ_ELEM( pt[3], reader );
// draw the square as a closed polyline
cvPolyLine( cpy, &rect, &count, 1, 1, CV_RGB(0,255,0), 3, CV_AA, 0 );
}
// show the resultant image
cvShowImage( "Squares", cpy );
cvReleaseImage( &cpy );
}
int main(int argc, char** argv){
// Create a window called Original Image with a default size.
cvNamedWindow("Original Image", CV_WINDOW_AUTOSIZE);
cvNamedWindow("Squares", CV_WINDOW_AUTOSIZE);
// Create the connection to the webcam.
CvCapture* capture = cvCreateCameraCapture(0);
if( !capture ){
throw "Error when reading steam_avi";
}
storage = cvCreateMemStorage(0);
while(true)
{
// Put the captured frame into the originalImg image.
img0 = cvQueryFrame(capture);
if(!img0){
break;
}
img = cvCloneImage( img0 );
// find and draw the squares
drawSquares( img, findSquares4( img, storage ) );
cvShowImage("Original Image", img0);
cvReleaseImage(&img);
// clear memory storage - reset free space position
cvClearMemStorage( storage );
// Wait for the user to press ESC to exit the infinite loop.
char c = cvWaitKey(10);
if( c == 27 ) break;
}
//cvReleaseImage(&img);
cvReleaseImage(&img0);
// clear memory storage - reset free space position
cvClearMemStorage( storage );
// Destroy the "Original Image" window.
cvDestroyWindow("Original Image");
cvDestroyWindow("Squares");
// Release the memory used by the capture variable.
cvReleaseCapture(&capture);
}
I am aware that I can use one colour channel for a 3x speed-up, and change other parameters to speed things up further, but I wonder why equivalent code gives such different execution times.
Is there anything basic which I am missing out on?
I have tried to put working code up for everyone to try, so as not to waste anybody's time with vague questions such as "OpenCV 2.4.X is slow".
I finally left out Canny and checked that the area of each square stayed below a certain value (less than 20% of the image area) so that unwanted squares were not detected, as sketched below. As for getting multiple results for the same square, I am not too bothered by that at the moment, as I can feed the detected squares in as possible template images for comparison. Now off to recognition of the image inside the square. Thanks Chris for at least reading this comment (I can't give you points for an answer since it was only a comment, but either way, thank you).
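For reference, a minimal sketch of that extra check, assuming it replaces the plain area test inside findSquares() and that frameSize is the size of the captured frame (the 20% bound is the value mentioned above):
#include <opencv2/imgproc/imgproc.hpp>
#include <math.h>
#include <vector>
using namespace cv;

// Hypothetical filter: accept a candidate quadrilateral only if it is convex,
// reasonably large, and covers less than 20% of the frame.
static bool acceptSquare(const std::vector<Point>& approx, const Size& frameSize)
{
    double area = fabs(contourArea(Mat(approx)));
    double maxArea = 0.2 * frameSize.width * frameSize.height;
    return approx.size() == 4 &&
           area > 1000 && area < maxArea &&
           isContourConvex(Mat(approx));
}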

Image Sharpening Using Laplacian Filter

I was trying sharpening on a standard image from Gonzalez's book. Below is some code that I have tried, but it doesn't get close to the result of the sharpened image.
cvSmooth(grayImg, grayImg, CV_GAUSSIAN, 3, 0, 0, 0);
IplImage* laplaceImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_16S, 1);
IplImage* abs_laplaceImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvLaplace(grayImg, laplaceImg, 3);
cvConvertScaleAbs(laplaceImg, abs_laplaceImg, 1, 0);
IplImage* dstImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
cvAdd(abs_laplaceImg, grayImg, dstImg, NULL);
Before Sharpening
My Sharpening Result
Desired Result
Absolute Laplace
I think the problem is that you are blurring the image before taking the 2nd derivative.
Here is working code with the C++ API (I'm using OpenCV 2.4.3). I also tried it in MATLAB and the result is the same.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int /*argc*/, char** /*argv*/) {
Mat img, imgLaplacian, imgResult;
//------------------------------------------------------------------------------------------- test, first of all
// now do it by hand
img = (Mat_<uchar>(4,4) << 0,1,2,3,4,5,6,7,8,9,0,11,12,13,14,15);
// first, the good result
Laplacian(img, imgLaplacian, CV_8UC1);
cout << "let opencv do it" << endl;
cout << imgLaplacian << endl;
Mat kernel = (Mat_<float>(3,3) <<
0, 1, 0,
1, -4, 1,
0, 1, 0);
int window_size = 3;
// now, reaaallly by hand
// note that, for avoiding padding, the result image will be smaller than the original one.
Mat frame, frame32;
Rect roi;
imgLaplacian = Mat::zeros(img.size(), CV_32F);
for(int y=0; y<img.rows-window_size/2-1; y++) {
for(int x=0; x<img.cols-window_size/2-1; x++) {
roi = Rect(x,y, window_size, window_size);
frame = img(roi);
frame.convertTo(frame, CV_32F);
frame = frame.mul(kernel);
float v = sum(frame)[0];
imgLaplacian.at<float>(y,x) = v;
}
}
imgLaplacian.convertTo(imgLaplacian, CV_8U);
cout << "dudee" << imgLaplacian << endl;
// a little bit less "by hand"..
// using cv::filter2D
filter2D(img, imgLaplacian, -1, kernel);
cout << imgLaplacian << endl;
//------------------------------------------------------------------------------------------- real stuffs now
img = imread("moon.jpg", 0); // load grayscale image
// ok, now try different kernel
kernel = (Mat_<float>(3,3) <<
1, 1, 1,
1, -8, 1,
1, 1, 1); // another, stronger approximation of the second derivative
// do the laplacian filtering as it is
// well, we need to convert everything to something deeper than CV_8U
// because the kernel has some negative values,
// and in general we can expect a Laplacian image with negative values
// BUT an 8-bit unsigned int (the one we are working with) can only hold values from 0 to 255
// so the possible negative numbers would be truncated
filter2D(img, imgLaplacian, CV_32F, kernel);
img.convertTo(img, CV_32F);
imgResult = img - imgLaplacian;
// convert back to 8bits gray scale
imgResult.convertTo(imgResult, CV_8U);
imgLaplacian.convertTo(imgLaplacian, CV_8U);
namedWindow("laplacian", CV_WINDOW_AUTOSIZE);
imshow( "laplacian", imgLaplacian );
namedWindow("result", CV_WINDOW_AUTOSIZE);
imshow( "result", imgResult );
while( true ) {
char c = (char)waitKey(10);
if( c == 27 ) { break; }
}
return 0;
}
Have fun!
I think the main problem lies in the fact that you do img + laplace, while img - laplace would give better results. I remember that img - 2*laplace was best, but I cannot find where I read that, probably in one of the books I read in university.
You need to do img - laplace instead of img + laplace.
laplace: f(x,y) = f(x-1,y) + f(x+1,y) + f(x,y-1) + f(x,y+1) - 4*f(x,y)
So, if you subtract the laplace from the original image, you can see that the minus sign in front of 4*f(x,y) gets negated and this term becomes positive.
You could also use a kernel with 5 in the center pixel instead of -4, to make the sharpening a one-step process instead of computing the laplace and then doing img - laplace. Why? Try deriving that yourself.
This would be the final kernel.
Mat kernel = (Mat_<float>(3,3) <<
0, -1, 0,
-1, 5, -1,
0, -1, 0);
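Spelled out with the laplace formula above, as a quick check of why the one-step kernel works:
sharpened(x,y) = f(x,y) - laplace(x,y)
= f(x,y) - [f(x-1,y) + f(x+1,y) + f(x,y-1) + f(x,y+1) - 4*f(x,y)]
= 5*f(x,y) - f(x-1,y) - f(x+1,y) - f(x,y-1) - f(x,y+1)
which is exactly the convolution the kernel above performs in a single pass.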
It is indeed a well-known result in image processing that if you subtract its Laplacian from an image, the edges are amplified, giving a sharper image.
Laplacian filter kernel algorithm: sharpened_pixel = 5 * current - left - right - up - down
So the code will look like this:
#include <opencv2/opencv.hpp>
using namespace cv;
void sharpen(const Mat& img, Mat& result)
{
result.create(img.size(), img.type());
//Process the inner pixels; the outer border pixels need additional processing (handled below)
for (int row = 1; row < img.rows-1; row++)
{
//Previous row of pixels
const uchar* previous = img.ptr<const uchar>(row-1);
//Current row being processed
const uchar* current = img.ptr<const uchar>(row);
//Next row of pixels
const uchar* next = img.ptr<const uchar>(row+1);
uchar *output = result.ptr<uchar>(row);
int ch = img.channels();
int starts = ch;
int ends = (img.cols - 1) * ch;
for (int col = starts; col < ends; col++)
{
//The output pointer advances in sync with the current row; the sharpening is applied to every channel value of every pixel, so the number of channels has to be taken into account.
*output++ = saturate_cast<uchar>(5 * current[col] - current[col-ch] - current[col+ch] - previous[col] - next[col]);
}
} //end loop
//Handle the boundary: the peripheral pixels are set to 0
result.row(0).setTo(Scalar::all(0));
result.row(result.rows-1).setTo(Scalar::all(0));
result.col(0).setTo(Scalar::all(0));
result.col(result.cols-1).setTo(Scalar::all(0));
}
int main()
{
Mat lena = imread("lena.jpg");
Mat sharpenedLena;
sharpen(lena, sharpenedLena);
imshow("lena", lena);
imshow("sharpened lena", sharpenedLena);
cvWaitKey();
return 0;
}
If you are lazier, have fun with the following.
int main()
{
Mat lena = imread("lena.jpg");
Mat sharpenedLena;
Mat kernel = (Mat_<float>(3, 3) << 0, -1, 0, -1, 5, -1, 0, -1, 0);
cv::filter2D(lena, sharpenedLena, lena.depth(), kernel);
imshow("lena", lena);
imshow("sharpened lena", sharpenedLena);
cvWaitKey();
return 0;
}
And the result looks like this.

Resources