OpenCV errors for iOS / detecting Hough Circles - ios

I have been trying for hours to run an Xcode project with OpenCV. I have built the source, imported it into the project and included
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
in the .pch file.
I followed the instructions from http://docs.opencv.org/trunk/doc/tutorials/introduction/ios_install/ios_install.html
Still I am getting many Apple Mach-O linker errors when I compile.
Undefined symbols for architecture i386:
"std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
Please help me, I am really lost.
UPDATE:
Errors are all fixed, and now I am trying to detect circles.
Mat src, src_gray;
cvtColor( image, src_gray, CV_BGR2GRAY );
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, image.rows/8, 200, 100, 0, 0 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
// circle center
circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
I am using the code above, however no circles are being drawn on the image.. is there something obvious that I am doing wrong?

Try the solution in my answer to this question...
How to resolve iOS Link errors with OpenCV
Also, on GitHub I have a couple of simple working samples with a recently built OpenCV framework.
NB - OpenCVSquares is simpler than OpenCVSquaresSL. The latter was adapted for Snow Leopard backwards compatibility - it contains two builds of the OpenCV framework and 3 targets, so you are better off using the simpler OpenCVSquares if it will run on your system.
To adapt OpenCVSquares to detect circles, I suggest that you start with the Hough Circles C++ sample from the OpenCV distro, and use it to adapt/replace CVSquares.cpp and CVSquares.h with, say, CVCircles.cpp and CVCircles.h.
The principles are exactly the same:
remove UI code from the C++; the UI is provided on the Obj-C side
transform the main() function into a static member function for the class declared in the header file. This should mirror in form an Objective-C message to the wrapper (which translates the Obj-C method to a C++ function call).
From the Objective-C side, you are passing a UIImage to the wrapper object, which:
converts the UIImage to a cv::Mat image
passes the Mat to a C++ class for processing
converts the result from Mat back to UIImage
returns the processed UIImage back to the Objective-C calling object (a sketch of these conversions follows below)
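If you don't want to write your own conversion categories, here is a minimal sketch of that round trip using the UIImageToMat/MatToUIImage helpers that ship with the OpenCV iOS framework (header opencv2/highgui/ios.h in 2.4.x, opencv2/imgcodecs/ios.h in 3.x). The category, the method name and the C++ call inside it are illustrative, not part of the sample project:
// CVWrapper+Processing.mm (Objective-C++) - illustrative category on the wrapper class
#import "CVWrapper.h"
#import <opencv2/opencv.hpp>
#import <opencv2/highgui/ios.h>   // UIImageToMat / MatToUIImage

@implementation CVWrapper (Processing)

+ (UIImage*) processedImage:(UIImage*) image    // hypothetical method name
{
    cv::Mat matImage;
    UIImageToMat(image, matImage);              // UIImage -> cv::Mat
    // call your static C++ member function here,
    // e.g. matImage = CVCircles::detectedCirclesInImage(matImage);
    return MatToUIImage(matImage);              // cv::Mat -> UIImage
}

@end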
update
The adapted houghcircles.cpp should look something like this at its most basic (I've replaced the CVSquares class with a CVCircles class):
cv::Mat CVCircles::detectedCirclesInImage (cv::Mat img)
{
//expects a grayscale image on input
//returns a colour image on output
Mat cimg;
medianBlur(img, img, 5);
cvtColor(img, cimg, CV_GRAY2RGB);
vector<Vec3f> circles;
HoughCircles(img, circles, CV_HOUGH_GRADIENT, 1, 10,
100, 30, 1, 60 // change the last two parameters
// (min_radius & max_radius) to detect larger circles
);
for( size_t i = 0; i < circles.size(); i++ )
{
Vec3i c = circles[i];
circle( cimg, Point(c[0], c[1]), c[2], Scalar(255,0,0), 3, CV_AA);
circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, CV_AA);
}
return cimg;
}
Note that the input parameters are reduced to one - the input image - for simplicity. Shortly I will post a sample on GitHub which will include some parameters tied to slider controls in the iOS UI, but you should get this version working first.
As the function signature has changed, you should follow it up the chain...
Alter the houghcircles.h class definition:
static cv::Mat detectedCirclesInImage (const cv::Mat image);
Modify the CVWrapper class to accept a similarly-structured method which calls detectedCirclesInImage
+ (UIImage*) detectedCirclesInImage:(UIImage*) image
{
UIImage* result = nil;
cv::Mat matImage = [image CVGrayscaleMat];
matImage = CVCircles::detectedCirclesInImage (matImage);
result = [UIImage imageWithCVMat:matImage];
return result;
}
Note that we are converting the input UIImage to grayscale, as the houghcircles function expects a grayscale image on input. Take care to pull the latest version of my GitHub project; I found an error in the CVGrayscaleMat category which is now fixed. The output image is colour (colour applied to the grayscale input image to pick out the found circles).
If you want your input and output images in colour, you just need to ensure that you make a grayscale conversion of your input image for sending to HoughCircles() - e.g. cvtColor(input_image, gray_image, CV_RGB2GRAY); - and apply your found circles to the colour input image (which becomes your return image).
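For example, here is a minimal sketch of that colour-in/colour-out variant, using the same parameters as above (the method name is illustrative):
cv::Mat CVCircles::detectedCirclesInColourImage (cv::Mat img)
{
    //expects a colour (RGB) image on input, returns the same image with circles drawn on it
    Mat gray;
    cvtColor(img, gray, CV_RGB2GRAY);   //grayscale copy used only for detection
    medianBlur(gray, gray, 5);

    vector<Vec3f> circles;
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 10, 100, 30, 1, 60);

    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        //draw onto the original colour image, not the grayscale copy
        circle( img, Point(c[0], c[1]), c[2], Scalar(255,0,0), 3, CV_AA);
        circle( img, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, CV_AA);
    }
    return img;
}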
Finally in your CVViewController, change your messages to CVWrapper to conform to this new signature:
UIImage* image = [CVWrapper detectedCirclesInImage:self.image];
If you follow all of these details your project will produce circle-detected results.
update 2
OpenCVCircles is now on GitHub, with sliders to adjust the HoughCircles() parameters.

Related

Improving Tesseract OCR Quality Fails

I am currently using Tesseract to scan receipts. The quality wasn't good, so I read this article on how to improve it: https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality#noise-removal. I implemented resizing, deskewing (aligning), and Gaussian blur, but none of them seem to have a positive effect on the accuracy of the OCR except the deskewing. Here is my code for resizing and Gaussian blur. Am I doing anything wrong? If not, what else can I do to help?
Code:
+(UIImage *) prepareImage: (UIImage *)image{
//converts UIImage to Mat format
Mat im = cvMatWithImage(image);
//grayscale image
Mat gray;
cvtColor(im, gray, CV_BGR2GRAY);
//deskews text
//did not provide code because I know it works
Mat preprocessed = preprocess2(gray);
double skew = hough_transform(preprocessed, im);
Mat rotated = rot(im,skew* CV_PI/180);
//resize image
Mat scaledImage = scaleImage(rotated, 2);
//Gaussian Blur
GaussianBlur(scaledImage, scaledImage, cv::Size(1, 1), 0, 0);
return UIImageFromCVMat(scaledImage);
}
// Organization -> Resizing
Mat scaleImage(Mat mat, double factor){
Mat resizedMat;
double width = mat.cols;
double height = mat.rows;
double aspectRatio = width/height;
resize(mat, resizedMat, cv::Size(width*factor*aspectRatio, height*factor*aspectRatio));
return resizedMat;
}
Receipt:
If you read the Tesseract documentation you will see that the Tesseract engine works best with text in a single line inside a square. Passing it the whole receipt image reduces the engine's accuracy. What you need to do is use the iOS Core Image class CITextFeature to detect the text in your receipt as multiple blocks of images, and only then pass those images to Tesseract for processing.
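A rough sketch of that detection step (assuming iOS 9+, where CIDetectorTypeText is available; the function name is illustrative, and you would still need to crop each detected block out of the receipt before handing it to Tesseract):
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

//returns the bounding boxes of the text blocks found in the receipt image
//note: CIDetector reports bounds in Core Image coordinates (origin bottom-left),
//so flip/convert them before cropping the UIImage
NSArray<NSValue *> *textBlockBounds(UIImage *receipt)
{
    CIImage *ciImage = [CIImage imageWithCGImage:receipt.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    NSMutableArray<NSValue *> *bounds = [NSMutableArray array];
    for (CITextFeature *feature in [detector featuresInImage:ciImage]) {
        [bounds addObject:[NSValue valueWithCGRect:feature.bounds]];
    }
    return bounds;   //crop these regions and pass each crop to Tesseract
}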

How to get similarities and differences between two images using OpenCV

I want to compare two images and find the same and different parts of the images. I tried the cv::compare and cv::absdiff methods but am confused about which one is good for my case. Both show me different results. So how can I achieve my desired task?
Here's an example of how you can use cv::absdiff to find image similarities:
#include <opencv2/opencv.hpp>

int main()
{
cv::Mat input1 = cv::imread("../inputData/Similar1.png");
cv::Mat input2 = cv::imread("../inputData/Similar2.png");
cv::Mat diff;
cv::absdiff(input1, input2, diff);
cv::Mat diff1Channel;
// WARNING: this will weight channels differently! - instead you might want some different metric here. e.g. (R+B+G)/3 or MAX(R,G,B)
cv::cvtColor(diff, diff1Channel, CV_BGR2GRAY);
float threshold = 30; // pixel may differ only up to "threshold" to count as being "similar"
cv::Mat mask = diff1Channel < threshold;
cv::imshow("similar in both images" , mask);
// use similar regions in new image: Use black as background
cv::Mat similarRegions(input1.size(), input1.type(), cv::Scalar::all(0));
// copy masked area
input1.copyTo(similarRegions, mask);
cv::imshow("input1", input1);
cv::imshow("input2", input2);
cv::imshow("similar regions", similarRegions);
cv::imwrite("../outputData/Similar_result.png", similarRegions);
cv::waitKey(0);
return 0;
}
Using those two inputs, you'll observe this output (similar regions on a black background).
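If you also need the differing parts (the question asks for both), one straightforward option under the same thresholding approach is to invert the mask:
// pixels whose difference exceeds the threshold
cv::Mat differenceMask = diff1Channel >= threshold;   // equivalently: ~mask
cv::Mat differentRegions(input1.size(), input1.type(), cv::Scalar::all(0));
input1.copyTo(differentRegions, differenceMask);
cv::imshow("different regions", differentRegions);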

Mat to IplImage crash

The camera keeps on crashing when I run my code. I am trying to convert cv::Mat to IplImage.
cv::Mat canvas(320, 240, CV_8UC3, Scalar(255,255,255));
IplImage test =canvas;
while(true )
{
canvas =cvQueryFrame(capture);
imgScribble = cvCreateImage(cvGetSize(&test), 8, 3);
IplImage* imgYellowThresh1 = GetThresholdedImage1(&test);
cvAdd(&test,imgScribble,&test);
cvShowImage("video", &test);
This is the only line that uses the C++ API, so I assume you want to use the C API instead:
cv::Mat canvas(320, 240, CV_8UC3, Scalar(255,255,255));
I have used OpenCV for quite a while now and I've always declared IplImage*, never IplImage. As a rule of thumb, the * always goes after IplImage:
IplImage test = canvas;
This will become:
//although why you need to clone a newly created
//blank image is a valid concern
IplImage* canvas = cvCreateImage(....);
IplImage* test = cvCloneImage(canvas);
cvZero(test);
//don't forget to release resources at some point
cvReleaseImage(&canvas);
cvReleaseImage(&test);
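Putting that together, here is a minimal sketch of the whole capture loop using only the C API (assuming GetThresholdedImage1 takes and returns an IplImage*, as in your code):
#include <opencv2/opencv.hpp>

IplImage* GetThresholdedImage1(IplImage* img);   // your existing function

int main()
{
    CvCapture* capture = cvCaptureFromCAM(0);
    cvNamedWindow("video", CV_WINDOW_AUTOSIZE);

    IplImage* imgScribble = NULL;

    while (true)
    {
        IplImage* frame = cvQueryFrame(capture);   // do NOT release this: it is owned by capture
        if (!frame) break;

        if (!imgScribble)
        {
            imgScribble = cvCreateImage(cvGetSize(frame), 8, 3);
            cvZero(imgScribble);
        }

        IplImage* imgYellowThresh = GetThresholdedImage1(frame);   // your function

        cvAdd(frame, imgScribble, frame);
        cvShowImage("video", frame);

        cvReleaseImage(&imgYellowThresh);
        if (cvWaitKey(10) == 27) break;   // Esc to quit
    }

    cvReleaseImage(&imgScribble);
    cvReleaseCapture(&capture);
    cvDestroyAllWindows();
    return 0;
}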

Multiple Face Detection

I have a code in OpenCV (in C++) which uses "haarcascade_mcs_upperbody.xml" to detect upper body.
It detects a single upper body. How can I make it detect multiple upper bodies?
I think CV_HAAR_FIND_BIGGEST_OBJECT is detecting only the biggest object, but I do not know how to solve this issue.
The code goes like this:
int main(int argc, const char** argv)
{
CascadeClassifier body_cascade;
body_cascade.load("haarcascade_mcs_upperbody.xml");
VideoCapture captureDevice;
captureDevice.open(0);
Mat captureFrame;
Mat grayscaleFrame;
namedWindow("outputCapture", 1);
//create a loop to capture and find faces
while(true)
{
//capture a new image frame
captureDevice>>captureFrame;
//convert captured image to gray scale and equalize
cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
equalizeHist(grayscaleFrame, grayscaleFrame);
//create a vector array to store the face found
std::vector<Rect> bodies;
//find faces and store them in the vector array
body_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3,
CV_HAAR_FIND_BIGGEST_OBJECT|CV_HAAR_SCALE_IMAGE, Size(30,30));
//draw a rectangle for all found faces in the vector array on the original image
for(int i = 0; i < faces.size(); i++)
{
Point pt1(bodies[i].x + bodies[i].width, bodies[i].y + bodies[i].height);
Point pt2(bodies[i].x, bodies[i].y);
rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
}
//print the output
imshow("outputCapture", captureFrame);
//pause for 33ms
waitKey(33);
}
return 0;
}
It seems there is some inconsistency in your code, since the faces vector passed to detectMultiScale is not declared anywhere, but I assume it is the std::vector<Rect> you declared as bodies.
detectMultiScale stores all detected objects in the faces vector. Are you sure it contains only one object?
Try removing the CV_HAAR_FIND_BIGGEST_OBJECT flag, because you want all objects to be detected, and not only the biggest one.
Also, make sure you set the minSize and maxSize parameters correctly (see documentation), since those parameters determine the minimal and maximal detectable object sizes.
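For example, a corrected call could look like this (keeping your variable names; the minSize/maxSize values are only illustrative and should be tuned to your scene):
std::vector<Rect> bodies;
//no CV_HAAR_FIND_BIGGEST_OBJECT, so all matches are returned
body_cascade.detectMultiScale(grayscaleFrame, bodies, 1.1, 3,
                              CV_HAAR_SCALE_IMAGE,
                              Size(30, 30),      //minSize: smallest upper body to report
                              Size(300, 300));   //maxSize: largest upper body to report

for (size_t i = 0; i < bodies.size(); i++)
{
    Point pt1(bodies[i].x + bodies[i].width, bodies[i].y + bodies[i].height);
    Point pt2(bodies[i].x, bodies[i].y);
    rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
}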

OpenCV C++/Obj-C: goodFeaturesToTrack inside specific blob

Is there a quick solution to specify the ROI only within the contours of the blob I'm interested in?
My ideas so far:
Using the boundingRect, but it contains too much stuff I don't want to analyse.
Applying goodFeaturesToTrack to the whole image and then looping through the output coordinates to eliminate the ones outside my blob's contour.
Thanks in advance!
EDIT
I found what I need: cv::pointPolygonTest() seems to be the right thing, but I'm not sure how to implement it …
Here's some code:
// ...
IplImage forground_ipl = result;
IplImage *labelImg = cvCreateImage(forground.size(), IPL_DEPTH_LABEL, 1);
CvBlobs blobs;
bool found = cvb::cvLabel(&forground_ipl, labelImg, blobs);
IplImage *imgOut = cvCreateImage(cvGetSize(&forground_ipl), IPL_DEPTH_8U, 3);
if (found) {
cvb::CvBlob *greaterBlob = blobs[cvb::cvGreaterBlob(blobs)];
cvb::cvRenderBlob(labelImg, greaterBlob, &forground_ipl, imgOut);
CvContourPolygon *polygon = cvConvertChainCodesToPolygon(&greaterBlob->contour);
}
"polygon" contains the contour I need.
goodFeaturesToTrack is implemented this way:
- (std::vector<cv::Point2f>)pointsFromGoodFeaturesToTrack:(cv::Mat &)_image
{
std::vector<cv::Point2f> corners;
cv::goodFeaturesToTrack(_image,corners, 100, 0.01, 10);
return corners;
}
So next I need to loop through the corners and check each point with cv::pointPolygonTest(), right?
You can create a mask over your interest region:
EDIT
How to make a mask:
Mat mask(origImg.size(), CV_8UC1);
mask.setTo(Scalar::all(0));
// here I assume your contour is extracted with findContours,
// and is stored in a vector<vector<Point>>
// and that you know which contour is the blob
// if it's not the case, use fillPoly instead of drawContours();
Scalar color(255,255,255); // white; actually, the mask is single-channel
drawContours(mask, contours, contourIdx, color, CV_FILLED); // CV_FILLED so the whole blob area is masked, not just its outline
// fillPoly(Mat& img, const Point** pts, const int* npts,
// int ncontours, const Scalar& color)
And now you're ready to use it. BUT, look carefully at the result - I have heard about some bugs in OpenCV regarding the mask parameter for feature extractors, and I am not sure if it's about this one.
// note the mask parameter:
void goodFeaturesToTrack(InputArray image, OutputArray corners, int maxCorners,
double qualityLevel, double minDistance,
InputArray mask=noArray(), int blockSize=3,
bool useHarrisDetector=false, double k=0.04 )
This will also improve the speed of your application - goodFeaturesToTrack eats a huge amount of time, and if you apply it only to a smaller region of the image, the overall gain is significant.
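Putting the pieces together, a sketch of the call with the mask (variable names as above; contourIdx is assumed to be the index of your blob's contour, and origImg is assumed to be your single-channel grayscale input, as goodFeaturesToTrack requires):
// build the mask for the blob
cv::Mat mask(origImg.size(), CV_8UC1, cv::Scalar::all(0));
cv::drawContours(mask, contours, contourIdx, cv::Scalar(255), CV_FILLED);

// detect corners only inside the blob
std::vector<cv::Point2f> corners;
cv::goodFeaturesToTrack(origImg, corners, 100, 0.01, 10, mask);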
