How to recognize digits from an analog counter? - opencv

I'm trying to read the kWh numbers from the following counter, but tesseract OCR doesn't recognize the analog digits.
The question is: would it be a better idea to take photos of all of the digits (0 to 9) at different positions (i.e. when a digit is centered, when it has rolled partly up and the next digit is appearing, etc.) and try image recognition instead of text recognition?
As far as I understand, the difference is that image recognition compares photos, while text recognition... well, I don't know...
Any advice?

Since the counter is analog rather than digital, we have problems at the transitions: text/number recognition libraries cannot recognize something like that. The solution I've found is machine learning.
First I have the user take the picture so that the numbers occupy 70-80% of the image (in order to remove the unneeded details).
Then I look for parallel lines (if there are any) and cut out the part of the picture between them (if the distance between them is big enough).
After that I filter the picture (adjusting contrast and brightness, converting to grayscale) and then apply a filter that reduces the image to only two colours (#000000 (black) and #ffffff (white)), in order to find the contours more easily.
Then I find the contours using the Canny algorithm and filter them, removing the unneeded details.
After that I use the k-Nearest-Neighbour algorithm to recognize the digits.
But before I can recognize anything, I need to teach the algorithm what the digits look like and what they are; a training sketch is shown below.
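A minimal sketch of the k-NN training and prediction step could look like the code below (using OpenCV 2.x's CvKNearest; the digits/<digit>_<variant>.png file layout, the 10x10 sample size and k = 5 are illustrative assumptions, not part of my actual implementation):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/ml/ml.hpp>
#include <cstdio>
using namespace cv;
int main()
{
    const Size cell(10, 10); // every sample is resized to 10x10 pixels
    Mat trainData, responses;
    // Build the training set from pre-cropped digit photos. The naming
    // scheme digits/<digit>_<variant>.png is an assumption for this sketch.
    for (int digit = 0; digit <= 9; ++digit)
    {
        for (int variant = 0; variant < 5; ++variant)
        {
            char path[64];
            sprintf(path, "digits/%d_%d.png", digit, variant);
            Mat img = imread(path, 0); // load as grayscale
            if (img.empty()) continue;
            Mat sample;
            resize(img, sample, cell);
            sample.convertTo(sample, CV_32F);
            trainData.push_back(sample.reshape(1, 1)); // one row per sample
            responses.push_back(Mat(1, 1, CV_32F, Scalar(digit)));
        }
    }
    CvKNearest knn(trainData, responses);
    // Classify one unknown digit, e.g. an ROI cut out around a contour.
    Mat unknown = imread("query_digit.png", 0);
    if (unknown.empty()) return 1;
    Mat query;
    resize(unknown, query, cell);
    query.convertTo(query, CV_32F);
    float label = knn.find_nearest(query.reshape(1, 1), 5); // k = 5
    printf("recognized digit: %d\n", (int)label);
    return 0;
}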
I hope it was useful!

Maybe you are not configuring tesseract correctly. I wrote some code using it that solves your problem:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <tesseract/baseapi.h>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat input = imread("img.jpg");
    // rectangle containing just the kWh numbers
    Rect roi(358, 327, 532, 89);
    // convert to gray scale
    Mat input_gray;
    cvtColor(input(roi), input_gray, CV_BGR2GRAY);
    // threshold image
    Mat binary_img = input_gray > 200;
    // make a copy to use on findContours
    Mat copy_binary_img = binary_img.clone();
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    // identify each blob in order to eliminate the small ones
    findContours(copy_binary_img, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
    // keep only the small blobs (contour size of at most 20 points)...
    for (vector<vector<Point> >::iterator it = contours.begin(); it != contours.end(); )
    {
        if (it->size() > 20)
            it = contours.erase(it);
        else
            ++it;
    }
    // ...and erase them from the binary image by painting them black
    for (int i = 0; i < contours.size(); i++)
    {
        drawContours(binary_img, contours, i, 0, -1, 8, hierarchy, 0, Point());
    }
    // initialize tesseract OCR
    tesseract::TessBaseAPI tess;
    tess.Init(NULL, "eng", tesseract::OEM_DEFAULT);
    tess.SetVariable("tessedit_char_whitelist", "0123456789-.");
    tess.SetPageSegMode(tesseract::PSM_SINGLE_BLOCK);
    // set input: one channel, stride = image width (binary_img is continuous)
    tess.SetImage((uchar*)binary_img.data, binary_img.cols, binary_img.rows, 1, binary_img.cols);
    // get the text
    char* out = tess.GetUTF8Text();
    std::cout << out << std::endl;
    delete[] out;
    return 0;
}

Related

How to completely convert one side of the detected edge into white?

I have an RGB image (first figure) to which I have applied Canny edge detection, obtaining the edges shown in the second figure.
Now I want to completely fill the area above the edge with white, something like my target figure. As can be observed there, the white fill is not done properly and often it goes below the edge line.
Code preferred in Python
There are functions in OpenCV for this purpose, but I want to show my simple algorithmic approach:
Get the Canny output (you already have it).
Check every column of the image until you hit a white pixel (255).
When you hit a white pixel, which should belong to the edge, mark it.
Make the whole column white above that marked pixel.
Here are the results and the code:
Input:
Result:
Code:
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    // load as grayscale
    Mat img = imread("/ur/img/directory/img.png", 0);
    imshow("Before", img);
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            // when we hit an (almost) white pixel, whiten the whole
            // column above it
            if (img.at<uchar>(Point(j, i)) > 250)
            {
                for (int k = 0; k < i; k++)
                {
                    img.at<uchar>(Point(j, k)) = 255;
                }
            }
        }
    }
    imshow("Result", img);
    waitKey(0);
    return 0;
}

Detect location(s) of objects in an image

I have an input image that looks like this:
Notice that there are 6 boxes with black borders. I need to detect the location (upper-left corner) of each box. Normally I would use something like template matching, but the contents (the colored area inside the black border) of each box are distinct.
Is there a version of template matching that can be configured to ignore the inner area of each box? Is there an algorithm better suited to this situation?
Also note that I have to deal with several different resolutions, so the actual size of the boxes will differ from image to image. That said, the ratio (length to width) will always be the same.
Real-world example/input image per request:
You can do this by finding the bounding boxes of the connected components.
To find the connected components you can convert to grayscale and keep all pixels with value 0, i.e. the black borders of the rectangles.
Then you can find the contour of each connected component and compute its bounding box. Here are the red bounding boxes that were found:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
    // Load the image, as BGR
    Mat3b img = imread("path_to_image");
    // Convert to gray scale
    Mat1b gray;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // Get binary mask
    Mat1b binary = (gray == 0);
    // Find contours of connected components
    vector<vector<Point>> contours;
    findContours(binary.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    // For each contour
    for (int i = 0; i < contours.size(); ++i)
    {
        // Get the bounding box
        Rect box = boundingRect(contours[i]);
        // Draw the box on the original image in red
        rectangle(img, box, Scalar(0, 0, 255), 5);
    }
    // Show result
    imshow("Result", img);
    waitKey();
    return 0;
}
From the image posted in chat, this code produces:
In general, this code will correctly detect the cards, as well as some noise. You just need to remove the noise according to some criteria; among others: size or aspect ratio of the boxes, colors inside the boxes, or some texture information. An aspect-ratio filter is sketched below.
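For instance, a minimal sketch of an aspect-ratio filter (the target ratio of 1.4 and the 10% tolerance are made-up values for illustration; substitute the known length-to-width ratio of your boxes):
#include <opencv2/opencv.hpp>
#include <cmath>
// Returns true if the box has roughly the expected shape. targetRatio and
// the 10% tolerance are placeholders, not values from the question.
static bool looksLikeBox(const cv::Rect& box, double targetRatio = 1.4)
{
    double ratio = (double)box.width / box.height;
    return std::fabs(ratio - targetRatio) / targetRatio < 0.10;
}
In the loop above, you would then draw (or keep) a box only when looksLikeBox(box) returns true.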

How to detect humans using findContours based on the human shape?

I want to ask how to detect humans or pedestrians in the blobs produced by findContours(). I've been learning how to detect any object in the frame using findContours(), like this:
#include"stdafx.h"
#include<vector>
#include<iostream>
#include<opencv2/opencv.hpp>
#include<opencv2/core/core.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<opencv2/highgui/highgui.hpp>
int main(int argc, char *argv[])
{
cv::Mat frame;
cv::Mat fg;
cv::Mat blurred;
cv::Mat thresholded;
cv::Mat thresholded2;
cv::Mat result;
cv::Mat bgmodel;
cv::namedWindow("Frame");
cv::namedWindow("Background Model"
//,CV_WINDOW_NORMAL
);
//cv::resizeWindow("Background Model",400,300);
cv::namedWindow("Blob"
//,CV_WINDOW_NORMAL
);
//cv::resizeWindow("Blob",400,300);
cv::VideoCapture cap("campus3.avi");
cv::BackgroundSubtractorMOG2 bgs;
bgs.nmixtures = 3;
bgs.history = 1000;
bgs.varThresholdGen = 15;
bgs.bShadowDetection = true;
bgs.nShadowDetection = 0;
bgs.fTau = 0.5;
std::vector<std::vector<cv::Point>> contours;
for(;;)
{
cap >> frame;
cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);
bgs.operator()(blurred,fg);
bgs.getBackgroundImage(bgmodel);
cv::threshold(fg,thresholded,70.0f,255,CV_THRESH_BINARY);
cv::threshold(fg,thresholded2,70.0f,255,CV_THRESH_BINARY);
cv::Mat elementCLOSE(5,5,CV_8U,cv::Scalar(1));
cv::morphologyEx(thresholded,thresholded,cv::MORPH_CLOSE,elementCLOSE);
cv::morphologyEx(thresholded2,thresholded2,cv::MORPH_CLOSE,elementCLOSE);
cv::findContours(thresholded,contours,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
cv::cvtColor(thresholded2,result,CV_GRAY2RGB);
int cmin = 50;
int cmax = 1000;
std::vector<std::vector<cv::Point>>::iterator itc=contours.begin();
while (itc!=contours.end()) {
if (itc->size() > cmin && itc->size() < cmax){
std::vector<cv::Point> pts = *itc;
cv::Mat pointsMatrix = cv::Mat(pts);
cv::Scalar color( 0, 255, 0 );
cv::Rect r0= cv::boundingRect(pointsMatrix);
cv::rectangle(frame,r0,color,2);
++itc;
}else{++itc;}
}
cv::imshow("Frame",frame);
cv::imshow("Background Model",bgmodel);
cv::imshow("Blob",result);
if(cv::waitKey(30) >= 0) break;
}
return 0;
}
And now I want to know how to detect humans. Do I need to use HOG, or Haar? If so, how do I use them? Any tutorials to learn from? I'm so curious, and it's so much fun learning OpenCV! So addictive! :))
Anyway, I'll appreciate any help here, thanks. :)
This is a good start, with lots of enthusiasm. There is more than one way to do human detection in images/image sequences. I summarize a few below:
Since you are already extracting blobs that are supposed to be persons or objects, you can compare the features of these blobs with those of blobs produced by a human in the scene. Many people look at the shape of the head-shoulder region, the height and area of the blob, etc.
You can also look at research papers like this one. The earlier papers are easier to understand and to code up than the recent ones.
Instead of using background subtraction, you can also use an approach like Haar wavelet based detection. This is widely used for face detection, but OpenCV contains a model for upper-body detection too, as well as a built-in HOG pedestrian detector (which you asked about); a sketch of the latter follows below. You can also build your own models, as described here.
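For example, here is a minimal sketch of the built-in HOG pedestrian detector applied to a video like yours; the winStride/padding/scale values below are the commonly used defaults, so treat them as a starting point rather than tuned settings:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
int main()
{
    // HOG descriptor with the default people detector that ships with OpenCV
    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    VideoCapture cap("campus3.avi"); // same clip as in the question
    Mat frame;
    while (cap.read(frame))
    {
        std::vector<Rect> people;
        hog.detectMultiScale(frame, people, 0, Size(8, 8), Size(32, 32), 1.05, 2);
        // draw a green rectangle around each detected person
        for (size_t i = 0; i < people.size(); ++i)
            rectangle(frame, people[i], Scalar(0, 255, 0), 2);
        imshow("People", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}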
Have fun!

Multiple Face Detection

I have some OpenCV code (in C++) which uses "haarcascade_mcs_upperbody.xml" to detect the upper body.
It detects a single upper body. How can I make it detect multiple upper bodies?
I think CV_HAAR_FIND_BIGGEST_OBJECT is making it detect only the biggest object, but I don't know how to solve this issue.
The code goes like this:
int main(int argc, const char** argv)
{
    CascadeClassifier body_cascade;
    body_cascade.load("haarcascade_mcs_upperbody.xml");
    VideoCapture captureDevice;
    captureDevice.open(0);
    Mat captureFrame;
    Mat grayscaleFrame;
    namedWindow("outputCapture", 1);
    //create a loop to capture and find faces
    while (true)
    {
        //capture a new image frame
        captureDevice >> captureFrame;
        //convert captured image to gray scale and equalize
        cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
        equalizeHist(grayscaleFrame, grayscaleFrame);
        //create a vector array to store the faces found
        std::vector<Rect> bodies;
        //find faces and store them in the vector array
        body_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3,
            CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE, Size(30, 30));
        //draw a rectangle for all found faces in the vector array on the original image
        for (int i = 0; i < faces.size(); i++)
        {
            Point pt1(bodies[i].x + bodies[i].width, bodies[i].y + bodies[i].height);
            Point pt2(bodies[i].x, bodies[i].y);
            rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
        }
        //print the output
        imshow("outputCapture", captureFrame);
        //pause for 33ms
        waitKey(33);
    }
    return 0;
}
It seems there is some inconsistency in your code: faces is not defined anywhere, but I assume it is meant to be the bodies vector you declare just above the call.
detectMultiScale stores all detected objects in that output vector. Are you sure it contains only one object?
Try removing the CV_HAAR_FIND_BIGGEST_OBJECT flag, because you want all objects to be detected, not only the biggest one.
Also, make sure you set the minSize and maxSize parameters correctly (see the documentation), since those parameters determine the minimal and maximal detectable object sizes.
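Putting this together, the detection part inside your while loop could look like the sketch below; it reuses the variable names from your snippet, and the Size bounds are illustrative, not tuned values:
// detect every upper body, not just the biggest one: drop
// CV_HAAR_FIND_BIGGEST_OBJECT and give reasonable size bounds
std::vector<Rect> bodies;
body_cascade.detectMultiScale(grayscaleFrame, bodies, 1.1, 3,
                              CV_HAAR_SCALE_IMAGE,
                              Size(30, 30), Size(300, 300));
for (size_t i = 0; i < bodies.size(); i++)
    rectangle(captureFrame, bodies[i], Scalar(0, 255, 0), 1);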

OpenCV: Converting from a Contour Tree to a Contour - cvContourFromContourTree()

I have a pointer to a CvContourTree and I wish to derive the associated contour from it.
I have tried to use the function that should do this:
cvContourFromContourTree(const CvContourTree* tree, CvMemStorage* storage, CvTermCriteria criteria)
but it is giving me an error:
'Unhandled exception at 0x1005567f in Matching_Hierarchial.exe: 0xC0000005:
Access violation reading location 0x00000002.'
I have defined the CvTermCriteria as follows:
CvTermCriteria termcrit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS ,5,1);
Can someone please provide some sample code showing how to convert a contour to a contour tree and then back to a contour again? I would be extremely grateful for any help with this.
Thanks,
Conor
Thanks for your fast response. Please see the attached code segment. I take an image from my project folder and convert it to binary. I then find the contours. Using an arbitrary contour, I simplify its complexity via polygon approximation. I construct a contour tree from this contour (I am confident this part works, as I have tested this contour tree against a similar one using cvMatchContourTrees() and got favourable outcomes). However, despite reading all I could find on the function, and your post, I cannot convert from the contour tree back to the contour structure.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "cxcore.h"
#include "cvaux.h"
#include <iostream>
using namespace std;
#define CVX_RED CV_RGB(0xff,0x00,0x00)
#define CVX_BLUE CV_RGB(0x00,0x00,0xff)
int _tmain(int argc, _TCHAR* argv[])
{
// define input image
IplImage *img1 = cvLoadImage("SHAPE1.jpg",0);
// define and construct binary image of input image
IplImage *imgEdge1 = cvCreateImage(cvGetSize(img1),IPL_DEPTH_8U,1);
cvThreshold(img1,imgEdge1,155,255,CV_THRESH_BINARY);
// define and zero image to place polygon image
IplImage *dst1 = cvCreateImage(cvGetSize(img1),IPL_DEPTH_8U,1);
cvZero(dst1);
// display ip and thresholded image
cvNamedWindow("img1",1);
cvNamedWindow("thresh1",1);
cvShowImage("img1",img1);
cvShowImage("thresh1",imgEdge1);
// find all the contours of the image
CvSeq* contours1 = NULL;
CvMemStorage* storage1 = cvCreateMemStorage();
int numContour1 = cvFindContours(imgEdge1,storage1,&contours1,sizeof(CvContour),CV_RETR_TREE,CV_CHAIN_APPROX_SIMPLE);
cout<<"number of contours"<<numContour1<<endl;
// extract a contour of interest
CvSeq* poly_approx1 = contours1->v_next; // interested in vertical level becaue tree structure
// CALCULATE PERIMETER
double perimeter1 = cvArcLength((CvSeq*)poly_approx1,CV_WHOLE_SEQ,-1);
// CREATE POLYGON APPROXIMATION -
// NB: CANNOT USE 'CV_CHAIN_CODE'ARGUEMENT IN THE cvFindContours() call
CvSeq* polySeq1 = cvApproxPoly((CvSeq*)poly_approx1,sizeof(CvContour),storage1,CV_POLY_APPROX_DP,perimeter1*0.02,0);
// draw approximated polygon
cvDrawContours(dst1,polySeq1,cvScalar(255),cvScalar(255),0,3,8); // draw
// display polygon
cvNamedWindow("Poly Approx1",1);
cvShowImage("Poly Approx1",dst1);
// NOW WE HAVE A POLYGON APPROXIMATED CONTOUR
// CREATE A CONTOUR TREE
CvMemStorage *treeStorage1 = cvCreateMemStorage(0);
CvContourTree* tree1 = cvCreateContourTree((const CvSeq*)polySeq1,treeStorage1,0);
// TO RECONSTRUCT A CONTOUR FROM THE CONTOUR TREE
// CANNOT GET TO WORK YET...
CvMemStorage *stor = cvCreateMemStorage(0);
CvTermCriteria termcrit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS ,5,1); // more later
/* the next line will not compile */
CvSeq *contour_recap = cvContourFromContourTree(tree1,treeStorage1,termcrit);
cvWaitKey(0);
return 0;
}
Thanks again for any help or advice that you might be able to give. I assure you it's greatly appreciated.
Conor
Well, you are using the appropriate methods.
CvContourTree* cvCreateContourTree(
    const CvSeq* contour,
    CvMemStorage* storage,
    double threshold);
This method creates the contour tree from a given sequence, which can then be used to compare two contours.
To convert a contour tree back to a sequence you use the method you already posted, but remember to initialize the storage and create a TermCriteria (which looks OK in your example):
storage = cvCreateMemStorage(0);
CvSeq* cvContourFromContourTree(
    const CvContourTree* tree,
    CvMemStorage* storage,
    CvTermCriteria criteria);
So these steps should be OK for your conversion. If there's nothing missing from your code, then you should post more of it so we can find the mistake.
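For completeness, a minimal sketch of the whole roundtrip with the legacy C API (it assumes contour is a valid CvSeq* obtained from cvFindContours; the threshold 0 and the termination criteria are taken from your snippet):
// contour -> contour tree
CvMemStorage* treeStorage = cvCreateMemStorage(0);
CvContourTree* tree = cvCreateContourTree(contour, treeStorage, 0);
// contour tree -> contour, using its own freshly initialized storage
CvMemStorage* outStorage = cvCreateMemStorage(0);
CvTermCriteria crit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 5, 1);
CvSeq* rebuilt = cvContourFromContourTree(tree, outStorage, crit);
// ... use rebuilt here, e.g. with cvDrawContours ...
cvReleaseMemStorage(&outStorage);
cvReleaseMemStorage(&treeStorage);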
