directional edge detection in OpenCV - opencv

I would like to detect edges that have a certain angle/orientation.
Adapting a post from SO, I've figured out how to use OpenCV's magnitude, phase and Sobel functions to filter out unwanted edge points, then use the magnitude image (with the phase image as a condition) to output the edge points.
However, the result is not similar to the Canny edge function. It's good that the edges with unwanted angles are filtered out, but the detected edges are blobs of points, not thin line edges.
The left edge image is also plotted after findContours is used, but this barely helps.
1) What else should be added to mimic Canny processing?
2) For the purpose of directional edge detection, is this approach more robust than using a directional kernel other than the typical Sobel ones?
Thank you!
Edit 01:
I forgot to include my code link.

Alternatively, you can try LSD (http://www.ipol.im/pub/art/2012/gjmr-lsd/). It outputs lines as pairs of two points, so directional filtering is also possible.
There's also another line segment implementation at http://sourceforge.net/projects/lswms/, though the LSD link above gives better results.
If you want a single-pixel edge, you would need to do skeletonization/thinning.
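For reference, a minimal sketch of that thinning step, assuming you have the opencv_contrib ximgproc module available (the thinning() call is from that module; the wrapper function is my own):
#include "opencv2/opencv.hpp"
#include "opencv2/ximgproc.hpp"
using namespace cv;

// thin a binary (0/255, 8-bit) edge image down to single-pixel-wide edges
Mat thinEdges(const Mat& binaryEdges)
{
    Mat thinned;
    ximgproc::thinning(binaryEdges, thinned, ximgproc::THINNING_ZHANGSUEN);
    return thinned;
}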
Edit:
Rename lsd.c to lsd.cpp when you are compiling. I used version 1.6 attached in the URL. Code and results are below. You can tweak the thresholds to suppress the small segments as well.
#include "opencv2/opencv.hpp"
using namespace cv;
#include "lsd.h"
void lsd_call(Mat& im)
{
Mat gray;
cvtColor(im,gray,CV_BGR2GRAY);
Mat imgdouble;
gray.convertTo(imgdouble,CV_64FC1);
double * image;
double * out;
int x,y,i,j,n;
out = lsd(&n,(double*)imgdouble.data,imgdouble.cols,imgdouble.rows);
Mat lines = im.clone();
Mat lines_binary = Mat::zeros(gray.size(),CV_8UC1);
for(i=0;i<n;i++)
{
double x1,y1,x2,y2,w;
x1 = out[7*i+0];
y1 = out[7*i+1];
x2 = out[7*i+2];
y2 = out[7*i+3];
w = out[7*i+4];
double length = sqrt(pow(x1-x2,2)+pow(y1-y2,2));
double angle = atan2(y2 - y1, x2 - x1) * 180 / CV_PI;
if(angle<180 && angle>90)
{
line(lines,Point2d(out[7*i+0],out[7*i+1]),Point2d(out[7*i+2],out[7*i+3]),Scalar (0,0,255));
line(lines_binary,Point2d(out[7*i+0],out[7*i+1]),Point2d(out[7*i+2],out[7*i+3]) ,Scalar(255));
}
if(length>75)
{
//line(todraw,Point2d(out[7*i+0],out[7*i+1]),Point2d(out[7*i+2],out[7*i+3]), Scalar(0,0,255),out[7*i+4]);
}
}
imshow("lines",lines);
imshow("lines_binary",lines_binary);
imwrite("c:/data/lines.jpg",lines);
imwrite("c:/data/linesbinary.jpg",lines_binary);
free( (void *) out );
}
int main(int argc,char** argv )
{
Mat im = imread("c:/data/lines.png");
lsd_call(im);
waitKey(0);
}

1)
The Canny edge detector produces thin edges because of non-maximum suppression across neighbouring pixels.
In order to mimic that, you need to keep only the edge pixels with the maximum edge response along the gradient direction; blobs of points can be prevented this way (see the sketch after point 2).
As you can probably guess, the weaker responses can then be suppressed with a threshold you define.
2) I can't give a definite answer to that, sadly. For a given angle, the kernels might be limited by discretization, so for many different angles this approach 'should' be better.
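To illustrate point 1, here is a minimal sketch of that idea (my own wiring, not the exact Canny internals): Sobel gradients, an orientation band check, and non-maximum suppression along the discretized gradient direction. The angle band here is applied to the gradient angle; shift it by 90 degrees if you want to filter by edge orientation instead. The function name and thresholds are placeholders.
#include "opencv2/opencv.hpp"
using namespace cv;

Mat directionalNms(const Mat& gray, double minAngle, double maxAngle, double minMag)
{
    Mat gx, gy, mag, ang;
    Sobel(gray, gx, CV_32F, 1, 0, 3);
    Sobel(gray, gy, CV_32F, 0, 1, 3);
    magnitude(gx, gy, mag);
    phase(gx, gy, ang, true);                       // gradient angles in degrees, 0..360

    Mat edges = Mat::zeros(gray.size(), CV_8UC1);
    for (int y = 1; y < gray.rows - 1; y++)
    {
        for (int x = 1; x < gray.cols - 1; x++)
        {
            float m = mag.at<float>(y, x);
            if (m < minMag) continue;

            double a = fmod(ang.at<float>(y, x), 180.0);   // fold to 0..180
            if (a < minAngle || a > maxAngle) continue;    // orientation filter

            // pick the two neighbours along the (discretized) gradient direction
            int dx = 1, dy = 0;                            // ~0 degrees
            if (a >= 22.5 && a < 67.5)       { dx = 1;  dy = 1; }
            else if (a >= 67.5 && a < 112.5) { dx = 0;  dy = 1; }
            else if (a >= 112.5 && a < 157.5){ dx = -1; dy = 1; }

            float m1 = mag.at<float>(y + dy, x + dx);
            float m2 = mag.at<float>(y - dy, x - dx);
            if (m >= m1 && m >= m2)                        // non-maximum suppression
                edges.at<uchar>(y, x) = 255;
        }
    }
    return edges;
}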

Related

OpenCV - Determine position of wrist

I need to determine the position of the wrist in a frame containing parts of a human forearm and the matching hand.
So far I have isolated the hand & arm and I'm able to draw a polygon & hull curve around it:
I achieve this result by simple binary thresholding and automatic contour fitting.
Based on this I want to extract the location of the wrist. This needs to work for all orientations of the hand/wrist.
However, being fairly new to working with OpenCV it is unclear to me what the best way is to determine/isolate the location of the wrist. I have various ideas for this:
The arm section is fairly straight. Maybe a simple line detection over the contour polygon might do the job to get straight lines for the forearm.
Somehow split the contour polygon into multiple sections. It's fair to assume that the wrist is located where the distance between the two contour edges running along the forearm is smallest. Is there a way to find that point along the polygon and then "cut" or "split" the polygon into two? From there I'd have one polygon representing a rectangle, which should be easy to work with.
Use an approach that iterates along the main axis of the polygon fitted using fitLine(), measuring the distance between two opposing points of the polygon, finding the shortest distance.
Unfortunately I lack the experience to make the correct choice here - or even come up with a better idea.
I'd appreciate any kind of ideas & pointers towards achieving this. I could find a lot of valuable research material when it comes to hand detection & tracking and basic body part matching using Haar cascades. Unfortunately, I couldn't find a way to apply those technologies for my use case.
Here's some raw material (images & videos) to work with: (Google Drive Link!): https://drive.google.com/drive/folders/1hU4hGw5dYtVrcXTq8TYWCWfcLWjT-ZJU?usp=sharing
Approach: I took advantage of the arm side. The thickness of the arm stays almost the same until you hit the hand.
Assumption: I coded this assuming the arm enters the screen vertically; otherwise my code may not work. I tried all of the images you shared and it works properly for all of them.
My steps:
Apply a simple segmentation to keep only the needed part of the image.
Starting from the arm side, count the non-black pixels in each column.
As long as a column's count is similar to the previous columns' counts, you are still on the arm. When you hit a column that differs significantly, you have reached the wrist.
Note: I decided the threshold experimentally.
Here are the results and code:
Input Image:
After segmentation:
Output after algorithm:
Code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <cstdlib>
using namespace cv;
using namespace std;
int main() {
Mat src, gray, blur_image, threshold_output;
// take input image
src = imread("/ur/image/directory/image_01.jpg", 1);
// convert to grayscale
cvtColor(src, gray, COLOR_BGR2GRAY);
// add blurring to the input image
medianBlur(gray,gray,9);
// Apply a segmentation to arm
for(int i=0; i<gray.rows; i++)
for(int j=0;j<gray.cols; j++)
if(gray.at<uchar>(Point(j,i))<110)
gray.at<uchar>(Point(j,i)) = 0;
//Creat a bgr mat to show the results clearly
Mat copy_gray = gray;
cvtColor(copy_gray,copy_gray,CV_GRAY2BGR);
double sum = 0;
int loop_cnt = 0,enter = 1;
Point first,second;
for(int j=gray.cols-1; j>=0; j--)
{
loop_cnt++;
int counter = 0,ff=1,enter2 = 1;
for(int i=0;i<gray.rows; i++)
{
if(gray.at<uchar>(Point(j,i))!=0 && enter)
{
if(ff)
first = Point(j,i);
counter++;
ff = 0;
}
if(!ff && gray.at<uchar>(Point(j,i))==0 && enter2)
{
second = Point(j,i);
enter2 = 0;
}
}
sum += (double)counter;
double average = sum/(double)(loop_cnt);
if(abs(average-counter)>20.0 && enter)
{
line(copy_gray,Point(j,0),Point(j,500),Scalar(0,255,0),5);
enter = 0;
}
}
int distance = norm(second-first)/2;
circle(copy_gray,Point(first.x,first.y+distance),20,Scalar(0,0,255),5);
imshow("Result",copy_gray);
waitKey(0);
return 0;
}

Detecting a hand above a chessboard using opencv

I am developing an Android application for analyzing chess games based on a series of photos. To process the images, I am using OpenCV. My question is: how can I detect that there is a player's hand in a picture? I would like to filter those photos and analyze only the ones showing just the chessboard.
So far I have managed to get the Canny output, so from an original image like this
I am able to get this Canny result.
But I have no idea what I can do next...
The code I used to get Canny:
Mat gray, blur, cannyed;
cvtColor(img, gray, CV_BGR2GRAY);
GaussianBlur(gray, blur, Size(7, 7), 0, 0);
Canny(blur, cannyed, 50, 100, 3);
I would highly appreciate any ideas and advice on what to do next and what OpenCV functions can I use.
You have a very nice spectrum in the chessboard. A hand in it messes up the frequencies built up by the regular transitions between the black and white squares. Try moving a bigger window (say, the size of 4.5 x 4.5 squares) around and see what happens to the frequencies.
Another approach, if you have the sequence of pictures taken as a movie, is to analyse the motion. Take the difference of consecutive frames (low-pass filter them a bit first) to detect motion. Filter the motion in time (over several frames), then threshold it to get a binary image. Erode the binary shapes to filter out small moving objects (noise, chess pieces) and to be able to detect whether any larger moving shape (e.g. a hand) is on the board.
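A rough sketch of that motion-difference idea (the function name, kernel size and thresholds are my own guesses, not tuned for these images): blur two consecutive frames, take their absolute difference, threshold, erode away small moving blobs, and report whether a large moving region remains.
#include "opencv2/opencv.hpp"
using namespace cv;

bool largeMotionPresent(const Mat& prevFrame, const Mat& currFrame)
{
    Mat prevGray, currGray;
    cvtColor(prevFrame, prevGray, COLOR_BGR2GRAY);
    cvtColor(currFrame, currGray, COLOR_BGR2GRAY);

    // low-pass filter a bit first, as suggested above
    GaussianBlur(prevGray, prevGray, Size(5, 5), 0);
    GaussianBlur(currGray, currGray, Size(5, 5), 0);

    // difference of consecutive frames, then threshold to a binary motion mask
    Mat diff, motionMask;
    absdiff(prevGray, currGray, diff);
    threshold(diff, motionMask, 25, 255, THRESH_BINARY);

    // erode to remove small moving objects (noise, a single chess piece)
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    erode(motionMask, motionMask, kernel);

    // if anything survives the erosion, a large object (e.g. a hand) is moving
    return countNonZero(motionMask) > 0;
}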
Here, after Canny edge detection, I tried morphological operations to extract the horizontal and vertical lines.
Mat horizontal = cannyed.clone();
// Specify size on horizontal axis
int horizontalsize = horizontal.cols / 60;
// Create structure element for extracting horizontal lines through morphology operations
Mat horizontalStructure = getStructuringElement(MORPH_RECT, Size(horizontalsize, 1));
erode(horizontal, horizontal, horizontalStructure, Point(-1, -1), 2);
dilate(horizontal, horizontal, horizontalStructure, Point(-1, -1), 1);
imshow("horizontal", horizontal);

Mat vertical = cannyed.clone();
// Specify size on vertical axis
int verticalsize = vertical.cols / 60;
// Create structure element for extracting vertical lines through morphology operations
Mat verticalStructure = getStructuringElement(MORPH_RECT, Size(1, verticalsize));
erode(vertical, vertical, verticalStructure, Point(-1, -1));
dilate(vertical, vertical, verticalStructure, Point(-1, -1), 2);
imshow("vertical", vertical);
The results are:
Horizontal lines in the chessboard
From the figure you can see there is a regular interval between the lines. Where the hand is present, the gaps between the lines are larger.
If you run contour detection in that location, the hand (or any object) over the chessboard can be detected.
This helps to solve for any object placed over the chessboard.
Thank you all very much for your suggestions.
So I solved the problem mostly using Gowthaman's method. First I use his code to generate vertical and horizontal lines. Then I combine them like this:
Mat combined = vertical + horizontal;
So I get something like this when there is no hand, or like this when there is a hand.
Next I count white pixels using the code:
int GetPixelCount(Mat image, uchar color)
{
    int result = 0;
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            if (image.at<uchar>(Point(j, i)) == color)
                result++;
        }
    }
    return result;
}
I do that for every photo in the series. The first photo is always without a hand, so I use it as a template. If the current photo has less than 98% of the template's white pixels, then I deduce there is a hand (or something else) in it.
Most likely this is not an optimal method and it has lots of weaknesses, but it is very simple and works just fine for me :)
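For completeness, a hedged sketch of that 98% rule, reusing the GetPixelCount() helper above (the threshold is the one from the answer; the wrapper function itself is my own):
// returns true if the current frame is missing enough white pixels
// compared to the hand-free template frame
bool hasHand(const Mat& templateImg, const Mat& currentImg)
{
    int templateWhite = GetPixelCount(templateImg, 255);
    int currentWhite  = GetPixelCount(currentImg, 255);
    // fewer than 98% of the template's white pixels => something is covering the board
    return currentWhite < 0.98 * templateWhite;
}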

iOS & OpenCV: Image Registration / Alignment

I am doing a project combining multiple images, similar to HDR, on iOS. I have managed to get 3 images of different exposures through the camera and now I want to align them, because during the capture one's hand must have shaken, resulting in all 3 images having slightly different alignment.
I have imported the OpenCV framework and I have been exploring functions in OpenCV to align/register images, but found nothing. Is there actually a function in OpenCV to achieve this? If not, are there any other alternatives?
Thanks!
In OpenCV 3.0 you can use findTransformECC. I have copied this ECC Image Alignment code from LearnOpenCV.com where a very similar problem is solved for aligning color channels. The post also contains code in Python. Hope this helps.
// Read the images to be aligned
Mat im1 = imread("images/image1.jpg");
Mat im2 = imread("images/image2.jpg");

// Convert images to gray scale
Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);

// Define the motion model
const int warp_mode = MOTION_EUCLIDEAN;

// Set a 2x3 or 3x3 warp matrix depending on the motion model.
Mat warp_matrix;

// Initialize the matrix to identity
if ( warp_mode == MOTION_HOMOGRAPHY )
    warp_matrix = Mat::eye(3, 3, CV_32F);
else
    warp_matrix = Mat::eye(2, 3, CV_32F);

// Specify the number of iterations.
int number_of_iterations = 5000;

// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;

// Define termination criteria
TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, number_of_iterations, termination_eps);

// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(
    im1_gray,
    im2_gray,
    warp_matrix,
    warp_mode,
    criteria
);

// Storage for warped image.
Mat im2_aligned;

if (warp_mode != MOTION_HOMOGRAPHY)
    // Use warpAffine for Translation, Euclidean and Affine
    warpAffine(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);
else
    // Use warpPerspective for Homography
    warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), INTER_LINEAR + WARP_INVERSE_MAP);

// Show final result
imshow("Image 1", im1);
imshow("Image 2", im2);
imshow("Image 2 Aligned", im2_aligned);
waitKey(0);
There is no single function called something like "align"; you need to implement it yourself, or find an already implemented one.
Here is one solution.
You need to extract keypoints from all 3 images and try to match them. Be sure that your keypoint extraction technique is invariant to illumination changes, since the images all have different intensity values because of the different exposures. You need to match your keypoints and find the disparity between them; then you can use that disparity to align your images. A rough sketch of this follows below.
Remember that this answer is only superficial; for details, you first need to do some research about keypoint/descriptor extraction and keypoint/descriptor matching.
Good luck!
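A minimal sketch of that keypoint-based alignment, assuming OpenCV 3.x (ORB is used here only for simplicity; for strong exposure changes a more illumination-invariant detector/descriptor may work better, and the function name and parameters are my own):
#include "opencv2/opencv.hpp"
#include <algorithm>
#include <vector>
using namespace cv;

Mat alignWithFeatures(const Mat& reference, const Mat& moving)
{
    Mat refGray, movGray;
    cvtColor(reference, refGray, COLOR_BGR2GRAY);
    cvtColor(moving,    movGray, COLOR_BGR2GRAY);

    // detect and describe keypoints in both images
    Ptr<ORB> orb = ORB::create(2000);
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    orb->detectAndCompute(refGray, noArray(), kp1, desc1);
    orb->detectAndCompute(movGray, noArray(), kp2, desc2);

    // match descriptors and keep the best (smallest-distance) matches
    BFMatcher matcher(NORM_HAMMING, true);   // cross-check enabled
    std::vector<DMatch> matches;
    matcher.match(desc1, desc2, matches);
    std::sort(matches.begin(), matches.end());
    matches.resize(std::min<size_t>(matches.size(), 200));

    std::vector<Point2f> pts1, pts2;
    for (const DMatch& m : matches)
    {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // robustly estimate a homography and warp the moving image onto the reference
    Mat H = findHomography(pts2, pts1, RANSAC, 3.0);
    Mat aligned;
    warpPerspective(moving, aligned, H, reference.size());
    return aligned;
}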

Reshaping noisy coin into a circle form

I'm doing coin detection using JavaCV (an OpenCV wrapper), but I have a little problem when the coins are connected. If I try to erode them to separate the coins, they lose their circular form, and if I try to count pixels inside each coin, some connected coins can be miscounted as one bigger coin. What I want to do is first reshape each one back into a circle (with the radius of that coin) and then count the pixels inside it.
Here is my thresholded image:
And here is eroded image:
Any suggestions? Or is there any better way to break bridges between coins?
It looks similar to a problem I recently had to separate bacterial colonies growing on agar plates.
I performed a distance transform on the thresholded image (in your case you will need to invert it).
Then I found the peaks of the distance map (by calculating the difference between the dilated distance map and the distance map itself and finding the zero values).
Then, I assumed each peak to be the centre of a circle (coin) and the value of the peak in the distance map to be the radius of the circle.
Here is the result of your image after this pipeline:
I am new to OpenCV, and c++ so my code is probably very messy, but I did that:
int main( int argc, char** argv ){
    cv::Mat objects, distance, peaks, results;
    std::vector<std::vector<cv::Point> > contours;

    objects = cv::imread("CUfWj.jpg");
    objects.copyTo(results);
    cv::cvtColor(objects, objects, CV_BGR2GRAY);

    //THIS IS THE LINE TO BLUR THE IMAGE CF COMMENTS OF THIS POST
    cv::blur(objects, objects, cv::Size(3,3));
    cv::threshold(objects, objects, 125, 255, cv::THRESH_BINARY_INV);

    /* Applies a distance transform to "objects".
     * The result is saved in "distance" */
    cv::distanceTransform(objects, distance, CV_DIST_L2, CV_DIST_MASK_5);

    /* In order to find the local maxima, "distance"
     * is subtracted from the result of the dilatation of
     * "distance". All the peaks keep the same value */
    cv::dilate(distance, peaks, cv::Mat(), cv::Point(-1,-1), 3);
    cv::dilate(objects, objects, cv::Mat(), cv::Point(-1,-1), 3);

    /* Now all the peaks should be exactly 0 */
    peaks = peaks - distance;

    /* And the non-peaks 255 */
    cv::threshold(peaks, peaks, 0, 255, cv::THRESH_BINARY);
    peaks.convertTo(peaks, CV_8U);

    /* Only the zero values of "peaks" that are non-zero
     * in "objects" are the real peaks */
    cv::bitwise_xor(peaks, objects, peaks);

    /* The peaks that are distant from less than
     * 2 pixels are merged by dilatation */
    cv::dilate(peaks, peaks, cv::Mat(), cv::Point(-1,-1), 1);

    /* In order to map the peaks, findContours() is used.
     * The results are stored in "contours" */
    cv::findContours(peaks, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

    /* The next steps are applied only if, at least,
     * one contour exists */
    cv::imwrite("CUfWj2.jpg", peaks);
    if(contours.size() > 0){
        /* Defines vectors to store the moments of the peaks, the centers
         * and the theoretical circles of the objects of interest */
        std::vector<cv::Moments> moms(contours.size());
        std::vector<cv::Point> centers(contours.size());
        std::vector<cv::Vec3f> circles(contours.size());
        float rad, x, y;

        /* Calculates the moments of each peak and then the center of the peak,
         * which is approximately the center of each object of interest */
        for(unsigned int i = 0; i < contours.size(); i++) {
            moms[i] = cv::moments(contours[i]);
            centers[i] = cv::Point(moms[i].m10/moms[i].m00, moms[i].m01/moms[i].m00);
            x = (float) (centers[i].x);
            y = (float) (centers[i].y);
            if(x > 0 && y > 0){
                rad = (float) (distance.at<float>((int)y, (int)x) + 1);
                circles[i][0] = x;
                circles[i][1] = y;   // index fixed: [3] is out of bounds for a Vec3f
                circles[i][2] = rad;
                cv::circle(results, centers[i], rad+1, cv::Scalar(255, 0, 0), 2, 4, 0);
            }
        }
        cv::imwrite("CUfWj2.jpg", results);
    }
    return 1;
}
You don't need to erode, just a good set of params for cvHoughCircles():
The code used to generate this image came from my other post: Detecting Circles, with these parameters:
CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT, 1, gray->height/12, 80, 26);
OpenCV has a function called HoughCircles() that can be applied to your case, without separating the different circles. Can you call it from JavaCV ? If so, it will do what you want (detecting and counting circles), bypassing your separation problem.
The main point is to detect the circles accurately without separating them first. Other algorithms (such as template matching) can be used instead of the generalized Hough transform, but you have to take into account the different sizes of the coins.
The usual approach for erosion-based object recognition is to label continuous regions in the eroded image and then re-grow them until they match the regions in the original image. Hough circles is a better idea in your case, though.
After detecting the joined coins, I recommend applying morphological operations to classify areas as "definitely coin" and "definitely not coin", applying a distance transformation, then running the watershed to determine the boundaries. This scenario is actually the demonstration example for the watershed algorithm in OpenCV; perhaps it was created in response to this question.
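A rough sketch of that watershed pipeline (the thresholds and the 0.6 factor are my own guesses, not tuned for the coin image): sure foreground from the distance transform, sure background from dilation, watershed on the unknown band in between.
#include "opencv2/opencv.hpp"
using namespace cv;

int countCoins(const Mat& bgr)
{
    Mat gray, bw;
    cvtColor(bgr, gray, COLOR_BGR2GRAY);
    threshold(gray, bw, 0, 255, THRESH_BINARY | THRESH_OTSU);

    // "definitely not coin": dilate the binary mask
    Mat sureBg;
    dilate(bw, sureBg, Mat(), Point(-1, -1), 3);

    // "definitely coin": high values of the distance transform
    Mat dist;
    distanceTransform(bw, dist, DIST_L2, 5);
    double maxVal;
    minMaxLoc(dist, 0, &maxVal);
    Mat sureFg;
    threshold(dist, sureFg, 0.6 * maxVal, 255, THRESH_BINARY);
    sureFg.convertTo(sureFg, CV_8U);

    // label each sure-foreground blob, leave the unknown band as label 0
    Mat markers;
    int nLabels = connectedComponents(sureFg, markers);
    markers += 1;                      // background becomes 1, blobs 2..n
    Mat unknown = sureBg - sureFg;
    markers.setTo(0, unknown);         // unknown region gets label 0

    watershed(bgr, markers);           // boundaries end up as -1 in markers
    return nLabels - 1;                // number of coin markers found
}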

Algorithm to detect corners of paper sheet in photo

What is the best way to detect the corners of an invoice/receipt/sheet-of-paper in a photo? This is to be used for subsequent perspective correction, before OCR.
My current approach has been:
RGB > Gray > Canny edge detection with thresholding > Dilate(1) > Remove small objects(6) > Clear border objects > Pick largest blob based on convex area > [corner detection - not implemented]
I can't help but think there must be a more robust 'intelligent'/statistical approach to handle this type of segmentation. I don't have a lot of training examples, but I could probably get 100 images together.
Broader context:
I'm using Matlab to prototype, and planning to implement the system in OpenCV and Tesseract-OCR. This is the first of a number of image processing problems I need to solve for this specific application, so I'm looking to roll my own solution and re-familiarize myself with image processing algorithms.
Here are some sample images that I'd like the algorithm to handle. If you'd like to take up the challenge, the large images are at http://madteckhead.com/tmp
In the best case this gives:
However it fails easily on other cases:
EDIT: Hough Transform Progress
Q: What algorithm would cluster the hough lines to find corners?
Following advice from the answers I was able to use the Hough transform, pick lines, and filter them. My current approach is rather crude. I've made the assumption that the invoice will always be less than 15deg out of alignment with the image. I end up with reasonable results for lines if this is the case (see below). But I am not entirely sure of a suitable algorithm to cluster the lines (or vote) to extrapolate the corners. The Hough lines are not continuous, and in the noisy images there can be parallel lines, so some form of distance-from-line-origin metric is required. Any ideas?
I'm Martin's friend who was working on this earlier this year. This was my first ever coding project, and it kinda ended in a bit of a rush, so the code needs some errr... decoding...
I'll give a few tips from what I've seen you doing already, and then sort out my code on my day off tomorrow.
First tip: OpenCV and Python are awesome, move to them as soon as possible. :D
Instead of removing small objects and/or noise, lower the Canny constraints so it accepts more edges, and then find the largest closed contour (in OpenCV use findContours() with some simple parameters; I think I used CV_RETR_LIST). It might still struggle when the paper is on a white surface, but this was definitely giving the best results.
For the HoughLines2() transform, try CV_HOUGH_STANDARD as opposed to CV_HOUGH_PROBABILISTIC; it'll give rho and theta values, defining the line in polar coordinates, and then you can group the lines within a certain tolerance of each other.
My grouping worked as a lookup table: for each line output from the Hough transform it gives a rho and theta pair. If these values were within, say, 5% of a pair of values already in the table, the line was discarded; if they were outside that 5%, a new entry was added to the table (a rough sketch is shown below).
You can then do analysis of parallel lines or distance between lines much more easily.
Hope this helps.
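For illustration, a minimal sketch of that rho/theta lookup-table grouping (the 5% tolerance is the one mentioned above; the function name and wiring are my own, written against the C++ API where cv::HoughLines returns (rho, theta) pairs):
#include "opencv2/opencv.hpp"
#include <algorithm>
#include <cmath>
#include <vector>
using namespace cv;

std::vector<Vec2f> groupLines(const std::vector<Vec2f>& houghLines,
                              float rhoTol = 0.05f, float thetaTol = 0.05f)
{
    std::vector<Vec2f> table;                 // one representative per group
    for (const Vec2f& line : houghLines)      // each line is (rho, theta)
    {
        bool duplicate = false;
        for (const Vec2f& kept : table)
        {
            // within ~5% of an existing entry -> treat as the same line
            if (std::abs(line[0] - kept[0]) < rhoTol * std::abs(kept[0]) &&
                std::abs(line[1] - kept[1]) < thetaTol * std::max(kept[1], 1e-3f))
            {
                duplicate = true;
                break;
            }
        }
        if (!duplicate)
            table.push_back(line);
    }
    return table;
}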
Here's what I came up with after a bit of experimentation:
import cv, cv2, numpy as np
import sys

def get_new(old):
    new = np.ones(old.shape, np.uint8)
    cv2.bitwise_not(new, new)
    return new

if __name__ == '__main__':
    orig = cv2.imread(sys.argv[1])

    # these constants are carefully picked
    MORPH = 9
    CANNY = 84
    HOUGH = 25

    img = cv2.cvtColor(orig, cv2.COLOR_BGR2GRAY)
    cv2.GaussianBlur(img, (3,3), 0, img)

    # this is to recognize white on white
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (MORPH, MORPH))
    dilated = cv2.dilate(img, kernel)

    edges = cv2.Canny(dilated, 0, CANNY, apertureSize=3)
    lines = cv2.HoughLinesP(edges, 1, 3.14/180, HOUGH)
    for line in lines[0]:
        cv2.line(edges, (line[0], line[1]), (line[2], line[3]),
                 (255,0,0), 2, 8)

    # finding contours
    contours, _ = cv2.findContours(edges.copy(), cv.CV_RETR_EXTERNAL,
                                   cv.CV_CHAIN_APPROX_TC89_KCOS)
    contours = filter(lambda cont: cv2.arcLength(cont, False) > 100, contours)
    contours = filter(lambda cont: cv2.contourArea(cont) > 10000, contours)

    # simplify contours down to polygons
    rects = []
    for cont in contours:
        rect = cv2.approxPolyDP(cont, 40, True).copy().reshape(-1, 2)
        rects.append(rect)

    # that's basically it
    cv2.drawContours(orig, rects, -1, (0,255,0), 1)

    # show only contours
    new = get_new(img)
    cv2.drawContours(new, rects, -1, (0,255,0), 1)
    cv2.GaussianBlur(new, (9,9), 0, new)
    new = cv2.Canny(new, 0, CANNY, apertureSize=3)

    cv2.namedWindow('result', cv2.WINDOW_NORMAL)
    cv2.imshow('result', orig)
    cv2.waitKey(0)
    cv2.imshow('result', dilated)
    cv2.waitKey(0)
    cv2.imshow('result', edges)
    cv2.waitKey(0)
    cv2.imshow('result', new)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
Not perfect, but at least works for all samples:
A student group at my university recently demonstrated an iPhone app (and python OpenCV app) that they'd written to do exactly this. As I remember, the steps were something like this:
Median filter to completely remove the text on the paper (this was handwritten text on white paper with fairly good lighting; there it worked very well, but it may not work with printed text). The reason was that it makes the corner detection much easier.
Hough Transform for lines
Find the peaks in the Hough Transform accumulator space and draw each line across the entire image.
Analyse the lines and remove any that are very close to each other and are at a similar angle (cluster the lines into one). This is necessary because the Hough Transform isn't perfect as it's working in a discrete sample space.
Find pairs of lines that are roughly parallel and that intersect other pairs, to see which lines form quads (a sketch for intersecting two polar lines follows below).
This seemed to work fairly well, and they were able to take a photo of a piece of paper or book, perform the corner detection and then map the document in the image onto a flat plane in almost real time (there was a single OpenCV function to perform the mapping). There was no OCR when I saw it working.
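For the last step, one common way to turn grouped (rho, theta) lines into corner candidates is to intersect pairs of them. A small hedged sketch (my own helper, simply solving the two polar line equations x*cos(t) + y*sin(t) = r):
#include "opencv2/opencv.hpp"
#include <cmath>
using namespace cv;

bool intersectPolarLines(const Vec2f& l1, const Vec2f& l2, Point2f& corner)
{
    float r1 = l1[0], t1 = l1[1];
    float r2 = l2[0], t2 = l2[1];
    float det = std::cos(t1) * std::sin(t2) - std::sin(t1) * std::cos(t2);
    if (std::abs(det) < 1e-6f)        // (nearly) parallel lines do not intersect
        return false;
    corner.x = (r1 * std::sin(t2) - r2 * std::sin(t1)) / det;
    corner.y = (r2 * std::cos(t1) - r1 * std::cos(t2)) / det;
    return true;
}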
Instead of starting from edge detection you could use Corner detection.
The Marvin Framework provides an implementation of the Moravec algorithm for this purpose. You could find the corners of the paper as a starting point. Below is the output of Moravec's algorithm:
You can also use MSER (maximally stable extremal regions) over the Sobel operator result to find the stable regions of the image. For each region returned by MSER you can apply a convex hull and polygon approximation to obtain something like this:
But this kind of detection is more useful for live detection than for a single picture, and it does not always return the best result.
After edge-detection, use Hough Transform.
Then, put those points in an SVM (support vector machine) with their labels. If the examples have smooth lines on them, the SVM will not have any difficulty dividing the necessary parts of the example from the other parts. My advice on the SVM: use features like connectivity and length. That is, if points are connected and long, they are likely to be a line of the receipt. Then you can eliminate all of the other points.
Here you have @Vanuan's code using C++:
cv::cvtColor(mat, mat, CV_BGR2GRAY);
cv::GaussianBlur(mat, mat, cv::Size(3,3), 0);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(9,9));
cv::Mat dilated;
cv::dilate(mat, dilated, kernel);

cv::Mat edges;
cv::Canny(dilated, edges, 84, 3);

std::vector<cv::Vec4i> lines;
lines.clear();
cv::HoughLinesP(edges, lines, 1, CV_PI/180, 25);
std::vector<cv::Vec4i>::iterator it = lines.begin();
for(; it != lines.end(); ++it) {
    cv::Vec4i l = *it;
    cv::line(edges, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(255,0,0), 2, 8);
}

std::vector< std::vector<cv::Point> > contours;
cv::findContours(edges, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_TC89_KCOS);

std::vector< std::vector<cv::Point> > contoursCleaned;
for (int i = 0; i < contours.size(); i++) {
    if (cv::arcLength(contours[i], false) > 100)
        contoursCleaned.push_back(contours[i]);
}

std::vector< std::vector<cv::Point> > contoursArea;
for (int i = 0; i < contoursCleaned.size(); i++) {
    if (cv::contourArea(contoursCleaned[i]) > 10000){
        contoursArea.push_back(contoursCleaned[i]);
    }
}

std::vector< std::vector<cv::Point> > contoursDraw(contoursCleaned.size());
for (int i = 0; i < contoursArea.size(); i++){
    cv::approxPolyDP(Mat(contoursArea[i]), contoursDraw[i], 40, true);
}

Mat drawing = Mat::zeros(mat.size(), CV_8UC3);
cv::drawContours(drawing, contoursDraw, -1, cv::Scalar(0,255,0), 1);
Convert to Lab color space.
Use k-means to segment into 2 clusters.
Then use contours or Hough on one of the (internal) clusters; a rough sketch is below.
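A rough sketch of that pipeline, assuming OpenCV's C++ API (k = 2 is from the answer; the function name and the rest of the wiring are my own):
#include "opencv2/opencv.hpp"
using namespace cv;

Mat kmeansSegment(const Mat& bgr)
{
    Mat lab;
    cvtColor(bgr, lab, COLOR_BGR2Lab);

    // reshape to an N x 3 float matrix of Lab samples
    Mat samples = lab.reshape(1, lab.rows * lab.cols);
    samples.convertTo(samples, CV_32F);

    Mat labels, centers;
    kmeans(samples, 2, labels,
           TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0),
           3, KMEANS_PP_CENTERS, centers);

    // turn the cluster labels back into a binary mask
    Mat mask(lab.rows, lab.cols, CV_8UC1);
    for (int i = 0; i < lab.rows * lab.cols; i++)
        mask.at<uchar>(i / lab.cols, i % lab.cols) = labels.at<int>(i) ? 255 : 0;
    return mask;   // run findContours() or HoughLines() on one of the clusters
}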
