I've been using Tesseract to convert documents to text. The quality of the documents ranges wildly, and I'm looking for tips on what sort of image processing might improve the results. I've noticed that highly pixelated text - for example, text generated by fax machines - is especially difficult for Tesseract to process - presumably all those jagged edges on the characters confound the shape-recognition algorithms.
What sort of image processing techniques would improve accuracy? I've been using a Gaussian blur to smooth out the pixelated images and have seen some small improvement, but I'm hoping there is a more specific technique that would yield better results - say, a filter tuned to black-and-white images that would smooth out irregular edges, followed by a filter that would increase the contrast to make the characters more distinct.
Any general tips for someone who is a novice at image processing?
fix the DPI (if needed); 300 DPI is the minimum
fix the text size (e.g. 12 pt should be ok)
try to fix the text lines (deskew and dewarp the text)
try to fix the illumination of the image (e.g. no dark parts of the image)
binarize and de-noise the image (see the sketch below)
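For the illumination and binarization/de-noising points, a minimal sketch with OpenCV might look like the following; the file name "page.png" and the kernel/blur sizes are placeholder assumptions to tune for your scans:
import cv2
import numpy as np

gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# estimate the (possibly uneven) background illumination and divide it out
background = cv2.medianBlur(cv2.dilate(gray, np.ones((7, 7), np.uint8)), 21)
normalized = cv2.divide(gray, background, scale=255)

# binarize with Otsu and remove small speckles with a median filter
binary = cv2.threshold(normalized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.imwrite("page_preprocessed.png", cv2.medianBlur(binary, 3))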
There is no universal command line that fits all cases (sometimes you need to blur and sharpen an image). But you can give TEXTCLEANER from Fred's ImageMagick Scripts a try.
If you are not a fan of the command line, maybe you can try the open-source scantailor.sourceforge.net or the commercial bookrestorer.
I am by no means an OCR expert, but this week I needed to convert text out of a JPG.
I started with a color RGB 445x747 pixel JPG.
I immediately tried tesseract on this, and the program converted almost nothing.
I then went into GIMP and did the following.
image > mode > grayscale
image > scale image > 1191x2000 pixels
filters > enhance > unsharp mask with values of
radius = 6.8, amount = 2.69, threshold = 0
I then saved as a new jpg at 100% quality.
Tesseract was then able to extract all of the text into a .txt file.
Gimp is your friend.
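If you want to script the same steps instead of clicking through GIMP, a rough Pillow equivalent could look like this; the file names are placeholders, and mapping GIMP's unsharp amount of 2.69 to percent=269 is my assumption, not an exact GIMP correspondence:
from PIL import Image, ImageFilter

img = Image.open("scan.jpg").convert("L")        # grayscale
img = img.resize((1191, 2000), Image.LANCZOS)    # upscale
img = img.filter(ImageFilter.UnsharpMask(radius=7, percent=269, threshold=0))
img.save("scan_for_ocr.jpg", quality=100)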
As a rule of thumb, I usually apply the following image pre-processing techniques using OpenCV library:
Rescaling the image (recommended if you're working with images that have a DPI of less than 300):
img = cv2.resize(img, None, fx=1.2, fy=1.2, interpolation=cv2.INTER_CUBIC)
Converting image to grayscale:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Applying dilation and erosion to remove the noise (you may play with the kernel size depending on your data set):
kernel = np.ones((1, 1), np.uint8)
img = cv2.dilate(img, kernel, iterations=1)
img = cv2.erode(img, kernel, iterations=1)
Applying blurring combined with thresholding, which can be done using one of the following lines (each has its pros and cons; however, median blur and the bilateral filter usually perform better than Gaussian blur):
cv2.threshold(cv2.GaussianBlur(img, (5, 5), 0), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.threshold(cv2.bilateralFilter(img, 5, 75, 75), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.threshold(cv2.medianBlur(img, 3), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
cv2.adaptiveThreshold(cv2.GaussianBlur(img, (5, 5), 0), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
cv2.adaptiveThreshold(cv2.bilateralFilter(img, 9, 75, 75), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
cv2.adaptiveThreshold(cv2.medianBlur(img, 3), 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
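Putting the snippets above together, a minimal end-to-end sketch might look like the following; the file name, the scale factor, and the choice of median blur plus Otsu (rather than the other variants) are assumptions you should tune for your data set:
import cv2
import numpy as np
import pytesseract  # assumes the pytesseract wrapper (and tesseract itself) is installed

def preprocess(path):
    img = cv2.imread(path)
    img = cv2.resize(img, None, fx=1.2, fy=1.2, interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # median blur + Otsu; swap in one of the other thresholding lines if it works better
    return cv2.threshold(cv2.medianBlur(img, 3), 0, 255,
                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

print(pytesseract.image_to_string(preprocess("document.jpg")))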
I've recently written a pretty simple guide to Tesseract. It should enable you to write your first OCR script and clear up some hurdles I ran into when the documentation was less clear than I would have liked.
In case you'd like to check them out, here are the links:
Getting started with Tesseract - Part I: Introduction
Getting started with Tesseract - Part II: Image Pre-processing
Three points to improve the readability of the image:
Resize the image with variable height and width (multiply the image height and width by 0.5, 1, and 2).
Convert the image to grayscale format (black and white).
Remove the noise pixels and make the image clearer (filter the image).
Refer to the code below:
Resize
public Bitmap Resize(Bitmap bmp, int newWidth, int newHeight)
{
Bitmap temp = (Bitmap)bmp;
Bitmap bmap = new Bitmap(newWidth, newHeight, temp.PixelFormat);
double nWidthFactor = (double)temp.Width / (double)newWidth;
double nHeightFactor = (double)temp.Height / (double)newHeight;
double fx, fy, nx, ny;
int cx, cy, fr_x, fr_y;
Color color1 = new Color();
Color color2 = new Color();
Color color3 = new Color();
Color color4 = new Color();
byte nRed, nGreen, nBlue;
byte bp1, bp2;
for (int x = 0; x < bmap.Width; ++x)
{
for (int y = 0; y < bmap.Height; ++y)
{
fr_x = (int)Math.Floor(x * nWidthFactor);
fr_y = (int)Math.Floor(y * nHeightFactor);
cx = fr_x + 1;
if (cx >= temp.Width) cx = fr_x;
cy = fr_y + 1;
if (cy >= temp.Height) cy = fr_y;
fx = x * nWidthFactor - fr_x;
fy = y * nHeightFactor - fr_y;
nx = 1.0 - fx;
ny = 1.0 - fy;
color1 = temp.GetPixel(fr_x, fr_y);
color2 = temp.GetPixel(cx, fr_y);
color3 = temp.GetPixel(fr_x, cy);
color4 = temp.GetPixel(cx, cy);
// Blue
bp1 = (byte)(nx * color1.B + fx * color2.B);
bp2 = (byte)(nx * color3.B + fx * color4.B);
nBlue = (byte)(ny * (double)(bp1) + fy * (double)(bp2));
// Green
bp1 = (byte)(nx * color1.G + fx * color2.G);
bp2 = (byte)(nx * color3.G + fx * color4.G);
nGreen = (byte)(ny * (double)(bp1) + fy * (double)(bp2));
// Red
bp1 = (byte)(nx * color1.R + fx * color2.R);
bp2 = (byte)(nx * color3.R + fx * color4.R);
nRed = (byte)(ny * (double)(bp1) + fy * (double)(bp2));
bmap.SetPixel(x, y, System.Drawing.Color.FromArgb
(255, nRed, nGreen, nBlue));
}
}
bmap = SetGrayscale(bmap);
bmap = RemoveNoise(bmap);
return bmap;
}
SetGrayscale
public Bitmap SetGrayscale(Bitmap img)
{
Bitmap temp = (Bitmap)img;
Bitmap bmap = (Bitmap)temp.Clone();
Color c;
for (int i = 0; i < bmap.Width; i++)
{
for (int j = 0; j < bmap.Height; j++)
{
c = bmap.GetPixel(i, j);
byte gray = (byte)(.299 * c.R + .587 * c.G + .114 * c.B);
bmap.SetPixel(i, j, Color.FromArgb(gray, gray, gray));
}
}
return (Bitmap)bmap.Clone();
}
RemoveNoise
public Bitmap RemoveNoise(Bitmap bmap)
{
for (var x = 0; x < bmap.Width; x++)
{
for (var y = 0; y < bmap.Height; y++)
{
var pixel = bmap.GetPixel(x, y);
if (pixel.R < 162 && pixel.G < 162 && pixel.B < 162)
bmap.SetPixel(x, y, Color.Black);
else if (pixel.R > 162 && pixel.G > 162 && pixel.B > 162)
bmap.SetPixel(x, y, Color.White);
}
}
return bmap;
}
INPUT IMAGE
OUTPUT IMAGE
This was a while ago, but it might still be useful.
My experience shows that resizing the image in-memory before passing it to tesseract sometimes helps.
Try different modes of interpolation. The post https://stackoverflow.com/a/4756906/146003 helped me a lot.
What was EXTREMELY helpful to me here was the source code of the Capture2Text project:
http://sourceforge.net/projects/capture2text/files/Capture2Text/.
BTW: kudos to its author for sharing such a painstaking algorithm.
Pay special attention to the file Capture2Text\SourceCode\leptonica_util\leptonica_util.c - that's the essence of image preprocessing for this utility.
If you run the binaries, you can check the image transformation before/after the process in the Capture2Text\Output\ folder.
P.S. The mentioned solution uses Tesseract for OCR and Leptonica for preprocessing.
Java version of Sathyaraj's code above:
// Resize
public Bitmap resize(Bitmap img, int newWidth, int newHeight) {
Bitmap bmap = img.copy(img.getConfig(), true);
double nWidthFactor = (double) img.getWidth() / (double) newWidth;
double nHeightFactor = (double) img.getHeight() / (double) newHeight;
double fx, fy, nx, ny;
int cx, cy, fr_x, fr_y;
int color1;
int color2;
int color3;
int color4;
// use int rather than byte for channel values: Java bytes are signed, so values above 127 would wrap
int nRed, nGreen, nBlue;
int bp1, bp2;
for (int x = 0; x < bmap.getWidth(); ++x) {
for (int y = 0; y < bmap.getHeight(); ++y) {
fr_x = (int) Math.floor(x * nWidthFactor);
fr_y = (int) Math.floor(y * nHeightFactor);
cx = fr_x + 1;
if (cx >= img.getWidth())
cx = fr_x;
cy = fr_y + 1;
if (cy >= img.getHeight())
cy = fr_y;
fx = x * nWidthFactor - fr_x;
fy = y * nHeightFactor - fr_y;
nx = 1.0 - fx;
ny = 1.0 - fy;
color1 = img.getPixel(fr_x, fr_y);
color2 = img.getPixel(cx, fr_y);
color3 = img.getPixel(fr_x, cy);
color4 = img.getPixel(cx, cy);
// Blue
bp1 = (int) (nx * Color.blue(color1) + fx * Color.blue(color2));
bp2 = (int) (nx * Color.blue(color3) + fx * Color.blue(color4));
nBlue = (int) (ny * (double) (bp1) + fy * (double) (bp2));
// Green
bp1 = (int) (nx * Color.green(color1) + fx * Color.green(color2));
bp2 = (int) (nx * Color.green(color3) + fx * Color.green(color4));
nGreen = (int) (ny * (double) (bp1) + fy * (double) (bp2));
// Red
bp1 = (int) (nx * Color.red(color1) + fx * Color.red(color2));
bp2 = (int) (nx * Color.red(color3) + fx * Color.red(color4));
nRed = (int) (ny * (double) (bp1) + fy * (double) (bp2));
bmap.setPixel(x, y, Color.argb(255, nRed, nGreen, nBlue));
}
}
bmap = setGrayscale(bmap);
bmap = removeNoise(bmap);
return bmap;
}
// SetGrayscale
private Bitmap setGrayscale(Bitmap img) {
Bitmap bmap = img.copy(img.getConfig(), true);
int c;
for (int i = 0; i < bmap.getWidth(); i++) {
for (int j = 0; j < bmap.getHeight(); j++) {
c = bmap.getPixel(i, j);
int gray = (int) (.299 * Color.red(c) + .587 * Color.green(c)
+ .114 * Color.blue(c));
bmap.setPixel(i, j, Color.argb(255, gray, gray, gray));
}
}
return bmap;
}
// RemoveNoise
private Bitmap removeNoise(Bitmap bmap) {
for (int x = 0; x < bmap.getWidth(); x++) {
for (int y = 0; y < bmap.getHeight(); y++) {
int pixel = bmap.getPixel(x, y);
if (Color.red(pixel) < 162 && Color.green(pixel) < 162 && Color.blue(pixel) < 162) {
bmap.setPixel(x, y, Color.BLACK);
}
}
}
for (int x = 0; x < bmap.getWidth(); x++) {
for (int y = 0; y < bmap.getHeight(); y++) {
int pixel = bmap.getPixel(x, y);
if (Color.red(pixel) > 162 && Color.green(pixel) > 162 && Color.blue(pixel) > 162) {
bmap.setPixel(x, y, Color.WHITE);
}
}
}
return bmap;
}
The Tesseract documentation contains some good details on how to improve the OCR quality via image processing steps.
To some degree, Tesseract automatically applies them. It is also possible to tell Tesseract to write an intermediate image for inspection, i.e. to check how well the internal image processing works (search for tessedit_write_images in the above reference).
More importantly, the new neural network system in Tesseract 4 yields much better OCR results - in general and especially for images with some noise. It is enabled with --oem 1, e.g. as in:
$ tesseract --oem 1 -l deu page.png result pdf
(this example selects the German language)
Thus, it makes sense to first test how far you get with the new Tesseract LSTM mode before applying any custom image pre-processing steps.
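If you drive Tesseract from Python, the same switches can be passed through pytesseract; a hedged sketch (the file name and language are placeholders, and tessedit_write_images is the debug variable mentioned above):
import pytesseract
from PIL import Image

# --oem 1 selects the LSTM engine; tessedit_write_images makes Tesseract dump the
# internally preprocessed image so you can inspect what it actually "sees"
text = pytesseract.image_to_string(
    Image.open("page.png"),
    lang="deu",
    config="--oem 1 -c tessedit_write_images=true",
)
print(text)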
Adaptive thresholding is important if the lighting is uneven across the image.
My preprocessing using GraphicsMagick is mentioned in this post:
https://groups.google.com/forum/#!topic/tesseract-ocr/jONGSChLRv4
GraphicsMagick also has the -lat feature for Linear-time Adaptive Thresholding, which I will try soon.
Another method of thresholding using OpenCV is described here:
https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html
I did the following to get good results out of an image whose text is not very small; a rough OpenCV sketch of these steps follows the list.
Apply blur to the original image.
Apply Adaptive Threshold.
Apply Sharpening effect.
And if you're still not getting good results, scale the image to 150% or 200%.
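Here is one way those four steps might look in OpenCV; the blur size, block size, sharpening kernel, and scale factor are placeholder values to experiment with:
import cv2
import numpy as np

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# 1) blur the original image
blurred = cv2.GaussianBlur(img, (3, 3), 0)

# 2) adaptive threshold
thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 2)

# 3) sharpening effect (simple sharpening kernel)
sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
sharpened = cv2.filter2D(thresh, -1, sharpen_kernel)

# 4) if results are still poor, scale the image to 150% or 200%
scaled = cv2.resize(sharpened, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("page_ready.png", scaled)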
Reading text from document images with any OCR engine has many issues when it comes to accuracy. There is no fixed solution for all cases, but here are a few things that should be considered to improve OCR results.
1) Presence of noise due to poor image quality / unwanted elements or blobs in the background region. This requires some pre-processing operations like noise removal, which can easily be done using a Gaussian filter or a normal median filter. These are also available in OpenCV.
2) Wrong orientation of the image: because of wrong orientation, the OCR engine fails to segment the lines and words in the image correctly, which gives the worst accuracy.
3) Presence of lines: while doing word or line segmentation, the OCR engine sometimes also tries to merge words and lines together, thus processing the wrong content and giving wrong results. There are other issues too, but these are the basic ones.
This post (OCR application) is an example case where some image pre-processing and post-processing of the OCR result can be applied to get better OCR accuracy.
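For point 3 above, ruled lines can often be erased before OCR with a morphological opening using long, thin kernels; a hedged sketch (the file name and kernel lengths are assumptions to tune):
import cv2
import numpy as np

gray = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)
binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# detect long horizontal and vertical runs (the ruled lines)
h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
h_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
v_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)

# subtract the detected lines from the text mask and go back to dark text on white
cleaned = cv2.subtract(binary, cv2.bitwise_or(h_lines, v_lines))
cv2.imwrite("form_no_lines.png", 255 - cleaned)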
Text recognition depends on a variety of factors to produce good-quality output. OCR output depends heavily on the quality of the input image. This is why every OCR engine provides guidelines regarding the quality and size of the input image. These guidelines help the OCR engine produce accurate results.
I have written a detailed article on image processing in Python; kindly follow the link below for more explanation. I have also added the Python source code to implement those processes.
Please write a comment if you have a suggestion or a better idea on this topic.
https://medium.com/cashify-engineering/improve-accuracy-of-ocr-using-image-preprocessing-8df29ec3a033
You can do noise reduction and then apply thresholding, and you can also play around with the configuration of the OCR by changing the --psm and --oem values.
try:
--psm 5
--oem 2
you can also look at the following link for further details
here
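One hedged way to choose those values empirically is to sweep a few --psm modes and compare the mean word confidence that pytesseract reports; this sketch assumes pytesseract is installed and "sample.png" is your test image:
import pytesseract
from PIL import Image

img = Image.open("sample.png")
for psm in (3, 4, 5, 6, 11):
    data = pytesseract.image_to_data(img, config=f"--oem 1 --psm {psm}",
                                     output_type=pytesseract.Output.DICT)
    confs = [float(c) for c in data["conf"] if float(c) >= 0]  # -1 means "no text here"
    if confs:
        print(psm, sum(confs) / len(confs))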
So far, I've played a lot with tesseract 3.x, 4.x and 5.0.0.
tesseract 4.x and 5.x seem to yield the exact same accuracy.
Sometimes I get better results with the legacy engine (using --oem 0) and sometimes I get better results with the LSTM engine (--oem 1).
Generally speaking, I get the best results on upscaled images with the LSTM engine. The latter is on par with my earlier engine (ABBYY CLI OCR 11 for Linux).
Of course, the traineddata needs to be downloaded from GitHub, since most Linux distros will only provide the fast versions.
The trained data that works for both the legacy and LSTM engines can be downloaded at https://github.com/tesseract-ocr/tessdata with a command like the following. Don't forget to download the OSD trained data too.
curl -L https://github.com/tesseract-ocr/tessdata/blob/main/eng.traineddata?raw=true -o /usr/share/tesseract/tessdata/eng.traineddata
curl -L https://github.com/tesseract-ocr/tessdata/blob/main/osd.traineddata?raw=true -o /usr/share/tesseract/tessdata/osd.traineddata
I've ended up using ImageMagick as my image preprocessor since it's convenient and can easily run scripted. You can install it with yum install ImageMagick or apt install imagemagick depending on your distro flavor.
So here's my oneliner preprocessor that fits most of the stuff I feed to my OCR:
convert my_document.jpg -units PixelsPerInch -respect-parenthesis \( -compress LZW -resample 300 -bordercolor black -border 1 -trim +repage -fill white -draw "color 0,0 floodfill" -alpha off -shave 1x1 \) \( -bordercolor black -border 2 -fill white -draw "color 0,0 floodfill" -alpha off -shave 0x1 -deskew 40 +repage \) -antialias -sharpen 0x3 preprocessed_my_document.tiff
Basically we:
use TIFF format since tesseract likes it more than JPG (decompressor related, who knows)
use lossless LZW TIFF compression
Resample the image to 300dpi
Use some black magic to remove unwanted colors
Try to rotate the page if rotation can be detected
Antialias the image
Sharpen text
The resulting image can then be fed to tesseract with:
tesseract -l eng preprocessed_my_document.tiff - --oem 1 --psm 1
Btw, some years ago I wrote the 'poor man's OCR server', which checks for changed files in a given directory and launches OCR operations on all files that have not already been OCRed. pmocr is compatible with tesseract 3.x-5.x and abbyyocr11.
See the pmocr project on github.
I am trying to find a way to split the lines of text in a scanned document that has been adaptively thresholded. Right now, I am storing the pixel values of the document as unsigned ints from 0 to 255, taking the average of the pixels in each row, and splitting the rows into ranges based on whether the average pixel value is larger than 250; I then take the median of each range of rows for which this holds. However, this method sometimes fails, as there can be black splotches on the image.
Is there a more noise-resistant way to do this task?
EDIT: Here is some code. "warped" is the name of the original image, "cuts" is where I want to split the image.
import numpy as np
from skimage.filters import threshold_adaptive  # older scikit-image API

warped = threshold_adaptive(warped, 250, offset=10)
warped = warped.astype("uint8") * 255

# get areas where we can split image on whitespace to make OCR more accurate
color_level = np.array([np.sum(line) / len(line) for line in warped])

cuts = []
i = 0
while i < len(color_level):
    if color_level[i] > 250:
        begin = i
        while i < len(color_level) and color_level[i] > 250:
            i += 1
        cuts.append((i + begin) / 2)  # middle of the whitespace region
    else:
        i += 1
EDIT 2: Sample image added
From your input image, you need to make the text white and the background black:
You then need to compute the rotation angle of your bill. A simple approach is to find the minAreaRect of all white points (findNonZero), and you get:
Then you can rotate your bill so that the text is horizontal:
Now you can compute the horizontal projection (reduce). You can take the average value of each row. Apply a threshold th on the histogram to account for some noise in the image (here I used 0, i.e. no noise). Rows containing only background will have a value > 0 in the histogram, while text rows will have a value of 0. Then take the average bin coordinate of each continuous run of white bins in the histogram. That will be the y coordinate of your text lines:
Here is the code. It's in C++, but since most of the work is done with OpenCV functions, it should be easily convertible to Python. At least, you can use this as a reference:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
// Read image
Mat3b img = imread("path_to_image");
// Binarize image. Text is white, background is black
Mat1b bin;
cvtColor(img, bin, COLOR_BGR2GRAY);
bin = bin < 200;
// Find all white pixels
vector<Point> pts;
findNonZero(bin, pts);
// Get rotated rect of white pixels
RotatedRect box = minAreaRect(pts);
if (box.size.width > box.size.height)
{
swap(box.size.width, box.size.height);
box.angle += 90.f;
}
Point2f vertices[4];
box.points(vertices);
for (int i = 0; i < 4; ++i)
{
line(img, vertices[i], vertices[(i + 1) % 4], Scalar(0, 255, 0));
}
// Rotate the image according to the found angle
Mat1b rotated;
Mat M = getRotationMatrix2D(box.center, box.angle, 1.0);
warpAffine(bin, rotated, M, bin.size());
// Compute horizontal projections
Mat1f horProj;
reduce(rotated, horProj, 1, CV_REDUCE_AVG);
// Remove noise in histogram. White bins identify space lines, black bins identify text lines
float th = 0;
Mat1b hist = horProj <= th;
// Get the mean coordinate of each group of white pixels
vector<int> ycoords;
int y = 0;
int count = 0;
bool isSpace = false;
for (int i = 0; i < rotated.rows; ++i)
{
if (!isSpace)
{
if (hist(i))
{
isSpace = true;
count = 1;
y = i;
}
}
else
{
if (!hist(i))
{
isSpace = false;
ycoords.push_back(y / count);
}
else
{
y += i;
count++;
}
}
}
// Draw line as final result
Mat3b result;
cvtColor(rotated, result, COLOR_GRAY2BGR);
for (int i = 0; i < ycoords.size(); ++i)
{
line(result, Point(0, ycoords[i]), Point(result.cols, ycoords[i]), Scalar(0, 255, 0));
}
return 0;
}
Basic steps, as in @Miki's answer:
read the source
threshold
find the minAreaRect
warp by the rotation matrix
find and draw the upper and lower bounds
The code in Python:
#!/usr/bin/python3
# 2018.01.16 01:11:49 CST
# 2018.01.16 01:55:01 CST
import cv2
import numpy as np
## (1) read
img = cv2.imread("img02.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## (2) threshold
th, threshed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)
## (3) minAreaRect on the nozeros
pts = cv2.findNonZero(threshed)
ret = cv2.minAreaRect(pts)
(cx,cy), (w,h), ang = ret
if w > h:
    w, h = h, w
    ang += 90
## (4) Find rotated matrix, do rotation
M = cv2.getRotationMatrix2D((cx,cy), ang, 1.0)
rotated = cv2.warpAffine(threshed, M, (img.shape[1], img.shape[0]))
## (5) find and draw the upper and lower boundary of each lines
hist = cv2.reduce(rotated,1, cv2.REDUCE_AVG).reshape(-1)
th = 2
H,W = img.shape[:2]
uppers = [y for y in range(H-1) if hist[y]<=th and hist[y+1]>th]
lowers = [y for y in range(H-1) if hist[y]>th and hist[y+1]<=th]
rotated = cv2.cvtColor(rotated, cv2.COLOR_GRAY2BGR)
for y in uppers:
    cv2.line(rotated, (0, y), (W, y), (255, 0, 0), 1)
for y in lowers:
    cv2.line(rotated, (0, y), (W, y), (0, 255, 0), 1)
cv2.imwrite("result.png", rotated)
Final result:
I would like to detect this pattern:
As you can see, it's basically the letter C inside another one, with different orientations. My pattern can have multiple C's inside one another; the one I'm posting with 2 C's is just a sample. I would like to detect how many C's there are, and the orientation of each one. For now I've managed to detect the center of such a pattern - basically the center of the innermost C. Could you please provide me with any ideas about different algorithms I could use?
And here we go! A high-level overview of this approach can be described as the sequential execution of the following steps:
Load the input image;
Convert it to grayscale;
Threshold it to generate a binary image;
Use the binary image to find contours;
Fill each area of contours with a different color (so we can extract each letter separately);
Create a mask for each letter found to isolate them in separate images;
Crop the images to the smallest possible size;
Figure out the center of the image;
Figure out the width of the letter's border to identify the exact center of the border;
Scan along the border (in a circular fashion) for discontinuity;
Figure out an approximate angle for the discontinuity, thus identifying the amount of rotation of the letter.
I don't want to get into too much detail since I'm sharing the source code, so feel free to test and change it in any way you like.
Let's start, Winter Is Coming:
#include <iostream>
#include <vector>
#include <cmath>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
cv::RNG rng(12345);
float PI = std::atan(1) * 4;
void isolate_object(const cv::Mat& input, cv::Mat& output)
{
if (input.channels() != 1)
{
std::cout << "isolate_object: !!! input must be grayscale" << std::endl;
return;
}
// Store the set of points in the image before assembling the bounding box
std::vector<cv::Point> points;
cv::Mat_<uchar>::const_iterator it = input.begin<uchar>();
cv::Mat_<uchar>::const_iterator end = input.end<uchar>();
for (; it != end; ++it)
{
if (*it) points.push_back(it.pos());
}
// Compute minimal bounding box
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
// Set Region of Interest to the area defined by the box
cv::Rect roi;
roi.x = box.center.x - (box.size.width / 2);
roi.y = box.center.y - (box.size.height / 2);
roi.width = box.size.width;
roi.height = box.size.height;
// Crop the original image to the defined ROI
output = input(roi);
}
For more details on the implementation of isolate_object() please check this thread. cv::RNG is used later on to fill each contour with a different color, and PI, well... you know PI.
int main(int argc, char* argv[])
{
// Load input (colored, 3-channel, BGR)
cv::Mat input = cv::imread("test.jpg");
if (input.empty())
{
std::cout << "!!! Failed imread() #1" << std::endl;
return -1;
}
// Convert colored image to grayscale
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGR2GRAY);
// Execute a threshold operation to get a binary image from the grayscale
cv::Mat binary;
cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);
The binary image looks exactly like the input because it only had 2 colors (B&W):
// Find the contours of the C's in the thresholded image
std::vector<std::vector<cv::Point> > contours;
cv::findContours(binary, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
// Fill the contours found with unique colors to isolate them later
cv::Mat colored_contours = input.clone();
std::vector<cv::Scalar> fill_colors;
for (size_t i = 0; i < contours.size(); i++)
{
std::vector<cv::Point> cnt = contours[i];
double area = cv::contourArea(cv::Mat(cnt));
//std::cout << "* Area: " << area << std::endl;
// Fill each C found with a different color.
// If the area is larger than 100k it's probably the white background, so we ignore it.
if (area > 10000 && area < 100000)
{
cv::Scalar color = cv::Scalar(rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255));
cv::drawContours(colored_contours, contours, i, color,
CV_FILLED, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
fill_colors.push_back(color);
//cv::imwrite("test_contours.jpg", colored_contours);
}
}
What colored_contours looks like:
// Create a mask for each C found to isolate them from each other
for (int i = 0; i < fill_colors.size(); i++)
{
// After inRange() single_color_mask stores a single C letter
cv::Mat single_color_mask = cv::Mat::zeros(input.size(), CV_8UC1);
cv::inRange(colored_contours, fill_colors[i], fill_colors[i], single_color_mask);
//cv::imwrite("test_mask.jpg", single_color_mask);
Since this for loop is executed twice, once for each color that was used to fill the contours, I want you to see all the images that were generated by this stage. So the following images are the ones that were stored by single_color_mask (one for each iteration of the loop):
// Crop image to the area of the object
cv::Mat cropped;
isolate_object(single_color_mask, cropped);
//cv::imwrite("test_cropped.jpg", cropped);
cv::Mat orig_cropped = cropped.clone();
These are the ones that were stored by cropped (by the way, the smaller C looks fat because the image is rescaled by this page to have the same size as the larger C, don't worry):
// Figure out the center of the image
cv::Point obj_center(cropped.cols/2, cropped.rows/2);
//cv::circle(cropped, obj_center, 3, cv::Scalar(128, 128, 128));
//cv::imwrite("test_cropped_center.jpg", cropped);
To make it clearer what obj_center is for, I painted a little gray circle at that location for educational purposes:
// Figure out the exact center location of the border
std::vector<cv::Point> border_points;
for (int y = 0; y < cropped.cols; y++)
{
if (cropped.at<uchar>(obj_center.x, y) != 0)
border_points.push_back(cv::Point(obj_center.x, y));
if (border_points.size() > 0 && cropped.at<uchar>(obj_center.x, y) == 0)
break;
}
if (border_points.size() == 0)
{
std::cout << "!!! Oops! No border detected." << std::endl;
return 0;
}
// Figure out the exact center location of the border
cv::Point border_center = border_points[border_points.size() / 2];
//cv::circle(cropped, border_center, 3, cv::Scalar(128, 128, 128));
//cv::imwrite("test_border_center.jpg", cropped);
The procedure above scans a single vertical line from the top/middle of the image to find the borders of the circle so that its width can be calculated. Again, for educational purposes, I painted a small gray circle in the middle of the border. This is what cropped looks like:
// Scan the border of the circle for discontinuities
int radius = obj_center.y - border_center.y;
if (radius < 0)
radius *= -1;
std::vector<cv::Point> discontinuity_points;
std::vector<int> discontinuity_angles;
for (int angle = 0; angle <= 360; angle++)
{
int x = obj_center.x + (radius * cos((angle+90) * (PI / 180.f)));
int y = obj_center.y + (radius * sin((angle+90) * (PI / 180.f)));
if (cropped.at<uchar>(x, y) < 128)
{
discontinuity_points.push_back(cv::Point(y, x));
discontinuity_angles.push_back(angle);
//cv::circle(cropped, cv::Point(y, x), 1, cv::Scalar(128, 128, 128));
}
}
//std::cout << "Discontinuity size: " << discontinuity_points.size() << std::endl;
if (discontinuity_points.size() == 0 && discontinuity_angles.size() == 0)
{
std::cout << "!!! Oops! No discontinuity detected. It's a perfect circle, dang!" << std::endl;
return 0;
}
Great, so the piece of code above scans along the middle of the circle's border looking for discontinuity. I'm sharing a sample image to illustrate what I mean. Every gray dot on the image represents a pixel that is tested. When the pixel is black it means we found a discontinuity:
// Figure out the approximate angle of the discontinuity:
// the first angle found will suffice for this demo.
int approx_angle = discontinuity_angles[0];
std::cout << "#" << i << " letter C is rotated approximately at: " << approx_angle << " degrees" << std::endl;
// Figure out the central point of the discontinuity
cv::Point discontinuity_center;
for (int a = 0; a < discontinuity_points.size(); a++)
discontinuity_center += discontinuity_points[a];
discontinuity_center.x /= discontinuity_points.size();
discontinuity_center.y /= discontinuity_points.size();
cv::circle(orig_cropped, discontinuity_center, 2, cv::Scalar(128, 128, 128));
cv::imshow("Original crop", orig_cropped);
cv::waitKey(0);
}
return 0;
}
Very well... This last piece of code is responsible for figuring out the approximate angle of the discontinuity as well as indicating the central point of the discontinuity. The following images are stored by orig_cropped. Once again I added a gray dot to show the exact positions detected as the centers of the gaps:
When executed, this application prints the following information to the screen:
#0 letter C is rotated approximately at: 49 degrees
#1 letter C is rotated approximately at: 0 degrees
I hope it helps.
For a start you could use the Hough transform. This algorithm is not very fast, but it's quite robust, especially if you have such clear images.
The general approach would be:
1) preprocessing - suppress noise, convert to grayscale / binary
2) run edge detector
3) run Hough transform - IIRC it's `cv::HoughCircles` in OpenCV
4) do some postprocessing - remove surplus circles, decide which ones correspond to shape of letter C, and so on
My approach will give you 2 Hough circles per letter C: one on the inner boundary, one on the outer. If you want only one circle per letter, you can use a skeletonization algorithm. More info here: http://homepages.inf.ed.ac.uk/rbf/HIPR2/skeleton.htm
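A hedged OpenCV sketch of steps 1-3 above; the file name and all Hough parameters are guesses that would need tuning for the real image:
import cv2
import numpy as np

img = cv2.imread("c_pattern.png")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)   # 1) suppress noise
edges = cv2.Canny(gray, 100, 200)                                 # 2) edge detector
cv2.imwrite("c_edges.png", edges)

# 3) Hough transform; each letter C should yield an inner and an outer circle
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=200, param2=50, minRadius=10, maxRadius=0)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
cv2.imwrite("c_circles.png", img)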
Given that we have nested C structures, and that you know the centres of the Cs and would like to evaluate their orientations, one simply needs to observe the distribution of pixels along the radius of the concentric Cs in all directions.
This can be done by performing a simple morphological dilation from the centre. As we reach the right radius for the innermost C, we will reach the maximum number of pixels covered for the innermost C. The difference between the disc and the C gives us the location of the gap, and one can perform an ultimate erosion to get the centroid of the gap in the C. The angle between the centre and this point is the orientation of the C. This step is iterated until all Cs are covered.
This can also be done quickly using the distance function from the centre point of the Cs.
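A rough sketch of that idea for a single C, assuming you already have a binary mask of the letter (c_mask, 255 = letter pixels), its centre (cx, cy), and approximate inner/outer radii, e.g. taken from the Hough circles above:
import cv2
import numpy as np

def c_orientation(c_mask, cx, cy, r_inner, r_outer):
    # build an annulus that covers the stroke of the C
    ring = np.zeros_like(c_mask)
    cv2.circle(ring, (cx, cy), r_outer, 255, -1)
    cv2.circle(ring, (cx, cy), r_inner, 0, -1)

    # the part of the annulus not covered by the (slightly dilated) letter is the gap
    gap = cv2.subtract(ring, cv2.dilate(c_mask, np.ones((3, 3), np.uint8)))
    gap = cv2.erode(gap, np.ones((3, 3), np.uint8))   # drop thin fringe pixels

    ys, xs = np.nonzero(gap)
    if len(xs) == 0:
        return None                                   # no gap found (closed ring)
    # angle from the centre to the gap's centroid, in image coordinates (y points down)
    return np.degrees(np.arctan2(ys.mean() - cy, xs.mean() - cx))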
I am trying to detect full circles and semicircles in an image.
I am following the below mentioned procedure:
Process image (including Canny edge detection).
Find contours and draw them on an empty image, so that I can eliminate unwanted components
(The processed image is exactly what I want).
Detect circles using HoughCircles. And, this is what I get:
I tried varying the parameters in HoughCircles, but the results are not consistent; they vary based on the lighting and the position of the circles in the image.
I accept or reject a circle based on its size, so the result is not acceptable. Also, I have a long list of "acceptable" circles, so I need some allowance in the HoughCircles params.
As for full circles, it's easy - I can simply find the "roundness" of the contour. The problem is the semicircles!
Please find below the edited image before the Hough transform:
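For reference, the "roundness" test mentioned above is commonly computed as circularity = 4*pi*area / perimeter^2, which is 1.0 for a perfect circle and noticeably lower for a semicircle; a hedged OpenCV sketch:
import cv2
import numpy as np

def circularity(contour):
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0:
        return 0.0
    return 4 * np.pi * area / (perimeter * perimeter)

# e.g. keep only contours that look like full circles (the 0.8 threshold is a guess)
# contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# full_circles = [c for c in contours if circularity(c) > 0.8]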
Use HoughCircles directly on your image; don't extract edges first.
Then test, for each detected circle, what percentage of it is really present in the image:
int main()
{
cv::Mat color = cv::imread("../houghCircles.png");
cv::namedWindow("input"); cv::imshow("input", color);
cv::Mat canny;
cv::Mat gray;
/// Convert it to gray
cv::cvtColor( color, gray, CV_BGR2GRAY );
// compute canny (don't blur with that image quality!!)
cv::Canny(gray, canny, 200,20);
cv::namedWindow("canny2"); cv::imshow("canny2", canny>0);
std::vector<cv::Vec3f> circles;
/// Apply the Hough Transform to find the circles
cv::HoughCircles( gray, circles, CV_HOUGH_GRADIENT, 1, 60, 200, 20, 0, 0 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
cv::circle( color, center, 3, Scalar(0,255,255), -1);
cv::circle( color, center, radius, Scalar(0,0,255), 1 );
}
//compute distance transform:
cv::Mat dt;
cv::distanceTransform(255-(canny>0), dt, CV_DIST_L2 ,3);
cv::namedWindow("distance transform"); cv::imshow("distance transform", dt/255.0f);
// test for semi-circles:
float minInlierDist = 2.0f;
for( size_t i = 0; i < circles.size(); i++ )
{
// test inlier percentage:
// sample the circle and check for distance to the next edge
unsigned int counter = 0;
unsigned int inlier = 0;
cv::Point2f center((circles[i][0]), (circles[i][1]));
float radius = (circles[i][2]);
// maximal distance of inlier might depend on the size of the circle
float maxInlierDist = radius/25.0f;
if(maxInlierDist<minInlierDist) maxInlierDist = minInlierDist;
//TODO: maybe paramter incrementation might depend on circle size!
for(float t =0; t<2*3.14159265359f; t+= 0.1f)
{
counter++;
float cX = radius*cos(t) + circles[i][0];
float cY = radius*sin(t) + circles[i][1];
if(dt.at<float>(cY,cX) < maxInlierDist)
{
inlier++;
cv::circle(color, cv::Point2i(cX,cY),3, cv::Scalar(0,255,0));
}
else
cv::circle(color, cv::Point2i(cX,cY),3, cv::Scalar(255,0,0));
}
std::cout << 100.0f*(float)inlier/(float)counter << " % of a circle with radius " << radius << " detected" << std::endl;
}
cv::namedWindow("output"); cv::imshow("output", color);
cv::imwrite("houghLinesComputed.png", color);
cv::waitKey(-1);
return 0;
}
For this input:
It gives this output:
The red circles are Hough results.
The green sampled dots on the circle are inliers.
The blue dots are outliers.
Console output:
100 % of a circle with radius 27.5045 detected
100 % of a circle with radius 25.3476 detected
58.7302 % of a circle with radius 194.639 detected
50.7937 % of a circle with radius 23.1625 detected
79.3651 % of a circle with radius 7.64853 detected
If you want to test RANSAC instead of Hough, have a look at this.
Here is another way to do it: a simple RANSAC version (with much optimization still to be done to improve speed) that works on the edge image.
The method loops over these steps until it is cancelled:
randomly choose 3 edge pixels
estimate a circle from them (3 points are enough to identify a circle)
verify or falsify that it's really a circle: count what percentage of the circle is represented by the given edges
if a circle is verified, remove the circle from the input/edges
int main()
{
//RANSAC
//load edge image
cv::Mat color = cv::imread("../circleDetectionEdges.png");
// convert to grayscale
cv::Mat gray;
cv::cvtColor(color, gray, CV_RGB2GRAY);
// get binary image
cv::Mat mask = gray > 0;
//erode the edges to obtain sharp/thin edges (undo the blur?)
cv::erode(mask, mask, cv::Mat());
std::vector<cv::Point2f> edgePositions;
edgePositions = getPointPositions(mask);
// create distance transform to efficiently evaluate distance to nearest edge
cv::Mat dt;
cv::distanceTransform(255-mask, dt,CV_DIST_L1, 3);
//TODO: maybe seed random variable for real random numbers.
unsigned int nIterations = 0;
char quitKey = 'q';
std::cout << "press " << quitKey << " to stop" << std::endl;
while(cv::waitKey(-1) != quitKey)
{
//RANSAC: randomly choose 3 point and create a circle:
//TODO: choose randomly but more intelligent,
//so that it is more likely to choose three points of a circle.
//For example if there are many small circles, it is unlikely to randomly choose 3 points of the same circle.
unsigned int idx1 = rand()%edgePositions.size();
unsigned int idx2 = rand()%edgePositions.size();
unsigned int idx3 = rand()%edgePositions.size();
// we need 3 different samples:
if(idx1 == idx2) continue;
if(idx1 == idx3) continue;
if(idx3 == idx2) continue;
// create circle from 3 points:
cv::Point2f center; float radius;
getCircle(edgePositions[idx1],edgePositions[idx2],edgePositions[idx3],center,radius);
float minCirclePercentage = 0.4f;
// inlier set is unused at the moment but could be used to approximate a (more robust) circle from all inliers
std::vector<cv::Point2f> inlierSet;
//verify or falsify the circle by inlier counting:
float cPerc = verifyCircle(dt,center,radius, inlierSet);
if(cPerc >= minCirclePercentage)
{
std::cout << "accepted circle with " << cPerc*100.0f << " % inlier" << std::endl;
// first step would be to approximate the circle iteratively from ALL INLIER to obtain a better circle center
// but that's a TODO
std::cout << "circle: " << "center: " << center << " radius: " << radius << std::endl;
cv::circle(color, center,radius, cv::Scalar(255,255,0),1);
// accept circle => remove it from the edge list
cv::circle(mask,center,radius,cv::Scalar(0),10);
//update edge positions and distance transform
edgePositions = getPointPositions(mask);
cv::distanceTransform(255-mask, dt,CV_DIST_L1, 3);
}
cv::Mat tmp;
mask.copyTo(tmp);
// prevent cases where no circle could be extracted (because the three points are collinear or similar)
// filter NaN values
if((center.x == center.x)&&(center.y == center.y)&&(radius == radius))
{
cv::circle(tmp,center,radius,cv::Scalar(255));
}
else
{
std::cout << "circle illegal" << std::endl;
}
++nIterations;
cv::namedWindow("RANSAC"); cv::imshow("RANSAC", tmp);
}
std::cout << nIterations << " iterations performed" << std::endl;
cv::namedWindow("edges"); cv::imshow("edges", mask);
cv::namedWindow("color"); cv::imshow("color", color);
cv::imwrite("detectedCircles.png", color);
cv::waitKey(-1);
return 0;
}
float verifyCircle(cv::Mat dt, cv::Point2f center, float radius, std::vector<cv::Point2f> & inlierSet)
{
unsigned int counter = 0;
unsigned int inlier = 0;
float minInlierDist = 2.0f;
float maxInlierDistMax = 100.0f;
float maxInlierDist = radius/25.0f;
if(maxInlierDist<minInlierDist) maxInlierDist = minInlierDist;
if(maxInlierDist>maxInlierDistMax) maxInlierDist = maxInlierDistMax;
// choose samples along the circle and count inlier percentage
for(float t =0; t<2*3.14159265359f; t+= 0.05f)
{
counter++;
float cX = radius*cos(t) + center.x;
float cY = radius*sin(t) + center.y;
if(cX < dt.cols)
if(cX >= 0)
if(cY < dt.rows)
if(cY >= 0)
if(dt.at<float>(cY,cX) < maxInlierDist)
{
inlier++;
inlierSet.push_back(cv::Point2f(cX,cY));
}
}
return (float)inlier/float(counter);
}
inline void getCircle(cv::Point2f& p1,cv::Point2f& p2,cv::Point2f& p3, cv::Point2f& center, float& radius)
{
float x1 = p1.x;
float x2 = p2.x;
float x3 = p3.x;
float y1 = p1.y;
float y2 = p2.y;
float y3 = p3.y;
// PLEASE CHECK FOR TYPOS IN THE FORMULA :)
center.x = (x1*x1+y1*y1)*(y2-y3) + (x2*x2+y2*y2)*(y3-y1) + (x3*x3+y3*y3)*(y1-y2);
center.x /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );
center.y = (x1*x1 + y1*y1)*(x3-x2) + (x2*x2+y2*y2)*(x1-x3) + (x3*x3 + y3*y3)*(x2-x1);
center.y /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );
radius = sqrt((center.x-x1)*(center.x-x1) + (center.y-y1)*(center.y-y1));
}
std::vector<cv::Point2f> getPointPositions(cv::Mat binaryImage)
{
std::vector<cv::Point2f> pointPositions;
for(unsigned int y=0; y<binaryImage.rows; ++y)
{
//unsigned char* rowPtr = binaryImage.ptr<unsigned char>(y);
for(unsigned int x=0; x<binaryImage.cols; ++x)
{
//if(rowPtr[x] > 0) pointPositions.push_back(cv::Point2i(x,y));
if(binaryImage.at<unsigned char>(y,x) > 0) pointPositions.push_back(cv::Point2f(x,y));
}
}
return pointPositions;
}
input:
output:
console output:
press q to stop
accepted circle with 50 % inlier
circle: center: [358.511, 211.163] radius: 193.849
accepted circle with 85.7143 % inlier
circle: center: [45.2273, 171.591] radius: 24.6215
accepted circle with 100 % inlier
circle: center: [257.066, 197.066] radius: 27.819
circle illegal
30 iterations performed
Optimization should include:
use all inliers to fit a better circle
don't compute the distance transform after each detected circle (it's quite expensive); compute the inliers from the point/edge set directly and remove the inlier edges from that list
if there are many small circles in the image (and/or a lot of noise), it's unlikely that 3 randomly chosen edge pixels lie on the same circle; try contour detection first and detect circles for each contour, and after that try to detect all "other" circles left in the image
a lot of other stuff
@Micka's answer is great; here it is roughly translated into Python.
The method takes a thresholded image mask as an argument, leaving that part as an exercise for the reader:
import cv2
import numpy as np

def get_circle_percentages(image):
    # compute distance transform of image
    dist = cv2.distanceTransform(image, cv2.DIST_L2, 0)
    rows = image.shape[0]
    circles = cv2.HoughCircles(image, cv2.HOUGH_GRADIENT, 1, rows / 8,
                               param1=50, param2=10, minRadius=40, maxRadius=90)

    minInlierDist = 2.0

    for c in circles[0, :]:
        counter = 0
        inlier = 0
        center = (c[0], c[1])
        radius = c[2]
        maxInlierDist = radius / 25.0
        if maxInlierDist < minInlierDist:
            maxInlierDist = minInlierDist

        # sample the circle and check the distance to the nearest edge
        for i in np.arange(0, 2 * np.pi, 0.1):
            counter += 1
            x = center[0] + radius * np.cos(i)
            y = center[1] + radius * np.sin(i)
            if dist.item(int(y), int(x)) < maxInlierDist:
                inlier += 1

        print(str(100.0 * inlier / counter) + ' percent of a circle with radius ' + str(radius) + " detected")
I know that it's a little bit late, but I used a different approach, which is much easier.
From cv2.HoughCircles(...) you get the centre of each circle and its radius (x, y, r). So I simply go through all the circle centres and check whether they are further away from the edges of the image than their radius.
Here is my code:
height, width = img.shape[:2]
#test top edge
up = (circles[0, :, 0] - circles[0, :, 2]) >= 0
#test left edge
left = (circles[0, :, 1] - circles[0, :, 2]) >= 0
#test right edge
right = (circles[0, :, 0] + circles[0, :, 2]) <= width
#test bottom edge
down = (circles[0, :, 1] + circles[0, :, 2]) <= height
circles = circles[:, (up & down & right & left), :]
The semicircle detected by the Hough algorithm is most probably correct. The issue here might be that unless you strictly control the geometry of the scene, i.e. the exact position of the camera relative to the target so that the image axis is normal to the target plane, you will get ellipses rather than circles projected onto the image plane, not to mention the distortions caused by the optical system, which further degrade the geometric figure. If you rely on precision here, I would recommend camera calibration.
You'd better try a different kernel for the Gaussian blur; that will help you:
GaussianBlur( src_gray, src_gray, Size(11, 11), 5, 5);
so change Size(i, i) and the sigma values j, j.
I have been trying to write my own convolution operator instead of using the built-in one that comes with Java. I applied the built-in convolution operator to this image:
link
Using the built-in convolution operator with a Gaussian filter, I got this image:
link
Now I run the same image through my code:
public static int convolve(BufferedImage a,int x,int y){
int red=0,green=0,blue=0;
float[] matrix = {
0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
0.2196956447338621f, 0.28209479177387814f, 0.2196956447338621f,
0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
};
for(int i = x;i<x+3;i++){
for(int j = y;j<y+3;j++){
int color = a.getRGB(i,j);
red += Math.round(((color >> 16) & 0xff)*matrix[(i-x)*3+j-y]);
green += Math.round(((color >> 8) & 0xff)*matrix[(i-x)*3+j-y]);
blue += Math.round(((color >> 0) & 0xff)*matrix[(i-x)*3+j-y]);
}
}
return (a.getRGB(x, y)&0xFF000000) | (red << 16) | (green << 8) | (blue);
}
And The result I got is this.
link
Also, how do I optimize the code that I wrote? The built-in convolution operator takes 1~2 seconds, while my code, even though it is not yet serving the exact purpose it is supposed to, takes 5~7 seconds!
I accidentally rotated my source image while uploading, so please ignore that.
First of all, you are needlessly (and wrongly) converting your result from float to int at each cycle of the loop. Your red, green and blue should be of type float and should be cast back to integer only after the convolution (when converted back to RGB):
float red = 0.0f, green = 0.0f, blue = 0.0f;
for(int i = x;i<x+3;i++){
for(int j = y;j<y+3;j++){
int color = a.getRGB(i,j);
red += ((color >> 16) & 0xff)*matrix[(i-x)*3+j-y];
green += ((color >> 8) & 0xff)*matrix[(i-x)*3+j-y];
blue += ((color >> 0) & 0xff)*matrix[(i-x)*3+j-y];
}
}
return (a.getRGB(x, y)&0xFF000000) | (((int)red) << 16) | (((int)green) << 8) | ((int)blue);
The bleeding of colors in your result is caused by the coefficients in your matrix being wrong:
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f +
0.2196956447338621f + 0.28209479177387814f + 0.2196956447338621f +
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f =
1.8452741
The sum of the coefficients in a blurring convolution matrix should be 1.0. When you apply this matrix to an image you may get colors that are over 255. When that happens the channels "bleed" into the next channel (blue to green, etc.).
A completely green image with this matrix would result in:
green = 255 * 1.8452741 ~= 471 = 0x01D7;
rgb = 0xFF01D700;
Which is a less intense green with a hint of red.
You can fix that by dividing the coefficients by 1.8452741, but you want to make sure that:
(int)(255.0f * (sum of coefficients)) = 255
If not you need to add a check which limits the size of channels to 255 and don't let them wrap around. E.g.:
if (red > 255.0f)
red = 255.0f;
Regarding efficiency/optimization:
It could be that the difference in speed is explained by this needless casting and the calls to Math.round, but a more likely candidate is the way you are accessing the image. I'm not familiar enough with BufferedImage and Raster to advise you on the most efficient way to access the underlying image buffer.