OpenCV : Detecting bounding-box covering the label - opencv

I have similar images with similar kind of background noise.
Using OpenCV, is there a way to detect the contours of the area covering the label?
Or at least, is it possible to detect a "rough" bounding box covering the label area?

I've tried to accomplish this task. The main issue, which probably prevents generalizing this algorithm, is how to choose the proper contour. I got two contour lengths, 3108 and 2855. You could process all your photos (if they are at a similar distance from the camera) and set a threshold for the desired contour length, say between 2750 and 3050, but there is no guarantee that it will work. So this is how I remove the background (full code):
import cv2
import numpy as np

image = cv2.imread('C:/Users/srlatch/Desktop/of8cA.png')
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

def clear_vertical(img, target):
    # black out target pixels above the first edge pixel in each column
    for i in range(img.shape[1]):
        for j in range(img.shape[0]):
            if img[j][i]:
                break
            else:
                target[j][i] = [0, 0, 0]

def clear_horizontal(img, target):
    # black out target pixels to the left of the first edge pixel in each row
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i][j]:
                break
            else:
                target[i][j] = [0, 0, 0]

def turn_off(img):
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            img[i][j] = 0

def turn_on(img, result):
    # draw the points of the selected contour back onto the edge image
    for i in result:
        img[i[0][1]][i[0][0]] = 255

def f(list):
    # longest contour
    max = []
    for i in list:
        if len(i) > len(max):
            max = i
    return max

def rem(ls, thresh):
    new_c = []
    for i in ls:
        if len(i) > thresh:
            new_c.append(i)
    return new_c

def rn(ls, min, max):
    # keep contours whose length lies between min and max
    ret = []
    for i in ls:
        if len(i) < max and len(i) > min:
            print(len(i))
            ret.append(i)
    return ret

#ret,tresh = cv2.threshold(img,40,255,cv2.THRESH_BINARY)
kernel = np.ones((2, 2), np.uint8)
new = cv2.Canny(img, 190, 1)
dilated = cv2.dilate(new, kernel)
tresh, c, hr = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
c = rn(c, 2600, 4000)
turn_off(new)
turn_on(new, c[0])
clear_horizontal(new, image)
clear_vertical(new, image)
cv2.imwrite('result_image_end.png', image)
cv2.imshow('wnd', image)
cv2.waitKey(100)
I've tried different approaches, but this seems to work better than the others. I believe OpenCV has a function that can substitute for clear_horizontal and clear_vertical, but I can't remember its name. Hope it helps!
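For reference, a shorter way to get the same effect is to fill the chosen contour into a mask and apply it with cv2.bitwise_and; cv2.boundingRect then gives the "rough" bounding box asked about in the question. A minimal sketch, assuming image, img and the selected contour c[0] from the code above:

# Sketch: mask-based background removal instead of the per-pixel clearing loops
mask = np.zeros(img.shape, dtype=np.uint8)               # single-channel mask, same size as the grayscale image
cv2.drawContours(mask, [c[0]], -1, 255, thickness=cv2.FILLED)
label_only = cv2.bitwise_and(image, image, mask=mask)    # keep only the area inside the contour

x, y, w, h = cv2.boundingRect(c[0])                      # rough bounding box of the label
label_crop = image[y:y+h, x:x+w]
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)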

Related

Extract text information from PDF files with different layouts - machine learning

I need assistance with a ML project I am currently trying to create.
I receive a lot of invoices from a lot of different suppliers - all in their own unique layout. I need to extract 3 key elements from the invoices. These 3 elements are all located in a table/line items for all the invoices.
The 3 elements are:
1: Tariff number (digit)
2: Quantity (always a digit)
3: Total line amount (monetary value)
Please refer to the screenshot below, where I have marked these fields on a sample invoice.
I started this project with a template approach, based on regular expressions. This, however, was not scalable at all, and I ended up with tons of different rules.
I am hoping that machine learning can help me here - or maybe a hybrid solution?
The common denominator
In all of my invoices, despite the different layouts, each line item will always consist of one tariff number. This tariff number is always 8 digits, and is always formatted in one of the ways below:
xxxxxxxx
xxxx.xxxx
xx.xx.xx.xx
(Where "x" is a digit from 0 - 9).
Further, as you can see on the invoice there is both a Unit Price and a Total Amount per line. The amount I will need is always the highest for each line.
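For illustration, those three formats can be captured with a single regular expression, and the highest monetary value on a line can then be picked as the total amount. A rough sketch (the sample line below is hypothetical):

import re

TARIFF_RE = re.compile(r'\b(?:\d{8}|\d{4}\.\d{4}|\d{2}\.\d{2}\.\d{2}\.\d{2})\b')
AMOUNT_RE = re.compile(r'\b\d+\.\d{2}\b')

line = "1  85444290  cable assembly  3  86.31  258.93"   # hypothetical line item
tariff = TARIFF_RE.search(line)
amounts = [float(a) for a in AMOUNT_RE.findall(line)]

print(tariff.group(0) if tariff else None)   # '85444290'
print(max(amounts) if amounts else None)     # 258.93 -> the line total, not the unit price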
The output
For each invoice like the one above, I need the output for each line. This could for example be something like this:
{
    "line":"0",
    "tariff":"85444290",
    "quantity":"3",
    "amount":"258.93"
},
{
    "line":"1",
    "tariff":"85444290",
    "quantity":"4",
    "amount":"548.32"
},
{
    "line":"2",
    "tariff":"76109090",
    "quantity":"5",
    "amount":"412.30"
}
Where to go from here?
I am not sure whether what I am looking to do falls under machine learning and, if so, under which category. Is it computer vision? NLP? Named Entity Recognition?
My initial thought was to:
Convert the invoice to text. (The invoices are all text-based PDFs, so I can use something like pdftotext to get the exact textual values; a minimal sketch of this step follows the list.)
Create custom named entities for quantity, tariff and amount
Export the found entities.
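For the first step, a minimal sketch of the text dump could look like this (pdftotext's -layout option keeps each table row on one line; the file names are placeholders):

import subprocess

# Dump the text layer of the PDF, preserving the column layout
subprocess.run(["pdftotext", "-layout", "invoice.pdf", "invoice.txt"], check=True)

with open("invoice.txt", encoding="utf-8") as f:
    lines = [ln.rstrip("\n") for ln in f if ln.strip()]
# `lines` can then be fed to the entity/regex extraction step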
However, I feel like I might be missing something.
Can anyone assist me in the right direction?
Edit:
Please see below for a few more examples of how an invoice table section can look:
Sample invoice #2
Sample invoice #3
Edit 2:
Please see below for the three sample images, without the borders/bounding boxes:
Image 1:
Image 2:
Image 3:
Here's an attempt using OpenCV; the idea is:
Obtain binary image. We load the image, enlarge it using imutils.resize to help obtain better OCR results (see Tesseract improve quality), convert to grayscale, then apply Otsu's threshold to obtain a binary (1-channel) image.
Remove table grid lines. We create horizontal and vertical kernels and perform morphological operations to detect and remove the grid lines, then use a morphological close to combine adjacent text contours into a single contour per row. The idea is to extract each row ROI as one piece for OCR.
Extract row ROIs. We find contours then sort from top-to-bottom using imutils.contours.sort_contours. This ensures that we iterate through each row in the correct order. From here we iterate through the contours, extract the row ROI using Numpy slicing, OCR using Pytesseract, then parse the data.
Here's the visualization of each step:
Input image
Binary image
Morph close
Visualization of iterating through each row
Extracted row ROIs
Output invoice data result:
{'line': '0', 'tariff': '85444290', 'quantity': '3', 'amount': '258.93'}
{'line': '1', 'tariff': '85444290', 'quantity': '4', 'amount': '548.32'}
{'line': '2', 'tariff': '76109090', 'quantity': '5', 'amount': '412.30'}
Unfortunately, I get mixed results when trying this on the 2nd and 3rd images. This method does not produce great results on the other images since the layouts of the invoices are all different. However, this approach shows that it's possible to use traditional image processing techniques to extract the invoice information, with the assumption that you have a fixed invoice layout.
Code
import cv2
import numpy as np
import pytesseract
from imutils import contours
import imutils
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
# Load image, enlarge, convert to grayscale, Otsu's threshold
image = cv2.imread('1.png')
image = imutils.resize(image, width=1000)
height, width = image.shape[:2]
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (50,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, 0, -1)
# Remove vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,50))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, 0, -1)
# Morph close to combine adjacent contours into a single contour
invoice_data = []
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (85,5))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=3)
# Find contours, sort from top-to-bottom
# Iterate through contours, extract row ROI, OCR, and parse data
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
(cnts, _) = contours.sort_contours(cnts, method="top-to-bottom")
row = 0
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    ROI = image[y:y+h, 0:width]
    ROI = cv2.GaussianBlur(ROI, (3,3), 0)
    data = pytesseract.image_to_string(ROI, lang='eng', config='--psm 6')
    parsed = [word.lower() for word in data.split()]
    if 'tariff' in parsed or 'number' in parsed:
        row_data = {}
        row_data['line'] = str(row)
        row_data['tariff'] = parsed[-1]
        row_data['quantity'] = parsed[2]
        row_data['amount'] = str(max(parsed[10], parsed[11]))
        row += 1

        print(row_data)
        invoice_data.append(row_data)

    # Visualize row extraction
    '''
    mask = np.zeros(image.shape, dtype=np.uint8)
    cv2.rectangle(mask, (0, y), (width, y + h), (255,255,255), -1)
    display_row = cv2.bitwise_and(image, mask)

    cv2.imshow('ROI', ROI)
    cv2.imshow('display_row', display_row)
    cv2.waitKey(1000)
    '''
print(invoice_data)
cv2.imshow('thresh', thresh)
cv2.imshow('close', close)
cv2.waitKey()
I'm working on a similar problem in the logistics industry, and trust me when I say these document tables come in myriad layouts. Numerous companies that have somewhat solved this problem, and are still improving on it, are listed below:
Leaders: ABBYY, AntWorks, Kofax, and WorkFusion
Major Contenders: Automation Anywhere, Celaton, Datamatics, EdgeVerve, Extract Systems, Hyland, Hyperscience, Infrrd, and Parascript
Aspirants: Ikarus, Rossum, Shipmnts(Alex), Amazon(Textract), Docsumo, Docparser, Aidock
The category I would like to put this problem under would be multi-modal learning, because both textual and image modalities contribute a good deal to this problem. While OCR tokens play a vital role in attribute-value classification, their position on the page, spacing, and inter-character distances are also very important features for detecting table, row and column boundaries. The problem gets all the more interesting when rows break across pages, or some columns carry non-empty values.
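As a sketch of what such multi-modal features could look like in practice, pytesseract's image_to_data call returns every OCR token together with its bounding box, which can then be combined with the token text (the file name and feature layout below are illustrative):

import cv2
import pytesseract
from pytesseract import Output

image = cv2.imread('invoice_page.png')            # placeholder file name
ocr = pytesseract.image_to_data(image, output_type=Output.DICT)

tokens = []
for i in range(len(ocr['text'])):
    word = ocr['text'][i].strip()
    if not word:
        continue
    tokens.append({
        'text': word,                             # textual modality
        'x': ocr['left'][i], 'y': ocr['top'][i],  # positional modality
        'w': ocr['width'][i], 'h': ocr['height'][i],
        'line_id': (ocr['block_num'][i], ocr['par_num'][i], ocr['line_num'][i]),
    })
# `tokens` now carries both the text and where it sits on the page, which is
# what a model needs to learn table, row and column boundaries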
The academic world and conferences generally use the term Intelligent Document Processing for extracting both singular fields and tabular data. In the research literature, the former is better known as attribute-value classification, and the latter as table extraction or repeated-structure extraction.
In our foray into processing these semi-structured documents over the past 3 years, I feel that achieving both accuracy and scalability is a long and arduous journey. The solutions that offer scalability / a 'template-free' approach rely on annotated corpora of semi-structured business documents in the order of tens of thousands, if not millions. Though this approach is scalable, it's only as good as the documents it has been trained on. If your documents hail from the logistics or insurance sector, which are known for their complex layouts, and need to be super-accurate owing to compliance procedures, a 'template-based' solution would be the panacea to your ills. It is guaranteed to give more accuracy.
If you need links to existing research, do mention in the comments below and I'd be happy to share them.
Also, I would recommend using pdfparser over pdftotext or pdfminer, because the former gives character-level information in digital files at significantly better performance.
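For reference, this is the kind of character-level information meant here; with pdfminer.six it can be read out roughly like this (slower than the tools above, but it shows the data that is available in a digital PDF):

from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer, LTChar

for page_layout in extract_pages("invoice.pdf"):        # placeholder file name
    for element in page_layout:
        if isinstance(element, LTTextContainer):
            for text_line in element:
                for ch in text_line:
                    if isinstance(ch, LTChar):
                        print(ch.get_text(), ch.bbox)   # character and its (x0, y0, x1, y1) box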
Would be happy to incorporate any feedback, as this is my first answer here.

Count number of objects using watershed algorithm - Scikit-image

I am trying to find the number of objects in a given image using watershed segmentation. Consider, for example, the coins image. Here I would like to know the number of coins in the image. I implemented the code available in the Scikit-image documentation, tweaked it a little, and got results similar to those displayed on the documentation page.
After looking at the functions used in the code in detail, I found out that ndimage.label() also returns the number of unique objects found in the image (mentioned in its documentation), but when I print that value I get 53, which is very high compared to the number of coins in the actual image.
Can somebody suggest a method to find the number of objects in an image?
Here is a version of your code that counts the coins in one of two ways: a) by directly segmenting the distance image and b) by doing watershed first and rejecting tiny intersecting regions.
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, filter as filters
from scipy import ndimage
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops, label
image = color.rgb2gray(io.imread('water_coins.jpg', plugin='freeimage'))
image = image < filters.threshold_otsu(image)
distance = ndimage.distance_transform_edt(image)
# Here's one way to measure the number of coins directly
# from the distance map
coin_centres = (distance > 0.8 * distance.max())
print('Number of coins (method 1):', np.max(label(coin_centres)))
# Or you can proceed with the watershed labeling
local_maxi = peak_local_max(distance, indices=False, footprint=np.ones((3, 3)),
                            labels=image)
markers, num_features = ndimage.label(local_maxi)
labels = watershed(-distance, markers, mask=image)
# ...but then you have to clean up the tiny intersections between coins
regions = regionprops(labels)
regions = [r for r in regions if r.area > 50]
print('Number of coins (method 2):', len(regions) - 1)
fig, axes = plt.subplots(ncols=3, figsize=(8, 2.7))
ax0, ax1, ax2 = axes
ax0.imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax0.set_title('Overlapping objects')
ax1.imshow(-distance, cmap=plt.cm.jet, interpolation='nearest')
ax1.set_title('Distances')
ax2.imshow(labels, cmap=plt.cm.spectral, interpolation='nearest')
ax2.set_title('Separated objects')
for ax in axes:
    ax.axis('off')
fig.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0,
                    right=1)
plt.show()
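As a variant of the area filter in method 2, the tiny regions can also be discarded directly with skimage.morphology.remove_small_objects before counting; a small sketch using the labels array from the code above:

from skimage.morphology import remove_small_objects

# Drop labelled regions smaller than ~50 px, then count what is left
# (label 0 is the background, hence the -1)
cleaned = remove_small_objects(labels, min_size=50)
print('Number of coins (method 3):', len(np.unique(cleaned)) - 1)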

Examples/tutorials of Adaptive Metropolis for images using PyMC

I am looking for examples or tutorials of the AdaptiveMetropolis step method used for image processing.
The only vaguely image-related resource that I have found until now is this astronomy dissertation and the related GitHub repo.
This wider question does not seem to provide PyMC example code.
What about finding the peak on this simulated array?
import numpy as np
from matplotlib import pyplot as plt
sz = (12,18)
data_input = np.random.normal( loc=5.0, size=sz )
data_input[7:10, 2:6] = np.random.normal( loc=100.0, size=(3,4) )
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
im = ax.imshow( data_input )
ax.set_title("input")
The closest I know of is here: http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter5_LossFunctions/LossFunctions.ipynb (see Example: Kaggle contest on Observing Dark World).
On this thread you asked a specific question: https://github.com/pymc-devs/pymc/issues/653 about finding an array in an image. Here is a first attempt at a model:
In that case it seems like you are trying to estimate a 2-D uniform distribution with gaussian noise. You'll have to translate this into an actual model but this would be one idea:
lower_x ~ DiscreteUniform(0, 20)
upper_x ~ DiscreteUniform(0,20)
lower_y ~ DiscreteUniform(0, 20)
upper_y ~ DiscreteUniform(0, 20)
height ~ Normal(100, 1)
noise ~ InvGamma(1, 1)
means = zeros((20, 20))
means[lower_x:upper_x, lower_y:upper_y] = height  # this needs to be a deterministic
data ~ Normal(mu=means, sd=noise)
It might be better to code upper_x as an offset and then do lower_x:lower_x+offset_x, otherwise you need a potential to enforce lower_x < upper_x.
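A rough, untested PyMC2 translation of that pseudo-model could look like the following (priors, array size and step-method choices are illustrative; data_input is the simulated array from the question):

import numpy as np
import pymc as pm

ny, nx = data_input.shape   # the 12 x 18 simulated array from the question

lower_x = pm.DiscreteUniform('lower_x', 0, nx - 1)
upper_x = pm.DiscreteUniform('upper_x', 0, nx - 1)
lower_y = pm.DiscreteUniform('lower_y', 0, ny - 1)
upper_y = pm.DiscreteUniform('upper_y', 0, ny - 1)
height = pm.Normal('height', mu=100.0, tau=1.0)
sigma = pm.InverseGamma('sigma', alpha=1.0, beta=1.0)

@pm.deterministic
def means(lx=lower_x, ux=upper_x, ly=lower_y, uy=upper_y, h=height):
    m = np.zeros((ny, nx))
    m[ly:uy, lx:ux] = h        # the rectangular "peak"
    return m

@pm.deterministic
def tau(s=sigma):
    return 1.0 / s ** 2

data = pm.Normal('data', mu=means, tau=tau, value=data_input, observed=True)

M = pm.MCMC([lower_x, upper_x, lower_y, upper_y, height, sigma, means, tau, data])
# AdaptiveMetropolis handles the continuous parameters; the discrete bounds
# fall back to ordinary Metropolis steps
M.use_step_method(pm.AdaptiveMetropolis, [height, sigma])
M.sample(iter=20000, burn=5000)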

Blockproc in Matlab with two output variables

I have the following problem. I have to compute dense SIFT interest points in a very high-resolution image (182 MP). When I run the code on the full image, Matlab always closes suddenly. So I decided to run the code on image patches.
The code
I tried to use blockproc in Matlab to call the C++ function that performs the dense SIFT interest point detection this way:
fun = @(block_struct) denseSIFT(block_struct.data, options);
[dsift , infodsift] = blockproc(ndvi,[1000 1000],fun);
where dsift contains the SIFT descriptors (vectors) and infodsift has the information about the interest points, such as the x and y coordinates.
The problem
The problem is that blockproc allows just one output, but I want both outputs. The following error is given by Matlab when I run the code:
Error using blockproc
Too many output arguments.
Is there a way for me to do this?
Would it be a problem for you to "hard code" a version of blockproc?
Assuming for a moment that you can divide your image into NxM smaller images, you could loop around as follows:
bigImage = someFunction();
sz = size(bigImage);
smallSize = sz ./ [N M];
dsift = cell(N,M);
infodsift = cell(N,M);
for ii = 1:N
    for jj = 1:M
        smallImage = bigImage((ii-1)*smallSize(1) + (1:smallSize(1)), (jj-1)*smallSize(2) + (1:smallSize(2)));
        [dsift{ii,jj}, infodsift{ii,jj}] = denseSIFT(smallImage, options);
    end
end
The results will then be in the two cell arrays. No real need to pre-allocate, but it's tidier if you do. If the individual matrices are the same size, you can convert into a single large matrix with
dsiftFull = cell2mat(dsift);
Almost magic. This won't work if your matrices are different sizes - but then, if they are, I'm not sure you would even want to put them all in a single one (unless you decide to horzcat them).
If you do decide you want a list of "all the columns as a giant matrix", then you can do
giantMatrix = [dsift{:}];
This will return a matrix with (in your example) 128 rows, and as many columns as there were "interest points" found. It's shorthand for
giantMatrix = [dsift{1,1} dsift{2,1} dsift{3,1} ... dsift{N,M}];

Horizontal Histogram in OpenCV

I am a newbie to OpenCV, and I am currently working on a senior project related to image processing. I have a question: can I make a horizontal or vertical histogram with some functions of OpenCV?
Thanks,
Truong
The most efficient way to do this is by using the cvReduce function. There's a parameter that lets you select whether you want a horizontal or vertical projection.
You can also do it by hand with the functions cvGetCol and cvGetRow combined with cvSum.
Based on the link you provided in a comment, this is what I believe you're trying to do.
You want to create an array with n elements, where n is the number of columns in the input image. The value of the nth element of the array is the sum of all the pixels in the nth column.
You can calculate this array by looping over the columns of the input image, using cvGetSubRect to access the pixels in that column, and cvSum to sum those pixels.
Here is some Python code that does that, assuming a grayscale image:
import cv

def verticalProjection(img):
    "Return a list containing the sum of the pixels in each column"
    (w, h) = cv.GetSize(img)
    sumCols = []
    for j in range(w):
        col = cv.GetSubRect(img, (j, 0, 1, h))
        sumCols.append(cv.Sum(col)[0])
    return sumCols
Updating carnieri's answer (some of the old cv functions no longer work):
import numpy as np
import cv2

def verticalProjection(img):
    "Return a list containing the sum of the pixels in each column"
    (h, w) = img.shape[:2]
    sumCols = []
    for j in range(w):
        col = img[0:h, j:j+1]  # y1:y2, x1:x2
        sumCols.append(np.sum(col))
    return sumCols
Regards.
An example of using cv2.reduce with OpenCV 3 in Python:
import numpy as np
import cv2
img = cv2.imread("test_1.png")
x_sum = cv2.reduce(img, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S)
y_sum = cv2.reduce(img, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32S)
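To actually display the projections as histograms, the reduced sums can be plotted, for example with matplotlib; a small sketch (converting to grayscale first keeps a single sum per row/column):

import numpy as np
import cv2
from matplotlib import pyplot as plt

img = cv2.imread("test_1.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# One sum per column (vertical projection) and one per row (horizontal projection)
col_sums = cv2.reduce(gray, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S).flatten()
row_sums = cv2.reduce(gray, 1, cv2.REDUCE_SUM, dtype=cv2.CV_32S).flatten()

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.bar(np.arange(len(col_sums)), col_sums)
ax1.set_title("vertical projection")
ax2.barh(np.arange(len(row_sums)), row_sums)
ax2.set_title("horizontal projection")
plt.show()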
