Color levels in OpenCV

I want to do something similar to the Levels function in Photoshop, but can't find the right OpenCV functions.
Basically I want to stretch the greys in an image so they run from almost white to practically black instead of from almost white to only slightly greyer, while leaving white as white and black as black (I am using greyscale images).

The following Python code implements the Photoshop Adjustments -> Levels dialog.
Change the values for each channel to the desired ones.
img is the input RGB image of type np.uint8.
import numpy as np
inBlack = np.array([0, 0, 0], dtype=np.float32)
inWhite = np.array([255, 255, 255], dtype=np.float32)
inGamma = np.array([1.0, 1.0, 1.0], dtype=np.float32)
outBlack = np.array([0, 0, 0], dtype=np.float32)
outWhite = np.array([255, 255, 255], dtype=np.float32)
img = np.clip((img - inBlack) / (inWhite - inBlack), 0, 1)           # normalize to [0, 1] using the input levels
img = (img ** (1 / inGamma)) * (outWhite - outBlack) + outBlack      # apply gamma, then rescale to the output levels
img = np.clip(img, 0, 255).astype(np.uint8)
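For the grayscale case in the original question, the same mapping works with scalar levels. A minimal sketch, assuming a hypothetical input file gray.png and illustrative level values:
import cv2
import numpy as np
gray = cv2.imread('gray.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
inBlack, inWhite, inGamma = 20.0, 235.0, 1.0   # illustrative input levels
outBlack, outWhite = 0.0, 255.0                # keep black as black and white as white
gray = np.clip((gray - inBlack) / (inWhite - inBlack), 0, 1)
gray = (gray ** (1 / inGamma)) * (outWhite - outBlack) + outBlack
gray = np.clip(gray, 0, 255).astype(np.uint8)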

I think this is a function mapping input levels to output levels as shown below in the figure.
For example, the orange curve is a straight line from (a, c) to (b, d), blue curve is a straight line from (a, d) to (b, c) and green curve is a non-linear function from (a,c) to (b, d).
We can define the blue curve as (x - a)/(y - d) = (a - b)/(d - c).
Limiting values of a, b, c and d depend on the range supported by the channel that you are applying this transformation to. For grayscale this is [0, 255].
For example, if you want a transformation like (a, d) = (10, 200), (b, c) = (250, 50) for a grayscale image:
y = -150*(x - 10)/240 + 200 for x in [10, 250]
y = x for x in [0, 10) and (250, 255], if you want the remaining values unchanged.
You can use a lookup table in OpenCV (LUT function) to calculate the output levels and apply this transformation to your image or the specific channel. You can apply any piecewise transformation this way.
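For the example above, a minimal sketch of building and applying such a lookup table (the input file name is illustrative):
import cv2
import numpy as np
lut = np.arange(256, dtype=np.float32)               # identity mapping y = x
x = np.arange(10, 251, dtype=np.float32)
lut[10:251] = -150.0 * (x - 10.0) / 240.0 + 200.0    # straight line from (10, 200) to (250, 50)
lut = np.clip(lut, 0, 255).astype(np.uint8)
gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
out = cv2.LUT(gray, lut)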

I don't know what "Photoshop levels" are, but from the description, I think you should try the following:
Convert your image to YUV using cvtColor. Y will represent the intensity plane. (You can also use Lab, Luv, or any similar colorspace with separate intensity component).
Split the planes using split, so that the intensity plane will be a separate image.
Call equalizeHist on the intensity plane
Merge the planes back together using merge
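A minimal sketch of these steps, assuming a BGR input image (the file name is illustrative):
import cv2
img = cv2.imread('input.jpg')
yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(yuv)
y = cv2.equalizeHist(y)                               # equalize only the intensity plane
result = cv2.cvtColor(cv2.merge((y, u, v)), cv2.COLOR_YUV2BGR)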
Details on histogram equalization can be found here
Also note that there's an implementation of a somewhat improved histogram equalization method, CLAHE (but I can't find a better link than this; #berak also suggested a good link on the topic).
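For reference, a minimal sketch of applying CLAHE to a grayscale image (the clip limit and tile size are illustrative):
import cv2
gray = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(gray)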

Related

How to localize red regions in a heatmap using OpenCV?

I am generating a heat map using an anomaly detection framework, and the output is something like this image.
How can I mark the red regions on the original image, for example with a circle around the red part?
So let's say you have this heatmap.
It was actually generated from some intensity data, some prediction you obtained from your algorithm. That's what we need, not the heatmap itself. It is usually a "grayscale image": it has no color, only intensity values. Usually it ranges from 0.0 to 1.0 (it could also be from 0 to 255), and if you were to plot it, it would look like this.
So now to obtain "red" areas you just need regions with high intensity. We must do "thresholding" to obtain them.
max_val = 1.0 # could be 255 in your case, you must check
prediction /= max_val # normalize
mask = prediction > 0.9
The threshold in this case is 0.9; you can make it smaller to make the "red" regions larger. We will get the following mask:
Now we can either blend this mask with our original image:
alpha = 0.5
original[mask] = original[mask] * (1 - alpha) + np.array([0, 0, 255]) * alpha
... and get this:
Or we can find some contours on the mask and encircle them:
contours, _ = cv2.findContours(mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    center = np.average(cnt, axis=0)
    radius = np.max(np.linalg.norm((cnt - center)[:, 0], axis=1))
    radius = max(radius, 10.0)
    cv2.circle(original, center[0].astype(np.int32), int(radius), (0, 0, 255), 2)
... to get this:

Image Processing: Mapping a scanned image on a template image with many identical features

Problem description
We are trying to match a scanned image onto a template image:
Example of a scanned image:
Example of a template image:
The template image contains a collection of hearts varying in size and contour properties (closed, open left and open right). Each heart in the template is a Region of Interest for which we know the location, size, and contour type. Our goal is to map the scanned image onto the template so that we can extract these ROIs in the scanned image. In the scanned image, some of these hearts are crossed, and they will be presented to a classifier that decides whether they are crossed or not.
Our approach
Following a tutorial on PyImageSearch, we have attempted to use ORB to find matching keypoints (code included below). This should allow us to compute a perspective transform matrix that maps the scanned image on the template image.
We have tried some preprocessing steps such as thresholding and/or blurring the scanned image. We have also tried to increase the maximum number of features as much as possible.
The problem
The method fails to work for our image set. This can be seen in the following image:
It appears that a lot of keypoints are mapped to the wrong part of the template image, so the transform matrix is not calculated correctly.
Is ORB the right technique to use here, or are there parameters of the algorithm that could be fine-tuned to improve performance? It feels like we are missing out on something simple that should make it work, but we really don't know how to go forward with this approach :).
We are trying out an alternative technique where we cross-correlate the scan with individual heart shapes. This should give an image with peaks at the heart locations. By drawing a bounding box around these peaks, we hope to map that bounding box onto the bounding box of the template (I can elaborate on this upon request).
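A minimal sketch of that cross-correlation idea using matchTemplate (the template file name heart.jpg and the 0.7 peak threshold are illustrative):
import cv2 as cv
import numpy as np
scan = cv.imread('scan.jpg')
scan_gray = cv.cvtColor(scan, cv.COLOR_BGR2GRAY)
heart_gray = cv.imread('heart.jpg', cv.IMREAD_GRAYSCALE)    # one heart shape used as the template
res = cv.matchTemplate(scan_gray, heart_gray, cv.TM_CCOEFF_NORMED)
th, tw = heart_gray.shape
ys, xs = np.where(res > 0.7)                                # locations of strong correlation peaks
for x, y in zip(xs, ys):
    cv.rectangle(scan, (int(x), int(y)), (int(x) + tw, int(y) + th), (0, 0, 255), 2)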
Any suggestions are greatly appreciated!
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
# Preprocessing parameters
THRESHOLD = True
BLUR = False
# ORB parameters
MAX_FEATURES = 4048
KEEP_PERCENT = .01
SHOW_DEBUG = True
# Convert both the input image and template to grayscale
scan_file = r'scan.jpg'
template_file = r'template.jpg'
scan = cv.imread(scan_file)
template = cv.imread(template_file)
scan_gray = cv.cvtColor(scan, cv.COLOR_BGR2GRAY)
template_gray = cv.cvtColor(template, cv.COLOR_BGR2GRAY)
if THRESHOLD:
    _, scan_gray = cv.threshold(scan_gray, 127, 255, cv.THRESH_BINARY)
    _, template_gray = cv.threshold(template_gray, 127, 255, cv.THRESH_BINARY)
if BLUR:
    scan_gray = cv.blur(scan_gray, (5, 5))
    template_gray = cv.blur(template_gray, (5, 5))
# Use ORB to detect keypoints and extract (binary) local invariant features
orb = cv.ORB_create(MAX_FEATURES)
(kps_template, desc_template) = orb.detectAndCompute(template_gray, None)
(kps_scan, desc_scan) = orb.detectAndCompute(scan_gray, None)
# Match the features
#method = cv.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING
#matcher = cv.DescriptorMatcher_create(method)
#matches = matcher.match(desc_scan, desc_template)
bf = cv.BFMatcher(cv.NORM_HAMMING)
matches = bf.match(desc_scan, desc_template)
# Sort the matches by their distances
matches = sorted(matches, key = lambda x : x.distance)
# Keep only the top matches
keep = int(len(matches) * KEEP_PERCENT)
matches = matches[:keep]
if SHOW_DEBUG:
    matched_visualization = cv.drawMatches(scan, kps_scan, template, kps_template, matches, None)
    plt.imshow(matched_visualization)
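For reference, the step that would normally follow this matching is estimating the perspective transform from the kept matches and warping the scan onto the template. A minimal sketch reusing the variables defined above (the RANSAC reprojection threshold is illustrative):
src_pts = np.float32([kps_scan[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kps_template[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 5.0)
h, w = template_gray.shape
aligned_scan = cv.warpPerspective(scan, H, (w, h))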
Based on the clarifications provided by #it_guy, I have attempted to find all the crossed hearts using just the scanned image. I would have to try the algorithm on more images to check whether this approach will generalize or not.
Binarize the scanned image.
gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray_image, 180, 255, cv2.THRESH_BINARY_INV)
Perform dilation to close small gaps in the outline of the hearts, and the curves representing crosses. Note - The structuring element np.ones((1,2), np.uint8) can be changed by running the algorithm through multiple images and finding the most suitable structuring element.
closing_original = cv2.morphologyEx(original_binary, cv2.MORPH_DILATE, np.ones((1,2), np.uint8))
Find all the contours in the image. The contours include all hearts and the triangle at the bottom. We eliminate other contours, like dots, by placing constraints on the height and width of the contours. Further, we also use contour hierarchies to eliminate inner contours in crossed hearts.
contours_original, hierarchy_original = cv2.findContours(closing_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
We iterate through each of the filtered contours.
Contour with normal heart -
Contour with crossed heart -
Let us observe the difference between these two types of hearts. If we look at the white-to-black and black-to-white pixel transitions (from top to bottom) inside the normal heart, we see that for the majority of the image columns the number of such transitions is 4. (Top border - 2 transitions, bottom border - 2 transitions)
white-to-black pixel - (255, 255, 0, 0, 0)
black-to-white pixel - (0, 0, 255, 255, 255)
But, in the case of the crossed heart, the number of transitions for the majority of the columns must be 6, since the crossed curve / line adds two more transitions inside the heart (black-to-white first, then white-to-black). Hence, among all image columns which have greater than or equal to 4 such transitions, if more than 40% of the columns have 6 transitions, then the given contour represents a crossed heart. Result -
Code -
import cv2
import numpy as np
def convert_to_binary(rgb_image):
    gray_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray_image, 180, 255, cv2.THRESH_BINARY_INV)
    return gray_image, thresh
original = cv2.imread('original.jpg')
height, width = original.shape[:2]
original_gray, original_binary = convert_to_binary(original) # Get binary image
cv2.imwrite("binary.jpg", original_binary)
closing_original = cv2.morphologyEx(original_binary, cv2.MORPH_DILATE, np.ones((1,2), np.uint8)) # Close small gaps in the binary image
cv2.imwrite("closed.jpg", closing_original)
contours_original, hierarchy_original = cv2.findContours(closing_original, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE) # Get all the contours
bounding_rects_original = [cv2.boundingRect(c) for c in contours_original] # Get all contour bounding boxes
orig_boxes = list()
all_contour_image = original.copy()
for i, (x, y, w, h) in enumerate(bounding_rects_original):
    if h > height / 2 or w > width / 2: # Eliminate extremely large contours
        continue
    if h < w / 2 or w < h / 2: # Eliminate vertical / horizontal lines
        continue
    if w * h < 200: # Eliminate small area contours
        continue
    if hierarchy_original[0][i][3] != -1: # Eliminate contours created by heart crosses
        continue
    orig_boxes.append((x, y, w, h))
    cv2.rectangle(all_contour_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
# cv2.imshow("warped", closing_original)
cv2.imwrite("all_contours.jpg", all_contour_image)
final_image = original.copy()
for x, y, w, h in orig_boxes:
    cropped_image = closing_original[y - 2:y + h + 2, x:x + w] # Get the heart binary image
    col_pixel_diffs = np.abs(np.diff(cropped_image.T.astype(np.int16))/255) # Obtain all consecutive pixel differences in all the columns
    column_sums = np.sum(col_pixel_diffs, axis=1) # Get the sum of each column's transitions. This results in an array of size equal
    # to the number of columns, each element representing the number of black-white and white-black transitions.
    percent_crosses = np.sum(column_sums >= 6) / np.sum(column_sums >= 4) # Percentage of columns with 6 transitions among columns with 4 transitions
    if percent_crosses > 0.4: # Crossed heart criterion
        cv2.rectangle(final_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
        cv2.imwrite("crossed_heart.jpg", cropped_image)
    else:
        cv2.imwrite("normal_heart.jpg", cropped_image)
cv2.imwrite("all_crossed_hearts.jpg", final_image)
This approach can be tested on more images to find its accuracy.

How can I detect uniform color rectangles in an image using OpenCV?

I would like to use OpenCV to detect which rectangles in an image have a majority of pixels close to a given color.
Here's an example of an image I would like to process using this to identify rectangular regions that contain mostly gray pixels (possibly roads):
More precisely, given:
dimensions h x w (height and width of candidate rectangles)
a distance function dist for colors (for example, the norm of the difference between color vectors, which could be RGB or any other representation)
a color vector C
a maximum distance d for colors to be from C
a minimum percentage rate r of pixels in a given rectangle to be within distance d from C for the rectangle to be of interest,
return a mask M in which each pixel P is 1 if the rectangle of size h x w left-cornered by P contains at least r % of its pixels within distance d from C when measured with dist.
In pseudo-code, pixel P in the mask is 1 if and only if:
def rectangle_left_cornered_at_P_is_of_interest(P):
    n_pixels_near_C = size([P' for P' in rectangle(P, P + (h,w)) if dist(P',C) < d])
    return n_pixels_near_C / (h * w) > r
I imagine there may already exist a filter/kernel that does just that (or can be used to do that) in OpenCV, but I am still learning about it and could not identify one by looking at the documentation. Is there such a thing?
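One way to realize the operation described above with standard OpenCV calls is to build a per-pixel "near C" mask and then average it over h x w windows with a box filter. A minimal sketch, assuming a BGR input image and illustrative values for h, w, C, d and r:
import cv2
import numpy as np
img = cv2.imread('image.png')                       # hypothetical input file
C = np.array([128, 128, 128], np.float32)           # target color (BGR), illustrative
d, r = 30.0, 0.8                                    # max color distance and required fraction, illustrative
h, w = 32, 32                                       # rectangle size, illustrative
dist_to_C = np.linalg.norm(img.astype(np.float32) - C, axis=2)
near = (dist_to_C < d).astype(np.float32)           # 1 where the pixel is within distance d of C
# Average of `near` over each h x w window whose top-left corner is the pixel
# (anchor=(0, 0) places the kernel's anchor at its top-left corner; image borders are reflected)
frac = cv2.boxFilter(near, ddepth=-1, ksize=(w, h), anchor=(0, 0), normalize=True)
M = (frac > r).astype(np.uint8)                     # mask M as described above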
You can use HSV for this. You may have to play with the values a bit for the mask, but it will get the job done.
import cv2
import numpy as np
img = cv2.imread(img)                               # img holds the path to the input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_gray = np.array([0, 5, 50], np.uint8)
upper_gray = np.array([179, 50, 255], np.uint8)     # Hue in OpenCV is in [0, 179]
mask = cv2.inRange(hsv, lower_gray, upper_gray)
img_res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('gray.png', img_res)
You should also refer to this post. It's a good post on the use of HSV.
Basically, all you will need for this job will be:
HSV masks,
Otsu thresholding, blurs, and maybe erosion and dilation.
Use them in whatever combination fits your requirements best.

How to get the area of the contours?

I have a picture like this:
And then I transform it into a binary image and use Canny to detect the edges of the picture:
gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
edge = Image.fromarray(edges)
And then I get the result as:
I want to get the area of region 2, like this:
My solution is to use HoughLines to find lines in the picture and calculate the area of the triangle formed by the lines. However, this way is not precise because the closed area is not a standard triangle. How can I get the area of region 2?
A simple approach using floodFill and countNonZero could be the following code snippet. My standard quote on contourArea from the help:
The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using drawContours or fillPoly, can be different. Also, the function will most certainly give a wrong results for contours with self-intersections.
Code:
import cv2
import numpy as np
# Input image
img = cv2.imread('images/YMMEE.jpg', cv2.IMREAD_GRAYSCALE)
# Needed due to JPG artifacts
_, temp = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
# Dilate to better detect contours
temp = cv2.dilate(temp, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL , cv2.CHAIN_APPROX_NONE)
largestCnt = []
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initialize mask for flood filling
height, width = temp.shape
mask = np.ones((height + 2, width + 2), np.uint8) * 255
mask[1:height, 1:width] = 0
# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Count pixels in desired region
area = cv2.countNonZero(temp)
# Put result on original image
img = cv2.putText(img, str(area), (x, y), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, 255)
cv2.imshow('Input', img)
cv2.imshow('Temp image', temp)
cv2.waitKey(0)
Temporary image:
Result image:
Caveat: findContours has some problems on the right side, where the line is very close to the bottom image border, possibly resulting in some pixels being omitted.
Disclaimer: I'm new to Python in general, and especially to the Python API of OpenCV (C++ for the win). Comments, improvements, and highlighting of Python no-gos are highly welcome!
There is a very simple way to find this area, if you make some assumptions that are met in the example image:
The area to be found is bounded on top by a line
Any additional lines in the image are above the line of interest
There are no discontinuities in the line
In this case, the area of the region of interest is given by the sum of the lengths from the bottom of the image to the first set pixel. We can compute this with:
import numpy as np
import matplotlib.pyplot as pp
img = pp.imread('/home/cris/tmp/YMMEE.jpg')
img = np.flip(img, axis=0)
pos = np.argmax(img, axis=0)
area = np.sum(pos)
print('Area = %d\n'%area)
This prints Area = 22040.
np.argmax finds the first set pixel on each column of the image, returning the index. By first using np.flip, we flip this axis so that the first pixel is actually the one on the bottom. The index corresponds to the number of pixels between the bottom of the image and the line (not including the set pixel).
Thus, we're computing the area under the line. If you need to include the line itself in the area, add pos.shape[0] to the area (i.e. the number of columns).

Finding tables' contours with OpenCV

I am trying to detect a table's border with OpenCV. I chose the findContours function; here's a minimal demo.
def contour_proposal(rgb_matrix, weight_threshold, height_threshold):
"""
:return: list of proposal region, each region is a tuple (ltx, lty, rbx, rby)
"""
gray = cv.cvtColor(rgb_matrix, cv.COLOR_RGB2GRAY)
_, binary = cv.threshold(gray, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
# binary = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY, 11, 2)
# binary = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2)
new_image_matrix, contours, hierarchy = cv.findContours(binary, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
proposals = list(filter(
lambda x: x[2] > weight_threshold and x[3] > height_threshold,
map(cv.boundingRect, contours)
))
res = []
for p in proposals:
x, y, w, h = p
res.append((x, y, x+w, y+h))
return res
Here are two test images:
Here are visualizations of the detection results; the contours found are marked with blue rectangles.
The border of the table is correctly detected by findContours in the second image, but not in the first. However, these two tables seem to have similar features: both of them have top and bottom borders and lack left and right borders. My question is: why can the first table not be detected while the second one can? I've checked the binary image of the first image, and its border information is not lost.
So, how can I get the best detection results with OpenCV? Besides, the different thresholding methods seem to have no influence on performance in this specific task (table detection), so which one should I choose?
