So I have a skin color range that I'm using to draw contours around any skin my camera sees, i.e., it will draw contours on hands, faces, etc.
However, the color range I'm using to draw the contours has a red range of 0-255, so it basically draws contours around everything that is red. The red component is important for good skin detection, though, so I can't change that.
I was wondering how I could tweak my contour code so that it doesn't draw contours around the color red.
My code is as follows:
min_YCrCb = np.array([0,133,77],np.uint8) # Create a lower bound for the skin color
max_YCrCb = np.array([255,173,127],np.uint8) # Create an upper bound for skin color
skinRegion = cv2.inRange(converted,min_YCrCb,max_YCrCb) # Create a mask with boundaries
contours, hierarchy = cv2.findContours(skinRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Find the contour on the skin detection
for i, c in enumerate(contours): # Draw the contour on the source frame
    area = cv2.contourArea(c)
    if area > 10000:
        cv2.drawContours(img, contours, i, (255, 255, 0), 2)
In my experience with OpenCV, and what many others would recommend, the answer is converting your RGB color space into HSV.
HSV stands for Hue, Saturation, and Value, and is just another way of expressing colors in the visible spectrum. In RGB it can be very difficult to isolate bright or dull versions of a color, but this is not the case in HSV.
The Python documentation of OpenCV shows an example of converting BGR to HSV. NOTE: the H channel in OpenCV has a range of 0-179 while the other two are 0-255.
You can switch to HSV with hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) and then have your code find the contours in that image stream instead of your original. To figure out what your HSV range should be, you can use an online RGB to HSV converter. NOTE: OpenCV commonly uses BGR instead of RGB, so be careful not to get them mixed up.
This should help you find the exact range of colors you need to make sure your program works properly.
min_HSV = np.array([0,0,0],np.uint8) # Create a lower bound in HSV
max_HSV = np.array([179,255,255],np.uint8) # Create an upper bound in HSV
hsv = cv2.cvtColor(converted, cv2.COLOR_BGR2HSV) # assuming converted is your original BGR stream...
skinRegion = cv2.inRange(hsv, min_HSV, max_HSV) # Create a mask with the boundaries
contours, hierarchy = cv2.findContours(skinRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Find the contours on the skin mask
for i, c in enumerate(contours): # Draw the contours on the source frame
    area = cv2.contourArea(c)
    if area > 10000:
        cv2.drawContours(img, contours, i, (255, 255, 0), 2)
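If the goal is specifically to keep red out of the mask, note that red hues wrap around 0/180 in OpenCV's HSV space, so a "red" mask needs two inRange calls. A minimal sketch of that idea (the bounds are illustrative and assume img and skinRegion from the code above):

import cv2
import numpy as np

# Sketch: build a red mask from both ends of the hue range, then subtract it
# from the skin mask. These bounds are illustrative and will need tuning.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
red_low = cv2.inRange(hsv, np.array([0, 70, 50], np.uint8),
                      np.array([10, 255, 255], np.uint8))
red_high = cv2.inRange(hsv, np.array([170, 70, 50], np.uint8),
                       np.array([179, 255, 255], np.uint8))
red_mask = cv2.bitwise_or(red_low, red_high)

# drop red pixels from the skin mask before finding contours
skinRegion = cv2.bitwise_and(skinRegion, cv2.bitwise_not(red_mask))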
Related
Given an image like this, how would I go about extracting only the arrows? I'm having trouble filtering out all the noise, and it's difficult to filter by colour since the arrows can be in either the blue or the red HSV range. My goal is to have just the 4 arrows showing in the image and nothing else.
Here are some of the different photos:
What I've tried so far with OpenCV-Python:
Applying GaussianBlur
Grabbing hue and then applying canny
Drawing contours
Filtering by colour, then applying canny (difficult because arrows can be different colours)
In the end, none of these did a good job of isolating the arrows.
Here is one approach in Python/OpenCV that works for the image provided. I am not sure if it works for all the images; the threshold may need adjusting for each.
Read the input
Convert to LAB colorspace
Separate LAB channels
Get pixel-by-pixel maximum between the A and B channels
Threshold
Apply morphology close
Get contours
Filter the contours on area and perimeter
Draw the remaining contours as white filled on a black background
Save the results
Input:
import cv2
import numpy as np
# read the input
img = cv2.imread('game_arrows.png')
# convert to LAB
LAB = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# separate channels
L,A,B = cv2.split(LAB)
# get the maximum between the A and B pixel by pixel
ABmax = np.maximum(A, B)
# threshold
thresh = cv2.threshold(ABmax, 180, 255, cv2.THRESH_BINARY)[1]
# morphology close
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7,7))
morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours (findContours returns 2 or 3 values depending on the OpenCV version)
contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# filter contours on area and perimeter
result = np.zeros_like(img)
for cntr in contours:
    area = cv2.contourArea(cntr)
    perimeter = cv2.arcLength(cntr, True)
    if area > 300 and perimeter < 200:
        cv2.drawContours(result, [cntr], 0, (255,255,255), -1)
# save results
cv2.imwrite('game_arrows_ABmax.jpg', ABmax)
cv2.imwrite('game_arrows_thresh.jpg', thresh)
cv2.imwrite('game_arrows_morph.jpg', morph)
cv2.imwrite('game_arrows_result.jpg', result)
# show results
cv2.imshow('ABmax', ABmax)
cv2.imshow('thresh', thresh)
cv2.imshow('morph', morph)
cv2.imshow('result', result)
cv2.waitKey(0)
Maximum of A and B channels:
Threshold Image:
Morphology Closed Image:
Filtered Contours:
I am trying to determine the coordinates of multiple rectangles appearing randomly on the screen. The width of the rectangles is fixed (although with the contour method I noticed a bit of inaccuracy in determining it).
With my Python code:
yellow = (5, 242, 206)
while True:
    isFrameValid, frame = capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[0:300, 0:1920]
    threshold, thresh_image = cv2.threshold(roi, 30, 255, cv2.THRESH_BINARY)
    # Select contours
    contours, _ = cv2.findContours(thresh_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, yellow, 1)
I can only detect the entire block, even when trying different options instead of RETR_EXTERNAL. Looking at my example images, what I'd like to achieve is to detect the 3 rectangles (appearing at random positions on the screen) so I can correctly determine their coordinates. Are there any ideas or methods I don't know about? I'm new to OpenCV.
Example to reproduce the problem with an image
import cv2
img = cv2.imread('./IwOXW.png')
yellow = (5,242,206)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
roi = gray[0:300, 0:1920]
threshold, thresh_image = cv2.threshold(roi, 30, 255, cv2.THRESH_BINARY)
#Select contours
contours, _ = cv2.findContours(thresh_image,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1,yellow,1)
cv2.imshow('Frame',img)
cv2.waitKey(0)
with this image
The reason findContours can only detect the entire block is that there are no lines on the inside of the block for it to detect.
I can think of two options for you to try:
Use the contours that you have, and write some smarts to find 90 degree bends, and thus build your rectangles
You could investigate using HoughLines to detect the lines. You would probably still have to write some code to take the detected lines and figure out which ones form rectangles, but it might be simpler with HoughLines since it gives you straight lines to work with (see the sketch below). Look for HoughLines in the docs: https://docs.opencv.org/4.x/
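A minimal sketch of the HoughLines idea using the probabilistic variant (the thresholds, lengths, and file name are illustrative and will need tuning for your frames):

import cv2
import numpy as np

img = cv2.imread('./IwOXW.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=5)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (5, 242, 206), 1)

cv2.imshow('lines', img)
cv2.waitKey(0)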
Out of an image, I need to extract a sheet of paper, just like the CamScanner app does: https://www.camscanner.com/
I know that I can do this by detecting the edges of the sheet of paper I want to extract and later performing a perspective transform. I use the OpenCV library in Python.
This is the image in which I'm trying to find the sheet of paper:
Here is what I already tried:
Method 1:
(using thresholding)
Preprocessing the image with smoothing (Gaussian blur/bilateral filter)
splitting image into h,s,v channels
adaptive thresholding on the saturation channel
some morphological operations like dilation and erosion
finding contours, identifying the largest contour and finding the corner points
I've implemented this method based on a stackoverflow answer:
Detecting a sheet of paper / Square Detection
I'm able to find the paper sheet for some images, but it fails for images like this:
Method 2:
(using sobel gradient operator)
Preprocessing the image by converting to grayscale and smoothing (Gaussian blur/bilateral filter)
Finding the gradients of the image
downsampling and upsampling the image
After this I don't know how to find the appropriate boundary enclosing the sheet of paper.
I've implemented this method based on a stackoverflow answer:
detect paper from background almost same as paper color
Here's how far I got with the image:
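For reference, the Method 2 steps above roughly correspond to this sketch (the parameters are illustrative):

import cv2
import numpy as np

image = cv2.imread("./images/sheet3.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# gradient magnitude from Sobel derivatives in x and y
gx = cv2.Sobel(blur, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(blur, cv2.CV_32F, 0, 1, ksize=3)
grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# downsample then upsample to suppress fine texture
small = cv2.pyrDown(grad)
grad = cv2.pyrUp(small, dstsize=(grad.shape[1], grad.shape[0]))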
Method 3:
(using canny edge detector)
According to the posts I've read on this community, it seems that everyone prefers the Canny edge method to extract the edges, but in my case the results are not satisfactory. Here's what I did:
Preprocessing the image by converting to grayscale and smoothing (Gaussian blur/bilateral filter)
Finding the edges using Canny edge detection
some morphological operations like dilation and erosion
But the edges obtained from canny are really not up to the mark.
I've implemented this method based on a stackoverflow answer:
Detecting a sheet of paper / Square Detection; also, I didn't quite understand what he does by iterating over multiple channels in that answer.
Here's how far I got with the image:
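For reference, the Method 3 steps roughly correspond to this sketch (the Canny thresholds and kernel size are illustrative):

import cv2

image = cv2.imread("./images/sheet3.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray, 9, 75, 75)

# Canny edges, then a morphological close to join broken edge fragments
edges = cv2.Canny(blur, 50, 150)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)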
Here's some code for method 1 (thresholding):
#READING IMAGE INTO BGR SPACE
image = cv2.imread("./images/sheet3.png")
#BILATERAL FILTERING TO SMOOTHEN THE IMAGE BUT NOT THE EDGES
img = cv2.bilateralFilter(image,20,75,75)
#CONVERTING BGR TO HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
#SPLITTING THE HSV CHANNELS
h,s,v = cv2.split(hsv)
#DOUBLING THE SATURATION CHANNEL
gray_s = cv2.addWeighted(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 0.0, s, 2.0, 0)
#THRESHOLDING USING ADAPTIVETHRESHOLDING
threshed = cv2.adaptiveThreshold(gray_s, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 109, 10)
#APPLYING MORPHOLOGICAL OPERATIONS OF DILATION AND EROSION
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
morph = cv2.morphologyEx(threshed, cv2.MORPH_OPEN, kernel)
#FINDING ALL THE CONTOURS
cnts = cv2.findContours(morph, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
canvas = img.copy()
#SORTING THE CONTOURS AND TAKING THE LARGEST CONTOUR
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]
#FINDING THE PERIMETER OF THE CONTOUR
arclen = cv2.arcLength(cnt, True)
#FINDING THE END POINTS OF THE CONTOUR BY APPROX POLY DP
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
cv2.imwrite("detected.png", canvas)
I'm kind of new to image processing and OpenCV.
Please share some insights on how to take this further and obtain more accurate results. TIA.
I have been using OpenCV's findContours() to find areas of contiguous black pixels. Sometimes it selects the area of white pixels surrounding the black pixels, e.g. in this figure the "g", "e", and "n" are selected with black pixels as I expect, but the other three letters are selected by the surrounding area of white pixels, as shown by the green points of the contour:
Sometimes, the "g" with the white area inside the bowl is selected as a contour, and other times the white area inside the bowl is a different contour.
For both examples, I could deal with the hierarchy and check which contours are children of which other contours, but I think I am missing something simpler.
How can I get OpenCV to select and return each separate area of contiguous black pixels?
This is caused by findContours, which looks for white shapes on a black background. Simply inverting your image will improve the results. The code below draws the contours one by one with a keypress, so you can see that it is the black pixel areas that are selected.
import cv2
import numpy as np
# Read image in color (so we can draw in red)
img = cv2.imread("vBQa7.jpg")
# convert to gray and threshold to get a binary image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, dst = cv2.threshold(gray, 20, 255, cv2.THRESH_BINARY)
# invert image
dst = cv2.bitwise_not(dst)
# find contours
contours, hierarchy = cv2.findContours(dst, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw contours one by one, advancing with a keypress
for cnt in contours:
    cv2.drawContours(img, [cnt], 0, (0, 0, 255), 2)
    cv2.imshow("Result", img)
    cv2.waitKey(0)
# show image
cv2.imshow("Result",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
You'll find that there are also some small black patches selected, as well as the background area. You can remove these by setting a minimum and maximum size and checking the contourArea of each contour; see the sketch below.
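A minimal sketch of that size filter (the bounds are illustrative and depend on your image):

# keep only contours whose area falls between illustrative bounds
min_area, max_area = 50, 10000
letters = [c for c in contours if min_area < cv2.contourArea(c) < max_area]
for cnt in letters:
    cv2.drawContours(img, [cnt], 0, (0, 0, 255), 2)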
I don't know whether this is an option for your use case, but you could take the following steps (a sketch follows below):
identify the black pixels with filters/thresholds
use a clustering algorithm (DBSCAN, I think) to group the pixels together
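A rough sketch of that idea using scikit-learn's DBSCAN (the library choice, eps, and min_samples are assumptions, not part of the original suggestion):

import cv2
import numpy as np
from sklearn.cluster import DBSCAN  # assumes scikit-learn is installed

# treat each black pixel's (row, col) as a point and let DBSCAN group
# touching/nearby pixels; eps and min_samples depend on image resolution
img = cv2.imread("vBQa7.jpg", cv2.IMREAD_GRAYSCALE)
black = np.column_stack(np.where(img < 20))  # coordinates of black pixels

labels = DBSCAN(eps=2, min_samples=4).fit_predict(black)
for label in set(labels) - {-1}:  # -1 is DBSCAN's noise label
    cluster = black[labels == label]
    print(f"cluster {label}: {len(cluster)} pixels")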
Let's say I have the following image, where there is a folder image with a white label on it.
What I want is to detect the coordinates of end points of the folder and the white paper on it (both rectangles).
Using the coordinates, I want to know the exact place of the paper on the folder.
GIVEN:
The inner white paper rectangle is always going to be of a fixed size, so maybe we can use this knowledge somewhere?
I am new to OpenCV and am looking for guidance on how I should approach this problem.
Problem statement: We cannot rely on a color-based solution, since this is just an example and the color of both the folder and the rectangular paper can change.
There can be other noisy papers too, but one thing is given: the overall folder and the big rectangular paper will always be the two biggest rectangles at any given time.
I have tried OpenCV's Canny edge detection and the result looks like this image.
Now how can I find the coordinates of the outer rectangle and the inner rectangle?
For this image, there are three dominant colors: (1) the yellow background, (2) the blue folder, (3) the white paper. Using the color info may help; I analyzed it in RGB and HSV like this:
As you can see (second row, third cell), the regions can be easily separated in H (HSV) if you find the folder mask first.
My steps:
(1) find the folder region mask in HSV using inRange(hsv, (80, 10, 20), (150, 255, 255))
(2) find contours on the mask and filter them by width and height (a sketch of both steps follows)
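A minimal sketch of these two steps (the file name and the size bounds are illustrative):

import cv2

img = cv2.imread('folder.jpg')  # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# (1) folder region mask in HSV
mask = cv2.inRange(hsv, (80, 10, 20), (150, 255, 255))

# (2) contours on the mask, filtered by bounding-box size (OpenCV 4.x signature)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if w > 100 and h > 100:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)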
Here is the result:
Related:
Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)
How to define a threshold value to detect only green colour objects in an image :Opencv
You can opt for [Adaptive Threshold](https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html).
Obtain the hue channel of the image.
Perform adaptive thresholding with a certain block size. I used a block size of 15 on the half-sized image.
This is invariant to color, as you expected. Now you can go ahead and extract what you need! A sketch of these steps follows.
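A small sketch of those steps (the file name and the C constant are illustrative; the block size of 15 follows the suggestion above):

import cv2

img = cv2.imread('folder.jpg')  # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]  # hue channel

# adaptive mean threshold with block size 15 and an illustrative C of 2
thresh = cv2.adaptiveThreshold(hue, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 15, 2)
cv2.imshow('thresh', thresh)
cv2.waitKey(0)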
This solution helps to identify the white paper region of the image.
This is the full code for the solution:
import cv2
import numpy as np
image = cv2.imread('stack2.jpg',-1)
paper = cv2.resize(image,(500,500))
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY),
                                 200, 255, cv2.THRESH_BINARY)
# Note: the three-value return is the OpenCV 3.x findContours signature
image, contours, hier = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for c in contours:
    area = cv2.contourArea(c)
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    # convert all coordinates from floating point to int
    box = np.int0(box)
    # draw a green rotated rectangle
    if area > 500:
        cv2.drawContours(paper, [box], 0, (0, 255, 0), 1)
        print([box])
cv2.imshow('paper', paper)
cv2.imwrite('paper.jpg',paper)
cv2.waitKey(0)
First, using a manual threshold (200), you can detect the paper in the image.
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY), 200, 255, cv2.THRESH_BINARY)
After that you should find the contours and get the minAreaRect(). Then you should get the coordinates of that rectangle (box) and draw it.
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(paper, [box], 0, (0, 255, 0),1)
In order to avoid small white regions of the image, you can use area = cv2.contourArea(c) and check if area > 500 before calling drawContours().
final output:
Console output gives coordinates for the white paper.
console output:
[array([[438, 267],
        [199, 256],
        [209,  60],
        [447,  71]], dtype=int64)]