Drawing a shapely object in OpenCV

I have a shapely polygon like this:
polygon = Polygon([(0, 0), (1, 1), (1, 0), (0, 0)])
When I directly print this polygon in a Jupyter notebook cell, it renders the polygon perfectly. What I want, though, is to draw this polygon onto an image. How can I do that?
In brief: how can I draw a shapely Polygon object with OpenCV (cv2) so that I can process it in an object counting project?
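A minimal sketch, assuming the polygon's coordinates are already in pixel units (a unit-sized polygon like the one above would need scaling up first): shapely exposes the outline as polygon.exterior.coords, which you can convert to the int32 point array that cv2.fillPoly and cv2.polylines expect.

import cv2
import numpy as np
from shapely.geometry import Polygon

# scaled-up version of the example polygon so it is visible in an image
polygon = Polygon([(0, 0), (100, 100), (100, 0), (0, 0)])

img = np.zeros((200, 200, 3), dtype=np.uint8)  # blank BGR canvas
pts = np.array(polygon.exterior.coords, dtype=np.int32)  # (N, 2) integer points
cv2.fillPoly(img, [pts], color=(0, 255, 0))  # filled polygon
cv2.polylines(img, [pts], isClosed=True, color=(255, 255, 255), thickness=1)  # outline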

Related

OpenCV: Find coordinates of edges/contours

Let's say I have the following image, where there is a folder with a white label on it.
What I want is to detect the coordinates of the end points of the folder and the white paper on it (both rectangles).
Using the coordinates, I want to know the exact place of the paper on the folder.
Given:
The inner white paper rectangle is always going to be of a fixed size, so maybe we can use this knowledge somewhere?
I am new to OpenCV and am trying to find some guidance on how I should approach this problem.
Problem statement: We cannot rely on a color-based solution, since this is just an example and the color of both the folder and the rectangular paper can change.
There can be other noisy papers too, but one thing is given: the overall folder and the big rectangular paper will always be the two biggest rectangles at any given time.
I have tried OpenCV Canny for edge detection and it looks like this image.
Now how can I find the coordinates of the outer rectangle and the inner rectangle?
For this image, there are three dominant colors: (1) the background (yellow), (2) the folder (blue), and (3) the paper (white). Using the color info may help; I analyzed it in RGB and HSV like this:
As you can see (the second row, the third cell), the regions can be easily separated in H (HSV) if you find the folder mask first.
My steps:
(1) Find the folder region mask in HSV using inRange(hsv, (80, 10, 20), (150, 255, 255)).
(2) Find contours on the mask and filter them by width and height.
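A sketch of those two steps in Python (the file name and the width/height cutoffs are assumptions):

import cv2

img = cv2.imread('folder.jpg')  # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# (1) folder region mask in HSV
mask = cv2.inRange(hsv, (80, 10, 20), (150, 255, 255))

# (2) contours on the mask, filtered by width and height
res = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = res[0] if len(res) == 2 else res[1]  # OpenCV 4.x vs 3.x return values
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    if w > 100 and h > 100:  # size cutoffs are assumptions
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)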
Here is the result:
Related:
Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)
How to define a threshold value to detect only green colour objects in an image: OpenCV
You can opt for [Adaptive Threshold](https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html).
Obtain the hue channel of the image.
Perform adaptive threshold with a certain block size. I used a block size of 15 on an image at half its original size.
This is invariant to color, as you expected. Now you can go ahead and extract what you need!
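A short sketch of that idea, assuming a BGR input image (the file name and the constant C=2 passed to adaptiveThreshold are assumptions):

import cv2

img = cv2.imread('folder.jpg')  # hypothetical file name
half = cv2.resize(img, None, fx=0.5, fy=0.5)  # half the original size
hsv = cv2.cvtColor(half, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]  # hue channel

# adaptive threshold with block size 15, as suggested above
thresh = cv2.adaptiveThreshold(hue, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 15, 2)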
This solution helps to identify the white paper region of the image.
This is the full code for the solution:
import cv2
import numpy as np

image = cv2.imread('stack2.jpg', -1)
paper = cv2.resize(image, (500, 500))
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY),
                                 200, 255, cv2.THRESH_BINARY)
# findContours returns 3 values in OpenCV 3.x and 2 in OpenCV 4.x
res = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = res[0] if len(res) == 2 else res[1]

for c in contours:
    area = cv2.contourArea(c)
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    # convert all coordinate floating point values to int
    box = np.int0(box)
    # draw a green rotated ('nghien') rectangle around large regions only
    if area > 500:
        cv2.drawContours(paper, [box], 0, (0, 255, 0), 1)
        print([box])

cv2.imshow('paper', paper)
cv2.imwrite('paper.jpg', paper)
cv2.waitKey(0)
First, using a manual threshold (200), you can detect the paper in the image.
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY), 200, 255, cv2.THRESH_BINARY)
After that, you should find contours and get the minAreaRect(). Then you should get the coordinates of that rectangle (box) and draw it.
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(paper, [box], 0, (0, 255, 0),1)
In order to avoid small white regions of the image, you can compute area = cv2.contourArea(c) and only call drawContours() when area > 500.
Final output:
The console output gives the coordinates of the white paper:
[array([[438, 267],
[199, 256],
[209, 60],
[447, 71]], dtype=int64)]

Detecting an outer circle using OpenCV HoughCircles

I am trying to detect two concentric circles using OpenCV in Android. The big outer circle is red, the inner smaller circle is blue. The idea is to detect the big circle while the distance is long, and detect the inner circle as the distance becomes short.
Sample picture
I am using simple code:
Mat matRed = new Mat();
Core.inRange(matHsv, getScalar(hue - HUE_D, saturation - SAT_D, brightness - BRIGHT_D), getScalar(hue + HUE_D, saturation + SAT_D, brightness + BRIGHT_D), matRed);
//here we have black-white image
Imgproc.GaussianBlur(matRed, matRed, new Size(0, 0), 6, 6);
Mat matCircles = new Mat();
Imgproc.HoughCircles(matRed, matCircles, CV_HOUGH_GRADIENT, 1, matRed.rows()/8, 100, param2, 0, 0);
After calling inRange we have a white ring on a black background. The HoughCircles function detects only the inner black circle.
How can I make it detect the outer white circle instead?
Without seeing a sample image (or being quite sure what you mean by 'detect big circle while distance is long and detect inner circle as the distance becomes short'), this is somewhat of a guess, but I'd suggest using Canny edge detection to get the boundaries of your circles and then using contours to extract the edges. You can use the contour hierarchy to determine which is inside which, if you need to extract one or the other.
Additionally, given the circles are different colours, you might want to look at using inRange to segment based on colour; for example, this post from PyImageSearch contains a Python application which does colour-based tracking.
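A rough Python sketch of the Canny-plus-hierarchy idea (the file name and Canny thresholds are assumptions):

import cv2

img = cv2.imread('circles.jpg')  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # thresholds are assumptions

# RETR_CCOMP gives a two-level hierarchy: outer boundaries and their holes
res = cv2.findContours(edges, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
cnts, hierarchy = res if len(res) == 2 else res[1:]
for i, c in enumerate(cnts):
    if hierarchy[0][i][3] == -1:  # no parent, so this is an outermost contour
        (x, y), r = cv2.minEnclosingCircle(c)
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)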

OpenCV: Detect boundary and ROI mask

Hi, I have attached an image below with a yellow bounding box. Is there any algorithm (or sequence of algorithms) in OpenCV by which I can detect the yellow pixels and create an ROI mask (which will block out all the pixels outside of it)?
You can do:
Find the yellow polygon
Fill the inside of the polygon
Copy only the inside of the polygon to a black-initialized image
Find the yellow polygon
Unfortunately, you used anti-aliasing to draw the yellow line, so the yellow color is not pure yellow but has a wider range due to interpolation. This also affects the final results, since some non-yellow pixels will be included in the result image. You can easily correct this by not using anti-aliasing.
So the best option is to convert the image to the HSV space (where it's easier to segment a single color) and keep only values in a range around pure yellow.
If you don't use anti-aliasing, you don't even need to convert to HSV; you can simply keep points whose value is pure yellow.
Fill the inside of the polygon
You can use floodFill to fill the polygon. You need a starting point for that. Since we don't know whether a given point is inside the polygon (and taking the barycenter may not be safe, since the polygon is not convex), we can safely assume that the point (0,0), i.e. the top-left corner of the image, is outside the polygon. We can then fill the outside of the polygon and invert the result.
Copy only the inside of the polygon to a black-initialized image
Once you have the mask, simply use copyTo with that mask to copy onto a black image the content under the non-zero pixels of the mask.
Here is the full code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");

    // Convert to HSV color space
    Mat3b hsv;
    cvtColor(img, hsv, COLOR_BGR2HSV);

    // Get yellow pixels
    Mat1b polyMask;
    inRange(hsv, Scalar(29, 220, 220), Scalar(31, 255, 255), polyMask);

    // Fill outside of polygon
    floodFill(polyMask, Point(0, 0), Scalar(255));

    // Invert (inside of polygon filled)
    polyMask = ~polyMask;

    // Create a black image
    Mat3b res(img.size(), Vec3b(0, 0, 0));

    // Copy only masked part
    img.copyTo(res, polyMask);

    imshow("Result", res);
    waitKey();

    return 0;
}
Result:
NOTES
Please note that there are some nearly-yellow pixels in the result image. This is due to anti-aliasing, as explained above.

Outlying pixels in OpenCV warpPerspective()

My application requires mapping one quadrilateral to another quadrilateral. Neither of these are rectangles.
However, the result I get from warpPerspective() is always a rectangle. I have tried setting the "outlier" flag to different values to prevent pixels from outside the warped quad from appearing in the destination image but nothing seems to work. What I want is a warped quad with the pixels outside the warped quad set to transparency.
How do I achieve this?
Alternatively, is there a straightforward way to mask the region outside a quadrilateral in OpenCV so that I can copy just the quad to another image?
In case it is relevant, I am using the Python binding to OpenCV.
Here is my current code:
import cv2
import numpy

def warpImage(image, corners, target, width, height):
    mat = cv2.getPerspectiveTransform(corners, target)
    # note: a numpy shape is (rows, cols), i.e. (height, width);
    # warpPerspective's dsize argument is (width, height)
    out = numpy.zeros(shape=(width, height), dtype="uint8")
    out = cv2.warpPerspective(image, mat, (width, height), out, cv2.INTER_CUBIC)
    return out
corners and target are both non-rectangular quads. The output is a full widthxheight rectangle, however. None of the pixels are black or transparent. Instead they are pixels from the image both inside and outside the corners quad. I only want the ones inside.
The best option I have found is to cycle through the pixels and copy the ones in the warped quad to a remap array, using the matplotlib pnpoly() function, like so:
import matplotlib.nxutils as nx

def warpImage(image, corners, target, width, height, x0, y0, remap):
    mat = cv2.getPerspectiveTransform(corners, target)
    out = cv2.warpPerspective(image, mat, (width, height), flags=cv2.INTER_CUBIC)
    for x in range(0, width):
        for y in range(0, height):
            if nx.pnpoly(x, y, target) == 1:
                for i in range(0, 3):
                    remap[y + y0, x + x0, i] = out[y, x, i]
    return remap
I loop through all the quads in image and accumulate transformed versions in remap.
Having to access each pixel is not very efficient, but fortunately this is a one-time transformation.
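If the per-pixel loop ever becomes a bottleneck, a mask-based variant is possible: fill the target quad into a mask with cv2.fillPoly and copy with boolean indexing. A sketch under the same signature (an assumed alternative, not the author's code):

import cv2
import numpy as np

def warpImageMasked(image, corners, target, width, height, x0, y0, remap):
    mat = cv2.getPerspectiveTransform(corners, target)
    out = cv2.warpPerspective(image, mat, (width, height), flags=cv2.INTER_CUBIC)
    mask = np.zeros((height, width), dtype=np.uint8)
    cv2.fillPoly(mask, [np.int32(target)], 255)  # 255 inside the target quad
    roi = remap[y0:y0 + height, x0:x0 + width]
    roi[mask > 0] = out[mask > 0]  # copy only pixels inside the quad
    return remap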

OpenCV Object Detection - Center Point

Given an object on a plain white background, does anybody know if OpenCV provides functionality to easily detect an object from a captured frame?
I'm trying to locate the corner/center points of an object (rectangle). The way I'm currently doing it is by brute force (scanning the image for the object), and it's not accurate. I'm wondering if there is functionality under the hood that I'm not aware of.
Edit Details:
The size is about the same as a small soda can. The camera is positioned above the object to give it a 2D/rectangle feel. The orientation/angle from the camera is random, and is calculated from the corner points.
It's just a white background with the object on it (black). The quality of the shot is about what you'd expect to see from a Logitech webcam.
Once I get the corner points, I calculate the center. The center point is then converted to centimeters.
It's refining just how I get those 4 corners that I'm trying to focus on. You can see my brute force method with this image: Image
There's already an example of how to do rectangle detection in OpenCV (look in samples/squares.c), and it's actually quite simple.
Here's the rough algorithm they use:
0. rectangles <- {}
1. image <- load image
2. for every channel:
   2.1 image_canny <- apply canny edge detector to this channel
   2.2 for threshold in bunch_of_increasing_thresholds:
       2.2.1 image_thresholds[threshold] <- apply threshold to this channel
   2.3 for each contour found in {image_canny} U image_thresholds:
       2.3.1 approximate contour with polygons
       2.3.2 if the approximation has four corners and the angles are close to 90 degrees:
             2.3.2.1 rectangles <- rectangles U {contour}
Not an exact transliteration of what they are doing, but it should help you.
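A rough Python rendering of that pseudocode, as a sketch: it uses a single gray channel instead of every channel, a convexity check stands in for the 90-degree angle test, and the file name, threshold steps, and approxPolyDP epsilon are assumptions.

import cv2
import numpy as np

img = cv2.imread('squares.jpg')  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
rectangles = []
for t in range(50, 250, 50):  # a bunch of increasing thresholds
    bw = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)[1]
    res = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = res[0] if len(res) == 2 else res[1]
    for c in cnts:
        # approximate the contour with a polygon
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            rectangles.append(approx)  # four corners, roughly rectangular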
Hope this helps; this uses the moments method to get the centroid of a black-and-white image.
cv::Point getCentroid(cv::Mat img)
{
    cv::Point Coord;
    cv::Moments mm = cv::moments(img, false);
    double moment10 = mm.m10;
    double moment01 = mm.m01;
    double moment00 = mm.m00;
    Coord.x = int(moment10 / moment00);
    Coord.y = int(moment01 / moment00);
    return Coord;
}
OpenCV has heaps of functions that can help you achieve this. Download Emgu.CV for a C#/.NET wrapper to the library if you are programming in that language.
Some methods of getting what you want:
Find the corners as before, e.g. with the "CornerHarris" OpenCV function.
Threshold the image and calculate the centre of gravity; see http://www.roborealm.com/help/Center%20of%20Gravity.php. This is the method I would use. You can even perform the thresholding in the COG routine, i.e. cog_x += *imagePtr < 128 ? 255 : 0; (see the sketch after this list).
Find the moments of the image to give rotation, centre of gravity, etc., e.g. with the "Moments" OpenCV function. (I haven't used this.)
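A sketch of option 2 in Python/NumPy, assuming a dark object on a white background (the file name is hypothetical; the 128 cutoff comes from the snippet above):

import cv2
import numpy as np

img = cv2.imread('object.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name
ys, xs = np.nonzero(img < 128)  # threshold inside the COG routine
cog_x, cog_y = xs.mean(), ys.mean()  # centre of gravity of the object pixels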
(Edit) The AForge.NET library has corner detection functions, as well as an example project (MotionDetector) and libraries to connect to webcams. I think this would be the easiest way to go, assuming you are using Windows and .NET.
Since no one has posted a complete OpenCV solution, here's a simple approach:
Obtain binary image. We load the image, convert to grayscale, and then obtain a binary image using Otsu's threshold
Find outer contour. We find contours using findContours and then extract the bounding box coordinates using boundingRect
Find center coordinate. Since we have the contour, we can find the center coordinate using moments to extract the centroid of the contour
Here's an example with the bounding box and center point highlighted in green
Input image -> Output
Center: (100, 100)
Center: (200, 200)
Center: (300, 300)
So to recap:
Given an object on a plain white background, does anybody know if OpenCV provides functionality to easily detect an object from a captured frame?
First obtain a binary image (Canny edge detection, simple thresholding, Otsu's threshold, or adaptive threshold) and then find contours using findContours. To obtain the bounding rectangle coordinates, you can use boundingRect, which will give you the coordinates in the form x,y,w,h. To draw the rectangle, you can draw it with rectangle. This will give you the 4 corner points of the contour. If you want the center point, use moments to extract the centroid of the contour.
Code
import cv2
import numpy as np

# Load image, convert to grayscale, and apply Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Find contours and extract the bounding rectangle coordinates,
# then find moments to obtain the centroid
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    # Obtain bounding box coordinates and draw rectangle
    x,y,w,h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2)

    # Find center coordinate and draw center point
    M = cv2.moments(c)
    cx = int(M['m10']/M['m00'])
    cy = int(M['m01']/M['m00'])
    cv2.circle(image, (cx, cy), 2, (36,255,12), -1)
    print('Center: ({}, {})'.format(cx,cy))

cv2.imshow('image', image)
cv2.waitKey()
This is usually called blob analysis in other machine vision libraries. I haven't used OpenCV yet.
