How to fill polygon in image

I have a closed contour region in an image. How do I fill it with white? I'm not sure what the Julia equivalent of fillPoly(..) is. Thanks.
Given a black-background image with a thin white circle:
contour = findall(img .> 0)
img_With_contour_filled_with_white = ..

Original Img
Filled Img
code:
using Images
img = load("img.png")
function fill_poly!(img::Matrix{RGB{N0f8}})
    # columns that contain at least one non-white pixel
    bit_cols = all.(==(RGB(1., 1., 1.)), eachcol(img)) .== 0
    idx_cols = findall(bit_cols)
    # first and last non-white pixel in each of those columns
    f = findfirst.(!=(RGB(1., 1., 1.)), eachcol(img[:, bit_cols]))
    l = findlast.(!=(RGB(1., 1., 1.)), eachcol(img[:, bit_cols]))
    # paint each column's span between those extremes
    # (as written this fills with black on a white background; swap the
    # two colors for the black-background / white-fill case in the question)
    foreach(x -> img[x[1]:x[2], x[3]] .= RGB(0., 0., 0.), zip(f, l, idx_cols))
end;
fill_poly!(img)
Known issue: because each column is filled between its first and last non-white pixel, it also fills regions outside non-convex contours, like the following:
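For comparison, OpenCV's Python bindings can do this fill directly; a minimal sketch (file name assumed; OpenCV >= 4 findContours signature) that fills the detected contour with white:

import cv2

# Black-background image with a thin white outline (name assumed)
img = cv2.imread("img.png", cv2.IMREAD_GRAYSCALE)

# Find the closed contour(s) and fill them with white;
# thickness=cv2.FILLED gives the fillPoly-style fill the question asks about
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, 255, cv2.FILLED)

cv2.imwrite("img_filled.png", img)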

Related

Line-following robot in outdoor conditions with Intel RealSense D415

I am working on a line-following robot using OpenCV, with an Intel RealSense D415 depth camera for image capture. I am using cv2.findContours and taking the largest contour as the line, then working on that. The problem is that along with the main line, the code also detects random contours: when I capture empty ground, it considers the whole frame as one contour, and it also picks up random objects as contours. Is there a way to detect just the black line and nothing else as a contour?
Code:
cap = cv2.VideoCapture(2)
ret,frame = cap.read()
frame = frame[0:800,0:1800]
value = 80 #whatever value you want to add
cv2.add(frame[:,:,2], value, frame[:,:,2])
img = cv2.cvtColor(frame, cv2.COLOR_HSV2BGR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.GaussianBlur(img,(5,5),0)
_,threshold = cv2.threshold(img,60,255,cv2.THRESH_BINARY_INV)
mask = cv2.erode(threshold, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
contours, hierarchy = cv2.findContours(mask.copy(), 1, cv2.CHAIN_APPROX_NONE)
max_c = max(contours, key=cv2.contourArea)
cv2.drawContours(img, max_c, -1, (0,255,0), 3)
Image with object
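The thread does not include an answer here, but one common mitigation (an assumption on my part, not something from the thread) is to reject contours that are too small or not elongated enough to be the line before taking the maximum, plugged in right after the findContours call above:

import cv2

def line_candidates(contours, min_area=500.0, min_aspect=3.0):
    # keep contours that are large and elongated (thresholds are hypothetical)
    keep = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        # the rotated bounding box gives width/height regardless of orientation
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        if min(w, h) > 0 and max(w, h) / min(w, h) >= min_aspect:
            keep.append(c)
    return keep

candidates = line_candidates(contours)
max_c = max(candidates, key=cv2.contourArea) if candidates else None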

How to get the area of the contours?

I have a picture like this:
And then I convert it into a binary image and use Canny to detect the edges of the picture:
gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
edges = cv.Canny(gray, 50, 150)  # the Canny step was missing; threshold values assumed
edge = Image.fromarray(edges)
And then I get the result as:
I want to get the area of 2 like this:
My solution is to use HoughLines to find the lines in the picture and compute the area of the triangle they form. However, this is not precise because the closed region is not an exact triangle. How can I get the area of region 2?
A simple approach using floodFill and countNonZero could be the following code snippet. My standard quote on contourArea from the help:
The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using drawContours or fillPoly, can be different. Also, the function will most certainly give a wrong results for contours with self-intersections.
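That caveat is easy to see on a synthetic image; a small sketch of mine comparing the two notions of area (exact numbers vary slightly):

import cv2
import numpy as np

# Draw a filled disc and compare the two area measures from the quote
canvas = np.zeros((64, 64), np.uint8)
cv2.circle(canvas, (32, 32), 10, 255, cv2.FILLED)
cnts, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(cv2.contourArea(cnts[0]))  # Green-formula area
print(cv2.countNonZero(canvas))  # non-zero pixel count; slightly different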
Code:
import cv2
import numpy as np
# Input image
img = cv2.imread('images/YMMEE.jpg', cv2.IMREAD_GRAYSCALE)
# Needed due to JPG artifacts
_, temp = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
# Dilate to better detect contours
temp = cv2.dilate(temp, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL , cv2.CHAIN_APPROX_NONE)
largestCnt = []
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt
# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])
# Initialize mask for flood filling
width, height = temp.shape  # NumPy shape is (rows, cols)
mask = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width + 1, 1:height + 1] = 0  # zero the interior; the +1 accounts for the mask's 1-pixel border
# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
# Count pixels in desired region
area = cv2.countNonZero(temp)
# Put result on original image
img = cv2.putText(img, str(area), (x, y), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, 255)
cv2.imshow('Input', img)
cv2.imshow('Temp image', temp)
cv2.waitKey(0)
Temporary image:
Result image:
Caveat: findContours has some problems on the right side, where the line is very close to the bottom image border, possibly resulting in some omitted pixels.
Disclaimer: I'm new to Python in general, and especially to the Python API of OpenCV (C++ for the win). Comments, improvements, and highlighting of Python no-gos are highly welcome!
There is a very simple way to find this area, if you make some assumptions that hold for the example image:
The area to be found is bounded on top by a line
Any additional lines in the image are above the line of interest
There are no discontinuities in the line
In this case, the area of the region of interest is given by the sum, over image columns, of the distance from the bottom of the image to the first set pixel. We can compute this with:
import numpy as np
import matplotlib.pyplot as pp
img = pp.imread('/home/cris/tmp/YMMEE.jpg')
img = np.flip(img, axis=0)
pos = np.argmax(img, axis=0)
area = np.sum(pos)
print('Area = %d\n'%area)
This prints Area = 22040.
np.argmax finds the first set pixel on each column of the image, returning the index. By first using np.flip, we flip this axis so that the first pixel is actually the one on the bottom. The index corresponds to the number of pixels between the bottom of the image and the line (not including the set pixel).
Thus, we're computing the area under the line. If you need to include the line itself in the area, add pos.shape[0] to the area (i.e. the number of columns).
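A one-column toy example (mine, not from the answer) of what the flip-then-argmax trick returns:

import numpy as np

col = np.array([0, 255, 0, 0, 0])  # line pixel at row 1; bottom of image is row 4
print(np.argmax(np.flip(col)))     # 3 -> three background pixels below the line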

OpenCV warpAffine negative translation

I am attempting to use OpenCV's warpAffine to transform an image with a simple translation. The image produced by a negative versus a positive translation surprises me.
from skimage import data
import numpy as np
import cv2
from pylab import *
import matplotlib.pyplot as plt  # pylab's star import may not define plt
ion()
fig = figure()
fig.clear()
image = data.camera()
# positive translation
rigid0 = np.float32([[1.0, 0.0, 96.0], [0.0, 1.0, 0.0]])
w0 = cv2.warpAffine(image,rigid0,(image.shape[1]+int(abs(rigid0[0,2])),image.shape[0]))
# negative translation
rigid1 = np.float32([[1.0, 0.0, -96.0], [0.0, 1.0, 0.0]])
w1 = cv2.warpAffine(image,rigid1,(image.shape[1]+int(abs(rigid1[0,2])),image.shape[0]))
plt.subplot(1, 2, 1)
imshow(w0, cmap=gray())
plt.subplot(1, 2, 2)
imshow(w1, cmap=gray())
I have inserted the produced figure below; notice how the negative translation on the right appears to eat twice as many pixels off the image. Both images are constructed with a translation of 96 pixels, one negative and the other positive.
I'm able to reproduce your output in C++:
#include <opencv2/opencv.hpp>
int main(int argc, char *argv[]) {
    cv::Mat img = cv::imread("H:/cameraman.jpg");
    cv::resize(img, img, cv::Size(512, 512));
    cv::Mat rigid0 = (cv::Mat_<double>(2, 3) << 1., 0., 96.,
                                                0., 1., 0.);
    cv::Mat rigid1 = (cv::Mat_<double>(2, 3) << 1., 0., -96.,
                                                0., 1., 0.);
    cv::Mat res0, res1;
    cv::warpAffine(img, res0, rigid0, cv::Size(img.cols + 96., img.rows));
    cv::warpAffine(img, res1, rigid1, cv::Size(img.cols + 96., img.rows));
    cv::imshow("0", res0);
    cv::imshow("1", res1);
    cv::waitKey();
    return 0;
}
According to the documentation of the warpAffine function, the resulting image is constructed as:
dst(x, y) = src(M11 * x + M12 * y + M13, M21 * x + M22 * y + M23)
where M is an inversion of your affine matrix. So in the case of a negative translation, you have:
dst(x, y) = src(x + 96, y)
So it is exactly what you have: the input shifted 96 pixels to the left.
You set the resulting size 96 pixels wider, so the extra area is filled with black according to the default borderMode and borderValue (BORDER_CONSTANT with black).
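To make that mapping concrete, a small NumPy check of mine (image path hypothetical; the exact equality holds because the translation is an integer number of pixels):

import cv2
import numpy as np

img = cv2.imread("cameraman.jpg", cv2.IMREAD_GRAYSCALE)  # path assumed
rigid1 = np.float32([[1.0, 0.0, -96.0], [0.0, 1.0, 0.0]])
w1 = cv2.warpAffine(img, rigid1, (img.shape[1] + 96, img.shape[0]))

# dst(x, y) = src(x + 96, y): column x of the result samples column x + 96 of
# the source, so the first 96 source columns are gone and everything past the
# last source column is border (black)
assert np.array_equal(w1[:, :img.shape[1] - 96], img[:, 96:])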
UPDATE:
In case it is still not clear what's going on, I have made a picture:

OpenCV: How can I get smoother contours on curves?

I'm having trouble getting smoother contours on curves.
After some image processing I have this image.
I am trying to get smoother curves with this code:
imgWithBridgesBw = convert_to_bw(imgWithBridges)
# add later this mask upper
ellipsekernel20 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
enhanceMask = np.ones((InterholesBw.shape[0],InterholesBw.shape[1]) ,dtype="uint8") * 255
enhanceMaskColor = np.ones((InterholesBw.shape[0],InterholesBw.shape[1],3) ,dtype="uint8") * 255
# invert the Layer to get less blank pixels
imgWithBridgesBwInv = 255 - imgWithBridgesBw
imgWithBridgesBwInv_dilate = cv2.dilate(imgWithBridgesBwInv,ellipsekernel20,iterations =1)
imgWithBridgesBwInv_erode = cv2.erode(imgWithBridgesBwInv_dilate, ellipsekernel20, iterations = 1)
_ ,allCnts, hier = cv2.findContours(imgWithBridgesBwInv_dilate,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(enhanceMask,allCnts,-1,0,2)
cv2.drawContours(enhanceMaskColor,allCnts,-1,(255,0,0),2)
The result is this:
https://dl.dropboxusercontent.com/u/710615/testEhhancBwColor_.jpg
As you can see in this image, which is a zoomed view, I'm not having any success.
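There is no answer in the thread; one standard approach (an assumption, not from the thread) is to smooth the binary mask itself before extracting contours, e.g. a morphological close with a larger elliptical kernel followed by a slight Gaussian blur and a re-threshold:

import cv2

def smooth_mask(bw, kernel_size=9, blur_sigma=2.0):
    # smooth a binary mask before findContours (parameter values are guesses)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)  # bridge small gaps
    blurred = cv2.GaussianBlur(closed, (0, 0), blur_sigma)  # round off staircase edges
    return cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)[1]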

Detect caps on bottles using OpenCV and Python

I know that there are a hundred topics about my question all over the web, but I would like to ask specifically about my problem because I have tried almost all the solutions without any success.
I am trying to count circles in an image (yes, I have already tried Hough circles, but due to light reflections on my object it is not very robust, I think).
Then I tried to create a classifier (no success; I think there are not enough features, so the detection is not good).
I have also tried HSV conversion and tried to find my object by color (again I had some problems because of the light and the variations in color).
As you can see in the image, there are 8 caps and I would like to be able to count them.
Using all of these methods I was able to detect the objects in a given image (because I was optimizing all the function parameters for that specific image), but as soon as I load a new, similar image the results are disappointing.
Please follow this link to see the image.
Below you can find parts of everything I have tried:
1. Hough circles
img = cv2.imread('frame71.jpg', 0)  # load as grayscale: HoughCircles expects a single-channel image
if img is None:
    print "There is no image file. Quitting..."
    quit()
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img, cv.CV_HOUGH_GRADIENT, 3, 50,
                           param1=55, param2=125, minRadius=25, maxRadius=45)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
print len(circles[0,:])
cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
2. HSV Transform, color detection
def image_process(frame, h_low, s_low, v_low, h_up, s_up, v_up, ksize):
    # the median blur kernel size must be odd
    if ksize % 2 == 0:
        ksize = ksize + 1
    #TODO: optimize as much as possible this part of code
    try:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([h_low, s_low, v_low], np.uint8)
        upper = np.array([h_up, s_up, v_up], np.uint8)  # fixed: had h_up in place of v_up
        mask = cv2.inRange(hsv, lower, upper)
        res = cv2.bitwise_and(hsv, hsv, mask=mask)
        thresh = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
        thresh = cv2.threshold(thresh, 50, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.medianBlur(thresh, ksize)
    except Exception as inst:
        print type(inst)
    return thresh
3. Cascade classifier
img = cv2.imread('frame405.jpg', 1)
cap_cascade = cv2.CascadeClassifier('haar_30_17_16_stage.xml')
caps = cap_cascade.detectMultiScale(img, 1.3, 5)
for (x, y, w, h) in caps:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
quit()
About training the classifier: I really used a lot of variations of images, samples, negatives and positives, numbers of stages, w and h, but the results were not very accurate.
Finally, I would like to know from your experience which method I should follow, and I will stick with that in order to optimize my detection. Keep in mind that all images are similar but NOT identical. There are some differences due to light, movement, etc.
Thank you in advance.
I did some experimenting with the sample image. I'm posting my results, and if you find them useful, you can improve and optimize further. Here are the steps:
downsample the image
perform morphological opening
find Hough circles
cluster the circles by radii (bottle circles should get the same label)
filter the circles by a radius threshold
you can also cluster circles by their center x and y coordinates (I haven't done this)
prepare a mask from the filtered circles and extract the possible bottles region
cluster this region by color
Code is in C++. I'm attaching my results.
Mat im = imread(INPUT_FOLDER_PATH + string("frame71.jpg"));
Mat small;
int kernelSize = 9; // try with different kernel sizes. 5 onwards gives good results
pyrDown(im, small); // downsample the image
Mat morph;
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(kernelSize, kernelSize));
morphologyEx(small, morph, MORPH_OPEN, kernel); // open
Mat gray;
cvtColor(morph, gray, CV_BGR2GRAY);
vector<Vec3f> circles;
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/8.0); // find circles
// -------------------------------------------------------
// cluster the circles by radii. similarly you can cluster them by center x and y for further filtering
Mat circ = Mat(circles);
Mat data[3];
split(circ, data);
Mat labels, centers;
kmeans(data[2], 2, labels, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS, centers);
// -------------------------------------------------------
Mat rgb;
small.copyTo(rgb);
//cvtColor(gray, rgb, CV_GRAY2BGR);
Mat mask = Mat::zeros(Size(gray.cols, gray.rows), CV_8U);
for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    float r = centers.at<float>(labels.at<int>(i));
    if (r > 30.0f && r < 45.0f) // filter circles by radius (values are based on the sample image)
    {
        // just for display
        circle(rgb, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        circle(rgb, center, radius, Scalar(0, 0, 255), 3, 8, 0);
        // prepare a mask
        circle(mask, center, radius, Scalar(255, 255, 255), -1, 8, 0);
    }
}
// use each filtered circle as a mask and extract the region from original downsampled image
Mat rgb2;
small.copyTo(rgb2, mask);
// cluster the masked region by color
Mat rgb32fc3, lbl;
rgb2.convertTo(rgb32fc3, CV_32FC3);
int imsize[] = {rgb32fc3.rows, rgb32fc3.cols};
Mat color = rgb32fc3.reshape(1, rgb32fc3.rows*rgb32fc3.cols);
kmeans(color, 4, lbl, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS);
Mat lbl2d = lbl.reshape(1, 2, imsize);
Mat lbldisp;
lbl2d.convertTo(lbldisp, CV_8U, 50);
Mat lblColor;
applyColorMap(lbldisp, lblColor, COLORMAP_JET);
Results:
Filtered circles:
Masked:
Segmented:
Hello, finally I think I found a way to count the caps on the bottles:
Read the image
Teach (find the correct values for the HSV upper/lower limits)
Select the desired color (using HSV and a mask)
Find contours on the masked image
Find the minimum enclosing circles of the contours
Reject all circles beyond the thresholds
I have also ordered a polarizing filter, which I think will reduce glare a lot. I am open to suggestions for further improvement (robustness and speed); both are crucial for my application.
Thank you.
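A minimal Python sketch of that pipeline (the HSV limits are placeholders to be found in the "teach" step, the radius thresholds are borrowed from the Hough attempt above, and the findContours call uses the OpenCV >= 4 signature):

import cv2
import numpy as np

img = cv2.imread('frame71.jpg')  # sample frame name from the question
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# HSV limits from the teach step -- placeholder values, tune per setup
lower = np.array([90, 50, 50], np.uint8)
upper = np.array([130, 255, 255], np.uint8)
mask = cv2.inRange(hsv, lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

count = 0
for cnt in contours:
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    if 25 <= radius <= 45:  # reject circles beyond the thresholds
        count += 1
        cv2.circle(img, (int(x), int(y)), int(radius), (0, 255, 0), 2)
print(count)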
