Hey OpenCV/Emgu gurus,
I have an image for which I am generating a contour, see below. I am trying to do color-histogram-based pruning of the search space of images to look through. How can I get a mask around just the prominent object contour and block out the rest? So I have a two-part question:
How do I "invert" the image outside the contour? Floodfill invert, not? I am confused with all the options in OpenCV.
Second, how do I generate a 1-d color histogram from the contoured object in this case the red car to exclude the black background and only generate the color histogram that includes the car.
How would I do that in OpenCV (preferably in Emgu/C# code)?
Perhaps something like this? Done using the Python bindings, but easy to translate the methods to other bindings...
#!/usr/local/bin/python
import cv
import colorsys
# get original image
orig = cv.LoadImage('car.jpg')
# show original image
cv.ShowImage("orig", orig)
# get mask image
maskimg = cv.LoadImage('carcontour.jpg')
# split original image into hue and value
hsv = cv.CreateImage(cv.GetSize(orig),8,3)
hue = cv.CreateImage(cv.GetSize(orig),8,1)
val = cv.CreateImage(cv.GetSize(orig),8,1)
cv.CvtColor(maskimg,hsv,cv.CV_BGR2HSV)
cv.Split(hsv, hue, None, val, None)
# build mask from val image, select values NOT black
mask = cv.CreateImage(cv.GetSize(orig),8,1)
cv.Threshold(val,mask,0,255,cv.CV_THRESH_BINARY)
# show the mask
cv.ShowImage("mask", mask)
# calculate colour (hue) histogram of the masked area only
hue_bins = 180
hue_range = [0,180]
hist = cv.CreateHist([hue_bins], cv.CV_HIST_ARRAY, [hue_range], 1)
cv.CalcHist([hue],hist,0,mask)
# create the colour histogram
(_, max_value, _, _) = cv.GetMinMaxHistValue(hist)
histimg = cv.CreateImage((hue_bins*2, 200), 8, 3)
for h in range(hue_bins):
    bin_val = cv.QueryHistValue_1D(hist, h)
    norm_val = cv.Round((bin_val / max_value) * 200)
    rgb_val = colorsys.hsv_to_rgb(float(h) / 180.0, 1.0, 1.0)
    cv.Rectangle(histimg, (h*2, 0),
                 ((h+1)*2 - 1, norm_val),
                 cv.RGB(rgb_val[0]*255, rgb_val[1]*255, rgb_val[2]*255),
                 cv.CV_FILLED)
cv.ShowImage("hist",histimg)
# wait for key press
cv.WaitKey(-1)
Finding the mask this way is a little bit clunky - perhaps due to JPEG compression artefacts in the image... If you had the original contour, it is easy enough to "render" it to a mask instead.
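For instance, if you have the contour as a point array, a minimal sketch of rendering it to a mask (shown here with the modern cv2 API; the contour points and mask size are made-up placeholders) could be:

import numpy as np
import cv2

# hypothetical contour: an (N, 1, 2) point array, e.g. as returned by cv2.findContours
contour = np.array([[[10, 10]], [[100, 10]], [[100, 80]], [[10, 80]]], dtype=np.int32)

mask = np.zeros((200, 200), dtype=np.uint8)              # single-channel mask, same size as the image
cv2.drawContours(mask, [contour], -1, 255, cv2.FILLED)   # fill the contour interior with white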
The example histogram rendering function is also a wee bit basic - but I think it shows the idea (and how the car is predominantly red!). Note that OpenCV's interpretation of hue ranges only over [0-180] degrees.
EDIT: if you want to use the mask to count colours in the original image instead - edit from line 15 downwards as follows:
# split original image into hue
hsv = cv.CreateImage(cv.GetSize(orig),8,3)
hue = cv.CreateImage(cv.GetSize(orig),8,1)
cv.CvtColor(orig,hsv,cv.CV_BGR2HSV)
cv.Split(hsv, hue, None, None, None)
# split mask image into val
val = cv.CreateImage(cv.GetSize(orig),8,1)
cv.CvtColor(maskimg,hsv,cv.CV_BGR2HSV)
cv.Split(hsv, None, None, val, None)
(I think this is more what was intended, as the mask is then derived separately and applied to a completely different image. The histogram is roughly the same in both cases...)
Related
I am trying to identify a rectangle underwater in a noisy environment. I implemented Canny to find the edges and drew the detected edge pixels using cv2.circle. From here, I am trying to identify the imperfect rectangle in the image (the black one below the long rectangle that covers the top of the frame).
I have attempted multiple solutions, including thresholding, blurring, and resizing the image to detect the rectangle. Below is the bare-bones code that just draws the identified edges.
import numpy as np
import cv2
img_text = 'img5.png'
img = cv2.imread(img_text)
original = img.copy()
min_value = 50
max_value = 100
# run Canny, then collect the coordinates of all edge pixels
image = cv2.Canny(img, min_value, max_value)
indices = np.where(image != 0)
coordinates = zip(indices[1], indices[0])
for point in coordinates:
    cv2.circle(original, point, 1, (0, 0, 255), -1)
cv2.imshow('original', original)
cv2.waitKey(0)
cv2.destroyAllWindows()
The output displays this: [output image]
From here I want to be able to separately detect just the rectangle and draw another rectangle on top of the output in green, but I haven't been able to find a way to detect the original rectangle on its own.
For your specific image, I obtained quite good results with simple thresholding on the blue channel.
image = cv2.imread("test.png")
t, img = cv2.threshold(image[:,:,0], 80, 255, cv2.THRESH_BINARY)
To adapt the threshold automatically, I propose a simple approach: vary the threshold until you get at least one connected component. I have also implemented the rectangle drawing:
import numpy as np
import cv2

def find_square(image):
    # increase the threshold until at least one component appears
    markers = 0
    threshold = 10
    while np.amax(markers) == 0:
        threshold += 5
        t, img = cv2.threshold(image[:,:,0], threshold, 255, cv2.THRESH_BINARY_INV)
        _, markers = cv2.connectedComponents(img)
    # remove small specks, then grow the square slightly
    kernel = np.ones((5,5), np.uint8)
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    img = cv2.morphologyEx(img, cv2.MORPH_DILATE, kernel)
    # bounding box of all remaining foreground pixels
    nonzero = cv2.findNonZero(img)
    x, y, w, h = cv2.boundingRect(nonzero)
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow("image", image)
And the results on the provided example images:
The idea behind this approach is based on the observation that most of the information is in the blue channel. If you separate the image into its channels, you will see that the dark square has the best contrast in the blue channel. It is also the darkest region of that channel, which is why thresholding works. The remaining problem is setting the threshold. Based on the above intuition, we are looking for the lowest threshold that brings up something (and hope that it will be the square). What I did was simply increase the threshold gradually until something appeared.
Then, I applied some morphology operations to eliminate other small points that may appear after thresholding and to make the square look a bit bigger (the edges of the square are lighter, so not the entire square is captured). Then it was a matter of drawing the rectangle.
The code can be made much nicer (and more efficient) with some statistical analysis of the histogram: simply compute the threshold such that 5% (or some other percentage) of the pixels are darker. You may then need a connected component analysis to keep only the biggest blob.
Also, my usage of connectedComponents is very crude and inefficient. Again, this is code written in a hurry to prove the concept.
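As a hedged sketch of that statistical idea (the 5% figure and the file name are assumptions to tune), the percentile-based threshold could be computed like this:

import numpy as np
import cv2

image = cv2.imread("test.png")
blue = image[:, :, 0]

# pick the threshold so that roughly 5% of the pixels are darker than it
threshold = np.percentile(blue, 5)
t, img = cv2.threshold(blue, threshold, 255, cv2.THRESH_BINARY_INV)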
I wanted to read characters/triangles from a bar.
First I applied Otsu with different values to this bar but couldn't extract all the characters properly. I also tried triangle detection but couldn't extract them either. The characters' colours vary. Could someone suggest another way/algorithm to extract them? Also, is there a way to do a colour sweep, i.e. try every colour and, if present, extract it (extract everything coloured from an image with a black-and-white background)?
ret,im1 = cv2.threshold(crop_img,0,255,cv2.THRESH_OTSU)
The challenges (the last one is the hardest):
The best result I got, which is still unsuccessful:
Your problem is best solved using color separation. You can use the inRange() function for that (docs). This is usually best done in the HSV colorspace. The code below shows how you can do this.
You can use this script to find the value ranges you need for color separation; it also has a sample image that can help you understand how HSV works. (A minimal sketch of such a tool is also included after the code below.)
Result:
Purple only:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("image.png")
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of HSV-color
lower_val = np.array([0,50,80])
upper_val = np.array([179,255,255])
# purple only
#lower_val = np.array([140,50,80])
#upper_val = np.array([170,255,255])
# Threshold the HSV image to get a mask that holds the markings
mask = cv2.inRange(hsv, lower_val, upper_val)
# create an image of the markings with background excluded
img_masked = cv2.bitwise_and(img,img,mask=mask)
# display image
cv2.imshow("result", img_masked)
cv2.waitKey(0)
cv2.destroyAllWindows()
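As mentioned above, a trackbar-based tool is handy for finding the HSV bounds. A hedged minimal sketch of such a tuner (the file name is an assumption; press Esc to quit) could be:

import numpy as np
import cv2

def nothing(x):
    pass

img = cv2.imread("image.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# one trackbar per HSV bound
cv2.namedWindow("tune")
for name, maxval in (("H low", 179), ("S low", 255), ("V low", 255),
                     ("H high", 179), ("S high", 255), ("V high", 255)):
    cv2.createTrackbar(name, "tune", 0, maxval, nothing)

while True:
    lo = np.array([cv2.getTrackbarPos(n, "tune") for n in ("H low", "S low", "V low")])
    hi = np.array([cv2.getTrackbarPos(n, "tune") for n in ("H high", "S high", "V high")])
    mask = cv2.inRange(hsv, lo, hi)
    cv2.imshow("tune", cv2.bitwise_and(img, img, mask=mask))
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cv2.destroyAllWindows()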
I am trying to do segmentation of leaf images of tomato crops. I want to convert images like the following image
to the following image with a black background.
I have used this code from GitHub as a reference,
but it does not do well on this problem. It produces something like this:
Can anyone suggest a way to do it?
The image is separable using the HSV-colorspace. The background has little saturation, so thresholding the saturation removes the gray.
Result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('leaf.jpg')
# create hsv
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
low_val = (0,60,0)
high_val = (179,255,255)
# Threshold the HSV image
mask = cv2.inRange(hsv, low_val,high_val)
# remove noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel=np.ones((8,8),dtype=np.uint8))
# apply mask to original image
result = cv2.bitwise_and(image, image,mask=mask)
#show image
cv2.imshow("Result", result)
cv2.imshow("Mask", mask)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The problem with your image is the different coloration of the leaf. If you convert the image to grayscale, you will see the problem for the binarization algorithm:
Do you notice the very different brightness of the bottom half and the top half of the leaf? This gives you three mostly uniformly bright areas in the image: the actual background, the top half of the leaf, and the bottom half of the leaf. That's not good for binarization.
However, your problem can be solved by separating your color image into its respective channels. After separation, you will notice that in the blue channel the leaf looks very uniformly bright:
Which makes sense if we think about the colors we are talking about: both green and yellow contain very small amounts of blue, if any.
This makes it easy for us to binarize it. For the sake of a clearer image, I first applied smoothing
and then used the iso_data threshold of ImageJ (you can, however, use any of the existing automatic thresholding methods) to create a binary mask:
Because the algorithm has set the leaf to background (black), we have to invert it:
This mask can be further improved by applying binary "fill holes" algorithms:
This mask can be used to crop the original image to extract the leaf:
The quality of the result image could be further improved by eroding the mask a little bit.
For the sake of completeness: you do not have to smooth the image to get a result. Here is the mask for the unsmoothed image:
To remove the noise, you first apply binary fill holes, then binary closing followed by binary erosion. This will give you:
as a mask.
This will lead to
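For readers who prefer OpenCV over ImageJ, a rough sketch of the same pipeline (Otsu stands in for the iso_data threshold and closing stands in for fill holes; the file name and kernel size are assumptions) might look like:

import numpy as np
import cv2

image = cv2.imread("leaf.jpg")
blue = cv2.GaussianBlur(image[:, :, 0], (5, 5), 0)   # blue channel, smoothed
# binarize; THRESH_BINARY_INV inverts so the (dark) leaf becomes foreground
t, mask = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# closing to fill small holes in the mask
kernel = np.ones((15, 15), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
result = cv2.bitwise_and(image, image, mask=mask)    # extract the leaf from the original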
I have images containing gray gradations and one other color. I'm trying to convert the image to grayscale with OpenCV, and I also want the colored pixels in the source image to become rather light in the output grayscale image, independently of the color itself.
The common luminosity formula is something like 0.299*R + 0.587*G + 0.114*B, according to the OpenCV docs, so it gives very different luminosity to different colors.
I think the solution is to set some custom weights in the luminosity formula.
Is it possible in OpenCV? Or maybe there is a better way to perform such selective desaturation?
I use Python, but it doesn't matter.
This is the perfect case for the transform() function. You can treat grayscale conversion as applying a 1x3 matrix transformation to each pixel of the input image. The elements in this matrix are the coefficients for the blue, green, and red components, respectively since OpenCV images are BGR by default.
import cv2
import numpy as np

im = cv2.imread(image_path)
coefficients = [1,0,0]  # gives the blue channel all the weight
# for standard gray conversion: coefficients = [0.114, 0.587, 0.299]
m = np.array(coefficients).reshape((1,3))
blue = cv2.transform(im, m)
So you have a custom formula.
Load the source:
Mat src = imread(fileName, 1);
Create the gray image:
Mat gray(src.size(), CV_8UC1, Scalar(0));
Now, in a loop, access the BGR pixel of the source like this:
Vec3b bgrPixel = src.at<cv::Vec3b>(y,x); // the BGR vector of type cv::Vec3b, accessed in row, column order
uchar blue  = bgrPixel[0];
uchar green = bgrPixel[1];
uchar red   = bgrPixel[2];
Calculate the new gray pixel value using your custom equation.
Finally, set the pixel value on the gray image:
gray.at<uchar>(y,x) = customIntensityValue; // also in row, column order
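For reference, a hedged Python/NumPy equivalent of this per-pixel approach (the weights are placeholders for your custom equation) could be:

import cv2
import numpy as np

src = cv2.imread("image.png")            # BGR image
b, g, r = cv2.split(src)
# placeholder custom equation; substitute your own weights
gray = (0.5 * b + 0.3 * g + 0.2 * r).astype(np.uint8)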
I am currently doing a bit of computer vision using OpenCV. I have a sample of bottles with a label on them. I am trying to determine when a bottle does not have a label on it.
The label is rectangular in shape.
I have done edge detection using Canny. I have tried using findContours() to detect whether a bottle has an inner contour (this would represent the rectangular label).
If your problem is this simple, just reduce your image to the label region using a rectangle.
cv::Mat image = imread("image.png");
cv::Rect labelRegion(50, 200, 50, 50);
cv::Mat labelImage = image(labelRegion);
Then decompose your image region into three channels.
cv::Mat channels[3];
cv::split(labelImage, channels);
cv::Mat labelImageRed = channels[2];
cv::Mat labelImageGreen = channels[1];
cv::Mat labelImageBlue = channels[0];
Then threshold each of these single-channel images and count the number of zero/non-zero pixels.
I'm not providing code for this part!
If there is no label in the image, then each channel has values bigger than ~200 (you should verify this). If there is a label, then you will see a different result when counting zero/non-zero pixels compared to the non-labeled case.
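As a hedged sketch of that counting step in Python (the ~200 cutoff is the answer's estimate and should be checked; the region assumes the image is at least as large as the C++ labelRegion above):

import cv2

image = cv2.imread("image.png")
label_image = image[200:250, 50:100]     # same region as the C++ labelRegion above

# threshold each channel and count the non-zero pixels
for name, idx in (("blue", 0), ("green", 1), ("red", 2)):
    t, binary = cv2.threshold(label_image[:, :, idx], 200, 255, cv2.THRESH_BINARY)
    print(name, "non-zero pixels:", cv2.countNonZero(binary))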
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("c:/data/bottles/1.png");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    resize(gray, gray, Size(50, 100));
    // vertical gradient responds strongly to the horizontal strokes of label print
    Sobel(gray, gray, CV_16SC1, 0, 1);
    convertScaleAbs(gray, gray);
    // a label adds texture, so the gradient sum is high; the cutoff is empirical
    if (sum(gray)[0] < 130000)
    {
        cout << "no label";
    } else {
        cout << "has label";
    }
    imshow("gray", gray);
    waitKey();
    return 0;
}
I am guessing it should be enough to just see if there is text present on the bottle or not (if yes, then it has a label, and vice versa). You could check out a project like THIS. There are numerous papers in this area; some of the more famous ones are by the Stanford CV group - 1 and 2.
HTH
guneykayim suggested image segmentation, which I feel would be the easiest method. I am just adding a little bit more...
My suggestion is that you convert your BGR image into YCbCr and then look for values within the Cb and Cr channels that match the color of your label. This will allow you to easily segment out colors even if the lighting conditions on the bottle change (a darkly lit bottle will have its white regions appear dark gray, which can be a problem if you have gray-colored labeling).
something like this should work in python:
# Required modules
import cv2
import numpy
# Convert image to YCrCb
imageYCrCb = cv2.cvtColor(sourceImage,cv2.COLOR_BGR2YCR_CB)
# Constants for finding range of label color in YCrCb
# a, b, c and d need to be defined
min_YCrCb = numpy.array([0,a,b],numpy.uint8)
max_YCrCb = numpy.array([255,c,d],numpy.uint8) # Y spans the full [0,255] range so luma is ignored
# Threshold the image to produce blobs that indicate the labeling
labelRegion = cv2.inRange(imageYCrCb,min_YCrCb,max_YCrCb)
# Just in case you are interested in going an extra step
contours, hierarchy = cv2.findContours(labelRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Draw the contour on the source image
for i, c in enumerate(contours):
    area = cv2.contourArea(c)
    if area > minArea: # minArea needs to be defined, try 300 square pixels
        cv2.drawContours(sourceImage, contours, i, (0, 255, 0), 3)
The function cv2.inRange() will also work in case you decide to work with a BGR image.
Reference:
http://en.wikipedia.org/wiki/YCbCr