Detecting if a bottle has a label - opencv

I am currently doing a bit of computer vision using OpenCV. I have a sample of bottles with a label on them. I am trying to determine when a bottle does not have a label on it.
The label is rectangular in shape.
I have done edge detection using Canny. I have tried using findContours() to detect whether a bottle has an inner contour (this would represent the rectangular label).

If your problem is this simple, just crop your image down to the label region using a rectangle.
cv::Mat image = imread("image.png");
cv::Rect labelRegion(50, 200, 50, 50);
cv::Mat labelImage = image(labelRegion);
Then decompose your image region into three channels.
cv::Mat channels[3];
cv::split(labelImage, channels);
cv::Mat labelImageRed = channels[2];
cv::Mat labelImageGreen = channels[1];
cv::Mat labelImageBlue = channels[0];
Then threshold each of these single-channel images and count the number of zero/nonzero pixels.
I'm not providing code for this part!
If there is no label in the image, then each channel should have values greater than ~200 (you should verify this). If there is a label, the zero/nonzero pixel counts will clearly differ from those of an unlabeled bottle.
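A rough Python sketch of that counting step, assuming the same region as the cv::Rect above and the ~200 brightness threshold (both values need tuning for your images):
import cv2

image = cv2.imread("image.png")
label_image = image[200:250, 50:100]  # rows y:y+h, cols x:x+w of the Rect above
for name, channel in zip(("blue", "green", "red"), cv2.split(label_image)):
    # Pixels at or below the assumed brightness threshold count as "dark"
    _, dark_mask = cv2.threshold(channel, 200, 255, cv2.THRESH_BINARY_INV)
    print(name, "dark pixels:", cv2.countNonZero(dark_mask))
Many dark pixels in any channel then suggest printed labeling; a nearly uniform bright region suggests a bare bottle.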

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    Mat img = imread("c:/data/bottles/1.png");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    // Normalize the size so the sum threshold below is comparable across images
    resize(gray, gray, Size(50, 100));
    // Vertical gradient: responds to the label's top/bottom borders and printed text
    Sobel(gray, gray, CV_16SC1, 0, 1);
    convertScaleAbs(gray, gray);
    // Empirical threshold on total edge energy; tune it for your images
    if (sum(gray)[0] < 130000)
    {
        cout << "no label";
    }
    else
    {
        cout << "has label";
    }
    imshow("gray", gray);
    waitKey();
    return 0;
}

I am guessing it should be enough to just check whether there is text present on the bottle or not (if yes, then it has a label, and vice versa). You could check out a project like THIS. There are numerous papers in this area; some of the more famous ones are by the Stanford CV group - 1 and 2.
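As a rough starting point, here is a hedged Python sketch of a text-presence check using OpenCV's MSER detector (the region-count threshold is a guess you would have to tune per dataset):
import cv2

gray = cv2.imread("bottle.png", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)
# Many small stable regions are a weak hint that printed text is present
print("has label" if len(regions) > 50 else "no label")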
HTH

guneykayim suggested image segmentation, which I feel would be the easiest method. I am just adding a little bit more...
My suggestion is that you convert your BGR image into YCbCr and then look for values within the Cb and Cr channels that match the color of your label. This will allow you to segment the color easily even if the lighting conditions on the bottle change (a darkly lit bottle will end up having white regions appear dark gray, which can be a problem if you have gray-colored labeling).
Something like this should work in Python:
# Required modules
import cv2
import numpy
# Convert image to YCrCb
imageYCrCb = cv2.cvtColor(sourceImage, cv2.COLOR_BGR2YCR_CB)
# Constants for finding range of label color in YCrCb;
# a, b, c and d need to be defined (the Y range is left fully open)
min_YCrCb = numpy.array([0, a, b], numpy.uint8)
max_YCrCb = numpy.array([255, c, d], numpy.uint8)
# Threshold the image to produce blobs that indicate the labeling
labelRegion = cv2.inRange(imageYCrCb, min_YCrCb, max_YCrCb)
# Just in case you are interested in going an extra step
contours, hierarchy = cv2.findContours(labelRegion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Draw the contour on the source image
for i, c in enumerate(contours):
    area = cv2.contourArea(c)
    if area > minArea:  # minArea needs to be defined, try 300 square pixels
        cv2.drawContours(sourceImage, contours, i, (0, 255, 0), 3)
The function cv2.inRange() will also work in case you decide to work with the BGR image.
Reference:
http://en.wikipedia.org/wiki/YCbCr

Related

Extract dark contour

I want to extract the darker contours from images with OpenCV. I have tried using a simple threshold such as the one below (C++):
cv::threshold(gray, output, threshold, 255, THRESH_BINARY_INV);
I can iterate the threshold, let's say from 50 to 200,
and then I can get the darker contours in the middle
for images with a clear distinction, such as this one.
Here is the result of the threshold.
But if the contour is near the border, the threshold will fail because the pixel values are almost the same,
for example in this image.
What I want to ask is: is there any technique in OpenCV that can extract a darker contour in the middle of the image even when the contour reaches the border and has almost the same pixel values as the border?
(updated)
After thresholding, the darker contour in the middle overlaps with the top border.
This makes me fail to extract characters such as the first two "SS".
I think you can simply add an edge-preserving smoothing step to solve this:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
Mat filteredImg;
bilateralFilter(inputImg, filteredImg, 5, 60, 20);
// compute laplacian
Mat laplaceImg;
Laplacian(filteredImg, laplaceImg, CV_16S, 1);
// threshold
Mat resImg;
threshold(laplaceImg, resImg, 10, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
This will give you the following result: result
Regards,
I think using a Laplacian could partially solve your problem:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// compute laplacian
Mat laplaceImg;
Laplacian(inputImg, laplaceImg, CV_16S, 1);
Mat resImg;
threshold(laplaceImg, resImg, 30, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
Using this code you should obtain something like this result.
You can then play with the final threshold value and with the Laplacian kernel size.
You will probably have to remove small artifacts after this operation.
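If you need that cleanup step, here is a possible Python sketch using connected components (the 20-pixel minimum area is an assumption to tune):
import cv2
import numpy as np

binary = cv2.imread("res2.tif", cv2.IMREAD_GRAYSCALE)
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
cleaned = np.zeros_like(binary)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 20:
        cleaned[labels == i] = 255
cv2.imwrite("res2_clean.tif", cleaned)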
Regards

Remove Boxes/rectangles from image

I have the following image.
this image
I would like to remove the orange boxes/rectangle around numbers and keep the original image clean without any orange grid/rectangle.
Below is my current code but it does not remove it.
Mat src = Imgcodecs.imread("enveloppe.jpg", Imgcodecs.CV_LOAD_IMAGE_COLOR);
Mat hsvMat = new Mat();
Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
Scalar lowerThreshold = new Scalar(0, 50, 50);
Scalar upperThreshold = new Scalar(25, 255, 255);
Mat mask = new Mat();
Core.inRange(hsvMat, lowerThreshold, upperThreshold, mask);
//src.setTo(new Scalar(255, 255, 255), mask);
What should I do next?
How can I remove the orange boxes/rectangles from the original images?
Update:
For information, the mask contains exactly the boxes/rectangles that I want to remove. I don't know how to use this mask to remove the boxes/rectangles from the source (src) image as if they were never present.
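For reference, one direct way to use such a mask is OpenCV's inpainting, which reconstructs the masked pixels from their surroundings. A minimal Python sketch (the dilation amount and inpaint radius are assumptions to tune):
import cv2
import numpy as np

src = cv2.imread("enveloppe.jpg")
hsv = cv2.cvtColor(src, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 50, 50), (25, 255, 255))
# Grow the mask slightly so the box edges are fully covered, then inpaint
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)
result = cv2.inpaint(src, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("enveloppe_clean.jpg", result)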
This is what I did to solve the problem. I solved the problem in C++ and I used OpenCV.
Part 1: Find box candidates
Firstly I wanted to isolate the signal that was specific to the red channel. I split the image into three channels. I then subtracted the blue channel from the red channel and the green channel from the red channel. After that I subtracted both previous subtraction results from one another. The final subtraction result is shown in the image below.
using namespace cv;
using namespace std;
Mat src_rgb = imread("image.jpg");
std::vector<Mat> channels;
split(src_rgb, channels);
Mat diff_rb, diff_rg;
subtract(channels[2], channels[0], diff_rb);
subtract(channels[2], channels[1], diff_rg);
Mat diff;
subtract(diff_rb, diff_rg, diff);
My next goal was to divide the parts of the obtained image into separate "groups". To do that, I smoothed the image a little bit with a Gaussian filter. Then I applied a threshold to obtain a binary image; finally I looked for external contours within that image.
GaussianBlur(diff, diff, cv::Size(11, 11), 2.0, 2.0);
threshold(diff, diff, 5, 255, THRESH_BINARY);
vector<vector<Point>> contours;
findContours(diff, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
Click to see subtraction result, Gaussian blurred image, thresholded image and detected contours.
Part 2: Inspect box candidates
After that, I had to estimate whether the interior of each contour contained a number or something else. I made the assumption that numbers will always be printed with black ink and that they will have sharp edges. Therefore I took the red channel image, applied just a little bit of Gaussian smoothing, and convolved it with a Laplacian operator.
Mat blurred_ch2;
GaussianBlur(channels[2], blurred_ch2, cv::Size(7, 7), 1, 1);
Mat laplace_result;
Laplacian(blurred_ch2, laplace_result, -1, 1);
I then took the resulting image and applied the following procedure to every contour separately. I computed the standard deviation of the pixel values within the contour interior. The standard deviation was high inside the contours that surrounded numbers, and it was low inside the two contours that surrounded the dog's head and the letters on top of the stamp.
That is why I could apply a standard deviation threshold. The standard deviation was approximately twice as large for contours containing numbers, so this was an easy way to select only the contours that contained numbers. Then I drew the contour interior mask, and used erosion and subtraction to obtain the "box edge mask".
The final step was fairly easy. I computed an estimate of average pixel value nearby the box on every channel of the image. Then I changed all pixel values under the "box edge mask" to those values on every channel. After I repeated that procedure for every box contour, I merged all three channels into one.
Mat mask(src_rgb.size(), CV_8UC1);
for (int i = 0; i < contours.size(); ++i)
{
    // Measure edge sharpness inside the contour via the Laplacian response
    mask.setTo(0);
    drawContours(mask, contours, i, cv::Scalar(200), -1);
    Scalar mean, stdev;
    meanStdDev(laplace_result, mean, stdev, mask);
    if (stdev.val[0] < 10.0) continue;
    // Erode and subtract to keep only a thin "box edge" ring
    Mat eroded;
    erode(mask, eroded, cv::Mat(), cv::Point(-1, -1), 6);
    subtract(mask, eroded, mask);
    for (int c = 0; c < src_rgb.channels(); ++c)
    {
        // Sample the mean channel value at the ring boundary, then fill the ring with it
        erode(mask, eroded, cv::Mat());
        subtract(mask, eroded, eroded);
        Scalar mean, stdev;
        meanStdDev(channels[c], mean, stdev, eroded);
        channels[c].setTo(mean, mask);
    }
}
Mat final_result;
merge(channels, final_result);
imshow("Final Result", final_result);
Click to see red channel of the image, the result of convolution with Laplacian operator, drawn mask of the box edges and the final result.
Please note
This code is far from being optimal, especially the last loop does quite a lot of unnecessary work. But I think that in this case readability is more important (and the author of the question did not request an optimized solution anyway).
Looking towards more general solution
After I posted the initial reply, the author of the question noted that the digits can be of any color and their edges are not necessarily sharp. That means that the above procedure can fail for various reasons. I altered the input image so that it contains different kinds of numbers (click to see the image), and you can run my algorithm on this input and analyze what goes wrong.
The way I see it, one of these approaches is needed (or perhaps a mixture of both) to obtain a more "general" solution:
concentrate only on rectangle shape and color (confirm that the box candidate is really an orange box and remove it regardless of what is inside)
concentrate on numbers only (run a proper number detection algorithm inside the interior of every box candidate; if it contains a single number, remove the box)
I will give a trivial example of the first approach. If you can assume that orange box size will always be the same, just check the box size instead of standard deviation of the signal in the last loop of the algorithm:
Rect rect = boundingRect(contours[i]);
float area = rect.area();
if (area < 1000 || area > 1200) continue;
Warning: the actual area of the rectangles is around 600 px^2, but I took into account the Gaussian blurring, which caused the contour to expand. Please also note that if you use this approach, you no longer need to perform the blurring or Laplacian operations on the red channel image.
You can also add other simple constraints to that condition; the ratio between width and height is the first one that comes to mind. Geometric properties can also be a good option (right angles, straight edges, convexity...).
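For illustration, here is a hedged Python sketch of such geometric checks on a box candidate (the aspect-ratio and fill bounds are assumptions; the area bounds are the ones used above):
import cv2

def looks_like_box(contour):
    x, y, w, h = cv2.boundingRect(contour)
    area = w * h
    if not 1000 <= area <= 1200:          # size bound from the answer above
        return False
    if not 0.8 <= w / float(h) <= 1.25:   # roughly square (assumed bounds)
        return False
    # A real box blob should fill most of its bounding rectangle
    return cv2.contourArea(contour) / float(area) > 0.7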

best way to segment a tree in plantation aerial image using opencv

So I want to segment a tree from an aerial image.
Sample image (original image):
And I expect a result like this (or better):
The first thing I did was use the threshold function in OpenCV, and I didn't get the expected result (it can't segment the tree crown). I then used the black and white filter in Photoshop with some adjusted parameters (the result is shown below), applied a threshold and a morphological filter, and got the result shown above.
My question: is there a way to do the segmentation without using Photoshop first and produce a segmented image like the second image (or better)? Or maybe there is a way to produce an image like the third one?
ps: you can read the photoshop b&w filter question here : https://dsp.stackexchange.com/questions/688/whats-the-algorithm-behind-photoshops-black-and-white-adjustment-layer
You can do it in OpenCV. The code below will basically do the same operations you did in Photoshop. You may need to tune some of the parameters to get exactly what you want.
#include "opencv2\opencv.hpp"
using namespace cv;
int main(int, char**)
{
Mat3b img = imread("path_to_image");
// Use HSV color to threshold the image
Mat3b hsv;
cvtColor(img, hsv, COLOR_BGR2HSV);
// Apply a treshold
// HSV values in OpenCV are not in [0,100], but:
// H in [0,180]
// S,V in [0,255]
Mat1b res;
inRange(hsv, Scalar(100, 80, 100), Scalar(120, 255, 255), res);
// Negate the image
res = ~res;
// Apply morphology
Mat element = getStructuringElement( MORPH_ELLIPSE, Size(5,5));
morphologyEx(res, res, MORPH_ERODE, element, Point(-1,-1), 2);
morphologyEx(res, res, MORPH_OPEN, element);
// Blending
Mat3b green(res.size(), Vec3b(0,0,0));
for(int r=0; r<res.rows; ++r) {
for(int c=0; c<res.cols; ++c) {
if(res(r,c)) { green(r,c)[1] = uchar(255); }
}
}
Mat3b blend;
addWeighted(img, 0.7, green, 0.3, 0.0, blend);
imshow("result", res);
imshow("blend", blend);
waitKey();
return 0;
}
The resulting image is:
The blended image is:
This has been an interesting topic of research in the past - mainly in the remote sensing literature.
While the morphological methods proposed using OpenCV will work in certain cases, you might want to consider more sophisticated approaches (depending on how variable your data is and how robust a detector you want to build).
For example, this paper, and those that cite it, give you a flavour of what has been attempted.
Pragmatically speaking, I think a neat solution would be one founded more on statistical texture analysis. There are many ways to classify (and then count) regions of an image as belonging to a texture (co-occurrence matrices, filter banks, textons, wavelets, etc.).
Sadly, this is an area where OpenCV is rather deficient - it only provides a subset of the useful algorithms out there... However, here are a few quick ideas (none of which I have tried directly; they are just approaches I'm aware of that can be built on top of OpenCV):
Use OpenCV's Gabor filter support and cluster the filter responses (for example; see the sketch after this list).
You could also possibly train an OpenCV SVM with Local Binary Patterns.
A new library - but probably not so relevant for static images - LIBDT
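To make the first idea concrete, here is a rough Python sketch of a Gabor filter bank followed by k-means clustering of the per-pixel responses (the filter parameters and the two-cluster assumption are all guesses to tune):
import cv2
import numpy as np

gray = cv2.imread("aerial.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# Small Gabor bank over four orientations (parameters assumed)
responses = [cv2.filter2D(gray, cv2.CV_32F,
                          cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0))
             for theta in np.arange(0, np.pi, np.pi / 4)]
# Cluster per-pixel response vectors into 2 texture classes (tree / non-tree)
features = np.stack(responses, axis=-1).reshape(-1, len(responses))
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, _ = cv2.kmeans(features, 2, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
cv2.imwrite("texture_segmentation.png", labels.reshape(gray.shape).astype(np.uint8) * 255)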
Anyways, I hope you get something that just works for your purposes!

OpenCV Display Colored Cb Cr Channels

So I understand how to convert a BGR image to YCrCb format using cvtColor() and separate the different channels using split() or mixChannels() in OpenCV. However, these channels are displayed as grayscale images, as they are CV_8UC1 Mats.
I would like to display Cb and Cr channels in color like
Barns image on Wikipedia.
I found this solution in Matlab, but how do I do it in OpenCV?
Furthermore, the mentioned solution displayed Cb and Cr channels by "fills the other channels with a constant value of 50%". My question is:
Is this the common way to display Cr Cb channels? Or is there any recommendations or specifications when displaying Cr Cb channels?
I wrote the code below from scratch, as described in the answer. It looks like what you need.
Mat bgr_image = imread("lena.png");
Mat yCrCb_image;
cvtColor(bgr_image, yCrCb_image, CV_BGR2YCrCb);
Mat yCrCbChannels[3];
split(yCrCb_image, yCrCbChannels);
Mat half(yCrCbChannels[0].size(), yCrCbChannels[0].type(), 127);
vector<Mat> yChannels = { yCrCbChannels[0], half, half };
Mat yPlot;
merge(yChannels, yPlot);
cvtColor(yPlot, yPlot, CV_YCrCb2BGR);
imshow("y", yPlot);
vector<Mat> CrChannels = { half, yCrCbChannels[1], half };
Mat CrPlot;
merge(CrChannels, CrPlot);
cvtColor(CrPlot, CrPlot, CV_YCrCb2BGR);
imshow("Cr", CrPlot);
vector<Mat> CbChannels = { half, half, yCrCbChannels[2] };
Mat CbPlot;
merge(CbChannels, CbPlot);
cvtColor(CbPlot, CbPlot, CV_YCrCb2BGR);
imshow("Cb", CbPlot);
waitKey(0);
As for converting grayscale images to a color format: usually in such a case all color channels (B, G, R) are set to the same grayscale value. In OpenCV, the CV_GRAY2BGR mode is implemented in that manner.
As for "fills the other channels with a constant value of 50%": I believe it's a common way to visualize color spaces such as YCbCr and Lab. I did not find any articles or descriptions of this approach, but I think it's driven by visualization purposes. Indeed, if we fill the other channels with zero, fundamentally nothing changes: we can still see the influence of each channel, but the picture does not look very nice:
So, the aim of this approach is to make the visualization more colorful.
For those who want the code in Python (which is the same as @akarsakov's), here it is:
import cv2
import numpy as np
img = cv2.imread(r"lena.png")
imgYCC = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
Y,Cr,Cb = cv2.split(imgYCC)
half = np.full_like(Y, 127)
merge_Y = cv2.merge([Y, half, half])
merge_Cb = cv2.merge([half, half, Cb])
merge_Cr = cv2.merge([half, Cr, half])
merge_Y = cv2.cvtColor(merge_Y, cv2.COLOR_YCrCb2BGR)
merge_Cb = cv2.cvtColor(merge_Cb, cv2.COLOR_YCrCb2BGR)
merge_Cr = cv2.cvtColor(merge_Cr, cv2.COLOR_YCrCb2BGR)
cv2.imwrite(r'Y.png', merge_Y)
cv2.imwrite(r'Cb.png', merge_Cb)
cv2.imwrite(r'Cr.png', merge_Cr)
The result isn't quite the same, by the way, and looks more like what you find on Google when you search for Y Cb Cr images.
Or maybe I made a mistake with split and/or BGR.

Convert image to grayscale with custom luminosity formula

I have images containing gray gradations and one other color. I'm trying to convert the image to grayscale with OpenCV, and I also want the colored pixels in the source image to become rather light in the output grayscale image, independently of the color itself.
The common luminosity formula is something like 0.299R + 0.587G + 0.114B, according to the OpenCV docs, so it gives very different luminosity to different colors.
I consider the solution to be setting some custom weights in the luminosity formula.
Is that possible in OpenCV? Or maybe there is a better way to perform such selective desaturation?
I use Python, but it doesn't matter.
This is the perfect case for the transform() function. You can treat grayscale conversion as applying a 1x3 matrix transformation to each pixel of the input image. The elements in this matrix are the coefficients for the blue, green, and red components, respectively since OpenCV images are BGR by default.
import cv2
import numpy as np

im = cv2.imread(image_path)
coefficients = [1, 0, 0]  # gives the blue channel all the weight
# for standard gray conversion, coefficients = [0.114, 0.587, 0.299]
m = np.array(coefficients).reshape((1, 3))
blue = cv2.transform(im, m)
So you have a custom formula.
Load the source:
Mat src = imread(fileName, 1);
Create the gray image:
Mat gray(src.size(), CV_8UC1, Scalar(0));
Now, in a loop, access the BGR pixel of the source like:
Vec3b bgrPixel = src.at<cv::Vec3b>(y, x); // the BGR vector of type cv::Vec3b, indexed in row, column order
// bgrPixel[0] = Blue, bgrPixel[1] = Green, bgrPixel[2] = Red
Calculate the new gray pixel value using your custom equation.
Finally, set the pixel value on the gray image:
gray.at<uchar>(y, x) = custom intensity value; // row, column order
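A vectorized Python equivalent of this per-pixel approach, using max(B, G, R) as a made-up stand-in for the custom equation (this particular formula keeps strongly colored pixels light, which matches the goal stated in the question):
import cv2
import numpy as np

src = cv2.imread("input.png")
# Custom "luminosity": the per-pixel max over B, G, R, so saturated colors stay light.
# Substitute your own equation here.
gray = np.max(src, axis=2).astype(np.uint8)
cv2.imwrite("custom_gray.png", gray)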
