All I have is the following bitmap:
What I want to do is fill the contour automatically, like the following:
It's kind of like the fill tool in MS Paint. The initial contours will not cross the boundary of the image.
I don't have a good idea yet. Is there any method in OpenCV that can do this? Or any suggestions?
Thanks in advance!
Contour hierarchy will probably help you achieve this. You need to:
Find every contour.
Check the hierarchy of each contour.
Based on the hierarchy, draw each contour to a new Mat with thickness either FILLED (-1) or 1.
If you know that regions have to be closed you can just scan horizontally and keep an edge count:
// Assume image is a CV_8UC1 with only black and white pixels.
uchar white(255);
uchar black(0);
cv::Mat output = image.clone();
for(int y = 0; y < image.rows; ++y)
{
    const uchar* irow = image.ptr<uchar>(y);
    uchar* orow = output.ptr<uchar>(y);
    uchar previous = black;
    int filling = 0;
    for(int x = 0; x < image.cols; ++x)
    {
        // if we are not filling, turn it on at a black-to-white transition
        if((filling == 0) && previous == black && irow[x] == white)
            ++filling;
        // if we are filling, turn it off at a white-to-black transition
        if((filling != 0) && previous == white && irow[x] == black)
            --filling;
        // write output image
        orow[x] = filling != 0 ? white : black;
        // update previous pixel
        previous = irow[x];
    }
}
I am trying to segment an image of rocks and I get a decent result, but now I need to count the pixels in the largest colored object.
The picture above shows a segmented image of a rock pile, and I want to count the number of green pixels, which denote the largest rock in the image, and then also count the second largest, i.e., the yellow one. After counting, I would like to compare my results against the ground truth.
The code to get the segmented image is adapted from Watershed segmentation opencv. A part of my code is given below:
cv::findContours(peaks_8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Create the marker image for the watershed algorithm
// CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
cv::Mat markers = cv::Mat::zeros(input_image.size(), CV_32S);
// Draw the foreground markers
for (size_t i = 0; i < contours.size(); i++)
{
cv::drawContours(markers, contours, static_cast<int>(i), cv::Scalar(static_cast<int>(i) + 1), -1);
}
// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar(255), -1);
cv::watershed(in_sharpened_image, markers);
// Generate random colors; result of watershed
std::vector<cv::Vec3b> colors;
for (size_t i = 0; i < contours.size(); i++)
{
int b = cv::theRNG().uniform(0, 256); //0,256
int g = cv::theRNG().uniform(0, 256);
int r = cv::theRNG().uniform(0, 256);
colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}
// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);
// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
for (int j = 0; j < markers.cols; j++)
{
int index = markers.at<int>(i, j);
if (index > 0 && index <= static_cast<int>(contours.size()))
{
dst.at<cv::Vec3b>(i, j) = colors[index - 1];
}
}
}
Question: Is there an efficient way to count the pixels inside the largest marker in OpenCV?
You can calculate a histogram of the markers using cv::calcHist with a range from 0 to contours.size() + 1, and find the largest value in it starting from index 1.
Instead of counting pixels you could use contourArea() for your largest contour. This will work much faster.
Something like this.
cv::Mat mask;
// numOfSegments - number of your labels (colors)
for (int i = 0; i < numOfSegments; i++) {
std::vector<cv::Vec4i> hierarchy;
// this "i + 2" may be different for you
// depends on your labels allocation.
// This is thresholding to get mask with
// contour of your #i label (color)
cv::inRange(markers, i + 2, i + 2, mask);
contours.clear();
findContours(mask, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
double area = cv::contourArea(contours[0]);
}
Having the contours at hand is also good, because after watershed() they will be quite "noisy", with lots of small peaks, and not suitable for most uses in their "raw" form. Having a contour, you can smooth it with a Gaussian or approxPolyDP, etc., as well as check some important properties of the contour shape if you need to.
I am using an iPhone camera to detect a TV screen. My current approach is to compare subsequent frames pixel by pixel and keep track of cumulative differences. The result is a binary image, as shown below.
To me this looks like a rectangle, but OpenCV does not think so. Its sides are not perfectly straight, and sometimes there is even more color bleed, which makes detection difficult. Here is my OpenCV code trying to detect the rectangle; since I am not very familiar with OpenCV, it is copied from an example I found.
uint32_t *ptr = (uint32_t*)CVPixelBufferGetBaseAddress(buffer);
cv::Mat image((int)height, (int)width, CV_8UC4, ptr); // unsigned 8-bit values for 4 channels (ARGB); note cv::Mat takes (rows, cols), i.e. (height, width)
cv::Mat image2 = [self matFromPixelBuffer:buffer];
std::vector<std::vector<cv::Point>>squares;
// blur will enhance edge detection
cv::Mat blurred(image2);
GaussianBlur(image2, blurred, cv::Size(3,3), 0); // changed from median blur to Gaussian for more accurate square detection
cv::Mat gray0(blurred.size(), CV_8U), gray;
std::vector<std::vector<cv::Point> > contours;
// find squares in every color plane of the image
for (int c = 0; c < 3; c++) {
int ch[] = {c, 0};
mixChannels(&blurred, 1, &gray0, 1, ch, 1);
// try several threshold levels
const int threshold_level = 2;
for (int l = 0; l < threshold_level; l++) {
// Use Canny instead of zero threshold level!
// Canny helps to catch squares with gradient shading
if (l == 0) {
Canny(gray0, gray, 10, 20, 3);
// Dilate helps to remove potential holes between edge segments
dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
} else {
gray = gray0 >= (l+1) * 255 / threshold_level;
}
// Find contours and store them in a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
// Test contours
std::vector<cv::Point> approx;
int biggestSize = 0;
for (size_t i = 0; i < contours.size(); i++) {
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
if (approx.size() != 4)
continue;
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
int areaSize = fabs(contourArea(cv::Mat(approx)));
if (approx.size() == 4 && areaSize > biggestSize)
biggestSize = areaSize;
cv::RotatedRect boundingRect = cv::minAreaRect(approx);
float aspectRatio = boundingRect.size.width / boundingRect.size.height;
cv::Rect boundingRect2 = cv::boundingRect(approx);
float aspectRatio2 = (float)boundingRect2.width / (float)boundingRect2.height;
bool convex = isContourConvex(cv::Mat(approx));
if (approx.size() == 4 &&
fabs(contourArea(cv::Mat(approx))) > minArea &&
(aspectRatio >= minAspectRatio && aspectRatio <= maxAspectRatio) &&
isContourConvex(cv::Mat(approx))) {
double maxCosine = 0;
for (int j = 2; j < 5; j++) {
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAXIMUM(maxCosine, cosine);
}
double area = fabs(contourArea(cv::Mat(approx)));
if (maxCosine < 0.3) {
squares.push_back(approx);
}
}
}
}
}
After the Canny step the image looks like this:
It seems fine to me, but for some reason the rectangle is not detected. Can anyone explain whether there is something wrong with my parameters?
My second approach was to use OpenCV Hough line detection, basically using the same code as above; on the Canny image I then call the HoughLines function. It gives me quite a few lines, as I had to lower the threshold to detect the vertical lines. The result looks like this:
The problem is that there are so many lines. How can I find the lines that touch the sides of the blue rectangle shown in the first image?
Or is there a better approach to detect a screen?
First of all, find the maximal-area contour, then compute its minimum-area rectangle. Divide the contour area by the rectangle area; if the ratio is close enough to 1, your contour is similar to a rectangle. This will be your required contour and rectangle.
I have calculated the histogram for an image. I want to display it as an image so that I can actually see the histogram. I think my problem has to do with scaling, although I am also slightly confused by the coordinate system starting with (0,0) in the top left.
int rows = channel.rows;
int cols = channel.cols;
int hist[256] = {0};
for(int i = 0; i<rows; i++)
{
for(int k = 0; k<cols; k++ )
{
int value = channel.at<cv::Vec3b>(i,k)[0];
hist[value] = hist[value] + 1;
}
}
Mat histPlot = cvCreateMat(256, 500,CV_8UC1);
for(int i = 0; i < 256; i++)
{
int mag = hist[i];
line(histPlot,Point(i,0),Point(i,mag),Scalar(255,0,0));
}
namedWindow("Hist",1);
imshow("Hist",histPlot);
This is my calculation for creating my histogram and displaying the result. If I do mag/100 in my second loop then I get some resemblance of a plot (although upside down). I call this method whenever I adjust a value of my image, so the histogram should also change shape, which it doesn't appear to do. Any help with scaling the histogram and displaying it properly is appreciated.
Please don't use cvCreateMat (i.e., the old C API). You also seem to have rows and cols swapped. Additionally, if you want a color drawing, you need a color image, so make it:
Mat histPlot( 500, 256, CV_8UC3 );
The image origin is the top left (0,0), so you have to draw y in reverse:
line(histPlot,Point(i,histPlot.rows-1),Point(i,histPlot.rows-1-mag/100),Scalar(255,0,0));
I have to count white pixels and compare two contours in OpenCV: one contour in the first and the fifth frame, then the fifth and the tenth frame, and so on.
I have searched a lot for how to find the next contour in a video, but all in vain. I am in doubt whether OpenCV has a function to find the next contour; I am completely confused by the tutorials and other material.
I have done the following, but I doubt my logic.
cvFindContours(bgModel->foreground, memory, &contour, sizeof(CvContour),CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
if(FrameNumber%5==0)
{
for( ; contour != 0; contour = contour->h_next )
{
double area = fabs(cvContourArea(contour,CV_WHOLE_SEQ, 0));
I don't know what to do after this. How do I get the next contour?
Here is a code fragment iterating over the contours; your contour is in the 'c' variable.
//Contour Stuffz
CvMemStorage* g_storage = NULL;
g_storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvFindContours(r2, g_storage, &contours);
for (CvSeq* c = contours; c != NULL; c=c->h_next) {
//*** Contour Stuff
int nContourPoints = c->total; // Get total number of points in the chain
if (nContourPoints > 46) { ...
I am a beginner in OpenCV and C++, but now I have to find a solution for this problem:
I have an image of a person on a blue background; now I have to subtract the background from the image and then replace it with another image.
I think there are two ways to solve this problem, but I don't know which is better:
Solution 1:
Convert the image to B&W.
Use it as a mask to subtract the background.
Solution 2:
Use contours to find the background,
and then subtract it.
I have already implemented solution 1, but the result is not what I expected.
Do you know of a better solution, or has somebody already implemented one as source code?
I will appreciate your help.
I have updated my source code here; please give me some comments.
//Get the image with person
cv::Mat imgRBG = imread("test.jpg");
//Convert this image to grayscale
cv::Mat imgGray = imread("test.jpg",CV_LOAD_IMAGE_GRAYSCALE);
//Get the background from image
cv::Mat background = imread("paris.jpg");
cv::Mat imgB, imgW;
//Image with a black background, but some areas inside the person are also black
threshold(imgGray, imgB, 200, 255, CV_THRESH_BINARY_INV);
cv::Mat imgTemp;
cv::Mat maskB, maskW;
cv::Mat imgDisplayB, imgDisplayW;
cv::Mat imgDisplay1, imgDisplay2, imgResult;
//Copy the image using the black-background mask, overriding the original image
//Now imgTemp has a black background wrapping the person; inside the person, any white areas will be replaced by black
imgRBG.copyTo(imgTemp, imgB);
//Now replace the black background with white color
cv::floodFill(imgTemp, cv::Point(imgTemp.cols -10 ,10), cv::Scalar(255.0, 255.0, 255.0));
cv::floodFill(imgTemp, cv::Point(10,10), cv::Scalar(255.0, 255.0, 255.0));
cv::floodFill(imgTemp, cv::Point(10,imgTemp.rows -10), cv::Scalar(255.0, 255.0, 255.0));
cv::floodFill(imgTemp, cv::Point(imgTemp.cols -10,imgTemp.rows -10), cv::Scalar(255.0, 255.0, 255.0));
//Convert to grayscale
cvtColor(imgTemp,imgGray,CV_RGB2GRAY);
//Convert to B&W image, now background is black, other is white
threshold(imgGray, maskB, 200, 255, CV_THRESH_BINARY_INV);
//Convert to B&W image, now background is white, other is black
threshold(imgGray, maskW, 200, 255, CV_THRESH_BINARY);
//Replace background of image by the black mask
imgRBG.copyTo(imgDisplayB, maskB);
//Clone the background image
cv::Mat overlay = background.clone();
//Create ROI
cv::Mat overlayROI = overlay(cv::Rect(0,0,imgDisplayB.cols,imgDisplayB.rows));
//Replace the area which will be human image by white color
overlayROI.copyTo(imgResult, maskW);
//Add the person image
cv::addWeighted(imgResult,1,imgDisplayB,1,0.0,imgResult);
imshow("Image Result", imgResult);
waitKey();
return 0;
Check this project
https://sourceforge.net/projects/cvchromakey
void chromakey(const Mat under, const Mat over, Mat *dst, const Scalar& color) {
// Create the destination matrix
*dst = Mat(under.rows,under.cols,CV_8UC3);
for(int y=0; y<under.rows; y++) {
for(int x=0; x<under.cols; x++) {
if (over.at<Vec3b>(y,x)[0] >= red_l && over.at<Vec3b>(y,x)[0] <= red_h &&
    over.at<Vec3b>(y,x)[1] >= green_l && over.at<Vec3b>(y,x)[1] <= green_h &&
    over.at<Vec3b>(y,x)[2] >= blue_l && over.at<Vec3b>(y,x)[2] <= blue_h)
{
    dst->at<Vec3b>(y,x)[0] = under.at<Vec3b>(y,x)[0];
    dst->at<Vec3b>(y,x)[1] = under.at<Vec3b>(y,x)[1];
    dst->at<Vec3b>(y,x)[2] = under.at<Vec3b>(y,x)[2];
}
else
{
    dst->at<Vec3b>(y,x)[0] = over.at<Vec3b>(y,x)[0];
    dst->at<Vec3b>(y,x)[1] = over.at<Vec3b>(y,x)[1];
    dst->at<Vec3b>(y,x)[2] = over.at<Vec3b>(y,x)[2];
}
}
}
}
If you know that the background is blue, you are losing valuable information by converting the image to B/W.
If the person is not wearing blue (at least nothing very close to the background color), you don't have to use contours. Just replace the blue pixels with the pixels from the other image. You can use the CvScalar data type with the cvGet2D and cvSet2D functions to achieve this.
Edit:
Your code looks a lot more complicated than the original problem you stated. Having a blue background (also called a "blue screen" or "chroma key") is a common method used by TV channels to change the backgrounds of news readers. The reason for selecting blue is that human skin has less dominance in the blue component.
Assuming that the person is not wearing blue, the following code should work. Let me know if you need something different.
//Read the image with person
IplImage* imgPerson = cvLoadImage("person.jpg");
//Read the image with background
IplImage* imgBackground = cvLoadImage("paris.jpg");
// assume that the blue background is quite even
// here is a possible range of pixel values
// note that I did not use all of them :-)
unsigned char backgroundRedMin = 0;
unsigned char backgroundRedMax = 10;
unsigned char backgroundGreenMin = 0;
unsigned char backgroundGreenMax = 10;
unsigned char backgroundBlueMin = 245;
unsigned char backgroundBlueMax = 255;
// for simplicity, I assume that both images are of the same resolution
// run a loop to replace pixels
for (int i=0; i<imgPerson->width; i++)
{
for (int j=0; j< imgPerson->height; j++)
{
CvScalar currentPixel = cvGet2D(imgPerson, j, i);
// compare the RGB values of the pixel, with the range
if (currentPixel.val[0] > backgroundBlueMin && currentPixel.val[1] <
backgroundGreenMax && currentPixel.val[2] < backgroundRedMax)
{
// copy the corresponding pixel from background
CvScalar currentBackgroundPixel = cvGet2D(imgBackground, j, i);
cvSet2D(imgPerson, j, i, currentBackgroundPixel);
}
}
}
cvShowImage("Image Result", imgPerson);
cvWaitKey(0);
return 0;