My question is for OpenCV experts: I've detected the road lines (left and right), and I want to paint the road area between them with semi-transparent blue. So I used:
cv::fillPoly(image, ppt, npt, 1, CV_RGB(0, 0, 200), lineType);
ppt contains the points for the right and left lines, and npt is the number of points.
But what I got is an opaque fill over the road, which is not my aim.
So my question is: is there any way to paint the road area semi-transparently? I was told to manipulate the channels, like this:
cv::Mat channel[3];
split(image, channel);
channel[0] = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
merge(channel, 3, image);
cv::imshow("kkk", image);
But the thing is, the whole image ended up tinted, and I want only the road area affected. Any ideas appreciated!
Thanks
Try this code (I couldn't test it, as I'm on mobile):
cv::Mat polyImage = cv::Mat(image.rows, image.cols, CV_8UC3, cv::Scalar(0, 0, 0));
cv::fillPoly(polyImage, ppt, npt, 1, CV_RGB(0, 0, 200), lineType);
float transFactor = 0.5f; // the bigger, the more transparent the overlay
for (int y = 0; y < image.rows; ++y)
    for (int x = 0; x < image.cols; ++x)
    {
        // blend only where the polygon was drawn
        if (polyImage.at<cv::Vec3b>(y, x) != cv::Vec3b(0, 0, 0))
            image.at<cv::Vec3b>(y, x) = transFactor * image.at<cv::Vec3b>(y, x)
                                      + (1.0f - transFactor) * polyImage.at<cv::Vec3b>(y, x);
    }
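As an aside, the same blend can be done without the explicit per-pixel loop, using OpenCV's built-in functions. This is just a minimal sketch assuming the same variables (image, polyImage, transFactor) as above:
cv::Mat blended, mask;
// blend the whole frame with the polygon image in one call
cv::addWeighted(image, transFactor, polyImage, 1.0f - transFactor, 0.0, blended);
// mask of the polygon area: any non-black pixel of polyImage
cv::cvtColor(polyImage, mask, CV_BGR2GRAY);
mask = mask > 0;
// copy the blended pixels back only inside the road polygon
blended.copyTo(image, mask);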
I want to improve my object detection project. First, to get my current result, I use absdiff; then I apply the operations shown in my code below:
cv::threshold(subtractionResultEdges, threshold, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Sobel(threshold, sobel, CV_32F, 1, 0);
minMaxLoc(sobel, &minVal, &maxVal);
sobel.convertTo(sobel, CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));
dilate(subtractionResultEdges, subtractionResultEdges, verticalStructreMat, Point(-1, -1));
erode(subtractionResultEdges, filteredResult, verticalStructreMat, Point(-1, -1));
Canny(filteredResult, filteredResult, 33, 100, 3);
My last operation is:
findContours(canny_output, *contours, *hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
These are my results after each operation, plus the foreground, which I obtained using the accumulate function over 20 frames:
foreground:
http://j71i.imgup.net/foregroundc3dc.PNG
subtraction:
http://p81i.imgup.net/subtractio2866.PNG
Sobel:
http://g51i.imgup.net/sobela1fb.PNG
threshold:
http://p46i.imgup.net/treshold14c9.PNG
dilate, erode and Canny:
http://q68i.imgup.net/canny2e1a.PNG
findContours:
http://v76i.imgup.net/contours6845.PNG
The background is also obtained from the accumulate function.
Could you help me get better corner or contour detection? I need it to determine the object size in pixels.
Thanks in advance!
Use a larger kernel for the dilate/erode part, maybe (11, 11) or even bigger, or alternatively run multiple iterations (this can be set as a parameter). This should connect the individual parts of your detected object better, and then you'll have fewer contours.
To calculate the area, you can then use contourArea().
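A hedged sketch of what that could look like with your variable names (the kernel size and the iteration count of 3 are just starting points to tune):
// larger elliptical kernel to bridge gaps between the object parts
cv::Mat bigKernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11, 11));
cv::dilate(subtractionResultEdges, subtractionResultEdges, bigKernel, cv::Point(-1, -1), 3);
cv::erode(subtractionResultEdges, filteredResult, bigKernel, cv::Point(-1, -1), 3);
// after findContours, measure each remaining object in pixels
for (size_t i = 0; i < contours->size(); ++i)
{
    double area = cv::contourArea((*contours)[i]);   // area in pixels
    cv::Rect box = cv::boundingRect((*contours)[i]); // width/height in pixels
}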
We're currently trying to detect the object regions in images of medical instruments using the methods available in OpenCV (C++ version). An example image is shown below:
Here are the steps we're following (a rough code sketch follows the list):
Converting the image to grayscale
Applying a median filter
Finding edges using the Sobel filter
Converting the result to a binary image using a threshold of 25
Skeletonizing the image to make sure we have neat edges
Finding the X largest connected components
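For reference, here is roughly what that pipeline looks like in code. This is only a sketch of the steps as described (input is assumed to be the loaded color image; the threshold of 25 comes from step 4, and the erode/dilate loop is one common way to implement the skeletonization of step 5):
cv::Mat gray, filtered, edges, binary;
cv::cvtColor(input, gray, CV_BGR2GRAY);                  // 1. grayscale
cv::medianBlur(gray, filtered, 5);                       // 2. median filter
cv::Sobel(filtered, edges, CV_8U, 1, 0);                 // 3. sobel edges
cv::threshold(edges, binary, 25, 255, CV_THRESH_BINARY); // 4. binarize at 25
// 5. morphological skeleton: keep the ridges left after each erosion
cv::Mat skel = cv::Mat::zeros(binary.size(), CV_8UC1);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
cv::Mat eroded, opened, ridge;
while (cv::countNonZero(binary) > 0)
{
    cv::erode(binary, eroded, kernel);
    cv::dilate(eroded, opened, kernel);
    cv::subtract(binary, opened, ridge);
    cv::bitwise_or(skel, ridge, skel);
    eroded.copyTo(binary);
}
// 6. find the components, then sort by contourArea and keep the X largest
std::vector<std::vector<cv::Point>> contours;
cv::findContours(skel.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);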
This approach works perfectly for image 1, and here is the result:
The yellow borders are the connected components detected.
The rectangles are just to highlight the presence of a connected component.
To get understandable results, we simply removed the connected components that are completely inside another one, so the end result is something like this:
So far everything was fine, but another sample image, shown below, complicated our work.
Having a small light-green towel under the objects results in this image:
After filtering the regions as we did earlier, we got this:
Obviously, it is not what we need. We're expecting something like this:
I'm thinking about clustering the closest connected components found (somehow!) so we can minimize the impact of the towel's presence, but I don't know yet whether that's doable, or whether someone has tried something like this before. Also, does anyone have a better idea for overcoming this kind of problem?
Thanks in advance.
Here's what I tried.
In the images, the background is mostly greenish, and its area is considerably larger than that of the foreground. So if you take a color histogram of the image, the greenish bins will have higher values. Threshold this histogram so that bins with smaller values are set to zero; this way we most probably retain the greenish (high-value) bins and discard the other colors. Then backproject this histogram: the backprojection will highlight the greenish regions in the image.
Backprojection:
Then threshold this backprojection. This gives us the background.
Background (after some morphological filtering):
Invert the background to get foreground.
Foreground (after some morphological filtering):
Then find the contours of the foreground.
I think this gives a reasonable segmentation, and using it as a mask you may be able to apply a segmentation method like GrabCut to refine the boundaries (I haven't tried this yet).
EDIT:
I tried the GrabCut approach and it indeed refines the boundaries. I've added the code for GrabCut segmentation.
Contours:
GrabCut segmentation using the foreground as mask:
I'm using the OpenCV C API for the histogram processing part.
// load the color image
IplImage* im = cvLoadImage("bFly6.jpg");
// get the color histogram
IplImage* im32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 3);
cvConvertScale(im, im32f);
int channels[] = {0, 1, 2};
int histSize[] = {32, 32, 32};
float rgbRange[] = {0, 256};
float* ranges[] = {rgbRange, rgbRange, rgbRange};
CvHistogram* hist = cvCreateHist(3, histSize, CV_HIST_ARRAY, ranges);
IplImage* b = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* g = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* r = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* backproject32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 1);
IplImage* backproject8u = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplImage* bw = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplConvKernel* kernel = cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_ELLIPSE);
cvSplit(im32f, b, g, r, NULL);
IplImage* planes[] = {b, g, r};
cvCalcHist(planes, hist);
// find min and max values of histogram bins
float minval, maxval;
cvGetMinMaxHistValue(hist, &minval, &maxval);
// threshold the histogram. this sets the bin values that are below the threshold to zero
cvThreshHist(hist, maxval/32);
// backproject the thresholded histogram. backprojection should contain higher values for the
// background and lower values for the foreground
cvCalcBackProject(planes, backproject32f, hist);
// convert to 8u type
double min, max;
cvMinMaxLoc(backproject32f, &min, &max);
cvConvertScale(backproject32f, backproject8u, 255.0 / max);
// threshold backprojected image. this gives us the background
cvThreshold(backproject8u, bw, 10, 255, CV_THRESH_BINARY);
// some morphology on background
cvDilate(bw, bw, kernel, 1);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_CLOSE, 2);
// get the foreground
cvSubRS(bw, cvScalar(255, 255, 255), bw);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_OPEN, 2);
cvErode(bw, bw, kernel, 1);
// find contours of the foreground
//CvMemStorage* storage = cvCreateMemStorage(0);
//CvSeq* contours = 0;
//cvFindContours(bw, storage, &contours);
//cvDrawContours(im, contours, CV_RGB(255, 0, 0), CV_RGB(0, 0, 255), 1, 2);
// grabcut
Mat color(im);
Mat fg(bw);
Mat mask(bw->height, bw->width, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, fg);
Mat bgdModel, fgdModel;
grabCut(color, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat gcfg = mask == GC_PR_FGD;
vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(gcfg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
for (int idx = 0; idx < (int)contours.size(); idx++)
{
    drawContours(color, contours, idx, Scalar(0, 0, 255), 2);
}
// cleanup ...
UPDATE: We can do the above using the C++ interface as shown below.
const int channels[] = {0, 1, 2};
const int histSize[] = {32, 32, 32};
const float rgbRange[] = {0, 256};
const float* ranges[] = {rgbRange, rgbRange, rgbRange};
Mat hist;
Mat im32fc3, backpr32f, backpr8u, backprBw, kernel;
Mat im = imread("bFly6.jpg");
im.convertTo(im32fc3, CV_32FC3);
calcHist(&im32fc3, 1, channels, Mat(), hist, 3, histSize, ranges, true, false);
calcBackProject(&im32fc3, 1, channels, hist, backpr32f, ranges);
double minval, maxval;
minMaxIdx(backpr32f, &minval, &maxval);
threshold(backpr32f, backpr32f, maxval/32, 255, THRESH_TOZERO);
backpr32f.convertTo(backpr8u, CV_8U, 255.0/maxval);
threshold(backpr8u, backprBw, 10, 255, THRESH_BINARY);
kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
dilate(backprBw, backprBw, kernel);
morphologyEx(backprBw, backprBw, MORPH_CLOSE, kernel, Point(-1, -1), 2);
backprBw = 255 - backprBw;
morphologyEx(backprBw, backprBw, MORPH_OPEN, kernel, Point(-1, -1), 2);
erode(backprBw, backprBw, kernel);
Mat mask(backpr8u.rows, backpr8u.cols, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, backprBw);
Mat bgdModel, fgdModel;
grabCut(im, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat fg = mask == GC_PR_FGD;
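To mirror the contour-drawing step of the C version, you could then finish with something like this (a sketch, in the same style as the code above):
vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(fg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
for (int idx = 0; idx < (int)contours.size(); idx++)
    drawContours(im, contours, idx, Scalar(0, 0, 255), 2);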
I would consider a few options. My assumption is that the camera does not move. I haven't used the images or written any code, so this is mostly from experience.
Rather than just looking for edges, try separating the background using a segmentation algorithm. A Mixture of Gaussians model can help with this: given a set of images over the same region (i.e. video), you can cancel out regions that are persistent. Then new items such as instruments will pop out, and connected components can be run on the resulting blobs.
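For example, with OpenCV's built-in MOG2 background subtractor (a sketch assuming the OpenCV 3.x C++ API and an opened cv::VideoCapture named cap):
Ptr<BackgroundSubtractorMOG2> mog2 = createBackgroundSubtractorMOG2();
Mat frame, fgMask;
for (;;)
{
    cap >> frame;
    if (frame.empty()) break;
    // the persistent background is learned away; new items remain as blobs
    mog2->apply(frame, fgMask);
    // run findContours / connected components on fgMask here
}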
I would look at segmentation algorithms to see if you can tune the conditions to make this work for you. One major item is to make sure your camera is stable, or that you stabilize the images yourself in pre-processing.
I would consider using interest points to identify regions in the image with a lot of new material. Given that the background is relatively plain, small objects such as needles will create a bunch of interest points. The towel should be much more sparse. Perhaps overlaying the detected interest points over the connected component footprint will give you a "density" metric which you can then threshold. If the connected component has a large ratio of interest points for the area of the item, then it is an interesting object.
On this note, you can even clean up the connected component footprint by using a Convex Hull to prune the objects you have detected. This may help situations such as a medical instrument casting a shadow on the towel which stretches the component region. This is a guess, but interest points can definitely give you more information than just edges.
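A rough sketch of the interest-point density idea above (ORB is used here purely as an example detector, and the 1e-3 threshold is an arbitrary guess to tune):
Ptr<ORB> orb = ORB::create(); // OpenCV 3.x factory
vector<KeyPoint> keypoints;
orb->detect(image, keypoints);
for (size_t i = 0; i < contours.size(); i++)
{
    Rect box = boundingRect(contours[i]);
    int hits = 0;
    for (size_t k = 0; k < keypoints.size(); k++)
        if (box.contains(cv::Point(keypoints[k].pt))) hits++;
    // interest points per pixel of the component's footprint
    double density = hits / (double)box.area();
    if (density > 1e-3)
    {
        // optionally tighten the footprint with a convex hull
        vector<cv::Point> hull;
        convexHull(contours[i], hull);
    }
}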
Finally, given that you have a stable background with clear objects in view, I would take a look at Bag-of-Features to see if you can just detect each individual object in the image. This may be useful since there seems to be a consistent pattern to the objects in these images. You can build a big database of images of needles, gauze, scissors, etc. Then BoF, which is available in OpenCV, will find those candidates for you. You can also mix it in with the other operations you are doing to compare results.
Bag of Features using OpenCV
http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O
I would also suggest an idea for your initial version: you can skip the contours whose regions have width and height greater than half the image width and height.
// take the bounding rect of the contour
Rect rect = Imgproc.boundingRect(contours.get(i));
if (rect.width < inputImageWidth / 2 && rect.height < inputImageHeight / 2) {
    // then continue to draw or use it for further purposes
}
I'm working on a Processing project to simulate very basic hard shadows. For the most part I've got it working: each edge of each object checks whether its back is facing the light, and if it is, a shadow polygon is added using that edge, with its other vertices projected directly away from the light.
However, when I tried to shift from solid shadows to transparent ones, I ran into a problem: because the shadows are made of multiple shapes, their borders overlap, making the overlapping regions darker than everywhere else:
I disabled the stroke on the shadows, which improved the effect but left thin lines between the shadows, despite the polygons sharing identical edges:
Is there a way to eliminate this artifact? If so, how?
The solution is to not draw the shadows as separate pieces, but to draw the combined polygon of all the shadow pieces as one polygon.
Here's a little example that exhibits your problem:
void setup(){
  size(500, 500);
}

void draw(){
  background(255);
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);

  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(0, height);
  vertex(width, height);
  endShape();

  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(width, height);
  vertex(width, 0);
  endShape();
}
Notice the white line between the two polygons:
But if I instead draw the two polygons as one:
void setup(){
  size(500, 500);
}

void draw(){
  background(255);
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);

  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(0, height);
  vertex(width, height);
  vertex(width, 0);
  endShape();
}
Then the white line goes away: