detect blob over other blob - opencv

I use OpenCV and the cvBlob library to play with blobs.
Now I want to detect blobs in this particular case.
The difficulty here is that there are two blobs on top of a bigger one, plus another blob that overlaps part of the bigger one.
To detect a blob with the cvBlob library you must supply a binary image.
I think I need to create two or more images to segment the uniformly colored blobs, then binarize them to obtain all the blobs in the image.
How can I do that?
Thanks in advance.

I'm quite a beginner in OpenCV, but I guess that for this particular case you should work with cvFindContours using the CV_RETR_EXTERNAL flag instead of cvblob (with CV_RETR_TREE, your yellow blob would be nested inside the blue one).
It depends on whether you want to track them or not (cvblob offers a quick and efficient way to track blobs, so you don't have to implement CamShift yourself).
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* firstContour = NULL; // cvFindContours allocates the sequence itself
cvFindContours(image, storage, &firstContour, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// If at least one contour was found
if (firstContour != 0) {
    // walk the list of external contours
    for (CvSeq* c = firstContour; c != NULL; c = c->h_next) {
        for (int i = 0; i < c->total; ++i) {
            // Get each point of the current contour
            CvPoint* pt = CV_GET_SEQ_ELEM(CvPoint, c, i);
            double x = pt->x;
            double y = pt->y;
        }
    }
}
With the information given by the contour you can easily find the centroid, angle, and bounding box of your blob.
Tracking these blobs might be more difficult, as cvblob doesn't seem to handle overlapping blobs (as far as I can see). You may have to implement your own tracking method.
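For example, here is a minimal sketch of how to get those from each contour with the same C API (cvBoundingRect and cvMoments are standard calls; treat this as a starting point, not tested code):
for (CvSeq* c = firstContour; c != NULL; c = c->h_next) {
    // axis-aligned bounding box of this contour
    CvRect box = cvBoundingRect(c, 0);
    // spatial moments give the centroid as (m10/m00, m01/m00)
    CvMoments m;
    cvMoments(c, &m, 0);
    double cx = m.m10 / m.m00;
    double cy = m.m01 / m.m00;
}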

Related

Matching problems when using OpenCV's matchShapes function

I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it's not possible to rely on its color or anything similar; feature detectors like SIFT also don't work because the object can be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh = 80;
double ans = 0, result = 0;
// Preprocess pictures
cvtColor(scene, imagegray1, CV_BGR2GRAY);
cvtColor(Template, imagegray2, CV_BGR2GRAY);
GaussianBlur(imagegray1, imagegray1, Size(5, 5), 2);
GaussianBlur(imagegray2, imagegray2, Size(5, 5), 2);
Canny(imagegray1, imageresult1, thresh, thresh * 2);
Canny(imagegray2, imageresult2, thresh, thresh * 2);
vector<vector<Point> > contours1;
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy1, hierarchy2;
// Template
findContours(imageresult2, contours2, hierarchy2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
// Scene
findContours(imageresult1, contours1, hierarchy1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
imshow("template", Template);
double helper = INT_MAX;
int idx_i = 0, idx_j = 0;
// Match all contours with eachother
for(int i = 0; i < contours1.size(); i++)
{
for(int j = 0; j < contours2.size(); j++)
{
ans=matchShapes(contours1[i],contours2[j],CV_CONTOURS_MATCH_I1 ,0);
// find the best matching contour
if((ans < helper) )
{
idx_i = i;
helper = ans;
}
}
}
// draw the best contour
drawContours(scene, contours1, idx_i,
Scalar(255,255,0),3,8,hierarchy1,0,Point());
When I use a scene that contains only the template, I get a good matching result:
But when there are more objects in the picture, I have trouble detecting the object:
I hope someone can tell me what the problem with my code is. Thanks.
You have a huge number of contours in the second image (almost one per letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even very small contours may fit the shape you are looking for.
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if (contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour that way, or improving the Canny step in some way. A sketch of that idea follows below.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for.
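For example, that idea could be folded into your matching loop roughly like this (a sketch only; contours2[0] stands in for the template contour, and the area threshold of 50 is the value from above):
for (size_t i = 0; i < contours1.size(); i++)
{
    // skip tiny contours such as individual letters
    if (contourArea(contours1[i]) < 50)
        continue;
    // close gaps left by Canny by matching the convex hull instead
    vector<Point> hull;
    convexHull(contours1[i], hull);
    ans = matchShapes(hull, contours2[0], CV_CONTOURS_MATCH_I1, 0);
    if (ans < helper)
    {
        idx_i = i;
        helper = ans;
    }
}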

find objects with bounding boxes

I would like to find a bounding box for each object in the picture, and after that crop out the bounding boxes and use them for the next steps. Here is the input picture after preprocessing.
I have code for a bounding box, but it only works well for one object. If there are two objects it lumps them together and draws one bounding box around both of them. Here is the first output. The code for it is:
vector<vector<Point> > contours;
vector<Point> points;
findContours(erod, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for (size_t i = 0; i < contours.size(); i++) {
    for (size_t j = 0; j < contours[i].size(); j++) {
        Point p = contours[i][j];
        points.push_back(p);
    }
}
if (points.size() > 0) {
    Rect brect = boundingRect(Mat(points).reshape(2));
    cv::rectangle(erod, brect.tl(), brect.br(), Scalar(100, 100, 200), 2, CV_AA);
    Mat ROI = frame(brect);
}
The second thing I tried was the code from the OpenCV documentation. There I changed CV_RETR_TREE in findContours to CV_RETR_EXTERNAL, but I still get too many bounding boxes, and I don't know how to crop out the boxes.
Thanks a lot!
Before finding contours you should do some morphological opening to clear all the noise and lines:
Mat morphKernelOpen = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new org.opencv.core.Size(20, 20));
Imgproc.morphologyEx(mat, mat, Imgproc.MORPH_OPEN, morphKernelOpen);
Result:
Also, there are some black spaces inside your objects, so to avoid finding contours inside them, your findContours call should use the CV_RETR_EXTERNAL mode:
Imgproc.findContours(scharrThresh, scharrThreshContours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
In the end you'll have two contours, and you can continue finding the boxes as you did previously; a sketch follows below.
If you don't like soft edges around your objects, you can apply a threshold before the morphological opening. Getting 100% accurate contours around your objects will be very hard or nearly impossible due to the amount of noise in the image. Also, if you can, next time please include the result image you get after each action you take; it will be easier to give you a proper answer.
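To get one box per object and crop it out, something along these lines should work with the C++ code from the question (a sketch; the 50 px² noise threshold is an assumption):
// one bounding box per external contour instead of one box around all points
for (size_t i = 0; i < contours.size(); i++) {
    if (contourArea(contours[i]) < 50) // assumed noise threshold
        continue;
    Rect brect = boundingRect(contours[i]);
    rectangle(erod, brect.tl(), brect.br(), Scalar(100, 100, 200), 2, CV_AA);
    Mat ROI = frame(brect).clone(); // clone so the crop owns its own pixels
    // ... hand ROI to the next processing step
}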

How can I prevent my object detection program from detecting multiple objects of different sizes?

So, here is my situation. I have created an object detection program based on color. My program detects the color red and it works perfectly, but here are the problems I am facing:
Whenever there is more than one red object in the surroundings, my program detects them all and cannot really track one object at a time (i.e. it tracks other red objects of various sizes in the background) and shows the error "too much noise in the background". As you can see in the attached threshold image, it detects the round object (which is my tracking target) as well as my cap, which is also red. I want my program to detect only my tracking object (a round coke-bottle cap). How can I achieve that? Please help me out. I have my engineering design contest in a few days and have to demo my program in front of my lecturers. My program should only detect and track the object I want. Thanks.
My code for the object detection program is a little long, so I will explain it instead: I capture a frame from the webcam, convert it to HSV, use an HSV inRange filter to filter out every color but red, and apply morphological operations on the filtered image. This all goes in my main function.
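For context, a minimal sketch of what that main-loop pipeline might look like (capture is assumed to be a cv::VideoCapture, and the 3x3/8x8 kernel sizes are assumptions):
int x = 0, y = 0;
Mat frame, HSV, threshold;
capture.read(frame);              // grab a webcam frame
cvtColor(frame, HSV, CV_BGR2HSV); // convert BGR to HSV
// keep only pixels inside the HSV window defined by the globals below
inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), threshold);
// morphological opening: erode away speckles, then dilate the object back
Mat erodeElement = getStructuringElement(MORPH_RECT, Size(3, 3));
Mat dilateElement = getStructuringElement(MORPH_RECT, Size(8, 8));
erode(threshold, threshold, erodeElement);
dilate(threshold, threshold, dilateElement);
trackFilteredObject(x, y, threshold, frame);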
I am using a frame resolution of 1280*720 for my webcam. It slows down my program a bit, but that was a trade-off I had to make for the gesture-controlled operations. Anyway, here are my drawObject and trackFilteredObject functions.
int H_MIN = 0;
int H_MAX = 256;
int S_MIN = 0;
int S_MAX = 256;
int V_MIN = 0;
int V_MAX = 256;
//default capture width and height
const int FRAME_WIDTH = 1280;
const int FRAME_HEIGHT = 720;
//max number of objects to be detected in frame
const int MAX_NUM_OBJECTS=50;
//minimum and maximum object area
const int MIN_OBJECT_AREA = 20*20;
const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;
void drawObject(int x, int y, Mat &frame){
    circle(frame, Point(x, y), 20, Scalar(0, 255, 0), 2);
    if (y - 25 > 0)
        line(frame, Point(x, y), Point(x, y - 25), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(x, 0), Scalar(0, 255, 0), 2);
    if (y + 25 < FRAME_HEIGHT)
        line(frame, Point(x, y), Point(x, y + 25), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(x, FRAME_HEIGHT), Scalar(0, 255, 0), 2);
    if (x - 25 > 0)
        line(frame, Point(x, y), Point(x - 25, y), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(0, y), Scalar(0, 255, 0), 2);
    if (x + 25 < FRAME_WIDTH)
        line(frame, Point(x, y), Point(x + 25, y), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(FRAME_WIDTH, y), Scalar(0, 255, 0), 2);
    putText(frame, intToString(x) + "," + intToString(y), Point(x, y + 30), 1, 1, Scalar(0, 255, 0), 2);
}
void trackFilteredObject(int &x, int &y, Mat threshold, Mat &cameraFeed){
    Mat temp;
    threshold.copyTo(temp);
    //these two vectors are needed for the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    //find contours of the filtered image using the OpenCV findContours function
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    //use the moments method to find our filtered object
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        //if the number of objects is greater than MAX_NUM_OBJECTS we have a noisy filter
        if (numObjects < MAX_NUM_OBJECTS){
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {
                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;
                //if the area is less than 20 px by 20 px it is probably just noise
                //if the area is about 2/3 of the image size, it is probably just a bad filter
                //we only want the object with the largest area, so we save a reference area each
                //iteration and compare it to the area in the next iteration.
                if (area > MIN_OBJECT_AREA && area < MAX_OBJECT_AREA && area > refArea){
                    x = moment.m10 / area;
                    y = moment.m01 / area;
                    objectFound = true;
                    refArea = area;
                } else objectFound = false;
            }
            //let the user know an object was found
            if (objectFound == true){
                putText(cameraFeed, "Tracking Object", Point(0, 50), 2, 1, Scalar(0, 255, 0), 2);
                //draw the object location on screen
                drawObject(x, y, cameraFeed);
            }
        } else putText(cameraFeed, "TOO MUCH NOISE! ADJUST FILTER", Point(0, 50), 1, 2, Scalar(0, 0, 255), 2);
    }
}
Here is a link to the image; as you can see, it also detects the red hat in the background along with the red cap of the coke bottle.
My observations: here is what I think I need to do to avoid detecting red objects of unknown sizes. I probably have to edit the maximum object area, declared above as (const int MAX_OBJECT_AREA = FRAME_HEIGHT*FRAME_WIDTH/1.5;). Changing this value might eliminate the detection of larger continuous red regions. But there is another problem: some objects are not completely red and only have patches of red among other colors, and if a detected patch is within the area range specified in my program, it gets detected too. For example, I was wearing a t-shirt with mixed colors, and when I tested my program while wearing it, the program picked out the red parts from among the other colors. How do I solve this issue?
I think you can try the following procedure (a rough sketch follows below):
Obtain a circular kernel having roughly the same area as your object of interest. You can create it like this: Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(d, d));
where d is the diameter of the disk.
Perform normalized cross-correlation or convolution of the filtered image with this kernel (I think normalized cross-correlation would be better; also add an empty border around the kernel).
The peak of the resulting image should give you the location of the circular region in your filtered image (if you are using normalized cross-correlation, you'll have to add the shift).
To speed things up, you can perform this at a reduced resolution.
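Here is what that might look like (a sketch; the diameter d and all tuning values are assumptions, and matchTemplate with CV_TM_CCORR_NORMED performs the normalized cross-correlation):
int d = 40; // assumed disk diameter in pixels, roughly the size of the cap
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(d, d));
Mat response;
// normalized cross-correlation of the binary filtered image with the disk
matchTemplate(threshold, kernel, response, CV_TM_CCORR_NORMED);
double maxVal;
Point maxLoc;
minMaxLoc(response, 0, &maxVal, 0, &maxLoc);
// the response map is smaller than the input, so add back half the kernel size
Point center(maxLoc.x + d / 2, maxLoc.y + d / 2);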
You can filter out non-circular shapes by detecting circles in your thresholded image. OpenCV provides a built-in method to detect circles using the Hough transform; more info here. You can take advantage of this function to retain only circles whose radius lies in a given range.
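A possible sketch of that (the minDist, Canny/accumulator thresholds, and the 20-60 px radius range are all values you would have to tune):
vector<Vec3f> circles;
HoughCircles(threshold, circles, CV_HOUGH_GRADIENT, 1, 50, 100, 30, 20, 60);
for (size_t k = 0; k < circles.size(); k++) {
    Point center(cvRound(circles[k][0]), cvRound(circles[k][1]));
    int radius = cvRound(circles[k][2]);
    // only circles within the radius range survive; draw them for inspection
    circle(cameraFeed, center, radius, Scalar(0, 255, 0), 2);
}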
Another possibility is to implement connected component labeling (CCL) in your demo program.
I believe it was removed at some point in the 2.x versions of OpenCV, but a basic implementation of the two-pass version is straightforward from the Wikipedia page.
CCL will assign a unique ID to each object after thresholding. You then have to match the objects in frame (T-1) to the objects in frame (T) (for example based on some nearest-distance criterion), and possibly add trajectory filtering or smoothing, but this would definitely earn you some extra points.
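(For what it's worth, OpenCV 3.0 added connectedComponentsWithStats, which does the labeling for you; a minimal sketch, where binary stands for your thresholded image:)
Mat labels, stats, centroids;
int n = connectedComponentsWithStats(binary, labels, stats, centroids);
for (int i = 1; i < n; i++) { // label 0 is the background
    int area = stats.at<int>(i, CC_STAT_AREA); // per-object pixel count
    Point2d c(centroids.at<double>(i, 0), centroids.at<double>(i, 1)); // per-object centroid
    // match these centroids against the previous frame's objects here
}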

How to track multiple object locations?

I need to track multiple objects, some colored markers attached to a human body, all the same color. I can track one object through a threshold image and its moments, but when I use more than one object, the computed moment lands somewhere between the two or three of them. I need the xy coordinates of each one. Ultimately, I want to run some analysis on those sequences of coordinates.
I'm using VS2010, OpenCV 2.3.1, Win7 x64.
You have to compute the moments for each blob separately. To accomplish this, you can use cv::findContours to get a descriptor for each blob in the form of its contour, then use that to compute its moments. The code snippet below, inspired by this example, shows how to compute the mass center of each blob using this approach.
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;

// Find contours
cv::findContours(img, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

// Get the moments
std::vector<cv::Moments> mu(contours.size());
for (int i = 0; i < contours.size(); i++)
    mu[i] = cv::moments(contours[i], false);

// Get the mass centers
std::vector<cv::Point2f> mc(contours.size());
for (int i = 0; i < contours.size(); i++)
    mc[i] = cv::Point2f(mu[i].m10 / mu[i].m00, mu[i].m01 / mu[i].m00);

Memory Leak while using CvSeq in cvFindContours

I am new to OpenCV and I have run into a problem while using it.
Currently I am working on the Binary Partition Tree (BPT) algorithm. Basically I need to split the image into many regions; based on some parameter, two regions are then merged into one new region that consists of both.
I managed to get the initial regions using cvWatershed. I also created a vector to store these regions, one per vector slot. However, I get a memory leak when I try to move the contour information into the vector.
for (int h = 0; h < compCount; h++) // compCount - number of regions found through cvWatershed
{
    cvZero(WSRegion); // clear the image used for painting
    Region.push_back(EmptyNode); // create an empty vector slot
    CvScalar RegionColor = colorTab[h]; // the color of the region in the watershed output
    for (int i = 0; i < WSOut->height; i++)
    {
        for (int j = 0; j < WSOut->width; j++)
        {
            CvScalar s = cvGet2D(WSOut, i, j); // get the pixel color in the watershed image
            if (s.val[0] == RegionColor.val[0] && s.val[1] == RegionColor.val[1] && s.val[2] == RegionColor.val[2])
            {
                cvSet2D(WSRegion, i, j, cvScalarAll(255)); // paint the pixel white if it matches region[h]'s color
            }
        }
    }
    MemStorage = cvCreateMemStorage(); // create memory storage
    cvFindContours(WSRegion, MemStorage, &contours, sizeof(CvContour), CV_RETR_LIST);
    Region[h].RegionContour = cvCloneSeq(contours); // clone and store in vector Region[h]
    Region[h].RegionContour->h_next = NULL;
}
Is there any way I can solve this problem? Or is there an alternative so that I do not need to create a new memory storage for every region? Thank you in advance.
You should create the memory storage only once, before the loop; cvFindContours can reuse it. After the loop you should release the storage with:
void cvReleaseMemStorage(CvMemStorage** storage)
You can also take a look at the CvMemStorage specification here:
http://opencv.itseez.com/modules/core/doc/dynamic_structures.html?highlight=cvreleasememstorage#CvMemStorage
EDIT:
Your next problem is with cvCloneSeq(). Here are some specifications for it:
CvSeq* cvCloneSeq(const CvSeq* seq, CvMemStorage* storage=NULL )
Parameters:
seq – Sequence
storage – The destination storage block to hold the new sequence header and the copied data, if any. If it is NULL, the function uses the storage block containing the input sequence.
As you can see, if you don't specify a different memory storage, it will clone the sequence into the same memory block as the input. When you release that memory storage after the loop, you are also releasing the last contour and its clone that you pushed into the list.
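Putting both fixes together, the loop from the question could be restructured roughly like this (a sketch reusing the question's variable names; scratchStorage and cloneStorage are names introduced here):
CvMemStorage* scratchStorage = cvCreateMemStorage(0); // reused by cvFindContours every iteration
CvMemStorage* cloneStorage = cvCreateMemStorage(0);   // long-lived storage that owns the clones
for (int h = 0; h < compCount; h++)
{
    // ... paint the binary mask WSRegion for region h, as in the original loop ...
    cvClearMemStorage(scratchStorage); // recycle the scratch block instead of creating a new one
    cvFindContours(WSRegion, scratchStorage, &contours, sizeof(CvContour), CV_RETR_LIST);
    // clone into the separate storage so the clone outlives the scratch storage
    Region[h].RegionContour = cvCloneSeq(contours, cloneStorage);
    Region[h].RegionContour->h_next = NULL;
}
cvReleaseMemStorage(&scratchStorage); // the clones kept in cloneStorage remain valid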
