I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it is not possible to rely on its color or anything similar; feature detectors like SIFT also don't work because the object could be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh=80;
double ans=0, result=0;
// Preprocess pictures
cvtColor(scene, imagegray1,CV_BGR2GRAY);
cvtColor(Template,imagegray2,CV_BGR2GRAY);
GaussianBlur(imagegray1,imagegray1, Size(5,5),2);
GaussianBlur(imagegray2,imagegray2, Size(5,5),2);
Canny(imagegray1, imageresult1,thresh, thresh*2);
Canny(imagegray2, imageresult2,thresh, thresh*2);
vector<vector <Point> > contours1;
vector<vector <Point> > contours2;
vector<Vec4i>hierarchy1, hierarchy2;
// Template
findContours(imageresult2,contours2,hierarchy2,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
// Szene
findContours(imageresult1,contours1,hierarchy1,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
imshow("template", Template);
double helper = INT_MAX;
int idx_i = 0, idx_j = 0;
// Match all contours with each other
for(int i = 0; i < contours1.size(); i++)
{
    for(int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // keep the contour with the smallest matching distance
        if(ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}
// draw the best matching contour
drawContours(scene, contours1, idx_i, Scalar(255,255,0), 3, 8, hierarchy1, 0, Point());
When I use a scene that contains only the template, I get a good matching result:
But when there are more objects in the picture, I have trouble detecting the object:
I hope someone can tell me what the problem with the code I'm using is. Thanks.
You have a huge number of contours in the second image (almost one per letter).
As matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even a very small contour may fit the shape you are looking for.
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if(contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour this way, or improving the use of Canny in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for.
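As a rough sketch of those last two suggestions combined (the area threshold of 50 is taken from the snippet above, and the variable names follow the question's code), the matching loop could simply skip implausibly small contours before calling matchShapes:
// only consider scene contours of a plausible size when matching
double best = DBL_MAX;
int bestIdx = -1;
for (int i = 0; i < contours1.size(); i++)
{
    if (contourArea(contours1[i]) < 50)      // assumed a-priori size limit
        continue;
    for (int j = 0; j < contours2.size(); j++)
    {
        double d = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        if (d < best)
        {
            best = d;
            bestIdx = i;
        }
    }
}
if (bestIdx >= 0)
    drawContours(scene, contours1, bestIdx, Scalar(255, 255, 0), 3);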
picture histogram
I'm quite new to the Processing language. I am trying to create an image comparison tool.
The idea is to get a histogram of a picture (see screenshot below, size is 600x400), which is then compared to 10 other histograms of similar pictures (all size 600x400). The histogram shows the frequency distribution of the gray levels with the number of pure black values displayed on the left and number of pure white values on the right.
In the end I should get a "winning" picture (the one that has the most similar histogram).
Below you can see the code for the image histogram, similar to the processing tutorial example.
My idea was to create a PImage[] for the 10 other pictures, build their histograms, and then compare them with an if statement, but I'm not sure how to code it.
Does anyone have a tip on how to proceed or where to look? I couldn't find a similar post.
Thanks in advance and sorry if the question is very basic!
size(600, 400);
// Load an image from the data directory
// Load a different image by modifying the comments
PImage img = loadImage("image4.jpg");
image(img, 0, 0);
int[] hist = new int[256];
// Calculate the histogram
for (int i = 0; i < img.width; i++) {
  for (int j = 0; j < img.height; j++) {
    int bright = int(brightness(get(i, j)));
    hist[bright]++;
  }
}
// Find the largest value in the histogram
int histMax = max(hist);
stroke(255);
// Draw half of the histogram (skip every second value)
for (int i = 0; i < img.width; i += 2) {
  // Map i (from 0..img.width) to a location in the histogram (0..255)
  int which = int(map(i, 0, img.width, 0, 255));
  // Convert the histogram value to a location between
  // the bottom and the top of the picture
  int y = int(map(hist[which], 0, histMax, img.height, 0));
  line(i, img.height, i, y);
}
Not sure if your problem is the implementation in Processing or that you don't know how to compare histograms. I assume it is the comparison, as the rest is pretty straightforward: calculate the similarity for every candidate and pick the winner.
Search the web for histogram comparison and among others you will find:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
OpenCV implements four measures for histogram similarity.
Correlation:
d(H_1,H_2) = \frac{\sum_I (H_1(I)-\bar{H_1})(H_2(I)-\bar{H_2})}{\sqrt{\sum_I (H_1(I)-\bar{H_1})^2 \sum_I (H_2(I)-\bar{H_2})^2}}
where \bar{H_k} = \frac{1}{N}\sum_J H_k(J) and N is the number of histogram bins,
or
Chi-Square:
d(H_1,H_2) = \sum_I \frac{(H_1(I)-H_2(I))^2}{H_1(I)}
or
Intersection:
d(H_1,H_2) = \sum_I \min(H_1(I), H_2(I))
or
Bhattacharyya distance:
d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H_1}\,\bar{H_2}\,N^2}} \sum_I \sqrt{H_1(I)\cdot H_2(I)}}
You can use these measures, but I'm sure you'll find something else as well.
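As an illustration of how such a comparison could look in code (a sketch in C++ rather than Processing, but the arithmetic carries over directly; the 256-bin int arrays mirror the hist array from the question):
#include <algorithm>

// Chi-Square distance between two 256-bin histograms (smaller = more similar)
double chiSquare(const int h1[256], const int h2[256]) {
    double d = 0.0;
    for (int i = 0; i < 256; i++) {
        if (h1[i] > 0) {
            double diff = double(h1[i]) - double(h2[i]);
            d += diff * diff / h1[i];
        }
    }
    return d;
}

// Intersection of two 256-bin histograms (larger = more similar)
double intersection(const int h1[256], const int h2[256]) {
    double d = 0.0;
    for (int i = 0; i < 256; i++)
        d += std::min(h1[i], h2[i]);
    return d;
}
The "winning" picture is then the candidate with the smallest Chi-Square distance (or the largest intersection) to the reference histogram.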
I would like to find a bounding box for each object in the picture and, after finding them, crop out the bounding boxes and use them for the next steps. Here is the input picture after preprocessing.
I have code for a bounding box, but it only works well for one object. If there are two objects, it merges both and draws a bounding box around both of them. Here is the first output. The code for it is:
vector<vector<Point>> contours;
vector<Point> points;
findContours(erod, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
// collect the points of all contours into a single vector
for (size_t i = 0; i < contours.size(); i++) {
    for (size_t j = 0; j < contours[i].size(); j++) {
        Point p = contours[i][j];
        points.push_back(p);
    }
}
// one bounding rectangle around all collected points
if (points.size() > 0) {
    Rect brect = boundingRect(Mat(points).reshape(2));
    cv::rectangle(erod, brect.tl(), brect.br(), Scalar(100, 100, 200), 2, CV_AA);
    Mat ROI = frame(brect);
}
The second thing I tried was using the code from the OpenCV documentation. Here I changed CV_RETR_TREE in findContours to CV_RETR_EXTERNAL, but I still get too many bounding boxes and I don't know how to crop out the boxes.
Thanks a lot!
Before finding contours you should do some morphological opening to clear all the noise and lines:
Mat morphKernelOpen = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new org.opencv.core.Size(20, 20));
Imgproc.morphologyEx(mat, mat, Imgproc.MORPH_OPEN, morphKernelOpen);
Result:
Also, there are some black spaces inside your objects, so to avoid finding contours in them, your findContours call should use the CV_RETR_EXTERNAL mode:
Imgproc.findContours(scharrThresh, scharrThreshContours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
In the end you'll have two contours, and you can continue finding boxes as you did previously.
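As a minimal sketch of that last step (in C++ to match the code in the question; erod and frame are the Mats used there), you would take one boundingRect per contour instead of collecting all points into a single vector:
vector<vector<Point>> contours;
findContours(erod, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
vector<Mat> crops;
for (size_t i = 0; i < contours.size(); i++) {
    Rect brect = boundingRect(contours[i]);        // one box per object
    rectangle(erod, brect, Scalar(100, 100, 200), 2);
    crops.push_back(frame(brect).clone());         // cropped copy for the next steps
}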
If you do not like soft edges around your objects, you can apply a threshold function before the morphological opening. Getting 100% accurate contours around your objects would be very hard or nearly impossible due to the amount of noise in the image. Also, if you can, next time please include the result image you get after the actions you perform; it will be easier to give you a proper answer.
There are images with perspectively distorted barcodes in them.
They are located and decoded using ZBar.
Now I need not only the rough location, but also the four real corner points of the barcode that define the enclosing 4-point polygon.
I tried different approaches, but did not yet get the desired result.
One of them was:
convert image to grayscale
threshold image
erode image
floodFill beginning with a pixel known to be part of barcode
obtain the contour around the floodFill result
But around this contour I would now need to find the minimal best-fitting 4-point polygon, which does not seem to be that easy.
Do you have ideas for better approaches?
You could use the following code and try to reduce your contour to a 4-point polygon via approxPolyDP:
// approx holds the simplified polygon; angle() and squares are the helper
// function and output vector from the linked tutorial
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
    approxPolyDP(Mat(contours[i]), approx,
                 arcLength(Mat(contours[i]), true) * 0.02, true);
    if (approx.size() == 4 &&
        fabs(contourArea(Mat(approx))) > 1000 &&
        isContourConvex(Mat(approx)))
    {
        double maxCosine = 0;
        for (int j = 2; j < 5; j++)
        {
            double cosine = fabs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
            maxCosine = MAX(maxCosine, cosine);
        }
        if (maxCosine < 0.3)
            squares.push_back(approx);
    }
}
http://opencv-code.com/tutorials/detecting-simple-shapes-in-an-image/
You can also try the following methods, maybe they will produce good enough results for you:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#minarearect
or
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=convexhull#convexhull
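For instance, a minimal minAreaRect sketch (contour and image are placeholder names for the flood-fill contour and the source image from your list of steps) would be:
// Rotated bounding rectangle of the contour, read back as four corner points
RotatedRect box = minAreaRect(Mat(contour));
Point2f corners[4];
box.points(corners);
for (int i = 0; i < 4; i++)
    line(image, corners[i], corners[(i + 1) % 4], Scalar(0, 255, 0), 2);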
OK, I found a solution that works well enough for my use case.
First, a scanline is generated from the ZBar result.
Then the first and the last black pixels are found in a version of the image produced by cv::adaptiveThreshold with a sufficiently large blockSize.
From there, the first and the last bar are segmented using cv::findContours.
Next, for both end bars, the two contour points with the greatest distance to each other are searched for.
These finally define the enclosing 4-point polygon.
This is not exactly what I asked for in my question, but the additional size due to the elongated guard patterns does not matter in my case.
I want to analyze the blobs obtained using contours. However, I came across a slight problem: is there any difference between analyzing the blobs before and after using the following code?
for (unsigned int i = 0; i < rects3.size(); i++) {
    Scalar color = Scalar(255, 255, 255);
    drawContours(drawing3, contours3, i, color, CV_FILLED, 8);
}
Before using the above there are only some boundary lines, and after using the code the white blobs are visible. Attached are examples of both.
You want to iterate through the possible blobs and then analyze each one (area, perimeter, etc.).
Your contours are in the vector called rects3.
// iterating through
for (unsigned int i = 0; i < rects3.size(); i++) {
    // get the bounding box of one contour
    Rect rect = boundingRect(rects3[i]);
    // area
    double area = contourArea(rects3[i]);
    // perimeter
    double perimeter = arcLength(rects3[i], true);
}
see http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
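If you also want the centre of each blob, a small additional sketch (same rects3 and drawing3 as above) could use image moments:
// centroid of each blob from its spatial moments
for (unsigned int i = 0; i < rects3.size(); i++) {
    Moments m = moments(rects3[i]);
    if (m.m00 > 0) {
        Point centroid(int(m.m10 / m.m00), int(m.m01 / m.m00));
        circle(drawing3, centroid, 3, Scalar(128), -1);   // mark the centre
    }
}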
Is there any built-in function for sliding a window (custom size) over an image in OpenCV version 2.x?
I tried to write the algorithm by myself but I found it very painful and probably error-prone.
I need to slide over an image and create histogram for the input of svm.
There is one for the HOG descriptor, which calculates HOG features, but I have my own feature set, so I just need an algorithm that lets me slide over an image.
You can define a Region of Interest (ROI) on a cv::Mat object, which gives you a new Mat object referring to the sub-window. This does not copy the underlying data; it merely creates a new header with the appropriate metadata.
cv::Mat::operator()
See also this other question:
OpenCV C++, getting Region Of Interest (ROI) using cv::Mat
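A minimal sketch of that idea (file name, position and window size are placeholders):
// 60x60 sub-window view at (x=100, y=50); no pixel data is copied
cv::Mat image = cv::imread("scene.jpg");
cv::Rect window(100, 50, 60, 60);        // x, y, width, height
cv::Mat roi = image(window);             // new header sharing data with image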
Basic code can look like the following; I hope it is described well enough.
This is a single-scale sliding window of 60x60 with a step of 30.
The result of this simple example is an ROI for each window position.
You can visit this basic tutorial Tutorial Here.
// Parameters of your sliding window
int windows_n_rows = 60;
int windows_n_cols = 60;
// Step of each window
int StepSlide = 30;
for (int row = 0; row <= LoadedImage.rows - windows_n_rows; row += StepSlide)
{
    for (int col = 0; col <= LoadedImage.cols - windows_n_cols; col += StepSlide)
    {
        // Rect takes (x, y, width, height), i.e. columns before rows
        Rect windows(col, row, windows_n_cols, windows_n_rows);
        Mat Roi = LoadedImage(windows);
    }
}
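Since the goal is a histogram per window as SVM input, a hedged follow-up sketch (assuming LoadedImage is a single-channel 8-bit image) could compute a 256-bin histogram of each Roi inside that inner loop:
// 256-bin grayscale histogram of one window as a 1 x 256 feature row
int histSize = 256;
float range[] = {0, 256};
const float* histRange = {range};
Mat hist;
calcHist(&Roi, 1, 0, Mat(), hist, 1, &histSize, &histRange);
Mat featureRow = hist.reshape(1, 1);     // append one such row per window to the SVM training matrix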