After some simple preprocessing I obtain a boolean mask of segmented images.
I want to "enhance" the borders of the mask and make them smoother. For that I am using an OPEN morphology filter with a rather big circular kernel; it works very well as long as the distance between segmented objects is large enough. But in a lot of samples the objects stick together. Is there some more or less simple method to smooth such images without changing their morphology?
Without applying a morphological filter first, you can try to detect the external contours of the image. Now you can draw these external contours as filled contours and then apply your morphological filter. This works because now you don't have any holes to fill. This is fairly simple.
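A minimal sketch of this first approach (the file name and kernel size are placeholders; adjust them to your data):

Mat mask = imread("mask.png", 0);
vector<vector<Point> > contours;
findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// draw the external contours filled, so interior holes disappear
Mat filled = Mat::zeros(mask.size(), CV_8UC1);
drawContours(filled, contours, -1, Scalar(255), CV_FILLED);
// the morphological OPEN with a large elliptical kernel now smooths the outline
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(21, 21));
morphologyEx(filled, filled, MORPH_OPEN, kernel);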
Another approach:
find external contours
take the x and y coordinates of the contour points; you can treat these as 1-D signals and apply a smoothing filter to each of them
In the code below, I've applied the second approach to a sample image.
Input image
External contours without any smoothing
After applying a Gaussian filter to x and y 1-D signals
C++ code
Mat im = imread("4.png", 0);
Mat cont = im.clone();
Mat original = Mat::zeros(im.rows, im.cols, CV_8UC3);
Mat smoothed = Mat::zeros(im.rows, im.cols, CV_8UC3);

// contour smoothing parameters for the Gaussian filter
int filterRadius = 5;
int filterSize = 2 * filterRadius + 1;
double sigma = 10;

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
// find external contours and store all contour points
findContours(cont, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
for (size_t j = 0; j < contours.size(); j++)
{
    // draw the initial contour shape
    drawContours(original, contours, j, Scalar(0, 255, 0), 1);

    // extract x and y coordinates of points. we'll consider these as 1-D signals
    // add circular padding to the 1-D signals
    size_t len = contours[j].size() + 2 * filterRadius;
    size_t idx = (contours[j].size() - filterRadius);
    vector<float> x, y;
    for (size_t i = 0; i < len; i++)
    {
        x.push_back(contours[j][(idx + i) % contours[j].size()].x);
        y.push_back(contours[j][(idx + i) % contours[j].size()].y);
    }

    // filter 1-D signals
    vector<float> xFilt, yFilt;
    GaussianBlur(x, xFilt, Size(filterSize, filterSize), sigma, sigma);
    GaussianBlur(y, yFilt, Size(filterSize, filterSize), sigma, sigma);

    // build the smoothed contour
    vector<vector<Point> > smoothContours;
    vector<Point> smooth;
    for (size_t i = filterRadius; i < contours[j].size() + filterRadius; i++)
    {
        smooth.push_back(Point(xFilt[i], yFilt[i]));
    }
    smoothContours.push_back(smooth);

    drawContours(smoothed, smoothContours, 0, Scalar(255, 0, 0), 1);

    cout << "debug contour " << j << " : " << contours[j].size() << ", " << smooth.size() << endl;
}
Not 100% sure what you are trying to achieve, but this may be an avenue to explore... the tool potrace takes images and converts them to vectorised images, which involves smoothing. It prefers PGM format input files, so I use ImageMagick to prepare them. Anyway, here is an example of the command and the result so you can see what you think:
convert disks.png pgm:- | potrace - -s -o out.svg
I have converted the resulting SVG file to a PNG so I can upload it to SO.
Related
I have a set of discrete points shown in an image, like the following
I want to reconstruct or upsample (I'm not sure what the correct term is) the image, so that the result would look like the following. It doesn't need to be exactly the same as the example image, but the main idea is to fill up the original one.
I have an initial idea about how to do it, but I don't know how to proceed after the first step. My idea is to first separate the image using k-means and find the different objects, and I have successfully done that. The resulting images after k-means are:
After k-means, I want to use findContours or something like a concave hull to get the outline of these shapes and fill them using functions like fill holes. However, I found that findContours does not work: it considers each single pixel as a contour.
The other way I'm thinking of is to use interpolation, but I'm not sure whether it is possible with such sparse points. Does anyone have any ideas about how to do this? I'm not sure whether I'm on the right track and I'm open to any solutions.
Thanks a lot!
Take a look at the morphological transformations. I would start with a dilation operation using a large kernel, say MORPH_ELLIPSE with a size of (15,15). Afterwards, thin the blobs back down using the erosion operation with the same size kernel. Take a look at the docs here. Note that OpenCV offers chained, or sequenced, morphological operations, too. See here. You'll then see that my suggestion is a "closing" operation.
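For example, a minimal sketch of that closing operation, using the same gray/dest names as the snippet further down and assuming gray already holds the binary input:

Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
// dilate then erode with the same kernel in a single call, i.e. a morphological "close"
morphologyEx(gray, dest, MORPH_CLOSE, kernel);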
Update:
I experimented with simple dilation and contouring to yield the results shown in the image. The results appear to satisfy the general requirements of the problem.
What "realtime" means for the application isn't specified either, but this set of operations can be executed quickly and could easily be applied to a 30 fps application.
Code snippet below:
// Convert image to grayscale
cvtColor(src, gray, CV_BGR2GRAY);
threshold(gray, gray, 128.0, 128.0, THRESH_BINARY);

// Dilate to fill holes
dilate(gray, dest, getStructuringElement(MORPH_ELLIPSE, Size(13,13)));

// Find contours
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(dest, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0,0));

// Prune contours
float maxArea = 0.0f;
for (size_t i = 0; i < contours.size(); i++)
{
    if (contourArea(contours[i]) >= maxArea)
    {
        maxArea = contourArea(contours[i]);
    }
}
float minArea = 0.20f * maxArea;
vector<vector<Point> > prunedContours;
for (size_t i = 0; i < contours.size(); i++)
{
    if (contourArea(contours[i]) >= minArea)
    {
        prunedContours.push_back(contours[i]);
    }
}

// Smooth the contours
vector<vector<Point> > smoothedContours;
smoothedContours.resize(prunedContours.size());
for (size_t i = 0; i < prunedContours.size(); i++)
{
    vector<float> x;
    vector<float> y;
    const size_t n = prunedContours[i].size();
    for (size_t j = 0; j < n; j++)
    {
        x.push_back(prunedContours[i][j].x);
        y.push_back(prunedContours[i][j].y);
    }
    // 11-tap Gaussian kernel (sigma = 4.0). Note: it must stay a column vector,
    // because the vectors above are seen by filter2D as N x 1 signals; a transposed
    // (row) kernel would filter across the width-1 dimension and do nothing.
    Mat G = getGaussianKernel(11, 4.0, CV_32FC1);
    vector<float> xSmooth;
    vector<float> ySmooth;
    filter2D(x, xSmooth, CV_32FC1, G);
    filter2D(y, ySmooth, CV_32FC1, G);
    for (size_t j = 0; j < n; j++)
    {
        smoothedContours[i].push_back(Point2f(xSmooth[j], ySmooth[j]));
    }
}
I am struggling to find an appropriate contour algorithm for a low-quality image. The example image shows a rock scene:
What I am trying to achieve is to find contours around features such as:
light areas
dark areas
grey1 areas
grey2 areas
etc. until grey-n areas
(The number of areas shall be a parameter of choice)
I do not want to take a simple binary threshold, but rather use some sort of contour finding (for example watershed or other). The major feature lines shall be kept; noise within a feature area can be flattened.
The result of my code can be seen in the images to the right.
Unfortunately, as you can easily tell, the colors do not really represent the original large-scale image features! For example, check out the two areas that I circled in red - these features are almost completely flooded with another color. What I would expect is that at least the very light and the very dark areas are covered by their own color.
cv::Mat cv_src = cv::imread(argv[1]);
cv::Mat output;
cv::Mat cv_src_gray;
cv::cvtColor(cv_src, cv_src_gray, cv::COLOR_RGB2GRAY);

double clipLimit = 0.1;
cv::Size tileGridSize = cv::Size(8, 8);
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(clipLimit, tileGridSize);
clahe->apply(cv_src_gray, output);
cv::equalizeHist(output, output);
cv::cvtColor(output, cv_src, cv::COLOR_GRAY2RGB);

// Create binary image from source image
cv::Mat bw;
cv::cvtColor(cv_src, bw, cv::COLOR_BGR2GRAY);
cv::threshold(bw, bw, 180, 255, cv::THRESH_BINARY);

// Perform the distance transform algorithm (5x5 mask)
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, cv::DIST_MASK_5);

// Normalize the distance image for range = {0.0, 1.0}
cv::normalize(dist, dist, 0, 1., cv::NORM_MINMAX);

// Threshold to obtain the peaks
cv::threshold(dist, dist, .2, 1., cv::THRESH_BINARY);

// Create the CV_8U version of the distance image
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);

// Find total markers
std::vector<std::vector<cv::Point> > contours;
cv::findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int ncomp = contours.size();

// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);

// Draw the foreground markers
for (int i = 0; i < ncomp; i++)
    cv::drawContours(markers, contours, i, cv::Scalar::all(i + 1), -1);

// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, CV_RGB(255, 255, 255), -1);

// Perform the watershed algorithm
cv::watershed(cv_src, markers);

// Generate random colors
std::vector<cv::Vec3b> colors;
for (int i = 0; i < ncomp; i++)
{
    int b = cv::theRNG().uniform(0, 255);
    int g = cv::theRNG().uniform(0, 255);
    int r = cv::theRNG().uniform(0, 255);
    colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}

// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);

// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
    for (int j = 0; j < markers.cols; j++)
    {
        int index = markers.at<int>(i, j);
        if (index > 0 && index <= ncomp)
            dst.at<cv::Vec3b>(i, j) = colors[index - 1];
        else
            dst.at<cv::Vec3b>(i, j) = cv::Vec3b(0, 0, 0);
    }
}

// Show me what you got
imshow("final_result", dst);
I think you can use a simple clustering method such as k-means for this, then examine the cluster centers (or the mean and standard deviation of each cluster). I quickly tried it in MATLAB.
im = imread('tvBqt.jpg');
gr = rgb2gray(im);
x = double(gr(:));
idx = kmeans(x, 4);
cl = reshape(idx, 600, 472);
figure,
subplot(1, 2, 1), imshow(gr, []), title('original')
subplot(1, 2, 2), imshow(label2rgb(cl), []), title('clustered')
The result:
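For completeness, here is a rough OpenCV C++ equivalent of the MATLAB snippet above (a sketch, assuming a grayscale input and 4 clusters as before):

cv::Mat gr = cv::imread("tvBqt.jpg", cv::IMREAD_GRAYSCALE);
// one row per pixel, single float column, since kmeans() expects CV_32F samples
cv::Mat samples;
gr.convertTo(samples, CV_32F);
samples = samples.reshape(1, (int)gr.total());
cv::Mat labels, centers;
cv::kmeans(samples, 4, labels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
           3, cv::KMEANS_PP_CENTERS, centers);
// reshape the per-pixel labels back to the image layout for display
cv::Mat cl = labels.reshape(1, gr.rows);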
You could try using SLIC Superpixels. I tried it and it showed some good results. You could vary the parameters to get better clustering.
SLIC Superpixels
SLIC Superpixels with OpenCV C++
SLIC Superpixels with OpenCV Python
I would like to detect this pattern
As you can see, it's basically the letter C inside another one, with different orientations. My pattern can have multiple C's inside one another; the one I'm posting with 2 C's is just a sample. I would like to detect how many C's there are and the orientation of each one. So far I've managed to detect the center of the pattern, basically the center of the innermost C. Could you please provide me with ideas about different algorithms I could use?
And here we go! A high level overview of this approach can be described as the sequential execution of the following steps:
Load the input image;
Convert it to grayscale;
Threshold it to generate a binary image;
Use the binary image to find contours;
Fill each area of contours with a different color (so we can extract each letter separately);
Create a mask for each letter found to isolate them in separate images;
Crop the images to the smallest possible size;
Figure out the center of the image;
Figure out the width of the letter's border to identify the exact center of the border;
Scan along the border (in a circular fashion) for discontinuity;
Figure out an approximate angle for the discontinuity, thus identifying the amount of rotation of the letter.
I don't want to get into too much detail since I'm sharing the source code, so feel free to test and change it in any way you like.
Let's start, Winter Is Coming:
#include <iostream>
#include <vector>
#include <cmath>

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::RNG rng(12345);
float PI = std::atan(1) * 4;

void isolate_object(const cv::Mat& input, cv::Mat& output)
{
    if (input.channels() != 1)
    {
        std::cout << "isolate_object: !!! input must be grayscale" << std::endl;
        return;
    }

    // Store the set of points in the image before assembling the bounding box
    std::vector<cv::Point> points;
    cv::Mat_<uchar>::const_iterator it = input.begin<uchar>();
    cv::Mat_<uchar>::const_iterator end = input.end<uchar>();
    for (; it != end; ++it)
    {
        if (*it) points.push_back(it.pos());
    }

    // Compute minimal bounding box
    cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));

    // Set Region of Interest to the area defined by the box
    cv::Rect roi;
    roi.x = box.center.x - (box.size.width / 2);
    roi.y = box.center.y - (box.size.height / 2);
    roi.width = box.size.width;
    roi.height = box.size.height;

    // Crop the original image to the defined ROI
    output = input(roi);
}
For more details on the implementation of isolate_object() please check this thread. cv::RNG is used later on to fill each contour with a different color, and PI, well... you know PI.
int main(int argc, char* argv[])
{
    // Load input (colored, 3-channel, BGR)
    cv::Mat input = cv::imread("test.jpg");
    if (input.empty())
    {
        std::cout << "!!! Failed imread() #1" << std::endl;
        return -1;
    }

    // Convert colored image to grayscale
    cv::Mat gray;
    cv::cvtColor(input, gray, CV_BGR2GRAY);

    // Execute a threshold operation to get a binary image from the grayscale
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);
The binary image looks exactly like the input because it only had 2 colors (B&W):
    // Find the contours of the C's in the thresholded image
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binary, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Fill the contours found with unique colors to isolate them later
    cv::Mat colored_contours = input.clone();
    std::vector<cv::Scalar> fill_colors;
    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> cnt = contours[i];
        double area = cv::contourArea(cv::Mat(cnt));
        //std::cout << "* Area: " << area << std::endl;

        // Fill each C found with a different color.
        // If the area is larger than 100k it's probably the white background, so we ignore it.
        if (area > 10000 && area < 100000)
        {
            cv::Scalar color = cv::Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
            cv::drawContours(colored_contours, contours, i, color,
                             CV_FILLED, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
            fill_colors.push_back(color);
            //cv::imwrite("test_contours.jpg", colored_contours);
        }
    }
What colored_contours looks like:
    // Create a mask for each C found to isolate them from each other
    for (int i = 0; i < fill_colors.size(); i++)
    {
        // After inRange() single_color_mask stores a single C letter
        cv::Mat single_color_mask = cv::Mat::zeros(input.size(), CV_8UC1);
        cv::inRange(colored_contours, fill_colors[i], fill_colors[i], single_color_mask);
        //cv::imwrite("test_mask.jpg", single_color_mask);
Since this for loop is executed twice, once for each color that was used to fill the contours, I want you to see all the images that were generated by this stage. The following images are the ones stored by single_color_mask (one for each iteration of the loop):
        // Crop image to the area of the object
        cv::Mat cropped;
        isolate_object(single_color_mask, cropped);
        //cv::imwrite("test_cropped.jpg", cropped);

        cv::Mat orig_cropped = cropped.clone();
These are the ones stored by cropped (by the way, the smaller C looks fat because the image is rescaled by this page to have the same size as the larger C; don't worry):
        // Figure out the center of the image
        cv::Point obj_center(cropped.cols/2, cropped.rows/2);
        //cv::circle(cropped, obj_center, 3, cv::Scalar(128, 128, 128));
        //cv::imwrite("test_cropped_center.jpg", cropped);
To make it clearer what obj_center is for, I painted a little gray circle at that location for educational purposes:
        // Collect the points across the letter's border
        std::vector<cv::Point> border_points;
        for (int y = 0; y < cropped.cols; y++)
        {
            if (cropped.at<uchar>(obj_center.x, y) != 0)
                border_points.push_back(cv::Point(obj_center.x, y));

            if (border_points.size() > 0 && cropped.at<uchar>(obj_center.x, y) == 0)
                break;
        }

        if (border_points.size() == 0)
        {
            std::cout << "!!! Oops! No border detected." << std::endl;
            return 0;
        }

        // Figure out the exact center location of the border
        cv::Point border_center = border_points[border_points.size() / 2];
        //cv::circle(cropped, border_center, 3, cv::Scalar(128, 128, 128));
        //cv::imwrite("test_border_center.jpg", cropped);
The procedure above scans a single line from the top/middle of the image to find the borders of the circle so it can calculate its width. Again, for educational purposes I painted a small gray circle in the middle of the border. This is what cropped looks like:
        // Scan the border of the circle for discontinuities
        int radius = obj_center.y - border_center.y;
        if (radius < 0)
            radius *= -1;

        std::vector<cv::Point> discontinuity_points;
        std::vector<int> discontinuity_angles;
        for (int angle = 0; angle <= 360; angle++)
        {
            int x = obj_center.x + (radius * cos((angle+90) * (PI / 180.f)));
            int y = obj_center.y + (radius * sin((angle+90) * (PI / 180.f)));

            if (cropped.at<uchar>(x, y) < 128)
            {
                discontinuity_points.push_back(cv::Point(y, x));
                discontinuity_angles.push_back(angle);
                //cv::circle(cropped, cv::Point(y, x), 1, cv::Scalar(128, 128, 128));
            }
        }

        //std::cout << "Discontinuity size: " << discontinuity_points.size() << std::endl;
        if (discontinuity_points.size() == 0 && discontinuity_angles.size() == 0)
        {
            std::cout << "!!! Oops! No discontinuity detected. It's a perfect circle, dang!" << std::endl;
            return 0;
        }
Great, so the piece of code above scans along the middle of the circle's border looking for discontinuity. I'm sharing a sample image to illustrate what I mean. Every gray dot on the image represents a pixel that is tested. When the pixel is black it means we found a discontinuity:
        // Figure out the approximate angle of the discontinuity:
        // the first angle found will suffice for this demo.
        int approx_angle = discontinuity_angles[0];
        std::cout << "#" << i << " letter C is rotated approximately at: " << approx_angle << " degrees" << std::endl;

        // Figure out the central point of the discontinuity
        cv::Point discontinuity_center;
        for (int a = 0; a < discontinuity_points.size(); a++)
            discontinuity_center += discontinuity_points[a];
        discontinuity_center.x /= discontinuity_points.size();
        discontinuity_center.y /= discontinuity_points.size();

        cv::circle(orig_cropped, discontinuity_center, 2, cv::Scalar(128, 128, 128));

        cv::imshow("Original crop", orig_cropped);
        cv::waitKey(0);
    }

    return 0;
}
Very well... this last piece of code is responsible for figuring out the approximate angle of the discontinuity as well as indicating its central point. The following images are stored by orig_cropped. Once again I added a gray dot to show the exact positions detected as the centers of the gaps:
When executed, this application prints the following information to the screen:
#0 letter C is rotated approximately at: 49 degrees
#1 letter C is rotated approximately at: 0 degrees
I hope it helps.
For a start you could use the Hough transform. This algorithm is not very fast, but it's quite robust, especially if you have such clear images.
The general approach would be:
1) preprocessing - suppress noise, convert to grayscale / binary
2) run edge detector
3) run Hough transform - IIRC it's `cv::HoughCircles` in OpenCV
4) do some postprocessing - remove surplus circles, decide which ones correspond to shape of letter C, and so on
My approach will give you two Hough circles per letter C: one on the inner boundary and one on the outer. If you want only one circle per letter you can use a skeletonization algorithm. More info here: http://homepages.inf.ed.ac.uk/rbf/HIPR2/skeleton.htm
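For step 3, a minimal cv::HoughCircles sketch (the thresholds and radius limits are placeholders you would need to tune; gray is the preprocessed grayscale image and display an assumed color copy for drawing):

std::vector<cv::Vec3f> circles;
// dp = 1, minimum distance between centres, Canny high threshold, accumulator threshold
cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, gray.rows / 8, 200, 50, 0, 0);
for (size_t i = 0; i < circles.size(); i++)
{
    cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    cv::circle(display, center, radius, cv::Scalar(0, 0, 255), 2);
}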
Given that we have nested C structures and you know the centres of the Cs and would like to evaluate the orientations, one simply needs to observe the distribution of pixels along the radius of the concentric Cs in all directions.
This can be done by performing a simple morphological dilation operation from the centre. As we reach the right radius for the innermost C, we reach the maximum number of pixels covered for the innermost C. The difference between the disc and the C gives us the location of the gap in the C, and one can perform an ultimate erosion to get the centroid of the gap. The angle between the centre and this point is the orientation of the C. This step is iterated until all Cs are covered.
This can also be done quickly using the Distance function from the centre point of the Cs.
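To illustrate the radial-distribution idea, here is a rough sketch (mask is assumed to be the binary image of the Cs, and center their common centre; both names are placeholders):

// count foreground pixels lying on a circle of the given radius around center
int pixelsOnCircle(const cv::Mat& mask, cv::Point center, int radius)
{
    int count = 0;
    for (int angle = 0; angle < 360; angle++)
    {
        int x = center.x + cvRound(radius * cos(angle * CV_PI / 180.0));
        int y = center.y + cvRound(radius * sin(angle * CV_PI / 180.0));
        if (x >= 0 && y >= 0 && x < mask.cols && y < mask.rows && mask.at<uchar>(y, x) > 0)
            count++;
    }
    return count;
}

Radii where this count peaks correspond to the rings of the Cs; at such a radius, the angles where the circle misses the foreground form the gap, and the mean of those angles gives the orientation.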
What is the most efficient way to find the bounding box of the largest blob in a binary image using OpenCV? Unfortunately, OpenCV does not have specific functionality for blob detection. Should I just use findContours() and search for the largest in the list?
Here. It. Is.
(FYI: try not to be lazy and figure out what happens in my function below.)
cv::Mat findBiggestBlob(cv::Mat & matImage){
    int largest_area = 0;
    int largest_contour_index = 0;

    vector< vector<Point> > contours; // Vector for storing contours
    vector<Vec4i> hierarchy;
    findContours( matImage, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image

    for( int i = 0; i < contours.size(); i++ ) { // iterate through each contour
        double a = contourArea( contours[i], false ); // Find the area of the contour
        if( a > largest_area ){
            largest_area = a;
            largest_contour_index = i; // Store the index of the largest contour
            //bounding_rect = boundingRect(contours[i]); // Find the bounding rectangle for the biggest contour
        }
    }

    drawContours( matImage, contours, largest_contour_index, Scalar(255), CV_FILLED, 8, hierarchy ); // Draw the largest contour using the previously stored index.
    return matImage;
}
If you want to use the OpenCV libs, check out OpenCV's SimpleBlobDetector. Here's another Stack Overflow post showing a small tutorial of it: How to use OpenCV SimpleBlobDetector
This only gives you key points though. You could use this as an initial search to find the blob you want, and then possibly use the findContours algorithm around the most likely blobs.
Also, the more information you know about your blob, the more parameters you can provide to filter out the blobs you don't want. You might want to test the area parameters of the SimpleBlobDetector. Possibly you could compute the area based on the size of the image and then iteratively allow for a smaller blob if the algorithm does not detect any blobs.
Here is the link to the main OpenCV documentation: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html#simpleblobdetector
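A minimal parameter sketch to get started (the values are placeholders to tune, and this assumes OpenCV 3+, where the detector is built via create()):

cv::SimpleBlobDetector::Params params;
params.filterByArea = true;
params.minArea = 1000;   // placeholder: tune to the expected blob size
params.filterByColor = true;
params.blobColor = 255;  // look for white blobs in a binary image
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
std::vector<cv::KeyPoint> keypoints;
detector->detect(binaryImage, keypoints); // each keypoint holds a blob centre and size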
To find the bounding box of the largest blob, I used findContours, followed by the following code:
double maxArea = 0;
MatOfPoint largestContour = null; // holds the contour with the largest area
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    if (area > maxArea) {
        maxArea = area;
        largestContour = contour;
    }
}
Rect boundingRect = Imgproc.boundingRect(largestContour);
Since no one has posted a complete OpenCV solution, here's a simple approach using thresholding + contour area filtering
Input image
Largest blob/contour highlighted in green
import cv2

# Load image, grayscale, and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Find contours and sort using contour area
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    # Highlight largest contour
    cv2.drawContours(image, [c], -1, (36,255,12), 3)
    break

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
TimZaman, your code has a bug, but I can't comment, so I'll start a new, corrected answer.
Here is my solution, based on 1"'s and TimZaman's ideas:
Mat measure::findBiggestBlob(cv::Mat &src){
    int largest_area = 0;
    int largest_contour_index = 0;

    Mat temp(src.rows, src.cols, CV_8UC1);
    Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0));
    src.copyTo(temp);

    vector<vector<Point> > contours; // storing contours
    vector<Vec4i> hierarchy;
    findContours( temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    for( int i = 0; i < contours.size(); i++ ) // iterate through each contour
    {
        double a = contourArea( contours[i], false ); // find the area of each contour
        if( a > largest_area )
        {
            largest_area = a;
            largest_contour_index = i;
        }
    }

    // Draw the largest contour
    drawContours( dst, contours, largest_contour_index, Scalar(255), CV_FILLED, 8, hierarchy );
    return dst;
}
This is regarding a project that concerns detection of text in an image using OpenCV in C. The process is to detect the colors inside and outside the corresponding contours, and the way to do that is to draw normals on the contours at equally spaced positions and extract the pixel colors at the corresponding normal end-points.
I am trying to implement this using the following code, but it's not working. I mean, it's drawing the normals, but not in an equally spaced fashion.
for( ; contours != 0; contours = contours->h_next )
{
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    cvDrawContours( cc_color, contours, color, CV_RGB(0,0,0), -1, 1, 8, cvPoint(0,0) );
    ptr = contours;
    for( i = 1; i < ptr->total; i++ )
    {
        p1 = CV_GET_SEQ_ELEM( CvPoint, ptr, i );
        p2 = CV_GET_SEQ_ELEM( CvPoint, ptr, i+1 );
        x1 = p1->x;
        y1 = p1->y;
        x2 = p2->x;
        y2 = p2->y;
        printf("%d %d %d %d\n", x1, y1, x2, y2);
        draw_normals(x1, y1, x2, y2);
    }
}
So is there a way to find the length of a contour, so that I can divide it by the number of normals I want to draw? Thanks in advance.
EDIT: The draw_normal function draws the normals between two points passed to it as parameters.
So is there a way to find the length of a contour?
Yes, you can find the length of a contour using the standard OpenCV function cvArcLength().
Check the documentation here.
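For illustration, here is a rough sketch of how the contour length could drive equally spaced normals with the C++ API (cv::arcLength() is the C++ counterpart of cvArcLength(); numNormals and the actual drawing call are placeholders):

double length = cv::arcLength(contour, true); // contour: a closed vector<cv::Point>
double step = length / numNormals;            // desired spacing along the contour
double travelled = 0.0, nextStop = 0.0;
for (size_t i = 0; i + 1 < contour.size(); i++)
{
    if (travelled >= nextStop)
    {
        // draw a normal to the segment (contour[i], contour[i+1]) here
        nextStop += step;
    }
    travelled += std::hypot(contour[i + 1].x - contour[i].x,
                            contour[i + 1].y - contour[i].y);
}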