Counting non-directional edges from Canny edge detection in OpenCV

Can anyone help me count the number of non-directional edges using OpenCV Canny edge detection? I have a Canny edge image from OpenCV and I would like to build a histogram based on edge directions, so that I can count the number of directional and non-directional edges.

I think you are confusing edge detection with gradient detection. Canny provides an edge map based on the gradient magnitude (normally using a Sobel operator, but it can use others). Because Canny only returns the thresholded gradient magnitude, it cannot provide you with the orientation information.
EDIT : I should clarify that the Canny algorithm does use gradient orientation for the non-maximum suppression step. However, the OpenCV implementation of Canny hides this orientation information from you, and only returns an edge magnitude map.
The basic algorithm to get magnitude and orientation of the gradient is as follows:
Compute Sobel in the X direction (Sx).
Compute Sobel in the Y direction (Sy).
Compute the gradient magnitude sqrt(Sx*Sx + Sy*Sy).
Compute the gradient orientation with atan2(Sy, Sx).
This algorithm can be implemented using the following OpenCV functions: Sobel, magnitude, and phase.
Below is a sample that computes the gradient magnitude and phase as well as shows a coarse color mapping of the gradient orientations:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

Mat mat2gray(const cv::Mat& src)
{
    Mat dst;
    normalize(src, dst, 0.0, 255.0, cv::NORM_MINMAX, CV_8U);
    return dst;
}

Mat orientationMap(const cv::Mat& mag, const cv::Mat& ori, double thresh = 1.0)
{
    Mat oriMap = Mat::zeros(ori.size(), CV_8UC3);
    Vec3b red(0, 0, 255);
    Vec3b cyan(255, 255, 0);
    Vec3b green(0, 255, 0);
    Vec3b yellow(0, 255, 255);
    for(int i = 0; i < mag.rows*mag.cols; i++)
    {
        float* magPixel = reinterpret_cast<float*>(mag.data + i*sizeof(float));
        if(*magPixel > thresh)
        {
            float* oriPixel = reinterpret_cast<float*>(ori.data + i*sizeof(float));
            Vec3b* mapPixel = reinterpret_cast<Vec3b*>(oriMap.data + i*3*sizeof(char));
            if(*oriPixel < 90.0)
                *mapPixel = red;
            else if(*oriPixel >= 90.0 && *oriPixel < 180.0)
                *mapPixel = cyan;
            else if(*oriPixel >= 180.0 && *oriPixel < 270.0)
                *mapPixel = green;
            else if(*oriPixel >= 270.0 && *oriPixel < 360.0)
                *mapPixel = yellow;
        }
    }
    return oriMap;
}

int main(int argc, char* argv[])
{
    Mat image = Mat::zeros(Size(320, 240), CV_8UC1);
    circle(image, Point(160, 120), 80, Scalar(255, 255, 255), -1, CV_AA);
    imshow("original", image);

    Mat Sx;
    Sobel(image, Sx, CV_32F, 1, 0, 3);

    Mat Sy;
    Sobel(image, Sy, CV_32F, 0, 1, 3);

    Mat mag, ori;
    magnitude(Sx, Sy, mag);
    phase(Sx, Sy, ori, true);

    Mat oriMap = orientationMap(mag, ori, 1.0);

    imshow("magnitude", mat2gray(mag));
    imshow("orientation", mat2gray(ori));
    imshow("orientation map", oriMap);
    waitKey();

    return 0;
}
Using a circle image:
This results in the following magnitude and orientation images:
Finally, here is the gradient orientation map:
UPDATE: Abid actually asked a great question in the comments ("what is meant by orientation here?"), which I thought needed some further discussion. I am assuming that the phase function doesn't switch coordinate frames from the normal image-processing convention of the positive y-axis pointing down and the positive x-axis pointing right. Given this assumption, that leads to the following image showing the gradient orientation vectors around the circle:
This can be difficult to get used to since the axes are flipped from what we are normally used to in math class... So, the gradient orientation is the angle of the vector normal to the edge, pointing in the direction of increasing change.
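To tie this back to the original question of counting directional and non-directional edges: a minimal sketch (not from the original answer) is to compute the Canny edge map, look up the gradient orientation at each edge pixel, and accumulate a histogram. Here "non-directional" is assumed to mean edge pixels whose gradient magnitude is too weak to give a reliable orientation, and the input file name, magnitude cutoff, and 4 x 90-degree bin layout are all arbitrary choices:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    // "input.png" is a hypothetical grayscale input; the Canny thresholds
    // and the magnitude cutoff below are assumptions to be tuned.
    Mat image = imread("input.png", 0);

    Mat edges;
    Canny(image, edges, 100, 200);

    Mat Sx, Sy, mag, ori;
    Sobel(image, Sx, CV_32F, 1, 0, 3);
    Sobel(image, Sy, CV_32F, 0, 1, 3);
    magnitude(Sx, Sy, mag);
    phase(Sx, Sy, ori, true);          // orientation in degrees [0, 360)

    const float magThresh = 10.0f;     // below this we call the edge "non-directional"
    vector<int> hist(4, 0);            // 4 bins of 90 degrees each
    int nonDirectional = 0;

    for (int r = 0; r < edges.rows; r++)
    {
        for (int c = 0; c < edges.cols; c++)
        {
            if (edges.at<uchar>(r, c) == 0)
                continue;              // only count Canny edge pixels
            if (mag.at<float>(r, c) < magThresh)
                nonDirectional++;      // gradient too weak to assign a direction
            else
                hist[static_cast<int>(ori.at<float>(r, c)) / 90 % 4]++;
        }
    }

    cout << "non-directional edge pixels: " << nonDirectional << endl;
    for (int b = 0; b < 4; b++)
        cout << "bin " << b*90 << "-" << (b+1)*90 << " deg: " << hist[b] << endl;
    return 0;
}
Finer bins (for example 36 bins of 10 degrees) would give a proper direction histogram; the 90-degree bins above just mirror the coarse color map used earlier.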
Hope you found that helpful!

Related

Circle detection in an image

I am trying to use the HoughCircles method to detect circles in an image, but it looks like this method is not able to detect all circles that share almost the same centre. For example, if I have 3 circles with almost the same centre, it detects them as a single circle. Please suggest if there is any way around this to find all the circles. Here is the source image:
I might be wrong in my assumption about the HoughCircles method.
Thanks in advance.
The reason is that when you call HoughCircles you have to decide the minimum distance between detected circle centres. The "same centre" case you mention means the distance between them is nearly zero, so in this case you should set the minimum-distance parameter to almost 0.
void cv::HoughCircles( InputArray image,
                       OutputArray circles,
                       int method,
                       double dp,
                       double minDist,      // set this almost to zero, because exactly 0 is not accepted
                       double param1 = 100,
                       double param2 = 100,
                       int minRadius = 0,
                       int maxRadius = 0
                     )
When I tried with these parameters:
HoughCircles( input, output, CV_HOUGH_GRADIENT, 1, 0.5, 60, 30, 1, 200 );
I get this:
Edit: When I played with this some more, I got 12 circles, but in reality there are 18 circles (not counting the circles at the edges). The reason could be the image quality. Here is my code and result:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
/// Load source image and convert it to gray
Mat src_gray,dst,src = imread("/ur/source/image/image.jpg", 1 );
imshow("Source",src);
int i = 50;
bilateralFilter(src,dst,i,i*2,i/2);
imshow("Output",dst);
cvtColor( dst, src_gray, CV_BGR2GRAY );
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, 0.01, 80, 55, 0, 100 );
Mat zero_mask = Mat::zeros(src.rows,src.cols,CV_8UC3);
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
// circle center
circle( zero_mask, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( zero_mask, center, radius, Scalar(0,0,255), 1, 8, 0 );
}
cout<<circles.size()<<endl;
imshow("Output2",src_gray);
imshow("outt",zero_mask);
waitKey(0);
return(0);
}
Output:

Hough Transform failed in opencv

The picture is shown above. I am using OpenCV to process it and I have tried to use the Hough Transform, but failed. Also, I found that it is very hard to set the relevant parameters of the Hough Transform.
The codes are as following:
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat srcImg = imread("srccenter.bmp");
    Mat greyImg;
    cvtColor(srcImg, greyImg, COLOR_BGR2GRAY);

    std::vector<cv::Vec3f> circles;

    /// Apply the Hough Transform to find the circles
    HoughCircles(greyImg, circles, CV_HOUGH_GRADIENT, 1, 10, 100, 20, 0, 0);

    /// Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(srcImg, center, 3, Scalar(0, 255, 255), -1);
        circle(srcImg, center, radius, Scalar(0, 255, 0), 1);
    }

    namedWindow("srcImg", WINDOW_NORMAL);
    imshow("srcImg", srcImg);
    waitKey(0);
    return 0;
}
But the result is that I cannot detect any circle.
How can I detect the inner circle?
Do you have any good ideas?
You need to change the min_dist parameter to (almost) zero. This parameter is the minimum distance between detected centers, and in your case the centers of the circles are very close to each other.
You must also change param_1, the parameter passed to the Canny edge detector. A rough example call is sketched below.
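For illustration only, here is a minimal sketch of how such a call might look; the parameter values (dp, minDist, the Canny and accumulator thresholds, and the radius range) are guesses borrowed from the answer above and would need tuning against the actual image:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // "srccenter.bmp" is taken from the question; the parameter values below
    // are assumptions for concentric circles, not values verified on this image.
    Mat srcImg = imread("srccenter.bmp");
    Mat greyImg;
    cvtColor(srcImg, greyImg, COLOR_BGR2GRAY);

    vector<Vec3f> circles;
    HoughCircles(greyImg, circles, CV_HOUGH_GRADIENT,
                 1,      // dp: accumulator resolution = image resolution
                 0.5,    // minDist: almost zero, so concentric centers are kept
                 60,     // param1: upper Canny threshold (lower it if edges are weak)
                 30,     // param2: accumulator threshold (lower it to get more circles)
                 0, 0);  // min/max radius: unrestricted

    cout << "circles found: " << circles.size() << endl;
    for (size_t i = 0; i < circles.size(); i++)
    {
        circle(srcImg, Point(cvRound(circles[i][0]), cvRound(circles[i][1])),
               cvRound(circles[i][2]), Scalar(0, 255, 0), 1);
    }
    imshow("detected", srcImg);
    waitKey(0);
    return 0;
}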

Why doesn't OpenCV findContour method always find closed outer contour?

I have two images from the same camera position. The difference between them is that one was taken with orthographic and the other was taken with perspective projection.
Here is the two image:
When I run the OpenCV findContours method on them, the result is the following:
Why doesn't OpenCV find a closed outer contour curve for the perspective one?
I tried both CV_RETR_TREE and CV_RETR_EXTERNAL flags with the combination of CV_CHAIN_APPROX_SIMPLE and CV_CHAIN_APPROX_NONE flags.
Here is the documentation and sample code (which I am using) for the findContour method.
Actually I can't reproduce your problem. Try with this code:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    RNG rng(1234);
    Mat3b img = imread("path_to_image");

    Mat1b gray;
    cvtColor(img, gray, COLOR_BGR2GRAY);

    Mat1b bw = ~gray;

    vector<vector<Point>> contours;
    findContours(bw, contours, RETR_LIST, CHAIN_APPROX_SIMPLE);

    for (int i = 0; i < (int)contours.size(); ++i)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(img, contours, i, color, 2);
    }

    imshow("Result", img);
    waitKey();
    return 0;
}
Result:

palm veins enhancement with OpenCV

I'm trying to implement in OpenCV an algorithm to bring out the details of a palm vein pattern. I've based myself on a paper called "A Contactless Biometric System Using Palm Print and Palm Vein Features" that I've found on the Internet. The part I'm interested in is the chapter 3.2 Pre-processing. The steps involved are shown there.
I'd like to do the implementation using OpenCV, but so far I'm quite stuck. In particular, they apply a Laplacian filter to the response of a low-pass filter to isolate the principal veins, but my result gets very noisy, no matter which parameters I try!
Any help would be greatly appreciated!
OK, I finally figured out how to do it by myself. Here is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#define THRESHOLD 150
#define BRIGHT 0.7
#define DARK 0.2

using namespace std;
using namespace cv;

int main()
{
    // Read source image in grayscale mode
    Mat img = imread("roi.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Apply ??? algorithm from https://stackoverflow.com/a/14874992/2501769
    Mat enhanced, float_gray, blur, num, den;
    img.convertTo(float_gray, CV_32F, 1.0/255.0);
    cv::GaussianBlur(float_gray, blur, Size(0,0), 10);
    num = float_gray - blur;
    cv::GaussianBlur(num.mul(num), blur, Size(0,0), 20);
    cv::pow(blur, 0.5, den);
    enhanced = num / den;
    cv::normalize(enhanced, enhanced, 0.0, 255.0, NORM_MINMAX, -1);
    enhanced.convertTo(enhanced, CV_8UC1);

    // Low-pass filter
    Mat gaussian;
    cv::GaussianBlur(enhanced, gaussian, Size(0,0), 3);

    // High-pass filter on computed low-pass image
    Mat laplace;
    Laplacian(gaussian, laplace, CV_32F, 19);
    double lapmin, lapmax;
    minMaxLoc(laplace, &lapmin, &lapmax);
    double scale = 127 / max(-lapmin, lapmax);
    laplace.convertTo(laplace, CV_8U, scale, 128);

    // Thresholding using empirical value of 150 to create a vein mask
    Mat mask;
    cv::threshold(laplace, mask, THRESHOLD, 255, CV_THRESH_BINARY);

    // Clean up the mask using an open morphological operation
    morphologyEx(mask, mask, cv::MORPH_OPEN,
                 getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5,5)));

    // Connect the neighboring areas using a close morphological operation
    Mat connected;
    morphologyEx(mask, mask, cv::MORPH_CLOSE,
                 getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11,11)));

    // Blur the mask for a smoother enhancement
    cv::GaussianBlur(mask, mask, Size(15,15), 0);

    // Blur the image a little as well to remove noise
    cv::GaussianBlur(enhanced, enhanced, Size(3,3), 0);

    // The mask is used to amplify the veins
    Mat result(enhanced);
    ushort new_pixel;
    double coeff;
    for(int i = 0; i < mask.rows; i++){
        for(int j = 0; j < mask.cols; j++){
            coeff = (1.0 - (mask.at<uchar>(i,j)/255.0))*BRIGHT + (1-DARK);
            new_pixel = coeff * enhanced.at<uchar>(i,j);
            result.at<uchar>(i,j) = (new_pixel > 255) ? 255 : new_pixel;
        }
    }

    // Show results
    imshow("frame", img);
    waitKey();
    imshow("frame", result);
    waitKey();

    return 0;
}
So the main steps of the paper are followed here. For some parts I took inspiration from code I found online. That is the case for the first processing step I apply, which I found here. For the high-pass filter (Laplacian) I took inspiration from the code given in OpenCV 2 Computer Vision Application Programming Cookbook.
Finally, I made some small improvements by allowing the brightness of the background and the darkness of the veins to be adjusted (see the BRIGHT and DARK defines). I also decided to blur the mask a bit to get a more "natural" enhancement.
Here are the results (Source / Paper result / My result):

OpenCV distance transform outputting an image that looks exactly like the input image

I am doing some detection work using OpenCV, and I need to use the distance transform. However, the distance transform function in OpenCV gives me an image that is exactly the same as the image I use as source. Does anyone know what I am doing wrong? Here is the relevant portion of my code:
cvSetData(depthImage, m_rgbWk, depthImage->widthStep);
//gotten openCV image in "depthImage"
IplImage *single_channel_depthImage = cvCreateImage(cvSize(320, 240), 8, 1);
cvSplit(depthImage, single_channel_depthImage, NULL, NULL, NULL);
//smoothing
IplImage *smoothed_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvSmooth(single_channel_depthImage, smoothed_image, CV_MEDIAN, 9, 9, 0, 0);
//do canny edge detector
IplImage *edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvCanny(smoothed_image, edges_image, 100, 200);
//invert values
IplImage *inverted_edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvNot(edges_image, inverted_edges_image);
//calculate the distance transform
IplImage *distance_image = cvCreateImage(cvSize(320, 240), IPL_DEPTH_32F, 1);
cvZero(distance_image);
cvDistTransform(inverted_edges_image, distance_image, CV_DIST_L2, CV_DIST_MASK_PRECISE, NULL, NULL);
In a nutshell, I grab the image from the Kinect, turn it into a one-channel image, smooth it, run the Canny edge detector, invert the values, and then I do the distance transform. But the transformed image looks exactly the same as the input image. What's wrong?
Thanks!
I believe the key here is that they only look the same. Here is a small program I wrote to show the difference:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    Mat before = imread("qrcode.png", 0);

    Mat dist;
    distanceTransform(before, dist, CV_DIST_L2, 3);

    imshow("before", before);
    imshow("non-normalized", dist);

    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("normalized", dist);

    waitKey();
    return 0;
}
In the non-normalized image, you see this:
which doesn't really look like it changed anything. But the distance steps are very small compared to the overall range of values [0, 255] (because imshow converts the image from 32-bit float to 8 bits for display), so we can't see the differences. Let's normalize it...
Now we get this:
The values themselves should be correct, but when displayed you will need to normalize the image to see the difference.
EDIT :
Here is a small 10x10 sample from the upper-left corner of the dist matrix showing that the values are in fact different:
[10.954346, 10.540054, 10.125763, 9.7114716, 9.2971802, 8.8828888, 8.4685974, 8.054306, 7.6400146, 7.6400146;
10.540054, 9.5850525, 9.1707611, 8.7564697, 8.3421783, 7.927887, 7.5135956, 7.0993042, 6.6850128, 6.6850128;
10.125763, 9.1707611, 8.2157593, 7.8014679, 7.3871765, 6.9728851, 6.5585938, 6.1443024, 5.730011, 5.730011;
9.7114716, 8.7564697, 7.8014679, 6.8464661, 6.4321747, 6.0178833, 5.6035919, 5.1893005, 4.7750092, 4.7750092;
9.2971802, 8.3421783, 7.3871765, 6.4321747, 5.4771729, 5.0628815, 4.6485901, 4.2342987, 3.8200073, 3.8200073;
8.8828888, 7.927887, 6.9728851, 6.0178833, 5.0628815, 4.1078796, 3.6935883, 3.2792969, 2.8650055, 2.8650055;
8.4685974, 7.5135956, 6.5585938, 5.6035919, 4.6485901, 3.6935883, 2.7385864, 2.324295, 1.9100037, 1.9100037;
8.054306, 7.0993042, 6.1443024, 5.1893005, 4.2342987, 3.2792969, 2.324295, 1.3692932, 0.95500183, 0.95500183;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0]
I just figured this one out.
The OpenCV distanceTransform
Calculates the distance to the closest zero pixel for each pixel of
the source image.
and so it expects your edge pixels to be zero, i.e. the edge image must be inverted.
All you need to do is to negate your edges image:
edges = 255 - edges;
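For completeness, here is a minimal sketch of the whole Canny-then-distance-transform pipeline with that inversion applied, written with the C++ API rather than the asker's C API; the file name, smoothing kernel, and Canny thresholds are placeholders:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    // "depth.png" and the Canny thresholds are placeholders, not from the question.
    Mat gray = imread("depth.png", 0);

    Mat smoothed, edges;
    medianBlur(gray, smoothed, 9);
    Canny(smoothed, edges, 100, 200);

    // distanceTransform measures the distance to the nearest ZERO pixel,
    // so the edges must be zero: invert the edge map first.
    Mat inverted = 255 - edges;

    Mat dist;
    distanceTransform(inverted, dist, CV_DIST_L2, CV_DIST_MASK_PRECISE);

    // Normalize only for display; keep the raw float distances for further processing.
    Mat disp;
    normalize(dist, disp, 0.0, 1.0, NORM_MINMAX);
    imshow("distance", disp);
    waitKey();
    return 0;
}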
You can print these values using this code before the normalize function (note that std::setw needs <iomanip>):
for(int x = 0; x < 10; x++)
{
    cout << endl;
    for(int y = 0; y < 10; y++)
        cout << std::setw(10) << dist.at<float>(x, y);
}
Mat formats:
Input: CV_8U
Dist: CV_32F
Normalized: CV_8U
normalize(Mat_dist, Mat_norm, 0, 255, NORM_MINMAX, CV_8U);
If you want to visualize the result, you need to scale the normalization to 0 ... 255 and not to 0 ... 1, or everything will seem black. Using imshow() on an image scaled to 0 ... 1 will work, but may cause problems in the next processing steps. At least it did in my case.
