How to properly apply Hough Circle Transform in iOS environment? - ios

First off, I have no experience working in Objective-C++.
Like the subject states, I am trying to apply the OpenCV (v3.4.2) Hough Circle Transform to images captured via an iOS device. So far, I am able to make the captured photo grayscale. However, when I try to apply the circle transform, the app crashes. There are also syntax errors when I try to draw the recognized circles.
This is the OpenCV guide I am following.
Thank you for your time.
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import "GoCalc2-Bridging-Header.h"
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
@implementation ImageConverter : NSObject
+ (UIImage *)ConvertImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);

    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGB2GRAY);

    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Apply a median blur to reduce noise and avoid false circle detection
    cv::medianBlur(mat, gray, 5);

    // Proceed to apply Hough Circle Transform
    std::vector<cv::Vec3f> circles;
    HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1,
                 gray.rows/16, // change this value to detect circles with different distances to each other
                 100, 30, 1, 30 // change the last two parameters
                                // (min_radius & max_radius) to detect larger circles
                 );

    // Draw the detected circles
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        Point center = Point(c[0], c[1]); // ERROR HERE: No matching constructor for initialization of 'Point'
        // circle center
        circle( src, center, 1, Scalar(0,100,100), 3, LINE_AA); // ERROR HERE: Use of undeclared identifier 'LINE_AA' and Use of undeclared identifier 'src'
        // circle outline
        int radius = c[2];
        circle( src, center, radius, Scalar(255,0,255), 3, LINE_AA); // ERROR HERE: Use of undeclared identifier 'LINE_AA' and Use of undeclared identifier 'src'
    }

    // Display the detected circle(s) and wait for the user to exit the program
    imshow("detected circles", mat);
    cv::waitKey();

    UIImage *binImg = MatToUIImage(bin);
    return binImg;
}
@end

Have you tried removing LINE_AA?
This is what I have done in my app: circle(img, center, 1, cvScalar(255,255,255), 3);
Instead of Vec3i c = circles[i]; you may want to try cv::Vec3i c = circles[i];
Hope it will help.
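Putting the two suggestions together, here is a minimal sketch of the corrected detection-and-drawing section (my untested assembly, with everything qualified by cv:: since there is no using namespace cv;, and mat standing in for the undeclared src):

// Blur the grayscale image in place; cv::HoughCircles needs an 8-bit single-channel
// input, and blurring the color `mat` into `gray` (as in the question) makes `gray`
// multi-channel again, which is a likely cause of the crash
cv::medianBlur(gray, gray, 5);

std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, gray.rows / 16, 100, 30, 1, 30);

for (size_t i = 0; i < circles.size(); i++)
{
    cv::Vec3i c = circles[i];
    cv::Point center(c[0], c[1]);
    // circle center; cv::LINE_AA compiles in 3.x when written with the cv:: prefix
    cv::circle(mat, center, 1, cv::Scalar(0, 100, 100), 3, cv::LINE_AA);
    // circle outline
    cv::circle(mat, center, c[2], cv::Scalar(255, 0, 255), 3, cv::LINE_AA);
}

Note that cv::imshow and cv::waitKey have no window backend on iOS, so the result should instead be converted back with MatToUIImage and returned, as the method already does for bin.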

Related

Estimate white background

I have an image with a white, uneven background (due to lighting). I'm trying to estimate the background color and transform the image into one with a true white background. For this I estimated the white color for each 15x15 pixel block based on its luminosity. So I've got the following map (on the right):
Now I want to interpolate the color so there is a smoother transition from each 15x15 block to its neighboring blocks, and I also want to eliminate the outliers (the pink dots on the left-hand side). Could anyone suggest a good technique/algorithm for this? (Ideally within the OpenCV library, but not necessarily.)
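As an aside, a minimal sketch of one simple option for the block map itself (my suggestion, separate from the answer below): median-filter the small block map to suppress outlier blocks, then upsample it with bilinear interpolation for smooth transitions. The file names are hypothetical.

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Hypothetical input: one pixel per 15x15 block of the original image
    Mat3b blockMap = imread("block_map.png");
    // A small median filter suppresses isolated outlier blocks (the pink dots)
    medianBlur(blockMap, blockMap, 3);
    // Upsample back to full resolution; bilinear interpolation gives smooth
    // transitions between neighboring blocks
    Mat3b background;
    resize(blockMap, background, Size(blockMap.cols * 15, blockMap.rows * 15), 0, 0, INTER_LINEAR);
    imwrite("background_estimate.png", background);
    return 0;
}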
Starting from this image:
You could find the text on the whiteboard as the parts of your image that have a high gradient, and apply a little dilation to deal with the thick parts of the text. You'll get a mask that separates background from foreground pretty well:
Background:
Foreground:
You can then apply inpainting on the original image using the computed mask (this needs the OpenCV photo module):
Just to show that this works independently of the text color, I tried on a different image:
Resulting in:
Code:
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

using namespace cv;

void findText(const Mat3b& src, Mat1b& mask)
{
    // Convert to grayscale
    Mat1b gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);

    // Compute gradient magnitude
    Mat1f dx, dy, mag;
    Sobel(gray, dx, CV_32F, 1, 0);
    Sobel(gray, dy, CV_32F, 0, 1);
    magnitude(dx, dy, mag);

    // Remove low magnitude, keep only text
    mask = mag > 10;

    // Apply a dilation to deal with thick text
    Mat1b K = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    dilate(mask, mask, K);
}

int main(int argc, const char* argv[])
{
    Mat3b img = imread("path_to_image");

    // Segment white
    Mat1b mask;
    findText(img, mask);

    // Show intermediate images
    Mat3b background = img.clone();
    background.setTo(0, mask);
    Mat3b foreground = img.clone();
    foreground.setTo(0, ~mask);

    // Apply inpainting
    Mat3b inpainted;
    inpaint(img, mask, inpainted, 21, CV_INPAINT_TELEA);

    imshow("Original", img);
    imshow("Foreground", foreground);
    imshow("Background", background);
    imshow("Inpainted", inpainted);
    waitKey();

    return 0;
}

How to black out everything outside a circle in Open CV

I am currently trying to black out everything outside a circle.
I am drawing the circle using the following lines of code:
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); // cvRound converts floating-point numbers to integers
int radius = cvRound(circles[i][2]); // [i][0] = x, [i][1] = y, [i][2] = radius
circle( image, center, 3, cv::Scalar(0,255,0), -1, 8, 0 ); // draws a little dot at the detected centre; the next line draws the real circle
circle( image, center, radius, cv::Scalar(0,0,255), 3, 8, 0 ); // circle(img, center, radius, color, thickness=1, lineType=8, shift=0)
What is the best approach to painting everything outside the circle black, given that I have the radius and the center of my circle?
Does OpenCV provide an easy mechanism for doing this, or should I iterate through all the pixels of my image and color them black or not depending on their position?
Thanks to Abid for the hint; I ended up with this approach. Everything works fine:
cv::Mat src = someMethodThatReturnsSrcImage(); // source image
cv::Mat maskedImage;                           // stores the masked image
std::vector<cv::Vec3f> circles = someMethodThatReturnsCircles(src);

cv::Mat mask(src.size(), src.type()); // create a Mat with the same dimensions as src
mask.setTo(cv::Scalar(0, 0, 0));      // fill it with black

// Add all found circles to the mask
for (size_t i = 0; i < circles.size(); i++) // iterate through all detected circles
{
    cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); // cvRound converts floating-point numbers to integers
    int radius = cvRound(circles[i][2]);                              // [i][0] = x, [i][1] = y, [i][2] = radius
    cv::circle(mask, center, radius, cv::Scalar(255, 255, 255), -1, 8, 0); // thickness -1 draws the circle filled
}

src.copyTo(maskedImage, mask); // copies src to maskedImage only where the mask is white
You can make the background any color you want:
image = cv::Scalar(red_value, green_value, blue_value);
then draw your circles, as in the sketch below.
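In other words (a minimal sketch, assuming the image, center, and radius variables from the question):

image = cv::Scalar(0, 0, 0); // paint the whole canvas black (or any BGR color)
cv::circle(image, center, radius, cv::Scalar(0, 0, 255), 3, 8, 0); // then draw the circle on top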
I think that comment just under your question is the best solution.
I made a modified version of your code for a 5M image from a fisheye camera. This image also needs everything outside the circle made black.
#include <Windows.h>
#include <Vfw.h>
#include <string>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

int _tmain(int argc, _TCHAR* argv[])
{
    cv::Mat im_source_non_square = cv::imread("D:/FishLib/sample_02.bmp", CV_LOAD_IMAGE_COLOR);
    cv::namedWindow("Image", CV_WINDOW_FREERATIO);
    cv::imshow("Image", im_source_non_square);

    int m_nCenterX = 1280;
    int m_nCenterY = 960;
    int m_nRadius = 916;

    Mat im_mask = im_source_non_square.clone();
    im_mask.setTo(cv::Scalar(0, 0, 0));
    circle(im_mask, cv::Point(m_nCenterX, m_nCenterY), m_nRadius, cv::Scalar(255, 255, 255), -3, 8, 0);
    cv::namedWindow("Mask image", CV_WINDOW_FREERATIO);
    cv::imshow("Mask image", im_mask);

    Mat im_source_circle;
    cv::bitwise_and(im_source_non_square, im_mask, im_source_circle);
    cv::namedWindow("Combined image", CV_WINDOW_FREERATIO);
    cv::imshow("Combined image", im_source_circle);

    cv::waitKey(0);
    return 0;
}
Just tried your code snippet and it works.
Also, if you want to change the background color instead of black: according to the OpenCV docs here, copyTo only (re)initializes the destination Mat if needed, so if you create it with the right size and type beforehand your background color survives. Just add the code below:
cv::Mat maskedImage(src.size(), src.type()); // stores the masked image
maskedImage.setTo(cv::Scalar(0, 0, 255));    // set the background color to red
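A minimal sketch putting this together with the masking loop from the accepted approach above (same variable names):

cv::Mat maskedImage(src.size(), src.type());
maskedImage.setTo(cv::Scalar(0, 0, 255)); // red background instead of black
src.copyTo(maskedImage, mask);            // pixels outside the circles keep the red background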

opencv - counting non directional edges from canny

Can anyone help me count the number of non-directional edges using OpenCV Canny edge detection? I have a Canny edge image from OpenCV, and I would like to build a histogram based on edge directions so that I can count the number of directional and non-directional edges.
I think you are confusing edge detection with gradient detection. Canny provides an edge map based on the gradient magnitude (normally using a Sobel operator, but it can use others). Because Canny only returns the thresholded gradient magnitude information, it cannot provide you with the orientation information.
EDIT: I should clarify that the Canny algorithm does use gradient orientation for the non-maximum suppression step. However, the OpenCV implementation of Canny hides this orientation information from you and only returns an edge magnitude map.
The basic algorithm to get magnitude and orientation of the gradient is as follows:
Compute Sobel in the X direction (Sx).
Compute Sobel in the Y direction (Sy).
Compute the gradient magnitude sqrt(Sx*Sx + Sy*Sy).
Compute the gradient orientation with arctan(Sy / Sx).
This algorithm can be implemented using the following OpenCV functions: Sobel, magnitude, and phase.
Below is a sample that computes the gradient magnitude and phase as well as shows a coarse color mapping of the gradient orientations:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

Mat mat2gray(const cv::Mat& src)
{
    Mat dst;
    normalize(src, dst, 0.0, 255.0, cv::NORM_MINMAX, CV_8U);
    return dst;
}

Mat orientationMap(const cv::Mat& mag, const cv::Mat& ori, double thresh = 1.0)
{
    Mat oriMap = Mat::zeros(ori.size(), CV_8UC3);
    Vec3b red(0, 0, 255);
    Vec3b cyan(255, 255, 0);
    Vec3b green(0, 255, 0);
    Vec3b yellow(0, 255, 255);
    for (int i = 0; i < mag.rows * mag.cols; i++)
    {
        float* magPixel = reinterpret_cast<float*>(mag.data + i * sizeof(float));
        if (*magPixel > thresh)
        {
            float* oriPixel = reinterpret_cast<float*>(ori.data + i * sizeof(float));
            Vec3b* mapPixel = reinterpret_cast<Vec3b*>(oriMap.data + i * 3 * sizeof(char));
            if (*oriPixel < 90.0)
                *mapPixel = red;
            else if (*oriPixel >= 90.0 && *oriPixel < 180.0)
                *mapPixel = cyan;
            else if (*oriPixel >= 180.0 && *oriPixel < 270.0)
                *mapPixel = green;
            else if (*oriPixel >= 270.0 && *oriPixel < 360.0)
                *mapPixel = yellow;
        }
    }
    return oriMap;
}

int main(int argc, char* argv[])
{
    Mat image = Mat::zeros(Size(320, 240), CV_8UC1);
    circle(image, Point(160, 120), 80, Scalar(255, 255, 255), -1, CV_AA);
    imshow("original", image);

    Mat Sx;
    Sobel(image, Sx, CV_32F, 1, 0, 3);

    Mat Sy;
    Sobel(image, Sy, CV_32F, 0, 1, 3);

    Mat mag, ori;
    magnitude(Sx, Sy, mag);
    phase(Sx, Sy, ori, true);

    Mat oriMap = orientationMap(mag, ori, 1.0);

    imshow("magnitude", mat2gray(mag));
    imshow("orientation", mat2gray(ori));
    imshow("orientation map", oriMap);
    waitKey();

    return 0;
}
Using a circle image:
This results in the following magnitude and orientation images:
Finally, here is the gradient orientation map:
UPDATE: Abid actually asked a great question in the comments ("what is meant by orientation here?"), which I thought needed some further discussion. I am assuming that the phase function doesn't switch coordinate frames from the normal image-processing convention, where the positive y-axis points down and the positive x-axis points right. Given this assumption, the following image shows the gradient orientation vectors around the circle:
This can be difficult to get used to since the axes are flipped from what we are normally used to in math class. So, the gradient orientation is the angle made by the normal vector to the gradient surface in the direction of increasing change.
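Coming back to the counting part of the question: here is a minimal sketch (my assumption of what was wanted, reusing the mag and ori Mats from the sample above) that bins the orientations of strong-gradient pixels into a 10-degree histogram:

// 36 bins of 10 degrees each; only pixels with a significant gradient are counted
std::vector<int> hist(36, 0);
for (int r = 0; r < mag.rows; r++)
{
    for (int c = 0; c < mag.cols; c++)
    {
        if (mag.at<float>(r, c) > 1.0f)
        {
            int bin = cvFloor(ori.at<float>(r, c) / 10.0f) % 36; // ori is in degrees [0, 360)
            hist[bin]++;
        }
    }
}
// hist[b] now holds the number of edge pixels oriented in [b*10, (b+1)*10) degrees

A strongly peaked histogram would indicate directional edges; counts spread roughly evenly across bins would indicate non-directional (e.g. curved) edges.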
Hope you found that helpful!

OpenCV distance transform outputting an image that looks exactly like the input image

I am doing some detection work using OpenCV, and I need to use the distance transform. Except the distance transform function in OpenCV gives me an image that is exactly the same as the image I used as the source. Does anyone know what I am doing wrong? Here is the relevant portion of my code:
cvSetData(depthImage, m_rgbWk, depthImage->widthStep);
// gotten OpenCV image in "depthImage"

IplImage *single_channel_depthImage = cvCreateImage(cvSize(320, 240), 8, 1);
cvSplit(depthImage, single_channel_depthImage, NULL, NULL, NULL);

// smoothing
IplImage *smoothed_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvSmooth(single_channel_depthImage, smoothed_image, CV_MEDIAN, 9, 9, 0, 0);

// run the Canny edge detector
IplImage *edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvCanny(smoothed_image, edges_image, 100, 200);

// invert values
IplImage *inverted_edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvNot(edges_image, inverted_edges_image);

// calculate the distance transform
IplImage *distance_image = cvCreateImage(cvSize(320, 240), IPL_DEPTH_32F, 1);
cvZero(distance_image);
cvDistTransform(inverted_edges_image, distance_image, CV_DIST_L2, CV_DIST_MASK_PRECISE, NULL, NULL);
In a nutshell, I grab the image from the Kinect, turn it into a one-channel image, smooth it, run the Canny edge detector, invert the values, and then do the distance transform. But the transformed image looks exactly the same as the input image. What's wrong?
Thanks!
I believe the key here is that they only look the same. Here is a small program I wrote to show the difference:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    Mat before = imread("qrcode.png", 0);

    Mat dist;
    distanceTransform(before, dist, CV_DIST_L2, 3);

    imshow("before", before);
    imshow("non-normalized", dist);

    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("normalized", dist);

    waitKey();
    return 0;
}
In the non-normalized image, you see this:
which doesn't really look like it changed anything. But the distance steps are very small compared to the overall range of values [0, 255], and since imshow converts the image from 32-bit float to 8 bits for display, we can't see the differences. So let's normalize it...
Now we get this:
The values themselves should be correct, but when displayed you will need to normalize the image to see the difference.
EDIT:
Here is a small 10x10 sample from the upper-left corner of the dist matrix, showing that the values are in fact different:
[10.954346, 10.540054, 10.125763, 9.7114716, 9.2971802, 8.8828888, 8.4685974, 8.054306, 7.6400146, 7.6400146;
10.540054, 9.5850525, 9.1707611, 8.7564697, 8.3421783, 7.927887, 7.5135956, 7.0993042, 6.6850128, 6.6850128;
10.125763, 9.1707611, 8.2157593, 7.8014679, 7.3871765, 6.9728851, 6.5585938, 6.1443024, 5.730011, 5.730011;
9.7114716, 8.7564697, 7.8014679, 6.8464661, 6.4321747, 6.0178833, 5.6035919, 5.1893005, 4.7750092, 4.7750092;
9.2971802, 8.3421783, 7.3871765, 6.4321747, 5.4771729, 5.0628815, 4.6485901, 4.2342987, 3.8200073, 3.8200073;
8.8828888, 7.927887, 6.9728851, 6.0178833, 5.0628815, 4.1078796, 3.6935883, 3.2792969, 2.8650055, 2.8650055;
8.4685974, 7.5135956, 6.5585938, 5.6035919, 4.6485901, 3.6935883, 2.7385864, 2.324295, 1.9100037, 1.9100037;
8.054306, 7.0993042, 6.1443024, 5.1893005, 4.2342987, 3.2792969, 2.324295, 1.3692932, 0.95500183, 0.95500183;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0]
I just figured this one out.
The OpenCV distanceTransform
Calculates the distance to the closest zero pixel for each pixel of
the source image.
and so it expects your edges image to be inverted (the edges must become the zero pixels).
All you need to do is to negate your edges image:
edges = 255 - edges;
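For reference, here is a minimal sketch of the corrected pipeline using the C++ API (my assumption of the equivalent calls, reusing the includes and using namespace cv from the sample above; the file name is hypothetical):

Mat gray = imread("depth.png", 0); // hypothetical single-channel input
Mat smoothed, edges, dist;
medianBlur(gray, smoothed, 9);
Canny(smoothed, edges, 100, 200);
edges = 255 - edges; // invert so the edges become the zero pixels
distanceTransform(edges, dist, CV_DIST_L2, CV_DIST_MASK_PRECISE);
normalize(dist, dist, 0, 255, NORM_MINMAX, CV_8U); // scale for display
imshow("distance", dist);
waitKey();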
You can print these values using this code before the normalize call (it needs #include <iomanip> for std::setw):
for (int x = 0; x < 10; x++)
{
    cout << endl;
    for (int y = 0; y < 10; y++)
        cout << std::setw(10) << dist.at<float>(x, y);
}
Mat formats:
Input: CV_8U
Dist: CV_32F
Normalized: CV_8U
normalize(Mat_dist, Mat_norm, 0, 255, NORM_MINMAX, CV_8U);
If you want to visualize the result, you need to scale the normalization to 0...255 and not to 0...1, or everything will seem black. Using imshow() on an image scaled to 0...1 will work, but it may cause problems in the next processing steps. At least it did in my case.

How to find the coordinates of a point w.r.t another point on an image using OpenCV

Today I wrote a program for detecting circles using the Hough Transform with OpenCV in C.
The program inputs 3 images; each image contains a fixed small circle and a big circle with variable position. The program recognizes both circles and marks both of their centres. Now what I want is for the output image to display the (x,y) coordinates of the centre of the bigger circle with respect to the centre of the fixed smaller circle. Here's the code for 'circle.cpp':
#include <cv.h>
#include <highgui.h>
#include <math.h>

int main(int argc, char** argv)
{
    IplImage* img;
    int n = 3;
    char input[21], output[21];

    for (int l = 1; l <= n; l++)
    {
        sprintf(input, "Frame%d.jpg", l); // input images
        if ((img = cvLoadImage(input)) != 0)
        {
            IplImage* gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
            IplImage* canny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
            IplImage* rgbcanny = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
            CvMemStorage* storage = cvCreateMemStorage(0);

            cvCvtColor(img, gray, CV_BGR2GRAY);
            cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9); // smooth it, otherwise a lot of false circles may be detected
            cvCanny(gray, canny, 50, 100, 3);

            CvSeq* circles = cvHoughCircles(canny, storage, CV_HOUGH_GRADIENT, 2, gray->height/4, 200, 100);

            int i;
            cvCvtColor(canny, rgbcanny, CV_GRAY2BGR);
            for (i = 0; i < circles->total; i++)
            {
                float* p = (float*)cvGetSeqElem(circles, i);
                cvCircle(rgbcanny, cvPoint(cvRound(p[0]), cvRound(p[1])), 3, CV_RGB(0,255,0), -1, 8, 0);
                cvCircle(rgbcanny, cvPoint(cvRound(p[0]), cvRound(p[1])), cvRound(p[2]), CV_RGB(255,0,0), 3, 8, 0);
            }

            cvNamedWindow("circles", 1);
            cvShowImage("circles", rgbcanny);

            // save the output images
            sprintf(output, "circle%d.jpg", l);
            cvSaveImage(output, rgbcanny);
            cvWaitKey(0);
        }
    }
    return 0;
}
And here are the input and output images:
Please suggest what changes I should make in the code to display the desired (x,y) coordinates. Thanks a lot :)
Before you show the image, use cvPutText to add the desired text. The parameters of this function are self-explanatory. The font should be initialized using cvInitFont.
When you calculate the relative coordinates, keep in mind that in OpenCV, the coordinate system is like this
-----> x
|
|
v
y
just in case you are interested in showing the relative coordinates in a system whose axes point in another direction.
You should check that the Hough transform has detected exactly two circles. If so, all the data you need is in the circles variable. If (xa,ya) are the coordinates of the bigger circle's centre and (xb,yb) those of the smaller one, the relative coordinates are (xa-xb, ya-yb).
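A minimal sketch of that, using the C API from the question (my assumption; xa, ya, xb, yb are the circle centres extracted from circles, and the label buffer is hypothetical):

CvFont font;
cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.5, 0.5, 0, 1, 8);
// relative coordinates of the big circle with respect to the small one
char label[32];
sprintf(label, "(%d,%d)", cvRound(xa - xb), cvRound(ya - yb));
cvPutText(rgbcanny, label, cvPoint(cvRound(xa), cvRound(ya)), &font, CV_RGB(0, 255, 0));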
