I have the image below. I want to detect the line that divides this object into two pieces. What is the best way? I've tried the Hough transform, but sometimes the object is not big enough for it to detect the line. Any ideas?
Thanks!
Normally, the Hough transform is used for line detection. But if it doesn't work well for you, fitting a line is a good alternative. Check the OpenCV fitLine function for details on its parameters.
Since you have already tried Hough lines, I will demonstrate line fitting here, using OpenCV-Python:
import cv2
import numpy as np

# Load image, convert to grayscale, threshold and find contours
img = cv2.imread('lail.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, hier = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0]

# Then apply the fitLine() function
# (cv2.cv.CV_DIST_L2 is the OpenCV 2.x name; in OpenCV 3+ use cv2.DIST_L2)
[vx, vy, x, y] = cv2.fitLine(cnt, cv2.cv.CV_DIST_L2, 0, 0.01, 0.01)
# Now find two extreme points on the line to draw the line
lefty = int((-x*vy/vx) + y)
righty = int(((gray.shape[1]-x)*vy/vx) + y)

# Finally draw the line
cv2.line(img, (gray.shape[1]-1, righty), (0, lefty), 255, 2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Below is the result I got:
EDIT:
If you want the line that divides the object into two pieces, first find the fitted line, then find the line normal to it. For that, add the piece of code below under the cv2.fitLine() call:
# Rotate the unit direction vector (vx, vy) by 90 degrees and re-normalize
nx, ny = 1, -vx/vy
mag = np.sqrt(1 + ny**2)
vx, vy = nx/mag, ny/mag
And below are the results I got:
Hope it helps!
UPDATE:
Below is the C++ version of the Python code for the first case, as you requested. The code works fine for me; the output is the same as given above:
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv/cv.h>
using namespace std;
using namespace cv;
int main()
{
    cv::Mat img, gray, thresh;
    vector<vector<Point> > contours;
    Vec4f lines;

    img = cv::imread("line.png");
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, thresh, 127, 255, cv::THRESH_BINARY);
    cv::findContours(thresh, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // distType 2 is CV_DIST_L2, i.e. a least-squares fit;
    // lines = [vx, vy, x, y] as in the Python version
    cv::fitLine(Mat(contours[0]), lines, 2, 0, 0.01, 0.01);

    // lefty = int((-x*vy/vx) + y)
    // righty = int(((gray.shape[1]-x)*vy/vx) + y)
    int lefty = (-lines[2] * lines[1] / lines[0]) + lines[3];
    int righty = ((gray.cols - lines[2]) * lines[1] / lines[0]) + lines[3];

    cv::line(img, Point(gray.cols - 1, righty), Point(0, lefty), Scalar(255, 0, 0), 2);
    cv::imshow("img", img);
    cv::waitKey(0);
    cv::destroyAllWindows();
    return 0;
}
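The normal-line variant from the EDIT above translates the same way. Here is a rough sketch of the lines you could use in place of the extreme-point block (same variables as the program above, plus <cmath> for std::sqrt; I haven't run this exact fragment):
// Rotate the fitted direction (vx, vy) = (lines[0], lines[1]) by 90 degrees
float nx = 1.0f, ny = -lines[0] / lines[1];
float mag = std::sqrt(1 + ny * ny);
float vx = nx / mag, vy = ny / mag;

// Reuse the extreme-point formulas with the rotated direction
int lefty = (-lines[2] * vy / vx) + lines[3];
int righty = ((gray.cols - lines[2]) * vy / vx) + lines[3];
cv::line(img, Point(gray.cols - 1, righty), Point(0, lefty), Scalar(0, 0, 255), 2);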
Related
I'm trying to add noise to an image and then denoise it, to test my denoising algorithm! As a benchmark I'm referring to these online test samples, and I'm trying to replicate their noise model.
With reference to these threads 1, 2 I'm adding noise to the image like this:
Mat mSource_Bgr;
mSource_Bgr = imread(FileName_S, 1);
double m_NoiseStdDev = 10;

Mat mNoise_Bgr = mSource_Bgr.clone();
Mat mGaussian_noise = Mat(mSource_Bgr.size(), CV_8UC3);
randn(mGaussian_noise, 0, m_NoiseStdDev);
mNoise_Bgr += mGaussian_noise;
normalize(mNoise_Bgr, mNoise_Bgr, 0, 255, CV_MINMAX, CV_8UC3);

imshow("Output Window", mNoise_Bgr);
//imshow("Gaussian Noise", mGaussian_noise);
My Input Image
Output Image with Noise
Problem:
Adding noise to the image alters its overall brightness, which in turn alters my final PSNR results!
I want to get results as close as possible to this one!
What I have tried so far:
I have tried to add the noise only in the color channels:
Convert the input image into YUV color space.
Add the noise only in the U and V channels and keep the Y channel unaltered.
The results are very bad and the overall color of the image gets altered! I will add the code if needed!
So any advice regarding this is much appreciated! Maybe give me some formulas for adding noise to the image!
Thank you @Andrey Smorodov for your insights!
I got it working! Here is my updated code for adding noise to a color image. Hope this is useful for someone!
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
// Clamp an int to the valid 8-bit pixel range [0, 255]
inline uchar Clamp(int n)
{
    n = n > 255 ? 255 : n;
    return n < 0 ? 0 : n;
}
bool AddGaussianNoise(const Mat mSrc, Mat &mDst, double Mean = 0.0, double StdDev = 10.0)
{
    if (mSrc.empty())
    {
        cout << "[Error]! Input Image Empty!";
        return false;
    }

    // Signed 16-bit noise so that negative values survive (unlike CV_8UC3)
    Mat mGaussian_noise = Mat(mSrc.size(), CV_16SC3);
    randn(mGaussian_noise, Scalar::all(Mean), Scalar::all(StdDev));

    for (int Rows = 0; Rows < mSrc.rows; Rows++)
    {
        for (int Cols = 0; Cols < mSrc.cols; Cols++)
        {
            Vec3b Source_Pixel = mSrc.at<Vec3b>(Rows, Cols);
            Vec3b &Des_Pixel = mDst.at<Vec3b>(Rows, Cols);
            Vec3s Noise_Pixel = mGaussian_noise.at<Vec3s>(Rows, Cols);

            for (int i = 0; i < 3; i++)
            {
                int Dest_Pixel = Source_Pixel.val[i] + Noise_Pixel.val[i];
                Des_Pixel.val[i] = Clamp(Dest_Pixel);
            }
        }
    }
    return true;
}
bool AddGaussianNoise_Opencv(const Mat mSrc, Mat &mDst, double Mean = 0.0, double StdDev = 10.0)
{
    if (mSrc.empty())
    {
        cout << "[Error]! Input Image Empty!";
        return false;
    }

    Mat mSrc_16SC;
    Mat mGaussian_noise = Mat(mSrc.size(), CV_16SC3);
    randn(mGaussian_noise, Scalar::all(Mean), Scalar::all(StdDev));

    // Convert to a signed type, add the noise, then saturate back to 8 bits
    mSrc.convertTo(mSrc_16SC, CV_16SC3);
    addWeighted(mSrc_16SC, 1.0, mGaussian_noise, 1.0, 0.0, mSrc_16SC);
    mSrc_16SC.convertTo(mDst, mSrc.type());
    return true;
}
int main(int argc, const char* argv[])
{
    Mat mSource = imread("input.png", 1);
    imshow("Source Image", mSource);

    Mat mColorNoise(mSource.size(), mSource.type());

    AddGaussianNoise(mSource, mColorNoise, 0, 10.0);
    imshow("Source + Color Noise", mColorNoise);

    AddGaussianNoise_Opencv(mSource, mColorNoise, 0, 10.0); // I recommend this variant!
    imshow("Source + Color Noise OpenCV", mColorNoise);

    waitKey();
    return 0;
}
Looks like your noise matrix can't hold negative values, as it has an unsigned char element type. Try operating on real-valued matrices; it should help.
There are mainly two ways to add, say, AWGN noise (mean = 0, standard deviation = 30) to a color image.
First: add AWGN with mean = 0 and standard deviation = 30 to each of the red, green, and blue channels independently (or to the channels of any other color model: HSI, YUV, Lab); then combine the noisy channels to form the noisy color image.
Second: use a built-in function that adds noise to the color image directly, e.g. imnoise() in Matlab.
I tried both methods (imnoise and the independent per-channel approach) and got the same result.
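Since the question uses OpenCV, here is a rough sketch of the first (per-channel) method, with mean = 0 and standard deviation = 30; treat it as an illustration rather than tested code, and note the input filename is a placeholder:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("input.png", 1), noisy;
    vector<Mat> channels;
    split(src, channels);

    // Add independent Gaussian noise (mean 0, sigma 30) to each channel,
    // working in floating point so the noise can be negative
    for (int i = 0; i < 3; i++)
    {
        Mat chF, noise(channels[i].size(), CV_32F);
        channels[i].convertTo(chF, CV_32F);
        randn(noise, Scalar::all(0), Scalar::all(30));
        chF += noise;
        chF.convertTo(channels[i], CV_8U);   // saturates back to [0, 255]
    }

    merge(channels, noisy);
    imshow("noisy", noisy);
    waitKey(0);
    return 0;
}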
You mentioned: "I have tried to add the noise only in the color channel. Convert the input image into YUV color space. Add the noise only in the U and V color channels and keep the Y channel unaltered."
If you are using the YUV color model, I would suggest you do the opposite: keep the U and V channels unaltered and add noise only to the Y channel, as sketched below.
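A rough sketch of that suggestion (my own code, using the OpenCV 2.x constant names CV_BGR2YUV / CV_YUV2BGR; adjust for your version, and note the filename is a placeholder):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    Mat bgr = imread("input.png", 1), yuv;
    cvtColor(bgr, yuv, CV_BGR2YUV);

    vector<Mat> planes;
    split(yuv, planes);

    // Noise goes into the luma (Y) plane only; U and V stay untouched
    Mat yF, noise(planes[0].size(), CV_32F);
    planes[0].convertTo(yF, CV_32F);
    randn(noise, Scalar::all(0), Scalar::all(30));
    yF += noise;
    yF.convertTo(planes[0], CV_8U);   // saturates back to [0, 255]

    merge(planes, yuv);
    cvtColor(yuv, bgr, CV_YUV2BGR);
    imshow("noisy luma", bgr);
    waitKey(0);
    return 0;
}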
I have written a C++ program using OpenCV that can detect and highlight the edges of any object in a live video. But now I don't know how to extract the four corners of the cube from the many edges being detected in the video, so I am looking for some help here.
Here is the link to the paper that I am using as a guide for this project:
http://www.cs.ubc.ca/~andrejk/525project/525report.pdf
You can find the program code for this paper at the link below. It's written in Python. (I am using C++ and I don't know Python.)
http://www.cs.ubc.ca/~andrejk/525project/cubefinder.py
According to the paper, the next step would be 'edge segmentation with adaptive threshold', which I don't really understand. I also don't know how to extract the corners of the cube afterwards.
A short summary of the method I have used:
1. Input from webcam
2. Apply Laplacian filter
3. Apply Hough Line Transform.
I get the following result.
Code
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;

Mat laplacianFilter(Mat image)
{
    Mat hImage;
    GaussianBlur(image, hImage, Size(3,3), 0, 0, BORDER_DEFAULT);
    cvtColor(hImage, hImage, CV_RGB2GRAY);
    Laplacian(hImage, hImage, CV_16SC1, 3, 1, 0, BORDER_DEFAULT);
    convertScaleAbs(hImage, hImage, 1, 0);
    return hImage;
}

Mat hghTransform(Mat image, Mat &image2)
{
    Mat lImage;
    Canny(image, image, 50, 200, 3);
    cvtColor(image, lImage, CV_GRAY2BGR);

    vector<Vec4i> lines;
    HoughLinesP(image, lines, 1, CV_PI/180, 50, 50, 10);
    for (size_t i = 0; i < lines.size(); i++)
    {
        Vec4i l = lines[i];
        line(image2, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,255,0), 3, CV_AA);
    }
    return lImage;
}

int main()
{
    int c;
    VideoCapture cap(0);
    Mat image;
    Mat image2;

    namedWindow("hghtransform");
    namedWindow("laplacianfilter");
    namedWindow("cannyOutput");

    while (1)
    {
        cap >> image;
        cap >> image2;

        // Output
        imshow("laplacianfilter", laplacianFilter(image));
        imshow("cannyOutput", hghTransform(laplacianFilter(image), image2));
        imshow("hghtransform", image2);

        c = waitKey(33);
        if (c == 27)
            return 0;
    }
    return 0;
}
Adaptive thresholding will give you clean edge lines, which lets you pick out the 9 squares of a Rubik's cube face properly.
You can see a decent comparison of global and adaptive thresholding here: https://sites.google.com/site/qingzongtseng/adaptivethreshold
original image:
global threshold:
adaptive threshold:
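Since you're working in C++, the whole step is one OpenCV call. A minimal sketch; the input filename is hypothetical, and the block size (11) and constant (2) are arbitrary starting points to tune:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    Mat gray = imread("cube.png", 0);   // hypothetical input frame
    Mat binary;
    // Each pixel is compared against the mean of its 11x11 neighborhood minus 2
    adaptiveThreshold(gray, binary, 255, ADAPTIVE_THRESH_MEAN_C,
                      THRESH_BINARY, 11, 2);
    imshow("adaptive", binary);
    waitKey(0);
    return 0;
}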
For the corners, I am not sure whether it's stated in the paper, but I would do something like this:
==> find areas 1, 2, 3, 4 for the upper-left, upper-right, lower-left, and lower-right corners respectively
==> with a template matching algorithm (see the sketch below).
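A bare-bones template matching sketch of that idea (the frame and corner-template filenames are hypothetical placeholders; you would run one match per corner template):
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    Mat frame = imread("frame.png", 0);            // hypothetical video frame
    Mat corner = imread("corner_template.png", 0); // hypothetical corner patch

    Mat result;
    matchTemplate(frame, corner, result, CV_TM_CCOEFF_NORMED);

    // The maximum of the normalized correlation map is the best match
    double minVal, maxVal;
    Point minLoc, maxLoc;
    minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    rectangle(frame, maxLoc, maxLoc + Point(corner.cols, corner.rows),
              Scalar(255), 2);
    imshow("match", frame);
    waitKey(0);
    return 0;
}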
Hope it helps.
Note: you might want a background with less noise there. =)
I'm trying to implement in OpenCV an algorithm to bring out the details of a palm vein pattern. I've based my work on a paper called "A Contactless Biometric System Using Palm Print and Palm Vein Features" that I found on the Internet. The part I'm interested in is chapter 3.2, Pre-processing, where the steps involved are shown.
I'd like to do the implementation using OpenCV, but so far I'm stuck. In particular, they apply a Laplacian filter to the response of a low-pass filter to isolate the principal veins, but my result gets very noisy no matter which parameters I try!
Any help would be greatly appreciated!
OK, I finally figured out how to do it myself. Here is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#define THRESHOLD 150
#define BRIGHT 0.7
#define DARK 0.2
using namespace std;
using namespace cv;
int main()
{
    // Read source image in grayscale mode
    Mat img = imread("roi.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Apply ??? algorithm from https://stackoverflow.com/a/14874992/2501769
    Mat enhanced, float_gray, blur, num, den;
    img.convertTo(float_gray, CV_32F, 1.0/255.0);
    cv::GaussianBlur(float_gray, blur, Size(0,0), 10);
    num = float_gray - blur;
    cv::GaussianBlur(num.mul(num), blur, Size(0,0), 20);
    cv::pow(blur, 0.5, den);
    enhanced = num / den;
    cv::normalize(enhanced, enhanced, 0.0, 255.0, NORM_MINMAX, -1);
    enhanced.convertTo(enhanced, CV_8UC1);

    // Low-pass filter
    Mat gaussian;
    cv::GaussianBlur(enhanced, gaussian, Size(0,0), 3);

    // High-pass filter on the computed low-pass image
    Mat laplace;
    Laplacian(gaussian, laplace, CV_32F, 19);
    double lapmin, lapmax;
    minMaxLoc(laplace, &lapmin, &lapmax);
    double scale = 127 / max(-lapmin, lapmax);
    laplace.convertTo(laplace, CV_8U, scale, 128);

    // Thresholding using the empirical value of 150 to create a vein mask
    Mat mask;
    cv::threshold(laplace, mask, THRESHOLD, 255, CV_THRESH_BINARY);

    // Clean up the mask using an open morphological operation
    morphologyEx(mask, mask, cv::MORPH_OPEN,
        getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5,5)));

    // Connect the neighboring areas using a close morphological operation
    morphologyEx(mask, mask, cv::MORPH_CLOSE,
        getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11,11)));

    // Blur the mask for a smoother enhancement
    cv::GaussianBlur(mask, mask, Size(15,15), 0);

    // Blur the image a little as well to remove noise
    cv::GaussianBlur(enhanced, enhanced, Size(3,3), 0);

    // Use the mask to amplify the veins (clone so the source stays intact)
    Mat result = enhanced.clone();
    ushort new_pixel;
    double coeff;
    for (int i = 0; i < mask.rows; i++) {
        for (int j = 0; j < mask.cols; j++) {
            coeff = (1.0 - (mask.at<uchar>(i,j)/255.0))*BRIGHT + (1-DARK);
            new_pixel = coeff * enhanced.at<uchar>(i,j);
            result.at<uchar>(i,j) = (new_pixel > 255) ? 255 : new_pixel;
        }
    }

    // Show results
    imshow("frame", img);
    waitKey();
    imshow("frame", result);
    waitKey();
    return 0;
}
So the main steps of the paper are followed here. For some parts I took inspiration from code I found elsewhere: that's the case for the first processing step, which I found here, and for the high-pass (Laplacian) filter I drew on the code given in the OpenCV 2 Computer Vision Application Programming Cookbook.
Finally, I made some small improvements by making the brightness of the background and the darkness of the veins adjustable (see the BRIGHT and DARK defines). I also decided to blur the mask a bit for a more "natural" enhancement.
Here are the results (source / paper result / my result):
I'm trying to apply a morphological closing operation only to an n×n neighborhood of the pixel at (i,j). The easiest way seemed to be to create a CvRect with cvRect(j-n,i-n,j+n,i+n), set the image's ROI to that, and then apply the morphology.
However, the result is the same as applying the morphology to the whole image, without setting an ROI. What am I doing wrong here?
I haven't tried doing this with the C interface, but here is how I did it using the C++ interface:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
    Mat spots = imread("roi.png", 0);

    // ROI covering the upper-left quadrant; the view shares memory with spots
    Rect ulRoi(0, 0, spots.cols >> 1, spots.rows >> 1);
    Mat opening(spots, ulRoi);

    Mat element = getStructuringElement(MORPH_RECT, Size(7, 7));
    morphologyEx(opening, opening, MORPH_OPEN, element);

    imshow("opening", opening);
    imshow("spots", spots);
    waitKey();
    return 0;
}
I basically just contrived an image and then got rid of the "noise" halo only in the upper-left quadrant. My "noise" spots were only 5x5, so I made the morphological kernel 7x7 to obliterate them.
Here is the input image:
After a morphological opening, I get the following:
Hopefully that will help you out!
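Translating this back to your original setup, here is an untested fragment sketching a closing restricted to an n×n neighborhood of (i,j), for the program above; note that Rect takes (x, y, width, height), and the ROI should be clipped to the image bounds:
int i = 100, j = 150, n = 10;            // hypothetical pixel and radius
Rect roi(j - n, i - n, 2 * n + 1, 2 * n + 1);
roi &= Rect(0, 0, spots.cols, spots.rows);   // clip to the image bounds

Mat view(spots, roi);                        // the view shares memory with spots
morphologyEx(view, view, MORPH_CLOSE,
             getStructuringElement(MORPH_RECT, Size(7, 7)));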
Does someone know of a link to an example SIFT implementation with OpenCV 2.2?
Regards,
Below is a minimal example:
#include <opencv/cv.h>
#include <opencv/highgui.h>
int main(int argc, const char* argv[])
{
    const cv::Mat input = cv::imread("input.jpg", 0); // Load as grayscale

    cv::SiftFeatureDetector detector;
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(input, keypoints);

    // Add results to image and save
    cv::Mat output;
    cv::drawKeypoints(input, keypoints, output);
    cv::imwrite("sift_result.jpg", output);

    return 0;
}
Tested on OpenCV 2.3
You can obtain the SIFT detector and SIFT-based extractor in several ways. As others have already suggested the more direct methods, I will provide a more "software engineering" approach that may make your code more flexible to changes (i.e. easier to switch to other detectors and extractors).
Firstly, if you are looking to obtain the detector with built-in parameters, the best way is to use OpenCV's factory methods for creating it. Here's how:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main(int argc, char *argv[])
{
    Mat image = imread("TestImage.jpg");

    // Create smart pointer for SIFT feature detector.
    Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
    vector<KeyPoint> keypoints;

    // Detect the keypoints
    featureDetector->detect(image, keypoints); // NOTE: featureDetector is a pointer hence the '->'.

    // Similarly, we create a smart pointer to the SIFT extractor.
    Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SIFT");

    // Compute the 128-dimensional SIFT descriptor at each keypoint.
    // Each row in "descriptors" corresponds to the SIFT descriptor for one keypoint.
    Mat descriptors;
    featureExtractor->compute(image, keypoints, descriptors);

    // If you would like to draw the detected keypoints just to check:
    Mat outputImage;
    Scalar keypointColor = Scalar(255, 0, 0); // Blue keypoints.
    drawKeypoints(image, keypoints, outputImage, keypointColor, DrawMatchesFlags::DEFAULT);

    namedWindow("Output");
    imshow("Output", outputImage);

    char c = ' ';
    while ((c = waitKey(0)) != 'q'); // Keep window open until user presses 'q' to quit.

    return 0;
}
The reason for using the factory methods is flexibility: you can now switch to a different keypoint detector or feature extractor, e.g. SURF, simply by changing the argument passed to the "create" factory methods, like this:
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SURF");
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SURF");
For other possible arguments to pass to create other detectors or extractors, see:
http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_feature_detectors.html#featuredetector-create
http://opencv.itseez.com/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html?highlight=descriptorextractor#descriptorextractor-create
Now, using the factory methods means you gain the convenience of not having to guess suitable parameters to pass to each of the detectors or extractors, which can be convenient for people new to using them. However, if you would like to create your own custom SIFT detector, you can create a SiftFeatureDetector object with custom parameters, wrap it in a smart pointer, and refer to it through the featureDetector smart-pointer variable as above.
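For example, something along these lines; the exact SiftFeatureDetector constructor arguments differ between OpenCV versions, so treat this as a sketch and check your version's documentation:
// A SIFT detector with hand-picked parameters, wrapped in the same smart
// pointer type so the rest of the code above stays unchanged
// (constructor arguments here follow the OpenCV 2.3-era API)
Ptr<FeatureDetector> featureDetector(new SiftFeatureDetector(0.05, 10.0));
featureDetector->detect(image, keypoints);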
A simple example using the SIFT nonfree feature detector in OpenCV 2.4:
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>
using namespace cv;
int main(int argc, char** argv)
{
    if (argc < 2)
        return -1;

    Mat img = imread(argv[1]);

    SIFT sift;
    vector<KeyPoint> key_points;
    Mat descriptors;
    sift(img, Mat(), key_points, descriptors);

    Mat output_img;
    drawKeypoints(img, key_points, output_img);

    namedWindow("Image");
    imshow("Image", output_img);
    waitKey(0);
    destroyWindow("Image");

    return 0;
}
OpenCV provides SIFT and SURF (here too) and other feature descriptors out of the box.
Note that the SIFT algorithm is patented, so it may be incompatible with regular OpenCV use/licensing.
Another simple example using the SIFT nonfree feature detector, in OpenCV 2.4. Be sure to add the opencv_nonfree240.lib dependency.
#include "cv.h"
#include "highgui.h"
#include <opencv2/nonfree/nonfree.hpp>
int main(int argc, char** argv)
{
    cv::Mat img = cv::imread("image.jpg");

    cv::SIFT sift(10); // number of keypoints to retain
    cv::vector<cv::KeyPoint> key_points;
    cv::Mat descriptors, mascara;
    sift(img, mascara, key_points, descriptors);

    cv::Mat output_img;
    drawKeypoints(img, key_points, output_img);

    cv::namedWindow("Image");
    cv::imshow("Image", output_img);
    cv::waitKey(0);
    return 0;
}
In case someone is wondering how to do it with two images:
import numpy as np
import cv2

# Load the two images to match (grayscale); the filenames are placeholders
src_img = cv2.imread('source.png', 0)
trg_img = cv2.imread('target.png', 0)

print('Initiate SIFT detector')
sift = cv2.xfeatures2d.SIFT_create()

print('find the keypoints and descriptors with SIFT')
gcp1, des1 = sift.detectAndCompute(src_img, None)
gcp2, des2 = sift.detectAndCompute(trg_img, None)

# Create BFMatcher object; SIFT descriptors are floating point,
# so use NORM_L2 (NORM_HAMMING is for binary descriptors like ORB)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)

# Sort them in the order of their distance
matches = sorted(matches, key=lambda x: x.distance)

# Draw only the first 100 matches
img3 = cv2.drawMatches(src_img, gcp1, trg_img, gcp2, matches[:100], None)