Using cuda::morphologyEx in OpenCV 3 - opencv

I am working on an OpenCV project which uses the morphologyEx function. Now I am trying to do it with GPU support.
When I compile my program with OpenCV 3.0 and CUDA 7.5 support, it accepts most of the functions (such as cuda::threshold, cuda::cvtColor, etc.) except for morphologyEx. Note that in OpenCV 2.4.9 this function is called gpu::morphologyEx.
How can I use this function in OpenCV 3.0 or 3.1? If it isn't supported, is there an alternative to this function?
I am actually using this function for background estimation under non-uniform illumination. I am adding the code to the question. Please suggest how I can replace the morphologyEx call.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
// Step 1: Read Image
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Step 2: Use Morphological Opening to Estimate the Background
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15,15));
Mat1b background;
morphologyEx(img, background, MORPH_OPEN, kernel);
// Step 3: Subtract the Background Image from the Original Image
Mat1b img2;
absdiff(img, background, img2);
// Step 4: Increase the Image Contrast
// Not needed here; the equivalent would be cv::equalizeHist
// Step 5(1): Threshold the Image
Mat1b bw;
threshold(img2, bw, 50, 255, THRESH_BINARY);
// Step 6: Identify Objects in the Image
vector<vector<Point>> contours;
findContours(bw.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for(int i=0; i<contours.size(); ++i)
{
// Step 5(2): bwareaopen
if(contours[i].size() > 50)
{
// Step 7: Examine One Object
Mat1b object(bw.size(), uchar(0));
drawContours(object, contours, i, Scalar(255), CV_FILLED);
imshow("Single Object", object);
waitKey();
}
}
return 0;
}
==========================================================================
Thanks to @Roy Falk
After reading the valuable comments and documentation, I feel that the call
morphologyEx(img, background, MORPH_OPEN, kernel);
can be replaced by
cv::Ptr<cv::cuda::Filter> morph = cv::cuda::createMorphologyFilter(MORPH_OPEN, out.type(), kernel);
morph->apply(out, bc);
Feel free to say if I'm wrong.

As noted in the comment above, morphologyEx is not in the 3.1 API.
I'm guessing you need to map the invocation as documented in the 2.4 documentation to the way it's done in 3.1.
Specifically, morphologyEx has the following parameters:
src - Source image. Use cuda::Filter and cuda::Filter::apply
dst - Destination image. Same as above
op - Type of morphological operation. Try cuda::createMorphologyFilter
...etc.
In other words, 2.4 did the operation in one go (morphologyEx). In 3.1 you first create a filter using createFooFilter and then call the filter's apply method.
This is not really a definitive answer, more of a suggestion, but I couldn't really write all this in a comment. Good luck.
=================================================================
Edit: Try looking at https://github.com/Itseez/opencv/blob/master/samples/gpu/morphology.cpp. It shows how to use cuda::createMorphologyFilter
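For example, a minimal sketch of that create-then-apply pattern for the opening step from the question (assuming OpenCV was built with the cudafilters module; the variable names here are mine):
#include <opencv2/opencv.hpp>
#include <opencv2/cudafilters.hpp>
int main()
{
    cv::Mat img = cv::imread("path_to_image", cv::IMREAD_GRAYSCALE);
    // Upload the input to the GPU
    cv::cuda::GpuMat d_img, d_background;
    d_img.upload(img);
    // Same structuring element as in the CPU code
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    // Create the filter once, then apply it
    // (this replaces morphologyEx(img, background, MORPH_OPEN, kernel))
    cv::Ptr<cv::cuda::Filter> morph =
        cv::cuda::createMorphologyFilter(cv::MORPH_OPEN, d_img.type(), kernel);
    morph->apply(d_img, d_background);
    // Download the result if the rest of the pipeline stays on the CPU
    cv::Mat background;
    d_background.download(background);
    return 0;
}
The filter object can be created once and reused for every frame, which is usually where the GPU version pays off.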

Related

How can I remove sinusoidal noise with frequency-domain filtering in OpenCV

I'm trying to remove the 5 staff lines in sections of sheet music. My original image is this: http://en.wikipedia.org/wiki/Requiem_(Mozart)#/media/File:K626_Requiem_Mozart.jpg
First, I apply a Gaussian filter and binarize with a threshold (min: 100, max: 255).
Then I apply the DFT to this image, erase the appropriate lines, and reconstruct the image by inverse DFT.
I use the sample code from the OpenCV documentation; actually, I doubt that I understand this code. :(
http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
In this sample code there are two Mats: one is 'complexI' for the spectrum, the other is 'magI' for the visualization. The result of cv::dft is complexI; magI is the normalized complexI. My question is this: how can I add a black line (to cancel frequencies in the frequency domain) and reconstruct the image?
OpenCV (now) provides a detailed tutorial on how to deal with periodic noise by spectral filtering: https://docs.opencv.org/trunk/d2/d0b/tutorial_periodic_noise_removing_filter.html
It hinges on using cv::dft(), cv::idft(), cv::mulSpectrums(), and cv::magnitude().
The core function (from the tutorial) to perform the filtering goes like so:
void filter2DFreq(const Mat& inputImg, Mat& outputImg, const Mat& H)
{
Mat planes[2] = { Mat_<float>(inputImg.clone()), Mat::zeros(inputImg.size(), CV_32F) };
Mat complexI;
merge(planes, 2, complexI);
// find FT of image
dft(complexI, complexI, DFT_SCALE);
Mat planesH[2] = { Mat_<float>(H.clone()), Mat::zeros(H.size(), CV_32F) };
Mat complexH;
merge(planesH, 2, complexH);
Mat complexIH;
// apply spectral filter
mulSpectrums(complexI, complexH, complexIH, 0);
// reconstruct the filtered image
idft(complexIH, complexIH);
split(complexIH, planes);
outputImg = planes[0];
}
Refer to the tutorial for more information.
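A rough usage sketch of filter2DFreq (my own file and variable names; in the tutorial the mask H is built by a synthesizeFilterH helper that zeroes notches at the noise peaks, here it simply starts as an all-pass mask):
#include <opencv2/opencv.hpp>
using namespace cv;
// filter2DFreq(...) as defined above
int main()
{
    Mat img = imread("score.png", IMREAD_GRAYSCALE);
    Mat imgF;
    img.convertTo(imgF, CV_32F);
    // Start from an all-pass mask and set it to 0 at the spectral peaks of the
    // periodic noise (note that the cv::dft output is not centered; the
    // tutorial shifts H accordingly).
    Mat H = Mat::ones(imgF.size(), CV_32F);
    Mat filtered;
    filter2DFreq(imgF, filtered, H);
    // Bring the result back to a displayable 8-bit range
    normalize(filtered, filtered, 0, 255, NORM_MINMAX);
    filtered.convertTo(filtered, CV_8U);
    imshow("filtered", filtered);
    waitKey();
    return 0;
}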

HOGDescriptor::computeGradient using OpenCV

I am trying to compute a gradient map using HOGDescriptor.
My code:
HOGDescriptor hog;
hog.compute(faceROI,ders,Size(32,32),Size(0,0),locs);
Mat grad;
Mat sec;
hog.computeGradient(frame_gray, grad, angleofs);
imshow("1", frame_gray);
imshow("2", grad); //here program fails: Unhandled exception at memory location
imshow("3", angleofs); //grad.data = "". Why??
I can't find good examples of using HOGDescriptor::computeGradient.
Help please!
To visualize OpenCV's HOGDescriptor::Calculate(..), use this, it's amazing.
imshow("2", grad); fails because imshow expects grad to be a 1-, 3- or 4-channel image, whereas it is a 2-channel image.
The first channel contains the gradient in the x direction and the second channel the gradient in y. You should split the channels into two images to visualize them:
Mat grad_channel[2];
split(grad, grad_channel);
imshow("grad_x", grad_channel[0]);
imshow("grad_y", grad_channel[1]);
Best

How to detect and match a marker using OpenCV (Template Matching)

I am using an image which holds a marker in a specific area. I tried to detect it using template matching, which is available in OpenCV as cvMatchTemplate.
I am using a web cam to detect the marker; currently the program is detecting it, because I provided the same marker as the template.
But I cannot find a way to check whether it is the best match or only a slight match, because cvMatchTemplate does not only detect the best match, it also keeps detecting areas which match only slightly.
Can anyone please tell me a way to do this? Or if there is any other way to solve my problem, please let me know!
Here is the link for my image card
http://imageshack.us/photo/my-images/266/piggycard.jpg/
(I want to detect the marker and check whether it is matched.)
Here is the code:
// template_mching_test_2.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
int main()
{
IplImage* imgOriginal = cvLoadImage("D:\\4Yr\\Research\\SRS\\Animations\\Piggycard.jpg", 0);
IplImage* imgTemplate = cvLoadImage("D:\\4Yr\\Research\\MakingOf\\Sample Imageas\\PiggyMarkerStart.jpg", 0);
CvCapture *cap = cvCaptureFromCAM(0);
if(!cap)
return -1;
cvNamedWindow("result");
IplImage* imgOriginal2;
IplImage* imgResult;
while(true)
{
imgOriginal = cvQueryFrame(cap);//cvCreateImage(cvSize(imgOriginal->width-imgTemplate->width+1, imgOriginal->height-imgTemplate->height+1), IPL_DEPTH_32F, 1);
imgOriginal2 = cvCreateImage(cvSize(imgOriginal->width,imgOriginal->height),imgOriginal->depth,1);
imgResult = cvCreateImage(cvSize(imgOriginal->width-imgTemplate->width + 1,imgOriginal->height-imgTemplate->height+1),IPL_DEPTH_32F,1);
cvZero(imgResult);
cvZero(imgOriginal2);
cvCvtColor(imgOriginal,imgOriginal2,CV_BGR2GRAY);
cvMatchTemplate(imgOriginal2, imgTemplate, imgResult,CV_TM_CCORR_NORMED);
double min_val=0, max_val=0;
CvPoint min_loc, max_loc;
cvMinMaxLoc(imgResult, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(imgOriginal, max_loc, cvPoint(max_loc.x+imgTemplate->width, max_loc.y+imgTemplate->height), cvScalar(0), 1);
printf("%f \n", max_val);
cvShowImage("result", imgOriginal);
cvWaitKey(10);
cvReleaseImage(&imgOriginal2);
cvReleaseImage(&imgResult);
}
cvDestroyAllWindows();
cvReleaseCapture(&cap);
return 0;
}
As the template I provided the same marker, cropped from the original image. From minMaxLoc I took the max value to check for the best match, but it keeps giving me similar values when the marker is in the frame and when the marker is not in the frame but something matches slightly at a place that previously matched the marker. Does minMaxLoc give us the coordinates (position) of the marker or a matching percentage? Or is there any other way to do this?
Thank you for your consideration.
There is an OpenCV tutorial on the subject of Template Matching.
Using matchTemplate is a good start; it will provide you with an image containing scores for your matching metric (there is a range of choices for the metric, some of which give higher numbers for better matches, some lower).
To subsequently pick out the best match, you will also need the function minMaxLoc, which can locate the minimum and maximum values in this matrix.
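A rough sketch of that idea with the C++ API (the file names and the 0.9 threshold are placeholders to tune; TM_CCOEFF_NORMED tends to separate real matches from background better than the CCORR_NORMED mode used in the question):
#include <opencv2/opencv.hpp>
int main()
{
    cv::Mat frame = cv::imread("scene.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("marker.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat result;
    cv::matchTemplate(frame, templ, result, cv::TM_CCOEFF_NORMED);
    // minMaxLoc gives both the best score (maxVal) and its location (maxLoc)
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    // Decide "found / not found" by comparing the score against a threshold
    const double matchThreshold = 0.9;   // tune for your marker and lighting
    if (maxVal >= matchThreshold)
    {
        cv::rectangle(frame, maxLoc,
                      cv::Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows),
                      cv::Scalar(255), 2);
    }
    cv::imshow("result", frame);
    cv::waitKey();
    return 0;
}
So maxLoc is the position of the best match and maxVal is the matching score, not a position; checking maxVal against a threshold is what tells you whether the marker is really present.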

low-pass filter in opencv

I would like to know how I can apply a low-pass filter in OpenCV to an IplImage.
For example a "boxcar" filter or something similar.
I've googled it but I can't find a clear solution.
If anyone could give me an example or point me in the right direction on how to implement this in OpenCV or JavaCV, I would be grateful.
Thanks in advance.
Here is an example using the C API and IplImage:
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui_c.h"
int main()
{
IplImage* img = cvLoadImage("input.jpg", 1);
IplImage* dst=cvCreateImage(cvGetSize(img),IPL_DEPTH_8U,3);
cvSmooth(img, dst, CV_BLUR);
cvSaveImage("filtered.jpg",dst);
}
For information about the parameters of the cvSmooth function, have a look at the cvSmooth documentation.
If you want to use a custom filter mask you can use the function cvFilter2D:
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui_c.h"
int main()
{
IplImage* img = cvLoadImage("input.jpg", 1);
IplImage* dst=cvCreateImage(cvGetSize(img),IPL_DEPTH_8U,3);
double a[9]={ 1.0/9.0,1.0/9.0,1.0/9.0,
1.0/9.0,1.0/9.0,1.0/9.0,
1.0/9.0,1.0/9.0,1.0/9.0};
CvMat k;
cvInitMatHeader( &k, 3, 3, CV_64FC1, a );
cvFilter2D( img ,dst, &k,cvPoint(-1,-1));
cvSaveImage("filtered.jpg",dst);
}
These examples use OpenCV 2.3.1.
The OpenCV filtering documentation is a little confusing because the functions try to efficiently cover every possible filtering technique.
There is a tutorial on using your own filter kernels, which covers box filters.
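For reference, the equivalent low-pass filtering with the C++ API could look like this (a sketch; cv::blur is the normalized box filter, and cv::filter2D takes a custom kernel just like cvFilter2D above):
#include <opencv2/opencv.hpp>
int main()
{
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_COLOR);
    cv::Mat dst;
    // Normalized 3x3 box ("boxcar") filter
    cv::blur(img, dst, cv::Size(3, 3));
    // Or the same thing with an explicit averaging kernel
    cv::Mat kernel = cv::Mat::ones(3, 3, CV_32F) / 9.0f;
    cv::filter2D(img, dst, -1, kernel);
    cv::imwrite("filtered.jpg", dst);
    return 0;
}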

OpenCV with Kinect: beginner's doubts

I have OpenCV and libfreenect configured on Ubuntu 11.04 and they work separately.
I also have some experience with OpenCV, but the problem is I don't know how to combine the Kinect and OpenCV. I was hoping someone could kindly help me out by pointing to good documentation or providing simple sample code for using the Kinect in OpenCV.
The first link on Google for "OpenCV kinect" was this. I hope it helps.
To quickly get things working, I would recommend adding the OpenCV libraries to one of the OpenNI samples (for example NiUserTracker). There you can acquire the depth image from the DepthMetaData object in the following way:
//obtain depth image
DepthMetaData depthMD;
g_DepthGenerator.GetMetaData(depthMD);
const XnDepthPixel* g_Depth = depthMD.Data();
cv::Mat DepthBuf(480,640,CV_16UC1,(unsigned char*)g_Depth);
//To display the depth image you probably would want to normalize it to 0-255 range first
//obtain rgb image
ImageMetaData ImageMD;
g_ImageGenerator.GetMetaData(ImageMD);
const XnUInt8* g_Img =ImageMD.Data();
cv::Mat ImgBuf(480,640,CV_8UC3,(unsigned short*)g_Img);
cv::Mat ImgBuf2;
cv::cvtColor(ImgBuf,ImgBuf2,CV_RGB2BGR);
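As hinted in the comment above, the 16-bit depth map has to be scaled down to 8 bits before it displays nicely; a small sketch (the 10000 mm maximum range is an assumption to adjust for your setup):
cv::Mat DepthVis;
DepthBuf.convertTo(DepthVis, CV_8UC1, 255.0 / 10000.0);  // scale 0..10000 mm to 0..255
cv::imshow("Depth", DepthVis);
cv::waitKey(1);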
To get MrglMrgl's code to work, I had to add the following at the beginning:
nRetVal = g_Context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_ImageGenerator);
if (nRetVal != XN_STATUS_OK)
{
printf("No image node exists! Check your XML.");
return 1;
}
And this at the end:
cv::namedWindow( "Example1", CV_WINDOW_AUTOSIZE );
cv::imshow( "Example1", ImgBuf2 );
cv::waitKey(0);
