JavaCV/OpenCV cvDrawContours modifies original image

I am experimenting with JavaCV (OpenCV) and I am confused by the following behavior.
My program simply:
Grab an image
Create a grayscale version of the image (leaving the original image untouched)
Threshold the grayscale image
Find contours in the grayscale image (cloning the image since cvFindContours modifies the image and we want to display it as is)
Draw contours on the original color image
The problem is that unless I clone grabbedImage (see the commented line), the grayscale image is modified and contours are drawn on it. Also, it is as if multiple contours are drawn on grabbedImage.
I also tried adding a sleep to the loop, and that fixes the problem as well. Could it be that I get the same (modified) grabbedImage multiple times? I checked the Java reference and it is different, but could it still be backed by the same buffer?
Any idea?
Thank you
package com.mdarveau.opencvtest;

import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;

import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.FrameGrabber;
import com.googlecode.javacv.cpp.opencv_core.CvContour;
import com.googlecode.javacv.cpp.opencv_core.CvMemStorage;
import com.googlecode.javacv.cpp.opencv_core.CvScalar;
import com.googlecode.javacv.cpp.opencv_core.CvSeq;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import com.googlecode.javacv.cpp.opencv_objdetect;

public class Demo {
    public static void main( String[] args ) throws Exception {
        // Preload the opencv_objdetect module to work around a known bug.
        Loader.load( opencv_objdetect.class );

        FrameGrabber grabber = FrameGrabber.createDefault( 1 );
        grabber.start();

        IplImage grabbedImage = grabber.grab();
        int width = grabbedImage.width();
        int height = grabbedImage.height();
        IplImage grayImage = IplImage.create( width, height, IPL_DEPTH_8U, 1 );
        CvMemStorage storage = CvMemStorage.create();

        CanvasFrame filterProbe = new CanvasFrame( "Filtered", CanvasFrame.getDefaultGamma() / grabber.getGamma() );
        CanvasFrame enhancedProbe = new CanvasFrame( "Enhanced", CanvasFrame.getDefaultGamma() / grabber.getGamma() );

        while ( filterProbe.isVisible() && enhancedProbe.isVisible() && (grabbedImage = grabber.grab()) != null ) {
            cvClearMemStorage( storage );

            // Convert to grayscale image...
            cvCvtColor( grabbedImage, grayImage, CV_BGR2GRAY );

            // UNCOMMENTING THIS FIXES THE PROBLEM: grabbedImage = grabbedImage.clone();

            // Let's find some contours! But first some thresholding...
            cvThreshold( grayImage, grayImage, 128, 255, CV_THRESH_BINARY );

            // To check if an output argument is null we may call either isNull() or equals(null).
            CvSeq contour = new CvSeq( null );
            // cvFindContours modifies the image, so clone it first since we want to keep the grayscale version.
            cvFindContours( grayImage.clone(), storage, contour, Loader.sizeof( CvContour.class ), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE );

            while ( contour != null && !contour.isNull() ) {
                if ( contour.elem_size() > 0 ) {
                    CvSeq points = cvApproxPoly( contour, Loader.sizeof( CvContour.class ), storage, CV_POLY_APPROX_DP, cvContourPerimeter( contour ) * 0.02, 0 );
                    cvDrawContours( grabbedImage, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1 /*CV_FILLED*/, CV_AA );
                }
                contour = contour.h_next();
            }

            filterProbe.showImage( grayImage );
            enhancedProbe.showImage( grabbedImage );
        }

        filterProbe.dispose();
        enhancedProbe.dispose();
        grabber.stop();
    }
}

This is the expected behavior of cvFindContours(); the OpenCV documentation states:
The function modifies the image while extracting the contours.
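That is why the code already passes grayImage.clone() to cvFindContours. As for the contours piling up on grabbedImage, your own experiment points at the likely cause: if the grabber hands back the same underlying native buffer on every grab() (which depends on the FrameGrabber backend, so treat this as an assumption), anything drawn on grabbedImage survives into later iterations. Below is a minimal sketch of the workaround you already found, using only the calls from your own code: clone the frame once per iteration and draw on the clone instead of on grabbedImage.

while ( filterProbe.isVisible() && enhancedProbe.isVisible() && (grabbedImage = grabber.grab()) != null ) {
    cvClearMemStorage( storage );

    // Work on a private copy so drawing never touches the grabber's internal buffer.
    IplImage frame = grabbedImage.clone();

    // Grayscale and threshold as before.
    cvCvtColor( frame, grayImage, CV_BGR2GRAY );
    cvThreshold( grayImage, grayImage, 128, 255, CV_THRESH_BINARY );

    // cvFindContours modifies its input, so give it a throwaway clone of the grayscale image.
    CvSeq contour = new CvSeq( null );
    cvFindContours( grayImage.clone(), storage, contour, Loader.sizeof( CvContour.class ), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE );

    while ( contour != null && !contour.isNull() ) {
        if ( contour.elem_size() > 0 ) {
            CvSeq points = cvApproxPoly( contour, Loader.sizeof( CvContour.class ), storage, CV_POLY_APPROX_DP, cvContourPerimeter( contour ) * 0.02, 0 );
            // Draw on the private copy, not on grabbedImage.
            cvDrawContours( frame, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA );
        }
        contour = contour.h_next();
    }

    filterProbe.showImage( grayImage );
    enhancedProbe.showImage( frame );
}

If the loop runs for a long time, you may also want to release the temporary clones explicitly rather than relying on garbage collection.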

Related

How to create 3D surface with skeletonized images?

I'm using OpenCV 3.1 (with VTK 7.1) and I am trying to create a 3D-like surface from 2D images.
The idea is simple:
input a continuous image (each frame of a video stream)
skeletonize each frame
stack the frames in 3-dimensional space. Here is the problem:
How can I convert an image to a point cloud, and how can I display it?
Here is a part of the code:
for (;;)
{
    inStream >> singleFrm;
    if (singleFrm.empty()) {
        break;
    }
    else {
        imshow("Origin", singleFrm);
        singleFrm = singleFrm(roi);
        cvtColor(singleFrm, roiSkelFrm, CV_RGB2GRAY);
        const Size2d size(roiSkelFrm.cols, roiSkelFrm.rows);
        threshold(roiSkelFrm, roiSkelFrm, 80, 255, cv::THRESH_BINARY);
        thinning(roiSkelFrm, roiSkelFrm);

        stackChFrm[0] = roiSkelFrm;
        stackChFrm[1] = roiSkelFrm;
        stackChFrm[2] = roiSkelFrm;
        cv::merge(stackChFrm, 3, skelFrm);
        skelFrm.convertTo(skelFrm, CV_32FC3);

        viz::WCloud aCloudSlice(skelFrm, viz::Color::white());
        myWindow.showWidget("image", aCloudSlice);
        myWindow.spinOnce(1, true);
        if (waitKey(1) == 27)
            break;
    }
}
while (!myWindow.wasStopped())
{
    myWindow.spinOnce(1, true);
}

How to save (cvWrite or imwrite) an image in OpenCV 2.4.3?

I am trying to save an OpenCV image to the hard drive.
Here is what I tried:
public void SaveImage (Mat mat) {
    Mat mIntermediateMat = new Mat();

    Imgproc.cvtColor(mRgba, mIntermediateMat, Imgproc.COLOR_RGBA2BGR, 3);

    File path = Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES);
    String filename = "barry.png";
    File file = new File(path, filename);

    Boolean bool = null;
    filename = file.toString();
    bool = Highgui.imwrite(filename, mIntermediateMat);

    if (bool == true)
        Log.d(TAG, "SUCCESS writing image to external storage");
    else
        Log.d(TAG, "Fail writing image to external storage");
}
Can anyone show me how to save that image with OpenCV 2.4.3?
Your question is a bit confusing, as you ask about OpenCV on the desktop but your code is for Android, and you ask about IplImage but your posted code uses Mat. Assuming you're on the desktop using C++, you can do something along the lines of:
cv::Mat image;
std::string image_path;
//load/generate your image and set your output file path/name
//...
//write your Mat to disk as an image
cv::imwrite(image_path, image);
...Or for a more complete example:
void SaveImage(cv::Mat mat)
{
    cv::Mat img;
    cv::cvtColor(...); //not sure where the variables in your example come from

    std::string store_path("..."); //put your output path here
    bool write_success = cv::imwrite(store_path, img);
    //do your logging...
}
The image format is chosen based on the supplied filename, e.g. if your store_path string were "output_image.png", then imwrite would save it as a PNG image. You can see the list of valid extensions in the OpenCV docs.
One caveat to be aware of when writing images to disk with OpenCV is that the expected scaling differs depending on the Mat type; that is, float images are expected to be within the range [0, 1], while, say, unsigned char images run from 0 to 255.
For IplImages, I'd advise just switching to use Mat, as the old C-interface is deprecated. You can convert an IplImage to a Mat via cvarrToMat then use the Mat, e.g.
IplImage* oldC0 = cvCreateImage(cvSize(320,240),16,1);
Mat newC = cvarrToMat(oldC0);
//now can use cv::imwrite with newC
Alternatively, you can convert an IplImage to a Mat just with
Mat newC(oldC0); //where newC is a Mat and oldC0 is your IplImage
Also I just noticed this tutorial at the OpenCV website, which gives you a walk-though on loading and saving images in a (desktop) environment.
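Since the code in the question is Android Java rather than C++, here is a minimal sketch of the scaling caveat above using the Java bindings that ship with OpenCV 2.4 (where imwrite lives in Highgui; in 3.x it moved to Imgcodecs). saveFloatImage is a hypothetical helper and assumes a float Mat whose values are already in [0, 1]:

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

public class FloatMatWriter {
    // Hypothetical helper: rescales a float Mat in [0, 1] to 8-bit before writing,
    // since imwrite expects 8-bit (or, for some formats, 16-bit) data.
    public static boolean saveFloatImage(Mat floatMat, String path) {
        Mat out = new Mat();
        // convertTo keeps the channel count and multiplies every value by the given factor.
        floatMat.convertTo(out, CvType.CV_8U, 255.0);
        return Highgui.imwrite(path, out);
    }
}

If you are already working with 8-bit data, as with the mRgba frame in your snippet, no conversion is needed and Highgui.imwrite(filename, mat) is enough, exactly as in your code.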

Error with OpenCV & openFrameworks while cropping an image

I have the following code:
IplImage* f( IplImage* src )
{
    // Must have dimensions of output image
    IplImage* cropped = cvCreateImage( cvSize(1280,500), src->depth, src->nChannels );
    // Say what the source region is
    cvSetImageROI( src, cvRect( 0, 0, 1280, 500 ) );
    // Do the copy
    cvCopy( src, cropped );
    cvResetImageROI( src );
    return cropped;
}

void testApp::setup(){
    img.loadImage("test.jpg");
    finder.setup("haarcascade_frontalface_default.xml");
    finder.findHaarObjects(img);
}

//--------------------------------------------------------------
void testApp::update(){
}

//--------------------------------------------------------------
ofRectangle cur;
void testApp::draw(){
    img = f(img);
    img.draw(0, 0);
    ofNoFill();
    for(int i = 0; i < finder.blobs.size(); i++) {
        cur = finder.blobs[i].boundingRect;
        ofRect(cur.x-20, cur.y-20, cur.width+50, cur.height+50);
    }
}
It produces an error. I think it's because I don't convert IplImage to ofImage. Can someone please tell me how to do it?
I would imagine that the img instance you're passing to f() is an ofxCvImage or similar. As far as I know, ofxCvImage stores its IplImage as a protected member, so you can't cast an ofxCvImage to an IplImage.
You might try
img = f(img.getCvImage()) // ofxCvImage::getCvImage() returns IplImage

OpenCV: image retrieved from camera capture is always gray

I always get a grey screen when showing an image captured from the camera with OpenCV.
capture = cvCaptureFromCAM(-1);
cvGrabFrame(capture);
image = cvRetrieveFrame(capture);
cvShowImage("name", image);
After this I see the grey screen, even if I do it in a loop. The same code works well on another computer, though. What is the problem? The same OpenCV library version is used on both computers. I'm working in Visual Studio 2010, C++.
OpenCV version 2.2.0
EDIT 1: The camera used in both computers is the same.
And I have tried to rebuild the opencv on the computer where the problem happens, it didn't help.
I had the same problem in OpenCvSharp. I don't know what the cause is, but for some reason, calling the "WaitKey" method after displaying the image solved it.
CvCapture mov = new CvCapture(filepath);
IplImage frame = mov.QueryFrame();
CvWindow win1 = new CvWindow("window name");
win1.ShowImage(frame);
CvWindow.WaitKey(1);
Your problem sounds very weird.
Try to use cvQueryFrame instead of cvRetrieveFrame() and let me know if it does make a difference.
Here is some face-detection code using OpenCV; the way it captures frames is very similar:
int main( int argc, const char** argv )
{
    CvCapture* capture;
    Mat frame;

    //-- 1. Load the cascades
    if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; };
    if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; };

    //-- 2. Read the video stream
    capture = cvCaptureFromCAM( -1 );
    if( capture )
    {
        while( true )
        {
            frame = cvQueryFrame( capture );

            //-- 3. Apply the classifier to the frame
            if( !frame.empty() )
            { detectAndDisplay( frame ); }
            else
            { printf(" --(!) No captured frame -- Break!"); break; }

            int c = waitKey(10);
            if( (char)c == 'c' ) { break; }
        }
    }
    return 0;
}

detector->detect(img, keypoint); error

I want to implement bag of words in OpenCV. After detector->detect(img, keypoint); detects the keypoints, the following error appears when I try to clear them with keypoint.clear(); or when the function returns:
"Unhandled exception at 0x011f45bb in BOW.exe: 0xC0000005: Access violation reading location 0x42ebe098."
The detected keypoints also have bizarre coordinates, such as cv::Point_ pt {x=-1.5883997e+038, y=-1.5883997e+038}.
Part of the code
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
Ptr<DescriptorExtractor> extractor = new SurfDescriptorExtractor();
Ptr<FeatureDetector> detector = new SurfFeatureDetector(2000);

void extractTrainingVocabulary() {
    IplImage *img;
    int i, j;
    CvSeq *imageKeypoints = 0;
    for( j = 1; j <= 60; j++ )
        for( i = 1; i <= 60; i++ ) {
            sprintf( ch, "%d%s%d%s", j, " (", i, ").jpg" );
            const char* imageName = ch;
            Mat img = imread( ch );
            vector<KeyPoint> keypoint;
            detector->detect( img, keypoint );
            Mat features;
            extractor->compute( img, keypoint, features );
            bowTrainer.add( features );
            keypoint.clear(); //problem
        }
    return;
}
I noticed something about your code: in extractTrainingVocabulary() you declare IplImage* img;, and inside the loop you declare another variable with the same name (but a different type): Mat img = imread(ch);.
Even though that might not be the problem, it's certainly not good practice. I would fix that immediately and update the code in your question.
