libpng error: IDAT: invalid distance too far back - opencv

I'm using the INRIA person dataset. I iterate over the images and everything is fine, and then I have this function:
vector<Mat> HOG_extract(Mat input_image, bool patch_size, int width, int height)
{
    Mat gray_image;
    cvtColor(input_image, gray_image, CV_BGR2GRAY);

    HOGDescriptor hog;
    hog.winSize = Size(width, height);
    hog.blockSize = Size(block_size, block_size);       // block_size, block_stride,
    hog.blockStride = Size(block_stride, block_stride); // cell_size and bin_size are
    hog.cellSize = Size(cell_size, cell_size);          // defined elsewhere in my code
    hog.nbins = bin_size;

    vector<float> hog_value;
    vector<Point> locations;
    hog.compute(gray_image, hog_value, Size(0, 0), Size(0, 0), locations); // exception is thrown here
}
When it gets to hog.compute I receive an exception together with libpng error: IDAT: invalid distance too far back.
How can I solve this? It looks like something happened when reading the image with imread and converting it to grayscale.

There seems to be an issue with the dataset. I think it was edited with an older version of libpng, which introduced the error.
I managed to fix it with the help of the "png-fix-IDAT-windowsize" tool, which you can find here: http://www.libpng.org/pub/png/apps/pngcheck.html
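Until the broken files are repaired (or just to find out which ones they are), you can also scan the dataset up front and collect anything OpenCV cannot decode. A minimal sketch; the helper name and file list are placeholders, and it relies on imread returning an empty Mat when decoding fails:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Collects the paths that OpenCV fails to decode, so they can be
// repaired or excluded before running HOG extraction.
std::vector<std::string> findUnreadableImages(const std::vector<std::string>& paths)
{
    std::vector<std::string> bad;
    for (size_t i = 0; i < paths.size(); i++) {
        cv::Mat img = cv::imread(paths[i]); // libpng prints its error here
        if (img.empty())
            bad.push_back(paths[i]);
    }
    return bad;
}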

Related

normalize and convertScaleAbs insights in opencv

#include <opencv2/opencv.hpp>
using namespace cv;

Mat img = imread("/home/akash/Desktop/coding/IP/openCV/chessBoard.jpg", 1);
Mat gray;
int thresh = 200;

void corner_detect(int, void *)
{
    Mat dst = Mat::zeros(gray.size(), CV_32FC1);
    Mat dst_norm, dst_scale;
    cornerHarris(gray, dst, 2, 3, 0.04);
    normalize(dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat()); //????
    convertScaleAbs(dst_norm, dst_scale);                           //????
    namedWindow("dst_norm", CV_WINDOW_AUTOSIZE);
    imshow("dst_norm", dst_norm);
    for (int i = 0; i < dst_norm.rows; i++) {
        for (int j = 0; j < dst_norm.cols; j++) {
            if (dst_norm.at<float>(i, j) > thresh) {
                circle(dst_scale, Point(j, i), 5, Scalar(0), 2);
            }
        }
    }
    imshow("window", dst_scale);
}

int main()
{
    namedWindow("window", CV_WINDOW_AUTOSIZE);
    namedWindow("input", CV_WINDOW_AUTOSIZE);
    cvtColor(img, gray, CV_BGR2GRAY);
    createTrackbar("threshold", "window", &thresh, 255, corner_detect);
    corner_detect(0, 0);
    imshow("input", img);
    waitKey(0);
    return 0;
}
I have taken this code from here; it is basically corner detection with circles drawn around the detected corners.
I want to ask about the working of normalize and convertScaleAbs (where "????" is marked in the code). I have read the docs but I am still in doubt. I also printed dst_norm, but that didn't help me.
I gathered that normalize is used to change the value range of an array, and that convertScaleAbs converts a CV_32FC1 image to CV_8UC1.
But I am unable to understand what actually happens (i.e., how I got dst_norm and dst_scale when I printed them).
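Here is a tiny experiment showing my current understanding (a minimal sketch; the 2x2 values are made up just for illustration):

// A 2x2 float matrix standing in for the Harris response.
float data[4] = {0.5f, 1.0f, 2.0f, 4.0f};
Mat dst(2, 2, CV_32FC1, data);
Mat dst_norm, dst_scale;

// Linearly maps min(dst)..max(dst) onto 0..255; output stays CV_32FC1.
normalize(dst, dst_norm, 0, 255, NORM_MINMAX, CV_32FC1, Mat());
// dst_norm is now roughly {0, 36.4, 109.3, 255}

// Takes the absolute value, saturates to 0..255 and converts to CV_8UC1,
// the type that imshow and the drawing functions handle best.
convertScaleAbs(dst_norm, dst_scale);
// dst_scale is now {0, 36, 109, 255} as 8-bit values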
Any help would be appreciated.
[screenshot for reference]

How to detect humans using findContours based on the human shape?

I want to ask how to detect humans or pedestrians from blobs (findContours). I've tried to learn how to detect any object in the frame using findContours(), like this:
#include"stdafx.h"
#include<vector>
#include<iostream>
#include<opencv2/opencv.hpp>
#include<opencv2/core/core.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<opencv2/highgui/highgui.hpp>
int main(int argc, char *argv[])
{
cv::Mat frame;
cv::Mat fg;
cv::Mat blurred;
cv::Mat thresholded;
cv::Mat thresholded2;
cv::Mat result;
cv::Mat bgmodel;
cv::namedWindow("Frame");
cv::namedWindow("Background Model"
//,CV_WINDOW_NORMAL
);
//cv::resizeWindow("Background Model",400,300);
cv::namedWindow("Blob"
//,CV_WINDOW_NORMAL
);
//cv::resizeWindow("Blob",400,300);
cv::VideoCapture cap("campus3.avi");
cv::BackgroundSubtractorMOG2 bgs;
bgs.nmixtures = 3;
bgs.history = 1000;
bgs.varThresholdGen = 15;
bgs.bShadowDetection = true;
bgs.nShadowDetection = 0;
bgs.fTau = 0.5;
std::vector<std::vector<cv::Point>> contours;
for(;;)
{
cap >> frame;
cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);
bgs.operator()(blurred,fg);
bgs.getBackgroundImage(bgmodel);
cv::threshold(fg,thresholded,70.0f,255,CV_THRESH_BINARY);
cv::threshold(fg,thresholded2,70.0f,255,CV_THRESH_BINARY);
cv::Mat elementCLOSE(5,5,CV_8U,cv::Scalar(1));
cv::morphologyEx(thresholded,thresholded,cv::MORPH_CLOSE,elementCLOSE);
cv::morphologyEx(thresholded2,thresholded2,cv::MORPH_CLOSE,elementCLOSE);
cv::findContours(thresholded,contours,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
cv::cvtColor(thresholded2,result,CV_GRAY2RGB);
int cmin = 50;
int cmax = 1000;
std::vector<std::vector<cv::Point>>::iterator itc=contours.begin();
while (itc!=contours.end()) {
if (itc->size() > cmin && itc->size() < cmax){
std::vector<cv::Point> pts = *itc;
cv::Mat pointsMatrix = cv::Mat(pts);
cv::Scalar color( 0, 255, 0 );
cv::Rect r0= cv::boundingRect(pointsMatrix);
cv::rectangle(frame,r0,color,2);
++itc;
}else{++itc;}
}
cv::imshow("Frame",frame);
cv::imshow("Background Model",bgmodel);
cv::imshow("Blob",result);
if(cv::waitKey(30) >= 0) break;
}
return 0;
}
Now I want to know how to detect humans. Do I need to use HOG, or Haar? If so, how do I use them? Are there any tutorials for learning how? I'm so curious, and it's so much fun learning OpenCV! So addictive! :))
Anyway, I'll appreciate any help here. Thanks. :)
This is a good start, with lots of enthusiasm. There is more than one way to do human detection on images/image sequences. I summarize a few below:
Since you are already extracting blobs that are supposed to be persons or objects, you can compare the features of these blobs with those of blobs resulting from a human in the scene. Many people look at the shape of the head-shoulder region, the height and area of the blob, etc.
You can also look at research papers like this one. The earlier papers are easier to understand and code than the more recent ones.
Instead of using background subtraction, you can also use an approach like Haar Wavelet based detection. This is widely used for face detection, but opencv contains a model for upper body detection. You can also build your own models, as described here.
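If you go the HOG route, OpenCV also ships with a pre-trained pedestrian detector (the Dalal-Triggs model) that you can run directly on each frame, independently of your background subtraction. A minimal sketch; the video filename is the one from your own code, and the detectMultiScale parameters are just common defaults you will want to tune:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("campus3.avi");
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    cv::Mat frame;
    while (cap.read(frame))
    {
        std::vector<cv::Rect> people;
        // Scans the frame at multiple scales with the default people model.
        hog.detectMultiScale(frame, people, 0, cv::Size(8,8),
                             cv::Size(32,32), 1.05, 2);
        for (size_t i = 0; i < people.size(); i++)
            cv::rectangle(frame, people[i], cv::Scalar(0,255,0), 2);
        cv::imshow("People", frame);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}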
Have fun!

OpenCV errors for iOS / detecting Hough Circles

I have been trying for hours to run an Xcode project with OpenCV. I have built the source, imported it into the project, and included

#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

in the .pch file.
I followed the instructions from http://docs.opencv.org/trunk/doc/tutorials/introduction/ios_install/ios_install.html
Still I am getting many Apple Mach-O linker errors when I compile.
Undefined symbols for architecture i386:
"std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
Please help me, I am really lost.
UPDATE:
Errors are all fixed, and now I am trying to detect circles.
Mat src, src_gray;
cvtColor( image, src_gray, CV_BGR2GRAY );
vector<Vec3f> circles;

/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, image.rows/8, 200, 100, 0, 0 );

/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // circle center
    circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
    // circle outline
    circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
I am using the code above; however, no circles are being drawn on the image. Is there something obvious that I am doing wrong?
Try the solution in my answer to this question...
How to resolve iOS Link errors with OpenCV
Also on github I have a couple of simple working samples, with a recently built openCV framework.
NB - OpenCVSquares is simpler than OpenCVSquaresSL. The latter was adapted for Snow Leopard backwards compatibility - it contains two builds of the openCV framework and 3 targets, so you are better off using the simpler OpenCVSquares if it will run on your system.
To adapt OpenCVSquares to detect circles, I suggest that you start with the Hough Circles c++ sample from the openCV distro, and use it to adapt/replace CVSquares.cpp and CVSquares.h with, say, CVCircles.cpp and CVCircles.h
The principles are exactly the same:
remove the UI code from the c++; the UI is provided on the obj-C side
transform the main() function into a static member function for the class declared in the header file. This should mirror in form an Objective-C message to the wrapper (which translates the obj-c method to a c++ function call).
From the objective-C side, you are passing a UIImage to the wrapper object, which:
converts the UIImage to a cv::Mat image
passes the Mat to a c++ class for processing
converts the result from Mat back to UIImage
returns the processed UIImage back to the objective-C calling object
update
The adapted houghcircles.cpp should look something like this at its most basic (I've replaced the CVSquares class with a CVCircles class):
cv::Mat CVCircles::detectedCirclesInImage (cv::Mat img)
{
    //expects a grayscale image on input
    //returns a colour image on output
    Mat cimg;
    medianBlur(img, img, 5);
    cvtColor(img, cimg, CV_GRAY2RGB);

    vector<Vec3f> circles;
    HoughCircles(img, circles, CV_HOUGH_GRADIENT, 1, 10,
                 100, 30, 1, 60 // change the last two parameters
                                // (min_radius & max_radius) to detect larger circles
                 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        circle( cimg, Point(c[0], c[1]), c[2], Scalar(255,0,0), 3, CV_AA);
        circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, CV_AA);
    }
    return cimg;
}
Note that the input parameters are reduced to one - the input image - for simplicity. Shortly I will post a sample on github which will include some parameters tied to slider controls in the iOS UI, but you should get this version working first.
As the function signature has changed, you should follow it up the chain...
Alter the houghcircles.h class definition:
static cv::Mat detectedCirclesInImage (const cv::Mat image);
Modify the CVWrapper class to accept a similarly-structured method which calls detectedCirclesInImage
+ (UIImage*) detectedCirclesInImage:(UIImage*) image
{
    UIImage* result = nil;
    cv::Mat matImage = [image CVGrayscaleMat];
    matImage = CVCircles::detectedCirclesInImage (matImage);
    result = [UIImage imageWithCVMat:matImage];
    return result;
}
Note that we are converting the input UIImage to grayscale, as the houghcircles function expects a grayscale image on input. Take care to pull the latest version of my github project; I found an error in the CVGrayscaleMat category which is now fixed. The output image is colour (colour applied to the grayscale input image to pick out the found circles).
If you want your input and output images in colour, you just need to ensure that you make a grayscale conversion of your input image for sending to HoughCircles() - e.g. cvtColor(input_image, gray_image, CV_RGB2GRAY); - and apply your found circles to the colour input image (which becomes your return image).
Finally in your CVViewController, change your messages to CVWrapper to conform to this new signature:
UIImage* image = [CVWrapper detectedCirclesInImage:self.image];
If you follow all of these details your project will produce circle-detected results.
update 2
OpenCVCircles now on Github
With sliders to adjust HoughCircles() parameters

OpenCV 2.4.3 and videoInput into Mat

I am trying to capture video into a Mat type from two or more MSFT LifeCam HD-3000s using the videoInput library, OpenCV 2.4.3, and VS2010 Express.
I followed the example at: Most efficient way to capture and send images from a webcam in a network and it worked great.
Now I want to replace the IplImage type with a c++ Mat type. I tried to follow the example at: opencv create mat from camera data
That gave me the following:
VI = new videoInput;
int CurrentCam = 0;
VI->setupDevice(CurrentCam, WIDTH, HEIGHT);
int width = VI->getWidth(CurrentCam);
int height = VI->getHeight(CurrentCam);
unsigned char* yourBuffer = new unsigned char[VI->getSize(CurrentCam)];
cvNamedWindow("test", 1);
while(1)
{
    VI->getPixels(CurrentCam, yourBuffer, false, true);
    cv::Mat image(width, height, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
    imshow("test", image);
    if(cvWaitKey(15) == 27) break;
}
The output is a lined image (i.e., the first line looks correct but the second seems off, the third correct, the fourth off, etc.). That suggests that either the step part is wrong or there is some difference between the IplImage type and the Mat type that I am not getting. I have tried looking at/altering all the parameters, but I can't find anything.
Hopefully, an answer will help those facing what appears to be a fairly common issue with loading an image from the videoInput library into the Mat type.
Thanks in advance!
Try:
cv::Mat image(height, width, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
The Mat constructor takes rows first - Mat(rows, cols, type, data, step) - so passing width as the row count makes every row wrap at the wrong boundary, which produces exactly the alternating-line artifact you describe.
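If you ever need to be explicit about the stride instead of relying on AUTO_STEP, a sketch assuming a tightly packed BGR buffer with no row padding:

// Each row is width pixels at 3 bytes per pixel, with no padding between rows.
cv::Mat image(height, width, CV_8UC3, yourBuffer, static_cast<size_t>(width) * 3);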

OpenCV C++/Obj-C: goodFeaturesToTrack inside specific blob

Is there a quick solution to specify the ROI only within the contours of the blob I'm interested in?
My ideas so far:
Using the boundingRect, but it contains too much stuff I don't want to analyse.
Applying goodFeaturesToTrack to the whole image and then looping through the output coordinates to eliminate the ones outside my blob's contour.
Thanks in advance!
EDIT
I found what I need: cv::pointPolygonTest() seems to be the right thing, but I'm not sure how to implement it …
Here's some code:
// ...
IplImage forground_ipl = result;
IplImage *labelImg = cvCreateImage(forground.size(), IPL_DEPTH_LABEL, 1);
CvBlobs blobs;
bool found = cvb::cvLabel(&forground_ipl, labelImg, blobs);
IplImage *imgOut = cvCreateImage(cvGetSize(&forground_ipl), IPL_DEPTH_8U, 3);
if (found) {
vb::CvBlob *greaterBlob = blobs[cvb::cvGreaterBlob(blobs)];
cvb::cvRenderBlob(labelImg, greaterBlob, &forground_ipl, imgOut);
CvContourPolygon *polygon = cvConvertChainCodesToPolygon(&greaterBlob->contour);
}
"polygon" contains the contour I need.
goodFeaturesToTrack is implemented this way:
- (std::vector<cv::Point2f>)pointsFromGoodFeaturesToTrack:(cv::Mat &)_image
{
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(_image, corners, 100, 0.01, 10);
    return corners;
}
So next I need to loop through the corners and check each point with cv::pointPolygonTest(), right?
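In other words, something like this (my own sketch of the filtering loop; contour is the blob polygon as a std::vector<cv::Point>, and corners comes from the method above):

std::vector<cv::Point2f> inside;
for (size_t i = 0; i < corners.size(); i++) {
    // With measureDist == false: +1 = inside, 0 = on the edge, -1 = outside.
    if (cv::pointPolygonTest(contour, corners[i], false) >= 0)
        inside.push_back(corners[i]);
}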
You can create a mask over your interest region:
EDIT
How to make a mask:
Mat mask(origImg.size(), CV_8UC1);
mask.setTo(Scalar::all(0));
// here I assume your contour is extracted with findContours,
// and is stored in a vector<vector<Point> >,
// and that you know which contour is the blob;
// if that's not the case, use fillPoly instead of drawContours()
Scalar color(255, 255, 255); // white; actually, the mask is single-channel
drawContours(mask, contours, contourIdx, color);
// fillPoly(Mat& img, const Point** pts, const int* npts,
//          int ncontours, const Scalar& color)
And now you're ready to use it. BUT, look carefully at the result - I have heard about some bugs in OpenCV regarding the mask parameter for feature extractors, and I am not sure if it's about this one.
// note the mask parameter:
void goodFeaturesToTrack(InputArray image, OutputArray corners, int maxCorners,
double qualityLevel, double minDistance,
InputArray mask=noArray(), int blockSize=3,
bool useHarrisDetector=false, double k=0.04 )
This will also improve the speed of your application - goodFeaturesToTrack eats a huge amount of time, and if you apply it only to a small region, the overall gain is significant.
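Putting it together, the call would look something like this (a sketch assuming origImg is your grayscale frame and mask is the image built above):

std::vector<cv::Point2f> corners;
// Corners are only searched where mask is non-zero, i.e. inside the blob.
cv::goodFeaturesToTrack(origImg, corners, 100, 0.01, 10, mask);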
