MatToQImage QPixmap::scaled: Pixmap is a null pixmap - opencv

I am trying to display a video processed by OpenCV, with frames in Mat format, on a self-defined Qt FrameLabel. The Mat frames are shown correctly as long as I also call the OpenCV function imshow("frame", mat), but as soon as I remove imshow("frame", mat) I only get QPixmap::scaled: Pixmap is a null pixmap. Even trying to lock the thread does not solve the problem. I have done some searching and found suggestions that the QPixmap resource might need to be defined in something like "xxx.qrc".
My code is as follows:
void IntelligentSurveillance::on_pushButton_clicked()
{
    QMutex processingMutex;
    string filename = VIDEO_PATH;
    VideoCapture cap;
    cap.open(filename);
    Mat mat;
    QImage qImage;
    for (;;)
    {
        cap >> mat;
        //processingMutex.lock();
        qImage = MatToQImage(mat);
        //processingMutex.unlock();
        ui.frame->setPixmap(QPixmap::fromImage(qImage).scaled(ui.frame->width(), ui.frame->height(), Qt::IgnoreAspectRatio));
        //imshow("frame", mat);
        if (waitKey(30) >= 0) break;
    }
}
The output is always: QPixmap::scaled: Pixmap is a null pixmap
The problem is that it works fine when I add imshow("frame",mat)...
Can anyone give some help? Thanks!!

You need to use a temporary variable to hold the scaled image, for example:
QPixmap icon = QPixmap(":/img/" + iconFileName);
QPixmap tmp = icon.scaled(30, 30, Qt::KeepAspectRatio);
value = tmp;
And this message is gone!
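For reference, a minimal sketch of how that suggestion could look inside the loop from the question. The empty-frame check is my own assumption (an empty Mat at the end of the stream would also leave the pixmap null), not part of the original answer:

    cap >> mat;
    if (mat.empty())                       // assumed safeguard: stop when the stream ends
        break;
    QImage qImage = MatToQImage(mat);
    if (!qImage.isNull()) {
        QPixmap pixmap = QPixmap::fromImage(qImage);            // temporary pixmap
        QPixmap scaled = pixmap.scaled(ui.frame->width(),
                                       ui.frame->height(),
                                       Qt::IgnoreAspectRatio);  // scale the copy
        ui.frame->setPixmap(scaled);
    }
    if (waitKey(30) >= 0) break;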

Related

OpenCV 3.0 gold imread function doesn't work with absolute path on Ubuntu 14.04

I use the imread function from OpenCV 3.0 gold on Ubuntu 14.04, which I installed following a web guide, but the imread function doesn't work with an absolute path. It works like imread("a.jpg"), but not like imread("\home\a\a.jpg"). I want to use the function to read an image sequence. Here is my code:
char filename1[200];
char filename2[200];
sprintf(filename1, "/home/image_2/%06d.png", 0);
sprintf(filename2, "/home/image_2/%06d.png", 1);
//read the first two frames from the dataset
Mat img_1_c = imread(filename1);
Mat img_2_c = imread(filename2);
if ( !img_1_c.data || !img_2_c.data ) {
    std::cout << " --(!) Error reading images " << std::endl;
    return -1;
}
There are images in folder a, like 000000.png. When I run it, it says "--(!) Error reading images". Can someone help me? Thank you.
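No answer is recorded here, but a small diagnostic sketch can narrow the problem down: print the exact path being passed to imread and check that the file is readable by the process before loading it. The path below is copied from the question and is only illustrative:

    #include <cstdio>
    #include <fstream>
    #include <opencv2/opencv.hpp>

    int main()
    {
        char filename1[200];
        sprintf(filename1, "/home/image_2/%06d.png", 0);

        // Print the path exactly as imread will see it.
        std::printf("trying to open: %s\n", filename1);

        // Check that the file exists and is readable by this process.
        std::ifstream probe(filename1);
        if (!probe.good())
            std::printf("file cannot be opened -- check the path and permissions\n");

        cv::Mat img = cv::imread(filename1);
        std::printf("imread %s\n", img.data ? "succeeded" : "failed");
        return 0;
    }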

opencv image stitching

I've already been looking for hours but I can't find the problem.
I get the following error when I want to stitch two images together:
OpenCV Error: Assertion failed (y==0 || data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) in unknown function...
This is the code (pano.jpg was already stitched together in a previous run of the algorithm, where the same algorithm did work...):
cv::Mat img1 = imread("input2.jpg");
cv::Mat img2 = imread("pano.jpg");
std::vector<cv::Mat> vectest;
vectest.push_back(img2);
vectest.push_back(img1);
cv::Mat result;
cv::Stitcher stitcher = cv::Stitcher::createDefault( false );
stitcher.setPanoConfidenceThresh(0.01);
detail::BestOf2NearestMatcher *matcher = new detail::BestOf2NearestMatcher(false, 0.001/*=match_conf*/);
detail::SurfFeaturesFinder *featureFinder = new detail::SurfFeaturesFinder(100);
stitcher.setFeaturesMatcher(matcher);
stitcher.setFeaturesFinder(featureFinder);
cv::Stitcher::Status status = stitcher.stitch( vectest, result );
You can find the images here:
pano.jpg: https://dl.dropbox.com/u/5276376/pano.jpg
input2.jpg: https://dl.dropbox.com/u/5276376/input2.jpg
Edit:
I compiled OpenCV 2.4.2 myself but still have the same problem...
The system crashes in the stitcher.cpp file on the following line:
blender_->feed(img_warped_s, mask_warped, corners[img_idx]);
In this feed function it crashes at these lines:
int y_ = y - y_tl;
const Point3_<short>* src_row = src_pyr_laplace[i].ptr<Point3_<short> >(y_);
Point3_<short>* dst_row = dst_pyr_laplace_[i].ptr<Point3_<short> >(y);
And finally this assertion in mat.hpp:
template<typename _Tp> inline _Tp* Mat::ptr(int y)
{
    CV_DbgAssert( y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) );
    return (_Tp*)(data + step.p[0]*y);
}
Strange that everything works fine for some people here...
I am stitching images at the moment as well, but not with the high-level Stitcher functionality; instead I code every step myself with OpenCV 2.4.2. As far as I know, you could give that a try: run SurfFeaturesFinder first, then BestOf2NearestMatcher. Just a suggestion, good luck!
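Spelling that suggestion out, here is a minimal sketch of the first two steps with the OpenCV 2.4 detail API. The file names and thresholds are placeholders taken from the question, and SurfFeaturesFinder requires the nonfree module to be available in your build:

    #include <iostream>
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/stitching/detail/matchers.hpp>

    int main()
    {
        cv::Mat img1 = cv::imread("pano.jpg");
        cv::Mat img2 = cv::imread("input2.jpg");
        if (img1.empty() || img2.empty())
            return -1;

        // Step 1: find SURF features in each image.
        cv::detail::SurfFeaturesFinder finder(100.0 /* Hessian threshold, as in the question */);
        std::vector<cv::detail::ImageFeatures> features(2);
        finder(img1, features[0]);
        finder(img2, features[1]);
        features[0].img_idx = 0;
        features[1].img_idx = 1;

        // Step 2: match the features of the two images pairwise.
        cv::detail::BestOf2NearestMatcher matcher(false /* try_use_gpu */, 0.3f /* match_conf */);
        std::vector<cv::detail::MatchesInfo> pairwise_matches;
        matcher(features, pairwise_matches);

        // Inspect the pairwise confidences before moving on to homography
        // estimation, warping and blending.
        for (size_t i = 0; i < pairwise_matches.size(); ++i)
            std::cout << "pair " << i << " confidence: "
                      << pairwise_matches[i].confidence << std::endl;
        return 0;
    }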

OpenCV Stitcher returns ERR_NEED_MORE_IMGS

I have a problem here with a simple stitching tool test using OpenCV.
Here is my code:
IplImage *pLeft,
*pRight;
pLeft = cvLoadImage( "left.jpg" );
pRight = cvLoadImage( "right.jpg" );
cv::Mat cvMatLeft( pLeft, true ),
cvMatRight( pRight, true );
std::vector<cv::Mat> imgs;
imgs.push_back( cvMatLeft );
imgs.push_back( cvMatRight );
cv::Mat cvMatOutput;
cv::Stitcher myStitcher = cv::Stitcher::createDefault( true );
cv::Stitcher::Status myStatus = myStitcher.stitch( imgs, cvMatOutput );
I get back the enum ERR_NEED_MORE_IMGS while running this code.
When I debug into the functions called by OpenCV, I noticed the following:
stitch()'s first argument is a cv::InputArray named images. Taking a closer look at it shows that its sz.width and sz.height are 0.
Further on, while running through estimateTransform() twice, the function matchImages() is called, where the member imgs_ is checked. It is derived from the InputArray, and its size() (the number of images) consequently ends up being 0.
This leads to the mentioned enum.
What am I doing wrong? Something in the initialization of the stitcher or of the cv::Mat?
Thanks in advance
I think it occurs when you use very similar images, or images from which only a small number of feature points can be extracted.
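Given that the asker observed sz.width and sz.height being 0, it is also worth ruling out empty inputs, which would likewise leave the stitcher with too few usable images. A minimal sketch of the same test with the loads verified (file names kept from the question):

    #include <iostream>
    #include <vector>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/stitching/stitcher.hpp>

    int main()
    {
        // Load directly into cv::Mat instead of going through IplImage.
        cv::Mat left  = cv::imread("left.jpg");
        cv::Mat right = cv::imread("right.jpg");

        // An empty Mat here means the stitcher sees fewer than two usable images.
        if (left.empty() || right.empty()) {
            std::cout << "failed to load input images" << std::endl;
            return -1;
        }

        std::vector<cv::Mat> imgs;
        imgs.push_back(left);
        imgs.push_back(right);

        cv::Mat output;
        cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
        cv::Stitcher::Status status = stitcher.stitch(imgs, output);
        std::cout << "stitch status: " << status << std::endl;
        return 0;
    }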

OpenCV cvWriteFrame , cvWriteToAVI

The problem is as follows: I want to read a video file from disk, convert every frame to grayscale, and write it to a new video file.
I am using the following code to do so:
CvCapture* capture = cvCreateFileCapture( "/root/tree.avi");
if (!capture){
return -1;
}
...
CvVideoWriter* writer =
cvCreateVideoWriter("/root/output.avi",CV_FOURCC('D','I','V','X'),fps,size);
...
IplImage* gray_frame = cvCreateImage(
size,
IPL_DEPTH_8U,
1
);
while( (bgr_frame=cvQueryFrame(capture)) != NULL ) {
cvShowImage( "Example2_10", bgr_frame );
cvCvtColor(bgr_frame,gray_frame,CV_RGB2GRAY);
cvShowImage( "B&W result", gray_frame );
cvWriteFrame( writer, gray_frame);
char c = cvWaitKey(10);
if( c == 27 ) break;
}
...
The problem is that the program runs fine, but it fails to write frames to output.avi and creates only a blank output.avi file of just 5.5 KB.
One more thing: I am unable to write only gray_frame using cvWriteFrame; if I try to write bgr_frame instead, it does write the output.avi file successfully.
If anyone knows a solution, please let me know.
You need to pass is_color=0 to the cvCreateVideoWriter function if you want to write grayscale images. With the default you are only able to write color images to your output video.
It is the last parameter of the cvCreateVideoWriter function which defaults to 1:
CvVideoWriter* cvCreateVideoWriter(const char* filename, int fourcc, double fps, CvSize frame_size, int is_color=1)
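Applied to the code above, the writer creation would look roughly like this (fps and size as in the asker's variables):

    // Last argument is_color=0 tells the writer to expect single-channel (gray) frames.
    CvVideoWriter* writer =
        cvCreateVideoWriter("/root/output.avi", CV_FOURCC('D','I','V','X'), fps, size, 0);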
In my case the problem was that I created a CvVideoWriter with a different resolution than the images I wrote to it using cvWriteFrame. This worked fine in an earlier version of OpenCV, but failed to write frames in OpenCV 2.4.

cannot access values of Mat returned from OpenCV functions

I'm using cv::imread to load an image and do some processing on it,
but I don't know why I can't read the values of the Mat returned from the imread function.
I used the Mat::at method:
Mat iplimage = imread("Photo.jpg",1); //input
for(int i = 0; i < iplimage.rows; i++){
    for(int j = 0; j < iplimage.cols; j++){
        cout << (int)iplimage.at<int>(i,j) << " ";
    }
    cout << endl;
}
But it produced an error:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] &&
(unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) &&
((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15)
== elemSize1()) in unknown function, file: "c:\opencv2.2\include\opencv2\core\mat.hpp", line 517
But it is OK if I use direct data access:
Mat iplimage = imread("Photo.jpg",1); //input
for(int i = 0; i < iplimage.rows; i++){
    for(int j = 0; j < iplimage.cols; j++){
        cout << (int)iplimage.data[i*iplimage.cols + j] << " ";
    }
    cout << endl;
}
Could anyone tell me how I can use the Mat::at method to access the above Mat?
Thanks for your help!
See this answer. In your case, the returned Mat is a 3-channel, 8-bit image, hence iplimage.at<int> fails the assertion; you just need to access the intensities in each channel the way the mentioned answer explains.
You are trying to load the image with 3 channels; it will be fine if you change it to this: Mat iplimage = imread("Photo.jpg",0); //input
I found the solution. It is because I used:
inputImage.at<int>(i,j) or inputImage.at<float>(1,2)
instead of (int)inputImage.at<uchar>(1,2) or (float)inputImage.at<uchar>(1,2).
Sorry for my carelessness!
Mat iplimage = imread("Photo.jpg",1) this read in a 3 channel colour image. You can use Mat iplimage = imread("Photo.jpg",0) to read in the image as greyscale so that your iplimage.at(i,j) would work. Please note that you should use .at if your image is 8bit instead of .at.
If your image is not 8bit, you should use iplimage = imread("Photo.jpg",CV_LOAD_IMAGE_ANYDEPTH)
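To make the accepted approach concrete, here is a small sketch of both access patterns for an image loaded with imread. The pixel position (0,0) is only illustrative:

    #include <iostream>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat colour = cv::imread("Photo.jpg", 1);  // 8-bit, 3 channels (CV_8UC3)
        cv::Mat gray   = cv::imread("Photo.jpg", 0);  // 8-bit, 1 channel  (CV_8UC1)
        if (colour.empty() || gray.empty())
            return -1;

        // 3-channel image: fetch the whole BGR triple with at<cv::Vec3b>.
        cv::Vec3b bgr = colour.at<cv::Vec3b>(0, 0);
        std::cout << "B=" << (int)bgr[0] << " G=" << (int)bgr[1]
                  << " R=" << (int)bgr[2] << std::endl;

        // 1-channel image: a single 8-bit value with at<uchar>.
        std::cout << "gray=" << (int)gray.at<uchar>(0, 0) << std::endl;
        return 0;
    }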
