I want to display a video read with OpenCV in a Qt widget. I already have a function that reads the frames, but I don't know how to show them. I need to show the video in a small widget inside a window.
Please share some code that can help me.
As far as I know, you should play the video in a QLabel. This page might help you with QLabel: https://doc.qt.io/qt-6/qlabel.html
First create a label, and then you can play your video in a while(1) loop. Don't forget that a QLabel takes a QPixmap as input, while an OpenCV image is a cv::Mat, so you have to convert the image from Mat to QPixmap.
To do this, you can convert the Mat to a QImage, and then convert the QImage to a QPixmap in Qt.
I hope the code and its comments make this clear.
cv::VideoCapture cap("String Video Address");
if (!cap.isOpened())
    QMessageBox::information(this, "", "error: Video not loaded"); // show error message

cv::Mat cvframe;
QImage Qframe;
ui->Video_lable->setScaledContents(true); // scale frames to the label's size (set once, before the loop)
ui->Video_lable->setSizePolicy(QSizePolicy::Ignored, QSizePolicy::Ignored);
while (1) {
    cap >> cvframe;
    if (cvframe.empty())
        break;
    Qframe = convertOpenCVMatToQtQImage(cvframe);
    ui->Video_lable->setPixmap(QPixmap::fromImage(Qframe)); // show the frame on the form's label
    QCoreApplication::processEvents(); // give Qt a chance to actually repaint the label
    char c = (char)cv::waitKey(25); // delay between frames
    if (c == 27) // stop on Esc
        break;
}
// The function that converts an OpenCV cv::Mat to a Qt QImage
QImage MainWindow::convertOpenCVMatToQtQImage(cv::Mat mat)
{
    if (mat.channels() == 1) { // 1-channel (grayscale) image
        // copy() detaches the QImage from mat's buffer, which dies with this function
        return QImage((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_Grayscale8).copy();
    }
    else if (mat.channels() == 3) { // 3-channel color image
        cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); // OpenCV stores BGR, QImage expects RGB
        return QImage((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_RGB888).copy();
    }
    else {
        qDebug() << "in convertOpenCVMatToQtQImage, image was not 1 channel or 3 channel, should never get here";
    }
    return QImage(); // return a blank QImage if the above did not work
}
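Note that a bare while(1) loop never returns control to Qt's event loop, so on some setups the label will not repaint at all. A cleaner alternative is to drive playback with a QTimer. Here is a minimal sketch of that approach, assuming cap (a cv::VideoCapture) and timer (a QTimer*) are members of the same MainWindow, and reusing the converter above:

#include <QTimer>

// Called once, e.g. from the MainWindow constructor: open the video
// and start a timer that fetches a frame roughly every 25 ms.
void MainWindow::startVideo()
{
    cap.open("String Video Address");
    if (!cap.isOpened()) {
        QMessageBox::information(this, "", "error: Video not loaded");
        return;
    }
    ui->Video_lable->setScaledContents(true);
    timer = new QTimer(this);
    connect(timer, &QTimer::timeout, this, &MainWindow::nextFrame);
    timer->start(25);
}

// Slot fired by the timer: grab one frame, convert it, display it.
void MainWindow::nextFrame()
{
    cv::Mat cvframe;
    cap >> cvframe;
    if (cvframe.empty()) { // end of the video: stop the timer
        timer->stop();
        return;
    }
    QImage qframe = convertOpenCVMatToQtQImage(cvframe);
    ui->Video_lable->setPixmap(QPixmap::fromImage(qframe));
}

This way Qt repaints between frames on its own and the UI stays responsive.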
I am capturing frames from a Blackmagic DeckLink capture card and trying to convert them to a cv::Mat.
I have this working, but the resulting frames look like this:
When they should look like this:
The incoming frames are valid, so it must be my conversion. Based on this question:
Getting left and right frames with BlackMagic SDK and convert them into opencv mat
I am using:
bool deckLinkVideoFrameToCvMat(ComPtr<IDeckLinkVideoInputFrame> in,
                               cv::Mat& out)
{
    void* data;
    if (FAILED(in->GetBytes(&data))) {
        std::cout << "Failed to obtain the data from the video frame" << std::endl;
        return false; // note: returning S_FALSE here would be wrong, since S_FALSE is 1 (true)
    }
    cv::Mat loadedImage;
    cv::Mat mat = cv::Mat(in->GetHeight(), in->GetWidth(), CV_8UC2, data, in->GetRowBytes());
    cv::cvtColor(mat, loadedImage, cv::COLOR_YUV2BGR_UYVY);
    loadedImage.copyTo(out);
    return true;
}
Where am I going wrong?
EDIT: with the stride parameter removed, as suggested:
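One sanity check worth adding inside deckLinkVideoFrameToCvMat (a sketch; assumes the DeckLink SDK headers are available): confirm the frame's actual pixel format with GetPixelFormat() before choosing the conversion, since CV_8UC2 with COLOR_YUV2BGR_UYVY is only correct for 8-bit UYVY input.

// Choose the conversion from the frame's reported pixel format.
BMDPixelFormat fmt = in->GetPixelFormat();
cv::Mat mat;
switch (fmt) {
case bmdFormat8BitYUV: // 8-bit UYVY, 2 bytes per pixel
    mat = cv::Mat(in->GetHeight(), in->GetWidth(), CV_8UC2, data, in->GetRowBytes());
    cv::cvtColor(mat, out, cv::COLOR_YUV2BGR_UYVY);
    break;
case bmdFormat8BitBGRA: // 8-bit BGRA, 4 bytes per pixel
    mat = cv::Mat(in->GetHeight(), in->GetWidth(), CV_8UC4, data, in->GetRowBytes());
    cv::cvtColor(mat, out, cv::COLOR_BGRA2BGR);
    break;
default:
    std::cout << "Unhandled DeckLink pixel format" << std::endl;
    return false;
}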
I have grabbed an image from a VideoCapture object, then converted it to a QImage to send it to a server. After I receive it back from the server I want to do some image processing on the received image, which is a QImage. So before performing any processing I have to convert it back to a cv::Mat.
I have a function converting cv::Mat to QImage:

QImage MatToQImage(const cv::Mat& mat)
{
    // Wrap the Mat's buffer in a QImage of the same dimensions
    const uchar *qImageBuffer = (const uchar*)mat.data;
    QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
    img.bits(); // non-const bits() forces a deep copy, detaching from the Mat's buffer
    return img.rgbSwapped(); // swap R and B, since OpenCV stores pixels as BGR
}
I also have a function converting QImage to cv::Mat:
cv::Mat QImageToMat(const QImage& src) {
    // Make sure the data really is tightly packed 24-bit RGB before wrapping it
    QImage converted = src.convertToFormat(QImage::Format_RGB888);
    cv::Mat tmp(converted.height(), converted.width(), CV_8UC3,
                (uchar*)converted.bits(), converted.bytesPerLine());
    cv::Mat result = tmp.clone(); // clone() makes a real deep copy (plain assignment does not)
    cv::cvtColor(result, result, cv::COLOR_RGB2BGR); // back to OpenCV's BGR order
    return result;
}
I have been searching for about 2 days for how to convert a QImage to a cv::Mat, but with no luck; none of the code snippets works for me. I don't know why, but the image after conversion looks bad; you can see it on the left below.
Does someone have any idea what could be causing the problem? Thanks in advance.
LEFT: the image after conversion from QImage to Mat. RIGHT: the original image, which is in QImage format.
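To narrow down where the corruption happens, a round-trip test that never touches the network may help. A minimal sketch using the two converters above on a local test image:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <QImage>

// Convert Mat -> QImage -> Mat locally and count differing pixels;
// a non-zero count means the converters themselves are at fault,
// otherwise the problem is in the send/receive path.
int countRoundTripErrors()
{
    cv::Mat original = cv::imread("test.jpg"); // any local BGR test image
    QImage asQImage = MatToQImage(original);
    cv::Mat back = QImageToMat(asQImage);

    cv::Mat diff;
    cv::absdiff(original, back, diff);
    return cv::countNonZero(diff.reshape(1)); // flatten channels before counting
}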
I tried to use the stitcher module in OpenCV to stitch images. Below is my code (core part).
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>

using namespace std;
using namespace cv;

bool try_use_gpu = false;
vector<Mat> imgs;                      // input images to stitch
string result_name = "panoResult.jpg"; // output file name

int main()
{
    Mat img1 = imread("5.jpg");
    Mat img2 = imread("6.jpg");
    //Mat img3 = imread("11.jpg");
    imgs.push_back(img1);
    imgs.push_back(img2);
    //imgs.push_back(img3);

    Mat pano; // Mat to store the output pano image
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu); // create a Stitcher object
    Stitcher::Status status = stitcher.stitch(imgs, pano); // stitch the input images together
    cout << "hello" << endl;
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        return -1;
    }
    imshow("pano", pano);
    imwrite(result_name, pano); // write the result to the output image
    waitKey(0);
    return 0;
}
"imgs" in the code above is a vector of Mat type.
I have seen a lot of similar code on the net, but I don't know why my code has a big limitation on the size of the input images. If the input images are around 800 x 600, it works well. But when the size is 650 x 650 or 1280 x 640 or something else, the console window just closes very soon after the program starts. No image is saved to "panoResult.jpg" and no image is displayed. While debugging, Visual Studio 2010 just reports: 'Image_stitching2.exe: Native' has exited with code -1 (0xffffffff).
Can anyone help with this? I really need to make progress on this; I have already spent several hours trying to fix it with no success. Any tips or help would be great.
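One note that may help here: an exit with code -1 and no error message printed usually means an unhandled exception was thrown inside stitch(), rather than a bad status code being returned. Wrapping the call will at least surface OpenCV's message; a sketch replacing the plain stitch() line:

// Catch an OpenCV exception so its message prints instead of the
// process silently dying with exit code -1.
Stitcher::Status status;
try {
    status = stitcher.stitch(imgs, pano);
} catch (const cv::Exception& e) {
    cout << "OpenCV exception during stitch: " << e.what() << endl;
    return -1;
}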
I have an OpenCV window that I would like to resize to fill my screen, but when I use the resize function the window flickers. The output is my webcam and I guess the flicker is because my camera does not have those dimensions. Is there any other way to make the output from the camera larger?
cvNamedWindow("video", CV_WINDOW_AUTOSIZE);
IplImage *frame=0;
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
cvResizeWindow("video", 1920,1080);
Here is an example of using cvResize() to resize the image or frame.

IplImage *frame;
CvCapture *capture = cvCaptureFromCAM(0);
cvNamedWindow("capture", CV_WINDOW_AUTOSIZE);
while (1) {
    frame = cvQueryFrame(capture);
    IplImage *frame_resize = cvCreateImage(cvSize(1366, 768), frame->depth, frame->nChannels);
    cvResize(frame, frame_resize, CV_INTER_LINEAR);
    cvShowImage("capture", frame_resize); // show the resized frame, not the original
    cvWaitKey(25);
    cvReleaseImage(&frame_resize); // avoid leaking one image per iteration
}
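For comparison, here is the same loop written against the C++ API (a sketch), where cv::resize replaces cvResize and cv::Mat manages its own memory:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture(0); // default webcam
    cv::Mat frame, frame_resized;
    while (capture.read(frame)) {
        // scale every frame to the target size before showing it
        cv::resize(frame, frame_resized, cv::Size(1366, 768), 0, 0, cv::INTER_LINEAR);
        cv::imshow("capture", frame_resized);
        if (cv::waitKey(25) == 27) // Esc to quit
            break;
    }
    return 0;
}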
One possibility is to use the cvResize() function to change the size of the frame.
However, an easier way is to get rid of the CV_WINDOW_AUTOSIZE flag -- without that the video will be displayed at the size of the window.
Something like this:
cvNamedWindow("video", 0);
cvResizeWindow("video", 1920,1080);
IplImage *frame=0;
while(true)
{
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
int c = waitKey(10);
...
}
I am not sure of the cause of the flickering, as I could not replicate that issue on my system.
Therefore I cannot guarantee that the flickering will disappear for you (but at least the video should be the correct size).
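For anyone on a newer OpenCV, the same flag trick looks like this in the C++ API (a sketch):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture(0);
    // WINDOW_NORMAL (rather than WINDOW_AUTOSIZE) makes resizeWindow effective,
    // and frames are scaled to the window size automatically.
    cv::namedWindow("video", cv::WINDOW_NORMAL);
    cv::resizeWindow("video", 1920, 1080);
    cv::Mat frame;
    while (capture.read(frame)) {
        cv::imshow("video", frame);
        if (cv::waitKey(10) == 27) // Esc to quit
            break;
    }
    return 0;
}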
I'm coding an OpenCV 2.1 program with Visual C++ 2008 Express. I want to get the color data of each pixel and modify the pixels one by one.
I understand that frmSource.channels() returns the number of color channels of the Mat frmSource, but it always returns 1 instead of 3 or 4, even though the input is definitely a color video image.
Am I wrong?
If I'm wrong, please guide me on how to get each color component of each pixel.
Also, the total frame count from get(CV_CAP_PROP_FRAME_COUNT) is much larger than the frame count I expected, but when I divide get(CV_CAP_PROP_FRAME_COUNT) by get(CV_CAP_PROP_FPS) (the frame rate), I get the result I expected.
I understand that a frame is like a single cut of a movie, at about 30 frames per second. Is that right?
My code is as follows:
void fEditMain()
{
    VideoCapture vdoCap("C:/Users/Public/Videos/Sample Videos/WildlifeTest.wmv");
    // this video file is provided with Windows 7
    if (!vdoCap.isOpened())
    {
        printf("failed to open!\n");
        return;
    }
    Mat frmSource;
    vdoCap >> frmSource;
    if (!frmSource.data) return;

    string vRecFIleName = "WildlifeRec.wmv"; // any output path
    VideoWriter vdoRec(vRecFIleName, CV_FOURCC('W','M','V','1'), 30, frmSource.size(), true);
    namedWindow("video", 1);

    // record video
    int vFrmCntNo = 1;
    int vChannel;
    for (;;)
    {
        int vDepth = frmSource.depth();
        vChannel = frmSource.channels();
        // here! vChannel is always 1; I expect 3 or 4 because it is a color image
        imshow("video", frmSource); // show frmSource
        vdoRec << frmSource;
        vdoCap >> frmSource;
        if (!frmSource.data)
            return;
    }
}
I am not sure if this will answer your question, but if you use IplImage it is very easy to get the correct number of channels as well as to manipulate the image. Try using:

IplImage *frm = cvQueryFrame(cap);
int numOfChannels = frm->nChannels;
A video is composed of frames and you can know how many frames pass in a second by using get(CV_CAP_PROP_FPS). If you divide the frame count by the FPS you'll get the number of seconds for the clip.
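If you would rather stay with cv::Mat, each pixel's color components are reachable with at<cv::Vec3b>() once the frame really has 3 channels. A minimal sketch (the +50 red shift is just an arbitrary example edit):

#include <opencv2/opencv.hpp>

// Walk every pixel of an 8-bit BGR frame, reading and modifying its components.
void processFrame(cv::Mat& frame)
{
    CV_Assert(frame.type() == CV_8UC3); // expect a 3-channel color frame
    for (int y = 0; y < frame.rows; ++y) {
        for (int x = 0; x < frame.cols; ++x) {
            cv::Vec3b& px = frame.at<cv::Vec3b>(y, x);
            // px[0] = blue, px[1] = green, px[2] = red (BGR order)
            px[2] = cv::saturate_cast<uchar>(px[2] + 50); // e.g. brighten the red channel
        }
    }
}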