I am capturing frames from a Blackmagic DeckLink capture card and trying to convert them to a cv::Mat.
I have this working, but the resulting frames look like this:
When they should look like this:
The incoming frames are valid, so it must be my conversion. Based on this question:
Getting left and right frames with BlackMagic SDK and convert them into opencv mat
I am using:
bool deckLinkVideoFrameToCvMat(ComPtr<IDeckLinkVideoInputFrame> in,
                               cv::Mat& out)
{
    void* data;
    if (FAILED(in->GetBytes(&data))) {
        std::cout << "Failed to obtain the data from videoFrame" << std::endl;
        return false;
    }
    cv::Mat loadedImage;
    cv::Mat mat(in->GetHeight(), in->GetWidth(), CV_8UC2, data, in->GetRowBytes());
    cv::cvtColor(mat, loadedImage, cv::COLOR_YUV2BGR_UYVY);
    loadedImage.copyTo(out);
    return true;
}
Where am I going wrong?
EDIT: with the stride parameter removed, as suggested:
Related
I want to display a video read with OpenCV in a Qt widget. I have already read the frames (I have a function that reads frames), but I don't know how to show them. I need to show the video in a small widget in a window.
Please share code that can help me.
As far as I know, you should display the video in a QLabel. This link might help you with QLabel: https://doc.qt.io/qt-6/qlabel.html
First create a label; then you can play your video in a while(1) loop. Don't forget that the input type a QLabel expects is QPixmap, but the image type in OpenCV is cv::Mat, so you have to convert the image from Mat to QPixmap. For this you can convert the Mat to a QImage, and then convert the QImage to a QPixmap in Qt.
I hope you will understand it when you read the code and its comments.
cv::VideoCapture cap("String Video Address");
if (!cap.isOpened())
    QMessageBox::information(this, "", "error: Video not loaded "); // show error message
cv::Mat cvframe;
QImage Qframe;
while (1) {
    cap >> cvframe;
    if (cvframe.empty())
        break;
    Qframe = convertOpenCVMatToQtQImage(cvframe);
    ui->Video_lable->setPixmap(QPixmap::fromImage(Qframe)); // show images on form labels
    ui->Video_lable->setScaledContents(true);
    ui->Video_lable->setSizePolicy(QSizePolicy::Ignored, QSizePolicy::Ignored);
    char c = (char)cv::waitKey(25); // waits to display frame
    if (c == 27)
        break;
}
// The function that converts an OpenCV Mat to a Qt QImage
QImage MainWindow::convertOpenCVMatToQtQImage(cv::Mat mat)
{
    if (mat.channels() == 1) { // 1-channel (grayscale) image
        return QImage((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
    } else if (mat.channels() == 3) { // 3-channel color image
        cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); // swap BGR to RGB
        return QImage((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
    } else {
        qDebug() << "in convertOpenCVMatToQtQImage, image was not 1 channel or 3 channel, should never get here";
    }
    return QImage(); // return a blank QImage if the above did not work
}
I am using the following OpenCV code to access the video feed from a camera (the lsusb command in the Jetson TX1 terminal lists the camera as Pixart Imaging, Inc.).
Code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(1);
    if (!cap.isOpened())
    {
        std::cout << "Input error\n";
        return -1;
    }
    cv::namedWindow("Video Feed", cv::WINDOW_AUTOSIZE);
    for (;;)
    {
        cv::Mat frame;
        cap >> frame;
        std::cout << "Type: " << frame.type() << "\n";
        //cv::cvtColor(frame, frame, CV_YUV2RGB);
        //cv::resize(frame, frame, cv::Size2i(256, 144));
        cv::imshow("Video Feed", frame);
        if (cv::waitKey(10) == 27)
        {
            break;
        }
    }
    cv::destroyAllWindows();
    return 0;
}
The screenshots of the video feed can be seen below:
I am trying to identify the color format of the camera and convert it to RGB. I have tried different color formats, but I focused mainly on YUV to RGB conversion as shown below (this line is commented out in the above code): cv::cvtColor(frame, frame, CV_YUV2RGB);
I have also tried different variants of YUV as listed here. However, I haven't received any result close to a normal RGB image.
I am also getting the following message on the terminal:
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
1) Is this just a defective camera?
2) Are there any tests/approaches to identify and rectify the problem?
Edit:
I have added a new picture to give an idea of what the actual color of the shirt of the person closer to the camera is:
I have grabbed an image from a VideoCapture object, then converted it to a QImage to send it to a server. After I receive it on the server side, I want to do some image processing on the received image, which is a QImage. So before performing any processing, I have to convert it back to a cv::Mat.
I have a function converting cv::Mat to QImage:
// Pointer to the input Mat's pixel data
const uchar *qImageBuffer = (const uchar*)mat.data;
// Create QImage with same dimensions as input Mat (no copy yet)
QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
img.bits(); // detach: forces the QImage to deep-copy the buffer
return img.rgbSwapped();
I have a function converting QImage to cv::Mat:
Mat QImageToMat(const QImage& src)
{
    cv::Mat tmp(src.height(), src.width(), CV_8UC3, (uchar*)src.bits(), src.bytesPerLine());
    cv::Mat result = tmp; // shallow copy: shares tmp's data (my lack of knowledge with OpenCV)
    for (int i = 0; i < src.height(); i++) {
        memcpy(result.ptr(i), src.scanLine(i), src.bytesPerLine());
    }
    cvtColor(result, result, cv::COLOR_RGB2BGR);
    return result;
}
I have been searching for about 2 days for how to convert QImage to cv::Mat, but with no luck; none of the code snippets works for me. I don't know why, but the image after conversion looks bad. You can see it in the image on the left.
Does someone have any idea about what could be causing the problem? Thanks in advance.
LEFT: the image after conversion from QImage to Mat. RIGHT: the original image, which is in QImage format.
I tried to use the stitcher module in OpenCV to stitch images. Below is my code (core part).
int main()
{
    Mat img1 = imread("5.jpg");
    Mat img2 = imread("6.jpg");
    //Mat img3 = imread("11.jpg");
    imgs.push_back(img1);
    imgs.push_back(img2);
    //imgs.push_back(img3);
    Mat pano; // Mat to store the output pano image
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu); // create a Stitcher object
    Stitcher::Status status = stitcher.stitch(imgs, pano); // stitch the input images together
    cout << "hello" << endl;
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        return -1;
    }
    imshow("pano", pano);
    imwrite(result_name, pano); // write the result to the output image
    waitKey(0);
    return 0;
}
"imgs" in the code above is a vector of Mat type.
I saw a lot of similar code on the net, but I don't know why my code has such a strict limitation on the size of the input images. If the input image size is around 800 * 600, it works well. But when the size is 650 * 650 or 1280 * 640 or something else, the console window just closes very soon after the program starts, no image is saved to "panoResult.jpg", and no image is displayed. While debugging, Visual Studio 2010 just displays 'Image_stitching2.exe: Native' has exited with code -1 (0xffffffff).
Can anyone help with this? I really need to make progress here, since I have already spent several hours trying to fix it. Any tips or help would be great.
I'm trying to convert frames captured from a Basler camera to OpenCV's Mat format. There isn't a lot of information from the Basler API documentation, but these are the two lines in the Basler example that should be useful in determining what the format of the output is:
// Get the pointer to the image buffer
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();
cout << "Gray value of first pixel: " << (uint32_t) pImageBuffer[0] << endl << endl;
I know what the image format is (currently set to mono 8-bit), and have tried doing:
img = cv::Mat(964, 1294, CV_8UC1, &pImageBuffer);
img = cv::Mat(964, 1294, CV_8UC1, Result.Buffer());
Neither of these works. Any suggestions/advice would be much appreciated, thanks!
EDIT: I can access the pixels in the Basler image by:
for (int i = 0; i < 1294*964; i++)
    (uint8_t) pImageBuffer[i];
If that helps with converting it to OpenCV's Mat format.
You are creating the cv images to use the camera's memory, rather than having the images own their own memory. The problem may be that the camera is locking that pointer, or perhaps it expects to reallocate and move it on each new image.
Try creating the images without the last parameter and then copy the pixel data from the camera to the image using memcpy().
// Danger! Result.Buffer() may be changed by the Basler driver without your knowing
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();

// This is using memory that you have no control over - inside the Result object
// (note also that &pImageBuffer passes the address of the pointer, not the pixels)
img = cv::Mat(964, 1294, CV_8UC1, &pImageBuffer);

// Instead do this
img = cv::Mat(964, 1294, CV_8UC1); // manages its own memory

// copies from Result.Buffer into img
memcpy(img.ptr(), Result.Buffer(), 1294*964);

// edit: cv::Mat may store its rows aligned on a 4-byte boundary,
// so if the source data isn't aligned you will have to copy row by row:
for (int irow = 0; irow < 964; irow++) {
    memcpy(img.ptr(irow), Result.Buffer() + (irow*1294), 1294);
}
C++ code to get a Mat frame from a Pylon camera:
Pylon::DeviceInfoList_t devices;
// ... get pylon devices if you have more than a camera connected ...
pylonCam = new CInstantCamera(tlFactory->CreateDevice(devices[selectedCamId]));
Pylon::CGrabResultPtr ptrGrabResult;
Pylon::CImageFormatConverter formatConverter;
Pylon::CPylonImage pylonImage; // target buffer for the converted frame
formatConverter.OutputPixelFormat = Pylon::PixelType_BGR8packed;
pylonCam->MaxNumBuffer = 30;
pylonCam->StartGrabbing(GrabStrategy_LatestImageOnly);
std::cout << " trying to get width and height from pylon device " << std::endl;
pylonCam->RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
formatConverter.Convert(pylonImage, ptrGrabResult);
Mat temp = Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t*)pylonImage.GetBuffer());