I tried to use the stitcher module in OpenCV to stitch images. Below is my code (core part).
int main()
{
    Mat img1 = imread("5.jpg");
    Mat img2 = imread("6.jpg");
    //Mat img3 = imread("11.jpg");
    imgs.push_back(img1);
    imgs.push_back(img2);
    //imgs.push_back(img3);

    Mat pano; // Mat to store the output pano image
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu); // create a Stitcher object
    Stitcher::Status status = stitcher.stitch(imgs, pano);    // stitch the input images together
    cout << "hello" << endl; // leftover debug print
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        return -1;
    }
    imshow("pano", pano);
    imwrite(result_name, pano); // write the result to the output image
    waitKey(0);
    return 0;
}
"imgs" in the code above is a vector of Mat type.
I have seen a lot of similar code on the net, but I don't know why my code has a big limitation on the size of the input images. If the input images are around 800 × 600 it works well, but when the size is 650 × 650 or 1280 × 640 or something else, the console window closes very soon after the program ends, no image is saved to "panoResult.jpg", and no image is displayed. While debugging, Visual Studio 2010 just reports: 'Image_stitching2.exe: Native' has exited with code -1 (0xffffffff).
Can anyone help with this? I really need to make progress here; I have already spent several hours trying to fix it without success. Any tips or help would be great.
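One thing worth checking first (a minimal guard, not part of the original code): imread returns an empty Mat when a file cannot be read, and an empty input will also make the stitcher fail, so it is worth verifying the loads before calling stitch:

// Abort early if any input failed to load; imread does not throw on failure
for (size_t i = 0; i < imgs.size(); ++i) {
    if (imgs[i].empty()) {
        cout << "input image " << i << " failed to load" << endl;
        return -1;
    }
}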
I want to display a video read with OpenCV in a Qt widget. I have already read the frames (I have a function that reads them) but I don't know how to show them. I need to show the video in a small widget in a window.
Please share code that can help me.
As far as I know, you should play the video in a QLabel. This link might help you with QLabel: https://doc.qt.io/qt-6/qlabel.html
First create a label, and then you can play your video in a while(1) loop. Don't forget that a QLabel takes a QPixmap, while an OpenCV image is a cv::Mat, so you have to convert the image from Mat to QPixmap. To do this, you can convert the Mat to a QImage and then build a QPixmap from the QImage in Qt.
I hope you will understand it when you read the code and its comments.
cv::VideoCapture cap("String Video Address");
if(!cap.isOpened())
    QMessageBox::information(this, "", "error: Video not loaded"); // show error message
cv::Mat cvframe;
QImage Qframe;
while(1){
    cap >> cvframe;
    if (cvframe.empty())
        break;
    Qframe = convertOpenCVMatToQtQImage(cvframe);
    ui->Video_lable->setPixmap(QPixmap::fromImage(Qframe)); // show the frame on the form's label
    ui->Video_lable->setScaledContents(true);
    ui->Video_lable->setSizePolicy(QSizePolicy::Ignored, QSizePolicy::Ignored);
    char c = (char)cv::waitKey(25); // delay between frames; Esc (27) quits
    if(c == 27)
        break;
}
// Function that converts an OpenCV cv::Mat to a Qt QImage
QImage MainWindow::convertOpenCVMatToQtQImage(cv::Mat mat)
{
    if(mat.channels() == 1) { // 1-channel (grayscale or black and white) image
        // .copy() makes the QImage own its pixels; without it the QImage would
        // point into mat's buffer, which is freed when mat goes out of scope
        return QImage((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8).copy();
    }
    else if(mat.channels() == 3) { // 3-channel color image
        cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); // OpenCV stores BGR, Qt expects RGB
        return QImage((uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_RGB888).copy();
    }
    else {
        qDebug() << "in convertOpenCVMatToQtQImage, image was not 1 channel or 3 channels, should never get here";
    }
    return QImage(); // return a blank QImage if the above did not work
}
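A side note on the design: cv::waitKey only processes events reliably when an OpenCV HighGUI window exists, so in a pure Qt application it is more idiomatic to drive the grabbing from the Qt event loop. A rough sketch, assuming cap and timer (a QTimer, from <QTimer>) are members of MainWindow:

// Grab one frame per timer tick instead of blocking in a while(1) loop
connect(&timer, &QTimer::timeout, this, [this]() {
    cv::Mat cvframe;
    cap >> cvframe;
    if (cvframe.empty()) { timer.stop(); return; }
    ui->Video_lable->setPixmap(QPixmap::fromImage(convertOpenCVMatToQtQImage(cvframe)));
});
timer.start(25); // roughly 40 fps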
I am capturing frames from a Blackmagic DeckLink capture card and trying to convert them to a cv::Mat.
I have this working, but the resulting frames look like this:
When they should look like this:
The incoming frames are valid, so it must be my conversion. Based on this question:
Getting left and right frames with BlackMagic SDK and convert them into opencv mat
I am using:
bool deckLinkVideoFrameToCvMat(ComPtr<IDeckLinkVideoInputFrame> in, cv::Mat& out)
{
    void* data;
    if (FAILED(in->GetBytes(&data))) {
        std::cout << "Failed to obtain the data from videoFrame" << std::endl;
        return false; // was "return S_FALSE", but S_FALSE is 1, which converts to true
    }
    cv::Mat loadedImage;
    cv::Mat mat = cv::Mat(in->GetHeight(), in->GetWidth(), CV_8UC2, data, in->GetRowBytes());
    cv::cvtColor(mat, loadedImage, CV_YUV2BGR_UYVY);
    loadedImage.copyTo(out);
    return true;
}
Where am I going wrong?
EDIT: with the stride parameter removed, as suggested:
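For reference, this is roughly what the conversion looks like with the stride argument dropped (a sketch based on the edit above); cv::Mat then assumes each row is densely packed, i.e. GetWidth() * 2 bytes per row for UYVY:

// Wrap the frame without an explicit row stride; only correct when
// GetRowBytes() == GetWidth() * 2 (no padding at the end of each row)
cv::Mat mat = cv::Mat(in->GetHeight(), in->GetWidth(), CV_8UC2, data);
cv::cvtColor(mat, loadedImage, CV_YUV2BGR_UYVY);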
I want to stitch many images (25) into one single image of the flat surface of a plastic part. The images look like this:
I am trying to use the Stitcher class from OpenCV. My code is here:
#include <iostream>
#include <fstream>
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/stitching.hpp>

using namespace std;
using namespace cv;

bool try_use_gpu = false;
vector<Mat> imgs;
string result_name = "result.jpg";
Mat img1, img2, img3, img4, img5, img6, img7, pano;

void printUsage();
//int parseCmdArgs(int argc, char** argv);

int main(int argc, char* argv[])
{
    // Load images from HD.
    img1 = imread("1.bmp");
    img2 = imread("2.bmp");
    img3 = imread("3.bmp");
    img4 = imread("4.bmp");

    // Put images into vector of images "imgs".
    imgs.push_back(img1);
    imgs.push_back(img2);
    imgs.push_back(img3);
    imgs.push_back(img4);

    // Create stitcher instance and use stitch method with imgs.
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    stitcher.setPanoConfidenceThresh(0.8);
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << status << endl;
        return -1;
    }
    imwrite(result_name, pano);
    return 0;
}
I always get an error saying "Can't stitch images, error code = 1" (Stitcher::ERR_NEED_MORE_IMGS), so the stitcher is saying that it needs more images. When debugging I see that the images are loaded properly and the vector imgs is built correctly. What can be the reasons? The computation also takes quite long (2 s)...
The stitcher module of OpenCV uses image features to compute an exact alignment.
As a rule of thumb, there should be around 20-30% overlap between the two images that are to be stitched. If there is no significant overlap between the camera fields of view, you won't have enough features in the image intersections. The images you have given have too uniform a background for the stitcher module to find sufficient features in the image intersections. You will need to look at increasing the features in the image intersections in order to align the images.
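If you want to verify this, you can count the feature matches between two neighbouring shots yourself. A minimal sketch using ORB and a brute-force matcher (this uses the OpenCV 3.x features2d API; the file names are placeholders):

#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat imgA = cv::imread("1.bmp", cv::IMREAD_GRAYSCALE);
    cv::Mat imgB = cv::imread("2.bmp", cv::IMREAD_GRAYSCALE);
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat desA, desB;
    orb->detectAndCompute(imgA, cv::noArray(), kpA, desA);
    orb->detectAndCompute(imgB, cv::noArray(), kpB, desB);
    cv::BFMatcher matcher(cv::NORM_HAMMING, true); // cross-check enabled
    std::vector<cv::DMatch> matches;
    matcher.match(desA, desB, matches);
    // very few matches here means the stitcher has nothing to align on
    std::cout << matches.size() << " cross-checked ORB matches" << std::endl;
    return 0;
}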
I want to compare two images and find the same and different parts of the images. I tried the cv::compare and cv::absdiff methods but am confused which one is good for my case. Both show me different results. So how can I achieve my desired task?
Here's an example of how you can use cv::absdiff to find image similarities:
int main()
{
    cv::Mat input1 = cv::imread("../inputData/Similar1.png");
    cv::Mat input2 = cv::imread("../inputData/Similar2.png");

    cv::Mat diff;
    cv::absdiff(input1, input2, diff);

    cv::Mat diff1Channel;
    // WARNING: this will weight channels differently! - instead you might want some different metric here, e.g. (R+G+B)/3 or MAX(R,G,B)
    cv::cvtColor(diff, diff1Channel, CV_BGR2GRAY);

    float threshold = 30; // pixels may differ only up to "threshold" to count as being "similar"
    cv::Mat mask = diff1Channel < threshold;
    cv::imshow("similar in both images", mask);

    // use similar regions in new image: use black as background
    cv::Mat similarRegions(input1.size(), input1.type(), cv::Scalar::all(0));
    // copy masked area
    input1.copyTo(similarRegions, mask);

    cv::imshow("input1", input1);
    cv::imshow("input2", input2);
    cv::imshow("similar regions", similarRegions);
    cv::imwrite("../outputData/Similar_result.png", similarRegions);
    cv::waitKey(0);
    return 0;
}
Using these two inputs:
you'll observe this output (black background):
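Since the question also asks for the different parts, the complementary mask gives you those directly; a small extension of the example above:

// Pixels whose grayscale difference reaches the threshold count as "different"
cv::Mat differentMask = diff1Channel >= threshold;
cv::Mat differentRegions(input1.size(), input1.type(), cv::Scalar::all(0));
input1.copyTo(differentRegions, differentMask);
cv::imshow("different regions", differentRegions);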
I'm trying to convert frames captured from a Basler camera to OpenCV's Mat format. There isn't a lot of information in the Basler API documentation, but these are the two lines in the Basler example that should be useful in determining the format of the output:
// Get the pointer to the image buffer
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();
cout << "Gray value of first pixel: " << (uint32_t) pImageBuffer[0] << endl << endl;
I know what the image format is (currently set to mono 8-bit), and have tried doing:
img = cv::Mat(964, 1294, CV_8UC1, &pImageBuffer);
img = cv::Mat(964, 1294, CV_8UC1, Result.Buffer());
Neither of these works. Any suggestions/advice would be much appreciated, thanks!
EDIT: I can access the pixels in the Basler image by:
for (int i = 0; i < 1294*964; i++)
    (uint8_t) pImageBuffer[i];
If that helps with converting it to OpenCV's Mat format.
You are creating the cv images to use the camera's memory rather than having the images own their own memory. The problem may be that the camera is locking that pointer, or perhaps it reallocates and moves the buffer for each new image.
Try creating the images without the last parameter, and then copy the pixel data from the camera into the image using memcpy().
// Danger! Result.Buffer() may be changed by the Basler driver without your knowing
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();

// This is using memory that you have no control over - inside the Result object.
// (Note it must be pImageBuffer, the buffer address, not &pImageBuffer,
// which is the address of the pointer variable.)
img = cv::Mat(964, 1294, CV_8UC1, (void*)pImageBuffer);

// Instead do this
img = cv::Mat(964, 1294, CV_8UC1); // manages its own memory

// copy from Result.Buffer() into img
memcpy(img.ptr(), Result.Buffer(), 1294*964);

// edit: a cvImage stores its rows aligned on a 4-byte boundary,
// so if the source data isn't aligned you will have to copy row by row:
for (int irow = 0; irow < 964; irow++) {
    memcpy(img.ptr(irow), pImageBuffer + irow*1294, 1294);
}
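An equivalent alternative (a sketch, assuming the rows really are densely packed): wrap the camera buffer without copying and then clone() it, so the resulting Mat owns its pixels:

// Wrap the driver's buffer (no copy), then take a deep copy with clone()
cv::Mat wrapped(964, 1294, CV_8UC1, (void*)Result.Buffer());
img = wrapped.clone(); // img now owns its memory, independent of the camera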
C++ code to get a Mat frame from a Pylon cam
Pylon::DeviceInfoList_t devices;
// ... get pylon devices if you have more than one camera connected ...
pylonCam = new CInstantCamera(tlFactory->CreateDevice(devices[selectedCamId]));

Pylon::CGrabResultPtr ptrGrabResult;
Pylon::CPylonImage pylonImage; // target buffer for the converted frame
Pylon::CImageFormatConverter formatConverter;
formatConverter.OutputPixelFormat = Pylon::PixelType_BGR8packed; // BGR, as OpenCV expects

pylonCam->MaxNumBuffer = 30;
pylonCam->StartGrabbing(GrabStrategy_LatestImageOnly);
std::cout << " trying to get width and height from pylon device " << std::endl;
pylonCam->RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
formatConverter.Convert(pylonImage, ptrGrabResult);

// wrap the converted BGR buffer in a cv::Mat (no copy is made here)
Mat temp = Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t*)pylonImage.GetBuffer());
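Note that temp only wraps pylonImage's buffer, so it is valid only until the next Convert call; if a frame has to outlive the next grab, take a deep copy:

Mat frame = temp.clone(); // owns its own pixels, safe to keep across grabs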