OpenCV image stitching result error

I made an image stitcher which merges 10 images (or more).
The merging seems to work, but the result has a problem: it is not drawn perfectly and contains a black area.
This is the output:
Why does this problem occur? When I run this program it uses a lot of CPU and memory; is that related to the problem?
+) The images being merged are 2880*2880.
+) This is my code:
QImage requestImage(const QString &id, QSize *size, const QSize &requestedSize)
{
    QUrl url(id);
    QString file = url.toLocalFile();
    QString queryStr = url.query();
    QUrlQuery query(queryStr);

    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(true);
    stitcher.setFeaturesFinder(makePtr<detail::SurfFeaturesFinder>(100));

    vector<Mat> imgs;
    foreach (auto node, query.queryItems()) {
        if (node.first == "src") {
            imgs.push_back(cv::imread(node.second.remove(0, 7).toStdString()));
        } else if (node.first == "hessian") {
            //stitcher.setFeaturesFinder(makePtr<detail::SurfFeaturesFinder>(node.second.toInt()));
            stitcher.setFeaturesFinder(makePtr<detail::SurfFeaturesFinder>(100));
        }
    }

    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << status << endl;
        return QImage();
    }
    return cvMatToQImage(pano);
}
+) I tried reducing the image size (2880*2880 -> 256*256), but the Stitcher produces the same problem.

Related

Azure Kinect Recording Color Format

I'm attempting to read an Azure Kinect recording and save images from the frames, but it is not possible to set the color_format, which causes problems when using imwrite.
I have read the recording documentation here: https://learn.microsoft.com/en-us/azure/Kinect-dk/azure-kinect-recorder.
By default, the format seems to be K4A_IMAGE_FORMAT_COLOR_MJPG, but I am unsure what type parameter to pass in when creating the Mat. For BGRA32 it is CV_8UC4, and for depth images it is CV_16U.
I assume there are two ways to solve this problem: either setting the color_format, or figuring out what parameter is correct for the default format produced by the recording.
You can access the rgb with OpenCV as if it were a normal webcam:
VideoCapture cap(0); // open the default camera
cap.set(CV_CAP_PROP_FRAME_WIDTH, 3840);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 2160);
if (!cap.isOpened()) // check if we succeeded
    return -1;

Mat frame, img;
for (;;)
{
    cap >> frame; // get a new frame from the camera
    cout << frame.cols << " x " << frame.rows << endl;
    resize(frame, img, Size(), 0.25, 0.25);
    imshow("frame", img);
    if (waitKey(30) >= 0) break;
}
No k4a function is called, no need to set the color format.
If you want to use their SDK with jpeg format, they provide a function in one of their sample codes:
long WriteToFile(const char *fileName, void *buffer, size_t bufferSize)
{
    cout << bufferSize << endl;
    assert(buffer != NULL);

    std::ofstream hFile;
    hFile.open(fileName, std::ios::out | std::ios::trunc | std::ios::binary);
    if (hFile.is_open())
    {
        hFile.write((char *)buffer, static_cast<std::streamsize>(bufferSize));
        hFile.close();
    }
    std::cout << "[Streaming Service] Color frame is stored in " << fileName << std::endl;
    return 0;
}
You just call:
image = k4a_capture_get_color_image(capture);
WriteToFile("color.jpg", k4a_image_get_buffer(image), k4a_image_get_size(image));
Finally, you can set the format to BGRA32:
config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
and convert it into an OpenCV Mat:
color_image = k4a_capture_get_color_image(capture);
if (color_image)
{
    uint8_t* buffer = k4a_image_get_buffer(color_image); // get raw buffer
    cv::Mat colorMat(Hrgb, Wrgb, CV_8UC4, (void*)buffer, cv::Mat::AUTO_STEP);
    // do something with colorMat
    k4a_image_release(color_image);
}
More details on the last option here: How to convert k4a_image_t to opencv matrix? (Azure Kinect Sensor SDK)
The image quality is slightly better with the last solution, but the buffer is significantly larger (33 MB vs ~1.5 MB) for 3840x2160.

Getting assertion (-215) error while using the OpenCV Stitcher class in VS

Many questions like this have been asked, and I have gone through most of them but still can't find a solution to my problem. This is the error message: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
bool try_use_gpu = false;
vector<Mat> imgs;
Mat image, pano;

image = imread("moscow1.jpg");
if (image.empty())
{
    cout << "check your input image" << endl;
    return EXIT_FAILURE;
}
imgs.push_back(image);

image = imread("moscow2.jpg");
if (image.empty())
{
    cout << "check your input image" << endl;
    return EXIT_FAILURE;
}
imgs.push_back(image);

Stitcher::Mode mode = Stitcher::PANORAMA;
Ptr<Stitcher> stitcher = Stitcher::create(mode, try_use_gpu);
//Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
Stitcher::Status status = stitcher->stitch(imgs, pano);
if (status != Stitcher::OK)
{
    cout << "Panorama unsuccessful" << endl;
}

imshow("panorama", pano);
waitKey(0);
imwrite("panoramaimg.jpg", pano);
}
I am also wondering whether I am using the Stitcher class incorrectly; any help will do...
The images you want to stitch must have common points (overlap) for the program to use; stitching two unrelated images with no common points is just not going to work.

OpenCV: unable to record video from camera to a file

I am trying to capture video from a webcam, but I always get a 441-byte file.
Also, in the console there is an error:
OpenCVCMD[37317:1478193] GetDYLDEntryPointWithImage(/System/Library/Frameworks/AppKit.framework/Versions/Current/AppKit,_NSCreateAppKitServicesMenu) failed.
Code Snippet
void demoVideoMaker() {
    // Camera input
    VideoCapture cap(0);
    VideoCapture* vidoFeed = &cap;
    namedWindow("VIDEO", WINDOW_AUTOSIZE);

    // Determine the size of the input feed
    Size inpFeedSize = Size((int) cap.get(CV_CAP_PROP_FRAME_WIDTH),   // Acquire input size
                            (int) cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    cout << "Input Feed Size: " << inpFeedSize << endl;

    VideoWriter outputVideo;
    char fName[] = "capturedVid.avi";
    outputVideo.open(fName, CV_FOURCC('P','I','M','1'), 20, inpFeedSize, true);
    if (!outputVideo.isOpened()) {
        cout << "Failed to write Video" << endl;
    }

    // Event loop
    Mat frame;
    bool recordingOn = false;
    while (1) {
        // Process user input, if any
        char ch = char(waitKey(10));
        if (ch == 'q') {
            break;
        }
        if (ch == 'r') {
            recordingOn = !recordingOn;
        }

        // Move to the next frame
        (*vidoFeed) >> frame;
        if (frame.empty()) {
            printf("\nEmpty Frame encountered");
        } else {
            imshow("VIDEO", frame);
            if (recordingOn) {
                cout << ".";
                outputVideo.write(frame);
            }
        }
    }
}
I am using OpenCV 2.4 and Xcode 8.2 on macOS Sierra 10.12.1.
I tried changing the codec and FPS, but nothing helped. I assumed this would be a straightforward task but got stuck here. Please help.

OpenCV-iOS demos run at 6-10 FPS on iPad, is this normal?

The OpenCV-iOS detection and tracking code runs between 6-10 FPS on my iPad.
Is this normal?
I figured their "sample" code would run as fast as it could...
DetectTrackSample.cpp
#include <iostream>
#include "DetectTrackSample.h"
#include "ObjectTrackingClass.h"
#include "FeatureDetectionClass.h"
#include "Globals.h"
DetectTrackSample::DetectTrackSample()
    : m_fdAlgorithmName("ORB")
    , m_feAlgorithmName("FREAK")
    , m_maxCorners(200)
    , m_hessianThreshold(400)
    , m_nFeatures(500)
    , m_minMatches(4)
    , m_drawMatches(true)
    , m_drawPerspective(true)
{
    std::vector<std::string> fdAlgos, feAlgos, otAlgos;

    // feature detection options
    fdAlgos.push_back("ORB");
    fdAlgos.push_back("SURF");
    registerOption("Detector", "", &m_fdAlgorithmName, fdAlgos);

    // feature extraction options
    feAlgos.push_back("ORB");
    feAlgos.push_back("SURF");
    feAlgos.push_back("FREAK");
    registerOption("Extractor", "", &m_feAlgorithmName, feAlgos);

    // SURF feature detector options
    registerOption("hessianThreshold", "SURF", &m_hessianThreshold, 300, 500);

    // ORB feature detector options
    registerOption("nFeatures", "ORB", &m_nFeatures, 0, 1500);

    // matcher options
    registerOption("Minimum matches", "Matcher", &m_minMatches, 4, 200);

    // object tracking options
    registerOption("m_maxCorners", "Tracking", &m_maxCorners, 0, 1000);

    // display options
    registerOption("Matches", "Draw", &m_drawMatches);
    registerOption("Perspective", "Draw", &m_drawPerspective);
}
//! Gets a sample name
std::string DetectTrackSample::getName() const
{
    return "Detection and Tracking";
}

std::string DetectTrackSample::getSampleIcon() const
{
    return "DetectTrackSampleIcon.png";
}

//! Returns a detailed sample description
std::string DetectTrackSample::getDescription() const
{
    return "Combined feature detection and object tracking sample.";
}

//! Returns true if this sample requires setting a reference image for later use
bool DetectTrackSample::isReferenceFrameRequired() const
{
    return true;
}

//! Sets the reference frame for later processing
void DetectTrackSample::setReferenceFrame(const cv::Mat& reference)
{
    getGray(reference, objectImage);
    computeObject = true;
}

// Reset object keypoints and descriptors
void DetectTrackSample::resetReferenceFrame() const
{
    detectObject = false;
    computeObject = false;
    trackObject = false;
}
//! Processes a frame and returns output image
bool DetectTrackSample::processFrame(const cv::Mat& inputFrame, cv::Mat& outputFrame)
{
    // display the frame
    inputFrame.copyTo(outputFrame);

    // convert input frame to gray scale
    getGray(inputFrame, imageNext);

    // begin tracking object
    if (trackObject) {
        // prepare the tracking class
        ObjectTrackingClass tracker;
        tracker.setMaxCorners(m_maxCorners);

        // track object
        tracker.track(outputFrame,
                      imagePrev,
                      imageNext,
                      pointsPrev,
                      pointsNext,
                      status,
                      err);

        // check if the next points array isn't empty
        if (pointsNext.empty()) {
            // if it is, go back to detect
            trackObject = false;
            detectObject = true;
        }
    }
    // try to find the object in the scene
    if (detectObject) {
        // prepare the robust matcher and set parameters
        FeatureDetectionClass rmatcher;
        rmatcher.setConfidenceLevel(0.98);
        rmatcher.setMinDistanceToEpipolar(1.0);
        rmatcher.setRatio(0.65f);

        // feature detector setup
        if (m_fdAlgorithmName == "SURF")
        {
            // prepare keypoints detector
            cv::Ptr<cv::FeatureDetector> detector = new cv::SurfFeatureDetector(m_hessianThreshold);
            rmatcher.setFeatureDetector(detector);
        }
        else if (m_fdAlgorithmName == "ORB")
        {
            // prepare feature detector and detect the object keypoints
            cv::Ptr<cv::FeatureDetector> detector = new cv::OrbFeatureDetector(m_nFeatures);
            rmatcher.setFeatureDetector(detector);
        }
        else
        {
            std::cerr << "Unsupported algorithm: " << m_fdAlgorithmName << std::endl;
            assert(false);
        }

        // feature extractor and matcher setup
        if (m_feAlgorithmName == "SURF")
        {
            // prepare feature extractor
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SurfDescriptorExtractor;
            rmatcher.setDescriptorExtractor(extractor);
            // prepare the appropriate matcher for SURF
            cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_L2, false);
            rmatcher.setDescriptorMatcher(matcher);
        }
        else if (m_feAlgorithmName == "ORB")
        {
            // prepare feature extractor
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::OrbDescriptorExtractor;
            rmatcher.setDescriptorExtractor(extractor);
            // prepare the appropriate matcher for ORB
            cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_HAMMING, false);
            rmatcher.setDescriptorMatcher(matcher);
        }
        else if (m_feAlgorithmName == "FREAK")
        {
            // prepare feature extractor
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::FREAK;
            rmatcher.setDescriptorExtractor(extractor);
            // prepare the appropriate matcher for FREAK
            cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_HAMMING, false);
            rmatcher.setDescriptorMatcher(matcher);
        }
        else {
            std::cerr << "Unsupported algorithm: " << m_feAlgorithmName << std::endl;
            assert(false);
        }
        // call the RobustMatcher to match the object keypoints with the scene keypoints
        std::vector<cv::Point2f> objectKeypoints2f, sceneKeypoints2f;
        std::vector<cv::DMatch> matches;
        cv::Mat fundamentalMat = rmatcher.match(imageNext,         // input scene image
                                                objectKeypoints,   // input computed object image keypoints
                                                objectDescriptors, // input computed object image descriptors
                                                matches,           // output matches
                                                objectKeypoints2f, // output object keypoints (Point2f)
                                                sceneKeypoints2f); // output scene keypoints (Point2f)

        if ((int)matches.size() >= m_minMatches) { // assume something was detected
            // draw perspective lines (box the object in the frame)
            if (m_drawPerspective)
                rmatcher.drawPerspective(outputFrame,
                                         objectImage,
                                         objectKeypoints2f,
                                         sceneKeypoints2f);

            // draw keypoint matches as yellow points on the output frame
            if (m_drawMatches)
                rmatcher.drawMatches(outputFrame,
                                     matches,
                                     sceneKeypoints2f);

            // init points array for tracking
            pointsNext = sceneKeypoints2f;

            // set flags
            detectObject = false;
            trackObject = true;
        }
    }
    // compute object image keypoints and descriptors
    if (computeObject) {
        // select feature detection mechanism
        if (m_fdAlgorithmName == "SURF")
        {
            // prepare keypoints detector
            cv::Ptr<cv::FeatureDetector> detector = new cv::SurfFeatureDetector(m_hessianThreshold);
            // compute object keypoints
            detector->detect(objectImage, objectKeypoints);
        }
        else if (m_fdAlgorithmName == "ORB")
        {
            // prepare feature detector and detect the object keypoints
            cv::Ptr<cv::FeatureDetector> detector = new cv::OrbFeatureDetector(m_nFeatures);
            // compute object keypoints
            detector->detect(objectImage, objectKeypoints);
        }
        else {
            std::cerr << "Unsupported algorithm: " << m_fdAlgorithmName << std::endl;
            assert(false);
        }

        // select feature extraction mechanism
        if (m_feAlgorithmName == "SURF")
        {
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SurfDescriptorExtractor;
            // compute object feature descriptors
            extractor->compute(objectImage, objectKeypoints, objectDescriptors);
        }
        else if (m_feAlgorithmName == "ORB")
        {
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::OrbDescriptorExtractor;
            // compute object feature descriptors
            extractor->compute(objectImage, objectKeypoints, objectDescriptors);
        }
        else if (m_feAlgorithmName == "FREAK")
        {
            cv::Ptr<cv::DescriptorExtractor> extractor = new cv::FREAK;
            // compute object feature descriptors
            extractor->compute(objectImage, objectKeypoints, objectDescriptors);
        }
        else {
            std::cerr << "Unsupported algorithm: " << m_feAlgorithmName << std::endl;
            assert(false);
        }

        // set flags
        computeObject = false;
        detectObject = true;
    }

    // backup previous frame
    imageNext.copyTo(imagePrev);

    // backup points array
    std::swap(pointsNext, pointsPrev);

    return true;
}
This can be normal; it depends on your detection and tracking code.
For example:
On an iPhone 4 using the CV_HAAR_FIND_BIGGEST_OBJECT option the demo
app achieves up to 4 fps when a face is in the frame. This drops to
around 1.5 fps when no face is present. Without the
CV_HAAR_FIND_BIGGEST_OBJECT option multiple faces can be detected in a
frame at around 1.8 fps. Note that the live video preview always runs
at the full 30 fps irrespective of the processing frame rate and
processFrame:videoRect:videoOrientation: is called at 30 fps if you
only perform minimal processing.
Source: Click

Using cvRetrieveFrame gives a strange image

I am reading an AVI file and doing some background subtraction work. The weird thing is that when I use cvRetrieveFrame, I get a strange image, like below:
Original frame:
What cvRetrieveFrame returns:
I don't know what's the problem. Here is my code snippet.
CvCapture* readerAvi = cvCaptureFromAVI( filename.c_str() );
if (readerAvi == NULL)
{
    std::cerr << "Could not open AVI file." << std::endl;
    return 0;
}

// retrieve information about the AVI file
cvQueryFrame(readerAvi); // ....get some information: width, height, ....

// grab the next frame from the input video stream
if (!cvGrabFrame(readerAvi))
{
    std::cerr << "Could not grab AVI frame." << std::endl;
    return 0;
}
frame_data = cvRetrieveFrame(readerAvi);
