I want to stitch many images (25) into one single image of a straight surface of a plastic part. The images look like this:
I am trying to use the Stitcher class from OpenCV. My code is here:
#include <iostream>
#include <fstream>
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/stitching.hpp>

using namespace std;
using namespace cv;

bool try_use_gpu = false;
vector<Mat> imgs;
string result_name = "result.jpg";
Mat img1, img2, img3, img4, img5, img6, img7, pano;

void printUsage();
//int parseCmdArgs(int argc, char** argv);

int main(int argc, char* argv[])
{
    // Load images from HD.
    img1 = imread("1.bmp");
    img2 = imread("2.bmp");
    img3 = imread("3.bmp");
    img4 = imread("4.bmp");

    // Put images into vector of images "imgs".
    imgs.push_back(img1);
    imgs.push_back(img2);
    imgs.push_back(img3);
    imgs.push_back(img4);

    // Create stitcher instance and use stitch method with imgs.
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    stitcher.setPanoConfidenceThresh(0.8);
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << status << endl;
        return -1;
    }
    imwrite(result_name, pano);
    return 0;
}
I always get an error saying: "Can't stitch images, error code = 1", so the system is saying that it needs more images. When debugging I can see that the images are properly loaded and that the vector imgs is properly created. What can be the reasons? My calculation also takes quite long (2 s)...
The stitcher module of OpenCV uses image features to create an exact alignment.
As a rule of thumb, there should be around 20-30% overlap between the two images that have to be stitched. If there is no significant overlap between the camera fields, you won't have enough features at the image intersections. The images that you have given have too uniform a background for the stitcher module to find sufficient features in the image intersection. You will need to look at increasing the features in the image intersection in order to align the images.
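A quick way to check whether this is your problem is to count the feature matches between two adjacent images yourself. Here is a minimal sketch using ORB (OpenCV 3 API; the file names are placeholders for two neighbouring shots):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Load two adjacent images from the sequence (placeholder names).
    cv::Mat a = cv::imread("1.bmp", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("2.bmp", cv::IMREAD_GRAYSCALE);

    // Detect and describe features with ORB.
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(a, cv::noArray(), kpA, descA);
    orb->detectAndCompute(b, cv::noArray(), kpB, descB);
    if (descA.empty() || descB.empty())
    {
        std::cout << "No features detected at all." << std::endl;
        return -1;
    }

    // Match descriptors with a brute-force Hamming matcher and cross-check.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);

    // Very few matches means the stitcher has nothing to align on.
    std::cout << "keypoints: " << kpA.size() << " / " << kpB.size()
              << ", cross-checked matches: " << matches.size() << std::endl;
    return 0;
}

If the match count in the overlap region is in the single digits, the stitcher's error code 1 (need more images) is expected regardless of how many frames you feed it.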
I have used OpenCV and C++ to remove a watermark from an image using the code below.
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <Windows.h>
#include <string>
#include <filesystem>

namespace fs = std::filesystem;
using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    bool debugFlag = true;
    std::string path = "C:/test/";
    for (const auto& entry : fs::directory_iterator(path))
    {
        std::string fileName = entry.path().string();
        Mat original = imread(fileName, cv::IMREAD_COLOR);
        if (debugFlag) { imshow("original", original); }

        Mat inverted;
        bitwise_not(original, inverted);
        std::vector<Mat> channels;
        split(inverted, channels);
        for (int i = 0; i < 3; i++)
        {
            if (debugFlag) { imshow("chan" + std::to_string(i), channels[i]); }
        }

        Mat bwImg;
        cv::threshold(channels[2], bwImg, 50, 255, cv::THRESH_BINARY);
        if (debugFlag) { imshow("thresh", bwImg); }

        Mat outputImg;
        inverted.copyTo(outputImg, bwImg);
        bitwise_not(outputImg, outputImg);
        if (debugFlag) { imshow("output", outputImg); }

        if (debugFlag) { waitKey(0); }
        else { imwrite(fileName, outputImg); }
    }
}
Here is the result, from the original to the watermark removed.
Now in the previous image, as you can see, the original image has an orange/red watermark. I created a mask that would kill the watermark and then applied it to the original image (this pulls in the grey text boundary as well). Another trick that helped was to use the red channel, since the watermark is most saturated in red (~245). Note that this requires OpenCV and C++17.
But now I want to remove the watermark in a new image which has a watermark color similar to the text. The image is given below; as you can see, there is a watermark running sideways through the image, in Chinese, overlapping with the text. How can I achieve this with my current code? Any help is appreciated.
Two ideas to try:
1: The watermark looks "lighter" than the primary text. So if you create a grayscale version of the image, you may be able to apply a threshold that keeps the primary text and drops the watermark. You may want to add one pass of dilation on that mask before applying it to the original image, as the grey threshold will likely clip your non-watermark characters a bit. (This may pull in too much noise from the watermark, though, so test it.)
2: Try using OpenCV's morphological opening function. Your primary text seems thicker than the watermark, so you should be able to isolate it. Similarly, after you create the mask of your keep-text, dilate once and mask the original image; a rough sketch of this second idea follows.
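Here is a minimal sketch of the second idea; the file names are placeholders, and the 3x3 kernel and the threshold of 50 are guesses that would need tuning on your actual image:

#include <opencv2/opencv.hpp>

int main()
{
    // Load and invert so the text is bright on a dark background.
    cv::Mat original = cv::imread("input.png", cv::IMREAD_COLOR);
    cv::Mat gray, inverted;
    cv::cvtColor(original, gray, cv::COLOR_BGR2GRAY);
    cv::bitwise_not(gray, inverted);

    // Opening (erode then dilate) removes strokes thinner than the kernel,
    // which should drop the thin watermark while keeping the thicker text.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat opened;
    cv::morphologyEx(inverted, opened, cv::MORPH_OPEN, kernel);

    // Threshold the opened image into a keep-mask, then dilate once so the
    // mask does not clip the edges of the characters we want to keep.
    cv::Mat mask;
    cv::threshold(opened, mask, 50, 255, cv::THRESH_BINARY);
    cv::dilate(mask, mask, kernel);

    // Copy only the masked pixels onto a white background.
    cv::Mat output(original.size(), original.type(), cv::Scalar(255, 255, 255));
    original.copyTo(output, mask);
    cv::imwrite("output.png", output);
    return 0;
}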
I am using the following OpenCV code to access the video feed from a camera (the lsusb command in the Jetson TX1 terminal lists the camera as Pixart Imaging, Inc.).
Code:
#include <iostream> // needed for std::cout below
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::VideoCapture cap(1);
    if (!cap.isOpened())
    {
        std::cout << "Input error\n";
        return -1;
    }

    cv::namedWindow("Video Feed", cv::WINDOW_AUTOSIZE);
    for (;;)
    {
        cv::Mat frame;
        cap >> frame;
        std::cout << "Type: " << frame.type() << "\n";
        //cv::cvtColor(frame, frame, CV_YUV2RGB);
        //cv::resize(frame, frame, cv::Size2i(256, 144));
        cv::imshow("Video Feed", frame);
        if (cv::waitKey(10) == 27)
        {
            break;
        }
    }
    cv::destroyAllWindows();
    return 0;
}
Screenshots of the video feed can be seen below:
I am trying to identify the color format of the camera and convert it to RGB. I have tried different color formats, but I focused mainly on YUV to RGB conversion, as shown below (this line is commented out in the above code): cv::cvtColor(frame, frame, CV_YUV2RGB);
I have also tried different variants of YUV as listed here. However, I haven't gotten any result close to a normal RGB image.
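For reference, one diagnostic I can think of is to query the FOURCC code the capture backend reports, which should name the pixel format; a minimal sketch (OpenCV 3-style constant, and I am not certain the driver reports it truthfully):

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(1);
    if (!cap.isOpened())
        return -1;

    // CAP_PROP_FOURCC returns the pixel format as a packed four-character code.
    int fourcc = static_cast<int>(cap.get(cv::CAP_PROP_FOURCC));
    char code[] = { static_cast<char>(fourcc & 0xFF),
                    static_cast<char>((fourcc >> 8) & 0xFF),
                    static_cast<char>((fourcc >> 16) & 0xFF),
                    static_cast<char>((fourcc >> 24) & 0xFF), '\0' };
    std::cout << "FOURCC: " << code << std::endl;
    return 0;
}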
I am also getting the following message on the terminal:
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
1) Is this just a defective camera?
2) Are there any tests/approaches to identify and rectify the problem?
Edit:
I have added a new picture to give an idea of what the actual color of the shirt of the person closer to the camera is:
I am trying to implement the GrabCut algorithm in OpenCV using C++.
I stumbled upon this site and found a very simple way to do it. Unfortunately, it seems like the code is not working for me:
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( )
{
// Open another image
Mat image;
image= cv::imread("images/mango11a.jpg");
// define bounding rectangle
cv::Rect rectangle(50,70,image.cols-150,image.rows-180);
cv::Mat result; // segmentation result (4 possible values)
cv::Mat bgModel,fgModel; // the models (internally used)
// GrabCut segmentation
cv::grabCut(image, // input image
result, // segmentation result
rectangle,// rectangle containing foreground
bgModel,fgModel, // models
1, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
cout << "oks pa dito" <<endl;
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
// draw rectangle on original image
cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
cv::namedWindow("Image");
cv::imshow("Image",image);
// display result
cv::namedWindow("Segmented Image");
cv::imshow("Segmented Image",foreground);
waitKey();
return 0;
}
Can anyone help me with this, please? What could the problem be?
PS: No errors were printed while compiling.
Check your settings again. I just executed the same tutorial and it worked fine for me.
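One common cause worth ruling out: imread fails silently (it returns an empty Mat) when the relative path is wrong, for example when the IDE's working directory is not the project folder, and grabCut then misbehaves on the empty input. A minimal guard, assuming the same path as in your code:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // imread does not throw on a bad path; it just returns an empty Mat.
    cv::Mat image = cv::imread("images/mango11a.jpg");
    if (image.empty())
    {
        std::cerr << "Could not load images/mango11a.jpg - check the working directory." << std::endl;
        return -1;
    }
    // ... continue with grabCut as in the question ...
    return 0;
}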
I'm using Tesseract 3.02 and OpenCV to have Tesseract recognize text from my camera in real time.
But the results are quite bad: the output is unreadable, and the video doesn't display fluently. I think it's a problem with my code.
Can someone give me advice about how to modify it?
Thanks a lot!
#include "stdafx.h"
#include <string>
#include <opencv2/opencv.hpp>
#include <time.h>
using namespace std;
using namespace cv;
int main() {
// [1]
tesseract::TessBaseAPI *myOCR =
new tesseract::TessBaseAPI();
// [2]
printf("Tesseract-ocr version: %s\n",
myOCR->Version());
printf("Leptonica version: %s\n",
getLeptonicaVersion());
// [3]
if (myOCR->Init(NULL, "eng")) {
fprintf(stderr, "Could not initialize tesseract.\n");
exit(1);
}
//声明IplImage指针
IplImage* pFrame = NULL;
//获取摄像头
CvCapture* pCapture = cvCreateCameraCapture(-1);
//创建窗口
cvNamedWindow("video", 1);
//显示视屏
time_t last_time = time(NULL);
while(1)
{
pFrame=cvQueryFrame( pCapture );
if(!pFrame) break;
cvShowImage("video",pFrame);
char c=cvWaitKey(33);
if(c==27)break;
time_t this_time = time(NULL);
if(this_time != last_time)
{
last_time = this_time;
myOCR->SetRectangle(0,0,pFrame->width,pFrame->height);
myOCR->SetImage((uchar*)pFrame->imageData,pFrame->width,pFrame- >height,pFrame->depth/8,pFrame->width*(pFrame->depth/8));
myOCR->Recognize(NULL);
const char* out = myOCR->GetUTF8Text();
printf("%s\n",out);
}
}
cvReleaseCapture(&pCapture);
cvDestroyWindow("video");
cv::waitKey(-1);
return 0;
}
Tesseract was designed to process scanned books. It operates on white pages with only black text, clearly visible with minimal distortion; the images are mostly black & white. Your image is grey level, so Tesseract will perform very poorly.
It is not a problem with your code, but with Tesseract.
If you point your camera towards a book, you will be able to get the text (assuming the image is focused), but if you want to read general text (like street signs, or a logo on someone's T-shirt), then there is no way to do it. Sorry to disappoint you.
However, if you want to recognize a specific kind of text, like credit card numbers or street signs, you can do it. Start by grabbing many images of your text, do a bit of pre-processing on the image, convert it to black & white, and train Tesseract on many examples. Then it will be able to accomplish your task.
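As a rough sketch of that pre-processing step, here is how a frame could be converted to black & white with Otsu thresholding before being handed to Tesseract; the file name is a placeholder, and the pipeline is a starting point, not a tuned solution:

#include <cstdio>
#include <opencv2/opencv.hpp>
#include <tesseract/baseapi.h>

int main()
{
    tesseract::TessBaseAPI ocr;
    if (ocr.Init(NULL, "eng"))
        return 1;

    // Convert the frame to black & white, which is what Tesseract expects.
    cv::Mat frame = cv::imread("frame.png");
    cv::Mat gray, bw;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Hand the single-channel image to Tesseract (1 byte per pixel).
    ocr.SetImage(bw.data, bw.cols, bw.rows, 1, static_cast<int>(bw.step));
    char* out = ocr.GetUTF8Text();
    printf("%s\n", out);
    delete[] out;
    ocr.End();
    return 0;
}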
I have a pointer to a CvContourTree and I wish to derive the associated contour from this.
I have tried to use the function that will do this -
cvContourFromContourTree(const CvContourTree* tree, CvMemStorage* storage, CvTermCriteria criteria )
but it is giving me an error:
'Unhandled exception at 0x1005567f in Matching_Hierarchial.exe: 0xC0000005:
Access violation reading location 0x00000002.'
I have defined the CvTermCriteria as follows:
CvTermCriteria termcrit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS ,5,1);
Can someone please provide some sample code showing how to convert a contour to a contour tree and then back to a contour again? I would be extremely grateful for help in this matter.
Thanks,
Conor
Thanks for your fast response. Please see the attached code segment. I have taken in an image from my project folder and converted it to binary. I have then found the contours. Using an arbitrary contour, I simplified its complexity via polygon approximation. I construct a contour tree from this contour (I am confident that this is working OK, as I have tested this contour tree against a similar one using cvMatchContourTrees() and gotten favourable outcomes). However, despite reading all I could find on the function and your post, I cannot convert from the contour tree back to the contour structure.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "cxcore.h"
#include "cvaux.h"
#include <iostream>
using namespace std;
#define CVX_RED CV_RGB(0xff,0x00,0x00)
#define CVX_BLUE CV_RGB(0x00,0x00,0xff)
int _tmain(int argc, _TCHAR* argv[])
{
// define input image
IplImage *img1 = cvLoadImage("SHAPE1.jpg",0);
// define and construct binary image of input image
IplImage *imgEdge1 = cvCreateImage(cvGetSize(img1),IPL_DEPTH_8U,1);
cvThreshold(img1,imgEdge1,155,255,CV_THRESH_BINARY);
// define and zero image to place polygon image
IplImage *dst1 = cvCreateImage(cvGetSize(img1),IPL_DEPTH_8U,1);
cvZero(dst1);
// display ip and thresholded image
cvNamedWindow("img1",1);
cvNamedWindow("thresh1",1);
cvShowImage("img1",img1);
cvShowImage("thresh1",imgEdge1);
// find all the contours of the image
CvSeq* contours1 = NULL;
CvMemStorage* storage1 = cvCreateMemStorage();
int numContour1 = cvFindContours(imgEdge1,storage1,&contours1,sizeof(CvContour),CV_RETR_TREE,CV_CHAIN_APPROX_SIMPLE);
cout<<"number of contours"<<numContour1<<endl;
// extract a contour of interest
CvSeq* poly_approx1 = contours1->v_next; // interested in vertical level becaue tree structure
// CALCULATE PERIMETER
double perimeter1 = cvArcLength((CvSeq*)poly_approx1,CV_WHOLE_SEQ,-1);
// CREATE POLYGON APPROXIMATION -
// NB: CANNOT USE 'CV_CHAIN_CODE'ARGUEMENT IN THE cvFindContours() call
CvSeq* polySeq1 = cvApproxPoly((CvSeq*)poly_approx1,sizeof(CvContour),storage1,CV_POLY_APPROX_DP,perimeter1*0.02,0);
// draw approximated polygon
cvDrawContours(dst1,polySeq1,cvScalar(255),cvScalar(255),0,3,8); // draw
// display polygon
cvNamedWindow("Poly Approx1",1);
cvShowImage("Poly Approx1",dst1);
// NOW WE HAVE A POLYGON APPROXIMATED CONTOUR
// CREATE A CONTOUR TREE
CvMemStorage *treeStorage1 = cvCreateMemStorage(0);
CvContourTree* tree1 = cvCreateContourTree((const CvSeq*)polySeq1,treeStorage1,0);
// TO RECONSTRUCT A CONTOUR FROM THE CONTOUR TREE
// CANNOT GET TO WORK YET...
CvMemStorage *stor = cvCreateMemStorage(0);
CvTermCriteria termcrit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS ,5,1); // more later
/* the next line will not compile */
CvSeq *contour_recap = cvContourFromContourTree(tree1,treeStorage1,termcrit);
cvWaitKey(0);
return 0;
}
Thanks again for any help or advice that you might be able to give. I assure you it's greatly appreciated.
Conor
Well, you are using the appropriate methods.
CvContourTree* cvCreateContourTree(
    const CvSeq* contour,
    CvMemStorage* storage,
    double threshold);
This method will create the contour tree from a given sequence, which can then be used to compare two contours.
To convert a contour tree back to a sequence you will use the method you already posted, but remember to initialize the storage and create a TermCriteria (which looks OK in your example):
storage = cvCreateMemStorage(0);

CvSeq* cvContourFromContourTree(
    const CvContourTree* tree,
    CvMemStorage* storage,
    CvTermCriteria criteria);
So these steps should be OK for your conversion, and if there's nothing missing from the code you posted, then you should post more of it so we can find the mistake.
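For completeness, here is a minimal sketch of the round trip, staying with the legacy C API used in the question. Here contour stands for the CvSeq* you already have from cvFindContours (or the approximated polygon); giving the reconstruction its own freshly created storage is an assumption that matches the initialization step above:

// Separate storages: one owns the tree, one will own the rebuilt contour.
CvMemStorage* treeStorage    = cvCreateMemStorage(0);
CvMemStorage* contourStorage = cvCreateMemStorage(0);

// Build the contour tree from an existing contour sequence.
CvContourTree* tree = cvCreateContourTree(contour, treeStorage, 0);

// Reconstruct a contour sequence from the tree into the initialized storage.
CvTermCriteria termcrit = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 5, 1);
CvSeq* restored = cvContourFromContourTree(tree, contourStorage, termcrit);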