I'm trying to find the largest blob in an image and classify it according to a linked plist file. I'm using the latest version of OpenCV for iOS, and I've looked at several related questions, but none so far relate to iOS.
I'm getting this error:
OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) in batchDistance, file /Users/admin/Desktop/OpenCV/modules/core/src/stat.cpp, line 4000
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/admin/Desktop/OpenCV/modules/core/src/stat.cpp:4000: error: (-215) type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U) in function batchDistance
when I run this:
- (IBAction)CaptureButton:(id)sender
{
    // Find the biggest blob.
    int biggestBlobIndex = 0;
    for (int i = 0, biggestBlobArea = 0; i < detectedBlobs.size(); i++)
    {
        Blob &detectedBlob = detectedBlobs[i];
        int blobArea = detectedBlob.getWidth() * detectedBlob.getHeight();
        if (blobArea > biggestBlobArea)
        {
            biggestBlobIndex = i;
            biggestBlobArea = blobArea;
        }
    }
    Blob &biggestBlob = detectedBlobs[biggestBlobIndex];

    // Classify the blob.
    blobClassifier->classify(biggestBlob); // the error occurs here
}
The classify method that I'm calling on the last line is declared in another file:
void classify(Blob &detectedBlob) const;
This is the relevant code from stat.cpp:
Mat src1 = _src1.getMat(), src2 = _src2.getMat(), mask = _mask.getMat();
int type = src1.type();
CV_Assert( type == src2.type() && src1.cols == src2.cols &&
           (type == CV_32F || type == CV_8U)); // this is line 4000
What's the issue here?
I don't know what cv::Mat objects look like in Objective-C, but you need to make sure that the dimensions, channel count, and depth of all images used with the classifier are uniform. There was probably an earlier step where you fed the classifier training images; maybe one of them is not compatible with the Mat you are trying to classify.
You can also step into OpenCV itself while debugging if you compile it yourself with the Debug build type set in CMake.
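To see what the batchDistance assertion is actually demanding, here is a minimal sketch of the same compatibility check in NumPy terms. The helper name and the descriptor shapes are made up for illustration; the point is that the query and the stored training data must agree in dtype and column count, and the dtype must be one batchDistance supports:

```python
import numpy as np

def check_classifier_input(query, training):
    """Mirror the batchDistance assertion: same dtype, same number of
    columns, and a dtype that batchDistance supports (float32 or uint8)."""
    if query.dtype != training.dtype:
        return False  # type == src2.type() fails
    if query.shape[1] != training.shape[1]:
        return False  # src1.cols == src2.cols fails
    return query.dtype in (np.float32, np.uint8)  # CV_32F || CV_8U

# A float32 query against uint8 training data reproduces the failure mode:
training   = np.zeros((10, 64), dtype=np.uint8)
bad_query  = np.zeros((1, 64), dtype=np.float32)
good_query = np.zeros((1, 64), dtype=np.uint8)
```

If the check fails for your data, the place to look is wherever the training images were converted before being fed to the classifier, since the blob you capture at runtime must go through the exact same conversion.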
I have a dataset of meme URLs from which I want to extract the text. I have this function:
def image2text(path_x):
    with requests.get(path_x, stream=True) as r:
        request_x = r.content
        r.close()
    img = Image.open(BytesIO(request_x))
    bilat_x = cv2.bilateralFilter(np.array(img), 5, 55, 60)
    img.close()
    request_x = None
    cv2_x = cv2.cvtColor(bilat_x, cv2.COLOR_BGR2GRAY)
    _, img = cv2.threshold(cv2_x, 240, 255, 1)
    meme_text = pytesseract.image_to_string(img, lang='eng')
    return meme_text

image2text('https://i.redd.it/r9lw184zky881.png')
I receive the following error:
error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/bilateral_filter.dispatch.cpp:166: error: (-215:Assertion failed) (src.type() == CV_8UC1 || src.type() == CV_8UC3) && src.data != dst.data in function 'bilateralFilter_8u'
BTW, the code is mainly copied from this source
I fixed it by adding the following line after opening the image:
img = cv2.cvtColor(np.array(img), cv2.COLOR_BGRA2BGR)
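The reason this helps: a PNG opened with PIL frequently comes back in RGBA mode, so np.array(img) is a 4-channel array, and bilateralFilter only accepts 8-bit 1- or 3-channel input (CV_8UC1/CV_8UC3), which is exactly what the assertion says. A NumPy-only sketch of the shape problem (the image dimensions here are made up):

```python
import numpy as np

# A PNG opened with PIL often comes back as RGBA: height x width x 4.
rgba = np.zeros((100, 200, 4), dtype=np.uint8)

# bilateralFilter rejects 4-channel input, so the alpha channel has to go.
# cv2.cvtColor(rgba, cv2.COLOR_BGRA2BGR) does this; slicing off the
# fourth channel is the plain-NumPy equivalent:
bgr = rgba[:, :, :3]
```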
I am trying to find the calibration from
I was following the type of inputs described here, but I am getting this error:
error: OpenCV(4.4.0) /tmp/pip-req-build-6amqbhlx/opencv/modules/calib3d/src/solvepnp.cpp:753: error: (-215:Assertion failed) ( (npoints >= 4) || (npoints == 3 && flags == SOLVEPNP_ITERATIVE && useExtrinsicGuess) ) && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function 'solvePnPGeneric'
Example code:
Any hints where this error might be coming from?
I am trying to undistort a fisheye image taken from a camera. I have already gotten the camera parameters needed. However, when I run the code below:
Mat cammatrix = cv::Mat::zeros(3,3, CV_64F);
cammatrix.at<double>(0,0) = 3.7089418826568277e+002;
cammatrix.at<double>(1,1) = 3.7179355652545451e+002;
cammatrix.at<double>(0,2) = 3.4450520804288089e+002;
cammatrix.at<double>(1,2) = 2.5859133287932718e+002;
cammatrix.at<double>(2,2) = 1.0;
std::vector<double> distortcoeff;
double tempdoub = -2.2022994789941803e+000;
distortcoeff.push_back(tempdoub);
tempdoub = 4.4652133910671958e+000;
distortcoeff.push_back(tempdoub);
tempdoub = 6.8050071879644780e-001;
distortcoeff.push_back(tempdoub);
tempdoub = -1.7697153575434696e+000;
distortcoeff.push_back(tempdoub);
// Process images here (optional)
Mat img_scene(current);
if (!img_scene.data)
{
    std::cout << " --(!) Error reading images " << std::endl;
    return -1;
}
img_scene.convertTo(img_scene, CV_32FC2);
cv::fisheye::undistortPoints(img_scene, img_scene, cammatrix, distortcoeff);
I get this error:
OpenCV Error: Assertion failed (distorted.type() == CV_32FC2 || distorted.type() == CV_64FC2) in undistortPoints
Not sure why this is happening, because I call convertTo with CV_32FC2 on the line right before. If anyone could help me fix this error, I would really appreciate it!
The undistortPoints() function retrieves the undistorted pixel location given its current distorted location on the image, i.e. it operates on points, not on images.
Use fisheye::undistortImage for images.
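For reference, here is the shape undistortPoints actually expects, sketched with NumPy (the coordinates, and the K and D names mentioned in the comments, are illustrative placeholders):

```python
import numpy as np

# fisheye::undistortPoints wants point coordinates as an N x 1 x 2 array
# of float32 (CV_32FC2) or float64 (CV_64FC2) -- one (x, y) pair per point:
pts = np.array([[[344.5, 258.6]],
                [[120.0,  80.0]]], dtype=np.float32)

# An array like this would pass the assertion in a call such as
#   cv2.fisheye.undistortPoints(pts, K, D)
# An H x W image run through convertTo(CV_32FC2) is not this shape,
# which is why the assertion fires. To undistort the whole image,
# use cv2.fisheye.undistortImage(img, K, D) instead.
```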
I've been looking for hours already, but I can't find the problem.
I get the following error when I want to stitch two images together:
OpenCV Error: Assertion failed (y == 0 || data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) in unknown function...
This is the code (pano.jpg was already stitched together in a previous run, where the same algorithm did work...):
cv::Mat img1 = imread("input2.jpg");
cv::Mat img2 = imread("pano.jpg");
std::vector<cv::Mat> vectest;
vectest.push_back(img2);
vectest.push_back(img1);
cv::Mat result;
cv::Stitcher stitcher = cv::Stitcher::createDefault( false );
stitcher.setPanoConfidenceThresh(0.01);
detail::BestOf2NearestMatcher *matcher = new detail::BestOf2NearestMatcher(false, 0.001/*=match_conf*/);
detail::SurfFeaturesFinder *featureFinder = new detail::SurfFeaturesFinder(100);
stitcher.setFeaturesMatcher(matcher);
stitcher.setFeaturesFinder(featureFinder);
cv::Stitcher::Status status = stitcher.stitch( vectest, result );
You can find the images here:
pano.jpg: https://dl.dropbox.com/u/5276376/pano.jpg
input2.jpg: https://dl.dropbox.com/u/5276376/input2.jpg
Edit:
I compiled OpenCV 2.4.2 myself, but the problem persists...
The system crashes in the stitcher.cpp file on the following line:
blender_->feed(img_warped_s, mask_warped, corners[img_idx]);
Inside this feed function it crashes at these lines:
int y_ = y - y_tl;
const Point3_<short>* src_row = src_pyr_laplace[i].ptr<Point3_<short> >(y_);
Point3_<short>* dst_row = dst_pyr_laplace_[i].ptr<Point3_<short> >(y);
And finally this assertion in mat.hpp:
template<typename _Tp> inline _Tp* Mat::ptr(int y)
{
    CV_DbgAssert( y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) );
    return (_Tp*)(data + step.p[0]*y);
}
Strange that everything works fine for some people here...
I am stitching images now as well, but not using the high-level Stitcher functionality; instead I code every step myself with OpenCV 2.4.2. As far as I know, you could try running SurfFeaturesFinder first and BestOf2NearestMatcher second yourself. Just a suggestion, good luck!
I'm using cv::imread to load an image and do some processing on it,
but I don't know why I can't read values from the Mat returned by imread.
I used Mat.at method:
Mat iplimage = imread("Photo.jpg", 1); // input
for (int i = 0; i < iplimage.rows; i++) {
    for (int j = 0; j < iplimage.cols; j++) {
        cout << (int)iplimage.at<int>(i, j) << " ";
    }
    cout << endl;
}
But it appeared an error:
OpenCV Error: Assertion failed ( dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1() ) in unknown function, file "c:\opencv2.2\include\opencv2\core\mat.hpp", line 517
But it works if I access the data directly:
Mat iplimage = imread("Photo.jpg", 1); // input
for (int i = 0; i < iplimage.rows; i++) {
    for (int j = 0; j < iplimage.cols; j++) {
        cout << (int)iplimage.data[i*iplimage.cols + j] << " ";
    }
    cout << endl;
}
Could anyone tell me how can I use the Mat.at method to access the above Mat?
Thanks for your help!
See this answer. In your case, the returned Mat has 3 channels, hence iplimage.at<int> fails the assertion; you just need to access the intensity in each channel, the way the linked answer explains.
You are loading the image with 3 channels. It will be fine if you change it to this: Mat iplimage = imread("Photo.jpg", 0); // input
I found the solution. It is because I used:
inputImage.at<int>(i,j) or inputImage.at<float>(1,2)
instead of:
(int)inputImage.at<uchar>(1,2) or (float)inputImage.at<uchar>(1,2)
Sorry for my carelessness!
Mat iplimage = imread("Photo.jpg", 1) reads in a 3-channel colour image. You can use Mat iplimage = imread("Photo.jpg", 0) to read the image in as greyscale, so that iplimage.at<uchar>(i,j) would work. Please note that you should use at<uchar> if your image is 8-bit, instead of at<int>.
If your image is not 8bit, you should use iplimage = imread("Photo.jpg",CV_LOAD_IMAGE_ANYDEPTH)
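The same channel distinction is easy to see in NumPy terms, since OpenCV's Python bindings return images as plain arrays. This sketch uses synthetic arrays rather than a real file; the pixel values are made up:

```python
import numpy as np

# imread("Photo.jpg", 1) yields the equivalent of an H x W x 3 uint8 array;
# each pixel holds three 8-bit values, like Mat::at<cv::Vec3b> in C++:
color = np.zeros((4, 4, 3), dtype=np.uint8)
color[0, 0] = (255, 128, 0)   # B, G, R
b, g, r = color[0, 0]         # three values per pixel

# imread("Photo.jpg", 0) yields H x W uint8 -- one value per pixel,
# which matches Mat::at<uchar>:
gray = np.zeros((4, 4), dtype=np.uint8)
gray[0, 0] = 200
```

Reading a 3-channel 8-bit pixel as a single int, as at<int> does, reinterprets four bytes at once, which is why the element-size check in the assertion fires.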