OpenCV getRectSubPix with Alpha Channel

I need to use the getRectSubPix function in my code as part of a process of cropping and rotating a section of my image. This works fine with normal 3-channel images, but as soon as I try to use it with a BGRA or RGBA Mat it crashes with:
OpenCV Error: Unsupported format or combination of formats () in cvGetRectSubPix, file /home/biotracking/Downloads/OpenCV-2.4.2/modules/imgproc/src/samplers.cpp, line 550
My code is basically like this:
cv::cvtColor(polymask, polymask, CV_BGR2BGRA);
getRectSubPix(polymask, cv::Size(sqrt(biggestdistancesquared)*2,sqrt(biggestdistancesquared)*2), src_center, polymask);
If this function truly doesn't work for Mats with alpha channels, that seems crazy. Does anybody know?

I noticed this question and the answer by @blorgggg a while ago, and had deep doubts about it, since the problem is related to channels, while that answer is about depth.
My answer:
The error arises because getRectSubPix supports only images with 1 or 3 channels.
Related Code:
if( (cn != 1 && cn != 3) || !CV_ARE_CNS_EQ( src, dst ))
    CV_Error( CV_StsUnsupportedFormat, "" );
The statement by @luhb in the thread quoted below is true: the function does indeed only support images of depth 8U or 32F. But that has nothing to do with the question here.
Related code:
if( CV_MAT_DEPTH( src->type ) != CV_8U || CV_MAT_DEPTH( dst->type ) != CV_32F )
    CV_Error( CV_StsUnsupportedFormat, "" );
I checked the code snippet in versions 2.4/2.8/2.9.
Finally, to solve your problem, you could first split the Mat, call getRectSubPix on each channel, and then merge the four channels.
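A minimal sketch of that split/crop/merge workaround (my own illustration, not from the original answer; the function name and parameters are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat cropSubPixBGRA(const cv::Mat& bgra, cv::Size patchSize, cv::Point2f center)
{
    // Split the 4-channel image into single-channel planes,
    // which getRectSubPix does accept.
    std::vector<cv::Mat> planes;
    cv::split(bgra, planes);

    // Crop each plane separately with sub-pixel accuracy.
    for (size_t i = 0; i < planes.size(); ++i) {
        cv::Mat cropped;
        cv::getRectSubPix(planes[i], patchSize, center, cropped);
        planes[i] = cropped;
    }

    // Merge the four cropped planes back into one BGRA patch.
    cv::Mat patch;
    cv::merge(planes, patch);
    return patch;
}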

Another user nailed the answer to this question.
In this other question, user luhb answered (Open CV using 16 bit signed format not working... Cropping and re-sizing an image):
It seems getRectSubPix only works with {src, dst} type combinations of {CV_8U, CV_8U}, {CV_32F, CV_32F} and {CV_8U, CV_32F}.
I dipped into the source code to figure that out; there's no specification in the API reference.

Why is "no code allowed to be all ones" in libjpeg's Huffman decoding?

I'm trying to satisfy myself that METEOSAT images I'm getting from their FTP server are actually valid images. My doubt arises because all the tools I've used so far complain about "Bogus Huffman table definition" - yet when I simply comment out that error message, the image appears quite plausible (a greyscale segment of the Earth's disc).
From https://github.com/libjpeg-turbo/libjpeg-turbo/blob/jpeg-8d/jdhuff.c#L379:
while (huffsize[p]) {
  while (((int) huffsize[p]) == si) {
    huffcode[p++] = code;
    code++;
  }
  /* code is now 1 more than the last code used for codelength si; but
   * it must still fit in si bits, since no code is allowed to be all ones.
   */
  if (((INT32) code) >= (((INT32) 1) << si))
    ERREXIT(cinfo, JERR_BAD_HUFF_TABLE);
  code <<= 1;
  si++;
}
If I simply comment out the check, or add a check for huffsize[p] to be nonzero (as in the containing loop's controlling expression), then djpeg manages to convert the image to a BMP which I can view with few problems.
Why does the comment claim that all-ones codes are not allowed?
It claims that because they are not allowed. That doesn't mean that there can't be images out there that don't comply with the standard.
The reason they are not allowed is this (from the standard):
Making entropy-coded segments an integer number of bytes is performed
as follows: for Huffman coding, 1-bits are used, if necessary, to pad
the end of the compressed data to complete the final byte of a
segment.
If the all-1s code were allowed, you could end up with an ambiguity in the last byte of the compressed data, where the padding 1s could be decoded as another symbol.
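As a concrete (hypothetical) illustration: suppose a two-bit code 11 were assigned to some symbol. If the entropy-coded data ends partway into a byte, the remaining bits are padded with 1s, and a decoder reading through the end of that byte would see 11 and emit a spurious extra symbol; it has no way to tell the padding from real data. Reserving the all-ones code guarantees that trailing 1-bits can always be recognized as padding.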

Detecting color in video feed

I know this seems like a question that has been asked a million times, and those answers definitely helped, but I haven't been able to find anything that addresses my specific problem. Overall, my project is centered on detecting an LED that's blinking out Morse code and then translating that Morse code. What I've done so far is threshold the image so that only the LED shows up and everything else is black. The light from the LED is red. So to start off, I want to print either a "0" or a "1" depending on whether the LED is on or off. However, I am not sure how to detect a given color in an image. Here is the part of the code that I'm working on currently:
if(frameFromCamera->InRange(new Bgr(0, 0, 200), new Bgr(0, 0, 255)) == 255){
    tbMorse->Text = "1";
}
else{
    tbMorse->Text = "0";
}
But I am getting the following error:
BAOTFISInterface.cpp(1010): error C2664: 'Emgu::CV::Image<TColor,TDepth> ^Emgu::CV::Image<TColor,unsigned short>::InRange(Emgu::CV::Image<TColor,unsigned short> ^,Emgu::CV::Image<TColor,unsigned short> ^)' : cannot convert parameter 1 from 'Emgu::CV::Structure::Bgr *' to 'Emgu::CV::Image<TColor,TDepth> ^'
with
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned char
]
and
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned short
]
No user-defined-conversion operator available, or
Cannot convert an unmanaged type to a managed type
Does anyone know how to fix this? I'm using VS2010, so I have to go through Emgu CV to use the OpenCV library; this is all in Managed C++. I will take any pointers or suggestions that I can get.
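The error message itself gives a hint: the InRange overload being resolved expects two Image^ arguments, not Bgr values. Setting the Emgu types aside, here is a minimal sketch of the same red-threshold check in the plain OpenCV C++ API (the red range mirrors the question; the small blue/green tolerance is my assumption):

#include <opencv2/opencv.hpp>

// Decide whether the red LED is "on" in a BGR frame by counting
// pixels that fall inside a red threshold range.
bool ledIsOn(const cv::Mat& frameBgr)
{
    cv::Mat mask;
    // Keep pixels whose red channel is in [200, 255]; the upper
    // bounds of 50 on blue/green allow for some sensor noise.
    cv::inRange(frameBgr, cv::Scalar(0, 0, 200), cv::Scalar(50, 50, 255), mask);
    // Treat the LED as "on" if any pixels matched; a higher
    // threshold than zero would be more robust against noise.
    return cv::countNonZero(mask) > 0;
}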

cvConvertScale error

I am new to Python and OpenCV, so if this is simple I apologize in advance.
I am trying to follow the depth map code at http://altruisticrobot.tistory.com/219.
In the code below ConvertScale raises the following error:
src.size == dst.size && src.channels() == dst.channels()
I have spent a couple of days and can't figure out why.
Any pointers would be greatly appreciated.
Thanks in advance.
im_l = cv.LoadImage('C:\Python27\project\captureL6324.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
im_r = cv.LoadImage('C:\Python27\project\captureR6324.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
imsize = cv.GetSize(im_l)
disp = cv.CreateImage(imsize,cv.IPL_DEPTH_16S,1)# to receive Disparity
#run stereo Correspondence-- returns a single channel 16 bit signed disparity
disparity = cv.FindStereoCorrespondenceBM(im_r,im_l,disp,cv.CreateStereoBMState())
#convert to a real disparity by dividing it by 16-- first create variable to hold the converted scale
real_disparity = cv.CreateImage(imsize,cv.IPL_DEPTH_16S,1)
cv.ConvertScale(disparity, real_disparity, -16,0)
#get the point cloud
depth = cv.CreateImage(cv.GetSize(im_l),cv.IPL_DEPTH_32F,3)
cv.ReprojectIMageTo3D(real_disparity,depth, ReprojectMatrix)
Your problem is that FindStereoCorrespondenceBM stores the result in its third parameter, but you are using the return value, which is None, for further computation. This leads to an error, since a matrix of a certain size and type is expected.
So just change
cv.ConvertScale(disparity, real_disparity, -16,0)
to
cv.ConvertScale(disp, real_disparity, -16,0)
To run the whole script, I also changed the last lines to:
ReprojectMatrix = cv.CreateMat(4,4,cv.CV_32FC1);
cv.ReprojectImageTo3D(real_disparity,depth, ReprojectMatrix)
This script runs without error for me, but I have not checked whether it gives the correct result. (One thing to watch: cv.ConvertScale computes dst = src*scale + shift, so a scale of -16 multiplies the disparity by -16; to divide by 16, as the comment in the question intends, the scale would be 1.0/16.)

Conditional expression using Mat type

Previously I used the C API, and now I'm migrating to the OpenCV C++ API. Below is some code that doesn't go through; the compiler reports an error about a conditional expression involving Mat. Using the C API, everything seems fine.
/// Initialize (C API)
vector<IplImage*> storeImg;
storeImg.push_back(...);
if( storeImg.at(i) == storeImg.at(0) )//no error
/// Initialize (C++ API)
vector<Mat> storeImg;
storeImg.push_back(...);
/// To use it
if( storeImg.at(i) == storeImg.at(0) )//error: conditional expression is illegal
Is there any other workaround for this?
You need to access the elements of storeImg like this:
storeImg[i]
If you wish to then access elements of the Mat stored at that index, you can call
storeImg[i].at<float>(j)
I'm not sure about this, but I just tested it and it works:
if(storeImg[i].data == storeImg[0].data)
Please clarify what kind of comparison you intend.
If you have a vector<IplImage*> storeImg, ( storeImg[0] == storeImg[7] ) will compare the POINTERS only.
For a vector<Mat> storeImg, the same expression performs an element-wise comparison and yields a result matrix, which cannot be used directly as an if condition; hence the error.
Did you want to check whether the CONTENT (pixels) is equal?
That would be something like: sum( storeImg[0] - storeImg[7] ) == 0
If you still want to compare pointers, ( storeImg[0].data == storeImg[7].data ) might work in the cv::Mat case, but it will fail if you have clone()'s of other Mats there.
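To make the two options concrete, a small sketch (my own illustration; note that subtracting unsigned images saturates negative differences to zero, so the norm of the absolute difference is a safer content check than a plain subtraction):

#include <opencv2/opencv.hpp>

// Pointer comparison: true only if both headers share one pixel
// buffer; fails for clone()'d copies, as noted above.
bool sameBuffer(const cv::Mat& a, const cv::Mat& b)
{
    return a.data == b.data;
}

// Content comparison: norm(a, b, NORM_INF) is the largest absolute
// per-element difference, so zero means every pixel matches; unlike
// a - b on unsigned Mats, it does not saturate away differences.
bool sameContent(const cv::Mat& a, const cv::Mat& b)
{
    if (a.size() != b.size() || a.type() != b.type())
        return false;
    return cv::norm(a, b, cv::NORM_INF) == 0;
}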

OpenCV: cvContourBoundingRect gives "Symbol Not Found" while other contour functions work fine

NOTE: StackOverflow won't let me answer my own question, so I'm answering it here. Please scroll to the bottom to see my answer.
QUESTION
Given a binary image, I want to be able to identify which region has the greatest y-coordinate, i.e., which region is closest to the bottom. In the function below, I try to use contours and bounding rectangles to get the answer I need. However, my use of the function cvContourBoundingRect gives rise to the following error message when building:
"_cvContourBoundingRectemphasized", referenced from:
GetLowestContourBoundingRectangle(_IplImage * img, bool)
in main.o. Symbol(s) not found. Collect2: Id returned 1 exit status.
Build Failed.
This is very strange, since I have successfully used other contour functions like cvFindContours and cvContourArea without any trouble. I tried running some searches on Google, but nothing turned up. If anyone can point me in the right direction, I would appreciate it.
Thanks in advance.
CvRect GetLowestContourBoundingRectangle(IplImage *img, bool invertFlag) {
    // NOTE: CONTOURS ARE DRAWN AROUND WHITE AREAS
    IplImage *output = invertFlag ? cvCloneImage(InvertImage(img)) : cvCloneImage(img); // this goes into find contours and is consequently modified
    // find contours
    CvMemStorage *contourstorage = cvCreateMemStorage(0);
    CvSeq* contours = NULL;
    cvFindContours(output, contourstorage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
    // analyze each contour
    int lowestRectangleCoordinate = 0;
    CvRect currentBoundingRectangle;
    CvRect lowestBoundingRectangle;
    while (contours) {
        currentBoundingRectangle = cvContourBoundingRect(contours);
        if (currentBoundingRectangle.y + currentBoundingRectangle.height > lowestRectangleCoordinate) {
            lowestRectangleCoordinate = currentBoundingRectangle.y + currentBoundingRectangle.height;
            lowestBoundingRectangle = currentBoundingRectangle;
        }
        contours = contours->h_next;
    }
    cvReleaseMemStorage(&contourstorage);
    return lowestBoundingRectangle;
}
ANSWER:
Okay, ironically I found out why it's breaking shortly after drafting my original question (although in fairness I've been wrestling with this for a few hours at this point).
I looked up the header files in which each of the following three functions is defined:
cvFindContours -- imgproc_c.h
cvContourArea -- imgproc_c.h
cvContourBoundingRect -- compat.hpp
compat.hpp is apparently for deprecated functions that are kept around for backwards compatibility. Here's what's written in the header:
/*
A few macros and definitions for backward compatibility
with the previous versions of OpenCV. They are obsolete and
are likely to be removed in future. To check whether your code
uses any of these, define CV_NO_BACKWARD_COMPATIBILITY before
including cv.h.
*/
With that said, does anyone know how I can actually go about writing this function with non-obsolete definitions?
About your original question "OpenCV: cvContourBoundingRect gives “Symbol Not Found”".
The library to link against for that (deprecated) method is this one:
libopencv_legacy.so
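As for rewriting the function with non-obsolete definitions: cv::boundingRect in the C++ API is the current replacement for cvContourBoundingRect. A sketch of how the whole function could look (my own rewrite, assuming a binary 8-bit input image; not tested against the original):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Rect getLowestContourBoundingRectangle(const cv::Mat& binaryImg)
{
    // findContours modifies its input, so work on a copy.
    cv::Mat work = binaryImg.clone();

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(work, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Track the rectangle whose bottom edge (y + height) is largest.
    int lowestCoordinate = 0;
    cv::Rect lowestRect;
    for (size_t i = 0; i < contours.size(); ++i) {
        cv::Rect r = cv::boundingRect(contours[i]); // replaces cvContourBoundingRect
        if (r.y + r.height > lowestCoordinate) {
            lowestCoordinate = r.y + r.height;
            lowestRect = r;
        }
    }
    return lowestRect;
}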
