Detecting color in video feed - OpenCV

I know this seems like a question that has been asked a million times, and those answers have definitely helped, but I haven't been able to find anything that addresses my specific problem. Overall, my project is centered on detecting an LED that is blinking Morse code and then translating that code. What I've done so far is threshold the image so that only the LED shows up and everything else is black; the light from the LED is red. To start off, I want to print either a "0" or a "1" depending on whether the LED is on or off. However, I am not sure how to detect a color in an image. Here is the part of the code I'm currently working on:
if (frameFromCamera->InRange(new Bgr(0, 0, 200), new Bgr(0, 0, 255)) == 255) {
    tbMorse->Text = "1";
}
else {
    tbMorse->Text = "0";
}
But I am getting the following error:
BAOTFISInterface.cpp(1010): error C2664: 'Emgu::CV::Image<TColor,TDepth> ^Emgu::CV::Image<TColor,unsigned short>::InRange(Emgu::CV::Image<TColor,unsigned short> ^,Emgu::CV::Image<TColor,unsigned short> ^)' : cannot convert parameter 1 from 'Emgu::CV::Structure::Bgr *' to 'Emgu::CV::Image<TColor,TDepth> ^'
with
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned char
]
and
[
TColor=Emgu::CV::Structure::Gray,
TDepth=unsigned short
]
No user-defined-conversion operator available, or
Cannot convert an unmanaged type to a managed type
Does anyone know how to fix this? I'm using VS2010, so I'm using the Emgu CV wrapper to access the OpenCV library. This is all in managed C++ (C++/CLI). I will take any pointers or suggestions I can get.
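As an aside, the compiler message shows that this InRange overload expects two images as bounds, and that InRange returns a mask image rather than a scalar, so comparing the result to 255 won't work either way. For the underlying idea, here is a minimal sketch of the on/off test using OpenCV's Python API (cv2); the camera index, BGR bounds, and pixel-count cutoff are placeholder values to tune:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # placeholder camera index
ret, frame = cap.read()

# Keep only strongly red pixels; OpenCV stores channels in BGR order.
lower = np.array([0, 0, 200], dtype=np.uint8)
upper = np.array([100, 100, 255], dtype=np.uint8)
mask = cv2.inRange(frame, lower, upper)

# Call the LED "on" if enough pixels survive the threshold.
print("1" if cv2.countNonZero(mask) > 50 else "0")  # 50 is an arbitrary cutoff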

Related

RGB Conversion in Lua/TI-Nspire (TI-image)

I know this may be kind of a dead, old, unanswered topic, but I'm trying to convert an RGB color code to the string format used in a TI-image file, and the conversion doesn't make sense to me:
https://wiki.inspired-lua.org/TI.Image
I understand everything the article mentions until I reach the RGB conversion part. It says that each RGB channel has to fit in 5 bits, but it doesn't tell you how to do the conversion, and I can't make sense of it from the given example. For instance:
R=255 → R = 31
G=012 → G = 1
B=123 → B = 15
What would I have to do to convert R=255, G=012, and B=123 into the output above?
I understand the remaining instructions in the article; this is the only part I'm stuck on.
Anyone have an idea how to do this?
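For what it's worth, the example values are consistent with simply dropping the low 3 bits of each 8-bit channel (integer division by 8), which is the standard 24-bit to 15-bit (RGB555) reduction: 255 >> 3 = 31, 12 >> 3 = 1, 123 >> 3 = 15. A quick Python sketch; the bit order used when packing is my assumption, so check the wiki for the exact layout:

def rgb888_to_rgb555(r, g, b):
    # Drop the low 3 bits of each channel (integer divide by 8).
    r5, g5, b5 = r >> 3, g >> 3, b >> 3
    # Pack into one 15-bit value; this 5-5-5 order is an assumption.
    return (r5 << 10) | (g5 << 5) | b5

print(255 >> 3, 12 >> 3, 123 >> 3)  # 31 1 15, matching the article's example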

Counting nuclear foci in ImageJ - getting strange results

I'm following the protocol outlined here: http://microscopy.duke.edu/HOWTO/countfoci.html
The commands I'm using are enclosed below.
My problem is that I'm working on massive images (25k x 17k pixels). Sometimes I get accurate values, but sometimes I get what you see below. Going by the results table (using RawIntDen/255 to get the actual number of nuclear foci), the highlighted cluster supposedly has around 60,000 nuclei, which is clearly not the case, as a quick visual examination shows.
Any idea why this happens or what I can do about it?
I've tried re-binarizing the image in the step immediately before measuring; that didn't work. I get this problem whether I run the commands manually or via the macro. Any other ideas?
Thanks in advance.
Link to picture: https://www.dropbox.com/s/b7p6wkijf5smlpy/photo%20aug%2009%2C%204%2054%2021%20pm.jpg?dl=0
run("8-bit");
run("Auto Threshold", "method=Triangle white setthreshold show");
run("Convert to Mask");
run("Fill Holes");
run("Dilate");
run("Dilate");
run("Analyze Particles...", "size=50-Infinity display clear summarize add in_situ");
wait(30000);
run("Revert");
run("Find Maxima...", "noise=7 output=[Single Points] exclude");
run("ROI Manager...");
roiManager("Show None");
roiManager("Show All");
run("Set Measurements...", "area mean min integrated redirect=None decimal=3");
roiManager("Measure");
wait(30000);
roiManager("Save", path + ".zip");
saveAs("Results", path + ".xls");
close();
Your pictures seem to be quite big in pixels, and your defined particle size is "50-Infinity". I think the lower bound of 50 is too small for images that large; try something bigger than 50. Hope that solves the problem.
P.S.: We've tried to cope with similar problems using this algorithm as well; perhaps it will help you: http://focinator.oeck.de/.
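If it helps to see the size-cutoff idea outside of ImageJ, here is a rough OpenCV/Python sketch of the same filtering on a binary mask; the file name and min_area are placeholders to tune, just like the Analyze Particles lower bound:

import cv2

mask = cv2.imread("binary_nuclei_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Label connected blobs and keep only those above a minimum area.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
min_area = 500  # placeholder cutoff
kept = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
print(len(kept), "blobs pass the size cutoff")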

cvConvertScale error

I am new to Python and OpenCV, so if this is simple I apologize in advance.
I am trying to follow the depth-map code at http://altruisticrobot.tistory.com/219.
In the code below, ConvertScale raises the following error:
src.size == dst.size && src.channels() == dst.channels()
I have spent a couple of days and can't figure out why.
Any pointers would be greatly appreciated.
Thanks in advance.
im_l = cv.LoadImage('C:\Python27\project\captureL6324.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
im_r = cv.LoadImage('C:\Python27\project\captureR6324.png',cv.CV_LOAD_IMAGE_GRAYSCALE)
imsize = cv.GetSize(im_l)
disp = cv.CreateImage(imsize,cv.IPL_DEPTH_16S,1)# to receive Disparity
#run stereo Correspondence-- returns a single channel 16 bit signed disparity
disparity = cv.FindStereoCorrespondenceBM(im_r,im_l,disp,cv.CreateStereoBMState())
#convert to a real disparity by dividing it by 16-- first create variable to hold the converted scale
real_disparity = cv.CreateImage(imsize,cv.IPL_DEPTH_16S,1)
cv.ConvertScale(disparity, real_disparity, -16,0)
#get the point cloud
depth = cv.CreateImage(cv.GetSize(im_l),cv.IPL_DEPTH_32F,3)
cv.ReprojectIMageTo3D(real_disparity,depth, ReprojectMatrix)
Your problem is that FindStereoCorrespondenceBM saves its result in the third parameter, but you are using the return value, which is None, for further computation. This leads to an error, since a matrix of a certain size and type is expected.
So just change
cv.ConvertScale(disparity, real_disparity, -16,0)
to
cv.ConvertScale(disp, real_disparity, -16,0)
To run the whole script I also defined ReprojectMatrix and fixed the typo in the last line:
ReprojectMatrix = cv.CreateMat(4,4,cv.CV_32FC1);
cv.ReprojectImageTo3D(real_disparity,depth, ReprojectMatrix)
This script runs without error for me. But I have not checked whether it gives the correct result.
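As an aside, the cv module used above is the legacy OpenCV 1.x Python API. Under the newer cv2 API the same pipeline looks roughly like this sketch (assuming OpenCV 3+; numDisparities and blockSize are placeholder tuning values, and Q should come from your stereo calibration rather than the identity used here):

import cv2
import numpy as np

im_l = cv2.imread("captureL6324.png", cv2.IMREAD_GRAYSCALE)
im_r = cv2.imread("captureR6324.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=21)
disp16 = stereo.compute(im_l, im_r)            # 16-bit signed, scaled by 16
real_disparity = disp16.astype(np.float32) / 16.0

Q = np.eye(4, dtype=np.float32)                # placeholder reprojection matrix
depth = cv2.reprojectImageTo3D(real_disparity, Q)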

OpenCV getRectSubPix with Alpha Channel

I need to use the getRectSubPix function in my code as part of a process of cropping and rotating a section of my image. This works fine with 3-channel images, but as soon as I try to use it with a BGRA or RGBA Mat it crashes, telling me:
OpenCV Error: Unsupported format or combination of formats () in cvGetRectSubPix, file /home/biotracking/Downloads/OpenCV-2.4.2/modules/imgproc/src/samplers.cpp, line 550
My code is basically like this:
cv::cvtColor(polymask, polymask, CV_BGR2BGRA);
getRectSubPix(polymask, cv::Size(sqrt(biggestdistancesquared)*2,sqrt(biggestdistancesquared)*2), src_center, polymask);
If this function truly doesn't work for Mats with alpha channels, that seems crazy. Anybody know?
I noticed this problem and the answer by #blorgggg a while ago, and doubted it deeply, since the problem is related to channels, but that answer is related to depth.
My answer:
The error arises because getRectSubPix supports only images with 1 or 3 channels.
Related Code:
if( (cn != 1 && cn != 3) || !CV_ARE_CNS_EQ( src, dst ))
CV_Error( CV_StsUnsupportedFormat, "" );
The statement in the thread by #luhb, that getRectSubPix supports only images of depth 8 or 32, is indeed true, but it has nothing to do with the question here.
Related code:
if( CV_MAT_DEPTH( src->type ) != CV_8U || CV_MAT_DEPTH( dst->type ) != CV_32F )
CV_Error( CV_StsUnsupportedFormat, "" );
I checked the relevant code in versions 2.4/2.8/2.9.
Finally, to solve your trouble, you could first split the Mat, call getRectSubPix on each channel, and then merge the four channels.
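A quick sketch of that split/crop/merge workaround in Python (cv2); the file name, patch size, and center are placeholders:

import cv2

img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)  # hypothetical BGRA image
patch_size = (100, 100)     # placeholder patch size
center = (123.4, 56.7)      # placeholder sub-pixel center

# getRectSubPix rejects 4-channel input, so crop each channel separately.
channels = cv2.split(img)
patches = [cv2.getRectSubPix(c, patch_size, center) for c in channels]
patch = cv2.merge(patches)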
Another user nailed the answer to this question.
In this other question, user luhb answered (Open CV using 16 bit signed format not working... Cropping and re-sizing an image):
It seems getRectSubPix only works with {src, dst} types of {CV_8U, CV_8U}, {CV_32F, CV_32F}, and {CV_8U, CV_32F}.
I dipped into the source code and figured that out. There's no specification in the API reference.

OpenCV: cvContourBoundingRect gives "Symbol Not Found" while other contour functions work fine

NOTE: Stack Overflow won't let me answer my own question, so I'm answering it here. Please scroll to the bottom to see my answer.
QUESTION
Given a binary image, I want to be able to identify which region has the greatest y-coordinate, i.e., which region is closest to the bottom. In the function below, I try to use contours and bounding rectangles to get the answer I need. However, my use of the function cvContourBoundingRect gives rise to the following error message when I build:
"_cvContourBoundingRectemphasized", referenced from:
GetLowestContourBoundingRectangle(_IplImage * img, bool)
in main.o. Symbol(s) not found. Collect2: Id returned 1 exit status.
Build Failed.
This is very strange, since I have successfully used other contour functions like cvFindContours and cvContourArea without any trouble. I tried running some searches on Google, but nothing turned up. If anyone can point me in the right direction I would appreciate it.
Thanks in advance.
CvRect GetLowestContourBoundingRectangle(IplImage *img, bool invertFlag) {
    // NOTE: CONTOURS ARE DRAWN AROUND WHITE AREAS
    IplImage *output = invertFlag ? cvCloneImage(InvertImage(img)) : cvCloneImage(img); // this goes into find contours and is consequently modified

    // find contours
    CvMemStorage *contourstorage = cvCreateMemStorage(0);
    CvSeq *contours = NULL;
    cvFindContours(output, contourstorage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    // analyze each contour
    int lowestRectangleCoordinate = 0;
    CvRect currentBoundingRectangle;
    CvRect lowestBoundingRectangle;

    while (contours) {
        currentBoundingRectangle = cvContourBoundingRect(contours);
        if (currentBoundingRectangle.y + currentBoundingRectangle.height > lowestRectangleCoordinate) {
            lowestRectangleCoordinate = currentBoundingRectangle.y + currentBoundingRectangle.height;
            lowestBoundingRectangle = currentBoundingRectangle;
        }
        contours = contours->h_next;
    }

    cvReleaseMemStorage(&contourstorage);
    return lowestBoundingRectangle;
}
ANSWER:
Okay, ironically I found out why it's breaking shortly after drafting my original question (although in fairness I'd been wrestling with this for a few hours by that point).
I looked up the header files in which each of the following three functions is defined:
cvFindContours -- imgproc_c.h
cvContourArea -- imgproc_c.h
cvContourBoundingRect -- compat.hpp
compat.hpp is apparently for deprecated functions that are kept around for backwards compatibility. Here's what's written in the header:
/*
A few macros and definitions for backward compatibility
with the previous versions of OpenCV. They are obsolete and
are likely to be removed in future. To check whether your code
uses any of these, define CV_NO_BACKWARD_COMPATIBILITY before
including cv.h.
*/
With that said, does anyone know how I can actually go about writing this function with non-obsolete definitions?
About your original question, "OpenCV: cvContourBoundingRect gives 'Symbol Not Found'":
The library to link against for that (deprecated) function is this one:
libopencv_legacy.so
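As for rewriting the function against non-deprecated definitions, the modern API exposes findContours and boundingRect directly. A rough equivalent in Python (cv2), assuming OpenCV 4.x (where findContours returns two values) and a binary input image:

import cv2

def lowest_contour_bounding_rect(binary_img):
    # Return the bounding rect (x, y, w, h) whose bottom edge is lowest.
    contours, _ = cv2.findContours(binary_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    lowest = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if lowest is None or y + h > lowest[1] + lowest[3]:
            lowest = (x, y, w, h)
    return lowest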
