My question is similar to "OpenCV: Detect blinking lights in a video feed" and "openCV detect blinking lights".
I want to detect the on/off status of LEDs in any image that contains them. An LED can be of any size (but is mostly circular). It is important to get the location of all the LEDs in the image, whether they are on or off; to start with, I would like to get the status and position of only the LEDs that are on. Right now my image source is static, but eventually it must be video of any product with glowing LEDs, so there is no chance of having a template image to subtract the background.
I have tried using OpenCV (I am new to OpenCV) with thresholding, contours and circle detection, but have not been successful. Please share any source code or solution; it does not have to use OpenCV, as long as it gives me the result. It would be greatly appreciated.
The difference from the other two questions is that I want to get the number of LEDs in the image, whether they are on or off, and the status of all of them. I know this is very complex. To start with, I am trying to detect the glowing LEDs in the image. I have implemented the code shared below. I tried different implementations, but the code below is able to show me the glowing LEDs just by drawing the contours; however, there are more contours than glowing LEDs, so I cannot even get the total number of glowing LEDs. Please share your suggestions.
int main(int argc, char* argv[])
{
    IplImage* newImg = NULL;
    IplImage* grayImg = NULL;
    IplImage* contourImg = NULL;
    float minAreaOfInterest = 180.0;
    float maxAreaOfInterest = 220.0;

    // Parameters for the contour detection
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = 0;
    int mode = CV_RETR_CCOMP; // detect both outside and inside contours

    cvNamedWindow("src", 1);
    cvNamedWindow("Threshold", 1);

    // Load the original image
    newImg = cvLoadImage(argv[1], 1);

    // Convert to HSV so the LED colour can be thresholded as a hue range
    IplImage* imgHSV = cvCreateImage(cvGetSize(newImg), 8, 3);
    cvCvtColor(newImg, imgHSV, CV_BGR2HSV);
    cvNamedWindow("HSV", 1);
    cvShowImage("HSV", imgHSV);

    // Threshold the HSV image (not the BGR one) on hue range 20-30
    IplImage* imgThreshed = cvCreateImage(cvGetSize(newImg), 8, 1);
    cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
    cvShowImage("src", newImg);
    cvShowImage("Threshold", imgThreshed);

    // Make a copy of the original image to draw the detected contours on
    contourImg = cvCloneImage(newImg);
    cvNamedWindow("Contour", 1);

    // Find the contours
    cvFindContours(imgThreshed, storage, &contours, sizeof(CvContour), mode, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    int i = 0;
    for (; contours != 0; contours = contours->h_next)
    {
        i++;
        //ext_color = CV_RGB(rand()&255, rand()&255, rand()&255); // randomly colouring different contours
        cvDrawContours(contourImg, contours, CV_RGB(0, 255, 0), CV_RGB(255, 0, 0), 2, 2, 8, cvPoint(0, 0));
    }
    printf("Total Contours: %d\n", i);
    cvShowImage("Contour", contourImg);
    cvWaitKey(0);

    cvDestroyWindow("src");
    cvDestroyWindow("Threshold");
    cvDestroyWindow("HSV");
    cvDestroyWindow("Contour");
    cvReleaseImage(&newImg);
    cvReleaseImage(&imgThreshed);
    cvReleaseImage(&imgHSV);
    cvReleaseImage(&contourImg);
    cvReleaseMemStorage(&storage);
    return 0;
}
I had some time yesterday night; here is a (very) simple and partial solution that works fine for me.
I created a git repository that you can clone directly:
git://github.com/jlengrand/image_processing.git
and run using Python
$ cd image_processing/LedDetector/
$ python leddetector/led_highlighter.py
You can see the code here
My method:
Convert to a one-channel image
Search for the brightest pixel, assuming that we have at least one LED on and a dark background, as in your image
Create a binary image containing the brightest part of the image
Extract the blobs from that image, and retrieve their centers and the number of LEDs
The code only takes a single image into account at this point, but you can enhance it with a loop to process a batch of images (I already provide some example images in my repo).
You simply have to play around a bit with the centers found for the LEDs, as they might not be pixel-accurate from one image to another (the center could be slightly shifted).
In order to make the algorithm more robust (knowing whether an LED is on or not, finding an automatic rather than hard-coded margin value), you can play around a bit with the histogram (computed in extract_bright).
I already created the function for that; you should just have to enhance it a bit.
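For reference, here is a minimal C++ sketch of the same pipeline (the repo code is in Python; the margin of 20 below is a hard-coded assumption you would tune):
#include <opencv2/opencv.hpp>
#include <cstdio>

int main(int argc, char* argv[])
{
    // 1. Convert to a one-channel image
    cv::Mat img = cv::imread(argv[1]);
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    // 2. Search for the brightest pixel (assumes at least one LED is on)
    double minVal, maxVal;
    cv::minMaxLoc(gray, &minVal, &maxVal);

    // 3. Binary image containing the brightest part (margin of 20 is a guess)
    cv::Mat bright;
    cv::threshold(gray, bright, maxVal - 20, 255, cv::THRESH_BINARY);

    // 4. Extract the blobs and retrieve their centers
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bright, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 > 0)
            std::printf("LED at (%.1f, %.1f)\n", m.m10 / m.m00, m.m01 / m.m00);
    }
    std::printf("LEDs on: %d\n", (int)contours.size());
    return 0;
}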
Some more information concerning the input data:
OpenCV only accepts AVI files for now, so you will have to convert the MP4 file to AVI (uncompressed in my case). I used this, and it worked perfectly.
For some reason, the QueryFrame function caused memory leaks on my computer. That is why I created the grab_images function, which takes the AVI file as input and creates a batch of JPG images that are easier to use.
Here is the result for an image :
Input image :
Binary image :
Final result :
Hope this helps...
EDIT:
Your problem is slightly more complex if you want to use this image. The method I posted can still be used, but needs to be made a bit more complex.
You want to detect the LEDs that display information (status, bandwidth, ...) and discard the design part.
I see three simple solutions to this (a sketch of the first two follows the list):
You have previous knowledge of the positions of the LEDs. In this case, you can apply the very same method, but to a precise part of the whole image (using cv.SetImageROI).
You have previous knowledge of the color of the LEDs (you can see in the image that there are two different colors). Then you can search the whole image and apply a color filter to narrow down your choice.
You have no previous knowledge. In this case, things get a bit more complex. I would tend to say that the LEDs that are not useful should all have the same color, and that status LEDs usually blink. This means that by adding a learning step to the method, you might be able to work out which LEDs actually have to be selected as useful.
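For the first two cases, a small C++ sketch (the ROI rectangle and the hue range below are made-up values to illustrate; replace them with what you know about your board):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("board.jpg"); // hypothetical input image

    // Case 1: known LED positions -- restrict the search to a region of interest
    cv::Rect ledArea(50, 50, 200, 100);    // made-up coordinates
    cv::Mat roi = img(ledArea);

    // Case 2: known LED color -- keep only pixels in a given hue range
    cv::Mat hsv, mask;
    cv::cvtColor(roi, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 100, 100), cv::Scalar(80, 255, 255), mask); // green-ish guess

    // The blob extraction from the method above can then be run on 'mask'
    cv::imshow("mask", mask);
    cv::waitKey(0);
    return 0;
}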
Hope this brings some more food for thought.
Related
I'm currently working with EmguCV and I need an empty Mat. But apparently when I create it, the new Mat sometimes contains random values, which I do not want.
I'm creating it like this:
Mat mask = new Mat(mainImg.Size, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
And when I display the 'mask' it looks like this:
It should be completely black, but as you can see there is some garbage that causes me trouble when reading the Mat.
Does anyone know why this happens? Is there a clever way to clear the Mat?
Thanks in advance!
To create an empty Mat, just use the code below.
Mat img = new Mat();
If you want it to have a specific size, use the following code. Note that this constructor allocates memory without initializing it, which is why you see leftover garbage in your mask; clear it explicitly (for example with mask.SetTo(new MCvScalar(0))) before reading from it. In your question you chose a depth type of 8U with one channel; here I choose a depth type of 32F and 3 channels instead of 1, so you have full access to the Bgr color space.
Mat mask = new Mat(500, 500, DepthType.Cv32F, 3);
Mat objects are great because you don't need to specify the size or depth of the image beforehand. Similarly, if you want to use an Image instead, you can use the code below.
Image<Bgr, byte> img = new Image<Bgr, byte>(500, 500);
You will need to add some dependencies, but this is the easiest and my preferred way of doing it.
I have a grayscale photo that I am trying to colour programmatically so that it looks 'real', with user input 'painting' the colour (e.g. red). It feels like it should be simple, but I've been stuck trying a few ways that don't look right, so I thought I'd ask the community in case I've missed something obvious. I've tried the following (a sketch of the first approach follows the list):
Converting to HSV, and combining the "Hue" and "Saturation" from the colour selected by the user with the "Value" from the image.
Building a colour transformation matrix to multiply the BGR values (i.e. R = 0.8R + 1.1G + 1.0B). This works well for 'tinting', and adds a nice pastel effect, but doesn't really keep the depth or boldness of colour I want.
(favourite so far - see answers) Multiplying the RGB of the chosen colour by the RGB of the image.
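For the first approach, a minimal C++ sketch of what I mean (colouriseHSV and the values here are illustrative, not my actual code; gray is assumed to be a single-channel 8-bit image):
#include <opencv2/opencv.hpp>

// Sketch of approach 1: H and S from the user's colour, V from the photo
cv::Mat colouriseHSV(const cv::Mat& gray, const cv::Scalar& userColourBGR)
{
    // Convert the user's colour to HSV via a 1x1 swatch image
    cv::Mat swatch(1, 1, CV_8UC3, userColourBGR), swatchHSV;
    cv::cvtColor(swatch, swatchHSV, cv::COLOR_BGR2HSV);
    cv::Vec3b hsv = swatchHSV.at<cv::Vec3b>(0, 0);

    // H and S planes filled with the user's colour, V plane from the image
    std::vector<cv::Mat> planes(3);
    planes[0] = cv::Mat(gray.size(), CV_8UC1, cv::Scalar(hsv[0]));
    planes[1] = cv::Mat(gray.size(), CV_8UC1, cv::Scalar(hsv[1]));
    planes[2] = gray;

    cv::Mat merged, out;
    cv::merge(planes, merged);
    cv::cvtColor(merged, out, cv::COLOR_HSV2BGR);
    return out;
}
For example, colouriseHSV(photo, cv::Scalar(0, 0, 255)) would paint the photo red.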
To add to the comment by user Alexander Reynolds: the question you're asking is a known open research problem in the field of computer graphics, because the problem is under-constrained without using statistical priors of some sort. The state of the art in the CG community, presented at SIGGRAPH 2016, is found here:
http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/en/
Also see:
http://richzhang.github.io/colorization/
I've had another think and a play with Photoshop, and implemented a multiply blend mode in BGR space to get an OK result.
Implemented in Java:
Mat multiplyBlend(Mat values, Mat colours) { // values: 1 channel, colours: 3 channels
    // Simulates the BGR multiply blend mode
    ArrayList<Mat> splitColours = new ArrayList<Mat>();
    Core.split(colours, splitColours);
    // Per-channel multiply; the 1/255 scale keeps results in the 0-255 range
    Core.multiply(values, splitColours.get(0), splitColours.get(0), 1 / 255.0f);
    Core.multiply(values, splitColours.get(1), splitColours.get(1), 1 / 255.0f);
    Core.multiply(values, splitColours.get(2), splitColours.get(2), 1 / 255.0f);
    Mat ret = new Mat();
    Core.merge(splitColours, ret);
    return ret;
}
I'm creating a program in OpenCV (2.4.8) that should read video files and do some calculations on them. For these calculations I don't need the high-res frames; I'm perfectly fine with 640*360 as the resolution.
In early tests I had my webcam attached and I used:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);
Mat image;
cap.read(image);
namedWindow("firstframe", 1);
imshow("firstframe", image);
waitKey(0);
Which resized the image perfectly. Now I'm getting to the next step, where I want to use my program on stored video instead of a live feed (which I used for testing).
When I replace the '0' with the source file path (a string), the file is loaded, but the resolution remains 1920*1080.
Did I do anything wrong? Is there a way to load the video at a lower resolution 'on the fly'?
I've read the OpenCV documentation. Some of the settings are labeled 'cameras only' but this setting isn't:
http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-videocapture
I'm using OpenCV on a mac, and installed it with MacPorts.
Let me know if any more details are needed.
Thanks in advance for your help.
edit:
I've realised that the cap.set(...) calls return a boolean, so I tried printing the return values, and they are both 0 (false). This of course confirms that the frame isn't being resized. Still no clue as to why...
edit 2:
So now, as a temporary solution, I use the following line after read(image):
resize(image, image, Size(640, 360), 0, 0, INTER_CUBIC);
And this works, but I'm guessing it isn't the optimal solution.
Using
resize(image, image, Size(640, 360), 0, 0, INTER_CUBIC);
after read(image) seems to be the best solution to this problem. The frame width/height properties only apply to capture devices such as webcams; for a video file, the decoder always delivers frames at the stored resolution, which is why cap.set(...) returns false.
So the total (test) code becomes:
VideoCapture cap("path/to/file");
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);  // no effect on video files
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360); // no effect on video files
Mat image;
cap.read(image);
resize(image, image, Size(640, 360), 0, 0, INTER_CUBIC);
namedWindow("firstframe", 1);
imshow("firstframe", image);
waitKey(0);
If anyone knows of a better way, please let me know.
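One possible refinement (a sketch, not something I have benchmarked here): INTER_AREA is generally the recommended interpolation when shrinking an image, while INTER_CUBIC is better suited to enlarging. A full read-and-resize loop might look like this:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap("path/to/file");
    Mat frame, small;
    while (cap.read(frame))
    {
        // INTER_AREA tends to give cleaner results when downscaling
        resize(frame, small, Size(640, 360), 0, 0, INTER_AREA);
        imshow("frame", small);
        if (waitKey(30) >= 0) break; // any key stops playback
    }
    return 0;
}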
I'm working with the OpenCV library in Xcode on a colour-tracking application that draws lines from one point to the next. I was wondering if it's possible to draw the output not over the video but over a white background.
So instead of
cvShowImage("video",frame);
Is there a function that would show (backgroundcolour, frame)?
edit:
I've added this code, but since canvas is not an IplImage, it won't let me pass it instead of frame.
cv::Mat canvas(320, 240, CV_8UC3, Scalar(255,255,255));
IplImage* imgYellowThresh1 = GetThresholdedImage1(canvas);
cvAdd(&canvas,imgScribble,&canvas);
cvShowImage("video",&canvas);
So the error is on the GetThresholdedImage1 line, saying "no matching function for call to GetThresholdedImage1".
No, not in such a simple way as you proposed. The solution is to create a separate Mat and draw the lines on it.
cv::Mat canvas(rows, cols, CV_8UC3, Scalar(255,255,255)); // set size, type and fill color
You would prepare this Mat at the beginning of the code and then use the drawing functions on it. So instead of drawing the lines on frame, you would draw on canvas, as in the sketch below.
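A minimal sketch (the two points below are hypothetical placeholders for the positions your colour tracker produces, and the canvas size is a guess to match your frames):
#include <opencv2/opencv.hpp>

int main()
{
    // White canvas; size should match your video frames (640x480 is a guess)
    cv::Mat canvas(480, 640, CV_8UC3, cv::Scalar(255, 255, 255));

    // Hypothetical tracked positions; in your app these come from the tracker
    cv::Point prev(100, 100), curr(200, 150);

    // Draw the track on the canvas instead of on the frame
    cv::line(canvas, prev, curr, cv::Scalar(0, 0, 255), 2);

    cv::imshow("video", canvas);
    cv::waitKey(0);
    return 0;
}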
EDIT:
There was a slight misconception: the problem is that you are mixing the old C API (IplImage, cvShowImage) with the new C++ API (cv::Mat), which is why GetThresholdedImage1 and cvAdd will not accept your canvas. To learn about the latest C++ API, follow these tutorials:
http://docs.opencv.org/doc/tutorials/tutorials.html
I'm new to OpenCV and have just started sifting through the APIs. I intend to fetch the color, intensity and texture values of each pixel constituting the image. I was fiddling with the IplImage structure to start with, but couldn't make much progress.
Please let me know of any means to do this.
cheers
Have you tried OpenCV 2.0? It has a new C++ interface which makes things much easier. You can use the new Mat class to load images, access pixels efficiently, etc. It's much cleaner than the IplImage fun. I use \doc\opencv.pdf as my reference for anything I need. It has tutorials and examples with the new C++ interface - enough and more to get you started.
If you have any more specific OpenCV questions, please feel free to ask.
Here's some demo code to get you started (I've used the cv namespace):
// Load the image (looks like MATLAB :) ? )
Mat M = imread("h:\\lena.bmp");

// Display
namedWindow("Lena", CV_WINDOW_AUTOSIZE);
imshow("Lena", M);
waitKey();

// Crop out a rectangle from (100,100) of size (200,200) of the red channel
const int offset[2] = {100,100};
const int dims[2] = {200,200};
Mat Red(dims[0], dims[1], CV_8UC1);

// Read it from M into Red; imread gives BGR data, so red is channel 2
uchar* lena = M.data;
for(int i = 0; i < dims[0]; ++i)
    for(int j = 0; j < dims[1]; ++j)
    {
        // byte offset = row*step + col*channels + channel
        Red.at<uchar>(i,j) = *(lena + (i+offset[0])*M.step + (j+offset[1])*M.channels() + 2);
    }

// Display
namedWindow("RedRect", CV_WINDOW_AUTOSIZE);
imshow("RedRect", Red);
waitKey();
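As a side note, the same crop can be done without manual pointer arithmetic, using a Rect ROI and split; this sketch continues from the demo above (M already loaded):
// Take the ROI, split it into B, G, R planes, and keep the red one
std::vector<Mat> channels;
split(M(Rect(100, 100, 200, 200)), channels);
Mat Red2 = channels[2].clone(); // channel 2 is red in BGR order
imshow("RedRect2", Red2);
waitKey();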