SimpleBlobDetection Code - opencv

Hi, I am a pure novice in image processing, especially with OpenCV. I want to write a program for blob detection that takes an image as input and returns the color and centroid of the blob. My image consists purely of regular polygons on a black background. For example, my image might contain a green (equilateral) triangle or a red square on a black background. I want to use the SimpleBlobDetector class in OpenCV and its 'detect' function for this purpose. Since I'm a novice, a full program would be a lot of help to me.

I suggest you use cvblob, a complementary library to OpenCV. It has an example that automatically obtains the blobs in an image, along with their centroids, contours, etc.
Here is the source code; I tried it on OS X and it works really well.
Link: https://code.google.com/p/cvblob/
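Since the question asks specifically about SimpleBlobDetector, here is a minimal sketch in Python of how that could look; the file name input.png and the parameter values are my assumptions and would need tuning for your images:

```python
import cv2

# Load the image and make a grayscale copy for the detector.
img = cv2.imread("input.png")                  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# SimpleBlobDetector looks for dark, roundish blobs by default, so
# relax the filters for bright polygons on a black background.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255                         # look for bright blobs
params.filterByArea = True
params.minArea = 100                           # assumed; tune to your image
params.maxArea = 100000
params.filterByCircularity = False             # polygons are not circles
params.filterByConvexity = False
params.filterByInertia = False

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)

for kp in keypoints:
    x, y = int(kp.pt[0]), int(kp.pt[1])        # keypoint center = blob centroid
    b, g, r = img[y, x]                        # BGR color at the centroid
    print(f"centroid=({x}, {y})  color=(R={r}, G={g}, B={b})")
```

Drawing the keypoints with cv2.drawKeypoints is a quick way to sanity-check the detection visually.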

Related

Detecting drawn lines and dots on a notebook paper

I have a picture of a notebook page (with a square grid), and lines and dots are drawn on it as in the description. The output should be a data structure that contains info about the boundaries and dots. How can one accomplish that? If possible, the program should process this dynamically (given a video).
Yes, this can be accomplished with various image processing techniques.
One well-known technique that can help is the Canny edge detector, which can detect all the well-defined edges within an image. You can read more about it here. Various Python and C# image processing libraries make this extremely easy; take OpenCV, for example.
For detecting dots in the middle of the edges, that would be up to you to come up with, unless anyone knows of a library that makes that easy as well. I suggest looking at each square detected by the Canny edge detector and checking whether there are any dark color values around its middle.
For the data structure, that is also up to you.
Remember that a video is just a sequence of images. Just apply the same technique to all the images.
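To make that concrete, here is a minimal sketch with OpenCV in Python; the file names and the Canny thresholds are my assumptions and would need tuning:

```python
import cv2

# Single image: blur a little to suppress the paper texture, then run Canny.
frame = cv2.imread("notebook.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)       # hysteresis thresholds: tune per image
cv2.imwrite("edges.png", edges)

# Video: a video is just a sequence of images, so apply the same steps per frame.
cap = cv2.VideoCapture("notebook.mp4")    # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    # ...process or store the per-frame edges here...
cap.release()
```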

Extracting lines from an image to feed to OCR - Tesseract

I was watching this talk from PyCon (http://youtu.be/B1d9dpqBDVA?t=15m34s). Around the 15:33 mark, the speaker talks about extracting lines from an image (a receipt) and then feeding them to the OCR engine so that text can be extracted in a better way.
I have a similar need where I'm passing images to the OCR engine. However, I don't quite understand what he means by extracting lines from an image. What are some open source tools that I can use to extract lines from an image?
Take a look at the technique used to detect the skew angle of text.
Groups of lines are used to isolate the text in an image (this is the interesting part).
From this result you can easily detect the upper/lower limits of each line of text; the text itself will be located between them. I've faced a similar problem before, and the code might be useful to you.
All you need to do from here is crop each pair of lines and feed that as an image to Tesseract.
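As a rough sketch, assuming your line-detection step has already produced the top/bottom y-coordinate of each text line, the cropping and feeding could look like this in Python (the pytesseract wrapper and the placeholder coordinates are my assumptions):

```python
import cv2
import pytesseract   # assumed wrapper; needs the Tesseract engine installed

img = cv2.imread("receipt.png")            # hypothetical file name
# line_bounds is assumed to come from your line-detection step:
# (top, bottom) y-coordinates for each detected line of text.
line_bounds = [(10, 42), (48, 80)]         # placeholder values for illustration

for top, bottom in line_bounds:
    line_img = img[top:bottom, :]          # crop one line of text
    print(pytesseract.image_to_string(line_img))
```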
I can tell you a simple technique for feeding images to OCR: perform some operations to get the ROI (region of interest) of your image, and localize the text area after binarizing the image. Then you can find contours, and by choosing a suitable threshold value and a required contour area, you can feed the resulting image to OCR.
Direct answer: you extract lines from an image with Hough Transform.
You can find an analytical guide here.
Text lines can be detected as well; Karlphillip's answer is based on the Hough transform too.
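A minimal sketch of the probabilistic Hough transform with OpenCV's Python bindings; the file name and all the parameter values are my assumptions and would need tuning:

```python
import cv2
import numpy as np

img = cv2.imread("receipt.png")     # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# The probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)  # draw for inspection
cv2.imwrite("lines.png", img)
```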

Opencv how to ignore small parts on image

I need a little help with OpenCV; I'm a beginner and don't know all the functions yet.
I'm trying to do OCR on my licence plate, a Brazilian plate. After some image processing with cvCvtColor, cvCanny, cvFindContours and cvDrawContours, I get images like this:
It's a fake image; I mounted it because I don't want to publish my real plate on the web. My real image contains only black and white; I painted some parts in this example because I want to ignore those parts. The red part is the city name, the yellow part is the hyphen separator, and the green part is the hole for fixing the plate to the car. I need to know if there is a way to ignore these small parts and recognize only the big parts, so that after this filter I can do my OCR processing. Any help?
I'm not sure if it helps in other situations, but in this situation you can remove small contours using erosion, or simply use contourArea to calculate each contour's area (and remove the contour if its area is too small).
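A minimal sketch of the contourArea approach with OpenCV's Python bindings; the file name and the area threshold are my assumptions, and it assumes OpenCV 4's findContours signature:

```python
import cv2
import numpy as np

# Assumes a black-and-white input like the plate image described above.
binary = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

mask = np.zeros_like(binary)
MIN_AREA = 200                      # assumed; tune to the size of the characters
for c in contours:
    if cv2.contourArea(c) >= MIN_AREA:
        # Keep only the big parts by redrawing them filled onto the mask.
        cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
cv2.imwrite("filtered.png", mask)
```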

How do I detect small blobs using EmguCV?

I'm trying to track the position of a robot from an overhead webcam. However, as I don't have much access to the robot or the environment, I have been working with snapshots from the webcam.
The robot has 5 bright LEDs positioned strategically, in a color different enough from the robot and the environment that they are easy to isolate.
I have been able to do just that using EmguCV, resulting in a binary image like the one below. My question now is, how do I get the positions of the five blobs and use those positions to determine the position and orientation of the robot?
I have been experimenting with the Emgu.CV.VideoSurveillance.BlobTrackerAuto class, but it stubbornly refuses to detect the blobs in the above image. Being a bit of a newbie when it comes to any of this, I'm not sure what I could be doing wrong.
So what would be the best method of obtaining the positions of the blobs in the above image?
I can't tell you how to do it with EmguCV in particular; you'd need to translate the calls from OpenCV to EmguCV. You'd use cv::findContours to get the blobs and cv::moments to get their positions (the formula for computing the center point of a blob is in the documentation of cv::moments). Then you'd use cv::estimateRigidTransform to get the position and orientation of the robot.
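For reference, here is a minimal sketch of the findContours/moments part in OpenCV's Python bindings; the file name is my assumption, and it assumes OpenCV 4's findContours signature:

```python
import cv2

binary = cv2.imread("leds.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file name
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

centers = []
for c in contours:
    M = cv2.moments(c)
    if M["m00"] > 0:                     # skip degenerate contours
        # Centroid formula from the cv::moments documentation: (m10/m00, m01/m00).
        centers.append((M["m10"] / M["m00"], M["m01"] / M["m00"]))

print(centers)
# Matching these centers against the known LED layout on the robot
# (e.g. with cv2.estimateAffinePartial2D, which replaces the deprecated
# estimateRigidTransform in recent OpenCV versions) then gives the
# robot's position and orientation.
```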
I use the cvBlob library to work with blobs. Yesterday I used it to detect small blobs and it works fine.
I wrote a Python module to do this very thing.
http://letsmakerobots.com/node/38883#comments

How do I recognize squares in this image?

So I'm using OpenCV to do square recognition on this image. I compiled the squares.c sample and ran it on an image that I took, and here are the results:
http://www.learntobe.org/urs/index1.php
The image on the left is the original and on the right is the image that is a result of running the square detection.
The results aren't bad, but I really need this to detect ALL of the squares, and I'm really new to OpenCV and image processing. Does anyone know how I can edit the squares.c file to make the detection more inclusive, so that all of the squares are highlighted?
Thanks a lot ahead of time.
All the whitish colors are tough to detect; nothing separates them from the page itself. Try doing some kind of edge detection (check cvCanny or cvSobel).
You should also "pre-process" the image. That is, increase the contrast, make the colors more saturated, etc.
Also check this article: http://www.aishack.in/2010/01/an-introduction-to-contours/ It talks about how the squares.c sample works. Then you'll understand a bit about how to improve the detection in your case.
Hope this helps!
