How do I use feature detection to measure dimensions and locate a circle, line, or rectangle in an image in LabVIEW?
For example, let's say I load an image into LabVIEW; I want LabVIEW to detect whether it contains any shapes.
I used LabVIEW in high school for a robotics team, and we developed a lot of real-time image-tracking code that did pretty much that.
What we did was create a small system that took an image, checked it for pixels with a specific hue, saturation, and luminance, then grouped them together by creating a convex hull around those pixels. We then scored the convex hull on its linear averages, and that was compared against expected results.
And once you have the convex hull, you can perform some particle analysis, and a few calculations later you have your dimensions.
You could use the "IMAQ Find Circles" function to locate circles. For lines and rectangles I'd probably write something of my own: segment the image using IMAQ Threshold and let "IMAQ Particle Analysis" give you the characteristics of the resulting blobs.
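If it helps to see that pipeline outside LabVIEW, here is a minimal Python/OpenCV sketch of the same threshold-then-analyze idea (the filename and threshold values are hypothetical placeholders, not tuned settings):

import cv2
import numpy as np

# Hypothetical input file; substitute your own image.
img = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)

# Segment: a global threshold (rough analogue of IMAQ Threshold).
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Blob characteristics (rough analogue of IMAQ Particle Analysis):
# each stats row holds the blob's bounding box (x, y, w, h) and area.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    print(f'blob {i}: bbox=({x},{y},{w},{h}) area={area}')

# Circles (rough analogue of IMAQ Find Circles) via the Hough transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=100)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        print(f'circle at ({cx},{cy}) radius {r}')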
A sample image of what you're trying to achieve would help to understand the problem you're facing. Please upload one.
Also refer to the image-processing manuals for LabVIEW. These two are pretty good and give a lot of examples of how to process images:
NI Vision for LabVIEW User Manual:
http://www.ni.com/pdf/manuals/371007b.pdf
NI Vision Concepts Manual:
http://www.ni.com/pdf/manuals/372916e.pdf
LabVIEW has several built-in methods to find shapes; you can find all of them in the Machine Vision section of the Vision Assistant, but you will need some post-processing after applying such an algorithm.
You can use geometric matching and other VIs to find any shape you want.
I am working on a project that aims to build a program which automatically gives a relatively accurate detection of the pupil region in eye pictures. I am currently using SimpleCV in Python, given that Python is easier to experiment with. Since I just started, the eye pictures I am working with are fairly standardized. However, the size of the iris and pupil as well as the color of the iris can vary, and the position of the eye can shift a little among pictures. Here's a picture from Wikipedia that is similar to the pictures I am using:
"MyStrangeIris.JPG" by Epicstessie is licensed under CC BY-SA 3.0
I have tried simple thresholding. Since different eyes have different iris colors, a fixed threshold would not work on all pictures.
In addition, I tried SimpleCV's built-in Sobel and Canny edge detection; they don't work well, especially for eyes with a darker iris. I also doubt that Sobel or Canny alone can solve the problem, given that there is sometimes noise on the edge of the pupil (e.g., reflections of eyelashes).
I have entry-level knowledge about image processing and machine learning. Right now, I am thinking about three possibilities:
Do a regression on the threshold value based on some variables
Make a specific mask only for edge detection for the pupil
Classification of each pixel (this looks like a lot of work to build the training set)
Am I on the right track? I would like to reach out to anyone with more experience on this type of problem. Any tips/suggestions are more than welcome. Thanks!
I think that for a start you should put aside the machine learning. You have so much more to try in "regular" computer vision.
You need to try to describe a model for your problem. A good way to do this is to sit and think about how you as a person detect an iris. For example, I can think of:
It is near the center of the image.
It is a brown/green/blue circle with a distinct black center, surrounded by a mostly white ellipse.
You have skin color around the white ellipse.
It can't be too small or too large (depending on your images).
After you build your model, try to find better ways to detect these features. It's hard to point to specific techniques, but you can start with: the HSV color space, correlation, the Hough transform, morphological operations. A minimal sketch combining a few of these is below.
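For illustration, here is a small Python/OpenCV sketch that chains some of those tools: an HSV mask for dark pixels, a morphological cleanup, then a circular Hough transform. The filename and all thresholds are assumptions, not tuned values:

import cv2
import numpy as np

img = cv2.imread('eye.jpg')  # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only very dark pixels (low value): a rough pupil candidate mask.
mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 60))

# Morphological closing fills small holes, e.g. from eyelash reflections.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Circular Hough transform on the cleaned mask to locate the pupil.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=2, minDist=100,
                           param1=100, param2=20, minRadius=10, maxRadius=80)
if circles is not None:
    cx, cy, r = np.round(circles[0][0]).astype(int)
    print(f'pupil candidate at ({cx},{cy}), radius {r}')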
Only after you feel you have exhausted all conventional tools should you start thinking about feature extraction and machine learning.
And by the way, because you are not the first person to try to detect an iris, you can look at other projects for ideas.
I have written a small MATLAB script for the image you linked. The function I used performs a Hough transform for circle detection, which is also implemented in OpenCV, so porting will not be a problem. I just want to know whether I am on the right track or not.
My code and result are as follows:
clc; clear all; close all

% Load the linked image and downscale it to speed up the search
im = imresize(imread('irisdet.JPG'), 0.5);
gray = rgb2gray(im);

% Circular Hough transform: look for dark circles (pupil/iris)
% within a plausible radius range
Rmin = 50; Rmax = 100;
[centersDark, radiiDark] = imfindcircles(gray, [Rmin Rmax], 'ObjectPolarity', 'dark');

% Overlay the detected circles on the original image
figure, imshow(im, [])
viscircles(centersDark, radiiDark, 'EdgeColor', 'b');
(The input image and the result of the algorithm were attached here as screenshots.)
Thank you
Not sure about iris classification, but I've done handwritten digit recognition from photos. I would recommend turning up the contrast and saturation, then using a k-nearest-neighbour algorithm to classify your images. Depending on your training set, you can get accuracy as high as 90%.
I think you are on the right track. Do image preprocessing to make classification easier, then train an algorithm of your choice. You would want to treat each image as one input vector, though, instead of classifying each pixel!
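As a minimal illustration of that preprocess-then-classify pipeline, here is a sketch using scikit-learn and its bundled 8x8 digits dataset as stand-in data (both are my assumptions; the asker's eye images aren't available):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Each 8x8 image is flattened into one 64-element input vector,
# one vector per image, rather than one sample per pixel.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# k-nearest neighbours: classify by majority vote of the 5 closest
# training vectors.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print('accuracy:', knn.score(X_test, y_test))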
I think you can try Active Shape Models, or, if you want a really feature-rich model and do not care about the time it takes to execute the algorithm, you can try Active Appearance Models. You might want to look into these papers for a better understanding:
Active Shape Models: Their Training and Application
Statistical Models of Appearance for Computer Vision - In Depth
I am looking for an efficient way to detect the small boxes around the numbers (see images).
I already tried the Hough transform with no success. Any ideas? I need some hints! I am using OpenCV...
For inspiration, you can have a look at:
the MATLAB video sudoku solver demo and explanation
Sudoku Grab, an iPhone app whose author explains the computer-vision part on his blog
Alternatively, if you are always hunting for the same grid, you could deploy something like this:
Make a perfect artificial template of the grid and detect or save the coordinates of all its corners.
In the target image, do the same thing, for example with Harris points. Be creative, you might also be able to use the distinct triangles that can be found in your images.
Using the coordinates from the template and the found Harris points, determine the affine transformation x = Ax' between the template and the target image. That transformation can then be used to map the template grid onto the target image. At the very least this will give you some prior information to help guide further segmentation.
The gist of the idea and examples of the estimation of the affine matrix A can be found on the site of Zisserman's book Multiple View Geometry in Computer Vision and on Peter Kovesi's site.
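A minimal Python/OpenCV sketch of the estimation step (the point coordinates below are made-up placeholders; in practice they would come from your template corners and Harris detections):

import cv2
import numpy as np

# Corresponding corner coordinates: template_pts from the artificial
# grid, target_pts from e.g. Harris corners in the photo. At least
# three non-collinear pairs are needed.
template_pts = np.float32([[0, 0], [100, 0], [0, 100], [100, 100]])
target_pts   = np.float32([[12, 8], [110, 15], [5, 108], [104, 116]])

# Least-squares estimate of the 2x3 affine matrix mapping template
# points onto target points.
A, inliers = cv2.estimateAffine2D(template_pts, target_pts)

# Map the template grid onto the target image with the estimated A.
grid = cv2.imread('template_grid.png')  # hypothetical template image
warped = cv2.warpAffine(grid, A, (grid.shape[1], grid.shape[0]))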
I'd start by trying to detect the rectangular boundary of the overall sheet, then applying a perspective transform to make it truly rectangular. Crop that portion of the image out. If possible, then try to make the alternating white and grey sub-rectangles have an equal background brightness - maybe try adaptive histogram equalization.
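A rough Python/OpenCV sketch of that rectify-then-equalize step (the corner coordinates and filename are hypothetical; in practice the corners come from detecting the sheet boundary):

import cv2
import numpy as np

img = cv2.imread('lotto_sheet.jpg')  # hypothetical filename

# Corners of the detected sheet boundary, listed top-left, top-right,
# bottom-right, bottom-left (made-up values for illustration).
src = np.float32([[40, 30], [600, 45], [590, 820], [25, 805]])
dst = np.float32([[0, 0], [560, 0], [560, 780], [0, 780]])

# Perspective transform that makes the sheet truly rectangular,
# then crop by warping into the destination rectangle.
M = cv2.getPerspectiveTransform(src, dst)
sheet = cv2.warpPerspective(img, M, (560, 780))

# Adaptive histogram equalization (CLAHE) to even out the alternating
# white/grey sub-rectangle backgrounds.
gray = cv2.cvtColor(sheet, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
evened = clahe.apply(gray)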
Then the Hough transform might perform better. Alternatively, you could then take an approach that's broadly similar to this demonstration by Robert Bemis on MATLAB Central (it's analysing a DNA microarray image rather than Lotto cards, but it's essentially finding bounding boxes of items arranged in a grid). At a high level, the approach is to calculate the autocorrelation along columns and rows of pixels to detect the periodicity of the items in the grid, and use that to impose a bounding box on each item.
Sorry, the above advice is mostly MATLAB-based; I'm afraid I'm not an OpenCV user, but hopefully it will give you some ideas at least.
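For what it's worth, a minimal Python sketch of the autocorrelation/periodicity idea (hypothetical filename; this is not the demo's actual code, just the gist of it):

import cv2
import numpy as np

gray = cv2.imread('rectified_sheet.png', cv2.IMREAD_GRAYSCALE)  # hypothetical

# Mean intensity per pixel column, zero-centered: a 1-D signal whose
# dominant period is the horizontal pitch of the grid cells.
profile = gray.mean(axis=0) - gray.mean()
ac = np.correlate(profile, profile, mode='full')[len(profile) - 1:]

# The first strong peak after the zero-lag peak gives the cell width.
lag_min = 10                        # ignore lags too small to be a cell
cell_width = np.argmax(ac[lag_min:]) + lag_min
print('estimated cell width:', cell_width, 'px')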
For a project I have to detect a pattern and track it in space despite rotation, noise, etc.
It's highlighted with IR light and recorded with an IR camera:
Picture: https://i.stack.imgur.com/RJuVS.png
As in this picture, it will be only very simple shapes, and we can choose which one we're going to use.
I need some direction on how to approach recognizing these shapes, please.
What I do currently is thresholding and erosion to get a cleaner shape and then a contour detection and a polygon approximation.
What should I do then? I tried Hu moments, but they weren't good at all.
Could you please give me a global approach to recognize and track such pattern in space?
Can you choose which shape to project?
If so, I would recommend using a few concentric circles. Then, using the Hough transform for circles, you can easily find the center of the shape even when tracking is extremely hard (large movement / low frame rate).
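A minimal sketch of that circle-Hough step in Python/OpenCV (the filename and parameters are assumptions); note that with a concentric-circle target, even a single detected ring already gives you the shared center:

import cv2
import numpy as np

frame = cv2.imread('ir_frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical frame

# Detect circles; all rings of a concentric target share one center,
# so the strongest detection locates the target in each frame,
# independent of motion between frames.
circles = cv2.HoughCircles(frame, cv2.HOUGH_GRADIENT, dp=2, minDist=50,
                           param1=100, param2=40, minRadius=5, maxRadius=150)
if circles is not None:
    cx, cy, r = np.round(circles[0][0]).astype(int)
    print(f'target center: ({cx}, {cy}), strongest ring radius {r}')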
If you must use a rectangular shape, then there is a good open-source project which does that. It is part of a project to read street signs and auto-translate them.
Here is a link: http://code.google.com/p/signfinder/
This source is not large and it would be easy to cut out the relevant part.
It uses OpenCV's "good features to track" in its CornerFinder module.
Hope it helped
It is possible; you need the following steps: thresholding the image, some morphological enhancement, blob extraction and normalization of blob size, blob shape analysis, and comparison of the analysis results with the pattern that you want to track.
There are many methods for blob shape analysis. Simple methods: geometric dimensions, area, perimeter, circularity measurement, bit quads and others (for example, William K. Pratt, "Digital Image Processing", chapter 18). Complex methods: spatial moments, template matching, neural networks and others.
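As a small illustration of the simple measurements, here is a Python/OpenCV sketch (the mask filename is hypothetical); circularity here is 4*pi*A / P^2, which equals 1 for a perfect circle and drops toward 0 for elongated shapes:

import cv2
import numpy as np

# Hypothetical binary mask produced by the thresholding/morphology steps.
mask = cv2.imread('blob_mask.png', cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    circularity = 4 * np.pi * area / perimeter ** 2
    print(f'area={area:.0f} perimeter={perimeter:.0f} circularity={circularity:.2f}')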
In any event, it is very hard to answer exactly without knowing the pattern shapes that you want to track.
Hope it helped
I need to count boxes in a warehouse using edge-detection techniques; the images will be taken from a 3D model of the warehouse, and the proposed system will use 3 images from 3 different angles to cover the whole area of the warehouse.
As I have no prior experience in image processing, I'm a bit confused about which algorithm to use.
For a quick start I would suggest looking at these two:
Sobel operator
Canny operator
These are the most widely used edge detection filters with pretty good results.
If you are starting to learn computer vision, you should also learn about typical operations in image processing and convolution.
The OpenCV library is a great library which implements many algorithms of computer vision, including the two operators mentioned above.
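To get a feel for the two operators, here is a minimal Python/OpenCV sketch (the filename and thresholds are placeholders):

import cv2

img = cv2.imread('warehouse.png', cv2.IMREAD_GRAYSCALE)  # hypothetical

# Sobel: first-derivative filters in x and y, combined into a
# gradient magnitude image.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

# Canny: gradient + non-maximum suppression + hysteresis thresholding,
# producing thin binary edges.
edges = cv2.Canny(img, threshold1=50, threshold2=150)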
Check out AForge. It has a full C# implementation of some edge-detection algorithms.
Take a look at the Image Processing Library for C++ question. You can find several useful links there. The suggested libraries not only have algorithm descriptions but also their implementations.
OpenCV has a very nice algorithm which detects closed contours in an image and returns them as lists of points. You can then throw away all contours which don't have 4 points, and then check some constraints of the remaining ones (aspect ratio of the rectangles, etc...) to find your remaining box sides. This should at least solve the image processing part of your problem, though turning this list of contours into a count of boxes in your warehouse is going to be tough.
Check here for the OpenCV functions:
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
http://opencv.willowgarage.com/documentation/drawing_functions.html#drawcontours
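A minimal Python sketch of the contour-filtering idea described above (the filename, threshold, and aspect-ratio constraint are placeholders):

import cv2

img = cv2.imread('boxes.png', cv2.IMREAD_GRAYSCALE)  # hypothetical
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

quads = []
for c in contours:
    # Simplify each contour; keep only those that reduce to 4 points
    # and are convex, i.e. plausible box faces.
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) != 4 or not cv2.isContourConvex(approx):
        continue
    x, y, w, h = cv2.boundingRect(approx)
    if 0.5 < w / float(h) < 2.0:   # crude aspect-ratio constraint
        quads.append(approx)

print(len(quads), 'box-like contours found')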
The 'Sujoy filter' is claimed to outperform the Sobel filter for edge detection. Here's a Julia implementation (with a link to the paper): Sujoy Filter