Is there any way to detect a Moiré pattern in an image that I can use in my iOS app, using Swift and maybe OpenCV?
Any help would be appreciated.
You can find the Moiré pattern in the Fourier-transformed image.
If you want to remove it, apply a median filter to the spectrum and then take the inverse Fourier transform.
See this paper.
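For reference, here is a minimal Python/OpenCV sketch of that idea: look for isolated high-frequency peaks in the magnitude spectrum, and median-filter the spectrum before inverse-transforming. The peak heuristic, the filter sizes and the file names are illustrative assumptions rather than the method from the paper, and the same steps would need to be ported to OpenCV's C++ API for iOS.

import cv2
import numpy as np

def moire_score(gray):
    # Rough Moire check: how much energy sits in isolated high-frequency
    # peaks of the magnitude spectrum (illustrative heuristic, not from the paper).
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    mag = np.log1p(np.abs(f)).astype(np.float32)
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    mag[cy - 20:cy + 20, cx - 20:cx + 20] = 0   # ignore the low-frequency centre
    smooth = cv2.medianBlur(mag, 5)
    return float((mag - smooth).max())          # large value -> suspicious peaks

def remove_moire(gray, ksize=5, keep=20):
    # Median-filter the magnitude spectrum away from the centre, then invert.
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    mag, phase = np.abs(f).astype(np.float32), np.angle(f)
    filtered = cv2.medianBlur(mag, ksize)
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    # Keep the low-frequency centre untouched so brightness and contrast survive.
    filtered[cy - keep:cy + keep, cx - keep:cx + keep] = mag[cy - keep:cy + keep, cx - keep:cx + keep]
    out = np.real(np.fft.ifft2(np.fft.ifftshift(filtered * np.exp(1j * phase))))
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
print(moire_score(img))
cv2.imwrite("cleaned.jpg", remove_moire(img))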
If you are looking for a cutting-edge solution, then computing the Fourier transform is not the way to go. It will also eat up a lot of computing resources on a mobile device. Instead, let deep learning learn those frequency features for you.
I have tried both of these solutions on iOS:
Thresholding on the frequencies after the transform
Convolutional Neural Network based classifier
You will be surprised at the results using CNN.
Refer to this paper: https://ieeexplore.ieee.org/document/8628746/
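For completeness, a minimal classifier sketch in Python with tf.keras could look like the following. The architecture, patch size and data/ folder layout are assumptions for illustration and not the network from the paper.

import tensorflow as tf

# Small binary classifier: Moire vs. clean patches (illustrative architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Expects one folder per class, e.g. data/moire/... and data/clean/... (hypothetical paths).
train = tf.keras.utils.image_dataset_from_directory(
    "data", label_mode="binary", color_mode="grayscale",
    image_size=(128, 128), batch_size=32)
model.fit(train, epochs=10)

A trained model can then be converted for on-device inference (for example with Core ML tools or TensorFlow Lite); that step is outside this sketch.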
I'm trying to detect shapes written on a whiteboard with a black/blue/red/green marker. The shapes can be circles, rectangles or triangles. The image can be found at the bottom of this post.
I'm using OpenCV as the framework for the image recognition.
My first task is to research and list the different strategies that could be used for the detection. So far I have found the following:
1) Grayscale, Blur, Canny Edge, Contour detection, and then some logic to determine if the contours detected are shapes?
2) Haar training with different features for shapes
3) SVM classification
4) Grayscale, Blur, Canny Edge, Hough transformation and some sort of color segmentation?
Are there any other strategies that I have missed? Any newer articles or tested approaches? How would you do it?
One of the test pictures: https://drive.google.com/file/d/0B6Fm7aj1SzBlZWJFZm04czlmWWc/view?usp=sharing
UPDATE:
The first strategy seems to work best, but it is far from perfect. Issues arise when boxes are not closed or when the whiteboard has a lot of noise. Haar training does not seem very effective, because the shapes to detect are simple and lack distinctive features. I have not tried a CNN yet, but it seems more appropriate for image classification than for detecting shapes within a larger image (though I'm not sure).
I think that the first option should work. You can use Fourier descriptors to classify the segmented shapes.
http://www.isy.liu.se/cvl/edu/TSBB08/lectures/DBgrkX1.pdf
Also, maybe you can find something useful here:
http://www.pyimagesearch.com/2016/02/08/opencv-shape-detection/
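For illustration, a minimal OpenCV sketch of that first strategy, combined with the polygon-approximation rule used in the pyimagesearch post, might look like this (the thresholds, the area cutoff and the morphological closing are guesses that will need tuning on real whiteboard images):

import cv2

img = cv2.imread("whiteboard.jpg")                 # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)
# Close small gaps so hand-drawn shapes form closed contours more often.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel, iterations=2)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4
for c in contours:
    if cv2.contourArea(c) < 500:                   # skip small noise blobs
        continue
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.04 * peri, True)
    if len(approx) == 3:
        label = "triangle"
    elif len(approx) == 4:
        label = "rectangle"
    else:
        label = "circle"
    x, y, w, h = cv2.boundingRect(approx)
    cv2.putText(img, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
cv2.imwrite("labeled.jpg", img)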
If you want to try a more challenging but modern approach, consider deep learning (I would start with a CNN). There are many implementations available on the internet. Although it is probably overkill for this specific project, it might help you in the future.
I am working on a project that aims to build a program which automatically gives a relatively accurate detection of pupil region in eye pictures. I am currently using simplecv in Python, given that Python is easier to experiment with. Since I just started, the eye pictures I am working with are fairly standardized. However, the size of iris and pupil as well as the color of iris can vary. And the position of the eye can shift a little among pictures. Here's a picture from wikipedia that is similar to the pictures I am using:
"MyStrangeIris.JPG" by Epicstessie is licensed under CC BY-SA 3.0
I have tried simple thresholding. Since different eyes have different iris colors, a fixed threshold does not work on all pictures.
In addition, I tried SimpleCV's built-in Sobel and Canny edge detection; it does not work well, especially for eyes with a darker iris. I also doubt that Sobel or Canny alone can solve the problem, given that there is sometimes noise on the edge of the pupil (e.g., reflections from eyelashes).
I have entry-level knowledge about image processing and machine learning. Right now, I am thinking about three possibilities:
Do a regression on the threshold value based on some variables
Make a specific mask only for edge detection for the pupil
Classification of each pixel (building the training set looks like a lot of work)
Am I on the right track? I would like to reach out to anyone with more experience on this type of problem. Any tips/suggestions are more than welcome. Thanks!
I think that, for a start, you should put machine learning aside. You have much more to try in "regular" computer vision.
You need to try to describe a model for your problem. A good way to do this is to sit and think about how you, as a person, detect an iris. For example, I can think of:
It is near the center of the image.
It is a brown/green/blue circle, with a distinct black center, surrounded by a mostly white ellipse.
There is skin color around the white ellipse.
It can't be too small or too large (depending on your images).
After you build your model, try to find good ways to detect these features. It is hard to point at specific techniques, but you can start with: the HSV color space, correlation, the Hough transform, morphological operations.
Only after you feel you have exhausted all the conventional tools should you start thinking about feature extraction and machine learning.
And by the way, because you are not the first person to try to detect an iris, you can look at other projects for ideas.
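As a concrete starting point for such a model, here is a minimal Python/OpenCV sketch that thresholds the dark pupil, cleans it up with morphology, and keeps a roughly circular blob near the image centre. The threshold value, the size cutoff and the "near the centre" rule are assumptions that will need tuning on your images.

import cv2
import numpy as np

img = cv2.imread("eye.jpg")                          # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                       # damp eyelash/reflection noise

# The pupil is the darkest region: keep low intensities (threshold is a guess).
_, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

h, w = gray.shape
best = None
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 200:                                   # too small to be a pupil
        continue
    (cx, cy), r = cv2.minEnclosingCircle(c)
    circularity = area / (np.pi * r * r + 1e-6)      # 1.0 for a perfect disc
    near_centre = abs(cx - w / 2) < w / 4 and abs(cy - h / 2) < h / 4
    if circularity > 0.6 and near_centre:
        best = (int(cx), int(cy), int(r))

if best is not None:
    cv2.circle(img, best[:2], best[2], (0, 255, 0), 2)
cv2.imwrite("pupil.jpg", img)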
I have written some small MATLAB code for the image you linked. The function I used does circle detection via the Hough transform, which is also implemented in OpenCV, so porting it will not be a problem. I just want to know whether I am on the right track or not.
My result and code are as follows:
clc
clear all
close all
% Read the image and shrink it to half size
im = imresize(imread('irisdet.JPG'),0.5);
gray = rgb2gray(im);
% Look for dark circles with radii between Rmin and Rmax pixels
Rmin = 50; Rmax = 100;
[centersDark, radiiDark] = imfindcircles(gray,[Rmin Rmax],'ObjectPolarity','dark');
% Overlay the detected circles on the original image
figure,imshow(im,[])
viscircles(centersDark, radiiDark,'EdgeColor','b');
Input Image:
Result of Algorithm:
Thank You
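Since the asker is working in Python, a rough OpenCV equivalent of the MATLAB snippet above could look like this. cv2.HoughCircles does not map one-to-one onto imfindcircles, so the param1/param2 values are guesses that will need tuning.

import cv2

im = cv2.imread("irisdet.JPG")                       # same test image as above
im = cv2.resize(im, None, fx=0.5, fy=0.5)            # imresize(..., 0.5)
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)          # rgb2gray
gray = cv2.medianBlur(gray, 5)                       # smooth before the Hough transform

# Rough equivalent of imfindcircles(gray, [50 100], 'ObjectPolarity', 'dark')
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=50, maxRadius=100)
if circles is not None:
    for x, y, r in circles[0]:
        cv2.circle(im, (int(x), int(y)), int(r), (255, 0, 0), 2)
cv2.imwrite("irisdet_circles.jpg", im)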
Not sure about iris classification, but I have done recognition of handwritten digits from photos. I would recommend tuning up the contrast and saturation, then using a k-nearest-neighbour algorithm to classify your images. Depending on your training set, you can get as high as 90% accuracy.
I think you are on the right track. Do image preprocessing to make classification easier, then train an algorithm of your choice. You would want to treat each image as one input vector though, instead of classifying each pixel!
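A minimal sketch of that idea with OpenCV's ml module is below; the random arrays only stand in for your own preprocessed images and labels.

import cv2
import numpy as np

# Dummy data just to make the sketch runnable: replace with your own
# contrast/saturation-boosted images and their class labels.
rng = np.random.default_rng(0)
train_imgs = rng.integers(0, 256, size=(100, 32, 32), dtype=np.uint8)
train_labels = rng.integers(0, 3, size=100)

samples = train_imgs.reshape(len(train_imgs), -1).astype(np.float32)  # one vector per image
labels = train_labels.astype(np.int32).reshape(-1, 1)

knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

test_img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
query = test_img.reshape(1, -1).astype(np.float32)
_, results, _, _ = knn.findNearest(query, k=5)
print("predicted class:", int(results[0][0]))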
I think you can try Active Shape Models, or, if you want a really feature-rich model and do not care about execution time, you can try Active Appearance Models. You might want to look into these papers for a better understanding:
Active Shape Models: Their Training and Application
Statistical Models of Appearance for Computer Vision - In Depth
I am curious about the logic behind KLT in OpenCV.
From what I know so far, the images sent for optical flow computation in OpenCV are first converted to grayscale.
What I am curious about is that, when running the algorithm, we need a set of features for the computation. What features are used by the optical flow method in OpenCV?
Thank you :)
There are two types of optical flow: dense and sparse.
Dense finds the flow for all pixels, while sparse finds the flow for selected points.
The selected points may be user-specified, or calculated automatically using any of the feature detectors available in OpenCV. The most common feature detectors include GoodFeaturesToTrack, which finds corners using cornerHarris or cornerMinEigenVal.
The feature list is then passed to the KLT Tracker calcOpticalFlowPyrLK.
Feature can be any point in the image. Most common features are corners and edges.
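A minimal Python sketch of that sparse pipeline (corner detection followed by pyramidal Lucas-Kanade tracking) is shown below; the video path and the tracker parameters are placeholder values.

import cv2

cap = cv2.VideoCapture("video.mp4")                  # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Sparse flow: pick corners first (Shi-Tomasi, i.e. cornerMinEigenVal under the hood).
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # KLT: track the previous frame's points into the current frame.
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                               winSize=(21, 21), maxLevel=3)
    good_new = p1[status.flatten() == 1]
    good_old = p0[status.flatten() == 1]
    for new, old in zip(good_new, good_old):
        x1, y1 = new.ravel()
        x0, y0 = old.ravel()
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)
    cv2.imshow("flow", frame)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break
    prev_gray, p0 = gray, good_new.reshape(-1, 1, 2)
cap.release()
cv2.destroyAllWindows()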
For a project, I have to detect a pattern and track it in space despite rotation, noise, etc.
It's highlighted with IR light and recorded with an IR camera:
Picture: https://i.stack.imgur.com/RJuVS.png
As in this picture, it will be only a very simple shape, and we can choose which one we are going to use.
I need direction on how to approach the recognition of these shapes, please.
What I do currently is thresholding and erosion to get a cleaner shape and then a contour detection and a polygon approximation.
What should I do next? I tried Hu moments, but the results weren't good at all.
Could you please give me a global approach to recognize and track such pattern in space?
Can you choose which shape to project?
If so, I would recommend using a few concentric circles. Then, using the Hough transform for circles, you can easily find the center of the shape even when tracking is extremely hard (large movement / low frame rate).
If you must use a rectangular shape, then there is a good open-source project which does that. It is part of a project to read street signs and auto-translate them.
Here is a link: http://code.google.com/p/signfinder/
This source is not large and it would be easy to cut out the relevant part.
It uses "good features to track" of openCV in module CornerFinder.
Hope it helped
It is possible; you need the following steps: thresholding the image, some morphological enhancement, blob extraction and normalization of blob size, blob shape analysis, and comparison of the analysis results with the pattern that you want to track.
There are many methods for blob shape analysis. Simple methods: geometric dimensions, area, perimeter, circularity measurement, bit quads and others (for example, William K. Pratt, "Digital Image Processing", chapter 18). Complex methods: spatial moments, template matching, neural networks and others.
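As an illustration of the simple measurements, a Python/OpenCV sketch along these lines could be a starting point (the threshold and the size cutoff are guesses for a bright IR blob):

import cv2
import numpy as np

gray = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical IR frame
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # keep bright IR blobs
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.erode(mask, kernel)                               # clean up thin noise

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 50:
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-6)  # 1.0 for a perfect circle
    x, y, w, h = cv2.boundingRect(c)
    aspect = w / float(h)
    # Compare these numbers against the values measured on your reference pattern.
    print(f"area={area:.0f} perimeter={perimeter:.0f} "
          f"circularity={circularity:.2f} aspect={aspect:.2f}")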
In any event, it is very hard to answer precisely without knowing the pattern shapes that you want to track.
Hope it helped.
How can I detect irises in a face with opencv?
Have a look at this forum thread. There's some source code there to get you started, but be careful about using it directly -- the original author seemed to have problems compiling it.
Start with detecting circles - see cvHoughCircles - hint, eyes have a series of concentric circles.
OpenCV has a face detection module which uses Haar cascades. You can use the same method to detect the iris: collect some iris images as the positive set and non-iris images as the negative set, then use the Haar training module to train a classifier.
A quick-and-dirty approach would be to do eye detection first with a Haar cascade; good model XML files are shipped with OpenCV 2.4.2. Then do some skin detection (in HSV space rather than RGB space) to isolate the eye area in the middle, or run a circle search.
Also, projections and histogram-based decisions can be used once the eye area is cropped.
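A rough sketch of that route in Python: Haar eye detection with the cascade files bundled with OpenCV, then a circle search inside each eye region. The Hough parameters and the radius range relative to the eye size are guesses.

import cv2

img = cv2.imread("face.jpg")                         # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# haarcascade_eye.xml ships with OpenCV; cv2.data.haarcascades points at the folder.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in eyes:
    roi = cv2.medianBlur(gray[y:y + h, x:x + w], 5)
    # Circle search inside the eye region; radii relative to the eye size.
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=w,
                               param1=100, param2=20,
                               minRadius=w // 10, maxRadius=w // 3)
    if circles is not None:
        cx, cy, r = circles[0][0]
        cv2.circle(img, (x + int(cx), y + int(cy)), int(r), (0, 255, 0), 2)

cv2.imwrite("irises.jpg", img)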