I ran findContours on the following image:
And got the following contour image (I'm showing only "parent" contours according to the hierarchy):
As you can see, there are many different contours around each object (each one in a different color). Now I want to unify the contours around the person to obtain one enclosing contour, so I can segment her out of the image.
I'm not sure that it can be done, but I thought I should ask here.
Is there any method to intelligently unify the contours in the image so I could segment different objects out?
Thanks,
Gil.
First, do you want to achieve this result only on this image, or on any image where different people may appear in different poses and different clothing?
If you want to segment only this image, you can do it with some color thresholding or morphological operations. But making it work for arbitrary images of different people would probably require a PhD in computer vision.
But if your task is only segmentation, then I would suggest a semi-automatic segmentation technique like GrabCut or graph cut. These are very popular segmentation algorithms that are readily available in OpenCV and MATLAB, and they work very well on all kinds of images. Here is the result of the GrabCut algorithm on your image.
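For reference, here is a minimal GrabCut sketch in OpenCV Python, assuming you can supply a rough rectangle around the person; the filename and rectangle values are placeholders, not taken from your post.

```python
import cv2
import numpy as np

img = cv2.imread("person.jpg")                 # placeholder filename
mask = np.zeros(img.shape[:2], np.uint8)

# Internal models that grabCut updates; they just need to be allocated.
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough bounding box (x, y, w, h) around the person -- adjust to your image.
rect = (50, 50, 300, 500)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Definite + probable foreground pixels form the segmentation.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
segmented = img * fg[:, :, None]
cv2.imwrite("person_segmented.png", segmented)
```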
There is lots of work on Contour based segmentation in the literature out there.
The ultrametric contour map produces a hierarchy of contours that correspond to segmentations of the objects in an input image.
Pub: Contour Detection and Hierarchical Image Segmentation, by Pablo Arbelaez, Michael Maire, Charless Fowlkes, Jitendra Malik
I have an image I want to extract lines from (a vascular network), using the Hough line algorithm. First I preprocess the image, then use Canny edge detection to generate the binary image.
I want to get a polygon / an array of joined line segments representing the shape of the vascular network. However, applying the Hough line transform directly to this image yields mediocre results, partly because edge detection means each vessel is represented by two lines, one on each side, instead of a single line.
I'm new to OpenCV and image processing in general, so I'm probably going about this the wrong way. Any suggestions, or any recommended literature?
I suggest not using Canny edge detection.
Instead, first use a binary threshold to get a binary image of the vascular network (see http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html#gsc.tab=0 for applying a binary threshold). Then, pixels that are "on" should be points inside the network and those that are "off" should be outside.
Then use the findContours method:
http://opencvexamples.blogspot.com/2013/09/find-contour.html
This method gives you an array of contours, each of which is a list of points. Each list of points represents the chain of line segments you are looking for (it traces a contour, and if you are lucky it might be a polygon!). A sketch of the whole threshold-plus-contours pipeline is below.
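Putting both steps together, a rough sketch of the pipeline might look like this (the filename and the threshold value are placeholders, and note that in OpenCV 3.x findContours returns an extra first value):

```python
import cv2

img = cv2.imread("vessels.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Binary threshold: vessel pixels become white, background black.
# Try cv2.THRESH_OTSU or adaptive thresholding if a fixed value doesn't work.
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Each contour is an array of (x, y) points tracing one connected region.
# (OpenCV 4.x signature; 3.x returns image, contours, hierarchy.)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

# Optionally simplify each contour into fewer joined line segments.
polygons = [cv2.approxPolyDP(c, 2.0, True) for c in contours]
```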
Hough may not be the best tool for this job. Hough will give you straight lines or other geometric shapes. It is not designed to follow a detailed pattern like this.
Given the image, I would read research papers which already solve this. Here are a few examples from a search on Google Scholar. If they don't work for you, look up the citations as they should lead you down other paths.
https://scholar.google.com/scholar?hl=en&q=retina+computer+vision+vascular
http://ijesat.org/Volumes/2012_Vol_02_Iss_04/IJESAT_2012_02_04_25.pdf
http://www.vision.cs.rpiscrews.us/publications/pdfs/shen_itbm_submitted.pdf
I've been trying to work on an image processing / OCR script that will allow me to extract the letters (using tesseract) from the boxes found in the image below.
After a lot of processing, I was able to get the picture to look like this:
In order to remove the noise, I inverted the image, then applied flood filling and Gaussian blurring. This is what I ended up with next:
After running it through some thresholding and erosion to remove the noise (erosion being the step that distorted the text), I was able to get the image to look like this before running it through tesseract:
This is a pretty good rendering and allows for fairly accurate results through tesseract, though it sometimes fails because it reads the hash (#) as an H or W. This leads me to my question!
Is there a way, using OpenCV, skimage, or PIL (OpenCV preferably), that I can sharpen this image in order to increase my chances of tesseract reading it properly? OR is there a way I can get from the third image to the final image WITHOUT having to use erosion, which ultimately distorted the text in the image?
Any help would be greatly appreciated!
OpenCV does have functions like filter2D that convolve an arbitrary kernel with a given image. In particular, you can use kernels that are designed for image sharpening. The main question is whether this will improve the results of your OCR library or not. The image is already pretty sharp, and the noise in it is not the result of blur. I have never worked with tesseract myself, but I am fairly sure that it already does all the noise reduction it can, and 'helping' it in this process may actually have the opposite effect. For example, any sharpening process tends to amplify noise (as opposed to noise reduction processes, which usually blur the image). Most computer vision libraries give better results when provided with raw (unprocessed) images.
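For completeness, here is a minimal sharpening sketch with filter2D (the filename is a placeholder); whether it actually helps tesseract is something you would have to test:

```python
import cv2
import numpy as np

img = cv2.imread("boxes.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Classic 3x3 sharpening kernel (identity plus a Laplacian edge boost).
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)

sharpened = cv2.filter2D(img, -1, kernel)  # -1 keeps the source bit depth
cv2.imwrite("boxes_sharpened.png", sharpened)
```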
Edit (after question update):
There are multiple ways to do this. The first one that I would test is this: your first binary image is pretty clean and sharp. Instead of using morphological operations that reduce the quality of the letters, switch to filtering contours. Use the findContours function to find all contours in the image and store their hierarchy (i.e., which contour is inside which). From all the found contours you actually need only the contours on the first and second levels, i.e. the outer and inner contours of each letter (contours at level zero are the outermost contours). Other contours can be discarded. Among the contours that do belong to the first level, you can discard those whose bounding box is too small to be a real letter. After those two discarding procedures, I would expect that most of the remaining contours are the ones that are parts of the letters. Draw them on a white image and run OCR; a sketch is below. (If you want white letters on a black background, you will need to invert the order of vertices in the contours.)
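Here is a rough sketch of that contour-filtering idea; it assumes white letters on a black background in the binary input, and the size thresholds are guesses you would need to tune:

```python
import cv2
import numpy as np

binary = cv2.imread("letters_binary.png", cv2.IMREAD_GRAYSCALE)  # placeholder

# RETR_CCOMP gives a two-level hierarchy: outer contours and their holes.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)

# Keep only outer contours whose bounding box is big enough to be a letter.
outer_ids = []
for i in range(len(contours)):
    if hierarchy[0][i][3] != -1:          # has a parent -> inner (hole) contour
        continue
    x, y, w, h = cv2.boundingRect(contours[i])
    if w >= 5 and h >= 10:                # guessed minimum letter size
        outer_ids.append(i)

# Redraw the letters cleanly: black letter bodies, white holes, white page.
canvas = np.full_like(binary, 255)
for i in outer_ids:
    cv2.drawContours(canvas, contours, i, 0, cv2.FILLED)
for i in range(len(contours)):
    if hierarchy[0][i][3] in outer_ids:
        cv2.drawContours(canvas, contours, i, 255, cv2.FILLED)

cv2.imwrite("letters_clean.png", canvas)   # feed this to tesseract
```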
I want to detect an object floating on water. I applied the Canny edge detector and found all the contours in the image. Now I want to match these contours to the contours of another image of the same scene, taken with a still camera and a very small time gap, in order to first find the same objects and then calculate how far they have moved.
Kindly help me out with this. I searched a lot but couldn't find anything clear.
If I got it right, you are trying to use the contours of an object to track its position across different images. In this case, you might be interested in template matching techniques.
In short, you'll be using matchTemplate to find the most probable location of a template (the contour of the object) in another image.
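A minimal sketch of that, assuming you crop a template of the object from the first frame (the filenames are placeholders):

```python
import cv2

scene = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)        # placeholder filenames
template = cv2.imread("object_crop.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation; the peak gives the most likely location.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = template.shape
print("best match at", max_loc, "score", max_val)
# Compare max_loc with the object's position in the first frame to get the shift.
```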
I am doing something similar to this problem:
Matching a curve pattern to the edges of an image
Basically, I have the same curve in two images, but with some affine transform between the two. Here is an example of two images:
Image1
Image2
So in order to get to Image2, you can apply some translation, rotation, scale, etc. to Image1.
Does anyone know how to solve for this transform?
Phase correlation doesn't work because the transform is not translation only. Optical flow doesn't work since there's not enough detail to resolve translation, rotation, and scale (it's pretty much a binary image). I'm not sure if Hough transforms will give me good data.
I think some sort of keypoint matching algorithm like SIFT or SURF would work with this kind of data as well.
The basic idea would be to find a limited number of "interesting" keypoints in each image, then match these keypoints pairwise.
Here is a quick test of your image with an online ASIFT demo:
http://demo.ipol.im/demo/my_affine_sift/result?key=BF9F4E4E006AB5168497709836C39C74#
It is probably better suited for normal greyscale images, but nevertheless it seems to work for this data. It looks like the lines connect roughly the same points around both of the curves; if you plug all these pairs into something like the findHomography function in OpenCV, the small discrepancies should even themselves out and you get the transformation matrix between the two images.
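As a rough sketch of that keypoint-matching idea (I'm using ORB here since ASIFT isn't shipped with stock OpenCV; the filenames are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("curve1.png", cv2.IMREAD_GRAYSCALE)  # placeholder filenames
img2 = cv2.imread("curve2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching, keeping the best mutual match per descriptor.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC throws away bad pairs, so the small discrepancies even themselves out.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
# If you know the transform is affine, cv2.estimateAffine2D(src, dst) also works.
print(H)
```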
For your particular data you might be able to come up with better keypoint descriptors; perhaps something to detect the line ends, line crossings and sharp corners.
Or how about this: it is a little more work, but if you can vectorize your paths into a Bézier curve or B-spline, you can get some natural keypoints from the spline descriptors.
I do not know any vectorisation library, but Inkscape has a basic implementation with which you could test the approach.
Once you have a small set of descriptors instead of a large 2D bitmap, you only need to match these descriptors between the two images and feed the matched points to findHomography as above.
answer to comment:
The points of interest are merely small areas that have certain properties. The centers of those areas might be black or white; the algorithm does not specifically look for white pixels or for large-scale shapes such as the curve. What matters is that the lines connect roughly the same points on both curves, at least at first glance.
I have an image of the Target logo that I am trying to use to find Target logos in other images. I am currently running two different detection algorithms to help me detect any logos in the image. The first detection step is histogram based, in which I search the image for a general area on screen where the colors are very similar. From there I run SIFT to further locate the object that I am looking for. This works on most logos; however, SIFT isn't even picking up any keypoints in the Target logo that I have.
I was wondering if there was anything I could do to help locate some keypoints in the image. Any advice is greatly appreciated.
Below is the image that isn't being picked up by SIFT:
Thanks in advance.
EDIT
I tried using Julien's idea of template matching with different scales and rotations of the model, but still got poor results. I have included an image that I am trying to test against.
There are no keypoints in your image...
Why ?
Because there are no keypoints in a uniform color plane (why would there be? since it is uniform, nothing stands out)
Because everything is symmetric in your image, keypoints wouldn't really help; with certain feature extractors they would all have the same feature vectors
Because there are no corners or high gradients in crossing directions, which is what produces keypoints for many feature detectors
What you could try is a template matching method. If you are searching for this logo without big changes (rotation, translation, noise, etc.), a simple correlation is the easiest.
If you want to go further, one idea of mine, which I have never implemented but which could be fun, would be to build a set of versions of this image that you scale, rotate, warp, desaturate, and add noise to, and then apply template matching with this set of images derived from your original template...
This idea comes from SIFT and the wavelet transform, where we use families of functions that we vary in some ways (rotation, noise, frequency, etc.) in order to make our transform robust against these basic changes that occur in any image you want to "inspect".
That could be an idea for you !
Here is an image summarizing my idea: you rotate and scale your template, which creates a new rotated/scaled template that you can try to match. This increases robustness (even though it can take very long if you choose a lot of parameters to vary). I'm not saying that's a proper algorithm, but it could be a fun and very basic idea to try; a toy sketch is below...
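A toy version of that idea (the scale/angle grids and filenames are just illustrative):

```python
import cv2
import numpy as np

scene = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)        # placeholder filenames
logo = cv2.imread("target_logo.png", cv2.IMREAD_GRAYSCALE)

best_score, best_loc = -1.0, None
h, w = logo.shape
for scale in (0.5, 0.75, 1.0, 1.25, 1.5):
    for angle in range(0, 360, 30):
        # Build a rotated/scaled copy of the template.
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        warped = cv2.warpAffine(logo, M, (w, h))
        if warped.shape[0] > scene.shape[0] or warped.shape[1] > scene.shape[1]:
            continue
        res = cv2.matchTemplate(scene, warped, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best_score:
            best_score, best_loc = score, loc

print("best score", best_score, "at", best_loc)
```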
Julien,
There is another reason that this logo is problematic for feature matching. Most features work pretty badly with artificial images that don't have any smoothness. All the derivatives are exactly one pixel wide, and feature detectors rely on derivatives, so you have to smooth the image a bit. Of course, for this specific logo that will not help due to its high symmetry. You can instead use the Hough transform to detect circles inside circles; it would give you better results than template matching. A rough sketch follows.
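A hedged sketch of that suggestion (blur first, then look for concentric circles; the filename and parameter values are guesses that will need tuning):

```python
import cv2
import numpy as np

img = cv2.imread("target_logo.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
img = cv2.GaussianBlur(img, (9, 9), 2)   # the smoothing mentioned above

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=30, minRadius=5, maxRadius=200)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print("circle at", (x, y), "radius", r)
```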
I think you can try using MSER features: https://en.wikipedia.org/wiki/Maximally_stable_extremal_regions
See an example:
https://www.mathworks.com/examples/matlab-computer-vision/mw/vision_product-TextDetectionExample-automatically-detect-and-recognize-text-in-natural-images
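In OpenCV (rather than MATLAB), a small MSER sketch might look like this; the filename is a placeholder and the call below assumes the recent OpenCV 3.x/4.x Python binding:

```python
import cv2

img = cv2.imread("target_logo.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)   # stable regions and their bounding boxes

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for x, y, w, h in bboxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_regions.png", vis)
```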