I don't think I'm going to get any replies, but here goes: I'm developing an iOS app that performs image segmentation. I'm trying to implement the easiest way to crop a subject out of an image without the need for a greenscreen/keying. Most automated solutions, like using OpenCV, just aren't cutting it.
I've found the Roto Brush tool in After Effects to be effective at giving hints on where the app should be cutting things out. Does anyone know what kind of algorithms the Roto Brush tool uses?
Check out this page, which contains a couple of video presentations from SIGGRAPH (a computer graphics conference) about the Roto Brush tool. Also take a look at Jue Wang's paper on Video SnapCut. As Damien guessed, object extraction relies on some pretty intense image processing algorithms. You might be able to implement something similar in OpenCV depending on how clever/masochistic you're feeling.
The algorithm is a graph-cut-based segmentation algorithm in which Gaussian Mixture Models (GMMs) are trained on pixel colors in "local" regions as well as "globally", together with some sort of shape prior.
OpenCV has a "cheap hack" implementation of the "GrabCut" paper where the user specifies a bounding box around the object he wish to segment. Typically, using just the bounding box will not give good results. You will need the user to specify the "foreground" and "background" pixels (as is done in Adobe's Rotoscoping tool) to help the algorithm build foreground and background color models (in this case GMMs) so that it will know what are the typical colors in the foreground object you wish to segment, and those for the background that you want to leave out.
A basic graph-cut implementation can be found on this blog. You can probably start from there and experiment with different ways to compute the cost terms to get better results.
Lastly, the "soften" the edges, a cheap hack is to blur the binary mask to obtain a mask with values between 0 and 1. Then recomposite your image using the mask i.e. c[i][j] = mask[i][j] * fgd[i][j] + (1 - mask[i][j]) * bgd[i][j], where you are blending the foreground you segmented (fgd), with a new background image (bgd) using the mask values as blending weights.
I was wondering if it's possible to match the exposure across a set of images.
For example, let's say you have 5 images that were taken at different angles. Images 1-3 and 5 are taken with the same exposure, whilst the 4th image has a slightly darker exposure. When I then try to combine these into a cylindrical panorama (using seamFinder with gc_color, SURF detection, MULTI_BAND blending, wave correction, etc.), the result comes out with a big shadow in the middle due to the darkness of image 4.
I've also tried using exposureCompensator without luck.
Since I'm taking the pictures on iOS, maybe I could increase the exposure manually when needed? But this doesn't seem optimal.
Has anyone else dealt with this problem?
This method is probably overkill (and not just a little) but the current state-of-the-art method for ensuring color consistency between different images is presented in this article from HaCohen et al.
Their algorithm can correct a wide range of errors in image sets. I have implemented and tested it on datasets with large errors and it performs very well.
But, once again, I suppose this is way overkill for panorama stitching.
Sunreef has provided a very good paper, but it does seem overkill because of the complexity of a possible implementation.
What you want to do is equalize the exposure not over the entire images, but over the overlapping zones. If the histograms of the overlapped zones match, it is a good indicator that the images have similar brightness and exposure conditions. Since you are doing more than one stitch, you may require a global equalization to make all the images look similar, and then only equalize them using either a weighted equalization on the overlapped region or a quadratic optimiser (which is again overkill if you are not a professional photographer). OpenCV has a simple implementation of such an exposure compensation algorithm.
The detail::ExposureCompensator class of OpenCV (a sample implementation of such a stitching pipeline is here) would be ideal for you to use.
Just create a compensator (try the two different types of compensation: GAIN and GAIN_BLOCKS).
Feed the images into the compensator, based on where their top-left corners lie in the stitched image, along with a mask (which can be either completely white, or white only in the overlapped region).
Apply the compensation to each individual image and check the results iteratively (see the sketch below).
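A minimal sketch of those three steps, loosely following OpenCV's stitching_detailed.py sample (OpenCV 4.x Python bindings assumed; the images, corners, and masks here are placeholders that would normally come from your warping stage):

```python
import cv2 as cv
import numpy as np

# Placeholder inputs: warped images, their top-left corners in the panorama,
# and all-white masks (use masks that are white only in the overlap if you prefer).
warped = [cv.imread("img1.jpg"), cv.imread("img2.jpg")]
corners = [(0, 0), (800, 30)]
masks = [255 * np.ones(im.shape[:2], np.uint8) for im in warped]

# 1. Create a compensator (GAIN or GAIN_BLOCKS).
compensator = cv.detail.ExposureCompensator_createDefault(
    cv.detail.ExposureCompensator_GAIN)

# 2. Feed the images, their corners, and the masks.
compensator.feed(corners=corners, images=warped, masks=masks)

# 3. Apply the compensation to each individual image
#    (as in the stitching_detailed.py sample, the image is adjusted in place).
for idx, (im, corner, mask) in enumerate(zip(warped, corners, masks)):
    compensator.apply(idx, corner, im, mask)
```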
I don't know any way to do this in iOS, just OpenCV.
My goal is to find known logos in static images and videos. I want to achieve this by using feature detection with KAZE or AKAZE and RANSAC.
I am aiming for a similar result to: https://www.youtube.com/watch?v=nzrqH...
While experimenting with the detection example from the docs (which is great, btw), I was facing several issues:
Object resolution: Differences in size between the known object and the resolution of the scene where the object should be located sometimes break the detection algorithm; the object won't be recognized in low-resolution images, although the image quality is still all right to a human eye.
Color contrast with the background: It seems that the detection can easily be thrown off by different background contrasts (e.g. the reference logo is black on a white background, while the logo in the scene is white on a black background). How can I make the detection more robust against different illuminations and background contrasts?
Preprocessing: Should any kind of preprocessing be done on the object/scene? For example, enlarging the scene to a specific size? Is there any guideline on how to approach the feature detection in several steps to get the best results?
I think your issue is more complicated than a feature-descriptor-matching-homography process.
It is more a pattern recognition or classification problem.
You can check this extended review paper on shape matching:
http://www.staff.science.uu.nl/~kreve101/asci/vir2001.pdf
Firstly, the resolution of the images is very important, because the matching operation usually performs a pixel-intensity cross-correlation between your sample image (the logo) and the image being processed, so that you get the best cross-correlated area. In the same way, the background colour intensity is very important, because background illumination can severely affect your final result.
Feature-based methods are widely researched:
http://docs.opencv.org/2.4/modules/features2d/doc/feature_detection_and_description.html
http://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html
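If you stay with the feature-based route, a minimal AKAZE + ratio test + RANSAC homography sketch looks roughly like this (file names and thresholds are placeholder assumptions):

```python
import cv2
import numpy as np

# Detect and describe with AKAZE on the reference logo and the scene.
logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(logo, None)
kp2, des2 = akaze.detectAndCompute(scene, None)

# AKAZE produces binary descriptors, so match with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("inliers:", int(inliers.sum()) if inliers is not None else 0)
```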
So for example, you can try alternative methods such as:
HOG descriptors: Histograms of Oriented Gradients:
https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
Pattern matching or template matching:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
I think the last one (template matching) is the easiest way to sanity-check your algorithm.
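A quick template-matching sketch (file names are placeholders); note that plain template matching only works well when the logo appears at roughly the same scale and rotation as the template:

```python
import cv2

# Slide the logo template over the scene and score each position.
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(scene, logo, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

h, w = logo.shape
top_left = max_loc                  # best match for TM_CCOEFF_NORMED
bottom_right = (top_left[0] + w, top_left[1] + h)
print("best score:", max_val, "at", top_left)
```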
Hope these references help.
Cheers.
Unai.
I have some color images needing segmentation. They are images of slides that are stained with hematoxylin and eosin ("H&E").
I found this method for color deconvolution by Ruifrok (http://europepmc.org/abstract/med/11531144) that separates out the images by color.
However it seems that you can do something similar just by using K-means clustering:
http://www.mathworks.com/help/images/examples/color-based-segmentation-using-k-means-clustering.html
I am curious what the difference is. Any insight would be welcome. Thanks.
I can't seem to find a copy of the article (well, not without paying), but they are not exactly the same.
K-means seeks to cluster data. So if you just want to find dominant colors in an image, or do some sorting based on colors, this is the way to go. As a side note: k-means can be used on any vector; it's not confined to color, so you can use it for many other applications.
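For illustration, a minimal sketch of k-means color clustering with OpenCV (the file name and the choice of k are placeholder assumptions):

```python
import cv2
import numpy as np

# Treat every pixel as a 3-D colour vector and group the pixels into k clusters.
img = cv2.imread("slide.png")
pixels = img.reshape(-1, 3).astype(np.float32)

k = 4
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel by its cluster centre to visualise the segmentation.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("segmented.png", segmented)
```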
Color deconvolution is trying to remove the effects of chemical dyes commonly used for microscopy (if I understood the abstract properly). Based on the specific dye used, the algorithm tries to reverse its effects and give you the original color image back (before the dye was added). I found this website that shows some output. This is deconvolving the dye's contribution to the RGB spectrum. It doesn't do any clustering/grouping (other than finding the dye).
Hope that helps
EDIT
If you didn't know, convolution is most often associated with signal/image processing. Basically, you take a filter and run it over a signal; the output is a modified version of the original input. In this case, the original image is filtered by a dye with known RGB values. If we know the full characteristics of the dye/filter, we can invert it. Then, by running the convolution again using the inverse filter, we can hopefully de-convolve the effect. In principle it sounds simple enough, but in many cases this isn't possible.
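For what it's worth, scikit-image ships an implementation of the Ruifrok/Johnston stain separation, so you can compare its output directly against a k-means segmentation; a minimal sketch (the file name is a placeholder):

```python
import matplotlib.pyplot as plt
from skimage import io
from skimage.color import rgb2hed

# rgb2hed converts the image to optical density and unmixes it with a
# fixed H&E(+DAB) stain matrix, following Ruifrok & Johnston.
rgb = io.imread("he_slide.png")[:, :, :3]
hed = rgb2hed(rgb)

hematoxylin = hed[:, :, 0]   # nuclei stain channel
eosin = hed[:, :, 1]         # cytoplasm stain channel

plt.imshow(hematoxylin, cmap="gray")
plt.show()
```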
I am working on a project that requires me to:
Look at images that contain relatively well-defined objects, and pick out the color of the n most prominent objects (n is generic; it could be 1, 2, 3, etc.) in some color space (whether it be RGB, HSV, whatever) and return it.
I am looking into ways to segment images like this into the independent objects. Once that's done, I'm under the impression that it won't be particularly difficult to find the contours of the segments and analyze them for average or centroid color, etc...
I looked briefly into the Watershed algorithm, which seems like it could work, but I was unsure of how to generate the marker image for an indeterminate number of blobs.
What's the best way to segment such an image, and if it's using Watershed, what's the best way to generate the corresponding marker image of integers?
Check out this possible approach:
Efficient Graph-Based Image Segmentation
Pedro F. Felzenszwalb and Daniel P. Huttenlocher
Here's what it looks like on your image:
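If you'd rather not implement the paper from scratch, scikit-image ships an implementation of this algorithm; a minimal sketch (the file name and parameter values are placeholders to tune per image):

```python
import matplotlib.pyplot as plt
from skimage import io
from skimage.segmentation import felzenszwalb, mark_boundaries

# Felzenszwalb-Huttenlocher graph-based segmentation on an RGB image.
img = io.imread("objects.jpg")[:, :, :3]
segments = felzenszwalb(img, scale=200, sigma=0.8, min_size=100)

# Draw the segment boundaries on top of the original image.
plt.imshow(mark_boundaries(img, segments))
plt.show()
```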
I'm not an expert but I really don't see how the Watershed algorithm can be very useful to your segmentation problem.
From my limited experience/exposure to this kind of problem, I would think that the way to go would be a sliding-window approach to segmentation. Basically this entails walking the image using a window of a set size, and attempting to determine whether the window encompasses background or an object. You will want to try different window sizes and steps.
Doing this should allow you to detect the objects in the image, presuming that the images contain relatively well-defined objects. You might also attempt to perform the segmentation after converting the image to black and white with a threshold that gives good separation of background vs. objects.
Once you've identified the object(s) via the sliding window you can attempt to determine the most prominent color using one of the methods you mentioned.
UPDATE
Based on your comment, here's another potential approach that might work for you:
If you believe the objects will have mostly uniform color, you might attempt to process the image to:
remove noise;
map the original image to a reduced color space (e.g. 256 or even 16 colors);
detect connected components based on pixel color and determine which ones are large enough.
You might also benefit from re-sampling the image to a lower resolution (e.g. if the image is 1024 x 768, you might reduce it to 256 x 192) to help speed up the algorithm.
The only thing left to do would be to determine which component is the background. This is where it might make sense to also attempt to do the background removal by converting to black/white with a certain threshold.
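A rough sketch of that idea with OpenCV (the file name, downscale factor, quantization step, and area threshold are all placeholder choices):

```python
import cv2
import numpy as np

# Reduce resolution and noise, quantize colors, then find large
# connected components of each quantized color.
img = cv2.imread("objects.jpg")
img = cv2.resize(img, None, fx=0.25, fy=0.25)   # work at lower resolution
img = cv2.medianBlur(img, 5)                    # remove noise

quantized = (img // 64) * 64                    # crude reduction to 4 levels per channel

h, w = quantized.shape[:2]
flat = quantized.reshape(-1, 3)
for color in np.unique(flat, axis=0):
    mask = cv2.inRange(quantized, color, color)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):                              # label 0 is the mask background
        if stats[i, cv2.CC_STAT_AREA] > 0.01 * h * w:  # "large enough" component
            print("color", color.tolist(), "area", int(stats[i, cv2.CC_STAT_AREA]))
```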
I'm new to OpenCV (I'm actually using the Emgu CV C# wrapper) and am attempting to do some object detection.
I'm attempting to determine if an object matches a predefined set of objects (that I will have to define). The background is well lit and does not move. My objects that I am starting with are bottles and cans.
My current approach is:
Do absDiff with a previously taken background image to separate the background.
Then dilate 4x to make the lighter areas (in labels) shrink.
Then I do a binary threshold to get a big blob, followed by finding contours in this image.
I then take the largest contour and draw it, which becomes my shape to either save to the accepted set or compare with the accepted set.
Currently I'm using cvMatchShapes, but the double return value seems to vary widely. I'm guessing it is because it doesn't take into account rotation.
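For reference, the pipeline described above looks roughly like this in OpenCV's Python bindings (file names and the threshold value are placeholders; the Emgu CV calls map fairly directly):

```python
import cv2

def largest_contour(image, background, thresh=40):
    # absdiff against the background, dilate, threshold, then take the
    # largest contour as the object's shape.
    diff = cv2.absdiff(image, background)
    diff = cv2.dilate(diff, None, iterations=4)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
reference = largest_contour(cv2.imread("accepted_bottle.jpg", cv2.IMREAD_GRAYSCALE), background)
candidate = largest_contour(cv2.imread("current_frame.jpg", cv2.IMREAD_GRAYSCALE), background)

# matchShapes compares Hu moments; lower values mean more similar shapes.
score = cv2.matchShapes(candidate, reference, cv2.CONTOURS_MATCH_I1, 0.0)
print("dissimilarity:", score)
```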
Is this approach a good one? It isn't working well for glass bottles since the edges are hard to find...
I've read about Haar classifiers, but I'm thinking that might be overkill for my task.
Maybe this link is useful too. You have the code and library for SIFT, you just need to compile it. Good luck.
http://blogs.oregonstate.edu/hess/sift-library-places-2nd-in-acm-mm-10-ossc/#more-176