Deciding whether an image's background is good or bad for segmentation - image-processing

I am currently working on an image segmentation project for human images. I am trying to find a way to check an image and decide whether it is suitable for segmentation or not.
For example, if a person is wearing a black shirt and the background is also black or gray, or the person is sitting on a black chair, then the segmentation includes the chair along with the human.
So is there a way to do a basic check on the image and give a warning such as 'Foreground and background look similar, so the output may not be good'?

If a human can differentiate between FG & BG, then AI algorithms can too. There are many contrast-improvement image-processing algorithms. So don't ask a general question; rather, be specific about what you need and post sample data.
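One cheap heuristic along these lines is to compare a color histogram of the image border (likely background) against one of the central region (likely subject) and warn when the two are too similar. A minimal sketch with OpenCV, assuming the subject is roughly centered; the filename and the 0.8 threshold are placeholders to tune:

import cv2
import numpy as np

def fg_bg_similarity(img, margin=0.15):
    # Compare HSV histograms of the border band (assumed background)
    # against the central region (assumed foreground).
    h, w = img.shape[:2]
    mh, mw = int(h * margin), int(w * margin)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Mask selecting only the border band of the image.
    border = np.zeros((h, w), np.uint8)
    border[:mh, :] = 255
    border[-mh:, :] = 255
    border[:, :mw] = 255
    border[:, -mw:] = 255
    center = cv2.bitwise_not(border)

    hist_bg = cv2.calcHist([hsv], [0, 1], border, [30, 32], [0, 180, 0, 256])
    hist_fg = cv2.calcHist([hsv], [0, 1], center, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist_bg, hist_bg)
    cv2.normalize(hist_fg, hist_fg)

    # 1.0 means identical color distributions, 0.0 means no overlap.
    return cv2.compareHist(hist_bg, hist_fg, cv2.HISTCMP_CORREL)

img = cv2.imread('person.jpg')  # placeholder filename
if fg_bg_similarity(img) > 0.8:  # placeholder threshold; tune on your data
    print('Foreground and background look similar; segmentation may be poor.')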

Related

Quantifying differences in an image sequence to measure activity

I'm looking for a program that will enable me to quantify the difference between images in an image sequence over time.
We are hoping to use time-lapse images to measure the activity of tadpoles by comparing how the images change over time. Tracking the movement of individuals isn't necessary. The tadpoles are dark and the background of the aquarium is light; however, the background isn't uniform, and some of the decor items, like dark rocks and foliage, mean that not all the tadpoles are visible at all times.
Basically, I need a program that will allow me to quantify the differences/motion detected in an image sequence (i.e. 209 images) and produce data that can be exported...
Any and all suggestions appreciated!!
Your question is rather vague and you don't supply any images or real indication of what you expect as results, so my answer will not be as thorough as it might otherwise be.
You don't mention any tools you are familiar with, but my recommendation would be Python and OpenCV; alternatives are probably scikit-image or Python Wand.
In general, when trying to detect movement across a series of images, you would:
try and work out what the background is
look for movement by subtracting, or differencing, frames from the background
clean up the difference image
identify objects - maybe by shape or size or colour
maybe track objects
produce statistics
As regards working out the background, I did an example here by finding the median pixel across all images at each location in the images. There is also an OpenCV tutorial here.
As regards cleaning up images, you can probably remove noise in the background subtraction with a small median filter, say 3x3 or 5x5 depending on the resolution of your images.
As regards detecting tadpoles, you will probably want to use OpenCV findContours() and filter by size, or colour, or circularity. There are some fairly decent tutorials on PyImageSearch. There is also an ImageMagick "Connected Component" analysis to find a tennis player that I did here.
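Putting those steps together, a minimal sketch of the whole pipeline with OpenCV, assuming a folder of frames (the frame_*.png filename pattern and the thresholds are placeholders to tune):

import glob
import cv2
import numpy as np

# Load the sequence as grayscale; the filename pattern is a placeholder.
frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE)
          for f in sorted(glob.glob('frames/frame_*.png'))]

# 1. Estimate the background as the per-pixel median across all frames.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

activity = []
for frame in frames:
    # 2. Difference the current frame against the background.
    diff = cv2.absdiff(frame, background)
    # 3. Clean up: threshold, then remove speckle with a small median filter.
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)
    # 4. Identify objects and filter by area (tune limits to tadpole size).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if 20 < cv2.contourArea(c) < 2000]
    # 5/6. Record simple per-frame statistics.
    activity.append((len(blobs), int(mask.sum() // 255)))

# Export blob count and changed-pixel count per frame as CSV.
np.savetxt('activity.csv', activity, fmt='%d', delimiter=',',
           header='blobs,changed_pixels')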

Proper approach to feature detection with opencv

My goal is to find known logos in static images and videos. I want to achieve that by using feature detection with KAZE or AKAZE and RANSAC.
I am aiming for a similar result to: https://www.youtube.com/watch?v=nzrqH...
While experimenting with the detection example from the docs (which is great, by the way), I faced several issues:
Object resolution: Differences in size between the known object and the resolution of the scene where the object should be located sometimes break the detection algorithm; the object won't be recognized in low-resolution images although the image quality is still all right for a human eye.
Color contrast with the background: It seems that the detection can easily be distracted by differing background contrasts (e.g. the reference logo is black on a white background, while the logo in the scene is white on a black background). How can I make the detection more robust against different illuminations and background contrasts?
Preprocessing: Should any kind of preprocessing be done on the object/scene, for example enlarging the scene to a specific size? Is there any guideline on how to approach feature detection in several steps to get the best results?
I think your issue is more complicated than the feature-descriptor-matching-homography pipeline; it is more likely a pattern recognition or classification problem.
You can check this extended paper review of shape matching:
http://www.staff.science.uu.nl/~kreve101/asci/vir2001.pdf
Firstly, the resolution of the images is very important, because the matching operation usually performs a pixel-intensity cross-correlation between your sample image (the logo) and your process image, so you will get the best cross-correlated area. In the same way, the background colour intensity is very important, because background illumination can severely affect your final result.
Feature-based methods are widely researched:
http://docs.opencv.org/2.4/modules/features2d/doc/feature_detection_and_description.html
http://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html
So, for example, you can try alternative methods such as:
HOG descriptors (Histograms of Oriented Gradients):
https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
Pattern matching or template matching:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
I think the last one (template matching) is the easiest way to check your algorithm.
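For a quick baseline, a minimal template-matching sketch with OpenCV (filenames and the 0.7 threshold are placeholders); note that plain template matching is neither scale- nor rotation-invariant, which is why it only serves as a sanity check:

import cv2

scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)  # placeholder
logo = cv2.imread('logo.png', cv2.IMREAD_GRAYSCALE)    # placeholder

# Slide the template over the scene; TM_CCOEFF_NORMED yields a
# normalized score in [-1, 1], tolerant of uniform brightness shifts.
result = cv2.matchTemplate(scene, logo, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = logo.shape
print(f'best match score {max_val:.2f} at {max_loc}')
if max_val > 0.7:  # placeholder confidence threshold
    cv2.rectangle(scene, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)
    cv2.imwrite('match.png', scene)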
Hope these references help.
Cheers.
Unai.

Foreground detection without an image sequence

I need to detect the foreground object in an image and cut it out of that image. There are lots of background/foreground subtraction and object recognition algorithms, but these algorithms work on videos or image sequences. I have only a single image (it can be a picture of a man in front of a white wall) as input. Do you know any useful approaches that can be applied to a single image file instead of a video or image sequence?
What you are looking for is a figure-ground segmentation algorithm. Does it have to be fully automatic? If you can draw an initial contour of the object by hand, you can use a class of algorithms called "active contours". If you need this to be fully automatic, you can use an algorithm called N-cuts (normalized cuts).
If you are using MATLAB, and you are ok with semi-automatic segmentation, try the Image Segmenter App in the Image Processing Toolbox.
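In Python, a comparable semi-automatic starting point is the active-contour ("snakes") implementation in scikit-image. A minimal sketch, assuming the subject is roughly centered so a circle can stand in for a hand-drawn initial contour:

import numpy as np
from skimage import io, color, filters, segmentation

# Placeholder input: a person in front of a fairly plain wall.
img = io.imread('person.jpg')
gray = color.rgb2gray(img)

# Initial contour: a circle roughly around the assumed subject.
# With a real UI you would let the user draw this instead.
s = np.linspace(0, 2 * np.pi, 400)
rows = gray.shape[0] / 2 + 0.4 * gray.shape[0] * np.sin(s)
cols = gray.shape[1] / 2 + 0.4 * gray.shape[1] * np.cos(s)
init = np.stack([rows, cols], axis=1)

# The snake deforms from the initial circle toward strong edges
# in the smoothed image.
snake = segmentation.active_contour(filters.gaussian(gray, sigma=3),
                                    init, alpha=0.015, beta=10, gamma=0.001)
# snake is an (N, 2) array of (row, col) contour points that can be
# rasterized into a mask to cut the subject out.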
Search for "saliency detection", e.g. "Global contrast based salient region detection": http://mmcheng.net/salobj/
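OpenCV's contrib modules ship static saliency estimators that give a quick way to try this idea; a minimal sketch, assuming the opencv-contrib-python package and a placeholder filename:

import cv2

img = cv2.imread('scene.jpg')  # placeholder filename

# The saliency module lives in opencv-contrib-python.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(img)  # float32 map in [0, 1]

# Binarize the saliency map with Otsu to get a rough foreground mask.
sal_map = (sal_map * 255).astype('uint8')
_, mask = cv2.threshold(sal_map, 0, 255,
                        cv2.THRESH_BINARY | cv2.THRESH_OTSU)
cv2.imwrite('saliency_mask.png', mask)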

Logo detection/recognition in natural images

Given a logo image as a reference, how can I detect/recognize it in a cluttered natural image?
The logo may be quite small in the image, and it can appear on clothes, hats, shoes, a background wall, etc. I have tried SIFT features for matching without any other preprocessing, and the result is good for cases in which the logo in the image is big and clear. However, it fails for some cases where the scene is quite cluttered and the logo is quite small compared with the whole image. It seems that SIFT features are sensitive to perspective distortions.
Does anyone know better features or ideas for logo detection/recognition in natural images? For example, training a classifier to locate candidate regions first, and then applying SIFT matching directly for further recognition. However, training a model requires a lot of data, especially manually annotated logo regions in images, and it needs re-training (collecting and annotating new images) if I want to apply it to new logos.
So, any suggestions for this? Detailed workflow/code/reference will be highly appreciated, thanks!
There are many algorithms, from shape matching to Haar classifiers; the best one depends very much on the kind of logo.
If you want to continue with feature registration, I recommend:
For detection of small logos, use tiles: split the whole image into smaller (overlapping) tiles and perform the usual detection on each, which exploits the "locality" of the searched features (see the sketch after this list).
Try ASIFT for affine-invariant detection.
Use many template images for reference feature extraction, with different lighting and different backgrounds (black, white, gray).
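A minimal sketch of the tiling idea from the first point, assuming SIFT is available (OpenCV >= 4.4 or opencv-contrib) and placeholder filenames; each overlapping tile is matched independently against the logo and the best-scoring tile is reported:

import cv2

logo = cv2.imread('logo.png', cv2.IMREAD_GRAYSCALE)    # placeholder
scene = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)
_, des_logo = sift.detectAndCompute(logo, None)

tile, step = 400, 200  # tile size and stride: 50% overlap
h, w = scene.shape
best = (0, None)  # (good-match count, tile origin)

for y in range(0, max(h - tile, 1), step):
    for x in range(0, max(w - tile, 1), step):
        patch = scene[y:y + tile, x:x + tile]
        _, des = sift.detectAndCompute(patch, None)
        if des is None:
            continue
        # Lowe's ratio test keeps only distinctive matches.
        matches = matcher.knnMatch(des_logo, des, k=2)
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best[0]:
            best = (len(good), (x, y))

print(f'best tile at {best[1]} with {best[0]} good matches')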

After Effects' rotoscoping brush algorithms

I don't think I'm going to get any replies, but here goes: I'm developing an iOS app that performs image segmentation. I'm trying to implement the easiest way to crop a subject out of an image without the need for a green screen/keying. Most automated solutions, like using OpenCV, just aren't cutting it.
I've found the Roto Brush tool in After Effects to be effective at giving hints on where the app should be cutting. Does anyone know what kind of algorithms the Roto Brush tool uses?
Check out this page, which contains a couple of video presentations from SIGGRAPH (a computer graphics conference) about the Roto Brush tool. Also take a look at Jue Wang's paper on Video SnapCut. As Damien guessed, object extraction relies on some pretty intense image processing algorithms. You might be able to implement something similar in OpenCV depending on how clever/masochistic you're feeling.
The algorithm is a graph-cut based segmentation algorithm where Gaussian Mixture Models (GMM) are trained using color pixels in "local" regions as well as "globally", together with some sort of shape prior.
OpenCV has a "cheap hack" implementation of the "GrabCut" paper where the user specifies a bounding box around the object they wish to segment. Typically, using just the bounding box will not give good results. You will need the user to specify "foreground" and "background" pixels (as is done in Adobe's rotoscoping tool) to help the algorithm build foreground and background color models (in this case GMMs), so that it knows the typical colors of the foreground object you wish to segment and those of the background you want to leave out.
A basic graph-cut implementation can be found on this blog. You can probably start from there and experiment with different ways to compute the cost terms to get better results.
Lastly, to soften the edges, a cheap hack is to blur the binary mask to obtain a mask with values between 0 and 1. Then recomposite your image using the mask, i.e. c[i][j] = mask[i][j] * fgd[i][j] + (1 - mask[i][j]) * bgd[i][j], where you blend the foreground you segmented (fgd) with a new background image (bgd), using the mask values as blending weights.
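Putting the two steps together, a minimal sketch using OpenCV's GrabCut initialized from a bounding box; the filenames and rectangle are placeholders, and in a real app the user's foreground/background strokes would refine the mask further:

import cv2
import numpy as np

img = cv2.imread('subject.jpg')             # placeholder filenames;
bgd_img = cv2.imread('new_background.jpg')  # must match img's size

# GrabCut initialized from a bounding box (x, y, w, h) around the subject.
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)  # placeholder box
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Collapse GrabCut's four labels into a binary foreground mask.
binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1.0, 0.0)

# Soften the edges: blur the binary mask into [0, 1] alpha values...
alpha = cv2.GaussianBlur(binary, (21, 21), 0)[..., None]

# ...then recomposite: c = alpha * fgd + (1 - alpha) * bgd.
composite = (alpha * img + (1 - alpha) * bgd_img).astype(np.uint8)
cv2.imwrite('composite.png', composite)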
