Background Subtraction with user input - image-processing

I am looking for an algorithm or, even better, a library that handles background subtraction from a single static image (no background model available). Some kind of user input would be possible, though, for example the way https://clippingmagic.com does it.
Sadly my Google-fu fails me here, as I can't find any papers on the topic with my limited set of keywords.

That webpage is really impressive. If I were to implement something similar, I would probably use k-means clustering in the CIELAB colorspace. The reason for changing colorspace is that clustering can then be done on the two chromaticity channels (a* and b*) rather than all three RGB channels, which speeds it up. Additionally, the CIELAB colorspace was built for exactly this purpose: measuring "distances" (similarities) between colors in a way that accounts for how humans perceive color, rather than just comparing the raw binary values the computer stores.
Here is a quick overview of k-means. For this example we will say k=2 (meaning only two clusters):
1. Initialize each cluster with a mean.
2. Go through every pixel in your image and decide which mean it is closer to: cluster 1 or cluster 2.
3. Compute the new mean for each cluster after you've processed all the pixels.
4. Using the newly computed means, repeat steps 2-3 until convergence (meaning the means barely change).
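The steps above can be sketched in a few lines of Python with NumPy (a minimal illustration, not an optimized implementation; `pixels` is assumed to be an (N, channels) array of color values):

```python
import numpy as np

def kmeans(pixels, k=2, iters=100, tol=1e-4):
    """Minimal k-means. pixels: (N, C) array of color values."""
    pixels = np.asarray(pixels, dtype=float)
    # Step 1: initialize the cluster means (here: evenly spaced sample pixels)
    means = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Step 2: assign every pixel to its nearest mean
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each cluster's mean from its members
        new_means = np.array([pixels[labels == j].mean(axis=0)
                              if np.any(labels == j) else means[j]
                              for j in range(k)])
        # Step 4: stop at convergence (the means barely change)
        if np.allclose(new_means, means, atol=tol):
            break
        means = new_means
    return labels, means
```

With two well-separated color blobs (say reddish vs. bluish pixels), this splits them cleanly after a few iterations.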
Now, that would work well when the foreground is notably different from the background, say a red ball on a blue background, but if the colors are similar it becomes more problematic. I would still stick with k-means but use a larger number of clusters. On that web page you can make multiple red or green selections; I would make each of these strokes a cluster and initialize that cluster's mean from the stroke's pixels. So say I drew 3 red strokes and 2 green ones: that means I'd have 5 clusters, but internally each cluster also carries an extra foreground/background attribute. Each cluster then has a small variance, and in the end I would only display that attribute, foreground or background. I hope that makes sense.
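A sketch of that stroke-seeding idea (the stroke means and their foreground/background labels stand in for hypothetical user input):

```python
import numpy as np

def classify_by_strokes(pixels, stroke_means, stroke_is_fg):
    """Assign each pixel to its nearest stroke-seeded cluster,
    then report only that cluster's foreground/background attribute."""
    pixels = np.asarray(pixels, dtype=float)
    stroke_means = np.asarray(stroke_means, dtype=float)
    dists = np.linalg.norm(pixels[:, None, :] - stroke_means[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)              # which stroke cluster?
    return np.asarray(stroke_is_fg)[nearest]    # display only fg/bg
```

Each stroke keeps its own mean (so clusters stay tight), but the user only ever sees the foreground/background decision.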
Maybe now you have some search terms to start off with. There may be many other methods but this is the first I thought of, good luck.
EDIT
After playing with the website a bit more, I see it uses spatial proximity to cluster. So say I had 2 identical red blobs on opposite sides of the image: if I only annotate the blob on the left, the one on the right might not get detected. K-means wouldn't replicate this behavior, since the method I described clusters pixels purely by color, completely oblivious to their location in the image.
I don't know what tools you have at your disposal, but here is a nice MATLAB example/tutorial on color-based k-means.

Related

Quantifying differences in an image sequence to measure activity

I'm looking for a program that will enable me to quantify the difference between images in an image sequence over time.
We are hoping to use timelapse images to measure the activity of tadpoles by comparing how the images change over time. Tracking the movement of individuals isn't necessary. The tadpoles are dark and the background of the aquarium is light; however, the background isn't uniform, and some of the decor items like dark rocks and foliage mean that not all of the tadpoles are visible at all times.
Basically, I need a program that will allow me to quantify the differences/motion detected in an image sequence (i.e. 209 images) and produce data that can be exported...
Any and all suggestions appreciated!!
Your question is rather vague and you don't supply any images or real indication of what you expect as results, so my answer will not be as thorough as it might otherwise be.
You don't mention any tools you are familiar with, but my recommendation would be Python and OpenCV. Alternatives are probably scikit-image, Python Wand.
In general, when trying to detect movement across a series of images, you would:
try and work out what the background is
look for movement by subtracting, or differencing, frames from the background
clean up the difference image
identify objects - maybe by shape or size or colour
maybe track objects
produce statistics
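The core of those steps can be sketched with NumPy alone (a bare-bones illustration; `frames` is assumed to be a sequence of grayscale frames, and the threshold is a made-up starting value you'd tune):

```python
import numpy as np

def activity_per_frame(frames, thresh=30):
    """frames: sequence of grayscale frames (H, W). Returns, per frame,
    the fraction of pixels that differ from the median background."""
    stack = np.asarray(frames, dtype=np.int16)
    background = np.median(stack, axis=0)        # per-pixel median background
    diff = np.abs(stack - background)            # difference each frame
    moving = diff > thresh                       # threshold away sensor noise
    return moving.mean(axis=(1, 2))              # activity score per frame
```

The resulting per-frame scores are exactly the kind of exportable numbers the question asks for (e.g. dump them to CSV).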
As regards working out the background, I did an example here by finding the median pixel across all images at each location in the images. There is also an OpenCV tutorial here.
As regards cleaning up images, you can probably remove noise in the background subtraction with a small median filter, say 3x3 or 5x5 depending on the resolution of your images.
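For the cleanup step, a 3x3 median filter can be written directly (a sketch; in practice OpenCV's medianBlur or scipy.ndimage.median_filter do the same thing faster):

```python
import numpy as np

def median3x3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode='edge')         # replicate the border
    h, w = img.shape
    # Collect the 9 shifted views of the image, one per neighbourhood offset
    windows = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)
```

Isolated single-pixel specks of noise vanish entirely, while larger blobs (the tadpoles) survive.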
As regards detecting tadpoles, you will probably want to use OpenCV findContours() and filter by size, or colour, or circularity. There are some fairly decent tutorials on PyImageSearch. There is also an ImageMagick "Connected Component" analysis to find a tennis player that I did here.

Detecting common colors for groups of objects

I have an image with a collection of objects in K given perceived colors. Providing I extract those objects, how could I cluster them by their perceived color?
Let me give you an example. I am trying to cluster two football teams, so there will be two teams, referees and a keeper (or two, but that's a rare situation) in the image: 3, 4 or 5 clusters.
To a human eye it's an easy task. In the picture above, we have white players, red players and a black-clad referee. But it turns out not to be so easy for automatic processing.
What I have tried so far:
1) I started working in the BGR colorspace, then tried HSV, and now I am exploring CIE Luv, as I read it has uniform distances describing the perceived differences between colors.
2) [BGR and HSV] Taking the most common color from the contour (not the bounding box). This didn't work at all because of noise (the green field getting in the way), the quality of the image, the position of the player, etc. The colors were pretty much random.
3) [CIE Luv] Resizing all players' boxes to a common size and taking a small portion of the image from the middle (as marked by a black rectangle in the example below).
Taking the mean value of all pixels in each player's window and adding it to a list (so there is one pixel with the mean value per player), then using K-means (with a defined number of clusters) to find clusters in that list. This has proven somewhat successful: for the image above I get reddish, white and blackish centres in the clusters.
Unfortunately, the assignment of players back to these clusters is pretty much random. I am doing that by calculating the mean color for each player as described above and then measuring the distance to each cluster. A player might be assigned to the white cluster on one frame and to the red one on the next. Part of the problem might be that the window in the middle of the player's box sometimes catches a number, grass or shorts instead of the jersey.
I have already spent a considerable amount of time on trying to figure that out, grateful for any help.
I may be overcomplicating the problem since you have just 3 classes, but try training an SVM classifier on HOG descriptors. Maybe try LDA to improve speed.
Some references -
1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.627.6465&rep=rep1&type=pdf - skip to recognition part
2] https://rodrigob.github.io/documents/2013_ijcnn_traffic_signs.pdf - skip to recognition part.
3] https://www.learnopencv.com/handwritten-digits-classification-an-opencv-c-python-tutorial/ - if you want to jump into the code right away
This will always work as long as your detection is good, and it can also help identify different players based on their shirt numbers (maybe more?) if you train it right.
EDIT: Okay, I have another idea, based on colour segmentation since that was your original approach, and it requires less work (maybe not? Colour segmentation is a pain! Also LIGHTING! LIGHTING! LIGHTING!).
Create a green mask and set a threshold so you detect as little grass as possible when doing your k-means. Then, instead of the mean, try the median: outlier pixels (the white numbers, skin) drag the mean away from the jersey colour, while the median stays close to it. So it'll be far more robust and you should be able to sort players better (hair colour and skin colour shouldn't affect it too much).
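A tiny demonstration of why the median helps here (a synthetic jersey patch; the colour values are made up):

```python
import numpy as np

# A patch that is mostly red jersey but contains white number pixels
patch = np.vstack([
    np.tile([180, 20, 20], (80, 1)),     # jersey pixels
    np.tile([250, 250, 250], (20, 1)),   # white number pixels (outliers)
])
mean_color = patch.mean(axis=0)          # dragged toward white by outliers
median_color = np.median(patch, axis=0)  # stays on the jersey colour
```

Only 20% white pixels already shift the mean's green/blue channels well away from the jersey, while the median is untouched.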
EDIT 2: Just noticed: if you use the black rectangle you'll mostly get the shirt number (which is white), which will mess up your classifier. Use the original box with the green masked out.
EDIT 3: Also, you can just create 3 thresholds for your required colours and split them up! You don't really need k-means here: basically you just need your detected boxes to give out a value inside a threshold. Try the median method I mentioned above; it should improve things. You might also need some more minor tweaks here and there (blur, morphology, etc. to improve detection).

OpenCV Matching Exposure across images

I was wondering if it's possible to match the exposure across a set of images.
For example, let's say you have 5 images that were taken at different angles. Images 1-3 and 5 are taken with the same exposure, whilst the 4th image has a slightly darker exposure. When I then try to combine these into a cylindrical panorama (using seamFinder with gc_color, SURF detection, MULTI_BAND blending, wave correction, etc.), the result turns out with a big shadow in the middle due to the darkness of image 4.
I've also tried using exposureCompensator without luck.
Since I'm taking the pictures on iOS, I could maybe increase the exposure manually when needed? But this doesn't seem optimal...
Has anyone else dealt with this problem?
This method is probably overkill (and not just a little), but the current state-of-the-art method for ensuring color consistency between different images is presented in this article by HaCohen et al.
Their algorithm can correct a wide range of errors in image sets. I have implemented and tested it on datasets with large errors and it performs very well.
But, once again, I suppose this is way overkill for panorama stitching.
Sunreef has provided a very good paper, but it does seem like overkill given the complexity of a possible implementation.
What you want to do is equalize the exposure not over the entire images, but over the overlapping zones. If the histograms of the overlapped zones match, it is a good indicator that the images have similar brightness and exposure conditions. Since you are doing more than one stitch, you may require a global equalization to make all the images look similar, and then only equalize them using either a weighted equalization on the overlapped region or a quadratic optimiser (which is again overkill if you are not a professional photographer). OpenCV has a simple implementation of an exposure-compensation algorithm.
The detail::ExposureCompensator class of OpenCV (a sample implementation of such a stitching is here) would be ideal for you to use.
Just create a compensator (try the 2 different types of compensation: GAIN and GAIN_BLOCKS)
Feed the images into the compensator, based on where their top-left corners lie (in the stitched image), along with a mask (which can be either completely white, or white only in the overlapped region).
Apply compensation on each individual image and iteratively check the results.
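The GAIN mode essentially scales each image so it matches the reference in the overlap. A minimal single-pair sketch of that idea (not the actual OpenCV implementation, which solves for all pairs jointly):

```python
import numpy as np

def gain_compensate(ref, target, overlap_mask):
    """Scale `target` so its mean brightness matches `ref` on the overlap."""
    gain = ref[overlap_mask].mean() / target[overlap_mask].mean()
    out = np.asarray(target, dtype=float) * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```

GAIN_BLOCKS does the same per block of the image instead of globally, which handles vignetting better.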
I don't know any way to do this in iOS, just OpenCV.

What is the difference between color deconvolution and K-means clustering for colors?

I have some color images needing segmentation. They are images of slides that are stained with hematoxylin and eosin ("H&E").
I found this method for color deconvolution by Ruifrok
http://europepmc.org/abstract/med/11531144
that separates out the images by color.
However it seems that you can do something similar just by using K-means clustering:
http://www.mathworks.com/help/images/examples/color-based-segmentation-using-k-means-clustering.html
I am curious what the difference is. Any insight would be welcome. Thanks.
I can't seem to find a copy of the article (well, not without paying), but they are not exactly the same.
K-means seeks to cluster data. So if you just want to find dominant colors in an image, or do some sorting based on colors, this is the way to go. As a side note: k-means can be used on any vector. It's not confined to color, so you can use it for many other applications.
Color deconvolution tries to remove the effects of the chemical dyes commonly used in microscopy (if I understood the abstract properly). Based on the specific dye used, the algorithm tries to reverse its effects and give you back the original color image (before the dye was added). I found this website that shows some output. This is deconvolving the dye's contribution to the RGB spectrum. It doesn't do any clustering/grouping (other than finding the dye).
Hope that helps
EDIT
If you didn't know, convolution is most often associated with signal/image processing. Basically, you take a filter and run it over a signal; the output is a modified version of the original input. In this case, the original image is filtered by a dye with known RGB characteristics. If we know the full characteristics of the dye/filter, we can invert it; then, by running the convolution again with the inverse filter, we can hopefully deconvolve the effect. In principle it sounds simple enough, but in many cases this isn't possible.
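As a concrete sketch: Ruifrok-style stain deconvolution works in optical-density space, where stain contributions add linearly, so the "inverse filter" is just a matrix inversion. The stain vectors below are commonly quoted H&E/DAB values, used here purely for illustration; real vectors should be calibrated for your slides:

```python
import numpy as np

# Unit stain vectors in optical-density space (rows); illustrative values
M = np.array([
    [0.65, 0.70, 0.29],   # hematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # residual / DAB
])
M = M / np.linalg.norm(M, axis=1, keepdims=True)
M_inv = np.linalg.inv(M)

def deconvolve(rgb):
    """rgb: (..., 3) image with values in 1..255.
    Returns per-pixel stain concentrations."""
    od = -np.log10(np.clip(rgb, 1, 255) / 255.0)  # Beer-Lambert optical density
    return od @ M_inv                              # unmix the stains
```

Unlike k-means, this recovers a physical quantity (how much of each stain is present per pixel) rather than grouping pixels by similarity.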

Best tracking algorithm for multiple colored objects (billiard balls)

Let me quickly explain what I have: I have written a custom detector that finds the regions in an image containing billiard balls. I did this using the HSV colorspace, and for most balls I could get away with thresholding only the Hue channel. However, for orange (#5) and brown (#7) one must take the saturation into account, which adds another dimension to the problem.
From my research it seems like my best route would be to do some manner of mean-shift tracking but everything I've come across has described mean-shift in which only one channel is used (the hue channel).
Can anyone please explain or offer a link explaing how I can adapt mean-shift to work using hue and saturation?
Or can you tell me if you think a different tracking algorithm may be better suited to this problem?
In theory, mean shift works well regardless of dimensionality (in very high dimensions sparseness is a bit of an issue, but there are works that address that problem).
If you are trying to use an off-the-shelf mean-shift tracker that only takes a single-channel input, you can create your own problem-specific color channel. You need a single channel that maximizes the difference between the differently colored billiard balls.
The easiest way of doing that is to take the mean colors of all 15 balls, put them in a 15x3 matrix, and decompose it with SVD (subtract the mean first) so you get the axis of maximal variance. This gives you the best linear transformation from RGB to a new one-dimensional color space that maximizes the difference between the billiard-ball colors. (If it isn't good enough, you can do better with a local mapping, but that might not be necessary.)
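That SVD recipe is only a few lines of NumPy (a sketch; `ball_colors` stands in for the 15 measured mean colors):

```python
import numpy as np

def best_1d_color_axis(ball_colors):
    """ball_colors: (n, 3) mean colors. Returns the unit axis of
    maximal variance (the first right singular vector)."""
    colors = np.asarray(ball_colors, dtype=float)
    centered = colors - colors.mean(axis=0)      # subtract the mean first
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def to_custom_channel(image_rgb, axis, mean):
    """Project an RGB image onto the single most discriminative channel."""
    return (np.asarray(image_rgb, dtype=float) - mean) @ axis
```

The projected single-channel image can then be fed straight into a one-channel mean-shift tracker.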
