Making the clusters fixed in place in K-Means

I am doing image segmentation on black & white images using K-Means with K=2, and I need to identify the black area every time so I can use it later for steganography purposes. The problem is that when I run the code, the cluster indices change: the black area comes out at index 0 on one run and at index 1 on another.
Is there any way to keep it fixed in place, for multiple images too?
The code is written in Python using the OpenCV library.
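One common fix (a minimal sketch, assuming a grayscale image; the filename is a placeholder) is to run cv2.kmeans as usual and then sort the cluster centers by brightness, remapping the labels so the darker cluster is always index 0:

```python
import cv2
import numpy as np

# Placeholder filename; any grayscale cover image works.
img = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)
data = img.reshape(-1, 1).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(data, 2, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# Remap labels so index 0 is always the darkest center. This makes
# the labeling stable across runs and across different images.
order = np.argsort(centers.ravel())      # e.g. [1, 0] if center 1 is darker
remap = np.empty_like(order)
remap[order] = np.arange(len(order))
labels = remap[labels.ravel()].reshape(img.shape)

black_mask = labels == 0                 # the black area, every time
```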

Related

How to remove 'wood grain' (noise) background from image?

I have been stuck attempting to remove the background from a borehole log image for a week or so (I'm new to image processing). I want to eventually develop code that can automatically detect the horizontal sinusoidal features in the image (attached); I think I can use a Hough transform for this. However, all of the algorithms I have tried (Hough transform, edge detection, thresholding) fail because of the background of the image, which has this 'wood grain' appearance. I also tried recreating a mask by finding the image gradient, but because the color values of the features I want (the horizontal sinusoidal shapes) are so similar to the background I want to remove, I am having a difficult time. The ultimate goal is to take two images taken at different times (before and after a scientific experiment) and subtract them to see where the sinusoidal patterns differ. If I can get rid of this background, that should be easier.
So far I have improved the image quality by taking the FFT and applying a high-pass filter. This at least homogenizes the image and leaves me with the attached result. However, I am not having much luck removing this vertical 'wood grain'. Does anyone have a thought about how it could be done? This is driving me a little crazy.
Thank you so much!
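For illustration, a minimal sketch of the kind of frequency-domain filtering described above, extended with a directional notch aimed at the vertical grain. The filename, band width, and DC margin are assumptions that would need tuning:

```python
import cv2
import numpy as np

# Vertical stripes vary along x and are constant along y, so their
# energy sits on the horizontal axis of the centered spectrum
# (f_y ~ 0). Zero a thin horizontal band, keeping a small region
# around DC so overall brightness survives.
img = cv2.imread("borehole.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
spectrum = np.fft.fftshift(np.fft.fft2(img))

rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
band, keep = 3, 10                     # tuning parameters, chosen by eye

mask = np.ones((rows, cols), np.float32)
mask[crow - band:crow + band + 1, :] = 0
mask[crow - band:crow + band + 1, ccol - keep:ccol + keep + 1] = 1

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
filtered = cv2.normalize(filtered, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("borehole_filtered.png", filtered)
```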

Background Subtraction with user input

I am looking for an algorithm or, even better, some library that covers background subtraction from a single static image (no background model available). What would be possible, though, is some kind of user input, as https://clippingmagic.com does it, for example.
Sadly my Google-fu is failing here, as I can't find any papers on the topic with my limited set of keywords.
That webpage is really impressive. If I were to try and implement something similar, I would probably use k-means clustering in the CIELAB colorspace. The reason for changing the colorspace is that colors can then be represented by two components (a* and b*) rather than three as in a regular RGB image, which should speed up clustering. Additionally, the CIELAB color space was built for exactly this purpose: finding "distances" (similarities) between colors in a way that accounts for how humans perceive color, not just the raw binary data the computer has.
Here's a quick overview of k-means. For this example we will say k = 2 (meaning only two clusters); a code sketch follows the list below.
1. Initialize each cluster with a mean.
2. Go through every pixel in your image and decide which mean it is closer to: cluster 1 or cluster 2.
3. Compute the new mean for each cluster after you've processed all the pixels.
4. Using the newly computed means, repeat steps 2-3 until convergence (meaning the means don't change very much).
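To make that concrete, a minimal sketch using OpenCV's cv2.kmeans on just the a* and b* channels; the filename is a placeholder and k matches the example above:

```python
import cv2
import numpy as np

# Cluster pixels in CIELAB, ignoring lightness (L), as described above.
img = cv2.imread("photo.png")                     # placeholder filename
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)

k = 2                                             # two clusters, as in the example
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(ab, k, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# One integer label per pixel; each label is a color cluster.
segmentation = labels.reshape(img.shape[:2])
```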
Now, that would work well when the foreground is notably different from the background, say a red ball on a blue background, but if the colors are similar it would be more problematic. I would still stick with k-means but use a larger number of clusters. On that web page you can make multiple red or green selections; I would make each of these strokes a cluster and initialize the cluster to the stroke's mean. So say I drew 3 red strokes and 2 green ones: that means I'd have 5 clusters, but internally each one also carries an extra foreground/background attribute. Each cluster will have a small variance, but in the end I would only display that attribute, foreground or background. I hope that made sense.
Maybe now you have some search terms to start off with. There may be many other methods, but this is the first one I thought of. Good luck.
EDIT
After playing with the website a bit more, I see it uses spatial proximity to cluster. So say I had two identical red blobs on opposite sides of the image: if I only annotate the left side of the image, the blob on the right side might not get detected. K-means wouldn't replicate this behavior, since the method I described uses only color to cluster pixels, completely oblivious to their location in the image.
I don't know what tools you have at your disposal, but here is a nice MATLAB example/tutorial on color-based k-means.

Feature detection on a small, noisy image with OpenCV

I have an image that is pretty noisy and small (the relevant portion is 381 × 314), and the features are very subtle.
The source image and the cropped relevant area are here as well: http://imgur.com/a/O8Zc2
The task is to count the number of white-ish dots within the relevant area using Python, but I would be happy with just isolating the lighter dots and lines within the area and removing the background structure (in this case the cell).
With OpenCV I've tried histogram equalization (destroys the details), finding contours (didn't work), and using color ranges (too close in color?).
Any suggestions or guidance on other things to try? I don't believe I can get a higher-resolution image, so is this task possible with such a difficult source?
(This is not a Python answer, since I've never used the Python/OpenCV bindings. The images below were created using Mathematica. But I only used basic image processing functions, so you should be able to implement this in Python on your own.)
A very general "trick" in image processing is to think about removing the thing you're looking for instead of actually looking for it, because removing it is often much easier than finding it. You could, for instance, apply a morphological opening, a median filter, or a Gaussian filter to it:
These filters effectively remove details smaller than the filter size and leave the coarser structures more or less untouched. So you can just take the difference from the original image and look for local maxima:
(You'll have to play around with different "detail removal filters" and filter sizes. There's no way to tell which one works best with just one image.)
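A rough Python/OpenCV sketch of the same idea; the filter choice, kernel sizes, and thresholds here are assumptions to play with, and the filename is a placeholder:

```python
import cv2
import numpy as np

# "Remove the thing you're looking for": blur away the small dots,
# then the difference from the original contains only the dots.
img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)   # placeholder name

background = cv2.medianBlur(img, 15)     # "detail removal" filter
detail = cv2.subtract(img, background)   # keeps small bright structures

# Local maxima: pixels that equal the maximum of their neighborhood
# and clear a small threshold.
dilated = cv2.dilate(detail, np.ones((7, 7), np.uint8))
maxima = ((detail == dilated) & (detail > 10)).astype(np.uint8)

count, _ = cv2.connectedComponents(maxima)
print("bright dots found:", count - 1)   # label 0 is the background
```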

Image Segmentation for Color Analysis in OpenCV

I am working on a project that requires me to:
Look at images that contain relatively well-defined objects (e.g. the attached example),
and pick out the color of the n most prominent objects (it's generic; n could be 1, 2, 3, etc.) in some color space (RGB, HSV, whatever) and return it.
I am looking into ways to segment images like this into the independent objects. Once that's done, I'm under the impression that it won't be particularly difficult to find the contours of the segments and analyze them for average or centroid color, etc...
I looked briefly into the Watershed algorithm, which seems like it could work, but I was unsure of how to generate the marker image for an indeterminate number of blobs.
What's the best way to segment such an image, and if it's using Watershed, what's the best way to generate the corresponding marker image of integers?
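(For what it's worth, a common recipe for the marker image, sketched below with placeholder names and thresholds: threshold the image, take the distance transform, and let its peaks become one integer marker per blob via connected components, so the number of blobs never has to be known in advance.)

```python
import cv2
import numpy as np

# Sketch of the usual marker-generation recipe for cv2.watershed.
img = cv2.imread("objects.png")                   # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure foreground: strong peaks of the distance transform.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)

# Sure background: a dilation of the thresholded blobs.
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# One positive integer per blob; 0 marks the region watershed fills in.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                  # sure background becomes 1, blobs 2..n
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)  # boundaries get label -1
```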
Check out this possible approach:
Efficient Graph-Based Image Segmentation
Pedro F. Felzenszwalb and Daniel P. Huttenlocher
Here's what it looks like on your image:
I'm not an expert, but I really don't see how the Watershed algorithm could be very useful for your segmentation problem.
From my limited experience/exposure to this kind of problem, I would think the way to go would be a sliding-window approach to segmentation. Basically this entails walking the image using a window of a set size and attempting to determine whether the window encompasses background or an object. You will want to try different window sizes and steps.
Doing this should allow you to detect the objects in the image, presuming that the images contain relatively well-defined objects. You might also attempt to perform segmentation after converting the image to black and white with a threshold that gives good separation of background vs. objects.
Once you've identified the object(s) via the sliding window you can attempt to determine the most prominent color using one of the methods you mentioned.
UPDATE
Based on your comment, here's another potential approach that might work for you:
If you believe the objects will have mostly uniform color, you might attempt to process the image to:
1. remove noise;
2. map the original image to a reduced color space (e.g. 256 or even 16 colors);
3. detect connected components based on pixel color, and determine which ones are large enough.
You might also benefit from re-sampling the image to a lower resolution (e.g. if the image is 1024 × 768, you might reduce it to 256 × 192) to help speed up the algorithm.
The only thing left to do would be to determine which component is the background. This is where it might also make sense to attempt background removal by converting to black/white with a suitable threshold.
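A minimal sketch of the down-sampling, color-reduction, and connected-components steps above; the filename, quantization level, and area threshold are all assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("objects.png")                  # placeholder filename
small = cv2.resize(img, None, fx=0.25, fy=0.25)  # e.g. 1024x768 -> 256x192

# Crude color reduction: 4 levels per channel = 64 colors.
quant = (small // 64) * 64 + 32

# Connected components per quantized color; keep the large ones.
min_area = 500
for color in np.unique(quant.reshape(-1, 3), axis=0):
    mask = cv2.inRange(quant, color, color)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):                        # label 0 is "not this color"
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            print("blob of color (BGR)", color.tolist(),
                  "area", int(stats[i, cv2.CC_STAT_AREA]))
```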

Pattern Recognition using OpenCV

I am trying to detect a pattern on an object on a green field, made up of three colors (two pink markers to the sides and a blue one in the middle) arranged like a traffic light.
At first I tried converting the images from the webcam to HSV color space and isolating the colors using cvInRangeS, but that became problematic: as the light changes in the room during the day, I either get false positives or lose track of objects.
Then I tried SURF by modifying find_obj.cpp. The problem with that was that OpenCV could only detect 2 SURF points on my marker, which is not enough to locate it; from the code it seems I need at least 4. I tried playing with the SURF parameters, but that did not change anything.
Also, while googling I came across this,
http://wiki.elphel.com/index.php?title=OpenCV_Tennis_balls_recognizing_tutorial&redirect=no
which says I can also use machine learning to pick the color range I am interested in, but I could not find any info on how to do that.
My question is, is there anything in OpenCV that would allow me to detect the marker?
EDIT: Another question, about trying Haar training: my background will always be the same color and the same surface, and I'll be using the same marker on the object. Can I train a classifier with, say, 20 positive and 20 negative images, or do I still need thousands of images to get it to recognize the marker?
I'd suggest you check out Shervin's tutorial on blob detection using colors:
http://www.shervinemami.info/blobs.html
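In the meantime, a minimal color-blob sketch along those lines (OpenCV 4.x Python API assumed; the HSV ranges and thresholds are guesses that would need tuning for your camera and lighting):

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")                  # placeholder: one webcam frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold mostly on hue with loose S/V bounds; this is usually
# less lighting-sensitive than tight fixed ranges.
masks = {
    "pink": cv2.inRange(hsv, (140, 60, 60), (175, 255, 255)),
    "blue": cv2.inRange(hsv, (100, 60, 60), (130, 255, 255)),
}

kernel = np.ones((5, 5), np.uint8)
for name, mask in masks.items():
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 100:             # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            print(name, "marker near", (x + w // 2, y + h // 2))
```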
EDIT
You might try Retinex to help improve results:
http://www.ipol.im/pub/algo/lmps_retinex_poisson_equation/
