Image transformation by clustering - image-processing

I am doing a project on image binarisation where I need to transform an image so that it is divided into individual colour layers using clustering. What I mean is that there should be no shades in the output; instead, the shades of the input image should be merged into flat layers separating the colours.
The input and output images are given:
I am trying to implement this using OpenCV, but I am not able to figure out how to do it.
Thanks in advance.

Try using k-means clustering.
http://aishack.in/tutorials/kmeans-clustering-opencv/
You get as many colours as you have means.
Here is an example implemented with the Accord.NET C# library.
http://crsouza.blogspot.com.au/2010/10/k-means-clustering.html

Related

Template matching for colored image input

I have working code for template matching, but it only works if the input image is converted to grayscale. Is it possible to do template matching that also takes into account the colour of the template that needs to be found in the given image?
inputImg = cv2.imread("location")
template = cv2.imread("location")
Yes, you can do it, but why?
The idea of converting to grey-scale is to apply the selected edge-detection algorithm to the features of the input image.
Since you are working with features, the likelihood of finding the template in the original image is higher. As a result, converting to grey-scale has two advantages: accuracy and lower computational cost.
The matchTemplate method also works for RGB images, but then you need to find the image characteristics across three different channels. You also can't be sure your features are robust, since most edge-detection algorithms are designed for grey-scale images.

How can I align warped images to create a panoramic image?

I am trying to create a panorama and I am stuck on the part where I have two separate warped images in two cv::Mat's and now I need to align them and create one single cv::Mat. I also need to average the pixel color value where the pixels in the images overlap to do elementary blending. Is there a built in function in opencv that can do this for me? I have been following the Basic Stitching Pipeline. I'm not sure how I can align and blend the images. I looked up a solution that does feature matching between the images and then we get the homography and just use the translation vector to align the images. Is this what I should be using?
Here are the warped images:
Image 1:
Image 2:
Generating a panorama from a set of images is usually done using homographies. The reason for this is explained very well here.
You can refer to the code given by Eduardo here. It is also based on feature matching though.
You are right: start by computing descriptors for features in the images (the BRIEF descriptor might be a good choice) and then do feature matching. Once you have correspondences, use them to estimate the homography. The homography lets you warp one of the images with respect to the other. After that, you can simply blend them together (for example by adding the two images, or by taking the maximum value at each pixel).

flower segmentation using GrabCut in OpenCv

I want to create a model for flower segmentation and train it with many images, using GrabCut in OpenCV. I have read this link, but it only uses one image for segmentation. How can I use GrabCut for the purpose mentioned above?
here is some sample from flower's pictures:
If all the images are like the ones shown, and you are set on using grabcut, then you can cheat by setting a mask to the central pixels and then using grabcut with the mask option.
If all the images are like the ones shown, and you are not set on using grabcut, then maybe try salient segmentation, it seems to like flowers.
http://mmcheng.net/salobj/
If you want a general "model" that can segment flowers that is much more difficult. Perhaps check my other post https://stackoverflow.com/a/24624938/3669776 for some bedtime reading :)

Convert raster images to vector graphics using OpenCV?

I'm looking for a way to convert raster images to vector data using OpenCV. I found the function cv::findContours(), which seems a bit primitive (or, more probably, I did not understand it fully):
It seems to work on b/w images only (no greyscale and no coloured images), and it does not seem to accept any filtering/error-suppression parameters that could help with noisy images, e.g. to avoid very short vector lines or uneven polylines where one single straight line would be the better result.
So my question: is there an OpenCV way to vectorise coloured raster images, with the colour information assigned to the resulting polylines afterwards? And how can I apply noise reduction and error suppression to such an algorithm?
Thanks!
If you want to vectorise a raster image by colour, then I recommend clustering the image into a small group of colours (or quantising it), and after that extracting the contours of each colour and converting them to the needed format. There are no ready-made vectorising methods in OpenCV.

How to segment objects based on color and size?

I have two image processing problems that I'm handling using Open-CV.
Telling similar objects with different colours apart from each other.
Telling similarly coloured objects with different sizes apart from each other.
Example images for scenarios 1 and 2:
Image 1:
Image 2:
Both the images have three types of objects of interest. (Either three colors or sizes)
The techniques I've come across include thresholding followed by erosion and pixel counting, and colour segmentation using RGB values.
What is a good work-chain and what is a good place to start?
For colour segmentation you should stay away from RGB, as perceptually similar colours aren't close together in RGB space.
As an example, two similar colours (with identical hue) may have very different RGB values:
It's better to work with colour spaces like LUV or HSV, which separate colour from luminance. For example, you may try a clustering algorithm on the U, V components of LUV.
Obviously, working with RGB values is probably the best way to start here. Use the function cvSplit, which gives you the three separate planes B, G and R (BGR order with OpenCV, not RGB). In each of them, you should see only the circles of the corresponding colour.
I would recommend first performing edge detection with the Canny algorithm, implemented in OpenCV as cvCanny, and then doing circle detection with the Hough algorithm, also implemented in OpenCV. If I remember well, the OpenCV function for Hough circles returns the circle properties (radius, ...), which allows you to identify your circles by their sizes.
Another option for 2. is the hit-and-miss transform, which uses morphology. I never used morphology with OpenCV though, only with Matlab.
Have fun
Have a look at cvBlob which works very well and can handle complex shapes.
