How to extract dominant color from CIAreaHistogram?

I am looking to analyze the most dominant color in a UIImage on iOS (the color present in the most pixels) and I stumbled upon Core Image's filter-based API, particularly CIAreaHistogram.
It seems like this filter could help me, but I am struggling to understand the API. Firstly, it says the output of the filter is a one-dimensional image which is the length of your input bins and one pixel in height. How do I read this data? I basically want to figure out the color value with the highest frequency, so I am expecting the data to contain some kind of frequency count for each color. It's not clear to me how this one-dimensional image represents that, because the documentation does not really explain the data I can expect inside this 1-D image. And if it's truly a histogram, why does it not return a data structure representing that, like a dictionary?
Second, the API asks for a number of bins. What should that input be? If I wanted an exact analysis, would the bin count be the size of my image's color space? What does making the bin value smaller do? I would imagine it approximates nearby colors, via Euclidean distance, to the nearest bin. If that is the case, it will not yield exact histogram results, so why would anyone want to do that?
Any input on the above two questions from an API perspective would help me greatly

Ian Ollmann's idea of calculating the histogram just for the hue is really neat and can be done with a simple color kernel. This kernel returns a monochrome image of just the hue of an image (based on this original work)
let shaderString = "kernel vec4 kernelFunc(__sample c)" +
"{" +
" vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);" +
" vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));" +
" vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));" +
" float d = q.x - min(q.w, q.y);" +
" float e = 1.0e-10;" +
" vec3 hsv = vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);" +
" return vec4(vec3(hsv.r), 1.0);" +
"}"
let colorKernel = CIColorKernel(string: shaderString)
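To actually run the kernel and feed its output to CIAreaHistogram, something along these lines should work (a sketch in the same Swift 2-era style; inputImage, the bin count and the scale are illustrative):
// Sketch: produce the hue image, then histogram it.
let hueImage = colorKernel!.applyWithExtent(inputImage.extent, arguments: [inputImage])!
let histogramImage = CIFilter(name: "CIAreaHistogram", withInputParameters: [
    kCIInputImageKey: hueImage,
    kCIInputExtentKey: CIVector(CGRect: hueImage.extent),
    "inputCount": 256,  // one bin per hue step
    "inputScale": 1000])!.outputImage!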
If I get the hue of an image of a blue sky, the resulting histogram looks like this:
...while a warm sunset gives a histogram like this:
So, that looks like a good technique to get the dominant hue of an image.
Simon

CIAreaHistogram returns an image where the red, green, blue and alpha values of each of the pixels indicate the frequency of that tone in the image. You can render that image to an array of UInt8 to look at the histogram data. There's also an undocumented outputData value:
let filter = CIFilter(
name: "CIAreaHistogram",
withInputParameters: [kCIInputImageKey: image])!
let histogramData = filter.valueForKey("outputData")
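For the render-to-bytes route, here is a minimal sketch (same Swift 2-era style; it assumes the filter was also given an inputExtent and an inputCount of 256, and note that bin values are scaled by inputScale and clamped on the way to 8 bits):
// Sketch: read the 256x1 histogram image back as RGBA bytes.
let context = CIContext()
var bytes = [UInt8](count: 256 * 4, repeatedValue: 0) // RGBA per bin
context.render(filter.outputImage!,
    toBitmap: &bytes,
    rowBytes: 256 * 4,
    bounds: CGRect(x: 0, y: 0, width: 256, height: 1),
    format: kCIFormatRGBA8,
    colorSpace: nil)
// bytes[i * 4 + 0...3] now hold the frequency values for bin i.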
However, I've found vImage to be a better framework for working with histograms. First off, you need to create a vImage image format:
var format = vImage_CGImageFormat(
bitsPerComponent: 8,
bitsPerPixel: 32,
colorSpace: nil,
bitmapInfo: CGBitmapInfo(
rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue),
version: 0,
decode: nil,
renderingIntent: .RenderingIntentDefault)
vImage works with image buffers that can be created from CGImage rather than CIImage instances (you can create one with the createCGImage method of CIContext). vImageBuffer_InitWithCGImage will create an image buffer:
var inBuffer: vImage_Buffer = vImage_Buffer()
vImageBuffer_InitWithCGImage(
&inBuffer,
&format,
nil,
imageRef,
UInt32(kvImageNoFlags))
Now create arrays of UInt which will hold the histogram values for the four channels:
let red = [UInt](count: 256, repeatedValue: 0)
let green = [UInt](count: 256, repeatedValue: 0)
let blue = [UInt](count: 256, repeatedValue: 0)
let alpha = [UInt](count: 256, repeatedValue: 0)
let redPtr = UnsafeMutablePointer<vImagePixelCount>(red)
let greenPtr = UnsafeMutablePointer<vImagePixelCount>(green)
let bluePtr = UnsafeMutablePointer<vImagePixelCount>(blue)
let alphaPtr = UnsafeMutablePointer<vImagePixelCount>(alpha)
let rgba = [redPtr, greenPtr, bluePtr, alphaPtr]
let histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>>(rgba)
The final step is to perform the calculation, which will populate the four arrays, and free the buffer's data:
vImageHistogramCalculation_ARGB8888(&inBuffer, histogram, UInt32(kvImageNoFlags))
free(inBuffer.data)
A quick check of the alpha array of an opaque image should yield 255 zeros with the final value corresponding to the number of pixels in the image:
print(alpha) // [0, 0, 0, 0, 0 ... 409600]
A histogram won't give you the dominant color from a visual perspective: an image which is half yellow {1,1,0} and half black {0,0,0} will give the same results as an image which is half red {1,0,0} and half green {0,1,0}.
Hope this helps,
Simon

One problem with the histogram approach is that you lose the correlation between the color channels. That is, half your image could be magenta and half yellow. You will find a red histogram that is all in the 1.0 bin, but the blue and green histograms would be evenly split between 0.0 and 1.0 with nothing in between. Even though you can be quite sure that red is bright, you won't be able to say much about what the blue and green components should be for the "predominant color".
You could use a 3D histogram with 2**(8+8+8) bins, but this is quite large and you will find the signal is quite sparse. By happenstance, three pixels might land in one bin while no two pixels match anywhere else, even though many users could tell you that there is a predominant color and it has nothing to do with that pixel.
You could make the 3D histogram a lot lower resolution and have (for example) just 16 bins per color channel. It is much more likely that bins will have a statistically meaningful population count this way. This should give you a starting point to find a mean for a local population of pixels in that bin. If each bin had a count and an {R,G,B} sum, then you could quickly find the mean color for pixels in that bin once you had identified the most popular bins. This method is still subject to some influence from the histogram grid: you will be more likely to identify colors in the middle of a grid cell than at the edges, and populations may span multiple grid cells. Something like k-means clustering would be another option.
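As a rough sketch of that bookkeeping (modern Swift; pixels is assumed to be a flat RGBA byte array you have already extracted from the image):
// Sketch: coarse 16x16x16 RGB histogram with per-bin running color sums.
struct Bin { var count = 0; var r = 0; var g = 0; var b = 0 }
var bins = [Bin](repeating: Bin(), count: 16 * 16 * 16)

for i in stride(from: 0, to: pixels.count, by: 4) {
    let r = Int(pixels[i]), g = Int(pixels[i + 1]), b = Int(pixels[i + 2])
    let index = (r >> 4) << 8 | (g >> 4) << 4 | (b >> 4) // top 4 bits per channel
    bins[index].count += 1
    bins[index].r += r; bins[index].g += g; bins[index].b += b
}

// Dominant color = mean color of the most populated bin.
if let top = bins.max(by: { $0.count < $1.count }), top.count > 0 {
    print(top.r / top.count, top.g / top.count, top.b / top.count)
}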
If you just want predominant hue, then conversion to a color space like HSV followed by a histogram of hue would work.
I'm not aware of any filters in vImage, Core Image or MetalPerformanceShaders to do these things for you. You can certainly write code, on either the CPU or in Metal, to do it without a lot of trouble.

Related

Performant Multi ROI Image Color Average on iOS

Core Image's CIAreaAverage filter can easily be used to perform whole-CIImage RGB color averaging. For example:
let options = [CIContextOption.workingColorSpace: kCFNull as Any]
let context = CIContext(options: options)
let parameters = [
kCIInputImageKey: inputImage, // assume this exists
kCIInputExtentKey: CIVector(cgRect: inputImage.extent)
]
let filter = CIFilter(name: "CIAreaAverage", parameters: parameters)
var bitmap = [Float32](repeating: 0, count: 4)
context.render(filter.outputImage!, toBitmap: &bitmap, rowBytes: 16, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBAf, colorSpace: nil)
let rAverage = bitmap[0]
let gAverage = bitmap[1]
let bAverage = bitmap[2] // blue is index 2 in RGBA, not 3
...
modified from https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage
However, suppose one does not want whole-CIImage color averaging. Breaking the image up into regions of interest (ROIs) by varying the input extent (see kCIInputExtentKey above) and performing a CIAreaAverage filtering operation per ROI introduces many sequential steps, decreasing performance drastically. The filters cannot be chained, of course, since the output is a 4-component color average (see bitmap above). Another way of describing this might be "average downsampling".
For example, let's say you have a 1080p image (1920x1080), and you want a 10x10 color average matrix from this. You would be performing 100 CIAreaAverage operations for 100 different input extents--each corresponding to a 192x108 pixel ROI for which you wish to have R, G, B, and perhaps A, average. But this is now 100 sequential CIAreaAverage operations--not performant.
Perhaps the next thing one might think to do is some sort of parallel for loop, e.g., a DispatchQueue.concurrentPerform(iterations:execute:) per ROI. However, I am not seeing a performance gain. (Note that CIContext is thread-safe; CIFilter is not.)
https://www.advancedswift.com/parallel-for-loops-in-swift/#parallel-for-loops-using-dispatchqueue
https://developer.apple.com/documentation/coreimage/cicontext
Logically the next idea might be to create a custom CIFilter--let's call it CIMultiAreaAverage. However, it's not obvious how to create a CIKernel that can examine a source pixel's location and map it to a particular destination pixel. You would need some buffer of information, such as a running ROI color sum, or to treat the destination pixel as a buffer. The simplest approach might be to write per-channel ROI sums into a destination with an integer type, and then, once rendered to a bitmap, turn each sum into an average by casting to float and dividing by the number of pixels in the ROI.
https://www.raywenderlich.com/25658084-core-image-tutorial-for-ios-custom-filters
https://developer.apple.com/metal/MetalCIKLReference6.pdf
https://developer.apple.com/documentation/coreimage/cicolorkernel
I wish I had access to the source code for CIAreaAverage. To encapsulate the full functionality in the CIFilter you might have to go further and write what's really a custom Metal shader. So perhaps someone with some expertise can assist with how to accomplish this with a Metal shader.
https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf
Another option might be to use vDSP/vImage to perform these ROI operations. It seems easy to create the necessary vImage_Buffers per ROI, but I'd want to make sure that's an in-place operation (probably) for performance. Then I'm not sure which vDSP mean function to apply to the vImage_Buffer, or how, treating it like an array, if that's even possible. It sounds like this might be the most performant option.
https://stackoverflow.com/a/36805765/6528990
https://developer.apple.com/documentation/accelerate/applying_vimage_operations_to_regions_of_interest
What does SO think?
Here is what Apple is doing in CIAreaAverage:
I don't know why they follow two different paths, but this is what I think is happening:
The path on the left is a stepwise reduction of the input pixels into a smaller output. The kernel _areaAvg8 reduces a group of (up to) 8x8 pixels into one output pixel by calculating their average value. _areaAvg2 does the same for 2x2 pixels and _horizAvg2 for 2x1. So in multiple steps, the image is reduced, each step reducing the values of the previous step further. Until the last step produces one final pixel that contains the average of all pixels of the input.
For the right side, I assume that CIAreaAverageProcessor is a CIImageProcessorKernel that uses Metal Performance Shaders, specifically MPSImageReduceRowMean and MPSImageReduceColumnMean, to do the same. Why they have those two paths with the switch on top I do not know.
For your use case, I suggest you implement something similar to the left path, but stop somewhere in the middle, depending on the size of your desired output.
To improve performance, you can make use of the bilinear sampling that is provided by the graphics hardware basically for free: When you sample the input image at a coordinate in the middle of 4 pixels, you already get an average of these 4 color values. That means for an 8x8 reduction, you only need 4 x 4 = 16 sample operations (instead of 64). This kernel could look something like this:
extern "C" float4 areaAvg8(coreimage::sampler src, coreimage::destination dest) {
float2 center = dest.coord() * 8.0; // assuming that src is 8x larger than dest
float4 sum = src.sample(src.transform(center + float2(-3.0, -3.0)))
+ src.sample(src.transform(center + float2(-1.0, -3.0)))
+ src.sample(src.transform(center + float2( 1.0, -3.0)))
+ src.sample(src.transform(center + float2( 3.0, -3.0)))
+ src.sample(src.transform(center + float2(-3.0, -1.0)))
+ src.sample(src.transform(center + float2(-1.0, -1.0)))
+ src.sample(src.transform(center + float2( 1.0, -1.0)))
+ src.sample(src.transform(center + float2( 3.0, -1.0)))
+ src.sample(src.transform(center + float2(-3.0, 1.0)))
+ src.sample(src.transform(center + float2(-1.0, 1.0)))
+ src.sample(src.transform(center + float2( 1.0, 1.0)))
+ src.sample(src.transform(center + float2( 3.0, 1.0)))
+ src.sample(src.transform(center + float2(-3.0, 3.0)))
+ src.sample(src.transform(center + float2(-1.0, 3.0)))
+ src.sample(src.transform(center + float2( 1.0, 3.0)))
+ src.sample(src.transform(center + float2( 3.0, 3.0)));
return sum / 16.0;
}
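For completeness, here is a sketch of how a kernel like that might be loaded and applied from Swift (this assumes the kernel is compiled into default.metallib with the Core Image Metal build flags; the names and the single 8x step are illustrative):
// Sketch: load the compiled Metal CIKernel and apply one 8x reduction step.
let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
let data = try! Data(contentsOf: url)
let kernel = try! CIKernel(functionName: "areaAvg8", fromMetalLibraryData: data)

func reduce8x(_ image: CIImage) -> CIImage? {
    let outExtent = CGRect(x: 0, y: 0,
                           width: image.extent.width / 8,
                           height: image.extent.height / 8)
    return kernel.apply(to: outExtent,
                        roiCallback: { _, rect in
                            // Each output pixel samples an 8x8 neighborhood of the source.
                            rect.applying(CGAffineTransform(scaleX: 8, y: 8))
                                .insetBy(dx: -4, dy: -4)
                        },
                        arguments: [image])
}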

How do you calculate the average gradient direction and average gradient strength/magnitude

In OpenCV how do you calculate the average gradient strength in a Mat and the average gradient direction?
I have sourced the below methods by googling, but I want to confirm I am actually doing this correctly before moving on to the next step.
Is this correct?
Mat src = imread("foo.png", IMREAD_GRAYSCALE); // read image as grayscale, single channel
// Calculate the mean intensity and the std deviation
// Any errors here or am I doing this correctly?
Scalar sMean, sStdDev;
meanStdDev(src, sMean, sStdDev);
double mean = sMean[0];
double stddev = sStdDev[0];
// Calculate the average gradient magnitude/strength across the image
// Any errors here or am I doing this correctly?
Mat dX, dY, mag;
Sobel(src, dX, CV_32F, 1, 0, 1);
Sobel(src, dY, CV_32F, 0, 1, 1);
magnitude(dX, dY, mag); // Mat renamed so it doesn't shadow cv::magnitude
Scalar sMMean, sMStdDev;
meanStdDev(mag, sMMean, sMStdDev);
double magnitudeMean = sMMean[0];
double magnitudeStdDev = sMStdDev[0];
// Calculate the average gradient direction across the image
// Any errors here or am I doing this correctly?
Scalar avgHorizDir = mean(dX);
Scalar avgVertDir = mean(dY);
double avgDir = atan2(-avgVertDir[0], avgHorizDir[0]);
float blurriness = cv::videostab::calcBlurriness(src); // low values = sharper. High values = blurry
Technically those are the correct ways of obtaining the two averages.
The way you compute mean direction uses weighted directional statistics, meaning that pixels without a strong gradient have less influence on the average.
However, for most images this average direction is not very meaningful, as edges exist in all directions and cancel out.
If your image is of a single edge, then this will work great.
If your image has lines in it, containing edges in opposite directions, this will not work. In this case, you want to average the double angle (averaging orientations rather than directions). The obvious way of doing this is to compute the direction per pixel as an angle, double it, then use directional statistics to average (i.e. convert back to vectors and average those). Doubling the angle causes opposite directions to be mapped to the same value, so averaging no longer cancels them out, as sketched below.
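A sketch of that double-angle averaging in OpenCV, reusing dX and dY from the question's Sobel calls (drop the magnitude weight w for an unweighted average):
// Build per-pixel doubled-angle vectors, weighted by gradient magnitude.
Mat cos2 = Mat::zeros(dX.size(), CV_32F);
Mat sin2 = Mat::zeros(dX.size(), CV_32F);
for (int y = 0; y < dX.rows; y++) {
    for (int x = 0; x < dX.cols; x++) {
        float gx = dX.at<float>(y, x), gy = dY.at<float>(y, x);
        float a2 = 2.0f * std::atan2(gy, gx); // doubling folds opposite directions together
        float w = std::sqrt(gx * gx + gy * gy);
        cos2.at<float>(y, x) = w * std::cos(a2);
        sin2.at<float>(y, x) = w * std::sin(a2);
    }
}
// Average the doubled-angle vectors, then halve the angle to undo the doubling.
double avgOrientation = 0.5 * std::atan2(mean(sin2)[0], mean(cos2)[0]);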
Another simple way to average orientations is to take the average of the tensor field obtained by the outer product of the gradient field with itself, and determine the direction of the eigenvector corresponding to the largest eigenvalue. The tensor field is obtained as follows:
Mat Sxx = dX.mul(dX); // mul is element-wise; operator* would be matrix multiplication
Mat Syy = dY.mul(dY);
Mat Sxy = dX.mul(dY);
This should then be averaged:
Scalar mSxx = mean(Sxx);
Scalar mSyy = mean(Syy);
Scalar mSxy = mean(Sxy);
These values form a 2x2 real-valued symmetric matrix:
| mSxx mSxy |
| mSxy mSyy |
It is relatively straightforward to determine its eigendecomposition, and it can be done analytically. I don't have the equations on hand right now, so I'll leave the derivation as an exercise to the reader. :)
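For reference, the orientation of the leading eigenvector of a 2x2 symmetric matrix has a standard closed form; in code, using the averaged Scalar values above:
// Orientation of the eigenvector for the largest eigenvalue of
// | mSxx mSxy |
// | mSxy mSyy |
double theta = 0.5 * std::atan2(2.0 * mSxy[0], mSxx[0] - mSyy[0]);
// The eigenvalues themselves, if you also want a measure of anisotropy:
double tr = mSxx[0] + mSyy[0];
double det = mSxx[0] * mSyy[0] - mSxy[0] * mSxy[0];
double lambda1 = 0.5 * (tr + std::sqrt(tr * tr - 4.0 * det)); // largest
double lambda2 = 0.5 * (tr - std::sqrt(tr * tr - 4.0 * det)); // smallest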

how to superimpose two images?

I have a visualization output of a Gabor filter with 12 different orientations. I want to superimpose the visualization image on my image of a retina for vessel extraction. How do I do it? I have tried the method below. Is there any other way to superimpose images in MATLAB?
Here is my code:
I = getimage();
I=I(:,:,2);
lambda = 8;
theta = 0;
psi = [0 pi/2];
gamma = 0.5;
bw = 1;
N = 2;
img_in = im2double(I);
%img_in(:,:,2:3) = []; % discard redundant channels, it's gray anyway
img_out = zeros(size(img_in,1), size(img_in,2), N);
for n = 1:N
    gb = gabor_fn(bw,gamma,psi(1),lambda,theta)...
        + 1i * gabor_fn(bw,gamma,psi(2),lambda,theta);
    % gb is the n-th gabor filter
    img_out(:,:,n) = imfilter(img_in, gb, 'symmetric');
    % filter output to the n-th channel
    %theta = theta + 2*pi/N
    %figure;
    %imshow(img_out(:,:,n));
    imshow(img_in); hold on;
    h = imagesc(img_out(:,:,n)); % here I am getting an error saying CDATA must be size [M N]
    set(h, 'AlphaData', .5); % .5 transparency
    figure;
    imshow(h);
    theta = 15 * n; % next orientation
end
This is my original image.
This is my visualization image obtained from the Gabor filter for one orientation.
This is the kind of image I need to get: the visualization image superimposed on my original image.
With the information you have provided, my understanding is you want the third/final image to be an overlay on top of the first/initial image. I do things like this when using segmentation to detect hemorrhaging in MRI images of the brain.
First, let's set up some definitions:
I_src = source/original image
I_out = output/final image
Now, make a copy of I_src and make it a color (RGB) image rather than grayscale, since the loop below tints a color channel:
I_hybrid = gray2rgb(I_src); % gray2rgb as in the reference below; repmat(I_src,[1 1 3]) also works
Let's assume both I_src and I_out have the same visual dimensions (i.e. width and height), and that I_out is strictly black-and-white (i.e. monochrome). Now, we can use I_out as a mask template for alpha channel adjustments in the resulting image. This is where it gets fun.
BLACK = 0;
WHITE = 1;
[height, width] = size(I_out);
for i = 1:height
    for j = 1:width
        if I_out(i,j) == WHITE
            % Add a red tint where the mask is white (channel 1 is red).
            I_hybrid(i,j,1) = I_hybrid(i,j,1) + 0.25;
        end
    end
end
This will give you your original image with the blood vessels in the eye slightly brighter and tinted red. You now have a beautiful composite of your original image with the desired features highlighted, but not overwritten (i.e. you can undo the highlighting by subtracting the color vector again).
I will include an example of what the output would look like, but it's noisy because I had to create it in GIMP as I don't have Matlab installed right now. The results will be similar, but yours would be much cleaner and prettier.
Please let me know how this goes.
References
"Converting Images from Grayscale to Color" http://blogs.mathworks.com/pick/2012/11/25/converting-images-from-grayscale-to-color/

color a grayscale image with opencv

I'm using OpenNI for a project with a Kinect sensor. I'd like to color the user pixels given by the depth map. Right now I have pixels that go from white to black, but I want them from red to black. I've tried alpha blending, but the result is only pixels from pink to black, because adding (with addWeighted) red + white gives pink.
This is my current code:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC3, Scalar(255,0,0));
Mat red_body; // = Mat::zeros(240,320, CV_8UC3);
red.copyTo(red_body, device.getUserMask());
addWeighted(red_body, 0.8, layers, 0.5, 0.0, layers);
where device.getDepth() returns a cv::Mat with depth map and device.getUserMask() returns a cv::Mat with user pixels (only white pixels)
some advice?
EDIT:
One more thing: thanks to Sammy's answer I've done it. But actually I don't have values exactly from 0 to 255, but from (for example) 123 to 220.
I'm going to find the minimum and maximum via a simple for loop (is there a better way?), and how can I map my values from min-max to 0-255?
First, OpenCV's default color format is BGR not RGB. So, your code for creating the red image should be
Mat red = Mat(240,320, CV_8UC3, Scalar(0,0,255));
For a red-to-black color map, you can use element-wise multiplication instead of alpha blending:
Mat out = red_body.mul(layers, 1.0/255);
You can find the min and max values of a matrix M using
double minVal, maxVal;
minMaxLoc(M, &minVal, &maxVal, 0, 0);
You can then subtract the minimum value and scale with a factor:
double factor = 255.0/(maxVal - minVal);
M = factor*(M - minVal);
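For what it's worth, OpenCV can also do this min-max rescaling in a single call, which should match the manual version above:
// Rescale M so its minimum maps to 0 and its maximum to 255.
normalize(M, M, 0, 255, NORM_MINMAX);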
Kinda clumsy and slow, but maybe split layers, copy red_body (make it a one channel Mat, not 3) to the red channel, merge them back into layers?
Get the same effect, but much faster (in place) with reshape:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC1, Scalar(255)); // One channel
Mat red_body;
red.copyTo(red_body, device.getUserMask());
Mat flatLayer = layers.reshape(1,240*320); // presumed dimensions of layers
red_body.reshape(0,240*320).copyTo(flatLayer.col(2)); // in BGR order, channel 2 is red
// layers now has the red from red_body

Filling holes inside a binary object

I have a problem with filling white holes inside black coins so that I end up with a clean 0-255 binary image with solidly filled black coins. I have used a median filter to accomplish it, but in that case the connection bridges between coins grow and it becomes impossible to recognize them after several rounds of erosion... So I need a simple floodFill-like method in OpenCV.
Here is my image with holes:
EDIT: a floodFill-like function must fill holes in big components without prompting for X, Y seed coordinates...
EDIT: I tried to use the cvDrawContours function, but it doesn't fill contours inside bigger ones.
Here is my code:
CvMemStorage mem = cvCreateMemStorage(0);
CvSeq contours = new CvSeq();
CvSeq ptr = new CvSeq();
int sizeofCvContour = Loader.sizeof(CvContour.class);
cvThreshold(gray, gray, 150, 255, CV_THRESH_BINARY_INV);
int numOfContours = cvFindContours(gray, mem, contours, sizeofCvContour, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
System.out.println("The num of contours: "+numOfContours); //prints 87, ok
Random rand = new Random();
for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
    Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
    CvScalar color = CV_RGB(randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
    cvDrawContours(gray, ptr, color, color, -1, CV_FILLED, 8);
}
CanvasFrame canvas6 = new CanvasFrame("drawContours");
canvas6.showImage(gray);
Result: (you can see black holes inside each coin)
There are two methods to do this:
1) Contour Filling:
First, invert the image, find the contours and fill them, then invert back:
des = cv2.bitwise_not(gray)
contour,hier = cv2.findContours(des,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    cv2.drawContours(des,[cnt],0,255,-1)
gray = cv2.bitwise_not(des)
Resulting image:
2) Image Opening:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
res = cv2.morphologyEx(gray,cv2.MORPH_OPEN,kernel)
The resulting image is as follows:
You can see, there is not much difference in both cases.
NB: gray is the grayscale image; all code is in OpenCV-Python.
Reference: OpenCV Morphological Transformations
A simple dilate and erode would close the gaps fairly well, I imagine. I think maybe this is what you're looking for.
A more robust solution would be to do an edge detect on the whole image, and then a hough transform for circles. A quick google shows there are code samples available in various languages for size invariant detection of circles using a hough transform, so hopefully that will give you something to go on.
The benefit of using the hough transform is that the algorithm will actually give you an estimate of the size and location of every circle, so you can rebuild an ideal image based on that model. It should also be very robust to overlap, especially considering the quality of the input image here (i.e. less worry about false positives, so can lower the threshold for results).
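As a rough sketch of that approach in OpenCV-Python (the Hough parameters are illustrative and would need tuning for the actual coin image):
import cv2
import numpy as np

# gray: the grayscale coin image
blurred = cv2.GaussianBlur(gray, (9, 9), 2)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=20, maxRadius=80)
# Rebuild an "ideal" binary image from the detected circle model.
ideal = np.zeros_like(gray)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(ideal, (x, y), r, 255, -1)  # one filled disc per detection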
You might be looking for the Fillhole transformation, an application of morphological image reconstruction.
This transformation will fill the holes in your coins, though at the cost of also filling all holes between groups of adjacent coins. The Hough-space or opening-based solutions suggested by the other posters will probably give you better high-level recognition results.
In case someone is looking for the C++ implementation:
std::vector<std::vector<cv::Point> > contours_vector;
cv::findContours(input_image, contours_vector, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
cv::Mat contourImage(input_image.size(), CV_8UC1, cv::Scalar(0));
for (int contour_index = 0; contour_index < (int)contours_vector.size(); contour_index++) {
    cv::drawContours(contourImage, contours_vector, contour_index, cv::Scalar(255), -1);
}
cv::imshow("con", contourImage);
cv::waitKey(0);
Try using the cvFindContours() function. You can use it to find connected components. With the right parameters this function returns a list with the contours of each connected component.
Find the contours which represent holes. Then use cvDrawContours() to fill the selected contours with the foreground color, thereby closing the holes.
I think if the objects are touching or crowded, there will be some problems using contours or mathematical morphology opening.
Instead, the following simple solution was found and tested. It works very well, and not only for these images but for any other images as well.
Here are the steps (optimized), as described in http://blogs.mathworks.com/steve/2008/08/05/filling-small-holes/:
let I: the input image
1. filled_I = floodfill(I)                 // fill every hole in the image
2. inverted_I = invert(I)
3. holes_I = filled_I AND inverted_I       // finds all holes
4. cc_list = connectedcomponents(holes_I)  // list of all connected components in holes_I
5. holes_I = remove(cc_list, holes_I, smallholes_threshold_size)  // remove all holes from holes_I having size > smallholes_threshold_size
6. out_I = I OR holes_I                    // fill only the small holes
In short, the algorithm is just to find all holes, remove the big ones then write the small ones only on the original image.
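A sketch of those steps in OpenCV-Python, assuming a binary image I with white foreground on black background and a background pixel at the (0, 0) corner (the area threshold is illustrative):
import cv2
import numpy as np

def fill_small_holes(I, max_hole_area=100):
    # Steps 1-3: flood-fill from the border, invert, AND to isolate the holes.
    flood = I.copy()
    h, w = I.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a 2-pixel-padded mask
    cv2.floodFill(flood, mask, (0, 0), 255)
    filled = I | cv2.bitwise_not(flood)   # image with every hole filled
    holes = filled & cv2.bitwise_not(I)   # just the holes
    # Steps 4-5: keep only the holes smaller than the threshold.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(holes)
    small = np.zeros_like(holes)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] <= max_hole_area:
            small[labels == i] = 255
    # Step 6: write only the small holes back onto the original.
    return I | small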
I've been looking around the internet to find a proper imfill function (like the one in Matlab) but working in C with OpenCV. After some research, I finally came up with a solution:
IplImage* imfill(IplImage* src)
{
CvScalar white = CV_RGB( 255, 255, 255 );
IplImage* dst = cvCreateImage( cvGetSize(src), 8, 3);
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contour = 0;
cvFindContours(src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
cvZero( dst );
for( ; contour != 0; contour = contour->h_next )
{
cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
}
IplImage* bin_imgFilled = cvCreateImage(cvGetSize(src), 8, 1);
cvInRangeS(dst, white, white, bin_imgFilled);
return bin_imgFilled;
}
For this: Original Binary Image
Result is: Final Binary Image
The trick is in the parameters setting of the cvDrawContours function:
cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
dst = destination image
contour = pointer to the first contour
white = color used to fill the contour
0 = Maximal level for drawn contours. If 0, only contour is drawn
CV_FILLED = Thickness of lines the contours are drawn with. If it is negative (For example, =CV_FILLED), the contour interiors are drawn.
More info in the OpenCV documentation.
There is probably a way to get "dst" directly as a binary image but I couldn't find how to use the cvDrawContours function with binary values.
