Area calculation in OpenCV/JavaCV? - image-processing

Can someone explain how to identify the areas shown in red and blue in the following image? I tried the cvFindContours() method, but it didn't give the expected result.
Input image
Expected result
I would like to know whether there are other methods to identify or calculate the area of this kind of contour. Please be kind enough to share a simple code example.

The function floodFill also returns the area of the filled region as its return value. So one thing you can do is raster-scan the image: each time you reach an untouched pixel, flood-fill from it with some colour (e.g. black) and store the returned area along with that pixel's coordinates; continue until the whole image is covered.
In the end you will have a set of areas, each with the coordinates of one seed pixel in its region.
Should you need to recover a specific region later, you can call floodFill again from its stored seed pixel, colouring that region with a distinctive colour.
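A minimal sketch of that raster scan in OpenCV-Python (the file name and the black fill value are assumptions for illustration):

import cv2
import numpy as np

img = cv2.imread("regions.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill requires a mask 2 px larger

FILLED = 0    # colour used to mark visited pixels; assumes no region is already black
regions = []  # (area, seed_x, seed_y) for every region found

for y in range(h):
    for x in range(w):
        if img[y, x] != FILLED:  # untouched pixel: fill and record its region
            area, _, _, _ = cv2.floodFill(img, mask, (x, y), FILLED)
            regions.append((area, x, y))

for area, x, y in regions:
    print(f"region seeded at ({x}, {y}) has area {area} px")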

Related

find rectangle coordinates in a given image

I'm trying to blindly detect signals in spectra.
One approach that came to mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all the horizontal rectangles in an image? (The heights of the rectangles don't matter to me.)
An example image is attached. (Note that I know all the rectangles are horizontal.)
I would appreciate any other suggestions for this purpose.
For example, I want the algorithm to give me the 9 centers and their coordinates for the image above.
Since the rectangles are axis-aligned, you can do this quite easily and efficiently (this would not be the case with unaligned rectangles, since they would not be cleanly separable). The idea is first to compute the average color of each row and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance, and apply a threshold. You can remove some artefacts with a median or blur filter beforehand.
Finally, scan the resulting 1D arrays of binary values to locate where each rectangle starts and stops. The center of each rectangle is ((x_start + x_end) / 2, (y_start + y_end) / 2).
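A sketch of that pipeline in OpenCV-Python (the file name, threshold value, and background estimate are assumptions):

import cv2
import numpy as np

img = cv2.imread("waterfall.png").astype(np.float32)  # BGR waterfall image
background = np.median(img.reshape(-1, 3), axis=0)    # dominant (blue) colour

def spans(profile, thresh=30.0):
    # distance of each row/column average from the background colour
    lum = np.abs(profile - background).sum(axis=1)
    lum = np.convolve(lum, np.ones(3) / 3, mode="same")     # small blur vs. artefacts
    on = np.concatenate(([False], lum > thresh, [False]))   # pad so every run closes
    d = np.diff(on.astype(np.int8))
    return list(zip(np.flatnonzero(d == 1), np.flatnonzero(d == -1)))

cols = spans(img.mean(axis=0))  # x-extents of the rectangles
rows = spans(img.mean(axis=1))  # y-extents of the horizontal bands

for x0, x1 in cols:
    print("center x:", (x0 + x1) / 2, "width:", x1 - x0)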

How can I estimate the pixel coordinates?

I have a project in which I need to follow points like those in the picture (which I made as a representation). I get the coordinates of the red dots as pixel coordinates once per second. Is there a way I can predict the green dots once I stop taking measurements? When I use a 2D Kalman filter (with states x, y, x', y') it keeps predicting in the direction of the blue arrow. I want to make correct predictions at the green dots, not in that direction. How can I do that?
Thanks.
Image
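For concreteness, my constant-velocity setup looks roughly like this (a sketch; the measurement list and noise values are placeholders):

import cv2
import numpy as np

dt = 1.0                     # one measurement per second
kf = cv2.KalmanFilter(4, 2)  # states (x, y, x', y'), measurements (x, y)
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1,  0],
                                [0, 0, 0,  1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for mx, my in measurements:  # the red dots, once per second (placeholder list)
    kf.correct(np.array([[mx], [my]], np.float32))
    kf.predict()

# with no new measurements, repeated predict() calls extrapolate in a
# straight line (the blue arrow), because the velocity is held constant
future = [kf.predict().copy() for _ in range(5)]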

Crop the region of interest with few points available

I have used convex hull and convexity defects and found the points in the hand as shown below.
With the above point information available, how can I crop the region marked in red (the knuckle), as shown in the image below?
My intention is to detect the knuckles in the hand.
Note: the green region is drawn using "Draw contours". Is it possible to use this region to crop the red-marked area (the knuckle)? How can I crop these regions?
Update (26/2/2014):
I have found the contour points shown below. With this information in hand, is it possible to find the knuckle region? Is there any way to find it using these points?
Since you already know the position of the red region, all you want is to crop it?
It's very easy: you just need to set an ROI (region of interest) and copy that region to another image, like this (in pseudo-code, since I don't have an OpenCV project up and running):
img1.ROI = redRectangle   // set the ROI to the red rectangle you already know
img1.copyTo(img2)         // copies only the ROI pixels into img2
img1.ROI = null           // reset the ROI on the source image
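In OpenCV-Python the same crop is a NumPy slice (the rectangle values below are assumptions standing in for the red rectangle you already know):

import cv2

img1 = cv2.imread("hand.png")
x, y, w, h = 120, 80, 60, 40          # red rectangle: top-left corner and size
img2 = img1[y:y + h, x:x + w].copy()  # the slice is the ROI; copy() detaches it
cv2.imwrite("knuckle.png", img2)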
If your question is how to detect the red section in the first place, then, as with any image-recognition problem, you will have to experiment: there are tons of ways to do it, and nobody here can pick one for you.
Hope it helps!
If your goal is to detect those red areas, you can use the following simple idea (a rough sketch in code follows the steps).
Get the edge image and remove the edges outside the green boundary.
Apply a horizontal projection histogram to separate the strips.
In each strip, take a vertical projection histogram and locate the bins whose values lie within a neighbourhood of the peak (let's call these peak bins).
The longest contiguous sequence of peak bins should give the answer.
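A rough sketch of those steps in OpenCV-Python (the file name, thresholds, and peak-neighbourhood factor are assumptions; OpenCV 4's findContours signature is assumed, and only one strip is handled for brevity):

import cv2
import numpy as np

img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# 1. keep only edges inside the hand contour (the green boundary)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
inside = np.zeros_like(edges)
cv2.drawContours(inside, [hand], -1, 255, -1)  # filled hand mask
edges &= inside

# 2. horizontal projection: row sums separate the strips
row_hist = edges.sum(axis=1)
strip_rows = np.flatnonzero(row_hist > 0.5 * row_hist.max())

# 3. vertical projection within a strip; bins near the peak are "peak bins"
strip = edges[strip_rows.min():strip_rows.max() + 1]
col_hist = strip.sum(axis=0)
peak_bins = col_hist > 0.8 * col_hist.max()

# 4. the longest contiguous run of peak bins marks the knuckle region
best, run = (0, 0), 0
for i, on in enumerate(peak_bins):
    run = run + 1 if on else 0
    if run > best[1]:
        best = (i - run + 1, run)
print("knuckle columns:", best[0], "to", best[0] + best[1] - 1)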

highlight overexposed areas in a UIImage

I'm making a simple camera app for iOS and Mac. After the user snaps a picture it produces a UIImage on iOS (an NSImage on Mac). I want to be able to highlight the areas of the image that are overexposed; basically, the overexposed areas would blink when the image is displayed.
Does anybody know an algorithm for telling which parts of an image are overexposed? Do I just add up the R, G, B values at each pixel and, if the total is greater than a certain amount, start blinking that pixel, doing this for all pixels?
Or do I have to do some complicated math from outer space to figure it out?
Thanks
Roughly: you will have to traverse the image; depending on your desired accuracy and precision, you can combine skipping and averaging pixels to come up with a smooth region.
The details depend on your color space, but imagine YUV space (convenient because you only need to look at one value, Y, the luminance):
if 240/255 is considered white, then any greater value, say 250/255, would be overexposed; you could mark those pixels and display them in an overlay.
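A minimal sketch of that idea in OpenCV-Python (the 250/255 cutoff and file name are assumptions; on iOS you would run the same per-pixel test over the bitmap data):

import cv2

img = cv2.imread("photo.png")                      # BGR image
y = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)[:, :, 0]  # Y (luminance) channel
overexposed = y >= 250                             # mask of blown-out pixels

# visualise: paint the overexposed pixels red in an overlay copy
overlay = img.copy()
overlay[overexposed] = (0, 0, 255)
cv2.imwrite("overexposed_overlay.png", overlay)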

GPUImage: How to determine average pixel value for given rectangle in processed image

I am using GPUImage to process incoming video, and I would like to then take a given square subregion of the image and determine the average pixel value of the pixels in that region. Can anyone advise me on how to accomplish this? Even information on how to acquire the pixel data at a coordinate (x, y) in the image would be useful.
Apologies if this is a simple question, but I am new to computer vision and the way to do this was not clear to me from the available documentation. Thank you.
First, use a GPUImageCropFilter to extract the rectangular region of your original image. This uses normalized coordinates (0.0 - 1.0), so you'll have to translate from the pixel location and size to these normalized coordinates.
Next, feed the output from the crop filter into a GPUImageAverageColor operation. This will average the pixel color within that region and use the colorAverageProcessingFinishedBlock that you set as a callback. The block will return to you the average red, green, blue, and alpha channel values for the pixels in that region.
For an example of both of these operations in action, see the FilterShowcase example that comes with the framework.
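If you want to sanity-check the result on the CPU, the same computation in OpenCV-Python is a one-liner over a sliced rectangle (the pixel coordinates below are assumptions; this is not the GPUImage API):

import cv2

img = cv2.imread("frame.png")                 # one video frame, BGR
x, y, w, h = 100, 100, 64, 64                 # pixel rect to average over
b, g, r, _ = cv2.mean(img[y:y + h, x:x + w])  # per-channel means (4-tuple)
print("average RGB:", r, g, b)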
