Efficient Region Find Algorithm - graph-algorithm

Looking for a time-efficient algorithm to determine the regions of a grid. Say you have a 16x16 grid where some tiles are white and some are black, and you want to find the disjoint regions of white tiles in that 16x16 grid. It seems like it might be some sort of union-find problem, but I'm not sure; I just want to figure out the most efficient way to do this.
Below is an example of a tile grid with 2 regions in it; the goal would be to assign a region ID to each white tile in that two-dimensional array.
Thanks!
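For what it's worth, here is a minimal sketch of one common way to do this: a breadth-first flood fill that hands out region IDs. This is not from the question itself; the grid representation (a list of lists of booleans) and all names are just illustrative.

```python
from collections import deque

def label_regions(grid):
    """Assign a region ID to every white tile (True) in a 2-D boolean grid.

    Black tiles (False) keep the ID -1. Runs in O(rows * cols) time, which
    is optimal since every tile has to be inspected at least once.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[-1] * cols for _ in range(rows)]
    next_id = 0

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and labels[r][c] == -1:
                # Breadth-first flood fill of one white region.
                queue = deque([(r, c)])
                labels[r][c] = next_id
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and labels[ny][nx] == -1):
                            labels[ny][nx] = next_id
                            queue.append((ny, nx))
                next_id += 1
    return labels, next_id
```

For a 16x16 grid this is effectively instantaneous; union-find mainly pays off when tiles change incrementally and regions have to be merged on the fly rather than recomputed from scratch.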

Related

Creating a porosity map by interpolating between multiple overlapping grid squares

I have a binary image with black particles and white pore space. I am trying to observe the porosity variation across the image. To do this I have been using a square grid and measuring the porosity (the ratio of black to white pixels) in each grid square. I then upload these values to MATLAB as XYZ coordinates, with X and Y being the centre of each grid square and Z being the porosity value, and interpolate between them to produce a porosity map.
With a single coarse grid, however, the porosity map is not very representative of the binary image because the grid cells are large, and I cannot reduce the grid size for theoretical reasons specific to what I am trying to do.
I have found that if I overlay multiple copies of the grid, each shifted incrementally to the right or downwards, I can upload these new XYZ values to MATLAB and interpolate between them, which produces a much better porosity map.
The issue is that I can't find any reference to this method anywhere. Does anyone know whether this technique is used at all, or appears in the literature? Also, would interpolating between overlapping squares cause any issues? The porosity map produced using the overlapping squares does look good.
I have been searching the literature for what feels like an age looking for the answer to this question, so I'd really appreciate any help.
Instead of using a coarse grid and interpolating between the values, I would use a sliding window (the same size as the cells of your coarse grid) and compute the porosity at every position.
The multigrid approach will probably produce artifacts (aliasing issues) and is difficult to interpret.
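A rough sketch of that sliding-window idea, assuming the binary image is already a NumPy array with pore pixels as 1 and particle pixels as 0; the window size and step below are placeholders to tune:

```python
import numpy as np

def porosity_map(binary, window=64, step=1):
    """Porosity (fraction of pore pixels) of a window placed at each position.

    `binary` is a 2-D array with 1 = pore (white) and 0 = particle (black).
    With step=1 this gives a dense map; larger steps reproduce an
    overlapping-grid sampling.
    """
    h, w = binary.shape
    rows = (h - window) // step + 1
    cols = (w - window) // step + 1
    out = np.full((rows, cols), np.nan)
    for i, y in enumerate(range(0, h - window + 1, step)):
        for j, x in enumerate(range(0, w - window + 1, step)):
            out[i, j] = binary[y:y + window, x:x + window].mean()
    return out
```

In MATLAB the same dense map should be obtainable by convolving the binary image with an averaging kernel of the window size (e.g. conv2 with ones(w)/w^2 and the 'valid' option).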

Most prevalent color on a background by changing color space

I have a sheet of paper on another surface. I want to figure out when the paper ends and the surface begins.
I'm using this approach:
1. Convert image to pixel array
2. Pick 3 random 20x20 squares and frequency count the colors
3. The highest frequency is the background
However, the problem is that I get over 100 colors every time I do this on an actual image taken by the camera.
I think I can fix this by reducing the image to a 16-colour palette. Is there a way to make this change on a UIImage or CGImage?
Thanks
Your colours are probably very close together. How about calculating the distance (the cumulative absolute difference between the red, green and blue values) from each sampled colour to a reference colour - just use the first one you sample as the reference. If the distance is large, you have a different colour. If the distance is small, you have the same colour with minor variations due to lighting or other camera artefacts.
Basically this is applying a filter in a very simple manner. It is up to you to decide how big the difference has to be for two colours to be considered different, but you could decide that by looking at the median difference across all the colours and grouping the samples into those over and under it.
You might also get good results from applying a Core Image filter to the sample images, such as CIColorClamp (CISpotColor looks better but is OS X only). If you can find a suitable filter, there is a good chance it will be simpler and faster than doing it yourself.
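A language-agnostic sketch of that distance-and-group idea (written in plain Python rather than iOS code; the threshold of 30 is an arbitrary placeholder you would tune):

```python
def color_distance(c1, c2):
    """Cumulative absolute difference across the R, G and B channels."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def most_frequent_colour(samples, threshold=30):
    """Group sampled (r, g, b) colours: anything within `threshold` of an
    existing group's reference colour counts as the same colour.
    Returns the reference colour of the largest group (likely background)."""
    groups = []  # list of (reference_colour, count)
    for colour in samples:
        for i, (ref, count) in enumerate(groups):
            if color_distance(colour, ref) < threshold:
                groups[i] = (ref, count + 1)
                break
        else:
            groups.append((colour, 1))
    return max(groups, key=lambda g: g[1])[0]
```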

Detecting multiple shapes in a picture and calculate the middle

This question can be answered in any programming language, since I'm mainly after help with the algorithm, but I prefer Delphi. The task is to detect and count multiple shapes (between 1 and N, mostly circles or ellipses) in arbitrary pictures, calculate the middle of each one, and return those middles as picture coordinates. The middle of each shape can have a filling (but it doesn't matter). The shapes are at least 1 pixel apart from each other. None of the shapes blend into another or touch the corner of the picture.
The background of the picture always has the same background colour, which actually doesn't matter, because the borders/frames of the shapes are always a different colour from the background. This makes it easy to detect the shapes. I was thinking about going pixel by pixel, collecting the coordinates, and then drawing an invisible rectangle/square around every shape to calculate its middle. I have also heard about scanline approaches, but I don't think they would be faster in this case. So my question is, how can I calculate:
How many shapes are in the picture.
How can I calculate (more or less) the exact middle of them.
A few pictures to visualize the task:
This is a picture with random shapes (mostly closed circles)
As you can see they are apart from each other just fine.
Then I could easily draw/calculate an imaginary rectangle/square around every shape and calculate its middle like that:
After I have the rectangles/squares, I can easily calculate the middle.
How do I start?
PS: I've drawn some circles in MS Paint. I should add that all shapes are CLOSED, which makes it possible to flood fill EVERY shape in the picture with no problems!
Thank you for your help.
Calculate the MSER (maximally stable extremal regions) of the image. I can't explain the algorithm here; you can refer to the Maximally stable extremal regions article for more information.
That will give you the centroids too.
The algorithm is available as a built-in function in OpenCV and in MATLAB 2012b.
Another method, possibly simpler than the previous one, is to apply a connected-components algorithm and count the number of objects. More information on this can be found in the book by Gonzalez and Woods on Digital Image Processing.
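As a rough illustration of the connected-components route using OpenCV's Python bindings (cv2.connectedComponentsWithStats is a newer cv2 call, not necessarily the interface this answer had in mind; "shapes.png" is a placeholder path):

```python
import cv2

# Load the picture and separate the shapes from the background.
img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)
# THRESH_OTSU picks the threshold automatically; INV assumes shapes are
# darker than the background, flip it if the opposite holds.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# connectedComponentsWithStats returns a label map plus per-component
# statistics and centroids; label 0 is the background.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
print("shapes found:", count - 1)
for cx, cy in centroids[1:]:
    print("centre at", cx, cy)
```

This gives both the number of shapes and the middle of each one in a single pass, without having to construct bounding rectangles yourself.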

Headcount in opencv

I am new to OpenCV and I have to implement a headcount.
my idea is:
Identification of circular objects
We will start with edge detection to find the border line of each shape:
sort through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
record the value of the darkest pixel and the lightest pixel
if (darkest_pixel_value - lightest_pixel_value) > threshold
then rewrite that pixel as 1;
else rewrite that pixel as 0; (a rough sketch of this step appears after these lists)
Now we detect shapes
count the number of continuous edges
a sharp change in line direction signifies a different line
do this by determining the average vector between adjacent pixels
if there is only one line, then it's a circle
by measuring the angles between lines, more information can be deduced (rhomboid, equilateral triangle, etc.)
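For reference, a small sketch of the local-contrast step from the first list above, assuming a grayscale NumPy image; the 3x3 window (which includes the centre pixel) and the threshold value are placeholders:

```python
import numpy as np

def contrast_edges(gray, threshold=40):
    """Mark a pixel 1 if the spread between the darkest and lightest value
    in its 3x3 neighbourhood exceeds `threshold`, else 0."""
    h, w = gray.shape
    edges = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = gray[y - 1:y + 2, x - 1:x + 2]
            # Cast to int so uint8 subtraction cannot wrap around.
            if int(window.max()) - int(window.min()) > threshold:
                edges[y, x] = 1
    return edges
```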
Face detection
This part includes two common approaches, based on features and on colour. The basic idea of the algorithm is to find objects resembling an eye, then, on the basis of geometric face characteristics, try to join two such objects into an eye pair.
Steps:
Unimportant colors are eliminated from the image and insignificant colors are replaced with white color.
The image is then converted to grayscale.
The image is filtered with a median filter (unimportant white regions are blurred).
White regions are segmented using a region-growing algorithm.
A Hough transform is applied to find circles.
For each region, the best possible circle is found.
Using geometric face characteristics, the pair of eyes is found.
Is this the right way to proceed, or is there an easier way?
I want to count (estimate) the number of people in a crowd (meetings, gatherings).
Can you help me with the code, please?
Thank you
You can use OpenCV's built-in face detection. See http://opencv.willowgarage.com/wiki/FaceDetection for detailed instructions.
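A minimal sketch of the built-in cascade detector through the Python bindings; the cascade filename is the one shipped with OpenCV, but its on-disk path and "crowd.jpg" are placeholders, and the detectMultiScale parameters are just typical starting values:

```python
import cv2

# Pre-trained frontal-face cascade shipped with OpenCV; adjust the path
# to wherever your installation keeps its haarcascade files.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

img = cv2.imread("crowd.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces; tune scaleFactor and minNeighbors for your images.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("estimated headcount:", len(faces))
```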
I had a similar project.
You need to get the best image you can, so concentrate on fixing saturation, contrast and intensity.
If you're planning to use colour, for example for skin-colour detection, then you also need to fix the white balance.
Don't think of a headcount; instead, think of a people count.
You need good background segmentation; use a Gaussian mixture model combined with other background-modelling algorithms (a rough sketch follows these notes).
If this is an outdoor application you need shadow detection.
Get foreground blobs and then determine where people are in those blobs.
If you're counting heads, you need to detect the omega shape formed by the head and shoulders.
You will need tracking for occlusions and people crossing.
You can also use human-body classification; OpenCV has haarcascade_fullbody.xml.
These are just some ideas...
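As a rough sketch of the background-segmentation idea with OpenCV's Python bindings (assuming OpenCV 4.x, where findContours returns two values; the video path and the 500-pixel area cut-off are placeholders):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("meeting.mp4")          # placeholder source
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = np.ones((3, 3), np.uint8)             # small structuring element

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask: moving pixels are 255, detected shadows are 127.
    fg = subtractor.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)       # remove speckle

    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 500]  # area cut-off
    print("foreground blobs this frame:", len(blobs))

cap.release()
```

Each foreground blob still has to be split into individual people (omega-shape or full-body detection, tracking across frames), but this gives the candidate regions to work on.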

OpenCV: Detect a black to white gradient in an area

I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (outlined by the scanlines in the example) of around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area so that I can work with it later.
The end result would be to differentiate between the marker and the other rectangles, which are separated by black and white lines.
Is something like that possible? I googled a lot, but I only found edge detectors, and that's not what I want; I really need to detect the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histograms.
You can use cvCalcHist for the task; then you can establish a threshold to determine whether the black/white pixel percentages correspond to those of a gradient. This will not solve the task on its own, but it will help you reduce the complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it is possible to find connected components (using cvFindContours) that separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and a piece of a black zone.
Hope it helps.
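As a rough illustration of the histogram pre-filter, using the Python bindings rather than the old cvCalcHist interface; "marker.jpg", the dark/light cut-offs and the 20% fraction are placeholder values:

```python
import cv2

img = cv2.imread("marker.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path

def is_gradient_candidate(patch, dark=60, light=200, min_frac=0.2):
    """True if at least `min_frac` of the patch is near-black and at least
    `min_frac` is near-white -- a cheap proxy for a black-to-white
    transition inside the patch."""
    hist = cv2.calcHist([patch], [0], None, [256], [0, 256]).ravel()
    n = patch.size
    return (hist[:dark].sum() / n >= min_frac
            and hist[light:].sum() / n >= min_frac)

candidates = []
for y in range(0, img.shape[0] - 4, 5):
    for x in range(0, img.shape[1] - 4, 5):
        if is_gradient_candidate(img[y:y + 5, x:x + 5]):
            candidates.append((x, y))   # keep for later contour analysis
```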
Thanks for your answer dnul, but it didn't really help me work this out. I thought about using a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, through the border of each 5px area. I checked each pixel and saved the ones which are darker than a certain threshold to a store.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for neighbours with white pixels in all 3 channels. I then marked each of those pixels and saved them to another store.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which meet each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the corners of the marker.
If you want to reproduce this, you have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
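For anyone trying to reproduce the first two steps described above, here is a rough single-channel sketch (the original works on all 3 channels; the thresholds are placeholders and the RANSAC line fitting is not shown):

```python
import numpy as np

def mark_edge_points(gray, dark=60, light=200):
    """Return (x, y) positions of pixels darker than `dark` that have at
    least one 8-connected neighbour brighter than `light`."""
    points = []
    h, w = gray.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if gray[y, x] < dark:
                neighbourhood = gray[y - 1:y + 2, x - 1:x + 2]
                if neighbourhood.max() > light:
                    points.append((x, y))
    return points
```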
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works: it looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also an adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e. 'marks' them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
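A minimal Canny sketch with the Python bindings, in case it helps; "test.jpg" is a placeholder path and the two hysteresis thresholds are just typical starting values:

```python
import cv2

img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path
blurred = cv2.GaussianBlur(img, (5, 5), 0)            # suppress noise first
# The two thresholds control hysteresis: edges stronger than 150 are kept,
# weaker edges (above 50) are kept only if connected to strong ones.
edges = cv2.Canny(blurred, 50, 150)
cv2.imwrite("edges.png", edges)
```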
