Adaptive hole filling based on the adjacent regions - opencv

I have the following problem, best explained with this picture:
I have a hole (blue) with an edge (white line).
I now want to fill the hole with the color of the region next to it.
So above the white line it should be yellow and below the white line red.
Is there an algorithm that does something similar which I could adapt?
Possibly even an implementation in openCV?
EDIT
OK, to be more specific: the white line comes from an edge detection and is irregular. Also, there are many blue spots like this spread over a big image, and the color of each spot needs to be computed from its adjacent colors.
EDIT2: added a better example image containing the whole scene:
To further clarify: Only the blue "holes" should be filled, because they are the regions of error we know. The white object edges are taken from a ground truth for this example, which is more precise than the data we can actually work with. It is possible to get an approximation of that edge, though.
The data is a depth map from a multi-camera scan, by the way. The goal is to fill the error regions caused by the objects shadowing each other: if a point can't be seen by two camera views because it is occluded, no depth estimation is possible for it.

Maybe you want to have a look at OpenCV's function called floodFill.
This function has an input mask parameter that you could use to specify the white line between the two colored regions.
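For illustration, here is a minimal Python sketch of that idea (the file names, seed points, and colors are placeholders; it assumes the holes are uniformly colored and that the edge image can act as a barrier):

    import cv2
    import numpy as np

    img = cv2.imread("depth_map.png")                     # image with the blue holes
    edge = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)  # the white edge line(s)

    # floodFill's mask must be 2 px larger than the image; nonzero mask
    # pixels act as barriers that the fill cannot cross.
    mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
    mask[1:-1, 1:-1] = (edge > 128).astype(np.uint8)

    # Fill each side of the line from a seed point inside the hole; the
    # fill stops at the white line because of the mask barrier.
    cv2.floodFill(img, mask.copy(), (40, 10), (0, 255, 255))  # yellow above the line
    cv2.floodFill(img, mask.copy(), (40, 60), (0, 0, 255))    # red below the line

With the default loDiff/upDiff of zero, only pixels exactly equal to the seed's color are filled, which suits a uniformly colored hole.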

I kinda found a solution to my problem: OpenCV has an inpaint function.
I'm gonna modify this according to this paper. Should work fine for my case.
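For reference, a minimal sketch of how cv2.inpaint could be applied here (the file name and the pure-blue color test are assumptions about how the error regions are marked):

    import cv2
    import numpy as np

    img = cv2.imread("depth_map.png")  # hypothetical input with blue error regions

    # Build a mask of the blue holes; nonzero mask pixels are the ones
    # that inpaint will reconstruct from their surroundings.
    blue = np.array([255, 0, 0])       # pure blue in BGR
    mask = cv2.inRange(img, blue, blue)

    # INPAINT_TELEA is the Telea method; cv2.INPAINT_NS (Navier-Stokes)
    # is the alternative flag.
    filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)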

Related

Image Processing - Film negative cutting

I'm trying to figure out how to automatically cut some images like the one below (this is a negative film); basically, I want to remove the blank parts at the top and at the bottom. I'm not looking for complete code for it, I just want to understand a way to do it. The language is not important at this point, but I think this kind of thing is normally accomplished with Python.
I think there are several ways to do that, ranging from simple to complex. I would say you can see the problem either as detecting white rectangles or as segmenting the image.
I can suggest OpenCV (which is available in several languages, Python among them); you can have a look at the image processing examples here.
First we need to find the white part, then remove it.
Finding the white part
Thresholding
Let's start with an easy one: thresholding
Thresholding means dividing the image into two parts (usually black and white). You can do that by selecting a threshold (in your case, the threshold would be towards white - or black if you invert the image). By doing so, however, you may also threshold some parts of the image (for example the chickens and the white part above the kid). Do you have information about the position of the white stripes? Are they always at the top and bottom of the image? If so, you can apply the thresholding operation only to the top 25% and bottom 25% of the image. You would then most likely not "ruin" the image.
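For illustration, a minimal OpenCV sketch of this band-limited thresholding (the file name and the threshold value 200 are assumptions):

    import cv2

    img = cv2.imread("negative.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    h = img.shape[0]
    band = h // 4  # only look at the top and bottom quarters

    # Threshold only where the blank stripes are expected, so bright
    # content in the middle of the frame is left untouched.
    _, top = cv2.threshold(img[:band], 200, 255, cv2.THRESH_BINARY)
    _, bottom = cv2.threshold(img[h - band:], 200, 255, cv2.THRESH_BINARY)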
Finding the white rectangle
If that does not work or you would like to try something else, you can see the white stripes as rectangles and try to find their contours. You can see how in this tutorial. In this case you do not get a binary image, but a bounding box for each white area. You most likely find the chickens too in this case, but by looking at the bounding boxes it is easy to tell which ones are correct and which are not. You can also check this by calculating the area of each bounding box (width * height) and keeping only the big ones.
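A minimal sketch of that approach, assuming OpenCV 4.x (the threshold and the size limits are guesses to tune):

    import cv2

    img = cv2.imread("negative.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    _, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

    # Bounding boxes of the white blobs; the film stripes should be the
    # large ones spanning (almost) the full image width.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    stripes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 10000 and w > 0.8 * img.shape[1]:
            stripes.append((x, y, w, h))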
Removing the part
Once you get the binary image (white part and not white part) or the bounding boxes, you have to crop the image. This can also be done in several ways, but I think the easiest one would be to just select the central part of the image. For example, if the image has H pixels vertically, you would keep only the pixels from H1 (the height of the first white stripe) to H-H2 (where H2 is the height of the second white stripe). There is no tutorial on cropping, but there are several questions here on SO, for example this one.
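Since OpenCV images are NumPy arrays in Python, the crop itself is a slice (H1 and H2 below are placeholder stripe heights):

    import cv2

    img = cv2.imread("negative.jpg")  # hypothetical file
    H1, H2 = 120, 130                 # measured heights of the top/bottom stripes

    # Keep only the rows between the two white stripes.
    cropped = img[H1:img.shape[0] - H2, :]
    cv2.imwrite("cropped.jpg", cropped)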
Additional notes
You could use more advanced segmentation algorithms such as Watershed, or even learn and apply machine-learning techniques (here an article); as you can see, the rabbit hole is pretty deep in this case. But I believe that would be overkill, and the easy techniques should already give you some results here.
Hope this was helpful and have fun!

How to segment part of an image so that the edges are smooth?

I have an input image as follows and wish to segment the parts into regions. I also want the segmented parts to contain not just the pixels that contribute to the solid color but also the edge anti-aliasing between the edge of the region and the next region.
Does there exist any filter or method to segment the image in this way? The important part is that the end result segmented part must contain the edge anti-aliasing between it and the next regions. A correct solution is shown in yellow.
In these two images I have enlarged the pixels so that the edge anti-aliasing between region edges can be seen clearly.
An example output that I want for the yellow region is shown.
For a definition of "edge anti-aliasing" see https://markpospesel.wordpress.com/2012/03/30/efficient-edge-antialiasing/
I'm not sure what exactly you want. For example, would some pixels belong to two segments? If that is the case, then I'm relatively sure you have to do something on your own. Otherwise, the following might work:
Opening and Closing
Opening and closing are two morphological operations that smooth region borders.
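A minimal sketch with OpenCV (the file name and kernel size are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("segments.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    kernel = np.ones((5, 5), np.uint8)

    # Opening removes small white specks; closing fills small dark gaps.
    # Applied to a segment mask, both smooth jagged borders.
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)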
Clustering
There are many clustering algorithms; they are what you want for non-semantic segmentation (for semantic segmentation, you might want to read my literature survey). One example is
P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient Graph-Based Image Segmentation.”
I would simply give those algorithms a try and see if one directly works.
Other clustering algorithms:
K-means (a minimal sketch follows this list)
DBSCAN
CLARANS
AGNES
DIANA
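As promised above, a minimal K-means color-clustering sketch with OpenCV (the file name and K are placeholders). Note that K-means assigns each pixel to exactly one cluster, so anti-aliased border pixels end up in a single segment; getting soft borders would need extra work on top of this:

    import cv2
    import numpy as np

    img = cv2.imread("input.png")  # hypothetical file
    pixels = img.reshape(-1, 3).astype(np.float32)

    # Cluster the pixel colors into K groups and repaint every pixel
    # with the mean color of its cluster.
    K = 4
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, K, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)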

Finding interior regions in figure

Let's say we have a dark donut on a white background. What is a good way, looking only at any pixel's value (0 = not white, 1 = white) and any neighbouring pixel values, to determine which one of the two white regions found on the image is inside the donut?
In computer graphics, this problem has been extensively studied in the context of geometry processing. The goal is to know whether a point is inside or outside a polygon (possibly with holes), and it has been used for color filling, for example.
The most common solution is to cast a ray in a random direction from your current point (for simplicity, you can take a horizontal scan line) and count the number of intersections with the boundary. If this number is even, you are outside; if it is odd, you are inside.
In the context of image processing, finding the boundary can be done with edge-finding techniques (for instance, the Sobel operator). You can then just walk along a single row from your given point to the right (for instance) and count how many edges you find.
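For the donut case, a minimal sketch of the parity test on a binary image (the file name and coordinates are placeholders): the ray is a single pixel row, and we count how many dark bands it crosses; an odd count means the starting white pixel is enclosed by the shape.

    import cv2
    import numpy as np

    binary = cv2.imread("donut.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    _, binary = cv2.threshold(binary, 128, 255, cv2.THRESH_BINARY)

    def is_enclosed(binary, x, y):
        # Walk right from (x, y); count the white-to-dark steps, i.e.
        # how many dark bands the ray enters. Odd = inside the shape.
        row = binary[y, x:] > 0                          # True = white
        entries = np.count_nonzero(row[:-1] & ~row[1:])  # white -> dark
        return entries % 2 == 1

    print(is_enclosed(binary, 100, 100))  # hypothetical point in the hole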
WhitAngl's answer is correct, so my answer simply brings into context some problems involved in image processing. If you are aware of these, sorry for the naïveness.
Given your initial image, simply considering its edges is bound to give incorrect results due to the inherent problems of edge detection: edges might be broken, they might not be detected, etc. Also, from your own considerations, we cannot simply use 0 = not white, 1 = white. With your original image, this is the result of such a consideration:
If we suppose you have a better binary representation, such as this one:
Then WhitAngl's answer applies perfectly. Also, in this case, the answer can be simplified to: if there is a black pixel of the exterior edge touching a white pixel, then this white pixel is not an interior one. This gives:
Where every white pixel is interior to your donut.
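The same idea can also be written as a flood fill: fill the white background from a border pixel, and whatever white remains cannot reach the border, so it is interior. A minimal sketch (the file name and the assumption that the corner pixel is background are mine):

    import cv2
    import numpy as np

    binary = cv2.imread("donut_clean.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    _, binary = cv2.threshold(binary, 128, 255, cv2.THRESH_BINARY)

    # Erase the outside white region by flood-filling from a corner.
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    interior = binary.copy()
    cv2.floodFill(interior, mask, (0, 0), 0)
    # Any pixel still white in 'interior' is enclosed by the donut.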

Simple OpenCV example to measure Size of Object on a screen

Following up on my other question: do you guys know a good example in OpenCV with a simple black/white calibration picture and appropriate detection algorithms?
I just want to show some B&W-image on a screen, take a picture of that image from afar and calculate the size of the shown image, to calculate the distance to said screen.
Before I reinvent the wheel, I reckon this is so easy that it could be achieved in many different ways in OpenCV, so I thought I'd ask if there's a preferred way, possibly with some sample code.
(I got some face-detection code running using haarcascade-xml files already)
PS: I already have the resolution/dpi-part of my screen covered, so I know how big a picture would be in cm on my screen.
EDIT:
I'll make it real simple, I need:
A pattern, that is easily recognizable in an Image. Right now I'm experimenting with a checkerboard. The people who made ARDefender used this.
An appropriate algorithm to tell me the exact pixel coordinates of pattern 1) in a picture using OpenCV.
Well, it's hard to say which image is the best for recognition - under different illumination any color can be interpreted as another color. Simple example:
As you can see, both traffic signs have a red border, but on one image the upper sign's border is obviously not red.
So in my opinion you should use an image with many different colors (like a rainbow). You also said that it should be easily recognizable from different angles; that's why a circle is the best shape for it.
That's why your image should look like this:
So the idea for detecting such an object is the following:
Segment the image by color (blue, red, green, etc.); use the HSV color space for this.
Detect circles of a specific color in the image.
The area with the highest count of circles is likely your object. A sketch of these steps follows.
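A minimal sketch of these steps for one color (the HSV range, file name, and Hough parameters are rough guesses that need tuning):

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")  # hypothetical camera frame
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # 1) Segment one color band in HSV (roughly blue here); repeat with
    #    other ranges for red, green, and so on.
    mask = cv2.inRange(hsv, (100, 100, 50), (130, 255, 255))
    mask = cv2.medianBlur(mask, 5)

    # 2) Detect circles in the color mask.
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=20, minRadius=5, maxRadius=100)

    # 3) Count the hits; the color/area with the most circles is the
    #    best candidate for the marker.
    count = 0 if circles is None else circles.shape[1]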
You just have to take pictures of your B&W object from several known distances (1m, 2m, 3m, ...) and then, for each distance, check the size of your object in the corresponding image.
From that data, you will be able to create a linear function giving you the distance from the size in pixels (y = ax + b should do ;) ); translate it into your code and you're done.
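A minimal sketch of that calibration with NumPy (the sample numbers are placeholders); note that physically the distance is closer to inversely proportional to the pixel size, so fitting distance against 1/size may match the optics better than a straight line:

    import numpy as np

    # Measured object size in pixels at known distances (placeholders).
    sizes_px = np.array([400.0, 200.0, 133.0, 100.0])
    dist_m = np.array([1.0, 2.0, 3.0, 4.0])

    # Fit y = a*x + b as suggested above.
    a, b = np.polyfit(sizes_px, dist_m, 1)

    def distance_for(size_px):
        return a * size_px + b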
Cheers

OpenCV: Detect a black to white gradient in an area

I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (in the example, outlined by the scanlines) that should be around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area, so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles separated by black and white lines.
Is something like that possible? I googled a lot, but I only found edge detectors, and that's not what I want; I really need to detect the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histogram.
You can use cvCalcHist for the task; then you can establish some threshold to determine whether the percentage of black and white pixels corresponds to that of a gradient. This will not solve the task by itself, but it will help you reduce the complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it is possible to find connected components (using cvFindContours) that separate the image into black and white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and a black zone simultaneously.
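cvCalcHist is the old C API; with the modern Python bindings the same per-patch check can be done with plain NumPy counting, which is effectively a two-bin histogram (the thresholds and file name are guesses):

    import cv2
    import numpy as np

    gray = cv2.imread("scan.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file

    # Keep only 5x5 patches that contain both near-black and near-white
    # pixels; only those can hold a black-to-white transition.
    candidates = []
    for y in range(0, gray.shape[0] - 5, 5):
        for x in range(0, gray.shape[1] - 5, 5):
            patch = gray[y:y + 5, x:x + 5]
            dark = np.count_nonzero(patch < 50)
            bright = np.count_nonzero(patch > 200)
            if dark >= 5 and bright >= 5:
                candidates.append((x, y))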
hope it helps.
Thanks for your answer dnul, but it didn't really help me work this out. I thought about approaching the problem with a histogram, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix that holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, through the border of each 5px area. I checked each pixel and saved the ones darker than a certain threshold to a store.
After the iteration I had a rough idea of how many black pixels there are, so I checked each of them for white-pixel neighbors in all 3 channels. I then marked each of those pixels and saved them to another store.
I then used the RANSAC algorithm to construct lines out of these points; it constructs about 5-20 lines per marker edge. I then looked at the lines that intersect and saved the positions of those that meet at a right angle.
The 4 points I get from that are the corners of the marker.
If you want to reproduce this, you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
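A rough sketch of the dark-pixel-next-to-white detection described above (the thresholds and file name are placeholders; the original per-channel, per-5x5-block bookkeeping is simplified to whole-image masks):

    import cv2
    import numpy as np

    img = cv2.imread("scan.jpg")  # hypothetical file
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # pre-filter, as suggested

    # Dark pixels that have a bright pixel right next to them: dilate
    # the bright mask by one pixel and intersect it with the dark mask.
    dark = (gray < 60).astype(np.uint8) * 255
    bright = (gray > 180).astype(np.uint8) * 255
    near_bright = cv2.dilate(bright, np.ones((3, 3), np.uint8))
    points = cv2.bitwise_and(dark, near_bright)  # feed these to RANSAC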
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works. It looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also an adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e., 'marks' them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
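A minimal usage sketch (the file name and thresholds are placeholders):

    import cv2

    gray = cv2.imread("scan.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # Canny marks pixels where intensity changes sharply, i.e. exactly
    # the black-to-white transitions; the two thresholds control the
    # edge sensitivity.
    edges = cv2.Canny(gray, 50, 150)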
