I have images of a relief map. I would like to write a program that can analyze such a map and identify the typical situations shown in the figure below.
Red represents the lowest value, purple the highest. The region of interest is inside the white square. Do not pay attention to the black and white circles - they serve a personal purpose not related to this question.
The color itself is not critical here; what is really crucial is the "form" of the maximum, i.e. the way it sits on the map. We can clearly make out the maximum in the left image, while it is quite blurred in the right one (because it touches other areas that share a similar color).
What I want my program to do is to distinguish between these two completely different cases, i.e. identify whether the area inside the white square is "reliable" or not (in terms of its "blurriness").
But I do not know what algorithms I should search for. Of course I could do this analysis manually, comparing the value at each point to the values at other points, but I would like to use established and robust algorithms if they exist.
Honestly speaking, I thought of using the algorithms that find contours on binarized images, but that does not seem robust.
Thank you in advance.
P.S. I am using OpenCV, so if you know that something relevant is already implemented in it, it would be helpful if you mentioned it.
UPD: I am not interested only in the situation inside the white square - I would also like to know what happens outside it and how that compares to the region inside.
I am looking for an algorithm or, even better, some library that covers background subtraction from a single static image (no background model available). What would be possible, though, is some kind of user input, for example the way https://clippingmagic.com does it.
Sadly my Google-fu is failing me here, as I can't find any papers on the topic with my limited set of keywords.
That webpage is really impressive. If I were to try to implement something similar, I would probably use k-means clustering in the CIELAB colorspace. The reason for changing the colorspace is that colors can then be represented by two components (a*, b*) rather than the three of a regular RGB image, which should speed up clustering. Additionally, the CIELAB color space was built for exactly this purpose: measuring "distances" (similarities) between colors in a way that accounts for how humans perceive color, rather than just looking at the raw binary data the computer has.
But first, a quick overview of k-means. For this example we will say k=2 (meaning only two clusters):
1. Initialize each cluster with a mean.
2. Go through every pixel in your image and decide which mean it is closer to: cluster 1 or cluster 2.
3. Compute the new mean for each cluster after you've processed all the pixels.
4. Using the newly computed means, repeat steps 2-3 until convergence (meaning the means don't change very much). A minimal OpenCV sketch of these steps is shown below.
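Here is a minimal sketch of those steps using OpenCV's built-in cv2.kmeans on the a*/b* channels of a Lab-converted image. The file name, the value of k and the termination criteria are placeholder assumptions, not anything from the question.

```python
import cv2
import numpy as np

# "input.png" is a placeholder path for the image to segment.
img = cv2.imread("input.png")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# Cluster on the two chroma channels (a*, b*) only, as described above.
ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)

k = 2  # two clusters: a rough foreground/background split
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(ab, k, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# Turn the per-pixel cluster labels back into an image-shaped mask
# and write out each cluster on a white background for inspection.
mask = labels.reshape(lab.shape[:2])
for i in range(k):
    cluster = np.where(mask[:, :, None] == i, img, 255)
    cv2.imwrite("cluster_%d.png" % i, cluster.astype(np.uint8))
```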
Now that would work well when the foreground is notably different from the background - say a red ball on a blue background - but if the colors are similar it becomes more problematic. I would still stick with k-means, just with a larger number of clusters. On that web page you can make multiple red or green selections; I would make each of these strokes a cluster and initialize the cluster to the stroke's mean color. So say I drew 3 red strokes and 2 green ones - that means I'd have 5 clusters, but internally I would tag each one with an extra foreground/background attribute. Each cluster then has a small variance, and in the end I would only display that attribute, foreground or background. I hope that made sense.
Maybe now you have some search terms to start off with. There may be many other methods but this is the first I thought of, good luck.
EDIT
After playing with the website a bit more, I see it uses spatial proximity to cluster. Say I had 2 identical red blobs on opposite sides of the image: if I only annotate the left side of the image, the blob on the right side might not get detected. K-means wouldn't replicate this behavior, since the method I described uses only color to cluster pixels and is completely oblivious to their location in the image.
I don't know what tools you have at your disposal, but here is a nice Matlab example/tutorial on color-based k-means.
Suppose I have gray-scale photographs of sheets of text. Each sheet of paper is exactly white and the text is exactly black.
Unfortunately, the lighting is not uniform, perspective shading occurs, and the sheets of paper may be curved. Of course, there is also some small high-frequency noise in the image.
I AM SURE that there should be a nearly IDEAL solution for separating text and background in this situation.
So what is it? :)
I don't believe it is impossible, or even hard, to turn such gray-scale images into nearly perfect black-and-white pictures. I can't prove this, but I judge by my own perception: I need no special intelligence to recognize such pictures by eye. They can be in any language, even an unfamiliar one, but I will still SEE exactly what is written.
So, how do I teach a computer to do the same?
UPDATE
Consider the original image:
Any global thresholding will cause artefacts (1) and non-uniform text representation (2).
I need some thresholding that looks at local statistics.
Switch to adaptive thresholding.
You will find an introduction here: http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm
Adaptive thresholding is designed to deal with exactly this kind of problem.
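For reference, in OpenCV's Python bindings this is a thin wrapper around cv2.adaptiveThreshold; the file name, block size and constant C below are placeholder values you would tune against your own photographs.

```python
import cv2

# "page.png" is a placeholder for one of the photographed sheets.
gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

# A light median blur to knock down the high-frequency noise first.
gray = cv2.medianBlur(gray, 3)

# Each pixel is compared against a Gaussian-weighted mean of its 31x31
# neighbourhood minus a small constant C, so non-uniform lighting and
# curved pages are handled locally rather than with one global threshold.
bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY, 31, 10)
cv2.imwrite("page_bw.png", bw)
```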
I am a relative newcomer to image processing, and this is the problem I'm facing. Say I have an image of an application form, like this:
Now I would like to detect all the locations where data is to be entered. In this case, those would be the rectangles divided into a number of boxes, like so (not all fields marked):
I can live with the photograph box also being detected. I've tried running the squares.cpp sample in the OpenCV sources, which does not quite get me what I want. I also tried the modified version here - the results were worse (my use case is definitely very different from the OP's in that question).
Also, using the Hough transform to get the lines is not really working, with or without blur/threshold, as the noise in the scanned image contributes extraneous lines; in addition, thresholding takes away parts of the combs (the small squares), so the line detection is not up to the mark.
Note that this form is not a scanned copy of a printed form, but the real input might very well be a noisy, scanned image of a printed form.
While I'm quite sure this is possible (at least with some tolerance allowed) and I'm working toward a solution, it would be really helpful to get insights and ideas from other people who might have tried something like this or who enjoy hacking on CV problems. It would also be really nice if the answers explained why a particular operation was done (e.g., dilation to try to fill any holes left by thresholding, etc.).
Are the forms consistent in any way? Are the boxes the same size on all forms? If you can rely on a consistent size, like the character boxes in the form above, you could use template matching.
Otherwise, the problem seems to be: find any/all rectangles in the image (with a post-processing step to filter out any that contain a significant amount of markings, or to merge neighboring rectangles).
The more you can take advantage of the consistencies between the forms, the easier the problem will be. Use any context you can get.
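If the character boxes really are a consistent size, a rough template-matching sketch in OpenCV could look like the following. The file names, the idea of cropping one box as the template, and the 0.7 match threshold are my assumptions, not something given in the question, and in practice you would add non-maximum suppression to merge overlapping hits.

```python
import cv2
import numpy as np

# Placeholder file names: the scanned form and a cropped example of one box.
form = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("box_template.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Normalized cross-correlation; peaks mark likely box locations.
result = cv2.matchTemplate(form, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result > 0.7)  # threshold is a guess, tune per form

vis = cv2.cvtColor(form, cv2.COLOR_GRAY2BGR)
for x, y in zip(xs, ys):
    cv2.rectangle(vis, (int(x), int(y)), (int(x + w), int(y + h)), (0, 0, 255), 1)
cv2.imwrite("matches.png", vis)
```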
EDIT
Using the gradients (computed with a Sobel kernel in both the x and y directions) you can weed out a lot of the noise.
Using both you can find the orientation of the gradient (the equation can be found here: en.wikipedia.org/wiki/Sobel_operator). Let's say we define a discriminating feature of a box to be a vertical or horizontal gradient: if a pixel's gradient has an orientation that is either straight horizontal or straight vertical, keep it, and set everything else to white.
To make this more robust to noise, you can use a sliding window (3x3) in which you compute the median orientation. If the median (or mean) orientation of the window is vertical or horizontal, keep the current (middle of the window) pixel, otherwise set it to white.
You can use OpenCV for the gradient computation, and possibly the orientation/phase calculation, but you'll probably need to write the actual sliding-window code yourself - I'm not intimately familiar with OpenCV.
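A rough Python/OpenCV sketch of that pipeline might look like this. The magnitude and angle thresholds are guesses, and the 3x3 median here ignores the 0/360 degree wrap-around, so treat it as an approximation of the sliding-window step rather than a faithful implementation.

```python
import cv2
import numpy as np

gray = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Gradients with a Sobel kernel in x and y, then magnitude and orientation.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)  # ang in [0, 360)

# 3x3 median of the orientation image (approximate sliding-window step).
ang_med = cv2.medianBlur(ang, 3)

# Keep pixels whose smoothed orientation is near horizontal or vertical
# and whose gradient is strong enough; everything else becomes white.
fold = np.mod(ang_med, 90.0)                 # pattern repeats every 90 degrees
near_axis = np.minimum(fold, 90.0 - fold) < 10.0
strong = mag > 50.0                          # magnitude cut-off is a guess

out = np.full(gray.shape, 255, np.uint8)
out[near_axis & strong] = 0
cv2.imwrite("box_edges.png", out)
```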
I'm trying to do an application which, among other things, is able to recognize chess positions on a computer screen from screenshots. I have very limited experience with image processing techniques and don't wish to invest a great amount of time in studying this, as this is just a pet project of mine.
Can anyone recommend me one or more image processing techniques that would yield me a good result?
The conditions are:
The image is always crisp and clean - no noise, no poor lighting conditions, etc. (since it's a screenshot).
I'm expecting a very low impact on computer performance while processing 1 image per second.
I've thought of two modes to start the process:
Feed the piece shapes to the program (so that it knows what a queen, king etc. looks like)
Just feed the program an initial image containing the starting position, from which the program can (after it recognizes the position of the board) pick out each chess piece.
The process should be relatively easy to understand, as I don't have a very good grasp of image processing techniques (yet)
I'm not interested in using any specific technology, so technology-agnostic documentation would be ideal (C/C++, C#, Java examples would also be fine).
Thanks for taking the time to read this, and I hope to get some good answers.
It's an interesting problem, but you need to specify a lot more than in your original question in order to get an acceptable answer.
On the input images: "screenshots" is quite a vague category. Can you assume that the chessboard will always be entirely in view? Will you have multiple views of the same board? Can you assume that no pieces will be partially or completely occluded in all views?
On the imaged objects and the capture system: will the same chessboard and pieces be used, under very similar illumination? Will the same lens/camera/digitization pipeline be used?
Hi Andrei,
I have written a coin-counting algorithm that works from a picture, so the process should be helpful.
The algorithm is called the generalized Hough transform:
Make the picture black and white; it is easier that way.
Take the image of one piece and "slide it over the screenshot".
For each position, calculate the number of common pixels in the two images.
Where you have the largest number, that is where the piece is.
Hope this helps.
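In OpenCV terms, the slide-and-count procedure above is essentially what cv2.matchTemplate does, so a quick sketch (with placeholder file names for the screenshot and the piece image) could be:

```python
import cv2

# Placeholder file names: a full-board screenshot and a cropped piece image.
board = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
piece = cv2.imread("white_queen.png", cv2.IMREAD_GRAYSCALE)

# Step 1: make both images black and white.
_, board_bw = cv2.threshold(board, 128, 255, cv2.THRESH_BINARY)
_, piece_bw = cv2.threshold(piece, 128, 255, cv2.THRESH_BINARY)

# Slide the piece over the board and score the overlap at every position;
# the highest score gives the most likely location of the piece.
result = cv2.matchTemplate(board_bw, piece_bw, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best match at", max_loc, "score", max_val)
```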
Yeah, go with the answer above.
Convert the picture to greyscale.
Slice it into 64 squares and store them in an array.
Using Matlab you can identify the pieces easily.
The piece's color can be obtained by calculating the percentage of black pixels:
ratio = no. of black pixels / (no. of black pixels + no. of white pixels)
If your value is above the threshold, it is WHITE; else it is BLACK.
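A small Python/OpenCV version of the slice-into-64-squares and black-pixel-ratio idea (rather than Matlab), assuming the screenshot has already been cropped to the board; the 128 intensity cut-off is arbitrary, and which side of the final threshold corresponds to a white or black piece depends on how the piece set is drawn.

```python
import cv2
import numpy as np

# "board.png" is a placeholder: a screenshot already cropped to the 8x8 board.
board = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
h, w = board.shape
sq_h, sq_w = h // 8, w // 8

for row in range(8):
    for col in range(8):
        square = board[row * sq_h:(row + 1) * sq_h,
                       col * sq_w:(col + 1) * sq_w]
        black = np.count_nonzero(square < 128)  # 128 is an arbitrary cut-off
        white = square.size - black
        ratio = black / (black + white)         # the ratio from the answer above
        # Compare ratio against a tuned threshold to decide the piece color;
        # which side means WHITE or BLACK depends on the piece set's rendering.
        print(row, col, round(ratio, 3))
```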
I'm working on a similar project in C#; finding which piece is which isn't the hard part for me. The first step is to find a rectangle that shows just the board and cuts everything else out. I initially hard-coded it to search for the colors of the squares, but I would like to make it more robust and reliable regardless of the color scheme. I'm trying to make it find squares of pixels that match within a certain threshold and extrapolate the board location from that.
Algorithm for a drawing and painting robot
Hello
I want to write a piece of software which analyses an image, and then produces an image which captures what a human eye perceives in the original image, using a minimum number of bezier path objects of varying colour and opacity.
Unlike the recent twitter super compression contest (see: stackoverflow.com/questions/891643/twitter-image-encoding-challenge), my goal is not to create a replica which is faithful to the image, but instead to replicate the human experience of looking at the image.
As an example, if the original image shows a red balloon in the top left corner, and the reproduction has something that looks like a red balloon in the top left corner then I will have achieved my goal, even if the balloon in the reproduction is not quite in the same position and not quite the same size or colour.
When I say "as perceived by a human", I mean this in a very limited sense. i am not attempting to analyse the meaning of an image, I don't need to know what an image is of, i am only interested in the key visual features a human eye would notice, to the extent that this can be automated by an algorithm which has no capacity to conceptualise what it is actually observing.
Why this unusual criteria of human perception over photographic accuracy?
This software would be used to drive a drawing and painting robot, which will be collaborating with a human artist (see: video.google.com/videosearch?q=mr%20squiggle).
Rather than treating marks made by the human which are not photographically perfect as necessarily being mistakes, the algorithm should seek to incorporate what is already on the canvas into the final image.
So relative brightness, hue, saturation, size and position are much more important than being photographically identical to the original. Maintaining the topology of the features - blocks of colour, gradients, convex and concave curves - will be more important than the exact size, shape and colour of those features.
Still with me?
My problem is that I am suffering a little from "when you have a hammer, everything looks like a nail" syndrome. To me it seems the way to do this is to use a genetic algorithm, with something like the wavelet-transform comparison (see: grail.cs.washington.edu/projects/query/) used by retrievr (see: labs.systemone.at/retrievr/) to select fit solutions.
But the main reason I see this as the answer is that these are the techniques I know; there are probably much more elegant solutions using techniques I don't know anything about.
It would be especially interesting to take into account the way the human visual system analyses an image, so perhaps special attention needs to be paid to straight lines, angles, high-contrast borders and large blocks of similar colours.
Do you have any suggestions for things I should read on vision, image algorithms, genetic algorithms or similar projects?
Thank you
Mat
PS. Some of the spelling above may appear wrong to you and your spellcheck. It's just international spelling variations which may differ from the standard in your country: e.g. Australian standard: colour vs American standard: color
There is a model that can be implemented as an algorithm to calculate a saliency map for an image, determining which parts of the image would get the most attention from a human.
The model is called the Itti-Koch model.
You can find a starting paper here.
And more resources and C++ source code here.
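If you just want a saliency map to experiment with, OpenCV's contrib saliency module ships a spectral-residual detector. That is not the Itti-Koch model itself, only a convenient stand-in that produces a comparable attention map; it needs an opencv-contrib build, and the file name below is a placeholder.

```python
import cv2

# Spectral-residual static saliency (requires the opencv-contrib package);
# not Itti-Koch, but it highlights roughly the same attention-grabbing regions.
img = cv2.imread("input.png")  # placeholder path
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(img)
if ok:
    cv2.imwrite("saliency.png", (saliency_map * 255).astype("uint8"))
```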
I cannot answer your question directly, but you should really take a look at artist/programmer (Lisp) Harold Cohen's painting machine Aaron.
That's quite a big task. You might be interested in image vectorization (I don't know what it's called officially), which takes in rasterized images (such as pictures you take with a camera) and outputs a set of bezier curves (I think) that approximate the image you put in. Since good algorithms often output very high-quality (read: complex) curve sets, you'd also be interested in simplification algorithms, which can help enormously.
Unfortunately I am not next to my library, or I could recommend a number of books on perceptual psychology.
The first thing you must consider is that the physiology of the human eye is such that when we examine an image or scene, we only capture very small bits at a time, as our eyes dart around rapidly. Our mind pieces the different parts together to try to form a whole.
You might start by finding an algorithm for the path of an eyeball as it darts around. Perhaps it is attracted to contrast?
Next is that our eyes adjust their "exposure" depending on the context. It's like those high-dynamic-range images, if they were pieced together not from multiple exposures of a whole scene, but from many small images, each balanced on its own yet blended into its surroundings to form a high dynamic range.
Now there was a finding in a monkey brain that there is a single neuron that lights up if there's a diagonal line in the upper left of its field of vision. Similar neurons can be found for vertical lines, and horizontal lines in various areas of that monkey's field of vision. The "diagonalness" determines the frequency with which that neuron fires.
One might speculate that other neurons could be found and mapped to other qualities, such as redness, texturedness, and so on.
There's something humans can do that I've never seen a computer program able to do. It's called "closure", where a human fills in information about what they are seeing that doesn't actually exist in the image. An example:
  *
*   *
Is that a triangle? If you knew in advance that it was, then you could probably make a program to connect the dots. But what if it's just dots? How can you know? I wouldn't attempt this unless I had some really clever way of dealing with it.
There are many other facts about human perception you might be able to use. Good luck, you've not picked a straightforward task.
I think one thing that could help you in this enormous task is human involvement - I mean data. For example, you could have many people sit staring at random dots (like those from the previous post) and connect them as they see fit. You could harness that data.