How to defend thresholding technique - image-processing

On a job for a customer, I am locating items within a grayscale scene with nonuniform background illumination. Once the items are located, I need to do another search within each one for details. The items are easy enough to locate by masking with the output of a variance filter; and within the items, if the threshold is correct, the details are easy to locate as well. But the mean and contrast of these items varies substantially.
I played around with threshold calculation for a while, and none of the techniques I implemented is perfect; but the one that turns out to be the simplest, as accurate as any other, and quite cheap, is to take the mean pixel value and add one standard deviation.
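For concreteness, here is a minimal sketch of the calculation, assuming the masked item region is available as a grayscale NumPy array (the optional mask argument is just how I restrict the statistics to the item's pixels):

```python
import numpy as np

def mean_plus_std_threshold(region, mask=None):
    """Threshold at the mean plus one standard deviation of the (optionally masked) pixels."""
    pixels = region[mask > 0] if mask is not None else region.ravel()
    t = pixels.mean() + pixels.std()
    return (region > t).astype(np.uint8) * 255
```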
My question is: is there some analytical way to defend this calculation other than "it works well"? I mean, I sort of stumbled onto this technique accidentally (only later did I find this answer), and using it seems arbitrary.

Related

How to count tablets successfully?

My last question on image recognition seemed to be too broad, so I would like to ask a more concrete question.
First, the background. I have already developed a (round) pill counter. It uses something similar to this tutorial. After I made it, I also found this other tutorial doing something similar.
However, my method fails for something like this image.
Although the segmentation process is a bit complicated (because of the semi-transparency of the tablets), I have managed to get it.
My problem is here. How can I count the elongated tablets, separating each one from the image, similar to the final results in the linked tutorials?
So far I have applied distance transform and then my own version of watershed and I got
As you can see, it fails on adjacent tablets (the distance transform usually does).
Take into account that the solution has to work for this image and also for other arrangements of the tablets, the most difficult being, for example:
I am open to using OpenCV or, if necessary, implementing my own algorithms. So far I have tried both (OpenCV functions as well as my own libraries). I am also open to using C++, Python, or another language (I programmed mine in C++ and have also done it in C#).
I am also working on this pill counting problem (I'm much earlier in the process than you are). To solve the piece you are working on (touching pills), my general idea is to capture the contours of the pills once you have a good mask, and then calculate the area of a single pill.
For this approach I'm assuming that there are enough pills in the image that the number of untouching pills is greater than the number of touching ones, and that no pills overlap one another. For my application, I think placing this restriction is reasonable (humans can take a quick look at the pills they've dumped out and, without too much work, at least roughly arrange them so they aren't touching; it's also possible that I could design a tray with some sort of dimples in it that would coerce the pills into not touching).
I do this by sorting the contour areas (which, with the right thresholding, should lead to only pills and pill groups being among the identified contours) and taking the median value.
Then, with a good value for the area of a pill, you can look for contours with areas that are a multiple of that median area (+/- some % error value).
I also use that median value to filter out contours that are clearly not big enough to be pills, and ones that are far too large to be a single pill (the latter, though, could be more troublesome, since such a contour could still be a group of touching pills).
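Roughly, the idea looks like this in OpenCV/Python terms (a sketch only; it assumes a reasonably clean binary mask of the pills, and the tolerance value is an arbitrary placeholder):

```python
import cv2
import numpy as np

def count_pills(pill_mask, tol=0.15):
    """Estimate the pill count from a binary mask using the median single-pill contour area."""
    contours, _ = cv2.findContours(pill_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = sorted(cv2.contourArea(c) for c in contours)
    if not areas:
        return 0
    single = np.median(areas)  # assumes most contours are single, untouching pills
    count = 0
    for a in areas:
        if a < (1.0 - tol) * single:
            continue  # too small to be a pill; treat as noise
        count += int(round(a / single))  # 1 for a lone pill, n for a clump of n pills
    return count
```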
Given that the pills are all identical and don’t overlap, simply divide the total pill area by the area of a single pill.
The area is estimated by simply counting the number of "pill" pixels.
You do need to calibrate the method by giving it the area of a single pill. This can be trivially obtained by giving the correct solution to one of the images (manual counting), then all the other images can be counted automatically.
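A minimal sketch of that calibration-and-division approach, assuming binary masks where pill pixels are non-zero (the function names are just for illustration):

```python
import numpy as np

def calibrate_pill_area(reference_mask, known_count):
    """Area of a single pill in pixels, from one manually counted reference image."""
    return np.count_nonzero(reference_mask) / known_count

def count_by_area(mask, single_pill_area):
    """Total pill pixels divided by the calibrated single-pill area."""
    return int(round(np.count_nonzero(mask) / single_pill_area))
```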

Fast image segmentation algorithms for automatic object extraction

I need to segment a set of unknown objects (books, cans, toys, boxes, etc.) standing on top of a surface (table top, floor…). I want to extract a mask (either binary or probabilistic) for each object on the scene.
I do know what the appearance of the surface is (a color model). The size, geometry, number, and appearance of the objects are arbitrary, and they could be texture-less as well. Multiple views might be available too. No user interaction is available.
I have been struggling to pick the best kind of algorithm for this scenario (graph based, cluster based, super-pixels, etc.). This comes, naturally, from a lack of experience with the different methods; I'd like to know how they compare to one another.
I have some constraints:
I can’t use libraries (a legal constraint), except for OpenCV, so any algorithm must be implemented by me. I’d therefore like to choose an algorithm that is simple enough to implement in a reasonably short period of time.
Performance is VERY important. There will be many other processes running at the same time, so I can’t afford to have a slow method.
It’s much preferred to have a fast and simple method with less resolution than something complex and slow that provides better results.
Any suggestion on some approach suitable for this scenario would be appreciated.
For speed, I'd quickly segment the image into surface and non-surface ("stuff"). This at least gets you from 24 bits (color?) down to 8 bits, or even one bit if you are brave.
From there you need to aggregate and filter the regions into blobs of appropriate size. For that, I'd try a morphological (or linear) filter keyed to a disk that would just fit inside the smallest object of interest: an opening followed by a closing. Perhaps start with smaller radii for better results.
From there, you should have an image of blobs that can be found and discriminated. Each blob or region should designate the objects of interest.
Note that if you can get to a 1-bit image, things can go VERY fast. However, efficient tools that can make use of this data form (8 pixels packed per byte) are often not forthcoming.
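To make the pipeline concrete, here is a rough OpenCV/Python sketch under the simplifying assumptions that the surface color model is just a mean BGR color with a distance threshold, and that the smallest object of interest is roughly 15 pixels across (both values are placeholders):

```python
import cv2
import numpy as np

def extract_object_blobs(img_bgr, surface_bgr, dist_thresh=40.0, min_radius=7):
    """Surface/non-surface split, opening + closing, then connected-component blobs."""
    # 1-bit surface/"stuff" image: distance from the surface color model
    diff = img_bgr.astype(np.float32) - np.array(surface_bgr, dtype=np.float32)
    stuff = (np.linalg.norm(diff, axis=2) > dist_thresh).astype(np.uint8) * 255

    # morphological open + close with a disk that just fits inside the smallest object
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * min_radius + 1, 2 * min_radius + 1))
    stuff = cv2.morphologyEx(stuff, cv2.MORPH_OPEN, disk)
    stuff = cv2.morphologyEx(stuff, cv2.MORPH_CLOSE, disk)

    # label the remaining blobs; each label is one candidate object mask
    n_labels, labels = cv2.connectedComponents(stuff)
    return [(labels == i).astype(np.uint8) * 255 for i in range(1, n_labels)]
```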

How to rearrange images by pixel groups

I would like to create an image transition program. It should shift pixel areas from one image and transition them to another based on certain criteria, like colour and shape.
To do this, I need to be able to analyse the image, split it into groups, and shift these groups.
The first problem already starts with determining the pixel groups. They should not be chosen at random or as perfect polygons/shapes. Does anyone know of an algorithm that can differentiate different textures/surroundings/borders?
Next, I need to make slight adjustments to the areas in order to make them fit the new image. Then the areas will be moved. That won't be as hard as the first problem.
Performance doesn't matter that much; first I have to get the program working. It can take an hour to load the transition beforehand or whatever ;)
Could anyone give me some advice where to start or what technologies/APIs I could use? I'm fine with most programming languages, preferably C#, VB, JavaScript, PHP, Java, etc. The platform doesn't matter either.
I know, this is complex, but I gave my best to try to explain it. Any ideas?
Your first task, grouping according to color/texture/etc., is called segmentation. There are many approaches and algorithms for it, and none is absolutely better than all the others; as with many things in image processing, the best algorithm depends on your image and your specific functional/artistic goal.
The general idea is to define multiple distances between pixels: one distance might be based only on the pixels' positions, another on the difference in their colors, and a more advanced metric could take the neighborhood into account to capture something related to shape, contour orientation, or texture. Then you would combine these distances (for example in a weighted sum) to get a "clever" measure of how similar two pixels are. After that you compute, more or less exhaustively, all the distances and group similar pixels according to some thresholds (like how big the final groups should be).
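As a sketch of what such a combined distance might look like (the weights are arbitrary placeholders; a real implementation would be vectorized rather than called per pixel pair):

```python
import numpy as np

def pixel_distance(p1, c1, p2, c2, w_pos=1.0, w_col=1.0):
    """Combined dissimilarity between two pixels: spatial distance plus color distance.

    p1, p2 are (x, y) positions; c1, c2 are (r, g, b) colors.
    The weights control the trade-off and are purely illustrative.
    """
    d_pos = np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float))
    d_col = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
    return w_pos * d_pos + w_col * d_col
```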
If you don't want to research and implement all that, you'd be better off using an existing image processing library. I suggest looking at OpenCV and the "segmentation" keyword. You'll get implementations of k-means, watershed and meanshift algorithms which are probably of interest for achieving your effect.
OpenCV is C++, but I think it also has bindings for Java and Python, and probably others.
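For instance, a minimal color-based k-means segmentation with OpenCV in Python looks roughly like this (the number of clusters k is up to you):

```python
import cv2
import numpy as np

def kmeans_segments(img_bgr, k=8):
    """Group pixels into k clusters by color; returns a label image the size of the input."""
    data = img_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
    return labels.reshape(img_bgr.shape[:2])
```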
For your second task, you need a mix of moving and blending pixels, but that's simpler and you can do it "by hand", or look at morphing algorithms.
A quick search revealed this blog post with source code using OpenCV to morph two images. You also have some ready-made libraries in a few languages; have a look at related questions.
You could even directly call a command-line utility: xmorph, which doesn't seem portable, or ImageMagick (see this script), which is more modern but doesn't implement a real morphing algorithm, AFAIK.

Object Detection using OpenCV in C++

I'm currently working on a vision system for a UAV I am building. The goal of the system is to find target objects, which are rather well defined (see below), in a video stream that will be a 2-D flyover view of the ground. So far I have tried training and using a Haar-like feature based cascade, a la Viola Jones, to do the detection. I am training it with 5000+ images of the targets at different angles (perspective shifts) and ranges (sizes in the frame), but only 1900 "background" images. This does not yield good results at all, as I cannot find a suitable number of stages to the cascade that balances few false positives with few false negatives.
I am looking for advice from anyone who has experience in this area, as to whether I should: 1) ditch the cascade, in favor of something more suitable to objects defined by their outline and color (which I've read the VJ cascade is not).
2) improve my training set for the cascade, either by adding positives, background frames, organizing/shooting them better, etc.
3) Some other approach I can't fathom currently.
A description of the targets:
Primary shapes: triangles, squares, circles, ellipses, etc.
Distinct, solid, primary (or close to) colors.
Smallest dimension between two and eight feet (big enough to be seen easily from a couple hundred feet AGL).
Large, single alphanumeric in the center of the object, with its own distinct, solid, primary or almost primary color.
My goal is to use something very fast, such as the VJ cascade, to find possible objects and their associated bounding boxes, and then pass these on to finer processing routines to determine the properties (color of the object and AN, value of the AN, actual shape, and GPS location). Any advice you can give me towards completing this goal would be much appreciated. The source code I currently have is a little lengthy to post here, but it is freely available should you like to see it for reference. Thanks in advance!
-JB
I would recommend ditching Haar classification, since you know a lot about your objects. In these cases you should start by checking what features you can use:
1) Overhead flight means, as you said, that you can basically treat these as fixed shapes on a 2D plane. There will be scaling, rotations, and some minor affine transformations, which depend a lot on how wide-angled your camera is. If it isn't particularly wide-angled, that part can probably be ignored. Also, you probably know your altitude, from which you can also make a very good assumption about the target size (scaling).
2) You know the colors, which also makes it quite easy to find the objects. If these are well-defined primary colors, then you can just filter the image based on color and find those contours. If you want to do something a little more advanced (which to me doesn't seem necessary, though...), you can do a backprojection, which in my experience is very effective and fast. Note: if you're creating the objects, it would be better to use red, green, and blue instead of primary colors (red, green, and yellow). Then you can simply split the image into its respective channels and use a very high threshold (see the sketch after this list).
3) You know the geometric shapes. I've never done this myself, but as far as I know, the options are using moments or using Hough transforms (although OpenCV only has Hough algorithms for lines and circles, so you'd have to write your own for other shapes...); a simpler alternative is also sketched below. You might already have sufficiently good results without this step, though...
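To make points 2) and 3) concrete, here is a rough OpenCV/Python sketch of the channel-split-and-threshold idea, plus a simple shape test based on polygon approximation as an alternative to moments/Hough (the threshold and epsilon values are guesses, not tuned values):

```python
import cv2

def find_colored_targets(img_bgr, channel=2, thresh=200):
    """Point 2: isolate strongly blue/green/red objects by thresholding one channel.

    channel: 0 = blue, 1 = green, 2 = red (OpenCV's BGR order); thresh is illustrative.
    """
    chan = cv2.split(img_bgr)[channel]
    _, mask = cv2.threshold(chan, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

def classify_shape(contour):
    """Point 3: very rough shape label from the number of polygon vertices."""
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)
    n = len(approx)
    if n == 3:
        return "triangle"
    if n == 4:
        return "square/rectangle"
    return "circle/ellipse" if n > 6 else "other"
```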
If you want more specific recommendations, it would be very useful to upload a couple sample images. :)
This may already be solved, but I came across a paper recently with an open-source license for generic object detection using normalised gradient features: http://mmcheng.net/bing/comment-page-9/
The details of the algorithm's performance against illumination, rotation, and scale may require a little digging. I can't remember off the top of my head where the original paper is.

An algorithm for a drawing and painting robot - any tips?

Hello
I want to write a piece of software which analyses an image, and then produces an image which captures what a human eye perceives in the original image, using a minimum number of bezier path objects of varying colour and opacity.
Unlike the recent twitter super compression contest (see: stackoverflow.com/questions/891643/twitter-image-encoding-challenge), my goal is not to create a replica which is faithful to the image, but instead to replicate the human experience of looking at the image.
As an example, if the original image shows a red balloon in the top left corner, and the reproduction has something that looks like a red balloon in the top left corner then I will have achieved my goal, even if the balloon in the reproduction is not quite in the same position and not quite the same size or colour.
When I say "as perceived by a human", I mean this in a very limited sense. I am not attempting to analyse the meaning of an image, and I don't need to know what an image is of; I am only interested in the key visual features a human eye would notice, to the extent that this can be automated by an algorithm which has no capacity to conceptualise what it is actually observing.
Why this unusual criterion of human perception over photographic accuracy?
This software would be used to drive a drawing and painting robot, which will be collaborating with a human artist (see: video.google.com/videosearch?q=mr%20squiggle).
Rather than treating marks made by the human which are not photographically perfect as necessarily being mistakes, the algorithm should seek to incorporate what is already on the canvas into the final image.
So relative brightness, hue, saturation, size and position are much more important than being photographically identical to the original. Maintaining the topology of the features, blocks of colour, gradients, and convex and concave curves will be more important than the exact size, shape and colour of those features.
Still with me?
My problem is that I am suffering a little from "when you have a hammer, everything looks like a nail" syndrome. To me it seems the way to do this is to use a genetic algorithm with something like the comparison of wavelet transforms (see: grail.cs.washington.edu/projects/query/) used by retrievr (see: labs.systemone.at/retrievr/) to select fit solutions.
But the main reason I see this as the answer is that these are the techniques I know; there are probably much more elegant solutions using techniques I don't know anything about.
It would be especially interesting to take into account the ways the human vision system analyses an image, so perhaps special attention needs to be paid to straight lines and angles, high-contrast borders, and large blocks of similar colours.
Do you have any suggestions for things I should read on vision, image algorithms, genetic algorithms or similar projects?
Thank you
Mat
PS. Some of the spelling above may appear wrong to you and your spellcheck. It's just international spelling variations which may differ from the standard in your country: e.g. Australian standard: colour vs American standard: color
There is a model that can be implemented as an algorithm to calculate a saliency map for an image, determining which parts of the image would get the most attention from a human.
The model is called the Itti-Koch model.
You can find a starting paper here.
And more resources and C++ source code here.
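As an aside, if you just want to experiment with saliency maps before digging into the Itti-Koch papers, OpenCV's contrib module provides a spectral-residual saliency detector (not Itti-Koch itself) that takes only a few lines to try, assuming opencv-contrib-python is installed:

```python
import cv2

def saliency_map(img_bgr):
    """Rough saliency map via OpenCV's spectral-residual detector (not Itti-Koch)."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, smap = detector.computeSaliency(img_bgr)
    return (smap * 255).astype("uint8") if ok else None
```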
I cannot answer your question directly, but you should really take a look at artist/programmer (Lisp) Harold Cohen's painting machine Aaron.
That's quite a big task. You might be interested in image vectorizing (I don't know what it's called officially), which takes in rasterized images (such as pictures you take with a camera) and outputs a set of bezier curves (I think) that approximate the image you put in. Since good algorithms often output very high quality (read: complex) line sets, you'd also be interested in simplification algorithms, which can help enormously.
Unfortunately I am not next to my library, or I could recommend a number of books on perceptual psychology.
The first thing you must consider is that the physiology of the human eye is such that when we examine an image or scene, we are only capturing very small bits at a time, as our eyes dart around rapidly. Our mind pieces the different parts together to try and form a whole.
You might start by finding an algorithm for the path of an eyeball as it darts around. Perhaps it is attracted to contrast?
Next is that our eyes adjust their "exposure" depending on the context. It's like those high dynamic range images, if they were pieced together not from multiple exposures of a whole scene, but from many small images, each balanced on its own but blended into its surroundings to form a high dynamic range.
Now there was a finding in a monkey brain that there is a single neuron that lights up if there's a diagonal line in the upper left of its field of vision. Similar neurons can be found for vertical lines, and horizontal lines in various areas of that monkey's field of vision. The "diagonalness" determines the frequency with which that neuron fires.
One might speculate that other neurons might be found and mapped to other qualities, such as redness, texturedness, and other things.
There's something humans can do that I've not seen a computer program ever able to do. It's something called "closure", where a human is able to fill in information about something that they are seeing that doesn't actually exist in the image. An example:
*
* *
Is that a triangle? If you knew that it was in advance, then you could probably make a program to connect the dots. But what if it's just dots? How can you know? I wouldn't attempt this one unless I had some really clever way of dealing with it.
There are many other facts about human perception you might be able to use. Good luck, you've not picked a straightforward task.
I think one thing that could help you in this enormous task is human involvement. I mean data: you could have many people sit and stare at random dots (like those in the previous post) and connect them as they see fit, and you could harness that data.

Resources