How to count tablets successfully? - opencv

My last question on image recognition seemed to be too broad, so I would like to ask a more concrete question.
First, some background. I have already developed a (round) pill counter. It uses something similar to this tutorial. After I made it I also found something similar in this other tutorial.
However, my method fails for something like this image.
Although the segmentation process is a bit complicated (because of the semi-transparency of the tablets), I have managed to get it:
Here is my problem: how can I count the elongated tablets, separating each one from the image, similar to the final results in the linked tutorials?
So far I have applied a distance transform and then my own version of watershed, and I got:
As you can see, it fails on the adjacent tablets (a distance transform usually does).
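For reference, a minimal sketch of that distance-transform-plus-watershed attempt (OpenCV in Python; the file names and the 0.5 threshold on the distance map are assumptions of mine):

    import cv2
    import numpy as np

    img = cv2.imread("tablets.png")                               # original colour image
    mask = cv2.imread("tablets_mask.png", cv2.IMREAD_GRAYSCALE)   # binary segmentation of the tablets

    # Distance transform: bright peaks sit roughly at the tablet centres.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

    # Threshold the distance map to get one marker per (well separated) tablet.
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)

    # Label the markers, mark the uncertain region with 0, and run watershed.
    n_markers, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[(mask > 0) & (sure_fg == 0)] = 0
    markers = cv2.watershed(img, markers)

    # Fails when adjacent tablets merge into a single distance-map peak.
    print("tablets found:", n_markers - 1)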
Take into account that the solution has to work for this image and also for other arrangements of the tablets, the most difficult being, for example:
I am open to using OpenCV or, if necessary, to implementing my own algorithms. So far I have tried both (using OpenCV functions and also programming my own libraries). I am also open to using C++, Python or another language (I programmed them in C++ and I have done it in C# too).

I am also working on this pill-counting problem (I'm much earlier in the process than you are). To solve the part you are working on, touching pills, my general idea is to capture the contours of the pills once you have a good mask, and then calculate the area of a single pill.
For this approach I'm assuming that there are enough pills in the image that more of them are untouching than touching, and that no pills overlap one another. For my application I think this restriction is reasonable: a human can take a quick look at the pills they've dumped out and, without too much work, at least roughly make sure they are not touching. It's also possible that I could design a tray with some sort of dimples in it to coerce the pills into not touching.
I do this by sorting the contour areas (which, with the right thresholding, should leave only pills and pill groups among the identified contours) and taking the median value.
Then, with a good value for the area of a single pill, you can look for contours whose areas are a multiple of that median area (plus or minus some percentage of error).
I also use the median value to filter out contours that are clearly too small to be pills, as well as ones that are far too large to be a single pill (the latter is trickier, since such a contour could still be a group of touching pills).
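A rough sketch of that median-area counting idea (Python/OpenCV; the 0.5 cut-off and the file name are placeholders of mine):

    import cv2

    mask = cv2.imread("pills_mask.png", cv2.IMREAD_GRAYSCALE)   # clean binary mask of the pills
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

    areas = sorted(cv2.contourArea(c) for c in contours)
    median_area = areas[len(areas) // 2]        # approx. area of one pill (most blobs are single pills)

    count = 0
    for a in areas:
        if a < 0.5 * median_area:
            continue                            # too small to be a pill: noise
        count += int(round(a / median_area))    # groups of touching pills count as multiples

    print("estimated pill count:", count)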

Given that the pills are all identical and don’t overlap, simply divide the total pill area by the area of a single pill.
The total area is estimated by simply counting the number of “pill” pixels.
You do need to calibrate the method by giving it the area of a single pill. This can be obtained trivially by providing the correct solution for one of the images (counted manually); all the other images can then be counted automatically.
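As a minimal sketch (the single-pill area below is a made-up calibration value):

    import cv2

    mask = cv2.imread("pills_mask.png", cv2.IMREAD_GRAYSCALE)   # binary mask: pill pixels are non-zero
    pill_pixels = cv2.countNonZero(mask)

    single_pill_area = 1450    # calibrated once by manually counting one reference image
    print("estimated count:", round(pill_pixels / single_pill_area))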

Related

How to rearrange images by pixel groups

I would like to create an image transition program. It should shift pixel areas from one image and transition them to another based on certain criteria, like colour and shape.
To do this, I need to be able to analyse the image, split it into groups, and shift these groups.
The first problem is determining the pixel groups. They should not be chosen at random, nor be perfect polygons/shapes. Does anyone know of an algorithm that can differentiate between different textures/surroundings/borders?
Next, I need to make slight adjustments to the areas in order to make them fit the new image. Then the areas will be moved. That won't be as hard as the first problem.
Performance doesn't matter that much; first I have to get the program working. It can take an hour to load the transition beforehand or whatever ;)
Could anyone give me some advice where to start or what technologies/APIs I could use? I'm fine with most programming languages, preferably C#, VB, JavaScript, PHP, Java, etc. The platform doesn't matter either.
I know this is complex, but I did my best to explain it. Any ideas?
Your first task, grouping according to color/texture/etc., is called segmentation. There are many approaches and algorithms for it, and none is absolutely better than the others; as with many things in image processing, the best algorithm depends on your image and your specific functional/artistic goal.
The general idea is to define multiple distances between pixels: one distance could be based only on the positions of the pixels, another on the difference in their colors, and a more advanced metric could take the neighbourhood into account to capture something related to shape, contour orientation or texture. You would then combine these distances (for example in a weighted sum) to get a "clever" measure of how similar two pixels are. After that you compute, more or less exhaustively, all the distances and group similar pixels according to some thresholds (such as how big the final groups should be).
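For instance, a toy combined distance between two pixels might look like this (the weights are arbitrary and would need tuning for a real application):

    import numpy as np

    def pixel_distance(p, q, img, w_pos=1.0, w_col=0.05):
        """Weighted sum of spatial distance and colour difference between pixels
        p and q, each given as (row, col) coordinates into the image array `img`."""
        spatial = np.linalg.norm(np.subtract(p, q))
        colour = np.linalg.norm(img[p].astype(float) - img[q].astype(float))
        return w_pos * spatial + w_col * colour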
If you don't want to research and implement all that, you'd be better off using an existing image processing library. I suggest looking at OpenCV and the "segmentation" keyword. You'll get implementations of k-means, watershed and meanshift algorithms which are probably of interest for achieving your effect.
OpenCV is C++, but it also has bindings for Java and Python I think, and probably others.
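For example, a quick colour-based k-means segmentation with OpenCV's Python bindings might look like this (K and the file names are arbitrary choices):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")
    data = img.reshape(-1, 3).astype(np.float32)     # one row per pixel, columns = B, G, R

    K = 6                                            # number of colour groups, chosen by hand
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(data, K, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)

    # Paint every pixel with its cluster centre to visualise the groups.
    segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
    cv2.imwrite("segmented.jpg", segmented)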
For your second task, you need a mix of moving and blending pixels, but that's simpler and you can do it "by hand", or look at morphing algorithms.
A quick search turned up this blog post with source code using OpenCV to morph two images. There are also some ready-made libraries in a few languages; have a look at related questions.
You could even directly call a command-line utility: xmorph (which doesn't seem portable) or ImageMagick (see this script), which is more modern but doesn't implement a real morphing algorithm AFAIK.

Algorithm for capturing machine readable zones

What method is suitable for capturing (detecting) the MRZ from a photo of a document? I'm thinking about a cascade classifier (e.g. Viola-Jones), but it seems a bit weird to use it for this problem.
If you know that you will be looking for text in a passport, why not try to find the passport model points first? Match a template of a passport to it using ASM/AAM (Active Shape Model, Active Appearance Model) techniques. Once you have the passport position information you can cut out the regions that you are interested in. This will take some time to implement, though.
Consider this approach as a great starting point:
Black top-hat followed by a horizontal derivative highlights long rows of characters.
Morphological closing operation(s) merge the nearby characters and character rows together into a single large blob.
Optional erosion operation(s) remove the small blobs.
Otsu thresholding followed by contour detection, and filtering away the contours that are apparently too small, too round, or located in the wrong place, will get you a small number of possible locations for the MRZ.
Finally, compute bounding boxes for the locations you found and see whether you can OCR them successfully.
It may not be the most efficient way to solve the problem, but it is surprisingly robust.
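A rough sketch of that pipeline in Python/OpenCV (the kernel sizes and the aspect-ratio filter at the end are guesses of mine and would need tuning per document type):

    import cv2
    import numpy as np

    gray = cv2.imread("passport.jpg", cv2.IMREAD_GRAYSCALE)
    rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 5))

    # 1. Black top-hat highlights dark characters on the lighter background.
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, rect_kernel)

    # 2. Horizontal derivative emphasises long rows of characters.
    grad = np.absolute(cv2.Sobel(blackhat, cv2.CV_32F, 1, 0, ksize=3))
    grad = cv2.normalize(grad, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # 3. Closing merges characters and rows into blobs; Otsu binarises; erosion removes small blobs.
    grad = cv2.morphologyEx(grad, cv2.MORPH_CLOSE, rect_kernel)
    _, thresh = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    thresh = cv2.erode(thresh, None, iterations=2)

    # 4. Keep wide, flat contours as MRZ candidates.
    cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in cnts:
        x, y, w, h = cv2.boundingRect(c)
        if w > 5 * h and w > 0.6 * gray.shape[1]:
            print("MRZ candidate bounding box:", (x, y, w, h))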
A better approach would be the use of projection profile methods. A projection profile method is based on the following idea:
Create an array A with an entry for every row in your b/w input document. Now set A[i] to the number of black pixels in the i-th row of your original image.
(You can also create a vertical projection profile by considering columns in the original image instead of rows.)
Now the array A is the projected row/column histogram of your document, and the problem of detecting MRZs can be approached by examining the valleys in the A histogram.
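In Python/NumPy the profile itself is essentially a one-liner; a minimal sketch (the 0.5 fraction is just a hand-picked example threshold):

    import cv2
    import numpy as np

    bw = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(bw, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)   # black text on white

    A = np.sum(bw == 0, axis=1)                 # A[i] = number of black pixels in row i

    # Rows with many black pixels are candidate text rows; the gaps between
    # runs of them are the valleys that separate text lines from each other.
    text_rows = np.where(A > 0.5 * A.max())[0]
    print("candidate text rows:", text_rows)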
This problem, however, is not completely solved, so there are many variations and improvements. Here's some additional documentation:
Projection profiles in Google Scholar: http://scholar.google.com/scholar?q=projection+profile+method
Tesseract-ocr, a great open source OCR library: https://code.google.com/p/tesseract-ocr/
Viola & Jones' Haar-like features generate many (many (many)) features to try to describe an object and are a bit more robust to scale and the like. Theirs was a novel approach to a difficult problem.
Here, however, you have plenty of constraints on the problem, and anything like that seems a bit of an overkill. Rather than 'optimizing early', I'd say evaluate the standard OCR tools off the shelf and see where they get you. I believe you'll be pleasantly surprised.
PS:
You'll want to preprocess the image to isolate the characters on a white background. This can be done quite easily and will help the OCR algorithms significantly.
You might want to consider using the stroke width transform.
You can follow these tips to implement it.

Recommended pattern recognition technique for chess board

I'm trying to do an application which, among other things, is able to recognize chess positions on a computer screen from screenshots. I have very limited experience with image processing techniques and don't wish to invest a great amount of time in studying this, as this is just a pet project of mine.
Can anyone recommend me one or more image processing techniques that would yield me a good result?
The conditions are:
The image is always crisp and clean: no noise, no poor lighting conditions, etc. (since it's a screenshot)
I'm expecting a very low impact on computer performance while processing 1 image per second
I've thought of two modes to start the process:
Feed the piece shapes to the program (so that it knows what a queen, king etc. looks like)
Just feed the program an initial image which contains the startup position, from which the program can (after it recognizes the position of the board) pick each chess piece
The process should be relatively easy to understand, as I don't have a very good grasp of image processing techniques (yet)
I'm not interested in using any specific technology, so technology-agnostic documentation would be ideal (C/C++, C#, Java examples would also be fine).
Thanks for taking the time to read this, and I hope to get some good answers.
It's an interesting problem, but you need to specify a lot more than in your original question in order to find an acceptable answer.
On the input images: "screenshots" is quite a vague category. Can you assume that the chessboard will always be entirely in view? Will you have multiple views of the same board? Can you assume that no pieces will be partially or completely occluded in all views?
On the imaged objects and the capture system: will the same chessboard and pieces be used, under very similar illumination? Will the same lens/camera/digitization pipeline be used?
Salut Andrei,
I have written a coin-counting algorithm that works from a picture, so the process should be helpful.
The algorithm is called the Generalized Hough transform:
Make the picture black and white; it is easier that way.
Take the image of one piece and "slide it over the screenshot".
For each cell, calculate the number of common pixels in the two images.
Where you have the largest number, there you have the piece (a rough sketch of this slide-and-compare step follows below).
Hope this helps.
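A rough sketch of the slide-and-compare step (OpenCV's matchTemplate performs this sliding comparison; the file names and the 0.8 score threshold are assumptions of mine):

    import cv2
    import numpy as np

    board = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
    piece = cv2.imread("white_queen.png", cv2.IMREAD_GRAYSCALE)   # image of one piece

    # Make both images black and white, then slide the piece over the screenshot
    # and score every position by how well the pixels agree.
    _, board = cv2.threshold(board, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    _, piece = cv2.threshold(piece, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    scores = cv2.matchTemplate(board, piece, cv2.TM_CCOEFF_NORMED)

    ys, xs = np.where(scores > 0.8)            # keep only the best-matching positions
    for x, y in zip(xs, ys):
        print("possible piece at pixel", (x, y))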
Yeah, go with Salut Andrei's approach:
Convert the picture into greyscale.
Slice it into 64 squares and store them in an array.
Using Matlab you can identify the pieces easily.
The color can be obtained by calculating the percentage of black pixels:
value = no. of black pixels / (no. of black pixels + no. of white pixels)
If your value is above the threshold then WHITE, else BLACK (a small sketch of this follows below).
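A tiny sketch of that per-square test (the threshold value is a placeholder, and whether a high ratio means a white or a black piece depends on your piece set):

    import numpy as np

    def piece_colour(square, threshold=0.5):
        """square: one binarised (0/255) board cell containing a piece."""
        black = np.count_nonzero(square == 0)
        white = np.count_nonzero(square == 255)
        ratio = black / float(black + white)
        return "WHITE" if ratio > threshold else "BLACK"   # convention as in the answer above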
I'm working on a similar project in C#. Finding which piece is which isn't the hard part for me. The first step is to find a rectangle that contains just the board and cuts everything else out. I first hard-coded it to search for the colors of the squares, but I would like to make it more robust and reliable regardless of the color scheme. I'm trying to make it find squares of pixels that match within a certain threshold and to extrapolate the board location from that.

What's a simple and efficient method for extracting line segments from a simple 2D image?

Specifically, I'm trying to extract all of the relevant line segments from screenshots of the game 'asteroids'. I've looked through the various methods for edge detection, but none seem to fit my problem for two reasons:
They detect smooth contours, whereas I just need the detection of straight line segments, and only those within a certain range of length. Now, these constraints should make my task considerably easier than the general case, but I don't want to just use a full-blown edge detector and then clear the result of curved lines, as that would be prohibitively costly. Speed is of the utmost importance for my purposes.
They output a modified image where the edges are highlighted, whereas I want a set of pixel coordinates giving the endpoints of the detected line segments. Alternatively, a list of all of the pixels included in each segment would work as well.
I have an inkling that one possible solution would involve a Hough transform, but I don't know how to use it to get the actual locations of the line segments (i.e. endpoints in pixel space). Even if I did, I have no idea whether that would be the simplest or most efficient way of doing things, hence the general wording of the question title.
Lastly, here's a sample image:
Notice that all of the major lines are similar in length and density, and that the overall image contrast is very high. I'm hoping the solution to my problem will exploit these features, because again, efficiency is paramount.
One caveat: while most of the line segments in this context are part of a polygon, I don't want a solution that relies on this fact.
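For concreteness, this is roughly what I imagine the Hough route would look like (OpenCV's probabilistic Hough transform returns endpoint pairs directly; all parameter values below are guesses):

    import cv2
    import numpy as np

    img = cv2.imread("asteroids.png", cv2.IMREAD_GRAYSCALE)
    _, edges = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)   # the game frame is nearly binary already

    # Probabilistic Hough transform: one (x1, y1, x2, y2) row per detected segment.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                               minLineLength=10, maxLineGap=3)
    if segments is not None:
        for x1, y1, x2, y2 in segments.reshape(-1, 4):
            print("segment from", (x1, y1), "to", (x2, y2))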
Have a look at the Line Segment Detector algorithm.
Here's what they do:
You can find an impressive video at the bottom of the page.
There's a C implementation (that works with C++ compilers) that works out of the box. There are just one or two files, and no additional dependencies.
But be warned: the algorithm is under the GNU Affero GPL (AGPL) license.
Also check out EDLines: http://ceng.anadolu.edu.tr/cv/EDLines/
It is very fast and provides very useful output.

How to align two different pictures in such a way, that they match as close as possible?

I need to automatically align an image B on top of another image A in such a way that the contents of the images match as well as possible.
The images can be shifted in x/y directions and rotated up to 5 degrees on z, but they won't be distorted (i.e. scaled or keystoned).
Maybe someone can recommend some good links or books on this topic, or share some thoughts how such an alignment of images could be done.
If it weren't for the rotation problem, I could simply compare rows of pixels with a brute-force method until I found a match; then I would know the offset and could align the images.
Do I need AI for this?
I'm having a hard time finding resources on image processing which go into detail how these alignment-algorithms work.
What people often do in this case is first find points in the two images that match, then compute the best transformation matrix with least squares. The point matching is not particularly simple, and often you just use human input for this task; you have to do it all the time when calibrating cameras. Anyway, if you want to fully automate the process, you can use feature extraction techniques to find matching points; there are volumes of research papers written on this topic, and any standard computer vision text will have a chapter on it. Once you have N matching points, solving for the least-squares transformation matrix is pretty straightforward and, again, can be found in any computer vision text, so I'll assume you have that covered.
If you don't want to find point correspondences, you could directly optimize the rotation and translation using steepest descent. The trouble is that this is non-convex, so there are no guarantees you will find the correct transformation. You could do random restarts, simulated annealing, or any other global optimization trick on top of this; that would most likely work. I can't find any references for this problem, but it's basically a digital image stabilization algorithm. I had to implement it when I took computer vision, but that was many years ago; here are the relevant slides, though (look at "stabilization revisited"). Yes, I know those slides are terrible; I didn't make them :) However, the method for determining the gradient is quite an elegant one, since finite differences are clearly intractable.
Edit: I finally found the paper that goes over how to do this here; it's a really great paper and it explains the Lucas-Kanade algorithm very nicely. Also, this site has a whole lot of material and source code on image alignment that will probably be useful.
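A minimal sketch of the feature-matching route described above (ORB features, brute-force matching and a RANSAC least-squares fit in OpenCV; note that estimateAffinePartial2D also allows a uniform scale, and all names and parameters here are assumptions):

    import cv2
    import numpy as np

    imgA = cv2.imread("A.png", cv2.IMREAD_GRAYSCALE)
    imgB = cv2.imread("B.png", cv2.IMREAD_GRAYSCALE)

    # 1. Detect and describe features in both images.
    orb = cv2.ORB_create(1000)
    kpA, desA = orb.detectAndCompute(imgA, None)
    kpB, desB = orb.detectAndCompute(imgB, None)

    # 2. Match descriptors (Hamming distance suits ORB's binary descriptors).
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desB, desA)
    src = np.float32([kpB[m.queryIdx].pt for m in matches])
    dst = np.float32([kpA[m.trainIdx].pt for m in matches])

    # 3. Least-squares rotation + translation (RANSAC throws out bad correspondences).
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    aligned = cv2.warpAffine(imgB, M, (imgA.shape[1], imgA.shape[0]))
    cv2.imwrite("B_aligned.png", aligned)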
To align the two images you have to carry out an image registration technique.
In Matlab, write functions for image registration and select the features you want to use as references ('feature points') using the 'control point selection tool' to register the images.
Read more about image registration in the Matlab help to understand it properly.
