I have to find the crosses in the image. I know the exact position of each red square; now I have to decide whether there is a cross inside the square or not. What is the best and fastest way to do this? I am using OpenCV/C++. I could try OpenCV's SVM, but is it fast? Do you have any other ideas?
If you really have the coordinates of the center of each number box, and you can perhaps adjust the image acquisition a bit, this should be a feasible task.
The problem I see here is that you have a brightness gradient in your image, which you should get rid of either by taking a better picture or by applying a large Gaussian filter and subtracting the result from the image.
Otherwise I'm not sure you'll find a good brightness threshold to separate crossed from non-crossed boxes.
Another approach you could use is to calculate the variance of your pixels. This gives you a nice local measure of whether or not a dark pen spread out your pixel distribution. A quick test looks promising.
Note that I didn't have the real positions of the boxes. I just divided your image into equally sized regions, which is not really correct given the box/sub-box structure. Therefore there are some false positives, caused by the red triangles in each upper left corner and by some overlapping crosses.
So here's what I did:
Took your image without the red channel and converted it to a grayscale image.
Filtered this image with a Gaussian of radius 100 and subtracted the result from the image.
Divided your image into (7*6)x(7*2) subregions.
Calculated the variance of each subregion and used a variance threshold of about 0.017 for the above image.
Every box with a larger variance was counted as crossed.
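Here is a minimal OpenCV/C++ sketch of those steps; the file name, the example box rectangle and the 0.017 threshold are just placeholders from my quick test and will need adjusting for your image:

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Returns true if the pixel variance inside 'box' exceeds the threshold,
    // i.e. a dark pen probably spread out the pixel distribution there.
    bool isCrossed(const cv::Mat& grayFloat, const cv::Rect& box, double varThreshold = 0.017)
    {
        cv::Scalar mean, stddev;
        cv::meanStdDev(grayFloat(box), mean, stddev);
        return stddev[0] * stddev[0] > varThreshold;
    }

    int main()
    {
        cv::Mat bgr = cv::imread("form.png");                 // placeholder file name
        std::vector<cv::Mat> ch;
        cv::split(bgr, ch);                                   // BGR order: ch[0]=B, ch[1]=G, ch[2]=R
        cv::Mat gray;
        cv::addWeighted(ch[0], 0.5, ch[1], 0.5, 0.0, gray);   // "image without the red channel"
        gray.convertTo(gray, CV_32F, 1.0 / 255.0);

        // Remove the brightness gradient by subtracting a heavily blurred copy.
        cv::Mat background;
        cv::GaussianBlur(gray, background, cv::Size(0, 0), 100.0);
        cv::Mat flat = gray - background + 0.5f;

        cv::Rect box(100, 50, 40, 40);                        // you already know the real box positions
        std::cout << (isCrossed(flat, box) ? "crossed" : "empty") << std::endl;
        return 0;
    }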
Simple solution: if you know the locations of all boxes a priori, just calculate the average brightness of each box. Boxes with a mark will be much darker than empty boxes.
If not detecting red ink is an option, keep it simple: Accumulate all pixels within the red square and threshold on the "redness", i.e. the quotient of the sum of red values divided by the total color values.
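As a short OpenCV/C++ sketch of that quotient (the 0.5 cut-off is only a guess):

    #include <opencv2/opencv.hpp>

    // Fraction of "redness" inside a box: sum of red values over the sum of all channel values.
    bool boxLooksRed(const cv::Mat& bgr, const cv::Rect& box, double rednessThreshold = 0.5)
    {
        cv::Scalar s = cv::sum(bgr(box));                 // BGR order: s[0]=B, s[1]=G, s[2]=R
        double total = s[0] + s[1] + s[2];
        return total > 0.0 && (s[2] / total) > rednessThreshold;
    }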
Just find the rectangles and then do a simple pixel comparison.
I have a sheet of paper on another surface. I want to figure out where the paper ends and the surface begins.
I'm using this approach:
1. Convert image to pixel array
2. Pick 3 random 20x20 squares and frequency count the colors
3. The highest frequency is the background
However, the problem is that I get over 100 colors every time I do this on an actual image taken by the camera.
I think I can fix it by converting the image to a 16-color palette. Is there a way to do this with a UIImage or CGImage?
Thanks
Your colours are probably very close together. How about calculating the distance (the cumulative absolute difference between red, green and blue values) from each sampled colour to a reference colour - just use the first one you sample as reference. If the distance is large, you have a different colour. If the distance is small, you have the same colour with minor variations in lighting or other camera artefacts.
Basically this is applying a filter in a very simple manner. It is up to you to decide how big the difference has to be for the colours to be considered different, but you could decide that by looking at the median difference of all the colours and grouping them into over/under samples.
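For illustration, the distance measure in plain C++ (mapping it onto your UIImage/CGImage pixel buffer is up to you):

    #include <cstdlib>

    // A sampled colour and the cumulative absolute difference between two colours.
    struct Colour { int r, g, b; };

    int colourDistance(const Colour& a, const Colour& b)
    {
        return std::abs(a.r - b.r) + std::abs(a.g - b.g) + std::abs(a.b - b.b);
    }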
You might also get good results from applying a Core Image filter to the sample images, such as CIColorClamp (CISpotColor looks better but is OS X only). If you can find a suitable filter, there is a good chance it will be simpler and faster than doing it yourself.
I am interested in detecting a single object, more precisely a fire extinguisher, which has no inter-class variability (all fire extinguishers look the same). However, the application is supposed to run in real time, i.e. a robot is exploring the environment, and whenever it sees the object of interest it should be able to detect it and give its pixel coordinates.
My question is: which algorithm will be a good choice for this task?
1. Is this a classification problem, and should we use features (SIFT/SURF etc.) + BoW + SVM?
2. Some other solution (no idea yet).
Any kind of input will be appreciated.
Thanks.
(P.S. Bear with me, I am a newbie to computer vision and Stack Overflow.)
Update 1:
Height varies: all are mounted on the wall, but at different heights. I tried SIFT features and BoW, but extracting BoW descriptors at test time is expensive. Moreover, I have no idea how to locate the object (pixel coordinates) inside the image after it has been classified as positive.
Update 2:
I finally used SIFT + BoW + SVM and am able to classify the object. But with this technique, I only get output in terms of whether the object is present in the scene or not.
How can I detect the object, i.e. get the bounding box or centre of the object? What approach compatible with the above method would achieve this?
Thank you all.
I would suggest using color as the main feature to look for, and only try other features as needed. The fire extinguisher red is very distinctive, and should not occur too often elsewhere in an office environment. Other, more computationally expensive tests can then be performed only in regions of the right color.
Here is a good tutorial for color detection that also explains how to find good thresholds for your desired color.
I would suggest the following approach:
denoise your image with a median filter
convert the image to HSV format (Hue, Saturation, Value)
select pixels close to that particular shade of red with InRange()
Now you have a binary image that contains only the pixels that are red.
count the number of red pixels with CountNonZero()
If that number is too small, abort
remove noise from the binary image by morphological opening / closing
find contours of all blobs in your picture with findContours or the CvBlob library
check if there are blobs of the correct width, correct height and correct width/height ratio
since your fire extinguishers are vertical cylinders, the width/height ratio will be constant from every angle. The width and height will of course vary somewhat with distance to the camera.
if the width and height do not match, abort
repeat these steps to find the black-colored part on the bottom of the extinguisher,
abort if there is no black region with correct width/height below the red region
(perhaps also repeat these steps for the metallic top and the yellow rectangle)
These tests should all be very fast. If they are too slow, you could reduce the resolution of your input images.
Depending on your environment, it is possible that this is already a robust enough test. If not, you can proceed with SIFT/SURF feature matching, but only in a small region around the blobs with the correct color. You also do not necessarily have to do that for each frame; every n-th frame should be enough for confirmation.
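A rough OpenCV/C++ sketch of the colour-filtering steps above; the HSV ranges, the minimum pixel count and the size/ratio bounds are all assumptions that need tuning for your camera and lighting:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns bounding boxes of red blobs that are roughly extinguisher-shaped.
    std::vector<cv::Rect> findRedCandidates(const cv::Mat& bgr)
    {
        cv::Mat denoised, hsv, mask1, mask2, mask;
        cv::medianBlur(bgr, denoised, 5);                      // denoise
        cv::cvtColor(denoised, hsv, cv::COLOR_BGR2HSV);        // convert to HSV

        // Red wraps around hue 0 in OpenCV's 0-179 hue range, so use two ranges.
        cv::inRange(hsv, cv::Scalar(0, 100, 60),   cv::Scalar(10, 255, 255),  mask1);
        cv::inRange(hsv, cv::Scalar(170, 100, 60), cv::Scalar(179, 255, 255), mask2);
        mask = mask1 | mask2;

        if (cv::countNonZero(mask) < 500)                      // too few red pixels: abort early
            return {};

        // Morphological opening then closing to remove noise and fill small holes.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
        cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> candidates;
        for (const auto& c : contours)
        {
            cv::Rect box = cv::boundingRect(c);
            double ratio = static_cast<double>(box.width) / box.height;
            // Fire extinguishers are tall and narrow; the exact bounds are assumptions.
            if (box.height > 40 && ratio > 0.2 && ratio < 0.6)
                candidates.push_back(box);
        }
        return candidates;
    }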
This is an old question, but I would still like to recommend the YOLO algorithm for solving this problem.
YOLO fits this scenario very well.
What are possible fast ways to detect circles in an image?
For example:
I have an image with one big circle that has 6 small circles inside it.
I need to find the big circle without using Hough Circles (OpenCV).
Standard algorithms to find circles are Hough (which jamk mentioned in the comments) and RANSAC. Parameterizing these algorithms will set a baseline speed for your application.
http://en.wikipedia.org/wiki/Hough_transform
http://en.wikipedia.org/wiki/RANSAC
To speed up these algorithms, you can look at your collection of images and decide whether limiting the search ranges will help speed up the search. That's straightforward enough: only search within a reasonable range for the radius. Since they take edge points as inputs, you can also look at methods to reduce the number of edge points checked.
However, there are a few other tricks to speed up processing.
Carefully set the range or ranges over which radii are checked. For example, you might not simply check from the smallest possible radius to the largest possible radius, but instead you might split the search into two different ranges: from radius R1 to R2, and then from radius R3 to R4.
Ditch the Canny edge detection in favor of the fastest possible edge detection your application can tolerate. (You can ditch Canny for lots of applications.)
Preprocess your image of edge points to eliminate outliers. The appropriate algorithm to eliminate outliers will be specific to your image set, but you'll probably be able to find an algorithm that eliminates obvious outliers and thereby saves some search time in the more expensive circle fit algorithms.
If your circles are very well defined, and all or nearly all points are present, figure out how you might match only a quarter circle or semicircle instead of a full circle.
Long story short: start with a complete implementation and benchmark it, then gradually tighten up parameter settings and limit search ranges while ensuring that you can still find circles for your application and your image set.
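As an illustration of limiting the radius range: in OpenCV's HoughCircles the radius band and accumulator resolution are direct parameters (I know the question wants to avoid Hough, but the same restriction applies to a RANSAC fit just as well); all numbers below are placeholders:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Search only within a restricted radius band; tune every number for your images.
    std::vector<cv::Vec3f> findBigCircles(const cv::Mat& bgr)
    {
        cv::Mat gray;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);   // cheap smoothing before the fit

        std::vector<cv::Vec3f> circles;                      // each entry: (x, y, radius)
        cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                         2,        // dp: coarser accumulator resolution = faster
                         50,       // minDist between detected centres
                         100,      // upper Canny threshold used internally
                         40,       // accumulator vote threshold
                         80, 120); // minRadius, maxRadius: the restricted search range
        return circles;
    }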
If your images are amenable to scaling, then one possibility is to create an image pyramid of images at different scales: 1/2 scale, 1/4 scale, 1/8 scale, etc. You'll need an edge-preserving scaling method at smaller scales.
Once you have your image pyramid, try the following:
Find circles at the very smallest scale. The image will be small and the range of possible radii will be limited, so this should be a quick operation.
If you find a circle using the initial fit at the small scale, improve the fit by testing in the next larger scale image -OR- go ahead and search in the full scale image.
Check the next largest scale. Circles that weren't visible in the smaller scale image may suddenly "appear" in the current scale.
Repeat the steps above through all scales in the image.
Image scaling will be a fast operation, and you can see that if at least one of your circles is present in a smaller scale image you should be able to reduce the total number of cycles by performing a rough circle fit in the small scale image and then optimizing the fit for those edge points alone in the full scale image.
Edge-preserving scaling can also make it possible to use correlation-type tools to find circles, but being able to do so depends on the content of your images, including the noise, how completely edge points represent circles, and so on.
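A rough sketch of building such a pyramid with OpenCV's pyrDown (which smooths with a Gaussian before downsampling; whether that preserves your edges well enough depends on your images):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Build an image pyramid: levels[0] is full scale, levels[1] is 1/2 scale, and so on.
    std::vector<cv::Mat> makePyramid(const cv::Mat& image, int numLevels)
    {
        std::vector<cv::Mat> levels{image};
        for (int i = 1; i < numLevels; ++i)
        {
            cv::Mat smaller;
            cv::pyrDown(levels.back(), smaller);   // Gaussian blur + downsample by 2
            levels.push_back(smaller);
        }
        return levels;
    }
    // A circle of radius r found at level k corresponds to roughly radius r * 2^k at full scale.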
Maybe detect contours and check their properties, e.g. try to use cv::isContourConvex; another way could be to use the eigenvalues of the covariance matrix and check whether the contour's representative ellipse has a first eccentricity of ~0.
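A quick sketch of that shape check: fit an ellipse and treat nearly equal axes (eccentricity close to 0) as circle-like; the 0.9 cut-off is a guess:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Treat a contour as "circle-like" if its fitted ellipse has nearly equal axes.
    bool looksLikeCircle(const std::vector<cv::Point>& contour)
    {
        if (contour.size() < 5)                    // fitEllipse needs at least 5 points
            return false;
        cv::RotatedRect e = cv::fitEllipse(contour);
        float a = e.size.width, b = e.size.height;
        float ratio = a < b ? a / b : b / a;       // minor axis over major axis
        return ratio > 0.9f;                       // close to 1 means eccentricity close to 0
    }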
I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (in the example outlined by the scanlines) that should be around 5x5. If that area contains a gradient from black to white, I want OpenCV to save the position of that area, so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles, which are separated by black and white lines.
Is something like that possible? I googled a lot, but I only found edge detectors, and that's not what I want; I really need the detection of the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histogram.
You can use cvCalcHist for the task; then you can establish some threshold to determine whether the black/white pixel percentage corresponds to that of a gradient. This will not solve the task, but it will help you reduce complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it would be possible to find connected components (using cvFindContours) that separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain a piece of a white zone and a piece of a black zone simultaneously.
Hope it helps.
Thanks for your answer dnul, but it didn't really help me work this out. I thought about using a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, through each bordering 5px area. I checked each pixel and saved the ones which are darker than a certain threshold to a storage.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for neighbors with white pixels in all 3 channels. I then marked each of those pixels and saved them to another storage.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which intersect each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the corners of the marker.
If you want to reproduce this you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works. It looks for dark pixels near light pixels, based on a threshold that you pass in (there is also the adaptive Canny, which figures out the threshold for you), and sets them to all black or all white (i.e. 'marks' them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
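A minimal Canny call in OpenCV/C++; the two thresholds are placeholders you will have to tune:

    #include <opencv2/opencv.hpp>

    // Minimal Canny edge detection; threshold values are placeholders.
    cv::Mat detectEdges(const cv::Mat& bgr)
    {
        cv::Mat gray, edges;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::blur(gray, gray, cv::Size(3, 3));   // light smoothing to reduce noise first
        cv::Canny(gray, edges, 50, 150);        // lowThreshold, highThreshold
        return edges;
    }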
How can I get rid of uneven illumination in images that contain text data, usually printed but possibly handwritten? The image can have some bright spots because light was reflected while taking the picture.
I've seen the Halcon program's segment_characters function, which does this work perfectly, but it is not open source.
I wish to convert the image to one that has constant background illumination and darker-colored text regions, so that binarization will be easy and without noise.
The text is assumed to be darker than its background.
Any ideas?
Strictly speaking, assuming you have access to the image's pixels (you can search online for how to accomplish this in your programming language, as the topic is abundantly covered), the exercise involves going over the pixels once to determine a "darkness threshold". To do this, you convert each pixel from RGB to HSL in order to get its lightness component. During this pass you calculate the average lightness for the whole image, which you can use as your "darkness threshold".
Once you have the image's average lightness level, you can go over the image pixels once more: if a pixel's lightness is below the darkness threshold, set its color to full black RGB(0,0,0); otherwise, set its color to full white RGB(255,255,255). This will give you a binary image in which the text should be black and the rest should be white.
Of course, the key is finding the appropriate darkness threshold, so if the average method doesn't give you good results you may have to come up with a different method for that step. Such a method could involve splitting the image into the primary channels Red, Green and Blue, computing a darkness threshold for each channel separately, and then using the most aggressive of the three.
And lastly, a better approach may be to compute the distribution of lightness levels, as opposed to simply the average, and then keep the range around the maximum. Again, go over each pixel, and if its lightness fits the band make it black; otherwise, make it white.
EDIT
For further reading about HSL I recommend starting with the Wikipedia entry on the HSL and HSV color spaces.
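A compact OpenCV/C++ sketch of the average-lightness idea above (OpenCV calls the colour space HLS and stores lightness in channel 1; the plain mean is the simplest threshold choice and may need the refinements discussed):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Binarize by comparing each pixel's lightness to the image's average lightness.
    cv::Mat binarizeByLightness(const cv::Mat& bgr)
    {
        cv::Mat hls;
        cv::cvtColor(bgr, hls, cv::COLOR_BGR2HLS);       // lightness ends up in channel 1
        std::vector<cv::Mat> ch;
        cv::split(hls, ch);
        cv::Mat lightness = ch[1];

        double threshold = cv::mean(lightness)[0];       // average lightness of the whole image

        cv::Mat binary;
        // Pixels darker than the average become black (the text), the rest white.
        cv::threshold(lightness, binary, threshold, 255, cv::THRESH_BINARY);
        return binary;
    }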
Have you tried using morphological techniques? Closing-by-reconstruction (as presented in Gonzalez, Woods and Eddins) can be used to create a grayscale representation of the background illumination level. You can more or less standardize the effective illumination by:
1) Calculating the mean intensity of all the pixels in the image
2) Using closing-by-reconstruction to estimate the background illumination level
3) Subtracting the output of (2) from the original image
4) Adding the mean intensity from (1) to every pixel in the output of (3).
Basically what closing-by-reconstruction does is remove all image features that are smaller than a certain size, erasing the "foreground" (the text you want to capture) and leaving only the "background" (illumination levels) behind. Subtracting the result from the original image leaves behind only the small-scale deviations (the text). Adding the original average intensity to those deviations simply makes the text readable, so that the resulting picture looks like a light-normalized version of the original image.
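OpenCV has no single closing-by-reconstruction call, so this sketch approximates step 2 with a plain morphological closing whose structuring element (the 31x31 size is a guess) is larger than the text strokes:

    #include <opencv2/opencv.hpp>

    // Approximate light normalization: estimate the background with a large closing,
    // subtract it from the image, then add back the global mean intensity.
    cv::Mat normalizeIllumination(const cv::Mat& gray)
    {
        double meanIntensity = cv::mean(gray)[0];                      // step 1

        cv::Mat background;
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(31, 31));
        cv::morphologyEx(gray, background, cv::MORPH_CLOSE, kernel);   // step 2 (plain closing as an approximation)

        cv::Mat diff;
        cv::subtract(gray, background, diff, cv::noArray(), CV_32F);   // step 3 (text pixels go negative)

        cv::Mat result;
        diff.convertTo(result, CV_8U, 1.0, meanIntensity);             // step 4: add the mean, clamp to 8-bit
        return result;
    }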
Use local thresholding instead of a global thresholding algorithm.
Divide your image (grayscale) into a grid of smaller images (say 50x50 px) and apply the thresholding algorithm to each individual image.
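One way to sketch this in OpenCV/C++ is to run Otsu's threshold independently on each tile (Otsu per tile is my assumption; the 50x50 tile size comes from the answer above):

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    // Thresholds each grid tile independently with Otsu's method; edge tiles are simply clipped.
    cv::Mat localThreshold(const cv::Mat& gray, int tile = 50)
    {
        cv::Mat out = gray.clone();
        for (int y = 0; y < gray.rows; y += tile)
            for (int x = 0; x < gray.cols; x += tile)
            {
                cv::Rect roi(x, y, std::min(tile, gray.cols - x), std::min(tile, gray.rows - y));
                cv::Mat patch = out(roi);
                cv::threshold(patch, patch, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
            }
        return out;
    }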
If the background features are generally larger than the letters, you can try to estimate and subsequently remove the background.
There are many ways to do that, a very simple one would be to run a median filter on your image. You want the filter window to be large enough that text inside the window rarely makes up more than a third of the pixels, but small enough that there are several windows that fit into the bright spots. This filter should result in an image without text, but with background only. Subtract that from the original, and you should have an image that can be segmented with a global threshold.
Note that if the bright spots are much smaller than the text, you do the inverse: choose the filter window such that it removes only the light spots.
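A sketch of that in OpenCV/C++; the 51-pixel median window is a guess and must stay large relative to the stroke width (medianBlur needs an odd kernel size). The subtraction is written as background minus image so that the dark text comes out bright:

    #include <opencv2/opencv.hpp>

    // Estimate the background with a large median filter, subtract it, then threshold globally.
    cv::Mat removeBackground(const cv::Mat& bgr)
    {
        cv::Mat gray, background, diff, binary;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::medianBlur(gray, background, 51);     // window large relative to the strokes (odd size)
        cv::subtract(background, gray, diff);     // text becomes bright, background ~0
        cv::threshold(diff, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);   // global threshold now works
        return binary;
    }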
The first thing you should try is to change the lighting: use a dome light or some other light that will give you more diffuse and even illumination.
If that's not possible, you can try some of the ideas in this question or this one. You want to implement some type of "adaptive threshold": this will apply a local threshold to individual parts of the image so that the change in contrast won't be as noticeable.
There is also a simple but effective method explained here. The simple outline of the algorithm is the following:
Split the image up into NxN regions or neighbourhoods
Calculate the mean or median pixel value for the neighbourhood
Threshold the region based on the value calculated in 2) or the value from 2) minus C (where C is a chosen constant)
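If OpenCV is an option, this mean-minus-C scheme maps directly onto adaptiveThreshold; blockSize and C below are placeholders:

    #include <opencv2/opencv.hpp>

    // Mean-minus-C local thresholding via adaptiveThreshold.
    cv::Mat localBinarize(const cv::Mat& gray)
    {
        cv::Mat binary;
        cv::adaptiveThreshold(gray, binary, 255,
                              cv::ADAPTIVE_THRESH_MEAN_C,   // local mean; ADAPTIVE_THRESH_GAUSSIAN_C also exists
                              cv::THRESH_BINARY,
                              25,                           // blockSize: the NxN neighbourhood (must be odd)
                              10);                          // C: constant subtracted from the local mean
        return binary;
    }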
It seems like what you're trying to do is improve local contrast while attenuating larger scale lighting variations. I'll agree with other posters that optimizing the image through better lighting should always be the first move.
After that, here are two tricks.
1) Use the smooth_image() operator to convolve your original image with a Gaussian. Use a relatively large kernel, like 20-50 px. Then subtract this blurred image from your original image. Apply scale and offset within the sub_image() operator, or use equ_histo() to equalize the histogram.
This basically subtracts the low spatial frequency information from the original, leaving the higher frequency information intact.
2) You could try the highpass_image() operator, or one of the Laplacian operators, to extract a gradient image.