cvHoughCircles() parameters in JavaCV? - opencv

I want to find circles in an image by using cvHoughCircles() .
But I am confused about the fourth parameter: when I use 1, cvHoughCircles() does not find any circles, and when I use 2, the method works properly and detects all circles in the image.
Click Here to see the screenshot of my program for both cases.
I did the same operation on another image, but this time changing the value of the fourth parameter from 1 to 2 didn't affect the result (cvHoughCircles() returned the same result whether the fourth parameter was 1 or 2).
Can anyone please tell me what value should be used for the fourth parameter when working with different images?

Check out this link:
http://docs.opencv.org/modules/imgproc/doc/feature_detection.html
It lists the C/C++/Python implementations of all the functions, stating what each parameter does, and I've always found that one of them is what JavaCV has been wrapping (in this case the C code). I was actually looking for this page when I came across your post, so if it happens again I can now follow my own link (awesome!). Now to answer your question as best I can.
The function looks like this:
CvSeq* cvHoughCircles(CvArr* image, void* circle_storage, int method, double dp, double min_dist, double param1=100, double param2=100, int min_radius=0, int max_radius=0 )
Where the site describes:
dp – Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half as big width and height.
I am guessing (based on what I remember from class) that this refers to the pyramid scheme that is sometimes used in feature detection. Basically, you average the pixels of an image to get a smaller image in order to find the locations of important features like corners, or in this case circles, which in the end is based on gradient information (hence the black and white or greyscale image that should be used).
Using dp=1 should be perfectly fine; however, make sure to call cvSmooth() on the image first so the gradient vectors form a nice ring around each circle. If you know there is a circle, you could keep smoothing and dilating (cvDilate) until the circle appears, but then you may detect artifacts, so the biggest circle should be the one of interest. In the end it depends on the situation you are putting the algorithms through.
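To make this concrete, here is a minimal sketch of the call using JavaCV's C-style wrappers; the image variable, the smoothing kernel, and the param1/param2 values are illustrative assumptions to tune, not values from the question:

IplImage gray = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
cvCvtColor(img, gray, CV_BGR2GRAY);             // cvHoughCircles() wants a single-channel image
cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9, 2, 2);  // smooth so the gradients form clean circles
CvMemStorage storage = CvMemStorage.create();
CvSeq circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT,
        1,                  // dp = 1: accumulator at full image resolution
        gray.height() / 8,  // min_dist between detected centers (a guess)
        100, 50,            // param1/param2: Canny and accumulator thresholds (guesses)
        0, 0);              // min/max radius: 0 leaves them unconstrained

If dp=1 still finds nothing on a well-smoothed image, lowering param2 (the accumulator threshold) is usually the next knob to try before falling back to dp=2.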

Related

Cropping image by selecting object and color matching

We are developing an app where we need to crop an image according to the selected object's area. The user will draw a line, and we need to select the object and crop it. The crop needs to work like it does in the app YourMoji.
So far we have tried to get the color of the pixels along the line, compare those with the color of every pixel in the image, and make a path from that to clip the image. But this approach is going almost nowhere.
Is it possible to crop an image this way, or are we going in the wrong direction? Can anyone provide a way to do this, or suggest how to modify what we have done so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two GitHub repositories; I hope these help:
GraphCut
GrabCutIOS
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this (a rough code sketch follows the lists):
Take the average of the pixels in the line (as you have)
Since you appear to want faces, you might want to weight reds and blues over green. Not much green in faces of any skin tone.
For each pixel, if the colour is further than a given threshold from your selected average, remove it / make it transparent.
Perhaps the closer to the original line (or centroid), the less strict the threshold becomes.
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that your algorithm missed
Paintbrush: to replace parts of the image that your algorithm incorrectly removed.
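As a rough sketch of the colour-threshold idea above, in plain Java with BufferedImage (the method name, channel weights, and distance metric are my own assumptions, not tested app code):

import java.awt.image.BufferedImage;

// Make pixels transparent when their colour is far from the average sampled
// along the user's line; avgR/avgG/avgB come from that average.
static BufferedImage maskByColour(BufferedImage src, int avgR, int avgG, int avgB, int threshold) {
    BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int rgb = src.getRGB(x, y);
            int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
            // weight red and blue over green, as suggested above
            int dist = 2 * Math.abs(r - avgR) + Math.abs(g - avgG) + 2 * Math.abs(b - avgB);
            out.setRGB(x, y, dist <= threshold ? rgb | 0xFF000000 : 0); // 0 = fully transparent
        }
    }
    return out;
}

The Sensitivity tool then simply maps to the threshold parameter, and the eraser/paintbrush edit the resulting alpha mask.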

How to separate the query and the train image from the Mat object returned from DrawMatches() method

I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and BRUTEFORCE as the matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and point it toward me, my face gets detected and a rectangle is drawn around it. I tried another test: I captured an image of my mouse and resized it, but when I run the cam, it is not detected.
The problems I am facing are:
1 - Does the size of the query/object image matter in such cases? I am asking because the image I captured of myself is bigger than the one of the mouse, and the face gets detected while the mouse does not.
2 - Regardless of which image I am using as the query/object image, how do I display a camera preview of only the train/scene image, without the query/object image? What I am getting is something like the images posted below, while what I want is something like what is shown here. I checked the code at that link; it is in C++, but I followed the same steps. The tutorial uses the 'drawMatches' method, which has a Java counterpart, Features2d.drawMatches(), and both return a Mat object with the query/object image on the left side and the train/scene image on the right side, as also shown in the image posted below.
What I want is to display the camera output without the query/object image; the area designated for the camera output should show only the train/scene image captured from the camera.
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I cited in the link.
1 - Size matters, but in your case I think the most crucial problem is "textureness". SURF detects interest points where the "texture gradient" is strong. In the case of your mouse, the gradient is mostly smooth, except around the logo (Fujitsu), the button, and the border of the image. In the tutorial you point to, notice that a very textured object is used to demonstrate the effect.
2 - To the best of my knowledge, there is no fully automatic method to do what you want, but it can be done in a few steps. Basically, you must determine the surrounding box of your object and then draw it. To draw it, the easiest way is to use cv::rectangle, but you can be more precise with four (or more) cv::line calls. To determine the surrounding box, you can estimate the extreme points among the filtered matches.
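As a hedged sketch of that last step with OpenCV's 3.x Java bindings (the helper name and the way the matches are filtered are assumptions; only trainIdx indexing the train/scene keypoints comes from the API):

import org.opencv.core.*;
import java.util.List;

// Estimate the surrounding box from the extreme coordinates of the matched
// scene keypoints; image coordinates are non-negative, so 0 is a safe maximum seed.
static Rect boundingBoxOfMatches(List<DMatch> goodMatches, List<KeyPoint> sceneKeypoints) {
    double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE, maxX = 0, maxY = 0;
    for (DMatch m : goodMatches) {
        Point p = sceneKeypoints.get(m.trainIdx).pt;  // trainIdx indexes the train/scene image
        minX = Math.min(minX, p.x);  maxX = Math.max(maxX, p.x);
        minY = Math.min(minY, p.y);  maxY = Math.max(maxY, p.y);
    }
    return new Rect(new Point(minX, minY), new Point(maxX, maxY));
}

Drawing that Rect on the scene Mat alone (Imgproc.rectangle in OpenCV 3.x, Core.rectangle in 2.4) gives a camera preview without the query image glued to its side.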
Good luck!

EmguCV Shape Detection affected by Image Size

I'm using the Emgu shape detection example application to detect rectangles on a given image. The dimensions of the resized image appear to impact the number of shapes detected even though the aspect ratio remains the same. Here's what I mean:
Using (400,400), actual img size == 342,400
Using (520,520), actual img size == 445,520
Why is this so? And how can the optimal value be determined?
Thanks
I replied to your post on EMGU but figured you hadn't checked back, so here it is. The shape detection works on the principle of thresholding unlikely matches, which prevents lots of false classifications. This is true for many image processing algorithms. Basically there are no perfect settings; a designer must select the most appropriate settings to produce the most desirable results, i.e., match the most objects without reporting more than there actually are.
You will need to adjust each variable individually to see what kind of results you get. Start off with the edge detection.
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
Have a look at your smaller image and see what the difference is between the rectangles that are detected and the one that isn't. You could be missing an edge or a corner, which is why it's not classified. If you are, adjust cannyThreshold and observe the results: if they're good, keep the value; if bad, go back to the original. Once satisfied, adjust cannyThresholdLinking and observe.
Keep repeating this until you get a preferred image. The advantage here is that you have three items to compare: continue until the item that's not being recognised matches the other two.
If the results are similar (likely, as it is a black and white image), you'll need to go on to the Hough lines detection.
LineSegment2D[] lines = cannyEdges.HoughLinesBinary(
    1,              // Distance resolution in pixel-related units
    Math.PI / 45.0, // Angle resolution measured in radians
    20,             // threshold
    30,             // min line width
    10              // gap between lines
    )[0];           // Get the lines from the first channel
Use the same method of adjusting one value at a time and observing the output, and you will hopefully find the settings you need. Never jump in with both feet and change all the values, as you will never know whether you're improving the accuracy or not. Finally, if all else fails, look at the section that inspects the Hough results for a rectangle:
if (angle < 80 || angle > 100)
{
    isRectangle = false;
    break;
}
There are fewer variables to change here, as Hough should do most of the work for you, but the problem could still lie in this part.
I'm sorry there is no straightforward answer, but I hope you keep at it and solve the problem. Otherwise you could always resize the image each time.
Cheers
Chris

What processing steps should I use to clean photos of line drawings?

My usual method of 100% contrast and some brightness adjustment to tweak the cutoff point usually works reasonably well to clean up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, as with this image:
What other methods besides contrast (or instead of) can I use to give me a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python) so if you have anything specific to them it would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folk's images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter
im = Image.open(r'c:\temp\temp.png')
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width,height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        # scale each channel to the local white point, then keep the strongest channel
        # (integer division; the original code relied on Python 2 int division)
        greypix[x,y] = min(255, max(255 * impix[x,y][0] // whitepix[x,y][0],
                                    255 * impix[x,y][1] // whitepix[x,y][1],
                                    255 * impix[x,y][2] // whitepix[x,y][2]))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in the regions with poor lighting and nikie's does not; which method you prefer will depend on whether there is information in the poorly lit areas that you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        # shift each channel by the local white point instead of scaling
        greypix[x,y] = min(255, max(255 + impix[x,y][0] - whitepix[x,y][0],
                                    255 + impix[x,y][1] - whitepix[x,y][1],
                                    255 + impix[x,y][2] - whitepix[x,y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove the different background illumination is to calculate a "white image" from the image, by opening the image.
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (EDITED: using a circular structuring element produces fewer visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);
mask = fspecial("disk",10);
opened = imerode(imdilate(blue,mask),mask);
Result:
Then subtract this from the source image:
background_subtracted = opened-blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the result using morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
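For illustration, a minimal sketch of that pipeline with OpenCV's 3.x Java bindings; the file name, kernel size, and gradient weights are placeholder guesses to tune:

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

static Mat cleanLineDrawing(String path) {
    Mat gray = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
    Mat gx = new Mat(), gy = new Mat();
    Imgproc.Sobel(gray, gx, CvType.CV_16S, 1, 0);   // horizontal gradient
    Imgproc.Sobel(gray, gy, CvType.CV_16S, 0, 1);   // vertical gradient
    Core.convertScaleAbs(gx, gx);                   // back to 8-bit absolute values
    Core.convertScaleAbs(gy, gy);
    Mat edges = new Mat();
    Core.addWeighted(gx, 0.5, gy, 0.5, 0, edges);   // rough gradient magnitude
    Mat binary = new Mat();
    Imgproc.threshold(edges, binary, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
    // close small gaps in the strokes: dilation followed by erosion
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(3, 3));
    Imgproc.morphologyEx(binary, binary, Imgproc.MORPH_CLOSE, kernel);
    return binary;
}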
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you image it (or just use a scanner), because then you don't have to worry as much about global vs. local thresholding.

How to scale on-screen pixels?

I have written a 2D Jump&Run engine that produces a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3, or 4, according to the resolution of the user.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading. If you're on or moving to XNA 4.0, this might be worth reading.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
You could use a pixelation effect: draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image", but if you mean your end result is a texture, then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to anti-aliasing the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's either the Point or the None TextureFilter. I'm at work so I'm trying to remember off the top of my head. I'll confirm one way or the other later today.
