Well, I am just beginning to learn OpenCV, and I want to achieve the following. Can anybody give me more details and suggestions? Thanks.
1. I have detected the closed shape successfully.
2. I use the findContours() function to extract the outline.
Next, I want to find the inflection points, so I use the Harris corner algorithm.
Now I want to extract the sub-curves that are split up by those points, but I have no idea how to do it.
Sorry, here is some additional information: my input images are black-and-white images.
the black-and-white image
Then I detect the outer contour (PS: I don't know why there is something inside)...
Then I marked the inflection points
The inflection points of the curves
You have to fit your curve using a math model f(s) = M, where M = (x, y) is a point on the curve and s is the curvilinear (arc-length) coordinate. Then you can calculate the derivative at each point and find the points of interest.
As a math model you can use splines or Fourier descriptors...
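For the spline route, here is a rough sketch in Python (it uses SciPy for the spline fit, which is an extra dependency; the sample count and the curvature threshold are placeholders to tune). It fits a closed spline to the contour from findContours, takes curvature peaks as the corner points, and splits the resampled curve at those points:

```python
# Sketch: fit a spline to a contour, compute curvature, and split the contour
# at high-curvature points. `contour` is assumed to come from cv2.findContours
# (shape N x 1 x 2).
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.signal import find_peaks

def split_contour_at_corners(contour, n_samples=400, smoothing=5.0):
    pts = contour.reshape(-1, 2).astype(float)
    x, y = pts[:, 0], pts[:, 1]

    # Fit a periodic (closed) parametric spline x(s), y(s).
    tck, _ = splprep([x, y], s=smoothing, per=True)
    s = np.linspace(0, 1, n_samples, endpoint=False)

    # First and second derivatives along the curve.
    dx, dy = splev(s, tck, der=1)
    ddx, ddy = splev(s, tck, der=2)

    # Curvature of a parametric curve.
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5

    # Treat curvature peaks as the corner points (threshold is a placeholder).
    peaks, _ = find_peaks(curvature, height=np.mean(curvature) * 3)

    # Split the resampled curve into sub-curves between consecutive peaks.
    xs, ys = splev(s, tck)
    points = np.stack([xs, ys], axis=1)
    sub_curves = [points[a:b] for a, b in zip(peaks, np.r_[peaks[1:], len(s)])]
    return sub_curves, points[peaks]
```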
I would like to draw an arbitrary open set R on the xy-plane in R^3 as a sort of "blob", i.e. not anything with a particular shape. Does anyone have an idea of how?
You can use the Spline command to create a "blob" that goes through those points. In 2D it can be filled, in 3D you would have to approximate it by a polygon to fill it. The following command works well if your spline is called a.
Polygon(Sequence(a(k),k,0,1,0.01))
Please note that this is not a programming question and is better suited for the GeoGebra help forum.
I'm new to image processing and hope someone can help/guide me in the right direction.
So I have a picture in black and white and I want to find the corner coordinates of the inner black part of the preprocessed picture. My question is: what kind of method(s) will yield the most accurate result?
I want something like this (red dots show the inner corners)
Go with cv::goodFeaturesToTrack() and play with the params until you get your result.
You can refer to this on why to choose it and not cornerHarris: goodFeaturesToTrack vs cornerHarris
and also to this SO answer for an example: opencv-using-cvgoodfeaturestotrack-with-c-mat-variable
Of course, I assume you are using C++; if you are using Python it won't change much...
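For instance, a minimal Python sketch (the file name and all parameter values are placeholders to tune for your image):

```python
# Rough sketch: detect the inner corners of the binary image with
# goodFeaturesToTrack and draw them for inspection.
import cv2

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)

corners = cv2.goodFeaturesToTrack(
    img,
    maxCorners=10,      # expected number of corners, with some margin
    qualityLevel=0.01,  # relative corner-quality threshold
    minDistance=20,     # minimum distance between returned corners
)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if corners is not None:
    for x, y in corners.reshape(-1, 2):
        cv2.circle(vis, (int(x), int(y)), 4, (0, 0, 255), -1)
cv2.imwrite("corners.png", vis)
```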
Good luck, and try to do a search before asking next time.
I don't know what language you're using, which makes it harder to answer this question, but the way I did it last time was by applying OpenCV's Canny edge detection algorithm. This allows you to see the edges in the image. Next, find the contours. With those two functions and a little help from your friend Google, you will be able to figure this out. Good luck!
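A rough sketch of that pipeline in Python (the file name and the Canny thresholds are placeholders):

```python
# Sketch: Canny edge detection followed by contour extraction.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# findContours returns (contours, hierarchy) in OpenCV 4.x.
contours, hierarchy = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} contours")
```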
One of the approaches would be to use just Sobel independently in X and Y (on the original image). Since you already have a binary image, just find the inner edge, which will be easy.
Those edges can then be sampled: take a few points from each edge, and with the help of the OpenCV function cv::fitLine find the line of the edge. Do the same with all the inner edges and compute the intersections. This approach should be fairly accurate, since from the fitLine step onwards you compute the corners with sub-pixel accuracy.
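A minimal sketch of the fitLine-plus-intersection step in Python; `edge_a` and `edge_b` are hypothetical arrays of sampled (x, y) points, one per edge:

```python
# Sketch: fit a line to each sampled edge with cv2.fitLine and intersect the
# two lines to get a sub-pixel corner.
import numpy as np
import cv2

def line_from_points(points):
    # fitLine returns (vx, vy, x0, y0): a direction vector and a point on the line.
    vx, vy, x0, y0 = cv2.fitLine(np.float32(points), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return np.array([x0, y0]), np.array([vx, vy])

def intersect(p1, d1, p2, d2):
    # Solve p1 + t*d1 = p2 + u*d2 for the intersection point.
    A = np.array([d1, -d2]).T
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

# corner = intersect(*line_from_points(edge_a), *line_from_points(edge_b))
```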
Did somebody already try to find a pizza marker like this one with "only" OpenCV?
I was trying to detect this one but couldn't get good results so far. I do not know where this marker is in the picture (no ROI is possible), the marker will be somewhere in the room (different lighting conditions) and not facing orthogonally towards us. What I want are the corners, and later the orientation of the marker extracted from those corners, but first of all only the 5 corners (up, down, left, right, center).
What I have tried so far: thresholding, noise clearing, and finding contours, but nothing really gave a good result. Chessboards or square markers are normally found because of their (parallel) lines; I guess this can't help me here...
What is an easy way to find those markers?
How would you start?
Use another color format like HSV?
A step-by-step idea or tutorial would be really helpful, because I couldn't find tutorials on the net. Maybe this marker isn't called a pizza marker; does somebody know its real name?
Thanks for the help.
First - thank you for all of your help.
It seems that several methods are useful, some more and some less computationally expensive.
For me the easiest was template matching, but not with the full marker.
I used only a small part of it...
This can be found 5 times (4 times negative and one positive) in this new marker:
Now I use only the 4 most negative points and the most positive one, and get the 5 points that I finally wanted. To make this more robust, I check whether they are close to each other and then do a cornerSubPix().
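For reference, a rough sketch of that template-matching idea in Python ("scene.png" and "template.png" are placeholder file names, the window sizes are guesses, and in practice you would suppress nearby duplicate responses before picking the 4 most negative ones):

```python
# Sketch: signed template matching, pick the 4 most negative and the single
# most positive responses, then refine the 5 points with cornerSubPix.
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
tmpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# TM_CCOEFF_NORMED is signed: strongly positive where the patch matches,
# strongly negative where its contrast-inverted version matches.
res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)

flat = res.ravel()
neg_idx = np.argsort(flat)[:4]   # 4 most negative responses
pos_idx = int(np.argmax(flat))   # single most positive response
h, w = res.shape
points = [(i % w, i // w) for i in list(neg_idx) + [pos_idx]]

# Shift from the match's top-left corner to the patch centre, then refine
# each of the 5 points to sub-pixel accuracy.
offset = np.float32([tmpl.shape[1] / 2, tmpl.shape[0] / 2])
pts = np.float32(points).reshape(-1, 1, 2) + offset
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(img, pts, (5, 5), (-1, -1), criteria)
```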
If you need something that can operate in real time, I'd go down the edge detection route and look for intersecting lines like these guys did. It seems fast and robust to lighting changes.
Read up on the Hough Line Transform in OpenCV to get started.
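A minimal sketch to get started in Python (the file name and all Canny/Hough thresholds are placeholders to tune):

```python
# Sketch: Canny edges followed by the probabilistic Hough transform to find
# candidate line segments, drawn for inspection.
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)

vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(vis, (x1, y1), (x2, y2), (0, 255, 0), 1)
cv2.imwrite("lines.png", vis)
```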
Addendum:
Black to white is the strongest edge you can have. If you create a gradient image and use the strongest edges found in the scene (via a histogram or otherwise), you will be able to limit the detection to only the black/white edges. Look for intersections. This should give you a small number of center points to apply Hough ellipse detection (or an alternative) to. You could rotate in a template as a further check if you wish.
BTW, OpenCV has edge detection, the Hough transform and fitEllipse if you do go down this route.
Actually, this 'pizza' pattern is one of the building blocks of the Haar features used in the Viola–Jones object detection framework.
So what I would do is compute the summed area table, or integral image, using cv::integral(img) and then run an exhaustive search for this pattern at various scales (size dependent).
In each window you are using only 9 points (top-left, top-center, ..., bottom-left).
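A rough sketch of the integral-image part in Python; the scoring function here is only a simplified stand-in, not the exact Haar-like feature:

```python
# Sketch: the sum over any axis-aligned rectangle comes from 4 lookups in the
# integral image, so evaluating a Haar-like response in every window is cheap.
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
ii = cv2.integral(img)  # shape (h + 1, w + 1)

def rect_sum(x, y, w, h):
    # Sum of pixels in the rectangle [x, x+w) x [y, y+h) via 4 corner lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def pizza_score(x, y, size):
    # Placeholder score: compare the four quadrants of the window, as a
    # checkerboard-like Haar feature would.
    half = size // 2
    tl = rect_sum(x, y, half, half)
    tr = rect_sum(x + half, y, half, half)
    bl = rect_sum(x, y + half, half, half)
    br = rect_sum(x + half, y + half, half, half)
    return abs((tl + br) - (tr + bl))
```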
You can also train and use cvHaarDetectObjects to detect the marker with Viola–Jones.
Probably not the fastest method but it should work.
You can find more info on object detection methods using OpenCV here: http://opencv.willowgarage.com/documentation/object_detection.html
I'm doing a project on hand posture recognition with OpenCV.
I've done the segmentation using normalizedRGB and found the contours using cvFindContours. Now I need to decide which features to extract.
What are the best features to extract? What is the best method of classification for this case?
Thanks for the help ...
If your hand is static, you can fit a polygon to your contour and try to characterise the polygon according to your gesture: the number of sides of the polygon, the angles between the sides, the areas of A, B, C, D, as shown in the following diagram...
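A minimal sketch of the static case in Python, assuming `contour` comes from your findContours step (the epsilon factor is a placeholder to tune):

```python
# Sketch: fit a polygon to the hand contour with approxPolyDP and pull out
# simple shape features (number of sides, area, vertex angles).
import cv2
import numpy as np

def polygon_features(contour):
    perimeter = cv2.arcLength(contour, True)
    poly = cv2.approxPolyDP(contour, epsilon=0.01 * perimeter, closed=True)

    n_sides = len(poly)
    area = cv2.contourArea(poly)

    # Interior angle at each vertex, in degrees.
    pts = poly.reshape(-1, 2).astype(float)
    angles = []
    for i in range(n_sides):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n_sides]
        v1, v2 = a - b, c - b
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1, 1))))

    return n_sides, area, angles
```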
If your hand is moving, then you can use cvCalcOpticalFlow to track the hand and, depending on the displacement between two consecutive frames, characterise the direction of movement and take a decision...
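A rough sketch of the moving case in Python, using calcOpticalFlowPyrLK (the cv2 counterpart of the old cvCalcOpticalFlow* functions); `prev_points` would come from something like goodFeaturesToTrack on the hand region:

```python
# Sketch: track points between consecutive frames with pyramidal Lucas-Kanade
# optical flow and read off the dominant displacement direction.
import cv2
import numpy as np

def dominant_motion(prev_gray, next_gray, prev_points):
    # prev_points: N x 1 x 2 float32 array of points to track.
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      prev_points, None)
    good_old = prev_points[status.ravel() == 1]
    good_new = next_points[status.ravel() == 1]
    dx, dy = np.mean(good_new - good_old, axis=0).ravel()
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```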
I am looking for an efficient way to detect the small boxes around the numbers (see images).
I already tried to use the Hough transform with no success. Any ideas? I need some hints! I am using OpenCV...
For inspiration, you can have a look at the Matlab video sudoku solver demo and explanation, and at Sudoku Grab, an iPhone app whose author explains the computer vision part on his blog.
Alternatively, if you are always hunting for the same grid you could deploy something like this:
Make a perfect artificial template of the grid and detect or save all coordinates from all corners.
In the target image, do the same thing, for example with Harris points. Be creative, you might also be able to use the distinct triangles that can be found in your images.
Using the coordinates from the template and the found Harris points, determine the affine transformation x = Ax' between the template and the target image. That transformation can then be used to map the template grid onto the target image. At the very least this will give you some prior information to help guide further segmentation.
The gist of the idea and examples of the estimation of the affine matrix A can be found on the site of Zisserman's book Multiple View Geometry in Computer Vision and on Peter Kovesi's site.
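A minimal sketch of that estimation step in Python; `template_pts`, `target_pts` and `grid_pts` are hypothetical arrays holding the template corners, the matched Harris points and the template grid points:

```python
# Sketch: robustly estimate the affine transform between template and target
# correspondences, then map the artificial grid into the target image.
import cv2
import numpy as np

def map_grid(template_pts, target_pts, grid_pts):
    # A is 2x3 such that target ~ A * [template; 1]; RANSAC handles outliers.
    A, inliers = cv2.estimateAffine2D(np.float32(template_pts),
                                      np.float32(target_pts),
                                      method=cv2.RANSAC)
    # Apply the transform to every point of the template grid.
    grid = np.float32(grid_pts).reshape(-1, 1, 2)
    return cv2.transform(grid, A).reshape(-1, 2)
```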
I'd start by trying to detect the rectangular boundary of the overall sheet, then applying a perspective transform to make it truly rectangular. Crop that portion of the image out. If possible, then try to make the alternating white and grey sub-rectangles have an equal background brightness - maybe try adaptive histogram equalization.
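A rough sketch of the rectification and equalization steps in Python; `corners` is a hypothetical 4 x 2 array of the sheet corners (ordered top-left, top-right, bottom-right, bottom-left), and the output size is arbitrary:

```python
# Sketch: warp the sheet to a true rectangle with a perspective transform,
# then flatten the white/grey background with CLAHE.
import cv2
import numpy as np

def rectify_and_equalize(img_gray, corners, out_w=800, out_h=1200):
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(img_gray, M, (out_w, out_h))

    # Adaptive histogram equalization to even out the background brightness.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(warped)
```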
Then the Hough transform might perform better. Alternatively, you could then take an approach that's broadly similar to this demonstration by Robert Bemis on MATLAB Central (it's analysing a DNA microarray image rather than Lotto cards, but it's essentially finding bounding boxes of items arranged in a grid). At a high level, the approach is to calculate the autocorrelation along columns and rows of pixels to detect the periodicity of the items in the grid, and use that to impose a bounding box on each item.
Sorry the above advice is mostly MATLAB-based; I'm afraid I'm not an OpenCV user, but hopefully it will give you some ideas at least.