After being able to calculate the shortest distance with Dijkstra's algorithm by feeding in vertex points manually (getting latitude and longitude from Google Maps), I'm looking for a more dynamic way to do the same.
Assuming I have a shapefile representing my map (with boundaries and obstacles), which algorithm can I use to decompose it?
Googling a little, I found that I should do a "cell decomposition", but honestly I haven't figured out how to do it.
Thank you.
If you only have obstacles in the shapefile, then you could construct a visibility graph and run Dijkstra on that.
If you have regions with different passabilities, then you should use some more complicated techniques: for example, overlay the map with a grid (rectangular or triangular), triangulate it, assign weights to its edges, and then use Dijkstra as well.
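To make the first suggestion concrete, here is a rough C++ sketch of a visibility graph plus Dijkstra. It assumes the obstacles have already been read out of the shapefile as simple polygons, that `nodes` holds every obstacle vertex plus your start and goal points, and the visibility test only rejects proper edge crossings (it does not catch chords running through a concave obstacle's interior), so treat it as a starting point rather than a finished router.

```cpp
#include <cmath>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Cross product of (b - a) and (c - a)
static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if segments ab and cd properly cross (shared endpoints / collinear overlaps ignored)
static bool segmentsCross(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    double d1 = cross(c, d, a), d2 = cross(c, d, b);
    double d3 = cross(a, b, c), d4 = cross(a, b, d);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

// Two nodes "see" each other if the straight segment between them crosses no obstacle edge
static bool visible(const Pt& p, const Pt& q, const std::vector<std::vector<Pt>>& obstacles) {
    for (const auto& poly : obstacles)
        for (size_t i = 0; i < poly.size(); ++i)
            if (segmentsCross(p, q, poly[i], poly[(i + 1) % poly.size()]))
                return false;
    return true;
}

// Dijkstra over the (implicit) visibility graph; returns shortest distances from `source`
std::vector<double> shortestDistances(const std::vector<Pt>& nodes,
                                      const std::vector<std::vector<Pt>>& obstacles,
                                      size_t source) {
    const size_t n = nodes.size();
    std::vector<double> dist(n, std::numeric_limits<double>::infinity());
    dist[source] = 0.0;
    using Item = std::pair<double, size_t>;  // (distance so far, node index)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    pq.push({0.0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;           // stale queue entry
        for (size_t v = 0; v < n; ++v) {
            if (v == u || !visible(nodes[u], nodes[v], obstacles)) continue;
            double w = std::hypot(nodes[u].x - nodes[v].x, nodes[u].y - nodes[v].y);
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```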
I'm looking for an algorithm to find a polygon that can represent a set of points in 2D space. Specifically, if given a set of points like this
It should ideally produce something similar to this:
(The arrows are segments)
Basically, the output would be a set of segments that "best" capture the features of the points. The algorithm could take some parameters to control the number of output segments.
I currently do not have any ideas on what algorithms I'm looking for. Any papers or advice are appreciated.
This is a possible algorithm.
For every point, look at the 2 points closest to it; they become connected.
Then use Douglas-Peucker to refine the edges.
Essentially, you first create a polygon containing all the points, and then try to eliminate points whose removal doesn't change the shape too much.
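A rough C++/OpenCV sketch of that recipe, assuming the 2-nearest-neighbour edges actually form a single closed chain (which only holds for reasonably outline-like point sets with at least 3 points); the refinement step uses cv::approxPolyDP, OpenCV's Douglas-Peucker implementation, and `epsilon` is the usual tolerance controlling how many segments survive:

```cpp
#include <opencv2/imgproc.hpp>
#include <array>
#include <vector>

// Connect every point to its two nearest neighbours, walk the resulting chain,
// then simplify the closed polyline with Douglas-Peucker (cv::approxPolyDP).
std::vector<cv::Point2f> outlineFromPoints(const std::vector<cv::Point2f>& pts,
                                           double epsilon) {
    const size_t n = pts.size();

    // Indices of the two nearest neighbours of each point (brute force)
    std::vector<std::array<int, 2>> nn(n, {-1, -1});
    for (size_t i = 0; i < n; ++i) {
        double best1 = 1e30, best2 = 1e30;
        for (size_t j = 0; j < n; ++j) {
            if (j == i) continue;
            double dx = pts[i].x - pts[j].x, dy = pts[i].y - pts[j].y;
            double d = dx * dx + dy * dy;
            if (d < best1)      { best2 = best1; nn[i][1] = nn[i][0]; best1 = d; nn[i][0] = (int)j; }
            else if (d < best2) { best2 = d; nn[i][1] = (int)j; }
        }
    }

    // Walk the chain of neighbour links starting from point 0
    std::vector<cv::Point2f> ordered;
    std::vector<bool> used(n, false);
    int cur = 0, prev = -1;
    while (!used[cur]) {
        used[cur] = true;
        ordered.push_back(pts[cur]);
        int next = (nn[cur][0] != prev) ? nn[cur][0] : nn[cur][1];
        prev = cur;
        cur = next;
    }

    // Eliminate points whose removal barely changes the shape
    std::vector<cv::Point2f> simplified;
    cv::approxPolyDP(ordered, simplified, epsilon, /*closed=*/true);
    return simplified;
}
```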
I have an array of MKLocationCoordinate2D in iOS and I'd like to create a heat map of those points based on the clustering of them.
i.e. the more points there are in a certain area, the higher the weight.
I've found a load of different frameworks for generating the heat maps, and they all require you to calculate the weights yourself (which makes sense).
I'm just not sure where to start with the calculation.
I could do something like calculating the mean distance between each point and every other point but I'm not sure if that's a good idea.
Could someone point me in the direction of how to weight each point based on its closeness to other points?
Thanks
I solved this by implementing a quad tree and using that to quickly get the number of neighbours within a certain radius.
I can then change the radius to tweak the result, and it very quickly returns a weight for each point based on how many neighbours it has.
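For illustration, here is a minimal C++ sketch of the idea: a small point-region quad tree whose only query is "how many points lie within a given radius of this location", which then becomes the weight for each point. It assumes the root square covers all the points; the names are made up for the example (the actual implementation was on iOS).

```cpp
#include <algorithm>
#include <cmath>
#include <memory>
#include <vector>

struct P2 { double x, y; };

// Minimal point-region quad tree: insert points, then count how many fall
// inside a circle of a given radius around a query location.
class QuadTree {
public:
    QuadTree(double cx, double cy, double half) : cx_(cx), cy_(cy), half_(half) {}

    void insert(const P2& p) {
        if (!children_[0]) {
            // Keep the point here if there is room, or if the node is too small to split further
            if (points_.size() < kCapacity || half_ < 1e-9) { points_.push_back(p); return; }
            subdivide();
        }
        children_[childIndex(p)]->insert(p);
    }

    // Number of stored points within `radius` of `q`
    int countNear(const P2& q, double radius) const {
        // Prune nodes whose square cannot intersect the query circle
        double dx = std::max(0.0, std::abs(q.x - cx_) - half_);
        double dy = std::max(0.0, std::abs(q.y - cy_) - half_);
        if (dx * dx + dy * dy > radius * radius) return 0;
        int count = 0;
        for (const auto& p : points_) {
            double ex = p.x - q.x, ey = p.y - q.y;
            if (ex * ex + ey * ey <= radius * radius) ++count;
        }
        if (children_[0])
            for (const auto& c : children_) count += c->countNear(q, radius);
        return count;
    }

private:
    static constexpr size_t kCapacity = 8;
    double cx_, cy_, half_;                 // centre and half-size of this node's square
    std::vector<P2> points_;                // points stored in this (leaf) node
    std::unique_ptr<QuadTree> children_[4];

    int childIndex(const P2& p) const {
        return (p.x >= cx_ ? 1 : 0) + (p.y >= cy_ ? 2 : 0);
    }

    void subdivide() {
        double h = half_ / 2.0;
        children_[0] = std::make_unique<QuadTree>(cx_ - h, cy_ - h, h);
        children_[1] = std::make_unique<QuadTree>(cx_ + h, cy_ - h, h);
        children_[2] = std::make_unique<QuadTree>(cx_ - h, cy_ + h, h);
        children_[3] = std::make_unique<QuadTree>(cx_ + h, cy_ + h, h);
        for (const auto& p : points_) children_[childIndex(p)]->insert(p);
        points_.clear();
    }
};

// Heat-map weight of each point = number of neighbours within `radius` (itself included)
std::vector<double> heatWeights(const std::vector<P2>& pts, double radius,
                                double cx, double cy, double half) {
    QuadTree tree(cx, cy, half);
    for (const auto& p : pts) tree.insert(p);
    std::vector<double> weights;
    weights.reserve(pts.size());
    for (const auto& p : pts) weights.push_back(tree.countNear(p, radius));
    return weights;
}
```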
I want to find the curvature of a depth map.
Look at the picture
This is an example of curvature.
Maybe if I represent the image as a function and take its second derivative, I can find the curvatures. But I couldn't manage to implement it. (I tried the Sobel operator from OpenCV.)
Is there a way out?
PS: Sorry for my writing mistakes. English is not my native language.
That is not a depth map, it is a point cloud (but I assume it was generated from a single depth map z = f(x, y)).
What curvature do you want to estimate? Mean, Gaussian, the whole 2nd fundamental form?
See, e.g. here for definitions. Here's a recent reference on fast estimation methods:
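For the z = f(x, y) case, here is a rough OpenCV sketch of the second-derivative idea from the question: compute f_x, f_y, f_xx, f_yy, f_xy with cv::Sobel and plug them into the standard Monge-patch formulas for Gaussian and mean curvature. The Sobel responses are unnormalised, so the absolute values depend on kernel size and pixel spacing; this only shows the structure of the computation.

```cpp
#include <opencv2/imgproc.hpp>

// Gaussian curvature: K = (fxx*fyy - fxy^2) / (1 + fx^2 + fy^2)^2
// Mean curvature:     H = ((1 + fx^2)*fyy - 2*fx*fy*fxy + (1 + fy^2)*fxx)
//                         / (2*(1 + fx^2 + fy^2)^(3/2))
void curvatureFromDepth(const cv::Mat& depth, cv::Mat& K, cv::Mat& H, int ksize = 5) {
    cv::Mat z;
    depth.convertTo(z, CV_64F);

    cv::Mat fx, fy, fxx, fyy, fxy;
    cv::Sobel(z, fx,  CV_64F, 1, 0, ksize);   // df/dx
    cv::Sobel(z, fy,  CV_64F, 0, 1, ksize);   // df/dy
    cv::Sobel(z, fxx, CV_64F, 2, 0, ksize);   // d2f/dx2
    cv::Sobel(z, fyy, CV_64F, 0, 2, ksize);   // d2f/dy2
    cv::Sobel(z, fxy, CV_64F, 1, 1, ksize);   // d2f/dxdy

    cv::Mat fx2 = fx.mul(fx), fy2 = fy.mul(fy), fxfy = fx.mul(fy);
    cv::Mat g = 1.0 + fx2 + fy2;              // 1 + |grad f|^2

    cv::Mat numK = fxx.mul(fyy) - fxy.mul(fxy);
    cv::Mat denK = g.mul(g);
    K = numK / denK;

    cv::Mat a = 1.0 + fx2, b = 1.0 + fy2;
    cv::Mat numH = a.mul(fyy) - 2.0 * fxfy.mul(fxy) + b.mul(fxx);
    cv::Mat g32;
    cv::pow(g, 1.5, g32);
    cv::Mat denH = 2.0 * g32;
    H = numH / denH;
}
```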
OpenCV has a nice in-built ellipse-fitting algorithm called fitEllipse(const Mat& points)
However, it has some major shortcomings that limit its usefulness. For example, it requires already-selected points, so I have to do the feature extraction myself. HoughCircles detects circles on a given image; it's a pity there is no HoughEllipses.
The other major shortcoming, which is at the center of my question, is that it does not provide any metric for how accurate the fit was. It returns the ellipse that best fits the given points, even if the shape does not even remotely look like an ellipse. Is there a way to get the estimated error from the algorithm? I would like to use it as a threshold to filter out shapes that are not even close to being ellipses.
I'm asking because maybe there is a simple solution before I try to reinvent the wheel and write my very own fitEllipse function.
If you don't mind getting your hands dirty, you could actually modify the source code for fitEllipse(). The fitEllipse() function uses least-squares to determine the likely ellipses, and the least-squares solution is a tangible distance metric, which is what you want.
If that is something you're willing to do, it would be a very simple code change. Simply add a float whose value is passed back after the function call, where the float stores the current best least-squares value.
fitEllipse gives you the ellipse as a cv::RotatedRect and so you know the angle of rotation of the ellipse, its center and its two axes.
You can compute the sum of the square of the distances between your points and the ellipse, that sum is the metric you are looking for.
The distance between a point and an ellipse is described here http://www.geometrictools.com/Documentation/DistancePointEllipseEllipsoid.pdf and the code is here http://www.geometrictools.com/GTEngine/Include/Mathematics/GteDistPointHyperellipsoid.h
You need to go from OpenCV cv::RotatedRect to Ellipse2 of Geometric Tools Engine and then you can compute the distance.
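If the exact point-to-ellipse distance from Geometric Tools is more machinery than you need for a simple accept/reject threshold, a cheap approximation is to sample the fitted ellipse densely and take the squared distance to the nearest sample for each input point. A C++ sketch (the function name and the 720-sample count are just choices for the example):

```cpp
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Approximate sum of squared point-to-ellipse distances for an ellipse returned
// by cv::fitEllipse, using a dense sampling of the ellipse boundary instead of
// the exact root-finding method described in the Geometric Tools paper.
double ellipseFitError(const std::vector<cv::Point2f>& points,
                       const cv::RotatedRect& ellipse,
                       int samples = 720) {
    const double a = ellipse.size.width  * 0.5;          // semi-axes
    const double b = ellipse.size.height * 0.5;
    const double theta = ellipse.angle * CV_PI / 180.0;  // rotation in radians
    const double ct = std::cos(theta), st = std::sin(theta);

    // Pre-sample the ellipse boundary in image coordinates
    std::vector<cv::Point2d> boundary;
    boundary.reserve(samples);
    for (int i = 0; i < samples; ++i) {
        double t = 2.0 * CV_PI * i / samples;
        double x = a * std::cos(t), y = b * std::sin(t);
        boundary.emplace_back(ellipse.center.x + x * ct - y * st,
                              ellipse.center.y + x * st + y * ct);
    }

    // Sum of squared distances from each point to its nearest boundary sample
    double sum = 0.0;
    for (const auto& p : points) {
        double best = std::numeric_limits<double>::max();
        for (const auto& q : boundary) {
            double dx = p.x - q.x, dy = p.y - q.y;
            best = std::min(best, dx * dx + dy * dy);
        }
        sum += best;
    }
    return sum;
}
```

You would call it as `ellipseFitError(pts, cv::fitEllipse(pts))` and threshold the result, probably normalised by the number of points, to reject shapes that are nowhere near elliptical.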
Why don't you do a findContours() to reduce the memory space required? There's your selected points structure right there. If you want to further simplify you can run a ConvexHull() or ApproxPoly() on that. Fit the ellipse to those points, and then I suppose you can check similarity between the two structures to get some kind of estimate. A difference operator between the two Mats would be a (very) rough estimate?
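Something along those lines, as a sketch (it assumes a binary CV_8UC1 input such as a thresholded edge map, and the function name is just for the example):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Extract candidate point sets with findContours, optionally simplify them with
// convexHull, then fit an ellipse to each contour that has enough points.
std::vector<cv::RotatedRect> fitEllipsesInImage(const cv::Mat& binary) {
    cv::Mat work = binary.clone();   // findContours may modify its input in older OpenCV
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::RotatedRect> ellipses;
    for (const auto& c : contours) {
        if (c.size() < 5) continue;          // fitEllipse needs at least 5 points
        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);             // optional simplification step
        if (hull.size() < 5) continue;
        ellipses.push_back(cv::fitEllipse(hull));
    }
    return ellipses;
}
```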
Depending on the application, you might be able to use CAMShift (or mean shift), which fits an ellipse to a region with similar colors.
Can anyone tell me of a method to refine disparity maps? I am trying to generate the disparity map of a face, but features like the eyes, nose, lips, etc. are not clear. How can I refine it to make it look better?
Take a look at
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/stereo_match.cpp
There are some bm.state parameters (around lines 197 - 207) that can be tweaked. We connected those values to sliders so we could watch the result while we tweaked them. You can also try different values for "blocksize", and different distances between your cameras; if the cameras are too far apart, you will get poor or no results at close range.
I assume that your code is similar to the example above.
Use StereoSGBM. I am using it, and you can use trackbars to tweak the parameters. Study the OpenCV reference and then manipulate each parameter based on the effect it has on your image, e.g. P1 and P2 make it smoother, etc.
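A minimal sketch of that setup, assuming rectified grayscale inputs of the same size; the P1/P2 values are the ones suggested in the OpenCV documentation (8·cn·blockSize² and 32·cn·blockSize²), and blockSize / numDisparities are exactly the kind of parameters you would hook up to trackbars as described above.

```cpp
#include <opencv2/calib3d.hpp>

// Compute a disparity map with StereoSGBM; larger P1/P2 give smoother results.
cv::Mat computeDisparity(const cv::Mat& leftGray, const cv::Mat& rightGray,
                         int blockSize = 5, int numDisparities = 128) {
    int cn = leftGray.channels();
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(
        /*minDisparity=*/0, numDisparities, blockSize,
        /*P1=*/8 * cn * blockSize * blockSize,    // penalty for small disparity changes
        /*P2=*/32 * cn * blockSize * blockSize,   // penalty for large disparity changes
        /*disp12MaxDiff=*/1,
        /*preFilterCap=*/63,
        /*uniquenessRatio=*/10,
        /*speckleWindowSize=*/100,
        /*speckleRange=*/32);

    cv::Mat disp16, disp8;
    sgbm->compute(leftGray, rightGray, disp16);   // fixed-point disparity (scaled by 16)
    disp16.convertTo(disp8, CV_8U, 255.0 / (numDisparities * 16.0));
    return disp8;                                 // scaled to 0-255 for display
}
```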