I am still new to the idea of A* Search. I understand some of the heuristics that A* Search uses, such as Straight-Line Distance (Euclidean Distance), Manhattan Distance, and Misplaced Tiles (for the 8-puzzle game).
For a 2-D grid world, which admissible heuristic is better than Straight-Line Distance? I have Manhattan Distance in mind. Any other suggestions?
When using A* there are two properties that must hold for the heuristic, in order for the search to be optimal (finding the best solution).
The heuristic must be admissible
The heuristic must be consistent (monotone)
In practice it's pretty hard to come up with a non-monotone (also called inconsistent) heuristic, so let's stick with the first requirement.
A heuristic is admissible if it never overestimates the distance between two nodes (in this case points). As such, the Manhattan-distance heuristic is not admissible if diagonal movements are allowed, simply because of Pythagoras' theorem (the combined length of the two legs of a right triangle is longer than the hypotenuse), so in this case the straight-line-distance heuristic is the better choice, since it is admissible.
However, if diagonal movements are not allowed in the 2D grid, then both heuristics are admissible, since neither will overestimate the distance, but the Manhattan-distance heuristic is preferred, because it makes better estimates, i.e. estimates closer to the actual distance.
Use a heuristic that agrees with the allowed movement (a short sketch in code follows the list):
For 4-directions, use Manhattan distance (L1)
For 8-directions, use Chebyshev distance (L-Infinity)
For any direction, you can use Euclidean distance, but an alternative map representation may be better (e.g. using Waypoints)
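As a minimal sketch (plain Python, assuming unit-cost moves and points given as (x, y) tuples; the function names are just for illustration), the three heuristics above can be written as:

    import math

    def manhattan(a, b):
        # L1: admissible for 4-directional movement with unit step cost
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def chebyshev(a, b):
        # L-infinity: admissible for 8-directional movement when a diagonal step costs 1
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

    def straight_line(a, b):
        # Euclidean: admissible for movement in any direction
        return math.hypot(a[0] - b[0], a[1] - b[1])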
Amit Patel has produced fantastic reference material for this subject. See his page at RedBlobGames.com for an introduction to A* and his page on Stanford's Game Programming Page for a description of several grid-world heuristics. His Stanford page also describes several methods for reducing the size of the open set when optimality is not required.
There are also extensions of A* that take advantage of symmetry in grids with constant movement cost. Daniel Harabor introduced two in his doctoral thesis: Jump Point Search (JPS) and Rectangular Symmetry Reduction (RSR). He describes these in an article he posted on AiGameDev.com.
I am attempting to estimate the similarity between three different entities (here expressed as curves).
One of the curves represent a "teacher" (green curve) and the other two are "students".
While researching how to solve this problem, I have come across multiple techniques:
Procrustes Analysis (see: "Procrustes Analysis with NumPy?")
Peak Finding (see: "Peak finding algorithm")
Minkowski Distance (to penalize the outliers heavier)
All three methods have their own advantages and disadvantages, however none of them seems to help me with the problem demonstrated in the image:
I "know" that "student 3" (orange curve) is closer to the "teacher", however, distance-wise, "student 5" is measured as the closest one.
Peak estimation works well for sharp edges, but it does not perform well here.
I do not have a background in signal processing (which is what this problem appears to require), and I would appreciate general suggestions/techniques on how to address these types of problems.
This problem isn't necessarily related to signal processing but to curve fitting or optimization in general. When you say that student 3 is "closer", you have to define "closeness". By using a pre-defined distance function like you did, you have arbitrarily chosen a distance measure which isn't necessarily suitable to your needs. Estimating from the drawing, I think that by using Euclidean distance you'll get what you want (that student 3 is closer).
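As a rough sketch of that suggestion (numpy; the sampled curves below are made up for illustration and assume the teacher and students are sampled at the same x positions), comparing Euclidean distance between curves could look like this:

    import numpy as np

    # Hypothetical sampled curves; in practice these come from your data.
    x = np.linspace(0.0, 1.0, 200)
    teacher = np.sin(2 * np.pi * x)
    student_3 = teacher + 0.1 * np.random.randn(x.size)
    student_5 = teacher + 0.3 * np.sin(6 * np.pi * x)

    def curve_distance(curve_a, curve_b):
        # Euclidean (L2) distance between two curves treated as vectors of samples.
        return np.linalg.norm(curve_a - curve_b)

    print(curve_distance(teacher, student_3))
    print(curve_distance(teacher, student_5))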
I've been exploring and learning about k-d trees for kNN (the K Nearest Neighbors problem).
When would the search not work well, or not be worth it, or not improve on the naive search?
Are there any drawbacks to this approach?
K-d trees don't work too well in high dimensions (where you have to visit lots and lots of tree branches). One rule of thumb is that if your data dimensionality is k, a k-d tree is only going to be any good if you have many more than 2^k data points.
In high dimensions, you'll generally want to switch to approximate nearest-neighbor searches instead. If you haven't run across it already, FLANN (available on GitHub) is a very useful library for this (with C, C++, Python, and MATLAB APIs); it has good implementations of k-d trees, brute-force search, and several approximate techniques, and it helps you automatically tune their parameters and switch between them easily.
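If you first want to compare an exact k-d tree against brute force before reaching for an approximate library, here is a minimal sketch using scikit-learn's NearestNeighbors (an assumption on my part; the FLANN API recommended above is different):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    data = rng.random((10_000, 3))      # low-dimensional: the k-d tree should help here
    queries = rng.random((100, 3))

    for algorithm in ("kd_tree", "brute"):
        nn = NearestNeighbors(n_neighbors=5, algorithm=algorithm).fit(data)
        distances, indices = nn.kneighbors(queries)
        # Both return the same neighbors; timing the two (e.g. with timeit) shows
        # the k-d tree winning in low dimensions and losing its edge as the
        # dimensionality grows past the 2^k rule of thumb above.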
It depends on your distance function.
You can't use k-d-trees with arbitrary distance functions. Minkowski norms should be fine though. But in a lot of applications, you will want to use more advanced distance functions.
Plus, with increasing dimensionality, k-d trees work much less well.
The reason is simple: k-d trees avoid looking at points where the one-dimensional distance to the boundary is already larger than the desired threshold, i.e. where, for Euclidean distances (where z is the nearest border and y the closest known point):
(x_j - z_j) <=> sqrt(sum_i((x_i - y_i)^2))
equivalently, but cheaper:
(x_j - z_j)^2 <=> sum_i((x_i - y_i)^2)
You can imagine that the chance of this pruning rule holding decreases drastically with the number of dimensions. If you have 100 dimensions, there is next to no chance that a single dimension's squared difference will be larger than the sum of squared differences.
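A small Monte-Carlo sketch (numpy, uniform random points, purely illustrative) shows how quickly the chance of the pruning test succeeding falls off with dimension:

    import numpy as np

    rng = np.random.default_rng(42)

    for dim in (2, 10, 100):
        x = rng.random((10_000, dim))   # query points
        y = rng.random((10_000, dim))   # stand-in for the closest known point
        z = rng.random(10_000)          # one coordinate of the nearest border
        # Pruning is possible when a single coordinate's squared distance to the
        # border already exceeds the full squared distance to the known point.
        prune = (x[:, 0] - z) ** 2 >= ((x - y) ** 2).sum(axis=1)
        print(dim, prune.mean())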
Time complexity for kNN: O(k * lg(n))
where k is the number of nearest neighbours and lg(n) is the height of the k-d tree.
k-d trees will not work well if the dimensionality of the data set is high, because the search space becomes huge.
Let's say you have many points around the origin; for simplicity, consider the 2-D case.
If you want to find the k nearest neighbours of any point, then you may have to search in all 4 quadrants, because the points are all close to each other, which results in backtracking into the other branches of the k-d tree.
So for 3-dimensional space we have to search in 8 directions (octants).
To generalize to d dimensions, it is 2^d.
So the time complexity becomes O(2^d * lg(n)).
I wonder whether the triangle inequality is necessary for the distance measure used in k-means.
k-means is designed for Euclidean distance, which happens to satisfy triangle inequality.
Using other distance functions is risky, as the algorithm may stop converging. The reason, however, is not the triangle inequality, but that the mean might not minimize the distance function. (The arithmetic mean minimizes the sum of squares, not arbitrary distances!)
There are faster methods for k-means that exploit the triangle inequality to avoid recomputations. But if you stick to classic MacQueen or Lloyd k-means, then you do not need the triangle inequality.
Just be careful when using other distance functions not to run into an infinite loop. You need to prove that using the mean as the cluster center minimizes the sum of your distances within each cluster. If you cannot prove this, it may fail to converge, as the objective function no longer decreases monotonically! So you really should try to prove convergence for your distance function!
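To make the convergence argument concrete, here is a minimal Lloyd-style k-means sketch (numpy, random data, not production code). The update step is exactly the arithmetic mean, which minimizes the within-cluster sum of squared Euclidean distances; swapping in another distance in the assignment step without also changing the centre update is where convergence can break.

    import numpy as np

    def lloyd_kmeans(points, k, iterations=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iterations):
            # Assignment step: nearest center under squared Euclidean distance.
            sq_dist = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = sq_dist.argmin(axis=1)
            # Update step: the arithmetic mean minimizes the sum of squared
            # Euclidean distances within each cluster, so the objective
            # decreases monotonically.
            new_centers = []
            for j in range(k):
                members = points[labels == j]
                # Keep the old center if a cluster happens to become empty.
                new_centers.append(members.mean(axis=0) if len(members) else centers[j])
            centers = np.array(new_centers)
        return centers, labels

    data = np.random.default_rng(1).random((300, 2))
    centers, labels = lloyd_kmeans(data, k=3)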
Well, classical k-means is defined on Euclidean space with the L2 distance, so you get the triangle inequality automatically (the triangle inequality is part of how a distance/metric is defined). If you are using a non-Euclidean metric, you would need to define what the "mean" means, amongst other things.
If you don't have the triangle inequality, it means that two points could be very far from each other while both are close to a third point. You need to think about how you would like to interpret this case.
Having said all that, I have in the past used average-linkage hierarchical clustering with a distance measure that did not fulfill the triangle inequality, amongst other things, and it worked great for my needs.
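For reference, scipy's hierarchical clustering accepts a precomputed condensed dissimilarity vector, so any pairwise measure (metric or not) can be plugged in; a minimal sketch under that assumption, using Bray-Curtis purely as an example of a measure that is not a metric in general:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    data = rng.random((50, 4))

    # Any condensed pairwise dissimilarity vector works here.
    dissimilarity = pdist(data, metric="braycurtis")
    Z = linkage(dissimilarity, method="average")          # average-linkage clustering
    labels = fcluster(Z, t=3, criterion="maxclust")       # cut the tree into 3 clusters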
I am having quite a bit of trouble understanding the workings of plane-to-plane homography. In particular, I would like to know how the OpenCV method works.
Is it like ray tracing? How does a homogeneous coordinate differ from a scale*vector?
Everything I read talks like you already know what they're talking about, so it's hard to grasp!
Googling homography estimation returns this as the first link (at least to me):
http://cseweb.ucsd.edu/classes/wi07/cse252a/homography_estimation/homography_estimation.pdf. Definitely this is a poor description and a lot has been omitted. If you want to learn these concepts, reading a good book like Multiple View Geometry in Computer Vision would be far better than reading a few short articles. These short articles often contain several serious mistakes, so be careful.
In short, a cost function is defined and the parameters (the elements of the homography matrix) that minimize this cost function are the answer we are looking for. A meaningful cost function is geometric, that is, it has a geometric interpretation. For the homography case, we want to find H such that, when points are transformed from one image to the other, the distance between all the points and their correspondences is minimal. This geometric cost function is nonlinear, which means that (1) in general an iterative method is needed to solve it, and (2) the iterative method requires an initial starting point. This is where algebraic cost functions enter. These cost functions have no meaningful/geometric interpretation. Designing them is often more of an art, and for a given problem you can usually find several algebraic cost functions with different properties. The benefit of algebraic costs is that they lead to linear optimization problems, so a closed-form solution exists (a one-shot, non-iterative method). The downside is that the solution found is not optimal. Therefore, the general approach is to first optimize an algebraic cost and then use that solution as the starting point for an iterative geometric optimization. If you now google these cost functions for homography you will find how they are usually defined.
In case you want to know what method is used in OpenCV, you simply need to have a look at the code:
http://code.opencv.org/projects/opencv/repository/entry/trunk/opencv/modules/calib3d/src/fundam.cpp#L81
This is the algebraic cost function, DLT (Direct Linear Transform), defined in the mentioned book; if you google "homography DLT" you should find some relevant documents. And then here:
http://code.opencv.org/projects/opencv/repository/entry/trunk/opencv/modules/calib3d/src/fundam.cpp#L165
An iterative procedure minimizes the geometric cost function. It seems the Gauss-Newton method is implemented:
http://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
All the above discussion assumes you have correspondences between the two images. If some points are matched to incorrect points in the other image, then you have outliers, and the results of the mentioned methods would be completely off. Robust (against outliers) methods enter here. OpenCV gives you two options: 1. RANSAC, 2. LMedS. Google is your friend here.
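For completeness, the robust estimation in OpenCV is exposed through cv2.findHomography; a minimal sketch (the matched point arrays below are fabricated just to make it runnable):

    import numpy as np
    import cv2

    # Hypothetical Nx2 arrays of matched points (in practice from feature matching).
    src_pts = (np.random.rand(20, 2) * 100).astype(np.float32)
    dst_pts = src_pts + np.float32([5, 10])     # a fake translation for illustration

    # method=cv2.RANSAC rejects outliers; 3.0 is the reprojection threshold in pixels.
    H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
    # cv2.LMEDS is the other robust option mentioned above.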
Hope that helps.
To answer your question we need to address 4 different points:
1. Define homography.
2. See what happens when noise or outliers are present.
3. Find an approximate solution.
4. Refine it.
A homography is a 3x3 matrix that maps 2D points. The mapping is linear in homogeneous coordinates: [x2, y2, 1]’ ~ H * [x1, y1, 1]’, where ‘ means transpose (to write column vectors as rows) and ~ means that the mapping is up to scale. It is easier to see in Cartesian coordinates (multiplying numerator and denominator by the same factor doesn’t change the result)
x2 = (h11*x1 + h12*y1 + h13)/(h31*x1 + h32*y1 + h33)
y2 = (h21*x1 + h22*y1 + h23)/(h31*x1 + h32*y1 + h33)
You can see that in Cartesian coordinates the mapping is non-linear, but for now just keep this in mind.
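A tiny numpy sketch of that mapping (with an arbitrary example matrix H): promote the point to homogeneous coordinates, multiply by H, then divide by the last component.

    import numpy as np

    H = np.array([[1.0,   0.2, 5.0],
                  [0.1,   1.0, 3.0],
                  [0.001, 0.0, 1.0]])      # arbitrary example homography

    def apply_homography(H, x1, y1):
        # [x2, y2, w]' ~ H * [x1, y1, 1]', then divide through by w.
        x2, y2, w = H @ np.array([x1, y1, 1.0])
        return x2 / w, y2 / w

    print(apply_homography(H, 10.0, 20.0))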
We can easily solve the former set of linear equations in homogeneous coordinates using least-squares linear algebra methods (see DLT, the Direct Linear Transform), but this unfortunately only minimizes an algebraic error in the homography parameters. People care more about another kind of error, namely the error that shifts points around in Cartesian coordinate systems. If there is no noise and there are no outliers, the two errors can be identical. However, the presence of noise requires us to minimize the residuals in Cartesian coordinates (the residuals are just squared differences between the left and right sides of the Cartesian equations). On top of that, the presence of outliers requires us to use a robust method such as RANSAC. It selects the best set of inliers and rejects a few outliers to make sure they don’t contaminate our solution.
Since RANSAC finds correct inliers by a random trial-and-error method over many iterations, we need a really fast way to compute the homography, and that is the linear approximation that minimizes the parameters' error (the wrong metric) but is otherwise close enough to the final solution (which minimizes the squared point-coordinate residuals, the right metric). We use the linear solution as a guess for further non-linear optimization.
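Here is a minimal numpy sketch of that linear (DLT-style) step, just to show the shape of the computation; a serious implementation would normalize the points first and then refine the result as described next.

    import numpy as np

    def dlt_homography(src, dst):
        # src, dst: Nx2 arrays of corresponding points, N >= 4.
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        A = np.array(rows, dtype=float)
        # The right singular vector for the smallest singular value minimizes
        # ||A h|| subject to ||h|| = 1 (the algebraic cost).
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 3)

    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    dst = 2.0 * src + np.array([2.0, 1.0])   # a made-up transform for illustration
    H = dlt_homography(src, dst)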
The final step is to use our initial guess (the solution of the linear system that minimized the error in the homography parameters) to solve the non-linear equations (which minimize a sum of squared pixel errors). The reason to use squared residuals instead of, for example, their absolute values, is that in the Gaussian formula (which describes the noise) we have a squared exponent, exp(-(x-mu)^2/(2*sigma^2)), so (skipping some probability formulas) the maximum-likelihood solution requires squared residuals.
In order to perform the non-linear optimization one typically employs the Levenberg-Marquardt method. But as a first approximation one can just use gradient descent (note that the gradient points uphill, but we are looking for a minimum, thus we go against it, hence the minus sign below). In a nutshell, we go through a set of iterations 1..t..N, selecting the homography parameters at iteration t as param(t) = param(t-1) - k * gradient, where gradient = d_cost/d_param.
Bonus material: to further reduce the noise in your homography you can try a few tricks: reduce the search space for points (start tracking your points); use different features (lines, conics, etc. that are also transformed by a homography but possibly have a higher SNR); reject impossible homographies to speed up RANSAC (e.g. those that correspond to 'impossible' point movements); use a low-pass filter for small changes in homographies that may be attributed to noise.
I have implemented k-means clustering to determine the clusters among 300 objects. Each of my objects has about 30 dimensions. The distance is calculated using the Euclidean metric.
I need to know
How would I determine whether my algorithm works correctly? I can't have a graph which will give some idea about the correctness of my algorithm.
Is Euclidean distance the correct method for calculating distances? What if I have 100 dimensions instead of 30?
The two questions in the OP are separate topics (i.e., no overlap in the answers), so I'll try to answer them one at a time, starting with item 1 on the list.
How would I determine if my [clustering] algorithm works correctly?
k-means, like other unsupervised ML techniques, lacks a good selection of diagnostic tests to answer questions like "are the cluster assignments returned by k-means more meaningful for k=3 or k=5?"
Still, there is one widely accepted test that yields intuitive results and that is straightforward to apply. This diagnostic metric is just this ratio:
inter-centroidal separation / intra-cluster variance
As the value of this ratio increases, the quality of your clustering result increases.
This is intuitive. The first of these metrics just measures how far apart each cluster is from the others (measured by the cluster centers).
But inter-centroidal separation alone doesn't tell the whole story, because two clustering algorithms could return results having the same inter-centroidal separation though one is clearly better, because the clusters are "tighter" (i.e., smaller radii); in other words, the cluster edges have more separation. The second metric--intra-cluster variance--accounts for this. This is just the mean variance, calculated per cluster.
In sum, the ratio of inter-centroidal separation to intra-cluster variance is a quick, consistent, and reliable technique for comparing results from different clustering algorithms, or to compare the results from the same algorithm run under different variable parameters--e.g., number of iterations, choice of distance metric, number of centroids (value of k).
The desired result is tight (small) clusters, each one far away from the others.
The calculation is simple (a sketch in code follows the steps below):
For inter-centroidal separation:
calculate the pair-wise distance between cluster centers; then
calculate the median of those distances.
For intra-cluster variance:
for each cluster, calculate the distance of every data point in a given cluster from its cluster center; next
(for each cluster) calculate the variance of the sequence of distances from the step above; then
average these variance values.
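A compact sketch of this diagnostic (numpy/scipy, assuming you already have a points array, an integer labels array from k-means, and the matching centers array):

    import numpy as np
    from scipy.spatial.distance import pdist

    def clustering_quality(points, labels, centers):
        # Inter-centroidal separation: median pairwise distance between centers.
        separation = np.median(pdist(centers))
        # Intra-cluster variance: per-cluster variance of point-to-center
        # distances, averaged over the clusters.
        variances = []
        for j, center in enumerate(centers):
            d = np.linalg.norm(points[labels == j] - center, axis=1)
            variances.append(d.var())
        return separation / np.mean(variances)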
That's my answer to the first question. Here's the second question:
Is Euclidean distance the correct method for calculating distances? What if I have 100 dimensions instead of 30?
First, the easy question--is Euclidean distance a valid metric as dimensions/features increase?
Euclidean distance is perfectly scalable--works for two dimensions or two thousand. For any pair of data points:
subtract their feature vectors element-wise,
square each item in that result vector,
sum that result,
take the square root of that scalar.
Nowhere in this sequence of calculations is scale implicated.
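Those four steps, written out in numpy for two hypothetical feature vectors (the calculation is identical for 30, 100, or 2000 dimensions):

    import numpy as np

    a = np.random.rand(30)
    b = np.random.rand(30)

    diff = a - b                              # element-wise subtraction
    distance = np.sqrt((diff ** 2).sum())     # square, sum, square root
    # Equivalent one-liner: np.linalg.norm(a - b)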
But whether Euclidean distance is the appropriate similarity metric for your problem depends on your data. For instance, is it purely numeric (continuous)? Or does it have discrete (categorical) variables as well (e.g., gender: M/F)? If one of your dimensions is "current location" and, of 200 users, 100 have the value "San Francisco" and the other 100 have "Boston", you can't really say that, on average, your users are from somewhere in Kansas, but that's sort of what Euclidean distance would do.
In any event, since we don't know anything about it, I'll just give you a simple flow diagram so that you can apply it to your data and identify an appropriate similarity metric.
To identify an appropriate similarity metric given your data:
Euclidean distance is good when dimensions are comparable and on the same scale. If one dimension represents length and another the weight of the item, plain Euclidean distance should be replaced with a weighted one (a short sketch follows these points).
Project the data to 2-D and plot it; this is a good way to see visually whether it works.
Or you may use some sanity check, such as finding the cluster centers and verifying that the items in each cluster are not too far away from their center.
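One common way to make dimensions comparable before applying Euclidean distance is to standardize (or explicitly weight) each feature; a small illustrative sketch with made-up scales:

    import numpy as np

    # Hypothetical data: first column in millimetres, second in kilograms.
    X = np.random.rand(300, 2) * np.array([1000.0, 1.0])

    # Standardize each column so no single dimension dominates the distance.
    X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

    def euclidean(a, b):
        return np.linalg.norm(a - b)

    print(euclidean(X[0], X[1]), euclidean(X_scaled[0], X_scaled[1]))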
Can't you just try sum |xi - yi| instead of (xi - yi)^2 in your code, and see if it makes much difference?
I can't have a graph which will give some idea about the correctness of my algorithm.
A couple of possibilities:
look at some points midway between 2 clusters in detail
vary k a bit, see what happens (what is your k?)
use PCA to map 30d down to 2d; see the plots under calculating-the-percentage-of-variance-measure-for-k-means, also SO questions/tagged/pca
By the way, scipy.spatial.cKDTree can easily give you say 3 nearest neighbors of each point, in p=2 (Euclidean) or p=1 (Manhattan, L1), to look at. It's fast up to ~ 20d, and with early cutoff works even in 128d.
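For instance, a small sketch with random data (the arrays are made up; k=4 because each point is returned as its own nearest neighbor at distance 0):

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.rand(300, 30)
    tree = cKDTree(points)

    # p=1 gives Manhattan (L1) distances, p=2 Euclidean.
    distances, indices = tree.query(points, k=4, p=1)
    neighbor_indices = indices[:, 1:]    # drop the point itself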
Added: I like Cosine distance in high dimensions; see euclidean-distance-is-usually-not-good-for-sparse-data for why.
Euclidean distance is the intuitive and "normal" distance between continuous variables. It can be inappropriate if the data is too noisy or has a non-Gaussian distribution.
You might want to try the Manhattan distance (or cityblock distance), which is robust to that (bear in mind that robustness always comes at a cost: a bit of the information is lost, in this case).
There are many further distance metrics for specific problems (for example, the Bray-Curtis distance for count data). You might want to try some of the distances implemented in pdist from the Python module scipy.spatial.distance.
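A short sketch of trying a few of those metrics side by side (the data array is hypothetical, shaped like the 300 x 30 case in the question):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    X = np.random.rand(300, 30)

    for metric in ("euclidean", "cityblock", "braycurtis"):
        D = squareform(pdist(X, metric=metric))   # full 300 x 300 distance matrix
        print(metric, D.mean())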