Path planning: searching from goal to initial state? (A*)

The problem: is it true that finding a path from the goal to the start point is much more efficient than finding a path from the start to the goal?
If this is true, can someone help me out and explain why?
My opinion:
it shouldn't make a difference, because finding a way from goal to start is just like renaming the goal to start and the start to goal.

The answer to your question depends entirely on the path-finding algorithm you use.
One of the most well-known path-finding algorithms, A* (A-Star), is commonly used in reverse, and it all has to do with the heuristic. Since we usually use proximity to the target as the heuristic, the search can get stuck behind obstacles, and those obstacles may be easier to navigate when faced the other way around. A great explanation with examples can be found here. Just for clarity: if there is no prior knowledge of obstacles, then there is no predictable difference between forwards and backwards path finding with A*.
Another reason you might want to reverse the path finding is if you have multiple actors trying to reach the same goal. Instead of having to execute A* (or another path-finding algorithm) once per actor, you can combine them into a single execution of a graph-exploration path-finding algorithm. For example, a variation on Dijkstra's algorithm run from the goal finds the shortest distances to all actors in one graph exploration, as sketched below.
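A minimal sketch of that single-run idea in Python (the graph, weights and actor names below are made up for illustration): one Dijkstra expansion outward from the goal yields every actor's shortest distance to it at once.

```python
import heapq

# Dijkstra expanded from the goal: after one run, `dist` holds the
# shortest distance from the goal to every reachable node, so every
# actor can read off its own distance (and plan its first move).
def dijkstra_from(goal, graph):
    dist = {goal: 0}
    heap = [(0, goal)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Made-up adjacency list: node -> [(neighbour, edge weight), ...]
graph = {
    "goal":   [("a", 1), ("b", 4)],
    "a":      [("goal", 1), ("b", 2), ("actor1", 5)],
    "b":      [("goal", 4), ("a", 2), ("actor2", 1)],
    "actor1": [("a", 5)],
    "actor2": [("b", 1)],
}
dist = dijkstra_from("goal", graph)
print(dist["actor1"], dist["actor2"])  # one run answers every actor
```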

Related

Non-Convex Optimization

I have a gradient descent (GD) implementation, and I am trying to come up with a non-convex univariate optimization problem for it. I want to plot the function in Python and then show two runs of GD: one where it gets caught in a local minimum and one where it manages to reach the global minimum. I am thinking of using different starting points to accomplish this.
That said, I am somewhat clueless about coming up with such a function or choosing the two starting points; any help is appreciated.
Your question is really broad and hard to answer precisely, because non-convex optimization is rather complicated, as is any iterative algorithm that solves such problems. As a quick hint, you can use the Mexican hat function (or a simple polynomial that gives you what you want) for your test case; the polynomial approach is sketched below. These papers can also give you some context: Paper1 Paper2
Good luck.
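Since the question asks for a concrete function, here is a minimal sketch in Python using a tilted double-well polynomial (my own choice of test case, not one prescribed above). Starting gradient descent at x = 2 traps it in the shallow local minimum near x ≈ 1.35, while starting at x = -2 reaches the global minimum near x ≈ -1.47.

```python
import numpy as np
import matplotlib.pyplot as plt

# A simple non-convex univariate test function: a tilted double well.
def f(x):
    return x**4 - 4 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x0, lr=0.01, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - lr * grad_f(xs[-1]))
    return np.array(xs)

run_local = gradient_descent(x0=2.0)    # settles in the local minimum
run_global = gradient_descent(x0=-2.0)  # reaches the global minimum

grid = np.linspace(-2.5, 2.5, 400)
plt.plot(grid, f(grid), label="f(x)")
plt.plot(run_local, f(run_local), "o-", markersize=3, label="start x = 2")
plt.plot(run_global, f(run_global), "o-", markersize=3, label="start x = -2")
plt.legend()
plt.show()
```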

Reinforcement Learning in arbitrarily large action/state spaces

I'm interested in using deep reinforcement learning to find a unique optimal path back home among (too many) possibilities and a few (required) intermediate stops (for instance, buy a coffee or refuel).
Furthermore, I want to apply this in cases where the agent doesn't know a model of the environment and can't try all possible combinations of states and actions, i.e. where approximation techniques are needed for the Q-value function (and/or the policy).
I've read about methods for facing cases like this, where rewards, if any, are sparse and binary, such as Monte Carlo tree search (which implies some sort of modelling and planning, as I understand it) or Hindsight Experience Replay (HER), which applies ideas from DDPG.
But there are so many different kinds of algorithms to consider that I'm a bit confused about what's best to begin with.
I know it's a difficult problem, and maybe it's too naive to ask this, but is there any clear, direct and well-known way to solve the problem I want to face?
Thanks a lot!
Matias
If the final destination is fixed, as it is in this case (home), you can go for a dynamic search, since A* will not work in a changeable environment.
And if you want to use a deep learning algorithm, then go for A3C with experience replay, given the large action/state spaces; it is capable of handling complex problems. A simple value-function-approximation baseline is sketched below as a starting point.
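As one concrete baseline (my suggestion, not something the answer above prescribes): a minimal sketch of Q-learning with linear function approximation, often worth trying before DDPG/HER/A3C. Everything below is illustrative; the random "environment" is a placeholder stub for your real transitions and rewards.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 8, 4
w = np.zeros((n_actions, n_features))   # one weight vector per action
alpha, gamma, epsilon = 0.05, 0.99, 0.1

def q_values(state):
    return w @ state                    # approximate Q(s, a) for all actions

def choose_action(state):
    if rng.random() < epsilon:          # epsilon-greedy exploration
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(state)))

def td_update(s, a, r, s_next, done):
    target = r if done else r + gamma * np.max(q_values(s_next))
    w[a] += alpha * (target - q_values(s)[a]) * s   # semi-gradient step

# Placeholder rollout: random feature vectors and a sparse binary reward,
# standing in for real model-free interaction with the environment.
state = rng.standard_normal(n_features)
for _ in range(1000):
    action = choose_action(state)
    next_state = rng.standard_normal(n_features)
    reward = float(rng.random() < 0.05)
    td_update(state, action, reward, next_state, done=False)
    state = next_state
```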

Neo4j Hamiltonian path (TSP)

Being a newbie with graphs, I'm wondering whether it's possible to use Neo4j to calculate an optimal route that passes through all entered waypoints (distances are the weights of the edges).
I'm familiar with the ability to use A* and Dijkstra to find shortest/cheapest paths, but I haven't found an easy way to do this. Since the number of nodes in each calculation will be relatively small (< 30), I'm primarily hoping for ease of implementation with Neo4j (if possible) compared to coding the solution from scratch in Node.js, since I guess performance won't be a problem at this scale.
Thank you for your time!
For this you should take a look at the Gremlin traversal language, which Neo4j also implements; pure Cypher won't be of any help. A from-scratch heuristic is sketched below in case you go that route.
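If you do end up coding it yourself, here is a sketch in Python (independent of Neo4j), assuming `dist` is a symmetric matrix of edge weights between waypoints: a nearest-neighbour tour improved by 2-opt is a common heuristic that is usually adequate at ~30 nodes, where brute force is already infeasible.

```python
def path_length(path, dist):
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def nearest_neighbour(dist, start=0):
    # Greedy construction: always visit the closest unvisited waypoint.
    path, unvisited = [start], set(range(len(dist))) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[path[-1]][j])
        path.append(nxt)
        unvisited.remove(nxt)
    return path

def two_opt(path, dist):
    # Local improvement: reverse any segment that shortens the path.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(path) - 1):
            for j in range(i + 1, len(path)):
                candidate = path[:i] + path[i:j + 1][::-1] + path[j + 1:]
                if path_length(candidate, dist) < path_length(path, dist):
                    path, improved = candidate, True
    return path

# Tiny made-up example with 4 waypoints.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
route = two_opt(nearest_neighbour(dist), dist)
print(route, path_length(route, dist))
```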

Are there any good non-predictive path following algorithms?

All the path-following steering algorithms I can find (e.g. for robots steering to follow coloured terrain) are predictive, so they rely on the robot being able to sense some distance beyond its body.
I need path following behavior on a robot with a light sensor on its underside. It can only see terrain it is directly over and so can't make any predictions; are there any standard examples of good techniques to use for this?
I think the technique you are looking for will most likely depend on the environment you will be operating in, as well as on what resources your robot has access to. I have used NXT robots in the past, so you might find this video interesting (the video is not mine).
Assuming you will be working on a flat, non-glossy surface, you can let your robot wander around until it finds a predefined colour. The robot can then kick in a 'path following' mechanism and keep tracking the line. If it no longer senses the line, it can try turning right and/or left (since the line might no longer be under the robot because it has reached a bend); a rough control loop for this is sketched below.
In this case, though, the robot will need to know in advance the colour of the line it has to follow.
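A rough sketch of that wander-then-track loop in Python. The hardware hooks `read_colour()` and `drive(left, right)` are hypothetical stubs, not a real robot API; replace them with your platform's actual motor and light-sensor bindings (e.g. NXT/EV3).

```python
import time

LINE = "black"   # the line colour the robot is told about in advance

def read_colour():
    return "black"          # stub: return the colour under the sensor

def drive(left, right):
    pass                    # stub: set left/right motor power (-100..100)

def sweep(direction, duration=0.5):
    """Arc briefly in one direction; return True if the line reappears."""
    deadline = time.time() + duration
    while time.time() < deadline:
        drive(40 * direction, -40 * direction)
        if read_colour() == LINE:
            return True
    return False

def follow_line():
    while True:
        if read_colour() == LINE:
            drive(50, 50)                      # on the line: go straight
        elif not sweep(1) and not sweep(-1):   # probe right, then left
            drive(30, -30)                     # still lost: rotate and wander
        time.sleep(0.01)
```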
The reason the path-finding algorithms you are seeing are predictive is that the robot needs to be able to interpret what it is "seeing" in context.
For instance, consider a coloured path in the form of a straight line. Even in this simple example, how is the robot to know:
whether there is a coloured square in front of it, and hence whether it should advance;
which direction it is even travelling in.
These two questions are the fundamental ones the algorithm you are looking for would have to answer (and things would get more complex as you add more difficult terrain and paths).
The first can only be answered with suitable forward-looking ability (hence a predictive algorithm), and the second can only be answered with some memory of the previous state.
Based solely on the details you provided in your question, you wouldn't be able to implement an appropriate solution. I would imagine, though, that your sensor input and on-board memory are in fact suitable for a predictive solution; you may just need to investigate further what the capabilities of your hardware allow for.

What's the best approach to recognize patterns in data, and what's the best way to learn more on the topic?

A developer I am working with is writing a program that analyzes images of pavement to find cracks. For every crack his program finds, it produces an entry in a file that tells me which pixels make up that particular crack. There are two problems with his software, though:
1) It produces several false positives.
2) If it finds a crack, it only finds small sections of it and denotes those sections as separate cracks.
My job is to write software that will read this data, analyze it, and tell the difference between false positives and actual cracks. I also need to determine how to group all the small sections of a crack together as one.
I have tried various ways of filtering the data to eliminate false positives, and have been using neural networks, with a limited degree of success, to group cracks together. I understand there will be some error, but as of now there is just too much. Does anyone have any insight, for a non-AI expert, into the best way to accomplish my task or to learn more about it? What kinds of books should I read, or what kinds of classes should I take?
EDIT: My question is more about how to notice patterns in my coworker's data and identify those patterns as actual cracks. It's the higher-level logic I'm concerned with, not so much the low-level logic.
EDIT: In all actuality, it would take AT LEAST 20 sample images to give an accurate representation of the data I'm working with. It varies a lot. But I do have a sample here, here, and here. These images have already been processed by my coworker's program. The red, blue, and green data is what I have to classify (red stands for a dark crack, blue for a light crack, and green for a wide/sealed crack).
In addition to the useful comments about image processing, it also sounds like you're dealing with a clustering problem.
Clustering algorithms come from the machine learning literature, specifically unsupervised learning. As the name implies, the basic idea is to try to identify natural clusters of data points within some large set of data.
For example, the picture below shows how a clustering algorithm might group a bunch of points into 7 clusters (indicated by circles and color):
[image: a set of 2-D points grouped into 7 clusters (source: natekohl.net)]
In your case, a clustering algorithm would attempt to repeatedly merge small cracks to form larger cracks, until some stopping criterion is met. The end result would be a smaller set of joined cracks. Of course, cracks are a little different from two-dimensional points; part of the trick in getting a clustering algorithm to work here will be defining a useful distance metric between two cracks.
Popular clustering algorithms include k-means clustering (demo) and hierarchical clustering. That second link also has a nice step-by-step explanation of how k-means works; a sketch of the hierarchical variant applied to crack segments follows.
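Here is a minimal sketch of the hierarchical approach applied to crack segments, using SciPy. The distance metric (minimum distance between segment endpoints) and the merge threshold are illustrative assumptions; choosing them well is exactly the "trick" mentioned above.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def segment_distance(s, t):
    # Minimum distance between any endpoint of segment s and of segment t;
    # segments are given as (x1, y1, x2, y2).
    ends_s, ends_t = [s[:2], s[2:]], [t[:2], t[2:]]
    return min(np.hypot(a[0] - b[0], a[1] - b[1])
               for a in ends_s for b in ends_t)

# Made-up detections: three fragments of one crack plus a distant one.
segments = np.array([
    [0, 0, 10, 2],
    [11, 2, 20, 5],
    [21, 5, 30, 6],
    [100, 50, 105, 55],
])

condensed = pdist(segments, metric=segment_distance)
labels = fcluster(linkage(condensed, method="single"),
                  t=5.0, criterion="distance")
print(labels)   # segments sharing a label form one merged crack
```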
EDIT: This paper by some engineers at Phillips looks relevant to what you're trying to do:
Chenn-Jung Huang, Chua-Chin Wang, Chi-Feng Wu, "Image Processing Techniques for Wafer Defect Cluster Identification," IEEE Design and Test of Computers, vol. 19, no. 2, pp. 44-48, March/April, 2002.
They're doing a visual inspection for defects on silicon wafers, and use a median filter to remove noise before using a nearest-neighbor clustering algorithm to detect the defects.
Here are some related papers/books that they cite that might be useful:
M. Taubenlatt and J. Batchelder, “Patterned Wafer Inspection Using Spatial Filtering for Cluster Environment,” Applied Optics, vol. 31, no. 17, June 1992, pp. 3354-3362.
F.-L. Chen and S.-F. Liu, “A Neural-Network Approach to Recognize Defect Spatial Pattern in Semiconductor Fabrication.” IEEE Trans. Semiconductor Manufacturing, vol. 13, no. 3, Aug. 2000, pp. 366-373.
G. Earl, R. Johnsonbaugh, and S. Jost, Pattern Recognition and Image Analysis, Prentice Hall, Upper Saddle River, N.J., 1996.
Your problem falls in the very broad field of image classification. These types of problems can be notoriously difficult, and at the end of the day, solving them is an art. You must exploit every piece of knowledge you have about the problem domain to make it tractable.
One fundamental issue is normalization: you want similarly classified objects to be as similar as possible in their data representation. For example, if you have images of the cracks, do they all have the same orientation? If not, rotating the images may help your classification. The same goes for scaling and translation (refer to this).
You also want to remove as much irrelevant data as possible from your training sets. Rather than working directly on the image, you could use edge extraction (for example, Canny edge detection; a sketch follows). This removes all the 'noise' from the image, leaving only the edges. The exercise is then reduced to identifying which edges are cracks and which are natural pavement.
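A quick sketch of that preprocessing step with OpenCV's Canny detector; the file name, blur kernel and thresholds are placeholder assumptions to tune against your pavement images.

```python
import cv2

img = cv2.imread("pavement.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress pavement texture noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("pavement_edges.png", edges)
```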
If you want to fast-track to a solution, then I suggest you first try your luck with a convolutional neural network (a minimal sketch follows), which can perform pretty good image classification with a minimum of preprocessing and normalization. It's pretty well known in handwriting recognition and might be just right for what you're doing.
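A minimal sketch of that idea, assuming you can label image patches as crack / not-crack; the architecture and sizes below are placeholder choices, not a tuned model.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(patch contains a crack)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(patches, labels, epochs=10)  # patches: (N, 64, 64, 1) array
```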
I'm a bit confused by the way you've chosen to break down the problem. If your coworker isn't identifying complete cracks, and that's the spec, then that makes it your problem. But if you manage to stitch all the cracks together, and avoid his false positives, then haven't you just done his job?
That aside, I think this is an edge detection problem rather than a classification problem. If the edge detector is good, then your issues go away.
If you are still set on classification, then you are going to need a training set with known answers, since you need a way to quantify what differentiates a false positive from a real crack. However I still think it is unlikely that your classifier will be able to connect the cracks, since these are specific to each individual paving slab.
I have to agree with ire_and_curses: once you dive into the realm of edge detection to patch your co-developer's crack detection and remove his false positives, it seems as if you would be doing his job. If you can patch what his software did not detect and remove the false positives around what he has given you, then it seems you would be able to do this for the full image.
If the spec is for him to detect the cracks, and you classify them, then it's his job to do the edge detection and remove false positives. And your job to take what he has given you and classify what type of crack it is. If you have to do edge detection to do that, then it sounds like you are not far from putting your co-developer out of work.
There are some very good answers here. But if you are unable to solve the problem, you may consider Mechanical Turk. In some cases it can be very cost-effective for stubborn problems. I know people who use it for all kinds of things like this (verification that a human can do easily but proves hard to code).
https://www.mturk.com/mturk/welcome
I am no expert by any means, but try looking at Haar cascades. You may also wish to experiment with the OpenCV toolkit; these two things together are used for face detection and other object-detection tasks.
You may have to do "training" to develop a Haar cascade for cracks in pavement; a detection sketch is below.
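A detection sketch assuming such a cascade has already been trained (for example with OpenCV's opencv_traincascade tool); the cascade file and image path are hypothetical.

```python
import cv2

cascade = cv2.CascadeClassifier("crack_cascade.xml")    # hypothetical file
img = cv2.imread("pavement.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
regions = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in regions:
    print(f"possible crack region at x={x}, y={y}, w={w}, h={h}")
```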
What’s the best approach to recognize patterns in data, and what’s the best way to learn more on the topic?
The best approach is to study pattern recognition and machine learning. I would start with Duda's Pattern Classification and use Bishop's Pattern Recognition and Machine Learning as a reference. It will take a good while for the material to sink in, but getting a basic sense of pattern recognition and of the major approaches to classification problems should give you direction. I could sit here and make assumptions about your data, but honestly you probably have the best idea about the data set, since you've been dealing with it more than anyone. Some useful techniques, for instance, are support vector machines and boosting.
Edit: An interesting application of boosting is real-time face detection; see Viola and Jones's Rapid Object Detection using a Boosted Cascade of Simple Features (pdf). Also, looking at the sample images, I'd say you should try improving the edge detection a bit. Maybe smoothing the images with a Gaussian and running more aggressive edge detection would increase detection of smaller cracks.
I suggest you pick up any image processing textbook and read up on the subject.
In particular, you might be interested in morphological operations like dilation and erosion, which complement the job of an edge detector (a quick sketch follows). There is plenty of material on the net...
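A quick sketch with OpenCV: dilation can bridge the small gaps between crack fragments and erosion removes speckle; the kernel size and iteration counts are assumptions to tune.

```python
import cv2
import numpy as np

edges = cv2.imread("pavement_edges.png", cv2.IMREAD_GRAYSCALE)  # placeholder
kernel = np.ones((3, 3), np.uint8)
bridged = cv2.dilate(edges, kernel, iterations=2)   # join nearby fragments
cleaned = cv2.erode(bridged, kernel, iterations=1)  # thin back down
cv2.imwrite("pavement_cleaned.png", cleaned)
```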
This is an image processing problem. There are lots of books written on the subject, and much of the material in these books will go beyond a line-detection problem like this. Here is the outline of one technique that would work for the problem.
When you find a crack, you find some pixels that make up the crack. Edge detection filters or other edge detection methods can be used for this.
Start with one (any) pixel in a crack, then "follow" it to make a multipoint line out of the crack, saving the points that make up the line. You can remove intermediate points if they lie close to a straight line. Do this with all the crack pixels. If you have a star-shaped crack, don't worry about it: just follow the pixels in one (or two) directions to make up a line, then remove those pixels from the set of crack pixels. The other legs of the star will be recognized as separate lines (for now).
You might perform some thinning on the crack pixels before step 1; in other words, check the neighbours of each pixel, and if there are too many, ignore that pixel. (This is a simplification; you can find several algorithms for this.) Another preprocessing step might be to remove all the lines that are too thin or too faint, which might help with the false positives.
Now you have a lot of short, multipoint lines. For the endpoints of each line, find the nearest line; if the two lines are within a tolerance, "connect" them, i.e. link them or add them to the same structure or array. This way you can connect the close cracks, which are likely the same crack in the concrete. A sketch of this linking step follows.
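A sketch of that linking step in Python: polylines are lists of (x, y) points, and two lines are joined when their nearest endpoints fall within a tolerance (the tolerance value is an assumption to tune).

```python
import numpy as np

def endpoints(line):
    return [np.asarray(line[0]), np.asarray(line[-1])]

def link_lines(lines, tol=5.0):
    # Union-find over line indices: lines with close endpoints share a root.
    parent = list(range(len(lines)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            gap = min(np.linalg.norm(a - b)
                      for a in endpoints(lines[i])
                      for b in endpoints(lines[j]))
            if gap <= tol:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(lines)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())    # each group is one joined crack

lines = [[(0, 0), (5, 1), (10, 2)],
         [(12, 2), (20, 4)],        # close to the first line's end
         [(100, 50), (110, 52)]]    # far away: stays separate
print(link_lines(lines))            # e.g. [[0, 1], [2]]
```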
It seems like no matter the algorithm, some parameter adjustment will be necessary for good performance. Write it so it's easy to make minor changes in things like intensity thresholds, minimum and maximum thickness, etc.
Depending on the usage environment, you might want to allow user judgement to determine the questionable cases, and/or allow a user to review all the cracks and click to combine, split, or remove detected cracks.
You got some very good answers, especially Nate's, and all the links and books suggested are worthwhile. However, I'm surprised nobody suggested the one book that would have been my top pick: O'Reilly's Programming Collective Intelligence. The title may not seem germane to your question, but believe me, the contents are: it is one of the most practical, programmer-oriented treatments of data mining and machine learning I've ever seen. Give it a spin!
It sounds a little like a problem from rock mechanics, where there are joints in a rock mass and these joints have to be grouped into 'sets' by orientation, length and other properties. One method that works well in that setting is clustering, although classical k-means does seem to have a few problems, which I have addressed in the past by using a genetic algorithm to run the iterative solution.
In this instance I suspect it might not work quite the same way. Here I suspect that you need to create your groups to start with (i.e. longitudinal, transverse, etc.) and define exactly what the behaviour of each group is, e.g. whether a single longitudinal crack can branch part way along its length, and if it does, what that does to its classification.
Once you have that, then for each crack I would generate a random crack, or pattern of cracks, based on the classification you have created. You can then use something like a least-squares approach to see how closely the crack you are checking fits against the random cracks you have generated. You can repeat this analysis many times, in the manner of a Monte Carlo analysis, to identify which of the randomly generated cracks best fits the one you are checking.
To deal with the false positives, you will need to create a pattern for each of the different types of false positive (e.g. the edge of a kerb is a straight line). You will then be able to run the analysis, picking out the most likely group for each crack you analyse.
Finally, you will need to 'tweak' the definitions of the different crack types to try to get a better result. I guess this could use either an automated or a manual approach, depending on how you define your crack types.
One other modification that sometimes helps when I'm doing problems like this is to have a random group. By tweaking the sensitivity of the random group, i.e. how likely a crack is to be included in it, you can sometimes adjust the sensitivity of the model to complex patterns that don't really fit anywhere.
Good luck, looks to me like you have a real challenge.
You should read about data mining, especially pattern mining.
Data mining is the process of extracting patterns from data. As more data are gathered, with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform these data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.
A good book on the subject is Data Mining: Practical Machine Learning Tools and Techniques (http://www.amazon.com/Data-Mining-Ian-H-Witten/dp/3446215336).
Basically, what you have to do is apply statistical tools and methodologies to your datasets. The most commonly used comparison methodologies are Student's t-test and the chi-squared test, which let you see, with some stated confidence, whether two variables are related; a quick sketch of both is below.
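A quick sketch of both tests with SciPy; the numbers are placeholders. The t-test asks whether two samples (say, pixel intensities of detections that turned out real versus false) differ in mean; the chi-squared test asks whether two categorical variables (say, detector output versus ground truth) are independent.

```python
from scipy import stats

# Placeholder samples of some measured feature for two groups.
real_cracks = [31, 28, 35, 30, 27, 33]
false_positives = [52, 49, 55, 50, 53, 51]
t_stat, p_t = stats.ttest_ind(real_cracks, false_positives)

# Placeholder 2x2 contingency table:
# rows = detector says crack / no crack, cols = truly crack / not.
table = [[40, 10],
         [15, 85]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(p_t, p_chi)   # small p-values suggest a real relationship
```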
