A* algorithm as a Finite State Machine? - a-star

I want to make a robot that uses path recognition to get through a maze. A great way of representing the control logic behind this robot would be to use a Finite State Machine. Unfortunately I cannot find any examples on the internet that use an FSM to solve the A* problem. Is this because it is not possible? Is there not a finite number of steps that you can loop through to generate an FSM for A*?
Thanks in advance!

It's impossible in general, because the Open and Closed sets are not bounded in size by any constant, so a fixed FSM will eventually run out of "memory". For any finite maze size it should be possible, but it's not worth it: the FSM would be gigantic, encoding not only the control flow through the algorithm but also the entire "state" of the path finding (particularly the Open and Closed sets), so you would have a ridiculous number of states for all but trivial maze sizes. I don't even know how you could construct such an FSM for a non-trivial case.
Once you have the path, you can follow it with an FSM as the controller, which is a fairly "natural" thing to do, I suppose. I see no advantage to encoding the path-finding algorithm itself as an FSM, only huge disadvantages.
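To make the "FSM as controller" idea concrete, here is a minimal sketch in Python. It assumes A* has already produced a list of grid cells; the FSM's state is simply the index of the next waypoint, and each transition emits one motor command. All names here are illustrative, not from any particular robot library.

```python
def follow_path(path, start):
    """Drive through `path` (a list of (x, y) cells) one step at a time."""
    pos = start
    for waypoint in path:
        dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
        # Each (dx, dy) offset maps to one FSM transition: a motor command.
        move = {(1, 0): "EAST", (-1, 0): "WEST",
                (0, 1): "NORTH", (0, -1): "SOUTH"}[(dx, dy)]
        print(f"at {pos}, transition {move} -> {waypoint}")
        pos = waypoint
    return pos

# Example: follow a two-step path from the origin.
follow_path([(1, 0), (1, 1)], (0, 0))
```

Note that the FSM here only needs as many states as the path has waypoints, which is exactly why executing a path is "natural" for an FSM while searching for one is not.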

Related

Recommended local search optimization algorithm for control domain

Background: I am trying to find a list of floating point parameters for a low level controller that will lead to balance of a robot while it is walking.
Question: Can anybody recommend local search algorithms that will perform well for the domain I just described? The main criterion for me is the speed of convergence to the right solution.
Any help will be greatly appreciated!
P.S. Also, I conducted some research and found out that "Evolutionary Strategy" algorithms are a good fit for continuous state spaces. However, I am not entirely sure whether they will fit my particular problem well.
More info: I am trying to optimize 8 parameters (although it is possible for me to reduce the number of parameters to 4). I do have a simulator, and my main criterion is speed in number of trials, because simulation resets are costly (they take 10-15 seconds on average).
One of the best local search algorithms for a low number of dimensions (up to about 10 or so) is the Nelder-Mead simplex method. By the way, it is used as the default optimizer in MATLAB's fminsearch function. I personally used this method for finding the parameters of a textbook 2nd- or 3rd-degree dynamic system (though a very simple one).
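If you work in Python rather than MATLAB, a quick way to try Nelder-Mead is SciPy's `minimize`. This is just a sketch: the quadratic `objective` below is a stand-in for one costly simulator run, and the parameter count matches the 8 parameters mentioned in the question.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    # Stand-in for a simulator run: replace with code that runs one
    # trial and returns a "badness" score (e.g. how quickly it falls).
    return float(np.sum((params - 0.5) ** 2))

x0 = np.zeros(8)  # initial guess for the 8 controller parameters
result = minimize(objective, x0, method="Nelder-Mead",
                  options={"maxfev": 2000, "xatol": 1e-4, "fatol": 1e-4})
print(result.x)   # best parameter vector found
```

Since every objective evaluation costs a 10-15 second simulation reset, the `maxfev` cap is the knob that directly bounds your total wall-clock budget.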
Another option is the already-mentioned evolutionary strategies. Currently the best one is the Covariance Matrix Adaptation ES, or CMA-ES. There are variations of this algorithm, e.g. BIPOP-CMA-ES, that are probably better than the vanilla version.
You just have to try what works best for you.
In addition to evolutionary algorithms, I recommend you also check out reinforcement learning.
The right method depends a lot on the details of your problem. How many parameters? Do you have a simulator? Do you work in simulation only, or also with real hardware? Is speed measured in number of trials, or in CPU time?

How do I know when I need a dedicated DSP chip?

When designing an embedded system, how can I tell in general when the floating point processing required will be too much for a standard microcontroller?
In case anyone is curious, the system I am designing is a Kalman filter and some motor control. However, I am looking for an engineering methodology for the general case.
The general way to find out whether a given processor can solve your problem is to estimate the number of floating-point operations that have to be run per second, and then compare it to what the processor can do.
This ideal case will be affected by memory-access times, I/O interrupts, etc. In practice, you'll have to run it (although you don't want to hear that).
For the Kalman filter case:
1. Know the sample rate and the sizes of the state vector and the measurement vector.
2. The complexity of the Kalman filter is dominated by the matrix inversion and several matrix multiplications: O(d^3), where d is the size of the state vector (or, for the Information Filter, the inverse formulation: O(z^3), where z is the size of the measurement vector). Online or in books you'll find detailed analyses of the operations required for Kalman filters.
3. Find out what actual operations are run in the algorithm, and add up the number of operations required for each part.
The analysis is essentially the same for a general microcontroller or a DSP, except that some things come for free on the DSP.
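The steps above can be sketched as a back-of-the-envelope calculation. The constants here are rough assumptions, not exact operation counts: one dense d x d matrix multiply costs about 2*d^3 flops, the update step is dominated by a few such multiplies plus one inversion of a z x z matrix, and everything else is ignored.

```python
def kalman_flops_per_second(d, z, sample_rate_hz):
    """Very rough flops/s estimate; d = state size, z = measurement size."""
    matmul = lambda n: 2 * n ** 3          # dense n x n matrix multiply
    predict = 2 * matmul(d)                # covariance prediction: F P F^T
    update = 3 * matmul(d) + matmul(z)     # gain computation, covariance update
    inversion = z ** 3                     # invert the z x z innovation matrix
    return (predict + update + inversion) * sample_rate_hz

# Example: 6-state filter, 3 measurements, running at 100 Hz
print(kalman_flops_per_second(6, 3, 100))
```

Compare the result against the sustained (not peak) flops/s figure from the microcontroller's or DSP's datasheet, leaving generous headroom for the memory-access and interrupt overhead mentioned above.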

Are there any good non-predictive path following algorithms?

All the path following steering algorithms (e.g. for robots steering to follow a colored terrain) that I can find are predictive, so they rely on the robot being able to sense some distance beyond its body.
I need path following behavior on a robot with a light sensor on its underside. It can only see terrain it is directly over and so can't make any predictions; are there any standard examples of good techniques to use for this?
I think the technique you are looking for will most likely depend on the environment you will be operating in, as well as on what resources your robot has access to. I have used NXT robots in the past, so you might find this video interesting (the video is not mine).
Assuming that you will be working on a flat, non-glossy surface, you can let your robot wander around until it finds a predefined colour. The robot can then kick in a 'path following' mechanism and keep tracking the line. If it no longer senses the line, it might try turning right and/or left (since the line might no longer be under the robot because it has reached a bend).
In this case, though, the robot will need to know in advance the colour of the line that it needs to follow.
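The wander/track/search behaviour described above is itself a small finite state machine. Here is a sketch in Python; `on_line` stands in for a thresholded reading from the downward-facing light sensor, and all state and command names are illustrative.

```python
def step(state, on_line):
    """One FSM transition; returns (next_state, motor_command)."""
    if state == "WANDER":
        # Roam until the sensor first sees the predefined line colour.
        return ("FOLLOW", "forward") if on_line else ("WANDER", "forward")
    if state == "FOLLOW":
        # Keep driving while on the line; if lost, start searching.
        return ("FOLLOW", "forward") if on_line else ("SEARCH", "turn_right")
    if state == "SEARCH":
        # Sweep until the line is found again (a real robot would
        # alternate left/right sweeps with a timeout).
        return ("FOLLOW", "forward") if on_line else ("SEARCH", "turn_left")
    raise ValueError(f"unknown state {state!r}")
```

Running `step` in a loop with real sensor readings gives the whole behaviour; no prediction is needed, only the current reading plus one remembered state.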
The reason the path finding algorithms you are seeing are predictive is that the robot needs to be able to interpret what it is "seeing" in context.
For instance, consider a coloured path in the form of a straight line. Even in this simple example, how is the robot to know:
Whether there is a coloured square in front of it, and hence whether it should advance
Which direction it is travelling in
These two questions are the fundamental goals the algorithm you are looking for would answer (and things would get more complex as you add more difficult terrain and paths).
The first can only be answered with suitable forward-looking ability (hence a predictive algorithm), and the second can only be answered with some memory of the previous state.
Based solely on the details you provided in your question, you wouldn't be able to implement an appropriate solution. However, I would imagine that your sensor input and on-board memory are in fact suitable for a predictive solution; you may just need to investigate further what your hardware allows for.

How to test a Machine Learning or statistical NLP algorithm implementation package?

I am working on testing several Machine Learning algorithm implementations, checking whether they work as efficiently as described in the papers and making sure they can offer great power to our statistical NLP (Natural Language Processing) platform.
Could you show me some methods for testing an algorithm implementation?
1) What aspects should I test?
2) How?
3) Do I have to follow some basic steps?
4) Do I have to consider situations specific to different programming languages?
5) Do I have to understand the algorithm? I mean, does it help if I really know what the algorithm is and how it works?
Basically, we are using C or C++ to implement the algorithms, and our working environment is Linux/Unix. Our testing methods only focus on black-box testing and on testing the input/output of functions. I am eager to improve them, but I don't have any better ideas right now...
Thanks!
For many machine learning and statistical classification tasks, the standard metrics for measuring quality are precision and recall. Most published algorithms will make some kind of claim about these metrics, or you could implement them and run the tests yourself. This should provide a good indicative measure of the quality you can expect.
When you talk about efficiency of an algorithm, this is usually some statement about the time or space performance of an algorithm in terms of the size or complexity of its input (often expressed in Big O notation). Most published algorithms will report an upper bound on the time and space characteristics of the algorithm. You can use that as a comparative indicator, although you need to know a little bit about computational complexity in order to make sure you're not fooling yourself. You could also possibly derive this information from manual inspection of program code, but it's probably not necessary, because this information is almost always published along with the algorithm.
Finally, understanding the algorithm is always a good idea. It makes it easier to know what you need to do as a user of that algorithm to ensure you're getting the best possible results (and indeed to know whether the results you are getting are sensible or not), and it will allow you to apply quality measures such as those I suggested in the first paragraph of this answer.

How to avoid that the robot gets trapped in local minimum?

I have spent some time occupying myself with motion planning for robots, and have for a while wanted to explore the possibilities that the "potential field" method offers. My challenge is to keep the robot from getting trapped in a "local minimum" when using the "potential field" method. Instead of using a "random walk" approach to escape, I have wondered whether it is possible to implement a variation of A* which could act as a sort of guide, precisely to keep the robot from getting trapped in a local minimum.
Does anyone have experience with this kind of approach, or can anyone point me to literature that avoids local minima more effectively than the "random walk" approach?
A* and potential fields are both search strategies. The problem you are experiencing is that some search strategies are more "greedy" than others, and more often than not, algorithms that are too greedy get trapped in local minima.
There are some alternatives where the tension between greediness (the main cause of getting trapped in local minima) and diversity (trying new alternatives that don't seem to be a good choice in the short term) is parameterized.
A few years ago I researched ant algorithms a bit (search for Marco Dorigo, ACS, ACO); they are a family of search algorithms that can be applied to pretty much anything, and they let you control the greediness vs. exploration of your search space. In one of their papers, they even compared search performance when solving the TSP (the canonical travelling salesman problem) using genetic algorithms, simulated annealing and others. The ants won.
I've solved the TSP in the past using genetic algorithms, and I still have the source code in Delphi if you'd like it.
Use harmonic function path planning. Harmonic functions are potential functions that describe fluid flow and other natural phenomena. If they are set up correctly using boundary conditions, then they have no local minima. These have been in use since the early 90s by Rod Grupen and Chris Connolly. These functions have been shown to be a specific form of optimal control that minimizes collision probabilities. They can be computed efficiently in low-dimensional spaces using difference equations (e.g. Gauss-Seidel iteration, successive over-relaxation, etc.).
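A small sketch of the idea on a 2D occupancy grid: pin the goal at potential 0 and obstacles/walls at 1, relax the free interior cells with Gauss-Seidel until the field is (approximately) harmonic, then follow steepest descent. Because a harmonic function has no interior local minima, the descent cannot get stuck. The grid and helper names below are illustrative.

```python
def plan(grid, goal, start, sweeps=500):
    """grid: 0 = free, 1 = obstacle; returns a list of cells start -> goal."""
    h, w = len(grid), len(grid[0])
    u = [[1.0] * w for _ in range(h)]          # walls/obstacles pinned at 1
    u[goal[0]][goal[1]] = 0.0                  # goal pinned at 0
    for _ in range(sweeps):                    # Gauss-Seidel relaxation
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if grid[i][j] == 0 and (i, j) != goal:
                    u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                      + u[i][j-1] + u[i][j+1])
    path, pos = [start], start
    while pos != goal:                         # steepest descent on u
        i, j = pos
        pos = min(((i-1, j), (i+1, j), (i, j-1), (i, j+1)),
                  key=lambda c: u[c[0]][c[1]])
        path.append(pos)
    return path

grid = [[1, 1, 1, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 0, 1, 0, 1],   # an obstacle in the middle of the free space
        [1, 0, 0, 0, 1],
        [1, 1, 1, 1, 1]]
print(plan(grid, goal=(1, 3), start=(3, 1)))
```

In higher dimensions or on large grids, successive over-relaxation converges much faster than this plain Gauss-Seidel sweep, which is why it is the method usually cited in the harmonic-function planning literature.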

Resources