For the sliding-puzzle problem, how should I record the shortest path?
I have tried the BFS method, and I also recorded the previous state for each state (as suggested elsewhere) so that the path can be reconstructed at the end; this works well for the 3×3 puzzle.
But when n > 4, the memory overflows.
Should I choose something more efficient like A*, or should I store the map of [prev, cur] states in a simpler way?
Is it reasonable to record, as the current state each time, only the coordinates of the tile swapped with the blank (0)?
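For what it's worth, here is a minimal sketch of that idea, assuming the board is encoded as a flat tuple and the move stored per state is the coordinate of the tile swapped with the blank; all names are placeholders:

```
from collections import deque

def neighbours(state, n):
    """Yield (next_state, move) pairs, where move is the board coordinate of the
    tile that gets swapped with the blank (0). State is a flat tuple of length n*n."""
    z = state.index(0)
    zr, zc = divmod(z, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = zr + dr, zc + dc
        if 0 <= r < n and 0 <= c < n:
            j = r * n + c
            nxt = list(state)
            nxt[z], nxt[j] = nxt[j], nxt[z]
            yield tuple(nxt), (r, c)

def bfs_moves(start, goal, n):
    """Plain BFS that remembers, per state, only its parent and the swap move,
    then replays the chain backwards to rebuild the shortest move sequence."""
    prev = {start: (None, None)}            # state -> (parent state, move)
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            moves = []
            while prev[state][0] is not None:
                parent, move = prev[state]
                moves.append(move)
                state = parent
            return moves[::-1]
        for nxt, move in neighbours(state, n):
            if nxt not in prev:
                prev[nxt] = (state, move)
                queue.append(nxt)
    return None
```

Note that `prev` still has to hold every visited state, so this only trims what is stored per state; for 4×4 and larger you would likely need something like A* or IDA* with a good heuristic anyway.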
We have this camera array arranged in an arc around a person (red dot). Think The Matrix - each camera fires at the same time and then we create an animated gif from the output. The problem is that it is near impossible to align the cameras exactly and so I am looking for a way in OpenCV to align the images better and make it smoother.
Looking for general steps. I'm unsure of the order I would do it in. If I start with image 1 and match 2 to it, then 2 moves further from 3 than it was at the start, so matching 3 to 2 would require a bigger change... and the error would propagate. I have seen similar alignments done, though. Any help much appreciated.
Here's a thought. How about performing a quick and very simple "calibration" of the imaging system by using a single reference point?
The best thing about this is that you can try it out pretty quickly, and even if the results are too bad for you, they can give you some more insight into the problem. But the bad thing is it may just not be good enough, because it's hard to think of anything "less advanced" than this. Here's the description:
Remove the object from the scene
Place a small object (let's call it a "dot") at a position that roughly corresponds to the center of mass of the object you are about to record (the center of the area denoted by the red circle).
Record a single image with each camera
Use some simple algorithm to find the position of the dot on every image
Compute distances from dot positions to image centers on every image
Shift images by (-x, -y), where (x, y) is the above mentioned distance; after that, the dot should be located in the center of every image.
When recording an actual object, use these precomputed distances to shift all images. After you translate the images, they will be roughly aligned. But since you are shooting an object that is three-dimensional and has considerable size, I am not sure whether the alignment will be very convincing ... I wonder what results you'd get, actually.
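A rough OpenCV sketch of steps 4-6, assuming the dot shows up as the brightest blob so a plain threshold-and-centroid is enough to find it (the threshold value and names are made up):

```
import cv2
import numpy as np

def dot_offset(calib_img):
    """Find the calibration dot (here: simple threshold + centroid, which is only
    one way to do step 4) and return its offset (dx, dy) from the image centre."""
    gray = cv2.cvtColor(calib_img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    dot_x, dot_y = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of the dot
    h, w = gray.shape
    return dot_x - w / 2.0, dot_y - h / 2.0

def shift_by_offset(img, dx, dy):
    """Translate the image by (-dx, -dy) so the dot lands in the centre (step 6)."""
    h, w = img.shape[:2]
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(img, M, (w, h))

# Precompute one (dx, dy) per camera from its calibration frame, then apply the
# same shift to every real frame from that camera (step 7).
```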
If I understand the application correctly, you should be able to obtain the relative pose of each camera in your array using homographies:
https://docs.opencv.org/3.4.0/d9/dab/tutorial_homography.html
From here, the next step would be to correct for alignment issues by estimating the transform between each camera's actual position and their 'ideal' position in the array. These ideal positions could be computed relative to a single camera, or relative to the focus point of the array (which may help simplify calculation). For each image, applying this corrective transform will result in an image that 'looks like' it was taken from the 'ideal' position.
Note that you may need to estimate relative camera pose in 3-4 array 'sections', as it looks like you have a full 180deg array (e.g. estimate homographies for 4-5 cameras at a time). As long as you have some overlap between sections it should work out.
Most of my experience with this sort of thing comes from using MATLAB's stereo camera calibrator app and related functions. Their help page gives a good overview of how to get started estimating camera pose. OpenCV has similar functionality.
https://www.mathworks.com/help/vision/ug/stereo-camera-calibrator-app.html
The cited paper by Zhang gives a great description of the mathematics of pose estimation from correspondence, if you're interested.
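For a quick experiment, here is a bare-bones OpenCV sketch of warping one camera's image toward a reference ('ideal') view via a feature-based homography; the ORB matching is just one possible way to get correspondences, not a calibrated solution:

```
import cv2
import numpy as np

def warp_to_reference(img, ref_img):
    """Estimate a homography from img to ref_img via ORB feature matches and warp
    img so it 'looks like' it was taken from the reference position. A sketch only;
    a proper calibration would use known correspondences or a calibration target."""
    g1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))
```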
I have implemented a particle filter to localise a robot. If I want to get the most likely set of paths, what would be the best way to do it?
I was wondering if taking the particle with the highest weight is a correct way to do it?
First, each particle should track its own path. This can be done by adding a list of waypoints to each particle. When you want the most likely path, you can take the path from the particle with the highest weight. Note that this is not the same as taking the most likely position at each time step and aggregating those into a path!
You can also use the weighted average of all the particles' paths. Whether that helps depends on what distribution you expect. When it has only one mode, this may give a more precise path. In contrast, if you expect a multimodal distribution (imagine an obstacle where half of the particles pass on the left and the other half on the right), the weighted average can give wrong results.
I would stick with the particle with the highest weight.
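A tiny sketch of that bookkeeping (names are placeholders):

```
class Particle:
    def __init__(self, pose):
        self.pose = pose
        self.weight = 1.0
        self.path = [pose]          # each particle keeps its own waypoint history

    def move(self, new_pose):
        self.pose = new_pose
        self.path.append(new_pose)  # extend the path on every motion update

def most_likely_path(particles):
    """Take the full path carried by the highest-weight particle, rather than
    stitching together the most likely position from each time step."""
    best = max(particles, key=lambda p: p.weight)
    return best.path
```

Remember that when you resample, each copied particle must also carry a copy of its path list.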
I'm developing an iOS application that needs to determine the probability that a user is following a given path.
If they are not following the path, I'd like to give them the option to recalculate.
This should be a relatively simple algorithm. For inputs I have a location (x, y) and n path segments (each defined by two x,y points).
What is the best way to do this?
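For illustration, here is a minimal sketch of one straightforward check, treating "following the path" as "within some tolerance of the nearest segment" (the tolerance and names are placeholders):

```
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (all (x, y) tuples)."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def is_on_path(location, segments, tolerance=25.0):
    """True if the location is within `tolerance` of the nearest path segment."""
    return min(point_segment_distance(location, a, b) for a, b in segments) <= tolerance
```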
You might look at Dijkstra's algorithm to find the shortest distance between two points. I think you should always feed in the current location of the vehicle, so that if a wrong turn is taken the route is recalculated and shown on the graph. Hope it helps.
So I've recorded some data from an Android GPS, and I'm trying to find the peaks of these graphs, but I haven't been able to find anything specific, perhaps because I'm not too sure what I'm looking for. I have found some MatLab functions, but I can't find the actual algorithms that do it. I need to do this in Java, but I should be able to translate code from other languages.
As you can see, there are lots of 'mini-peaks', but I just want the main ones.
Your solution depends on what you want to do with the data. If you want to do very serious things then you should most likely use (Fast) Fourier Transforms, and extract both the phase and frequency output from it. But that's very computationally intensive and takes a long while to program. If you just want to do something simple that doesn't require a lot of computational resources, then here's a suggestion:
For that exact problem I implemented the algorithm below a few hours ago. I invented it myself, so I don't know whether it already has a name, but it works great on very noisy data.
You need to determine the average peak-to-peak distance and call that PtP. Make that measurement any way you like; judging from the graph, in your case it appears to be about 35. In my code I have another algorithm, also my own invention, that does this automatically.
Then choose a random starting index on the graph. Poll every new datapoint from then on and wait until the graph has either risen or fallen from the starting index level by about 70% of PtP. If it was a fall then that's a tock. If it was a rise then that's a tick. Store that level as the last tick or tock height. Produce a 'tick' or 'tock' event at this index.
Continue forward in the data. After ticks, if the data continues to rise after that point then store that level as the new 'height-of-tick' but do not produce a new tick event. After tocks, if the data continues to fall after that point then store that level as the new 'depth-of-tock' but do not produce a new tock event.
If last event was a tock then wait for a tick, if last event was a tick then wait for a tock.
Each time you detect a tick, that should be a peak! Good luck.
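A rough Python sketch of the tick/tock idea as described above (the 70% factor and PtP come from the description; the peak index reported is the stored height-of-tick at the moment the following tock fires):

```
def find_peaks(data, ptp):
    """'Tick/tock' hysteresis peak finder: a tick fires after a rise of ~70% of PtP,
    a tock after a fall of ~70% of PtP, and the highest level reached between a tick
    and the next tock is reported as the peak."""
    threshold = 0.7 * ptp
    peaks = []
    ref_val, ref_idx = data[0], 0   # starting level; later the running tick-height / tock-depth
    last_event = None               # None, 'tick' or 'tock'
    for i, v in enumerate(data[1:], start=1):
        if last_event == 'tick' and v > ref_val:
            ref_val, ref_idx = v, i     # data keeps rising: new height-of-tick, no new event
        elif last_event == 'tock' and v < ref_val:
            ref_val, ref_idx = v, i     # data keeps falling: new depth-of-tock, no new event
        if last_event != 'tock' and ref_val - v >= threshold:
            if last_event == 'tick':
                peaks.append(ref_idx)   # the stored height-of-tick was a peak
            last_event, ref_val, ref_idx = 'tock', v, i
        elif last_event != 'tick' and v - ref_val >= threshold:
            last_event, ref_val, ref_idx = 'tick', v, i
    return peaks
```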
I think what you want to do is run this through some sort of low-pass filter. Depending on exactly what you want to get out of this dataset, a simple "box car" filter might be sufficient: at each point, take the average of the N samples centered on that point and use that average as the filtered value. The larger N is, the more aggressively smoothed the filtered data will be.
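For example, a minimal box-car filter might look like this (N is whatever window size suits your data):

```
def boxcar(data, n):
    """Centred moving average: each output sample is the mean of the n input
    samples around it (the window is simply clipped at the edges)."""
    half = n // 2
    out = []
    for i in range(len(data)):
        window = data[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out
```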
I guess you have lots of points... Calculate their mean value and subtract it from all of the points' values; then, within each range where the points keep the same sign until it changes, take the most extreme value (negative or positive). I hope I am clear...
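In code, that idea might look roughly like this sketch (subtract the mean, then keep the single most extreme value in each run of same-signed samples):

```
def extremes(points):
    """Subtract the mean, then within each run of samples sharing the same sign
    keep the single most extreme value (largest magnitude) and its index."""
    mean = sum(points) / len(points)
    centred = [p - mean for p in points]
    result, start = [], 0
    for i in range(1, len(centred) + 1):
        if i == len(centred) or (centred[i] >= 0) != (centred[start] >= 0):
            run = centred[start:i]
            j = max(range(len(run)), key=lambda k: abs(run[k]))
            result.append((start + j, points[start + j]))   # index and original value
            start = i
    return result
```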
With particularly nasty and noisy data I usually use smoothing. The easiest example of smoothing is a moving average. Then you can find peaks on that moving average, and finally go back to your original data and take the peak closest to the one you found on the moving average.
I've done some looking into peak detection and I can tell you that if your data doesn't behave, it could mess up your algorithm. Off the top of my head, you could try: pick a threshold, e.g. threshold = 250. If the data is above the threshold, find the max within that stretch. This assumes the data you have has a mean of about 230. Not sure how fancy you want to get. Hope that helps.
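A quick sketch of that threshold idea (the 250 is only the example value from above):

```
def peaks_above(data, threshold=250):
    """Whenever the data rises above the threshold, report the index of the
    maximum sample within that above-threshold stretch."""
    peaks, i = [], 0
    while i < len(data):
        if data[i] > threshold:
            j = i
            while j < len(data) and data[j] > threshold:
                j += 1
            peaks.append(max(range(i, j), key=lambda t: data[t]))
            i = j
        else:
            i += 1
    return peaks
```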
How can I implement the A* algorithm on a gridless 2D plane with no nodes or cells? I need the object to maneuver around a relatively high number of static and moving obstacles in the way of the goal.
My current implementation is to create eight points around the object and treat them as the centers of imaginary adjacent squares that might be potential positions for the object. Then I calculate the heuristic function for each and select the best. I calculate the distances between the starting point and the movement point, and between the movement point and the goal, in the usual way with the Pythagorean theorem. The problem is that this way the object often ignores all obstacles, and even more often gets stuck moving back and forth between two positions.
I realize how silly my question might seem, but any help is appreciated.
Create an imaginary grid at whatever resolution is suitable for your problem: As coarse grained as possible for good performance but fine-grained enough to find (desirable) gaps between obstacles. Your grid might relate to a quadtree with your obstacle objects as well.
Execute A* over the grid. The grid may even be pre-populated with useful information like proximity to static obstacles. Once you have a path along the grid squares, post-process that path into a sequence of waypoints wherever there's an inflection in the path. Then travel along the lines between the waypoints.
By the way, you do not need the actual distance (cf. your mention of the Pythagorean theorem): A* works fine with an estimate of the distance. Manhattan distance is a popular choice: |dx| + |dy|. If your game allows diagonal movement (or the grid is "fake"), simply max(|dx|, |dy|) is probably sufficient.
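Putting the above together, here is a compact sketch of A* over such an imaginary grid with the Manhattan heuristic; the `blocked` test and the grid resolution are whatever suits your obstacles:

```
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal, blocked, width, height):
    """A* over an imaginary grid: `blocked(cell)` is whatever test you use to decide
    whether a grid cell overlaps an obstacle. Returns the list of cells from start to
    goal (which you would then reduce to waypoints), or None if no path exists."""
    open_heap = [(manhattan(start, goal), 0, start)]
    came_from, g = {start: None}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or blocked(nxt):
                continue
            if nxt not in g or cost + 1 < g[nxt]:
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g[nxt] + manhattan(nxt, goal), g[nxt], nxt))
    return None
```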
Uh. The first thing that comes to my mind is that at each point you need to calculate the gradient, or a direction vector, to find out which way to go in the next step. Then you move by a small epsilon and redo.
This basically creates a grid for you, and you can vary the cell size by choosing a smaller epsilon. By doing this instead of using a fixed grid you should be able to step at finer angles than the 45° of your 8-point example.
Theoretically you might even be able to solve the formulas symbolically (letting eps go to 0), which could lead to an optimal solution... just a thought.
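In its simplest form (stepping straight toward the goal and ignoring obstacles, which a real direction vector would have to account for, e.g. via the potential field described in a later answer), the epsilon-stepping looks like this sketch:

```
import math

def step_toward(pos, goal, eps=0.5):
    """Move a small epsilon along the unit vector from the current position toward
    the goal; obstacles would modify this direction in a real implementation."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d < eps:
        return goal
    return pos[0] + eps * dx / d, pos[1] + eps * dy / d
```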
How are the obstacles represented? Are they polygons? You could then use the polygon vertices as nodes. If the obstacles are not represented as polygons, you could generate some sort of convex hull around them and use its vertices for navigation. EDIT: I just realized you mentioned that you have to navigate around a relatively high number of obstacles, so using the obstacle vertices might be infeasible with too many of them.
I do not know about moving obstacles; I believe A* doesn't find an optimal path with moving obstacles.
You mention that your object moves back and forth - A* should not do this, since A* visits each movement point only once. This could be an artifact of generating movement points on the fly, or of the moving obstacles.
I remember encountering this problem in college, but we didn't use an A* search. I can't remember the exact details of the math but I can give you the basic idea. Maybe someone else can be more detailed.
We're going to create a potential field out of your playing area that an object can follow.
Take your playing field and tilt or warp it so that the start point is at the highest point, and the goal is at the lowest point.
Poke a potential well down into the goal, to reinforce that it's a destination.
For every obstacle, create a potential hill. For non-point obstacles, which yours are, the potential field can increase asymptotically at the edges of the obstacle.
Now imagine your object as a marble. If you placed it at the starting point, it should roll down the playing field, around obstacles, and fall into the goal.
The hard part, the math I don't remember, is the equations that represent each of these bumps and wells. If you figure that out, add them together to get your final field, then do some vector calculus to find the gradient (just like towi said) and that's the direction you want to go at any step. Hopefully this method is fast enough that you can recalculate it at every step, since your obstacles move.
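A rough numerical sketch of such a field: a quadratic well at the goal plus inverse-square hills at obstacle points, with gradient steps exactly as described; all constants are made up:

```
import math

def gradient(pos, goal, obstacles, k_att=1.0, k_rep=50.0):
    """Negative gradient of a simple potential field: quadratic well at the goal
    plus an inverse-square hill at every obstacle point. Constants are made up."""
    gx = k_att * (goal[0] - pos[0])
    gy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d2 = dx * dx + dy * dy + 1e-9
        gx += k_rep * dx / (d2 * d2)     # push away, falling off quickly with distance
        gy += k_rep * dy / (d2 * d2)
    return gx, gy

def step(pos, goal, obstacles, eps=0.2):
    """Take one small step downhill; recompute every step since obstacles move."""
    gx, gy = gradient(pos, goal, obstacles)
    norm = math.hypot(gx, gy) or 1.0
    return pos[0] + eps * gx / norm, pos[1] + eps * gy / norm
```

Note that fields like this can trap the "marble" in local minima between obstacles, so in practice it works better as a steering heuristic than as a guaranteed planner.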
Sounds like you're implementing the Wumpus game based on Norvig and Russell's discussion of A* in Artificial Intelligence: A Modern Approach, or something very similar.
If so, you'll probably need to incorporate obstacle detection as part of your heuristic function (hence you'll need to have sensors that alert your agent to the signs of obstacles, as seen here).
To solve the back-and-forth issue, you may need to store the traveled path so you can tell if you've already been to a location, and have the heuristic function examine the past N moves (say 4) and use that as a tie-breaker (i.e. if I can go north or east from here, and my last 4 moves have been east, west, east, west, go north this time).
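That tie-breaker could be as simple as this sketch (with the move history held as compass directions; names are placeholders):

```
def break_tie(candidates, history, n=4):
    """Among equally scored moves, prefer a direction not taken in the last n moves
    (history is a list like ['E', 'W', 'E', 'W']); otherwise fall back to the first."""
    recent = set(history[-n:])
    fresh = [m for m in candidates if m not in recent]
    return (fresh or candidates)[0]
```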