I am working on a path-finding project in which I have to connect a single source to multiple destinations (much like routing connections in an electric circuit), and there are many source nodes on a single map. The map is a simple 2D grid with uniform movement cost, 952 x 400 (width x height). I have used the A* algorithm to find a path (not necessarily the shortest one). The problem is that A* sometimes fails to find a path when the connections in the circuit get complex. Can anyone suggest a variation of A* that would solve this problem? Please help!
I have a project to build a 3D model of the spinal roots in order to simulate their stimulation by an electrode. For the moment I've been handed two things: the extracted positions of the spinal roots (from the CT scans) and the segments selected out of those points (see both pictures below). The data I'm provided is in 3D, and all the segments are clearly distinct, although it does not look like it in the figures below because they are zoomed out.
Points and segments extracted from the spinal cord CT scans:
Selected segments out of the points:
I'm now trying to connect these segments so as to obtain the centrelines of all the spinal roots. The segments are not classified; the different colors only serve to differentiate them on the plot. The task is then to vertically connect the segments that appear to belong to the same root path.
I've been reviewing the literature on how I could tackle this issue. As I'm still quite new to the field, I don't have much intuition about what could work and what could not. I have two subtasks to solve here: connecting the lines and classifying the roots. While connecting the segments after classification seems no big deal, classifying them seems considerably harder, so I'm not sure in which order to proceed.
Here are the few options I'm considering to deal with the task:
Use a Kalman filter to extract the vertical lines from the selected segments and bridge the missing parts.
Use a Hough transform to detect vertical lines, by expressing the spinal root segments in the parametric space, seeing how they cluster, and checking whether anything can be inferred from there.
Apply some sort of SVM classification to the segments to group them by root. I could characterize each segment by its orientation and position, classify the segments based on similarities in the selected parameters, and then connect them. Or use the endpoint position of each segment and connect it to one of its nearest neighbours when their orientation/position match (see the sketch below).
I'm open to any suggestions, any piece of advice, or any other ideas on how to deal with the current problem.
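To make the endpoint-matching variant of option 3 concrete, here is a rough sketch of what I have in mind (purely illustrative: the segment format, the thresholds, and the assumption that z decreases downwards are mine, not from any library):

```python
import numpy as np

def segment_direction(seg):
    # Unit vector from the first to the last point of the segment.
    d = seg[-1] - seg[0]
    return d / np.linalg.norm(d)

def connect_segments(segments, max_gap=5.0, min_cos=0.8):
    """segments: list of (N, 3) arrays of ordered 3D points.
    Greedily link the bottom endpoint of each segment to the top
    endpoint of the nearest compatible segment below it."""
    links = []
    for i, a in enumerate(segments):
        best, best_dist = None, max_gap
        for j, b in enumerate(segments):
            if i == j:
                continue
            gap = np.linalg.norm(b[0] - a[-1])        # endpoint distance
            aligned = np.dot(segment_direction(a),
                             segment_direction(b)) > min_cos
            below = b[0][2] < a[-1][2]                # assumes z decreases downward
            if gap < best_dist and aligned and below:
                best, best_dist = j, gap
        if best is not None:
            links.append((i, best))
    return links
```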
I have implemented PCA in order to assign rotation information to connected 2D points extracted from images (edge fragments; see the data points in the image below for examples). I want the information to be robustly reproducible under rotation of the data so that I can use it for recognition purposes (comparable to 1). For this purpose, I want the principal components (eigenvectors) to rotate with the points (up to ±180°).
My implementation includes mean centring of the data. I have also tested the OpenCV implementation and one in Python, which yield the same results. This is why I assume that my implementation is correct and that the problem is the method itself. I had quite good results for other 2D distributions; nonetheless, for these specific data points, it does not seem to work.
I have run all the tests with and without normalization to the standard deviation (i.e., dividing the x and y values by their standard deviations).
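For reference, here is a minimal NumPy version of the procedure I described (mean centring, then optional division by the standard deviations):

```python
import numpy as np

def pca_orientation(points, normalize=False):
    """points: (N, 2) array. Returns eigenvalues and eigenvectors
    (principal axes as columns), sorted by decreasing eigenvalue."""
    centred = points - points.mean(axis=0)
    if normalize:
        centred = centred / centred.std(axis=0)
    cov = np.cov(centred, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]
```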
Here are my results for different rotations of the data (extracted from images):
PCA Results
As can be seen, the method does not find a reproducible rotation. The data is affected by quantization (because it is extracted from images), which is why I suspected that to be the origin of the problem. I therefore repeated the experiment with added random noise (4th column); as can be seen, this does not seem to be the problem either.
I have no precise idea how to explain the displayed effects. I note that the general orientation of the principal axes seems to be similar within the first and second rows, respectively. I think this means something, but what exactly? Can I somehow solve the problem, or are there better methods for such a task? Due to some preprocessing, it can be assumed that there are no outliers.
Thanks for your help!
For symmetrical shapes like the ones you have shown, you can try a symmetry detector like this one: https://github.com/subokita/Sandbox/tree/master/FSD
On the examples it gives results like this:
Problem description:
I am working on a project whose goal is to identify people's body parts in images (torso, head, left and right arms, etc.). The approach is based on finding parts of the human body (hypotheses) and then searching for the best pose configuration (= all the parts that really form a human body). The idea is described in more detail at this link: http://www.di.ens.fr/willow/events/cvml2010/materials/INRIA_summer_school_2010_Andrew_human_pose.pdf
The hypotheses are obtained by running a detection algorithm (here I am using a classifier from the machine learning field) for each body part separately, so the type of each hypothesis is known. Each hypothesis also has a location (x and y coordinates in the image) and an orientation.
To determine the cost of linking two parts together, one can consider that each hypothesis of type head can be linked with each hypothesis of type torso (for example). But a head hypothesis in the top right of the image cannot (from a human point of view) be linked with a torso hypothesis in the bottom left. I am trying to avoid these kinds of links, both for that reason and to reduce execution time.
Question: I am planning to reduce the search space by considering a maximum distance to the farthest hypothesis that can still be a linking candidate. What is the fastest way of solving this search problem?
For similar problems I have resorted to splitting the source image into 16 (or more, depending on the relative size of the parts you're trying to link) smaller images, doing the detection and linking steps in each of these separately, plus an extra step where you run only the linking step between each subimage and its (possibly 8) neighbours.
In this case you will never even try to link a part in the upper left corner with one in the lower right, and as an added bonus the first part of your problem is now embarrassingly parallel.
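A rough sketch of the tiling idea (the Hypothesis fields x/y and the cell size are illustrative, not from any particular library):

```python
from collections import defaultdict

def build_grid(hypotheses, cell_size):
    # Map each hypothesis to the grid cell containing it.
    grid = defaultdict(list)
    for i, h in enumerate(hypotheses):
        grid[(int(h.x // cell_size), int(h.y // cell_size))].append((i, h))
    return grid

def candidate_pairs(grid):
    # Yield each pair at most once: only between a cell and itself
    # or one of its 8 neighbours, using indices to avoid duplicates.
    for (cx, cy), cell in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i, a in cell:
                    for j, b in grid.get((cx + dx, cy + dy), []):
                        if i < j:
                            yield a, b
```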
Update:
You could run edge detection on the image first, and never cut the image in two where that would mean cutting an edge in two. Doing this recursively would give you a lot of small images, each containing body parts, that you can then process separately.
This kind of discrete assignment problem can be solved using the Hungarian algorithm.
When computing the cost (= distance) matrix, you can set an entry to some infinite or very high value when the distance is greater than a predefined threshold. This will prevent the algorithm from assigning a head to a torso that is too far away.
This last technique is also called gating in tracking lectures.
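As a minimal sketch, SciPy ships an implementation of the Hungarian algorithm; gating then just means writing a prohibitively large cost into the gated entries (the threshold and coordinate format are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gated_assignment(heads, torsos, threshold):
    """heads: (N, 2), torsos: (M, 2) arrays of image coordinates."""
    # Pairwise Euclidean distances as the cost matrix.
    cost = np.linalg.norm(heads[:, None, :] - torsos[None, :, :], axis=2)
    cost[cost > threshold] = 1e9            # gating: forbid distant links
    rows, cols = linear_sum_assignment(cost)
    # Drop assignments that were only made through a gated (huge) entry.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e9]
```

Recent SciPy versions accept rectangular cost matrices here, so the two hypothesis sets need not be the same size.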
I have a single calibrated camera (known intrinsic parameters, i.e. camera matrix K is known, as well as the distortion coefficients).
I would like to reconstruct the camera's 3D trajectory. There is no a priori knowledge about the scene.
I am simplifying the problem by taking two images that look at the same scene and extracting two sets of corresponding, matched feature points from them (SIFT, SURF, ORB, etc.).
My problem is: how can I calculate the camera extrinsic parameters (i.e. the rotation matrix R and the translation vector t) between the two viewpoints?
I have managed to calculate the fundamental matrix and, since K is known, the essential matrix as well. Using David Nister's efficient solution to the Five-Point Relative Pose Problem I have managed to get 4 possible solutions, but:
1. The constraint on the essential matrix, E ~ U * diag(s, s, 0) * V', doesn't always hold, causing incorrect results.
[EDIT]: taking the average singular value seems to correct the results :) one down
2. How can I tell which one of the four is the correct one?
Thanks
Your solution to point 1 is correct: use diag((s1 + s2)/2, (s1 + s2)/2, 0).
As for telling which one of the four solutions is correct: only one will give positive depths for all points with respect to the camera frame. That's the one you want.
Code for checking which solution is correct can be found here: http://cs.gmu.edu/%7Ekosecka/examples-code/essentialDiscrete.m from http://cs.gmu.edu/%7Ekosecka/bookcode.html
They use the determinants of U and V to determine the solution with the correct orientation. Look for the comment "then four possibilities are". Since you're only estimating the essential matrix, it's susceptible to noise and does not behave well at all if all of the points are coplanar.
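If you end up doing this in Python with OpenCV (3.x or later), recoverPose performs exactly this cheirality check for you: it triangulates the matches and keeps the (R, t) decomposition that puts the points in front of both cameras. A minimal sketch, assuming matched undistorted point arrays pts1/pts2 and the calibration matrix K:

```python
import cv2

def relative_pose(pts1, pts2, K):
    # pts1, pts2: matched Nx2 float arrays; K: 3x3 intrinsic matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # recoverPose triangulates the inliers and selects the decomposition
    # of E that yields positive depths in both views.
    n_inliers, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t has unit norm; see the note on scale below
```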
Also, the translation is only recovered up to a constant scaling factor, so the fact that you're seeing a normalized translation vector of unit magnitude is exactly correct. The reason is that the depth is unknown and estimated to be 1. You'll have to find some way to recover the depth, as in the code for the eight-point algorithm + 3D reconstruction (Algorithm 5.1 in the bookcode link).
The book the sample code above is taken from is also a very good reference. http://vision.ucla.edu/MASKS/ Chapter 5, the one you're interested in, is available on the Sample Chapters link.
Congrats on your hard work, sounds like you've tried hard to learn these techniques.
For actual production-strength code, I'd advise downloading libmv and ceres, and re-coding your solution using them.
Your two questions are really one: invalid solutions are rejected based on the data you have collected. In particular, Nister's (as well as Stewenius's) algorithm is normally used in the inner loop of a RANSAC-like solver, which selects the solution with the best fit / maximum number of inliers.
I have a field filled with obstacles, I know where they are located, and I know the robot's position. Using a path-finding algorithm, I calculate a path for the robot to follow.
Now my problem is that guiding the robot from cell to cell creates not-so-smooth motion: I start at A, turn the nose toward point B, move straight until I reach point B, rinse and repeat until the final point is reached.
So my question is: What kind of techniques are used for navigating in such an environment so that I get a smooth motion?
The robot has two wheels and two motors; I change its direction by running the motors in reverse.
EDIT: I can vary the speed of the motors. The robot is basically an Arduino plus an Ardumoto shield; I can supply values between 0 and 255 to each motor in either direction.
You need feedback linearization for a differentially driven robot. This document explains it in Section 2.2. I've included relevant portions below:
The simulated robot required for the project is a differential drive robot with a bounded velocity. Since differential drive robots are nonholonomic, the students are encouraged to use feedback linearization to convert the kinematic control output from their algorithms to control the differential drive robots. The transformation follows:

v = cos(θ)·ẋ + sin(θ)·ẏ
ω = (−sin(θ)·ẋ + cos(θ)·ẏ) / L

where v and ω are the linear and angular velocities, ẋ and ẏ are the kinematic velocities, θ is the robot heading, and L is an offset length proportional to the wheel base dimension of the robot.
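As a small illustration, here is that transformation in code (a sketch under the assumptions above; the function name is mine):

```python
import math

def feedback_linearize(xdot, ydot, theta, L):
    """Convert kinematic velocities (xdot, ydot) from the planner into
    (v, omega) commands for the differential drive; theta is the robot
    heading, L the offset length from the quoted document."""
    v = math.cos(theta) * xdot + math.sin(theta) * ydot
    w = (-math.sin(theta) * xdot + math.cos(theta) * ydot) / L
    return v, w
```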
One control algorithm I've had pretty good results with is pure pursuit. Basically, the robot attempts to move to a point along the path a fixed distance ahead of it, so as the robot moves along the path, the lookahead point advances too. The algorithm compensates for nonholonomic constraints by modeling the possible paths as arcs.
Larger lookahead distances produce smoother movement, but they also cause the robot to cut corners, which may make it collide with obstacles. You can mitigate this by borrowing ideas from a reactive control algorithm called Vector Field Histogram (VFH), which essentially pushes the robot away from nearby walls. While VFH normally uses a range-finding sensor of some sort, you can compute the relative locations of the obstacles directly, since you know the robot pose and the obstacle locations.
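A minimal sketch of one pure pursuit step, assuming you have already found the lookahead point on the path and transformed it into the robot frame (x forward, y to the left); names are illustrative:

```python
def pure_pursuit_command(lx, ly, v):
    """lx, ly: lookahead point in the robot frame.
    v: desired linear speed. Returns (v, omega)."""
    ld_sq = lx * lx + ly * ly        # squared lookahead distance
    curvature = 2.0 * ly / ld_sq     # arc through the robot pose and the point
    return v, v * curvature          # omega = v * kappa
```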
My initial thoughts on this (I'm at work so can't spend too much time):
It depends how tight you want or need your corners to be (which would depend on how much clearance your path finder gives you from the obstacles).
Given the width of the robot, you can calculate the turning radius from the speeds of the two wheels. Assuming you want to go as fast as possible and that skidding isn't an issue, you would always keep the outside wheel at 255 and reduce the inside wheel to the speed that gives you the required turning radius.
Given the angle of any particular turn on your path and the turning radius you will use, you can work out the distance from that node at which you should start slowing down the inside wheel (see the sketch below).
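A small sketch of that calculation (the track-width parameter and the radius convention are my assumptions; wheel speeds are proportional to each wheel's distance from the centre of the turn):

```python
def inner_wheel_speed(outer_speed, radius, track):
    """outer_speed: e.g. 255 (full PWM). radius: turning radius measured
    to the robot centre. track: distance between the wheels (same units)."""
    # Each wheel's speed is proportional to its distance from the turn centre.
    return outer_speed * (radius - track / 2.0) / (radius + track / 2.0)

# e.g. inner_wheel_speed(255, radius=0.5, track=0.2) -> 170.0
```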
An optimization approach is a very general way to handle this.
Feed your calculated path as input to a generic non-linear optimization algorithm (your choice!) with a cost function made up of the closeness of the answer trajectory to the input trajectory, adherence to non-holonomic constraints, and any other constraints you want to enforce (e.g. staying away from the obstacles). The optimization can be initialised with a trajectory constructed from the original one.
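A toy illustration of this idea with SciPy (the weights, the obstacle model, and the omission of the non-holonomic term are all simplifications of mine):

```python
import numpy as np
from scipy.optimize import minimize

def smooth_trajectory(path, obstacles, clearance=1.0,
                      w_fit=1.0, w_smooth=10.0, w_obs=100.0):
    """path: (N, 2) float waypoints from the path finder; obstacles: (M, 2)."""
    def cost(flat):
        traj = flat.reshape(path.shape)
        fit = np.sum((traj - path) ** 2)                   # stay near input path
        smooth = np.sum(np.diff(traj, n=2, axis=0) ** 2)   # penalize sharp bends
        d = np.linalg.norm(traj[:, None] - obstacles[None], axis=2)
        obs = np.sum(np.maximum(0.0, clearance - d) ** 2)  # penalize proximity
        return w_fit * fit + w_smooth * smooth + w_obs * obs

    # Initialise with the original trajectory, as suggested above.
    res = minimize(cost, path.ravel(), method="L-BFGS-B")
    return res.x.reshape(path.shape)
```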
Marc Toussaint's robotics course notes are a good source for this type of approach. See in particular lecture 7:
http://userpage.fu-berlin.de/mtoussai/teaching/10-robotics/