I am using TI SensorTags and getting the raw data from the sensor in the form of gyroscope (x, y, z), accelerometer (x, y, z) and magnetometer (x, y, z) readings. Is there a way to find out the path along which I am moving the sensor? I am rotating the sensor in a circular path and want to draw that same path.
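(For illustration, a minimal dead-reckoning sketch in Python of what this would involve; the array names, sample rate and gravity handling below are assumptions, not part of the SensorTag API, and pure integration like this drifts within seconds, so it only shows the idea.)

    import numpy as np

    # Hypothetical raw samples from the tag, one row per reading.
    gyro = np.zeros((500, 3))     # angular rate in rad/s
    accel = np.zeros((500, 3))    # acceleration in m/s^2, gravity included
    dt = 0.01                     # assumed fixed sample period in seconds

    def skew(w):
        # Cross-product matrix used for the small-angle orientation update.
        return np.array([[0, -w[2], w[1]],
                         [w[2], 0, -w[0]],
                         [-w[1], w[0], 0]])

    R = np.eye(3)                 # sensor orientation in the world frame
    vel = np.zeros(3)             # velocity in the world frame
    pos = np.zeros(3)             # position in the world frame
    path = [pos.copy()]
    gravity = np.array([0.0, 0.0, 9.81])

    for w, a in zip(gyro, accel):
        R = R @ (np.eye(3) + skew(w) * dt)      # integrate gyro (first-order)
        a_world = R @ a - gravity               # rotate accel and remove gravity
        vel += a_world * dt                     # integrate to velocity
        pos += vel * dt                         # integrate to position
        path.append(pos.copy())

    path = np.array(path)   # (N+1, 3) estimate of the traced path; drifts rapidly

In practice the double integration of accelerometer noise makes the path unusable after a few seconds unless the orientation is corrected with the magnetometer (e.g. a Madgwick or Kalman filter) and the motion has known constraints.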
Creating a Trajectory using a 360 camera video without use of GPS, IMU, sensor, ROS or LIDAR
The input is a video created using a 360 camera (Samsung Gear 360). I need to plot a trajectory (without the use of ground-truth poses) as I move around an indoor location (that is, I need to know the camera locations and plot them accordingly).
First, camera calibration was done by capturing 21 pictures of a chessboard; using OpenCV methods, the camera matrix (a 3x3 matrix which includes fx, fy, cx, cy and the skew factor) was obtained and written to a text file.
What I have tried: feature detection (ORB, SIFT, AKAZE, ...) and tracking (FLANN and brute-force matching) methods. This works well for a single space but fails if the video is of a multi-storey building. It was tested on this multi-storey building: https://youtu.be/6DPFcKoHiak, and the results obtained were unsatisfactory.
An example of camera motion estimation that is required: https://arxiv.org/pdf/2003.08056.pdf
Any help on how to plot camera poses with the use of VSLAM, visual odometry or any other method would be appreciated.
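For reference, a minimal frame-to-frame visual odometry sketch along these lines (ORB features, brute-force matching, essential matrix and recoverPose); the file names are placeholders, the intrinsics are assumed to come from the calibration text file mentioned above, and the recovered translation is only defined up to scale:

    import cv2
    import numpy as np

    K = np.loadtxt("camera_matrix.txt")        # 3x3 intrinsics from the calibration step (assumed format)

    orb = cv2.ORB_create(3000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    cap = cv2.VideoCapture("walkthrough.mp4")  # placeholder input video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    pose = np.eye(4)                           # accumulated camera-to-world pose
    trajectory = [pose[:3, 3].copy()]

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(gray, None)
        if des1 is None or des2 is None:
            prev_gray = gray
            continue

        matches = bf.match(des1, des2)
        if len(matches) < 8:
            prev_gray = gray
            continue
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Essential matrix + cheirality check give R, t up to an unknown scale.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # Chain the relative motion frame to frame.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t.ravel()
        pose = pose @ np.linalg.inv(T)
        trajectory.append(pose[:3, 3].copy())

        prev_gray = gray

    trajectory = np.array(trajectory)          # plot e.g. x against z to draw the path

Chaining like this drifts and has no loop closure, which is part of why it tends to break down on a multi-storey walkthrough; a full VSLAM system (keyframes, local bundle adjustment, loop closure) is usually needed there, and a 360 video generally has to be reprojected into pinhole views (or handled with a spherical camera model) before chessboard intrinsics apply.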
I have a quadrotor, and I want to read its position in (x, y, z) using the hector_gazebo_plugin and Python. For now I am using the libhector_gazebo_ros_gps.so file to get the latitude and longitude of the quadrotor.
But I would like to have the position of the quadrotor.
How do I do that?
In ROS there are several different frames, as BTables was getting at. If you're trying to get a position estimate from sensor data using the robot_localization package, familiarize yourself with the different kinds of frames and the ROS message types / data that go into and out of that process.
Normally in ROS there are 3 main frames: map -> odom -> base_link. The base_link frame is the point on the robot that represents it. The odom frame tracks integrated velocity/acceleration sensor updates to give a continuous position; its origin is usually wherever the robot boots up. The map frame is the "real world" location of the robot. It does require an origin position and yaw, because otherwise it is arbitrary.
In your case, it seems like you want to anchor the robot within the longitude/latitude coordinate frame. My recommendation is still to pick an origin in your environment if you can; otherwise you can use the boot-up location as your origin.
So to do that, I'm assuming your odom->base_link transform is an EKF or UKF node (from robot_localization) using the IMU data from your quadcopter. Then your map->odom transform is another EKF or UKF that also takes in an absolute position in your map frame as an update (see the example launch file (navsat notes), the yaml config file (navsat notes), and more on the GitHub repo). You can then use your fix topic (sensor_msgs/NavSatFix) from the hector gazebo GPS plugin with the navsat_transform_node to get a position update for the estimator above, plus transforms from the map frame to the global or UTM coordinate system.
If you don't care about getting your position with respect to the world back out, this gets a bit simpler; otherwise this setup also has the features to report your position back out in lat/long.
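If all you need in simulation is a quick x/y/z readout rather than the full robot_localization pipeline, a rough sketch (assuming the Python utm package is installed and the plugin publishes sensor_msgs/NavSatFix on the fix topic) could look like this, using the boot-up fix as the local origin as suggested above:

    #!/usr/bin/env python
    import rospy
    import utm
    from sensor_msgs.msg import NavSatFix

    origin = None   # UTM easting/northing of the first fix, used as the local origin

    def fix_callback(msg):
        global origin
        easting, northing, _, _ = utm.from_latlon(msg.latitude, msg.longitude)
        if origin is None:
            origin = (easting, northing)
        x = easting - origin[0]    # metres east of the boot-up location
        y = northing - origin[1]   # metres north of the boot-up location
        z = msg.altitude
        rospy.loginfo("position: x=%.2f y=%.2f z=%.2f", x, y, z)

    rospy.init_node("gps_to_local_xy")
    rospy.Subscriber("fix", NavSatFix, fix_callback)
    rospy.spin()

This skips the fused estimate you get from the EKF/UKF setup above; for anything beyond a quick check, the robot_localization + navsat_transform_node route is the better answer.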
I am trying to obtain linear and angular speed of the robot relative to a world frame using a Bosch radar sensor.
The radar sensor provides information on the 32 objects it can detect at a time; the information contains the location of each object and its radial velocity.
If the objects are stationary and the robot is moving towards an object at 1 m/s, then the radar will show the radial velocity of that object as -1 m/s.
Now the problem is that I am not able to calculate the angular speed of the robot because the data is too sparse. And if I do get the linear and angular speed of the robot in the radar frame, how should I transform this information into a stationary world frame?
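One rough sketch of setting up the estimation (assuming planar motion, only stationary detections, radar axes aligned with the robot's, and a known mounting offset lx, ly of the radar from the robot base; none of these names come from the Bosch driver):

    import numpy as np

    def radar_ego_motion(azimuth, radial_vel, lx, ly):
        """Estimate forward speed v and yaw rate w from one radar scan.

        azimuth    : bearings of the detections in the radar frame (rad),
                     e.g. atan2(y, x) of each reported location
        radial_vel : measured radial velocities (m/s, negative when closing)
        lx, ly     : radar mounting offset from the robot base (m), lx non-zero
        Assumes all detections are stationary and the robot has no lateral slip.
        """
        # Stationary-target model: radial_vel_i = -(vsx*cos(a_i) + vsy*sin(a_i)),
        # where (vsx, vsy) is the radar's own velocity in the radar frame.
        A = np.column_stack((np.cos(azimuth), np.sin(azimuth)))
        b = -np.asarray(radial_vel, dtype=float)
        (vsx, vsy), *_ = np.linalg.lstsq(A, b, rcond=None)

        # Rigid-body relation v_radar = v_robot + w x r, with v_robot = (v, 0):
        #   vsx = v - w*ly,   vsy = w*lx
        w = vsy / lx
        v = vsx + w * ly
        return v, w

For the world-frame part, the motion is then just dead reckoning: integrate the yaw rate to get the heading theta, and update x += v*cos(theta)*dt, y += v*sin(theta)*dt per time step. In practice the least-squares step also needs RANSAC (or similar) to reject detections that are actually moving.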
I am currently working on pose estimation of one camera with respect to another using OpenCV, in a setup where camera1 is fixed and camera2 is free to move. I know the intrinsics of both cameras. I have a pose estimation module that uses epipolar geometry and computes the essential matrix with the five-point algorithm to figure out the R and t of camera2 with respect to camera1, but I would like to get the metric translation. To help achieve this, I have two GPS modules, one on camera1 and one on camera2. For now, if we assume camera1's GPS is flawless and accurate while camera2's GPS exhibits some XY noise, I would need a way to use the OpenCV pose estimate on top of this noisy GPS to get the final accurate translation.
Given that info, my question has two parts:
Because the extrinsics between the cameras keep changing, would it be possible to use bundle adjustment to refine my pose?
And can I somehow incorporate my (noisy) GPS measurements in a bundle adjustment framework as an initial estimate, and obtain a more accurate estimate of metric translation as my end result?
1) No, bundle adjustment serves a different purpose, and you would not be able to work with it anyway because you would have an unknown scale for every pair you use with the 5-point algorithm. You should instead use a perspective-n-point (PnP) algorithm after the first pair of images.
2) Yes, it's called sensor fusion and you need to first calibrate (or know) the transformation between your GPS sensor coordinates and your camera coordinates. There is an open source framework you can use.
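Not a replacement for the calibration/fusion in 2), but as a minimal sketch of using the GPS baseline to put the essential-matrix translation into metres (the function and argument names below are made up, and the GPS positions are assumed to be already converted into a local metric frame such as UTM):

    import cv2
    import numpy as np

    def metric_translation(pts1, pts2, K1, K2, gps1_xy, gps2_xy):
        """Scale the unit-norm 5-point translation by the GPS baseline.

        pts1, pts2       : matched pixel coordinates in camera1/camera2 (Nx2 float32)
        K1, K2           : the two cameras' 3x3 intrinsic matrices
        gps1_xy, gps2_xy : camera positions in a local metric frame (gps2 is noisy)
        """
        # Normalise the points with each camera's own intrinsics so a single
        # identity camera matrix can be used for the essential matrix.
        n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, None)
        n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, None)

        E, mask = cv2.findEssentialMat(n1, n2, np.eye(3),
                                       method=cv2.RANSAC, threshold=1e-3)
        _, R, t, _ = cv2.recoverPose(E, n1, n2, np.eye(3), mask=mask)

        # recoverPose returns t with unit norm; the GPS baseline supplies the
        # metric scale.
        baseline = np.linalg.norm(np.asarray(gps2_xy) - np.asarray(gps1_xy))
        return R, t * baseline

The noisy baseline directly scales the translation error, so in practice you would smooth or fuse the GPS (as the sensor-fusion suggestion implies) rather than use a single raw fix.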
I see that I can retrieve CMAttitude from a device and from it I can read 3 values which I need (pitch, roll and yaw).
As I understand it, this CMAttitude object is managed by Core Motion, which is a sensor-fusion manager that calculates correct results from the compass, gyroscope and accelerometer together (on Android this is the SensorManager class).
So my questions are:
Are those values (pitch, roll and yaw) relative to magnetic north and gravity?
If the above is correct, how can I modify it to give me results relative to geographic north?
If a device (such as the iPhone 3GS) doesn't have a gyroscope, do I have to tell that to the manager, or can I just ask it to give me the device's attitude based on the sensors it has (acc + gyro + compass OR acc + compass)?
1 and 2:
iOS 5.0 simplifies this task. CMMotionManager has a new method:
- (void)startDeviceMotionUpdatesUsingReferenceFrame:(CMAttitudeReferenceFrame)referenceFrame
As the reference frame you can use these values:
CMAttitudeReferenceFrameXMagneticNorthZVertical for magnetic north,
CMAttitudeReferenceFrameXTrueNorthZVertical for true north.
If you want to do this with an older iOS version, I'm afraid you have to calibrate it yourself using the current user location.
Try checking out these resources:
"What's New in Core Motion" WWDC 2011 video,
"Sensing Device Motion in iOS 4" WWDC 2010 video
3.
If the device has no gyro, the deviceMotionAvailable property of CMMotionManager will be NO (it is equivalent to the gyroAvailable property) and you cannot get the attitude using device motion. The only thing you can do is read the accelerometer and magnetometer data directly.