Multi-vehicle (Zephyr and Iris) ArduPilot Gazebo simulation: vehicles go to different points in Gazebo for the same 'lat lon alt'

Hello community members
I want to simulate a Zephyr fixed-wing and an Iris copter in one Gazebo simulation using the SwiftGust/ardupilot_gazebo repository.
What I have done is open the zephyr demo world in Gazebo, insert an iris model into the environment, and save the result as a new world file. After that I run a SITL simulation for both the iris and the zephyr at the same location, using the -L option of sim_vehicle.py. My assumption was that if I send a guided 'lat lon alt' command with identical arguments to both drones, they would fly to the same location in the Gazebo simulation, but each of them goes to a different place.
I have tried setting the same yaw value for both drones in Gazebo, but it did not help; the behavior was unchanged. I have also read a bit about WGS84, but I don't think it matters for my problem, since it only defines the location of the world origin and everything else is placed relative to it.
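For reference, this is roughly the equivalent of the guided command I send to both vehicles, as a pymavlink sketch (the UDP ports follow the usual sim_vehicle.py instance offsets, and the coordinates and type_mask are just example values, not my actual ones):

# Sketch: send the same guided position target to two SITL instances.
# Ports, target coordinates and the type_mask are assumptions for illustration.
from pymavlink import mavutil

TARGET = (-35.3632621, 149.1652374, 50.0)  # example lat, lon, relative alt (m)

def goto(conn_str, lat, lon, alt):
    m = mavutil.mavlink_connection(conn_str)
    m.wait_heartbeat()
    m.mav.set_position_target_global_int_send(
        0,                                   # time_boot_ms (ignored)
        m.target_system, m.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        0b0000111111111000,                  # use position only
        int(lat * 1e7), int(lon * 1e7), alt,
        0, 0, 0,                             # velocity (ignored)
        0, 0, 0,                             # acceleration (ignored)
        0, 0)                                # yaw, yaw rate (ignored)

# Instance 0 (-I0) usually outputs on 14550, instance 1 (-I1) on 14560.
goto('udpin:127.0.0.1:14550', *TARGET)
goto('udpin:127.0.0.1:14560', *TARGET)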
Sincerely!

Related

Using hector_slam mapping for the ROS navigation stack

Following is the approach I am adopting:
I have built the map using the hector_slam package while controlling my robot with an RF transmitter.
For the move_base node I am using amcl for localization.
To test my robot's position on the map I drive it manually; I then get the warnings below, and the robot's estimated pose jumps randomly all over the map.
[WARN]: The origin of the sensor is out of map bounds. so the costmap cannot raytrace for it
[WARN]: Map update loop missed its desired rate of 10.0000Hz...the loop actually took 0.1207 seconds.
Autonomous navigation will only work once this localization problem is solved, and from my (admittedly naive) point of view I suspect the problem is building the map with hector_slam and later using amcl for localization.
Questions:
Is this approach possible?
Can I use hector_slam for localization instead of amcl?
How do I make the robot localize itself in the pre-built map (made with hector_slam) while I control it manually?
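For reference, this is roughly how I watch the pose estimate while driving (a small rospy sketch; the topic name is the amcl default):

#!/usr/bin/env python
# Sketch: print the amcl pose estimate and its x/y variance while the robot
# is driven manually.  /amcl_pose is the default amcl output topic.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def cb(msg):
    p = msg.pose.pose.position
    # Covariance is a row-major 6x6; entries 0 and 7 are the x and y variances.
    rospy.loginfo("amcl pose: x=%.2f y=%.2f  var_x=%.3f var_y=%.3f",
                  p.x, p.y, msg.pose.covariance[0], msg.pose.covariance[7])

rospy.init_node("amcl_pose_monitor")
rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, cb)
rospy.spin()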

How to minimize the error when integrating 3D angular velocity data obtained by the IMU to get linear velocity?

I have an IMU sensor that gives me raw data such as orientation, angular velocity, and linear acceleration. I'm using ROS and doing a Gazebo UUV simulation, and I want to get linear velocity from the raw IMU data.
The naive integration method is as follows: the simplest method for IMU state estimation is naive integration of the measured data. We estimate the attitude by integrating the 3D angular velocity data obtained by the IMU; assuming the time step Δt is small, the attitude at each time step can be calculated incrementally.
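For reference, my naive integration currently looks roughly like this (a sketch; the gravity handling and frame conventions are assumptions I made for illustration):

import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, 9.81])  # world-frame gravity, assuming a z-up world

def naive_integrate(gyro, accel, dt):
    """Dead-reckon attitude and linear velocity from raw IMU samples.

    gyro  : (N, 3) angular velocity [rad/s], body frame
    accel : (N, 3) specific force [m/s^2], body frame
    dt    : sample period [s]
    """
    q = R.identity()      # body-to-world attitude
    v = np.zeros(3)       # world-frame linear velocity
    for w, a in zip(gyro, accel):
        # Attitude: compose with the small rotation over one time step.
        q = q * R.from_rotvec(w * dt)
        # Velocity: rotate the specific force to the world frame, remove
        # gravity, then integrate.  Bias and attitude error accumulate here.
        v = v + (q.apply(a) - GRAVITY) * dt
    return q, v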
If I integrate over time, the error accumulates and the estimate becomes inaccurate, for example when the robot makes turns. So I'm looking for methods (ROS packages or approaches outside the ROS framework) or code that can correct that error.
Any help?
I would first recommend that you try fitting your input sensor data into an EKF or UKF node from the robot_localization package. It is the most widely used and most optimized pose-estimation package in the ROS ecosystem.
It can handle 3D sensor input, but you have to configure the parameters yourself (there are no real defaults; everything is configuration). Besides the configuration docs above, the GitHub repository has good examples of YAML parameter configurations (you'll want a separate file from the launch file) and example launch files.

How to display a scan in RViz with an SDF file?

Good morning,
I am running a simulation of a hexacopter in Gazebo. I have an SDF file of my drone with a 3D lidar.
I publish my lidar data on the topic /scan, and I want to visualize it in RViz.
I saw that I would have to make a URDF file of my drone, but I can't get the conversion to work (and the SDF file is quite big).
Is there another way to display the data without doing the conversion?
Thank you
If you're publishing a LaserScan/PointCloud message on the topic /scan, and its header.frame_id is (for example) lidar, then you can simply set RViz to look at the lidar frame and add a LaserScan display subscribing to /scan. You don't need to render the whole vehicle if you don't want to.
Personally, I find just rendering a coordinate axis frame to be sufficient sometimes.
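If RViz complains that it cannot find a fixed frame for the scan, publishing a one-off static transform is enough. Here is a minimal sketch (the map and lidar frame names are assumptions; use whatever your setup actually has):

#!/usr/bin/env python
# Sketch: publish a static identity transform map -> lidar so RViz has a
# fixed frame to display the scan against.  Frame names are assumptions.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("lidar_static_tf")
br = tf2_ros.StaticTransformBroadcaster()
t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "map"
t.child_frame_id = "lidar"
t.transform.rotation.w = 1.0  # identity rotation, zero translation
br.sendTransform(t)
rospy.spin()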

Simulation of a robotic spider with ROS and Gazebo

I need to simulate the walking gait of a robotic spider in ROS and Gazebo. I am able to write scripts in C++ or Python for basic elements like a cube or a ball in a Gazebo simulation, but I have no idea how to do it for a spider with six legs. Any suggestion on how to do that would be appreciated, or at least any book/project/source dealing with a similar problem.
I currently have the model in Inventor, and transferring it to URDF doesn't seem to be a problem. Thanks
You can find a nice simulation of a quadruped spider-like robot in the following repository https://github.com/YunjaeChoi/Reinforment-Implementation-on-a-Quadruped
This is how it looks in simulation: https://youtu.be/jo47bkJQrjU
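Once your URDF is loaded in Gazebo with ros_control, making the legs move is just a matter of publishing joint commands. A minimal sketch (the namespace, controller and joint names are made up and must match your controllers.yaml):

#!/usr/bin/env python
# Sketch: wiggle the hip joints of a six-legged robot through ros_control
# position controllers.  All topic/joint names here are hypothetical.
import math
import rospy
from std_msgs.msg import Float64

JOINTS = ["leg%d_hip" % i for i in range(1, 7)]  # hypothetical joint names

rospy.init_node("spider_gait_demo")
pubs = [rospy.Publisher("/spider/%s_position_controller/command" % j,
                        Float64, queue_size=1) for j in JOINTS]
rate = rospy.Rate(50)
t0 = rospy.Time.now()
while not rospy.is_shutdown():
    t = (rospy.Time.now() - t0).to_sec()
    for i, pub in enumerate(pubs):
        # Alternate the phase of odd/even legs, just to see them move.
        pub.publish(Float64(0.3 * math.sin(math.pi * t + math.pi * (i % 2))))
    rate.sleep()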

Issues while using the openFrameworks CvCameraProjectorCalibration example

I'm currently developing a projection-mapping application in Unity3D and I've reached a point where I need the projector's intrinsic/extrinsic matrices. To obtain them, I'm trying out the ofxCvCameraProjectorCalibration addon (using the calibration example available in the pack) and have been having some issues:
1) The application is divided into 3 states: CAMERA, PROJECTOR_STATIC and PROJECTOR_DYNAMIC. In each state the corresponding device is calibrated. A video demonstrating the process with a similar application can be seen here. Upon reaching the final (dynamic) state, the dots used for the projector calibration that were being projected in the static state are almost gone; I say almost because sometimes 1-4 dots show up at random locations, unlike in the video I mentioned. Since the calibration process comes to a halt, the application never reaches the point in the code where CameraProjectorExtrinsics.yml and calibrationProjector.yml are generated.
2) The clean(maxReprojErrorCamera) call crashes the application. To run the application all the way through, I have to set a high value in settings.xml so that this function is never called. From what I've understood this function isn't mandatory, but how much impact does it have on the calibration?
3) Are the values in the projector's extrinsic matrix expressed in meters or in centimeters?
Any help would be greatly appreciated!
