I have been trying to simulate a helium airship in Gazebo on ROS, but I can't find a plugin to simulate the propellers for thrust. Does anyone know a way to do it?
You should be able to model the propeller like a quadcopter propeller. See if https://dev.px4.io/en/simulation/gazebo.html helps.
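If writing a full Gazebo model plugin is more than you need, one workaround is to apply the thrust from a ROS node through the gazebo_ros apply_body_wrench service. Here is a minimal Python sketch under that assumption; the link name airship::propeller_link and the 5 N thrust value are hypothetical, and a real setup would scale the force with propeller speed.

import rospy
from geometry_msgs.msg import Wrench, Vector3
from gazebo_msgs.srv import ApplyBodyWrench

rospy.init_node("airship_thrust")
rospy.wait_for_service("/gazebo/apply_body_wrench")
apply_wrench = rospy.ServiceProxy("/gazebo/apply_body_wrench", ApplyBodyWrench)

# 5 N along the link's x axis (hypothetical value; torque left at zero).
wrench = Wrench(force=Vector3(5.0, 0.0, 0.0), torque=Vector3(0.0, 0.0, 0.0))
apply_wrench(body_name="airship::propeller_link",
             reference_frame="airship::propeller_link",
             wrench=wrench,
             start_time=rospy.Time(0),
             duration=rospy.Duration(-1))  # -1 applies until cleared

The PX4 simulation linked above models thrust with a Gazebo motor plugin driven by rotor velocity, which is the more physical approach if you need it.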
I have to work with ROS Noetic and Gazebo on a model of a semi-submerged drone. To test some different programs, I want to use the diffboat model located in this Git repository.
Because Kinetic reaches end of life in a few days, I have to work with ROS Noetic (or Foxy/ROS 2), and the problem is that this project is not compatible with ROS Noetic.
Do you have any advice for making this project compatible with ROS Noetic?
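For context, the usual incompatibility is that Kinetic-era packages target Python 2 while Noetic builds against Python 3, so at a minimum the nodes need changes like the following hypothetical sketch (the node and topic names are made up; the real diffboat sources may need more):

#!/usr/bin/env python3
# Shebang was "#!/usr/bin/env python" on Kinetic; Noetic needs python3.
import rospy
from std_msgs.msg import String

rospy.init_node("diffboat_status")  # the rospy API itself is unchanged
pub = rospy.Publisher("status", String, queue_size=1)
print("diffboat node started")  # print is a function in Python 3
rospy.spin()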
Hi, I am not sure whether you have to work with the environment you mentioned, or whether Unity instead of Gazebo would also be an alternative.
But if you want to check out other maritime simulators, I can recommend these links:
ROS Discourse on Maritime Robotics
UUV Simulator (ROS, Gazebo)
DAVE Simulator (ROS, Gazebo)
Plankton Simulator (ROS 2, Gazebo)
I am a beginner in robotics, and I am trying to use Google Cartographer to make my simulated TurtleBot build a map of its environment autonomously.
I have already done all the ROS tutorials, and I can make the robot build the map using teleoperation, but I don't know how to make it build the map by itself. I want to use Google Cartographer for this.
I can, however, run the demo provided by Google, and it works (it builds the map of a museum).
# Launch the 2D depth camera demo.
roslaunch cartographer_turtlebot demo_depth_camera_2d.launch bag_filename:=${HOME}/Downloads/cartographer_turtlebot_demo.bag
The questions:
How can I run it on my own world instead of the bag file of that museum?
Does it need a YAML map like the one I built with teleoperation? What is the command to make it use my YAML map instead of a bag file?
Can I use a .png image with its YAML metadata?
Could it use the Gazebo simulated worlds that are .sdf files? What would be the command to input one of those?
These are the specifications I have:
Ubuntu Xenial
ROS Kinetic
Gazebo 7
TurtleBot 2
Google Cartographer for TurtleBot
Thanks a lot! I feel like I understand the concepts, but I don't know how to link things together to make them work.
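For what it's worth, Cartographer consumes live sensor topics rather than a saved YAML map, so "building the map by itself" mostly comes down to something publishing velocity commands while Cartographer listens to the laser topic. A naive wander node like the hypothetical Python sketch below would do that; the topic names and thresholds are assumptions (TurtleBot 2 setups often route velocity commands through a mux):

import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    cmd = Twist()
    # Check the middle third of the scan for obstacles straight ahead.
    n = len(scan.ranges)
    ahead = [r for r in scan.ranges[n // 3:2 * n // 3]
             if scan.range_min < r < scan.range_max]
    if ahead and min(ahead) < 0.6:  # obstacle closer than 0.6 m: turn away
        cmd.angular.z = 0.8
    else:  # clear ahead: drive forward
        cmd.linear.x = 0.2
    pub.publish(cmd)

rospy.init_node("wander")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/scan", LaserScan, on_scan)
rospy.spin()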
Is it possible to port my existing Viz3d-based visualisation to iOS? It uses VTK under the hood, so in theory it should be doable, since VTK can run on iOS.
If yes, is there a working example of this, or can you provide one?
I am trying to record video from my USB webcam in a Qt GUI application on my Raspberry Pi.
All the tutorials I can find focus on cross-compiling, but I want to program and build this directly on the Pi. I have successfully installed Qt Creator and built GUI applications, and they work.
Questions:
Is it possible to integrate OpenCV into Qt Creator on the Raspberry Pi? If yes, how?
Is this the correct approach to solve the problem?
Is there a better way to record videos in a Qt GUI application besides OpenCV?
Thank you very much!
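For what it's worth, OpenCV's VideoCapture and VideoWriter work fine when driven from the Qt event loop with a QTimer instead of a blocking while loop. Here is a minimal sketch in Python with PyQt5 (the same pattern applies in C++ from Qt Creator); the camera index 0, the XVID codec, and the ~20 fps rate are assumptions:

import sys
import cv2
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget

class Recorder(QWidget):
    def __init__(self):
        super().__init__()
        self.cap = cv2.VideoCapture(0)  # USB webcam, device 0 (assumed)
        w = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fourcc = cv2.VideoWriter_fourcc(*"XVID")
        self.writer = cv2.VideoWriter("out.avi", fourcc, 20.0, (w, h))
        self.label = QLabel()
        layout = QVBoxLayout(self)
        layout.addWidget(self.label)
        # Poll the camera from the event loop so the GUI stays responsive.
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.grab_frame)
        self.timer.start(50)  # ~20 fps

    def grab_frame(self):
        ok, frame = self.cap.read()
        if not ok:
            return
        self.writer.write(frame)  # append the frame to the video file
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = QImage(rgb.data, rgb.shape[1], rgb.shape[0],
                     rgb.strides[0], QImage.Format_RGB888)
        self.label.setPixmap(QPixmap.fromImage(img))  # live preview

    def closeEvent(self, event):
        self.timer.stop()
        self.cap.release()
        self.writer.release()
        event.accept()

app = QApplication(sys.argv)
win = Recorder()
win.show()
sys.exit(app.exec_())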
I want to train OpenCV on a server and send the XML generated by OpenCV to an iOS device, where an app will recognize faces using the XML trained by the server. I will use OpenCV in both apps, but the server runs Windows (training) and the device runs iOS (recognition).
So my main question is very simple:
Can the XML generated by the Windows version of OpenCV be used with the iOS version of OpenCV without any trouble? Has somebody done something similar who can give me some tips?
On Windows I will use .NET.
I think they won't have any trouble because they are the same library (OpenCV), so I suppose they share the same internal algorithms, but I want to be sure before starting the project.
Best regards, and thanks for your time.
There is no problem, but you must train with images taken from your actual devices. It is normal to have multiple XML sets for your different cameras. Normally you release these with the binary rather than as a download, but still...
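To illustrate why the file is portable: a trained cascade is just serialized parameters in XML, so loading it looks identical on every OpenCV build. A minimal sketch, with my_faces_cascade.xml standing in for the server-trained file and test.jpg for a device photo (both names are placeholders):

import cv2

# Load the server-trained cascade (placeholder file name).
cascade = cv2.CascadeClassifier("my_faces_cascade.xml")
img = cv2.imread("test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)

The CascadeClassifier underneath is the same C++ implementation on Windows and iOS, which is why the XML transfers cleanly between them.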