Using Underwater Robot Model on ROS (Noetic) and Gazebo

I have to work with ROS Noetic and Gazebo on a model of a semi-submerged drone. To test some different programs, I want to use the model of diffboat, located in this Git repository.
Because Kinetic reaches end of life in a few days, I have to work with ROS Noetic (or Foxy/ROS 2), and the problem is that this project is not compatible with ROS Noetic.
Do you have any advice on making this project compatible with ROS Noetic?

Hi, I am not sure whether you have to work with the environment you mentioned, or whether Unity instead of Gazebo would also be an alternative.
But if you want to check out other maritime simulators, I can recommend these links:
ROS Discourse on Maritime Robotics
UUV Simulator (ROS, Gazebo)
DAVE Simulator (ROS, Gazebo)
Plankton Simulator (ROS 2, Gazebo)
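As for the port itself: most Kinetic-era packages fail on Noetic because their nodes are Python 2 and their build files predate the Python 3 switch. A minimal sketch of the kind of CMakeLists.txt change that is usually needed (the script name here is hypothetical, not taken from the diffboat repository):

```cmake
# Noetic runs nodes under Python 3; installing scripts with
# catkin_install_python() rewrites their shebang to the right interpreter.
catkin_install_python(PROGRAMS
  scripts/diffboat_controller.py   # hypothetical node script
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```

The scripts themselves typically also need a 2to3 pass (print statements, dict.iteritems(), and similar Python 2 idioms).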

Related

Using Google Cartographer with Turtlebot in a custom world?

I am a beginner in robotics and I am trying to use Google Cartographer to make my simulated Turtlebot autonomously build a map of its environment.
I have already done all the tutorials in ROS, and I can make the robot build the map using teleoperation, but I don't know how to make it build the map by itself. I want to use Google Cartographer for this.
I can however run the demo provided by Google and it works (it builds the map of a Museum).
# Launch the 2D depth camera demo.
roslaunch cartographer_turtlebot demo_depth_camera_2d.launch bag_filename:=${HOME}/Downloads/cartographer_turtlebot_demo.bag
The questions:
How can I run it on my own world instead of the bag file of that museum?
Does it need a YAML map like the one I built with teleoperation? What is the command to make it use my YAML map instead of a bag file?
Can I use a .png image with its YAML metadata?
Could it use the Gazebo simulated worlds that are .sdf files? What is the command to input those?
These are the specifications I have:
Ubuntu Xenial
ROS Kinetic
Gazebo 7
Turtlebot2
Google Cartographer for turtlebot
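For context on the first question: Cartographer does not take a map (YAML/.png) or a world (.sdf) as input at all; it builds the map from live sensor topics. So running it on your own Gazebo world means starting the simulation and launching Cartographer against the simulated sensors instead of replaying a bag, roughly along these lines (a sketch only; the configuration basename and topic names are assumptions for a Turtlebot2 setup, not verified values):

```xml
<!-- Hypothetical live-SLAM launch: start your own Gazebo world first,
     then run Cartographer against the simulated sensor topics. -->
<launch>
  <node name="cartographer_node" pkg="cartographer_ros" type="cartographer_node"
        args="-configuration_directory $(find cartographer_turtlebot)/configuration_files
              -configuration_basename turtlebot_depth_camera_2d.lua">
    <!-- Remap to wherever your simulated sensors actually publish. -->
    <remap from="scan" to="/scan"/>
    <remap from="odom" to="/odom"/>
  </node>
  <node name="occupancy_grid_node" pkg="cartographer_ros"
        type="cartographer_occupancy_grid_node" args="-resolution 0.05"/>
</launch>
```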
Thanks a lot! It's like I understand the concepts, but I don't know how to link things together to make them work.

Run OpenCV Viz on iOS

Is it possible to port my existing Viz3d-based visualisation to iOS? It is using VTK under the hood so in theory, it should be doable since VTK can run on iOS.
If yes, is there a working example of this, or can you provide one?

How to setup OpenNI 2.0 with OpenCV for a Kinect project?

I am working on my final year project. I need to work with the Kinect to detect hand movements. I have tried a few approaches and got some results; however, none was enough to meet the needs of the project. I saw this video long ago, and only recently learned that they had open-sourced it. So I gave it a try.
My problem now is how to set things up.
The awesome project above uses OpenNI with the Kinect. I tried to follow the OpenCV tutorials on building it from source, to let OpenCV work with OpenNI.
Problems:
It says "For the OpenNI Framework you need to install both the development build and the PrimeSensor Module," but as I followed the links, some of them were dead. It seems OpenNI 2.0 no longer uses PrimeSensor.
It also says that of the CMake folders, one is OpenCV/Src and the other is /build. But the OpenCV I downloaded doesn't contain any Src folder.
Still, I used the whole folder as Src, built it into a build folder, and enabled WITH_OPENNI. I used the Include and Lib folders from the OpenNI2 I downloaded, but when I built the OpenCV solution (already generated from CMake), all builds failed.
Also, while generating with CMake, even if my future OpenCV solution had been successfully built (which wasn't the case), CMake kept telling me that PrimeSense was not available, which made me feel insecure. :(
I am a bit confused about 32- and 64-bit. The project I want to follow says it works on 64-bit, but I use MS Visual C++ Express, where all projects are 32-bit. So which PrimeSense drivers (shipped with OpenNI2) should I use?
Could anyone please tell me how to set all these things (OpenNI2.0, OpenCV 2.4.3, PrimeSense) together so I can work with Kinect?
A while back I wrote two tutorials on 1) how to set up OpenNI 1.5 with NITE and 2) how to compile OpenCV with OpenNI support.
These can be found here and here.
I know this is not what you asked for, but the process of compiling OpenCV with OpenNI 2.0 should be similar and might help you understand where you are going wrong.
I will try to write a newer tutorial, however since I currently do not have access to a sensor, I might not be able to test if it works out in the end.
EDIT:
I have written some code to access Kinect data streams in OpenCV Mat format using OpenNI 2.x. The code github repo can be found here. Detailed guidance on how to set everything up can be found here.
OpenNI 2.x is much more advanced than the previous versions. You don't need to install the PrimeSense SensorKinect driver; you can use OpenNI 2.x along with the Microsoft Kinect SDK 1.x.
Install both the 64- and 32-bit OpenNI 2.x if you have Windows 7 x64, otherwise only the 32-bit version. Configure it with Visual Studio 2010 or 2012. You can follow this video:
http://www.youtube.com/watch?v=ACqPsV0R4to
Then configure OpenCV for Visual Studio 2010 or 2012. You can follow this link:
http://4someonehelp.blogspot.in/2013/04/install-opencv-245-using-visual-studio.html
Thanks

OpenNI + OpenCV don't work with CV_CAP_OPENNI C++

I'm trying to use OpenCV with Kinect on Windows 7 x64, so I installed OpenNI, NITE and PrimeSense (by avin2).
I used CMake to compile OpenCV 2.3.1 and checked that the CMake flags were all correct, but I tried a simple piece of code and it never found the Kinect.
All the samples of OpenNI and PrimeSense work fine.
I already installed x86 and x64 drivers and it still doesn't work!
I'm using VideoCapture, and isOpened always returns 0.
Anyone know the solution?
I did that under Linux (Ubuntu 12.04) last weekend and it worked fine.
Try to reinstall component by component, and recompile your OpenCV.
But I agree there are things which are not clear about how to deal with all that.
I replaced the Kinect with an Asus Xtion and right now that doesn't work... but that's another topic.
About PrimeSense hardware: as far as I know, the Kinect is made by PrimeSense... moreover, PrimeSense is a member of the OpenNI project, which is used behind the scenes by the PCL and OpenCV libraries, and by ROS's openni_camera stack...
I have installed the OpenCV Kinect support on Windows 7 64-bit (Professional) and it works fine.
1. Drivers for the Kinect: here
Note
When you install the drivers, make sure that Windows Update will not install additional drivers.
Tip: disconnect from the internet when you install them. ;)
Check in the "Device Manager" and search for PrimeSense.
Something like this:
PrimeSense
|- Kinect Audio
|- Kinect Camera
|- Kinect Motor
Check if it's working. Run an OpenNI sample.
2. OpenCV
Download it:
...://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.3.1/OpenCV-2.3.1-win-superpack.exe/download
Configure OpenCV in CMake. Remember to check on the "OpenNI" option.
Maybe you'll get an error like "warning: PrimeSense..."; this happens because OpenCVFindOpenNI.cmake is outdated.
You have to make some changes.
Go here and download the changes:
Click here and download it (at the bottom of the page: "Download in other formats: Original Format").
You have to make the changes in the original file "OpenCVFindOpenNI.cmake".
It's in the root folder "OpenCV-2.3.1\".
Delete each line marked with "-", and replace/add each line marked with "+".
Configure and compile OpenCV.
After this it'll work fine, at least it should... :)
Sure... you have to compile... ;)
In my case: Visual Studio C++ 9 (2008), compiled in Release mode only.
You have to add the "bin" folder to the system path after compiling.
Run a sample:
"OpenCV-2.3.1\samples\cpp\kinect_maps.cpp" and enjoy.
Kinect for Windows is perhaps not supported by the PrimeSense hardware drivers, or even by avin2's.

Compatibility porting program

I am interested in trying to get a program ported to 64-bit and would like to know if it's even a good candidate for porting. I am a lighting director and have built a SUSE 11.1 Linux box for a program called MagicQ made by ChamSys (http://www.chamsys.be/download.html). I have been working on this for about 6 months now and have all the hardware recognised. I am still working on stage visualizers, and I have a separate CPU/board generating the DMX512 signal via PoE. I don't think getting it to run on SUSE will be a problem (it was natively built for Ubuntu).
Any help or direction is greatly appreciated!!
Ubuntu and SUSE are subtly different in how things are laid out: file systems, home directories and such. Usually when you try to install a package on either one, you need to use its own package manager so that all dependencies are handled and you don't have to manually hunt down package 'x' version 'y' and package 'a' version 'b' just to get something working.
If you know that you have all the dependencies covered, and if you have the raw source code, you should be able to just run a compiler against the source code and have it compiled for a 64-bit processor.
Here is a link to the GCC, the GNU Compiler Collection for your reference.
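As a side note on what "64-bit" concretely changes: the visible difference between the two targets is pointer width. A quick sketch for checking which flavor a given interpreter or toolchain build is (8-byte pointers mean 64-bit, 4-byte mean 32-bit):

```python
import struct
import platform

# calcsize("P") is the size of a C pointer in this build:
# 8 bytes on a 64-bit build, 4 bytes on a 32-bit one.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit build on {platform.machine()}")
```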
Good luck with your porting project.
