DJI SDK integrated with RViz and Open Manipulator X - ROS

I am building a project that combines the Open Manipulator X robotic arm with a Matrice 210 V1 aircraft. I want to handle the odometry part in RViz: how could I integrate the aircraft's telemetry so that I can visualize and obtain its position in RViz without losing control and visualization of the arm?
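One way to bridge the two is a small node that re-publishes the aircraft telemetry as a nav_msgs/Odometry message plus a TF frame living in its own tree, so the arm's existing TF and RViz displays are untouched. The sketch below is only a rough starting point and assumes the dji_sdk node publishes /dji_sdk/local_position and /dji_sdk/attitude (topic and frame names are placeholders; check what your SDK version actually provides).

```python
#!/usr/bin/env python
# Minimal sketch (ROS 1, rospy). The /dji_sdk/local_position and /dji_sdk/attitude
# topic names are assumptions -- check what your dji_sdk node actually publishes
# for your SDK version. Frame names are placeholders.
import rospy
import tf
from geometry_msgs.msg import PointStamped, QuaternionStamped
from nav_msgs.msg import Odometry


class DjiOdomBridge(object):
    def __init__(self):
        self.odom_pub = rospy.Publisher("drone/odom", Odometry, queue_size=10)
        self.tf_br = tf.TransformBroadcaster()
        self.last_att = None
        rospy.Subscriber("/dji_sdk/attitude", QuaternionStamped, self.att_cb)
        rospy.Subscriber("/dji_sdk/local_position", PointStamped, self.pos_cb)

    def att_cb(self, msg):
        self.last_att = msg.quaternion

    def pos_cb(self, msg):
        if self.last_att is None:
            return
        odom = Odometry()
        odom.header.stamp = msg.header.stamp
        odom.header.frame_id = "drone_odom"       # kept separate from the arm's TF tree
        odom.child_frame_id = "drone_base_link"
        odom.pose.pose.position = msg.point
        odom.pose.pose.orientation = self.last_att
        self.odom_pub.publish(odom)
        # Broadcast TF so RViz can place the aircraft without touching the arm's frames.
        q = self.last_att
        self.tf_br.sendTransform((msg.point.x, msg.point.y, msg.point.z),
                                 (q.x, q.y, q.z, q.w),
                                 msg.header.stamp, "drone_base_link", "drone_odom")


if __name__ == "__main__":
    rospy.init_node("dji_odom_bridge")
    DjiOdomBridge()
    rospy.spin()
```

In RViz you would then add an Odometry or TF display for the drone frames next to the arm's RobotModel display; a single static_transform_publisher tying drone_odom to the arm's fixed frame keeps both in one consistent view.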

Related

Suggestion for implementing a sensor fusion algorithm using sonar and camera data for obstacle avoidance in a differential-drive AMR

I am using a Pioneer 3-DX with the RosAria library, where the sonar publishes obstacle positions from which I can extract ranges to obstacles. I mounted an Intel RealSense D435i on the Pioneer and installed all the libraries required for the camera. Now the question is: what data should I extract from the camera so that the two sources complement each other when fused?
I ran the YOLOv3 algorithm on the RealSense camera to detect objects, getting the bounding box (x_min, y_min, x_max, y_max) and depth information for each detected object.
So, out of the many topics the camera publishes, which information should I fuse with the sonar range data to improve obstacle detection for avoidance?
I am attaching the sonar range data and the camera's published topics below.
Sonar ranges from the 8 sonar rings:
Point cloud of the sonar:
Camera published topics:
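One common way to make the two sensors complement each other is to reduce each YOLO detection to a (bearing, range) measurement, using the median of the aligned depth image inside the bounding box, and then merge it with the sonar return closest in bearing, keeping the shorter range (sonar sees glass and low obstacles the camera can miss, while the camera has far finer angular resolution). The sketch below is only an illustration: it assumes ROS 1, a forward-facing camera, and placeholder topic names.

```python
#!/usr/bin/env python
# Minimal sketch (ROS 1). Topic names below are assumptions -- remap them to
# whatever your Pioneer/RealSense setup actually publishes.
import math
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo, Image, PointCloud


class SonarCameraFusion(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.fx = self.cx = None
        self.depth = None        # aligned depth image, metres
        self.sonar = []          # list of (bearing_rad, range_m) in the robot frame
        rospy.Subscriber("/camera/color/camera_info", CameraInfo, self.info_cb)
        rospy.Subscriber("/camera/aligned_depth_to_color/image_raw", Image, self.depth_cb)
        rospy.Subscriber("/RosAria/sonar", PointCloud, self.sonar_cb)

    def info_cb(self, msg):
        self.fx, self.cx = msg.K[0], msg.K[2]

    def depth_cb(self, msg):
        # The D435i publishes 16-bit depth in millimetres; convert to metres.
        self.depth = self.bridge.imgmsg_to_cv2(msg, "16UC1").astype(np.float32) / 1000.0

    def sonar_cb(self, msg):
        # RosAria's sonar point cloud: one point per ring, in the robot frame.
        self.sonar = [(math.atan2(p.y, p.x), math.hypot(p.x, p.y)) for p in msg.points]

    def fused_obstacle(self, box):
        """Call this from your YOLO callback with box = (x_min, y_min, x_max, y_max)."""
        if self.depth is None or self.fx is None:
            return None
        x0, y0, x1, y1 = box
        patch = self.depth[y0:y1, x0:x1]
        valid = patch[patch > 0.0]
        if valid.size == 0:
            return None
        cam_range = float(np.median(valid))           # robust depth of the detected object
        u = 0.5 * (x0 + x1)
        bearing = math.atan2(self.cx - u, self.fx)    # approximate bearing of the box centre
        # Keep the shorter of the camera range and any sonar return near that bearing.
        near = [r for (b, r) in self.sonar if abs(b - bearing) < math.radians(15)]
        return bearing, min([cam_range] + near)


if __name__ == "__main__":
    rospy.init_node("sonar_camera_fusion")
    node = SonarCameraFusion()
    rospy.spin()
```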

Machine Learning: Question regarding processing of RGBD streams and involved components

I would like to experiment with machine learning (especially CNNs) on the aligned RGB and depth streams of either an Intel RealSense or an Orbbec Astra camera. My goal is to do some object recognition and to highlight/mark the detected objects in the output video stream (as a starting point).
But after reading many articles I am still confused about the frameworks involved and how the data flows from the camera through the software components. I just can't get a high-level picture.
This is my assumption regarding the processing flow:
Sensor => Driver => libRealSense / Astra SDK => TensorFlow
Questions
Is my assumption about the processing flow correct?
Orbbec provides an additional Astra OpenNI SDK besides the Astra SDK, whereas Intel has wrappers (?) for OpenCV and OpenNI. When or why would I need these additional libraries/support?
What would be the quickest way to get started? I would prefer C# over C++.
Your assumption is correct: the data-acquisition flow is sensor -> driver -> camera library -> other libraries built on top of it (see the OpenCV support for Intel RealSense) -> captured image. Once you have the image, you can of course do whatever you want with it.
The various libraries make it easy to work with the device. In particular, OpenCV compiled with Intel RealSense support lets you use OpenCV's standard data-acquisition stream without worrying about the image format coming from the sensor and used by the Intel library. 10/10 use these libraries, they make your life easier.
You can start from the OpenCV wrapper documentation for Intel RealSense (https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv). Once you are able to capture the RGBD images, you can build an input pipeline for your model using tf.data and develop any TensorFlow application that uses CNNs on RGBD images (just google it and look on arXiv for ideas about possible applications).
Once your model has been trained, just export the trained graph and use it for inference; your pipeline then becomes: sensor -> driver -> camera library -> libs -> RGBD image -> trained model -> model output.
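To make the capture half of that pipeline concrete, here is a minimal sketch using pyrealsense2 (the official Python binding for librealsense) rather than the C++ OpenCV wrapper, and Python rather than the C# the questioner prefers, purely to keep it short. The model path, the assumption that the network was saved as a Keras model, and the depth scaling are all placeholders.

```python
# Minimal sketch: aligned RGBD capture with pyrealsense2, fed to a trained model.
import numpy as np
import pyrealsense2 as rs
import tensorflow as tf

# Hypothetical trained network, assumed here to have been saved with model.save().
model = tf.keras.models.load_model("exported_rgbd_model")

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(cfg)
align = rs.align(rs.stream.color)          # align the depth frame onto the colour frame

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        color = np.asanyarray(frames.get_color_frame().get_data())   # HxWx3, uint8
        depth = np.asanyarray(frames.get_depth_frame().get_data())   # HxW, uint16 (mm)
        # Stack into one 4-channel RGBD input; the scaling here is a placeholder choice.
        rgbd = np.dstack([color.astype(np.float32) / 255.0,
                          depth.astype(np.float32) / 10000.0])
        prediction = model.predict(rgbd[np.newaxis, ...])
        # sensor -> driver -> camera library -> RGBD image -> trained model -> model output
finally:
    pipeline.stop()
```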

Displaying Google Tango scans with HoloLens

I am currently working with Google Tango and the Microsoft HoloLens. I had the idea of scanning a room or an object with Google Tango and then converting it and showing it as a hologram on the HoloLens.
For that I need to get the ADF file onto my computer.
Does someone know of a way to import ADF files onto a computer?
Do you know if it is possible to convert ADF files into usable 3D files?
An ADF is not a 3D scan of the room; it is a collection of feature descriptors from the computer-vision algorithms together with associated positional data, and the format is not documented.
You will want to take the point cloud from the depth sensor, convert it to a mesh (there are existing apps that do this), and import the mesh into a render engine on the HoloLens.
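As a concrete illustration of that point-cloud-to-mesh step, here is a rough sketch using Open3D (one of several tools that can do this); the file names are placeholders, and the decimation target is just a guess at what a HoloLens scene can comfortably render.

```python
# Sketch: point cloud (exported from the Tango depth sensor) -> mesh for the HoloLens.
import open3d as o3d

pcd = o3d.io.read_point_cloud("tango_scan.ply")            # placeholder export of the depth point cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50000)  # keep it light for the HoloLens
o3d.io.write_triangle_mesh("room_mesh.obj", mesh)          # import this mesh into Unity for the HoloLens
```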

Programming on Kinect without a Kinect sensor

I recorded some motions as a .xed file using Kinect Studio 1.8 and a Kinect for Windows sensor at my university. Now I want to use this .xed file instead of the Kinect sensor, because I don't have one at home and I want to keep improving my app.
When I follow the steps from a similar question it does not work; a message box says I need to connect a Kinect sensor.
How can I run my app without the Kinect sensor and test it with this .xed file? I read about Fakenect but I can't find any documentation on how to use it.

OpenCV supported camera types

I am using OpenCV 2.4.10 and am wondering: if I hook up a USB 2.0 camera that uses a 10-bit analog-to-digital converter and has a resolution of 1328 x 1048, does OpenCV support that type of camera? If it does, how will it store the pixel information? (I have not purchased the camera yet and would buy a different one if the software won't work with it, so I can't just test it myself.)
Clearly I didn't google well enough:
https://web.archive.org/web/20120815172655/http://opencv.willowgarage.com/wiki/Welcome/OS/
The list hasn't been updated in a while, though.
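A quick way to see how OpenCV will actually store the frames from a given UVC camera is to read one and inspect it; most USB camera drivers hand OpenCV 8-bit BGR frames regardless of the sensor's internal 10-bit ADC, so higher bit depths usually require the vendor's SDK. Minimal sketch (device index 0 is an assumption):

```python
import cv2

cap = cv2.VideoCapture(0)      # device index 0 is an assumption
ok, frame = cap.read()
if ok:
    # OpenCV stores the frame as a cv::Mat / numpy array, typically 8-bit BGR (CV_8UC3),
    # so a 10-bit sensor is usually delivered already truncated to 8 bits per channel.
    print(frame.shape, frame.dtype)
cap.release()
```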
