How to generate ".svo" file from rosbag for ZED SDK - ros

I would like to generate a point cloud from stereo videos using the ZED SDK from Stereolabs.
What I have now is some rosbags with left and right images (and other data from different sensors).
My problem comes when I extract the images and create videos from them: I can produce videos in a standard format (e.g. .mp4) using ffmpeg, but the ZED SDK needs the .svo format, and I don't know how to generate it.
Is there some way to obtain .svo videos from rosbags?
Also, once I get the .svo files, how could I generate the point cloud using the SDK if I am not able to use a graphical interface? I am working on a DGX workstation using ROS (Melodic, Ubuntu 18.04) in Docker, and I cannot get rviz or any other graphical tool to work inside the Docker image, so I think the point cloud generation should be "automated", but I don't know how.
I have to say that this is my first project using ROS, the ZED SDK, and Docker, so that's why I am asking these (maybe) basic questions.
Thank you in advance.

You can't. The .svo file format is a proprietary format that can only be recorded with a ZED camera through their SDK (or wrapper), and can only be read and exported by that same SDK/wrapper.
To provide some helpful direction: all of the functionality and processing you would like to get out of the images with the SDK's features can also be done with trusted open-source community projects. Examples include OpenCV (which bundles many AI/DNN object detection, pose estimation, and 3D reconstruction algorithms), PCL, their ROS wrappers, or other excellent algorithms whose chief API and reference implementation is their ROS node.
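Since .svo is off the table, a concrete route for the headless point cloud part of your question is to go straight from the bag to OpenCV. Below is a minimal sketch, assuming the bag contains already-rectified images on /zed/left/image_raw and /zed/right/image_raw; the topic names, bag filename, and the placeholder Q matrix are assumptions you must replace with your own:

```python
# Minimal sketch: rosbag -> stereo disparity -> 3D points, with no GUI needed.
import rosbag
import cv2
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()
left, right = {}, {}

# Collect images keyed by timestamp (assumed topics; check `rosbag info`).
# This exact-stamp matching assumes synchronized capture; otherwise match
# nearest timestamps instead.
with rosbag.Bag('recording.bag') as bag:
    for topic, msg, t in bag.read_messages(
            topics=['/zed/left/image_raw', '/zed/right/image_raw']):
        img = bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
        (left if 'left' in topic else right)[msg.header.stamp.to_nsec()] = img

# Q is the 4x4 disparity-to-depth matrix from your stereo calibration
# (e.g. from cv2.stereoRectify); the identity here is only a placeholder.
Q = np.eye(4, dtype=np.float64)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
for stamp in sorted(set(left) & set(right)):
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left[stamp], right[stamp]).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 XYZ
    mask = disparity > disparity.min()
    np.save('cloud_%d.npy' % stamp, points[mask])  # headless: no viewer needed
```

Because everything is written to disk rather than rendered, nothing here needs a display, so it runs fine inside a Docker image without rviz.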

Related

Object Tracking in hololens

Does anyone work on object detection/tracking on Hololens? I'm expecting to automatically detect a physical object using a marker or without a marker.
As far as I know, there are libraries like OpenCV that work on raw images. But HoloLens is a powerful computer vision device; is there a possibility Microsoft might expose some high-level object detection API?
With the current version of HoloLens it cannot be done natively. Rumor says that the next version will come with an AI chipset for this purpose.
In the meanwhile, you can use things like Vuforia Object Detection, which works smoothly.
Anyway, there is a version of OpenCV in the Unity Asset Store which comes with several samples of what you want to do.

Convert pcl xyzrgb(a) point cloud to images from different angles of the cloud

I would like to generate images from my point cloud (Kinect) from different angles, but the only function to take snapshots seems to require an open viewer (open window) and saves the images to a file. I would like to process them later and show them in a custom viewer, so storing them in RAM is necessary.
Does the Point Cloud Library provide such a method? Or does anybody know what the approach with PCL would look like?
My second approach is to use OpenCV Mats and the projectPoints method for the projection, but this works on xyz coordinates, not xyzrgb(a), so I would lose the information about which color from the point cloud belongs to each point in the projected image.
I'm stuck a little bit here :( and hope you can help me :)
Many thanks
Greetings
Carlo
I don't believe there is a way to do that using PCL functions. However, PCL uses VTK to build its viewer, and I believe you could do what you're talking about by using VTK functionality, though it's likely going to be more complicated. This article might be a good start.
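For the projectPoints approach from the question, the color doesn't actually have to be lost: if you keep the RGB columns in the same row order as the XYZ columns, the projected pixel coordinates index straight back into the colors. A minimal sketch in Python (the thread is C++/PCL, but the idea carries over one-to-one; the intrinsics, image size, and the Nx6 cloud layout are assumptions):

```python
# Project the XYZ part of an XYZRGB cloud with cv2.projectPoints while
# carrying each point's color alongside it, so no color information is lost.
import cv2
import numpy as np

def render_view(cloud, rvec, tvec, K, size=(640, 480)):
    xyz = cloud[:, :3].astype(np.float64)
    rgb = cloud[:, 3:6].astype(np.uint8)
    # Project 3D points to pixel coordinates for this camera pose.
    pixels, _ = cv2.projectPoints(xyz, rvec, tvec, K, None)
    pixels = pixels.reshape(-1, 2).round().astype(int)
    image = np.zeros((size[1], size[0], 3), np.uint8)
    # Colors travel with their points because both arrays share row order.
    inside = ((pixels[:, 0] >= 0) & (pixels[:, 0] < size[0]) &
              (pixels[:, 1] >= 0) & (pixels[:, 1] < size[1]))
    image[pixels[inside, 1], pixels[inside, 0]] = rgb[inside]
    return image  # stays in RAM; save or display only if you want to

# Example: one view per angle, rotating the camera around the Y axis.
K = np.array([[525.0, 0, 320], [0, 525.0, 240], [0, 0, 1]])
cloud = np.random.rand(1000, 6) * [1, 1, 1, 255, 255, 255]  # stand-in cloud
for angle in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    view = render_view(cloud, np.array([0.0, angle, 0.0]),
                       np.array([0.0, 0.0, 2.0]), K)
```

Note that this simple splatting has no depth test, so a far point can overwrite a near one; sort points back-to-front by camera-space depth first if occlusion matters.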

Teensy + IR camera + OpenCV

I have never ever asked this kind of question on StackOverflow before, and I wonder if you guys could help me, because it is a "bit" vague.
I have to design a project that uses a Teensy (a simple ARM platform) to get data from an IR camera (FLIR, resolution 80x60) over SPI, stream that data to a Linux/Windows machine (through USB-serial), and do something simple with OpenCV.
THE PROBLEM: The project lacks some "innovation". It should not be something very complicated, but rather a different approach, or trying something new.
Do you have recommendations/tutorials/books/experience with the things mentioned above? Or do you see potential for trying something new?
You might want to check out the OpenCV Cookbook for some ideas.
There is a project using this FLIR with a Teensy. It provides a thermal image on a small LCD screen (without any additional computer).
https://hackaday.io/project/8994-diy-thermocam
So, the Teensy can get data through SPI.
Can the Teensy then send the data through USB? Probably, but you will have to check whether the rate is high enough.
Using OpenCV directly on the Teensy is not possible because of the size of the library, but you can probably do some basic image processing if the code is small enough.
The FLIR Lepton can be directly interfaced with a Linux or Windows computer, so I don't really see the need for the Teensy.
I would recommend a Raspberry Pi to interface with the FLIR Lepton and then do some image processing. It's well documented on the web.
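For the PC side of the pipeline, here is a minimal sketch of reading raw 80x60 frames over USB-serial and handing them to OpenCV. The port name, baud rate, and the bare 2-bytes-per-pixel framing are assumptions; they must match whatever protocol your Teensy firmware actually sends:

```python
# Read raw Lepton frames forwarded by the Teensy over USB-serial and
# colorize them with OpenCV for display.
import cv2
import numpy as np
import serial

WIDTH, HEIGHT = 80, 60
FRAME_BYTES = WIDTH * HEIGHT * 2  # Lepton delivers 14-bit values in 16-bit words

port = serial.Serial('/dev/ttyACM0', 115200, timeout=2)
while True:
    raw = port.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:
        continue  # timed out mid-frame; resync on the next read
    frame = np.frombuffer(raw, dtype=np.uint16).reshape(HEIGHT, WIDTH)
    # Stretch the thermal range to 8 bits and colorize for display.
    norm = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    colored = cv2.applyColorMap(cv2.resize(norm, (320, 240)), cv2.COLORMAP_JET)
    cv2.imshow('thermal', colored)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```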

Is there a virtual/dummy IMAQ camera for LabVIEW?

I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: This is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist; I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away behind the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Use the IMAQdx driver (assuming you have it), then insert the Vision Acquisition Express VI, and you can choose AVIs or even still pictures as a source.
GigESim is camera emulation software. Unfortunately it is proprietary and too expensive (>$500) for my needs, but perhaps others will find this link useful.
Does anyone know of a viable open-source alternative?
There's an IP camera emulator project that emulates an IP camera in Python. I haven't used it myself, so I don't know whether it can be used by IMAQ.
Let us know if it works for you.
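For reference, the idea behind such an emulator is small enough to sketch: a tiny MJPEG-over-HTTP server that loops a stack of JPEGs, so any client that can open an MJPEG stream sees a "camera". This is not the linked project, just a minimal sketch of the same idea, and whether IMAQ will accept the stream is exactly the open question; the frames/ directory and port are assumptions:

```python
# Minimal MJPEG-over-HTTP "camera": loops a stack of JPEGs forever.
import glob
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumes frames/*.jpg exists; each file becomes one video frame.
FRAMES = [open(path, 'rb').read() for path in sorted(glob.glob('frames/*.jpg'))]
BOUNDARY = b'--frame'

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type',
                         'multipart/x-mixed-replace; boundary=frame')
        self.end_headers()
        while True:
            for jpeg in FRAMES:  # loop the image stack at ~10 fps
                self.wfile.write(BOUNDARY + b'\r\n')
                self.wfile.write(b'Content-Type: image/jpeg\r\n')
                self.wfile.write(b'Content-Length: %d\r\n\r\n' % len(jpeg))
                self.wfile.write(jpeg + b'\r\n')
                time.sleep(0.1)

HTTPServer(('0.0.0.0', 8080), MJPEGHandler).serve_forever()
```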
I know this question is really old, but hopefully this answer helps someone out.
IMAQdx also works with Windows DirectShow devices. While normally these are actual physical capture devices (think USB webcams), there is no requirement that they be.
There are a few different pre-made options available on the web. I found using Open Broadcaster Studio and this Virtual Cam plugin to be easy enough. Basically:
Download and install both.
Load your media sources in the sources list.
Enable the VirtualCam stream (Tools > VirtualCam). Press Start.

FlyCapture2 and OpenCV, CMake build question

Platform: amd_64
Operating System: Ubuntu 8.10
Problem:
The current release of OpenCV (2.1 at the time of writing) and libdc1394 do not properly interface with the new USB-interface PointGrey High-Res FireFlyMV Color camera.
Does anyone have this camera working with OpenCV on Ubuntu?
Currently, I'm writing my own frame grabber using PointGrey's FlyCapture2 SDK, which works well with the camera. I'd like to interface this with OpenCV by converting each image I grab into an IplImage object. When I write OpenCV programs, I use CMake; the example code for the FlyCapture2 SDK uses fairly simple makefiles. Does anyone know how I can take the information from the simple FlyCapture2 makefile and add the appropriate lines to CMakeLists.txt for my CMake build?
Not a simple answer (sorry), but:
Generally you don't want to use cvCaptureFromCAM() for high-performance cameras beyond an initial test that they work. Even for standard interfaces like FireWire, it is very limited in which camera features it can control, it doesn't handle threading well, and the performance is poor, especially at high data rates.
The more common way is to control the camera with the maker's own SDK and output frames in a form (cv::Mat / IplImage) that OpenCV can process. All OpenCV image types are very flexible about sharing data with the camera API and can specify padding/row strides etc., so you should be able to design it so there is no unnecessary copying.
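The thread is C++ (IplImage/CMake), but the zero-copy pattern described above looks the same in any binding: wrap the raw buffer the vendor SDK hands you in an OpenCV-compatible image without duplicating pixel data. A sketch in Python, where the grab-side details are hypothetical stand-ins for the real FlyCapture2 calls:

```python
# Zero-copy wrapping of a vendor SDK frame buffer into an OpenCV image.
# The buffer source here is a fake; width/height/stride would come from the
# SDK's image metadata in a real frame grabber.
import numpy as np
import cv2

def wrap_sdk_frame(buffer, width, height, stride):
    """View a vendor SDK buffer as an OpenCV image with zero copy."""
    # np.frombuffer shares memory with `buffer`; nothing is duplicated.
    flat = np.frombuffer(buffer, dtype=np.uint8)
    # Honor the SDK's row stride (padding), then trim to the real width.
    rows = flat.reshape(height, stride)
    return rows[:, :width]  # still a view, still zero-copy

# Hypothetical usage with a fake 8-bit mono frame:
width, height, stride = 640, 480, 672   # stride > width means padded rows
buffer = bytes(height * stride)          # stands in for the SDK's buffer
frame = wrap_sdk_frame(buffer, width, height, stride)
edges = cv2.Canny(frame, 50, 150)        # any OpenCV call works on the view
```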
