Options/best practice for evaluating the simulation rate in drake - drake

On Ubuntu 20.04 (focal) I used to use drake_visualizer for evaluating the simulation/realtime rate. With support for drake_visualizer dropped in favor of meldis on jammy, is there equivalent or similar functionality somewhere?
I can use lcm-spy to watch channels with known publish rates but that is a bit crude and noisy.

Drake's Meshcat does have a nice realtime-rate visualization, which is already available if you use MeshcatVisualizer. There is an open issue to make it available from meldis, too.

Related

Does TFF serializalize functions of another library?

I'm planning a TFF scheme in which the clients send the server data besides the weights, such as hardware information (e.g., CPU frequency). To achieve that, I need to call functions from third-party Python libraries, like psutil. Is it possible to serialize such functions (using tff.tf_computation)?
If not, what could be a solution to achieve this objective in a scenario where I'm using a remote executor setting through gRPC?
Unfortunately no, this does not work without modification. TFF uses TensorFlow graphs to serialize the computation logic that runs on remote machines; it does not interpret arbitrary Python code on those machines.
There may be a solution using a TensorFlow custom op. This would mean writing C++ code to retrieve the CPU frequency, and then a Python API to add the operation to the TensorFlow graph during computation construction. TensorFlow's "Create an op" guide provides detailed instructions.
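As a concrete illustration of the simpler workaround: collect the hardware information in ordinary Python on the client, outside any tff.tf_computation, and ship it to the server as plain data (e.g., alongside the model update over gRPC). A stdlib-only sketch; psutil would give richer data such as CPU frequency, and the function name here is made up for illustration:

```python
import json
import os
import platform

def collect_client_metadata():
    """Gather hardware info with plain Python, outside the TF graph.

    This runs as ordinary client-side code; only its *result* (a dict
    of plain values) is transmitted, so nothing here needs to be
    serialized by tff.tf_computation.
    """
    return {
        "hostname": platform.node(),
        "machine": platform.machine(),
        "cpu_count": os.cpu_count(),
        "python_version": platform.python_version(),
    }

payload = json.dumps(collect_client_metadata())  # ready to send over gRPC
```

The key design point is that the metadata travels as data, not as serialized computation, so the remote executor never needs to run psutil.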

Can I use drake to test Visual SLAM algorithms?

I was wondering whether I could leverage the modularity drake gives to test Visual SLAM algorithms on realtime data. I would like to create 3 blocks that output acceleration, angular speed, and RGBD data. The blocks should pull information from a real sensor. Another block would process the data and produce the current transform of the camera and a global map. Effectively, I would like to cast my problem into a "Systems" framework so I can easily add filters where I need them.
My question is: Given other people's experience with this library, is Drake the right tool for the job for this usecase? Specifically, can I use this library to process real time information in a production setting?
Visual SLAM is not a use case I've implemented myself, but I believe the Drake Systems framework should be up to the task, depending on what you mean by "realtime".
We definitely ship RGBD data through the framework often.
We haven't made any attempt to support running Drake in hard realtime, but it can certainly run at high rates. If you were to hit a performance bottleneck, we tend to be pretty responsive and would welcome PRs.
As for "production-level": it is certainly our intention for the code and process to be mature enough for that setting, and numerous teams already use Drake that way.

Real Time Image processing from a drone

I have the following products:
drone iris+
Pixhawk
For my final-year project I want to process the image from the drone in real time and control the drone based on the image.
I can't work out which product would be best for me: is it the Raspberry Pi, or maybe something else that I'm not familiar with?
Thanks
Any embedded Linux computer should work. The Odroid series has more computing power than a Raspberry Pi, which will be helpful here. See this article for setup instructions: http://dev.ardupilot.com/wiki/odroid-via-mavlink/
Regarding software: I would suggest the OpenCV computer-vision library for your image-processing needs. It has a nice built-in function for camera input that interfaces well with both Python and C++. Depending on your experience writing software, I would recommend Python (higher level, possibly slower, portable) or C++ (a fighter jet: harder to use, but with a higher performance ceiling). C++ may be appropriate for the speed necessary to operate a drone. I would check the docs to see whether the package serves your needs before diving in.
Regarding hardware: Consider using Arduino to interface with peripheral hardware, but I'm definitely not experienced with this sort of thing.
Have fun!

Advice on a GPU for Dell Precision T3500 for image processing

I am a grad student and in our lab we have a Dell Precision T3500 (http://www.dell.com/us/business/p/precision-t3500/pd). We use it primarily for image-processing research, and we need to use OpenCV 2.4.7's "ocl" (i.e., OpenCL) bindings to parallelize our work for some publications.
I looked at the workstation's specs and it specifies that we can get a NVIDIA Quadro 5000 or an AMD FirePro V7900 (the best of both manufacturers for this workstation).
This is where I am confused. Most reviews compare performance for CAD/CAM, Maya, and other software, but we will be writing our own code using OpenCV. Can anyone help me choose the better of these two GPUs? Or is there any way to get a better GPU by upgrading the power supply?
We would greatly appreciate all the advice we can get at this stage!
Thank you very much.
If you are using OpenCL, I agree with DarkZeros: you should probably buy AMD hardware. Nvidia supports OpenCL only grudgingly, as they want everyone to use CUDA.
Both of the cards you mentioned seem rather similar, with a theoretical maximum of around 1 TFLOPS. However, both are rather old and very expensive. If you are not bound by a purchasing agreement, I really recommend buying a consumer card. The specs on dell.com only mean that if you purchase the computer from Dell you can select one of those GPUs for it; they do not limit what you can do afterwards.
Depending on the chassis, you could change your power supply. That would let you purchase something like this: http://www.amazon.com/XFX-RADEON-1000MHz-Graphics-R9290XENFC/dp/B00G2OTRMA . It has double the memory of either of those professional cards and over 5x the theoretical processing power.
To be fair, if you have the money to spend, the GTX Titan is still an excellent choice. It is about as fast as that AMD card, and you can use CUDA with it if you need to; considering how common CUDA is in scientific computing, it might be wise to go that route.
However, if you cannot switch your power supply (if it's a non-standard size or whatnot), then you are more limited. In that case, look for pretty much the heftiest card that can run on 150 W. Even those have perhaps double the performance of the cards the computer was originally available with.

OpenCV + Webcam compatibility

For the people that have experience with OpenCV: are there any webcams that don't work with OpenCV?
I am looking into the feasibility of a project and I know I am going to need a high quality feed (1080p), so I am going to need a webcam that is capable of that. So does OpenCV have problems with certain cameras?
To analyse a video feed of that resolution on the fly I will need a fast processor, I know this, but will I need a machine that is not consumer-available? I.e., will an i7 do?
Thanks.
On Linux, if it's supported by v4l2, it is probably going to work (e.g., my home webcam isn't listed, but it's v4l2 compatible and works out of the box). You can always use the camera manufacturer's driver to acquire frames, and feed them to your OpenCV code. You can even sub-class the VideoCapture class, and implement your camera driver to make it work seamlessly with OpenCV.
I would think the latest i7 series should work just fine. You may want to also check out Intel's IPP library for more optimized routines. IPP also easily integrates into OpenCV code since OpenCV was an Intel project at its inception.
If you need really fast image processing, you might want to consider adding a high performance GPU to the box, so that you have that option available to you.
Unfortunately, the page that I'm about to reference doesn't exist anymore. OpenCV evolved a lot since I first wrote this answer in 2011 and it's difficult for them to keep track of which cameras in the market are supported by OpenCV.
Anyway, here is the old list of supported cameras organized by Operating System (this list was available until the beginning of 2013).
Whether your camera works depends mainly on whether OpenCV supports the driver model your camera uses.
Quote from Getting Started with OpenCV capturing,
Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL) and two on Linux: Video for Linux(V4L) and IEEE1394. For the latter there exists two implemented interfaces (CvCaptureCAM_DC1394_CPP and CvCapture_DC1394V2).
So if your camera is VFW- or MIL-compliant under Windows, or fits the standard V4L or IEEE 1394 driver model under Linux, then it will probably work.
But if not, as mevatron says, you can sub-class the VideoCapture class and implement your camera driver to make it work seamlessly with OpenCV.
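The "implement your own driver behind a VideoCapture-like interface" idea can be sketched without OpenCV at all: any object exposing `read()`, `isOpened()`, and `release()` can be dropped into code written against `cv2.VideoCapture`. The synthetic frame source below stands in for a vendor SDK:

```python
class SyntheticCapture:
    """Mimics the cv2.VideoCapture interface over a custom frame source."""

    def __init__(self, num_frames=10, width=640, height=480):
        self._remaining = num_frames
        self._shape = (height, width)
        self._open = True

    def isOpened(self):
        return self._open

    def read(self):
        # A real driver would fetch a frame from the vendor SDK here.
        if not self._open or self._remaining <= 0:
            return False, None
        self._remaining -= 1
        frame = [[0] * self._shape[1] for _ in range(self._shape[0])]
        return True, frame

    def release(self):
        self._open = False

# Consumer code written against the VideoCapture-style interface:
cap = SyntheticCapture(num_frames=3)
frames = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
cap.release()
```

Swapping the synthetic source for calls into the manufacturer's driver gives you a seamless bridge into existing OpenCV processing code.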