Trying to convert a Float64 into Float32 on ROS - ros

I'm working on a ROS project, trying to go from visualizing a marker to visualizing a point in a point cloud.
The marker coordinates are represented as float64.
The (PCL) point coordinates are represented as float32.
I'm implementing the code in something close to C++.
I simply want to test whether I'm going to lose a lot of precision with this type of conversion, but I don't know how to do it.
Could someone help me with a piece of code that performs this conversion?
Thanks!
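A minimal C++ sketch of the conversion and a check of the precision lost (the variable names and the sample value are hypothetical):
#include <cmath>
#include <cstdio>

int main() {
    // float64 (double) coordinate, as it would arrive in a marker message
    double x64 = 3.141592653589793;

    // Narrowing conversion to float32 (float), as used by PCL point types
    float x32 = static_cast<float>(x64);

    // Absolute error introduced by the narrowing
    double error = std::fabs(x64 - static_cast<double>(x32));

    std::printf("double: %.17g\nfloat : %.9g\nerror : %.3e\n", x64, x32, error);
    return 0;
}
For what it's worth, float32 keeps roughly 7 significant decimal digits, so for coordinates expressed in metres the error is typically far below sensor noise.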

Related

Eye to Hand Calibration OpenCV

I am new to eye-to-hand calibration. I have read the OpenCV documentation. It says that we need to use cv2.calibrateHandEye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam).
Can somebody clearly explain what input values we need to provide, where these values come from, and what matrix format they must be in? Particularly for (R_target2cam, t_target2cam). I am using a UFactory arm robot and an Intel RealSense camera, so I need to calibrate both. Kindly guide me.
These are my robot position coordinates.
So, I think I have Rx, Ry, Rz for R_gripper2base and X, Y, Z for t_gripper2base. What are R_target2cam and t_target2cam, and where can I get their values?
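As a rough sketch of where each input comes from (in C++; the API has the same shape in Python, and the names boardPoints, imagePoints, K, and dist are hypothetical placeholders): R_target2cam and t_target2cam are usually the calibration-board pose seen by the camera, recovered with cv::solvePnP for every robot pose at which you captured an image.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main() {
    // One entry per robot pose at which a board image was captured
    std::vector<cv::Mat> R_gripper2base, t_gripper2base;  // from the controller's X, Y, Z, Rx, Ry, Rz
    std::vector<cv::Mat> R_target2cam, t_target2cam;      // from the camera, via solvePnP

    // For each captured pose (sketch):
    // cv::Mat rvec, tvec;
    // cv::solvePnP(boardPoints, imagePoints, K, dist, rvec, tvec);
    // cv::Mat R;
    // cv::Rodrigues(rvec, R);            // rotation vector -> 3x3 rotation matrix
    // R_target2cam.push_back(R);
    // t_target2cam.push_back(tvec);

    if (R_gripper2base.size() < 3) return 0;  // need at least three poses

    cv::Mat R_cam2gripper, t_cam2gripper;
    cv::calibrateHandEye(R_gripper2base, t_gripper2base,
                         R_target2cam, t_target2cam,
                         R_cam2gripper, t_cam2gripper,
                         cv::CALIB_HAND_EYE_TSAI);
    return 0;
}
Note that cv::calibrateHandEye is documented for the eye-in-hand case; for a fixed camera (eye-to-hand), the usual approach is to pass base-to-gripper transforms (the inverses of gripper-to-base) as the first two arguments.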

Calculating area with RGeo and Geojson

I have a multi-polygon defined in geojson. I am trying to calculate its area.
When I run what seems like the obvious code path, I get very confusing results. I suspect I am using some element incorrectly.
boundaries = {...valid geojson...}
field_feature_collection = RGeo::GeoJSON.decode(boundaries, geo_factory: RGeo::Geographic.simple_mercator_factory)
first_feature = field_feature_collection[0]
first_feature.geometry.area
# => 1034773.6727743163
I know the area of my feature is ~602982 square meters. I cannot figure out how to reconcile that with the result above.
I suspect I am missing some obvious projection.
Anyone have an idea on what's wrong?
This is most likely due to using the simple_mercator_factory as your geo_factory in the GeoJSON decoder. The simple_mercator_factory parses geometries from lon/lat coordinates but performs all of its calculations (length, area, etc.) in the Mercator projection (EPSG:3857) of the geometry.
The Mercator projection gets more distorted the further you get from the equator: the linear scale factor grows as 1/cos(latitude), so areas are inflated by roughly 1/cos²(latitude), a factor that reaches 2 at about 45° of latitude. That's why the area you're getting is about double what you expect.
Depending on the scope of your project, it may be more appropriate to use a different projection, especially if area is a key metric for you. If you choose a different projection, the rgeo-proj4 gem has recently been updated to work with newer versions of PROJ, so you can create the appropriate transformations in your app.
Another option is RGeo::Geographic.spherical_factory, which uses a spherical approximation of the Earth and will give you more realistic areas than the Mercator projection, but it is slower than the GEOS-backed Cartesian factories (which is what the simple_mercator_factory uses under the hood).

Calculating matrices on CPU takes up most of the frame time

I am writing a simple engine for a simple game. So far I enjoy my little hobby project, but the game has grown and now has roughly 800 game objects at a time in a scene.
Every object, just like in Unity, has a transform component that calculates its transformation matrix when the component is initialized. I started to notice that with 800 objects it takes 5.4 milliseconds just to update each matrix (for example, if every object has moved), without any additional components or anything else.
I use the GLKit math library, which for some reason is faster than the native simd types; using simd types triples the calculation time.
Here is the piece of code that runs it:
let Translation : GLKMatrix4 = GLKMatrix4MakeTranslation(position.x, position.y, position.z)
let Scale : GLKMatrix4 = GLKMatrix4MakeScale(scale.x, scale.y, scale.z)
let Rotation : GLKMatrix4 = GLKMatrix4MakeRotationFromEulerVector(rotation)
// Produce the model matrix: scale first, then rotation, then translation
let SRT = GLKMatrix4Multiply(Translation, GLKMatrix4Multiply(Rotation, Scale))
Question: I am looking for a way to optimize this so I can use more game objects and utilize more components on my objects.
There could be multiple bottlenecks in your program.
Optimise your frame dependencies to avoid stalls as much as possible, e.g. by precomputing frame data on the CPU. This is a good resource to learn about this technique.
Make sure that all matrices are stored in one MTLBuffer that is indexed from your vertex stage (see the sketch after this list).
On Apple silicon and iOS, use MTLResourceStorageModeShared.
If you really want to scale to tens of thousands of objects, compute your matrices in a compute shader and store them in an MTLBuffer. Then use indirect rendering to issue your draw calls.
In general, learn about AZDO (Approaching Zero Driver Overhead).
Learn about compute shaders: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/performing_calculations_on_a_gpu
Learn about indirect rendering: https://developer.apple.com/documentation/metal/indirect_command_buffers
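A minimal Metal Shading Language sketch of the "one MTLBuffer indexed from the vertex stage" idea (the buffer indices, struct layout, and names are assumptions, not the poster's code): each instance fetches its model matrix by instance ID, so no per-object uniform binds are needed.
#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    float3 position [[attribute(0)]];
};

struct VertexOut {
    float4 position [[position]];
};

vertex VertexOut vertex_main(VertexIn in [[stage_in]],
                             constant float4x4 *modelMatrices [[buffer(1)]],   // one matrix per object
                             constant float4x4 &viewProjection [[buffer(2)]],  // shared per frame
                             uint instanceID [[instance_id]])
{
    VertexOut out;
    // Index the shared matrix buffer by instance ID instead of binding per object
    out.position = viewProjection * modelMatrices[instanceID] * float4(in.position, 1.0);
    return out;
}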

Kinect V2 depth in metres - Processing 3.x

I am using Processing 3.3.6 with the openkinect library (link below). I have a Kinect V2 sensor, and as given in the examples in the below link, I am getting the depth values from a depth[] array.
Openkinect Library for Processing
The link above gives the formula for converting the raw depth value to a real-world depth value in metres.
depthInMeters = 1.0 / (rawDepth * -0.0030711016 + 3.3309495161);
This is adapted from here: Depth in meters calculation
I am getting values ranging from 0 to 4500, and after applying the formula from the above references, the converted values in metres are not accurate; they are off by about 70 m. So, is there any other way or method to convert the depth to metres? Should I only use an official development environment like Visual Studio (C++, C#) with the SDK to calculate depth?
Or can open-source tools like Processing be used to capture the values, albeit with a different approach? Help or guidance would be appreciated, as this is a completely new area for me.
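A likely explanation, offered as an assumption rather than a confirmed fact: that formula was derived for the original Kinect's 11-bit raw disparity values (0-2047), while Kinect V2 pipelines typically report depth directly in millimetres, which matches the 0-4500 range. Under that assumption the conversion is a simple division; here is a minimal C++ sketch (the function name and sample value are hypothetical):
#include <cstdio>

// Assumes the raw reading is already a depth in millimetres, as the
// 0-4500 range reported by a Kinect V2 suggests.
float kinectV2DepthToMeters(int rawDepth) {
    return rawDepth / 1000.0f;
}

int main() {
    int rawDepth = 2345;  // hypothetical sample reading
    std::printf("%.3f m\n", kinectV2DepthToMeters(rawDepth));  // prints 2.345 m
    return 0;
}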

OpenCV Multilevel B-Spline Approximation

Hi (sorry for my English). I'm working on a project for university in which I need to use the MBA (Multilevel B-Spline Approximation) algorithm to get some control points from an image, to use in other operations.
I've been reading a lot of papers about this algorithm, and I think I understand it, but I can't write the code.
The idea is: read an image, process the image (OpenCV), then get the control points of the image and use those points.
So the problem here is:
The algorithm uses a set of points {(x, y, z)}; this set of points is approximated by a surface generated from the control points obtained from MBA. The set of points {(x, y, z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format. How can I transform this format into an ordinary array so I can simply access and manipulate the data? (A sketch of this step follows the edit below.)
Here are some references with explanations of the method:
(Paper) REGULARIZED MULTILEVEL B-SPLINE REGISTRATION
(Paper) Scattered Data Interpolation with Multilevel B-splines
(Matlab) MBA
If someone can help, maybe with a guideline, an idea, or anything, it will be appreciated.
Thanks in advance.
EDIT: I finally wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to work with the matrices for the algorithm.
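Since the edit mentions the C++/OpenCV route, here is a minimal sketch of the cv::Mat-to-point-set step asked about above (the struct, function, and file names are just for illustration): it treats each pixel's intensity as the z value of a scattered data point.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

struct Point3 { double x, y, z; };

// Builds the scattered data set {(x, y, z)} for MBA from a grayscale image:
// pixel column -> x, pixel row -> y, intensity -> z.
std::vector<Point3> imageToPoints(const cv::Mat& gray) {
    std::vector<Point3> points;
    points.reserve(static_cast<size_t>(gray.rows) * gray.cols);
    for (int row = 0; row < gray.rows; ++row) {
        const uchar* p = gray.ptr<uchar>(row);  // direct access to the raw row data
        for (int col = 0; col < gray.cols; ++col) {
            points.push_back({static_cast<double>(col),
                              static_cast<double>(row),
                              static_cast<double>(p[col])});
        }
    }
    return points;
}

int main() {
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // hypothetical file
    if (gray.empty()) return 1;
    std::vector<Point3> data = imageToPoints(gray);
    // ... feed `data` to the MBA control-lattice computation
    return 0;
}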
