Why doesn't Drake support anisotropic scaling of meshes?

I have a mesh file that I would like to repeat at different scales to build up a visualization of my model. When I try to scale the mesh by different amounts along each axis in the URDF, I get the following error when I parse: "Drake meshes only support isotropic scaling. Therefore all three scaling factors must be exactly equal."
The parsing code points the finger at geometry::Mesh only having a single scaling value. Is there a reason why Mesh only supports isotropic scaling or is this just something that hasn't been implemented yet?
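One workaround, while the parser only accepts a single scale factor, is to bake the non-uniform scale into the mesh file itself and then reference the pre-scaled mesh from the URDF with a uniform scale. Below is a minimal sketch using the trimesh package; the file names and the scale factors are made-up examples, not anything from Drake itself.

```python
# Hypothetical workaround: bake an anisotropic scale into the mesh file,
# then reference the pre-scaled mesh from the URDF with scale="1 1 1".
# Assumes the `trimesh` package and an input file named "part.obj".
import numpy as np
import trimesh

mesh = trimesh.load("part.obj")

# Non-uniform scale factors along x, y, z (example values), as a 4x4
# homogeneous transform.
S = np.diag([2.0, 1.0, 0.5, 1.0])
mesh.apply_transform(S)

# Export a new file per scale variant and point the URDF at it.
mesh.export("part_scaled.obj")
```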

Related

Point Cloud Stitching and Processing

I have point clouds from two RealSense D415 cameras, which are mounted so that their fields of view overlap. I am able to stitch the output point clouds in real time into a single, larger-FOV point cloud by using PCL's ICP to find the transform and transforming one point cloud to match the other. Now I would like to detect planes in this output point cloud and then detect people with the help of the detected planes and the ground plane. I have implemented this with the PCL libraries as well.
Now issues arise in two cases:
(1) The final stitched output is unorganized, which means many PCL functions cannot use the point cloud. To overcome this, say I resize my point cloud to match the final dimensions with a height other than 1; then I run into (2).
(2) When I pass the resized point cloud to the normal-estimation algorithm I am using, I get the error "Input not from a projective device, using only XXXX points" (XXXX is a very small fraction of the total number of points in the point cloud). Every other available normal-estimation algorithm runs far too slowly to be usable in a real-time application.
Any ideas how to proceed with this? Happy to provide more information.
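For reference, a minimal sketch of one way around issue (2): use a neighborhood-based (k-d-tree) normal estimator, which works on unorganized clouds, rather than the organized/"projective" one, and downsample first to keep it fast. The sketch below uses Open3D as a stand-in rather than the asker's PCL code (in PCL the same idea corresponds to a KdTree-based normal estimator instead of the organized-cloud one); the file name and radii are placeholders.

```python
# Sketch: normals and plane segmentation on an unorganized, stitched cloud.
# Uses Open3D as a stand-in for PCL; "stitched.pcd" is a placeholder name.
import open3d as o3d

merged = o3d.io.read_point_cloud("stitched.pcd")      # unorganized stitched cloud
merged = merged.voxel_down_sample(voxel_size=0.01)    # thin the cloud for speed

# k-d-tree neighborhoods instead of organized-image neighborhoods.
merged.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Ground/plane extraction then works directly on the downsampled cloud.
plane_model, inliers = merged.segment_plane(
    distance_threshold=0.02, ransac_n=3, num_iterations=200)
```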

Finding displacement between two camera frames

I'm currently working on a visual odometry project. So far I have implemented everything up to the essential-matrix decomposition stage, but the resulting translation vector is normalized (unit length), so I cannot plot the actual movement.
How can I compute the displacement at some known scale? I have seen suggestions to use a planar homography to compute the absolute translation, but I don't see how that applies, since the outdoor environment is not simply planar. At the very least, if I treat the ground as planar, how do I obtain the translation from it? I've seen a suggestion here. Is it possible to use that approach to get the displacement between two frames?
What you are referring to is called registration, and it is a vast field. There are methods that estimate a single linear transformation across the entire image and per-pixel methods (the two ends of the spectrum). Naturally, per-pixel methods are typically far slower and suffer from many local errors.
Typically two frames have very little transformation between them, and a simple homography will do to find the general scaling between them, especially if you are talking about aerial photos. If your case is very far from planar, then you may want to use something closer to a pixel-wise method, for example spline-based registration: https://www.mathworks.com/matlabcentral/fileexchange/20057-b-spline-grid--image-and-point-based-registration
You cannot recover scale, generally speaking, unless you can recognize one or more objects of known physical size in the scene.
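To make the up-to-scale point concrete, here is a small self-contained sketch (synthetic data, not the asker's pipeline) using OpenCV's findEssentialMat/recoverPose: the recovered translation always has unit norm, and the metric magnitude has to be supplied from outside the two-view geometry. The 0.25 m displacement below is an assumed example value.

```python
import cv2
import numpy as np

# Synthetic data so the sketch runs stand-alone: random 3D points seen by
# two cameras separated by a known 0.25 m translation along x.
rng = np.random.default_rng(0)
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))

rvec = np.zeros(3)
t0 = np.zeros(3)
t1 = np.array([-0.25, 0.0, 0.0])   # second camera moved +0.25 m along x
dist = np.zeros(5)

pts1, _ = cv2.projectPoints(pts3d, rvec, t0, K, dist)
pts2, _ = cv2.projectPoints(pts3d, rvec, t1, K, dist)
pts1, pts2 = pts1.reshape(-1, 2), pts2.reshape(-1, 2)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print(np.linalg.norm(t))   # ~1.0: direction is recovered, magnitude is not

# The true displacement (0.25 m here) must come from an external source,
# e.g. a known object size, stereo baseline, wheel odometry, or GPS.
t_metric = 0.25 * t
```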

How to match orientation and scale of two different image of the same object in OpenCV?

I have two images of printed circuit boards (PCBs), both showing the same PCB. The differences between them are lighting, scale, and orientation (because I take the PCB images with my phone camera).
Now I want to use one image of the PCB to check whether all the components of the circuit are assembled on another, identical PCB.
Is there a convenient way to check the differences between two images of two identical PCBs?
Btw, I can add some marks on the PCB so that I can correct the orientation and scale of the image in OpenCV.
PCB = printed circuit board, right?!?
You could probably compute a projective transformation (homography) between matched points in both images. This transformation can be used to align planes (like your PCBs) and accounts for scale, rotation, shear, and projective changes between your images.
It's a very simple method:
Select at least 4 point correspondences and solve a system of linear equations. Take a look at the answer to this question on Math SE, which explains exactly that.
This OpenCV example uses (automatic) feature matching to find corresponding image points and then computes a homography.
The derivation of this transformation, which is also interesting, can be found in every computer vision textbook, e.g. the standard Hartley and Zisserman "Multiple View Geometry" or Ma's "An Invitation to 3-D Vision".
EDIT:
This method will not remove specular reflections or other intensity differences.
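A rough sketch of the pipeline described above: automatic ORB feature matching, a RANSAC homography, and warping one PCB photo into the frame of the other so the two can be compared pixel by pixel. The file names and the feature/inlier parameters are placeholders, and as noted in the EDIT, the final difference image still contains lighting differences.

```python
import cv2
import numpy as np

ref = cv2.imread("pcb_reference.jpg", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("pcb_test.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two photos.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(test, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# Homography mapping the test image onto the reference image (RANSAC).
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(test, H, (ref.shape[1], ref.shape[0]))

# After alignment the boards can be compared directly; lighting differences
# are not compensated here.
diff = cv2.absdiff(ref, aligned)
```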

OpenGL ES 2.0 Vertex Transformation Algorithms

I'm developing an image warping iOS app with OpenGL ES 2.0.
I have a good grasp on the setup, the pipeline, etc., and am now moving along to the math.
Since my experience with image warping is nil, I'm reaching out for some algorithm suggestions.
Currently, I'm placing the initial vertices in a grid-like fashion, dividing the image into equal squares. Then, I place an additional vertex in the middle of each of those squares. When I draw the indices, each square contains four triangles in the shape of an X. See the image below:
After playing with Photoshop a little, I noticed Adobe uses a slightly more complicated algorithm for their Puppet Warp, but a much simpler algorithm for their standard warp. Which do you think is best for me to apply here, or is it personal preference?
Secondly, when I move a vertex, I'd like to apply a weighted transformation to all the other vertices to smooth out the edges (instead of what I have below, where only the selected vertex is transformed). What sort of algorithm should I apply here?
As each vertex is processed independently by the vertex shader, it is not easy to have vertexes influence each other's positions. However, because there are not that many vertexes it should be fine to do the work on the CPU and dynamically update your vertex attributes per frame.
Since what you are looking for is for your surface to act like a rubber sheet as parts of it are pulled, how about going ahead and implementing a dynamic simulation of a rubber sheet? There are plenty of good articles on cloth simulation in full 3D such as Jeff Lander's. Your application could be a simplification of these techniques. I have previously implemented a simulation like this in 3D. I required a force attracting my generated vertexes to their original grid locations. You could have a similar force attracting vertexes to the pixels at which they are generated before the simulation is begun. This would make them spring back to their default state when left alone and would progressively reduce the influence of your dragging at more distant vertexes.

Triangulation of Rectified Image Points in Multiple Views

I am working with a set of calibrated images that form a ring around a foreground object. I used Fusiello's method (1) to rectify adjacent pairs of images, and then I performed disparity estimation.
When I take the matched points from a stereo pair and triangulate them, it forms an accurate point cloud. Unfortunately, when I triangulate the points from another stereo image pair, this point cloud never aligns correctly with the original cloud.
Should calibrated, rectified images' point clouds merge together automatically?
Thanks in advance for any help you can offer.
This might be due to the accuracy of calibration - both intrinsic (i.e. the camera model and how it handles distortion) and extrinsic (i.e. the camera pose in real space). Together, of course, these dictate the ultimate accuracy of your re-projection.
Do you have a measure of error for the camera calibration - in terms of mean-squared re-projection error?
Cumulative error is often noticeable, in my experience, if you simply iterate over subsequent images. Some form of global optimisation often needs to be performed first to correct the positions of all the camera poses.
The accuracy of your disparity estimation is also a factor - not only in terms of the algorithm you are using, but also in relation to the stereo baseline, how it relates to the size/nature of the object in question (how concave/convex it is), how many samples of the images you are taking, and the quality of those images (exposure/depth-of-field/etc.).
Fundamentally, just how "off" are your point clouds? Are they close to being aligned (you could do a bit of ICP before triangulation...)? Are they closer in the "centre" of the re-projection? Are they worse for projections taken from images on opposite sides of the object?
Remember as well that (due to the discrete sampling) you shouldn't expect points to ever be re-projected exactly on top of one another. Some form of binning operation usually occurs during the triangulation pipeline to handle this (hence most of the research work around visual hull -> voxels -> marching cubes -> triangulated surface...).
Have you checked out MeshLab BTW?
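As a rough illustration of the "do a bit of ICP" suggestion, here is a sketch using Open3D as a stand-in: it aligns the cloud from the second stereo pair to the first and reports the residual, which gives a number for how "off" the clouds are before merging. The file names and the correspondence distance are placeholders.

```python
import open3d as o3d

cloud_ref = o3d.io.read_point_cloud("pair01_cloud.ply")
cloud_new = o3d.io.read_point_cloud("pair02_cloud.ply")

# Point-to-point ICP, starting from the identity as the initial guess.
result = o3d.pipelines.registration.registration_icp(
    cloud_new, cloud_ref, max_correspondence_distance=0.02)

print("fitness:", result.fitness)           # fraction of matched points
print("inlier RMSE:", result.inlier_rmse)   # residual alignment error

# Bring the clouds into one frame and merge them for inspection (e.g. MeshLab).
cloud_new.transform(result.transformation)
merged = cloud_ref + cloud_new
o3d.io.write_point_cloud("merged.ply", merged)
```

If the residual stays large even after ICP, that points back at the calibration or disparity issues discussed above rather than a simple misalignment.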
