Implementation like Kinect hierarchical rotation - OpenCV

I get a data stream with the 3D positions (in a fixed world coordinate system) of the 20 joints of a human skeleton.
I want to use the joint data to drive a human model with fixed bones, like in the demo video.
In Kinect SDK v1.8, I could get each bone's local rotation via NUI_SKELETON_BONE_ORIENTATION.hierarchicalRotation.
I want to implement a similar function, but the Kinect SDK isn't open source.
In OpenNI, I've found that the function xnGetSkeletonJointOrientation can get a joint's rotation in a similar way, but I haven't found its implementation. I don't know where I went wrong.
Any idea is appreciated. Thanks!
EDIT
I have found a similar question.
Here is the code he finally used.
Point3d Controller::calRelativeToParent(int parentID, Point3d point, int frameID) {
    if (parentID == 0) {
        QUATERNION temp = calChangeAxis(-1, parentID, frameID);
        return getVect(multiplyTwoQuats(multiplyTwoQuats(temp, getQuat(point)), getConj(temp)));
    } else {
        Point3d ref = calRelativeToParent(originalRelativePointMap[parentID].parentID, point, frameID);
        QUATERNION temp = calChangeAxis(originalRelativePointMap[parentID].parentID, parentID, frameID);
        return getVect(multiplyTwoQuats(multiplyTwoQuats(temp, getQuat(ref)), getConj(temp)));
    }
}
QUATERNION Controller::calChangeAxis(int parentID, int qtcId, int frameID) { // qtcId = id of the orientation to be changed
    if (parentID == -1) {
        QUATERNION out = multiplyTwoQuats(quatOrigin.toChange, originalRelativePointMap[qtcId].orientation);
        return out;
    } else {
        //QUATERNION temp = calChangeAxis(originalRelativePointMap[parentID].parentID, qtcId, frameID);
        //return multiplyTwoQuats(finalQuatMap[frameID][parentID].toChange, temp);
        return multiplyTwoQuats(finalQuatMap[frameID][parentID].toChange, originalRelativePointMap[qtcId].orientation);
    }
}
But I still have some questions about it.
What do the variables quatOrigin.toChange and originalRelativePointMap stand for?
And in my opinion, the parameter Point3d point of Controller::calRelativeToParent should be a vector of Euler angles. In that case, how should Controller::calRelativeToParent be called from the main program, given that we only know the root's rotation?

The skeleton class has a Joints member that contains the 3D position data for each tracked joint on the skeleton. I would look at the joint position data directly to drive your model rather than angles. Take one point as your base (head or otherwise), then generate vectors in tree form between pairs of connected skeletal points. Scale those vectors and apply them to your model.
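For illustration, a minimal sketch of that approach, assuming a simple Vec3 type for the streamed joint positions (the types and function names here are placeholders, not part of any SDK):

#include <cmath>

struct Vec3 { double x, y, z; };

// Direction and length of the bone connecting two tracked joints.
Vec3 boneVector(const Vec3& parent, const Vec3& child)
{
    return { child.x - parent.x, child.y - parent.y, child.z - parent.z };
}

// Keep the model's own bone length, take only the direction from the sensor.
Vec3 scaledToModel(const Vec3& bone, double modelBoneLength)
{
    double len = std::sqrt(bone.x * bone.x + bone.y * bone.y + bone.z * bone.z);
    if (len == 0.0) return { 0.0, 0.0, 0.0 };
    double s = modelBoneLength / len;
    return { bone.x * s, bone.y * s, bone.z * s };
}

Starting from the base joint, each model joint is then placed at its parent's position plus the corresponding scaled bone vector, walking the skeleton tree.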

Related

Open Cascade glTF Writer

Open Cascade has a glTF writer in its current development branch - RWGltf_CafWriter.
I am trying to convert STP to glTF using it and got a starting point from this question - Any Open source Libraries to Convert STEP files to glTF file format?
It looks doable, but I am new to Open Cascade technology and have a few questions:
While calculating triangulation for shapes using BRepMesh_IncrementalMesh, it needs linear deflection and angular deflection; what are these and what should their values be?
RWGltf_CafWriter requires TDocStd_Document and TDF_LabelSequence; how do we get these from shapes?
Thank you
While calculating triangulation for shapes using BRepMesh_IncrementalMesh, it needs linear deflection and angular deflection; what are these and what should their values be?
Deflection parameters define the mesh quality. Within a specific domain / algorithm, you probably know in advance the applicable deviation of your geometry (like no more than 1 mm). However, in the context of visualization of an arbitrary CAD model, linear deflection is usually defined relative to the bounding box of the document.
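For illustration, a minimal sketch of deriving linear deflection from a shape's bounding box (the 0.001 coefficient is an arbitrary assumption; Prs3d::GetDeflection used in the code further below does essentially this with the viewer's defaults):

#include <Bnd_Box.hxx>
#include <BRepBndLib.hxx>
#include <gp_Pnt.hxx>
#include <TopoDS_Shape.hxx>

Standard_Real computeLinearDeflection (const TopoDS_Shape& theShape)
{
  Bnd_Box aBox;
  BRepBndLib::Add (theShape, aBox);
  Standard_Real aXmin = 0.0, aYmin = 0.0, aZmin = 0.0, aXmax = 0.0, aYmax = 0.0, aZmax = 0.0;
  aBox.Get (aXmin, aYmin, aZmin, aXmax, aYmax, aZmax);
  // deflection as a small fraction of the bounding box diagonal
  const Standard_Real aDiag = gp_Pnt (aXmin, aYmin, aZmin).Distance (gp_Pnt (aXmax, aYmax, aZmax));
  return aDiag * 0.001;
}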
RWGltf_CafWriter requires TDocStd_Document and TDF_LabelSequence; how do we get these from shapes?
TDocStd_Document is an XDE document supported by various file format translators - including STEP and glTF. If at that point you have a single TopoDS_Shape from a STEP file, then you probably used the simplified STEP translator STEPControl_Reader. To preserve the structure of the original document, it is better to use STEPCAFControl_Reader, filling in an XDE document.
Within an XDE document, shapes (and not only shapes) are stored as Labels, so a TDF_LabelSequence collection is used to pass around information like the sequence of root shapes (model tree roots in the document), which are called Free Shapes:
// read / create / fill in the document
Handle(TDocStd_Document) theXdeDoc; // created in advance
STEPCAFControl_Reader aStepReader;
if (aStepReader.ReadFile ("myStep.stp") != IFSelect_RetDone) { /* parse error */ }
if (!aStepReader.Transfer (theXdeDoc)) { /* translation error */ }
...
// collect document roots into temporary compound
Handle(XCAFDoc_ShapeTool) aShapeTool = XCAFDoc_DocumentTool::ShapeTool (theXdeDoc->Main());
TDF_LabelSequence aRootLabels;
aShapeTool->GetFreeShapes (aRootLabels);
TopoDS_Compound aCompound;
BRep_Builder aBuildTool;
aBuildTool.MakeCompound (aCompound);
for (TDF_LabelSequence::Iterator aRootIter (aRootLabels); aRootIter.More(); aRootIter.Next())
{
  const TDF_Label& aRootLabel = aRootIter.Value();
  TopoDS_Shape aRootShape;
  if (XCAFDoc_ShapeTool::GetShape (aRootLabel, aRootShape))
  {
    aBuildTool.Add (aCompound, aRootShape);
  }
}
// perform meshing
Handle(Prs3d_Drawer) aDrawer = new Prs3d_Drawer(); // holds visualization defaults
BRepMesh_IncrementalMesh anAlgo;
anAlgo.ChangeParameters().Deflection = Prs3d::GetDeflection (aCompound, aDrawer);
anAlgo.ChangeParameters().Angle = 20.0 * M_PI / 180.0; // 20 degrees
anAlgo.ChangeParameters().InParallel = true;
anAlgo.SetShape (aCompound);
anAlgo.Perform();
...
// write or export the document
TColStd_IndexedDataMapOfStringString aMetadata;
RWGltf_CafWriter aGltfWriter ("exported.glb", true);
// STEP reader translates into mm units by default
aGltfWriter.ChangeCoordinateSystemConverter().SetInputLengthUnit (0.001);
aGltfWriter.ChangeCoordinateSystemConverter().SetInputCoordinateSystem (RWMesh_CoordinateSystem_Zup);
if (!aGltfWriter.Perform (theXdeDoc, aMetadata, Handle(Message_ProgressIndicator)())) { /* export error */ }
In Draw Harness the conversion may look like this (the source code of these commands can be used as a helpful reference for working code using the related OCCT algorithms):
pload XDE OCAF VISUALIZATION MODELING
# read STEP file into XDE document
ReadStep D myStep.stp
# display the document in 3D viewer (will also compute default triangulation)
vinit
XDisplay -dispMode 1 D
vfit
# export XDE document into glTF file
WriteGltf D myGltf.glb

How to create 3d mesh vertices in Gideros

I'm using Lua for the first time, and of course need to check around to learn how to implement certain code.
To create a vertex in Gideros, there's this code:
mesh:setVertex(index, x, y)
However, I would also like to use the z coordinate.
I've been checking around, but haven't found any help. Does anyone know if Gideros has a method for this, or are there any tips and tricks on setting the z coordinates?
First of all, these functions are not provided by Lua itself, but by the Gideros Lua API.
There are no meshes or things like that in native Lua.
Referring to the Gideros Lua API reference manual gives you some valuable hints:
http://docs.giderosmobile.com/reference/gideros/Mesh#Mesh
Mesh can be 2D or 3D; the latter expects an additional Z coordinate in its vertices.
http://docs.giderosmobile.com/reference/gideros/Mesh/new
Mesh.new([is3d])
Parameters:
is3d: (boolean) Specifies that this mesh expects a Z coordinate in its vertex array and is thus a 3D mesh
So in order to create a 3d mesh you have to do something like:
local myMesh = Mesh.new(true)
Although the manual does not say that you can use a z coordinate in setVertex:
http://docs.giderosmobile.com/reference/gideros/Mesh/setVertex
it is very likely that you can.
So let's have a look at Gideros source code:
https://github.com/gideros/gideros/blob/1d4894fb5d39ef6c2375e7e3819cfc836da7672b/luabinding/meshbinder.cpp#L96-L109
int MeshBinder::setVertex(lua_State *L)
{
    Binder binder(L);
    GMesh *mesh = static_cast<GMesh*>(binder.getInstance("Mesh", 1));
    int i = luaL_checkinteger(L, 2) - 1;
    float x = luaL_checknumber(L, 3);
    float y = luaL_checknumber(L, 4);
    float z = luaL_optnumber(L, 5, 0.0);
    mesh->setVertex(i, x, y, z);
    return 0;
}
Here you can see that you can indeed provide a z coordinate and that it will be used.
So
local myMesh = Mesh.new(true)
myMesh:setVertex(1, 100, 20, 40)
should work just fine (note that Lua is case-sensitive, so it is setVertex, not SetVertex).
You could have simply tried that btw. It's free, it doesn't hurt, and it's the best way to learn!

Simple registration algorithm for small sets of 2D points

I am trying to find a simple algorithm for establishing the correspondence between two sets of 2D points (registration). One set contains the template of an object I'd like to find, and the second set mostly contains points that belong to the object of interest, but it can be noisy (missing points as well as additional points that do not belong to the object). Both sets contain roughly 40 points in 2D. The second set is a homography of the first set (translation, rotation and perspective transform).
I am interested in finding an algorithm for registration in order to get the point correspondence. I will be using this information to find the transform between the two sets (all of this in OpenCV).
Can anyone suggest an algorithm, library or small bit of code that could do the job? As I'm dealing with small sets, it does not have to be super optimized. Currently, my approach is a RANSAC-like algorithm:
Choose 4 random points from set 1 and from set 2.
Compute the transform matrix H (using OpenCV's getPerspectiveTransform()).
Warp the 1st set of points using H and test how well they align to the 2nd set of points.
Repeat 1-3 N times and choose the best transform according to some metric (e.g. sum of squares); a sketch of this loop follows below.
Any ideas? Thanks for your input.
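For reference, a minimal sketch of the loop described above, assuming both sets are given as std::vector<cv::Point2f> (this mirrors steps 1-4 literally; picking the sample pairs blindly like this needs many iterations to hit a correct correspondence):

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat ransacHomography(const std::vector<cv::Point2f>& tmpl,
                         const std::vector<cv::Point2f>& scene,
                         int iterations, float inlierDist)
{
    cv::RNG rng;
    cv::Mat bestH;
    int bestScore = -1;
    for (int it = 0; it < iterations; ++it)
    {
        // 1. choose 4 random points from each set
        cv::Point2f src[4], dst[4];
        for (int k = 0; k < 4; ++k)
        {
            src[k] = tmpl[rng.uniform(0, (int)tmpl.size())];
            dst[k] = scene[rng.uniform(0, (int)scene.size())];
        }
        // 2. compute the transform for this sample
        cv::Mat H = cv::getPerspectiveTransform(src, dst);
        // 3. warp the template and count points that land near the 2nd set
        std::vector<cv::Point2f> warped;
        cv::perspectiveTransform(tmpl, warped, H);
        int score = 0;
        for (const cv::Point2f& p : warped)
        {
            for (const cv::Point2f& q : scene)
            {
                float dx = p.x - q.x, dy = p.y - q.y;
                if (dx * dx + dy * dy < inlierDist * inlierDist) { ++score; break; }
            }
        }
        // 4. keep the best transform according to the inlier count
        if (score > bestScore) { bestScore = score; bestH = H; }
    }
    return bestH;
}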
With Python you can use the Open3D library, which is very easy to install in Anaconda. For your purpose ICP should work fine, so we'll use the classical ICP, which minimizes point-to-point distances between closest points in every iteration. Here is the code to register 2 clouds:
import numpy as np
import open3d as o3d

# Parameters:
initial_T = np.identity(4)  # Initial transformation for ICP
# The threshold distance used for searching correspondences
# (closest points between clouds). I'm setting it to 10 cm.
distance = 0.1

# Read your point clouds:
source = o3d.io.read_point_cloud("point_cloud_1.xyz")
target = o3d.io.read_point_cloud("point_cloud_0.xyz")

# Define the type of registration:
type = o3d.pipelines.registration.TransformationEstimationPointToPoint(False)
# "False" means rigid transformation, scale = 1

# Define the number of iterations (I'll use 100):
iterations = o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=100)

# Do the registration:
result = o3d.pipelines.registration.registration_icp(source, target, distance, initial_T, type, iterations)
result is an object holding four things: the transformation T (4x4), two metrics (RMSE and fitness), and the set of correspondences.
To access the transformation: T = result.transformation
I used it a lot with 3D clouds obtained from Terrestrial Laser Scanners (TLS) and from robots (Velodyne LiDAR).
With MATLAB:
We'll use the point-to-point ICP again, because your data is 2D. Here is a minimal example with two point clouds randomly generated inside a triangle shape:
% Triangle vertices:
V1 = [-20, 0; -10, 10; 0, 0];
V2 = [-10, 0; 0, 10; 10, 0];
% Create clouds and show pair:
points = 5000;
N1 = criar_nuvem_triangulo(V1, points);
N2 = criar_nuvem_triangulo(V2, points);
pcshowpair(N1, N2)
% Register pair N1->N2 and show:
[T, N1_transformed, RMSE] = pcregistericp(N1, N2, 'Metric', 'pointToPoint', 'MaxIterations', 100);
pcshowpair(N1_transformed, N2)
"criar_nuvem_triangulo" is a function to generate random point clouds inside a triangle:
function [cloud] = criar_nuvem_triangulo(V, N)
% Function which creates 2D point clouds in triangle format using random
% points
% Parameters: V = Triangle vertices (3x2 matrix) | N = Number of points
t = sqrt(rand(N, 1));
s = rand(N, 1);
P = (1 - t) * V(1, :) + bsxfun(@times, ((1 - s) * V(2, :) + s * V(3, :)), t);
points = [P, zeros(N, 1)];
cloud = pointCloud(points);
end
You may just use cv::findHomography. It is a RANSAC-based approach around cv::getPerspectiveTransform.
auto H = cv::findHomography(srcPoints, dstPoints, CV_RANSAC,3);
Where 3 is the reprojection threshold.
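Note that findHomography expects the two vectors as candidate pairs (same index = same candidate match). If you pass the optional mask output, it also tells you which pairs survived RANSAC as inliers, which is exactly the point correspondence; a sketch:

#include <iostream>

std::vector<uchar> inlierMask;
cv::Mat H = cv::findHomography(srcPoints, dstPoints, CV_RANSAC, 3, inlierMask);
for (size_t i = 0; i < inlierMask.size(); ++i)
{
    if (inlierMask[i]) // srcPoints[i] <-> dstPoints[i] is an inlier pair
        std::cout << srcPoints[i] << " -> " << dstPoints[i] << std::endl;
}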
One traditional approach to this problem is point-set registration, which applies when you don't have matching-pair information. Point-set registration is similar to the method you are describing. You can find a MATLAB implementation here.
Thanks

How to access the nearest neighbours found by a cv::flann knnSearch?

Or is it even possible with FLANN? I'm not the most experienced coder, and I might just be overlooking something really basic (C++, OpenCV 2.4.3).
The problem :
I have two point clouds and want to calculate a displacement map. I am trying to use the FLANN library to get the nearest neighbour to a point in the first cloud from the points of the second cloud, and to use those points and the distances to calculate the displacement vector(s).
What I got so far is this:
int nn = 1;
cv::Mat MyIndex(data1.size(), 3, CV_64FC1);
cv::Mat MyQuery(data2.size(), 3, CV_64FC1);
cv::Mat indices(data2.size(), 1, CV_32SC1);
cv::Mat distances(data2.size(), 3, CV_64FC1);
cv::flann::Index_<double> NN_Index(MyIndex, cvflann::KDTreeIndexParams(4));
NN_Index.knnSearch(MyQuery, indices, distances, nn, cvflann::SearchParams(32));
It works as far as I can tell: I get the distances, the query points, and the indices. But how do I get the actual points that were matched to my query points from the indices?
I looked through flann.hpp but couldn't really find any hints. I messed around a bit with MyIndex, NN_Index and the indices, but didn't get any useful results.
Try
for (int queryIdx = 0; queryIdx < MyQuery.rows; ++queryIdx) {
    int dbIdx = indices.at<int>(queryIdx, 0);
    std::cout << "Query Idx:" << queryIdx << " matched to " << "Database Idx:" << dbIdx << std::endl;
}
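To get the matched point itself, use the returned index as a row into the matrix the index was built from; for the displacement map that could look like this (a sketch based on the variables above):

for (int queryIdx = 0; queryIdx < MyQuery.rows; ++queryIdx)
{
    int dbIdx = indices.at<int>(queryIdx, 0);
    cv::Mat matchedPoint = MyIndex.row(dbIdx);                    // nearest neighbour from the indexed cloud
    cv::Mat displacement = MyQuery.row(queryIdx) - matchedPoint;  // displacement vector for this point
}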

From Latitude and Longitude to absolute X, Y coordinates

I have a series of lat/long coords. I need to transform them into X, Y coordinates. I've read about UTM, but the problem is that UTM coordinates are relative to a single zone.
For example, these two UTM coordinates have the same easting (x) and northing (y) but different zone codes, so each points to a completely different location (one in Spain and one in Italy):
UTM: 33T 292625m E 4641696m N
UTM: 30U 292625m E 4641696m N
I need a method to automatically transform those relative coords into absolute X, Y coordinates. Ideas?
Does it have to be UTM? If not, you can also use Mercator, which is a simpler projection that doesn't rely on zones.
See, for example, the Bing Maps system.
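For illustration, the spherical Mercator math is only a few lines (a sketch; production code should clamp latitudes away from the poles and choose an ellipsoidal treatment if it needs one):

#include <cmath>

const double kEarthRadius = 6378137.0; // WGS84 equatorial radius in metres

// Project latitude/longitude (degrees) to spherical Mercator x/y (metres).
void toMercator(double latDeg, double lonDeg, double& x, double& y)
{
    const double d2r = M_PI / 180.0;
    x = kEarthRadius * lonDeg * d2r;
    y = kEarthRadius * std::log(std::tan(M_PI / 4.0 + latDeg * d2r / 2.0));
}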
You should be able to use the ProjNET library.
What you need is to find the WKT (well-known text) that defines your projections; then you should be able to convert between them.
var utm33NCoordinateSystem = CoordinateSystemWktReader.Parse("WKT for correct utm zone") as IProjectedCoordinateSystem;
var wgs84CoordinateSystem = CoordinateSystemWktReader.Parse(MappingTransforms.WGS84) as IGeographicCoordinateSystem;
var ctfac = new CoordinateTransformationFactory();
_utmToWgsTransformation = ctfac.CreateFromCoordinateSystems(utm33NCoordinateSystem, wgs84CoordinateSystem);
double[] transform = _utmToWgsTransformation.MathTransform.Transform(new double[] { y, x });
Note: you have to find the correct WKTs, but those can be found on the project site.
Also, you may have to flip the order of the inputs, depending on the transforms.
If you want to compute the bearing and distance between two points, you can use the polar (POL) function of a scientific calculator. Press the POL button, then open a bracket:
For the distance: POL(N2-N1, E2-E1)
For the bearing: POL(N2-N1, E2-E1), then Rcl tan to recall the stored angle
Now you have the correct bearing.
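In code, what the calculator's POL() does amounts to this sketch:

#include <cmath>

// Distance and bearing between two points given as (easting, northing).
void polar(double e1, double n1, double e2, double n2,
           double& distance, double& bearingDeg)
{
    const double dE = e2 - e1, dN = n2 - n1;
    distance = std::hypot(dE, dN);
    bearingDeg = std::atan2(dE, dN) * 180.0 / M_PI; // clockwise from north
    if (bearingDeg < 0.0) bearingDeg += 360.0;
}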
