I am new to game programming as well as to DirectX and SlimDX. Currently I am converting MDX code to SlimDX, and I got stuck on an issue I have been unable to resolve from the SlimDX documentation. I am trying to "Unproject" a vector; the code is as follows:
MDX code:
Vector3 p1 = new Vector3(x, y, device.Viewport.MinZ);
p1.Unproject(device.Viewport, device.Transform.Projection, device.Transform.View, device.Transform.World);
SlimDX code which I converted is as follows:
Vector3 p1 = new Vector3(x, y, device.Viewport.MinZ);
p1 = Vector3.Unproject(p1, device.Viewport.X, device.Viewport.Y, device.Viewport.Width,
device.Viewport.Height, device.Viewport.MinZ, device.Viewport.MaxZ,
device.GetTransform(TransformState.World));
Using the above SlimDX code I am unable to get correct results. Please advise me on how to unproject the vector in SlimDX.
I'm not familiar with SlimDX, but in the MDX case you're passing the world, view, and projection matrices to the Unproject function (as you should), while in your second example you're only passing the world matrix. Assuming the Vector3.Unproject function takes only one matrix as an argument, try passing the product World * View * Projection instead.
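The single-matrix overload needs the combined matrix because unprojection inverts the whole pipeline: undo the viewport mapping back to normalized device coordinates, then multiply by the inverse of world·view·projection. Here is a pure-Python sketch of that math with toy affine matrices (this is an illustration, not the SlimDX API; the projection is a simple scale so the inverses are easy to write down):

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def scale(s):
    return [[s, 0.0, 0.0, 0.0],
            [0.0, s, 0.0, 0.0],
            [0.0, 0.0, s, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def transform(m, p):
    v = (p[0], p[1], p[2], 1.0)
    r = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return (r[0] / r[3], r[1] / r[3], r[2] / r[3])

# Toy stand-ins for the world, view, and projection matrices.
world, view, proj = translate(1.0, 2.0, 3.0), translate(-0.5, 0.0, 0.25), scale(0.1)
wvp = mat_mul(proj, mat_mul(view, world))               # combined matrix (column-vector order)
wvp_inv = mat_mul(mat_mul(translate(-1.0, -2.0, -3.0),  # inverse: W^-1 * V^-1 * P^-1
                          translate(0.5, 0.0, -0.25)),
                  scale(10.0))

def project(p, vp):
    X, Y, W, H, min_z, max_z = vp
    nx, ny, nz = transform(wvp, p)                      # to normalized device coords
    return (X + (nx + 1.0) * 0.5 * W,                   # viewport mapping (D3D-style y flip)
            Y + (1.0 - ny) * 0.5 * H,
            min_z + nz * (max_z - min_z))

def unproject(s, vp):
    X, Y, W, H, min_z, max_z = vp
    nx = (s[0] - X) / (0.5 * W) - 1.0                   # undo the viewport mapping
    ny = 1.0 - (s[1] - Y) / (0.5 * H)
    nz = (s[2] - min_z) / (max_z - min_z)
    return transform(wvp_inv, (nx, ny, nz))             # then invert the combined matrix

viewport = (0.0, 0.0, 800.0, 600.0, 0.0, 1.0)           # X, Y, width, height, minZ, maxZ
world_point = (0.3, -0.2, 0.5)
screen = project(world_point, viewport)
recovered = unproject(screen, viewport)                 # round-trips back to world_point
```

The round trip only closes when the unprojection uses the full combined matrix; using the world matrix alone (as in the question) leaves the view and projection steps un-inverted.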
I am trying to get the Coriolis matrix for my robot (need the matrix explicitly for the controller) based on the following approach which I have found online:
plant_.CalcBiasTerm(*context, &Cv_);
auto jac = autoDiffToGradientMatrix(Cv_);
C = 0.5*jac.rightCols(n_v_);
where Cv_, plant_, and context are AutoDiffXd objects and n_v_ is the number of generalized velocities. Basically, I have a 62-joint robot loaded from URDF into Drake as a free body (floating-base system). After finalizing the robot I call the DiagramBuilder.Build() method and then CreateDefaultContext() in order to get the context. Next, I try to set up the AutoDiff environment like this:
plant_autodiff = drake::systems::System<double>::ToAutoDiffXd(*multibody_plant);
context_autodiff = plant_autodiff->CreateDefaultContext();
context_autodiff->SetTimeStateAndParametersFrom(*diagram_context);
The code above is contained in an initialization setup code. In another method, which is called on update events, the following lines of code are written:
drake::AutoDiffVecXd c_auto_diff_ = drake::AutoDiffVecXd::Zero(62);
plant_autodiff->CalcBiasTerm(*context_autodiff, &c_auto_diff_);
MatrixXd jac = drake::math::autoDiffToGradientMatrix(c_auto_diff_);
auto C = 0.5*jac.rightCols(jac.size());
This setup compiles and runs, however the size of the jac matrix is 0, whereas I would expect 62x62. I am also extracting and then exposing the Coriolis vector, which is 62x1 and seems to be more or less correct. The c_auto_diff_ variable is 62x1 as well, but all the elements are 0.
I am clearly making a mistake, but I do not know where exactly.
Any help is appreciated,
Thank you all,
Robert
You are close. You need to tell the autodiff pipeline what you want to take the derivative with respect to. In this case, I believe you want
auto v = drake::math::initializeAutoDiff(Eigen::VectorXd::Zero(62));
plant_autodiff->SetVelocities(context_autodiff.get(), v);
By calling initializeAutoDiff, you are initializing the autodiff terms to the identity matrix, which is saying that you want to take the derivative with respect to v. Then you should get non-zero derivatives.
Btw - I normally would use
plant_autodiff = multibody_plant->ToAutoDiffXd();
but I guess what you have must work, too!
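The identity seed is the key point above. Here is a toy pure-Python forward-mode AD sketch (not Drake's AutoDiffXd, just the mechanism) showing why an unseeded input yields an all-zero Jacobian, which is exactly the symptom in the question:

```python
class Dual:
    """Forward-mode AD scalar: a value plus a vector of partial derivatives."""
    def __init__(self, val, grad):
        self.val, self.grad = val, list(grad)

    def __add__(self, other):
        return Dual(self.val + other.val,
                    [g1 + g2 for g1, g2 in zip(self.grad, other.grad)])

    def __mul__(self, other):
        if isinstance(other, Dual):  # product rule
            return Dual(self.val * other.val,
                        [self.val * g2 + other.val * g1
                         for g1, g2 in zip(self.grad, other.grad)])
        return Dual(self.val * other, [g * other for g in self.grad])

    __rmul__ = __mul__

def initialize_autodiff(values, seed_identity=True):
    """With the identity seed, the i-th derivative vector is the i-th unit
    vector, i.e. 'differentiate with respect to these inputs'."""
    n = len(values)
    return [Dual(v, [1.0 if (seed_identity and i == j) else 0.0 for j in range(n)])
            for i, v in enumerate(values)]

def bias_term(v):
    # Toy stand-in for CalcBiasTerm: an expression quadratic in the velocities.
    return [v[0] * v[1], 3.0 * v[1] * v[1]]

v = initialize_autodiff([2.0, 5.0])                   # identity seed
jac = [c.grad for c in bias_term(v)]                  # the 2x2 Jacobian, row per output

v0 = initialize_autodiff([2.0, 5.0], seed_identity=False)
jac0 = [c.grad for c in bias_term(v0)]                # all zeros: nothing was seeded
```

In Drake, initializeAutoDiff sets up exactly this identity seed before the values are pushed into the context with SetVelocities.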
So I'm working with a gaming engine called Core, whose scripts are written in Lua, and I'm having trouble writing a script to detect line of sight. I can't seem to find any help for this in Core's API documentation, and as it is a relatively new engine I also can't find anyone else with this issue.
If anyone knows how to detect line of sight in a Core script I would greatly appreciate it.
What I think you need is World.Raycast(), which takes a Vector3 for the start and end point. There is an example here: https://docs.coregames.com/api/hitresult/#examples that gets a HitResult based on a starting position Vector3 and a camera direction:
local rayStart = player:GetViewWorldPosition()
local cameraForward = player:GetViewWorldRotation() * Vector3.FORWARD
local rayEnd = rayStart + cameraForward * 10000
local hitResult = World.Raycast(rayStart, rayEnd)
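A raycast gives you line of sight almost directly: cast from the observer toward the target and check whether anything is hit before the target (World.Raycast returns nil when the ray hits nothing, so a nil result generally means a clear line). The underlying idea, as a language-agnostic Python sketch on a toy 2D grid (not Core's API):

```python
def has_line_of_sight(start, end, blocked, steps=100):
    """Sample points along the segment start->end; line of sight fails if
    any sample lands in a blocked cell. `blocked` is a set of (x, y) cells."""
    (x0, y0), (x1, y1) = start, end
    for i in range(1, steps):
        t = i / steps
        cell = (int(x0 + (x1 - x0) * t), int(y0 + (y1 - y0) * t))
        if cell in blocked:
            return False
    return True

walls = {(2, 2), (2, 3)}
print(has_line_of_sight((0, 2), (5, 2), walls))   # False: blocked by the wall at (2, 2)
print(has_line_of_sight((0, 0), (5, 0), walls))   # True: nothing in the way
```

The engine's raycast does this for you against actual collision geometry, so in a Core script you would compare the HitResult (or its absence) against the target you are checking visibility to.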
Thanks to the list for the WebGL vec4() help! It was fast; I don't know if a Google search (swizzling) would have worked, but maybe.
Another WebGL question; then I should have resources from the list to help me in future WebGL questions.
I guess a good WebGL book would have answered this, although I am reading WebGL Programming Guide by Matsuda and Lea. I am 61 years old and books are how I learned in the past, but I guess online is the way now.
I don't know what m3 is in the following WebGL statement:
matrix = m3.translate(matrix,translation[0],translation[1]);
I know there are Matrix definitions and Matrix4 objects but no help here.
Again, thank you.
The book you quote is gold for learning WebGL the right way! Glad we can help here too. (By the way, please remember to accept the best answer here.)
m3 is an instance of the Matrix4 type you can find in cuon-matrix.js. Every example in the book uses this file for the math.
matrix = m3.translate(matrix,translation[0],translation[1]);
The translate function actually applies a translation on all 3 axes to the matrix instance (m3 in your case):
Matrix4.prototype.translate = function(x, y, z)
Thus the line of code you ask about is wrong: you should not pass matrix as the first parameter. There are only 3 params, the translation amounts on the x, y, and z axes.
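For what it's worth, under the hood a translate call like this just multiplies the current matrix by a 4x4 translation matrix. A small pure-Python sketch of that multiply (illustration only, not cuon-matrix code):

```python
def translation(tx, ty, tz):
    # 4x4 translation matrix (row lists, column-vector convention)
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    v = (p[0], p[1], p[2], 1.0)
    r = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return (r[0] / r[3], r[1] / r[3], r[2] / r[3])

m = translation(1.0, 0.0, 0.0)
m = mat_mul(m, translation(0.0, 2.0, 0.0))   # roughly what m.translate(0, 2, 0) does
print(apply(m, (0.0, 0.0, 0.0)))             # -> (1.0, 2.0, 0.0)
```

So the matrix being translated is the instance the method is called on; it is not an argument.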
I am using ROS-Kinetic. I have a pointcloud of type PointCloud. I have projected the same pointcloud on a plane. I would like to convert the planar pointcloud to an image of type sensor_msgs/Image.
toROSMsg(cloud, image);
is throwing an error:
error: ‘const struct pcl::PointXYZI’ has no member named ‘rgb’
memcpy (pixel, &cloud (x, y).rgb, 3 * sizeof(uint8_t));
Kindly enlighten me in this regard, if possible with a code snippet.
Thanks in advance
If toROSMsg() is complaining that your input cloud does not have an rgb member, try an input cloud of type pcl::PointXYZRGB. This is another point type handled by PCL; you can look at the documentation of PCL point types.
Convert to type pcl::PointXYZRGB with these lines:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudrgb (new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::copyPointCloud(*cloud, *cloudrgb);
Then call your function, dereferencing the pointer:
toROSMsg(*cloudrgb, image);
What you are trying to achieve is a form of 2D voxelization. And I assume that you want to implement some "inverse sensor model" (ISM) as explained by Thrun, right?
This approach is commonly implemented directly inside a mapping algorithm to circumvent the exhaustive calculation of the plain ISM.
Therefore, you'll hardly find an out-of-the-box solution.
Anyway, you could do it in multiple ways, for example:
Use pointcloud_to_laserscan for the 2D projection (but you have that anyway)
Use the ISM algorithm explained in the book
or
Transform the point cloud to an octree
Downsample to a quadtree and convert it to an image
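Whichever route you take, the core of the 2D-voxelization step is just binning the projected points into grid cells. A rough self-contained Python sketch of that step, independent of ROS/PCL (grid size and cell resolution are arbitrary choices here):

```python
def occupancy_image(points, cell=0.5, width=8, height=8):
    """Project 3D points onto the XY plane and rasterize them into a binary
    grid: a crude 2D voxelization. Real pipelines would use PCL or OctoMap."""
    img = [[0] * width for _ in range(height)]
    for x, y, _z in points:                      # z is dropped: planar projection
        col, row = int(x / cell), int(y / cell)  # which cell the point falls in
        if 0 <= col < width and 0 <= row < height:
            img[row][col] = 1
    return img

cloud = [(0.2, 0.1, 1.7), (0.3, 0.4, 0.2), (2.6, 1.1, 0.9)]
img = occupancy_image(cloud)
# two occupied cells: (row 0, col 0) holds the first two points, (row 2, col 5) the third
```

The resulting binary grid can then be packed into a sensor_msgs/Image (or an occupancy grid), and the octree/quadtree options above are essentially hierarchical versions of the same binning.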
I have no idea how to implement this matrix operation efficiently in OpenCV.
I have binary Mat nz(150,600) with 0 and 1 elements.
I have Mat mk(150,600) with double values.
I would like to implement the following Matlab expression:
sk = mk(nz);
That command copies elements of mk to sk only at the locations where nz has 1, then makes sk a row matrix.
How can I implement it in OpenCV efficiently for speed and memory?
You should take a look at Mat::copyTo and Mat::clone.
copyTo makes a copy with an optional mask, whose non-zero elements indicate which matrix elements need to be copied.
mk.copyTo(sk, nz);
And if you really want a row matrix then call sk.reshape() as member sansuiso already suggested. This method ...
creates alternative matrix header for the same data, with different
number of channels and/or different number of rows.
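For reference, here is what the Matlab expression mk(nz) computes, as a tiny Python sketch. Note one difference: copyTo with a mask keeps the matrix shape and zero-fills the unselected entries, whereas Matlab logical indexing compacts the selected elements, as done here:

```python
def masked_row(mk, nz):
    """Keep mk's entries where the mask nz is non-zero, flattened row-wise
    into a single list: the semantics of Matlab's mk(nz) logical indexing."""
    return [v for row_m, row_n in zip(mk, nz)
              for v, keep in zip(row_m, row_n) if keep]

mk = [[1.5, 2.5], [3.5, 4.5]]
nz = [[1, 0], [0, 1]]
print(masked_row(mk, nz))   # -> [1.5, 4.5]
```

So if you need the exact Matlab behavior (only the selected elements, no zeros), compact the masked result rather than just reshaping it.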
bkausbk gave the best answer. However, here is a second way around it:
A = bitwise_and(nz, mk);
If you iterate over A, you can copy the non-zero elements into a std::vector. If you want your output to be a cv::Mat instance then you have to allocate the memory first:
S = countNonZero(A); // size of the final output matrix
Now, fast element access is an actual topic of itself. Google it. Or have a look at opencv/modules/core/src/stat.cpp where countNonZero() is implemented to get some ideas.
There are two steps involved in your task.
First, convert the input matrix to double:
cv::Mat binaryMat; // source matrix, filled somewhere
cv::Mat doubleMat; // target matrix (with doubles)
binaryMat.convertTo(doubleMat, CV_64F); // Perform the conversion
Then, reshape the result as a row matrix (reshape is a method of cv::Mat):
doubleMat = doubleMat.reshape(1, 1);
// Alternatively:
cv::Mat doubleRow = doubleMat.reshape(1, 1);
The reshape operation is efficient in the sense that the data is not copied; only the header changes.
It returns a new matrix header over the same data, so you should not forget to assign its result.