Thanks to the list for the WebGL vec4() help! It was fast; I don't know if a Google search (swizzling) would have worked, but maybe.
Another WebGL question; after this I should have enough resources from the list to help me with future WebGL questions.
I guess a good WebGL book would have answered this, although I am reading the WebGL Programming Guide by Matsuda and Lea. I am 61 years old, and books are how I learned in the past, but I guess online is the way now.
I don't know what m3 is in the following WebGL statement:
matrix = m3.translate(matrix,translation[0],translation[1]);
I know there are Matrix definitions and Matrix4 objects, but they are no help here.
Again, thank you.
The book you quote is gold for learning WebGL the right way! Glad we can help here too. (By the way, please remember to accept the best answer here.)
m3 is an instance of the Matrix4 type that you can find in cuon-matrix.js. Every example in the book uses this file for the math.
matrix = m3.translate(matrix,translation[0],translation[1]);
The translate function applies a translation along the three axes to the matrix instance it is called on (m3 in your case):
Matrix4.prototype.translate = function(x, y, z)
Thus the line of code you ask about is wrong: you should not pass matrix as the first parameter. There are only three parameters, the translation amounts along the x, y and z axes, so the call would look like m3.translate(translation[0], translation[1], 0.0).
Say I have a simple pendulum attached to a quadrotor defined in an sdf file.
When I load this sdf file into a MultibodyTree, the default continuous state vector is of size 4 (quaternion) + 3 (x, y, z) + 1 (joint connecting quadrotor to pendulum) = 8 as indicated by this answer.
By default, the urdf/sdf parser adds the system with a quaternion-based floating base.
My question is, how do I know where this 4 (quaternion) + 3 (x, y, z) frame is attached?
And how do I control where it is attached? Say my equations of motion are written w.r.t. the tip of the pendulum, and I want this 4 + 3 quaternion floating base to be attached to the tip of my pendulum; how would I define that?
I think I understand now. Our SDF parser still has some limitations, and in order to achieve what you want you need to understand them. Right now the "parent"/"child" specification of the links in a <joint> element ends up directly defining the tree relationship between the links. If you make your pendulum body the "parent" and the quadrotor the "child", then MBP will assign the quaternion to the pendulum instead, which I believe is what you want.
When doing that you'll also have to pay attention to <joint>/<pose>; please refer to this tutorial, which includes a nice schematic of joint frames and conventions, for details.
I am not sure I understand what you mean by "where this quaternion-based floating base is attached". If what you are asking is how to set the pose of a body that you know from your sdf/urdf is floating, then the answer is that you can do that with MultibodyPlant::SetFreeBodyPose().
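For the free-body pose part, a minimal C++ sketch (my own illustration, not from this thread; the link name "pendulum_tip" and the pose are placeholders for whatever your SDF defines):

#include <Eigen/Dense>
#include "drake/math/rigid_transform.h"
#include "drake/multibody/plant/multibody_plant.h"

// Set the world pose of the floating body after the plant is finalized
// and a context has been created. Body name and pose are placeholders.
void SetInitialTipPose(const drake::multibody::MultibodyPlant<double>& plant,
                       drake::systems::Context<double>* context) {
  const auto& tip = plant.GetBodyByName("pendulum_tip");  // hypothetical link name
  const drake::math::RigidTransformd X_WB(Eigen::Vector3d(0.0, 0.0, 1.0));  // identity rotation, 1 m up
  plant.SetFreeBodyPose(context, tip, X_WB);
}

Note that SetFreeBodyPose() only works for a body that is actually floating in the tree, which is why the parent/child swap described above matters.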
Hopefully that helps.
I'm new to ML and Kaggle. I was going through the solution of a Kaggle Challenge.
Challenge: https://www.kaggle.com/c/trackml-particle-identification
Solution: https://www.kaggle.com/outrunner/trackml-2-solution-example
While going through the code, I noticed that the author has used only train_1 file (not train_2, 3, …).
I know there is some strategy involved in using only the train_1 file. Can someone please explain why that is? Also, what are the blacklist_training.zip, train_sample.zip, and detectors.zip files used for?
I'm one of the organisers of the challenge. The train_1, 2, 3, ... files are all equivalent. Outrunner probably saw that there was no improvement from using more data.
train_sample.zip is a small dataset, equivalent to train_1, 2, 3..., provided for convenience.
blacklist_training.zip is a list of particles to be ignored due to a small bug in the simulator (not very important).
detectors.zip is the list of the geometrical surfaces where the x y z measurements are made.
David
I am new to the Metal API on iOS. My question is: how do I use a Metal compute function for multiplication? For instance,
let's say we have two float arrays of length 2048, and we want to multiply the corresponding elements together, forming another float array of length 2048,
like this
res[i] = a[i] * b[i];
with a[] and b[] each an array of 2048 floats
and res[] another array of 2048 floats.
The operation needs to be performed 2048 times, once for each element.
Can someone please help me with this?
If possible, I need to do this in Objective-C, but I can read Swift as well.
Thank you in advance.
You should start with a working example and then adapt it to fit your needs. Here is a prefix sum implementation that runs on top of Metal. This is a render implementation as opposed to a compute shader so that it is able to run effectively on the A7 chip.
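For the element-wise multiply itself, the compute kernel is tiny. Here is a minimal sketch in Metal Shading Language (the function name elementwise_multiply and the buffer indices are my own choices, not taken from any of the examples mentioned here):

#include <metal_stdlib>
using namespace metal;

kernel void elementwise_multiply(device const float *a   [[ buffer(0) ]],
                                 device const float *b   [[ buffer(1) ]],
                                 device float       *res [[ buffer(2) ]],
                                 uint gid                 [[ thread_position_in_grid ]])
{
    // One thread handles one element: res[i] = a[i] * b[i].
    // If you dispatch more threads than elements, guard gid against the array length.
    res[gid] = a[gid] * b[gid];
}

On the host side (Objective-C or Swift) you create an MTLComputePipelineState from this function, bind three MTLBuffer objects of 2048 floats each at those indices with a compute command encoder, dispatch 2048 threads (for example 32 threadgroups of 64 threads), and read the result back from the output buffer's contents pointer once the command buffer completes.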
This tutorial: https://machinethink.net/blog/mps-matrix-multiplication/
as well as this one: https://www.youtube.com/watch?v=lSofOJrFsJ4&ut=
really helped me out!
I'm looking for some guidance on the approach I should take to mapping some points with R.
Earlier this year I went off to a forest to map the spatial distribution of some seedlings. I created a grid: every two meters I set down a flag with a tag name, and then I measured the distance from a flag to each seedling, as well as the angle, using a military compass. I chose this method in hopes of getting better accuracy (Garmin GPS units prove useless for this sort of task under canopy cover).
I am really new to spatial distribution work altogether, so I was hoping someone could provide guidance on what R packages I should use.
The first step is to create a grid with my reference points (the flags). The second step is to tell R to use a reference point and my distance and angle measurements to mark the location of a seedling. From there come other things, such as cluster analysis.
The largest R package for analysing point pattern data like yours is spatstat, which has very detailed documentation and an accompanying book.
For more specific help you would need to upload (part of) your data so we can see how it is organised and how you should read it in and convert to standard x,y coordinates.
Full disclosure: I'm a co-author of both the package and the book.
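In the meantime, the distance-and-bearing measurements convert to planar coordinates with basic trigonometry. Assuming each angle is a compass bearing measured clockwise from grid north in degrees, a seedling recorded at distance d and bearing theta from a flag at (x_flag, y_flag) sits at:

x = x_flag + d * sin(theta * pi / 180)
y = y_flag + d * cos(theta * pi / 180)

Once every seedling has an x, y pair like this, the points can be loaded into spatstat (for example as a ppp object) for the cluster analysis.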
I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem arises from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because prepending one matrix to another is not, and is never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized in this way: because I have no control over the calling sequence that the framework will issue (i.e., I must live with an append operation), I can only try to fix things on the side where I prepare the matrix which is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it to meet the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other maneuvers which would let me modify a viewProjection matrix that was designed to be prepended, so that it will turn out right when it is, instead, appended to the Object's World Space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Since I do not know how to enter text involving superscripts, I am not sure I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem, which, to restate, really just arises from the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' yields exactly the same combined raw data as the example from the author of the camera with the desired lens properties, whose implementation prepends rather than appends.
The maneuver I actually need is therefore:
BA = transpose (transpose (A) x transpose (B))
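Applying the Dunn and Parberry identity twice shows why this works:
transpose (transpose (A) x transpose (B)) = transpose (transpose (B)) x transpose (transpose (A)) = B x A
So a product that the framework insists on forming in the opposite order can be recovered by transposing both operands before the multiplication and transposing the combined result afterwards (in my case, as the last step before, or inside, the vertex shader).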
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.