Say I have a simple pendulum attached to a quadrotor defined in an sdf file.
When I load this sdf file into a MultibodyTree, the default continuous state vector is of size 4 (quaternion) + 3 (x, y, z) + 1 (joint connecting quadrotor to pendulum) = 8 as indicated by this answer.
By default, the urdf/sdf parser adds the system with a quaternion-based floating base.
My question is, how do I know where this 4 (quaternion) + 3 (x, y, z) frame is attached?
And how do I control where it is attached? Say my equations of motion are written w.r.t. the tip of the pendulum, and I want this 4 + 3 quaternion floating base to be attached to the tip of my pendulum; how would I define that?
I think I understand now. Our SDF parser still has some limitations, and in order to achieve what you want you need to understand them. Right now the "parent"/"child" specification of the links in a <joint> element ends up directly defining the tree relationship between links. If you make your pendulum body the "parent" and the quadrotor the "child", then MBP will assign the quaternion to the pendulum instead, which I believe is what you want.
When doing that you'll also have to pay attention to <joint>/<pose>; please refer to this tutorial for details, which includes a nice schematic of the joint frames and conventions.
I am not sure I understand what you mean by "where this quaternion-based floating base is attached". If what you are asking is how to set the pose of a body that you know from your sdf/urdf is floating, then the answer is that you can do that with MultibodyPlant::SetFreeBodyPose().
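In pydrake, a minimal sketch of that would be (the file name quadrotor_pendulum.sdf and the body name quadrotor below are placeholders for whatever your sdf uses):

from pydrake.math import RigidTransform, RollPitchYaw
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import MultibodyPlant

# File and body names are placeholders for your own model.
plant = MultibodyPlant(time_step=0.0)
Parser(plant).AddModels("quadrotor_pendulum.sdf")  # AddModelFromFile() on older Drake versions
plant.Finalize()
context = plant.CreateDefaultContext()

# Set the world pose of the floating (quaternion) base body.
quad = plant.GetBodyByName("quadrotor")
X_WB = RigidTransform(RollPitchYaw(0.0, 0.0, 0.0), [1.0, 0.0, 0.5])
plant.SetFreeBodyPose(context, quad, X_WB)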
Hopefully that helps.
Related
In the case of Compass Gait Limit Cycle, the system is obtained from 'models/compass_gait_limit_cycle.urdf' and the context is from CreateDefaultContext().
These define the configuration vector q as [x, y, θ_1, θ_2]^T, as I understand it.
If the URDF model is more complicated (e.g. it has several branched structures), I can obtain the system and the context in the same way, but I cannot tell which component of q corresponds to which joint in the URDF file.
Is there some way to know this correspondence?
Ideally I would get the index into q from the name of the joint in the URDF file.
But anything is OK as long as I can determine the correspondence.
In addition, I would like to know how to rearrange the order of the components of q (and also of qd and qdd).
For example, in the case of the compass gait, the default order is [x, y, θ_1, θ_2] and the new order would be [θ_1, θ_2, x, y], or something like that.
Perhaps creating the context myself is the best way.
But I do not understand the context fully, so I would need some instruction on how to create it.
I am using pydrake on ubuntu 20.04.
Thank you for your answer.
If you are using MultibodyPlant (loaded from URDF), then MultibodyPlant controls the order of the variables in the context state vector. As you say, it is intended that you access them from the joint accessors. In python, I have a simple method that packs them into a namedview, which is very convenient for working with them: https://github.com/RussTedrake/underactuated/blob/ee7a2041047772f823d0dc0dd32e992196b2a670/underactuated/utils.py#L32-L86
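If all you need is the index into q for a joint named in the URDF, the joint accessors expose it directly via Joint.position_start(). A minimal pydrake sketch (loading the URDF path from the question; the joint name "hip" at the end is purely hypothetical):

from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import MultibodyPlant
from pydrake.multibody.tree import JointIndex

plant = MultibodyPlant(time_step=0.0)
Parser(plant).AddModels("models/compass_gait_limit_cycle.urdf")
plant.Finalize()

# For every joint, print where its positions start in q and how many it has.
for i in range(plant.num_joints()):
    joint = plant.get_joint(JointIndex(i))
    print(joint.name(), joint.position_start(), joint.num_positions())

# Or look a single joint up by its URDF name ("hip" is hypothetical):
hip_start = plant.GetJointByName("hip").position_start()

Newer Drake releases also provide MultibodyPlant.GetPositionNames(), which returns human-readable names in q order, if your version has it.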
You can see how I use this in a number of examples in the underactuated notes. Perhaps the littledog example shows it best.
If you want to use them in a different order in a different system, then I would recommend adding something like a MatrixGain system that implements a permutation matrix to rearrange them.
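As a concrete sketch of that permutation idea, using the compass-gait ordering from the question (only the matrix changes for another model):

import numpy as np
from pydrake.systems.primitives import MatrixGain

# Reorder [x, y, theta_1, theta_2] -> [theta_1, theta_2, x, y].
P = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.]])
permute = MatrixGain(P)  # implements y = P * u; wire it between the plant and your system

To permute the full state [q; v] rather than just q, use the block-diagonal matrix diag(P, P) instead.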
On page 348 of the Geant4 User's Guide and Applications Manual (see the following link)
http://ftp.tku.edu.tw/Linux/Gentoo/distfiles/BookForAppliDev-4.10.03.pdf
it states that
"Pol01 - interaction of polarized beam (e.g. circularly polarized photons) with polarized target"
Lines 25 and 26 of the histo.mac file in the Pol01 example contain the following two instructions:
/gun/polarization 0. 0. -1.
/gun/particle gamma
The direction of this gamma beam is along the z-axis, so, assuming the code is correct, the first line cannot be describing the polarization state of the electric field. Am I to take it, then, that in this context the first line defines the photon spin projection, and is therefore defining a circularly polarized photon, either left or right depending on which convention Geant4 uses?
Reading the code, Geant4 treats the three-vector you pass to /gun/polarization for a photon as the (S1, S2, S3) components of a Stokes vector, used in the calculations described in the Wikipedia article below.
https://en.wikipedia.org/wiki/Stokes_parameters
A (0,0,1) vector will represent circularly polarized light.
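For reference (and as an assumption about conventions you should verify against that article and the Geant4 polarization code), with the Stokes vector normalized so that S0 = 1 the components describe:

S1 = horizontal vs. vertical linear polarization
S2 = linear polarization at +45° vs. -45°
S3 = right- vs. left-handed circular polarization

So (S1, S2, S3) = (0, 0, ±1) is fully circularly polarized, and which sign means left or right depends on the handedness convention Geant4 adopts.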
I'm looking for some guidance on the approach I should take to mapping some points with R.
Earlier this year I went off to a forest to map the spatial distribution of some seedlings. I created a grid: every two meters I set down a flag with a tag name, and then I measured the distance from a flag to each seedling, as well as the bearing, using a military compass. I chose this method in hopes of getting better accuracy (Garmin GPS units prove useless for this sort of task under canopy cover).
I am really new to spatial distribution work altogether, so I was hoping someone could provide guidance on what R packages I should use.
The first step is to create a grid with my reference points (the flags). The second step is to tell R to use a reference point and my measurements to mark the location of a seedling. From there come other things, such as cluster analysis.
The largest R package for analysing point pattern data like yours is spatstat, which has very detailed documentation and an accompanying book.
For more specific help you would need to upload (part of) your data so we can see how it is organised and how you should read it in and convert to standard x,y coordinates.
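In the meantime, the geometry of that conversion is straightforward. Assuming the compass bearing θ is measured in degrees clockwise from north, the grid axes are aligned north/east, and magnetic declination is ignored, each seedling's position is

x = x_flag + d * sin(θ * π / 180)
y = y_flag + d * cos(θ * π / 180)

where (x_flag, y_flag) is the position of the reference flag and d is the measured distance. Once every seedling has x,y coordinates you can build a spatstat point pattern (a ppp object) from them.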
Full disclosure: I'm a co-author of both the package and the book.
I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct general summary of what is going on here is that both pieces of code set up the combined matrix to be sent to the vertex shader, where that matrix will be a parameter to the m44 AGAL operation. In general terms, the combined matrix takes us from object local space through camera view space to screen (clip) space.
My problem arises from my ignorance of proper matrix operations. I believe my attempt to merge the two environments fails precisely because prepending one matrix to another is not, and was never intended to be, equivalent to appending it. My request, then, can be summarized this way: because I have no control over the calling sequence the framework issues (i.e., I must live with an append operation), I can only fix things on the side where I prepare the matrix that will be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it meets the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other maneuvers that would let me modify a viewProjection matrix designed to be prepended, so that it turns out right when it is instead appended to the object's world-space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Since I don't know how to enter text involving superscripts here, I am not sure I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalence noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem which, to restate, really just arises from the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU vertex shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the final 'un-transpose' yields exactly the same combined raw data as the example from the author of the camera with the desired lens properties, whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
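In caret notation, the two lines above are just

(AB)^T = B^T A^T, and therefore BA = (A^T B^T)^T

so if a pipeline insists on multiplying in the order A·B while you actually need B·A, you can hand it the transposes of A and B and transpose the product afterwards.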
I have also not yet tested whether these extra calculations are so processing-intensive as to reduce my application's frame rate beyond what is acceptable, but I am pleased at least to be able to confirm that the computations yield the same result.
For some reason it seems that everyone writing webpages about Poincaré discs is only concerned with how to represent lines and measure distances.
I'd like to morph a collection of 2D points (as defined by x,y coordinates in the Euclidean plane) onto a Poincaré disc, but I have no idea what the algorithm is supposed to be like. At this point I don't even know if it's possible to create a mapping between Euclidean 2-space and a Poincaré disc...
Any pointers?
Goodwill,
David
You describe your data as a collection of points. But from your comments, you want to make lines in the plane still map to lines in the disk. You seem to want to preserve the "structure" of the space somehow, which is probably why you use the term "morph". I think that you want a conformal map.
There is no conformal bijection between the disk and the plane. There is such a mapping between the half-plane and the disk, and it preserves "lines", but not the kind that you want, unfortunately.
You said "I don't even know if it's possible to create a mapping" ... there are a number of mappings for you to choose from (see the Unit Disk page for an example) but there are none with all the features you seem to want.
If I understand everything correctly, the answer you got on the other forum is for the Beltrami–Klein model. Once you have that, you can get to the coordinates in the Poincaré disk with
p = b / (1 + sqrt(1 - b * b))
where p is the vector of coordinates in the Poincaré disk (i.e. what you need), b is the one in the Beltrami–Klein model (i.e. what you get from the other answer), and b * b denotes the dot product b·b.
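A minimal sketch of that conversion in Python (the function name is mine; the formula is just the one above):

import numpy as np

def klein_to_poincare(b):
    # Map a point b (inside the unit disk) from the Beltrami-Klein model
    # to the Poincare disk: p = b / (1 + sqrt(1 - b.b)).
    b = np.asarray(b, dtype=float)
    return b / (1.0 + np.sqrt(1.0 - b.dot(b)))

# Example: the point (0.5, 0) in the Klein model.
print(klein_to_poincare([0.5, 0.0]))  # -> [0.26794919 0.        ]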