I'd like to define a reference frame that isn't necessarily attached to a body in Drake but can still be updated during simulation with a LeafSystem. I tried using the AddFrame method described here (https://drake.mit.edu/pydrake/pydrake.multibody.parsing.html), but this returned an AddFrame(): incompatible function arguments error. I could get this error to go away by instantiating a FixedOffsetFrame first, but don't know if/how to update the frame's pose once it's been created. How can I create a reference frame that can be updated during simulation?
Also as a side note: I think it needs to be a frame and not just a RigidTransform since I'd like to use it with the CalcJacobianSpatialVelocity function
A frame in the MultibodyPlant should be attached (rigidly) to some body frame. You can specify, e.g., the frame of the hand relative to the wrist. As the joint angles change, the location of your frame will change during simulation. This would be accomplished, as you say, with a FixedOffsetFrame.
It sounds like you want a frame that can move independently of any body/link during simulation? Then how will you specify its pose? If you want the MultibodyPlant kinematics engine to do it for you, you would accomplish that by adding a body with joints or a floating base (which the kinematics/dynamics engine uses) and using that body's frame as your frame; see the sketch below. Or you can simply keep track of a frame yourself by storing a pose (relative to a body or the world) in your controller/perception system/etc.
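For the floating-base route, here is a minimal pydrake sketch (untested; the body name "reference_body" and the inertia values are arbitrary placeholders). The free body's body frame plays the role of the movable reference frame, and its pose is updated by writing into the plant's context:

```python
import numpy as np
from pydrake.math import RigidTransform
from pydrake.multibody.plant import MultibodyPlant
from pydrake.multibody.tree import JacobianWrtVariable, SpatialInertia, UnitInertia

plant = MultibodyPlant(time_step=0.0)
# A free body whose body frame will serve as the movable reference frame.
reference_body = plant.AddRigidBody(
    "reference_body",
    SpatialInertia(mass=1.0, p_PScm_E=np.zeros(3),
                   G_SP_E=UnitInertia.SolidSphere(0.01)))
plant.Finalize()

context = plant.CreateDefaultContext()
# "Update" the frame by setting the free body's pose in the context
# (e.g. from inside your LeafSystem, using the plant context it is given).
plant.SetFreeBodyPose(context, reference_body, RigidTransform(p=[0.5, 0.0, 0.2]))

# The body frame can be passed to CalcJacobianSpatialVelocity like any other frame.
J = plant.CalcJacobianSpatialVelocity(
    context,
    JacobianWrtVariable.kV,
    reference_body.body_frame(),  # frame B
    np.zeros(3),                  # point Bp, expressed in B
    plant.world_frame(),          # measured in frame A
    plant.world_frame())          # expressed in frame E
print(J.shape)
```

Note that the extra body adds floating-base states to the plant; if that is not acceptable, the second option (tracking the pose yourself) avoids it.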
In Rviz, we are publishing a series of point clouds (PointCloud2) and I want the viewer to follow and center on the point clouds. Currently, we just have an Orbit view and one has to manually keep moving the Focal Point to keep the data in view. Is there a way to do this automatically? I played around with the other view types, but they don't seem to do what I want.
You should set your Fixed Frame to be the frame of your pointcloud. The other option you have is to set your Target Frame to be the pointcloud frame. The latter will keep your pointcloud centered on the origin, which is probably easier.
I'm developing an iOS AR application using ARKit + Core Location. The points that are displayed on the map using coordinates move from place to place as I walk, but I need them to stay in the same place.
Here you can see the example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me handle this issue? How can I keep the points in fixed places using coordinates? Any ideas?
Thanks.
It looks like you attach objects to planes. However, when you move, ARKit extends the existing planes. As a result, if you put points at, for example, the center of a plane, that center keeps being updated. You need to recalculate the point's coordinates and place the objects accordingly.
The alternative is not to add objects to planes (or position them relative to planes). If you need to "put" an object on a plane, the best way is to wait until the plane is detected well enough (so that its orientation will not change significantly as you move), select the point on the plane where you want to put your object, convert that point's coordinates to world coordinates (so that even if the plane later changes size, the coordinate you have will not change), and finally attach the object to the root node (or to another node that is not related to the plane).
I've imported a vector layer from a PSD into PaintCode v1. I'm trying to create a background image and make it universal.
I can't seem to add a frame around the vector. To complicate matters, I only need a portion of the layer: the center. (The design is based around a circle, with lines drawn towards the center of the circle.)
I can’t seem to add a frame to dynamically resize the part I need.
I found this: http://www.raywenderlich.com/36341/paintcode-tutorial-dynamic-buttons. The part about frames and groups doesn't help me, though.
When I add a frame by clicking and dragging it around the area I need, it's at the same level as the vector layer. I've also tried adding a group around both, but that doesn't seem to obey the frame size either.
I’ve looked through the tutorials and googled adding a frame, but I can’t seem to achieve what I need.
EDIT
A frame is supposed to be at the same level as the vectors you're working with.
All you do then is set the resize rules of your vectors. There is a little rectangle in the frame's parameters interface with straight arrows and springs that you can modify to fit your wishes.
I think I also remember a checkbox setting to resize only what's inside the frame.
Now I haven't used PaintCode for a while, but if this doesn't help you, there probably is a problem with your vector layer.
I don't know if this information helps.
But if the resizing doesn't work as you expected, look carefully at the transformation box (the one with the springs attached) once you have put a frame around your object. The middle dot in this box should be blue instead of green. If it's green, you may have a problem with the originating point of your objects, and then the resizing may not work as you expected.
Can anyone please help me calculate the center of rotation and position of an X3D object?
I've noticed that aopt tool by InstantReality adds something like:
<Viewpoint DEF='AOPT_CAM' centerOfRotation='x y z' position='x y z'/>
The result is nice: the object is properly zoomed and centered, and the center of rotation is somehow perfectly "inside" the object (at its x, y, z center).
I must avoid using aopt. How can I obtain the same result (e.g. via JavaScript), perhaps by looping through the XML Coordinate points and doing some calculations?
I'm using X3DOM to render the object.
Many thanks.
"AOPT_CAM" is the name of the Viewpoint. The centerOfRotation and position values are automatically computed by the Browser (InstantReality in your case).
In order to compute these values yourself, you need to know your object's size (its bounding box) and do some math to work out where the Viewpoint should be located (the 'position' attribute) in your local coordinate system; see the sketch below. You also need to know the object's displacement in the coordinate system; if not specified, this should be (0, 0, 0).
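As a rough sketch of that math (shown here in Python rather than JavaScript; the field-of-view value and the assumption that the camera looks along -Z towards the object are my own assumptions, and the bounding box would come from looping over the Coordinate points as you suggest):

```python
import math

def compute_viewpoint(bbox_min, bbox_max, fov=math.pi / 4.0):
    """Estimate centerOfRotation and position from an axis-aligned bounding box."""
    center = [(lo + hi) / 2.0 for lo, hi in zip(bbox_min, bbox_max)]
    radius = max(hi - lo for lo, hi in zip(bbox_min, bbox_max)) / 2.0
    # Back the camera away along +Z until the whole box fits inside the view frustum.
    distance = radius / math.tan(fov / 2.0)
    position = [center[0], center[1], center[2] + radius + distance]
    return center, position

center_of_rotation, position = compute_viewpoint([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0])
print(center_of_rotation, position)  # candidate values for the Viewpoint attributes
```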
I need to track a single object's motion from frame to frame; I only need to know its position. But sometimes the object may go partly (even mostly) beyond the frame's boundaries, and sometimes it can approach the camera so closely that it does not fit in the frame. Which algorithm is best for this purpose?
The question is not very specific. What are you tracking?
If it is just a colored object (which is the simplest case), use a threshold for that color and use contours or the method of moments to find its position; see the sketch below. Then even if it goes out of the frame and comes back, it will still be tracked.
http://aishack.in/tutorials/tracking-colored-objects-in-opencv/
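A minimal OpenCV sketch of that color-threshold + moments idea (the HSV range below is an arbitrary example and would need tuning for your object; the camera index is also just an assumption):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # or a video file path
# Example HSV range (blue-ish); tune for your object's color.
lower, upper = np.array([100, 150, 50]), np.array([130, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)   # threshold on the object's color
    m = cv2.moments(mask)                   # method of moments on the binary mask
    if m["m00"] > 0:                        # object is (at least partly) visible
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Because the centroid comes from the moments of the whole mask, it still gives a position when the object is only partly in the frame, although it will be biased towards the visible part.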
Or whatever it is, try to isolate the blob first.
And if I misunderstood the question, please clarify with a few more details.