How to visualize new poses without simulation, using VTKRenderer - drake

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, 0.0)
parser = Parser(plant)
renderer_name = "renderer"
renderer = MakeRenderEngineVtk(RenderEngineVtkParams())
scene_graph.AddRenderer(renderer_name, renderer)
...
for i, position in enumerate(positions):
    obj_joints[i].set_translation(plant_context, position)
    diagram.Publish(diagram_context)
    scene_graph.Publish(sg_context)
The above code does not update anything in the visualizer. I must run:
simulator.AdvanceTo(100.0)
for anything to update visually.
How do I get the objects to move to their new poses without physics simulation?

... does not update anything in the visualizer.
Is that drake_visualizer? It's worth noting that RenderEngines don't "visualize" geometry in that sense; they are part of simulating cameras and other sensors within the simulation itself. What you actually want to do is attach an instance of DrakeVisualizer to your diagram:
DrakeVisualizer.AddToBuilder(builder, scene_graph)
(before you call builder.Build()).
That is the system responsible for populating drake_visualizer, and it is what updates drake_visualizer when Publish() is called (and you only have to call Publish() on the diagram, not on scene_graph as well).
On the off chance that you actually meant you were attempting to produce rendered images inside the simulation from an RgbdSensor, the instructions would change.
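Putting the pieces together, a minimal sketch of that wiring might look like the following, assuming (as in the question) that the objects sit on prismatic joints collected in obj_joints and that positions holds the target translations; the model file name is just a placeholder:

from pydrake.geometry import DrakeVisualizer
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, 0.0)
Parser(plant).AddModelFromFile("objects.sdf")       # placeholder model file
DrakeVisualizer.AddToBuilder(builder, scene_graph)  # must happen before builder.Build()
plant.Finalize()
diagram = builder.Build()

diagram_context = diagram.CreateDefaultContext()
plant_context = plant.GetMyMutableContextFromRoot(diagram_context)

for i, position in enumerate(positions):
    obj_joints[i].set_translation(plant_context, position)
    diagram.Publish(diagram_context)  # one call on the diagram is enough

Each Publish() on the diagram pushes the current poses out to drake_visualizer, so no Simulator or AdvanceTo() call is needed.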

Related

Printing an image to a dye-based application

I am learning about fluid dynamics (and Haxe) and have come across this awesome project and thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks on one of the shapes and then clicks on the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image: js.html.ImageElement =
            cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(
            cast fluid.dyeRenderTarget.writeToTexture, 0,
            Math.round(mouse.x), Math.round(mouse.y),
            RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update(dt: Float) {
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);
    // Smaller number creates a bigger ripple; was 0.016
    dt = 0.090; //#!
    // Physics
    // interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    // step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if (renderParticlesEnabled) {
        particles.step(dt);
    }
    // Below handles the cycling of colours once the mouse is moved,
    // and then the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter, and that INVALID_ENUM in particular means that one of the gl.XXX parameters is just flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared as: public var writeToTexture (default, null):GLTexture;. writeToTexture is a wrapper around a regular WebGL texture handle.
I am using Haxe version 3.2.1 and Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set. bindTexture() can be used for this.
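A sketch of what that looks like, reusing the names from the question (RenderingContext.TEXTURE_2D is the usual target for a plain 2D texture; the format passed here must still match how the dye texture was originally allocated):

// Bind the destination texture first, then pass a TEXTURE_* enum as the target.
var ctx = gl.current_context;
ctx.bindTexture(RenderingContext.TEXTURE_2D, fluid.dyeRenderTarget.writeToTexture);
ctx.texSubImage2D(RenderingContext.TEXTURE_2D, 0,
    Math.round(mouse.x), Math.round(mouse.y),
    RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);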

Change the request.comment value?

I use different kinds of stop losses and would like to be notified (SendNotification()) about which kind of stop loss was hit upon trade exit.
Let's say I entered a trade by...
request.action  = TRADE_ACTION_DEAL;
request.symbol  = pSymbol;
request.type    = pType;
request.sl      = pStop;
request.tp      = pProfit;
request.comment = pComment;
request.volume  = pVolume;
request.price   = SymbolInfoDouble(pSymbol,SYMBOL_ASK);   // for a buy
request.price   = SymbolInfoDouble(pSymbol,SYMBOL_BID);   // for a sell
OrderSend(request,result);
I would now like to have the request.comment changed to reflect the last kind of stop loss, like so:
request.action = TRADE_ACTION_SLTP;
request.symbol = pSymbol;
request.sl = pStop;
request.tp = pProfit;
request.comment = "Fixed SL";
PositionSelect(_Symbol);
request.order = PositionGetInteger(POSITION_IDENTIFIER);
OrderSend(request,result);
Unfortunately, the second block of code does not change the original request.comment = pComment; (instead the new comment becomes [sl 1.19724]).
Is it possible to change the comment via TRADE_ACTION_SLTP? What am I doing wrong?
Thank you!
I would now like to have the request.comment changed
There has never been a way to do this in the MQL4/5 trading platforms.
Sad, but true.
The core functionality has always been focused on engineering a fast, reliable soft-real-time engine (which still provides only best-effort scheduling alongside the stream of externally injected FX-market events), so bear with the product as-is.
There has also always been one more degree of uncertainty: the broker-side automation is largely free to modify the .comment part of a trade position. Even if your OrderSend() was explicit about what ought to be stored there, the result was never guaranteed, and the broker side could change this field at any moment, immediately or at any later stage, outside of your control. The only semi-unique keys you can rely on are those you place in .magic, and your local application code always has to do all the bookkeeping via some key:value storage extension of its own, independent of the otherwise uncertain broker-side content.
Even the trade number (ID, ticket) identifier is not always a persistent key and may change under some trade-management operations, so be very careful before deciding on your approach.
like to be notified ( SendNotification() ) about which kind of stop loss was hit upon trade exit.
Doable, yet one will need to build all the middleware logic on one's own.
The wish is clear and achievable: given a proper layer of middleware logic, you can enjoy whatever such automation you need.
Having built things like augmented visual trading, remote AI/ML quant predictors, and real-time, fully adaptive, non-blocking GUI quant-tool augmentations (the trader gets live graphical aids inside the GUI, automatically overlaid on the other EA and Indicator tools on the GUI surface, fully click-and-modify interactive for fast, visually augmented discretionary changes to the traded asset management), I can say that only your imagination and available resources are the limit here.
Yet one has to respect the published platform limits: just as OrderModify() provides no means for the wish above, any add-on, customer-specific reporting on position terminations has to be assembled on one's own initiative, as the platform (for the reasons noted above) does not provide any tools for such non-core activity.

How do I get the DirectX 12 command queue from a swap chain

In a scenario where DX12 is being hooked for overlay rendering, the best function to hook appears to be IDXGISwapChain::Present, the same way it was done for DX11. With this function hooked, the swap chain is available, the device can be retrieved from it to create resources, and with those resources it is possible to record rendering commands too. The problem arises when trying to execute those rendering commands: there is no way to retrieve the associated command queue from the swap chain, so there is nothing like this:
CComPtr<ID3D12Device> pD3D12Device;
if (pSwapChain->GetDevice(__uuidof(ID3D12Device), (void**)(&pD3D12Device)) == S_OK)
{
    pD3D12Device->GetCommandQueueForSwapChain(swapChain)->ExecuteCommandLists(…);
}
The other option would be creating a new command queue to execute on, like this:
CComPtr<ID3D12Device> pD3D12Device;
if (pSwapChain->GetDevice(__uuidof(ID3D12Device), (void**)(&pD3D12Device)) == S_OK)
{
    D3D12_COMMAND_QUEUE_DESC queue_desc = {};
    queue_desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
    queue_desc.Type  = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT commandQueueRes = pD3D12Device->CreateCommandQueue(&queue_desc, IID_PPV_ARGS(&_commandQueue));
    _commandQueue->ExecuteCommandLists( ... );
}
This results in an error and subsequent device removal. See the error message below.
D3D12 ERROR: ID3D12CommandQueue::ExecuteCommandLists: A command list, which writes to a swap chain back buffer, may only be executed on the command queue associated with that buffer. [ STATE_SETTING ERROR #907: EXECUTECOMMANDLISTS_WRONGSWAPCHAINBUFFERREFERENCE]
The problem is not resolved even if ID3D12CommandQueue::ExecuteCommandLists is hooked as well, because there is no way to retrieve the associated swap chain from the command queue either.
So my question is: what is the recommended way to deal with this in a scenario where the swap chain is created before the hooking can possibly happen?
In case anyone is looking for the answer, here's what I found out.
There is no official way to do this. For overlay rendering the recommended approach is to use DirectComposition, but that has performance consequences, which is not great for game overlays.
Investigating the memory a bit, there is a possible way to get the command queue from the swap chain with something like this:
#ifdef _M_X64
size_t* pOffset = (size_t*)((BYTE*)swapChain + 216);
#else
size_t* pOffset = (size_t*)((BYTE*)swapChain + 132);
#endif
_commandQueue = reinterpret_cast<ID3D12CommandQueue*>(*pOffset);
Obviously this solution is not recommended, but it might be useful if someone just wants to do some debugging.
My final solution is to hook a function that uses the command queue (I use ExecuteCommandLists), grab the pointer there, and use it later to render the overlay. It's not completely satisfying, but it works as long as there aren't multiple swap chains.
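For illustration, a rough sketch of that hook; the detour mechanism (MinHook or similar) and the pointer to the original function are assumed to be set up elsewhere, and only capturing the queue is shown:

#include <d3d12.h>

static ID3D12CommandQueue* g_pCommandQueue = nullptr;

typedef void (STDMETHODCALLTYPE* ExecuteCommandLists_t)(
    ID3D12CommandQueue* pThis, UINT NumCommandLists, ID3D12CommandList* const* ppCommandLists);
static ExecuteCommandLists_t g_pOriginalExecuteCommandLists = nullptr; // filled in by the hooking library

void STDMETHODCALLTYPE HookedExecuteCommandLists(
    ID3D12CommandQueue* pThis, UINT NumCommandLists, ID3D12CommandList* const* ppCommandLists)
{
    // Remember the first direct queue we see; the swap chain presents on a direct queue.
    if (g_pCommandQueue == nullptr && pThis->GetDesc().Type == D3D12_COMMAND_LIST_TYPE_DIRECT)
        g_pCommandQueue = pThis;

    g_pOriginalExecuteCommandLists(pThis, NumCommandLists, ppCommandLists);
}

The overlay code in the Present hook can then execute its own command lists on g_pCommandQueue.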

How to not use threads

This is a Dart newbie question about how to do "multithreading" in Dart.
(Excuse me, I am an old Java developer...)
So I have this kind of code (see below), but since recreating the GUI is costly I would like to defer it: instead of recreating the GUI in _onWindowResize(), I would like to start a thread that does this once the size has been stable for some time, e.g. one second.
If a thread is already started, do nothing. (Btw, StageXL is cool...)
(This will also work around the fact that _onWindowResize() is called twice by dart:html...)
...
  html.window.onResize.listen((e) => _onWindowResize());
}

_createGui() {
  var shape = new Shape();
  shape.graphics.ellipse(html.window.innerWidth / 2, html.window.innerHeight / 2,
      html.window.innerWidth / 4, html.window.innerHeight / 4);
  shape.graphics.fillColor(Color.Red);
  stage.addChild(shape);
}

void _onWindowResize() {
  print("New window size ${html.window.innerWidth}x${html.window.innerHeight}");
  stage = new Stage('stage', canvas);
  stage.scaleMode = StageScaleMode.NO_SCALE;
  stage.align = StageAlign.TOP_LEFT;
  renderLoop = new RenderLoop();
  renderLoop.addStage(stage);
  juggler = renderLoop.juggler;
  _createGui();
}
One can send work to other threads in Dart via Isolates, but this won't work for your scenario since it's mostly about modifying the UI of the app.
One cannot share objects between isolates in Dart (or between Web Workers in general), so you cannot pass the canvas into an isolate to create your stage, render loop, etc.
If you are doing complex calculations (Physics, for example), it might make sense to send those off to an Isolate and use the result to update the UI.

Using matrices to transform the Three.js scene graph

I'm attempting to load a scene from a file into Three.js (custom format, not one that Three.js supports). This particular format describes a scene graph where each node in the tree has a transform specified as a 4x4 matrix. The process for pushing it into Three.js looks something like this:
// Yeah, this is javascript-like pseudocode
function processNodes(srcNode, parentThreeObj) {
    for(child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        // This line is the problem
        threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
        for(mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeObj.add(threeMesh);
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj); // And recurse!
    }
}
Or at least that's what I'd like it to be. As I pointed out, the applyMatrix line doesn't work the way I would expect. The majority of the scene looks okay, but certain elements that have been rotated aren't aligned properly (while others are, which is strange).
Looking through the COLLADA loader (which does approximately the same thing I'm trying to do), it appears that it decomposes the matrix into a translate/rotate/scale and applies each individually. I tried that in place of the applyMatrix shown above:
var props = threeMatrixFromSrcMatrix(child.matrix).decompose();
threeObj.useQuaternion = true;
threeObj.position = props[ 0 ];
threeObj.quaternion = props[ 1 ];
threeObj.scale = props[ 2 ];
This, once again, yields a scene where most elements are in the right place but meshes that previously were misaligned have now been transformed into oblivion somewhere and no longer appear at all. So in the end this is no better than the applyMatrix from above.
Looking through several online discussions about the topic it seems that the recommended way to use matrices for your transforms is to apply them directly to the geometry, not the nodes, so I tried that by manually building the transform matrix like so:
function processNodes(srcNode, parentThreeObj, parentMatrix) {
    for(child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        var childMatrix = threeMatrixFromSrcMatrix(child.matrix);
        var objMatrix = new THREE.Matrix4();
        objMatrix.multiply(parentMatrix, childMatrix);
        for(mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeMesh.geometry.applyMatrix(objMatrix);
            threeObj.add(threeMesh);
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj, objMatrix); // And recurse!
    }
}
This actually yields the correct results (minus some quirks with the normals, but I can figure that out)! That's great, but the problem is that we've now effectively flattened the scene hierarchy: changing the transform on a parent will yield unexpected results on the children, because the full transform stack is now "baked into" the meshes. In this case that's an unacceptable loss of information about the scene.
So how might one go about telling Three.js to do the same logic, but at the appropriate point in the scene graph?
(Sorry, I would dearly love to post some live code examples but that's unfortunately not an option in this case.)
You can use matrixAutoUpdate = false to skip the Three.js scene-graph position/scale/rotation handling. Then set object.matrix to the matrix you want and all should be dandy (well, it still gets multiplied by parent node matrices, so if you're using absolute modelview matrices you need to hack the updateMatrixWorld method on Object3D).
object.matrixAutoUpdate = false;
object.matrix = myMatrix;
Now, if you'd like to have a custom transformation matrix applied on top of the Three.js position/scale/rotation handling, you need to edit Object3D#updateMatrix to be something like:
THREE.Object3D.prototype._updateMatrix = THREE.Object3D.prototype.updateMatrix;
THREE.Object3D.prototype.updateMatrix = function() {
    this._updateMatrix();
    if (this.customMatrix != null)
        this.matrix.multiply(this.customMatrix);
};
See https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L209
Sigh...
Altered Qualia pointed out the solution on Twitter within minutes of me posting this.
It's a simple one-line fix: Just set matrixAutoUpdate to false on the Object3D instances and the first code sample works as intended.
threeObj.matrixAutoUpdate = false; // This fixes it
threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
It's always the silly little things that get you...
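For completeness, here is a sketch of the loader loop from the question with that one-line fix applied (still the same javascript-like pseudocode):

function processNodes(srcNode, parentThreeObj) {
    for(child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        threeObj.matrixAutoUpdate = false; // keep the matrix we set below
        threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
        for(mesh in child.meshes) {
            threeObj.add(threeMeshFromSrcMesh(mesh));
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj); // And recurse! The hierarchy stays intact.
    }
}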
