Porting old MDX code to SharpDX

I'm porting some old MDX code to SharpDX using Direct3D9 assemblies.
I was able to 'convert' most of the code to SharpDX but I'm stuck at the following:
Mesh result = Mesh.Cylinder(_device, _arrowRadius1, _arrowRadius2, _arrowLength, _arrowNumberOfSlices, _arrowNumberOfStacks);
Mesh result = Mesh.Box(_device, _axisLength, _axisThick, _axisThick);
Mesh.TextFromFont(_device, new System.Drawing.Font("Berlin Sans FB", 12), text, 5f, 0.2f);
The Mesh class exists, but does not contain the Cylinder or Box methods. I've gone through tons of documentation and could not find a solution.
Apart from the problem with the Mesh class, I could not find matching classes and methods for the following in SharpDX:
using (Surface backbuffer = _device.GetBackBuffer(0, 0))
{
    GraphicsStream stream = SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, backbuffer);
    return new Bitmap(stream);
}
GraphicsStream and SurfaceLoader do not exist.

I had a similar problem porting from the old Managed DirectX (Microsoft.DirectX) to SharpDX 9.
For meshes we had to implement our own Mesh classes, since there are no primitives like cylinder, sphere or box in SharpDX.Mesh (it's just a mock class, I guess).
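For example, a rough (untested) sketch of building a box mesh by hand might look like the one below. It assumes your SharpDX build still exposes the D3DX-backed Mesh constructor taking (device, numFaces, numVertices, MeshFlags, VertexFormat); it only writes positions, so normals, texture coordinates and winding order are left for you to sort out before it can really replace Mesh.Box:
Mesh CreateBoxMesh(Device device, float width, float height, float depth)
{
    // 12 faces (two triangles per side), 8 corner vertices, positions only.
    var mesh = new Mesh(device, 12, 8, MeshFlags.Managed, VertexFormat.Position);

    float x = width / 2f, y = height / 2f, z = depth / 2f;
    var corners = new[]
    {
        new Vector3(-x, -y, -z), new Vector3(x, -y, -z),
        new Vector3(x, y, -z),   new Vector3(-x, y, -z),
        new Vector3(-x, -y, z),  new Vector3(x, -y, z),
        new Vector3(x, y, z),    new Vector3(-x, y, z)
    };
    DataStream vb = mesh.LockVertexBuffer(LockFlags.None);
    vb.WriteRange(corners);
    mesh.UnlockVertexBuffer();

    // 16-bit indices (the Mesh default); flip the winding if your cull mode disagrees.
    short[] indices =
    {
        0, 1, 2,  0, 2, 3,   4, 6, 5,  4, 7, 6,
        0, 3, 7,  0, 7, 4,   1, 5, 6,  1, 6, 2,
        3, 2, 6,  3, 6, 7,   0, 4, 5,  0, 5, 1
    };
    DataStream ib = mesh.LockIndexBuffer(LockFlags.None);
    ib.WriteRange(indices);
    mesh.UnlockIndexBuffer();

    return mesh;
}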
But for SurfaceLoader, check the Surface class itself; it has static methods that will probably match your needs. For example:
Surface.ToStream()
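A minimal sketch of the GetBackBuffer/SaveToStream snippet from the question rewritten this way might be (assuming the static Surface.ToStream overload that takes an ImageFileFormat, with a DataStream standing in for the old GraphicsStream):
using (Surface backbuffer = _device.GetBackBuffer(0, 0))
using (DataStream stream = Surface.ToStream(backbuffer, ImageFileFormat.Bmp))
{
    return new Bitmap(stream);
}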

Related

Printing an image to a dye based application

I am learning about fluid dynamics (and Haxe) and have come across this awesome project, and thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks on one of the shapes and then clicks onto the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;
function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update( dt:Float ){
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);
    //Smaller number creates a bigger ripple, was 0.016
    dt = 0.090;//#!
    //Physics
    //interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    //step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if(renderParticlesEnabled){
        particles.step(dt);
    }
    //Below handles the cycling of colours once the mouse is moved and then the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter, and INVALID_ENUM in particular means that one of the gl.XXX parameters is just flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared like so: public var writeToTexture (default, null):GLTexture;. writeToTexture is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set. bindTexture() can be used for this.
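For example, a minimal sketch reusing the identifiers from the question (untested against gltoolbox/snow) might be:
// Bind the render target's texture first, then pass TEXTURE_2D (one of the
// gl.TEXTURE_* constants, not the texture object) as the target argument.
gl.current_context.bindTexture(RenderingContext.TEXTURE_2D, fluid.dyeRenderTarget.writeToTexture);
gl.current_context.texSubImage2D(RenderingContext.TEXTURE_2D, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);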

Possible to make a composite symbol?

When editing a vertex I would like to substitute the vertex symbol with a SimpleMarkerSymbol and a TextSymbol, but that appears to be impossible. Any suggestions on how I could do this? I want the appearance of dragging something like this (text + circle):
After taking some time to look at the API I've come to the conclusion it is impossible. Here is my workaround:
editor.on("vertex-move", args => {
let map = this.options.map;
let g = <Graphic>args.vertexinfo.graphic;
let startPoint = <Point>g.geometry;
let tx = args.transform;
let endPoint = map.toMap(map.toScreen(startPoint).offset(tx.dx, tx.dy));
// draw a 'cursor' as a hack to render text over the active vertex
if (!cursor) {
cursor = new Graphic(endPoint, new TextSymbol({text: "foo"}));
this.layer.add(cursor);
} else {
cursor.setGeometry(endPoint);
cursor.draw();
}
})
You could use a TextSymbol to create a point, using a font that has the numbers inside a circle. Here is one place where you can find such a font: http://www.fontspace.com/the-fontsite/combinumerals
It won't be exactly as shown in the image, but close enough. One limitation: it won't work with IE9 or lower (as per the Esri documentation), since I am using a halo to get the white border.
Here is the working JSBin: http://jsbin.com/hayirebiga/edit?html,output (use a point or multipoint).
PS: I converted the TTF to OTF and then added the font as base64, which is optional; I did it because I could not add the TTF or OTF to JSBin.
Well, achieving this seems impossible so far; however, the ArcGIS JS API provides an application/platform where you can generate a single symbol online for your applications.
You can simply create all kinds of symbols (provided by Esri) online, and it gives you on-the-fly code which you just need to paste into your application.
This makes it easy to try different types of suitable symbols for your application.
Application URL: https://developers.arcgis.com/javascript/3/samples/playground/index.html
Hoping this helps you :)

How do I get slot number for texture by name?

Using Direct3D 11 and SharpDX, given the name of a Texture Map as declared in the shader, how do I know what slot to assign my Sampler and TextureView to?
Documentation indicates I can use ShaderReflection; however, it is not clear how...
void SetTexture(MyShaderProgram shaderProgram, string name, MyTextureMap textureMap)
{
    byte[] byteCode = shaderProgram.ByteCode;
    var shaderReflection = new SharpDX.D3DCompiler.ShaderReflection(byteCode);
    var slot = ?
    PixelShaderStage pixelShader = shaderProgram.PixelShader;
    pixelShader.SetSampler(slot, textureMap.Sampler);
    pixelShader.SetShaderResource(slot, textureMap.TextureView);
}
It seems that the BindPoint of the shader's InputBindingDescription serves this purpose. Thus, this may suffice:
var slot = shaderReflection.GetResourceBindingDescription(name).BindPoint;
It may also be worth noting that, technically, one should get two bind points: one for the sampler and one for the texture view. As they are often declared side by side, this solution may be sufficient.
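Putting that together, a hedged sketch of the SetTexture body could look like the one below; note that looking the sampler up as name + "Sampler" is only an assumption about how the HLSL declares it, not something SharpDX enforces:
void SetTexture(MyShaderProgram shaderProgram, string name, MyTextureMap textureMap)
{
    byte[] byteCode = shaderProgram.ByteCode;
    using (var shaderReflection = new SharpDX.D3DCompiler.ShaderReflection(byteCode))
    {
        // Slot for the texture (SRV), looked up by the name declared in the shader.
        int textureSlot = shaderReflection.GetResourceBindingDescription(name).BindPoint;
        // Slot for the sampler; the naming convention here is an assumption.
        int samplerSlot = shaderReflection.GetResourceBindingDescription(name + "Sampler").BindPoint;

        PixelShaderStage pixelShader = shaderProgram.PixelShader;
        pixelShader.SetSampler(samplerSlot, textureMap.Sampler);
        pixelShader.SetShaderResource(textureSlot, textureMap.TextureView);
    }
}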

How to create my own setSVMDetector?

When I create the standard detector...
static vector<float> detector = HOGDescriptor::getDefaultPeopleDetector();
if (!detector.size()) {
    fprintf(stderr, "ERROR: getDefaultPeopleDetector returned NULL\n");
    return -1;
}
hog.setSVMDetector(detector);
hog.detectMultiScale(img, rects);
...all works fine.
But!
When I create my own classifier using the "Classifier Tool For OpenCV" (classifieropencv.codeplex.com), it can't find the object. I use all the default parameters: winSize, blockSize, blockStride, cellSize and the others. Why? Has anyone used this tool to create classifiers for HOG detection? Has anyone used HOGDescriptor to detect their own object (without getDefaultPeopleDetector)?
Thanks!
This tool is useful: "Classifier Tool For OpenCV" (classifieropencv.codeplex.com)
The parameters in this tool (when you create the classifier) must be equal to the parameters in your OpenCV code (when you use the classifier).
Here is a manual in Russian, but it has many pictures and a video, and is clear.
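For example (a hedged sketch, not output from the tool): a detector trained with OpenCV's default HOG geometry would be used roughly like this, with the sizes replaced by whatever you actually set in the tool, and myTrainedDetector standing in for the vector<float> the tool exported:
// The HOGDescriptor used for detection must be built with the same geometry
// the classifier was trained with; the sizes below are OpenCV's defaults.
HOGDescriptor hog(
    Size(64, 128),  // winSize
    Size(16, 16),   // blockSize
    Size(8, 8),     // blockStride
    Size(8, 8),     // cellSize
    9);             // nbins
hog.setSVMDetector(myTrainedDetector);
hog.detectMultiScale(img, rects);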

Using matrices to transform the Three.js scene graph

I'm attempting to load a scene from a file into Three.js (custom format, not one that Three.js supports). This particular format describes a scene graph where each node in the tree has a transform specified as a 4x4 matrix. The process for pushing it into Three.js looks something like this:
// Yeah, this is javascript-like pseudocode
function processNodes(srcNode, parentThreeObj) {
    for(child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        // This line is the problem
        threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
        for(mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeObj.add(threeMesh);
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj); // And recurse!
    }
}
Or at least that's what I'd like it to be. As I pointed out, the applyMatrix line doesn't work the way that I would expect. The majority of the scene looks okay, but certain elements that have been rotated aren't aligned properly (while others are, which is strange).
Looking through the COLLADA loader (which does approximately the same thing I'm trying to do) it appears that they decompose the matrix into a translate/rotate/scale and apply each individually. I tried that in place of the applyMatrix shown above:
var props = threeMatrixFromSrcMatrix(child.matrix).decompose();
threeObj.useQuaternion = true;
threeObj.position = props[ 0 ];
threeObj.quaternion = props[ 1 ];
threeObj.scale = props[ 2 ];
This, once again, yields a scene where most elements are in the right place but meshes that previously were misaligned have now been transformed into oblivion somewhere and no longer appear at all. So in the end this is no better than the applyMatrix from above.
Looking through several online discussions about the topic, it seems that the recommended way to use matrices for your transforms is to apply them directly to the geometry, not the nodes, so I tried that by manually building the transform matrix like so:
function processNodes(srcNode, parentThreeObj, parentMatrix) {
    for(child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        var childMatrix = threeMatrixFromSrcMatrix(child.matrix);
        var objMatrix = new THREE.Matrix4();
        objMatrix.multiply(parentMatrix, childMatrix);
        for(mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeMesh.geometry.applyMatrix(objMatrix);
            threeObj.add(threeMesh);
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj, objMatrix); // And recurse!
    }
}
This actually yields the correct results! (minus some quirks with the normals, but I can figure that one out) That's great, but the problem is that we've now effectively flattened the scene hierarchy: Changing the transform on a parent will yield unexpected results on the children because the full transform stack is now "baked in" to the meshes. In this case that's an unacceptable loss of information about the scene.
So how might one go about telling Three.js to do the same logic, but at the appropriate point in the scene graph?
(Sorry, I would dearly love to post some live code examples but that's unfortunately not an option in this case.)
You can use matrixAutoUpdate = false to skip the Three.js scene graph position/scale/rotation stuff. Then set object.matrix to the matrix you want and all should be dandy (well, it still gets multiplied by parent node matrices, so if you're using absolute modelview matrices you need to hack the updateMatrixWorld method on Object3D).
object.matrixAutoUpdate = false;
object.matrix = myMatrix;
Now, if you'd like to have a custom transformation matrix applied on top of the Three.js position/scale/rotation stuff, you need to edit Object3D#updateMatrix to be something like this:
THREE.Object3D.prototype._updateMatrix = THREE.Object3D.prototype.updateMatrix;
THREE.Object3D.prototype.updateMatrix = function() {
    this._updateMatrix();
    if (this.customMatrix != null)
        this.matrix.multiply(this.customMatrix);
};
See https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L209
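Then, presumably, you attach the extra matrix to whichever objects need it (customMatrix is just the ad-hoc property introduced above, not a built-in Three.js field):
someObject.customMatrix = extraMatrix; // multiplied in after position/scale/rotation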
Sigh...
Altered Qualia pointed out the solution on Twitter within minutes of me posting this.
It's a simple one-line fix: Just set matrixAutoUpdate to false on the Object3D instances and the first code sample works as intended.
threeObj.matrixAutoUpdate = false; // This fixes it
threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
It's always the silly little things that get you...

Resources