Picking an object using mouse click in Direct3D (Managed DirectX)

Basically, I want to pick an object with a mouse click in Direct3D (Managed DirectX) using C#. I need to transform a 2D screen point into a 3D point. After googling, I found that I can use picking with a ray. Here is an example of using a ray:
http://msdn.microsoft.com/en-us/library/bb203905.aspx
My problem if I implement the ray approach is that I can't find GraphicsDevice.Viewport.Unproject in Direct3D.
If there is another solution, please tell me.
Update:
Here is what I have done:
protected override void OnMouseClick(MouseEventArgs e)
{
    base.OnMouseClick(e);
    int x = e.X;
    int y = e.Y;

    // Points under the cursor on the near (z = 0) and far (z = 1) planes.
    Vector3 nearsource = new Vector3((float)x, (float)y, 0f);
    Vector3 farsource = new Vector3((float)x, (float)y, 1f);

    nearsource.Unproject(device.Viewport, device.Transform.Projection, device.Transform.View, device.Transform.World);
    farsource.Unproject(device.Viewport, device.Transform.Projection, device.Transform.View, device.Transform.World);

    Vector3 direction = nearsource - farsource;
    direction.Normalize();
}
The problem is that the farsource vector contains NaN values.

Managed DirectX has been deprecated for a long, long time. I would suggest using a better-supported API like SlimDX.
A little googling revealed this thread, which seems to be what you want in SlimDX.
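In case the link goes stale, the unprojection itself is only a few matrix operations. Here is a minimal C# sketch of building a picking ray against SlimDX's Direct3D9 API (my own sketch, not the linked thread's code; it assumes device is a SlimDX.Direct3D9.Device with its World/View/Projection transforms set):

// using SlimDX; using SlimDX.Direct3D9;
Ray GetPickRay(int mouseX, int mouseY)
{
    Viewport vp = device.Viewport;

    // Click position in normalized device coordinates (-1..1), Y flipped.
    float ndcX = (2.0f * (mouseX - vp.X) / vp.Width) - 1.0f;
    float ndcY = 1.0f - (2.0f * (mouseY - vp.Y) / vp.Height);

    // Invert the combined world-view-projection matrix once...
    Matrix wvp = device.GetTransform(TransformState.World)
               * device.GetTransform(TransformState.View)
               * device.GetTransform(TransformState.Projection);
    Matrix inverse = Matrix.Invert(wvp);

    // ...and pull the near- and far-plane points back out of clip space.
    Vector3 near = Vector3.TransformCoordinate(new Vector3(ndcX, ndcY, 0f), inverse);
    Vector3 far = Vector3.TransformCoordinate(new Vector3(ndcX, ndcY, 1f), inverse);

    return new Ray(near, Vector3.Normalize(far - near));
}

Note that the ray direction runs from the near point to the far point, so the direction in your snippet above should be farsource - nearsource. As for the NaN values: unprojection ends with a divide by the transformed w component, so a degenerate view or projection matrix fed to Unproject will typically produce NaNs.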

Related

Printing an image to a dye-based application

I am learning about fluid dynamics (and Haxe) and have come across this awesome project, and I thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks on one of the shapes and then clicks onto the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update(dt:Float) {
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);

    // Smaller number creates a bigger ripple, was 0.016
    dt = 0.090; //#!

    // Physics
    // interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);

    // step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if (renderParticlesEnabled) {
        particles.step(dt);
    }

    // Below handles the cycling of colours once the mouse is moved, after which the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first parameter is not allowed?
I have read that the texture target is the first parameter, and that INVALID_ENUM in particular means that one of the gl.XXX parameters is flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared like so: public var writeToTexture (default, null):GLTexture;. writeToTexture is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and using Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants:
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually the texture itself; it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set: bindTexture() can be used for this.
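A sketch of what that looks like in the question's Haxe code (gl.current_context, fluid.dyeRenderTarget, mouse and image are the question's own names):

var ctx = gl.current_context;

// Bind the wrapped texture to the TEXTURE_2D binding point first...
ctx.bindTexture(RenderingContext.TEXTURE_2D, fluid.dyeRenderTarget.writeToTexture);

// ...then pass the TEXTURE_2D enum, not the texture handle, as the target.
ctx.texSubImage2D(RenderingContext.TEXTURE_2D, 0,
    Math.round(mouse.x), Math.round(mouse.y),
    RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);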

Porting old MDX code

I'm porting some old MDX code to SharpDX using Direct3D9 assemblies.
I was able to 'convert' most of the code to SharpDX but I'm stuck at the following:
Mesh result = Mesh.Cylinder(_device, _arrowRadius1, _arrowRadius2, _arrowLength, _arrowNumberOfSlices, _arrowNumberOfStacks);
Mesh result = Mesh.Box(_device, _axisLength, _axisThick, _axisThick);
Mesh.TextFromFont(_device, new System.Drawing.Font("Berlin Sans FB", 12), text, 5f, 0.2f);
The Mesh class exists but does not contain the Cylinder or Box methods. I've gone through tons of documentation and could not find a solution.
Apart from the problem with the Mesh class, I could not find matching classes and methods for the following in SharpDX:
using (Surface backbuffer = _device.GetBackBuffer(0, 0))
{
    GraphicsStream stream = SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, backbuffer);
    return new Bitmap(stream);
}
GraphicsStream and SurfaceLoader do not exist.
I had a similar problem porting from old Managed DirectX (Microsoft.DirectX) to SharpDX 9.
For meshes, we had to implement our own mesh classes, since there are no primitives like cylinder, sphere, or box in SharpDX's Mesh (it's just a mock class, I guess).
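For example, a bare-bones stand-in for Mesh.Cylinder might look like the sketch below (hedged: it assumes SharpDX's Mesh constructor taking a VertexFormat and the LockVertexBuffer/LockIndexBuffer helpers, and it skips normals and end caps):

// using SharpDX; using SharpDX.Direct3D9;
Mesh CreateCylinder(Device device, float radius1, float radius2, float length, int slices)
{
    int vertexCount = (slices + 1) * 2;   // two rings, seam vertex duplicated
    int faceCount = slices * 2;           // two triangles per slice
    var mesh = new Mesh(device, faceCount, vertexCount, MeshFlags.Managed, VertexFormat.Position);

    DataStream vb = mesh.LockVertexBuffer(LockFlags.None);
    for (int i = 0; i <= slices; i++)
    {
        float angle = (float)(2.0 * Math.PI * i / slices);
        float c = (float)Math.Cos(angle), s = (float)Math.Sin(angle);
        vb.Write(new Vector3(radius1 * c, radius1 * s, 0f));      // bottom ring
        vb.Write(new Vector3(radius2 * c, radius2 * s, length));  // top ring
    }
    mesh.UnlockVertexBuffer();

    // The default index buffer is 16-bit; two triangles stitch each quad.
    DataStream ib = mesh.LockIndexBuffer(LockFlags.None);
    for (int i = 0; i < slices; i++)
    {
        short b0 = (short)(i * 2), t0 = (short)(i * 2 + 1);
        short b1 = (short)(i * 2 + 2), t1 = (short)(i * 2 + 3);
        ib.WriteRange(new[] { b0, t0, b1, b1, t0, t1 });
    }
    mesh.UnlockIndexBuffer();
    return mesh;
}

Real code would also want normals (a Position | Normal vertex format) for lighting, plus end caps.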
But for SurfaceLoader, check the Surface class itself; it has static methods that will probably match your needs. For example:
Surface.ToStream()
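With that, the backbuffer capture becomes something like this (a sketch; I believe Surface.ToStream wraps D3DXSaveSurfaceToFileInMemory and returns a SharpDX DataStream, which is a regular .NET Stream, but double-check the exact overload):

using (Surface backbuffer = _device.GetBackBuffer(0, 0))
using (DataStream stream = Surface.ToStream(backbuffer, ImageFileFormat.Bmp))
{
    return new System.Drawing.Bitmap(stream);
}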

How do I get slot number for texture by name?

Using Direct3D 11 and SharpDX, given the name of a texture map as declared in the shader, how do I know which slot to assign my Sampler and TextureView to?
Documentation indicates I can use ShaderReflection; however, it is not clear how...
void SetTexture(MyShaderProgram shaderProgram, string name, MyTextureMap textureMap)
{
    byte[] byteCode = shaderProgram.ByteCode;
    var shaderReflection = new SharpDX.D3DCompiler.ShaderReflection(byteCode);
    var slot = ?
    PixelShaderStage pixelShader = shaderProgram.PixelShader;
    pixelShader.SetSampler(slot, textureMap.Sampler);
    pixelShader.SetShaderResource(slot, textureMap.TextureView);
}
It seems that the BindPoint of the shader's InputBindingDescription serves this purpose. Thus, this may suffice:
var slot = shaderReflection.GetResourceBindingDescription(name).BindPoint;
It may also be worth noting that technically one should get two bind points: one for the sampler and one for the texture view. As they are often declared side by side, this solution may be sufficient.
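In code, the two-bind-point lookup might read as follows ("MySampler" and "MyTexture" are hypothetical names standing in for whatever the HLSL declares):

// Look up the sampler and the texture as separate bindings, since their
// register indices are not guaranteed to match.
int samplerSlot = shaderReflection.GetResourceBindingDescription("MySampler").BindPoint;
int textureSlot = shaderReflection.GetResourceBindingDescription("MyTexture").BindPoint;
pixelShader.SetSampler(samplerSlot, textureMap.Sampler);
pixelShader.SetShaderResource(textureSlot, textureMap.TextureView);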

Using matrices to transform the Three.js scene graph

I'm attempting to load a scene from a file into Three.js (custom format, not one that Three.js supports). This particular format describes a scene graph where each node in the tree has a transform specified as a 4x4 matrix. The process for pushing it into Three.js looks something like this:
// Yeah, this is javascript-like pseudocode
function processNodes(srcNode, parentThreeObj) {
    for (child in srcNode.children) {
        var threeObj = new THREE.Object3D();

        // This line is the problem
        threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));

        for (mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeObj.add(threeMesh);
        }

        parentThreeObj.add(threeObj);
        processNodes(child, threeObj); // And recurse!
    }
}
Or at least that's what I'd like it to be. As I pointed out, the applyMatrix line doesn't work the way I would expect. The majority of the scene looks okay, but certain elements that have been rotated aren't aligned properly (while others are, which is strange).
Looking through the COLLADA loader (which does approximately the same thing I'm trying to do), it appears that they decompose the matrix into a translate/rotate/scale and apply each individually. I tried that in place of the applyMatrix shown above:
var props = threeMatrixFromSrcMatrix(child.matrix).decompose();
threeObj.useQuaternion = true;
threeObj.position = props[ 0 ];
threeObj.quaternion = props[ 1 ];
threeObj.scale = props[ 2 ];
This, once again, yields a scene where most elements are in the right place but meshes that previously were misaligned have now been transformed into oblivion somewhere and no longer appear at all. So in the end this is no better than the applyMatrix from above.
Looking through several online discussions about the topic it seems that the recommended way to use matrices for your transforms is to apply them directly to the geometry, not the nodes, so I tried that by manually building the transform matrix like so:
function processNodes(srcNode, parentThreeObj, parentMatrix) {
    for (child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        var childMatrix = threeMatrixFromSrcMatrix(child.matrix);
        var objMatrix = new THREE.Matrix4();
        objMatrix.multiply(parentMatrix, childMatrix);

        // Bake the accumulated transform directly into the mesh geometry.
        for (mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeMesh.geometry.applyMatrix(objMatrix);
            threeObj.add(threeMesh);
        }

        parentThreeObj.add(threeObj);
        processNodes(child, threeObj, objMatrix); // And recurse!
    }
}
This actually yields the correct results (minus some quirks with the normals, but I can figure that one out)! That's great, but the problem is that we've now effectively flattened the scene hierarchy: changing the transform on a parent will yield unexpected results on the children, because the full transform stack is now "baked in" to the meshes. In this case that's an unacceptable loss of information about the scene.
So how might one go about telling Three.js to do the same logic, but at the appropriate point in the scene graph?
(Sorry, I would dearly love to post some live code examples but that's unfortunately not an option in this case.)
You can use matrixAutoUpdate = false to skip the Three.js scene-graph position/scale/rotation stuff. Then set object.matrix to the matrix you want and all should be dandy. (Well, it still gets multiplied by the parent node matrices, so if you're using absolute modelview matrices you need to hack the updateMatrixWorld method on Object3D.)
object.matrixAutoUpdate = false;
object.matrix = myMatrix;
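(For the absolute-modelview case, that updateMatrixWorld hack might look roughly like this; a hypothetical sketch that simply skips the parent multiply:)

THREE.Object3D.prototype.updateMatrixWorld = function(force) {
    if (this.matrixAutoUpdate) this.updateMatrix();
    this.matrixWorld.copy(this.matrix); // absolute: ignore parent matrices
    for (var i = 0; i < this.children.length; i++) {
        this.children[i].updateMatrixWorld(force);
    }
};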
Now, if you'd like to have a custom transformation matrix applied on top of the Three.js position/scale/rotation stuff, you need to edit Object3D#updateMatrix to be something like this:
THREE.Object3D.prototype._updateMatrix = THREE.Object3D.prototype.updateMatrix;
THREE.Object3D.prototype.updateMatrix = function() {
    this._updateMatrix();
    if (this.customMatrix != null)
        this.matrix.multiply(this.customMatrix);
};
See https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L209
Sigh...
Altered Qualia pointed out the solution on Twitter within minutes of me posting this.
It's a simple one-line fix: just set matrixAutoUpdate to false on the Object3D instances, and the first code sample works as intended.
threeObj.matrixAutoUpdate = false; // This fixes it
threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
It's always the silly little things that get you...

How do I set up DirectX 9 so that backface culling is off, Z-buffering is on, and Gouraud shading works, for triangle meshes without normals data?

I've been having difficulty identifying the correct parameters for the PresentParameters and the DirectX device, so that there can be both vertex-level Gouraud shading and the use of a Z-buffer. Some triangle meshes work fine; others have background triangles appearing in front of triangles that are closer to the camera.
An example of this is found here: http://gallery.me.com/robert.perkins/100045/zBufferGone. The input data is a simple list of vertices in facets. The winding order of the vertices in each facet is nondeterministic (it comes from various CAD software export functions) and there is no normals data.
The PresentParameters are being set up right now as follows. I realize this is C# instead of C++, but I think it's descriptive enough, and the parameters pass through to the C++ code. This produces the image in the picture; the behavior is the same on the reference device:
pParams = new PresentParameters()
{
    BackBufferWidth = this.ClientSize.Width,
    BackBufferHeight = this.ClientSize.Height,
    AutoDepthStencilFormat = Format.D16,
    EnableAutoDepthStencil = true,
    SwapEffect = SwapEffect.Discard,
    Windowed = true
};

_engineDX9 = new EngineDX9(this, SlimDX.Direct3D9.DeviceType.Hardware, SlimDX.Direct3D9.CreateFlags.SoftwareVertexProcessing, pParams);
_engineDX9.DefaultCamera.NearPlane = 0;
_engineDX9.DefaultCamera.FarPlane = 10;
_engineDX9.D3DDevice.SetRenderState(RenderState.Ambient, false);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.Always);
_engineDX9.BackColor = Color.White;
_engineDX9.FillMode = FillMode.Solid;
_engineDX9.CullMode = Cull.None;
_engineDX9.DefaultCamera.AspectRatio = (float)this.Width / this.Height;
All of my other setup attempts, even on the reference device, return a COM error code ({"D3DERR_INVALIDCALL: Invalid call (-2005530516)"}). What are the correct setup parameters?
EDIT: The C++ class which interfaces with DirectX9 sets defaults like this:
PresentParameters::PresentParameters()
{
    BackBufferWidth = 640;
    BackBufferHeight = 480;
    BackBufferFormat = Format::X8R8G8B8;
    BackBufferCount = 1;
    Multisample = MultisampleType::None;
    MultisampleQuality = 0;
    SwapEffect = SlimDX::Direct3D9::SwapEffect::Discard;
    DeviceWindowHandle = IntPtr::Zero;
    Windowed = true;
    EnableAutoDepthStencil = true;
    AutoDepthStencilFormat = Format::D24X8;
    PresentFlags = SlimDX::Direct3D9::PresentFlags::None;
    FullScreenRefreshRateInHertz = 0;
    PresentationInterval = PresentInterval::Immediate;
}
Where does it return an invalid call?
Edit: I'm assuming it's in the new EngineDX9 call? Have you tried setting a device window handle in the present parameters?
Edit 2: Have you turned on the debug spew in the DirectX control panel to see whether it tells you what the error is?
Edit 3: Have you tried setting BackBufferWidth and BackBufferHeight to 0? What is BackBufferCount set to? It might also be worth trying Format.D24S8 for the depth buffer. It's "possible" your graphics card doesn't support 16-bit (unlikely, though). Have you checked in the caps that the mode you are trying to create is valid? I assume, by the way, that the CLR language you are using automagically sets the parameters you don't set to 0? Personally, I always prefer to be explicit in such cases...
PS: I'm guessing here because I'm a native C++ DX9 coder, not a CLR SlimDX coder.
Edit 4: I'm sure it's the lack of a window handle. I'm probably wrong, but that's the only thing I can see REALLY wrong with your setup. A windowed DX9 device requires a window. By the way, set the width and height to 0 to just use the size of the window you are attaching the device to.
Edit 5: I've really been heading down the wrong route here. There is nothing wrong with the creation of the device that produced your "incorrect" render. Do not mess with the present parameters; they are fine. The main reason you'll have problems with your Z-buffering is that you set the compare function to Always. This means that, regardless of what the Z-buffer contains, the pixel passes and its Z is written into the Z-buffer, overwriting whatever is there already. I'd wager therein lies your Z-buffering problem.
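Concretely (a sketch against the question's own setup): leave everything else alone and change the ZFunc render state from Always to the usual less-or-equal test:

_engineDX9.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
// LessEqual: a pixel passes only if it is at least as close as what the
// Z-buffer already holds, instead of always overwriting it.
_engineDX9.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.LessEqual);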
