How do I get slot number for texture by name? - sharpdx

Using Direct3D 11 and SharpDX, given the name of a Texture Map as declared in the shader, how do I know what slot to assign my Sampler and TextureView to?
Documentation indicates I can use ShaderReflection; however, it is not clear how...
void SetTexture(MyShaderProgram shaderProgram, string name, MyTextureMap textureMap)
{
    byte[] byteCode = shaderProgram.ByteCode;
    var shaderReflection = new SharpDX.D3DCompiler.ShaderReflection(byteCode);
    var slot = ?
    PixelShaderStage pixelShader = shaderProgram.PixelShader;
    pixelShader.SetSampler(slot, textureMap.Sampler);
    pixelShader.SetShaderResource(slot, textureMap.TextureView);
}

It seems that the BindPoint of the shader's InputBindingDescription serves this purpose, so this may suffice:
var slot = shaderReflection.GetResourceBindingDescription(name).BindPoint;
It may also be worth noting that technically one should get two bind points: one for the sampler and one for the texture view. As they are often declared side by side, this solution may be sufficient.
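For example, a minimal sketch of the two-lookup variant inside the SetTexture method above (the HLSL resource names "MyTexture" and "MySampler" are hypothetical placeholders for whatever the shader actually declares, and the reflection call follows the same pattern as the one-liner above):
// Each HLSL resource (the Texture2D and the SamplerState) gets its own bind point,
// so look each one up by the name it is declared with in the shader.
int textureSlot = shaderReflection.GetResourceBindingDescription("MyTexture").BindPoint;
int samplerSlot = shaderReflection.GetResourceBindingDescription("MySampler").BindPoint;
// Bind each resource to its own slot; the two slots only coincide when the
// texture and sampler happen to use the same register index.
pixelShader.SetShaderResource(textureSlot, textureMap.TextureView);
pixelShader.SetSampler(samplerSlot, textureMap.Sampler);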


Differentiating Mode 2 Form 1 from Mode 2 Form 2 on XA CD-ROMs?

I'm developing a library for reading CD-ROMs and the ISO9660 file system.
Long story short, pretty much everything is working except for one thing I'm having a hard time figuring out:
Where does the XA standard define how to differentiate Mode 2 Form 1 from Mode 2 Form 2?
Currently, I am using the following pseudo-code to differentiate between the two forms; it's a naive heuristic and it does work, but it's far from ideal:
var buffer = ... // this is a raw sector of 2352 bytes
var m2F1 = ISector.Cast<SectorMode2Form1>(buffer);
var edc1 = EdcHelper.ComputeBlock(0, buffer, 16, 2056);
var edc2 = BitConverter.ToUInt32(m2F1.Edc, 0);
var isM2F1 = edc1 == edc2;
if (isM2F1) return CdRomSectorMode.Mode2Form1;
// NOTE we cannot reliably check EDC of M2F2 since it's optional
var isForm2 =
    m2F1.SubHeaderCopy1.SubMode.HasFlag(SectorMode2Form1SubHeaderSubMode.Form2) &&
    m2F1.SubHeaderCopy2.SubMode.HasFlag(SectorMode2Form1SubHeaderSubMode.Form2);
if (isForm2) return CdRomSectorMode.Mode2Form2;
return CdRomSectorMode.Mode2Formless;
If you look at software like IsoBuster, it appears to be a track-level property; however, I'm failing to understand where that value would be read from within the track.
I'm actually doing something similar in TypeScript for my PS1 mod tools. It seems like you probably have it correct here, since I'm going to assume your HasFlag check is testing the Form 2 flag (0x20) of the submode byte in the subheader. If that flag is set, you are in Form 2.
So you probably want something like:
const sectorBytes = new Uint8Array(buffer);
if ((sectorBytes[0x012] & 0x20) === 0x20) {
    return CdRomSectorMode.Mode2Form2;
} else {
    return CdRomSectorMode.Mode2Form1;
}
You could of course use the flag code you already have, but that would require casting the buffer to one of your sector types first. This keeps it as generic bytes, checks the flag, and returns the relevant mode.
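For what it's worth, a rough C# sketch of the same byte-level check against the buffer from your question (this assumes buffer is the full 2352-byte raw sector as a byte array, so the submode byte of the first subheader copy sits at offset 0x12: 12 sync bytes + 4 header bytes + 2 bytes into the subheader):
// Bit 0x20 of the submode byte is the Form 2 flag in the XA subheader.
const int subModeOffset = 0x12;
const byte form2Flag = 0x20;
bool isForm2 = (buffer[subModeOffset] & form2Flag) != 0;
return isForm2 ? CdRomSectorMode.Mode2Form2 : CdRomSectorMode.Mode2Form1;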

Printing an image to a dye based application

I am learning about fluid dynamics (and Haxe) and have come across this awesome project, and thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks on one of the shapes and then clicks on the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update( dt:Float ){
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);
    // Smaller number creates a bigger ripple, was 0.016
    dt = 0.090; //#!
    // Physics
    // interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    // step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if (renderParticlesEnabled) {
        particles.step(dt);
    }
    // Below handles the cycling of colours once the mouse is moved,
    // after which the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter, and INVALID_ENUM in particular means that one of the gl.XXX parameters is just flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared as: public var writeToTexture (default, null):GLTexture;. WriteToTexture is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and Snow to build the project. WriteToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set: bindTexture() is what does that, so bind writeToTexture to the TEXTURE_2D target first and then pass that same target (not the texture object) as the first argument to texSubImage2D().

Using matrices to transform the Three.js scene graph

I'm attempting to load a scene from a file into Three.js (custom format, not one that Three.js supports). This particular format describes a scene graph where each node in the tree has a transform specified as a 4x4 matrix. The process for pushing it into Three.js looks something like this:
// Yeah, this is javascript-like pseudocode
function processNodes(srcNode, parentThreeObj) {
    for (child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        // This line is the problem
        threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
        for (mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeObj.add(threeMesh);
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj); // And recurse!
    }
}
Or at least that's what I'd like it to be. As I pointed out, the applyMatrix line doesn't work the way that I would expect. The majority of the scene looks okay, but certain elements that have been rotated aren't aligned properly (while others are, which is strange).
Looking through the COLLADA loader (which does approximately the same thing I'm trying to do) it appears that they decompose the matrix into a translate/rotate/scale and apply each individually. I tried that in place of the applyMatrix shown above:
var props = threeMatrixFromSrcMatrix(child.matrix).decompose();
threeObj.useQuaternion = true;
threeObj.position = props[ 0 ];
threeObj.quaternion = props[ 1 ];
threeObj.scale = props[ 2 ];
This, once again, yields a scene where most elements are in the right place but meshes that previously were misaligned have now been transformed into oblivion somewhere and no longer appear at all. So in the end this is no better than the applyMatrix from above.
Looking through several online discussions about the topic it seems that the recommended way to use matrices for your transforms is to apply them directly to the geometry, not the nodes, so I tried that by manually building the transform matrix like so:
function processNodes(srcNode, parentThreeObj, parentMatrix) {
    for (child in srcNode.children) {
        var threeObj = new THREE.Object3D();
        var childMatrix = threeMatrixFromSrcMatrix(child.matrix);
        var objMatrix = new THREE.Matrix4();
        objMatrix.multiply(parentMatrix, childMatrix);
        for (mesh in child.meshes) {
            var threeMesh = threeMeshFromSrcMesh(mesh);
            threeMesh.geometry.applyMatrix(objMatrix);
            threeObj.add(threeMesh);
        }
        parentThreeObj.add(threeObj);
        processNodes(child, threeObj, objMatrix); // And recurse!
    }
}
This actually yields the correct results! (minus some quirks with the normals, but I can figure that one out) That's great, but the problem is that we've now effectively flattened the scene hierarchy: Changing the transform on a parent will yield unexpected results on the children because the full transform stack is now "baked in" to the meshes. In this case that's an unacceptable loss of information about the scene.
So how might one go about telling Three.js to do the same logic, but at the appropriate point in the scene graph?
(Sorry, I would dearly love to post some live code examples but that's unfortunately not an option in this case.)
You can use matrixAutoUpdate = false to skip the Three.js scene-graph position/scale/rotation stuff. Then set object.matrix to the matrix you want and all should be dandy (well, it still gets multiplied by parent node matrices, so if you're using absolute modelview matrices you need to hack the updateMatrixWorld method on Object3D).
object.matrixAutoUpdate = false;
object.matrix = myMatrix;
Now, if you'd like to have a custom transformation matrix applied on top of the Three.js position/scale/rotation stuff, you need to edit Object3D#updateMatrix to be something like:
THREE.Object3D.prototype._updateMatrix = THREE.Object3D.prototype.updateMatrix;
THREE.Object3D.prototype.updateMatrix = function() {
    this._updateMatrix();
    if (this.customMatrix != null)
        this.matrix.multiply(this.customMatrix);
};
See https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L209
Sigh...
Altered Qualia pointed out the solution on Twitter within minutes of me posting this.
It's a simple one-line fix: Just set matrixAutoUpdate to false on the Object3D instances and the first code sample works as intended.
threeObj.matrixAutoUpdate = false; // This fixes it
threeObj.applyMatrix(threeMatrixFromSrcMatrix(child.matrix));
It's always the silly little things that get you...

Actionscript + Google Maps API Memory Leak

I've never used ActionScript before, but I've just had to dive into it in order to get a map working.
I'm using the following code to add a map marker, replacing a previous one if one exists:
public var tracer:Array = new Array();
public var tracerLng:Number = 0;

for (var i:Number = 1; i < 64000; i++)
{
    // Check if there is already a marker; if so, get rid of it
    if (tracerLng > 0) {
        map.removeOverlay(tracer[0]);
        tracer[0] = null;
        tracer.pop();
    }
    // Set up a marker
    var trackMrk:Marker = new Marker(
        new LatLng(_lat, _lng),
        new MarkerOptions({
            strokeStyle: new StrokeStyle({color: 0x987654}),
            fillStyle: new FillStyle({color: 0x223344, alpha: 0.8}),
            radius: 12,
            hasShadow: true
        })
    );
    // Add the marker to the array and show it on the map
    tracerLng = tracer.push(trackMrk);
    map.addOverlay(tracer[0]);
}
My first problem is with running this code (the 64,000 repeats are for testing; the final application won't need to run quite THAT many times). Either way, memory usage increases by about 4 kB/s - how do I avoid that happening?
Secondly - could anyone advise me on how to make that program more graceful?
Thanks in advance for any advice.
This isn't a memory leak, it's probably the result of created events - enter frames, mouse events, custom events etc. Provided that your memory doesn't keep going up and up forever, it's nothing to be worried about - it'll get garbage collected in due course.
Some points on your code:
The tracer Array doesn't seem to do anything - you only seem to be holding one thing in there at a time, so an array makes no sense. If you need an Array, use Vector instead. It's smaller and faster. More so if you create one with a specific length.
Don't create a new Marker unless you need one. Reuse old objects. Learn about object pooling: http://help.adobe.com/en_US/as3/mobile/WS948100b6829bd5a6-19cd3c2412513c24bce-8000.html or http://lostinactionscript.com/2008/10/30/object-pooling-in-as3/
The LatLng and MarkerOptions (including the stroke and fill objects) don't seem to change (I'm assuming the LatLng object lets you set a new position). If that's the case, don't create new ones when you don't need to. If you need to create new ones, StrokeStyle and FillStyle seem good candidates for a "create once, use everywhere" policy.
Create a destroy() function or similar in your Marker class and explicitly call it when you need to delete one (just before setting it to null or popping it from the array). In the destroy() function, null out any members that aren't base types (int, Number, String, etc.). Garbage collection runs using a reference-counting method and a mark-and-sweep method. Ideally, you want everything collected via reference counting, as it's collected sooner and avoids stalls in your program.
I explain memory management in AS3 a bit more here: http://divillysausages.com/blog/tracking_memory_leaks_in_as3
Also included is a class that helps you track down memory leaks, if there are any.

How do I set up DirectX 9 so that backface culling is off, z-buffering is on, and gouraud shading works, for triangle meshes without normals data?

I've been having difficulty identifying the correct parameters for the PresentParameters and the DirectX device so that there can be both vertex-level Gouraud shading and the use of a z-buffer. Some triangle meshes work fine; others have background triangles appearing in front of triangles which are closer to the camera.
An example of this is found here: http://gallery.me.com/robert.perkins/100045/zBufferGone. The input data is a simple list of vertices in facets. The winding order of the vertices in each facet is nondeterministic (comes from various CAD software export functions) and there is no normals data.
The PresentParameters are being set up right now as follows. I realize this is C# instead of C++ but I think it's descriptive enough, and the parameters pass through to C++ code. This produces the image in the picture; the behavior is the same on the Reference device:
pParams = new PresentParameters()
{
    BackBufferWidth = this.ClientSize.Width,
    BackBufferHeight = this.ClientSize.Height,
    AutoDepthStencilFormat = Format.D16,
    EnableAutoDepthStencil = true,
    SwapEffect = SwapEffect.Discard,
    Windowed = true
};
_engineDX9 = new EngineDX9(this, SlimDX.Direct3D9.DeviceType.Hardware, SlimDX.Direct3D9.CreateFlags.SoftwareVertexProcessing, pParams);
_engineDX9.DefaultCamera.NearPlane = 0;
_engineDX9.DefaultCamera.FarPlane = 10;
_engineDX9.D3DDevice.SetRenderState(RenderState.Ambient, false);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.Always);
_engineDX9.BackColor = Color.White;
_engineDX9.FillMode = FillMode.Solid;
_engineDX9.CullMode = Cull.None;
_engineDX9.DefaultCamera.AspectRatio = (float)this.Width / this.Height;
All of my other setup attempts, even on the reference device, return a COM error code ({"D3DERR_INVALIDCALL: Invalid call (-2005530516)"}). What are the correct setup parameters?
EDIT: The C++ class which interfaces with DirectX9 sets defaults like this:
PresentParameters::PresentParameters()
{
    BackBufferWidth = 640;
    BackBufferHeight = 480;
    BackBufferFormat = Format::X8R8G8B8;
    BackBufferCount = 1;
    Multisample = MultisampleType::None;
    MultisampleQuality = 0;
    SwapEffect = SlimDX::Direct3D9::SwapEffect::Discard;
    DeviceWindowHandle = IntPtr::Zero;
    Windowed = true;
    EnableAutoDepthStencil = true;
    AutoDepthStencilFormat = Format::D24X8;
    PresentFlags = SlimDX::Direct3D9::PresentFlags::None;
    FullScreenRefreshRateInHertz = 0;
    PresentationInterval = PresentInterval::Immediate;
}
Where does it return an invalid call?
Edit: I'm assuming in the new EngineDX9 call? Have you tried setting a device window handle in the present parameters?
Edit 2: Have you turned on the debug spew in the DirectX control panel to see whether it tells you what the error is?
Edit 3: Have you tried setting BackBufferWidth and BackBufferHeight to 0? What is BackBufferCount set to? It might also be worth trying "Format.D24S8" for the depth buffer. It's "possible" your graphics card doesn't support 16-bit (unlikely, though). Have you checked in the caps that the mode you are trying to create is valid? I assume, btw, that the CLR language you are using automagically sets the parameters you don't set to 0? I, personally, always prefer to be explicit in such cases ....
PS: I'm guessing here because I'm a native C++ DX9 coder, not a CLR SlimDX coder ...
Edit 4: I'm sure it's the lack of a window handle ... I'm probably wrong, but that's the only thing I can see REALLY wrong with your setup. A windowed DX9 device requires a window. Btw, set the width and height to 0 to just use the size of the window you are attaching the device to ...
Edit 5: I've really been heading down the wrong route here. There is nothing wrong with the device creation that produced your "incorrect" output. Do not mess with the present parameters; they are fine. The main reason you'll have problems with your Z-buffering is that you set the compare function to Always. This means that, regardless of what the z-buffer contains, it passes the pixel and writes its z into the z-buffer, overwriting whatever is there already. I'd wager therein lies your Z-buffering problem.
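A minimal sketch of that fix, reusing the render-state calls from the question (Compare.LessEqual is the conventional depth test; the enum spellings are assumed to match the SlimDX names already used above):
// Keep the z-buffer enabled and writable, but only pass pixels that are at
// least as close as what is already stored, instead of always passing.
_engineDX9.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.LessEqual);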
