Using SetVertexDeclaration with the fixed-function pipeline in DirectX 9

I am trying to use my own vertex structure: upload the vertices into a vertex buffer and the indices into an index buffer (without an FVF code), set up the vertex declaration and stream source, and draw them with DrawIndexedPrimitive using the fixed-function pipeline (not FVF).
Do I have to write my own shader to use SetVertexDeclaration in DirectX 9?
Can I use a customised vertex structure with SetVertexDeclaration and the fixed-function pipeline?
If I can, are there any restrictions on combining the fixed-function pipeline with a vertex declaration?
Customised vertex structure:
struct PosNormTexCoord
{
    float x, y, z;
    float nx, ny, nz;
    float tu, tv;
};

Unfortunately, you can't use the fixed-function pipeline with a custom vertex format. Your structure can be expressed as an FVF, though, so why do you want to skip it?
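For reference, here is a minimal sketch (assuming a standard D3D9 setup; device is a placeholder for your IDirect3DDevice9*) of how the PosNormTexCoord layout above maps to an FVF and to an equivalent vertex declaration:

#include <d3d9.h>

// The struct above is expressible as an FVF...
const DWORD PosNormTexCoordFVF = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1;

// ...and as a vertex declaration describing the same layout.
const D3DVERTEXELEMENT9 PosNormTexCoordDecl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

void SetLayout(IDirect3DDevice9* device)
{
    IDirect3DVertexDeclaration9* decl = nullptr;
    device->CreateVertexDeclaration(PosNormTexCoordDecl, &decl);
    device->SetVertexDeclaration(decl);
    // or, equivalently for this layout: device->SetFVF(PosNormTexCoordFVF);
}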

Related

Is there a Dx11-Dx12 method to support a geometry shader that includes SV_COVERAGE

I am attempting to expand OIT support to a geometry shader. The OIT logic requires the pixel shader to take an input with the SV_COVERAGE semantic; however, this gives an fxc compile error X4502 ("invalid input semantic 'SV_COVERAGE'") when compiling the geometry shader. So DX geometry shaders do not support SV_COVERAGE? I could find very little on the net related to this problem. Are there any known work-arounds? I may need to avoid the geometry shader and only support OIT for this scene with a much larger vertex/index buffer, which is a shame. I was thinking SV_COVERAGE is really only needed at the output of the GS, but DX geometry shaders require a single inout parameter declaration. Any comments appreciated.

Random access to D3D11 buffer with R8G8B8A8_UNorm format in HLSL

I have a D3D11 buffer with a few million elements that is supposed to hold data in the R8G8B8A8_UNorm format.
The desired behavior is the following: One shader calculates a vec4 and writes it to the buffer in a random access pattern. In the next pass, another shader reads the data in a random access pattern and processes them further.
My best guess would be to create an UnorderedAccessView with the R8G8B8A8_UNorm format. But how do I declare the RWBuffer<?> in HLSL, and how do I write to and read from it? Is it necessary to declare it as RWBuffer<uint> and do the packing from vec4 to uint manually?
In OpenGL I would create a buffer and a buffer texture. Then I can declare an imageBuffer with the rgba8 format in the shader, access it with imageLoad and imageStore, and the hardware does all the conversions for me. Is this possible in D3D11?
This is a little tricky due to a lot of different gotchas, but you should be able to do something like this.
In your shader that writes to the buffer declare:
RWBuffer<float4> WriteBuf : register( u1 );
Note that it is bound to register u1 instead of u0. Unordered access views (UAV) must start at slot 1 because the u# register is also used for render targets.
To write to the buffer just do something like:
WriteBuf[0] = float4(0.5, 0.5, 0, 1);
Note that you must write all 4 values at once.
In your C++ code, you must create an unordered access buffer, and bind it to a UAV. You can use the DXGI_FORMAT_R8G8B8A8_UNORM format. When you write 4 floats to it, the values will automatically be converted and packed. The UAV can be bound to the pipeline using OMSetRenderTargetsAndUnorderedAccessViews.
In your shader that reads from the buffer declare a read only buffer:
Buffer<float4> ReadBuf : register( t0 );
Note that this buffer uses t0 because it will be bound as a shader resource view (SRV) instead of UAV.
To access the buffer use something like:
float4 val = ReadBuf[0];
In your C++ code, you can bind the same buffer you created earlier to an SRV instead of a UAV. The SRV can be bound to the pipeline using PSSetShaderResources and can also be created with DXGI_FORMAT_R8G8B8A8_UNORM.
You can't bind both the SRV and UAV using the same buffer to the pipeline at the same time. So you must bind the UAV first and run your first shader pass. Then unbind the UAV, bind SRV, and run the second shader pass.
There are probably other ways to do this as well. Note that all of this requires shader model 5.
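To make the C++ side concrete, here is a minimal sketch under the assumptions above (error handling omitted; device, context, rtv, dsv and elementCount are placeholders for your own objects): one buffer written through a UAV in the first pass and read through an SRV in the second.

#include <d3d11.h>

void CreatePackedBufferViews(ID3D11Device* device, UINT elementCount,
                             ID3D11Buffer** buffer,
                             ID3D11UnorderedAccessView** uav,
                             ID3D11ShaderResourceView** srv)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = elementCount * 4; // 4 bytes per R8G8B8A8 element
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
    device->CreateBuffer(&desc, nullptr, buffer);

    // UAV for the writing pass; the typed format makes the hardware pack the float4.
    D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
    uavDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uavDesc.Buffer.NumElements = elementCount;
    device->CreateUnorderedAccessView(*buffer, &uavDesc, uav);

    // SRV for the reading pass, over the very same buffer.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.NumElements = elementCount;
    device->CreateShaderResourceView(*buffer, &srvDesc, srv);
}

void DrawBothPasses(ID3D11DeviceContext* context,
                    ID3D11RenderTargetView* rtv, ID3D11DepthStencilView* dsv,
                    ID3D11UnorderedAccessView* uav, ID3D11ShaderResourceView* srv)
{
    // Pass 1: UAV bound at slot 1, matching register(u1) in the writing shader.
    context->OMSetRenderTargetsAndUnorderedAccessViews(1, &rtv, dsv, 1, 1, &uav, nullptr);
    // ... draw ...

    // Pass 2: unbind the UAV, then bind the SRV to t0 for the reading shader.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->OMSetRenderTargetsAndUnorderedAccessViews(1, &rtv, dsv, 1, 1, &nullUAV, nullptr);
    context->PSSetShaderResources(0, 1, &srv);
    // ... draw ...
}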

How many textures can I use in a webgl fragment shader?

Simple question but I can't find the answer in a specification anywhere. I'm probably missing the obvious answer somewhere.
How many textures can I use at once in a WebGL Fragment shader? If it's variable, what is a reasonable number to assume for PC use? (Not so interested in mobile).
I need at least 23 in one of my shaders so want to know if I'll be able to do that before I start work on the code or if I'll need to do multiple passes.
I'm pretty sure you can find out with
var maxTexturesInFragmentShader = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
WebGL (OpenGL ES 2.0) only requires 8 minimum for fragment shaders.
Note that the number of textures you can access is separate from the number of times you can access a texture. So, for example, if you wanted to access 2 textures using only 1 texture unit, you could combine the textures and adjust your UV coords: put the contents of 2 textures [A] and [B] side by side into one texture [AB] and then
vec4 colorFromTexA = texture2D(samplerAB, uvForTexA * vec2(0.5, 1.0));
vec4 colorFromTexB = texture2D(samplerAB, uvForTexB * vec2(0.5, 1.0) + vec2(0.5, 0.0));
Of course you'd probably need to set the filtering to not use mips, etc.
You can find the answer at http://webglstats.com/webgl/parameter/MAX_TEXTURE_IMAGE_UNITS
It seems you are out of luck as the stats suggest that most only have 16.

Direct3D 10 Hardware Instancing using Structured Buffers

I am trying to implement hardware instancing with Direct3D 10+ using Structured Buffers for the per instance data but I've not used them before.
I understand how to implement instancing when combining the per vertex and per instance data into a single structure in the Vertex Shader - i.e. you bind two vertex buffers to the input assembler and call the DrawIndexedInstanced function.
Can anyone tell me the procedure for binding the input assembler and making the draw call etc. when using Structured Buffers with hardware instancing? I can't seem to find a good example of it anywhere.
It's my understanding that Structured Buffers are bound as ShaderResourceViews, is this correct?
Yup, that's exactly right. Just don't put any per-instance vertex attributes in your vertex buffer or your input layout; instead, create a ShaderResourceView of the buffer and set it on the vertex shader. You can then use the SV_InstanceID semantic to query which instance you're on and fetch the relevant struct from your buffer.
StructuredBuffers are very similar to normal buffers. The only differences are that you specify the D3D11_RESOURCE_MISC_BUFFER_STRUCTURED flag at creation, fill in StructureByteStride, and when you create a ShaderResourceView the Format is DXGI_FORMAT_UNKNOWN (the format is specified implicitly by the struct in your shader).
StructuredBuffer<MyStruct> myInstanceData : register(t0);
is the syntax in HLSL for a StructuredBuffer and you just access it using the [] operator like you would an array.
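To make that concrete, here is a minimal C++ sketch using the D3D11 names mentioned above (error handling omitted; InstanceData, the counts, device and context are placeholders): the per-instance data lives in a structured buffer that is bound as an SRV to the vertex shader.

#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>

struct InstanceData
{
    DirectX::XMFLOAT4X4 world; // must match MyStruct in the shader
};

ID3D11ShaderResourceView* CreateInstanceSRV(ID3D11Device* device,
                                            const std::vector<InstanceData>& instances)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = UINT(sizeof(InstanceData) * instances.size());
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(InstanceData);

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = instances.data();

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &init, &buffer);

    // Format must be DXGI_FORMAT_UNKNOWN for a structured buffer view.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_UNKNOWN;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.NumElements = UINT(instances.size());

    ID3D11ShaderResourceView* srv = nullptr;
    device->CreateShaderResourceView(buffer, &srvDesc, &srv);
    buffer->Release(); // the view keeps its own reference
    return srv;
}

void DrawInstances(ID3D11DeviceContext* context, ID3D11ShaderResourceView* instanceSRV,
                   UINT indexCount, UINT instanceCount)
{
    // The input layout contains only per-vertex attributes; instance data comes from t0.
    context->VSSetShaderResources(0, 1, &instanceSRV);
    context->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
}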
Is there anything else that's unclear?

How to design a simple GLSL wrapper for shader use

UPDATE: Because I needed something right away, I've created a simple shader wrapper that does the sort of thing I need. You can find it here: ShaderManager on GitHub. Note that it's designed for Objective-C / iOS, so may not be useful to everyone. If you have any suggestions for design improvements, please let me know!
Original Problem:
I'm new to using GLSL shaders. I'm familiar enough with the GLSL language and the OpenGL interface, but I'm having trouble designing a simple API through which to use shaders.
OpenGL's C interface to interact with shaders seems cumbersome. I can't seem to find any tutorials on the net that cover the API design of such things.
My question is this: does any one have a good, simple, API design or pattern to wrap the OpenGL shader program API?
Take the following simple example. Say I have one vertex shader that just emulates fixed functionality, and two fragment shaders - one for drawing smooth rectangles and one for drawing smooth circles. I have the following files:
Shader.vsh : Simple vertex shader, with the following inputs/outputs:
-- Uniforms: mat4 Model, mat4 View, mat4 Projection
-- Attributes: vec4 Vertex, vec2 TexCoord, vec4 Color
-- Varying: vec4 vColor, vec2 vTexCoord
Square.fsh : Fragment shader for drawing squares based on tex coord / color
Circle.fsh : Fragment shader for drawing circles based on tex coord / color
Basic Linking
Now what is the standard way to use these? Do I link the above shaders into two OpenGL shader programs? That is:
Shader.vsh + Square.fsh = SquareProgram
Shader.vsh + Circle.fsh = CircleProgram
Or do I instead create one big program where the fragment shaders check some conditional uniform variables and call out to a shader function to generate their result. E.g:
Shader.vsh + Square.fsh + Circle.fsh + Main.fsh = ShaderProgram
//Main.fsh here would simply check whether to call out to square or circle
With two individual programs I would presumably need to call
glUseProgram(CircleProgram); or glUseProgram(SquareProgram);
before each type of element I want to draw. I would then need to set the uniforms (Model / View / Projection) and attributes of each program before I use it. This seems so unwieldy.
With the single ShaderProgram option I would still need to set some sort of boolean switch (circle or square) in the fragment shader that would be checked before drawing each pixel. This also seems complicated.
As a side note, am I allowed to link two fragment shaders, each with a main() function, into one shader program? How would OpenGL know which one to call?
Setting Variables
The calls:
glUniform*
glVertexAttribPointer
Are used to set uniforms and attribute pointer locations on the current program.
Different classes and structures may need to access and set variables on the current shader (or change the current shader) from different places in the code. I can't think of a nice way to do this that decouples the shader code from the code that wants to use it.
That is, each shape I want to draw will need to set vertex and texture coordinate attributes - requiring the handles to those attributes generated by OpenGL.
The camera will need to set its projection matrix as a uniform in the vertex shader, while the class managing the model matrix stack will need to set its own uniform in the vertex shader.
Changing shaders part-way through drawing a scene would mean that all these classes will need to set their uniforms and attributes again.
How do most people design around this?
A global dictionary of shaders accessed by handle or name, with getters and setters for their parameters?
An OO design with shader objects that each have parameters?
I've looked at the following wrappers:
Jon's Teapot: GLSL Shader Manager - This wraps shaders in C++ classes. It seems like little more than a wrapper that enforces OO principles on a C API, resulting in a C++ API that is much the same.
I am after any sort of design that simplifies the use of Shader programs, and am not concerned about the particular paradigm used (OO, procedural, and so on)
I see this is tagged with iOS, so if you're partial to Objective-C, I'd take a good look at Jeff LaMarche's GLProgram wrapper class, which he describes here and has source available here. I've used it within my own applications to simplify some of the shader program setup, and to make the code a little cleaner.
For example, you can set up a shader and its attributes and uniforms using code like the following:
sphereDepthProgram = [[GLProgram alloc] initWithVertexShaderFilename:@"SphereDepth" fragmentShaderFilename:@"SphereDepth"];
[sphereDepthProgram addAttribute:@"position"];
[sphereDepthProgram addAttribute:@"inputImpostorSpaceCoordinate"];
if (![sphereDepthProgram link])
{
    NSLog(@"Depth shader link failed");
    NSString *progLog = [sphereDepthProgram programLog];
    NSLog(@"Program Log: %@", progLog);
    NSString *fragLog = [sphereDepthProgram fragmentShaderLog];
    NSLog(@"Frag Log: %@", fragLog);
    NSString *vertLog = [sphereDepthProgram vertexShaderLog];
    NSLog(@"Vert Log: %@", vertLog);
    [sphereDepthProgram release];
    sphereDepthProgram = nil;
}
sphereDepthPositionAttribute = [sphereDepthProgram attributeIndex:@"position"];
sphereDepthImpostorSpaceAttribute = [sphereDepthProgram attributeIndex:@"inputImpostorSpaceCoordinate"];
sphereDepthModelViewMatrix = [sphereDepthProgram uniformIndex:@"modelViewProjMatrix"];
sphereDepthRadius = [sphereDepthProgram uniformIndex:@"sphereRadius"];
When you need to use the shader program, you then do something like the following:
[sphereDepthProgram use];
This doesn't address the issues of branching vs. individual shaders that you bring up above, but Jeff's implementation does provide a nice encapsulation of some of the OpenGL ES boilerplate shader setup code.
Basic Linking:
There is no standard way here. There are at least 2 general approaches:
Monolithic - one shader covers many cases, using uniform boolean switches. These branches don't hurt performance because the condition result is constant for any fragment group (actually, for all of the fragments).
Multi-object program compositing - the main shader declares a set of entry points (like 'get_diffuse', 'get_specular', etc.), which are implemented in separate shader objects attached to the program. This implies a separate program for each object, but any kind of caching helps; see the sketch below.
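Here is a sketch of the second approach (desktop GL only: OpenGL ES 2.0 does not allow attaching multiple shader objects of the same type to one program, so on iOS the pieces would have to be combined at the source level; the sources and names below are illustrative). The main fragment shader declares get_color() without defining it, and each variant shader object supplies a definition, so the choice is made at link time:

#include <GL/glew.h> // or your platform's GL header

static const char* mainFragSrc =
    "varying vec2 vTexCoord;\n"
    "varying vec4 vColor;\n"
    "vec4 get_color(vec2 uv, vec4 color);\n" // entry point, defined in another shader object
    "void main() { gl_FragColor = get_color(vTexCoord, vColor); }\n";

static const char* circleFragSrc =
    "vec4 get_color(vec2 uv, vec4 color) {\n"
    "    float d = distance(uv, vec2(0.5));\n"
    "    return color * (1.0 - smoothstep(0.45, 0.5, d));\n"
    "}\n";

static GLuint CompileShader(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    return shader;
}

// One program per variant: Shader.vsh + Main.fsh + Circle.fsh (or Square.fsh).
GLuint BuildProgram(const char* vertexSrc, const char* variantFragSrc)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, CompileShader(GL_VERTEX_SHADER, vertexSrc));
    glAttachShader(program, CompileShader(GL_FRAGMENT_SHADER, mainFragSrc));
    glAttachShader(program, CompileShader(GL_FRAGMENT_SHADER, variantFragSrc));
    glLinkProgram(program);
    return program;
}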
Setting Variables: Uniforms
I will just describe the approach I developed.
Each shader program has a list of uniform dictionaries, which is used to fill the uniform source list upon program (re-)linking. When the program is activated, it goes through the uniform list, fetches values from their sources, and uploads them to GL. As a result, the data is not directly tied to the shader program, and whatever manages the data does not need to care about the program using it.
One of these dictionaries can be, for example, a core one containing the model and view transformations, the camera projection, and maybe something else.
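A minimal C++ sketch of that idea (all names here are illustrative and not taken from kri): each dictionary maps uniform names to value providers, and on activation the program resolves locations and uploads whatever each provider currently returns.

#include <GL/glew.h> // or your platform's GL header
#include <functional>
#include <map>
#include <string>
#include <vector>

struct UniformSource
{
    std::function<const float*()> getMatrix; // returns a pointer to 16 floats (a 4x4 matrix)
};

using UniformDictionary = std::map<std::string, UniformSource>;

class ShaderProgram
{
public:
    explicit ShaderProgram(GLuint program) : program_(program) {}

    void AddDictionary(const UniformDictionary* dict) { dictionaries_.push_back(dict); }

    // Called whenever the program becomes the current one.
    void Activate() const
    {
        glUseProgram(program_);
        for (const UniformDictionary* dict : dictionaries_)
            for (const auto& [name, source] : *dict)
            {
                GLint loc = glGetUniformLocation(program_, name.c_str());
                if (loc >= 0)
                    glUniformMatrix4fv(loc, 1, GL_FALSE, source.getMatrix());
            }
    }

private:
    GLuint program_;
    std::vector<const UniformDictionary*> dictionaries_;
};

The camera can then register a dictionary containing the projection matrix, the matrix-stack manager one containing the model/view matrices, and neither has to know which program is currently bound.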
Setting Variables: Attributes
First of all, the shader program is the attribute consumer, so it is what has to extract these attributes from a mesh (or any other data storage) and upload them to GL in the way it needs. It should also make sure that the types of the provided attributes match the requested types.
When using the monolithic shader approach, there is a possible unpleasant situation where one of the disabled branches requires a vertex attribute that is not provided. I would advise using another attribute's data to supply the missing one, because we don't care about the actual values in this case.
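A sketch of the consumer side (again, the names are illustrative): the program looks up each attribute it needs by name in a generic mesh description and binds it itself, so the mesh knows nothing about attribute locations.

#include <GL/glew.h> // or your platform's GL header
#include <map>
#include <string>

struct MeshAttribute
{
    GLint components;   // e.g. 3 for the position
    GLenum type;        // e.g. GL_FLOAT
    GLsizei stride;     // byte stride between vertices
    const void* offset; // byte offset into the bound VBO
};

struct Mesh
{
    GLuint vbo;
    std::map<std::string, MeshAttribute> attributes; // "Vertex", "TexCoord", "Color", ...
};

void BindMeshToProgram(GLuint program, const Mesh& mesh)
{
    glBindBuffer(GL_ARRAY_BUFFER, mesh.vbo);
    for (const auto& [name, attr] : mesh.attributes)
    {
        GLint loc = glGetAttribLocation(program, name.c_str());
        if (loc < 0)
            continue; // the program does not use this attribute
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, attr.components, attr.type, GL_FALSE,
                              attr.stride, attr.offset);
    }
}

Type checking and the fallback for missing attributes mentioned above could be layered on top of this.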
P.S.
You can find an actual implementation of these ideas here: http://code.google.com/p/kri/
