Draw multiple models in WebGL - buffer

Problem constraints:
I am not using three.js or similar, but pure WebGL.
WebGL 2 is not an option either.
I have a couple of models stored as Vertices and Normals arrays (coming from an STL reader).
So far there is no problem when both models are the same size. Whenever I load 2 different models, an error message is shown in the browser:
WebGL: INVALID_OPERATION: drawArrays: attempt to access out of bounds arrays
so I suspect I am not handling multiple buffers correctly.
The models are loaded using the following TypeScript method:
public AddModel(model: Model)
{
    this.models.push(model);

    model.VertexBuffer = this.gl.createBuffer();
    model.NormalsBuffer = this.gl.createBuffer();

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);
    model.CoordLocation = this.gl.getAttribLocation(this.shaderProgram, "coordinates");
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.CoordLocation);

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);
    model.NormalLocation = this.gl.getAttribLocation(this.shaderProgram, "vertexNormal");
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.NormalLocation);
}
Once the models are loaded, the Render method is called to draw all of them:
public Render(viewMatrix: Matrix4, perspective: Matrix4)
{
    this.gl.uniformMatrix4fv(this.viewRef, false, viewMatrix);
    this.gl.uniformMatrix4fv(this.perspectiveRef, false, perspective);
    this.gl.uniformMatrix4fv(this.normalTransformRef, false, viewMatrix.NormalMatrix());

    // Clear the canvas
    this.gl.clearColor(0, 0, 0, 0);
    this.gl.viewport(0, 0, this.canvas.width, this.canvas.height);
    this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);

    // Draw the triangles
    if (this.models.length > 0)
    {
        for (var i = 0; i < this.models.length; i++)
        {
            var model = this.models[i];
            this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
            this.gl.enableVertexAttribArray(model.NormalLocation);
            this.gl.enableVertexAttribArray(model.CoordLocation);
            this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
            this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
            this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
            this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
        }
    }
}
One model works just fine. Two cloned models also work OK. Different models fail with the error mentioned.
What am I missing?

The normal way to use WebGL:

At init time

    for each shader program
        create and compile vertex shader
        create and compile fragment shader
        create program, attach shaders, link program
    for each model
        for each type of vertex data (positions, normals, colors, texcoords)
            create a buffer
            copy data to buffer
        create textures

Then at render time

    for each model
        use the shader program appropriate for the model
        bind buffers, enable and set up attributes
        bind textures and set uniforms
        call drawArrays or drawElements
But looking at your code, it binds buffers and enables and sets up attributes at init time instead of at render time. Since vertexAttribPointer attaches whatever buffer is currently bound to ARRAY_BUFFER to the attribute being configured, each AddModel call repoints both attributes at the newest model's buffers. Render does rebind the position buffer before drawing, but it never rebinds the normals buffer, so the normal attribute keeps reading from the last loaded model's NormalsBuffer; as soon as a model has more vertices than that last one, drawArrays reads out of bounds.
Maybe see this article and this one
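To make that concrete, here is a minimal sketch of the fix against the code in the question (with one hypothetical renaming: coordLocation and normalLocation are looked up once with getAttribLocation after linking and stored on the renderer, since they belong to the program, not to a model). At init time, only create and fill the buffers; do all binding and attribute setup in the draw loop:

public AddModel(model: Model)
{
    this.models.push(model);

    // Create the buffers and upload the data; nothing else.
    model.VertexBuffer = this.gl.createBuffer();
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);

    model.NormalsBuffer = this.gl.createBuffer();
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);
}

// Inside Render, for each model: point both attributes at *this* model's
// buffers before drawing it.
for (var i = 0; i < this.models.length; i++)
{
    var model = this.models[i];

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.enableVertexAttribArray(this.coordLocation);
    this.gl.vertexAttribPointer(this.coordLocation, 3, this.gl.FLOAT, false, 0, 0);

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.enableVertexAttribArray(this.normalLocation);
    this.gl.vertexAttribPointer(this.normalLocation, 3, this.gl.FLOAT, false, 0, 0);

    this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
    this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
    this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
}

With that, each drawArrays call pulls both positions and normals from the model currently being drawn, so differently sized models no longer run out of bounds.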

Related

Metal shader's second buffer attribute offset isn't working

I'm trying to render a mesh with an additional attribute present in the vertex data, but I'm noticing that the offset value I've set in the vertex descriptor for this attribute doesn't seem to be respected. It's acting like the offset is zero, thus pulling in vertex values instead of the data I'm looking for.
My vertex data is defined like so:
vertices      metadata
0, 1, 0, 1,   1, 0,
0, 1, 0, 0,   4, 0,
In the shader, I pull this in like so:
typedef struct {
    float4 data  [[attribute(0)]];
    float2 index [[attribute(1)]];
} Vertex;

vertex ColorInOut vertexShader(Vertex in [[stage_in]],
                               constant VertexShaderUniforms & u [[ buffer(2) ]]) {...}
I'm then setting up the vertex descriptor to handle this format:
auto _mtlVertexDescriptor = [[MTLVertexDescriptor alloc] init];
_mtlVertexDescriptor.attributes[0].format = MTLVertexFormatFloat4;
_mtlVertexDescriptor.attributes[0].offset = 0;
_mtlVertexDescriptor.attributes[0].bufferIndex = 0;
_mtlVertexDescriptor.attributes[1].format = MTLVertexFormatFloat2;
_mtlVertexDescriptor.attributes[1].offset = sizeof(vector_float4);
_mtlVertexDescriptor.attributes[1].bufferIndex = 0;
_mtlVertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
_mtlVertexDescriptor.layouts[0].stepRate = 1;
_mtlVertexDescriptor.layouts[0].stride = sizeof(vector_float4) + sizeof(vector_float2);
In the Metal debugger, I'm noticing that instead of the data entries above, I'm seeing the following output in the Geometry viewer:
vertices      metadata
0, 1, 0, 1,   0, 1,
0, 1, 0, 0,   0, 1,
As it might be important to this situation, I should point out that I load my models manually, since I'm plugging this in as an extra render option to my application. I'm doing it the following way:
std::vector<float> vertices = {...};
auto allocator = [[MTKMeshBufferAllocator alloc] initWithDevice:device];
auto vertexData = [NSData dataWithBytes:vertices.data() length:vertices.size() * sizeof(float)];
auto vertexBuffer = [allocator newBufferWithData:vertexData type:MDLMeshBufferTypeVertex];
auto mdlVertexDescriptor = MTKModelIOVertexDescriptorFromMetal(vertexDescriptor);
// For this particular example, `row_size` is 6, corresponding to the number of values in each vertex
auto mdlMesh = [[MDLMesh alloc] initWithVertexBuffer:vertexBuffer
                                         vertexCount:vertices.size() / row_size
                                          descriptor:mdlVertexDescriptor
                                           submeshes:@[]];
mdlMesh.vertexDescriptor = mdlVertexDescriptor;
NSError* error = nil;
auto m = [[MTKMesh alloc] initWithMesh:mdlMesh
                                device:device
                                 error:&error];
Is there any magic invocation I'm missing to get the offset applying properly?
[edit]
I've verified that the same issue is present in the vertex buffer object itself. I've confirmed that the vertex descriptor being passed into the MTKModelIOVertexDescriptorFromMetal call is the expected descriptor, and I've also confirmed that the raw data in the NSData object is identical to the std::vector values, so the issue may lie in how I'm interacting with MDLMesh.
It seems that ModelIO expects the vertex descriptor attributes to all have names matching the field names used in the vertex buffer struct for the above scenario to work. I fixed this like so:
vertexDescriptor.attributes[0].name = @"data";
vertexDescriptor.attributes[1].name = @"index";
After attaching names to each attribute, the correct data was loaded by the shader.
I managed to find this out through a chance run-in with the header that declares the MTKModelIOVertexDescriptorFromMetal method. The requirement is mentioned right at the end:
This method can only set vertex format, offset, bufferIndex, and stride information in the produced Model I/O vertex descriptor. It does not add any semantic information such as attribute names. Names must be set in the returned Model I/O vertex descriptor before it can be applied to a Model I/O mesh.

Relationship between VertexArray and Attribute [duplicate]

I sometimes find myself struggling with the order in which I declare buffers (with createBuffer/bindBuffer/bufferData) and rebind them in other parts of the code, usually in the draw loop.
If I don't rebind the vertex buffer before drawing arrays, the console complains about an attempt to access out of range vertices. My suspicion is that the last bound object is what gets passed to the pointer and then to drawArrays, but when I change the order at the beginning of the code, nothing changes. What effectively works is rebinding the buffer in the draw loop, so I can't really understand the logic behind it. When do you need to rebind? Why do you need to rebind? What is attribute 0 referring to?
I don't know if this will help. As some people have said, GL/WebGL has a bunch of internal state. All the functions you call set up that state; when it's all set up, you call drawArrays or drawElements and all of that state is used to draw things.
This has been explained elsewhere on SO but binding a buffer is just setting 1 of 2 global variables inside WebGL. After that you refer to the buffer by its bind point.
You can think of it like this
gl = function() {
    // internal WebGL state
    let lastError;
    let arrayBuffer = null;
    let vertexArray = {
        elementArrayBuffer: null,
        attributes: [
            { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
              stride: 0, offset: 0, buffer: null },
            { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
              stride: 0, offset: 0, buffer: null },
            { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
              stride: 0, offset: 0, buffer: null },
            { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
              stride: 0, offset: 0, buffer: null },
            { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
              stride: 0, offset: 0, buffer: null },
            ...
        ],
    };
    // these values are used when a vertex attrib is disabled
    let attribValues = [
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        ...
    ];
    ...
    // Implementation of gl.bindBuffer.
    // Note this function does nothing but set 2 internal variables.
    this.bindBuffer = function(bindPoint, buffer) {
        switch (bindPoint) {
            case gl.ARRAY_BUFFER:
                arrayBuffer = buffer;
                break;
            case gl.ELEMENT_ARRAY_BUFFER:
                vertexArray.elementArrayBuffer = buffer;
                break;
            default:
                lastError = gl.INVALID_ENUM;
                break;
        }
    };
    ...
}();
After that, other WebGL functions reference those internal variables. For example, gl.bufferData might do something like this:

// implementation of gl.bufferData
// Notice you don't pass in a buffer. You pass in a bindPoint.
// The function gets the buffer from one of the internal variables you set
// previously by calling gl.bindBuffer.
this.bufferData = function(bindPoint, data, usage) {
    // look up the buffer from the bindPoint
    var buffer;
    switch (bindPoint) {
        case gl.ARRAY_BUFFER:
            buffer = arrayBuffer;
            break;
        case gl.ELEMENT_ARRAY_BUFFER:
            buffer = vertexArray.elementArrayBuffer;
            break;
        default:
            lastError = gl.INVALID_ENUM;
            break;
    }
    // copy data into buffer
    buffer.copyData(data);   // just making this up
    buffer.setUsage(usage);  // just making this up
};
Separate from those bind points, there are a number of attributes. The attributes are also global state by default. They define how to pull data out of the buffers to supply to your vertex shader. Calling gl.getAttribLocation(someProgram, "nameOfAttribute") tells you which attribute the vertex shader will look at to get data out of a buffer.
There are 4 functions that you use to configure how an attribute will get data from a buffer: gl.enableVertexAttribArray, gl.disableVertexAttribArray, gl.vertexAttribPointer, and gl.vertexAttrib??.
They're effectively implemented something like this:

this.enableVertexAttribArray = function(location) {
    const attribute = vertexArray.attributes[location];
    attribute.enabled = true;  // true means get data from attribute.buffer
};

this.disableVertexAttribArray = function(location) {
    const attribute = vertexArray.attributes[location];
    attribute.enabled = false; // false means get data from attribValues[location]
};

this.vertexAttribPointer = function(location, size, type, normalized, stride, offset) {
    const attribute = vertexArray.attributes[location];
    attribute.size = size;             // num values to pull from buffer per vertex shader iteration
    attribute.type = type;             // type of values to pull from buffer
    attribute.normalized = normalized; // whether or not to normalize
    attribute.stride = stride;         // num bytes to advance per iteration; 0 = compute from type, size
    attribute.offset = offset;         // where to start in the buffer
    // IMPORTANT!!! Associates whatever buffer is currently *bound* to
    // "arrayBuffer" with this attribute.
    attribute.buffer = arrayBuffer;
};

this.vertexAttrib4f = function(location, x, y, z, w) {
    const attribValue = attribValues[location];
    attribValue[0] = x;
    attribValue[1] = y;
    attribValue[2] = z;
    attribValue[3] = w;
};
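As a small illustration of that last function (a hedged sketch; program and the attribute name "color" are placeholders, not from the question): with the attribute array disabled, every vertex sees the constant value instead of buffer data.

// Feed a shader's `attribute vec4 color;` one constant value instead of a buffer.
const colorLoc = gl.getAttribLocation(program, "color");
gl.disableVertexAttribArray(colorLoc);    // use attribValues[colorLoc] from above
gl.vertexAttrib4f(colorLoc, 1, 0, 0, 1);  // every vertex gets opaque red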
Now, when you call gl.drawArrays or gl.drawElements the system knows how you want to pull data out of the buffers you made to supply your vertex shader. See here for how that works.
Since the attributes are global state, however you have the attributes set up is how they'll be used every time you call drawElements or drawArrays. If you set up attributes #1 and #2 to pull from buffers that each have 3 vertices, but you ask to draw 6 vertices with gl.drawArrays, you'll get an error. Similarly, if you bind an index buffer to the gl.ELEMENT_ARRAY_BUFFER bind point and that buffer has an index > 2, you'll get that index-out-of-range error. If your buffers only have 3 vertices, the only valid indices are 0, 1, and 2.
Normally, every time you draw something different you rebind all the attributes needed to draw that thing. Drawing a cube that has positions and normals? Bind the buffer with position data, set up the attribute being used for positions; bind the buffer with normal data, set up the attribute being used for normals; now draw. Next you draw a sphere with positions, vertex colors, and texture coordinates. Bind the buffer that contains position data, set up the attribute being used for positions. Bind the buffer that contains vertex color data, set up the attribute being used for vertex colors. Bind the buffer that contains texture coordinates, set up the attribute being used for texture coordinates.
The only time you don't rebind buffers is when you're drawing the same thing more than once. For example, drawing 10 cubes: you'd rebind the buffers once, then set the uniforms for one cube, draw it, set the uniforms for the next cube, draw it, and repeat.
I should also add that there's an extension, OES_vertex_array_object, whose functionality is also a core feature of WebGL 2.0. A Vertex Array Object is the global state above called vertexArray, which includes the elementArrayBuffer and all the attributes.
Calling gl.createVertexArray makes a new one of those. Calling gl.bindVertexArray sets the global attributes to point at the ones in the bound vertexArray.
Calling gl.bindVertexArray would then be

this.bindVertexArray = function(vao) {
    vertexArray = vao ? vao : defaultVertexArray;
};
This has the advantage of letting you set up all attributes and buffers at init time and then at draw time just 1 WebGL call will set all buffers and attributes.
Here is a webgl state diagram that might help visualize this better.
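For illustration, a minimal sketch of using the extension from WebGL 1 (assumptions: a gl context, two already-filled buffers positionBuffer and normalBuffer, attribute locations posLoc and nrmLoc, and a vertexCount; all placeholder names):

// At init time: bake all the buffer/attribute state into a VAO.
const ext = gl.getExtension("OES_vertex_array_object"); // may be null if unsupported
const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);

gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, 0, 0);

gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.enableVertexAttribArray(nrmLoc);
gl.vertexAttribPointer(nrmLoc, 3, gl.FLOAT, false, 0, 0);

ext.bindVertexArrayOES(null);

// At render time: one call restores all of that state at once.
ext.bindVertexArrayOES(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);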

Read position data from Vertex Buffer in openGL

Say we have an object and we want to create multiple objects and move them independently based on some algorithm.
Here is the process I am using:
Create a structure with the geometry of the object
Create an array of vertex buffers using the geometry of the object
Now in the rendering routine, I need to go through each one of those objects and alter their position based on a specific algorithm.
To accomplish this I need to get the current location of the object to compute the new position.
How can I get the current location from a vertex buffer? Clearly, I do not want to store all the object locations outside the program, since they are already inside the vertex buffer.
EDIT: This is the code I am using to store and retrieve data from the model matrix of each object
// Set up code
- (void)setupGL
{
    [EAGLContext setCurrentContext:self.context];
    [self loadShaders];
    glEnable(GL_DEPTH_TEST);
    for (int i = 0; i < num_objects; i++) {
        glGenVertexArraysOES(1, &_objectArray[i]);
        glBindVertexArrayOES(_objectArray[i]);
        glGenBuffers(1, &_objectBuffer[i]);
        glBindBuffer(GL_ARRAY_BUFFER, _objectBuffer[i]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(objectData), objectData, GL_STATIC_DRAW);
        glEnableVertexAttribArray(....);
        glVertexAttribPointer(......, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
    }
    glBindVertexArrayOES(0);
}
//********************************************************
// Rendering code
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glUseProgram(_program);
    for (int i = 0; i < num_objects; i++) {
        glBindVertexArrayOES(_objectArray[i]);
        // Get previous data
        GLint uMatrix = glGetUniformLocation(_program, "modelMatrix");
        glGetUniformfv(_program, uMatrix, dataOfCurrentObject);
        // ... transform dataOfCurrentObject based on an algorithm and create newDataOfCurrentObject
        // Update object with new data and draw
        glUniformMatrix4fv(uMatrix, 1, 0, newDataOfCurrentObject);
        glDrawArrays(GL_TRIANGLES, 0, 36);
    }
}
The problem I have now is that dataOfCurrentObject for object 'i' is identical to newDataOfCurrentObject for object 'i-1'. In other words, it appears that the code keeps track of only one model matrix for all objects, or does not correctly read the model matrix of a specific object. Any ideas?
The simplest method that gets used is to set the object's position in the Model Matrix, and upload that object's appropriate Model Matrix as a Uniform each time you draw a new object. The [pseudo-]code would look like this:
for (game_object object : game_object_list) {
    glUniformMatrix4f(modelMatrixUniformLocation, 1, false, object->model_matrix);
    object->draw();
}

And when you need to update the object's position:

for (game_object object : game_object_list) {
    object->model_matrix = Matrix.identity();
    /* Any transformations that need to take place here */
    object->model_matrix = object->model_matrix.translate(/*x*/, /*y*/);
    /* Any other transformations that need to take place */
}
Exactly what you put in there will vary depending on your needs. If you're programming a 2D game, you probably don't need a 4x4 model matrix. But the basic logic should be identical to what you eventually use.
You don't have to read anything back from your vertex buffers. You just need to keep the transform you need in your application and pass it as a uniform for every object.
For example, if you need to translate the object, keep the modelViewMatrix and every frame apply the transformation to the previous matrix:
GLKMatrix4 modelViewMatrix;
...
modelViewMatrix = GLKMatrix4Identity;
...
- (void)render {
    modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, deltaX, deltaY, deltaZ);
    glUniformMatrix4fv(_modelViewUniform, 1, 0, modelViewMatrix.m);
}
So you just need to keep references to your objects' modelView matrices and apply your transformations accordingly.
If you want to read data back from your vertex buffers (on the GPU) into your application, you have to use Transform Feedback, which is a more advanced technique used when you modify your vertices in the vertex shader and need the results back. (It is not needed in your case.)

OpenGL ES 2.0 - memory management in drawing lines (graphing)

I finally got some functioning code to draw lines (in Xamarin/MonoTouch):
//init calls
Context = new EAGLContext (EAGLRenderingAPI.OpenGLES2);
DrawableDepthFormat = GLKViewDrawableDepthFormat.Format24;
EAGLContext.SetCurrentContext (Context);
effect = new GLKBaseEffect ();
effect.UseConstantColor = true;
effect.ConstantColor = new Vector4 (1f, 1f, 1f, 1f); // white
GL.ClearColor (0f, 0f, 0f, 1f); // black

public void DrawLine(float[] pts) {
    //generate, bind, init
    GL.GenBuffers (1, out vertexBuffer);
    GL.BindBuffer (BufferTarget.ArrayBuffer, vertexBuffer);
    GL.BufferData (BufferTarget.ArrayBuffer, (IntPtr)(pts.Length * sizeof(float)), pts, BufferUsage.DynamicDraw);

    // RENDER //
    effect.PrepareToDraw ();

    //describe what's going to happen
    GL.EnableVertexAttribArray ((int)GLKVertexAttrib.Position);
    GL.VertexAttribPointer ((int)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, sizeof(float) * 2, 0);
    GL.DrawArrays (BeginMode.LineStrip, 0, pts.Length / 2);
}
I have a couple of questions.
Is this approach to drawing lines optimal? Are there any suggested improvements (e.g. antialiasing)?
GL.Clear (ClearBufferMask.ColorBufferBit);
effect.ConstantColor = new Vector4 (1f, 1f, 1f, 1f);
DrawLine (line);
effect.ConstantColor = new Vector4 (1f, 0f, 1f, 1f);
DrawLine (line2);
Does all the memory associated with the line disappear when I call GL.Clear()? i.e. do I have to do any memory cleanup, or can I just keep calling GL.Clear() followed by DrawLine() and not worry about memory management?
I'm planning on using these functions for graphing. If the underlying data changes (but I have the same number of lines), is there a subset of functions I can call to update the lines more efficiently?
GL.GenBuffers (1, out vertexBuffer) creates a buffer on the GPU, and it has to be deleted after use. In most cases you create a buffer to push data to the GPU that will not be updated frequently and is then used for drawing many times. There is probably a flag to stream the data (instead of DynamicDraw) for constant updating, though. You could use that to reuse the same buffer, but it would probably be best to just push the data pointer directly from the CPU: lose all 3 lines concerning the buffer and insert pts instead of 0 as the last argument of VertexAttribPointer.
You say you will be using this for graph drawing. If the graph data will not be modified every frame and you can compute all the points, you still might want to benefit from buffers. Instead of pushing every line into its own buffer, try pushing all the lines into a single buffer (even the axes can be there). Use GL.DrawArrays (BeginMode.LineStrip, 0, pts.Length/2) to draw specific lines, as the last 2 arguments control the range of the current buffer to draw (to draw the 5th line only, you would write GL.DrawArrays(BeginMode.LineStrip, 5*2, 2)). So when the graph data should update: delete the current buffer, create a new one, push the data to it, bind it, set the vertex pointer, and then just keep calling the draw method.
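One hedged aside on the update question: when the number of points stays the same and only their values change, GL also offers BufferSubData (GL.BufferSubData in OpenTK) to overwrite an existing buffer's contents without deleting and recreating it. In WebGL terms, the main topic of this page, the idea looks like this (lineBuffer and pts are placeholder names):

// Assumes lineBuffer was created earlier with gl.bufferData(..., gl.DYNAMIC_DRAW)
// and pts has the same length as the originally uploaded data.
function updateLine(gl: WebGLRenderingContext, lineBuffer: WebGLBuffer, pts: Float32Array) {
    gl.bindBuffer(gl.ARRAY_BUFFER, lineBuffer);
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, pts); // overwrite in place, no reallocation
}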
GL.Clear has nothing to do with memory cleanup at all. It only clears (sets the values of) the buffers attached to your framebuffer; in your case it sets all the pixels in your render buffer to the color you set in ClearColor. Nothing more. Other common cases are clearing the depth buffer and the stencil buffer as well.
As for optimization and anti-aliasing, it all depends on what you are doing; there is no general answer. Though if your scene gets too edgy, try searching for multisampling.

DirectX: How to apply an Effect to a texture drawn with ID3DXSprite.Draw(..)

I want to write a very simple effect for a DirectX program which uses the ID3DXSprite interface to draw a 2D HUD. In XNA I simply called:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
effect.Begin();
effect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(texture, new Rectangle(0, 0, 300, 300), Color.White);
effect.CurrentTechnique.Passes[0].End();
effect.End();
spriteBatch.End();
But in C++, nearly the same code doesn't work:
pSprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_DONOTSAVESTATE | D3DXSPRITE_SORT_TEXTURE);
anEffect->SetTechnique(technique);
anEffect->Begin(&passes, 0);
anEffect->BeginPass(0);
pSprite->Draw(pTexture, NULL, NULL, &position, 0xFFFFFFFF);
anEffect->EndPass();
anEffect->End();
pSprite->End();
NOTE: The effect is loaded correctly!
Well, first of all, the XNA code you have is for XNA 3.1, and it's wrong. This blog post explains how to do it for both XNA 3.1 and 4.0 (the API changed in between).
In XNA 3.1, when using SpriteSortMode.Immediate, SpriteBatch will set up its shaders and other device state in the Begin call, instead of in the End call. This gives you the opportunity to replace parts of the device state before drawing actually takes place (in Draw or End, depending on when it flushes). And then you are supposed to End your effect after you End the sprite batch (so everything gets drawn first).
Now, in DirectX, I would suggest that the same incorrect ordering of your End calls is to blame. Specifically refer to this part of the documentation for the second parameter to ID3DXEffect::Begin
determines if state modified by an effect is saved and restored. The default value 0 specifies that ID3DXEffect::Begin and ID3DXEffect::End will save and restore all state modified by the effect
The upshot is that, when you End the effect, it is resetting the device back to normal sprite drawing, before you call End on the ID3DXSprite, which is what is actually sending your sprite batch to be drawn.
I would guess that the reason your incorrectly-ordered code works on XNA is that XNA is probably doing the equivalent of passing D3DXFX_DONOTSAVESTATE, when beginning the effect, under the hood.
Usage of Sprite with HLSL Effect (for C++ game developers):
Below is sample code showing how sprite drawing can work with HLSL effect files.
Pseudocode:
ID3DXEffect* g_pEffect = NULL; // D3DX effect interface

void loadTextureEffect() {
    D3DXCreateTextureFromFile(gD3dDevice, L"image.png", &gTextureBackdrop);
    DWORD dwShaderFlags = D3DXFX_NOT_CLONEABLE;
    D3DXCreateEffectFromFile( gD3dDevice, "shader.fx", NULL, NULL, dwShaderFlags,
                              NULL, &g_pEffect, NULL );
}

void Render()
{
    unsigned int passes;
    gD3dDevice->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0x00000000, 1.0f, 0);
    gD3dDevice->BeginScene();
    gSprite->Begin(0);
    g_pEffect->SetTechnique("PostProcess");
    g_pEffect->SetTexture( "Tex0", gTextureBackdrop );
    float blurFactor = 25;
    g_pEffect->SetValue("TextureBlur", &blurFactor, sizeof(float));
    g_pEffect->Begin(&passes, 0);
    for (unsigned int pass = 0; pass < passes; ++pass)
    {
        g_pEffect->BeginPass(pass);
        D3DXVECTOR3 spritePos(0.0f, 0.0f, 0.0f);
        gD3dDevice->SetTexture(0, gTextureBackdrop);
        gSprite->Draw(gTextureBackdrop, 0, 0, &spritePos, 0xffffffff);
        gSprite->End(); // flush the sprite batch while the effect pass is still active
        g_pEffect->CommitChanges();
        g_pEffect->EndPass();
    }
    g_pEffect->End();
    gD3dDevice->EndScene();
    gD3dDevice->Present(NULL, NULL, NULL, NULL);
}
