DirectX: How to apply an Effect to a texture drawn with ID3DXSprite.Draw(..) - xna

I want to write a very simple Effect for a DirectX program which uses the ID3DXSprite interface to draw a 2D HUD. In XNA I simply called:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
effect.Begin();
effect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(texture, new Rectangle(0, 0, 300, 300), Color.White);
effect.CurrentTechnique.Passes[0].End();
effect.End();
spriteBatch.End();
But in C++, nearly the same code doesn't work:
pSprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_DONOTSAVESTATE | D3DXSPRITE_SORT_TEXTURE);
anEffect->SetTechnique(technique);
anEffect->Begin(&passes, 0);
anEffect->BeginPass(0);
pSprite->Draw(pTexture, NULL, NULL, &position, 0xFFFFFFFF);
anEffect->EndPass();
anEffect->End();
pSprite->End();
NOTE: The effect is loaded correctly!

Well, first of all, the XNA code you have is for XNA 3.1, and it's wrong. This blog post explains how to do it for both XNA 3.1 and 4.0 (the API changed in between).
In XNA 3.1, when using SpriteSortMode.Immediate, SpriteBatch will set up its shaders and other device state in the Begin call, instead of in the End call. This gives you the opportunity to replace parts of the device state before drawing actually takes place (in Draw or End, depending on when it flushes). And then you are supposed to End your effect after you End the sprite batch (so everything gets drawn first).
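Applied to the snippet above, the XNA 3.1 version would therefore be reordered like this (a sketch of just that reordering; only the End calls move):
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
effect.Begin();
effect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(texture, new Rectangle(0, 0, 300, 300), Color.White);
spriteBatch.End();                       // the batch is flushed here, with the effect still active
effect.CurrentTechnique.Passes[0].End();
effect.End();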
Now, in DirectX, I would suggest that the same incorrect ordering of your End calls is to blame. Specifically, refer to this part of the documentation for the second parameter to ID3DXEffect::Begin:
determines if state modified by an effect is saved and restored. The default value 0 specifies that ID3DXEffect::Begin and ID3DXEffect::End will save and restore all state modified by the effect
The upshot is that, when you End the effect, it is resetting the device back to normal sprite drawing, before you call End on the ID3DXSprite, which is what is actually sending your sprite batch to be drawn.
I would guess that the reason your incorrectly-ordered code works on XNA is that XNA is probably doing the equivalent of passing D3DXFX_DONOTSAVESTATE, when beginning the effect, under the hood.
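Based on that, a likely fix for the C++ version is to flush the sprite batch before ending the effect, and to pass D3DXFX_DONOTSAVESTATE so that ending the effect does not restore device state on its own (a sketch, untested):
pSprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_DONOTSAVESTATE | D3DXSPRITE_SORT_TEXTURE);
anEffect->SetTechnique(technique);
anEffect->Begin(&passes, D3DXFX_DONOTSAVESTATE); // don't save/restore device state
anEffect->BeginPass(0);
pSprite->Draw(pTexture, NULL, NULL, &position, 0xFFFFFFFF);
pSprite->End();      // the batch is actually drawn here, with the effect still active
anEffect->EndPass();
anEffect->End();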

Usage of ID3DXSprite with an HLSL effect (for C++ game developers):
Below is sample code showing how sprite drawing can work with an HLSL effect file.
Pseudo code:
ID3DXEffect* g_pEffect = NULL; // D3DX effect interface

void loadTextureEffect()
{
    D3DXCreateTextureFromFile(gD3dDevice, L"image.png", &gTextureBackdrop);

    DWORD dwShaderFlags = D3DXFX_NOT_CLONEABLE;
    D3DXCreateEffectFromFile(gD3dDevice, L"shader.fx", NULL, NULL, dwShaderFlags,
                             NULL, &g_pEffect, NULL);
}

void Render()
{
    unsigned int passes;

    gD3dDevice->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0x00000000, 1.0f, 0);
    gD3dDevice->BeginScene();

    gSprite->Begin(0);

    g_pEffect->SetTechnique("PostProcess");
    g_pEffect->SetTexture("Tex0", gTextureBackdrop);
    float blurFactor = 25;
    g_pEffect->SetValue("TextureBlur", &blurFactor, sizeof(float));

    g_pEffect->Begin(&passes, 0);
    for (unsigned int pass = 0; pass < passes; ++pass)
    {
        g_pEffect->BeginPass(pass);

        D3DXVECTOR3 spritePos(0.0f, 0.0f, 0.0f);
        gD3dDevice->SetTexture(0, gTextureBackdrop);
        gSprite->Draw(gTextureBackdrop, 0, 0, &spritePos, 0xffffffff);

        // Flush submits the batched sprites while this pass's shader is still
        // active; calling gSprite->End() here instead would break on a second pass.
        gSprite->Flush();

        g_pEffect->EndPass();
    }
    g_pEffect->End();

    gSprite->End();

    gD3dDevice->EndScene();
    gD3dDevice->Present(NULL, NULL, NULL, NULL);
}

Related

Draw multiple models in WebGL

Problem constraints:
I am not using three.js or similar, but pure WebGL
WebGL 2 is not an option either
I have a couple of models loaded, stored as vertex and normal arrays (coming from an STL reader).
So far there is no problem when both models are the same size. Whenever I load two different models, an error message is shown in the browser:
WebGL: INVALID_OPERATION: drawArrays: attempt to access out of bounds arrays
so I suspect I am not manipulating multiple buffers correctly.
The models are loaded using the following typescript method:
public AddModel(model: Model)
{
    this.models.push(model);

    model.VertexBuffer = this.gl.createBuffer();
    model.NormalsBuffer = this.gl.createBuffer();

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Vertices, this.gl.STATIC_DRAW);
    model.CoordLocation = this.gl.getAttribLocation(this.shaderProgram, "coordinates");
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.CoordLocation);

    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.bufferData(this.gl.ARRAY_BUFFER, model.Normals, this.gl.STATIC_DRAW);
    model.NormalLocation = this.gl.getAttribLocation(this.shaderProgram, "vertexNormal");
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);
    this.gl.enableVertexAttribArray(model.NormalLocation);
}
After loaded, the Render method is called for drawing all loaded models:
public Render(viewMatrix: Matrix4, perspective: Matrix4)
{
    this.gl.uniformMatrix4fv(this.viewRef, false, viewMatrix);
    this.gl.uniformMatrix4fv(this.perspectiveRef, false, perspective);
    this.gl.uniformMatrix4fv(this.normalTransformRef, false, viewMatrix.NormalMatrix());

    // Clear the canvas
    this.gl.clearColor(0, 0, 0, 0);
    this.gl.viewport(0, 0, this.canvas.width, this.canvas.height);
    this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);

    // Draw the triangles
    if (this.models.length > 0)
    {
        for (var i = 0; i < this.models.length; i++)
        {
            var model = this.models[i];
            this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
            this.gl.enableVertexAttribArray(model.NormalLocation);
            this.gl.enableVertexAttribArray(model.CoordLocation);
            this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);
            this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
            this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
            this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
        }
    }
}
One model works just fine. Two cloned models also work OK. Different models fail with the error mentioned.
What am I missing?
The normal way to use WebGL:

At init time:

    for each shader program
        create and compile vertex shader
        create and compile fragment shader
        create program, attach shaders, link program

    for each model
        for each type of vertex data (positions, normals, colors, texcoords)
            create a buffer
            copy data to buffer
        create textures

Then at render time:

    for each model
        use shader program appropriate for model
        bind buffers, enable and set up attributes
        bind textures and set uniforms
        call drawArrays or drawElements
But looking at your code, it's binding buffers and enabling and setting up attributes at init time instead of at render time.
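Applied to the code in the question, the per-model setup in Render might look roughly like this (a sketch reusing the names from the question; the key point is that each vertexAttribPointer call only captures the buffer bound at that moment, so the normals buffer needs its own bind-and-point step every frame):
for (var i = 0; i < this.models.length; i++)
{
    var model = this.models[i];

    // position attribute: bind its buffer, then point the attribute at it
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.VertexBuffer);
    this.gl.enableVertexAttribArray(model.CoordLocation);
    this.gl.vertexAttribPointer(model.CoordLocation, 3, this.gl.FLOAT, false, 0, 0);

    // normal attribute: same pattern; this is the step the original Render skips,
    // so its normal attribute kept pointing at whichever buffer was bound last
    this.gl.bindBuffer(this.gl.ARRAY_BUFFER, model.NormalsBuffer);
    this.gl.enableVertexAttribArray(model.NormalLocation);
    this.gl.vertexAttribPointer(model.NormalLocation, 3, this.gl.FLOAT, false, 0, 0);

    this.gl.uniformMatrix4fv(this.modelRef, false, model.TransformMatrix);
    this.gl.uniform3fv(this.materialdiffuseRef, model.Color.AsVec3());
    this.gl.drawArrays(this.gl.TRIANGLES, 0, model.TrianglesCount);
}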
Maybe see this article and this one

Need help - MonoGame 2D water reflection

OK, so I am following this tutorial, Water Reflection XNA, and when I adapt the code to MonoGame I can't get the final result.
So here is my LoadContent code:
protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texture = Content.Load<Texture2D>("test");
    effect = Content.Load<Effect>("LinearFade");
    effect.Parameters["Visibility"].SetValue(0.7f);
}
and my Draw code:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
    //effect.CurrentTechnique.Passes[0].Apply();
    spriteBatch.Draw(texture, new Vector2(texturePos.X, texturePos.Y + texture.Height), null, Color.White * 0.5f, 0f, Vector2.Zero, 1, SpriteEffects.FlipVertically, 0f);
    spriteBatch.End();

    spriteBatch.Begin();
    spriteBatch.Draw(texture, texturePos, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
and finally my .fx file: LinearFade
So the problem starts when I apply the effect: my texture just disappears. If I comment out the effect part of the Draw method, I get the mirrored image with a fade (by messing with the alpha, "Color.White * 0.5f"), but without the fade effect he has in the tutorial, running from the middle of the picture to the bottom. I still don't have much experience with MonoGame and shaders, but I am learning.
If anyone knows how to fix this or how to get the result in the tutorial above, that would be nice. By the way, sorry for the bad English; it is not my main language.
OK, no need for help: I got it after two days of thinking. The thing is that you need the default vertex shader input and output, and then the shader works. So if anyone has a problem with shaders in MonoGame, first check whether you have the default vertex shader input and output. I will put the solution code here, so anyone doing the same tutorial or something similar knows what the problem is.
Solution: Working Effect.fx
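For anyone landing here without following the link, the shape of such a file is roughly the following (a hedged sketch, not the poster's exact shader: the two structs are the "default vertex shader input and output" mentioned above, the MatrixTransform parameter is one common way to feed the sprite transform to a custom vertex shader, and the fade math is only illustrative):
float4x4 MatrixTransform; // set from game code (e.g. an orthographic projection)
float Visibility;
sampler TextureSampler : register(s0);

// default SpriteBatch-style vertex input and output
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Color    : COLOR0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color    : COLOR0;
    float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput MainVS(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, MatrixTransform);
    output.Color    = input.Color;
    output.TexCoord = input.TexCoord;
    return output;
}

float4 MainPS(VertexShaderOutput input) : COLOR0
{
    float4 color = tex2D(TextureSampler, input.TexCoord) * input.Color;
    color *= saturate(1.0 - input.TexCoord.y / Visibility); // illustrative linear fade
    return color;
}

technique LinearFade
{
    pass P0
    {
        VertexShader = compile vs_4_0_level_9_1 MainVS();
        PixelShader  = compile ps_4_0_level_9_1 MainPS();
    }
}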

OpenGL ES 2.0 - memory management in drawing lines (graphing)

I finally got some functioning code to draw lines (in Xamarin/MonoTouch):
//init calls
Context = new EAGLContext (EAGLRenderingAPI.OpenGLES2);
DrawableDepthFormat = GLKViewDrawableDepthFormat.Format24;
EAGLContext.SetCurrentContext (Context);
effect = new GLKBaseEffect ();
effect.UseConstantColor = true;
effect.ConstantColor = new Vector4 (1f, 1f, 1f, 1f); // white
GL.ClearColor (0f, 0f, 0f, 1f); // black

public void DrawLine(float[] pts)
{
    // generate, bind, init
    GL.GenBuffers (1, out vertexBuffer);
    GL.BindBuffer (BufferTarget.ArrayBuffer, vertexBuffer);
    GL.BufferData (BufferTarget.ArrayBuffer, (IntPtr) (pts.Length * sizeof (float)), pts, BufferUsage.DynamicDraw);

    // RENDER //
    effect.PrepareToDraw ();

    // describe what's going to happen
    GL.EnableVertexAttribArray ((int) GLKVertexAttrib.Position);
    GL.VertexAttribPointer ((int) GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, sizeof(float) * 2, 0);
    GL.DrawArrays (BeginMode.LineStrip, 0, pts.Length / 2);
}
I have a couple questions.
Is this approach for drawing lines optimal? Are there any suggested improvements (e.g. antialiasing)?
GL.Clear (ClearBufferMask.ColorBufferBit);
effect.ConstantColor = new Vector4 (1f, 1f, 1f, 1f);
DrawLine (line);
effect.ConstantColor = new Vector4 (1f, 0f, 1f, 1f);
DrawLine (line2);
Does all the memory associated with the line disappear when I call GL.Clear()? That is, do I have to do any memory cleanup, or can I just keep calling GL.Clear() followed by DrawLine() and not worry about memory management?
I'm planning on using these functions for graphing. If the underlying data changes (but I have the same number of lines), is there a subset of functions that I can call to update the lines more efficiently?
GL.GenBuffers (1, out vertexBuffer) creates a buffer on the GPU, and it has to be deleted after use. In most cases you create a buffer to push data to the GPU that will not be updated frequently and is then used to draw many times. There is probably a flag to stream the data (instead of DynamicDraw) for constant updating, though. You could use that to reuse the same buffer, but it would probably be best to just push the data pointer directly from the CPU: lose all three lines concerning the buffer and pass pts into VertexAttribPointer instead of 0 as the last argument.
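With the OpenTK-style bindings used above, that buffer-less version might look roughly like this (a sketch; it relies on the overload of GL.VertexAttribPointer that takes the vertex array itself instead of a buffer offset):
public void DrawLine(float[] pts)
{
    effect.PrepareToDraw ();

    GL.EnableVertexAttribArray ((int) GLKVertexAttrib.Position);
    // no buffer object at all: the data is read straight from pts on each draw call
    GL.VertexAttribPointer ((int) GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, 0, pts);
    GL.DrawArrays (BeginMode.LineStrip, 0, pts.Length / 2);
}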
You say you will be using this for graph drawing. If the graph data will not be modified every frame and you can compute all the points, you still might want to benefit from buffers. Instead of pushing every line into its own buffer, try pushing all the lines into a single buffer (even the axes can be there). Then use GL.DrawArrays (BeginMode.LineStrip, first, count) to draw specific lines, as the last two arguments control the range of the current buffer to draw (to draw the 5th line only, you would write GL.DrawArrays(BeginMode.LineStrip, 5*2, 2)). So when the graph data needs to update: delete the current buffer, create a new one, push the data to it, bind it, set the vertex pointer, and then just keep calling the draw method.
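A sketch of that single-buffer update path (allPts is a name made up here for all lines packed into one float array):
// when the graph data changes: replace the shared buffer wholesale
GL.DeleteBuffers (1, ref vertexBuffer);
GL.GenBuffers (1, out vertexBuffer);
GL.BindBuffer (BufferTarget.ArrayBuffer, vertexBuffer);
GL.BufferData (BufferTarget.ArrayBuffer, (IntPtr) (allPts.Length * sizeof (float)), allPts, BufferUsage.DynamicDraw);
GL.EnableVertexAttribArray ((int) GLKVertexAttrib.Position);
GL.VertexAttribPointer ((int) GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, 0, 0);

// then draw any line by its range in the shared buffer, e.g. the 5th line:
GL.DrawArrays (BeginMode.LineStrip, 5 * 2, 2);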
GL.Clear has nothing to do with memory cleanup at all. It will only clear (set the values of) the buffers attached to your frame buffer; in your case it will set all the pixels in your render buffer to the color you set in ClearColor, nothing more. Other common cases are to also clear the depth buffer, the stencil buffer, and so on.
As for all the optimization and anti-aliasing, it depends on what you are doing; there is no general answer. Though if your scene gets too edgy, try searching around for multisampling.

Issue with transparency while drawing with RenderToSurface

I am facing an issue while drawing a semi-transparent object with RenderToSurface (it works fine when I draw the object directly on the device). The issue is that when I draw an object with a 50% alpha value onto the RenderToSurface, and then draw the surface to the device, the transparency of the object is not correct. My code is as follows.
RenderingSurface.BeginScene(RenderTexture.GetSurfaceLevel(0), view);
_device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.FromArgb(0, Color.Black), 1.0f, 0);
using (Sprite s = new Sprite(_device))
{
    s.Begin(SpriteFlags.DoNotSaveState);
    s.Draw(ObjecTexture, new Microsoft.DirectX.Vector3(0, 0, 0), new Microsoft.DirectX.Vector3(0, 1, 0), Color.White.ToArgb());
    s.End();
}
RenderingSurface.EndScene(Filter.None);
The render surface shows the same shape with 50% transparency.
Code to draw the surface:
_device.BeginScene();
_device.Clear(ClearFlags.Target | ClearFlags.ZBuffer | ClearFlags.Stencil, BackgroundColor, 1, 0);
using (Sprite s = new Sprite(_device))
{
    s.Begin(SpriteFlags.DoNotSaveState);
    s.Draw(RenderTexture, new Microsoft.DirectX.Vector3(0, 0, 0), new Microsoft.DirectX.Vector3(0, 1, 0), Color.White.ToArgb());
    s.End();
}
Make sure your RenderSurface render target is created with a PixelFormat that has an alpha channel (A8R8G8B8 rather than X8R8G8B8).
Also, when rendering in the render target, make sure the resulting alpha is being written to the surface using the right blend mode render states for the alpha channel. Please note that blend modes for alpha (AlphaDestinationBlend, AlphaSourceBlend, ...) and colors (DestinationBlend, SourceBlend, ...) are different; make sure you set both.
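With the Managed DirectX device used above, that might look something like the following before drawing into the render target (a sketch; the exact blend factors depend on the result you want):
// color-channel blend
_device.RenderState.AlphaBlendEnable = true;
_device.RenderState.SourceBlend = Blend.SourceAlpha;
_device.RenderState.DestinationBlend = Blend.InvSourceAlpha;

// alpha-channel blend, set separately so the sprite's alpha is actually
// written into the A8R8G8B8 surface instead of being blended away
_device.RenderState.SeparateAlphaBlendEnabled = true;
_device.RenderState.AlphaSourceBlend = Blend.One;
_device.RenderState.AlphaDestinationBlend = Blend.InvSourceAlpha;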

Textured Primitives in XNA with a first person camera

So I have an XNA application set up. The camera is in first-person mode, and the user can move around using the keyboard and reposition the camera target with the mouse. I have been able to load 3D models fine, and they appear on screen no problem. But whenever I try to draw any primitive (textured or not), it does not show up anywhere on the screen, no matter how I position the camera.
In Initialize(), I have:
quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
quadVertexDecl = new VertexDeclaration(this.GraphicsDevice, VertexPositionNormalTexture.VertexElements);
In LoadContent(), I have:
quadTexture = Content.Load<Texture2D>(@"Textures\brickWall");
quadEffect = new BasicEffect(this.GraphicsDevice, null);
quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
quadEffect.LightingEnabled = true;
quadEffect.World = Matrix.Identity;
quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
quadEffect.Projection = this.Projection;
quadEffect.TextureEnabled = true;
quadEffect.Texture = quadTexture;
And in Draw() I have:
this.GraphicsDevice.VertexDeclaration = quadVertexDecl;
quadEffect.Begin();
foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
        PrimitiveType.TriangleList,
        quad.Vertices, 0, 4,
        quad.Indexes, 0, 2);
    pass.End();
}
quadEffect.End();
I think I'm doing something wrong in the quadEffect properties, but I'm not quite sure what.
I can't run this code on the computer here at work, as I don't have Game Studio installed. But for reference, check out the 3D audio sample on the Creators Club website. They have a "QuadDrawer" in that project which demonstrates how to draw a textured quad at any position in the world. It's a pretty nice solution for what it seems you want to do :-)
http://creators.xna.com/en-US/sample/3daudio
