I have some code that goes like this:
GraphicsDevice.SetRenderTarget(0, myRenderTarget);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
myEffect.Begin();
myEffect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(aRegularTexture, Vector2.Zero, Color.White);
spriteBatch.End();
myEffect.CurrentTechnique.Passes[0].End();
myEffect.End();
GraphicsDevice.SetRenderTarget(0, backBuffer);
Texture2D bloomTexture = myRenderTarget.GetTexture();
...
GraphicsDevice.SetRenderTarget(0, myRenderTarget);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
spriteBatch.Draw(aRegularTexture, Vector2.Zero, Color.White);
spriteBatch.End();
GraphicsDevice.SetRenderTarget(0, backBuffer);
aRegularTexture = myRenderTarget.GetTexture();
// SHOULD be doing nothing, since I'm just rendering said texture into a render target and pulling it back out
(Note: this is trimmed down to minimal reproduction code, not quite what I actually use.)
If I render aRegularTexture to the screen before the second block of code, it looks fine and untouched. But if I render it after the second block of code, it shows the modified (bloomed) contents from the first block, even though I'm never doing anything that should cause that. Why? (Using XNA 3.1)
Via Shawn Hargreaves: "GetTexture returns an alias for the same surface memory as the rendertarget itself, rather than a separate copy of the data"
http://blogs.msdn.com/b/shawnhar/archive/2010/03/26/rendertarget-changes-in-xna-game-studio-4-0.aspx
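Given that aliasing, one way to keep an independent copy is to pull the pixel data out of the alias into a fresh texture — a minimal sketch for XNA 3.1 (the Color surface format is an assumption):

```csharp
// Copy the render target contents into a separate texture so later
// draws into myRenderTarget can't overwrite it (assumes Color format).
Texture2D alias = myRenderTarget.GetTexture();
Color[] pixels = new Color[alias.Width * alias.Height];
alias.GetData(pixels);

Texture2D copy = new Texture2D(GraphicsDevice, alias.Width, alias.Height,
    1, TextureUsage.None, SurfaceFormat.Color);
copy.SetData(pixels);
aRegularTexture = copy; // now independent of the render target memory
```

This round-trip through the CPU is slow, so it belongs in load-time or one-off code, not in the per-frame draw loop.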
I have a webgl project setup that uses 2 pass rendering to create effects on a texture.
Everything was working until recently chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating the error "happens any time you read from a texture which is currently attached to the framebuffer".
However, I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2 pass processing" by uploading an HTML image to texture0 and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding break points to my code I found that the error is not actually thrown until I bind the null frame buffer. So the drawArrays call isn't causing the error, it's binding the null frame buffer afterwards that sets it off.
Since version 83, Chrome performs conservative checks for feedback loops between the framebuffer and active textures. These checks are likely too conservative and can reject usage that should actually be allowed.
With these new checks, Chrome seems to disallow a render target being bound to any texture unit, even if that unit is not used by the program.
In your 2-pass rendering you likely have something like:
1. Initialize a render target: create a texture and attach it to a framebuffer.
2. Render to the target.
In step 1 you likely bind the texture with gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2, unbind it with gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail, because the render target is still bound as a texture, even though that texture is never sampled by the program.
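Concretely, using the names from the question, a safe ordering for the two passes might look like this (a sketch; it assumes the shader samples a single texture on unit 0 and ping-pongs between the two textures):

```javascript
// Pass 1: sample texture0, render into texture1 (the FBO attachment).
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texture1, 0);
gl.bindTexture(gl.TEXTURE_2D, texture0);   // source for the sampler
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

// Pass 2: drop the stale binding BEFORE re-attaching, so the attached
// texture is never simultaneously bound to a texture unit.
gl.bindTexture(gl.TEXTURE_2D, null);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texture0, 0); // ping-pong target
gl.bindTexture(gl.TEXTURE_2D, texture1);   // sample the pass-1 result
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

gl.bindFramebuffer(gl.FRAMEBUFFER, null);  // clean up
```

The key invariant is that at every drawArrays call, the texture attached to the framebuffer and the texture bound to any unit are different objects.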
void D3DApp::OnResize()
{
assert(md3dImmediateContext);
assert(md3dDevice);
assert(mSwapChain);
// Release the old views, as they hold references to the buffers we
// will be destroying. Also release the old depth/stencil buffer.
ReleaseCOM(mRenderTargetView);
ReleaseCOM(mDepthStencilView);
ReleaseCOM(mDepthStencilBuffer);
// Resize the swap chain and recreate the render target view.
HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
ID3D11Texture2D* backBuffer;
HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
ReleaseCOM(backBuffer);
I am having a bit of trouble understanding exactly what this code is doing. I think it is getting the current back buffer, storing it in memory, and then rendering it to the screen again? What could the contents of the retrieved buffer be? I am very confused. My program uses an 800 x 600 pixel window.
As the function name points out, it just resets the back buffer and the render target; this function should be called whenever the render window is resized.
HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
In a DirectX application, the back buffer size is always kept the same as the window size, so when the window size changes, the back buffer size should be updated accordingly; otherwise your scene will look stretched.
ID3D11Texture2D* backBuffer;
HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
ReleaseCOM(backBuffer);
The render target view was created from the old back buffer, so it in turn has to be recreated from the resized one.
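For context, the rest of OnResize in this style of framework typically recreates the depth/stencil buffer and viewport from the new client size. A sketch along those lines (the depth format and sample settings are assumptions; member names mirror the snippet above):

```cpp
// Recreate the depth/stencil buffer at the new client size.
D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width            = mClientWidth;
depthDesc.Height           = mClientHeight;
depthDesc.MipLevels        = 1;
depthDesc.ArraySize        = 1;
depthDesc.Format           = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthDesc.SampleDesc.Count = 1;
depthDesc.Usage            = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL;

HR(md3dDevice->CreateTexture2D(&depthDesc, 0, &mDepthStencilBuffer));
HR(md3dDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView));

// Rebind the new views to the output-merger stage.
md3dImmediateContext->OMSetRenderTargets(1, &mRenderTargetView, mDepthStencilView);

// Reset the viewport to cover the whole client area.
D3D11_VIEWPORT vp = {};
vp.Width    = static_cast<float>(mClientWidth);
vp.Height   = static_cast<float>(mClientHeight);
vp.MaxDepth = 1.0f;
md3dImmediateContext->RSSetViewports(1, &vp);
```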
I'm very new to shaders and am very confused about the whole thing, even after following several tutorials (in fact this is my second question about shaders today).
I'm trying to make a shader with two passes :
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 HorizontalBlur();
}
pass Pass2
{
PixelShader = compile ps_2_0 VerticalBlur();
}
}
However, this only applies VerticalBlur(). If I remove Pass2, it falls back to the HorizontalBlur() in Pass1. Am I missing something? Maybe it's simply not passing the result of the first pass to the second pass, in which case how would I do that?
Also, in most of the tutorials I've read, I'm told to put effect.CurrentTechnique.Passes[0].Apply(); after I start my spritebatch with the effect. However, this doesn't seem to change anything; I can set it to Passes[1] or even remove it entirely, and I still get only Pass2. (I do get an error when I try to set it to Passes[2], however.) What's the use of that line then? Has the need for it been removed in recent versions?
Thanks a bunch!
To render multiple passes:
For the first pass, render your scene onto a texture.
For the second, third, fourth, etc passes:
Draw a quad that uses the texture from the previous pass. If there are more passes to follow, render this pass to another texture; otherwise, if this is the last pass, render it to the back buffer.
In your example, say you are rendering a car.
First you render the car to a texture.
Then you draw a big rectangle the size of the screen in pixels, placed at a z depth of 0.5, with identity world, view, and projection matrices, and with your car scene as the texture, and apply the horizontal blur pass. This is rendered to a new texture that now contains a horizontally blurred car.
Finally, render the same rectangle, but with the "horizontally blurred car" texture, and apply the vertical blur pass. Render this to the back buffer. You have now drawn a blurred car scene.
The reason for the following
effect.CurrentTechnique.Passes[0].Apply();
is that many effects only have a single pass.
To run multiple passes, I think you have to do this instead:
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
//Your draw code here:
}
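With SpriteBatch specifically, each pass also needs its own render target, since the output of the first pass has to be fed back in as a texture. A sketch in XNA 4 using the two-pass blur technique from the question (rtA and sceneTexture are placeholder names):

```csharp
// Pass 1: draw the scene into rtA with the horizontal blur.
GraphicsDevice.SetRenderTarget(rtA);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    null, null, null, blurEffect);
blurEffect.CurrentTechnique.Passes[0].Apply(); // HorizontalBlur
spriteBatch.Draw(sceneTexture, Vector2.Zero, Color.White);
spriteBatch.End();

// Pass 2: draw rtA to the back buffer with the vertical blur.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    null, null, null, blurEffect);
blurEffect.CurrentTechnique.Passes[1].Apply(); // VerticalBlur
spriteBatch.Draw(rtA, Vector2.Zero, Color.White);
spriteBatch.End();
```

This also explains the symptom in the question: applying both passes to the same geometry in one draw just overwrites the same pixels, so only the last pass appears to take effect.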
I'm currently developing an old-school game with XNA 4.
My graphics assets are based on a 568x320 resolution (16/9 ratio). I want to be able to change my window resolution (1136x640, for example) and have my graphics scaled without stretching, so that they keep their pixel aspect.
How can I achieve this?
You could use a RenderTarget to achieve your goal. It sounds like you don't want to have to render differently for every possible screen size, so if your graphics aren't dependent on other graphical features like a mouse, I would use a RenderTarget, draw all the pixel data to that, and afterwards draw it to the actual screen, letting the screen stretch it.
This technique can be used in other ways too. I use it to draw objects in my game, so I can easily change the rotation and location without having to calculate every sprite for the object.
Example:
void PreDraw()
{
// You need your graphics device to render to
GraphicsDevice graphicsDevice = Settings.GlobalGraphicsDevice;
// You need a spritebatch to begin/end a draw call
SpriteBatch spriteBatch = Settings.GlobalSpriteBatch;
// Tell the graphics device where to draw to
graphicsDevice.SetRenderTarget(renderTarget);
// Clear the buffer with transparent so the image is transparent
graphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
flameAnimation.Draw(spriteBatch);
spriteBatch.Draw(gunTextureToDraw, new Vector2(100, 0), Color.White);
if (!base.CurrentPowerUpLevel.Equals(PowerUpLevels.None)) {
powerUpAnimation.Draw(spriteBatch);
}
// DRAWS THE IMAGE TO THE RENDERTARGET
spriteBatch.Draw(shipSpriteSheet, new Rectangle(105,0, (int)Size.X, (int)Size.Y), shipRectangleToDraw, Color.White);
spriteBatch.End();
// Let the graphics device know you are done and return to drawing according to its dimensions
graphicsDevice.SetRenderTarget(null);
// utilize your render target
finishedShip = renderTarget;
}
Remember, in your case, you would initialize your RenderTarget with dimensions of 568x320 and draw according to that and not worry about any other possible sizes. Once you give the RenderTarget to the spritebatch to draw to the screen, it will "stretch" the image for you!
EDIT:
Sorry, I skimmed through the question and missed that you don't want to "stretch" your result. This could be achieved by drawing the final RenderTarget to your specified dimensions according to the graphics device.
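For a pixel-perfect result without any stretching, a common approach is to scale by the largest integer factor that fits the window and center the result. A sketch (windowWidth and windowHeight are placeholders for your back buffer size):

```csharp
// Largest integer scale of the 568x320 art that fits in the window.
int scale = Math.Min(windowWidth / 568, windowHeight / 320);
Rectangle dest = new Rectangle(
    (windowWidth - 568 * scale) / 2,   // center horizontally
    (windowHeight - 320 * scale) / 2,  // center vertically
    568 * scale,
    320 * scale);

// Point sampling keeps the pixels crisp when upscaling.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
    SamplerState.PointClamp, null, null);
spriteBatch.Draw(renderTarget, dest, Color.White);
spriteBatch.End();
```

With a 1136x640 window this gives a scale of 2 and no borders; other window sizes get centered bars instead of distorted pixels.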
Oh gosh, I've got it! Just pass SamplerState.PointClamp to your spriteBatch.Begin method to keep that cool pixel visual effect <3
spriteBatch.Begin(SpriteSortMode.Immediate,
BlendState.AlphaBlend,
SamplerState.PointClamp,
null,
null,
null,
cam.getTransformation(this.GraphicsDevice));
Is it possible to use a pixel shader inside a sprite?
I have created a simple pixel shader that just writes red color, for testing.
I have surrounded my Sprite.DrawImage(tex,...) call with the effect.Begin(...), BeginPass(0), and EndPass(), End() calls, but my shader does not seem to be used: my texture is drawn just normally.
I am not sure what language you are using. I will assume this is an XNA question.
Is it possible to use a pixel shader
inside a sprite?
Yes, you can load a shader file (HLSL, up to and including shader model 3 in XNA) and use it when drawing with SpriteBatch.
If you post sample code, it would be easier for us to see whether anything isn't set up properly. However, it looks like you have things in the right order, so I would check the shader code.
Your application code should look something like this:
Effect effect;
effect = Content.Load<Effect> ("customeffect"); //load "customeffect.fx"
effect.CurrentTechnique = effect.Techniques["customtechnique"];
effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Begin();
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
spriteBatch.Draw(texture, Vector2.Zero, null, Color.White, 0, new Vector2(20, 20), 1, SpriteEffects.None, 0);
spriteBatch.End();
pass.End();
}
effect.End();