DirectX Back Buffer code in C++

void D3DApp::OnResize()
{
    assert(md3dImmediateContext);
    assert(md3dDevice);
    assert(mSwapChain);

    // Release the old views, as they hold references to the buffers we
    // will be destroying. Also release the old depth/stencil buffer.
    ReleaseCOM(mRenderTargetView);
    ReleaseCOM(mDepthStencilView);
    ReleaseCOM(mDepthStencilBuffer);

    // Resize the swap chain and recreate the render target view.
    HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
    ID3D11Texture2D* backBuffer;
    HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
    HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
    ReleaseCOM(backBuffer);
I am having a bit of trouble understanding exactly what this code is actually doing. I think it is getting the current back buffer, storing it in memory, and then rendering it to the screen again? What could the retrieved buffer actually contain? I am very confused. My program is using an 800 x 600 pixel window.

As the function name points out, it just resets the back buffer and the render target, and this function should be called whenever the render window is resized.
HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
In a DirectX application, we always make the back buffer the same size as the window, so when the window size changes, we should update the back buffer size accordingly; otherwise your scene will be stretched.
ID3D11Texture2D* backBuffer;
HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
ReleaseCOM(backBuffer);
The render target view was created from the back buffer, so when the back buffer is recreated, the view has to be recreated from it as well.
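The excerpt above stops right after the render target view is rebuilt, but the depth/stencil buffer and view that were released at the top also have to be recreated at the new client size, and the new views and viewport re-bound to the pipeline. A minimal sketch of what that continuation typically looks like, assuming the same member names as in the excerpt (and that mDepthStencilBuffer is an ID3D11Texture2D*), with a plain non-MSAA depth buffer; the depth format here is just a common choice, not something dictated by the code above:
// Recreate the depth/stencil buffer to match the new client area.
D3D11_TEXTURE2D_DESC depthStencilDesc;
depthStencilDesc.Width              = mClientWidth;
depthStencilDesc.Height             = mClientHeight;
depthStencilDesc.MipLevels          = 1;
depthStencilDesc.ArraySize          = 1;
depthStencilDesc.Format             = DXGI_FORMAT_D24_UNORM_S8_UINT; // assumed format
depthStencilDesc.SampleDesc.Count   = 1;  // no multisampling in this sketch
depthStencilDesc.SampleDesc.Quality = 0;
depthStencilDesc.Usage              = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags          = D3D11_BIND_DEPTH_STENCIL;
depthStencilDesc.CPUAccessFlags     = 0;
depthStencilDesc.MiscFlags          = 0;
HR(md3dDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer));
HR(md3dDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView));
// Bind the new views to the output-merger stage.
md3dImmediateContext->OMSetRenderTargets(1, &mRenderTargetView, mDepthStencilView);
// The viewport must cover the new client area as well.
D3D11_VIEWPORT vp;
vp.TopLeftX = 0.0f;
vp.TopLeftY = 0.0f;
vp.Width    = static_cast<float>(mClientWidth);
vp.Height   = static_cast<float>(mClientHeight);
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
md3dImmediateContext->RSSetViewports(1, &vp);
So the function as a whole throws away every size-dependent resource (back buffer views, depth/stencil buffer, viewport) and rebuilds it at the new window size; nothing from the old frame's contents is kept.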

Related

WebGL Multi-Render Target: For drawBuffers, what does gl.BACK do?

I am working on my Multiple Render Target pipeline and came across a curiosity in the docs that I don't fully understand; an hour of googling hasn't helped me find a clear answer.
You use gl.drawBuffers([...]) to link the locations used in your shader to actual color attachments in your framebuffer. Most of the expected parameters make sense:
gl.NONE - Make the shader output for this location NOT output to any Color attachment in the FBO
gl.COLOR_ATTACHMENT[0 - 15] - Make the shader location output to the specified color attachment.
But then we have this mysterious target (from the docs):
gl.BACK: Fragment shader output is written into the back color buffer.
I don't think I understand what the back color buffer is, especially relative to the currently attached FBO. As far as I know you don't specify a 'back color buffer' when making an FBO... so what does this mean? What is this 'back color buffer'?
In WebGL the backbuffer is effectively "the canvas". It's called the backbuffer because sometimes there is a frontbuffer. Canvases in WebGL are double buffered: one buffer is whatever is currently visible, the other is the buffer you're currently drawing to.
You can't use [gl.BACK, gl.COLOR_ATTACHMENT0]
When writing to a framebuffer, entry N of the array can only be gl.COLOR_ATTACHMENTN or gl.NONE. For example, imagine you have 4 attachments. Then the array you pass to drawBuffers is as follows:
gl.drawBuffers([
  gl.COLOR_ATTACHMENT0, // OR gl.NONE,
  gl.COLOR_ATTACHMENT1, // OR gl.NONE,
  gl.COLOR_ATTACHMENT2, // OR gl.NONE,
  gl.COLOR_ATTACHMENT3, // OR gl.NONE,
])
You cannot swap attachments around.
gl.drawBuffers([
  gl.NONE,
  gl.COLOR_ATTACHMENT0, // !! ERROR! This has to be COLOR_ATTACHMENT1 or NONE
])
You can't use gl.BACK here. gl.BACK is only for when you are writing to the canvas, in other words when the framebuffer is set to null, as in gl.bindFramebuffer(gl.FRAMEBUFFER, null):
gl.drawBuffers([
  gl.BACK, // OR gl.NONE
]);
Note: drawBuffers state is part of the state of each framebuffer (and of the canvas).

WebGL feedback loop formed between Framebuffer and active Texture

I have a WebGL project set up that uses 2-pass rendering to create effects on a texture.
Everything was working until recently, when Chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating the error "happens any time you read from a texture which is currently attached to the framebuffer".
However, I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2 pass processing" by uploading an HTML image to texture0 and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding break points to my code I found that the error is not actually thrown until I bind the null frame buffer. So the drawArrays call isn't causing the error, it's binding the null frame buffer afterwards that sets it off.
Since version 83, Chrome performs conservative checks for feedback loops between the framebuffer and active textures. These checks are likely too conservative and flag usage that should actually be allowed.
With these new checks, Chrome seems to disallow a render target being bound to any texture unit, even if that unit is not used by the program.
In your 2 pass rendering you likely have something like:
Initialize a render target and create a texture that points to a framebuffer.
Render to the target.
In step 1 you likely bind a texture using gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2, you then need to unbind that texture using gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail, because the render target is bound as a texture, even though this texture is never sampled by the program.

Combining Multiple Render Targets (MRT) with Multisampling fails on iOS devices, not on the simulator

I'm trying to write an OpenGL ES 3.0 Swift app on iOS (>= 8.0) that makes use of Multiple Render Targets (MRT). To get proper antialiasing, I enabled multisampling.
In detail, my rendering architecture looks like this:
The Display framebuffer has one renderbuffer attached:
The Display renderbuffer : controlled by iOS via EAGLContext.renderbufferStorage(), attached as GL_COLOR_ATTACHMENT0
The Sample framebuffer has two renderbuffers attached:
The Color renderbuffer I : GL_RGBA8, multisampled, attached as GL_COLOR_ATTACHMENT0
The Color renderbuffer II : GL_RGBA8, multisampled, attached as GL_COLOR_ATTACHMENT1
Whenever my layer changes its bounds, I resize all my renderbuffers as Apple does it in the GLPaint sample.
I created a minimal example for you. The rendering itself looks like this:
//Set the GL context, bind the sample framebuffer and specify the viewport:
EAGLContext.setCurrentContext(context)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), self.sampleFramebuffer)
glViewport(0, 0, self.layerWidth, self.layerHeight)
//Clear both render targets:
glClearBufferfv(GLenum(GL_COLOR), 0, self.colorRenderbufferIClearColor)
glClearBufferfv(GLenum(GL_COLOR), 1, self.colorRenderbufferIIClearColor)
//Specify the vertex attribute (only position, 6 floats for a triangle):
glEnableVertexAttribArray(0)
glVertexAttribPointer(0, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), GLsizei(2 * sizeof(GLfloat)), nil)
//Use the shader program and render a single triangle:
glUseProgram(self.program)
glDrawArrays(GLenum(GL_TRIANGLES), 0, 3)
//Prepare both framebuffers as source and destination to do multisampling:
glBindFramebuffer(GLenum(GL_READ_FRAMEBUFFER), self.sampleFramebuffer)
glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), self.displayFramebuffer)
//Specify from which of the attachments we do the multisampling.
//This is GL_COLOR_ATTACHMENT0 or GL_COLOR_ATTACHMENT1.
glReadBuffer(self.blitAttachment)
//Transfer data between framebuffers and do multisampling:
glBlitFramebuffer(0, 0, self.layerWidth, self.layerHeight, 0, 0, self.layerWidth, self.layerHeight, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_LINEAR))
//Invalidate the sample framebuffer for this pass:
glInvalidateFramebuffer(GLenum(GL_READ_FRAMEBUFFER), 2, [GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_COLOR_ATTACHMENT1)])
//Bind the display renderbuffer and present it:
glBindRenderbuffer(GLenum(GL_RENDERBUFFER), self.displayRenderbuffer)
self.eaglContext.presentRenderbuffer(Int(GL_RENDERBUFFER))
Now to the problem: My sample project draws a blue triangle on red background into the first render target (color renderbuffer I) and a purple triangle on green background into the second render target (color renderbuffer II). By setting blitAttachment in the code, you can select which of the two attachments gets resolved into the display framebuffer.
Everything works as expected on the iOS simulator (all devices, all iOS versions).
I only have access to an iPad Air (Model A1475, iOS 9.3.4) at the moment. But on the device, there are problems:
If I disable multisampling (samples = 0 in glRenderbufferStorageMultisample()), everything works.
If I enable multisampling (samples = 4), I can only blit from GL_COLOR_ATTACHMENT0 (which is color renderbuffer I).
Blitting from GL_COLOR_ATTACHMENT1 produces the same result (blue triangle on red), but should produce the other one (purple triangle on green).
You can reproduce the problem with my attached sample code (DropBox).
So there are two questions:
Could somebody please confirm that this works on the simulator, but not on real devices?
Does anybody have an idea what is wrong in my code? Or is this a known bug?
Thanks in advance!
There seems to be a bit of strange behavior in this API. The code you linked does indeed work on the simulator, but the simulator is quite different from the actual device, so I suggest you never use it as a reference.
What seems to happen is that the renderbuffer is discarded too quickly. Why and how this happens I have no idea. You blit the buffers and then invalidate them, so simply removing the buffer invalidation will remove the issue. But removing the invalidation is not recommended; rather, ensure that all the work has been performed by the GPU before you invalidate the buffers. That means simply calling flush.
Before you call glInvalidateFramebuffer(GLenum(GL_READ_FRAMEBUFFER), 2, [GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_COLOR_ATTACHMENT1)]), simply call glFlush():
//Resolve from source to destination while applying multisampling:
glBlitFramebuffer(0, 0, self.layerWidth, self.layerHeight, 0, 0, self.layerWidth, self.layerHeight, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_LINEAR))
OpenGLESView.checkError(dbgDomain, andDbgText: "Failed to blit between framebuffers")
glFlush()
//Invalidate the whole sample framebuffer:
glInvalidateFramebuffer(GLenum(GL_READ_FRAMEBUFFER), 2, [GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_COLOR_ATTACHMENT1)])
OpenGLESView.checkError(dbgDomain, andDbgText: "Failed to invalidate sample framebuffer")

Keep pixel aspect with different resolution in xna game

I'm currently developing an old-school game with XNA 4.
My graphics assets are based on a 568x320 resolution (16:9 ratio). I want to change my window resolution (1136x640, for example) and have my graphics scaled without stretching, so that they keep their pixel aspect.
How can I achieve this?
You could use a RenderTarget to achieve your goal. It sounds like you don't want to have to render differently for every possible screen size, so if your graphics aren't dependent on other graphical features like a mouse, I would use a RenderTarget, draw all the pixel data to that, and afterwards draw it to the actual screen, letting the screen stretch it.
This technique can be used in other ways too. I use it to draw objects in my game, so I can easily change the rotation and location without having to recalculate every sprite for the object.
Example:
void PreDraw()
{
    // You need your graphics device to render to
    GraphicsDevice graphicsDevice = Settings.GlobalGraphicsDevice;
    // You need a spritebatch to begin/end a draw call
    SpriteBatch spriteBatch = Settings.GlobalSpriteBatch;
    // Tell the graphics device where to draw to
    graphicsDevice.SetRenderTarget(renderTarget);
    // Clear the buffer with transparent so the image is transparent
    graphicsDevice.Clear(Color.Transparent);
    spriteBatch.Begin();
    flameAnimation.Draw(spriteBatch);
    spriteBatch.Draw(gunTextureToDraw, new Vector2(100, 0), Color.White);
    if (!base.CurrentPowerUpLevel.Equals(PowerUpLevels.None)) {
        powerUpAnimation.Draw(spriteBatch);
    }
    // Draw the image to the render target
    spriteBatch.Draw(shipSpriteSheet, new Rectangle(105, 0, (int)Size.X, (int)Size.Y), shipRectangleToDraw, Color.White);
    spriteBatch.End();
    // Let the graphics device know you are done and return to drawing according to its dimensions
    graphicsDevice.SetRenderTarget(null);
    // Utilize your render target
    finishedShip = renderTarget;
}
Remember, in your case, you would initialize your RenderTarget with dimensions of 568x320 and draw according to that and not worry about any other possible sizes. Once you give the RenderTarget to the spritebatch to draw to the screen, it will "stretch" the image for you!
EDIT:
Sorry, I skimmed through the question and missed that you don't want to "stretch" your result. You can achieve that by drawing the final RenderTarget to the back buffer at the dimensions you choose, rather than letting it fill the whole screen.
Oh gosh, I've got it! Just pass SamplerState.PointClamp to your spriteBatch.Begin method to keep that cool pixelated visual effect:
spriteBatch.Begin(SpriteSortMode.Immediate,
                  BlendState.AlphaBlend,
                  SamplerState.PointClamp,
                  null,
                  null,
                  null,
                  cam.getTransformation(this.GraphicsDevice));

directx texture dimensions

So I've discovered that my graphics card automatically resizes textures to powers of 2. This isn't usually a problem, but I need to render only a portion of my texture, and to do that I must know the dimensions it has been resized to...
For example:
I load a picture that is 370x300 pixels into my texture and try to draw it with a specific source rectangle:
RECT test;
test.left = 0;
test.top = 0;
test.right = 370;
test.bottom = 300;
lpSpriteHandler->Draw(
    lpTexture,
    &test,  // srcRect
    NULL,   // center
    NULL,   // position
    D3DCOLOR_XRGB(255, 255, 255)
);
but since the texture has been automatically resized (in this case) to 512x512, I see only a portion of my original texture.
The question is,
is there a function or something I can call to find the dimensions of my texture?
(I've tried googling this but always get some weird crap about Objects and HSL or something)
You may get file information by using this call:
D3DXIMAGE_INFO info;
D3DXGetImageInfoFromFile(file_name, &info);
Though even when you know the original image size, the texture will still get resized on load. This will obviously affect texture quality. The resizing is not a big deal when the texture is applied to a mesh (it gets resampled anyway), but for drawing sprites it can be a concern. To work around this, I would suggest creating a surface, loading it via D3DXLoadSurfaceFromFile, and then copying it into the "pow2"-sized texture.
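If you just want to ask the texture you already loaded how big it actually ended up, you can query its level-0 description. A short sketch, assuming lpTexture is the IDirect3DTexture9* used in the Draw call above:
D3DSURFACE_DESC desc;
lpTexture->GetLevelDesc(0, &desc); // query the top mip level
// desc.Width and desc.Height hold the size the texture really has in memory
// (e.g. 512x512 here); compare them against the D3DXIMAGE_INFO above to see
// whether the load rounded your 370x300 image up.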
And slightly off topic: are you definitely sure about your card's capabilities? It may be that your card does in fact support arbitrary texture sizes, but you are using D3DXCreateTextureFromFile(), which by default enforces pow2 sizes. To avoid this, try using the extended version of this routine:
IDirect3DTexture9* texture;
D3DXCreateTextureFromFileEx(
    device, file_name, D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT, 0,
    D3DFMT_UNKNOWN, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT, 0, NULL, NULL,
    &texture);
If your hardware supports non-pow2 textures, your file will be loaded as it is. If the hardware cannot handle it, the method will fail.
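If the extended call fails, the surface-copy workaround mentioned earlier could look roughly like this. This is only a sketch: device and file_name are the same names used in the calls above, while the 512x512 padding size and the A8R8G8B8 format are assumptions, not something from the original answer:
// Create a pow2-sized texture manually and load the image into its top-left corner,
// so the pixels keep a 1:1 mapping and the 370x300 source RECT from above still works.
IDirect3DTexture9* paddedTexture = NULL;
device->CreateTexture(512, 512, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &paddedTexture, NULL);
IDirect3DSurface9* level0 = NULL;
paddedTexture->GetSurfaceLevel(0, &level0);
// Copy the file into the destination rectangle without any stretching.
RECT dest = { 0, 0, 370, 300 };
D3DXLoadSurfaceFromFile(level0, NULL, &dest, file_name, NULL, D3DX_FILTER_NONE, 0, NULL);
level0->Release();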
