I need to grab the texture of whatever has been drawn to the framebuffer so far, in other words whatever appears below the effect that I'm about to draw.
The use case is to feed this texture to a shader that applies a distortion to it.
Here's what I've tried so far, using MTLBlitCommandEncoder:
auto commandQueue = [device newCommandQueue];
auto commandBuffer = [commandQueue commandBuffer];
[commandBuffer enqueue];
id<MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:mtlDescriptor];
// perform encoding
[renderEncoder endEncoding];
// now after several render passes i need to commit all the rendering for behind the effect,
// so that the texture i am grabbing will omit whatever is about to be drawn after this point
[commandBuffer commit];
[commandBuffer waitUntilCompleted];
Next, I have to create a new command buffer, because if I don't I get an error when calling addCompletedHandler after this commit. I suppose a command buffer cannot be committed more than once, right?
auto commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[commandBuffer enqueue];
[blitEncoder copyFromTexture:drawable.texture sourceSlice:0 sourceLevel:level sourceOrigin:region.origin sourceSize:region.size toTexture:_mtlTexture destinationSlice:0 destinationLevel:level destinationOrigin:{xOffset, yOffset, 0}];
[blitEncoder endEncoding];
// continue with more other render encoding
This runs without any assert error, but the problem is that the depth test now comes out wrong: the effect is drawn behind models it should appear above. (When I use just one command buffer, without the blit, it renders correctly.)
I'm using these settings, which I assumed would preserve whatever was written to the depth texture:
mtlDescriptor.depthAttachment.loadAction = MTLLoadActionLoad;
mtlDescriptor.depthAttachment.storeAction = MTLStoreActionStore;
Can anyone point out where I went wrong?
EDIT:
I have tried using 2 command buffers, one after the other, without performing the blit, and the depth still comes out wrong.
Does that mean the depth test just can't work across a new command buffer?
Or is there a more recommended way to implement what I'm trying to achieve? I can't seem to find any examples.
EDIT2:
After more testing, the behaviour turns out to be very inconsistent, even with just one command buffer.
Sometimes the effect renders below (incorrect), sometimes correctly (part of it should render above, as verified in OpenGL). Commenting out a line of code or adding more lines changes the result seemingly at random. I am currently using depthCompareFunction MTLCompareFunctionLessEqual; if I change it to MTLCompareFunctionNotEqual, everything is always drawn on top, which is also wrong.
So I realise I was under the wrong impression that I needed to commit first in order to have what was drawn up to that point 'saved' to the texture.
Based on the info here https://github.com/gfx-rs/gfx/issues/2232
it seems that whatever render encoding has been performed is already reflected in the texture, so having 2 command buffers is not necessary at all.
As for the depth test issue, it was my mistake: I wasn't setting the viewport znear and zfar on the MTLRenderCommandEncoder to match the models'.
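Something along these lines is what I mean (a sketch only; targetWidth/targetHeight are placeholders, and znear/zfar must match the depth range used by the encoders that drew the models):
MTLViewport viewport = {
    .originX = 0.0, .originY = 0.0,
    .width = (double)targetWidth, .height = (double)targetHeight, // placeholders
    .znear = 0.0, .zfar = 1.0 // must match the models' viewport depth range
};
[renderEncoder setViewport:viewport];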
I am working on my Multiple Render Target pipeline, and I came across a curiosity in the docs that I don't fully understand; an hour of googling hasn't helped me find a clear answer.
You use gl.drawBuffers([...]) to link the locations used in your shader to actual color attachments in your framebuffer. Most of the expected parameters make sense:
gl.NONE - Make the shader output for this location NOT output to any Color attachment in the FBO
gl.COLOR_ATTACHMENT[0 - 15] - Make the shader location output to the specified color attachment.
But then we have this mysterious target (from the docs):
gl.BACK: Fragment shader output is written into the back color buffer.
I don't think I understand what the back color buffer is, especially relative to the currently attached FBO. As far as I know you don't specify a 'back color buffer' when making an FBO... so what does this mean? What is this 'back color buffer'?
In WebGL the backbuffer is effectively "the canvas". It's called the backbuffer because sometimes there is a frontbuffer. Canvases in WebGL are double buffered: one buffer is whatever is currently visible, the other is the buffer you're currently drawing to.
You can't use [gl.BACK, gl.COLOR_ATTACHMENT0]
When writing to a framebuffer, each entry can only be the attachment at that same index, or NONE. For example, imagine you have 4 attachments. Then the array you pass to drawBuffers is as follows:
gl.drawBuffers([
gl.COLOR_ATTACHMENT0, // OR gl.NONE,
gl.COLOR_ATTACHMENT1, // OR gl.NONE,
gl.COLOR_ATTACHMENT2, // OR gl.NONE,
gl.COLOR_ATTACHMENT3, // OR gl.NONE,
])
You cannot swap attachments around.
gl.drawBuffers([
gl.NONE,
gl.COLOR_ATTACHMENT0, // !! ERROR! This has to be COLOR_ATTACHMENT1 or NONE
])
You can't use gl.BACK here. gl.BACK is only for when writing to the canvas, in other words when the framebuffer is set to null, as in gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.drawBuffers([
gl.BACK, // OR gl.NONE
]);
Note: drawBuffers state is part of the state of each framebuffer (and of the canvas).
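As a rough sketch (fbo here stands for whatever framebuffer you created with two color attachments), the drawBuffers call always applies to whichever framebuffer is currently bound:
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]); // indices must match the attachment points
// ... draw into the FBO ...

gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to the canvas
gl.drawBuffers([gl.BACK]); // gl.BACK is only valid here
// ... draw to the canvas ...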
I need to implement offscreen rendering in Metal, copying the result to system memory, without drawing anything on screen.
This code works, but I'm not sure it's correct:
// rendering to offscreen texture
auto commandQueue = [device newCommandQueue];
auto commandBuffer = [commandQueue commandBuffer];
//[commandBuffer enqueue]; // Do I need this command?
id<MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:mtlDescriptor];
// perform encoding
[renderEncoder endEncoding];
[commandBuffer commit];
commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
// Copying offscreen texture to a new managed texture
[blitEncoder copyFromTexture:drawable.texture sourceSlice:0 sourceLevel:level sourceOrigin:region.origin sourceSize:region.size toTexture:_mtlTexture destinationSlice:0 destinationLevel:level destinationOrigin:{xOffset, yOffset, 0}];
[blitEncoder endEncoding];
[commandBuffer commit];
[commandBuffer waitUntilCompleted]; // I add this wait to get a fully completed texture for copying.
// Final stage - we copy a texture to our buffer in system memory
getBytes_bytesPerRow_fromRegion_mipmapLevel()
Do I need to call commandBuffer.enqueue?
Also, if I remove the waitUntilCompleted call, I only get half of a frame. It seems that getBytes:bytesPerRow:fromRegion:mipmapLevel: doesn't check whether rendering has finished.
Or should I create the offscreen texture as "managed" instead of "private" and then copy it directly into my buffer:
// creating offscreen texture "managed"
// rendering to offscreen texture
auto commandQueue = [device newCommandQueue];
auto commandBuffer = [commandQueue commandBuffer];
//[commandBuffer enqueue]; // Do I need this command?
id<MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:mtlDescriptor];
// perform encoding
[renderEncoder endEncoding];
[commandBuffer commit];
[commandBuffer waitUntilCompleted];
// Copying "managed" offscreen texture to my buffer
getBytes_bytesPerRow_fromRegion_mipmapLevel()
1) You don't need to call enqueue on the command buffer. This is used in situations where you want to explicitly specify the order of command buffers in a multithreaded scenario, which is irrelevant here. Your command buffer will be implicitly enqueued upon being committed.
2) You do indeed need to wait until the command buffer has completed before copying its contents to system memory. Normally, it's essential for the GPU and CPU to be able to run asynchronously without waiting on one another, but in your use case, you want the opposite, and waiting is how you keep them in lockstep.
3) If you don't need a copy of the rendered image as a texture for further work on the GPU, you should be able to omit the full-on blit entirely, provided the texture you're rendering to is in the managed storage mode. You can call synchronizeResource: on the blit encoder instead, which will make the results of the rendering work visible to the copy of the texture in system memory, from which you can then copy directly.
If for some reason the render target can't be in managed storage (I noticed you're using drawables—are these actually MTLDrawables provided by a view or layer, and if so, why?), you will in fact need to blit to either a managed texture or a shared/managed buffer in order to copy the bits on the CPU side.
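In code, that looks roughly like this (a sketch, assuming a render target created with MTLStorageModeManaged; offscreenTexture, width, height, bytesPerRow and buffer are placeholder names):
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder synchronizeResource:offscreenTexture]; // make GPU writes visible to the CPU-side copy
[blitEncoder endEncoding];
[commandBuffer commit];
[commandBuffer waitUntilCompleted];

// Now the bytes can be read directly from the managed texture
MTLRegion region = MTLRegionMake2D(0, 0, width, height);
[offscreenTexture getBytes:buffer
               bytesPerRow:bytesPerRow
                fromRegion:region
               mipmapLevel:0];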
I have a webgl project setup that uses 2 pass rendering to create effects on a texture.
Everything was working until recently, when Chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating the error "happens any time you read from a texture which is currently attached to the framebuffer".
However, I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2 pass processing" by uploading an HTML image to texture0 and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding breakpoints to my code I found that the error is not actually thrown until I bind the null framebuffer. So the drawArrays call isn't causing the error; it's binding the null framebuffer afterwards that sets it off.
Chrome, since version 83, performs conservative checks for feedback loops between the framebuffer and the active textures. These checks are likely too conservative and reject usage that should actually be allowed.
Under these new checks Chrome seems to disallow a render target being bound to any texture slot, even if that slot is not used by the program.
In your 2-pass rendering you likely have something like:
Initialize a render target: create a texture and attach it to a framebuffer.
Render to the target.
In step 1 you likely bind the texture with gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2 you need to unbind it with gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail, because the render target is still bound as a texture, even though that texture is never sampled by the program.
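In terms of the names used in the question, the setup would look roughly like this (a sketch only; width and height are placeholders):
// Step 1: configure texture1 as the render target (it is bound while being set up)
gl.bindTexture(gl.TEXTURE_2D, texture1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.bindTexture(gl.TEXTURE_2D, null); // <-- unbind it before it is rendered into

// Step 2: render into texture1 while sampling from texture0
gl.bindTexture(gl.TEXTURE_2D, texture0);
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);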
I'm having trouble setting up blending in Metal. Even when starting with the Hello Triangle example provided by Apple, using the following code
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = YES;
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorZero;
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorZero;
and the fragment function
fragment float4 fragmentShader(RasterizerData in [[stage_in]]) {
return float4(in.color.rgb, 0);
}
the triangle still draws completely opaque. What I want to achieve in the end is blending between two shapes by using different blending factors, but I thought I would start with a simple example to understand what is going on. What am I missing?
sourceAlphaBlendFactor and destinationAlphaBlendFactor have to do with constructing the blend for the alpha channel, i.e. they control the alpha that will be written into your destination buffer, which is not really visible to you. You are probably more interested in the RGB that is written into the framebuffer.
Try setting sourceRGBBlendFactor and destinationRGBBlendFactor instead. For traditional alpha blending, set sourceRGBBlendFactor to MTLBlendFactorSourceAlpha and destinationRGBBlendFactor to MTLBlendFactorOneMinusSourceAlpha.
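For example, a sketch based on the descriptor from the question (the alpha factors shown are just one common choice):
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = YES;
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperationAdd;
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperationAdd;
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorOne;
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorOneMinusSourceAlpha;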
I have a kernel function (compute shader) that reads nearby pixels of a pixel from a texture and based on the old nearby-pixel values updates the value of the current pixel (it's not a simple convolution).
I've tried creating a copy of the texture using a MTLBlitCommandEncoder and feeding the kernel function 2 textures - one read-only and the other write-only. Unfortunately, this approach is time-consuming on the GPU.
What is the most efficient (GPU- and memory-wise) way of reading old values from a texture while updating its content?
(Bit late but oh well)
There is no way to make this work with only one texture, because the GPU is a highly parallel processor: the kernel you wrote for a single pixel is invoked in parallel on all pixels, and you can't tell which one runs first.
So you definitely need 2 textures, one holding the "old" state and the other the "new" state. Between passes you swap their roles: the new one becomes the old one and vice versa. Here is some pseudo-Swift:
var currentText: MTLTexture!   // "old" state, read by the kernel
var nextText: MTLTexture!      // "new" state, written by the kernel
let semaphore = DispatchSemaphore(value: 1)

func update() {
    semaphore.wait() // Wait until the previous update has finished
    let commands = commandQueue.makeCommandBuffer()!
    let encoder = commands.makeComputeCommandEncoder()!
    encoder.setTexture(currentText, index: 0)
    encoder.setTexture(nextText, index: 1)
    encoder.dispatchThreadgroups(...)
    encoder.endEncoding()
    // When the GPU is done, swap the textures and signal that updating is done
    commands.addCompletedHandler { _ in
        swap(&currentText, &nextText)
        semaphore.signal()
    }
    commands.commit()
}
I have written plenty of iOS Metal code that samples (or reads) from the same texture it is rendering into. I am using the render pipeline, setting my texture as the render target attachment, and also loading it as a source texture. It works just fine.
To be clear, a more efficient approach is to use the [[color(0)]] attribute in your fragment shader, but that is only suitable if all you need is the value at the current fragment's position, not at other nearby positions. If you need to read from other positions in the render target, I would just bind the render target as a source texture for the fragment shader.
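As a sketch of the [[color(n)]] route (only available on GPUs that support programmable blending, e.g. Apple-family GPUs on iOS; RasterizerData is assumed to be the same stage-in struct as in the earlier blending example):
fragment float4 fragmentShader(RasterizerData in [[stage_in]],
                               float4 dst [[color(0)]]) // current value of color attachment 0 at this pixel
{
    // Example: blend the incoming color with what is already in the render target
    return mix(dst, float4(in.color.rgb, 1.0), 0.5);
}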