Draw `MTLTexture` to `CAMetalLayer` - ios

I am drawing stuff onto an off-screen MTLTexture. (using Skia Canvas)
At a later point, I want to render this MTLTexture into a CAMetalLayer to display it on the screen.
Since I was using Skia for the off-screen drawing operations, my code is quite simple and I don't have the typical Metal setup (no MTLLibrary, MTLRenderPipelineDescriptor, MTLRenderPassDescriptor, MTLRenderEncoder, etc).
I now simply want to draw that MTLTexture into a CAMetalLayer, but haven't figured out how to do so simply.
This is where I draw my stuff to the MTLTexture _texture (Skia code):
- (void) renderNewFrameToCanvas:(Frame)frame {
if (_skContext == nullptr) {
GrContextOptions grContextOptions;
_skContext = GrDirectContext::MakeMetal((__bridge void*)_device,
// TODO: Use separate command queue for this context?
(__bridge void*)_commandQueue,
grContextOptions);
}
@autoreleasepool {
// Lock Mutex to block the runLoop from overwriting the _texture
std::lock_guard lockGuard(_textureMutex);
auto texture = _texture;
// Get & Lock the writeable Texture from the Metal Drawable
GrMtlTextureInfo fbInfo;
fbInfo.fTexture.retain((__bridge void*)texture);
GrBackendRenderTarget backendRT(texture.width,
texture.height,
1,
fbInfo);
// Create a Skia Surface from the writable Texture
auto skSurface = SkSurface::MakeFromBackendRenderTarget(_skContext.get(),
backendRT,
kTopLeft_GrSurfaceOrigin,
kBGRA_8888_SkColorType,
nullptr,
nullptr);
auto canvas = skSurface->getCanvas();
auto surface = canvas->getSurface();
// Clear anything that's currently on the Texture
canvas->clear(SkColors::kBlack);
// Converts the Frame to an SkImage - RGB.
auto image = SkImageHelpers::convertFrameToSkImage(_skContext.get(), frame);
canvas->drawImage(image, 0, 0);
// Flush all appended operations on the canvas and commit it to the SkSurface
canvas->flush();
// TODO: Do I need to commit?
/*
id<MTLCommandBuffer> commandBuffer([_commandQueue commandBuffer]);
[commandBuffer commit];
*/
}
}
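Regarding the TODO above about committing: canvas->flush() only records the pending work. A hedged sketch of what explicitly submitting it might look like, assuming GrDirectContext::flushAndSubmit() is available in the Skia build in use (it is in recent versions that expose GrDirectContext):
// Assumption: _skContext is the sk_sp<GrDirectContext> created above.
canvas->flush();               // record pending canvas operations into the SkSurface
_skContext->flushAndSubmit();  // flush Skia's GPU work and submit its Metal command buffer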
Now, since I have the MTLTexture _texture in memory, I want to draw it to the CAMetalLayer _layer. This is what I have so far:
- (void) setup {
// I set up a runLoop that calls render() 60 times a second.
// [removed to simplify]
_renderPassDescriptor = [[MTLRenderPassDescriptor alloc] init];
// Load the compiled Metal shader (PassThrough.metal)
auto baseBundle = [NSBundle mainBundle];
auto resourceBundleUrl = [baseBundle URLForResource:@"VisionCamera" withExtension:@"bundle"];
auto resourceBundle = [[NSBundle alloc] initWithURL:resourceBundleUrl];
auto shaderLibraryUrl = [resourceBundle URLForResource:@"PassThrough" withExtension:@"metallib"];
NSError* libLoadError;
id<MTLLibrary> defaultLibrary = [_device newLibraryWithURL:shaderLibraryUrl error:&libLoadError];
id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexPassThrough"];
id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"fragmentPassThrough"];
if (vertexFunction == nil || fragmentFunction == nil) {
throw std::runtime_error("VisionCamera: Failed to load PassThrough.metal shader!");
}
// Create a Pipeline Descriptor that connects the CPU draw operations to the GPU Metal context
auto pipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
pipelineDescriptor.label = #"VisionCamera: Frame Texture -> Layer Pipeline";
pipelineDescriptor.vertexFunction = vertexFunction;
pipelineDescriptor.fragmentFunction = fragmentFunction;
pipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
NSError* error;
_pipelineState = [_device newRenderPipelineStateWithDescriptor:pipelineDescriptor error:&error];
if (error != nil) {
throw std::runtime_error("VisionCamera: Failed to create render pipeline state!");
}
}
// gets called 60 times a second to draw to the screen
- (void) render {
@autoreleasepool {
// Blocks until the next Frame is ready (16ms at 60 FPS)
auto drawable = [_layer nextDrawable];
std::unique_lock lock(_textureMutex);
auto texture = _texture;
MTLRenderPassDescriptor* renderPassDescriptor = [[MTLRenderPassDescriptor alloc] init];
renderPassDescriptor.colorAttachments[0].texture = drawable.texture;
renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor();
id<MTLCommandBuffer> commandBuffer([_commandQueue commandBuffer]);
auto renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
[renderEncoder setLabel:#"VisionCamera: PreviewView Texture -> Layer"];
[renderEncoder setRenderPipelineState:_pipelineState];
[renderEncoder setFragmentTexture:texture atIndex:0];
[renderEncoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];
lock.unlock();
}
}
And along with that, I have created the PassThrough.metal shader which is just for passing through a texture:
#include <metal_stdlib>
using namespace metal;
// Vertex input/output structure for passing results from vertex shader to fragment shader
struct VertexIO
{
float4 position [[position]];
float2 textureCoord [[user(texturecoord)]];
};
// Vertex shader for a textured quad
vertex VertexIO vertexPassThrough(const device packed_float4 *pPosition [[ buffer(0) ]],
                                  const device packed_float2 *pTexCoords [[ buffer(1) ]],
                                  uint vid [[ vertex_id ]]) {
VertexIO outVertex;
outVertex.position = pPosition[vid];
outVertex.textureCoord = pTexCoords[vid];
return outVertex;
}
// Fragment shader for a textured quad
fragment half4 fragmentPassThrough(VertexIO inputFragment [[ stage_in ]],
                                   texture2d<half> inputTexture [[ texture(0) ]],
                                   sampler samplr [[ sampler(0) ]]) {
return inputTexture.sample(samplr, inputFragment.textureCoord);
}
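Note that this shader declares a position buffer at [[ buffer(0) ]], a texture-coordinate buffer at [[ buffer(1) ]] and a sampler at [[ sampler(0) ]], yet render() above never binds any of these and never issues a draw call. A rough, untested sketch of what the encoder would additionally need before endEncoding (the quad values and names are illustrative, not from the question):
// Full-screen quad as a triangle strip: positions in NDC plus matching texture coordinates.
static const float quadPositions[] = { -1, -1, 0, 1,   1, -1, 0, 1,   -1, 1, 0, 1,   1, 1, 0, 1 };
static const float quadTexCoords[] = {  0,  1,         1,  1,         0,  0,         1,  0 };
[renderEncoder setVertexBytes:quadPositions length:sizeof(quadPositions) atIndex:0];
[renderEncoder setVertexBytes:quadTexCoords length:sizeof(quadTexCoords) atIndex:1];
// A sampler state (would normally be created once in setup, not per frame).
MTLSamplerDescriptor* samplerDescriptor = [[MTLSamplerDescriptor alloc] init];
samplerDescriptor.minFilter = MTLSamplerMinMagFilterLinear;
samplerDescriptor.magFilter = MTLSamplerMinMagFilterLinear;
id<MTLSamplerState> samplerState = [_device newSamplerStateWithDescriptor:samplerDescriptor];
[renderEncoder setFragmentSamplerState:samplerState atIndex:0];
// Actually draw the quad.
[renderEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip vertexStart:0 vertexCount:4];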
Running this crashes the app with the following exception:
validateRenderPassDescriptor:782: failed assertion `RenderPass Descriptor Validation
Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)
This now raises three questions for me:

1. Do I have to do all of that Metal setup (packaging the PassThrough.metal shader, the render pass, etc.) just to draw the MTLTexture to the CAMetalLayer? Is there no simpler way? (See the sketch after this list.)
2. Why is the code above failing?
3. When is the drawing from Skia actually committed to the MTLTexture? Do I need to commit the command buffer (as seen in my TODO)?
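On question 1, one possible simpler route (not something from the question itself) is to skip the pass-through pipeline entirely and blit the off-screen texture into the drawable. A minimal sketch, assuming the layer's framebufferOnly is set to NO and both textures share the same pixel format and a compatible size:
// Assumptions: _layer.framebufferOnly = NO, _texture has the same pixel format
// as the drawable's texture, and is at least as large as the drawable.
id<CAMetalDrawable> drawable = [_layer nextDrawable];
id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder copyFromTexture:_texture
                 sourceSlice:0
                 sourceLevel:0
                sourceOrigin:MTLOriginMake(0, 0, 0)
                  sourceSize:MTLSizeMake(drawable.texture.width, drawable.texture.height, 1)
                   toTexture:drawable.texture
            destinationSlice:0
            destinationLevel:0
           destinationOrigin:MTLOriginMake(0, 0, 0)];
[blitEncoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];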

Related

How to simplify Metal renderPass to draw 3 source textures to one destination texture

My input to frame rendering is three MTLTextures (identical size and type), three different viewports, and three different scissor rects. My vertex shader and fragment shader are very simple (they render a texture to a quad). My current code creates three MTLRenderCommandEncoders in a loop on one command buffer and commits the buffer. It works, but there must be a way to do the same using only one MTLRenderCommandEncoder. I started with:
- (void)setViewports:(const MTLViewport *)viewports count:(NSUInteger)count;
- (void)setScissorRects:(const MTLScissorRect *)scissorRects count:(NSUInteger)count;
and I don't know how to set all three textures on the encoder. It could be:
- (void)setVertexTextures:(id<MTLTexture> _Nullable const *)textures withRange:(NSRange)range;
but I don't know how the encoder passes (or how to pass) textures to the shaders. Does the encoder pass my whole array of MTLTextures to the shader at once, so that I have to pick the right one there? Is there a way to tell the encoder to pass each texture (from the array) to the shader individually? Or do you have a better idea how to do it?
Thank you.
It was easier than I thought. MTLRenderCommandEncoder encodes all drawing commands sequentially, in the order in which they are 'written' to the encoder, until the endEncoding message is sent to it.
Pseudo code for writing multiple textures to the same target texture:
_commandBuffer = [_commandQueue commandBuffer];
_drawableRenderDescriptor.colorAttachments[0].texture = drawable.texture;
_drawableRenderDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
id <MTLRenderCommandEncoder> renderEncoder = [_commandBuffer renderCommandEncoderWithDescriptor: _drawableRenderDescriptor];
[renderEncoder setRenderPipelineState: _pipelineState];
[renderEncoder setVertexBuffer: _vertices offset: 0 atIndex: PPVertexInputIndexVertices];
for (id <MTLTexture> texture in textures){
MTLScissorRect scissorRect = [self scissorRectForTexture: texture];
MTLViewport viewPort = [self viewPortForTexture: texture];
[renderEncoder setViewport: viewPort];
[renderEncoder setScissorRect: scissorRect];
[renderEncoder setFragmentTexture: texture atIndex: 0];
[renderEncoder drawPrimitives: MTLPrimitiveTypeTriangle vertexStart: 0 vertexCount: _numVertices];
}
[renderEncoder endEncoding];
[_commandBuffer presentDrawable: drawable];
[_commandBuffer commit];
[_commandBuffer waitUntilScheduled];
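As a side note on the setFragmentTextures:withRange: idea raised in the question: Metal also allows binding several textures in one call, with the fragment shader declaring them as an array (e.g. array<texture2d<half>, 3> textures [[ texture(0) ]]) and indexing into it itself. A hedged sketch of the encoder side only, with illustrative names, and not the approach used above:
// Bind three textures to fragment texture slots 0..2 in a single call.
id<MTLTexture> textureArray[3] = { textureA, textureB, textureC }; // illustrative names
[renderEncoder setFragmentTextures:textureArray withRange:NSMakeRange(0, 3)];
// The fragment shader then picks textures[i] itself, e.g. via an index passed in a buffer.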

In metal how to clear the depth buffer or the stencil buffer?

In Metal, when I already have a render command encoder and have already done some work with it, how can I clear the depth buffer or the stencil buffer (but not both; I need to keep one)? For example, in OpenGL we have glClearDepthf / GL_DEPTH_BUFFER_BIT and glClearStencil / GL_STENCIL_BUFFER_BIT, but I didn't find any equivalent in Metal.
While it's true that Metal doesn't provide a mechanism to clear the depth or stencil buffers in the middle of a rendering pass, it's possible to create a near-trivial pipeline state that allows you to do so as selectively as you like.
In the course of porting some OpenGL code to Metal, I found myself with a need to clear a section of the depth buffer that corresponds to the bounds of the currently set viewport. Here was my solution:
In my setup code, I create a specialized MTLRenderPipelineState and MTLDepthStencilState that are used only for the purpose of clearing the depth buffer, and stash them in my MTKView subclass with my other long-lived resources:
@property (nonatomic, retain) id<MTLRenderPipelineState> pipelineDepthClear;
@property (nonatomic, retain) id<MTLDepthStencilState> depthStencilClear;
[...]
// Special depth stencil state for clearing the depth buffer
MTLDepthStencilDescriptor *depthStencilDescriptor = [[MTLDepthStencilDescriptor alloc] init];
// Don't actually perform a depth test, just always write the buffer
depthStencilDescriptor.depthCompareFunction = MTLCompareFunctionAlways;
depthStencilDescriptor.depthWriteEnabled = YES;
depthStencilDescriptor.label = @"depthStencilClear";
self.depthStencilClear = [self.device newDepthStencilStateWithDescriptor:depthStencilDescriptor];
// Special pipeline state just for clearing the depth buffer.
MTLRenderPipelineDescriptor *renderPipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
// Omit the color attachment, since we don't want to write the color buffer for this case.
renderPipelineDescriptor.depthAttachmentPixelFormat = self.depthStencilPixelFormat;
renderPipelineDescriptor.rasterSampleCount = self.sampleCount;
renderPipelineDescriptor.vertexFunction = [self.library newFunctionWithName:@"vertex_depth_clear"];
renderPipelineDescriptor.vertexFunction.label = @"vertexDepthClear";
renderPipelineDescriptor.fragmentFunction = [self.library newFunctionWithName:@"fragment_depth_clear"];
renderPipelineDescriptor.fragmentFunction.label = @"fragmentDepthClear";
MTLVertexDescriptor *vertexDescriptor = [[MTLVertexDescriptor alloc] init];
vertexDescriptor.attributes[0].format = MTLVertexFormatFloat2;
vertexDescriptor.attributes[0].offset = 0;
vertexDescriptor.attributes[0].bufferIndex = 0;
vertexDescriptor.layouts[0].stepRate = 1;
vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
vertexDescriptor.layouts[0].stride = 8;
renderPipelineDescriptor.vertexDescriptor = vertexDescriptor;
NSError* error = NULL;
renderPipelineDescriptor.label = @"pipelineDepthClear";
self.pipelineDepthClear = [self.device newRenderPipelineStateWithDescriptor:renderPipelineDescriptor error:&error];
and set up the matching vertex and fragment functions in my .metal file:
struct DepthClearVertexIn
{
float2 position [[ attribute(0) ]];
};
struct DepthClearVertexOut
{
float4 position [[ position ]];
};
struct DepthClearFragmentOut
{
float depth [[depth(any)]];
};
vertex DepthClearVertexOut
vertex_depth_clear( DepthClearVertexIn in [[ stage_in ]])
{
DepthClearVertexOut out;
// Just pass the position through. We're clearing in NDC space.
out.position = float4(in.position, 0.5, 1.0);
return out;
}
fragment DepthClearFragmentOut fragment_depth_clear()
{
DepthClearFragmentOut out;
out.depth = 1.0;
return out;
}
Finally, the body of my clearDepthBuffer() method looks like this:
// Set up the pipeline and depth/stencil state to write a clear value to only the depth buffer.
[view.commandEncoder setDepthStencilState:view.depthStencilClear];
[view.commandEncoder setRenderPipelineState:view.pipelineDepthClear];
// Normalized Device Coordinates of a tristrip we'll draw to clear the buffer
// (the vertex shader set in pipelineDepthClear ignores all transforms and just passes these through)
float clearCoords[8] = {
-1, -1,
1, -1,
-1, 1,
1, 1
};
[view.commandEncoder setVertexBytes:clearCoords length:sizeof(float) * 8 atIndex:0];
[view.commandEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip vertexStart:0 vertexCount:4];
// Be sure to reset the setDepthStencilState and setRenderPipelineState for further drawing
Since the vertex shader doesn't transform the coordinates at all, I specify the input geometry in NDC space, so a rectangle from (-1, -1) to (1, 1) covers the entire viewport. The same technique could be used to clear any portion of your depth buffer if you set up the geometry and/or transforms appropriately.
A similar technique should work for clearing stencil buffers, but I'll leave that as an exercise to the reader. ;)
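For the curious, a hedged sketch of what that stencil variant might look like (untested, names illustrative): keep the same full-screen tristrip and clear-style pipeline (additionally setting stencilAttachmentPixelFormat on the pipeline descriptor), and pair it with a depth/stencil state that always replaces the stencil value with the encoder's reference value:
// Depth/stencil state that writes the stencil reference value wherever the quad is drawn,
// while leaving the depth buffer untouched.
MTLStencilDescriptor *stencilWrite = [[MTLStencilDescriptor alloc] init];
stencilWrite.stencilCompareFunction = MTLCompareFunctionAlways;
stencilWrite.depthStencilPassOperation = MTLStencilOperationReplace;
MTLDepthStencilDescriptor *stencilClearDescriptor = [[MTLDepthStencilDescriptor alloc] init];
stencilClearDescriptor.depthCompareFunction = MTLCompareFunctionAlways;
stencilClearDescriptor.depthWriteEnabled = NO; // keep the depth buffer
stencilClearDescriptor.frontFaceStencil = stencilWrite;
stencilClearDescriptor.backFaceStencil = stencilWrite;
id<MTLDepthStencilState> stencilClearState = [self.device newDepthStencilStateWithDescriptor:stencilClearDescriptor];
// At clear time: draw the same NDC tristrip with the reference value acting as the "clear" value.
[view.commandEncoder setDepthStencilState:stencilClearState];
[view.commandEncoder setStencilReferenceValue:0];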

How to manually render a mesh loaded with the DirectX Toolkit

I have a C++/CX project where I'm rendering procedural meshes using DirectX 11. It all seems to work fine, but now I also want to import and render meshes from files (from FBX, to be exact).
I was told to use the DirectX Toolkit for this.
I followed the toolkit's tutorials and that all worked, but when I tried doing the same in my project it didn't seem to work: the imported mesh was not visible, and the existing procedural meshes were rendered incorrectly (as if without a depth buffer).
I then tried manually rendering the imported mesh (identically to the procedural meshes, without using the Draw function from DirectXTK).
This works better: the existing meshes are all correct, but the imported mesh's colors are wrong. I use a custom vertex and fragment shader that uses only vertex position and color data, but for some reason the imported mesh's normals are sent to the shader instead of the vertex colors.
(I don't even want the normals to be stored in the mesh, but I don't seem to have the option to export to FBX without normals, and even if I remove them manually from the FBX, DirectXTK seems to recalculate the normals at import.)
Does anyone know what I'm doing wrong?
This is all still relatively new to me, so any help appreciated.
If you need more info, just let me know.
Here is my code for rendering meshes:
First the main render function (which is called once every update):
void Track3D::Render()
{
if (!_loadingComplete)
{
return;
}
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
// Prepare to pass the view matrix, and updated model matrix, to the shader
XMStoreFloat4x4(&_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(_CameraPosition, _CameraLookat, up)));
// Clear the back buffer and depth stencil view.
_d3dContext->ClearRenderTargetView(_renderTargetView.Get(), DirectX::Colors::Transparent);
_d3dContext->ClearDepthStencilView(_depthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
// Set render targets to the screen.
ID3D11RenderTargetView *const targets[1] = { _renderTargetView.Get() };
_d3dContext->OMSetRenderTargets(1, targets, _depthStencilView.Get());
// Here I render everything:
_TrackMesh->Render(_constantBufferData);
RenderExtra();
_ImportedMesh->Render(_constantBufferData);
Present();
}
The Present-function:
void Track3D::Present()
{
DXGI_PRESENT_PARAMETERS parameters = { 0 };
parameters.DirtyRectsCount = 0;
parameters.pDirtyRects = nullptr;
parameters.pScrollRect = nullptr;
parameters.pScrollOffset = nullptr;
HRESULT hr = S_OK;
hr = _swapChain->Present1(1, 0, &parameters);
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
OnDeviceLost();
}
else
{
if (FAILED(hr))
{
throw Platform::Exception::CreateException(hr);
}
}
}
Here's the render function which I call on every mesh:
(All of the mesh-specific data is obtained from the imported mesh.)
void Mesh::Render(ModelViewProjectionConstantBuffer constantBufferData)
{
if (!_loadingComplete)
{
return;
}
XMStoreFloat4x4(&constantBufferData.model, XMLoadFloat4x4(&_modelMatrix));
// Prepare the constant buffer to send it to the Graphics device.
_d3dContext->UpdateSubresource(
_constantBuffer.Get(),
0,
NULL,
&constantBufferData,
0,
0
);
UINT offset = 0;
_d3dContext->IASetVertexBuffers(
0,
1,
_vertexBuffer.GetAddressOf(),
&_stride,
&_offset
);
_d3dContext->IASetIndexBuffer(
_indexBuffer.Get(),
DXGI_FORMAT_R16_UINT, // Each index is one 16-bit unsigned integer (short).
0
);
_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
_d3dContext->IASetInputLayout(_inputLayout.Get());
// Attach our vertex shader.
_d3dContext->VSSetShader(
_vertexShader.Get(),
nullptr,
0
);
// Send the constant buffer to the Graphics device.
_d3dContext->VSSetConstantBuffers(
0,
1,
_constantBuffer.GetAddressOf()
);
// Attach our pixel shader.
_d3dContext->PSSetShader(
_pixelShader.Get(),
nullptr,
0
);
SetTexture();
// Draw the objects.
_d3dContext->DrawIndexed(
_indexCount,
0,
0
);
}
And this is the vertex shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
matrix view;
matrix projection;
};
struct VertexShaderInput
{
float3 pos : POSITION;
//float3 normal : NORMAL0; //uncommenting these changes the color data for some reason (but always wrong)
//float2 uv1 : TEXCOORD0;
//float2 uv2 : TEXCOORD1;
float3 color : COLOR0;
};
struct VertexShaderOutput
{
float3 color : COLOR0;
float4 pos : SV_POSITION;
};
VertexShaderOutput main(VertexShaderInput input)
{
VertexShaderOutput output;
float4 pos = float4(input.pos, 1.0f);
// Transform the vertex position into projected space.
pos = mul(pos, model);
pos = mul(pos, view);
pos = mul(pos, projection);
output.pos = pos;
output.color = input.color;
return output;
}
And this is the pixel shader:
struct PixelShaderInput
{
float3 color: COLOR0;
};
float4 main(PixelShaderInput input) : SV_TARGET
{
return float4(input.color.r, input.color.g, input.color.b, 1);
}
The most likely issue is that you are not setting enough state for your drawing, and that the DirectX Tool Kit drawing functions are setting states that don't match what your existing code requires.
For performance reasons, DirectX Tool Kit does not 'save & restore' state. Instead each draw function sets the state it needs fully and then leaves it. I document which state is impacted in the wiki under the State management section for each class.
Your code above sets the vertex buffer, index buffer, input layout, vertex shader, pixel shader, primitive topology, and VS constant buffer in slot 0.
You did not set blend state, depth/stencil state, or the rasterizer state. You didn't provide the pixel shader so I don't know if you need any PS constant buffers, samplers, or shader resources.
Try explicitly setting the blend state, depth/stencil state, and rasterizer state before you draw your procedural meshes. If you just want to go back to the defined defaults instead of whatever DirectX Tool Kit did, call:
_d3dContext->RSSetState(nullptr);
_d3dContext->OMSetBlendState(nullptr, nullptr, 0xFFFFFFFF);
_d3dContext->OMSetDepthStencilState(nullptr, 0xffffffff);
See also the CommonStates class.
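If you would rather use the Tool Kit's own state objects than the raw nullptr defaults, a minimal sketch using CommonStates (m_states here is an assumed member, created once from the device):
#include "CommonStates.h"
// Created once, e.g. at device-creation time:
// m_states = std::make_unique<DirectX::CommonStates>(device);

// Before drawing the procedural meshes:
_d3dContext->OMSetBlendState(m_states->Opaque(), nullptr, 0xFFFFFFFF);
_d3dContext->OMSetDepthStencilState(m_states->DepthDefault(), 0);
_d3dContext->RSSetState(m_states->CullCounterClockwise());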
It's generally not a good idea to use identifiers that start with _ in C++. Officially, all identifiers that start with _X (where X is a capital letter) or with __ are reserved for the compiler and library implementers, so they could conflict with compiler or library internals. m_ or something similar is better.

iOS-Metal: How to clear Depth Buffer ? Similar to glClear(GL_DEPTH_BUFFER_BIT) in OpenGL

I need to clear the depth buffer, for which I use glClear(GL_DEPTH_BUFFER_BIT) in OpenGL. How do I do that in Metal? I have gone through Apple's documentation, and there is no hint about it.
The short answer is that to clear the depth buffer you add these two lines before beginning a render pass:
mRenderPassDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
mRenderPassDescriptor.depthAttachment.clearDepth = 1.0f;
And you cannot do a clear without ending and restarting a render pass.
Long answer:
In Metal, you have to define that you want the colour and depth buffers cleared when you start rendering to a MTLTexture. There is no clear function like in OpenGL.
To do this, in your MTLRenderPassDescriptor, set depthAttachment.loadAction to MTLLoadActionClear and depthAttachment.clearDepth to 1.0f.
You may also want to set colorAttachments[0].loadAction to MTLLoadActionClear to clear the colour buffer.
This render pass descriptor is then passed in to your call to MTLCommandBuffer::renderCommandEncoderWithDescriptor.
If you do want to clear a depth or colour buffer midway through rendering you have to call endEncoding on MTLRenderCommandEncoder, and then start encoding again with depthAttachment.loadAction set to MTLLoadActionClear.
To explain the solution more clearly, here is some sample code.
Before starting rendering:
void prepareRendering(){
CMDBuffer = [_commandQueue commandBuffer]; // get the command buffer
drawable = [_metalLayer nextDrawable]; // get the drawable from the CAMetalLayer
renderingTexture = drawable.texture; // set that as the rendering texture
setupRenderPassDescriptorForTexture(drawable.texture); // set the depth and colour buffer properties
RenderCMDBuffer = [CMDBuffer renderCommandEncoderWithDescriptor:_renderPassDescriptor];
RenderCMDBuffer.label = @"MyRenderEncoder";
setUpDepthState(CompareFunctionLessEqual, true, false);
[RenderCMDBuffer setDepthStencilState:_depthState];
[RenderCMDBuffer pushDebugGroup:@"DrawCube"];
}
void setupRenderPassDescriptorForTexture(id <MTLTexture> texture)
{
if (_renderPassDescriptor == nil)
_renderPassDescriptor = [MTLRenderPassDescriptor renderPassDescriptor];
// set color buffer properties
_renderPassDescriptor.colorAttachments[0].texture = texture;
_renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
_renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(1.0f, 1.0f,1.0f, 1.0f);
_renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
// set depth buffer properties
MTLTextureDescriptor* desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: MTLPixelFormatDepth32Float width: texture.width height: texture.height mipmapped: NO];
_depthTex = [device newTextureWithDescriptor: desc];
_depthTex.label = #"Depth";
_renderPassDescriptor.depthAttachment.texture = _depthTex;
_renderPassDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
_renderPassDescriptor.depthAttachment.clearDepth = 1.0f;
_renderPassDescriptor.depthAttachment.storeAction = MTLStoreActionDontCare;
}
Render your contents here:
Render();
After rendering (similar to the OpenGL ES 2 method [_context presentRenderbuffer:_colorRenderBuffer]):
void endDisplay()
{
[RenderCMDBuffer popDebugGroup];
[RenderCMDBuffer endEncoding];
[CMDBuffer presentDrawable:drawable];
[CMDBuffer commit];
_currentDrawable = nil;
}
The above methods clear the depth and colour buffers after rendering each frame.
To clear the depth buffer midway through rendering:
void clearDepthBuffer(){
// end encoding the render command buffer
[RenderCMDBuffer popDebugGroup];
[RenderCMDBuffer endEncoding];
// here MTLLoadActionClear will clear your last drawn depth values
_renderPassDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
_renderPassDescriptor.depthAttachment.clearDepth = 1.0f;
_renderPassDescriptor.depthAttachment.storeAction = MTLStoreActionDontCare;
// here MTLLoadActionLoad will reuse your last drawn color buffer
_renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionLoad;
_renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
RenderCMDBuffer = [CMDBuffer renderCommandEncoderWithDescriptor:_renderPassDescriptor];
RenderCMDBuffer.label = #"MyRenderEncoder";
[RenderCMDBuffer pushDebugGroup:#"DrawCube"];
}
Here is the Swift 5 version. Run your first render pass:
// RENDER PASS 1
renderPassDescriptor = view.currentRenderPassDescriptor
if let renderPassDescriptor = renderPassDescriptor, let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
renderEncoder.label = "First Render Encoder"
renderEncoder.pushDebugGroup("First Render Debug")
// render stuff here...
renderEncoder.popDebugGroup()
renderEncoder.endEncoding()
}
Then clear the depth buffer, but keep the colour buffer:
renderPassDescriptor = view.currentRenderPassDescriptor
// Schedule Metal to clear the depth buffer
renderPassDescriptor!.depthAttachment.loadAction = MTLLoadAction.clear
renderPassDescriptor!.depthAttachment.clearDepth = 1.0
renderPassDescriptor!.depthAttachment.storeAction = MTLStoreAction.dontCare
// Schedule Metal to reuse the previous colour buffer
renderPassDescriptor!.colorAttachments[0].loadAction = MTLLoadAction.load
renderPassDescriptor!.colorAttachments[0].storeAction = MTLStoreAction.store
Then run your second render:
if let renderPassDescriptor = renderPassDescriptor, let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
renderEncoder.label = "Second Render"
renderEncoder.pushDebugGroup("Second Render Debug")
// render stuff here...
renderEncoder.popDebugGroup()
renderEncoder.endEncoding()
}

alpha value captured by camera in ios

I am trying to make a fake shadow by using a RenderTexture.
I dynamically create a camera and assign a RenderTexture to the camera as its render target.
The format of the RenderTexture is set to ARGB32.
Then I use the RenderTexture as a texture on a plane, with a custom shader that changes the color to black and adjusts the alpha value so the shadow is shown properly.
Below is the shader I'm using:
Shader "Fake Shadow" {
Properties {
_ShadowTex ("Fake Shadow", 2D) = "white" { TexGen ObjectLinear }
_SAlpha ("Shadow Intensity", float) = 0.35
}
Category {
Tags { "Queue" = "Transparent-1" }
Lighting Off
ZWrite On
Cull Back
Blend SrcAlpha OneMinusSrcAlpha, one one
Subshader {
LOD 200
Pass {
SetTexture[_ShadowTex] {
ConstantColor(0,0,0,[_SAlpha])
matrix [_ProjMatrix]
Combine texture * constant, texture * constant
}
}
}
}
}
Things are normal when I test-play in the Unity3D editor and on Android.
However, the result is not correct on iOS.
The resulting images are in the links below (sorry, I can't post images yet):
shadow in editor
shadow in iOS
It seems iOS sets the alpha to 1 in the RenderTexture even for pixels where the camera captured nothing.
I have found a solution to this problem, which is to check 32-bit display in the player settings,
but I'm worried that checking 32-bit display will affect performance.
Is there any other solution that doesn't require checking 32-bit display?
I'm doing the same thing (camera and projector for emulating shadows), and I use a shader based on the one on this page: http://en.wikibooks.org/wiki/Cg_Programming/Unity/Projectors
The shader follows:
Shader "Cg projector shader for drop shadows" {
Properties {
_ShadowTex ("Projected Image", 2D) = "white" {}
_ShadowStrength ("Shadow Strength", Float) = 0.65
}
SubShader {
Pass {
// Blend Off
Blend Zero OneMinusSrcAlpha // attenuate color in framebuffer
// by 1 minus alpha of _ShadowTex
// fb_color = black + fb_color * OneMinusShadowAlpha;
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// User-specified properties
uniform sampler2D _ShadowTex;
uniform float _ShadowStrength;
// Projector-specific uniforms
uniform float4x4 _Projector; // transformation matrix
// from object space to projector space
struct vertexInput {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct vertexOutput {
float4 pos : SV_POSITION;
float4 posProj : TEXCOORD0;
// position in projector space
};
vertexOutput vert(vertexInput input)
{
vertexOutput output;
output.posProj = mul(_Projector, input.vertex);
output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
return output;
}
float4 frag(vertexOutput input) : COLOR
{
if (input.posProj.w > 0.0) // in front of projector?
{
return tex2D(_ShadowTex ,
float2(input.posProj) / input.posProj.w) * _ShadowStrength;
}
else // behind projector
{
return float4(0.0, 0.0, 0.0, 0.0);
}
}
ENDCG
}
}
// The definition of a fallback shader should be commented out
// during development:
// Fallback "Projector/Light"
}
