3D and 2D display in DirectX

I want to render some sprites over my 3D scene, but when I enable D3D sprites, my 3D scene disappears and I can see only the sprites.
Settings:
LPDIRECT3D9 d3d = NULL;
LPDIRECT3DDEVICE9 d3ddev = NULL;
D3DPRESENT_PARAMETERS d3dpp;
LPD3DXSPRITE d3dspt;

// Create Direct3D and the Direct3D device
void InitDirect3D(GAMEWINDOW* gw)
{
    d3d = Direct3DCreate9(D3D_SDK_VERSION);

    ZeroMemory(&d3dpp, sizeof(d3dpp));
    d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
    d3dpp.Windowed = gw->Windowed;
    d3dpp.BackBufferWidth = gw->Width;
    d3dpp.BackBufferHeight = gw->Height;
    d3dpp.EnableAutoDepthStencil = TRUE;
    d3dpp.AutoDepthStencilFormat = D3DFMT_D16;

    d3d->CreateDevice(D3DADAPTER_DEFAULT,
                      D3DDEVTYPE_HAL,
                      gw->hWnd,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                      &d3dpp,
                      &d3ddev);

    d3ddev->SetRenderState(D3DRS_LIGHTING, FALSE);
    d3ddev->SetRenderState(D3DRS_ZENABLE, TRUE);
    d3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); // D3DRS_CULLMODE expects a D3DCULL value (D3DCULL_NONE == 1)

    D3DXCreateSprite(d3ddev, &d3dspt);
}
Rendering:
// Start rendering
void StartRender()
{
    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    d3ddev->Clear(0, NULL, D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    d3ddev->BeginScene();
    d3dspt->Begin(D3DXSPRITE_ALPHABLEND); // when enabled, the 3D scene disappears
}

// Stop rendering
void EndRender()
{
    d3dspt->End(); // disabling sprites
    d3ddev->EndScene();
    d3ddev->Present(NULL, NULL, NULL, NULL);
}
Rendering function:
void Render()
{
    static int frame = 0;
    if (frame == 36) frame = 0;

    StartRender();
    DrawSprite(&interceptor, frame++, 100, 100, 0);
    DrawModel(&a, 0.0f, 0.0f, 0.0f);
    EndRender();
}

Try processing your sprites separately from the rest of your scene. You could create a single render function that would make your life much easier:
void Render()
{
    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                  D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    d3ddev->BeginScene();
    d3dspt->Begin(D3DXSPRITE_ALPHABLEND);

    My3DRenderingFunction();
    MySpriteRenderingFunction();

    d3dspt->End(); // disabling sprites
    d3ddev->EndScene();
    d3ddev->Present(NULL, NULL, NULL, NULL);
}
The My3DRenderingFunction() and MySpriteRenderingFunction() would be your custom functions where you render everything. You could even pass a callback function (function pointer) to the render function. Also, note that you don't need two Clear() calls. You can just use one:
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
              D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
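For illustration, the two callbacks might look something like this (a minimal sketch; mesh and spriteTexture are hypothetical objects created elsewhere):
void My3DRenderingFunction()
{
    // Draw the 3D scene; sprites queued afterwards are flushed on top of it at End().
    D3DXMATRIX world;
    D3DXMatrixIdentity(&world);
    d3ddev->SetTransform(D3DTS_WORLD, &world);
    mesh->DrawSubset(0); // hypothetical LPD3DXMESH
}

void MySpriteRenderingFunction()
{
    // Sprite draws are queued between d3dspt->Begin() and d3dspt->End().
    D3DXVECTOR3 pos(100.0f, 100.0f, 0.0f);
    d3dspt->Draw(spriteTexture, NULL, NULL, &pos, // hypothetical LPDIRECT3DTEXTURE9
                 D3DCOLOR_XRGB(255, 255, 255));
}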

Related

When is binding transform feedback object not required?

I have some sample code for a transform feedback object that I've been trying to simplify down to the most minimal form, and I've found that my code works the same if I omit createTransformFeedback() and bindTransformFeedback(). The only "transform feedback" code left is gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, buffer). I can only test this on two devices (Intel Iris and Qualcomm Adreno 610), but both run identically with or without binding any TFOs.
From all the documentation I've read, this doesn't make any sense to me. Is it actually required to call bindTransformFeedback() to get access to TFOs? Or is my code too "minimal"?
The code below does not write to the canvas; it only logs the buffer contents after two feedback passes. The relevant code begins about halfway down. I've commented out the code calling createTransformFeedback() and bindTransformFeedback().
// Shaders
const transformVertexShader = `#version 300 es
layout(location=0) in vec2 shaderInput;
out vec2 shaderOutput;
void main() {
  shaderOutput = shaderInput * vec2(1.0, 10.0);
}`;

const transformFragmentShader = `#version 300 es
void main()
{
  discard;
}`;

// Create program
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl2');
const program = gl.createProgram();
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, transformVertexShader);
gl.compileShader(vertexShader);
gl.attachShader(program, vertexShader);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, transformFragmentShader);
gl.compileShader(fragmentShader);
gl.attachShader(program, fragmentShader);
gl.transformFeedbackVaryings(program, ['shaderOutput'], gl.INTERLEAVED_ATTRIBS);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.log(gl.getProgramInfoLog(program));
    console.log(gl.getShaderInfoLog(vertexShader));
    console.log(gl.getShaderInfoLog(fragmentShader));
}
gl.useProgram(program);

// Initialize VAOs, buffers and TFOs
const COUNT = 2000;
const A = 0;
const B = 1;
const sourceData = new Float32Array(COUNT).map((v, i) => i);
const buffer = [];
const vao = [];
// const tf = [];
for (const i of [A, B]) {
    vao[i] = gl.createVertexArray();
    buffer[i] = gl.createBuffer();
    // tf[i] = gl.createTransformFeedback();
    gl.bindVertexArray(vao[i]);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer[i]);
    gl.bufferData(gl.ARRAY_BUFFER, COUNT * 4, gl.STATIC_DRAW);
    gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(0);
}

// Populate initial source data buffer
gl.bindBuffer(gl.ARRAY_BUFFER, buffer[A]);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, sourceData);

// Unbind everything (for no reason)
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);
// gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

// First pass through the transform feedback A->B
gl.bindVertexArray(vao[A]);
// gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf[A]);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, buffer[B]);
gl.enable(gl.RASTERIZER_DISCARD);
gl.beginTransformFeedback(gl.TRIANGLES);
gl.drawArrays(gl.TRIANGLES, 0, COUNT / 2);
gl.endTransformFeedback();

// Second pass through the transform feedback B->A
gl.bindVertexArray(vao[B]);
// gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf[B]);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, buffer[A]);
gl.beginTransformFeedback(gl.TRIANGLES);
gl.drawArrays(gl.TRIANGLES, 0, COUNT / 2);
gl.endTransformFeedback();
gl.flush();

// Check the final results
const fence = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
const startTime = performance.now();
const check = () => {
    const status = gl.clientWaitSync(fence, 0 & gl.SYNC_FLUSH_COMMANDS_BIT, 0);
    // Also accept ALREADY_SIGNALED: with a zero timeout that is what a
    // signaled fence normally returns, and the loop would otherwise never end.
    if (status === gl.CONDITION_SATISFIED || status === gl.ALREADY_SIGNALED) {
        console.log(`sync after ${ performance.now() - startTime }ms`);
        const output = new Float32Array(COUNT);
        gl.bindBuffer(gl.ARRAY_BUFFER, buffer[B]);
        gl.getBufferSubData(gl.ARRAY_BUFFER, 0, output);
        console.log(`data finished fetching at ${ performance.now() - startTime }`);
        console.log(output);
        return;
    } else console.log('not finished, skipping');
    setTimeout(check);
};
check();
I completely missed that OpenGL provides a default transform feedback object that always exists, is bound until you unbind it (or bind a different TFO), and cannot be deleted. If you are only pushing data back and forth between two buffers, you only need a single TFO, so it makes sense to use the default one.
I think the reason you would want to make two or more TFOs is if you were doing multiple draw calls in a single animation frame and wanted to retain the captured output of each.
So for single-transform-feedback programs, you only need to call bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, ...), beginTransformFeedback(...) and endTransformFeedback(), since the default transform feedback object is already bound and ready for work.

DirectX 11 Blending

How can I access the color of the destination pixel in the pixel shader, in order to use my own blending equation? When control reaches the pixel shader I only have the source pixel's position and color; I want to know the color of the destination pixel at that point.
One approach I have heard of is using textures, but I have not been able to find the way through textures.
Programmable blending is not available in DirectX 11, but with some hacks it is possible. Here is a sample (based on D3D12HelloTriangle) that sets up a custom fixed-function blend state:
void D3D12HelloTriangle::LoadPipeline()
{
    UINT dxgiFactoryFlags = 0;
    ComPtr<IDXGIFactory4> factory;
    ThrowIfFailed(CreateDXGIFactory2(dxgiFactoryFlags, IID_PPV_ARGS(&factory)));

    // Device creation
    {
        ComPtr<IDXGIAdapter1> hardwareAdapter;
        GetHardwareAdapter(factory.Get(), &hardwareAdapter);
        ThrowIfFailed(D3D12CreateDevice(
            hardwareAdapter.Get(),
            D3D_FEATURE_LEVEL_11_0,
            IID_PPV_ARGS(&m_device)
        ));
    }

    // Describe and create the command queue.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ThrowIfFailed(m_device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&m_commandQueue)));

    // Describe and create the swap chain.
    DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {};
    swapChainDesc.BufferCount = FrameCount;
    swapChainDesc.Width = m_width;
    swapChainDesc.Height = m_height;
    swapChainDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    swapChainDesc.SampleDesc.Count = 1;

    ComPtr<IDXGISwapChain1> swapChain;
    ThrowIfFailed(factory->CreateSwapChainForCoreWindow(
        m_commandQueue.Get(), // Swap chain needs the queue so that it can force a flush on it.
        reinterpret_cast<IUnknown*>(Windows::UI::Core::CoreWindow::GetForCurrentThread()),
        &swapChainDesc,
        nullptr,
        &swapChain
    ));
    ThrowIfFailed(swapChain.As(&m_swapChain));
    m_frameIndex = m_swapChain->GetCurrentBackBufferIndex();

    // Create descriptor heaps.
    {
        // Describe and create a render target view (RTV) descriptor heap.
        D3D12_DESCRIPTOR_HEAP_DESC rtvHeapDesc = {};
        rtvHeapDesc.NumDescriptors = FrameCount;
        rtvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
        rtvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
        ThrowIfFailed(m_device->CreateDescriptorHeap(&rtvHeapDesc, IID_PPV_ARGS(&m_rtvHeap)));
        m_rtvDescriptorSize = m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_RTV);
    }

    // Create frame resources.
    {
        CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(m_rtvHeap->GetCPUDescriptorHandleForHeapStart());
        // Create a RTV for each frame.
        for (UINT n = 0; n < FrameCount; n++)
        {
            ThrowIfFailed(m_swapChain->GetBuffer(n, IID_PPV_ARGS(&m_renderTargets[n])));
            m_device->CreateRenderTargetView(m_renderTargets[n].Get(), nullptr, rtvHandle);
            rtvHandle.Offset(1, m_rtvDescriptorSize);
        }
    }

    ThrowIfFailed(m_device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&m_commandAllocator)));
}
// Load the sample assets.
void D3D12HelloTriangle::LoadAssets()
{
    // Create an empty root signature.
    {
        CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc;
        rootSignatureDesc.Init(0, nullptr, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
        ComPtr<ID3DBlob> signature;
        ComPtr<ID3DBlob> error;
        ThrowIfFailed(D3D12SerializeRootSignature(&rootSignatureDesc, D3D_ROOT_SIGNATURE_VERSION_1, &signature, &error));
        ThrowIfFailed(m_device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_rootSignature)));
    }

    // Create the pipeline state, which includes compiling and loading shaders.
    {
        ComPtr<ID3DBlob> vertexShader;
        ComPtr<ID3DBlob> pixelShader;
#if defined(_DEBUG)
        // Enable better shader debugging with the graphics debugging tools.
        UINT compileFlags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
#else
        UINT compileFlags = 0;
#endif
        ThrowIfFailed(D3DCompileFromFile(GetAssetFullPath(L"VertexShader.hlsl").c_str(), nullptr, nullptr, "VSMain", "vs_5_0", compileFlags, 0, &vertexShader, nullptr));
        ThrowIfFailed(D3DCompileFromFile(GetAssetFullPath(L"PixelShader.hlsl").c_str(), nullptr, nullptr, "PSMain", "ps_5_0", compileFlags, 0, &pixelShader, nullptr));

        // Define the vertex input layout.
        D3D12_INPUT_ELEMENT_DESC inputElementDescs[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
            { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
            { "DELAY", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
        };

        // Describe and create the graphics pipeline state object (PSO).
        D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
        psoDesc.InputLayout = { inputElementDescs, _countof(inputElementDescs) };
        psoDesc.pRootSignature = m_rootSignature.Get();
        psoDesc.VS = CD3DX12_SHADER_BYTECODE(vertexShader.Get());
        psoDesc.PS = CD3DX12_SHADER_BYTECODE(pixelShader.Get());
        psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
        psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
        psoDesc.DepthStencilState.DepthEnable = FALSE;
        psoDesc.DepthStencilState.StencilEnable = FALSE;
        psoDesc.SampleMask = 1;
        psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
        psoDesc.NumRenderTargets = 2;
        psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
        psoDesc.RTVFormats[1] = DXGI_FORMAT_R8G8B8A8_UNORM;
        //psoDesc.RTVFormats[2] = DXGI_FORMAT_R8G8B8A8_UNORM;
        psoDesc.SampleDesc.Count = 1;
        psoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;
        //ThrowIfFailed(m_device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&m_pipelineState)));

        D3D12_GRAPHICS_PIPELINE_STATE_DESC transparentPsoDesc = psoDesc;
        D3D12_RENDER_TARGET_BLEND_DESC transparencyBlendDesc;
        transparencyBlendDesc.BlendEnable = true;
        transparencyBlendDesc.LogicOpEnable = false;
        transparencyBlendDesc.SrcBlend = D3D12_BLEND_ONE;
        transparencyBlendDesc.DestBlend = D3D12_BLEND_ONE;
        transparencyBlendDesc.BlendOp = D3D12_BLEND_OP_MAX;
        transparencyBlendDesc.SrcBlendAlpha = D3D12_BLEND_ONE;
        transparencyBlendDesc.DestBlendAlpha = D3D12_BLEND_ONE;
        transparencyBlendDesc.BlendOpAlpha = D3D12_BLEND_OP_MAX;
        transparencyBlendDesc.LogicOp = D3D12_LOGIC_OP_NOOP;
        transparencyBlendDesc.RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
        transparentPsoDesc.BlendState.RenderTarget[0] = transparencyBlendDesc;
        ThrowIfFailed(m_device->CreateGraphicsPipelineState(&transparentPsoDesc, IID_PPV_ARGS(&m_pipelineState)));

        ThrowIfFailed(m_device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, m_commandAllocator.Get(), m_pipelineState.Get(), IID_PPV_ARGS(&m_commandList)));
    }

    // Command lists are created in the recording state, but there is nothing
    // to record yet. The main loop expects it to be closed, so close it now.
    ThrowIfFailed(m_commandList->Close());

    // Create the vertex buffer.
    {
        // Define the geometry for a triangle strip (a quad).
        Vertex triangleVertices[] =
        {
            { { 0.0f, 0.0f * m_aspectRatio, 0.0f }, { 0.0f, 0.0f, 1.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 0.0f } },
            { { 0.25f, 0.0f * m_aspectRatio, 0.0f }, { 0.0f, 0.0f, 1.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 0.0f } },
            { { 0.0f, 0.25f * m_aspectRatio, 0.0f }, { 0.0f, 0.0f, 1.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 0.0f } },
            { { 0.25f, 0.25f * m_aspectRatio, 0.0f }, { 0.0f, 0.0f, 1.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 0.0f } },
        };
        // A second quad. Each vertex has a position, a color and a delay.
        Vertex triangleVertices2[] =
        {
            { { 0.0f, 0.0f * m_aspectRatio, 0.0f }, { 0.0f, 1.0f, 0.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 1.0f } },
            { { 0.5f, 0.0f * m_aspectRatio, 0.0f }, { 0.0f, 1.0f, 0.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 1.0f } },
            { { 0.0f, 0.5f * m_aspectRatio, 0.0f }, { 0.0f, 1.0f, 0.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 1.0f } },
            { { 0.5f, 0.5f * m_aspectRatio, 0.0f }, { 0.0f, 1.0f, 0.0f, 1.0f }, { 0.0f, 0.0f, 0.0f, 1.0f } },
        };
        const UINT vertexBufferSize = sizeof(triangleVertices);
        const UINT my_vertexBufferSize = sizeof(triangleVertices2);

        // Note: using upload heaps to transfer static data like vert buffers is not
        // recommended. Every time the GPU needs it, the upload heap will be marshalled
        // over. Please read up on Default Heap usage. An upload heap is used here for
        // code simplicity and because there are very few verts to actually transfer.
        ThrowIfFailed(m_device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
            D3D12_HEAP_FLAG_NONE,
            &CD3DX12_RESOURCE_DESC::Buffer(vertexBufferSize),
            D3D12_RESOURCE_STATE_GENERIC_READ,
            nullptr,
            IID_PPV_ARGS(&m_vertexBuffer)));
        ThrowIfFailed(m_device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
            D3D12_HEAP_FLAG_NONE,
            &CD3DX12_RESOURCE_DESC::Buffer(my_vertexBufferSize),
            D3D12_RESOURCE_STATE_GENERIC_READ,
            nullptr,
            IID_PPV_ARGS(&my_vertexBuffer)));

        // Copy the triangle data to the vertex buffers.
        UINT8* pVertexDataBegin;
        CD3DX12_RANGE readRange(0, 0); // We do not intend to read from this resource on the CPU.
        ThrowIfFailed(m_vertexBuffer->Map(0, &readRange, reinterpret_cast<void**>(&pVertexDataBegin)));
        UINT8* my_pVertexDataBegin;
        CD3DX12_RANGE my_readRange(0, 0); // We do not intend to read from this resource on the CPU.
        ThrowIfFailed(my_vertexBuffer->Map(0, &my_readRange, reinterpret_cast<void**>(&my_pVertexDataBegin)));
        memcpy(pVertexDataBegin, triangleVertices, sizeof(triangleVertices));
        m_vertexBuffer->Unmap(0, nullptr);
        memcpy(my_pVertexDataBegin, triangleVertices2, sizeof(triangleVertices2));
        my_vertexBuffer->Unmap(0, nullptr);

        // Initialize the vertex buffer views.
        m_vertexBufferView[0].BufferLocation = m_vertexBuffer->GetGPUVirtualAddress();
        m_vertexBufferView[0].StrideInBytes = sizeof(Vertex);
        m_vertexBufferView[0].SizeInBytes = vertexBufferSize;
        m_vertexBufferView[1].BufferLocation = my_vertexBuffer->GetGPUVirtualAddress();
        m_vertexBufferView[1].StrideInBytes = sizeof(Vertex);
        m_vertexBufferView[1].SizeInBytes = my_vertexBufferSize;
    }

    // Create synchronization objects and wait until assets have been uploaded to the GPU.
    {
        ThrowIfFailed(m_device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&m_fence)));
        m_fenceValue = 1;

        // Create an event handle to use for frame synchronization.
        m_fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        if (m_fenceEvent == nullptr)
        {
            ThrowIfFailed(HRESULT_FROM_WIN32(GetLastError()));
        }
    }
}
// Update frame-based values.
void D3D12HelloTriangle::OnUpdate()
{
}

// Render the scene.
void D3D12HelloTriangle::OnRender()
{
    // Record all the commands we need to render the scene into the command list.
    PopulateCommandList();

    // Execute the command list.
    ID3D12CommandList* ppCommandLists[] = { m_commandList.Get() };
    m_commandQueue->ExecuteCommandLists(_countof(ppCommandLists), ppCommandLists);

    // Present the frame.
    ThrowIfFailed(m_swapChain->Present(1, 0));
    WaitForPreviousFrame();
}

void D3D12HelloTriangle::OnDestroy()
{
    // Ensure that the GPU is no longer referencing resources that are about to be
    // cleaned up by the destructor.
    WaitForPreviousFrame();
}

void D3D12HelloTriangle::PopulateCommandList()
{
    // Command list allocators can only be reset when the associated
    // command lists have finished execution on the GPU; apps should use
    // fences to determine GPU execution progress.
    ThrowIfFailed(m_commandAllocator->Reset());

    // However, when ExecuteCommandList() is called on a particular command
    // list, that command list can then be reset at any time and must be before
    // re-recording.
    ThrowIfFailed(m_commandList->Reset(m_commandAllocator.Get(), m_pipelineState.Get()));

    // Set necessary state.
    m_commandList->SetGraphicsRootSignature(m_rootSignature.Get());
    m_commandList->RSSetViewports(1, &m_viewport);
    m_commandList->RSSetScissorRects(1, &m_scissorRect);

    // Indicate that the back buffer will be used as a render target.
    //m_commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(m_renderTargets[m_frameIndex].Get(), D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_RENDER_TARGET));
    CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(m_rtvHeap->GetCPUDescriptorHandleForHeapStart(), m_frameIndex, m_rtvDescriptorSize);
    m_commandList->OMSetRenderTargets(1, &rtvHandle, FALSE, nullptr);

    // Record commands.
    const float clearColor[] = { 1.0f, 0.0f, 0.0f, 1.0f };
    m_commandList->ClearRenderTargetView(rtvHandle, clearColor, 0, nullptr);
    m_commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    m_commandList->IASetVertexBuffers(0, 1, &m_vertexBufferView[1]);
    m_commandList->DrawInstanced(4, 1, 0, 0);
    m_commandList->IASetVertexBuffers(0, 1, &m_vertexBufferView[0]);
    m_commandList->DrawInstanced(4, 1, 0, 0);

    // Indicate that the back buffer will now be used to present.
    m_commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(m_renderTargets[m_frameIndex].Get(), D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_PRESENT));

    ThrowIfFailed(m_commandList->Close());
}

void D3D12HelloTriangle::WaitForPreviousFrame()
{
    // WAITING FOR THE FRAME TO COMPLETE BEFORE CONTINUING IS NOT BEST PRACTICE.
    // This code is implemented as such for simplicity. The D3D12HelloFrameBuffering
    // sample illustrates how to use fences for efficient resource usage and to
    // maximize GPU utilization.

    // Signal and increment the fence value.
    const UINT64 fence = m_fenceValue;
    ThrowIfFailed(m_commandQueue->Signal(m_fence.Get(), fence));
    m_fenceValue++;

    // Wait until the previous frame is finished.
    if (m_fence->GetCompletedValue() < fence)
    {
        ThrowIfFailed(m_fence->SetEventOnCompletion(fence, m_fenceEvent));
        WaitForSingleObject(m_fenceEvent, INFINITE);
    }

    m_frameIndex = m_swapChain->GetCurrentBackBufferIndex();
}
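For the texture-based approach mentioned in the question, a rough Direct3D 11 sketch is below (device, context, and pBackBuffer are assumed names, not taken from the code above). Since a resource cannot be bound as a render target and a shader input at the same time, the idea is to copy the current render target into a second texture and sample that copy as the "destination" color:
// One-time setup: a texture matching the back buffer, bindable as an SRV.
ID3D11Texture2D* pDestCopy = nullptr;
ID3D11ShaderResourceView* pDestSRV = nullptr;

D3D11_TEXTURE2D_DESC desc = {};
pBackBuffer->GetDesc(&desc);
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // readable in shaders, not a render target
desc.Usage = D3D11_USAGE_DEFAULT;
device->CreateTexture2D(&desc, nullptr, &pDestCopy);
device->CreateShaderResourceView(pDestCopy, nullptr, &pDestSRV);

// Each frame, before drawing the geometry that needs custom blending:
context->CopyResource(pDestCopy, pBackBuffer);  // snapshot the destination pixels
context->PSSetShaderResources(0, 1, &pDestSRV); // e.g. register t0 in the pixel shader
// In HLSL, destTex.Load(int3(input.svPosition.xy, 0)) then yields the
// "destination" color to feed into a custom blend equation.
Note that the copy is a snapshot: overlapping triangles drawn after the copy do not see each other's output. On Direct3D 11.3-class hardware, rasterizer-ordered views are the more robust route to fully programmable blending.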

NVIDIA SolidWireframe

I want to display some geometry using NVIDIA's SolidWireframe effect; it appears to be an .fx file with multiple techniques, each consisting of a single pass.
So I wrote a class to perform what I thought the effect file was doing. The header files below are my compiled shaders, which are copies of the NVIDIA ones, compiled with shader model 4.0:
# include "SolidWireFrame.h"
# include "WireGeometryShader.h"
# include "GeometryShader.h"
# include "WirePixelShader.h"
# include "ColorPixelShader.h"
# include "WireVertexShader.h"
namespace DirectX11DrawingAdapter
{
SolidWireFrame::SolidWireFrame()
{
}
bool SolidWireFrame::Init(ID3D11Device* device)
{
m_pdepthLessEqualStencilState = nullptr;
m_pfillRasterizerState = nullptr;
m_pBlendingState = nullptr;
m_pdepthWriteLessStencilState = nullptr;
m_vertexShader = nullptr;
m_geometryShader = nullptr;
m_pixelColorShader = nullptr;
m_geometrySolidWireShader = nullptr;
m_pixelSolidWireShader = nullptr;
m_vertexLayout = nullptr;
if (!InitializeDepthStencilState(device))
return false;
if (!InitializeRaterizerState(device))
return false;
if (!InitializeBlendState(device))
return false;
if (!InitializeVertexShader(device))
return false;
if (!InitializeGeometryShader(device))
return false;
if (!InitializePixelShader(device))
return false;
return true;
}
    void SolidWireFrame::ApplySolidWirePattern(ID3D11DeviceContext* deviceContext)
    {
        SetDepthStencilState(deviceContext, m_pdepthLessEqualStencilState);
        SetRasterizerState(deviceContext, m_pfillRasterizerState);
        SetBlendState(deviceContext, m_pBlendingState);
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->GSSetShader(m_geometrySolidWireShader, NULL, 0);
        deviceContext->PSSetShader(m_pixelSolidWireShader, NULL, 0);
    }

    void SolidWireFrame::ApplyDepthAndSolid(ID3D11DeviceContext* deviceContext)
    {
        SetDepthStencilState(deviceContext, m_pdepthWriteLessStencilState);
        SetRasterizerState(deviceContext, m_pfillRasterizerState);
        SetBlendState(deviceContext, m_pBlendingState);
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->GSSetShader(m_geometryShader, NULL, 0);
        deviceContext->PSSetShader(m_pixelColorShader, NULL, 0);
    }

    void SolidWireFrame::ApplyDepthOnly(ID3D11DeviceContext* deviceContext)
    {
        SetDepthStencilState(deviceContext, m_pdepthWriteLessStencilState);
        SetRasterizerState(deviceContext, m_pfillRasterizerState);
        SetBlendState(deviceContext, m_pnoColorBlendingState);
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->GSSetShader(m_geometryShader, NULL, 0);
        deviceContext->PSSetShader(m_pixelColorShader, NULL, 0);
    }

    void SolidWireFrame::ApplySolidOnly(ID3D11DeviceContext* deviceContext)
    {
        SetDepthStencilState(deviceContext, m_pdepthLessEqualStencilState);
        SetRasterizerState(deviceContext, m_pfillRasterizerState);
        SetBlendState(deviceContext, m_pBlendingState);
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->GSSetShader(m_geometryShader, NULL, 0);
        deviceContext->PSSetShader(m_pixelColorShader, NULL, 0);
    }
    bool SolidWireFrame::InitializeBlendState(ID3D11Device* device)
    {
        D3D11_BLEND_DESC blendStateDescription;
        // Clear the blend state description.
        ZeroMemory(&blendStateDescription, sizeof(D3D11_BLEND_DESC));
        blendStateDescription.RenderTarget[0].BlendEnable = TRUE;
        blendStateDescription.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        blendStateDescription.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        blendStateDescription.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
        blendStateDescription.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
        blendStateDescription.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_DEST_ALPHA;
        blendStateDescription.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        blendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

        // Create the blend state using the description.
        HRESULT result = device->CreateBlendState(&blendStateDescription, &m_pBlendingState);
        if (FAILED(result))
        {
            return false;
        }

        //ZeroMemory(&blendStateDescription, sizeof(D3D11_BLEND_DESC));
        blendStateDescription.RenderTarget[0].BlendEnable = FALSE;
        result = device->CreateBlendState(&blendStateDescription, &m_pnoColorBlendingState);
        if (FAILED(result))
        {
            return false;
        }
        return true;
    }
    bool SolidWireFrame::InitializeDepthStencilState(ID3D11Device* device)
    {
        D3D11_DEPTH_STENCIL_DESC depthStencilDesc;
        // Initialize the description of the stencil state.
        ZeroMemory(&depthStencilDesc, sizeof(depthStencilDesc));

        // Set up the description of the stencil state.
        depthStencilDesc.DepthEnable = TRUE;
        depthStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
        depthStencilDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
        depthStencilDesc.StencilEnable = FALSE;
        depthStencilDesc.StencilReadMask = 255;
        depthStencilDesc.StencilWriteMask = 255;
        depthStencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
        depthStencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
        depthStencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
        depthStencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
        depthStencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
        depthStencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
        depthStencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
        depthStencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;

        // Create the depth stencil state.
        HRESULT result = device->CreateDepthStencilState(&depthStencilDesc, &m_pdepthLessEqualStencilState);
        if (FAILED(result))
        {
            return false;
        }

        //ZeroMemory(&depthStencilDesc, sizeof(depthStencilDesc));
        depthStencilDesc.DepthEnable = TRUE;
        depthStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
        depthStencilDesc.DepthFunc = D3D11_COMPARISON_LESS;
        result = device->CreateDepthStencilState(&depthStencilDesc, &m_pdepthWriteLessStencilState);
        if (FAILED(result))
        {
            return false;
        }
        return true;
    }

    bool SolidWireFrame::InitializeRaterizerState(ID3D11Device* device)
    {
        D3D11_RASTERIZER_DESC rasterizerDescription;
        ZeroMemory(&rasterizerDescription, sizeof(D3D11_RASTERIZER_DESC));
        rasterizerDescription.FillMode = D3D11_FILL_SOLID;
        rasterizerDescription.CullMode = D3D11_CULL_NONE;
        rasterizerDescription.DepthBias = 0;
        rasterizerDescription.MultisampleEnable = TRUE;
        rasterizerDescription.FrontCounterClockwise = FALSE;
        rasterizerDescription.DepthBiasClamp = 0.0f;
        rasterizerDescription.SlopeScaledDepthBias = 0.0f;
        rasterizerDescription.DepthClipEnable = TRUE;
        rasterizerDescription.ScissorEnable = FALSE;
        rasterizerDescription.AntialiasedLineEnable = FALSE;
        HRESULT hr = device->CreateRasterizerState(&rasterizerDescription, &m_pfillRasterizerState);
        if (FAILED(hr))
            return false;
        return true;
    }
    bool SolidWireFrame::SetDepthStencilState(ID3D11DeviceContext* deviceContext, ID3D11DepthStencilState* depthStencilState)
    {
        deviceContext->OMSetDepthStencilState(depthStencilState, 0);
        return false;
    }

    bool SolidWireFrame::SetRasterizerState(ID3D11DeviceContext* deviceContext, ID3D11RasterizerState* raterizerState)
    {
        deviceContext->RSSetState(raterizerState);
        return false;
    }

    bool SolidWireFrame::SetBlendState(ID3D11DeviceContext* deviceContext, ID3D11BlendState* blendState)
    {
        // Set up a zero blend factor and apply whichever blend state was passed in.
        float blendFactor[4];
        blendFactor[0] = 0.0f;
        blendFactor[1] = 0.0f;
        blendFactor[2] = 0.0f;
        blendFactor[3] = 0.0f;
        deviceContext->OMSetBlendState(blendState, blendFactor, 0xffffffff);
        return false;
    }
    bool SolidWireFrame::InitializeVertexShader(ID3D11Device* device)
    {
        // Create the vertex shader
        HRESULT hr = device->CreateVertexShader(m_pwireVertexShader, sizeof(m_pwireVertexShader), NULL, &m_vertexShader);
        if (FAILED(hr))
        {
            return false;
        }

        // Define the input layout
        D3D11_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };
        UINT numElements = ARRAYSIZE(layout);

        // Create the input layout
        hr = device->CreateInputLayout(layout, numElements, m_pwireVertexShader, sizeof(m_pwireVertexShader), &m_vertexLayout);
        if (FAILED(hr))
            return false;
        return true;
    }

    bool SolidWireFrame::InitializeGeometryShader(ID3D11Device* device)
    {
        // Create the geometry shaders
        HRESULT hr = device->CreateGeometryShader(m_pwireGeometryShader, sizeof(m_pwireGeometryShader), NULL, &m_geometrySolidWireShader);
        if (FAILED(hr))
            return false;
        hr = device->CreateGeometryShader(m_pgeometryShader, sizeof(m_pgeometryShader), NULL, &m_geometryShader);
        if (FAILED(hr))
            return false;
        return true;
    }

    bool SolidWireFrame::InitializePixelShader(ID3D11Device* device)
    {
        // Create the pixel shaders
        HRESULT hr = device->CreatePixelShader(m_pwirePixelShader, sizeof(m_pwirePixelShader), NULL, &m_pixelSolidWireShader);
        if (FAILED(hr))
            return false;
        hr = device->CreatePixelShader(m_pcolorPixelShader, sizeof(m_pcolorPixelShader), NULL, &m_pixelColorShader);
        if (FAILED(hr))
            return false;
        return true;
    }
}
Now in my application, at the end of initializing the device and context, I call:
SolidWireFrame* solidWireFrame = new SolidWireFrame();
solidWireFrame->Init(m_pd3dDevice.Get());
m_solidWireFrame.reset(solidWireFrame);
And in my render loop:
//m_solidWireFrame->ApplyDepthAndSolid(m_pImmediateContext);
//m_solidWireFrame->ApplyDepthOnly(m_pImmediateContext);
//m_solidWireFrame->ApplySolidOnly(m_pImmediateContext);
/*for (it_drawingData iterator = dd.begin(); iterator != dd.end(); iterator++)
{
    DrawingData* drawingData = iterator->second;
    if (drawingData->IsRendered)
    {
        m_pImmediateContext->IASetVertexBuffers(0, 1, &drawingData->VertexBuffer, &stride, &offset);
        m_pImmediateContext->IASetIndexBuffer(drawingData->IndexBuffer, DXGI_FORMAT_R16_UINT, 0);
        m_pImmediateContext->DrawIndexed(drawingData->IndexData.size(), 0, 0);
    }
}
// not sure if i should call this between renders?
m_pdesignerSwapChain->Present(0, 0);*/

m_solidWireFrame->ApplySolidWirePattern(m_pImmediateContext);
for (it_drawingData iterator = dd.begin(); iterator != dd.end(); iterator++)
{
    DrawingData* drawingData = iterator->second;
    if (drawingData->IsRendered)
    {
        m_pImmediateContext->IASetVertexBuffers(0, 1, &drawingData->VertexBuffer, &stride, &offset);
        m_pImmediateContext->IASetIndexBuffer(drawingData->IndexBuffer, DXGI_FORMAT_R16_UINT, 0);
        m_pImmediateContext->DrawIndexed(drawingData->IndexData.size(), 0, 0);
    }
}
Fixed it.
I was not setting the world/view/projection constant buffer for the wireframe vertex shader.
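For reference, the missing piece would look roughly like this (a sketch only; m_pWvpConstantBuffer and the matrices are hypothetical names, assuming a cbuffer at register b0 in the wireframe vertex shader):
// Update and bind the world/view/projection constant buffer before drawing.
struct WvpConstants
{
    DirectX::XMFLOAT4X4 worldViewProj;
};

WvpConstants cb;
DirectX::XMMATRIX wvp = world * view * projection;
// Transpose because HLSL constant buffers default to column-major layout.
DirectX::XMStoreFloat4x4(&cb.worldViewProj, DirectX::XMMatrixTranspose(wvp));

m_pImmediateContext->UpdateSubresource(m_pWvpConstantBuffer, 0, nullptr, &cb, 0, 0);
m_pImmediateContext->VSSetConstantBuffers(0, 1, &m_pWvpConstantBuffer);
// ...then ApplySolidWirePattern() and the draw loop as above.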

Must render buffer texture dimensions be power-of-two?

Does the texture that we use for WebGL render buffer storage need to have power-of-two dimensions?
Background info
I'm chasing a FRAMEBUFFER_INCOMPLETE_ATTACHMENT error reported by a client on this setup:
Windows 7 Enterprise 32-Bit
Firefox Version: 33
Video Card: Intel Q45/Q43 Express Chipset Driver Version 8.13.10.2413
So far I'm at a loss as to why it's happening, so I'm guessing it might be something to do with NPOT textures.
Here's my render buffer implementation, which does not force power-of-two texture dimensions yet:
SceneJS._webgl.RenderBuffer = function (cfg) {
    /**
     * True as soon as this buffer is allocated and ready to go
     * @type {boolean}
     */
    this.allocated = false;
    this.canvas = cfg.canvas;
    this.gl = cfg.canvas.gl;
    this.buf = null;
    this.bound = false;
};
/**
 * Called after WebGL context is restored.
 */
SceneJS._webgl.RenderBuffer.prototype.webglRestored = function (_gl) {
    this.gl = _gl;
    this.buf = null;
};

/**
 * Binds this buffer
 */
SceneJS._webgl.RenderBuffer.prototype.bind = function () {
    this._touch();
    if (this.bound) {
        return;
    }
    this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.buf.framebuf);
    this.bound = true;
};
SceneJS._webgl.RenderBuffer.prototype._touch = function () {
    var width = this.canvas.canvas.width;
    var height = this.canvas.canvas.height;
    if (this.buf) { // Currently have a buffer
        if (this.buf.width == width && this.buf.height == height) { // Canvas size unchanged, buffer still good
            return;
        } else { // Buffer needs reallocation for new canvas size
            this.gl.deleteTexture(this.buf.texture);
            this.gl.deleteFramebuffer(this.buf.framebuf);
            this.gl.deleteRenderbuffer(this.buf.renderbuf);
        }
    }
    this.buf = {
        framebuf: this.gl.createFramebuffer(),
        renderbuf: this.gl.createRenderbuffer(),
        texture: this.gl.createTexture(),
        width: width,
        height: height
    };
    this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.buf.framebuf);
    this.gl.bindTexture(this.gl.TEXTURE_2D, this.buf.texture);
    this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
    this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
    this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
    this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
    try {
        // Do it the way the spec requires
        this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGBA, width, height, 0, this.gl.RGBA, this.gl.UNSIGNED_BYTE, null);
    } catch (exception) {
        // Workaround for what appears to be a Minefield bug.
        var textureStorage = new WebGLUnsignedByteArray(width * height * 4); // 4 bytes per RGBA pixel
        this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGBA, width, height, 0, this.gl.RGBA, this.gl.UNSIGNED_BYTE, textureStorage);
    }
    this.gl.bindRenderbuffer(this.gl.RENDERBUFFER, this.buf.renderbuf);
    this.gl.renderbufferStorage(this.gl.RENDERBUFFER, this.gl.DEPTH_COMPONENT16, width, height);
    this.gl.framebufferTexture2D(this.gl.FRAMEBUFFER, this.gl.COLOR_ATTACHMENT0, this.gl.TEXTURE_2D, this.buf.texture, 0);
    this.gl.framebufferRenderbuffer(this.gl.FRAMEBUFFER, this.gl.DEPTH_ATTACHMENT, this.gl.RENDERBUFFER, this.buf.renderbuf);
    this.gl.bindTexture(this.gl.TEXTURE_2D, null);
    this.gl.bindRenderbuffer(this.gl.RENDERBUFFER, null);
    this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, null);

    // Verify framebuffer is OK
    this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, this.buf.framebuf);
    if (!this.gl.isFramebuffer(this.buf.framebuf)) {
        throw SceneJS_error.fatalError(SceneJS.errors.ERROR, "Invalid framebuffer");
    }
    var status = this.gl.checkFramebufferStatus(this.gl.FRAMEBUFFER);
    switch (status) {
        case this.gl.FRAMEBUFFER_COMPLETE:
            break;
        case this.gl.FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
            throw SceneJS_error.fatalError(SceneJS.errors.ERROR, "Incomplete framebuffer: FRAMEBUFFER_INCOMPLETE_ATTACHMENT");
        case this.gl.FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
            throw SceneJS_error.fatalError(SceneJS.errors.ERROR, "Incomplete framebuffer: FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT");
        case this.gl.FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
            throw SceneJS_error.fatalError(SceneJS.errors.ERROR, "Incomplete framebuffer: FRAMEBUFFER_INCOMPLETE_DIMENSIONS");
        case this.gl.FRAMEBUFFER_UNSUPPORTED:
            throw SceneJS_error.fatalError(SceneJS.errors.ERROR, "Incomplete framebuffer: FRAMEBUFFER_UNSUPPORTED");
        default:
            throw SceneJS_error.fatalError(SceneJS.errors.ERROR, "Incomplete framebuffer: " + status);
    }
    this.bound = false;
};
/**
 * Clears this renderbuffer
 */
SceneJS._webgl.RenderBuffer.prototype.clear = function () {
    if (!this.bound) {
        throw "Render buffer not bound";
    }
    this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);
    this.gl.disable(this.gl.BLEND);
};

/**
 * Reads buffer pixel at given coordinates
 */
SceneJS._webgl.RenderBuffer.prototype.read = function (pickX, pickY) {
    var x = pickX;
    var y = this.canvas.canvas.height - pickY;
    var pix = new Uint8Array(4);
    this.gl.readPixels(x, y, 1, 1, this.gl.RGBA, this.gl.UNSIGNED_BYTE, pix);
    return pix;
};

/**
 * Unbinds this renderbuffer
 */
SceneJS._webgl.RenderBuffer.prototype.unbind = function () {
    this.gl.bindFramebuffer(this.gl.FRAMEBUFFER, null);
    this.bound = false;
};
/**
 * Returns the texture
 */
SceneJS._webgl.RenderBuffer.prototype.getTexture = function () {
    var self = this;
    return {
        bind: function (unit) {
            if (self.buf && self.buf.texture) {
                self.gl.activeTexture(self.gl["TEXTURE" + unit]);
                self.gl.bindTexture(self.gl.TEXTURE_2D, self.buf.texture);
                return true;
            }
            return false;
        },
        unbind: function (unit) {
            if (self.buf && self.buf.texture) {
                self.gl.activeTexture(self.gl["TEXTURE" + unit]);
                self.gl.bindTexture(self.gl.TEXTURE_2D, null);
            }
        }
    };
};

/**
 * Destroys this buffer
 */
SceneJS._webgl.RenderBuffer.prototype.destroy = function () {
    if (this.buf) {
        this.gl.deleteTexture(this.buf.texture);
        this.gl.deleteFramebuffer(this.buf.framebuf);
        this.gl.deleteRenderbuffer(this.buf.renderbuf);
        this.buf = null;
        this.bound = false;
    }
};
As far as I could find (I don't use WebGL), the WebGL spec delegates to the OpenGL ES 2.0 spec on these FBO-related calls. RGBA with 8 bits per component is not a format that is required to be supported as a render target in ES 2.0. Many devices do support it (advertised with the OES_rgb8_rgba8 extension), but it is not part of the core standard.
The texture you are using as COLOR_ATTACHMENT0 is RGBA with 8-bit components:
this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGBA, width, height, 0,
                   this.gl.RGBA, this.gl.UNSIGNED_BYTE, textureStorage);
Try specifying this as RGB565, which is color renderable:
this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGB, width, height, 0,
                   this.gl.RGB, this.gl.UNSIGNED_SHORT_5_6_5, textureStorage);
If you do need an alpha component in the texture, RGBA4444 or RGB5_A1 are your only portable options:
this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGBA, width, height, 0,
                   this.gl.RGBA, this.gl.UNSIGNED_SHORT_4_4_4_4, textureStorage);
this.gl.texImage2D(this.gl.TEXTURE_2D, 0, this.gl.RGBA, width, height, 0,
                   this.gl.RGBA, this.gl.UNSIGNED_SHORT_5_5_5_1, textureStorage);
The spec actually looks somewhat contradictory to me. Under "Differences Between WebGL and OpenGL ES 2.0", it says:
The following combinations of framebuffer object attachments, when all of the attachments are framebuffer attachment complete, non-zero, and have the same width and height, must result in the framebuffer being framebuffer complete:
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture
At first sight this suggests that RGBA/UNSIGNED_BYTE is supported. But that is under the condition "when all of the attachments are framebuffer attachment complete", and according to the ES 2.0 spec, attachments with this format are not attachment complete. And there is no override in the WebGL spec of what "attachment complete" means.

Why doesn't CVPixelBufferCreateWithPlanarBytes execute the memory release callback?

I create a buffer with CVPixelBufferCreateWithPlanarBytes and pass a callback function as a parameter:
CVPixelBufferCreateWithPlanarBytes(NULL, imf.iWidth, imf.iHeight, pixelFormat,
                                   NULL, 0, 3, (void**)ppPlaneData,
                                   nPlaneWidth, nPlaneHeight, nPlaneBytesPerRow,
                                   LAVSinkPixelBufferReleasePlanarBytes,
                                   ppPlaneData, NULL, &buffer)
For some reason my callback LAVSinkPixelBufferReleasePlanarBytes is not called after I release the buffer with CVPixelBufferRelease(buffer).
However, a different callback is called properly if I use the CVPixelBufferCreateWithBytes function, but I want to use the planar-data version.
Larger code snippet:
void LAVSinkPixelBufferReleasePlanarBytes(void* releaseRefCon, const void* dataPtr,
                                          size_t dataSize, size_t numberOfPlanes,
                                          const void* planeAddresses[])
{
    // This is never called
}
...
...
size_t nPlaneSize[3];
uint8_t** ppPlaneData = new uint8_t*[3];
for (int i = 0; i < 3; i++) {
    nPlaneSize[i] = nPlaneBytesPerRow[i] * nPlaneHeight[i];
    ppPlaneData[i] = new uint8_t[nPlaneSize[i]];
    memcpy(ppPlaneData[i], ppPlaneAddress[i], nPlaneSize[i]);
}
if (CVPixelBufferCreateWithPlanarBytes(NULL, imf.iWidth, imf.iHeight, pixelFormat,
                                       NULL, 0, 3, (void**)ppPlaneData,
                                       nPlaneWidth, nPlaneHeight, nPlaneBytesPerRow,
                                       LAVSinkPixelBufferReleasePlanarBytes,
                                       ppPlaneData, NULL, &buffer) != kCVReturnSuccess) break;
const BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:timestamp];
CVPixelBufferRelease(buffer);
I found someone else asking the same question with much cleaner sample code:
http://prod.lists.apple.com/archives/cocoa-dev/2013/Nov/msg00177.html
But unfortunately there is no answer.
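One way to sidestep the release callback entirely (a sketch, not a verified fix; it reuses the variable names from the snippet above) is to let CoreVideo own the plane memory: create the buffer with CVPixelBufferCreate and copy each plane into it, so nothing needs to be freed by a callback:
CVPixelBufferRef buffer = NULL;
if (CVPixelBufferCreate(kCFAllocatorDefault, imf.iWidth, imf.iHeight,
                        pixelFormat, NULL, &buffer) == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(buffer, 0);
    for (int i = 0; i < 3; i++) {
        uint8_t* dst = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(buffer, i);
        size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(buffer, i);
        size_t rowBytes = nPlaneBytesPerRow[i] < dstStride ? nPlaneBytesPerRow[i] : dstStride;
        // Copy row by row in case the source and destination strides differ.
        for (size_t row = 0; row < nPlaneHeight[i]; row++) {
            memcpy(dst + row * dstStride,
                   ppPlaneAddress[i] + row * nPlaneBytesPerRow[i],
                   rowBytes);
        }
    }
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    // Append to the adaptor and CVPixelBufferRelease(buffer) as before.
}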
