How can I retain render target results so they are non-volatile? - xna

I am writing an application in XNA that relies on render targets that are rendered to just once and then used indefinitely afterwards. The problem I've encountered is that there are certain situations where the render targets' contents are lost or disposed, such as when the computer goes to sleep or the application enters full-screen mode.
Re-rendering to each target when content is lost is an option, but likely not the best option, as it could be fairly costly when there are many targets.
I could probably save each result as a PNG image and then load that PNG back up as a texture, but that adds a lot of I/O overhead.
Suggestions?

The most promising option I've found so far is to use GetData() and SetData() to copy from the RenderTarget2D to a new Texture2D.
Since I want a mip-mapped texture, I found that I had to copy each mip level individually to the new texture, as shown below. If you don't do this, the object will turn black as you move away from it, since there is no data in the mip maps. Note that the render target must also be mip-mapped.
Texture2D copy = new Texture2D(graphicsDevice,
    renderTarget.Width, renderTarget.Height, true,
    renderTarget.Format);

// Set data for each mip map level
for (int i = 0; i < renderTarget.LevelCount; i++)
{
    // Calculate the dimensions of this mip level.
    // Math.Max because the dimensions must never reach zero.
    int width = (int)Math.Max(renderTarget.Width / Math.Pow(2, i), 1);
    int height = (int)Math.Max(renderTarget.Height / Math.Pow(2, i), 1);
    Color[] data = new Color[width * height];
    renderTarget.GetData<Color>(i, null, data, 0, data.Length);
    copy.SetData<Color>(i, null, data, 0, data.Length);
}

I believe
PresentationParameters pp = graphics.PresentationParameters;
pp.RenderTargetUsage = RenderTargetUsage.PreserveContents;
should do the trick. It has to do with how shader models work on PC and Xbox, and how Shader Model 2+ made them behave the same. (Something about the Xbox overwriting its output buffer by default, hence old render targets being cleared, while the PC just uses some other memory.)

Related

Is there a way to bind assets once instead of with every command encoder?

I'm rendering with a vertex/fragment shader and a compute kernel.
Every frame I am binding large assets (such as a 450MB texture) in the usual way:
computeEncoder.setTexture(highResTexture, index: 0)
computeEncoder.setBuffer(largeBuffer, offset: 0, index: 0)
...
renderEncoder.setVertexTexture(highResTexture, index: 0)
renderEncoder.setVertexBuffer(largeBuffer, offset: 0, index: 0)
So that is close to 1 GB of bandwidth for a single texture, and I have many more assets totaling a few hundred MB, so it is about 1.5 GB that I bind every frame.
Is there any way to bind textures/buffers to the GPU once so that they would then be available in the kernel and vertex functions without binding every frame?
I could be wrong, but I thought something relevant was introduced in one of the last couple of WWDCs, so I wanted to make sure I'm not missing anything.
EDIT:
Simply binding a texture in the vertex function that I have already bound in the compute encoder does indeed show more texture read bandwidth, even though I am not using it in the capture.
GPU Read Bandwidth (from the GPU capture):
6.3920 GiB/s without binding the texture
7.1919 GiB/s with the texture bound but not used in any way
Also, if it works as you describe, why does using multiple command encoders warn about wasted bandwidth? If I use more than one emitter, each with a separate encoder, I get a performance warning about wasted bandwidth even though they bind identical resources.
I think you are confused. Setting a texture on a command encoder doesn't consume bandwidth; reading it or sampling it inside the shader does.
When you set a texture or any other buffer on an encoder, the driver just passes a small amount of metadata to the shader using some mechanism, likely an internal buffer that's not visible to you as the API user. It doesn't "load" the texture anywhere. There's an exception for buffers that are marked as constant address space buffers in the shaders, because those may get pre-fetched by the GPU for better performance.
Another thing that happens is that the resource is made resident, meaning the GPU driver maps a range of addresses in the GPU's virtual memory table to point to the physical memory that stores the texture contents. This does not consume memory either, but it does consume available virtual address space. You might run out of virtual address space in some cases, but that's not a bandwidth issue.
Still, if you do have a lot of textures, you might actually be spending a lot of CPU time just encoding those setTexture commands. Instead, you can use argument buffers. If the hardware you are targeting supports argument buffers tier 2, you can put every texture in an argument buffer. This still requires calling useResource on all of those textures, because the driver needs to know that you are going to use them in order to make them resident, so you will still spend CPU time encoding those commands. To avoid that, you can allocate all the textures from one or more heaps and call useHeaps on those heaps. This makes the whole heap resident, and you won't need to call useResource on individual resources. There are a bunch of WWDC talks on this topic, the latest being Explore bindless rendering in Metal.
But again, to reiterate: nothing I mentioned here "wastes" bandwidth.
Update:
A very basic example of using argument buffers looks like this:
let argumentDescriptor = MTLArgumentDescriptor()
argumentDescriptor.index = 0
argumentDescriptor.dataType = .texture
argumentDescriptor.textureType = .type2D
let argumentEncoder = device.makeArgumentEncoder(arguments: [argumentDescriptor])!
let argumentBuffer = device.makeBuffer(length: argumentEncoder.encodedLength, options: [.storageModeShared])!
argumentEncoder.setArgumentBuffer(argumentBuffer, offset: 0)
argumentEncoder.setTexture(someTexture, index: 0)
commandEncoder.setBuffer(argumentBuffer, offset: 0, index: 0)
commandEncoder.useResource(someTexture, usage: .read)
And in the shader you would write a struct like this:
struct MyTexture
{
texture2d<float> texture [[ id(0) ]];
};
and then bind it like
device MyTexture& myTexture [[ buffer(0) ]]
and use it like any other struct. This is a very basic example; you can actually use reflection to have those MTLArgumentEncoders created for you from functions and binding indices.
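As a complement to the argument buffer example, here is a minimal sketch of the heap-based approach mentioned above: allocate the textures from an MTLHeap so that a single useHeap call makes all of them resident. This is not part of the original answer; the function name makeHeapBackedTextures and the size calculation are assumptions for illustration.
import Metal

// Minimal sketch: allocate textures from a heap so that one useHeap call
// makes all of them resident, instead of one useResource call per texture.
func makeHeapBackedTextures(device: MTLDevice,
                            descriptors: [MTLTextureDescriptor]) -> (MTLHeap, [MTLTexture])? {
    // Ask the device how much heap space each texture needs, rounded up to its alignment.
    let heapDescriptor = MTLHeapDescriptor()
    heapDescriptor.storageMode = .private
    heapDescriptor.size = descriptors.reduce(0) { total, desc in
        let sizeAndAlign = device.heapTextureSizeAndAlign(descriptor: desc)
        let aligned = (sizeAndAlign.size + sizeAndAlign.align - 1) / sizeAndAlign.align * sizeAndAlign.align
        return total + aligned
    }

    guard let heap = device.makeHeap(descriptor: heapDescriptor) else { return nil }

    var textures: [MTLTexture] = []
    for desc in descriptors {
        desc.storageMode = heap.storageMode  // heap textures must match the heap's storage mode
        guard let texture = heap.makeTexture(descriptor: desc) else { return nil }
        textures.append(texture)
    }
    return (heap, textures)
}

// At encode time, a single call covers every resource allocated from the heap:
// computeEncoder.useHeap(heap)
// computeEncoder.setBuffer(argumentBuffer, offset: 0, index: 0)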

What factors determine DXGI_FORMAT?

I am not familiar with DirectX, but I ran into a problem in a small project, part of which involves capturing DirectX data. I hope what follows makes some sense.
General question:
I would like to know what factors determine the DXGI_FORMAT of a texture in the backbuffer (hardware? OS? application? DirectX version?). More importantly, when capturing a texture from the backbuffer, is it possible to receive a texture in a desired format by supplying that format as a parameter, with the format automatically converted if necessary?
Specifics about my problem:
I capture screens from games using Open Broadcaster Software (OBS) and process them using a specific library (OpenCV) prior to streaming. I noticed that, following updates to both Windows and OBS, I get DXGI_FORMAT_R10G10B10A2_UNORM as the DXGI_FORMAT. This is a problem for me because, as far as I know, OpenCV does not provide a convenient way to build an OpenCV object when colors are 10 bits. Below are a few relevant lines from the modified OBS source file.
d3d11_copy_texture(data.texture, backbuffer);
...
hlog(toStr(data.format)); // prints 24 = DXGI_FORMAT_R10G10B10A2_UNORM
...
ID3D11Texture2D* tex;
bool success = create_d3d11_stage_surface(&tex);
if (success) {
    ...
    HRESULT hr = data.context->Map(tex, subresource, D3D11_MAP_READ, 0, &mappedTex);
    ...
    Mat frame(data.cy, data.cx, CV_8UC4, mappedTex.pData, (int)mappedTex.RowPitch); // This creates an OpenCV Mat object.
    // No support for 10-bit colors; it expects 8-bit colors (the CV_8UC4 argument).
    // When the resulting Mat is viewed, the colours are jumbled (probably because 10 bits did not fit into 8 bits).
Before the updates (when I was working on this a year ago), I was probably receiving DXGI_FORMAT = DXGI_FORMAT_B8G8R8A8_UNORM, because the code above used to work.
Now I wonder what changed, and whether I can modify the source code of OBS to receive data with the desired DXGI_FORMAT.
The create_d3d11_stage_surface method called above sets the DXGI_FORMAT, but I am not sure whether it means "give me data with this DXGI_FORMAT" or "I know you work with this format, so give me what you have".
static bool create_d3d11_stage_surface(ID3D11Texture2D **tex)
{
    HRESULT hr;
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = data.cx;
    desc.Height = data.cy;
    desc.Format = data.format;
    ...
I hoped that overriding desc.Format with DXGI_FORMAT_B8G8R8A8_UNORM would result in that format being used for the staging texture mapped by the ID3D11DeviceContext::Map call above, and that I would get data in the specified format. But that did not work.
The choice of render target format is up to the application, but it needs to pick one supported by the Direct3D hardware feature level. Formats for render targets in swap chains are usually display scanout formats:
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM (rare)
See the DXGI documentation for the full list of supported formats and usages by feature level.
Direct3D 11 does not do format conversions when you copy resources, such as when copying a render target to a staging texture, so if you want a format conversion you'll need to handle it yourself. Note that CPU-side conversion code for all the DXGI formats can be found in DirectXTex.
It is the application that decides the format. The simplest one would be R8G8B8A8, which simply stores RGB and alpha values. But if the developer decides to use HDR, the back buffer would probably be R11G11B10, because it can store much more precise color data there, without alpha channel information. If the game is, for example, black and white, there's no need to keep all RGB channels in the back buffer; you could use a simpler format. I hope this helps.

How to use custom compute shaders with Metal and get very smooth performance?

I'm trying to apply live camera filters through Metal using the default MPSKernel filters provided by Apple and custom compute shaders.
In the compute shader I did the in-place encoding with MPSImageGaussianBlur, and the code goes here:
func encode(to commandBuffer: MTLCommandBuffer, sourceTexture: MTLTexture, destinationTexture: MTLTexture, cropRect: MTLRegion = MTLRegion.init(), offset: CGPoint) {
    let blur = MPSImageGaussianBlur(device: device, sigma: 0)
    blur.clipRect = cropRect
    blur.offset = MPSOffset(x: Int(offset.x), y: Int(offset.y), z: 0)

    let threadsPerThreadgroup = MTLSizeMake(4, 4, 1)
    let threadgroupsPerGrid = MTLSizeMake(sourceTexture.width / threadsPerThreadgroup.width, sourceTexture.height / threadsPerThreadgroup.height, 1)

    let commandEncoder = commandBuffer.makeComputeCommandEncoder()
    commandEncoder.setComputePipelineState(pipelineState!)
    commandEncoder.setTexture(sourceTexture, at: 0)
    commandEncoder.setTexture(destinationTexture, at: 1)
    commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
    commandEncoder.endEncoding()

    autoreleasepool {
        var inPlaceTexture = destinationTexture
        blur.encode(commandBuffer: commandBuffer, inPlaceTexture: &inPlaceTexture, fallbackCopyAllocator: nil)
    }
}
But sometimes the in-place texture tends to fail, and eventually it creates a jerky effect on the screen.
So if anyone can suggest a solution that avoids the in-place texture, or explain how to use the fallbackCopyAllocator or how to use the compute shaders in a different way, that would be really helpful.
I have done a fair amount of coding in this area (applying compute shaders to the video stream from the camera), and the most common problem you run into is the "pixel buffer reuse" issue.
The Metal texture you create from the sample buffer is backed by a pixel buffer, which is managed by the video session and can be reused for subsequent video frames unless you retain a reference to the sample buffer (retaining a reference to the Metal texture is not enough).
Feel free to take a look at my code at https://github.com/snakajima/vs-metal, which applies various compute shaders to a live video stream.
The VSContext:set() method takes an optional sampleBuffer parameter in addition to the texture parameter, and retains the reference to the sampleBuffer until the compute shader's computation is completed (in the VSRuntime:encode() method).
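To make that concrete, here is a minimal sketch (not taken from the linked repository) of retaining the CMSampleBuffer until the GPU has finished with the texture created from it; encodeFrame, applyFilters and the .bgra8Unorm pixel format are assumptions for illustration.
import AVFoundation
import CoreVideo
import Metal

// Minimal sketch: keep the CMSampleBuffer alive until the GPU has finished reading
// the Metal texture created from its pixel buffer. applyFilters stands in for the
// app's filter chain and is a placeholder.
func encodeFrame(sampleBuffer: CMSampleBuffer,
                 textureCache: CVMetalTextureCache,
                 commandQueue: MTLCommandQueue,
                 applyFilters: (MTLTexture, MTLCommandBuffer) -> Void) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Wrap the camera's pixel buffer in a Metal texture (zero-copy); assumes a BGRA buffer.
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, nil,
                                              .bgra8Unorm,
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0, &cvTexture)
    guard let cvTexture = cvTexture,
          let texture = CVMetalTextureGetTexture(cvTexture) else { return }

    applyFilters(texture, commandBuffer)

    // Capturing sampleBuffer (and the CVMetalTexture) in this closure prevents the
    // capture session from recycling the pixel buffer while the GPU is still reading it.
    commandBuffer.addCompletedHandler { _ in
        _ = sampleBuffer
        _ = cvTexture
    }
    commandBuffer.commit()
}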
The in-place operation method can be hit or miss, depending on what the underlying filter is doing. If it is a single-pass filter for some parameters, then you'll end up running out of place for those cases.
Since that method was added, MPS has added an underlying MTLHeap to manage memory a bit more transparently for you. If your MPSImage doesn't need to be viewed by the CPU and exists only for a short period of time on the GPU, it is recommended that you just use an MPSTemporaryImage instead. When its readCount hits 0, the backing store is recycled through the MPS heap and made available for other MPSTemporaryImages and other temporary resources used downstream. Likewise, the backing store for it isn't actually allocated from the heap until absolutely necessary (e.g. the texture is written to, or .texture is called). A separate heap is allocated for each command buffer.
Using temporary images should help reduce memory usage quite a lot. For example, in an Inception v3 neural network graph, which has over a hundred passes, the heap was able to automatically reduce the graph to just four allocations.
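As a rough illustration of the MPSTemporaryImage suggestion, here is a minimal sketch assuming a 4-channel (e.g. BGRA) source and destination; the second blur pass merely stands in for a custom compute pass and is not the asker's actual pipeline.
import Metal
import MetalPerformanceShaders

// Minimal sketch: route an intermediate result through an MPSTemporaryImage so its
// backing store is recycled through the MPS heap once its readCount reaches zero.
func twoPassFilter(commandBuffer: MTLCommandBuffer,
                   source: MTLTexture,
                   destination: MTLTexture,
                   device: MTLDevice) {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: source.pixelFormat,
                                                              width: source.width,
                                                              height: source.height,
                                                              mipmapped: false)
    descriptor.usage = [.shaderRead, .shaderWrite]

    // Exists only for the lifetime of this command buffer.
    let intermediate = MPSTemporaryImage(commandBuffer: commandBuffer, textureDescriptor: descriptor)
    intermediate.readCount = 1   // it will be read exactly once, by the second pass

    let sourceImage = MPSImage(texture: source, featureChannels: 4)
    let destinationImage = MPSImage(texture: destination, featureChannels: 4)

    // First pass writes into the temporary image.
    MPSImageGaussianBlur(device: device, sigma: 4.0)
        .encode(commandBuffer: commandBuffer, sourceImage: sourceImage, destinationImage: intermediate)

    // Second pass reads the temporary image (readCount drops to 0, backing store recycled)
    // and writes the final destination, which must have .shaderWrite usage.
    MPSImageGaussianBlur(device: device, sigma: 2.0)
        .encode(commandBuffer: commandBuffer, sourceImage: intermediate, destinationImage: destinationImage)
}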

best approach to constructing an OpenGL ES 2.0 shader-based dynamic chain of filters

I'm on iOS 6 (and 7 too, if that makes any difference) and GL ES 2.0.
The idea is for a CAEAGLLayer to have a dynamic chain of shader-based filters that processes its contents property and displays the final result. Filters can be added / removed at any point in the chain.
So far I've come up with an implementation, but I'm wondering if there are better ways to go about it. My implementation roughly works as follows:
A base filter class from which concrete filters inherit, creating a shader program (vertex / fragment combo) for whatever filter / imaging they implement.
A CAEAGLLayer subclass which implements the filter chain and to which filters are added. The high-level processing algorithm is:
// 1 - Assume whenever the layer's content property is changed to an image, a copy of the image gets stored in a sourceImage property.
// 2 - Assume changing the content property or adding / removing an image unit triggers this algorithm.
// 3 - Assume the whole filter chain basically processes a quad with position and texture coordinates thru a VBO.
// 4 - Assume all shader programs (by shader program I mean a vertex and fragment shader pair in a single program) have access to texture unit 0.
// 5 - Assume P shader programs.
load imageSource into a texture object bound to GL_TEXTURE_2D and pointing to GL_TEXTURE0
attach the bound texture object to GL_FRAMEBUFFER GL_COLOR_ATTACHMENT0 (so we are doing render-to-texture, which will be accessible to fragment shaders)
for p = program identifier 0 up to P - 2:
    glUseProgram(p)
    glDrawArrays()
attach GL_RENDERBUFFER to GL_FRAMEBUFFER GL_COLOR_ATTACHMENT0 (GL_RENDERBUFFER in turn has its storage set to the layer itself)
p = program identifier P - 1 (the last program in the chain)
glUseProgram(p)
glDrawArrays()
present GL_RENDERBUFFER onscreen
This approach seems to work so far, but there's a number of things I'm wondering about:
Best way to implement adding / removing of filters:
Adding and removing programs seems the most logical approach right now. However this means one program per plugin and switching between all of these at render time. I wonder how these other approaches would compare:
Attaching / detaching shader pairs and re-linking a single composite program, instead of adding / removing programs. The OpenGL ES 2.0 Programming Guide says you cannot do it. However, since desktop GL allows multiple shader objects of the same stage in one program, I'm still curious whether it would be a better approach if ES supported it.
Keeping the filters in text format (their code within a function other than main) and instead compile them all into a monolithic shader pair (with an added main of course) each time a filter is added / removed.
Best way to implement per-filter caching:
Right now, adding or removing any number of filters at any point in the chain requires running all programs again to render the final image. It'd be nice, however, if I could somehow cache the output of each filter. That way, removing, adding or bypassing a filter would only require running the filters past the point of insertion / deletion / bypass in the chain. I can think of a naive approach: on each program pass, bind a different texture object to GL_TEXTURE0 and to GL_COLOR_ATTACHMENT0 of the framebuffer. In this way I could keep the output of every filter around. However, creating a new texture, binding it and changing the framebuffer attachment once per filter seems inefficient.
I don't have much to say about the filter output caching problem, but as for filter switching... The EXT_separate_shader_objects extension is designed to solve this very problem, and it's supported on every device that runs iOS 5.0 or later. Here's a brief overview:
There's a new convenience API for compiling shader programs that also takes care of making them "separable":
_vertexProgram = glCreateShaderProgramvEXT(GL_VERTEX_SHADER, 1, &source);
Program Pipeline Objects manage your program state and let you mix and match already-compiled shaders:
GLuint _ppo;
glGenProgramPipelinesEXT(1, &_ppo);
glBindProgramPipelineEXT(_ppo);
glUseProgramStagesEXT(_ppo, GL_VERTEX_SHADER_BIT_EXT, _vertexProgram);
glUseProgramStagesEXT(_ppo, GL_FRAGMENT_SHADER_BIT_EXT, _fragmentProgram);
Mixing and matching shaders can make attribute binding a pain, so you can specify that in the shader (likewise for varyings):
#extension GL_EXT_separate_shader_objects : enable
layout(location = 0) attribute vec4 position;
layout(location = 1) attribute vec3 normal;
Uniforms are set for the shader program they belong to:
glProgramUniformMatrix3fvEXT(_vertexProgram, u_normalMatrix, 1, 0, _normalMatrix.m);

Binding texture memory to a GPU allocated matrix

I created a floating-point matrix on the GPU of size (p7P_NXSTATES) x (p7P_NXTRANS) like so:
// Special Transitions
// Host pointer to array of device pointers
float **tmp_xsc = (float**)(malloc(p7P_NXSTATES * sizeof(float*)));

// For every alphabet in scoring profile...
for (i = 0; i < p7P_NXSTATES; i++)
{
    // Allocate device memory for every alphabet letter in protein sequence
    cudaMalloc((void**)&(tmp_xsc[i]), p7P_NXTRANS * sizeof(float));

    // Copy over arrays
    cudaMemcpy(tmp_xsc[i], gm.xsc[i], p7P_NXTRANS * sizeof(float), cudaMemcpyHostToDevice);
}

// Copy device pointers to array of device pointers on GPU (matrix)
float **dev_xsc;
cudaMalloc((void***)&dev_xsc, p7P_NXSTATES * sizeof(float*));
cudaMemcpy(dev_xsc, tmp_xsc, p7P_NXSTATES * sizeof(float*), cudaMemcpyHostToDevice);
This memory, once copied over to the GPU, is never changed and is only read from, so I've decided to bind it to texture memory. The problem is that when working with 2D texture memory, the memory being bound to it is really just a linear array that uses offsets to function as a matrix.
I'm aware I need to use cudaBindTexture2D() and cudaCreateChannelDesc() to bind this 2D memory so it can be accessed as
tex2D(texXSC, x, y)
but I'm just not sure how. Any ideas?
The short answer is that you cannot bind arrays of pointers to textures. You can either create a CUDA array and copy data to it from linear source memory, or use pitched linear memory directly bound to a texture. But an array of pointers will not work.
