What factors determine DXGI_FORMAT? - directx

I am not familiar with DirectX, but I ran into a problem in a small project, part of which involves capturing DirectX data. I hope what follows makes sense.
General question:
I would like to know what factors determine the DXGI_FORMAT of a texture in the backbuffer (hardware? OS? application? DirectX version?). More importantly, when capturing a texture from the backbuffer, is it possible to receive it in a desired format by supplying that format as a parameter, with the data converted automatically if necessary?
Specifics about my problem :
I capture screens from games using Open Broadcaster Software (OBS) and process them with a specific library (OpenCV) prior to streaming. I noticed that, following updates to both Windows and OBS, I now get DXGI_FORMAT_R10G10B10A2_UNORM as the DXGI_FORMAT. This is a problem for me because, as far as I know, OpenCV does not provide a convenient way to build an OpenCV image object from 10-bit color data. Below are a few relevant lines from the modified OBS source file.
d3d11_copy_texture(data.texture, backbuffer);
...
hlog(toStr(data.format)); // prints 24 = DXGI_FORMAT_R10G10B10A2_UNORM
...
ID3D11Texture2D* tex;
bool success = create_d3d11_stage_surface(&tex);
if (success) {
...
HRESULT hr = data.context->Map(tex, subresource, D3D11_MAP_READ, 0, &mappedTex);
...
Mat frame(data.cy, data.cx, CV_8UC4, mappedTex.pData, (int)mappedTex.RowPitch); //This creates an OpenCV Mat object.
//No support for 10-bit colors. Expects 8-bit colors (CV_8UC4 argument).
//When the resulting Mat is viewed, colours are jumbled (probably because the 10-bit values do not fit into 8 bits).
Before the updates (when I was working on this a year ago), I was probably receiving DXGI_FORMAT = DXGI_FORMAT_B8G8R8A8_UNORM, because the code above used to work.
Now I wonder what changed, and whether I can modify the source code of OBS to receive data with the desired DXGI_FORMAT.
The create_d3d11_stage_surface function called above sets the DXGI_FORMAT, but I am not sure whether it means 'give me data in this DXGI_FORMAT' or 'I know you work with this format, give me what you have'.
static bool create_d3d11_stage_surface(ID3D11Texture2D **tex)
{
HRESULT hr;
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = data.cx;
desc.Height = data.cy;
desc.Format = data.format;
...
I hoped that overriding desc.Format with DXGI_FORMAT_B8G8R8A8_UNORM would result in that format being used for the texture mapped by the ID3D11DeviceContext::Map call above, so that I would get data in the specified format. But that did not work.

The choice of render target format is up to the application, but it needs to pick one based on the Direct3D hardware feature level. Formats for render targets in swapchains are usually display scanout formats:
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM (rare)
See the DXGI documentation for the full list of supported formats and usages by feature level.
Direct3D 11 does not do format conversions when you copy resources, such as when copying the render target to a staging texture, so if you want to do a format conversion you'll need to handle that yourself. Note that CPU-side conversion code for all the DXGI formats can be found in DirectXTex.
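If you do need 8-bit data on the CPU side, one option is to unpack the 10-bit pixels yourself after Map. Below is a minimal sketch, not OBS code: the helper name, the BGRA output order and the way it plugs into the snippet above are assumptions.

#include <cstdint>
#include <vector>

// Hypothetical helper: expand a mapped DXGI_FORMAT_R10G10B10A2_UNORM surface
// into an 8-bit-per-channel BGRA buffer that a CV_8UC4 Mat can wrap.
static std::vector<uint8_t> ConvertR10G10B10A2ToBGRA8(
    const uint8_t* src, size_t rowPitch, int width, int height)
{
    std::vector<uint8_t> dst(size_t(width) * height * 4);
    for (int y = 0; y < height; ++y) {
        const uint32_t* row = reinterpret_cast<const uint32_t*>(src + y * rowPitch);
        uint8_t* out = &dst[size_t(y) * width * 4];
        for (int x = 0; x < width; ++x) {
            uint32_t p = row[x];
            uint32_t r = p & 0x3FF;           // bits 0-9
            uint32_t g = (p >> 10) & 0x3FF;   // bits 10-19
            uint32_t b = (p >> 20) & 0x3FF;   // bits 20-29
            uint32_t a = (p >> 30) & 0x3;     // bits 30-31
            out[4 * x + 0] = uint8_t(b >> 2); // keep the top 8 of the 10 bits
            out[4 * x + 1] = uint8_t(g >> 2);
            out[4 * x + 2] = uint8_t(r >> 2);
            out[4 * x + 3] = uint8_t(a * 85); // expand 0..3 to 0..255
        }
    }
    return dst;
}

The resulting buffer is tightly packed, so the Mat would be built with a row pitch of width * 4 instead of mappedTex.RowPitch.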

It is the application that decides the format. The simplest would be R8G8B8A8, which simply stores RGB and alpha values. But if the developer decides to use HDR, the backbuffer would probably be R11G11B10, because you can store much more precise color data there, at the cost of the alpha channel. If the game is, for example, black and white, there is no need to keep all three RGB channels in the back buffer; you could use a simpler format. I hope this helps.
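For reference, that decision is made when the application fills in the swap chain description. A minimal sketch, assuming dxgiFactory, d3dDevice and hwnd already exist:

// The back buffer format is whatever the application writes here.
DXGI_SWAP_CHAIN_DESC1 scDesc = {};
scDesc.Width            = 1920;
scDesc.Height           = 1080;
scDesc.Format           = DXGI_FORMAT_R10G10B10A2_UNORM; // or B8G8R8A8_UNORM, R16G16B16A16_FLOAT, ...
scDesc.SampleDesc.Count = 1;
scDesc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scDesc.BufferCount      = 2;
scDesc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;

IDXGISwapChain1* swapChain = nullptr;
HRESULT hr = dxgiFactory->CreateSwapChainForHwnd(
    d3dDevice, hwnd, &scDesc, nullptr, nullptr, &swapChain);

A capture tool such as OBS only sees whatever format the game put in that description.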

Related

DirectCompute: How to read from a RWTexture2D<float4>?

I have the following buffer:
RWTexture2D<float4> Output : register(u0);
This buffer is used by a compute shader for rendering a computed image.
To write a pixel in that texture, I just use code similar to this:
Output[XY] = SomeFunctionReturningFloat4(SomeArgument);
This works very well and my computed image is correctly rendered on screen.
Now at some stage in the compute shader, I would like to read back an
already computed pixel and process it again.
Output[XY] = SomeOtherFunctionReturningFloat4(Output[XY]);
The compiler returns an error:
error X3676: typed UAV loads are only allowed for single-component 32-bit element types
Any help appreciated.
In compute shaders, data access is limited for some data types, and it is not at all intuitive or straightforward. In your case, you use a
RWTexture2D<float4>
That is a typed UAV of format DXGI_FORMAT_R32G32B32A32_FLOAT.
This format is supported for UAV typed store, but it is not supported for UAV typed load.
Basically, you can only write to it, not read from it. UAV typed load only supports single-component 32-bit formats, in your case DXGI_FORMAT_R32_FLOAT (32 bits and that's all).
Your code should run if you use a RWTexture2D<float> but I suppose this is not enough for you.
Possible workarounds that spring to my mind are:
1. using 4 different RWTexture2D<float>, one for each component
2. using 2 different textures, RWTexture2D<float4> to write your values and Texture2D<float4> to read from
3. Use a RWStructuredBuffer instead of the texture.
I don’t know your code so I don’t know if solutions 1. and 2. could be viable. However, I strongly suggest going for 3. and using StructuredBuffer. A RWStructuredBuffer can hold any type of struct and can easily cover all your needs. To be honest, in compute shaders I almost only use them to pass data. If you need the final output to be a texture, you can do all your calculations on the buffer, then copy the results on the texture when you’re done. I would add that drivers often use CompletePath to access RWTexture2D data, and FastPath to access RWStructuredBuffer data, making the former awfully slower than the latter.
Reference for data type access is here. Scroll down to UAV typed load.
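As a host-side illustration of option 3, here is a minimal sketch, assuming device, width and height already exist; it creates a structured buffer of float4 elements plus the UAV that a shader-side RWStructuredBuffer<float4> would bind to (structured buffers support both load and store):

D3D11_BUFFER_DESC bd = {};
bd.ByteWidth           = UINT(width * height * 4 * sizeof(float));
bd.Usage               = D3D11_USAGE_DEFAULT;
bd.BindFlags           = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bd.StructureByteStride = UINT(4 * sizeof(float));

ID3D11Buffer* buffer = nullptr;
HRESULT hr = device->CreateBuffer(&bd, nullptr, &buffer);

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format              = DXGI_FORMAT_UNKNOWN;       // required for structured buffers
uavDesc.ViewDimension       = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements  = UINT(width * height);

ID3D11UnorderedAccessView* uav = nullptr;
if (SUCCEEDED(hr))
    hr = device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);

The shader would then index the buffer as Output[y * width + x], and the results can be copied into a texture once the dispatch is finished if a texture is ultimately needed.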

Equivalent of glColorMask in Metal for a kernel program?

I am trying to move from OpenGL to Metal for my iOS apps. In my OpenGL code I use glColorMask (if I want to write only to selected channels, for example only to alpha channel of a texture) in many places.
In Metal, for the render pipeline (through vertex and fragment shaders), MTLColorWriteMask seems to be the equivalent of glColorMask. I can set it up while creating a MTLRenderPipelineState through the MTLRenderPipelineDescriptor.
But I could not find a similar option for the compute pipeline (through a kernel function). I always need to write all the channels (red, green, blue and alpha) every time I write to an output texture. What if I want to preserve the alpha (or any other channel) and only modify the color channels? I could create a copy of the output texture, use it as one of the inputs and read the alpha from it to preserve the values, but that is expensive.
Computer memory architectures don't like writing only some bytes of data. A write to 1 out of 4 bytes usually involves reading those four bytes into the cache, modifying one of them in the cache, and then writing those four bytes back out into memory. Well, most computers read/write a lot more than 4 bytes at a time, but you get the idea.
This happens with framebuffers too. If you use a partial write mask, the hardware is still going to do the equivalent of a read/modify/write on that texture. It's just not changing all of the bytes it reads.
So you can do the same thing from your compute shader. Read the 4-vector value, modify the channels you want, and then write it back out. As long as the read and write are from the same shader invocation, there should be no synchronization problems (assuming that no other invocations are trying to read/write to that same location, but if that were the case, you'd have problems anyway).
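For what it's worth, here is a minimal sketch in the Metal Shading Language (C++-based); the kernel name and the inversion are made up, and it assumes the texture's pixel format and usage allow read_write access from a compute function:

#include <metal_stdlib>
using namespace metal;

kernel void modifyColorKeepAlpha(texture2d<float, access::read_write> tex [[texture(0)]],
                                 uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= tex.get_width() || gid.y >= tex.get_height())
        return;
    float4 c = tex.read(gid);   // read the existing texel
    c.rgb = 1.0f - c.rgb;       // touch only the color channels (example: invert)
    tex.write(c, gid);          // write it back; alpha passes through unchanged
}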

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
colorData[i] = (i%3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and have good performance on older hardware. UNorm means, as already answered in the comments, "in the range of 0.0 .. 1.0" and is, again, a common way to access color data in textures.
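As a tiny illustration of the difference (a sketch, not SharpDX code): an 8-bit UNorm channel stores an unsigned byte that the GPU exposes to shaders as byte / 255.0, a float in [0, 1], while an 8-bit UInt channel is exposed as a plain integer and has to be read as an integer type in the shader.

#include <cstdint>

// 8-bit UNORM maps the stored byte to byte / 255.0 in the shader:
// 0 -> 0.0, 128 -> ~0.502, 255 -> 1.0
float unorm_value(uint8_t storedByte) { return storedByte / 255.0f; }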
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your device is initialized (RGBA or BGRA?) and on what hardware (i.e. supported DX feature level) and OS you are running your software. You will have to run your own tests to find out (though in the case of these common and similar formats the difference should be a single-digit percentage at most).

How to determine the human-readable type of a cv::Mat?

My current problem is that I would like to know the type of the cv::Mat frames grabbed by cv::VideoCapture from a video file. The documentation doesn't specify it (as is often the case), so even if I have overlooked it in this particular case, it would still be helpful to get an answer for dealing with the problem in general.
Of course, I could open the appropriate OpenCV header file and go through the CV_64FC2, ... macros to find a macro which matches the Mat's type(). But I'm kind of sick of that. There must be an easier way.
Isn't there any function that lets me translate a Mat's type() to a human-readable format? Like this:
cv::Mat myMatWithUnknownType;
// Some code modifying myMatWithUnknownType.
// ...
std::string readableType = myMatWithUnknownType.typeString();
std::cout << readableType; // Prints "CV_64FC3".
How do you deal with that?
First, the format that comes from cameras is only one: CV_8UC3. This is hardcoded in OpenCV, and any video format is converted to it before being handed to the user. So
capture >> frame;
will always return a BGR image with 8 bits per channel.
Now, for other types you can write your own function, keeping in mind that there are not so many types supported in OpenCV: a Mat can be of type char, uchar, short, ushort, int, float or double, with 1 to 512 channels (according to the latest docs). So writing your own type_to_string() is not difficult.
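A possible type_to_string() along those lines (a sketch; the depth and channel macros come from OpenCV's core headers):

#include <opencv2/core.hpp>
#include <string>

// Decode cv::Mat::type() into a human-readable string such as "CV_8UC3".
std::string type_to_string(int type)
{
    int depth    = type & CV_MAT_DEPTH_MASK;   // bit-depth part
    int channels = 1 + (type >> CV_CN_SHIFT);  // channel-count part

    std::string s;
    switch (depth) {
        case CV_8U:  s = "CV_8U";  break;
        case CV_8S:  s = "CV_8S";  break;
        case CV_16U: s = "CV_16U"; break;
        case CV_16S: s = "CV_16S"; break;
        case CV_32S: s = "CV_32S"; break;
        case CV_32F: s = "CV_32F"; break;
        case CV_64F: s = "CV_64F"; break;
        default:     s = "User";   break;
    }
    return s + "C" + std::to_string(channels);
}

// Usage: std::cout << type_to_string(frame.type());  // e.g. "CV_8UC3"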

Sampling BC5_SNORM texture yields incorrect value range

I'm working with Direct3d 11, and I've come across something strange. I have taken a normal map and encoded it to a DDS file twice. Once with R8G8B8A8_SNORM encoding, and once with BC5_SNORM.
Next I load each texture using D3DX11CreateShaderResourceViewFromFile in conjunction with D3DX11GetImageInfoFromFile. When I sample these textures in my pixel shader I find that the R8G8B8A8_SNORM texture is returning values in the range [-1,1], which is what I would expect for a SNORM texture. However, the BC5_SNORM texture is returning values in the range [0,1], which doesn't make any sense to me.
I double and triple checked with my debugger and PIX. The format of the texture is correct (BC5_SNORM), so I am at a loss as to why it is not returning signed values.
I managed to reproduce the same issue, and I also got the same behaviour when converting from an R8G8B8A8_SNORM texture (with -1 to +1 values) to BC5_SNORM (producing only 0 to 1 values) via D3DX11LoadTextureFromTexture. There does appear to be a fault in D3DX11, at least regarding BC5_SNORM: regardless of the input format, the BC5_SNORM output is always in the 0 to 1 range.
As suggested by @chuckwalbourn, I can confirm that the DirectXTex library (which supersedes the now-deprecated D3DX11) does respect and correctly handle signed values for BC5_SNORM outputs.
You can either have your program write out a temporary .dds (using D3DX11SaveTextureToFile with an R8G8B8A8_SNORM texture) and then invoke the standalone DirectXTex 'texconv.exe' utility to convert to BC5_SNORM, or wrangle the DirectXTex library into your program and use its Compress(...) function to produce the BC5_SNORM output.
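For the in-process route, a minimal sketch with DirectXTex (file names are placeholders; error handling is abbreviated):

#include <DirectXTex.h>
using namespace DirectX;

// Load the uncompressed R8G8B8A8_SNORM normal map, compress it to BC5_SNORM, save it out.
ScratchImage source;
HRESULT hr = LoadFromDDSFile(L"normalmap_r8g8b8a8_snorm.dds",
                             DDS_FLAGS_NONE, nullptr, source);

ScratchImage bc5;
if (SUCCEEDED(hr))
    hr = Compress(source.GetImages(), source.GetImageCount(), source.GetMetadata(),
                  DXGI_FORMAT_BC5_SNORM, TEX_COMPRESS_DEFAULT, TEX_THRESHOLD_DEFAULT, bc5);

if (SUCCEEDED(hr))
    hr = SaveToDDSFile(bc5.GetImages(), bc5.GetImageCount(), bc5.GetMetadata(),
                       DDS_FLAGS_NONE, L"normalmap_bc5_snorm.dds");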
