Determining which swap chain formats are supported - directx

When calling IDXGIFactory1::CreateSwapChain with DXGI_FORMAT_B5G6R5_UNORM, I get an error that the format isn't supported, specifically E_INVALIDARG ("One or more arguments are invalid"). However, this works fine with a more common format like DXGI_FORMAT_B8G8R8A8_UNORM.
I'm trying to understand how I can know which swap chain formats are supported. Digging through the documentation, I can find lists of formats supported for "render targets", but this doesn't appear to be the same set of formats supported for swap chains. B5G6R5 does need feature level 11.1 to have required support for most uses, but it is working as a render target here.
https://learn.microsoft.com/en-us/previous-versions//ff471325(v=vs.85)
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-0-feature-level-hardware
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-1-feature-level-hardware
As a test, I looped through all formats and attempted to create swap chains with each. Of the 118 formats, only 8 appear to be supported on my machine (RTX 2070):
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_NV12
DXGI_FORMAT_YUY2
What is the proper way to know which swap chain formats are supported?
For additional context, I'm doing off-screen rendering to a 16-bit (565) format. I have an optional "preview window" that I open occasionally to quickly see the rendering results. When I create the window I create a swap chain and do a copy from the real render target into the swap chain back buffer. I'm targeting DirectX 11 or 11.1. I'm able to render to the B5G6R5 format just fine, it's only the swap chain that complains. I'm running Windows 10 1909.
Here's a Gist with resource creation snippets and a full code sample.
https://gist.github.com/akbyrd/c9d312048b49c5bd607ceba084d95bd0

For a swap chain, the format must be supported for "Display Scan-Out". If you need to check format support at runtime, you can use:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(backBufferFormat, &formatSupport)))
    formatSupport = 0;

UINT32 required = D3D11_FORMAT_SUPPORT_RENDER_TARGET | D3D11_FORMAT_SUPPORT_DISPLAY;
if ((formatSupport & required) != required)
{
    // Not supported
}
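For example (an illustrative sketch only, assuming device is a valid ID3D11Device and the candidate list is whatever your renderer cares about), the brute-force CreateSwapChain loop from the question can be replaced with this check:

const DXGI_FORMAT candidates[] =
{
    DXGI_FORMAT_B5G6R5_UNORM,
    DXGI_FORMAT_B8G8R8A8_UNORM,
    DXGI_FORMAT_R10G10B10A2_UNORM,
};

for (DXGI_FORMAT format : candidates)
{
    UINT support = 0;
    if (FAILED(device->CheckFormatSupport(format, &support)))
        support = 0;

    const UINT required = D3D11_FORMAT_SUPPORT_RENDER_TARGET | D3D11_FORMAT_SUPPORT_DISPLAY;
    const bool usableForSwapChain = (support & required) == required;
    // usableForSwapChain is true only when the format can both be rendered to
    // and scanned out for display on this device.
}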
For all Direct3D Hardware Feature Level devices, you can always count on DXGI_FORMAT_R8G8B8A8_UNORM working. Unless you are on Windows Vista or ancient WDDM 1.0 legacy drivers, you can also count on DXGI_FORMAT_B8G8R8A8_UNORM.
For Direct3D Hardware Feature Level 10.0 or better devices, you can also count on DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM being supported.
You can also count on all Direct3D Hardware Feature Level devices supporting DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for swap chains if you are using the 'legacy' swap effects. For the modern swap effects, which are required for DirectX 12 and recommended on Windows 10 for DirectX 11 (see this blog post), the swap chain buffer is not created with an _SRGB format; instead, you create just the render target view with it.
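A rough sketch of that pattern (illustrative only, not taken from the question's Gist): the flip-model back buffer is created as DXGI_FORMAT_B8G8R8A8_UNORM, and only the render target view uses the _SRGB variant:

// Back buffer of a flip-model swap chain: a non-_SRGB format.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));

// The render target view uses the _SRGB variant so writes are gamma-corrected.
D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format        = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;

ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(backBuffer, &rtvDesc, &rtv);
backBuffer->Release();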
See Anatomy of Direct3D 11 Create Device

Related

Can I use OpenGLES3.0 on IOS just by changing the rendering context, not the header files?

Our iOS engine has used OpenGL ES 2.0 for ages now, but this limits the maximum number of textures that can be bound at once to 8. If I change to OpenGL ES 3.0, this raises the number on all devices to 16. I do this by changing all my
#import <OpenGLES/ES2/glext.h>
to
#import <OpenGLES/ES3/glext.h>
and by changing the rendering context from
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
to
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
I notice that if I only change the rendering context, I don't need to change any code; it all compiles and works. If I change the header files, I do need to change a few lines of code.
Can I just change the rendering context? Since GLES 3.0 is a superset of GLES 2.0, will this work? When I test the maximum number of texture units with
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);
it has indeed changed from 8 to 16, whether I change the headers or not.
Furthermore, what if the device doesn't support GLES 3.0? I believe devices before the A7 chip don't. Can I then revert to the GLES 2.0 rendering context and just use the GLES 2.0 headers for both?
Yes, I know OpenGL is deprecated and I should use Metal. We're working on that, but in the meantime, can I just use the GL2 header files with a GL3 rendering context?
Thanks
Shaun
Users of my program use more than 50 textures simultaneously with OpenGLES2.0 and I did not notice any limitations even on old phones.
Perhaps it is worth revising the drawing technique.

How to log a large object or primitive to the console in NativeScript core on iOS?

Currently, both console.log and console.dir truncate their output. Here's a pull request that explicitly limited, by default, the size of the console output for the NativeScript Android runtime. I couldn't find a similar pull request for the iOS runtime. The pull request for Android added a configuration option that can be set to change the limit but no such option seems to exist for iOS.
Here's a (closed) issue for the main NativeScript project with a comment mentioning that no configuration option seems to be available (or at least known) to change the apparent limit:
console.dir not showing full object · Issue #6041 · NativeScript/NativeScript
I checked the NativeScript, NativeScript iOS runtime, and even WebKit sources (on which the NativeScript iOS runtime depends for its JavaScript runtime from what I could tell) and I couldn't find any obvious limit on the size of console messages.
In the interim, I've opted to use this function in my code:
function logBigStringToConsole(string) {
    const maxConsoleStringLength = 900; // The actual max length isn't clear.
    const length = string.length;
    if (length < maxConsoleStringLength) {
        console.log(string);
    } else {
        console.log(string.substring(0, maxConsoleStringLength));
        logBigStringToConsole(string.substring(maxConsoleStringLength));
    }
}
and I use it like this:
logBigStringToConsole(JSON.stringify(bigObject));

Metal Shader Debugging - Capture GPU Frame

I want to debug my Metal shader, but the "Capture GPU Frame" button is not visible and unavailable in the debug menu.
My scheme was initially set up like this:
Capture GPU Frame: Automatically Enabled
Metal API Validation: Enabled
However, when I change the Capture GPU Frame option to Metal, I do see the capture button, but my app crashes when I try to make the render command encoder:
commandBuffer.makeRenderCommandEncoder(descriptor: ...)
validateRenderPassDescriptor:644: failed assertion `Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)'
Question one: Why do I need to specify the usage? (It works in Automatically Enabled mode)
Question two: How do I specify the MTLTextureUsageRenderTarget?
Running betas; Xcode 10 and iOS 12.
With newer versions of Xcode you need to explicitly set MTLTextureDescriptor.usage. For my case (a render target) that looks like this:
textureDescriptor.usage = MTLTextureUsageRenderTarget|MTLTextureUsageShaderRead;
The above setting indicates that the texture can be used as a render target and that it can also be read afterwards by another shader. As the comment above mentioned, you may also want to set the framebufferOnly property; here is how I do that:
if (isCaptureRenderedTextureEnabled) {
    mtkView.framebufferOnly = false;
}
Note that framebufferOnly is left at its default of true for the optimized case (isCaptureRenderedTextureEnabled = false); setting it to false makes it easy to inspect the data that will be rendered in the view (the output of the shader).
Specify usage purpose
textureDescriptor.usage = [.renderTarget, .shaderRead]
or
textureDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue)

directx feature level and code

I coded a program with Direct3D 11, and my feature level array looks like this:
unsigned int featureLevel[4] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0
};
I am curious why this program can work at feature level 10 even though I only coded against Direct3D 11.
If I used Direct3D 11 functions, shouldn't the program run only on Direct3D 11 hardware?
The feature level has nothing to do with the Direct3D API version: you code against the Direct3D 11 API either way, and the feature level only describes which hardware capabilities are available, such as partial constant buffer updates, 16bpp rendering, partial clears, and so on.
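As an illustrative sketch (not the original poster's code), the array is typically handed to D3D11CreateDevice, which walks it in order and returns the highest level the hardware supports, while the application keeps calling the same Direct3D 11 API:

const D3D_FEATURE_LEVEL featureLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
};

ID3D11Device*        device  = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL    chosen  = {};

HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    featureLevels, _countof(featureLevels), D3D11_SDK_VERSION,
    &device, &chosen, &context);

// 'chosen' reports the feature level the driver actually selected. The code
// still uses the Direct3D 11 API; only optional capabilities (such as 16bpp
// rendering) require the higher levels.

(One known wrinkle: on systems without the 11.1 runtime, including D3D_FEATURE_LEVEL_11_1 in the array makes D3D11CreateDevice fail with E_INVALIDARG, so the call is usually retried without that entry.)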

Does iOS support TLS compression?

I need to compress data sent over a secure channel in my iOS app, and I was wondering if I could use TLS compression for this. I am unable to figure out whether Apple's TLS implementation, Secure Transport, supports it.
Does anyone know whether TLS compression is supported on iOS?
I tried to determine whether Apple's SSL/TLS implementation supports compression, and I am afraid it does not.
At first I was hopeful: since there is an errSSLPeerDecompressFail error code, there had to be a way to enable compression. But I could not find one.
The first piece of evidence that Apple doesn't support compression comes from several wire captures I did from my device (iOS 6.1) opening secure sockets on different ports. In all of them, the Client Hello packet reported only one compression method: null.
Then I looked at the latest libsecurity_ssl source code available from Apple. This is the implementation from Mac OS X 10.7.5, but the iOS one is likely very similar, if not the same, and it surely will not be more capable than the Mac OS X one.
In the file sslHandshakeHello.c, lines 186-187 (SSLProcessServerHello), you can find:
if (*p++ != 0)          /* Compression */
    return unimpErr;
That error code reads a lot like "if the server sends any compression method other than null (0), we don't implement that, so fail".
Again, the same file, line 325 (SSLEncodeClientHello):
*p++ = 0; /* null compression */
And nothing else nearby (DEFLATE is method 1, according to RFC 3749).
Below, lines 469, 476 and 482-483 (SSLProcessClientHello):
compressionCount = *(charPtr++);
...
/* Ignore list; we're doing null */
...
/* skip compression list */
charPtr += compressionCount;
I think it is pretty clear that this implementation only handles the null compression: it is the only one sent in the Client Hello, the only one understood in the Server Hello, and the compression methods are ignored when the Client Hello is received (null must be implemented and offered by every client).
So I think both you and I have to implement application-level compression. Good luck.
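For what it's worth, a minimal sketch of such application-level compression using zlib's one-shot API (libz ships with iOS; the helper name here is illustrative):

#include <zlib.h>
#include <vector>

// Compress a buffer before handing it to the TLS connection; the peer must
// decompress it after decryption.
std::vector<unsigned char> compressBuffer(const unsigned char* src, size_t srcLen)
{
    uLongf destLen = compressBound(static_cast<uLong>(srcLen));
    std::vector<unsigned char> dest(destLen);

    if (compress(dest.data(), &destLen, src, static_cast<uLong>(srcLen)) != Z_OK)
        return {};                // compression failed; send the data uncompressed

    dest.resize(destLen);         // shrink to the actual compressed size
    return dest;
}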
