Desktop Duplication API & switchable graphics - directx

The problem: calling the IDXGIOutput1::DuplicateOutput method returns DXGI_ERROR_UNSUPPORTED when you run an application on the discrete graphics controller of a machine with switchable graphics.
This answer sheds some light on the issue. In short, the discrete graphics controller renders only a part of the screen and sends the data to the framebuffer of the integrated graphics controller -- in other words, all output always goes through the integrated graphics controller. It seems that this is why DuplicateOutput returns DXGI_ERROR_UNSUPPORTED.
I wrote a sample that gets all outputs and their video adapters using WinAPI (the EnumDisplayDevices function) and DirectX (the IDXGIFactory::EnumAdapters and IDXGIAdapter::EnumOutputs methods) to compare them on a machine with switchable graphics (Intel HD 4600 & NVIDIA 840M). This is the result:
I am not sure how correct my way of comparing is, but you can see that WinAPI says DISPLAY1 belongs to the Intel card while DirectX says DISPLAY1 belongs to the NVIDIA card. One solution would be to duplicate the output of the Intel card (because everything goes through it), but EnumOutputs returns no outputs for it.
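For reference, a minimal sketch of this kind of enumeration (not the exact sample; a small console program with most error handling omitted):

#include <windows.h>
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "user32.lib")

int main()
{
    // DXGI view: which outputs does each adapter claim?
    IDXGIFactory* factory = nullptr;
    if (SUCCEEDED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
    {
        IDXGIAdapter* adapter = nullptr;
        for (UINT a = 0; SUCCEEDED(factory->EnumAdapters(a, &adapter)); ++a)
        {
            DXGI_ADAPTER_DESC adesc = {};
            adapter->GetDesc(&adesc);

            IDXGIOutput* output = nullptr;
            for (UINT o = 0; SUCCEEDED(adapter->EnumOutputs(o, &output)); ++o)
            {
                DXGI_OUTPUT_DESC odesc = {};
                output->GetDesc(&odesc);
                wprintf(L"DXGI:   adapter \"%ls\" owns output %ls\n",
                        adesc.Description, odesc.DeviceName);
                output->Release();
            }
            adapter->Release();
        }
        factory->Release();
    }

    // WinAPI view: which adapter string does each display device report?
    DISPLAY_DEVICEW dd = {};
    dd.cb = sizeof(dd);
    for (DWORD i = 0; EnumDisplayDevicesW(nullptr, i, &dd, 0); ++i)
        wprintf(L"WinAPI: %ls belongs to %ls\n", dd.DeviceName, dd.DeviceString);

    return 0;
}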
Currently there is a workaround: always run an application that uses the Duplication API on the integrated graphics controller.
The question: how do I make DuplicateOutput work with the discrete graphics controller on a laptop with switchable graphics? Or is this a limitation of the Desktop Duplication API?

Solved:
Unfortunately this issue occurs because the Desktop Duplication API does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario.
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
From here: https://support.microsoft.com/en-us/kb/3019314

Related

Vulkan API : max MSAA samples supported is VK_SAMPLE_COUNT_8_BIT

I am writing a Vulkan API based renderer. Currently I am trying to add MSAA for the color attachment.
I was pretty sure I could use VK_SAMPLE_COUNT_16_BIT, but limits.framebufferColorSampleCounts returns bit flags that allow MSAA levels only up to VK_SAMPLE_COUNT_8_BIT (inclusive).
I run on a brand new NVIDIA Quadro RTX 3000 card, with the latest NVIDIA driver: 441.28.
I checked the limits in OpenGL and GPU caps viewer shows
GL_MAX_FRAMEBUFFER_SAMPLES = 32
How does that make sense? Is the limit dictated by the Vulkan API only? And if the hardware doesn't support more than x8, does that mean the OpenGL driver simulates it, e.g. via something like supersampling? That's what I was told by several rendering developers at khronosdev.slack. Does that make sense? Doesn't a vendor have to comply with the standard and either implement MSAA the right way or not implement it at all?
Is it possible that OpenGL doesn't "really" support more than x8 MSAA, but the drivers simulate it via something like supersampling?
UPDATE
This page explains the whole state of MSAA implementation in OpenGL, and it actually becomes clear from it why Vulkan doesn't provide more than x8 samples on my card. Here is the punch line:
Some NVIDIA drivers support multisample modes which are internally implemented as a combination of multisampling and automatic supersampling in order to obtain a higher level of anti-aliasing than can be directly supported by hardware.
framebufferColorSampleCounts is a bitmask of flags, not a count. See this enum for the values: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkSampleCountFlagBits.html
A value of 15 means VK_SAMPLE_COUNT_1_BIT, VK_SAMPLE_COUNT_2_BIT, VK_SAMPLE_COUNT_4_BIT and VK_SAMPLE_COUNT_8_BIT are supported.
This answers why you get 15 rather than a power of two, but it still raises the question of why the NVIDIA Vulkan driver limits you more than the OpenGL driver. Perhaps a question for the NVIDIA forums. You should double-check that your driver is up to date and that you're actually picking your NVIDIA card and not an integrated one.
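For illustration, a minimal sketch (assuming a valid VkPhysicalDevice; the helper name is illustrative) of turning those flags into the highest usable color sample count:

#include <vulkan/vulkan.h>

// Pick the highest supported color MSAA level from the sample-count flags.
// If you also use a depth attachment, AND this with framebufferDepthSampleCounts.
VkSampleCountFlagBits MaxColorSampleCount(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(physicalDevice, &props);
    VkSampleCountFlags counts = props.limits.framebufferColorSampleCounts;

    if (counts & VK_SAMPLE_COUNT_64_BIT) return VK_SAMPLE_COUNT_64_BIT;
    if (counts & VK_SAMPLE_COUNT_32_BIT) return VK_SAMPLE_COUNT_32_BIT;
    if (counts & VK_SAMPLE_COUNT_16_BIT) return VK_SAMPLE_COUNT_16_BIT;
    if (counts & VK_SAMPLE_COUNT_8_BIT)  return VK_SAMPLE_COUNT_8_BIT;  // a value of 15 stops here
    if (counts & VK_SAMPLE_COUNT_4_BIT)  return VK_SAMPLE_COUNT_4_BIT;
    if (counts & VK_SAMPLE_COUNT_2_BIT)  return VK_SAMPLE_COUNT_2_BIT;
    return VK_SAMPLE_COUNT_1_BIT;
}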
I've also come across a similar problem (not Vulkan, but OpenGL, and also NVIDIA): on my NVIDIA GeForce GTX 750 Ti, the Linux nvidia driver reports GL_MAX_SAMPLES=32, but anything higher than 8 samples results in ugly blurring of everything, including e.g. text, even with glDisable(GL_MULTISAMPLE) for all rendering.
I remember seeing the same blurring problems when I enabled FXAA globally (via nvidia-settings --assign=fxaa=1) and ran KWin (KDE's compositing window manager) with this setting on. So I suspect this behavior with samples>=9 is because the driver enables FXAA in addition to (or instead of) MSAA.

How do I detect the DirectX shader model above v3 supported by a graphics card?

I am writing a small utility that reports system capabilities. One is the highest shader model supported by the installed graphics card, and I am currently detecting this using Direct3D 9.0c's device capabilities and checking the VertexShaderVersion and PixelShaderVersion fields of the D3DCAPS9 structure.
D3DCAPS9 oCaps = {};
const HRESULT hrDCaps = poD3D9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &oCaps);
if (SUCCEEDED(hrDCaps)) {
    // Pixel and vertex shader model versions. Use the minimum of the two as "the" shader model version.
    const int iVertexShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.VertexShaderVersion);
    const int iPixelShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.PixelShaderVersion);
}
However, both these values return shader model 3 even for cards that support higher models. Here is what GPU-Z returns for the same card, for example:
This question indicates that DX9 will never report more than SM3 even on cards that support a higher model, but doesn't actually mention how to solve it.
How do I accurately get the shader model supported by the installed card? That is, the card capabilities, not the installed DirectX driver capabilities.
The utility has to run on Windows 2000 and above, and work on systems where a graphics card and even DirectX are not installed. I am currently dynamically loading DX9, so on those systems the check gracefully fails (which is ok.) But I am seeking a similar solution: something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Edit - purpose: I am not using this code to dynamically change features of a program, i.e. select shaders. I am using it to report hardware capabilities as a 'ping' to a server, so that we have a good idea of the typical hardware our customers use, which can inform future product decisions. (For example: how many customers have SM4 or above? How many are using a 64-bit OS? Etc.) This is why either (a) gracefully failing, so we know it failed, or (b) getting an accurate shader model number are the two preferred modes.
Edit - answers so far: The answer below by SigTerm suggests instantiating DirectX 11, 10.1, 10, and 9.0c in order, and basing the reported shader model on which version instantiates without failure (shader model 5, 4.1, 4, and the D3D9 caps check, in that order). If possible, I'd appreciate a code example of the DX11 and DX10 ways to do this.
This may not be a reliable solution. For example, I am running Windows on a VMWare Fusion virtual machine on OSX. The Fusion drivers report DX11 in DxDiag, yet I know from the Fusion tech specs that it only supports DX9.0c and shader model 3. Still, with this exception, this method seems the best way so far.
Shader model 4 is only supported by Direct3D 10, therefore the D3D9 API won't report it. Use the D3D10/D3D11 API to detect higher versions.
something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Attempt to initialize D3D10/D3D11 to check functionality; if that fails, initialize D3D9. Use LoadLibrary + GetProcAddress to load the D3D10/D3D11 functions, because if you link with D3D10 using a .lib file, your application will fail to start if d3d10.dll is missing (see the sketch further below).
Or use OpenGL and try to map capabilities reported by OpenGL to D3D capabilities (probably a very bad idea).
Or build a GPU database and use that.
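A rough sketch of the D3D11 detection route mentioned above, loaded dynamically so the utility still starts on old systems (the function name and the feature-level to shader-model mapping are illustrative; the D3D10 and D3D9 fallbacks are omitted):

#include <windows.h>
#include <d3d11.h>

// Signature of D3D11CreateDevice, resolved at run time via GetProcAddress.
typedef HRESULT (WINAPI *PFN_D3D11_CREATE_DEVICE)(
    IDXGIAdapter*, D3D_DRIVER_TYPE, HMODULE, UINT,
    const D3D_FEATURE_LEVEL*, UINT, UINT,
    ID3D11Device**, D3D_FEATURE_LEVEL*, ID3D11DeviceContext**);

// Returns the shader model as major*10+minor (50, 41, 40), or 0 if the D3D11
// runtime is missing or the hardware is 9_x class; in that case fall back to
// D3D10 and then to the D3D9 caps check.
int DetectShaderModelViaD3D11()
{
    HMODULE d3d11 = LoadLibraryW(L"d3d11.dll");
    if (!d3d11)
        return 0;

    PFN_D3D11_CREATE_DEVICE createDevice =
        (PFN_D3D11_CREATE_DEVICE)GetProcAddress(d3d11, "D3D11CreateDevice");
    if (!createDevice) { FreeLibrary(d3d11); return 0; }

    // 11_1 is deliberately left out: requesting it on a pre-11.1 runtime fails outright.
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_1
    };
    D3D_FEATURE_LEVEL got = (D3D_FEATURE_LEVEL)0;
    ID3D11Device* device = nullptr;

    HRESULT hr = createDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                              wanted, ARRAYSIZE(wanted), D3D11_SDK_VERSION,
                              &device, &got, nullptr);
    int shaderModel = 0;
    if (SUCCEEDED(hr)) {
        switch (got) {
            case D3D_FEATURE_LEVEL_11_0: shaderModel = 50; break; // SM 5.0
            case D3D_FEATURE_LEVEL_10_1: shaderModel = 41; break; // SM 4.1
            case D3D_FEATURE_LEVEL_10_0: shaderModel = 40; break; // SM 4.0
            default:                     shaderModel = 0;  break; // 9_x level: use D3D9 caps instead
        }
        device->Release();
    }
    FreeLibrary(d3d11);
    return shaderModel;
}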
where a graphics card and even DirectX are not installed.
I think you're asking for the impossible, because shaders are provided through DirectX, and the driver/GPU might not even have a concept of a "shader model" under the hood. In that case the only way to detect capabilities would be to build a GPU database of some sort, detect the installed devices, and return the answer from the database. This won't be reliable, of course.
Here is a link about DirectX versions and supported shader models.

DirectX, GDI+ or SDL on Windows XP?

If I want to do scaling and compositing of 2D anti-aliased vector and bitmap images in real-time on Windows XP and later versions of Windows, making the best use of hardware acceleration available, should I be using GDI+ or DirectX 9.0c? (Actually, Windows XP and Windows 7 are important but we're not concerned about performance on Vista.)
Is there any merit in using SDL, given that the application is not cross-platform (and never will be)? I wonder if SDL might make it easier to switch to whichever underlying drawing API gives better performance…
Where can I find the documentation for doing scaling and compositing of 2D images in DirectX 9.0c? (I found the documentation for DirectDraw but read that it is deprecated after DirectX 7. But Direct2D is not available until DirectX 10.)
Can I reasonably expect scaling and compositing to be hardware accelerated on Windows XP on a mid- to low-spec PC (i.e. integrated graphics)? If not then does it even matter whether I use GDI+ or DirectX 9.0c?
Do not use GDI+. It does everything in software, and it has a rendering model that is not good for performance in software. You'd be better off with just about anything else.
Direct3D or OpenGL (which you can access via SDL if you want a more complete API that is cross-platform) will give you the best performance on hardware that supports it. Direct2D is in the same boat but is not available on Windows XP. My understanding is that, at least in the case of Intel's integrated GPUs, the hardware is able to do simple operations like transforming and compositing, and that most of the problems with these GPUs are with games that have high demands for features and performance and are optimized for ATI/NVIDIA cards. If you somehow find a machine where Direct3D is not supported by the video card and falls back to software, then you might have a problem.
I believe SDL uses DirectDraw on Windows for its non-OpenGL drawing. Somehow I got the impression that DirectDraw does all its operations in software in modern releases of Windows (and given what DirectDraw is used for it never really mattered since the win9x era), but I'm not able to verify that.
The ideal would be a cross-platform vector graphics library that can make use of Direct3D or OpenGL for rendering, but AFAICT no such thing is available. The Cairo graphics library lacks acceleration on Windows, and Mozilla has started a project called Azure that apparently has that but doesn't appear to be designed for use outside of their projects.
I just found this: 2D Rendering in DirectX 8.
It appears that since Microsoft removed DirectDraw after DirectX 7 they expected all 2D drawing to be done using the 3D API. This would explain why I totally failed to find the documentation I was looking for.
The article looks promising so far.
Here's another: 2D Programming in a 3D World
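To give a flavor of 2D-through-the-3D-API in D3D9, here is a hedged sketch using the D3DX sprite helper (one common approach, not necessarily what those articles use; it assumes an existing device and texture and the legacy DirectX SDK for d3dx9):

#include <d3d9.h>
#include <d3dx9.h>   // legacy DirectX SDK; link d3dx9.lib

// Draw `texture` scaled and alpha-blended at (x, y). Call between BeginScene/EndScene.
// In real code the ID3DXSprite would be created once and reused, not per draw call.
void DrawScaledSprite(IDirect3DDevice9* device, IDirect3DTexture9* texture,
                      float x, float y, float scale)
{
    ID3DXSprite* sprite = nullptr;
    if (FAILED(D3DXCreateSprite(device, &sprite)))
        return;

    D3DXVECTOR2 scaling(scale, scale);
    D3DXVECTOR2 position(x, y);
    D3DXMATRIX  transform;
    D3DXMatrixTransformation2D(&transform, nullptr, 0.0f, &scaling,
                               nullptr, 0.0f, &position);

    sprite->Begin(D3DXSPRITE_ALPHABLEND);   // alpha-blended 2D compositing
    sprite->SetTransform(&transform);
    sprite->Draw(texture, nullptr, nullptr, nullptr, D3DCOLOR_XRGB(255, 255, 255));
    sprite->End();
    sprite->Release();
}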

D3D9 Application not working w/ Intel HD Graphics

I've inherited an application that uses D3D9 to display graphics full screen on monitor #2. The application works properly on a desktop machine with a GeForce 9500 GT. When I attempt to run the application on a laptop equipped with onboard Intel HD Graphics, not all of the graphics are displayed: one of the vertex buffers is drawn but the rest are black.
I'm not very familiar with D3D, so I'm not sure where to begin debugging this problem. I've been doing some searching but haven't been able to turn anything up.
Update:
Drawing simple vertex buffers with only 2 triangles works, but anything more complex doesn't.
My gut feeling is that the cause is likely the supported shader models of the given GPU.
Generally it is good practice to query the gfx card to see what it can support.
There is also a chance it could be specific D3D API functionality - you see this more often when switching between, say, GeForce and ATI (AMD), but of course it is also possible with Intel being its own vendor; I would start by querying the supported shaders.
For D3D9 you use IDirect3D9::GetDeviceCaps to query the gfx device.
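For example, a minimal sketch of such a check (assuming an IDirect3D9* obtained from Direct3DCreate9; the helper name SupportsPixelShader30 is just illustrative):

#include <d3d9.h>

// Returns true if the default HAL adapter exposes at least pixel shader 3.0.
bool SupportsPixelShader30(IDirect3D9* d3d9)
{
    D3DCAPS9 caps = {};
    if (FAILED(d3d9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    return caps.PixelShaderVersion >= D3DPS_VERSION(3, 0);
}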
links:
Post here: https://gamedev.stackexchange.com/questions/22705/how-can-i-check-for-shader-model-3-support
http://msdn.microsoft.com/en-us/library/bb509626%28VS.85%29.aspx
DirectX also offers functionality to create a device targeting a given feature level:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876%28v=vs.85%29.aspx
Solution #1:
Check the error code of every D3D9 call. Use DXGetErrorString9 and DXGetErrorDescription9 to get a human-readable translation of the error code. See the DirectX documentation for more info. When you finally encounter a call that returns something other than D3D_OK, investigate the DirectX documentation for that call.
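A small sketch of that kind of per-call checking (assumes the legacy DirectX SDK for dxerr9.h/dxerr9.lib; the CHECK_D3D macro name is illustrative):

#include <d3d9.h>
#include <dxerr9.h>   // legacy DirectX SDK; link dxerr9.lib
#include <cstdio>

// Wrap a D3D9 call and log a readable message when it returns anything but D3D_OK.
#define CHECK_D3D(call)                                                          \
    do {                                                                         \
        HRESULT hr_ = (call);                                                    \
        if (hr_ != D3D_OK)                                                       \
            printf("%s failed: %s (%s)\n", #call,                                \
                   DXGetErrorString9A(hr_), DXGetErrorDescription9A(hr_));       \
    } while (0)

// Usage, for example:
// CHECK_D3D(device->SetTexture(0, texture));
// CHECK_D3D(device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount));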
Solution #2:
Install the DirectX debug runtime (included with the DirectX SDK) and examine the debug messages the process outputs while it runs (the messages are printed using OutputDebugString, so you'll only see them in a debugger/IDE). With high debug output settings, you'll see every single problem in your app.

What is the most efficient way to screen capture? Screen capturing using DirectX?

I've known about screen capture using Device Contexts and GDI since Windows XP. Is there a better way (e.g. DirectX?) now that the desktop is mostly Direct3D?
How can I screen capture using DirectX?
I want to know the most efficient way to do user-mode screen capture, for a tech support program that needs frequent screen scrapes.
UPDATE: I don't want to resort to using kernel mode drivers.
I am unsure this will actually be faster than the algorithms you have in mind, but one way to do it would be to copy your buffer out using GetRenderTargetData.
GetRenderTargetData
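A hedged sketch of that route (it assumes a non-multisampled render target whose size and format match the arguments; D3DFMT_A8R8G8B8 is only an assumption here, and for grabbing the visible desktop the same pattern is usually used with GetFrontBufferData instead):

#include <d3d9.h>

// Copy the current render target into system memory so the pixels can be read.
HRESULT CaptureRenderTarget(IDirect3DDevice9* device, UINT width, UINT height)
{
    IDirect3DSurface9* renderTarget = nullptr;
    IDirect3DSurface9* sysmemCopy   = nullptr;

    HRESULT hr = device->GetRenderTarget(0, &renderTarget);
    if (SUCCEEDED(hr))
        hr = device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                                 D3DPOOL_SYSTEMMEM, &sysmemCopy, nullptr);
    if (SUCCEEDED(hr))
        hr = device->GetRenderTargetData(renderTarget, sysmemCopy); // GPU -> system memory

    if (SUCCEEDED(hr)) {
        D3DLOCKED_RECT rect;
        if (SUCCEEDED(sysmemCopy->LockRect(&rect, nullptr, D3DLOCK_READONLY))) {
            // rect.pBits points at the pixel data (rect.Pitch bytes per row);
            // copy or encode it here before unlocking.
            sysmemCopy->UnlockRect();
        }
    }
    if (sysmemCopy)   sysmemCopy->Release();
    if (renderTarget) renderTarget->Release();
    return hr;
}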
Based upon vcsjones's answer (above), see this CodeProject article: http://www.codeproject.com/KB/dialog/screencap.aspx#And%20The%20DirectX%20way%20of%20doing%20it%20
An alternative method is to use Spazzarama's application, which uses DirectX (based on SlimDX) and EasyHook to inject your capture DLL into a running application's DirectX pipeline.
