The DXGI Overview on MSDN says that the Direct3D API (10, 11 and 12) sits on top of DXGI, whereas DXGI sits on top of the hardware, as illustrated by the following picture:
The article further mentions that the tasks of DXGI are basically enumerating adapters and presenting images on the screen. Now, if Direct3D sits on top of DXGI, how are all the math-related tasks invoked on the actual hardware (GPU)? Or is the architectural overview wrong, and does Direct3D also access the hardware directly?
This diagram is a logical map, not an explicit map of how everything actually gets implemented. In reality, Direct3D and DXGI are more 'side-by-side', and the layer that includes the User Mode Driver (UMD) and the Kernel Mode Driver (KMD) is the Windows Display Driver Model (WDDM), which uses the Device Driver Interface (DDI) to communicate with kernel mode, which in turn communicates with the hardware. The various versions of Direct3D are also 'lofted' together to use the same DDI in most cases (i.e. Direct3D 9 and Direct3D 10 legacy applications end up going through the same Direct3D 11 codepaths where possible).
Since "DXGI" means "DirectX Graphics Infrastructure" this diagram is lumping the DXGI APIs with WDDM and DDI.
The purpose of the DXGI API was to separate video hardware/output enumeration, as well as swapchain creation/presentation, from Direct3D. Back in Direct3D 9 and prior, these were all lumped together. In theory, DXGI was not supposed to change much between Direct3D versions, but in practice it has evolved at basically the same pace, with a lot of changes dealing with the CoreWindow swapchain model for Windows Store apps / Universal Windows Platform apps.
Many of the DXGI APIs are really for internal use, particularly when dealing with surface creation. You need to create Direct3D resources with the Direct3D APIs and not try to create them directly with DXGI, but you can use QueryInterface in places to get a DXGI surface for doing certain operations like inter-API surface sharing. With Direct3D 11.1 or later, most of the device sharing behavior has been automated so you don't have to deal with DXGI to use Direct2D/DirectWrite with Direct3D 11.
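As a hedged illustration of that QueryInterface pattern (not taken from the articles below): pTexture2D is an assumed, pre-existing ID3D11Texture2D*, and the QI only succeeds for textures DXGI can expose as a single surface (one mip level, one array slice).

IDXGISurface* pSurface = nullptr;
HRESULT hr = pTexture2D->QueryInterface(__uuidof(IDXGISurface),
                                        reinterpret_cast<void**>(&pSurface));
if (SUCCEEDED(hr)) {
    // The surface can now be handed to another API, e.g. Direct2D via
    // ID2D1Factory::CreateDxgiSurfaceRenderTarget, for inter-API sharing.
    pSurface->Release();
}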
The real question is: Why does it matter to you?
See DirectX Graphics Infrastructure (DXGI): Best Practices and Surface Sharing Between Windows Graphics APIs
I am writing a small utility that reports system capabilities. One is the highest shader model supported by the installed graphics card, and I am currently detecting this using Direct3D 9.0c's device capabilities and checking the VertexShaderVersion and PixelShaderVersion fields of the D3DCAPS9 structure.
D3DCAPS9 oCaps;
HRESULT hrDCaps = poD3D9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &oCaps);
if (SUCCEEDED(hrDCaps)) {
    // Pixel and vertex shader model versions. Use the minimum of the two
    // as "the" shader model version.
    const int iVertexShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.VertexShaderVersion);
    const int iPixelShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.PixelShaderVersion);
}
However, both these values return shader model 3 even for cards that support higher models. Here is what GPU-Z returns for the same card, for example:
This question indicates that DX9 will never report more than SM3 even on cards that support a higher model, but doesn't actually mention how to solve it.
How do I accurately get the shader model supported by the installed card? That is, the card capabilities, not the installed DirectX driver capabilities.
The utility has to run on Windows 2000 and above, and work on systems where a graphics card and even DirectX are not installed. I am currently dynamically loading DX9, so on those systems the check gracefully fails (which is ok.) But I am seeking a similar solution: something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Edit - purpose: I am not using this code to dynamically change features of a program, i.e. select shaders. I am using it to report hardware capabilities as a 'ping' to a server, so that we have a good idea of the typical hardware our customers use, which can inform future product decisions. (For example: how many customers have SM4 or above? How many are using a 64-bit OS? Etc.) This is why either (a) gracefully failing, so we know it failed, or (b) getting an accurate shader model number are the two preferred modes.
Edit - answers so far: The answer below by SigTerm suggests instantiating DirectX 11, 10.1, 10, and 9.0c in order, and basing the reported shader model on which version instantiated without failures (shader model 5, 4.1, 4, and DXCAPS in that order.) If possible, I'd appreciate a code example of the DX11 and 10 ways to do this.
This may not be a reliable solution, however. For example, I am running Windows on a VMware Fusion virtual machine on OS X. The Fusion drivers report DX11 in DxDiag, yet I know from the Fusion tech specs that it only supports DX9.0c and shader model 3. Still, with this exception, this method seems the best way so far.
Shader model 4 is only supported by Direct3D 10, so the D3D9 API won't report it. Use the D3D10/D3D11 API to detect higher versions.
something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Attempt to initialize D3D10/D3D11 to check functionality; if it fails, initialize D3D9. Use LoadLibrary + GetProcAddress to load the D3D10/D3D11 functions, because if you link with D3D10 using a .lib file, your application will fail to start if d3d10.dll is missing. (A sketch of this approach is given after the alternatives below.)
Or use OpenGL and try to map capabilities reported by OpenGL to D3D capabilities (probably a very bad idea).
Or build a GPU database and use that.
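A minimal sketch of the first option, assuming d3d11.h is available at compile time (the typedef below just mirrors the documented signature of D3D11CreateDevice; error handling is trimmed):

typedef HRESULT (WINAPI *D3D11CreateDeviceFn)(
    IDXGIAdapter*, D3D_DRIVER_TYPE, HMODULE, UINT,
    const D3D_FEATURE_LEVEL*, UINT, UINT,
    ID3D11Device**, D3D_FEATURE_LEVEL*, ID3D11DeviceContext**);

HMODULE hD3D11 = LoadLibraryW(L"d3d11.dll");
if (hD3D11) {
    D3D11CreateDeviceFn pfnCreate = reinterpret_cast<D3D11CreateDeviceFn>(
        GetProcAddress(hD3D11, "D3D11CreateDevice"));
    if (pfnCreate) {
        // Passing a null feature-level array asks for the highest level available.
        D3D_FEATURE_LEVEL flAchieved;
        HRESULT hr = pfnCreate(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                               nullptr, 0, D3D11_SDK_VERSION,
                               nullptr, &flAchieved, nullptr);
        if (SUCCEEDED(hr)) {
            // 11_0 => SM5, 10_1 => SM4.1, 10_0 => SM4, 9_x => SM2/3
            // (use the D3D9 caps path to distinguish the 9_x cases).
        }
    }
    FreeLibrary(hD3D11);
}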
where a graphics card and even DirectX are not installed.
I think you're asking for the impossible, because shaders are provided by DirectX, and the driver/GPU might not even have a concept of a "shader model" under the hood. In that case, the only way to detect capabilities would be to build a GPU database of some sort, detect the installed devices, and return the answer from the database. This won't be reliable, of course.
Here is a link about DirectX versions and supported shader models.
If I write a DirectX 11 application using the DX11 SDK, and I don't have a DirectX 11 card, will I be able to run the application?
I can't find the requirements for actually using the DX11 SDK.
Thanks!
Yes, you can. What you are looking for is called Direct3D feature levels. With this paradigm you can write an application that uses only the features supported by the installed graphics card. Before creating the D3D device, you query the card to find the highest feature level it supports, and then you use that level in the device creation.
Of course, you cannot use the new DirectX 11 features on hardware that doesn't support them.
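As a hedged sketch (variable names are placeholders), the feature-level pattern looks like this:

const D3D_FEATURE_LEVEL levels[] = {
    D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,  D3D_FEATURE_LEVEL_9_2,  D3D_FEATURE_LEVEL_9_1
};
D3D_FEATURE_LEVEL achieved;
ID3D11Device* pDevice = nullptr;
ID3D11DeviceContext* pContext = nullptr;
// The runtime walks the array in order and returns the first level the
// adapter supports in 'achieved'.
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                               levels, ARRAYSIZE(levels), D3D11_SDK_VERSION,
                               &pDevice, &achieved, &pContext);
// Gate optional rendering paths on 'achieved' (e.g. geometry shaders need
// at least D3D_FEATURE_LEVEL_10_0).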
This tutorial site states that when setting up a device and swap chain, a D3D_DRIVER_TYPE of D3D_DRIVER_TYPE_REFERENCE can be used to allow DX11 features to be explored even if you don't have a DX11 graphics card.
For further reading, click that link and scroll down halfway until you see D3D11CreateDeviceAndSwapChain().
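A hedged sketch of that fallback (swapChainDesc and the output pointers are assumed to be set up in the usual way; the reference rasterizer is correct but very slow and requires the DirectX SDK):

// Try real hardware first, then fall back to the software reference device.
HRESULT hr = D3D11CreateDeviceAndSwapChain(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    nullptr, 0, D3D11_SDK_VERSION, &swapChainDesc,
    &pSwapChain, &pDevice, &featureLevel, &pContext);
if (FAILED(hr)) {
    hr = D3D11CreateDeviceAndSwapChain(
        nullptr, D3D_DRIVER_TYPE_REFERENCE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION, &swapChainDesc,
        &pSwapChain, &pDevice, &featureLevel, &pContext);
}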
If I want to do scaling and compositing of 2D anti-aliased vector and bitmap images in real-time on Windows XP and later versions of Windows, making the best use of hardware acceleration available, should I be using GDI+ or DirectX 9.0c? (Actually, Windows XP and Windows 7 are important but we're not concerned about performance on Vista.)
Is there any merit in using SDL, given that the application is not cross-platform (and never will be)? I wonder if SDL might make it easier to switch to whichever underlying drawing API gives better performance…
Where can I find the documentation for doing scaling and compositing of 2D images in DirectX 9.0c? (I found the documentation for DirectDraw but read that it is deprecated after DirectX 7. But Direct2D is not available until DirectX 10.)
Can I reasonably expect scaling and compositing to be hardware accelerated on Windows XP on a mid- to low-spec PC (i.e. integrated graphics)? If not then does it even matter whether I use GDI+ or DirectX 9.0c?
Do not use GDI+. It does everything in software, and its rendering model is not well suited to high performance even in software. You'd be better off with just about anything else.
Direct3D or OpenGL (which you can access via SDL if you want a more complete cross-platform API) will give you the best performance on hardware that supports it. Direct2D is in the same boat but is not available on Windows XP. My understanding is that, at least in the case of Intel's integrated GPUs, the hardware is able to do simple operations like transforming and compositing, and that most of the problems with these GPUs arise in games with high demands for features and performance that are optimized for ATI/Nvidia cards. If you somehow find a machine where Direct3D is not supported by the video card and is falling back to software, then you might have a problem.
I believe SDL uses DirectDraw on Windows for its non-OpenGL drawing. Somehow I got the impression that DirectDraw does all its operations in software in modern releases of Windows (and given what DirectDraw is used for, that has never really mattered since the Win9x era), but I'm not able to verify that.
The ideal would be a cross-platform vector graphics library that can make use of Direct3D or OpenGL for rendering, but AFAICT no such thing is available. The Cairo graphics library lacks acceleration on Windows, and Mozilla has started a project called Azure that apparently has that but doesn't appear to be designed for use outside of their projects.
I just found this: 2D Rendering in DirectX 8.
It appears that since Microsoft removed DirectDraw after DirectX 7, they expected all 2D drawing to be done using the 3D API. This would explain why I totally failed to find the documentation I was looking for.
The article looks promising so far.
Here's another: 2D Programming in a 3D World
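For a concrete flavor of "2D through the 3D API" in Direct3D 9, the usual shortcut is D3DX's ID3DXSprite. A minimal, hedged sketch (pDevice and pTexture are assumed to already exist):

ID3DXSprite* pSprite = nullptr;
D3DXCreateSprite(pDevice, &pSprite);

pDevice->BeginScene();
pSprite->Begin(D3DXSPRITE_ALPHABLEND);  // alpha-blended compositing
D3DXVECTOR3 pos(10.0f, 10.0f, 0.0f);
pSprite->Draw(pTexture, nullptr, nullptr, &pos, 0xFFFFFFFF);
// Scaling is done by setting a transform, e.g. via D3DXMatrixTransformation2D
// followed by pSprite->SetTransform(&matrix), before the Draw call.
pSprite->End();
pDevice->EndScene();
pDevice->Present(nullptr, nullptr, nullptr, nullptr);

pSprite->Release();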
I've inherited an application that uses D3D9 to display graphics full screen on monitor #2. The application works properly on a desktop machine with a GeForce 9500 GT. When I attempt to run the application on a laptop equipped with onboard Intel HD Graphics, not all of the graphics are displayed: one of the vertex buffers is drawn, but the rest are black.
I'm not very familiar with D3D, so I'm not sure where to begin debugging this problem. I've been doing some searching but haven't been able to turn anything up.
Update:
Drawing simple vertex buffers with only 2 triangles works, but anything more complex doesn't.
My gut feeling is that it is likely related to the shader models supported by the given GPU.
Generally it is good practice to query the gfx card to see what it can support.
There is also a chance it could be specific D3D API functionality. You see this more often when switching between, say, GeForce and ATI (AMD) cards, but it is of course also possible with Intel, which is its own vendor. Either way, I would start by querying the supported shaders.
For D3D9 you use IDirect3D9::GetDeviceCaps to query the graphics device, for example:
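A hedged sketch of such a check, here against shader model 3.0 as an assumed requirement:

D3DCAPS9 caps;
if (SUCCEEDED(pD3D9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
    if (caps.VertexShaderVersion < D3DVS_VERSION(3, 0) ||
        caps.PixelShaderVersion  < D3DPS_VERSION(3, 0)) {
        // The GPU cannot run SM3 shaders; fall back to simpler shaders
        // or report the mismatch.
    }
}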
links:
Post here: https://gamedev.stackexchange.com/questions/22705/how-can-i-check-for-shader-model-3-support
http://msdn.microsoft.com/en-us/library/bb509626%28VS.85%29.aspx
DirectX also offers functionality to create a device for a given feature level:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876%28v=vs.85%29.aspx
Solution #1:
Check every error code for every D3D9 call. Use DXGetErrorString9 and DXGetErrorDescription9 to get a human-readable translation of the error code; see the DirectX documentation for more info. When you finally encounter a call that returns something other than D3D_OK, consult the DirectX documentation for that call.
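A hedged sketch of the pattern (dxerr9.h / dxerr9.lib ship with the DirectX SDK; the draw call here is just an arbitrary example):

#include <dxerr9.h>  // link with dxerr9.lib

HRESULT hr = pDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                           numVertices, 0, numTriangles);
if (FAILED(hr)) {
    OutputDebugString(DXGetErrorString9(hr));       // e.g. "D3DERR_INVALIDCALL"
    OutputDebugString(DXGetErrorDescription9(hr));  // longer explanation
}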
Solution #2:
Install the debug DirectX runtime (included with the DirectX SDK) and examine the debug messages the process outputs while it runs (the messages are printed using OutputDebugString, so you'll only see them in a debugger/IDE). With high debug settings, you'll see every single problem in your app.
I'm completely new to OpenCL and GPU programming in general. Right now I am working on a project where I'm trying to measure the performance gains that using the GPU brings to a game. With this, however, I have run into a snag: how do I set up my DirectX project to talk to the OpenCL code base?
I've been googling this for about a week and haven't been able to find anything. If someone could point me in the right direction, I would be grateful.
OpenCL does not have anything to do with DirectX; it's simply another library.
For OpenCL you'll need an implementation ('SDK'), as Khronos doesn't provide one (it only provides the specifications).
Intel, AMD and Nvidia all provide one, but they have different requirements and limitations. See here for some of the existing implementations.
After installing one of these, you'll have the necessary headers and libraries to code against the OpenCL API and link with OpenCL.dll.
There are lots of sample sources in the SDKs or online; you have to write the kernel yourself, but the rest is mostly boilerplate code for initialization and kernel compilation, as in the sketch below.
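A hedged sketch of that host-side boilerplate (error checks omitted; the kernel is a trivial placeholder):

#include <CL/cl.h>

cl_platform_id platform;
cl_device_id device;
clGetPlatformIDs(1, &platform, NULL);
clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

// The kernel itself is the part you write; everything else is setup.
const char* src =
    "__kernel void add1(__global float* a) { a[get_global_id(0)] += 1.0f; }";
cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
cl_kernel kernel = clCreateKernel(prog, "add1", NULL);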
The specific OpenCL extension that allows sharing of OpenCL buffers as textures and vice versa is cl_khr_d3d10_sharing: http://www.khronos.org/registry/cl/extensions/khr/cl_khr_d3d10_sharing.txt
OpenCL has extensions for sharing memory between DirectX and OpenCL (and also between OpenGL and OpenCL). These allow you to read or write DirectX buffers, including textures, from within OpenCL. Ani's answer mentions the extension for DirectX 10, but since the question is about DirectX 9, the extension you'll actually be using is cl_khr_dx9_media_sharing.
This extension has just 4 functions:
clGetDeviceIDsFromDX9MediaAdapterKHR
This function allows you to get the OpenCL device IDs of the OpenCL device(s) that can share memory with a given Direct3D 9 device.
clCreateFromDX9MediaSurfaceKHR
This function gets an OpenCL cl_mem memory object for a given Direct3D 9 memory object.
clEnqueueAcquireDX9MediaSurfacesKHR
This function locks the specified shared memory object so that you can read and/or write to it from OpenCL.
clEnqueueReleaseDX9MediaSurfacesKHR
This function unlocks the specified memory object from OpenCL, so that Direct3D can read/write it again.
Once you've used the above functions to share and synchronize access to the memory buffers, everything else on both the Direct3D 9 side and the OpenCL side works as it would otherwise with those particular APIs.
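A hedged sketch of the acquire/use/release pattern (sharedSurface is a cl_mem assumed to have been created with clCreateFromDX9MediaSurfaceKHR, and the KHR entry points must first be fetched at runtime, e.g. with clGetExtensionFunctionAddressForPlatform, since they are not exported directly):

// Lock the shared surface for OpenCL access.
clEnqueueAcquireDX9MediaSurfacesKHR(queue, 1, &sharedSurface, 0, NULL, NULL);

// Run a kernel that reads/writes the surface.
clSetKernelArg(kernel, 0, sizeof(cl_mem), &sharedSurface);
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, globalSize, NULL, 0, NULL, NULL);

// Hand the surface back to Direct3D 9 and wait for OpenCL to finish.
clEnqueueReleaseDX9MediaSurfacesKHR(queue, 1, &sharedSurface, 0, NULL, NULL);
clFinish(queue);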
Note that your GPU will need to support the cl_khr_dx9_media_sharing extension in order for this to work. You can check the extensions property of the OpenCL platform and device in order to confirm that this extension is supported.
Some NVidia GPUs support a different extension instead, called cl_nv_d3d9_sharing. The basic idea of how it works is the same as with the cl_khr_dx9_media_sharing extension, but the exact details are a bit different. The biggest difference is just that it has different functions for getting cl_mem objects for different types of Direct3D 9 buffers, rather than just one function to cover all of them.