DeleteVertexShader dx8.1 to dx9 conversion - directx

I am currently trying to convert a game to use DX9 instead of DX8. I would say that I'm quite close to completing it, but I have a few errors that I don't exactly know how to deal with at the moment.
DeleteVertexShader and DeletePixelShader no longer exist in DirectX 9. What do I do with those? I could not find any equivalent to them in DX9 so far.
Old code example:
hr = _pGfx->gl_pd3dDevice->DeletePixelShader(ulHandle);
D3D_CHECKERROR(hr);
The render state D3DRS_PATCHSEGMENTS does not exist anymore; it was used to set the number of segments per edge when drawing patches. Do I need to replace it with something? I could not find any equivalent for this either.
Code example:
HRESULT hr = _pGfx->gl_pd3dDevice->SetRenderState( D3DRS_PATCHSEGMENTS, *((DWORD*)&fSegments));
These two issues are the ones I'm struggling with most at the moment, so any help would be appreciated.
Thanks in advance!

In Direct3D 9, creating a vertex or pixel shader returns a COM interface to the shader object. The shader is therefore deleted whenever its IUnknown reference count drops to 0, so you call Release() instead of the old delete functions. See Microsoft Docs: Programming DirectX with COM.
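For example, where the DX8 code deleted a shader by handle, in DX9 you just release the interface you got back from CreatePixelShader. A minimal sketch (pdwFunction and the pointer name are placeholders; D3D_CHECKERROR is your existing macro):
IDirect3DPixelShader9 *pPixelShader = NULL;
hr = _pGfx->gl_pd3dDevice->CreatePixelShader(pdwFunction, &pPixelShader);
D3D_CHECKERROR(hr);
// ... SetPixelShader(pPixelShader), draw, etc. ...
// Replacement for DeletePixelShader(ulHandle): drop the COM reference.
// The runtime destroys the shader once its reference count reaches zero.
if (pPixelShader != NULL) {
    pPixelShader->Release();
    pPixelShader = NULL;
}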
The 'n-patch' and 'rect/tri-patch' features were never widely supported or used. Direct3D 9 does still support these legacy features (see Using Higher-Order Primitives (Direct3D 9)), but only if the hardware reports support via D3DDEVCAPS_NPATCHES / D3DDEVCAPS_RTPATCHES.
You can also take a look at some of the n-patch support in legacy D3DX9, but you probably just need to rewrite this code for modern cards.
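If you do keep the patch path for now, D3D9 moved the N-patch segment count from a render state to IDirect3DDevice9::SetNPatchMode (see the conversion doc linked below). A hedged sketch, reusing the fSegments value from your example and guarding on the caps bit:
D3DCAPS9 caps;
HRESULT hr = _pGfx->gl_pd3dDevice->GetDeviceCaps(&caps);
D3D_CHECKERROR(hr);
if (caps.DevCaps & D3DDEVCAPS_NPATCHES) {
    // DX8: SetRenderState(D3DRS_PATCHSEGMENTS, *((DWORD*)&fSegments))
    // DX9: pass the segment count directly as a float
    hr = _pGfx->gl_pd3dDevice->SetNPatchMode(fSegments);
    D3D_CHECKERROR(hr);
}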
See Microsoft Docs: Converting to Direct3D 9.
Be sure to read this blog post as well.

Related

SharpDX numbered classes, where do I find their respective documentation/responsibilities?

I'm having a hard time every time I look at SharpDX code and try to follow the DirectX documentation. Is there a place where what each of the numbered classes maps to, and why they exist, is clearly laid out?
I'm talking about things like :
DXGI.Device
DXGI.Device1
DXGI.Device2
DXGI.Device3
DXGI.Device4
SharpDX.Direct3D11.Device
SharpDX.Direct3D11.Device1
SharpDX.Direct3D11.Device11On12
SharpDX.Direct3D11.Device2
SharpDX.Direct3D11.Device3
SharpDX.Direct3D11.Device4
SharpDX.Direct3D11.Device5
SharpDX.Direct3D11.DeviceContext
SharpDX.Direct3D11.DeviceContext1
SharpDX.Direct3D11.DeviceContext2
SharpDX.Direct3D11.DeviceContext3
SharpDX.Direct3D11.DeviceContext4
Every time I start from code I find, it seems to be picked by black magic and I have no idea where to go from there. For example I'm using this (from code I found) and I have no idea why it's Device3 and Factory3 going with SwapChain1, on which we QueryInterface SwapChain2:
using (DXGI.Device3 dxgiDevice3 = this.device.QueryInterface<DXGI.Device3>())
using (DXGI.Factory3 dxgiFactory3 = dxgiDevice3.Adapter.GetParent<DXGI.Factory3>())
{
    DXGI.SwapChain1 swapChain1 = new DXGI.SwapChain1(dxgiFactory3, this.device, ref swapChainDescription);
    this.swapChain = swapChain1.QueryInterface<DXGI.SwapChain2>();
}
If a full explanation is too large for the scope of an answer here, any link that gets me started on figuring out which C++ DX interface maps to which numbered object, and why, is most welcome.
In case this matters, I'm only interested in DX >= 11, and I'm using SharpDX within a UWP project.
SharpDx is a pretty thin wrapper around DirectX, and pretty much everything in DirectX is expressed in SharpDx as a pass-through with some naming and calling conventions to accommodate the .net world.
Real documentation on SharpDx is essentially nonexistent, so you will have to do what everybody else does. If you are starting with something in SharpDx then look directly at the SharpDx API listings and the header files to understand what underlying DirectX functions are being expressed. Once you have the name of the DirectX function, you can read the MSDN documentation to understand how that function works. If you are starting with something in DirectX, then look first at MSDN to understand how it works and how it's named, and then go to the SharpDx API and header files to find out how that function is wrapped (named and exposed) in SharpDx.
For the specific question you ask, SharpDx device numbering identifies the Direct3D version that is being wrapped.
Direct3D 11.1 device ==> ID3D11Device1 ==> SharpDX.Direct3D11.Device1
Direct3D 11.2 device ==> ID3D11Device2 ==> SharpDX.Direct3D11.Device2
Direct3D 11.3 device ==> ID3D11Device3 ==> SharpDX.Direct3D11.Device3
and so on.
Naturally each version has a slightly different ("improved") interface. Lower version numbers will work pretty much anywhere, and higher version numbers include additional functionality that may require something specific from your video card and/or your operating system. You can read about the API for each version in sections found here.
For example, the description of the new methods added to the ID3D11Device5 interface (i.e., what's new since ID3D11Device4) is here. In this case, Device5 adds the ability to create a fence object and to open a handle for a shared fence.
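As an illustration, the underlying C++ call that SharpDX.Direct3D11.Device5 wraps looks roughly like this (a sketch only; pDevice5 is assumed to be an ID3D11Device5* you already obtained, and error handling is omitted):
#include <d3d11_4.h>  // declares ID3D11Device5 / ID3D11Fence
ID3D11Fence *pFence = NULL;
// Fence creation is new in ID3D11Device5; earlier device versions have no equivalent.
HRESULT hr = pDevice5->CreateFence(0, D3D11_FENCE_FLAG_NONE,
                                   __uuidof(ID3D11Fence),
                                   reinterpret_cast<void**>(&pFence));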
When example code uses a specific device number, it's usually because the code requires some functionality that wasn't there in a previous version of Direct3D. In general you want to use the lowest numbered device (and factory, etc.) possible, because that will permit your code to run on the widest variety of machines and video cards.
If you find example code that creates a SharpDX.Direct3D11.Device1 but doesn't appear to use any methods beyond those in SharpDX.Direct3D11.Device, it's probably for one of two reasons. First, the author may know that a later example will require a method or field that doesn't exist before Direct3D 11.1. Second, the author may know that every video card and operating system capable of running the example at all will be capable of running Direct3D 11.1.
For a person just starting out, I would suggest you just stick with Direct3D (and Direct2D) version 11.1, thus DXGI.Device1, SharpDX.Direct3D11.Device1 and SharpDX.Direct3D11.DeviceContext1. These are likely to run on any machine you'll encounter. Only increase the version number if you actually need some functionality that doesn't appear in that version.
One additional hint: if you read a thread about some Direct3D or Direct2D functionality and you can't seem to find it anywhere in SharpDx, look at the Direct3D API to see what version number first contains that functionality. Then go through the SharpDx API (or better yet the header files) for that version until you see a similarly named element. It may be wrapped in an unexpected way, but AFAIK it's all exposed, even when you have a hard time finding it.
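To make the mapping concrete, here is roughly what the SharpDX snippet from the question corresponds to in C++ (a sketch only; pD3DDevice is assumed to be an ID3D11Device*, error handling and the actual swap-chain creation are omitted):
#include <d3d11.h>
#include <dxgi1_3.h>  // IDXGIDevice3 / IDXGIFactory3 (DXGI 1.3)
// this.device.QueryInterface<DXGI.Device3>()
IDXGIDevice3 *pDxgiDevice3 = NULL;
pD3DDevice->QueryInterface(__uuidof(IDXGIDevice3), reinterpret_cast<void**>(&pDxgiDevice3));
// dxgiDevice3.Adapter.GetParent<DXGI.Factory3>()
IDXGIAdapter *pAdapter = NULL;
IDXGIFactory3 *pDxgiFactory3 = NULL;
pDxgiDevice3->GetAdapter(&pAdapter);
pAdapter->GetParent(__uuidof(IDXGIFactory3), reinterpret_cast<void**>(&pDxgiFactory3));
// new DXGI.SwapChain1(...) maps to one of the factory's CreateSwapChainFor* methods,
// and swapChain1.QueryInterface<DXGI.SwapChain2>() is just another QueryInterface call.
pDxgiFactory3->Release();
pAdapter->Release();
pDxgiDevice3->Release();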
Here you can find documentation about all SharpDx objects; specifically for DXGI you can look here, where you can see Device mapped to IDXGIDevice.
Note that the words IDXGIDevice are a hyperlink that references the documentation for the C++ object, and the same goes for Device1, Device2, etc.
You can see that there is a very simple logic here: SharpDx splits the name of the C++ object into a namespace and a class name.
For example, instead of IDXGIDevice you get namespace DXGI and class name Device.
In the documentation for each C++ object you can find a Requirements section, which details the operating systems on which you can use the object.
The higher the number, the newer the operating system the object requires.
For example, IDXGIDevice1 works under Windows 7, whereas IDXGIDevice3 requires Windows 8.1 or higher.

WebGL Compute Shader and VBO/UBO's

AFAIK the compute shader model is very limited in WebGL, and the documentation on it is even more limited. I have a hard time finding any answers to my questions.
Is there a way to execute a compute shader on one or more VBOs/UBOs and alter their values?
Update: On April 9, 2019, the Khronos group released a draft standard for compute shaders in WebGL 2.
Original answer:
In this press release, the Khronos group stated that they are working on an extension to WebGL 2 to allow for compute shaders:
What’s next? An extension to WebGL 2.0 providing compute shader support is under development, which will bring many leading-edge graphics algorithms to the web. Khronos is also beginning work on the next generation of WebGL, to bring the enhanced performance of the new generation of explicit 3D APIs to the web. Stay tuned for more news!
Your best bet is to wait about a year or two for it to happen on a limited number of GPU + browser combinations.
2022 UPDATE
It has been declared here (in red) that the WebGL 2.0 Compute specification has instead been moved into the new WebGPU spec and is deprecated for WebGL 2.0.
WebGPU has nowhere near global coverage across browsers yet, whereas WebGL 2.0 reached global coverage as of Feb 2022. WebGL 2.0 Compute is implemented only in Google Chrome (Windows, Linux) and Microsoft Edge Insider Channels and will not be implemented elsewhere.
This is obviously a severe limitation for those wanting compute capability on the web. But it is still possible to do informal compute using other methods, such as using regular graphics shaders + the expanded input and output buffer functionalities supplied by WebGL 2.0.
I would recommend Amanda Ghassaei's gpu-io for this. It does all the work for you in wrapping regular GL calls to give compute capability that "just works" (in either WebGL or WebGL 2.0).

How do I detect the DirectX shader model above v3 supported by a graphics card?

I am writing a small utility that reports system capabilities. One is the highest shader model supported by the installed graphics card, and I am currently detecting this using Direct3D 9.0c's device capabilities and checking the VertexShaderVersion and PixelShaderVersion fields of the D3DCAPS9 structure.
HRESULT hrDCaps = poD3D9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &oCaps);
if (!FAILED(hrDCaps)) {
    // Pixel and vertex shader model versions. Use the minimum number of each for "the" shader model version
    const int iVertexShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.VertexShaderVersion);
    const int iPixelShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.PixelShaderVersion);
}
However, both these values return shader model 3 even for cards that support higher models; GPU-Z, for example, reports a higher shader model for the same card.
This question indicates that DX9 will never report more than SM3 even on cards that support a higher model, but doesn't actually mention how to solve it.
How do I accurately get the shader model supported by the installed card? That is, the card capabilities, not the installed DirectX driver capabilities.
The utility has to run on Windows 2000 and above, and work on systems where a graphics card and even DirectX are not installed. I am currently dynamically loading DX9, so on those systems the check gracefully fails (which is ok.) But I am seeking a similar solution: something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Edit - purpose: I am not using this code to dynamically change features of a program, i.e. select shaders. I am using it to report hardware capabilities as a 'ping' to a server, so that we have a good idea of the typical hardware our customers use, which can inform future product decisions. (For example: how many customers have SM4 or above? How many are using a 64-bit OS? Etc.) This is why either (a) gracefully failing, so we know it failed, or (b) getting an accurate shader model number are the two preferred modes.
Edit - answers so far: The answer below by SigTerm suggests instantiating DirectX 11, 10.1, 10, and 9.0c in order, and basing the reported shader model on which version instantiated without failures (shader model 5, 4.1, 4, and DXCAPS in that order.) If possible, I'd appreciate a code example of the DX11 and 10 ways to do this.
This may not be a reliable solution. For example, I am running Windows on a VMWare Fusion virtual machine on OSX. The Fusion drivers report DX11 in DxDiag, yet I know from the Fusion tech specs that it only supports DX9.0c and shader model 3. Still, with this exception, this method seems the best way so far.
Shader model 4 is only supported through Direct3D 10, therefore the D3D9 API won't report it. Use the D3D10/D3D11 API to detect the higher versions.
something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Attempt to initialize D3D10/D3D11 to check functionality; if that fails, initialize D3D9. Use LoadLibrary + GetProcAddress to load the D3D10/D3D11 entry points, because if you link against them with a .lib file, your application will fail to start when d3d10/d3d11 is missing (see the sketch after the alternatives below).
Or use OpenGL and try to map capabilities reported by OpenGL to D3D capabilities (probably a very bad idea).
Or build GPU database and use that.
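A hedged sketch of the D3D11 detection path (names are illustrative; requires <windows.h> and <d3d11.h> to compile, but nothing is linked statically; the D3D10 path is analogous with d3d10.dll / D3D10CreateDevice):
typedef HRESULT (WINAPI *PFN_D3D11_CREATE)(IDXGIAdapter*, D3D_DRIVER_TYPE, HMODULE,
    UINT, const D3D_FEATURE_LEVEL*, UINT, UINT,
    ID3D11Device**, D3D_FEATURE_LEVEL*, ID3D11DeviceContext**);

int DetectShaderModelMajor()
{
    HMODULE hD3D11 = LoadLibraryW(L"d3d11.dll");
    if (hD3D11 == NULL)
        return 3;  // no D3D11 runtime; fall back to the D3DCAPS9 check
    int iModel = 3;
    PFN_D3D11_CREATE pfnCreate = (PFN_D3D11_CREATE)GetProcAddress(hD3D11, "D3D11CreateDevice");
    if (pfnCreate != NULL) {
        const D3D_FEATURE_LEVEL aLevels[] = {
            D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0 };
        D3D_FEATURE_LEVEL eGot = D3D_FEATURE_LEVEL_9_1;
        ID3D11Device *pDevice = NULL;
        HRESULT hr = pfnCreate(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                               aLevels, ARRAYSIZE(aLevels), D3D11_SDK_VERSION,
                               &pDevice, &eGot, NULL);
        if (SUCCEEDED(hr)) {
            if      (eGot >= D3D_FEATURE_LEVEL_11_0) iModel = 5;  // SM 5.0
            else if (eGot >= D3D_FEATURE_LEVEL_10_0) iModel = 4;  // SM 4.0 / 4.1
            if (pDevice != NULL) pDevice->Release();
        }
    }
    FreeLibrary(hD3D11);
    return iModel;
}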
where a graphics card and even DirectX are not installed.
I think you're asking for the impossible, because shaders are provided by DirectX, and the driver/GPU might not even have a concept of a "shader model" under the hood. In that case the only way to detect capabilities would be to build a GPU database of some sort, detect the installed devices, and return the answer from the database. This won't be reliable, of course.
Here is a link about DirectX versions and supported shader models.

Is Post-pixel-shader-blending always guaranteed in DirectX 10 and 11?

Under DirectX 9, it was still necessary to query the device for the capability bit "post pixel shader blending" on a per-texture-format basis.
This capability bit doesn't exist any more, but DirectX 11 has the whole new Output-Merger (OM) stage, which basically does what PPSB says on the tin.
I can't find anywhere that it says that DX10 and DX11 guarantee that they offer this capability, so can I always rely on it?
AFAIK yes, it's always available.
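If you want to double-check a particular render-target format at runtime, the per-format query in D3D10/D3D11 is CheckFormatSupport with the BLENDABLE flag. A small sketch (pDevice is assumed to be an ID3D11Device*):
UINT uiSupport = 0;
HRESULT hr = pDevice->CheckFormatSupport(DXGI_FORMAT_R16G16B16A16_FLOAT, &uiSupport);
// D3D11_FORMAT_SUPPORT_BLENDABLE is the counterpart of D3D9's
// post-pixel-shader-blending query for a given format.
const bool bBlendable = SUCCEEDED(hr) && (uiSupport & D3D11_FORMAT_SUPPORT_BLENDABLE) != 0;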

Format of compiled directx9 shader files?

Is the format of compiled pixel and vertex shader object files as produced by fxc.exe documented anywhere either officially or unofficially?
I'd like to be able to read the constant-name-to-register assignments from the shader files. I know that the effects framework in D3DX can do this, but I need to avoid using D3DX as it may not be installed on users' machines, and since I don't need it for anything else I want to avoid making users run the DirectX update.
If the effects framework can do it, then so can I, provided I can find out the file format, but I can't seem to find it documented anywhere.
(this is for use in directx9)
From MSDN:
Asm Shader Reference (Windows)
Shader Binary Format
The bitwise layout of the shader instruction stream is defined in D3d9types.h. If you want to design your own shader compiler or construction tools and you want more information about the shader token stream, refer to the Direct3D 9 Driver Development Kit (DDK).
So you can either look through 'D3d9types.h' and try to figure it out that way (I had a quick look and could see the enums/types you should need, but not how it's structured), or download the DDK and read the official documentation.
Some more info can be found here: Direct3D Shader Codes (expand the tree on the left hand side of the screen to get all the info).
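If all you actually need is the constant-name-to-register mapping, note that fxc embeds the constant table as a 'CTAB' comment near the start of the byte code (this is what D3DX reads). Below is a rough, hedged sketch of locating it, based on the comment-token layout described in those docs; the table it points at mirrors D3DX's D3DXSHADER_CONSTANTTABLE / D3DXSHADER_CONSTANTINFO structures, which you would have to declare yourself:
// Uses DWORD from <windows.h> and the MAKEFOURCC macro as defined in d3d9types.h.
const DWORD* FindConstantTable(const DWORD *pdwByteCode)
{
    const DWORD *pdwToken = pdwByteCode + 1;                // skip the version token
    while (*pdwToken != 0x0000FFFF) {                       // end token
        // Comment tokens: opcode 0xFFFE in the low word, bit 31 clear,
        // comment length in DWORDs in bits 16-30.
        if ((*pdwToken & 0x8000FFFF) == 0x0000FFFE) {
            const DWORD dwLength = (*pdwToken >> 16) & 0x7FFF;
            if (pdwToken[1] == MAKEFOURCC('C', 'T', 'A', 'B'))
                return pdwToken + 2;                        // start of the constant table
            pdwToken += dwLength + 1;
        } else {
            ++pdwToken;  // naive scan; the CTAB comment normally follows the version token directly
        }
    }
    return NULL;
}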
Microsoft deliberately keeps this information away from you. As you are using DirectX 9 it is relatively easy to reverse engineer the format, though. If you write a simple piece of shader assembly you can check what the compiled code that comes out the other side looks like. By making modifications to the assembly you can see how the byte code changes. You will start to see patterns in how registers are handled and where the instruction is encoded. You can thus, slowly but surely, work out the byte code. It won't be too quick though!
Microsoft has put the format specification online here: Direct3D Shader Codes.
It refers to constants by name, however (e.g. D3DSIO_DCL), so you'll likely still need the Windows DDK to get any use out of it.
