Is the format of compiled pixel and vertex shader object files produced by fxc.exe documented anywhere, either officially or unofficially?
I'd like to be able to read the constant-name-to-register assignments from the shader files. I know that the effects framework in D3DX can do this, but I need to avoid using D3DX, as it may not be installed on users' machines, and since I don't need it for anything else I want to spare them having to run the DirectX update.
If the effects framework can do it, then so can I, provided I can find out the file format, but I can't seem to find it documented anywhere.
(This is for use in DirectX 9.)
From MSDN:
Asm Shader Reference (Windows)
Shader Binary Format
The bitwise layout of the shader instruction stream is defined in D3d9types.h. If you want to design your own shader compiler or construction tools and you want more information about the shader token stream, refer to the Direct3D 9 Driver Development Kit (DDK).
So you can either look through D3d9types.h and try to figure it out that way (I had a quick look and could see the enums/types you should need, but not how it's structured), or download the DDK and read the official documentation.
Some more info can be found here: Direct3D Shader Codes (expand the tree on the left hand side of the screen to get all the info).
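If you only need the constant-name-to-register mapping, you may not have to decode the instructions at all: fxc stores the constant table in a comment block tagged 'CTAB' near the start of the token stream. Below is a minimal sketch (C++). The token layout comes from D3d9types.h; the CTAB record layout mirrors the D3DXSHADER_CONSTANTTABLE / D3DXSHADER_CONSTANTINFO structs declared in d3dx9shader.h, so only that header is needed for reference, not the D3DX DLL at runtime. Treat it as a starting point, not a verified parser.

#include <cstdint>
#include <cstdio>

struct CtabHeader {        // same layout as D3DXSHADER_CONSTANTTABLE
    uint32_t Size, Creator, Version, Constants, ConstantInfo, Flags, Target;
};
struct CtabConstant {      // same layout as D3DXSHADER_CONSTANTINFO
    uint32_t Name;         // byte offset of the zero-terminated name
    uint16_t RegisterSet;  // bool / int4 / float4 / sampler
    uint16_t RegisterIndex, RegisterCount, Reserved;
    uint32_t TypeInfo, DefaultValue;
};

void DumpConstants(const uint32_t* tokens, size_t count)
{
    // tokens[0] is the version token (e.g. 0xFFFE0101 for vs_1_1).
    // fxc emits CTAB in the leading comment blocks, so only those are scanned.
    for (size_t i = 1; i + 1 < count && (tokens[i] & 0xFFFF) == 0xFFFE; ) {
        uint32_t len = (tokens[i] >> 16) & 0x7FFF;    // comment size in DWORDs
        if (len > 1 && tokens[i + 1] == 0x42415443) { // 'CTAB', little-endian
            // All offsets below are relative to the byte after the CTAB tag.
            const char* base = reinterpret_cast<const char*>(&tokens[i + 2]);
            const CtabHeader* hdr = reinterpret_cast<const CtabHeader*>(base);
            const CtabConstant* ci =
                reinterpret_cast<const CtabConstant*>(base + hdr->ConstantInfo);
            for (uint32_t n = 0; n < hdr->Constants; ++n)
                std::printf("%s -> register %u (count %u)\n", base + ci[n].Name,
                            ci[n].RegisterIndex, ci[n].RegisterCount);
            return;
        }
        i += 1 + len;   // skip this comment block
    }
}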
Microsoft deliberately keeps this information away from you. As you are using DirectX 9, it's relatively easy to reverse engineer the format, though. If you write a simple piece of shader assembly, you can check what the compiled code that comes out the other side looks like. By making modifications to the assembly you can see how the byte code changes. You will start to see patterns in how registers are handled and where the instruction is encoded. You can thus, slowly but surely, work out the byte code. It won't be quick, though!
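For a flavour of what you will see, here is one worked example (derived by hand from the enums in D3d9types.h; verify against your own fxc output before relying on it): the vs_1_1 assembly "mov r0, c0" compiles to five DWORDs.

const uint32_t kMovR0C0[] = {
    0xFFFE0101,  // version token: vertex shader 1.1
    0x00000001,  // instruction token: D3DSIO_MOV
    0x800F0000,  // destination parameter: r0, write mask .xyzw
    0xA0E40000,  // source parameter: c0 (D3DSPR_CONST), swizzle .xyzw
    0x0000FFFF,  // end token
};

Change the source register to c1 and only the fourth DWORD changes; change the write mask and only the third does. That is the pattern-spotting described above.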
Microsoft has put the format specification online here: Direct3D Shader Codes.
It refers to constants by name, however (e.g. D3DSIO_DCL), so you'll likely still need the Windows DDK to get any use out of it.
I am currently trying to convert a game to use DX9 instead of DX8. I would say that I'm quite close to completing it, but I have a few errors that I don't exactly know how to deal with at the moment.
DeleteVertexShader and DeletePixelShader no longer exist in DirectX 9. What do I do with those? I could not find any equivalent to them in DX9 so far.
Old code example:
D3D_CHECKERROR(hr);
hr = _pGfx->gl_pd3dDevice->DeletePixelShader(ulHandle);
The render state D3DRS_PATCHSEGMENTS does not exist anymore; it was used for the number of segments per edge when drawing patches. Do I need to replace it with something? I could not find any equivalent for this either.
Code example:
HRESULT hr = _pGfx->gl_pd3dDevice->SetRenderState( D3DRS_PATCHSEGMENTS, *((DWORD*)&fSegments));
These two issues are the ones I'm struggling with the most at the moment, so any help would be appreciated.
Thanks in advance!
In Direct3D 9, CreateVertexShader and CreatePixelShader return COM interfaces to the shader object (IDirect3DVertexShader9 and IDirect3DPixelShader9). The shader is therefore deleted whenever its IUnknown reference count reaches 0. See Microsoft Docs: Programming DirectX with COM.
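A minimal sketch of the port (assuming the DX8 handle becomes an IDirect3DPixelShader9* obtained from CreatePixelShader; the helper name is made up):

#include <d3d9.h>

// DX8:  hr = _pGfx->gl_pd3dDevice->DeletePixelShader(ulHandle);
// DX9:  the shader is a COM object, so drop your reference instead.
void ReleasePixelShader(IDirect3DPixelShader9*& pPixelShader)
{
    if (pPixelShader != NULL) {
        pPixelShader->Release();   // freed once its refcount reaches 0
        pPixelShader = NULL;
    }
}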
The 'n-patch' and 'rect/tri-patch' features were never widely supported or used. Direct3D 9 does still support these legacy features (see Using Higher-Order Primitives (Direct3D 9)), but only if the hardware reports support via D3DDEVCAPS_NPATCHES / D3DDEVCAPS_RTPATCHES.
You can also take a look at some of the n-patch support in legacy D3DX9, but you probably just need to rewrite this code for modern cards.
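If you do keep the legacy path, here is a hedged sketch of the DX9 equivalent of the old render state (SetNPatchMode is the DX9 replacement; the _pGfx and fSegments identifiers are reused from the question):

#include <d3d9.h>

// Only enable n-patch tessellation when the driver actually reports it.
D3DCAPS9 caps;
_pGfx->gl_pd3dDevice->GetDeviceCaps(&caps);
if (caps.DevCaps & D3DDEVCAPS_NPATCHES) {
    // DX9 replaces the DX8 D3DRS_PATCHSEGMENTS render state with a
    // dedicated call that takes the segment count directly as a float.
    HRESULT hr = _pGfx->gl_pd3dDevice->SetNPatchMode(fSegments);
    D3D_CHECKERROR(hr);
}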
See Microsoft Docs: Converting to Direct3D 9.
Be sure to read this blog post as well.
I have a hard time every time I look at SharpDX code and try to follow the DirectX documentation. Is there a place where what each of the numbered classes maps to, and why they exist, is clearly laid out?
I'm talking about things like:
DXGI.Device
DXGI.Device1
DXGI.Device2
DXGI.Device3
DXGI.Device4
SharpDX.Direct3D11.Device
SharpDX.Direct3D11.Device1
SharpDX.Direct3D11.Device11On12
SharpDX.Direct3D11.Device2
SharpDX.Direct3D11.Device3
SharpDX.Direct3D11.Device4
SharpDX.Direct3D11.Device5
SharpDX.Direct3D11.DeviceContext
SharpDX.Direct3D11.DeviceContext1
SharpDX.Direct3D11.DeviceContext2
SharpDX.Direct3D11.DeviceContext3
SharpDX.Direct3D11.DeviceContext4
Every time I start from code I've found, the versions seem to be picked by black magic, and I have no idea where to go from there. For example, I'm using this (from code I found) and I have no idea why it's Device3 and Factory3 going with SwapChain1, on which we QueryInterface SwapChain2:
using (DXGI.Device3 dxgiDevice3 = this.device.QueryInterface<DXGI.Device3>())
using (DXGI.Factory3 dxgiFactory3 = dxgiDevice3.Adapter.GetParent<DXGI.Factory3>())
{
    DXGI.SwapChain1 swapChain1 = new DXGI.SwapChain1(dxgiFactory3, this.device, ref swapChainDescription);
    this.swapChain = swapChain1.QueryInterface<DXGI.SwapChain2>();
}
If a full explanation is too large for the scope of an answer here, any link to get me started on figuring out which C++ DX interface maps to which numbered object, and why, is most welcome.
In case this matters: I'm only interested in DX >= 11, and I'm using SharpDX within a UWP project.
SharpDx is a pretty thin wrapper around DirectX, and pretty much everything in DirectX is expressed in SharpDx as a pass-through, with some naming and calling conventions adjusted to accommodate the .NET world.
Real documentation on SharpDx is essentially nonexistent, so you will have to do what everybody else does. If you are starting with something in SharpDx then look directly at the SharpDx API listings and the header files to understand what underlying DirectX functions are being expressed. Once you have the name of the DirectX function, you can read the MSDN documentation to understand how that function works. If you are starting with something in DirectX, then look first at MSDN to understand how it works and how it's named, and then go to the SharpDx API and header files to find out how that function is wrapped (named and exposed) in SharpDx.
For the specific question you ask, SharpDx device numbering identifies the Direct3D version that is being wrapped.
Direct3D 11.1 device ==> ID3D11Device1 ==> SharpDX.Direct3D11.Device1
Direct3D 11.2 device ==> ID3D11Device2 ==> SharpDX.Direct3D11.Device2
Direct3D 11.3 device ==> ID3D11Device3 ==> SharpDX.Direct3D11.Device3
and so on.
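Under the hood this numbering is plain COM interface versioning. A minimal C++ sketch of what SharpDX's device.QueryInterface<Device3>() wraps (the helper name is made up):

#include <d3d11_3.h>   // declares ID3D11Device3

// Ask an existing 11.0 device for its 11.3 interface. It is the same
// underlying object; on older runtimes/drivers this fails with E_NOINTERFACE.
HRESULT GetDevice3(ID3D11Device* device, ID3D11Device3** outDevice3)
{
    return device->QueryInterface(__uuidof(ID3D11Device3),
                                  reinterpret_cast<void**>(outDevice3));
}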
Naturally each version has a slightly different ("improved") interface. Lower version numbers will work pretty much anywhere, and higher version numbers include additional functionality that may require something specific from your video card and/or your operating system. You can read about the API for each version in sections found here.
For example, the description of the new methods added to the ID3D11Device5 interface (i.e., what's new since ID3D11Device4) is here. In this case, Device5 adds the ability to create a fence object and to open a handle for a shared fence.
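As a concrete sketch of that addition (the helper name is made up; ID3D11Device5 lives in d3d11_4.h):

#include <d3d11_4.h>   // declares ID3D11Device5 and ID3D11Fence

// Device5's headline feature: fence objects, usable for synchronization
// and shareable across devices or processes.
HRESULT CreateSharedFence(ID3D11Device5* device5, ID3D11Fence** outFence)
{
    return device5->CreateFence(0 /* initial value */,
                                D3D11_FENCE_FLAG_SHARED,
                                __uuidof(ID3D11Fence),
                                reinterpret_cast<void**>(outFence));
}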
When example code uses a specific device number, it's usually because the code requires some functionality that wasn't there in a previous version of Direct3D. In general you want to use the lowest numbered device (and factory, etc.) possible, because that will permit your code to run on the widest variety of machines and video cards.
If you find example code that creates a SharpDX.Direct3D11.Device1 but doesn't appear to use any methods beyond those in SharpDX.Direct3D11.Device, it's probably for one of two reasons. First, the author may know that a later example will require a method or field that doesn't exist before Direct3D 11.1. Second, the author may know that every video card and operating system capable of running the example at all will be capable of running Direct3D 11.1.
For a person just starting out, I would suggest you just stick with Direct3D (and Direct2D) version 11.1, thus DXGI.Device1, SharpDX.Direct3D11.Device1 and SharpDX.Direct3D11.DeviceContext1. These are likely to run on any machine you'll encounter. Only increase the version number if you actually need some functionality that doesn't appear in that version.
One additional hint: if you read a thread about some Direct3D or Direct2D functionality and you can't seem to find it anywhere in SharpDx, look at the Direct3D API to see what version number first contains that functionality. Then go through the SharpDx API (or better yet the header files) for that version until you see a similarly named element. It may be wrapped in an unexpected way, but AFAIK it's all exposed, even when you have a hard time finding it.
Here you can find documentation for all SharpDX objects; specifically for DXGI, look here. There you can see that Device maps to IDXGIDevice.
Note that the word IDXGIDevice is a hyperlink to the documentation for the corresponding C++ object, and the same goes for Device1, Device2, etc.
You can see that there is a very simple logic here: SharpDX splits the name of the C++ object into a namespace and a class name.
For example, instead of IDXGIDevice, you get namespace DXGI and class name Device.
In the documentation for each C++ object you can find a Requirements section, which details the operating systems on which the object is available.
The higher the number, the newer the operating system the object requires.
For example, IDXGIDevice1 works under Windows 7, whereas IDXGIDevice3 requires Windows 8.1 or higher.
I have not found any definitive answers to this, so I decided to ask here. Is there really no way to save and load compiled WebGL shaders? It seems a waste to compile the shaders every time someone loads the page, when all you would have to do is compile the shaders once, save them to a file, then load the compiled shader object, as you would with HLSL (I know it's not GLSL, but I'm still a little new to OpenGL).
So, if possible, how can I save and load a compiled shader in WebGL?
There really is no way, and IMHO that's a good thing. It would pose a security issue (feeding arbitrary bytecode to the GPU), and besides, when drivers are updated, precompiled shaders would potentially miss new optimizations or simply break.
when all you would have to do is compile the shaders once, save it to a file, then load the compiled shader object, as you would HLSL
OpenGL (and its derivatives) does not support loading pre-compiled shaders the same way DirectX does:
Program binary formats are not intended to be transmitted. It is not reasonable to expect different hardware vendors to accept the same binary formats. It is not reasonable to expect different hardware from the same vendor to accept the same binary formats.
https://www.opengl.org/wiki/Shader_Compilation#Binary_limitations
There is no intermediate format like SPIR-V in OpenGL, so you would need to compile the shaders on the target platform, which introduces a whole lot of additional concerns: users changing their graphics cards or employing a hybrid graphics solution, storage limitations on the client (5 MB using localStorage), and the possibility of abusing it to fingerprint the hardware.
I'm thinking about releasing a bunch of GPGPU functions as a framework using OpenGL ES 2.0 for iOS devices.
When capturing an OpenGL ES frame in Xcode, I can see the source code of the shaders being used. Is there a way to prevent this? I've tried deleting and detaching the shaders with glDeleteShader and glDetachShader after linking the OpenGL ES program, but the code is still captured.
I'm not looking for a bulletproof option (which probably doesn't exist), just something that makes getting to the code a bit more difficult than pressing a button.
Thank you.
The debugger has to capture the input to calls to glShaderSource; the actual shader source is never stored in VRAM after compilation. I cannot think of any way to overcome this problem directly. Calling glShaderSource is required because OpenGL ES on iOS does not support precompiled shader binaries.
I would recommend obfuscating the original shader code, perhaps using compile-time macros, or even a script to scramble variable names etc. (be careful with attributes and uniforms, as their names affect linkage to app code).
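As one extra (easily defeated) layer, you can also ship the scrambled source lightly encoded so it never appears as plain text in the binary. A sketch, with made-up names and a trivial XOR scheme; note that a frame capture still sees whatever string reaches glShaderSource, so this only stops people running strings on your framework:

#include <OpenGLES/ES2/gl.h>
#include <algorithm>
#include <string>

// Compile GLSL that is stored XOR-encoded in the app binary.
GLuint CompileEncodedShader(GLenum type, const unsigned char* data,
                            size_t size, unsigned char key)
{
    std::string src(reinterpret_cast<const char*>(data), size);
    for (std::string::size_type i = 0; i < src.size(); ++i)
        src[i] ^= key;                        // decode just before use

    GLuint shader = glCreateShader(type);
    const GLchar* p = src.c_str();
    glShaderSource(shader, 1, &p, NULL);      // capture tools hook this call
    glCompileShader(shader);
    std::fill(src.begin(), src.end(), '\0');  // scrub the decoded copy
    return shader;
}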
Here is a tool used for obfuscation/minimization of shader code. I believe it is built for WebGL, so it may not work perfectly: http://glslunit.appspot.com/compiler.html
I want to record the screen (by capturing 15 screenshots per second). That part I know how to do, but I don't know how to write the frames to some popular video format. The best option I have found is to write the frames to separate PNG files and use the command-line MEncoder, which can convert them to many output formats. But maybe someone has another idea?
Requirements:
Must be a multi-platform solution (I'm using Free Pascal / Lazarus): Windows, Linux, macOS
Do any libraries exist for that?
Could be a complex command-line application which records the screen for me too, but I must have the possibility to edit frames before converting the whole raw data to a popular video format
All materials which could give me some ideas are appreciated: APIs, libraries, anything, even in languages other than FPC (I would try to rewrite it or find some equivalent)
I also considered writing frames to a raw video format and then using MEncoder (it can handle that) or another solution, but I can't find any API/documentation for raw video data
Regards
Argalatyr mentioned ffmpeg already.
There are two ways that you can get that to work:
By spawning a new process. All you have to do is prepare the right input (a series of PNG or JPEG images, for example) and the right command-line parameters; after that you just call ffmpeg.exe and wait for it to finish (see the sketch after this list).
ffmpeg makes use of some DLLs that do the actual work. You can use those DLLs directly from within your Delphi application. It's a bit more work, because it's more low-level, but in the end it gives you finer control over what happens and over what you show the user while you're processing.
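For the first option, a minimal sketch (shown in C++ for brevity; the command string is the portable part, and the frame pattern, rate, and codec flags are illustrative, so adjust them to your capture):

#include <cstdlib>

int main()
{
    // -framerate 15    : input rate matching 15 screenshots per second
    // -c:v libx264     : encode to H.264
    // -pix_fmt yuv420p : pixel format that most players accept
    return std::system(
        "ffmpeg -framerate 15 -i frame_%04d.png "
        "-c:v libx264 -pix_fmt yuv420p capture.mp4");
}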
Here are some solutions to check out:
FFVCL (commercial). It actually looks quite good, but I was too greedy to spend money on it.
Open-source Delphi headers for FFmpeg. I've tried them, but I never managed to get them to work.
I ended up pulling the DLL wrappers from an open-source karaoke program (UltraStar Deluxe). I had to remove some dependencies, but in the end it worked like a charm. The relevant (Pascal) code can be found here:
http://ultrastardx.svn.sourceforge.net/viewvc/ultrastardx/trunk/src/lib/ffmpeg-0.10/
There was some earlier discussion with a Delphi component here. It's a very simple component that sometimes generates some weird movies. Maybe a start.