WebGL Compute Shader and VBO/UBOs

AFAIK, the compute shader model is very limited in WebGL, and the documentation on it is even sparser. I have a hard time finding any answers to my questions.
Is it possible to execute a compute shader on one or more VBOs/UBOs and alter their values?

Update: On April 9, 2019, the Khronos group released a draft standard for compute shaders in WebGL 2.
Original answer:
In this press release, the Khronos group stated that they are working on an extension to WebGL 2 to allow for compute shaders:
What’s next? An extension to WebGL 2.0 providing compute shader support is under development, which will bring many leading-edge graphics algorithms to the web. Khronos is also beginning work on the next generation of WebGL, to bring the enhanced performance of the new generation of explicit 3D APIs to the web. Stay tuned for more news!
Your best bet is to wait about a year or two for it to arrive on a limited number of GPU + browser combinations.

2022 UPDATE
It has been declared here (in red) that the WebGL 2.0 Compute specification has instead been moved into the new WebGPU spec and is deprecated for WebGL 2.0.
WebGPU has nowhere near global coverage across browsers yet, whereas WebGL 2.0 reached global coverage as of Feb 2022. WebGL 2.0 Compute is implemented only in Google Chrome (Windows, Linux) and Microsoft Edge Insider Channels and will not be implemented elsewhere.
This is obviously a severe limitation for those wanting compute capability on the web. But it is still possible to do informal compute by other means, such as regular graphics shaders combined with the expanded input and output buffer functionality of WebGL 2.0 (transform feedback, floating-point render targets); a sketch of that pattern follows below.
I would recommend Amanda Ghassaei's gpu-io for this. It does all the work for you, wrapping regular GL calls to give compute capability that "just works" (in either WebGL or WebGL 2.0).
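To illustrate the kind of informal compute meant above, here is a minimal sketch of the transform feedback pattern, written against the OpenGL ES 3.0 C API; WebGL 2.0 exposes the same entry points as methods on WebGL2RenderingContext, so the JavaScript version is a near-direct transliteration. The compileProgram helper is hypothetical shorthand for the usual compile/link boilerplate.

#include <GLES3/gl3.h>
#include <cstring>
#include <vector>

// Hypothetical helper: compiles both shaders, calls
// glTransformFeedbackVaryings(prog, 1, {"result"}, GL_SEPARATE_ATTRIBS)
// *before* linking, and returns the linked program.
extern GLuint compileProgram(const char* vsSrc, const char* fsSrc);

// The vertex shader is the "compute kernel": one invocation per input element.
static const char* kVS =
    "#version 300 es\n"
    "in float a;\n"
    "out float result;\n"
    "void main() { result = a * 2.0; }\n";

// The fragment shader never runs (rasterization is discarded) but must exist.
static const char* kFS =
    "#version 300 es\n"
    "precision mediump float;\n"
    "out vec4 o;\n"
    "void main() { o = vec4(0.0); }\n";

std::vector<float> doubleOnGpu(const std::vector<float>& input) {
    GLuint prog = compileProgram(kVS, kFS);
    GLuint buf[2];
    glGenBuffers(2, buf);

    // Upload the input "VBO" and point the vertex attribute at it.
    glBindBuffer(GL_ARRAY_BUFFER, buf[0]);
    glBufferData(GL_ARRAY_BUFFER, input.size() * sizeof(float), input.data(), GL_STATIC_DRAW);
    glUseProgram(prog);
    GLint loc = glGetAttribLocation(prog, "a");
    glEnableVertexAttribArray((GLuint)loc);
    glVertexAttribPointer((GLuint)loc, 1, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Allocate the output buffer that transform feedback will write into.
    glBindBuffer(GL_ARRAY_BUFFER, buf[1]);
    glBufferData(GL_ARRAY_BUFFER, input.size() * sizeof(float), nullptr, GL_STATIC_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, buf[1]);

    // "Dispatch": draw points, capture the vertex shader output, skip rasterization.
    glEnable(GL_RASTERIZER_DISCARD);
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, (GLsizei)input.size());
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, 0);

    // Read the results back (WebGL 2 uses gl.getBufferSubData here instead).
    std::vector<float> out(input.size());
    void* p = glMapBufferRange(GL_ARRAY_BUFFER, 0, out.size() * sizeof(float), GL_MAP_READ_BIT);
    if (p) { memcpy(out.data(), p, out.size() * sizeof(float)); glUnmapBuffer(GL_ARRAY_BUFFER); }
    return out;
}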

Related

DeleteVertexShader dx8.1 to dx9 conversion

I am currently trying to convert a game to use DX9 instead of DX8. I would say that I'm quite close to completing it, but I have a few errors that I don't exactly know how to deal with at the moment.
DeleteVertexShader and DeletePixelShader do not exist anymore in DirectX 9. What do I do with those? I could not find any equivalent to them in DX9 so far.
Old code example:
D3D_CHECKERROR(hr); hr = _pGfx->gl_pd3dDevice->DeletePixelShader(ulHandle);
The render state D3DRS_PATCHSEGMENTS does not exist anymore; it was used to set the number of segments per edge when drawing patches. Do I need to replace it with something? I could not find any equivalent for this either.
Code example:
HRESULT hr = _pGfx->gl_pd3dDevice->SetRenderState( D3DRS_PATCHSEGMENTS, *((DWORD*)&fSegments));
These two issues are the ones I'm struggling with most at the moment, so any help would be appreciated.
Thanks in advance!
In Direct3D 9, the vertex shader and pixel shader creation calls return COM interfaces to the shader objects (IDirect3DVertexShader9 / IDirect3DPixelShader9). A shader is therefore destroyed whenever its IUnknown reference count reaches 0, so you call Release() on the interface instead of an explicit delete function. See Microsoft Docs: Programming DirectX with COM.
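For example, a rough sketch of how the old DeletePixelShader call might translate (pCompiledShaderBytes is an assumed name for your compiled shader blob; the device pointer and macro come from your snippet):

// DX8 style: shader handles plus explicit delete calls (no longer exists in DX9).
// hr = _pGfx->gl_pd3dDevice->DeletePixelShader(ulHandle);

// DX9 style: COM interface pointers plus Release().
IDirect3DPixelShader9* pPixelShader = nullptr;
HRESULT hr = _pGfx->gl_pd3dDevice->CreatePixelShader(
    reinterpret_cast<const DWORD*>(pCompiledShaderBytes),  // compiled shader blob (assumed name)
    &pPixelShader);
D3D_CHECKERROR(hr);

hr = _pGfx->gl_pd3dDevice->SetPixelShader(pPixelShader);   // use it as before

// When the shader is no longer needed, drop the reference instead of "deleting" it;
// the object is destroyed when its reference count hits 0.
if (pPixelShader != nullptr) {
    pPixelShader->Release();
    pPixelShader = nullptr;
}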
The 'n-patch' and 'rect/tri-patch' features were never widely supported or used. Direct3D 9 does still support these legacy features (see Using Higher-Order Primitives (Direct3D 9)), but only if the hardware reports support via D3DDEVCAPS_NPATCHES / D3DDEVCAPS_RTPATCHES.
You can also take a look at some of the n-patch support in legacy D3DX9, but you probably just need to rewrite this code for modern cards.
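If you do want to keep the old patch path alive where the hardware allows it, the D3D9 replacement for the D3DRS_PATCHSEGMENTS render state is IDirect3DDevice9::SetNPatchMode, gated on the caps bit. Roughly (a sketch, not drop-in code):

D3DCAPS9 caps = {};
HRESULT hr = _pGfx->gl_pd3dDevice->GetDeviceCaps(&caps);
if (SUCCEEDED(hr) && (caps.DevCaps & D3DDEVCAPS_NPATCHES)) {
    // N-patch tessellation is available: pass the segment count directly
    // (no more reinterpreting the float as a DWORD render-state value).
    hr = _pGfx->gl_pd3dDevice->SetNPatchMode(fSegments);
} else {
    // No N-patch support: draw the plain, untessellated geometry instead.
}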
See Microsoft Docs: Converting to Direct3D 9.
Be sure to read this blog post as well.

OpenGL ES 3.1+ support on iOS through a Vulkan wrapper

Now that a Vulkan-to-Metal wrapper is officially supported by Khronos (MoltenVK), and that OpenGL-to-Vulkan wrappers have begun to appear (glo), would it be technically possible to use OpenGL ES 3.1 or even 3.2 (so even with support for OpenGL compute shaders) on modern iOS versions/hardware by chaining these two technologies? Has anybody tried this combination?
I'm not much interested in the performance drop (that would obviously be there due to the two additional layers of abstraction), but only on the enabling factor and cross-platform aspect of the solution.
In theory, yes :).
MoltenVK doesn't support every bit of Vulkan (see the Vulkan Portable Subset section), and some of those features might be required by OpenGL ES 3.1. Triangle fans are an obvious one, full texture swizzle is another. MoltenVK has focused on things that could translate directly; if the ES-on-Vulkan translator was willing to accept extra overhead, it could fake some or all of these features.
The core ANGLE team is working on both OpenGL ES 3.1 support and a Vulkan backend, according to their README and recent commits. They have a history of emulating features (like triangle fans) needed by ES that weren't available in D3D.
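For a flavour of what emulating such a feature involves (this is purely an illustration, not ANGLE's or MoltenVK's actual code), a triangle fan can be rewritten as an indexed triangle list before it ever reaches the backend:

#include <cstdint>
#include <vector>

// Expand a triangle fan over vertices [0, vertexCount) into a plain
// triangle-list index buffer: fan triangle i becomes (0, i, i + 1).
std::vector<uint32_t> fanToTriangleList(uint32_t vertexCount) {
    std::vector<uint32_t> indices;
    if (vertexCount < 3) return indices;
    indices.reserve((vertexCount - 2) * 3);
    for (uint32_t i = 1; i + 1 < vertexCount; ++i) {
        indices.push_back(0);      // the fan's centre vertex
        indices.push_back(i);
        indices.push_back(i + 1);
    }
    return indices;
}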

Is it possible to run #version 120 shaders with WebGL

I have a number of GLSL fragment shaders for which I can pretty much guarantee that they conform to #version 120. They use standard, non-ES-conformant values and they do not have any ES-specific pragmas.
I really want to make a web previewer for them using WebGL. The previewer won't be used on mobile. Is this feasible? Is the feature set exposed to GLSL shaders in WebGL restricted compared to that GLSL version? Are there precision differences?
I've already tried playing with THREE.js, but that doesn't really cut it, since it mucks up my shader code before loading it onto the GPU (which I cannot allow).
In short: is the GLSL version available in WebGL sufficient to run those shaders? Because if it isn't, what I am after is not doable and I should just drop it.
No, WebGL shaders must be #version 100 (GLSL ES 1.00). Anything else is disallowed.
If you're curious why, it's because, as much as possible, WebGL needs to run everywhere. If you could choose any version, your web page would only run on systems with GPUs/drivers that handled that version.
The next version of WebGL will raise the version number: it will allow GLSL ES 3.00 (note the ES). It's currently available behind a flag in Chrome and Firefox as of May 2016.

How do I detect the DirectX shader model above v3 supported by a graphics card?

I am writing a small utility that reports system capabilities. One is the highest shader model supported by the installed graphics card, and I am currently detecting this using Direct3D 9.0c's device capabilities and checking the VertexShaderVersion and PixelShaderVersion fields of the D3DCAPS9 structure.
D3DCAPS9 oCaps;
HRESULT hrDCaps = poD3D9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &oCaps);
if (!FAILED(hrDCaps)) {
    // Pixel and vertex shader model versions. Use the minimum of the two as "the" shader model version.
    const int iVertexShaderModel = D3DSHADER_VERSION_MAJOR(oCaps.VertexShaderVersion);
    const int iPixelShaderModel  = D3DSHADER_VERSION_MAJOR(oCaps.PixelShaderVersion);
    // ...
}
However, both these values come back as shader model 3 even for cards that support higher models; GPU-Z, for example, reports a higher shader model for the same card.
This question indicates that DX9 will never report more than SM3 even on cards that support a higher model, but doesn't actually mention how to solve it.
How do I accurately get the shader model supported by the installed card? That is, the card capabilities, not the installed DirectX driver capabilities.
The utility has to run on Windows 2000 and above, and work on systems where a graphics card and even DirectX are not installed. I am currently dynamically loading DX9, so on those systems the check gracefully fails (which is ok.) But I am seeking a similar solution: something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Edit - purpose: I am not using this code to dynamically change features of a program, i.e. select shaders. I am using it to report hardware capabilities as a 'ping' to a server, so that we have a good idea of the typical hardware our customers use, which can inform future product decisions. (For example: how many customers have SM4 or above? How many are using a 64-bit OS? Etc.) This is why either (a) gracefully failing, so we know it failed, or (b) getting an accurate shader model number are the two preferred modes.
Edit - answers so far: The answer below by SigTerm suggests instantiating DirectX 11, 10.1, 10, and 9.0c in order, and basing the reported shader model on which version instantiated without failures (shader model 5, 4.1, 4, and DXCAPS in that order.) If possible, I'd appreciate a code example of the DX11 and 10 ways to do this.
This may not be a reliable solution. For example, I am running Windows on a VMWare Fusion virtual machine on OSX. The Fusion drivers report DX11 in DxDiag, yet I know from the Fusion tech specs that it only supports DX9.0c and shader model 3. Still, with this exception, this method seems the best way so far.
Shader model 4 is only exposed through Direct3D 10, so the D3D9 API won't report it. Use the D3D10/D3D11 API to detect higher versions.
something that will still run on all systems, and work correctly (detect the SM version) on most systems.
Attempt to initialize D3D10/D3D11 to check functionality; if that fails, initialize D3D9. Use LoadLibrary + GetProcAddress to load the D3D10/D3D11 entry points, because if you link against them using the .lib file, your application will fail to start on systems where the DLL is missing. A sketch of this approach is below.
Or use OpenGL and try to map capabilities reported by OpenGL to D3D capabilities (probably a very bad idea).
Or build GPU database and use that.
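For the code example requested in the question's edit, something along these lines should work for the D3D11 path (error handling trimmed; feature levels below 10_0 are left to your existing D3D9 caps check):

#include <windows.h>
#include <d3d11.h>   // for the types and enums only; we do not link against d3d11.lib

// Map an achieved Direct3D feature level to the shader model it guarantees.
static double ShaderModelFromFeatureLevel(D3D_FEATURE_LEVEL fl) {
    switch (fl) {
        case D3D_FEATURE_LEVEL_11_0: return 5.0;
        case D3D_FEATURE_LEVEL_10_1: return 4.1;
        case D3D_FEATURE_LEVEL_10_0: return 4.0;
        default:                     return 0.0; // 9_x levels: keep using the D3D9 caps path
    }
}

// Returns 0.0 if D3D11 is unavailable or only 9-level hardware is present;
// in that case fall through to the D3D10 check and then the existing D3D9 caps code.
double DetectShaderModelViaD3D11() {
    HMODULE hD3D11 = LoadLibraryW(L"d3d11.dll");          // graceful failure on old systems
    if (!hD3D11) return 0.0;

    typedef HRESULT (WINAPI *PFN_D3D11CreateDevice)(
        IDXGIAdapter*, D3D_DRIVER_TYPE, HMODULE, UINT,
        const D3D_FEATURE_LEVEL*, UINT, UINT,
        ID3D11Device**, D3D_FEATURE_LEVEL*, ID3D11DeviceContext**);

    PFN_D3D11CreateDevice pCreateDevice =
        (PFN_D3D11CreateDevice)GetProcAddress(hD3D11, "D3D11CreateDevice");
    if (!pCreateDevice) { FreeLibrary(hD3D11); return 0.0; }

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
    };

    ID3D11Device* pDevice = nullptr;
    ID3D11DeviceContext* pContext = nullptr;
    D3D_FEATURE_LEVEL achieved = (D3D_FEATURE_LEVEL)0;

    HRESULT hr = pCreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                               requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
                               &pDevice, &achieved, &pContext);

    double sm = SUCCEEDED(hr) ? ShaderModelFromFeatureLevel(achieved) : 0.0;
    if (pContext) pContext->Release();
    if (pDevice)  pDevice->Release();
    FreeLibrary(hD3D11);
    return sm;
}

The D3D10 check is analogous with D3D10CreateDevice from d3d10.dll, but because the D3D11 runtime also drives Direct3D 10-class hardware through feature levels 10_0/10_1, the single D3D11 probe above usually already distinguishes SM4, SM4.1 and SM5; a separate D3D10 probe is only needed on machines that have the D3D10 runtime but not D3D11.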
where a graphics card and even DirectX are not installed.
I think you're asking for the impossible, because shaders are provided by DirectX, and the driver/GPU might not even have a concept of a "shader model" under the hood. In this case the only way to detect capabilities would be to build a GPU database of some sort, detect the installed devices, and return the answer from the database. This won't be reliable, of course.
Here is a link about DirectX versions and supported shader models.

Will OpenGL ES 1.1 become obsolete in iOS?

I am at a crossroads between using OpenGL ES 1.1 and 2.0 for iPhone development. I plan on creating 2D applications and simple 3D applications. In the interest of simplicity, should I use 1.1? Or will this be discontinued at some point in iOS? I would like to know if the shaders in 2.0 are significantly more beneficial in making simple programs than the shaders in 1.1. Please tell me the advantages of each. Thanks.
I would say the definitive reason to use 2.0 from the start is complex effects. 2.0 is a wild card in this matter: shaders are little bits of software that run on the graphics card (OK, not always true) and have the ability to affect individual pixels at run time. In 1.1 there's a lot you can do, but most of the time you're affecting all the pixels of a texture; to affect individual pixels you have to combine textures, and after a while there's just stuff you can't do in 1.1.
Now, if you don't need complex effects, you can start out with 1.1, but let me show you the journey:
In the beginning 1.1 will be easier: glTranslatef, glRotatef, glScalef, etc. do save you time and allow you to start manipulating objects easily.
But as you progress and start to do more complex things, you learn about matrix manipulation. Say there's a routine that's a bit slow because you're doing a lot of translates, rotates, and so on. You read up on the subject and learn you can combine all those operations into one matrix, so you start writing your own matrix calculation functions and using glMultMatrixf more. After a while it's just easier to always use glMultMatrixf, because you can later add stuff without having to rewrite code.
At this point you have no reason to use 1.1: going from glMultMatrixf to 2.0 is a very small step, and you get that whole new world of shaders. The sketch below shows the glMultMatrixf stage of that journey.
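A minimal sketch of that stage (iOS OpenGL ES 1.1; buildModelMatrix is a hypothetical helper standing in for your own matrix code):

#include <OpenGLES/ES1/gl.h>

// Hypothetical helper: fills a column-major 4x4 matrix equivalent to
// Translate(tx, ty, tz) * RotateY(angleDeg) * UniformScale(s).
void buildModelMatrix(GLfloat m[16], GLfloat tx, GLfloat ty, GLfloat tz,
                      GLfloat angleDeg, GLfloat s);

void drawObject(GLfloat tx, GLfloat ty, GLfloat tz, GLfloat angleDeg, GLfloat s) {
    glPushMatrix();

    // The beginner version: let fixed-function GL accumulate the transforms.
    // glTranslatef(tx, ty, tz);
    // glRotatef(angleDeg, 0.0f, 1.0f, 0.0f);
    // glScalef(s, s, s);

    // The later version: build the combined model matrix once and push it in one call.
    GLfloat model[16];
    buildModelMatrix(model, tx, ty, tz, angleDeg, s);
    glMultMatrixf(model);

    // ... bind vertex arrays and issue the draw call here ...

    glPopMatrix();
}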
So, if you don't need big effects and are itching to go, all right, use 1.1.
Anything else, go straight to 2.0.
Disclaimer
Yes, this is a simplification, but it's a journey I've made myself.
2.0 has a programmable pipeline
This seems like not very much information (when I first started I was like "uh, OK?"), but it really means a lot.
It allows you to take control of the transformations of the vertices.
Now, in the smaller things I have done (3D), the transformations could easily be managed by 1.1 automatically, but it is cool to have total control over them in 2.0.
If you're learning OpenGL ES on the iDevices, I suggest at first making programs that support both 1.1 and 2.0, so you get even more experience with and understanding of OpenGL ES.
If you need full control over the vertices, then 2.0 is the way to go (see the sketch below); otherwise 1.1 should be fine.
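To make "full control over the vertices" concrete: in 2.0 you supply the matrix yourself and apply it in a vertex shader you write. A minimal sketch (the mvp matrix comes from your own maths code or a library):

#include <OpenGLES/ES2/gl.h>

// Minimal OpenGL ES 2.0 vertex shader: nothing happens to a vertex
// unless you write the code that does it.
static const char* kVertexShader =
    "attribute vec4 a_position;\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    gl_Position = u_mvp * a_position;\n"
    "}\n";

// On the CPU side you upload the matrix you built yourself,
// where 1.1 would have applied glTranslatef/glRotatef/glScalef for you.
void setMvp(GLuint program, const GLfloat mvp[16]) {
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);
}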
