What is the difference between prefixes of WebGL extensions?
There are several prefixes for WebGL extensions, like ANGLE, OES, WEBGL, or EXT. What is the actual difference between them?
Taken from here.
WebGL API extensions may derive from many sources, and the naming of
each extension reflects its origin and intent.
More about each tag:
ANGLE tag should be used if the extension is specific to the ANGLE library.
OES tag should be used for mirroring functionality from OpenGL ES or
OpenGL API extensions approved by the respective architecture review
boards.
EXT tag should be used for mirroring other OpenGL ES or OpenGL API
extensions. If only small differences in behavior compared to OpenGL
ES or OpenGL are specified for a given extension, the original tag
should be maintained.
WEBGL tag should be used for WebGL-specific extensions which are
intended to be compatible with multiple web browsers. It should also
be used for extensions which originated with the OpenGL ES or OpenGL
APIs, but whose behavior has been significantly altered.
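In practice, the tag is just the first underscore-delimited token of the extension name, which is the full string you pass to gl.getExtension(). As a rough illustration (the helper function here is hypothetical, not part of any API):

```javascript
// Hypothetical helper: extract the vendor/origin tag from a WebGL
// extension name. Names like "OES_texture_float" or
// "ANGLE_instanced_arrays" put the tag before the first underscore.
function extensionPrefix(name) {
  return name.split('_')[0];
}

// At runtime you would enable an extension by its full name, e.g.:
//   const ext = gl.getExtension('ANGLE_instanced_arrays');
console.log(extensionPrefix('ANGLE_instanced_arrays')); // "ANGLE"
console.log(extensionPrefix('WEBGL_depth_texture'));    // "WEBGL"
```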
Related
I want to add instancing to my WebGL application, which works fine using gl_InstanceID for devices running WebGL2. However, I want to also support older devices running WebGL1 - apparently this is available as an extension for OpenGLES2.0 (see here):
#extension GL_EXT_draw_instanced : enable
#define gl_InstanceID gl_InstanceIDEXT
However, it doesn't look like WebGL1 supports this extension (at least not on the devices I've tested with). Is the list at MDN the canonical list? Is there another way to support instancing for WebGL? I found this thread that has someone offering an implementation, but unfortunately the links are dead.
The official canonical list for WebGL extensions can be found here:
https://registry.khronos.org/webgl/extensions/
WebGL1 does support instancing through the ANGLE_instanced_arrays extension (MDN page). It has the same functionality and API (with function names suffixed with ANGLE) as the WebGL2 core feature. However, GLSL ES 1.00 does not support gl_InstanceID, so you'll have to implement a workaround if you require it.
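One common workaround is to feed the instance index in as a per-instance attribute: fill a buffer with 0..N-1 and bind it with a vertex attrib divisor of 1. A sketch, assuming a `gl` context, a compiled `program`, and an attribute named `a_instanceId` in your vertex shader:

```javascript
// Sketch of a gl_InstanceID substitute for WebGL1 + ANGLE_instanced_arrays.
// Build a buffer holding each instance's index; bound with divisor 1, the
// attribute advances once per instance, so the vertex shader can declare
// "attribute float a_instanceId;" and use it where gl_InstanceID would go.
function makeInstanceIds(instanceCount) {
  const ids = new Float32Array(instanceCount);
  for (let i = 0; i < instanceCount; i++) ids[i] = i;
  return ids;
}

// Usage in a rendering context (gl, program, vertCount are assumed):
//   const ext = gl.getExtension('ANGLE_instanced_arrays');
//   const buf = gl.createBuffer();
//   gl.bindBuffer(gl.ARRAY_BUFFER, buf);
//   gl.bufferData(gl.ARRAY_BUFFER, makeInstanceIds(100), gl.STATIC_DRAW);
//   const loc = gl.getAttribLocation(program, 'a_instanceId');
//   gl.enableVertexAttribArray(loc);
//   gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);
//   ext.vertexAttribDivisorANGLE(loc, 1);
//   ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, vertCount, 100);
```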
By default, gl_FragCoord gives the coordinates of the current fragment with an origin in the bottom left.
According to the docs:
The origin of gl_FragCoord may be changed by redeclaring gl_FragCoord with the origin_upper_left identifier
However, I can't find the syntax or any examples of gl_FragCoord being redeclared.
How do you redeclare gl_FragCoord with either of the two possible origins origin_upper_left or pixel_center_integer?
That documentation is for OpenGL 4.x. You're using WebGL. It's describing functionality that WebGL doesn't have.
For what it's worth, redeclaring it would look like this:
layout(pixel_center_integer) in vec4 gl_FragCoord;
But that requires desktop GLSL 1.50 or better; even OpenGL ES 3.2 doesn't have this capability.
There is no way to redeclare gl_FragCoord in WebGL1 or WebGL2.
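If what you actually want is top-left-origin coordinates, you can compute them yourself in the fragment shader. A sketch for WebGL1, where `u_resolution` is a hypothetical uniform you would set to the canvas size in pixels:

```glsl
// GLSL ES 1.00 sketch: emulate origin_upper_left by flipping y manually.
precision mediump float;
uniform vec2 u_resolution; // canvas size in pixels (assumed uniform)

void main() {
  // gl_FragCoord.y counts up from the bottom; flip it to count from the top.
  float yFromTop = u_resolution.y - gl_FragCoord.y;
  gl_FragColor = vec4(gl_FragCoord.x / u_resolution.x,
                      yFromTop / u_resolution.y, 0.0, 1.0);
}
```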
As @Nicol points out, those docs are for OpenGL. WebGL is not based on OpenGL; it's based on OpenGL ES. Confusing, yes, but they are not the same thing.
The relevant docs for WebGL1 are linked in the WebGL1 spec
[GLES20]
OpenGL® ES Common Profile Specification Version 2.0.25, A. Munshi, J. Leech, November 2010.
[GLES20GLSL]
The OpenGL® ES Shading Language Version 1.00, R. Simpson, May 2009.
The relevant docs for WebGL2 are linked in the WebGL2 spec
[GLES30]
OpenGL® ES Version 3.0.4, B. Lipchak 2014.
[GLES30GLSL]
The OpenGL® ES Shading Language Version 3.00.6, R. Simpson, January 2016.
Reading the OpenGL specs to learn WebGL will only confuse you and give you incorrect information.
If you want the reference pages the ES 2.0 reference pages are here and the ES 3.0 reference pages are here
Of course be aware there are differences between OpenGL ES 2.0 and WebGL1 and there are differences between OpenGL ES 3.0 and WebGL2. Those differences are documented in the 2 WebGL specs linked above.
Now that a Vulkan to Metal wrapper is officially supported by Khronos (MoltenVK), and that OpenGL to Vulkan wrappers began to appear (glo), would it be technically possible to use OpenGL ES 3.1 or even 3.2 (so even with support to OpenGL compute shaders) on modern iOS versions/HW by chaining these two technologies? Has anybody tried this combination?
I'm not much interested in the performance drop (that would obviously be there due to the two additional layers of abstraction), but only on the enabling factor and cross-platform aspect of the solution.
In theory, yes :).
MoltenVK doesn't support every bit of Vulkan (see the Vulkan Portable Subset section), and some of those features might be required by OpenGL ES 3.1. Triangle fans are an obvious one; full texture swizzle is another. MoltenVK has focused on things that could translate directly; if the ES-on-Vulkan translator were willing to accept extra overhead, it could fake some or all of these features.
The core ANGLE team is working on both OpenGL ES 3.1 support and a Vulkan backend, according to their README and recent commits. They have a history of emulating features (like triangle fans) needed by ES that weren't available in D3D.
AFAIK, the compute shader model is very limited in WebGL, and the documentation on it is even scarcer. I'm having a hard time finding any answers to my questions.
Is there a way to execute a compute shader on one or multiple VBOs/UBOs and alter their values?
Update: On April 9, 2019, the Khronos Group released a draft standard for compute shaders in WebGL 2.
Original answer:
In this press release, the Khronos group stated that they are working on an extension to WebGL 2 to allow for compute shaders:
What’s next? An extension to WebGL 2.0 providing compute shader support is under development, which will bring many leading-edge graphics algorithms to the web. Khronos is also beginning work on the next generation of WebGL, to bring the enhanced performance of the new generation of explicit 3D APIs to the web. Stay tuned for more news!
Your best bet is to wait a year or two for it to land on a limited number of GPU + browser combinations.
2022 UPDATE
It has been declared here (in red) that the WebGL 2.0 Compute specification has instead been moved into the new WebGPU spec and is deprecated for WebGL 2.0.
WebGPU has nowhere near global coverage across browsers yet, whereas WebGL 2.0 reached global coverage as of Feb 2022. WebGL 2.0 Compute is implemented only in Google Chrome (Windows, Linux) and Microsoft Edge Insider Channels and will not be implemented elsewhere.
This is obviously a severe limitation for those wanting compute capability on the web. But it is still possible to do informal compute using other methods, such as using regular graphics shaders + the expanded input and output buffer functionalities supplied by WebGL 2.0.
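The usual pattern for such informal compute is to pack your data into a texture, render a full-screen quad into a floating-point framebuffer, and treat each fragment as one work item. A WebGL2 (GLSL ES 3.00) fragment shader sketch, where `u_data` is an assumed input texture:

```glsl
#version 300 es
// Sketch of "informal compute" in WebGL2: each fragment reads one texel
// of the input texture, applies a kernel, and writes one output texel.
precision highp float;
uniform sampler2D u_data;   // input values packed into a texture (assumed)
out vec4 outResult;

void main() {
  ivec2 cell = ivec2(gl_FragCoord.xy);     // this fragment's "work item" index
  vec4 v = texelFetch(u_data, cell, 0);    // exact texel read, no filtering
  outResult = v * 2.0;                     // the "kernel": here, doubling
}
```

Reading the results back (or feeding them into the next pass) is done with framebuffer attachments and, if needed, gl.readPixels.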
I would recommend Amanda Ghassaei's gpu-io for this. It does all the work for you in wrapping regular GL calls to give compute capability that "just works" (in either WebGL or WebGL 2.0).
I want to use some of the features of OpenGL 4 (specifically, tessellation shaders and newer shader language versions) from WebGL. Is this possible, in either a standards-compliant or a hackish way? Is there some magic value I could use instead of, say, gl.FRAGMENT_SHADER to tell the underlying GL implementation to compile tessellation shaders?
WebGL is based on the OpenGL ES 2.0 specification, so you wouldn't be able to use GL4 unless the browser also somehow exposed a GL4 interface to JavaScript, which I doubt. Even if a browser gave you such an interface, it would only work in that browser.