By default, gl_FragCoord gives the coordinates of the current fragment with an origin in the bottom left.
According to the docs:
The origin of gl_FragCoord may be changed by redeclaring gl_FragCoord with the origin_upper_left identifier
However, I can't find the syntax or any examples of gl_FragCoord being redeclared.
How do you redeclare gl_FragCoord with origin_upper_left and/or pixel_center_integer?
That documentation is for OpenGL 4.x. You're using WebGL. It's describing functionality that WebGL doesn't have.
For what it's worth, redeclaring it would look like this:
layout(pixel_center_integer) in vec4 gl_FragCoord;
But that requires desktop GLSL 1.50 or better; even OpenGL ES 3.2 doesn't have this capability.
There is no way to redeclare gl_FragCoord in WebGL1 or WebGL2.
As Nicol points out, those docs are for OpenGL. WebGL is not based on OpenGL; it's based on OpenGL ES. Confusing, yes, but they are not the same thing.
The relevant docs for WebGL1 are linked in the WebGL1 spec
[GLES20]
OpenGL® ES Common Profile Specification Version 2.0.25, A. Munshi, J. Leech, November 2010.
[GLES20GLSL]
The OpenGL® ES Shading Language Version 1.00, R. Simpson, May 2009.
The relevant docs for WebGL2 are linked in the WebGL2 spec
[GLES30]
OpenGL® ES Version 3.0.4, B. Lipchak 2014.
[GLES30GLSL]
The OpenGL® ES Shading Language Version 3.00.6, R. Simpson, January 2016.
Reading the OpenGL specs when working with WebGL will only confuse you and give you incorrect info.
If you want the reference pages, the ES 2.0 reference pages are here and the ES 3.0 reference pages are here.
Of course, be aware there are differences between OpenGL ES 2.0 and WebGL1, and between OpenGL ES 3.0 and WebGL2. Those differences are documented in the two WebGL specs linked above.
Related
Now that a Vulkan to Metal wrapper is officially supported by Khronos (MoltenVK), and that OpenGL to Vulkan wrappers began to appear (glo), would it be technically possible to use OpenGL ES 3.1 or even 3.2 (so even with support to OpenGL compute shaders) on modern iOS versions/HW by chaining these two technologies? Has anybody tried this combination?
I'm not much interested in the performance drop (that would obviously be there due to the two additional layers of abstraction), but only on the enabling factor and cross-platform aspect of the solution.
In theory, yes :).
MoltenVK doesn't support every bit of Vulkan (see the Vulkan Portable Subset section), and some of those features might be required by OpenGL ES 3.1. Triangle fans are an obvious one; full texture swizzle is another. MoltenVK has focused on things that could translate directly; if an ES-on-Vulkan translator were willing to accept extra overhead, it could fake some or all of these features.
The core ANGLE team is working on both OpenGL ES 3.1 support and a Vulkan backend, according to their README and recent commits. They have a history of emulating features (like triangle fans) needed by ES that weren't available in D3D.
AFAIK, the compute shader model is very limited in WebGL, and the documentation on it is even scarcer. I have a hard time finding any answers to my questions.
Is it possible to execute a compute shader on one or more VBOs/UBOs and alter their values?
Update: On April 9, 2019, the Khronos Group released a draft standard for compute shaders in WebGL 2.
Original answer:
In this press release, the Khronos group stated that they are working on an extension to WebGL 2 to allow for compute shaders:
What’s next? An extension to WebGL 2.0 providing compute shader support is under development, which will bring many leading-edge graphics algorithms to the web. Khronos is also beginning work on the next generation of WebGL, to bring the enhanced performance of the new generation of explicit 3D APIs to the web. Stay tuned for more news!
Your best bet is to wait a year or two for it to appear on a limited number of GPU + browser combinations.
2022 UPDATE
It has been declared here (in red) that the WebGL 2.0 Compute specification has instead been moved into the new WebGPU spec and is deprecated for WebGL 2.0.
WebGPU has nowhere near global coverage across browsers yet, whereas WebGL 2.0 reached global coverage as of Feb 2022. WebGL 2.0 Compute is implemented only in Google Chrome (Windows, Linux) and Microsoft Edge Insider Channels and will not be implemented elsewhere.
This is obviously a severe limitation for those wanting compute capability on the web. But it is still possible to do informal compute using other methods, such as using regular graphics shaders + the expanded input and output buffer functionalities supplied by WebGL 2.0.
I would recommend Amanda Ghassaei's gpu-io for this. It does all the work for you in wrapping regular GL calls to give compute capability that "just works" (in either WebGL or WebGL 2.0).
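As a sketch of what "compute via regular graphics shaders" means in practice: the data is packed into a 2D texture, a full-screen quad is drawn, and a fragment shader writes one result per texel. One purely CPU-side piece of that technique, shown below with hypothetical helper names of my own, is choosing texture dimensions that can hold the data and padding the data to fill them:

```javascript
// Pick a texture size that can hold `count` values, given a maximum texture
// dimension (in real code, queried via gl.getParameter(gl.MAX_TEXTURE_SIZE)).
// Returns {width, height} with width * height >= count.
function computeTextureSize(count, maxSize = 4096) {
  const width = Math.min(count, maxSize);
  const height = Math.ceil(count / width);
  if (height > maxSize) {
    throw new Error(`data does not fit in a ${maxSize}x${maxSize} texture`);
  }
  return { width, height };
}

// Pad a Float32Array so it exactly fills the texture; the padded tail is
// simply ignored when the results are read back.
function packIntoTexture(data, { width, height }) {
  const packed = new Float32Array(width * height);
  packed.set(data);
  return packed;
}

const size = computeTextureSize(10000);   // { width: 4096, height: 3 }
const packed = packIntoTexture(new Float32Array(10000), size);
// In WebGL 2.0, `packed` would then be uploaded with something like:
// gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, size.width, size.height, 0,
//               gl.RED, gl.FLOAT, packed);
```

Libraries like gpu-io wrap this kind of bookkeeping (plus the framebuffer and shader plumbing) so you don't have to write it yourself.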
What is the difference between prefixes of WebGL extensions?
There are several prefixes for WebGL extensions, like ANGLE, OES, WEBGL, or EXT. What is the actual difference between them?
Taken from here.
WebGL API extensions may derive from many sources, and the naming of each extension reflects its origin and intent.
More about each tag:
ANGLE tag should be used if the ANGLE library is used.
OES tag should be used for mirroring functionality from OpenGL ES or OpenGL API extensions approved by the respective architecture review boards.
EXT tag should be used for mirroring other OpenGL ES or OpenGL API extensions. If only small differences in behavior compared to OpenGL ES or OpenGL are specified for a given extension, the original tag should be maintained.
WEBGL tag should be used for WebGL-specific extensions which are intended to be compatible with multiple web browsers. It should also be used for extensions which originated with the OpenGL ES or OpenGL APIs, but whose behavior has been significantly altered.
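In practice you rarely care which prefix an extension ended up with; you just try the likely names in order until one succeeds. A minimal sketch of that lookup (the helper name and prefix ordering are my own; the getExtension function is passed in so the logic is testable without a real WebGL context):

```javascript
// Try an extension under each known WebGL prefix, returning the first match.
function getExtensionWithPrefixes(getExtension, name) {
  const prefixes = ["", "OES_", "EXT_", "WEBGL_", "ANGLE_"];
  for (const prefix of prefixes) {
    const ext = getExtension(prefix + name);
    if (ext) return ext;
  }
  return null;
}

// Usage with a real context:
//   const ext = getExtensionWithPrefixes(gl.getExtension.bind(gl), "texture_float");
//   // finds OES_texture_float if the implementation supports it
```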
I have a number of GLSL fragment shaders for which I can pretty much guarantee that they conform to #version 120. They use standard, non-ES-conformant values and they do not have any ES-specific pragmas.
I really want to make a web previewer for them using WebGL. The previewer won't be used on mobile. Is this feasible? Is the feature set exposed to GLSL shaders in WebGL restricted compared to that GLSL version? Are there precision differences?
I've already tried playing with THREE.js, but that doesn't really cut it, since it mucks up my shader code before loading it onto the GPU (which I can't have).
In short: is WebGL's GLSL sufficient to run those shaders? Because if it isn't, what I'm after isn't doable and I should just drop it.
No, WebGL shaders must be #version 100. Anything else is disallowed.
If you're curious why: it's because, as much as possible, WebGL needs to run everywhere. If you could choose any version, your web page would only run on systems with GPUs/drivers that handled that version.
The next version of WebGL will raise the version number. It will allow GLSL ES 3.0 (note the ES). It's currently available behind a flag in Chrome and Firefox as of May 2016
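Given that restriction, a previewer can at least detect incompatible shaders up front, before handing them to gl.shaderSource. A rough sketch of such a check (the function name is mine; WebGL1 accepts either no #version directive at all or exactly "#version 100"):

```javascript
// Returns true if the shader source could be acceptable to WebGL1:
// either no #version directive, or exactly "#version 100".
// This only inspects the directive; #version 120-only features used in the
// body will still fail at compile time.
function isWebGL1Version(source) {
  const match = source.match(/^[ \t]*#version[ \t]+(.+?)[ \t]*$/m);
  return match === null || match[1] === "100";
}

isWebGL1Version("#version 120\nvoid main() {}");              // false: desktop GLSL
isWebGL1Version("void main() { gl_FragColor = vec4(1.0); }"); // true
```

This rejects "#version 120" (and "#version 300 es", which only WebGL2 accepts) without needing a GL context, which is handy for giving users an early, readable error.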
I want to use some of the features of OpenGL 4 (specifically, tessellation shaders and newer shader language versions) from WebGL. Is this possible, in either a standards-compliant or a hackish way? Is there some magic value I could use instead of, say, gl.FRAGMENT_SHADER to tell the underlying GL implementation to compile tessellation shaders?
WebGL is based on the OpenGL ES 2.0 specification, so you wouldn't be able to use GL4 features unless the browser also somehow exposed a GL4 interface to JavaScript, which I doubt. Even if a browser gave you such an interface, it would only work in that browser.