Array index for gl_FragData must be constant zero - WebGL

I'm having an issue trying to compile a fragment shader. I keep getting this error:
Uncaught Error: Fragment Shader Compiler Error:
ERROR: 0:21: '[' : array index for gl_FragData must be constant zero
ERROR: 0:21: '[' : array index for gl_FragData must be constant zero
ERROR: 0:21: '[' : array index for gl_FragData must be constant zero
This is the code:
#ifdef GL_EXT_draw_buffers
#extension GL_EXT_draw_buffers : require
#endif
#ifdef GL_ES
precision highp float;
#endif
void main() {
  gl_FragData[0] = vec4(1.,2.,3.,4.);
  gl_FragData[1] = vec4(1.,2.,3.,4.);
  gl_FragData[2] = vec4(1.,2.,3.,4.);
  gl_FragData[3] = vec4(1.,2.,3.,4.);
}
The whole setup works fine if I write to gl_FragColor (with the 4 textures attached to the framebuffer), but the code above (indexing the buffers to output to) doesn't compile. I have seen this working in WebGL1 using extensions. I'm using WebGL2, so perhaps something is different in this context? (I'm trying it in the latest version of Chrome.)

So it appears there are some changes to consider going from WebGL1 to WebGL2. Given @gman's comment I thought it best to link to his article, since I know he's really the expert here. ;)
WebGL 1 to WebGL 2 conversion: https://webgl2fundamentals.org/webgl/lessons/webgl1-to-webgl2.html
I also found it helpful to remember the version differences:
WebGL 1.0 is based on OpenGL ES 2.0 and provides an API for 3D
graphics. It uses the HTML5 canvas element and is accessed using
Document Object Model (DOM) interfaces.
WebGL 2.0 is based on OpenGL ES 3.0; it guarantees the availability
of many optional WebGL 1.0 extensions and exposes new APIs.
In a nutshell (see also the first link above for the history):
My shader code was written against examples I'd seen for WebGL 1 (OpenGL ES 2.0) using extensions. This worked because WebGL 1 supports multiple color outputs via gl_FragData when the GL_EXT_draw_buffers / WEBGL_draw_buffers extension is enabled.
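For comparison, the WebGL 1 route also needs the extension enabled on the JavaScript side. This is just a rough sketch of how that typically looks (not part of my original setup; variable names are placeholders):
// WebGL 1 only: writing gl_FragData[i] beyond index 0 requires WEBGL_draw_buffers.
const ext = gl.getExtension('WEBGL_draw_buffers');
if (ext) {
  // Enable the first two color attachments as draw targets.
  ext.drawBuffersWEBGL([
    ext.COLOR_ATTACHMENT0_WEBGL,
    ext.COLOR_ATTACHMENT1_WEBGL,
  ]);
}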
Switching to WebGL 2 (OpenGL ES 3.0), this is deprecated in favour of a different approach: the shader now needs out declarations, like out vec4 out_0; void main() { out_0 = vec4(1.0, 0.0, 0.0, 1.0); }. But I was still having some problems. It seems I needed to specify the buffer locations, too. Also, I was getting this error:
ERROR: must explicitly specify all locations when using multiple fragment outputs
That meant I needed layout(location = N) qualifiers on each output (as well as #version 300 es at the top of my program), so the correct code for WebGL 2 looks more like this:
#version 300 es
precision highp float; // a default float precision is required in ES 3.00 fragment shaders
layout(location = 0) out vec4 out_0;
layout(location = 1) out vec4 out_1;
void main() {
  out_0 = vec4(1.0, 0.0, 0.0, 1.0);
  out_1 = vec4(1.0, 0.0, 0.0, 1.0);
}
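On the JavaScript side, each layout(location = i) output lands in color attachment i of the bound framebuffer, and you still have to tell WebGL 2 which attachments to draw into. Here's a rough sketch of that setup (texture sizes and names are placeholders, not from my actual code); note that drawBuffers is core in WebGL 2, so no extension is needed:
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

for (let i = 0; i < 2; i++) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  // Attachment i receives the output declared with layout(location = i).
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, gl.TEXTURE_2D, tex, 0);
}

// Enable both attachments as draw targets (core API in WebGL 2).
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);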
At one point I had the wrong version, which caused this error:
invalid version directive
ERROR: 0:24: 'out' : storage qualifier supported in GLSL ES 3.00 and above only
But I found out that the version for WebGL 2 specifically is #version 300 es (notice the es part), which worked!
Note: The version directive must be on the FIRST line and, unfortunately, cannot be wrapped in a preprocessor conditional (e.g. #ifdef), so I had to prepend it to the source string dynamically before sending it to be compiled. If not, you'll get this:
#version directive must occur before anything else, except for comments and white space
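The prepending itself is trivial. Here's a rough sketch of the kind of helper I mean (the function and flag names are just placeholders, not from any library):
function compileShader(gl, type, source, isWebGL2) {
  // The directive must be the very first thing in the source (only comments
  // and whitespace may precede it), so it can't be hidden behind an #ifdef.
  const src = isWebGL2 ? '#version 300 es\n' + source : source;
  const shader = gl.createShader(type);
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}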
For vertex shaders compiled for WebGL 2 (ES 3), note that attribute is now in instead. The vertex shader's version must ALSO match that of the fragment shader it's linked with, or else you'll get this:
ERROR: Shader Linking Error: Versions of linked shaders have to match.
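For example, a minimal GLSL ES 3.00 vertex shader that would link with the fragment shader above could look like this (kept as a source string; the attribute name is just a placeholder):
const vertexSrc = `#version 300 es
in vec4 a_position;   // 'attribute' in WebGL 1 becomes 'in' in WebGL 2
void main() {
  gl_Position = a_position;
}`;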
I hope gluing all this confusion together helps save someone else lots of time. ;)

Related

HighStock Boost module Chrome browser error

I want to load many stock charts on a single page: ~1500 charts, each with 1 year of time-series data.
I use the Boost module, but I get the following error in the Chrome console; midway through, Chrome goes to a black screen. It recovers from it, yet the error persists when compiling the fragment shader:
highstock.js:473 [highcharts boost] - unable to init WebGL renderer
highstock.js:473 [highcharts boost] shader error - when compiling vertex shader:
How do I get rid of this error?

msvc trying to compile cv::Matx<float,3,1> as a 4-element vector

Using MSVC 2017 and OpenCV 3.4. Code:
#include <opencv2/core.hpp> // cv::Vec3f, cv::Matx
using namespace cv;

typedef Vec3f localcolor;
inline double lensqd(const localcolor & c) {
    return c.ddot(c);
}
I get:
error C2338: Matx should have at least 4 elements. channels >= 4
note: while compiling class template member function 'cv::Matx<float,3,1>::Matx(_Tp,_Tp,_Tp,_Tp)'
when compiling the ddot function.
The compiler is trying to instantiate a 3-element vector with 4 initializers. I can't see anything in the OpenCV source code that would make this happen.
So do I file a bug report with MS?
And how do you suggest I get a working build? The code is this way because I sometimes want
typedef Vec4f localcolor;
which BTW compiles without error.
Could you show the ddot function?
I recently had the same error by attempting to initialize a Vec3f with 4 elements.

puts(NULL) - why doesn't WP+RTE complain?

Consider this small C file:
#include <stdio.h>
void f(void) {
    puts(NULL);
}
I'm running the WP and RTE plugins of Frama-C like this:
frama-c-gui puts.c -wp -rte -wp-rte
I would expect this code to generate a proof obligation of valid_read_string(NULL); or similar, which would be obviously unprovable. However, to my surprise, no such thing happens. Is this a deficiency in the ACSL specification of the standard library?
Basically yes. You can see in the version of stdio.h that is bundled with Frama-C that the specification for puts is
/*@ assigns *stream \from s[..]; */
extern int fputs(const char * restrict s,
FILE * restrict stream);
i.e. the bare minimum: an assigns clause (plus a \from clause for Eva), and no preconditions on s or stream. Adding a precondition on s would be easy; things are more complex for stream, since you need a model for the various objects of type FILE.

Rendering image using texSubImage2D in Haxe

I am learning how to stamp an image onto my canvas using Haxe and I have read that texSubImage2D should be the function I need to do the job.
I have read some documentation found here and thought I could implement what I was after by completing the following params:
void gl.texSubImage2D(target, level, xoffset, yoffset, format, type, HTMLImageElement? pixels);
This is what I did:
gl.texSubImage2D (cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), gl.RGB, gl.UNSIGNED_BYTE, document.querySelector('img[src="images/myImage.jpg"]'));
However, when I try to build the project, I am getting the following errors:
src/Main.hx:571: characters 135-191 : js.html.Element should be Int
src/Main.hx:571: characters 135-191 : For function argument 'format'
When I went back to the docs, the format I passed (gl.RGB) is an accepted param, so I am not sure where I am going wrong.
Any guidance would be really appreciated.
I can't quite reproduce the error message you're getting; I think the errors might have improved a bit in more recent Haxe versions. Anyway, there are a few issues here:
Firstly, by doing gl.RGB / gl.UNSIGNED_BYTE, you're trying to access static fields from an instance. I actually get a helpful error for this:
Cannot access static field RGB from a class instance
While other languages allow this, Haxe does not; you have to access them through the class name. To fix this, simply prefix them with js.html.webgl.RenderingContext (or import that class and refer to RenderingContext directly, as in the code below).
Secondly, querySelector() returns a plain js.html.Element, which none of the overloads accepts. They all want something more specific: VideoElement, ImageElement or CanvasElement. So you'd have to cast it first:
var image:js.html.ImageElement = cast document.querySelector('img[src="images/myImage.jpg"]');
Finally, it seems a bit suspicious that you'd need to cast the first parameter. Even if it works, there might be a nicer way of doing that with the wrapper you're using.
So in summary, the following should compile:
gl.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0,
Math.round(mouse.x), Math.round(mouse.y),
RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);

openCV 3.0, openCL and meanShiftFiltering

Based on the changes in OpenCV 3.0 and OpenCL, I cannot seem to get pyrMeanShiftFiltering to work using OpenCL. I know that ocl::meanShiftFiltering was supported in OpenCV 2.4.10. The two functions below take the same amount of time to execute.
How can I even check which functions in OpenCV 3.0 are supported under OpenCL? Any suggestions?
#include <opencv2/core/ocl.hpp>   // attempting to use OpenCL
#include <opencv2/imgcodecs.hpp>  // imread
#include <opencv2/imgproc.hpp>    // pyrMeanShiftFiltering
using namespace cv;
using namespace ocl;

void meanShiftOCL()
{
    setUseOpenCL(true);
    UMat in, out;
    imread("./images/img.png").copyTo(in);
    pyrMeanShiftFiltering(in, out, 40, 20, 3);
}

// not using OpenCL
void meanShift()
{
    Mat in, out;
    imread("./images/img.png").copyTo(in);
    pyrMeanShiftFiltering(in, out, 40, 20, 3);
}
I'm not sure there is a simple way to determine it with the prebuilt OpenCV binaries, but you can recompile OpenCV yourself with an additional define (it can be specified in CMake):
CV_OPENCL_RUN_VERBOSE
With this define, every function for which an OpenCL implementation is available will print the following message to the console (stdout):
<function name>: OpenCL implementation is running
Regarding your question: as far as I know, pyrMeanShiftFiltering currently doesn't have an OpenCL implementation.
