How to debug WebGL uncaught type error

I'm getting:
Uncaught TypeError: Type error
When I have the WebGL Inspector enabled (in Chrome), this error originates in a file whose name starts with 'VM' and ends in a sequence of digits (I'm not sure what code owns that -- is it core browser behavior or the WebGL Inspector?). This is the line:
// Call real function
var result = originalFunction.apply(context.rawgl, arguments);
I enabled the debug context and am logging all WebGL calls. This is the call that breaks:
uniform1i(3, 0)
In the WebGL inspector, I see that the uniform at index 3 is my uniform sampler2D uSampler in my fragment shader. The API documentation says that this is a GLint, so the type is correct. I also tried setting some other uniforms first and they also fail with the same error.
I'm reworking some existing code I wrote after following tutorials, and one of the things I'm adding is interleaved vertex data. I'm sure that is the root cause. However, this is the third time I've come across an error like this, and my only recourse has been to massage the code until it goes away. It feels random, and it's frustrating.
Are there any more debugging techniques? I assume it's an error in the shaders. Is there some way to get a stack trace from them?

uniform1i(3, 0)
is not valid WebGL. The uniform functions require a WebGLUniformLocation object, which can only be obtained by calling gl.getUniformLocation.
This is different from OpenGL. The reason is that you are not allowed to do math on uniform locations. In OpenGL, developers often make that mistake. They'll do something like this:
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location = glGetUniformLocation(program, "arrayOfFloats");
glUniform1f(location, 123.0f);
glUniform1f(location + 1, 456.0f); // BAD!!!
That second line is not valid OpenGL, but it might work depending on the driver.
In WebGL they wanted to make that kind of mistake impossible, because web pages need to work everywhere, whereas an OpenGL program only needs to work on the platform it is compiled for. To make it work everywhere, they had gl.getUniformLocation return an opaque object, so you can't do math on the result.
The correct way to write the code above in OpenGL is:
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location0 = glGetUniformLocation(program, "arrayOfFloats[0]");
GLint location1 = glGetUniformLocation(program, "arrayOfFloats[1]");
glUniform1f(location0, 123.0f);
glUniform1f(location1, 456.0f);
And in WebGL:
--in shader--
uniform float arrayOfFloats[4];
--in code--
var location0 = gl.getUniformLocation(program, "arrayOfFloats[0]");
var location1 = gl.getUniformLocation(program, "arrayOfFloats[1]");
gl.uniform1f(location0, 123.0);
gl.uniform1f(location1, 456.0);
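Applied to the question above, the fix is to look up the sampler's location instead of passing the raw index 3. A minimal sketch, assuming your linked program object is named program (the variable names here are illustrative):
// Look up the location once after linking; it is an opaque object, not an int
var uSamplerLocation = gl.getUniformLocation(program, "uSampler");
// null means the uniform does not exist or was optimized out by the compiler
if (uSamplerLocation === null) {
  console.warn("uSampler not found in the linked program");
}
gl.useProgram(program);
// Point uSampler at texture unit 0
gl.uniform1i(uSamplerLocation, 0);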

Related

ARKit, metal shader for ARSCNView

I'm trying to figure out how to apply shaders to my ARSCNView.
Previously, when using a standard SCNView, I was able to apply a distortion shader the following way:
if let path = Bundle.main.path(forResource: "art.scnassets/distortion", ofType: "plist") {
    if let dict = NSDictionary(contentsOfFile: path) {
        let technique = SCNTechnique(dictionary: dict as! [String: AnyObject])
        scnView.technique = technique
    }
}
Replacing SCNView with ARSCNView gives me the following error(s):
"Error: Metal renderer does not support nil vertex function name"
"Error: _executeProgram - no pipeline state"
I was thinking it's because ARSCNView uses a different renderer than SCNView. But logging ARSCNView.renderingAPI tells me nothing about the renderer, and I can't seem to choose one when I construct my ARSCNView instance. I must be missing something obvious, because I can't seem to find a single resource when scouring for references online.
My initial idea was to use an SCNProgram to apply the shaders instead. But I can't find any resources on how to apply it to an ARSCNView, or whether it's even a correct/possible solution; SCNProgram seems to be reserved for materials.
Can anyone give me any useful pointers on how to get vertex and fragment shaders working for an ARSCNView?
SCNTechnique for ARSCNView does not work with GLSL shaders; instead, Metal functions need to be provided in the technique's plist file under the keys metalVertexShader and metalFragmentShader.
On the contrary, the documentation says any combination of shaders should work:
"You must specify both fragment and vertex shaders, and you must specify either a GLSL shader program, a pair of Metal functions, or both. If both are specified, SceneKit uses whichever shader is appropriate for the current renderer."
So this might be a bug, but I suspect the documentation is simply outdated: since every device that can run ARKit also supports Metal, GLSL support was apparently never added for ARSCNView.
As iOS 12 deprecates OpenGL ES, this looks intentional.
I had this issue in ARKit on iOS 11.4 and 12, and it came down to a series of misspelled shader names. I hope this might help someone.

webgl replace program shader

I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations; I am trying to provide alternative shaders for lower-end hardware.
I end up getting a single-color output (instead of a texture). Does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
// if I don't do this:
// WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, attachedFS);
// select a random shader; they all use the same parameters
attachedFS = fragmentShaders[~~(Math.random() * fragmentShaders.length)];
// attach the new shader
gl.attachShader(program, attachedFS);
// if I don't do this, nothing happens
gl.linkProgram(program);
// if I don't add this line:
// globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
// location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader).
Have you tried testing the code in a different browser to see if you get the same behavior? (It could be specific to one standards implementation.)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be held in memory.
If this does not let you move forward, you may have to detach both shaders (vertex and fragment) and reattach them, or even recreate the program from scratch.
I found the issue, after trying just about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes.
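For anyone hitting the same wall: after every call to gl.linkProgram, all previously fetched uniform and attribute locations are stale, and the attribute state must be set up again. A minimal sketch of what that update can look like (updateLocations, uColor, aPosition, and vertexBuffer are illustrative names, not from the original code):
function updateLocations() {
  // Re-query every location; linking may have assigned new ones
  uColorLocation = gl.getUniformLocation(program, "uColor");
  var aPositionLocation = gl.getAttribLocation(program, "aPosition");
  // Re-point the attributes as well, since their locations may have moved
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.enableVertexAttribArray(aPositionLocation);
  gl.vertexAttribPointer(aPositionLocation, 3, gl.FLOAT, false, 0, 0);
}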

bufferData - usage parameter differences

While reading the specification at Khronos, I found:
bufferData(ulong target, Object data, ulong usage)
The usage parameter can be STREAM_DRAW, STATIC_DRAW, or DYNAMIC_DRAW.
My question is, which one should I use?
What are the advantages, and what are the differences?
Why would I choose something other than STATIC_DRAW?
Thanks.
For 'desktop' OpenGL, there is a good explanation here:
http://www.opengl.org/wiki/Buffer_Object
Basically, the usage parameter is a hint to OpenGL/WebGL about how you intend to use the buffer, so the implementation can optimize the buffer's storage accordingly.
The OpenGL ES docs say the following, which is not exactly the same as for desktop OpenGL (remember that WebGL is derived from OpenGL ES):
STREAM
The data store contents will be modified once and used at most a few times.
STATIC
The data store contents will be modified once and used many times.
DYNAMIC
The data store contents will be modified repeatedly and used many times.
The nature of access must be:
DRAW
The data store contents are modified by the application, and used as the source for GL drawing and image specification commands.
The most common usage is STATIC_DRAW (for static geometry), but I have recently created a small particle system where DYNAMIC_DRAW makes more sense (the particles are stored in a single buffer, and parts of the buffer are updated when particles are emitted).
http://jsfiddle.net/mortennobel/YHMQZ/
Code snippet:
function createVertexBufferObject() {
  particleBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
  var vertices = new Float32Array(vertexBufferSize * particleSize);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.DYNAMIC_DRAW);
  bindAttributes();
}

function emitParticle(x, y, velocityX, velocityY) {
  gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
  // ...
  gl.bufferSubData(gl.ARRAY_BUFFER, particleId * particleSize * sizeOfFloat, data);
  particleId = (particleId + 1) % vertexBufferSize;
}

GLKTexture is not correctly mapped since iOS6

I got a strange behavior since Xcode 4.5 and the iOS 6 SDK when using textures on my 3D objects.
The problem also appears on my mac application when building against OS X 10.8 SDK.
I am using OpenGL ES 2.0 on iOS and OpenGL legacy profile ( < 3.0 ) on OS X 10.8.
The textures are not placed at their correct coordinates anymore and I get lots of artifacts. The VAOs are correctly uploaded and they look good without texturing. When using Xcode 4.4.1 and the iOS 5.1 SDK everything is fine.
The VAO is exactly the same (checked with OpenGL ES frame capture) and the texture uniforms are also bound correctly.
[Screenshots: the VAO overview in Xcode 4.4 vs. Xcode 4.5, and the rendered output with Xcode 4.4.1 (iOS 5.1 SDK) vs. Xcode 4.5 (iOS 6 SDK).]
Code / Shader Snippet
Relevant parts for uploading and processing the texture. I had to strip the shaders down to the minimum.
Vertex shader
precision highp float;
attribute vec2 a_vTextureCoordinate;
uniform mat4 u_mModelViewMatrix;
uniform mat4 u_mModelViewMatrixInverse;
uniform mat4 u_mProjectionMatrix;
uniform mat4 u_mNormalMatrix;
void main()
{
    ....
    // Transform output position
    gl_Position = u_mProjectionMatrix * u_mModelViewMatrix * a_vPosition;
    v_vPosition = vec3(u_mModelViewMatrix * a_vPosition);
    // Pass through the texture coordinate
    v_vTextureCoordinate = a_vTextureCoordinate.xy;
    ....
}
Fragment Shader
precision highp float;
// location 1
uniform sampler2D u_diffuseTexture;
varying vec2 v_vTextureCoordinate;
varying vec3 v_vPosition;
....
void main() {
    ....
    vec4 base = texture2D(u_diffuseTexture, v_vTextureCoordinate);
    gl_FragColor = base;
    ....
}
Texture loading
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @(YES),
                          GLKTextureLoaderGenerateMipmaps: @(YES)};
NSError *error;
path = [path stringByReplacingOccurrencesOfString:@"/" withString:@""];
path = [[NSBundle mainBundle] pathForResource:[path stringByDeletingPathExtension] ofType:[path pathExtension]];
GLKTextureInfo *texture = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&error];
Render loop (Only sending the uniform of the active texture)
....
....
[self setShaderTexture:[[materials objectForKey:@"diffuse"] objectForKey:@"glktexture"]
                forKey:@"u_diffuseTexture"
         withUniform1i:0
        andTextureUnit:GL_TEXTURE0+0];
....
#pragma mark - Texture communication
- (void)setShaderTexture:(GLKTextureInfo *)texture forKey:(NSString *)key withUniform1i:(int32_t)uniform andTextureUnit:(int32_t)unit {
    glActiveTexture(unit);
    glBindTexture(texture.target, texture.name);
    [self.shaderManager sendUniform1Int:key parameter:uniform];
}
Has anyone had a similar problem since iOS 6?
You should report the bug to bugreport.apple.com as already mentioned. As an aside, if you are suggesting that GLKTextureLoader may be the problem (which seems like a good theory), then you might narrow things down in one of two ways off the top of my head...
1) I would render the texture to a trivial quad and see if the results are what you expect. For example, is it rendering vertically flipped? Is the source texture partially garbled in some way you weren't expecting?
2) You could try converting your image to a different size/color depth/image type and see if the problem still exists. What I'm thinking is, and it seems unlikely, but maybe an error is not being reported because you are hitting an unusual edge case due to something in the image format. Knowing this would be of huge help to anyone trying to fix it at Apple.
Probably not much help, but without having access to all your source and assets it's pretty hard to know what to suggest. FWIW, I have some samples that do similar things to what you are doing and haven't noticed anything under GLKit on 10.8.

Getting 'PERFORMANCE WARNING' messages in Chrome

I just recently started getting these messages and was wondering if anyone has seen them or knows what may be causing them. I'm using Three.js with Chrome version 21.0.1180.57 on Mac OS. I don't get these messages with Safari or Firefox.
PERFORMANCE WARNING: Attribute 0 is disabled. This has signficant performance penalty
WebGL: too many errors, no more errors will be reported to the console for this context.
Same message on Firefox is : "Error: WebGL: Drawing without vertex attrib 0 array enabled forces the browser to do expensive emulation work when running on desktop OpenGL platforms, for example on Mac. It is preferable to always draw with vertex attrib 0 array enabled, by using bindAttribLocation to bind some always-used attribute to location 0."
This is not only a performance drawback but will also result in bad output.
PROBLEM: This message occurs if JavaScript tries to run a WebGL shader that expects color information in gl_Color on a mesh that does not provide a color array.
SOLUTION: Use a shader with a constant color that does not access gl_Color, or provide a color array in the mesh to be shaded.
If you are using lightgl.js by Evan Wallace, try adding the option colors: true in the new GL.Mesh statement and provide a proper mesh.colors array of the same size as your vertices array. Or try this shader:
blackShader = new GL.Shader(
  'void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }',
  'void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }'
);
Sorry, I have never used Three.js, but the problem should be similar: provide colors for your mesh before shading.
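More generally, the fix both warning messages point at is to bind an attribute you always enable to location 0 before linking the program. A minimal sketch (program, positionBuffer, and aPosition are illustrative names, not from the original code):
// Must be called BEFORE gl.linkProgram for the binding to take effect
gl.bindAttribLocation(program, 0, "aPosition");
gl.linkProgram(program);
// At draw time, location 0 is now an attribute you actually enable
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(0);
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);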
Looks like a Chrome bug:
http://code.google.com/p/chromium-os/issues/detail?id=32528
