Getting 'PERFORMANCE WARNING' messages in Chrome - webgl

I just recently started getting these messages and was wondering if anyone has seen them or knows what may be causing them. I'm using Three.js with Chrome version '21.0.1180.57' on macOS. I don't get these messages with Safari or Firefox.
PERFORMANCE WARNING: Attribute 0 is disabled. This has significant performance penalty
WebGL: too many errors, no more errors will be reported to the console for this context.

Same message on Firefox is : "Error: WebGL: Drawing without vertex attrib 0 array enabled forces the browser to do expensive emulation work when running on desktop OpenGL platforms, for example on Mac. It is preferable to always draw with vertex attrib 0 array enabled, by using bindAttribLocation to bind some always-used attribute to location 0."
This is not only a performance drawback, but will also result in bad output.
PROBLEM: This message occurs if JavaScript code tries to run a WebGL shader that expects color information in gl_Color on a mesh that does not provide a color array.
SOLUTION: Use a WebGL shader with a constant color that does not access gl_Color, or provide a color array in the mesh to be shaded.
If using lightgl.js from Evan Wallace, try adding the option colors: true in the new GL.Mesh statement and provide a proper mesh.colors array of the same size as your vertices array. Or try this shader:
blackShader = new GL.Shader(
  'void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }',
  'void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }'
);
Sorry, I have never used Three.js, but the problem should be similar: provide color to your mesh before shading.
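More generally, in plain WebGL the fix the Firefox message points at is to bind an always-used attribute (typically the position) to location 0 before linking, so that attribute 0 is always backed by an enabled array. A minimal sketch, assuming hypothetical program and positionBuffer objects:

gl.bindAttribLocation(program, 0, 'position'); // must be called before linkProgram
gl.linkProgram(program);

// back attribute 0 with an enabled vertex array
var positionLocation = gl.getAttribLocation(program, 'position'); // now 0
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);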

Looks like a Chrome bug:
http://code.google.com/p/chromium-os/issues/detail?id=32528

Related

FFT in C++ AMP throws CLIPBRD_E_CANT_OPEN error

I am trying to use C++ AMP in Visual C++ 2017 on Windows 10 (updated to the latest), and I found the archived FFT library from the C++ AMP team on CodePlex. I tried to run the sample code; however, the program threw an out-of-memory error when creating the DirectX FFT. I solved that problem by following a thread on the Microsoft forum.
However, the problems didn't stop. When the FFT library tries to create an Unordered Access View, it throws the error CLIPBRD_E_CANT_OPEN, even though I never operate on the clipboard in any way.
Thank you for reading this!
It seems I solved the problem. The original post mentioned that we need to create a new DirectX device and then create an accelerator view upon it. I then pass that view to the ctor of fft as the second parameter:
fft(
    concurrency::extent<_Dim> _Transform_extent,
    const concurrency::accelerator_view& _Av = concurrency::accelerator().default_view,
    float _Forward_scale = 0.0f,
    float _Inverse_scale = 0.0f)
However, I still got crashes with CLIPBRD_E_CANT_OPEN.
After reading the code, I realized that I needed to create the arrays on that DirectX accelerator view too, so I changed:
array<std::complex<float>, dims> transformed_array(extent, directx_acc_view);
The idea came from the different behaviors of create_uav(): the internal buffers and the precomputation caused no problems, but the sample's calls triggered the clipboard error. I guessed the device mattered here, so I made that change.
I hope my understanding is correct; in any case, there are no such errors now.

ARKit, Metal shader for ARSCNView

I'm trying to figure out how to apply shaders to my ARSCNView.
Previously, when using a standard SCNView, I was able to apply a distortion shader the following way:
if let path = Bundle.main.path(forResource: "art.scnassets/distortion", ofType: "plist") {
    if let dict = NSDictionary(contentsOfFile: path) {
        let technique = SCNTechnique(dictionary: dict as! [String: AnyObject])
        scnView.technique = technique
    }
}
Replacing SCNView with ARSCNView gives me the following error(s):
"Error: Metal renderer does not support nil vertex function name"
"Error: _executeProgram - no pipeline state"
I was thinking it's because ARSCNView uses a different renderer than SCNView. But logging ARSCNView.renderingAPI tells me nothing about the renderer, and I can't seem to choose one when I construct my ARSCNView instance. I must be missing something obvious, because I can't find a single resource when scouring for references online.
My initial idea was to use an SCNProgram to apply the shaders instead, but I can't find any resources on how to apply it to an ARSCNView, or whether it's even a correct/possible solution; SCNProgram seems to be reserved for materials.
Is anyone able to give me any useful pointers on how to get vertex and fragment shaders working with ARSCNView?
SCNTechnique for ARSCNView does not work with GLSL shaders; instead, Metal functions need to be provided in the technique's plist file under the keys metalVertexShader and metalFragmentShader.
To the contrary, the documentation says any combination of shaders should work:
You must specify both fragment and vertex shaders, and you must specify either a GLSL shader program, a pair of Metal functions, or both. If both are specified, SceneKit uses whichever shader is appropriate for the current renderer.
So it might be a mistake, but I guess the documentation is simply outdated: since all devices that run ARKit also run Metal, GLSL support has never been added to ARSCNView.
With iOS 12 deprecating OpenGL ES, this looks intentional.
I had this issue in ARKit on iOS 11.4 and 12, and it came down to a series of misspelt shader names. I hope this helps someone.

webgl replace program shader

I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations; the goal is to provide alternative shaders for lower-end hardware.
I end up getting single-color output (instead of a texture). Does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
// if I don't do this:
// WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, _.attachedFS);
// select a random shader; all use the same parameters
_.attachedFS = fragmentShaders[~~(Math.random() * fragmentShaders.length)];
// attach the new shader
gl.attachShader(program, _.attachedFS);
// if I don't do this, nothing happens
gl.linkProgram(program);
// if I don't add this line:
// globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
// location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader) for each shader.
Have you tried testing the code in a different browser to see if you get the same behavior? (It could be specific to one implementation of the standard.)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be referenced in memory.
If this does not let you move forward, you may have to detach both shaders (vertex and fragment) and reattach them, or even recreate the program from scratch.
I found the issue after trying about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes.
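For anyone hitting the same thing: attribute and uniform locations are not guaranteed to survive a call to gl.linkProgram, so everything queried from the program has to be fetched, re-enabled, and re-pointed at its buffer after each relink. A minimal sketch of what an updateLocations() along those lines might look like (the attribute/uniform names and buffers are hypothetical):

function updateLocations() {
  gl.useProgram(program);

  // locations can change after every linkProgram call, so query them again
  var positionLoc = gl.getAttribLocation(program, 'aPosition');
  var texCoordLoc = gl.getAttribLocation(program, 'aTexCoord');
  var samplerLoc = gl.getUniformLocation(program, 'uSampler');

  // re-point each attribute at its buffer and re-enable it
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
  gl.enableVertexAttribArray(texCoordLoc);
  gl.vertexAttribPointer(texCoordLoc, 2, gl.FLOAT, false, 0, 0);

  // uniforms also have to be set against the newly linked program
  gl.uniform1i(samplerLoc, 0);
}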

Using OSVR camera in OpenCV 3

I'm trying to use the OSVR IR camera in OpenCV 3.1.
Initialization works OK.
The green LED is lit on the camera.
When I call VideoCapture.read(mat), it returns false and mat is empty.
Other cameras work fine with the same code, and VLC can grab the stream from the OSVR camera.
Some further testing reveals: grab() returns true, whereas retrieve(mat) again returns false.
Getting width and height from the camera yields the expected results, but MODE and FORMAT get me 0.
Is this a config issue? Can it be solved by a combination of VideoCapture.set calls?
Alternative: the official answer received from the developers (after my own solution below):
The reason my camera didn't work out of the box with OpenCV might be that it has old firmware (pre-v7).
Workaround (or just update the firmware):
I found the answer here while browsing anything remotely linked to the issue:
Fastest way to get frames from webcam
You need to specify that it should use DirectShow.
VideoCapture capture( CV_CAP_DSHOW + id_of_camera ); // force the DirectShow backend

How to debug WebGL uncaught type error

I'm getting
Uncaught TypeError: Type error
When I have the WebGL Inspector enabled (in Chrome), this error originates in a file that starts with 'VM' and ends in a sequence of digits (I'm not sure what code owns that -- is it core browser behavior or the WebGL Inspector?). This is the line:
// Call real function
var result = originalFunction.apply(context.rawgl, arguments);
I enabled the debug context and am logging all WebGL calls. This is the call that breaks:
uniform1i(3, 0)
In the WebGL Inspector, I see that the uniform at index 3 is my uniform sampler2D uSampler in my fragment shader. The API documentation says that this is a GLint, so the type is correct. I also tried setting some other uniforms first, and they fail with the same error.
I'm reworking some existing code I wrote after following tutorials, and one of the things I'm adding is interleaved vertex data. I'm sure that is the root cause; however, this is the third time I've come across an error like this, and my only recourse has been to massage the code until the error goes away. It feels random, and it's frustrating.
Are there any more debugging techniques? I assume it's an error in the shaders. Is there some way to get a stack trace from them?
uniform1i(3, 0)
is not valid WebGL. The uniform functions require a WebGLUniformLocation object, which can only be obtained by calling gl.getUniformLocation.
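For the sampler in question, that means looking the uniform up by name instead of using an index. A minimal sketch, assuming the linked program object is called program:

var uSamplerLocation = gl.getUniformLocation(program, 'uSampler');
gl.uniform1i(uSamplerLocation, 0); // bind uSampler to texture unit 0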
This is different from OpenGL. The reason is that you are not allowed to do math on uniform locations. In OpenGL, developers often make that mistake. They'll do something like this:
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location = glGetUniformLocation(program, "arrayOfFloats");
glUniform1f(location, 123.0f);
glUniform1f(location + 1, 456.0f); // BAD!!!
That second glUniform1f call is not valid OpenGL, but it might work depending on the driver.
In WebGL, they wanted to make that type of mistake impossible, because web pages need to work everywhere, whereas an OpenGL program only needs to work on the platform it was compiled on. To make it work everywhere, they had gl.getUniformLocation return an object, so you can't do math on the result.
The correct way to write the code above in OpenGL is:
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location0 = glGetUniformLocation(program, "arrayOfFloats[0]");
GLint location1 = glGetUniformLocation(program, "arrayOfFloats[1]");
glUniform1f(location0, 123.0f);
glUniform1f(location1, 456.0f);
And in WebGL:
--in shader--
uniform float arrayOfFloats[4];
--in code--
var location0 = gl.getUniformLocation(program, "arrayOfFloats[0]");
var location1 = gl.getUniformLocation(program, "arrayOfFloats[1]");
gl.uniform1f(location0, 123.0);
gl.uniform1f(location1, 456.0);