What does "PERFORMANCE WARNING: Some textures are unrenderable" mean in Chrome? - webgl

In my WebGL app, I see the following warning in the JavaScript console:
PERFORMANCE WARNING: Some textures are unrenderable.
What does it mean?

WebGL must enforce OpenGL ES 2.0 behavior and prevent WebGL apps from accessing data they don't have access to. To do this, WebGL implementations have to validate many things, including that all the textures that will be read from are readable according to the OpenGL ES 2.0 spec with no extensions.
So, at every draw, they have to check that all the textures meet the required criteria, which includes checking that each texture is "texture complete", that a cubemap is "cube complete" and "mipmap cube complete", that a texture with non-power-of-2 dimensions has its filtering and wrapping set correctly, etc. If any of these conditions are not met, the WebGL implementation substitutes a transparent black texture so that behavior is spec compliant and consistent across devices.
These checks are expensive, so a shortcut a WebGL implementation can take is to track whether any textures are unrenderable. If no textures are unrenderable, then no checking is needed at draw time. The warning above means that some textures are unrenderable, which is basically telling you WebGL has to do all this expensive checking at every draw. If you make sure all your textures are renderable, WebGL can skip this check and your app may run faster.
For the definitions of "texture complete", "cube complete", etc., see section 3.7.10 of the OpenGL ES 2.0 spec.
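As a rough sketch (not part of the original answer), this is what making a texture renderable usually looks like in plain WebGL; the `gl` context and the `image` object are assumed to exist already:

    // Sketch: making a texture "texture complete" so WebGL can skip the
    // per-draw completeness checks. `gl` and `image` are assumed to exist.
    function isPowerOf2(n) { return (n & (n - 1)) === 0; }

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    if (isPowerOf2(image.width) && isPowerOf2(image.height)) {
      // power-of-2 dimensions: a full mip chain makes the texture mipmap complete
      gl.generateMipmap(gl.TEXTURE_2D);
    } else {
      // non-power-of-2: no mips allowed, so filtering and wrapping must not need them
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    }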

This could also be the result of a bug in Chrome 28: http://code.google.com/p/chromium/issues/detail?id=242321 I got this message even when my WebGL script wasn't using any textures at all.
It was fixed in Chrome 29.

Related

Throw an exception from WebGL

Is there any way to throw a run-time exception from a WebGL shader? Since shaders are written in a flavor of C which does not support exceptions, I imagine this isn't going to be easy.
I have inserted divide-by-zero errors, but these are flagged as warnings during 'constant folding', which I don't believe occurs at run time.
Any clever ideas on how to stop execution on invalid runtime values? Ideally in a way that indicates which line the error occurred on....
The context is that I'm doing math on the extended complex plane which allows infinity, but doesn't permit some operations (such as 0/0).
It's not possible, not using WebGL or any other graphics API.
In terms of shaders there is no such thing as a "runtime error"; there is only "undefined behavior".
The only way to get runtime feedback is to color-code your validations into the backbuffer or a texture, assuming you're doing the math in a fragment shader. Otherwise you're out of luck and may want to look into the APIs actually made for GPGPU, namely OpenCL and CUDA.
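A minimal sketch of that color-coding idea, assuming the math runs in a fragment shader; the uniform name, the sentinel color, and the placeholder computation are invented for illustration:

    // Sketch: flag an invalid input (e.g. 0/0) with a sentinel output color.
    // `u_z` and the magenta sentinel are illustrative, not from the question.
    var fragmentSource =
      'precision highp float;\n' +
      'uniform vec2 u_z;                             // complex number as (re, im)\n' +
      'void main() {\n' +
      '  if (u_z == vec2(0.0)) {\n' +
      '    gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);  // sentinel: invalid operation\n' +
      '    return;\n' +
      '  }\n' +
      '  vec2 w = u_z / dot(u_z, u_z);               // placeholder computation on z\n' +
      '  gl_FragColor = vec4(w * 0.5 + 0.5, 0.0, 1.0);\n' +
      '}\n';
    // The CPU side can then gl.readPixels() and scan for the sentinel color.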

How can I use a renderbuffer as a texture in Elm's WebGL library

I'm using the Elm WebGL library found here to make WebGL graphics for my website. I would like to use certain graphics techniques such as shadow mapping, which require the ability to use the results of operations performed on the graphics card: a render to a framebuffer backed by a texture, if I recall my OpenGL ES terminology correctly, which is then sampled by the shader that draws to the screen.
Looking in the API provided it doesn't look like doing this is possible, because the only thing in the API that can actually execute/hold the result of a WebGL pipeline/Entity is of type Element.
My question is if it is possible to use techniques like shadow mapping and SSAO which require more than one pass to draw the scene with the standard Elm WebGL library, and how I might accomplish this.
Sadly, the answer is indeed: No, you cannot do multiple passes and generate textures using the graphics card yet. The WebGL library is pretty new, so this is a feature that was only requested for the first time 6 days ago on the elm-discuss mailing list.
The author of the WebGL library has yet to respond, but I expect the features described in the linked post will become available at some point.
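For context, the render-to-texture step the question asks about looks roughly like this in plain WebGL (not Elm); the `gl` context, the size, and the formats here are assumptions:

    // Sketch of render-to-texture in raw WebGL: draw a first pass into a
    // texture, then sample that texture in the pass that draws to the screen.
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

    var fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

    // First pass: draw the shadow-map (or other offscreen) data into `tex` here.
    // Second pass: unbind the framebuffer and sample `tex` while drawing the scene.
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.bindTexture(gl.TEXTURE_2D, tex);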

Is it possible to run #version 120 shaders with WebGL

I have a number of GLSL fragment shaders for which I can pretty much guarantee that they conform to #version 120. They use standard, non-ES-conformant values and do not have any ES-specific pragmas.
I really want to make a web previewer for them using WebGL. The previewer won't be used on mobile. Is this feasible? Is the feature set exposed to GLSL shaders in WebGL restricted compared to that GLSL version? Are there precision differences?
I've already tried playing with THREE.js, but that doesn't really cut it since it mucks up my shader code before loading it onto the GPU (which I can't allow).
In short: is WebGL's GLSL support sufficient to run those shaders? Because if it isn't, what I'm after is not doable and I should just drop it.
No. WebGL shaders must be GLSL ES 1.00 (#version 100); anything else is disallowed.
If you're curious why, it's because, as much as possible, WebGL needs to run everywhere. If you could choose any GLSL version, your web page would only run on systems with GPUs/drivers that handled that version.
The next version of WebGL will raise the version number: it will allow GLSL ES 3.0 (note the ES). It is currently available behind a flag in Chrome and Firefox as of May 2016.
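To make the difference concrete, here is an illustrative sketch of a fragment shader WebGL will accept; the shader body is invented for the example, not taken from the question:

    // Sketch: a GLSL ES 1.00 fragment shader as WebGL expects it.
    // There is no '#version 120' line, and a default precision is required.
    var es100Fragment =
      'precision mediump float;\n' +
      'varying vec2 v_uv;\n' +
      'uniform sampler2D u_tex;\n' +
      'void main() {\n' +
      '  gl_FragColor = texture2D(u_tex, v_uv);\n' +
      '}\n';
    // A '#version 120' header, or desktop-only features, will fail to compile
    // when passed to gl.shaderSource() / gl.compileShader().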

Hiding shader code from the Xcode OpenGL ES debugger

I'm thinking about releasing a bunch of GPGPU functions as a framework using OpenGL ES 2.0 for iOS devices.
When capturing an OpenGL ES frame in Xcode, I can see the code of the shaders being used. Is there a way to prevent this from happening? I've tried deleting and detaching the shaders with glDeleteShader and glDetachShader after linking the OpenGL ES program, but the code is still captured.
I'm not looking for a bulletproof option (which probably doesn't exist), just something that makes getting to the code a bit more difficult than just pressing a button.
Thank you.
The debugger has to capture the input to calls to glShaderSource; the actual shader source is never stored in VRAM after compilation. I cannot think of any way to overcome this problem directly. Calling glShaderSource is required because OpenGL ES on iOS does not support precompiled shader binaries.
I would recommend obfuscating the original shader code, perhaps using compile-time macros, or even a script to scramble variable names etc. (be careful of attribs and uniforms, as they affect linkage to app code).
Here is a tool used for obfuscation/minimization of shader code. I believe it is built for WebGL, so it may not work perfectly. http://glslunit.appspot.com/compiler.html
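A toy sketch of the "scramble variable names" idea; a real tool needs a proper GLSL parser, and the function, input, and names below are made up for illustration:

    // Toy sketch (e.g. a build-time script) that scrambles identifiers in
    // GLSL source before shipping it. `obfuscateGLSL`, `originalSource` and
    // the names below are illustrative, not a real API.
    function obfuscateGLSL(source, names) {
      var out = source;
      names.forEach(function (name, i) {
        // whole-word replacement so keywords and built-ins stay untouched
        out = out.replace(new RegExp('\\b' + name + '\\b', 'g'), '_v' + i);
      });
      return out;
    }
    // Remember: renamed attributes/uniforms must also be renamed wherever the
    // app looks them up (glGetAttribLocation / glGetUniformLocation).
    var shippedSource = obfuscateGLSL(originalSource, ['lightDir', 'shadowBias', 'kernelRadius']);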

Can you prewarm a shader on a background thread with its own context?

I am developing a large game that streams in level data (including shaders) as you move through the game world. I do not want to have hitches in my frame rate as shaders are compiled/linked or on the first time they are used.
I have my shader compilation and linking working on a separate thread with its own OpenGL context. But I have not been able to get the prewarming of the shaders to work on the separate thread (so that there is no performance hit when the shader is first used).
Prewarming is really not mentioned anywhere in the iOS or OpenGL docs. It is, however, mentioned in the OpenGL ES Analyzer (one of the instruments available when profiling from Xcode). In this tool I get a "Shader Compiled Outside of Prewarming Phase" warning each time something is rendered with a shader that has not been used to render something before. The "Extended detail" says this:
"OpenGL ES Analyzer detected a shader compilation that is not part of an initial prewarming phase. Shader compilation can be a time consuming operation. To avoid them, prewarm all shaders used for rendering. To do this, make a prewarming pass when your application launches and execute a drawing call with each of the shader programs to be used, using any gl state settings the shader program will be used in conjunction with. States such as blending, color mask, logic ops, multisampling, texture formats, and point primitive state can all affect shader compilation."
The term "compilation" is a little confusing here. The vertex and fragment shaders have already been compiled and the program has been linked. But the first time something is rendered with a given OpenGL state, the driver does some more work on the shader to optimize it for that state, I guess.
I have code to pre-warm the shaders by rendering a zero-sized triangle before their first use.
If I compile, link and pre-warm the shaders on the main thread with the same OpenGL context as the normal rendering, then it works. However, if I do it on the background thread with its separate OpenGL context, it does not work (it still gets the Analyzer warning on first use).
So... it could be that prewarming a shader on a separate context has no effect on other contexts. Or it could be that I don't have all the same state set up in the separate context. There is a lot of potential OpenGL state that might need to be set up. I'm using an offscreen render buffer on the background thread, so that could be considered part of the state.
Has anyone succeeded in getting prewarming working on a background thread?
To be honest with you, I was quite ignorant on this matter until yesterday, though I have been working on my engine optimization for a while. So, first of all, thank you for the tip. :)
Since then I have studied the shader warming topic and have not found much around.
I found a mention in the official AMD documentation, in a document titled "ATI OpenGL Programming and Optimization Guide":
http://developer.amd.com/media/gpu_assets/ATI_OpenGL_Programming_and_Optimization_Guide.pdf
Here is an excerpt which refers to warming the shaders:
Quote:
While the R500 natively supports flow control in the fragment shading unit, the R300 and R400
asics does not. Static flow control for the R300 and R400 is emulated by the driver compiling out
unused conditionals and unrolling loops based on the set constants. Even though the R500 asics family
natively support flow control, the driver will still attempt to compile out static flow conditions enabling
it to reorganize shader instructions for better instruction scheduling. The driver will also try to cache
away the compiled shader for a specific static flow condition set in anticipation for its reuse. So when
writing a fragment program that uses static flow control, it is recommended to “warm” the shader cache
by rendering a dummy triangle on the very first frame that uses the common static conditional
permutations relevant for the life of the shader.
The best explanation I have found around is the following:
http://fgiesen.wordpress.com/2011/07/01/a-trip-through-the-graphics-pipeline-2011-part-1/
Quote:
Incidentally, this is also the reason why you’ll often see a delay the first time you use a new shader or resource; a lot of the creation/compilation work is deferred by the driver and only executed when it’s actually necessary (you wouldn’t believe how much unused crap some apps create!). Graphics programmers know the other side of the story – if you want to make sure something is actually created (as opposed to just having memory reserved), you need to issue a dummy draw call that uses it to “warm it up”. Ugly and annoying, but this has been the case since I first started using 3D hardware in 1999 – meaning, it’s pretty much a fact of life by this point, so get used to it. :)
In this presentation, it is mentioned how Crytek did it in the Far Cry engine, though it is mostly related to DirectX.
http://www.powershow.com/view/11f2b1-MzUxN/Far_Cry_and_DirectX_flash_ppt_presentation
I hope these links help in some way.
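For completeness, the "dummy draw" both quotes describe amounts to something like the following. It is sketched in WebGL/JavaScript for readability; the same pattern applies to OpenGL ES on iOS, and the function and attribute names here are illustrative:

    // Sketch: prewarm a linked program by drawing a degenerate (zero-area)
    // triangle with the GL state it will really be used with. `prewarmProgram`,
    // `stateSetup`, and 'a_position' are assumed names for illustration.
    function prewarmProgram(gl, program, stateSetup) {
      gl.useProgram(program);
      stateSetup(gl);                          // blending, color mask, etc. used in real rendering
      var buf = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(9), gl.STATIC_DRAW);  // three identical vertices
      var loc = gl.getAttribLocation(program, 'a_position');
      gl.enableVertexAttribArray(loc);
      gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.TRIANGLES, 0, 3);       // forces the driver to finish its deferred work
      gl.deleteBuffer(buf);
    }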
