Is WebGL Shader Caching Possible?

My question is similar to Saving/Loading compiled WebGL shaders, but I don't want to pre-compile my shaders. Instead, I just want the browser to store the shaders it compiles longer than the default. Right now, every time I refresh the page the shaders have to be recompiled.
I understand the security and portability issues raised in answers like this one and this one. It seems that these are both non-issues assuming that the browser is caching shaders that it compiled for my web app.
Assuming the same OS + browser + GPU + driver combination, is there a way to make the browser cache compiled shaders in such a way that shader compilation will not be required after each time the page is refreshed?

There is nothing the user can do to force the browser to cache shaders. It's up to the browser to implement shader caching and to decide when to use it. Further, the browser relies on the OS to provide a way to cache compiled shaders, so if the OS doesn't support it then of course the browser can't either. As an example, currently on macOS WebGL runs on top of OpenGL, and OpenGL on macOS provides no way to cache compiled shaders.
For example, search for 'BINARY' in this official Apple OpenGL feature table and you'll see the number of binary formats for caching is 0. In other words, you can't cache compiled OpenGL shaders on macOS.
I don't know Metal that well; it's possible that some future version of WebGL could be implemented on top of Metal, and maybe Metal provides a way.
Chrome can cache shaders. Here's the code for caching them. But it can't if the OS doesn't support it.
Then there's the question of when to clear or not use the cache. Should the cache be cleared if the user presses 'refresh'? Note that 'refresh' is a signal from the user NOT to use the cached page. There are many ways to revisit a page: click a link to it again, pick it from a bookmark, or enter it in the URL bar. None of these clear the cache. Clicking the 'Refresh' button, AFAIK, ignores the cache for at least the specific request (i.e., the page itself) but not for the things the page references.
Should the cache be cleared if the user chooses to empty the browser's normal cache of web resources? Clearly the cache should be cleared any time the driver changes version numbers. There may be other reasons to clear the cache, as the browser needs to make sure it never delivers a bad or out-of-date shader.
As for Windows, I believe DirectX allows caching shaders, and Chrome, via ANGLE, caches them. A quick test on Windows seems to bear this out. Going to shadertoy.com, the first time I load the page it takes a while; the next time it doesn't. Another test: pick a complex shader on Shadertoy, edit some constant in the shader (for example change 1.0 to 1.01) and press the compile button. Look at the compile time. Now change it back to 1.0 and press compile again. In my tests the second compile takes much less time, suggesting the shader was cached.
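If you want to reproduce that kind of timing test in your own app, a rough sketch follows (the timeCompile helper and the shader sources are hypothetical; querying LINK_STATUS forces the browser to wait for the compile/link to finish, so the elapsed time is meaningful for before/after comparisons):
// Rough sketch: time a compile + link; the LINK_STATUS query blocks until linking finishes
function timeCompile(gl, vertexSource, fragmentSource) {
  const start = performance.now();
  const vs = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vs, vertexSource);
  gl.compileShader(vs);
  const fs = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fs, fragmentSource);
  gl.compileShader(fs);
  const program = gl.createProgram();
  gl.attachShader(program, vs);
  gl.attachShader(program, fs);
  gl.linkProgram(program);
  const ok = gl.getProgramParameter(program, gl.LINK_STATUS);  // blocks here
  console.log('link ok:', ok, 'took', (performance.now() - start).toFixed(1), 'ms');
  return program;
}
A much shorter second run with the same OS + browser + GPU + driver combination suggests the driver served a cached binary.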
I have no idea if Firefox caches shaders. Safari doesn't since it only runs on platforms that don't support caching.

Related

WebGL how to avoid long shader compile stalling a tab

I have a giant shader that takes more than a minute to compile, which completely stalls the whole browser during the process. As far as I know, shader compilation cannot be made asynchronous, so I can't run other WebGL commands while waiting for the compilation to be done.
I already tried the following:
don't use that particular shader for some time - this doesn't work, because most other WebGL commands will wait for it to finish, even if that shader program is never made active
use another context - same as above, but even WebGL commands from another context will cause the stall
use OffscreenCanvas in a web worker - this doesn't avoid the stall either; even though it runs in a worker, it stalls the whole browser. Even if I wait a few minutes after the command to link the program before issuing any other WebGL command, the browser stalls (as if nothing was happening during that time)
Another problem is that it sometimes crashes WebGL (context loss), which crashes all contexts on the page (or in the worker).
Is there something I can do to avoid stalling the browser?
Can I split my shader into multiple parts and compile them separately?
This is how my program initialization looks; can it be changed somehow?
let vertexShader = gl.createShader(gl.VERTEX_SHADER);
let fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
let program = gl.createProgram();
gl.shaderSource(vertexShader, vertexSource);
gl.shaderSource(fragmentShader, fragmentSource);
gl.compileShader(vertexShader);
gl.compileShader(fragmentShader);
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);
let status = gl.getProgramParameter(program, gl.LINK_STATUS);
let programLog = gl.getProgramInfoLog(program);
Waiting for minutes after the call to linkProgram doesn't help, even in a worker.
As a final note: I can have, for example, a Windows game using OpenGL running that is not affected by this (the game is running, I start compiling this shader in the browser, and the game continues to run fine while the browser stalls).
Update
Chromium added the KHR_parallel_shader_compile extension which allows you to query if a shader is done compiling.
Unfortunately only Chromium (Chrome/Edge/Brave/etc...) has implemented it as of January 2021
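A rough sketch of how the extension can be used with initialization code like the one in the question (it reuses those variable names and falls back to a normal blocking query if the extension is unavailable):
const ext = gl.getExtension('KHR_parallel_shader_compile');
gl.compileShader(vertexShader);
gl.compileShader(fragmentShader);
gl.linkProgram(program);
function checkDone() {
  // COMPLETION_STATUS_KHR can be polled without forcing a blocking wait
  if (ext && !gl.getProgramParameter(program, ext.COMPLETION_STATUS_KHR)) {
    requestAnimationFrame(checkDone);  // not finished yet; check again next frame
    return;
  }
  // safe to query LINK_STATUS now without a long stall
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(gl.getProgramInfoLog(program));
  }
  gl.useProgram(program);
}
requestAnimationFrame(checkDone);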
Original answer
There is no good solution.
Browsers on Windows use DirectX because OpenGL doesn't ship by default on many machines and because lots of other features needed for the browser are incompatible with OpenGL.
DirectX takes a long time to compile shaders. Only Microsoft can fix that. Microsoft has provided source to an HLSL shader compiler but it only works with DX12.
Some people suggest allowing webpages to provide binary shaders, but that's never going to happen, for 2 very important reasons:
They aren't portable
A webpage would have to provide 100s or 1000s of variations of binary shaders: one for every type of GPU * every type of driver * every platform (iOS, Android, Pi, Mac, Windows, Linux, Fire, ...). Webpages are supposed to load everywhere, so shader binaries are not a solution for the web.
It would be a huge security issue.
Having users download random binary blobs that are handed to the OS/GPU to execute would be a huge source of exploits.[1]
Note that some browsers (Chrome in particular) cache shader binaries locally behind the scenes but that doesn't help first time compilation.
So basically, at the moment, there is no solution. You can make simpler shaders or compile fewer of them at once. People have asked for an async extension to compile shaders, but there's been no movement.
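One partial mitigation, sketched below under the assumption that you build several programs: issue all compileShader/linkProgram calls first and only query status afterwards, so a driver that compiles in the background isn't forced to finish each program before the next one is submitted (the programSpecs array is hypothetical):
// Sketch: batch all compiles and links, then check status at the end
function buildPrograms(gl, programSpecs) {
  const programs = programSpecs.map(({vertexSource, fragmentSource}) => {
    const vs = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vs, vertexSource);
    gl.compileShader(vs);
    const fs = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(fs, fragmentSource);
    gl.compileShader(fs);
    const program = gl.createProgram();
    gl.attachShader(program, vs);
    gl.attachShader(program, fs);
    gl.linkProgram(program);
    return program;
  });
  // the status queries are the synchronization points, so they go last
  for (const program of programs) {
    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
      console.error(gl.getProgramInfoLog(program));
    }
  }
  return programs;
}
This doesn't shorten the total compile time, but it avoids forcing a synchronization on every individual program.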
Here's a thread from 2 years ago on the topic:
https://www.khronos.org/webgl/public-mailing-list/public_webgl/1702/msg00039.php
Just a personal opinion, but I'm guessing the reason there isn't much movement on an async extension is that it's far more work to implement than it sounds, and plenty of sites with complex shaders already exist and seem to work.
[1] The shaders you pass to WebGL as GLSL text are compiled by the browser, checked for all kinds of issues, and rejected if any of the WebGL rules are broken. They are then re-written to be safe, with bug workarounds inserted, variable names re-written, clamping instructions added, and sometimes loops unrolled; all kinds of things are done to make sure you can't crash the driver. You can use the WEBGL_debug_shaders extension to see the shader that's actually sent to the driver.
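For example, a small sketch (shader here is any shader object you have already compiled):
// Sketch: inspect the translated shader that the browser actually hands to the driver
const debugExt = gl.getExtension('WEBGL_debug_shaders');
if (debugExt) {
  console.log(debugExt.getTranslatedShaderSource(shader));
}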
A binary shader is a blob you give to the driver; you have no chance to inspect it or verify it's not doing something bad, as it's a driver-proprietary binary. There is no documentation on what's in it or its format, and both can change with every GPU and every driver. You just have to trust the driver, and drivers are not trustworthy. On top of that, it's untrusted code executing on your machine. It would be no different from downloading random .exes and executing them, and therefore it won't happen.
As for WebGPU: no, there is no more security risk than with WebGL. Even if it uses a binary format, that binary format will be for WebGPU itself, not the driver. WebGPU will read the binary, check that all the rules are followed, then generate a shader that matches the user's GPU. That generated shader could be GLSL, HLSL, Metal Shading Language, SPIR-V, whatever works, but just like WebGL it will write a shader only after verifying that all the rules are followed, and the shader it writes, just like WebGL's, will include workarounds, clamping, and whatever else is needed to make it safe. Note that as of today (2018/11/30) the shader format for WebGPU is undecided; Google and Mozilla are pushing for a subset of SPIR-V in binary, while Apple and Microsoft are pushing for WHLSL, a variation of HLSL in text.
Note that when the browser says "Rats! WebGL hit a snag", that doesn't mean the driver crashed. Rather, it nearly always means the GPU was reset for taking too long. In Chrome (not sure about other browsers), when Chrome asks the GPU (via the driver) to do something, it starts a timer. If the GPU doesn't finish within 2-5 seconds (not sure of the actual timeout), Chrome will kill the GPU process. This includes compiling shaders, and since it's DirectX that takes the longest to compile, this is why the issue comes up most often on DirectX.
On Windows, even if Chrome didn't do this, Windows does. This is mostly because most GPUs (maybe all, as of 2018) cannot multitask the way a CPU can. That means if you give them 30 minutes of work, they will do it without interruption for 30 minutes, which would basically freeze your machine, since your machine needs the GPU to draw application windows etc. In the past Windows got around this, just like Chrome, by resetting the GPU if something took too long. It used to be that Linux and Mac would just freeze for those 30 minutes or crash the OS, since the OS would expect to be able to draw graphics and not be able to. Sometime in the last 8 years Mac and Linux got better at this. In any case, Chrome needs to be proactive, so it uses its own timer and kills things that take too long.
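If you want your page to at least notice when such a reset happens instead of silently showing a dead canvas, a minimal sketch, assuming canvas is the element your context came from:
// Sketch: reacting to a lost / restored WebGL context after a GPU reset
canvas.addEventListener('webglcontextlost', (e) => {
  e.preventDefault();  // signal that we intend to handle restoration ourselves
  console.warn('WebGL context lost');
});
canvas.addEventListener('webglcontextrestored', () => {
  // all shaders, programs, buffers and textures must be re-created here
  console.info('WebGL context restored');
});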

WebGL loads very slowly on iPhone, and UGUI items are squeezed into the center

I built a WebGL player and hosted it on an IIS server. Android loads the WebGL build with no problem, but
if I use an iPhone to load it, it takes a very long time or gets stuck at about 90%.
If the iPhone does manage to load it, the UI items are all squeezed into the center.
Does Unity WebGL not support iPhone? Did I do something wrong or miss something?
If I understand you correctly, WebGL is just very slow on iOS or sometimes crashes, right?
Well, the bad news is that Unity says that mobile browsers aren't supported at all.
See this answer
They recommend using the WASM export instead of asm.js, and personally I think you should keep the app as small as possible. Reduce the amount of data to a minimum, and switch off as many built-in packages as you can using the Package Manager.

How to limit memory usage by libjpeg

Short version: iOS's UIImageJPEGRepresentation() crashes on large images. I'm trying to use & modify libjpeg to respect the max_memory_to_use field, which it's ignoring.
Long version: I'm writing an iOS app which crashes when converting a large image to JPEG after prolonged usage reduces available memory (a trickling leak involving quirks of #autoreleasepool{}, but we're addressing that separately). For images captured by the device camera (normal use, actual size) UIImageJPEGRepresentation() can require up to 200MB (!), crashing if not available. This is a problem with UIImageJPEGRepresentation() which a web search shows goes back for years and seems unsolved; filing a tech support request with Apple elicits "file a bug report" which doesn't solve my immediate customer needs.
To resolve this, I'm bypassing UIImageJPEGRepresentation() by using libjpeg (http://www.ijg.org) and digging into its operation, which shows exactly the same problem (presumably Apple uses it in iOS). libjpeg does provide a means to specify maximum memory usage via the parameter max_memory_to_use a la:
struct jpeg_compress_struct cinfo;
jpeg_create_compress(&cinfo);                 /* cinfo.mem is only valid after creation */
cinfo.mem->max_memory_to_use = 10*1024*1024;  /* request a 10 MB ceiling */
which would be used by the libjpeg function jpeg_mem_available(j_common_ptr cinfo, long min_bytes_needed, long max_bytes_needed, long already_allocated) (in jmemnobs.c) but which, in the standard implementation, is completely ignored (the comment even says "Here we always say, 'we got all you want bud!'"). Blender has altered the function (http://download.blender.org/source/chest/blender_2.03_tree/jpeg/jmemmac.c) to respect the parameter, but it seems I'm missing something to make it work in my app, or it's just being ignored elsewhere anyway.
So: how does one modify jmemnobs.c in libjpeg to actually & seriously respect memory limitations, rather than jokingly ignore them?

values of levelOfDetail and levelsOfDetailBias to render pdf on CATiledLayer in ios

I am developing a project in which I render PDFs on CATiledLayers. I have used the CGPDF class methods to render the PDF and succeeded.
I would like to know what values to use for levelsOfDetail and levelsOfDetailBias to avoid memory issues in either normal or zoomed mode.
Right now I am setting the values as below.
tiledLayer1.levelsOfDetail = 1;
tiledLayer1.levelsOfDetailBias = 30;
Am I using appropriate values, and is memory affected by these values?
I ask because I am facing memory issues when zooming the page. I have made sure there are no memory leaks and the code is written efficiently.
My zoomScale ranges between 1.0 and 2.0.
Can anyone help me avoid the memory issue and suggest values to use for the above parameters?
Thanks in advance...
You can try reducing the levelsOfDetailBias. But one thing you should keep in mind is that, whatever you do, memory warnings may still appear; you just need to handle them.
For instance, a simple PDF page may not trigger a memory warning at any zoom level, whereas a PDF page with high-quality images may. Memory warnings also depend on the device and on how much memory is available to the application.

Super slow Image processing on Android tablet

I am trying to implement the SLIC superpixel algorithm on an Android tablet (SLIC).
I ported the C++ code to the Android environment using the STL and so on. What the application does is take an image from the camera and send the data to native code for processing.
I got the app running, but the problem is that it takes 20-30 seconds to process a single frame (640 x 400), while on my notebook the Visual Studio build finishes almost instantly!
I checked for memory leaks and there aren't any... Is there anything that might make the computation so much more expensive than in VS2010 on my notebook?
I know this question might be very open and not really specific but I'm really in the dark too. Hope you guys can help.
Thanks
PS. I checked the running time of each step; it seems that every line of code simply takes longer to execute. I don't see any specific function that takes disproportionately longer than usual.
PPS. Do you think any of the following may be causing the slowdown?
Memory size: investigated; the GC doesn't report much pause time during the native call
STL library: not investigated yet; is it possible that STL facilities such as vector, max and min cause a significant slowdown?
The Android environment itself?
The lower hardware specification of the Android tablet (Acer Iconia Tab: 1 GHz Nvidia Tegra 250 dual-core processor with 1 GB of RAM)?
Would it be better to run it in Java?
PPPS. If you have time, please check out the code.
I've taken a look at your code and can make the following recommendations:
First of all, you need to add the line APP_ABI := armeabi-v7a to your Application.mk file. Otherwise your code is compiled for the old armv5te architecture, which has no FPU (all floating-point arithmetic is emulated), fewer available registers, and so on.
Your SLIC implementation makes intensive use of double floating-point values for computation. You should replace them with float wherever possible, because ARM still lacks hardware support for the double type.
