I am trying to draw the same WebGL scene using two canvases side-by-side. Is it possible? So far I haven't been lucky.
The idea goes as follows:
I load my geometry
I set up two GL contexts, one per canvas
I call drawElements on the first context passing my geometry
I call drawElements on the second context passing my geometry
Results:
Only the first invocation is successful. The second context executes gl.clear correctly but no geometry is displayed.
What am I missing? Is this a spec constraint, or a limitation in Firefox's and Safari's implementations?
Thanks,
Yes, you can have more than one WebGL context on a page, but each context must manage its own resources. You cannot create buffers or textures on one context and then use them in the other, so with this approach you would need to load all your resources twice to render the same scene.
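To make that concrete, here is a minimal sketch (the canvas ids viewA/viewB and the variable names are just illustrative):
var glA = document.getElementById('viewA').getContext('webgl');
var glB = document.getElementById('viewB').getContext('webgl');
var vertexData = new Float32Array([0, 0, 0, 1, 1, 0]);

var bufA = glA.createBuffer();           // this buffer belongs to glA only
glA.bindBuffer(glA.ARRAY_BUFFER, bufA);
glA.bufferData(glA.ARRAY_BUFFER, vertexData, glA.STATIC_DRAW);
// glB.bindBuffer(glB.ARRAY_BUFFER, bufA); // INVALID_OPERATION: bufA is not glB's object

// To draw the same geometry in the second context, it has to be uploaded again:
var bufB = glB.createBuffer();
glB.bindBuffer(glB.ARRAY_BUFFER, bufB);
glB.bufferData(glB.ARRAY_BUFFER, vertexData, glB.STATIC_DRAW);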
That's certainly doable, but it will be very memory intensive (as you might imagine). A better solution would be to create a single canvas that is 2x the width you need and use gl.viewport and gl.scissor to render each view of the scene to half of it.
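Roughly, that split-screen rendering looks like this (gl, viewWidth, viewHeight and drawScene() are placeholders for your own context, sizes, and render function):
gl.enable(gl.SCISSOR_TEST);

// left half of the double-width canvas
gl.viewport(0, 0, viewWidth, viewHeight);
gl.scissor(0, 0, viewWidth, viewHeight);
drawScene();

// right half (same scene, possibly a different camera)
gl.viewport(viewWidth, 0, viewWidth, viewHeight);
gl.scissor(viewWidth, 0, viewWidth, viewHeight);
drawScene();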
Or, make a full viewport size canvas and use the scissor/viewport settings to match other html elements. See this Q&A: How can we have display of same objects in two canvas in webgl?
That should have thrown an error. As Toji said, two WebGLContexts cannot share resources. There is, however, a proposal for this. Check here. I suspect we'll see something similar in WebGL soon.
I use multiple WebGL contexts on a single page by assigning the same class to the canvases that should render the WebGL content, and putting something specific in the canvas's inner HTML:
function specific(what) {
  switch (what) {
    case 'front':
      // do something
      break;
    case 'side':
      // do something else
      break;
  }
}

function webGLStart() {
  var canvasArray = document.getElementsByClassName("webGLcanvas");
  for (var index = 0; index < canvasArray.length; ++index) {
    initWebGlContext();
    specific(canvasArray[index].innerHTML);
  }
}
</script>
</head>
<body onload="webGLStart();">
  <canvas class="webGLcanvas">front</canvas>
  <canvas class="webGLcanvas">side</canvas>
</body>
You can find a working example here.
As earlier answers (Toji and Chris Broadfoot) explained, you need to keep in mind that you cannot share resources between WebGL contexts.
Edit: the code is now available on GitHub.
Related
I have a 2D game I've been working on in WebGL, and, with few exceptions, I use one default program for drawing sprites onscreen. I call gl.useProgram once, on initialization, and if I ever need to use a different program, I reset the program to the default when I'm done.
However, I see examples where others call gl.useProgram every time they draw, and therefore at least once on every frame, or possibly as many times as there are quads to be rendered, in a worst-case scenario.
For the sake of peace of mind, I'd like to use gl.useProgram for every draw call, so I always know exactly which program is being used, but only if it's still relatively efficient to do so.
My question is: if you use gl.useProgram to set the program to the program already in use, is there a performance impact, or does WebGL/JavaScript essentially "know" that the program remains unchanged?
A modern GPU driver applies state changes before issuing a draw command, effectively filtering out any in-between changes that have no side effects. So setting the same program several times is unlikely to have a serious impact. However, setting the same state needlessly produces redundant-state-change warnings in many graphics debuggers, and it is considered undesirable behavior. After you set a program with gl.useProgram it remains active in the context until it is replaced with another program or the context is lost. Even deleting the program with gl.deleteProgram doesn't make the currently bound program invalid.
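For example, a small sketch of that last point (gl and prog are assumed to be an existing context and an already linked program):
gl.useProgram(prog);                // prog becomes the current program
gl.deleteProgram(prog);             // flags prog for deletion...
gl.drawArrays(gl.TRIANGLES, 0, 3);  // ...but it keeps working while it is still current
gl.useProgram(null);                // only now is the deleted program actually released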
if you use gl.useProgram to set the program to the program already in use, is there a performance impact
Doing more work than less always has some impact. The question I think you're really asking is whether it's too much impact. Only you can tell. I suspect it's tiny, but how about just trying it and seeing if you notice a difference?
does webGL/javascript essentially "know" that the program remains unchanged?
There's no way to guarantee this is checked. You could look into the source code of every browser and check back every few months to see that it hasn't changed.
But if you're concerned, just check yourself:
let globalLastProgram = null;

function checkedUseProgram(program) {
  if (globalLastProgram !== program) {
    globalLastProgram = program;
    gl.useProgram(program);
  }
}
Now call checkedUseProgram instead of gl.useProgram.
Or, if you want, wrap useProgram itself:
WebGLRenderingContext.prototype.useProgram = function(origFn) {
  return function(program) {
    if (this.lastProgram !== program) {
      this.lastProgram = program;
      origFn.call(this, program);
    }
  };
}(WebGLRenderingContext.prototype.useProgram);
If you don't like lastProgram being on the context, you can make a wrapping function:
function addWrapper(gl) {
  gl.useProgram = function(origFn) {
    let lastProgram;
    return function(program) {
      if (lastProgram !== program) {
        lastProgram = program;
        origFn.call(this, program);
      }
    };
  }(gl.useProgram);
}

addWrapper(gl);
Of course, all of those will have a tiny impact as well, though I suspect it's hard to measure.
Please refer to the background section below if the following does not make much sense; I omitted most of the context to make the problem as clear as possible.
I have two WebGLRenderingContexts with the following traits:
WebGLRenderingContext: InputGL (Allows read and write operations on its framebuffers.)
WebGLRenderingContext: OutputGL (Allows only write operations on its framebuffers.)
GOAL: Superimpose InputGL's renders onto OutputGL's renders periodically within 33ms (30fps) on mobile.
Both the InputGL's and OutputGL's framebuffers get drawn to from separate processes. Both are available (and with complete framebuffers) within one single window.requestAnimationFrame callback. As InputGL requires read operations, and OutputGL only supports write operations, InputGL and OutputGL cannot be merged into one WebGLRenderingContext.
Therefore, I would like to copy the framebuffer content from InputGL to OutputGL in every window.requestAnimationFrame callback. This allows me to keep read/write supported on InputGL and only use write on OutputGL. Neither of them has a (regular) canvas attached, so a canvas overlay is out of the question. I have the following code:
// customOutputGLFramebuffer is the WebXR API's extended framebuffer, which does not allow read operations
let fbo = InputGL.createFramebuffer();
InputGL.bindFramebuffer(InputGL.FRAMEBUFFER, fbo);

// TODO: Somehow get fbo data into OutputGL (I guess?)

OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
// Drawing to OutputGL here works, and it gets drawn on top of the customOutputGLFramebuffer
I am not sure if this requires binding in some particular order, or some kind of texture manipulation; any help with this would be greatly appreciated.
Background: I am experimenting with Unity WebGL in combination with the unreleased WebXR API. WebXR uses its own, modified WebGLRenderingContext which disallows reading from its buffers (as a privacy concern). However, Unity WebGL requires reading from its buffers. Having both operate on the same WebGLRenderingContext gives errors on Unity's read operations, which means they need to be kept separate. The idea is to periodically superimpose Unity's framebuffer data onto WebXR's framebuffers.
WebGL2 is also supported in case this is required.
You cannot share resources across contexts, period.
The best you can do is use one context as a source for the other via texImage2D.
For example, if the source context is using a canvas, draw the framebuffer to the canvas and then:
destContext.texImage2D(......., srcContext.canvas);
If the source is using an OffscreenCanvas, call transferToImageBitmap on it and then pass the resulting ImageBitmap to texImage2D.
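A rough sketch of the canvas route, keeping the srcContext/destContext naming from above (the texture setup here is only illustrative):
var tex = destContext.createTexture();
destContext.bindTexture(destContext.TEXTURE_2D, tex);
// upload whatever srcContext's canvas currently shows into destContext's texture
destContext.texImage2D(destContext.TEXTURE_2D, 0, destContext.RGBA,
                       destContext.RGBA, destContext.UNSIGNED_BYTE,
                       srcContext.canvas);
destContext.texParameteri(destContext.TEXTURE_2D, destContext.TEXTURE_MIN_FILTER, destContext.LINEAR);
destContext.texParameteri(destContext.TEXTURE_2D, destContext.TEXTURE_WRAP_S, destContext.CLAMP_TO_EDGE);
destContext.texParameteri(destContext.TEXTURE_2D, destContext.TEXTURE_WRAP_T, destContext.CLAMP_TO_EDGE);
// ...then draw a full-screen textured quad with destContext's own program and buffers.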
I noticed that SwapBuffers functionality is not there in WebGL. If that is the case, how do we change state across draw calls and draw multiple objects in WebGL, and at what point in time is SwapBuffers called internally by WebGL?
First off, there is no SwapBuffers in OpenGL. SwapBuffers is a platform-specific call that is not part of OpenGL.
In any case, the equivalent of SwapBuffers is implicit in WebGL. If you call any WebGL function that affects the drawing buffer (e.g. drawArrays, drawElements, clear, ...), then the next time the browser composites the page it will effectively "swap buffers".
Note that whether it actually "swaps" or "copies" is up to the browser. For example, if antialiasing is enabled (the default), then internally the browser will effectively do a "copy", or rather a "blit", that resolves the internal multisample buffer into something that can actually be displayed.
Also note that because the swap is implicit, WebGL will clear the drawing buffer before the next render command. This is to make the behavior consistent regardless of whether the browser decides to swap or copy internally.
You can force a copy instead of a swap (and avoid the clearing) by passing {preserveDrawingBuffer: true} to getContext as the 2nd parameter, but of course at the expense of disallowing a swap.
It's also important to be aware that the swap itself, and when it happens, is semi-undefined. In other words, calling gl.drawXXX or gl.clear will tell the browser to swap/copy at the next composite, but between that time and the time the browser actually composites, other events could get processed. The swap won't happen until your current event exits (for example a requestAnimationFrame callback), but between the time your event exits and the time the browser composites, more events could happen (like, say, mousemove).
The point of all that is: if you don't use {preserveDrawingBuffer: true}, you should always do all of your drawing during one event, usually requestAnimationFrame, otherwise you might get inconsistent results.
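A quick sketch of both options (the canvas variables are placeholders):
// Option 1: do all drawing for a frame inside one event, typically requestAnimationFrame.
var gl = someCanvas.getContext('webgl');

function render() {
  gl.clear(gl.COLOR_BUFFER_BIT);
  // ...all draw calls for this frame go here, before the callback returns...
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

// Option 2: keep the drawing buffer between composites (forces a copy, never a swap).
var gl2 = otherCanvas.getContext('webgl', { preserveDrawingBuffer: true });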
AFAIK, the swap-buffers call usually doesn't change any visible GL state. There are plenty of GL calls to change that state between draw calls, though. As for buffer swapping, the browser does that for you sometime after a callback with rendering code returns (and yes, there's no direct control over when this will actually happen).
I have a Polymer.dart element with multiple attributes, e.g.
<code-mirror lines="{{lines}}" widgets="{{widgets}}">
</code-mirror>
On some occasions lines and widgets change simultaneously; sometimes only widgets changes.
I would like to re-render the component once, regardless of how many properties change in the same turn of the event loop.
Is there a good built-in way to achieve that?
An additional trouble here is that the interpretation of widgets depends on the content of lines, and the order in which the linesChanged and widgetsChanged callbacks arrive is browser dependent; e.g. on Firefox widgetsChanged arrives before linesChanged, and the component enters an inconsistent state if I do any state management in the linesChanged callback.
Right now I use an auxiliary class like this:
// assumes: import 'dart:async' as async;
class Task {
  final _callback;
  var _task;

  Task(this._callback);

  schedule() {
    if (_task == null) {
      _task = new async.Timer(const Duration(milliseconds: 50), () {
        _task = null;
        _callback();
      });
    }
  }
}
final renderTask = new Task(this._render);
linesChanged() => renderTask.schedule();
widgetsChanged() => renderTask.schedule();
but this looks pretty broken. Maybe my Polymer element is architected incorrectly (i.e. having two attributes, with widgets depending on lines)?
*Changed methods are definitely the right way to approach the problem. However, you're trying to force synchronicity in an async delivery system. Generally we encourage folks to observe property changes and react to them and not rely on methods being called in a specific order.
One thing you could use is an observe block. In that way, you could define a single callback for the two properties and react accordingly:
http://www.polymer-project.org/docs/polymer/polymer.html#observeblock
Polymer's data binding system does the least amount of work possible to rerender DOM. With the addition of Object.observe(), it's even faster. I'd have to see more about your element to understand what needs rendering but you might be creating a premature optimization.
I think there are three possible solutions (see this example: http://jsbin.com/nilim/3/edit):
1. Use an observe block with one callback for multiple attributes (the callback will only be called once).
2. Create an additional attribute (e.g. isRender) that is set by the other two attributes (lines and widgets), and add a change watcher (e.g. isRenderChanged()) in which you call your expensive render method.
3. Specify a flag (e.g. autoUpdate) that can be set to true or false. When autoUpdate is false you have to call the render method manually; when it is true, render() is called automatically.
The disadvantage of solution 1 is that you can only have one behavior for all observed attributes. Sometimes you want to do different things when you set a specific attribute (e.g. size) before you call render. That's not possible with solution 1.
I don't think there is a better way. You may omit the 50 ms delay (just use Timer.run(() {...});), as the job gets scheduled behind the ongoing property changes anyway (my experience, not 100% sure though).
So, my idea is to do something like this (the code is simplified, of course):
var gl;

function Renderer(canvas) {
  gl = this.gl = canvas.getContext('experimental-webgl');
}

function Object() {
}

Object.prototype.render = function() {
  // ...
  gl.drawElements(...);
};
The gl variable itself can be placed into a namespace for better consistency; it can also be encapsulated by wrapping all the code in an anonymous function to make sure it won't clash with anything.
I can see one obvious tradeoff here: problems with running multiple WebGL canvases on the same page. But I'm totally fine with it.
Why do that? Because otherwise it's more painful to call any WebGL function: you have to pass your renderer as a parameter here and there. That's actually the thing I don't like about Three.js: all the graphics stuff is handled inside a Renderer object, which makes the whole Renderer object huge and complicated.
If you use a globally visible context, you don't have to bother with OpenGL constants, you don't have to worry about your renderer object's visibility, and so on.
So, my question is: should I expect any traps with this approach? Aside from potential emptiness of the gl variable, of course.
Define bad
Lots of WebGL programs do this. OpenGL works this way by default, since its functions are global in scope. In regular OpenGL you have to call eglMakeCurrent (or the equivalent) to switch contexts, which is effectively just doing a hidden gl = contextToMakeCurrent under the hood.
So, basically, it's up to you. If you think you'll someday need multiple WebGL contexts, then it might be wise not to have your contexts live in global variables. But you can always fall back to the eglMakeCurrent style of coding, as sketched below. Both have their pluses and minuses.
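For what it's worth, that fallback can stay tiny; here is a sketch using the Renderer constructor from the question (the canvas ids are placeholders):
var gl = null;  // the single global the rest of the code uses

// Analogous to eglMakeCurrent: switch which context the global refers to.
function makeCurrent(renderer) {
  gl = renderer.gl;
}

var front = new Renderer(document.getElementById('frontCanvas'));
var side  = new Renderer(document.getElementById('sideCanvas'));

makeCurrent(front);
// ...gl.* calls now target the first canvas...
makeCurrent(side);
// ...gl.* calls now target the second canvas...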