I have an idea to extend the linked list by reducing the rendering area, i.e. by using a small render target that is then blitted into the main render target, tile by tile.
The main problem is how to render into the small render target as if it were just a scissor rect of the main render target.
The main idea of using a small render target is taken from here: Metal: limit MTLRenderCommandEncoder texture loading to only part of texture
Update: this is not for real-time rendering.
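For illustration only, here is a minimal sketch of that tile-by-tile idea, written in TypeScript against raw WebGL 2 rather than Metal: each tile is rendered into a small framebuffer through an off-centre projection (which is what makes the small target behave like a scissored region of the full image), and the result is then blitted into the big target. The function names, framebuffer handles, tile sizes, and the gl-matrix dependency are illustrative assumptions, not part of the original question:

import { mat4 } from "gl-matrix"; // assumed math helper, used only to build the off-centre frustum

function renderTiled(
  gl: WebGL2RenderingContext,
  smallFbo: WebGLFramebuffer,                    // tileW x tileH colour (+ depth) attachments
  bigFbo: WebGLFramebuffer,                      // W x H colour attachment
  drawScene: (projection: Float32Array) => void, // existing scene rendering, parameterised by projection
  W: number, H: number, tileW: number, tileH: number,
): void {
  const near = 0.1, far = 100.0, fovY = Math.PI / 3;
  const top = near * Math.tan(fovY / 2);
  const right = top * (W / H);

  for (let y = 0; y < H; y += tileH) {
    for (let x = 0; x < W; x += tileW) {
      // Off-centre frustum covering only this tile of the full image.
      const projection = new Float32Array(16);
      mat4.frustum(
        projection,
        -right + (2 * right * x) / W, -right + (2 * right * (x + tileW)) / W,
        -top + (2 * top * y) / H, -top + (2 * top * (y + tileH)) / H,
        near, far,
      );

      // Render this tile into the small target.
      gl.bindFramebuffer(gl.FRAMEBUFFER, smallFbo);
      gl.viewport(0, 0, tileW, tileH);
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
      drawScene(projection);

      // Blit the finished tile into its region of the big target.
      gl.bindFramebuffer(gl.READ_FRAMEBUFFER, smallFbo);
      gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, bigFbo);
      gl.blitFramebuffer(0, 0, tileW, tileH, x, y, x + tileW, y + tileH,
                         gl.COLOR_BUFFER_BIT, gl.NEAREST);
    }
  }
}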
I'm trying to render something to a texture using a library called regl. I managed to render an effect using two render targets, and I see the result in one of them.
Capturing the frame after I've done rendering to the target looks like this, and it represents a screen blit (a full-screen quad with this texture). This is how I would like it to work.
Once I pass this to some other regl commands, in some future frame, this texture attachment seems to get nuked. It is the same object that I'm trying to render with the same resource, but the data is gone. I have tried detaching the texture from the FBO, but that doesn't seem to help. What could I be looking for that would make this texture behave like this?
This ended up being a problem with Regl and WebViz. I was calling React.useState to set the resource that regl uses for the texture. For some reason that state update seems to have been invoked again, which "resets" the texture to an empty 1x1.
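For comparison, here is a minimal sketch of one way to avoid that kind of reset, assuming a React + regl setup; the hook name, the 64x64 size, and the choice of a ref instead of state are illustrative assumptions rather than the original code:

import REGL from "regl";        // regl ships its own TypeScript definitions
import { useRef } from "react";

// Keep the offscreen target in a ref so React re-renders never recreate it;
// recreating or resetting it is what leaves you with an empty 1x1 texture.
function useOffscreenTarget(regl: REGL.Regl): REGL.Framebuffer2D {
  const fboRef = useRef<REGL.Framebuffer2D | null>(null);
  if (!fboRef.current) {
    fboRef.current = regl.framebuffer({
      color: regl.texture({ width: 64, height: 64 }),
      depth: false,
    });
  }
  return fboRef.current;
}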
I am drawing a texture on a quad with sample count 4, then I am drawing a triangle with sample count 4. I feel there is no need to draw the textured quad with a sample count of 4; it affects performance. Is it possible to use different sample counts in a single program?
It's not possible to use different MSAA sample counts with a single render pipeline state or within a single render pass (render command encoder), because each of these objects is immutably configured with its sample count. To achieve MSAA, the render pass has one or more attachments which must be resolved to produce a final image.
If you need different sample counts for different draw calls (i.e., you want to draw some MSAA passes and some non-MSAA passes), first perform your multisample passes, then load the resolve textures of the final MSAA pass as the textures of the corresponding attachments in subsequent passes (using a loadAction of .load), and then perform your non-MSAA drawing.
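Sketched below is the same resolve-then-load pattern in WebGPU/TypeScript, purely for illustration since the answer above is about Metal; the texture views, the draw callbacks, and the 4x sample count are assumptions:

// Assumes the ambient GPU* types from "@webgpu/types".
function encodeMixedSampleCountFrame(
  device: GPUDevice,
  finalView: GPUTextureView,   // single-sample texture used afterwards
  msaaView: GPUTextureView,    // 4x multisampled texture, same size and format
  drawMsaaContent: (pass: GPURenderPassEncoder) => void,  // pipeline created with multisample.count = 4
  drawPlainContent: (pass: GPURenderPassEncoder) => void, // pipeline created with multisample.count = 1
): GPUCommandBuffer {
  const encoder = device.createCommandEncoder();

  // Pass 1: MSAA drawing; the multisampled attachment is resolved into the
  // single-sample texture, and the multisampled contents are then discarded.
  const msaaPass = encoder.beginRenderPass({
    colorAttachments: [{
      view: msaaView,
      resolveTarget: finalView,
      loadOp: "clear",
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: "discard",
    }],
  });
  drawMsaaContent(msaaPass);
  msaaPass.end();

  // Pass 2: non-MSAA drawing on top of the resolved image; loadOp "load"
  // plays the role of Metal's loadAction .load here.
  const plainPass = encoder.beginRenderPass({
    colorAttachments: [{
      view: finalView,
      loadOp: "load",
      storeOp: "store",
    }],
  });
  drawPlainContent(plainPass);
  plainPass.end();

  return encoder.finish();
}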
Problem
I've gotten to a point in my project where I'm rendering to WebGLRenderTargets and using them as textures in my main scene. It works, but it seems like I'm having it do a lot more work than it needs to. My generated textures only need to be 64x64, but because I'm using my main renderer (window width by window height) for both, it's unnecessarily rendering the WebGLRenderTargets at a much larger resolution.
I may be wrong, but I believe this increases both the processing required to draw to each RenderTarget and the processing required to then draw that large texture to the mesh.
I've tried using a second renderer, but I seem to get this error when trying to use a WebGLRenderTarget in renderer A after drawing to it from renderer B:
WebGL: INVALID_OPERATION: bindTexture: object not from this context
Example
For reference, you can see my abstracted page here (warning: due to the very issue I'm asking about, this page may lag for you). I'm running a simplex function on a plane in my secondary scene and chopping it up into sections using camera placement, then applying the segments to tile pieces via WebGLRenderTarget so that they're seamless but individual pieces.
Question
Am I correct in my assumption that using the same renderer size is much less efficient than rendering to a smaller renderer would be? And if so, what do you think the best solution for this would be? Is there currently a way to achieve this optimization?
The size parameters in renderer.setSize() are used by the renderer to set the viewport only when rendering to the screen.
When the renderer renders to an offscreen render target, the size of the texture rendered to is given by the parameters renderTarget.width and renderTarget.height.
So the answer to your question is that it is OK to use the same renderer for both; there is no inefficiency.
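A minimal three.js sketch of that point, assuming a recent release with renderer.setRenderTarget (older versions passed the target to renderer.render instead); the scenes, cameras, and the 64x64 size are placeholders:

import * as THREE from "three";

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight); // affects the screen viewport only
document.body.appendChild(renderer.domElement);

// The offscreen pass renders at 64x64 regardless of renderer.setSize().
const tileTarget = new THREE.WebGLRenderTarget(64, 64);
const tileScene = new THREE.Scene();   // add whatever should be baked into the 64x64 texture here
const tileCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const mainScene = new THREE.Scene();
const mainCamera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
mainCamera.position.z = 3;

// Use the render target's texture like any other texture in the main scene.
const tileMaterial = new THREE.MeshBasicMaterial({ map: tileTarget.texture });
mainScene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), tileMaterial));

function renderFrame(): void {
  renderer.setRenderTarget(tileTarget); // 64x64 offscreen pass
  renderer.render(tileScene, tileCamera);

  renderer.setRenderTarget(null);       // back to the screen-sized canvas
  renderer.render(mainScene, mainCamera);
  requestAnimationFrame(renderFrame);
}
requestAnimationFrame(renderFrame);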
Is it possible to bind multiple framebuffers and renderbuffers in OpenGL ES? I'm rendering into an offscreen framebuffer/renderbuffer and would prefer to just use my existing render code.
Here's what I'm currently doing:
// create/bind framebuffer and renderbuffer (for screen display)
// render all content
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content again (would like to skip this)
Here's what I'd like to do:
// create/bind framebuffer and renderbuffer (for screen display)
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content (only once)
Is this possible?
You cannot render into multiple framebuffers at once. You might be able to use MRT to render into multiple render targets (textures/renderbuffers) that belong to the same FBO by outputting multiple colors from the fragment shader, but not into multiple FBOs, like an offscreen FBO and the default framebuffer. But if I'm informed correctly, ES doesn't support MRT at the moment anyway.
But in your case you still don't need to render the scene twice. If you need it in an offscreen renderbuffer anyway, why not just use a texture instead of a renderbuffer to hold the offscreen data (it shouldn't make a difference)? That way you can render the scene once into the offscreen buffer (the texture) and then display this texture in the screen framebuffer by drawing a simple textured quad with a pass-through fragment shader.
In OpenGL ES, though, it may make a difference whether you use a renderbuffer or a texture to hold the offscreen data, as ES doesn't have glGetTexImage. So if you need to copy the offscreen data to the CPU, you won't get around glReadPixels and will therefore need a renderbuffer. But even in this case you don't need to render the scene twice: just introduce another FBO with a texture attached, render the scene once into that texture, and then render the texture into both the offscreen FBO and the screen framebuffer. This might still be faster than drawing the whole scene twice, though only profiling can tell you.
But if you need to copy the data to the CPU for processing, you can also just copy it from the screen framebuffer directly and don't need an offscreen FBO at all. And if you need the offscreen data only for GPU-based processing, then a texture is better than a renderbuffer anyway. So it might be useful to consider whether you actually need an additional offscreen buffer at all if it only contains the same data as the screen framebuffer; that might render the whole problem obsolete.
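Here is a minimal sketch of the render-once-then-textured-quad approach described above, written in TypeScript against WebGL 1, which closely mirrors OpenGL ES 2.0; the drawScene and drawTexturedQuad callbacks stand in for the existing render code and are assumptions:

function renderOnceAndPresent(
  gl: WebGLRenderingContext,
  width: number,
  height: number,
  drawScene: () => void,                          // existing scene rendering
  drawTexturedQuad: (tex: WebGLTexture) => void,  // fullscreen quad with a pass-through shader
): WebGLTexture {
  // Texture that will hold the offscreen result.
  // (In real code, create the texture and FBO once at startup, not every frame.)
  const colorTex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, colorTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  // FBO with the texture as its color attachment, plus a depth renderbuffer.
  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
  const depthRb = gl.createRenderbuffer()!;
  gl.bindRenderbuffer(gl.RENDERBUFFER, depthRb);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthRb);

  // Render the scene once, into the texture.
  gl.viewport(0, 0, width, height);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawScene();

  // Present: draw the texture to the screen framebuffer as a textured quad.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
  drawTexturedQuad(colorTex);

  return colorTex; // also usable for further GPU-side processing
}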
I have been thinking about how to implement a PostProcessing manager in my engine (Vanquish).
I am struggling to get my head around how a post-processing technique works. I have read and viewed the Post Processing sample on the creators.xna.com website, but it seems to re-render the model while applying the post-processing effect.
I can add this functionality for static models, but when it comes to redrawing the skinned models I get confused, as they already have their own techniques in their effects.
Can someone help me straighten my thoughts by pointing me in the right direction?
Generally post-processing works over the entire screen. This is pretty simple, really.
What you do is set a render target on the device (I assume you can figure out how to create a render target from the sample):
GraphicsDevice.SetRenderTarget(renderTarget);
From this point onward everything you render will be rendered to that render target. When you're done you can set the device back to drawing to the back buffer:
GraphicsDevice.SetRenderTarget(null);
And finally you can draw using your render target as if it were a texture (this is new in XNA 4.0, in XNA 3.1 you had to call GetTexture on it).
So, to make a post-processing effect:
Render your scene to a render target
Switch back to the back buffer
Render your render target full screen (using SpriteBatch will do) with a pixel shader that will apply your post-processing effect.
It sounds like you want to do this per model? That seems kind of strange but is certainly possible. Simply ensure your render target has a transparent alpha channel to begin with, and then draw it with alpha blending.
Or do you not mean post-processing at all? Do you actually wish to change the pixel shader the model is drawn with, while keeping the skinned model vertex shader?