Here's my problem:
I want to partition the canvas into 4 virtual quadrants, and in every quadrant I want to render the same scene with different colors (with different fragment shaders), to compare some effects in real time. I am not sure how to do that.
Should I render the same scene 4 times into 4 different textures and then fill 4 rectangles with those textures? Or should I make another fragment shader and manually fill all the fragments from those textures? Is there any possibility of using renderbuffer objects to increase performance?
Thanks in advance,
You don't need render-to-texture for this (though that is one way to do it). It can actually be done much more simply with gl.viewport.
gl.viewport simply sets a rectangle on the canvas that you want to render into; the scene is mapped into that rectangle, and nothing outside it gets drawn. Typically you set it to the same size as the canvas because you want to render fullscreen, but in your case you can do the following:
// Clear the entire canvas; gl.clear does not respect the viewport
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// Render lower left quadrant (gl.viewport's origin is the lower left corner of the canvas)
gl.viewport(0, 0, canvas.width/2, canvas.height/2);
drawSceneWithShader(shader[0]);
// Render lower right quadrant
gl.viewport(canvas.width/2, 0, canvas.width/2, canvas.height/2);
drawSceneWithShader(shader[1]);
// Render upper left quadrant
gl.viewport(0, canvas.height/2, canvas.width/2, canvas.height/2);
drawSceneWithShader(shader[2]);
// Render upper right quadrant
gl.viewport(canvas.width/2, canvas.height/2, canvas.width/2, canvas.height/2);
drawSceneWithShader(shader[3]);
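// A minimal sketch of the drawSceneWithShader helper assumed above -- here
// "program" is one of the four compiled/linked WebGLPrograms; the uniform,
// attribute, and draw-call helpers are placeholders for whatever your scene
// normally does:
function drawSceneWithShader(program) {
  gl.useProgram(program);                  // switch to this quadrant's shaders
  setSceneUniformsAndAttributes(program);  // placeholder: your usual setup
  issueSceneDrawCalls();                   // placeholder: your usual draw calls
}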
When rendering, just draw the whole scene like normal; you don't need to do anything special to account for the new viewport. (If you're doing any mouse picking or similar, though, you do need to take the viewport into account!)
So I'm doing some graph drawing using GL_LINE_STRIPs, and I am using a multisampled buffer so the lines don't look so jagged. The problem is that I have some lines in the background of the graph that act as a legend. The multisampling kind of screws those lines up, because they are meant to be exactly 1 pixel thick, but the multisampling will sometimes spread a line over 2 pixels that are slightly dimmer than the original colour, making the lines look different from each other.
Is it possible to render those legend lines directly to the resolved framebuffer, and then have the multisampled stuff drawn on top? This would effectively leave the background legend lines un-multisampled while still multisampling the graph lines.
Is this possible? I just want to know before I dive into this and later find out you can't do it. If you have some demo code to show me, that would be great as well.
It would be much easier if the legend came last: you could just resolve the MSAA buffers into the view framebuffer and then render the legend into the resolved buffer normally afterwards. But the other way around won't be possible, since multisample resolution just overwrites any previous contents of the target framebuffer; it won't do any blending or depth testing.
The only way to actually render the MSAA stuff on top would be to first resolve it into another FBO and draw that FBO's texture on top of the legend. But for the legend not to get completely overwritten, you will have to use alpha blending. So you basically clear the MSAA buffers to an alpha of 0 before rendering, then render the graph into them. Then you resolve those buffers and draw the resulting texture on top of the legend, using alpha blending so that only the parts where the graph was actually drawn get overwritten.
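A rough sketch of that second approach in plain C/OpenGL (the FBO setup is omitted, and msaaFbo, resolveFbo, resolveColorTex and the draw_*() helpers are placeholder names, so treat this as an outline rather than ready-to-run code):

/* 1. Render the graph into the multisampled FBO, cleared to alpha = 0 */
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_graph();                               /* the multisampled content */

/* 2. Resolve the MSAA buffers into a single-sampled FBO with a texture attached */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* 3. Draw the legend straight into the default framebuffer (no MSAA) */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_legend();                              /* exact 1-pixel-wide lines stay crisp */

/* 4. Blend the resolved graph texture on top of the legend */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, resolveColorTex);
draw_fullscreen_quad();                     /* textured quad covering the whole viewport */
glDisable(GL_BLEND);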
I'm very new to shaders and am very confused about the whole thing, even after following several tutorials (in fact this is my second question about shaders today).
I'm trying to make a shader with two passes:
technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 HorizontalBlur();
    }
    pass Pass2
    {
        PixelShader = compile ps_2_0 VerticalBlur();
    }
}
However, this only applies VerticalBlur(). If I remove Pass2, it falls back to the HorizontalBlur() in Pass1. Am I missing something? Maybe it's simply not passing the result of the first pass to the second pass, in which case how would I do that?
Also, in most of the tutorials I've read, I'm told to put effect.CurrentTechnique.Passes[0].Apply(); after I start my SpriteBatch with the effect. However, this doesn't seem to change anything; I can set it to Passes[1] or even remove it entirely, and I still only get Pass2. (I do get an error when I try to set it to Passes[2], however.) What's the use of that line then? Has the need for it been removed in recent versions?
Thanks a bunch!
To render multiple passes:
For the first pass, render your scene onto a texture.
For the second, third, fourth, etc. passes:
Draw a quad that uses the texture from the previous pass. If there are more passes to follow, render this pass to another texture; otherwise, if this is the last pass, render it to the back buffer.
In your example, say you are rendering a car.
First you render the car to a texture.
Then you draw a big rectangle, the size of the screen in pixels, placed at a z depth of 0.5, with identity world, view, and projection matrices, with your car scene as the texture, and apply the horizontal blur pass. This is rendered to a new texture that now contains a horizontally blurred car.
Finally, render the same rectangle again, but with the "horizontally blurred car" texture, and apply the vertical blur pass. Render this to the back buffer. You have now drawn a blurred car scene.
The reason for the following
effect.CurrentTechnique.Passes[0].Apply();
is that many effects only have a single pass.
To run multiple passes, I think you have to do this instead:
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Your draw code here
}
The question is in the title:
[ActionScript3.0] How to get color (uint) of pixel at coordinates? (Stage3D, Flare3D)
I am using the Flare3D library to render a 3D scene on an iPad 2. I need to get color values at 768 different coordinates every time the screen is redrawn. Previously, on the simple (2D) stage, I could just draw the scene onto 1x1 bitmaps translated to the specified coordinates, but that does not work on Stage3D. Plus, I am a bit worried whether it will kill the performance, since I really need to do it as often as possible, ideally every time the screen is drawn.
It would be really nice if the currently displayed screen were available as a bitmap somewhere, so I could access it like a simple array... but yeah, I am not holding my breath. :)
Since Stage3D renders to the back buffer and you can't access it directly, you need to render to a BitmapData using the Context3D.drawToBitmapData() method. Rendering to a bitmap is very slow, especially if the viewport is large. As you only need to access those 768 pixels, you could use Context3D.setScissorRectangle to render the scene 768 times with the scissor rectangle set to a 1x1 size at each of the needed coordinates. I haven't tested that myself, so I don't know whether rendering the scene that many times won't end up being slower than rendering it once, but you may want to try that. :)
I render all my surfaces to a buffer, then at the end of the frame I flip the buffer.
However, when a certain event happens in the game, I want to shake the buffer around to add intensity. Rather than blitting each surface at the offset individually, I thought I would just offset the entire buffer at the end of the frame, since I want everything in the buffer to shake.
Is there a way that I can render to the buffer at an offset, or do I need to blit the buffer to a second buffer and flip that?
You can make a function and call it right before your render.
It should pick a random direction (up, down, left or right) and apply the same small offset to everything being rendered that frame (for example, everything gets moved a little bit down in that frame).
In the next frame you pick a random direction again, avoiding the one that was picked last.
The function should also have a timer (use SDL_GetTicks()), so you can control how long the shaking lasts.
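Something along these lines (a rough sketch in plain C; the shake_* names, the pixel amount, and the way you add the offset to your blits are just made up for illustration):

#include <SDL.h>
#include <stdlib.h>

static Uint32 shake_end_ms = 0;          /* when the shake should stop */
static int shake_dx = 0, shake_dy = 0;   /* offset to apply this frame */

/* Call this once when the intense event happens. */
void start_shake(Uint32 duration_ms)
{
    shake_end_ms = SDL_GetTicks() + duration_ms;
}

/* Call this once per frame, before rendering anything. */
void update_shake(void)
{
    static int last_dir = -1;

    if (SDL_GetTicks() >= shake_end_ms) {   /* timer ran out: stop shaking */
        shake_dx = shake_dy = 0;
        return;
    }

    int dir;
    do {
        dir = rand() % 4;                   /* 0 = up, 1 = down, 2 = left, 3 = right */
    } while (dir == last_dir);              /* avoid repeating the last direction */
    last_dir = dir;

    const int amount = 4;                   /* shake strength in pixels */
    shake_dx = (dir == 2) ? -amount : (dir == 3) ? amount : 0;
    shake_dy = (dir == 0) ? -amount : (dir == 1) ? amount : 0;
}

/* Then, wherever you blit a surface, add the offset to its destination rect: */
/*   dst.x += shake_dx; dst.y += shake_dy; SDL_BlitSurface(src, NULL, screen, &dst); */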
I don't know if I was clear, but good luck anyway. :)
Is it possible to bind multiple framebuffers and renderbuffers in OpenGL ES? I'm rendering into an offscreen framebuffer/renderbuffer and would prefer to just use my existing render code.
Here's what I'm currently doing:
// create/bind framebuffer and renderbuffer (for screen display)
// render all content
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content again (would like to skip this)
Here's what I'd like to do:
// create/bind framebuffer and renderbuffer (for screen display)
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content (only once)
Is this possible?
You cannot render into multiple framebuffers at once. You might be able to use MRTs to render into multiple render targets (textures/renderbuffers) that belong to the same FBO by outputting multiple colors from the fragment shader, but not into multiple FBOs, like an offscreen FBO and the default framebuffer. But if I'm informed correctly, ES doesn't support MRTs at the moment anyway.
But in your case you still don't need to render the scene twice. If you need the image in an offscreen buffer anyway, why not just use a texture instead of a renderbuffer to hold the offscreen data (it shouldn't make a difference)? This way you can render the scene once into the offscreen buffer (the texture) and then get it onto the screen framebuffer by drawing a simple textured quad with a pass-through fragment shader.
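A minimal sketch of that idea in OpenGL ES 2.0 terms (width/height, screenFbo, renderAllContent() and drawTexturedFullscreenQuad() are assumed placeholders for your own code; on iOS the "screen" framebuffer is usually an FBO of your own rather than 0):

/* One-time setup: an FBO with a texture as its color attachment */
GLuint offscreenTex, offscreenFbo;

glGenTextures(1, &offscreenTex);
glBindTexture(GL_TEXTURE_2D, offscreenTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &offscreenFbo);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, offscreenTex, 0);
/* (attach a depth renderbuffer here too if the scene needs depth testing) */

/* Every frame: render all content once, into the texture */
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderAllContent();                       /* your existing render code, unchanged */

/* Then display it by drawing a textured quad into the screen framebuffer */
glBindFramebuffer(GL_FRAMEBUFFER, screenFbo /* or 0, depending on the platform */);
glBindTexture(GL_TEXTURE_2D, offscreenTex);
drawTexturedFullscreenQuad();             /* quad + pass-through fragment shader */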
Though, in OpenGL ES it may make a difference whether you use a renderbuffer or a texture to hold the offscreen data, as ES doesn't have glGetTexImage. So if you need to copy the offscreen data back to the CPU, you won't get around glReadPixels and would therefore need a renderbuffer. But in this case you still don't need to render the scene twice. You just have to introduce another FBO with a texture attached. So you render the scene once into the texture using this FBO, and then render that texture into both the offscreen FBO and the screen framebuffer. This might still be faster than drawing the whole scene twice, though only profiling can tell you.
But if you need to copy the data to the CPU for processing, you could also just copy it from the screen framebuffer directly and wouldn't need an offscreen FBO at all. And if you need the offscreen data for GPU-based processing only, then a texture is better than a renderbuffer anyway. So it might be useful to think about whether you actually need an additional offscreen buffer at all, if it only contains the same data as the screen framebuffer. That might render the whole problem obsolete.