I am trying to overlay graphics on top of my OpenGL render scene.
I have managed to get it up and running, but the drop in FPS is too severe.
I am currently using GLScene in combination with Graphics32.
What I do is to render the GLScene Rendering Context to a bitmap,
apply that bitmap to a TImageView32, and do some final UI touches inside the TImage32.
The code I am using to render to a bitmap, which is also what causes the FPS drop, is the following:
procedure RenderToBitmap;
var
  b: TBitmap;
begin
  // CreateSnapShotBitmap returns a newly created TBitmap, so calling
  // TBitmap.Create beforehand would only leak that instance.
  b := GLSceneViewer.Buffer.CreateSnapShotBitmap; // TGLSceneViewer
  ImgVwr32.Bitmap.Assign(b); // TImageViewer32
  b.Free;
end;
I have tried some other code (see below), which gives me realtime rendering, but I cannot modify the "Bitmap" property of the ImageViewer32. In other words: the GLScene rendering context is rendered, but none of my own graphics appear.
The code:
//The following line is put inside the FormCreate call
GLSceneViewer.Buffer.CreateRC(GetDC(ImgVwr32.Handle),false);
How can I properly overlay graphics on top of the rendering context, or copy the rendering context output, without losing FPS?
Well, by avoiding the GPU→CPU→GPU copy roundtrip entirely. Upload your overlay into an OpenGL texture and draw that over the whole scene as one large textured quad.
OpenGL is not a scene graph; it is just a somewhat more sophisticated drawing API. You can change the viewport and transformation parameters at any time without altering the pixels drawn so far, so it is easy to switch into a screen-space coordinate system and draw the overlay there.
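For example, with the fixed-function pipeline that GLScene builds on, the overlay pass could look roughly like this (a sketch, not drop-in code: it assumes the overlay bitmap has already been uploaded into OverlayTex with glTexImage2D, and is only re-uploaded with glTexSubImage2D when it changes):

procedure DrawOverlay(OverlayTex: GLuint; ViewWidth, ViewHeight: Integer);
begin
  // Switch to a screen-space coordinate system
  glMatrixMode(GL_PROJECTION);
  glPushMatrix;
  glLoadIdentity;
  glOrtho(0, ViewWidth, ViewHeight, 0, -1, 1);
  glMatrixMode(GL_MODELVIEW);
  glPushMatrix;
  glLoadIdentity;

  // The overlay must not be occluded by the 3D scene, and its
  // transparent pixels should blend over it
  glDisable(GL_DEPTH_TEST);
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, OverlayTex);

  // One quad covering the whole viewport
  glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(ViewWidth, 0);
    glTexCoord2f(1, 1); glVertex2f(ViewWidth, ViewHeight);
    glTexCoord2f(0, 1); glVertex2f(0, ViewHeight);
  glEnd;

  // Restore the previous state
  glDisable(GL_BLEND);
  glEnable(GL_DEPTH_TEST);
  glMatrixMode(GL_PROJECTION);
  glPopMatrix;
  glMatrixMode(GL_MODELVIEW);
  glPopMatrix;
end;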
I'm using XNA 3.1
I have recently created a 2D Heads Up Display (HUD) by using Components.Add(myComponent) to my game. The HUD looks fine, showing a 2D map, crosshairs and framerate counter. The thing is, whenever the HUD is on screen the 3D objects in the game no longer draw correctly.
Something further from my player might get drawn after something closer, the models sometimes lose definition when I walk past them. When I remove the HUD all is drawn normally.
Are there any known issues regarding this that I should be aware of? How should I draw a 2D HUD over my 3D game area?
This is what it looks like without a GameComponent: (screenshot omitted)
And here's how it looks with a GameComponent (in this case it's just some text offscreen in the upper left corner that shows framerate); notice how the tree in the back appears in front of the tree closer to the camera: (screenshot omitted)
You have to enable the depth buffer:
// XNA 3.1
GraphicsDevice.RenderState.DepthBufferEnable = true;
GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
// XNA 4.0
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
SpriteBatch.Begin alters the state of the graphics pipeline:
SpriteBatch render states for XNA 3.1
SpriteBatch render states for XNA 4.0
In both versions depth buffering is disabled; that is what causes the issue.
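For example, in XNA 4.0 the 3D pass can restore the defaults before drawing (a sketch; DrawScene stands in for your own 3D drawing code, and the HUD GameComponent is assumed to draw afterwards via base.Draw):

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    // The HUD's SpriteBatch disabled depth buffering (among other state
    // changes) at the end of the previous frame, so restore the defaults.
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;

    DrawScene(gameTime);  // stand-in for your 3D rendering

    base.Draw(gameTime);  // the HUD GameComponent draws here via SpriteBatch
}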
Again, I cannot stress this enough:
Always make sure that ALL render states are correctly set before drawing any geometry.
BlendState
DepthStencilState
RasterizerState
Viewports
RenderTargets
Shaders
SamplerStates
Textures
Constants
Educate yourself on the purpose of each state and each stage in the rendering pipeline. If in doubt, try resetting everything to default.
I am building a 3D image viewer which has Three.JS plane geometries as placeholders with the images as their textures.
Now I want to add a black border around the image. The only way I have found so far is to add a new black plane geometry behind the image being displayed, but that would require wholesale changes to my framework, which I want to avoid.
WebGL's texture loading function gl.texImage2D has a parameter for border. But I couldn't find this exposed anywhere through Three.js and doubt that it even works the way I think it does.
Is there an easier way to add borders around textures?
You can use a temporary regular 2D canvas to render your image and apply any kind of editing/effects there, like painting borders and such. Then use that canvas image as a texture. It might be a bit of work, but you will gain a lot of flexibility in styling your borders and other stuff.
I'm not near my dev machine and won't be for a couple of days, so I can't look up an example of my own. This issue contains some code to get you started: https://github.com/mrdoob/three.js/issues/868
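The idea in a minimal sketch (makeBorderedTexture, img and border are made-up names for illustration):

function makeBorderedTexture(img, border) {
  // Draw the image onto a 2D canvas, inset by the border width
  var canvas = document.createElement('canvas');
  canvas.width = img.width + border * 2;
  canvas.height = img.height + border * 2;

  var ctx = canvas.getContext('2d');
  ctx.fillStyle = '#000';  // border color
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, border, border);

  // Use the canvas as a texture
  var texture = new THREE.Texture(canvas);
  texture.needsUpdate = true;  // tell Three.js to (re-)upload it
  return texture;
}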
My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there is a bug in iOS 5.x on Retina devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6; I don't have a workaround yet.
You could simply update part of the texture using glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect set (glScissor) to the "dirty" part. The GL will then not draw any fragments outside this rect.
For this to work, you must of course use single buffering.
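In outline, it might look like this (a sketch for OpenGL ES 1.1; dirty, backingHeight, tex, dirtyPixels and drawFullScreenQuad all stand in for your own variables and code):

// 1. Upload only the changed pixels. Note that dirtyPixels must contain
//    just the sub-rectangle's rows, tightly packed: ES 1.1 has no
//    GL_UNPACK_ROW_LENGTH for reading a sub-region out of the full bitmap.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                dirty.x, dirty.y, dirty.w, dirty.h,
                GL_RGBA, GL_UNSIGNED_BYTE, dirtyPixels);

// 2. Clip drawing to the dirty region. glScissor takes window coordinates
//    with the origin at the bottom left, hence the Y flip.
glEnable(GL_SCISSOR_TEST);
glScissor(dirty.x, backingHeight - (dirty.y + dirty.h), dirty.w, dirty.h);

// 3. Redraw the usual full-screen quad; fragments outside the scissor
//    rect are discarded, so only the dirty part is actually drawn.
drawFullScreenQuad();

glDisable(GL_SCISSOR_TEST);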
Problem
I've gotten to a point in my project where I'm rendering to WebGLRenderTargets and using them as textures in my main scene. It works, but it seems like I'm making it do a lot more work than necessary. My generated textures only need to be 64x64, but because I'm using my main renderer (window width by window height) for both, it's unnecessarily rendering the WebGLRenderTargets at a much larger resolution.
I may be wrong, but I believe this increases both the processing required to draw to each RenderTarget and the processing required to then draw that large texture to the mesh.
I've tried using a second renderer, but I seem to get this error when trying to use a WebGLRenderTarget in renderer A after drawing to it from renderer B:
WebGL: INVALID_OPERATION: bindTexture: object not from this context
Example
For reference, you can see my abstracted page here (warning: due to the very issue I'm asking about, this page may lag for you). I'm running a simplex noise function on a plane in my secondary scene and chopping it up into sections using camera placement, then applying the segments to tile pieces via WebGLRenderTarget so that they form continuous but individual pieces.
Question
Am I correct in my assumption that using the same renderer size is much less efficient than rendering to a smaller renderer would be? And if so, what do you think the best solution for this would be? Is there currently a way to achieve this optimization?
The size parameters in renderer.setSize() are used by the renderer to set the viewport only when rendering to the screen.
When the renderer renders to an offscreen render target, the size of the texture rendered to is given by the parameters renderTarget.width and renderTarget.height.
So the answer to your question is that it is OK to use the same renderer for both; there is no inefficiency.
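For example (a sketch; rtScene, rtCamera and tileMaterial are placeholders, and the render() signature shown is the one from Three.js releases of that era):

// The render target's own size controls the offscreen resolution,
// independent of renderer.setSize()
var renderTarget = new THREE.WebGLRenderTarget(64, 64);

// Offscreen pass: rendered at 64x64
renderer.render(rtScene, rtCamera, renderTarget);

// Use the result as a texture in the main scene...
tileMaterial.map = renderTarget;

// ...and render the main scene at full window resolution
renderer.render(scene, camera);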
Morning all (if it's morning where you are)
I have been looking around and have not seen a satisfactory method for doing this, so I thought I would ask around...
In an ideal world, I would like to be able to generate a transparent Texture2D object. Drawing this to the screen, I would like to be able to "paint" to it, i.e. while the left mouse button is down, whatever pixel the cursor is over should be set to black. Following this, I would then need to be able to use this texture.
Using the texture is the easy part: we can simply make a new Texture2D attribute for a "painting" object and use that in the SpriteBatch.Draw method. The two tricky parts are:
Generating a texture2D object of a specified size, filled with transparency in code.
Editing that texture2D on the fly (i.e. being able to alter pixel colours)
If anyone has any experience with these, your input would be very much appreciated.
You can either use a RenderTarget2D (MSDN), which is itself a Texture2D (so you can use it in SpriteBatch.Draw). This allows you to render onto a texture in the same way you render onto the screen. You need to use GraphicsDevice.SetRenderTarget (MSDN) to set this up.
Or you can use Texture2D.SetData (MSDN) to manipulate pixels directly. You can construct a transparent Texture2D directly (MSDN). Don't forget to Dispose of any textures or other resources you create yourself!
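For the SetData route, a minimal sketch (XNA 4.0; names are placeholders and bounds checking is omitted):

Texture2D paintTexture;
Color[] pixels;

void CreatePaintSurface(GraphicsDevice device, int width, int height)
{
    paintTexture = new Texture2D(device, width, height);
    pixels = new Color[width * height];
    for (int i = 0; i < pixels.Length; i++)
        pixels[i] = Color.Transparent;  // start fully transparent
    paintTexture.SetData(pixels);
}

void PaintPixel(int x, int y)  // call while the left mouse button is down
{
    pixels[y * paintTexture.Width + x] = Color.Black;

    // Note: the texture must not be bound to the GraphicsDevice when
    // SetData is called, or XNA will throw.
    paintTexture.SetData(pixels);
}

Bear in mind that SetData re-uploads the whole array on every call, so for heavy painting the RenderTarget2D route tends to be the cheaper one.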