I'm writing a 3D modeling application in D3D9 that I'd like to make as broadly compatible as possible. That means relying on as few hardware-dependent features as possible, multisampling among them. However, while the real-time render doesn't need to be flawless, I do need to provide nice-looking screen captures, which, without multisampling, look quite aliased and poor.
To produce my screen captures, I create a temporary surface in memory, render the scene to it once, then save it to a file. My first thought for achieving an antialiased capture was to create my off-screen depth/stencil surface as multisampled, but of course DX wouldn't allow that, since the device itself had been initialized with D3DMULTISAMPLE_NONE.
To start off, here's a sample of exactly how I create the screen capture. I know it'd be simpler to just save the back buffer of an already-rendered frame, but I need the ability to save images with dimensions different from the actual render window, which is why I do it this way. Error checking, code for restoring state, and releasing of resources are omitted here for brevity. m_d3ddev is my LPDIRECT3DDEVICE9.
//Get the current pp
LPDIRECT3DSWAPCHAIN9 sc;
D3DPRESENT_PARAMETERS pp;
m_d3ddev->GetSwapChain(0, &sc);
sc->GetPresentParameters(&pp);
//Create a new surface to which we'll render
LPDIRECT3DSURFACE9 ScreenShotSurface = NULL;
LPDIRECT3DSURFACE9 newDepthStencil = NULL;
LPDIRECT3DTEXTURE9 pRenderTexture = NULL;
m_d3ddev->CreateDepthStencilSurface(_Width, _Height, pp.AutoDepthStencilFormat, pp.MultiSampleType, pp.MultiSampleQuality, FALSE, &newDepthStencil, NULL);
m_d3ddev->SetDepthStencilSurface(newDepthStencil);
m_d3ddev->CreateTexture(_Width, _Height, 1, D3DUSAGE_RENDERTARGET, pp.BackBufferFormat, D3DPOOL_DEFAULT, &pRenderTexture, NULL);
pRenderTexture->GetSurfaceLevel(0, &ScreenShotSurface);
//Render the scene to the new surface
m_d3ddev->SetRenderTarget(0, ScreenShotSurface);
RenderFrame();
//Save the surface to a file
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, ScreenShotSurface, NULL, NULL);
You can see the call to CreateDepthStencilSurface(); that's where I was hoping I could replace pp.MultiSampleType with, say, D3DMULTISAMPLE_4_SAMPLES, but that didn't work.
My next thought was to create an entirely separate LPDIRECT3DDEVICE9 as a D3DDEVTYPE_REF device, which always supports D3DMULTISAMPLE_4_SAMPLES regardless of the video card. However, all of my resources (meshes, textures) have been loaded into m_d3ddev, my HAL device, so I couldn't use them to render the scene under the REF device. Note that resources can be shared between devices under Direct3D 9Ex (Vista), but I'm working on XP. Since there are quite a lot of resources, reloading everything to render this one frame and then unloading it all is too time-consuming for my application.
I looked at other options for antialiasing the image post-capture (e.g. a 3x3 blur filter), but they all produced pretty poor results, so I'd really like to get an antialiased scene straight out of D3D if possible...
Any wisdom or pointers would be GREATLY appreciated...
Thanks!
Supersampling, by either rendering to a larger buffer and scaling down or combining multiple jittered buffers, is probably your best bet. Combining jittered buffers should give you the best quality for a given number of samples (better than the regular grid you get from simply rendering at a multiple of the resolution and scaling down), but it carries the extra overhead of multiple rendering passes. It has the advantage of not being limited by the maximum supported size of your render target, though, and it lets you choose pretty much an arbitrary level of AA (although you'll have to watch out for precision issues if you combine many jittered buffers).
The article "Antialiasing with Accumulation Buffer" at opengl.org describes how to modify your projection matrix for jittered sampling (OpenGL but the math is basically the same). The paper "Interleaved Sampling" by Alexander Keller and Wolfgang Heidrich talks about an extension of the technique that gives you a better sampling pattern at the expense of even more rendering passes. Sorry about not providing links - as a new user I can only post one link per answer. Google should find them for you.
If you want to go the route of rendering to a larger buffer and downsampling but don't want to be limited by the maximum allowed render target size, you can generate a tiled image using off-center projection matrices, as described here.
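To make the jittered-projection idea from the first paragraph concrete, here is a minimal, untested sketch (my own, not from the articles above) of how you might build the offset projection with D3DX. JitteredProjection is a hypothetical helper; dx and dy are sub-pixel offsets in pixels, and the clip-space translation shifts the whole image by a constant amount in normalized device coordinates:
D3DXMATRIX JitteredProjection(float fovY, float aspect, float zn, float zf,
                              float dx, float dy, int width, int height)
{
    // Standard perspective projection...
    D3DXMATRIX proj, offset;
    D3DXMatrixPerspectiveFovLH(&proj, fovY, aspect, zn, zf);
    // ...followed by a clip-space translation: one pixel is 2/width
    // (or 2/height) in normalized device coordinates.
    D3DXMatrixTranslation(&offset, 2.0f * dx / width, -2.0f * dy / height, 0.0f);
    return proj * offset;   // row-vector convention: project first, then offset
}
Render the scene once per jitter offset (e.g. a 4x4 pattern for 16 samples) and average the captured surfaces on the CPU before saving.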
You could always render to a texture that is twice the width and height (i.e. 4x the pixels) and then downsample it.
Admittedly you'd still get problems if the card can't create a texture 4x the size of the back buffer ...
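A rough, untested sketch of what that could look like with the code from the question. bigSurface is a hypothetical 2*_Width x 2*_Height render-target surface you've already rendered the scene into; the other names come from the original snippet:
LPDIRECT3DSURFACE9 bigSysMem = NULL, smallSysMem = NULL;
// System-memory staging surfaces: one at 2x for the copy, one at the final size
m_d3ddev->CreateOffscreenPlainSurface(2 * _Width, 2 * _Height, pp.BackBufferFormat,
                                      D3DPOOL_SYSTEMMEM, &bigSysMem, NULL);
m_d3ddev->CreateOffscreenPlainSurface(_Width, _Height, pp.BackBufferFormat,
                                      D3DPOOL_SYSTEMMEM, &smallSysMem, NULL);
// Pull the 2x render target back to system memory...
m_d3ddev->GetRenderTargetData(bigSurface, bigSysMem);
// ...filter it down on the CPU...
D3DXLoadSurfaceFromSurface(smallSysMem, NULL, NULL, bigSysMem, NULL, NULL,
                           D3DX_FILTER_TRIANGLE, 0);
// ...and save the result.
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, smallSysMem, NULL, NULL);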
Edit: There is another way that comes to mind.
If you re-render the frame n times with tiny jitters applied to the view matrix, you can generate as many images as you like, which you then average together afterwards to form a very highly anti-aliased image. The bonus is that it will work on any machine that can render the image. It is, obviously, slower. Still, 256x AA really does look good when you do this!
This article http://msdn.microsoft.com/en-us/library/bb172266(VS.85).aspx seems to imply that you can use the render state flag D3DRS_MULTISAMPLEANTIALIAS to control this. Can you create your device with antialiasing enabled but turn it off for screen rendering and on for your offscreen rendering using this render state flag?
I've not tried this myself though.
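If that route works, the toggling itself would just be a render state change. An untested sketch, assuming the device (and the offscreen target) were created with a multisample type in the first place:
// Normal on-screen pass: leave multisampling off
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, FALSE);
// ... render the realtime view ...

// Screenshot pass: turn it on while rendering to the multisampled offscreen target
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);
// ... render the capture; a multisampled surface can't be saved directly,
//     so StretchRect it onto a plain surface before D3DXSaveSurfaceToFile ...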
Related
I am trying to capture a screenshot on iOS from an OpenGL view using glReadPixels at half of the native resolution.
glReadPixels is quite slow on retina screens so I'd like to somehow force reading every second pixel and every second row, resulting in a non-retina screenshot (1/4 of the resolution).
I tried setting these:
glPixelStorei(GL_PACK_SKIP_PIXELS, 2);
glPixelStorei(GL_PACK_SKIP_ROWS, 2);
before calling glReadPixels, but it doesn't seem to change anything. Instead, I just get a quarter of the original image, because the width and height I'm passing to glReadPixels are the view's non-retina size.
Alternatively, if you know of a more performant way of capturing an OpenGL screenshot, feel free to share it as well.
I don't think there's a very direct way of doing what you're looking for. As you already found out, GL_PACK_SKIP_ROWS and GL_PACK_SKIP_PIXELS do not have the functionality you intended. They only control how many rows/pixels are skipped at the start, not after each row/pixel. And I believe they control skipping in the destination memory anyway, not in the framebuffer you're reading from.
One simple approach to a partial solution would be to make a separate glReadPixels() call per row, which you can then make for every second row. You would still have to copy every second pixel from those rows, but at least it would cut the amount of data you read in half. And it does reduce the additional amount of memory to almost a quarter, since you would only store one row at full resolution. Of course you have overhead for making many more glReadPixels() calls, so it's hard to predict if this will be faster overall.
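A rough, untested sketch of that row-by-row idea (RGBA bytes assumed; width and height are the full retina size, and out ends up holding the quarter-size image):
GLubyte *row = (GLubyte *)malloc(width * 4);
GLubyte *out = (GLubyte *)malloc((width / 2) * (height / 2) * 4);
for (int y = 0; y < height; y += 2) {
    // Read one full-resolution row, every second row only
    glReadPixels(0, y, width, 1, GL_RGBA, GL_UNSIGNED_BYTE, row);
    GLubyte *dst = out + (y / 2) * (width / 2) * 4;
    for (int x = 0; x < width; x += 2) {
        memcpy(dst + (x / 2) * 4, row + x * 4, 4);   // keep every second pixel
    }
}
free(row);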
The nicer approach would be to produce a half-resolution frame that you can read directly. To do that, you could either:
If your toolkits allow it, re-render the frame at half the resolution. You could use an FBO as render target for this, with half the size of the window.
Copy the frame, while downscaling it in the process. Again, create an FBO with a render target half the size, and copy from default framebuffer to this FBO using glBlitFramebuffer().
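A sketch of that second option (untested, and it assumes glBlitFramebuffer is available, i.e. OpenGL ES 3.0 or desktop GL 3.0+). halfFBO is an FBO you've created with a width/2 x height/2 color attachment, and viewFBO is whatever framebuffer your view renders into; on iOS that's the framebuffer backed by your CAEAGLLayer renderbuffer rather than object 0:
glBindFramebuffer(GL_READ_FRAMEBUFFER, viewFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, halfFBO);
// Downscale while copying
glBlitFramebuffer(0, 0, width, height,            // source: full retina size
                  0, 0, width / 2, height / 2,    // destination: half size
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
// Read back from the half-size FBO
glBindFramebuffer(GL_READ_FRAMEBUFFER, halfFBO);
glReadPixels(0, 0, width / 2, height / 2, GL_RGBA, GL_UNSIGNED_BYTE, pixels);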
You can also look into making the read-back asynchronous by using a pixel pack buffer (see the GL_PIXEL_PACK_BUFFER target of glBindBuffer()). This will most likely not make the operation itself faster, but it allows you to continue feeding commands to the GPU while you're waiting for the glReadPixels() results to arrive. It might help you take screenshots while being less disruptive to the gameplay.
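An untested sketch of that pattern (ES 3.0 / GL 3.0 names; mapping the buffer still blocks if the transfer hasn't finished, so ideally you map it a frame or two later):
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
// With a pack buffer bound, the last argument is an offset into the buffer
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// ... keep rendering; later: ...
void *data = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GL_MAP_READ_BIT);
// copy the pixels out of 'data', then
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);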
I am currently developing an application for the Windows Store that does real-time image processing using Direct2D. It must support various sizes of images. The first problem I have faced is how to handle situations where the image is larger than the maximum supported texture size. After some research and documentation reading, I found VirtualSurfaceImageSource as a solution. The idea was to load the image as an IWICBitmap, then create a render target with CreateWICBitmapRenderTarget (which, as far as I know, is not hardware accelerated). After some drawing operations I wanted to display the result on screen by invalidating the corresponding region in the VirtualSurfaceImageSource, or when the NeedUpdate callback fires. I assumed I could do this by creating an ID2D1Bitmap (hardware accelerated) and calling CopyFromRenderTarget with the render target created by CreateWICBitmapRenderTarget and the invalidated region as bounds, but the method returns D2DERR_WRONG_RESOURCE_DOMAIN. Another reason for using IWICBitmap is that one of the algorithms in the application must be able to update the pixels of the image directly.
The question is: why doesn't this logic work? Is this the right way to achieve my goal with Direct2D? Also, given that the render target created with CreateWICBitmapRenderTarget is not hardware accelerated, what is the best approach if I want to do my image processing on the GPU with images larger than the maximum allowed texture size?
Thank you in advance.
You are correct that images larger than the texture limit must be handled in software.
However, the question to ask is whether or not you need that entire image every time you render.
You can use hardware acceleration to render a portion of the large image that is held in a software target.
For example,
Use ID2D1RenderTarget::CreateSharedBitmap to make a bitmap that can be used by different resources.
Then create a ID2D1BitmapRenderTarget and render the large bitmap into that. (making sure to do BeginDraw, Clear, DrawBitmap, EndDraw). Both the bitmap and the render target can be cached for use by successive calls.
Then copy from that render target into a regular ID2D1Bitmap with the portion that will fit into the texture memory using the ID2D1Bitmap::CopyFromRenderTarget method.
Finally, draw that bitmap to the real render target with pRT->DrawBitmap().
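A loose, untested sketch of the last two steps. pBitmapRT stands for the ID2D1BitmapRenderTarget the large image was rendered into, pRT is the on-screen render target, and the tile offsets/sizes are placeholders:
ID2D1Bitmap *pTile = NULL;
// A plain bitmap on the on-screen target, sized to something that fits in texture memory
pRT->CreateBitmap(D2D1::SizeU(tileWidth, tileHeight),
                  D2D1::BitmapProperties(pRT->GetPixelFormat()), &pTile);
D2D1_POINT_2U dest = D2D1::Point2U(0, 0);
D2D1_RECT_U   src  = D2D1::RectU(offsetX, offsetY,
                                 offsetX + tileWidth, offsetY + tileHeight);
// Copy just the visible portion out of the large render target...
pTile->CopyFromRenderTarget(&dest, pBitmapRT, &src);
// ...and draw it to the screen.
pRT->DrawBitmap(pTile, D2D1::RectF(0.0f, 0.0f, (FLOAT)tileWidth, (FLOAT)tileHeight));
This relies on pBitmapRT being created from pRT (e.g. via CreateCompatibleRenderTarget) so that both live in the same resource domain.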
I am preparing to start on a C++ DirectX 10 application that will consist of multiple "panels" to display different types of information. I have had some success experimenting with multiple viewports on one RenderTargetView. However, I cannot find a definitive answer regarding how to clear a single viewport at a time. These panels (viewports) in my application will overlap in some areas, so I would like to be able to draw them from "bottom to top", clearing each viewport as I go so the drawing from lower panels doesn't show through on the higher ones. In DirectX 9, it seems that there was a Clear() method of the device object that would clear only the currently set viewport. DirectX 10 uses ClearRenderTargetView(), which clears the entire drawing area, and I cannot find any other option that is equivalent to the way DirectX 9 did it.
Is there a way in DirectX 10 to clear only a viewport/rectangle within the drawing area? One person speculated that the only way may be to draw a quad in that space. It seems that another possibility would be to have a separate RenderTargetView for each panel, but I would like to avoid that, as it requires other redundant resources, such as separate depth/stencil buffers (unless that is a misunderstanding on my part).
Any help will be greatly appreciated! Thanks!
I would recommend using one render target per "viewport", and compositing them together using quads for the final view. I know of no way to scissor a clear in DX 10.
Also, according to the article here, "An array of render-target views may be passed into ID3D10Device::OMSetRenderTargets, however all of those render-target views will correspond to a single depth stencil view."
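For what it's worth, here's a rough, untested sketch of the per-panel setup; device is your ID3D10Device*, and the sizes are placeholders:
D3D10_TEXTURE2D_DESC td = {};
td.Width = panelWidth;  td.Height = panelHeight;
td.MipLevels = 1;       td.ArraySize = 1;
td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
td.SampleDesc.Count = 1;
td.Usage = D3D10_USAGE_DEFAULT;
td.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D *panelTex = NULL;
ID3D10RenderTargetView *panelRTV = NULL;
device->CreateTexture2D(&td, NULL, &panelTex);
device->CreateRenderTargetView(panelTex, NULL, &panelRTV);

// Per frame, for each panel:
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
device->ClearRenderTargetView(panelRTV, clearColor);   // clears only this panel's target
// ... draw the panel's content, then draw a textured quad into the back
//     buffer at the panel's position, bottom to top ...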
Hope this helps.
Could you not just create a shader, together with the appropriate blend-state settings and a square mesh (or other shape of mesh), and use it to clear the area you want cleared? I haven't tried this, but I think it can be done.
I have a directx9 game engine that creates its normal adaptor with this format:
D3DFMT_X8R8G8B8
I have a system where I render some objects to an offscreen render target, as lightmaps. I then use that lightmap data to composite back to the back buffer where they act as a full screen 'mask' and let me get the effect of torches or other light sources on a dark scene.
Everything works just great.
The problem is, I'm aware that my big offscreen lightmap render targets are 16MB each at a large resolution, and I only really need 8 bits of data (greyscale) from them, so 75% of that 32-bit render target memory is wasted. (I'm targeting low-spec cards.)
I tried creating the render targets as
D3DFMT_A8
But DirectX silently rejects that format (if I add a CheckDeviceFormat() call I can see it happening) and creates a 32-bit texture anyway. I use the D3DXCreateTexture function.
My question is, what format is best for creating these offscreen buffers?
Thank you for your help; I'm not good at render-target-related stuff :)
D3DFMT_L8 is 8 bit luminance. I believe it's supported on GeForce 3 (i.e. the first consumer card with shader 1.1!), so must be available everywhere. I think the colour is read as L, L, L, 1, i.e. rgb = luminance value, alpha = 1.
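A quick, hedged sketch of how you could verify render-target support for D3DFMT_L8 up front and fall back otherwise. d3d here is a placeholder for your IDirect3D9 interface, and D3DFMT_X8R8G8B8 is the adapter format from the question:
HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                    D3DFMT_X8R8G8B8,        // adapter/display format
                                    D3DUSAGE_RENDERTARGET,
                                    D3DRTYPE_TEXTURE,
                                    D3DFMT_L8);             // desired lightmap format
if (FAILED(hr)) {
    // No 8-bit luminance render targets on this card: fall back to
    // D3DFMT_X8R8G8B8, or pack four lightmaps into one RGBA target.
}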
Edit: this tool is useful for finding caps:
http://zp.lo3.wroc.pl/cdragan/wizard.php
Ontopic: If you are targeting lower spec cards, you are very likely to be running on systems where 8-bit single channel render targets are not supported at all.
If you are using shaders to do the rendering and compositing, it should be possible to use the rgba channels for 4 alternating pixels of your lightmap, packing your information. Perhaps you can tell us a little bit more about your current rendering setup?
Offtopic: AWESOME to have you here on StackOverflow, big fan of your work!
I have a 32-frame greyscale animation of a diamond exploding into pieces (i.e. 32 PNG images @ 1024x1024)
my game consists of 12 separate colours, so I need to perform the animation in any desired colour
this, I believe, rules out any of the Apple frameworks, and it also rules out a lot of the public code for animating frame by frame on iOS.
what are my potential solution paths?
these are the best SO links I have found:
Faster iPhone PNG Animations
frame by frame animation
Is it possible using video as texture for GL in iOS?
that last one just shows it may be possible to load an image into a GL texture each frame (he is doing it from the camera, so if I have everything stored in memory, that should be even faster)
I can see these options ( listed laziest first, most optimised last )
option A
each frame (courtesy of CADisplayLink), load the relevant image from file into a texture, and display that texture
I'm pretty sure this is stupid, so onto option B
option B
preload all images into memory
then proceed as above, except we load each frame from memory rather than from file
I think this is going to be the ideal solution, can anyone give it the thumbs up or thumbs down?
option C
preload all of my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to the rectangle in the atlas for that frame.
while this is potentially a perfect balance between coding efficiency and performance efficiency, the main problem here is losing resolution; on older iOS devices the maximum texture size is 1024x1024. If we cram 32 frames into this (really this is the same as cramming 64), we would be down to 128x128 per frame. If the resulting animation is close to full screen on the iPad, this isn't going to hack it.
option D
instead of loading into a single GL texture, load into a bunch of textures
moreover, we can squeeze 4 images into a single texture using all four channels
I baulk at the sheer amount of fiddly coding required here. My RSI starts to tingle even thinking about this approach
I think I have answered my own question here, but if anyone has actually done this or can see the way through, please answer!
If something higher performance than (B) is needed, it looks like the key is glTexSubImage2D http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml
Rather than pull across one frame at a time from memory, we could arrange say 16 512x512x8-bit greyscale frames contiguously in memory, send this across to GL as a single 1024x1024x32bit RGBA texture, and then split it within GL using the above function.
This would mean that we are performing one [RAM->VRAM] transfer per 16 frames rather than per one frame.
Of course, for more modern devices we could get 64 instead of 16, since more recent iOS devices can handle 2048x2048 textures.
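A hedged sketch of that packed upload (names are placeholders; packed points to 16 contiguous 512x512 8-bit frames, reinterpreted as one 1024x1024 RGBA image). How you unpack it on the GL side depends on the layout you choose, e.g. a shader selecting a sub-rectangle and one of the R/G/B/A channels per frame:
glBindTexture(GL_TEXTURE_2D, atlasTex);
// Allocate the texture storage once
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Then, per batch of 16 frames, a single RAM->VRAM transfer
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024,
                GL_RGBA, GL_UNSIGNED_BYTE, packed);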
I will first try technique (B) and leave it at that if it works (I don't want to over-code), and look at this if needed.
I still can't find any way to query how many GL textures it is possible to hold on the graphics chip. I have been told that when you try to allocate memory for a texture, GL just returns 0 when it has run out of memory. However, to implement this properly I would want to make sure that I am not sailing too close to the wind resource-wise... I don't want my animation to use up so much VRAM that the rest of my rendering fails...
You would be able to get this working just fine with the CoreGraphics APIs; there is no reason to deep dive into OpenGL for a simple 2D problem like this. For the general approach you should take to creating colored frames from a grayscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you need to use CGContextClipToMask() and then render a specific color, so that what is left is the diamond colored in with the specific color you have selected. You could do this at runtime, or you could do it offline and create one video for each of the colors you want to support. It would be easier on your CPU if you do the operation N times and save the results into files, but modern iOS hardware is much faster than it used to be. Beware of memory usage issues when writing video processing code; see video-and-memory-usage-on-ios-devices for a primer that describes the problem space. You could code it all up with texture atlases and complex OpenGL stuff, but an approach that makes use of videos would be a lot easier to deal with, and you would not need to worry so much about resource usage; see my library linked in the memory post for more info if you are interested in saving time on the implementation.
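A rough sketch of the clip-to-mask step for one frame (untested; mask is that frame as a CGImageRef, and r/g/b is the chosen tint). Depending on whether your PNGs carry alpha or are pure greyscale, you may need to convert the frame with CGImageMaskCreate first:
size_t w = CGImageGetWidth(mask), h = CGImageGetHeight(mask);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, cs,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGRect rect = CGRectMake(0, 0, w, h);
CGContextClipToMask(ctx, rect, mask);        // the greyscale frame acts as the mask
CGContextSetRGBFillColor(ctx, r, g, b, 1.0);
CGContextFillRect(ctx, rect);                // colour shows through where the mask allows
CGImageRef colored = CGBitmapContextCreateImage(ctx);
// ... draw or cache 'colored', then release it along with ctx and cs ...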