Take screenshot of DirectX full-screen application - directx

This boggles me. DirectX bypasses everything and talks directly to the device driver, so GDI and the other usual methods won't work: unless Aero is disabled (or unavailable), all that appears is a black rectangle at the top left of the screen. I have tried what others have suggested on several forums, using DirectX to grab the front buffer and save it, but I get the same result:
// 'surface' is an offscreen plain surface created beforehand
device->GetFrontBufferData(0, surface);
D3DXSaveSurfaceToFile("fileName", D3DXIFF_BMP, surface, NULL, NULL);
Is there any way to get a screenshot of another full-screen DirectX application when Aero is enabled?

Have a look at Detours.
Using Detours, you can instrument calls such as Direct3DCreate9, IDirect3D9::CreateDevice and IDirect3DDevice9::Present, in which you perform the setup needed and then do the frame capture.
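To make that concrete, here is a rough, untested sketch of what the Present hook might look like for D3D9 with Detours. RealPresent, HookedPresent and InstallPresentHook are made-up names, and the vtable slot used for Present (17 on IDirect3DDevice9) should be double-checked against your SDK headers:
#include <windows.h>
#include <d3d9.h>
#include <detours.h>

typedef HRESULT (APIENTRY *PresentFn)(IDirect3DDevice9*, const RECT*,
                                      const RECT*, HWND, const RGNDATA*);
static PresentFn RealPresent = NULL;

static HRESULT APIENTRY HookedPresent(IDirect3DDevice9* device, const RECT* src,
                                      const RECT* dst, HWND wnd, const RGNDATA* dirty)
{
    // The frame is complete at this point, so this is the place to grab the
    // back buffer (device->GetBackBuffer + D3DXSaveSurfaceToFile, for example).
    return RealPresent(device, src, dst, wnd, dirty);
}

// Call once with any IDirect3DDevice9* (for example from a hooked
// IDirect3D9::CreateDevice); all devices share the same vtable.
void InstallPresentHook(IDirect3DDevice9* device)
{
    void** vtable = *reinterpret_cast<void***>(device);
    RealPresent = reinterpret_cast<PresentFn>(vtable[17]); // Present slot

    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)RealPresent, HookedPresent);
    DetourTransactionCommit();
}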

Here is a C# example of hooking IDirect3DDevice9 objects via DLL injection and function hooking using EasyHook (which fills the same role as Microsoft Detours). This is similar to how FRAPS works.
This allows you to capture the screen in windowed or fullscreen mode, and it uses the back buffer, which is much faster than trying to retrieve data from the front buffer.
A small C++ helper DLL is used to determine the methods of the IDirect3DDevice9 object to hook at runtime.
Update: for DirectX 10/11 see Screen capture and overlays for D3D 9, 10 and 11

This is a snippet of the code I used as a test just now; it seems to work.
width and height are the size of the SCREEN, not the window, when running in windowed mode. So for me they are set to 1280 x 1024 rather than the size of the window I'm rendering into.
You'd need to replace mEngine->getDevice() with some way of getting your IDirect3DDevice9 too. I just inserted this code into a random D3D app I had, to make it easier to test, but I can confirm that it captures both the output from that app AND another D3D app running at the same time.
I've assumed this is D3D9 as you didn't say; I'm not sure about D3D10 or 11.
IDirect3DSurface9* surface;
mEngine->getDevice()->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &surface, NULL);
mEngine->getDevice()->GetFrontBufferData(0, surface);
D3DXSaveSurfaceToFile("c:\\tmp\\output.jpg", D3DXIFF_JPG, surface, NULL, NULL);
surface->Release();

There is an open-source program similar to Fraps, Taksi, but it looks outdated.

Here is some discussion of how Fraps works. It is not simple.
http://www.woodmann.com/forum/archive/index.php/t-11023.html
I suspect that any trick that tries to read the front buffer from a different DirectX device may only work occasionally, by luck of uninitialized memory.

Following J99's answer, I made the code work for both windowed and fullscreen modes. It is also done in D3D9.
IDirect3DSurface9* surface;
D3DDISPLAYMODE mode;
pDev->GetDisplayMode(0, &mode); // pDev is my IDirect3DDevice9*

// We can capture only the entire screen,
// so width and height must match the current display mode.
pDev->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &surface, NULL);

if (pDev->GetFrontBufferData(0, surface) == D3D_OK)
{
    if (bWindowed) // a global config variable
    {
        // Get the client area in desktop coordinates.
        // This might need to be changed to support multiple screens.
        RECT r;
        GetClientRect(hWnd, &r); // hWnd is our window handle
        POINT p = {0, 0};
        ClientToScreen(hWnd, &p);
        SetRect(&r, p.x, p.y, p.x + r.right, p.y + r.bottom);
        D3DXSaveSurfaceToFile(szFilename, D3DXIFF_JPG, surface, NULL, &r);
    }
    else
    {
        D3DXSaveSurfaceToFile(szFilename, D3DXIFF_JPG, surface, NULL, NULL);
    }
}
surface->Release();
It looks like the format and pool parameters of CreateOffscreenPlainSurface must be exactly the ones shown here.

You might want to take a look at my Investigo project.
It uses a DirectX proxy DLL to intercept DirectX API functions.
There is already code in there to take screenshots during the call to Present, although it isn't yet accessible from the UI. You should be able to enable the code easily though.
http://www.codeproject.com/Articles/448756/Introducing-Investigo-Using-a-Proxy-DLL-and-embedd
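To give a feel for the proxy-DLL approach (a bare sketch, not Investigo's actual code): the fake d3d9.dll exports Direct3DCreate9, loads the real system DLL, and would hand back wrapped interfaces whose Present performs the capture:
#include <windows.h>
#include <d3d9.h>

typedef IDirect3D9* (WINAPI *Direct3DCreate9Fn)(UINT);

extern "C" IDirect3D9* WINAPI Direct3DCreate9(UINT sdkVersion)
{
    // Load the real d3d9.dll from the system directory so we don't load ourselves.
    char path[MAX_PATH];
    GetSystemDirectoryA(path, MAX_PATH);
    lstrcatA(path, "\\d3d9.dll");
    HMODULE real = LoadLibraryA(path);
    Direct3DCreate9Fn realCreate =
        (Direct3DCreate9Fn)GetProcAddress(real, "Direct3DCreate9");

    IDirect3D9* d3d = realCreate(sdkVersion);
    // A full proxy would return a wrapper around 'd3d' whose CreateDevice hands
    // out a wrapped IDirect3DDevice9, and whose Present saves a screenshot
    // before forwarding to the real device. Returning the raw interface here
    // just forwards everything untouched.
    return d3d;
}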

Related

Webgl Upload Texture Data to the gpu without a draw call

I'm using webgl to do YUV to RGB conversions on a custom video codec.
The video has to play at 30 fps. In order to make this happen I'm doing all my math every other requestAnimationFrame.
This works great, but I noticed when profiling that uploading the textures to the gpu takes the longest amount of time.
So I uploaded the "Y" texture and the "UV" texture separately.
Now the first "requestAnimationFrame" will upload the "Y" texture like this:
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);
The second "requestAnimationFrame" will upload the "UV" texture in the same way, and make a draw call to the fragment shader doing the math between them.
But this doesn't change anything in the profiler. I still show nearly 0 gpu time on the frame that uploads the "Y" texture, and the same amount of time as before on the frame that uploads the "UV" texture.
However if I add a draw call to my "Y" texture upload function, then the profiler shows the expected results. Every frame has nearly half the gpu time.
From this I'm guessing the Y texture isn't really uploaded to the gpu by the texImage2D call.
However I don't really want to draw the Y texture on the screen as it doesn't have the correct UV texture to do anything with until a frame later. So is there any way to force the gpu to upload this texture without performing a draw call?
Update
I misunderstood the question
It really depends on the driver. The problem is OpenGL/OpenGL ES/WebGL's texture API really sucks. Sucks is a technical term for 'has unintended consequences'.
The issue is that the driver can't really fully upload the data until you draw, because it doesn't know what you're going to change. You could change all the mip levels, in any order and at any size, and then fix them all up later, so until you draw it has no idea which other functions you're going to call to manipulate the texture.
Consider you create a 4x4 level 0 mip
gl.texImage2D(
gl.TEXTURE_2D,
0, // mip level
gl.RGBA,
4, // width
4, // height
...);
What memory should it allocate? 4 (width) * 4 (height) * 4 (rgba)? But what if you call gl.generateMipmap? Now it needs 4*4*4 + 2*2*4 + 1*1*4. Ok, but now you allocate an 8x8 mip on level 3. You intend to then replace levels 0 to 2 with 64x64, 32x32 and 16x16 respectively, but you did level 3 first. What should the driver do when you replace level 3 before replacing the levels above it? You then add in level 4 as 4x4, level 5 as 2x2, and level 6 as 1x1.
As you can see, the API lets you change mips in any order. In fact I could allocate level 7 as 723x234 and then fix it later. The API is designed not to care until draw time, when all the mips must be the correct size, at which point the driver can finally allocate memory on the GPU and copy the mips in.
You can see a demonstration and test of this issue here. The test uploads mips out of order to verify that WebGL implementations correctly fail when they are not all the correct size and correctly start working once they are the correct sizes.
You can see this was arguably a bad API design.
They added gl.texStorage2D to fix that, but gl.texStorage2D is not available in WebGL1, only WebGL2. gl.texStorage2D has new issues though :(
TLDR; textures get uploaded to the driver when you call gl.texImage2D but the driver can't upload to the GPU until draw time.
Possible solution: use gl.texSubImage2D. Since it does not allocate memory, it's possible the driver could upload sooner. I suspect most drivers don't, because you can call gl.texSubImage2D before drawing. Still, it's worth a try.
Let me also add that gl.LUMINANCE might be a bottleneck as well. IIRC DirectX doesn't have a corresponding format and neither does OpenGL Core Profile. Both support a RED-only format but WebGL1 does not, so LUMINANCE has to be emulated by expanding the data on upload.
Old Answer
Unfortunately there is no way to upload video to WebGL except via texImage2D and texSubImage2D
Some browsers try to make that happen faster. I notice you're using gl.LUMINANCE. You might try using gl.RGB or gl.RGBA and see if things speed up. It's possible browsers only optimize for the more common case. On the other hand it's possible they don't optimize at all.
Two extensions that would allow using video without a copy have been proposed, but AFAIK no browser has ever implemented them.
WEBGL_video_texture
WEBGL_texture_source_iframe
It's actually a much harder problem than it sounds like.
Video data can be in various formats. You mentioned YUV but there are others. Should the browser tell the app the format or should the browser convert to a standard format?
The problem with telling is that lots of devs will get it wrong, and then a user will provide a video in a format they don't support.
The WEBGL_video_texture extension converts to a standard format by re-writing your shaders. You tell it uniform samplerVideoWEBGL video and then it knows it can re-write your color = texture2D(video, uv) into color = convertFromVideoFormatToRGB(texture(video, uv)). It also means they'd have to re-write shaders on the fly if you play videos in different formats.
Synchronization
It sounds great to get the video data to WebGL but now you have the issue that by the time you get the data and render it to the screen you've added a few frames of latency so the audio is no longer in sync.
How to deal with that is out of the scope of WebGL as WebGL doesn't have anything to do with audio but it does point out that it's not as simple as just giving WebGL the data. Once you make the data available then people will ask for more APIs to get the audio and more info so they can delay one or both and keep them in sync.
TLDR; there is no way to upload video to WebGL except via texImage2D and texSubImage2D

Can Direct3D 11 do offscreen-only rendering (no swap chain)?

Is it possible to use Direct3D 11 for rendering to textures only, i.e. without creating a swap chain and without creating any window? I have tried that and all my API calls succeed. The only problem is that the picture I am downloading from a staging texture is black.
I finally managed to capture a full stream using PIX (Parallel Nsight does not seem to work at all). PIX shows that my render target is black, too, although I clear it to blue.
Is it possible at all what I intend to do? If so, how would one do it?
Actually, the whole thing is working as intended if you initialise the device correctly.
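For reference, here is a bare-bones sketch of one way to set that up (not the original poster's code; error handling omitted, sizes and format arbitrary): create the device without a swap chain, render into a texture bound as a render target, then copy it to a staging texture and map it:
#include <d3d11.h>

void RenderOffscreen()
{
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // A render-target texture stands in for the swap chain's back buffer.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 1024;
    desc.Height = 768;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET;

    ID3D11Texture2D* target = nullptr;
    ID3D11RenderTargetView* rtv = nullptr;
    device->CreateTexture2D(&desc, nullptr, &target);
    device->CreateRenderTargetView(target, nullptr, &rtv);

    context->OMSetRenderTargets(1, &rtv, nullptr);
    const float blue[4] = { 0.0f, 0.0f, 1.0f, 1.0f };
    context->ClearRenderTargetView(rtv, blue);
    // ... draw calls go here ...

    // Copy into a staging texture so the CPU can read the pixels back.
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    ID3D11Texture2D* staging = nullptr;
    device->CreateTexture2D(&desc, nullptr, &staging);
    context->CopyResource(staging, target);

    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
    // mapped.pData now holds the rendered image; row pitch is mapped.RowPitch.
    context->Unmap(staging, 0);

    staging->Release(); rtv->Release(); target->Release();
    context->Release(); device->Release();
}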

Strange problem with ID3DXSprite Draw method

I use the ID3DXSprite interface to draw GUI controls in my app. I have a 512x512 texture containing all the controls and call sprite->Draw(), passing the exact RECT of each control. Everything works fine except for a strange bug on only one(!) machine.
Normally the control renders correctly. On that strange machine, some controls still look fine, but many of them are drawn with corrupted edges and otherwise visibly wrong :(
The second machine has Intel(R) G41 Express Chipset video adapter.
Please, if anyone has ANY idea why this can happen - help!
Regards, Anthony.
It looks to me like you have mipmaps in the sprite's texture and the card is choosing the wrong mipmap level. Set the mip level count explicitly to 1 and see if that helps.
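For example (a guess at your loading code, with a made-up file name), forcing a single mip level when creating the texture with D3DX removes the mip chain entirely:
LPDIRECT3DTEXTURE9 guiTexture = NULL;
D3DXCreateTextureFromFileEx(
    device, "gui.png",              // "gui.png" is a placeholder name
    D3DX_DEFAULT, D3DX_DEFAULT,     // use the file's width and height
    1,                              // MipLevels = 1: no mip chain at all
    0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT,     // default filtering
    0, NULL, NULL, &guiTexture);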

DWM Screen Capturing with DirectX IDXGIOutput::GetDisplaySurfaceData

I am trying to capture DWM's DirectX surface using DXGI and GetDisplaySurfaceData() with Direct3D 10/11.
However, when I take ownership of the adapter's output with IDXGIOutput::TakeOwnership() before calling GetDisplaySurfaceData(), the whole screen blacks out for a moment and then restores itself (just as during a display mode switch).
Why does this happen, and how can I prevent this?
I know this is extremely late. But for what it's worth, the documentation explicitly says that you are not supposed to call TakeOwnership() directly as the results will be unpredictable.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb174558(v=vs.85).aspx

Antialiasing/Multisampling in D3D9

I'm writing a 3D modeling application in D3D9 that I'd like to make as broadly compatible as possible. This means using few hardware-dependent features such as multisampling. However, while the realtime render doesn't need to be flawless, I do need to provide nice-looking screen captures, which, without multisampling, look quite aliased and poor.
To produce my screen captures, I create a temporary surface in memory, render the scene to it once, then save it to a file. My first thought of how I could achieve an antialiased capture was to create my off-screen depth-stencil surface as multisampled, but of course DX wouldn't allow that since the device itself had been initialized with D3DMULTISAMPLE_NONE.
To start off, here's a sample of exactly how I create the screen capture. I know that it'd be simpler to just save the back buffer of an already-rendered frame, but I need the ability to save images with dimensions different from the actual render window, which is why I do it this way. Error checking, code for restoring state, and resource releases are omitted here for brevity. m_d3ddev is my LPDIRECT3DDEVICE9.
//Get the current pp
LPDIRECT3DSWAPCHAIN9 sc;
D3DPRESENT_PARAMETERS pp;
m_d3ddev->GetSwapChain(0, &sc);
sc->GetPresentParameters(&pp);
//Create a new surface to which we'll render
LPDIRECT3DSURFACE9 ScreenShotSurface= NULL;
LPDIRECT3DSURFACE9 newDepthStencil = NULL;
LPDIRECT3DTEXTURE9 pRenderTexture = NULL;
m_d3ddev->CreateDepthStencilSurface(_Width, _Height, pp.AutoDepthStencilFormat, pp.MultiSampleType, pp.MultiSampleQuality, FALSE, &newDepthStencil, NULL );
m_d3ddev->SetDepthStencilSurface( newDepthStencil );
m_d3ddev->CreateTexture(_Width, _Height, 1, D3DUSAGE_RENDERTARGET, pp.BackBufferFormat, D3DPOOL_DEFAULT, &pRenderTexture, NULL);
pRenderTexture->GetSurfaceLevel(0,&ScreenShotSurface);
//Render the scene to the new surface
m_d3ddev->SetRenderTarget(0, ScreenShotSurface);
RenderFrame();
//Save the surface to a file
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, ScreenShotSurface, NULL, NULL);
You can see the call to CreateDepthStencilSurface(), which is where I was hoping I could replace pp.MultiSampleType with e.g. D3DMULTISAMPLE_4_SAMPLES, but that didn't work.
My next thought was to create an entirely different LPDIRECT3DDEVICE9 as a D3DDEVTYPE_REF, which always supports D3DMULTISAMPLE_4_SAMPLES (regardless of the video card). However, all of my resources (meshes, textures) have been loaded into m_d3ddev, my HAL device, thus I couldn't use them for rendering the scene under the REF device. Note that resources can be shared between devices under Direct3d9ex (Vista), but I'm working on XP. Since there are quite a lot of resources, reloading everything to render this one frame, then unloading them, is too time-inefficient for my application.
I looked at other options for antialiasing the image post-capture (e.g. a 3x3 blur filter), but they all generated pretty crappy results, so I'd really like to get an antialiased scene right out of D3D if possible...
Any wisdom or pointers would be GREATLY appreciated...
Thanks!
Supersampling by either rendering to a larger buffer and scaling down or combining jittered buffers is probably your best bet. Combining multiple jittered buffers should give you the best quality for a given number of samples (better than the regular grid from simply rendering an equivalent number of samples at a multiple of the resolution and scaling down) but has the extra overhead of multiple rendering passes. It has the advantage of not being limited by the maximum supported size of your render target though and allows you to choose pretty much an arbitrary level of AA (though you'll have to watch out for precision issues if combining many jittered buffers).
The article "Antialiasing with Accumulation Buffer" at opengl.org describes how to modify your projection matrix for jittered sampling (OpenGL but the math is basically the same). The paper "Interleaved Sampling" by Alexander Keller and Wolfgang Heidrich talks about an extension of the technique that gives you a better sampling pattern at the expense of even more rendering passes. Sorry about not providing links - as a new user I can only post one link per answer. Google should find them for you.
If you want to go the route of rendering to a larger buffer and down sampling but don't want to be limited by the maximum allowed render target size then you can generate a tiled image using off center projection matrices as described here.
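If it helps, here is a rough D3D9 sketch of the jittered idea, reusing the names from your capture code (m_d3ddev, _Width, _Height, RenderFrame(), ScreenShotSurface) and assuming a fixed-function projection matrix named proj; the per-pass images still have to be averaged afterwards, and the 4-sample pattern below is just the simplest choice:
const int   kSamples = 4;
const float kJitter[kSamples][2] = { { 0.25f,  0.25f}, {-0.25f,  0.25f},
                                     { 0.25f, -0.25f}, {-0.25f, -0.25f} };

for (int s = 0; s < kSamples; ++s)
{
    // Shifting _31/_32 offsets the projection by a fraction of a pixel
    // (NDC spans 2 units across the _Width x _Height render target).
    D3DXMATRIX jittered = proj;
    jittered._31 += 2.0f * kJitter[s][0] / _Width;
    jittered._32 += 2.0f * kJitter[s][1] / _Height;
    m_d3ddev->SetTransform(D3DTS_PROJECTION, &jittered);
    RenderFrame();

    // Save each jittered pass; averaging these kSamples images afterwards
    // (offline, or by locking the surfaces) gives the antialiased result.
    char name[32];
    sprintf_s(name, "aa_pass_%d.bmp", s);
    D3DXSaveSurfaceToFile(name, D3DXIFF_BMP, ScreenShotSurface, NULL, NULL);
}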
You could always render to a texture that is twice the width and height (i.e. 4x the size) and then supersample it down (a rough sketch of this is given below).
Admittedly you'd still get problems if the card can't create a texture 4x the size of the back buffer ...
Edit: There is another way that comes to mind.
If you repeat the frame n times with tiny jitters to the view matrix, you will be able to generate as many images as you like, which you can then add together afterwards to form a very highly anti-aliased image. The bonus is that it will work on any machine that can render the image. It is, obviously, slower though. Still, 256xAA really does look good when you do this!
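Here is a quick sketch of the render-twice-the-size suggestion, again borrowing the question's names (m_d3ddev, pp, _Width, _Height, _OutFile, RenderFrame()); StretchRect with linear filtering does the downsample, and the depth-stencil surface from the question's code would need to be created at 2x as well:
LPDIRECT3DTEXTURE9 bigTexture = NULL;
LPDIRECT3DSURFACE9 bigSurface = NULL;
LPDIRECT3DSURFACE9 finalSurface = NULL;

// Render target at twice the requested size.
m_d3ddev->CreateTexture(_Width * 2, _Height * 2, 1, D3DUSAGE_RENDERTARGET,
                        pp.BackBufferFormat, D3DPOOL_DEFAULT, &bigTexture, NULL);
bigTexture->GetSurfaceLevel(0, &bigSurface);
m_d3ddev->CreateRenderTarget(_Width, _Height, pp.BackBufferFormat,
                             D3DMULTISAMPLE_NONE, 0, FALSE, &finalSurface, NULL);

m_d3ddev->SetRenderTarget(0, bigSurface);
RenderFrame();                                            // render at 2x

// Filtered blit down to the final size, then save as before.
m_d3ddev->StretchRect(bigSurface, NULL, finalSurface, NULL, D3DTEXF_LINEAR);
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, finalSurface, NULL, NULL);

finalSurface->Release();
bigSurface->Release();
bigTexture->Release();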
This article http://msdn.microsoft.com/en-us/library/bb172266(VS.85).aspx seems to imply that you can use the render state flag D3DRS_MULTISAMPLEANTIALIAS to control this. Can you create your device with antialiasing enabled but turn it off for screen rendering and on for your offscreen rendering using this render state flag?
I've not tried this myself though.
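If you do try it, the idea would look something like this (a sketch only; the flag has an effect only if the device or the offscreen render target is actually multisampled):
// Toggle multisampling per pass via the render state (the device must have
// been created with, say, D3DMULTISAMPLE_4_SAMPLES for this to do anything).
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, FALSE); // realtime view
// ...
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);  // screenshot pass
RenderFrame();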
