DWM Screen Capturing with DirectX IDXGIOutput::GetDisplaySurfaceData

I am trying to capture DWM's DirectX surface using DXGI and GetDisplaySurfaceData() with Direct3D 10/11.
However, when I take ownership of the adapter's output with IDXGIOutput::TakeOwnership() before calling GetDisplaySurfaceData(), the whole screen blacks out for a moment and then restores (just as during a display mode switch).
Why does this happen, and how can I prevent this?

I know this is extremely late, but for what it's worth: the documentation explicitly says that you are not supposed to call TakeOwnership() directly, as the results will be unpredictable.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb174558(v=vs.85).aspx
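For context, the call sequence the question describes looks roughly like this. This is a minimal, untested D3D11 sketch; error handling is omitted, and the staging-texture setup is my assumption based on the DXGI documentation, not code from the question.
// device is an existing ID3D11Device*, output an IDXGIOutput* for the target display.
DXGI_OUTPUT_DESC outDesc;
output->GetDesc(&outDesc);

D3D11_TEXTURE2D_DESC desc = {};
desc.Width = outDesc.DesktopCoordinates.right - outDesc.DesktopCoordinates.left;
desc.Height = outDesc.DesktopCoordinates.bottom - outDesc.DesktopCoordinates.top;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;   // assumed to match the current display format
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_STAGING;           // CPU-readable destination
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

ID3D11Texture2D* staging = NULL;
device->CreateTexture2D(&desc, NULL, &staging);

IDXGISurface* surface = NULL;
staging->QueryInterface(__uuidof(IDXGISurface), (void**)&surface);

// GetDisplaySurfaceData only succeeds while the caller owns the output;
// taking ownership yourself is what causes the mode-switch-like flash,
// which is why the documentation steers applications away from this path.
output->TakeOwnership(device, TRUE);
HRESULT hr = output->GetDisplaySurfaceData(surface);
output->ReleaseOwnership();

surface->Release();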

Related

How are Protected Media Path and similar systems implemented?

Windows provides DRM functionality to applications that require it.
Some of them, however, have more protection than others.
As an example, take Edge (both Legacy and Chromium-based) or IE, which use the Protected Media Path: they get to display >720p Netflix content, while other browsers don't use PMP and are capped at 720p.
The difference in protection is noticeable when you try to capture the screen: while you have no problems in Firefox/Chrome, in Edge/IE a fixed black image takes the place of the media you are playing, but you still see the media control buttons (play/pause/etc.) that are normally overlaid (alpha blended) on the media content.
Example (not enough rep yet to post directly)
The question here is mainly conceptual, and in fact could also apply to systems with identical behavior, like iOS, which also replaces the picture when you screenshot or capture the screen while Netflix is playing.
How does Windows get to display two different images on two different outputs (the capture APIs with no DRM content, and the attached physical monitor with the DRM content)?
I'll make a guess, and I'll start by excluding HW overlays. The reason is that the play/pause buttons are still visible on the captured output. Since they are overlaid (alpha blended) on the media on screen, and alpha blending on HW overlays is not possible in DirectX 9 or later, nor with legacy DirectDraw, hardware overlays have to be ruled out. And by the way, neither d3d9.dll nor ddraw.dll is loaded by mfpmp.exe or iexplore.exe (version 11). Plus, I think hardware overlays are now considered a legacy feature, while Media Foundation (of which Protected Media Path is a part) is very much alive and maintained.
So my guess is that DWM, which is in charge of screen composition, is actually doing two compositions: either it forks the composition process at the point where it encounters a DRM area and feeds one output to the screen (with the DRM-protected content) and the other to the various screen-capturing methods and APIs, or it does two entirely separate compositions in the first place.
Is my guess correct? And could you please provide evidence to support your answer?
My interest is understanding how composition software and DRM are implemented, primarily in Windows. But how many other ways could there be to do it in different OSes?
Thanks in advance.
According to this document, both options are available.
The modern PlayReady DRM that Netflix uses for its playback in IE, Edge, and the UWP app uses the DWM method, which can be seen from the video area showing only a black screen when DWM is forcibly killed. This seems to be because modern PlayReady has been supported since Windows 8.1, which does not let users disable DWM easily.
I think both methods were used in Windows Vista through 7, but I have no samples to test with. As HW overlays don't look that good with window previews, animations, and transparency, they would have switched between the methods depending on the DWM status.
For iOS, it seems that a mechanism similar to the DWM method is implemented at the display-server (SpringBoard?) level to present the protected content, which is processed in the Secure Enclave Processor.

Can Direct3D 11 do offscreen-only rendering (no swap chain)?

Is it possible to use Direct3D 11 for rendering to textures only, i.e. without creating a swap chain and without creating any window? I have tried that and all my API calls succeed. The only problem is that the picture I am downloading from a staging texture is black.
I finally managed to capture a full stream using PIX (Parallel Nsight does not seem to work at all). PIX shows that my render target is black, too, although I clear it to blue.
Is it possible at all what I intend to do? If so, how would one do it?
Actually, the whole thing works as intended if you initialise the device correctly.
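For illustration, here is a minimal, untested sketch of a window-less D3D11 setup along those lines: create the device without any swap chain, render into an offscreen texture, then copy into a staging texture for CPU readback. The 256x256 size and the clear-to-blue are arbitrary placeholders, not values from the question.
ID3D11Device* device = NULL;
ID3D11DeviceContext* context = NULL;
// No window, no swap chain: just a device and its immediate context.
D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, NULL, 0,
                  D3D11_SDK_VERSION, &device, NULL, &context);

// Offscreen render target.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 256;
desc.Height = 256;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET;
ID3D11Texture2D* rtTex = NULL;
device->CreateTexture2D(&desc, NULL, &rtTex);
ID3D11RenderTargetView* rtv = NULL;
device->CreateRenderTargetView(rtTex, NULL, &rtv);

// Bind, clear to blue, and issue draw calls as usual.
context->OMSetRenderTargets(1, &rtv, NULL);
const float blue[4] = { 0.0f, 0.0f, 1.0f, 1.0f };
context->ClearRenderTargetView(rtv, blue);

// Copy into a staging texture so the CPU can read the pixels back.
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
ID3D11Texture2D* staging = NULL;
device->CreateTexture2D(&desc, NULL, &staging);
context->CopyResource(staging, rtTex);

D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData now holds the cleared pixels, with mapped.RowPitch bytes per row.
context->Unmap(staging, 0);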

iOS OpenGL ES - Only draw on request

I'm using OpenGL ES to write a custom UI framework on iOS. The use case is an application, i.e. something that won't be updating on a per-frame basis the way a game does. From what I can see so far, the default behavior of GLKViewController is to redraw the screen at a rate of about 30fps. It's typical for UI to redraw itself only when necessary to reduce resource usage, and I'd like not to drain extra battery power by keeping the GPU busy while the user isn't doing anything.
I tried only clearing and drawing the screen once as a test, and got a warning from the profiler saying that an uninitialized color buffer was being displayed.
Looking into it, I found this documentation: http://developer.apple.com/library/ios/#DOCUMENTATION/iPhone/Reference/EAGLDrawable_Ref/EAGLDrawable/EAGLDrawable.html
The documentation states that there is a flag, kEAGLDrawablePropertyRetainedBacking, which, when set to YES, allows the backbuffer to retain what was drawn to it in the previous frame. However, it also states that this isn't recommended and can cause performance and memory issues, which is exactly what I'm trying to avoid in the first place.
I plan to try both ways, drawing every frame and not, but I'm curious whether anyone has encountered this situation. What would you recommend? Is redrawing everything 30 times per second not as big a deal as I assume it is?
In this case, you shouldn't use GLKViewController, as its very purpose is to provide a simple animation timer on the main loop. Instead, your view can be owned by any other subclass of UIViewController (including one of your own creation), and you can rely on the usual setNeedsDisplay/drawRect system used by all other UIKit views.
It's not the backbuffer that retains the image, but a separate buffer, possibly one created specifically for your view.
You can always set paused on the GLKViewController to pause the rendering loop.

Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames, as seen in the picture below. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap creation worker (in a separate process, or perhaps in the GPU) and returning.
Is this possible, or am I barking up the wrong tree?
Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply, you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context. If already using a shared context, there are still some synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose would be to switch to GL_LINEAR_MIPMAP_LINEAR. If it's genuinely a problem with lower-resolution mipmap levels not arriving until later, then you'll see the troublesome areas on the ceiling blend into one another rather than the current black-or-correct effect.
A second guess, based on the output, would be some sort of depth buffer clearing issue.
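For reference, the diagnostic switch suggested above is a single texture-parameter change on the texture in question (assuming it is bound to GL_TEXTURE_2D):
// Use trilinear filtering so adjacent mip levels blend instead of snapping.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);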
I followed @Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. The black-or-correct effect then changed to a fade between correct and black.
I guess that although we all know that OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing, but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (By the way, my draw loop is in the foreground and my texture loading loop, because it is so time consuming and would impair interactivity, is in the background. Also, since this may vary between platforms, I'm using iOS 5 on an iPad 2.)
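A rough sketch of the fix described above, assuming a shared EAGL context is current on the background loading thread; width, height, and pixels are placeholders for the actual image data:
// Runs on the background loading thread, with a shared context current.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);
// Force all queued texture work to complete before the draw thread uses the texture.
glFinish();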

Take screenshot of DirectX full-screen application

This boggles me. DirectX bypasses everything and talks directly to the device driver, so GDI and the other usual methods won't work: unless Aero is disabled (or unavailable), all that appears is a black rectangle at the top left of the screen. I have tried what others have suggested on several forums, using DirectX to get the front buffer and save it, but I get the same result:
device->GetFrontBufferData(0, surface);
D3DXSaveSurfaceToFile("fileName", D3DXIFF_BMP, surface, NULL, NULL);
Is there any way to get a screenshot of another full-screen DirectX application when Aero is enabled?
Have a look at Detours.
Using Detours, you can instrument calls like Direct3DCreate9, IDirect3D9::CreateDevice and IDirect3DDevice9::Present, in which you perform the operations necessary to set up and then capture a frame.
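A rough C++ sketch of what such a Present hook can look like once the hooking library has redirected the vtable entry. The names HookedPresent and RealPresent are placeholders of mine, not part of Detours itself, and the capture path is hard-coded for illustration:
// Pointer to the original IDirect3DDevice9::Present, filled in by the hooking library.
typedef HRESULT (STDMETHODCALLTYPE *Present_t)(IDirect3DDevice9*, const RECT*, const RECT*, HWND, const RGNDATA*);
Present_t RealPresent = NULL;

HRESULT STDMETHODCALLTYPE HookedPresent(IDirect3DDevice9* device,
    const RECT* src, const RECT* dst, HWND wnd, const RGNDATA* dirty)
{
    // Grab the back buffer just before it is presented.
    IDirect3DSurface9* backBuffer = NULL;
    if (SUCCEEDED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer)))
    {
        D3DXSaveSurfaceToFile("capture.bmp", D3DXIFF_BMP, backBuffer, NULL, NULL);
        backBuffer->Release();
    }
    return RealPresent(device, src, dst, wnd, dirty);
}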
Here is a C# example of hooking IDirect3DDevice9 objects via DLL injection and function hooking using EasyHook (like Microsoft Detours). This is similar to how FRAPS works.
This allows you to capture the screen in windowed / fullscreen mode and uses the back buffer which is much faster than trying to retrieve data from the front buffer.
A small C++ helper DLL is used to determine the methods of the IDirect3DDevice9 object to hook at runtime.
Update: for DirectX 10/11 see Screen capture and overlays for D3D 9, 10 and 11
This is a snippet of the code I used as a test just now; it seems to work.
width and height are the size of the SCREEN in windowed mode, not of the window. So for me they are set to 1280 x 1024, not the size of the window I'm rendering to.
You'd need to replace mEngine->getDevice() with some way of getting your own IDirect3DDevice9 too. I just inserted this code into a random D3D app I had, to make it easier to test. But I can confirm that it captures both the output from that app AND another D3D app running at the same time.
Oh, I've assumed this is D3D9 as you didn't say; I'm not sure about D3D10 or 11.
IDirect3DSurface9* surface;
// The destination surface must cover the whole screen, not just the window.
mEngine->getDevice()->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &surface, NULL);
// Copy the visible front buffer (the entire desktop) into the surface.
mEngine->getDevice()->GetFrontBufferData(0, surface);
D3DXSaveSurfaceToFile("c:\\tmp\\output.jpg", D3DXIFF_JPG, surface, NULL, NULL);
surface->Release();
There is an open-source program similar to Fraps, Taksi, but it looks outdated.
Here is some discussion of how Fraps works. It is not simple.
http://www.woodmann.com/forum/archive/index.php/t-11023.html
I suspect that any trick that tries to read the front buffer from a different DirectX device may only work occasionally, by the luck of uninitialized memory.
Following J99's answer, I made the code work for both windowed and fullscreen modes. It is also done in D3D9.
IDirect3DSurface9* surface;
D3DDISPLAYMODE mode;
pDev->GetDisplayMode(0, &mode); // pDev is my IDirect3DDevice9*

// We can capture only the entire screen,
// so width and height must match the current display mode.
pDev->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &surface, NULL);

if (pDev->GetFrontBufferData(0, surface) == D3D_OK)
{
    if (bWindowed) // a global config variable
    {
        // Get the client area in desktop coordinates;
        // this might need to be changed to support multiple screens.
        RECT r;
        GetClientRect(hWnd, &r); // hWnd is our window handle
        POINT p = {0, 0};
        ClientToScreen(hWnd, &p);
        SetRect(&r, p.x, p.y, p.x + r.right, p.y + r.bottom);
        D3DXSaveSurfaceToFile(szFilename, D3DXIFF_JPG, surface, NULL, &r);
    }
    else
    {
        D3DXSaveSurfaceToFile(szFilename, D3DXIFF_JPG, surface, NULL, NULL);
    }
}
surface->Release();
It looks like the format and pool parameters of CreateOffscreenPlainSurface must be exactly the same as shown above.
You might want to take a look at my Investigo project.
It uses a DirectX proxy DLL to intercept DirectX API functions.
There is already code in there to take screenshots during the call to Present, although it isn't yet accessible from the UI. You should be able to enable the code easily, though.
http://www.codeproject.com/Articles/448756/Introducing-Investigo-Using-a-Proxy-DLL-and-embedd
