Show presentation timer on only one of two cloned displays - device-driver

Is it possible to show an application on only one display when the display is cloned? My goal is to develop a timer that is shown on the laptop display during a presentation, but not on the projector.
I'm looking for something like windows' "Identify displays" feature, which displays numbers 1 and 2 on the different displays, even in cloned mode.
Thanks
Edit
I discovered a possible duplicate of this question. The accepted answer there was to use Screen.AllScreens to discover the number of screens, which is not enough in my case. A comment on the accepted answer links to a thread about painting directly on the desktop. I tried this with the following code, but the text appeared on both displays. The code to get the HDC of the display is from an article about screen captures. I'm not sure what to set for the other parameters (they are IntPtr.Zero in the article).
[DllImport("gdi32.dll")]
static extern IntPtr CreateDC(IntPtr lpszDriver, string lpszDevice, IntPtr lpszOutput, IntPtr lpszInitData);

[DllImport("gdi32.dll")]
static extern bool DeleteDC(IntPtr hdc);

private void PaintOnDesktop(string stringToPrint, Font font, Brush brush, PointF position)
{
    // Create a device context for the first screen's display device
    string deviceName = Screen.AllScreens[0].DeviceName;
    IntPtr targetHdc = CreateDC(IntPtr.Zero, deviceName, IntPtr.Zero, IntPtr.Zero);
    using (Graphics g = Graphics.FromHdc(targetHdc))
    {
        g.DrawString(stringToPrint, font, brush, position);
    }
    DeleteDC(targetHdc);
}
Edit 2
Apparently there is no way to do this in C#, so I changed the C# tag to device driver.

The only way to achieve this effect would be to "clone" the screen "manually", i.e. without using the driver's cloning feature, which would of course introduce some lag. UltraMon has a feature like this, designed for video cards or OSes that don't support multi-monitor cloning. It basically works by screenshotting the desktop rapidly and displaying the result on a form on another (extended) monitor.
The lag induced is only present on one of the monitors, so in your case if this is for a presentation, I would suggest making "your" monitor (probably the primary one on your laptop) the recipient of the lag, so the audience doesn't see it. You can then of course draw a timer form on top of either screen.

Related

How are Protected Media Path and similar systems implemented?

Windows provides DRM functionality to applications that require it.
Some of them, however, have more protection than others.
As an example, take Edge (both Legacy and Chromium) or IE that use Protected Media Path. They get to display >720p Netflix content. Other browsers don't use PMP and are capped at 720p.
The difference in protection is noticeable when you try to capture the screen: while you have no problems in Firefox/Chrome, in Edge/IE a fixed black image takes the place of the media you are playing, yet you still see the media control buttons (play/pause/etc.) that are normally overlaid (alpha blended) on the media content.
Example (not enough rep yet to post directly)
The question here is mainly conceptual, and in fact could also apply to systems with identical behavior, like iOS, which also replaces the picture when you screenshot or capture the screen on Netflix.
How does it manage to display two different images on two different outputs (capture APIs with no DRM content, and the attached physical monitor screen with DRM content)?
I'll make a guess, and I'll start by excluding HW overlays. The reason is that the play/pause buttons are still visible on the captured output. Since they are overlaid (alpha blended) on the media on the screen, and alpha blending on HW overlays is not possible in DirectX 9 or later, nor with legacy DirectDraw, hardware overlays have to be discarded. And by the way, neither d3d9.dll nor ddraw.dll is loaded by mfpmp.exe or iexplore.exe (version 11). Plus, I think hardware overlays are now considered a legacy feature, while Media Foundation (which Protected Media Path is a part of) is very much alive and maintained.
So my guess is that DWM, which is in charge of screen composition, is actually doing two compositions: either by forking the composition process at the point where it encounters a DRM area, feeding one output to the screen (with DRM-protected content) and the other to the various screen-capturing methods and APIs, or by doing two entirely different compositions in the first place.
Is my guess correct? And could you please provide evidence to support your answer?
My interest is understanding how composition software and DRM are implemented, primarily in Windows. But how many other ways could there be to do it in different OSes?
Thanks in advance.
According to this document, both options are available.
The modern PlayReady DRM that Netflix uses for its playback in IE, Edge, and the UWP app uses the DWM method; this can be noticed because the video area shows only a black screen when DWM is forcibly killed. It seems this is because modern PlayReady is supported since Windows 8.1, which does not let users disable DWM easily.
I think both methods were used in Windows Vista-7, but I have no samples to test. As HW overlays don't play well with window previews, animations, and transparency, they would have switched between the methods depending on the DWM status.
For iOS, it seems that a mechanism similar to the DWM method is implemented at the display server (SpringBoard?) level to present the protected content, which is processed in the Secure Enclave Processor.

Before diving in, is this possible with Awesome WM?

I've been trying different tiling WMs to see which one best fits my needs. Every time I try a new one, it looks good at first, but then I find things that don't quite work the way I like. My requirements have evolved as I go. Initially I didn't want to get into Awesome because having to learn Lua is not on my wish list, but maybe I should give it a try IF it can do what I want better than the other tiling WMs out there.
I'm going to be as specific as I can about what I want. I am running a 3440x1440 monitor. I want to use as much vertical space as possible (meaning a full-width, persistent but mostly empty status bar is not an option, but I do like the notification area and a date/time).
I understand it may not do everything exactly the way I want, which is okay. If it does more or less most of what I want, I can weigh my options between Awesome and other tiling WMs (actually, only i3, which is what I'm using now, but I'm open to better suggestions). I would very much appreciate it if people don't just say no to something it can't do, but say "no, but it can do ...". In other words, feel free to suggest alternatives that might be helpful as well.
Divide the screen into 3 columns, initially 30/45/25, with the right column split horizontally; fully adjustable and resizable as needed during my work session.
Persistent layout; when closing the last application in a tile, I don't want that tile to disappear and the remaining tiles to resize. Just show an empty space and leave all tiles as they are.
Tabbed tiles, so I can see which applications are running in a tile (similar to i3).
Resizable tiles with the keyboard, in one direction only: when making the middle column/tile wider, I want it to grow in a specific direction into the adjacent tile and leave the other side alone.
Certain applications I want to always launch into a specific tile. For instance, terminals always go into the right-most column top/bottom, browser/spotify always into the middle, atom/IDE always into the left. Some applications should always be floating. Obviously I want to be able to send them to a different tile after launch.
I don't want a 100% width status bar; it will be mostly empty, which is a waste of screen estate. Preferably, I'd like the statusbar to be part of a tile, for example the right-most tile, resizing with it. Otherwise I'd like it fixed to 30% width, allowing tiles that are not beneath it to use the full height of the screen. My reason for a statusbar is moot; I actually only want a notification area and a date/time permanently visible. I don't need a "start menu"; dmenu or similar is perfect, which I believe it has integrated.
Many thanks in advance!
The general answer is "Awesome configuration is code and it can do whatever you want". But there is a catch. Can Awesome be configured like you describe? Yes, totally. There are at least two distributions coming close enough (mine [1] and worron's [2]), at least for the tiling workflow, not the look.
The "catch" is that the workflow you describe isn't really the "Awesome way". Awesome is usually used as an automatic tiler. You have layouts that describe a workflow (code, web, internet) and manage the clients according to their programming. Manual tile management is rarely necessary once you have proper layouts. That doesn't mean you can't, I did, but it might be worth thinking outside the box and see if you can automate your workflow a bit further.
Also, the default layout system isn't very modern and makes it hard to implement the features you requested. My layout system (see link below) can be used as a module or as a branch and supports all the features described above. Awesome is extremely configurable, and its components can be replaced by modules.
https://github.com/awesomeWM/awesome/pull/644
The layout "serialization" documentation is here:
https://elv13.github.io/libraries/awful.layout.html#awful.layout.suit.dynamic.manual
It is similar to i3 but has more layouts and containers. As for the "leaving blank space" part, it is configured using the fill_strategy:
https://awesomewm.org/doc/api/classes/wibox.layout.ratio.html#wibox.layout.ratio.inner_fill_strategy
As a word of conclusion, I would note that what you ask for is "work exactly like i3". If you want such a thing, well, use i3. Awesome is a window manager framework. Its goal and purpose is to create a customized desktop shell / WM. If that's what you want, then go ahead and learn it; nothing else comes close to the possibilities and the level of control you can get out of it. However, it takes time and effort to get to the point where you have "your own perfect desktop". Our users' perfect desktops:
https://github.com/awesomeWM/awesome/issues/1395
[1] https://gfycat.com/SmallTerribleAdamsstaghornedbeetle
[2] https://www.youtube.com/watch?v=-yNALqST1-Y
The WM you are looking for is herbstluftwm (hlwm). It's a manual tiling window manager. The tiles you are talking about are called frames in hlwm. Each frame can contain multiple windows, and a frame can also be empty. Only if you kill a frame will the other frames automatically resize. You can add new frames vertically and horizontally and resize them. Each frame can also have a different layout to organize the windows inside it. The layout you are looking for is max, which stacks the windows inside a frame on top of each other; it doesn't show you tabs like i3, however. hlwm allows you to create rules to always open certain applications in certain frames and workspaces. hlwm doesn't have a statusbar built in. I personally like to use tint2; it can be used as a replacement for your requirement to see running applications as tabs.

How can I find the colour of any pixel on the screen

I usually program in VB6, but I believe with that I might be restricted to details within the active form. I also have CodeGear 2009 with C++ and Delphi, which I got from a mate, but I only have a little experience with Delphi and none at all with C++. At least I have them if one of those programs needs to be used to achieve what I'm trying to do. I want to be able to do something like
IF pixelVar(x, y) = 'Red' (or RGB value, or whatever the correct colour representation)
THEN
    do something
END IF
I want to write a program to keep poker hand statistics, and I want it to run while I'm playing in the poker client program, automatically recognizing the cards by pixel colour and position and entering them into the database. I think that if I can get easy access to pixel info, it wouldn't be too difficult to work out patterns to identify the number and suit of cards.
Any help would be enormous. Thanks.
Use GetDC() with its hWnd parameter set to 0 to get an HDC handle for the screen, then use GetPixel() to get a COLORREF of the pixel at the desired screen coordinates, and then finally use GetRValue(), GetGValue(), and GetBValue() to split the COLORREF into its Red, Green, and Blue values.

How do I open a hardware accelerated DirectX window on a secondary screen

I'm looking to create a hardware accelerated DirectX (9 at the moment) window on a secondary screen. This screen is connected to the same graphics adapter as the primary screen (at least at the moment).
Currently, when I try to open the window on the secondary screen, based on window position or by dragging it there, CPU usage jumps by about 10%, which seems to indicate that Windows is switching to a software fallback rather than hardware acceleration.
The machine is Windows XP running an NVIDIA graphics card (varying cards, as this runs on several machines) with the latest driver. It's also running CUDA at the same time to produce the images, if that matters. The programming language is C++, with manual window and message queue creation; no toolbox is used at the moment to manage the GUI.
Thanks
When you call CreateDevice, make sure to use the index of the monitor you are targeting. The standard D3DADAPTER_DEFAULT value is just 0, which is the primary monitor. DirectX is a bit kludgy that way, but if the window is on a different monitor than is specified in CreateDevice, then it will silently render in a framebuffer targeting the first monitor, then buffer copy to a framebuffer on the second monitor using the OS window manager.
So, the quick and dirty solution is to use CreateDevice(1, ...) instead, since that is almost always how a dual monitor setup is indexed.
A more robust solution is to use MonitorFromWindow(hwnd) to find the monitor that the window covers the most, then iterate through available d3d adapters looking for one that returns the same monitor handle using GetAdapterMonitor(). If you have a system with more than two monitors, or if you don't know in advance what monitor you want and just have an HWND, then you need the longer method.

Take screenshot of DirectX full-screen application

This boggles me. DirectX bypasses everything and talks directly to the device driver, so GDI and other usual methods won't work; unless Aero is disabled (or unavailable), all that appears is a black rectangle at the top left of the screen. I have tried what others have suggested on several forums, using DirectX to get the back buffer and save it, but I get the same result:
device->GetFrontBufferData(0, surface);
D3DXSaveSurfaceToFile("fileName", D3DXIFF_BMP, surface, NULL, NULL);
Is there any way to get a screenshot of another full-screen DirectX application when Aero is enabled?
Have a look at Detours.
Using Detours, you can instrument calls like Direct3DCreate9, IDirect3D9::CreateDevice and IDirect3D9::Present in which you perform the operations necessary to setup and then do a frame capture.
Here is a C# example of hooking IDirect3DDevice9 objects via DLL injection and function hooking using EasyHook (like Microsoft Detours). This is similar to how FRAPS works.
This allows you to capture the screen in windowed / fullscreen mode and uses the back buffer which is much faster than trying to retrieve data from the front buffer.
A small C++ helper DLL is used to determine the methods of the IDirect3DDevice9 object to hook at runtime.
Update: for DirectX 10/11 see Screen capture and overlays for D3D 9, 10 and 11
This is a snippet of the code I used as a test just now, and it seems to work.
width and height are the size of the SCREEN in windowed mode, not the window. So for me they are set to 1280 x 1024 and not the size of the window I'm rendering to.
You'd need to replace mEngine->getDevice() with some way of getting your IDirect3DDevice9 too. I just inserted this code into a random D3D app I had, to make it easier to test. But I can confirm that it captures both the output from that app AND another D3D app running at the same time.
Oh, I've assumed this is D3D9, as you didn't say; I'm not sure about D3D10 or 11.
IDirect3DSurface9* surface;
mEngine->getDevice()->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &surface, NULL);
mEngine->getDevice()->GetFrontBufferData(0, surface);
D3DXSaveSurfaceToFile("c:\\tmp\\output.jpg", D3DXIFF_JPG, surface, NULL, NULL);
surface->Release();
There is an open-source program like Fraps, Taksi, but it looks outdated.
Here is some discussion of how Fraps works. It is not simple.
http://www.woodmann.com/forum/archive/index.php/t-11023.html
Any trick that tries to read the front buffer from a different DirectX device may, I suspect, only occasionally work, thanks to the luck of uninitialized memory.
Following J99's answer, I made the code work for both windowed and fullscreen modes. It is also done in D3D9.
IDirect3DSurface9* surface;
D3DDISPLAYMODE mode;
pDev->GetDisplayMode(0, &mode); // pDev is my IDirect3DDevice9*

// we can capture only the entire screen,
// so width and height must match the current display mode
pDev->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &surface, NULL);

if (pDev->GetFrontBufferData(0, surface) == D3D_OK)
{
    if (bWindowed) // a global config variable
    {
        // get the client area in desktop coordinates
        // this might need to be changed to support multiple screens
        RECT r;
        GetClientRect(hWnd, &r); // hWnd is our window handle
        POINT p = {0, 0};
        ClientToScreen(hWnd, &p);
        SetRect(&r, p.x, p.y, p.x + r.right, p.y + r.bottom);
        D3DXSaveSurfaceToFile(szFilename, D3DXIFF_JPG, surface, NULL, &r);
    }
    else
    {
        D3DXSaveSurfaceToFile(szFilename, D3DXIFF_JPG, surface, NULL, NULL);
    }
}
surface->Release();
It looks like the format and pool parameters of CreateOffscreenPlainSurface must be exactly these.
You might want to take a look at my Investigo project.
It uses a DirectX proxy DLL to intercept DirectX API functions.
There is already code in there to take screenshots during the call to Present, although it isn't yet accessible from the UI. You should be able to enable the code easily, though.
http://www.codeproject.com/Articles/448756/Introducing-Investigo-Using-a-Proxy-DLL-and-embedd
