Is there any way to handle more than one mouse, where the first acts as the usual mouse while the other is treated as digitizer input? The problem is how to register each mouse so it is treated differently, and how to use a mouse hook without capturing every mouse event.
I haven't written any code for this project yet; I'm just starting to collect information and am stuck on these questions. I'm going to use Delphi 7.
I do not know anything about it, but Multi-Touch Vista at CodePlex adds support for multiple mice.
Multi-Touch Vista is a user input management layer that handles input from various devices (touchlib, multiple mice, TUIO etc.) and normalises it against the scale and rotation of the target window. Now with multitouch driver for Windows 7.
http://multitouchvista.codeplex.com/
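If you end up distinguishing the mice yourself rather than relying on that layer, the Win32 Raw Input API is one option: every WM_INPUT message carries the handle of the physical device that produced it, so two mice can be told apart and routed differently. A minimal, untested sketch in C++/Win32 (not Delphi; in Delphi 7 you may have to declare these APIs yourself, but they are the same Win32 calls):

#include <windows.h>

// Register to receive WM_INPUT messages for all mice (HID usage page 1, usage 2).
void RegisterMice(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;            // generic desktop controls
    rid.usUsage     = 0x02;            // mouse
    rid.dwFlags     = RIDEV_INPUTSINK; // deliver input even when not focused
    rid.hwndTarget  = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// In the window procedure: each WM_INPUT identifies the physical mouse.
void OnWmInput(LPARAM lParam)
{
    RAWINPUT input;
    UINT size = sizeof(input);
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &input, &size,
                    sizeof(RAWINPUTHEADER));

    if (input.header.dwType == RIM_TYPEMOUSE)
    {
        HANDLE device = input.header.hDevice; // unique per physical mouse
        LONG dx = input.data.mouse.lLastX;    // relative movement
        LONG dy = input.data.mouse.lLastY;
        // Route by device handle, e.g. treat one mouse as the "digitizer".
    }
}

Note that Raw Input only tells you which device an event came from; it does not stop the second mouse from also moving the normal cursor. Suppressing that still needs a low-level mouse hook or driver-level filtering, which is where Multi-Touch Vista does the heavy lifting.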
Windows provides DRM functionality to applications that require it.
Some of them, however, have more protection than others.
As an example, take Edge (both Legacy and Chromium) or IE that use Protected Media Path. They get to display >720p Netflix content. Other browsers don't use PMP and are capped at 720p.
The difference in the protections is noticeable when you try to capture the screen: while you have no problems on Firefox/Chrome, in Edge/IE a fixed black image takes the place of the media you are playing, but you still see media control buttons (play/pause/etc) that are normally overlaid (alpha blended) on the media content.
Example (not enough rep yet to post directly)
The question here is mainly conceptual, and in fact it could also apply to systems with identical behavior, like iOS, which also replaces the picture when you screenshot or capture the screen on Netflix.
How does it get to display two different images on two different outputs (capture APIs with no DRM content, and the attached physical monitor screen with DRM content)?
I'll make a guess, and I'll start by excluding HW overlays. The reason is that the play/pause buttons are still visible on the captured output. Since they are overlaid (alpha blended) on the media on the screen, and alpha blending on HW overlays is not possible in DirectX 9 or later, nor with legacy DirectDraw, hardware overlays have to be ruled out. And by the way, neither d3d9.dll nor ddraw.dll is loaded by mfpmp.exe or iexplore.exe (version 11). Plus, I think hardware overlays are now considered a legacy feature, while Media Foundation (which the Protected Media Path is part of) is very much alive and maintained.
So my guess is that DWM, which is in charge of screen composition, is actually doing two compositions: either by forking the composition process at the point where it encounters a DRM area, feeding one output to the screen (with the DRM-protected content) and the other to the various screen-capturing methods and APIs, or by doing two entirely separate compositions in the first place.
Is my guess correct? And could you please provide evidence to support your answer?
My interest is in understanding how composition software and DRM are implemented, primarily on Windows. But how many other ways could there be to do it on different OSes?
Thanks in advance.
According to this document, both options are available.
The modern PlayReady DRM that Netflix uses for its playback in IE, Edge, and the UWP app uses the DWM method, which can be observed by the video area showing only a black screen when DWM is forcibly killed. This seems to be because modern PlayReady is only supported since Windows 8.1, which does not let users disable DWM easily.
I think both methods were used in Windows Vista through 7, but I have no samples to test. As HW overlays don't look that good with window previews, animations, and transparency, the method was probably switched depending on the DWM status.
For iOS, it seems a mechanism similar to the DWM method is implemented at the display server (SpringBoard?) level to present the protected content, which is processed in the Secure Enclave Processor.
I'm working on an application where the concept is that you can 'select' objects before actually placing them. What I wanted to do was have some low-quality objects on a shelf or something similar. When the user selects an object, he can then tap to place the high-quality version of it in his area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I could tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I want to automatically place the object (in this case a capsule) once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane is detected. Is this possible with Vuforia? I know it's doable with the HoloLens, but I would like to know if it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest which fires every frame a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list instead of the On Interactive Hit Test list of the "Plane Finder".
I've heard that Vuforia Fusion does not yet support ARCore (it supports ARKit), so it uses an internal implementation to simulate ARCore functionality; they are waiting for a final release of ARCore to support it. Many users have reported that their objects move even on ARCore-supported devices.
Windows 7 appears to only support 48x48 pixel mouse cursors natively. My dad is half blind, so this isn't good enough.
Looking around a little, I see that there is High DPI Cursor Changer Beta on SourceForge. Unfortunately, it states that it is limited to Windows 8 and above. Even if he did have Windows 8, I still don't think it would be big enough.
I've been thinking about some other alternatives which might get the job done. One is to use AutoHotKey to track the mouse position and cursor type, and, using that information, put up a larger bitmap under the cursor, displayed in a click-through window.
This seems doable, but I would like it if I could invert the information beneath the cursor as the inverted mouse cursor does. I'm thinking that this would require using the DirectX API, which I'm totally unfamiliar with. As I don't have a whole lot of time to devote to this, it would be great if someone could point out some key concepts and API calls so that I can move through this project as fast as possible.
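For the overlay part itself, my understanding is that a plain Win32 layered window (topmost, click-through) repositioned to follow the cursor would be enough to display the enlarged bitmap; DirectX isn't strictly needed for that. A rough, untested C++ sketch of that idea, with the actual painting of the big cursor bitmap and the timer loop left out:

#include <windows.h>

// Create a topmost, click-through, layered window to host a large cursor image.
// WS_EX_TRANSPARENT makes mouse clicks pass through to whatever lies below.
HWND CreateCursorOverlay(HINSTANCE instance)
{
    HWND overlay = CreateWindowEx(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        TEXT("STATIC"), TEXT(""), WS_POPUP,
        0, 0, 128, 128, NULL, NULL, instance, NULL);

    // Treat pure magenta as transparent so only the cursor shape is visible.
    SetLayeredWindowAttributes(overlay, RGB(255, 0, 255), 0, LWA_COLORKEY);
    ShowWindow(overlay, SW_SHOWNOACTIVATE);
    return overlay;
}

// Call from a timer (e.g. every 10-20 ms) so the overlay follows the real cursor.
void FollowCursor(HWND overlay)
{
    POINT pt;
    GetCursorPos(&pt);
    SetWindowPos(overlay, HWND_TOPMOST, pt.x, pt.y, 0, 0,
                 SWP_NOSIZE | SWP_NOACTIVATE);
}

As for inverting the pixels beneath the cursor, GDI raster operations (for example PatBlt with DSTINVERT on a screen DC) can do that without touching DirectX, though mixing direct screen drawing with DWM composition can be finicky.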
I've also looked into zooming in, but it seems that is also problematic, as it requires him to get in and out of that state and zoom in and out fairly easily, which he can't, since he can't really touch type and his fingertips don't have much sensation, making it hard to navigate the keyboard effectively.
If there are any other ideas, I'd be interested in hearing about them as well.
I haven't tried Bill Myers' solution, so I suspect the cursors are still limited to the maximum size, but check out Bill's link: https://www.bmyers.com/public/high_visibility_cursors.cfm
I have searched a lot for a way to move the desktop cursor using OpenCV, but all I found were demos by people who had already done it.
What I know is that the function setMouseCallback gives me the coordinates of the mouse and more, but I need to be able to tell the mouse which position to move to.
So can anybody tell me how I can do it using OpenCV in C++?
You cannot do this in OpenCV. OpenCV is a computer vision library focused on analysing and manipulating images, and although it provides simple user interface (UI) elements, don't be fooled into thinking it is a powerful user-interaction tool.
Now, if you want to move the cursor on Windows you can use SetCursorPos, which I believe works on most versions:
SetCursorPos(X,Y)
e.g.
SetCursorPos(100, 200)
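To tie the two together, here is a small, purely illustrative Windows-only sketch that pairs OpenCV's setMouseCallback with SetCursorPos; clicking in the OpenCV window moves the desktop cursor (the window-to-screen coordinate mapping is deliberately ignored here):

#define NOMINMAX
#include <windows.h>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

// Called by OpenCV whenever a mouse event happens inside the named window.
static void onMouse(int event, int x, int y, int /*flags*/, void* /*userdata*/)
{
    if (event == cv::EVENT_LBUTTONDOWN)
    {
        // Illustrative only: the window coordinates are used directly as
        // screen coordinates. A real application would map them through the
        // window's position, scaling and monitor layout first.
        SetCursorPos(x, y);
    }
}

int main()
{
    cv::Mat canvas(480, 640, CV_8UC3, cv::Scalar(40, 40, 40));
    cv::namedWindow("demo");
    cv::setMouseCallback("demo", onMouse);
    cv::imshow("demo", canvas);
    cv::waitKey(0);
    return 0;
}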
I'm looking to create a hardware-accelerated DirectX (9 at the moment) window on a secondary screen. This screen is connected to the same graphics card as the primary screen (at least at the moment).
Currently, when I try to open the window on the secondary screen, either by window position or by dragging it there, CPU usage jumps by about 10%, which seems to indicate that Windows is switching to a software fallback rather than using hardware acceleration.
The machine is Windows XP running an NVIDIA graphics card (varying cards, as this runs on several machines) with the latest driver. It is also running CUDA at the same time to produce the images, if that matters. The programming language is C++, with manual window and message queue creation; no toolkit is used at the moment to manage the GUI.
Thanks
When you call CreateDevice, make sure to use the index of the monitor you are targeting. The standard D3DADAPTER_DEFAULT value is just 0, which is the primary monitor. DirectX is a bit kludgy that way, but if the window is on a different monitor than is specified in CreateDevice, then it will silently render in a framebuffer targeting the first monitor, then buffer copy to a framebuffer on the second monitor using the OS window manager.
So, the quick and dirty solution is to use CreateDevice(1, ...) instead, since that is almost always how a dual-monitor setup is indexed.
A more robust solution is to use MonitorFromWindow(hwnd) to find the monitor that the window covers the most, then iterate through available d3d adapters looking for one that returns the same monitor handle using GetAdapterMonitor(). If you have a system with more than two monitors, or if you don't know in advance what monitor you want and just have an HWND, then you need the longer method.
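A sketch of that lookup in Direct3D 9 (error handling omitted; assumes d3d is a valid IDirect3D9* and hwnd is the window the device is being created for):

#include <windows.h>
#include <d3d9.h>

// Find the D3D9 adapter whose monitor the window (mostly) sits on, so that
// CreateDevice renders directly on that monitor instead of blitting across.
UINT AdapterForWindow(IDirect3D9* d3d, HWND hwnd)
{
    HMONITOR monitor = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    UINT count = d3d->GetAdapterCount();
    for (UINT i = 0; i < count; ++i)
    {
        if (d3d->GetAdapterMonitor(i) == monitor)
            return i;
    }
    return D3DADAPTER_DEFAULT; // fall back to the primary adapter
}

// Usage when creating the device:
//   UINT adapter = AdapterForWindow(d3d, hwnd);
//   d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, hwnd,
//                     D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);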