I am trying to use DirectInput to capture XBOX One controller input in a C# WinForms application. The issue I am having is: when the form has focus, it captures input just fine, but when the window loses focus, I don't get any feedback. On Windows 7, this isn't an issue.
I have tried other controllers on Windows 10: PS4, Logitech, Steering Wheels, etc... Everything works as expected: When the window loses focus, I still get feedback. It's just the XBOX One controller on Windows 10.
I thought maybe it had something to do with this line:
dev.SetCooperativeLevel(_ctlParent, CooperativeLevel.Background | CooperativeLevel.Nonexclusive);
But, even if I take that line out, everything still acts the same.
It seems like the XBOX One controller ignores the CooperativeLevel.Background flag and adds the CooperativeLevel.Foreground flag. Here is some more info about the flags.
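For reference, here is roughly what that managed call maps to at the native DirectInput level (just a sketch; the device and window handle are assumed to already exist):

#define DIRECTINPUT_VERSION 0x0800
#include <windows.h>
#include <dinput.h>

// Ask DirectInput to keep delivering input while the window is unfocused.
// The XBOX One pad appears to ignore DISCL_BACKGROUND on Windows 10.
void SetBackgroundCooperativeLevel(IDirectInputDevice8* device, HWND hwnd)
{
    device->SetCooperativeLevel(hwnd, DISCL_BACKGROUND | DISCL_NONEXCLUSIVE);
}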
Is anyone else familiar with this issue who has figured out a workaround?
Been exploring this myself regarding XBox One controllers. Each API has its own set of limitations when it comes to XBox One and XBox360 controllers.
Here's a summary of what I've learned:
Windows.Gaming.Input ref
Unlimited(?) number of controllers on Windows 10.
(Until recently I thought the limit was 8, but I've tested with 9 XBox 360/XBox One controllers, so the MAX_PLAYER_NUMBER=8 found in DirectXTK does not reflect a real limit for Windows 10. ref)
Limitations:
Controllers can’t be accessed in the background...
(Not 100% sure though. I haven't gotten the controllers to work in the background myself and documentation says this limitation exists for UWP apps. ref )
Only works on UWP devices (e.g. Windows 10, XBoxOne, Windows Tablets, etc)
The Gamepad class only supports Xbox One certified or Xbox 360 compatible gamepads (ref), other gamepads can be accessed via the RawGameController class (ref).
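A minimal sketch of reading a pad through Windows.Gaming.Input from C++/WinRT (assuming a Windows 10 SDK project; note Gamepads() can still be empty right at startup, before the first GamepadAdded event has fired):

#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Gaming.Input.h>
#include <cstdio>

#pragma comment(lib, "windowsapp")

using namespace winrt::Windows::Gaming::Input;

int main()
{
    winrt::init_apartment();

    // Gamepads() only lists Xbox-class controllers; everything else shows up
    // through RawGameController instead.
    auto pads = Gamepad::Gamepads();
    std::printf("Connected gamepads: %u\n", pads.Size());

    if (pads.Size() > 0)
    {
        GamepadReading reading = pads.GetAt(0).GetCurrentReading();
        std::printf("Left stick: %f %f\n", reading.LeftThumbstickX, reading.LeftThumbstickY);
    }
}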
XInput ref
API for accessing 4 XInput compatible controllers (e.g. XBox 360 or XBox One controllers)
XInput controllers can be accessed in the background (see the polling sketch after this list).
Limitations:
Max 4 controllers.
Only XInput capable devices.
Unable to activate the extra 2 rumble motors in XBox One controller triggers. (Use Windows.Gaming.Input to do that.)
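A minimal polling sketch with XInputGetState (plain Win32 C++; linking against xinput.lib via a pragma is an assumption about your build setup):

#include <windows.h>
#include <Xinput.h>
#include <cstdio>

#pragma comment(lib, "xinput.lib")

int main()
{
    for (DWORD user = 0; user < XUSER_MAX_COUNT; ++user)   // XUSER_MAX_COUNT == 4
    {
        XINPUT_STATE state = {};
        if (XInputGetState(user, &state) == ERROR_SUCCESS)
        {
            // XInputGetState reports state whether or not your window has focus,
            // which is what makes it attractive for the background case above.
            std::printf("Pad %lu: buttons=0x%04X LT=%u RT=%u\n",
                        user,
                        (unsigned)state.Gamepad.wButtons,
                        (unsigned)state.Gamepad.bLeftTrigger,
                        (unsigned)state.Gamepad.bRightTrigger);
        }
    }
    return 0;
}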
DirectInput ref
The oldest of these APIs.
Unlimited number of controllers (?)
Limitations:
XBox One controllers cannot be accessed in the background. (Bug or a feature?)
XBox 360 and XBox One controllers have triggers on the same axis, no guide button and no rumble.
Windows Store Apps can’t use DirectInput. ref
Microsoft no longer recommends using DirectInput. ref
Raw Input ref
Unlimited number of controllers (Lacking reference. Tested with 9 controllers.)
XBox One and other controllers can be accessed in the background (see the registration sketch after this list).
(Use this source code to try it out: ref)
Limitations:
XBox 360 and XBox One controllers have triggers on the same axis, no guide button and probably no rumble. (Tested using this source code ref)
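A sketch of the Raw Input registration that makes background delivery work, using RIDEV_INPUTSINK (hwnd is assumed to be a window you have already created, and RegisterGamepadRawInput is just an illustrative name):

#include <windows.h>

bool RegisterGamepadRawInput(HWND hwnd)
{
    RAWINPUTDEVICE rid[2] = {};

    rid[0].usUsagePage = 0x01;             // HID_USAGE_PAGE_GENERIC
    rid[0].usUsage     = 0x04;             // HID_USAGE_GENERIC_JOYSTICK
    rid[0].dwFlags     = RIDEV_INPUTSINK;  // keep delivering input when unfocused
    rid[0].hwndTarget  = hwnd;

    rid[1].usUsagePage = 0x01;
    rid[1].usUsage     = 0x05;             // HID_USAGE_GENERIC_GAMEPAD
    rid[1].dwFlags     = RIDEV_INPUTSINK;
    rid[1].hwndTarget  = hwnd;

    return RegisterRawInputDevices(rid, 2, sizeof(RAWINPUTDEVICE)) != FALSE;
}

// Once this succeeds, the window procedure receives WM_INPUT messages whose
// RAWINPUT payload you read with GetRawInputData.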
The drivers for the Xbox 360 Common Controller and the Xbox One Controller both emulate "HID" for use with the legacy DirectInput API for older games, but the emulation tends to have some quirks like this one.
Probably the best solution is to use XINPUT for these controllers and only fall back to DirectInput for legacy HID controllers. For Windows 10 DirectX 12 / UWP apps you can use Windows.Gaming.Input directly as well.
See XINPUT and Windows 8 and DirectX Tool Kit: Now with GamePads.
Related
I need to build a web based interface for a quick personal project and I'd like to use electron.
It's essentially a kind of gaming Zoom app with lots of bells and whistles.
The problem is, I need to have a main window that will run in fullscreen, which I capture with OBS and broadcast to a popular streaming service.
I also need another window, what I refer to as the Control Panel, to control elements in the main window that I work with behind-the-scenes, for showing graphics, playing sound effects, controlling video settings etc.
If I were to code this app for a web browser, I would have absolutely no problem, as I can create the secondary window from my main window (window.open) and the 2 windows can easily refer to one another and their documents.
In electron however, windows are essentially contained boxes. Communication between 2 windows has to be channelled via ipc essentially as encoded commands and interpreted by the main process...
So to control loads of elements and entities from my control panel, I'd either have to implement a complex messaging system, which frankly seems MASSIVELY unnecessary, or to be hacky I could simply issue javascript commands as strings to the other window with BrowserWindow.webContents.executeJavaScript()... but yuck.
I could also contain all the logic-y stuff in the main process, but again, this still requires a messaging system that I don't have the time to implement.
I guess I want to know if there's a better way.
I'm just imagining a scenario in which a developer has written a web-app that uses multiple windows and lots of direct window-to-window communication, and now they need to migrate it to electron... how would they best go about it without re-writing a ton of code?
Found my answer, 'nativeWindowOpen':
main_window = new electron.BrowserWindow({
    webPreferences: {
        nativeWindowOpen: true
    }
});
I can now use window.open() from my main window and refer directly back and forth.
So now I can do this:
control_panel = window.open("./control_panel.html", "Control Panel", "width=720,height=480")
console.log(control_panel.document)
And from the opened window:
console.log(window.opener.document)
And I can refer to variables and document elements.
I couldn't find anyone mentioning this useful 'nativeWindowOpen' option when I was scouring stackoverflow et al earlier, only found it by reading documentation.
I have an application which has sound. I have a global property to mute the sound. The problem is, there are so many different things that can make sound, I would hate to iterate through the different class types and mute/unmute their sound. Instead, I'm looking for a way to mute the sound at a global, application level. I don't mean muting the entire system volume either.
One scenario: in Windows 7, you can open the Volume Mixer and adjust the volume of individual applications. While I don't intend to change that particular value (as I know it's Windows 7 specific), I would like to change the volume of everything in my application all at once. I would also need the ability to completely mute the sound of everything in my application. This needs to be compatible with Windows XP and above. I am assuming it will involve Windows API calls, but I have no idea which calls to make.
XP does not support per-application volume control. That capability was added in Vista. So what you are attempting to do cannot be done in XP by fair means.
There is software called IndieVolume that retrofits per-app volume control to XP. I can only imagine it does so by means of low-level hacking, DLL injection and so on. I doubt that's really an option for you.
What you're asking for isn't possible on XP; the OS simply does not support per-application volume levels.
You can accomplish what you want by creating a settings class that keeps things like SoundActive: Boolean or PlaySounds: Boolean or something similar. Place it in its own unit, and have an initialization section that creates an instance of it and a finalization section that frees it (making it effectively a collection of global values).
Each unit that needs access to these settings simply uses the unit containing them, and adjusts behavior accordingly. So each of your child classes or forms or whatever would just need a check added:
if CurrentSettings.PlaySounds then
// Code that makes sounds, plays music, whatever.
The settings class can also contain methods that keep track of the current volume level (on XP, the system-wide level), and methods to increase or decrease volume using the MMSystem volume functions (there are tons of examples here and through Google of doing so). Your app can then use the OnActivate and OnDeactivate events to set the volume level when your app gains focus, and restore it to the proper volume when your app loses focus.
In Vista and higher, you can use the IAudioEndpointVolume interface I mentioned earlier and either the GetMasterVolumeLevel or SetMasterVolumeLevel methods to control system-wide volume (I have an example of doing this, along with the appropriate MMDevAPI interface definitions), or device-level volume (using IMMDevice.Activate to select the proper device first, and then the above IAudioEndpointVolume methods on the device interface received from IMMDevice.Activate in the ppInterface parameter).
For individual applications, you use the ISimpleAudioVolume interface, which has four methods: GetMasterVolume and SetMasterVolume, which control the volume level for your application's audio session, and GetMute and SetMute to allow you to retrieve the current mute flag value or set it respectively. (Larry Osterman of MS, who was one of the developers who worked on the new audio support in Vista and Win7, has a great starting point article on his blog about the types of audio in the new API and when to use each of them.)
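A hedged sketch of per-application mute through ISimpleAudioVolume on Vista and later (shown here in C++, though the same MMDevAPI interfaces can be declared and called from Delphi; error handling is trimmed, SetAppMute is an illustrative name, and COM is assumed to already be initialized on the calling thread):

#include <windows.h>
#include <mmdeviceapi.h>
#include <audiopolicy.h>

bool SetAppMute(BOOL mute)
{
    bool ok = false;
    IMMDeviceEnumerator* enumerator = nullptr;
    IMMDevice* device = nullptr;
    IAudioSessionManager* manager = nullptr;
    ISimpleAudioVolume* volume = nullptr;

    if (SUCCEEDED(CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                                   __uuidof(IMMDeviceEnumerator), (void**)&enumerator)) &&
        SUCCEEDED(enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device)) &&
        SUCCEEDED(device->Activate(__uuidof(IAudioSessionManager), CLSCTX_ALL, nullptr,
                                   (void**)&manager)) &&
        // A null session GUID selects the default audio session for this process.
        SUCCEEDED(manager->GetSimpleAudioVolume(nullptr, FALSE, &volume)))
    {
        ok = SUCCEEDED(volume->SetMute(mute, nullptr));
    }

    if (volume) volume->Release();
    if (manager) manager->Release();
    if (device) device->Release();
    if (enumerator) enumerator->Release();
    return ok;
}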
It's conceptually possible to determine at runtime which operating system you're using, and to programmatically switch between the MMSystem functionality on XP and earlier and the MMDevAPI functionality on Vista and higher. Expecting someone here to provide the code for doing so is a little unreasonable, however. The links I've provided should get you started on the right track, and when you run into snags along the way, specific questions about working through those snags would be great.
I am trying to figure out if SetCooperativeLevel is still available in DX11. If it is no longer supported, what is the new API to get exclusive input from input device? Thanks.
The last version of DirectInput was DirectInput8. It has not been significantly changed since then. Use of DirectInput to handle keyboard and mouse is not recommended, use the Win32 messages instead. For legacy gamepads and joysticks, you can continue to use DirectInput for Win32 desktop apps, but it is not available for Windows Store, Windows phone, or Xbox One apps.
For Xbox 360 Common Controllers on Windows, you should use XINPUT. See GamePad in DirectX Tool Kit for a nice helper for using it for gamepads.
PS: For some details on XInput 1.4 on Windows 8.0 and later vs. XInput 1.3 on Windows 7 et al., see this post.
To handle 'input focus' you should monitor the Win32 message WM_ACTIVATEAPP. If wParam is TRUE, then you are foreground. If wParam is FALSE, then you are losing focus.
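A minimal window-procedure sketch of that check (g_appHasFocus is just an illustrative name):

#include <windows.h>

static bool g_appHasFocus = true;   // illustrative flag your input code can consult

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ACTIVATEAPP:
        // wParam is TRUE when the app is becoming foreground,
        // FALSE when it is losing focus.
        g_appHasFocus = (wParam != FALSE);
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}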
We need to drive 8 to 12 monitors from one pc, all rendering different views of a single 3d scenegraph, so have to use several graphics cards. We're currently running on dx9, so are looking to move to dx11 to hopefully make this easier.
Initial investigations seem to suggest that the obvious approach doesn't work - performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to go about utilising several cards simultaneously from a single process with dx11?
I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes onto this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11:
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating through these.
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) using D3D11CreateDevice for each of the adapters. Maybe you want a different thread for interacting with each of your devices. This thread may have a specific processor affinity if that helps speed things up for you.
Once each adapter has its own device, create a swap chain and render target for each output. You can also create your depth-stencil view for each output while you're at it.
The process of creating a swap chain will require your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain. You can just create the windows as hosts for your swap chain and then forget about them entirely afterwards.
For rendering, you will need to iterate through each Output of each Device. For each output change the render target of the device to the render target that you created for the current output using OMSetRenderTargets. Again, you can be running each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
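A hedged sketch of the enumeration and per-adapter device creation described above (window and swap-chain creation per output is omitted, and CreateDevicePerAdapter is an illustrative name):

#include <windows.h>
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>

#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

void CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
    {
        // One device per adapter; with an explicit adapter the driver type
        // must be D3D_DRIVER_TYPE_UNKNOWN.
        ComPtr<ID3D11Device> device;
        ComPtr<ID3D11DeviceContext> context;
        if (FAILED(D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     &device, nullptr, &context)))
            continue;

        // Each monitor attached to this adapter is an IDXGIOutput; you would
        // create a window, swap chain and render target for each of them.
        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
        {
            DXGI_OUTPUT_DESC desc = {};
            output->GetDesc(&desc);
            // desc.DeviceName identifies the monitor (e.g. \\.\DISPLAY1).
        }
    }
}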
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!
Maybe you don't need to upgrade DirectX.
See this article.
Enumerate the available devices with IDXGIFactory, create an ID3D11Device for each, and then feed them from different threads. Should work fine.