I'm wondering if there's an easy way to tell which input device triggered a particular GUI event.
For example: A TButton.OnClick event gets fired. Did the user trigger it with a keyboard press (shortcut, Enter key for default button, space key for a focused button, etc.) or was it triggered with a mouse click? Is there any easy way to tell?
The reason I'd like to know is so that I can add keyboard-usage hints to some of our applications when the user uses the mouse to initiate actions that could also be done with the keyboard. Our systems on the shop floor are in pretty dusty/dirty environments, and mice tend not to hold up well in them. Also, in many cases there's simply not much room for a mouse to be used. (No, keyboards without numeric keypads are not a solution; the keypads are relied on too heavily.)
However, since our apps run in Windows, users tend to simply use the programs like they would at home -- with a mouse. There's nothing particularly wrong with that, but we've worked hard to optimize the input workflow to be keyboard friendly as well. It'd be nice if there was a low-impact way to indicate to our users that there's a way for them to do the things they're doing without having to grab the mouse.
There's no way to tell from within OnClick. However, you can also attach events to a control that will fire when the mouse rolls over it, which would probably be more appropriate for what you're trying to do anyway. Take a look at the OnMouseEnter and OnMouseLeave events. Also, if you really want something specific to happen when the mouse is clicked, you can attach it to OnMouseUp.
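For example, a minimal sketch along those lines (the control and label names and the hint text are placeholders, not anything from your project):

// Show a keyboard-usage hint whenever the mouse is over the button,
// and hide it again when the mouse leaves.
procedure TForm1.btnProcessMouseEnter(Sender: TObject);
begin
  lblKeyboardHint.Caption := 'Tip: press Enter to run this action from the keyboard';
  lblKeyboardHint.Visible := True;
end;

procedure TForm1.btnProcessMouseLeave(Sender: TObject);
begin
  lblKeyboardHint.Visible := False;
end;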
It seems to be consistent browser behaviour that when the user drags out of an a element (to avoid completing the click on releasing the mouse button), that element retains focus until the user clicks somewhere else.
Is there an accessibility related reason for this?
It seems to be a common user interaction to cancel a click 'halfway through', and in macOS, at least, this is the default menu bar mouse behaviour.
I guess I'm looking for a way to force a drop of focus on mouseout but I don't know whether this is bad for accessibility.
Incidentally, this looks visually bad whether a button is used as the UI object or an a element is styled as a button.
Whether the focus remains on the button/link or if you force it away will not cause an accessibility issue with regards to WCAG conformance. It does not fail any guidelines.
It comes close to failing WCAG 3.2.1 On Focus, but I think the timing of your events technically makes it pass. The mousedown event causes the focus event, and you are not doing anything with the focus event. It's when the mouseup event happens that you're trying to move focus, so in theory you are not changing the context on the focus event.
I'm not sure that means you should do what you have proposed from a UX perspective but there probably aren't a lot of people that rely on the default behavior.
This time it's something I had trouble finding a title for...
I'm using Windows hooks (TMOUSEHOOKSTRUCT ~ WH_MOUSE) to follow mouse movements and mouse button clicks...
This all works fine; however, I want to know whether there is a way to determine the source of the mouse move/click.
Ideally I'd like to identify it by device ID, or at least by source type (mouse, trackball, touchscreen...), or at minimum whether it came from a hardware source at all (i.e. was it hardware, a mouse recorder macro, or another app's "SetCursor" procedure).
The main goal here is to find a way to block mouse recorders from making clicks inside my app (while my app is in focus, I'd run a mouse hook, and if the move/click was made by software I'd ignore any actions it makes).
This could be asked as "How to block mouse recorder macros in my app", but I'd rather see an actual solution for identifying the source as well (and as the primary answer), as it might have other uses too.
Thanks.
Edit:
One way would be to check whether the mouse movement happened in a single jump or gradually (like an actual physical mouse move). However, that approach still has a problem with touchscreens.
On the other hand, Windows does detect when touch is used (the cursor changes to the on-screen dot), so there must be a way to at least separate mouse events from touch events -> knowing that would already solve a lot, combined with rejecting snap-actions for mouse input only...
Raymond Chen summed it up in this blog post
There's no point discussing the possibility that the sender of the message is playing tricks and lying to you because (1) your program should just go along with the ruse and respond to fake [menu] messages as if they were real [menu] messages, because (2) there's no way to tell that you're being lied to anyway.
And this blog post of his shows that all input ultimately goes through the same queue, so even Windows does not know the difference between real input and simulated input.
You should look into using the Raw Input API to receive WM_INPUT messages directly from hardware. SetCursor(), SendInput(), macro players, etc. cannot simulate those messages. And you would be able to differentiate between input from different devices, but not necessarily the type of mouse if multiple mouse devices are being used. Although, since a trackball might have more capabilities than a standard mouse, it might represent itself as a HID device instead of a mouse device.
As for touchscreens, that type of input generates WM_TOUCH messages, which cannot be simulated, either.
So, between WM_INPUT and WM_TOUCH, you can differentiate between hardware mouse input and touchscreen input, at least. Beyond that, simulated input is going to generate standard WM_MOUSE... and WM_(L|M|R|X)BUTTON... messages, which cannot differentiate between hardware input and simulated input, not even in lower-level mouse hooks. You would probably need to keep track of the WM_INPUT/WM_TOUCH messages and match them up to the other messages, and if you cannot find a match then assume input is being simulated.
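As a rough illustration of the raw-input approach, here is a minimal Delphi sketch (the form and method names are made up). It assumes a Delphi version where the raw input declarations (RAWINPUTDEVICE, GetRawInputData, WM_INPUT, etc.) ship in Winapi.Windows / Winapi.Messages; older versions need to declare them manually.

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils, Vcl.Forms;

type
  TRawInputForm = class(TForm)
  private
    procedure WMInput(var Msg: TMessage); message WM_INPUT;
  public
    procedure RegisterRawMouse; // call once, e.g. from OnCreate
  end;

procedure TRawInputForm.RegisterRawMouse;
var
  Rid: RAWINPUTDEVICE;
begin
  Rid.usUsagePage := $01;          // HID usage page: generic desktop
  Rid.usUsage := $02;              // HID usage: mouse
  Rid.dwFlags := RIDEV_INPUTSINK;  // deliver input even when not focused
  Rid.hwndTarget := Handle;
  if not RegisterRawInputDevices(@Rid, 1, SizeOf(Rid)) then
    RaiseLastOSError;
end;

procedure TRawInputForm.WMInput(var Msg: TMessage);
var
  Size: UINT;
  Input: RAWINPUT;
begin
  Size := SizeOf(Input);
  if GetRawInputData(HRAWINPUT(Msg.LParam), RID_INPUT, @Input, Size,
    SizeOf(RAWINPUTHEADER)) <> UINT(-1) then
  begin
    if Input.header.dwType = RIM_TYPEMOUSE then
      // header.hDevice identifies which attached device produced this input;
      // it can be compared against the handles from GetRawInputDeviceList
      OutputDebugString(PChar(Format('raw mouse input from device %d',
        [NativeInt(Input.header.hDevice)])));
  end;
  inherited;
end;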
I'm trying to activate the Hint of a control in a different Delphi application when it receives focus. I'm using a hook to identify the focused control, and then sending WM_MOUSEMOVE, which I thought would activate the Hint of that control; the handle is the control itself and the lParam holds the Left and Top of the control. The control fires its OnMouseMove event, but the Hint never shows. When I use SetCursorPos the Hint shows, but I need to show the Hint without moving the cursor onto that control. Can you please help me with this? Thank you in advance... by the way, I'm using Delphi XE4.
The question actually being asked is how to show the hint of a control that resides in another application (I'm afraid that cannot be done without hooking that application); the title is "Delphi - Activating the Hint from another control of Application".
First things first: it can be done without knowing what language the other app was written in, but it is too complex to put here (and I am not an expert in that kind of coding; I also dislike apps that work that way).
Second: the main idea is to hook the other application. Search on Google for the code of those tools that can draw a rectangle around whatever object the mouse is passing over; such an application stays minimized while doing it (I do not remember its name).
What such an app does: as you move the mouse over the screen, it overlays a rectangle on the control the mouse is over; then, if you press the Print Screen key, that small region is the only thing that goes to the clipboard. One such app I saw had extra functions: it could move those controls with the keyboard cursor keys, hide/enable/disable a control, and even make invisible controls visible, etc. I saw it working on my computer, and for fun it is nice, and for debugging or getting extra things out of some apps it is also great (make some menus visible and enabled and then use those functions).
Please, please understand that I am against piracy and also against using such apps to unlock code... some apps require payment to enable certain menus, but the code is already there, so there is no need to change the EXE to have/use those menus; just using this kind of app turns a limited app into an unlimited one (simply enable or show the hidden menus and voilà).
Note: to unhide menus, the mouse pointer does not need to be over the app; it can be anywhere and is not moved.
The point I want to make is: any app can move or alter any control in any other running app (at least on Windows), so maybe there is a way to show such a hint.
In the past I used such an app (sorry, I do not remember the name) to debug my own apps, so I did not need to recompile in cases where something was wrongly hidden; it also works with buttons, labels, texts, combos, memos, etc.
Now my small problem is: I need exactly what the title says, but I cannot make it work.
It must work like this:
The mouse position must be irrelevant (it may even be outside the application).
When a focused button is pressed with the keyboard (Space or Enter), or just after some code runs somewhere in my application, I want to show the Hint of a specific TEdit for a short period of time.
I could not get the Hint to show unless the mouse pointer is over that TEdit, but I want/need the mouse pointer not to be over it, and I don't want it to jump to the TEdit either.
The concept for showing that Hint: after running some code that changes something, show the extra information associated with it.
Example:
A button loads a file using an open dialog, and the filename is put into a read-only TEdit (so the user can copy the text but not change it); I want the extra info that I put in that TEdit's .Hint to be shown immediately.
The hint is used so the window is not overloaded with a lot of fields (TLabels) just to show that file's data.
Simple idea: the Hint shows the timestamp and size of the selected file.
P.S.: Not much related (since I am trying with a normal plain-text hint), but hints can also store a full HTML page and, with 3rd-party tools, be shown as an HTML hint, so they could show a lot of info about that file (its content, etc.) in a web-based format; as I say, I am first trying with standard plain-text hints.
In order to show a hint programmatically, you need to call TApplication's ActivateHint method, to which you pass the position parameter.
http://docwiki.embarcadero.com/Libraries/XE3/en/Vcl.Forms.TApplication.ActivateHint
Based on the position parameter, the Application automatically finds which control is at that position and shows its hint.
NOTE: the position parameter is in screen coordinates (in pixels), not your control's client coordinates, so you will have to use the ClientToScreen method to convert your coordinates appropriately.
You can see a simple example of how to use this here: https://stackoverflow.com/a/15031208/3636228
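For reference, a minimal sketch within a single application, along the lines of that example (Edit1 is an illustrative control name):

// Show the hint of Edit1 without moving the mouse: ActivateHint takes a
// point in screen coordinates and shows the hint of the control found there.
procedure TForm1.ShowEditHint;
var
  HintPos: TPoint;
begin
  HintPos := Edit1.ClientToScreen(Point(Edit1.Width div 2, Edit1.Height div 2));
  Application.ActivateHint(HintPos);
end;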
Now, if you need to do this from another application, then you will have to add some communication mechanism between the two applications so that one can send a message telling the other to show the hint at a specific position. But this does require you to be able to change both applications.
EDIT: This works with VCL applications but I'm not sure if it would work with FireMonkey applications.
I am programming a Delphi (XE3) application where the mouse position is important, but I would like to be able to use a second mouse to set breakpoints without moving the primary mouse position. I may be pressing Shift or Ctrl in the application I am trying to debug, so alt-tabbing to the IDE and setting a breakpoint with the keyboard won't work. Can Windows 7 easily be set up to do this?
It's possible to attach multiple keyboards and mice to a computer, and various video games can take advantage of the multiple input devices, but the OS in general does not take advantage of that. No matter how many keyboards and mice you attach, there's still just one input queue and one cursor on the screen.
If all you need is to set breakpoints without moving the mouse, then you can navigate the input caret to the desired line with the keyboard and then press F5 to toggle breakpoints.
If you need to be able to debug without interfering with the program at all, then you might need to use remote debugging. Although the documentation suggests using Remote Desktop to operate the remote program while you're sitting at the local system, that's not what you want to do in this situation because you'll still have just one set of input devices. Instead, log on to the remote computer from elsewhere (either directly, or via Remote Desktop on a third computer). It'll help to have two computers you can access from the same chair.
Question:
Can anyone point to an article or code samples anywhere on how to
provide BOTH editing AND range selection in a TStringGrid?
Yes, I KNOW there are third-party grids that do this, but it's
frustrating that the built-in grid lacks this basic capability.
Background:
It's pretty normal to expect to be able to both edit a cell in a grid,
and also to select a range of cells such as for a Copy operation.
As delivered, TStringGrid doesn't do that. It's either/or. In fact, the
docs tell us about the grid Options, "When goEditing is included in
Options, goRangeSelect has no effect".
However, it looks like it may be possible to do editing and range selects
in a TStringGrid anyway!!! Through careful use of the mousedown,
mouseup, selectcell and exit events, you can get dang close by switching
editing elements on and off at the right times. But I still don't have
it perfect, and that only covers mouse use, not keyboard changes.
I have not used the TStringGrid for this, so I can't provide a specific answer. But am I right in assuming you can manually (in code) start a cell being edited? That link implies it is possible even if the grid doesn't have goEditing included in its Options. (See below to work around this if this is not true.)
If so, I'd suggest the following approach:
Combined selection and edit behaviour
I find this is a good, Windows-standard-behaviour sort of approach:
Leave the grid in selection mode, so mouse and keyboard interaction selects cells
Trigger a cell being edited yourself, based on certain criteria (I think you are on the way to doing this from what you said in your last paragraph.) There are common ways to trigger editing, and the following criteria are what my programs follow when they do something similar with other controls:
Selection is normal. Ie, click to select, click and drag to multi-select, use the keyboard arrows and Shift or Control to select, etc.
A cell enters edit mode when either:
A cell is selected and the user presses Enter or F2 (F2 is the standard "Rename" or "Edit" shortcut, which works in a number of programs)
The user "slow-double-clicks" on a cell - ie, slow-double-clicks to select and edit, or clicks again, after a pause, on an already-selected cell. This mimics Explorer's behaviour, where if a file is selected and you later click on it, it enters the inline edit/rename mode. To implement this, record when a cell was last clicked (and selected.) If it is clicked again, and if the time is greater than GetDoubleClickTime then they have clicked twice, slowly, and enter edit mode. This allows you to distinguish between the first click to select, a double-click (to perform some kind of action), and a slow second click, to enter edit mode.
I also tend to check the mouse position, so that if an object is slow-double-clicked and it wasn't first selected (ie, this both selects the object and then enters edit mode) I verify the mouse hasn't moved very much. I use GetSystemMetrics to find the double-click distance, and check that the slow double click was within this box. (Because it's not a true doubleclick, I actually check the distance times 2. My action code is:
const int iMAX_MOVE_AMOUNT = ::GetSystemMetrics(SM_CYDOUBLECLK) * 2; (sorry, C++ not Delphi, but should be convertible easily enough!)
but I'm actually not certain if this is completely and utterly 100% to Windows guidelines. In practice users find it works as they expect, though.)
That should let you change between selecting and editing at the appropriate times with both the keyboard and the mouse.
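Here is a rough Delphi sketch of that slow-second-click test (FLastClickTick and FLastClickPos are assumed to be private fields of the form or grid descendant; all names are illustrative):

procedure TForm1.StringGrid1MouseUp(Sender: TObject; Button: TMouseButton;
  Shift: TShiftState; X, Y: Integer);
var
  MaxMove: Integer;
begin
  // Tolerated movement: double the system double-click rectangle height
  MaxMove := GetSystemMetrics(SM_CYDOUBLECLK) * 2;
  if (GetTickCount - FLastClickTick > GetDoubleClickTime) and
     (Abs(X - FLastClickPos.X) <= MaxMove) and
     (Abs(Y - FLastClickPos.Y) <= MaxMove) then
  begin
    // Second click on (roughly) the same spot after the double-click
    // interval has elapsed: switch the currently selected cell to edit mode.
  end;
  FLastClickTick := GetTickCount;
  FLastClickPos := Point(X, Y);
end;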
Miscellaneous thoughts
You may find some of this is cleaner and easier to implement by subclassing TStringGrid and creating a new component. That will allow you to implement this in normal code and override the inbuilt behaviour (rather than event handlers) while keeping it invisible to the form code. It will also give you lower-level access to the mouse events or Windows messages than are exposed simply through events such as OnMouseDown. Finally, if there are problems with showing the editor when goEditing is included in Options, this will allow you to change that behaviour. You could also add your own events if you want your code to respond to certain things happening, such as creating an OnBeginEdit event, say.
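Purely as an illustration of that approach, a skeleton descendant might look like this (the class name is made up):

type
  TEditSelectGrid = class(TStringGrid)
  protected
    // Override the low-level mouse handling so the select-vs-edit decision
    // lives inside the component instead of in form event handlers.
    procedure MouseDown(Button: TMouseButton; Shift: TShiftState;
      X, Y: Integer); override;
  end;

procedure TEditSelectGrid.MouseDown(Button: TMouseButton; Shift: TShiftState;
  X, Y: Integer);
begin
  // Decide here whether this click should extend the selection or start
  // editing (e.g. using the slow-double-click test sketched earlier),
  // then fall back to the default grid behaviour.
  inherited MouseDown(Button, Shift, X, Y);
end;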
Creating your own components is normally regarded as an advanced Delphi topic, but it's actually remarkably easy once you know how! This site has a few good topics that will introduce you to the subject in general, and if you go this route and encounter problems, Stack Overflow is of course a good place to ask questions :) The Embarcadero Delphi » VCL » Writing Components newsgroup / forum is also an excellent resource, in fact possibly even better than SO for this specific topic.
Hope that helps!
Yes, it's an old post, but the problem still exists in Delphi XE3.
To manage this feature I used the following "trick" in the SelectCell procedure:
// When the selection stays on the current row, switch to editing;
// when moving to a different row, switch back to row selection.
if (ARow = StringGridParam.Row) then
begin
  StringGridParam.Options := StringGridParam.Options + [goEditing] - [goRowSelect];
end else begin
  StringGridParam.Options := StringGridParam.Options + [goRowSelect] - [goEditing];
end;