I bought a cheap RFID reader from eBay, just to play about with. There is no API; it just writes to stdin - that is to say, if you have Notepad open and tap an RFID tag to the reader, its ID number appears in the Notepad window.
I am looking around for a reasonably priced reader/writer with an actual API (any recommendations?).
Until then I need to knock together a quick demo using what I have, just to prove the concept.
How can I best intercept the input from the USB connection? (and is there a free VCL control to do this?)
I guess if I just have a modal form with an active edit control then I can hook its OnChange event. But modal forms seem a bit rude. Maybe I can hook keyboard input instead, since the reader seems to inject the ID as if the characters were typed?
Any ideas? Please tell me if I am not explaining this clearly enough.
Thanks in advance for your help.
In the end, I just hooked the keyboard rather than trying to intercept the USB traffic. It works as long as I check that my application is active, and pass the keystrokes on otherwise. My app doesn't have any keyboard input, just mouse clicks, and what I read from RFID is digits only, so I can still handle things like Alt+F4. Maybe not the perfect solution for everyone, but it's all that I could get to work.
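A minimal sketch of that approach, using a thread-local WH_KEYBOARD hook that swallows digit keys and lets everything else through. This is not the poster's actual code; names like TagBuffer are illustrative, and most keyboard-wedge readers terminate the tag with Enter, which is assumed here.

```delphi
var
  HookHandle: HHOOK;
  TagBuffer: string;

function KeyboardHookProc(Code: Integer; wParam: WPARAM; lParam: LPARAM): LRESULT; stdcall;
begin
  // Bit 31 of lParam is the transition state: 0 = key down, 1 = key up.
  if (Code = HC_ACTION) and (((lParam shr 31) and 1) = 0) then
  begin
    if (wParam >= Ord('0')) and (wParam <= Ord('9')) then
    begin
      TagBuffer := TagBuffer + Chr(wParam);
      Result := 1;  // swallow the digit so it never reaches a control
      Exit;
    end
    else if wParam = VK_RETURN then
    begin
      // Hand the completed tag ID (TagBuffer) to the application here.
      TagBuffer := '';
      Result := 1;
      Exit;
    end;
  end;
  // Anything else (Alt+F4 etc.) goes on down the chain as normal.
  Result := CallNextHookEx(HookHandle, Code, wParam, lParam);
end;

procedure InstallHook;
begin
  // Thread-local hook: only intercepts input aimed at this application.
  HookHandle := SetWindowsHookEx(WH_KEYBOARD, @KeyboardHookProc, 0, GetCurrentThreadId);
end;

procedure RemoveHook;
begin
  UnhookWindowsHookEx(HookHandle);
end;
```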
Based on your description, it sounds like the RFID reader is providing a USB HID keyboard interface.
I don't know if there is anything similar in Delphi, but libusb has libusb_claim_interface, which requests that the OS hand control of the device over to your program.
A Delphi library for doing HID devices:
http://www.soft-gems.net/index.php?option=com_content&task=view&id=14&Itemid=33
I'm in charge of technology at my local camera club, a not-for-profit charity in Malvern UK. We have a database-centric competition management system, home-brewed by me in Delphi 6, and now we wish to add a scoring system to it. This entails attaching 5 cheap, standard USB numeric keypads to a PC (using a USB hub) and programmatically reading the keystrokes from each keypad as they are entered by the 5 judges. Of course, they will hit their keys in a completely parallel and asynchronous way, so I need to identify which key has been struck by which judge, so as to assemble the scores (i.e. possibly multiple keystrokes each) they have entered individually.
From what I can gather, Windows grabs the attention of keyboard devices and looks after the character strings they produce, simply squirting the chars into the normal keyboard queue (and I have confirmed that by experiment!). This won't do for my needs, as I really must collect the 5 sets of (possibly multiple) key-presses and allocate the received characters to 5 separate variables for the scoring system to manipulate thereafter.
Can anyone (a) suggest a method for doing this in Delphi and (b) offer some guide to the code that might be needed? Whilst I am pretty Delphi-aware, I have no experience of accessing USB devices, or capturing their data.
Any help or guidance would be most gratefully received!
Windows provides a Raw Input API, which can be used for this purpose. In the reference at the link provided, one of the advantages is listed as:
An application can distinguish the source of the input even if it is from the same type of device. For example, two mouse devices.
While this is more work than regular Windows input messages, it is a lot easier than writing USB device drivers.
One example of its use (while not written in Delphi) demonstrates what it can do, and provides some information on using it:
Using Raw Input from C# to handle multiple keyboards.
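A hedged Delphi sketch of the two Raw Input pieces you need: registering for keyboard raw input, and handling WM_INPUT so you can tell the keypads apart by device handle. In Delphi 6 the RAWINPUT structures and constants are not in the Windows unit, so you would have to declare them yourself from the SDK headers; HandleJudgeKey is an illustrative name.

```delphi
procedure TScoreForm.RegisterForRawKeyboardInput;
var
  Rid: RAWINPUTDEVICE;
begin
  Rid.usUsagePage := $01;         // generic desktop controls
  Rid.usUsage    := $06;          // keyboard
  Rid.dwFlags    := RIDEV_INPUTSINK; // receive input even when not focused
  Rid.hwndTarget := Handle;
  if not RegisterRawInputDevices(@Rid, 1, SizeOf(Rid)) then
    RaiseLastOSError;
end;

// Declared in the form class as:
//   procedure WMInput(var Msg: TMessage); message WM_INPUT;
procedure TScoreForm.WMInput(var Msg: TMessage);
var
  Size: UINT;
  Raw: RAWINPUT;
begin
  Size := SizeOf(Raw);
  if GetRawInputData(HRAWINPUT(Msg.LParam), RID_INPUT, @Raw, Size,
       SizeOf(RAWINPUTHEADER)) <> UINT(-1) then
    if Raw.header.dwType = RIM_TYPEKEYBOARD then
      // Raw.header.hDevice identifies WHICH keypad sent the key;
      // Raw.keyboard.VKey is the virtual-key code of the key itself.
      HandleJudgeKey(Raw.header.hDevice, Raw.keyboard.VKey);
end;
```

The key point is that hDevice differs per physical keypad, so a simple map from hDevice to judge number lets you accumulate each judge's digits separately.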
I'm just dipping my toe into UIAutomation as a replacement for the usual SendKeys stuff to send keystrokes to another application from a testbed I'm writing in D7.
Fwiw, I'm not sure that this is the correct terminology, but so far as I know the target application doesn't actively "support" UIAutomation in the sense that, say, Adobe Acrobat does: What I mean is that when you query Acrobat via UIAutomation, it detects that you are and pops up a series of dialogs offering to help set up its accessibility assistance. My target app seemingly isn't "UIAutomation-aware" in that way. Anyway ...
I'm stuck at the point where I provoke the target app into popping up a modal dialog for data input. This pop-up doesn't seem to be accessible via the target's UIAutomation tree. OTOH, I can find its window handle via EnumWindows easily enough. So I could retrieve the UIAutomation tree for the pop-up and work with that to fill in the data it's asking for, I imagine.
However, and this is the substance of my q, I'm wondering whether I'm missing a trick not being able to find the pop-up dialog by a recursive inspection of the tree elements of the target app's UIAutomation tree, or whether needing to hop to another tree to fill in a pop-up dialog is just part of the UIAutomation paradigm? (It seems unlikely that the UIAutomation framework could "know" that an app and a pop-up are related, but I thought I'd check).
In case it's of interest, the target application in my case is actually the D7 IDE itself, though it could be XE4 or XE6, which I also have. I'm doing this to see if I can devise an answer to another SO question. What the OP asked there would have been doable in 5 minutes if Delphi presented its OTA services via an automation interface - seems odd that the best tool for putting together automation interfaces doesn't** have one of its own. I wonder why not? I imagine it might be objected to as off-topic to ask that here, although some contributors here would be in a position to know the answer rather than just speculate. Maybe I'll ask on the EMB NGs when they're back up.
** Update: I just noticed a cryptic mention in the OTA chapter of the D5 Developer's Guide that "[the OTA] interfaces can be used by any programming language that supports COM". I wonder if they mean, from an external app?
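For what it's worth, hopping from the EnumWindows-found handle into a UIAutomation tree is straightforward via IUIAutomation.ElementFromHandle. A hedged sketch, assuming the UIAutomationClient type library has been imported (unit and coclass names may differ slightly depending on the import); PopupHandle is illustrative:

```delphi
uses
  Windows, ActiveX, ComObj, UIAutomationClient_TLB;

procedure InspectPopup(PopupHandle: HWND);
var
  Automation: IUIAutomation;
  PopupElement: IUIAutomationElement;
  Name: WideString;
begin
  // CUIAutomation is the coclass from the imported type library.
  Automation := CoCUIAutomation.Create;
  // Bridge from the raw HWND into a UIAutomation element...
  OleCheck(Automation.ElementFromHandle(Pointer(PopupHandle), PopupElement));
  OleCheck(PopupElement.Get_CurrentName(Name));
  // ...then walk PopupElement's subtree (TreeWalker or FindAll)
  // to locate the edit controls on the dialog and fill them in.
end;
```

So needing a second entry point for the pop-up isn't really leaving the paradigm; the framework just treats any top-level window as a valid root.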
We're working on an app for blind and visually impaired users. We've been experimenting with a third party library to get spoken user input and convert it to text, which we then parse as commands to control the app. The problem is that the word recognition is not very good and certainly not anywhere near as good as what iOS uses to get voice input on a text field.
I'd like to experiment with that, but our users are mostly unable to tap a text field, then hit the mic button on the popup keyboard, then hit the done button or even dismiss any of it. I'm not even sure how they can deal with a single tap on the whole screen; it might be too difficult for some. So, I'd like to automate that for them, but I don't see anything in the docs that indicates it is possible. Is it even possible, and if so, what's the proper way to do it so that it passes verification?
The solution for you is to implement keyword spotting, so that speech recognition is activated by a spoken keyword instead of a button tap. After that you can record commands/text and recognize them with any service you need. Something like the "OK Google" activation on the Moto X.
There are several keyword activation libraries for iOS. One possible solution is OpenEars, based on the open source speech recognition library CMUSphinx. If you want to use Pocketsphinx directly, you can find a keyword activation implementation in the kws branch in subversion (branches/kws).
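If you go the Pocketsphinx route, keyword activation can be tried from the command line before writing any app code. A sketch of the invocation, where the keyphrase and threshold are just example values you would tune for your own activation word:

```
pocketsphinx_continuous -inmic yes \
    -keyphrase "ok app" \
    -kws_threshold 1e-20
```

Lowering the threshold makes activation more sensitive but increases false alarms, so it needs tuning against recordings of your real users.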
The only way to get the iOS dictation is to sign up yourself through Nuance: http://dragonmobile.nuancemobiledeveloper.com/ - it's expensive, because it's the best. Presumably, Apple's contract prevents them from exposing an API.
The built in iOS accessibility features allow immobilized users to access dictation (and other keyboard buttons) through tools like VoiceOver and Assistive Touch. It may not be worth reinventing this if your users might be familiar with these tools.
I want to use a second pointing device (a trackball) as a control for a specific function in a program. This means I would NOT want any mouse functionality from the trackball; I just want to get the movement data and somehow use NPAPI to get it into our web app. Is there a way to bind a mouse/trackball to a specific program so that it doesn't act as a mouse/trackball for the rest of the computer?
Thanks in advance!
To the best of my knowledge and my understanding of how HID devices work, there is no way to do what you want to do. If you could do it at all, you could probably do it from an NPAPI plugin, but there is no way to tell the operating system not to take control of one specific pointing device as opposed to any others.
Now, if you had a special HID trackball that didn't show up as a regular pointing device then you could possibly do it with that, but I have never heard of any way to take control of just one of potentially many HID pointing devices on Windows, Linux, or Mac.
There may be a way to hack something together on Linux by changing the way the drivers work, etc., but I don't know of any.
I have a piece of equipment from the late 1980s. It outputs text and graphics directly to dot-matrix and LaserJet III printers, which are getting harder to find. I would like a solution that lets me connect the equipment to a computer, "print" to a piece of software that emulates an old printer, and then send the output to a modern printer or a .pdf file. I can't locate software that would accept the device's output. Any solutions? Thanks for any help. I know this isn't exactly what most of you do, but I'm hoping someone has needed something similar.
It already exists. It's called PrintCapture and sells for US$97.00. You will also need the appropriate interface hardware, depending on which type of printer port the device has; they list some of those devices on their website under the "Details" section.