I'm just dipping my toe into UIAutomation as a replacement for the usual SendKeys stuff to send keystrokes to another application from a testbed I'm writing in D7.
Fwiw, I'm not sure that this is the correct terminology, but so far as I know the target application doesn't actively "support" UIAutomation in the sense that, say, Adobe Acrobat does: what I mean is that when you query Acrobat via UIAutomation, it detects that you are doing so and pops up a series of dialogs offering to help set up its accessibility assistance. My target app seemingly isn't "UIAutomation-aware" in that way. Anyway ...
I'm stuck at the point where I provoke the target app into popping up a modal dialog for data input. This pop-up doesn't seem to be accessible via the target's UIAutomation tree. OTOH, I can find its window handle via EnumWindows easily enough. So I could retrieve the UIAutomation tree for the pop-up and work with that to fill in the data it's asking for, I imagine.
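For what it's worth, here is a minimal sketch of that "hop" as I currently picture it - find the dialog's handle with EnumWindows, then ask UIAutomation for an element rooted at that handle. The unit name UIAutomationClient_TLB, the CoCUIAutomation wrapper and the exact ElementFromHandle signature all depend on how the type library importer generates the unit, and the 'Find Text' caption is just a placeholder:

    uses
      Windows, SysUtils, UIAutomationClient_TLB;

    var
      FoundWnd: HWND = 0;

    function EnumWindowsProc(Wnd: HWND; lParam: LPARAM): BOOL; stdcall;
    var
      Buf: array[0..255] of Char;
      Title: string;
    begin
      Result := True;                         // keep enumerating by default
      GetWindowText(Wnd, Buf, SizeOf(Buf));
      Title := Buf;
      if Pos('Find Text', Title) > 0 then     // hypothetical dialog caption
      begin
        FoundWnd := Wnd;
        Result := False;                      // found the pop-up, stop
      end;
    end;

    procedure DriveThePopup;
    var
      Automation: IUIAutomation;
      PopupRoot: IUIAutomationElement;
    begin
      FoundWnd := 0;
      EnumWindows(@EnumWindowsProc, 0);
      if FoundWnd = 0 then
        raise Exception.Create('Pop-up dialog not found');

      // CoCUIAutomation comes from the imported type library; COM must
      // already be initialised (CoInitialize/CoInitializeEx).
      Automation := CoCUIAutomation.Create;
      PopupRoot := Automation.ElementFromHandle(Pointer(FoundWnd));
      // PopupRoot is the root of the dialog's own UIAutomation tree;
      // walk it with FindFirst/FindAll and use the value/invoke patterns
      // to fill in and confirm the dialog.
    end;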
However, and this is the substance of my question, I'm wondering whether I'm missing a trick in not being able to find the pop-up dialog by a recursive inspection of the elements of the target app's UIAutomation tree, or whether needing to hop to another tree to fill in a pop-up dialog is just part of the UIAutomation paradigm. (It seems unlikely that the UIAutomation framework could "know" that an app and a pop-up are related, but I thought I'd check.)
In case it's of interest, the target application in my case is actually the D7 IDE itself, though it could be XE4 or XE6, which I also have. I'm doing this to see if I can devise an answer to another SO question. What the OP asked there would have been doable in 5 minutes if Delphi presented its OTA services via an automation interface - seems odd that the best tool for putting together automation interfaces doesn't** have one of its own. I wonder why not? I imagine it might be objected to as off-topic to ask that here, although some contributors here would be in a position to know the answer rather than just speculate. Maybe I'll ask on the EMB NGs when they're back up.
** Update: I just noticed a cryptic mention in the OTA chapter of the D5 Developer's Guide that "[the OTA] interfaces can be used by any programming language that supports COM". I wonder if they mean, from an external app?
I've been trying to work on a proof of concept (POC) where I can embed a UE4 project into an existing application (in my case NativeScript) but this could just as easily apply to Kotlin or ReactNative.
In the proof of concept I've been able to run the projects on my iPhone, launching from UE4, pretty easily by following the Blueprint and C++ tutorials for the FPS. However, the next stage of my POC requires that I embed the FPS into an existing NativeScript application; this application will manage the root menu, chat, and store aspects of the platform in the POC.
The struggle I'm running into is that I cannot find out how to interact with the Xcode project generated from the Blueprint tutorial, and the C++ tutorial generates an Xcode project where I'm unsure where the actual root is that I need to wrap.
Has anyone seen a project doing this before, and if so are there any blogs or guidance that you can point me to? I've been Googling and looking around for a couple of weeks and have hit a dead end. I found a feedback post here from April of 2020 that was referring to a post from January 2020 about how Unity has a way to embed into other applications, and additionally a question from 2014 here. But other than that it's a dead end.
A slightly different approach
Disclaimer: I'm not a UE4 developer. Guilty as charged for seeing an unanswered bounty too big to ignore. So I started thinking and looking - and I've found something that could be bent to your needs. Enter Pixel Streaming.
Pixel Streaming is a beta feature that is primarily designed to allow embedding the game into a browser. This opens two-way communication between a server, where the GPU-heavy computations happen, and a browser, where the player can interact with the content - the mouse clicks and other events are sent back to the server. Apparently it allows some additional neat stuff; however, that is not relevant to the question at hand.
Since you want to embed the Unreal application into your NativeScript tool (a menu of some kind, if I understood correctly), you could build your application from two separate parts:
One part would run the server.
The second part would handle the overlay via Pixel Streaming.
This reduces the issue of embedding UE4 into an application to the (possibly easier) issue of embedding a browser into your application. (Or if your application is browser based - voila, problem solved.)
If you don't want to handle the remote communication, just have the server side run on localhost. (With the nice side effect of saving bandwidth.)
Alternatively, if you are feeling adventurous, you could go and write your own WebRTC support on the application side to bypass the need for the browser altogether. It might not be worth the effort though.
Side note: the first of the links you provided is a feature request, which hints at the unfortunate fact that UE4 doesn't support embedding. This is further reinforced by the fact that one of the people there says something along the lines of "Unity can do this, it would be nice if UE4 could as well."
Yet a different approach:
You could embed and use a virtual display to insert the UE4 part into your controller - you would basically be tricking UE4 into thinking that the desired display device is a canvas inside your application.
This thread suggests a similar approach:
In general, the way to connect two libraries like this would be through a platform dependent window handle, e.g. a HWND under Windows. Check the UE api if you find any way to bind the render target to a HWND. Then you could create a wxWindow in wxWidgets and tell UE to render into that window. That would be a first step.
I'm not sure if anything I've listed will be of much help but hey, at least I tried :-). Good luck with your game.
At the same time, the author suggests you could:
Reverse the problem:
Use the UE4 Slate framework and Online Subsystem. You would use the former to create the menus you need directly in UE4, and then use the latter to link to the logic you want to have outside of UE4. However, that is not what you asked for, so I'm listing it only for completeness' sake.
I'd like to get information about a third party application's controls such as a list of its properties and their values: something like RTTI information but for a third-party Delphi application.
I see that this is possible. For example, TestComplete has the ObjectSpy window, which can give a lot of useful information about a control, including RTTI information. How can this be done?
Edit: To explain why I'm investigating this issue... I'm a registered user of TestComplete/TestExecute and I do like... most of it. I can get over the minor things, but the one major problem for me is their license verification system, which requires me to have a physical computer (not a virtual machine) always on just for the sake of running a license server so that TestExecute can run at night. As I have basic testing needs (compare screenshots and check basic Delphi components' properties) I wondered how hard it would be to make my own private, very simple "TestExecute-like" application.
To go further, I suggest these relevant resources found here on SO:
Writing a very basic debugger (The accepted answer along with its comment thread are all valuable).
Is it possible to access memory from an application to another? How? (Excerpt from the accepted answer: It is possible. Just use the Windows API functions WriteProcessMemory/ReadProcessMemory. Pass in the handle of the process and the pointer to the data. A minimal sketch of the read side follows this list.)
Search the memory of another process (The excellent accepted answer also forwards to another valuable resource delphi-code-coverage by Christer Fahlgren and Nick Ring).
StackWalk of other process in delphi? (Check Barry Kelly's answer, and likewise the one from the AsmProfiler author!)
I strongly suggest porting to Delphi this C++ project entitled Get Process Info with NtQueryInformationProcess: a hands-on experience of using ReadProcessMemory to access the command line used to launch another process.
Last Edit:
NtQuerySystemInformation Delphi Example.
RRUZ's answer to Delphi - get what files are opened by an application as suggested by LU RD.
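On the ReadProcessMemory/WriteProcessMemory point above, here is a minimal Delphi sketch of the read side (ReadRemoteBytes is just a name I picked, and the target address would in practice come from debug information or a memory scan, not from thin air):

    uses
      Windows, SysUtils;

    // Read Size bytes from address TargetAddr inside the process identified
    // by ProcessID into Buffer. Returns True only if the whole block was read.
    function ReadRemoteBytes(ProcessID: DWORD; TargetAddr: Pointer;
      var Buffer; Size: DWORD): Boolean;
    var
      hProcess: THandle;
      BytesRead: DWORD;
    begin
      hProcess := OpenProcess(PROCESS_VM_READ or PROCESS_QUERY_INFORMATION,
        False, ProcessID);
      if hProcess = 0 then
        RaiseLastOSError;
      try
        Result := ReadProcessMemory(hProcess, TargetAddr, @Buffer, Size,
          BytesRead) and (BytesRead = Size);
      finally
        CloseHandle(hProcess);
      end;
    end;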
When we want to take another application which is compiled with debug information and get stuff out of it at runtime, what we are dealing with is the problem of "how to write my own custom debugger/profiler/automated-test kernel".
TestComplete and other AutomatedQA programs contain a Debugger and Profiler Kernel which can start up, run and remotely control apps, and parse their Debug information in several formats, including the TurboDebugger TD32 information attached to these executables. Their profiling kernel also can see each object as it is created, and can iterate the RTTI-like debug information to determine that an object that was created is of a particular class type, and then see what properties exist in that object.
Now, TestComplete adds, on top of the AQTime-level features, the ability to introspect window handles and to work out, from those handles, the Delphi class names behind them. However, it's much easier for you (or me) to write a program which can tell you that the mouse is over a window handle that belongs to a TPanel than to know which version of Delphi created that particular executable, which version of TPanel that is, and what properties it would contain, and then to read those values back from a running program - that requires implementing your own "debugger engine". I am not aware of any open source applications that you could even use to get a start writing your own debugger, and you certainly can't use the ones that are inside AQTime/TestComplete, or the one inside Delphi itself, in your own apps.
I could not write you a sample program to do this, but even if I could, it would require a lot of third-party library support. To see the window classes for a window handle which your mouse is over, look at how to implement something like the MS Spy++ utility.
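As a starting point only, a tiny Spy++-style sketch in Delphi that reports the class name and caption of the window under the mouse in any process might look like this:

    uses
      Windows, SysUtils;

    // Report the class name and caption of whatever window the mouse is over,
    // in any process. Delphi apps expose their VCL class here, e.g. 'TPanel'.
    function DescribeWindowUnderMouse: string;
    var
      Pt: TPoint;
      Wnd: HWND;
      Buf: array[0..255] of Char;
      Cls, Cap: string;
    begin
      GetCursorPos(Pt);
      Wnd := WindowFromPoint(Pt);
      if Wnd = 0 then
      begin
        Result := '(no window under the mouse)';
        Exit;
      end;
      GetClassName(Wnd, Buf, SizeOf(Buf));
      Cls := Buf;
      GetWindowText(Wnd, Buf, SizeOf(Buf));
      Cap := Buf;
      Result := Format('%s - "%s"', [Cls, Cap]);
    end;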
The easy case is when your mouse is over a window inside your own application. For that, see this about.com link, which simply uses RTTI.
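For that easy case, a hedged sketch using nothing more than the classic TypInfo RTTI could be:

    uses
      SysUtils, Classes, TypInfo, Variants, Controls, Forms;

    // Dump the published properties of the VCL control currently under the
    // mouse in this application into a string list.
    procedure DumpControlUnderMouse(Lines: TStrings);
    var
      Ctrl: TWinControl;
      PropList: PPropList;
      Count, I: Integer;
    begin
      Ctrl := FindVCLWindow(Mouse.CursorPos);
      if Ctrl = nil then
        Exit;
      Lines.Add(Ctrl.ClassName);
      Count := GetTypeData(Ctrl.ClassInfo)^.PropCount;
      GetMem(PropList, Count * SizeOf(PPropInfo));
      try
        // Restrict to kinds that GetPropValue can render as text
        Count := GetPropList(Ctrl.ClassInfo,
          [tkInteger, tkChar, tkEnumeration, tkFloat, tkString, tkLString,
           tkWString, tkSet, tkInt64], PropList);
        for I := 0 to Count - 1 do
          Lines.Add(Format('  %s = %s', [PropList^[I]^.Name,
            VarToStr(GetPropValue(Ctrl, PropList^[I]^.Name))]));
      finally
        FreeMem(PropList);
      end;
    end;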
I bought a cheap RFID reader from eBay, just to play about with. There is no API; it just writes to stdin - that is to say, if you have Notepad open and tap an RFID tag on the reader, its ID number appears in the Notepad window.
I am looking around for a reasonably priced reader/writer with an actual API (any recommendations?).
Until then I need to knock together a quick demo using what I have, just to prove the concept.
How can I best intercept the input from the USB connection? (and is there a free VCL control to do this?)
I guess if I just have a modal form with an active control then I can hook its OnChange event. But modal forms seem a bit rude. Maybe I can hook keyboard input, as the reader seems to be injecting what look like typed characters?
Any ideas? Please tell me if I am not explaining this clearly enough.
Thanks in advance for your help.
In the end, I just hooked the keyboard, rather than trying to intercept the USB. It works if I check that my application is active and pass on the keystrokes otherwise. My app doesn't have any keyboard input, just mouse clicks (and what I read from RFID is digits only), so I can still handle things like Alt+F4. Maybe not the perfect solution for everyone, but it's all that I could get to work.
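For anyone after a concrete starting point, here is a hedged sketch of that idea. It uses Application.OnMessage rather than a real SetWindowsHookEx hook, and it assumes the reader behaves as a keyboard wedge that sends main-row digits terminated by Enter; TRfidCollector is just an illustrative name:

    uses
      Windows, Messages, SysUtils, Forms;

    type
      // Collects digits 'typed' by the RFID reader while the application is
      // active and hands over a complete tag when Enter arrives.
      TRfidCollector = class
      private
        FBuffer: string;
        procedure AppMessage(var Msg: TMsg; var Handled: Boolean);
      public
        constructor Create;
      end;

    constructor TRfidCollector.Create;
    begin
      inherited Create;
      Application.OnMessage := AppMessage;
    end;

    procedure TRfidCollector.AppMessage(var Msg: TMsg; var Handled: Boolean);
    begin
      if (Msg.message = WM_KEYDOWN) and Application.Active then
        case Msg.wParam of
          Ord('0')..Ord('9'):                       // main-row digits only;
            FBuffer := FBuffer + Char(Msg.wParam);  // numpad sends VK_NUMPAD0..9
          VK_RETURN:
            begin
              // FBuffer now holds the complete tag id - hand it off here
              // (fire an event, add it to a list, etc.), then start again.
              FBuffer := '';
            end;
        end;
      // Handled is left False so normal keys still reach the application.
    end;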
Based on your description, it sounds like the RFID reader is providing a USB HID keyboard interface.
I don't know if there is anything similar in Delphi, but in libusb there is libusb_claim_interface, which requests that the OS hand control over to your program.
A Delphi library for doing HID devices:
http://www.soft-gems.net/index.php?option=com_content&task=view&id=14&Itemid=33
An OCaml interpreter app was put up on iTunes last November. I've done some Haskell programming, and briefly looked into OCaml at one time, but never really became acquainted with it. I have a new iPad, and am curious whether the Ocamlexample app available on iPad can actually be used for anything other than working through tutorial exercises.
I.e., does anyone know if it has the capability to save scripts (in its sandbox, of course), and any ability to export results (other than cut and paste)?
I can't find any references on Google much more current than last November, so it would appear that no one is actually doing anything with it.
Apple dropped many of their restrictions on iOS software development on September 9, 2010. Here is the press release announcing the changes:
Changes to development agreement Sept 9, 2010.
The only restriction now is that you can't download code. I.e., you can't have an embedded language implementation that is its own app platform.
This does limit the usefulness of an interpreter, but there is no rule against interpreters per se or against saving and reloading scripts in a particular iPad.
You can also compile OCaml to run on iOS. That's what I'm spending my time on right now, and I'm selling an OCaml iOS app in the iTunes Store. Visit my profile for a link.
(Hmm--I just noticed this was a pretty old question. Sorry for any extra noise.)
You can download scripts, but only if the Mac/PC is tethered to the iPad and you use the Dropbox function of iOS. In theory this could be a program which opens a socket for your own protocol; however, I have not tried this. It would have to be a single-threaded protocol because Lwt is not implemented.
From the way it's pitched, and knowing the App Store's rules, I don't think it's actually for making OCaml scripts. It just lets you do a limited set of calculations and drawing operations. Apple would reject it if they actually thought it was a programming language interpreter.
Stack Overflow user Luke wrote in this answer:
The boundaries between desktop and web applications have really blurred. Whilst once upon a time the nature of developing for the web was totally different to developing for the desktop, nowadays you find the same concepts [...] cropping up in both.
Since I am continually looking to improve my existing web applications, I'd like to know: which common features of "classic" desktop applications do most web applications miss?
For example, most desktop apps prompt the user to save unsaved data before closing a window - a feature that many web applications miss when the user leaves a page. It could be that some features aren't even necessary or are compensated for in some other way. Maybe there are features which can't be implemented in a (classic) web application at all?
The thing you'll never be able to imitate in a web application is the low latency and instant feedback of a well written desktop app.
Even with the ajax techniques to load only parts of the pages, there usually is a noticeable delay in the response (or maybe it's just me and my narrowband). You're (for at least a few more years) just bound to the orders of magnitude of speed difference between network access and no network access.
The Undo button.
Right-click, application-specific pop-up menus are the thing I've noticed most.
Usually, right-clicking in a browser application will bring up the browser's pop-up menu rather than an application-specific menu.
Keyboard support on most web applications is weak to non-existent. This is getting better than it used to be but you will still find plenty of mainstream sites that can't even get the tab order to work correctly. Most sites don't handle focus correctly and force users to use the mouse to activate even the simplest of data entry forms. You can usually forget about accelerator key support.
You can't pull the plug when the application hangs. (Yes, I'm serious)
In fairness, it should be mentioned that desktop applications miss a common feature of web apps: XSS (cross-site scripting). ;-)
Support for Big Files.
Integration with the client OS.
Support for special Input/Output Devices.
3D or anything else computationally intensive (specific to each user).
Advanced graphics: I've written a C program that draws a surface joining Bézier patches in a simple window and I had to tweak it in unimaginable ways to get it to draw in a decent time. I can't imagine that being ported to the web.
I mean, doing advanced graphics is not what every application needs, but if displaying nontrivial pictures is slow, then we shouldn't even talk about animations.
One: Proper Macintosh menu bar support.
If you're a long-term Mac user, even with two large monitors, you have muscles that swoop to the top of the screen for actions, comfortable in the knowledge that the infinite depth effect will kick in and you can slide along that edge, picking from the menus.
No in-browser app can deliver that experience.
Two: Command keys, which is a side effect of the menu bar not belonging to the app but goes a bit beyond that - good desktop apps have command-key shortcuts (accelerators to you Windows guys; I'm not just talking about the mnemonics which work with Alt-key support). Great desktop apps show little reminders next to the buttons that have accelerators when you hold down the appropriate modifier keys and wait a fraction of a second.
Three: Smarter tables. There are a lot of apps where some kind of spreadsheet view works as a paradigm, including editing, sorting, and resizing columns. I think I've seen some odd examples of partial support, but a good table in a web app is still a bit of a dancing bear.
Four: It used to be right-clicking, but I'm finding more and more apps that do this properly, like Kerio's excellent webmail engine. It is still missing in enough web apps to be worth emphasizing.
Displaying application request/process status or messages on the taskbar or status bar.
For the web, JavaScript can be used to update text in the status bar, but it's not a common usage.
The usability benefits of standard GUI elements that look and behave uniformly across applications.
(Although this will surely change as web app developers adopt certain GUI elements and patterns that are considered best-practice, notably by eventually using the same libraries, e.g. for drag-and-drop.)
A common feature of "classic" desktop applications is the ability to work without an internet connection. I miss that in Web applications.
For example, MS Word works without an internet connection, but you need to be connected if you want to use Google Docs.
Of course, it does not matter if the application requires an internet connection anyway. For example, if it's a feed reader, I have to connect to the internet whether I use a desktop reader or an online reader.
Drag and drop from Finder/Explorer into the web app. And vice-versa.
The ComboBox is the most notable widget omission.
On the web, lack of desktop features such as popup dialogues is actually a boon, making for a simpler interaction experience. Think also of the autosave draft feature of Gmail vs. the desktop convention of prompting the user to save.
So consider carefully before trying to reconstruct that desktop feature in your web app.
Decent help. Seems to always be an afterthought, if it's even implemented...
Desktop integration (may change if we get online desktops)
Offline use (does exist but it is early days)
(Reliable) Responsiveness
Reliability generally (somewhat debatable as there are pros and cons - e.g. your data is probably better backed up online; however, security is generally less in your control with an online app, and if the network connection fails an online app tends to freeze or fail horribly).
Blue Screen of Death
A task-specific UI with no extra controls. A web app, in addition to its own controls, also has the browser's back, next, bookmarks, etc. buttons. You end up with an extra inch-high set of buttons that don't directly support the task at hand.
This isn't necessarily a programming feature, but the audience of an application will be different. For a web application you are cutting out a complete segment of your audience (those with slow or no internet access). While this is a relatively low number, it is a difference between a desktop application and a web application.