I have been working on Flutter desktop and am stuck at the point where I need to get keyboard key input without a TextField, like in any desktop application or game. How can I work with key input streams without TextFields?
You want a RawKeyboardListener.
An important caveat is that desktop support for that is still a work in progress; on Linux and Windows you'll currently get an Android key event that's only partially populated.
I'm pretty new to Cordova dev. and I'm trying to achieve the following.
We have an application, running both on Android and iPhone, written in AngularJS,
under the Cordova framework.
In order to use our application, we require the users to send their phone number,
receive an SMS containing an OTP, type the OTP into a shaped text field, and press
a button for sending the OTP (and receiving an authentication token).
I was asked to add a simple feature: enabling the application to do that
automatically, meaning it would parse the SMS, fill that input field, and send
the OTP, without any user intervention.
This is pretty easily achieved on Android, using a specific SMS Receive plugin,
but cannot be done in iOS.
However, I saw that it can be achieved semi-automatically on newer iOS versions,
but I have to change the input field type to "one-time-code". I tried to do that
in my Cordova code and couldn't achieve it, no matter what I did. I would
like to know how to do it through Cordova, if it can be done at all.
You should be able to do this using purely HTML without needing a Cordova plugin or any native iOS code as described here. Just set the autocomplete attribute, not the type attribute, of the input element to one-time-code:
<input id="single-factor-code-text-field" autocomplete="one-time-code" />
In my case the message had to contain the word 'code', then a space, and then the code you want to show. I'm not sure, but I think it reads the characters up to the next space, because I had more characters after the code.
Example: code 123456
I have been trying to understand how Mobile Device Farms like DeviceConnect, AWS Device Farm, SauceLabs, etc. get to remote control iOS devices, but I can't find anything on the subject. They get to do it without jailbreaking, which baffles me even more.
I love these kinds of projects, because at the moment it seems undoable, but I know that it is possible ('they' are doing it).
By remote control I mean: seeing the screen of the iOS device on your computer screen and being able to touch and swipe with your mouse.
Can someone please point me in the right direction as to how these technically work?
If you're using an iPad (in particular) or an iPhone (if you think you'll be able to make out any detail on the smaller screen), then using remote access to view and control what's on your friend's Mac is a good option. And the best way to remote-access a Mac from an iPad is to use Google's free Chrome Remote Desktop service, which lets you remotely use Mac programs from an iOS device.
It's quite an involved process to set it up the first time, but easy if you want to do it again in future. You'll need the Google Chrome web browser for Mac, and a Google account.
Here is the link for Chrome Remote Desktop:
https://chrome.google.com/webstore/detail/chrome-remote-desktop/gbchcmhmhahfdphkhkmpfmihenigjmpp?hl=en
Open Chrome and go to Chrome Remote Desktop on the Chrome Webstore. Click Add to Chrome, then Add App. Click Allow, then Continue.
I will give you one approach and a small explanation.
You will first need to create an application with all possible permissions, and implement handler functions.
For example:
Working with files
Real native socket connections (not the HTTP protocol), plus a main signalling server (a domain or static IP)
Handling remote touch triggering (the main problem for real remote control on iOS)
Background-execution experience
Your app will most likely need to be a non-App Store app.
You can build the application with every option Apple gives us.
What you can do remotely with your app: control the camera and microphone, read geo data, work with galleries, delete or create files. The socket will be the communication line.
The app must also be started initially, and (on user request) the user must always allow all permissions.
Using the camera:
Send some command over the socket, for example "openCamera". After receiving this string, perform the action that opens the camera.
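A hypothetical sketch of that command dispatch in Swift. The command names ("openCamera", "readGeo") and the actions are illustrative assumptions, not a real protocol:

```swift
import Foundation

// Hypothetical dispatcher for plain-text commands received over the socket.
// The receiving code would read a line from the connection and pass it here.
func handle(command rawCommand: String) {
    let command = rawCommand.trimmingCharacters(in: .whitespacesAndNewlines)
    switch command {
    case "openCamera":
        // e.g. start an AVCaptureSession here
        print("opening camera")
    case "readGeo":
        // e.g. start CLLocationManager updates here
        print("reading location")
    default:
        print("unknown command: \(command)")
    }
}
```

The point is simply that the socket carries opaque strings and the app maps each one to a native action it already has permission to perform.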
If you can solve programmatically triggering touch events, you can make a remote iOS.
More links:
Q/A on sending remote events
Q about touch events
Q/A about permissions
Sorry for the quick first answer.
All of these (DeviceConnect, AWS Device Farm, SauceLabs) use Appium in order to control devices.
The component that executes the commands is the WebDriver.
Appium has different WebDriver implementations in order to drive different devices.
The iOS WebDriver can be found here: https://github.com/appium/appium-ios-driver.
The protocol in use is the JsonWireProtocol.
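To make that concrete, here is a minimal, illustrative Swift sketch of starting a session over the JsonWireProtocol. The port and endpoint assume a default local Appium install; the capability values and the .app path are placeholders:

```swift
import Foundation

// Illustrative only: create a new session against a local Appium server.
// "http://localhost:4723/wd/hub/session" is the default Appium endpoint.
let url = URL(string: "http://localhost:4723/wd/hub/session")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")

// Placeholder capabilities describing the device/app to drive.
let capabilities: [String: Any] = [
    "desiredCapabilities": [
        "platformName": "iOS",
        "deviceName": "iPhone Simulator",
        "app": "/path/to/App.app"  // hypothetical path
    ]
]
request.httpBody = try? JSONSerialization.data(withJSONObject: capabilities)

// The response contains a session id used for all subsequent commands
// (tap, swipe, screenshot, and so on).
URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data {
        print(String(data: data, encoding: .utf8) ?? "")
    }
}.resume()
```

Every remote tap or swipe in those device farms ultimately becomes a command like this sent over HTTP to the driver.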
More details can be found here:
http://appium.io/,
http://www.seleniumhq.org/
Regards
I'm part of a team that has made a keyboard extension with a lot of users using Google Docs. A recent Google Docs update is broken and causes textWillChange and textDidChange to stop being fired if the user manually moves the cursor around by holding a finger down. If the user closes the keyboard and opens it again, it will work until it breaks again...
The above events are quite important, as all suggestions rely on them in order to update (this goes for all keyboard extensions, including big ones such as SwiftKey).
To make matters worse, Google has stopped supporting editing documents from the website, only supporting editing through the apps.
So my questions are as follows:
Is there any way we can detect that this issue has happened?
Is there any way we can reenable it when it happens?
Is there any way we can avoid it from happening?
Where should we report this to Google? (Tried: https://productforums.google.com/forum/#!topic/docs/0OsHxzOjTq4)
This is a general problem with keyboard extensions in iOS. Different apps react differently and send somewhat different events.
Given a Delphi 10.1 Berlin Update 2 FireMonkey Android app and a TEdit, I would like to detect when the user presses Enter while in the TEdit.
I have already implemented an OnTyping event where I loop through all the characters of the .Text property.
If vkLineFeed or vkReturn is detected, it is Enter (I added the check for vkLineFeed after finding out that certain devices send that one instead of vkReturn).
ReturnKeyType is Default. When it is set to Done or Go, it looks like I don't even get to see the Enter key in .Text. The OnKeyDown/OnKeyUp events of a TEdit deliberately do not fire in an FMX app on Android.
But now I have encountered a device that simply closes the keyboard on Enter without sending me any "enter" character. It's an LG L50 with Android 4.4.
Is there some method to reliably detect return presses on Android/FMX?
Here is a working solution:
http://www.danielespinetti.it/2017/03/intercept-keyevent-on-android-with.html
I had issues with a memo when trying it out, but after I added a TEdit to the form and tested with that one (as I wanted to use a TEdit anyhow), it worked on the LG L50. Further tests on other devices need to be carried out, but since that was the malfunctioning device...
Interestingly, the hardware key used to show the list of open apps (the rightmost one) was detected as 0x12.
You can also use the TEdit.OnChangeTracking event, which occurs when typing individual characters into the edit control, or the OnExit event.
I'm trying to create an application where I can send a string from an iPhone to the active text field on my Mac. I come from a Microsoft background, where they call it focus. The active text field is not part of my application (it is third-party).
I tested the concept by creating an iOS app that sends a string to a Mac via Bluetooth. The Mac (Cocoa app) presents the string, in a label, in an NSWindow.
I want to create a keyboard wedge, like a USB device, to input the string into a text field of an open Safari web page using the active text box. I see there is CGEventCreateKeyboardEvent in Apple's documentation. My question is: can I pass the entire string to a keyboard event without having to handle every possible CGKeyCode and code each true/false for key-down and key-up?
I must be missing a better way...
There is no universal "better way", since, unlike Microsoft, Apple knows something about security and is not going to let just any old process out of the blue start manipulating the text entered in some application's text box. However, there is a hole which you can ask the user to open: if the user has granted Accessibility permissions, then you can use the Accessibility API to "see" interface of the target application and to make changes like modifying the text in a text box. That is how applications like Nuance Dragon Dictate for Mac and Smile TextExpander work.
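On the CGEventCreateKeyboardEvent part of the question: you do not have to map every character to a CGKeyCode. A CGEvent can carry an arbitrary Unicode payload via keyboardSetUnicodeString, so a single key-down/key-up pair can type a whole string. A minimal Swift sketch (this assumes the Accessibility permission discussed above has been granted; very long strings may need to be sent in chunks):

```swift
import CoreGraphics

// Post an arbitrary string as synthetic keyboard events.
// keyboardSetUnicodeString attaches the characters to the event itself,
// so no per-character CGKeyCode mapping is needed (virtualKey 0 is a dummy).
func typeString(_ string: String) {
    let utf16Chars = Array(string.utf16)
    if let keyDown = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: true) {
        keyDown.keyboardSetUnicodeString(stringLength: utf16Chars.count,
                                         unicodeString: utf16Chars)
        keyDown.post(tap: .cghidEventTap)
    }
    if let keyUp = CGEvent(keyboardEventSource: nil, virtualKey: 0, keyDown: false) {
        keyUp.keyboardSetUnicodeString(stringLength: utf16Chars.count,
                                       unicodeString: utf16Chars)
        keyUp.post(tap: .cghidEventTap)
    }
}
```

The events land in whichever text field currently has focus, which is exactly the keyboard-wedge behavior described; the Accessibility API remains the right tool when you need to find or verify the target field rather than just type into it.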