How to change the switch application MenuItem text?

I need to change the text of the switch application MenuItem to "changer d'application" ("switch application" in French).
I think it's not a default menu item. I did some searching and found code that returns the id and the ordinal of all the menu items:
for (int i = 0; i < 100; i++) {
    try {
        MenuItem item = MenuItem.getPrefab(i);
        System.out.println("Item found: " + item.toString());
        System.out.println("id: " + item.getId() + " index: " + i);
    } catch (Exception e) {
        System.out.println("No item for " + i);
    }
}
It didn't find the switch application item!
Does anyone have any idea how to change the text of the switch application item?
Thanks

It is possible to add additional languages to the BlackBerry simulator.
See this BlackBerry support forum article:
Add language support to the BlackBerry Simulator
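If you also want to switch the locale from code once the language pack is installed, here is a minimal sketch using the BlackBerry i18n API. This is an assumption-heavy illustration: it presumes French language support is present on the device/simulator, and setDefaultForSystem typically requires signed code on a real device.

import net.rim.device.api.i18n.Locale;

public class LocaleSwitcher {
    public static void switchToFrench() {
        // Assumes French language support is installed; built-in menu
        // items (including "Switch Application") follow the device locale.
        Locale.setDefaultForSystem(Locale.get(Locale.LOCALE_fr));
    }
}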

Related

Appium: automatically handling the iOS location permissions popup

I need to be able to either prevent the location permissions popup from occurring or have it handled by Appium/WebDriver.
It looks like the Appium settings API supports these two settings for this purpose:
appium:autoAcceptAlerts
and
acceptAlertButtonSelector
We're using Java and unless I'm mistaken, settings have to be applied via a driver instance, something like this:
driver.setSetting("appium:autoAcceptAlerts",true);
The problem I have is that when creating the driver instance, we have to pass in the capabilities that we require. In practice, this seems to mean that by the time the instance has been created, the application has already launched and the location permissions popup is already on display, before we can set the appropriate settings.
I'm sure I must be missing something, so I'd appreciate it if anyone can point out my mistake.
TIA,
Mike
You can try this in your capability settings:
capabilities.setCapability("autoAcceptAlerts", "true");     // auto-accept alerts
capabilities.setCapability("autoGrantPermissions", "true"); // auto-grant permissions
Or you can handle it manually with a custom helper, calling it whenever an alert is displayed:
import org.openqa.selenium.By;
import io.appium.java_client.MobileBy;

public void alertAction(String action) {
    By element;
    switch (action.toUpperCase()) {
        case "DONT ALLOW":
            element = MobileBy.id("Don’t Allow");
            break;
        case "ALLOW":
            element = MobileBy.id("Allow");
            break;
        case "ASK APP NOT TO TRACK":
            element = MobileBy.id("Ask App Not to Track");
            break;
        case "OK":
            element = MobileBy.id("OK");
            break;
        case "TONTON SEKARANG":
            element = MobileBy.id("Tonton Sekarang");
            break;
        case "CANCEL":
            element = MobileBy.id("CANCEL");
            break;
        case "LOGOUT":
            element = MobileBy.id("LOG OUT");
            break;
        default:
            throw new IllegalStateException("Unexpected value: " + action.toUpperCase());
    }
    // isPresent and clickOn are the answerer's own wrapper helpers.
    if (isPresent(element)) {
        clickOn(element);
    }
}
This works for me on Appium 1.21.0 and iOS 15.
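For reference, a hypothetical call site right after the app launches might look like this:

alertAction("ALLOW"); // accept the location permissions popup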

Can't translate QComboBox's "Cancel" and "Done" buttons in iOS

I use Qt to make a multilanguage app on iOS,
and I use ".ts" and ".qm" files to translate text.
The QComboBox on iOS has two buttons that can't be translated.
The .ts file needs a class name, but I can't find these two words in any class in the Qt source.
I think the Cancel and Done translations are in Qt's own translations. You need one QTranslator for your app strings and a second for the Qt library strings.
The qt_*.qm files are somewhere under your Qt installation directory.
For example, to switch to German you would need to switch two translators:
myapp_de.qm
qt_de.qm
https://wiki.qt.io/How_to_create_a_multi_language_application
See how loadLanguage() sets two QTranslators, one for the app and one for Qt:
// Called every time a menu entry of the language menu is triggered
void MainWindow::slotLanguageChanged(QAction* action)
{
    if (0 != action) {
        // load the language depending on the action content
        loadLanguage(action->data().toString());
        setWindowIcon(action->icon());
    }
}

void switchTranslator(QTranslator& translator, const QString& filename)
{
    // remove the old translator
    qApp->removeTranslator(&translator);
    // load the new translator
    if (translator.load(filename))
        qApp->installTranslator(&translator);
}

void MainWindow::loadLanguage(const QString& rLanguage)
{
    if (m_currLang != rLanguage) {
        m_currLang = rLanguage;
        QLocale locale = QLocale(m_currLang);
        QLocale::setDefault(locale);
        QString languageName = QLocale::languageToString(locale.language());
        switchTranslator(m_translator, QString("TranslationExample_%1.qm").arg(rLanguage));
        switchTranslator(m_translatorQt, QString("qt_%1.qm").arg(rLanguage));
        ui.statusBar->showMessage(tr("Current Language changed to %1").arg(languageName));
    }
}

How to hide keyboard in iOS mobile automation using Appium

I am using iOS 10.2 and Xcode 8.3.
Can anyone let me know how to hide the keyboard in iOS mobile automation using Appium?
Programming language used: Java.
I tried driver.hideKeyboard(), but it doesn't work for me.
So I tried two ways: pressing the button with a specified key name, and inspecting the key's coordinates with Appium and performing a tap action. Both ways work for me.
// way 1: press the button with the specified name
driver.findElementByXPath(String.format("//XCUIElementTypeButton[@name='%s']", "Done")).click();

// way 2: tap the key's coordinates
TouchAction touchAction = new TouchAction(driver);
touchAction.tap(new PointOption().withCoordinates(345, 343)).perform();
You could use java_client library methods:
driver.findElementByAccessibilityId("Hide keyboard").click();
driver.hideKeyboard(HideKeyboardStrategy.TAP_OUTSIDE);
driver.hideKeyboard(HideKeyboardStrategy.PRESS_KEY, "Done");
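If neither strategy is reliable on its own, a small wrapper can try them in order. A hedged sketch, assuming java-client 7.x (HideKeyboardStrategy is just a holder for plain string constants):

import io.appium.java_client.MobileElement;
import io.appium.java_client.ios.IOSDriver;
import io.appium.java_client.remote.HideKeyboardStrategy;

public final class Keyboards {
    public static void hide(IOSDriver<MobileElement> driver) {
        try {
            // Preferred: ask Appium to press the key labelled "Done".
            driver.hideKeyboard(HideKeyboardStrategy.PRESS_KEY, "Done");
        } catch (Exception e) {
            // Fallback: the generic hide, for keyboards without a "Done" key.
            driver.hideKeyboard();
        }
    }
}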
I noticed that "Done" is not part of the keyboard group. So I tried to use the name "Done" as my reference to get the element. I tried this on my end and it works.
driver.findElementByName("Done").click();
The "driver" set declared as IOSDriver.
You can use the code snippet below to hide the keyboard:
driver.getKeyboard().pressKey(Keys.RETURN);
Solution for Python - 2020:
@staticmethod
def hide_keyboard(platform):
    """
    Hides the software keyboard on the device.
    """
    if platform == "Android":
        driver.hide_keyboard()
    elif platform == "iOS":
        driver.find_element_by_name("Done").click()
I prefer to tap the last key on the keyboard for iOS instead of hiding it:
@HowToUseLocators(iOSXCUITAutomation = LocatorGroupStrategy.CHAIN)
@iOSXCUITFindBy(className = "XCUIElementTypeKeyboard")
@iOSXCUITFindBy(className = "XCUIElementTypeButton")
private List<IOSElement> last_iOSKeyboardKey;

@HowToUseLocators(iOSXCUITAutomation = LocatorGroupStrategy.CHAIN)
@iOSXCUITFindBy(className = "XCUIElementTypeKeyboard")
@iOSXCUITFindBy(iOSNsPredicate = "type == 'XCUIElementTypeButton' AND " +
        "(name CONTAINS[cd] 'Done' OR name CONTAINS[cd] 'return' " +
        "OR name CONTAINS[cd] 'Next' OR name CONTAINS[cd] 'Go')")
private IOSElement last_iOSKeyboardKey_real;

public boolean tapLastKeyboardKey_iOS() {
    System.out.println(" tapLastKeyboardKey_iOS()");
    boolean bool = false;
    setLookTiming(3);
    try {
        // one way
        //bool = tapElement_XCTest(last_iOSKeyboardKey.get(last_iOSKeyboardKey.size() - 1));
        // slightly faster way
        bool = tapElement_XCTest(last_iOSKeyboardKey_real);
    } catch (Exception e) {
        System.out.println(" tapLastKeyboardKey_iOS(): looks like keyboard closed!");
        System.out.println(driver.getPageSource());
    }
    setDefaultTiming();
    return bool;
}
I tried all of the methods above; in some cases they don't work perfectly. My way is to tap on the top left of the keyboard:
public void hideKeyboard() {
    if (isAndroid()) {
        driver.hideKeyboard();
    } else {
        IOSDriver iosDriver = (IOSDriver) driver;
        // iosDriver.hideKeyboard() only works for text fields,
        // so tap just outside the keyboard instead.
        IOSElement element = (IOSElement) iosDriver.findElementByClassName("XCUIElementTypeKeyboard");
        Point keyboardPoint = element.getLocation();
        TouchAction touchAction = new TouchAction(driver);
        touchAction.tap(keyboardPoint.getX() + 2, keyboardPoint.getY() - 2).perform();
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Since the iOS keyboard doesn't have a "Done" or "Enter" button anymore, we can't use any of the Appium server utility interfaces like HideKeyboardStrategy.
I basically used the TouchAction class's tap method to tap at the top of the screen and dismiss the keyboard:
TouchAction touchAction = new TouchAction(driver);
int topY = driver.manage().window().getSize().height / 8;
int pressX = driver.manage().window().getSize().width / 2;
touchAction.tap(new PointOption().withCoordinates(pressX, topY)).perform();
Quick & simple solution :
I always try to tap anywhere on screen, may be on
Static text
Image
after entering to hide keyboard unless I explicitly has requirement of interacting with keyboard. This works pretty well for me. Try it :)
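A minimal sketch of that approach, assuming Appium java-client 7.x; the predicate simply picks the first visible static text, so adapt it to an element you know is on screen:

import io.appium.java_client.MobileBy;
import io.appium.java_client.MobileElement;
import io.appium.java_client.ios.IOSDriver;

public class TapOutsideHelper {
    // Tapping any visible non-input element moves focus away from the
    // text field, which dismisses the keyboard.
    public static void dismissKeyboard(IOSDriver<MobileElement> driver) {
        driver.findElement(MobileBy.iOSNsPredicateString(
                "type == 'XCUIElementTypeStaticText' AND visible == 1")).click();
    }
}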

Is there a way to rearrange objects on a DevX report?

I want to enable the user to move things around on a DevExpress print preview and print it only after that is done. If this is possible, could I get some directions on where to start looking? (I won't have time to go through the whole documentation, which may sound lazy, but DevExpress is pretty huge for the short time I have.)
I don't think you can do this on the print preview directly, but what you can do is provide a button which launches the XtraReports designer and passes in the layout from your currently displayed document. When the user has finished editing, you can reload the document in the print preview, loading its new layout as required. You may need to customize the designer heavily, restricting the user to editing only certain aspects; you can hide much of the functionality, including the data source, component tray, etc.:
designer video
designer documentation
hide options in designer
if (EditLayout(document))
    RefreshDocument();

public static bool EditLayout(XtraReport document)
{
    bool changesMade;
    using (var designer = new XRDesignRibbonForm())
    {
        designer.OpenReport(document);
        XRDesignPanel activePanel = designer.ActiveDesignPanel;
        activePanel.AddCommandHandler(new DesignerCommandHandler(activePanel));
        HideDesignerOptions(activePanel);
        designer.ShowDialog();
        // Set this tag in your DesignerCommandHandler.
        changesMade = activePanel.Tag != null && (DialogResult)activePanel.Tag == DialogResult.Yes;
        activePanel.CloseReport();
    }
    return changesMade;
}
Finally, some utility methods for changing a document/report's layout:
internal static byte[] GetLayoutData(this XtraReport report)
{
    using (MemoryStream mem = new MemoryStream())
    {
        report.SaveLayoutToXml(mem);
        return mem.ToArray();
    }
}

internal static void SetLayoutData(this XtraReport report, byte[] data)
{
    using (var mem = new MemoryStream(data))
    {
        report.LoadLayoutFromXml(mem);
    }
}

How to detect open/closed hand using Microsoft Kinect for Windows SDK ver 1.7 C#

I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass trying to find a way to detect whether a certain hand is closed or open.
I saw the Kinect for Windows Toolkit, but the documentation is non-existent and I can't find a way to make it work.
Does anyone know of a simple way to detect the hand's state? Even better if it doesn't involve the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;
        return result;
    }
}
Then, in the main application code, we need to subscribe to the interaction events like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
Finally, the code for the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;
            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;
                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";
                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;
                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take appropriate action in the event handler.
Note that a KinectRegion can be anything. The base class is ContentControl, so you can place anything from something as simple as an image to a complex Grid layout within the region to be acted on.
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container, which can have anything put inside. Have a look at the ControlBasics-WPF example, at the Kinect for Windows CodePlex, and do a search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it which are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and do a search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
As far as I know, the Microsoft Kinect for Windows SDK is not well suited to detecting open and closed hands. Microsoft provides tracking of 20 body joints and does not include the fingers of the hand. You can take advantage of the Kinect Interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be the OpenNI SDK.
Some of the middlewares of OpenNI allow finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}
