Can't translate QComboBox's "Cancel" and "Done" buttons in iOS - ios

I use Qt to build a multilanguage app for iOS, and I use ".ts" and ".qm" files to translate text.
The QComboBox on iOS has two buttons that can't be translated.
The .ts file needs a class name, but I can't find these two words in any class in the Qt source.

The Cancel and Done translations are part of Qt's own translations, so you need one QTranslator for your app strings and a second one for the Qt library strings.
The qt_*.qm files are in the translations directory of your Qt installation; for iOS you need to ship them with your app (for example as bundled resources), since there is no Qt installation on the device.
For example, to switch to German you would need to load two translators:
myapp_de.qm
qt_de.qm
https://wiki.qt.io/How_to_create_a_multi_language_application
Note how loadLanguage() below installs two QTranslators, one for the app and one for Qt.
// Called every time a menu entry of the language menu is triggered
void MainWindow::slotLanguageChanged(QAction* action)
{
    if (0 != action) {
        // load the language depending on the action content
        loadLanguage(action->data().toString());
        setWindowIcon(action->icon());
    }
}

void switchTranslator(QTranslator& translator, const QString& filename)
{
    // remove the old translator
    qApp->removeTranslator(&translator);

    // load the new translator
    if (translator.load(filename))
        qApp->installTranslator(&translator);
}

void MainWindow::loadLanguage(const QString& rLanguage)
{
    if (m_currLang != rLanguage) {
        m_currLang = rLanguage;
        QLocale locale = QLocale(m_currLang);
        QLocale::setDefault(locale);
        QString languageName = QLocale::languageToString(locale.language());
        switchTranslator(m_translator, QString("TranslationExample_%1.qm").arg(rLanguage));
        switchTranslator(m_translatorQt, QString("qt_%1.qm").arg(rLanguage));
        ui.statusBar->showMessage(tr("Current Language changed to %1").arg(languageName));
    }
}

Related

How can I Get Xamarin iOS Application to Automatically Sense Light and Dark Changes?

Someone here (thanks sushihangover!) helped me get my application to read the iOS Settings Dark or Light theme on command. I'm using Xamarin (not Forms). I also need the following (just for iOS):
iOS Settings theme is Light
App is set to Automatic, so it uses the current iOS Settings theme (Light)
App launches as Light
Home button press
Change iOS Settings to Dark
Bring app to foreground
App still looks Light, but it should look Dark.
I realize the AppDelegate has a WillEnterForeground method, but I don't know how to wire that up so the app looks Dark when it comes to the foreground. I'm using MvvmCross. The following link looks promising:
https://forums.xamarin.com/discussion/181648/best-approach-to-handle-dark-theme
I don't understand how to apply the link's contents to my MvvmCross architecture.
Your help is appreciated!
Thanks!
Larry
The best way to react to theme changes while using the MVVM pattern would be to implement an IThemeService interface as shown in your link.
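For reference, a minimal sketch of what such an interface could look like (the member names are my assumptions, chosen to match the calls used below; the BaseTheme enum appears in full in the native section further down):

public interface IThemeService
{
    // Last theme that was applied.
    BaseTheme CurrentTheme { get; }

    // Called from platform code whenever the system theme changes.
    void UpdateTheme(BaseTheme theme);
}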
xamarin forms iOS
But I think it's not possible to react to configuration changes on the Xamarin.Forms iOS platform while using MvvmCross. I looked into the source code of the MvvmCross.Forms.iOS project and couldn't find any equivalent of the MvvmCross.Forms.Android setup methods such as OnConfigurationChanged.
On Android you can easily refresh the app theme when the system theme changes in the MainActivity.
public class MainActivity : MvxFormsAppCompatActivity
{
    public override void OnConfigurationChanged(Configuration newConfig)
    {
        base.OnConfigurationChanged(newConfig);
        this.UpdateTheme(newConfig);
    }

    protected override void OnResume()
    {
        base.OnResume();
        UpdateTheme(Resources.Configuration);
    }

    protected override void OnStart()
    {
        base.OnStart();
        this.UpdateTheme(Resources.Configuration);
    }

    private void UpdateTheme(Configuration newConfig)
    {
        if (Build.VERSION.SdkInt >= BuildVersionCodes.Froyo)
        {
            var uiModeFlags = newConfig.UiMode & UiMode.NightMask;
            switch (uiModeFlags)
            {
                case UiMode.NightYes:
                    Mvx.IoCProvider.Resolve<IThemeService>().UpdateTheme(BaseTheme.Dark);
                    break;
                case UiMode.NightNo:
                    Mvx.IoCProvider.Resolve<IThemeService>().UpdateTheme(BaseTheme.Light);
                    break;
                default:
                    throw new NotSupportedException($"UiMode {uiModeFlags} not supported");
            }
        }
    }
}
But in the AppDelegate on the iOS platform, you don't have any of this functionality to override.
public class AppDelegate : MvxFormsApplicationDelegate
{
    public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
    {
        return base.FinishedLaunching(application, launchOptions);
    }
}
I copied this code from this project.
native xamarin iOS
When you are using native iOS you can override the TraitCollectionDidChange method. It's the equivalent of the Android OnConfigurationChanged function.
Maybe look here for more details. I adapted the Android version to iOS for you. First, you have to create a custom view controller.
// your supported theme versions
public enum BaseTheme
{
    Inherit = 0,
    Light = 1,
    Dark = 2
}

public class MyViewController : UIViewController
{
    public override void TraitCollectionDidChange(UITraitCollection previousTraitCollection)
    {
        base.TraitCollectionDidChange(previousTraitCollection);
        if (TraitCollection.UserInterfaceStyle != previousTraitCollection.UserInterfaceStyle)
        {
            UpdateTheme(TraitCollection.UserInterfaceStyle);
        }
    }

    private void UpdateTheme(UIUserInterfaceStyle newConfig)
    {
        switch (newConfig)
        {
            case UIUserInterfaceStyle.Dark:
                Mvx.IoCProvider.Resolve<IThemeService>().UpdateTheme(BaseTheme.Dark);
                break;
            case UIUserInterfaceStyle.Light:
                Mvx.IoCProvider.Resolve<IThemeService>().UpdateTheme(BaseTheme.Light);
                break;
            default:
                throw new NotSupportedException($"UIUserInterfaceStyle {newConfig} not supported");
        }
    }
}
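Since the app is MvvmCross-based, the same override could live on a shared base view controller instead of a plain UIViewController. A sketch of that idea, assuming MvxViewController from MvvmCross.Platforms.Ios.Views:

public abstract class ThemeAwareViewController : MvxViewController
{
    public override void TraitCollectionDidChange(UITraitCollection previousTraitCollection)
    {
        base.TraitCollectionDidChange(previousTraitCollection);

        // previousTraitCollection can be null on the first call.
        if (TraitCollection.UserInterfaceStyle != previousTraitCollection?.UserInterfaceStyle)
        {
            var theme = TraitCollection.UserInterfaceStyle == UIUserInterfaceStyle.Dark
                ? BaseTheme.Dark
                : BaseTheme.Light;
            Mvx.IoCProvider.Resolve<IThemeService>().UpdateTheme(theme);
        }
    }
}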
I uploaded a project with a simplified implementation for native iOS and Android here. Complete and improve a few things and it will work. Also look at the StarWars and TipCalc projects in the MvvmCross samples repo.
mvvmcross ioc
Your interface structure could look like this:
IThemeService (base project) - ThemeService (base project) - ThemeService (iOS project)
And of course you have to register the implementation:
Mvx.IoCProvider.RegisterSingleton<IThemeService>(() => new ThemeService());
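A hypothetical base-project implementation to go with that registration (the ThemeChanged event is my own addition, so view models can re-style when a platform reports a change):

public class ThemeService : IThemeService
{
    public BaseTheme CurrentTheme { get; private set; } = BaseTheme.Inherit;

    // View models subscribe to this to re-style when the theme flips.
    public event EventHandler<BaseTheme> ThemeChanged;

    public void UpdateTheme(BaseTheme theme)
    {
        if (CurrentTheme == theme)
            return;

        CurrentTheme = theme;
        ThemeChanged?.Invoke(this, theme);
    }
}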

Lazy rendering print pages on iOS

I need to print custom UI in a cross-platform application. On Windows, the workflow is:
Choose what to print
Open the Windows print dialog/print preview. This only queries for print pages as they are needed.
Begin printing and report progress as each page prints.
I am trying to replicate this workflow on iOS, but it seems that all the print pages need to be print-ready before opening the iOS print dialog (UIPrintInteractionController). This is a big problem because the user may be printing a lot of pages, and each page can take a while to prepare for printing. Therefore it takes too long to prepare and open the UIPrintInteractionController.
Is there any way on iOS to have it lazily query for the print pages one (or even a few) at a time as they are being previewed or printed? Or do they all have to be prepared ahead of presenting the UIPrintInteractionController?
EDIT: It's a major problem when printing many pages because the app can easily run out of memory holding that many UIImages at the same time.
I tried using the PrintingItems property with UIImages, as well as a UIPrintPageRenderer. I am using Xamarin.iOS, so pardon the C# syntax, but you get the idea. An answer in Swift or Objective-C would be fine. Here is my sample pseudo-code:
//Version 1
public void ShowPrintUI_UsingPrintingItems()
{
    UIPrintInfo printOptions = UIPrintInfo.PrintInfo;
    InitializePrintOptions(printOptions);
    _printer.PrintInfo = printOptions;
    var printRect = new CGRect(new CGPoint(), _printer.PrintPaper.PaperSize);

    // Each of these is horribly slow.
    // Can I have it render only when the page is actually needed (being previewed or printed)?
    for (uint i = 0; i < numPrintPages; i++)
    {
        _printPages[i] = RenderPrintPage(i, printRect);
    }
    _printer.PrintingItems = _printPages;

    _printer.Present(true, (handler, completed, error) =>
    {
        // Clean up, etc.
    });
}
//Version 2
public void ShowPrintUI_UsingPageRenderer()
{
    UIPrintInfo printOptions = UIPrintInfo.PrintInfo;
    InitializePrintOptions(printOptions);
    _printer.PrintInfo = printOptions;

    // This still draws every page in the range and is just as slow
    _printer.PrintPageRenderer = new MyRenderer(_callback, _numPrintPages);

    _printer.Present(true, (handler, completed, error) =>
    {
        // Clean up, etc.
    });
}

private class MyRenderer : UIPrintPageRenderer
{
    private readonly IIosPrintCallback _callback;
    private readonly nint _numPrintPages;

    public MyRenderer(IIosPrintCallback callback, nint numPrintPages)
    {
        _callback = callback;
        _numPrintPages = numPrintPages;
    }

    public override void DrawPage(nint index, CGRect pageRect)
    {
        DrawPrintPage(index, pageRect);
    }

    public override nint NumberOfPages => _numPrintPages;
}
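One way to at least cap the memory cost (a sketch of an alternative worth trying, not an API guarantee): render every page into a single PDF with UIGraphics.BeginPDFContext and hand the resulting data to PrintingItem. Vector PDF pages are typically much smaller than an array of full-page UIImages. RenderPageIntoPdfContext here is a hypothetical stand-in for the app's own drawing code:

//Version 3 (sketch): render into one PDF instead of many UIImages
public void ShowPrintUI_UsingPdfData()
{
    UIPrintInfo printOptions = UIPrintInfo.PrintInfo;
    InitializePrintOptions(printOptions);
    _printer.PrintInfo = printOptions;

    var printRect = new CGRect(new CGPoint(), _printer.PrintPaper.PaperSize);
    var pdfData = new NSMutableData();

    UIGraphics.BeginPDFContext(pdfData, printRect, null);
    for (uint i = 0; i < numPrintPages; i++)
    {
        UIGraphics.BeginPDFPage();
        // Hypothetical helper: draw page i into the current PDF context.
        RenderPageIntoPdfContext(i, printRect);
    }
    UIGraphics.EndPDFContent();

    _printer.PrintingItem = pdfData;
    _printer.Present(true, (handler, completed, error) =>
    {
        // Clean up, etc.
    });
}

This doesn't make the preparation lazy, but only one page's drawing is live at a time, which addresses the out-of-memory part of the problem.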

Is there a way to rearrange objects on a DevX report?

I want to enable the user to move things around on a DevExpress print preview and print only after that is done. If it is possible, could I get some directions on where I can start looking? (I won't have time to look through the whole documentation, which may sound lazy, but DevExpress is kind of huge for the short time I have.)
I don't think you can do this on the print preview directly, but what you can do is provide a button which launches the XtraReports designer and passes in the layout from your currently displayed document. When the user has finished editing, you can reload the document in the print preview, loading its new layout as required. You may need to customize the designer heavily to remove various options, restricting the user to editing only certain aspects - you can hide much of the functionality, including the data source, component tray, etc.:
designer video
designer documentation
hide options in designer
if (EditLayout(document))
    RefreshDocument();

public static bool EditLayout(XtraReport document)
{
    bool changesMade;
    using (var designer = new XRDesignRibbonForm())
    {
        designer.OpenReport(document);
        XRDesignPanel activePanel = designer.ActiveDesignPanel;
        activePanel.AddCommandHandler(new DesignerCommandHandler(activePanel));
        HideDesignerOptions(activePanel);
        designer.ShowDialog();

        // set this tag in your DesignerCommandHandler
        changesMade = activePanel.Tag != null && (DialogResult)activePanel.Tag == DialogResult.Yes;
        activePanel.CloseReport();
    }
    return changesMade;
}
Finally, some utility methods for saving and restoring a document/report's layout:
internal static byte[] GetLayoutData(this XtraReport report)
{
    using (var mem = new MemoryStream())
    {
        report.SaveLayoutToXml(mem);
        return mem.ToArray();
    }
}

internal static void SetLayoutData(this XtraReport report, byte[] data)
{
    using (var mem = new MemoryStream(data))
    {
        report.LoadLayoutFromXml(mem);
    }
}
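For example, a round trip with those helpers, snapshotting the layout before editing so it can be restored if the user cancels:

// Snapshot the current layout before opening the designer.
byte[] originalLayout = document.GetLayoutData();

if (EditLayout(document))
{
    RefreshDocument(); // user accepted the edits
}
else
{
    document.SetLayoutData(originalLayout); // discard any designer changes
}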

How to detect open/closed hand using Microsoft Kinect for Windows SDK ver 1.7 C#

I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass trying to find a way to detect whether a certain hand is closed or open.
I saw the Kinect for Windows Toolkit, but the documentation is nonexistent and I can't find a way to make it work.
Does anyone know of a simple way to detect the hand's state? Even better if it doesn't involve the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;
        return result;
    }
}
Then, in the main application code, we need to hook up the interaction stream and its event handler like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
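One thing that's easy to miss: the interaction stream only raises InteractionFrameReady if it is also being fed depth and skeleton data. A sketch of that wiring, assuming an AllFramesReady subscription and pre-allocated depthPixels/skeletons buffers (field names here are illustrative):

private void SensorOnAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // Feed depth data to the interaction stream.
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
            this.interactionStream.ProcessDepth(this.depthPixels, depthFrame.Timestamp);
        }
    }

    // Feed skeleton data (plus the sensor's accelerometer reading) as well.
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            skeletonFrame.CopySkeletonDataTo(this.skeletons);
            this.interactionStream.ProcessSkeleton(
                this.skeletons,
                this.sensor.AccelerometerGetCurrentReading(),
                skeletonFrame.Timestamp);
        }
    }
}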
Finally, the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;
            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;
                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";
                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;
                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take the appropriate action in the event handler.
Note that a KinectRegion can be anything. The base class is a ContentControl, so you can place anything from something as simple as an image to a complex Grid layout within the region to be acted on.
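If you are wiring these up in code-behind rather than XAML, a minimal sketch might look like the following (kinectRegion is assumed to be a KinectRegion declared around your content; the Add*Handler methods are the toolkit's attached-event helpers for its routed events):

public MainWindow()
{
    InitializeComponent();

    // Grip events are routed, so they can be attached to the region
    // itself or to any child element the hand pointer is over.
    KinectRegion.AddHandPointerGripHandler(this.kinectRegion, this.OnHandPointerGrip);
    KinectRegion.AddHandPointerGripReleaseHandler(this.kinectRegion, this.OnHandPointerGripRelease);
}

private void OnHandPointerGrip(object sender, HandPointerEventArgs args)
{
    // hand closed while over this element
}

private void OnHandPointerGripRelease(object sender, HandPointerEventArgs args)
{
    // hand opened again
}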
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container that can have anything put inside it. Have a look at the ControlBasics-WPF example on the Kinect for Windows CodePlex and search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it which are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
From what I know, the Microsoft Kinect for Windows SDK is not the best for detecting open and closed hands. Microsoft provides tracking of 20 body joints and does not include the fingers of the hand. You can take advantage of the Kinect Interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be the OpenNI SDK.
Some of the middlewares of OpenNI allow finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}

xpcom/jetpack observe all document loads

I'm writing a Mozilla Jetpack based add-on that has to run whenever a document is loaded. For "toplevel documents" this mostly works using this code (ObserverService = require('observer-service')):
this.endDocumentLoadCallback = function (subject, data) {
    console.log('loaded: ' + subject.location);
    try {
        server.onEndDocumentLoad(subject);
    }
    catch (e) {
        console.error(formatTraceback(e));
    }
};

ObserverService.add("EndDocumentLoad", this.endDocumentLoadCallback);
But the callback doesn't get called when the user opens a new tab using middle click or (more importantly!) for frames. And even this much I only learned by reading the source of another extension, not from the documentation.
So how do I register a callback that really gets called every time a document is loaded?
Edit: This seems to do what I want:
function callback(event) {
    // this is the content document of the loaded page.
    var doc = event.originalTarget;
    if (doc instanceof Ci.nsIDOMNSHTMLDocument) {
        // is this an inner frame?
        if (doc.defaultView.frameElement) {
            // Frame within a tab was loaded.
            console.log('!!! loaded frame:', doc.location.href);
        }
        else {
            console.log('!!! loaded top level document:', doc.location.href);
        }
    }
}

var wm = Cc["@mozilla.org/appshell/window-mediator;1"].getService(Ci.nsIWindowMediator);
var mainWindow = wm.getMostRecentWindow("navigator:browser");
mainWindow.gBrowser.addEventListener("load", callback, true);
Got it partially from here: https://developer.mozilla.org/en/XUL_School/Intercepting_Page_Loads
@kizzx2 you are better served with @jetpack
To the original question: why don't you use the tab-browser module? Something like this:
var browser = require("tab-browser");

exports.main = function main(options, callbacks) {
    initialize(function (config) {
        browser.whenContentLoaded(
            function(window) {
                // something to do with the window
                // e.g., if (window.location.href === "something")
            }
        );
    });
};
Much cleaner than what you do, IMHO, and (until we have an official pageMods module) the supported way to do this.
As of Add-on SDK 1.0, the proper way to do this is to use the page-mod module.
(Under the hood it's implemented using the document-element-inserted observer service notification, which you can use in a regular extension or if page-mod doesn't suit you.)
