I have a keyboard app designed for the Serbian language. My keys have labels based on the Serbian Cyrillic alphabet. The XML strings used for those labels are enclosed in <xliff:g></xliff:g> tags, but a certain provider on a certain type of phone still translates them into a different language. Just in case, I also keep my strings in language-specific folders, but it still happens. Does anyone know of any other way I could disable translation of all my strings?
There are providers who can handle translations of technical files, i.e. they know what to translate in technical files. Some of them also let you manage the translations yourself. OneSky is one of these platforms, and we also provide a translation service.
See the GIF of how placeholder validation works in OneSky.
Disclaimer: I work at OneSky.
We are migrating our app from iOS 6 to iOS 7, and we create our views programmatically (rather than from storyboards or nibs).
We are trying to support multiple countries with different languages.
For example:
English for China, India, US
Simplified Chinese for Taiwan, China
There can be custom overrides for a specific country on top of the base language localization set.
Now I need to have a common base of language bundles plus country-specific bundles.
Common Language Bundles: (base language bundles)
en.lproj
zh_hans.lproj
Country Specific Override Bundles: (if I have custom text for specific countries)
ch(ina)_en.lproj
ch(ina)_hans.lproj
us_en.lproj
Problem:
Resource files (translations) have to be duplicated for each country (China, Taiwan) in English and Chinese. How can we avoid this? Images are also duplicated sometimes; it becomes a maintenance problem if we start supporting more than 10 countries.
Android supports delta overrides of translations per country for each language. Do we have anything similar in iOS?
I know it is not supported out of the box in iOS. What is the right way to achieve this without duplicating the resources? Any hints or ideas?
Thanks,
Alex
I hope I've understood correctly.
1a. Image files will only need to be duplicated per language if they contain text or "imagery" that requires translation; otherwise there should be only one version. From memory, you select which image files you want to be translated.
2a. A translation is needed for each language you want to support - there is no way around this (obviously). These usually live in "strings" files which you send off for translation.
2b. If you don't supply a specific translation for a string, it defaults to the "base" translation. Unfortunately, I don't know how this would work with two "base" translations, or even if it is possible, as usually the base translation is the language you developed in. You will need to investigate further.
2c. You will need to manage deltas to your strings files yourself - through Git perhaps? This is annoying but doable, although there may be third-party products that can do this; one possible shape of such a merge is sketched below.
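Since the question asks specifically about delta overrides, here is a minimal sketch of the kind of merge mentioned in 2c, done in Python as a pre-build step: a base Localizable.strings per language, plus an optional per-country override file whose keys win. The directory layout, file naming scheme (for example us_en.strings) and the simplistic parser are assumptions for illustration, not a tested build script.

```python
# merge_strings.py - illustrative only: merge a base language .strings file
# with an optional country-specific override so only the delta is maintained.
import re
from pathlib import Path

ENTRY_RE = re.compile(r'^\s*"(?P<key>[^"]+)"\s*=\s*"(?P<value>.*)"\s*;\s*$')

def read_strings(path):
    """Parse a simple "key" = "value"; file (no multi-line values or escapes)."""
    entries = {}
    if path.exists():
        for line in path.read_text(encoding="utf-8").splitlines():
            match = ENTRY_RE.match(line)
            if match:
                entries[match.group("key")] = match.group("value")
    return entries

def write_strings(path, entries):
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = [f'"{key}" = "{value}";' for key, value in sorted(entries.items())]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")

def merge(base_dir, overrides_dir, out_dir, language, country):
    """Base translations first; country-specific keys override them."""
    base = read_strings(Path(base_dir) / f"{language}.lproj" / "Localizable.strings")
    delta = read_strings(Path(overrides_dir) / f"{country}_{language}.strings")
    write_strings(Path(out_dir) / f"{language}.lproj" / "Localizable.strings",
                  {**base, **delta})

if __name__ == "__main__":
    # Example: English base plus the US-specific delta.
    merge("Localization/Base", "Localization/Overrides", "Generated", "en", "us")
```

Run as an Xcode build phase (or a plain script before building), this keeps only the base catalogs and the small per-country deltas under version control and generates the full bundles on the fly.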
We are developing an application which runs on various platforms (Windows, Windows RT, Mac OS X, iOS, Android).
The problem is how to manage the different localizations on the different platforms in an easy way. The language files on the different platforms have various formats (some are XML-based, others are simple key-value pairs, and others are totally crazy formats like on Mac OS).
I'm sure we aren't the first company with this problem, but I wasn't able to find an easy-to-use solution that gives us one "data source" where the strings are collected in the different languages (ideally with a user interface for the translators) and can then be exported to the different formats for the different platforms.
Does anybody have a solution for this problem?
Greetings
Alexander
I recommend using the GNU Gettext toolchain for management, and at runtime use either:
some alternate implementation for runtime reading, like Boost.Locale,
your own implementation (the .mo format is pretty trivial), or
the Translate Toolkit to convert the message catalogs to some other format of your liking.
You can't use the libintl component of GNU Gettext, because it is licensed under the LGPL and the terms of both the Apple AppStore and the Windows Live Store are incompatible with that license. But it is really trivial to reimplement the bit you need at runtime.
The Translate Toolkit actually reimplements all or most of GNU Gettext and supports many additional localization formats, but the Gettext .po format has the most free tools for it (e.g. poedit for local editing and Weblate for online editing), so I recommend sticking with it anyway. And read the GNU Gettext manual; it describes the intended process and the rationale behind it well.
I have quite good experience with the toolchain. The Translate Toolkit is easy to script when you need some special processing like extracting translatable strings from your custom resource files and Weblate is easy to use for your translators, especially when you rely on business partners and testers in various countries for most translations like we do.
Translate Toolkit also supports extracting translatable strings from HTML, so the same process can be used for translating your web site.
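As a small illustration of the runtime side (the "reimplement the bit you need" point above), this is roughly what reading compiled .mo catalogs looks like with Python's standard gettext module; the domain name, locale directory and message are made up:

```python
import gettext

# Looks for locale/de/LC_MESSAGES/myapp.mo (paths and domain are illustrative).
# fallback=True returns a NullTranslations object when no catalog is found,
# so untranslated messages simply fall back to the source-language string.
translation = gettext.translation(
    "myapp", localedir="locale", languages=["de"], fallback=True
)
_ = translation.gettext

print(_("Hello, world!"))
```

The equivalent logic on platforms that cannot ship libintl is small enough to rewrite by hand or to cover with Boost.Locale, as mentioned above.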
I did a project for iPhone and Android which had many translations and I think I have exactly the solution you're looking for.
The way I solved it was to put all translation texts in an Excel spreadsheet and use a VBA macro to generate the .strings and .xml translation files from there. You can download my example Excel sheet plus VBA macro here:
http://members.home.nl/bas.de.reuver/files/multilanguage.zip
Just recently I've also added preliminary Visual Studio .resx output, although that's untested.
edit:
by the way, my JavaScript Xcode/Eclipse converter might also be of use.
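If you would rather avoid VBA, the same idea can be sketched in a few lines of Python: read the translations from a CSV export of the spreadsheet and emit a Localizable.strings and a strings.xml per language. The column layout (key,en,de,...), paths and escaping here are assumptions for illustration, not the format of the download linked above.

```python
# Hypothetical spreadsheet export: key,en,de,fr (first row is the header).
import csv
from pathlib import Path
from xml.sax.saxutils import escape

def generate(csv_path, out_dir):
    with open(csv_path, newline="", encoding="utf-8") as handle:
        rows = list(csv.DictReader(handle))
    if not rows:
        return
    languages = [column for column in rows[0] if column != "key"]
    for lang in languages:
        # iOS: <out_dir>/<lang>.lproj/Localizable.strings
        ios_file = Path(out_dir) / f"{lang}.lproj" / "Localizable.strings"
        ios_file.parent.mkdir(parents=True, exist_ok=True)
        ios_file.write_text(
            "\n".join(f'"{row["key"]}" = "{row[lang]}";' for row in rows) + "\n",
            encoding="utf-8",
        )
        # Android: <out_dir>/values-<lang>/strings.xml
        android_file = Path(out_dir) / f"values-{lang}" / "strings.xml"
        android_file.parent.mkdir(parents=True, exist_ok=True)
        items = "\n".join(
            f'    <string name="{row["key"]}">{escape(row[lang])}</string>'
            for row in rows
        )
        android_file.write_text(
            '<?xml version="1.0" encoding="utf-8"?>\n<resources>\n'
            + items + "\n</resources>\n",
            encoding="utf-8",
        )

if __name__ == "__main__":
    generate("translations.csv", "generated")
```

A real converter would also need to handle escaping of quotes, apostrophes and format placeholders for each platform.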
You can store your translations on https://l10n.ws and get them via their API.
Disclaimer: I am the CTO and Co-Founder at Tethras, but will try to answer this in a way that is not just "Use our service".
As loldop points out above, you really need to normalize your content across all platforms if you want to have a one-stop solution for managing your localized content. This can be a lot of work, and would require much coding and scripting and calling of various tools from the different SDKs to arrive at a common format that would service the localization needs of all the various file formats you need to support. The length and complexity of my previous sentence is inversely proportional to the amount of work you would need to do to arrive at a favorable solution for all of this.
At Tethras, we have built a platform that alleviates the need for multi-platform software publishers to have to do this. We support all of the native formats from the platforms you list above, and can leverage translations from one file format to another. For example, translate the content in Localizable.strings from your iOS app into a number of languages, then upload your equivalent strings.xml file from Android or foo.resx from Windows RT to the system, and it will leverage translations for you automatically. Any untranslated strings will be flagged and you can order updates for these strings.
In effect, Tethras is a CMS for localized content across many different native files formats.
In our company we are trying to decide whether to use English for our message IDs or whether we should invent some kind of key. We can only take the second approach if there are editors which allow us to work on multiple .po files in one interface, because translators will need to see the English text in order to translate into another language, rather than translating from some cryptic message ID.
Is there a tool which basically circumvents the message ID and allows translation from one language into another?
I guess many translation tools do support this mode, as it is used by several translation formats (like Android resource strings or Java properties). However, it is not commonly used with Gettext (as you might easily end up with the application showing the message ID instead of untranslated text).
I believe it should work with Weblate, though most likely nobody has ever used this mode with Gettext files.
Update: This mode is fully supported in Weblate and used by many projects now.
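The risk mentioned above is easy to see with a gettext-style API: when a message has no translation, the message ID itself is what ends up on screen. A tiny Python illustration (the domain, catalog path and keys are made up):

```python
import gettext

# fallback=True means a missing catalog returns the msgid unchanged.
catalog = gettext.translation("myapp", localedir="locale",
                              languages=["fr"], fallback=True)
_ = catalog.gettext

# With English msgids the worst case is untranslated but readable text;
# with symbolic keys the worst case is the raw key leaking into the UI.
print(_("Save changes"))        # worst case: "Save changes"
print(_("dialog.save.button"))  # worst case: "dialog.save.button"
```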
I'm creating an English translation for a program written in German (i.e. all strings within tr("...") are German). Users who are in a non-English non-German locale will probably want to see the English translation, but with the program as it is now they will see German.
There are some ways to solve this problem:
Check if it's a German locale and force to English otherwise.
Present an option to the user.
Make the programmers change their source code to English.
What is considered best-practice for internationalizing where the source code is not in English?
These are two separate questions.
The best practice is to not use any kind of hard-coded string in the sources.
Strings should be stored in external files and loaded by ID.
But what you have there does not sound like the best practice. Might be too much work to get it there.
What you describe (the tr("...") stuff) sounds like gettext (or something similar).
The approach of gettext (and similar libraries) is that "the stuff in the sources is the ultimate fallback", used if the strings for the desired language are not present.
In this case I would go with "Present an option to the user."
You can't assume the user knows English.
Real example: in Switzerland the official languages are Italian, German, French and Romansh. If I ask for French and it is not present, then the next best option is probably German, not English. In Canada the official languages are French and English, so if I ask for French and it is not available, the next best option is probably English.
I think the best option is asking the user (during installation probably).
Changing the source to English is too costly and not worth it. I live in Brazil; we have tons of code in Portuguese, and translating it to English has never been necessary (and we do make software for English speakers) - unless you have a client that requires you to do so (usually when you are also selling the source).
Hope it helps
OK, so I guess the three options are:
Recompile the program with translated strings.
This is fraught with danger as you'll end up with two copies of the source. Bug-fixes in one will need to be done in the other. And then, what happens if you need French? Italian? Spanish? The only advantage of this approach is that it's feasible for a non-developer to do the work. (Just about.)
Resource out the strings, and automatically check what the UI locale is on load.
Here the strings are replaced with GetResource("key") or similar. On load, the program automatically translates to the user's culture. This might work, but I know plenty of German speakers who have an English-language culture installed on their PCs but who would still prefer German-language programs at some points.
Resource out the strings and give the user the choice on load
In general it's always best to give the user control. This might be a prompt on load, although if the application is used often this can be an annoyance. Perhaps a balance is to ask the user for their preference during installation, and then give them an option in a dialog to change this setting later.
Note, by the way, that translation is not localisation. For instance: number formats are quite different in Germany (e.g. 1.233,44) from English (e.g. 1,233.44). Icons and suchlike often have national characteristics.
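To make options 2 and 3 concrete, the decision on startup can be as small as the following sketch (written in Python for brevity; the list of shipped translations, the stored preference and the locale lookup are all placeholders for whatever your framework provides):

```python
AVAILABLE = {"de", "en"}   # translations that actually ship with the program
SOURCE_LANGUAGE = "de"     # the language the tr("...") strings are written in

def pick_language(user_choice, ui_locale):
    """Stored user preference wins, then the UI locale, then the source language."""
    detected = ui_locale.split("-")[0].lower() if ui_locale else None
    for candidate in (user_choice, detected):
        if candidate in AVAILABLE:
            return candidate
    return SOURCE_LANGUAGE

# No saved preference and a French UI locale: no French catalog exists,
# so we fall back to the source language (or prompt the user instead).
print(pick_language(None, "fr-CH"))   # -> "de"
print(pick_language("en", "fr-CH"))   # -> "en"
```

Whether the final fallback should be English or the source language is exactly the judgment call discussed above; the point is only that the preference, the detected locale and the fallback are three separate inputs.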
I am designing a localized web app. I am leaning towards auto-detecting the browser language setting, but I notice a number of respectable sites asking the user to select a language. Is there any usability issue you know of (from actual experience out there) with just auto-detecting the user's language?
Thanks.
Give me a choice
Remember my choice
Use the auto-detect as default
Make transition easy
In many situations I prefer or even need the "original" over my local one, bad translations or different content being the major reasons.
If you register multiple domains, you can base your auto-detect on that: when foo.com redirects me to foo.de, or otherwise shows me a German interface, it is actively ignoring my choice to go to foo.com.
MSDN did insist on showing me atrocious automatic translations and ALWAYS made me click to go to the readable, understandable English one (that's a step up: when they introduced it, the default selection for changing the language was something like Afrikaans).
Make the transition easy: i.e. make it easy to go to the counterpart of the current page in a different language. Amazon often succeeds when I change ".com" to ".de", but then fails to lead me to the German translation of the item. That's not always possible, as it requires each local view to have the same structure and a 1:1 page mapping. But generally, you have to weigh the above requirements against the other constraints of the project.
[edit] MSDN got better now :)
I would suggest auto-detecting the language and displaying the site in that language, or in the default language (probably English) if the translation is not available. Additionally, present the user with a selection of languages at the top or bottom of your page. The names of the languages should be written in the target language.
Don't do it like that: English, German, Italian.
But: English, Deutsch, Italiano.
Obviously there is the usability problem that you might detect a language that the user doesn't understand. How are you going to do the detection? Don't assume everybody has their browser set to the correct language, and IP addresses are also a very bad indicator of the user's language.
Practical example: YouTube tried to convince me for a week or so to use the Japanese version, though I can't read Japanese. Not very helpful. Microsoft is also determined to serve me automatically translated versions of their documentation when I just want to read the English one.
So don't try to tell your users which language they're supposed to prefer, let them decide for themselves.
I really hate non-configurable auto-detection because a lot of applications are translated more than imperfectly. I would rather read perfect English than bad Russian. For example, some terms do not translate in a reasonable way, and trying to translate everything makes the localized version faintly ridiculous.
Also, some applications cannot translate new features fast enough, which leads to a mix of languages.
So I always prefer to have a choice, and I choose the version that is native to the application's author - usually the best-written one (unless it is a language I do not know).
Update:
One situation where it went beyond ridiculous was DB2 (or its client tools, I'm not sure), which forced me to install a Russian version, but all errors in that version were shown as "???????? ??? ??? ??".
Yes: at work, we have Windows XP deployed with the 'English' language (because we have worldwide sites and only one kind of Windows to deploy, with only one kind of settings when it comes to language).
Yet all our applications must run in French. The auto-detect feature alone would not be enough for an appropriate display of the labels.
Sometimes when you are trying to describe something to a user over the phone and you are in a different location, it is very annoying when you are both looking at the same URL, but see different results. You might even go so far as to include the language in the URL similar to how wikipedia does it (e.g. en.wikipedia.org).
Also sometimes a user will be on a friend's computer and try to access a website but won't see it in their preferred language, because of the language settings on the computer.
I think the best solution would be to allow the user to override the setting, but default it to the auto-detected language.
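That override-plus-default pattern is straightforward to implement: honour an explicit choice (a cookie, an account setting or a language prefix in the URL) first, and use Accept-Language only as the initial default. A framework-agnostic sketch in Python; the stored choice and the set of supported languages are made up:

```python
SUPPORTED = {"en", "de", "fr"}
DEFAULT = "en"

def parse_accept_language(header):
    """Return base language codes from an Accept-Language header, best first."""
    languages = []
    for part in (header or "").split(","):
        piece = part.strip().replace(" ", "")
        if not piece:
            continue
        lang, _, quality_str = piece.partition(";q=")
        try:
            quality = float(quality_str) if quality_str else 1.0
        except ValueError:
            quality = 0.0
        languages.append((quality, lang.split("-")[0].lower()))
    return [lang for _, lang in sorted(languages, key=lambda item: -item[0])]

def choose_language(user_choice, accept_language_header):
    """An explicit user choice wins; the header only supplies the default."""
    if user_choice in SUPPORTED:
        return user_choice
    for lang in parse_accept_language(accept_language_header):
        if lang in SUPPORTED:
            return lang
    return DEFAULT

# Browser prefers Japanese, then English; the user has not chosen yet.
print(choose_language(None, "ja,en-US;q=0.8,en;q=0.6"))   # -> "en"
```

Once the user picks a language from the selector, store that choice (and ideally reflect it in the URL, wikipedia-style) so that two people looking at the same URL see the same result.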
I agree that the auto-detect is not enough.
Not many users know the settings for selecting their language. Therefore the setting will often be left at the default, and therefore incorrect (for non-English users).