Localization ground rules

I submitted my first localized app to the iPhone App Store the other day. I decided to do it to learn about application localization, and because my app was simple enough to stumble through localizing with my mediocre French. I know I didn't do everything "right", but I learned a lot from doing it once. I'd like to keep doing this for all my future apps.
For one thing, I learned to code with localization in mind, but not to start localizing until the app is ready for release. I spent way too much time making small tweaks in two UI files.
What are your favourite localization basics, cardinal rules, and best practices?
I'm thinking mostly for small hobby developers like myself, although stuff from the big leagues would be interesting as well.

The biggest one for me is don't concatenate strings:
Bad:
"You have " + messageCount + " messages";
Good:
"You have {0} messages"
Word order varies from language to language, and so you can't assume where in a sentence your dynamic data might occur.
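To make this concrete, here is a minimal sketch in Java (the class name and pattern string are just illustrative; the same placeholder idea exists in most string-formatting APIs):
import java.text.MessageFormat;

public class MessageDemo {
    public static void main(String[] args) {
        int messageCount = 3;
        // The pattern would normally come from a per-language resource file;
        // the translator is free to move {0} wherever the target language needs it.
        String pattern = "You have {0} messages.";
        System.out.println(MessageFormat.format(pattern, messageCount));
    }
}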

In your UI, allow for about 30-50% expansion of translations from English. A method I learned early in my career was to produce a 'pig latin' localized version of the UI.
If your user interface is still legible in Pig Latin, it will probably be legible in real languages.
Ifway ouryay userway interfaceway isway illstay egiblelay inway Igpay Atinlay, itway illway obablypray ebay egiblelay inway ealray anguageslay.
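One way to automate that idea (a rough sketch, not the exact method described above, and it needs Java 11+ for String.repeat): run every UI string through a pseudo-localizer that swaps in accented look-alikes and pads the text, so hard-coded strings and layouts that can't absorb expansion jump out immediately.
public class PseudoLocalizer {
    // Map a few ASCII letters to accented look-alikes so untranslated
    // (hard-coded) strings stand out immediately in the UI.
    private static final String PLAIN    = "aeiouAEIOU";
    private static final String ACCENTED = "áéíóúÁÉÍÓÚ";

    public static String pseudoLocalize(String s) {
        StringBuilder sb = new StringBuilder("[");
        for (char c : s.toCharArray()) {
            int i = PLAIN.indexOf(c);
            sb.append(i >= 0 ? ACCENTED.charAt(i) : c);
        }
        // Pad by roughly 40% to simulate the expansion of real translations.
        int padding = Math.max(1, (int) (s.length() * 0.4));
        return sb.append("·".repeat(padding)).append("]").toString();
    }

    public static void main(String[] args) {
        System.out.println(pseudoLocalize("Save changes"));  // prints something like [Sávé chángés····]
    }
}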

Use Unicode for all strings - UTF-16 or UTF-8. If reading/writing to any program/format that doesn't assume that by default, make sure you specify UTF-16 or UTF-8 explicitly.
As Mike Sickler said, don't concatenate strings. Better yet, don't have sentences with inserts, since you don't know how the insert affects the rest of the sentence - different languages have different rules regarding plural / etc.
Bad: "You have " + messageCount + " messages"
Better: "You have {0} messages" (but what if {0} == 1? Do you write message(s)? What about Hebrew, where "one" comes after the noun, but other numbers before?)
Best: "Messages: {0}"
As rhsatrhs said, allow 30-50% expansion. In my (big league) company, we usually assume that German is the longest, although I found that Russian sometimes came out over 100% larger. I suspect it's sometimes translators who don't know the exact term, so they write a longer description using a close term (example: Symbol ==> source code reference marker).
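For the message(s) problem specifically, one option in Java is MessageFormat with an embedded choice format, which keeps the plural decision inside the translatable pattern instead of in code. This is only a sketch: languages with richer plural rules (Russian, Arabic, Hebrew as mentioned above) are usually better served by ICU's plural-aware MessageFormat.
import java.text.MessageFormat;

public class PluralDemo {
    public static void main(String[] args) {
        // English pattern: 0 -> "no messages", 1 -> "one message", 2+ -> "{0} messages".
        // Each locale gets its own pattern (with its own ranges) from the resource files.
        String pattern = "{0,choice,0#You have no messages|1#You have one message|1<You have {0,number,integer} messages}";
        MessageFormat fmt = new MessageFormat(pattern);
        for (int n : new int[] {0, 1, 5}) {
            System.out.println(fmt.format(new Object[] { n }));
        }
    }
}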

Related

Profanity filter import

I am looking to write a basic profanity filter in a Rails-based application. This will use a simple search-and-replace mechanism whenever the appropriate attribute gets submitted by a user. My question is, for those who have written these before: is there a CSV file or some database out there with a list of profanity words that can be imported into my database? We are supplying the replacement words ourselves. We more or less need a database of profanities, racial slurs and anything that's not exactly rated PG-13 that should trigger the filter.
As the Tin Man suggested, this problem is difficult, but it isn't impossible. I've built a commercial profanity filter named CleanSpeak that handles everything mentioned above (leet speak, phonetics, language rules, whitelisting, etc.). CleanSpeak is capable of filtering 20,000 messages per second on a low-end server, so it is possible to build something that works well and performs well. I will mention that CleanSpeak is the result of about 3 years of ongoing development, though.
There are a few things I tell everyone who is looking to tackle a language filter.
Don't use regular expressions unless you have a small list and don't mind a lot of things getting through. Regular expressions are relatively slow overall and hard to manage.
Determine if you want to handle conjugations, inflections and other language rules. These often add a considerable amount of time to the project.
Decide what type of performance you need and whether or not you can make multiple passes on the String. The more passes you make, the slower your filter will be.
Understand the Scunthorpe and clbuttic problems and determine how you will handle them. This usually requires some form of language intelligence and whitelisting.
Realize that whitespace has a different meaning now. You can't use it as a word delimiter any more (b e c a u s e of this)
Be careful with your handling of punctuation because it can be used to get around the filter (l.i.k.e th---is)
Understand how people use ASCII art and Unicode to replace characters (\/ = v, those are slashes). There are a lot of Unicode characters that look like English characters, and you will want to handle those appropriately (see the normalization sketch below).
Understand that people make up new profanity all the time by smashing words together (likethis) and figure out if you want to handle that.
You can search around StackOverflow for my comments on other threads as I might have more information on those threads that I've forgotten here.
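As a small illustration of the normalization steps above (collapsing separators and folding leet-speak look-alikes before matching against a word list), here is a hedged Java sketch; the substitution table and separator set are only examples, and a real filter applies them far more selectively, precisely because of the Scunthorpe/clbuttic problems.
import java.util.Map;
import java.util.regex.Pattern;

public class TextNormalizer {
    // Illustrative leet-speak substitutions; a real filter needs a much larger
    // table, including Unicode look-alike characters.
    private static final Map<Character, Character> LEET = Map.of(
            '4', 'a', '@', 'a', '3', 'e', '1', 'i', '0', 'o', '5', 's', '7', 't');

    // Whitespace and punctuation commonly used to split words (l.i.k.e th---is).
    private static final Pattern SEPARATORS = Pattern.compile("[\\s.\\-_*]+");

    public static String normalize(String input) {
        String collapsed = SEPARATORS.matcher(input.toLowerCase()).replaceAll("");
        StringBuilder sb = new StringBuilder(collapsed.length());
        for (char c : collapsed.toCharArray()) {
            sb.append(LEET.getOrDefault(c, c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(normalize("b 4 d w o r d"));  // -> "badword"
    }
}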
Here's one you could use: Offensive/Profane Word List from CMU site
Based on personal experience, you do understand that it's an exercise in futility?
If someone wants to inject profanity, there's a slew of words that are innocent in one context, and profane in another so you'll have to write a context parser to avoid black-listing clean words. A quick glance at CMU's list shows words I'd never consider rude/crude/socially unacceptable. You'll see there are many words that could be proper names or nouns, countries, terms of endearment, etc. And, there are myriads of ways to throw your algorithm off using L33T speak and such. Search Wikipedia and the internets and you can build tables of variations of letters.
Look at CMU's list and imagine how long the list would be if, in addition to the correct letter, every a could also be 4, o could be 0 or p, e could be 3, s could be 5. And, that's a very, very, short example.
I was asked to do a similar task and wrote code to generate L33T variations of the words, and generated a hit-list of words based on several profanity/offensive lists available on the internet. After running the generator, and being a little over 1/4 of the way through the file, I had over one million entries in my DB. I pulled the plug on the project at that point, because the time spent searching, even using Perl's Regexp::Assemble, was going to be ridiculous, especially since it'd still be so easy to fool.
I recommend you have a long talk with whoever requested that, and ask if they understand the programming issues involved, and low-likelihood of accuracy and success, especially over the long-term, or the possible customer backlash when they realize you're censoring them.
I have one that I've added to (obfuscated a bit) but here it is: https://github.com/rdp/sensible-cinema/blob/master/lib/subtitle_profanity_finder.rb

Avoiding real English words in "short URLs", without sacrificing too much headroom

Assuming here that the language in question is English, and the character sets used are basic ASCII / latin alphabet.
When generating "Short URLs", the first thought is often to use a large "code set"/alphabet to convert an integer (possibly an ID referencing the long URL in your database) to a high "base" (URL-friendly Base-64, for example). In my specific case, I first opted to normalize to Base-36 (numbers, latin letters, not case-sensitive).
However, upon closer inspection, one might find their Short URL generator eventually spitting out naughty words, or other common words, which may be quite undesirable.
One option to avoid generating "real words" would be to just strip out all of the common vowels.
Are there other/better workarounds that don't sacrifice too much headroom?
I think your idea to strip out the vowels will be your best bet here.
Anything else, like blacklists, dictionary lookups, etc., will just be incredibly tedious, require a lot of maintenance and, ultimately, be fallible.
You could normalize to base-30 [0-9bcdfghj-np-tvwxz], which will simply never generate vowels and thus not generate real words.
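A minimal Java sketch of that suggestion, using the digits-plus-consonants alphabet from the answer above (class and method names are illustrative):
public class ShortCode {
    // Base-30 alphabet: digits plus consonants only, so no vowels and therefore
    // no ordinary English words can appear in the output.
    private static final String ALPHABET = "0123456789bcdfghjklmnpqrstvwxz";
    private static final int BASE = ALPHABET.length();  // 30

    public static String encode(long id) {
        if (id == 0) return String.valueOf(ALPHABET.charAt(0));
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET.charAt((int) (id % BASE)));
            id /= BASE;
        }
        return sb.reverse().toString();
    }

    public static long decode(String code) {
        long id = 0;
        for (char c : code.toCharArray()) {
            id = id * BASE + ALPHABET.indexOf(c);
        }
        return id;
    }

    public static void main(String[] args) {
        String code = encode(123_456_789L);
        System.out.println(code + " -> " + decode(code));  // round-trips back to the original id
    }
}
Vowel-free codes can't spell ordinary words, although letter-only fragments that read as abbreviations are still possible, so a short blacklist on top doesn't hurt.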
You could separate your vowels and consonants (xxxddd_eeeaaa). If it's always longer than three letters you're probably safe with curse words.
Or you could insert numbers randomly.
Or you could create a filter.
Of the three, I'd probably stick with the first.
In order to sacrifice only a little information per digit while at the same time preventing as many real words as possible, you should probably leave out the most frequent letters in English. This will be slightly more efficient than simply skipping all vowels.

What are all of the allowable characters for people's names? [closed]

There are the standard A-Z, a-z characters, but also there are hyphens, em dashes, quotes, etc.
Plus, there are all of the international characters, like umlauts, etc.
So, for an English-based system, what's the complete set? What about sets for other languages? What about UTF-8, UTF-16, etc.?
Bonus question: How many name fields are needed, and what are their maximum lengths?
EDIT: There are definitely two different types of characters involved in people's names, those that are there as part of the context, and those that are there for structural reasons. I don't want to limit or interfere with the context characters, but I do need to deal with the structural ones.
For example, I had a name come in that was separated by an em dash, but it was hard to distinguish that from the minus character. To make the system easier for searching, I want to take all five different types of dashes, and map them onto one unique character (minus), that way the searcher doesn't need to know specifically which symbol was initially entered.
The problem exists for dashes, probably quotes as well, but also how many other symbols?
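One way to do that mapping for the search index (a sketch; the character lists below cover only the common dash and apostrophe variants, and the original name should of course be stored untouched):
import java.util.regex.Pattern;

public class NameSearchNormalizer {
    // Hyphen, non-breaking hyphen, figure dash, en dash, em dash, minus sign.
    private static final Pattern DASHES = Pattern.compile("[\\u2010\\u2011\\u2012\\u2013\\u2014\\u2212]");
    // Curly apostrophes and the modifier-letter apostrophe.
    private static final Pattern APOSTROPHES = Pattern.compile("[\\u2018\\u2019\\u02BC]");

    public static String forSearch(String name) {
        String dashed = DASHES.matcher(name).replaceAll("-");
        return APOSTROPHES.matcher(dashed).replaceAll("'");
    }

    public static void main(String[] args) {
        System.out.println(forSearch("Jean–Luc O’Malley"));  // -> Jean-Luc O'Malley
    }
}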
There's a good article by the W3C called Personal names around the world that explains the problems (and possible solutions) pretty well (it was originally a two-part blog post by Richard Ishida: part 1 and part 2).
Personally I'd say: support every printable Unicode character and, to be safe, provide just a single "name" field that contains the full, formatted name. This way you can store pretty much every form of name. You might need more structured storage, but then don't expect to be able to store every single combination in a structured form, as there are simply too many different ones.
Whitelisting characters that could appear in a person's name is the wrong way to go, if you ask me. Sure, [A-Za-z] is a fair starting point, but, as you said, you get problems with "European" names. So you map all the umlauts, circumflexes and those. What about Chinese names? Japanese? Indian? Hebrew? You're entering a battle against wind turbines.
If you absolutely must check the validity of someone's name, I'd suggest doing a modest blacklist of certain characters. Braces, mathematical characters, some punctuation and such might be safe to ignore. But I'd be cautious, if I were you.
It might be best to just accept whatever comes in. UTF-16 should be today's overkill character set; it should be adequate for some years to come.
Edit: As for your question about name length and the number of names: if you really want people to write their real and complete names, I guess the only foolproof answer to both questions would be "infinite". I can't whip out any real examples for human beings, but surely there are names analogous to the full native name for the city of Bangkok.
I don't think there's a definitive answer. After all, some people have names that can't even be expressed in UTF-16...
There are some odd people out there, who will give their kids the craziest of names, including putting in weird punctuation, accents that don't exist in their own language, etc.
However, you can place arbitrary restrictions on your database. If you want to you can insist on 7 bit ASCII names. It's slightly rude to users, but they'll live with it. It certainly makes searching easier.
My colleague's daughter is named Amélie. But even some (not all!) official British government web sites ("Please enter the name exactly as shown on the birth certificate") won't accept the unicode, so he has to use 'Amelie' instead.
Any character that can be represented by any multiple of eight bits (greater than zero) is a possible character for a person's name. Lengths of both names and encodings are arbitrary, so no upper bound should be considered.
Just make sure you sanitize your database inputs so little Bobby Drop-tables doesn't get ya.
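The usual way to "sanitize" here is not to build SQL out of strings at all: a parameterized statement copes with apostrophes in names like O'Malley and with injection attempts alike. A minimal JDBC sketch; the table and column names are made up.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PersonDao {
    // Parameterized SQL: the driver handles quoting, so nothing the user types
    // can break out of the value and into the statement itself.
    public void insertName(Connection conn, String fullName) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO people (full_name) VALUES (?)")) {  // illustrative table/column names
            ps.setString(1, fullName);
            ps.executeUpdate();
        }
    }
}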
On the issue of name fields, the WRONG answer is first name, middle initial, last name, etc. for many reasons.
Many people are known by their middle name, and formally use a first initial, middle name, last name format.
In some cultures, the surname is the first name, and the given name is the last name.
Multiple first and/or middle given names are getting more common. As Dour High Arch points out, the other extreme is people with only one word in their name.
In an object-oriented database, you would store a Name object with methods to return a directory-style or signature-style name; and the backing store would contain whatever data was necessary to support those methods.
I haven't yet seen a relational database model that improves on the model of two variable-length strings for directory-style and signature-style names.
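A minimal Java sketch of that two-string model (field and method names are illustrative); any richer structure can be layered on top only where the data actually supports it:
public class PersonName {
    private final String directoryStyle;  // e.g. "Doe, Jane Q."  (sortable / listing form)
    private final String signatureStyle;  // e.g. "Jane Q. Doe"   (display form)

    public PersonName(String directoryStyle, String signatureStyle) {
        this.directoryStyle = directoryStyle;
        this.signatureStyle = signatureStyle;
    }

    public String directoryStyle() { return directoryStyle; }
    public String signatureStyle() { return signatureStyle; }
}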
I'm making software for driving schools in the USA, so to me what matters most is what the state DMVs accept as a proper name on a driver's license. In my case, it would cause problems to allow names beyond what the DMV allows, even if such names were legal, because the same name must later be used for a driver's license.
From StackOverflow, I still hadn't confirmed the answer I needed. And I happen to know that in my state (Calif) they're using AS400's with software probably written in COBOL, and to the best of my knowledge, those only support an 8-bit character set. (Is it EBCDIC?) Anyway... Ugh.
So, I called the California DMV... Sure enough, their system allows A-Z and spaces and absolutely nothing else. Not even hyphens are allowed -- Hyphens are replaced with spaces. In fact, apparently just to be difficult, they only use capitals. And names such as "O'Malley" must be replaced with OMALLEY.
Leave it to government. I must say I'm thrilled not to be a developer working for DMV. (Although I could really use that kind of salary.)
It really depends on what the app is supposed to be used for.
Sure, in theory it's great if you allow every script on god's green earth to be used, but if the DB is also used by support staff, are they going to be able to handle names in Japanese, Hebrew and Thai script? Can your printer, if it's used to print postage labels?
You might add an extra field "Latin Transcription", but IMO it's really OK to restrict it to ISO-8859-1 characters - People who don't use Latin characters are by now so used to having to use a transcription that they don't mind it anymore, unless they're hardcore nationalists.
UTF-8 should be good enough. As for name fields, you'll want at minimum a first name and a last name.
Depending on the complexity of your name structure I could see:
First Name
Middle Initial/Middle Name
Last Name
Suffix (Jr. Sr. II, III, IV, etc.)
Prefix (Mr., Mrs., Ms., etc.)
What do you do when you have "The Artist Formerly Known as Prince"? That symbol he used is not a character in the Unicode set (AFAIK).
It's some levity, but at the same time, names are a rather broad concept that doesn't lend itself well to a structured format. In this case, something free-form might be most appropriate.

Coding in Other (Spoken) Languages

This is something I've always wondered, and I can't find any mention of it anywhere online. When a shop from, say Japan, writes code, would I be able to read it in English? Or do languages, like C, PHP, anything, have Japanese translations that they write?
I guess what I'm asking is does every single coder in the world know enough English to use the exact same reserved words I do?
Would this code:
If (i < size){
    switch
        case 1:
            print "hi there"
        default:
            print "no, thank you"
} else {
    print "yes, thank you"
}
display the exact same as I'm seeing it right now in English, or would some other non-English-speaking person see the words "if", "switch", "case", "default", "print", and "else" in their native language?
EDIT - yes, this is serious. I didn't know if different localizations of a language have different keywords, or if there are even different localizations at all.
If I understood well, the question actually is: "does every single coder in the world know enough English to use the exact same reserved words as I do?"
Well... English is not the subject here, but programming language reserved words. I mean, when I started about 10 years ago, I didn't have a clue of English, and still I was able to program simple things by learning the programming language, even when I did not know what the words meant (in English). As a matter of fact, this helped me to learn English.
For example, I knew that to do an "iteración" (iteration, of course) I had to write:
for( i = 0 ; i < 100 ; i++ ) {}
To me, the "for", the ";" and the "++" were simple foreign words or symbols. Later I learned that "for" meant "para", "while" meant "mientras", etc. But, in the meantime, I did not need to know English, what I needed was to know was "C".
Of course when I needed to learn more things, I had to learn English, for the documentation is written in that language.
So the answer is: no, I don't see if, while, for, etc. in my native language. I see them in English, but to me they didn't mean anything other than what they meant in the programming language at hand.
It's like the switch statement in bash: case .. esac. What is "esac"? For me, simply the end of the switch statement in bash.
I guess that's what we call "abstraction"
In the Java language some methods must be named (at least partially) using the English language because of the JavaBeans convention.
This convention requires that a property X be established via a pair of getX() and setX() methods. Here in French-Canada, where some developers are obliged to code in the French language this leads to the following travesty:
interface Foo {
    Color getCouleur();
    void setCouleur(Color couleur);
}
I'm having trouble finding references, but I'm reminded of three stories.
A Lisp hacker defends meaningless functions like "cdr" and "car" by comparing them to programming in your non-native language:
http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg01171.html
When Yukihiro Matsumoto ("Matz") started developing Ruby, he used English keywords even though he was writing all the documentation in Japanese! There was no English documentation for Ruby for a couple of years, and very few Americans were using the language. But now it's a world-class language, and the fact that it was born in Japan is only of historical interest. If the language had used keywords in hiragana, it would have had a much more difficult time gaining popularity.
I read an essay once -- maybe someone else can find it, Google is no help today -- that suggested that translating keywords was misguided because the words aren't actually English: they're jargon. Not only do (to use the examples above) "para" and "pour" not quite have the exact meaning that "for" has in English, to non-programmers the phrase "for loop" is gibberish. Even Americans have to learn a new meaning. So translating the words' superficial meaning into another language is more like making a cross-language pun than actually being helpful.
I really have not thought too much about programming in Japanese before, but here we go, using the question's code sample.
Using only the language statements in Japanese with the variables in English:
// In Japanese, it makes more sense to put the keywords/modifiers as
// postfix expressions rather than prefix expressions.
(i < size)か {
    (l[i])は {
        1だ:
            「もしもし。」を書く;
        省略時値:
            「いいえ、いいですよ。」を書く;
    }
} ない {
    「はい、ありがとうございます。」を書く;
}
As many people already pointed out, in most programming languages you just have to learn a few keywords, so it doesn't matter that much if they're in English (or a language other than yours, for that matter). It's just a symbol you associate with some construct. For instance, in VB you have "THEN", which in many C-style languages would be "{" and it doesn't make a big difference in readability (well, at least that's how I see it, being a Non-English native speaker).
But where things can sometimes get hairy, and where the choice of (natural) language matters is in naming identifiers. If the names of variables, functions, classes, etc, don't have a meaningful name for you because of a language barrier, following even the simplest code can be rather challenging.
I remember someone once gave me a short snippet of Actionscript taken from some blog. The names were in German and since I don't speak a word of that language, stuff could have been called var_123, var_562 or func_333 as well (and probably it would have been easier for me to remember the names or at least to have a chance of spelling them right without copying and pasting). Since this was a short, self-contained snippet, I used an online translator to give those vars and functions meaningful names in my native language (Spanish) and after that, everything was clear. The point is that the code was actually simple, but I was only able to make sense out of it without too much (unnecessary) extra effort just when I overcame the language barrier.
Since then, I've switched to using English for naming identifiers. Whether you like it or not, it's the "koine" for programming, engineering and generally technical stuff. Most of the APIs are written in English and so is most documentation (and probably the best resources you can find are in English as well). As a nice aside, it keeps your code more coherent with the code you're likely to be interacting with, and I think it tends to be more compact and succinct than other languages like Spanish (which otherwise would be my natural choice).
Of course, if you can't understand at least some English, the problem remains the same, so it's not a perfect solution. But, given a number of developers from many different countries, chances are that the common language for them to communicate (through code and of course other means) will be English. So, choosing English is perhaps the best option, even though it would be not the perfect solution to this problem.
The programming language defines the keywords and standard class names, and it's best practice to give user-defined types, variables and functions English names as well (as a non-native speaker, I can tell ;-).
So yes, if all is well, you'll be able to read the code.
However languages like Java and Perl allow the full Unicode set for identifiers, so if somebody writes his class names in Kanji, you'll likely have a problem.
Update: For Perl there's a joke module that allows you to write Perl in Latin. But it's really just that, a joke. Nobody uses things like this seriously.
Second Update: The idea of localized programming languages isn't that ridiculous. Excel's macro language is localized, but luckily it's stored in one canonical language (English) in the file, so the localization is just a layer on top of the normal thing. Such things only make sense for small "programs", for "real" programs it becomes hard to maintain.
Actually there are some Non-English-based programming languages (Wikipedia)
I'm Norwegian, but I've always used English for all code except output (ignoring some silly code from school). Actually I usually write everything in English and then translate it to my native language, using gettext (or something).
I am British, and a problem we often run into is the American/British spelling clash. This often occurs with programming-related terms such as Initialise() or Initialize(), Analyse() or Analyze(), etc. This can (and has) led to problems when trying to override methods, and it is sometimes difficult to spot.
Since the framework (in our case C#) was designed by Americans, we found that it is best to be consistent and use American spellings. We even adopt Color.
We have a mix of nationalities in our development teams and most non-British people tend towards American spellings naturally.
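A contrived Java illustration of how the spelling clash bites (class and method names are made up): the British spelling compiles happily but never overrides the base method, which is why sticking to the framework's spelling, and annotating intended overrides with @Override, pays off.
class Widget {
    public void initialize() { System.out.println("base initialize"); }
}

class BritishWidget extends Widget {
    // Compiles fine, but does NOT override initialize(); adding @Override here
    // would turn the silent mismatch into a compile-time error.
    public void initialise() { System.out.println("never called polymorphically"); }
}

class SpellingDemo {
    public static void main(String[] args) {
        Widget w = new BritishWidget();
        w.initialize();  // prints "base initialize" - the British spelling is ignored
    }
}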
AppleScript was once available in French and Japanese dialects. I do not know why it was withdrawn.
Taking this to the next level, what about being able to substitute symbols?
After seeing languages like Brainf**k and Whitespace I thought of making a language like this: it'd be identical to C except you use closing braces to open, opening braces to close, swap the meanings of + and -, * and /, ; and :, > and <, etc.
The concept is nothing more than a gimmicky altered C compiler. But, like thinking of keywords differently, it challenges you to rethink some basic assumptions if you've never thought of such things before. Ex:
int foo)int i, char c( }
    int six = 2 / 3:
    int two = six + 4:
    if )i > 0( }
        printf)"i is negative"(:
    {
{
I'm in a French team developing a software system in C#. Despite the fact that the programming language keywords are ostensibly English, I imagine that you would have great difficulty reading the code, as all the function names, variables, code comments, database tables and columns, technical specifications, protocols and so on are in French, including those lovely accented characters ç, é, è, ù, etc. I'm not even certain the system would run elsewhere due to localisation bugs, such as relying on the comma being the default decimal separator.
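The decimal-separator point deserves a concrete illustration. The system above is C#, so this Java sketch is only an analogy, but the class of bug is the same: the same user input parses to two very different numbers depending on which locale the code assumes.
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class DecimalSeparatorDemo {
    public static void main(String[] args) throws ParseException {
        String price = "1234,56";  // as a French user would type it

        // Locale-aware parsing understands the comma as the decimal separator...
        NumberFormat fr = NumberFormat.getNumberInstance(Locale.FRANCE);
        System.out.println(fr.parse(price));  // 1234.56

        // ...while assuming the US format silently mangles the value.
        NumberFormat us = NumberFormat.getNumberInstance(Locale.US);
        System.out.println(us.parse(price));  // 123456
    }
}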
Otherwise, WinDev is a popular programming platform in France, and its programming language WLanguage has keywords in either French or English; see an example here: link text
The only language I saw localized is Excel with its macros. If you try to sum a column using an Italian version of Office you have to write SOMMA(A1:A10) and not SUM. That's a shame.
By the way, just because it's fun, here's how your code would look with Italian keywords:
se (i < size){
    commuta
        caso 1:
            stampa "hi there"
        normalmente:
            stampa "no, thank you"
} altrimenti {
    stampa "yes, thank you"
}
I've seen VBA translated into Spanish-like commands. It's one of the ugliest things I've ever seen. I would be ashamed to have something like this on my computer.
PS: I happen to think that Spanish is a much nicer language than English, but translating is WRONG.
Well, as others have pointed out, the keywords and system calls would likely remain in English.
However, understanding the keywords of the language is only a small part in understanding the code. Variable names, function names and comments all risk being in the native language of the author.
Edit: I just flashed back to my youth, when I went into the mapping tables of my TRS-80's built-in BASIC to switch the keywords to French. I could change all the keywords, but I couldn't make any of them larger. Made for funny programs.
Don't make fun of this. Some years ago, Microsoft had announced G# (German Sharp) - C# with German keywords and API. Of course, it was an April Fools joke, but the entire site about that looked so real and professional (and was on microsoft.com). Scary.
At work, we use two field bus systems, both developed in German-speaking countries, which have a scary mix of German and English for identifiers, including some lovely false friends. It's a mess.
No, English keywords and identifiers are fine. Though some might argue if it should be Color or Colour :)
In several VBA projects I've worked on (yes, very early in my career) we had to detect the version of Office installed on the user's machine and change the formulas used in the spreadsheets accordingly.
As I program in Portuguese, "SUM" would have to be translated into "SOMA", and so on and so forth. I just can't imagine the work needed to make this happen in several languages. Has anyone else suffered from this problem?
There are some languages that have translated keywords. Excel formulas, for example. If you write some calculations in a spreadsheet, this will be in your language.
Fortunately, this is not a general practice, and even non-English speakers like me thank God that there is a standard language for keywords:
It's easier to share your work.
It prevents documentation from becoming an even bigger nightmare than it already is.
English words and sentences are usually short and syntactically pragmatic. In literature, Latin languages are much more beautiful, but for technical stuff, English rocks.
And where to stop? Can you imagine a C in ancient Greek?
Keywords must stay in one language, and well, it started with English; let it stay that way. It could have been worse (an Asian language?). And so we have to write methods and comments in English. OK, more work for us, but at least the international code base stays consistent.
There is, however, one case where using native-language method names and comments can be a good practice: in third-world countries. I'm going to Senegal in a few months to manage a Django project. Senegal has a huge illiteracy rate, so it's already great that they spend energy on improving their programming knowledge. French is the native language there, so it would be inefficient to force them to learn computing AND a new tongue at the same time.
BTW, that would be your code with French keywords:
Si (i < taille) {
    cas par cas :
        cas 1:
            afficher "salut"
        défaut:
            afficher "non merci"
} sinon {
    afficher "oui, merci"
}
Note that translating the keywords has nothing to do with translating the strings. Of course, we have "hi, there" translated into our language. European coders even tend to use i18n much more than Americans, so their services can reach a wider audience.
Generally speaking, most programmers adapt to the English form.
I learned to program when I was 7 years old and only spoke Hebrew (which is right-to-left), with no English, which made it quite a fascinating experience.
The problem you would usually get is with documentation, variables, and function names. I have seen my share of variables in other languages using the English alphabet.
The only language I'm familiar with that actually got translated was good old Logo (still amazing to this day).
When I was a kid we went to France, and in a museum we went to, I remember finding a display which showed you how to write computer programmes. The language was some kind of BASIC variant and I distinctly remember it using POUR instead of FOR, and so on. I was 7 years old and had only just learned BASIC, and it seemed completely natural to me that the French would have their own dialect like this!!
I guess it may have been LSE that I saw?
Filemaker's scripting language is localized. The scripts (and data!) are stored in a terrible "sorta canonical" form.
So if you write a script in the American version, then open it up in the French version, all the keywords and built-in function names will be in French. But why won't it run?! Aha! The French version uses "," as the decimal point, and therefore to avoid ambiguity uses ";" to separate function arguments -- where the American version uses "." and "," respectively. This conversion you have to do yourself.
So you work through the incredibly bad script editing interface (you can't write scripts as text files) to fix all these things. It runs! Great! The results are all wrong! Oh no! Aha! The Jan-7-2004 date you entered in the American version is being interpreted as July-1-2004 -- apparently dates are not only displayed but stored in locale-dependent order. Am I kidding you? No.
[Note: Filemaker 8 and 9 may be sane -- I only ever worked with 3 - 7.]
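The general lesson from that story: store and exchange dates in one canonical, locale-independent form and localize only at the display layer. A small Java sketch of the idea (nothing FileMaker-specific about it):
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class DateStorageDemo {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2004, 1, 7);

        // Store/exchange in an unambiguous canonical form (ISO-8601)...
        String stored = date.format(DateTimeFormatter.ISO_LOCAL_DATE);  // 2004-01-07

        // ...and localize only when displaying to the user.
        String us = date.format(DateTimeFormatter.ofPattern("MMM d, yyyy", Locale.US));      // Jan 7, 2004
        String fr = date.format(DateTimeFormatter.ofPattern("d MMMM yyyy", Locale.FRANCE));  // 7 janvier 2004

        System.out.println(stored + " | " + us + " | " + fr);
    }
}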
Your question is an interesting one with regard to Perl, because its syntax is designed to follow (English) natural language. I wonder if that makes it more difficult for non-English speakers...
Of course, Perl and Perlers refuse to play by conventional rules. Mad scientist Damian Conway wrote the Lingua::Romana::Perligata module, which uses the black magic of source filters to allow you to write Perl in Latin!
Here in Australia, we still need to spell colour as "color".
However, I do find it annoying when other (Australian) developers, working on an Australian project, decide that internal variable names need to be spelt the American way.
It would be pointless, IMHO, to i18n a language syntax. It would just kill any sort of portability.
The only exception is educational languages, such as LOGO. They were designed for ease of learning, so portability is not an issue.
I read a lot of code, but the problem is always in variable/method names and comments. If the authors comment their code in their own language, using special characters such as Japanese or Cyrillic, we are in trouble! But the keywords, I think, will stay in English as they are.
In Italian:
se (i < dimensione){
    scegli
        caso 1:
            stampa "ciao"
        mancante:
            stampa "no, grazie"
} altrimenti {
    stampa "sì, grazie"
}
To confirm the worries of some previous posters: I've seen Fortran code with a macro include to translate all the keywords from English to French. Allow me not to continue on this.
I also had to work with code simultaneously containing identifiers in Italian, German, English and French, not only because it was developed in many different places, but also because the main developer thought it was fun and that it helped him avoid duplicating identifier names (of course, with a routine 2000 lines long...).
I think WordBasic was localized. WordBasic was used to write macros in Word before VBA came along.
If I remember correctly, only WordBasic written in the English version would execute on all localized versions. If you wrote a Dutch version, you could only execute it on a Dutch Word.

What things should be localized in an application

When thinking about what areas should be taken into account for a localized version of an application a number of things pop up right away:
Text display
Date and time
Units
Numbers and decimals
User input formats
LeftToRight support
Dialog and control sizes
Are there other things/areas to remember or keep in mind when building a localizable application? Are there any resources out there which provide a listing of best practices not just for text localization but for all things around localization?
After Kudzu's talk about l10n, I left the room with way more questions than I had before and none of my old questions answered. But it gave me something to think about, and it brought the message "it depends on how far you can/want to go" across.
Translate text bodies along with the aforementioned things
Test all your controls for length/alignment in LTR/RTL, TTB (top-to-bottom), BTT, and all their combinations.
Look out for special characters and encodings
Look out for combinations of different alignments (LTR, RTL, TTB, BTT) and how they affect punctuation and quotation marks.
Align controls according to text alignment (Hebrew Windows has its Start menu on the right)
Take string lengths into account. They can overflow in other languages.
Put labels at the correct side of icons (LTR, TTB etc)
Translate language selection controls
No texts in images (can't be translated)
Translate EVERYTHING (headers, logos, some languages use different brand names, product names etc)
Does the region have a 24:00 or a 00:00 (changes the AM/PM that goes with it too)
Does the region use AM/PM or the 24:00 system
What calendar system are they using
What digit is for what part of the date (day, month, year in all its combinations)
Try to avoid "copying [number] files" equivalents. Some regions have different rules about changing words according to quantities. (This is an extremely complicated topic that I will elaborate on if desired)
Translate sentences, not words. Syntax rules are too complicated to put in your business logic (see the resource bundle sketch after this list)
Don't use flags for regions. Languages != countries
Consider what languages / dialects you can support (e.g. India has a gazillion of languages)
Encoding
Cultural rules (some Western images displaying businesswomen can be nearly offensive in some other cultures)
Look out for language generalizations (e.g. boot(UK) != boot(US))
Those are the ones from the top of my head. The list just went on and on...
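For the "translate sentences, not words" and "translate everything" items, the usual mechanism is to externalize whole phrases per locale rather than assembling them in code. A minimal Java ResourceBundle sketch; the bundle name, key, and property files are hypothetical.
import java.util.Locale;
import java.util.ResourceBundle;

public class GreetingDemo {
    public static void main(String[] args) {
        // Hypothetical bundle files on the classpath:
        //   messages.properties     -> greeting=Hello
        //   messages_fr.properties  -> greeting=Bonjour
        //   messages_he.properties  -> greeting=\u05E9\u05DC\u05D5\u05DD
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.FRENCH);
        System.out.println(bundle.getString("greeting"));  // Bonjour
    }
}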
Don't forget the overhead of converting all documentation and help files.
A couple of hints from my J2ME app days:
Don't translate separate words, translate whole phrases, even if there are matching repetitions. You'll later have to translate to a language where words have to be modified differently in different contexts, and you may end up with an analog of "color: greenish"
Right-to-left support includes numbering of lists, alignment, and alternative scroll bars
Arabic languages write the same letter differently based on surrounding letters. You can't just print a string from a character buffer; you'll need a special control to output those, or support from your platform
Alphabetical sorting is HARD. No native Chinese speaker could ever explain the rules to me, but they will always spot wrongly sorted words. There appear to be a number of ways to sort Chinese, and I guess other languages may have the same problem (see the Collator sketch below)
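On the sorting point: most platforms expose locale-aware collation so you don't have to encode the rules yourself. A small Java sketch using java.text.Collator, with French as an easy-to-see example; for Chinese you additionally have to pick which convention to sort by (pinyin, stroke count, etc.), which in practice usually means an ICU collator variant.
import java.text.Collator;
import java.util.Arrays;
import java.util.Locale;

public class SortingDemo {
    public static void main(String[] args) {
        String[] words = { "Zebra", "apple", "Éclair" };

        // Naive sort by raw code point puts uppercase and accented letters in odd places:
        Arrays.sort(words);
        System.out.println(Arrays.toString(words));  // [Zebra, apple, Éclair]

        // Locale-aware sort compares letters first, accents and case second:
        Arrays.sort(words, Collator.getInstance(Locale.FRENCH));
        System.out.println(Arrays.toString(words));  // [apple, Éclair, Zebra]
    }
}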
