Are there any good tools for iOS pseudo localization? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
I'd like to start seeing how comprehensive we've been in our iOS code at localizing strings. We're not ready to go to translators yet, but I'd like to start testing with pseudo localization. Automating this process in a Localizable.strings file should be easy enough, but I can't seem to find any tools that do it. Frankly, I'd be satisfied with a script that just changed all my strings to "NOT ENGLISH!" if such a thing exists.

You can achieve this with the Translate Toolkit.
First you need to convert the .strings file to PO using the prop2po converter:
$ prop2po Localizable.strings en.po
This will create a PO file with the strings of the Localizable.strings file as source strings (in this case I'm using English as a source).
Once you have the PO file, rewrite it using podebug in the desired rewrite format.
$ podebug --rewrite=unicode en.po en_rewritten.po
Finally convert it back to the .strings format (note that you need to pass the original Localizable.strings file as a template):
$ po2prop en_rewritten.po rewritten.strings -t Localizable.strings
The resulting file will look something like this:
"Account: %#" = "Ȧƈƈǿŭƞŧ: %#";
"Add command" = "Ȧḓḓ ƈǿḿḿȧƞḓ";
"Add connection." = "Ȧḓḓ ƈǿƞƞḗƈŧīǿƞ."

I came across two solutions that haven't yet been mentioned here:
A free app called Pseudolocalizer on the Mac App Store that generates pseudolocalized .strings files based on your source code (drag-and-drop). It generates strings similar to those the OP provided in the question.
https://itunes.apple.com/us/app/pseudolocalizer/id503026674?mt=12
The online translation service Babble-on offers free pseudolocalized .strings files based on existing .strings files (other options available). They have the extra option of generating strings longer than the original English to test your GUI.
http://www.ibabbleon.com/pseudolocalization.html

Although Translate Toolkit can provide a solution, I looked for a simpler approach using a bash script.
Create a [changesDictionary.txt] file (see the format at the end of this post) and run the following script with the language file as a parameter:
# This script translates an iOS strings file into a pseudo language for testing purposes.
# It creates a sed search-and-replace file based on [changesDictionary.txt].
# The loop runs across the input strings file (e.g. myFile.strings)
# and replaces the second string of each pair with the dictionary values.
# Since the strings file starts with a BOM (http://en.wikipedia.org/wiki/Byte_order_mark)
# the input is converted from UTF-16 to UTF-8 and back again at the end.
sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' changesDictionary.txt > changesDictionary.sed
FILENAME=$1
while read -r; do
    if [[ $REPLY = '/*'* ]] ; then      # pass comment lines through unchanged
        echo "$REPLY"
    elif [[ $REPLY = '' ]] ; then       # pass empty lines through unchanged
        echo "$REPLY"
    elif [[ $REPLY = '"'* ]] ; then     # rewrite the value side of each entry
        changes2=$(echo "$REPLY" | cut -d= -f2 | sed -f changesDictionary.sed)
        changes1=$(echo "$REPLY" | cut -d= -f1)
        echo "$changes1=$changes2"
    else
        echo "$REPLY"
    fi
done < <(iconv -f UTF-16 -t UTF-8 "$FILENAME") | iconv -f UTF-8 -t UTF-16 > "$FILENAME.new"
The script looks for a [changesDictionary.txt] file in the following format:
"a" = "á"
"b" = "β"
"c" = "ç"
"d" = "δ"
"e" = "è"
"f" = "ƒ"
"g" = "ϱ"
"h" = "λ"
"i" = "ï"
"j" = "J"
"k" = "ƙ"
"l" = "ℓ"
"m" = "₥"
"n" = "ñ"
"o" = "ô"
"p" = "ƥ"
"q" = "9"
"r" = "ř"
"s" = "ƨ"
"t" = "ƭ"
"u" = "ú"
"v" = "Ʋ"
"w" = "ω"
"x" = "ж"
"y" = "¥"
"z" = "ƺ"
"\ñ" = "\n"
"$δ" = "$d"
"$ï" = "$i"
You can use this example or create your own. Please note the last 3 change strings in the file: they restore end-of-line markers and format parameters to their regular state. I chose this approach to simplify the script (I think the performance is not optimized).
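For comparison, here is a rough Python sketch of the same character-map idea. The regular expression, the file handling, and the protection of format specifiers are my assumptions, not part of the original script:

```python
import codecs
import re

# Character map taken from the changesDictionary above.
CHAR_MAP = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                         "áβçδèƒϱλïJƙℓ₥ñôƥ9řƨƭúƲωж¥ƺ")

# Matches a `"key" = "value";` line; group 2 is the value to rewrite.
ENTRY = re.compile(r'^(\s*".*?"\s*=\s*")(.*)("\s*;\s*)$')

def pseudolocalize_line(line):
    m = ENTRY.match(line)
    if not m:
        return line  # comments and blank lines pass through unchanged
    head, value, tail = m.groups()
    # Protect format specifiers such as %@ and %d from being rewritten.
    pieces = re.split(r'(%[@a-zA-Z0-9.]*)', value)
    rewritten = "".join(p if p.startswith("%") else p.translate(CHAR_MAP)
                        for p in pieces)
    return head + rewritten + tail

def pseudolocalize_file(src, dst):
    # .strings files are commonly UTF-16, as the bash script above notes.
    with codecs.open(src, "r", "utf-16") as fin, \
         codecs.open(dst, "w", "utf-16") as fout:
        for line in fin:
            fout.write(pseudolocalize_line(line))
```

Running, e.g., pseudolocalize_file("Localizable.strings", "Localizable.strings.new") would produce a rewritten copy next to the original.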

We provide pseudo localization as part of our service at Tethras (www.tethras.com). Pseudo localization is free. We accent all of the characters in your strings and extend the length of the text by 30%. This will help you test not only for hard coded strings, but will also let you see what happens to your layouts due to text expansion during translation.
Examples:
Plain Text
Wè prôvïdè psèúdô lôçálïzátïôñ ás párt ôƒ ôúr sèrvïçè át Tèthrás
(www.tèthrás.çôm). ôñè twô thrèè ƒôúr ƒïvè sïx Psèúdô lôçálïzátïôñ ïs
ƒrèè. ôñè twô thrèè Wè áççèñt áll ôƒ thè çháráçtèrs ïñ ¥ôúr strïñgs
áñd èxtèñd thè lèñgth ôƒ thè tèxt b¥ 30%. ôñè twô thrèè ƒôúr ƒïvè sïx
Thïs wïll hèlp ¥ôú tèst ñôt ôñl¥ ƒôr hárd çôdèd strïñgs, bút wïll álsô
lèt ¥ôú sèè whát háppèñs tô ¥ôúr lá¥ôúts dúè tô tèxt èxpáñsïôñ dúrïñg
tráñslátïôñ. ôñè twô thrèè ƒôúr ƒïvè sïx sèvèñ èïght ñïñè tèñ
Localizable.strings
"Bring All to Front" = "Brïñg Áll tô ƒrôñt ôñè twô";
"Hide" = "Hïdè 12";
"Quit" = "Qúït 12";
"Hide Others" = "Hïdè Óthèrs ôñè ";
Kudos on wanting to test the localizability of your app prior to translation. This is going to save you a lot of time and energy during the actual translation process.

You can use the genstrings tool provided by Apple. It's all explained in the strings section of the Resource Programming Guide

Related

R/exams unicode char in *.Rnw question files are not properly displayed: é displayed as <U+00E9> in final PDF

I am struggling to produce an exam sheet in French using exams2nops. There are accents in the text provided in the intro and title arguments of this function and also in the Rnw files containing the exercises. The former are correctly displayed in the resulting PDF, but the latter are not; for example, é from an Rnw file is displayed as <U+00E9>.
The call to exams2nops looks like this:
exams2nops(file=myexam, n = N.students, dir = '.',
name = paste0('exam-', exam.date),
title = "Examen écrit",
course = course.id,
institution = "",
logo = paste(exams.dir, 'input/logo.jpg', sep='/'),
date = exam.date,
replacement = TRUE,
intro = intro,
blank=round(length(myexam)/4),
duplex = TRUE, pages = NULL,
usepackage = NULL,
language = "fr",
encoding = "UTF-8",
startid = 1,
points = c(1), showpoints = TRUE,
samepage = TRUE,
twocolumn = FALSE,
reglength = 9,
header=NULL)
Note that "Examen écrit" is correctly displayed in the final PDF, the problem is with the accent in the Rnw files. The function call yields no error.
The *.tex files generated by exams2nops already have the problem. For example, the sentence 'Quarante patients ont été inscrits' in the original Rnw file becomes 'Quarante patients ont <U+00E9>t<U+00E9> inscrits' in the .tex file.
I use exams_2.4-0 with R 4.2.2 with TeXShop 4.70 on OSX 11.6.
I checked that the Rnw files are utf-8 encoded, for example:
$ file -I question1.Rnw
question1.Rnw: text/x-tex; charset=utf-8
It seems they are utf-8 encoded, indeed. These files were translated with DeepL or Google Translate, then edited in Emacs.
I tried setting the encoding parameter of exams2nops to latin-1. It did not help.
Any idea?
The problem disappeared after setting R 'locales' properly, a recurrent problem with macOS R installs. The symptom is:
During startup - Warning messages:
1: Setting LC_CTYPE failed, using "C"
2: Setting LC_COLLATE failed, using "C"
3: Setting LC_TIME failed, using "C"
4: Setting LC_MESSAGES failed, using "C"
5: Setting LC_MONETARY failed, using "C"
at startup. This thread explains how to fix it: Installing R on Mac - Warning messages: Setting LC_CTYPE failed, using "C".
I'm collecting a few further comments here in addition to the existing answer:
The only encoding (beyond ASCII) supported by R/exams, starting from version 2.4-0, is UTF-8. Support for other encodings like latin1 etc. has been discontinued.
As only UTF-8 is supported the encoding does not have to be specified in R/exams function calls anymore (as still might be advised in older tutorials).
To leverage this support of UTF-8, R has to be configured with a suitable locale. A "C" locale (see the answer by @vdet) is not sufficient.
When using R/LaTeX (Rnw) exercises all issues with encodings can also be avoided entirely by using LaTeX commands for special characters, e.g., {\'e}t{\'e} instead of été. The latter is of course more convenient but the former can be more robust, especially when working with teams of instructors living on different operating systems with different locale settings.
When using LaTeX commands instead of special characters in R strings (as opposed to the exercise files), then remember that the backslash has to be escaped. For example, the argument title = "Examen écrit" becomes title = "Examen {\\'e}crit".

Is there a script that can extract particular link from txt and write it in another txt file?

I'm looking for a script (or if there isn't, I guess I'll have to write my own).
I wanted to ask if anyone here knows of a script that can take a txt file with n links (let's say 200). I need to extract only links that contain particular characters, say links containing "/r/learnprogramming". I need the script to get those links and write them to another txt file.
Edit: Here is what helped me: grep -i "/r/learnprogramming" 1.txt >2.txt
You can use Ajax to read the .txt file using jQuery:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
jQuery(function($) {
    console.log("start")
    $.get("https://ayulayol.imfast.io/ajaxads/ajaxads.txt", function(wholeTextFile) {
        var lines = wholeTextFile.split(/\n/),
            randomIndex = Math.floor(Math.random() * lines.length),
            randomLine = lines[randomIndex];
        console.log(randomIndex, randomLine)
        $("#ajax").html(randomLine.replace(/#/g, "<br>"))
    })
})
</script>
<div id="ajax"></div>
If you are using linux or macOS you could use cat and grep to output the links.
cat in.txt | grep "/r/learnprogramming" > out.txt
Solution provided by OP:
grep -i "/r/learnprogramming" 1.txt >2.txt
Since you did not provide the exact format of the document, I assume the links are separated by newline characters. In this case the code is pretty straightforward in Python/awk, since you can iterate over file.readlines() and print only those lines that match your pattern (either by checking pattern in line, or by using a regex if the pattern is more complex). To store the links in a new file, simply redirect stdout to a new file like this:
python script.py > links.txt
The solution above works even if the links are separated by an arbitrary symbol s: first read the file into a single string, then split it over s. I hope this helps.
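Since the answer references a script.py without showing it, here is a minimal sketch of what it could look like, assuming one link per line (the file name and the pattern are just examples):

```python
import sys

# Pattern and default input file name are examples, not from the question.
PATTERN = "/r/learnprogramming"

def matching_lines(lines, pattern=PATTERN):
    # Keep only the lines that contain the pattern.
    return [line for line in lines if pattern in line]

if __name__ == "__main__":
    with open(sys.argv[1] if len(sys.argv) > 1 else "1.txt") as f:
        for link in matching_lines(f):
            print(link.rstrip("\n"))
```

Running python script.py 1.txt > 2.txt would then mirror the grep one-liner the OP settled on.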

How can I automatically check for missing localizations in Xcode?

SourceFile.m
NSLocalizedString(@"Word 1", @"");
NSLocalizedString(@"Word 2", @"");
de.lproj/Localizable.strings
"Word 1" = "Wort 1";
"Word 2" = "Wort 2";
fr.lproj/Localizable.strings
/* Missing Word 1 */
"Word 2" = "Mot 2";
Is there a script or a compiler setting that will check that all localised strings are translated in all supported locales?
You can use diff on the list of keys to see what's missing
Here's a shell script (let's call it keys.sh) to print out the keys of a given .strings file, sorted to stdout:
#!/bin/sh
plutil -convert json "$1".lproj/Localizable.strings -o - | ruby -r json -e 'puts JSON.parse(STDIN.read).keys.sort'
You can then use it combined with the <(cmd) shell syntax to compare keys between two localisations; for example to compare your Base.lproj and fr.lproj:
diff <(keys.sh Base) <(keys.sh fr)
Go under "Edit Scheme > Options" and check the "Show non-localized strings" box.
When you Build and Run, you'll be able to see warnings in the Xcode console.
If you localize a string like below:
lblTitle.text = NSLocalizedString("Lorem Ipsum", comment: "")
then Xcode should throw an error message on terminal like below:
ERROR Lorem Ipsum not found in table Localizable of bundle CFBundle ...
For storyboards, Xcode will throw a similar error.
Not the perfect solution for your problem, but you could use the following plugin to check localization strings while coding.
https://github.com/questbeat/Lin
Also, I like to export the localization string table from an Excel file or Google Sheet as a practice. This makes things easier and reduces a lot of mistakes.
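As a hedged sketch of that spreadsheet workflow: export the sheet as CSV with one key column and one value column per language, then generate the .strings file from it (the column names and the escaping rule here are assumptions, not from the answer):

```python
import csv
import io

def csv_to_strings(csv_text, key_col="key", value_col="en"):
    # Turn each CSV row into a `"key" = "value";` line.
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        escaped = row[value_col].replace('"', '\\"')
        out.append(f'"{row[key_col]}" = "{escaped}";')
    return "\n".join(out)
```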
Check my example on how you can achieve it
To sum up: you can create a Run Script phase under Build Phases in which you execute a bash script, as suggested by @AliSoftware, to compare your Localizable.strings files. If some keys are missing from one file compared to the other, you can either output those missing keys as errors and stop the build, or output them as warnings and let the build continue.
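A minimal sketch of what such a comparison script could report, assuming plain UTF-8 `"key" = "value";` files (real projects may need a plutil conversion first, as in the diff answer above):

```python
import re

# Matches the key in a `"key" = "value";` line, allowing escaped quotes.
KEY = re.compile(r'^\s*"((?:[^"\\]|\\.)*)"\s*=')

def strings_keys(text):
    return {m.group(1) for line in text.splitlines() if (m := KEY.match(line))}

def missing_keys(base_text, localized_text):
    # Keys present in the base file but absent from the localization.
    return sorted(strings_keys(base_text) - strings_keys(localized_text))
```

A Run Script phase could print each missing key prefixed with "error:" or "warning:" so it shows up in Xcode's build log.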

How to handle partially translated Localizable.strings file

I have a Localizable.strings (base) file with, for example, the following strings:
"hello_world" = "Hello World";
"hello_world2" = "Hello World";
It is being translated to multiple languages. So I also have the following:
Localizable.strings (Chinese (Simplified))
Localizable.strings (Russian)
and etc.
Now the problem is that as the project grows, we have more and more new strings being added, but we don't want to wait for the translators to fully translate all the strings before we ship the app. Therefore, we end up having this Localizable.strings (Chinese (Simplified)) where hello_world2 is missing:
"hello_world" = "你好世界";
By default, the untranslated string will be shown as the key "hello_world2" in the app. The question: is there a way to say, if a translation for the key "hello_world2" doesn't exist, use the base translation instead?
Additional Info:
I know that for storyboard files, if the file is partially translated, it will just use the base translation for untranslated strings. However, the same (nice) behaviour doesn't happen for other general .strings files. I'm really looking for an elegant way to solve this issue.
I find the easiest way to cope with this is to use the default string as the key. So you might have:
Base:
"Hello World" = "Hello World";
"Hello World 2" = "Hello World 2";
Chinese:
"Hello World" = "Hello World in Chinese";
If you haven't made any translations you just need to have at least one placeholder string in the file to avoid a compilation error, e.g. for a double space:
Russian:
/* Placeholder */
" " = " ";
It also makes the translator's job much easier!
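If you would rather keep distinct keys, one possible workaround (a build-step sketch, not a built-in Xcode feature) is to merge the base file into each partial translation, so missing keys fall back to the base value instead of showing up as raw keys:

```python
import re

# Matches a `"key" = "value";` line, allowing escaped quotes in the key.
ENTRY = re.compile(r'^\s*"((?:[^"\\]|\\.)*)"\s*=\s*"(.*)"\s*;')

def parse_strings(text):
    return {m.group(1): m.group(2)
            for line in text.splitlines()
            if (m := ENTRY.match(line))}

def fill_missing(base_text, localized_text):
    merged = parse_strings(base_text)
    merged.update(parse_strings(localized_text))  # existing translations win
    return "\n".join(f'"{k}" = "{v}";' for k, v in sorted(merged.items()))
```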

Help with grep in BBEdit

I'd like to grep the following in BBedit.
Find:
<dc:subject>Knowledge, Mashups, Politics, Reviews, Ratings, Ranking, Statistics</dc:subject>
Replace with:
<dc:subject>Knowledge</dc:subject>
<dc:subject>Mashups</dc:subject>
<dc:subject>Politics</dc:subject>
<dc:subject>Reviews</dc:subject>
<dc:subject>Ratings</dc:subject>
<dc:subject>Ranking</dc:subject>
<dc:subject>Statistics</dc:subject>
OR
Find:
<dc:subject>Social web, Email, Twitter</dc:subject>
Replace with:
<dc:subject>Social web</dc:subject>
<dc:subject>Email</dc:subject>
<dc:subject>Twitter</dc:subject>
Basically, when there's more than one category, I need to find the comma and space, add a linebreak and wrap the open/close around the category.
Any thoughts?
Wow. Lots of complex answers here. How about find:
,
(there's a space after the comma)
and replace with:
</dc:subject>\r<dc:subject>
Find:
(.+?),\s?
Replace:
\1\r
I'm not sure what you meant by “wrap the open/close around the category” but if you mean that you want to wrap it in some sort of tag or link just add it to the replace.
Replace:
\1\r
Would give you
Social web
Email
Twitter
Or get fancier with Replace:
\1\r
Would give you
Social web
Email
Twitter
In that last example you may have a problem with the “Social web” URL having a space in it. I wouldn't recommend that, but I wanted to show you that you could use the \1 backreference more than once.
The Grep reference in the BBEdit Manual is fantastic. Go to Help->User Manual and then Chapter 8. Learning how to use RegEx well will change your life.
UPDATE
Weird, when I first looked at this it didn't show me your full example. Based upon what I see now you should
Find:
(.+?),\s?
Replace:
<dc:subject>\1</dc:subject>\r
I don't use BBEdit, but in Vim you can do this:
%s/<dc:subject>[^<]\+<\/dc:subject>/\=substitute(submatch(0), ",[ \t]*", "<\/dc:subject>\r<dc:subject>", "g")/g
It will handle multiple lines and tags whose content spans line breaks. It also handles lines with multiple tags, but won't always get the newline right between the close and start tags.
If you post this to the google group vim_use and ask for a Vim solution and the corresponding perl version of it, you would probably get a bunch of suggestions and something that works in BBEdit and then also outside any editor in perl.
Don
You can use sed to do this either, in theory you just need to replace ", " with the closing and opening <dc:subject> and a newline character in between, and output to a new file. But sed doesn't seem to like the html angle brackets...I tried escaping them but still get error messages any time they're included. This is all I had time for so far, so if I get a chance to come back to it I will. Maybe someone else can solve the angle bracket issue:
sed s/, /</dc:subject>\n<dc:subject>/g file.txt > G:\newfile.txt
Ok I think I figured it out. Basically had to put the replacement text containing angle brackets in double quotes and change the separator character sed uses to something other than forward slash, as this is in the replacement text and sed didn't like it. I don't know much about grep but read that grep just matches things whereas sed will replace, so is better for this type of thing:
sed s%", "%"</dc:subject>\n<dc:subject>"%g file.txt > newfile.txt
You can't do this via normal grep. But you can add a "Unix Filter" to BBEdit doing this work for you:
#!/usr/bin/perl -w
while(<>) {
    my $line = $_;
    $line =~ /<dc:subject>(.+)<\/dc:subject>/;
    my $content = $1;
    my @arr;
    if ($content =~ /,/) {
        @arr = split(/,/, $content);
    }
    my $newline = '';
    foreach my $part (@arr) {
        $newline .= "\n" if ($newline ne '');
        $part =~ s/^\s*(\S*(?:\s+\S+)*)\s*$/$1/;
        $newline .= "<dc:subject>$part</dc:subject>";
    }
    print $newline;
}
How to add this Unix filter to BBEdit is explained in the "Installation" part of this URL: http://blog.elitecoderz.net/windows-zeichen-fur-mac-konvertieren-und-umgekehrt-filter-fur-bbeditconverting-windows-characters-to-mac-and-vice-versa-filter-for-bbedit/2009/01/
