NSIS language selection items and special characters - localization

I am using NSIS with the Modern UI (MUI) to create an installer. The problem is that when I do something like this:
!insertmacro MUI_LANGUAGE "Czech"
!insertmacro MUI_LANGUAGE "Slovak"
... the language selection shown during installation lists the language names without their special characters.
So for Czech I get "Cesky" instead of "Česky". Is there any way to solve this?

The language name displayed by MUI/LangDLL depends on the NSIS version:
For the official NSIS 2.46 you can probably edit Czech.nsh. (This should work correctly as long as you don't define MUI_LANGDLL_ALLLANGUAGES; if you do, then "Č" might be displayed as something else.)
For the Unicode fork, compiling as Unicode should cause no problems.
For NSIS 3 (if you compiled it from SVN yourself), Unicode should cause no problems, while ANSI is currently restricted to ASCII only for the language name.

Related

How to programmatically set application name in Japanese?

Currently I am trying to set the application name using
net.rim.blackberry.api.homescreen.HomeScreen.setName("これはある");
but it throws an IllegalArgumentException.
Can anyone provide the solution?
I am using Blackberry JDE 5.0.
This is probably a string encoding problem. Try
new String(new String("これはある").getBytes("UTF-16BE"), "UTF-16BE");
It's not pretty but I think that will work.
Here's a link to the BlackBerry String documentation: http://www.blackberry.com/developers/docs/5.0.0api/java/lang/String.html
By default it's ISO-8859-1, which does not include Japanese characters.
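
To make that concrete, here is a minimal Java sketch (plain Java SE with a hypothetical class name, not BlackBerry-specific) showing that encoding Japanese text to ISO-8859-1 silently turns every character into ?:

import java.io.UnsupportedEncodingException;

public class Latin1Demo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String japanese = "\u3053\u308C\u306F\u3042\u308B"; // これはある
        // ISO-8859-1 cannot represent Japanese, so getBytes() substitutes '?':
        byte[] latin1 = japanese.getBytes("ISO-8859-1");
        System.out.println(new String(latin1, "ISO-8859-1")); // prints ?????
    }
}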
The problem you are facing is how to get a string represented in your source code into your application with the same characters. For Latin characters this is pretty straightforward: we can just put the characters in quotes and get a string, like "Hello world".
When you go to non-Latin scripts, like Japanese, it gets harder. You can still write Japanese directly in your source code, but you need to make sure your editor and your compiler agree on an encoding so that the characters are interpreted correctly. The Java SE compiler takes an "-encoding" argument which allows you to specify the encoding of your Java source files.
Unfortunately, rapc, the BlackBerry compiler, does not offer an option to specify the encoding, even though it invokes javac itself. So rapc uses the platform default, which is UTF-8 on Linux and OS X and ISO-8859-1 on Windows.
The way around this problem is to use a feature of the Java language for parsing strings: Unicode escaping. By entering the six-character sequence "\u3053" in a string, the Java compiler will parse that number as hexadecimal and use the corresponding Unicode code point, solving problems with source file encoding.
So "Hello world" and "\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064" will result in the same strings appearing in your class files.
Because of this, Svetlin's answer from the comments is the right approach here:
net.rim.blackberry.api.homescreen.HomeScreen.setName("\u3053\u308C\u306F\u3042\u308B");

RubyMine and special characters, show ? when outputting results

Update-edit: Here is the requested information.
Used both RubyMine 3.2.4 and the EAP version (113.2).
JRuby: 1.6.6-p357
Windows 7 64 bit
I've double-checked the UTF-8 config, in the IDE (File -> Settings -> General) and in the file located in ....RubyMine40\config\options\encoding.xml; both are set to UTF-8.
I have tried to set #encoding: utf-8 in my .feature file, but it still shows my special chars (æøå) as ?. Actually, it won't even run the test if I use these characters in the code below:
Feature: This is some test # Feature = Egenskap in Norwegian
I opened the .feature file that I created in RubyMine 4.0 (EAP) in Notepad++, and it says it is encoded in ANSI as UTF-8.
The question
I have a problem with special characters, in my case æ ø å. I use RubyMine to develop Cucumber tests. My IDE encoding is set to UTF-8.
The problem is that when I run the tests in RubyMine, the console simply replaces my special chars (æøå) with ?, which is annoying, and this also happens when I do an HTML export of the tests. I can write the .feature files in my language with no problem, using the #language: no comment at the top of the feature file. It only gets messy when RubyMine outputs to the console and exports the results.
Has anyone had a similar problem and knows how to solve it? Or can just point me in the right direction?

User language support in Delphi 7

My program is written in Delphi 7 and I want to prevent Russian, Chinese, or Korean users from trying to use my software, because file paths contain Unicode chars and my program can't handle them yet (at least until I port it to a new Delphi version supporting Unicode).
How do I write a function detecting the "Unicode language" in Delphi 7?
A Delphi 7 program (in its VCL part) can handle Russian, Chinese or Korean characters without any problem.
If the Windows system language is properly set, the charset will match the corresponding encoding, and file names will be able to contain the Unicode chars available in this charset. In fact, the default string = AnsiString is converted into Unicode when the VCL calls Windows APIs (all ....A() calls will do the conversion and then call the ....W() version).
You can force the default code page (the one which will select the charset to be used) by calling code like this:
if GetThreadLocale <> LCID then // force locale settings if different
  if SetThreadLocale(LCID) then
    GetFormatSettings; // resets all locale-specific variables
In this case, the TFileName (=AnsiString) in the current system charset will be converted by Windows into the corresponding Unicode characters, and you'll be able to use it in your Delphi 7 application.
What you can't do with the standard VCL AnsiString is directly mix charsets, as you can since Delphi 2009 thanks to the new string = UnicodeString default paradigm.
PS:
Since the charset only involves #128..#255 chars (i.e. all with bit 7 set), if you use only #0..#127 chars, your string will be consistent whatever the current charset/codepage setting is. If you use only English characters and numbers, for example, your path will always work, whatever the charset/codepage is. But if you use non-English chars, the path will only work if the charset/codepage is correctly set, which is the case for a path chosen by an end user (using a TOpenDialog at runtime, for instance).
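
As an aside, the shared #0..#127 range is easy to verify; here is a small Java sketch (Java rather than Delphi, purely because the byte comparison is short to write there, and the path is made up) showing that an ASCII-only path encodes to identical bytes under several Windows codepages:

import java.nio.charset.Charset;
import java.util.Arrays;

public class AsciiPathDemo {
    public static void main(String[] args) {
        String path = "C:\\Program Files\\MyApp\\readme.txt"; // ASCII-only example path
        byte[] western  = path.getBytes(Charset.forName("windows-1252")); // Western European
        byte[] cyrillic = path.getBytes(Charset.forName("windows-1251")); // Russian
        byte[] japanese = path.getBytes(Charset.forName("windows-31j"));  // Japanese (CP-932)
        // All three agree because code points #0..#127 are shared by these codepages:
        System.out.println(Arrays.equals(western, cyrillic) && Arrays.equals(western, japanese));
    }
}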

Special names in LaTeX

In my English thesis LaTeX file, how do I mention the following non-English words: François, École Fédérale?
Thanks and regards!
The traditional way is to use the accent-adding macros:
Fran\c{c}ois
\'Ecole F\'ed\'erale
(You can also write Fran\c{}cois or Fran\c cois; the braces or the space are just a trick to let LaTeX see where the macro name \c ends.)
Otherwise, try this:
\usepackage[utf8]{inputenc}
and type the accents directly, with UTF-8 encoding.
There are a host of more-or-less subtle issues with fonts and hyphenation.
If you don't go the UTF8 inputenc route, and yet find yourself writing a lot of these names, I'd suggest defining macros for them. At the simplest, you can say \newcommand\Francois{Fran\c cois} but then you need to be sure to use it as such: \Francois{} so that any spaces afterwards don't get gobbled.
On the other hand, the following technique works pretty well too (though I can't take credit for inventing it - I saw it originally in a short talk at BachoTeX 2009 by Philip Taylor):
\makeatletter
\let\latex@less<
\catcode`<13
\def<{\ifmmode\latex@less\else\expandafter\find@name\fi}
\def\find@name#1>{\@nameuse{name.#1}}
\def\DefineName#1#2{\@namedef{name.#1}{#2}}
\makeatother
Now you can define special names using, e.g.
\DefineName{Francois}{Fran\c cois}
\DefineName{Ecole Federale}{\'Ecole F\'ed\'erale}
and later on you can use them in text with
I ran into <Francois> at the <Ecole Federale> the other day.
You can make your tags (the plain ASCII versions) be whatever you want - they don't have to actually be related to the properly accented names.
EDIT: in response to the issue that misspelled names don't produce errors, you can change the definition of \find@name to
\def\find@name#1>{\ifcsname name.#1\endcsname
  \@nameuse{name.#1}%
\else
  \@latex@warning{Undefined name #1}%
\fi}
Note that \@latex@warning{...} can be changed to \@latex@error{...}\@eha and it will complain more forcefully. Or if you want to pretend to be (or actually be) a package, you can use \Package(Warning|Error){<package name>} in place of \@latex@(warning|error) and it won't pretend to be a built-in LaTeX error anymore.

How does Acrobat encode annotations added as sticky notes to PDFs?

We have been reading and writing sticky notes/annotations/comments to PDFs via an ActiveX control in our application for a number of years. We have recently upgraded to Delphi 2009 with Unicode support. The following is causing problems.
When we call
CAcroPDAnnot.GetContents
The results seem rather strange and we lose our Unicode chars. It is not like saving as an ANSI string, which would usually result in returning ?????; instead we get a string such as
‚És‚­“ú‚É•—Ž×‚ð‚Ђ¢‚½‚ç
For a string of Japanese characters.
However, if I save the comments in the PDF to a data file via the menu in the PDF itself, it is written to the file as something like
0kˆL0Oeå0k˜¨ª0’0r0D0_0‰
The latter can be exported and re-imported into an Acrobat PDF and will recreate the correct Unicode characters. However, once I call CAcroPDAnnot.GetContents in my code, it comes back as something else.
Is CAcroPDAnnot.GetContents broken?
Is there an encoding scheme I should be aware of?
Is there an alternative I might be able to do?
Thanks
‚És‚­“ú‚É•—Ž×‚ð‚Ђ¢‚½‚ç
That's the string:
に行く日に風邪をひいたら
in CP-932 aka Shift-JIS encoding, an awful but lamentably still-popular encoding in Japan.
You're currently interpreting it as CP-1252 (Windows Western European). If your PDF-reading component won't convert it for you automatically, you'll need to find a way to detect what encoding the document is in and convert it manually.
I don't know what Delphi provides for reading encodings, but have you got the encodings for Shift-JIS installed in Windows, from Control Panel -> Regional Options -> "Install files for East Asian languages"? If not, that might explain why it fails to convert automatically.
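To make the byte-level mistake concrete, here is a small Java sketch (Java rather than Delphi, simply because the charset round trip is short to show there) that reproduces this kind of garbage by decoding Shift-JIS bytes as CP-1252, then reverses the mistake:

import java.nio.charset.Charset;

public class MojibakeDemo {
    public static void main(String[] args) {
        // に行く日に風邪をひいたら, written with escapes to avoid source-encoding issues
        String japanese = "\u306B\u884C\u304F\u65E5\u306B\u98A8\u90AA\u3092\u3072\u3044\u305F\u3089";
        Charset cp932  = Charset.forName("windows-31j");  // CP-932 / Shift-JIS
        Charset cp1252 = Charset.forName("windows-1252"); // Windows Western European

        byte[] shiftJisBytes = japanese.getBytes(cp932);
        String misread = new String(shiftJisBytes, cp1252); // mojibake like the question's
        System.out.println(misread);

        // Undoing the wrong decode recovers the text; the round trip happens to be
        // lossless here because Java maps CP-1252's undefined bytes to control chars:
        String recovered = new String(misread.getBytes(cp1252), cp932);
        System.out.println(recovered.equals(japanese)); // true
    }
}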
You're not exactly giving us a lot of information to work with.
I take it you're talking about the "Acrobat.CAcroPDAnnot" class's GetContents method here. Which version of Acrobat are you using? Have you perhaps switched versions (or run an update) around the time you started programming with Delphi 2009?
Then: how did you instantiate the object? If using a *_TLB.pas file generated from the DLL, are you certain it still matches it? (Try re-generating it, if uncertain).
Third: how are you calling the method? What type of variable are you assigning the result to?
What might also help, is if you could provide a sample of an annotation (preferably including non-ASCII chars); and for that annotation:
what it should look like (and what it does look like inside Reader)
what it returns when using a pre-2009 version of Delphi*
what it returns when using Delphi 2009*
(* preferably the HEX byte codes of the (ansi/wide)strings; but output from the Ctrl-F7 inspector should do)
Then maybe someone could provide a more meaningful answer.
OK, one of the main differences between Delphi 2009 and earlier versions is that the default string type is a Unicode string. That means that if you use the same ActiveX component as in previous versions, you are passing Unicode strings to something that expects ANSI strings, and that is usually not a good idea.
There are a couple of solutions for this problem:
See if you can upgrade your ActiveX component so that it supports full Unicode strings.
Use AnsiString and not string to communicate with the ActiveX component. In this case you can still use the old interface, but you are also still bound to the same limitations.
Use another control that creates PDFs. There are plenty to be found, but be prepared to change a big chunk of your software. (Some controls are XML-based and use encoding.)
