Delphi Berlin Unicode Issue

I have a very strange Delphi compiler issue relating to Unicode characters.
I have a unit with this const definition:
const SLANG_SPANISH_ESP = 'Español';
When I compile this on my PC, the ñ gets converted to its ANSI equivalent. I've used a hex viewer to examine the relevant files:
Within the .pas source file, the ñ is encoded in UTF-8 as C3 B1.
Within the generated DCU file, the ñ is encoded in ANSI (Windows-1252) as F1.
All the other Delphi PCs within our group compile the unit differently, generating the DCU file with the ñ encoded in UTF-8 as C3 B1.
This is just one example, but many of the non-ASCII characters suffer the same fate.
I have tried hard over the last couple of days to identify the cause, without success. I have ruled out the project files and source code as the cause, since we all work from the same SVN repository. I double-checked by manually copying the project folder from a colleague's PC.
I have looked through the Delphi settings for something that might affect this, without success either.
It's very frustrating and worrying to imagine that the same source code on different PCs compiles to different results. My only hope now is that someone from the community will be able to give me a clue.

I finally got to the bottom of this issue. It turns out that the .pas file in question was not saved as UTF-8 with a BOM, despite what the IDE was telling me. In fact, this is a known issue/quirk with Delphi, where the unit is saved with UTF-8 characters but without the BOM.
You can refer to Marco Cantu's blog on this issue: The Delphi Compiler and UTF-8 Encoded Source Code Files With no BOM
The reason that the file did not have a BOM was that it was generated using an in-house tool. This tool has since been updated to output the BOM too.
Finally, I discovered that, on a given machine, building the project with the IDE or externally via MSBuild.exe would yield different results. The IDE correctly interprets the unit as UTF-8, whereas MSBuild.exe interprets the unit as Ansi.
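For illustration, a minimal sketch of the kind of fix applied to the tool (hypothetical code, not the actual in-house implementation): write the three BOM bytes EF BB BF before the UTF-8 content, so both the IDE and MSBuild detect the file as UTF-8.

#include <cstdio>

// Hypothetical sketch: emit a UTF-8 BOM, then the UTF-8 encoded source.
void WritePasUtf8(const char* path, const char* utf8Text, size_t len)
{
    FILE* f = std::fopen(path, "wb");            // binary mode: bytes written verbatim
    if (!f) return;
    const unsigned char bom[] = { 0xEF, 0xBB, 0xBF };
    std::fwrite(bom, 1, sizeof bom, f);          // BOM first...
    std::fwrite(utf8Text, 1, len, f);            // ...then the UTF-8 source text
    std::fclose(f);
}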


Change encoding on a per file or per extension basis

I'm using Microsoft Visual Studio Express 2012 for Web. It seems that every file which I open with it gets encoded into UTF-8. For most files which are going to be web-facing, that's fine. However, I have files in my projects that are specifically for build purposes (e.g., .bat files), which must be encoded in ANSI.
Are there any configuration settings in VS to designate the encoding on a per-file or per-extension basis? Or, if not to specify the encoding, at least to disable the auto-conversion to UTF-8?
Open the problematic file in Visual Studio and...
On the File menu, click Advanced Save Options.
In the Encoding dropdown, select Unicode (UTF-8 …) or the encoding you require.
Click OK.
Also see:
how to change source file encoding in csharp project (visual studio / msbuild machine)?
An option to handle the encoding of all files of a given extension, on a per-open basis, can be configured in the Options dialog. See the MSDN page on Options, Text Editor, File Extension.
Navigate to Tools > Options > Text Editor > File Extension.
For the bat extension, I selected Source Code (Text) Editor with Encoding. The with Encoding part means that the user will be given options as to what encoding to use when opening the file. The default in this mode is Auto-detect, which preserves the ANSI encoding, if that is what the file already uses. Otherwise, one can explicitly designate it for the individual file.
Unfortunately, it doesn't seem to remember the setting last used when opening a file, and will thus prompt for an encoding setting every time a file is opened.
I had code conversion problems with VS 2012 as well. Namely, I had non-ANSI characters in strings in my .js files, and unreadable text was output to the browser's HTML page.
I figured out that, except for script files (like .js), VS 2012 creates all files in UTF-8.
The problem is that the suggestion above, to change the defaults in the Options dialog, resulted in syntax highlighting and IntelliSense no longer working in any .js files.
So my workaround for now is to convert my .js files to UTF-8 without BOM using Notepad++.
This way my "unusual" characters appear correctly in browsers, and IntelliSense keeps working as well.

Tips on Unicode text editors

I am currently converting a legacy system to a new platform and need to extract strings from the old systems resource files.
The old system was written in Delphi, and the strings are kept in files called .dfm. I have no trouble locating the strings, and for English and other European languages there is no problem. The trouble comes when I try to extract strings in Japanese. I have used Notepad++, and it seems to me that the program doesn't recognise the correct encoding. I get Japanese symbols, but they don't seem to match what is in the GUI. Notepad++ detects the encoding as something called GB2312 (Simplified Chinese), but the result looks wrong.
My question is, does anyone have any tips on programs/text editors that are good at operations like this?
Also I'm grateful for any tips that might help me along the way.
Assuming that your issue is simply that Notepad++ is incorrectly guessing the encoding, you can solve the problem by manually setting the encoding in Notepad++, like this:
Notepad++ itself already handles encoding conversions. To get the file into your desired encoding (e.g., Unicode):
first, copy all the contents of the file;
choose your target encoding (e.g., Unicode without BOM) from the Encoding menu;
replace all the contents with the copied contents;
save the file.
Your contents will then be in your desired encoding.
Strings are not kept [just] in DFMs in Delphi; DFMs hold only forms and their associated text. So you would need to review all the code as well.
As for DFMs: before Delphi 2009, DFMs didn't use Unicode, so you must know what charset was used. That was one of the big problems with localization and internationalization of Delphi applications.
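For illustration, a text-format DFM from a pre-2009 version escapes non-ASCII characters as #<decimal byte> sequences whose meaning depends on that charset (a made-up example, not from the asker's files):

Caption = 'Espa'#241'ol'

Here #241 is byte 0xF1, which is ñ in Windows-1252 but denotes a different character (or one half of a double-byte pair) in a Japanese charset such as Shift-JIS. That is why extracted text looks wrong unless you know the original charset.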

Setting the source file character encoding for Mono's xbuild

I'm generating C# source code which is being built by both VS2010 and Mono's xbuild (2.10.2.0). This generally works very well, I've only had a single compatibility issue so far, and in that case I was using a 'feature' that is clearly specified as undefined behaviour (so mea culpa).
Now I'm running into an issue where I have special characters in a string literal in the C# source code. I'm generating the source files in UTF-8; the character I'm testing with is a German sharp s (ß), which is 0xC3 0x9F in UTF-8. This is written to a file in latin1 by the code, where it ends up as 0xDF when the executable is built with VS (that's the one I want) and as 0xC3 0x3F when built with xbuild.
It does not seem to matter whether I run the executable with the .NET or with the Mono CLR, as far as I can see.
My current suspicion is that xbuild is not reading the source code as UTF-8, so the compiled code already has the wrong character in the string literal. Is there a way to explicitly tell it to? I couldn't find anything on xbuild /? and the xbuild documentation isn't particularly comprehensive. If I just missed the right page where this is documented, just a link is sufficient, of course.
All experiments have been performed on Win7 x64.
EDIT 1: To clarify, I've used a hex editor to confirm that the bytes in the source code file are really 0xC3 0x9F, the character written when compiled with VS2010 is 0xDF, and the bytes written when compiled with xbuild are 0xC3 0x3F.
You'll need to modify the .csproj file(s) and add a <CodePage> element to the <PropertyGroup> section.
You should be able to use Visual Studio or MonoDevelop to do this for you, as well.
In MonoDevelop, if you right-click on a project and select the "Options" menu item, you can then go to the Build/General section and there will be a "Compiler Code Page" field which you can use to select "UTF-8".
FWIW, this is what MD outputs when I select UTF-8:
<CodePage>65001</CodePage>
So you can just copy/paste that into the <PropertyGroup>.
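Putting it together, the hand-edited fragment would look roughly like this (a sketch; a real <PropertyGroup> carries other properties too):

<PropertyGroup>
  <!-- 65001 is the UTF-8 codepage: tells the compiler how to read the source files -->
  <CodePage>65001</CodePage>
</PropertyGroup>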

Is there a tool/utility to convert "string" to "AnsiString" in Pascal source files?

Delphi 2009 and above support Unicode. I have a few legacy Pascal source files that I wish to make compile in Delphi 2009/2010 as well as Delphi 2007 and below.
A quick and safe way is to replace:
String to AnsiString
PChar to PAnsiChar
Char to AnsiChar
Is there any utility available that is able to parse .pas files and make such replacements?
There is a tool for pointing out areas that might need attention:
http://cc.embarcadero.com/Item/27398
It doesn't convert the code automatically; a tool like sed could do the replacement, but as mghie said, it's not that simple.
You can use sed for that.
sed -i.bak -e "s/\bstring\b/AnsiString/gI" *.pas
(GNU sed: -i.bak keeps backup copies, the \b word boundaries avoid mangling identifiers like WideString, and the I flag matches case-insensitively, since Pascal identifiers are case-insensitive.)
It would be a very bad idea, though. There's no reason your code shouldn't compile in all Delphi versions. The meaning of "string" has changed, but so what? Your Delphi 2007 code doesn't need to be used with your Delphi 2009 code. The DCU file formats are different, so you'd have to recompile anything you change anyway.
By changing everything to AnsiString, you're essentially rejecting everything new that Delphi 2009 offers. If that's what you want to do, you could have saved yourself a lot of money by simply not upgrading to Delphi 2009 at all. Why buy a product and then not use any of its features? Since everything else in the product is Unicode, your program's performance will go down the tubes as it continually converts between string formats. You'll also drown in compiler warnings from all the conversions.
Don't force square pegs into round holes, especially when you have a perfectly good set of round pegs sitting right next to you.

What do I need to know to upgrade a complex application from C++Builder 2007 to 2010?

My company's main application is mostly written in C++ (with some Delphi code and components). We are upgrading from RAD Studio 2007 to 2010 for the next release, starting in about a week. What do I need to know to ensure this upgrade goes smoothly?
Points I have thought of so far are:
Unicode. This one looks really complicated. Our app contains a horrible mix of std::string-s and AnsiString-s with casts to and from them. I have lots of questions about this, such as "is wstring capable of holding everything a UnicodeString can, and should we just do a search/replace", or "should we avoid all C++ string types altogether and use UnicodeString", "can we change all event handlers to use String though the existing .HPPs event handler method prototypes were compiler-translated to AnsiString", right down to basics such as "should we prefix all strings with L, or is the compiler smart enough with Unicode enabled to use Unicode strings", etc. Any insight on this would be really appreciated.
We also need backwards compatibility. Our app uses its own binary tuple format that currently stores strings as an array of bytes. I need to upgrade this to read old files and, presumably, write new Unicode strings as well. How do I handle Unicode strings embedded in a binary format? Is there any generic way where I can point a UnicodeString at an array of bytes, that may be originally written as either ANSI bytes or Unicode, and it will figure out what they are?
Third-party components. We use SpTBX mainly, and it appears to be compatible.
Project upgrades. The standard advice in the Codegear forums seems to be to manually recreate all project files when upgrading. This is an awful lot of work (7 projects (mostly libs) in our main app, plus half a dozen DLLs, a lot of files.) Is there any way to automate this?
How's the linker look? We traditionally have a lot of trouble with the linker randomly crashing or running out of resources, though it got a lot better in 2007. This is one reason our main application is split into several libs - the linker cannot (hopefully, "could not, but now can"?) handle it otherwise.
I know there's a new type library editor and format (it stores the IDL, ie text, and generates the TLB dynamically?) How well does this handle upgrading existing COM projects with a TLB? We have Delphi code and TLB that are built into the C++ application.
Is there anything else I should be considering or be aware of?
I have found:
2007 and 2010 co-existing. I'm not sure I trust this answer since I have had issues with 2006 and 2007 on the same machine before.
several answers about Unicode: writing strings with 2009 and generic transition to Unicode text, but none answer the concerns above or address the C++Builder-specific parts at all.
There is also this question about guidelines for upgrading to 2009, but though the answers are helpful, they don't answer all the Unicode-related issues above.
[Edit: added] Codegear documents for Unicode in RAD Studio and things to look for when converting to Unicode
Project upgrades. The standard advice in the Codegear forums seems to be to manually recreate all project files when upgrading. This is an awful lot of work (7 projects (mostly libs) in our main app, plus half a dozen DLLs, a lot of files.) Is there any way to automate this?
There is: just use the IDE's project importer :)
Seriously, I would just try importing the projects, and then go investigate if it doesn't seem to work.
How's the linker look? We traditionally have a lot of trouble with the linker randomly crashing or running out of resources, though it got a lot better in 2007. This is one reason our main application is split into several libs - the linker cannot (hopefully, "could not, but now can"?) handle it otherwise.
I've had almost no trouble with ILINK since C++Builder 2009. I've occasionally read that others experienced out-of-memory errors, but someone in the newsgroups discovered a workaround:
https://forums.embarcadero.com/thread.jspa?messageID=140012&tstart=0#140012
Also, as you can read here, the compiler got a new option (-Cx) to control the maximum amount of memory it allocates.
I know there's a new type library editor and format (it stores the IDL, ie text, and generates the TLB dynamically?) How well does this handle upgrading existing COM projects with a TLB?
Should work without a hitch.
I have lots of questions about this, such as "is wstring capable of holding everything a UnicodeString can, and should we just do a search/replace"
Yes: on Windows, wchar_t is 16 bits wide, which means it can hold the UTF-16 code units that UnicodeString is made of.
or "should we avoid all C++ string types altogether and use UnicodeString"
Depends on how portable your code needs to be. In any case, whenever you just need a string type, use "String", not "UnicodeString".
"can we change all event handlers to use String though the existing .HPPs were compiler-translated to AnsiString"
First, you should NEVER re-use .hpp files generated by older versions of DCC!
For event handlers that use the String type in Delphi, you must use UnicodeString. As above, simply use "String", and your code will work for both the ANSI and Unicode versions of C++Builder.
right down to basics such as "should we prefix all strings with L, or is the compiler smart enough with Unicode enabled to use Unicode strings"
The compiler doesn't convert your strings (it would conflict with the language standards), but both AnsiString and UnicodeString do have copy constructor overloads for both char* and wchar_t* string literals. I.e., the following will work:
AnsiString as = L"foo";
UnicodeString us = "bar";
What will not work this way, though, is the whole family of printf()/scanf()-style functions; AnsiString::sprintf() takes const char*, UnicodeString::sprintf() takes const wchar_t*.
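For example (a small fragment illustrating the mismatch):

AnsiString a;
a.sprintf("%d bytes", 42);       // narrow format string: const char*
UnicodeString u;
u.sprintf(L"%d bytes", 42);      // wide format string: const wchar_t*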
If you are using sprintf() a lot, you may find my CbdeFormat library useful; just read my article on the subject.
You do not say what the data strings in your binary tuple format are for: is it necessary for them to store Unicode? When I transitioned from D2007 to D2009 I was able to keep some parts of the system ANSI-string only.
If storing Unicode is required, then you need to check if your existing data is compatible with a format such as UTF-8. If the range of values stored in existing data files present a problem, then I would make your next upgrade do a one-time conversion of any old data files, reading in the old AnsiString data and writing it back as UTF-8 to a different file name or extension, or by modifying appropriate file header data. I have been versioning data files for a long time, just to allow this sort of processing change.
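As a rough sketch of such a one-time conversion (hypothetical code using the C++Builder string types; it assumes text data without embedded NULs and omits error handling):

#include <fstream>
#include <iterator>
#include <string>
#include <System.hpp>   // System::AnsiString, System::UnicodeString, System::UTF8String

// Hypothetical one-time migration: read legacy Ansi bytes, write them back as UTF-8.
void MigrateToUtf8(const char* inPath, const char* outPath)
{
    std::ifstream in(inPath, std::ios::binary);
    std::string bytes((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());
    System::AnsiString ansi(bytes.c_str());                   // interpreted with the OS Ansi codepage
    System::UTF8String utf8 = System::UnicodeString(ansi);    // Ansi -> UTF-16 -> UTF-8
    std::ofstream out(outPath, std::ios::binary);
    out.write(utf8.c_str(), utf8.Length());
}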
I am only just starting a BCB2010 project, so cannot comment on your other questions, but I certainly had difficulty upgrading a Delphi project from D2007 to D2009 - though I was able to fix this by editing the project file, which is just XML.
Good luck with the conversion ;-)
Unicode. This one looks really complicated. Our app contains a horrible mix of std::string-s and AnsiString-s with casts to and from them. I have lots of questions about this, such as "is wstring capable of holding everything a UnicodeString can, and should we just do a search/replace"
std::wstring holds wchar_t-based string data, just like System::UnicodeString does.
should we avoid all C++ string types altogether and use UnicodeString
That is up to you to decide. char* strings are still supported. You are not forced to migrate everything to Unicode.
can we change all event handlers to use String though the existing .HPPs were compiler-translated to AnsiString
No, you cannot change auto-managed event handlers to use the System::String alias. All IDE versions will complain about that. You will have to manually update your event handler declarations and implementations to use UnicodeString parameters instead of AnsiString parameters when appropriate. That also means you cannot share DFMs and Unit .h files across multiple IDE versions, either (which you should not be doing anyway).
should we prefix all strings with L, or is the compiler smart enough with Unicode enabled to use Unicode strings
No. If you declare a string constant or character constant without an L prefix, the data will still be interpreted as Ansi. That has not changed. You can, however, pass Ansi data to System::UnicodeString (but not to std::wstring), and it will convert to Unicode automatically. But you have to be careful, because it will use the OS's default Ansi codepage to interpret the data. As long as your Ansi data uses only ASCII characters, you will be OK. Otherwise, if you are using non-ASCII characters, you are better off putting the data into a System::AnsiStringT or System::RawByteString (both were introduced in CB2009) that has been assigned the correct codepage, and then assigning that to your System::UnicodeString variable. The associated codepage will be used instead of the OS default codepage for the conversion.
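For instance (a fragment; 1252 is just an example codepage):

System::AnsiStringT<1252> latin1("Espa\xF1ol");   // 0xF1 is 'ñ' in Windows-1252
System::UnicodeString u = latin1;                 // converted using codepage 1252, not the OS default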
We also need backwards compatibility. Our app uses its own binary tuple format that currently stores strings as an array of bytes. I need to upgrade this to read old files and, presumably, write new Unicode strings as well. How do I handle Unicode strings embedded in a binary format?
If your tuple is expecting 8-bit characters, then you will have to make sure that any struct declarations and such are using char and not wchar_t characters. If you need to store Unicode strings, but need to maintain the 8-bit compatibility, then you should encode your Unicode strings to UTF-8 first (you can use the System::UTF8String string type to help you - starting in CB2009, it is a true UTF-8 string now). As long as you do not use non-ASCII characters, then your old apps will not know the difference, as ASCII characters are encoded as-is in UTF-8. If you want to store raw Unicode data, however, then your tuple would need a flag somewhere (if it does not already have one) indicating whether the string data is stored as Ansi or Unicode, and your apps would have to look for that flag.
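A quick sketch of that encoding step (fragment):

System::UnicodeString u = L"Español";
System::UTF8String utf8(u);   // bytes are now UTF-8 (the ñ becomes C3 B1)
// store utf8.c_str() / utf8.Length() bytes in the tuple as before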
Is there any generic way where I can point a UnicodeString at an array of bytes, that may be originally written as either ANSI bytes or Unicode, and it will figure out what they are?
No. You have to know the actual encoding of the bytes beforehand. If you pass a memory address to System::AnsiString or std::string, it is going to assume Ansi characters. If you pass the same memory address to System::UnicodeString or std::wstring, it is going to assume Unicode characters instead.
Third-party components. We use SpTBX mainly, and it appears to be compatible.
Just like with all prior versions (except for the migration from 2006 to 2007), any third-party components you have will need to be re-compiled for 2010, either manually (if you have the source code for them) or by their respective vendors.
Project upgrades. The standard advice in the Codegear forums seems to be to manually recreate all project files when upgrading.
Yes. That still applies.
I know there's a new type library editor and format (it stores the IDL, ie text, and generates the TLB dynamically?)
.TLB files are not used at all anymore. The new system operates on .ridl (Reduced IDL) files now. During compiling, the .ridl produces the correct TypeLibrary information in the executable's binary resources directly. No .tlb files are generated.
How well does this handle upgrading existing COM projects with a TLB? We have Delphi code and TLB that are built into the C++ application.
I do not remember whether CB2010 (or CB2009, for that matter) can consume pre-existing .tlb files directly. I don't think they can. You can, however, run the .tlb file through tlibimp.exe and it will export a .ridl file. Or you can copy the IDL text from the TLB editor in a past version and paste it into a new .ridl file manually. Either way, you can then add that .ridl file to your CB2010 project.
2007 and 2010 co-existing. I'm not sure I trust this answer since I have had issues with 2006 and 2007 on the same machine before.
That is why I use virtual machines when installing multiple IDE versions on the same physical machine.
Is the cost of upgrading in line with the benefits?
Why not start a gradual upgrade where new components are developed on the new platform, integrating them into the old version via interop helpers?
This approach was suggested to VB6 developers who were thinking about upgrading to VB.NET.
