How to change CharSet when using Indy TIdHTTPServer::ServeFile()? - c++builder

My JavaScript, HTML, etc. files are being served with charset ISO-8859-1, but I need them to have charset=utf-8. I've tried using AResponseInfo->CharSet = "utf-8" but it seems to make no difference.
I've done this, which works, but I'd rather not repeat the test for every possible file extension.
AResponseInfo->ContentDisposition = "inline";
if (paths::getExtension(requestedFile, true) == ".JS")
    AResponseInfo->ContentType = "text/javascript; charset=utf-8";
AResponseInfo->SmartServeFile(AContext, ARequestInfo, uString(requestedFile));
(side note: This may be because I'm using a version of Indy10 that's a few years old with RadStudio 2010. What's the easiest way to upgrade Indy?)

side note: This may be because I'm using a version of Indy10 that's a few years old with RadStudio 2010.
That is more than "a few" years old; it is over a decade old. RS2010 was released in 2009. There have been many updates to TIdHTTPServer since then, including several charset-related changes in 2010, 2012, and 2019.
What's the easiest way to upgrade Indy?
The latest source code is on GitHub: https://github.com/IndySockets/Indy/
Indy 10 Installation Instructions
(note: link is from archive.org due to current problems with indyproject.org)
Once you have upgraded to the latest Indy, AResponseInfo->CharSet = "utf-8" should work as expected.
(Smart)ServeFile() sets the AResponseInfo->ContentType property only if it is blank:
If the new ContentType value specifies a charset, AResponseInfo->CharSet is set to that value.
Otherwise, if the new ContentType value does not include a charset, AND AResponseInfo->CharSet is blank, AND the new ContentType value is a text/... media type, then AResponseInfo->CharSet is set to a default value: us-ascii for XML types, ISO-8859-1 otherwise.
Otherwise, AResponseInfo->ContentType and AResponseInfo->CharSet are left untouched, preserving whatever values they were holding before (Smart)ServeFile() was called.
When AResponseInfo->WriteHeader() is called, if AResponseInfo->ContentType is still blank, AND AResponseInfo->ContentText or AResponseInfo->ContentStream is assigned data, then AResponseInfo->ContentType is set to text/html, and AResponseInfo->CharSet is set to utf-8 (RS2009+) or ISO-8859-1 (pre-RS2009) if it is blank.
So, if you want to ensure a particular charset in the Content-Type header, you should be able to explicitly set AResponseInfo->CharSet to whatever you want, independently of what AResponseInfo->ContentType is set to; TIdHTTPServer will now preserve a pre-existing CharSet value whenever possible.
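For example (a minimal sketch, reusing the handler variables from the question), after upgrading this alone should produce a UTF-8 Content-Type header:
AResponseInfo->ContentDisposition = "inline";
// Explicitly request UTF-8; an up-to-date Indy preserves this through SmartServeFile()
AResponseInfo->CharSet = "utf-8";
AResponseInfo->SmartServeFile(AContext, ARequestInfo, uString(requestedFile));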

Special character error despite UTF-8: get Â°C instead of °C [duplicate]

I'm setting up a new server and want to support UTF-8 fully in my web application. I have tried this in the past on existing servers and always seem to end up having to fall back to ISO-8859-1.
Where exactly do I need to set the encoding/charsets? I'm aware that I need to configure Apache, MySQL, and PHP to do this — is there some standard checklist I can follow, or perhaps troubleshoot where the mismatches occur?
This is for a new Linux server, running MySQL 5, PHP 5, and Apache 2.
Data Storage:
Specify the utf8mb4 character set on all tables and text columns in your database. This makes MySQL physically store and retrieve values encoded natively in UTF-8. Note that MySQL will implicitly use utf8mb4 encoding if a utf8mb4_* collation is specified (without any explicit character set).
In older versions of MySQL (< 5.5.3), you'll unfortunately be forced to use simply utf8, which only supports a subset of Unicode characters. I wish I were kidding.
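For example (a sketch with a hypothetical table name):
CREATE TABLE messages (
    body TEXT
) DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- or convert an existing table in place:
ALTER TABLE messages CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;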
Data Access:
In your application code (e.g. PHP), in whatever DB access method you use, you'll need to set the connection charset to utf8mb4. This way, MySQL does no conversion from its native UTF-8 when it hands data off to your application and vice versa.
Some drivers provide their own mechanism for configuring the connection character set, which both updates its own internal state and informs MySQL of the encoding to be used on the connection—this is usually the preferred approach. In PHP:
If you're using the PDO abstraction layer with PHP ≥ 5.3.6, you can specify charset in the DSN:
$dbh = new PDO('mysql:host=localhost;dbname=example_db;charset=utf8mb4', $user, $password); // host/dbname are placeholders
If you're using mysqli, you can call set_charset():
$mysqli->set_charset('utf8mb4'); // object oriented style
mysqli_set_charset($link, 'utf8mb4'); // procedural style
If you're stuck with plain mysql but happen to be running PHP ≥ 5.2.3, you can call mysql_set_charset.
If the driver does not provide its own mechanism for setting the connection character set, you may have to issue a query to tell MySQL how your application expects data on the connection to be encoded: SET NAMES 'utf8mb4'.
The same consideration regarding utf8mb4/utf8 applies as above.
Output:
UTF-8 should be set in the HTTP header, such as Content-Type: text/html; charset=utf-8. You can achieve that either by setting default_charset in php.ini (preferred), or manually using the header() function.
If your application transmits text to other systems, they will also need to be informed of the character encoding. With web applications, the browser must be informed of the encoding in which data is sent (through HTTP response headers or HTML metadata).
When encoding the output using json_encode(), add JSON_UNESCAPED_UNICODE as a second parameter.
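For example (hypothetical data; the flag requires PHP 5.4+):
echo json_encode(['temperature' => '25 °C'], JSON_UNESCAPED_UNICODE);
// prints {"temperature":"25 °C"} instead of {"temperature":"25 \u00b0C"}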
Input:
Browsers will submit data in the character set specified for the document, hence nothing particular has to be done on the input.
In case you have doubts about request encoding (in case it could be tampered with), you may verify every received string as being valid UTF-8 before you try to store it or use it anywhere. PHP's mb_check_encoding() does the trick, but you have to use it religiously. There's really no way around this, as malicious clients can submit data in whatever encoding they want, and I haven't found a trick to get PHP to do this for you reliably.
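A minimal sketch of that check, run against incoming form data (assumes PHP 5.4+ for http_response_code()):
foreach ($_POST as $value) {
    if (is_string($value) && !mb_check_encoding($value, 'UTF-8')) {
        http_response_code(400); // reject malformed input outright
        exit('Invalid UTF-8 in request');
    }
}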
Other Code Considerations:
Obviously enough, all files you'll be serving (PHP, HTML, JavaScript, etc.) should be encoded in valid UTF-8.
You need to make sure that every time you process a UTF-8 string, you do so safely. This is, unfortunately, the hard part. You'll probably want to make extensive use of PHP's mbstring extension.
PHP's built-in string operations are not by default UTF-8 safe. There are some things you can safely do with normal PHP string operations (like concatenation), but for most things you should use the equivalent mbstring function.
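For instance, the byte/character mismatch shows up immediately with strlen():
$s = 'héllo';                 // 5 characters, 6 bytes in UTF-8
echo strlen($s);              // 6 - counts bytes
echo mb_strlen($s, 'UTF-8');  // 5 - counts characters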
To know what you're doing (read: not mess it up), you really need to know UTF-8 and how it works on the lowest possible level. Check out any of the links from utf8.com for some good resources to learn everything you need to know.
I'd like to add one thing to chazomaticus' excellent answer:
Don't forget the META tag either (like this, or the HTML4 or XHTML version of it):
<meta charset="utf-8">
That seems trivial, but IE7 has given me problems with that before.
I was doing everything right; the database, database connection and Content-Type HTTP header were all set to UTF-8, and it worked fine in all other browsers, but Internet Explorer still insisted on using the "Western European" encoding.
It turned out the page was missing the META tag. Adding that solved the problem.
Edit:
The W3C actually has a rather large section dedicated to I18N. They have a number of articles related to this issue – describing the HTTP, (X)HTML and CSS side of things:
FAQ: Changing (X)HTML page encoding to UTF-8
Declaring character encodings in HTML
Tutorial: Character sets & encodings in XHTML, HTML and CSS
Setting the HTTP charset parameter
They recommend using both the HTTP header and HTML meta tag (or XML declaration in case of XHTML served as XML).
In addition to setting default_charset in php.ini, you can send the correct charset using header() from within your code, before any output:
header('Content-Type: text/html; charset=utf-8');
Working with Unicode in PHP is easy as long as you realize that most of the string functions don't work with Unicode, and some might mangle strings completely. PHP considers "characters" to be 1 byte long. Sometimes this is okay (for example, explode() only looks for a byte sequence and uses it as a separator -- so it doesn't matter what actual characters you look for). But other times, when the function is actually designed to work on characters, PHP has no idea that your text has multi-byte characters that are found with Unicode.
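A quick illustration of a function that is designed to work on characters but actually operates on bytes:
$s = 'héllo';
echo substr($s, 0, 2);             // "h" plus half of "é" - invalid UTF-8
echo mb_substr($s, 0, 2, 'UTF-8'); // "hé"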
A good library to check into is phputf8. This rewrites all of the "bad" functions so you can safely work on UTF-8 strings. There are extensions like the mbstring extension that try to do this for you, too, but I prefer using the library because it's more portable (but I write mass-market products, so that's important for me). But phputf8 can use mbstring behind the scenes, anyway, to increase performance.
Warning: This answer applies to PHP 5.3.5 and lower. Do not use it for PHP version 5.3.6 (released in March 2011) or later.
Compare with Palec's answer to PDO + MySQL and broken UTF-8 encoding.
I found an issue with someone using PDO and the answer was to use this for the PDO connection string:
$pdo = new PDO(
    'mysql:host=mysql.example.com;dbname=example_db',
    'username',
    'password',
    array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8")
);
In my case, I was using mb_split, which uses regular expressions. Therefore I also had to manually make sure the regular expression encoding was UTF-8 by doing mb_regex_encoding('UTF-8');
As a side note, I also discovered by running mb_internal_encoding() that the internal encoding wasn't UTF-8, and I changed that by running mb_internal_encoding("UTF-8");.
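Putting those together (with $text standing in for whatever UTF-8 input you are processing):
mb_internal_encoding('UTF-8');
mb_regex_encoding('UTF-8');
$words = mb_split('\s+', $text); // now splits multibyte text safely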
First of all, if you are on PHP older than 5.3, then no. You've got a ton of problems to tackle.
I am surprised that no one has mentioned the intl library, which has good support for Unicode, graphemes, string operations, localisation and much more; see below.
I will quote some information about Unicode support in PHP from Elizabeth Smith's slides at PHPBenelux 2014:
INTL
Good:
Wrapper around ICU library
Standardised locales, set locale per script
Number formatting
Currency formatting
Message formatting (replaces gettext)
Calendars, dates, time zone and time
Transliterator
Spoofchecker
Resource bundles
Convertors
IDN support
Graphemes
Collation
Iterators
Bad:
Does not support zend_multibyte
Does not support HTTP input output conversion
Does not support function overloading
mb_string
Enables zend_multibyte support
Supports transparent HTTP in/out encoding
Provides some wrappers for functionality such as strtoupper
ICONV
Primarily for charset conversion
Output buffer handler
mime encoding functionality
conversion
some string helpers (len, substr, strpos, strrpos)
Stream Filter stream_filter_append($fp, 'convert.iconv.ISO-2022-JP/EUC-JP')
DATABASES
MySQL: charset and collation on tables and on the connection (the connection takes a charset, not a collation). Also, don't use the old mysql extension; use mysqli or PDO
PostgreSQL: pg_set_client_encoding()
sqlite(3): Make sure it was compiled with Unicode and intl support
Some other gotchas
You cannot use Unicode filenames with PHP on Windows unless you use a third-party extension.
Send everything in ASCII if you are using exec, proc_open and other command line calls
Plain text is not plain text, files have encodings
You can convert files on the fly with the iconv filter
The only thing I would add to these amazing answers is to emphasize saving your files in UTF-8 encoding; I have noticed that browsers give a file's actual encoding precedence over the encoding declared in your code. Any decent text editor will show you this. For example, Notepad++ has a menu option for file encoding that shows you the current encoding and lets you change it. For all my PHP files I use UTF-8 without a BOM.
Some time ago, someone asked me to add UTF-8 support to a PHP and MySQL application designed by someone else. I noticed that all files were encoded in ANSI, so I had to use iconv to convert all files, change the database tables to use the UTF-8 character set and the utf8_general_ci collation, add 'SET NAMES utf8' to the database abstraction layer after the connection (when using PHP older than 5.3.6; from 5.3.6 on you can use charset=utf8 in the connection string instead), and change the string functions to use their PHP multibyte equivalents.
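The file conversion itself can be done with the iconv command-line tool, e.g. (assuming the source files were Windows-1252; adjust to your actual ANSI code page):
iconv -f WINDOWS-1252 -t UTF-8 old/index.php > new/index.php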
I recently discovered that using strtolower() can cause issues where the data is truncated after a special character.
The solution was to use
mb_strtolower($string, 'UTF-8');
The mb_ prefix stands for multibyte. These functions support more characters, but are in general a little slower.
In PHP, you'll need to either use the multibyte functions, or turn on mbstring.func_overload. That way things like strlen will work if you have characters that take more than one byte.
You'll also need to identify the character set of your responses. You can either use AddDefaultCharset, as above, or write PHP code that returns the header. (Or you can add a META tag to your HTML documents.)
I have just gone through the same issue and found a good solution in the PHP manual.
I changed all my files' encoding to UTF8 and then the default encoding on my connection. This solved all the problems.
if (!$mysqli->set_charset("utf8")) {
    printf("Error loading character set utf8: %s\n", $mysqli->error);
} else {
    printf("Current character set: %s\n", $mysqli->character_set_name());
}
Unicode support in PHP is still a huge mess. While it's capable of converting an ISO 8859 string (which it uses internally) to UTF-8, it lacks the capability to work with Unicode strings natively, which means all the string processing functions will mangle and corrupt your strings.
So you have to either use a separate library for proper UTF-8 support, or rewrite all the string handling functions yourself.
The easy part is just specifying the charset in HTTP headers and in the database and such, but none of that matters if your PHP code doesn't output valid UTF-8. That's the hard part, and PHP gives you virtually no help there. (I think PHP 6 is supposed to fix the worst of this, but that's still a while away.)
If you want a MySQL server to decide the character set, and not PHP as a client (old behaviour; preferred, in my opinion), try adding skip-character-set-client-handshake to your my.cnf, under [mysqld], and restart mysql.
This may cause trouble in case you're using anything other than UTF-8.
The top answer is excellent. Here is what I had to do on a regular Debian, PHP, and MySQL setup:
// Storage
// Debian. Apparently already UTF-8
// Retrieval
// The MySQL database was stored in UTF-8,
// but apparently PHP was requesting ISO 8859-1. This worked:
// ***notice "utf8", without dash, this is a MySQL encoding***
mysql_set_charset('utf8');
// Delivery
// File *php.ini* did not have a default charset,
// (it was commented out, shared host) and
// no HTTP encoding was specified in the Apache headers.
// This made Apache send out a UTF-8 header
// (and perhaps made PHP actually send out UTF-8)
// ***notice "utf-8", with dash, this is a php encoding***
ini_set('default_charset','utf-8');
// Submission
// This worked in all major browsers once Apache
// was sending out the UTF-8 header. I didn’t add
// the accept-charset attribute.
// Processing
// Changed a few commands in PHP, like substr(),
// to mb_substr()
That was all!

Indy TLS server with non-ASCII characters in private key password

I have a private key file encrypted with a password that also contains non-US-ASCII characters (e.g. passwörd or s€cret).
I didn't find a way to use this key file from an Indy-based server, as Indy seems to use MBCS to convert the UnicodeString password to an octet string.
According to https://www.rfc-editor.org/rfc/rfc8018 (end of section 3) UTF-8 is a common encoding rule for password octet string.
According to my investigations Indy (I'm using the version that comes with Delphi 10.2 Tokyo) uses IndyTextEncoding_OSDefault inside PasswordCallback (IdSSLOpenSSL.pas) to convert the (Unicode)String to a PAnsiChar.
IndyTextEncoding_OSDefault() (in IdGlobal.pas) sets GIdOSDefaultEncoding to TIdMBCSEncoding and also returns it. GIdOSDefaultEncoding is not globally available and I also didn't find a method to set it.
Is there a possibility to either change the encoding PasswordCallback uses, or to pass the password to Indy already encoded, as a byte array/PAnsiChar/RawByteString?
There is no option to change the charset that Indy uses for encoding Unicode passwords. You would have to alter Indy's source code and recompile it. In a future version, I'll consider changing it to UTF-8, or make it user-configurable.
Note that IndyTextEncoding_OSDefault is MBCS only on Windows. It is UTF-8 on other platforms.
Otherwise, you would have to call OpenSSL's SSL_CTX_set_default_passwd_cb() function directly to replace the password callback with your own function, then you can do whatever you want with it.
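A rough C-style sketch of that approach (names are illustrative; assumes you can reach the underlying SSL_CTX):
static int utf8_passwd_cb(char *buf, int size, int rwflag, void *userdata)
{
    /* userdata points to a UTF-8 encoded password supplied by us */
    const char *pw = (const char *)userdata;
    int len = (int)strlen(pw);
    if (len > size)
        len = size;
    memcpy(buf, pw, len);
    return len;
}

/* install it, replacing Indy's OS-default-encoded callback */
SSL_CTX_set_default_passwd_cb(ctx, utf8_passwd_cb);
SSL_CTX_set_default_passwd_cb_userdata(ctx, (void *)utf8_password);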

Delphi IdFTP - get file listing encoded in specified ANSI code page

With IdFTP, the server I'm connecting to is not using UTF-8, but ANSI. There's nothing special about my code; I simply set Host, Username and Password, and connect to the server. Then I call the List method with no parameters. Iterating through DirectoryListing gives me incorrect results for file names. My sample directory name, encoded in the local code page (CP-1250), is:
aąęsśćńółżźz
I thought I'd be able to "fix" the file name field by converting it to AnsiString and setting the code page, but it seems to be already broken; a memory dump of DirectoryListing[I].FileName:
a ? ? s ? ? ? ?? ?? z
6100 FDFF FDFF 7300 FDFF FDFF FDFF 8FDB DFDF 7A00
Manipulating GIdDefaultAnsiEncoding or IOHandler.DefStringEncoding (after Connect, before List) makes no difference. I don't want to mess with the IdFTP or IdGlobal code because I'm using it in other projects that involve Unicode, and those work perfectly. Delphi XE2 or XE7.
As you can see, FData contains the raw file name as a two-bytes-per-char string, even if I set IOHandler.DefStringEncoding to a TIdTextEncoding with FIsSingleByte = True and FMaxCharSize = 1. It looks promising, because #$009F is "ź" in CP-1250, but I'm not looking for a per-server, temporary solution. I expected Indy to handle this correctly after setting IOHandler.DefStringEncoding and GIdDefaultAnsiEncoding based on server capabilities (UTF-8 or ANSI with a specified encoding).
Total Commander connection log: [log image omitted]
Your server supports the MLSD command. Total Commander is sending the MLSD command and not the older LIST command. This is good, because MLSD has a standardized format (see RFC 3659), which includes support for embedded charset information. If no charset is explicitly stated, UTF-8 must be used.
You did not show the command/response log for TIdFTP, but the fact that the TIdFTPListItem.Data property is showing MLSD formatted output data means TIdFTP.List() is also using the MLSD command (by calling TIdFTP.ExtListDir() internally). The output shown does not include an explicit charset attribute, so TIdFTP will decode the filename as UTF-8.
However, the raw filename data that is shown in the TIdFTPListItem.Data property is NOT the correct UTF-8 encoded form of the directory name you have shown (even when stored as a raw 8-bit encoded UnicodeString - which is what TIdFTP.ExtListDir() does internally before parsing it). So the problem is either:
your FTP server is not converting the directory name from CP-1250 to UTF-8 correctly in the first place. Considering that Total Commander appears to be able to handle the listing correctly, this is not likely.
TIdFTP is not storing the raw UTF-8 octet data correctly before parsing it. This is more likely.
Hard to say which is actually the case since you did not show the raw listing data that is actually being transmitted. And you did not specify which exact version of Delphi and Indy you are using, either. Assuming the server is transmitting UTF-8 correctly, you might simply be using an older Indy version that does not handle the UTF-8 transmission correctly. AFAIK, the current version available (10.6.2.5270 at the time of this writing) should be able to handle it, as long as you are using Delphi 2009 or later. If you can provide a Wireshark capture of the raw listing data, I can check if there are any logic issues in TIdFTP that need to be fixed or not.
My team was looking for a quick solution that I had to provide. My solution is based on this post: http://forums2.atozed.com/viewtopic.php?p=32301#p32301 and this question: Converting UnicodeString to AnsiString
Once the FTP listing is finished, I overwrite the FileName property via a function that extracts the file name from Data and then converts the String to a RawByteString with the correct code page. The fix is applied only if the server doesn't support UTF-8. This way I'm able to move around the FTP server (ChangeDir, Get, Put, etc.) without problems.

Adding custom header to TIdHttp request, header value has commas

I'm using Delphi XE2 and Indy 10.5.8.0. I have an instance of TIdHttp and I need to add a custom header to the request. The header value has commas in it so it's getting parsed automatically into multiple headers. I don't want it to do that. I need the header value for my custom header to still be one string and not split based on a comma delimiter.
I have tried setting IdHttp1.Request.CustomHeaders.Delimiter := ';' with no success. Is there a way to make sure the header doesn't get split up?
procedure SendRequest;
const
  HeaderStr = 'URL-Encoded-API-Key VQ0_RV,ntmcOg/G3oA==,2012-06-13 16:25:19';
begin
  IdHttp1.Request.CustomHeaders.AddValue('Authorization', HeaderStr);
  IdHttp1.Get(URL);
end;
I am not able to reproduce this issue using the latest Indy 10.5.8 SVN snapshot. The string you have shown gets assigned as a single line for me.
With that said, by default the TIdHeaderList.FoldLines property is set to True, and lines get folded on whitespace and comma characters, so that would explain why your string is getting split. Near as I can tell, there have not been any logic changes made to the folding algorithm between your version of Indy and the latest version in SVN.
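If folding is indeed the culprit in your build, one thing worth trying (an untested sketch based on the FoldLines property described above) is to disable folding before adding the header:
IdHttp1.Request.CustomHeaders.FoldLines := False;
IdHttp1.Request.CustomHeaders.AddValue('Authorization', HeaderStr);
IdHttp1.Get(URL);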

What is the _snowman param in Ruby on Rails 3 forms for?

In Ruby on Rails 3 (currently using Beta 4), I see that when using the form_tag or form_for helpers there is a hidden field named _snowman with the value ☃ (Unicode snowman, U+2603) showing up.
So, what is this for?
This parameter was added to forms in order to force Internet Explorer (5, 6, 7 and 8) to encode its parameters as unicode.
Specifically, this bug can be triggered if the user switches the browser's encoding to Latin-1. To understand why a user would decide to do something seemingly so crazy, check out this google search. Once the user has put the web-site into Latin-1 mode, if they use characters that can be understood as both Latin-1 and Unicode (for instance, é or ç, common in names), Internet Explorer will encode them in Latin-1.
This means that if a user searches for "Ché Guevara", it will come through incorrectly on the server-side. In Ruby 1.9, this will result in an encoding error when the text inevitably makes its way into the regular expression engine. In Ruby 1.8, it will result in broken results for the user.
By creating a parameter that can only be understood by IE as a unicode character, we are forcing IE to look at the accept-charset attribute, which then tells it to encode all of the characters as UTF-8, even ones that can be encoded in Latin-1.
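The rendered form therefore contains something like this (markup approximate):
<form action="/search" method="get" accept-charset="UTF-8">
  <input name="_snowman" type="hidden" value="&#9731;" />
</form>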
Keep in mind that in Ruby 1.8, it is extremely trivial to get Latin-1 data into your UTF-8 database (since nothing in the entire stack checks that the bytes that the user sent at any point are valid UTF-8 characters). As a result, it's extremely common for Ruby applications (and PHP applications, etc. etc.) to exhibit this user-facing bug, and therefore extremely common for users to try to change the encoding as a palliative measure.
All that said, when I wrote this patch, I didn't realize that the name of the parameter would ever appear in a user-facing place (it does with forms that use the GET action, such as search forms). Since it does, we will rename this parameter to _e, and use a more innocuous-looking unicode character.
This is here to support Internet Explorer 5 and encourage it to use UTF-8 for its forms.
The commit message seen here details it as follows:
Fix several known web encoding issues:
Specify accept-charset on all forms. All recent browsers, as well as IE5+, will use the encoding specified for form parameters.
Unfortunately, IE5+ will not look at accept-charset unless at least one character in the form's values is not in the page's charset. Since the user can override the default charset (which Rails sets to UTF-8), we provide a hidden input containing a unicode character, forcing IE to look at the accept-charset.
Now that the vast majority of web input is UTF-8, we set the inbound parameters to UTF-8. This will eliminate many cases of incompatible encodings between ASCII-8BIT and UTF-8.
You can safely ignore params[:_snowman]
In short, you can safely ignore this parameter.
Still, I am not sure why we're supporting old technologies like Internet Explorer 5. It seems like a very non-Ruby on Rails decision if you ask me.
