PDFBox 2.0: Overcoming dictionary key encoding

I am extracting text from PDF forms with Apache PDFBox 2.0.1, specifically the details of AcroForm fields. From a radio button field I dig up the appearance dictionary. I'm interested in the /N and /D entries (the normal and "down" appearances). Like this (interactive BeanShell):
field = form.getField(fieldName);
widgets = field.getWidgets();
print("Field Name: " + field.getPartialName() + " (" + widgets.size() + ")");
for (annot : widgets) {
    ap = annot.getAppearance();
    keys = ap.getCOSObject().getDictionaryObject("N").keySet();
    keyList = new ArrayList(keys.size());
    for (cosKey : keys) { keyList.add(cosKey.getName()); }
    print(String.join("|", keyList));
}
The output is
Field Name: Krematorier (6)
Off|Skogskrem
Off|R�cksta
Off|Silverdal
Off|Stork�llan
Off|St Botvid
Nyn�shamn|Off
The question mark blotches should be Swedish characters "ä" or "å". Using iText RUPS I can see that the dictionary keys are encoded with ISO-8859-1 while PDFBox assumes they are Unicode, I guess.
Is there any way of decoding the keys using ISO-8859-1? Or any other way to retrieve the keys correctly?
This sample PDF form can be downloaded here: http://www.stockholm.se/PageFiles/85478/KYF%20211%20Best%C3%A4llning%202014.pdf

Changing the assumed encoding
PDFBox's interpretation of the encoding of the bytes in a name (only names can be used as dictionary keys in PDFs) takes place in BaseParser.parseCOSName() when reading the name from the source PDF:
/**
 * This will parse a PDF name from the stream.
 *
 * @return The parsed PDF name.
 * @throws IOException If there is an error reading from the stream.
 */
protected COSName parseCOSName() throws IOException
{
    readExpectedChar('/');
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    int c = seqSource.read();
    while (c != -1)
    {
        int ch = c;
        if (ch == '#')
        {
            int ch1 = seqSource.read();
            int ch2 = seqSource.read();
            if (isHexDigit((char)ch1) && isHexDigit((char)ch2))
            {
                String hex = "" + (char)ch1 + (char)ch2;
                try
                {
                    buffer.write(Integer.parseInt(hex, 16));
                }
                catch (NumberFormatException e)
                {
                    throw new IOException("Error: expected hex digit, actual='" + hex + "'", e);
                }
                c = seqSource.read();
            }
            else
            {
                // check for premature EOF
                if (ch2 == -1 || ch1 == -1)
                {
                    LOG.error("Premature EOF in BaseParser#parseCOSName");
                    c = -1;
                    break;
                }
                seqSource.unread(ch2);
                c = ch1;
                buffer.write(ch);
            }
        }
        else if (isEndOfName(ch))
        {
            break;
        }
        else
        {
            buffer.write(ch);
            c = seqSource.read();
        }
    }
    if (c != -1)
    {
        seqSource.unread(c);
    }
    String string = new String(buffer.toByteArray(), Charsets.UTF_8);
    return COSName.getPDFName(string);
}
As you can see, after reading the name bytes and interpreting the # escape sequences, PDFBox unconditionally interprets the resulting bytes as UTF-8 encoded. To change this, therefore, you have to patch this PDFBox class and replace the charset named at the bottom.
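For the document at hand, the most direct workaround is therefore a local patch of that class. A minimal sketch of the change (assuming you build PDFBox from source and that its Charsets utility class also exposes ISO_8859_1) would be to swap the charset in the final two lines of parseCOSName():
    // Patched tail of BaseParser.parseCOSName() (sketch, untested):
    // interpret the accumulated name bytes as ISO-8859-1 instead of UTF-8.
    String string = new String(buffer.toByteArray(), Charsets.ISO_8859_1);
    return COSName.getPDFName(string);
Bear in mind that this changes how every name in every document is decoded; the heuristic fallback sketched further down (under "Historically") is the safer variant.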
Is PDFBox correct here?
According to the specification, when treating a name object as text
the sequence of bytes (after expansion of NUMBER SIGN sequences, if any) should be interpreted according to UTF-8, a variable-length byte-encoded representation of Unicode in which the printable ASCII characters have the same representations as in ASCII.
(section 7.3.5 Name Objects, ISO 32000-1)
BaseParser.parseCOSName() implements just that.
PDFBox's implementation is not completely correct, though, because the very act of interpreting the name as a string when there is no need to is already contrary to the specification:
name objects shall be treated as atomic within a PDF file. Ordinarily, the bytes making up the name are never treated as text to be presented to a human user or to an application external to a conforming reader. However, occasionally the need arises to treat a name object as text
Thus, PDF libraries should handle names as byte arrays for as long as possible and only derive a string representation when it is explicitly required; only then should the recommendation above (to assume UTF-8) come into play. The specification even indicates where this may cause trouble:
PDF does not prescribe what UTF-8 sequence to choose for representing any given piece of externally specified text as a name object. In some cases, multiple UTF-8 sequences may represent the same logical text. Name objects defined by different sequences of bytes constitute distinct name objects in PDF, even though the UTF-8 sequences may have identical external interpretations.
Another situation becomes apparent in the document at hand: if the sequence of bytes is not valid UTF-8, it still is a valid name. But such names are changed by the method above; any undecodable byte or subsequence is replaced by the Unicode replacement character '�'. Thus, different names may collapse into a single one.
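A quick way to see that collapse (a standalone sketch, not PDFBox code): two different ISO-8859-1 bytes, for instance the 'ä' and 'å' occurring in the names above, are each replaced by U+FFFD when decoded as UTF-8, so the resulting strings compare equal.
import java.nio.charset.StandardCharsets;

public class NameCollapseDemo
{
    public static void main(String[] args)
    {
        // 0xE4 is 'ä' and 0xE5 is 'å' in ISO-8859-1; neither is valid UTF-8 on its own.
        String a = new String(new byte[] { (byte) 0xE4 }, StandardCharsets.UTF_8);
        String b = new String(new byte[] { (byte) 0xE5 }, StandardCharsets.UTF_8);
        // Both decode to "\uFFFD", so two distinct names have collapsed into one.
        System.out.println(a.equals(b)); // true
    }
}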
Another issue is that when writing a PDF back, PDFBox does not act symmetrically: it encodes the String representation of the name (which was produced by a UTF-8 interpretation if read from a PDF) using plain US_ASCII, cf. COSName.writePDF(OutputStream):
public void writePDF(OutputStream output) throws IOException
{
    output.write('/');
    byte[] bytes = getName().getBytes(Charsets.US_ASCII);
    for (byte b : bytes)
    {
        int current = (b + 256) % 256;
        // be more restrictive than the PDF spec, "Name Objects", see PDFBOX-2073
        if (current >= 'A' && current <= 'Z' ||
                current >= 'a' && current <= 'z' ||
                current >= '0' && current <= '9' ||
                current == '+' ||
                current == '-' ||
                current == '_' ||
                current == '#' ||
                current == '*' ||
                current == '$' ||
                current == ';' ||
                current == '.')
        {
            output.write(current);
        }
        else
        {
            output.write('#');
            output.write(String.format("%02X", current).getBytes(Charsets.US_ASCII));
        }
    }
}
Thus, any interesting Unicode character is replaced with the US_ASCII default replacement character which I assume to be '?'.
So it is quite fortunate that PDF names usually contain nothing but ASCII characters... ;)
Historically
According to the implementation notes from the PDF 1.4 reference,
In Acrobat 4.0 and earlier versions, a name object being treated as text will typically be interpreted in a host platform encoding, which depends on the operating system and the local language. For Asian languages, this encoding may be something like Shift-JIS or Big Five. Consequently, it will be necessary to distinguish between names encoded this way and ones encoded as UTF-8. Fortunately, UTF-8 encoding is very stylized and its use can usually be recognized. A name that is found not to conform to UTF-8 encoding rules can instead be interpreted according to host platform encoding.
Thus, the sample document at hand seems to follow conventions from Acrobat 4, i.e. from the last century.
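If you wanted a patched parser to follow that old convention programmatically instead of hard-wiring a single charset, one possible helper (a sketch using plain java.nio, not an existing PDFBox API; the class and method names are made up) is to attempt a strict UTF-8 decode and fall back to a legacy encoding such as ISO-8859-1 when the bytes do not conform:
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class NameDecoder
{
    // Decode name bytes as UTF-8 if they conform, otherwise as ISO-8859-1.
    public static String decodeNameBytes(byte[] bytes)
    {
        CharsetDecoder strictUtf8 = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try
        {
            return strictUtf8.decode(ByteBuffer.wrap(bytes)).toString();
        }
        catch (CharacterCodingException e)
        {
            // Not valid UTF-8; assume a legacy single-byte encoding instead.
            return new String(bytes, StandardCharsets.ISO_8859_1);
        }
    }
}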
Source code excerpts are from PDFBox 2.0.0 but at first glance do not seem to have been changed in 2.0.1 or the development trunk.

Related

How and what does Firefox (Hunspell) do to clean text before spellchecking words?

I am trying to clean text in the exact way that Firefox does before spell checking individual words for a Firefox extension I'm building (my addon uses nspell, a JavaScript implementation of Hunspell, since Firefox doesn't expose the Hunspell instance it uses via the extension API).
I've looked through a clone of the Firefox Gecko codebase, e.g. the mozSpellChecker.h file and other related files found by searching for "spellcheck", but I cannot seem to find out how they clean text.
Reverse engineering it has been a major PITA; I have this so far:
// cleans text and strips out unwanted symbols/patterns before we use it
// returns an empty string if content undefined
function cleanText (content, filter = true) {
  if (!content) {
    console.warn(`MultiDict: cannot clean falsy or undefined content: "${content}"`)
    return ''
  }
  // ToDo: first split string by spaces in order to properly ignore urls
  const rxUrls = /^(http|https|ftp|www)/
  const rxSeparators = /[\s\r\n.,:;!?_<>{}()[\]"`´^$°§½¼³%&¬+=*~#|/\\]/
  const rxSingleQuotes = /^'+|'+$/g
  // split all content by any character that should not form part of a word
  return content.split(rxSeparators)
    .reduce((acc, string) => {
      // remove any number of single quotes that do not form part of a word i.e. 'y'all' > y'all
      string = string.replace(rxSingleQuotes, '')
      // we never want empty strings, so skip them
      if (string.length < 1) {
        return acc
      }
      // for when we're just cleaning the text of punctuation (i.e. not filtering out emails, etc)
      if (!filter) {
        return acc.concat([string])
      }
      // filter out emails, URLs, numbers, and strings less than 2 characters in length
      if (!string.includes('@') && !rxUrls.test(string) && isNaN(string) && string.length > 1) {
        return acc.concat([string])
      }
      return acc
    }, [])
}
But I'm still seeing big differences between content when testing things like - well - the text area used to create this question.
To be clear: I'm looking for the exact method(s) and matches and rules that Firefox uses to clean text, and since it's open source it should be somewhere, but I can't seem to find it!
I believe you want the functions in mozInlineSpellWordUtil.cpp.
From the header:
/**
* This class extracts text from the DOM and builds it into a single string.
* The string includes whitespace breaks whereever non-inline elements begin
* and end. This string is broken into "real words", following somewhat
* complex rules; for example substrings that look like URLs or
* email addresses are treated as single words, but otherwise many kinds of
* punctuation are treated as word separators. GetNextWord provides a way
* to iterate over these "real words".
*
* The basic operation is:
*
* 1. Call Init with the weak pointer to the editor that you're using.
* 2. Call SetPositionAndEnd to to initialize the current position inside the
* previously given range and set where you want to stop spellchecking.
* We'll stop at the word boundary after that. If SetEnd is not called,
* we'll stop at the end of the root element.
* 3. Call GetNextWord over and over until it returns false.
*/
You can find the complete source here, but it is fairly complex. For example, here is the method used to classify parts of the text as email addresses or urls, but it's over 50 lines long just to handle that.
Writing a spell checker seems trivial in principle, but as you can see from the source, it is a major endeavor. I'm not saying you shouldn't try, but as you've likely discovered, the devil is in the details of the edge cases.
Just as one example, when you're deciding what constitutes a word boundary or not, you have to decide which characters to ignore, including characters outside of the ASCII range. For example, here you can see the MONGOLIAN TODO SOFT HYPHEN being handled like the ASCII hyphen character:
// IsIgnorableCharacter
//
// These characters are ones that we should ignore in input.

inline bool IsIgnorableCharacter(char ch) {
  return (ch == static_cast<char>(0xAD));  // SOFT HYPHEN
}

inline bool IsIgnorableCharacter(char16_t ch) {
  return (ch == 0xAD ||    // SOFT HYPHEN
          ch == 0x1806);   // MONGOLIAN TODO SOFT HYPHEN
}
Again, I'm not trying to dissuade you from working on this project, but tokenizing text into discrete words in a way that will work within the context of HTML and in a multilingual environment is a major endeavor.

Objective-C some special char uncontrollably changing

I have a string that includes some special characters (like é, â, î, ı, etc.). When I use substring on this string, I encounter inconsistent results: some special characters change uncontrollably.
You are assuming that these are all characters:
[newword substringWithRange:NSMakeRange(0,1)];
[newword substringWithRange:NSMakeRange(1,1)];
[newword substringWithRange:NSMakeRange(2,1)];
[newword substringWithRange:NSMakeRange(3,1)];
// and so on...
In other words, you believe that:
A location always falls at the start of a character.
A character always has length 1.
Both assumptions are wrong. Please read the Characters and Grapheme Clusters chapter of Apple's String Programming Guide (here).
Your é happens to have length 2, because it is a base letter e followed by a combining diacritical accent. If you want it to have length 1, you need to normalize the string before you use it. Call precomposedStringWithCanonicalMapping and use the resulting string.
Example and proof (in Swift, but it won't matter, as I use NSString throughout):
let s = "é,â,î,ı" as NSString
let c = s.substring(with: NSRange(location: 0, length: 1)) // e
let s2 = s.precomposedStringWithCanonicalMapping as NSString
let c2 = s2.substring(with: NSRange(location: 0, length: 1)) // é
You're treating a Unicode string like a sequence of bytes. Unicode code points, apart from the low ASCII range, can be more than one byte in UTF-8, so you are changing the text by stripping out the parts responsible for the accent above the letter, like this part: https://www.compart.com/en/unicode/U+0301
UTF-8 is variable-width, so by treating it as raw bytes you may get weird results. I would suggest using something that is more Unicode-aware, like ICU (International Components for Unicode).
Now imagine you have a two-byte sequence like this (simplified, but it illustrates my point):
0x65 0x00
e    NUL
Now you have a UTF-8 string with 1 code point and a null terminator. Now say you want to add an accent to that e. How would you do that? You could use a special Unicode code point to modify the e, so now the string is:
0x65 0xCC 0x81 0x00
e    U+0301     NUL
Where U+0301 is a combining character (Combining Acute Accent), encoded as the two bytes 0xCC 0x81 in UTF-8, and it makes the e accented.
Edit: The answer assumes UTF-8 encoding, which is likely a bad assumption, but I think the answer, whether UTF-8 or UTF-16 or any other encoding with combining/control characters, illustrates why you may have mysteriously disappearing accents. While this may be UTF-16, for the sake of simplicity let's pretend we live in a world where life is just slightly better because everyone only uses UTF-8 and UTF-16 doesn't exist.
To address the comment (this is less to do with the question but is some fun trivia) and to give some fun details about the NS/CF/Swift runtimes, bridging, constant CF strings, and other fun stuff like that: the representation of the actual string in memory is implementation defined and can vary (even for constant strings; trust me, I know, I fixed the ELF implementation of them in Clang for CoreFoundation a few days ago). Anyway, here's some code:
CF_INLINE CFStringEncoding __CFStringGetSystemEncoding(void) {
    if (__CFDefaultSystemEncoding == kCFStringEncodingInvalidId) (void)CFStringGetSystemEncoding();
    return __CFDefaultSystemEncoding;
}

CFStringEncoding CFStringFileSystemEncoding(void) {
    if (__CFDefaultFileSystemEncoding == kCFStringEncodingInvalidId) {
#if DEPLOYMENT_TARGET_MACOSX || DEPLOYMENT_TARGET_EMBEDDED || DEPLOYMENT_TARGET_EMBEDDED_MINI || DEPLOYMENT_TARGET_WINDOWS
        __CFDefaultFileSystemEncoding = kCFStringEncodingUTF8;
#else
        __CFDefaultFileSystemEncoding = CFStringGetSystemEncoding();
#endif
    }
    return __CFDefaultFileSystemEncoding;
}
This sort of thing appears throughout CoreFoundation/Foundation/SwiftFoundation. (Yes, you never know what sort of NSString you're actually holding; they usually pretend to be the same thing, but under the hood, depending on how you got the object, you may be holding one of three variations of it.)
This is why code like the following exists: NS/CF(Constant)/Swift strings have an implementation-defined internal representation.
if (((encoding & 0x0FFF) == kCFStringEncodingUnicode) && ((encoding == kCFStringEncodingUnicode) || ((encoding > kCFStringEncodingUTF8) && (encoding <= kCFStringEncodingUTF32LE)))) {
If you want consistent behavior you have to encode the string using a specific fixed encoding instead of relying on the internal representation.

Parsing RTF non-breaking space

I am making a simple parser from RTF to HTML.
I have the following raw RTF:
who\~nursed\~and
According to the RTF specification \~ is the keyword for a non-breaking space.
The end of a keyword is marked by a Delimiter which is defined as follows:
A space. This serves only to delimit a control word and is ignored in subsequent processing.
A numeric digit or an ASCII minus sign (-), which indicates that a numeric parameter is associated with the control word. The subsequent digital sequence is then delimited by any character other than an ASCII digit (commonly another control word that begins with a backslash). The parameter can be a positive or negative decimal number. The range of the values for the number is nominally –32768 through 32767, i.e., a signed 16-bit integer. A small number of control words take values in the range −2,147,483,648 to 2,147,483,647 (32-bit signed integer). These control words include \binN, \revdttmN, \rsidN related control words and some picture properties like \bliptagN. Here N stands for the numeric parameter. An RTF parser must allow for up to 10 digits optionally preceded by a minus sign. If the delimiter is a space, it is discarded, that is, it’s not included in subsequent processing.
Any character other than a letter or a digit. In this case, the delimiting character terminates the control word and is not part of the control word. Such as a backslash “\”, which means a new control word or a control symbol follows.
As I understand it, the highlighted part above is the rule used in this particular instance. But if that is the case, then my parser would read until the ~ sign and conclude that, since this is not a letter or a digit, it is not part of the keyword.
This currently results in the following output:
who~nursed~and
I have the following code for reading a keyword:
public GetKeyword(index: number): KeywordSet {
    var keywordarray: string[] = [];
    var valuearray: string[] = [];
    index++;
    while (index < this.m_input.length) {
        var remainint = this.m_input.substr(index);
        //Keep going until we hit a delimiter
        if (this.m_input[index] == " ") {
            index++;
            break;
        } else if (this.IsNumber(this.m_input[index])) {
            valuearray.push(this.m_input[index]);
        } else if (this.IsDelimiter(this.m_input[index])) {
            break;
        } else keywordarray.push(this.m_input[index]);
        index++;
    }
    var value: number = null;
    if (valuearray.length > 0) value = parseInt(valuearray.join(""));
    var keywordset = new KeywordSet(keywordarray.join(""), index, value);
    return keywordset;
}

private IsDelimiter(char: string): boolean {
    if (char == "*" || char == "'") return false;
    return !this.IsLetterOrDigit(char);
}
When GetKeyword() reaches "~" it recognises it as a delimiter, and stops reading, resulting in an empty keyword as return value.
I do not have an AST constructed for this; I don't think that is necessary here?
The quote in your question describes the syntax of an entity called a control word, but \~ is actually a different entity, called a control symbol. Control symbols have a different syntax:
Control Symbol
A control symbol consists of a backslash followed by a single, non-alphabetical character. For example, \~ (backslash tilde) represents a non-breaking space. Control symbols do not have delimiters, i.e., a space following a control symbol is treated as text, not a delimiter.
See page 9 of Rich Text Format (RTF) Specification, version 1.9.1.
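So, before applying the control word rules, look at the character immediately following the backslash: if it is not a letter, consume exactly that one character as a control symbol and do not look for a delimiter at all. Here is a rough sketch of that branching (a standalone Java illustration rather than a patch to the TypeScript GetKeyword above; all names are made up):
// Standalone sketch of the control word / control symbol distinction.
public class RtfControlReader
{
    // Reads the control starting at the backslash at 'index' and returns its name.
    public static String readControl(String input, int index)
    {
        index++; // skip the backslash
        char c = input.charAt(index);
        if (!Character.isLetter(c))
        {
            // Control symbol: exactly one non-alphabetic character, no delimiter.
            // For example, "\~" is the non-breaking space.
            return String.valueOf(c);
        }
        // Control word: a run of letters, then an optional numeric parameter and
        // an optional single space delimiter (parameter/delimiter handling omitted).
        StringBuilder word = new StringBuilder();
        while (index < input.length() && Character.isLetter(input.charAt(index)))
        {
            word.append(input.charAt(index++));
        }
        return word.toString();
    }

    public static void main(String[] args)
    {
        String rtf = "who\\~nursed\\~and"; // the raw RTF who\~nursed\~and
        System.out.println(readControl(rtf, 3)); // prints "~", i.e. a non-breaking space
    }
}
In GetKeyword() this would roughly correspond to checking this.m_input[index] right after the backslash before entering the delimiter loop, and returning a one-character keyword for non-alphabetic characters.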

Is it legitimate for a tokenizer to have a stack?

I have designed a new language for which I want to write a reasonable lexer and parser.
For the sake of brevity, I have reduced this language to a minimum so that my questions are still open.
The language has implicit and explicit strings, arrays and objects. An implicit string is just a sequence of characters that does not contain <, {, [ or ]. An explicit string looks like <salt<text>salt> where salt is an arbitrary identifier (i.e. [a-zA-Z][a-zA-Z0-9]*) and text is an arbitrary sequence of characters that does not contain the salt.
An array starts with [, followed by objects and/or strings and ends with ].
All characters within an array that don't belong to an array, object or explicit string do belong to an implicit string and the length of each implicit string is maximal and greater than 0.
An object starts with { and ends with } and consists of properties. A property starts with an identifier, followed by a colon, then optional whitespaces and then either an explicit string, array or object.
So the string [ name:test <xml<<html>[]</html>>xml> {name:<b<test>b>}<b<bla>b> ] represents an array with 6 items: " name:test ", "<html>[]</html>", " ", { name: "test" }, "bla" and " " (the object is notated in json).
As one can see, this language is not context-free because of the explicit strings (which I don't want to give up). However, the syntax tree is unambiguous.
So my question is: is a property a token that may be returned by a tokenizer? Or should the tokenizer return T_identifier, T_colon when it reads an object property?
The real language allows even prefixes in the identifier of a property, e.g. ns/name:<a<test>a> where ns is the prefix for a namespace.
Should the tokenizer return T_property_prefix("ns"), T_property_prefix_separator, T_property_name("name"), T_property_colon or just T_property("ns/name") or even T_identifier("ns"), T_slash, T_identifier("name"), T_colon?
If the tokenizer should recognize properties (which would be useful for syntax highlighters), it must have a stack, because name: is not a property if it is in an array. To decide whether bla: in [{foo:{bar:[test:baz]} bla:{}}] is a property or just an implicit string, the tokenizer must track when it enters and leaves an object or array.
Thus, the tokenizer would no longer be a finite state machine.
Or does it make sense to have two tokenizers - the first, which separates whitespace from alphanumeric character sequences and special characters like : or [, and the second, which uses the first to build more semantic tokens? The parser could then operate on top of the second tokenizer.
In any case, the tokenizer would need unbounded lookahead to see where an explicit string ends. Or should the detection of the end of an explicit string happen inside the parser?
Or should I use a parser generator for my undertaking? Since my language is not context free, I don't think there is an appropriate parser generator.
Thanks in advance for your answers!
flex can be requested to provide a context stack, and many flex scanners use this feature. So, while it may not fit with a purist view of how a scanner scans, it is a perfectly acceptable and supported feature. See this chapter of the flex manual for details on how to have different lexical contexts (called "start conditions"); at the very end is a brief description of the context stack. (Don't miss the sentence which notes that you need %option stack to enable the stack.) [See Note 1]
Slightly trickier is the requirement to match strings with variable end markers. flex does not have any variable match feature, but it does allow you to read one character at a time from the scanner input, using the function input(). That's sufficient for your language (at least as described).
Here's a rough outline of a possible scanner:
%option stack
%x SC_OBJECT

%%
  /* initial/array context only */
[^][{<]+                  yylval = strdup(yytext); return STRING;

  /* object context only */
<SC_OBJECT>{
  [}]                     yy_pop_state(); return '}';
  [[:alpha:]][[:alnum:]]* yylval = strdup(yytext); return ID;
  [:/]                    return yytext[0];
  [[:space:]]+            /* Ignore whitespace */
}

  /* either context */
<*>{
  [][]                    return yytext[0]; /* char class with [] */
  [{]                     yy_push_state(SC_OBJECT); return '{';
  "<"[[:alpha:]][[:alnum:]]*"<"   {
      /* We need to save a copy of the salt because yytext could
       * be invalidated by input().
       */
      char* salt = strdup(yytext);
      char* saltend = salt + yyleng;
      char* match = salt;
      /* The string accumulator code is *not* intended
       * to be a model for how to write string accumulators.
       */
      yylval = NULL;
      size_t length = 0;
      /* change salt to what we're looking for */
      *salt = *(saltend - 1) = '>';
      while (match != saltend) {
          int ch = input();
          if (ch == EOF) {
              yyerror("Unexpected EOF");
              /* free the temps and do something */
          }
          if (ch == *match) ++match;
          else if (ch == '>') match = salt + 1;
          else match = salt;
          /* Don't do this in real code */
          yylval = realloc(yylval, ++length);
          yylval[length - 1] = ch;
      }
      /* Get rid of the terminator */
      yylval[length - yyleng] = 0;
      free(salt);
      return STRING;
  }
  .                       yyerror("Invalid character in object");
}
I didn't test that thoroughly, but here is what it looks like with your example input:
[ name:test <xml<<html>[]</html>>xml> {name:<b<test>b>}<b<bla>b> ]
Token: [
Token: STRING: -- name:test --
Token: STRING: --<html>[]</html>--
Token: STRING: -- --
Token: {
Token: ID: --name--
Token: :
Token: STRING: --test--
Token: }
Token: STRING: --bla--
Token: STRING: -- --
Token: ]
Notes
1. In your case, unless you wanted to avoid having a parser, you don't actually need a stack, since the only thing that needs to be pushed onto the stack is an object context, and a stack with only one possible value can be replaced with a counter.
Consequently, you could just remove the %option stack and define a counter at the top of the scan. Instead of pushing the start condition, you increment the counter and set the start condition; instead of popping, you decrement the counter and reset the start condition if it drops to 0.
%%
    /* Indented text before the first rule is inserted at the top of yylex */
    int object_count = 0;

<*>[{]          ++object_count; BEGIN(SC_OBJECT); return '{';
<SC_OBJECT>[}]  if (!--object_count) BEGIN(INITIAL); return '}';
2. Reading the input one character at a time is not the most efficient approach. Since in your case a string terminator must start with >, it would probably be better to define a separate "explicit string" context in which you recognize [^>]+ and [>]. The second of these would do the character-at-a-time match, as with the above code, but would terminate instead of looping if it found a non-matching character other than >. However, the simple code presented may turn out to be fast enough, and anyway it was just intended to be good enough to do a test run.
I think the traditional way to parse your language would be to have the tokenizer return T_identifier("ns"), T_slash, T_identifier("name"), T_colon for ns/name:
Anyway, I can see three reasonable ways you could implement support for your language:
1. Use lex/flex and yacc/bison. The tokenizers generated by lex/flex do not have a stack, so you should be using T_identifier and not T_context_specific_type. I didn't try this approach, so I can't give a definite comment on whether your language could be parsed by lex/flex and yacc/bison; my suggestion is to try it and see if it works. You may find information about the lexer hack useful: http://en.wikipedia.org/wiki/The_lexer_hack
2. Implement a hand-built recursive descent parser. Note that this can easily be built without separate lexer/parser stages. So, if the lexemes depend on context, it is easy to handle when using this approach.
3. Implement your own parser generator which turns lexemes on and off based on the context of the parser. So, the lexer and the parser would be integrated together using this approach.
I once worked for a major network security vendor where deep packet inspection was performed by using approach (3), i.e. we had a custom parser generator. The reason for this is that approach (1) doesn't work for two reasons: firstly, data can't be fed to lex/flex and yacc/bison incrementally, and secondly, HTTP can't be parsed by using lex/flex and yacc/bison because the meaning of the string "HTTP" depends on its location, i.e. it could be a header value or the protocol specifier. The approach (2) didn't work because data can't be fed incrementally to recursive descent parsers.
I should add that if you want to have meaningful error messages, a recursive descent parser approach is heavily recommended. My understanding is that the current version of gcc uses a hand-built recursive descent parser.
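To make option 2 a bit more concrete, here is a rough sketch (in Java, all names invented) of a hand-built recursive descent parser for a cut-down version of the language in the question, without the explicit <salt<...>salt> strings. The point is that no tokenizer stack is needed: whether name: is a property or part of an implicit string follows from which parse method you happen to be in, i.e. the call stack itself carries the context.
import java.util.ArrayList;
import java.util.List;

// Sketch only: arrays contain implicit strings and objects; objects contain
// name:value properties. Explicit strings and error handling are omitted.
public class MiniParser
{
    private final String src;
    private int pos;

    public MiniParser(String src) { this.src = src; }

    // array = '[' (object | implicit-string)* ']'
    public List<Object> parseArray()
    {
        expect('[');
        List<Object> items = new ArrayList<>();
        while (peek() != ']')
        {
            if (peek() == '{') items.add(parseObject());
            else items.add(readUntil("[]{}"));   // implicit string, colons and all
        }
        expect(']');
        return items;
    }

    // object = '{' (name ':' value)* '}'  -- here "name:" really is a property
    private List<String> parseObject()
    {
        expect('{');
        List<String> properties = new ArrayList<>();
        while (peek() != '}')
        {
            skipWhitespace();
            String name = readUntil(":");
            expect(':');
            String value = (peek() == '{') ? parseObject().toString() : readUntil("} ");
            properties.add(name.trim() + "=" + value);
            skipWhitespace();
        }
        expect('}');
        return properties;
    }

    private char peek() { return src.charAt(pos); }
    private void expect(char c) { if (src.charAt(pos++) != c) throw new IllegalStateException("expected " + c); }
    private void skipWhitespace() { while (pos < src.length() && Character.isWhitespace(peek())) pos++; }
    private String readUntil(String stops)
    {
        int start = pos;
        while (pos < src.length() && stops.indexOf(peek()) < 0) pos++;
        return src.substring(start, pos);
    }

    public static void main(String[] args)
    {
        // "name:test" inside the array stays an implicit string,
        // while name:test inside the object becomes a property.
        System.out.println(new MiniParser("[ name:test {name:test} ]").parseArray());
    }
}
Explicit strings would be one more branch in parseArray()/parseObject() that scans ahead for the matching >salt> terminator; that is also where the unbounded lookahead naturally lives, inside the parser rather than in a separate tokenizer.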

Is there a way to dumb down text from Unicode to ASCII?

What I need is something like, for each ASCII character, a list of equivalent Unicode characters.
The problem is that programs like Microsoft Excel and Word insert non-ASCII double-quotes, single-quotes, dashes, etc. when people type into documents. I want to store this text in a database field of type "varchar", which requires single-byte characters.
For the sake of storing ASCII (single-byte) text, some of those Unicode characters could be considered equivalent to or similar enough to a particular ASCII character that replacing the Unicode character with the equivalent ASCII character would be fine.
I would like a simple function like MapToASCII, that would convert Unicode text to an ASCII equivalent, allowing me to specify a replacement character for any Unicode characters that are not similar to any ASCII character.
The Win32 API WideCharToMultiByte can be used for this conversion (Unicode to ANSI). Use CP_ACP as the first parameter. Something like that would likely be better than trying to build your own mapping function.
Edit: At the risk of sounding like I am trying to promote this as a solution against the OP's wishes, it seems worth pointing out that this API does much (all?) of what is being asked for. The goal is to map (I think) a Unicode string as much as possible to "ANSI" (where ANSI may be something of a moving target in this case). An additional requirement is to be able to specify some alternative character for those that cannot be mapped. The following example does this. It "converts" a Unicode string to char and uses an underscore (second to last parameter) for those characters that cannot be converted.
ret = WideCharToMultiByte( CP_ACP, 0, L"abc個חあЖdef", -1,
                           ac, sizeof( ac ), "_", NULL );
for ( i = 0; i < strlen( ac ); i++ )
    printf( "%c %02x\n", ac[i], ac[i] );
A highly relevant question is here: Replacing unicode punctuation with ASCII approximations
Although the answer there is insufficient, it gave me an idea. I could map each of the Unicode code points in the Basic Multilingual Plane (0) to an equivalent ASCII character, if one exists. The following C# code will help by creating an HTML form in which you can type a replacement character for each value.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Globalization;
using System.IO;

namespace UnicodeCharacterCategorizer
{
    class Program
    {
        static void Main(string[] args)
        {
            string output_filename = "output.htm"; //set a filename if not specifying one through the command line
            Dictionary<UnicodeCategory,List<char>> category_character_sets = new Dictionary<UnicodeCategory,List<char>>();
            foreach (UnicodeCategory c in Enum.GetValues(typeof(UnicodeCategory)))
                category_character_sets.Add( c, new List<char>() );
            for (int i = 0; i <= 0xFFFF; i++)
            {
                if (i >= 0xD800 && i <= 0xDFFF) continue; //Skip ranges reserved for high/low surrogate pairs.
                char c = (char)i;
                UnicodeCategory category = char.GetUnicodeCategory( c );
                category_character_sets[category].Add( c );
            }
            StringBuilder file_data = new StringBuilder( @"<!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Transitional//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd""><html xmlns=""http://www.w3.org/1999/xhtml""><head><title>Unicode Category Character Sets</title><style>.categoryblock{border:3px solid black;margin-bottom:10px;padding:5px;} .characterblock{display:inline-block;border:1px solid grey;padding:5px;margin-right:5px;} .character{display:inline-block;font-weight:bold;background-color:#ffeeee} .numericvalue{color:blue;}</style></head><body><form id=""charactermap"">" );
            foreach (KeyValuePair<UnicodeCategory,List<char>> entry in category_character_sets)
            {
                file_data.Append( @"<div class=""categoryblock""><h1>" + entry.Key.ToString() + ":</h1><br />" );
                foreach (char c in entry.Value)
                {
                    string hex_value = ((int)c).ToString( "x" );
                    file_data.Append( @"<div class=""characterblock""><span class=""character"">&#x" + hex_value + @";<br /><span class=""numericvalue"">" + hex_value + @"</span><br /><input type=""text"" name=""r_" + hex_value + @""" /></div>" );
                }
                file_data.Append( "</div>" );
            }
            file_data.Append( "</form></body></html>" );
            File.WriteAllText( output_filename, file_data.ToString(), Encoding.Unicode );
        }
    }
}
Specifically, that code will generate an HTML form containing all characters in the BMP, along with input text boxes named after the hex values prefixed with "r_" ("r" is for "replacement value"). If this is ported over to an ASP.NET page, additional code could be written to pre-populate replacement values as much as possible:
with their own value if already ASCII, or
with their Unicode-normalized FormD or FormKD decomposed equivalents (a sketch of this idea follows at the end of this answer), or
with a single ASCII value for an entire category (e.g. all "punctuation initial" characters with an ASCII double quote).
You could then go through manually and make adjustments, and it probably wouldn't take as long as you'd think. There are only 63488 code points to consider, and large chunks of entire categories can probably be dismissed as "not even close to anything ASCII". So, I'm going to build this map and function.
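As a starting point for pre-populating those replacement values, here is a small sketch of the FormKD option above (shown in Java rather than C# for brevity; the mapToAscii name and the fallback-character parameter are my own inventions): decompose each character, drop the combining marks, and substitute a chosen fallback for anything that still is not ASCII.
import java.text.Normalizer;

public class AsciiMapper
{
    // Decompose to NFKD, drop combining marks, and replace anything still
    // outside ASCII with the given fallback character.
    public static String mapToAscii(String input, char fallback)
    {
        String decomposed = Normalizer.normalize(input, Normalizer.Form.NFKD);
        StringBuilder out = new StringBuilder(decomposed.length());
        for (int i = 0; i < decomposed.length(); i++)
        {
            char c = decomposed.charAt(i);
            if (c < 128)
            {
                out.append(c);
            }
            else if (Character.getType(c) != Character.NON_SPACING_MARK)
            {
                // e.g. curly quotes and dashes do not decompose to ASCII and
                // would need an explicit mapping; use the fallback for now.
                out.append(fallback);
            }
            // combining marks (the accents split off by NFKD) are simply dropped
        }
        return out.toString();
    }

    public static void main(String[] args)
    {
        System.out.println(mapToAscii("Beställning – “quoted”", '_'));
        // prints: Bestallning _ _quoted_
    }
}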
