Tackle different types of UTF hyphens in Ruby 1.8.7 - ruby-on-rails

We have different types of hyphens/dashes (in some text) populated in the database. Before comparing them with user input text, I have to normalize any type of dash/hyphen to a simple hyphen/minus (ASCII 45).
The possible dashes we have to convert are:
Minus (−) U+2212
Hyphen-minus (-) U+002D
Hyphen (‐) U+2010
Soft hyphen U+00AD
Non-breaking hyphen (‑) U+2011
Figure dash (‒) U+2012 (8210)
En dash (–) U+2013 (8211)
Em dash (—) U+2014 (8212)
Horizontal bar (―) U+2015 (8213)
These all have to be converted to Hyphen-minus(-) using gsub.
I've used the CharDet gem to detect the character encoding of the fetched string; it reports windows-1252. I've tried Iconv to convert the encoding to ASCII, but it throws an Iconv::IllegalSequence exception.
ruby -v => ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin9.8.0]
rails -v => Rails 2.3.5
mysql encoding => 'latin1'
Any idea how to accomplish this?

Caveat: I know nothing about Ruby, but you have problems that have nothing to do with the programming language you are using.
You don't need to convert Hyphen-minus (-) U+002D to a simple hyphen/minus (ASCII 45); they're the same thing.
You believe that the database encoding is latin1. The statement "My data is encoded in ISO-8859-1 aka latin1" is up there with "The check is in the mail" and "Of course I'll still love you in the morning". All it tells you is that it is a single-byte-per-character encoding.
Presuming that "fetched string" means "byte string extracted from the database", chardet is very likely quite right in reporting windows-1252 aka cp1252 -- however this may be by accident as chardet sometimes seems to report that as a default when it has exhausted other possibilities.
(a) These Unicode characters cannot be encoded in latin1, cp1252 or ASCII:
Minus (−) U+2212
Hyphen (‐) U+2010
Non-breaking hyphen (‑) U+2011
Figure dash (‒) U+2012 (8210)
Horizontal bar (―) U+2015 (8213)
What gives you the impression that they may possibly appear in the input or in the database?
(b) These Unicode characters can be encoded in cp1252 but not latin1 or ASCII:
En dash (–) U+2013 (8211)
Em dash (—) U+2014 (8212)
These (most likely the EN DASH) are what you really need to convert to an ASCII hyphen/dash. What was in the string that chardet reported as windows-1252?
(c) This character can be encoded in cp1252 and latin1 but not ASCII:
Soft hyphen U+00AD
If a string contains non-ASCII characters, any attempt (using iconv or any other method) to convert it to ASCII will fail, unless you use some kind of "ignore" or "replace with ?" option. Why are you trying to do that?
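If the goal is simply to flatten whatever dash variants do show up to an ASCII hyphen before comparison, a byte-oriented gsub sidesteps Iconv entirely. Here is a minimal sketch, assuming (as chardet suggests) the stored bytes really are windows-1252, where en dash is 0x96, em dash 0x97 and soft hyphen 0xAD; Ruby 1.8 strings are byte strings, so a byte class is enough (the method name is just illustrative):
# Replace windows-1252 dash bytes with a plain ASCII hyphen-minus.
def normalize_dashes(text)
  text.gsub(/[\x96\x97\xAD]/, '-')
end
normalize_dashes("2012\x962013")   # => "2012-2013"
# If the text were UTF-8 instead, match the multi-byte sequences, e.g.
# U+2013 EN DASH is the bytes 0xE2 0x80 0x93:
#   text.gsub(/\xE2\x80[\x90-\x95]|\xE2\x88\x92|\xC2\xAD/, '-')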

Related

Best ANSI escape beginning

Which ANSI escape sequence is the most portable and/or simply the best, and why?
1. "\u001B[32;1mThis is bright green\u001B[0m"
2. "\x1B[33;1mThis is bright yellow\x1B[0m"
3. "\e[35;4;1mThis is bright purple underlined\e[0m"
I have been using printf "\x1B[32;1mgreen\x1B[0m" (for example, in a Unix bash script) out of habit, but I was wondering if there were any reasons to use one over the other. Is one more portable than the others? That would be my assumption.
Also, if you know of any other ANSI escape sequences, feel free to share them in the comments or at the end of your answer.
If you don't know what an ANSI escape sequence is or want to become more familiar with them, then here you go: http://en.wikipedia.org/wiki/ANSI_escape_code
NOTE:
All of the escape sequences above have worked on all of the Unix systems I have been on; however, one must still rely on the system itself to interpret the escape codes. Windows, for example, does not support any escape codes except four (BEL, LF or line feed, CR or carriage return, and of course BS or backspace), so ANSI escape sequences will not work.
Short answer: It depends on the host string parser.
Long answer:
It depends on the string parser; that is, the piece of code that actually takes in your string ("\x1b[1mSome string\x1b[0m") as a literal and parses the escape characters using the backslash ANSI escape sequence.
For parsers that support hexadecimal escapes (\x), then \x1b (character 0x1B) should work.
For parsers that support octal escapes (\ddd), then \033 (octal 33) should work.
For parsers that support unicode escapes (\u), then \u001B should work.
Quick elaboration: \x and \u are similar; \x usually refers to a single character, 0-255, in hexadecimal radix. \u means the same (it is also written in hexadecimal), but supports two bytes (in most parsers) and generally refers to 16-bit Unicode characters.
A lesser used/supported escape character, as you mentioned, is \e. This escape is most commonly used with parsers/languages that expect a lot of ANSI escaping to happen, such as bash (and most other shells).
For instance, Node.js does not support \e:
> console.log("\x1b[31mhello\x1b[0m")
hello
undefined
> console.log("\e[31mhello\e[0m")
e[31mhelloe[0m
undefined
Neither does Lua:
> print('\x1b[31mhello\x1b[0m')
hello
> print('\e[31mhello\e[0m')
stdin:1: invalid escape sequence near '\e'
Or even Python:
>>> print("\x1b[31mhello\x1b[0m")
hello
>>> print("\e[31mhello\e[0m")
\e[31mhello\e[0m
>>>
Though PHP does:
<?php
echo "\x1b[31mhello\x1b[0m\n"; // hello
echo "\e[31mhello\e[0m\n"; // hello

String length difference between ruby 1.8 and 1.9

I have a website that's running on Ruby 1.8.7. I have a validation on an incoming post that checks to make sure that we allow up to a maximum of 12000 characters. The spaces are counted as characters, and tabs and carriage returns are stripped off before the post is subjected to the validation.
Here is the post that is subjected to validation http://pastie.org/5047582
In Ruby 1.9 the string length shows up as 11909, which is correct. But when I check the length on Ruby 1.8.7 it turns out to be 12044.
I used codepad.org to run this Ruby code, which gives me http://codepad.org/OxgSuKGZ (it outputs the length as 12044, which is wrong), but when I run the same code in the console at codeacademy.org the string length is 11909.
Can anybody explain why this is happening?
Thanks
This is a Unicode issue. The string you are using contains characters outside the ASCII range, and the UTF-8 encoding that is frequently used encodes those as 2 (or more) bytes.
Ruby 1.8 did not handle Unicode properly, and length simply gives the number of bytes in the string, which results in fun stuff like:
"ą".length
=> 2
Ruby 1.9 has better Unicode handling. This includes length returning the actual number of characters in the string, as long as Ruby knows the encoding:
"ä".length
=> 1
One possible workaround in Ruby 1.8 is using regular expressions, which can be made Unicode aware:
"ą".scan(/./mu).size
=> 1
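For completeness, Ruby 1.8 also ships with the jcode library, which adds a character-aware length method once $KCODE is set; a small sketch, assuming the string really is UTF-8:
# Ruby 1.8 only: jcode adds String#jlength, which counts characters.
$KCODE = 'u'
require 'jcode'
"ą".length    # => 2 (bytes)
"ą".jlength   # => 1 (characters)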

Rails, Heroku and invalid byte sequence in UTF-8 error

I have a queue of text messages in Redis. Let's say a message in redis is something like this:
"niño"
(spot the non-standard character).
The Rails app displays the queue of messages. When I test locally (Rails 3.2.2, Ruby 1.9.3) everything is fine, but on Heroku cedar (Rails 3.2.2, and I believe Ruby 1.9.2) I get the infamous error: ActionView::Template::Error (invalid byte sequence in UTF-8)
After reading and rereading all I could find online I am still stuck as to how to fix this.
Any help or point to the right direction is greatly appreciated!
edit:
I managed to find a solution. I ended up using Iconv:
string = Iconv.iconv('UTF-8', 'ISO-8859-1', message)[0]
None of the suggested answers I found elsewhere seemed to work in my case.
On Heroku, when your app receives the message "niño" from Redis, it is actually getting the four bytes:
0x6e 0x69 0xf1 0x6f
which, when interpreted as ISO-8859-1 correspond to the characters n, i, ñ and o.
However, your Rails app assumes that these bytes should be interpreted as UTF-8, and at some point it tries to decode them this way. The third byte in this sequence, 0xf1, looks like this in binary:
1 1 1 1 0 0 0 1
If you compare this to the table on the Wikipedia page, you can see this byte is the leading byte of a four-byte character (it matches the pattern 11110xxx), and as such should be followed by three continuation bytes that all match the pattern 10xxxxxx. It's not: instead the next byte is 0x6f (01101111), so this is an invalid UTF-8 byte sequence and you get the error you see.
Using:
string = message.encode('utf-8', 'iso-8859-1')
(or the Iconv equivalent) tells Ruby to read message as ISO-8859-1 encoded, and then to create the equivalent string in UTF-8 encoding, which you can then use without problems. (An alternative could be to use force_encoding to tell Ruby the correct encoding of the string, but that will likely cause problems later when you try to mix UTF-8 and ISO-8859-1 strings).
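To make the difference concrete, here is a small sketch (Ruby 1.9+, using the four ISO-8859-1 bytes from above):
bytes = "ni\xF1o".force_encoding('ISO-8859-1')
bytes.encode('UTF-8')                                 # => "niño"  (0xF1 is transcoded to 0xC3 0xB1)
bytes.dup.force_encoding('UTF-8')                     # relabels the same bytes; nothing is transcoded
bytes.dup.force_encoding('UTF-8').valid_encoding?     # => false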
In UTF-8, the string "niño" corresponds to the bytes:
0x6e 0x69 0xc3 0xb1 0x6f
Note that the first, second and last bytes are the same. The ñ character is encoded as the two bytes 0xc3 0xb1. If you write these out in binary and compare them to the table in the Wikipedia article again, you'll see they encode codepoint 0xf1, which is the ISO-8859-1 encoding of ñ (since the first 256 Unicode codepoints match ISO-8859-1).
If you take these five bytes and treat them as ISO-8859-1, then they correspond to the string
niÃ±o
Looking at the ISO-8859-1 codepage, 0xc3 maps to Ã, and 0xb1 maps to ±.
So what's happening on your local machine is that your app is receiving the five bytes 0x6e 0x69 0xc3 0xb1 0x6f from Redis, which is the UTF-8 representation of "niño". On Heroku it's receiving the four bytes 0x6e 0x69 0xf1 0x6f, which is the ISO-8859-1 representation.
The real fix to your problem will be to make sure the strings being put into Redis are all already UTF-8 (or at least all the same encoding). I haven't used Redis, but from what I can tell from a brief Google, it doesn't concern itself with string encodings but simply gives back whatever bytes it's been given. You should look at whatever process is putting the data into Redis, and ensure that it handles the encoding properly.
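As a rough illustration of that last point, here is a sketch (assuming the redis-rb gem and Ruby 1.9+; the queue name and helper are hypothetical) of normalizing everything to UTF-8 on the way into Redis:
require 'redis'

# Assumption: any non-UTF-8 input from legacy producers is ISO-8859-1.
def to_utf8(message)
  if message.encoding == Encoding::UTF_8 && message.valid_encoding?
    message
  else
    message.encode('UTF-8', 'ISO-8859-1')
  end
end

redis = Redis.new
redis.rpush('messages', to_utf8("ni\xF1o".force_encoding('ISO-8859-1')))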

In what charset is 0xE1 an "a" with an umlaut?

I am trying to identify an extended ASCII charset where 0xE1 is an "a" with an umlaut (8859-1 character E4) and 0xF5 is a "u" with an umlaut (8859-1 character FC).
Has anyone seen this charset before? It quite possibly dates back to the 80's.
To my knowledge, there are no standard ASCII character sets which use those symbols. Here is a list of the standard character sets: http://www.columbia.edu/kermit/csettables.html
However, the character set you referenced is in fact used for interfacing with some LED displays; for example, HT1632-compliant LEDs use the same character set: http://blog.thiseldo.co.uk/wp-filez/USB_HT1632_Matrix.pde
I hope this helps.
These are in the commonly used ISO-8859-1 character set, also known as latin-1.

Parsing \"–\" with Erlang re

I've parsed an HTML page with mochiweb_html and want to parse the following text fragment
0 – 1
Basically I want to split the string on the spaces and the dash character and extract the numbers.
Now the string above is represented as the following Erlang list
[48,32,226,128,147,32,49]
I'm trying to split it using the following regex:
{ok, P}=re:compile("\\xD2\\x80\\x93"), %% characters 226, 128, 147
re:split([48,32,226,128,147,32,49], P, [{return, list}])
But this doesn't work; it seems the \xD2 character is the problem [if I remove it from the regex, the split occurs]
Could someone possibly explain:
what I'm doing wrong here?
why the '–' character seemingly requires three integers [226, 128, 147] for representation?
Thanks.
226,128,147 is E2,80,93 in hex.
> {ok, P} = re:compile("\xE2\x80\x93").
...
> re:split([48,32,226,128,147,32,49], P, [{return, list}]).
["0 "," 1"]
As to your second question, about why a dash takes 3 bytes to encode, it's because the dash in your input isn't an ASCII hyphen (hex 2D), but a Unicode en dash (hex 2013). Your code is receiving this in UTF-8 encoding, rather than the more obvious UCS-2 encoding. Hex 2013 comes out to hex E28093 in UTF-8 encoding.
If your next question is "why UTF-8", it's because it's far easier to retrofit an old system using 8-bit characters and null-terminated C style strings to use Unicode via UTF-8 than to widen everything to UCS-2 or UCS-4. UTF-8 remains compatible with ASCII and C strings, so the conversion can be done piecemeal over the course of years, or decades if need be. Wide characters require a "Big Bang" one-time conversion effort, where everything has to move to the new system at once. UTF-8 is therefore far more popular on systems with legacies dating back to before the early 90s, when Unicode was created.
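If you want to sanity-check those byte values, a quick look in Ruby (the language used elsewhere on this page; irb on 1.9+):
"–".unpack('U*').map { |c| c.to_s(16) }   # => ["2013"]              codepoint U+2013
"–".bytes.map { |b| b.to_s(16) }          # => ["e2", "80", "93"]    its UTF-8 bytes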
