ASCII Space Character in Ruby Unicode Escape

Doing some reading and came across this block of code on the topic of Unicode Escapes in Ruby:
money = "\u{20AC 20 A3 20 A5}" # => "€ £ ¥"
I understand that in this Ruby syntax the literal spaces between the braces don't output an encoded space; they just separate the values, which is why there's a code point 20 for the space. What I don't understand is why there's a 20 at the very beginning of the braces, right after the \u. No space is output at the start of the result, and I copied the code verbatim from the book.

It's not a 20 at the beginning, it's 20AC, which is the code point for €. The contents of the braces are a space-separated list of code points (in hex). 20AC is €, 20 is a space, A3 is £, and A5 is ¥.
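For comparison, here is a minimal Python sketch (illustrative only, not the Ruby mechanism itself) that assembles the same string from those hex code points:

# Build "€ £ ¥" from the code points listed inside Ruby's \u{...} escape.
code_points = [0x20AC, 0x20, 0xA3, 0x20, 0xA5]
money = "".join(chr(cp) for cp in code_points)
print(money)  # => € £ ¥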

Related

How can I convert a 4-byte string into a Unicode emoji?

A web service I use from my Delphi 10.3 application returns a string to me consisting of these four bytes: F0 9F 99 82. I expect a slightly smiling emoji. This site shows this byte sequence as the UTF-8 representation of that emoji. So I guess I have a UTF-8 representation in my string, but is it an actual Unicode string? How do I convert my string into the actual Unicode representation, to show it, for example, in a TMemo?
The character 🙂 has the Unicode code point U+1F642. Displaying text is governed by an encoding, which defines how a sequence of bytes is to be interpreted:
in UTF-8 one character can consist of 8, 16, 24 or 32 bits (1 to 4 Bytes); this one is $F0 $9F $99 $82.
in UTF-16 one character can consist of 16 or 32 bits (2 or 4 bytes = 1 or 2 Words); this one is $D83D $DE42 (using surrogates).
in UTF-32 one character always consists of 32 bits (4 bytes = 1 Cardinal or DWord) and always equals the code point, that is $1F642.
In Delphi, you can use:
TEncoding.UTF8.GetString() for UTF-8
(or TEncoding.Unicode.GetString() if you had UTF-16LE,
and TEncoding.BigEndianUnicode.GetString() if you had UTF-16BE).
Keep in mind that 🙂 is just a character like every letter, symbol, and whitespace character in this text: it can be selected (e.g. Ctrl+A) and copied to the clipboard (e.g. Ctrl+C). No special care is needed.
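For illustration, here is the same decoding done in Python; it is a sketch of the byte-level behavior (the Delphi call TEncoding.UTF8.GetString() performs the equivalent conversion):

# The four raw bytes from the web service, decoded as UTF-8.
raw = bytes([0xF0, 0x9F, 0x99, 0x82])
emoji = raw.decode("utf-8")
print(emoji)                            # 🙂
print(hex(ord(emoji)))                  # 0x1f642, i.e. code point U+1F642
print(emoji.encode("utf-16-be").hex())  # d83dde42, the UTF-16 surrogate pair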

Lua pattern match around comma

I have several small place marks such as 'א,א' 'א,ב'. Using the comma as the center point, I need at most 2 characters before the comma and everything up to the next space after the comma.
I have (.-,.-)%s but it's not doing what I need. Any ideas?
Also, as you can see, these are not Latin letters, so using %l will not work.
There are a couple of issues here. First, a minor one: .-, will match as little as possible before the comma, that is, zero characters. You should anchor the beginning of the matched string.
The more complicated issue is that you use Hebrew letters. The problem is that Lua has no concept of multi-byte characters.
If you use an 8-bit encoding such as Windows-1255 or ISO-8859-8, then you can probably simply match against a character class [ת-א]. If you have a properly set Hebrew locale, %l should work fine for you.
If you use UTF-8 or any other encoding with multi-byte characters, then you must construct a pattern that has all the Hebrew letters escaped as sequences of octets. Aleph is U+05D0, which in UTF-8 is represented as 0xD7 0x90. Tav is U+05EA, which is encoded as 0xD7 0xAA.
In Lua you can escape any 8-bit character with a backslash plus its decimal code. All the Hebrew characters encoded in UTF-8 share the same first byte, 0xD7, that is "\215". The second byte can be anything from "\144" to "\170". Thus, the pattern that matches a single Hebrew letter is "\215[\144-\170]". Put that in your original pattern wherever you had a single dot matching any character.
Of course, the above reasoning must be modified for encodings different than UTF-8. Right-to-left writing direction in Hebrew is another thing to keep in mind.
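To see the same byte-level trick outside Lua, here is a small Python sketch using a bytes regex (the sample text and the exact pattern shape are illustrative assumptions):

import re

# One UTF-8-encoded Hebrew letter: lead byte 0xD7 followed by a byte in
# 0x90..0xAA -- the byte-level equivalent of the Lua class "\215[\144-\170]".
HEB = rb"(?:\xd7[\x90-\xaa])"

text = "foo \u05d0,\u05d1 bar".encode("utf-8")
# Up to two Hebrew letters, a comma, then Hebrew letters up to the next space.
pattern = re.compile(HEB + rb"{1,2}," + HEB + rb"+")
print(pattern.search(text).group().decode("utf-8"))  # א,ב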

Load testing tool URL encoding system

I have a load testing tool (Borland's SilkPerformer) that is encoding the / character as \x252f. Can anyone please tell me what encoding scheme the tool might be using?
Two different escape sequences have been combined:
a C string hexadecimal escape sequence.
a URL-encoding scheme (percent-encoding).
See this little diagram:
+---------> C escape in hexadecimal notation
:  +------> hexadecimal number, ASCII for '%'
:  :  +---> hexadecimal number, ASCII for '/'
\x 25 2f
Explained step by step:
\ starts a C string escape sequence.
x is an escape in hexadecimal notation. Two hex digits are expected to follow.
25 is % in ASCII, see ASCII Table.
% starts a URL escape, also called percent-encoding. Two hex digits are expected to follow.
2f is the slash character (/) in ASCII.
The slash is the result.
Now, I don't know why your software chooses to encode the slash character in such a roundabout way. Slash characters in URLs need to be URL-encoded when they don't denote directory separators (much as the backslash does on Windows), so you will often find the slash encoded as %2f. That's normal. But it is odd, and a bit suspicious, that the percent character is additionally encoded as a C string hexadecimal escape sequence.
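A small Python sketch that peels off the two layers in order (the input literal is taken from the question):

from urllib.parse import unquote

# Layer 1: the C-style hex escape \x25 decodes to "%", leaving "%2f".
raw = r"\x252f"
step1 = raw.encode("ascii").decode("unicode_escape")
# Layer 2: percent-decoding turns "%2f" into "/".
step2 = unquote(step1)
print(step1, step2)  # %2f /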

In which encoding is 0xDB a currency symbol?

I received files which, sadly, I cannot get info about how they were generated. I need to parse these files.
The file is entirely ASCII except for one character: 0xDB (219 in decimal).
Obviously (from looking at the file) this character is a currency symbol. I know this because:
it is mandatory for these files to contain a currency symbol anywhere an amount appears
there is no other currency symbol ($, €, or anything else) anywhere in the files
every time 0xDB appears, it's next to an amount
I think that in these files 0xDB is supposed to represent the euro symbol (it is in fact very likely that 0xDB appears everywhere a euro symbol is supposed to appear).
The file command says this about the files:
ISO-8859 English text, with CRLF, LF line terminators
A hexdump gives this:
00000030 71 75 61 6e 74 20 db 32 2e 36 30 0a 20 41 49 4d |quant .2.60. AIM|
                           ^^                                   ^
The files are otherwise normally formatted and parsable. In fact I'm extracting all the information fine except for that weird 0xDB character.
Does anyone know what's going on? How did a currency symbol (supposedly the euro symbol) somehow become a 0xDB?
It's neither ISO-8859-1 (aka ISO Latin 1) nor ISO-8859-15, because in both cases code point 219 corresponds to 'Û' (just as Unicode code point 219 is 'LATIN CAPITAL LETTER U WITH CIRCUMFLEX').
It's not extended ASCII.
It could be Mac OS Roman.
It's MacRoman. In fact it has to be -- that's the only charset in which the Euro sign maps to 0xDB.
Here's the full charset mapping for MacRoman.
Using the macroman script, one learns:
$ macroman 0xDB
MacRoman DB ⇒ U+20AC ‹€› \N{ EURO SIGN }
You can go the other way, too:
$ macroman U+00E9
MacRoman 8E ⇐ U+00E9 ‹é› \N{ LATIN SMALL LETTER E WITH ACUTE }
And we know that U+20AC EURO SIGN is indeed a currency symbol because of the uniprops script’s output:
$ uniprops -a U+20AC
U+20AC <€> \N{ EURO SIGN }:
\pS \p{Sc}
All Any Assigned InCurrencySymbols Common Zyyy Currency_Symbol Sc Currency_Symbols S Gr_Base Grapheme_Base Graph GrBase Print Symbol X_POSIX_Graph X_POSIX_Print
Age=2.1 Bidi_Class=ET Bidi_Class=European_Terminator BC=ET Block=Currency_Symbols Canonical_Combining_Class=0 Canonical_Combining_Class=Not_Reordered CCC=NR Canonical_Combining_Class=NR Script=Common Decomposition_Type=None DT=None East_Asian_Width=A East_Asian_Width=Ambiguous EA=A Grapheme_Cluster_Break=Other GCB=XX Grapheme_Cluster_Break=XX Hangul_Syllable_Type=NA Hangul_Syllable_Type=Not_Applicable HST=NA Joining_Group=No_Joining_Group JG=NoJoiningGroup Joining_Type=Non_Joining JT=U Joining_Type=U Line_Break=PR Line_Break=Prefix_Numeric LB=PR Numeric_Type=None NT=None Numeric_Value=NaN NV=NaN Present_In=2.1 IN=2.1 Present_In=3.0 IN=3.0 Present_In=3.1 IN=3.1 Present_In=3.2 IN=3.2 Present_In=4.0 IN=4.0 Present_In=4.1 IN=4.1 Present_In=5.0 IN=5.0 Present_In=5.1 IN=5.1 Present_In=5.2 IN=5.2 Present_In=6.0 IN=6.0 SC=Zyyy Script=Zyyy Sentence_Break=Other SB=XX Sentence_Break=XX Word_Break=Other WB=XX Word_Break=XX _X_Begin
0xDB represents the Euro sign in the Mac OS Roman character encoding.
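Python ships a built-in mac_roman codec, so the mapping is easy to confirm; the sample bytes below are taken from the hexdump in the question:

# Bytes from the question's hexdump, decoded as Mac OS Roman.
data = bytes([0x71, 0x75, 0x61, 0x6E, 0x74, 0x20, 0xDB, 0x32, 0x2E, 0x36, 0x30])
print(data.decode("mac_roman"))           # quant €2.60
print(bytes([0xDB]).decode("mac_roman"))  # €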

Parsing \"–\" with Erlang re

I've parsed an HTML page with mochiweb_html and want to parse the following text fragment
0 – 1
Basically I want to split the string on the spaces and the dash character and extract the numbers from the resulting parts.
Now the string above is represented as the following Erlang list
[48,32,226,128,147,32,49]
I'm trying to split it using the following regex:
{ok, P}=re:compile("\\xD2\\x80\\x93"), %% characters 226, 128, 147
re:split([48,32,226,128,147,32,49], P, [{return, list}])
But this doesn't work; it seems the \xD2 character is the problem [if I remove it from the regex, the split occurs]
Could someone possibly explain:
what I'm doing wrong here?
why the '–' character seemingly requires three integers for representation [226, 128, 147]
Thanks.
226, 128, 147 is E2, 80, 93 in hex, so your pattern needs \xE2 where you wrote \xD2 (which is 210, not 226):
> {ok, P} = re:compile("\xE2\x80\x93").
...
> re:split([48,32,226,128,147,32,49], P, [{return, list}]).
["0 "," 1"]
As to your second question, about why a dash takes 3 bytes to encode: the dash in your input isn't an ASCII hyphen (hex 2D) but a Unicode en-dash (hex 2013). Your code is receiving it in UTF-8 encoding rather than the more obvious UCS-2 encoding. Hex 2013 comes out to hex E2 80 93 in UTF-8.
If your next question is "why UTF-8", it's because it's far easier to retrofit an old system using 8-bit characters and null-terminated C style strings to use Unicode via UTF-8 than to widen everything to UCS-2 or UCS-4. UTF-8 remains compatible with ASCII and C strings, so the conversion can be done piecemeal over the course of years, or decades if need be. Wide characters require a "Big Bang" one-time conversion effort, where everything has to move to the new system at once. UTF-8 is therefore far more popular on systems with legacies dating back to before the early 90s, when Unicode was created.
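The three-byte encoding is easy to verify; here is a quick Python check (illustrative, outside Erlang):

# U+2013 EN DASH encodes to three bytes in UTF-8.
print("\u2013".encode("utf-8").hex(" "))   # e2 80 93
print(list("0 \u2013 1".encode("utf-8")))  # [48, 32, 226, 128, 147, 32, 49]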
