In Erlang how do I convert a String to a binary value? - erlang

In Erlang how do I convert a string to a binary value?
String = "Hello"
%% should be
Binary = <<"Hello">>

In Erlang strings are represented as lists of integers, so you can use the list_to_binary/1 built-in function (BIF). Here is an example I ran in the Erlang console (started with erl):
1> list_to_binary("hello world").
<<"hello world">>

The Unicode (UTF-8/16/32) character set needs more than one byte to express characters whose code points are larger than a byte. That is why a list_to_binary/1 call fails for any value in the list greater than 255, the limit of what a single byte can hold (which is sufficient for ISO-8859-1/ASCII/Latin-1).
To handle Unicode characters correctly, use unicode:characters_to_binary/1 (from the unicode module, available since R13) instead, which can handle both Latin-1 and Unicode input.
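For example, a quick shell check with the euro sign (code point 8364, which does not fit in a single byte):
1> list_to_binary([8364]).
** exception error: bad argument
in function list_to_binary/1
called as list_to_binary([8364])
2> unicode:characters_to_binary([8364]).
<<226,130,172>>
3> unicode:characters_to_binary("hello").
<<"hello">>
The first call fails because 8364 needs more than one byte; unicode:characters_to_binary/1 encodes it as three UTF-8 bytes instead, and plain ASCII/Latin-1 input still comes through unchanged.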
HTH ...

Related

Converting HEX to ASCII in Lua Dissector

I'm trying to take HEX bytes and display them as their ASCII values. If someone could point me reasonably firmly in the right direction I'd be obliged. Tried any number of uint-type commands, and working with buffer(x, 2) as an argument.
I'm not sure what you mean by hex bytes, but the relevant functions are:
string.byte, which converts chars to numerical codes
string.char, which converts numerical codes to chars
For a single character given in hexadecimal, you can use string.char (together with tonumber(hex, 16)) from the functions lhf lists. For longer sequences, you can write a loop in Lua, but that is not very efficient since it involves a lot of copying.
Since Wireshark 1.11.3 there is a Struct.fromhex function that converts a string of hexadecimal characters to the binary equivalent.
Example:
-- From hex to bytes (with no separators)
assert(Struct.fromhex("5753") == "WS")
-- From hex to bytes (using a single space as separator)
assert(Struct.fromhex("57 53", " ") == "WS")
Similarly, there is a Struct.tohex function that converts from bytes to hex.

how to convert String to bytes in Erlang

I am trying to implement the SMPP protocol in Erlang and have hit a dead end trying to convert strings (e.g. a username and password) to bytes in order to build a PDU to send to the SMSC. Searching and reading various materials has not helped.
Kindly advise on a way I can achieve this.
Probably the first thing to note is that there's no special type for strings in Erlang, so strings are actually represented as lists of integers:
1> [116, 101, 115, 116].
"test"
So, keeping that in mind, your question actually becomes "how do I convert a list of integers to bytes". That is pretty straightforward with the list_to_binary/1 function for strings of 8-bit characters:
1> list_to_binary("test").
<<"test">>
2> list_to_binary([0, 255]).
<<0,255>>
However, if you have a Unicode string, list_to_binary/1 will raise a badarg error (note also how the original string is represented in the error message):
1> list_to_binary("тест").
** exception error: bad argument
in function list_to_binary/1
called as list_to_binary([1090,1077,1089,1090])
In this case, functions from the unicode module can be used. For example, to convert a Unicode string to a UTF-8 binary, use unicode:characters_to_binary/3 (there are also unicode:characters_to_binary/1 and unicode:characters_to_binary/2):
1> unicode:characters_to_binary("тест", unicode, utf8).
<<209,130,208,181,209,129,209,130>>
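The shorter arities default to Unicode for both the input and the output encoding, so for a plain list of code points the one-argument form gives the same UTF-8 binary. A quick shell check (with default settings the shell prints the decoded result as a list of code points, matching the error message above):
1> unicode:characters_to_binary("тест").
<<209,130,208,181,209,129,209,130>>
2> unicode:characters_to_list(<<209,130,208,181,209,129,209,130>>).
[1090,1077,1089,1090]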

Showing wrong character for a Unicode value in iOS

I am working on an iOS app that handles Unicode characters, but it seems there is some problem with translating a Unicode hex value (and int value too) to a character.
For example, I want to get character 'đ' which has Unicode value of c491, but after this code:
NSString *str = [NSString stringWithUTF8String:"\uc491"];
The value of str is not 'đ' but '쓉' (a Korean word) instead.
I also used:
int c = 50321; // 50321 is int value of 'đ'
NSString *str = [NSString stringWithCharacters: (unichar *)&c length:1];
But the results of two above pieces of code are the same.
I can't understand what is problem here, please help!
The short answer
To specify đ, you can write it in any of the following ways (untested):
@"đ"
@"\u0111"
@"\U00000111"
[NSString stringWithUTF8String: "\u0111"]
[NSString stringWithUTF8String: "\xc4\x91"]
Note that the last two lines use C string literals instead of the Objective-C string object literal construct @"...".
As a short explanation, \u0111 is the Unicode escape sequence for đ, where U+0111 is the code point for the character đ.
The last example shows how you would specify the UTF-8 encoding of đ (which is c4 91) in a C string literal, then convert the bytes in UTF-8 encoding into proper characters.
The examples above are adapted from this answer and this blog post. The blog also covers the tricky situation with characters beyond Basic Multilingual Plane (Plane 0) in Unicode.
Unicode escape sequences (Universal character names in C99)
According to this blog1:
Unicode escape sequences were added to the C language in the TC2 amendment to C99, and to the Objective-C language (for NSString literals) with Mac OS X 10.5.
Page 65 of the C99 TC2 draft shows that \unnnn and \Unnnnnnnn are universal character names, where nnnn or nnnnnnnn is a "short identifier as defined by ISO/IEC 10646", which roughly means a hexadecimal code point. Note that:
A universal character name shall not specify a character whose short identifier is less than 00A0 other than 0024 ($), 0040 (@), or 0060 (`), nor one in the range D800 through DFFF inclusive.
Character set vs. Character encoding
It seems that you are confused between the code point U+0111 and the UTF-8 encoding c4 91 (the representation of the character as bytes). UTF-8 is one of the encodings for the Unicode character set, and a code point is a number assigned to a character in a character set. This Wikipedia article explains the difference in meaning quite clearly:
A coded character set (CCS) specifies how to represent a repertoire of characters using a number of (typically non-negative) integer values called code points. [...]
A character encoding form (CEF) specifies the conversion of a coded character set's integer codes into a set of limited-size integer code values that facilitate storage in a system that represents numbers in binary form using a fixed number of bits [...]
There are other encodings, such as UTF-16 and UTF-32, which may give different byte representations of the character on disk, but since UTF-8, UTF-16 and UTF-32 are all encodings for the Unicode character set, the code point for the same character is the same across all three encodings.
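To make this concrete, here is a quick check of the same code point under the three encodings (done in an Erlang shell only because the other examples on this page use one; 16#0111 is the code point of đ, and the utf16/utf32 atoms default to big-endian):
1> unicode:characters_to_binary([16#0111], unicode, utf8).
<<196,145>>
2> unicode:characters_to_binary([16#0111], unicode, utf16).
<<1,17>>
3> unicode:characters_to_binary([16#0111], unicode, utf32).
<<0,0,1,17>>
The bytes differ (c4 91 vs 01 11 vs 00 00 01 11), but the code point U+0111 is the same in every case.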
Footnote
1: I think the blog is correct, but if anyone can find official documentation from Apple on this point, it would be better.

string.sub in Corona Lua crashes with ÅÄÖ

this snippet crashes my simulator bad.
s = "stämma"
s1 = string.sub(s,3,3)
print(s1)
It seems like it handles my character as nil, any ideas?
Joakim
I assume you are using UTF-8 encoding.
In UTF-8, a character can have a variable number of bytes, between 1 to 4. The "ä" character (228) is encoded with the two bytes 0xC3 0xA4.
The instruction string.sub(s, 3, 3) returns the third byte from the string (0xC3), and not the third character. As this byte alone is invalid UTF-8, Corona can't display the character.
See also Extract the first letter of a UTF-8 string with Lua

Parsing \"–\" with Erlang re

I've parsed an HTML page with mochiweb_html and want to parse the following text fragment
0 – 1
Basically I want to split the string on the spaces and the dash character and extract the numbers.
Now the string above is represented as the following Erlang list
[48,32,226,128,147,32,49]
I'm trying to split it using the following regex:
{ok, P}=re:compile("\\xD2\\x80\\x93"), %% characters 226, 128, 147
re:split([48,32,226,128,147,32,49], P, [{return, list}])
But this doesn't work; it seems the \xD2 character is the problem [if I remove it from the regex, the split occurs]
Could someone possibly explain
what I'm doing wrong here ?
why the '–' character seemingly requires three integers for representation [226, 128, 147]
Thanks.
226,128,147 is E2,80,93 in hex.
> {ok, P} = re:compile("\xE2\x80\x93").
...
> re:split([48,32,226,128,147,32,49], P, [{return, list}]).
["0 "," 1"]
As to your second question, about why a dash takes 3 bytes to encode, it's because the dash in your input isn't an ASCII hyphen (hex 2D), but a Unicode en-dash (hex 2013). Your code is receiving this in UTF-8 encoding, rather than the more obvious UCS-2 encoding. Hex 2013 comes out to hex E28093 in UTF-8 encoding.
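You can verify that byte sequence directly in the shell (16#2013 is the en-dash code point):
1> unicode:characters_to_binary([16#2013]).
<<226,128,147>>
2> [16#E2, 16#80, 16#93].
[226,128,147]
That is exactly the 226,128,147 sub-sequence in your original list, which is why matching on \xE2\x80\x93 splits it correctly.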
If your next question is "why UTF-8", it's because it's far easier to retrofit an old system using 8-bit characters and null-terminated C style strings to use Unicode via UTF-8 than to widen everything to UCS-2 or UCS-4. UTF-8 remains compatible with ASCII and C strings, so the conversion can be done piecemeal over the course of years, or decades if need be. Wide characters require a "Big Bang" one-time conversion effort, where everything has to move to the new system at once. UTF-8 is therefore far more popular on systems with legacies dating back to before the early 90s, when Unicode was created.
