How do low-level character encodings work? - character-encoding

Let's say I have a text file called sometext.txt.
It has a line - "Sic semper tyrannis" which is (correct me if I'm wrong...)
83 105 99 32 115 101 109 112 101 114 32 116 121
114 97 110 110 105 115
(in decimal ASCII)
When I read this line from the file using standard library file I/O routines, I don't perform any character encoding work... (or do I?)
The question is:
Which software component actually converts the 0s and 1s into characters (i.e. contains the algorithm for converting 0s and 1s into characters)? Is it an OS component? Which one?

It's all a bunch of 1's and 0's.
An ASCII "A" is just the letter displayed when the value (01000001b, or 0x41 or 65 dec) is "encountered" (depend on context, naturally). There is no "conversion"; it's just a different view of the same thing defined by an accepted mapping.
Unicode (and other multi-byte) character sets often use different encodings; in UTF-8 (a Unicode encoding), for instance, a single Unicode character can be encoded as 1, 2, 3 or 4 bytes depending on the character. Unicode encoding conversion often takes place in the I/O libraries that come as part of a language or runtime; however, a Unicode-aware operating system also needs to understand a Unicode encoding itself (in system calls), so the line can be blurred.
UTF-8 has the nice property that all plain ASCII characters map to a single byte, which makes it the Unicode encoding most compatible with traditional ASCII.
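For example, a quick illustration in Python (just a sketch; any language's Unicode API will show the same byte counts):
print('A'.encode('utf-8'))    # b'A' - one byte, identical to ASCII 0x41
print('é'.encode('utf-8'))    # b'\xc3\xa9' - two bytes
print('€'.encode('utf-8'))    # b'\xe2\x82\xac' - three bytes
print('😀'.encode('utf-8'))   # b'\xf0\x9f\x98\x80' - four bytes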

First, I recommend that you read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!).
When I read this line from the file using standard library file I/O routines, I don't perform any character encoding work... (or do I?)
That depends heavily on which standard library you mean.
In C, when you write:
FILE* f = fopen("filename.txt", "w");
fputs("Sic semper tyrannis", f);
No encoding conversion is performed; the chars in the string are just written to the file as-is (except for line-break translation in text mode). (Encoding is relevant when you're editing the source file.)
But in Python 3.x, when you write:
f = open('filename.txt', 'w', encoding='UTF-8')
f.write('Sic semper tyrannis')
The write function performs a conversion from Python's internal Unicode representation of the str type to the UTF-8 encoding used on disk.
The question is: Which software component actually converts 0s and 1s into characters (i.e. contains the algorithm for converting 0s and 1s into characters)? Is it an OS component? Which one?
The decoding function (like MultiByteToWideChar or bytes.decode) for the appropriate character encoding converts the bytes into Unicode code points, which are integers that uniquely identify characters. A font converts code points to glyphs, the images of the characters that appear on screen or paper.
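For instance, a rough Python sketch of that decoding step (not the Windows API, just the same idea):
raw = bytes([83, 105, 99])           # the first three bytes from the file: "Sic"
text = raw.decode('ascii')           # the decoding function: bytes -> Unicode code points
print(text)                          # Sic
print([ord(ch) for ch in text])      # [83, 105, 99] -> code points U+0053, U+0069, U+0063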

Which software component actually converts 0s and 1s into characters (i.e. contains the algorithm for converting 0s and 1s into characters)?
This depends on what language you're using. For example, Python has character encoding functions:
>>> f = open( ...., 'rb')
>>> data = f.read()
>>> data.decode('utf-8')
u'café'
Here, Python has converted a sequence of bytes into a Unicode string. The exact component is typically a library or program in userspace, but some compilers need knowledge of character encodings.
Underneath, it's all a sequence of bytes, which are 1s and 0s. However, given a sequence of bytes, which characters do they represent? ASCII is one such "character encoding", and tells us how to encode or decode A-Z, a-z, and a few more. There are many others, notably UTF-8 (an encoding of Unicode). In the end, if you're dealing with text, you need to know which character encoding it was encoded with.
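As a small illustration (Python, assuming a UTF-8 encoded byte string), the same bytes give different characters depending on which encoding you assume:
data = 'café'.encode('utf-8')    # the bytes 63 61 66 C3 A9
print(data.decode('utf-8'))      # café   (correct assumption)
print(data.decode('latin-1'))    # cafÃ©  (wrong assumption -> mojibake)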

Like DrStrangeLove says, it's 1s & 0s all the way to your display screen and beyond - the 'A' character is an array of pixels whose color/brightness is defined by bits in the display driver. Turning that pixel array into an understandable character needs a bio-electro-chemical video camera connected to 10^11 threshold-logic gates running an adaptive, massively parallel OS and apps that no one understands, especially after a few beers.
Not exactly sure what you're asking. The 0s and 1s from the file are grouped by the disk driver into the bytes that represent the ASCII codes - it only reads/writes blocks of eight bits. The ASCII code bytes are then rendered into displayable bitmaps by the display driver using the chosen font.
Rgds,
Martin

It has nothing (well, not so much) to do with 0s and 1s. Most character encodings work with whole bytes of 8 bits. Each of the numbers you wrote represents a single byte. In ASCII, every character is a single byte. Besides that, ASCII is a subset of ANSI and UTF-8, making it compatible with the most widely used character sets. ASCII covers only the first half of the byte range: characters up to 127.
For the ANSI characters you need to know which encoding (code page) is in use. ANSI specifies the characters in the upper half of the byte range. In UTF-8, those ANSI characters don't exist; instead, the byte values above 127 represent parts of characters, and a whole character is made of 2 to 4 bytes - except for the 128 ASCII characters, which are still the same old single-byte characters. I think this was mainly done because if UTF-8 weren't compatible with ASCII, there is no way Americans would have adopted it. ;-)
But yes, the OS does have various functions to work with character encodings. Where they live depends on the OS and platform, but if I read your question right, you're not really looking for some specific API. Your question cannot be answered that concretely. There are numerous ways to work with characters, and there is a major difference between working with the actual character data and writing it to the screen (the difference between a character and a font).

Related

How many bits represent ONE character, and how many bits represent one byte, in ASCII?

I know it's simple, but I still don't know it. Some people say there are 7 bits that represent a character, while others say 8. So can anyone just tell me which one is right? If it is 8 bits per character, then how many bits represent a byte? And if it's 7, then how many bits represent a character and how many bits represent ONE byte?
US-ASCII is indeed 7 bits per character. The highest code has value 127, which represents the DEL control character. Any character set that has codes with higher values is not US-ASCII (but may be an extension of it, such as Unicode).
Most microprocessors work with bytes (=smallest addressable unit of storage) of eight bits. If you want to use US-ASCII with these microprocessors, you have two options:
Use 7 bytes (of 8 bits each) to store 8 characters (of 7 bits each), even though that makes programs very complicated (see the packing sketch below).
Use 1 byte (of 8 bits) to store 1 character (of 7 bits), even though you'll waste space.
The need for simple programs outweighs the need for efficient memory use in this case. That's why you usually use one 8-bit unit (an octet) to store a character, even though each character needs only 7 bits. You just set the extra bit to zero (or, as was done in some cases, use the extra bit for error detection).
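Just to illustrate why the first option is complicated, here is a rough sketch (a hypothetical Python helper, not something you'd normally do) that packs 8 seven-bit characters into 7 bytes:
def pack_7bit(text):
    # pack 7-bit ASCII codes into a continuous bit stream (8 chars -> 7 bytes)
    bits, nbits, out = 0, 0, bytearray()
    for code in text.encode('ascii'):
        bits = (bits << 7) | code        # append 7 bits per character
        nbits += 7
        while nbits >= 8:                # emit full bytes as they accumulate
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:                            # pad the last partial byte with zeros
        out.append((bits << (8 - nbits)) & 0xFF)
    return bytes(out)

print(len(pack_7bit("ABCDEFGH")))        # 7 bytes for 8 characters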
I know this is an old question, but for the sake of future readers: you can determine how many bytes are in a given string (or string value) via the following (C#/.NET):
Encoding.ASCII.GetByteCount("SomeString");
Remember to use the proper encoding when you are attempting to count the number of bytes since it is different with each encoding:
An ASCII character in 8-bit ASCII encoding is 8 bits (1 byte), though it can fit in 7 bits.
An ISO-8859-1 character in ISO-8859-1 encoding is 8 bits (1 byte).
A Unicode character in UTF-8 encoding is between 8 bits (1 byte) and 32 bits (4 bytes).
A Unicode character in UTF-16 encoding is between 16 (2 bytes) and 32 bits (4 bytes), though most of the common characters take 16 bits. This is the encoding used by Windows internally.
A Unicode character in UTF-32 encoding is always 32 bits (4 bytes).
An ASCII character in UTF-8 is 8 bits (1 byte), and in UTF-16 - 16 bits.
The additional (non-ASCII) characters in ISO-8859-1 (0xA0-0xFF) would take 16 bits in UTF-8 and UTF-16.
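The same kind of count can be made in Python, for comparison (a sketch only; the -le codecs are used to avoid the byte-order mark that the plain utf-16/utf-32 codecs prepend):
for enc in ('ascii', 'iso-8859-1', 'utf-8', 'utf-16-le', 'utf-32-le'):
    print(enc, len('A'.encode(enc)))     # 1, 1, 1, 2 and 4 bytes respectively
print(len('é'.encode('utf-8')))          # 2 bytes: a non-ASCII ISO-8859-1 character in UTF-8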

What is the relationship between Unicode/UTF-8/UTF-16 and my local encoding, GBK?

I've noticed that my text file from Windows (Chinese version) turns garbled when ported to Ubuntu.
After more research, I know the default encoding on the Chinese version of Windows is GBK, while on Ubuntu it is UTF-8, and that iconv can do the encoding conversion, for example from GBK to UTF-8:
iconv -f gbk -t utf-8 input.txt > output.txt
But I am still confused by the relationship between these encodings. What are they? What are the similarities and differences between them?
First, it is not about the OS, but about the program you are using to read the file.
For a bare .txt file, the program has to guess the encoding, which is not always possible but often works. In an HTML file, the encoding is given as metadata, so browsers don't need to guess.
Second, do you know ASCII? Do you see how it represents symbols via numbers? If not, this is the first thing you should learn now.
Next, do you see the difference between Unicode and UTF-XXX? It must be clear to you that Unicode is just a map of numbers (code points) to glyphs (symbols, including Chinese characters, ASCII characters, Egyptian characters, etc.).
UTF-XXX, on the other hand, says, given a string of bytes, which Unicode numbers (code points) they represent. Therefore, UTF-8 and UTF-16 are different efficient ways to represent Unicode.
As you may imagine, unlike ASCII, both UTF and GBK must allow more than one byte per character, since there are many more than 256 characters.
In GBK all characters are encoded as 1 or 2 bytes.
Since GBK is specialized for Chinese, it uses fewer bytes on average than UTF-XXX to represent a given Chinese text, and more for other languages.
In UTF-8 and 16, the number of bytes per glyph is variable, so you have to look at how many bytes are used for the Chinese code points.
In Unicode, Chinese glyphs occupy several ranges of code points (mainly the CJK ideograph blocks). You then have to look at how efficiently UTF-8 and UTF-16 represent those ranges.
According to the Wikipedia articles on UTF-8 and UTF-16, the first and most common range for Chinese glyphs, 4E00-9FFF, is represented in UTF-8 as 3 bytes per character, while in UTF-16 it is represented as 2 bytes. Therefore, if you are going to use lots of Chinese, UTF-16 might be more efficient. You also have to look at the other ranges to see how many bytes per character are used.
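To see this concretely, a quick Python check (illustrative only):
ch = '中'                            # U+4E2D, in the CJK range 4E00-9FFF
print(len(ch.encode('gbk')))         # 2 bytes in GBK
print(len(ch.encode('utf-8')))       # 3 bytes in UTF-8
print(len(ch.encode('utf-16-le')))   # 2 bytes in UTF-16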
For portability, the best choice is a UTF encoding, since UTF can represent almost any possible character, so it is more likely that viewers will have been programmed to decode it correctly. The size gain of GBK is not that large.

Is ASCII code, as a matter of fact, 7-bit or 8-bit?

My teacher told me ASCII is an 8-bit character coding scheme. But it is defined only for codes 0-127, which means it fits into 7 bits. So can't it be argued that ASCII is actually a 7-bit code?
And what do we even mean when we say ASCII is an 8-bit code?
ASCII was indeed originally conceived as a 7-bit code. This was done well before 8-bit bytes became ubiquitous, and even into the 1990s you could find software that assumed it could use the 8th bit of each byte of text for its own purposes ("not 8-bit clean"). Nowadays people think of it as an 8-bit coding in which bytes 0x80 through 0xFF have no defined meaning, but that's a retcon.
There are dozens of text encodings that make use of the 8th bit; they can be classified as ASCII-compatible or not, and fixed- or variable-width. ASCII-compatible means that regardless of context, single bytes with values from 0x00 through 0x7F encode the same characters that they would in ASCII. You don't want to have anything to do with a non-ASCII-compatible text encoding if you can possibly avoid it; naive programs expecting ASCII tend to misinterpret them in catastrophic, often security-breaking fashion. They are so deprecated nowadays that (for instance) HTML5 forbids their use on the public Web, with the unfortunate exception of UTF-16. I'm not going to talk about them any more.
A fixed-width encoding means what it sounds like: all characters are encoded using the same number of bytes. To be ASCII-compatible, a fixed-width encoding must encode all its characters using only one byte, so it can have no more than 256 characters. The most common such encoding nowadays is Windows-1252, an extension of ISO 8859-1.
There's only one variable-width ASCII-compatible encoding worth knowing about nowadays, but it's very important: UTF-8, which packs all of Unicode into an ASCII-compatible encoding. You really want to be using this if you can manage it.
As a final note, "ASCII" nowadays takes its practical definition from Unicode, not its original standard (ANSI X3.4-1968), because historically there were several dozen variations on the ASCII 128-character repertoire -- for instance, some of the punctuation might be replaced with accented letters to facilitate the transmission of French text. All of those variations are obsolete, and when people say "ASCII" they mean that the bytes with values 0x00 through 0x7F encode Unicode code points U+0000 through U+007F. This will probably only matter to you if you ever find yourself writing a technical standard.
If you're interested in the history of ASCII and the encodings that preceded it, start with the paper "The Evolution of Character Codes, 1874-1968" (samizdat copy at http://falsedoor.com/doc/ascii_evolution-of-character-codes.pdf) and then chase its references (many of which are not available online and may be hard to find even with access to a university library, I regret to say).
On Linux man ascii says:
ASCII is the American Standard Code for Information Interchange. It is a 7-bit code.
The original ASCII table is encoded on 7 bits, and therefore it has 128 characters.
Nowadays, most readers/editors use an "extended" ASCII table (such as ISO 8859-1), which is encoded on 8 bits and has 256 characters (including Á, Ä, é, è and other characters useful for European languages, as well as some mathematical glyphs and other symbols).
While UTF-8 uses the same encoding as the basic ASCII table (meaning 0x41 is A in both codes), it does not share the same encoding for the accented characters beyond ASCII (the Latin-1 Supplement range), which sometimes causes weird characters to appear in words like à la carte or piñata.
ASCII encoding is 7-bit, but in practice, characters encoded in ASCII are not stored in groups of 7 bits. Instead, each ASCII character is stored in a byte, with the MSB (most significant bit) usually set to 0 (yes, that bit is wasted in ASCII).
You can verify this by typing a string in the ASCII character set into a text editor, setting the encoding to ASCII, and viewing the binary/hex representation.
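If no such editor is handy, the same check can be done in Python, for example:
data = 'ABC'.encode('ascii')
print(data.hex())                          # 414243
print([format(b, '08b') for b in data])    # ['01000001', '01000010', '01000011'] - the MSB is 0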
Aside: the use of (strictly) ASCII encoding is now uncommon, in favor of UTF-8 (which does not waste the MSB mentioned above - in fact, an MSB of 1 indicates the code point is encoded with more than 1 byte).
The original ASCII code provided 128 different characters numbered 0 to 127. ASCII and 7-bit are synonymous. Since the 8-bit byte is the common storage element, ASCII leaves room for 128 additional characters which are used for foreign languages and other symbols.
But the 7-bit code was made before the 8-bit byte became the standard unit. ASCII stands for American Standard Code for Information Interchange.
Early Internet mail systems supported only 7-bit ASCII codes.
To send programs and multimedia files over such systems, the 8-bit data first has to be turned into a 7-bit format using coding methods such as MIME (base64), uuencoding and BinHex. This means that the 8-bit bytes are converted to 7-bit characters, which adds extra bytes to encode them.
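For instance, base64 (the usual MIME transfer encoding) turns every 3 bytes of 8-bit data into 4 bytes of 7-bit-safe ASCII, roughly a 33% overhead; a quick Python illustration:
import base64

data = bytes(range(256))             # arbitrary 8-bit binary data
encoded = base64.b64encode(data)     # 7-bit-safe ASCII representation
print(len(data), len(encoded))       # 256 344 -> about a third larger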
When we call ASCII a 7-bit code, we mean that only 7 of the 8 bits in a byte are actually used, giving codes 0 to 127. The left-most (eighth) bit is simply left at zero; it is not a sign bit, and byte values 128 to 255 fall outside ASCII altogether.

Discover the character encoding from a byte

I have a string where I know that the degree symbol (°) is represented by the byte 63 (3F).
Each character is represented by a single byte.
How can I find the character encoding that was used?
Almost all 8-bit encodings in modern times coincide with ASCII in the ASCII range, so byte 3F hexadecimal is the question mark “?”. As Sebtm’s comment suggests, this might result from a character-level data error. E.g., some software that is limited to ASCII could turn all other bytes into “?” – not a good practice, but possible.
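For example, this is exactly what a lossy ASCII conversion looks like in Python:
text = 'Temperature: 25 °C'
print(text.encode('ascii', errors='replace'))   # b'Temperature: 25 ?C' - the degree sign becomes 0x3F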
If it were a non-ASCII byte, you could use the page http://www.eki.ee/letter/chardata.cgi?search=degree+sign to make a guess.

Translating memory contents into a string via ASCII encoding

I have to translate some memory contents into a string, using ASCII encoding. For example:
0x6A636162
But I am not sure how to break that up to translate it into ASCII. I think it has something to do with how many bits are in a char/digit, but I am not sure how to go about it (and of course, I would like to know more of the reasoning behind it, not just "how to do it").
ASCII uses 7 bits to encode a character (http://en.wikipedia.org/wiki/ASCII). However, it's common to encode characters using 8 bits instead (note that technically this isn't ASCII). Thus, you'd need to split your data into 8-bit chunks and match that to an ASCII table.
If you're using a specific programming language, it may have a way to translate an ASCII code to a character. For instance, Ruby has the .chr method, Python has the chr() built-in function, and in C you can printf("%c", number).
Note that each nibble (4 bits) can be represented as one hexadecimal digit, so for the sample string you show, each 8-bit "chunk" would be:
0x6A
0x63
0x61
0x62
The string reads "jcab" :)
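In Python, for instance, the whole conversion might look like this (assuming the most significant byte comes first, as in the hex literal):
value = 0x6A636162
data = value.to_bytes(4, byteorder='big')   # the four 8-bit chunks: 6A 63 61 62
print(data.decode('ascii'))                 # jcab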
