Assembly Language: Memory Bytes and Offsets

I am confused as to how memory is stored when declaring variables in assembly language. I have this block of sample code:
val1 db 1,2
val2 dw 1,2
val3 db '12'
From my study guide, it says that the total number of bytes required in memory to store the data declared by these three data definitions is 8 bytes (in decimal). How do I go about calculating this?
It also says that the offset into the data segment of val3 is 6 bytes and the hex byte at offset 5 is 00. I'm lost as to how to calculate these bytes and offsets.
Also, reading val1 into memory will produce 0102 but reading val3 into memory produces 3132. Are the apostrophes represented by the 3, or where does it come from? How would val2 be read into memory?

You have two bytes, 0x01 and 0x02. That's two bytes so far.
Then you have two words, 0x0001 and 0x0002. That's another four bytes, making six to date.
Then you have two more bytes making up the characters of the string '12', which are 0x31 and 0x32 in ASCII (a). That's another two bytes, bringing the grand total to eight.
In little-endian format (which is what you're looking at here based on the memory values your question states), they're stored as:
offset value
------ -----
0 0x01
1 0x02
2 0x01
3 0x00
4 0x02
5 0x00
6 0x31
7 0x32
(a) The character set you're using in this case is ASCII (any ASCII table will show you all the characters in that set).
The byte values 0x30 thru 0x39 are the digits 0 thru 9, just as the bytes 0x41 thru 0x5A represent the upper-case alpha characters. The pseudo-op:
db '12'
is saying to insert the bytes for the characters '1' and '2'.
Similarly:
db 'Pax is a really cool guy',0
would give you the hex-dump representation:
addr +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +A +B +C +D +E +F +0123456789ABCDEF
0000 50 61 78 20 69 73 20 61 20 72 65 61 6C 6C 79 20 Pax is a really
0010 63 6F 6F 6C 20 67 75 79 00 cool guy.

val1 is two consecutive bytes, 1 and 2. db means "define byte". val2 is two consecutive words, i.e. 4 bytes, again 1 and 2; in memory they will be 1, 0, 2, 0, assuming you're on a little-endian machine. val3 is a two-byte string. 0x31 and 0x32 are 49 and 50 in decimal notation; they are the ASCII codes for the characters "1" and "2".
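If you want to double-check the layout without an assembler, here is a small Python sketch (my own illustration, not from the original answers) that packs the same three definitions little-endian with the struct module:
import struct

val1 = struct.pack('<2B', 1, 2)  # db 1,2  -> two bytes
val2 = struct.pack('<2H', 1, 2)  # dw 1,2  -> two 16-bit little-endian words
val3 = b'12'                     # db '12' -> the ASCII bytes 0x31 0x32

data = val1 + val2 + val3
print(data.hex(' '))             # 01 02 01 00 02 00 31 32
print(len(data))                 # 8 (total bytes)
print(data.index(val3))          # 6 (offset of val3)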

The DHT in JPEG does not contain the actual huffman code, how come?

The DHT contains 16 bytes that just hold counts of how many values were encoded with a Huffman code of each length, from 1 bit all the way to 16 bits. After this, it contains the actual values that were encoded; all these values are 8 bits in size.
Q: Why is the Huffman code not stored, and how does the decoder derive the codes?
Q: If there are, say, 4 values that have a Huffman code 3 bits long, we shall write them as 4 bytes. Does it matter what order they are in, or do they have to be in ascending or descending order? I do know that the values must be ordered such that the values with a 1-bit Huffman code are followed by the values with a 2-bit Huffman code, etc.
Q: I have used JPEGsnoop to look at the Huffman tables of different files, some made in MS Paint and some downloaded. I find that they all have the SAME table.
Here are the DHT entries I got from JPEGsnoop:
Destination ID = 1
Class = 1 (AC Table)
Codes of length 01 bits (000 total):
Codes of length 02 bits (002 total): 00 01
Codes of length 03 bits (001 total): 02
Codes of length 04 bits (002 total): 03 11
Codes of length 05 bits (004 total): 04 05 21 31
Codes of length 06 bits (004 total): 06 12 41 51
Codes of length 07 bits (003 total): 07 61 71
Codes of length 08 bits (004 total): 13 22 32 81
Codes of length 09 bits (007 total): 08 14 42 91 A1 B1 C1
Codes of length 10 bits (005 total): 09 23 33 52 F0
Codes of length 11 bits (004 total): 15 62 72 D1
Codes of length 12 bits (004 total): 0A 16 24 34
Codes of length 13 bits (000 total):
Codes of length 14 bits (001 total): E1
Codes of length 15 bits (002 total): 25 F1
Codes of length 16 bits (119 total): 17 18 19 1A 26 27 28 29 2A 35 36 37 38 39 3A 43
44 45 46 47 48 49 4A 53 54 55 56 57 58 59 5A 63
64 65 66 67 68 69 6A 73 74 75 76 77 78 79 7A 82
83 84 85 86 87 88 89 8A 92 93 94 95 96 97 98 99
9A A2 A3 A4 A5 A6 A7 A8 A9 AA B2 B3 B4 B5 B6 B7
B8 B9 BA C2 C3 C4 C5 C6 C7 C8 C9 CA D2 D3 D4 D5
D6 D7 D8 D9 DA E2 E3 E4 E5 E6 E7 E8 E9 EA F2 F3
F4 F5 F6 F7 F8 F9 FA
Total number of codes: 162
And
Destination ID = 1
Class = 0 (DC / Lossless Table)
Codes of length 01 bits (000 total):
Codes of length 02 bits (003 total): 00 01 02
Codes of length 03 bits (001 total): 03
Codes of length 04 bits (001 total): 04
Codes of length 05 bits (001 total): 05
Codes of length 06 bits (001 total): 06
Codes of length 07 bits (001 total): 07
Codes of length 08 bits (001 total): 08
Codes of length 09 bits (001 total): 09
Codes of length 10 bits (001 total): 0A
Codes of length 11 bits (001 total): 0B
Codes of length 12 bits (000 total):
Codes of length 13 bits (000 total):
Codes of length 14 bits (000 total):
Codes of length 15 bits (000 total):
Codes of length 16 bits (000 total):
Total number of codes: 012
The AC table compresses RRRRSSSS bytes that hold a zero-run length and an AC coefficient magnitude, while the DC table compresses SSSS. Thus, I think that the AC table must contain a total of 255 entries (excluding 0) while the DC table must have 15 entries (excluding 0). However, neither table contains this many codes. WHY?
Q: Why is the Huffman code not stored, and how does the decoder derive the codes?
The reason the Huffman tables are defined by code-length counts rather than by the actual codes is that it is much smaller and simpler to encode that way. PNG uses a similar but different method.
Keep in mind that to store the Huffman codes in the JPEG stream you would need to include both the length and the code itself. The result would be much larger than encoding a count of lengths.
Q: If there are, say, 4 values that have a Huffman code 3 bits long, we shall write them as 4 bytes. Does it matter what order they are in, or do they have to be in ascending or descending order?
If the Huffman code has 3 bits, it is written as three bits to the JPEG stream. The codes are generated in ascending order.
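To make "generated in ascending order" concrete, here is a rough Python sketch of the canonical-code procedure a decoder follows (modelled on Annex C of the JPEG spec; counts[i] holds the number of codes of length i+1, values lists the symbols in DHT order, and the function name is my own):
def build_codes(counts, values):
    codes = {}                         # (code, length) -> symbol
    code, k = 0, 0
    for length in range(1, 17):
        for _ in range(counts[length - 1]):
            codes[(code, length)] = values[k]
            code += 1
            k += 1
        code <<= 1                     # continue counting, one bit longer
    return codes

# the DC table from the question
dc_counts = [0, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
dc_values = list(range(0x0C))          # 00 01 ... 0B
for (code, length), sym in build_codes(dc_counts, dc_values).items():
    print(f'{sym:02X} -> {code:0{length}b}')
This prints 00 -> 00, 01 -> 01, 02 -> 10, 03 -> 110, 04 -> 1110 and so on: the length counts alone pin down every code.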
Q: I have used JPEGsnoop to look at the Huffman tables of different files, some made in MS Paint and some downloaded. I find that they all have the SAME table.
The encoder is being lazy and using a fixed Huffman table. There is a sample Huffman table in the JPEG standard that they often use. To generate optimal Huffman codes, the encoder must make two passes over the data. With a preset table, the encoder only needs to make one pass.
F.1.2.1.2 and F.1.2.2.1 of the JPEG Specification explain why the Huffman tables are not fully populated. For baseline encoding DC difference values are limited to 11 bits (table F.1) and AC values are limited to 10 bits (table F.2).
Since DC Huffman symbols only need SSSS values from 0 to 11 their Huffman trees need only 12 codes as you've reported.
AC Huffman symbols have a prefix zero-run count from 0 to 15. With 11 possible bit sizes (SSSS = 0 to 10) that works out to 16 * 11 = 176 symbols. However, they don't include the symbols 0x10, 0x20, ... 0xE0 because they are redundant: each would encode a run of 1, 2, ... 14 zeros followed by a 0 value. If an encoder has, say, 7 zero values followed by a 3-bit value it can encode that as 0x73. There would be no point encoding it with the two symbols 0x60;0x03.
Ignoring those 14 useless symbols we end up with 162 codes as you have reported.
By the way, the 0xF0 (ZRL) value is needed because there isn't a symbol that can express a run of 16 zeros followed by a value, thus it cannot be merged away.
I don't know why the JPEG spec limits the DC and AC values to a certain number of bits. I speculate that the extra precision would have no effect or is typically thrown away by quantization. Or maybe it has to do with the mathematics of the Inverse Discrete Cosine Transform. Keep in mind that these Huffman encoded values are (quantized) coefficients for the IDCT and are only indirectly related to the continuous tone RGB output.
The Huffman encoding is almost fully determined by the relative frequencies of all 256 symbols (up to tiebreaker rules). This means you can choose many, many formats to encode those relative frequencies; the simplest one would be to store all the frequencies directly. The receiver can then rebuild the encoding from them.
Background: the two least frequent characters of a Huffman encoding share the same (long) prefix, and differ only in the last bit. This combination is then assigned a joint frequency (sum of both combinations), which is used recursively to determine the prefix. Finally, you end up with two sets, one holding X characters and the other holding 256-X characters. The first set has prefix 0 and the second set has prefix 1.
Yes, that's arbitrary: you could swap those 0 and 1 and get a similar table with the same compression ratio, since a 0 is just as long as a 1. That's why you have detailed rules (e.g. the most common set gets 0, the tiebreaker is the first byte in the set).
Back to encoding. You want to store these relative frequencies efficiently, as we're using compression here. Now, as I pointed out, when we have codes suffix-0 and suffix-1, they're both equally long (namely the suffix length plus one). So we know from the fact that there are 119 unique 16-bit codes that there are 60 unique prefixes of length 15. Calculating backwards, we also know that there are two unique symbols of length 15, for a total of 62, so there must be 31 prefixes of length 14. We can backtrack like this all the way to prefixes of length 1.
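Here is that backtracking spelled out as a quick Python sketch, using the code-length histogram of the AC table from the question (my own illustration; counts[i] is the number of codes of length i+1):
import math

counts = [0, 2, 1, 2, 4, 4, 3, 4, 7, 5, 4, 4, 0, 1, 2, 119]
prefixes = 0
for length in range(16, 0, -1):
    nodes = prefixes + counts[length - 1]  # internal prefixes plus leaves here
    prefixes = math.ceil(nodes / 2)        # unique prefixes one bit shorter
    print(f'{nodes} nodes at length {length} -> {prefixes} prefixes of length {length - 1}')
The first lines of output are the numbers from the text (119 nodes at length 16 -> 60 prefixes of length 15, then 62 -> 31), and the count backtracks all the way down to a single length-0 prefix, the root.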
Again, it's necessary to point out that we don't know here the exact values of those prefixes, and the matching symbols. This depends on the tiebreaker rules, as pointed out, but those rules are fixed for JPEG.
JPEG does have a bit of a special case: Huffman codes for very rare symbols should be longer than 16 bits. That's inconvenient, so in building the table you don't choose the two sets with the least combined frequency if either of them already has a long suffix - combining those two subsets would just make the suffix even longer. You see this with all the 16-bit codes in the example: most should have had longer codes in a pure Huffman encoding.
I think the worst case is if the most frequent character appears 50% of the time, the next 25%, etc. You'll get the codes 0, 10, 110, 1110, etcetera. That's unary counting, which is indeed optimal for that case, but the longest code is now 255 bits. You'd need a document of about 2^255 bytes for a symbol to have a frequency of 2^-255, though.

What's the representation of a set of char in PASCAL?

They ask me to represent a set of char as a memory map. What chars are in the set? The teacher told us to use ASCII codes, in a set of 32 bytes.
I have this example, the set {'A', 'B', 'C'}
(The 7 comes from 0111)
= {00 00 00 00 00 00 00 00 70 00
00 00 00 00 00 00 00 00 00 00
00}
Sets in pascal can be represented in memory with one bit for every element; if the bit is 1, the element is present in the set.
A "set of char" is a set over the ASCII characters, where each element has an ordinal value from 0 to 255 (plain ASCII stops at 127, but this set is usually extended to a full byte, so there are 256 different characters).
Hence a "set of char" is represented in memory as a block of 32 bytes which contain a total of 256 bits. The character "A" (upper case A) has an ordinal value of 65. The integer division of 65 by 8 (the number of bits a byte can hold) gives 8, so the bit representing "A" in the set resides in byte number 8 (counting from zero). 65 mod 8 gives 1, which is the second bit in that byte.
The byte number 8 will have the second bit ON for the character A (and the third bit for B, and the fourth for C). All the three characters together give the binary representation of 0000.1110 ($0E in hex).
To demonstrate this, I tried the following program with turbo pascal:
var
  ms : set of char;
  p  : array[0..31] of byte absolute ms;
  i  : integer;
begin
  ms := ['A'..'C'];
  for i := 0 to 31 do begin
    if i mod 8 = 0 then writeln;
    write(i, '=', p[i], ' ');
  end;
  writeln;
end.
The program prints the value of all 32 bytes in the set, thanks to the "absolute" keyword. Other versions of pascal can do it using different methods. Running the program gives this result:
0=0 1=0 2=0 3=0 4=0 5=0 6=0 7=0
8=14 9=0 10=0 11=0 12=0 13=0 14=0 15=0
16=0 17=0 18=0 19=0 20=0 21=0 22=0 23=0
24=0 25=0 26=0 27=0 28=0 29=0 30=0 31=0
where you see that the only byte different than 0 is the byte number 8, and it contains 14 ($0E in hex, 0000.1110). So, your guess (70) is wrong.
That said, I must add that nobody can state this is always true, because a set in pascal is implementation dependent; so your answer could also be right. The representation used by turbo pascal (on dos/windows) is the most logical one, but this does not exclude other possible representations.
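For what it's worth, the byte/bit arithmetic can be cross-checked in a few lines of Python (a sketch assuming the Turbo Pascal layout above: one bit per ordinal value, lowest bit first within each byte):
def char_set_bitmap(chars):
    bitmap = bytearray(32)              # 256 bits for a set of char
    for c in chars:
        o = ord(c)
        bitmap[o // 8] |= 1 << (o % 8)  # byte index, then bit within the byte
    return bitmap

bm = char_set_bitmap('ABC')
print(bm[8])                            # 14, i.e. $0E = 0000.1110
This agrees with the Turbo Pascal output: byte number 8 holds 14 and every other byte is 0.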

COM Port Commands CRC XOR

I have a check printer that I want to connect to and operate from Delphi 7 over a COM port.
I have a command that I extracted with Serial Port Monitor:
STX "PIRI(781" FS NULL ETX "0B", which is 02 50 49 52 49 28 37 38 31 1c 00 03 30 42 in hex
The manual says the following:
CRC (which is the last two digits after the ETX) - packet checksum. It
is calculated by the following algorithm: executing XOR for every
byte of the block including ETX by excluding STX. The data of the
checksum take up two bytes and are a symbolic representation of the
numeric in a hexadecimal calculation system.
I tried an online CRC calculator and it returned a result of 1B hex (27 decimal).
How do I do it? For "PIRI(781" FS NULL ETX it should be 0B
The documentation incorrectly identifies the check value as a CRC. It is not. It is simply the exclusive-or of the noted bytes. The exclusive-or of 50 49 52 49 28 37 38 31 1c 00 03 is 0b. You then write 0b as its two-character hex representation (with an upper case B, i.e. "0B"), giving the bytes 30 42.
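In code, the whole computation is only a few lines. Here is a Python sketch (the frame bytes are taken from the question, and the helper name is my own invention); a Delphi version would mirror it byte for byte:
def xor_check(block: bytes) -> bytes:
    acc = 0
    for b in block:                     # XOR every byte, ETX included
        acc ^= b
    return format(acc, '02X').encode()  # 0x0B -> the ASCII characters "0B"

# everything after STX, up to and including ETX
block = b'PIRI(781' + bytes([0x1C, 0x00, 0x03])
print(xor_check(block))                 # b'0B', i.e. the bytes 30 42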

24 bit-address in hex

How many hex digits does a 24-bit memory address have?
One hex digit corresponds to 4 binary digits (bits).
For 24 bits, there are 3 bytes (8 bits each), which makes 6 hex digits.
8 bits = 1 byte
24 bits = 3 bytes
1 byte = 2 hex characters
2 bytes = 4 hex characters
3 bytes = 6 hex characters
Each hex digit represents four bits, so a 24-bit address requires six hex digits. You can see the relationship between hex and binary here:
Hex Binary Hex Binary
--- ------ --- ------
0 0000 8 1000
1 0001 9 1001
2 0010 A 1010
3 0011 B 1011
4 0100 C 1100
5 0101 D 1101
6 0110 E 1110
7 0111 F 1111
Every hex digit is 4 bits: each digit in base 16 spans 2^4 values, hence 4 digits in base 2.
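A quick sanity check in Python, if you want to see it: the largest 24-bit address needs exactly six hex digits.
addr = (1 << 24) - 1           # largest 24-bit address
print(format(addr, 'X'))       # FFFFFF
print(len(format(addr, 'X')))  # 6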

How to calculate Internet checksum?

I have a question regarding how the Internet checksum is calculated. I couldn't find any good explanation from the book, so I ask it here.
Have a look at the following example.
The following two messages are sent: 10101001 and 00111001. The checksum is calculated with 1's complement. So far I understand. But how is the sum calculated? At first I thought it maybe is XOR, but it seems not to be the case.
10101001
00111001
--------
Sum 11100010
Checksum: 00011101
And then when they calculate if the message arrived OK. And once again how is the sum calculated?
10101001
00111001
00011101
--------
Sum 11111111
Complement 00000000 means that the pattern is O.K.
It uses addition, hence the name "sum". 10101001 + 00111001 = 11100010.
For example:
+------------+-----+----+----+----+---+---+---+---+--------+
| bin value | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 | result |
+------------+-----+----+----+----+---+---+---+---+--------+
| value 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 169 |
| value 2 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 57 |
| sum/result | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 226 |
+------------+-----+----+----+----+---+---+---+---+--------+
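In Python the same arithmetic looks like this (numbers taken straight from the question):
total = (0b10101001 + 0b00111001) & 0xFF         # 11100010; no carry occurs here
checksum = ~total & 0xFF                         # 00011101
print(format(checksum, '08b'))
print(format((total + checksum) & 0xFF, '08b'))  # 11111111 -> message is OK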
If by internet checksum you mean the TCP checksum, there are good explanations of it around, and even sample code.
When you're calculating the checksum remember that it's not just a function of the data but also of the "pseudo header" which puts the source IP, dest IP, protocol, and length of the TCP packet into the data to be checksummed. This ties the TCP meta data to some data in the IP header.
TCP/IP Illustrated Vol 1 is a good reference for this and explains it all in detail.
The calculation of the internet checksum uses ones' complement arithmetic. Consider the data being checksummed as a sequence of 8-bit integers. First add them using ones' complement arithmetic, then take the ones' complement of the result.
NOTE: When adding numbers in ones' complement arithmetic, a carry out of the MSB needs to be added back to the result (the "end-around carry"). Consider, for example, the addition of 3 (0011) and 5 (0101) via their complements:
3' -> 1100
5' -> 1010
0110 with a carry of 1
Adding the carry back gives 0111. Note that this is exactly the ones' complement of 1000, i.e. of the true sum 3 + 5 = 8: summing the complements (with end-around carry) yields the complement of the sum. So for the data words 3 and 5 the checksum is 0111. If no carry exists, we just complement the result obtained in the summing stage.
The UDP checksum is created on the sending side by summing all the 16-bit words in the segment, with any overflow being wrapped around; then the ones' complement is taken and the result is placed in the checksum field inside the segment.
At the receiver side, all words inside the packet are added and the checksum is added to them; if the result is 1111 1111 1111 1111 then the segment is valid, else the segment has an error.
Example:
0110 0110 0110 0000
0101 0101 0101 0101
1000 1111 0000 1100
--------------------
1 0100 1010 1100 0001 // there is an overflow, so we wrap it around, i.e. add it to the sum
the sum = 0100 1010 1100 0010
now let's take the 1's complement
checksum = 1011 0101 0011 1101
at the receiver the sum is calculated and then added to the checksum
0100 1010 1100 0010
1011 0101 0011 1101
----------------------
1111 1111 1111 1111 //clearly this should be the answer, if it isn't then there is an error
Reference: Computer Networking: A Top-Down Approach (Kurose & Ross)
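Putting the recipe together, here is a minimal Python sketch of the 16-bit ones' complement checksum (it uses the example words from this answer; the function name is my own):
def internet_checksum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF                        # ones' complement

words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]
cs = internet_checksum(words)
print(format(cs, '016b'))        # 1011010100111101, the checksum above

# receiver: data plus checksum must come out all ones
total = sum(words) + cs
total = (total & 0xFFFF) + (total >> 16)
print(format(total, '016b'))     # 1111111111111111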
Here's a complete example with a real header of an IPv4 packet.
In the following example, I use bc, printf and here strings to calculate the header checksum and verify it. Consequently, it should be easy to reproduce the results on Linux by copy-pasting the commands.
These are the twenty bytes of our example packet header:
45 00 00 34 5F 7C 40 00 40 06 [00 00] C0 A8 B2 14 C6 FC CE 19
The sender hasn't calculated the checksum yet. The two bytes in square brackets are where the checksum will go. The checksum's value is initially set to zero.
We can mentally split up this header as a sequence of ten 16-bit values: 0x4500, 0x0034, 0x5F7C, etc.
Let's see how the sender of the packet calculates the header checksum:
Add all 16-bit values to get 0x42C87: bc <<< 'obase=16;ibase=16;4500 + 0034 + 5F7C + 4000 + 4006 + 0000 + C0A8 + B214 + C6FC + CE19'
The leading digit 4 is the carry count; we add this to the rest of the number to get 0x2C8B: bc <<< 'obase=16;ibase=16;2C87 + 4'
Invert¹ 0x2C8B to get the checksum: 0xD374
Finally, insert the checksum into the header:
45 00 00 34 5F 7C 40 00 40 06 [D3 74] C0 A8 B2 14 C6 FC CE 19
Now the header is ready to be sent.
The recipient of the IPv4 packet then creates the checksum of the received header in the same way:
Add all 16-bit values to get 0x4FFFB: bc <<< 'obase=16;ibase=16;4500 + 0034 + 5F7C + 4000 + 4006 + D374 + C0A8 + B214 + C6FC + CE19'
Again, there's a carry count so we add that to the rest to get 0xFFFF: bc <<< 'obase=16;ibase=16;FFFB + 4'
If the checksum is 0xFFFF, as in our case, the IPv4 header is intact.
See the Wikipedia entry on the IPv4 header checksum for more information.
¹Inverting the hexadecimal number means converting it to binary, flipping the bits, and converting it to hexadecimal again. You can do this online or with Bash: hex_nr=0x2C8B; hex_len=$(( ${#hex_nr} - 2 )); inverted=$(printf '%X' "$(( ~ hex_nr ))"); trunc_inverted=${inverted: -hex_len}; echo $trunc_inverted
