24-bit address in hex - memory

How many hex digits does a 24-bit memory address have?

One hex digit corresponds to 4 binary digits (bits).
For 24 bits, that is 3 bytes (8 bits each), which makes 6 hex digits.

8 bits = 1 byte
24 bits = 3 bytes
1 byte = 2 hex characters
2 bytes = 4 hex characters
3 bytes = 6 hex characters

Each hex digit handles four bits, so a 24-bit address requires six hex digits. You can see the relationship between hex and binary here:
Hex Binary Hex Binary
--- ------ --- ------
0 0000 8 1000
1 0001 9 1001
2 0010 A 1010
3 0011 B 1011
4 0100 C 1100
5 0101 D 1101
6 0110 E 1110
7 0111 F 1111

Every hex digit is 4 bits:
base 16 is 2^4,
hence each base-16 digit is 4 digits in base 2.
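A small sketch (in Python, not from the question) makes the arithmetic concrete:

```python
def hex_digits_for(bits: int) -> int:
    """Number of hex digits needed to write a 'bits'-wide address."""
    return (bits + 3) // 4  # one hex digit per 4 bits, rounded up

# A 24-bit address spans 6 hex digits:
print(hex_digits_for(24))   # 6
print(f"{0xFFFFFF:06X}")    # FFFFFF, the largest 24-bit address
```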

Related

Mapping standard number n bits to new system of n bits based on the amount of enabled bits

I'm trying to figure out a way to map numbers from our standard binary counting to a system where primarily the cardinality (count) of enabled bits, and secondarily the positions of the enabled bits, determine the ordering. I've struggled to come up with a general fast method for any number of bits. I want to know what methods exist for doing this and what the best achievable time complexity is.
I provided an example mapping with 4 bits to make my question more clear.
0 0000 0
1 0001 1
2 0010 2
4 0100 3
8 1000 4
3 0011 5
5 0101 6
9 1001 7
6 0110 8
10 1010 9
12 1100 10
7 0111 11
11 1011 12
13 1101 13
14 1110 14
15 1111 15
I really don't know how to classify this problem in order to do more research on it. If anybody knows how to label this specifically and would like to share that with me I would be most grateful.
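No answer is quoted here, but the example table can be reproduced by a brute-force sort: rank each value primarily by its popcount and secondarily by the positions of its set bits (lowest position first), compared lexicographically. This is the ordering of the combinatorial number system. A hedged Python sketch (the function name `rank_map` is mine):

```python
def rank_map(n_bits: int) -> dict[int, int]:
    """Map each n-bit value to its rank when sorted by
    (number of set bits, tuple of set-bit positions, lowest first)."""
    def key(x: int):
        positions = tuple(i for i in range(n_bits) if (x >> i) & 1)
        return (len(positions), positions)
    ordered = sorted(range(1 << n_bits), key=key)
    return {value: rank for rank, value in enumerate(ordered)}

mapping = rank_map(4)
print(mapping[9])   # 7, matching the 4-bit example table
```

Sorting all values is O(2^n log 2^n); for a single value the rank can instead be computed in O(n) from binomial coefficients, so "combinatorial number system" or "combinadics" are good search terms for this problem.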

How come in two's complement, 1001 and 11111001 are both -7?

When I learned two's complement, I was taught that for a signed number,
0111 represents 7,
so by using two's complement,
0111 -> 1000 + 1 -> 1001, is -7,
so 1001 represents -7.
While refreshing this concept on YouTube, I saw a video saying that
0000 0111 represents 7, so by using two's complement,
0000 0111 -> 1111 1000 + 1 -> 1111 1001, is -7,
thus, 11111001 represents -7.
I got confused. So by just looking at a signed binary number, how can we determine its value? I thought 11111001 should equal -121, since the MSB is 1, so it is negative, and 1111001 is 121 in decimal, so shouldn't 11111001 be -121? What did I do wrong?
Thanks guys!
The only difference between the two examples is the number of bits you are using for each number.
1001 is -7 with 4 bits and 11111001 is -7 with 8 bits.
If you add up the negative and the positive of the same absolute number the result will be zero.
Both are -7 + 7 = 0
1001 + 0111 = 1|0000
11111001 + 00000111 = 1|00000000
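A small sketch (Python, mine, not from the thread) that makes the width-dependence explicit:

```python
def twos_complement_value(bits: str) -> int:
    """Interpret a bit string as a signed two's-complement number
    whose width is exactly len(bits)."""
    width = len(bits)
    value = int(bits, 2)          # read the bits as unsigned first
    if bits[0] == '1':            # MSB set -> subtract 2**width
        value -= 1 << width
    return value

print(twos_complement_value("1001"))      # -7 (4-bit)
print(twos_complement_value("11111001"))  # -7 (8-bit)
print(int("11111001", 2))                 # 249: the same bits read as unsigned
```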

Decode hex string encoding

I have a .bin saved with a VB program, the .bin format is:
String bytes | String
06 00 | C0 E1 E0 E8 F1 E0
The problem is I don't know how the string is encoded. I know what the string is supposed to be: Abaira
Can anyone recognize the encoding used?
I'm not aware of any standard character encoding for this; it is neither ASCII nor EBCDIC.
It seems to be a trivial obfuscation of an 8-bit (non-Unicode) encoding (perhaps ANSI). Compare your unknown encoding with ASCII:
Unknown ASCII
Hex MSB LSB Hex MSB LSB
A C0 1100 0000 41 0100 0001
b E1 1110 0001 62 0110 0010
a E0 1110 0000 61 0110 0001
i E8 1110 1000 69 0110 1001
r F1 1111 0001 72 0111 0010
a E0 1110 0000 61 0110 0001
Let's define:
MSB: First nibble = most significant 4 bits
LSB: Second nibble = least significant 4 bits
_U: of Unknown
_A: of ASCII
Then you find:
MSB_U = MSB_A Xor 0x80 (maybe MSB_A Or 0x80)
LSB_U = LSB_A - 1 (to tell how the borrow is handled I would need to see an ASCII character with a low nibble of 0, e.g. 'P' or 'p')
Then U is the concatenation MSB_U & LSB_U.
Further example ASCII to Unknown:
ASCII Hex MSB LSB MSB Xor 0x80 LSB - 1 Concatenated Hex
H 48 0100 1000 1100 0111 1100 0111 C7
e 65 0110 0101 1110 0100 1110 0100 E4
r 72 0111 0010 1111 0001 1111 0001 F1 (as you have shown)
b 62 0110 0010 1110 0001 1110 0001 E1 (do.)
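Assuming the transformation guessed above (high nibble XORed with 8, low nibble decremented by one when encoding), a speculative Python sketch; `decode` and `encode` are names I made up, and the behavior for a low nibble of 0 is still unknown:

```python
def decode(data: bytes) -> str:
    """Guess-decode: flip the top bit of each byte, add 1 to the
    low nibble (what happens on a low nibble of 0xF is unknown)."""
    out = []
    for b in data:
        hi = ((b >> 4) ^ 0x8) & 0xF
        lo = (b & 0xF) + 1          # assumes no carry into the high nibble
        out.append(chr((hi << 4) | lo))
    return ''.join(out)

def encode(text: str) -> bytes:
    """Inverse: flip the top bit, subtract 1 from the low nibble."""
    return bytes(((((ord(c) >> 4) ^ 0x8) << 4) | ((ord(c) & 0xF) - 1))
                 for c in text)

print(decode(bytes.fromhex("C0E1E0E8F1E0")))  # Abaira
```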

Assembly Language: Memory Bytes and Offsets

I am confused as to how memory is stored when declaring variables in assembly language. I have this block of sample code:
val1 db 1,2
val2 dw 1,2
val3 db '12'
From my study guide, it says that the total number of bytes required in memory to store the data declared by these three data definitions is 8 bytes (in decimal). How do I go about calculating this?
It also says that the offset into the data segment of val3 is 6 bytes and the hex byte at offset 5 is 00. I'm lost as to how to calculate these bytes and offsets.
Also, reading val1 into memory will produce 0102 but reading val3 into memory produces 3132. Are apostrophes represented by the 3 or where does it come from? How would val2 be read into memory?
You have two bytes, 0x01 and 0x02. That's two bytes so far.
Then you have two words, 0x0001 and 0x0002. That's another four bytes, making six to date.
Then you have two more bytes making up the characters of the string '12', which are 0x31 and 0x32 in ASCII (a). That's another two bytes, bringing the grand total to eight.
In little-endian format (which is what you're looking at here based on the memory values your question states), they're stored as:
offset value
------ -----
0 0x01
1 0x02
2 0x01
3 0x00
4 0x02
5 0x00
6 0x31
7 0x32
(a) The character set you're using in this case is the ASCII one (you can follow that link for a table describing all the characters in that set).
The byte values 0x30 thru 0x39 are the digits 0 thru 9, just as the bytes 0x41 thru 0x5A represent the upper-case alpha characters. The pseudo-op:
db '12'
is saying to insert the bytes for the characters '1' and '2'.
Similarly:
db 'Pax is a really cool guy',0
would give you the hex-dump representation:
addr +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +A +B +C +D +E +F +0123456789ABCDEF
0000 50 61 78 20 69 73 20 61 20 72 65 61 6C 6C 79 20 Pax is a really
0010 63 6F 6F 6C 20 67 75 79 00 cool guy.
val1 is two consecutive bytes, 1 and 2. db means "define byte". val2 is two consecutive words, i.e. 4 bytes, again 1 and 2; in memory they are stored as 1, 0, 2, 0, since this is a little-endian machine. val3 is a two-byte string. Hex 31 and 32 are 49 and 50 in decimal; they are the ASCII codes for the characters "1" and "2".
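A quick way to double-check the totals (a Python sketch using `struct`, not part of the original answers; it assumes the little-endian layout described above):

```python
import struct

# val1 db 1,2      -> two bytes
# val2 dw 1,2      -> two little-endian 16-bit words
# val3 db '12'     -> the ASCII bytes of '1' and '2'
data = struct.pack("<BB", 1, 2) + struct.pack("<HH", 1, 2) + b"12"

print(len(data))    # 8 bytes in total
print(data.hex())   # 0102010002003132
print(data[5])      # 0, the byte at offset 5
print(data[6:8])    # b'12' -> val3 starts at offset 6
```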

How to calculate Internet checksum?

I have a question regarding how the Internet checksum is calculated. I couldn't find any good explanation from the book, so I ask it here.
Have a look at the following example.
The following two messages are sent: 10101001 and 00111001. The checksum is calculated with 1's complement. So far I understand. But how is the sum calculated? At first I thought it maybe is XOR, but it seems not to be the case.
10101001
00111001
--------
Sum 11100010
Checksum: 00011101
And then when they calculate if the message arrived OK. And once again how is the sum calculated?
10101001
00111001
00011101
--------
Sum 11111111
Complement 00000000 means that the pattern is O.K.
It uses addition, hence the name "sum". 10101001 + 00111001 = 11100010.
For example:
+------------+-----+----+----+----+---+---+---+---+--------+
| bin value | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 | result |
+------------+-----+----+----+----+---+---+---+---+--------+
| value 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 169 |
| value 2 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 57 |
| sum/result | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 226 |
+------------+-----+----+----+----+---+---+---+---+--------+
If by internet checksum you mean TCP Checksum there's a good explanation here and even some code.
When you're calculating the checksum remember that it's not just a function of the data but also of the "pseudo header" which puts the source IP, dest IP, protocol, and length of the TCP packet into the data to be checksummed. This ties the TCP meta data to some data in the IP header.
TCP/IP Illustrated Vol 1 is a good reference for this and explains it all in detail.
The calculation of the internet checksum uses one's complement arithmetic. Consider the data being checksummed as a sequence of 8-bit integers. First add them using one's complement arithmetic, then take the one's complement of the result.
NOTE: When adding numbers in one's complement arithmetic, a carry out of the MSB needs to be added back to the result. Consider, for example, the addition of the complements of 3 (0011) and 5 (0101):
3' -> 1100
5' -> 1010
0110 with a carry of 1
Adding the carry back in gives 0111, which is the one's complement of 8 (1000).
The checksum is the 1's complement of the result obtained in the previous step; hence we have 1000. If no carry exists, we just complement the result obtained in the summing stage.
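The end-around-carry rule can be sketched like this (Python, mine; 4-bit width as in the note above):

```python
def ones_complement_add(a: int, b: int, width: int = 4) -> int:
    """Add two width-bit values with end-around carry."""
    mask = (1 << width) - 1
    s = a + b
    while s > mask:                        # fold any carry back in
        s = (s & mask) + (s >> width)
    return s

# Complements of 3 (0011) and 5 (0101):
c3, c5 = 0b0011 ^ 0b1111, 0b0101 ^ 0b1111   # 1100 and 1010
s = ones_complement_add(c3, c5)
print(f"{s:04b}")   # 0111, the one's complement of 8 (1000)
```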
The UDP checksum is created on the sending side by summing all the 16-bit words in the segment, with any overflow being wrapped around, then taking the 1's complement; the result is placed in the checksum field inside the segment.
At the receiver side, all words inside the packet are added together, plus the checksum; if the result is 1111 1111 1111 1111 then the segment is valid, else the segment has an error.
Example:
0110 0110 0110 0000
0101 0101 0101 0101
1000 1111 0000 1100
--------------------
1 0100 1010 1100 0001 // there is an overflow (carry out), so we wrap it around: add it back to the sum
the sum = 0100 1010 1100 0010
now let's take the 1's complement
checksum = 1011 0101 0011 1101
at the receiver the sum is calculated and then added to the checksum
0100 1010 1100 0010
1011 0101 0011 1101
----------------------
1111 1111 1111 1111 //clearly this should be the answer, if it isn't then there is an error
Reference: Computer Networking: A Top-Down Approach (Kurose & Ross)
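A short sketch (Python, not from the book) that reproduces the 16-bit example above:

```python
def checksum16(words):
    """One's complement sum of 16-bit words, then complemented."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the overflow
    return ~total & 0xFFFF

words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]
cs = checksum16(words)
print(f"{cs:016b}")   # 1011010100111101

# Receiver: the sum of the words plus the checksum is all ones
total = 0
for w in words + [cs]:
    total += w
    total = (total & 0xFFFF) + (total >> 16)
print(f"{total:016b}")   # 1111111111111111
```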
Here's a complete example with a real header of an IPv4 packet.
In the following example, I use bc, printf and here strings to calculate the header checksum and verify it. Consequently, it should be easy to reproduce the results on Linux by copy-pasting the commands.
These are the twenty bytes of our example packet header:
45 00 00 34 5F 7C 40 00 40 06 [00 00] C0 A8 B2 14 C6 FC CE 19
The sender hasn't calculated the checksum yet. The two bytes in square brackets are where the checksum will go. The checksum's value is initially set to zero.
We can mentally split up this header as a sequence of ten 16-bit values: 0x4500, 0x0034, 0x5F7C, etc.
Let's see how the sender of the packet calculates the header checksum:
Add all 16-bit values to get 0x42C87: bc <<< 'obase=16;ibase=16;4500 + 0034 + 5F7C + 4000 + 4006 + 0000 + C0A8 + B214 + C6FC + CE19'
The leading digit 4 is the carry count, we add this to the rest of the number to get 0x2C8B: bc <<< 'obase=16;ibase=16;2C87 + 4'
Invert¹ 0x2C8B to get the checksum: 0xD374
Finally, insert the checksum into the header:
45 00 00 34 5F 7C 40 00 40 06 [D3 74] C0 A8 B2 14 C6 FC CE 19
Now the header is ready to be sent.
The recipient of the IPv4 packet then creates the checksum of the received header in the same way:
Add all 16-bit values to get 0x4FFFB: bc <<< 'obase=16;ibase=16;4500 + 0034 + 5F7C + 4000 + 4006 + D374 + C0A8 + B214 + C6FC + CE19'
Again, there's a carry count so we add that to the rest to get 0xFFFF: bc <<< 'obase=16;ibase=16;FFFB + 4'
If the checksum is 0xFFFF, as in our case, the IPv4 header is intact.
See the Wikipedia entry for more information.
¹Inverting the hexadecimal number means converting it to binary, flipping the bits, and converting it to hexadecimal again. You can do this online or with Bash: hex_nr=0x2C8B; hex_len=$(( ${#hex_nr} - 2 )); inverted=$(printf '%X' "$(( ~ hex_nr ))"); trunc_inverted=${inverted: -hex_len}; echo $trunc_inverted
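For completeness, the same calculation as the bc commands above, sketched in Python (the function name is mine):

```python
def ipv4_header_checksum(header: bytes) -> int:
    """One's complement checksum over the 20-byte header; the
    checksum field itself (bytes 10-11) must be zero on input."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # big-endian 16-bit words
    while total > 0xFFFF:                           # add back the carry count
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                          # invert to get the checksum

header = bytes.fromhex("450000345F7C400040060000C0A8B214C6FCCE19")
print(f"{ipv4_header_checksum(header):04X}")   # D374
```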

Resources