BEAM file format for "FunT" - Erlang

We are using Erlang version 22 and rebuild the BEAM file multiple times. Inside the BEAM file, we found that the last 4 bytes of "FunT", just before "LitT", are always changing (they differ from build to build). Is there any explanation of how those last 4 bytes are generated? Because of those changing 4 bytes, the checksums of the BEAM image are different build after build.
00000260: 4675 6e54 0000 001c 0000 0001 0000 0013 FunT............
00000270: 0000 0001 0000 0011 0000 0000 0000 0001 ................
00000280: 0432 95c1 4c69 7454 0000 00c3 0000 00f6 .2..LitT........

The best reference for the BEAM file format that I know of is this one. Those four bytes are the "old unique" value for each lambda function. It's being generated here, using part of the MD5 sum of the module.
It's odd that this bit would change if nothing else in the module changes. My best guess would be to pass the deterministic option to the compiler and hope that fixes things.

Related

Little Endian vs. Big Endian architectures

I have a question about endianness that comes from a disagreement with a university professor. I could not find a way to settle it and work out the right answer, so I am asking and opening a discussion with the Stack Overflow community.
Let's say we have the number 0x11FF1 defined as an integer; in C++, for example, that would be int num = 0x11FF1. I say that on a little-endian machine the number will be laid out in memory as:
addr[0] is f1 addr[1] is 1f addr[2] is 01 addr[3] is 00
in binary : 1111 0001 0001 1111 0000 0001 0000 0000
since the compiler treats 0x11ff1 as 0x00011ff1 and considers 00 to be the 1st byte, 01 the 2nd byte, and so on. For big endian, I believe it will look like:
addr[0] is 00 addr[1] is 01 addr[2] is 1f addr[3] is f1
in binary : 0000 0000 0000 0001 0001 1111 1111 0001
But he has another opinion. He says:
Little Endian: (image)
Big Endian: (image)
Actually, I don't see anything logical in his representation, so I hope the community can resolve this disagreement. Thanks in advance.
Your hex and binary numbers are correct.
Your (professor's?) French image for little-endian makes no sense at all; none of the 3 representations is consistent with either of the other 2.
73713 is 0x11ff1 in hex, so there aren't any 0xFF bytes (binary 11111111).
In 32-bit little-endian, the bytes are F1 1F 01 00 in order of increasing memory address.
You can get that by taking pairs of hex digits (bytes / octets) from the low end of the full hex value, then fill with zeros once you've consumed the value.
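If you want to verify this on your own machine, here is a minimal C++ sketch (just an illustration, using the 0x11FF1 value from the question) that copies the object representation of the int into a byte array and prints each byte in order of increasing address:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        uint32_t num = 0x11FF1;                 // the value from the question
        unsigned char bytes[4];
        std::memcpy(bytes, &num, sizeof num);   // copy the object representation

        // Little-endian machine: f1 1f 01 00
        // Big-endian machine:    00 01 1f f1
        for (int i = 0; i < 4; ++i)
            std::printf("addr[%d] is %02x\n", i, bytes[i]);
    }

On x86/x86-64 (little-endian) this prints the f1 1f 01 00 order described above.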
It looks like they maybe padded the wrong side of the hex value with 0s to zero-extend to 32 bits as 0x11ff1000, not 0x00011ff1. Note these are full hex values of the whole number, not an attempt to break it down into separate hex bytes in any order.
But the hex and binary don't match each other; their binary ends with an all-ones byte, so it has FF as the high byte, not the 3rd byte. I didn't check if that matches their hex in PDP (mixed) endian.
They broke up their hex column into 4 byte-sized groups, which would seem to indicate that it's showing bytes in memory order. But that column is the same between their big- and little-endian images, so apparently that's not what they're doing, and they really did just extend it to 32 bits by left shifting (padding with low zeros instead of high zeros).
Also, the binary fields in the big- and little-endian images aren't the reverse of each other. To flip from big to little endian, you reverse the order of the bytes within the integer, keeping each byte value the same (like x86 bswap). Their 11111111 (FF) byte is 2nd in their big-endian version, but last in little-endian.
TL;DR: unfortunately, nothing about those images makes any sense that I can see.

How do I determine if there is an error in a SEC code?

I thought I understood how to find and correct an error in a SEC Hamming code, but then a question in my textbook made me doubt my ability...
Consider a SEC code that protects 8 bit words with 4 parity bits. If we read the value 0x375, is there an error? If so, correct the error.
So 0x375 is equivalent to 0011 0111 0101
I locate the parity bits and the positions each one covers (numbering bit 1 as the leftmost bit of 0011 0111 0101):
p1 covers positions 1, 3, 5, 7, 9, 11
p2 covers positions 2, 3, 6, 7, 10, 11
p4 covers positions 4, 5, 6, 7, 12
p8 covers positions 8, 9, 10, 11, 12
Now, to detect an error, I check whether any parity group is odd...
p1: 0011 0111 0101 = 010100 = EVEN (0)
p2: 0011 0111 0101 = 011110 = EVEN (0)
p4: 0011 0111 0101 = 10111 = EVEN (0)
p8: 0011 0111 0101 = 10101 = ODD (1)
I was under the impression that to find the error bit you simply add the parity bit numbers that are ODD. In my case, only parity bit 8 is odd. So error bit = p8 = 8. But I didn't think a parity bit number could be the error bit so I must have done something wrong?
The parity bit is just like any other bit in that it can have an error just like any of the other bits, so if only one parity bit indicates error, the parity bit itself is in error. You did nothing wrong.
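For illustration, here is a small C++ sketch of the same check (my own illustration, assuming the bit-numbering convention used above: bit 1 is the leftmost bit of the 12-bit word, and parity bits sit at positions 1, 2, 4 and 8). It XORs each parity group and adds up the weights of the groups that come out odd:

    #include <cstdio>

    // Syndrome of a 12-bit SEC codeword; bit 1 is the leftmost (most significant)
    // bit, matching the numbering in the worked example above.
    int syndrome(unsigned word) {
        int s = 0;
        for (int p = 1; p <= 8; p <<= 1) {            // parity positions 1, 2, 4, 8
            int parity = 0;
            for (int pos = 1; pos <= 12; ++pos)
                if (pos & p)                           // position is covered by this parity bit
                    parity ^= (word >> (12 - pos)) & 1;
            if (parity) s += p;                        // odd group -> add its position weight
        }
        return s;                                      // 0 = no error, otherwise the bit position in error
    }

    int main() {
        unsigned w = 0x375;
        int s = syndrome(w);
        std::printf("syndrome = %d\n", s);             // prints 8: parity bit p8 itself is flipped
        if (s) w ^= 1u << (12 - s);                    // flip the indicated bit to correct it
        std::printf("corrected = 0x%03X\n", w);        // 0x375 with position 8 flipped -> 0x365
    }

The syndrome comes out as 8, i.e. only the p8 group is odd, so the corrected word differs from 0x375 only in the p8 parity bit.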
reference: my professor's lecture slide

Memory range calculation

I have a question about calculating memory addresses:
I am given 3 Memory blocks:
- 1x 1KByte (IC1) - 2^10 Byte
- 2x 4KByte (IC2 + IC3) - 2^12 Byte
So far I calculated these memory addresses:
IC1:
0000 0000 0000 0000 (Starting address)
0000 0011 1111 1111 (Ending address; I got this by inverting the last 10 bits)
IC2:
0000 0100 0000 0000 (Starting address) - previous ending address + 1
0000 1011 1111 1111 (Ending address; I got this by inverting the last 12 bits)
However, at IC3 there has to be some method to get a carry bit into my first 0000 block, as I am running out of space when using only the last 3 hex digits:
IC3:
0000 1100 0000 0000 (Starting address) - previous ending address + 1
What is the ending address now? If I inverted the last 12 bits again, I would get an address that is already in use. It's pretty obvious that the next hex digit has to be increased to 1, but I can't find a rule for doing this.
Any advice?
I'm not sure why you're using bit flipping for this; it would be a very efficient implementation if it worked, but it doesn't seem to:
Your IC2 block's starting address (in hex) is 400 (which is 1K from the start of memory, all good so far), but your ending address in hex is BFF when it should be 13FF: 1K + 4K = 5K, so the last address is 5K - 1, which in binary is 0001 0011 1111 1111.
Is there a reason why you cannot calculate these addresses using addition instead of bit-flipping?
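As a sanity check, here is a short C++ sketch of the addition approach (block order and a base address of 0 are assumed from the question): each block's ending address is its starting address plus its size minus 1, and the next block starts right after.

    #include <cstdio>

    int main() {
        // Sizes of the three ICs in bytes: 1 KByte, 4 KByte, 4 KByte
        const unsigned sizes[] = { 1u << 10, 1u << 12, 1u << 12 };
        const char*    names[] = { "IC1", "IC2", "IC3" };

        unsigned start = 0;
        for (int i = 0; i < 3; ++i) {
            unsigned end = start + sizes[i] - 1;    // last address = start + size - 1
            std::printf("%s: 0x%04X - 0x%04X\n", names[i], start, end);
            start = end + 1;                        // next block begins right after
        }
    }

This prints IC1: 0x0000 - 0x03FF, IC2: 0x0400 - 0x13FF and IC3: 0x1400 - 0x23FF.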

How do I write an extended network prefix?

I need help with this:
Assume that you have been assigned the 200.35.1.0 /24 network block
Define an extended network prefix that allows the creation of 20 hosts on each subnet.
I've written the addresses in binary form:
IP: 1100 1000 . 0010 0011 . 0000 0001 . 0000 0000
Mask: 1111 1111 . 1111 1111 . 1111 1111 . 0000 0000
extended network prefix must be:
New Mask: 1111 1111 . 1111 1111 . 1111 1111 . xxxx xxxx
I know that 2^value - 2 = subnets
But how do you know how many are required? The number of subnets is not given. Help?
The next largest power of 2 from the required 20 hosts is 32.
32 is 2^5.
There are 32 bits in an address or mask.
32 bits in the address - 5 bits for the host part = 27 bits for the mask length
A 27-bit mask is 11111111.11111111.11111111.11100000
Take the original network address, AND it with your new mask to get the first subnet
Add 32 for each successive subnet
Convert the binaries back to decimal
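If it helps, here is a small C++ sketch of those steps (my own illustration; the 200.35.1.0 /24 block from the question is hard-coded, and printing 4 subnets is just an example):

    #include <cstdint>
    #include <cstdio>

    int main() {
        const int hostBits = 5;                    // 2^5 = 32 >= 20 hosts + network + broadcast
        const int prefix   = 32 - hostBits;        // /27

        uint32_t mask    = ~0u << hostBits;        // 255.255.255.224
        uint32_t network = (200u << 24) | (35u << 16) | (1u << 8);   // 200.35.1.0

        std::printf("mask = %u.%u.%u.%u (/%d)\n",
                    mask >> 24, (mask >> 16) & 0xFF, (mask >> 8) & 0xFF, mask & 0xFF, prefix);

        // Successive subnets are 32 addresses (the block size) apart
        for (int i = 0; i < 4; ++i) {
            uint32_t subnet = (network & mask) + i * (1u << hostBits);
            std::printf("subnet %d = %u.%u.%u.%u\n", i,
                        subnet >> 24, (subnet >> 16) & 0xFF, (subnet >> 8) & 0xFF, subnet & 0xFF);
        }
    }

It prints the /27 mask 255.255.255.224 and the first subnets 200.35.1.0, 200.35.1.32, 200.35.1.64, 200.35.1.96.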

Binary format of memory address. Computer organization

I'm having a bit of an issue understanding what is going on here, and can't seem to wrap my head about it.
Notes:
Course notes about topic
Example:
Memory location 0x1f6
What is the binary format of this address? 1 1111 0110
What are tag, block index, and block offset? 3, 7, 6
My own work:
Memory location 0x033
What is the binary format of this address? 0 0011 0011
What are tag, block index, and block offset? 0, 6, 3
Memory location 0x009
What is the binary format of this address? 0 0000 1001
What are tag, block index, and block offset? 0, 1, 1
Memory location 0x652
What is the binary format of this address? 0110 0101 0010
What are tag, block index, and block offset? 12, 10, 2
These are my attempts, but I don't have a clue if I'm doing it right, and I have a feeling that I am not, at least for the last one, which I believe is wrong. Can anyone point me in the right direction?
I ended up figuring it out. The block offset depends on the block size, in this case 16 bytes, so it requires 4 binary digits to represent it. Next, the block index depends on the number of blocks, in this case 8 (0-7), which requires 3 binary digits. Finally, the tag is made up of the remaining binary digits after you convert the hex memory location to binary.
Example
Memory location 0x652
What is the binary format of this address? 0110 0101 0010
What is the binary representation of tag, block index, and block offset? 1100 101 0010
What are tag, block index, and block offset? 12, 5, 2
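Here is a small C++ sketch of that decomposition (my own illustration, assuming the same geometry: 16-byte blocks give 4 offset bits, 8 blocks give 3 index bits, and the tag is whatever is left), applied to the addresses above:

    #include <cstdio>

    int main() {
        const unsigned offsetBits = 4;   // 16-byte blocks
        const unsigned indexBits  = 3;   // 8 blocks

        const unsigned addrs[] = { 0x1f6, 0x033, 0x009, 0x652 };
        for (unsigned a : addrs) {
            unsigned offset = a & ((1u << offsetBits) - 1);                // low 4 bits
            unsigned index  = (a >> offsetBits) & ((1u << indexBits) - 1); // next 3 bits
            unsigned tag    = a >> (offsetBits + indexBits);               // remaining bits
            std::printf("0x%03X -> tag=%u index=%u offset=%u\n", a, tag, index, offset);
        }
    }

For 0x1F6 it prints tag=3, index=7, offset=6, matching the example from the notes.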
