To detect n_errors errors in a code of n_total bits length, we must sacrifice a certain number n_check of the bits for some sort of checksum.
My question is: what is the method where I have to sacrifice the fewest bits n_check to detect a given number of errors n_errors in a code of n_total bits length?
If there is no general answer to this question, I would greatly appreciate some hints concerning methods for the following conditions:
n_total = 32, n_errors > 1, and
obviously n_check should be as small as possible.
Thank you.
Link to CRC Zoo notes:
http://users.ece.cmu.edu/~koopman/crc/notes.html
If you look at the table for 3 bit crc:
http://users.ece.cmu.edu/~koopman/crc/crc3.html
You can see that the second entry, 0xb => x^3 + x + 1, has HD (Hamming distance) 3 for 4 data bits, for a total size of 7 bits. This can detect all 2 bit error patterns (out of 7 bits), but some 3 bit patterns will fail. An obvious case, where all bits should be zero, would be
0 0 0 1 0 1 1 (when it should be 0 0 0 0 0 0 0)
This is a simple example where the number of 1 bits in the polynomial limits the maximum number of bit errors that can be detected. To verify HD = 3 (2 bit errors detected), all 21 cases of 7 total bits with 2 bits bad were checked.
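If you want to reproduce that check, here is a minimal brute-force sketch in C (my own illustration, not Koopman's code): it treats each 7-bit error pattern as a polynomial and counts how many 2 bit patterns are divisible by x^3 + x + 1, i.e. would go undetected.

#include <stdio.h>

static unsigned crc3_remainder(unsigned bits)   /* bits: a 7-bit error pattern */
{
    /* long division by x^3 + x + 1 (binary 1011), most significant bit first */
    for (int i = 6; i >= 3; i--)
        if (bits & (1u << i))
            bits ^= 0xBu << (i - 3);
    return bits;                                /* 3-bit remainder */
}

int main(void)
{
    int undetected = 0;
    for (int i = 0; i < 7; i++)                 /* all 21 two-bit patterns */
        for (int j = i + 1; j < 7; j++)
            if (crc3_remainder((1u << i) | (1u << j)) == 0)
                undetected++;
    printf("undetected 2-bit patterns: %d\n", undetected);        /* prints 0 */

    /* the 3 bit pattern from above, 0 0 0 1 0 1 1, is the generator itself
       and therefore divides evenly: it is not detected */
    printf("remainder of 0001011: %u\n", crc3_remainder(0x0B));   /* prints 0 */
    return 0;
}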
If you check out 32 bit CRCs, you'll see that 0x04c11db7 (Ethernet 802.3) has HD=6 (5 bit error detection) at 263 data bits => 263+32 = 295 total bits, while 0x1f4acfb13 has HD=6 at 32736 data bits => 32736+32 = 32768 total bits.
Here is a pdf article about a search for good CRCs:
https://users.ece.cmu.edu/~koopman/networks/dsn02/dsn02_koopman.pdf
Finding "good" CRCs for specific Hamming distances requires some knowledge about the process. For example, in the case of 0x1f4acfb13 with HD=6 (5 bad bit detection), there 314,728,365,660,920,250,368 possible combinations of 5 bits bad out of 32768 bits. However, 0x1f4acfb13 = 0x1f4acfb13 = 0xc85f*0x8011*0x3*0x3, and either of the 0x3 (x+1) terms will detect any odd number of error bits, which reduces the search to 4 bad bit cases. For the minimal size of a message that fails with this polynomial, the first and last bits will be bad. This reduces the searching down to just 2 of the 5 bits, which reduces the number of cases to about 536 million. Rather than calculating CRC for each bit combination, a table of CRCs can be created for each 1 bit in an otherwise all 0 bit message, and then the table entries corresponding to specific bits can be xor'ed to speed up the process. For a polynomial where it isn't the first and last bits, a table of CRC's could be generated for all 2 bit errors (assuming such a table fits in memory), then sorted, then checked for duplicate values (with sorted data, this only requires one sequential pass of the sorted table). Duplicate values would correspond to a 4 bit failure. Other situations will require different approaches, and in some cases, it's still time consuming.
I'm reading the book Operating Systems: Three Easy Pieces by Remzi Arpaci-Dusseau. In chapter 18, "Paging: Introduction", it is written in the first paragraph:
(real address spaces are much bigger, of course,
commonly 32 bits and thus 4-GB of address space, or even 64 bits)
Now if 1 byte is 8 bits, shouldn't 32 bits be 32/8 = 4 bytes of space? I have seen the math for getting the answer of 4 GB:
2^10 B = 1 KB
2^10 KB = 1 MB
2^10 MB = 1 GB
But then this is assuming 2^0 = 1 B. Isn't this simply wrong?
What am I missing? What does my answer (4Bytes) represent here?
This question is related: How many bits are needed to address this much memory? But it doesn't address why my math is incorrect (the OP there also has the exact same confusion).
Let's say that I change the word size to 64 MB (wild, I know). Then the number of words is 1. According to the answers there, the number of address bits would be 0, since 2^0 = 1. 0 bits? Then where and when do we use the fact that 1 byte = 8 bits?
Any help would be appreciated.
Today, RAM is byte addressable: each address put on the address bus returns 1 byte. If you have 32 bits, the number of different addresses that you can come up with is 2^32 = 4,294,967,296. Since you can form that many different addresses, you can address that many bytes, and that amount of bytes is called 4 GB.
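If it helps to see the arithmetic spelled out, here is a tiny C check (just my own sketch of the calculation):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 32 address lines -> 2^32 distinct addresses; with one byte per
       address, that is 2^32 bytes of addressable memory */
    uint64_t addresses = 1ull << 32;
    printf("%llu addresses -> %llu bytes -> %llu GB\n",
           (unsigned long long)addresses,
           (unsigned long long)addresses,            /* 1 byte per address */
           (unsigned long long)(addresses >> 30));   /* 2^30 bytes per GB  */
    return 0;
}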
I'm trying to implement a DEFLATE compressor and I have to decide whether to compress a block using the static Huffman code or create a dynamic one.
What is the rationale behind the lengths associated with the static code?
(this is the table included in the RFC)
Lit Value    Bits
---------    ----
  0 - 143     8
144 - 255     9
256 - 279     7
280 - 287     8
I thought the static code would be more biased towards plain ASCII text; instead it looks like it slightly favors the length symbols used for LZ77 matches.
What is a good heuristic to decide whether to use the static code?
I was thinking of building a probability distribution from a sample of the input data and calculating a distance (maybe EMD?) from the probabilities implied by the static code.
I would guess that the creator of the code took a large sample of literals and lengths from compressed data, likely including executables along with text, and found typical code lengths over the large set. They were then approximated with the table shown. However the author passed away many years ago, so we'll never know for sure.
You don't need a heuristic. Once you have done the work to find matching strings, it is comparatively very fast to compute the number of bits in the block for both a dynamic and static representation. Then simply pick the smaller one. Or the static one if equal (decodes faster).
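A sketch of that comparison (my own illustration; the frequency arrays, the dynamic code lengths and the header_bits term are assumptions about how your compressor keeps its statistics, not part of any particular API):

#include <stdint.h>

/* code length of a literal/length symbol under the fixed (static) code,
   straight from the table in the question */
static unsigned fixed_litlen_bits(unsigned sym)
{
    if (sym <= 143) return 8;
    if (sym <= 255) return 9;
    if (sym <= 279) return 7;
    return 8;                                        /* 280..287 */
}

/* cost in bits of coding the gathered symbol frequencies with the static
   code; the extra length/distance bits are the same for either block type,
   so they can be left out of the comparison */
uint64_t static_cost(const uint64_t freq_lit[288], const uint64_t freq_dist[30])
{
    uint64_t bits = 3;                               /* block header */
    for (unsigned s = 0; s < 288; s++)
        bits += freq_lit[s] * fixed_litlen_bits(s);
    for (unsigned d = 0; d < 30; d++)
        bits += freq_dist[d] * 5;                    /* fixed distance codes: 5 bits */
    return bits;
}

/* same frequencies coded with the dynamic code lengths you just built;
   header_bits is whatever it costs to transmit those code lengths */
uint64_t dynamic_cost(const uint64_t freq_lit[288], const uint64_t freq_dist[30],
                      const uint8_t len_lit[288], const uint8_t len_dist[30],
                      uint64_t header_bits)
{
    uint64_t bits = 3 + header_bits;
    for (unsigned s = 0; s < 288; s++)
        bits += freq_lit[s] * len_lit[s];
    for (unsigned d = 0; d < 30; d++)
        bits += freq_dist[d] * len_dist[d];
    return bits;
}

/* emit a static block iff static_cost(...) <= dynamic_cost(...) */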
I don't know about rationale, but there was a small amount of irrationale in choosing the static code lengths:
In the table in your question, the maximum static code number there is 287, but the DEFLATE specification only allows up to code 285, meaning code lengths have wastefully been assigned to two invalid codes. (And not even the longest ones either!) It's a similar story with the table for distance codes, with 32 codes having lengths assigned, but only 30 valid.
So there are some easy improvements that could have been made, but that said, without some prior knowledge of the data, it's not really possible to produce anything that's massively more efficient generally. The "flatness" of the table (no code longer than 9 bits) reduces the worst-case performance to 1 extra bit per byte of uncompressable data.
I think the main rationale behind the groupings is that by keeping group sizes to a multiple of 8, it's possible to tell which group a code belongs to by looking at its 5 most significant bits. That also tells you its length, along with what value to add to the code to immediately get the symbol value itself:
00000 00   .. 00101 11    7 bits  + 256 -> (256..279)
00110 000  .. 10111 111   8 bits  - 48  -> (  0..143)
11000 000  .. 11000 111   8 bits  + 88  -> (280..287)
11001 0000 .. 11111 1111  9 bits  - 256 -> (144..255)
So in theory you could set up a lookup table with 32 entries to quickly read in the codes, but it's an uncommon case and probably not worth optimising for.
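A minimal sketch of that idea (my own, and it assumes the caller has already peeked the next bits of the stream with the first bit of the Huffman code in the most significant position, as DEFLATE defines code bit order; it is not taken from any particular decoder):

#include <stdio.h>

/* how many bits the fixed literal/length code has, judging only by its
   first 5 bits */
static unsigned fixed_code_length(unsigned prefix5)
{
    if (prefix5 <= 5)  return 7;   /* 256..279 */
    if (prefix5 <= 23) return 8;   /*   0..143 */
    if (prefix5 == 24) return 8;   /* 280..287 */
    return 9;                      /* 144..255 */
}

/* symbol value for a complete code of nbits bits, per the table above */
static unsigned fixed_code_symbol(unsigned code, unsigned nbits)
{
    if (nbits == 7) return code + 256;
    if (nbits == 9) return code - 256;
    return (code < 0xC0) ? code - 48 : code + 88;    /* the two 8-bit ranges */
}

int main(void)
{
    /* pretend the next 9 peeked bits, code bits first, are 0 0 1 1 0 0 0 0 1
       (an 8-bit code followed by one unrelated bit) */
    unsigned window = 0x061;                  /* 0b001100001               */
    unsigned prefix = window >> 4;            /* first 5 bits of the code  */
    unsigned nbits  = fixed_code_length(prefix);
    unsigned code   = window >> (9 - nbits);  /* the complete code         */
    printf("length %u, symbol %u\n", nbits, fixed_code_symbol(code, nbits));
    /* prints: length 8, symbol 0  (literal byte 0) */
    return 0;
}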
There are only really two cases (with some overlap) where Fixed Huffman blocks are likely to be the most efficient:
where the input size in bytes is very small, Static Huffman can be more efficient than Uncompressed, because an Uncompressed block uses a 32-bit header, while a Static Huffman block needs only a 7-bit footer, plus 1 bit of potential overhead per byte.
where the output size is very small (ie. small-ish, highly compressible data), Static Huffman can be more efficient than Dynamic Huffman - again because Dynamic Huffman uses a certain amount of space for an additional header. (A practical minimum header size is difficult to calculate, but I'd say at least 64 bits, probably more.)
That said, I've found they are actually helpful from a developer's perspective, because it's very easy to implement a Deflate-compatible function using Static Huffman blocks, and to iteratively improve from there to get more efficient algorithms working.
I want to port a 32 by 32 bit unsigned multiplication to a 24-bit DSP (it's a linear congruential generator, so I'm not allowed to truncate; also, I don't want to replace the current LCG with a 24 bit one yet). The available data types are 24 and 48 bit ints.
Only the 32 least significant bits of the result are needed. Do you know any hacks to implement this in fewer multiplies, masks and shifts than the usual way?
The line looks like this:
//val is an int(32 bit)
val = (1664525 * val) + 1013904223;
An outline would be (in my current compiler style):
static uint48_t val = SEED;
...
val = 0xFFFFFFFFUL & ((1664525UL * val) + 1013904223UL);
and hopefully the compiler will recognise:
it can use a multiply and accumulate command
it only needs a reduced multiply algorithm due to the "high word" of the constant being zero
the AND could be effected by resetting the upper bits or multiplying a constant and restoring
...other stuff depends on your {mystery dsp} target
Note: if you scale up the coefficients by 2^16, you can get the truncation for free, but due to lack of info you will have to explore/decide if it is better overall.
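One way to read that scaling note (my interpretation, offered only as a sketch): keep the LCG state pre-shifted left by 16, so the natural wrap of 48-bit arithmetic performs the mod 2^32 and the explicit AND disappears. Emulated here in portable C with a 48-bit mask standing in for the DSP's 48-bit wrap:

#include <stdint.h>
#include <stdio.h>

#define MASK48 0xFFFFFFFFFFFFull      /* stands in for the DSP's 48-bit wrap */

int main(void)
{
    uint32_t val    = 12345;                           /* reference 32-bit LCG */
    uint64_t scaled = ((uint64_t)val << 16) & MASK48;  /* state kept as val * 2^16 */

    for (int i = 0; i < 5; i++) {
        val = 1664525u * val + 1013904223u;            /* the original line */
        /* the uint64_t product may exceed 64 bits; wrapping at 2^64 and then
           masking to 48 bits still yields the correct value mod 2^48 */
        scaled = (1664525u * scaled + ((uint64_t)1013904223u << 16)) & MASK48;
        printf("%08x %08x\n", val, (uint32_t)(scaled >> 16));   /* columns match */
    }
    return 0;
}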
(This is more an elaboration of why two multiplications 24×24→n, 31<n, are enough for 32×32→min(n, 40).)
The question discloses amazingly little about the capabilities available to build a method 32×21→32 in fewer [24×24] multiplies, masks and shifts than the usual way on 24 and 48 bit ints & a DSP (I read: high throughput, not high latency, 24×24→48).
As long as there indeed is a 24×24→48 multiply (or even a 24×24+56→56 MAC) and one factor is less than 24 bits, the question is pointless, a second multiply being the compelling solution.
The usual composition of a 24<n<48×24<m<48→24<p multiply from 24×24→48 uses three of the latter; a compiler should know as well as a coder that "the fourth multiply" would yield bits with a significance/position exceeding the combined lengths of the lower parts of the factors.
So, is it possible to generate "the long product" using just a second 24×24→48?
Let the (bytes of the) factors be w_xyz and W_XYZ, respectively, the underscores suggesting that "the Ws" are the lower significance bits in the higher significance words/ints if interpreted as 24 bit ints. The first 24×24→48 multiply gives the sum of
zX
yX zY
xX yY zZ
xY yZ
xZ
What is needed in addition (the terms the first multiply does not produce) is
wZ + zW.
This can be computed using one combined multiplication of
((w<<16)|(z & 0xff)) × ((W<<16)|(Z & 0xff)). (Never mind the 17th bit of wZ+zW "running" into wW.)
(In the first revision of this answer, I foolishly produced wZ and zW separately - their sum is wanted in the end, anyway.)
(Annoyingly, this is about all you can do for 24×24→24 as a base operation too - beyond this "combining multiplication", you need four instead of one.)
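To make that concrete (my own sketch, written in portable C with uint64_t standing in for the DSP's 24×24→48 multiplier, since the actual target is unknown): the low 32 bits of the product need only the low 24×24 product plus the low 8 bits of wZ + zW shifted up by 24, and the combined multiply delivers exactly those bits.

#include <stdint.h>
#include <stdio.h>

static uint64_t mul24(uint32_t a, uint32_t b)   /* emulated 24x24 -> 48 */
{
    return (uint64_t)(a & 0xFFFFFF) * (b & 0xFFFFFF);
}

static uint32_t mul32_low(uint32_t A, uint32_t B)
{
    uint32_t L = A & 0xFFFFFF,  w = A >> 24;    /* low 24 bits, top byte */
    uint32_t M = B & 0xFFFFFF,  W = B >> 24;
    uint32_t z = A & 0xFF,      Z = B & 0xFF;   /* lowest bytes */

    uint64_t p1 = mul24(L, M);                  /* first 24x24 -> 48 */
    /* second, "combined" multiply: (w<<16 | z) * (W<<16 | Z) carries
       wZ + zW starting at bit 16; only its low 8 bits are needed */
    uint64_t p2 = mul24((w << 16) | z, (W << 16) | Z);
    uint32_t mid = (uint32_t)(p2 >> 16) & 0xFF; /* (wZ + zW) mod 256 */

    return (uint32_t)p1 + (mid << 24);          /* low 32 bits of A*B */
}

int main(void)
{
    uint32_t val  = 0x12345678;                 /* arbitrary test value */
    uint32_t want = 1664525u * val + 1013904223u;
    uint32_t got  = mul32_low(1664525u, val) + 1013904223u;
    printf("%08x %08x\n", (unsigned)want, (unsigned)got);   /* should match */
    return 0;
}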
Another angle to explore is choosing a different PRNG.
It may have to be >24 bits (tell!).
On a 24 bit machine, XorShift* (or even XorShift+) 48/32 seems worth a look.
I'm reading a book about assembly: Jones and Bartlett Publishers' Introduction to 80x86 Assembly.
The author gives exercises but no answers to them. Obviously, before going further, I want to make sure that I fully understand the chapter concepts.
So,
What is the 8-hex-digit address of the "last" byte for a PC with 32 MBytes of RAM?
This is my solution:
1) convert to bits
32 MBytes = 268435456 bits
2) I subtract 8 bits to remove the last byte
268435448
3) conversion to hexadecimal
FFFFFF8
So I got FFFFFF8
Does this look like a good answer?
No. For assembly programming it's very helpful to be able to do simple power-of-2 calculations in your head. 1K is 2^10. So 1M is 2^20. So 32M is 2^25 (because 2^5 = 32). So the address of the last byte is 2^25-1 (because the first byte is at 0). This is 25 bits that are all 1's (because 2^n-1 is always n 1's). In hex, this is six F's (4 bits per F) plus an additional 1, so prepending a zero to get 8 hex digits, you have 01FFFFFF.
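If you want to double-check such results, a couple of lines of C do the same arithmetic (just my own sanity check):

#include <stdio.h>

int main(void)
{
    unsigned long bytes = 32ul << 20;        /* 32 MB = 32 * 2^20 = 2^25 bytes */
    printf("%08lX\n", bytes - 1);            /* address of the last byte: 01FFFFFF */
    return 0;
}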
There are two things you should think about:
For most computers (all PCs) addresses are given in bytes, not in bits.
Therefore you must calculate: 32 MByte = 33554432 bytes, minus 1 = 33554431 = 01FFFFFF (hex), as "Gene" wrote in his answer.
In reality (if you are interested) you must also think about the fact that there is a "gap" in the address area (from 000A0000 to 000FFFFF) of real PCs, so either not all the RAM is usable or the last address of the RAM comes later. This area is used for the graphics card and the BIOS ROM.
My thinking is: 00007FFF,
because 32 MB = 32 * 1024 = 32768,
and the last byte would have address 32767 (7FFF).
I am preparing for a quiz in my computer science class, but I am not sure how to find the correct answers. The questions come in 4 varieties, such as--
Assume the following system:
Auxiliary memory containing 4 gigabytes,
Memory block equivalent to 4 kilobytes,
Word size equivalent to 4 bytes.
1. How many words are in a block, expressed as 2^_? (write the exponent)
2. What is the number of bits needed to represent the address of a word in the auxiliary memory of this system?
3. What is the number of bits needed to represent the address of a byte in a block of this system?
4. If a file contains 32 megabytes, how many blocks are contained in the file, expressed as 2^_?
Any ideas how to find the solutions? The teacher hasn't given us any examples with solutions, so I haven't been able to figure out how to do this by working backwards, and I haven't found any good resources online.
Any thoughts?
Questions like these basically boil down to working with exponents and knowing how the different pieces fit together. For example, from your sample questions, we would do:
How many words are in a block, expressed as 2^_? (write the exponent)
From your description we know that a word is 4 bytes (2^2 bytes) and that a block is 4 kilobytes (2^12 bytes). To find the number of words in one block we simply divide the size of a block by the size of a word (2^12 / 2^2) which tells us that there are 2^10 words per block.
What is the number of bits needed to represent the address of a word in the auxiliary memory of this system?
This type of question is essentially an extension of the previous one. First you need to find the number of words contained in the memory, and from that you can get the number of bits required to address a word in the memory. We are told that memory contains 4 gigabytes (2^32 bytes) and that a word is 4 bytes (2^2 bytes); therefore the number of words in memory is 2^32 / 2^2 = 2^30 words. From this we can deduce that 30 bits are required to represent the address of a word in memory, because each additional address bit doubles the number of addressable locations and we need 2^30 locations.
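If it helps, the two worked parts can be checked with a few lines of C (my own sketch of the arithmetic, nothing specific to the course):

#include <stdio.h>

/* log2 for exact powers of two */
static unsigned log2u(unsigned long long x)
{
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void)
{
    unsigned long long memory = 4ull << 30;   /* 4 GB auxiliary memory */
    unsigned long long block  = 4ull << 10;   /* 4 KB block            */
    unsigned long long word   = 4;            /* 4-byte word           */

    printf("words per block   : 2^%u\n", log2u(block / word));   /* 2^10 */
    printf("word address bits : %u\n",   log2u(memory / word));  /* 30   */
    return 0;
}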
Since this is tagged as homework I will leave the remaining questions as exercises :)
Work backwards. This is actually pretty simple mathematics. (Ignore the word "auxiliary".)
1. How much is a kilobyte? How much is 4 kilobytes? Try putting some numbers into 2^x, say x == 4. How much is 2^4 words? 2^8?
2. If you have 4 GB of memory, what is the highest address? How large a number can you express with 8 bits? 16 bits? Hint: 4 GB is an even power of 2. Which?
3. This is really the same question as 2, but with different input parameters.
4. How many kilobytes is a megabyte? Express 32 megabytes in kilobytes. Division will be useful.