Hardware implementation for integer data processing - image-processing

I am currently trying to implement a data path which processes image data expressed in grayscale as unsigned integers in the range 0 - 255. (Just for your information, my goal is to implement a Discrete Wavelet Transform in an FPGA.)
During the processing, intermediate values can be negative as well. As an example, one of the calculations is
result = 48 - floor((66+39)/2)
The floor function is used to keep the processing in integers. For the above case, the result is -4, which is outside the range 0~255.
With the above case in mind, I have a series of basic questions.
To deal with the negative intermediate numbers, do I need to represent all the data as the 'equivalent unsigned number' in 2's complement for the hardware design? e.g. -4 d = 1111 1100 b.
If I represent the data as 2's complement for the signed numbers, will I need 9 bits as opposed to 8 bits? Or, how many bits will I need to process the data properly? (With 8 bits, I cannot represent any number above 127 in 2's complement.)
How does negative-number division work if I use bit-wise shifting? If I want to divide the result, -4, by 4 by shifting it to the right by 2 bits, the result becomes 63 in decimal (0011 1111 in binary) instead of -1. How can I resolve this problem?
Any help would be appreciated!

If you can choose to use VHDL, then you can use the fixed point library to represent your numbers and choose your rounding mode, as well as allowing bit extensions etc.
In Verilog, well, I'd think twice. I'm not a Verilogger, but the arithmetic rules for mixing signed and unsigned datatypes seem fraught with foot-shooting opportunities.
Another option to consider might be MyHDL as that gives you a very powerful verification environment and allows you to spit out VHDL or Verilog at the back end as you choose.
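To illustrate the third question (dividing a negative number by shifting): in 2's complement, a logical right shift fills with zeros and destroys the sign, while an arithmetic right shift replicates the sign bit and implements floor division, which matches the floor() in your example. A small C sketch, not HDL, just to show the difference; in VHDL or Verilog you would declare the value as a signed type so that the shift (or division) is performed arithmetically:

#include <stdio.h>

int main(void)
{
    /* The example from the question: 48 - (66 + 39) / 2 = 48 - 52 = -4 */
    int result = 48 - (66 + 39) / 2;

    signed char   s = -4;    /* 2's complement bit pattern 1111 1100 */
    unsigned char u = 0xFC;  /* the same bits, read as an unsigned number */

    /* On signed operands most compilers (and HDL signed types) perform an
     * arithmetic shift: the sign bit is replicated, so -4 >> 2 == -1.
     * On the unsigned interpretation the shift is logical: 252 >> 2 == 63.
     * Note that an arithmetic shift rounds toward minus infinity (floor
     * division): -5 >> 2 == -2, consistent with the floor() above. */
    printf("%d %d %d\n", result, s >> 2, u >> 2);  /* typically: -4 -1 63 */
    return 0;
}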

Related

16 bit Checksum fuzzy analysis - Leveraging "collisions", biases a thing?

If playing around with CRC RevEng fails, what next? That is the gist of my question. I am trying to learn how to think for myself, not just looking for a one-time answer to one problem.
Assuming the following:
1.) You have full control of white box algorithm and can create as many chosen sample messages as you want with valid 16 bit / 2 byte checksums
2.) You can verify as many messages as you want to see if they are valid or not
3.) Static or dynamic analysis of the white box code is off limits (say the MCU is of a lithography that would require electron microscope to analyze for example, not impossible but off limits for our purposes).
Can you use any of these methods or lines of thinking:
1.) Inspect "collisions", i.e. different messages with same checksum. Perhaps XOR these messages together and reveal something?
2.) Leverage strong biases towards certain checksums?
3.) Leverage "Rolling over" of the checksum "keyspace", i.e. every 65535 sequentially incremented messages you will see some type of sequential patterns?
4.) AI ?
Perhaps there are other strategies I am missing?
The CRC RevEng tool was not able to find the answer using numerous configuration settings.
Key properties and attacks:
1. If you have two messages + CRCs of the same length and exclusive-or them together, the result is one message and one pure CRC on that message, where "pure" means a CRC definition with a zero initial value and zero final exclusive-or. This helps take those two parameters out of the equation, which can be solved for later. You do not need to know where the CRC is, how long it is, or which bits of the message are participating in the CRC calculation. This linear property holds (see the sketch after this list).
2. Working with purified examples from #1, if you take any two equal-length messages + pure CRCs and exclusive-or them together, you will get another valid message + pure CRC. Using this fact, you can use Gauss-Jordan elimination (over GF(2)) to see how each bit in the message affects other generated bits in the message. With this you can find out a) which bits in the message are participating, b) which bits are likely to be the CRC (though it is possible that other bits could be a different function of the input, which can be resolved by the next point), and c) verify that the check bits are each indeed a linear function of the input bits over GF(2). This can also tell you that you don't have a CRC to begin with, if there don't appear to be any bits that are linear functions of the input bits. If it is a CRC, this can give you a good indication of the length, assuming you have a contiguous set of linearly dependent bits.
3. Assuming that you are dealing with a CRC, you can now take the input bits and the output bits for several examples and try to solve for the polynomial given different assumptions for the ordering of the input bits (reflected or not, perhaps by bytes or other units) and the direction of the CRC shifting (reflected or not). Since you're talking about an allegedly 16-bit CRC, this can be done most easily by brute force, trying all 32,768 possible polynomials for each set of bit-ordering assumptions. You can even use brute force on a 32-bit CRC. For larger CRCs, e.g. 64 bits, you would need to use Berlekamp's algorithm for factoring polynomials over finite fields in order to solve the problem before the heat death of the universe. Then you would factor each message + pure CRC as a polynomial, and look for common factors over multiple examples.
4. Now that you have the message bits, CRC bits, bit orderings, and the polynomial, you can go back to your original non-pure messages + CRCs, and solve for the initial value and final exclusive-or. All you need are two examples with different lengths. Then it's a simple two-equations-in-two-unknowns over GF(2).
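As a minimal illustration of the linearity in point 1, here is a C sketch using an arbitrary CRC-16 polynomial (0x1021, chosen only for the example): with a zero initial value and zero final exclusive-or, the CRC of the XOR of two equal-length messages equals the XOR of their CRCs.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bit-at-a-time CRC-16 over polynomial 0x1021; init/xorout are parameters,
 * so the "pure" CRC is simply crc16(msg, len, 0x0000, 0x0000). */
static uint16_t crc16(const uint8_t *msg, size_t len,
                      uint16_t init, uint16_t xorout)
{
    uint16_t crc = init;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)(msg[i] << 8);
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc ^ xorout;
}

int main(void)
{
    uint8_t m1[4] = { 0x12, 0x34, 0x56, 0x78 };
    uint8_t m2[4] = { 0xA5, 0x5A, 0x00, 0xFF };
    uint8_t mx[4];
    for (int i = 0; i < 4; i++)
        mx[i] = m1[i] ^ m2[i];

    /* Linearity of the pure CRC: crc(m1 ^ m2) == crc(m1) ^ crc(m2) */
    uint16_t c1 = crc16(m1, 4, 0, 0);
    uint16_t c2 = crc16(m2, 4, 0, 0);
    uint16_t cx = crc16(mx, 4, 0, 0);
    printf("crc(m1)^crc(m2) = %04X, crc(m1^m2) = %04X\n",
           (unsigned)(c1 ^ c2), (unsigned)cx);
    return 0;
}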
Enjoy!

Bit encoding for vector of rational numbers

I would like to implement ultra compact storage for structures with rational numbers.
In the book "Theory of Linear and Integer Programming" by Alexander Schrijver, I found the definition of bit sizes (page. 15) of rational number, vector and matrix:
The representation of rational number is clear: single bit for sign and logarithm for quotient and fraction.
I can't figure out how vector can be encoded only in n bits to distinguish between its elements?
For example what if I would like to write vector of two elements:
524 = 1000001100b, 42 = 101010b. How can I use only 2 additional bits to specify when 1000001100 ends and 101010 starts?
The same problem exists with matrix representation.
Of course, it is not possible just to append the integer representations to each other and add the information about the merging place, since this would take many more bits than given by the formula in the book, which I don't have access to.
I believe this is a problem from coding theory, where I am not an expert. But I found something that might point you in the right direction. In this post an "interpolative code" is described, among others. If you apply it to your example (524, 42), you get
f (the number of integers to be encoded, all in the range [1, N]) = 2
N = 524
The maximum bit length of the encoded 2 integers is then
f · (2.58 + log(N/f)) = 9.99…, i.e. 10 bits
Thus, it is possible to have ultra compact encoding, although one has to spend a lot of time on coding and decoding.
It is impossible to use only two bits to specify where the quotient ends and the fraction starts. At the very least you will need as many bits as it takes to store the length of the quotient and/or the length of the fraction. Another way is to use a fixed number of bits for both quotient and fraction, similar to IEEE 754.
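For what it's worth, one standard way to make variable-length integers self-delimiting (not the book's scheme, and costing roughly 2*log2(v) bits per value rather than the bound quoted above) is a universal code such as Elias gamma: a value with n significant bits is written as n-1 zero bits followed by the value itself, so the decoder always knows where one number ends. A small C sketch (the helper name gamma_encode is just for this example), printing the code as '0'/'1' characters:

#include <stdint.h>
#include <stdio.h>

/* Elias gamma coding for 1 <= v < 2^31: unary length prefix, then the
 * value itself MSB first. No separate length field is needed. */
static void gamma_encode(uint32_t v)
{
    int n = 0;
    while ((uint32_t)1 << (n + 1) <= v)        /* n = floor(log2(v)) */
        n++;
    for (int i = 0; i < n; i++) putchar('0');  /* n zero bits */
    for (int i = n; i >= 0; i--)               /* the n+1 value bits */
        putchar((v >> i) & 1 ? '1' : '0');
}

int main(void)
{
    /* The two example values from the question. */
    gamma_encode(524); putchar(' ');
    gamma_encode(42);  putchar('\n');
    return 0;
}

For 524 and 42 this spends 19 + 11 = 30 bits, noticeably more than the formula discussed above -- which is exactly the point of the question: naive self-delimiting is not free.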

Efficient way to create a bit mask from multiple numbers possibly using SSE/SSE2/SSE3/SSE4 instructions

Suppose I have 16 ASCII characters (hence 16 eight-bit numbers) in a 128-bit variable/register. I want to create a bit mask in which those bits are high whose bit positions (indexes) are represented by those 16 characters.
For example, if the string formed from those 16 characters is "CAD...", then in the bit mask the 67th bit, 65th bit, 68th bit and so on should be 1. The rest of the bits should be 0. What is an efficient way to do it, especially using SIMD instructions?
I know that one technique is addition, like this: 2^(67-1)+2^(65-1)+2^(68-1)+...
But this will require a large number of operations. I want to do it in one/two operations/instructions if possible.
Please let me know a solution.
SSE4.2 contains one instruction that performs almost what you want: PCMPISTRM with immediate operand 0. One of its operands should contain your ASCII characters, the other a constant vector with values like 32, 33, ..., 47. You get the result in the 16 least significant bits of XMM0. Since you need 128 bits, this instruction should be executed 8 times with different constant vectors (6 times if you need only printable ASCII characters). After each PCMPISTRM, use bitwise OR to accumulate the result in some XMM register.
There are 2 disadvantages to this method: (1) you need to read Intel's Architectures Software Developer's Manual to understand PCMPISTRM's details, because it is probably the most complicated SSE instruction ever, and (2) this instruction is pretty slow (throughput of 1/2 on Nehalem, 1/3 on Sandy Bridge, 1/4 on Bulldozer), so you'll hardly get any significant speed improvement over the 'brute force' method.
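For reference, the 'brute force' method mentioned above is already quite cheap in plain C: walk the 16 characters once and OR single-bit masks into a 128-bit value held as two 64-bit halves, using the question's convention that character c sets bit c-1. A sketch (the 16-character string is made up for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The 16 input characters; "CAD" padded out for illustration. */
    const char chars[] = "CADGHBFKEJILNMPO";

    /* 128-bit mask as two 64-bit halves: hi = bits 127..64, lo = bits 63..0 */
    uint64_t lo = 0, hi = 0;
    for (int i = 0; i < 16; i++) {
        unsigned bit = (unsigned char)chars[i] - 1;  /* 'C' = 67 -> bit 66 */
        if (bit < 64)
            lo |= (uint64_t)1 << bit;
        else if (bit < 128)
            hi |= (uint64_t)1 << (bit - 64);
    }
    printf("mask = %016llx%016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}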

Maximum number of different numbers, Huffman Compression

I want to compress many 32-bit numbers using Huffman compression.
Each number may appear multiple times, and I know that every number will be replaced with some bit sequence, such as:
111
010
110
1010
1000
etc...
Now, the question: How many different numbers can be added to the Huffman tree before the length of the binary sequences exceeds 32 bits?
The rule for generating sequences (for those who don't know) is that every time a new number is added, you must assign it the smallest binary sequence possible that is not a prefix of another.
You seem to understand the principle of prefix codes.
Many people (confusingly) refer to all prefix codes as "Huffman codes".
There are many other kinds of prefix codes -- none of them compress data into any fewer bits than Huffman compression (if we neglect the overhead of transmitting the frequency table), but many of them get pretty close (with some kinds of data) and have other advantages, such as running much faster or guaranteeing some maximum code length ("length-limited prefix codes").
If you have large numbers of unique symbols, the overhead of the Huffman frequency table becomes large -- perhaps some other prefix code can give better net compression.
Many people doing compression and decompression in hardware have fixed limits for the maximum codeword size -- many image and video compression algorithms specify a "length-limited Huffman code".
The fastest prefix codes -- universal codes -- do, in fact, involve a series of bit sequences that can be pre-generated without regard to the actual symbol frequencies. Compression programs that use these codes, as you mentioned, associate the most-frequent input symbol to the shortest bit sequence, the next-most-frequent input symbol to the next-shortest bit sequence, and so on.
For example, some compression programs use Fibonacci codes (a kind of universal code), and always associate the most-frequent symbol to the bit sequence "11", the next-most-frequent symbol to the bit sequence "011", the next to "0011", the next to "1011", and so on.
The Huffman algorithm produces a code that is similar in many ways to a universal code -- both are prefix codes.
But, as Cyan points out, the Huffman algorithm is slightly different than those universal codes.
If you have 5 different symbols, the Huffman tree will contain 5 different bit sequences -- however, the exact bit sequences generated by the Huffman algorithm depend on the exact frequencies.
One document may have symbol counts of { 10, 10, 20, 40, 80 }, leading to Huffman bit sequences { 0000 0001 001 01 1 }.
Another document may have symbol counts of { 40, 40, 79, 79, 80 }, leading to Huffman bit sequences { 000 001 01 10 11 }.
Even though both situations have exactly 5 unique symbols, the actual Huffman code for the most-frequent symbol is very different in these two compressed documents -- the Huffman code "1" in one document, the Huffman code "11" in another document.
If, however, you compressed those documents with the Fibonacci code, the Fibonacci code for the most-frequent symbol is always the same -- "11" in every document.
For Fibonacci in particular, the first 33-bit Fibonacci code is "31 zero bits followed by 2 one bits", representing the value F(33) = 3,524,578.
And so 3,524,577 unique symbols can be represented by Fibonacci codes of 32 bits or less.
One of the more counter-intuitive features of prefix codes is that some symbols (the rare symbols) are "compressed" into much longer bit sequences.
If you actually have 2^32 unique symbols (all possible 32 bit numbers), it is not possible to gain any compression if you force the compressor to use prefix codes limited to 32 bits or less.
If you actually have 2^8 unique symbols (all possible 8 bit numbers), it is not possible to gain any compression if you force the compressor to use prefix codes limited to 8 bits or less.
By allowing the compressor to expand rare values -- to use more than 8 bits to store a rare symbol that we know can be stored in 8 bits -- or use more than 32 bits to store a rare symbol that we know can be stored in 32 bits -- that frees up the compressor to use less than 8 bits -- or less than 32 bits -- to store the more-frequent symbols.
In particular, if I use Fibonacci codes to compress a table of values, where the values include all possible 32-bit numbers, one must use Fibonacci codes up to N bits long, where F(N) = 2^32 -- solving for N I get N = 47 bits for the least-frequently-used 32-bit symbol.
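To make the Fibonacci codes above concrete, here is a small C sketch (the helper name fib_encode is just for this example) that prints the code for a value as '0'/'1' characters: a greedy Zeckendorf decomposition over F(2), F(3), ... followed by the terminating 1 bit, so 1 maps to "11", 2 to "011", 4 to "1011", and F(33) = 3,524,578 is the first value whose code needs 33 bits:

#include <stdint.h>
#include <stdio.h>

/* Fibonacci coding of v >= 1: write v as a sum of non-consecutive
 * Fibonacci numbers, emit one bit per Fibonacci number F(2), F(3), ...,
 * then append a final '1' so every code ends in "11". */
static void fib_encode(uint32_t v)
{
    uint64_t fib[64];
    int n = 0;
    uint64_t a = 1, b = 2;            /* F(2) = 1, F(3) = 2 */
    fib[n++] = a;
    while (b <= v) {                  /* collect Fibonacci numbers <= v */
        fib[n++] = b;
        uint64_t t = a + b;
        a = b;
        b = t;
    }

    char code[66];
    for (int i = 0; i < n; i++) code[i] = '0';
    uint64_t rem = v;
    for (int i = n - 1; i >= 0; i--)  /* greedy, largest term first */
        if (fib[i] <= rem) {
            code[i] = '1';
            rem -= fib[i];
        }
    code[n] = '1';                    /* terminating bit */
    code[n + 1] = '\0';
    printf("%u -> %s\n", (unsigned)v, code);
}

int main(void)
{
    fib_encode(1);        /* 11   */
    fib_encode(2);        /* 011  */
    fib_encode(4);        /* 1011 */
    fib_encode(3524578);  /* 31 zeros followed by 11: the first 33-bit code */
    return 0;
}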
Huffman is about compression, and compression requires a "skewed" distribution to work (assuming we are talking about normal, order-0, entropy).
The worst situation regarding Huffman tree depth is when the algorithm creates a degenerate tree, i.e. with only one leaf per level. This situation can happen if the distribution looks like a Fibonacci series.
Therefore, the worst distribution sequence looks like this: 1, 1, 1, 2, 3, 5, 8, 13, ...
In this case, you fill the full 32-bit tree with only 33 different elements.
Note, however, that to reach a 32-bit depth with only 33 elements, the most numerous element must appear 3,524,578 times.
Therefore, since summing all the Fibonacci numbers gets you 5,702,886, you need to compress at least 5,702,887 numbers to start having a risk of not being able to represent them with a 32-bit Huffman tree.
That being said, using a Huffman tree to represent 32-bit numbers requires a considerable amount of memory to calculate and maintain the tree.
[Edit] A simpler format, called "logarithm approximation", gives almost the same weight to all symbols. In this case, only the total number of symbols is required.
It is very fast to compute: say for 300 symbols, you will have some using 8 bits and others using 9 bits. The formula to decide how many of each type:
9 bits: (300 - 256) * 2 = 44 * 2 = 88;
8 bits: 300 - 88 = 212
Then you can distribute the numbers as you wish (preferably the most frequent ones using 8 bits, but that's not important).
This version scales up to 32 bits, meaning basically no restriction.
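A quick C sketch of that split (the helper name log_approx_split is just for this example): for n symbols, give 2 * (n - 2^floor(log2 n)) of them the longer code and the rest the shorter one, which satisfies the Kraft equality and reproduces the 88/212 split for 300 symbols:

#include <stdint.h>
#include <stdio.h>

/* "Logarithm approximation": every one of n symbols gets either
 * floor(log2(n)) or floor(log2(n)) + 1 bits, chosen so the code is
 * complete (Kraft equality). */
static void log_approx_split(uint32_t n)
{
    unsigned L = 0;
    while ((uint64_t)1 << (L + 1) <= n)              /* L = floor(log2(n)) */
        L++;
    uint32_t longer  = 2 * (n - ((uint32_t)1 << L)); /* (L+1)-bit codes */
    uint32_t shorter = n - longer;                   /* L-bit codes */
    printf("%u symbols: %u codes of %u bits, %u codes of %u bits\n",
           (unsigned)n, (unsigned)shorter, L, (unsigned)longer, L + 1);
}

int main(void)
{
    log_approx_split(300);   /* 212 x 8-bit, 88 x 9-bit */
    log_approx_split(5);     /* 3 x 2-bit, 2 x 3-bit */
    return 0;
}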

How to manually parse a floating point number from a string

Of course most languages have library functions for this, but suppose I want to do it myself.
Suppose that the float is given like in a C or Java program (except for the 'f' or 'd' suffix), for example "4.2e1", ".42e2" or simply "42". In general, we have the "integer part" before the decimal point, the "fractional part" after the decimal point, and the "exponent". All three are integers.
It is easy to find and process the individual digits, but how do you compose them into a value of type float or double without losing precision?
I'm thinking of multiplying the integer part with 10^n, where n is the number of digits in the fractional part, and then adding the fractional part to the integer part and subtracting n from the exponent. This effectively turns 4.2e1 into 42e0, for example. Then I could use the pow function to compute 10^exponent and multiply the result with the new integer part. The question is, does this method guarantee maximum precision throughout?
Any thoughts on this?
All of the other answers have missed how hard it is to do this properly. You can do a first-cut approach at this which is accurate to a certain extent, but until you take into account IEEE rounding modes (et al), you will never have the right answer. I've written naive implementations before with a rather large amount of error.
If you're not scared of math, I highly recommend reading the following article by David Goldberg, What Every Computer Scientist Should Know About Floating-Point Arithmetic. You'll get a better understanding for what is going on under the hood, and why the bits are laid out as such.
My best advice is to start with a working atoi implementation, and move out from there. You'll rapidly find you're missing things, but a few looks at strtod's source and you'll be on the right path (which is a long, long path). Eventually you'll praise insert deity here that there are standard libraries.
/* use this to start your atof implementation */
/* atoi - christopher.watford#gmail.com */
/* PUBLIC DOMAIN */
#include <ctype.h>
#include <errno.h>
#include <limits.h>

long atoi(const char *value) {   /* note: returns long, so really an atol */
    unsigned long ival = 0, c;
    long n = 1;                               /* sign: +1 or -1 */
    size_t i = 0;

    for ( ; (c = (unsigned char)value[i]); ++i)   /* chomp leading spaces */
        if (!isspace((int)c)) break;

    if (c == '-' || c == '+') {               /* chomp sign */
        n = (c != '-' ? n : -1);
        i++;
    }

    while ((c = (unsigned char)value[i++])) { /* parse number */
        if (!isdigit((int)c)) return 0;
        ival = (ival * 10) + (c - '0');       /* mult/accum */
        if ((n > 0 && ival > LONG_MAX)
            || (n < 0 && ival > (unsigned long)LONG_MAX + 1UL)) {
            /* report overflow/underflow */
            errno = ERANGE;
            return (n > 0 ? LONG_MAX : LONG_MIN);
        }
    }
    return (n > 0 ? (long)ival
                  : (ival > (unsigned long)LONG_MAX ? LONG_MIN : -(long)ival));
}
The "standard" algorithm for converting a decimal number to the best floating-point approximation is William Clinger's How to read floating point numbers accurately, downloadable from here. Note that doing this correctly requires multiple-precision integers, at least a certain percentage of the time, in order to handle corner cases.
Algorithms for going the other way, printing the best decimal number from a floating-point number, are found in Burger and Dybvig's Printing Floating-Point Numbers Quickly and Accurately, downloadable here. This also requires multiple-precision integer arithmetic.
See also David M Gay's Correctly Rounded Binary-Decimal and Decimal-Binary Conversions for algorithms going both ways.
I would directly assemble the floating point number using its binary representation.
Read in the number one character after another and first find all digits. Do that in integer arithmetic. Also keep track of the decimal point and the exponent. This one will be important later.
Now you can assemble your floating point number. The first thing to do is to scan the integer representation of the digits for the first set one-bit (highest to lowest).
The bits immediately following the first one-bit are your mantissa.
Getting the exponent isn't hard either. You know the first one-bit position, the position of the decimal point and the optional exponent from the scientific notation. Combine them and add the floating point exponent bias (I think it's 127, but check some reference please).
This exponent should be somewhere in the range of 0 to 255. If it's larger or smaller you have a positive or negative infinite number (special case).
Store the exponent as is into bits 23 to 30 of your float.
The most significant bit is simply the sign. One means negative, zero means positive.
It's harder to describe than it really is, try to decompose a floating point number and take a look at the exponent and mantissa and you'll see how easy it really is.
Btw - doing the arithmetic in floating point itself is a bad idea because you will always force your mantissa to be truncated to 23 significant bits. You won't get an exact representation that way.
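A minimal C sketch of the final packing step this answer describes, assuming the sign, biased exponent and 23-bit fraction have already been computed (the helper name pack_float is just for illustration). In IEEE 754 single precision the sign goes in bit 31, the biased exponent in bits 30..23 and the fraction in bits 22..0:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack sign / biased exponent / fraction into an IEEE 754 single. */
static float pack_float(uint32_t sign, uint32_t biased_exp, uint32_t fraction)
{
    uint32_t bits = (sign << 31) | ((biased_exp & 0xFF) << 23)
                  | (fraction & 0x7FFFFF);
    float f;
    memcpy(&f, &bits, sizeof f);   /* reinterpret the bit pattern */
    return f;
}

int main(void)
{
    /* 42.0 = 1.3125 * 2^5: biased exponent 5 + 127 = 132,
     * fraction 0.3125 * 2^23 = 0x280000 (the leading 1 bit is implicit). */
    printf("%f\n", pack_float(0, 132, 0x280000));
    return 0;
}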
You could ignore the decimal when parsing (except for its location). Say the input was:
156.7834e10... This could easily be parsed into the integer 1567834 followed by e10, which you'd then modify to e6, since the decimal was 4 digits from the end of the "numeral" portion of the float.
Precision is an issue. You'll need to check the IEEE spec of the language you're using. If the number of bits in the Mantissa (or Fraction) is larger than the number of bits in your Integer type, then you'll possibly lose precision when someone types in a number such as:
5123.123123e0 - converts to 5123123123 in our method, which does NOT fit in an Integer, but the bits for 5.123123123 may fit in the mantissa of the float spec.
Of course, you could use a method that takes each digit in front of the decimal, multiplies the current total (in a float) by 10, then adds the new digit. For digits after the decimal, multiply the digit by a growing power of 10 before adding to the current total. This method seems to beg the question of why you're doing this at all, however, as it requires the use of the floating point primitive without using the readily available parsing libraries.
Anyway, good luck!
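For completeness, a naive C sketch along these lines (the name naive_atof is just for this example): accumulate the digits as one number, remember how many came after the decimal point, then apply the exponent. As the other answers explain, this is simple but not correctly rounded, because each floating-point step may introduce error:

#include <ctype.h>
#include <math.h>
#include <stdio.h>

static double naive_atof(const char *s)
{
    double value = 0.0;
    int exponent = 0, frac_digits = 0, sign = 1;

    if (*s == '+' || *s == '-')
        sign = (*s++ == '-') ? -1 : 1;
    for (; isdigit((unsigned char)*s); s++)          /* integer part */
        value = value * 10.0 + (*s - '0');
    if (*s == '.')
        for (s++; isdigit((unsigned char)*s); s++) { /* fractional part */
            value = value * 10.0 + (*s - '0');
            frac_digits++;
        }
    if (*s == 'e' || *s == 'E') {                    /* decimal exponent */
        int esign = 1;
        s++;
        if (*s == '+' || *s == '-')
            esign = (*s++ == '-') ? -1 : 1;
        for (; isdigit((unsigned char)*s); s++)
            exponent = exponent * 10 + (*s - '0');
        exponent *= esign;
    }
    /* e.g. "156.7834e10" -> 1567834 with effective exponent 10 - 4 = 6 */
    return sign * value * pow(10.0, exponent - frac_digits);
}

int main(void)
{
    printf("%g %g %g\n", naive_atof("4.2e1"), naive_atof(".42e2"),
           naive_atof("42"));
    return 0;
}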
Yes, you can decompose the construction into floating point operations as long as these operations are EXACT, and you can afford a single final inexact operation.
Unfortunately, floating point operations soon become inexact: when you exceed the precision of the mantissa, the results are rounded. Once a rounding "error" is introduced, it accumulates in further operations...
So, generally, NO, you can't use such a naive algorithm to convert arbitrary decimals; it may lead to an incorrectly rounded number, off by several ulps from the correct one, as others have already told you.
BUT LET'S SEE HOW FAR WE CAN GO:
If you carefully reconstruct the float like this:
if(biasedExponent >= 0)
    return integerMantissa * (10^biasedExponent);
else
    return integerMantissa / (10^(-biasedExponent));
there is a risk of exceeding the precision both when accumulating the integerMantissa, if it has many digits, and when raising 10 to the power of biasedExponent...
Fortunately, if the first two operations are exact, then you can afford a final inexact operation (* or /); thanks to IEEE properties, the result will be rounded correctly.
Let's apply this to single precision floats which have a precision of 24 bits.
10^8 > 2^24 > 10^7
Noting that a multiple of 2 will only increase the exponent and leave the mantissa unchanged, we only have to deal with powers of 5 when exponentiating 10:
5^11 > 2^24 > 5^10
Thus, you can afford 7 digits of precision in the integerMantissa and a biasedExponent between -10 and 10.
In double precision, 53 bits,
10^16 > 2^53 > 10^15
5^23 > 2^53 > 5^22
So you can afford 15 decimal digits, and a biased exponent between -22 and 22.
It's up to you to see if your numbers will always fall in the correct range... (If you are really tricky, you could arrange to balance mantissa and exponent by inserting/removing trailing zeroes).
Otherwise, you'll have to use some extended precision.
If your language provides arbitrary precision integers, then it's a bit tricky to get it right, but not that difficult. I did this in Smalltalk and blogged about it at http://smallissimo.blogspot.fr/2011/09/clarifying-and-optimizing.html and http://smallissimo.blogspot.fr/2011/09/reviewing-fraction-asfloat.html
Note that these are simple and naive implementations. Fortunately, libc is more optimized.
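A small C sketch of the single-precision recipe above (the name exact_scale is just for illustration), assuming the digits and decimal exponent have already been parsed into an integer mantissa of at most 7 digits and an exponent between -10 and 10; both the mantissa and the power of ten are then exact floats, and the final multiply or divide is the only rounded operation:

#include <stdio.h>

/* Exact-then-round reconstruction for single precision: the integer
 * mantissa has at most 7 digits (exact in a 24-bit significand) and
 * |decimalExponent| <= 10, so 10^|e| = 2^|e| * 5^|e| is also exact
 * (5^10 < 2^24). The final * or / is the single rounded operation. */
static float exact_scale(long integerMantissa, int decimalExponent)
{
    float m = (float)integerMantissa;   /* exact for at most 7 digits */
    float p = 1.0f;
    int e = decimalExponent < 0 ? -decimalExponent : decimalExponent;
    for (int i = 0; i < e; i++)
        p *= 10.0f;                     /* exact while e <= 10 */
    return decimalExponent >= 0 ? m * p : m / p;
}

int main(void)
{
    printf("%.9g\n", exact_scale(42, -1));      /* "4.2"       */
    printf("%.9g\n", exact_scale(1234567, 3));  /* "1234567e3" */
    return 0;
}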
My first thought is to parse the string into an int64 mantissa and an int decimal exponent using only the first 18 digits of the mantissa. For example, 1.2345e-5 would be parsed into 12345 and -9. Then I would keep multiplying the mantissa by 10 and decrementing the exponent until the mantissa was 18 digits long (>56 bits of precision). Then I would look the decimal exponent up in a table to find a factor and binary exponent that can be used to convert the number from decimal n*10^m to binary p*2^q form. The factor would be another int64 so I'd multiply the mantissa by it such that I obtained the top 64-bits of the resulting 128-bit number. This int64 mantissa can be cast to a float losing only the necessary precision and the 2^q exponent can be applied using multiplication with no loss of precision.
I'd expect this to be very accurate and very fast but you may also want to handle the special numbers NaN, -infinity, -0.0 and infinity. I haven't thought about the denormalized numbers or rounding modes.
For that you have to understand the IEEE 754 standard in order to get a proper binary representation. After that you can use Float.intBitsToFloat or Double.longBitsToDouble.
http://en.wikipedia.org/wiki/IEEE_754
If you want the most precise result possible, you should use a higher internal working precision, and then downconvert the result to the desired precision. If you don't mind a few ULPs of error, then you can just repeatedly multiply by 10 as necessary with the desired precision. I would avoid the pow() function, since it will produce inexact results for large exponents.
It is not possible to convert any arbitrary string representing a number into a double or float without losing precision. There are many fractional numbers that can be represented exactly in decimal (e.g. "0.1") that can only be approximated in a binary float or double. This is similar to how the fraction 1/3 cannot be represented exactly in decimal; you can only write 0.333333...
If you don't want to use a library function directly why not look at the source code for those library functions? You mentioned Java; most JDKs ship with source code for the class libraries so you could look up how the java.lang.Double.parseDouble(String) method works. Of course something like BigDecimal is better for controlling precision and rounding modes but you said it needs to be a float or double.
Using a state machine. It's fairly easy to do, and even works if the data stream is interrupted (you just have to keep the state and the partial result). You can also use a parser generator (if you're doing something more complex).
I agree with terminus. A state machine is the best way to accomplish this task, as there are many stupid ways a parser can be broken. I am working on one now; I think it is complete, and it has, I think, 13 states.
The problem is not trivial.
I am a hardware engineer interested designing floating point hardware. I am on my second implementation.
I found this today http://speleotrove.com/decimal/decarith.pdf
which on page 18 gives some interesting test cases.
Yes, I have read Clinger's article, but being a simple-minded hardware engineer, I can't get my mind around the code presented. The reference to Steele's algorithm as answered in Knuth's text was helpful to me. Both input and output are problematic.
All of the aforementioned references to various articles are excellent.
I have yet to sign up here, but when I do, assuming the login is not taken, it will be broh. (broh-dot).
Clyde
