32 bit multiplication on 24 bit ALU

I want to port a 32 × 32 bit unsigned multiplication to a 24-bit DSP (it's a Linear Congruential Generator, so I'm not allowed to truncate; I also don't want to replace the current LCG with a 24-bit one yet). The available data types are 24- and 48-bit ints.
Only the 32 LSBs of the result are needed. Do you know any hacks to implement this in fewer multiplies, masks and shifts than the usual way?
The line looks like this:
// val is a 32-bit int
val = (1664525 * val) + 1013904223;

An outline would be (in my current compiler style):
static uint48_t val = SEED;
...
val = 0xFFFFFFFFUL & ((1664525UL * val) + 1013904223UL);
and hopefully the compiler will recognise that:
it can use a multiply-and-accumulate instruction
it only needs a reduced multiply algorithm, since the "high word" of the constant is zero
the AND could be effected by resetting the upper bits, or by multiplying by a constant and restoring
...other stuff depends on your {mystery dsp} target
Note
if you scale up the coefficients by 2^16, you can get truncation for free (see the sketch below), but due to lack of info
you will have to explore/decide whether it is better overall.
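A minimal sketch of that scaling idea, assuming a 48-bit type that wraps mod 2^48 (emulated below with a masked uint64_t; the names SEED, MASK48 and lcg_next are mine, purely illustrative):

#include <stdint.h>

#define MASK48 0xFFFFFFFFFFFFULL /* stands in for native 48-bit wraparound */
#define SEED   12345ULL

/* Keep the 32-bit LCG state scaled by 2^16 in a 48-bit accumulator.
   Multiplying the recurrence by 2^16 shows why this works:
   ((a*val + c) mod 2^32) << 16  ==  (a*(val<<16) + (c<<16)) mod 2^48,
   so the mod-2^48 wraparound performs the mod-2^32 truncation for free. */
static uint64_t vscaled = SEED << 16;

static uint32_t lcg_next(void)
{
    vscaled = (1664525ULL * vscaled + (1013904223ULL << 16)) & MASK48;
    return (uint32_t)(vscaled >> 16); /* the current 32-bit LCG value */
}

On the real DSP the & MASK48 would be implicit in the 48-bit register width; whether the wider multiply this needs is cheaper than the masked approach above is exactly the explore/decide part.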

(This is more an elaboration of why two multiplications 24×24→n, 31<n, are enough for 32×32→min(n, 40).)
The question discloses amazingly little about the capabilities available to build a method
32×21→32 in fewer [24×24] multiplies, masks and shifts than the usual way on:
24 and 48 bit ints & DSP (I read: high-throughput, non-high-latency 24×24→48).
As long as there indeed is a 24×24→48 multiply (or even a 24×24+56→56 MAC) and one factor is less than 24 bits, the question is pointless, a second multiply being the compelling solution.
The usual composition of a (24<n<48)×(24<m<48)→(24<p) multiply from 24×24→48 uses three of the latter; a compiler should know as well as a coder that "the fourth multiply" would yield bits whose significance/position exceeds the combined lengths of the lower parts of the factors.
So, is it possible to generate "the long product" using just a second 24×24→48?
Let the (bytes of the) factors be w_xyz and W_XYZ, respectively; the underscores suggesting that w and W are the lower-significance bits of the higher-significance words/ints if interpreted as 24-bit ints. The first 24×24→48 multiply gives the sum of the partial products of xyz × XYZ:

  xX·2^32
+ (xY + yX)·2^24
+ (xZ + yY + zX)·2^16
+ (yZ + zY)·2^8
+ zZ

What is still needed is

wZ + zW

at byte position 2^24.
This can be computed using one combined multiplication of
((w<<16)|(z & 0xff)) × ((W<<16)|(Z & 0xff)). (Never mind the 17th bit of wZ+zW "running" into wW.)
(In the first revision of this answer, I foolishly produced wZ and zW separately - their sum is wanted in the end, anyway.)
(Annoyingly, this is about all you can do with 24×24→24 as a base operation, too: beyond this "combining multiplication", you need four instead of one.)
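To make the byte bookkeeping concrete, here is a hedged C sketch of the two-multiply scheme for the low 32 bits, with the 48-bit products emulated by uint64_t (the function and variable names are mine, not from the question):

#include <stdint.h>

/* Low 32 bits of a 32x32 product from two 24x24->48 multiplies.
   Factors are split as w_xyz and W_XYZ (w, W = bits 24..31). */
static uint32_t mul32_lo(uint32_t a, uint32_t b)
{
    uint64_t xyz = a & 0xFFFFFF, XYZ = b & 0xFFFFFF; /* low 24-bit words */
    uint64_t w   = a >> 24,      W   = b >> 24;      /* high bytes */
    uint64_t z   = a & 0xFF,     Z   = b & 0xFF;     /* low bytes */

    uint64_t p = xyz * XYZ;                          /* 24x24->48 number one */
    uint64_t q = ((w << 16) | z) * ((W << 16) | Z);  /* 24x24->48 number two */
    /* Bits 16..31 of q hold wZ + zW (zZ stays below bit 16); only the
       low 8 bits matter once shifted to byte position 2^24. */
    uint64_t cross = ((q >> 16) & 0xFF) << 24;

    return (uint32_t)((p + cross) & 0xFFFFFFFF);
}

For the LCG in the question, the w byte of the constant 1664525 is zero, so q degenerates to the zW term and the scheme reduces to exactly the "second multiply" case above.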
Another angle to explore is choosing a different PRNG.
It may have to be >24 bits (tell!).
On a 24 bit machine, XorShift* (or even XorShift+) 48/32 seems worth a look.


What is the use of bit manipulation?

I searched for a clear explanation with some examples close to real life.
What is bit manipulation?
Why do we need to use bit manipulation?
We can use bit manipulation in image processing, as far as I know. Can anyone show me a simple problem which can be solved using bit manipulation?
I read about bit manipulation from some links:
Link 1
Link 2
In Link 2, data compression is done using bit packing. Is there any difference between bit manipulation and bit packing?
I would appreciate it if anyone could explain with a very simple example that resembles a real-life problem.
What is bit manipulation?
Bit manipulation usually refers to changing data using bit operators.
I think Wikipedia explains it well enough, so I won't write another article:
https://en.wikipedia.org/wiki/Bit_manipulation
Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions. Source code that does bit manipulation makes use of the bitwise operations: AND, OR, XOR, NOT, and possibly other operations analogous to the boolean operators; there are also bit shifts and operations to count ones and zeros, find high and low one or zero, set, reset and test bits, extract and insert fields, mask and zero fields, gather and scatter bits to and from specified bit positions or fields. Integer arithmetic operators can also effect bit-operations in conjunction with the other operators.
Bit manipulation, in some cases, can obviate or reduce the need to loop over a data structure and can give many-fold speed ups, as bit manipulations are processed in parallel.
Why do we need to use bit manipulation?
Because it is fast, and often we don't have another choice. For example, in microcontrollers pretty much everything is controlled by manipulating the bits of 8-bit registers. An output pin would go high if you set a certain bit to 1, as in the sketch below.
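For instance, in C (the register address and bit number here are hypothetical, not from any real part):

#include <stdint.h>

/* Hypothetical memory-mapped 8-bit output port at a made-up address. */
#define PORTA (*(volatile uint8_t *)0x40001000u)

void toggle_pin3(void)
{
    PORTA |=  (1u << 3); /* set bit 3: the output pin goes high */
    PORTA &= ~(1u << 3); /* clear bit 3: the pin goes low again */
}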
Bit packing is a compression technique that tries to minimize the number of bits necessary to represent a number. While you'll use bit operators to implement it, it is not the same as "bit manipulation"; it's just one of many use cases for bit manipulation.
Can anyone show me a simple problem which can be solved using bit manipulation?
Let's say you have an RGB tuple rgb = 0xa1fc03 and you want to make the green channel 0.
rgb_without_green = rgb & 0xFF00FF
We've bitwise ANDed the value with 0xFF00FF.
Now rgb_without_green is 0xa10003.
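A tiny self-contained C version of the same example:

#include <stdio.h>

int main(void)
{
    unsigned int rgb = 0xA1FC03;                     /* 0xRRGGBB */
    unsigned int rgb_without_green = rgb & 0xFF00FF; /* zero the GG byte */
    printf("%06X\n", rgb_without_green);             /* prints A10003 */
    return 0;
}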
Basically any operation boils down to bit manipulation; for most of them you just have more convenient notation. Say, instead of 0b00000011 << 0b00000101 you write 3 * 32 (shifting left by 5 bits multiplies by 2^5 = 32).
Or have a look at this question, where the addition of two integers is implemented using bit operations: Add two integers using only bitwise operators?
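The core of that trick is a short loop like this (a common sketch of the idea, not code copied from the link):

/* Addition built from AND, XOR and shift only. */
unsigned add_bitwise(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned carry = a & b; /* positions that generate a carry */
        a = a ^ b;              /* bitwise sum without the carries */
        b = carry << 1;         /* carries move one position left */
    }
    return a;
}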
Edit due to comment
How does the bitwise AND operation between 0xa1fc03 and 0xFF00FF give 0xa10003? I just need to see how to do this calculation.
Bitwise AND means that you AND each pair of corresponding bits of the two numbers:
1 AND 1 -> 1
0 AND 1 -> 0
1 AND 0 -> 0
0 AND 0 -> 0
So
0xa1fc03 -> 0b101000011111110000000011
0xff00ff -> 0b111111110000000011111111
AND -> 0b101000010000000000000011
0b101000010000000000000011 -> 0xa10003
With a bit more experience you know that 0xFF is 0b11111111, so you instantly know that 0xa1fc03 AND 0xff00ff is 0xa10003, because you keep everything that is masked with FF and set to 0 everything that is masked with 00.
There are countless resources available. You should not have to ask me how to bitwise AND two numbers. Please do your own research.
I am busy designing a new barcode symbology for real-life applications. It uses a checksum value, which is computed on slices of k bits of large numbers. Hence intense bit manipulation.

Delphi Warning W1073 Combining signed type and unsigned 64-bit type - treated as an unsigned type

I get the subject warning on the following line of code;
SelectedFilesSize := SelectedFilesSize +
  UInt64(IdList.GetPropertyValue(TShellColumns.Size)) *
  ifthen(Selected, 1, -1);
Specifically, the IDE highlights the third line.
SelectedFilesSize is declared as UInt64.
The code appears to work when I run it; if I select an item, its file size is added to the total, if I deselect a file its size is subtracted.
I know I can suppress this warning with {$WARN COMBINING_SIGNED_UNSIGNED64 OFF}.
Can someone explain? Will there be an unforeseen impact if SelectedFilesSize gets huge? Or an impact on a specific target platform?
Delphi 10.3, Win32 and Win64 targets
This will work here, but the warning is right.
If you multiply a UInt64 by -1, you are actually multiplying it by $FFFFFFFFFFFFFFFF. The full result would be a 128-bit value, but the lower 64 bits are the same as for a signed multiplication (that is also why the code generator often produces an imul opcode even for unsigned multiplication: the lower bits will be correct; only the unused higher bits won't be). The upper 64 bits won't be used anyway, so they don't matter.
If you add that (actually negative) value to another UInt64 (e.g. SelectedFilesSize), the 64 bit result will be correct again. The CPU does not discriminate between positive or negative values when adding. The resulting CPU flags (carry, overflow) will indicate overflow, but if you ignore that by not using range or overflow checks, your code will be fine.
Your code will likely produce a runtime error if range or overflow checks are on, though.
In other words, this works because any excess upper bits (bit 64 and above) can be ignored. Otherwise, the values would be wrong. See the example below.
Example
Say your IdList.GetPropertyValue(TShellColumns.Size) is 420. Then you are performing:
$00000000000001A4 * $FFFFFFFFFFFFFFFF = $00000000000001A3FFFFFFFFFFFFFE5C
This is a huge but positive 128-bit number; fortunately, its lower 64 bits ($FFFFFFFFFFFFFE5C) can be interpreted as -420 (a truly negative 128-bit value would be $FFFFFFFFFFFFFFFFFFFFFFFFFFFFFE5C, also -420).
Now say your SelectedFilesSize is 100000 (or hex $00000000000186A0). Then you get:
$00000000000186A0 + $FFFFFFFFFFFFFE5C = $00000000000184FC
(or actually $100000000000184FC, but the top bit, the carry, is ignored).
$00000000000184FC is 99580 in decimal, exactly the value you wanted: 100000 - 420.
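The same modular arithmetic can be reproduced in C with unsigned 64-bit types (an analogue of the Delphi code above, not the code itself):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t total = 100000; /* plays the role of SelectedFilesSize */
    uint64_t size  = 420;
    int      sign  = -1;     /* deselected: subtract the size */

    /* size * (uint64_t)sign wraps to 2^64 - 420 ($FFFFFFFFFFFFFE5C);
       adding it wraps once more, leaving exactly total - 420. */
    total += size * (uint64_t)sign;
    printf("%" PRIu64 "\n", total); /* prints 99580 */
    return 0;
}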

Deflate and fixed Huffman codes

I'm trying to implement a Deflate compressor and I have to decide whether to compress a block using the static Huffman code or to create a dynamic one.
What is the rationale behind the length associated with the static code?
(this is the table included in the rfc)
Lit Value   Bits
---------   ----
  0 - 143    8
144 - 255    9
256 - 279    7
280 - 287    8
I thought the static code would be biased towards plain ASCII text; instead, it looks like it slightly favours the compression of match lengths (codes 256..279 get only 7 bits).
What is a good heuristic to decide whether to use static code?
I was thinking of building a probability distribution from a sample of the input data and calculating a distance (maybe EMD?) from the probabilities implied by the static code.
I would guess that the creator of the code took a large sample of literals and lengths from compressed data, likely including executables along with text, and found typical code lengths over the large set. They were then approximated with the table shown. However the author passed away many years ago, so we'll never know for sure.
You don't need a heuristic. Once you have done the work to find matching strings, it is comparatively very fast to compute the number of bits in the block for both the dynamic and the static representation, as sketched below. Then simply pick the smaller one, or the static one if they are equal (it decodes faster).
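A sketch of that comparison in C (the data layout is assumed: per-block symbol frequencies and per-symbol code lengths; none of these names come from an actual library):

#include <stddef.h>
#include <stdint.h>

/* Bits needed to emit a block body, given symbol frequencies and the
   code length assigned to each symbol. The extra bits for lengths and
   distances cost the same under both encodings, so they can be left
   out of the comparison (or added identically to both sides). */
static uint64_t body_bits(const uint32_t freq[], const uint8_t len[], size_t n)
{
    uint64_t bits = 0;
    for (size_t i = 0; i < n; i++)
        bits += (uint64_t)freq[i] * len[i];
    return bits;
}

/* choose static iff
   body_bits(freq, static_len, n) <= dyn_header_bits + body_bits(freq, dyn_len, n) */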
I don't know about rationale, but there was a small amount of irrationale in choosing the static code lengths:
In the table in your question, the maximum code number is 287, but the DEFLATE specification only allows up to code 285, meaning code lengths have wastefully been assigned to two invalid codes (and not even the longest ones either!). It's a similar story with the table for distance codes: 32 codes have lengths assigned, but only 30 are valid.
So there are some easy improvements that could have been made. That said, without some prior knowledge of the data, it's not really possible to produce anything that's massively more efficient in general. The "flatness" of the table (no code longer than 9 bits) limits the worst case to 1 extra bit per byte of incompressible data.
I think the main rationale behind the groupings is that by keeping group sizes to a multiple of 8, it's possible to tell which group a code belongs to by looking at its 5 most significant bits, which also tell you its length, along with the value to add to get the symbol directly:
00000 00  .. 00101 11     7 bits  + 256 -> (256..279)
00110 000 .. 10111 111    8 bits  -  48 -> (  0..143)
11000 000 .. 11000 111    8 bits  +  88 -> (280..287)
11001 0000 .. 11111 1111  9 bits  - 256 -> (144..255)
So in theory you could set up a lookup table with 32 entries to quickly read in the codes, but it's an uncommon case and probably not worth optimising for.
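Still, a sketch of that 32-entry table in C (my own construction from the table above, not from a known decoder):

/* Peek at the next 5 code bits (MSB first); they determine the code's
   total length and the offset that turns the full code into a symbol. */
typedef struct { unsigned char bits; short offset; } Group;

static Group fixed_lit[32];

static void init_fixed_lit(void)
{
    for (int top5 = 0; top5 < 32; top5++) {
        if (top5 <= 5)       fixed_lit[top5] = (Group){7,  256}; /* 256..279 */
        else if (top5 <= 23) fixed_lit[top5] = (Group){8,  -48}; /*   0..143 */
        else if (top5 == 24) fixed_lit[top5] = (Group){8,   88}; /* 280..287 */
        else                 fixed_lit[top5] = (Group){9, -256}; /* 144..255 */
    }
}

/* decode step: g = fixed_lit[peek 5 bits]; symbol = (read g.bits bits) + g.offset */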
There are only really two cases (with some overlap) where static (fixed) Huffman blocks are likely to be the most efficient:
where the input size in bytes is very small, static Huffman can be more efficient than storing uncompressed, because an uncompressed block costs a 32-bit header while a static Huffman block needs only a 7-bit footer, plus at most 1 bit of overhead per byte;
where the output size is very small (i.e. small-ish, highly compressible data), static Huffman can be more efficient than dynamic Huffman, again because dynamic Huffman spends a certain amount of space on an additional header (a practical minimum header size is difficult to calculate, but I'd say at least 64 bits, probably more).
That said, I've found they are actually helpful from a developer's perspective, because it's very easy to implement a Deflate-compatible function using Static Huffman blocks, and to iteratively improve from there to get more efficient algorithms working.

Is information stored in registers/memory structured as binary?

Looking at this question on Quora ("Are data stored in registers and memory in hex or binary?"), I think the top answer is saying that data persistence is achieved through physical properties of hardware and is not directly relatable to either binary or hex.
I've always thought of computers as 'binary', but have just realized that that only applies to the usage of components (magnetic up/down or an on/off transistor) and not necessarily the organisation of, for example, memory contents.
i.e. you could, theoretically, create an abstraction in memory that used 'binary components' but that wasn't binary, like this:
100000110001010001100
100001001001010010010
111101111101010100001
100101000001010010010
100100111001010101100
And then recognize that as the (badly-drawn) image of 'hello', rather than the ASCII encoding of 'hello'.
An answer on SO (What's the difference between a word and byte?) mentions that processors can handle 'words', i.e. several bytes at a time, so while information representation has to be binary I don't see why information processing has to be.
Can computers do arithmetic on hex directly? In this case, would the internal representation of information in memory/registers be in binary or hex?
Perhaps "digital computer" would be a good starting term and then from there "binary digit" ("bit"). Electronically, the terms for the values are sometimes "high" and "low". You are right, everything after that depends on the operation. Most of the time, groups of bits are operated on together. Commonly groups are 1, 8, 16, 32 and 64 bits. The meaning of the bits depends on the program but some operations go hand-in-hand with some level of meaning.
When the meaning of a group of bits is not known or important, humans like to be able to discern the value of each bit. Binary could be used, but more than 8 bits is hard to read. Although it is rare to operate on groups of 4 bits, hexadecimal is much more readable and is generally used regardless of the number of bits. Sometimes octal is used, but that's based on contexts where there is some meaning to a subgrouping of 3 bits or an avoidance of digits beyond 9.
Integers can be stored in two's complement format, and CPUs often have instructions for such integers. One such operation is negation. For a group of 8 bits, it maps 1 to -1, ..., 127 to -127, and -1 to 1, ..., -127 to 127, while 0 maps to 0 and -128 to -128. Decimal is likely the most valuable representation to humans here, not base 256, base 2 or base 16. In unsigned hexadecimal, that would be 01 to FF, ..., 00 to 00, 80 to 80.
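A small C illustration of that mapping, printing both the signed decimal view and the raw hex pattern the register actually holds:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t vals[] = {1, 127, -1, -127, 0, -128};
    for (int i = 0; i < 6; i++) {
        /* on two's-complement machines, -(-128) wraps back to -128 */
        int8_t neg = (int8_t)-vals[i];
        printf("%4d -> %4d   (hex %02X -> %02X)\n",
               vals[i], neg, (unsigned)(uint8_t)vals[i], (unsigned)(uint8_t)neg);
    }
    return 0;
}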
For an intro to how a CPU might do integer addition on a group of bits, see adder circuits.
Other number formats include IEEE-754 floating point and binary-coded decimal.
I think you understand that digital circuits are binary. So, based on the above, yes, operations do operate on a higher conceptual level despite the actual storage.

How could I guess a checksum algorithm?

Let's assume that I have some packets with a 16-bit checksum at the end. I would like to guess which checksum algorithm is used.
For a start, from dump data I can see that a one-byte change in the packet's payload totally changes the checksum, so I can assume that it isn't some kind of simple XOR or sum.
Then I tried several variations of CRC16, but without much luck.
This question might be more biased towards cryptography, but I'm really interested in any easy to understand statistical tools to find out which CRC this might be. I might even turn to drawing different CRC algorithms if everything else fails.
Background story: I have a serial RFID protocol with some kind of checksum. I can replay messages without problems and interpret the results (without checking the checksum), but I can't send modified packets because the device drops them on the floor.
Using existing software, I can change the payload of the RFID chip. However, the unique serial number is immutable, so I don't have the ability to check every possible combination. I could generate dumps of values incrementing by one, but not enough of them to make an exhaustive search applicable to this problem.
Dump files with data are available if the question itself isn't enough :-)
Need reference documentation? "A Painless Guide to CRC Error Detection Algorithms" is a great reference, which I found after asking the question here.
In the end, after the very helpful hint in the accepted answer that it's CCITT, I used this CRC calculator and XORed the generated checksum with a known checksum, getting 0xffff. That led me to the conclusion that the final XOR is 0xffff instead of CCITT's 0x0000.
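For reference, a bit-by-bit C sketch of that CRC (poly 0x1021, init 0xFFFF, MSB first), with the final XOR left as a parameter, since that is exactly what had to be discovered; skipping the leading 0xd6 byte is left to the caller:

#include <stddef.h>
#include <stdint.h>

static uint16_t crc16_ccitt(const uint8_t *data, size_t len, uint16_t final_xor)
{
    uint16_t crc = 0xFFFF;                  /* initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)(data[i] << 8);    /* feed byte, MSB first */
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return (uint16_t)(crc ^ final_xor);     /* 0xffff for this protocol */
}

A hypothetical call for this protocol would then be crc16_ccitt(packet + 1, length - 1, 0xFFFF), skipping the leading 0xd6.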
There are a number of variables to consider for a CRC:
Polynomial
Number of bits (16 or 32)
Normal (LSB first) or Reverse (MSB first)
Initial value
How the final value is manipulated (e.g. subtracted from 0xffff, or XORed with a constant value)
Typical CRCs:
LRC: Polynomial=0x81; 8 bits; Normal; Initial=0; Final=as calculated
CRC16: Polynomial=0xa001; 16 bits; Normal; Initial=0; Final=as calculated
CCITT: Polynomial=0x1021; 16 bits; reverse; Initial=0xffff; Final=0x1d0f
Xmodem: Polynomial=0x1021; 16 bits; reverse; Initial=0; Final=0x1d0f
CRC32: Polynomial=0xedb88320; 32 bits; Normal; Initial=0xffffffff; Final=inverted value
ZIP32: Polynomial=0x04c11db7; 32 bits; Normal; Initial=0xffffffff; Final=as calculated
The first thing to do is to get some samples by changing, say, the last byte. This will help you figure out the number of bytes in the CRC.
Is this a "homemade" algorithm? In that case it may take some time. Otherwise, try the standard algorithms.
Try changing either the MSB or the LSB of the last byte, and see how this changes the CRC. This will give an indication of the bit direction.
To make it more difficult, there are implementations that manipulate the CRC so that it will not affect the communications medium (protocol).
From your comment about RFID, it implies that the CRC is communications related. Usually CRC16 is used for communications, though CCITT is also used on some systems.
On the other hand, if this is UHF RFID tagging, then there are a few CRC schemes - a 5 bit one and some 16 bit ones. These are documented in the ISO standards and the IPX data sheets.
IPX: Polynomial=0x8005; 16 bits; Reverse; Initial=0xffff; Final=as calculated
ISO 18000-6B: Polynomial=0x1021; 16 bits; Reverse; Initial=0xffff; Final=as calculated
ISO 18000-6C: Polynomial=0x1021; 16 bits; Reverse; Initial=0xffff; Final=as calculated
Data must be padded with zeroes to make a multiple of 8 bits
ISO CRC5: Polynomial=custom; 5 bits; Reverse; Initial=0x9; Final=shifted left by 3 bits
Data must be padded with zeroes to make a multiple of 8 bits
EPC class 1: Polynomial=custom 0x1021; 16 bits; Reverse; Initial=0xffff; Final=post processing of 16 zero bits
Here is your answer!
Having worked through your logs, the CRC is the CCITT one. The first byte 0xd6 is excluded from the CRC.
It might not be a CRC, it might be an error correcting code like Reed-Solomon.
ECC codes are often a substantial fraction of the size of the original data they protect, depending on the error rate they are meant to handle. If the size of the messages is more than about 16 bytes, 2 bytes of ECC wouldn't be enough to be useful. So if the message is large, you're most likely correct that it's some sort of CRC.
I'm trying to crack a similar problem here and I found a pretty neat website that will take your file and run checksums on it with 47 different algorithms and show the results. If the algorithm used to calculate your checksum is any of these algorithms, you would simply find it among the list of checksums produced with a simple text search.
The website is https://defuse.ca/checksums.htm
You would have to try every possible checksum algorithm and see which one generates the same result. However, there is no guarantee as to what content was included in the checksum. For example, some algorithms skip whitespace, which leads to different results.
I really don't see why somebody would want to know that, though.
