digital complements

What is the application of complements such as 1's complement and 2's complement?

To express negative numbers in binary format.

From Wikipedia:
The two's complement of the number then behaves like the negative of the original number in most arithmetic, and it can coexist with positive numbers in a natural way.
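
As a small illustration (a Python sketch, not part of the quoted answer; the helper names are made up for this example), here is an 8-bit two's complement encode/decode showing why adding the complement behaves like adding the negative:

BITS = 8
MOD = 1 << BITS                                # 2^8 = 256

def encode(n):
    # Encode a (possibly negative) integer in 8-bit two's complement.
    return n % MOD                             # e.g. -5 -> 251 (0b11111011)

def decode(bits):
    # Interpret an 8-bit pattern as a signed two's complement value.
    return bits - MOD if bits >= MOD // 2 else bits

print(bin(encode(-5)))                         # 0b11111011
print(decode(encode(-5)))                      # -5
print(decode((encode(7) + encode(-5)) % MOD))  # 2, i.e. 7 + (-5)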

Related

What are real numbers in Dafny?

What are real numbers in Dafny? Are they represented as IEEE 754-2008 floating-point numbers? If not, then what are they? I.e., what is the specification of the real type in Dafny?
Dafny's real numbers are not floating point numbers.
From a verification perspective, they are the mathematical real numbers, and Dafny reasons about them using Z3's theory of real arithmetic.
From a compilation perspective, Dafny actually compiles them to BigRationals, which is made possible by the fact that Dafny doesn't have any builtin operations for creating irrational real numbers.
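
As a loose analogy (Python, not Dafny; the standard fractions module stands in for Dafny's BigRational representation purely for illustration), exact rational arithmetic avoids the rounding that binary floats introduce:

from fractions import Fraction

print(0.1 + 0.2 == 0.3)                                      # False with binary floats
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True with exact rationals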

Z3 precision for real and decimal values

What is the usual precision for Real variables in Z3? Is exact arithmetic used?
Is there a way to set the accuracy level manually?
If Real means that exact arithmetic must be used, is there any other data type for floating point values which has limited precision?
Finally: from this point of view, is z3 different with respect to the other popular SMT solvers, or is this standardised in the SMT-LIB definition?
See this answer: z3 existential theory of the reals
Regarding printing precision, see this one: algebraic reals: does z3 do rounding when pretty printing?
In short, yes: they are represented exactly, as roots of polynomials. Not every real number can be represented by the Real type (transcendentals such as e and pi cannot), but all polynomial roots are representable.
This paper discusses how to also deal with transcendentals.
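A small z3py sketch (assuming the z3-solver Python package is installed): the Real sort is exact, so the positive root of x^2 = 2 is kept as an algebraic number rather than a rounded float:

from z3 import Real, solve

x = Real('x')
solve(x * x == 2, x > 0)   # prints something like [x = 1.4142135623?]
                           # the trailing '?' marks a truncated printout,
                           # not an approximation inside the solver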

How are numbers represented in a computer and what is the role of floating point and two's complement?

I have a very general question about how computers work with numbers.
In general, computer systems only know binary: 0 and 1. So in memory, any number is a sequence of bits. It does not matter whether the number represented is an int or a float.
But when do things like floating-point numbers based on the IEEE 754 standard and two's complement enter the game? Is this only a matter for the compilers (C/C++, ...) and VMs (.NET/Java)?
Is it true that all integer numbers are represented using two's complement?
I have read about CPUs that use co-processors for performing floating-point arithmetic. To tell the CPU to use them, special assembler instructions exist, such as add.s (single precision) and add.d (double precision). When I have some C++ code where a float is used, will such assembler instructions be in the output?
I am totally confused at the moment. It would be great if you could help me with that.
In general, computer systems only know binary: 0 and 1. So in memory, any number is a sequence of bits. It does not matter whether the number represented is an int or a float.
This is correct for the representation in memory. But computers execute instructions on, and store the data currently being worked on in, registers. Both instructions and registers are specialized: some for the two's complement representation of signed integers, others for IEEE 754 binary32 and binary64 arithmetic (on a typical computer).
So to answer your first question:
But when do things like floating-point numbers based on the IEEE 754 standard and two's complement enter the game? Is this only a matter for the compilers (C/C++, ...) and VMs (.NET/Java)?
Two's complement and IEEE 754 binary floating-point are very much choices made by the ISA, which provides specialized instructions and registers to deal with these formats in particular.
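To make that concrete, here is a hedged Python sketch using the struct module to mimic how the same bytes in memory mean different things depending on whether an instruction reads them as a two's complement int32 or an IEEE 754 binary32 float:

import struct

raw = struct.pack('<f', 1.0)          # the four bytes of the float 1.0
print(raw.hex())                      # 0000803f (little-endian)
print(struct.unpack('<i', raw)[0])    # 1065353216: same bits read as an int32
print(struct.unpack('<f', raw)[0])    # 1.0: same bits read as a float32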
Is it true that all integer numbers are represented using two's complement?
You can represent integers however you want. But if you represent your signed integers using two's complement, the typical ISA will provide instructions to operate efficiently on them. If you make another choice, you will be on your own.
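As a hedged aside on why ISAs favour two's complement (a Python sketch, with an 8-bit register simulated by masking): a single modular adder serves both the signed and the unsigned interpretation of the same bit patterns.

MASK = 0xFF

a, b = 0xF0, 0x05            # bit patterns in an 8-bit register
s = (a + b) & MASK           # what a hardware adder produces: 0xF5

print(s)                     # 245 -> correct as unsigned 240 + 5
print(s - 0x100)             # -11 -> correct as signed  -16 + 5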

F# - How to compare floats

In F#, how do I efficiently compare floats for equality when they are almost equal? It should work for very large and very small values too. I am thinking of first comparing the exponent and then the significand (mantissa) while ignoring the last 4 of its 52 bits. Is that a good approach? How can I get the exponent and significand of a float?
An F# float is just a shorthand for System.Double. That being the case, you can use the BitConverter.DoubleToInt64Bits method to efficiently (and safely!) "cast" an F# float value to int64; this is useful because it avoids allocating a byte[], as John mentioned in his comment. You can get the exponent and significand from that int64 using some simple bitwise operations.
As John said though, you're probably better off with a simple check for relative accuracy. It's likely to be the fastest solution and "close enough" for many use cases (e.g., checking to see if an iterative solver has converged on a solution). If you need a specific amount of accuracy, take a look at NUnit's code -- it has some nice APIs for asserting that values are within a certain percentage or number of ulps of an expected value.
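For illustration, here is a rough Python analogue of that bit-level approach (the F# version would go through BitConverter.DoubleToInt64Bits; double_bits is a made-up helper name), followed by the simpler relative check:

import math
import struct

def double_bits(x):
    # Reinterpret an IEEE 754 binary64 and split out its fields.
    bits = struct.unpack('<q', struct.pack('<d', x))[0]
    sign        = (bits >> 63) & 0x1
    exponent    = (bits >> 52) & 0x7FF          # biased exponent
    significand = bits & ((1 << 52) - 1)        # fraction bits only
    return sign, exponent, significand

print(double_bits(1.5))                             # (0, 1023, 2251799813685248)
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9))   # True: relative comparison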
When you ask how to compare floating-point values that are almost equal, you are asking:
I have two values, x and y, that have been computed with floating-point arithmetic, so they contain rounding errors and are approximations of the ideal mathematical x and y (the values exact arithmetic would have produced). How can I use the floating-point x and y to compare the mathematical x and y for equality?
There are two problems here:
We do not know how much error there may be in x or y. Some combinations of arithmetic magnify errors, while others shrink them. It is possible for the errors in x and y to range from zero to infinity, and you have not given us any information about this.
It is often assumed that the goal is to report "equal" when the floating-point x and y are unequal but close to each other. This converts false negatives (inequality reported even though the mathematical x and y are equal) into true positives. However, it also creates false positives (equality reported even though the mathematical x and y are unequal).
There is no general solution for these problems.
It is impossible to know in general whether an application can tolerate being told that values are equal when they should be unequal or vice-versa without knowing specific details about that application.
It is impossible to know in general how much error there may be in x and y.
Therefore, there is no correct general test for equality of values that have been computed approximately.
Note that this problem is not really about testing for equality. Generally, it is impossible to correctly compute any function of incorrect data (except for trivial functions such as constant functions). Since x and y contain errors, it is impossible to use x to compute log(x) without error, or to compute acos(y) or sqrt(x) without error. In fact, if the errors have made the floating-point y slightly greater than 1 while the mathematical y is not, or made the floating-point x slightly less than zero while the mathematical x is not, then computing acos(y) or sqrt(x) will produce exceptions and NaNs even though the ideal mathematical values would work without a problem.
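To make the domain-error point concrete, a small Python sketch (the particular expression is just an illustration of rounding error pushing a value out of acos's domain):

import math

y = 0.1 * 3 / 0.3          # mathematically exactly 1
print(y)                   # 1.0000000000000002
try:
    math.acos(y)           # acos is only defined on [-1, 1]
except ValueError as e:
    print('acos failed:', e)   # math domain error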
What this all means is that you cannot simply convert exact mathematical arithmetic to approximate floating-point arithmetic and expect to get a good result (whether you are testing for equality or not). You must consider the effects of converting exact arithmetic to approximate arithmetic and evaluate how they affect your program and your data. The use of floating-point arithmetic, including comparisons for equality, must be tailored to individual situations.

Lua: subtracting decimal numbers doesn't return correct precision

I am using Lua 5.1
print(10.08 - 10.07)
Rather than printing 0.01, the above prints 0.0099999999999998.
Any idea how to get 0.01 from this subtraction?
You did get (essentially) 0.01 from the subtraction; the printed result just carries a tiny amount of rounding error.
Lua uses the C type double to represent numbers. This is, on nearly every platform you will use, a 64-bit binary floating-point value with approximately 16 decimal digits of precision. However, no finite number of binary digits can represent 0.01 exactly. The situation is similar to attempting to write 1/3 in decimal.
Furthermore, you are subtracting two values that are very similar in magnitude. Cancellation in such a subtraction makes the pre-existing rounding error much larger relative to the result.
The solution depends on what your context is. If you are doing accounting, then I would strongly recommend that you not use floating point values to represent account values because these small errors will accumulate and eventually whole cents (or worse) will appear or disappear. It is much better to store accounts in integer cents, and divide by 100 for display.
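To make the integer-cents advice concrete, a small sketch (shown in Python for brevity; the same idea carries over directly to Lua):

price_a = 1008                  # 10.08 stored as whole cents
price_b = 1007                  # 10.07 stored as whole cents

diff = price_a - price_b        # exact integer arithmetic: 1 cent
print(diff / 100)               # 0.01, divided by 100 only for display
print(10.08 - 10.07)            # the floating-point version: close to, but not exactly, 0.01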
In general, the answer is to be aware of the issues that are inherent to floating point, and one of them is this sort of small loss of precision. It is easily handled by rounding answers to an appropriate number of digits for display, and never comparing results of calculations for equality.
Some resources for background:
The semi-official explanation at the Lua Users Wiki
This great page of IEEE floating point calculators where you can enter values in decimal and see how they are represented, and vice-versa.
Wiki on IEEE floating point.
Wiki on floating point numbers in general.
What Every Computer Scientist Should Know About Floating-Point Arithmetic is the most complete discussion of the fine details.
Edit: Added the WECSSKAFPA document after all. It really is worth the study, although it will likely seem a bit overwhelming on the first pass. In the same vein, Knuth Volume 2 has extensive coverage of arithmetic in general and a large section on floating point. And, since lhf reminded me of its existence, I inserted the Lua wiki explanation of why floating point is ok to use as the sole numeric type as the first item on the list.
Use Lua's string.format function:
print(string.format('%.02f', 10.08 - 10.07))
Use an arbitrary precision library.
Use a Decimal Number Library for Lua.
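As an illustration of the decimal-library route (Python's decimal module used here only as a stand-in for a Lua decimal number library):

from decimal import Decimal

print(Decimal('10.08') - Decimal('10.07'))   # 0.01, exactly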
