Multiplying LongInts. Expression of the form: Int64Var := LongIntVar * LongIntVar - delphi

I always thought it was part of the design philosophy in Pascal that it looked at both the right- and left-hand sides of an expression when deciding what format/precision to use for an operation. So, unlike C, where an expression like
Float_Var = 1/3
results in a value of 0.0 for Float_Var, Pascal always gets this stuff right. :)
So I was kind of surprised when I went to multiply two LongInts (32-bit) to give an Int64 result and found I was getting anomalous results. I had to get all C-like and use
Int64_Var := Int64(LongIntVar1) * LongIntVar2
to make it work correctly. (BTW. This was under Delphi, various versions tested, but all win32).
I was just wondering: is this an exceptional case in Delphi/Pascal? Or are there other examples where the usual Pascal way (using the types on both sides of an expression to decide how the operation is performed) doesn't hold?

If by "both sides" you mean that it looks at the type of the target variable in an assignment for determining the expression type, then no, that has never been the case. Delphi works like any other mainstream compiler in that regard - that is, the type of an expression is determined from the inside out.

I always thought it was part of the design philosophy in Pascal that it looked at both the right- and left-hand sides of an expression when deciding what format/precision to use for an operation.
That is not correct. Assignment targets do not influence the evaluation of the expression.
The reason that
Float_Var = 1/3;
evaluates to 0 in C/C++ is that the / operator is overloaded. It can mean either integer division or floating point division. If one of the arguments is floating point then the operator is floating point division, otherwise, as here, it is integer division.
In Delphi the / operator is not overloaded. It is always floating point division. That's why this code gives a compile error:
Int_Var := 1/3;
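A minimal sketch of the corresponding Delphi behaviour (identifiers are illustrative):
var
  Float_Var: Double;
  Int_Var: Integer;
begin
  Float_Var := 1 / 3;    // "/" is always floating-point division: 0.333...
  Int_Var := 1 div 3;    // integer division has its own operator, div: 0
  // Int_Var := 1 / 3;   // does not compile: "/" always yields a real result
end.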

Related

Delphi Roundto and FormatFloat Inconsistency

I'm getting a rounding oddity in Delphi 2010, where some numbers are rounding down in roundto, but up in formatfloat.
I'm entirely aware of binary representation of decimal numbers sometimes giving misleading results, but in that case I would expect formatfloat and roundto to give the same result.
I've also seen advice that this is the sort of thing "Currency" should be used for, but as you can see below, Currency and Double give the same results.
program testrounding;

{$APPTYPE CONSOLE}
{$R *.res}

uses
  System.SysUtils, Math;

var
  d: Double;
  c: Currency;

begin
  d := 534.50;
  c := 534.50;
  Writeln('Format: ' + FormatFloat('0', d));
  Writeln('Roundto: ' + FormatFloat('0', RoundTo(d, 0)));
  Writeln('C Format: ' + FormatFloat('0', c));
  Writeln('C Roundto: ' + FormatFloat('0', RoundTo(c, 0)));
  Readln;
end.
The results are as follows:
Format: 535
Roundto: 534
C Format: 535
C Roundto: 534
I've looked at "Why is the result of RoundTo(87.285, -2) => 87.28" and the suggested remedies do not seem to apply.
First of all, we can remove Currency from the question, because the two functions that you use don't have Currency overloads. The value is converted to an IEEE754 floating point value and then follows the same path as your Double code.
Let's look at RoundTo first of all. It is quick to check, using the debugger or an additional Writeln, that RoundTo(d,0) = 534. Why is that?
Well, the documentation for RoundTo says:
Rounds a floating-point value to a specified digit or power of ten using "Banker's rounding".
Indeed, in the implementation of RoundTo we see that the rounding mode is temporarily switched to TRoundingMode.rmNearest before being restored to its original value. The rounding mode only comes into play when the value is exactly halfway between two integers, which is precisely the case we have here.
So Banker's rounding applies. Which means that when the value is exactly half way between two integers, the rounding algorithm chooses the adjacent even integer.
So it makes sense that RoundTo(534.5,0) = 534, and equally you can check that RoundTo(535.5,0) = 536.
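For example, a quick check (a sketch, assuming the Math unit is in the uses clause):
Writeln(RoundTo(534.5, 0):0:0);  // 534 - the halfway value rounds to the even neighbour
Writeln(RoundTo(535.5, 0):0:0);  // 536 - likewise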
Understanding FormatFloat is quite a different matter. Quite frankly its behaviour is somewhat opaque. It performs an ad hoc rounding in code that differs for different platforms. For instance it is assembler on 32 bit Windows, but Pascal on 64 bit Windows. The overall approach appears to be to take the mantissa of the floating point value, convert it to an integer, convert that to text digits, and then perform the rounding based on those text digits. No respect is paid to the current rounding mode when the rounding is performed, and the algorithm appears to implement the round half away from zero policy. However, even that is not implemented robustly for all possible floating point values. It works correctly for your value, but for values with more digits in the mantissa the algorithm breaks down.
In fact it is fairly well known that the Delphi RTL routines for converting between floating point values and text are fundamentally broken by design. There are no routines in the Delphi RTL that can correctly convert from text to float, or from float to text. In fact, I have recently implemented my own conversion routines, that do this correctly, based on existing open source code used by other language runtimes. One of these days I will get around to publishing this code for use by others.
I'm not sure what your exact needs are, but if you are wishing to exert some control over rounding, then you can do so if you take charge of the rounding. Whilst RoundTo always uses Banker's rounding, you can instead use Round which uses the current rounding mode. This will allow you to perform the round using the rounding algorithm of your choice (by calling SetRoundMode), and then you can convert the rounded value to text. That's the key. Keep the value in an arithmetic type, perform the rounding, and only convert to text at the very last moment, after the correct rounding has been applied.
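For example, a sketch of that approach (the rounding mode and format string are illustrative; assumes the Math unit):
var
  d: Double;
  r: Int64;
begin
  d := 534.5;
  SetRoundMode(rmUp);            // pick the rounding policy yourself; rmUp rounds towards +infinity
  r := Round(d);                 // Round honours the current rounding mode: 535
  Writeln(FormatFloat('0', r));  // convert to text only after the rounding is done
end.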
In this case, the value 534.5 is exactly representable in Double precision.
Looking into the source code reveals that the FormatFloat function rounds upwards if the last pending digit is 5 or more.
RoundTo uses Banker's rounding and rounds to the nearest even number (534) in this case.

How to negate a variable?

I have the below Working-storage variable in my program.
01 W-WRK.
02 W-MNTH-THRSHLD PIC S9(04).
I am using the below COMPUTE function to negate the value of W-MNTH-THRSHLD.
COMPUTE W-MNTH-THRSHLD OF W-WRK =
W-MNTH-THRSHLD OF W-WRK * -1.
I want to know if this approach is right or is there any alternative for the same?
Firstly, why are you using qualification (the OF)? That is only required if you have defined duplicate names. Why define duplicate names in the WORKING-STORAGE?
Secondly, unless you are using a very old COBOL compiler, you should use only the minimum required full-stops/periods in the PROCEDURE DIVISION: one to terminate a paragraph/SECTION label, one to terminate a paragraph/SECTION, one to terminate the PROCEDURE DIVISION header, and one to terminate the program (if a full-stop/period is not already there). Keeping extra full-stops/periods around makes it more difficult to copy code around. Put the full-stop/period on a line of its own, so that no line of code has one; then you can't accidentally terminate a scope by copying a line of code with a full-stop/period into a scope.
With those in mind, your code becomes:
COMPUTE W-MNTH-THRSHLD = W-MNTH-THRSHLD
* -1
Multiplication is slower than subtraction. So as Bruce Martin suggested:
COMPUTE W-MNTH-THRSHLD = 0
- W-MNTH-THRSHLD
I do it like this:
SUBTRACT W-MNTH-THRSHLD FROM 0
GIVING W-MNTH-THRSHLD-REV-SIGN
I dislike "destroying" a value just for the heck of it. If the program fails, I know what W-MNTH-THRSHLD contained, plus the meaningful name for the target field explains what the line does.
You could also DIVIDE (or / in COMPUTE), but that is even slower than MULTIPLY (or *).
Also bear in mind that conversions may be required, because you are doing arithmetic with a USAGE DISPLAY field. If you define your field as BINARY or PACKED-DECIMAL, conversion is less likely for arithmetic. You won't lose by doing that, unless your compiler can deal with a USAGE DISPLAY field in arithmetic without requiring conversion.
Note also, COMPUTE is not a function. COMPUTE is a verb, just a part of the language. "I am using COMPUTE" is sufficient, and not even necessary, as we can see that from the code.

Why is a Boolean expression (with side effects) not enough as a statement?

function A: Boolean;
function B: Boolean;
I (accidentally) wrote this:
A or B;
Instead of that:
if not A then
B;
The compiler rejects the first form, I am curious why?
With short circuit evaluation they would both do the same thing, would they not?
Clarification: I was wondering why the language was not designed to allow my expression as a statement.
The first is an expression. Expressions are evaluated. Expressions have no visible side effects (such as reading or writing a variable). Both operands of the expression are functions, and those can have side effects, but in order to have side effects a statement must be executed.
The second is a statement. It evaluates an expression and, based on the result, calls another function.
The confusing part is that Delphi allows us to call a function as a statement and disregard its result. So you might expect the same to work for A or B, but that is not allowed. Which is just as well, because the behaviour would be ambiguous: with short-circuit evaluation, if A evaluates to True, is B called or not?
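A minimal sketch of what the compiler does and does not accept (assuming A and B declared as above):
var
  Got: Boolean;
begin
  A;              // accepted: a function call on its own line is a valid statement
  Got := A or B;  // accepted: an assignment is a statement (and short-circuits by default)
  // A or B;      // rejected: an expression by itself is not a statement
end;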
Simply, because the compiler is expecting a statement and the expression that you have provided is not a statement.
Consult the documentation and you will find a list of valid statements. Your expression cannot be found in that list.
You asked in the (now deleted) comments why the language designers elected not to make such an expression count as a statement. But that question implies purpose where there may have been none. Languages are generally designed to solve specific problems, and it's perfectly plausible that the designers never decided against this at all; they may simply never have considered treating such expressions as statements in the first place.
The first form is an expression which evaluates to a Boolean value, not a statement.
At its heart, Delphi is Pascal. The Pascal language was designed by Niklaus Wirth in the late 1960s. My copy of the User Manual and Report is from 1978. It was designed with two purposes in mind: as a teaching language and as one that was easy to implement on any given machine. In this he was spectacularly successful.
Wirth was intimately familiar with other languages of the time (including Fortran, Cobol and particularly Algol) and made a series of careful choices with particular purposes in mind. In particular, he carefully separated the concept of 'actions' from 'values'. The 'actions' in Pascal are the statements in the language, including procedure call. The 'values' include function calls. In this and some other respects the language is quite similar to Algol.
The syntax for declaring and using actions and values is carefully kept quite separate. The language, and the libraries provided with it, do not in general have 'side effects' as such. Procedures do things and expressions calculate values. For example, 'read' is a procedure, not a function, because it retrieves a value and advances through the file, but 'eof' is a function.
The mass market version of Pascal was created by Borland in the mid 1980s and successively became Turbo Pascal for Windows and then Delphi. The language has changed a lot and not all of it is as pure as Wirth designed it. This is one feature that has survived.
Incidentally, Pascal did not have short-circuit evaluation. It had heap memory and sets, but no objects. They came later.

How safe is comparing numbers in Lua with the equality operator?

In my engine I have a Lua VM for scripting. In the scripts, I write things like:
stage = stage + 1
if (stage == 5) then ... end
and
objnum = tonumber("5")
if (stage == objnum)
According to the Lua sources, Lua uses a simple equality operator when comparing doubles, the internal number type it uses.
I am aware of precision problems when dealing with floating point values, so I want to know if the comparison is safe, that is, will there be any problems with simply comparing these numbers using Lua's default '==' operation? If so, are there any countermeasures I can employ to make sure 1+2 always compares as equal to 3? Will converting the values to strings work?
You may be better off by converting to string and then comparing the results if you only care about equality in some cases. For example:
> print(21, 0.07*300, 21 == 0.07*300, tostring(21) == tostring(0.07*300))
21 21 false true
I learned this the hard way when I gave my students an assignment with these numbers (0.07 and 300) and asked them to implement a unit test, which then miserably failed complaining that 21 is not equal to 21 (it was comparing actual numbers, but displaying stringified values). It was a good reason for us to have a discussion about comparing floating point values.
are there any countermeasures I can employ to make sure 1+2 always compares as equal to 3?
You needn't worry. The number type in Lua is double, which can hold many more integers exactly than a long int.
Comparison and basic operations on doubles are safe in certain situations, in particular if the numbers and their result can be expressed exactly - including all low-value integers.
So 2+1 == 3 will be fine for doubles.
NOTE: I believe there are even some guarantees for certain math functions (like pow and sqrt), and if your compiler/library respects those then sqrt(4.0) == 2.0 or 4.0 == pow(2.0, 2.0) will reliably be true.
By default, Lua numbers are C/C++ floating-point values, and behind the scenes number comparisons boil down to floating-point comparisons in C/C++, which are indeed problematic and discussed in several threads, e.g. most-effective-way-for-float-and-double-comparison.
Lua makes the situation only slightly worse by converting all numbers, including C/C++ integers, into that floating-point type. So you need to keep that in mind.
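For comparison, the same pitfall exists with Delphi's Double type (the language of the rest of this page); the usual countermeasure there is a tolerance-based comparison such as Math.SameValue rather than stringifying. A sketch (the 0.1 + 0.2 example is illustrative, not from the question):
program FloatCompare;
{$APPTYPE CONSOLE}
uses
  Math;
var
  A, B, Sum, Target: Double;
begin
  A := 0.1;
  B := 0.2;
  Sum := A + B;                     // 0.30000000000000004...
  Target := 0.3;                    // 0.29999999999999999...
  Writeln(Sum = Target);            // FALSE - exact equality fails
  Writeln(SameValue(Sum, Target));  // TRUE  - equal within a small default tolerance
end.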

Should I use unsigned integers for counting members?

Should I use unsigned integers for my count class members?
For example, assume a class
TList<T> = class
private
  FCount: Cardinal;
public
  property Count: Cardinal read FCount;
end;
That does make sense, doesn't it? The number of items stored in a list can't be negative, so why not use an unsigned integer type for it? I think it's in general a good principle to always use the least general (ergo the most special) type possible.
Now, iterating over a list looks like this:
for I := 0 to List.Count - 1 do
  Writeln(List[I]);
When the number of items stored in the list is zero, the compiler tries to evaluate
List.Count - 1
which results in a nice Integer overflow (underflow, to be exact). Combined with the fact that the debugger does not show the appropriate location where the exception occurred, this was very hard for me to find.
Let me add that if you have overflow checking turned off, the resulting errors will be even harder to track, because then you will often access memory that doesn't belong to you - and that results in undefined behaviour.
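For what it's worth, if the Count property stays a Cardinal, the underflow can be avoided by never evaluating Count - 1 for an empty list. A minimal sketch (reusing the declarations above):
if List.Count > 0 then
  for I := 0 to List.Count - 1 do
    Writeln(List[I]);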
I will be using plain Integers for all my count members from now on to avoid situations like this.
If that's complete nonsense, please point it out to me :)
(I just spent an hour tracking an integer overflow in my code, so I decided to share that - most people on here will know that of course, but perhaps I can save someone some time.)
No, definitely not. Delphi idiom is to use integers here. Don't fight the language.
In a 32-bit environment you'll not have more elements in the list than an Integer can index anyway, except perhaps if you try to build a bitmap.
Let's be clear: every programmer who is going to have to use your code is going to hate you for using a Cardinal instead of an integer.
Unsigned integers are almost always more trouble than they're worth because you usually end up mixing signed and unsigned integers in expressions at some point. That means that the type will need to be widened (and probably have a performance hit) to get correct semantics (ideally the compiler does this as per language definition), or else you'll need to be very careful in your range checking.
Take C/C++ for example: size_t is the type of the integer for memory sizes and allocation, and is unsigned, but ptrdiff_t is the type for the offset you get when you subtract one pointer from another, and necessarily is signed. Want to know how many elements you've allocated in an array? Perhaps you subtract the first element address from the last+1 element address and divide by sizeof(element-type)? Well, now you've just mixed signed and unsigned integers.
Regarding your statement that "I think it's in general a good principle to always use the least general (ergo the most special) type possible." - actually I think it's a good principle to use the data type that will cause you least angst and trouble.
Generally for me that's a signed int since:
I don't usually have lists with 2^31 or more elements in them.
You shouldn't have lists that big either :-)
I don't like the hassle of having special edge cases in my code.
But it's really a style issue. If 'purity' of code is more important to you than brevity of code, your method is best (with modifications to catch the edge cases). Myself, I prefer brevity since edge cases tend to clutter the code and reduce understanding.
Don't.
It's not just going against a programming idiom, it's an explicit request to the compiler to use unsigned arithmetic, which is prone either to anomalous behaviors (if you don't guard against overflows) or to irrelevant runtime exceptions (if you do guard against overflows, a temporary overflow will be fatal, for instance when you subtract before adding, even if the final result is positive; and I'm referring to the CPU opcode-level ordering of operations, which may not bear a trivial relationship to what you have in your code).
Keep in mind "unsigned" does not translate to "positive", it translates to "doesn't have a sign", which is different. The term "unsigned" was picked for good reason (and naming it "Cardinal" in Delphi was a poor choice IMO).
Unsigned types are for raw storage specifications, bitwise operations, ASM code, embedded controllers and other specialty uses. When you're doing high-level programming, you should forget you ever heard about unsigned types.
Moral: use iterators and foreach when you can, because it avoids this question altogether.
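In Delphi that looks like the following sketch (assumes an enumerable list such as TList<Integer> from Generics.Collections):
var
  Item: Integer;
begin
  for Item in List do   // no index arithmetic, so no Count - 1 underflow
    Writeln(Item);
end;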
Boundary conditions frequently present problems. Allowing for a type that can go negative may just shift the issue. Perhaps it shifts it in a way that's easier to debug, perhaps not. I started off using integers for counting loops like that, but later on switched to cardinals to help me catch errors.
