I have the following procedure:
procedure GetDegree(const num: DWORD; var degree: DWORD; min, sec: Extended);
begin
degree := num div (500*60*60);
min := num div (500*60) - degree*60;
sec := num/500 - min*60 - degree*60*60;
end;
After the degree variable gets assigned, the debugger skips to the end of the procedure. Why is that?
It's an optimisation. The variables min and sec are passed by value. That means that modifications to them are not seen by the caller and are private to this procedure. Hence the compiler can work out that assigning to them is pointless. The values assigned to the variables can never be read. So the compiler elects to save time and skip the assignments.
I expect that you meant to declare the procedure like this:
procedure GetDegree(const num: DWORD; var degree: DWORD; var min, sec: Extended);
As I said in your previous question, there's not really much point in using Extended. You would be better off with one of the standard floating point types, Single or Double. Or even using the generic Real which maps to Double.
Also, you have declared min to be of floating point type, but the calculation computes an integer. My answer to your previous question is quite precise in this regard.
I would recommend that you create a record to hold these values. Passing three separate variables around makes your function interfaces very messy and breaks encapsulation. These three values only have meaning when considered as a whole.
type
TGlobalCoordinate = record
Degrees: Integer;
Minutes: Integer;
Seconds: Real;
end;
function LongLatToGlobalCoordinate(const LongLat: DWORD): TGlobalCoordinate;
begin
Result.Degrees := LongLat div (500*60*60);
Result.Minutes := LongLat div (500*60) - Result.Degrees*60;
Result.Seconds := LongLat/500 - Result.Minutes*60 - Result.Degrees*60*60;
end;
function GlobalCoordinateToLongLat(const Coord: TGlobalCoordinate): DWORD;
begin
Result := Round(500*(Coord.Seconds + Coord.Minutes*60 + Coord.Degrees*60*60));
end;
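For reference, the same arithmetic can be sketched outside Delphi. This is a hypothetical Python translation (the function names are mine), assuming the raw value counts 1/500ths of an arc-second:

```python
def longlat_to_dms(num):
    # 500 units per arc-second, so 500*60*60 units per degree
    degrees = num // (500 * 60 * 60)
    minutes = num // (500 * 60) - degrees * 60
    seconds = num / 500 - minutes * 60 - degrees * 60 * 60
    return degrees, minutes, seconds

def dms_to_longlat(degrees, minutes, seconds):
    # inverse: scale everything back to 1/500ths of an arc-second
    return round(500 * (seconds + minutes * 60 + degrees * 60 * 60))
```

A quick round-trip check: `dms_to_longlat(*longlat_to_dms(1234567))` gives back 1234567.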
Here's the deal: I'm developing a security system and I'm doing some bit scrambling using bitwise operations. Using 4 bits just to illustrate, suppose I have 1001 and I wish to shift left. This would leave me with 0010, since the left-most bit would be lost. What I wanted to do was to shift left and right without losing any bits.
You might choose to use rotate rather than shift. That preserves all the bits. If you wish to use an intermediate value that is the result of a shift, perform both a rotate and a shift. Keep track of the value returned by the rotate, but use the value returned by the shift. This question provides various implementations of rotate operations: RolDWord Implementation in Delphi (both 32 and 64 bit)?
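To make the rotate idea concrete, here is a language-agnostic sketch in Python of 32-bit rotation (the linked question collects native Delphi implementations):

```python
MASK32 = 0xFFFFFFFF

def rol32(value, count):
    # rotate left: bits pushed out at the top re-enter at the bottom
    count &= 31
    return ((value << count) | (value >> (32 - count))) & MASK32

def ror32(value, count):
    # rotate right is just a left rotation by the complementary count
    return rol32(value, 32 - (count & 31))
```

For example, `rol32(0x80000001, 1)` yields `0x00000003`: nothing is lost, and `ror32` undoes it exactly.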
Another option is to never modify the original value. Instead just keep track of the cumulative shift, and when a value is required, return it.
type
TLosslessShifter = record
private
FData: Cardinal;
FShift: Integer;
function GetValue: Cardinal;
public
class function New(Data: Cardinal): TLosslessShifter; static;
procedure Shift(ShiftIncrement: Integer);
property Value: Cardinal read GetValue;
end;
class function TLosslessShifter.New(Data: Cardinal): TLosslessShifter;
begin
Result.FData := Data;
Result.FShift := 0;
end;
procedure TLosslessShifter.Shift(ShiftIncrement: Integer);
begin
inc(FShift, ShiftIncrement);
end;
function TLosslessShifter.GetValue: Cardinal;
begin
if FShift > 0 then
Result := FData shr FShift
else
Result := FData shl -FShift;
end;
Some example usage and output:
var
Shifter: TLosslessShifter;
....
Shifter := TLosslessShifter.New(8);
Shifter.Shift(-1);
Writeln(Shifter.Value);
Shifter.Shift(5);
Writeln(Shifter.Value);
Shifter.Shift(-4);
Writeln(Shifter.Value);
Output:
16
0
8
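The same bookkeeping can be sketched in Python (a loose translation of the record above, with names of my choosing):

```python
class LosslessShifter:
    def __init__(self, data):
        # the original bits are never modified
        self.data = data
        self.shift = 0

    def apply_shift(self, increment):
        # only the cumulative shift amount changes
        self.shift += increment

    @property
    def value(self):
        # the shift is applied lazily, on read
        if self.shift > 0:
            return self.data >> self.shift
        return self.data << -self.shift
```

Replaying the example: starting from 8, shifting by -1, +5 and -4 yields 16, 0 and 8, matching the output above.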
I'm going to maintain and port to Delphi XE2 a bunch of very old Delphi code that is full of VarArrayCreate constructs to fake dynamic arrays having a lower bound that is not zero.
Drawbacks of using Variant types are:
quite a bit slower than native arrays (the code does a lot of complex financial calculations, so speed is important)
not type safe (especially when by accident a wrong var... constant is used, and the Variant system starts to do unwanted conversions or rounding)
Both could become moot if I could use dynamic arrays.
Good thing about variant arrays is that they can have non-zero lower bounds.
What I recollect is that dynamic arrays used to always start at a lower bound of zero.
Is this still true? In other words: Is it possible to have dynamic arrays start at a different bound than zero?
As an illustration a before/after example for a specific case (single dimensional, but the code is full of multi-dimensional arrays, and besides varDouble, the code also uses various other varXXX data types that TVarData allows to use):
function CalculateVector(aSV: TStrings): Variant;
var
I: Integer;
begin
Result := VarArrayCreate([1,aSV.Count-1],varDouble);
for I := 1 to aSV.Count-1 do
Result[I] := CalculateItem(aSV, I);
end;
The CalculateItem function returns Double. Bounds are from 1 to aSV.Count-1.
The current replacement is like this, trading the space of the zeroth element of Result for improved compile-time checking:
type
TVector = array of Double;
function CalculateVector(aSV: TStrings): TVector;
var
I: Integer;
begin
SetLength(Result, aSV.Count); // lower bound is zero, we start at 1 so we ignore the zeroth element
for I := 1 to aSV.Count-1 do
Result[I] := CalculateItem(aSV, I);
end;
Dynamic arrays always have a lower bound of 0. So, low(A) equals 0 for all dynamic arrays. This is even true for empty dynamic arrays, i.e. nil.
From the documentation:
Dynamic arrays are always integer-indexed, always starting from 0.
Having answered your direct question already, I also offer you the beginnings of a generic class that you can use in your porting.
type
TSpecifiedBoundsArray<T> = class
private
FValues: TArray<T>;
FLow: Integer;
function GetHigh: Integer;
procedure SetHigh(Value: Integer);
function GetLength: Integer;
procedure SetLength(Value: Integer);
function GetItem(Index: Integer): T;
procedure SetItem(Index: Integer; const Value: T);
public
property Low: Integer read FLow write FLow;
property High: Integer read GetHigh write SetHigh;
property Length: Integer read GetLength write SetLength;
property Items[Index: Integer]: T read GetItem write SetItem; default;
end;
{ TSpecifiedBoundsArray<T> }
function TSpecifiedBoundsArray<T>.GetHigh: Integer;
begin
Result := FLow+System.High(FValues);
end;
procedure TSpecifiedBoundsArray<T>.SetHigh(Value: Integer);
begin
SetLength(FValues, 1+Value-FLow);
end;
function TSpecifiedBoundsArray<T>.GetLength: Integer;
begin
Result := System.Length(FValues);
end;
procedure TSpecifiedBoundsArray<T>.SetLength(Value: Integer);
begin
System.SetLength(FValues, Value);
end;
function TSpecifiedBoundsArray<T>.GetItem(Index: Integer): T;
begin
Result := FValues[Index-FLow];
end;
procedure TSpecifiedBoundsArray<T>.SetItem(Index: Integer; const Value: T);
begin
FValues[Index-FLow] := Value;
end;
I think it's pretty obvious how this works. I contemplated using a record but I consider that to be unworkable. That's down to the mix between value type semantics for FLow and reference type semantics for FValues. So, I think a class is best here.
It also behaves rather weirdly when you modify Low.
No doubt you'd want to extend this. You'd add a SetBounds, a copy to, a copy from and so on. But I think you may find it useful. It certainly shows how you can make an object that looks very much like an array with non-zero lower bound.
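The index translation is easy to mirror in other languages; here is a minimal Python sketch of the same idea (the names are mine, not part of the Delphi class):

```python
class SpecifiedBoundsArray:
    def __init__(self, low, length):
        self.low = low                  # caller-chosen lower bound
        self._values = [None] * length  # zero-based backing store

    @property
    def high(self):
        return self.low + len(self._values) - 1

    def __getitem__(self, index):
        # subtract the lower bound to reach the zero-based storage
        return self._values[index - self.low]

    def __setitem__(self, index, value):
        self._values[index - self.low] = value
```

An array created as `SpecifiedBoundsArray(1, 3)` is then indexed from 1 to 3, just like the Variant arrays being replaced.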
In Delphi 6 or in Delphi 2010, declare two variables of type Currency (vtemp1, vtemp2) and assign both the value 0.09.
Wrap one of the variables in the Abs function and compare it to the other.
You would expect the comparison to succeed, as the debugger watch shows the same value for Abs(vtemp1) and vtemp2.
Oddly, the if statement fails!
Notes:
- This problem shows up only with the number 0.09 (trying several other nearby values gave normal results).
- When the variables are declared as Double instead of Currency, the problem ceases to exist.
I think the reason is type conversion. The Abs() function returns a real result, so the Currency variable is converted to a real type. Take a look at the documentation:
Currency is a fixed-point data type that minimizes rounding errors in
monetary calculations. On the Win32 platform, it is stored as a scaled
64-bit integer with the four last significant digits implicitly
representing decimal places. When mixed with other real types in
assignments and expressions, Currency values are automatically divided
or multiplied by 10000.
so Currency is fixed and real is floating-point.
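A quick way to see the mismatch, sketched here in Python: the scaled-integer representation holds 9/100 exactly, while the nearest binary floating-point value does not.

```python
from fractions import Fraction

exact = Fraction(9, 100)    # what Currency holds, conceptually: 900 / 10000
as_float = Fraction(0.09)   # the exact value of the double nearest to 0.09

# the two are not the same number, which is why the float comparison fails
print(exact == as_float)    # False
```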
Sample code for your question is :
program Project3;
{$APPTYPE CONSOLE}
const VALUE = 0.09;
var a,b : currency;
begin
a := VALUE;
b := VALUE;
if a = Abs(b) then writeln('equal')
else writeln('not equal', a - Abs(b));
readln;
end.
produces a "not equal" result because of type conversions.
"compiler watch reveals the same value for abs(vtemp1) and vtemp2"
Try adding x: Real and calling x := Abs(b); then add x to the watch list, select it, press Edit Watch and choose Floating point: x turns out to be 0.0899...967.
Not only 0.09 produces such a result; you can try this code to check:
var
a, b: Currency;
i: Integer;
begin
a := 0;
for i := 0 to 10000 do begin
a := a + 0.001;
b := a;
if a <> Abs(b) then writeln('not equal ', a);
end;
So, if you need the absolute value of a Currency variable, do it yourself; don't use the floating-point Abs():
function Abs(x : Currency):Currency; inline;
begin
if x > 0 then result := x
else result := -x;
end;
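The same fix, sketched in Python terms: operate on the underlying scaled integer (Currency keeps 0.09 as 900), so the absolute value never touches floating point. The helper name is mine.

```python
def currency_abs(scaled):
    # 'scaled' is the Int64 behind a Currency value, e.g. 0.09 -> 900
    return scaled if scaled >= 0 else -scaled
```

Because the operation stays in the integer domain, no conversion or rounding can creep in.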
A little clarification. The 'issue' appears if float values are compared:
var
A: Currency;
begin
A:= 0.09;
Assert(A = Abs(A));
end;
That is because Abs(A) returns a float value, and A = Abs(A) is implemented as a float compare.
I could not reproduce it if Currency values are compared:
var
A, B: Currency;
begin
A:= 0.09;
B:= Abs(A);
Assert(A = B);
end;
But the second sample is also a potential bug because B:= Abs(A) internally is a float division/multiplication by 10000 with rounding to Currency (int64), and depends on FPU rounding mode.
I have created QC report #107893; it has been opened.
I have just found out the hard way that the Delphi XE2 Abs function has no overload for the Currency type.
See the Delphi XE2 docwiki;
these are the only overloads of Abs:
function Abs(X: Real): Real; overload;
function Abs(X: Int64): Int64; overload;
function Abs(X: Integer): Integer; overload;
TNumberBox and TSpinEdit return values defined as type Single. I want to use these values to do simple integer arithmetic, but I can't cast them successfully to the more generalized Integer type, and Delphi gives me compile-time errors if I try to just use them as integers. This code, for example, fails with
"E2010 Incompatible types: 'Int64' and 'Extended'":
var
sMinutes: single;
T: TDatetime;
begin
sMinutes := NumberBox1.Value;
T := IncMinute(Now, sMinutes);
All I want to do here is have the user give me a number of minutes and then increment a datetime value accordingly. Nothing I've tried enables me to use that single in this way.
What am I missing??
Just truncate the value before using it:
var
Minutes: Integer;
T: TDateTime;
begin
Minutes := Trunc(NumberBox1.Value);
T := IncMinute(Now, Minutes);
end;
Depending on your particular needs, you may need to use Round instead. It will correctly round to the nearest integer value, making sure that 1.999999999999 correctly becomes integer 2; Trunc would result in 1 instead. (Thanks to Heartware for this reminder.)
var
Minutes: Integer;
T: TDateTime;
begin
Minutes := Round(NumberBox1.Value);
T := IncMinute(Now, Minutes);
end;
Trunc and Round are in the System unit.
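The difference between the two is easy to demonstrate; here it is sketched in Python, whose math.trunc and round behave like Delphi's Trunc and Round for this case:

```python
import math

value = 1.999999999999   # e.g. a not-quite-integral value from the control

print(math.trunc(value))   # truncation drops the fraction -> 1
print(round(value))        # rounding picks the nearest integer -> 2
```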
I know marking string parameters as const can make a huge performance difference, but what about ordinal types? Do I gain anything by making them const?
I've always used const parameters when handling strings, but never for Integer, Pointer, class instances, etc.
When using const I often have to create additional temporary variables, which replace the now write-protected parameters, so I'm wondering: Do I gain anything from marking ordinal parameters as const?
You need to understand the reason, to avoid "cargo-cult programming." Marking strings as const makes a performance difference because you no longer need to use an interlocked increment and decrement of the refcount on the string, an operation that actually becomes more expensive, not less, as time goes by because more cores means more work that has to be done to keep atomic operations in sync. This is safe to do since the compiler enforces the "this variable will not be changed" constraint.
For ordinals, which are usually 4 bytes or less, there's no performance gain to be had. Using const as optimization only works when you're using value types that are larger than 4 bytes, such as arrays or records, or reference-counted types such as strings and interfaces.
However, there's another important advantage: code readability. If you pass something as const and it makes no difference whatsoever to the compiler, it can still make a difference to you, since you can read the code and see that the intention of it was to have this not be modified. That can be significant if you haven't seen the code before (someone else wrote it) or if you're coming back to it after a long time and don't remember exactly what you were thinking when you originally wrote it.
You can't accidentally treat them like var parameters and have your code compile. So it makes your intentions clear.
Declaring ordinal types const makes no difference because they are copied anyway (call-by-value), so any changes to the variable do not affect the original variable.
procedure Foo (Val : Integer);
begin
Val := 2;
end;
...
SomeVar := 3;
Foo (SomeVar);
Assert (SomeVar = 3);
IMHO, declaring ordinal parameters const makes no sense and, as you say, often requires you to introduce local variables.
It depends on how complex your routine is and how it is used. If it is used in many places and the value is required to stay the same, declare it as const to make that clear and safe. For the string type, there was a bug (in Delphi 7, as I stumbled on it) that causes memory corruption when the parameter is declared as const. Below is sample code:
type
TFoo = class
private
FStr: string;
public
procedure DoFoo(const AStr: string);
begin
FStr := AStr; //the trouble code 1
......
end;
procedure DoFoo2;
begin
.....
DoFoo(FStr); //the trouble code 2
end;
end;
There's a huge speed improvement using Const with strings:
function test(k: string): string;
begin
Result := k;
end;
function test2(Const k: string): string;
begin
Result := k;
end;
function test3(Var k: string): string;
begin
Result := k;
end;
procedure TForm1.Button1Click(Sender: TObject);
Var a: Integer;
s,b: string;
x: Int64;
begin
s := 'jkdfjklf lkjj3i2ej39ijkl jkl2eje23 io3je32 e832 eu283 89389e3jio3 j938j 839 d983j9';
PerfTimerInit;
for a := 1 to 10000000 do
b := test(s);
x := PerfTimerStopMS;
Memo1.Lines.Add('default: '+x.ToString);
PerfTimerInit;
for a := 1 to 10000000 do
b := test2(s);
x := PerfTimerStopMS;
Memo1.Lines.Add('const: '+x.ToString);
PerfTimerInit;
for a := 1 to 10000000 do
b := test3(s);
x := PerfTimerStopMS;
Memo1.Lines.Add('var: '+x.ToString);
end;
default: 443 const: 320 var: 325
default: 444 const: 303 var: 310
default: 444 const: 302 var: 305
Same with Integers:
default: 142 const: 13 var: 14
Interestingly though, in 64-bit there seems to be almost no difference with strings (default mode is only a bit slower than Const):
default: 352 const: 313 var: 314