How to visualize the value of a pointer while debugging in Delphi?

So, I have a variable buffPtr: TPointer
It has a size of 16 and contains a series of numbers, mostly starting with 0, say something like 013854351387365.
I'm sure it contains values, because the application does what it does fine.
I want to see this value while I'm debugging.
If I add "PAnsiChar(buffPtr)^" to the watches I only see the first byte.

Just type in the watch expression PAnsiChar(buffPtr)^,16 (or PByte(buffPtr)^,16 if you want the ordinal/byte values).
The trick here is to append the repeat count after a comma, like ,16.
It is IMHO more convenient than changing the Watch Properties, and it also works with the IDE's Evaluate command.

I added a watch to
PAnsiChar(buffPtr)^
with the Watch Properties as
Repeat Count = 16
Decimal

Did you set the watch to dump a region of memory? For some structures that helps.
If you can recompile your application, then define this:
type
  T16Values = array[0..15] of Byte;
  P16Values = ^T16Values;
Then cast your pointer into a P16Values, and view that.
If it is a data type other than Byte, change the declaration accordingly.
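The same trick adapts to other element types; for example, if the 16 bytes actually hold eight 16-bit values, a sketch might look like this (type names are illustrative):

```pascal
type
  // View the 16-byte buffer as eight 16-bit values instead.
  T8Words = array[0..7] of Word;
  P8Words = ^T8Words;

var
  FirstValue: Word;
begin
  // Watch expression in the IDE: P8Words(buffPtr)^
  FirstValue := P8Words(buffPtr)^[0]; // first 16-bit value in the buffer
end;
```

The watch window then shows all eight values at once, formatted per element.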

Related

What is the optimal way to check if some values must be replaced or not?

I am updating some product prices in an ERP program in Delphi, and I would like to check whether new prices that are smaller or zero should be replaced by the old ones. I have written it in two ways (the first one is the existing code and the second one is an alternative implementation). Code examples are:
1st example:
if DlgBuff.AsInteger[0, FldReplSmall] = 0 then // Replace Smaller Values: No
begin
  if OldPriceW > PriceW then
    PriceW := OldPriceW;
  if OldPriceR > PriceR then
    PriceR := OldPriceR;
end
else if DlgBuff.AsInteger[0, FldReplSmall] = 2 then // Replace smaller non-zero values
begin
  if (OldPriceW > PriceW) and (PriceW = 0) then
    PriceW := OldPriceW;
  if (OldPriceR > PriceR) and (PriceR = 0) then
    PriceR := OldPriceR;
end;
2nd example:
if OldPriceW > PriceW then
  if (DlgBuff.AsInteger[0, FldReplSmall] = 0) or
     ((DlgBuff.AsInteger[0, FldReplSmall] = 2) and (PriceW = 0)) then
    PriceW := OldPriceW;
if OldPriceR > PriceR then
  if (DlgBuff.AsInteger[0, FldReplSmall] = 0) or
     ((DlgBuff.AsInteger[0, FldReplSmall] = 2) and (PriceR = 0)) then
    PriceR := OldPriceR;
What is, in your opinion, the correct way to write something like this?
Correct meaning the more efficient way.
The first example is the pre-existing code; I added the else block to handle replacing smaller non-zero values (Replace Smaller Values: No was already there).
Then I also thought about writing it in a more compact way that is perhaps harder to understand at first glance. Which do you think is the more appropriate way to write something like this?
I think the second way is better because it has fewer IF checks and less repeated code, but the first one seems clearer and more understandable to some.
Instead of comparing which number is higher yourself with a bunch of if statements, use the Max function from the System.Math unit.
The code for keeping the higher prices would then look like this:
if DlgBuff.AsInteger[0, FldReplSmall] = 0 then // Replace Smaller Values: No
begin
  PriceW := Max(OldPriceW, PriceW);
  PriceR := Max(OldPriceR, PriceR);
end;
As for replacing only zero-valued prices: why are you even checking whether the old price is larger than the new one? If the new price is zero, the old price can only be the same (it was also zero) or larger. In both cases you can simply assign the old price to the new one.
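Putting the two branches together, a compact version of this suggestion might look like the sketch below (names taken from the question; treating mode 2 as "only fill in prices that are currently zero" follows the reasoning above):

```pascal
uses
  System.Math; // for Max

case DlgBuff.AsInteger[0, FldReplSmall] of
  0: begin // Replace Smaller Values: No - always keep the higher price
       PriceW := Max(OldPriceW, PriceW);
       PriceR := Max(OldPriceR, PriceR);
     end;
  2: begin // Only replace prices that are currently zero
       if PriceW = 0 then
         PriceW := OldPriceW;
       if PriceR = 0 then
         PriceR := OldPriceR;
     end;
end;
```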

How to integrate in Swift using vDSP

I'm trying to find a replacement for SciPy's cumtrapz function in Swift. I found something called vDSP_vtrapzD, but I have no idea how to use it. This is what I've done so far:
import Accelerate
var f1: [Double] = [<some data>]
var tdata: [Double] = [<time vector>]
var output = [Double](unsafeUninitializedCapacity:Int(f1.count), initializingWith: {_, _ in})
vDSP_vtrapzD(&f1, 1, &tdata, &output, 1, vDSP_Length(f1.count))
You're close, but you're using Array.init(unsafeUninitializedCapacity:initializingWith:) incorrectly. From its documentation:
Discussion
Inside the closure, set the initializedCount parameter to the number of elements that are initialized by the closure. The memory in the range buffer[0..<initializedCount] must be initialized at the end of the closure’s execution, and the memory in the range buffer[initializedCount...] must be uninitialized. This postcondition must hold even if the initializer closure throws an error.
This API is a more unsafe (but more performant) counterpart to Array.init(repeating:count:), which allocates an array of a fixed size and spends the time to initialize all its contents. That approach has two potential drawbacks:
If the purpose of the array is to provide a buffer to write a result into, then initializing it beforehand is redundant and wasteful
If the result you put into that buffer ends up being smaller than your array, you need to remember to manually "trim" the excess off by copying it into a new array.
Array.init(unsafeUninitializedCapacity:initializingWith:) improves upon this by:
Asking you for the maximum capacity you might possibly need
Giving you a temporary buffer with that capacity
Importantly, it's uninitialized. This makes it faster, but also more dangerous (risk of reading uninitialized memory) if used incorrectly.
You then tell it exactly how much of that temporary buffer you actually used
It will automatically copy that much of the buffer into the final array, and return that as the result.
You're using Array.init(unsafeUninitializedCapacity:initializingWith:) as if it were Array.init(repeating:count:). To use it correctly, you would put your initialization logic inside the initializer parameter, like so:
let result = Array<Double>(unsafeUninitializedCapacity: f1.count, initializingWith: { resultBuffer, count in
    assert(f1.count == tdata.count)
    vDSP_vtrapzD(
        &f1,                       // Double-precision real input vector.
        1,                         // Address stride for A.
        &tdata,                    // Pointer to double-precision real input scalar: the step size.
        resultBuffer.baseAddress!, // Double-precision real output vector.
        1,                         // Address stride for C.
        vDSP_Length(f1.count)      // The number of elements to process.
    )
    count = f1.count // Tells Swift how many elements of the buffer to copy into the resulting Array.
})
FYI, there's a nice Swift version of vDSP_vtrapzD that you can see here: https://developer.apple.com/documentation/accelerate/vdsp/integration_functions. The variant that returns the result uses the unsafeUninitializedCapacity initializer.
On a related note, there's also a nice Swift API to Quadrature: https://developer.apple.com/documentation/accelerate/quadrature-smu
simon
I've been grappling with vDSP in the last week too! Alexander's solution above was helpful; however, I'd like to add a couple of things.
vDSP_vtrapzD only lets you integrate an array of values against a fixed step increment, which is a scalar value. The function does not allow you to pass a varying time value for each increment.
The example solution confused me a little because it checks that the time array is the same size as the data array. This is not necessary, as only the first value in the time vector is used by the vDSP_vtrapzD function.
It's a shame that vDSP_vtrapzD only takes a scalar as a fixed time step. In my experience this doesn't reflect reality when working with time-based data from sensors that don't emit samples at precisely regular increments.

Delphi XE2: How to use sets of integers with ordinal values > 255

All I want to do is to define a set of integers that may have values above 255, but I'm not seeing any good options. For instance:
with MyObject do Visible := Tag in [100, 155, 200..225, 240]; // Works just fine
but
with MyObject do Visible := Tag in [100, 201..212, 314, 820, 7006]; // Compiler error
I've gotten by with (often lengthy) conditional statements such as:
with MyObject do Visible := (Tag in [100, 202..212]) or (Tag = 314) or (Tag = 820) or (Tag = 7006);
but that seems ridiculous, and this is just a hard-coded example. What if I want to write a procedure and pass a set of integers whose values may be above 255? There HAS to be a better, more concise way of doing this.
The base type of a Delphi set must be an ordinal type with at most 256 distinct values. Under the hood, such a variable has one bit for each possible value, so a variable of type set of Byte has size 256 bits = 32 bytes.
Suppose it were possible to create a variable of type set of Integer. There would be 2^32 = 4294967296 distinct integer values, so this variable would need 4294967296 bits. Hence, it would be of size 512 MB. That's a HUGE variable. Maybe you can put such a value on the stack in 100 years.
Consequently, if you truly need to work with (mathematical) sets of integers, you need a custom data structure; the built-in set types won't do. For instance, you could implement it as an advanced record. Then you can even overload the in operator to make it look like a true Pascal set!
Implementing such a type in a slow and inefficient way is trivial, and that might be good enough for small sets. Implementing a general-purpose integer set data structure with efficient operations (membership test, subset tests, intersection, union, etc.) is more work. There might be third-party code available on the WWW (but StackOverflow is not the place for library recommendations).
If your needs are more modest, you can use a simple array of integers instead (TArray<Integer>). Maybe you don't need O(1) membership tests, subset tests, intersections, and unions?
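As a sketch of the advanced-record idea, here is a deliberately simple O(n) implementation with an overloaded in operator (all names are illustrative):

```pascal
type
  TIntSet = record
  private
    FValues: TArray<Integer>;
  public
    class function Create(const AValues: array of Integer): TIntSet; static;
    class operator In(AValue: Integer; const ASet: TIntSet): Boolean;
  end;

class function TIntSet.Create(const AValues: array of Integer): TIntSet;
var
  I: Integer;
begin
  SetLength(Result.FValues, Length(AValues));
  for I := 0 to High(AValues) do
    Result.FValues[I] := AValues[I];
end;

class operator TIntSet.In(AValue: Integer; const ASet: TIntSet): Boolean;
var
  V: Integer;
begin
  Result := False;
  for V in ASet.FValues do
    if V = AValue then
      Exit(True);
end;

// Usage, mirroring the question:
//   Visible := Tag in TIntSet.Create([100, 314, 820, 7006]);
```

Ranges such as 201..212 would have to be expanded into individual values or supported by a dedicated AddRange method; a sorted array or hash set would make the membership test faster for large sets.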
I would say that such a task already calls for a database. Something small and simple like TFDMemTable + TFDLocalSQL should do.

What is the fastest mean to transfer a record in DCOM

I want to transfer some records with the following structure between two Windows PCs using COM/DCOM. I would prefer to transfer an array, say 100 elements of TARec, at a time, not each record individually. Currently I am doing this using IStrings. I am looking to improve it by using the raw records, to save the time spent encoding/decoding the strings at both ends. Please share your experience.
type
  TARec = record
    A: TDateTime;
    B: WORD;
    C: Boolean;
    D: Double;
  end;
All the record's field types are OLE-compatible. Many thanks in advance.
As Rudy suggests in the comments, if your data contains simple value types then a variant byte array can be a very efficient approach and quite simple to implement.
Since you have stated that your data already resides in an array, the basic approach would be:
Create a byte array of the required size to hold all your record data (use VarArrayCreate with type varByte)
Lock the array to obtain a pointer that is safe to use to reference the array contents in memory (VarArrayLock will lock and return a pointer to the array data)
Use CopyMemory to directly copy the data from your array of records to the byte array memory.
Unlock the variant array (VarArrayUnlock) and pass it through your COM/DCOM interface
On the other ('receiving') side you simply reverse the process:
Declare an array of records of the required size
Lock the variant byte array to obtain a pointer to the memory holding the bytes
Copy the byte array data into your record array
Unlock the byte array
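The sending side of those steps might be sketched like this (the function name is illustrative; TARec is the record from the question, and error handling is omitted):

```pascal
uses
  System.Variants, Winapi.Windows; // VarArrayCreate/VarArrayLock, CopyMemory

function RecordsToVariant(const Recs: array of TARec): OleVariant;
var
  P: Pointer;
  Size: Integer;
begin
  Size := Length(Recs) * SizeOf(TARec);
  // 1. Create a byte array large enough for all the record data.
  Result := VarArrayCreate([0, Size - 1], varByte);
  // 2. Lock the array to get a stable pointer to its contents.
  P := VarArrayLock(Result);
  try
    // 3. Copy the record data straight into the variant array.
    CopyMemory(P, @Recs[0], Size);
  finally
    // 4. Unlock before passing it through the COM/DCOM interface.
    VarArrayUnlock(Result);
  end;
end;
```

The receiving side does the same in reverse: VarArrayLock, CopyMemory into a pre-sized array of TARec, VarArrayUnlock.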
This exact approach is something I have used very successfully in a very demanding COM/DCOM scenario (w.r.t efficiency/performance) in the past.
Things to be careful of:
If your data ever changes to include more complex types such as strings or dynamic arrays then additional work will be required to correctly transport these through a byte array.
If your data structure ever changes then the code on both sides of the interface will need to be updated accordingly. One way to protect against this is to incorporate some mechanism for the data to be identified as valid or not by the receiver. This could include a "version number" for example and/or a value (in a 'header' as part of the byte array, in addition to the array data, or passed as a separate parameter entirely - precise details don't really matter). If the receiver finds a version number or size that it is not expecting then it can report this gracefully rather than naively processing the data incorrectly and (most likely) crashing or throwing exceptions as a result.
Alignment/packing issues. Even with the same declaration for the record type, if code is compiled with different alignment settings then the size required for each record in memory could change (which is why a "version number" for the data structure format might not be reliable on its own). One way to avoid this would be to declare the record as packed, though this comes at the cost of a slight reduction in efficiency (and still relies on both sides of the interface agreeing that the data structure is packed).
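For instance, declaring the record packed removes the compiler-inserted padding (the exact unpacked size depends on the alignment setting in effect):

```pascal
type
  TARecPacked = packed record
    A: TDateTime; // 8 bytes
    B: WORD;      // 2 bytes
    C: Boolean;   // 1 byte
    D: Double;    // 8 bytes
  end;
// SizeOf(TARecPacked) = 19, whereas the unpacked TARec is typically
// padded (e.g. to 24 bytes with the default 8-byte alignment).
```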
These are just things to bear in mind, however, not prescriptions. How complex/robust your implementation needs to be will be determined by your specific case.

Why can't I use an Int64 in a for loop?

I can write a for..do loop with an Integer counter,
but I can't write one with an Int64 counter.
For example:
var
  i: Int64;
begin
  for i := 1 to 1000 do
end;
The compiler refuses to compile this, why does it refuse?
The Delphi compiler simply does not support Int64 loop counters yet.
Loop counters in a for loop have to be integers (or smaller).
This is an optimization to speed up the execution of a for loop.
Internally Delphi always uses an Int32, because on x86 this is the fastest datatype available.
This is documented somewhere deep in the manual, but I don't have a link handy right now.
If you must have a 64 bit loop counter, use a while..do or repeat..until loop.
Even if the compiler did allow "int64" in a Delphi 7 for-loop (Delphi 7???), it probably wouldn't complete iterating through the full range until sometime after the heat death of the Sun.
So why can't you just use an "integer"?
If you must use an int64 value ... then simply use a "while" loop instead.
Problem solved :)
Why use an Int64 in a for loop?
Easy to answer:
You do not need a lot of iterations to need an Int64; just do a loop from 5E9 to 5E9+2 (three iterations in total).
It is simply that the values iterated over are bigger than what an Int32 can hold.
An example:
procedure Why_Int64_Would_Be_Great_On_For_Loop;
const
  StartValue = 5000000000; // Start from 5E9, five thousand million
  Quantity = 10;           // Do it ten times
var
  Index: Int64;
begin
  for Index := StartValue to StartValue + Quantity - 1 do
  begin
    // Do something really fast (only ten times)
  end;
end;
That code would take no time at all; it is just that the index values need to be beyond the 32-bit integer limit.
The solution is to do it with a while loop:
procedure Equivalent_For_Loop_With_Int64_Index;
const
  StartValue = 5000000000; // Start from 5E9, five thousand million
  Quantity = 10;           // Do it ten times
var
  Index: Int64;
begin
  Index := StartValue;
  while Index <= StartValue + Quantity - 1 do
  begin
    // Do something really fast (only ten times)
    Inc(Index);
  end;
end;
So why does the compiler refuse to compile the for loop? I see no real reason: any for loop can be translated automatically into a while loop, and the compiler could do that transformation before optimization, just as it does others. If for is optimized in a way that can only use a 32-bit index, then a loop with a 64-bit index simply cannot get that optimization; so why not let the compiler apply the translation for us? As it stands it only gives a bad impression to programmers.
I do not want to make anyone angry; I am only saying something obvious.
By the way, not everyone starts a for loop at zero (or one); sometimes you need to start it at really huge values.
And it is always said that if you need to do something a fixed number of times, you had best use a for loop instead of a while loop...
I can also say this: the for-loop version and the while-loop version that uses Inc(Index) are equally fast, but if you write the while-loop step as Index := Index + 1 you would expect it to be slower; it really is not, because the optimizer sees that and uses Inc(Index) instead. You can check this by doing the following:
// Start the loop from zero, but first do some arithmetic so the optimizer
// cannot convert Index := Index + Step into Inc(Index, Step) or Inc(Index).
Index := 2;
Step := Index - 1;  // Do not write Step := 1, or the optimizer will convert to Inc()
Index := Step - 1;  // Now fix the start, so the loop will run from zero
while Index < 1000000 do // 1E6: one million iterations, from 0 to 999999
begin
  // Do something
  Index := Index + Step; // Not changed into Inc(Index): the optimizer sees Step assigned above
end;
The optimizer can see that a variable never changes its value, so it can treat it as a constant; then, in the increment assignment, adding a constant (variable := variable + constant) is optimized to Inc(variable, constant), and when it sees that the constant is 1 it optimizes further to Inc(variable). At the machine-code level such optimizations are very noticeable:
A plain add (variable := variable1 + variable2) implies two memory reads plus one add plus one memory write: a lot of work.
If it has the form variable := variable + othervariable, the variable can be kept in the processor cache.
If it has the form variable := variable1 + constant, the constant can likewise be held in the processor cache.
And if it is variable := variable + constant, both are cached, which is hugely faster than the other options: no access to RAM is needed.
The compiler performs another important optimization along the same lines: for-loop index variables are kept in processor registers, which are much faster than the processor cache.
Many processors do an extra optimization at the hardware level as well: memory locations (32-bit variables, from our point of view) that are seen to be used intensively are held in special registers to speed up access, and for-loop/while-loop indexes are prime candidates. As far as I know, modern AMD processors (the ones that use MP technology) do this, but I don't yet know of an Intel processor that does. Such optimization is more relevant for multi-core and supercomputing work, so maybe that is the reason AMD has it and Intel does not.
I only wanted to show one "why"; there are many more. Another could be as simple as the index being stored in a database Int64 field, etc. There are plenty of reasons I know of and surely more I don't know yet.
I hope this helps to understand the need to loop over an Int64 index, and also how to do it without losing speed by converting the for loop into a while loop correctly and efficiently.
Note: for x86 compilation (not 64-bit compilation), beware that an Int64 is managed internally as two Int32 parts, and modifying values takes extra code: for adds and subtracts the overhead is very low, but for multiplies and divides it is noticeable. If you really need Int64 you need it, so what else can you do... and imagine if you needed Float or Double, etc.!
