Convert a set to an integer - Delphi

As a newbie in Delphi I have run into a problem with an external API.
This external API expects a parameter that can carry one or two values, i.e. a bitwise flags parameter.
In Delphi this is done with a set.
The basis is an enumeration:
TCreateImageTask = (
citCreate = 1,
citVerify
);
I have put this into a set type:
TCreateImageTasks = set of TCreateImageTask;
In a function I fill this set with:
function TfrmMain.GetImageTask: TCreateImageTasks;
begin
Result:=[];
if chkCreate.checked then Include(Result, citCreate);
if chkVerify.checked then Include(Result, citVerify);
end;
Now I have to pass these tasks to an external DLL written in C++.
The DLL expects an __int8 value, which may contain one or two TCreateImageTask flags. In C++ this is done by:
__int8 dwOperation = 0;
if (this->IsDlgButtonChecked(IDC_CHECK_CREATE))
{
dwOperation = BS_IMGTASK_CREATE;
}
if (this->IsDlgButtonChecked(IDC_CHECK_VERIFY))
{
dwOperation |= BS_IMGTASK_VERIFY;
}
int32 res = ::CreateImage(cCreateImageParams, dwOperation);
So I have to convert my set to an integer. I do this with:
function TfrmMain.SetToInt(const aSet; const Size: Integer): Integer;
begin
Result := 0;
// Copy the raw bytes of the set into the low-order bytes of the result.
Move(aSet, Result, Size);
end;
I call it with:
currentTask := GetImageTask;
myvar := SetToInt(currentTask, SizeOf(currentTask));
The problem I have now is that myvar is 6 when both values are in the set, 2 if only citCreate is in the set and 4 if only citVerify is in the set. That does not look right to me, and the external DLL does not know these values.
Where is my mistake?

I guess it works when you remove the = 1 in the declaration of TCreateImageTask.
The = 1 shifts the ordinal values by 1, giving the results you see, but it is probably not what is needed. To be sure we need to know the values of BS_IMGTASK_CREATE and BS_IMGTASK_VERIFY.
My psychic powers tell me that BS_IMGTASK_CREATE = 1 and BS_IMGTASK_VERIFY = 2. Given that these are bit masks they correspond to the values 2^0 and 2^1. This matches the ordinal values 0 and 1.
Thus you should declare
TCreateImageTask = (citCreate, citVerify);
to map citCreate to 0 and citVerify to 1.
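With that declaration citCreate occupies bit 0 and citVerify bit 1 in the set's storage, which matches the guessed masks. A minimal sketch of what the conversion then yields (assuming BS_IMGTASK_CREATE = 1 and BS_IMGTASK_VERIFY = 2, as guessed above):
var
  Tasks: TCreateImageTasks;
begin
  Tasks := [citCreate, citVerify];
  // SetToInt(Tasks, SizeOf(Tasks)) now returns 3:
  // [citCreate] -> 1 (bit 0), [citVerify] -> 2 (bit 1), [citCreate, citVerify] -> 3
end;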

It's all about something called bitwise operations!
Converting a SET to a LONGWORD is widely used in Delphi implementations of the Windows API.
This would be what you are looking for:
How to save/load Set of Types?
This was already answered here too:
Bitwise flags in Delphi
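As a sketch of that technique for this case (assuming the set fits in a single byte, which it does here because the enumeration has only two members), the conversion can also be done with a plain typecast instead of Move; TasksToByte is just an illustrative name:
function TasksToByte(const Tasks: TCreateImageTasks): Byte;
begin
  // The set occupies one byte, with element ordinals mapped to bit positions,
  // so a direct cast yields the bit mask the DLL expects.
  Result := Byte(Tasks);
end;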

Related

How to clear multidimensional array of Word in Delphi?

I am using Delphi 7 and I am trying to clear (i.e. fill with zeros) an array with this declaration:
const CONST_MAX_INDEX_AllColumnsSeparators = 3658;
type TAllColumnsSeparators = array[0..CONST_MAX_INDEX_AllColumnsSeparators] of Word;
type ColSeparators = Array[0..11] of TAllColumnsSeparators; // count of columns
var All_separators: ColSeparators;
According to this page, there should be a function System.Array.Clear, but it seems that this is for a newer IDE than Delphi 7.
System.Array.Clear(All_separators, 0, CONST_MAX_INDEX_AllColumnsSeparators+1);
I get the error undefined identifier Array.
I thought I could do this in a loop, resetting the elements manually, but I guess that would perform poorly.
Note: I have 63 files processed in a loop, and this array needs to be reset in every cycle, so this command will be nested in a loop. So I guess there should be some smart way to do it quickly and easily without another control loop.
Notice that, given your
const
CONST_MAX_INDEX_AllColumnsSeparators = 3658;
type
TAllColumnsSeparators = array[0..CONST_MAX_INDEX_AllColumnsSeparators] of Word;
TColSeparators = array[0..11] of TAllColumnsSeparators;
both types are static arrays, which are value types. Consequently, the outermost array type is also a value type containing nothing but a sequence of Words. Hence, to fill such a variable with zeros (zero Words), you only need to fill its memory with zeros (zero bytes):
var
AllSeparators: TColSeparators;
begin
FillChar(AllSeparators, SizeOf(AllSeparators), 0);
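Since the array has to be reset for each of the 63 files, a minimal sketch of how this fits into the processing loop (ProcessFile is only an illustrative placeholder for the per-file work):
var
  AllSeparators: TColSeparators;
  FileIndex: Integer;
begin
  for FileIndex := 1 to 63 do
  begin
    // Zero the whole static array in one call; no element-by-element loop is needed.
    FillChar(AllSeparators, SizeOf(AllSeparators), 0);
    ProcessFile(FileIndex, AllSeparators); // illustrative placeholder
  end;
end;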

What is the correct constant to use when comparing with the Minimal Single Number in Delphi?

In a loop like this:
cur := -999999; // meant to represent the minimal possible value a Single can hold
while ... do
begin
if some_value > cur then
cur := some_value;
end;
There are MaxSingle/NegInfinity defined in System.Math:
MaxSingle = 340282346638528859811704183484516925440.0;
NegInfinity = -1.0 / 0.0;
So should I use -MaxSingle or NegInfinity in this case?
I assume you are trying to find the largest value in a list.
If your values are in an array, just use the library function MaxValue(). (If you look at the implementation of MaxValue, you'll see that it takes the first value in the array as the starting point.)
If you must implement it yourself, use -MaxSingle as the starting value, which is approximately -3.40e38. This is the most negative value that can be represented in a Single.
Special values like Infinity and NaN have special rules in comparisons, so I would avoid these unless you are sure about what those rules are. (See also How do arbitrary floating point values compare to infinity?. In fact, it seems NegInfinity would work OK.)
It might help to understand the range of values that can be represented by a Single. In order, most negative to most positive, they are:
NegInfinity
-MaxSingle .. -MinSingle
0
MinSingle .. MaxSingle
Infinity
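A minimal sketch of the manual approach described above (LargestValue is just an illustrative name; MaxSingle comes from the Math unit):
function LargestValue(const Values: array of Single): Single;
var
  I: Integer;
begin
  // Seed with -MaxSingle, the most negative finite Single, so that any
  // element of Values compares greater than or equal to the starting value.
  Result := -MaxSingle;
  for I := Low(Values) to High(Values) do
    if Values[I] > Result then
      Result := Values[I];
end;
If the values are already in an array, Math.MaxValue does the same scan for you.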

Should parameters be used as variables in Lua?

I've been told that in Java I should avoid modifying the original parameters, such as:
public int doStuff(int begin, int end) {
/* loop or something */
begin++; //bad
end--; //also bad
/* end loop */
return
}
instead, I should do something like
public int doStuff(int begin, int end) {
int myBegin = begin; //something like this
int myEnd = end;
/* stuff */
return
}
So, I've been doing this in Lua:
function do_stuff(begin, last)
local my_begin = begin
local my_last = last
--stuff
my_begin = my_begin + 1
my_last = my_last - 1
--stuff
end
But, I'm wondering if
function do_stuff(begin, last)
--stuff
begin = begin + 1
last = last - 1
--stuff
end
is also discouraged, or is it nice and concise?
There are no rules. Let taste, clarity, and need decide.
Nevertheless, a common idiom is to provide default values for parameters, as in
function log(x,b)
b = b or 10
...
end
If you were told not to modify the parameters of functions, then there was probably a reasoning associated with that. Whatever that reasoning is would apply as much to Lua as to Java, since they have similar function argument semantics. Those reasons could be one or more of (but not limited to):
If you modify a parameter... you don't have it anymore. If you suddenly have a need for the original value you were passed, it's gone now.
Creating confusion, depending on how the parameters are named. The word "begin" suggests the beginning of something. If you change it, it isn't necessarily the beginning anymore, but merely the current element you're operating on.
Creating potential errors, if dealing with reference types (non-basic types in Java, tables and such in Lua). When you modify an object, you're changing it for everyone. Whereas incrementing an integer is just changing your local value. So if you're frequently modifying parameters, you still need to think about which ones you ought to be poking at and which ones you shouldn't be.
To put it another way, if you agreed with the suggestion for doing so in Java, then it applies just as much to Lua. If you didn't agree with the suggestion in Java, then you have no more reason to follow it under Lua.
In Lua, functions, threads, tables and userdata are passed by reference. So unless you are working with one of those, you have a local copy anyway.
So in your example:
function do_stuff(begin, last)
--stuff
begin = begin + 1
last = last - 1
--stuff
end
begin and last are local non-reference variables in do_stuff's scope.
The only reason to make a copy of them is that you might want to store their initial values for later use. For that purpose you can either create a backup copy of the initial value or create a working copy of it, whichever you prefer.
Just make sure you know what is passed by reference and what by value, so that you avoid changing things you don't want to change, and vice versa.

How to correctly set NumberFormat property when automating different localized versions of Excel

I've run into the following problem:
When automating Excel via OLE from my Delphi program and trying to set a cell's NumberFormat property, Excel is expecting the format string in a localized format.
Normally, when checking the formatting by recording a macro in Excel, it looks like this:
Cells(1, 2).NumberFormat = "#,##0.00"
That means the thousands separator is "," and the decimal separator is ".".
In reality, I'm using a localized version of Excel. In my locale, the thousands separator is " " and the decimal separator is ",".
So whenever I set the NumberFormat from my Delphi program, I need to specify it like "# ##0,00".
My question is: Obviously, if I hardcode these values in my program there is going to be an exception when my program is used with an English or another differently localized version of Excel. Is there a "universal" way to set the NumberFormat property? (using the default English locale?)
Thanks!
Update: I've found a more elegant way to do it on this page:
http://www.delphikingdom.com/asp/viewitem.asp?catalogid=920&mode=print
It's in Russian (which I don't speak either), but you can easily understand the code.
In Excel you have two properties:
NumberFormat
NumberFormatLocal
NumberFormat always takes the format in a locale-invariant way, using the American standard, while NumberFormatLocal expects the format in the currently set locale.
For example
Sub test()
Dim r As Range
Set r = ActiveWorkbook.ActiveSheet.Range("$A$1")
r.NumberFormat = "#,##0.00"
Set r = ActiveWorkbook.ActiveSheet.Range("$A$2")
r.NumberFormat = "#.##0,00"
Set r = ActiveWorkbook.ActiveSheet.Range("$A$3")
r.NumberFormatLocal = "#,##0.00"
Set r = ActiveWorkbook.ActiveSheet.Range("$A$4")
r.NumberFormatLocal = "#.##0,00"
End Sub
With German settings (decimal separator: , and thousands separator: .) this gives you correctly formatted numbers for $A$1 and $A$4. You can test it by changing your regional settings in Windows to anything you like and checking whether your formatting still works.
Assuming you use Delphi 5 and have code to start Excel like this (and have access to ComObj.pas):
var
oXL, oWB, oSheet : Variant;
LocaleId : Integer;
begin
oXL := CreateOleObject('Excel.Application');
oXL.Visible := True;
oWB := oXL.Workbooks.Add;
oSheet := oWB.ActiveSheet;
oSheet.Range['$A$1'].NumberFormatLocal := '#.##0,00';
oSheet.Range['$A$2'].NumberFormatLocal := '#,##0.00';
LocaleID := DispCallLocaleID($0409); // $0409 is the LCID for English (United States)
try
oSheet.Range['$A$3'].NumberFormat := '#.##0,00';
oSheet.Range['$A$4'].NumberFormat := '#,##0.00';
finally
DispCallLocaleID(LocaleId);
end;
end;
then by default every call goes through ComObj.VarDispInvoke, which calls ComObj.DispatchInvoke. There you find the call to Dispatch.Invoke, which gets the LCID as its third parameter; this is set to 0. You can use the technique shown in the first link in the comments to create your own unit and copy all the code from ComObj into it (or modify ComObj directly). Just don't forget to set the VarDispProc variable in the initialization of the unit. The last part seems not to work in all cases (it probably depends on the order of the modules), but you can set the variable in your own code:
VarDispProc := #VarDispInvoke;
where you must place VarDispInvoke into the interface section of your ComObj copy module.
The code of the first link does not work directly as it modifies a different method which is not called in the above Delphi sample.
It is enough to change the locale just for the NumberFormat call (to avoid side effects).
The above example, together with the described modifications, works correctly with my German Excel. Without the modification, or without the call to DispCallLocaleID, I see the same problem as you describe.
To avoid differences on other systems, you can let Excel manage this itself:
.....NumberFormat := '#' + Excel.ThousandsSeparator + '##0' + Excel.DecimalSeparator + '00';
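A hedged Delphi sketch of that idea (assuming the Excel Application object exposes ThousandsSeparator and DecimalSeparator, that ComObj is in the uses clause for CreateOleObject, and assigning the resulting locale-specific string to NumberFormatLocal rather than the invariant NumberFormat):
var
  oXL, oSheet: Variant;
  Fmt: string;
begin
  oXL := CreateOleObject('Excel.Application');
  oXL.Visible := True;
  oSheet := oXL.Workbooks.Add.ActiveSheet;
  // Build the format string from Excel's own separators so it matches the
  // locale Excel is actually running with.
  Fmt := '#' + oXL.ThousandsSeparator + '##0' + oXL.DecimalSeparator + '00';
  oSheet.Range['$A$1'].NumberFormatLocal := Fmt;
end;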
You can also set a property value directly:
SetDispatchPropValue(oSheet, 'Range[''$A$1''].NumberFormatLocal', $0409);

ActionScript3 sign-extension with indexed access to ByteArray

In the following code:
var bytes:ByteArray = new ByteArray();
var i:int = -1;
var j:int;
bytes[0] = i; // bytes[0] = -1 == 0xff
j = bytes[0]; // j == 255;
The int j ends up with the value 255, rather than -1. I can't seem to find a document defining how indexed access to a ByteArray is supposed to be sign-extended. Can I reliably assume this behavior, or should I take steps to truncate such values to 8-bit quantities? I'm porting a bunch of code from Java and would prefer to use indexed access rather than the readByte() etc. methods.
The IDataInput interface (implemented by ByteArray) says:
Sign extension matters only when you read data, not when you write it. Therefore you do not need separate write methods to work with IDataInput.readUnsignedByte() and IDataInput.readUnsignedShort().
The same would naturally apply to [] array access, so you wouldn't need to truncate before writing.
I can't see any explicit documentation of that, and nothing that states that array read access is unsigned. If you wanted to ensure that read access gave you an unsigned value back you could say:
j = j << 24 >>> 24;
and similarly with >> for signed. However, with ActionScript being a single implementation rather than a general standard, you probably don't have to worry about it.
