What is the meaning of the internet connection statuses?
I can't figure out which status represents a router. Is it number 3?
And what does 4 mean?
uses
  WinInet;

const
  MODEM = 1;
  LAN = 2;
  PROXY = 4;
  BUSY = 8;

function GetConnectionKind(var strKind: string): Boolean;
var
  flags: DWORD;
begin
  strKind := '';
  Result := InternetGetConnectedState(@flags, 0);
  if Result then
  begin
    if (flags and MODEM) = MODEM then strKind := 'Modem';
    if (flags and LAN) = LAN then strKind := 'LAN';
    if (flags and PROXY) = PROXY then strKind := 'Proxy';
    if (flags and BUSY) = BUSY then strKind := 'Modem Busy';
  end;
end;

procedure TForm1.Button1Click(Sender: TObject);
var
  strKind: string;
begin
  if GetConnectionKind(strKind) then
    ShowMessage(strKind);
end;
procedure TForm1.Button1Click(Sender: TObject);
var
strKind: string;
begin
if GetConnectionKind(strKind) then
ShowMessage(strKind);
end;

[InternetGetConnectedState](http://msdn.microsoft.com/en-us/library/aa384702(VS.85).aspx) returns a bitmask in the first parameter that looks like this:
76543210 <-- bit numbers
 || ||||
 || |||+- INTERNET_CONNECTION_MODEM
 || ||+-- INTERNET_CONNECTION_LAN
 || |+--- INTERNET_CONNECTION_PROXY
 || +---- INTERNET_CONNECTION_MODEM_BUSY (No longer used)
 |+------ INTERNET_CONNECTION_OFFLINE
 +------- INTERNET_CONNECTION_CONFIGURED
If a given bit is set, the connection is of that type. So if bit nr. 2 is set, you're connected through a proxy.
Additionally, the function returns a TRUE/FALSE value, indicating whether you are connected to the internet.
The values you have in your code, 1, 2, 4, 8, correspond to the decimal values of those bits, counting from the right.
Basically the code inspects each bit in turn, and sets the strKind variable to a text indicating the nature of the connection.
You're asking "which is router? 3?", and I assume you mean by that "how do I figure out that my connection is through a router?". I would assume this would be the same as the LAN connection; presumably the LAN has a bridge somewhere to access the internet through.

The codes 1, 2, 4, 8 represent bitmasks. I generally prefer to write bitmasks in hex to avoid any confusion; it's fairly easy to remember since the pattern repeats in nibbles (groups of 4 binary bits).
HEX  BINARY    DEC
$01  00000001    1
$02  00000010    2
$04  00000100    4
$08  00001000    8
$10  00010000   16
$20  00100000   32
$40  01000000   64
$80  10000000  128
If you ever want to check two values at once, you can OR them together; for example, $01 or $02 = $03 (binary 00000011). So a 3 would be BOTH a modem AND a LAN connection.
A common practice to see whether something is set is to AND it with the mask. For example, if my number is 3 and I AND it with $02, the result is $02, since the corresponding bit is set in both the mask and the value. If my number is 4 and I AND it with $02, the result is $00, since no bit is set in both the mask and the value.
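A tiny console sketch of those two operations (a hypothetical demo, not part of the question's code):
program BitMaskDemo; // hypothetical demo program
{$APPTYPE CONSOLE}
var
  flags: Cardinal;
begin
  flags := $01 or $02;             // MODEM or LAN = $03
  Writeln((flags and $02) = $02);  // TRUE: the LAN bit is set
  Writeln((flags and $04) = $04);  // FALSE: the PROXY bit is clear
end.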
Of course this doesn't answer what I think your real question is. The router would be impossible to determine just by checking this mask. The mask only tells you whether you're connected via a modem (aka dialup) or a network adapter. The router would be beyond the network adapter, and would require further analysis of the network to detect.

The constant values are flags, which means two things: (1) you cannot have a "3" value and (2) you can have more than one value in the "flags" result. For example, for result 9 (1001 in binary) the first and last checks would both be true.
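If you want strKind to reflect every set flag rather than only the last match, a variation of the question's function could accumulate the descriptions. A sketch using the same constants (the nested Append helper is hypothetical):
function GetConnectionKinds(var strKind: string): Boolean;
var
  flags: DWORD;
  procedure Append(const s: string);
  begin
    if strKind <> '' then
      strKind := strKind + ', ';
    strKind := strKind + s;
  end;
begin
  strKind := '';
  Result := InternetGetConnectedState(@flags, 0);
  if Result then
  begin
    if (flags and MODEM) <> 0 then Append('Modem');
    if (flags and LAN) <> 0 then Append('LAN');
    if (flags and PROXY) <> 0 then Append('Proxy');
    if (flags and BUSY) <> 0 then Append('Modem Busy');
  end;
end;
For the result 9 mentioned above, strKind would become 'Modem, Modem Busy'.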
For more info on the result meaning check the MSDN reference for InternetGetConnectedState.

Related

SizeOf set of enums, 32 bit vs 64 bit and memory alignment

Given an enum TEnum with 33 items (Ceil (33 / 8) = 5 bytes), and a TEnumSet = Set of TEnum, the SizeOf (TEnumSet) gives a different result when running in 32 vs. 64-bit Windows:
32 bit: 5 bytes as per the calculation above
64 bit: 8 bytes
When increasing the number of elements in the enum, the size grows to, say, 6 bytes in 32-bit, while in 64-bit it remains 8 bytes. It is as if the memory alignment in 64-bit rounds the size up to the nearest multiple of something (not 8, since smaller enums do yield set sizes of 2 or 4, and a power of 2 is most likely not the case either).
In any case: this is causing a problem while reading a file to a packed record written as a buffer from a 32 bit program. Trying to read the same file back into a 64 bit program, since the packed record sizes don't match (the record contains this mismatching set, among other things), reading fails.
I tried looking in the compiler options for some options related to memory alignment: there is an option for record memory alignment but it does not impact sets, and is already the same in both configurations.
Any explanation on why the set is taking more memory in 64-bit, and any potential solutions to be able to read the file into my packed record on a 64-bit platform?
Note that I have no control over the writing of the file: it is written using a 32-bit program to which I don't have access (so altering the writing is not an option).
Here is my test program:
{$APPTYPE CONSOLE}
type
  TEnumSet16 = set of 0..16-1;
  TEnumSet17 = set of 0..17-1;
  TEnumSet24 = set of 0..24-1;
  TEnumSet25 = set of 0..25-1;
  TEnumSet32 = set of 0..32-1;
  TEnumSet33 = set of 0..33-1;
  TEnumSet64 = set of 0..64-1;
  TEnumSet65 = set of 0..65-1;
begin
  Writeln(16, ':', SizeOf(TEnumSet16));
  Writeln(17, ':', SizeOf(TEnumSet17));
  Writeln(24, ':', SizeOf(TEnumSet24));
  Writeln(25, ':', SizeOf(TEnumSet25));
  Writeln(32, ':', SizeOf(TEnumSet32));
  Writeln(33, ':', SizeOf(TEnumSet33));
  Writeln(64, ':', SizeOf(TEnumSet64));
  Writeln(65, ':', SizeOf(TEnumSet65));
end.
And the output (I am using XE7 but I expect that it is the same in all versions):
32 bit    64 bit
16:2      16:2
17:4      17:4
24:4      24:4
25:4      25:4
32:4      32:4
33:5      33:8
64:8      64:8
65:9      65:9
Leaving aside the 32 vs 64 bit difference, notice that although the 17 and 24 bit cases could theoretically fit in a 3 byte type, they are stored in a 4 byte type.
Why does the compiler choose to use a 4 byte type rather than a 3 byte type? It can only be that this allows for more efficient code. Operating on data that can be mapped directly onto CPU registers is more efficient than picking at the data byte by byte, or in this case by accessing two bytes in one operation, and then the third byte in another.
This then points to why anything between 33 and 64 bits is mapped to an 8 byte type under the 64 bit compiler. The 64 bit compiler has 64 bit registers, and the 32 bit compiler does not.
As for how to solve your problem, I can see two main approaches:
In your 64 bit program, read and write the record field by field. For the fields afflicted by this 32 vs 64 bit issue, you will have to introduce special code to read and write just the first 5 bytes of the field.
Change your record definition to replace the set with array [0..4] of Byte, and then introduce a property that maps the set type onto that 5 byte array (see the sketch below).
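A minimal sketch of the second approach (the names TEnum, TEnumSet and TFileRec are hypothetical stand-ins for your actual types):
type
  TEnum = 0..32;                      // stand-in for your 33-element enum
  TEnumSet = set of TEnum;            // 5 bytes in 32 bit, 8 bytes in 64 bit
  TFileRec = packed record
  private
    FSetBytes: array[0..4] of Byte;   // always 5 bytes, on both platforms
    function GetEnumSet: TEnumSet;
    procedure SetEnumSet(const Value: TEnumSet);
  public
    property EnumSet: TEnumSet read GetEnumSet write SetEnumSet;
  end;

function TFileRec.GetEnumSet: TEnumSet;
begin
  Result := [];                                // clear all bytes of the set
  Move(FSetBytes, Result, SizeOf(FSetBytes));  // copy the 5 stored bytes
end;

procedure TFileRec.SetEnumSet(const Value: TEnumSet);
begin
  Move(Value, FSetBytes, SizeOf(FSetBytes));   // store only the first 5 bytes
end;
This works because a Delphi set is laid out starting at bit 0 of its first byte, so the first 5 bytes hold the same information on both platforms.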
Relying on the memory size of a set leads to errors sooner or later. This becomes particularly clear when working with subrange types.
program Project1;
{$APPTYPE CONSOLE}
{$R *.res}
uses
  System.SysUtils;
type
  TBoolSet = set of boolean;
  TByteSet = set of byte;
  TSubEnum1 = 5..10;
  TSubSet1 = set of TSubEnum1;
  TSubEnum2 = 205..210;
  TSubSet2 = set of TSubEnum2;
var
  i, j: integer;
  a, a1: TByteSet;
  b, b1: TSubSet1;
begin
  try
    writeln('SizeOf(TBoolSet): ', SizeOf(TBoolSet)); // 1
    writeln('SizeOf(TByteSet): ', SizeOf(TByteSet)); // 32
    writeln('SizeOf(TSubSet1): ', SizeOf(TSubSet1)); // 2
    writeln('SizeOf(TSubSet2): ', SizeOf(TSubSet2)); // 2
    // Assignments are allowed.
    a := [6, 9];
    b := [6, 9];
    writeln('a = b ?: ', BoolToStr(a = b, true)); // true
    a1 := a + b; // OK
    b1 := a + b; // OK
    a := [7, 200];
    b1 := a + b; // ??? no exception, value 200 was lost!
    i := 0;
    for j in b1 do
      i := succ(i);
    writeln('b1 Count: ', i);
    readln(i);
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
end.

Delphi - join 2 integers in an Int64

I'm working with Delphi and assembly, and I have a problem. I used an assembly instruction (RDTSC) to get a 64-bit read time-stamp; the instruction puts the two halves separately into the EAX and EDX registers. That's OK, I get them into Delphi Integer variables. But now I need to join those variables into one 64-bit value. It's like:
Var1 = 46523
Var2 = 1236
So I need to put them into one variable, like:
Var3 = 465231236
It's like a StrCat, but I don't know how to do it. Can somebody help me?
You certainly don't want to concatenate the decimal string representations of the two values. That is not the way you are expected to combine the two 32 bit values returned from RDTSC into a 64 bit value.
Combining 46523 and 1236 should not yield 465231236. That is the wrong answer. Instead, you want to take the high order 32 bits, and put them alongside the low order 32 bits.
You are combining $0000B5BB and $000004D4. The correct answer is either $0000B5BB000004D4 or $000004D40000B5BB, depending on which of the two values is the high and which is the low order part.
Implement this in code, for instance, using Int64Rec:
var
  Value: UInt64;
...
Int64Rec(Value).Lo := Lo; // Int64Rec is declared in SysUtils
Int64Rec(Value).Hi := Hi;
where Lo and Hi are the low and high 32 bit values returned by RDTSC.
So, bits 0 to 31 are set to the value of Lo, and bits 32 to 63 are set to the value of Hi.
Or it can be written using bitwise operations:
Value := (UInt64(Hi) shl 32) or UInt64(Lo);
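With the numbers from the question, a quick check (a hypothetical console fragment):
var
  Lo, Hi: Cardinal;
  Value: UInt64;
begin
  Lo := 46523; // $B5BB
  Hi := 1236;  // $4D4
  Value := (UInt64(Hi) shl 32) or UInt64(Lo);
  Writeln(Value); // 5308579624379, i.e. $000004D40000B5BB
end;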
If all you need to do is read the time stamp counter, then you don't need to do any of this though. You can implement the function like this:
function TimeStampCounter: UInt64;
asm
  RDTSC
end;
The register calling convention requires that a 64 bit return value be passed back to the caller in EDX:EAX. Since RDTSC places the values in exactly those registers (not a coincidence, by the way), you have nothing more to do.
All of this said, rather than using the time stamp counter, it is usually preferable to use the performance counter, which is wrapped by TStopWatch from System.Diagnostics.
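For completeness, a typical TStopwatch usage might look like this (the unit is System.Diagnostics in XE2 and later, Diagnostics before that):
uses
  System.Diagnostics;
var
  SW: TStopwatch;
begin
  SW := TStopwatch.StartNew;
  // ... the code being timed ...
  SW.Stop;
  Writeln('Elapsed: ', SW.ElapsedMilliseconds, ' ms');
end;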
The simple way is to use a record
type
  TMyTimestamp = record
    case Boolean of
      True:
        ( Value: Int64 );
      False:
        ( Value1: Integer; Value2: Integer );
  end;
and you can store/read each value as you like
var
  ts: TMyTimestamp;
begin
  ts.Value1 := 46523;
  ts.Value2 := 1236;
  WriteLn( ts.Value );  // -> 5308579624379
  ts.Value := 5308579624379;
  WriteLn( ts.Value1 ); // -> 46523
  WriteLn( ts.Value2 ); // -> 1236
end;
see: Docwiki: Variant Parts in Records

Converting an integer number into a hexadecimal number in Delphi 7

Write a program to convert an integer number to its hexadecimal representation without using inbuilt functions.
Here is my code, but it is not working. Can anyone tell me where the mistake is?
It is giving an error:
"Project raised exception class EAccessViolation with message 'Access violation at address 00453B7B in module 'Project.exe'.Write of address FFFFFFFF'.Process stopped.Use Step or Run to continue."
unit Unit1;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs;

type
  TForm1 = class(TForm)
  end;

function hexvalue(num: Integer): Char;

var
  Form1: TForm1;

implementation

{$R *.dfm}

function hexvalue(num: Integer): Char;
begin
  case num of
    10: Result := 'A';
    11: Result := 'B';
    12: Result := 'C';
    13: Result := 'D';
    14: Result := 'E';
    15: Result := 'F';
  else Result := Chr(num);
  end;
end;

var
  intnumber, hexnumber, actualhex: String;
  integernum: Integer;
  i, j, k: Byte;

begin
  InputQuery('Integer Number', 'Enter the integer number', intnumber);
  integernum := StrToInt(intnumber);
  i := 0;
  while integernum >= 16 do
  begin
    hexnumber[i] := hexvalue(integernum mod 16);
    integernum := integernum div 16;
    Inc(i);
  end;
  hexnumber[i] := hexvalue(integernum);
  k := i;
  for j := 0 to k do
  begin
    actualhex[j] := hexnumber[i];
    Dec(i);
  end;
  ShowMessage(actualhex);
end.
Since this obviously is a homework assignment, I don't want to spoil it for you and write the solution, but rather attempt to guide you to the solution.
User input
In real code you would need to be prepared for any mistake from the user and check that the input really is integer numbers only and politely ask the user to correct the input if erroneous.
Conversion loop
You have got that OK, using mod 16 for each nibble of integernum and div 16 to move to the next nibble, going from units towards higher order values.
Conversion of nibble to hex character
Here you go wrong. If you had also written out the cases for 0..9, you might have got the case statement right. As others have commented, Chr() takes an ASCII code. However, using a case statement for such a simple conversion is tedious to write and not very efficient.
What if you had a lookup table (array) where the index (0..15) directly gives you the corresponding hex character? That would be much simpler. Something like
const
HexChars: array[_.._] of Char = ('0',_____'F')
I leave it to you to fill in the missing parts.
Forming the result (hex string)
Your second major mistake, and the reason for the AV, is that you did not set the length of the string hexnumber before attempting to access the character positions. Another design flaw is that you fill in hexnumber backwards. As a result you then need an extra loop to reverse the order to the correct one.
There are at least two solutions to solve both problems:
Since you take 32 bit integer type input, the hex representation is not more than 8 characters. Thus you can preset the length of the string to 8 and fill it in from the lower order position using 8 - i as index. As a final step you can trim the string if you like.
Don't preset the length and just concatenate as you go in the loop hexnumber := HexChars[integernum mod 16] + hexnumber;.
Negative values
You did not in any way consider the possibility of negative values in your code, so I assume it wasn't part of the task.
First mistake: strings are 1-indexed, meaning that the index of their first character is 1, not 0. You initialize "i" to 0 and then try to set hexnumber[i].
Second mistake: strings might be dynamic, but they don't grow automatically. If you try to access the first character of an empty string, it won't work. You need to call SetLength(hexnumber, NumberOfDigits). You can calculate the number of digits this way:
NumberOfDigits := Trunc(Log16(integernum)) + 1;
Since Log16 isn't really something that exists, you can either use LogN(16,integernum) or (Log(IntegerNum) / Log(16)) depending on what is available in your version of Delphi.
Note that this might return an invalid value for very, very large value (high INT64 range) due to rounding errors.
If you don't want to go that road, you could replace the instruction by
hexnumber := hexvalue(integernum mod 16) + hexnumber;
which would also remove the need to invert the string at the end.
Third mistake: using an unsigned integer as the loop variable. While this is debatable, the instruction
for I := 0 to Count - 1 do
is common practice in Delphi without checking Count > 0. When Count = 0 and the loop counter is unsigned, you'll either get an integer overflow (if you have overflow checks activated in your project options) or you'll loop High(I) times, which isn't what you want to be doing.
Fourth mistake: already mentioned: Result := Chr(num) should be replaced by something like Result := IntToStr(Num)[1].
Personally, I'd implement the function using an array.
HexArr: array[0..15] of Char = ('0', '1', ... ,'D', 'E', 'F');
begin
  if InRange(Num, 0, 15) then
    Result := HexArr[Num]
  else
    // whatever you want
end;
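For reference, a complete sketch along those lines (the name IntToHexStr is hypothetical; negative input is not handled, as in the original task):
const
  HexChars: array[0..15] of Char =
    ('0', '1', '2', '3', '4', '5', '6', '7',
     '8', '9', 'A', 'B', 'C', 'D', 'E', 'F');

function IntToHexStr(Num: Integer): string;
begin
  Result := '';
  repeat
    Result := HexChars[Num mod 16] + Result; // prepend, so no reversal loop needed
    Num := Num div 16;
  until Num = 0;
end;
Calling IntToHexStr(255) yields 'FF', and IntToHexStr(0) yields '0'.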

How can I send Unicode characters (16 bit) with Serial port in Delphi 2010?

I have a problem in Delphi 2010. I would like to send some Unicode (16-bit) characters from my PC to a printer over the serial port (COM port).
I use the TCiaComPort component in D2010.
For example:
CiaComPort1.Open := True; // I open the port
Data := #$0002 + UnicodeString('Ж') + #$0003;
CiaComPort1.SendStr(Data); // I send the data to the device
If the printer character set is ASCII, the characters arrive, but the Cyrillic character shows as '?' on the printer screen. But if the printer character set is Unicode, the characters do not arrive at the printer.
A Unicode character is represented in 2 bytes. How can I decompose a Unicode character byte by byte? For example #$0002?
And how can I send such strings byte by byte over the COM port? Which function?
Under Windows (check your OS how to open and write to comm ports), I use the following function to write a UnicodeString to a COMM Port:
Bear in mind that the port have to be setup correctly, baud rate, number of bits, etc. See Device Manager => Comm Ports
function WriteToCommPort(const sPort: String; const sOutput: UnicodeString): Boolean;
var
  RetW: DWORD;
  buff: PByte;
  lenBuff: Integer;
  FH: THandle;
begin
  Result := False;
  lenBuff := Length(sOutput) * 2;
  if lenBuff = 0 then Exit; // Nothing to write, empty string
  FH := Windows.CreateFile(PChar(sPort), GENERIC_READ or GENERIC_WRITE, 0, Nil, OPEN_EXISTING, 0, 0);
  if (FH <> Windows.INVALID_HANDLE_VALUE) then
  try
    buff := PByte(@sOutput[1]);
    Windows.WriteFile(FH, buff^, lenBuff, RetW, Nil);
    Result := Integer(RetW) = lenBuff;
  finally
    Windows.CloseHandle(FH);
  end;
end;
Does CiaComPort1.SendStr() accept an AnsiString or UnicodeString as input? Did you try using a COM port sniffer to make sure that CiaComPort is transmitting the actual Unicode bytes as you are expecting?
The fact that you are using #$0002 and #$0003 makes me think it is actually not, because those characters are usually transmitted on COM ports as 8-bit values, not as 16-bit values. If that is the case, then that would explain why the Ж character is getting converted to ?, if CiaComPort is performing a Unicode->Ansi data conversion before transmitting. In which case, you may have to do something like this instead:
var
  B: TBytes;
  I: Integer;

B := WideBytesOf('Ж');
SetLength(Data, Length(B) + 2);
Data[1] := #$0002;
for I := Low(B) to High(B) do
  Data[2+I] := WideChar(B[I]);
Data[2+Length(B)] := #$0003;
CiaComPort1.SendStr(Data);
However, if CiaComPort is actually performing a data conversion internally, then you will still run into conversion issues for any encoded bytes that are above $7F.
In which case, look to see if CiaComPort has any other sending methods available that allow you to send raw bytes instead of strings. If it does not, then you are pretty much SOL and will need to switch to a better COM component, or just use OS APIs to access the COM port directly instead.

Which variables are initialized when in Delphi?

So I always heard that class fields (heap based) were initialized, but stack based variables were not. I also heard that record members (also being stack based) were also not initialized. The compiler warns that local variables are not initialized ([DCC Warning] W1036 Variable 'x' might not have been initialized), but does not warn for record members. So I decided to run a test.
I always get 0 from Integers and false from Booleans for all record members.
I tried turning various compiler options (debugging, optimizations, etc.) on and off, but there was no difference. All my record members are being initialized.
What am I missing? I am on Delphi 2009 Update 2.
program TestInitialization;
{$APPTYPE CONSOLE}
uses
  SysUtils;
type
  TR = record
  public
    i1, i2, i3, i4, i5: Integer;
    a: array[0..10] of Integer;
    b1, b2, b3, b4, b5: Boolean;
    s: String;
  end;
var
  r: TR;
  x: Integer;
begin
  try
    WriteLn('Testing record. . . .');
    WriteLn('i1 ', r.i1);
    WriteLn('i2 ', r.i2);
    WriteLn('i3 ', r.i3);
    WriteLn('i4 ', r.i4);
    WriteLn('i5 ', r.i5);
    Writeln('S ', r.s);
    Writeln('Booleans: ', r.b1, ' ', r.b2, ' ', r.b3, ' ', r.b4, ' ', r.b5);
    Writeln('Array ');
    for x := 0 to 10 do
      Write(r.a[x], ' ');
    WriteLn;
    WriteLn('Done . . . .');
  except
    on E: Exception do
      Writeln(E.Classname, ': ', E.Message);
  end;
  ReadLn;
end.
Output:
Testing record. . . .
i1 0
i2 0
i3 0
i4 0
i5 0
S
Booleans: FALSE FALSE FALSE FALSE FALSE
Array
0 0 0 0 0 0 0 0 0 0 0
Done . . . .
Global variables are zero-initialized. Variables used in the context of the main begin..end block of a program can be a special case; sometimes they are treated as local variables, particularly for-loop indexers. However, in your example, r is a global variable and allocated from the .bss section of the executable, which the Windows loader ensures is zero-filled.
Local variables are initialized as if they were passed to the Initialize routine. The Initialize routine uses runtime type-info (RTTI) to zero-out fields (recursively - if a field is of an array or record type) and arrays (recursively - if the element type is an array or a record) of a managed type, where a managed type is one of:
AnsiString
UnicodeString
WideString
an interface type (including method references)
dynamic array type
Variant
Allocations from the heap are not necessarily initialized; it depends on what mechanism was used to allocate memory. Allocations as part of instance object data are zero-filled by TObject.InitInstance. Allocations from AllocMem are zero-filled, while GetMem allocations are not zero-filled. Allocations from New are initialized as if they were passed to Initialize.
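A small sketch of those rules (a standalone fragment, not tied to the question's code):
var
  P1, P2, P3: PInteger;
begin
  GetMem(P1, SizeOf(Integer));     // not initialized: P1^ is whatever was in memory
  P2 := AllocMem(SizeOf(Integer)); // zero-filled: P2^ = 0
  New(P3);                         // Integer is not a managed type, so P3^ is
                                   // not initialized; a string field would be
  FreeMem(P1);
  FreeMem(P2);
  Dispose(P3);
end;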
I always get 0 from Integers and false from Booleans for all record members.
I tried turning various compiler options (debugging, optimizations, etc.) on and off, but there was no difference. All my record members are being initialized.
What am I missing?
Well, apart from your test using global instead of local variables: the important thing that you are missing is the distinction between variables that coincidentally appear to be initialised, and variables that actually are initialised.
BTW: This is the reason programmers who don't check their warnings commonly mistake their poorly written code for correct: the few tests they run happen to get 0 and False defaults... Want To Buy: random initialisation of local variables for debug builds.
Consider the following variation on your test code:
program LocalVarInit;
{$APPTYPE CONSOLE}
procedure DoTest;
var
  I, J, K, L, M, N: Integer;
  S: string;
begin
  Writeln('Test default values');
  Writeln('Numbers: ', I:10, J:10, K:10, L:10, M:10, N:10);
  Writeln('S: ', S);
  I := I + 1;
  J := J + 2;
  K := K + 3;
  L := L + 5;
  M := M + 8;
  N := N + 13;
  S := 'Hello';
  Writeln('Test modified values');
  Writeln('Numbers: ', I:10, J:10, K:10, L:10, M:10, N:10);
  Writeln('S: ', S);
  Writeln('');
  Writeln('');
end;
begin
  DoTest;
  DoTest;
  Readln;
end.
With the following sample output:
Test default values
Numbers: 4212344 1638280 4239640 4239632 0 0
S:
Test modified values
Numbers: 4212345 1638282 4239643 4239637 8 13 //Local vars on stack at end of first call to DoTest
S: Hello
Test default values
Numbers: 4212345 1638282 4239643 4239637 8 13 //And the values are still there on the next call
S:
Test modified values
Numbers: 4212346 1638284 4239646 4239642 16 26
S: Hello
Notes
The example works best if you compile with optimisation off. Otherwise, if you have optimisation on:
Some local vars will be manipulated in CPU registers.
And if you view the CPU stack while stepping through the code you'll note for example that I := I + 1 doesn't even modify the stack. So obviously the change cannot be carried through.
You could experiment with different calling conventions to see how that affects things.
You can also test the effect of setting the local vars to zero instead of incrementing them.
This illustrates how you are entirely dependent on what found its way onto the stack before your method was called.
Note that in the example code you provided, the record is actually a global variable, so it will be completely initialized. If you move all that code to a function, it will be a local variable, and so, per the rules given by Barry Kelly, only its string field will be initialized (to '').
I had a similar situation and thought the same, but when I added other variables that are used before the record, the values became garbage, so before using my record I had to initialize it with
FillChar(MyRecord, SizeOf(MyRecord), #0)
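For example, with TR from the test program above (note: FillChar is only safe here because a fresh local record's string field is still nil; clearing a record whose string fields already hold data would leak those strings):
procedure UseLocalRecord;
var
  MyRecord: TR;
begin
  FillChar(MyRecord, SizeOf(MyRecord), #0);
  // ... MyRecord's fields now all read as 0 / False / '' ...
end;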
