Delphi Indy IdTCPServer and IdTCPClient: read and write control characters and text

Delphi XE3, Indy 10.5.9.0
I am creating an interface between a computer and an instrument. The instrument uses ASTM protocol.
I have successfully sent text based messages back and forth between the server and client. I have been able to send control characters to the server and read those. What I have not figured out after 3 days of searching is how to write and read messages that have a mixture of control characters and text.
I am sending ASTM protocol messages which require control characters and text, like the following line (everything in angle brackets is a control character):
<STX>3O|1|G-13-00017||^^^HPV|R||||||N||||||||||||||O<CR><ETX>D3<CR><LF>
Writing the message is not where I run into problems. It is when reading it, since I will receive both text and control characters. My code below is how I read the control characters. How can I tell, when I get a character, whether it is a control character or text within the same string of mixed control and text characters? Thanks to Remy Lebeau and his posts on this site for getting me where I am. He talked about how to use buffers, but I couldn't tell how to read a buffer that contained both control characters and text characters.
I have added the following code to my server component's OnConnect event, which is supposed to allow me to send control characters...
...
AContext.Connection.IOHandler.DefStringEncoding := TIdTextEncoding.UTF8;
...
My server OnExecute event...
procedure TTasksForm.IdTCPServer1Execute(AContext: TIdContext);
var
  lastline: WideString;
  lastcmd: WideString;
  lastbyte: Byte;
begin
  ServerTrafficMemo.Lines.Add('OnExecute');
  lastline := '';
  lastcmd := '';
  lastbyte := AContext.Connection.IOHandler.ReadByte;
  if lastbyte = Byte(5) then
  begin
    lastcmd := '<ENQ>';
    ServerTrafficMemo.Lines.Add(lastcmd);
    AContext.Connection.IOHandler.WriteLn(lastcmd + ' received');
  end;
end;

The only control characters present are STX and ETX, and they are both < 32, so ASCII and UTF-8 will both handle them just fine. Or, you can use Indy's own built-in 8bit encoding instead.
For this type of data, there are several different ways to read it with Indy. Since the bulk of the data is textual, and the control characters are just used as frame delimiters, the easiest way would be to use IOHandler.ReadLn() or IOHandler.WaitFor() with explicit terminators.
Of course, there are other options as well, such as reading bytes from the IOHandler.InputBuffer directly (which I think is overkill in this situation), using the InputBuffer.IndexOf() method to know how many bytes to read.
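For illustration only, here is a rough sketch of that InputBuffer approach, assuming the TIdBuffer.IndexOf(), CheckForDataOnSource(), and ReadBytes() calls available in Indy 10.5.x; it scans for the trailing <LF> of a frame and reads the whole frame as raw bytes:
var
  LFPos: Integer;
  Frame: TIdBytes;
begin
  // let Indy pull any pending socket data into the InputBuffer first
  AContext.Connection.IOHandler.CheckForDataOnSource(100);
  LFPos := AContext.Connection.IOHandler.InputBuffer.IndexOf(Byte(10)); // <LF>
  if LFPos <> -1 then
  begin
    // read everything up to and including the <LF>
    AContext.Connection.IOHandler.ReadBytes(Frame, LFPos + 1, False);
    // Frame now holds <STX>...<ETX>checksum<CR><LF> as raw bytes;
    // any byte below 32 is a control character, everything else is text
  end;
end;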
Also, TIdTCPServer is a multithreaded component, where its events are fired in worker threads, but your code is directly accessing the UI, which is not thread-safe. You MUST synchronize with the UI thread.
And you shouldn't be using WideString, either. Use (Unicode)String instead.
Try something like this:
procedure TTasksForm.IdTCPServer1Connect(AContext: TIdContext);
begin
  AContext.Connection.IOHandler.DefStringEncoding := Indy8BitEncoding;
end;
procedure TTasksForm.IdTCPServer1Execute(AContext: TIdContext);
var
  lastline: string;
  lastcmd: string;
  lastbyte: Byte;
begin
  TThread.Synchronize(nil,
    procedure
    begin
      ServerTrafficMemo.Lines.Add('OnExecute');
    end
  );
  lastbyte := AContext.Connection.IOHandler.ReadByte;
  if lastbyte = $5 then
  begin
    lastcmd := '<ENQ>';
    TThread.Synchronize(nil,
      procedure
      begin
        ServerTrafficMemo.Lines.Add(lastcmd);
      end
    );
  end
  else if lastbyte = $2 then
  begin
    lastline := #2 + AContext.Connection.IOHandler.ReadLn(#3) + #3;
    lastline := lastline + AContext.Connection.IOHandler.ReadLn(#13#10) + #13#10;
    { or:
    lastline := #2 + AContext.Connection.IOHandler.WaitFor(#3, True, True);
    lastline := lastline + AContext.Connection.IOHandler.WaitFor(#13#10, True, True);
    }
    lastcmd := '<STX>';
    TThread.Synchronize(nil,
      procedure
      begin
        ServerTrafficMemo.Lines.Add(lastcmd);
      end
    );
  end;
  AContext.Connection.IOHandler.WriteLn(lastcmd + ' received');
end;

I couldn't tell how to read a buffer that contained control characters and text characters
This protocol is no doubt using ASCII strings. Any characters below decimal 32 will be control characters. Those 32 and above will be data characters. See
http://ascii-table.com/ascii.php
Dealing with that as bytes works fine. You can also use AnsiString, which is ASCII plus the upper 128 characters (128-255). In this situation I would avoid UTF-(anything) and stick with either bytes or AnsiString. You need to control the message at the character level, and these characters are 8 bits per character with no escapes.
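As a minimal illustration of that per-character check (the procedure name here is made up for the example, not from the question's code):
procedure ClassifyFrame(const Frame: AnsiString; Output: TStrings);
var
  I: Integer;
begin
  // bytes below 32 are control characters, the rest is ordinary text
  for I := 1 to Length(Frame) do
    if Ord(Frame[I]) < 32 then
      Output.Add(Format('control character #%d', [Ord(Frame[I])]))
    else
      Output.Add('text character ' + Char(Frame[I]));
end;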
Also see the first example in the first answer here:

Related

How to pass a Word number to a WideString?

First of all, I am sorry that I cannot describe my problem better.
What I have is the Word number 65025, which is 0xFE01, or 11111110 00000001 in binary. I want to pass that value into wstr so that the WideString holds the bits 11111110 00000001.
I found that using a typecast does not work.
And one more question: if I want to add another number, like 10000 => 0x03E8, how do I do it, so that in the result the WideString refers to the values 0xFE01 0x03E8?
And then, how do I retrieve the same numbers back from the WideString into Words?
var
  wstr: WideString;
  wo: Word;
begin
  wo := 65025;
  wstr := WideString(wo);
  wo := 10000;
  wstr := wstr + WideString(wo);
end;
Edit:
I'm giving another, simpler example of what I want... If I have the Word value 49, which is the ASCII code of the character '1', then I want wstr to be '1', which is 00110001 in binary terms. I want to copy the bits from the Word number into the string.
It looks like you want to interpret a Word as a UTF-16 code unit. In Unicode Delphi you would use the Chr() function. But I suspect you are using an ANSI version of Delphi, in which case cast to WideChar with WideChar(wo).
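A minimal sketch of that cast, reusing the wstr and wo variables from the question:
wo := 65025;
wstr := WideChar(wo);          // one UTF-16 code unit, $FE01
wo := 10000;
wstr := wstr + WideChar(wo);   // append a second code unit
// and back again:
wo := Word(wstr[1]);           // 65025
wo := Word(wstr[2]);           // 10000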
You are casting a Word to a WideString. In Delphi, casting usually doesn't convert, so you are simply re-interpreting the value 65025 as a pointer (a WideString is a pointer). But 65025 is not a valid pointer value.
You will have to explicitly convert the Word to a WideString, e.g. with a function like this (untested, but should work):
function WordToBinary(W: Word): WideString;
var
  I: Integer;
begin
  Result := '0000000000000000';
  for I := 0 to 15 do // process bits 0..15
  begin
    if Odd(W) then
      Result[16 - I] := '1';
    W := W shr 1;
  end;
end;
Now you can do something like:
wo := 65025;
wstr := WordToBinary(wo);
wo := 10000;
wstr := wstr + ' ' + WordToBinary(wo);
For the reverse, you will have to write a function that converts from a WideString to a Word. I'll leave that exercise to you.
Again, you can't cast. You will have to explicitly convert. Both ways.
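For completeness, one possible shape of that reverse conversion (an untested sketch; it assumes the string holds at most 16 binary digits):
function BinaryToWord(const S: WideString): Word;
var
  I: Integer;
begin
  Result := 0;
  for I := 1 to Length(S) do
  begin
    Result := Result shl 1;   // make room for the next bit
    if S[I] = '1' then
      Result := Result or 1;
  end;
end;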

How to convert AnsiChar to UnicodeChar with specific CodePage?

I'm generating texture atlases for rendering Unicode texts in my app. Source texts are stored in ANSI codepages (1250, 1251, 1254, 1257, etc). I want to be able to generate all the symbols from each ANSI codepage.
Here is the outline of the code I would expect to have:
for I := 0 to 255 do
begin
  anChar := AnsiChar(I); // obtain AnsiChar
  // Apply codepage without converting the chars
  // <<--- this part does not work, showing:
  // "E2033 Types of actual and formal var parameters must be identical"
  SetCodePage(anChar, aCodepages[K], False);
  // Assign AnsiChar to UnicodeChar (automatic conversion)
  uniChar := anChar;
  // Here we get Unicode character index
  uniCode := Ord(uniChar);
end;
The code above does not work (E2033), and I'm not sure it is a proper solution at all. Perhaps there's a much shorter version.
What is the proper way of converting an AnsiChar into Unicode with a specific codepage in mind?
I would do it like this:
function AnsiCharToWideChar(ac: AnsiChar; CodePage: UINT): WideChar;
begin
  if MultiByteToWideChar(CodePage, 0, @ac, 1, @Result, 1) <> 1 then
    RaiseLastOSError;
end;
I think you should avoid using strings for what is in essence a character operation. If you know up front which code pages you need to support then you can hard code the conversions into a lookup table expressed as an array constant.
Note that all the characters that are defined in the ANSI code pages map to Unicode characters from the Basic Multilingual Plane and so are represented by a single UTF-16 character. Hence the size assumptions of the code above.
However, the assumption that you are making, and that this answer perpetuates, is that a single byte represents a character in an ANSI character set. That's a valid assumption for many character sets, for example the single-byte western character sets like 1252. But there are character sets like 932 (Japanese), 949 (Korean), etc. that are double-byte character sets. Your entire approach breaks down for those code pages. My guess is that you only wish to support single-byte character sets.
If you are writing cross-platform code then you can replace MultiByteToWideChar with UnicodeFromLocaleChars.
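A sketch of that cross-platform variant, under the same one-character-in, one-character-out assumptions as above (untested):
function AnsiCharToWideCharRtl(ac: AnsiChar; CodePage: UINT): WideChar;
begin
  // UnicodeFromLocaleChars lives in the System unit and mirrors
  // MultiByteToWideChar's parameter order
  if UnicodeFromLocaleChars(CodePage, 0, @ac, 1, @Result, 1) <> 1 then
    RaiseLastOSError;
end;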
You can also do it in one step for all characters. Here is an example for codepage 1250:
var
  encoding: TEncoding;
  bytes: TBytes;
  unicode: TArray<Word>;
  I: Integer;
  S: string;
begin
  SetLength(bytes, 256);
  for I := 0 to 255 do
    bytes[I] := I;
  SetLength(unicode, 256);
  encoding := TEncoding.GetEncoding(1250); // change codepage as needed
  try
    S := encoding.GetString(bytes);
    for I := 0 to 255 do
      unicode[I] := Word(S[I+1]); // as long as strings are 1-based
  finally
    encoding.Free;
  end;
end;
Here is the code I have found to be working well:
var
  I: Byte;
  anChar: AnsiString;
  Tmp: RawByteString;
  uniChar: Char;
  uniCode: Word;
begin
  for I := 0 to 255 do
  begin
    anChar := AnsiChar(I);
    Tmp := anChar;
    SetCodePage(Tmp, aCodepages[K], False);
    uniChar := UnicodeString(Tmp)[1];
    uniCode := Word(uniChar);
    <...snip...>
  end;

GlobalAlloc causes my Delphi app to hang?

I want to convert a string value to a global memory handle and vice versa, using the following functions I've just written.
But StrToGlobalHandle() causes my test program to hang, so GlobalHandleToStr() is untestable yet, and I'm also wondering whether my code is logical or not.
function StrToGlobalHandle(const aText: string): HGLOBAL;
var
  ptr: PChar;
begin
  Result := 0;
  if aText <> '' then
  begin
    Result := GlobalAlloc(GMEM_MOVEABLE or GMEM_ZEROINIT, Length(aText) + 1);
    if Result <> 0 then
    begin
      ptr := GlobalLock(Result);
      if Assigned(ptr) then
      begin
        StrCopy(ptr, PChar(aText));
        GlobalUnlock(Result);
      end;
    end;
  end;
end;

function GlobalHandleToStr(const aHandle: HGLOBAL): string;
var
  ptrSrc: PChar;
begin
  ptrSrc := GlobalLock(aHandle);
  if Assigned(ptrSrc) then
  begin
    SetLength(Result, Length(ptrSrc));
    StrCopy(PChar(Result), ptrSrc);
    GlobalUnlock(aHandle);
  end;
end;
Testing code:
procedure TForm3.Button1Click(Sender: TObject);
var
  h: HGLOBAL;
  s: string;
  s2: string;
begin
  s := 'this is a test string';
  h := StrToGlobalHandle(s);
  s2 := GlobalHandleToStr(h);
  ShowMessage(s2);
  GlobalFree(h);
end;
BTW, I want to use these two functions as helpers to send string values between programs - send a global handle from process A to process B, and have process B get the string using GlobalHandleToStr().
BTW 2: I know WM_COPYDATA and other IPC methods; those are not suitable in my case.
The strings in Delphi 2010 are Unicode, so you are not allocating the proper buffer size.
Replace this line:
Result := GlobalAlloc(GMEM_MOVEABLE or GMEM_ZEROINIT, Length(aText) + 1);
with this:
Result := GlobalAlloc(GMEM_MOVEABLE or GMEM_ZEROINIT, (Length(aText) + 1) * SizeOf(Char));
If your program hangs when you call GlobalAlloc, then you probably have heap corruption from earlier in your program. That leads to undefined behavior; the function might detect the problem and return an error, it might crash your program, it might silently corrupt yet more of your memory, it might hang, or it might do any number of other things.
That heap corruption might come from a previous call to StrToGlobalHandle because your StrCopy call writes beyond the end of the allocated memory. You're allocating bytes, but the Length function returns the number of characters in the string. That's only valid when characters are one byte wide, which isn't the case as of Delphi 2009. Multiply by SizeOf(Char) to get a byte count:
Result := GlobalAlloc(GMEM_MOVEABLE or GMEM_ZEROINIT, SizeOf(Char) * (Length(aText) + 1));
You can't send data between programs using GlobalAlloc - that only worked in 16-bit Windows. Use a memory-mapped file instead.
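A rough sketch of the memory-mapped-file route (the mapping name, the fixed size, and the two helper names are illustrative assumptions, not a drop-in replacement):
uses
  Windows, SysUtils;

const
  MapName = 'MyApp_SharedString'; // hypothetical name, must match in both processes
  MapSize = 4096;                 // arbitrary fixed size for the example

// process A: publish the string
procedure ShareString(const S: string);
var
  hMap: THandle;
  p: PChar;
begin
  hMap := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
    0, MapSize, PChar(MapName));
  if hMap = 0 then
    RaiseLastOSError;
  p := PChar(MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, 0));
  if p = nil then
    RaiseLastOSError;
  StrLCopy(p, PChar(S), (MapSize div SizeOf(Char)) - 1); // null-terminated copy
  UnmapViewOfFile(p);
  // keep hMap open for as long as process B needs to read the mapping
end;

// process B: read the string back
function ReadSharedString: string;
var
  hMap: THandle;
  p: PChar;
begin
  hMap := OpenFileMapping(FILE_MAP_READ, False, PChar(MapName));
  if hMap = 0 then
    RaiseLastOSError;
  p := PChar(MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0));
  if p = nil then
    RaiseLastOSError;
  Result := p; // copies up to the terminating #0
  UnmapViewOfFile(p);
  CloseHandle(hMap);
end;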

RichEdit 2.0's usage of single CR character as linebreak throws off SelStart calculations (Delphi XE2)

When transitioning from Delphi 2006 to Delphi XE2, one of the things we learned is that RichEdit 2.0 internally replaces CRLF pairs with a single CR character. This has the unfortunate effect of throwing off all character index calculations based on the actual text string on the VCL's side.
The behavior I can see by tracing through the VCL code is as follows:
Sending a WM_GETTEXT message (done in TControl.GetTextBuf) will return a text buffer that contains CRLF pairs.
Sending a WM_GETTEXTLENGTH message (done in TControl.GetTextLen) will return a value as if the text still contains CRLF characters.
In contrast, sending an EM_EXSETSEL message (i.e. setting SelStart) will treat the input value as if the text contains only CR characters.
This causes all sorts of things to fail (such as syntax highlighting) in our application. As you can tell, everything is off by exactly one character for every new line up to that point.
Obviously, since this is inconsistent behavior, we must be missing something or doing something very wrong.
Does anybody else have any experience with the transition from a RichEdit 1.0 to a RichEdit 2.0 control, and how did you solve this issue? Finally, is there any way to force RichEdit 2.0 to use CRLF pairs just like RichEdit 1.0?
We also ran into this very issue.
We do a "mail merge" type of thing where we have templates with merge codes that are parsed and replaced by data from outside sources.
This index mismatch between Pos(mystring, RichEdit.Text) and the positioning index into the RichEdit text via RichEdit.SelStart broke our merge.
I don't have a good answer, but I came up with a workaround. It's a bit cumbersome (understatement!), but until a better solution comes along...
The workaround is to use a hidden TMemo, copy the RichEdit text to it, and change the CR/LF pairs to CR only. Then use the TMemo to find the proper position using Pos(), and use that as the SelStart position in the TRichEdit.
This really sucks, but hopefully this workaround will help others in our situation, or maybe spark somebody smarter than me into coming up with a better solution.
I'll show a little sample code...
Since we are replacing text using SelText, we need to replace text in BOTH the RichEdit control and the TMemo control to keep the two synchronized.
StartToken and EndToken are the merge code delimiters and are constants.
function TEditForm.ParseTest: boolean;
var
  TagLength: integer;
  ValueLength: integer;
  ParseStart: integer;
  ParseEnd: integer;
  ParseValue: string;
  Memo: TMemo;
begin
  Result := True; // default
  Memo := TMemo.Create(nil);
  try
    Memo.Parent := self;
    Memo.Visible := False;
    try
      Memo.Lines.Clear;
      Memo.Lines.AddStrings(RichEditor.Lines);
      Memo.Text := StringReplace(Memo.Text, #13#10, #13, [rfReplaceAll]); // strip CR/LF pairs and replace with CR
      while (Pos(StartToken, Memo.Text) > 0) and (Pos(EndToken, Memo.Text) > 0) do
      begin
        ParseStart := Pos(StartToken, Memo.SelText);
        ParseEnd := Pos(EndToken, Memo.SelText) + Length(EndToken);
        if ParseStart >= ParseEnd then
        begin // oops, something's wrong - bail out
          Result := True;
          myEditor.SelStart := 0;
          Exit;
        end;
        TagLength := ParseEnd - ParseStart;
        ValueLength := (TagLength - Length(StartToken)) - Length(EndToken);
        ParseValue := Copy(Memo.SelText, (ParseStart + Length(StartToken)), ValueLength);
        Memo.SelStart := ParseStart - 1; // the text is zero-based, but Pos is 1-based, so subtract 1
        Memo.SelLength := TagLength;
        RichEditor.SelStart := ParseStart - 1; // the text is zero-based, but Pos is 1-based, so subtract 1
        RichEditor.SelLength := TagLength;
        TempText := GetValue(ParseValue);
        Memo.SelText := TempText;
        RichEditor.SelText := TempText;
      end;
    except
      on e: exception do
      begin
        MessageDlg(e.message, mtInformation, [mbOK], 0);
        Result := False;
      end;
    end; // try..except
  finally
    FreeAndNil(Memo);
  end;
end;
How about subtracting the EM_LINEFROMCHAR result from the caret position (or from the position returned by EM_GETSEL), whichever you need?
You could even get two EM_LINEFROMCHAR values, one for the selection start and the other for the desired caret/selection position, if you only want to know how many CR/LF pairs are in the selection.
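A rough sketch of one direction of that adjustment (the function name is made up; it assumes every break is a CRLF pair on the VCL side and a lone CR inside the RichEdit 2.0 control):
uses
  Windows, Messages, ComCtrls;

// Map an index in the control's CR-only text back to an index in the
// CRLF-based text: each preceding line break costs one extra LF character.
function CRLFIndexFromRichIndex(RE: TRichEdit; RichIndex: Integer): Integer;
var
  Line: Integer;
begin
  Line := SendMessage(RE.Handle, EM_LINEFROMCHAR, WPARAM(RichIndex), 0);
  Result := RichIndex + Line;
end;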

Why are strings truncated when using direct printing?

I'm trying to print directly to a printer using ESC/P commands (EPSON TM-T70), without using a printer driver. Code found here.
However, if I try to print any strings, they are truncated. For example:
MyPrinter := TRawPrint.Create(nil);
try
  MyPrinter.DeviceName := 'EPSON TM-T70 Receipt';
  MyPrinter.JobName := 'MyJob';
  if MyPrinter.OpenDevice then
  begin
    MyPrinter.WriteString('This is page 1');
    MyPrinter.NewPage;
    MyPrinter.WriteString('This is page 2');
    MyPrinter.CloseDevice;
  end;
finally
  MyPrinter.Free;
end;
This would print only "This isThis is"! I wouldn't ordinarily use MyPrinter.NewPage to send a line break, but regardless, why does it truncate the string?
Also notice the WriteString function in the RawPrint unit:
Result := False;
if IsOpenDevice then begin
  Result := True;
  if not WritePrinter(hPrinter, PChar(Text), Length(Text), WrittenChars) then begin
    RaiseError(GetLastErrMsg);
    Result := False;
  end;
end;
If I put a breakpoint there and step through the code, then WrittenChars is set to 14, which is correct. Why does it act like that?
You are using a Unicode-enabled version of Delphi, where Chars are 2 bytes long. When you call the function with Length(Text), you're sending the number of characters, but the function probably expects the size of the buffer in bytes. Replace it with Length(Text) * SizeOf(Char).
Since the size of one Unicode char is exactly 2 bytes, when you send the character count where a byte count is required, you're essentially telling the API to use only half the buffer. Hence all strings are approximately split in half.
Maybe you can use the ByteLength function, which gives the length of a string in bytes.
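For example, the WritePrinter call shown above could pass a byte count that way (a sketch, not the RawPrint unit's actual code):
// ByteLength (in SysUtils) is Length(Text) * SizeOf(Char) for a UnicodeString
if not WritePrinter(hPrinter, PChar(Text), ByteLength(Text), WrittenChars) then begin
  RaiseError(GetLastErrMsg);
  Result := False;
end;
// note that WrittenChars will then report bytes written, not characters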
