I'm sending data from an ATmega in the form of 16 bits (2 bytes). I have a serial component in Delphi which receives the data.
If I send a String (e.g. 'FF'), I get the data added to my Memo component. All fine.
However, if I send the raw hex value $FF, the receive indicator blinks to show that data has arrived, but nothing is added to the Memo component's lines. I'm not sure how to convert this data into an Integer or a String, something I can use.
A solution would be good, but an explanation of how Delphi sees String, Char, etc. would also be nice. Thanks.
When you receive data, you can cast it to bytes (if needed) and transform it into a hex representation.
For example, if you get an AnsiString:
AnsiS := Comport.ReadAnsiString; // your reading here
for i := 1 to Length(AnsiS) do
  Memo1.Lines.Add(IntToHex(Ord(AnsiS[i]), 2));
When your ATmega sends the string "FF", it sends two characters ("F" and "F"), each encoded as its ASCII code, decimal 70. When your Delphi program receives these two bytes (d70 and d70), it converts the ASCII codes back to the characters "F" and "F" and adds them to the memo.
When your ATmega sends the hex value FF ($FF as it is written in Delphi code), it sends one byte with the decimal value 255. When your Delphi program receives this one byte (d255), it attempts to convert it to a character but doesn't find a printable representation for that code, so nothing is added to the memo. Alternatively, your receiving code may be filtering out this value, and possibly others too.
It's not clear exactly what kind of solution you are looking for, but you can convert the byte value (d255) to a hex or decimal representation with IntToHex(Value: Integer; Digits: Integer): string or System.SysUtils.Format(const Format: string; const Args: array of const): string, or simply use it as a byte value in your code.
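For example (a minimal sketch; B stands for one received byte, a name made up here, and Memo1 is the memo from your form):

var
  B: Byte;
begin
  B := $FF;                            // example: the received byte, value 255
  Memo1.Lines.Add(IntToHex(B, 2));     // adds 'FF'
  Memo1.Lines.Add(Format('%d', [B]));  // adds '255'
end;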
Related
I have a text file which can come in different encodings (ASCII, UTF-8, UTF-16, UTF-32). The good news is that it is filled only with numbers, for example:
192848292732
My question is: will a function like the one below be able to display all the data correctly? If not, why? (I have loaded the file as a string into the container variable.)
function output(container: AnsiString): AnsiString;
var
  i: Integer;
begin
  Result := '';
  for i := 1 to Length(container) do
    if (Ord(container[i]) <> 0) then
      Result := Result + container[i];
end;
My reasoning is that if the encoding is something other than ASCII or UTF-8, the extra characters are all 0?
It passes all the tests just fine.
The ASCII character set uses codes 0-127. In Unicode, these characters map to code points with the same numeric value. So the question comes down to how each of the encodings represents code points 0-127.
UTF-8 encodes code points 0-127 in a single byte containing the code point value. In other words, if the payload is ASCII, then there is no difference between ASCII and UTF-8 encoding.
UTF-16 encodes code points 0-127 in two bytes, one of which is 0, and the other of which is the ASCII code.
UTF-32 encodes code points 0-127 in four bytes, three of which are 0, and the remaining byte is the ASCII code. For example, the digit '5' (code point 53, $35) is the single byte $35 in ASCII and UTF-8, the bytes $35 $00 in little-endian UTF-16, and $35 $00 $00 $00 in little-endian UTF-32.
Your proposed algorithm will not be able to detect ASCII code 0 (NUL), but you state that this character is not present in the file.
The only other problem that I can see with your proposed code is that it will not recognise a byte order mark (BOM). One may be present at the beginning of the file, and you should detect and skip it.
Having said all of this, your implementation seems odd to me. You state that the file only contains numeric characters, in which case your test could equally well be:
if container[i] in ['0'..'9'] then
.........
If you used this code then you would also happen to skip over a BOM, were it present.
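A digits-only version of the function along those lines, which also happens to skip any BOM bytes, might look like this (a sketch based on the test above; the function name is just for illustration):

function DigitsOnly(const container: AnsiString): AnsiString;
var
  i: Integer;
begin
  Result := '';
  for i := 1 to Length(container) do
    // Zero padding bytes from UTF-16/UTF-32 and any BOM bytes fall
    // outside '0'..'9', so only the digit characters survive.
    if container[i] in ['0'..'9'] then
      Result := Result + container[i];
end;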
I'm using Delphi XE2 and use the following code to enter the letter Y into a bookmark in a Word (2010) template.
Doc.Bookmarks.Item('NS').Range.InsertAfter('Y');
However, in the document, instead of the letter Y, the number 89 appears.
Is the fault likely to be from my code or in the Word document? Any direction gratefully received.
Your literal 'Y' is a character literal rather than a string literal. The ASCII code for Y is 89.
So you are passing a Char rather than a string, and when Word needs a string representation of that value it simply converts the integer 89 to its textual representation, the string '89'.
To get around the problem you can do this:
var
  Text: string;
....
Text := 'Y';
Doc.Bookmarks.Item('NS').Range.InsertAfter(Text);
The idea is to ensure that we pass a string to InsertAfter() rather than a character. Remember that InsertAfter() receives a Variant parameter, so you do need to be careful about the type of the payload stored in that Variant.
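If this comes up in more than one place, you could wrap the call in a small helper so the payload is always typed as a string before it is boxed into the Variant. A sketch, assuming Doc is a late-bound OleVariant as in the snippet above (the helper name is made up here):

// Hypothetical helper: forces the payload to be a string before it
// reaches the Variant parameter that InsertAfter() expects.
procedure InsertTextAtBookmark(const Doc: OleVariant;
  const BookmarkName, Text: string);
begin
  Doc.Bookmarks.Item(BookmarkName).Range.InsertAfter(Text);
end;

// Usage:
//   InsertTextAtBookmark(Doc, 'NS', 'Y');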
I receive a string, which is displayed as '{'#0'S'#0'a'#0'm'#0'p'#0'l'#0'e'#0'-'#0'M'#0'e'#0's'#0's'#0'a'#0'g'#0'e'#0'}'#0 in the debugger.
I need to print it out in the debug output (OutputDebugString).
When I run OutputDebugString(PChar(mymsg)), only the first character of the received string is displayed (probably because of the #0 end-of-string marker).
How can I convert that string into something OutputDebugString can work with?
Update 1: Here's the code. I want to print the contents of the variable RxBufStr.
procedure ReceivingThread.OnExecute(AContext: TIdContext);
var
  RxBufStr: String;
begin
  with AContext.Connection.IOHandler do
  begin
    CheckForDataOnSource(10);
    if not InputBufferIsEmpty then
    begin
      RxBufStr := InputBuffer.Extract();
    end;
  end;
end;
The data you have shown in the question looks like UTF-16 encoded data rather than UTF-8. However, since you are using a Unicode-aware Delphi, and a string data type, clearly there has been an encoding mismatch. Your string variable appears to be doubly UTF-16 encoded, if you see what I mean!
It would appear therefore that InputBuffer.Extract is assuming that the data is transmitted using ANSI or UTF-8. In other words, an 8-bit encoding. But in fact the data is transmitted as UTF-16.
To solve the problem you need to align the reading of the buffer with the transmission of the buffer. You need to make sure that both sides use the same encoding. UTF-8 would be a good choice.
If the data in the buffer is UTF-16, then you can extract it with
RxBufStr := InputBuffer.Extract(-1, TIdTextEncoding.Unicode);
If you switch to UTF-8 then extract it with
RxBufStr := InputBuffer.Extract(-1, TIdTextEncoding.UTF8);
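On the sending side you would then write the string explicitly as UTF-8 so that it matches what the receiver extracts. A sketch, assuming your Indy version has the Write overload that accepts an encoding, and that the outgoing string is in a variable called OutgoingText (a made-up name):

// Sender side: encode the outgoing string as UTF-8 so it matches the
// TIdTextEncoding.UTF8 used by InputBuffer.Extract on the receiver.
AContext.Connection.IOHandler.Write(OutgoingText, TIdTextEncoding.UTF8);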
With
RxBufStr := InputBuffer.Extract();
the code does not specify a terminator or a data size, so it may happen that the client receives only a part of the sent data.
You can read the data with a given (known) length into a TIdBytes array and then convert it to a string using the correct encoding.
One way to do it is
TEncoding.Unicode.GetString( MyByteArray );
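A slightly fuller sketch along those lines, assuming Indy 10 and that the payload length in bytes is known up front (MsgLen below is a made-up name):

// Requires IdGlobal (TIdBytes), SysUtils (TEncoding, TBytes) and Windows
// (OutputDebugString) in the uses clause.
var
  IdBuf: TIdBytes;
  Bytes: TBytes;
  RxBufStr: string;
begin
  AContext.Connection.IOHandler.ReadBytes(IdBuf, MsgLen, False);
  // Copy into a TBytes so TEncoding can decode it.
  SetLength(Bytes, Length(IdBuf));
  if Length(IdBuf) > 0 then
    Move(IdBuf[0], Bytes[0], Length(IdBuf));
  RxBufStr := TEncoding.Unicode.GetString(Bytes);  // little-endian UTF-16
  OutputDebugString(PChar(RxBufStr));
end;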
I get an incorrect result when converting a file to a string in Delphi XE. There are several ' characters that make the result incorrect. I've used UnicodeFileToWideString and FileToString from http://www.delphidabbler.com/codesnip, and my code:
function LoadFile(const FileName: TFileName): AnsiString;
begin
  with TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite) do
  begin
    try
      SetLength(Result, Size);
      Read(Pointer(Result)^, Size);
      // ReadBuffer(Result[1], Size);
    except
      Result := '';
      Free;
    end;
    Free;
  end;
end;
The results from Delphi XE and Delphi 6 are different. The result from D6 is correct; I've compared it with the output of a hex editor program.
Your output is being produced in the style of the Delphi debugger, which displays string variables using Delphi's own string-literal format. Whatever function you're using to produce that output from your own program has actually been fixed for Delphi XE. It's really your Delphi 6 output that's incorrect.
Delphi string literals consist of a series of printable characters between apostrophes and a series of non-printable characters designated by number signs and the numeric values of each character. To represent an apostrophe, write two of them next to each other. The printable and non-printable series of characters can be written right next to each other; there's no need to concatenate them with the + operator.
Here's an excerpt from the output you say is correct:
#$12'O)=ù'dlû'#6't
There are four lone apostrophes in that string, so each one either opens or closes a series of printable characters. We don't necessarily know which is which when we start reading the string at the left because the #, $, 1, and 2 characters are all printable on their own. But if they represent printable characters, then the 0, ), =, and ù characters are in the non-printable region, and that can't be. Therefore, the first apostrophe above opens a printable series, and the #$12 part represents the character at code 18 (12 in hexadecimal). After the ù is another apostrophe. Since the previous one opened a printable string, this one must close it. But the next character after that is d, which is not #, and therefore cannot be the start of a non-printable character code. Therefore, this string from your Delphi 6 code is mal-formed.
The correct version of that excerpt is this:
#$12'O)=ù''dlû'#6't
Now there are three lone apostrophes and one set of doubled apostrophes. The problematic apostrophe from the previous string has been doubled, indicating that it is a literal apostrophe instead of a printable-string-closing one. The printable series continues with dlû. Then it's closed to insert character No. 6, and then opened again for t. The apostrophe that opens the entire string, at the beginning of the file, is implicit.
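As a compilable illustration of that notation (an example made up here, not taken from your file):

const
  // 'Hello', then CR LF, then "it's me": printable runs sit between
  // apostrophes, control characters are written as #<code>, and a
  // doubled apostrophe stands for a literal one.
  Sample = 'Hello'#13#10'it''s me';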
You haven't indicated what code you're using to produce the output you've shown, but that's where the problem was. It's not there anymore, and the code that loads the file is correct, so the only place that needs your debugging attention is any code that depended on the old, incorrect format. You'd still do well to replace your code with that of Robmil since it does better at handling (or not handling) exceptions and empty files.
Actually, looking at the real data, your problem is that the file stores binary data, not string data, so interpreting this as a string is not valid at all. The only reason it works at all in Delphi 6 is that non-Unicode Delphi allows you to treat binary data and strings the same way. You cannot do this in Unicode Delphi, nor should you.
The solution to get the actual text from within the file is to read the file as binary data, and then copy any values from this binary data, one byte at a time, to a string if it is a "valid" Ansi character (printable).
I suggest the following code for loading the raw file data:
function LoadFile(const FileName: TFileName): AnsiString;
begin
  with TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite) do
    try
      SetLength(Result, Size);
      if Size > 0 then
        Read(Result[1], Size);
    finally
      Free;
    end;
end;
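The "copy only the printable characters" step described above could then be a separate pass over the loaded data. A sketch (which characters count as printable is an assumption here):

// Keep only printable ANSI characters (taken here to be $20..$7E plus
// TAB, LF and CR - adjust to whatever "printable" means for your data).
function PrintableText(const Raw: AnsiString): AnsiString;
var
  I: Integer;
begin
  Result := '';
  for I := 1 to Length(Raw) do
    if Raw[I] in [#9, #10, #13, #32..#126] then
      Result := Result + Raw[I];
end;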
I use this function to read a file into a string:
function LoadFile(const FileName: TFileName): string;
begin
  with TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite) do
  begin
    try
      SetLength(Result, Size);
      Read(Pointer(Result)^, Size);
    except
      Result := '';
      Free;
      raise;
    end;
    Free;
  end;
end;
Here's the text of the file:
version
Here's the return value of LoadFile:
'ÿþv'#0'e'#0'r'#0's'#0'i'#0'o'#0'n'#0
I want to make a new file containing "verabc". The problem is that I can't manage to replace "sion" with "abc". I am using D2007. If I remove all the #0 characters, the result becomes Chinese characters.
What you think is the text of the file isn't really the text of the file. What you've read into your string variable is accurate. You have a Unicode text file encoded as little-endian UTF-16. The first two bytes are the byte-order mark, and each pair of bytes after that is another character of the string.
If you're reading a Unicode file, you should use a Unicode data type, such as WideString. You'll want to divide the file size by two when setting the length of the string, and you'll want to discard the first two bytes.
If you don't know what kind of file you're reading, then you need to read the first two or three bytes first. If the first two bytes are $ff $fe, as above, then you might have a little-endian UTF-16 file; read the rest of the file into a WideString, or UnicodeString if you have that type. If they're $fe $ff, then it might be big-endian; read the remainder of the file into a WideString and then swap the order of each pair of bytes. If the first two bytes are $ef $bb, then check the third byte. If it's $bf, then they are probably the UTF-8 byte-order mark. Discard all three and read the rest of the file into an AnsiString or an array of bytes, and then use a function like UTF8Decode to convert it into a WideString.
Once you have your data in a WideString, the debugger will show that it contains version, and you should have no trouble using a Unicode-enabled version of StringReplace to do your replacement.
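A sketch of that approach for the common case here, a little-endian UTF-16 file with a BOM (this is not a general-purpose BOM detector, and the function name is made up):

function LoadUtf16LeFile(const FileName: string): WideString;
var
  Stream: TFileStream;
  Bom: array[0..1] of Byte;
  CharCount: Integer;
begin
  Result := '';
  Stream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    if Stream.Size >= 2 then
    begin
      Stream.ReadBuffer(Bom, 2);
      // $FF $FE is the little-endian UTF-16 BOM; if it is absent,
      // rewind and treat the whole file as character data.
      if (Bom[0] <> $FF) or (Bom[1] <> $FE) then
        Stream.Position := 0;
    end;
    // Two bytes per UTF-16 code unit; a trailing odd byte is ignored.
    CharCount := (Stream.Size - Stream.Position) div 2;
    SetLength(Result, CharCount);
    if CharCount > 0 then
      Stream.ReadBuffer(Result[1], CharCount * 2);
  finally
    Stream.Free;
  end;
end;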
It seems that you have loaded a Unicode (UTF-16) encoded text file. The #0 bytes are the high bytes of UTF-16 characters in the Latin (ASCII) range.
If you don't want to deal with unicode text, choose ANSI encoding in your editor when you save the file.
If you need to keep the Unicode encoding, use WideCharToString to convert the data to an ANSI string, or just remove the 0s yourself, though the latter isn't the best solution. Also remove the two leading characters, ÿþ.
The editor puts those bytes there (the byte order mark) to mark the file as Unicode.
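For example, in D2007 the WideCharToString route could look like this (a sketch; LoadFile is the function from the question and the file name is just for illustration):

var
  Raw: AnsiString;   // raw file contents, including the 2-byte BOM
  Wide: WideString;
  Ansi: string;
begin
  Raw := LoadFile('version.txt');
  Delete(Raw, 1, 2);                           // drop the ÿþ byte order mark
  // Reinterpret the remaining byte pairs as UTF-16 characters.
  SetLength(Wide, Length(Raw) div 2);
  if Length(Wide) > 0 then
    Move(Raw[1], Wide[1], Length(Wide) * 2);
  Ansi := WideCharToString(PWideChar(Wide));   // 'version'
end;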