Receiving WM_COPYDATA messages in Delphi XE2 - Unicode related

I have an application, written in Delphi, which I want to use to open files via the Windows "Open With" option. I could do this perfectly happily in pre-Unicode Delphi versions: Windows puts the filename into a WM_COPYDATA message, so I could fish it out using the TCopyDataStruct record. But in the Unicode world this doesn't work; I only get half the filename in the lpData buffer (followed by garbage). When I examine the cbData member of the TCopyDataStruct record, I find it contains the length of the filename in characters (plus 1 for the terminator), not, as I would have expected, the number of bytes, which is of course now twice the number of characters.
Note that the problem is not simply that my Delphi code fails to read the rest of the characters out of lpData^ - I have looked in lpData^, and they are not there.
There are many examples on the web (including on Stack Overflow) of how to avoid this issue if you are generating the WM_COPYDATA message yourself; my problem is that I am not generating it, I am receiving it from Windows (64-bit Win7 or Win8). Is there something Delphi could be putting into the application, which I am not seeing, that converts ANSI strings in lpData to Unicode before I get at the WM_COPYDATA message? And if so, how could I disable it (or make it correct the cbData value)?
Any help would be greatly appreciated.

The system isn't sending the WM_COPYDATA message. One of the apps is doing that. Very likely your own app!
You've probably got code that enforces a single instance. The second instance starts in response to the shell action. It detects an existing app and sends the WM_COPYDATA message. Then the second instance closes. The first instance receives the message and processes it.
The fact that the receiver is a Unicode aware app does not influence the content of the message. The sender determines its content. The system won't magically convert from 8 bit to 16 bit text. How could it? The content is opaque.
So, your next move is to find the code that sends the message and convert it to sending Unicode text.
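For illustration, here is a minimal sketch of a Unicode-correct sender (a hypothetical routine, not the asker's actual single-instance code). The key fix is that cbData is a byte count, so with XE2's UTF-16 strings it must be (Length + 1) * SizeOf(Char):

procedure SendFileNameToFirstInstance(TargetWnd: HWND; const FileName: string);
var
  CDS: TCopyDataStruct;
begin
  CDS.dwData := 0;
  CDS.cbData := (Length(FileName) + 1) * SizeOf(Char); // bytes, not characters
  CDS.lpData := PChar(FileName); // PWideChar in XE2; includes the terminator
  // by convention wParam carries the sender's window handle; 0 here for brevity
  SendMessage(TargetWnd, WM_COPYDATA, 0, LPARAM(@CDS));
end;

On the receiving side, cbData div SizeOf(Char) recovers the character count, and SetString can then copy the text out of lpData without relying on the terminator.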

Related

Delphi - change an (ini) file but keep its CRC16 checksum equal

I am looking for a method to change an ini file but keep the checksum as it was before.
I know I can try to change some bytes until the result fits.
But I would prefer a programmatic way.
Any ideas are much appreciated.
Background:
The program checks the CRC of the ini file; if it does not match, it shows a message box and the user has to click "OK".
The program is launched from a batch file and quits automatically when it's done.
Very often there is no user present, so the program cannot do its job.
AutoIt or similar cannot be installed.
The CRC check has since become obsolete.
The program is more than 10 years old; we cannot reach the developer and have no source code.
Yes, it is relatively easy to modify data to get a chosen CRC, since CRCs are linear functions. My spoof program will do this for you. You choose what CRC you want and the bits in the message you will permit to be changed, and spoof will tell you which of those bits to invert.
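To see the linearity the answer relies on, here is a small demonstration program (assuming the plain CRC-16/ARC variant: reflected polynomial $8005, zero initial value, no final XOR). For equal-length inputs, crc(A xor B) = crc(A) xor crc(B):

program CrcLinearityDemo;
{$APPTYPE CONSOLE}
uses
  SysUtils;

// Bitwise CRC-16/ARC (assumed variant; adjust init/xorout for others)
function Crc16(const Data: array of Byte): Word;
var
  I, J: Integer;
begin
  Result := $0000;
  for I := 0 to High(Data) do
  begin
    Result := Result xor Data[I];
    for J := 1 to 8 do
      if (Result and 1) <> 0 then
        Result := (Result shr 1) xor $A001 // $8005 reflected
      else
        Result := Result shr 1;
  end;
end;

var
  A, B, C: array[0..3] of Byte;
  I: Integer;
begin
  A[0] := $12; A[1] := $34; A[2] := $56; A[3] := $78;
  B[0] := $9A; B[1] := $BC; B[2] := $DE; B[3] := $F0;
  for I := 0 to 3 do
    C[I] := A[I] xor B[I]; // C = A xor B
  // Both lines print the same value; that is what makes spoofing tractable
  WriteLn(Format('crc(A) xor crc(B) = %.4x', [Crc16(A) xor Crc16(B)]));
  WriteLn(Format('crc(A xor B)      = %.4x', [Crc16(C)]));
end.

spoof exploits exactly this property: each bit you are allowed to flip contributes a fixed, independent value to the CRC, so choosing which bits to invert reduces to solving a small linear system over GF(2).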

Sending WM_COPY from a Delphi app. to another process in Windows 7

I have a Delphi (BDS 2006) application which sends keystrokes to QuickBooks accounting software to traverse QuickBooks forms (invoices), copy text from the current edit control to the Windows clipboard (to gather data), do some calculations based on the gathered data, and finally write results on the form by sending keystrokes.
This application has been developed over a number of years, uses extensive (for me at least) Windows API techniques to identify the foreground window, focused window, etc., and is used by dozens of users worldwide...which I only tell you as evidence that it works on a lot of systems.
But not all. Lately I'm getting a lot of reports of failures, on Windows 7 systems (the version of QuickBooks doesn't seem to matter). Debugging versions sent to the customers who've reported problems show that it is not copying anything to the clipboard--though it still seems to be able to do everything else (send keystrokes to traverse the form, and keystrokes to paste in the calculation result...which unfortunately, is now always zero because no data was gathered.)
Here's the code I use to send a WM_COPY message to the edit control window in QuickBooks. (We can't get this code to fail here, on either XP or Windows 7 systems--but it doesn't work for several users.)
var
  iResult: DWORD;
begin
  ...
  //Edit control has the focus on the QB form, so try to copy its contents
  if SendMessageTimeout(Wnd, WM_COPY, 0, 0,
                        SMTO_ABORTIFHUNG or SMTO_NORMAL,
                        2000,
                        iResult) = 0 then
  begin //0 = Failed or timed out
    //NOTE: Users DO NOT get the following message--the
    //SendMessageTimeout() simply returns without error, as if the
    //WM_COPY is being sent correctly.
    ShowMessage('SendMessageTimeout FAILED');
    Abort;
  end;
  //At this point, the clipboard has nothing on it, on users'
  //machines where it fails to work.
  ...
end;
Not wanting to wear out the patience of the end users to whom we're sending debug versions, I'm looking for ideas before we send out anything else for them to try/test...
Notes/Questions:
All other keystrokes are sent via SendInput, and they work fine. I believe we began using SendMessageTimeout(WM_COPY) instead of sending Ctrl-C as a keystroke for speed reasons - it allowed us to access the clipboard immediately on return, instead of waiting an unknown/indefinite amount of time for the Ctrl-C to be processed by QuickBooks.
I believe we've asked users to try RunAs...Administrator on our application, but that had no effect (I'll have to verify that's been done).
I'm wondering if the problem could be due to UAC conflicts? Our application currently is not digitally signed and uses no manifest. I've been reading about adding a manifest with UIAccess=True in it. But if our application can already send keystrokes to QuickBooks without problems, would setting UIAccess=True have any effect on allowing the SendMessageTimeout() to succeed? And will I need to use a digital cert. to get the UIAccess setting to have any effect?
If SendMessage won't work without digitally signing & UIAccess in the manifest, is it possible we could fall back to sending Ctrl-C as a keystroke? (I wouldn't think so; surely Microsoft wouldn't allow that end-run around a security concept.)
I'd appreciate any comments to straighten out my thinking...
This might be related to "User Interface Privilege Isolation" (UIPI) instead of UAC. Check the integrity level of each process. A lower-integrity process is not allowed to send window messages to a higher-integrity process, unless the higher-integrity process explicitly allows it by calling ChangeWindowMessageFilter/Ex().
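For reference, this is roughly how a higher-integrity receiver opts in (a hedged sketch; it only helps if you control the elevated process, which is presumably not the case with QuickBooks). The function is loaded dynamically because older Delphi Windows units do not declare it:

const
  MSGFLT_ADD = 1;
type
  TChangeWindowMessageFilter = function(Msg: UINT; Action: DWORD): BOOL; stdcall;
var
  CWMF: TChangeWindowMessageFilter;
begin
  @CWMF := GetProcAddress(GetModuleHandle('user32.dll'),
    'ChangeWindowMessageFilter');
  if Assigned(CWMF) then // only available on Vista and later
    CWMF(WM_COPY, MSGFLT_ADD); // allow WM_COPY from lower-integrity senders
end;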
Check whether the Skype plugin for Internet Explorer is installed on these systems (IE > Options > Programs > Add-ons). There is a buggy version of this plugin that corrupts data on the clipboard. If the plugin is installed, remove it and test again.

Indy TCPClient and rogue byte in InputBuffer

I am using the following few lines of code to write and read from an external Modem/Router (aka device) via IP.
TCPClient.IOHandler.Write(MsgStr);
TCPClient.IOHandler.InputBuffer.Clear;
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);
MsgStr is a string type which contains the text that I am sending to my device.
Buffer is declared as TIdBytes.
I can confirm that IOHandler.InputBufferIsEmpty returns True immediately prior to calling ReadBytes.
I'm expecting the first 10 bytes received to be very specific hence from my point of view I am only interested in the first 10 bytes received after I've sent my string.
The trouble I am having is that, with certain devices, the first read after establishing a connection puts a rogue (random) byte at the start of my Buffer output. The bytes that follow are correct.
E.g. the 10 bytes I'm expecting might be #6A1EF1090#3, but what I get is .#6A1EF1090 - in this example there is a full stop (the rogue byte) where there shouldn't be one.
If I try to send again, it works fine. (ie the 2nd Write sent after a connection has been established). What's weird (to me) is using a Socket Sniffer doesn't show the random byte being returned. If I create my own "server" to receive the response and send something back it works fine 100% of the time. Other software - ie, not my software - communicates fine with the device (but of course I have no idea how they're parsing the data).
Is there anything that I'm doing incorrectly above that would cause this - bearing in mind it only occurs the first time I'm using Write after establishing a connection?
Thanks
EDIT
I'm using Delphi 7 and Indy 10.5.8
UPDATE
Ok. After much testing and looking, I am no closer to a solution. I am seeing two main scenarios: 1 - the first byte is missing, and 2 - an "introduced" byte appears at the start of the received packet. TIdLogEvent and TIdLogDebug both show either the missing byte or the introduced initial byte, as appropriate. So my ReadBytes statement above is consistently showing what Indy believes is there (in my opinion).
Also, to test it further, I downloaded and installed ICS components. Unfortunately (or fortunately depending on how you look at it) this didn't show the same issues as Indy. This didn't show the first byte missing nor did it show an introduced byte at the beginning. However, I have only done superficial testing, but Indy produces the behaviour "pretty much straight away" whereas ICS hasn't produced it at all yet.
If anyone is interested I can supply a small demo app illustrating the issue and IP I connect to - it's a public IP so anyone can access it. Otherwise for now, I'll just have to work around it. I'm reluctant to switch to ICS as ICS may work fine in this instance and given the use of this socket stuff is pretty much the whole crux of the program, it would be nasty to have to entirely replace Indy with ICS.
The last parameter (True) in
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);
causes the read to append to, rather than replace, the buffer content.
This requires that the size and content of the buffer are set up correctly first.
If the parameter is False, the buffer content is replaced by the given number of bytes.
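If stale content in Buffer is the cause of the rogue leading byte, either clear the dynamic array before an appending read or pass False so the read replaces it:

SetLength(Buffer, 0); // drop any leftover bytes from a previous read
TCPClient.IOHandler.ReadBytes(Buffer, 10, True);

// ...or simply let ReadBytes replace the contents:
TCPClient.IOHandler.ReadBytes(Buffer, 10, False);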
ReadBytes() does not inject rogue bytes into the buffer, so there are only two possibilities I can think of right now given the limited information you have provided:
The device really is sending an extra byte upon initial connection, like mj2008 suggested. If a packet sniffer is not detecting it, try attaching one of Indy's own TIdLog... components to your TIdTCPClient, such as TIdLogFile or TIdLogEvent, to verify what TIdTCPClient is actually receiving from the socket.
You have another thread reading from the same connection at the same time, corrupting the InputBuffer. Even a call to TIdTCPClient.Connected() will perform a read. Don't perform reads in multiple threads at the same time, if you are using threads.

How to peek at STDIN with Delphi 7?

In a Delphi 7 console application, how can I check whether stdin holds a character, without blocking until one is entered?
My plan is that this console program will be executed by a GUI program, and its stdin will be written to by the GUI program.
So I want my console app to periodically check stdin, but I can't find a way of doing this without blocking.
I have looked at this answer, which gets me a stream pointing to stdin, but there's still no way to "peek" as far as I can see.
I think you have already found the right way to read stdin. It is meant to block when there's nothing more to be read.
The standard way to handle this is to use a separate thread to handle the pipe. When it receives new data from stdin it signals this to the processing thread, for example with a message passing mechanism.
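As an illustration, here is a minimal sketch of such a reader thread (hypothetical Delphi 7 code, assuming Classes and Windows in the uses clause; the signalling mechanism is application-specific):

type
  TStdInReader = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TStdInReader.Execute;
var
  Buf: array[0..1023] of AnsiChar;
  BytesRead: DWORD;
begin
  while not Terminated do
  begin
    // blocks until the GUI parent writes to the pipe or closes it
    if not ReadFile(GetStdHandle(STD_INPUT_HANDLE), Buf, SizeOf(Buf),
      BytesRead, nil) or (BytesRead = 0) then
      Break; // pipe closed
    // hand the BytesRead bytes to the processing thread here,
    // e.g. via a thread-safe queue or PostMessage
  end;
end;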
Having said all that, if you really want to poll you can call PeekNamedPipe to check if there is data in the pipe.
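A minimal polling sketch along those lines (this assumes stdin really is a pipe, as it will be when the GUI program spawns the console app with redirected handles; PeekNamedPipe fails on a genuine console handle):

function StdInHasData: Boolean;
var
  BytesAvail: DWORD;
begin
  BytesAvail := 0;
  Result := PeekNamedPipe(GetStdHandle(STD_INPUT_HANDLE), nil, 0, nil,
    @BytesAvail, nil) and (BytesAvail > 0);
end;

When it returns True, a subsequent blocking read of up to BytesAvail bytes is guaranteed not to block.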
You could, as the other answer says, use threads, but even then you might have problems with the threading method unless you also investigate overlapped I/O.
I normally use overlapped I/O with serial ports rather than stdin, where "read a character if one is ready" is commonly needed and non-blocking I/O is the usual way of working; you should be able to adapt the technique shown here. However, if I were writing an application that was keyboard driven (rather than driven purely by, say, a file redirected to standard input), I would let go of stdin and use a CRT-type unit. So if you don't mind letting go of stdin and simply want a keyboard-driven input model, you could look at console-based APIs and abandon the very limiting stdin capabilities. For an example of a "kbhit" function that uses the Win32 console APIs, see here.
There is no other way (as far as I know) than reading from the pipe inside a separate thread. Otherwise, as you have already seen, the ReadFile operation will block. I wrote an example of how to do this; a sample project is also available: redirect stdoutput
Edit: Well, reading your question another time, I understand that your problem lies within the console program, not the calling application. I wonder what your console application expects; normally a console application knows when it needs input and cannot proceed until the user enters this information. Do you need to check for an exit?
For a TStream, if you call Read() the function result is the number of bytes actually read, which will be zero if there was nothing there, even if you asked for more. From the Delphi help for Classes.TStream.Read:
Read is used in cases where the number of bytes to read from the stream is not necessarily fixed. It attempts to read up to Count bytes into buffer and returns the number of bytes actually read.

Strange rare out-of-order data received using Indy

We're having a bizarre problem with Indy10 where two large strings (a few hundred characters each) that we send out one after the other using TCP are appearing at the other end intertwined oddly. This happens extremely infrequently.
Each string is a complete XML message terminated with a LF and in general the READ process reads an entire XML message, returning when it sees the LF.
The call that actually sends the message is protected by a critical section around the IOHandler's WriteLn method, so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented/working properly.) This problem happens very rarely. The symptoms are odd: when we send string A followed by string B, what we receive at the other end (on the rare occasions when it fails) is the trailing section of string A by itself (i.e., there's a LF at the end of it), followed by the leading section of string A, and then the entire string B followed by a single LF. We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters in the string with spaces before appending the LF and sending it.
We have log mechanisms inside the critical sections on both the transmission and receiving ends and so we can see this behavior at the "wire".
We're completely baffled and wondering (although always the lowest possibility) whether there could be some low-level Indy issues that might cause this issue, e.g., buffers being sent in the wrong order....very hard to believe this could be the issue but we're grasping at straws.
Does anyone have any bright ideas?
You could try Wireshark to find out how the data is transferred. This way you can find out whether the problem is in the server or in the client. Also remember to use TCP to get "guaranteed" valid data in the right order.
Are you using TCP or UDP? If you are using UDP, it is possible (and expected) that the UDP packets can be received in a different order than they were transmitted due to the routing across the network. If this is the case, you'll need to add some sort of packet ID to each UDP packet so that the receiver can properly order the packets.
Do you have multiple threads reading from the same socket at the same time on the receiving end? Even just to query the Connected() status causes a read to occur. That could cause your multiple threads to read the inbound data and store it into the IOHandler.InputBuffer in random order if you are not careful.
Have you checked the Nagle settings of the IOHandler? We had a similar problem that we fixed by setting UseNagle to false. In our case sending and receiving large amounts of data in bursts was slow due to Nagle coalescing, so it's not quite the same as your situation.
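For reference, disabling Nagle in Indy 10 is a one-liner once connected (assuming your Indy build exposes UseNagle on TIdSocketHandle, which recent Indy 10 releases do):

TCPClient.Connect;
TCPClient.Socket.Binding.UseNagle := False; // sets TCP_NODELAY on the socket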
