I'm trying to generate a list of files matching a certain file mask and Indy falls over with this error
EIdReplyRFCError with message '.': No such file or directory.
I've tried several variations and this is the result:
FTP.List( aFiles, '', true ); => this works
FTP.List( aFiles, '*.*', false ); => this works too
FTP.List( aFiles, '*.*', true ); => this fails
FTP.List( aFiles, '*.zip', true ); => this fails too (despite it being the example in the latest documentation)
FTP.List( '*.*', false ); => this works
FTP.List( '*.*', true ); => this fails
I'm using Delphi XE5 & Indy version 10.6. The same issue exists in XE8 if relevant.
Has the functionality changed, making the documentation wrong, or is this a bug in Indy?
I need the "details" so I can compare timestamps & sizes too.
This is not a bug in TIdFTP. It is more an omission in the Indy documentation.
EIdReplyRFCError means the FTP server itself is reporting an error in response to the command that TIdFTP.List() is sending. Depending on the values of the ADetails parameter and TIdFTP's UseMLIS+CanUseMLS properties, List() can send one of three different commands:
ADetails=False:
NLST [ASpecifier]
ADetails=True:
TIdFTP.UseMLIS=True and TIdFTP.CanUseMLS=True:
MLSD [ASpecifier]
TIdFTP.UseMLIS=False or TIdFTP.CanUseMLS=False:
LIST [ASpecifier]
Thus:
FTP.List( aFiles, '', true ); // this works
// sends either 'LIST' or 'MLSD'
FTP.List( aFiles, '*.*', false ); // this works too
// sends 'NLST *.*'
FTP.List( aFiles, '*.*', true ); // this fails
// sends either 'LIST *.*' or 'MLSD *.*'
FTP.List( aFiles, '*.zip', true ); // this fails too
// sends either 'LIST *.zip' or 'MLSD *.zip'
FTP.List( '*.*', false ); // this works
// sends 'NLST *.*'
FTP.List( '*.*', true ); // this fails
// sends either 'LIST *.*' or 'MLSD *.*'
Note that all of the commands that "fail" have something in common - they are the ones that may send an MLSD command with an ASpecifier argument.
Per RFC 959, which defines the LIST and NLST commands:
LIST (LIST)
This command causes a list to be sent from the server to the
passive DTP. If the pathname specifies a directory or other
group of files, the server should transfer a list of files
in the specified directory. If the pathname specifies a
file then the server should send current information on the
file. A null argument implies the user's current working or
default directory. ...
NAME LIST (NLST)
This command causes a directory listing to be sent from
server to user site. The pathname should specify a
directory or other system-specific file group descriptor; a
null argument implies the current directory. ...
Per RFC 3659, which defines the MLSD command:
The MLST and MLSD commands each allow a single optional argument.
This argument may be either a directory name or, for MLST only, a
file name. For these purposes, a "file name" is the name of any
entity in the server NVFS which is not a directory. Where TVFS is
supported, any TVFS relative pathname valid in the current working
directory, or any TVFS fully qualified pathname, may be given. If a
directory name is given then MLSD must return a listing of the
contents of the named directory, otherwise it issues a 501 reply, and
does not open a data connection. ...
If no argument is given then MLSD must return a listing of the
contents of the current working directory, and MLST must return a
listing giving information about the current working directory
itself. ...
...
If the Client-FTP sends an invalid argument, the server-FTP MUST
reply with an error code of 501.
*.* and *.zip are not directory names, thus the server will fail if TIdFTP.List() sends an MLSD *.* or MLSD *.zip command. So it stands to reason that TIdFTP.UseMLIS and TIdFTP.CanUseMLS are likely both True in your case (UseMLIS is True by default, and CanUseMLS is commonly True on modern FTP servers).
The MLSD command does not support server-side filtering the way LIST/NLST do, so you cannot use masks like *.* and *.zip with MLSD. You would have to retrieve the full directory listing and then ignore any entries you are not interested in. Alternatively, set TIdFTP.UseMLIS to False before calling TIdFTP.List(), but then you run the risk of TIdFTP.DirectoryListing incorrectly parsing the listing on some servers: the format used by LIST was never standardized, and there are hundreds of custom formats in use all over the Internet (which is why TIdFTP in Indy 10 includes dozens of listing parsers for when LIST is used). MLSx, by contrast, has a standardized format, which is why it was introduced in the first place - to address the shortcomings of LIST.
Thus, what this all comes down to is - when TIdFTP.UseMLIS and TIdFTP.CanUseMLS are both True, ASpecifier MUST be blank or a directory, NOT a file mask.
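If you need the details for masked files, one workaround is to request the full detailed listing with a blank ASpecifier and filter on the client side. A minimal sketch (assuming the Masks unit's MatchesMask for the wildcard test; the routine name and units are illustrative, not taken from the Indy docs):

uses
  Classes, IdFTP, IdFTPList, Masks;

// Collects the names of *.zip entries (with details available) by fetching
// the whole listing and filtering client-side.
// FTP is assumed to be already connected and logged in.
procedure ListZipFilesWithDetails(FTP: TIdFTP; Dest: TStrings);
var
  i: Integer;
  Item: TIdFTPListItem;
begin
  FTP.List('', True); // blank ASpecifier: list the current directory
  for i := 0 to FTP.DirectoryListing.Count - 1 do
  begin
    Item := FTP.DirectoryListing[i];
    if (Item.ItemType = ditFile) and MatchesMask(Item.FileName, '*.zip') then
      // Item.Size and Item.ModifiedDate are available here for the comparisons you need
      Dest.Add(Item.FileName);
  end;
end;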
The TIdFTP.List() documentation does state that List() may internally call TIdFTP.ExtListDir() to send an MLSD command, but it does not specifically mention this particular restriction on the ASpecifier parameter in that case:
If CanUseMLS contains True, the ExtListDir is called to capture and store the results of the FTP MLSD command in the ADest parameter variable instead of the LIST or NLST commands. No additional processing is performed in the List method under this circumstance, and the method is exited.
When ADetails is False, only the file or directory name is returned in the ADest string list using the FTP NLST command. When ADetails is True, List can return FTP server-dependent details including the file size, date modified, and file permissions for the Owner, Group, and User using the FTP LIST command.
The TIdFTP.ExtListDir() documentation does state that its input parameter must be a directory name, though:
The MLSD command, supported in ExtListDir, accepts an optional directory name or relative path in ADirectory for the directory listing. If an empty string is passed in ADirectory, the current directory is used for the directory listing operation.
On a side note: the TIdFTP.DirFormat property will tell you which listing format was detected after TIdFTP.DirectoryListing has parsed the results. Or you can look at the Details and UsedMLS properties of TIdFTP.ListResult (type-cast it to TIdFTPListResult to access those properties) to deduce which command was sent by TIdFTP.List() (if successful).
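For example, a minimal sketch of that check (assuming TIdFTPListResult is in scope from the IdFTP unit; adjust names to your project):

uses
  IdFTP;

// Describes how the last listing was produced, based on the DirFormat
// property and the ListResult type-cast mentioned above.
function DescribeLastListing(FTP: TIdFTP): string;
begin
  Result := 'Detected format: ' + FTP.DirFormat;
  if FTP.ListResult is TIdFTPListResult then
  begin
    if TIdFTPListResult(FTP.ListResult).UsedMLS then
      Result := Result + ' (MLSD was used)'
    else if TIdFTPListResult(FTP.ListResult).Details then
      Result := Result + ' (LIST was used)'
    else
      Result := Result + ' (NLST was used)';
  end;
end;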
An alternative solution is to include IdAllFTPListParsers in your uses clause and to disable UseMLIS, like so:
uses
  ..., IdAllFTPListParsers;

procedure TForm1.DoThis;
var
  i: Integer;
begin
  if not IdFTP1.Connected then
    IdFTP1.Connect;
  IdFTP1.UseMLIS := False;
  IdFTP1.List;
  IdFTP1.TransferType := ftBinary;
  for i := 0 to IdFTP1.DirectoryListing.Count - 1 do
  begin
    // process IdFTP1.DirectoryListing[i] here (name, size, date, ...)
    // and call IdFTP1.Get(...) for the files you want
  end;
end;
I'm making a very simple Delphi console application ({$APPTYPE CONSOLE}) with a single TChromiumWindow on the main form. The purpose of the application is to retrieve a webpage, process the HTML and output some JSON to the console. This cannot be done using plain HTTP requests due to the nature of the webpage, which requires running some JavaScript as well.
Everything works as expected, except for one problem. The chromium components output some error messages to the console as well, which makes my JSON invalid! For example, I always get the following two error messages on startup:
[0529/133941.811:ERROR:gpu_process_transport_factory.cc(990)] Lost UI shared context.
[0529/133941.832:ERROR:url_request_context_getter_impl.cc(130)] Cannot use V8 Proxy resolver in single process mode.
Of course, the best solution would be to not get any error messages in the first place, but for several reasons (mostly to do with company legacy code) I can't, for example, disable single-process mode.
So the next best thing would be to keep these error messages from being printed to the console. I've tried setting
GlobalCEFApp.LogSeverity := LOGSEVERITY_DISABLE;
but that didn't help. Specifying a logfile using GlobalCEFApp.LogFile doesn't help either.
So how can I prevent the Chromium components from writing to the console at all?
The TChromium component provides an OnConsoleMessage event with the following signature:
TOnConsoleMessage = procedure(Sender: TObject; const browser: ICefBrowser;
const message, source: ustring; line: Integer;
out Result: Boolean) of object;
If you handle this event and set the Result parameter to True, the message output to the console is suppressed.
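For example, a minimal handler (assuming a TChromium named Chromium1 and the CEF interface units already in your uses clause):

procedure TForm1.Chromium1ConsoleMessage(Sender: TObject;
  const browser: ICefBrowser; const message, source: ustring;
  line: Integer; out Result: Boolean);
begin
  // Returning True marks the message as handled, so CEF does not
  // write it to the console.
  Result := True;
end;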
Set LogSeverity to LogSeverity.Fatal or any other desired level, as in this CefSharp (C#) example:
var settings = new CefSettings()
{
    // By default CefSharp will use an in-memory cache; you need to specify a cache folder to persist data
    CachePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "CefSharp\\Cache"),
    // Set log severity so that only fatal errors show up
    LogSeverity = LogSeverity.Fatal,
};
// Auto-shutdown when closing
CefSharpSettings.ShutdownOnExit = true;
// Perform dependency check to make sure all relevant resources are in our output directory
Cef.Initialize(settings, performDependencyCheck: true, browserProcessHandler: null);
I need to get certain special windows folders in Windows 10 from a TD 6.3 program - for instance, Program Files, user, or Appdata. Is there a certain function for this? I've looked through the help but can't seem to find it.
I also need to check if the program currently has read/write access to a folder I specify. I suspect the latter can be achieved by trying a SalFileOpen or SalFileWrite respectively and checking the result.
The point is that I need to get some temporary files from a network location to the local machine to be able to use them, as I only have read access to the network drive.
As of now I've simply created a temp folder in C:\. This works perfectly in debug, but when I build the program and run it, for some reason it doesn't get the files and the temp folder stays empty. Thinking this was a permission issue, I tried running as admin, to no avail.
I'm kind of at a loss as to why it won't work, so any input is appreciated.
I simply copy the needed files from the network drive to the temp folder using SalFileCopy with the overwrite flag set to true.
You can use Windows API functions for that.
To get the temp path you can use the following:
Define an external function:
Kernel32.dll
Function: GetTempPathW
Return
DWORD
Parameters:
Number: DWORD ! nBufferLength [in] The size of the string buffer identified by lpBuffer, in TCHARs.
Receive String: LPWSTR ! lpBuffer [out] A pointer to a string buffer that receives the null-terminated string
Use it like that:
Function: GetTempPath ! __exported
Description: WinAPI: This function retrieves the path of the directory designated for temporary files.
Returns
String:
Parameters
Local variables
String: sStrBuffer
Number: nBuffLen
Number: nNumChars
Actions
Set nBuffLen = 0
Call SalSetBufferLength( sStrBuffer, nBuffLen )
Set nBuffLen = GetTempPathW( nBuffLen, sStrBuffer )
Call SalSetBufferLength( sStrBuffer, nBuffLen * 2 )
Call GetTempPathW( nBuffLen, sStrBuffer )
If SalStrRightX( sStrBuffer, 1 ) != '\\'
Set sStrBuffer = sStrBuffer || '\\'
Return sStrBuffer
To check if you have write access just create a file in that folder and delete it afterwards.
Here is some more info about the Windows API function:
https://msdn.microsoft.com/de-de/library/windows/desktop/aa364992(v=vs.85).aspx
In case you need any path of the environment variables (e.g. appdata) you can use
VisDosGetEnvString( 'appdata' )
The function is part of the Visual Toolchest (the vt.apl library within the installation directory).
In my Informix 4GL program, I have an input field where the user can insert a URL and the feed is later being sent over to the web via a script.
How can I validate the URL at the time of input, to ensure that it's a live link? Can I make a call and see if I get back any errors?
I4GL checking the URL
There is no built-in function to do that (URLs didn't exist when I4GL was invented, amongst other things).
If you can devise a C method to do that, you can arrange to call that method through the C interface. You'll write the method in native C, and then write an I4GL-callable C interface function using the normal rules. When you build the program with the I4GL C-code compiler, you'll link the extra C functions too. If you build the program with I4GL-RDS (p-code), you'll need to build a custom runner with the extra function(s) exposed. All of this is standard technique for I4GL.
In general terms, the C interface code you'll need will look vaguely like this:
#include <fglsys.h>

// Standard interface for I4GL-callable C functions
extern int i4gl_validate_url(int nargs);

// Using obsolescent interface functions
int i4gl_validate_url(int nargs)
{
    if (nargs != 1)
        fgl_fatal(__FILE__, __LINE__, -1318);
    char url[4096];
    popstring(url, sizeof(url));
    int r = validate_url(url); // Your C function
    retint(r);
    return 1;
}
You can and should check the manuals but that code, using the 'old style' function names, should compile correctly. The code can be called in I4GL like this:
DEFINE url CHAR(256)
DEFINE rc INTEGER
LET url = "http://www.google.com/"
LET rc = i4gl_validate_url(url)
IF rc != 0 THEN
ERROR "Invalid URL"
ELSE
MESSAGE "URL is OK"
END IF
Or along those general lines. Exactly what values you return depends on your decisions about how to return a status from validate_url(). If need so be, you can return multiple values from the interface function (e.g. error number and text of error message). Etc. This is about the simplest possible design for calling some C code to validate a URL from within an I4GL program.
Modern C interface functions
The function names in the interface library were all changed in the mid-00's, though the old names still exist as macros. The old names were:
popstring(char *buffer, int buflen)
retint(int retval)
fgl_fatal(const char *file, int line, int errnum)
You can find the revised documentation at IBM Informix 4GL v7.50.xC3: Publication library in PDF in the 4GL Reference Manual, and you need Appendix C "Using C with IBM Informix 4GL".
The new names start ibm_lib4gl_:
ibm_libi4gl_popMInt()
ibm_libi4gl_popString()
As to the error reporting function, there is one — it exists — but I don't have access to documentation for it any more. It'll be in the fglsys.h header. It takes an error number as one argument; there's the file name and a line number as the other arguments. And it will, presumably, be ibm_lib4gl_… and there'll probably be Fatal or perhaps fatal (or maybe Err or err) in the rest of the name.
I4GL running a script that checks the URL
Wouldn't it be easier to write a shell script to get the status code? That might work if I can return the status code or any existing results back to the program into a variable? Can I do that?
Quite possibly. If you want the contents of the URL as a string, though, you might end up wanting to call C. It is certainly worth thinking about whether calling a shell script from within I4GL is doable. If so, it will be a lot simpler (RUN "script", IIRC, where the literal string would probably be replaced by a built-up string containing the command and the URL). I believe there are file I/O functions in I4GL now, too, so if you can get the script to write a file (trivial), you can read the data from the file without needing custom C. For a long time, you needed custom C to do that.
I just need to validate the URL before storing it into the database. I was thinking about:
#!/bin/bash
read -p "URL to check: " url
if curl --output /dev/null --silent --head --fail "$url"; then
printf '%s\n' "$url exist"
else
printf '%s\n' "$url does not exist"
fi
but I just need the output instead of /dev/null to be into a variable. I believe the only option is to dump the output into a temp file and read from there.
Instead of having I4GL run the code to validate the URL, have I4GL run a script to validate the URL. Use the exit status of the script and dump the output of curl into /dev/null.
FUNCTION check_url(url)
DEFINE url VARCHAR(255)
DEFINE command_line VARCHAR(255)
DEFINE exit_status INTEGER
LET command_line = "check_url ", url
RUN command_line RETURNING exit_status
RETURN exit_status
END FUNCTION {check_url}
Your calling code can analyze exit_status to see whether it worked. A value of 0 indicates success; non-zero indicates a problem of some sort, which can be deemed 'URL does not work'.
Make sure the check_url script (a) exits with status zero on success and non-zero on any sort of failure, and (b) doesn't write anything to standard output (or standard error) by default. The writing to standard error or output will screw up screen layouts, etc, and you do not want that. (You can obviously have options to the script that enable standard output, or you can invoke the script with options to suppress standard output and standard error, or redirect the outputs to /dev/null; however, when used by the I4GL program, it should be silent.)
Your 'script' (check_url) could be as simple as:
#!/bin/bash
exec curl --output /dev/null --silent --head --fail "${1:-http://www.example.com/}"
This passes the first argument to curl, or the non-existent example.com URL if no argument is given, and replaces itself with curl, which generates a zero/non-zero exit status as required. You might add 2>/dev/null to the end of the command line to ensure that error messages are not seen. (Note that it will be hell debugging this if anything goes wrong; make sure you've got provision for debugging.)
The exec is a minor optimization; you could omit it with almost no difference in result. (I could devise a scheme that would probably spot the difference; it involves signalling the curl process, though — kill -9 9999 or similar, where the 9999 is the PID of the curl process — and isn't of practical significance.)
Given that the script is just one line of code that invokes another program, it would be possible to embed all that in the I4GL program. However, having an external shell script (or Perl script, or …) has merits of flexibility; you can edit it to log attempts, for example, without changing the I4GL code at all. One more file to distribute, but better flexibility — keep a separate script, even though it could all be embedded in the I4GL.
As Jonathan said, "URLs didn't exist when I4GL was invented, amongst other things". What you will find is that the products that have grown to supersede Informix 4GL, such as FourJs Genero, cater for new technologies and other things invented after I4GL.
Using FourJs Genero, the code below will do what you are after, using the Informix 4GL syntax you are familiar with:
IMPORT com

MAIN
    -- Should succeed and display 1
    DISPLAY validate_url("http://www.google.com")
    DISPLAY validate_url("http://www.4js.com/online_documentation/fjs-fgl-manual-html/index.html#c_fgl_nf.html") -- link to some of the features added to I4GL by Genero

    -- Should fail and display 0
    DISPLAY validate_url("http://www.google.com/testing")
    DISPLAY validate_url("http://www.google2.com")
END MAIN

FUNCTION validate_url(url)
    DEFINE url STRING
    DEFINE req com.HttpRequest
    DEFINE resp com.HttpResponse

    -- Returns TRUE if http request to a URL returns 200
    TRY
        LET req = com.HttpRequest.create(url)
        CALL req.doRequest()
        LET resp = req.getResponse()
        IF resp.getStatusCode() = 200 THEN
            RETURN TRUE
        END IF
        -- May want to handle other HTTP status codes
    CATCH
        -- May want to capture case if not connected to internet etc
    END TRY

    RETURN FALSE
END FUNCTION
I'm wondering if someone has an example of how the TJvProgramVersionCheck component can be used to perform the check via HTTP.
The example in the JVCL examples dir doesn't show how to use HTTP.
thank you
The demo included in your $(JVCL)\Examples\JvProgramVersionCheck folder seems to be able to do so. Edit the properties of the JvProgramVersionHTTPLocation, and add the URL to its VersionInfoLocation list (a TStrings). You can also set up any username, password, proxy, and port settings if needed.
You also need to add an OnLoadFileFromRemote event handler. I don't see anything in the demo that addresses that requirement, but the source code says:
{ Simple HTTP location class with no http logic.
The logic must be implemented manually in the OnLoadFileFromRemote event }
It appears from the parameters that event receives that you do your checking there:
function TJvProgramVersionFTPLocation.LoadFileFromRemoteInt(
const ARemotePath, ARemoteFileName, ALocalPath, ALocalFileName: string;
ABaseThread: TJvBaseThread): string;
So you'll need to add an event handler for this event, and then change the TJVProgramVersionCheck.LocationType property to pvltHTTP and run the demo. After testing, it seems you're provided the server and filename for the remote version, and a local path and temp filename for the file you download. The event handler's Result should be the full path and filename of the newly downloaded file. Your event handler should take care of the actual retrieval of the file.
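For the retrieval itself, the handler could delegate to something like this hedged helper (it assumes Indy's TIdHTTP is available and that the remote path plus file name form a plain HTTP URL; wire it into the exact event signature declared in JvProgramVersionCheck.pas):

uses
  SysUtils, Classes, IdHTTP;

// Downloads ARemotePath + ARemoteFileName over HTTP into ALocalPath + ALocalFileName.
// Returns the full local file name of the downloaded file - the value the
// event handler is expected to return (see the note above).
function DownloadRemoteFile(const ARemotePath, ARemoteFileName,
  ALocalPath, ALocalFileName: string): string;
var
  HTTP: TIdHTTP;
  FS: TFileStream;
  LocalFile: string;
begin
  LocalFile := IncludeTrailingPathDelimiter(ALocalPath) + ALocalFileName;
  HTTP := TIdHTTP.Create(nil);
  try
    FS := TFileStream.Create(LocalFile, fmCreate);
    try
      HTTP.Get(ARemotePath + ARemoteFileName, FS);
    finally
      FS.Free;
    end;
  finally
    HTTP.Free;
  end;
  Result := LocalFile;
end;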
There are a couple of additional types defined in JvProgramVersionCheck.pas (TJvProgramVersionHTTPLocationICS and TJvProgramVersionHTTPLocationIndy), both protected by compiler defines so they don't exist in the default compilation. However, setting the ICS-related define resulted in lots of compilation errors (it apparently was written against an old version of ICS), and setting the Indy define (and then setting it again to use Indy10 instead) allowed it to compile but didn't change any of the behavior. I'm going to look more into this later today.
Also, make sure that the VersionInfoLocation entry is only the URL (without the filename); the filename itself goes in the VersionInfoFileName property. If you put it in the URL, it gets repeated (as in http://localhost/Remote/ProjectVersions_http.iniProjectVersions_http.ini, and will fail anyway. (I found this while tracing through the debugger trying to solve the issue.)
Finally...
The solution is slightly (but not drastically) complicated. Here's what I did:
Copy JvProgramVersionCheck.pas to the demo folder. (It needs to be recompiled because of the next step.)
Go to Project->Options->Directories and Conditionals, and add the following line to the DEFINES entry:
USE_3RDPARTY_INDY10;USE_THIRDPARTY_INDY;
Delete the JvProgramVersionHTTPLocation component from the demo form.
Add a new private section to the form declaration:
private
HTTPLocation: TJvProgramVersionHTTPLocationIndy;
In the FormCreate event, add the following code:
procedure TForm1.FormCreate(Sender: TObject);
const
  RemoteFileURL = 'http://localhost/';
  RemoteFileName = 'ProjectVersions_http.ini';
begin
  HTTPLocation := TJvProgramVersionHTTPLocationIndy.Create(Self); // Self means we don't free
  HTTPLocation.VersionInfoLocationPathList.Add(RemoteFileURL);
  HTTPLocation.VersionInfoFileName := RemoteFileName;
  ProgramVersionCheck.LocationHTTP := HTTPLocation;
  ProgramVersionCheck.LocationType := pvltHTTP;
  VersionCheck; // This line is already there
end;
In the ProgramVersionCheck component properties, expand the VersionInfoFileOptions property, and change the FileFormat from hffXML to hffIni.
Delete or rename the versioninfolocal.ini from the demo's folder. (If you've run the app once, it stores the http location info, and the changes above are overwritten. This took a while to track down.)
Make sure your local http server is running, and the ProjectVersions_http.ini file is in the web root folder. You should then be able to run the demo. Once the form appears, click on the Edit History button to see the information retrieved from the remote version info file. You'll also have a new copy of the versioninfolocal.ini that has the saved configuration info you entered above.
Microsoft has recently broken our longtime (and officially recommended by them) code to read the version of Excel and its current omacro security level.
What used to work:
// Get the program associated with workbooks, e.g. "C:\Program Files\...\Excel.exe"
SHELLAPI.FindExecutable( 'OurWorkbook.xls', ...)
// Get the version of the .exe (from its Properties...)
WINDOWS.GetFileVersionInfo()
// Use the version number to access the registry to determine the security level
// '...\software\microsoft\Office\' + VersionNumber + '.0\Excel\Security'
(I was always amused that the security level was for years in an insecure registry entry...)
In Office 2010, .xls files are now associated with "Microsoft Application Virtualization DDE Launcher," or sftdde.exe. The version number of this exe is obviously not the version of Excel.
My question:
Other than actually launching Excel and querying it for version and security level (using OLE CreateOLEObject('Excel.Application')), is there a cleaner, faster, or more reliable way to do this that would work with all versions starting with Excel 2003?
Use
function GetExcelPath: string;
begin
  result := '';
  with TRegistry.Create do
  try
    RootKey := HKEY_LOCAL_MACHINE;
    if OpenKey('SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\excel.exe', false) then
      result := ReadString('Path') + 'excel.exe';
  finally
    Free;
  end;
end;
to get the full file name of the excel.exe file. Then use GetFileVersionInfo as usual.
As far as I know, this approach will always work.
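For the version lookup itself, a minimal sketch of the GetFileVersionInfo step (standard VerQueryValue plumbing; error handling kept short, and the helper name is illustrative):

uses
  Windows;

// Returns the major file version of the given EXE (e.g. 14 for Excel 2010),
// or -1 if the version info cannot be read.
function GetExeMajorVersion(const FileName: string): Integer;
var
  InfoSize, Dummy: DWORD;
  Buffer: Pointer;
  FixedInfo: PVSFixedFileInfo;
  FixedInfoLen: UINT;
begin
  Result := -1;
  InfoSize := GetFileVersionInfoSize(PChar(FileName), Dummy);
  if InfoSize = 0 then
    Exit;
  GetMem(Buffer, InfoSize);
  try
    if GetFileVersionInfo(PChar(FileName), 0, InfoSize, Buffer) and
       VerQueryValue(Buffer, '\', Pointer(FixedInfo), FixedInfoLen) then
      Result := HiWord(FixedInfo^.dwFileVersionMS);
  finally
    FreeMem(Buffer);
  end;
end;

The major version (12, 14, 15, ...) can then be plugged into the '...\software\microsoft\Office\' + VersionNumber + '.0\Excel\Security' registry path from the original code.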
using OLE CreateOLEObject('Excel.Application')
You can get the installed Excel versions by using the same registry locations that this function uses. Basically, you have to clone a large part of that function's registry code.
You can also spy on that function call with tools like Microsoft Process Monitor to see exactly how Windows looks for installed Excel - and then do it exactly the same way.
You have to open the registry at HKEY_CLASSES_ROOT\ and enumerate all the branches whose names start with "Excel.Application."
For example, on this workstation I only have Excel 2013 installed, which corresponds to HKEY_CLASSES_ROOT\Excel.Application.15
But on another workstation I have Excel 2003 and Excel 2010 installed (for testing different XLSX implementations), so I have two registry keys:
HKEY_CLASSES_ROOT\Excel.Application.12
HKEY_CLASSES_ROOT\Excel.Application.14
So, you have to enumerate all those branches with that name, dot, and number.
Note: the key HKEY_CLASSES_ROOT\Excel.Application\CurVer holds the name of the "default" Excel, but what "default" means is ambiguous when several Excel versions are installed. You may take that default value if you do not care, or decide for yourself what to choose - for example, the maximum or the minimum Excel version.
Then, for every specific Excel branch, you should read the default value of its CLSID sub-branch.
For example, HKEY_CLASSES_ROOT\Excel.Application.15\CLSID has a nil-named ("default") value equal to
{00024500-0000-0000-C000-000000000046} - fetch that ID into a string variable.
Then do a second lookup - go into the branch named HKEY_CLASSES_ROOT\CLSID\{00024500-0000-0000-C000-000000000046}\LocalServer (using the fetched ID).
If that branch exists, fetch its nil-named "default" value to get something like C:\PROGRA~1\MICROS~1\Office15\EXCEL.EXE /automation
This result is the command line: it starts with a file name (unquoted in this example, but it may be quoted) followed by optional command-line arguments.
You do not need the arguments, so you have to extract the initial file name, quoted or not.
Then check whether that exe file exists. If it does, you may launch it; if not, check the registry for other Excel versions.
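A hedged Delphi sketch of that lookup (it probes a range of version numbers rather than enumerating every HKEY_CLASSES_ROOT branch, and checks LocalServer32 as well as LocalServer, since recent versions register the former):

uses
  Windows, SysUtils, Classes, Registry;

// Collects 'Excel.Application.NN=<command line>' pairs for every registered
// Excel automation server found in the registry.
procedure GetInstalledExcelServers(Commands: TStrings);
var
  Reg: TRegistry;
  Ver: Integer;
  ClsId, Cmd: string;
begin
  Reg := TRegistry.Create(KEY_READ);
  try
    Reg.RootKey := HKEY_CLASSES_ROOT;
    for Ver := 8 to 30 do // Excel 97 was 8; widen the range if needed
    begin
      ClsId := '';
      if Reg.OpenKeyReadOnly(Format('\Excel.Application.%d\CLSID', [Ver])) then
      begin
        ClsId := Reg.ReadString(''); // the nil-named ("default") value
        Reg.CloseKey;
      end;
      if ClsId = '' then
        Continue;
      if Reg.OpenKeyReadOnly('\CLSID\' + ClsId + '\LocalServer32') or
         Reg.OpenKeyReadOnly('\CLSID\' + ClsId + '\LocalServer') then
      begin
        Cmd := Reg.ReadString('');
        Reg.CloseKey;
        Commands.Add(Format('Excel.Application.%d=%s', [Ver, Cmd]));
      end;
    end;
  finally
    Reg.Free;
  end;
end;

From each command line you would then extract the leading (possibly quoted) file name and check that it exists, as described above.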