Can xgettext be used to extract specific domain strings only? - localization

(Really surprised this isn't answered anywhere online; there have been a couple of posts over the past few years asking a similar question, but never answered. Let's hope the Stack Overflow crew can come to the rescue.)
Situation:
When using gettext to support application localization, one sometimes wishes to specify a 'domain' with dgettext('domain', 'some text string'). However, when running xgettext, all strings wrapped with dgettext(...) are written out to a single file (default: messages.po).
Given the following example:
dgettext('menus', 'login link');
dgettext('menus', 'account link');
dgettext('footer', 'copyright notice');
dgettext('footer', 'contact form');
is there any way to end up with
menus.po
footer.po
using an extractor such as xgettext?
A PHP response is desired, although I believe this should be applicable across all languages.

The only way I've found to do this is to redefine gettext functions...
example:
function _menus ($str) {
    return dgettext('menus', $str);
}
function _footer ($str) {
    return dgettext('footer', $str);
}
_menus('login link');
_menus('account link');
_footer('copyright notice');
_footer('contact form');
Then you only have to run the following commands:
xgettext [usual options] -k --keyword=_menus:1 -d menus
xgettext [usual options] -k --keyword=_footer:1 -d footer
Bye!

I do not know how to put different domains in different files, but I did find that xgettext can extract the domain names into msgctxt fields in the .po file. For PHP this is not done by default. To enable it, add for example --keyword=dgettext:1c,2 to the keyword list (in Poedit, add "dgettext:1c,2").
See also:
http://developer.gnome.org/glib/2.28/glib-I18N.html
https://www.gnu.org/savannah-checkouts/gnu/gettext/manual/html_node/xgettext-Invocation.html

Achieving this is best done through either code separation or the use of context disambiguation.
If you can separate your menu code from your footer code, then you can truly consider them different domains and extract them accordingly from known locations.
If modular separation is impossible and all the code lives together, then really you should be using context instead of domains. e.g.
translate( 'A string', 'myproject', 'some module' )
Where "myproject" is your domain and "some module" disambiguates the string.
However, reality doesn't always align with best practice, so if you can't refactor your code as Asevere suggests (and that is probably the best answer) then I have a massive hack to offer.
You could exploit the context flag mentioned in Boris's answer. We can repurpose it, but only if we're not otherwise going to be using contexts.
I'll repeat that. This hack will only work if your code is not using contexts.
Some PHP holding two domains (including one string used in both) -
<?php // test.php
dgettext( 'abc', 'foo' );
dgettext( 'abc', 'bar' );
dgettext( 'xyz', 'bar' );
We can cheat, and take the domain argument as if we intended it to be the message context (msgctxt field). Extracting from the command line:
xgettext -LPHP --keyword=dgettext:1c,2 -o - test.php \
| sed 's/CHARSET/utf-8/' \
> combined.pot
This generates a combined.pot file containing all the strings with our context hack. (Note we also fixed the placeholder character-set field, which would otherwise break the next step.)
We can now filter the messages of a given domain (stored in the msgctxt field) into separate files using msggrep. Note we also discard the context field, as we're not really using contexts.
msggrep -J -e '^abc$' -o - combined.pot | sed '/^msgctxt/d' > abc.pot
msggrep -J -e '^xyz$' -o - combined.pot | sed '/^msgctxt/d' > xyz.pot
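If there are more than a couple of domains, the same split can be scripted. A minimal sketch, assuming the combined.pot produced above, GNU gettext's msggrep on the PATH, and domain names without spaces:
for domain in $(sed -n 's/^msgctxt "\(.*\)"$/\1/p' combined.pot | sort -u); do
    # keep only entries whose (hacked) context matches this domain,
    # then drop the msgctxt lines from the output
    msggrep -J -e "^${domain}\$" -o - combined.pot \
        | sed '/^msgctxt/d' > "${domain}.pot"
done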
Improper, but it works.

Related

Validate URL in Informix 4GL program

In my Informix 4GL program, I have an input field where the user can insert a URL and the feed is later being sent over to the web via a script.
How can I validate the URL at the time of input, to ensure that it's a live link? Can I make a call and see if I get back any errors?
I4GL checking the URL
There is no built-in function to do that (URLs didn't exist when I4GL was invented, amongst other things).
If you can devise a C method to do that, you can arrange to call that method through the C interface. You'll write the method in native C, and then write an I4GL-callable C interface function using the normal rules. When you build the program with I4GL c-code, you'll link the extra C functions too. If you build the program with I4GL-RDS (p-code), you'll need to build a custom runner with the extra function(s) exposed. All of this is standard technique for I4GL.
In general terms, the C interface code you'll need will look vaguely like this:
#include <fglsys.h>

// Standard interface for I4GL-callable C functions
extern int i4gl_validate_url(int nargs);

// The worker function you supply elsewhere (declaration added here)
extern int validate_url(const char *url);

// Using obsolescent interface functions
int i4gl_validate_url(int nargs)
{
    if (nargs != 1)
        fgl_fatal(__FILE__, __LINE__, -1318);
    char url[4096];
    popstring(url, sizeof(url));
    int r = validate_url(url);  // Your C function
    retint(r);
    return 1;
}
You can and should check the manuals but that code, using the 'old style' function names, should compile correctly. The code can be called in I4GL like this:
DEFINE url CHAR(256)
DEFINE rc  INTEGER

LET url = "http://www.google.com/"
LET rc = i4gl_validate_url(url)
IF rc != 0 THEN
    ERROR "Invalid URL"
ELSE
    MESSAGE "URL is OK"
END IF
Or along those general lines. Exactly what values you return depends on your decisions about how to return a status from validate_url(). If need be, you can return multiple values from the interface function (e.g. an error number and the text of an error message), etc. This is about the simplest possible design for calling some C code to validate a URL from within an I4GL program.
Modern C interface functions
The function names in the interface library were all changed in the mid-00's, though the old names still exist as macros. The old names were:
popstring(char *buffer, int buflen)
retint(int retval)
fgl_fatal(const char *file, int line, int errnum)
You can find the revised documentation in the IBM Informix 4GL v7.50.xC3 publication library (PDF); you need the 4GL Reference Manual, Appendix C, "Using C with IBM Informix 4GL".
The new names start with ibm_libi4gl_:
ibm_libi4gl_popMInt()
ibm_libi4gl_popString()
As to the error reporting function, there is one (it exists), but I don't have access to documentation for it any more. It'll be in the fglsys.h header. It takes an error number as one argument; the file name and a line number are the other arguments. It will presumably be named ibm_libi4gl_something, and there'll probably be Fatal or perhaps fatal (or maybe Err or err) in the rest of the name.
I4GL running a script that checks the URL
Wouldn't it be easier to write a shell script to get the status code? That might work if I can return the status code, or any existing results, back to the program in a variable. Can I do that?
Quite possibly. If you want the contents of the URL as a string, though, you might end up wanting to call C. It is certainly worth thinking about whether calling a shell script from within I4GL is doable. If so, it will be a lot simpler (RUN "script", IIRC, where the literal string would probably be replaced by a built-up string containing the command and the URL). I believe there are file I/O functions in I4GL now, too, so if you can get the script to write a file (trivial), you can read the data from the file without needing custom C. For a long time, you needed custom C to do that.
I just need to validate the URL before storing it into the database. I was thinking about:
#!/bin/bash
read -p "URL to check: " url
if curl --output /dev/null --silent --head --fail "$url"; then
    printf '%s\n' "$url exists"
else
    printf '%s\n' "$url does not exist"
fi
but I just need the output instead of /dev/null to be into a variable. I believe the only option is to dump the output into a temp file and read from there.
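For what it's worth, curl can report the status code directly with --write-out, which avoids the temp file. A sketch:
# capture the HTTP status code in a shell variable; 000 means the request failed outright
status=$(curl --silent --head --output /dev/null --write-out '%{http_code}' "$url")
if [ "$status" = "200" ]; then
    printf '%s\n' "$url exists"
else
    printf '%s\n' "$url does not exist (status $status)"
fi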
Instead of having I4GL run the code to validate the URL, have I4GL run a script to validate the URL. Use the exit status of the script and dump the output of curl into /dev/null.
FUNCTION check_url(url)
    DEFINE url VARCHAR(255)
    DEFINE command_line VARCHAR(255)
    DEFINE exit_status INTEGER

    LET command_line = "check_url ", url
    RUN command_line RETURNING exit_status
    RETURN exit_status
END FUNCTION {check_url}
Your calling code can analyze exit_status to see whether it worked. A value of 0 indicates success; non-zero indicates a problem of some sort, which can be deemed 'URL does not work'.
Make sure the check_url script (a) exits with status zero on success and non-zero on any sort of failure, and (b) doesn't write anything to standard output (or standard error) by default. The writing to standard error or output will screw up screen layouts, etc, and you do not want that. (You can obviously have options to the script that enable standard output, or you can invoke the script with options to suppress standard output and standard error, or redirect the outputs to /dev/null; however, when used by the I4GL program, it should be silent.)
Your 'script' (check_url) could be as simple as:
#!/bin/bash
exec curl --output /dev/null --silent --head --fail "${1:-http://www.example.com/}"
This passes the first argument to curl, or the non-existent example.com URL if no argument is given, and replaces itself with curl, which generates a zero/non-zero exit status as required. You might add 2>/dev/null to the end of the command line to ensure that error messages are not seen. (Note that it will be hell debugging this if anything goes wrong; make sure you've got provision for debugging.)
The exec is a minor optimization; you could omit it with almost no difference in result. (I could devise a scheme that would probably spot the difference; it involves signalling the curl process, though — kill -9 9999 or similar, where the 9999 is the PID of the curl process — and isn't of practical significance.)
Given that the script is just one line of code that invokes another program, it would be possible to embed all that in the I4GL program. However, having an external shell script (or Perl script, or …) has merits of flexibility; you can edit it to log attempts, for example, without changing the I4GL code at all. One more file to distribute, but better flexibility — keep a separate script, even though it could all be embedded in the I4GL.
As Jonathan said, "URLs didn't exist when I4GL was invented, amongst other things". What you will find is that the products that have grown to supersede Informix-4gl, such as FourJs Genero, cater for new technologies and other things invented after I4GL.
Using FourJs Genero, the code below will do what you are after, using the Informix 4GL syntax you are familiar with:
IMPORT com

MAIN
    -- Should succeed and display 1
    DISPLAY validate_url("http://www.google.com")
    DISPLAY validate_url("http://www.4js.com/online_documentation/fjs-fgl-manual-html/index.html#c_fgl_nf.html") -- link to some of the features added to I4GL by Genero
    -- Should fail and display 0
    DISPLAY validate_url("http://www.google.com/testing")
    DISPLAY validate_url("http://www.google2.com")
END MAIN

FUNCTION validate_url(url)
    DEFINE url STRING
    DEFINE req com.HttpRequest
    DEFINE resp com.HttpResponse

    -- Returns TRUE if an HTTP request to the URL returns 200
    TRY
        LET req = com.HttpRequest.create(url)
        CALL req.doRequest()
        LET resp = req.getResponse()
        IF resp.getStatusCode() = 200 THEN
            RETURN TRUE
        END IF
        -- May want to handle other HTTP status codes
    CATCH
        -- May want to capture the case where there is no internet connection, etc.
    END TRY
    RETURN FALSE
END FUNCTION

What is the proper way to execute a batchfile with multiple params?

I have a batch file which I use for managing the translation of various programs.
Now I want a Delphi application to call this batch file and pass on the parameters it needs for further processing. Unfortunately, the parameters contain spaces, which leads to them being split up. Is there a way to keep all parameters tied together as intended?
This is how my batch file looks:
ECHO Scan for new ressources
%MLDIR%\Ml7Build.exe s %1%
ECHO Import glossary for new translation
%MLDIR%\MlBuild.exe i %2%
ECHO Create translated application
%MLDIR%\Ml7Build.exe b %3%
I tried to use the ShellExecute command from ShellApi because I found several similar questions on SO, but none of them could help me solve my problem. My Delphi code looks like this:
param1 := ExtractFileName(hMLProj);
param2 := '-f: '+MLWorkDir+'Prev_'+ExtractFileName(hMLProj)+' -settings:Auftrag_Test.importsettings-method:2 -overwri:3 -error:2 '+ExtractFileName(hMLProj)+' ';
param3 := ExtractFileName(hMLProj);
ShellExecute(0,'open',PCHAR(MLWorkDir+'__AutomatedTranslationFUBAR.bat'),PChar(param1 +param2 +param3),nil,SW_SHOWDEFAULT);
ECHO Scan for new resources
%MLDIR%\Ml7Build.exe s %~1
ECHO Import glossary for new translation
%MLDIR%\MlBuild.exe i %~2
REM Is the "7" omitted here deliberately (MlBuild.exe vs Ml7Build.exe)?
ECHO Create translated application
%MLDIR%\Ml7Build.exe b %~3
Note that %n, not %n% (n = 1..9), refers to parameter n supplied to the batch file. The tilde form %~n removes "any enclosing quotes."
Parameters need to be "enclosed in quotes" (and they must be double quotes) if they contain separators such as spaces. So, on the Delphi side, wrap each parameter in double quotes before concatenating, along the lines of '"' + param1 + '" "' + param2 + '" "' + param3 + '"'.

How to view default zsh settings (HISTSIZE, SAVEHIST, ...)

How do I see the current values for all the zsh settings?
e.g., I don't currently have HISTSIZE and SAVEHIST set, so env | grep HIST and set | grep HIST show nothing. So then how can I see what default values are being used?
There is no option to get the default value for an undefined variable except parsing documentation or source code.
HISTSIZE and SAVEHIST are not settings, they are special variables. There is a way to list all variables, but I know of no way to list those that are special and are used as settings.
To help you list parameters implemented as variables, there is the zsh/parameter module (run zmodload zsh/parameter to load it). It has an associative array $parameters whose keys are variable names and whose values are variable type descriptions. Both HISTSIZE and SAVEHIST appear there as integer-special, and HISTCHARS appears as scalar-special. Note, though, that RANDOM also appears as integer-special, just like HISTSIZE, so you can't use this to pick out only the special variables used as settings. But you can always use the PARAMETERS USED BY THE SHELL section of man zshparam.
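For example, in an interactive zsh:
zmodload zsh/parameter
echo $parameters[HISTSIZE]    # integer-special
echo $parameters[SAVEHIST]    # integer-special
echo $parameters[HISTCHARS]   # scalar-special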
I don't know of any option that will allow you to determine default values of those parameters, except parsing documentation or source code.
# setopt | grep hist
nobanghist
extendedhistory
histfcntllock
histignorealldups
histignorespace
histnostore
histreduceblanks
histsavenodups
histverify
incappendhistory
If you want to see non-default settings:
If no arguments are supplied, the names of all options currently set are printed. The form is chosen so as to minimize the differences from the default options for the current emulation (the default emulation being native zsh, shown as <Z> in zshoptions(1)). Options that are on by default for the emulation are shown with the prefix no only if they are off, while other options are shown without the prefix no and only if they are on. In addition to options changed from the default state by the user, any options activated automatically by the shell (for example, SHIN_STDIN or INTERACTIVE) will be shown in the list. The format is further modified by the option KSH_OPTION_PRINT; however, the rationale for choosing options with or without the no prefix remains the same in this case.
It also makes sense to use:
# unsetopt | grep hist
noappendhistory
cshjunkiehistory
histallowclobber
nohistbeep
histexpiredupsfirst
histfindnodups
histignoredups
histlexwords
histnofunctions
nohistsavebycopy
histsubstpattern
sharehistory
If no arguments are supplied, the names of all options currently unset are printed.
Or just follow the help and use
# setopt kshoptionprint
# setopt | grep hist
noappendhistory off
nobanghist on
cshjunkiehistory off
extendedhistory on
histallowclobber off
nohistbeep off
histexpiredupsfirst off
histfcntllock on
histfindnodups off
histignorealldups on
histignoredups off
histignorespace on
histlexwords off
histnofunctions off
histnostore on
histreduceblanks on
nohistsavebycopy off
histsavenodups on
histsubstpattern off
histverify on
incappendhistory on
sharehistory off
Note that the output of setopt and unsetopt match when the kshoptionprint option is used.
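You can check that claim directly; a quick sketch using zsh process substitution:
setopt kshoptionprint
diff <(setopt) <(unsetopt) && echo "outputs are identical"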
To show the current value, whether you have set it or not (in which case it shows the default value):
➜ ~ echo $SAVEHIST
10000
➜ ~ echo $HISTSIZE
10000
I dunno about you... (I mean, I do use prezto), but this is the "autocompletion" I get upon entering setopt<TAB>...
which is telling me useful things like..
-- zsh options (set) --
noaliases noautoresume nohashdirs nohistverify nonomatch ...
and
-- zsh options (unset) --
allexport cshjunkiehistory hashexecutablesonly kshglob
nullglob singlecommand ...

Comparing generated executables for equivalence

I need to compare 2 executables and/or shared objects, compiled using the same compiler/flags, and verify that they have not changed. We work in a regulated environment, so it would be really useful for testing purposes to isolate exactly what parts of the executable have changed.
Using MD5Sums/Hashes doesn't work due to the headers containing information about the file.
Does anyone know of a program or way to verify that 2 files are executionally the same even if they were built at a different time?
An interesting question. I have a similar problem on linux. Intrusion detection systems like OSSEC or tripwire may generate false positives if the hashsum of an executable changes all of a sudden. This may be nothing worse than the Linux "prelink" program patching the executable file for faster startups.
In order to compare two binaries (in the ELF format), one can use the "readelf" executable and then "diff" to compare outputs. I'm sure there are refined solutions, but without further ado, a poor man's comparator in Perl:
#!/usr/bin/perl -w
$exe = $ARGV[0];
if (!$exe) {
    die "Please give name of executable\n";
}
if (! -f $exe) {
    die "Executable $exe not found or not a file\n";
}
if (! (`file '$exe'` =~ /\bELF\b.*?\bexecutable\b/)) {
    die "file command says '$exe' is not an ELF executable\n";
}

# Identify sections in ELF
@lines = pipeIt("readelf --wide --section-headers '$exe'");
@sections = ();
for my $line (@lines) {
    if ($line =~ /^\s*\[\s*(\d+)\s*\]\s+(\S+)/) {
        my $secnum = $1;
        my $secnam = $2;
        print "Found section $secnum named $secnam\n";
        push @sections, $secnam;
    }
}

# Dump file header
@lines = pipeIt("readelf --file-header --wide '$exe'");
print @lines;

# Dump all interesting section headers
@lines = pipeIt("readelf --all --wide '$exe'");
print @lines;

# Dump individual sections as hexdump
for my $section (@sections) {
    @lines = pipeIt("readelf --hex-dump='$section' --wide '$exe'");
    print @lines;
}

sub pipeIt {
    my ($cmd) = @_;
    my $fh;
    open($fh, "$cmd |") or die "Could not open pipe from command '$cmd': $!\n";
    my @lines = <$fh>;
    close $fh or die "Could not close pipe to command '$cmd': $!\n";
    return @lines;
}
Now you can run for example, on machine 1:
./checkexe.pl /usr/bin/curl > curl_machine1
And on machine 2:
./checkexe.pl /usr/bin/curl > curl_machine2
After having copy-pasted, SFTP-ed or NFS-ed (you don't use FTP, do you?) the files into the same file tree, compare the files:
diff --side-by-side --width=200 curl_machine1 curl_machine2 | less
In my case, differences exist in the sections ".gnu.conflict", ".gnu.liblist", ".got.plt" and ".dynbss", which might be OK for a "prelink" intervention, but a difference in the code section, ".text", would be a Bad Sign.
To follow up, here is what I came up with finally:
Instead of comparing the final executables & shared objects, we compared the .o files output before linking. We assumed that the linking process was sufficiently reproducible that this would be fine.
It works in some of our cases, where we have two builds where we've made some small change that shouldn't affect the final code (e.g. a code pretty-printer), but it doesn't help us if we do not have the build's intermediary output.
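For the record, the per-object comparison can be scripted. A sketch, where build1/ and build2/ are hypothetical output directories of the two builds:
for f in build1/*.o; do
    # -s: silent; report only the names of objects that differ
    cmp -s "$f" "build2/$(basename "$f")" || echo "differs: $(basename "$f")"
done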
You can compare the contents of RO and RW initialized sections by generating a binary file from the ELF file.
objcopy <elf_file> -O binary <binary_file>
Use the generated binary files to compare if they are identical, using diff, for example.
In my opinion, this is enough to guarantee you are generating the same executable.
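Putting the two steps together, a sketch (app_build1 and app_build2 are hypothetical names for the two executables):
# dump the loadable sections to flat binaries, then compare byte-for-byte
objcopy app_build1 -O binary app_build1.bin
objcopy app_build2 -O binary app_build2.bin
cmp app_build1.bin app_build2.bin && echo "contents identical"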
A few years back I had to do the same thing. We had to prove that we could rebuild the executable from source when given only a revision number, revision control repository, build tools, and build configuration. Note: If any of these change you may see a difference.
I remember there are some timestamps in the executable. The trick is to realize that the file is not just an uninterpretable bunch of bytes. The file has sections; most will not change, but there will be a section for the time of build (or some such thing).
I don't remember all the details, but the commands you will need are among { objcopy, objdump, nm }; I think objdump would be the first to try.
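A sketch of that first try, assuming GNU binutils (the file names are placeholders):
# disassemble only the code, omitting raw instruction bytes, and compare;
# this sidesteps timestamps that live in headers rather than in .text
objdump -d --no-show-raw-insn app_build1 > build1.dis
objdump -d --no-show-raw-insn app_build2 > build2.dis
diff build1.dis build2.dis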
Hope this helps.

How to monitor a text file in realtime [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
For debugging purposes in a somewhat closed system, I have to output text to a file.
Does anyone know of a tool that runs on windows (console based or not) that detects changes to a file and outputs them in real-time?
I like tools that will perform more than one task. Notepad++ is a great Notepad replacement and has a Document Monitor plugin (installs with the standard MSI) that works great. It is also portable, so you can have it on a thumb drive for use anywhere.
For a command line option, PowerShell (which is really a new command line) has a great feature already mentioned.
Get-Content someFile.txt -wait
But you can also filter at the command line using a regular expression
Get-Content web.log -wait | where { $_ -match "ERROR" }
Tail for Win32
Apache Chainsaw - used this with log4net logs, may require file to be in a certain format
When using Windows PowerShell you can do the following:
Get-Content someFile.txt -wait
I use "tail -f" under cygwin.
I use BareTail for doing this on Windows. It's free and has some nice features, such as tabs for tailing multiple files and configurable highlighting.
Tail is the best answer so far.
If you don't use Windows, you probably already have tail.
If you do use Windows, you can get a whole slew of Unix command line tools from here. Unzip them and put them somewhere in your PATH.
Then just do this at the command prompt from the same folder your log file is in:
tail -n 50 -f whatever.log
This will show you the last 50 lines of the file and will update as the file updates.
You can combine grep with tail with great results - something like this:
tail -n 50 -f whatever.log | grep Error
gives you just the lines with "Error" in them.
Good luck!
FileSystemWatcher works a treat, although you do have to be a little careful about duplicate events firing (this is well documented; it was the first link from a Google search), but bearing that in mind it can produce great results.
Late answer, though it might be helpful for someone: LogExpert seems to be an interesting tail utility for Windows.
Try SMSTrace from Microsoft (now called CMTrace, and directly available in the Start Menu on some versions of Windows).
It's a brilliant GUI tool that monitors updates to any text file in real time, even if it's locked for writing by another process.
Don't be fooled by the description; it's capable of monitoring any file, including .txt, .log or .csv.
Its ability to monitor locked files is extremely useful, and is one of the reasons why this utility shines.
One of the nicest features is line coloring. If it sees the word "ERROR", the line becomes red. If it sees the word "WARN", the line becomes yellow. This makes the logs a lot easier to follow.
I have used FileSystemWatcher for monitoring of text files for a component I recently built. There may be better options (I never found anything in my limited research) but that seemed to do the trick nicely :)
Crap, my bad, you're actually after a tool to do it all for you...
Well if you get unlucky and want to roll your own ;)
You can use the FileSystemWatcher class in the System.IO namespace.
From MSDN:
using System;
using System.IO;
using System.Security.Permissions;

public class Watcher
{
    public static void Main()
    {
        Run();
    }

    [PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
    public static void Run()
    {
        string[] args = System.Environment.GetCommandLineArgs();

        // If a directory is not specified, exit program.
        if (args.Length != 2)
        {
            // Display the proper way to call the program.
            Console.WriteLine("Usage: Watcher.exe (directory)");
            return;
        }

        // Create a new FileSystemWatcher and set its properties.
        FileSystemWatcher watcher = new FileSystemWatcher();
        watcher.Path = args[1];

        /* Watch for changes in LastAccess and LastWrite times, and
           the renaming of files or directories. */
        watcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
                             | NotifyFilters.FileName | NotifyFilters.DirectoryName;

        // Only watch text files.
        watcher.Filter = "*.txt";

        // Add event handlers.
        watcher.Changed += new FileSystemEventHandler(OnChanged);
        watcher.Created += new FileSystemEventHandler(OnChanged);
        watcher.Deleted += new FileSystemEventHandler(OnChanged);
        watcher.Renamed += new RenamedEventHandler(OnRenamed);

        // Begin watching.
        watcher.EnableRaisingEvents = true;

        // Wait for the user to quit the program.
        Console.WriteLine("Press 'q' to quit the sample.");
        while (Console.Read() != 'q') ;
    }

    // Define the event handlers.
    private static void OnChanged(object source, FileSystemEventArgs e)
    {
        // Specify what is done when a file is changed, created, or deleted.
        Console.WriteLine("File: " + e.FullPath + " " + e.ChangeType);
    }

    private static void OnRenamed(object source, RenamedEventArgs e)
    {
        // Specify what is done when a file is renamed.
        Console.WriteLine("File: {0} renamed to {1}", e.OldFullPath, e.FullPath);
    }
}
You can also follow this link Watching Folder Activity in VB.NET
Snake Tail. It is a good option.
http://snakenest.com/snaketail/
Just a shameless plug to tail onto the answer, but I have a free web-based app called Hacksaw used for viewing log4net files. I've put in an auto-refresh option so you can give yourself near-real-time updates without having to refresh the browser all the time.
Yeah I've used both Tail for Win32 and tail on Cygwin. I've found both to be excellent, although I prefer Cygwin slightly as I'm able to tail files over the internet efficiently without crashes (Tail for Win32 has crashed on me in some instances).
So basically, I would use tail on Cygwin and redirect the output to a file on my local machine. I would then have this file open in Vim and reload (:e) it when required.
+1 for BareTail. I actually use BareTailPro, which provides real-time filtering on the tail with basic search strings or search strings using regex.
To make the list complete here's a link to the GNU WIN32 ports of many useful tools (amongst them is tail).
GNUWin32 CoreUtils
Surprised no one has mentioned Trace32 (or Trace64). These are great (free) Microsoft utilities that give a nice GUI and highlight any errors, etc. They also have filtering, and sound like exactly what you need.
Here's a utility I wrote to do just that:
It uses a FileSystemWatcher to look for changes in log files within local folders or network shares (don't have to be mounted, just provide the UNC path) and appends the new content to the console.
on github: https://github.com/danbyrne84/multitail
http://www.danielbyrne.net/projects/multitail
Hope this helps
@echo off
set LoggingFile=C:\foo.txt
set lineNr=0
:while1
for /f "usebackq delims=" %%i in (`more +%lineNr% %LoggingFile%`) DO (
    echo %%i
    set /a lineNr+=1
    REM Have an appropriate stop condition here by checking i
)
goto :while1
A command prompt way of doing it.
FileMon is a free standalone tool that can detect all kinds of file access. You can filter out anything unwanted. It does not show you the data that actually changed, though.
I second "tail -f" in cygwin. I assume that Tail for Win32 will accomplish the same thing.
Tail for Win32
I made a tiny viewer of my own:
https://github.com/enexusde/Delphi/wiki/TinyLog
