Tcl - How to print on the screen the messages that were printed during execution of an exec command?

Say I want to execute a script or an executable file, printing the output of the execution at runtime.
When I do:
set log [exec ./executable_file]
puts $log
Then it waits a long time and prints everything at once, but I want the output printed at runtime. How can I do this?

Not perfect (as it requires writing to an external file):
set log [exec executable_file | tee log.txt >@stdout]
The output will be displayed immediately and, at the same time, saved to log.txt. If you don't care about saving the output:
set log [exec executable_file >@stdout]

Use open "| ..." and asynchronous line-wise reading from the returned channel, like this:
proc ReadLine fd {
    if {[gets $fd line] < 0} {
        if {[chan eof $fd]} {
            chan close $fd
            set ::forever now
        }
        return    ;# no complete line available yet, so don't print
    }
    puts $line
}
set fd [open "| ./executable_file"]
chan configure $fd -blocking no
chan event $fd readable [list ReadLine $fd]
vwait forever
See this wiki page for more involved examples.
In a real program you will probably already have an event loop running so there would be no need for a vwait specific to reading the output of one command.
Also, if you need to collect the output rather than just [puts] each line after it has been read, you will probably need to create a global (usually namespaced) variable, initialize it to "", pass its name as another argument to the callback procedure (ReadLine here), and append each line to that variable's value, as sketched below.
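A minimal sketch of that pattern, assuming $fd is opened and configured as in the example above (the variable name ::log and the extra argument are illustrative, not part of the original answer):
set ::log ""
proc ReadLine {fd varName} {
    if {[gets $fd line] < 0} {
        if {[chan eof $fd]} {
            chan close $fd
            set ::forever now
        }
        return
    }
    append $varName $line \n    ;# collect the line instead of printing it
}
chan event $fd readable [list ReadLine $fd ::log]
vwait forever
puts $::log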

Maybe you can launch another process in the background before your executable, such as tail -f logfile.txt, and direct the output of your executable into that logfile.txt?
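A rough Tcl sketch of that idea (file and program names are illustrative; you would also have to kill the tail process yourself afterwards):
close [open logfile.txt w]             ;# create/empty the log file so tail can open it
exec tail -f logfile.txt >@stdout &    ;# background tail mirrors the log to our stdout
exec ./executable_file > logfile.txt   ;# the executable writes into the same file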

Related

Print STDOUT to both file and STDOUT with perl

In one of the posts, the method below is suggested to capture STDOUT to a file without affecting logging to STDOUT (the terminal).
open my $tee, "|-", "tee E:/log.txt";
For a sequence like below:
print $tee "Log1\n";
print $tee "Log2\n";
my $input = <STDIN>;
print $tee "Log3\n";
I don't see any messages at the terminal until I provide the input. Once I type any character and press Enter, I see the logs come out as
Log1
Log2
Log3
Is there a way such that I get the first two outputs, then have it wait for input, and then get the third output?
Or is there any way to capture STDOUT logs to a file while STDOUT logs keep coming on terminal too?
Set your filehandle to unbuffered output using
$old_fh = select($tee);
$| = 1;
select($old_fh);
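Equivalently, assuming you can load IO::Handle, the method form is shorter:
use IO::Handle;       # gives filehandles an autoflush method
$tee->autoflush(1);   # flush $tee after every print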

Simple program that reads and writes to a pipe

Although I am quite familiar with Tcl, this is a beginner question. I would like to read from and write to a pipe. I would like a solution in pure Tcl, not using a library like Expect. I copied an example from the Tcl wiki but could not get it running.
My code is:
cd /tmp
catch {
    console show
    update
}
proc go {} {
    puts "executing go"
    set pipe [open "|cat" RDWR]
    fconfigure $pipe -buffering line -blocking 0
    fileevent $pipe readable [list piperead $pipe]
    if {![eof $pipe]} {
        puts $pipe "hello cat program!"
        flush $pipe
        set got [gets $pipe]
        puts "result: $got"
    }
}
go
go
The output is executing go followed by an empty result:, however I would expect that reading a value from the pipe would return the line that I have sent to the cat program.
What is my error?
--
EDIT:
I followed potrzebie's answer and got a small example working. That's enough to get me going. A quick workaround to test my setup was the following code (not a real solution but a quick fix for the moment).
cd /home/stephan/tmp
catch {
    console show
    update
}
puts "starting pipe"
set pipe [open "|cat" RDWR]
fconfigure $pipe -buffering line -blocking 0
after 10
puts $pipe "hello cat!"
flush $pipe
set got [gets $pipe]
puts "got from pipe: $got"
Writing to the pipe and flushing won't make the OS's multitasking immediately leave your program and switch to the cat program. Try putting after 1000 between the puts and the gets commands, and you'll see that you'll probably get the string back: cat has then been given some time slices and has had the chance to read its input and write its output.
You can't control when cat reads your input and writes it back, so you'll have to either use fileevent and enter the event loop to wait (or periodically call update), or periodically try reading from the stream. Or you can keep it in blocking mode, in which case gets will do the waiting for you: it will block until there's a line to read, but meanwhile no other events will be responded to. A GUI, for example, will stop responding.
The example seems to be for Tk and meant to be run by wish, which enters the event loop automatically at the end of the script. Add the piperead procedure and either run the script with wish, or add a vwait command to the end of the script and run it with tclsh.
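For completeness, a piperead along the lines of the wiki's example might look like this (a sketch; the wiki's actual version may differ):
proc piperead pipe {
    if {[gets $pipe line] >= 0} {
        puts "result: $line"
    } elseif {[eof $pipe]} {
        close $pipe     ;# child closed its end of the pipe
        set ::done 1    ;# releases a [vwait ::done] at the end of the script
    }
}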
PS: For line-buffered I/O to work on a pipe, both programs involved have to use it (or no buffering at all). Many programs (grep, sed, etc.) use full buffering when they're not connected to a terminal. One way to prevent that is with the unbuffer program, which is part of Expect (you don't have to write an Expect script; it's a stand-alone program that just happens to be included with the Expect package).
set pipe [open "|[list unbuffer grep .]" {RDWR}]
I guess you're executing the code from http://wiki.tcl.tk/3846, the page entitled "Pipe vs Expect". You seem to have omitted the definition of the piperead proc; indeed, when I copied and pasted the code from your question, I got the error invalid command name "piperead". If you copy and paste the definition from the wiki, you should find that the code works. It certainly did for me.

Getting return status AND program output

I need to use Lua to run a binary program that may write something to its stdout and also returns a status code (also known as an "exit status").
I searched the web and couldn't find anything that does what I need. However, I found out that in Lua:
os.execute() returns the status code
io.popen() returns a file handler that can be used to read process output
However, I need both. Writing a wrapper function that runs both functions behind the scenes is not an option, because of the process overhead and possible changes in the result between consecutive runs. I need to write a function like this:
function run(binpath)
    ...
    return output, exitcode
end
Does anyone have an idea how this problem can be solved?
PS: the target system runs Linux.
With Lua 5.2 I can do the following and it works
-- This will open the file
local file = io.popen('dmesg')
-- This will read all of the output, as always
local output = file:read('*all')
-- This will get a table with the return values:
-- rc[1] will be true, false or nil,
-- rc[2] the string "exit" or "signal",
-- rc[3] the exit code or the signal number
local rc = {file:close()}
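Equivalently, you can unpack the return values directly instead of collecting them into a table (a sketch, Lua 5.2 or later):
local f = io.popen('dmesg')
local output = f:read('*all')
-- ok is true/nil, how is "exit" or "signal", code is the exit code or signal number
local ok, how, code = f:close()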
I hope this helps!
Since I can't use Lua 5.2, I use this helper function.
function execute_command(command)
    local tmpfile = '/tmp/lua_execute_tmp_file'
    local exit = os.execute(command .. ' > ' .. tmpfile .. ' 2> ' .. tmpfile .. '.err')
    local stdout_file = io.open(tmpfile)
    local stdout = stdout_file:read("*all")
    local stderr_file = io.open(tmpfile .. '.err')
    local stderr = stderr_file:read("*all")
    stdout_file:close()
    stderr_file:close()
    return exit, stdout, stderr
end
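Usage would then look something like this ('ls /tmp' is just an illustrative command):
local exit, stdout, stderr = execute_command('ls /tmp')
print(exit, stdout, stderr)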
This is how I do it.
local process = io.popen('command; echo $?') -- echo return code of last run command
local lastline
for line in process:lines() do
    lastline = line
end
print(lastline) -- the return code is the last line of output
If the last line has a fixed length you can read it directly using file:seek("end", -offset), where offset is the length of the last line in bytes.
This functionality is provided in C by pclose.
Upon successful return, pclose() shall return the termination status
of the command language interpreter.
The interpreter returns the termination status of its child.
But Lua doesn't do this right (io.close always returns true). I haven't dug into these threads but some people are complaining about this brain damage.
http://lua-users.org/lists/lua-l/2004-05/msg00005.html
http://lua-users.org/lists/lua-l/2011-02/msg00387.html
If you're running this code on Win32 or in a POSIX environment, you could try this Lua extension: http://code.google.com/p/lua-ex-api/
Alternatively, you could write a small shell script (assuming bash or similar is available) that:
- executes the correct executable, capturing the exit code into a shell variable,
- prints a newline and a terminal character/string onto standard out,
- prints the shell variable's value (the exit code) onto standard out.
Then capture all the output of io.popen and parse backward, as sketched below.
Full disclosure: I'm not a Lua developer.
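A hedged Lua sketch of that parse-backward idea (the sentinel string __EXIT__ and the helper name run are invented for illustration):
-- Run a command, append a sentinel line carrying the exit code,
-- then parse the sentinel back out of the captured output.
local function run(binpath)
    local f = io.popen(binpath .. '; echo "__EXIT__:$?"')
    local output = f:read('*all')
    f:close()
    local exitcode = tonumber(output:match('__EXIT__:(%d+)%s*$'))
    output = output:gsub('__EXIT__:%d+%s*$', '')
    return output, exitcode
end

local out, code = run('ls /tmp')    -- illustrative call
print(code)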
Yes, you are right that os.execute() has return values; it's quite simple once you understand how to run your command with and without Lua.
You may also want to know how many values it returns, and that may take a while to work out, but I think you can try:
local a, b, c, d, e = os.execute(your_command)
In this example a is the first returned value, b is the second returned value, and so on. I think that answers the question as asked.

Does popen() add a line feed (\n) at the end of the result?

I've written a small function that returns the result of executing a command.
function axsh(cmd)
    local fullCmd = cmd:lower()
    local f, err = io.popen(fullCmd, "r")
    if not f then
        return nil, "Could not create the process '" .. fullCmd .. "' \nError:" .. err
    end
    return f:read("*all")
end
s=axsh("echo hi")
--print all bytes
print(s:byte(1,s:len()))
The output always has a \n at the end, no matter what the command is:
104 105 10
Edit: it happens not only with my own command-line application but also with almost all OS commands. Windows: "dir", "ipconfig", "echo"... Linux: "ls", "pwd"...
But when I run the command separately (i.e. in the Windows command prompt) there is no trailing line feed. I don't need it, so I have to remove the last character before returning the result.
Question: does this line feed always exist in the result of popen()? I can't find any reference to this behavior in the documentation.
No. io.popen just returns whatever string the command produces. You used echo as the command, which happens to put a newline after the string (this is what makes the command prompt appear on the next line, instead of right after the output).
You can test it by trying this:
s=axsh([[lua -e "io.write([=[hi]=])"]])
return string.byte(s,1,-1)
which does not end the output with a newline.
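If you do want to strip a single trailing newline when it is present, a one-liner like this (illustrative) does it:
s = s:gsub("\n$", "")    -- remove one trailing newline, if any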

Capturing output from WshShell.Exec using Windows Script Host

I wrote the following two functions, and call the second ("callAndWait") from JavaScript running inside Windows Script Host. My overall intent is to call one command-line program from another. That is, I'm running the initial script using cscript, and then trying to run something else (Ant) from that script.
function readAllFromAny(oExec)
{
    if (!oExec.StdOut.AtEndOfStream)
        return oExec.StdOut.ReadLine();
    if (!oExec.StdErr.AtEndOfStream)
        return "STDERR: " + oExec.StdErr.ReadLine();
    return -1;
}
// Execute a command line function....
function callAndWait(execStr) {
    var oExec = WshShell.Exec(execStr);
    while (oExec.Status == 0)
    {
        WScript.Sleep(100);
        var output;
        while ((output = readAllFromAny(oExec)) != -1) {
            WScript.StdOut.WriteLine(output);
        }
    }
}
Unfortunately, when I run my program, I don't get immediate feedback about what the called program is doing. Instead, the output seems to come in fits and starts, sometimes waiting until the original program has finished, and sometimes it appears to have deadlocked. What I really want to do is have the spawned process actually share the same StdOut as the calling process, but I don't see a way to do that. Just setting oExec.StdOut = WScript.StdOut doesn't work.
Is there an alternate way to spawn processes that will share the StdOut and StdErr of the launching process? I tried using WshShell.Run(), but that gives me a "permission denied" error. That's problematic, because I don't want to have to tell my clients to change how their Windows environment is configured just to run my program.
What can I do?
You cannot read from StdErr and StdOut in the script engine in this way, because there is no non-blocking IO, as Code Master Bob says. If the called process fills up the buffer (about 4 KB) on StdErr while you are attempting to read from StdOut, or vice versa, then you will deadlock/hang: you will starve while waiting for StdOut, and it will block waiting for you to read from StdErr.
The practical solution is to redirect StdErr to StdOut like this:
sCommandLine = """c:\Path\To\prog.exe"" Argument1 argument2"
Dim oExec
Set oExec = WshShell.Exec("CMD /S /C "" " & sCommandLine & " 2>&1 """)
In other words, what gets passed to CreateProcess is this:
CMD /S /C " "c:\Path\To\prog.exe" Argument1 argument2 2>&1 "
This invokes CMD.EXE, which interprets the command line. /S /C invokes a special parsing rule so that the first and last quote are stripped off, and the remainder used as-is and executed by CMD.EXE. So CMD.EXE executes this:
"c:\Path\To\prog.exe" Argument1 argument2 2>&1
The incantation 2>&1 redirects prog.exe's StdErr to StdOut. CMD.EXE will propagate the exit code.
You can now succeed by reading from StdOut and ignoring StdErr.
The downside is that the StdErr and StdOut output get mixed together. As long as they are recognisable you can probably work with this.
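For reference, the same wrapper rendered in JScript (the question's language) might look like this (a sketch; WshShell is assumed to be created as in the question):
// JScript version of the CMD /S /C wrapper from the answer above.
var sCommandLine = '"c:\\Path\\To\\prog.exe" Argument1 argument2';
var oExec = WshShell.Exec('CMD /S /C " ' + sCommandLine + ' 2>&1 "');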
Another technique which might help in this situation is to redirect the standard error stream of the command to accompany the standard output.
Do this by adding "%comspec% /c" to the front and "2>&1" to the end of the execStr string.
That is, change the command you run from:
zzz
to:
%comspec% /c zzz 2>&1
The "2>&1" is a redirect instruction which causes the StdErr output (file descriptor 2) to be written to the StdOut stream (file descriptor 1).
You need to include the "%comspec% /c" part because it is the command interpreter that understands the command-line redirect. See http://technet.microsoft.com/en-us/library/ee156605.aspx
Using "%comspec%" instead of "cmd" gives portability to a wider range of Windows versions.
If your command contains quoted string arguments, it may be tricky to get them right: the specification for how cmd handles quotes after "/c" seems to be incomplete.
With this, your script needs only to read the StdOut stream, and will receive both standard output and standard error.
I used this with "net stop wuauserv", which writes to StdOut on success (if the service is running) and StdErr on failure (if the service is already stopped).
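Applied to the question's callAndWait, that might look like this (a sketch; execStr is the original command string):
// Prepend %comspec% /c and append 2>&1 before spawning the child.
callAndWait("%comspec% /c " + execStr + " 2>&1");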
First, your loop is broken in that it always tries to read from oExec.StdOut first. If there is no actual output then it will hang until there is. You won't see any StdErr output until StdOut.AtEndOfStream becomes true (probably when the child terminates). Unfortunately, there is no concept of non-blocking I/O in the script engine, meaning a read call that returns immediately when there is no data in the buffer. Thus there is probably no way to get this loop to work as you want.
Second, WshShell.Run does not provide any properties or methods to access the standard I/O of the child process. It creates the child in a separate window, totally isolated from the parent except for the return code. However, if all you want is to be able to SEE the output from the child then this might be acceptable. You will also be able to interact with the child (input), but only through the new window (see SendKeys).
As for using ReadAll(), this would be even worse, since it collects all the input from the stream before returning, so you wouldn't see anything at all until the stream was closed. I have no idea why the example places the ReadAll in a loop which builds a string; a single if (!WScript.StdIn.AtEndOfStream) should be sufficient to avoid exceptions.
Another alternative might be to use the process creation methods in WMI. How standard I/O is handled is not clear, and there doesn't appear to be any way to allocate specific streams as StdIn/Out/Err. The only hope would be that the child inherits these from the parent, but that's what you want, isn't it? (This comment is based on an idea and a little research, but no actual testing.)
Basically, the scripting system is not designed for complicated interprocess communication/synchronisation.
Note: Tests confirming the above were performed on Windows XP Sp2 using Script version 5.6. Reference to current (5.8) manuals suggests no change.
Yes, the Exec function seems to be broken when it comes to terminal output.
I have been using a similar function, function ConsumeStd(e) { WScript.StdOut.Write(e.StdOut.ReadAll()); WScript.StdErr.Write(e.StdErr.ReadAll()); }, which I call in a loop similar to yours. I'm not sure whether checking for EOF and reading line by line is better or worse.
You might have hit the deadlock issue described on this Microsoft Support site.
One suggestion is to always read both from stdout and stderr.
You could change readAllFromAny to:
function readAllFromAny(oExec)
{
    var output = "";
    if (!oExec.StdOut.AtEndOfStream)
        output = output + oExec.StdOut.ReadLine();
    if (!oExec.StdErr.AtEndOfStream)
        output = output + "STDERR: " + oExec.StdErr.ReadLine();
    return output ? output : -1;
}
