How to print file content to stdout in VxWorks?

I'm using PuTTY to telnet into a robot controller.
I need an equivalent of the "cat" command in VxWorks, or some workaround to print the contents of a text file to stdout.
How can I do it?

The copy function is overloaded to handle this. Call it with only one filename argument and stdout will be assumed as the second argument, e.g.:
-> copy "foo.txt"
bar
->
http://www.vxdev.com/docs/vx55man/vxworks/ref/usrFsLib.html#copy
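For completeness, the two-argument form of the same call copies file to file; per the usrFsLib reference above, the one-argument form is just the case where the destination defaults to standard output (the second filename here is illustrative):
-> copy "foo.txt", "foo_backup.txt"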

Related

How to redirect output to file if no console detected in Nim

I want my Nim program to write to the console if there is one, and redirect echo to write to a file if there isn't. Is there an equivalent to the Environment.UserInteractive property in .NET which I could use to detect if no console is available and redirect stdout in that case?
It's a combination of using isatty() as suggested by genotrance and the code that you found :)
# stdout_to_file.nim
import terminal, strformat, times

if isatty(stdout): # ./stdout_to_file
  echo "This is output to the terminal."
else: # ./stdout_to_file | cat
  const
    logFileName = "log.txt"
  let
    # https://github.com/jasonrbriggs/nimwhistle/blob/183c19556d6f11013959d17dfafd43486e1109e5/tests/cgitests.nim#L15
    logFile = open(logFileName, fmWrite)
    stdout = logFile
  echo fmt"This is output to the {logFileName} file."
  echo fmt"- Run using nim {NimVersion} on {now()}."
Save the above file as stdout_to_file.nim.
On running:
nim c stdout_to_file.nim && ./stdout_to_file | cat
I get this in the created log.txt:
This is output to the log.txt file.
- Run using nim 0.19.9 on 2019-01-23T22:42:27-05:00.
You should be able to use isatty().
Here's an example in Nimble.
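For reference, a minimal sketch of the check on its own (isatty lives in the terminal module and takes a File):
import terminal

if isatty(stdout):
  echo "stdout is a terminal"
else:
  echo "stdout is redirected"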
Edit:
@tjohnson this is in response to your comment. I don't have enough points to respond to your comment directly or something? Thanks Stack Overflow...
It's hard to say without seeing more of the code.
What version of Nim are you using?
I suspect stdout has been shadowed by a read-only symbol.
Are you calling this code inside of a proc and passing stdout as an argument?
like this:
proc foo(stdout: File)
If so, you will need to change it to a var parameter to make the argument writable:
proc foo(stdout: var File)
Or use stdout as a global variable instead.
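A minimal sketch of the difference, with hypothetical proc names:
# Hypothetical repro: a plain parameter named stdout is read-only.
proc demoPlain(stdout: File) =
  # stdout = open("log.txt", fmWrite)  # compile error: 'stdout' cannot be assigned to
  discard

# A var parameter makes the argument writable.
proc demoVar(stdout: var File) =
  stdout = open("log.txt", fmWrite)

var f: File
demoVar(f)  # f now refers to the freshly opened log.txt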

Print STDOUT to both file and STDOUT with Perl

In one of the posts, the method below is suggested to capture STDOUT to a file without affecting logging to STDOUT (the terminal).
open my $tee, "|-", "tee E:/log.txt";
For a sequence like the one below:
print $tee "Log1\n";
print $tee "Log2\n";
my $input = <STDIN>;
print $tee "Log3\n";
I don't see any message at the terminal unless I provide the input. Once I type any character and press Enter, I see the logs come out as:
Log1
Log2
Log3
Is there a way to get the first two outputs, then the wait for input, and then the third output?
Or is there any way to capture STDOUT logs to a file while they keep appearing on the terminal too?
Set your filehandle to unbuffered output using
$old_fh = select($tee);
$| = 1;
select($old_fh);
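Putting it together with the sequence from the question, a minimal sketch (assuming tee is available, as in the original post):
use strict;
use warnings;

# Pipe output through tee so it reaches both the log file and the terminal.
open my $tee, "|-", "tee E:/log.txt" or die "cannot start tee: $!";

# Make $tee unbuffered so each print is flushed immediately,
# instead of sitting in the buffer until the handle is closed.
my $old_fh = select($tee);
$| = 1;
select($old_fh);

print $tee "Log1\n";
print $tee "Log2\n";   # Log1 and Log2 now appear before the prompt
my $input = <STDIN>;
print $tee "Log3\n";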

Assign bash output to variable in .lua script

I would like to assign the output of a bash command to a variable in a .lua script. Is it possible?
For instance, something similar to:
var = `ps uax | grep myprocess`
Yes, you need to use io.popen for this.
io.popen (prog [, mode])
Starts program prog in a separated process and returns a file handle that you can use to read data from this program (if mode is "r", the default) or to write data to this program (if mode is "w").
This function is system dependent and is not available on all platforms.
Also see How to execute an external command?.
io.popen calls a command but returns a file object, so you can read the output of the command if the second argument is 'r'; you can also pass input to the command with a second argument of 'w'. Unfortunately, you don't get an io.popen2, and you don't get the return code.
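A minimal sketch of the read case, using the command from the question:
-- Start the command and capture everything it writes to stdout.
local handle = io.popen("ps uax | grep myprocess", "r")
local var = handle:read("*a")  -- "*a" reads the entire output as one string
handle:close()
print(var)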

Calling back to a main file from a dofile in Lua

Say I have two files.
One is called mainFile.lua:
function altDoFile(name)
  dofile(debug.getinfo(1).source:sub(debug.getinfo(1).source:find(".*\\")):sub(2)..name)
end

altDoFile("libs/caller.lua")

function callBack()
  print "called back"
end

doCallback()
The other is called caller.lua, located in a libs folder:
function doCallback()
  print "performing call back"
  _G["callBack"]()
end
The output of running the first file is then:
"performing call back"
Then nothing more; I'm missing a line!
Why is callBack never getting executed? Is this intended behavior, and how do I get around it?
The fact that the function is getting called from string is important, so that can't be changed.
UPDATE:
I have tested it further, and _G["callBack"] does resolve to a function (per type()), but it still does not get called.
Why not just use dofile?
It seems that the purpose of altDoFile is to replace the running script's filename with the script you want to call, thereby creating an absolute path. In this case the path for caller.lua is relative, so you shouldn't need to change anything for Lua to load the file.
Refactoring your code to this:
dofile("libs/caller.lua")
function callBack()
print "called back"
end
doCallback()
Seems to give the result you are looking for:
$ lua mainFile.lua
performing call back
called back
Just as a side note, altDoFile throws an error if the path does not contain a \ character. Windows uses the backslash for path names, but other operating systems like Linux and macOS do not.
In my case, running your script on Linux throws an error because string.find returns nil instead of an index.
lua: mainFile.lua:2: bad argument #1 to 'sub' (number expected, got nil)
If you need to know the working path of the main script, why not pass it as a command line argument:
C:\LuaFiles> lua mainFile.lua C:/LuaFiles
Then in Lua:
local working_path = arg[1] or '.'
dofile(working_path..'/libs/caller.lua')
If you just want to be able to walk back up one directory, you can also modify the loader's search path:
package.path = ";../?.lua" .. package.path;
So then you could run your file by doing:
require("caller")
dofile "../Untitled/SensorLib.lua" --use backpath librarys
Best Regards
K.

Capturing output from WshShell.Exec using Windows Script Host

I wrote the following two functions, and call the second ("callAndWait") from JavaScript running inside Windows Script Host. My overall intent is to call one command line program from another. That is, I'm running the initial script using cscript, and then trying to run something else (Ant) from that script.
function readAllFromAny(oExec)
{
    if (!oExec.StdOut.AtEndOfStream)
        return oExec.StdOut.ReadLine();
    if (!oExec.StdErr.AtEndOfStream)
        return "STDERR: " + oExec.StdErr.ReadLine();
    return -1;
}

// Execute a command line function....
function callAndWait(execStr) {
    var oExec = WshShell.Exec(execStr);
    while (oExec.Status == 0)
    {
        WScript.Sleep(100);
        var output;
        while ( (output = readAllFromAny(oExec)) != -1) {
            WScript.StdOut.WriteLine(output);
        }
    }
}
Unfortunately, when I run my program, I don't get immediate feedback about what the called program is doing. Instead, the output seems to come in fits and starts, sometimes waiting until the original program has finished, and sometimes it appears to have deadlocked. What I really want to do is have the spawned process actually share the same StdOut as the calling process, but I don't see a way to do that. Just setting oExec.StdOut = WScript.StdOut doesn't work.
Is there an alternate way to spawn processes that will share the StdOut & StdErr of the launching process? I tried using WshShell.Run(), but that gives me a "permission denied" error. That's problematic, because I don't want to have to tell my clients to change how their Windows environment is configured just to run my program.
What can I do?
You cannot read from StdErr and StdOut in the script engine in this way, as there is no non-blocking IO as Code Master Bob says. If the called process fills up the buffer (about 4KB) on StdErr while you are attempting to read from StdOut, or vice-versa, then you will deadlock/hang. You will starve while waiting for StdOut and it will block waiting for you to read from StdErr.
The practical solution is to redirect StdErr to StdOut like this:
sCommandLine = """c:\Path\To\prog.exe"" Argument1 argument2"
Dim oExec
Set oExec = WshShell.Exec("CMD /S /C "" " & sCommandLine & " 2>&1 """)
In other words, what gets passed to CreateProcess is this:
CMD /S /C " "c:\Path\To\prog.exe" Argument1 argument2 2>&1 "
This invokes CMD.EXE, which interprets the command line. /S /C invokes a special parsing rule so that the first and last quote are stripped off, and the remainder used as-is and executed by CMD.EXE. So CMD.EXE executes this:
"c:\Path\To\prog.exe" Argument1 argument2 2>&1
The incantation 2>&1 redirects prog.exe's StdErr to StdOut. CMD.EXE will propagate the exit code.
You can now succeed by reading from StdOut and ignoring StdErr.
The downside is that the StdErr and StdOut output get mixed together. As long as they are recognisable you can probably work with this.
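Adapted to the JScript in the question, the merged-stream approach might look like this (a sketch; sCommandLine is the quoted command string built as in the VBScript line above):
var oExec = WshShell.Exec('CMD /S /C " ' + sCommandLine + ' 2>&1 "');
while (!oExec.StdOut.AtEndOfStream) {
    // StdErr is merged into StdOut, so a single blocking read is safe here.
    WScript.StdOut.WriteLine(oExec.StdOut.ReadLine());
}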
Another technique which might help in this situation is to redirect the standard error stream of the command to accompany the standard output.
Do this by adding "%comspec% /c" to the front and "2>&1" to the end of the execStr string.
That is, change the command you run from:
zzz
to:
%comspec% /c zzz 2>&1
The "2>&1" is a redirect instruction which causes the StdErr output (file descriptor 2) to be written to the StdOut stream (file descriptor 1).
You need to include the "%comspec% /c" part because it is the command interpreter which understands the command-line redirect. See http://technet.microsoft.com/en-us/library/ee156605.aspx
Using "%comspec%" instead of "cmd" gives portability to a wider range of Windows versions.
If your command contains quoted string arguments, it may be tricky to get them right:
the specification for how cmd handles quotes after "/c" seems to be incomplete.
With this, your script needs only to read the StdOut stream, and will receive both standard output and standard error.
I used this with "net stop wuauserv", which writes to StdOut on success (if the service is running)
and StdErr on failure (if the service is already stopped).
First, your loop is broken in that it always tries to read from oExec.StdOut first. If there is no actual output then it will hang until there is. You won't see any StdErr output until StdOut.AtEndOfStream becomes true (probably when the child terminates). Unfortunately, there is no concept of non-blocking I/O in the script engine, that is, calling read and having it return immediately if there is no data in the buffer. Thus there is probably no way to get this loop to work as you want.

Second, WshShell.Run does not provide any properties or methods to access the standard I/O of the child process. It creates the child in a separate window, totally isolated from the parent except for the return code. However, if all you want is to be able to SEE the output from the child then this might be acceptable. You will also be able to interact with the child (input), but only through the new window (see SendKeys).
As for using ReadAll(), this would be even worse, since it collects all the input from the stream before returning, so you wouldn't see anything at all until the stream was closed. I have no idea why the example places the ReadAll in a loop which builds a string; a single if (!WScript.StdIn.AtEndOfStream) should be sufficient to avoid exceptions.
Another alternative might be to use the process creation methods in WMI. How standard I/O is handled is not clear, and there doesn't appear to be any way to allocate specific streams as StdIn/Out/Err. The only hope would be that the child would inherit these from the parent, but that's what you want, isn't it? (This comment is based upon an idea and a little bit of research, but no actual testing.)
Basically, the scripting system is not designed for complicated interprocess communication/synchronisation.
Note: Tests confirming the above were performed on Windows XP SP2 using Script version 5.6. Reference to the current (5.8) manuals suggests no change.
Yes, the Exec function seems to be broken when it comes to terminal output.
I have been using a similar function that I call in a loop similar to yours:
function ConsumeStd(e) {
    WScript.StdOut.Write(e.StdOut.ReadAll());
    WScript.StdErr.Write(e.StdErr.ReadAll());
}
Not sure if checking for EOF and reading line by line is better or worse.
You might have hit the deadlock issue described on this Microsoft Support site.
One suggestion is to always read both from stdout and stderr.
You could change readAllFromAny to:
function readAllFromAny(oExec)
{
    var output = "";
    if (!oExec.StdOut.AtEndOfStream)
        output = output + oExec.StdOut.ReadLine();
    if (!oExec.StdErr.AtEndOfStream)
        output = output + "STDERR: " + oExec.StdErr.ReadLine();
    return output ? output : -1;
}
