I'd like to read a system command's output in Lua as it is produced, even while that command may take a few minutes to finish. Obviously popen executes the command separately from the Lua process.
Does anyone have an idea how to solve this?
local r = io.popen('command', 'r')  -- popen lives in the io library in Lua
for line in r:lines() do
  print(line)
end
r:close()
If the command uses buffered output (the default), then there's nothing you can do on the Lua side: the child process only flushes its output in large chunks. Some commands (e.g., cat -u) have an option to use unbuffered output, but they're rare.
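If the command itself has no such flag, one common workaround on Linux is to wrap it in GNU coreutils' stdbuf to force line buffering. A minimal sketch (the availability of stdbuf and the command name are assumptions):
-- Ask stdbuf to line-buffer the child's stdout so lines arrive
-- as they are produced (requires GNU coreutils, e.g., on Linux).
local r = io.popen('stdbuf -oL long_running_command', 'r')
for line in r:lines() do
  print(line)
end
r:close()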
I tried executing the same command in cmd/PowerShell; it takes only ~400 ms.
I also tried running other programs through a Dart process with the run or start method; they are fine.
But once I start an ffmpeg process, it takes about a second to warm up, even with the log level set to quiet and an empty input parameter.
In the end, the ffmpeg execution time plus warm-up time comes to about 1400 ms.
Has anyone seen a similar issue? Is it fixable?
I'm trying to run a function in my Lisp program. It is a bot connected to an IRC channel, and with a special command you can ask the bot to evaluate a simple Lisp expression. Because it is extremely dangerous to execute arbitrary code from people on the internet, I want the actual evaluation to happen in a VM that runs a Docker container for every evaluation query the bot gets.
My function looks like this:
(defun run-command (cmd)
  (uiop:run-program (list "docker" "run" "--rm" "-it" "my/docker" "sbcl" "--noinform" "--no-sysinit" "--no-userinit"
                          "--noprint" "--disable-debugger" "--eval" (string-trim '(#\^M) (format nil "~A" cmd)) "--eval" "(quit)")
                    :output '(:string :stripped t)))
The idea behind this function is to start a Docker container that contains SBCL, run the command via sbcl --eval, and print the result to the container's stdout. That printed string should be the return value of run-command.
If I call
docker run --rm -it my/docker sbcl --noinform --no-sysinit --no-userinit --noprint --disable-debugger --eval "(print (+ 2 3))" --eval "(quit)"
on my command line, I just get 5 as the result, which is exactly what I want.
But if I run the same command within lisp, with the uiop:run-program function I get
Subprocess #<UIOP/LAUNCH-PROGRAM::PROCESS-INFO {1004FC3923}>
with command ("docker" "run" "--rm" "-it"
"my/docker" "sbcl" "--noinform"
"--no-sysinit" "--no-userinit" "--noprint"
"--disable-debugger" "--eval" "(+ 2 3)")
as an error message, which means the process failed somehow. But I don't know what exactly is wrong here. If I just execute, for example, "ls", I get the output, so the function itself seems to work properly.
Is there some special knowledge I need about uiop:run-program or am I doing something completely wrong?
Thanks in advance.
Edit: It turns out that the -it flag caused issues. After removing the flag, a new error emerges: now the bot does not have permission to execute docker. Is there a way to give it permission without granting it sudo rights?
There's probably something wrong with the way docker (or SBCL) is invoked here. To get the error message, invoke uiop:run-program with the :error-output :string argument, and then choose the continue restart to actually terminate execution and get the error output printed (if you're running from SLIME or some other REPL). If you call this in a non-interactive environment, you can wrap the call in a handler-bind:
(handler-bind ((uiop:subprocess-error
                 ;; Invoke the CONTINUE restart associated with the condition.
                 (lambda (e) (invoke-restart (find-restart 'continue e)))))
  (run-command ...))
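For example, a minimal sketch of the suggested invocation, with a stand-in command that is guaranteed to fail in place of the real docker call:
(uiop:run-program (list "ls" "/nonexistent")   ; stand-in for the docker command
                  :output '(:string :stripped t)
                  :error-output :string)        ; capture stderr for the error report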
It turned out that -it did indeed cause trouble. After removing it and granting the bot the correct permissions, everything worked out fine.
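For the permission part, the usual way to let a service account run docker without sudo is to add it to the docker group ("botuser" is a placeholder name):
# Placeholder user name; re-login or restart the bot's service afterwards
# so the new group membership takes effect.
sudo usermod -aG docker botuser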
I'm executing the following command, which runs a group of scripts, each script being a curl download.
parallel --resume-failed --joblog logshd.log {1} ::: SH/*.sh
The set of files downloaded is quite large. I've noticed some files don't download.
I hoped that the --resume-failed parameter would ensure that all the downloads that fail are resumed and completed.
I'm not clear on whether that means I need to run the process a second time, or whether the retries happen during the single run.
From the GNU documentation:
Where --resume-failed reads the commands from the command line (and ignores the commands in the joblog), --retry-failed ignores the command line and reruns the commands mentioned in the joblog.
I'm not clear on what "ignores the command line" or "ignores the commands in the joblog" means. Could that be clarified?
Can --resume-failed and --retry-failed be declared within the same command, and if so, what is the effect of that?
Regards
Conteh
If we assume the download fails intermittently, then your answer is --retries 10. It will run the command up to 10 times before giving up.
--resume-failed and --retry-failed are both used when GNU Parallel has finished, and you then figure out that you want to retry some of the jobs again.
The difference between the two is in how to retry the command.
--retry-failed will run exactly the same command as failed before. It does that by looking in the joblog for the command. This is typically what you want.
--resume-failed is used if you figure out that the failing command actually needed some other parameter: that is, GNU Parallel should not run exactly the same command, but a (typically slightly changed) command built from the same arguments instead.
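Concretely, using the joblog and scripts from the question, the three options look like this:
# Retry each failing job up to 10 times during the original run:
parallel --retries 10 --joblog logshd.log {1} ::: SH/*.sh
# Afterwards, rerun exactly the commands the joblog records as failed:
parallel --retry-failed --joblog logshd.log
# Or rebuild the failed jobs from a (possibly edited) command line:
parallel --resume-failed --joblog logshd.log {1} ::: SH/*.sh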
I would like to start two different services in my Docker container and exit the container as soon as one of them exits. I looked at supervisor, but I can't find how to get it to quit as soon as one of the managed applications exits. It tries to restart them up to three times, which is the default setting, and then just sits there doing nothing. Is supervisor able to do this, or is there another tool for it? A bonus would be if there were also a way to let both managed programs write to stdout, tagged with their application name, e.g.:
[Program 1] Some output
[Program 2] Some other output
[Program 1] Output again
Since you asked if there was another tool... we designed and wrote a powerful replacement for supervisord specifically for Docker. It automatically terminates when all applications quit, has special service settings to control this behavior, and redirects stdout with tagged, syslog-compatible output lines. It's open source and being used in production.
Here is a quick start for Docker: http://garywiz.github.io/chaperone/guide/chap-docker-simple.html
There is also a complete set of tested base-images which are a good example at: https://github.com/garywiz/chaperone-docker, but these might be overkill and the earlier quickstart may do the trick.
I found solutions to both of my requirements by reading through the docs some more.
Exit supervisord on application exit
This can be achieved by using a custom eventlistener. I had to add the following segment into my supervisord configuration file:
[eventlistener:shutdownevent]
command=/shutdownhandler.sh
events=PROCESS_STATE_EXITED
supervisord will start the referenced script and, when the given event is triggered (PROCESS_STATE_EXITED fires after one of the managed programs exits without being restarted automatically), will send a line containing data about the event on the script's stdin.
The referenced shutdownhandler-script contains:
#!/bin/bash
while :
do
  echo -en "READY\n"             # signal that we are ready to receive an event
  read line                      # blocks until supervisord sends an event notification
  kill $(cat /supervisord.pid)   # SIGTERM supervisord via its pid file
  echo -en "RESULT 2\nOK"        # acknowledge the event per the eventlistener protocol
done
The script has to indicate that it is ready by sending "READY\n" on its stdout, after which it may receive an event data line on its stdin. For my use case, upon receipt of a line (meaning one of the managed programs has exited), a SIGTERM is sent to the supervisord process, found via the pid it leaves in its pid file (in the root directory by default). For technical completeness, I also included a positive acknowledgement for the eventlistener, though that should never matter.
Tagged output on stdout
I did this by simply starting a tail process in the background before starting supervisord, tailing the program's output log and piping the lines through ts (from the moreutils package) to prepend a tag. This way the output shows up via docker logs, with an easy way to see which program actually wrote each line.
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
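Putting the two parts together, the container entrypoint can start the tagging tails and then hand control to supervisord. A sketch (the log paths, the config path, and the second program are assumptions following the pattern above):
#!/bin/bash
# Hypothetical entrypoint: tag each program's log on stdout, then hand over to supervisord.
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
tail -fn0 /var/log/supervisor/program2.log | ts '[Program 2]' &
exec /usr/bin/supervisord -c /etc/supervisor/supervisord.conf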
I want to let a user select a script to run from the command line and see the output of the script as it runs, in (close to) real time. I know this isn't safe; it's for an internal tool used by a small team to allow a level of self-service.
Here's a quick example of a Ruby script that runs something on the command line and displays the output in real time. What I can't figure out is how to get this to work from Rails.
cmd = %q[echo '3...'; sleep 1;
echo '2...'; sleep 1;
echo '1...'; sleep 1;
echo 'Liftoff!']
puts '------ beginning command ------'
output_log = []
IO.popen(cmd).each do |line|
puts line
output_log << "[#{Time.now}] #{line}"
end.close # Without close, you won't be able to access $?
puts '------ done with command ------'
puts "The command's exit code was: #{$?.exitstatus}"
puts 'Here is the log:'
puts output_log.join('')
Are there any existing gems for this? Can I have an ajax request call a file that runs the command, outputs to a buffer, and then I can flush the buffer as the script runs and send a response back to the page it runs on? I'm even fine with using iframes if it helps.
Any help or pointing in the right direction is hugely appreciated!
To do this, you will need a Delayed Job or Sidekiq worker to actually run the script.
Use Ruby's IO.popen method to execute the command, read the output, and flush it into the database as it arrives.
In your web application, you then just poll that database object via AJAX.
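A minimal sketch of that worker, assuming Sidekiq and a hypothetical ScriptRun ActiveRecord model with output, finished, and exit_status columns (all of these names are assumptions):
# Hypothetical Sidekiq worker: streams the command's output into the
# database so the page can poll the record's output column via AJAX.
class ScriptRunnerWorker
  include Sidekiq::Worker

  def perform(script_run_id, cmd)
    run = ScriptRun.find(script_run_id)
    IO.popen(cmd) do |io|
      io.each do |line|
        # One write per line keeps polling clients up to date; batch if too chatty.
        run.update!(output: (run.output || '') + line)
      end
    end
    # The block form of IO.popen waits for the child, so $? is set here.
    run.update!(finished: true, exit_status: $?.exitstatus)
  end
end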