Redirect Output to stderr - ruby-on-rails

I want to redirect my output to stderr. I have a cron job:
function auth_tester {
  cd /data/$1/current && bundle exec rake 'authentication:tester' 1> /dev/null
}
which calls a rake task:
namespace :authentication do
  desc "Automatically runs authentication tester and notifies in case of failure"
  task :tester => :environment do
    auth_tester_results = AuthenticationTester.perform
    exit(auth_tester_results.success? ? 0 : 1)
  end
end
If 'auth_tester_results' is false, I want to redirect the output to stderr. How can this be done?

Since you are already dealing with shell, do it in shell:
function auth_tester {
  cd /data/$1/current && \
  bundle exec rake 'authentication:tester' >/dev/null 2>&1
  # HERE: ⇑⇑⇑⇑⇑⇑⇑⇑⇑⇑⇑⇑⇑⇑⇑
}
stderr has file descriptor 2. Here we redirect stdout to /dev/null and then stderr to stdout, so both streams ultimately end up in /dev/null.
To redirect stdout to stderr instead, use the opposite:
1>&2
To redirect output from Ruby itself, use the proper receiver with IO#puts:
$stderr.puts "Goes to stderr"
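If you would rather do the silencing from inside Ruby instead of in the shell, one option (my own sketch, not part of the answer above) is to reopen $stdout onto the null device and keep $stderr for failure messages:
# Discard everything written to stdout for the rest of the process,
# while stderr stays connected to the terminal / cron mailer.
$stdout.reopen(File::NULL, "w")
$stderr.puts "Only failures show up here"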

Sending output to STDERR can be done via print or puts. Here I opt to send to STDERR rather than $stderr, but this will work for either:
STDERR.puts 'my message here'
If you'd like to change the exit status of your Ruby script (to show the script did not finish successfully, for example) you can use exit with a parameter of false:
exit(false)
To both send output to STDERR and return an unsuccessful exit status, you can use abort:
abort 'my message here'
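Putting that together with the rake task from the question, a minimal sketch (assuming AuthenticationTester.perform returns an object that responds to success?, as in the original task):
namespace :authentication do
  desc "Automatically runs authentication tester and notifies in case of failure"
  task :tester => :environment do
    result = AuthenticationTester.perform
    # abort writes the message to STDERR and exits with a non-zero status;
    # on success the task falls through and exits 0 with nothing on stderr.
    abort "Authentication tester failed" unless result.success?
  end
end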
For additional information, see this article from honeybadger.io.

Related

expect output only stdout of the command and nothing else

How to write expect script which executes command and prints just the command's output?
I've tried various things, but none of them work. For example:
#!/usr/bin/expect
log_user 0
spawn bash
send "echo 1\r"
log_user 1
expect "1"
log_user 0
send "exit\r"
expect eof
This gives as output:
echo 1
while I need just "1". I hope somebody knows a simple solution to fix my example.
Capturing the output from sent commands is a bit of a pain in expect.
Here's a more general approach that does not rely on the log_user setting; it captures the output with a regular expression:
#!/usr/bin/expect
log_user 0
spawn bash
# set the prompt to a known value
send "PS1='>'\r"
expect -re {>$}
# send a command: we don't know what the output is going to be
send "echo \$RANDOM\r"
# capture the portion of the output that occurs just before the prompt
expect -re "\r\n(.*?)\r\n>$"
puts "output is: $expect_out(1,string)"
send "exit\r"
expect eof
A thought just occurred to me: if the command does not require any interaction, then expect is overkill. Just use exec:
set output [exec bash -c {echo $RANDOM}]
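Since the rest of this page is Ruby-centric, it may be worth noting the equivalent non-interactive shortcut from Ruby; a sketch using the standard Open3 module (my addition, not part of the original expect answer):
require 'open3'

# Run the command directly and capture only its stdout.
output, status = Open3.capture2("bash", "-c", "echo $RANDOM")
puts "output is: #{output.strip}" if status.success?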
OK, it looks like the following script does (at least something similar to) what I need:
log_user 0
spawn bash
expect "#" {} "\\\$" {}
send -- "echo AA\r"
expect -- "echo AA\r" {}
log_user 1
expect -- "AA"
log_user 0
send -- "exit\r"
expect eof

how to use linux 'mail' command from a ruby file to send a mail?

In the code below, I have to send an e-mail about the status of the process: whether it has completed, errored out, or timed out.
def check_for_forecasts
  wait_until_time = Time.now + timeout.minutes
  loop do
    RAILS_DEFAULT_LOGGER.info "Checking if process has finished"
    if find_token != 0
      update_completion_status
      RAILS_DEFAULT_LOGGER.info "Process has finished"
      break
    elsif find_error != 0
      update_timed_out_field
      RAILS_DEFAULT_LOGGER.info "Process has errored"
      break
    elsif DateTime.now > wait_until_time
      update_timed_out_field
      RAILS_DEFAULT_LOGGER.info "Process has timed out"
      break
    else
      RAILS_DEFAULT_LOGGER.info "Waiting for Process to finish"
      sleep(60) # if it hasn't completed then wait 1 min and try again.
    end
  end
end
In general, we use the Linux 'mail' command only in .sh files, not in .rb files. Below is how we write the 'mail' command to send mails in .sh files:
mail -s "the process has been finished" abc@xyz.com<<EOM
The process has finished successfully.
EOM
Is there any way to use the simple mail command in a .rb file, or do I have to install a gem for this?
Please help.
Thank you.
A really ugly but useful way to do the same from ruby:
to = "abc#xyz.com"
subject = "the process has been finished"
content = "The process has finished successfully."
`mail -s "#{subject}" #{to}<<EOM
#{content}
EOM`
You can use backticks in Ruby to run command line instructions. So what you would want is something like:
`mail -s "the process has been finished" abc@xyz.com<<EOM
The process has finished successfully.
EOM`
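If you would rather not build a shell heredoc inside backticks, a variation on the same idea is to feed the body to mail on stdin; a sketch using IO.popen (the mail flags mirror the examples above, and the address and wording are placeholders):
to      = "abc@xyz.com"
subject = "the process has been finished"

# Passing an array avoids shell interpolation of the subject/recipient;
# the block writes the message body to mail's stdin.
IO.popen(["mail", "-s", subject, to], "w") do |mail|
  mail.puts "The process has finished successfully."
end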

How to capture command not found with popen4

I'm using popen4 to capture stdout, stderr, and the exit status of a command line. I'm not tied to popen4 as long as I can capture those 3 things above. Currently I've not found a good way to capture command not found errors. I could do a which cmd in a pre-task I suppose, but hoping for something built in.
Below you can run a good task, bad task, and a fake task to see the differences. I'm doing this in a fresh rails new app with the popen4 gem
#!/usr/bin/env rake
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)
require 'open4'
# returns exit status 0, all is good
task :convert_good do
  puts "convert good"
  `wget https://www.google.com/images/srpr/logo3w.png`
  status = Open4.popen4("convert logo3w.png output.jpg") do |pid, stdin, stdout, stderr|
    stdin.close
    puts "stdout:"
    stdout.each_line { |line| puts line }
    puts "stderr: #{stderr.inspect}"
    stderr.each_line { |line| puts line }
  end
  puts "status: #{status.inspect}"
  puts "exit: #{status.exitstatus}"
end
# returns exit status 1, we messed up our command
task :convert_bad do
  puts "convert bad"
  status = Open4.popen4("convert logo3w-asdfasdf.png output.jpg") do |pid, stdin, stdout, stderr|
    stdin.close
    puts "stdout:"
    stdout.each_line { |line| puts line }
    puts "stderr: #{stderr.inspect}"
    stderr.each_line { |line| puts line }
  end
  puts "status: #{status.inspect}"
  puts "exit: #{status.exitstatus}"
end
# I want this to return exit code 127 for command not found
task :convert_none do
  puts "convert bad"
  status = Open4.popen4("convert_not_installed") do |pid, stdin, stdout, stderr|
    stdin.close
    puts "stdout:"
    stdout.each_line { |line| puts line }
    puts "stderr: #{stderr.inspect}"
    # it doesn't like stderr in this case
    # stderr.each_line { |line| puts line }
  end
  puts "status: #{status.inspect}"
  puts "exit: #{status.exitstatus}"
end
Here are the 3 local outputs
# good
stdout:
stderr: #<IO:fd 11>
status: #<Process::Status: pid 17520 exit 0>
exit: 0
# bad arguments
convert bad
stdout:
stderr: #<IO:fd 11>
convert: unable to open image `logo3w-asdfasdf.png': No such file or directory @ blob.c/OpenBlob/2480.
convert: unable to open file `logo3w-asdfasdf.png' @ png.c/ReadPNGImage/2889.
convert: missing an image filename `output.jpg' @ convert.c/ConvertImageCommand/2800.
status: #<Process::Status: pid 17568 exit 1>
exit: 1
# fake command not found, but returns exit 1 and stderr has no lines
convert bad
stdout:
stderr: #<IO:fd 11>
status: #<Process::Status: pid 17612 exit 1>
exit: 1
A couple of points first.
You're not actually using the popen4 gem - which is a wrapper around the open4 gem (if you're running on a Unix system, at least) - you're using the open4 gem directly. If you wanted to use popen4, you'd call it like this:
status = POpen4.popen4('cmd') do |stdout, stderr, stdin, pid|
  # ...
end
The popen4 method ultimately executes the specified command via the Kernel#exec method, and the behaviour of that depends on whether it determines the given command should be run in a shell or not. (You can see http://www.ruby-doc.org/core-1.9.3/Kernel.html#method-i-exec, but it's not terribly helpful. The source code is a better bet.)
For example:
> fork { exec "wibble" }
=> 1570
> (irb):56:in `exec': No such file or directory - wibble (Errno::ENOENT)
from (irb):56:in `irb_binding'
from (irb):56:in `fork'
from (irb):56:in `irb_binding'
from /Users/evilrich/.rvm/rubies/ree-1.8.7-2011.03/lib/ruby/1.8/irb/workspace.rb:52:in `irb_binding'
from :0
Here, exec was trying to execute the non-existent command 'wibble' directly - hence the exception.
> fork { exec "wibble &2>1" }
=> 1572
> sh: wibble: command not found
Here, exec saw I was using redirection and so executed my command in a shell. The difference? I get an error on STDERR and no exception. You can also force a shell to be used by specifying that in the command to be executed:
> fork { exec "sh -c 'wibble -abc -def'" }
Anyway, understanding the behaviour of Kernel#exec may help in getting the popen4 method to behave the way you want it to.
To answer your question, if I use the popen4 gem and construct the command in such a way that (by exec's rules) it'll get run in a shell or if I use "sh -c ..." in the command myself, then I get the kind of behaviour I think you are looking for:
> status = POpen4.popen4("sh -c 'wibble -abc -def'") {|stdout, stderr, stdin, pid| puts "Pid: #{pid}"}
Pid: 1663
=> #<Process::Status: pid=1663,exited(127)>
> puts status.exitstatus
127
Update
Interesting. Open4.popen4 will also return a 127 exit status if you read from stderr. So there's no need to use the popen4 gem.
> status = Open4.popen4("sh -c 'wibble -abc -def'") {|pid, stdin, stdout, stderr| stderr.read }
=> #<Process::Status: pid 1704 exit 127>
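For completeness, the same 127 behaviour can be reproduced without either gem by using Open3 from the standard library; a sketch (my addition, not from the original answer):
require 'open3'

# capture3 returns stdout, stderr and the Process::Status in one call.
stdout, stderr, status = Open3.capture3("sh -c 'wibble -abc -def'")
puts "exit: #{status.exitstatus}"  # 127 when the shell cannot find the command
puts "stderr: #{stderr}"           # e.g. "sh: wibble: command not found"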

PHP CLI doesn't use stderr to output errors

I'm running the PHP CLI through a NSTask in MacOS, but this question is more about the CLI itself.
I'm listening to the stderr pipe, but nothing is output there no matter what file I try to run:
If the file type is not plain text, stdout is set to '?'.
If the file is a php script with errors, the error messages are still printed to stdout.
Is there a switch to the interpreter to handle errors through stderr? Do I have an option to detect errors other than parsing stdout?
The display_errors directive (which can be set anywhere) optionally takes the value "stderr", which makes PHP report errors to stderr instead of stdout; it can also disable error output completely. Quoting from the PHP manual entry:
Value "stderr" sends the errors to stderr instead of stdout. The value is available as of PHP 5.2.4.
Alternatively, if you're using the command-line interface and you want to output the errors yourself, you can reuse the command-line input/output streams:
fwrite(STDERR, 'error message');
Here STDERR is an already opened stream to stderr.
Alternatively, if you want to do it just for this script and you are not using the CLI, you can open a file handle to php://stderr and write the error messages there.
$fe = fopen('php://stderr', 'w');
fwrite($fe, 'error message');
If you want the error messages emitted by the PHP interpreter to go to the stderr pipe, you must set display_errors to stderr.
This is required to get error messages from PHP back into the shell environment so they can be parsed properly. You still need to exit(1) (or whatever integer) to return an exit status code from PHP to the shell.
fwrite(STDERR, 'error message'); //output message into 2> buffer
exit(0x0a); //return error status code to shell
Then, your crontab entry will look like:
30 3 * * * /usr/bin/php /full/path/to/phpFile.php >> /logdir/fullpath/journal.log 2>> /logdir/fullpath/error_journal.log
You can also use file_put_contents() with "php://stderr" to output to standard error, like:
php -r 'file_put_contents("php://stderr", "Hiya, PHP!\n"); echo "Bye!\n";' 1>/dev/null
which outputs "Hiya, PHP!\n" to standard error and nothing to standard output when executed in a Bash shell.

Capistrano & Bash: ignore command exit status

I'm using Capistrano run a remote task. My task looks like this:
task :my_task do
  run "my_command"
end
My problem is that if my_command has an exit status != 0, then Capistrano considers it failed and exits. How can I make Capistrano keep going when the exit status is not 0? I've changed my_command to my_command;echo and it works, but it feels like a hack.
The simplest way is to just append true to the end of your command.
task :my_task do
  run "my_command"
end
Becomes
task :my_task do
  run "my_command; true"
end
For Capistrano 3, you can (as suggested here) use the following:
execute "some_command.sh", raise_on_non_zero_exit: false
The grep command exits non-zero based on what it finds. In the use case where you care about the output but don't mind if it's empty, you can discard the exit status silently:
run %Q{bash -c 'grep #{escaped_grep_command_args} ; true' }
Normally, I think the first solution is just fine; I'd make it document itself, though:
cmd = "my_command with_args escaped_correctly"
run %Q{bash -c '#{cmd} || echo "Failed: [#{cmd}] -- ignoring."'}
You'll need to patch the Capistrano code if you want it to do different things with the exit codes; it's hard-coded to raise an exception if the exit status is not zero.
Here's the relevant portion of lib/capistrano/command.rb. The line that starts with if (failed... is the important one. Basically it says if there are any nonzero return values, raise an error.
# Processes the command in parallel on all specified hosts. If the command
# fails (non-zero return code) on any of the hosts, this will raise a
# Capistrano::CommandError.
def process!
  loop do
    break unless process_iteration { @channels.any? { |ch| !ch[:closed] } }
  end
  logger.trace "command finished" if logger
  if (failed = @channels.select { |ch| ch[:status] != 0 }).any?
    commands = failed.inject({}) { |map, ch| (map[ch[:command]] ||= []) << ch[:server]; map }
    message = commands.map { |command, list| "#{command.inspect} on #{list.join(',')}" }.join("; ")
    error = CommandError.new("failed: #{message}")
    error.hosts = commands.values.flatten
    raise error
  end
  self
end
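If you don't want to patch the gem itself, another option (a hypothetical helper of my own, not from this answer) is to wrap run and rescue the error it raises; this assumes the usual Capistrano 2 logger is available in your configuration:
# Hypothetical wrapper: run a command but swallow the CommandError that
# Capistrano 2 raises when the remote exit status is non-zero.
def run_ignoring_failure(command)
  run command
rescue Capistrano::CommandError => e
  logger.info "ignoring failed command: #{e.message}"
end

task :my_task do
  run_ignoring_failure "my_command"
end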
I find the easiest option is to do this:
run "my_command || :"
Note: ':' is the shell no-op command, so the exit code of my_command is simply ignored.
I just redirect STDERR and STDOUT to /dev/null, so your
run "my_command"
becomes
run "my_command > /dev/null 2> /dev/null"
This works pretty well for standard Unix tools, where, say, cp or ln could fail, but you don't want to halt deployment on such a failure.
I'm not sure which version added this, but I like handling this problem by using raise_on_non_zero_exit:
namespace :invoke do
  task :cleanup_workspace do
    on release_roles(:app), in: :parallel do
      execute 'sudo /etc/cron.daily/cleanup_workspace', raise_on_non_zero_exit: false
    end
  end
end
Here is where that feature is implemented in the gem.
https://github.com/capistrano/sshkit/blob/4cfddde6a643520986ed0f66f21d1357e0cd458b/lib/sshkit/command.rb#L94
