Sinatra print string on webpage - ruby-on-rails

I am trying to print a process's status on a webpage, but when I hit host:port/status I don't see any response; it returns a blank page. The ps -ef command works on the command line, and I tried printing its result in the getStatus method, but nothing is printed.
I want to display the process execution status on the website.
def getStatus
  puts #{system('ps -ef | grep abc.jar|grep -v grep')? "Running": "Stopped"}
  return #{system('ps -ef | grep abc.jar|grep -v grep')? "Running": "Stopped"}
end

get '/status' do
  return getStatus
end

The expression
puts #{…
will only print a newline character, since # outside of a string literal starts a comment; the same goes for return #…, which simply returns nil (hence the blank page).
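You can see this quickly in irb: #{} only interpolates inside a double-quoted string.
puts #{1 + 1}      # bare puts: everything from the first # onward is a comment, so only a newline is printed
puts "#{1 + 1}"    # prints 2: interpolation happens inside the string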
To get the actual output, use something like this (I took the liberty of transforming your code snippet into more idiomatic Ruby):
def running?
  `ps -ef` =~ /abc\.jar/
end

get '/status' do
  status = running? ? 'Running' : 'Stopped'
  logger.debug "Status: #{status}"
  status
end
Now the running? method performs your check:
get the result of ps -ef via Kernel#`
match that result against the regular expression /abc\.jar/ via String#=~ (basically perform grep abc\.jar in Ruby land)
Step 1 is performed in a subshell, and everything the subshell writes to standard output is returned into Ruby land as a string, whereas Kernel#system only returns true or false depending on whether the command exited with a zero exit status; any output from commands started with system('...') goes straight to stdout instead. Your initial snippet was also fragile because the value you were testing was merely the exit status of the last command in the pipeline, grep -v grep.
(Technically, a subshell is not required, but the equivalent IO.popen call is more complex.)
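To make the difference concrete, here is a small sketch (nothing Sinatra-specific; it just assumes a Unix-like system where ps is available):
# Kernel#` captures the command's standard output and returns it as a String.
output = `ps -ef`
status = output.lines.grep(/abc\.jar/).any? ? 'Running' : 'Stopped'
puts status

# Kernel#system sends the command's output straight to our own stdout and only
# returns true (exit status 0), false (non-zero exit) or nil (command not found).
puts system('ps -ef > /dev/null').inspect   # => true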

Related

Ansible shell module returns error when grep results are empty

I am using Ansible's shell module to find a particular string and store the result in a variable, but if grep does not find anything I get an error.
Example:
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  check_mode: no
When I run this playbook and the http_status string is not there, the playbook stops, and I am not getting any stderr.
How can I make Ansible run without interruption even if the string is not found?
grep by design returns code 1 if the given string was not found. Ansible by design stops execution if the return code is different from 0. Your system is working properly.
To prevent Ansible from stopping playbook execution on this error, you can:
add the ignore_errors: yes parameter to the task
use the failed_when: parameter with a proper condition
Because grep returns exit code 2 for actual errors (rather than simply "no match"), the second method seems more appropriate, so:
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt
  register: cmdln
  failed_when: "cmdln.rc == 2"
  check_mode: no
You might also consider adding changed_when: false so that the task won't be reported as "changed" every single time.
All options are described in the Error Handling In Playbooks document.
Like you observed, Ansible will stop execution if the grep exit code is not zero. You can ignore this with ignore_errors.
Another trick is to pipe the grep output to cat: since cat's stdin is grep's stdout and the pipeline's exit code is that of its last command, the overall exit code will always be zero, whether or not there is a match. Try it.
- name: Get the http_status
  shell: grep "http_status=" /var/httpd.txt | cat
  register: cmdln
  check_mode: no

ruby pipes, IO and stderr redirect

I am looking to have a Ruby program (a rake task) observe the output of another rake task. The output writer writes to stderr, and I'd like to read those lines, but I'm having difficulty setting it up. If I have a writer (stdout_writer.rb) that constantly prints something:
#!/usr/bin/env ruby
puts 'writing...'
while true
  $stdout.puts '~'
  sleep 1
end
and a file that reads it and echoes (stdin_reader.rb):
#!/usr/bin/env ruby
puts 'reading...'
while input = ARGF.gets
  puts input
  input.each_line do |line|
    begin
      $stdout.puts "got line #{line}"
    rescue Errno::EPIPE
      exit(74)
    end
  end
end
and I'm trying to have them work together, nothing happens:
$ ./stdout_writer.rb 2>&1 | ./stdin_reader.rb
$ ./stdout_writer.rb | ./stdin_reader.rb
nothing... although if I just echo into stdin_reader.rb, I get what I expect:
piousbox#e7440:~/projects/opera_events/sendgrid-example-operaevent$ echo "ok true" | ./stdin_reader.rb
reading...
ok true
got line ok true
piousbox#e7440:~/projects/opera_events/sendgrid-example-operaevent$
So how would I set up a script that gets stderr piped into it, so that it can read it line by line? Additional info: this will be an Ubuntu upstart service, script1.rb | script2.rb, where script1 sends a message and script2 verifies that the message was sent by script1.
The issue seems to be that because stdout_writer runs forever, stdin_reader never gets a chance to read stdout_writer's STDOUT; the pipe, in this case, waits on stdout_writer before stdin_reader starts reading. I tested this by changing while true to 5.times do. If you do that and wait 5 seconds, the result of ./stdout_writer.rb | ./stdin_reader.rb is
reading...
writing...
got line writing...
~
got line ~
~
got line ~
~
got line ~
~
got line ~
~
got line ~
This isn't an issue with your code itself, but with the way STDOUT | STDIN piping is handled when one Ruby process writes to another.
Also, I don't think I've ever learned as much as I learned researching this question. Thank you for the fun exercise.
The output from stdout_writer.rb is being buffered by Ruby, so the reader process doesn’t see it. If you wait long enough, you should see the result appear in chunks.
You can turn buffering off and get the result you’re expecting by setting sync to true on $stdout at the start of stdout_writer.rb:
$stdout.sync = true
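So the writer becomes (the same script as above, with one extra line near the top):
#!/usr/bin/env ruby
$stdout.sync = true   # flush every write immediately instead of buffering

puts 'writing...'
while true
  $stdout.puts '~'
  sleep 1
end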

Grep regular expression stop after first match

I am trying to figure out what the grep syntax would be to get just one, unique result from a search.
For example :
grep "^SEVERE" server.out
SEVERE: Cannot connect repository, Error occurred when calling service PING_SERVER.
SEVERE: Cannot connect repository, Error occurred when calling service PING_SERVER.
SEVERE: Cannot connect repository, Error occurred when calling service PING_SERVER.
I would like the output to show only the first occurrence of that match.
Any help would be great!
GNU grep 2.16, which comes with Cygwin, has this option (from man grep):
-m NUM, --max-count=NUM
Stop reading a file after NUM matching lines. If the input is
standard input from a regular file, and NUM matching lines are
output, grep ensures that the standard input is positioned to
just after the last matching line before exiting, regardless of
the presence of trailing context lines. This enables a calling
process to resume a search. When grep stops after NUM matching
lines, it outputs any trailing context lines. When the -c or
--count option is also used, grep does not output a count
greater than NUM. When the -v or --invert-match option is also
used, grep stops after outputting NUM non-matching lines.
So try:
$ grep -m 1 "^SEVERE" server.out
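If you happen to be doing the same check from Ruby rather than the shell, the equivalent of -m 1 is simply to stop at the first match (a sketch, assuming the same server.out file):
# Enumerable#find stops reading the file as soon as the block returns true.
first_severe = File.foreach('server.out').find { |line| line.start_with?('SEVERE') }
puts first_severe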

grep show all lines, not just matches, set exit status

I'm piping the output of a command to egrep, which I'm using to make sure a particular failure string doesn't appear in it.
The command itself, unfortunately, won't return a proper non-zero exit status on failure, which is why I'm doing this.
command | egrep -i -v "badpattern"
This works as far as giving me the exit code I want (1 if badpattern appears in the output, 0 otherwise), but it'll only output lines that don't match the pattern (as the -v switch was designed to do). For my needs, the lines that do match are among the most interesting ones.
Is there a way to have grep just blindly pass through all lines it gets as input, and just give me the exit code as appropriate?
If not, I was thinking I could just use perl -ne "print; exit 1 if /badpattern/". I use -n rather than -p because -p won't print the offending line (since it prints after running the one-liner). So, I use -n and call print myself, which at least gives me the first offending line, but then output (and execution) stops there, so I'd have to do something like
perl -e '$code = 0; while (<>) { print; $code = 1 if /badpattern/; } exit $code'
which does the whole deal, but is a bit much. Is there a simple command-line switch for grep that will just do what I'm looking for?
Actually, your perl idea is not bad. Try:
perl -pe 'END { exit $status } $status=1 if /badpattern/;'
I bet this is at least as fast as the other options being suggested.
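If you would rather stay in Ruby than Perl, a roughly equivalent one-liner is (a sketch; it assumes ruby is on your PATH):
command | ruby -pe '$seen = true if $_ =~ /badpattern/; END { exit 1 if $seen }'
Here -p prints every input line unchanged, and the END block makes the exit status 1 only if some line matched badpattern.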
Another option: tee a copy of the input to the terminal while grep -q quietly provides the exit status:
$ tee /dev/tty < ~/.bashrc | grep -q spam && echo spam || echo no spam
How about redirecting to /dev/null? That removes all lines, but you still get the exit code:
$ grep spam .bashrc > /dev/null
$ echo $?
1
$ grep alias .bashrc > /dev/null
$ echo $?
0
Or you can simply use the -q switch
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit
immediately with zero status if any match is found, even if an
error was detected. Also see the -s or --no-messages option.
(-q is specified by POSIX.)

unix at command pass variable to shell script?

I'm trying to set up a simple timer that gets started from a Rails application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord.
Here's my test lines in Rails
output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
return render :text => output
Then my parking_timer.sh looks like this
#!/bin/sh
~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
echo "All Done"
Finally, ParkingTimer.rb reads the passed variable with
ARGV.each do |a|
  puts "Argument: #{a}"
end
The problem is that the Unix at command doesn't seem to like variables and only wants to deal with filenames. I either get one of two errors depending on how I place the quotes.
If I put quotes around the right hand side like so
... "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"
I get,
-bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory
If I leave the quotes out, I get,
at: garbled time
This is all happening on a Mac OS X 10.6 box running Rails 2.3 and Ruby 1.8.6.
I've already messed around with BackgrounDrb and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
After playing around with irb a bit, here's what I found.
The backtick operator invokes the shell after ruby has done any interpretation necessary. For my test case, the strace output looked something like this:
execve("/bin/sh", ["sh", "-c", "echo at 12:57 < /etc/fstab"], [/* 67 vars */]) = 0
Since we know what it's doing, let's take a look at how your command will be executed:
/bin/sh -c "at 12:57 < RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE"
That looks very odd. Do you really want to pipe parking_timer.sh, the script, as input into the at command?
What you probably ultimately want is something like this:
/bin/sh -c "RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE | at 12:57"
Thus, the output of the parking_timer.sh command will become the input to the at command.
So, try the following:
output = `#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE | at #{(Time.now + 60).strftime("%H:%M")}`
return render :text => output
You can always use strace or truss to see what's happening. For example:
strace -o strace.out -f -ff -p $IRB_PID
Then grep '^exec' strace.out* to see where the command is being executed.
