Is there a way to call a command like last inside a Ruby script? I can use %x to call commands like ls and ls -l within a script, but is that also a reasonable approach for the more complex, constantly growing log output of the last command?
Here's an example that retrieves the user name, IP, start time and duration:
require 'time' # for Time.parse

%x(last -i).each_line do |line|
  line.chomp!           # Remove the trailing newline
  break if line.empty?  # last prints a blank line when it is done
  # 3 possibilities to extract information:
  user, *other_columns = line.split(' ')  # 1. Use split
  start = Time.parse(line[39, 16])        # 2. Use the known position of a column
  ip = line[22, 17].strip
  if line =~ /\((\d+):(\d+)\)/            # 3. Use a regex
    duration = $1.to_i * 60 + $2.to_i
  else
    duration = nil
  end
  info = { user: user, ip: ip, start: start, duration: duration }
  # TODO: Check that user isn't "reboot"
  puts info
end
# {:user=>"ricou", :ip=>"0.0.0.0", :start=>2016-11-01 21:29:00 +0100, :duration=>141}
# {:user=>"ricou", :ip=>"0.0.0.0", :start=>2016-11-01 15:21:00 +0100, :duration=>57}
Which information do you need exactly?
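As a follow-up to the TODO in the code above, here is a minimal sketch that skips pseudo-entries; the PSEUDO_USERS list is an assumption about what last reports on a given system:

# Names that last may report which are not real logins (assumed list)
PSEUDO_USERS = %w[reboot shutdown wtmp].freeze

%x(last -i).each_line do |line|
  line.chomp!
  break if line.empty?
  user = line.split(' ').first
  next if PSEUDO_USERS.include?(user)  # skip reboot/shutdown records
  puts user
end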
Related
I need to call a command (in a Sinatra or Rails app) like this:
`command sub`
Some log output is produced while the command is executing.
I want to see that log appear continuously as the process runs.
But I can only get the log as a single string after the command has finished, with:
result = `command sub`
So, is there a way to implement this?
On Windows I have had the best experience with IO.popen.
Here is a sample:
require 'logger'

$log = Logger.new("#{__FILE__}.log", 'monthly')

# Here comes the full command line; in this case it is a Java program
command = %Q{java -jar getscreen.jar #{$userid} #{$password}}
$log.debug command

STDOUT.sync = true
begin
  # Note the somewhat strange 2> syntax. This denotes the file descriptor to
  # redirect: by convention, 0 is stdin, 1 is stdout, 2 is stderr.
  IO.popen(command + " 2>&1") do |pipe|
    pipe.sync = true
    while str = pipe.gets # for every line the external program returns
      # do something with the captured line
    end
  end
rescue => e
  $log.error "#{__LINE__}:#{e}"
  $log.error e.backtrace
end
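As an alternative to the shell-level 2>&1 redirect, the standard library's Open3.popen2e merges stderr into stdout for you. A minimal sketch, with the command line only as a placeholder:

require 'open3'

command = %w[java -jar getscreen.jar]  # placeholder command line

# popen2e yields a single stream carrying both stdout and stderr
Open3.popen2e(*command) do |stdin, stdout_and_stderr, wait_thr|
  stdout_and_stderr.each_line do |line|
    puts line  # handle each line as soon as it arrives
  end
  puts "exit status: #{wait_thr.value.exitstatus}"
end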
There are six ways to do it, but the way you're using isn't the right one here because it waits for the process to return.
Pick one from here:
http://tech.natemurray.com/2007/03/ruby-shell-commands.html
I would use Open3.popen3 if I were you.
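A minimal sketch of streaming output with Open3.popen3; command sub is just the placeholder from the question:

require 'open3'

Open3.popen3('command sub') do |stdin, stdout, stderr, wait_thr|
  stdin.close                 # we send the command no input
  stdout.each_line do |line|  # print output as soon as it is produced
    puts line
  end
  warn stderr.read            # dump any error output at the end
  puts "exited with #{wait_thr.value.exitstatus}"
  # Note: if the command writes a lot to stderr, read both streams
  # concurrently (or use popen2e) to avoid blocking.
end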
I want to pass an array-like argument to my script. Suppose process.rb is my script; the arguments will look like this:
Input:
process.rb server{1..4}
process.rb prodserver{2..3}
process.rb devserver3
process.rb should accept all of these inputs and parse them so that when I print the variable holding the arguments, I get the result below.
Output:
puts arguments
server1
server2
server3
server4
or
prodserver2
prodserver3
or
devserver3
I have a shell script which does the same:
for i in "$@"
do
  echo $i
done
Input:
server{1..4}
Output:
server1
server2
server3
server4
I want to have the same logic in Ruby.
Since I am a newbie in Ruby, I have not been able to find this on Google.
Please let me know how I can get this output, or point me to an article related to my question.
The list is expanded by the shell before it ever hits your script. In other words, both your shell script and your Ruby script do not receive a single argument server{1..4} but rather they receive four arguments server1 server2 server3 server4, before they even start interpreting the arguments themselves.
You can simply iterate over those; there is no need to parse the {1..4} shell expansion syntax yourself because you will never see it! It has already been parsed and expanded by the shell before the arguments are passed to your script.
ruby -e 'p ARGV' -- server{1..4}
# ["server1", "server2", "server3", "server4"]
#!ruby
ARGV.each do |i|
  puts i
end
Basically, ARGV holds all the arguments passed to the program, and puts prints a string with a newline added (the same as echo without the -n flag in the shell).
Command-line arguments in Ruby end up in ARGV. You can duplicate your shell script's functionality by iterating over that:
ARGV.each do |a|
  puts a
end
If I understand you correctly, you want to expand the range that comes in string form in your argument ARGV[0]? My samples use a literal string to demonstrate that it works; replace the string with ARGV[0].
def expand_range(arg)
  string, range = arg.split("{")  # split arg into its string part and range-string part
  if range  # if a range is given
    # Parse the range string into a Range by splitting it on "..",
    # splatting the resulting pair into Range.new (converting both to integers),
    # then enumerating each number in the range.
    Range.new(*range.split("..").map(&:to_i)).each do |val|
      # concatenate the string part with the number
      p "#{string}#{val}"
    end
  else  # else just print the string
    p string
  end
end
expand_range 'server{1..4}'
# "server1"
# "server2"
# "server3"
# "server4"
expand_range 'devserver3'
#"devserver3"
Personally, I would return an array and print that, instead of printing inside the method itself; that would be more flexible. A sketch of that variant follows.
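A minimal sketch, under the same assumptions about the argument format:

def expand_range(arg)
  # Returns an array of expanded names instead of printing them
  string, range = arg.split("{")
  return [string] unless range                 # no brace range given
  first, last = range.split("..").map(&:to_i)  # e.g. "1..4}" -> [1, 4]
  (first..last).map { |val| "#{string}#{val}" }
end

puts expand_range(ARGV[0] || 'server{1..4}')   # prints one name per line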
script.rb:
puts 'hello'
puts 'foo'
main.rb:
puts `jruby script.rb` # receive the expected result
The question:
How can the same be achieved when the script is first read into a string before execution?
main.rb:
code = File.open('script.rb', 'r').read.gsub('"', '\"')
# puts `jruby -e '#{code}'` # Does not work for relatively big files;
Windows and Unicode are the reason for this question.
Please note that `jruby script.rb` creates a new process, which is essential.
Store the modified script in a Tempfile and run that instead of passing the whole contents as an eval argument:
require 'tempfile'

code = IO.read('script.rb').gsub('"', '\"')

begin
  f = Tempfile.new('mytempfile')
  f.write code
  f.close
  puts `jruby '#{f.path}'`
ensure
  f.close
  f.unlink
end
The reason you’re likely getting an error is either a lack of proper escaping or a limit on the maximum argument length in the shell.
Also, beware that in your original implementation you never close the original file. I’ve fixed that by instead using IO.read.
On the command line, running
$ getconf ARG_MAX
will give the upper limit on how many bytes can be used for command-line arguments and environment variables.
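The same limit can also be queried from Ruby itself; a minimal sketch (the constant is only defined on POSIX systems):

require 'etc'

# Maximum number of bytes available for command-line arguments plus environment
puts Etc.sysconf(Etc::SC_ARG_MAX)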
@Andrew Marshall's answer is better, but suppose you don't want to use a temp file, and assuming we can use fork in JRuby:
require 'ffi'

module Exec
  extend FFI::Library
  ffi_lib FFI::Platform::LIBC
  attach_function :fork, [], :int
end

code = IO.read('script.rb')

pid = Exec.fork
if 0 == pid
  eval code
  exit 0
else
  Process.waitpid pid
end
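Another temp-file-free option, sketched here on the assumption that jruby, like ruby, reads the program from standard input when given - as the script name, is to feed the code to a fresh process over stdin:

require 'open3'

code = IO.read('script.rb')

# Spawn a new jruby process and pass the program text on stdin,
# avoiding both temp files and argument-length limits.
output, status = Open3.capture2('jruby', '-', stdin_data: code)
puts output
puts "exit status: #{status.exitstatus}"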
Use require (or rather require_relative, since the file sits next to main.rb):
main.rb:
require_relative "script"
I've written a small function that returns the result of executing a command.
function axsh(cmd)
  local fullCmd = cmd:lower()
  local f, err = io.popen(fullCmd, "r")
  if not f then
    return nil, "Could not create the process '" .. fullCmd .. "' \nError:" .. err
  end
  return f:read("*all")
end
s=axsh("echo hi")
--print all bytes
print(s:byte(1,s:len()))
The output always has a \n at the end, no matter what the command is:
104 105 10
Edit: it happens not only with my own binary command-line application but also with almost all OS commands. Windows: "dir", "ipconfig", "echo"... Linux: "ls", "pwd"...
But when I run the command separately (e.g. in the Windows command prompt) there is no trailing line feed. I don't need it, so I have to remove the last character before returning the result.
Question: does this line feed always exist in the result of popen()? I can't find any reference to this behavior in the documentation.
No. io.popen just returns whatever string the command produces. You use echo as the command, which happens to put a newline after the string (this is what makes the command prompt appear on the next line, instead of right after the output).
You can test it by trying this:
s=axsh([[lua -e "io.write([=[hi]=])"]])
return string.byte(s,1,-1)
which does not end the output with a newline.
In shell, I can do
$ cat name_of_file_with_a_lot_of_text | grep "What I am looking for"
Inside the Rails console, can I achieve something similar when I run a command and the output is huge, for example a DB query?
I am aware of outputting it as YAML, but that is not what I am looking for.
Thanks.
Yes, you can. The method is called gr... wait for it ...ep. Ruby's grep works on Array and any other Enumerable object. For example, to get all the to_xxx methods of a number, just do:
1.methods.grep(/to_/)
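For huge multi-line output, the same idea works by grepping the lines of the resulting string; a minimal sketch reusing the file from the question:

# Filter huge console output line by line
big_output = `cat name_of_file_with_a_lot_of_text`  # or any multi-line string
puts big_output.lines.grep(/What I am looking for/)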
I had the same question and wasn't very satisfied with the somewhat snarky response from ream88, so I decided to take a crack at it.
# Allows you to filter output to the console using grep
# Ex:
# def foo
# puts "Some debugging output here"
# puts "The value of x is y"
# puts "The value of foo is bar"
# end
#
# grep_stdout(/value/) { foo }
# # => The value of x is y
# # => The value of foo is bar
# # => nil
def grep_stdout(expression)
  # First we need to create a ruby "pipe", which is a pair of connected IO objects:
  # the first is read-only (it will stand in for $stdin in the child process) and
  # the second is write-only (it will stand in for $stdout in the main process).
  placeholder_in, placeholder_out = IO.pipe

  # This child process handles the grep'ing. It's done in a child process so that
  # it can operate in parallel with the main process.
  pid = fork {
    # sync $stdout so we can report any matches asap
    $stdout.sync = true
    # replace $stdin with placeholder_in
    $stdin.reopen(placeholder_in)
    # We have to close both placeholder_out and placeholder_in because all instances
    # of an IO stream must be closed in order for it to ever reach EOF. There are two
    # in this method: one in the child process and one in the main process.
    placeholder_in.close
    placeholder_out.close

    # Loop continuously until we reach EOF (which happens when all
    # instances of placeholder_out have been closed)
    read_buffer = ''
    loop do
      begin
        read_buffer << $stdin.readpartial(4096)
        if line_match = read_buffer.match(/(.*\n)(.*)/m)
          print line_match[1].lines.grep(expression).join  # grep complete lines
          read_buffer = line_match[2]  # save the remaining partial line for the next iteration
        end
      rescue EOFError
        print read_buffer.lines.grep(expression).join  # grep any remaining partial line at EOF
        break
      end
    end
  }

  # Save the original stdout to a variable so we can restore it after this
  # method is done
  original_stdout = $stdout
  # Redirect stdout to our pipe
  $stdout = placeholder_out
  # sync $stdout so that we can start operating on it as soon as possible
  $stdout.sync = true
  # allow the block to execute and save its return value
  return_value = yield
  # Set stdout back to the original so output will flow again
  $stdout = original_stdout
  # close the main instances of placeholder_in and placeholder_out
  placeholder_in.close
  placeholder_out.close
  # Wait for the child process to finish
  Process.wait pid

  # Because the connection to the database has a tendency to go away when calling this,
  # reconnect here if we're using ActiveRecord
  if defined?(ActiveRecord)
    suppress_stdout { ActiveRecord::Base.verify_active_connections! }
  end

  # return the value of the block
  return_value
end
The obvious drawback of my solution is that the output is lost. I'm not sure how to get around that without calling yield twice.
EDIT I've changed my answer to only call fork once, which allows me to keep the output of the block and return it at the end. Win.
EDIT 2: You can now get all of this functionality (and more!) from this gem: https://github.com/FutureAdvisor/console_util