Printing to screen in a rake task

I have a long running rake task. Every now and then I print an update to the screen to let me know how far along the task has come.
puts "Almost there..."
My problem is that all the puts statements seem to buffer somewhere and won't print to the screen until after the task is complete, at which point they are all printed at once.
Is there some way to force them to print as the task is running?

Disable output buffering at the top of the task:
STDOUT.sync = true

Maybe you could flush the standard output:
STDOUT.flush
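For example, here is a minimal sketch combining both suggestions; the task name and the do_slow_work helper are illustrative, not from the question:
task :long_running do
  $stdout.sync = true          # stop Ruby from buffering output for the rest of the task
  100.times do |i|
    do_slow_work(i)            # hypothetical unit of work
    puts "Almost there... (#{i + 1}/100)"
    $stdout.flush              # redundant once sync is on, shown for completeness
  end
end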


Code not getting executed in Ruby threads

I have a rake task that uses some threads, and now I've run into a really strange case...
Some code wasn't executed, so I started experimenting with simple puts statements...
Basically I have this:
Thread.new do
  puts "hi"
  puts "there"
  [more code]
end
These are three consecutive runs of my rake task:
$ rake task:execute
hi
there
$ rake task:execute
[nothing!]
$ rake task:execute
hi
I tried Ruby 2.0 and 2.1.
I don't know whether the problem is limited to puts, but I suspect not: the real code wasn't executed either, which is why I started debugging with printouts, only to discover that even those don't always run.
Strange?
You need to save references to all the threads and then call join on each of them to wait for them to complete. Ruby will not wait for other threads once the main thread exits.
threads = 3.times.map { Thread.new { puts "hello" } }
# do something else while threads run, if you want
threads.each(&:join)
Your main thread (the rake task itself) is probably completing before your subthread completes. You can do something like this:
t = Thread.new do
  puts "hi"
  puts "there"
  [more code]
end
[do other stuff in the main thread]
t.join # Let the subthread catch up

Simple program that reads and writes to a pipe

Although I am quite familiar with Tcl, this is a beginner question. I would like to read from and write to a pipe. I would like a solution in pure Tcl, not using a library like Expect. I copied an example from the Tcl wiki but could not get it running.
My code is:
cd /tmp
catch {
    console show
    update
}
proc go {} {
    puts "executing go"
    set pipe [open "|cat" RDWR]
    fconfigure $pipe -buffering line -blocking 0
    fileevent $pipe readable [list piperead $pipe]
    if {![eof $pipe]} {
        puts $pipe "hello cat program!"
        flush $pipe
        set got [gets $pipe]
        puts "result: $got"
    }
}
go
The output is "executing go" followed by "result: " with nothing after it; however, I would expect that reading from the pipe would return the line I sent to the cat program.
What is my error?
EDIT:
I followed potrzebie's answer and got a small example working. That's enough to get me going. A quick workaround to test my setup was the following code (not a real solution but a quick fix for the moment).
cd /home/stephan/tmp
catch {
    console show
    update
}
puts "starting pipe"
set pipe [open "|cat" RDWR]
fconfigure $pipe -buffering line -blocking 0
after 10
puts $pipe "hello cat!"
flush $pipe
set got [gets $pipe]
puts "got from pipe: $got"
Writing to the pipe and flushing won't make the OS's multitasking immediately leave your program and switch to the cat program. Try putting after 1000 between the puts and the gets commands, and you'll see that you'll probably get the string back: cat has then been given some time slices and has had the chance to read its input and write its output.
You can't control when cat reads your input and writes it back, so you'll have to either use fileevent and enter the event loop to wait (or periodically call update), or periodically try reading from the stream. Or you can keep it in blocking mode, in which case gets will do the waiting for you. It will block until there's a line to read, but meanwhile no other events will be responded to. A GUI for example, will stop responding.
The example seems to be written for Tk and meant to be run by wish, which enters the event loop automatically at the end of the script. Add the piperead procedure and either run the script with wish or add a vwait command to the end of the script and run it with tclsh (see the sketch below).
PS: For line-buffered I/O to work on a pipe, both programs involved have to use it (or no buffering). Many programs (grep, sed, etc.) use full buffering when they're not connected to a terminal. One way to prevent that is with the unbuffer program, which is part of Expect (you don't have to write an Expect script; it's a stand-alone program that just happens to be included with the Expect package).
set pipe [open "|[list unbuffer grep .]" {RDWR}]
I guess you're executing the code from http://wiki.tcl.tk/3846, the page entitled "Pipe vs Expect". You seem to have omitted the definition of the piperead proc; indeed, when I copy-and-pasted the code from your question, I got the error invalid command name "piperead". If you copy-and-paste the definition from the wiki, you should find that the code works. It certainly did for me.
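For reference, here is a minimal sketch of the missing piece when running under tclsh; the proc body is illustrative rather than the wiki's exact code:
proc piperead {pipe} {
    # Called by the event loop whenever the pipe becomes readable.
    if {[eof $pipe]} {
        close $pipe
        set ::done 1
        return
    }
    set got [gets $pipe]
    puts "result: $got"
}
vwait ::done    ;# enter the event loop (wish does this automatically)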

Net-ssh timeout for execution?

In my application I want to terminate the exec! command of my SSH connection after a specified amount of time.
I found the :timeout option for Net::SSH.start, but according to the documentation it applies only to the initial connection. Is there something equivalent for the exec command?
My first guess would be to avoid exec!, since it waits until the command is finished, and instead use exec and surround the call with a loop that checks the execution status on every iteration and fails after the given amount of time.
Something like this, if I understood the documentation correctly:
server = Net::SSH.start(...)
server.exec("some command")
start_time = Time.now
terminate_calculation = false
trap("TIME") { terminate_calculation = ((Time.now - start_time) > 60) }
ssh.loop(0.1) { not terminate_calculation }
However, this seems dirty to me. I would expect something like server.exec("some command", :timeout => 60). Maybe there is some built-in function for achieving this?
I am not sure if this would actually work in an SSH context, but Ruby's standard library has a timeout method:
require 'timeout'

server = Net::SSH.start ...
Timeout.timeout(60) do
  server.exec! "some command"
end
This would raise Timeout::Error after 60 seconds. Check out the docs.
I don't think there's a native way to do it in net/ssh. See the code: there's no additional parameter for that option.
One way would be to handle timeouts in the command you call - see this answer on Unix & Linux SE.
I think your way is better, as you don't introduce external dependencies in the systems you connect to.
Another solution is to set the ConnectTimeout option in the OpenSSH configuration files (~/.ssh/config, /etc/ssh_config, ...), though note that it, too, governs the connection rather than command execution.
More info in
https://github.com/net-ssh/net-ssh/blob/master/lib/net/ssh/config.rb
What I did is have a thread doing the event handling, then loop for a defined number of seconds until the channel closes. If the channel is still open after those seconds have passed, close it and continue execution.
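A rough sketch of that channel-based approach, assuming Net::SSH 2.x; the host, credentials, command, and 60-second budget are placeholders:
require 'net/ssh'

Net::SSH.start('host', 'user') do |ssh|
  channel = ssh.open_channel do |ch|
    ch.exec('some command') do |c, success|
      raise 'could not start command' unless success
      c.on_data { |_, data| $stdout.print(data) }
    end
  end

  deadline = Time.now + 60
  # Pump the event loop until the channel closes or the deadline passes.
  ssh.process(0.1) while channel.active? && Time.now < deadline
  channel.close if channel.active?   # give up and close the channel ourselves
end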

Why is my Ruby script utilizing 90% of my CPU?

I wrote an admin script that tails a Heroku log and, every n seconds, summarizes averages and notifies me if I cross a certain threshold (yes, I know and love New Relic, but I want to do custom stuff).
Here is the entire script.
I have never been a master of IO and threads, so I wonder if I am making a silly mistake. I have a couple of daemon threads that run while(true) loops, which could be the culprit. For example:
# read new lines
f = File.open(file, "r")
f.seek(0, IO::SEEK_END)
while true do
  select([f])
  line = f.gets
  parse_heroku_line(line)
end
I use one daemon to watch for new lines of a log, and the other to periodically summarize.
Does someone see a way to make it less processor-intensive?
This probably runs hot because you never really block while reading from the temporary file. IO::select is a thin layer over POSIX select(2). It looks like you're trying to block until the file is ready for reading, but select(2) considers EOF to be ready ("a file descriptor is also ready on end-of-file"), so you always return right away from select then call gets which returns nil at EOF.
You can get a truer EOF reading and nice blocking behavior by avoiding the thread which writes to the temp file and instead using IO::popen to fork the %x[heroku logs --ps router --tail --app pipewave-cedar] log tailer, connected to a ruby IO object on which you can loop over gets, exiting when gets returns nil (indicating the log tailer finished). gets on the pipe from the tailer will block when there's nothing to read and your script will only run as hot as it takes to do your line parsing and reporting.
EDIT: I'm not set up to actually try your code, but you should be able to replace the log tailer thread and your temp file read loop with this code to get the behavior described above:
IO.popen( %w{ heroku logs --ps router --tail --app my-heroku-app } ) do |logf|
  while line = logf.gets
    parse_heroku_line(line)
  end
end
I also notice your reporting thread does nothing to synchronize access to @total_lines, @total_errors, etc. So you have some minor race conditions where you can read inconsistent values from the instance variables that parse_heroku_line updates.
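For instance, a minimal sketch of guarding those counters with a Mutex; the variable and method names mirror the question, the bodies are illustrative:
@stats_lock   = Mutex.new
@total_lines  = 0
@total_errors = 0

def parse_heroku_line(line)
  @stats_lock.synchronize do
    @total_lines  += 1
    @total_errors += 1 if line.include?("error")
  end
end

def summarize
  # Read both counters under the lock so they stay consistent with each other.
  lines, errors = @stats_lock.synchronize { [@total_lines, @total_errors] }
  puts "#{lines} lines seen, #{errors} errors"
end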
select is about whether a read would block. f is just a plain old file, so when you get to the end, reads don't block; they just return nil instantly. As a result, select returns instantly rather than waiting for something to be appended to the file, as I assume you're expecting. Because of this you're sitting in a tight busy loop, so high CPU is to be expected.
If you are at EOF (you could either check f.eof? or whether gets returns nil), then you could either start sleeping (perhaps with some sort of back-off, as sketched below) or use something like listen to be notified of filesystem changes.
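A minimal sketch of the sleep-with-back-off idea, reusing the file handle and parser from the question; the delay values are arbitrary:
delay = 0.1
while true
  line = f.gets
  if line
    parse_heroku_line(line)
    delay = 0.1                   # reset the back-off once data flows again
  else
    sleep(delay)                  # at EOF: wait for the writer to append more
    delay = [delay * 2, 5.0].min  # back off up to a 5-second ceiling
  end
end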

Strange variable-argument problem in Capistrano Task

Hello stackoverflow experts,
I got a very strange problem in a task I'm creating with Capistrano. I'm trying to pass a variable from the command line:
>> cap create_dir -s name_of_dir=mydir

task :create_dir do
  printf("#{name_of_dir}")
  if !(exists?(:name_of_dir)) then
    name_of_dir = Capistrano::CLI.ui.ask("Name of dir to be created.")
  end
  full_path = "/home/#{name_of_dir}"
  run "mkdir #{full_path}"
end
The very strange thing is that it correctly prints the variable in the printf, but it comes out as a blank (empty) string in the following command. I can find no explanation for this, and I'm sure it's not a stupid typo or anything like that.
I'm not as experienced in Ruby as I am in Java and PHP; I'm afraid there may be a strange rule at work here?
Thanks!!
A few suggestions:
Avoid using variables with the same name as internal task variables.
Use fetch() instead of dealing with if exists? ... else chains.
Here's the code:
>> cap create_dir -s name_of_dir=mydir

task :create_dir do
  printf("#{name_of_dir}")
  directory = fetch(:name_of_dir) { Capistrano::CLI.ui.ask("Name of dir to be created.") }
  full_path = "/home/#{directory}"
  run "mkdir #{full_path}"
end
In newer versions of Capistrano, at least from 2.5.19, which I run now, the whole command-line argument mechanism works differently. You call it like this:
cap command argument=value
And the syntax in the code is
ENV.has_key?('argument') and ENV['argument']
That's basically it, but you can look at my blog post about it for a working example.
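A minimal sketch of that ENV-based style, reusing the task and variable names from the question; the prompt fallback is illustrative:
task :create_dir do
  # Read the argument from the environment, falling back to an interactive prompt.
  name_of_dir =
    if ENV.has_key?('name_of_dir')
      ENV['name_of_dir']
    else
      Capistrano::CLI.ui.ask('Name of dir to be created.')
    end
  run "mkdir /home/#{name_of_dir}"
end
Invoked as: cap create_dir name_of_dir=mydir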
It looks like in the second line you are checking whether the symbol :name_of_dir exists, not the actual value of the variable name_of_dir.
If that check counts as not existing, name_of_dir (the variable) is then overwritten by the Capistrano::CLI.ui.ask command.
Not sure why, but that must be killing it somehow. (One likely Ruby-level explanation: as soon as the parser sees the assignment name_of_dir = ... inside the if, every later reference to name_of_dir in that block is a local variable, which stays nil when the branch doesn't run; the earlier printf still resolves through Capistrano's method_missing, which is why it prints correctly.)
Try removing the ":" and seeing if that fixes the problem.
