Net-ssh timeout for execution? - ruby-on-rails

In my application I want to terminate the exec! command of my SSH connection after a specified amount of time.
I found the :timeout option for the Net::SSH.start command, but according to the documentation it only applies to the initial connection. Is there something equivalent for the exec command?
My first guess would be not to use exec!, since it blocks until the command is finished, but to use exec and surround the call with a loop that checks the execution status on every iteration and gives up after the given amount of time.
Something like this, if I understood the documentation correctly:
server = Net::SSH.start(...)
channel = server.exec("some command")
start_time = Time.now
# keep processing SSH events until the command finishes or 60 seconds have passed
server.loop(0.1) { channel.active? && (Time.now - start_time) < 60 }
However, this seems dirty to me. I would expect something like server.exec("some command", :timeout => 60). Maybe there is some built-in function for achieving this?

I am not sure whether this actually works in an SSH context, but Ruby itself has Timeout.timeout:
require 'timeout'

server = Net::SSH.start(...)
Timeout.timeout(60) do
  server.exec! "some command"
end
This would raise Timeout::Error after 60 seconds. Check out the docs.

I don't think there's a native way to do it in net/ssh. Looking at the source, exec doesn't accept any parameter for such an option.
One way would be to handle timeouts in the command you call - see this answer on Unix & Linux SE.
I think your way is better, as you don't introduce external dependencies in the systems you connect to.
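For illustration, handling the timeout in the command itself could look like the sketch below, using the coreutils timeout(1) utility (this assumes the utility is installed on the remote host):

# ask the remote side to kill "some command" after 60 seconds;
# exec! still blocks, but the remote command can no longer run forever
output = server.exec!("timeout 60 some command")

This keeps the Ruby side simple, at the cost of depending on timeout(1) being present on the target machine, which is exactly the external dependency mentioned above.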

Another solution is to set the ConnectTimeout option in the OpenSSH configuration files (~/.ssh/config, /etc/ssh_config, ...), which net-ssh reads.
You can see how these files are parsed in
https://github.com/net-ssh/net-ssh/blob/master/lib/net/ssh/config.rb
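For example, an entry like this (the host name is a placeholder, and note that it only bounds the initial connection attempt, not how long an exec'd command may run):

# ~/.ssh/config
Host mysite.example.com
    ConnectTimeout 10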

What I did is have a thread doing the event handling, then loop for a defined number of seconds until the channel is closed. If the channel is still open after those seconds have passed, I close it and continue execution.
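A rough sketch of that approach (the host, credentials, command, and 60-second limit are placeholders, not taken from the original answer):

require 'net/ssh'

ssh = Net::SSH.start("host.example.com", "user")

# start the command on its own channel and collect output as it arrives
channel = ssh.open_channel do |ch|
  ch.exec("some long command") do |c, success|
    raise "could not start command" unless success
    c.on_data          { |_c, data| $stdout.print(data) }
    c.on_extended_data { |_c, _type, data| $stderr.print(data) }
  end
end

# a thread that does the event handling while the channel is active
event_thread = Thread.new { ssh.loop(0.1) { channel.active? } }

# wait up to 60 seconds for the command to finish on its own
deadline = Time.now + 60
sleep 0.5 while channel.active? && Time.now < deadline

# still open after the deadline? close it and carry on
channel.close if channel.active?
event_thread.join
ssh.close

Closing the channel asks the server to tear down the remote command; whether the remote process actually terminates is up to the server side.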

Related

Way to get some sort of schedule in TCL without blocking on-going code

I need some way to schedule a task to happen at a given time of day (12:00, for example) in Tcl.
The scenario is a router running OpenWrt with Tcl 8.6.10, with limited RAM and storage, where I have a small IRC client "bot" (using socket to connect). The bot was just a bare-bones script that I modified to suit my needs. Most things work fine, except that I have no easy way to schedule things. I wanted something like eggdrop's "bind time", whose form is "bind time flag "cron-style string" caller".
The "bot" scheme is like:
Main Tcl script:
<info+code to connect to IRC>
<while loop>
<some code in case of IRC disconnection>
<list of files with tcl code aka sub-scripts>
<usage of source based from a list of the filenames>
<code for error handling>
<end of while loop>
The list of files comes from source filelist.tcl, where filelist.tcl is a set var {filename1.tcl filename2.tcl ...}. Each filenameX.tcl has some basic code to respond to the IRC server or to channel input and reply to the channels.
I can make some sort of schedule by gating execution on something like if {[clock format [clock seconds] -format "%H:%M"] == "12:00"} {code to execute} and hoping a server ping/pong arrives around that time, but that leads to repeated code inside the if body.
I've been looking around and found a package called cron, but I don't know how to use it correctly because there are not many examples, and I don't know how to use vwait properly; I don't want vwait to hang the bot while it waits for a value to change. I also read about Tcl threads for possible parallel execution.
So I need some code inside a sub-script that looks like this (package cron style):
#beginning of file
#add a task specifying hour and minute
task-at "12:00" procname
proc procname {optional} {
    <some code to be executed at a specific hour+minute>
}
#end of file
I also don't know how to use the after command for this.
How can I accomplish what I want?
Thanks for the replies, and yes, it would help if I studied event loops and coroutines, which probably comes next.
Some time has passed since I posted the question, and I kind of sorted things out by creating a sub-script in a folder named scripts with the following structure:
#beginning of the script
if {![info exists executed]} {set executed "no"}
#the following clock instruction returns, for example: Tuesday 22:14
switch -glob -- [clock format [clock seconds] -format "%A %H:%M"] {
    "*12:00" - "*12:01" {
        #Basic example of sending a message to the IRC channel when it's midday
        if {$executed == "no"} {
            puts $fd "PRIVMSG #CODE :It's midday right now."
            flush $fd
            set executed "yes"
        }
    }
    #...more time comparisons and code
    default {set executed "no"}
}
#end of script
The script is near the top of the list of scripts to be loaded, so if I wish to send some command downstream at a given time, the command can be executed.
There are two adjacent times ("*12:00" and "*12:01") because the bot reacts, at a minimum, to the IRC server's ping, which happens every 90 seconds, so a given minute may be skipped.
This is not a proper answer, just a workaround.

How to make Postman/Newman to fail a test after certain time has passed?

So, in my collection I have about ten requests, with the last two being:
/Wait 10 seconds
/Check Complete
The first makes a call to Postman's echo service (delayed by 10 seconds) and the second calls my system to check for the complete status. If the status is not yet available, I wait another 10 seconds:
postman.setNextRequest("Wait 10 seconds");
The complete status on my system can appear in a minute or so. Now, as one can see, this is an infinite loop if something goes wrong with the system and the status never becomes complete. Is there a way in a postman/newman test to fail the test if it has been running for more than 2 minutes, for example?
Additionally, this will be executed from the command line in Jenkins, so I am not really looking at Postman settings or delays between requests in the runner.
You may have a look at the newman options here: https://www.npmjs.com/package/newman#newman-run-collection-file-source-options. The interesting option is
--timeout-request: it should fulfill your need.
In Postman itself, you may test the responseTime. I recall that there is a snippet, in the panel on the right, which looks like this:
tests["Response time is less than 200ms"] = responseTime < 200;
which could help you, as the test fails if the response does not arrive within the requested time.
If you are going to be using a Jenkins pipeline, you can use the timeout step to make long-running jobs fail; here's one for 2 minutes.
// the timeout step's default unit is MINUTES, so specify the unit explicitly
timeout(time: 2, unit: 'MINUTES') {
    node {
        sh 'newman command'
    }
}
Check out the "Pipeline Syntax" editor in Jenkins to generate your code block and to look for other useful functions.

Python FTP hang

Say I want to use FTP in Python using the ftplib. I begin with this:
from ftplib import FTP
ftp = FTP('10.10.10.151')
If the FTP server is not online, however, it will hang right there indefinitely. The only thing that can kick it out is a keyboard interrupt as far as I know. I've tried this:
ftp.connect('10.10.10.151','21', 5)
The five being a five-second timeout. The problem is that I don't know of any way to use that line without first assigning ftp something, and if the server is offline, the "ftp =" line will hang. So what use is ftp.connect()'s timeout parameter?
Does anybody know a workaround or anything? Is there a way to time out the "ftp = FTP(xxx)" command that I haven't found? Thanks.
I'm using Python 2.7 on Linux Mint.
Your call to connect() is redundant, since the FTP() documentation states:
When host is given, the method call connect(host) is made.
Also, since Python 2.6, FTP() does have a timeout parameter:
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])
The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if it is not specified, the global default timeout setting will be used).

When to give up on getting results from an external web service?

I'm using a gem to get code results from Ideone.com. The gem submits code to Ideone and then polls the results page. It polls up to timeout times and then gives up if there's no result. The problem is that it might give up too early, but I also don't want it to wait too long if there's never going to be a result. Is there a way to know when one should give up hope?
This is the relevant code:
begin
  sleep 3 if i > 0
  res = JSON.load(
    Net::HTTP.post_form(
      URI.parse("http://ideone.com/ideone/Index/view/id/#{loc}/ajax/1"),
      {}
    ).body
  )
  i += 1
end while res['status'] != '0' && i < timeout
if i == timeout
  raise IdeoneError, "Timed out while waiting for code result."
end
Sounds like you want to adjust the sleep delay and the number-of-attempts parameters. There are no absolute values suitable for every case, so you should pick whatever is most appropriate for your application.
Unfortunately the gem has both of these parameters (3-second delay, 4 attempts) hardcoded, so you don't have an elegant way to change them. You can either fork the gem and change its code, or try to monkey-patch the value of the TIMEOUT constant with http://apidock.com/ruby/Module/const_set . However, you won't be able to monkey-patch the delay between attempts without rewriting the gem's .run method.
FYI, Net::HTTP has its own timeouts - how long to wait for the ideone.com connection and for the response. If they are exceeded, Net::HTTP raises a timeout error. The setters are
http://ruby-doc.org/stdlib-2.0/libdoc/net/http/rdoc/Net/HTTP.html#method-i-read_timeout-3D and #open_timeout=.
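A sketch of both ideas; the module name IdeoneClient and the values used are placeholders, since the gem's actual namespace isn't shown here:

# replace the hardcoded number of polling attempts (assumes the gem keeps it
# in a constant named TIMEOUT under a module called IdeoneClient)
IdeoneClient.send(:remove_const, :TIMEOUT)
IdeoneClient.const_set(:TIMEOUT, 10)

# independently, bound how long each HTTP round trip may take
require 'net/http'
require 'uri'

uri  = URI.parse("http://ideone.com/ideone/Index/view/id/#{loc}/ajax/1")
http = Net::HTTP.new(uri.host, uri.port)
http.open_timeout = 5   # seconds to wait for the connection to open
http.read_timeout = 5   # seconds to wait for a response to arrive
response = http.post(uri.path, "")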

ActiveResource timeout not functioning [duplicate]

I'm trying to contact a REST API using ActiveResource on Rails 2.3.2.
I'm attempting to use the timeout functionality so that if the resource I'm contacting is down I can fail quickly - I'm doing this with the following:
class WorkspaceResource < ActiveResource::Base
  self.timeout = 5
  self.site = "http://mysite.com/restAPI"
end
However, when I try to contact the service when I know it isn't available, the class only times out after the default 60 seconds. I can see from the error stack that the timeout error does indeed come from an ActiveResource class in my gem folder that has the proper functions to allow timeout settings, but my set timeout never seems to work.
Any thoughts?
So apparently the issue is not that timeout is not functioning. I can run a server locally, make it not return a response within the timeout limit, and see that timeout works.
The issue is in fact that if the server does not accept the connection, timeout does not function as I expected it to - it doesn't function at all. It appears as though timeout only works when the server accepts the connection but takes too long to respond.
To me, this seems like an issue - shouldn't timeout also work when the server I'm contacting is down? If not, there should be another mechanism to stop a bunch of requests from hanging...anyone know of a quick way to do this?
The problem
If you're running on Ruby 1.8.x then the problem is its lack of real system threads.
As you can read first here and then here, there are systemic problems with timeouts in Ruby. It's an interesting discussion, but in your case in particular, some comments suggest that the timeout is effectively ignored and defaults to 60 seconds - exactly what you are seeing.
Solutions ...
I have a similar issue with our own product when trying to send emails - if the email server is down, the thread blocks. For me the solution was to spin the request off on a separate thread so that my main request-processing thread doesn't block (see the sketch after this answer).
There are non-blocking libraries out there for Ruby but perhaps you could take a look first at this System Timeout Gem.
An option open to anyone using Rails behind a proxy like nginx would be to set the upstream timeout to a lower number - that way you'll get notified if the server is taking too long. I'd only do this if I were really stuck for a solution.
Last but not least, it's possible that running Rails 2.3.2 on top of Ruby 1.9.1 will fix the issue.
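A minimal sketch of the separate-thread idea, reusing the WorkspaceResource class from the question (the 10-second wait is illustrative):

# run the slow, possibly hanging call on its own thread
worker = Thread.new do
  begin
    WorkspaceResource.find(:all)
  rescue ActiveResource::ConnectionError
    nil   # treat an unreachable service as "no data"
  end
end

# give the call 10 seconds; Thread#join(limit) returns nil on timeout
workspaces = worker.join(10) ? worker.value : nil

If the join times out, the background thread is simply abandoned and finishes (or fails) on its own, while the request-processing thread moves on.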
Alternatively, you could try to catch these connection errors and retry once (after a certain period of time) just to make sure the connection is really down.
retried = false
begin
  @businesses = Business.find(:all, :params => { :shop_domain => @shop.domain })
  retried = false
rescue ActiveResource::TimeoutError => ex
  # raise ex
rescue ActiveResource::ConnectionError, ActiveResource::ServerError, ActiveResource::ClientError => ex
  unless retried
    sleep(((ex.respond_to?(:response) && ex.response['Retry-After']) || 5).to_i)
    retried = true
    retry
  else
    # raise ex
  end
end
Inspired by this solution from Shopify for paginating a large number of records. https://ecommerce.shopify.com/c/shopify-apis-and-technology/t/paginate-api-results-113066
