I ran into this weird issue while working with Ruby (on Rails) timeouts. This timeout:
timeout(10) do
  # some code involving HTTP calls that takes more than 10 seconds
end
is not working. But this timeout
timeout(20) do
  timeout(10) do
    # some code involving HTTP calls that takes more than 10 seconds
  end
end
times out after 20 seconds. I read that timeout in Ruby won't work properly if it involves system calls. If that is the case, then any number of nested timeouts should also not work. Why does wrapping it in a second, outer timeout make it work?
By the way, the link I referred to:
http://ph7spot.com/musings/system-timer
Thanks in advance
You might have better luck using a combination of timeout and terminator to do this sort of thing.
One of the known deficiencies of the timeout method is that it's not always strictly enforced, and many things can block it.
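For example, something along these lines (a rough sketch, assuming the terminator gem's Terminator.terminate block API; terminator enforces the deadline from outside the Ruby interpreter, so it can interrupt code that blocks Timeout):
require 'terminator'

Terminator.terminate(10) do
  # HTTP calls that may take more than 10 seconds; Terminator::Error
  # is raised if the block overruns the deadline
end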
Related
I am using OmniThread Parallel.foreach(). There are instances where the loop takes a long time or gets stuck.
I would like to know, is it possible to timeout each process in the Parallel.foreach() loop?
In short: no, there isn't.
Unless you program the timeout handling into your 'thread body' code (what gets called in the Execute).
E.g., my database engine allows sending a CancelProcessing call to a running query from a thread other than the one running the query, which 'cleanly' ends the running subthread.
'Dirty' end of the subthreads:
I added a feature request on OmniThread's GitHub site to add a (dirty) Terminate method to the IOmniParallel interfaces (and the like). This has its drawback: killing subthreads will probably leave you with memory/resource leaks.
Meanwhile, you might use this dirty shutdown solution/workaround, which actually comes down to fixing a similar problem. (I had a deadlock in my parallel-processed routine, so my Parallel.WaitFor never returned true, and worse, my IOmniParallelTask interface variable was never released, causing the calling thread to block as well.)
I benchmarked the execution time for a method:
require 'benchmark'

tests.each do |test|
  time = Benchmark.realtime { method(test) }
end

def method(test)
  # code
end
This returns the time in seconds.
But what I want is to break out of the loop if the method takes more than 30 seconds to execute.
Please suggest a clean way to do it.
Use stdlib Timeout
require 'timeout'

def method(test)
  Timeout::timeout(30) {
    # code here
  }
end
This will raise Timeout::Error if the code takes longer than 30 seconds to run.
You can use Ruby's Timeout method:
require 'timeout'

tests.each do |test|
  begin
    Timeout::timeout(30) { method(test) }
  rescue Timeout::Error
    break # stop looping once a test exceeds 30 seconds
  end
end
You already got several answers, especially some regarding Timeout.
Please be cautious here. Timeout is implemented with an ALARM signal in standard Ruby (i.e., I'm not talking about JRuby here), which means:
You cannot nest timeouts (that is, you can, but the inner one will silently fail).
If your code or some gem also uses the ALARM signal, things will go wrong.
Things can go plainly wrong (unexpected behaviour) because it is such a clunky mechanism.
Don't even try to mix it with the "green multithreading" of standard Ruby unless you like major headaches.
If you can, it is always safer to handle the timeout yourself: have your method check the elapsed time at regular intervals. Of course, this may or may not be useful to you; you don't want to bring test scaffolding into your production code. And it may be hard if you want to time out system calls (for example, blocking network calls).
At least, keep it in mind.
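For illustration, a minimal sketch of such a self-checked timeout (the method and helper names here are made up):
def process_items(items, max_seconds = 30)
  started = Time.now
  items.each do |item|
    # cooperative check: bail out once the deadline has passed
    raise "deadline of #{max_seconds}s exceeded" if Time.now - started > max_seconds
    handle(item) # hypothetical per-item work
  end
end
Note that this only works if the work is split into small chunks; a single blocking call inside handle can still overrun the deadline.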
Try this:
require 'benchmark'

tests.each do |test|
  time = Benchmark.realtime { method(test) }
  break if time > 30
end
Benchmark.realtime returns the elapsed time in seconds (as a Float), so time > 30 breaks after the first test that takes longer than 30 seconds. Note that this only breaks once a slow call has already finished; it does not interrupt the call itself.
I am trying to understand the race_condition_ttl directive in Rails when using Rails.cache.fetch.
I have a controller action that looks like this:
def foo
  @foo = Rails.cache.fetch("foo-testing", expires_in: 30.seconds, race_condition_ttl: 60.seconds) do
    Time.now.to_s
  end
  @foo # this gets used in a view down the line...
end
Based on what I'm reading in the Rails docs, this value should expire after 30 seconds, but the stale value is allowed to be served for another 60 seconds. However, I can't figure out how to reproduce conditions that will show me this behavior working. Here is how I'm trying to test it.
require 'rest-client'

100.times.map do
  Thread.new { RestClient.get("http://myenvironment/foo") }
end.map { |t| t.join.value }.uniq
I have my Rails app running on a VM behind a standard nginx/unicorn setup. I am trying to spawn 100 threads hitting the site simultaneously to simulate the "dog pile effect". However, when I run my test code, all the threads report the same value back. What I would expect to see is that one thread gets the fresh value, while at least one other thread gets served some stale content.
Any pointers are welcome! Thanks so much.
You are setting race_condition_ttl to 60 seconds, which means your threads will only start getting the new value after this time expires, not even taking into account the initial 30 seconds.
Your test doesn't look like it would take the 1.5 minutes that would be required for the threads to start getting the new value. From the Rails cache docs:
Yes, this process is extending the time for a stale value by another few seconds. Because of extended life of the previous cache, other processes will continue to use slightly stale data for just a bit longer.
The text implies using a small race_condition_ttl, which makes sense both for its purpose and for your test.
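For instance (the values here are illustrative), a short ttl combined with a deliberately slow block makes the stale window observable: while one request spends two seconds recomputing, concurrent requests should be served the old value.
@foo = Rails.cache.fetch("foo-testing", expires_in: 30.seconds, race_condition_ttl: 5.seconds) do
  sleep 2 # simulate slow regeneration so other requests hit the stale window
  Time.now.to_s
end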
UPDATE
Also note that the life of the stale cache is extended only if it expired recently. Otherwise a new value is generated and :race_condition_ttl does not play any role.
Without reading the source it is not particularly clear how Rails decides when its server is getting hammered, or what exactly "recently" means in the quote above. It seems clear, though, that the first of the many processes waiting to access the cache gets to set the new value while extending the life of the previous one. The presence of waiting processes might be the condition Rails looks for. In any case, the expected behaviour should be observed after both the initial timeout and the ttl expire and the cache starts serving the updated value. The delay between the initial timeout and the time the new value starts showing up should be similar to the ttl. Of course, the precondition is that the server is hammered around the moment the initial timeout expires.
My Survey model has about 2500 instances and I need to apply the set_state method to each instance twice. I need to apply it the second time only after every instance has had the method applied to it once. (The state of an instance can depend on the state of other instances.)
I'm using delayed_job to create delayed jobs and workless to automatically scale up/down my worker dynos as required.
The set_state method typically takes about a second to execute. So I've run the following at the Heroku console:
2.times do
  Survey.all.each do |survey|
    survey.delay.set_state
    sleep(4)
  end
end
Shouldn't be any issues with overloading the API, right?
And yet I'm still seeing the following in my logs for each delayed job:
Heroku::API::Errors::ErrorWithResponse: Expected(200) <=> Actual(429 Unknown)
I'm not seeing any infinite loops -- it just returns this message as soon as I create the delayed job.
How can I avoid blowing Heroku's API rate limits?
Reviewing workless, it looks like it incurs an API call per delayed job to check the worker count, and potentially a second API call to scale up/down. So if you are running 5000 (2500 x 2) jobs within a short period, you'll end up with 5000+ API calls, well in excess of the 1200-requests-per-hour limit. I've commented over there to hopefully help toward reducing the overall API usage (https://github.com/lostboy/workless/issues/33#issuecomment-20982433), but I think we can offer a more specific solution for you.
In the meantime, especially if your workload is as predictable as this one, I'd recommend skipping workless and doing that portion yourself. That is, it sounds like you already know when the scaling needs to happen (scale up right before the loop above, scale down right after). If that is the case, you could do something like this to emulate the behavior of workless:
require 'heroku-api'

client = Heroku::API.new(:api_key => ENV['HEROKU_API_KEY'])
client.post_ps_scale(ENV['APP_NAME'], 'worker', Survey.count)

2.times do
  Survey.all.each do |survey|
    survey.delay.set_state
    sleep(4)
  end
end

min_workers = ENV['WORKLESS_MIN_WORKERS'].present? ? ENV['WORKLESS_MIN_WORKERS'].to_i : 0
client.post_ps_scale(ENV['APP_NAME'], 'worker', min_workers)
Note that you'll need to remove workless from these jobs as well. I didn't see a way to do this for only certain jobs, so you might want to ask on that project if you need that. Also, if this needs to be two passes (the first pass must finish before the second starts), the 4-second sleep may in some cases be insufficient, but that is a different can of worms; see the sketch below.
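For instance, a rough sketch of a two-pass version, assuming delayed_job's ActiveRecord backend so the queue depth can be polled via Delayed::Job:
# first pass
Survey.all.each { |survey| survey.delay.set_state }

# crude poll: wait for the queue to drain before enqueueing the second pass
sleep(5) until Delayed::Job.count == 0

# second pass
Survey.all.each { |survey| survey.delay.set_state }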
I hope that helps narrow in on what you needed, but I'm certainly happy to discuss further and/or elaborate on the above as needed. Thanks!
When reading data from a potentially slow website, I want to ensure that get_response cannot hang, so I added a timeout of x seconds. So far, so good. I then read http://ph7spot.com/musings/system-timer, which illustrates that in certain situations timeout.rb doesn't work due to Ruby's implementation of threads.
Does anyone know if this is one of those situations?
require 'net/http'
require 'uri'
require 'cgi'
require 'timeout'

url = URI.parse(someurl)
begin
  Timeout::timeout(30) do
    response = Net::HTTP.get_response(url)
    @responseValue = CGI.unescape(response.body)
  end
rescue Exception => e
  dosomething
end
Well, first of all, Timeout is not defined in Rails but in Ruby's standard library. Second, Timeout is not reliable when the block makes system calls.
Standard Ruby uses what are called green threads. Suppose you have three threads: you might think all of them will run in parallel, but if one of the threads makes a syscall, all the rest of the threads will be blocked until the syscall finishes. In this case Timeout won't work as expected, so it's always better to use something reliable like SystemTimer.
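For example (a sketch using the system_timer gem, whose SystemTimer.timeout_after enforces the deadline with a SIGALRM-based timer instead of a green thread, raising Timeout::Error on expiry):
require 'system_timer'

SystemTimer.timeout_after(30) do
  response = Net::HTTP.get_response(url)
  @responseValue = CGI.unescape(response.body)
end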