I implemented the executeAndWait interceptor in a Struts2 application and it is running fine. Is there any way to stop this long-running process partway through via the interceptor? Can I limit the maximum time an action is allowed to execute?
From our Advanced Installer setup, we install/upgrade a service that needs up to a minute to shut down. We cannot decrease the time it needs, and it will be shut down after a minute.
If AI tries to stop that service, it comes up with an error message after less than a minute ("The setup was unable to automatically close all requested applications. Please ensure that the applications holding files in use are closed before continuing with the installation").
I have not found an option in Advanced Installer Professional to change the timeout of the wait.
Is this possible?
I don't think this is possible out of the box. You can try using a custom action to stop the service: write your own code that triggers the service stop operation and waits for up to a minute, then run that code as a custom action.
To make sure the described error message is not thrown you should execute your custom action before "Paths Resolution" action.
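The logic of such a custom action can be sketched as follows (this is an illustration, not Advanced Installer API; the lambdas stand in for shelling out to `sc stop MyService` / `sc query MyService` on Windows, and the 90-second deadline is a placeholder):

```ruby
# Issue a stop request, then poll the service state until it reports
# STOPPED or a deadline passes. stop_service and query_state are
# injectable so the wait logic can be exercised without a real service.
def wait_for_stop(stop_service, query_state, timeout: 90, poll: 2)
  stop_service.call
  deadline = Time.now + timeout
  while Time.now < deadline
    return true if query_state.call == "STOPPED"
    sleep poll
  end
  false
end

# On Windows this might be wired up roughly as:
#   wait_for_stop(-> { system("sc stop MyService") },
#                 -> { `sc query MyService`[/STOPPED|STOP_PENDING|RUNNING/] })
```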
So I'm using Heroku to host a simple script, which runs whenever a specific page is loaded. The script takes longer than 30 seconds to run, which Heroku returns as an H12 error - Request Timeout (https://devcenter.heroku.com/articles/limits#router). I can't use a background process for this task, as I'm using its run time as a loading screen for the user. I know the process will still complete, but I want a 200 code to be sent when the script finishes.
Is there a way to send a single byte every, say, 20 seconds, so that the request doesn't time-out, and will stop whenever the script finishes? (a response from the heroku page will start a rolling 55-second window preventing timeout). Do I have to run another process simultaneously to check if the longer process is finished, sending a kind of 'heartbeat' to the requesting page, letting it know the process is still running - and preventing heroku from timing out? I'm extremely new to rails, any and all help is appreciated!
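One way to implement the heartbeat idea is a streaming response body. Here is a sketch (the class and names are mine, not a Heroku or Rails API) using Rack's `#each` body protocol: a worker thread runs the slow script while the body yields a single byte every `interval` seconds, which resets Heroku's rolling 55-second window, then yields the real payload when the work finishes.

```ruby
# Rack-style streaming body: emits a heartbeat byte while a long-running
# job finishes, then emits the job's result as the final chunk.
class HeartbeatBody
  def initialize(interval: 20, &work)
    @interval = interval
    @work = work
  end

  # Rack calls #each and writes every yielded chunk to the client.
  def each
    result = nil
    worker = Thread.new { result = @work.call }
    until worker.join(@interval) # join returns nil until the thread is done
      yield " "                  # single byte keeps the connection alive
    end
    yield result.to_s            # final payload once the job completes
  end
end
```

This could be returned as `[200, { "Content-Type" => "text/plain" }, HeartbeatBody.new { run_slow_script }]` from a Rack endpoint; note the server must not buffer the response for the heartbeat to reach the router.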
I'm currently considering moving to jRuby, and I'm still unsure how everything will work, but let's consider this hypothetical situation.
User 1 loads a page in my app which takes about 2.5 seconds, and about 500 ms into that execution, user 2 tries to open a different page which takes 1 second to load.
If my estimate is correct, this is what would happen if you ran it in MRI with a single process:
User 1 waits for 2.5 seconds for his page to load
User 2 waits for 3 seconds for his page to load (2 seconds waiting for user 1 to complete the loading of his page, and 1 second for his page to finish rendering)
Is my estimate correct?
And let's say that if I ran the same app under JRuby, this would happen:
User 1 waits 2.5 seconds for his page to load
User 2 waits for 1 second or more, but less than 3, depending on how much memory/CPU the request from user 1 takes
Is my other estimate correct? (Of course, assuming the code is thread-safe.) If my estimate is incorrect, please correct me; if it is correct, do I need to make sure some config is set at the Rails app level, or should I be careful about anything else besides thread-safe code?
Update
I've just done a small JRuby POC app: I used the warbler gem to build a war file and deployed it to a Tomcat web server. I don't think my estimate was correct for JRuby; this is what I observed:
User 1 waits 2.5 seconds for his page to load
User 2 waits for 3 seconds
Which is identical to MRI in terms of request processing. Shouldn't JRuby process these in parallel?
We're talking about hypothetical things (and assumptions) here.
If "loads a page in my app which takes about 2.5 seconds" holds, all users will keep loading that page (concurrently), unless of course you do some caching or store the result after the first load for other users.
The difference is that in MRI, whenever Ruby code is executing (i.e. not waiting on I/O such as a database call or loading something from http://), two threads won't run concurrently, while in JRuby they will.
If you're seeing "User 2 waits for 3 seconds" on JRuby, it means that something is blocking multiple requests, e.g. there's a Mutex somewhere along the way (such as Rack::Lock).
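A minimal experiment makes the difference visible (the `fib` workload and thread count are arbitrary choices): run the same CPU-bound work in two threads and time it.

```ruby
# CPU-bound work: no I/O, so MRI's GIL has nothing to yield on.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

start = Time.now
threads = 2.times.map { Thread.new { fib(25) } }
results = threads.map(&:value)
elapsed = Time.now - start
# Both VMs compute [75025, 75025]; only the wall-clock time differs:
# on MRI `elapsed` is close to 2x a single fib(25), on JRuby (with no
# Rack::Lock-style mutex involved) close to 1x, as the threads run on
# separate cores.
```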
I have a simple app and I want to use it as webservice.
My problem is that I can't receive more than 1 request at the same time.
Apparently, the requests are enqueued and executed one by one. So, if I make 2 requests on the same URL, the second has to wait for the first one.
I've already tried to use Unicorn, Puma and Thin to enable concurrency on the requests, but it seems to keep queuing the requests by URL.
Example:
I make the request 1 at localhost:3000/example
I make another request at localhost:3000/another_example
I make the last request at localhost:3000/example
The first and second requests are executed concurrently, but the last one (that has the same URL that the first) has to wait for the first to finish.
Unicorn, Puma and Thin enable concurrency, but on different URLs.
NOTES:
I added on my config/application.rb:
config.allow_concurrency = true
I'm running the app with:
rails s Puma
How can I perform my requests concurrently?
You're right: each Puma/Thin/Unicorn/Passenger/Webrick worker houses a single Rails app instance (or Sinatra app instance, etc.) per Ruby process. So it's 1 web worker = 1 app instance = 1 Ruby process.
Each request blocks the process until the response is ready. So it's usually 1 request per process.
Ruby itself has the so-called "GIL" (Global Interpreter Lock), which blocks execution of multiple threads because C extensions lack thread-safe controls such as mutexes and semaphores. It means that threads won't run concurrently. In practice, they "can": I/O operations can block execution while waiting for a response, for example reading a file or waiting for data from a network socket. In those cases, Ruby allows another thread to run until the I/O operation of the previous thread finishes.
Rails also used to have a single block of execution per request, its own lock. But in Rails 3 they added thread-safe controls throughout the Rails code to ensure it could run on JRuby, for example. And in Rails 4 they decided to have the thread-safe controls on by default.
In theory this means that more than one request can run in parallel even in Ruby MRI (as it supports native threads since 1.9). In practice one request can run while another is waiting for a database process to return for example. So you should see a few more requests running in parallel. If your example is CPU bound (more internal processing than I/O blocks) the effect should be as if the requests are running one after the other. Now, if you have more I/O blocks (as waiting for a large SQL select to return), you should see it running more in parallel (not completely though).
You will see parallel requests more often if you use a virtual machine with native threads and no Global Interpreter Lock, as is the case with JRuby. So I recommend using JRuby with Puma.
Puma and Passenger are both multi-threaded. Unicorn is fork-based. Thin is Eventmachine based. I'd personally recommend testing Passenger as well.
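For reference, a minimal `config/puma.rb` combining both approaches might look like this (the worker and thread counts are illustrative, not recommendations):

```ruby
# config/puma.rb
workers 2        # forked processes: true parallelism even under MRI's GIL
threads 1, 16    # min/max threads per worker: in-process concurrency for I/O-bound requests
preload_app!     # load the app before forking so workers share memory copy-on-write
```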
http://tenderlovemaking.com/2012/06/18/removing-config-threadsafe.html
https://bearmetal.eu/theden/how-do-i-know-whether-my-rails-app-is-thread-safe-or-not/
How can I specify a timeout of 2 minutes for a particular request in a Rails application? One of my application's requests is taking more than 5 minutes in some cases. I would like to stop processing that request if it takes more than 2 minutes.
I need this configuration at the application level, so that if there are any other requests of this type in the future, I don't have to make any special changes other than mentioning that action in the configuration. There are some requests which take more than 10 minutes as well, but they should not be affected.
Thanks,
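For what it's worth, the literal question can be approximated with Ruby's stdlib Timeout (a sketch; `run_with_limit` is a made-up helper, not a Rails API):

```ruby
require "timeout"

# Cap a block at `seconds`; Timeout.timeout raises Timeout::Error in the
# running thread once the limit elapses.
def run_with_limit(seconds)
  Timeout.timeout(seconds) { yield }
rescue Timeout::Error
  :timed_out
end
```

In a Rails app this could be wired into an `around_action` listing only the slow actions, leaving the 10-minute requests untouched. Be aware that `Timeout` interrupts the thread at an arbitrary point, which can leave database connections or other resources in a bad state.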
Setting the timeout for a request out that far is generally bad practice. Making your users wait minutes on end for a request to finish isn't a good idea.
Instead, this type of long-running task should be placed into a job queue for a worker process to run at its convenience, independent of the web request. This:
allows the web request to finish very quickly, making your user happy
keeps the long-running task out of your web process, freeing it up to do what it's supposed to (serve web requests)
Consider a gem like delayed_job. Describing how to work it into your application is outside the scope of this question; my answer here serves only to point out that looking to modify the timeout is very likely the wrong 'answer', and that you're better off looking at a job queue.
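The shape of that request/worker split can be illustrated with plain Ruby threads (a toy model of what delayed_job does with a database-backed jobs table and separate worker processes; all names here are mine):

```ruby
# Toy job queue: the "web request" only enqueues and returns immediately;
# a separate worker runs the slow task at its own pace.
jobs    = Queue.new
results = Queue.new

worker = Thread.new do
  while (job = jobs.pop)   # a nil job is our toy shutdown signal
    job.call
  end
end

# The web request's side: enqueue the slow task and respond right away.
jobs << -> { sleep 0.1; results << :report_generated }
jobs << nil                # shut the worker down (toy only)
worker.join
```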