Does anyone know how to force WEBrick to process more than one request at a time? I'm using some Ajax on my page for long-running database-related tasks, and I can clearly see the requests being processed in a pipeline.
If you use JRuby, check out the GlassFish gem (a stripped-down GlassFish server in gem form), the Trinidad gem (the same idea using Tomcat), or various other options like warbler (produces .war files you can run directly or deploy to any app server). JRuby is by far the easiest way to deploy a highly concurrent application on Ruby, and it makes the C Ruby options look rather primitive in comparison.
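For illustration, the JRuby route can be as simple as the following (the gem names are as published; the exact flags are assumptions, so check each project's README):
gem install trinidad
trinidad --port 3000   # boots the app in the current directory on an embedded Tomcat

gem install warbler
warble war             # packages the app as a .war for deployment to a Java app server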
WEBrick only processes one request at a time, which is usually fine for development.
If you want things to run in parallel, have a look at mongrel_cluster, the excellent Unicorn, or Passenger, of course.
You should definitely not be using WEBrick for long running requests. The best web server for the job would probably be Thin, as it's powered by EventMachine which makes it possible to write asynchronous code so that the server doesn't block.
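To give a feel for what that looks like, here is a minimal sketch of a Rack endpoint using Thin's async support (the AsyncResponse placeholder and env['async.callback'] are Thin conventions; run it under thin so the EventMachine reactor is already available):
require 'eventmachine'

class SlowTask
  AsyncResponse = [-1, {}, []].freeze  # tells Thin the response will be delivered later

  def call(env)
    # Simulate a long-running job without blocking the reactor.
    EventMachine.add_timer(5) do
      env['async.callback'].call([200, { 'Content-Type' => 'text/plain' }, ["done\n"]])
    end
    AsyncResponse
  end
end
You would mount it from config.ru with run SlowTask.new and start it with thin start.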
If you are running Rails 4, you can apply the following patch and WEBrick will start serving requests concurrently.
diff --git a/railties/lib/rails/commands/server.rb b/railties/lib/rails/commands/server.rb
index e3119ec..ef04aa8 100644
--- a/railties/lib/rails/commands/server.rb
+++ b/railties/lib/rails/commands/server.rb
@@ -93,14 +93,6 @@ module Rails
middlewares << [Rails::Rack::Debugger] if options[:debugger]
middlewares << [::Rack::ContentLength]
- # FIXME: add Rack::Lock in the case people are using webrick.
- # This is to remain backwards compatible for those who are
- # running webrick in production. We should consider removing this
- # in development.
- if server.name == 'Rack::Handler::WEBrick'
- middlewares << [::Rack::Lock]
- end
-
Hash.new(middlewares)
end
Related
I am working on multiple projects that sometimes talk to each other, and I've run into an issue where app
A calls B (request 1, still running)
B calls A (request 2)
based on request 2's result, B responds to request 1
This requires me running multi-threaded rails in development mode.
I know I can set it up using puma or something like that but ... Isn't there really a simpler way?
I would like to avoid changing anything in the project (adding gems, config files..).
Something like rails s --multi would be nice; can't WEBrick just run with multiple threads or spawn more processes?
Can I perhaps install a standalone gem to do what I need and run something like thin run . -p 3?
The puma web server can provide multi-threading and multiple workers bound to a single local address.
Install the puma gem:
bundle add puma
or
gem install puma
Add a puma configuration file at config/puma.rb:
workers 1 # fork one worker process from the master (the master itself does not serve requests); each worker handles requests concurrently with its own thread pool.
preload_app!
Launch the Rails server.
bundle exec rails s
Puma starts automatically and loads the config file at config/puma.rb.
Bump up the workers value (and the per-worker threads setting, if needed) to handle more concurrent requests.
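For reference, a slightly fuller config/puma.rb sketch (the numbers are illustrative, not recommendations):
workers 2                                    # worker processes forked from the master
threads 1, 5                                 # min/max threads per worker
preload_app!
port ENV.fetch("PORT", 3000)
environment ENV.fetch("RAILS_ENV", "development")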
One way to solve this is to use POW, which uses two workers by default.
The nice thing is I don't have to modify the project files to do it so it satisfies my requirements.
Update: the up-to-date successor of POW is puma-dev, which is also zero-configuration.
My current solution, that's super kludgy, is to use Foreman and a Procfile to run two copies of my app on different ports. You'd have to configure your B service to make requests to the secondary port.
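For illustration, such a Procfile might look like this (the process names and ports are made up):
web_a: bundle exec rails server -p 3000
web_b: bundle exec rails server -p 3001
foreman start then boots both copies, and service B can be pointed at the secondary port.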
I'm using EventMachine and Momentarily to start a TCP server along with my Rails application. This is started from config/initializers/momentarily.rb.
My problem is that it also starts when I run rake tasks, like db:migrate. I only want it to start when I start the HTTP server. Environments won't help, since both the server start and the rake tasks run under the development environment. Is there a way of knowing that the application is running the HTTP server, as opposed to anything else? Note that it is not only rake tasks; the EM server also starts if I run the rails console, which again is not desirable in my case.
unless File.basename($0) == "rake" && ARGV.include?("db:migrate")
  # you are not in rake db:migrate
end
There's not a great way of doing this that I know of. You could copy newrelic's approach (check discover_dispatcher in local_environment.rb) which basically has a list of heuristics used to detect if it is running inside passenger, thin, etc.
For passenger it checks
defined?(::PhusionPassenger)
for thin it checks
if defined?(::Thin) && defined?(::Thin::Server)
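Pulling those heuristics together, a best-effort helper might look like this (the constant names are the ones listed above; the check is only as reliable as what each server happens to load):
# Best-effort heuristic in the spirit of newrelic's local_environment.rb checks.
def running_under_app_server?
  return true if defined?(::PhusionPassenger)
  return true if defined?(::Thin) && defined?(::Thin::Server)
  false # extend with similar checks for whichever servers you care about
end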
Set an environment variable in config.ru file, and use it anywhere in the code to detect if it's executed using a rails server command only.
For e.g.
File: config.ru
ENV['server_mode'] = '1'
And using it somewhere as:
File: config/environment.rb
Thread.new { infinite_loop! }.join if ENV['server_mode'] == '1'
Maybe you can implement a switch in the initializer based on ARGV?
Something like:
if ARGV.join(' ').match /something/
# your initializer code here
end
Don't start that other server from an initializer. Create a daemon in script/momentarily and start it from within your app.
After your application launches, you could have it shell out to check ps. If ps shows that the HTTP server is running and the running HTTP server has the same pid as your application (check the pid by inspecting $$), then you could launch the TCP server.
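A rough sketch of that idea (the ps invocation and the pattern list are illustrative and platform-dependent; start_tcp_server is a placeholder for your own startup code):
# $$ is the current process id; adjust the pattern to the servers you actually run.
current_command = `ps -p #{$$} -o command=`.strip
start_tcp_server if current_command =~ /unicorn|puma|thin|webrick|rails server/i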
In addition to the great answer by Frederick Cheung above, there can be some other "footprints" in the actual process environment. E.g. Phusion Passenger adds certain variables to ENV, such as:
PASSENGER_APP_ENV
IN_PASSENGER
PASSENGER_SPAWN_WORK_DIR
PASSENGER_USE_FEEDBACK_FD
Web servers can typically also set the SERVER_SOFTWARE variable, e.g.:
SERVER_SOFTWARE=nginx/1.15.8 Phusion_Passenger/6.0.2
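A hedged check based on those footprints might be (the variable names are the ones listed above; which of them are actually set, and whether SERVER_SOFTWARE appears in the process ENV or only in the per-request Rack env, depends on the Passenger version and setup):
in_passenger = ENV.key?('IN_PASSENGER') ||
               ENV['SERVER_SOFTWARE'].to_s.include?('Phusion_Passenger')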
I'm using rufus-scheduler for handling cron jobs for a Rails 3.2.x app. The root worker.rb is being fired off by foreman (actually upstart on this particular server) and therefore when it starts off it does not have the Rails context in which to operate. Obviously when I attempt to call logger or Rails.logger it will fail each time.
I'm using log4r as a replacement for the default Rails Logger, which is quite nice, but I am wondering what the proper solution for this problem would be:
1) Do I need to give the script the Rails context at startup? (It simply runs rake tasks right now, so it ends up getting the Rails environment when the worker script hands off to the rake task; I fear loading the Rails env before running the timed task would be overkill, since firing up the env takes so long.) Or
2) Should I just set up log4r as one would in a non-Rails Ruby script, one that simply reads in the same log4r.yml config the Rails app uses and therefore participates in the same log structure?
3) Or some other option I'm not thinking of?
Additionally, I would appreciate either an example or the steps that I should consider with the recommended implementation.
For reference, I followed "How to configure Log4r with Rails 3.0.x?" and I think this could be helpful when integrated with the above: "Properly using Log4r in Ruby Application?"
I think this might be what you're looking for.
Use this within the worker itself to log to a custom named log file:
require 'log4r'
logger = Log4r::Logger.new('test')
logger.outputters << Log4r::Outputter.stdout
logger.outputters << Log4r::FileOutputter.new('logtest', :filename => 'logtest.log')
logger.info('started script')
# Your actual worker methods go here
logger.info('finishing')
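If you prefer option 2 above and want the standalone worker to share the Rails app's log4r.yml, a sketch along these lines may work (the config path and logger name are assumptions and must match your YAML; parameter substitution, if your YAML uses it, needs extra setup):
require 'log4r'
require 'log4r/yamlconfigurator'

# Assumes worker.rb lives in the app root and the YAML defines a logger named 'rails'.
Log4r::YamlConfigurator.load_yaml_file(File.expand_path('config/log4r.yml', File.dirname(__FILE__)))
logger = Log4r::Logger['rails']
logger.info('worker started')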
I need to setup a connection to an external service in my Rails app. I do this in an initializer. The problem is that the service library uses threaded delivery (which I need, because I can't have it bogging down requests), but the Unicorn life cycle causes the thread to be killed and the workers never see it. One solution is to invoke a new connection on every request, but that is unnecessarily wasteful.
The optimal solution is to set up the connection in an after_fork block in the Unicorn config. The problem there is that it doesn't get invoked outside of Unicorn, which means we can't test it in development/testing environments.
So the question is, what is the best way to determine whether a Rails app is running under Unicorn (either master or worker process)?
There is an environment variable that is accessible in Rails (I know it exists in 3.0 and 3.1): check the value of env['SERVER_SOFTWARE']. You could just run a regex or string comparison against that value to determine which server you are running under.
I have a template in my admin that goes through the env variable and spits out its content.
Unicorn 4.0.1
env['SERVER_SOFTWARE'] => "Unicorn 4.0.1"
rails server (webrick)
env['SERVER_SOFTWARE'] => "WEBrick/1.3.1 (Ruby/1.9.3/2011-10-30)"
You can check for defined?(Unicorn) and, in your Gemfile, set: gem 'unicorn', require: false
In fact you don't need the Unicorn library loaded in your Rails application.
The server is started by the unicorn command from the shell.
Checking for the Unicorn constant seems like a good solution, BUT it depends very much on whether require: false is provided in the Gemfile. If it isn't (which is quite probable), the check might give a false positive.
I've solved it in a very straightforward manner:
# `config/unicorn.rb` (or similar):
ENV["UNICORN"] = "1"
...
# `config/environments/development.rb` (or alike):
...
# Log to stdout if Web server is Unicorn.
if ENV["UNICORN"].to_i > 0
config.logger = Logger.new(STDOUT)
end
Cheers!
You could check to see if the Unicorn module has been defined with Object.constants.include?(:Unicorn) (on Ruby 1.9+ constants are returned as symbols).
This is very specific to Unicorn, of course. A more general approach would be to have a method which sets up your connection and remembers it's already done so. If it gets called multiple times, it just returns doing nothing on subsequent calls. Then you call the method in after_fork and in a before_filter in your application controller. If it's been run in the after_fork it does nothing in the before_filter, if it hasn't been run yet it does its thing on the first request and nothing on subsequent requests.
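A sketch of that idea (ExternalService is a stand-in for whatever client library you use):
module ServiceConnection
  def self.ensure!
    @connection ||= ExternalService.connect # no-op on subsequent calls
  end
end

# config/unicorn.rb
after_fork { |server, worker| ServiceConnection.ensure! }

# app/controllers/application_controller.rb
before_filter { ServiceConnection.ensure! } # before_action on newer Rails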
Inside config/unicorn.rb
Define ENV variable as
ENV['RAILS_STDOUT_LOG']='1'
worker_processes 3
timeout 90
and then this variable ENV['RAILS_STDOUT_LOG'] will be accessible anywhere in your Rails app's Unicorn workers.
My issue:
I wanted to output all the logs (SQL queries) on the Unicorn workers and not on any other workers on Heroku, so what I did was add an env variable in the Unicorn configuration file.
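For example, the variable can then be picked up wherever you configure logging (a sketch, not the exact code I used):
# e.g. config/environments/production.rb
config.logger = Logger.new(STDOUT) if ENV['RAILS_STDOUT_LOG'] == '1'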
If you use unicorn_rails, the code below will help:
defined?(::Unicorn::Launcher)
I have an app that runs on multiple servers:
- locally on dev machines
- on Heroku
- on a specific server with Passenger on Nginx
I am trying to run a particular piece of code (loading some Redis keys) that is only required when the web server is launched.
I have done quite a bit of digging, and the nicest solution I found was to execute my code in an initializer with:
if defined?(Rails::Server)
#my code
end
This works well locally, but it seems that Rails::Server never gets defined on Heroku or with Passenger.
I need a solution that works in every case, please help, this is really important.
Thanks,
Alex
ps: I am running Rails 3.0.4, Ruby 1.8.7
Putting code in your config.ru file might be a more robust way of detecting server mode across different types of servers (Unicorn/Passenger/Rails::Server/etc).
e.g., in rails-root/config.ru:
# This file is used by Rack-based servers to start the application.
# ADD this line and read the value later:
ENV['server_mode'] = '1'
require ::File.expand_path...
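The value can then be read anywhere that should only run under a web server, for example (load_redis_keys is a placeholder for your own code):
# e.g. in config/initializers/redis_keys.rb
load_redis_keys if ENV['server_mode'] == '1'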
What about?
config.serve_static_assets = ( defined?(Mongrel) || defined?(WEBrick) ) ? true : false