I have a Rails 4.2.1 app running with Unicorn as the app server.
I need to let users download CSV data.
I'm trying to stream the data, but when the file takes too long to generate, Unicorn times out and kills the worker process.
Is there any way to solve this problem?
My streaming code:
private

def render_csv(data)
  set_file_headers
  set_streaming_headers
  response.status = 200
  self.response_body = csv_lines(data)
  Rails.logger.debug("end")
end

def set_file_headers
  file_name = "transactions.csv"
  headers["Content-Type"] = "text/csv"
  headers["Content-Disposition"] = "attachment; filename=\"#{file_name}\""
end

def set_streaming_headers
  # nginx doc: setting this to "no" allows unbuffered responses,
  # suitable for Comet and HTTP streaming applications
  headers["X-Accel-Buffering"] = "no"
  headers["Cache-Control"] ||= "no-cache"
  headers.delete("Content-Length")
end

def csv_lines(data)
  Enumerator.new do |y|
    # ideally you'd validate the params, skipping here for brevity
    data.find_each(batch_size: 2000) do |row|
      y << "jhjj" + "\n"
    end
  end
end
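As an aside, the "jhjj" placeholder in csv_lines never emits real data. A minimal sketch of what it might yield instead, assuming (hypothetically) that data is an ActiveRecord relation over transactions with id, amount, and created_at columns:

require 'csv'

def csv_lines(data)
  Enumerator.new do |y|
    # header row first, then one CSV line per record, streamed in batches
    y << CSV.generate_line(%w[id amount created_at])
    data.find_each(batch_size: 2000) do |row|
      y << CSV.generate_line([row.id, row.amount, row.created_at])
    end
  end
end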
If you use a configuration file, change the timeout there. Here is how I do it.
In config/unicorn.rb
root = "/home/deployer/apps/appname/current"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
listen "/tmp/unicorn.appname.sock"
worker_processes 2
timeout 60 # <<< increase this if you need to
Then you would start Unicorn with:
bundle exec unicorn -D -E production -c config/unicorn.rb
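Worth checking alongside the Unicorn timeout: if nginx proxies to Unicorn (as the X-Accel-Buffering header in the question suggests), nginx's own proxy_read_timeout (60 seconds by default) can still cut the stream off. A sketch of the matching nginx change, with a hypothetical upstream name:

location / {
  proxy_pass http://unicorn_app;  # hypothetical upstream
  proxy_read_timeout 300s;        # keep this >= Unicorn's timeout
  proxy_buffering off;            # the X-Accel-Buffering header requests this per response
}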
Related
I use Ruby v2.2.3, Rails v4.2.4, and Unicorn v5.1.0 in production.
ActiveRecord validation rules and callbacks for all models in the app have been disabled after a Unicorn restart (triggered by the log rotation script). Here it is:
/var/www/fitness_crm/shared/log/production.log
/var/www/fitness_crm/shared/log/unicorn.stderr.log
/var/www/fitness_crm/shared/log/unicorn.stdout.log {
  daily
  missingok
  rotate 30
  compress
  notifempty
  create 640 deploy deploy
  sharedscripts
  postrotate
    kill -s USR2 `cat /var/www/fitness_crm/shared/tmp/pids/unicorn.pid`
  endscript
}
We've changed the script to send USR1 instead of USR2, but nothing has changed: when USR1 is sent, validations/callbacks simply stay disabled. Here is our Unicorn configuration file:
working_directory "/var/www/fitness_crm/current"
pid "/var/www/fitness_crm/shared/tmp/pids/unicorn.pid"
stdout_path "/var/www/fitness_crm/shared/log/unicorn.stdout.log"
stderr_path "/var/www/fitness_crm/shared/log/unicorn.stderr.log"
listen "/tmp/unicorn.fitness_crm_production.sock"
worker_processes 8
timeout 30
preload_app true
before_exec do |server|
  ENV["BUNDLE_GEMFILE"] = "/var/www/fitness_crm/current/Gemfile"
end

before_fork do |server, worker|
  # Disconnect since the database connection will not carry over
  if defined? ActiveRecord::Base
    ActiveRecord::Base.connection.disconnect!
  end

  # Quit the old unicorn process
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end

  if defined?(Resque)
    Resque.redis.quit
  end

  sleep 1
end

after_fork do |server, worker|
  # Start up the database connection again in the worker
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end

  if defined?(Resque)
    Resque.redis = 'localhost:6379'
  end
end
After this we kill Unicorn and start it manually with the command:
bundle exec unicorn -D -c /var/www/fitness_crm/shared/config/unicorn.rb -E production
After this, everything is fine and validations and callbacks are enabled again.
Please help us find out the reason for this behavior and how to fix it.
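One way to narrow down where a USR2-restarted master is loading code from (a diagnostic sketch, not from the original post; the REVISION file is a Capistrano convention and may not exist in every setup):

after_fork do |server, worker|
  # Log the working directory and deployed revision for each worker so a
  # USR2-restarted worker can be compared against a manually started one.
  revision = File.read("#{Dir.pwd}/REVISION").strip rescue "unknown"
  Rails.logger.info "worker #{worker.nr} booted from #{Dir.pwd} (revision #{revision})"
end

If the two differ, the re-exec'd master is running stale code rather than the current release.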
I keep getting a 503 error 30 seconds after a client sends a message through Faye. After the 30 seconds the client receives the message and it is appended to the chat, but the error still occurs and the socket eventually closes. How can I modify my existing code to keep the WebSocket alive? And how can I get rid of the 30-second delay that Heroku introduces when a message is sent?
messages/add.js.erb
<% broadcast @path do %>
  var $chat = $("#chat<%= @conversation.id %>");
  $chat.append("<%= j render(@message) %>");
  // Set the scroll bar to the bottom of the chat box
  var messageBox = document.getElementById('chat<%= @conversation.id %>');
  messageBox.scrollTop = messageBox.scrollHeight;
<% end %>
$("#convoId<%= @conversation.id %>")[0].reset();
application_helper.rb
def broadcast(channel, &block)
  message = { :channel => channel, :data => capture(&block), :ext => { :auth_token => FAYE_TOKEN } }
  uri = URI.parse(FAYE_END_PT)
  Net::HTTP.post_form(uri, :message => message.to_json)
end
application.rb
config.middleware.delete Rack::Lock
config.middleware.use FayeRails::Middleware, mount: '/faye', :timeout => 25
faye.ru
require 'faye'
require File.expand_path('../faye_token.rb', __FILE__)
class ServerAuth
  def incoming(message, callback)
    if message['channel'] !~ %r{^/meta/}
      if message['ext']['auth_token'] != FAYE_TOKEN
        message['error'] = 'Invalid authentication token'
      end
    end
    callback.call(message)
  end
end
Faye::WebSocket.load_adapter('thin')
faye_server = Faye::RackAdapter.new(:mount => '/faye', :timeout => 45)
faye_server.add_extension(ServerAuth.new)
run faye_server
Procfile
web: bundle exec rails server -p $PORT
worker: bundle exec foreman start -f Procfile.workers
Procfile.workers
faye_worker: rackup middlewares/faye.ru -s thin -E production
503 Error
/messages/add Failed to load resource: the server responded with a status of 503 (Service Unavailable)
I tried adding a worker dyno on Heroku along with the web dyno, with no luck. Everything works fine on localhost when running heroku local. The processes on localhost look like:
forego | starting web.1 on port 5000
forego | starting worker.1 on port 5100
worker.1 | 20:33:18 faye_worker.1 | started with pid 16534
whereas on Heroku, even with the web dyno and worker, they look like:
=== web (1X): bundle exec rails server -p $PORT
web.1: up 2015/12/28 20:08:02 (~ 1h ago)
=== worker (1X): bundle exec foreman start -f Procfile.workers
worker.1: up 2015/12/28 21:18:39 (~ 40s ago)
A lot of this code was taken from various tutorials, so hopefully solving this issue will make using Faye with Heroku easier for someone else as well. Thanks!
Heroku has a 30-second timeout for all requests; after that it raises an H12 error. https://devcenter.heroku.com/articles/limits#http-timeouts
If your request takes more than 30 seconds, you should consider putting the work into a background job, using Delayed::Job or Sidekiq for example.
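A minimal sketch of that approach, moving the Faye POST out of the request cycle into a Sidekiq worker; the class name and arguments are hypothetical, reusing the FAYE_TOKEN and FAYE_END_PT constants from the question:

require 'sidekiq'
require 'net/http'

class BroadcastWorker
  include Sidekiq::Worker

  # Posts a message to the Faye endpoint outside the web request,
  # so the controller responds well within Heroku's 30 s window.
  def perform(channel, data)
    message = { channel: channel, data: data, ext: { auth_token: FAYE_TOKEN } }
    Net::HTTP.post_form(URI.parse(FAYE_END_PT), message: message.to_json)
  end
end

The helper would then enqueue instead of posting inline, e.g. BroadcastWorker.perform_async(channel, capture(&block)). Separately, for keeping the WebSocket itself alive through Heroku's router, Faye's RackAdapter accepts a :ping option (seconds between ping frames), so something like Faye::RackAdapter.new(:mount => '/faye', :timeout => 45, :ping => 25) in faye.ru may help; treat both as sketches to adapt, not drop-in fixes.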
I'm trying to monitor a Redis server with god (the Ruby gem). I changed the PID file path in /etc/redis/redis.conf to a tmp folder in the Rails app of the user deploying the app (via Capistrano), and in the redis.god file I added a `w.pid_file = ...` line pointing at the same PID path as the one I set in redis.conf. So the redis.god file looks like this at the moment:
rails_env = ENV['RAILS_ENV'] || 'production'
raise "Please specify RAILS_ENV." unless rails_env
rails_root = ENV['RAILS_ROOT'] || File.expand_path(File.join(File.dirname(__FILE__), '..', '..'))

# Redis
%w{6379}.each do |port|
  God.watch do |w|
    w.dir = "#{rails_root}"
    w.name = "redis"
    w.interval = 30.seconds
    w.start = "/etc/init.d/redis-server start /etc/redis/redis.conf"
    w.stop = "/etc/init.d/redis-server stop"
    w.restart = "/etc/init.d/redis-server restart"
    w.start_grace = 10.seconds
    w.restart_grace = 10.seconds
    w.log = "#{rails_root}/log/redis.log"
    w.pid_file = "/home/deployer/myapp/current/tmp/pids/redis-server.pid"
    w.behavior(:clean_pid_file)

    w.start_if do |start|
      start.condition(:process_running) do |c|
        c.interval = 5.seconds
        c.running = false
      end
    end
  end
end
So the problem I'm having is that god can't get Redis started. I looked at the god log for this watch and it says the following:
Starting redis-server: touch: cannot touch `/var/run/redis/redis-server.pid': Permission denied
Why is it still trying to look in /var/run/redis/redis-server.pid? I changed the PID path in redis.conf to the new one shown above because I was getting Permission denied, but it is still insisting on looking in /var/run/redis/redis-server.pid. FYI, this is where I got the idea to change the PID path: God configuration file to monitor existing processes?
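For what it's worth, distribution init scripts typically hardcode their own pid path independently of redis.conf; on Debian/Ubuntu, /etc/init.d/redis-server usually contains a line like the following near the top (a sketch; exact contents vary by distro and version):

PIDFILE=/var/run/redis/redis-server.pid

If that's the case here, editing redis.conf alone wouldn't change where the init script tries to touch its pid file.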
Please make sure you have disabled SELinux. You can disable SELinux with this command:
setenforce 0
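Note that setenforce 0 only lasts until the next reboot. If disabling SELinux turns out to be the fix and you want it to persist, the usual approach is to edit /etc/selinux/config (assuming a RHEL/CentOS-style layout), weighing the security trade-off first:

# /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted

Then reboot, or keep using setenforce 0 for the current session.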
In our deploy scripts we use the following snippet to restart unicorn:
desc "Zero downtime restart of Unicorn"
task :restart do
run "kill -s USR2 unicorn_pid"
end
The master process forks, starts the new workers, and then kills the old master. But now it seems the new master kills the old one and takes over new connections before its own children are fully up. Since we disabled preloading of the app (preload_app false), the new workers take around 30 to 60 seconds to start, and during this time new connections hang and the website is unresponsive. How do I avoid this, i.e. make the new master take over only once its children are fully up and ready to serve requests? :)
Update:
My unicorn.rb looks like this:
# name of application
application = "myapp"
# environment specific stuff
case ENV["RAILS_ENV"]
when "integration", "staging"
worker_processes 1
when "production"
worker_processes 4
else
raise "Invalid runtime environment '#{ENV["RAILS_ENV"]}'"
end
# set directories
base_dir = "/srv/#{application}/current"
shared_path = "/srv/#{application}/shared"
working_directory base_dir
# maximum runtime of requests
timeout 300
# multiple listen directives are allowed
listen "#{shared_path}/unicorn.sock", :backlog => 64
# location of pid file
pid "#{shared_path}/pids/unicorn.pid"
# location of log files
stdout_path "#{shared_path}/log/unicorn.log"
stderr_path "#{shared_path}/log/unicorn.err"
# combine REE with "preload_app true" for memory savings
# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
preload_app false
if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end

before_exec do |server|
  # see http://unicorn.bogomips.org/Sandbox.html
  ENV["BUNDLE_GEMFILE"] = "#{base_dir}/Gemfile"
end

before_fork do |server, worker|
  # the following is highly recommended for "preload_app true"
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
  if defined?(Sequel::Model)
    Sequel::DATABASES.each { |db| db.disconnect }
  end

  # This allows a new master process to incrementally
  # phase out the old master process with SIGTTOU to avoid a
  # thundering herd (especially in the "preload_app false" case)
  # when doing a transparent upgrade. The last worker spawned
  # will then kill off the old master process with a SIGQUIT.
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
I think the main problem is that there's no child_ready hook: the before_fork and after_fork hooks are called too early. I could probably add some logic in the after_fork hook to somehow detect when a child is ready... but I hope there's an easier solution? :)
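One possible workaround, offered as a sketch and not from the original thread: drop the QUIT/TTOU logic from before_fork entirely and drive the handover from the deploy task, only retiring the old master after the new workers have had time to boot. Since both masters share the same listen socket, the old workers keep serving until the QUIT while new workers join in as each finishes loading. The paths and the 90-second budget below are assumptions; tune them to your setup and measured boot time:

desc "Zero downtime restart of Unicorn, waiting out worker boot"
task :restart do
  pid_file = "/srv/myapp/shared/pids/unicorn.pid" # matches the config above
  run "kill -s USR2 `cat #{pid_file}`"
  # With preload_app false the new workers need 30 to 60 s to load the app,
  # so wait past that before quitting the old master (written to .oldbin).
  run "sleep 90 && kill -s QUIT `cat #{pid_file}.oldbin`"
end

Crude, but it avoids the window where the old workers are gone and the new ones are still loading.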
How can I make each Unicorn worker of my Rails application write to a different log file?
The why: the problem of mixed log files...
In its default configuration, Rails writes its log messages to a single log file: log/<environment>.log.
Unicorn workers all write to that same log file at once, so the messages can get mixed up. This is a problem when request-log-analyzer parses the log file. An example:
Processing Controller1#action1 ...
Processing Controller2#action2 ...
Completed in 100ms...
Completed in 567ms...
In this example, what action was completed in 100ms, and what action in 567 ms? We can never be sure.
Add this code to the after_fork block in unicorn.rb:
# one log per unicorn worker
if log = Rails.logger.instance_values['log']
  ext = File.extname log.path
  new_path = log.path.gsub %r{(.*)(#{Regexp.escape ext})}, "\\1.#{worker.nr}\\2"
  Rails.logger.instance_eval do
    @log.close
    @log = open_log new_path, 'a+'
  end
end
@slact's answer doesn't work on Rails 3. This works:
after_fork do |server, worker|
  # Override the default logger to use a separate log for each Unicorn worker.
  # https://github.com/rails/rails/blob/3-2-stable/railties/lib/rails/application/bootstrap.rb#L23-L49
  Rails.logger = ActiveRecord::Base.logger = ActionController::Base.logger = begin
    path = Rails.configuration.paths["log"].first
    f = File.open(path.sub(".log", "-#{worker.nr}.log"), "a")
    f.binmode
    f.sync = true
    logger = ActiveSupport::TaggedLogging.new(ActiveSupport::BufferedLogger.new(f))
    logger.level = ActiveSupport::BufferedLogger.const_get(Rails.configuration.log_level.to_s.upcase)
    logger
  end
end
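With one log file per worker, you can then hand all of them to request-log-analyzer in a single run; if I recall its CLI correctly, it accepts multiple log files as arguments, e.g.:

request-log-analyzer log/production-*.log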