Rails Unicorn - Delay between starting request and reaching controller - ruby-on-rails

I am using Unicorn as my app server for my Rails app, and am trying to figure out why there is sometimes a non-trivial (> 5 seconds) delay between the start of a request and when it reaches my controller.
This is what my production.log prints out:
Started GET "/search/articles.json?q=mashable.com" for 138.7.7.33 at 2015-07-23 14:59:19 -0400
Parameters: {"q"=>"mashable.com"}
Searching articles for keyword: mashable.com, format: json, Time: 2015-07-23 14:59:26 -0400
Notice the 7-second delay between the "Started GET" line and "Searching articles for keyword", which is the first thing the controller method logs.
articles.json is routed to my controller method "articles" which simply does this for now:
def articles
  format = params[:format]
  keyword = params["q"]
  Rails.logger.info "Searching articles for keyword: #{keyword}, format: #{format}, Time: #{Time.now.to_s}"
end
This is my routes.rb
MyApp::Application.routes.draw do
  match '/search/articles' => 'search#articles'
  # more routes here, but articles is the first route
end
What could possibly cause this delay? Is it because a Unicorn worker is busy? Is it because a Unicorn worker is using so much memory that the whole system slows down?
Note: I don't believe the delay is in making database connections, but I could be wrong. The code doesn't need to make a database call, the max connections for my database is 1000, and there are usually at most 1-2 connections open.

Three thoughts:
You'll probably be better served using Puma instead of Unicorn.
It could be that your system is running out of memory, or it could have plenty available: install New Relic to find where the bottleneck is.
It could also be that you have more Unicorn workers than the number of connections your DB allows, in which case a worker has to wait for others to disconnect before it can connect. This would likely show up as irregular 5-second delays rather than happening on every request.

Actually, it might be caused by a before_filter callback; you should check whether any filters run before this action.

It could also be a lack of memory, and thus frequent garbage collection, which freezes the whole system.

If it's a production problem, it could be caused by slow clients sending requests. New Relic and Monit are good monitoring options. You could also consider sending signals to the Unicorn workers to restart them, to better understand the problem.
You could also try adding preload_app true to your Unicorn config to speed up the startup time of worker processes.
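For reference, a minimal config/unicorn.rb sketch showing preload_app together with the fork hooks it usually requires (the worker count, timeout, and ActiveRecord reconnect are illustrative assumptions, not from the question):

```ruby
# config/unicorn.rb -- minimal sketch; tune the numbers for your host.
worker_processes 4   # placeholder: size to your CPU/memory budget
preload_app true     # load the app once in the master, fork warm workers
timeout 30           # kill requests stuck longer than 30 seconds

before_fork do |server, worker|
  # With preload_app, sockets/DB handles opened in the master are inherited
  # by workers; disconnect before forking so each worker reconnects cleanly.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```

With preload_app enabled, worker (re)spawn is much faster because the app is already loaded in the master before the fork.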

Related

Using Ruby Mongo with ActiveJob

I am using Ruby 2.7 with the Mongo 2.17 client. I'm currently using Sidekiq with ActiveJob to perform millions of job executions, each doing a single transaction against AWS DocumentDB. While reading the Mongo client documentation, I see that they claim it is a bad idea to instantiate a client per request, rather than having one and reusing it.
Currently the job, which runs millions of times, instantiates a client and closes it at the end. Each Sidekiq process runs many job threads, and I run multiple Sidekiq processes:
jobs/my_job.rb
def perform(document)
  client = Mongo::Client.new(DOCUMENTDB_HOST, DOCUMENTDB_OPTIONS)
  client[:my_collection].insert_one(document)
  client.close
end
The documentation states:
The default configuration for a Mongo::Client works for most applications:
client = Mongo::Client.new(["localhost:27017"])
Create this client once for each process, and reuse it for all operations. It is a common mistake to create a new client for each request, which is very inefficient and not what the client was designed for.
To support extremely high numbers of concurrent MongoDB operations within one process, increase max_pool_size:
client = Mongo::Client.new(["localhost:27017"], max_pool_size: 200)
Any number of threads are allowed to wait for connections to become available, and they can wait the default (1 second) or the wait_queue_timeout setting:
client = Mongo::Client.new(["localhost:27017"], wait_queue_timeout: 0.5)
When #close is called on a client by any thread, all connections are closed:
client.close
Note that when creating a client using the block syntax described above, the client is automatically closed after the block finishes executing.
My question is whether this guidance also applies to isolated Sidekiq job executions, and if so, how I could reuse the Mongo client connection object across a Sidekiq process. I could imagine a global @@client in the Sidekiq initializer:
config/initializers/sidekiq.rb:
@@client = Mongo::Client.new(DOCUMENTDB_HOST, DOCUMENTDB_OPTIONS)
and then:
jobs/my_job.rb:
def perform(document)
  @@client[:my_collection].insert_one(document)
end
Note:
No significant errors are raised; the whole system just freezes, and I get the following exception thrown randomly after the system has been running correctly for several minutes:
OpenSSL::SSL::SSLError: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3/TLS write client hello (for 10.0.0.123:27017
UPDATE:
I tried reusing the connection client by creating a global variable holding the connection object in an initializer:
config/initializers/mongodb_client.rb
$mongo_client = Mongo::Client.new(DOCUMENTDB_HOST, DOCUMENTDB_OPTIONS)
and then using it inside my ActiveJob class. So far it seems to work well, but I am unaware of side effects. I did start many Sidekiq processes and am closely watching the logs for exceptions thrown; so far, all good.
jobs/my_job.rb
def perform(document)
  $mongo_client[:activity_log].insert_one(document)
end
It looks like Mongo::Client is thread-safe; just set :max_pool_size to your Sidekiq concurrency so every job thread can use the client concurrently.
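Putting the answer and the question's update together, a minimal initializer could look like this (DOCUMENTDB_HOST and DOCUMENTDB_OPTIONS come from the question; the pool size of 25 is an assumption, and the right value is whatever concurrency your Sidekiq processes run with):

```ruby
# config/initializers/mongodb_client.rb -- sketch, not a drop-in file.
# One client per process; all Sidekiq job threads in that process share
# its internal connection pool instead of opening a client per job.
$mongo_client = Mongo::Client.new(
  DOCUMENTDB_HOST,
  DOCUMENTDB_OPTIONS.merge(max_pool_size: 25) # match your Sidekiq concurrency
)

# Jobs then reuse the shared client:
#   $mongo_client[:my_collection].insert_one(document)
```

The key point is that the client (and its pool) lives for the whole process, so per-job TLS handshakes to DocumentDB disappear, which is also a plausible source of the random SSL_connect errors.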

Memory consumption inside Sidekiq worker

Can loading multiple models in a Sidekiq worker cause a memory leak? Do they get garbage collected?
For example:
class Worker
  include Sidekiq::Worker

  def perform
    Model.find_each do |item|
    end
  end
end
Can using ActiveRecord::Base.connection inside a worker cause problems? Or is this connection closed automatically?
I think you are running into a problem that I also had with a "worker" - the actual problem was the code, not Sidekiq in any way, shape or form.
In my problematic code, I thoughtlessly just loaded up a boatload of models with a big, fat, greedy query (hundreds of thousands of instances).
I fixed my worker/code quite simply: I transitioned my DB call from all to find_in_batches with a smaller number of objects pulled per batch.
Model.find_in_batches(batch_size: 100) do |batch|
  # I like find_in_batches better than find_each because you can use
  # lower numbers for the batch size
  batch.each do |record|
    # ... other processing
  end
end
As soon as I did this, a job that used to bring Sidekiq down after a while (running out of memory on the box) has run with find_in_batches for 5 months without me even having to restart Sidekiq... OK, I may have restarted Sidekiq some in the last 5 months when I've deployed or done maintenance :), but not because of the worker!
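The memory win comes purely from bounding how many objects are materialized at once. The same idea can be illustrated in plain Ruby with each_slice (the array and batch size here are illustrative stand-ins for records and batch_size):

```ruby
# Illustrative only: each_slice yields fixed-size chunks, the way
# find_in_batches yields fixed-size groups of records, so at most
# one chunk's worth of objects needs to be live at a time.
items = (1..10).to_a

batches = []
items.each_slice(3) { |batch| batches << batch }

batches.length  # => 4 chunks: [1,2,3], [4,5,6], [7,8,9], [10]
```

find_in_batches does the same thing at the SQL level, issuing successive keyset-limited queries instead of one giant result set.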

Resque worker logging slows down Rails responsiveness

I have a Resque job which pulls a csv list of data off of a remote server and then runs through the 40k+ entries to add any new items to an existing database table. The job runs fine, but it severely slows down the response time of any subsequent requests to the server. In the console where I launched 'bundle exec rails server', I see no print statements while the job is running. However, once I hit my rails server (via a page refresh), I see multiple SELECT/INSERT statements roll by before the server responds. The SELECT/INSERT statements are clearly generated by my Resque job, but oddly they don't print to the console until I hit the server through the browser.
It sure feels like I'm doing something wrong or not following the 'rails way'. Advice?
Here is the code in my Resque job which does the SELECT/INSERTS
# data is an array of hashes formed from parsing csv input. Max size is 1000
ActiveRecord::Base.transaction do
  data.each do |h|
    MyModel.find_or_create_by_X_and_Y(h[:x], h[:y], h)
  end
end
Software Stack
Rails 3.2.0
postgresql 9.1
Resque 1.20.0
EDIT
I've finally taken the time to debug this a bit more. Even a very simple worker, like the one below, slows down the next server response. In the console where I launched the rails server process, I can see that the delay occurs because stdout from the worker is printed only after I ping the server.
def perform
  s = Time.now
  0.upto(90000) do |i|
    Rails.logger.debug i * i
  end
  e = Time.now
  Rails.logger.info "Start: #{s} ---- End #{e}"
  Rails.logger.info "Total Time: #{e - s}"
end
I can get the rails server back to its normal responsiveness if I suppress stdout when launching it, but it doesn't seem like that should be necessary: bundle exec rails server > /dev/null
Any input on a better way to solve this issue?
I think this answer to "Logging issues with Resque" will help.
The Rails server, in development mode, has the log file open. My understanding -- I need to confirm this -- is that it flushes the log before writing anything new to it, in order to preserve ordering. If the Rails server is attached to a terminal, it wants to output all of the pending changes first, which can lead to large delays if your workers have written large quantities to the log.
Note: this has been happening to me for some time, but I just put my finger on it recently.
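One low-effort mitigation, assuming you control the worker setup, is to give the workers their own Logger instead of sharing the Rails server's handle. A sketch, with StringIO standing in for the log file so the example is self-contained:

```ruby
require 'logger'
require 'stringio'

# Sketch: a dedicated logger for worker output, so worker writes never
# contend with the terminal-attached Rails log. In a real setup you'd
# pass a file path (e.g. 'log/resque.log' -- an assumed name) instead
# of this in-memory buffer.
buffer = StringIO.new
worker_logger = Logger.new(buffer)
worker_logger.info('import finished')

buffer.string  # contains the formatted "import finished" line
```

Logger writes each entry to its device as it is logged, so worker output shows up immediately in its own file rather than queuing behind the server's log.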

max-requests-per-worker in unicorn

I sought, but did not find, a max-requests-per-worker option in unicorn similar to gunicorn's max_requests or apache's MaxRequestsPerChild.
Does it exist?
If not, has anyone implemented it?
I'm thinking of putting it in the file where I have OOBGC (out-of-band garbage collection), since that gets control after every request anyway. Does that sound about right?
The problem is that my unicorn workers are getting big and fat, and garbage collection is taking more and more of my CPU.
I've just released the 'unicorn-worker-killer' gem. It lets you kill Unicorn workers based on (1) a max number of requests and (2) process memory size (RSS), without interrupting an in-flight request. It's really easy to use. First, add this line to your Gemfile:
gem 'unicorn-worker-killer'
Then, please add the following lines to your config.ru.
# Unicorn self-process killer
require 'unicorn/worker_killer'
# Max requests per worker
use Unicorn::WorkerKiller::MaxRequests, 3072, 4096
# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, (256*(1024**2)), (384*(1024**2))
It's highly recommended to randomize the threshold to avoid killing all workers at once.
Unicorn doesn't offer a max-requests.
The unicorn master will re-spawn any worker which exits and a worker will gracefully exit at the end of the current request when it receives a QUIT signal, so you could easily roll your own max request logic into your worker request life-cycle.
With Rails, you could do something like the following in your ApplicationController (or put similar logic in a Rack middleware):
after_filter do
  @@request_count ||= 0
  Process.kill('QUIT', $$) if (@@request_count += 1) > MAX_REQUESTS
end

SQLite3::BusyException

Running a rails site right now using SQLite3.
About once every 500 requests or so, I get a
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...
What's the way to fix this that would be minimally invasive to my code?
I'm using SQLite at the moment because you can store the DB in source control, which makes backups natural, and you can push changes out very quickly. However, it's obviously not really set up for concurrent access. I'll migrate over to MySQL tomorrow morning.
You mentioned that this is a Rails site. Rails allows you to set the SQLite retry timeout in your database.yml config file:
production:
  adapter: sqlite3
  database: db/mysite_prod.sqlite3
  timeout: 10000
The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.
This is just a temporary solution, though. If your site needs true concurrency then you will have to migrate to another db engine.
By default, SQLite returns immediately with a busy error if the database is locked. You can ask it to wait and keep retrying for a while before giving up. This usually fixes the problem, unless you have thousands of threads accessing your db, in which case I agree SQLite would be inappropriate.
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
All of these things are true, but they don't answer the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?
@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the sqlite database on an NFS share? (Likely, on a shared host.)
This problem likely has to do with the phenomenon of file locking on NFS shares combined with SQLite's lack of concurrency.
If you have this issue but increasing the timeout does not change anything, you might have another concurrency issue with transactions. In summary:
You begin a transaction (acquiring a SHARED lock).
You read some data from the DB (still holding the SHARED lock).
Meanwhile, another process starts a transaction and writes data (acquiring the RESERVED lock).
Then you try to write, so you now request the RESERVED lock yourself.
SQLite raises the SQLITE_BUSY exception immediately (independently of your timeout), because your previous reads may no longer be accurate by the time it could get the RESERVED lock.
One way to fix this is to patch the ActiveRecord sqlite adapter to acquire a RESERVED lock directly at the beginning of the transaction, by passing the :immediate option to the driver. This will decrease performance a bit, but at least all your transactions will honor your timeout and occur one after the other. Here is how to do it using prepend (Ruby 2.0+); put this in an initializer:
module SqliteTransactionFix
  def begin_db_transaction
    log('begin immediate transaction', nil) { @connection.transaction(:immediate) }
  end
end

module ActiveRecord
  module ConnectionAdapters
    class SQLiteAdapter < AbstractAdapter
      prepend SqliteTransactionFix
    end
  end
end
Read more here: https://rails.lighthouseapp.com/projects/8994/tickets/5941-sqlite3busyexceptions-are-raised-immediately-in-some-cases-despite-setting-sqlite3_busy_timeout
Just for the record: in one application with Rails 2.3.8, we found that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.
After some more investigation we found a possibly related bug in Rails dev: http://dev.rubyonrails.org/ticket/8811. And after some more investigation we found the solution (tested with Rails 2.3.8):
Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb
Replace this:
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction }
end
with
def begin_db_transaction #:nodoc:
  catch_schema_changes { @connection.transaction(:immediate) }
end
And that's all! We haven't noticed a performance drop, and now the app supports many more requests without breaking (it waits for the timeout). SQLite is nice!
bundle exec rake db:reset
This worked for me: it resets the database and surfaces any pending migrations.
SQLite can make other processes wait until the current one finishes.
I use this line to connect when I know I may have multiple processes trying to access the Sqlite DB:
conn = sqlite3.connect('filename', isolation_level = 'exclusive')
According to the Python sqlite3 documentation:
You can control which kind of BEGIN statements pysqlite implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
I had a similar problem with rake db:migrate. The issue was that the working directory was on an SMB share.
I fixed it by copying the folder over to my local machine.
Most answers are for Rails rather than raw Ruby, and the OP's question IS for Rails, which is fine. :)
So I just want to leave this solution here, should any raw Ruby user have this problem and not be using a yml configuration.
After instantiating the connection, you can set the timeout like this:
db = SQLite3::Database.new "#{path_to_your_db}/your_file.db"
# In ms, meaning it will retry for 15 seconds before it raises an exception.
# This can be any number you want. The default value is 0.
db.busy_timeout = 15000
-- Open the database
db = sqlite3.open("filename")

-- Ten attempts are made to proceed if the database is locked
function my_busy_handler(attempts_made)
  if attempts_made < 10 then
    return true
  else
    return false
  end
end

-- Set the new busy handler
db:set_busy_handler(my_busy_handler)

-- Use the database
db:exec(...)
What table is being accessed when the lock is encountered?
Do you have long-running transactions?
Can you figure out which requests were still being processed when the lock was encountered?
Argh - the bane of my existence over the last week. SQLite3 locks the db file when any process writes to the database, i.e. any UPDATE/INSERT type query (also SELECT COUNT(*) for some reason). However, it handles multiple reads just fine.
So I finally got frustrated enough to write my own thread-locking code around the database calls. By ensuring that the application can only ever have one thread writing to the database at any point, I was able to scale to thousands of threads.
And yes, it's slow as hell. But it's also fast enough and correct, which is a nice property to have.
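The "one writer at a time" scheme described above can be sketched with a process-wide Mutex (all names here are illustrative, and the array append stands in for the real database write):

```ruby
# Sketch: serialize all writes behind a single Mutex so SQLite never
# sees two concurrent writers from this process.
WRITE_LOCK = Mutex.new

def locked_write(store, row)
  WRITE_LOCK.synchronize { store << row }  # stand-in for the real INSERT
end

# Many threads can call locked_write concurrently; each write runs alone.
results = []
threads = 8.times.map do |i|
  Thread.new { 100.times { locked_write(results, i) } }
end
threads.each(&:join)

results.size  # => 800, every write from every thread accounted for
```

Reads can bypass the lock entirely, which matches the observation that SQLite handles concurrent reads fine; only the write path needs to be serialized.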
I found a deadlock in the sqlite3 ruby extension and fixed it here; have a go with it and see if it fixes your problem:
https://github.com/dxj19831029/sqlite3-ruby
I opened a pull request, but there has been no response from them.
Anyway, some busy exceptions are expected, as described in the sqlite3 documentation itself. Be aware of this condition:
The presence of a busy handler does not guarantee that it will be invoked when there is lock contention. If SQLite determines that invoking the busy handler could result in a deadlock, it will go ahead and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of invoking the busy handler. Consider a scenario where one process is holding a read lock that it is trying to promote to a reserved lock and a second process is holding a reserved lock that it is trying to promote to an exclusive lock. The first process cannot proceed because it is blocked by the second and the second process cannot proceed because it is blocked by the first. If both processes invoke the busy handlers, neither will make any progress. Therefore, SQLite returns SQLITE_BUSY for the first process, hoping that this will induce the first process to release its read lock and allow the second process to proceed.
If you hit this condition, the timeout no longer applies. To avoid it, don't put a SELECT inside begin/commit, or use an exclusive lock for begin/commit.
Hope this helps. :)
This is often a consequence of multiple processes accessing the same database, e.g. if the "allow only one instance" flag was not set in RubyMine.
Try running the following, it may help:
ActiveRecord::Base.connection.execute("BEGIN TRANSACTION; END;")
From: Ruby: SQLite3::BusyException: database is locked:
This may clear up any transaction holding up the system.
I believe this happens when a transaction times out. You really should be using a "real" database, something like Drizzle or MySQL. Is there any reason why you prefer SQLite over those options?