How does Rails handle database connections in the background? - ruby-on-rails

I am trying to show controller-specific pages in my Rails app when the database connection goes away. I do this by catching Mysql::Error in the rescue_action method and rendering the appropriate pages. When the MySQL service alone is stopped, I get the Mysql::Error exception almost immediately and can render the pages without any delay.
But when the server itself is shut down, Rails takes 3 minutes to raise the Mysql::Error, and after 5-6 requests the whole website becomes unresponsive.
I tried to figure out which method in the Rails framework takes so long when the MySQL server is shut down. It was connection.real_connect (in the ActiveRecord mysql_adapter file), which took 3 minutes to return with an exception.
So I decided to time out this method using the SystemTimer gem. This monkey patch worked perfectly when I started the website with a database connection and immediately shut down the database server.
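The patch was along these lines (a simplified sketch; the 5-second limit is illustrative):

require 'system_timer'

# Give the blocking connect call a hard deadline instead of letting it
# hang for ~3 minutes when the server machine is down.
class Mysql
  alias_method :real_connect_without_timeout, :real_connect

  def real_connect(*args)
    SystemTimer.timeout_after(5) { real_connect_without_timeout(*args) }
  end
end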
But when I start the website with the database up, browse it for some time, and then shut down the database server, the patch doesn't work at all and the whole website becomes unresponsive as before. I wonder what the difference between the two scenarios is.
I think I need to understand in more detail how Rails handles database connections and how it reacts when the connection goes away, so that I can identify the exact places to monkey patch for my specific requirement. I haven't seen any article explaining this.
Any help would be very useful.
Thanks,

I've not tried this, but you can add connect_timeout as one of the options (along with host, port, etc.) for the MySQL connection in the database.yml file. That value is passed to the real_connect call that establishes the connection to MySQL.
Furthermore, since you are experiencing the delay after an initial connection has been made and the DB is then shut down, you may also need the read_timeout config option.
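For example, in database.yml (timeout values are in seconds and purely illustrative):

production:
  adapter: mysql
  host: db.example.com
  username: app
  password: secret
  database: app_production
  connect_timeout: 5  # fail fast when the server is unreachable
  read_timeout: 5     # fail fast when an established connection goes dead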

Related

Gracefully force database reconnection in Rails 4

We have a Rails 4 app that uses a database that sometimes gets updated by having a new instance of the database spun up, after which ops updates the DNS record to point to the new instance (not ideal, but I can't change that). The problem is that the Rails connection pool keeps its connections to the old database open and won't talk to the new database unless we restart Rails. We can do that, but it's a pain.
We would like to have an administrative endpoint we could hit that tells the app to gracefully close database connections and restart. ActiveRecord::Base.connection_pool.disconnect! certainly closes the old database connections and when new ones are requested, they talk to the new instance, but it also takes a shotgun to any running queries and terminates them rather than letting them finish.
Is there a way to tell Rails to refresh all of its database connections at runtime in a way that will allow currently running queries to finish before being closed?
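For illustration, the blunt version the question describes, an admin endpoint that simply calls disconnect!, is only a few lines (route and authentication are assumed):

class Admin::ConnectionsController < ApplicationController
  # POST /admin/connections/reset
  def reset
    # Closes every pooled connection; the next checkout re-resolves DNS and
    # connects to the new instance. In-flight queries on old connections
    # are terminated, which is exactly the problem described above.
    ActiveRecord::Base.connection_pool.disconnect!
    head :no_content
  end
end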

Rails 5 server hangs when receives multiple requests at once

My development Rails 5 server with Puma keeps freezing and hanging when I send multiple requests at once from my separate frontend app to the Rails API. There is no error; it just hangs on the POST requests. When I try to kill the server with CTRL + C, nothing happens. I have to kill the process occupying the port manually.
I've tried setting config.eager_load = true in development.rb. I've tried adding config.allow_concurrency in application.rb. I've Googled relentlessly to no avail. I am sending around 5 concurrent requests from the frontend, so I believe this number of requests is causing it, but I don't know for sure.
Has anyone else experienced this, or have an idea of what needs to be done here? I can usually get all the requests coming back to the frontend successfully around 3-4 times, then the server just freezes.
It especially occurs after I change any line of code in any file of the project while the server is running.
It's been nearly 2 years, but I finally stumbled upon what had been causing my issue.
Basically, it boiled down to a method in my code not being thread-safe. Since my current_user variable was only accessible from my controllers, I had a before_action on my base controller that assigned the current user to User.current, so that I could access the current user globally via User.current, not just in my controllers.
So PLEASE make sure you're not dynamically updating classes like this from your controllers. It is not thread-safe. I ended up following this thread-safe solution instead for my particular case: https://stackoverflow.com/a/2513456/7629239
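A rough sketch of the thread-local approach from that answer (method names are assumptions):

class User < ApplicationRecord
  # Each request-handling thread gets its own slot; a class-level attribute
  # would be shared (and clobbered) across every concurrent Puma thread.
  def self.current
    Thread.current[:current_user]
  end

  def self.current=(user)
    Thread.current[:current_user] = user
  end
end

# In the base controller (authenticate_user is hypothetical):
# before_action { User.current = authenticate_user }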
What is your Puma configuration? How many threads and workers (Puma workers, not Rails workers)?
Ensure that your Puma has enough threads and that your DB pool is large enough. Changing a line of code should not exhaust your server's resources. Are you using a file watcher like Watchman?
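For reference, the relevant knobs (values purely illustrative):

# config/puma.rb
workers 2      # separate worker processes (Puma workers, not Rails workers)
threads 5, 5   # min, max threads per worker process

# config/database.yml -- the pool should be at least as large as the
# per-process thread count, or requests will block waiting for a connection:
#   pool: 5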

Cancel Rails DB connection

I am writing an app that uses Server-Sent Events with ActionController::Live, running on the Puma app server. A method in the Messages controller stays alive while the user is connected, waiting for messages from Redis.
The problem is that I don't want to connect to Postgres in this method. After I open the app in six tabs, it exceeds the five connections defined by the pool size in the config/database.yml file, and the app crashes.
Is there any way to tell my app that when this method is called it doesn't need to connect to the database, since there are no ActiveRecord query calls in it?
One possible way to do this is to use middleware. A good resource for setting up your own middleware is http://railscasts.com/episodes/151-rack-middleware?view=asciicast
However, I'm not convinced that the problem you're experiencing is because of too many connections to Postgres. This is just a hunch, but I think your problem may lie elsewhere.
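For what it's worth, a bare-bones Rack middleware of the kind covered in that episode looks like this; returning the ActiveRecord connection to the pool after each request is one thing you might do with it (the class name and placement are assumptions):

class ConnectionReleaser
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  ensure
    # Hand any checked-out connection back to the pool so a long-lived
    # request doesn't pin one of the five pooled connections.
    ActiveRecord::Base.clear_active_connections!
  end
end

# config/application.rb:
# config.middleware.use ConnectionReleaser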

Workaround for Heroku 30 second timeout w/ long external query

Note: There are going to be things in this post which are less-than-best-practices. Be warned :)
I'm working on an admin dashboard which connects to a micro-instance AWS server.
The DB has tens of millions of records.
Most queries come back within a few seconds but some take up to a minute or two to return, based on a few things outside of my control.
Due to Heroku's 30-second limit (https://devcenter.heroku.com/articles/request-timeout), I need to find some way to buy time to keep the connection open until the query returns. Heroku does say that you can buy time by sending bytes to the client in the meantime, which buys you another 55 seconds.
Anyways, just curious if you guys have a solution to stall time for Heroku. Thanks!
I have made a workaround for this. Our app runs Sinatra, and I used the EventMachine gem to keep writing \0 into the stream every 10 seconds so Heroku doesn't close the connection until the action is complete; see the example: https://gist.github.com/troex/31790323fb4a8a29c8b8cd84e50ad1e8
My example uses Puma, but it should work for Unicorn and Thin as well (you don't need EventMachine.run for Thin). For Rails, I think you can use before/after_action to start/stop the event timer.
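In Rails, a comparable sketch with ActionController::Live might look like this (SlowReport is a hypothetical stand-in for the long query):

class ReportsController < ApplicationController
  include ActionController::Live

  def show
    # Write a NUL byte every 10 seconds so Heroku's router keeps seeing
    # traffic and grants the extra 55-second windows.
    heartbeat = Thread.new do
      loop do
        sleep 10
        response.stream.write("\0")
      end
    end
    result = SlowReport.run(params[:id])  # the 1-2 minute query
    response.stream.write(result.to_json)
  ensure
    heartbeat.kill if heartbeat
    response.stream.close
  end
end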
You could break the thing down into multiple queries.
You could send a query, have your AWS server respond immediately just saying that it received the query, and then, once it has pulled the data, have it send that data back via a POST request to your Heroku instance.
Yes, do it via Ajax: send back a response that says "ask again in a bit"...
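A rough shape of that polling flow (the routes, job class, and cache usage are all assumptions):

class ReportsController < ApplicationController
  # POST /reports -- kick off the slow query and return immediately.
  def create
    job_id = SecureRandom.uuid
    SlowQueryJob.perform_later(job_id, params[:query])  # hypothetical background job
    render json: { job_id: job_id, status: "pending" }, status: :accepted
  end

  # GET /reports/:id -- the client polls until the result is ready.
  def show
    result = Rails.cache.read("report:#{params[:id]}")
    if result
      render json: { status: "done", result: result }
    else
      render json: { status: "pending" }  # client retries in a bit
    end
  end
end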

In Rails, Mysql.real_connect takes about 3 minutes for every request when the MySQL server is shut down

I am trying to render a few static pages in my Rails app when the MySQL server is shut down. I tried to catch the Mysql::Error exception and render the corresponding static page for each controller.
When we just stop the MySQL service on the machine where MySQL is installed, the Mysql::Error exception is thrown immediately and I am able to render the pages without any delay. But if I shut down the server itself, the whole website becomes unresponsive.
I traced down the actual call in the Rails framework that takes 3 minutes to complete. It was this statement
Mysql.real_connect
in the active_record gem. Is there any way I can set a timeout so that, when the MySQL server is powered off, it returns with the Mysql::Error exception quickly and I can render the pages without any delay?
This is probably coming from the socket timeout within the MySQL adapter. When the service is stopped, the server responds quickly with a connection-refused error. When the server itself is down, the socket has to hit a connection timeout before it returns. What you'll probably have to do is monkey patch the #real_connect method so that it first validates that the server is running by attempting a socket connection (with a timeout) before continuing on with the original implementation. This question may be of some help to you there:
How do I set the socket timeout in Ruby?
dbh = Mysql.init
dbh.options(Mysql::OPT_CONNECT_TIMEOUT, 6) # give up on the TCP connect after 6 seconds
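And a sketch of the probe-first monkey patch described above (host/port handling is simplified and the 3-second timeout is arbitrary):

require 'socket'
require 'timeout'

class Mysql
  alias_method :real_connect_without_probe, :real_connect

  def real_connect(host, *args)
    # Fail fast if nothing is listening: a plain TCP connect to the MySQL
    # port returns (or times out) in seconds rather than minutes.
    Timeout.timeout(3) { TCPSocket.new(host, 3306).close }
    real_connect_without_probe(host, *args)
  rescue Timeout::Error, SystemCallError => e
    raise Mysql::Error, "MySQL server at #{host} is unreachable (#{e.class})"
  end
end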
