We have a Rails 4 app that uses a database which sometimes gets updated by spinning up a new instance of the database and having ops update the DNS record to point at the new instance (not ideal, but I can't change that). The problem is that the Rails connection pool keeps its connections to the old database open and won't talk to the new database unless we restart Rails. We can do that, but it's a pain.
We would like to have an administrative endpoint we could hit that tells the app to gracefully close database connections and restart. ActiveRecord::Base.connection_pool.disconnect! certainly closes the old database connections and when new ones are requested, they talk to the new instance, but it also takes a shotgun to any running queries and terminates them rather than letting them finish.
Is there a way to tell Rails to refresh all of its database connections at runtime in a way that will allow currently running queries to finish before being closed?
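For illustration, the kind of endpoint we have in mind might look roughly like this (controller and action names are made up); the blunt part is the disconnect! call:
# Hypothetical admin endpoint (names assumed); disconnect! drops every pooled
# connection immediately, which also terminates any queries still running on them.
class AdminController < ApplicationController
  def refresh_connections
    ActiveRecord::Base.connection_pool.disconnect!
    head :ok
  end
end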
Related
Can anyone suggest a way of forcing the above exception to occur, in the context of a Rails app?
I have a particular situation where it arises (involving scheduled database maintenance) and I need to be able to trigger it locally so I can test my application handles it correctly.
I would guess there's either something I could do to the DB itself, or else some method I could call on the ActiveRecord connection that would trigger this, but I haven't been able to figure it out.
You are probably getting this error because the MySQL connection is killed during maintenance while a SQL query is being made. (Here is a test case of this scenario https://github.com/brianmario/mysql2/blob/a8c96fbe277723e53985983415f9875e759e1d47/spec/mysql2/client_spec.rb#L597)
To reproduce this locally, you can run a long-running SQL query in Rails, e.g.
ActiveRecord::Base.connection.execute("select sleep(100)")
While that is running, find and kill the Rails SQL connections by running
SELECT id FROM INFORMATION_SCHEMA.PROCESSLIST WHERE `db` = '<your-database-name>';
kill <id>; -- Run for each id listed in the previous query.
Find connection ID
ActiveRecord::Base.connection.raw_connection.thread_id
# or
ActiveRecord::Base.connection_pool.connections.map { |conn| conn.raw_connection.thread_id }
or via SQL, as mentioned in Cameron's answer.
Then, from the mysql client, run
KILL <ID>; -- the ID you got from #thread_id
Future attempts to query via this connection will fail with "Mysql2::Error: MySQL client is not connected"
Notes:
The reconnect: true option in database.yml leads to an immediate reconnect after a KILL. You can observe this by calling #thread_id again; it will return a new ID.
ActiveRecord::Base.connection uses a separate connection for the thread it is called from. Since we killed only a single connection, other threads can still query MySQL without error.
You can access all of the process's connections via
ActiveRecord::Base.connection_pool.connections
You may wonder why, for a pool size of, say, 5 (ActiveRecord::Base.connection_pool.size), the console gives you ActiveRecord::Base.connection_pool.connections.count == 1. In that case you can check out additional connections with
ActiveRecord::Base.connection_pool.checkout
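A minimal console sketch tying these notes together (assumes a mysql2 connection and reconnect: true in database.yml; the exact behaviour of the first query after the KILL may vary by client version):
old_id = ActiveRecord::Base.connection.raw_connection.thread_id
# ... run KILL <old_id> from the mysql client ...
ActiveRecord::Base.connection.execute("select 1") rescue nil  # the first query after the KILL may raise once
new_id = ActiveRecord::Base.connection.raw_connection.thread_id
puts(old_id == new_id ? "same connection" : "reconnected with a new thread_id")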
I have deployed a Rails app at Engineyard in production and staging environments. I am curious to know whether every HTTP request to my app initializes a new instance of my Rails app or not.
Rails is stateless, which means each request to a Rails application has its own environment and variables that are unique to that request. So, a qualified "yes", each request starts a new instance[1] of your app; you can't determine what happened in previous requests, or other requests happening at the same time. But, bear in mind the app will be served from a fixed set of workers.
With Rails on EY, you will be running something like Thin or Unicorn as the web server. This will have a defined number of workers, let's say 5. Each worker can handle only one request at a time, because that's how Rails works. So if your requests take 200ms each, you can handle approximately 5 requests per second per worker. If one request takes a long time (a few seconds), that worker is not available to take any other requests. Workers are typically not created and removed on Engineyard; they are set up and run continuously until you re-deploy. On something like Heroku, by contrast, your app may have no running workers (dynos) when idle, and one will have to spin up when a request comes in.
[1] By "instance" I mean a new instance of the application class: each model and class is re-instantiated, and the #request and #session are built from scratch.
From what I have understood: no, it will definitely not initialize a new instance for every request. Then again, two questions might arise.
How can multiple users simultaneously log in and access my system without interference?
Even if one user takes up a lot of processing time, how is another user able to access other features?
The answer to the first question is that HTTP is stateless: everything is stored in the session, which is in a cookie, which is on the client machine and not on the server. So when an HTTP request is made for a logged-in user, the browser actually sends the request with the required credentials/user information from the client's cookies without the user knowing it. Multiple requests are simply queued and served accordingly; since servers are very fast, it feels like they are processed instantly.
For the second question, the answer is concurrency. The server you are using (nginx, Passenger) has the capacity to serve multiple requests at the same time. Even if the server is busy with one user's request (say, video processing), it can serve another request through another thread, so multiple users can access the system simultaneously.
After logging in to my application and waiting for some time, about half an hour, the connection to the database through Entity Framework is somehow lost and I get this message:
You must call the "WebSecurity.InitializeDatabaseConnection" method before you call any other method
Is there anything I could do?
Two things:
Configure your database's timeouts. For example, if you use MySQL, you can configure wait_timeout, interactive_timeout, etc.; other databases have similar configuration options (see the example after these two points).
Your application should handle the timeout and reconnect. It is the right thing for a database to time out idle sessions, so that resources can be released and used by active sessions.
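For the first point, a hedged MySQL example (the values are purely illustrative, not recommendations):
SHOW VARIABLES LIKE '%timeout%';       -- inspect the current settings
SET GLOBAL wait_timeout = 300;         -- seconds an idle non-interactive connection may stay open
SET GLOBAL interactive_timeout = 300;  -- the same, for interactive (mysql client) sessions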
I am writing an app that uses Server-Sent Events with ActionController::Live, running on the Puma app server. A method in the Messages controller stays alive while the user is connected, waiting for messages from Redis.
The problem is that I don't want this method to connect to Postgres. After I open the app in six tabs it needs more than the five connections allowed by the pool size in config/database.yml and the app crashes.
Is there any way to tell my app that when this method is called it doesn't need to connect to the database, since there are no ActiveRecord query calls in it?
One possible way to do this is to use middleware. A good resource for setting up your own middleware is http://railscasts.com/episodes/151-rack-middleware?view=asciicast
However, I'm not convinced that the problem you're experiencing is because of too many connections to Postgres. This is just a hunch, but I think your problem may lie elsewhere.
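If it does turn out to be connection exhaustion, one hedged alternative to the middleware route (not from the answer above; the action, channel and header names are assumptions) is to release the thread's ActiveRecord connection inside the streaming action before it blocks on Redis, so the long-lived request does not pin a pooled connection:
# Sketch only: an ActionController::Live action runs in its own thread, and any
# connection that thread checks out stays checked out until it is released.
class MessagesController < ApplicationController
  include ActionController::Live

  def events
    response.headers["Content-Type"] = "text/event-stream"
    ActiveRecord::Base.connection_pool.release_connection  # give back any connection this thread holds
    redis = Redis.new
    redis.subscribe("messages") do |on|
      on.message { |_channel, data| response.stream.write("data: #{data}\n\n") }
    end
  ensure
    response.stream.close
  end
end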
I am trying to show controller-specific pages in my Rails app when the database connection goes away. I do this by catching Mysql::Error in the rescue_action method and rendering the appropriate pages. When only the MySQL service is stopped, I get the Mysql::Error exception very quickly and I can render the pages without any delay.
But when the server itself is shut down, Rails takes 3 minutes to throw the Mysql::Error, and after 5-6 requests the whole website becomes unresponsive.
I tried to figure out which method in the Rails framework takes such a long time when the MySQL server is shut down. It was connection.real_connect (in the ActiveRecord mysql_adapter file), which took 3 minutes to return with an exception.
So I decided to time out this method using the SystemTimer gem. This monkey patch worked perfectly when I started the website with a database connection and immediately shut down the database server.
But when I start the website with the database up, use the website for some time, and then shut down the database server, it doesn't work at all, and the whole website becomes unresponsive as before. I wonder what the difference is between the two scenarios.
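(For reference, the patch is roughly along these lines; this is a simplified, untested sketch with assumed adapter method names, not the exact code.)
require 'system_timer'

# Wrap the adapter's connect in a hard timeout so a dead server can't hang us for minutes.
class ActiveRecord::ConnectionAdapters::MysqlAdapter
  def connect_with_timeout
    SystemTimer.timeout_after(5) { connect_without_timeout }  # 5 seconds is an arbitrary example
  end
  alias_method_chain :connect, :timeout
end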
I think I need to understand in more detail how Rails handles database connections and how it reacts when the connection goes away, so that I can identify the exact places to put monkey patches and make this work for my specific requirement. I haven't seen any relevant article explaining this.
Any help will be very useful to me.
Thanks,
I've not tried this, but you can add connect_timeout as one of the specified options (along with port, host, etc.) for the MySQL connection in the database.yml file. That value is passed to the real_connect call that establishes the connection to MySQL.
Furthermore, since you are experiencing the delay after the initial connection is made and the DB is shut down, you may also need the read_timeout config option.
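A hedged example of how those options might look in database.yml for the legacy mysql adapter (the host, database name and timeout values are placeholders; double-check the option names against your Rails version):
production:
  adapter: mysql
  host: db.example.com
  database: myapp_production
  username: myapp
  connect_timeout: 5   # seconds allowed for real_connect to establish the connection
  read_timeout: 10     # seconds to wait for a reply once the connection is established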