Rails Controller Without Database

I am building a Rails service that uses Server-Sent Events (SSE) to stream events to the browser. There is a controller with standard RESTful endpoints for manipulating / querying data, and then another controller (inheriting from ActionController::Live) that handles the asynchronous code. In the middle, I have Redis acting as a pub/sub.
Because I am pre-computing the messages I'd like to send in the RESTful controller, I do not use the database in the SSE controller (the auth does not require a database connection).
The Problem:
Because a database connection is being unnecessarily checked out from the pool for each SSE request, the number of concurrent streams I can serve is limited not by the server itself, but by the number of database connections I allow.
Question:
Is there a way to skip_before_filter (or similar) to avoid requiring a database connection?

You can disable db connections by default. I think this SO post tells you how:
Rails 3 - how do I avoid database altogether?
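The linked post covers the configuration side; the underlying problem is that a long-lived SSE response holds a pooled connection for its whole lifetime. A minimal pure-Ruby sketch (the `TinyPool` class is hypothetical, not the ActiveRecord API) shows why that starves other requests:

```ruby
# Hypothetical, minimal connection pool (not the ActiveRecord API),
# showing why a long-lived SSE stream that holds a pooled connection
# it never uses starves other requests.
class TinyPool
  def initialize(size)
    @available = Array.new(size) { Object.new } # stand-ins for connections
  end

  def checkout
    @available.pop or raise "connection pool exhausted"
  end

  def checkin(conn)
    @available.push(conn)
  end
end

pool = TinyPool.new(2)

# Two SSE streams each grab a connection they will never use...
held = [pool.checkout, pool.checkout]

# ...so a third request fails even though the database is idle.
exhausted = begin
  pool.checkout
  false
rescue RuntimeError
  true
end

# Releasing an unused connection (roughly what
# ActiveRecord::Base.connection_pool.release_connection does) frees a slot.
pool.checkin(held.pop)
conn = pool.checkout
```

In the actual streaming controller, releasing the connection at the top of the action (or disabling ActiveRecord for that controller entirely) has the same effect as the `checkin` above.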

Related

How to dynamically connect an abstract class to different databases for a single request in Rails 5?

In our application, we have several models which need to connect to different external databases that hold the same tables and columns, but are each separate and cannot be unified.
Currently, the application runs on separate servers, in which each is connected only to a specific external database. However, these are 10+ servers, all serving the exact same application, with the only difference being the external database they connect to.
The goal is to have a single server running the application and have the application decide which database to query based on a certain parameter passed into the controller.
Our current approach is the following. We have an abstract class from which relevant models inherit, with a method to reconnect it to the specific database:
class AbstractRecord < ApplicationRecord
  self.abstract_class = true

  def self.reconnect
    database = Thread.current[:database_name].constantize
    self.establish_connection database
  end
end
Then, we have every controller inherit from a controller class with a before_action that sets the current database name in Thread.current and calls that method:
class AccessController < ApplicationController
  before_action :set_current_database

  private

  def set_current_database
    Thread.current[:database_name] = current_user.database_name
    AbstractRecord.reconnect
  end
end
Each user has the information on which database they need to connect to, and so the application reconnects the database based on the current user.
This application also serves an API, with controllers inheriting from a similar controller that also reconnects the database based on the current API user.
We know all of the databases we need to connect to and keep them in yml files, and all of them are loaded into constants inside an initializer.
This approach works for the most part. Whenever a request is made, the database is successfully reconnected to the appropriate database, and the application functions as normal.
However, issues arise when a request is sent at the same time that another request is being processed, both in development and production:
ActiveRecord::ConnectionNotEstablished (No connection pool with 'AbstractRecord' found.)
This error is raised whenever any model that needs to query the AbstractRecord database does so after a new connection has been initiated in a different request.
Given enough time to finish, requests don't seem to interfere with each other and the database reconnections work fine.
It is my understanding that Rails handles each request on its own thread, and that each thread uses a different database connection. This raises the question: why is establish_connection causing other requests to lose their connection? Is there a major misunderstanding of how threads and database connections work in Rails in this case?
Back to the main question: How can I dynamically connect my models to a specific database during a single request in this version of Rails? Is this approach correct, or is there a more adequate solution?
Rails version: 5.2.4.3
Ruby version: 2.6.3p62
@Joaquin, for me this is clearly a case of multi-tenancy, where you need a central database with a customer table and each customer's respective database connection. There are libraries that handle this elegantly, such as the ar-octopus gem.
In your case there is a concurrency failure: the key you are using in Thread.current is probably being shared by two or more simultaneous executions. One change I would make is to use a more specific Thread.current key, such as
Thread.current[:"#{table_name}_database_name"] = current_user.database_name
where the Person class would use the key Thread.current[:"#{Person.table_name}_database_name"]. This approach is not a silver bullet, though, and certainly has flaws.
I suggest looking at the ar-octopus gem; it will bring you many benefits.
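The race described above can be reproduced without Rails: `establish_connection` swaps out class-level state shared by every thread, while `Thread.current` is private to each thread. A sketch with a stand-in class (names are hypothetical):

```ruby
# Class-level state (what establish_connection swaps out) is shared by
# every thread; Thread.current is private to each thread.
class SharedRecord
  class << self
    attr_accessor :database # stand-in for the class-level connection pool
  end
end

results = {}
threads = %w[tenant_a tenant_b].map do |tenant|
  Thread.new do
    Thread.current[:database_name] = tenant # per-thread: safe
    SharedRecord.database = tenant          # class-level: racy
    sleep 0.2                               # let the other "request" reconnect
    results[tenant] = {
      thread_local: Thread.current[:database_name],
      class_level:  SharedRecord.database,
    }
  end
end
threads.each(&:join)

# Each thread still sees its own Thread.current value, but the
# class-level value was clobbered by whichever thread wrote last.
```

For what it's worth, Rails 6+ added `connected_to` for exactly this per-thread switching; on 5.2, a block-scoped approach such as ar-octopus's `Octopus.using(:shard) { ... }` avoids mutating shared class state mid-request.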

How to dynamically and efficiently pull information from database (notifications) in Rails

I am working in a Rails application and below is the scenario requiring a solution.
I'm doing some time consuming processes in the background using Sidekiq and saves the related information in the database. Now when each of the process gets completed, we would like to show notifications in a separate area saying that the process has been completed.
So the notifications area really needs to pull things from the back-end (this notification area will be available on every page) and show them dynamically. I thought Ajax must be an option, but I don't know how to trigger it for a particular area only. Or is there any other option by which the client can fetch dynamic content from the server efficiently without creating much traffic?
I know it would be a broad topic to say about. But any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (either using SSE's or Websockets), something Rails has started to look at with ActionController::Live
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open between your app and the server. Rails is an HTTP request-based framework, meaning it only sends responses to requests. The way to send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its own timescale.
The way to do this is to use a front-end method to keep the connection "live", and a back-end stack to serve the updates. The front-end will need either SSEs or a WebSocket, which you'll connect to with JavaScript.
SSEs and WebSockets basically give you access to the server outside the scope of "normal" requests (SSE, for example, uses the text/event-stream MIME type).
Recommendation
We use a service called Pusher.
This basically gives you a third-party WebSocket service, to which you can push updates. Once the service receives an update, it sends it to any channels that are subscribed to it. You can split the channels it broadcasts to using the pub/sub pattern.
I'd recommend using this service directly (they have a Rails gem and a super simple API; I'm not affiliated with them).
Other than that, you should look at the ActionController::Live functionality of Rails
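Whichever backend you pick, the text/event-stream payload itself is simple. A minimal formatter for the wire format (comparable to what `ActionController::Live::SSE` writes to the response stream):

```ruby
require "json"

# Build one server-sent event in the text/event-stream wire format:
# an optional "event:" line, one "data:" line per line of payload,
# terminated by a blank line.
def sse_message(data, event: nil)
  out = +""
  out << "event: #{event}\n" if event
  JSON.generate(data).each_line { |line| out << "data: #{line.chomp}\n" }
  out << "\n"
end

msg = sse_message({ job: 42, status: "done" }, event: "notification")
puts msg
```

On the browser side, an `EventSource` subscribed to the endpoint receives each such block as one event.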
The answer suggested in the comment by @h0lyalg0rithm is one way to go.
However, the more primitive options are:
Use setInterval in JavaScript to perform a task every x seconds, i.e. polling.
Use jQuery or native Ajax to poll a controller/action via a route and have the controller return the data as JSON.
Use document.getElementById or jQuery to update the data on the page.
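The server side of such a polling endpoint boils down to "return everything newer than the client's last check". A plain-Ruby sketch of that logic (the `Notification` struct is hypothetical; in Rails it would be a model backed by the table Sidekiq writes to):

```ruby
require "json"
require "time"

# Hypothetical notification record; in Rails this would be a model
# backed by the table the Sidekiq jobs write to.
Notification = Struct.new(:message, :created_at)

ALL = [
  Notification.new("import finished", Time.utc(2024, 1, 1, 10, 0)),
  Notification.new("report ready",    Time.utc(2024, 1, 1, 10, 5)),
]

# What the polled controller action would do: select everything newer
# than the client's "since" timestamp and render it as JSON.
def notifications_since(all, since)
  fresh = all.select { |n| n.created_at > since }
  JSON.generate(fresh.map { |n| { message: n.message, created_at: n.created_at.iso8601 } })
end

json = notifications_since(ALL, Time.utc(2024, 1, 1, 10, 2))
```

The client then calls this endpoint from its setInterval timer, passing the timestamp of the newest notification it has already seen.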

Does a before_filter in the application controller slow down the app?

I have a few before_filter in my application controller to check 1) If the current_user is banned, 2) If the current_user has received a new message and 3) If the current_user has any pending friend requests.
This means that before every request the app will check for these things. Will this cause server issues in the future, possible a server overload?
I wouldn't say it would create a server overload on its own; for an overload you need many concurrent requests, and Rails has a connection pool to the database out of the box. But it will slow things down, since you run three queries before each request even gets to do what it was intended to do.
Facebook solved this in 2009 using what they called BigPipe. It is not a new technology so much as a way of leveraging the browser: the page is sent as a few requests containing fragments, which are then composed with some JavaScript.
You can have a read here http://www.facebook.com/note.php?note_id=389414033919.
As for your check if the user is banned, yes that is something you'd have to check either way, perhaps you can have this in cache using memcached or redis so it won't hit your database directly every time.
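The caching idea above can be sketched without Rails: memoize the banned flag with a time-to-live, the way `Rails.cache.fetch(key, expires_in: ...)` does (the `TtlCache` class here is hypothetical, not the Rails API):

```ruby
# Minimal fetch-with-expiry cache, illustrating what
# Rails.cache.fetch(key, expires_in: ...) provides.
class TtlCache
  def initialize
    @store = {}
  end

  def fetch(key, ttl:)
    entry = @store[key]
    return entry[:value] if entry && Time.now < entry[:expires_at]
    value = yield
    @store[key] = { value: value, expires_at: Time.now + ttl }
    value
  end
end

cache = TtlCache.new
db_hits = 0

# Only the first call runs the block (the "database query");
# the next two are served from the cache.
banned = 3.times.map do
  cache.fetch("user:1:banned", ttl: 60) do
    db_hits += 1
    false # stand-in for current_user.banned?
  end
end
```

With a short TTL (say 60 seconds), a banned user is still locked out quickly, but the before_filter stops issuing a query on every single request.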

Is it okay to authenticate with MongoDB per request?

I have a Rails controller that needs to write some data to my MongoDB. This is what it looks like at the moment.
def index
  data = self.getCheckinData
  dbCollection = self.getCheckinsCollection
  dbCollection.insert(data)
  render(:json => data[:_id].to_s())
end

protected

def getCheckinsCollection
  connection = Mongo::Connection.new('192.168.1.2', 27017)
  db = connection['local']
  db.authenticate('arman', 'arman')
  return db['checkins']
end
Is it okay to authenticate with MongoDB per request?
It is probably unnecessarily expensive and creates a lot more connections than needed.
Take a look at the documentation:
http://www.mongodb.org/display/DOCS/Rails+3+-+Getting+Started
They connect inside an initializer. It does some connection pooling so that connections are re-used.
Is there only one user in the database?
I'd say: don't do the db authentication. If the MongoDB server is behind a good firewall, it's pretty secure. And it should never, ever be exposed to the internet (unless you know what you're doing).
Also, don't establish a new connection per request. This is expensive. Initialize one on startup and reuse it.
In general, this should be avoided.
If you authenticate per request and you get many requests concurrently, you could have a problem where all connections to the database are taken. Moreover, creating and destroying database connections can use up resources within your database server -- it will add a load to the server that you can easily avoid.
Finally, this approach to programming can result in problems when database connections aren't released -- eventually your database server can run out of connections.
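The "connect once, reuse" advice amounts to memoizing the client at boot instead of per request. A sketch with a stub connection class (in the real app, the memoized object would be a `Mongo::Connection` created in a Rails initializer):

```ruby
# Stub standing in for Mongo::Connection; the point here is how many
# times it gets built, not what it does.
class StubConnection
  @@created = 0

  def self.created
    @@created
  end

  def initialize(host, port)
    @@created += 1
  end
end

# Built lazily, once, then shared by every request. In a real app this
# memoization would live in an initializer or a small module like this.
module CheckinStore
  def self.connection
    @connection ||= StubConnection.new('192.168.1.2', 27017)
  end
end

# Three "requests" all reuse the same connection object.
3.times { CheckinStore.connection }
```

The controller's `getCheckinsCollection` would then just call `CheckinStore.connection` instead of building (and authenticating) a new connection on every request.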

Rails w/custom TCP data service

I'm building a Rails app that needs to connect to a custom TCP data service, which uses XML messages to exchange data. Functionally, this is not a problem, but I'm having trouble architecting it in a way that feels "clean".
Brief overview:
User logs in to the Rails app. At login, the credentials are validated with the data service and a "context id" is returned.
Request:
<login><username>testuser</username><password>mypass</password></login>
Response:
<reply><context_id>123456</context_id></reply>
This context_id is basically a session token. All subsequent requests for this user must supply this context_id in the XML message.
Request:
<history><context_id>123456</context_id><start_date>1/1/2010</start_date><end_date>1/31/2010</end_date></history>
Response:
<reply><history_item>...</history_item><history_item>..</history_item></reply>
I have hidden away all the XML building/parsing in my models, which is working really well. I can store the context_id in the user's session and retrieve it in my controllers, passing it to the model functions.
#transactions = Transaction.find( { :context_id => 123456, :start_date => '1/1/2010', :end_date => '1/31/2010' } )
From a design point of view, I have 2 problems I'd like to solve:
Passing the context_id to every Model action is a bit of a pain. It would be nice if the model could just retrieve the id from the session itself, but I know this breaks the separation of concerns rule.
There is a TcpSocket connection that gets created/destroyed by the models on every request. The connection is not tied to the context_id directly, so it would be nice if the socket could be stored somewhere and retrieved by the models, so I'm not reestablishing the connection for every request.
This probably sounds really convoluted, and I'm probably going about this all wrong. If anybody has any ideas I'd love to hear them.
Technical details: I'm running Apache/mod_rails, and I have zero control over the TCP service and its architecture.
Consider moving the API access to a new class, and store the TcpSocket instance and the context ID there. Change your models to talk to this API access class instead of talking to the socket themselves.
Add an around_filter to your controller(s) that pulls the context ID out of the session, stores it into the API access class, and nils it after running the action. As long as your Rails processes remain single-threaded, you'll be fine. If you switch to a multi-threaded model, you'll also need to change the API access class to store the context ID and the TcpSocket in thread-local storage, and you'll need one TcpSocket per thread.
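A sketch of that API access class, with the context ID in thread-local storage as suggested (all names are hypothetical, and the socket is stubbed out; a real implementation would write the XML to the stored TcpSocket):

```ruby
# Hypothetical API access layer: models call DataService instead of
# opening a TcpSocket themselves. The context ID sits in thread-local
# storage so an around_filter can set it before the action runs and
# clear it afterwards.
class DataService
  class << self
    def context_id=(id)
      Thread.current[:data_service_context_id] = id
    end

    def context_id
      Thread.current[:data_service_context_id]
    end

    # Build the XML for a history request, injecting the stored
    # context_id; a real implementation would send this over the socket.
    def history_request(start_date, end_date)
      "<history><context_id>#{context_id}</context_id>" \
        "<start_date>#{start_date}</start_date>" \
        "<end_date>#{end_date}</end_date></history>"
    end
  end
end

# What the around_filter would do around each action:
DataService.context_id = '123456'
xml = DataService.history_request('1/1/2010', '1/31/2010')
DataService.context_id = nil
```

Models no longer take a `:context_id` argument, and the socket (plus the ID) can live in one place, per thread, exactly as the answer describes.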
