Using Octopus gem with Rails SQL Caching - ruby-on-rails

We are using Octopus in our Rails app to forward read queries directly to our slave boxes and writes to our master. I have to say it's a great gem, but we noticed that queries to the slaves forgo Active Record's default SQL caching. That rather defeats the purpose: we scale the DB servers horizontally only to lose the caching layer that would help us scale.
Does anyone have an idea on how to fix this, or is there a better gem to use? We don't need the sharding functionality that Octopus provides; just the replication.
Thanks ahead of time.

The way SQL caching is turned on for a connection is by doing something like:
ActiveRecord::Base.connection.cache do
  # the query cache is used here
end
# and turned off again here
For the main ActiveRecord connection, Rails has a Rack middleware that sets this up for each request. If you want this to happen for your slave connection, you'll need to do something similar yourself to enable caching for your second connection (slightly easier than a Rack middleware would be an around_filter).
I'm not very familiar with Octopus, but it seems likely that these two connections will be independent: ActiveRecord won't know to invalidate the cache on your read connection just because you've done some writes on your other connection.
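To make the mechanics concrete, here is a toy model of that enable-for-a-block pattern, using a hypothetical stub class in place of ActiveRecord's real connection object (the real API is `ActiveRecord::Base.connection.cache`):

```ruby
# Hypothetical stub mimicking how a connection's query cache is
# enabled for the duration of a block and restored afterwards.
class StubConnection
  attr_reader :query_cache_enabled

  def initialize
    @query_cache_enabled = false
  end

  # Mirrors the shape of ActiveRecord's Connection#cache: enable
  # the cache, yield, then restore the previous setting.
  def cache
    old, @query_cache_enabled = @query_cache_enabled, true
    yield
  ensure
    @query_cache_enabled = old
  end
end

conn = StubConnection.new
conn.cache do
  conn.query_cache_enabled # true inside the block
end
conn.query_cache_enabled   # false again afterwards
```

Since only the main connection gets wrapped in such a block by the default middleware, queries on an un-wrapped slave connection run with caching off, which matches the behavior described in the question.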

Related

Ruby/Rails db connection pool implementation

I have a Ruby on Rails application that takes a user HTTP request, connects to the database, and sends back the response. To make the application faster, I would like to implement a DB connection pool to avoid creating a new connection every time. I tried looking into the connection pool library, but did not fully grasp how to use it. Any help or pointers would be highly appreciated. Thanks.
ActiveRecord is the default ORM library that Rails uses, and it automatically handles connection pooling for you, so unless you're using some other library you don't need to do anything.
Some of the pool options are configurable if you feel you need to tune them, though I doubt you will: http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
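For reference, the pool options live per environment in config/database.yml; a minimal sketch (the adapter, database name, and values are illustrative, not recommendations):

```yaml
production:
  adapter: mysql2
  database: myapp_production
  # Maximum number of connections this process may hold open.
  pool: 5
  # Seconds to wait for a free connection before raising a timeout.
  checkout_timeout: 5
```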

Reading pending work from PhantomJS

I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is to choose between these two:
Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So I could make an HTTP GET call from PhantomJS to my Rails app, and Rails would respond with a URL that is pending (most likely Rails would pull it from a queue).
I am thinking of running a standalone Redis server that PhantomJS would access via Webdis; Rails would push the jobs there, and the PhantomJS instances would fetch them directly.
I am trying to work out which is the better decision in terms of performance: PhantomJS hitting the Rails server (so Rails has to fetch the job from the queue and send it to PhantomJS), or PhantomJS accessing a Redis server directly.
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means fewer layers to go through.
I'd consider developing whichever is easier to maintain. What's the ballpark requests/minute? What sort of company is this (how funded or resource-strapped are you)?
There are also more out-of-the-box solutions, like IronMQ, that may ease the pain.
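The second option's protocol can be sketched with a plain Ruby class standing in for the Redis list (the class and URLs here are illustrative; a real setup would use the redis gem or Webdis over HTTP, with RPUSH on the Rails side and LPOP on the worker side):

```ruby
# Toy stand-in for a Redis list: Rails pushes pending URLs onto it,
# and each PhantomJS worker pops its next job off it directly.
class FakeRedisQueue
  def initialize
    @list = []
    @mutex = Mutex.new
  end

  # Producer side (Rails): enqueue a URL to be parsed (like RPUSH).
  def rpush(url)
    @mutex.synchronize { @list.push(url) }
  end

  # Consumer side (worker): fetch the next pending URL (like LPOP);
  # returns nil when the queue is empty.
  def lpop
    @mutex.synchronize { @list.shift }
  end
end

queue = FakeRedisQueue.new
queue.rpush("http://example.com/page1")
queue.rpush("http://example.com/page2")

queue.lpop # => "http://example.com/page1"
```

The point of the sketch is the decoupling: once the jobs sit in a shared queue, the workers never need to round-trip through the Rails stack to get work.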

Too Many Connection with ActiveRecord and Server-Sent Events

I use SSE for something like "instant messaging" in my Rails app. I have a problem when I have more concurrent connections than my database pool value.
What's the solution to allow more users without hitting this pool limit?
I use something similar to http://rubysnippets.com/2013/04/10/rails-4-live-streaming-versus-node-dot-js/
My server is Puma and I use Redis and ActionController::Live.

Re-initialize ActiveRecord after rails startup

I'm building a system which allows the user to modify the database.yml contents via an admin frontend interface.
Changes to database.yml obviously don't have any effect until the application is restarted. Rather than forcing the user (who may not have SSH access to the physical box) to restart the application manually, I'd like a way to force ActiveRecord to reload the config after startup.
Rather than requiring them to restart the server, is there a way to force ActiveRecord to re-initialize after initial startup?
Rationale
There are two use cases for this: a) the initial setup wizard, and b) moving from the SQLite evaluation database to a production-supported database.
Initial Setup Wizard
After installing our application and starting it for the first time, the user will be presented with a setup wizard which, amongst other things, allows them to choose between the built-in evaluation database using SQLite or a production-supported database, for which they need to specify the database properties. Rather than asking users to edit YAML files on the server, we wish to present a frontend for this during initial setup.
Moving from sqlite evaluation database to production supported database
If the user opted to go with the built-in evaluation database but later wants to migrate to a production database, they need to update the database settings to reflect this change. For the same reasons as above, a frontend is preferred over requiring the user to edit config files, especially since we can validate connectivity, permissions, etc. from the application, rather than the user finding out it didn't work when they start the application and get an ActiveRecord exception.
Restart your Rails stack on the next request just as you would if you had access to the server.
system("touch #{File.join(Rails.root, 'tmp', 'restart.txt')}")
Building on @wless1's answer, to get ERB-style ENV vars (e.g. <%= ENV['DB_HOST'] %>) working, try this:
config = YAML::load(ERB.new(File.read(File.join(Rails.root, "config/database.yml"))).result)
ActiveRecord::Base.establish_connection config[Rails.env]
Theoretically, you could achieve this with the following:
config = YAML::load File.read(File.join(Rails.root, "config/database.yml"))
ActiveRecord::Base.establish_connection config[Rails.env]
You'd have to execute the reconnection code in every Rails process you're running for it to be effective. There are a bunch of dangerous things to worry about here, though: race conditions when changing the database config, syntax problems in the new database.yml, etc.
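As a self-contained illustration of why the ERB pass matters, here is the same two-step load (render ERB, then parse YAML) applied to an inline YAML string instead of the real config/database.yml; the DB_HOST variable and values are just examples:

```ruby
require "erb"
require "yaml"

# Inline stand-in for config/database.yml containing an
# ERB-interpolated environment variable.
ENV["DB_HOST"] = "db.example.com"
raw = <<~YAML
  production:
    adapter: mysql2
    host: <%= ENV['DB_HOST'] %>
YAML

# Render the ERB first, then parse the resulting YAML, just as the
# answer above does for the real file.
config = YAML.load(ERB.new(raw).result)
config["production"]["host"] # => "db.example.com"
```

Skipping the ERB step would leave the literal string "<%= ENV['DB_HOST'] %>" in the parsed config, which is exactly the failure the follow-up answer fixes.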

How to configure login when using multiple servers running a distributed service (HAProxy, Apache, Ruby on Rails)

I have 3 servers running a website. I now need to implement a login system, and I am having problems with it, as a user gets different behavior (logged in or logged out) depending on which server they connect to.
I am using memcached for the session store in Rails:
config.action_controller.session_store = :mem_cache_store
ActiveSupport::Cache::MemCacheStore.new("server1","server2","server3")
I thought the second line would keep the caches in sync, or something like that...
Each server has its own DB setup with 1 master and 2 slaves. I have tried going the route of storing sessions in SQL, but that really hurts the SQL servers, and the replication load becomes really heavy.
Is there an easy way to say: use this one Memcache store for all session storage on all 3 servers?
Will that solve my problem?
I will really appreciate it.
I haven't used memcached to store sessions before (I feel like Redis is a better solution), but I think as long as you have the
ActiveSupport::Cache::MemCacheStore.new("server1","server2","server3")
line on each of your application servers, your sessions should stay synced up.
I've had a lot of success with just using regular cookie sessions using the same setup you've described.
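The key point is that all three application servers must point at the same memcached server list, so any server can read a session written by another. A hedged sketch of such an initializer for a Rails 3-era app (MyApp, the host names, and the option names are assumptions; check the session-store API for your Rails version):

```ruby
# config/initializers/session_store.rb
# Every app server ships this identical list, so sessions written by
# one server are readable by the others.
MyApp::Application.config.session_store :mem_cache_store,
  memcache_server: ["cache1:11211", "cache2:11211"],
  key: "_myapp_session"
```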
