Sharing session data between Rails and Node?

The main question is: Can I read Rails session data in Node?
More details:
I have a project that is written in Ruby on Rails. It works, but I want to build on it and eventually replace it with NodeJS. Both are running on the same server, just on different ports.
For now, RoR will serve up all the HTML files (and continue handling the existing functionality), and I'll connect to the Node server via AJAX. Node will just dish up JSON for the time being.
The problem is, how can I work with session variables between the two? More specifically, can I get at RoR's session variables in Node? Mostly I just need to know which user is logged in.
If it matters, I am running Rails 2.3.5, Ruby 1.8.7, and Node 0.8.17.

I haven't tried exactly this myself, but we did something similar with Sinatra and Java.
I won't comment on your application design, but if you don't mind using a memcached session store in your Rails application, then yes, it is possible. Configuring memcached with a Ruby app is explained in the Heroku docs.
In the Node application you can use a memcached client such as 3rd-Eden's memcached module and read the session data from memcached.
You would have to explicitly pass the session ID generated by Rails to Node.
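As a minimal sketch of what that could look like on the Rails side (Rails 2.3 syntax; the memcached address, namespace, and session key are assumptions, not from the original answer):

    # config/environment.rb -- store sessions in memcached instead of cookies
    config.action_controller.session_store = :mem_cache_store

    # Reading the session back out, shown in Ruby for illustration;
    # a Node memcached client would issue the equivalent GET.
    require 'memcache'
    cache   = MemCache.new('localhost:11211', :namespace => 'rack:session')
    session = cache.get(session_id)  # memcache-client unmarshals the session hash
    user_id = session[:user_id]      # hypothetical key -- depends on your auth setup

One thing to verify first: Rails serializes the session with Ruby's Marshal, so the Node side would need to cope with that format (or Rails could write just the values Node needs under a separate plain key).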

Related

Get Rails application domain name without request

Is there a way to get a Rails app's domain name without using the request?
I know how to get the URL from the request, but what if there is no request and Rails is just running a delayed job task? Can I get the domain name of the server where the Rails app is hosted?
There isn't a built-in way of doing this, given how Rails works, but if it is a big enough project the value might be stored somewhere.
So you could look for a place where it is being set and use that elsewhere.
For example, if the project uses request_store you could search for something like:
RequestStore.store[:host] = SomeTable.stored_host_name
And use SomeTable.stored_host_name in your worker, migration, or wherever else you need it without a request at hand.
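Another option, offered here only as an assumption about your setup: many apps already configure a host at boot for link generation (for example for Action Mailer), and a job can read that back without a request:

    # Works only if default_url_options is set, e.g. in config/environments/production.rb
    host = Rails.application.config.action_mailer.default_url_options[:host]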
Have a great day!

Rails caching not working on server

In my application I have used caching. This is the code I have used. In an after_filter, I call a method that includes this one line:
Rails.cache.write("properties", @properties.to_xml)
I try to read it in another action's before_filter like
@hotels = Rails.cache.fetch("properties")
This all works fine on my development machine, but on the server it returns nil. The application runs in the same development mode on the server. Can anyone suggest the right way? Thanks in advance.
It sounds like you haven't configured a backend for the cache store, so Rails will fall back to ActiveSupport::Cache::MemoryStore.
From the documentation:
If you're running multiple Ruby on Rails server processes (which is the case if you're using mongrel_cluster or Phusion Passenger), then this means that Rails server process instances won't be able to share cache data with each other.
This works in development because you are likely running a single server instance, so the cache lives in just one process. For production you need to configure an alternative shared store. I'd recommend running a memcached instance and installing the Dalli gem, used as per its README.
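A minimal sketch of that setup, assuming a memcached instance on localhost:

    # Gemfile
    gem 'dalli'

    # config/environments/production.rb -- one shared cache across all processes
    config.cache_store = :dalli_store, 'localhost:11211'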

Reading pending work from PhantomJS

I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is to choose between these two:
Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So I could do an HTTP GET from PhantomJS to my Rails app, and Rails would respond with a URL that is pending (most likely Rails would get that from a queue).
I am thinking of setting up a standalone Redis server that PhantomJS would access via Webdis (sketched below), so Rails would push the jobs there and PhantomJS instances would fetch them from it directly.
I am trying to decide which would be the better choice in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or PhantomJS accessing a Redis server directly.
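For illustration, the Rails side of the second option might push URLs onto a Redis list that PhantomJS then pops over HTTP via Webdis (queue name, host, and ports are made up):

    require 'redis'

    # Rails enqueues pending URLs onto a shared list...
    redis = Redis.new(:host => 'localhost', :port => 6379)
    redis.rpush('phantom:jobs', url)

    # ...and each PhantomJS instance fetches its next job through Webdis:
    #   GET http://localhost:7379/LPOP/phantom:jobs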
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means fewer layers to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark req/minute? What sort of company is this (how funded / resource-strapped are you)?
There are also more out-of-the-box solutions like IronMQ that may ease the pain.

Why am I getting RuntimeError: Session collision on '...'

I've been getting quite a lot of session collision exceptions. Usually at least one per day, but sometimes I deploy and get 2-3 in a row and then nothing.
The app runs on Rails 3.2.2 and unicorn, and sessions are stored in memcached.
The exceptions happen in different places in different controllers and I'm not really able to find anything they have in common. What could be causing this?
I don't know exactly how Ruby/Rails handles session data with memcached, but normally the flow is as follows:
new session -> the ADD command
update session -> GET with the token, then the CAS (check-and-set) command
If there is a hash collision the command ADD fails because the session already exists.
Another possible issue is if another process updated the same session between GET and CAS.
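A rough illustration of those two failure modes in Ruby, using the Dalli memcached client (the key names and session contents are made up):

    require 'dalli'
    mc = Dalli::Client.new('localhost:11211')

    # ADD fails if the key already exists -- the "Session collision" case.
    created = mc.add("session:#{sid}", { :user_id => 42 })
    raise "Session collision on '#{sid}'" unless created

    # CAS fails if another process wrote the key between our read and our write.
    mc.cas("session:#{sid}") do |session|
      session.merge(:last_seen => Time.now)
    end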

Rails and Node in the same app on Heroku?

I'm building a Rails application that deals with file uploads through CarrierWave. Currently, larger file uploads block the server for a significant amount of time. I have seen solutions like the s3-swf-upload-plugin gem that skip the local server and send files straight from the browser to S3, but this would require some modifications for pre-generating unique filenames and synchronizing them with the database. I'm sure it wouldn't be too much trouble, but Heroku's new Cedar stack gave me the idea of offloading these long running requests to a node.js instance running in the same app. I'm not very experienced with these kinds of things, so excuse my wording if it's a bit off.
Would something like this be possible? How would you configure things so that certain requests (ones involving file uploads, in this case) are handled by a Node app bundled in the same Heroku repository as the main Rails app?
I don't think it's possible to mix Rails and Node in the same app. However, you could get roughly the same functionality by using two separate apps that communicate with each other.
You can use ENV['DATABASE_URL'] to determine your database connection string. Use the Heroku console to set it as an ENV variable for your Node app (e.g. heroku config:add OTHER_DB=your_connection_string); the Node app should then be able to use the same connection string to connect to the same database. You could even access it from outside Heroku if you have a dedicated database; see: http://devcenter.heroku.com/articles/external-database-access
For seamless integration between the two apps, you could have a form rendered by the Rails app post to a URL of the Node app. In addition to the file upload, include in that form via hidden input fields any other variables you need to communicate to the Node app. When the upload to the Node app is done, it could redirect the client back to the Rails app, passing any status or variables as get parameters.
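A sketch of that hand-off form as it might look in the Rails app's view (the Node URL and field names are invented for illustration):

    <%# app/views/uploads/new.html.erb -- posts directly to the Node app %>
    <%= form_tag 'https://uploads.example.com/upload', :multipart => true do %>
      <%= hidden_field_tag :user_id,   current_user.id %>
      <%= hidden_field_tag :return_to, request.url %>
      <%= file_field_tag :file %>
      <%= submit_tag 'Upload' %>
    <% end %>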
Run the two apps under two subdomains of the same domain and you could even share cookies between them.
You need two apps. I am doing exactly what's described in this question. I wanted large streaming uploads, and since Rack writes uploads to a temp file before passing them through to the handler, it is not possible to do this with Rails.
Node.js, on the other hand, does this beautifully. So there are two Heroku apps: the Rails web app and the Node.js (Express) web app. The Rails web app uses SWFUpload as the client-side solution. The Rails app and the Node.js app both have a secret key as a Heroku config variable.

When it's time for the user to upload, client-side JavaScript requests an upload URL from the Rails server. The Rails server forms an upload URL with an Expires parameter and computes a signature using the secret key. The client-side JavaScript handler passes this URL along to SWFUpload (the upload_url property). The user selects the files to upload, and SWFUpload starts posting them to the upload_url. The Node.js app verifies that the URL is not expired and that the signature is valid, then processes the form data with the formidable library.
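A minimal sketch of the signing step described above (the parameter names and HMAC scheme are my assumptions, not necessarily what that app uses):

    require 'openssl'

    # Builds a time-limited upload URL that the Node app can verify
    # with the same shared secret (kept in a Heroku config var on both apps).
    def signed_upload_url(secret, base = 'https://uploader.example.com/upload')
      expires   = Time.now.to_i + 300   # valid for five minutes
      data      = "#{base}?expires=#{expires}"
      signature = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), secret, data)
      "#{data}&signature=#{signature}"
    end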
One other detail: Flash requires the Node.js app to serve a crossdomain.xml file that permits the cross-site request.
My Node.js app doesn't touch the database; but if it did I would share DATABASE_URL as previously suggested. Note that you can't share a DATABASE_URL outside of Heroku unless you have a dedicated DB. The DATABASE_URLs for shared databases are not reachable from outside Heroku (unlike some other services like RedisToGo).
