Where does a connection manager fit in Rails?

I'm writing a Rails application that acts as a proxy, henceforth referred to as the proxy. The idea is that the user should be able to manage his servers through a web UI that is always up and running, even if his servers are down.
To accomplish this, the proxy needs to keep open connections to the servers at all times. For this I've created a background process using daemonz that accepts incoming connections from servers and spawns threads that constantly listen on the sockets.
Now I have two problems: I need to be able to send messages on these sockets from my Rails controllers, and I need to know which socket to use to reach the right server. I was planning to use a ConnectionManager class to take care of this for me, but I don't know where such a class fits into the Rails structure, and I don't know how to make the object and the sockets available to both processes.
That makes two questions:
Where does the connection manager belong?
How do I share the connection manager and the sockets between the processes?
If you only know the answer to the first question, please go ahead and answer. It's possible that I should create a separate post for my second question.

This does not seem like a useful thing to build in Rails/Ruby.
What might be more useful would be a Rails admin application that configures an existing load balancer/proxy like haproxy under the covers.
You could keep a mapping of servers/ports/configuration in your Rails app, project that into an haproxy config, and restart the load balancer. A great place to start would be the haproxy-tools gem, which allows you to parse and generate an haproxy config file.
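As an illustration, here is a minimal sketch of that projection step. It renders the config with a plain ERB template rather than the haproxy-tools API; the server fields, template, and paths are assumptions for illustration:

```ruby
# Hypothetical sketch: project the app's server records into an haproxy
# config and restart the balancer. Fields, template, and paths are
# illustrative assumptions, not the haproxy-tools API.
require "erb"

HAPROXY_TEMPLATE = <<~ERB
  frontend www
    bind *:80
    default_backend app_servers

  backend app_servers
    balance roundrobin
  <% servers.each do |s| %>  server <%= s[:name] %> <%= s[:host] %>:<%= s[:port] %> check
  <% end %>
ERB

def write_haproxy_config(servers, path: "/etc/haproxy/haproxy.cfg")
  File.write(path, ERB.new(HAPROXY_TEMPLATE).result_with_hash(servers: servers))
  system("systemctl", "restart", "haproxy")
end

write_haproxy_config([{ name: "app1", host: "10.0.0.5", port: 3000 }])
```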
It doesn't make sense to write your own load balancer, and Ruby/Rails would be a poor technology stack for it even if you did.

Related

How to use Nodejs and Ruby on Rails together?

I really like Ruby on Rails, but I have also developed in Node.js. Currently I'm making a web app with chat functionality that could have 30 people in it, and for that I want to use Node.js.
I have never done this before, and I'm confused about how traffic is divided between the two apps. How is state shared between them? For example, how would I share the user session? Would I have to hit the database for every request?
My first recommendation is to not split a web app between two separate server platforms. It overly complicates the project and isn't necessary at all.
That being said, if it must be done, you could use one of the platforms as the 'main' one and the other for API endpoints stationed at localhost:some-port-number. This way, from the main platform (Rails, let's say), you could request data from the Node.js API by sending requests to the local IP address and port that Node is running on.
Again, I recommend against this. But that's one solution if it must be done.
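If you do go that route, a minimal sketch of the Rails side might look like this, assuming a hypothetical Node endpoint at http://127.0.0.1:3001 serving JSON:

```ruby
# Hypothetical sketch: Rails fetching chat data from a local Node.js API.
# The endpoint path, port, and response shape are illustrative assumptions.
require "net/http"
require "json"

def fetch_chat_messages(room_id)
  uri = URI("http://127.0.0.1:3001/rooms/#{room_id}/messages")
  response = Net::HTTP.get_response(uri)
  raise "Node API returned #{response.code}" unless response.is_a?(Net::HTTPSuccess)
  JSON.parse(response.body)
end
```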

Communication between Rails processes

Consider a tic-tac-toe game built with Nginx as a reverse proxy in front of multiple Rails backends. Each client sets up a websocket connection with one of the Rails backends. If two clients playing a game are each connected to a different Rails backend, then a move sent to one backend needs to be routed to the other backend so it can be pushed out on the other websocket.
In Rails what is the idiomatic way to communicate between two Rails backends?
In this situation you should set up a separate WebSocket server and connect both the users and the Rails servers to it. That way you can handle all users from one server without worrying about sharding.
Under high traffic you could also set up several WebSocket servers and implement some kind of queue or message bus between them to propagate new messages - for example, a master server that only handles propagating messages, with slave servers connected to it that forward every message they receive from users. Note that in such a configuration the master server should not handle user connections; it exists only to propagate messages between the slaves.
Finally, to answer your last question directly: there is usually no need for Rails servers to contact each other directly. Unlike WebSocket servers, they work on a request-response basis, so exchanging information via the database is enough in most cases. If you really need immediate delivery, solutions like AMQP should help.
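If you do go the message-bus route, a minimal sketch using the redis gem's pub/sub looks like this; the channel name and payload shape are hypothetical:

```ruby
# Hypothetical sketch: propagating a move between Rails backends via Redis
# pub/sub. The channel name and payload shape are illustrative assumptions.
require "redis"
require "json"

# Publisher: called by the backend that received the move over its websocket.
def publish_move(game_id, move)
  Redis.new.publish("game:#{game_id}", move.to_json)
end

# Subscriber: each backend runs something like this in a background thread
# and pushes the received move out on its own websocket connections.
Thread.new do
  Redis.new.subscribe("game:42") do |on|
    on.message do |_channel, payload|
      move = JSON.parse(payload)
      # push `move` to the locally connected client here
    end
  end
end
```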

Communicating between Node.Js and ASP.NET MVC Application

I have an existing complex website built using ASP.NET MVC, including a database backend, data layer, as well as the Web UI layer. Rebuilding this website in another language is not a feasible option.
There are some UI elements on some views (client side) which would benefit from live interactivity, involving both push and pull, so rather than implement some kind of custom long polling or websocket server in asp.net, I am looking to leverage node.js for Windows, and Socket.io.
My problem is that I need two way communication between both applications. Each user should only be able to receive data once they are authorised on the ASP.NET website, so I first need communication for this. Secondly, once certain events occur on the ASP.NET website I want to immediately push this data to the Node server, to be broadcast to specific users or groups of users. Thirdly, I would like any data sent to the node.js server to be pushed to the ASP.NET website for processing, as this is where all our business logic lies. The sole reason for adding Node.js is to have the possibility to push data directly to the client, I do not want to build any business logic into it (or as little as possible).
I would like to know the fastest method of two-way push communication between Node.js and ASP.NET. The only good option I'm aware of so far is to create a special listener on a specific port on the node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method? I also know that you could use a database in between, but surely this would need to be polled and would be less efficient? Both applications will be running on the same machine under a Visual Studio project.
Many thanks for any help you can provide.
I'm not an ASP.NET expert, but I think there are multiple ways you can achieve this:
1) As you said, you could make Node listen on a specific port for data and then react based on the data received (TCP).
2) You can make POST requests to Node.js (HTTP) and also send an auth key in the process to be extra secure. As in 1), Node would react to the data you send.
3) Use something like Redis for pub/sub: send messages from ASP.NET (pub) and receive them on the Node.js side (sub). This is even better if you want to scale your app across multiple machines, etc.
The only good option I'm aware of so far is to create a special listener on a specific port on the node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method?
You can try the Redis pub/sub model, where the ASP.NET MVC application and Node.js communicate through separate channels in order to achieve full-duplex communication. You could also try CouchDB change notifications.
I also know that you could use a database in between but surely this would need to be polled and would be less efficient?
Neither of the above techniques requires you to poll for changes; instead, they notify you when a change happens or a channel message arrives.

How can I retrieve updated records in real-time? (push notifications?)

I'm trying to create a Ruby on Rails e-commerce application where potential customers can place an order and the store owner receives it in real time.
The finalized order will be recorded in the database (SQLite at the moment), and the store owner will have a browser window open where new orders appear just after they are finalized.
(Application info: I'm using the HOBO Rails framework, and planning to host the app on Heroku)
I'm now considering the best technology to implement this, as the application is expected to have a lot of users sending in a lot of orders:
1) Each browser window refreshes the page every X minutes, polling the server continuously for new records (new orders). Of course, this puts a heavy load on the server.
2) As above, but poll the server with some kind of AJAX framework.
3) Use some kind of server push technology, like 'comet' asynchronous messaging. I found Juggernaut; the only problem is that it uses Flash and custom ports, which could be an issue since my app should be accessible behind corporate firewalls and NAT.
4) I'm also looking at node.js, which seems efficient for this kind of asynchronous messaging, though it is not supported on Heroku.
Which is the most efficient way to implement this kind of functionality? Is there perhaps another method that I have not thought of?
Thank you for your time and help!
Node.js would probably be a nice fit - it's fast, loves realtime and has great comet support. The only downside is that you are introducing another technology into your solution. It's pretty fun to program in, though, and a lot of the libraries have been inspired by Rails and Sinatra.
I know heroku has been running a node.js beta for a while and people were using it as part of the recent nodeknockout competition. See this blog post. If that's not an option, you could definitely host it elsewhere. If you host it at heroku, you might be able to proxy requests. Otherwise, you could happily run it off a sub domain so you can share cookies.
Also check out socket.io. It does a great job of choosing the best way to do comet based on the browser's capabilities.
To share data between node and rails, you could share cookies and then store the session data in your database where both applications can get to it. A more involved architecture might involve using Redis to publish messages between them. Or you might be able to get away with passing everything you need in the http requests.
In HTTP, requests can only come from the client. Thus the best options are what you already mentioned (polling and HTTP streaming).
Polling is the easier option to implement, though it will use quite a bit of bandwidth. That's why you should keep the requests and responses as small as possible, so you should definitely use XHR (Ajax) for this.
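For the polling option, a minimal sketch of the Rails side, assuming a hypothetical Order model; the client sends the timestamp of the last order it saw, so responses stay small:

```ruby
# Hypothetical sketch: an action the store owner's page polls via XHR.
# The Order model and `since` parameter are illustrative assumptions.
class OrdersController < ApplicationController
  def recent
    since = Time.at(params[:since].to_i)
    orders = Order.where("created_at > ?", since).order(:created_at)
    render :json => { :orders => orders, :now => Time.now.to_i }
  end
end
```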
Your other option is HTTP streaming (Comet); it will require more work on the set up, but you might find it worth the effort. You can give Realtime on Rails a shot. For more information and tips on how to reduce bandwidth usage, see:
http://ajaxpatterns.org/Periodic_Refresh
http://ajaxpatterns.org/HTTP_Streaming
Actually, if you have your store owner run Chrome (other browsers will follow soon), you can use WebSockets (just for the store owner's notifications, though), which let you keep a constant connection open and send data to the browser without the browser requesting anything.
There are a few websocket libraries for node.js, but I believe you can do it easily yourself using just a regular TCP connection.

Receiving REST response on local machine

I use a web service to convert files. The service returns the converted file as an HTTP POST, along with identifier data. My app receives the response, updates its database and saves the file to the appropriate location.
At least that's the idea, but how do I develop and test this on a local machine? Since my machine isn't publicly accessible, I can't give the service a callback URL to POST to. What's the best way to handle this? I want to keep the process as clean as possible, and the only ideas I can come up with have seemed excessively kludgy.
Given how common REST API development is, I assume there are well-established best practices for this. Any help appreciated.
The solution will change a bit depending on which server you're using.
But the generally accepted method is using the loopback address: 127.0.0.1 in place of a fully qualified domain name. Your server may need to be reconfigured to listen on this IP address, but that's usually a trivial fix.
example: http://127.0.0.1/path/to/resource.html
You can use curl, or even your browser if your application has a proper frontend. There are many other similar tools for testing this from the command line, and each language has a set of libraries for establishing HTTP connections and transferring data over them.
If your machine isn't accessible to the service you are using, then your only option is really to build a local implementation of the service that exercises your API. A rake task that sends the POST with the file and the identifier data would be a nice thing: you could start your Rails app locally and then kick off the task with some params to run your application through its paces.
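A minimal sketch of such a rake task; the callback route, port, and params are hypothetical:

```ruby
# Hypothetical sketch: a rake task that impersonates the conversion service
# by POSTing a file plus identifier data to the locally running app.
require "net/http"

namespace :fake_service do
  desc "POST a converted file to the local app, like the real service would"
  task :callback, [:file, :id] do |_t, args|
    uri = URI("http://127.0.0.1:3000/conversions/callback")
    request = Net::HTTP::Post.new(uri)
    request.set_form(
      [["id", args[:id]],
       ["file", File.open(args[:file])]],
      "multipart/form-data"
    )
    response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
    puts "#{response.code} #{response.message}"
  end
end
```

With the app running locally, `rake fake_service:callback[tmp/converted.pdf,42]` then exercises the same code path the real service would hit.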
This is the case any time you are trying to develop a system that can't connect to a required resource during development. You need to build a development harness of sorts so that you can exercise all the different types of actions the external service will call on your application.
This certainly won't be easy or straightforward, especially if your interface to this external service is complicated. Be sure to have your test cases send bad POSTs to your application, so that you are sure you handle both what you expect and what you don't.
Also make sure that you do some integration testing with the actual service before you "go-live" with the application. Hopefully you can deploy to an external server that the web service will be able to access in order to test. Amazon's EC2 hosting environment would let you set up a server very quickly, run your tests, and then shut down without much cost at all.
You have 2 options:
Set up dynamic dns and expose your app to the outside world. This only works if you have full control over your network.
Use something like webrat to fake the posts to your app. Since it's only 1 request, this seems pretty trivial.
Considering that you should be writing automated tests for this, I'd go with #2. I used to do #1 when developing Facebook apps, since there were far too many requests to mock them all out with webrat.
If your question is about testing, why don't you use mocks to fake the server? It's more elegant than using Webrat, and easier to deploy (you only have one app instead of an app and a test environment).
More info about mocks: http://blog.floehopper.org/presentations/lrug-mock-objects-2007-07-09/
There's some info about mocks with RSpec here: http://rspec.info/documentation/mocks/
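As a minimal sketch of that approach (the controller, route, and storage collaborator are hypothetical, and the syntax assumes a reasonably recent RSpec):

```ruby
# Hypothetical sketch: faking the service's POST in a controller spec and
# mocking the collaborator that writes the converted file to disk.
require "spec_helper"

describe ConversionsController do
  it "stores the converted file and updates the order record" do
    storage = double("storage")
    expect(storage).to receive(:save)
    allow(controller).to receive(:storage).and_return(storage)

    post :callback, :id => "42", :file => fixture_file_upload("converted.pdf")

    expect(response).to be_success
  end
end
```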
