Partial migration from Rails to Phoenix - ruby-on-rails

I have a Rails app, and I want to gradually move it to Phoenix. While I implement functionality, I want Phoenix to intercept the requests that are already implemented while passing unknown requests down to the Rails app. What would be the best strategy in this case?
1) If I'm ready to accept some overhead, I could create a plug and route all unknown requests there (a last route /*path). But how do I pass the request through intact and return the response? Parsing it and then building it again with HTTPoison would be unnecessary work; any better ideas?
2) I'm not sure if it's possible with HAProxy, but the old app could be a fallback, where the request would be passed if the main backend responds with some error. Would this introduce less overhead?
3) Finally, I could just split requests by mask in HAProxy, but it seems like too much work, because I'm planning on using Rails for CUD actions and Phoenix for R (reads) for some resources.
Any other options? Any examples of how someone has done this? Thank you!

Read an excellent post about your exact problem here.
The basic idea is to use the rails-reverse-proxy gem to define a proxy to your Phoenix application.
Then, develop your feature in Phoenix and add the necessary routes. Keep the Rails conventions (it's the way the Phoenix router works anyway).
Wire your Rails app up with a 'dummy' controller and set it to use rails-reverse-proxy, as sketched below.
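A minimal sketch of such a dummy controller, assuming the Phoenix app listens on localhost:4000 and using the reverse_proxy helper from the gem's README (the controller and route names here are illustrative, not from the original post):

# config/routes.rb
# get "/reports/*path" => "phoenix_proxy#show"

# app/controllers/phoenix_proxy_controller.rb
class PhoenixProxyController < ApplicationController
  include ReverseProxy::Controller

  def show
    # Forward the request to the Phoenix app and relay its response back.
    reverse_proxy "http://localhost:4000" do |config|
      # If Phoenix answers 404 for the path, send the user back to the Rails root.
      config.on_missing do |code, response|
        redirect_to root_url
      end
    end
  end
end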
It is also recommended that you make all the ActiveRecord models that are owned by the Phoenix app read-only, by adding an after_initialize :readonly! hook to the models owned by Phoenix. This way you can still use the models in Rails without compromising the Phoenix ownership: only the Phoenix app can change the model state.
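For example, if reports are now owned by the Phoenix app, the corresponding Rails model could be locked down like this (Report is an illustrative name):

class Report < ActiveRecord::Base
  # Any attempt to save, update or destroy from Rails now raises
  # ActiveRecord::ReadOnlyRecord; reads and associations still work.
  after_initialize :readonly!
end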

Related

Rails/Angular: How to implement internal and external REST/JSON APIs in same app?

I'm planning on implementing a single-page application in Rails/AngularJS which also has some pieces that are exposed as a "public" API. My question is, what's the best way to architect the two APIs in such an application? E.g. Is it wise to have them both housed/versioned in the same namespace, or should they be kept separate somehow?
This is relatively new territory for me, but at first blush it seems like providing a single API covering both internal and external needs, then parceling out which pieces are available via some kind of authorization system based on the provided token, would be the best way of going about this.
Is this the right direction, or would you recommend some other path?
FWIW, I will give you my opinion.
CAVEAT: I'm not a rails guy so I'm coming at this from nodejs/expressjs land.
There are many ways to skin this cat, but I'll just say that you are headed in the right direction. If you want to look at a very opinionated way to do things (and one people might hate) in Node, see this: https://github.com/DaftMonk/fullstack-demo/blob/master/server/api/user/index.js. Here you see this bit:
// From the linked demo app: `auth` and `controller` are that app's own
// auth service and user controller modules.
var express = require('express');
var router = express.Router();
router.get('/', auth.hasRole('admin'), controller.index);
router.delete('/:id', auth.hasRole('admin'), controller.destroy);
router.get('/me', auth.isAuthenticated(), controller.me);
router.put('/:id/password', auth.isAuthenticated(), controller.changePassword);
router.get('/:id', auth.isAuthenticated(), controller.show);
router.post('/', controller.create);
These routes correspond to calls to http://serverurl/api/user/ etc. Obviously, these are all checking authentication, but you could easily create a resource route that didn't need to check for authentication before passing control to the controller and (eventually) sending back a resource.
The approach this takes is to have middleware on the server check for auth tokens to make sure the client can call the API. Without making you look into the code too much, I'll just give you a basic rundown.
client (requests auth) -> server (approves, passes back token) -> client (stores token)
LATER:
client (requests API call, sends token in request) -> server (passes request to middleware that checks the token to make sure it's kosher) -> server (sends back resource and token) -> client (uses resource and stores token)
Then the whole thing repeats.
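Since you're on Rails, the same token check would typically live in a before_action rather than in Express middleware. A rough sketch, assuming a Rails 4-style controller; the ApiToken model and X-Auth-Token header are illustrative names, not a specific library's API:

class Api::BaseController < ApplicationController
  before_action :authenticate_token!

  private

  # Look up the user behind the token sent with every API call.
  def authenticate_token!
    token = request.headers["X-Auth-Token"]
    @current_user = ApiToken.where(token: token).first.try(:user)
    render json: { error: "unauthorized" }, status: :unauthorized unless @current_user
  end
end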
As far as whether to have separate APIs vs. one namespace, I don't have a very strong opinion. It really depends on how you structure your app. If you know in advance what resources will be public, then it's probably easy to create a namespaced API.
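For instance, the public pieces could live in their own namespace in routes.rb while the internal endpoints used by the Angular app stay separate (MyApp and the resource names are illustrative):

# config/routes.rb
MyApp::Application.routes.draw do
  namespace :api do
    namespace :v1 do
      # Publicly exposed, read-only
      resources :articles, only: [:index, :show]
    end
  end

  namespace :internal do
    # Full CRUD, used by the single-page app
    resources :articles
  end
end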
Angular can easily adapt to multiple API calls. You can create services for your public vs. private HTTP calls (or whatever way you decide to call the API).
Hope this was somewhat helpful! Sorry it's not Rails-y! (But Node.js/Express is awesome!)

Rails or Sinatra app - how to maintain background threads

I'd like to maintain a data structure on a Sinatra or Rails server (doesn't matter which) that is accessible to all HTTP requests that arrive at it (i.e. it supports concurrent modification). I don't want to rely on a database or similar because that doesn't allow me to code callbacks for the modification of this data structure and put concurrent blocks on the HTTP response threads.
Since HTTP is stateless there's apparently no easy way to achieve this.
How can I make a process maintain data in the background for all the requests that arrive at an HTTP server, without relying on external programs and middleware? Does it require me to modify Rails or Sinatra to achieve this? Is there any alternative, even outside Ruby?
When using Sinatra, you can just start a thread at the end of your application:
http://blog.markwatson.com/2011/11/ruby-sinatra-web-apps-with-background.html
Using this, you could maintain a worker that keeps doing things even as HTTP requests come and go.
Sinatra also has the methods before and after which run before and after each request, respectively.
So if you wanted to add data to a data structure before each request is handled you could:
before do
  puts request
end
Using these tools, you can easily achieve what you want to do.
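Putting both pieces together, here is a rough sketch of a classic-style Sinatra app with an in-process data structure shared between a background worker and the request handlers (the stats hash, mutex and routes are illustrative):

require 'sinatra'
require 'json'
require 'thread'

# Shared in-process state, guarded by a mutex so the request threads and
# the background worker can modify it safely.
STATS = { requests: 0, last_tick: nil }
LOCK  = Mutex.new

# Background worker: keeps running for the lifetime of the process.
Thread.new do
  loop do
    LOCK.synchronize { STATS[:last_tick] = Time.now }
    sleep 5
  end
end

# Runs before every request.
before do
  LOCK.synchronize { STATS[:requests] += 1 }
end

get '/stats' do
  content_type :json
  LOCK.synchronize { STATS.dup }.to_json
end

Note that this only holds within a single threaded process (e.g. Thin or Puma); with multi-process servers like Unicorn or Passenger each worker gets its own copy of the structure, which is usually why people end up reaching for an external store after all.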

Sending data from an analytics engine to a Rails server

I have an analytics engine which periodically packages a bunch of stats in JSON format. I want to send these packages to a Rails server. Upon a package arriving, the Rails server should examine it, generate a model instance out of it (for historical purposes), and then display the contents to the user. I've thought of two approaches.
1) Have a little app residing on the same host as the Rails server, listening for these packages (using ZeroMQ). Upon receiving a package, the app would invoke a Rails action through cURL, passing on the package as a parameter. My concern with this approach is that my Rails server checks that only signed-in users can access actions which affect models. By creating an action accessible to this listening app (and therefore other entities), am I exposing myself to a major security flaw?
2) The second approach is to simply have the listening app dump the package into a special database table. The Rails server will then periodically check this table for new packages. Upon detecting one or more, it will process them and remove them from the table.
This is the first time I'm doing something like this, so if you have techniques or experiences you can share for better solutions, I'd love to learn.
Thank you.
You can restrict access to a certain call by limiting the IP address that is allowed for the request in routes.rb:
post "/analytics" => "analytics#create", :constraints => {:ip => /127.0.0.1/}
If you want the users to see updates, you can use polling to refresh the page every minute or so.
1) Yes, you are exposing a major security flaw unless:
Your ZeroMQ app provides the needed data to do authentication and authorization on the Rails side
Your Rails app is configured to listen only on the 127.0.0.1 interface and is thus not accessible from the outside
Like Benjamin suggests, you restrict specific routes to certain IPs
2) This approach looks a lot like what delayed_job does. You might want to take a look there: https://github.com/collectiveidea/delayed_job and use a rake task to add a new job.
In short, your listening app will call a rake task that adds a custom delayed_job job when receiving a packet, then let delayed_job handle the load. You benefit from delayed_job goodness (different queues, scaling, ...). The hard part is getting the result back.
One idea would be to associate a unique ID with each job, and have the delayed_job task output the result in a data store which associates the job ID with the result. This data store can be a simple relational table
+----+--------+
| ID | Result |
+----+--------+
or a memcached/Redis/whatever instance. You just need to poll that data store looking for the result associated with the job ID, and delete everything when you are done displaying it to the user (see the sketch after this answer).
3) Why don't you directly POST the data to the Rails server?
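A rough sketch of the job-ID-plus-result-store idea from point 2, assuming the delayed_job gem; the model, job class and Stats.crunch names are illustrative:

# A plain table mapping a job UID to its serialized result
class AnalyticsResult < ActiveRecord::Base
  # columns: job_uid (string), result (text)
end

# Custom delayed_job job: any object with a #perform method can be enqueued
class ProcessPacketJob < Struct.new(:job_uid, :payload)
  def perform
    result = Stats.crunch(payload)  # your actual processing, illustrative
    AnalyticsResult.create!(job_uid: job_uid, result: result.to_json)
  end
end

# Called by the listening app (for example from a rake task), once per packet;
# packet_json here stands for the raw JSON payload that was received:
job_uid = SecureRandom.uuid
Delayed::Job.enqueue(ProcessPacketJob.new(job_uid, packet_json))

# Later, poll the store for the result:
# AnalyticsResult.where(job_uid: job_uid).first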
Following Benjamin's lead, I implemented a filter for this particular action.
def verify_ip
  @ips = ['127.0.0.1']
  unless @ips.include?(request.remote_ip)
    redirect_to root_url
  end
end
The listening app on the localhost now invokes the action, passing the JSON package received from the analytics engine as a param. Thank you.

Rails 3.1 - Firing a specific event with EventMachine

I would like to use the plugin em-eventsource (https://github.com/AF83/em-eventsource) for server-sent events in a Rails 3.1 project. My problem is that it only explains how to listen for events and receive messages, but not how to fire a specific event and send the message. I would like to produce the event in an ActiveRecord observer. Am I right in thinking that I have to defer an operation with EventMachine to produce this event, or how else can I solve this?
And yes, it has to be Ruby on Rails. If I don't get this to work with EventMachine, I will try to bypass the whole Ruby part with Node.js.
Actually, I worked on this library a little with the maintainer. I think you mixed up the client part with the server one. em-eventsource is a client library which you can use to consume a Server-Sent Events API; it's not meant to fire SSE.
On the server side, it doesn't really matter whether you are using Rails or any other stack (Node.js, PHP…) as long as the server you are running on supports streaming. The default web server shipped with Rails (WEBrick) does not, but there are many others which do: Thin, Puma, Goliath…
In order to fire SSE in Rails, you would have to use a streaming-capable server among those cited, and abide by the SSE specification. It mostly comes down to, first, responding with the proper Content-Type header ("text/event-stream") so that the client (browser) knows it should hang on, and then start streaming on the socket. That latter part is the one not easily possible as of today in Rails 3 (yet not impossible!); Rails 4 now supports streaming in an easy way, with a clean and simple internal API, so it's definitely coming.
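To give an idea of where Rails 4 is headed, an SSE endpoint there looks roughly like this: a minimal sketch assuming Rails 4's ActionController::Live and a streaming-capable server such as Puma or Thin; the controller, event name and payload are illustrative:

class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers["Content-Type"] = "text/event-stream"
    10.times do |i|
      payload = { tick: i }.to_json
      # One SSE message: an optional "event:" line, a "data:" line, a blank line.
      response.stream.write("event: stats\n")
      response.stream.write("data: #{payload}\n\n")
      sleep 1
    end
  ensure
    response.stream.close
  end
end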
In the meantime, you'd either:
mess with Rack's API in Rails (using EventMachine I guess, there are some examples in the wild)
or be smart and make use of the streaming feature provided by Sinatra, built on top of Rack (see https://gist.github.com/1476463 for an example of a Sinatra app which can be mounted in a Rails one!)
or you could use an external service such as Pusher
or leverage an entirely different stack…
A good overview: http://blog.phusion.nl/2012/08/03/why-rails-4-live-streaming-is-a-big-deal/
Maybe I'm wrong, but IIRC Rails can't support long polling. Rails blocks the whole server (or a thread, if you have more than one running inside the server) for each request and can't reuse it until the whole response has been sent. That's why you should set up a reverse proxy (like nginx) in front of a Rails application if you suspect there could be many concurrent connections - to buffer slow client requests and hand them to Rails only when the whole request has been received. It's just how Rack works; there's probably not much you can do about this.

Rails + SSL: Per controller or application-wide?

I could use some wisdom from any developers who have worked with Rails and SSL. I have a fairly simple app and I'm in the process of implementing payment processing. Obviously payment processing calls for SSL, so I'm setting that up now.
My intention when I started working on this today was to find the simplest / cleanest way to enforce SSL on specific controller actions - namely anything having to do with payment. I figured there was no reason to run the rest of my site on SSL.
I found the ssl_requirement gem which seems to take care of setting SSL per-controller-action without much difficulty, so that's good. I also found this question which seems to indicate that handling SSL with a gem is now out-of-style.
I also found several answers / comments etc. suggesting that a site should just use Rack middleware like Rack-SSL to force the entire site to SSL mode.
So now I'm kind of confused, and not sure what I should do. Could anyone with experience working with Rails 3 and SSL help me understand:
Whether I should force the whole site to SSL, or only per certain actions.
What gotchas to look out for using SSL in Rails (I've never done it before).
If per-controller is the way to go, whether it makes sense to use the ssl-requirement gem or whether I should just use the new routing and link helper options...
I'd very much appreciate your insight, this has become a paralyzing decision for me. Thanks!
I've found myself "paralyzed" by this decision in the past, and here's what I think about each time.
First, keep in mind that some browsers will throw pop-up warnings if you keep switching out of and into SSL, or if you serve some content (the page) with SSL and other content (images, css) without. Obviously that's not a good experience for users.
The only possible downside to requiring SSL everywhere is performance. But unless you're expecting 1000+ users/day who will be doing lots of things that don't require SSL, this is negligible.
SSL is handled at the Apache/Nginx/whatever level. So if you decide to put your entire app behind SSL, it makes most sense to deal with it at the web server level (redirect http://yoursite.com to https://yoursite.com).
And if, for performance reasons, you decide not to put everything behind SSL, then it could still make sense to handle SSL redirects at the web server level. Allowing your user through your web server, then sending them through half the Rails stack, just to boot them back out to start over again, is very wasteful.
Of course there's something to be said for simplicity and domains of knowledge, which would suggest handling redirects in your Rails app or middleware, since it "knows" what's safe and unsafe.
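If you do keep it in Rails, the built-in options (available since Rails 3.1) look roughly like this; MyApp and PaymentsController are illustrative names:

# Force SSL for the whole application, e.g. in config/environments/production.rb:
MyApp::Application.configure do
  config.force_ssl = true
end

# Or only for the sensitive controllers/actions:
class PaymentsController < ApplicationController
  force_ssl only: [:new, :create]
end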
But those are things you'll have to weigh yourself. It depends on whether raw performance or simplicity of development/maintenance is more important.
I usually end up with a virtual host for http://mysite.com which redirects everything (or sometimes only certain URIs) to https://mysite.com/$1. Hope that's helpful.
