I am considering using New Relic to profile the performance of a Rails application. I know how useful New Relic is, but I am worried about security. Concretely, I want to know what data New Relic collects and sends to its servers with the default settings.
If you know the answer, or of any web page that covers it, please let me know.
I already knew about the page below, but it is too long, and I would like more concrete information for a Rails application.
Privacy Policy
New Relic has a good deal of documentation on security considerations with our Ruby agent:
https://newrelic.com/docs/subscriptions/security
One of the best ways to find out every single thing the agent sends to the data collection server is the audit log, available in Ruby agent version 3.5.5 and later: https://newrelic.com/docs/ruby/audit-log
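For example, turning the audit log on is a small change in the agent's configuration file. A minimal sketch, assuming the standard newrelic.yml generated by the newrelic_rpm gem (the path setting is optional):

    # config/newrelic.yml -- write every payload the agent sends to New Relic's
    # collector into a local file you can inspect
    common: &default_settings
      license_key: '<your license key>'
      audit_log:
        enabled: true
        # path: log/newrelic_audit.log  # optional; defaults to the agent's log directory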
I've inherited a Rails 3.2 production environment which is 'humming' away nicely.
The client now wants another major piece of work done, but I want to do it in Rails 5. The web address would be the same for both the old site and the new project. The new project would add functionality that is accessed via the old site.
Does anyone know of a way of keeping the old site running whilst I develop and deliver the new work in Rails 5? Eventually, if this all works, I get the opportunity to migrate the old site to Rails 5. For the moment, however, I need to serve up both Rails 3.2 and Rails 5 from the same site.
It's possible to do what you describe with a reverse proxy, e.g. nginx, configured to route requests to different web servers based on the path on the same host. This answer has some details on how to do that. We would need to know how your website is hosted in order to give more specific guidance.
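For illustration, a minimal nginx sketch of that idea, assuming the existing Rails 3.2 app listens on port 3000, the new Rails 5 app on port 3001, and the new functionality lives under /new (the hostname, ports, and path are all placeholders for your actual setup):

    server {
      listen 80;
      server_name example.com;

      # Requests under /new go to the new Rails 5 app
      location /new/ {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }

      # Everything else continues to hit the existing Rails 3.2 app
      location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }

One detail to keep in mind with a setup like this: the Rails 5 app also needs to know it is served under /new (e.g. via config.relative_url_root), or the proxy has to strip the prefix before forwarding.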
However, there are other concerns that come up when you start separating your apps which you may not have considered. For example, if your website allows users to log in, do you want them to still be logged in when they visit the new site? To do so transparently will require sharing the session cookie, which this post describes a bit (you'll need to use the same secret key for both apps, or use a remote session store like Memcached). I'm not sure if it'll work properly when shared between Rails 3.2 and 5, though.
As a final note, breaking up your monolithic app into a distributed system is never a decision to take lightly. It would likely end up being less work, and less overall architectural overhead, to simply invest the time in upgrading from 3.2 -> 4.0 -> 4.2 -> 5.0.
Personally, I wouldn't touch that old app and its server, especially if the client is happy. Deploying the new app to a new server or a container service like Heroku is something you should consider.
Recently I've been looking for a solution for implementing real-time updating web pages, for example a Twitter-like news feed or real-time chat. I've discovered some options, such as the Pusher service, Faye, and quite a few Ruby gems like private_pub or sync.
The problem is that these solutions don't seem entirely right to follow. Pusher is rather expensive, and I would prefer not to rely on an external service in my project. Faye seems insecure, and it is quite hard to add security on top of it. private_pub does the right thing, but its last commit on GitHub was in 2013, so it is quite outdated.
All in all, the approaches I have found do not seem like professional-grade solutions for a Rails startup. This brings me to the question of whether I should switch completely to Node.js or other technologies, or whether I can integrate a Node.js app inside a Rails one.
To sum up, is there such a solution for the Rails framework, or is a switch to other technologies inevitable?
It may not help you right now, but at RailsConf last month DHH announced that Rails 5 will add support for websockets via a new library called ActionCable.
https://www.youtube.com/watch?v=oMlX9i9Icno
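To give a rough idea of what that looks like, here is a minimal sketch of an ActionCable channel; the channel and stream names are illustrative, and the details may differ from what was announced:

    # app/channels/feed_channel.rb -- clients subscribing to this channel
    # receive anything broadcast to the "feed" stream
    class FeedChannel < ApplicationCable::Channel
      def subscribed
        stream_from "feed"
      end
    end

    # Anywhere in the app, push an update to every subscriber:
    ActionCable.server.broadcast("feed", body: "new post")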
MessageBus might be a good fit. It's currently used in Discourse to implement live updates.
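A rough sketch of how the message_bus gem is used, with the channel name and payload invented for illustration (the JavaScript client side is shown as a comment):

    # Server side (Rails): publish an update to a channel
    MessageBus.publish("/feed", post_id: 42)

    # Client side (JavaScript served by the same app):
    #   MessageBus.subscribe("/feed", function (data) { /* update the page */ });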
I'm also not sure what your security concerns about Faye are exactly. You should have no issues if everything is operating over HTTPS with proper CORS settings.
As for a mixed Node/Rails solution: on each update in the Rails app, you could push a small payload (e.g. the post and the list of users to be notified) to a Redis instance. A Node app subscribed to Redis could then notify clients to make a request back to the Rails server for the latest updates.
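A sketch of the Rails side of that idea, assuming the redis gem is in the Gemfile and a Post model with a follower_ids method (both of which are assumptions for illustration):

    # After a post is created, push a small JSON payload onto a Redis channel
    # that the Node process is subscribed to.
    class Post < ActiveRecord::Base
      after_commit :publish_update, on: :create

      private

      def publish_update
        redis = Redis.new(url: ENV['REDIS_URL'])
        redis.publish('updates', { post_id: id, notify: follower_ids }.to_json)
      end
    end

The Node side would subscribe to the same channel and forward "something changed" pings to connected browsers, which then fetch the actual data over a normal Rails request.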
I am looking for appropriate resources on "how to replicate the session (user session in a stateful app) and cached objects (retrieved from an underlying database through transactional operations) of an application server". The app server would preferably be Rails, or any other popular one that fully supports an MVC framework (using the ActiveRecord or DataMapper design patterns).
It would also be helpful to know a similar thing about Memcached replication internals, if it supports this kind of replication.
If anyone can further suggest how to integrate a NoSQL key-value store to keep the session and cached objects generated by an app server like Rails, that would be another appreciated answer.
My goal is to find a suitable way to replicate an app server instance for performance (either for local or geo-distributed users) and high availability. Any pointers to current industry practice and available solutions would be much appreciated in this regard.
Thanks a lot in advance.
I'm trying to create a Ruby on Rails e-commerce application where potential customers will be able to place an order and the store owner will be able to receive the order in real time.
The finalized order will be recorded in the database (currently SQLite), and the store owner will have a browser window open where new orders appear just after they are finalized.
(Application info: I'm using the Hobo Rails framework, and planning to host the app on Heroku.)
I'm now considering the best technology to implement this, as the application is expected to have a lot of users sending in a lot of orders:
1) Each browser window refreshes the page every X minutes, polling the server continuously for new records (new orders). Of course, this puts a heavy load on the server.
2) As above, but poll the server with some kind of AJAX framework.
3) Use some kind of server-push technology, like 'Comet' asynchronous messaging. I found Juggernaut; the only problem is that it uses Flash and custom ports, which could be an issue since my app should be accessible from behind corporate firewalls and NAT.
4) I'm also looking at the Node.js framework, which seems to be efficient for this kind of asynchronous messaging, though it is not supported on Heroku.
Which is the most efficient way to implement this kind of functionality? Is there perhaps another method that I have not thought of?
Thank you for your time and help!
Node.js would probably be a nice fit - it's fast, loves realtime and has great Comet support. The only downside is that you are introducing another technology into your solution. It's pretty fun to program in, though, and a lot of the libraries have been inspired by Rails and Sinatra.
I know Heroku has been running a Node.js beta for a while, and people were using it as part of the recent Node Knockout competition. See this blog post. If that's not an option, you could definitely host it elsewhere. If you host it at Heroku, you might be able to proxy requests. Otherwise, you could happily run it off a subdomain so you can share cookies.
Also check out Socket.IO. It does a great job of choosing the best way to do Comet based on the browser's capabilities.
To share data between Node and Rails, you could share cookies and then store the session data in your database where both applications can get to it. A more involved architecture might involve using Redis to publish messages between them. Or you might be able to get away with passing everything you need in the HTTP requests.
In HTTP, requests can only come from the client. Thus the best options are what you already mentioned (polling and HTTP streaming).
Polling is the easier option to implement; it will use quite a bit of bandwidth, though. That's why you should keep requests and responses as small as possible, so you should definitely use XHR (Ajax) for this.
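For example, the server side of such a poll can be a tiny endpoint that only returns orders the store owner's page hasn't seen yet. A sketch; the action name, parameter name, and JSON fields are all assumptions:

    # app/controllers/orders_controller.rb
    class OrdersController < ApplicationController
      # GET /orders/poll?after_id=123 -- return only orders newer than the
      # last id the browser already has, keeping each response small
      def poll
        orders = Order.where('id > ?', params[:after_id].to_i).order(:id)
        render json: orders.as_json(only: [:id, :total, :created_at])
      end
    end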
Your other option is HTTP streaming (Comet); it will require more work to set up, but you might find it worth the effort. You can give Realtime on Rails a shot. For more information and tips on how to reduce bandwidth usage, see:
http://ajaxpatterns.org/Periodic_Refresh
http://ajaxpatterns.org/HTTP_Streaming
Actually, if you have your store owner run Chrome (other browsers will follow soon), you can use WebSockets (just for the store owner's notifications, though), which allow you to keep a constant connection open and send data to the browser without the browser requesting anything.
There are a few WebSocket libraries for Node.js, but I believe you can do it easily yourself using just a regular TCP connection.
For a recent project a friend of mine and I have been working on, we want to build a RESTful web API for client application usage. I believe that I have a fairly good grasp of the top-down picture after reading this, but am fairly clueless when it comes to security issues.
I know of OAuth and plan on implementing it, but are there any other concerns we should address first thing? I would hate to spend a large amount of time developing these features to find out later that we've left the site open for malicious attack.
Thanks.
If you are looking for general information on web security, check out the OWASP Ruby on Rails Security Guide V.2. (There's also a first edition, which I read back in the day.) Check out OWASP's website for more security-related information.
A few more resources for you:
Great walkthrough of common web attacks and how to deal with them in Rails
https://www.honeybadger.io/blog/guides/2013/03/09/ruby-security-tutorial-and-rails-security-guide
Rails insecure defaults
http://blog.codeclimate.com/blog/2013/03/27/rails-insecure-defaults
All about SQL injection, going beyond the simple examples (a minimal illustration follows after this list)
http://rails-sqli.org
New security issues are listed at
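As a minimal illustration of the SQL injection point above: the difference is whether user input is interpolated into the SQL string or passed as a bind parameter (the model and parameter names here are hypothetical):

    # Vulnerable: params[:name] is interpolated straight into the SQL string
    User.where("name = '#{params[:name]}'")

    # Safe: ActiveRecord quotes the value for you
    User.where("name = ?", params[:name])
    # or, equivalently
    User.where(name: params[:name])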