We are running a Rails project behind HAProxy, which sends a keep-alive request to the application every second. This makes the log files very noisy, a pain to dig through, and unnecessarily large.
My first thought was to change the logging level for that action to debug, but someone else proposed changing the logging level in an around_filter. I am not crazy about that idea, but it could just be how I implemented it. I am open to different solutions; the general requirement is that I can quiet those actions while still being able to turn their logging back on if I ever need to see them.
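For context, the proposed around_filter approach would look roughly like this sketch (not my actual code; QUIET_ACTIONS and the action name are placeholders, and it assumes your Rails version's logger still supports ActiveSupport's silence, which was removed and later re-added in some versions):

class ApplicationController < ActionController::Base
  # Hypothetical list of noisy actions to silence
  QUIET_ACTIONS = %w[health_check].freeze

  around_filter :quiet_noisy_actions

  private

  # Raise the threshold to ERROR for the duration of the request so the
  # usual request/response lines are suppressed for these actions only.
  def quiet_noisy_actions
    if QUIET_ACTIONS.include?(action_name)
      Rails.logger.silence(Logger::ERROR) { yield }
    else
      yield
    end
  end
end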
Another solution is to insert some Rack middleware which handles the keep-alive check before it gets to the Rails ApplicationController lifecycle.
Step 1: make some middleware which responds to the keep-alive check. In my example the keep-alive request is a GET /health-check, so it would look like:
class HealthCheckMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    # Answer the keep-alive check here so it never reaches the Rails stack (or its logger)
    if env['PATH_INFO'] == '/health-check'
      return [200, {}, ['healthy']]
    end
    @app.call(env)
  end
end
Of course, adjust this health check as necessary; you may need to check other request/CGI variables...
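For instance, if the same path also serves real traffic, you could match on the request method as well. A small variation (the extra check is purely illustrative):

def call(env)
  # Only intercept GET requests to the health-check path
  if env['REQUEST_METHOD'] == 'GET' && env['PATH_INFO'] == '/health-check'
    return [200, { 'Content-Type' => 'text/plain' }, ['healthy']]
  end
  @app.call(env)
end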
Step 2: make sure you insert this middleware before Rails::Rack::Logger:
config.middleware.insert_before Rails::Rack::Logger, "HealthCheckMiddleware"
Now your middleware handles the health check and those requests bypass your logs entirely.
Related
I'm trying to add some middleware to a Rails project I'm working on, and when I try to do so, it seems to cause an endless loop.
Specifically, I have the following middleware shell file:
# app/middleware/log_data.rb
class LogData
  def initialize(app)
    @app = app
  end

  def call(env)
    # To-do: Write code here.
  end
end
I then created a new middleware directory under the app directory and put the file in that directory.
After that, I added the following towards the bottom of config/application.rb:
config.middleware.use("LogData")
After restarting the Puma server running on Vagrant with sudo service puma restart, if I run rake middleware, I can see the middleware correctly show up in the list towards the bottom.
However, when I try to refresh the website, it fails with what looks like an endless loop, and Chrome displays an error.
If I comment out the config.middleware.use("LogData") line in config/application.rb, then the middleware disappears from the rake middleware command list, and the website stops crashing and loads properly.
What am I doing wrong? What am I missing? I imagine it's something simple, but I'm not sure why a simple (and empty) shell middleware file would cause the whole site to crash. Thank you.
I should note that I'm using Rails 4.2.11, which I know is old, but upgrading right now is not an option.
Your middleware does nothing and returns nil (which the browser sees as an incomplete server response), so the request effectively ends there. It needs to either return a valid Rack response (an array of [status, headers, body]) or call the next app with env so the request can continue through the middleware chain.
# app/middleware/log_data.rb
class LogData
  def initialize(app)
    @app = app
  end

  def call(env)
    # To-do: Write code here.

    # This must be the last expression in the method so the request
    # is handed on down the middleware chain.
    @app.call(env)
  end
end
Here is more info about middlewares.
Edit: I re-added the options method in a pull request to Rails, which should now be live. The answer below should no longer be necessary. Call process(:options, path, **args) in order to perform the OPTIONS request.
See commit 1f979184efc27e73f42c5d86c7f19437c6719612 for more information if required.
I've read through the other answers and none of them seemed to work in Rails 5. It's surprising that Rails doesn't just ship with an options method, but here we are. Of course, if you can use xdomain you probably should (edit: I no longer hold this view; there are advantages to CORS), because it's faster (no preflight check doubling latency!), easier (no need for silly headers / HTTP methods!), and more widely supported (it works basically everywhere!). But sometimes you just need to support CORS, and something about the CORS gem makes it not work for you.
At the top of your config/routes.rb file place the following:
match "/example/",
controller: "example_controller",
action: "options_request",
via: [:options]
And in your controller write:
def options_request
# Your code goes here.
end
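If the goal is a CORS preflight response, the body of that action might look roughly like this (the allowed origin, methods and headers are illustrative; set them to whatever your client actually needs):

def options_request
  headers['Access-Control-Allow-Origin']  = '*'
  headers['Access-Control-Allow-Methods'] = 'GET, POST, OPTIONS'
  headers['Access-Control-Allow-Headers'] = 'Content-Type, Authorization'
  head :ok
end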
If you are interested in writing an integration test, note that there is some misinformation around the process method, which is not actually a public method. In order to support OPTIONS requests from your integration tests, create an initializer (mine is at config/initializers/integration_test_overrides.rb because I override a number of things) and add the following code:
class ActionDispatch::Integration::Session
  def http_options_request(path)
    process(:options, path)
  end
end
Then you can call http_options_request from your integration tests.
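A hypothetical test using it might look like this (the test class name is made up; unknown methods on the test are delegated to the integration session, which is how the helper above becomes callable):

class ExampleOptionsTest < ActionDispatch::IntegrationTest
  test "OPTIONS /example/ succeeds" do
    # http_options_request is the helper defined in the initializer above
    http_options_request "/example/"
    assert_response :success
  end
end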
I have a typical Rails REST API written for HTTP consumers. However, it turns out they need a WebSocket API because of the POS machine integration.
The typical API looks like this:
class Api::Pos::V1::TransactionsController < ApplicationController
  before_action :authenticate

  def index
    @transactions = @current_business.business_account.business_deposits.last(5)
    render json: {
      status: 200,
      number: @transactions.count,
      transactions: @transactions.as_json(only: [:created_at, :amount, :status, :client_card_number, :client_phone_number])
    }
  end

  private

  def request_params
    params.permit(:account_number, :api_key)
  end

  def authenticate
    render status: 401, json: {
      status: 401,
      error: "Authentication Failed."
    } unless current_business
  end

  def current_business
    account_number = request_params[:account_number].to_s
    api_key = request_params[:api_key].to_s
    if account_number and api_key
      account = BusinessAccount.find_by(account_number: account_number)
      if account && Business.find(account.business_id).business_api_key.token =~ /^(#{api_key})/
        @current_business = account.business
      else
        false
      end
    end
  end
end
How can I serve the same responses using WebSockets?
P.S.: I've never worked with sockets before.
Thank you.
ActionCable
I would second Dimitris's reference to ActionCable, as it's expected to become part of Rails 5 and should (hopefully) integrate with Rails quite well.
Since Dimitris suggested SSE, I should add that I would recommend against it.
SSE (Server-Sent Events) rides on a long-lived HTTP connection, and I would avoid this technology for many reasons, including SSE connection interruptions and extensibility (WebSockets let you add features that SSE won't support).
I am almost tempted to go into a rant about SSE implementation performance issues, but... even though WebSocket implementations should be more performant, many of them suffer from similar issues, and the performance increase often comes down only to the WebSocket connection's longer lifetime...
Plezi
Plezi* is a real-time web application framework for Ruby. You can either use it on its own (which is not relevant for you) or together with Rails.
With only minimal changes to your code, you should be able to use WebSockets to return results from your RESTful API. Plezi's Getting Started Guide has a section about unifying the backend's RESTful and WebSocket APIs. Implementing it in Rails should be similar.
Here's a bit of Demo code. You can put it in a file called plezi.rb and place it in your application's config/initializers folder...
Just make sure you're not using any specific server (thin, puma, etc.), allowing Plezi to override the server and use Iodine, and remember to add Plezi to your Gemfile.
class WebsocketDemo
  # authenticate
  def on_open
    return close unless current_business
  end

  def on_message data
    data = JSON.parse(data) rescue nil
    return close unless data

    case data['msg']
    when /\Aget_transactions\z/i
      # call the RESTful API method here, if it's accessible. OR:
      transactions = @current_business.business_account.business_deposits.last(5)
      write({
        status: 200,
        number: transactions.count,
        # the next line has what I think is a design flaw, but I left it in
        transactions: transactions.as_json(only: [:created_at, :amount, :status, :client_card_number, :client_phone_number])
        # # Consider, instead, to avoid nesting JSON streams:
        # transactions: transactions.select(:created_at, :amount, :status, :client_card_number, :client_phone_number)
      }.to_json)
    end
  end

  # don't disclose inner methods to the router
  protected

  # better to make the original method a class method, letting you reuse it.
  def current_business
    account_number = params[:account_number].to_s
    api_key = params[:api_key].to_s
    if account_number && api_key
      account = BusinessAccount.find_by(account_number: account_number)
      if account && Business.find(account.business_id).business_api_key.token =~ /^(#{api_key})/
        return (@current_business = account.business)
      end
      false
    end
  end
end
Plezi.route '/(:api_key)/(:account_number)', WebsocketDemo
Now we have a route that looks something like: wss://my.server.com/app_key/account_number
This route can be used to send and receive data in JSON format.
To get the transaction list, the client side application can send:
JSON.stringify({msg: "get_transactions"})
This will result in the data being sent to the client's websocket.onmessage callback with the last five transactions.
Of course, this is just a short demo, but I think it's a reasonable proof of concept.
* I should point out that I'm biased, as I'm Plezi's author.
P.S.
I would consider moving the authentication into a websocket "authenticate" message, allowing the application key to be sent in a less conspicuous manner.
EDIT
These are answers to the questions in the comments.
Capistrano
I don't use Capistrano, so I'm not sure... but I think it would work if you add the following line to your Capistrano tasks:
Iodine.protocol = false
This will prevent the server from auto-starting, so your Capistrano tasks flow without interruption.
For example, at the beginning of the config/deploy.rb you can add the line:
Iodine.protocol = false
# then the rest of the file, i.e.:
set :deploy_to, '/var/www/my_app_name'
#...
You should also edit your Rakefile and add the same line at its beginning, so the Rakefile includes:
Iodine.protocol = false
Let me know how this works. Like I said, I don't use Capistrano and I haven't tested it out.
Keeping Passenger using a second app
The Plezi documentation states that:
If you really feel attached to your thin, unicorn, puma or passenger server, you can still integrate Plezi with your existing application, but they won't be able to share the same process and you will need to utilize the Placebo API (a guide is coming soon).
But the guide isn't written yet...
There's some information in the GitHub Readme, but it will be removed after the guide is written.
Basically, you include the Plezi application with the Redis URL inside your Rails application (remember to copy over all the gems used in its Gemfile), then you add this line:
Plezi.start_placebo
That should be it.
Plezi will ignore the Plezi.start_placebo call if there is no other server defined, so you can put it in a file shared with the Rails application as long as Plezi's Gemfile doesn't pull in a different server.
You can include some or all of the Rails application code inside the Plezi application. As long as Plezi (Iodine, actually) is the only server in the Plezi Gemfile, it should work.
The applications will synchronize using Redis and you can use your Plezi code to broadcast websocket events inside your Rails application.
You may want to have a look at https://github.com/rails/actioncable, which is the Rails way to deal with WebSockets, but it is currently in alpha.
Judging from your code snippet, the client seems to only consume data from your backend. I'm skeptical whether you really need WebSockets. If the client won't push data back to the server, Server-Sent Events seem more appropriate.
See relevant walk-through and documentation.
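For what it's worth, Rails ships with ActionController::Live, so a streaming endpoint could be sketched roughly like this (the controller name is made up, authentication is assumed to have run already, and a real implementation would loop or subscribe to updates and rescue client disconnects):

class Api::Pos::V1::TransactionStreamsController < ApplicationController
  include ActionController::Live

  # assumes the same authenticate before_action from the question sets @current_business
  before_action :authenticate

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    sse = ActionController::Live::SSE.new(response.stream, event: 'transactions')
    # Push the latest transactions once; real code would keep streaming.
    sse.write(@current_business.business_account.business_deposits.last(5).as_json)
  ensure
    sse.close if sse
  end
end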
I have a custom Rack middleware used by my Rails 4 application. The middleware itself is just there to default the Accept and Content-Type headers to application/json if the client did not provide valid values (I'm working on an API). So before each request it changes those headers, and after each request it adds a custom X-Something-Media-Type header with custom media type information.
I would like to switch to Puma, so I'm a bit worried about the thread-safety of such a middleware. I don't play with instance variables, except once for the usual @app.call that we encounter in every middleware, though even there I reproduced something I've read in the RailsCasts comments:
def initialize(app)
  @app = app
end

def call(env)
  dup._call(env)
end

def _call(env)
  ...
  status, headers, response = @app.call(env)
  ...
end
Is the dup._call really useful for handling thread-safety problems?
Apart from that @app instance variable, I only work with the current request built from the current env variable:
request = Rack::Request.new(env)
And I call env.update to update headers and form information.
Is it dangerous enough to expect issues with that middleware when I switch from WEBrick to a concurrent web server such as Puma?
If so, do you know a handy way to write tests that isolate the portions of my middleware which are not thread-safe?
Thanks.
Yes, it's necessary to dup the middleware to be thread-safe. That way, any instance variables you set from _call will be set on the duped instance, not the original. You'll notice that web frameworks built around Rack work this way:
Pakyow
Sinatra
One way to unit test this is to assert that _call is called on a duped instance rather than the original.
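A rough RSpec sketch of that assertion (JsonDefaultsMiddleware is a stand-in name for your middleware class):

require 'rack/mock'

describe JsonDefaultsMiddleware do
  it "handles each request on a dup, not the original instance" do
    app        = ->(env) { [200, {}, ['ok']] }
    middleware = JsonDefaultsMiddleware.new(app)
    duped      = middleware.dup

    # Force dup to return an instance we can observe
    allow(middleware).to receive(:dup).and_return(duped)
    expect(duped).to receive(:_call).and_call_original

    middleware.call(Rack::MockRequest.env_for('/widgets'))
  end
end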
I'm looking for a quick and easy way to generate a unique per-request ID in Rails that I can then use for logging across a particular request.
Any solution should ideally not rely too heavily on the default logging code, as I'm running the application under both JRuby and Ruby.
Backupify produced a great article about this: http://blog.backupify.com/2012/06/27/contextual-logging-with-log4r-and-graylog/
We wanted the request_id (which is generated by Rails and available at request.uuid) to be present on all messages throughout the request. In order to get it into the Rack logging (the list of parameters and the timing, among others), we added it to the MDC in a Rack middleware.
application.rb:
config.middleware.insert_after "ActionDispatch::RequestId", "RequestIdContext"
app/controllers/request_id_context.rb (I had trouble getting it picked up from lib for some reason):
class RequestIdContext
  def initialize(app)
    @app = app
  end

  def call(env)
    # Reset the MDC for this worker, then stamp it with the current pid and request id
    Log4r::MDC.get_context.keys.each { |k| Log4r::MDC.remove(k) }
    Log4r::MDC.put("pid", Process.pid)
    Log4r::MDC.put("request_id", env["action_dispatch.request_id"])
    @app.call(env)
  end
end
If you push jobs onto Delayed Job or Resque, put the request_id into the queue, and in your worker pull it off and set it into the MDC. Then you can trace requests the whole way through.
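A hedged sketch of what that could look like with Resque (the job class, queue name, and arguments are made up for illustration):

# Enqueueing side (e.g. in a controller): pass the request id along with the job arguments.
Resque.enqueue(AuditJob, record.id, Log4r::MDC.get("request_id"))

# Worker side: restore the id into the MDC so the worker's log lines
# can be correlated with the originating web request.
class AuditJob
  @queue = :audit

  def self.perform(record_id, request_id)
    Log4r::MDC.put("request_id", request_id)
    # ... actual job work goes here ...
  end
end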
It looks like lograge (gem) automatically puts request.uuid in your logs.
They have this pattern:
bfb1bf03-8e12-456e-80f9-85afaf246c7f
This is now a feature of Rails:
class WidgetsController < ApplicationController
  def get
    puts request.request_id
  end
end
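If you want that id prefixed on every log line rather than logging it by hand, Rails' tagged logging can do it from your application config (recent Rails accepts :request_id here; older versions used :uuid):

# config/application.rb
# Prefix every log line with the request id via ActiveSupport::TaggedLogging
config.log_tags = [:request_id]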
Maybe the NDC feature of Log4r is useful to you.