I have a Rails application, and after some time of development/debugging I realized that it would be very helpful to see the whole HTTP request in the log file (log/development.log), not just the parameters.
I also want to have a separate logfile based on user, not session.
Any ideas will be appreciated!
You can quickly see request.env in a view:
<%= request.env.inspect %>
If you want to log it to the development log instead, from your controller:
Rails.logger.info(request.env)
Here you can see a reference for the Request object.
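If you'd rather log a handful of fields than the whole env hash, a few commonly useful accessors on the Request object look like this (just a sketch; pick whatever you actually need):

# Inside any controller action; these are standard ActionDispatch::Request accessors.
Rails.logger.info(
  method:    request.method,      # "GET", "POST", ...
  url:       request.url,         # full request URL
  remote_ip: request.remote_ip,   # client IP (honours X-Forwarded-For)
  headers:   request.env.select { |k, _| k.start_with?("HTTP_") },
  body:      request.raw_post     # raw request body, if any
)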
Rails automatically sets up logging to a file in the log/ directory using Logger from the Ruby Standard Library. The logfile is named after your environment, e.g. log/development.log.
To log a message from either a controller or a model, access the Rails logger instance with the logger method:
class YourController < ActionController::Base
  def index
    logger.info request.env
  end
end
As for the per-user logfile: what are you using to authenticate the user?
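If you have something like Devise providing a current_user helper (an assumption on my part), a per-user logfile could be wired up roughly like this sketch (use before_filter instead of before_action on older Rails):

# Sketch: one logfile per authenticated user, e.g. log/user_42.log.
# current_user is assumed to come from your authentication system.
class ApplicationController < ActionController::Base
  before_action :log_request_for_user

  private

  def log_request_for_user
    return unless current_user
    # Opening a Logger per request is wasteful but keeps the sketch short;
    # cache it per user id in a real app.
    logger = Logger.new(Rails.root.join("log", "user_#{current_user.id}.log"))
    logger.info(request.env.inspect)
  end
end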
That logger.info request.env code works fine in a Rails controller, but to see a rawer version of the request, or if you're using Grape or some other mounted app, you have to intercept the request on its way through the Rack middleware chain.
Put this code in your lib directory (or at the bottom of application.rb):
require 'pp'

class Loggo
  def initialize(app)
    @app = app
  end

  def call(env)
    pp env
    @app.call(env)
  end
end
Then, alongside the other configuration in application.rb:
config.middleware.use "Loggo"
You can use rack middleware to log the requests as the middleware sees them (as parsed by the http-server and as transformed by preceding middleware). You can also configure your http-server to log full requests, if your http-server supports such a feature.
The http-server (web-server) is responsible for receiving http requests, parsing them, and transmitting data structures to the application-server (e.g., a rack application). The application-server does not see the original request, but sees what the http-server sends its way.
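If you would rather have such a middleware write to the Rails log instead of stdout, a variation on the Loggo snippet above might look like the following sketch (rewinding rack.input assumes a rewindable body, which is the common case):

# Sketch: logs the full Rack env and the request body to the Rails log.
class RequestLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    Rails.logger.info(env.inspect)

    # Read the body, then rewind so the app downstream can still read it.
    body = env['rack.input'].read
    env['rack.input'].rewind
    Rails.logger.info("BODY: #{body}") unless body.empty?

    @app.call(env)
  end
end

Register it the same way, e.g. config.middleware.use "RequestLogger" in application.rb.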
I initially used the code snippet by @AlexChaffee, but I've since switched to using mitmproxy, a specialized HTTP proxy that records the requests and responses passing through it.
This is obviously only helpful for development scenarios when you control the applications making the requests. You might be able to achieve similar results with a reverse proxy for production applications (the advantage being that you don't have to touch the Rails application itself for this), but I haven't looked into this.
I have the following system: my Rails server issues commands to a Flask server, which immediately responds with status 200. The Flask server then runs a background task with some time-consuming function. After a little while it comes up with some results and is designed to send the data back to the Rails server via HTTP (see diagram).
Each Flask data portion can affect several Rails models (User, Post, etc.). Here I'm faced with two questions:
How should I structure my controllers/actions on the Rails side in this case? Currently I'm thinking about one controller where each action corresponds to a Python 'delayed' data portion.
Is this a normal way for microservices to communicate, or can I organize it in a different, simpler way?
This sounds like pretty much your standard webhook process. Rails pokes Flask with a GET or POST request and Flask pokes back after a while.
For example, let's say we have reports, and after creating a report we need Flask to verify it:
class ReportsController < ApplicationController
  # POST /reports
  def create
    @report = Report.new(report_params)
    if @report.save
      FlaskClient.new.verify(@report) # this could be delegated to a background job
      redirect_to @report
    else
      render :new
    end
  end

  # PATCH /reports/:id/verify
  def verify
    # process the callback request from Flask
  end
end
class FlaskClient
  include HTTParty
  base_uri 'example.com/api'
  format :json

  def verify(report)
    self.class.post('/somepath', body: { id: report.id, callback_url: "/reports/#{report.id}/verify", ... })
  end
end
Of course the Rails app does not actually know when Flask will respond, or that Flask and the background service are different; it just sends and responds to HTTP requests. You definitely don't want Rails to wait around, so save what you have and let the webhook update the data later.
If you need to update the UI on the Rails side without the user refreshing manually, you can use polling or WebSockets in the form of ActionCable.
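A minimal ActionCable sketch (ReportsChannel and the "verified" status below are hypothetical names for illustration, not something your app necessarily has):

# app/channels/reports_channel.rb
class ReportsChannel < ApplicationCable::Channel
  def subscribed
    stream_from "reports_#{params[:id]}"
  end
end

# In ReportsController#verify, after updating the report,
# push the new status to any subscribed browsers:
ActionCable.server.broadcast("reports_#{@report.id}", status: "verified")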
WebMock works fine for requests made by the app. But how do I mock AJAX requests made by the browser to third-party services?
Rails 4.
Finally found 2 answers:
Approach #1: Puffing Billy
A rewriting web proxy for testing interactions between your browser and external sites.
Works like WebMock:
proxy.stub('http://www.google.com/')
.and_return(:text => "I'm not Google!")
Approach #2: Rack Middleware
Although the previous approach can work, I discovered that it does not play well with WebMock: if you mock your browser requests to external services with it, you can't also mock your app's requests.
The approach that worked for me is to run a separate Rack app and inject it into the middleware stack:
spec_helper.rb:
def test_app
  Rack::Builder.new {
    use ExternalServiceMock
    run app
  }.to_app
end

Capybara.app = test_app
ExternalServiceMock is a rack app that only responds to certain request paths.
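A sketch of what such an ExternalServiceMock could look like (the path and response below are made up for illustration):

# Tiny Rack middleware that answers known external-service paths itself
# and passes everything else down the stack to the real app.
class ExternalServiceMock
  def initialize(app)
    @app = app
  end

  def call(env)
    case env['PATH_INFO']
    when '/external/api/widgets'
      [200, { 'Content-Type' => 'application/json' }, ['{"widgets": []}']]
    else
      @app.call(env)
    end
  end
end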
For this particular app, all of the external service URIs were stored in configs, and I set them in the spec helper:
ENV['EXTERNAL_SERVICE_URI'] = 'http://localhost:3000'
This forces all external requests to be sent to the ExternalServiceMock.
So basically you only need to save the response from 3rd party services and stub the request, then you can test it! Also checkout VCR: https://github.com/vcr/vcr.
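If you go the VCR route for the app-side requests, a minimal setup looks roughly like this (the cassette name and directory are up to you):

# spec_helper.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock
end

# In a spec: record the real response once, replay it on later runs.
VCR.use_cassette('external_service') do
  # code that triggers the HTTP request
end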
Because of the JavaScript injected by New Relic, which changes on every request, the content of the page changes, forcing a new ETag to be generated every time.
I understand that the Rack::ETag middleware needs to come before the New Relic middleware, but I just can't find the New Relic middleware. The newrelic_rpm documentation says that for Rails the gem will insert its middleware automatically, yet when I run rake middleware I don't see any New Relic middleware.
I can add the middlewares myself, but is there a better way?
I work for New Relic.
The reason that New Relic's middlewares are not showing up when running rake middleware is that they are conditionally inserted into the middleware stack. These middlewares are inserted only if the agent is configured to run in the current environment. You can force New Relic's middlewares to be inserted when running rake middleware in order to inspect the middleware stack by setting NEW_RELIC_AGENT_ENABLED=true on the command line when starting the rake task.
Adding the following code to config/application.rb should ensure that the Rack::ETag middleware has a chance to calculate and inject the ETag before the browser monitoring middleware injects its dynamic content:
config.after_initialize do
  config.middleware.delete "Rack::ETag"
  config.middleware.insert_after "NewRelic::Rack::BrowserMonitoring", "Rack::ETag"
end
The reason the JavaScript code injected into responses by the NewRelic::Rack::BrowserMonitoring middleware is dynamic is that it contains timings of how long the response took to generate on the server-side, and (if applicable) how long it was queued before reaching the Rails stack. These timings will vary with each incoming request. If ETags are generated based on hashing the page content before the dynamic information is inserted, then out-of-date server-side timings will potentially be used when a response is serviced from the cache. You can read details about how this is handled by New Relic here: https://newrelic.com/docs/features/how-does-real-user-monitoring-work#cached-pages
This is also a nice overview of ordering your middleware correctly: http://verboselogging.com/2010/01/20/proper-rack-middleware-ordering
If you need more in-depth help, please open up a ticket with us by emailing support@newrelic.com.
Another option:
Set browser_monitoring.auto_instrument: false in the newrelic.yml file to disable automatic insertion of the BrowserMonitoring middleware
Add the following code to config/application.rb:
config.middleware.delete "Rack::ETag"
require 'new_relic/rack/browser_monitoring'
config.middleware.use NewRelic::Rack::BrowserMonitoring
config.middleware.use "Rack::ETag"
This will remove the ETag middleware, append the BrowserMonitoring middleware, and then append the ETag middleware again so that it runs before BrowserMonitoring injects its dynamic payload.
Are Rails controllers multithreaded?
If so, can I protect a certain piece of code (which fires only once every ten minutes) from being run from multiple threads by simply doing
require 'thread'

Thread.exclusive do
  # stuff here
end
or do I need to somehow synchronize on a monitor?
Running rake middleware on a basic rails app gives the following:
use Rack::Lock
use ActionController::Failsafe
use ActionController::Reloader
use ActiveRecord::ConnectionAdapters::ConnectionManagement
use ActiveRecord::QueryCache
use ActiveRecord::SessionStore, #<Proc:0x017fb394@(eval):8>
use ActionController::RewindableInput
use ActionController::ParamsParser
use Rack::MethodOverride
use Rack::Head
run ActionController::Dispatcher.new
The first item in the Rack stack is Rack::Lock. This puts a lock around each request, so only one request is handled at a time; a standard Rails app is therefore effectively single-threaded. You can, however, spawn new threads within a request, which would make your app multi-threaded, though most people never need to.
If you are having issues…
require 'thread'

Thread.exclusive do
  # stuff here
end
… would ensure that the stuff inside the block is never run in parallel with any other code. If you just want to ensure that no two instances of the same code execute at the same time, it is preferable to create a shared Mutex for all threads (in a class variable or constant, say, though this can be wiped when code is reloaded in development mode, so be careful) and lock on it, the way Rack::Lock#call does.
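A sketch of that idea (PeriodicJob is just a made-up name; the point is the class-level Mutex shared by every thread in the process):

require 'thread'

class PeriodicJob
  MUTEX = Mutex.new # one lock shared by all threads in this process

  def self.run_once
    MUTEX.synchronize do
      # stuff that must never run concurrently within this process
    end
  end
end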
Also, for the record, each request cycle creates one controller instance and discards it afterwards. No two requests should see the same instance, although they may see the same class.
Setting config.threadsafe! voids almost everything I said: it removes Rack::Lock from the stack, which means you will need to use a mutex manually to prevent concurrent entry. Don't do it unless you have a really good reason.
Even without Rack::Lock you will still get one controller instance per request. The entry point to your controller ensures that, notice the call to new in process.
My understanding is that a new controller instance is created for each HTTP request that is processed by a controller.
Ruby is single-threaded, so at any time a controller can only handle one request. If there is more than one request, the extra requests are queued up. To avoid this, people usually run a small set of Mongrels to get good concurrency. It works like this (straight from the Mongrel wiki FAQ):
A request hits mongrel.
Mongrel makes a thread and parses the HTTP request headers
If the body is small, then it puts the body into a StringIO
If the body is large then it streams the body to a temp file
When the request is "cooked" it calls the RailsHandler.
The RailsHandler sees if the file is possibly page cached, if so then it sends the cached page.
Now you're finally ready to process the Rails request. LOCK!
Still locked, Mongrel calls the Rails Dispatcher to handle the request, passing in the headers, and StringIO or Tempfile for body.
When Rails is done, UNLOCK! Rails has (hopefully) put all of its output into a StringIO.
Mongrel then takes this StringIO output, any output headers, and streams them back to the client super fast.
Notice that there is no locking if the page is cached.
In Rails 2.3.4, the way Accept headers are handled has changed:
http://github.com/rails/rails/commit/1310231c15742bf7d99e2f143d88b383c32782d3
We won't Accept it
The way in which Rails handles incoming Accept headers has been updated. This was primarily due to the fact that web browsers do not always seem to know what they want ... let alone are able to consistently articulate it. So, Accept headers are now only used for XHR requests or single item headers - meaning they're not requesting everything. If that fails, we fall back to using the params[:format].
It's also worth noting that requests to an action in which you've only declared an XML template will no longer be automatically rendered for an HTML request (browser request). This had previously worked, not necessarily by design, but because most browsers send a catch-all Accept header ("/"). So, if you want to serve XML directly to a browser, be sure to provide the :xml format or explicitly specify the XML template (render "template.xml").
I have an active API which is being used by many clients, all of which send both a Content-Type and an Accept header, both set to application/xml. This works fine, but my testing under Rails 2.3.4 demonstrates that it no longer works -- I get a 403 Unauthorised response. Removing the Accept header and sending just the Content-Type works, but this clearly isn't an acceptable solution since it would require all my clients to re-code their applications.
If I proceed to deploy to Rails 2.3.4 all the client applications which use the API will break. How can I modify my Rails app such that I can continue to serve existing API requests on Rails 2.3.4 without the clients having to change their code?
If I understand correctly the problem is in the Request headers. You can simply add a custom Rack middleware that corrects it.
Quick idea:
class AcceptCompatibility
  def initialize(app)
    @app = app
  end

  def call(env)
    # Rack exposes these headers as HTTP_ACCEPT and CONTENT_TYPE
    if env['HTTP_ACCEPT'] == "application/xml" && env['CONTENT_TYPE'] == "application/xml"
      # Probably an API call; drop the Accept header so Rails falls back to the format
      env.delete('HTTP_ACCEPT')
    end
    @app.call(env)
  end
end
And then in your environment.rb
require 'accept_compatibility'
config.middleware.use AcceptCompatibility
Embarrassingly enough, this actually turned out to be an Apache configuration issue. Once I resolved this, everything worked as expected. Sorry about that.
As coderjoe correctly pointed out, setting the Content-Type header isn't necessary at all -- only setting the Accept header.