In Rails, can middleware execute after each request?

I have no need for this feature and am only asking out of curiosity.
I'm aware that middleware runs before each request.
Is it, however, reasonable to expect middleware to run after each request?
If that's the case, how can we go about doing it?
If not, how does the logger middleware report the response to the request?

In Rails, middlewares are arranged in a stack (you can think of this stack as a pipeline); the request and the response travel through the stack in two opposite directions.
The rails middleware command prints this stack (truncated here):

$ rails middleware

  request  |   ...                                   ^
           |   use Rack::ETag                        |
           |   use Rack::TempfileReaper              |
           |   use Warden::Manager                   |
           V   run Bookstore::Application.routes     |  response
To link the middlewares in the stack together, each higher middleware calls the one below it recursively. Let's look at some code to see how this works. Suppose we have two middlewares, M1 and M2, where M1 sits directly above M2 in the stack; then the request and the response flow through M1 and M2 as below (steps in order [1], [2], ...):
class M1
  def initialize(app)
    @app = app
  end

  def call(env)                       # [1] the middleware above calls M1
    # [2] before the recursive call
    # you can get the request here
    request = ActionDispatch::Request.new(env)
    # log the request, or do something else ...

    status, headers, body =           # [9] get the response back from M2
      @app.call(env)                  # [3] call M2 recursively

    # log the response, or do something else ...
    [status, headers, body]           # [10] return the response to the middleware above
  end
end

class M2
  def initialize(app)
    @app = app
  end

  def call(env)                       # [4] M1 calls M2
    # [5] before the recursive call into the lower middleware
    # (same as above)

    status, headers, body =           # [7] get the response back from the lower middleware
      @app.call(env)                  # [6] call the lower middleware recursively

    # log the response, or do something else ...
    [status, headers, body]           # [8] return the response to M1
  end
end
The lowest "middleware" is the Rails app itself, so the call stack looks like a chain:

call( ... call( ... call( ... rails_app.call(env) )) ... )

So you can change your app's behavior (how it handles the request/response) by adding, inserting (before/after), or removing nodes in this chain. Very flexible!
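For example, here is a minimal sketch of manipulating the chain from config/application.rb (RequestTimer is a hypothetical middleware class, used purely for illustration):

module Bookstore
  class Application < Rails::Application
    # append near the bottom of the stack (just above the app)
    config.middleware.use RequestTimer

    # insert relative to an existing middleware
    config.middleware.insert_before Rack::ETag, RequestTimer
    config.middleware.insert_after Rack::ETag, RequestTimer

    # remove a middleware entirely
    config.middleware.delete Rack::ETag
  end
end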

Related

ActiveStorage: Old urls still request deprecated :combine_options in variation_key

Recently I upgraded from Rails 6.0.3.4 to 6.1.3. ActiveStorage deprecated combine_options, which I cleared from my app. All fresh requests work as expected.
Internet bots (Facebook, Google, ...) cache URLs to images hosted on a website (like mine). According to my Rollbar records they request these a couple of times a day.
The cached URLs that should load ActiveStorage attachments include an old variation_key. When the blob is loaded using the decoded variation_key, I can see that combine_options is still present. This throws a 500 Internal Server Error with ArgumentError (Active Storage's ImageProcessing transformer doesn't support :combine_options, as it always generates a single ImageMagick command.).
Is there any way I can stop these errors from showing up?
Rails version: 6.1.3.
Ruby version: 2.7.2p137
I have resolved this issue using some middleware. It intercepts all incoming requests, checks whether they are ActiveStorage URLs, finds the ones with the deprecated combine_options, and just returns 404 Not Found. The code also raises an error if the current environment is development, so I don't accidentally reintroduce the deprecated code.
For those of you who might have the same problem, here's the code.
application.rb
require_relative '../lib/stopper'
config.middleware.use ::Stopper
lib/stopper.rb
class Stopper
  def initialize(app)
    @app = app
  end

  def call(env)
    req = Rack::Request.new(env)
    path = req.path

    if problematic_active_storage_url?(path)
      if ENV["RACK_ENV"] == 'development'
        raise "Problematic route, includes deprecated combine_options"
      end
      [404, {}, ['not found']]
    else
      @app.call(env)
    end
  end

  def problematic_active_storage_url?(path)
    if active_storage_path?(path) && !valid_variation_key?(variation_key_from_path(path))
      return true
    end
    false
  end

  def active_storage_path?(path)
    path.start_with?("/rails/active_storage/representations/")
  end

  def variation_key_from_path(path)
    if path.start_with?("/rails/active_storage/representations/redirect/")
      path.split('/')[6]
    elsif path.start_with?("/rails/active_storage/representations/")
      path.split('/')[5]
    end
  end

  def valid_variation_key?(var_key)
    if decoded_variation = ActiveStorage::Variation.decode(var_key)
      if transformations = decoded_variation.transformations
        if transformations[:combine_options].present?
          return false
        end
      end
    end
    true
  end
end
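As a rough sanity check (a sketch of my own, not from the original answer; the downstream app is a stub and the path is made up), the middleware can be exercised in isolation with Rack::MockRequest:

require 'rack/mock'

# stub downstream app so only Stopper's behavior is visible
stub_app = ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['ok']] }
request  = Rack::MockRequest.new(Stopper.new(stub_app))

request.get('/products/1').status
# => 200, non-ActiveStorage paths fall straight through to the app

# With a real signed variation key from your application, a representation URL
# whose key still decodes to :combine_options would come back as 404 instead
# of raising the 500 described above.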
I thought the stopper was a great solution, but eventually I wanted to get rid of it. Unfortunately, most of the old requests were still coming in months later and nobody was honoring the 404s. So I decided to monkey patch based on the behavior of the previous Rails version. This is what I did.
config/initializers/active_storage.rb
Rails.application.config.after_initialize do
  require 'active_storage'

  ActiveStorage::Transformers::ImageProcessingTransformer.class_eval do
    private

    def operations
      transformations.each_with_object([]) do |(name, argument), list|
        if name.to_s == "combine_options"
          list.concat argument.keep_if { |key, value| value.present? and key.to_s != "host" }.to_a
        elsif argument.present?
          list << [ name, argument ]
        end
      end
    end
  end
end
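For reference, the cached URLs in question come from pre-6.1 variant definitions along these lines (a hypothetical example, not from the original post), whose signed variation keys still encode :combine_options:

# hypothetical pre-Rails-6.1 style variant whose signed variation_key
# would still carry :combine_options; shown for illustration only
user.avatar.variant(
  combine_options: { auto_orient: true, gravity: "center", resize: "100x100^", crop: "100x100+0+0" }
)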

uninitialized constant Process::RLIMIT_NOFILE (NameError)

I'm getting the error "uninitialized constant Process::RLIMIT_NOFILE (NameError)" while executing the command rpush start.
I am trying to implement push notifications using rpush in Ruby on Rails on Windows, but I am not able to.
I'm a pretty complete beginner in Ruby on Rails.
Please help.
Here is the persistent.rb file the error points to:
require 'net/http'
require 'uri'
require 'cgi' # for escaping
require 'connection_pool'
begin
require 'net/http/pipeline'
rescue LoadError
end
autoload :OpenSSL, 'openssl'
class Net::HTTP::Persistent
##
# The beginning of Time
EPOCH = Time.at 0 # :nodoc:
##
# Is OpenSSL available? This test works with autoload
HAVE_OPENSSL = defined? OpenSSL::SSL # :nodoc:
##
# The default connection pool size is 1/4 the allowed open files.
DEFAULT_POOL_SIZE = Process.getrlimit(Process::RLIMIT_NOFILE).first / 4
##
# The version of Net::HTTP::Persistent you are using
VERSION = '3.0.0'
##
# Exceptions rescued for automatic retry on ruby 2.0.0. This overlaps with
# the exception list for ruby 1.x.
RETRIED_EXCEPTIONS = [ # :nodoc:
(Net::ReadTimeout if Net.const_defined? :ReadTimeout),
IOError,
EOFError,
Errno::ECONNRESET,
Errno::ECONNABORTED,
Errno::EPIPE,
(OpenSSL::SSL::SSLError if HAVE_OPENSSL),
Timeout::Error,
].compact
##
# Error class for errors raised by Net::HTTP::Persistent. Various
# SystemCallErrors are re-raised with a human-readable message under this
# class.
class Error < StandardError; end
##
# Use this method to detect the idle timeout of the host at +uri+. The
# value returned can be used to configure #idle_timeout. +max+ controls the
# maximum idle timeout to detect.
#
# After
#
# Idle timeout detection is performed by creating a connection then
# performing a HEAD request in a loop until the connection terminates
# waiting one additional second per loop.
#
# NOTE: This may not work on ruby > 1.9.
def self.detect_idle_timeout uri, max = 10
uri = URI uri unless URI::Generic === uri
uri += '/'
req = Net::HTTP::Head.new uri.request_uri
http = new 'net-http-persistent detect_idle_timeout'
http.connection_for uri do |connection|
sleep_time = 0
http = connection.http
loop do
response = http.request req
$stderr.puts "HEAD #{uri} => #{response.code}" if $DEBUG
unless Net::HTTPOK === response then
raise Error, "bad response code #{response.code} detecting idle timeout"
end
break if sleep_time >= max
sleep_time += 1
$stderr.puts "sleeping #{sleep_time}" if $DEBUG
sleep sleep_time
end
end
rescue
# ignore StandardErrors, we've probably found the idle timeout.
ensure
return sleep_time unless $!
end
##
# This client's OpenSSL::X509::Certificate
attr_reader :certificate
##
# For Net::HTTP parity
alias cert certificate
##
# An SSL certificate authority. Setting this will set verify_mode to
# VERIFY_PEER.
attr_reader :ca_file
##
# A directory of SSL certificates to be used as certificate authorities.
# Setting this will set verify_mode to VERIFY_PEER.
attr_reader :ca_path
##
# An SSL certificate store. Setting this will override the default
# certificate store. See verify_mode for more information.
attr_reader :cert_store
##
# The ciphers allowed for SSL connections
attr_reader :ciphers
##
# Sends debug_output to this IO via Net::HTTP#set_debug_output.
#
# Never use this method in production code, it causes a serious security
# hole.
attr_accessor :debug_output
##
# Current connection generation
attr_reader :generation # :nodoc:
##
# Headers that are added to every request using Net::HTTP#add_field
attr_reader :headers
##
# Maps host:port to an HTTP version. This allows us to enable version
# specific features.
attr_reader :http_versions
##
# Maximum time an unused connection can remain idle before being
# automatically closed.
attr_accessor :idle_timeout
##
# Maximum number of requests on a connection before it is considered expired
# and automatically closed.
attr_accessor :max_requests
##
# The value sent in the Keep-Alive header. Defaults to 30. Not needed for
# HTTP/1.1 servers.
#
# This may not work correctly for HTTP/1.0 servers
#
# This method may be removed in a future version as RFC 2616 does not
# require this header.
attr_accessor :keep_alive
##
# A name for this connection. Allows you to keep your connections apart
# from everybody else's.
attr_reader :name
##
# Seconds to wait until a connection is opened. See Net::HTTP#open_timeout
attr_accessor :open_timeout
##
# Headers that are added to every request using Net::HTTP#[]=
attr_reader :override_headers
##
# This client's SSL private key
attr_reader :private_key
##
# For Net::HTTP parity
alias key private_key
##
# The URL through which requests will be proxied
attr_reader :proxy_uri
##
# List of host suffixes which will not be proxied
attr_reader :no_proxy
##
# Test-only accessor for the connection pool
attr_reader :pool # :nodoc:
##
# Seconds to wait until reading one block. See Net::HTTP#read_timeout
attr_accessor :read_timeout
##
# By default SSL sessions are reused to avoid extra SSL handshakes. Set
# this to false if you have problems communicating with an HTTPS server
# like:
#
# SSL_connect [...] read finished A: unexpected message (OpenSSL::SSL::SSLError)
attr_accessor :reuse_ssl_sessions
##
# An array of options for Socket#setsockopt.
#
# By default the TCP_NODELAY option is set on sockets.
#
# To set additional options append them to this array:
#
# http.socket_options << [Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, 1]
attr_reader :socket_options
##
# Current SSL connection generation
attr_reader :ssl_generation # :nodoc:
##
# SSL session lifetime
attr_reader :ssl_timeout
##
# SSL version to use.
#
# By default, the version will be negotiated automatically between client
# and server. Ruby 1.9 and newer only.
attr_reader :ssl_version
##
# Where this instance's last-use times live in the thread local variables
attr_reader :timeout_key # :nodoc:
##
# SSL verification callback. Used when ca_file or ca_path is set.
attr_reader :verify_callback
##
# Sets the depth of SSL certificate verification
attr_reader :verify_depth
##
# HTTPS verify mode. Defaults to OpenSSL::SSL::VERIFY_PEER which verifies
# the server certificate.
#
# If no ca_file, ca_path or cert_store is set the default system certificate
# store is used.
#
# You can use +verify_mode+ to override any default values.
attr_reader :verify_mode
##
# Enable retries of non-idempotent requests that change data (e.g. POST
# requests) when the server has disconnected.
#
# This will in the worst case lead to multiple requests with the same data,
# but it may be useful for some applications. Take care when enabling
# this option to ensure it is safe to POST or perform other non-idempotent
# requests to the server.
attr_accessor :retry_change_requests
##
# Creates a new Net::HTTP::Persistent.
#
# Set +name+ to keep your connections apart from everybody else's. Not
# required currently, but highly recommended. Your library name should be
# good enough. This parameter will be required in a future version.
#
# +proxy+ may be set to a URI::HTTP or :ENV to pick up proxy options from
# the environment. See proxy_from_env for details.
#
# In order to use a URI for the proxy you may need to do some extra work
# beyond URI parsing if the proxy requires a password:
#
# proxy = URI 'http://proxy.example'
# proxy.user = 'AzureDiamond'
# proxy.password = 'hunter2'
#
# Set +pool_size+ to limit the maximum number of connections allowed.
# Defaults to 1/4 the number of allowed file handles. You can have no more
# than this many threads with active HTTP transactions.
def initialize name: nil, proxy: nil, pool_size: DEFAULT_POOL_SIZE
@name = name
@debug_output = nil
@proxy_uri = nil
@no_proxy = []
@headers = {}
@override_headers = {}
@http_versions = {}
@keep_alive = 30
@open_timeout = nil
@read_timeout = nil
@idle_timeout = 5
@max_requests = nil
@socket_options = []
@ssl_generation = 0 # incremented when SSL session variables change
@socket_options << [Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1] if
Socket.const_defined? :TCP_NODELAY
@pool = Net::HTTP::Persistent::Pool.new size: pool_size do |http_args|
Net::HTTP::Persistent::Connection.new Net::HTTP, http_args, @ssl_generation
end
@certificate = nil
@ca_file = nil
@ca_path = nil
@ciphers = nil
@private_key = nil
@ssl_timeout = nil
@ssl_version = nil
@verify_callback = nil
@verify_depth = nil
@verify_mode = nil
@cert_store = nil
@generation = 0 # incremented when proxy URI changes
if HAVE_OPENSSL then
@verify_mode = OpenSSL::SSL::VERIFY_PEER
@reuse_ssl_sessions = OpenSSL::SSL.const_defined? :Session
end
@retry_change_requests = false
self.proxy = proxy if proxy
end
##
# Sets this client's OpenSSL::X509::Certificate
def certificate= certificate
@certificate = certificate
reconnect_ssl
end
# For Net::HTTP parity
alias cert= certificate=
##
# Sets the SSL certificate authority file.
def ca_file= file
@ca_file = file
reconnect_ssl
end
##
# Sets the SSL certificate authority path.
def ca_path= path
@ca_path = path
reconnect_ssl
end
##
# Overrides the default SSL certificate store used for verifying
# connections.
def cert_store= store
@cert_store = store
reconnect_ssl
end
##
# The ciphers allowed for SSL connections
def ciphers= ciphers
@ciphers = ciphers
reconnect_ssl
end
##
# Creates a new connection for +uri+
def connection_for uri
use_ssl = uri.scheme.downcase == 'https'
net_http_args = [uri.host, uri.port]
net_http_args.concat @proxy_args if
@proxy_uri and not proxy_bypass? uri.host, uri.port
connection = @pool.checkout net_http_args
http = connection.http
connection.ressl @ssl_generation if
connection.ssl_generation != @ssl_generation
if not http.started? then
ssl http if use_ssl
start http
elsif expired? connection then
reset connection
end
http.read_timeout = @read_timeout if @read_timeout
http.keep_alive_timeout = @idle_timeout if @idle_timeout
return yield connection
rescue Errno::ECONNREFUSED
address = http.proxy_address || http.address
port = http.proxy_port || http.port
raise Error, "connection refused: #{address}:#{port}"
rescue Errno::EHOSTDOWN
address = http.proxy_address || http.address
port = http.proxy_port || http.port
raise Error, "host down: #{address}:#{port}"
ensure
@pool.checkin net_http_args
end
##
# Returns an error message containing the number of requests performed on
# this connection
def error_message connection
connection.requests -= 1 # fixup
age = Time.now - connection.last_use
"after #{connection.requests} requests on #{connection.http.object_id}, " \
"last used #{age} seconds ago"
end
##
# URI::escape wrapper
def escape str
CGI.escape str if str
end
##
# URI::unescape wrapper
def unescape str
CGI.unescape str if str
end
##
# Returns true if the connection should be reset due to an idle timeout, or
# maximum request count, false otherwise.
def expired? connection
return true if @max_requests && connection.requests >= @max_requests
return false unless @idle_timeout
return true if @idle_timeout.zero?
Time.now - connection.last_use > @idle_timeout
end
##
# Starts the Net::HTTP +connection+
def start http
http.set_debug_output @debug_output if @debug_output
http.open_timeout = @open_timeout if @open_timeout
http.start
socket = http.instance_variable_get :@socket
if socket then # for fakeweb
@socket_options.each do |option|
socket.io.setsockopt(*option)
end
end
end
##
# Finishes the Net::HTTP +connection+
def finish connection
connection.finish
connection.http.instance_variable_set :@ssl_session, nil unless
@reuse_ssl_sessions
end
##
# Returns the HTTP protocol version for +uri+
def http_version uri
@http_versions["#{uri.host}:#{uri.port}"]
end
##
# Is +req+ idempotent according to RFC 2616?
def idempotent? req
case req
when Net::HTTP::Delete, Net::HTTP::Get, Net::HTTP::Head,
Net::HTTP::Options, Net::HTTP::Put, Net::HTTP::Trace then
true
end
end
##
# Is the request +req+ idempotent or is retry_change_requests allowed.
def can_retry? req
@retry_change_requests && !idempotent?(req)
end
##
# Adds "http://" to the String +uri+ if it is missing.
def normalize_uri uri
(uri =~ /^https?:/) ? uri : "http://#{uri}"
end
##
# Pipelines +requests+ to the HTTP server at +uri+ yielding responses if a
# block is given. Returns all responses received.
#
# See
# Net::HTTP::Pipeline[http://docs.seattlerb.org/net-http-pipeline/Net/HTTP/Pipeline.html]
# for further details.
#
# Only if <tt>net-http-pipeline</tt> was required before
# <tt>net-http-persistent</tt> #pipeline will be present.
def pipeline uri, requests, &block # :yields: responses
connection_for uri do |connection|
connection.http.pipeline requests, &block
end
end
##
# Sets this client's SSL private key
def private_key= key
@private_key = key
reconnect_ssl
end
# For Net::HTTP parity
alias key= private_key=
##
# Sets the proxy server. The +proxy+ may be the URI of the proxy server,
# the symbol +:ENV+ which will read the proxy from the environment or nil to
# disable use of a proxy. See #proxy_from_env for details on setting the
# proxy from the environment.
#
# If the proxy URI is set after requests have been made, the next request
# will shut-down and re-open all connections.
#
# The +no_proxy+ query parameter can be used to specify hosts which shouldn't
# be reached via proxy; if set it should be a comma separated list of
# hostname suffixes, optionally with +:port+ appended, for example
# <tt>example.com,some.host:8080</tt>.
def proxy= proxy
@proxy_uri = case proxy
when :ENV then proxy_from_env
when URI::HTTP then proxy
when nil then # ignore
else raise ArgumentError, 'proxy must be :ENV or a URI::HTTP'
end
@no_proxy.clear
if @proxy_uri then
@proxy_args = [
@proxy_uri.host,
@proxy_uri.port,
unescape(@proxy_uri.user),
unescape(@proxy_uri.password),
]
@proxy_connection_id = [nil, *@proxy_args].join ':'
if @proxy_uri.query then
@no_proxy = CGI.parse(@proxy_uri.query)['no_proxy'].join(',').downcase.split(',').map { |x| x.strip }.reject { |x| x.empty? }
end
end
reconnect
reconnect_ssl
end
##
# Creates a URI for an HTTP proxy server from ENV variables.
#
# If +HTTP_PROXY+ is set a proxy will be returned.
#
# If +HTTP_PROXY_USER+ or +HTTP_PROXY_PASS+ are set the URI is given the
# indicated user and password unless HTTP_PROXY contains either of these in
# the URI.
#
# The +NO_PROXY+ ENV variable can be used to specify hosts which shouldn't
# be reached via proxy; if set it should be a comma separated list of
# hostname suffixes, optionally with +:port+ appended, for example
# <tt>example.com,some.host:8080</tt>. When set to <tt>*</tt> no proxy will
# be returned.
#
# For Windows users, lowercase ENV variables are preferred over uppercase ENV
# variables.
def proxy_from_env
env_proxy = ENV['http_proxy'] || ENV['HTTP_PROXY']
return nil if env_proxy.nil? or env_proxy.empty?
uri = URI normalize_uri env_proxy
env_no_proxy = ENV['no_proxy'] || ENV['NO_PROXY']
# '*' is special case for always bypass
return nil if env_no_proxy == '*'
if env_no_proxy then
uri.query = "no_proxy=#{escape(env_no_proxy)}"
end
unless uri.user or uri.password then
uri.user = escape ENV['http_proxy_user'] || ENV['HTTP_PROXY_USER']
uri.password = escape ENV['http_proxy_pass'] || ENV['HTTP_PROXY_PASS']
end
uri
end
##
# Returns true when proxy should by bypassed for host.
def proxy_bypass? host, port
host = host.downcase
host_port = [host, port].join ':'
@no_proxy.each do |name|
return true if host[-name.length, name.length] == name or
host_port[-name.length, name.length] == name
end
false
end
##
# Forces reconnection of HTTP connections.
def reconnect
@generation += 1
end
##
# Forces reconnection of SSL connections.
def reconnect_ssl
@ssl_generation += 1
end
##
# Finishes then restarts the Net::HTTP +connection+
def reset connection
http = connection.http
finish connection
start http
rescue Errno::ECONNREFUSED
e = Error.new "connection refused: #{http.address}:#{http.port}"
e.set_backtrace $@
raise e
rescue Errno::EHOSTDOWN
e = Error.new "host down: #{http.address}:#{http.port}"
e.set_backtrace $@
raise e
end
##
# Makes a request on +uri+. If +req+ is nil a Net::HTTP::Get is performed
# against +uri+.
#
# If a block is passed #request behaves like Net::HTTP#request (the body of
# the response will not have been read).
#
# +req+ must be a Net::HTTPRequest subclass (see Net::HTTP for a list).
#
# If there is an error and the request is idempotent according to RFC 2616
# it will be retried automatically.
def request uri, req = nil, &block
retried = false
bad_response = false
uri = URI uri
req = request_setup req || uri
response = nil
connection_for uri do |connection|
http = connection.http
begin
connection.requests += 1
response = http.request req, &block
if req.connection_close? or
(response.http_version <= '1.0' and
not response.connection_keep_alive?) or
response.connection_close? then
finish connection
end
rescue Net::HTTPBadResponse => e
message = error_message connection
finish connection
raise Error, "too many bad responses #{message}" if
bad_response or not can_retry? req
bad_response = true
retry
rescue *RETRIED_EXCEPTIONS => e
request_failed e, req, connection if
retried or not can_retry? req
reset connection
retried = true
retry
rescue Errno::EINVAL, Errno::ETIMEDOUT => e # not retried on ruby 2
request_failed e, req, connection if retried or not can_retry? req
reset connection
retried = true
retry
rescue Exception => e
finish connection
raise
ensure
connection.last_use = Time.now
end
end
@http_versions["#{uri.host}:#{uri.port}"] ||= response.http_version
response
end
##
# Raises an Error for +exception+ which resulted from attempting the request
# +req+ on the +connection+.
#
# Finishes the +connection+.
def request_failed exception, req, connection # :nodoc:
due_to = "(due to #{exception.message} - #{exception.class})"
message = "too many connection resets #{due_to} #{error_message connection}"
finish connection
raise Error, message, exception.backtrace
end
##
# Creates a GET request if +req_or_uri+ is a URI and adds headers to the
# request.
#
# Returns the request.
def request_setup req_or_uri # :nodoc:
req = if URI === req_or_uri then
Net::HTTP::Get.new req_or_uri.request_uri
else
req_or_uri
end
@headers.each do |pair|
req.add_field(*pair)
end
@override_headers.each do |name, value|
req[name] = value
end
unless req['Connection'] then
req.add_field 'Connection', 'keep-alive'
req.add_field 'Keep-Alive', @keep_alive
end
req
end
##
# Shuts down all connections
#
# *NOTE*: Calling shutdown can be dangerous!
#
# If any thread is still using a connection it may cause an error! Call
# #shutdown when you are completely done making requests!
def shutdown
@pool.available.shutdown do |http|
http.finish
end
end
##
# Enables SSL on +connection+
def ssl connection
connection.use_ssl = true
connection.ciphers = @ciphers if @ciphers
connection.ssl_timeout = @ssl_timeout if @ssl_timeout
connection.ssl_version = @ssl_version if @ssl_version
connection.verify_depth = @verify_depth
connection.verify_mode = @verify_mode
if OpenSSL::SSL::VERIFY_PEER == OpenSSL::SSL::VERIFY_NONE and
not Object.const_defined?(:I_KNOW_THAT_OPENSSL_VERIFY_PEER_EQUALS_VERIFY_NONE_IS_WRONG) then
warn <<-WARNING
!!!SECURITY WARNING!!!
The SSL HTTP connection to:
#{connection.address}:#{connection.port}
!!!MAY NOT BE VERIFIED!!!
On your platform your OpenSSL implementation is broken.
There is no difference between the values of VERIFY_NONE and VERIFY_PEER.
This means that attempting to verify the security of SSL connections may not
work. This exposes you to man-in-the-middle exploits, snooping on the
contents of your connection and other dangers to the security of your data.
To disable this warning define the following constant at top-level in your
application:
I_KNOW_THAT_OPENSSL_VERIFY_PEER_EQUALS_VERIFY_NONE_IS_WRONG = nil
WARNING
end
connection.ca_file = @ca_file if @ca_file
connection.ca_path = @ca_path if @ca_path
if @ca_file or @ca_path then
connection.verify_mode = OpenSSL::SSL::VERIFY_PEER
connection.verify_callback = @verify_callback if @verify_callback
end
if @certificate and @private_key then
connection.cert = @certificate
connection.key = @private_key
end
connection.cert_store = if @cert_store then
@cert_store
else
store = OpenSSL::X509::Store.new
store.set_default_paths
store
end
end
##
# SSL session lifetime
def ssl_timeout= ssl_timeout
@ssl_timeout = ssl_timeout
reconnect_ssl
end
##
# SSL version to use
def ssl_version= ssl_version
@ssl_version = ssl_version
reconnect_ssl
end
##
# Sets the depth of SSL certificate verification
def verify_depth= verify_depth
@verify_depth = verify_depth
reconnect_ssl
end
##
# Sets the HTTPS verify mode. Defaults to OpenSSL::SSL::VERIFY_PEER.
#
# Setting this to VERIFY_NONE is a VERY BAD IDEA and should NEVER be used.
# Securely transfer the correct certificate and update the default
# certificate store or set the ca file instead.
def verify_mode= verify_mode
@verify_mode = verify_mode
reconnect_ssl
end
##
# SSL verification callback.
def verify_callback= callback
@verify_callback = callback
reconnect_ssl
end
end
require 'net/http/persistent/connection'
require 'net/http/persistent/pool'
This happens because net-http-persistent internally refers to constants that do not exist in the Windows environment.
Possible solutions:
1) Add this line before the gem is loaded (you may still run into other errors further along); see the sketch after this list:
Process::RLIMIT_NOFILE = 7 if Gem.win_platform?
2) Change your platform to a *nix system, where the constant is defined:
irb(main):001:0> Gem.win_platform?
=> false
irb(main):002:0> Process::RLIMIT_NOFILE
=> 7
3) Wait for the fix PR to be merged:
https://github.com/drbrain/net-http-persistent/pull/90/files
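For option 1, a minimal sketch of where to put the line (this placement is my assumption; the original answer doesn't say): define the constant as early as possible, for example near the top of config/boot.rb, before Bundler requires rpush and, through it, net-http-persistent:

# a sketch, not from the original answer: define the constant that MRI on
# Windows lacks, before rpush / net-http-persistent get required
if Gem.win_platform? && !Process.const_defined?(:RLIMIT_NOFILE)
  Process::RLIMIT_NOFILE = 7
end

If Process.getrlimit itself is also unavailable on your Ruby build, you will still hit follow-on errors when DEFAULT_POOL_SIZE is computed, which is roughly what the linked PR addresses.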

Adding authentication info in Rails proxy middleware

I am using the rack-proxy gem in Rails to proxy requests to an external server. The thing is, the external endpoint requires authentication. How do I provide that information from the middleware?
Here's what I have so far:
require 'rack/proxy'
class MyProxy < Rack::Proxy
  MY_REQUEST = %r{^/path/(.*)}

  def initialize(app)
    @app = app
  end

  def call(env)
    if m = MY_REQUEST.match(env['PATH_INFO'])
      env['PATH_INFO'] = "https://otherserver.org/#{m[1]}"
      env['HTTP_HOST'] = "otherserver.org"
      # the otherserver.org endpoint requires authentication
      super env
    else
      @app.call(env)
    end
  end
end
It depends on what kind of authentication the other server is using. If it's just plain HTTP Basic authentication, you can do something like:
env['Authentication'] = 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='
Where the value part follows the spec at: http://en.wikipedia.org/wiki/Basic_access_authentication#cite_ref-8
Following the Rack spec here gave me good pointers - http://rubydoc.info/github/rack/rack/master/file/SPEC
Had to do:
env['HTTP_AUTHORIZATION'] = 'Basic <base64 username:password>'
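Putting the two answers together, a sketch of the proxy with the header set (the credentials here are placeholders; the header value is just the Base64 of "user:password"):

require 'base64'
require 'rack/proxy'

class MyProxy < Rack::Proxy
  MY_REQUEST = %r{^/path/(.*)}

  def initialize(app)
    @app = app
  end

  def call(env)
    if m = MY_REQUEST.match(env['PATH_INFO'])
      env['PATH_INFO'] = "https://otherserver.org/#{m[1]}"
      env['HTTP_HOST'] = "otherserver.org"
      # placeholder credentials; the Basic value is Base64("user:password")
      env['HTTP_AUTHORIZATION'] = "Basic #{Base64.strict_encode64('user:secret')}"
      super env
    else
      @app.call(env)
    end
  end
end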

Rails 3.2.x - log elapsed request time

The Rails 3.2.x logging mechanism has improved tremendously.
Still, I'm looking for a way to prefix each log item with the elapsed time since the request started (in milliseconds, or a timestamp). (The request is identified by a UUID.)
I figure that I may have to write my own Rack middleware for this, but maybe there's a simpler solution out of the box?
UPDATE
I built my own Rack middleware:
class RequestElapsedTime
  def initialize(app)
    @app = app
  end

  def call(env)
    # Set the request start time so it can be used as part of the 'request' context.
    env['REQUEST_START_TIME'] = Time.now

    # Call the next middleware component
    status, headers, response = @app.call(env)

    # Return a valid Rack response
    [status, headers, response]
  end
end
And added the following to application.rb:
elapsed_time = lambda do |req|
  req_start_time = req.env['REQUEST_START_TIME']
  return unless req_start_time
  (Time.now - req_start_time) * 1000.0
end

# Add request UUID and elapsed time to logs
config.log_tags = [:uuid, elapsed_time]
Unfortunately, for some reason, this outputs the same elapsed time for different log items within the same request, which leads me to think that the lambda is being evaluated once for multiple items at a time (buffered logging?)...
Any idea how to solve this?

Getting rack error Rack::Lint::LintError: Status must be >=100 seen as integer

I'm trying to build my Rack-based Ruby app, and I'm still new here. I'm using Ruby 1.9.2-p180.
in my config.ru file I have:
require "rack"
require "./my_app.rb"
require "./auth.rb"
use Auth
run MyApp.new
Now, the main problem is with the Auth middleware: I simply want it to not continue on to MyApp if the request has fewer than 2 params, and just print something out (just for testing):
class Auth
  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)
    if request.params.count < 2
      ["200", {"Content-Type" => "text/plain"}, ["Hello World"]]
      puts 'Working .. '
    else
      @app.call(env)
    end
  end
end
Now when I run my Rack app:
rackup -s thin config.ru
And try to get the result:
curl http://localhost:9292/
I keep getting the following error:
Rack::Lint::LintError: Status must be >=100 seen as integer
/Users/Apple/.rvm/gems/ruby-1.9.2-p180/gems/rack-1.4.1/lib/rack/lint.rb:19:in `assert'
/Users/Apple/.rvm/gems/ruby-1.9.2-p180/gems/rack-1.4.1/lib/rack/lint.rb:425:in `check_status'
Of course, if I run it in production mode I don't get this error (Rack::Lint is only inserted in development).
Any help would be appreciated.
Try using an integer for the status:
[200,{"Content-Type" => "text/plain"}, ["Hello World"]]
Instead of:
["200",{"Content-Type" => "text/plain"}, ["Hello World"]]
