I'm getting the error "uninitialized constant Process::RLIMIT_NOFILE (NameError)" while executing the command "rpush start".
I am trying to implement push notifications using rpush in Ruby on Rails on Windows, but I'm not able to do so.
I'm a beginner in Ruby on Rails.
Please help.
Here is my persistent.rb file:
require 'net/http'
require 'uri'
require 'cgi' # for escaping
require 'connection_pool'
begin
require 'net/http/pipeline'
rescue LoadError
end
autoload :OpenSSL, 'openssl'
class Net::HTTP::Persistent
##
# The beginning of Time
EPOCH = Time.at 0 # :nodoc:
##
# Is OpenSSL available? This test works with autoload
HAVE_OPENSSL = defined? OpenSSL::SSL # :nodoc:
##
# The default connection pool size is 1/4 the allowed open files.
DEFAULT_POOL_SIZE = Process.getrlimit(Process::RLIMIT_NOFILE).first / 4
##
# The version of Net::HTTP::Persistent you are using
VERSION = '3.0.0'
##
# Exceptions rescued for automatic retry on ruby 2.0.0. This overlaps with
# the exception list for ruby 1.x.
RETRIED_EXCEPTIONS = [ # :nodoc:
(Net::ReadTimeout if Net.const_defined? :ReadTimeout),
IOError,
EOFError,
Errno::ECONNRESET,
Errno::ECONNABORTED,
Errno::EPIPE,
(OpenSSL::SSL::SSLError if HAVE_OPENSSL),
Timeout::Error,
].compact
##
# Error class for errors raised by Net::HTTP::Persistent. Various
# SystemCallErrors are re-raised with a human-readable message under this
# class.
class Error < StandardError; end
##
# Use this method to detect the idle timeout of the host at +uri+. The
# value returned can be used to configure #idle_timeout. +max+ controls the
# maximum idle timeout to detect.
#
# After
#
# Idle timeout detection is performed by creating a connection then
# performing a HEAD request in a loop until the connection terminates
# waiting one additional second per loop.
#
# NOTE: This may not work on ruby > 1.9.
def self.detect_idle_timeout uri, max = 10
uri = URI uri unless URI::Generic === uri
uri += '/'
req = Net::HTTP::Head.new uri.request_uri
http = new 'net-http-persistent detect_idle_timeout'
http.connection_for uri do |connection|
sleep_time = 0
http = connection.http
loop do
response = http.request req
$stderr.puts "HEAD #{uri} => #{response.code}" if $DEBUG
unless Net::HTTPOK === response then
raise Error, "bad response code #{response.code} detecting idle timeout"
end
break if sleep_time >= max
sleep_time += 1
$stderr.puts "sleeping #{sleep_time}" if $DEBUG
sleep sleep_time
end
end
rescue
# ignore StandardErrors, we've probably found the idle timeout.
ensure
return sleep_time unless $!
end
##
# This client's OpenSSL::X509::Certificate
attr_reader :certificate
##
# For Net::HTTP parity
alias cert certificate
##
# An SSL certificate authority. Setting this will set verify_mode to
# VERIFY_PEER.
attr_reader :ca_file
##
# A directory of SSL certificates to be used as certificate authorities.
# Setting this will set verify_mode to VERIFY_PEER.
attr_reader :ca_path
##
# An SSL certificate store. Setting this will override the default
# certificate store. See verify_mode for more information.
attr_reader :cert_store
##
# The ciphers allowed for SSL connections
attr_reader :ciphers
##
# Sends debug_output to this IO via Net::HTTP#set_debug_output.
#
# Never use this method in production code, it causes a serious security
# hole.
attr_accessor :debug_output
##
# Current connection generation
attr_reader :generation # :nodoc:
##
# Headers that are added to every request using Net::HTTP#add_field
attr_reader :headers
##
# Maps host:port to an HTTP version. This allows us to enable version
# specific features.
attr_reader :http_versions
##
# Maximum time an unused connection can remain idle before being
# automatically closed.
attr_accessor :idle_timeout
##
# Maximum number of requests on a connection before it is considered expired
# and automatically closed.
attr_accessor :max_requests
##
# The value sent in the Keep-Alive header. Defaults to 30. Not needed for
# HTTP/1.1 servers.
#
# This may not work correctly for HTTP/1.0 servers
#
# This method may be removed in a future version as RFC 2616 does not
# require this header.
attr_accessor :keep_alive
##
# A name for this connection. Allows you to keep your connections apart
# from everybody else's.
attr_reader :name
##
# Seconds to wait until a connection is opened. See Net::HTTP#open_timeout
attr_accessor :open_timeout
##
# Headers that are added to every request using Net::HTTP#[]=
attr_reader :override_headers
##
# This client's SSL private key
attr_reader :private_key
##
# For Net::HTTP parity
alias key private_key
##
# The URL through which requests will be proxied
attr_reader :proxy_uri
##
# List of host suffixes which will not be proxied
attr_reader :no_proxy
##
# Test-only accessor for the connection pool
attr_reader :pool # :nodoc:
##
# Seconds to wait until reading one block. See Net::HTTP#read_timeout
attr_accessor :read_timeout
##
# By default SSL sessions are reused to avoid extra SSL handshakes. Set
# this to false if you have problems communicating with an HTTPS server
# like:
#
# SSL_connect [...] read finished A: unexpected message (OpenSSL::SSL::SSLError)
attr_accessor :reuse_ssl_sessions
##
# An array of options for Socket#setsockopt.
#
# By default the TCP_NODELAY option is set on sockets.
#
# To set additional options append them to this array:
#
# http.socket_options << [Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, 1]
attr_reader :socket_options
##
# Current SSL connection generation
attr_reader :ssl_generation # :nodoc:
##
# SSL session lifetime
attr_reader :ssl_timeout
##
# SSL version to use.
#
# By default, the version will be negotiated automatically between client
# and server. Ruby 1.9 and newer only.
attr_reader :ssl_version
##
# Where this instance's last-use times live in the thread local variables
attr_reader :timeout_key # :nodoc:
##
# SSL verification callback. Used when ca_file or ca_path is set.
attr_reader :verify_callback
##
# Sets the depth of SSL certificate verification
attr_reader :verify_depth
##
# HTTPS verify mode. Defaults to OpenSSL::SSL::VERIFY_PEER which verifies
# the server certificate.
#
# If no ca_file, ca_path or cert_store is set the default system certificate
# store is used.
#
# You can use +verify_mode+ to override any default values.
attr_reader :verify_mode
##
# Enable retries of non-idempotent requests that change data (e.g. POST
# requests) when the server has disconnected.
#
# This will in the worst case lead to multiple requests with the same data,
# but it may be useful for some applications. Take care when enabling
# this option to ensure it is safe to POST or perform other non-idempotent
# requests to the server.
attr_accessor :retry_change_requests
##
# Creates a new Net::HTTP::Persistent.
#
# Set +name+ to keep your connections apart from everybody else's. Not
# required currently, but highly recommended. Your library name should be
# good enough. This parameter will be required in a future version.
#
# +proxy+ may be set to a URI::HTTP or :ENV to pick up proxy options from
# the environment. See proxy_from_env for details.
#
# In order to use a URI for the proxy you may need to do some extra work
# beyond URI parsing if the proxy requires a password:
#
# proxy = URI 'http://proxy.example'
# proxy.user = 'AzureDiamond'
# proxy.password = 'hunter2'
#
# Set +pool_size+ to limit the maximum number of connections allowed.
# Defaults to 1/4 the number of allowed file handles. You can have no more
# than this many threads with active HTTP transactions.
def initialize name: nil, proxy: nil, pool_size: DEFAULT_POOL_SIZE
@name = name
@debug_output = nil
@proxy_uri = nil
@no_proxy = []
@headers = {}
@override_headers = {}
@http_versions = {}
@keep_alive = 30
@open_timeout = nil
@read_timeout = nil
@idle_timeout = 5
@max_requests = nil
@socket_options = []
@ssl_generation = 0 # incremented when SSL session variables change
@socket_options << [Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1] if
Socket.const_defined? :TCP_NODELAY
@pool = Net::HTTP::Persistent::Pool.new size: pool_size do |http_args|
Net::HTTP::Persistent::Connection.new Net::HTTP, http_args, @ssl_generation
end
@certificate = nil
@ca_file = nil
@ca_path = nil
@ciphers = nil
@private_key = nil
@ssl_timeout = nil
@ssl_version = nil
@verify_callback = nil
@verify_depth = nil
@verify_mode = nil
@cert_store = nil
@generation = 0 # incremented when proxy URI changes
if HAVE_OPENSSL then
@verify_mode = OpenSSL::SSL::VERIFY_PEER
@reuse_ssl_sessions = OpenSSL::SSL.const_defined? :Session
end
@retry_change_requests = false
self.proxy = proxy if proxy
end
##
# Sets this client's OpenSSL::X509::Certificate
def certificate= certificate
@certificate = certificate
reconnect_ssl
end
# For Net::HTTP parity
alias cert= certificate=
##
# Sets the SSL certificate authority file.
def ca_file= file
@ca_file = file
reconnect_ssl
end
##
# Sets the SSL certificate authority path.
def ca_path= path
@ca_path = path
reconnect_ssl
end
##
# Overrides the default SSL certificate store used for verifying
# connections.
def cert_store= store
@cert_store = store
reconnect_ssl
end
##
# The ciphers allowed for SSL connections
def ciphers= ciphers
@ciphers = ciphers
reconnect_ssl
end
##
# Creates a new connection for +uri+
def connection_for uri
use_ssl = uri.scheme.downcase == 'https'
net_http_args = [uri.host, uri.port]
net_http_args.concat @proxy_args if
@proxy_uri and not proxy_bypass? uri.host, uri.port
connection = @pool.checkout net_http_args
http = connection.http
connection.ressl @ssl_generation if
connection.ssl_generation != @ssl_generation
if not http.started? then
ssl http if use_ssl
start http
elsif expired? connection then
reset connection
end
http.read_timeout = @read_timeout if @read_timeout
http.keep_alive_timeout = @idle_timeout if @idle_timeout
return yield connection
rescue Errno::ECONNREFUSED
address = http.proxy_address || http.address
port = http.proxy_port || http.port
raise Error, "connection refused: #{address}:#{port}"
rescue Errno::EHOSTDOWN
address = http.proxy_address || http.address
port = http.proxy_port || http.port
raise Error, "host down: #{address}:#{port}"
ensure
@pool.checkin net_http_args
end
##
# Returns an error message containing the number of requests performed on
# this connection
def error_message connection
connection.requests -= 1 # fixup
age = Time.now - connection.last_use
"after #{connection.requests} requests on #{connection.http.object_id}, " \
"last used #{age} seconds ago"
end
##
# URI::escape wrapper
def escape str
CGI.escape str if str
end
##
# URI::unescape wrapper
def unescape str
CGI.unescape str if str
end
##
# Returns true if the connection should be reset due to an idle timeout, or
# maximum request count, false otherwise.
def expired? connection
return true if @max_requests && connection.requests >= @max_requests
return false unless @idle_timeout
return true if @idle_timeout.zero?
Time.now - connection.last_use > @idle_timeout
end
##
# Starts the Net::HTTP +connection+
def start http
http.set_debug_output @debug_output if @debug_output
http.open_timeout = @open_timeout if @open_timeout
http.start
socket = http.instance_variable_get :@socket
if socket then # for fakeweb
@socket_options.each do |option|
socket.io.setsockopt(*option)
end
end
end
##
# Finishes the Net::HTTP +connection+
def finish connection
connection.finish
connection.http.instance_variable_set :@ssl_session, nil unless
@reuse_ssl_sessions
end
##
# Returns the HTTP protocol version for +uri+
def http_version uri
@http_versions["#{uri.host}:#{uri.port}"]
end
##
# Is +req+ idempotent according to RFC 2616?
def idempotent? req
case req
when Net::HTTP::Delete, Net::HTTP::Get, Net::HTTP::Head,
Net::HTTP::Options, Net::HTTP::Put, Net::HTTP::Trace then
true
end
end
##
# Is the request +req+ idempotent or is retry_change_requests allowed.
def can_retry? req
@retry_change_requests && !idempotent?(req)
end
##
# Adds "http://" to the String +uri+ if it is missing.
def normalize_uri uri
(uri =~ /^https?:/) ? uri : "http://#{uri}"
end
##
# Pipelines +requests+ to the HTTP server at +uri+ yielding responses if a
# block is given. Returns all responses received.
#
# See
# Net::HTTP::Pipeline[http://docs.seattlerb.org/net-http-pipeline/Net/HTTP/Pipeline.html]
# for further details.
#
# Only if <tt>net-http-pipeline</tt> was required before
# <tt>net-http-persistent</tt> #pipeline will be present.
def pipeline uri, requests, &block # :yields: responses
connection_for uri do |connection|
connection.http.pipeline requests, &block
end
end
##
# Sets this client's SSL private key
def private_key= key
@private_key = key
reconnect_ssl
end
# For Net::HTTP parity
alias key= private_key=
##
# Sets the proxy server. The +proxy+ may be the URI of the proxy server,
# the symbol +:ENV+ which will read the proxy from the environment or nil to
# disable use of a proxy. See #proxy_from_env for details on setting the
# proxy from the environment.
#
# If the proxy URI is set after requests have been made, the next request
# will shut-down and re-open all connections.
#
# The +no_proxy+ query parameter can be used to specify hosts which shouldn't
# be reached via proxy; if set it should be a comma separated list of
# hostname suffixes, optionally with +:port+ appended, for example
# <tt>example.com,some.host:8080</tt>.
def proxy= proxy
@proxy_uri = case proxy
when :ENV then proxy_from_env
when URI::HTTP then proxy
when nil then # ignore
else raise ArgumentError, 'proxy must be :ENV or a URI::HTTP'
end
@no_proxy.clear
if @proxy_uri then
@proxy_args = [
@proxy_uri.host,
@proxy_uri.port,
unescape(@proxy_uri.user),
unescape(@proxy_uri.password),
]
@proxy_connection_id = [nil, *@proxy_args].join ':'
if @proxy_uri.query then
@no_proxy = CGI.parse(@proxy_uri.query)['no_proxy'].join(',').downcase.split(',').map { |x| x.strip }.reject { |x| x.empty? }
end
end
reconnect
reconnect_ssl
end
##
# Creates a URI for an HTTP proxy server from ENV variables.
#
# If +HTTP_PROXY+ is set a proxy will be returned.
#
# If +HTTP_PROXY_USER+ or +HTTP_PROXY_PASS+ are set the URI is given the
# indicated user and password unless HTTP_PROXY contains either of these in
# the URI.
#
# The +NO_PROXY+ ENV variable can be used to specify hosts which shouldn't
# be reached via proxy; if set it should be a comma separated list of
# hostname suffixes, optionally with +:port+ appended, for example
# <tt>example.com,some.host:8080</tt>. When set to <tt>*</tt> no proxy will
# be returned.
#
# For Windows users, lowercase ENV variables are preferred over uppercase ENV
# variables.
def proxy_from_env
env_proxy = ENV['http_proxy'] || ENV['HTTP_PROXY']
return nil if env_proxy.nil? or env_proxy.empty?
uri = URI normalize_uri env_proxy
env_no_proxy = ENV['no_proxy'] || ENV['NO_PROXY']
# '*' is special case for always bypass
return nil if env_no_proxy == '*'
if env_no_proxy then
uri.query = "no_proxy=#{escape(env_no_proxy)}"
end
unless uri.user or uri.password then
uri.user = escape ENV['http_proxy_user'] || ENV['HTTP_PROXY_USER']
uri.password = escape ENV['http_proxy_pass'] || ENV['HTTP_PROXY_PASS']
end
uri
end
##
# Returns true when proxy should by bypassed for host.
def proxy_bypass? host, port
host = host.downcase
host_port = [host, port].join ':'
@no_proxy.each do |name|
return true if host[-name.length, name.length] == name or
host_port[-name.length, name.length] == name
end
false
end
##
# Forces reconnection of HTTP connections.
def reconnect
@generation += 1
end
##
# Forces reconnection of SSL connections.
def reconnect_ssl
@ssl_generation += 1
end
##
# Finishes then restarts the Net::HTTP +connection+
def reset connection
http = connection.http
finish connection
start http
rescue Errno::ECONNREFUSED
e = Error.new "connection refused: #{http.address}:#{http.port}"
e.set_backtrace $@
raise e
rescue Errno::EHOSTDOWN
e = Error.new "host down: #{http.address}:#{http.port}"
e.set_backtrace $@
raise e
end
##
# Makes a request on +uri+. If +req+ is nil a Net::HTTP::Get is performed
# against +uri+.
#
# If a block is passed #request behaves like Net::HTTP#request (the body of
# the response will not have been read).
#
# +req+ must be a Net::HTTPRequest subclass (see Net::HTTP for a list).
#
# If there is an error and the request is idempotent according to RFC 2616
# it will be retried automatically.
def request uri, req = nil, &block
retried = false
bad_response = false
uri = URI uri
req = request_setup req || uri
response = nil
connection_for uri do |connection|
http = connection.http
begin
connection.requests += 1
response = http.request req, &block
if req.connection_close? or
(response.http_version <= '1.0' and
not response.connection_keep_alive?) or
response.connection_close? then
finish connection
end
rescue Net::HTTPBadResponse => e
message = error_message connection
finish connection
raise Error, "too many bad responses #{message}" if
bad_response or not can_retry? req
bad_response = true
retry
rescue *RETRIED_EXCEPTIONS => e
request_failed e, req, connection if
retried or not can_retry? req
reset connection
retried = true
retry
rescue Errno::EINVAL, Errno::ETIMEDOUT => e # not retried on ruby 2
request_failed e, req, connection if retried or not can_retry? req
reset connection
retried = true
retry
rescue Exception => e
finish connection
raise
ensure
connection.last_use = Time.now
end
end
@http_versions["#{uri.host}:#{uri.port}"] ||= response.http_version
response
end
##
# Raises an Error for +exception+ which resulted from attempting the request
# +req+ on the +connection+.
#
# Finishes the +connection+.
def request_failed exception, req, connection # :nodoc:
due_to = "(due to #{exception.message} - #{exception.class})"
message = "too many connection resets #{due_to} #{error_message connection}"
finish connection
raise Error, message, exception.backtrace
end
##
# Creates a GET request if +req_or_uri+ is a URI and adds headers to the
# request.
#
# Returns the request.
def request_setup req_or_uri # :nodoc:
req = if URI === req_or_uri then
Net::HTTP::Get.new req_or_uri.request_uri
else
req_or_uri
end
@headers.each do |pair|
req.add_field(*pair)
end
@override_headers.each do |name, value|
req[name] = value
end
unless req['Connection'] then
req.add_field 'Connection', 'keep-alive'
req.add_field 'Keep-Alive', @keep_alive
end
req
end
##
# Shuts down all connections
#
# *NOTE*: Calling shutdown can be dangerous!
#
# If any thread is still using a connection it may cause an error! Call
# #shutdown when you are completely done making requests!
def shutdown
@pool.available.shutdown do |http|
http.finish
end
end
##
# Enables SSL on +connection+
def ssl connection
connection.use_ssl = true
connection.ciphers = @ciphers if @ciphers
connection.ssl_timeout = @ssl_timeout if @ssl_timeout
connection.ssl_version = @ssl_version if @ssl_version
connection.verify_depth = @verify_depth
connection.verify_mode = @verify_mode
if OpenSSL::SSL::VERIFY_PEER == OpenSSL::SSL::VERIFY_NONE and
not Object.const_defined?(:I_KNOW_THAT_OPENSSL_VERIFY_PEER_EQUALS_VERIFY_NONE_IS_WRONG) then
warn <<-WARNING
!!!SECURITY WARNING!!!
The SSL HTTP connection to:
#{connection.address}:#{connection.port}
!!!MAY NOT BE VERIFIED!!!
On your platform your OpenSSL implementation is broken.
There is no difference between the values of VERIFY_NONE and VERIFY_PEER.
This means that attempting to verify the security of SSL connections may not
work. This exposes you to man-in-the-middle exploits, snooping on the
contents of your connection and other dangers to the security of your data.
To disable this warning define the following constant at top-level in your
application:
I_KNOW_THAT_OPENSSL_VERIFY_PEER_EQUALS_VERIFY_NONE_IS_WRONG = nil
WARNING
end
connection.ca_file = @ca_file if @ca_file
connection.ca_path = @ca_path if @ca_path
if @ca_file or @ca_path then
connection.verify_mode = OpenSSL::SSL::VERIFY_PEER
connection.verify_callback = @verify_callback if @verify_callback
end
if @certificate and @private_key then
connection.cert = @certificate
connection.key = @private_key
end
connection.cert_store = if @cert_store then
@cert_store
else
store = OpenSSL::X509::Store.new
store.set_default_paths
store
end
end
##
# SSL session lifetime
def ssl_timeout= ssl_timeout
@ssl_timeout = ssl_timeout
reconnect_ssl
end
##
# SSL version to use
def ssl_version= ssl_version
@ssl_version = ssl_version
reconnect_ssl
end
##
# Sets the depth of SSL certificate verification
def verify_depth= verify_depth
@verify_depth = verify_depth
reconnect_ssl
end
##
# Sets the HTTPS verify mode. Defaults to OpenSSL::SSL::VERIFY_PEER.
#
# Setting this to VERIFY_NONE is a VERY BAD IDEA and should NEVER be used.
# Securely transfer the correct certificate and update the default
# certificate store or set the ca file instead.
def verify_mode= verify_mode
@verify_mode = verify_mode
reconnect_ssl
end
##
# SSL verification callback.
def verify_callback= callback
@verify_callback = callback
reconnect_ssl
end
end
require 'net/http/persistent/connection'
require 'net/http/persistent/pool'
This happens because net-http-persistent internally refers to constants that do not exist in the Windows environment.
Possible solutions:
1) Add this line (you may hit other errors further along in processing); see the sketch after this list:
Process::RLIMIT_NOFILE = 7 if Gem.win_platform?
2) Change your platform to a *nix system:
irb(main):001:0> Gem.win_platform?
=> false
irb(main):002:0> Process::RLIMIT_NOFILE
=> 7
3) Wait for the fix PR to be merged:
https://github.com/drbrain/net-http-persistent/pull/90/files
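For option 1, here is a minimal sketch of where such a workaround could live. The placement in config/boot.rb and the const_defined? guard are my assumptions, not something rpush or net-http-persistent prescribes; the value 7 simply mirrors the *nix output shown above, and further Windows-only errors may still follow.
# config/boot.rb (or any file loaded before rpush) -- hypothetical placement
if Gem.win_platform? && !Process.const_defined?(:RLIMIT_NOFILE)
  # net-http-persistent reads this POSIX constant to size its connection pool.
  Process::RLIMIT_NOFILE = 7
end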
Related
I deployed a Discourse application in Rails on Heroku.
It deployed successfully.
I get an error when I do heroku run rake db:migrate db:seed_fu,
and also get another error on heroku run rake db:create.
config/discourse_defaults.conf
# message bus redis server address
message_bus_redis_host = redis://h:p27179a7ac96b0d215c36e5d5a4bda0c0565e1e6d96cdf043b9f5481d68dd1541@ec2-3-219-59-76.compute-1.amazonaws.com:27969
# message bus redis server port
message_bus_redis_port = 27969
# message bus redis slave server address
message_bus_redis_slave_host =
# message bus redis slave server port
message_bus_redis_slave_port = 27969
config/initializers/001-redis.rb
if Rails.env.development? && ENV['DISCOURSE_FLUSH_REDIS']
puts "Flushing redis (development mode)"
$redis.flushall
end
application.rb
# frozen_string_literal: true
# note, we require 2.5.2 and up cause 2.5.1 had some mail bugs we no longer
# monkey patch, so this avoids people booting with this problem version
begin
if !RUBY_VERSION.match?(/^2\.(([67])|(5\.[2-9]))/)
STDERR.puts "Discourse requires Ruby 2.5.2 or up"
exit 1
end
rescue
# no String#match?
STDERR.puts "Discourse requires Ruby 2.5.2 or up"
exit 1
end
require File.expand_path('../boot', __FILE__)
require 'active_record/railtie'
require 'action_controller/railtie'
require 'action_view/railtie'
require 'action_mailer/railtie'
require 'sprockets/railtie'
# Plugin related stuff
require_relative '../lib/discourse_event'
require_relative '../lib/discourse_plugin'
require_relative '../lib/discourse_plugin_registry'
require_relative '../lib/plugin_gem'
# Global config
require_relative '../app/models/global_setting'
GlobalSetting.configure!
unless Rails.env.test? && ENV['LOAD_PLUGINS'] != "1"
require_relative '../lib/custom_setting_providers'
end
GlobalSetting.load_defaults
if ENV['SKIP_DB_AND_REDIS'] == '1'
GlobalSetting.skip_db = true
GlobalSetting.skip_redis = true
end
require 'pry-rails' if Rails.env.development?
if defined?(Bundler)
bundler_groups = [:default]
if !Rails.env.production?
bundler_groups = bundler_groups.concat(Rails.groups(
assets: %w(development test profile)
))
end
Bundler.require(*bundler_groups)
end
module Discourse
class Application < Rails::Application
def config.database_configuration
if Rails.env.production?
GlobalSetting.database_config
else
super
end
end
# Settings in config/environments/* take precedence over those specified here.
# Application configuration should go into files in config/initializers
# -- all .rb files in that directory are automatically loaded.
# this pattern is somewhat odd but the reloader gets very
# confused here if we load the deps without `lib` it thinks
# discourse.rb is under the discourse folder incorrectly
require_dependency 'lib/discourse'
require_dependency 'lib/es6_module_transpiler/rails'
require_dependency 'lib/js_locale_helper'
# tiny file needed by site settings
require_dependency 'lib/highlight_js/highlight_js'
# mocha hates us, active_support/testing/mochaing.rb line 2 is requiring the wrong
# require, patched in source, on upgrade remove this
if Rails.env.test? || Rails.env.development?
require "mocha/version"
require "mocha/deprecation"
if Mocha::VERSION == "0.13.3" && Rails::VERSION::STRING == "3.2.12"
Mocha::Deprecation.mode = :disabled
end
end
# Disable so this is only run manually
# we may want to change this later on
# issue is image_optim crashes on missing dependencies
config.assets.image_optim = false
# Custom directories with classes and modules you want to be autoloadable.
config.autoload_paths += Dir["#{config.root}/app/serializers"]
config.autoload_paths += Dir["#{config.root}/lib/validators/"]
config.autoload_paths += Dir["#{config.root}/app"]
if Rails.env.development? && !Sidekiq.server?
config.autoload_paths += Dir["#{config.root}/lib"]
end
# Only load the plugins named here, in the order given (default is alphabetical).
# :all can be used as a placeholder for all plugins not explicitly named.
# config.plugins = [ :exception_notification, :ssl_requirement, :all ]
config.assets.paths += %W(#{config.root}/config/locales #{config.root}/public/javascripts)
if Rails.env == "development" || Rails.env == "test"
config.assets.paths << "#{config.root}/test/javascripts"
config.assets.paths << "#{config.root}/test/stylesheets"
config.assets.paths << "#{config.root}/node_modules"
end
# Allows us to skip minifincation on some files
config.assets.skip_minification = []
# explicitly precompile any images in plugins ( /assets/images ) path
config.assets.precompile += [lambda do |filename, path|
path =~ /assets\/images/ && !%w(.js .css).include?(File.extname(filename))
end]
config.assets.precompile += %w{
vendor.js
admin.js
preload-store.js
browser-update.js
break_string.js
ember_jquery.js
pretty-text-bundle.js
wizard-application.js
wizard-vendor.js
plugin.js
plugin-third-party.js
markdown-it-bundle.js
service-worker.js
google-tag-manager.js
google-universal-analytics.js
preload-application-data.js
print-page.js
omniauth-complete.js
activate-account.js
auto-redirect.js
wizard-start.js
onpopstate-handler.js
embed-application.js
}
# Precompile all available locales
unless GlobalSetting.try(:omit_base_locales)
Dir.glob("#{config.root}/app/assets/javascripts/locales/*.js.erb").each do |file|
config.assets.precompile << "locales/#{file.match(/([a-z_A-Z]+\.js)\.erb$/)[1]}"
end
end
# out of the box sprockets 3 grabs loose files that are hanging in assets,
# the exclusion list does not include hbs so you double compile all this stuff
initializer :fix_sprockets_loose_file_searcher, after: :set_default_precompile do |app|
app.config.assets.precompile.delete(Sprockets::Railtie::LOOSE_APP_ASSETS)
start_path = ::Rails.root.join("app/assets").to_s
exclude = ['.es6', '.hbs', '.js', '.css', '']
app.config.assets.precompile << lambda do |logical_path, filename|
filename.start_with?(start_path) &&
!exclude.include?(File.extname(logical_path))
end
end
# Set Time.zone default to the specified zone and make Active Record auto-convert to this zone.
# Run "rake -D time" for a list of tasks for finding time zone names. Default is UTC.
config.time_zone = 'UTC'
# auto-load locales in plugins
# NOTE: we load both client & server locales since some might be used by PrettyText
config.i18n.load_path += Dir["#{Rails.root}/plugins/*/config/locales/*.yml"]
# Configure the default encoding used in templates for Ruby 1.9.
config.encoding = 'utf-8'
config.assets.initialize_on_precompile = false
# Configure sensitive parameters which will be filtered from the log file.
config.filter_parameters += [
:password,
:pop3_polling_password,
:api_key,
:s3_secret_access_key,
:twitter_consumer_secret,
:facebook_app_secret,
:github_client_secret,
:second_factor_token,
]
# Enable the asset pipeline
config.assets.enabled = true
# Version of your assets, change this if you want to expire all your assets
config.assets.version = '1.2.4'
# see: http://stackoverflow.com/questions/11894180/how-does-one-correctly-add-custom-sql-dml-in-migrations/11894420#11894420
config.active_record.schema_format = :sql
# per https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet
config.pbkdf2_iterations = 64000
config.pbkdf2_algorithm = "sha256"
# rack lock is nothing but trouble, get rid of it
# for some reason still seeing it in Rails 4
config.middleware.delete Rack::Lock
# wrong place in middleware stack AND request tracker handles it
config.middleware.delete Rack::Runtime
# ETags are pointless, we are dynamically compressing
# so nginx strips etags, may revisit when mainline nginx
# supports etags (post 1.7)
config.middleware.delete Rack::ETag
unless Rails.env.development?
require 'middleware/enforce_hostname'
config.middleware.insert_after Rack::MethodOverride, Middleware::EnforceHostname
end
require 'content_security_policy/middleware'
config.middleware.swap ActionDispatch::ContentSecurityPolicy::Middleware, ContentSecurityPolicy::Middleware
require 'middleware/discourse_public_exceptions'
config.exceptions_app = Middleware::DiscoursePublicExceptions.new(Rails.public_path)
# Our templates shouldn't start with 'discourse/templates'
config.handlebars.templates_root = 'discourse/templates'
config.handlebars.raw_template_namespace = "Discourse.RAW_TEMPLATES"
require 'discourse_redis'
require 'logster/redis_store'
require 'freedom_patches/redis'
# Use redis for our cache
config.cache_store = DiscourseRedis.new_redis_store
$redis = DiscourseRedis.new
Logster.store = Logster::RedisStore.new(DiscourseRedis.new)
# we configure rack cache on demand in an initializer
# our setup does not use rack cache and instead defers to nginx
config.action_dispatch.rack_cache = nil
# ember stuff only used for asset precompliation, production variant plays up
config.ember.variant = :development
config.ember.ember_location = "#{Rails.root}/vendor/assets/javascripts/production/ember.js"
config.ember.handlebars_location = "#{Rails.root}/vendor/assets/javascripts/handlebars.js"
require 'auth'
if GlobalSetting.relative_url_root.present?
config.relative_url_root = GlobalSetting.relative_url_root
end
if Rails.env == "test"
if ENV['LOAD_PLUGINS'] == "1"
Discourse.activate_plugins!
end
else
Discourse.activate_plugins!
end
require_dependency 'stylesheet/manager'
require_dependency 'svg_sprite/svg_sprite'
config.after_initialize do
# require common dependencies that are often required by plugins
# in the past observers would load them as side-effects
# correct behavior is for plugins to require stuff they need,
# however it would be a risky and breaking change not to require here
require_dependency 'category'
require_dependency 'post'
require_dependency 'topic'
require_dependency 'user'
require_dependency 'post_action'
require_dependency 'post_revision'
require_dependency 'notification'
require_dependency 'topic_user'
require_dependency 'topic_view'
require_dependency 'topic_list'
require_dependency 'group'
require_dependency 'user_field'
require_dependency 'post_action_type'
# Ensure that Discourse event triggers for web hooks are loaded
require_dependency 'web_hook'
# So open id logs somewhere sane
OpenID::Util.logger = Rails.logger
# Load plugins
Discourse.plugins.each(&:notify_after_initialize)
# we got to clear the pool in case plugins connect
ActiveRecord::Base.connection_handler.clear_active_connections!
# This nasty hack is required for not precompiling QUnit assets
# in test mode. see: https://github.com/rails/sprockets-rails/issues/299#issuecomment-167701012
ActiveSupport.on_load(:action_view) do
default_checker = ActionView::Base.precompiled_asset_checker
ActionView::Base.precompiled_asset_checker = -> logical_path do
default_checker[logical_path] ||
%w{qunit.js qunit.css test_helper.css test_helper.js wizard/test/test_helper.js}.include?(logical_path)
end
end
end
if ENV['RBTRACE'] == "1"
require 'rbtrace'
end
config.generators do |g|
g.test_framework :rspec, fixture: false
end
# we have a monkey_patch we need to require early... prior to connection
# init
require 'freedom_patches/reaper'
end
end
app/models/global_setting.rb
# frozen_string_literal: true
class GlobalSetting
def self.register(key, default)
define_singleton_method(key) do
provider.lookup(key, default)
end
end
VALID_SECRET_KEY ||= /^[0-9a-f]{128}$/
# this is named SECRET_TOKEN as opposed to SECRET_KEY_BASE
# for legacy reasons
REDIS_SECRET_KEY ||= 'SECRET_TOKEN'
REDIS_VALIDATE_SECONDS ||= 30
# In Rails secret_key_base is used to encrypt the cookie store
# the cookie store contains session data
# Discourse also uses this secret key to digest user auth tokens
# This method will
# - use existing token if already set in ENV or discourse.conf
# - generate a token on the fly if needed and cache in redis
# - enforce rules about token format falling back to redis if needed
def self.safe_secret_key_base
if @safe_secret_key_base && @token_in_redis && (@token_last_validated + REDIS_VALIDATE_SECONDS) < Time.now
@token_last_validated = Time.now
token = $redis.without_namespace.get(REDIS_SECRET_KEY)
if token.nil?
$redis.without_namespace.set(REDIS_SECRET_KEY, @safe_secret_key_base)
end
end
@safe_secret_key_base ||= begin
token = secret_key_base
if token.blank? || token !~ VALID_SECRET_KEY
@token_in_redis = true
@token_last_validated = Time.now
token = $redis.without_namespace.get(REDIS_SECRET_KEY)
unless token && token =~ VALID_SECRET_KEY
token = SecureRandom.hex(64)
$redis.without_namespace.set(REDIS_SECRET_KEY, token)
end
end
if !secret_key_base.blank? && token != secret_key_base
STDERR.puts "WARNING: DISCOURSE_SECRET_KEY_BASE is invalid, it was re-generated"
end
token
end
rescue Redis::CommandError => e
@safe_secret_key_base = SecureRandom.hex(64) if e.message =~ /READONLY/
end
def self.load_defaults
default_provider = FileProvider.from(File.expand_path('../../../config/discourse_defaults.conf', __FILE__))
default_provider.keys.concat(@provider.keys).uniq.each do |key|
default = default_provider.lookup(key, nil)
instance_variable_set("@#{key}_cache", nil)
define_singleton_method(key) do
val = instance_variable_get("@#{key}_cache")
unless val.nil?
val == :missing ? nil : val
else
val = provider.lookup(key, default)
if val.nil?
val = :missing
end
instance_variable_set("@#{key}_cache", val)
val == :missing ? nil : val
end
end
end
end
def self.skip_db=(v)
@skip_db = v
end
def self.skip_db?
@skip_db
end
def self.skip_redis=(v)
@skip_redis = v
end
def self.skip_redis?
@skip_redis
end
def self.use_s3?
(@use_s3 ||=
begin
s3_bucket &&
s3_region && (
s3_use_iam_profile || (s3_access_key_id && s3_secret_access_key)
) ? :true : :false
end) == :true
end
def self.s3_bucket_name
@s3_bucket_name ||= s3_bucket.downcase.split("/")[0]
end
# for testing
def self.reset_s3_cache!
@use_s3 = nil
end
def self.database_config
hash = { "adapter" => "postgresql" }
%w{
pool
connect_timeout
timeout
socket
host
backup_host
port
backup_port
username
password
replica_host
replica_port
}.each do |s|
if val = self.public_send("db_#{s}")
hash[s] = val
end
end
hash["adapter"] = "postgresql_fallback" if hash["replica_host"]
hostnames = [ hostname ]
hostnames << backup_hostname if backup_hostname.present?
hostnames << URI.parse(cdn_url).host if cdn_url.present?
hash["host_names"] = hostnames
hash["database"] = db_name
hash["prepared_statements"] = !!self.db_prepared_statements
{ "production" => hash }
end
# For testing purposes
def self.reset_redis_config!
@config = nil
@message_bus_config = nil
end
def self.redis_config
@config ||=
begin
c = {}
c[:host] = redis_host if redis_host
c[:port] = redis_port if redis_port
if redis_slave_host && redis_slave_port
c[:slave_host] = redis_slave_host
c[:slave_port] = redis_slave_port
c[:connector] = DiscourseRedis::Connector
end
c[:password] = redis_password if redis_password.present?
c[:db] = redis_db if redis_db != 0
c[:db] = 1 if Rails.env == "test"
c[:id] = nil if redis_skip_client_commands
c.freeze
end
end
def self.message_bus_redis_config
return redis_config unless message_bus_redis_enabled
@message_bus_config ||=
begin
c = {}
c[:host] = message_bus_redis_host if message_bus_redis_host
c[:port] = message_bus_redis_port if message_bus_redis_port
if message_bus_redis_slave_host && message_bus_redis_slave_port
c[:slave_host] = message_bus_redis_slave_host
c[:slave_port] = message_bus_redis_slave_port
c[:connector] = DiscourseRedis::Connector
end
c[:password] = message_bus_redis_password if message_bus_redis_password.present?
c[:db] = message_bus_redis_db if message_bus_redis_db != 0
c[:db] = 1 if Rails.env == "test"
c[:id] = nil if message_bus_redis_skip_client_commands
c.freeze
end
end
def self.add_default(name, default)
unless self.respond_to? name
define_singleton_method(name) do
default
end
end
end
class BaseProvider
def self.coerce(setting)
return setting == "true" if setting == "true" || setting == "false"
return $1.to_i if setting.to_s.strip =~ /^([0-9]+)$/
setting
end
def resolve(current, default)
BaseProvider.coerce(
if current.present?
current
else
default.present? ? default : nil
end
)
end
end
class FileProvider < BaseProvider
attr_reader :data
def self.from(file)
if File.exists?(file)
parse(file)
end
end
def initialize(file)
@file = file
@data = {}
end
def read
ERB.new(File.read(@file)).result().split("\n").each do |line|
if line =~ /^\s*([a-z_]+[a-z0-9_]*)\s*=\s*(\"([^\"]*)\"|\'([^\']*)\'|[^#]*)/
@data[$1.strip.to_sym] = ($4 || $3 || $2).strip
end
end
end
def lookup(key, default)
var = @data[key]
resolve(var, var.nil? ? default : "")
end
def keys
@data.keys
end
def self.parse(file)
provider = self.new(file)
provider.read
provider
end
private_class_method :parse
end
class EnvProvider < BaseProvider
def lookup(key, default)
var = ENV["DISCOURSE_" + key.to_s.upcase]
resolve(var , var.nil? ? default : nil)
end
def keys
ENV.keys.select { |k| k =~ /^DISCOURSE_/ }.map { |k| k[10..-1].downcase.to_sym }
end
end
class BlankProvider < BaseProvider
def lookup(key, default)
if key == :redis_port
return ENV["DISCOURSE_REDIS_PORT"] if ENV["DISCOURSE_REDIS_PORT"]
end
default
end
def keys
[]
end
end
class << self
attr_accessor :provider
end
def self.configure!
if Rails.env == "test"
@provider = BlankProvider.new
else
@provider =
FileProvider.from(File.expand_path('../../../config/discourse.conf', __FILE__)) ||
EnvProvider.new
end
end
end
Error:
Failed to report error: Name or service not known 2 Name or service not known subscribe failed, reconnecting in 1 second. Call stack ["/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/connection/hiredis.rb:19:in `connect'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/connection/hiredis.rb:19:in `connect'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:334:in `establish_connection'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:99:in `block in connect'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:291:in `with_reconnect'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:98:in `connect'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:274:in `with_socket_timeout'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/client.rb:131:in `call_loop'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/subscribe.rb:43:in `subscription'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis/subscribe.rb:12:in `subscribe'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:2824:in `_subscription'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:2192:in `block in subscribe'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:45:in `block in synchronize'",
"/app/vendor/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:45:in `synchronize'",
"/app/vendor/bundle/ruby/2.5.0/gems/redis-4.0.1/lib/redis.rb:2191:in `subscribe'",
"/app/vendor/bundle/ruby/2.5.0/gems/message_bus-2.2.2/lib/message_bus/backends/redis.rb:287:in `global_subscribe'",
"/app/vendor/bundle/ruby/2.5.0/gems/message_bus-2.2.2/lib/message_bus.rb:721:in `global_subscribe_thread'",
"/app/vendor/bundle/ruby/2.5.0/gems/message_bus-2.2.2/lib/message_bus.rb:669:in `block in new_subscriber_thread'"]
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
rake aborted!
Name or service not known
Add your db and redis host name and port number in discourse_defaults.conf.
The settings are available in your Heroku add-ons. In my case they are as follows:
redis_host = ***-***-***.compute-1.amazonaws.com
redis_port = *****
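Note that redis_host should be the host name only, as in the example above, not the full redis:// URL. A minimal, hypothetical Ruby sketch (with a made-up URL) of how the individual values can be pulled out of a Heroku-style REDIS_URL:
require 'uri'

# REDIS_URL is assumed to look like redis://h:password@host:port
uri = URI.parse(ENV.fetch('REDIS_URL', 'redis://h:secret@ec2-example.compute-1.amazonaws.com:27969'))
puts "redis_host = #{uri.host}"          # host name only, no scheme or credentials
puts "redis_port = #{uri.port}"
puts "redis_password = #{uri.password}"  # only needed if your Redis requires auth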
TL;DR:
Why can't the Sneakers worker connect to the database or query it?
(General advice on "do's" and "don'ts" is also welcome in comments.)
Full question:
I am able to execute an RPC call that returns a simple string, but I can't execute an RPC call that queries the database on the server side. I have read the docs and tried many SO posts and blog tutorials, but I am still missing some piece.
I have two services. The first service (Client) uses the Bunny gem and makes an RPC call to the second service (RPCServer), which listens on workers using the Sneakers gem. Both services are Rails apps.
RabbitMQ is running in a Docker container:
docker run -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Postgres database is installed on a local machine.
Client service (mostly from the RabbitMQ Bunny docs):
# app/services/client.rb
class Client
attr_accessor :call_id, :lock, :condition, :reply_queue, :exchange, :params, :response, :server_queue_name, :channel, :reply_queue_name
def initialize(rpc_route:, params:)
@channel = channel
@exchange = channel.fanout("Client.Server.exchange.#{params[:controller]}")
@server_queue_name = "Server.Client.queue.#{rpc_route}"
@reply_queue_name = "Client.Server.queue.#{params[:controller]}"
@params = params
setup_reply_queue
end
def setup_reply_queue
@lock = Mutex.new
@condition = ConditionVariable.new
that = self
@reply_queue = channel.queue(reply_queue_name, durable: true)
reply_queue.subscribe do |_delivery_info, properties, payload|
if properties[:correlation_id] == that.call_id
that.response = payload
that.lock.synchronize { that.condition.signal }
end
end
end
def call
@call_id = "NAIVE_RAND_#{rand}#{rand}#{rand}"
exchange.publish(params.to_json,
routing_key: server_queue_name,
correlation_id: call_id,
reply_to: reply_queue.name)
lock.synchronize { condition.wait(lock) }
connection.close
response
end
def channel
@channel ||= connection.create_channel
end
def connection
@connection ||= Bunny.new.tap { |c| c.start }
end
end
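For reference, a hypothetical invocation of the Client class above; the route and params below are made up for illustration and are not taken from the question.
response = Client.new(
  rpc_route: 'v1/filters/posts',
  params: { controller: 'v1/filters/posts', title: 'hello' }
).call
puts response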
RPCServer service, using this gist (comments here are the "meat" of my question):
# app/workers/posts_worker.rb
require 'sneakers'
require 'sneakers/runner'
require 'byebug'
require 'oj'
class RpcServer
include Sneakers::Worker
from_queue 'Client.Server.queue.v1/filters/posts', durable: true, env: nil
def work_with_params(deserialized_msg, delivery_info, metadata)
post = {}
p "ActiveRecord::Base.connected?: #{ActiveRecord::Base.connected?}" # => true
##### This gets logged
Rails.logger.info "ActiveRecord::Base.connection_pool: #{ActiveRecord::Base.connection_pool}\n\n-------"
##### This never gets logged
Rails.logger.info "ActiveRecord::Base.connection_pool.with_connection: #{ActiveRecord::Base.connection_pool.with_connection}\n\n--------"
### interpreter never reaches this place when ActiveRecord methods like `with_connection`, `where`, `count` etc. are used
ActiveRecord::Base.connection_pool.with_connection do
post = Post.first.to_json
end
##### first commented `publish()` works fine and RPC works when no ActiveRecord is involved (this is, assuming above code using ActiveRecord is commented out)
##### second publish is not working
# publish("response from RPCServer", {
publish(post.to_json, {
to_queue: metadata[:reply_to],
correlation_id: metadata[:correlation_id],
content_type: metadata[:content_type]
})
ack!
end
end
Sneakers::Runner.new([RpcServer]).run
RPCServer sneakers configuration:
# config/initializers/sneakers.rb
Sneakers.configure({
amqp: "amqp://guest:guest#localhost:5672",
vhost: '/',
workers: 4,
log: 'log/sneakers.log',
pid_path: "tmp/pids/sneakers.pid",
timeout_job_after: 5,
prefetch: 10,
threads: 10,
durable: true,
ack: true,
heartbeat: 2,
exchange: "",
hooks: {
before_fork: -> {
Rails.logger.info('Worker: Disconnect from the database')
ActiveRecord::Base.connection_pool.disconnect!
Rails.logger.info("before_fork: ActiveRecord::Base.connected?: #{ActiveRecord::Base.connected?}") # => false
},
after_fork: -> {
ActiveRecord::Base.connection
Rails.logger.info("after_fork: ActiveRecord::Base.connected?: #{ActiveRecord::Base.connected?}") # => true
Rails.logger.info('Worker: Reconnect to the database')
},
timeout_job_after: 60
})
Sneakers.logger.level = Logger::INFO
RPCServer puma configuration:
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
preload_app!
### tried and did not work
# on_worker_boot do
# ActiveSupport.on_load(:active_record) do
# ActiveRecord::Base.establish_connection
# end
# end
before_fork do |server, worker|
# other settings
if defined?(ActiveRecord::Base)
ActiveRecord::Base.connection.disconnect!
end
end
after_worker_boot do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
end
end
plugin :tmp_restart
For completeness, I also have an external Rakefile that binds queues to exchanges (probably not important in this case):
namespace :rabbitmq do
desc "Setup routing"
task :setup do
conn = start_bunny
rpc_route service: :blog, from: 'v1/filters/posts_mappings', to: 'v1/filters/posts'
conn.close
end
def rpc_route(service:, from:, to:)
...
end
def start_bunny
...
end
end
I tried many Sneakers configurations and many orders of launching RabbitMQ, resetting it, deleting queues, connections, etc. All of it is hard to list here and probably not the cause.
Why can't I connect to the database or execute ActiveRecord methods? What am I missing?
OK, I got it. The problem was the last line of the worker in RPCServer:
Sneakers::Runner.new([RpcServer]).run
It was running the worker outside of the Rails app. Commenting this out solved my problem of the worker not being able to query the database.
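If you still want a long-running worker process that boots the full Rails environment, one common alternative is the rake task bundled with the sneakers gem. This is only a sketch under that assumption:
# Rakefile -- standard Rails Rakefile plus sneakers' bundled task
require_relative 'config/application'
require 'sneakers/tasks'
Rails.application.load_tasks

# then start the worker with:
#   WORKERS=RpcServer bundle exec rake sneakers:run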
doorkeeper.rb
Doorkeeper.configure do
# Change the ORM that doorkeeper will use (needs plugins)
orm :active_record
# This block will be called to check whether the resource owner is authenticated or not.
resource_owner_authenticator do
fail "Please configure doorkeeper resource_owner_authenticator block located in #{__FILE__}"
# Put your resource owner authentication logic here.
# Example implementation:
# User.find_by_id(session[:user_id]) || redirect_to(new_user_session_url)
end
# Make it by phone instead of email
resource_owner_from_credentials do |_routes|
if params[:scope].present?
case params[:scope]
when "passenger"
PassengerUser.authenticate(params[:email], params[:password])
when "driver"
DriverUser.authenticate(params[:email], params[:password])
end
else
PassengerUser.authenticate(params[:email], params[:password])
end
end
grant_flows %w(password)
skip_authorization do
true
end
# If you want to restrict access to the web interface for adding oauth authorized applications, you need to declare the block below.
# admin_authenticator do
# # Put your admin authentication logic here.
# # Example implementation:
# Admin.find_by_id(session[:admin_id]) || redirect_to(new_admin_session_url)
# end
# Authorization Code expiration time (default 10 minutes).
# authorization_code_expires_in 10.minutes
# Access token expiration time (default 2 hours).
# If you want to disable expiration, set this to nil.
# access_token_expires_in 2.hours
# Assign a custom TTL for implicit grants.
# custom_access_token_expires_in do |oauth_client|
# oauth_client.application.additional_settings.implicit_oauth_expiration
# end
# Use a custom class for generating the access token.
# https://github.com/doorkeeper-gem/doorkeeper#custom-access-token-generator
# access_token_generator '::Doorkeeper::JWT'
# The controller Doorkeeper::ApplicationController inherits from.
# Defaults to ActionController::Base.
# https://github.com/doorkeeper-gem/doorkeeper#custom-base-controller
# base_controller 'ApplicationController'
# Reuse access token for the same resource owner within an application (disabled by default)
# Rationale: https://github.com/doorkeeper-gem/doorkeeper/issues/383
#reuse_access_token
# Issue access tokens with refresh token (disabled by default)
use_refresh_token
# Provide support for an owner to be assigned to each registered application (disabled by default)
# Optional parameter confirmation: true (default false) if you want to enforce ownership of
# a registered application
# Note: you must also run the rails g doorkeeper:application_owner generator to provide the necessary support
# enable_application_owner confirmation: false
# Define access token scopes for your provider
# For more information go to
# https://github.com/doorkeeper-gem/doorkeeper/wiki/Using-Scopes
default_scopes :passenger
optional_scopes :driver
# Change the way client credentials are retrieved from the request object.
# By default it retrieves first from the `HTTP_AUTHORIZATION` header, then
# falls back to the `:client_id` and `:client_secret` params from the `params` object.
# Check out https://github.com/doorkeeper-gem/doorkeeper/wiki/Changing-how-clients-are-authenticated
# for more information on customization
# client_credentials :from_basic, :from_params
# Change the way access token is authenticated from the request object.
# By default it retrieves first from the `HTTP_AUTHORIZATION` header, then
# falls back to the `:access_token` or `:bearer_token` params from the `params` object.
# Check out https://github.com/doorkeeper-gem/doorkeeper/wiki/Changing-how-clients-are-authenticated
# for more information on customization
# access_token_methods :from_bearer_authorization, :from_access_token_param, :from_bearer_param
# Change the native redirect uri for client apps
# When clients register with the following redirect uri, they won't be redirected to any server and the authorization code will be displayed within the provider
# The value can be any string. Use nil to disable this feature. When disabled, clients must provide a valid URL
# (Similar behaviour: https://developers.google.com/accounts/docs/OAuth2InstalledApp#choosingredirecturi)
#
# native_redirect_uri 'urn:ietf:wg:oauth:2.0:oob'
# Forces the usage of the HTTPS protocol in non-native redirect uris (enabled
# by default in non-development environments). OAuth2 delegates security in
# communication to the HTTPS protocol so it is wise to keep this enabled.
#
# Callable objects such as proc, lambda, block or any object that responds to
# #call can be used in order to allow conditional checks (to allow non-SSL
# redirects to localhost for example).
#
# force_ssl_in_redirect_uri !Rails.env.development?
#
# force_ssl_in_redirect_uri { |uri| uri.host != 'localhost' }
# Specify what redirect URI's you want to block during creation. Any redirect
# URI is whitelisted by default.
#
# You can use this option in order to forbid URI's with 'javascript' scheme
# for example.
#
# forbid_redirect_uri { |uri| uri.scheme.to_s.downcase == 'javascript' }
# Specify what grant flows are enabled in array of Strings. The valid
# strings and the flows they enable are:
#
# "authorization_code" => Authorization Code Grant Flow
# "implicit" => Implicit Grant Flow
# "password" => Resource Owner Password Credentials Grant Flow
# "client_credentials" => Client Credentials Grant Flow
#
# If not specified, Doorkeeper enables authorization_code and
# client_credentials.
#
# implicit and password grant flows have risks that you should understand
# before enabling:
# http://tools.ietf.org/html/rfc6819#section-4.4.2
# http://tools.ietf.org/html/rfc6819#section-4.4.3
#
# grant_flows %w[authorization_code client_credentials]
# Hook into the strategies' request & response life-cycle in case your
# application needs advanced customization or logging:
#
# before_successful_strategy_response do |request|
# puts "BEFORE HOOK FIRED! #{request}"
# end
#
# after_successful_strategy_response do |request, response|
# puts "AFTER HOOK FIRED! #{request}, #{response}"
# end
# Under some circumstances you might want to have applications auto-approved,
# so that the user skips the authorization step.
# For example if dealing with a trusted application.
# skip_authorization do |resource_owner, client|
# client.superapp? or resource_owner.admin?
# end
# WWW-Authenticate Realm (default "Doorkeeper").
# realm "Doorkeeper"
end
Doorkeeper.configuration.token_grant_types << "password"
migration:
class CreateDoorkeeperTables < ActiveRecord::Migration[5.1]
def change
create_table :oauth_access_tokens do |t|
t.integer :resource_owner_id
t.integer :application_id
t.string :token, null: false
t.string :refresh_token
t.integer :expires_in
t.datetime :revoked_at
t.datetime :created_at, null: false
t.string :scopes
end
add_index :oauth_access_tokens, :token, unique: true
add_index :oauth_access_tokens, :resource_owner_id
add_index :oauth_access_tokens, :refresh_token, unique: true
add_foreign_key(
:oauth_access_tokens,
:passenger_users,
column: :resource_owner_id
)
end
end
Is it possible to revoke all of a user's tokens after login (except the new one created at login)? The user should be able to use the app from only one device.
As answered in the original issue, for those who search for something similar on SO:
How about something like this in resource_owner_from_credentials (or whatever authenticator you use):
resource_owner_from_credentials do |_routes|
owner = if params[:scope].present?
case params[:scope]
when "passenger"
PassengerUser.authenticate(params[:email], params[:password])
when "driver"
DriverUser.authenticate(params[:email], params[:password])
end
else
PassengerUser.authenticate(params[:email], params[:password])
end
Doorkeeper::AccessToken.where(resource_owner_id: owner.id).delete_all # this removes all the tokens for this user; a new one will be created after this method finishes
owner
end
The only thing you need to know: if you are using different models as resource owners, you could face a problem where an AccessToken was granted for Admin #1 and another token for Passenger #1 (both tokens would be removed). In that case you could add an additional field to the Doorkeeper::AccessToken model and patch it (and maybe some internal gem classes) to store the required information about the resource owner's model name.
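A hypothetical sketch of that extra field, since the migration above only stores resource_owner_id. The column name, the index, and the scoped delete_all are assumptions for illustration, not Doorkeeper API, and you would still need to patch token creation so the new column gets populated:
# Assumed migration adding a model-name column next to resource_owner_id.
class AddResourceOwnerTypeToOauthAccessTokens < ActiveRecord::Migration[5.1]
  def change
    add_column :oauth_access_tokens, :resource_owner_type, :string
    add_index :oauth_access_tokens, [:resource_owner_id, :resource_owner_type],
              name: 'index_access_tokens_on_owner_id_and_type'
  end
end

# ...and in resource_owner_from_credentials, scope the cleanup to one owner model:
# Doorkeeper::AccessToken
#   .where(resource_owner_id: owner.id, resource_owner_type: owner.class.name)
#   .delete_all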
I am trying to package a new Vagrant base box, and I get this error while doing so.
C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/plugins/commands/package/command.rb:59:in `package_base': C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/lib/vagrant/machine.rb:358: syntax error, unexpected end-of-input, expecting keyword_end (SyntaxError)
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/plugins/commands/package/command.rb:42:in `execute'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/lib/vagrant/cli.rb:42:in `execute'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/lib/vagrant/environment.rb:248:in `cli'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/bin/vagrant:158:in `<main>'
The content of "machine.rb" is below.
require "thread"
require "log4r"
module Vagrant
# This represents a machine that Vagrant manages. This provides a singular
# API for querying the state and making state changes to the machine, which
# is backed by any sort of provider (VirtualBox, VMWare, etc.).
class Machine
# The box that is backing this machine.
#
# @return [Box]
attr_accessor :box
# Configuration for the machine.
#
# @return [Object]
attr_accessor :config
# Directory where machine-specific data can be stored.
#
# @return [Pathname]
attr_reader :data_dir
# The environment that this machine is a part of.
#
# @return [Environment]
attr_reader :env
# ID of the machine. This ID comes from the provider and is not
# guaranteed to be of any particular format except that it is
# a string.
#
# @return [String]
attr_reader :id
# Name of the machine. This is assigned by the Vagrantfile.
#
# @return [Symbol]
attr_reader :name
# The provider backing this machine.
#
# @return [Object]
attr_reader :provider
# The provider-specific configuration for this machine.
#
# @return [Object]
attr_accessor :provider_config
# The name of the provider.
#
# #return [Symbol]
attr_reader :provider_name
# The options given to the provider when registering the plugin.
#
# #return [Hash]
attr_reader :provider_options
# The UI for outputting in the scope of this machine.
#
# #return [UI]
attr_reader :ui
# The Vagrantfile that this machine is attached to.
#
# #return [Vagrantfile]
attr_reader :vagrantfile
# Initialize a new machine.
#
# #param [String] name Name of the virtual machine.
# #param [Class] provider The provider backing this machine. This is
# currently expected to be a V1 `provider` plugin.
# #param [Object] provider_config The provider-specific configuration for
# this machine.
# #param [Hash] provider_options The provider-specific options from the
# plugin definition.
# #param [Object] config The configuration for this machine.
# #param [Pathname] data_dir The directory where machine-specific data
# can be stored. This directory is ensured to exist.
# #param [Box] box The box that is backing this virtual machine.
# #param [Environment] env The environment that this machine is a
# part of.
def initialize(name, provider_name, provider_cls, provider_config, provider_options, config, data_dir, box, env, vagrantfile, base=false)
@logger = Log4r::Logger.new("vagrant::machine")
@logger.info("Initializing machine: #{name}")
@logger.info(" - Provider: #{provider_cls}")
@logger.info(" - Box: #{box}")
@logger.info(" - Data dir: #{data_dir}")
@box = box
@config = config
@data_dir = data_dir
@env = env
@vagrantfile = vagrantfile
@guest = Guest.new(
self,
Vagrant.plugin("2").manager.guests,
Vagrant.plugin("2").manager.guest_capabilities)
@name = name
@provider_config = provider_config
@provider_name = provider_name
@provider_options = provider_options
@ui = Vagrant::UI::Prefixed.new(@env.ui, @name)
@ui_mutex = Mutex.new
# Read the ID, which is usually in local storage
@id = nil
# XXX: This is temporary. This will be removed very soon.
if base
@id = name
else
# Read the id file from the data directory if it exists as the
# ID for the pre-existing physical representation of this machine.
id_file = @data_dir.join("id")
@id = id_file.read.chomp if id_file.file?
end
# Initializes the provider last so that it has access to all the
# state we setup on this machine.
@provider = provider_cls.new(self)
@provider._initialize(@provider_name, self)
end
# This calls an action on the provider. The provider may or may not
# actually implement the action.
#
# @param [Symbol] name Name of the action to run.
# @param [Hash] extra_env This data will be passed into the action runner
# as extra data set on the environment hash for the middleware
# runner.
def action(name, extra_env=nil)
@logger.info("Calling action: #{name} on provider #{@provider}")
# Get the callable from the provider.
callable = @provider.action(name)
# If this action doesn't exist on the provider, then an exception
# must be raised.
if callable.nil?
raise Errors::UnimplementedProviderAction,
:action => name,
:provider => @provider.to_s
end
# Run the action with the action runner on the environment
env = {
:action_name => "machine_action_#{name}".to_sym,
:machine => self,
:machine_action => name,
:ui => @ui
}.merge(extra_env || {})
@env.action_runner.run(callable, env)
end
# Returns a communication object for executing commands on the remote
# machine. Note that the _exact_ semantics of this are up to the
# communication provider itself. Despite this, the semantics are expected
# to be consistent across operating systems. For example, all linux-based
# systems should have similar communication (usually a shell). All
# Windows systems should have similar communication as well. Therefore,
# prior to communicating with the machine, users of this method are
# expected to check the guest OS to determine their behavior.
#
# This method will _always_ return some valid communication object.
# The `ready?` API can be used on the object to check if communication
# is actually ready.
#
# @return [Object]
def communicate
if !@communicator
# For now, we always return SSH. In the future, we'll abstract
# this and allow plugins to define new methods of communication.
klass = Vagrant.plugin("2").manager.communicators[:ssh]
@communicator = klass.new(self)
end
@communicator
end
# Returns a guest implementation for this machine. The guest implementation
# knows how to do guest-OS specific tasks, such as configuring networks,
# mounting folders, etc.
#
# @return [Guest]
def guest
raise Errors::MachineGuestNotReady if !communicate.ready?
@guest.detect! if !@guest.ready?
@guest
end
# This sets the unique ID associated with this machine. This will
# persist this ID so that in the future Vagrant will be able to find
# this machine again. The unique ID must be absolutely unique to the
# virtual machine, and can be used by providers for finding the
# actual machine associated with this instance.
#
# **WARNING:** Only providers should ever use this method.
#
# @param [String] value The ID.
def id=(value)
@logger.info("New machine ID: #{value.inspect}")
# The file that will store the id if we have one. This allows the
# ID to persist across Vagrant runs.
id_file = @data_dir.join("id")
if value
# Write the "id" file with the id given.
id_file.open("w+") do |f|
f.write(value)
end
else
# Delete the file, since the machine is now destroyed
id_file.delete if id_file.file?
# Delete the entire data directory contents since all state
# associated with the VM is now gone.
@data_dir.children.each do |child|
begin
child.rmtree
rescue Errno::EACCES
@logger.info("EACCESS deleting file: #{child}")
end
end
end
# Store the ID locally
@id = value.nil? ? nil : value.to_s
# Notify the provider that the ID changed in case it needs to do
# any accounting from it.
@provider.machine_id_changed
end
# This returns a clean inspect value so that printing the value via
# a pretty print (`p`) results in a readable value.
#
# @return [String]
def inspect
"#<#{self.class}: #{@name} (#{@provider.class})>"
end
# This returns the SSH info for accessing this machine. This SSH info
# is queried from the underlying provider. This method returns `nil` if
# the machine is not ready for SSH communication.
#
# The structure of the resulting hash is guaranteed to contain the
# following structure, although it may return other keys as well
# not documented here:
#
# {
# :host => "1.2.3.4",
# :port => "22",
# :username => "mitchellh",
# :private_key_path => "/path/to/my/key"
# }
#
# Note that Vagrant makes no guarantee that this info works or is
# correct. This is simply the data that the provider gives us or that
# is configured via a Vagrantfile. It is still possible after this
# point when attempting to connect via SSH to get authentication
# errors.
#
# @return [Hash] SSH information.
def ssh_info
# First, ask the provider for their information. If the provider
# returns nil, then the machine is simply not ready for SSH, and
# we return nil as well.
info = @provider.ssh_info
return nil if info.nil?
# Delete out the nil entries.
info.dup.each do |key, value|
info.delete(key) if value.nil?
end
# We set the defaults
info[:host] ||= @config.ssh.default.host
info[:port] ||= @config.ssh.default.port
info[:private_key_path] ||= @config.ssh.default.private_key_path
info[:username] ||= @config.ssh.default.username
# We set overrides if they are set. These take precedence over
# provider-returned data.
info[:host] = @config.ssh.host if @config.ssh.host
info[:port] = @config.ssh.port if @config.ssh.port
info[:username] = @config.ssh.username if @config.ssh.username
info[:password] = @config.ssh.password if @config.ssh.password
# We also set some fields that are purely controlled by Vagrant
info[:forward_agent] = @config.ssh.forward_agent
info[:forward_x11] = @config.ssh.forward_x11
# Add in provided proxy command config
info[:proxy_command] = @config.ssh.proxy_command if @config.ssh.proxy_command
# Set the private key path. If a specific private key is given in
# the Vagrantfile we set that. Otherwise, we use the default (insecure)
# private key, but only if the provider didn't give us one.
if !info[:private_key_path] && !info[:password]
if @config.ssh.private_key_path
info[:private_key_path] = @config.ssh.private_key_path
else
info[:private_key_path] = @env.default_private_key_path
end
end
# If we have a private key in our data dir, then use that
if !@data_dir.nil?
data_private_key = @data_dir.join("private_key")
if data_private_key.file?
info[:private_key_path] = [data_private_key.to_s]
end
# Setup the keys
info[:private_key_path] ||= []
if !info[:private_key_path].is_a?(Array)
info[:private_key_path] = [info[:private_key_path]]
end
# Expand the private key path relative to the root path
info[:private_key_path].map! do |path|
File.expand_path(path, @env.root_path)
end
# Return the final compiled SSH info data
info
end
# Returns the state of this machine. The state is queried from the
# backing provider, so it can be any arbitrary symbol.
#
# @return [MachineState]
def state
result = @provider.state
raise Errors::MachineStateInvalid if !result.is_a?(MachineState)
result
end
# Temporarily changes the machine UI. This is useful if you want
# to execute an {#action} with a different UI.
def with_ui(ui)
@ui_mutex.synchronize do
begin
old_ui = @ui
@ui = ui
yield
ensure
@ui = old_ui
end
end
end
end
end
In the ssh_info method you are missing an end statement after line 318:
Your code:
# If we have a private key in our data dir, then use that
if !@data_dir.nil?
data_private_key = @data_dir.join("private_key")
if data_private_key.file?
info[:private_key_path] = [data_private_key.to_s]
end
What it should be:
# If we have a private key in our data dir, then use that
if !@data_dir.nil?
data_private_key = @data_dir.join("private_key")
if data_private_key.file?
info[:private_key_path] = [data_private_key.to_s]
end
end # <-- this is missing
This type of error is always reported on the last line of the file because the parser keeps looking for the end that closes the block and reaches the end of the file without finding enough end statements.
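A quick way to confirm the fix (or to locate this kind of error in the first place) is Ruby's built-in syntax check, which parses a file without executing it and prints "Syntax OK" once the ends balance; assuming a ruby interpreter is on your PATH, run it against the file named in the stack trace:
ruby -c C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.5.1/lib/vagrant/machine.rb
Until the missing end is added, it reports the same "unexpected end-of-input, expecting keyword_end" message.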
I want to make a socket in Ruby. First I run ruby server.rb in cmd:
require 'socket' # Get sockets from stdlib
server = TCPServer.open(2000) # Socket to listen on port 2000
loop { # Servers run forever
Thread.start(server.accept) do |client|
puts "Connected!!!"
client.puts(Time.now.ctime) # Send the time to the client
client.puts "Closing the connection. Bye!"
client.close # Disconnect from the client
end
}
Then in my controller:
class StaticPagesController < ApplicationController
helper :all
def home
Thread.new do
require 'socket' # Sockets are in standard library
hostname = 'localhost'
port = 2000
s = TCPSocket.open(hostname, port)
while line = s.gets # Read lines from the socket
puts line.chop # And print with platform line terminator
end
s.close # Close the socket when done
end
end
I don't know why it doesn't work. Can someone help me?
Reference: http://www.tutorialspoint.com/ruby/ruby_socket_programming.htm
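For reference, the client half runs fine as a standalone script outside of Rails, which is one way to check that the server is actually listening before involving the controller. A minimal sketch, assuming server.rb above is already running on port 2000 (the file name client.rb is just an example):
# client.rb - run with: ruby client.rb
require 'socket'                       # TCPSocket is in the standard library

s = TCPSocket.open('localhost', 2000)  # connect to the server started above
while line = s.gets                    # read each line the server sends
  puts line.chomp                      # print it to this terminal
end
s.close                                # close the socket when done
Note that puts inside a Rails controller writes to the Rails server console, not to the browser response, so any lines the thread reads will appear in the server log rather than on the rendered page.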