I'm currently trying to upgrade a Rails 3.2 app to Rails 4.0.13 (Ruby 2.3.5, PostgreSQL 9.4.13), and I'm getting this error when running my integration tests:
PG::UnableToSend at /companies/get_current_firm
===============================================

> socket not open

activerecord (4.0.13) lib/active_record/connection_adapters/postgresql_adapter.rb, line 798
-------------------------------------------------------------------------------------------

``` ruby
  793     end
  794
  795     FEATURE_NOT_SUPPORTED = "0A000" #:nodoc:
  796
  797     def exec_no_cache(sql, binds)
> 798       @connection.async_exec(sql, [])
  799     end
  800
  801     def exec_cache(sql, binds)
  802       stmt_key = prepare_statement sql
  803
```
This integration test logs in, and then the frontend (AngularJS) makes a number of calls to get details like the currently selected firm. Each of these calls gets the same message; it's as if the connection has been shut down. I'm using the Chrome browser through Selenium and Capybara in live (non-headless) mode so I can see what is happening.
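For reference, my Capybara driver registration is essentially the standard non-headless Selenium/Chrome setup; a sketch (the support-file path and driver name here are assumptions, not my exact config):

``` ruby
# spec/support/capybara.rb (path assumed)
require 'capybara/rails'
require 'selenium-webdriver'

# Register a visible (non-headless) Chrome driver so the test run can be watched
Capybara.register_driver :selenium_chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.javascript_driver = :selenium_chrome
```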
Here is my database.yml file for test:

    test: &test
      adapter: postgresql
      encoding: unicode
      database: test_<%= ENV['USER'] %><%= ENV['TEST_ENV_NUMBER'] %>
      pool: 5
      username: <%= ENV['DATABASE_USER'] %>
      password: <%= ENV['DATABASE_PASSWORD'] %>
Here is the call it makes from the frontend.

AngularJS:

    current_firm: (scope) ->
      $http({method: 'GET', url: '/companies/get_current_firm'}).success (data) ->
        scope.current_firm = data['firm']

Rails controller:

    def get_current_firm
      render json: {firm: current_firm.organisation}
    end
There are three separate calls to get different data when a user logs in; sometimes in the tests one or two of them succeed, but never all three.
I tried the workaround from http://www.ruby-railings.com/en/rails/postgresql/2014/01/11/connection-problems-in-rails-4.html. This didn't fix the problem at all.
Update
When the user logs in, three HTTP calls are fired off from AngularJS to the Rails backend. If I comment out two of the calls, the remaining call works every time. If I comment out only one, so that two calls are made, one or both of them fail; I get a 'pending' message on one of the calls.
I've worked out that this was an AngularJS problem: too many simultaneous requests to Rails 4. To fix it, I wrapped each call in a promise chain so the requests fire sequentially rather than all at once.
Related
Summary: I'm using a single Redis instance for the Rails cache, Action Cable, and (non-cache) use in my Rails code. Should all these uses share a single connection pool, and if so, how can I configure that, given that there seem to be totally different ways to set up pooling for each?
Details follow since people seem to like to see them.
I'm using Redis as the adapter for the Rails cache, with the following config:

    config.cache_store = :redis_cache_store, {
      url: "redis://XXX.net:6379/0",
      pool_size: ENV.fetch('RAILS_MAX_THREADS') { 5 },
      password: Rails.application.credentials.dig(:redis, :password),
      expires_in: 24.hours,
      pool_timeout: 5
    }
I've set the expires_in option so that I can configure Redis to evict only keys with an expiration set, which lets me use the same Redis instance for both cache and non-cache data. Now I also want to access Redis directly for non-cache tasks, via something like the example config below:
    require "connection_pool"
    require "redis"

    pool_size = ENV.fetch("RAILS_MAX_THREADS", 5).to_i
    redis_pool = ConnectionPool.new(size: pool_size) do
      Redis.new(
        url: "redis://XXX.net:6379/0",
        password: Rails.application.credentials.dig(:redis, :password) # same credentials as the cache store
      )
    end
But I'm not sure if that is correct. Shouldn't I be sharing a connection pool between the cache_store connections and the other connections to Redis? If so, how can I do this?
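For what it's worth, the direction I've been sketching is one shared pool built in an initializer, with everything that can accept a pool drawing from it (the initializer path and the REDIS_POOL constant are my own names; whether redis_cache_store can consume an existing ConnectionPool depends on the Rails version, so check your version's docs before relying on it):

``` ruby
# config/initializers/redis_pool.rb (hypothetical path)
require "connection_pool"
require "redis"

# One pool, sized to the web server's thread count, shared by every
# consumer that can check connections out of it
REDIS_POOL = ConnectionPool.new(
  size: ENV.fetch("RAILS_MAX_THREADS", 5).to_i,
  timeout: 5
) do
  Redis.new(
    url: "redis://XXX.net:6379/0",
    password: Rails.application.credentials.dig(:redis, :password)
  )
end

# Direct, non-cache use then checks a connection out of the shared pool:
REDIS_POOL.with { |redis| redis.set("some:key", "some value") }
```

Redis pub/sub ties up a dedicated connection, so as far as I can tell the Action Cable subscription connection would have to stay separate regardless.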
To complicate matters further, I'm also using Redis for Action Cable, via a config like:

    production:
      adapter: redis
      url: <%= ENV.fetch("REDIS_URL") { "redis://XXX.net:6379/0" } %>
      password: <%= Rails.application.credentials.dig(:redis, :password) %>
I've seen suggestions that Action Cable will automatically handle connection pooling with Redis if I'm using the connection_pool gem (is this right?), but I feel like all these connections should be drawing from the same pool. If so, how can I make that happen?
I have a data scraper in Ruby that retrieves article data.
Another dev on my team needs my scraper to spin up a web server he can make a request to, so that he can import the data into a Node application he's built.
Being a junior, I do not understand the following:
a) Is there a proper convention in Rails that tells me where to place my scraper.rb file?
b) Once that file is properly placed, how would I get the server to accept connections and return the scraped data?
c) What (functionally) is the relationship between the ports, sockets, and routing?
I understand this may be a rookie question, but I honestly don't know.
Can someone please BREAK THIS DOWN?
I have already:
i) Set up a server.rb file listening on localhost:2000, but I'm not sure how to create a proper route or connection that would let someone use Postman to hit a valid route and get my data.
    require 'socket'
    require 'mechanize'

    port = ENV.fetch("PORT", 2000).to_i
    server = TCPServer.new(port)
    puts "Listening on port #{port}..."
    puts "Current Time : #{Time.now}"

    general_sites = [
      "https://www.lovebscott.com/",
      "https://bleacherreport.com/",
      "https://balleralert.com/",
      "https://peopleofcolorintech.com/",
      "https://afrotech.com/",
      "https://bossip.com/",
      "https://www.itsonsitetv.com/",
      "https://theshaderoom.com/",
      "https://shadowandact.com/",
      "https://hollywoodunlocked.com/",
      "https://www.essence.com/",
      "http://karencivil.com/",
      "https://www.revolt.tv/"
    ]

    loop do
      client = server.accept
      # Consume the HTTP request line and headers so clients like Postman behave
      while (line = client.gets) && line.chomp != ""
      end

      holder = []
      agent = Mechanize.new
      general_sites.each do |site|
        page = agent.get(site)
        page.search('a').each do |e|
          data = e.attr('href').to_s
          holder.push(data) if data.length > 50
        end
        puts "#{holder.length} [posts total] ==> Now scraping --> #{site}"
      end

      # A raw TCP socket has to speak HTTP itself for Postman to understand it
      body = holder.join("\n")
      client.print "HTTP/1.1 200 OK\r\n"
      client.print "Content-Type: text/plain\r\n"
      client.print "Content-Length: #{body.bytesize}\r\n"
      client.print "\r\n"
      client.print body
      client.close
    end
In Rails you don't spin up a web server manually; it's done for you by rackup, Unicorn, Puma, or any other compatible application server.
Rails itself never "talks" to the HTTP clients directly. It is just a specific application that exposes a Rack-compatible API (basically, an object that responds to call(env) and returns [integer, hash, enumerable_of_strings]); the app server reads the data from Unix/TCP sockets and calls your application.
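To make that contract concrete, here's roughly the smallest valid Rack application (a sketch; any object whose call matches this shape works):

``` ruby
# config.ru -- start it with `rackup`
app = lambda do |env|
  # env is a Hash describing the request (method, path, headers, ...)
  body = "You requested #{env['PATH_INFO']}"
  [200, { "Content-Type" => "text/plain" }, [body]]
end

run app
```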
If you want to expose your scraper to an external consumer (provided it's fast enough), you can create a controller with a method that accepts some data, runs the scraper, and finally renders back the scraping results in some structured way. Then in the router you connect some URL to your controller method.
    # config/routes.rb
    post 'scrape/me', to: 'my_controller#scrape'

    # app/controllers/my_controller.rb
    class MyController < ApplicationController
      def scrape
        site = params[:site]
        results = MyScraper.run(site)
        render json: results
      end
    end
and then with a simple POST yourserver/scrape/me?site=www.example.com you will get back your data.
I have a Rails API with a PostgreSQL database.
Some requests to the API show strange behavior that doesn't depend on the endpoint.
These requests (around 5-10% of total requests) start with the same 7 database queries:
    SET client_min_messages TO ?
    SET standard_conforming_strings = on
    SET SESSION timezone TO ?
    SELECT t.oid, t.typname FROM pg_type t WHERE t.typname IN ( ? )
    ...
The request also takes a long time to start before these 7 queries are executed. It seems to be the database adapter (ActiveRecord::ConnectionAdapters::PostgreSQLAdapter) initiating a connection, which significantly slows down the request.
I am using a PostgreSQL 11.6 AWS RDS instance, with default parameters.
Here is my database.yml config:
    default: &default
      adapter: postgresql
      encoding: unicode
      username: *****
      password: *****
      pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

    production:
      <<: *default
      database: *****
      username: *****
      password: *****
      pool: 50
How do I reduce the number of connections being initiated?
Is there a way to cache the queries?
Thank you,
Ran into the same thing and here's what I think is happening:
Every time a new connection is instantiated, it performs the bootstrapping queries you mention above. Assuming a new process is not spawned, a new connection would only need to be instantiated because existing connections have been reaped by ActiveRecord.
By default, the ConnectionPool::Reaper will disconnect any connection that has been idle for over 5 minutes.
See: https://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
If your API does not receive any requests for a period of 5 minutes and all the connections are reaped, the next request will need to instantiate a new connection and therefore run the queries. This is easy to reproduce by hand, as the sketch below shows.
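For example, in a Rails console with SQL logging on (User is a hypothetical model):

``` ruby
# Drop every pooled connection, simulating what the reaper does to idle ones
ActiveRecord::Base.connection_pool.disconnect!

# The next query checks out a brand-new connection; the log shows the
# SET / pg_type bootstrapping statements run again before the SELECT
User.first
```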
> How do I reduce the number of connections being initiated?
You could set an idle_timeout of 0 in database.yml, for example as shown below. This would prevent ActiveRecord from reaping the connections, but could potentially cause issues depending on how many processes are running and what your PG max_connections value is.
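For instance, building on the database.yml above (check the idle_timeout key against your Rails version's docs):

    production:
      <<: *default
      pool: 50
      idle_timeout: 0 # seconds; 0 keeps idle connections around forever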
> Is there a way to cache the queries?
There's a closed issue that talks about this, but it doesn't look like it's possible to cache these today:
https://github.com/rails/rails/issues/35311
I have a Rails 4.2.10 API and a NodeJS API.
Some of the Rails requests are deprecated and need to be redirected to the NodeJS API.
I would like to do the redirection not in my web server (nginx) but somewhere in Rails.
I would like to forward the entire request (URL, headers, body, ...)
and return the NodeJS API's body and status code.
Being a noob in Ruby, I tried things like making the requests myself, but the Rails API still uses the create methods, and so the related objects, for example.
Here is my controller code:
    class Api::V2::RedirectController < ActionController::API
      def apiV3
        redirect_to 'http://api-v3-node/api/v3'
      end
    end
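What I'm really after is closer to a reverse proxy inside the controller. A rough sketch of that idea with Net::HTTP (untested; the upstream URL is the same assumption as above):

``` ruby
require 'net/http'

class Api::V2::RedirectController < ActionController::API
  def apiV3
    uri = URI('http://api-v3-node/api/v3')

    # Rebuild the incoming request against the Node API, copying the body
    proxied = Net::HTTP::Post.new(uri)
    proxied.body = request.raw_post
    proxied.content_type = request.content_type if request.content_type

    upstream = Net::HTTP.start(uri.host, uri.port) { |http| http.request(proxied) }

    # Hand the upstream body and status straight back to the original caller
    render body: upstream.body, status: upstream.code.to_i,
           content_type: upstream.content_type
  end
end
```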
When I try the redirect version with Postman, I get logs but "Could not get any response".
Here are the corresponding outputs:
[2018-09-11T13:08:12.633224 #61] INFO -- : Started POST "/api/v2/notes?token=[FILTERED]" for 172.19.0.1 at 2018-09-11 13:08:12 +0000
[2018-09-11T13:08:12.656050 #61] INFO -- : {:_method=>"POST", :_path=>"/api/v2/notes", :_format=>:json, :_controller=>"Api::V2::RedirectController", :_action=>"apiV3", :_status=>302, :_duration=>1.05, :_view=>0.0, :_location=>"http://api-v3-node/api/v3", :short_message=>"[302] POST /api/v2/notes (Api::V2::RedirectController#apiV3)"}
Please help me resolve an issue with the following.
Message:
PG::UnableToSend: server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request.
Traceback:

    PG::UnableToSend: server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
full log: https://gist.githubusercontent.com/igorkasyanchuk/cc96dba31b0b0f83b77faadf077433e8/raw/70af94105b0a7649b6ad0dde086ac539902d1d3f/gistfile1.txt
sample code:
    class RedZoneCrossedWorker < Worker
      def perform(event_id)
        Chewy.strategy(:atomic) do
          puts "#{self.class.name} Performing: #{event_id} #{jid}"
          event = RedZoneCrossedShipmentEvent.find_by(id: event_id)
          shipment = event.try(:shipment)
          if shipment
            # generate driver/oo payments
            shipment.generate_red_zone_payments
            # send mails and notifications if red zone crossed successfully
            notifier = event.successfully_completed? ? 'ShipmentNotifiers::ShipmentRedZoneSuccessNotifier' : 'ShipmentNotifiers::ShipmentRedZoneViolationsNotifier'
            ShipmentNotificationWorker.new.perform(event.shipment_id, notifier)
          end
        end
      end
    end
We have 2 app servers + 1 DB server, with pool: 25 and 20 Sidekiq workers (Rails, Sidekiq, PG).
It happens only with the Sidekiq background workers.
Thanks for any advice.
Looks like the problem was solved by enabling TCP keepalives on the database connections, so that idle Sidekiq connections aren't silently dropped:

    production:
      ....
      pool: 20
      variables:
        tcp_keepalives_idle: 60
        tcp_keepalives_interval: 60
        tcp_keepalives_count: 100