unable to connect to PostgreSQL from cucumber tests - ruby-on-rails

In my feature test I use byebug to check the database connection:
/myfeaturesteps.rb
Given(/^I have an application "(.*?)"$/) do |name|
byebug
>Rails.env
>'test'
>ActiveRecord::Base.connected?
>true
When I try to touch the database:
>User.first
>ActiveRecord::StatementInvalid Exception: PG::ConnectionBad: PQsocket() can't get socket descriptor:
When I run the console in the test environment:
rails console test
>Rails.env
>'test'
>User.first
User Load (1.2ms) SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT 1
=> nil
The same command in the same Rails environment gives different results. Where does Cucumber get its connection settings?

I don't know why it is not using a valid connection to your database; that would take more investigation. As a workaround, though, you can add a Before hook that verifies the connection before each scenario:
Before do
  ActiveRecord::Base.connection.verify!
end
I believe this will work around the problem until you find its real cause.
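For reference, cucumber-rails normally runs against the test environment's entry in config/database.yml, and hooks like this live in a support file that Cucumber loads before the features. A minimal sketch, assuming a standard cucumber-rails setup (the file name is illustrative):
# features/support/database_reconnect.rb -- hypothetical file name
Before do
  # verify! pings the server and re-establishes the connection if the
  # underlying socket has gone stale between scenarios.
  ActiveRecord::Base.connection.verify!
end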

Related

Rails PgBouncer client_idle_timeout

My problem is PgBouncer dropping connections: the sysadmin set client_idle_timeout to 60 seconds. Is there any solution on the Rails side for this? In other words, is there a way (a gem, or a setting) to reconnect to the database for every query we want to send?
ruby 2.0.0
rails 4.2.3
pg 0.17.1
I'm getting the following error when the connection has been idle for longer than x seconds (for example, when I open the Rails console, wait x seconds, and then run some ActiveRecord query against the DB):
ERROR: client_idle_timeout
Contract Load (0.5ms) SELECT "contracts".* FROM "contracts" ORDER BY "contracts"."id" ASC LIMIT 1
PG::ConnectionBad: PQconsumeInput() SSL connection has been closed unexpectedly : SELECT "contracts".* FROM "contracts" ORDER BY "contracts"."id" ASC LIMIT 1
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQconsumeInput() SSL connection has been closed unexpectedly : SELECT "contracts".* FROM "contracts" ORDER BY "contracts"."id" ASC LIMIT 1
Calling ActiveRecord::Base.clear_active_connections! will return the stale connections to the connection pool, and the next query will check out a fresh one, so you don't have to restart your console/server.
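As an illustration only (not a drop-in fix), a failed query could be retried once after clearing stale connections. This is a sketch under assumptions: the helper name with_fresh_connection is made up, and blindly retrying ActiveRecord::StatementInvalid can re-run non-idempotent SQL, so treat it as a starting point.
# Hypothetical helper: retry a block once after clearing stale connections.
def with_fresh_connection
  attempts = 0
  begin
    yield
  rescue ActiveRecord::StatementInvalid
    attempts += 1
    raise if attempts > 1
    # Return stale connections to the pool; the retried query checks out a fresh one.
    ActiveRecord::Base.clear_active_connections!
    retry
  end
end

with_fresh_connection { Contract.first }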

Multi-threaded slower without mutex when benchmarking (Ruby 2.0, Rails 4.2.7.1)

I am trying to benchmark the code below and see that the non-threaded version runs faster than the threaded version.
puts Benchmark.measure {
  100.times do |i|
    carrier.services.map do |service_obj|
      Thread.new {
        rate << Rate.rate_by_service(service_obj, @package, @from, @to, @credentials)
      }
    end.each(&:join)
  end
}
Below are the timings:
#threading
4.940000 0.730000 5.670000 ( 5.795008)
4.740000 0.740000 5.480000 ( 5.554500)
4.740000 0.730000 5.470000 ( 5.436129)
4.840000 0.710000 5.550000 ( 5.524418)
4.710000 0.720000 5.430000 ( 5.431673)
#no threading
3.640000 0.190000 3.830000 ( 3.962347)
3.670000 0.220000 3.890000 ( 4.402259)
3.430000 0.200000 3.630000 ( 3.780768)
3.480000 0.190000 3.670000 ( 3.830547)
3.650000 0.210000 3.860000 ( 4.065442)
One thing I have observed in the log: when the non-threaded version runs, the query results are returned from the cache (as below), whereas the threaded version does not fetch from the cache and hits the DB instead.
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
CACHE (0.0ms) SELECT
Note: I have removed the actual queries for readability.
Can anyone explain what could be causing the threaded version to run slower than the non-threaded one?
According to Caching with Rails # SQL Caching,
If Rails encounters the same query again for that request, it will use the cached result set as opposed to running the query against the database again... Query caches are created at the start of an action and destroyed at the end of that action and thus persist only for the duration of the action.
ActiveRecord checks out one connection per thread from the pool.
If your Ruby implementation is MRI, then because of the GVL, CPU-bound work does not run in parallel.
So when you run the code single-threaded inside an action, the query result is cached. When you run it multi-threaded, each thread acquires its own database connection, the query cache is not shared, and each thread has to hit the database, which is why it is slower.
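To make that concrete, here is a rough sketch of the threaded variant with each thread explicitly checking a connection out of the pool and a thread-safe collection for the results; carrier, Rate.rate_by_service, and the instance variables are taken from the question's code, and this does not remove the GVL or query-cache effects described above.
require 'thread'

rate = Queue.new  # Queue is thread-safe, unlike a bare Array
threads = carrier.services.map do |service_obj|
  Thread.new do
    # Each thread checks out its own connection and returns it when done;
    # the SQL query cache is per-connection, so it is not shared across threads.
    ActiveRecord::Base.connection_pool.with_connection do
      rate << Rate.rate_by_service(service_obj, @package, @from, @to, @credentials)
    end
  end
end
threads.each(&:join)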

SQLite3 and Postgres/Heroku Ruby on Rails Query issues

I am trying to query by joining two tables from my database. The query works perfectly on localhost (backed by SQLite3), but when I push it to the Heroku server (with Postgres), it no longer works. I tried a few different approaches, but all of them only work locally.
In my controller, I have:
@user = User.find(params[:id])
I am trying to run the same query using different methods, all of which should return the same result. Below are the variants I have tried. All of them work perfectly with SQLite3, but not on Heroku with Postgres.
@locations_user_rated = Location.joins('INNER JOIN rates').where("rates.rateable_id = locations.id AND rates.rater_id =?", 2) # assume current user ID = 2
@locations_user_rated = Location.joins('INNER JOIN rates').where("rates.rateable_id = locations.id AND rates.rater_id =?", User.find(params[:id]))
@locations_user_rated = Location.joins('INNER JOIN rates').where("rates.rateable_id = locations.id AND rates.rater_id =?", @user)
@locations_user_rated = Location.joins('INNER JOIN rates').where('rates.rater_id' => @user).where("rates.rateable_id = locations.id")
I found out that @user is the one causing the issue, so I replaced it with User.find(params[:id]). However, the server refused to accept it.
I read on the Rails website that as long as a query works on one SQL database, it should work on Heroku (Postgres), but that is not the case here.
Below is the logs that I received from Heroku server.
2016-05-01T01:52:41.674598+00:00 app[web.1]: (1.3ms) SELECT COUNT(*) FROM "locations" INNER JOIN rates WHERE (rates.rateable_id = locations.id AND rates.rater_id =2)
2016-05-01T01:52:41.674760+00:00 app[web.1]: Completed 500 Internal Server Error in 10ms (ActiveRecord: 5.8ms)
2016-05-01T01:52:41.675453+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::SyntaxError: ERROR: syntax error at or near "WHERE"
2016-05-01T01:52:41.675454+00:00 app[web.1]: LINE 1: SELECT COUNT(*) FROM "locations" INNER JOIN rates WHERE (rat...
(the error points at the WHERE clause immediately following the join)
What is the syntax difference between Postgres and SQLite for the WHERE clause?
UPDATE: I found out that it requires an ON clause. How do I add an ON clause here?
Answer found here. The join requires an ON clause; do not use a WHERE clause to specify the join condition, use ON instead.
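For illustration, a sketch of what the corrected query might look like, assuming rates.rateable_id references locations.id as in the question: the join condition moves into the ON clause and only the rater filter stays in WHERE.
# Join condition goes in ON (required by Postgres); the filter stays in WHERE.
@locations_user_rated = Location
  .joins('INNER JOIN rates ON rates.rateable_id = locations.id')
  .where('rates.rater_id = ?', @user.id)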

Server is timing out because of Sunspot-Solr reindex'ing problem

Not too sure how to debug this. Any tips would be greatly appreciated.
Basically, I just did a large commit, and now my server can't boot up because of a Sunspot-solr issue.
I noticed it when I tried to reindex manually.
This is the output:
Processing MainController#index (for 69.114.195.64 at 2011-08-02 06:47:21) [GET]
Parameters: {"action"=>"index", "controller"=>"main"}
HomepageBackground Load (0.2ms) SELECT * FROM `homepage_backgrounds`
HomepageBackground Columns (23.4ms) SHOW FIELDS FROM `homepage_backgrounds`
HomepageBackground Load (0.8ms) SELECT * FROM `homepage_backgrounds` ORDER BY RAND() LIMIT 1
SQL (30.2ms) SHOW TABLES
Organization Columns (1.8ms) SHOW FIELDS FROM `organizations`
Solr Select (Error) {:q=>"*:*", :start=>0, :fq=>["type:Organization", "published_b:true", "updated_at_d:[2010\\-08\\-02T13\\:47\\:21Z TO *]"], :rows=>1000000}
Timeout::Error (execution expired):
/usr/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill'
vendor/gems/right_http_connection-1.2.4/lib/net_fix.rb:51:in `rbuf_fill'
/usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
UPDATE
OK, so I reverted and rebased to the last working commit, and I still got the same error. Then I ran ps aux | grep solr and found five instances of Solr running. Strange, I thought, and killed every single one of them. Blam, the server was back up and running strong. So now I'm trying my new commits again, but with an eye on these feral Sunspot instances.
This problem was caused by feral sunspot-solr instances running amuck. Nothing kill -9 couldn't handle. Problem solved.

Rails app logging duplicate requests

I have a Rails app that is generating duplicate requests for every request in development. The app is running Rails 2.3.5 with my primary development machine running Ubuntu 10.4. However, the same code runs fine without showing duplicate requests on my OS X 10.6 box. It also runs in Production mode on either machine without problems.
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Notice that the requests are exactly the same, even down to the timestamps.
I have tried using Ruby 1.8.7 & 1.9.1 as well as swapping between Mongrel & Webrick and it always processes each request twice (or at least it generates two log entries). I tried removing most of the routes to see if I had something weird going on, but the problem persists. I tried different browsers (Chrome, Safari, eLinks) from different machines to see if that would help, but the problem persists. I removed all of my gems and only replaced the necessary ones but to no avail.
Does anyone have any idea why Rails would cause duplicate requests like this? I am about at my wits end and am grasping at straws. The only bright spark is that this behavior does not happen under the Production environment, only Development.
When people come to this question from Google, it's important to disambiguate the problem between duplicate logs that look like this:
A
A
B
B
C
C
From duplicate logs that look like this:
A
B
C
A
B
C
The former is likely from duplicate LOGGING. The latter is likely from duplicate REQUESTS. If it is the latter, as shown by the question asker (OP), you should strongly consider @www's answer of hunting down an <img src="#"> or a similar self-referential URL tag. I spent hours trying to figure out why my application appeared to make two duplicate requests, and after reading @www's answer (or @aelor's on Double console output?), I found
%link{href: "", rel: "shortcut icon"}/
in my code! It was causing every page of my production app to be rendered twice. So bad for performance and so annoying!
I got the same problem just now because of a tag:
<img src="#">
This will cause Rails to make duplicate requests! Check your code to see if there is something like this inside it.
This was happening to me in Rails 4.2.3 after installing the Heroku rails_12factor gem, which depends on rails_stdout_logging.
The "answer" to the problem was to move to a new directory and fetch the original code from Github. After getting everything configured and setup in the new directory the application works as it should with no duplicate requests. I still don't know why the code in the original directory borked out; I even diff'ed the directories and the only outliers were the log files.
I'm answering my own question here for the sanity of others that might experience the same problem.
I resolved this problem by commenting out the following line in config/environments/development.rb:
config.middleware.use Rails::Rack::LogTailer
I do not remember exactly why I was using this setting in the first place.
I solved this same problem by cleaning all precompiled assets with:
rake assets:clean
I had tried deleting the app folder and checking it back out from GitHub, but that didn't work.
Hope this helps. Thanks.
This started happening to me in development after playing around with some custom middleware I wrote.
Running rake assets:clean:all solved it.
This small workaround solved my issue. Follow these steps:
Under Rails External Libraries, search for the railties module.
Go to this path within it: /lib/commands/server.rb
In this file, comment out this line:
Rails.logger.extend(ActiveSupport::Logger.broadcast(console))
This switches off broadcasting; just restart your Rails server and you will not see any repeated logs anymore. Happy coding.