I am trying to run a query that joins two tables in my database. The query works perfectly on localhost (where the database is SQLite3), but when I push it to the Heroku server (with Postgres), it no longer works. I have tried a few different methods and they all work, but only locally.
In my controller, I have
@user = User.find(params[:id])
I am trying to run the following statement using several different methods that should all return the same result. Below are the different methods I have tried. All of them work perfectly with SQLite3, but not on Heroku with Postgres.
@locations_user_rated = Location.joins('INNER JOIN rates').where("rates.rateable_id = locations.id AND rates.rater_id = ?", 2) (Assume current user ID = 2)
@locations_user_rated = Location.joins('INNER JOIN rates').where("rates.rateable_id = locations.id AND rates.rater_id = ?", User.find(params[:id]))
@locations_user_rated = Location.joins('INNER JOIN rates').where("rates.rateable_id = locations.id AND rates.rater_id = ?", @user)
@locations_user_rated = Location.joins('INNER JOIN rates').where('rates.rater_id' => @user).where("rates.rateable_id = locations.id")
I found out that @user is the part causing the issue, so I replaced it with User.find(params[:id]). However, the server still refused to accept it.
I read on the Rails site that as long as the query works in one SQL database, it should work on Heroku (Postgres). But that is not the case here.
Below are the logs I received from the Heroku server.
2016-05-01T01:52:41.674598+00:00 app[web.1]: (1.3ms) SELECT COUNT(*) FROM "locations" INNER JOIN rates WHERE (rates.rateable_id = locations.id AND rates.rater_id =2)
2016-05-01T01:52:41.674760+00:00 app[web.1]: Completed 500 Internal Server Error in 10ms (ActiveRecord: 5.8ms)
2016-05-01T01:52:41.675453+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::SyntaxError: ERROR: syntax error at or near "WHERE"
2016-05-01T01:52:41.675454+00:00 app[web.1]: LINE 1: SELECT COUNT(*) FROM "locations" INNER JOIN rates WHERE (rat...
^ (the caret marks the WHERE clause that is causing the problem)
What is the syntax difference between Postgres and SQLite for the WHERE clause?
UPDATE: I found out that it requires an ON clause. How do I add an ON clause here?
Answer found here. The join requires an ON clause; do not use a WHERE clause to specify the join condition. Use an ON clause instead.
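For reference, here is a minimal sketch of the corrected query, assuming the column names shown above; the join condition moves into an explicit ON clause instead of the WHERE:
# Hedged sketch: Postgres requires an ON (or USING) clause for an INNER JOIN,
# so the join condition cannot live only in the WHERE clause as it can with SQLite.
@locations_user_rated = Location
  .joins('INNER JOIN rates ON rates.rateable_id = locations.id')
  .where('rates.rater_id = ?', @user.id)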
In my feature test I use byebug to check the database connection
/myfeaturesteps.rb
Given(/^I have an application "(.*?)"$/) do |name|
byebug
>Rails.env
>'test'
>ActiveRecord::Base.connected?
>true
When I try to touch the database:
>User.first
>ActiveRecord::StatementInvalid Exception: PG::ConnectionBad: PQsocket() can't get socket descriptor:
When I run the console in the test environment:
rails console test
>Rails.env
>'test'
>User.first
User Load (1.2ms) SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT 1
=> nil
The same command in the same Rails environment gives different results. Where does Cucumber get its connection settings?
I don't know why it is not using a valid connection to your database; that would require more investigation. However, you can try a workaround: add the following lines to create a Before hook:
Before do
  # Re-establish the ActiveRecord connection if it has gone stale
  ActiveRecord::Base.connection.verify!
end
I believe this will solve your problem until you find out what the real cause is.
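If it helps, this is roughly where such a hook would live so that it runs before every scenario (the file name is just a suggestion, not from the original answer):
# features/support/hooks.rb (hypothetical location; Cucumber loads files under features/support)
Before do
  ActiveRecord::Base.connection.verify!  # reconnect if the connection has gone stale
end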
I am doing a batch insert with the following method
module DatabaseHelper
  ActionHelper = ActionController::Base.helpers
  CONN = ActiveRecord::Base.connection

  # Insert `values` into the table for `klass` in slices of `batch_size` rows.
  def self.mass_sql_insert(klass, columns, values, batch_size = 500)
    table = klass.constantize.table_name if klass.constantize.kind_of?(Class)
    values.each_slice(batch_size) do |batch|
      sql = ActionHelper.sanitize("INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{batch.join(', ')}")
      CONN.execute sql
    end
  end
end
This method was working great, but just recently started throwing the following error:
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:
What follows this error is the sql insert command that I'm trying to perform. When I put that straight into a dbconsole, it works fine. Any suggestions?
SQLite is not really designed for concurrent access, which is the issue you are running into here. You can try increasing the timeout in your database.yml:
development:
  adapter: sqlite3
  database: db/my_dev.sqlite3
  timeout: 10000
It is better to use another database engine if you need concurrency, though.
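For example, a rough sketch of a development entry pointing at Postgres instead (the database name and credentials are placeholders, adjust to your setup):
development:
  adapter: postgresql
  encoding: unicode
  database: my_dev
  host: localhost
  pool: 5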
I have a DBIx::Class query that's taking too long to complete.
All of the SQL below was generated by DBIx::Class.
First scenario (DBIx Simple select):
SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me ORDER BY event_time DESC LIMIT 10;
DBIx query time: 0.390221s (ok)
Second scenario (DBIx Simple select using where):
SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me WHERE ( proto_id = 7 ) ORDER BY event_time DESC LIMIT 10;
DBIx query time: 29.27025s!! :(
Third scenario (Using pgadmin3 to run the query above):
SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id FROM pf me WHERE ( proto_id = 7 ) ORDER BY event_time DESC LIMIT 10;
Pgadmin query time: 25ms (ok)
The same query is pretty fast using pgadmin.
Some info:
Catalyst 5.90091
DBIx::Class 0.082820 (latest)
Postgres 9.1
I did all tests on localhost, using Catalyst internal server.
I have no problems with any other table/column combination; it's specific to proto_id.
Database Schema automatically generated by DBIx::Class::Schema::Loader
proto_id definition:
"proto_id",
{ data_type => "smallint", is_foreign_key => 1, is_nullable => 0 },
Anybody have a clue why DBIx is taking so long to run this simple query?
Edit 1: Column is using index (btree).
Edit 2: This is a partitioned table. I'm checking whether all the sub-tables have all the indexes, but that still doesn't explain why the same query is slower from DBIx::Class.
Edit 3: I did a simple DBIx::Class script and I got the same results, just to make sure the problem is not the Catalyst Framework.
Edit 4: Using tcpdump I noticed postgres is taking too long to respond, still trying...
Edit 5: Using DBI with SQL seems pretty fast, I'm almost convinced this is a DBIx::Class problem.
After some tests, I found the problem:
When I run the query using DBI's bind_param() (as DBIx::Class does), for some reason it becomes very slow.
use DBI qw(:sql_types);

my $sql = q{SELECT me.pf_id, me.origin_id, me.event_time, me.proto_id
            FROM pf me WHERE ( proto_id = ? ) ORDER BY event_time DESC LIMIT ?};
my $sth = $dbh->prepare($sql);
$sth->bind_param(1, 80, { TYPE => SQL_INTEGER });  # proto_id
$sth->bind_param(2, 10, { TYPE => SQL_INTEGER });  # LIMIT
$sth->execute();
So, after some time searching CPAN, I noticed that my DBD::Pg was outdated (my bad). I downloaded the source from CPAN, compiled it, and the problem was gone. It must have been a bug in older versions.
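If you want to check which driver version you currently have before upgrading, a quick check like this should work (not part of the original fix, just a sanity check):
perl -MDBD::Pg -e 'print $DBD::Pg::VERSION'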
TL;DR: If you're having problems with DBI or DBIx::Class make sure your DBI database driver is updated.
Not too sure how to debug this. Any tips would be greatly appreciated.
Basically, I just did a large commit, and now my server can't boot up because of a Sunspot-solr issue.
I noticed it when I tried to reindex manually.
This is what is returned:
Processing MainController#index (for 69.114.195.64 at 2011-08-02 06:47:21) [GET]
Parameters: {"action"=>"index", "controller"=>"main"}
HomepageBackground Load (0.2ms) SELECT * FROM `homepage_backgrounds`
HomepageBackground Columns (23.4ms) SHOW FIELDS FROM `homepage_backgrounds`
HomepageBackground Load (0.8ms) SELECT * FROM `homepage_backgrounds` ORDER BY RAND() LIMIT 1
SQL (30.2ms) SHOW TABLES
Organization Columns (1.8ms) SHOW FIELDS FROM `organizations`
Solr Select (Error) {:q=>"*:*", :start=>0, :fq=>["type:Organization", "published_b:true", "updated_at_d:[2010\\-08\\-02T13\\:47\\:21Z TO *]"], :rows=>1000000}
Timeout::Error (execution expired):
/usr/lib/ruby/1.8/timeout.rb:64:in `rbuf_fill'
vendor/gems/right_http_connection-1.2.4/lib/net_fix.rb:51:in `rbuf_fill'
/usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
UPDATE
Ok, so I reverted and rebased to the last working commit, and I still got the same error. So then I ran ps aux | grep solr and found five instances of solr running. Strange, I thought, and killed every single one of them. Blam, the server was back up and running strong. So now I'm trying my new commits again, but keeping an eye on these feral Sunspot instances.
This problem was caused by feral sunspot-solr instances running amok. Nothing kill -9 couldn't handle. Problem solved.
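For anyone hitting the same thing, the cleanup amounted to roughly this (the grep pattern may differ depending on how Solr was started on your machine):
ps aux | grep solr   # list the stray instances and note their PIDs
kill -9 <pid>        # repeat for each stray PID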
I have a Rails app that is generating duplicate requests for every request in development. The app is running Rails 2.3.5 with my primary development machine running Ubuntu 10.4. However, the same code runs fine without showing duplicate requests on my OS X 10.6 box. It also runs in Production mode on either machine without problems.
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Processing DashboardController#index (for 127.0.0.1 at 2010-07-16 10:23:08) [GET]
Parameters: {"action"=>"index", "controller"=>"dashboard"}
Rendering template within layouts/application
Rendering dashboard/index
Term Load (1.9ms) SELECT * FROM "date_ranges" WHERE ('2010-07-16' BETWEEN begin_date and end_date ) AND ( ("date_ranges"."type" = 'Term' ) )
StaticData Load (1.1ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
Rendered dashboard/_news (0.1ms)
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
CACHE (0.0ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'SITE_NAME') LIMIT 1
StaticData Load (0.9ms) SELECT * FROM "static_data" WHERE ("static_data"."name" = E'TAG_LINE') LIMIT 1
Completed in 67ms (View: 58, DB: 5) | 200 OK [http://localhost/dashboard]
SQL (0.4ms) SET client_min_messages TO 'panic'
SQL (0.4ms) SET client_min_messages TO 'notice'
Notice that the requests are exactly the same, even down to the timestamps.
I have tried using Ruby 1.8.7 & 1.9.1 as well as swapping between Mongrel & Webrick and it always processes each request twice (or at least it generates two log entries). I tried removing most of the routes to see if I had something weird going on, but the problem persists. I tried different browsers (Chrome, Safari, eLinks) from different machines to see if that would help, but the problem persists. I removed all of my gems and only replaced the necessary ones but to no avail.
Does anyone have any idea why Rails would cause duplicate requests like this? I am about at my wits' end and am grasping at straws. The only bright spark is that this behavior does not happen under the Production environment, only Development.
When people come to this question from Google, it's important that they disambiguate their problem between duplicate logs that look like this:
A
A
B
B
C
C
From duplicate logs that look like this:
A
B
C
A
B
C
The former is likely from duplicate LOGGING. The latter is likely from duplicate REQUESTS. In the case that it is the latter, as shown by the question asker (OP), you should strongly consider @www's answer of hunting down an <img src="#"> or a similar self-referential URL tag. I spent hours trying to figure out why my application appeared to be making two duplicate requests, and after reading @www's answer (or @aelor's on Double console output?), I found
%link{href: "", rel: "shortcut icon"}/
in my code! It was causing every page of my production app to render twice! So bad for performance and so annoying.
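In my case the fix was simply to give the tag a real href; something like this (the icon path is just an example) stops the second, self-referential request:
%link{href: "/favicon.ico", rel: "shortcut icon"}/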
Check your code to see if there is something like this inside it:
<img src="#">
I ran into the same problem just now because of this tag; it will cause Rails to make duplicate requests!
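Pointing the tag at a real image (or removing it) avoids the extra request; for example (the path here is hypothetical):
<img src="/images/logo.png">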
This was happening to me in Rails 4.2.3 after installing the Heroku rails_12factor gem, which depends on rails_stdout_logging.
The "answer" to the problem was to move to a new directory and fetch the original code from GitHub. After getting everything configured and set up in the new directory, the application works as it should, with no duplicate requests. I still don't know why the code in the original directory borked out; I even diffed the directories and the only outliers were the log files.
I'm answering my own question here for the sanity of others that might experience the same problem.
I resolved this problem by commenting out the following line in config/environments/development.rb:
config.middleware.use Rails::Rack::LogTailer
I do not remember exactly why I had added this setting.
I solved this same problem by cleaning all precompiled assets with:
rake assets:clean
I had tried deleting the app folder and checking it out again from GitHub, but that didn't work.
Hope this helps.
Thanks.
This started happening to me in development after playing around with some custom middleware I wrote.
Running rake assets:clean:all solved it.
This small workaround solved my issue. Follow these steps:
Under Rails External Libraries, search for the railties module.
Go to this path: /lib/commands/server.rb
In this file, comment out this line:
Rails.logger.extend(ActiveSupport::Logger.broadcast(console))
This switches off broadcasting; just restart your Rails server and you will not see any repeated logs anymore. Happy coding.