How do I stop these PG::ProtocolViolation errors?

My method is:
class Survey < ActiveRecord::Base
  def create_matching_batteries
    unless inactive?
      update_column :battery_id, nil unless battery
      @battery = Battery.where(:review_id => review.id, :question_id => question.id).first_or_create
      @battery.surveys << self
    end
  end
end
When I run @survey.create_matching_batteries, I get this:
Survey Load (5.2ms) SELECT "surveys".* FROM "surveys" WHERE "surveys"."competitor_id" = $1 AND "surveys"."id" = $1 ORDER BY "surveys"."id" ASC LIMIT 1 [["competitor_id", 248], ["id", 15183]]
D, [2014-01-21T22:28:08.830446 #3392] DEBUG -- : Survey Load (5.2ms) SELECT "surveys".* FROM "surveys" WHERE "surveys"."competitor_id" = $1 AND "surveys"."id" = $1 ORDER BY "surveys"."id" ASC LIMIT 1 [["competitor_id", 248], ["id", 15183]]
PG::ProtocolViolation: ERROR: bind message supplies 2 parameters, but prepared statement "a15" requires 1
: SELECT "surveys".* FROM "surveys" WHERE "surveys"."competitor_id" = $1 AND "surveys"."id" = $1 ORDER BY "surveys"."id" ASC LIMIT 1
E, [2014-01-21T22:28:08.830528 #3392] ERROR -- : PG::ProtocolViolation: ERROR: bind message supplies 2 parameters, but prepared statement "a15" requires 1
: SELECT "surveys".* FROM "surveys" WHERE "surveys"."competitor_id" = $1 AND "surveys"."id" = $1 ORDER BY "surveys"."id" ASC LIMIT 1
(0.3ms) ROLLBACK
D, [2014-01-21T22:28:08.833655 #3392] DEBUG -- : (0.3ms) ROLLBACK
ActiveRecord::StatementInvalid: PG::ProtocolViolation: ERROR: bind message supplies 2 parameters, but prepared statement "a15" requires 1
: SELECT "surveys".* FROM "surveys" WHERE "surveys"."competitor_id" = $1 AND "surveys"."id" = $1 ORDER BY "surveys"."id" ASC LIMIT 1
from /Users/steven/.rvm/gems/ruby-2.1.0/gems/activerecord-4.0.2/lib/active_record/connection_adapters/postgresql_adapter.rb:786:in `get_last_result'
In "Railspeak" not "Postgrespeak", what does ActiveRecord::StatementInvalid: PG::ProtocolViolation: ERROR: bind message supplies 2 parameters, but prepared statement "a15" requires 1 mean? And how can it help me to debug my method?
My environment:
$ rails -v
Rails 4.0.2
$ ruby -v
ruby 2.1.0p0 (2013-12-25 revision 44422) [x86_64-darwin12.0]
$ psql --version
psql (PostgreSQL) 9.3.1

Disable Prepared Statements
production:
  adapter: postgresql
  prepared_statements: false
Check out http://edgeguides.rubyonrails.org/configuring.html
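As an additional troubleshooting step (my suggestion, not part of the answer above), the adapter's prepared-statement cache can also be cleared from a Rails console; this discards cached statements such as "a15" so they are re-prepared on the next query:
# Clears the connection's cached prepared statements (PostgreSQL adapter).
# Useful as a quick sanity check when the cache looks stale after code or schema changes.
ActiveRecord::Base.connection.clear_cache!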

I experienced the same problem, but after double-checking my code, I found that I had used the same bind placeholder for two different fields; that is, I had written fieldA = $2 and also fieldB = $2. After correcting that, everything worked fine.

My problem was that I had quotes around the variable:
quiz_subject = subject_id WHERE subject_name = '$1';
I removed the quotes:
quiz_subject = subject_id WHERE subject_name = $1;
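To make the placeholder/parameter mismatch behind the original error concrete, here is a minimal sketch using the pg gem directly (the connection details and statement names are made up; the values come from the question's log):
require 'pg'

conn = PG.connect(dbname: 'myapp_development') # hypothetical database name

# Correct: two distinct placeholders, two bind values.
conn.prepare('survey_lookup', 'SELECT * FROM surveys WHERE competitor_id = $1 AND id = $2')
conn.exec_prepared('survey_lookup', [248, 15183])

# Broken: reusing $1 for both columns means the prepared statement only requires one
# parameter, so supplying two values raises PG::ProtocolViolation with the same
# "bind message supplies 2 parameters, but prepared statement requires 1" message.
conn.prepare('broken_lookup', 'SELECT * FROM surveys WHERE competitor_id = $1 AND id = $1')
conn.exec_prepared('broken_lookup', [248, 15183])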

Related

How to circumvent ActiveRecord query optimization when counter cache is 0

It seems that ActiveRecord optimizes the loading of associated models: when the counter cache column is 0, it assumes there are no associated models and therefore does not execute a SELECT, and immediately returns an empty CollectionProxy.
This causes an annoyance in a model test, where fixtures are INSERTed into the database outside of the ActiveRecord lifecycle - all counter caches are 0. One workaround is to explicitly define values for counter cache attributes in the fixture files. Another is to invoke update_counters in a test.
But, is there a way to "force" ActiveRecord to execute an association query when a counter cache column is 0? (And, why the heck does it execute the query within the debugger but not in the test? See below for the suspense on that...)
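For reference, the two workarounds mentioned above are small; here is a hedged sketch using the fixture and model names from the scenario below (customer_types / customers):
# Workaround 1: seed the counter cache in the fixture itself (test/fixtures/customer_types.yml)
#   one:
#     name: Fake Customer Type 1
#     customers_count: 1

# Workaround 2: adjust the counter in the test (update_counters applies a delta),
# or force a real query regardless of the cached count with reload:
CustomerType.update_counters(customer_types(:one).id, customers_count: 1)
customer_types(:one).customers.reload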
Here are the details of a scenario to illustrate.
After adding a counter cache column to a model, a deletion test is now failing, because the fixture data does not seem to be loading associated objects the same way as it was before the counter cache was added.
Given existing Customer and CustomerType models:
class Customer < ApplicationRecord
  belongs_to :customer_type
  # ...
end

class CustomerType < ApplicationRecord
  has_many :customers, dependent: :restrict_with_error
  # ...
end
I have the following fixture in customers.yml:
one:
  name: Foo
  customer_type: one
And the following fixture in customer_types.yml:
one:
  name: Fake Customer Type 1
Prior to the addition of the counter cache, this test passes:
test 'cannot be deleted if it has associated customers' do
  customer_type = customers.first.customer_type
  assert_not_empty customer_type.customers

  customer_type.destroy
  refute customer_type.destroyed?
end
I run the following migration:
class AddCustomersCountToCustomerType < ActiveRecord::Migration[7.0]
  def change
    add_column :customer_types, :customers_count, :integer, default: 0, null: false

    reversible do |dir|
      dir.up { update_counter }
    end
  end

  def update_counter
    execute <<-SQL.squish
      UPDATE customer_types
      SET customers_count = (SELECT count(1)
                             FROM customers
                             WHERE customers.customer_type_id = customer_types.id);
    SQL
  end
end
And update the belongs_to declaration in Customer:
class Customer < ApplicationRecord
  belongs_to :customer_type, counter_cache: true
  # ...
end
And then the test fails on the first assertion!
test 'cannot be deleted if it has associated customers' do
  customer_type = customers.first.customer_type
  assert_not_empty customer_type.customers
Now, when I add a binding.break before the assertion, and I evaluate:
customer_type.customers
The collection has a customer in it and is not empty. Continuing from the breakpoint then passes the assert_not_empty assertion!
But, if I rerun the test and, at the breakpoint, I only call assert_not_empty customer_type.customers the assertion fails, and calling customer_type.customers returns an empty list. (!)
What is maddening is that, if I invoke customer_type.customers in the test before the assertion, it still fails, despite seeing different behavior when I drop into the breakpoint!
This fails:
customer_type = customers.first.customer_type
customer_type.customers
binding.break # in the debugger, immediately continue
assert_not_empty customer_type.customers
But this passes:
customer_type = customers.first.customer_type
customer_type.customers
binding.break # in the debugger, call customer_type.customers
assert_not_empty customer_type.customers
There are no other callbacks or interesting persistence behaviors in these models - they are a very vanilla 1:M.
Here are some observations of the test log.
Here is the test.
test 'cannot be deleted if it has associated customers' do
  Rails.logger.info("A")
  customer_type = customers.first.customer_type
  Rails.logger.info("B")
  customer_type.customers
  Rails.logger.info("C")
  binding.break
  Rails.logger.info("D")
  assert_not_empty customer_type.customers
  Rails.logger.info("E")
  customer_type.destroy
  refute customer_type.destroyed?
end
Now, without the counter_cache option, the test passes and I see this in the log:
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
Customer Load (0.5ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 298486374], ["LIMIT", 1]]
Customer Load (0.3ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 338564009], ["LIMIT", 1]]
CustomerType Load (0.5ms) SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
B
C
D
Customer Exists? (0.9ms) SELECT 1 AS one FROM "customers" WHERE "customers"."customer_type_id" = $1 LIMIT $2 [["customer_type_id", 980190962], ["LIMIT", 1]]
E
CACHE Customer Exists? (0.0ms) SELECT 1 AS one FROM "customers" WHERE "customers"."customer_type_id" = $1 LIMIT $2 [["customer_type_id", 980190962], ["LIMIT", 1]]
TRANSACTION (0.8ms) ROLLBACK
Makes sense.
Now, I re-enable the counter_cache option. When I hit the breakpoint, I immediately continue. Test fails. Here is the log output:
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
Customer Load (0.2ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 298486374], ["LIMIT", 1]]
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 338564009], ["LIMIT", 1]]
CustomerType Load (0.6ms) SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
B
C
D
TRANSACTION (0.6ms) ROLLBACK
So here it seems that, when an association has a counter_cache column and its value is 0 (it is zero because fixtures do not trigger counter incrementing), ActiveRecord is "optimizing" by not even executing a query. (Can anyone confirm this in the source / changelog?)
Now, here is the messed up thing. Same test, but when I hit the breakpoint, in the debugger I invoke customer_type.customers. Test passes. Here is the log.
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
Customer Load (0.6ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 298486374], ["LIMIT", 1]]
Customer Load (0.3ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 338564009], ["LIMIT", 1]]
CustomerType Load (0.8ms) SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
B
C
Customer Load (48.1ms) SELECT "customers".* FROM "customers" WHERE "customers"."customer_type_id" = $1 ORDER BY last_name ASC [["customer_type_id", 980190962]]
D
E
TRANSACTION (1.6ms) ROLLBACK
Why the heck is an explicit customer_type.customers invocation in the debugger causing a query, but that exact same statement in my test is not?

Rails 5 API - controller update action sometimes does not reflect database changes (cache issue)?

I have 2 simple models in a has_many relationship. A Template has_many TemplateItems. A Template has a template_type which can be one of two values ('template' or 'checklist').
For brevity I have removed non-relevant code.
template.rb
class Template < ApplicationRecord
  # Relationships
  belongs_to :account
  has_many :template_items, -> { order('sort ASC') }, dependent: :destroy
  accepts_nested_attributes_for :template_items, allow_destroy: true

  # Enums
  enum template_type: {template: 0, checklist: 1}
  enum status: {not_started: 0, started: 1, completed: 2}

  # Callbacks
  before_save :set_status, unless: :is_template? # only care about status for checklists

  def is_template?
    return self.template_type == 'template'
  end

  def set_status
    completed = 0
    self.template_items.each do |item|
      completed += 1 if item.is_completed
    end
    case completed
    when 0
      self.status = Template.statuses[:not_started]
    when 1..(self.template_items.length - 1)
      self.status = Template.statuses[:started]
    when self.template_items.length
      self.status = Template.statuses[:completed]
    end
  end
end
template_item.rb
class TemplateItem < ApplicationRecord
  # Relationships
  belongs_to :template

  # Validations
  validates_presence_of :template
end
When a client sends an update to Template Controller, it includes the template_items nested:
templates_controller.rb
def template_params
  params.require(:template).
    permit(:id, :account_id, :list_type, :name, :title, :info, :status,
           template_items_attributes:
             [:id, :template_id, :is_completed, :content, :item_type, :sort, :_destroy])
end
Notice that one of the attributes of an item is called sort. Notice also that the sort order is used in the Template model to sort the template_items (see the has_many line).
If a client resorts the template_items, the following update action is called:
templates_controller.rb
def update
  if @template.update(template_params)
    render json: @template, serializer: TemplateSerializer, status: :ok
  else
    render json: ErrorSerializer.serialize(@template.errors), status: :unprocessable_entity
  end
end
The strange behaviour is that the database is always updated (verified in the logs and in the db), but sometimes the render does not show the new sort order and instead renders the previous sort order.
Here is the log when the action incorrectly returns the previous data:
I, [2018-02-20T20:22:55.997835 #1852] INFO -- : Processing by Api::TemplatesController#update as JSON
...parameters here...
D, [2018-02-20T20:22:56.002965 #1852] DEBUG -- : User Load (1.7ms) SELECT "users".* FROM "users" WHERE "users"."uid" = $1 LIMIT $2 [["uid", "rmcsharry+owner@gmail.com"], ["LIMIT", 1]]
D, [2018-02-20T20:22:56.115190 #1852] DEBUG -- : Template Load (2.6ms) SELECT "templates".* FROM "templates" WHERE "templates"."id" = $1 ORDER BY LOWER(templates.name) ASC LIMIT $2 [["id", "f9f6bca2-cb84-4349-8546-ca38026db407"], ["LIMIT", 1]]
D, [2018-02-20T20:22:56.121995 #1852] DEBUG -- : (0.4ms) BEGIN
D, [2018-02-20T20:22:56.129177 #1852] DEBUG -- : TemplateItem Load (2.5ms) SELECT "template_items".* FROM "template_items" WHERE "template_items"."template_id" = $1 AND "template_items"."id" IN ('419cb7ec-ca3f-4911-8a00-bec20f5ca89c', 'a7ac1687-8cb5-4199-a03b-d7cc975a0387', 'd7d885b6-2a75-487a-918c-6f3abaae7df1', 'b1b0277c-632f-4fe1-82e5-d020ee313d5b') ORDER BY sort ASC [["template_id", "f9f6bca2-cb84-4349-8546-ca38026db407"]]
D, [2018-02-20T20:22:56.137975 #1852] DEBUG -- : Account Load (1.4ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", "c379e356-4cce-4de2-b1b4-984b773dd43e"], ["LIMIT", 1]]
D, [2018-02-20T20:22:56.144421 #1852] DEBUG -- : CACHE Template Load (0.0ms) SELECT "templates".* FROM "templates" WHERE "templates"."id" = $1 ORDER BY LOWER(templates.name) ASC LIMIT $2 [["id", "f9f6bca2-cb84-4349-8546-ca38026db407"], ["LIMIT", 1]]
D, [2018-02-20T20:22:56.148992 #1852] DEBUG -- : CACHE Template Load (0.0ms) SELECT "templates".* FROM "templates" WHERE "templates"."id" = $1 ORDER BY LOWER(templates.name) ASC LIMIT $2 [["id", "f9f6bca2-cb84-4349-8546-ca38026db407"], ["LIMIT", 1]]
D, [2018-02-20T20:22:56.156300 #1852] DEBUG -- : TemplateItem Load (2.4ms) SELECT "template_items".* FROM "template_items" WHERE "template_items"."template_id" = $1 ORDER BY sort ASC [["template_id", "f9f6bca2-cb84-4349-8546-ca38026db407"]]
D, [2018-02-20T20:22:56.171567 #1852] DEBUG -- : SQL (1.9ms) UPDATE "template_items" SET "sort" = $1, "updated_at" = $2 WHERE "template_items"."id" = $3 [["sort", 2], ["updated_at", "2018-02-20 19:22:56.167142"], ["id", "d7d885b6-2a75-487a-918c-6f3abaae7df1"]]
D, [2018-02-20T20:22:56.175072 #1852] DEBUG -- : SQL (0.7ms) UPDATE "template_items" SET "sort" = $1, "updated_at" = $2 WHERE "template_items"."id" = $3 [["sort", 1], ["updated_at", "2018-02-20 19:22:56.172797"], ["id", "a7ac1687-8cb5-4199-a03b-d7cc975a0387"]]
D, [2018-02-20T20:22:56.176305 #1852] DEBUG -- : (0.6ms) COMMIT
I, [2018-02-20T20:22:56.183481 #1852] INFO -- : Rendered TemplateSerializer with ActiveModelSerializers::Adapter::Attributes (2.97ms)
Here is the log when the action correctly returns the new data - I have marked the differences (1) and (2):
I, [2018-02-20T20:52:47.490513 #3087] INFO -- : Processing by Api::TemplatesController#update as JSON
...parameters...
D, [2018-02-20T20:52:47.499201 #3087] DEBUG -- : User Load (2.0ms) SELECT "users".* FROM "users" WHERE "users"."uid" = $1 LIMIT $2 [["uid", "rmcsharry+owner@gmail.com"], ["LIMIT", 1]]
D, [2018-02-20T20:52:47.706520 #3087] DEBUG -- : Template Load (2.3ms) SELECT "templates".* FROM "templates" WHERE "templates"."id" = $1 ORDER BY LOWER(templates.name) ASC LIMIT $2 [["id", "c965c3ed-ace2-43af-9abd-f85392bdb948"], ["LIMIT", 1]]
D, [2018-02-20T20:52:47.727668 #3087] DEBUG -- : (0.3ms) BEGIN
D, [2018-02-20T20:52:47.777126 #3087] DEBUG -- : TemplateItem Load (2.2ms) SELECT "template_items".* FROM "template_items" WHERE "template_items"."template_id" = $1 AND "template_items"."id" IN ('ff034c14-252f-4366-9b31-526b5211e92b', '4e6ec7ef-ba53-4ec2-ab2e-97dd3b2c41bc', '3628b6ca-cddb-4d65-a6c3-86dfdcaa92f4', '35e61d68-143c-4bac-ab15-fbbb2b3f13d1') ORDER BY sort ASC [["template_id", "c965c3ed-ace2-43af-9abd-f85392bdb948"]]
D, [2018-02-20T20:52:47.820226 #3087] DEBUG -- : Account Load (1.4ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", "c379e356-4cce-4de2-b1b4-984b773dd43e"], ["LIMIT", 1]]
D, [2018-02-20T20:52:47.847928 #3087] DEBUG -- : CACHE Template Load (0.0ms) SELECT "templates".* FROM "templates" WHERE "templates"."id" = $1 ORDER BY LOWER(templates.name) ASC LIMIT $2 [["id", "c965c3ed-ace2-43af-9abd-f85392bdb948"], ["LIMIT", 1]]
D, [2018-02-20T20:52:47.850995 #3087] DEBUG -- : CACHE Template Load (0.0ms) SELECT "templates".* FROM "templates" WHERE "templates"."id" = $1 ORDER BY LOWER(templates.name) ASC LIMIT $2 [["id", "c965c3ed-ace2-43af-9abd-f85392bdb948"], ["LIMIT", 1]]
(1) D, [2018-02-20T20:52:47.856858 #3087] DEBUG -- : Template Exists (0.9ms) SELECT 1 AS one FROM "templates" WHERE "templates"."name" = $1 AND ("templates"."id" != $2) AND "templates"."account_id" = 'c379e356-4cce-4de2-b1b4-984b773dd43e' AND "templates"."template_type" = $3 LIMIT $4 [["name", "Daffy"], ["id", "c965c3ed-ace2-43af-9abd-f85392bdb948"], ["template_type", 0], ["LIMIT", 1]]
D, [2018-02-20T20:52:47.863415 #3087] DEBUG -- : SQL (1.1ms) UPDATE "template_items" SET "sort" = $1, "updated_at" = $2 WHERE "template_items"."id" = $3 [["sort", 2], ["updated_at", "2018-02-20 19:52:47.859495"], ["id", "3628b6ca-cddb-4d65-a6c3-86dfdcaa92f4"]]
D, [2018-02-20T20:52:47.865969 #3087] DEBUG -- : SQL (0.6ms) UPDATE "template_items" SET "sort" = $1, "updated_at" = $2 WHERE "template_items"."id" = $3 [["sort", 3], ["updated_at", "2018-02-20 19:52:47.864044"], ["id", "35e61d68-143c-4bac-ab15-fbbb2b3f13d1"]]
D, [2018-02-20T20:52:47.868568 #3087] DEBUG -- : (2.0ms) COMMIT
(2) D, [2018-02-20T20:52:47.918381 #3087] DEBUG -- : TemplateItem Load (1.5ms) SELECT "template_items".* FROM "template_items" WHERE "template_items"."template_id" = $1 ORDER BY sort ASC [["template_id", "c965c3ed-ace2-43af-9abd-f85392bdb948"]]
I, [2018-02-20T20:52:47.930257 #3087] INFO -- : Rendered TemplateSerializer with ActiveModelSerializers::Adapter::Attributes (17.22ms)
Notice the differences:
(1) the log shows a 'Template Exists' message
(2) after the commit Rails reloads the template_items to get the updated data from the database.
I know that I can fix this and force the update action to always do (2) and reload the template_items child objects:
templates_controller.rb
def update
  if @template.update(template_params)
    @template.template_items.reload
    render json: @template, serializer: TemplateSerializer, status: :ok
  else
    render json: ErrorSerializer.serialize(@template.errors), status: :unprocessable_entity
  end
end
But why do I need to do that if Rails has the ability (sometimes) to figure that out on its own? Although the cache is used in both calls, in the correct second example Rails has figured out it needs to reload the child objects after the database was updated, but not in the first case.
So what I am trying to understand is what controls this behaviour. It seems to me that it must be related to the before_save action in the Template model, since that action only fires for the 2nd case (template_type is 'template') and not the 1st (template_type is 'checklist'). In other words it seems when that action fires it 'changes' the behaviour of the update action.
So my questions are:
Why the different behaviour for the same action? If it is the before_save, then why?
Why in the correct case does the log show Template Exists (since it does exist in both cases)?
How does Rails know to reload the updated children in the correct case but not in the incorrect case?
** UPDATE **
Here is the template_serializer.rb
class TemplateSerializer < ActiveModel::Serializer
  attributes :id,
             :account_id,
             :name,
             :info,
             :title,
             :template_type,
             :status

  has_many :template_items, include_nested_associations: true
end
The issue here is that you are requesting the items prior to changing the sort. This means that the array of items you have will no longer be sorted, since you changed the property they are sorted on. Put another way, after you modify them, there isn't another query that returns the correct order.
So, I'll say the possible solutions are:
1. Reload the items after you mutate the sort.
2. Don't pull the items until after you mutate the sort.
3. Mutate the order of template_items based on the sort values that changed.
The tradeoffs, respectively:
1. You have 2 select queries as well as the updates.
2. You have to update the records using TemplateItem.update(id, sort: sort), with all those updates inside a transaction, prior to selecting the records.
3. If you aren't rendering all the results, or decide not to in the future, it is possible that you will be modifying an item which will no longer be on the page. And possibly other issues.
Why the different behaviour for the same action? If it is the before_save, then why?
The before_save is requesting template_items prior to them being saved. Otherwise, template_items doesn't get called until the serializer renders them. Note that this means your before_save callback isn't performing the way you want it to, since it is setting the status based on the previous values.
Why in the correct case does the log show Template Exists (since it does exist in both cases)?
SELECT 1 AS one FROM "templates" WHERE
"templates"."name" = 'Daffy' AND
("templates"."id" != 'c965c3ed-ace2-43af-9abd-f85392bdb948') AND
"templates"."account_id" = 'c379e356-4cce-4de2-b1b4-984b773dd43e' AND
"templates"."template_type" = 0
LIMIT 1
Looking at the SQL, this looks like a validation to ensure name is unique across templates and type.
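For reference, the kind of validation that emits that Template Exists query looks roughly like this (an assumption about the app's model; the actual validation is not shown in the question):
class Template < ApplicationRecord
  # A scoped uniqueness check runs a SELECT 1 AS one ... LIMIT 1 query before saving,
  # which is what appears at (1) in the log.
  validates :name, uniqueness: { scope: [:account_id, :template_type] }
end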
How does Rails know to reload the updated children in the correct case but not in the incorrect case?
Rails does not know. It is only loading them once in both cases; it's just that, with the before_save, the load happens before the records are updated.
Summary:
The easiest way to fix this timing issue would be to use a different callback that fires after the children have been updated, such as after_update.
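A rough sketch of that direction (my own untested assumption, not the answerer's code; verify that after_update really fires after the nested template_items have been written, since callback ordering matters here):
class Template < ApplicationRecord
  # ... existing associations, enums, accepts_nested_attributes_for ...
  after_update :set_status_from_items, unless: :is_template?

  private

  def set_status_from_items
    items = template_items.reload.to_a           # re-read the children after they were saved
    completed = items.count(&:is_completed)
    new_status =
      if completed.zero?
        :not_started
      elsif completed < items.size
        :started
      else
        :completed
      end
    update_column(:status, Template.statuses[new_status]) # write without firing callbacks again
  end
end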

Handling a massive query in Rails

What's the best way to handle a large result set with Rails and Postgres? I didn't have a problem until today, but now I'm trying to return a 124,000-record collection of @network_hosts, which has effectively DoS'd my development server.
My ActiveRecord code isn't the prettiest, but I'm pretty sure cleaning it up isn't going to help much with performance.
@network_hosts = []
@host_count = 0
@company.locations.each do |l|
  if l.grace_enabled == nil || l.grace_enabled == false
    l.network_hosts.each do |h|
      @host_count += 1
      @network_hosts.push(h)
      @network_hosts.sort! { |x,y| x.ip_address <=> y.ip_address }
      @network_hosts = @network_hosts.first(5)
    end
  end
end
In the end, I need to be able to return @network_hosts to the controller for processing into the view.
Is this something that Sidekiq would be able to help with, or would it take just as long? If Sidekiq is the path to take, how do I handle not having the @network_hosts object upon page load, since the job runs asynchronously?
I believe you want to (1) get rid of all that looping (you've got a lot of queries going on) and (2) do your sorting with your AR query instead of in the array.
Perhaps something like:
NetworkHost.
  where(location: Location.where(grace_enabled: [nil, false]).where(company: @company)).
  order(ip_address: :asc).
  tap do |network_hosts|
    @network_hosts = network_hosts.limit(5)
    @host_count = network_hosts.count
  end
Something like that ought to do it in a single DB query.
I had to make some assumptions about how your associations are set up and that you're looking for locations where grace_enabled isn't true (nil or false).
I haven't tested this, so it may well be buggy. But, I think the direction is correct.
Something to remember: Rails won't execute any SQL queries until the result of the query is actually needed. (I'll be using User instead of NetworkHost so I can show you the console output as I go.)
@users = User.where(first_name: 'Random');nil # No query run
=> nil
@users # query is now run because the results are needed (they are being output to the IRB window)
# User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 LIMIT $2 [["first_name", "Random"], ["LIMIT", 11]]
# => #<ActiveRecord::Relation [...]>
@users = User.where(first_name: 'Random') # query will be run because the results are needed for the output into the IRB window
# User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 LIMIT $2 [["first_name", "Random"], ["LIMIT", 11]]
# => #<ActiveRecord::Relation [...]>
Why is this important? It allows you to store the query you want to run in the instance variable and not execute it until you get to a view where you can use some of the nice methods of ActiveRecord::Batches. In particular, if you have some view (or export function, etc.) where you are iterating the @network_hosts, you can use find_each.
# Controller
@users = User.where(first_name: 'Random') # No query run

# View
@users.find_each(batch_size: 1) do |user|
  puts "User's ID is #{user.id}"
end
# User Load (0.5ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["first_name", "Random"], ["LIMIT", 1]]
# User's ID is 1
# User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 AND ("users"."id" > 1) ORDER BY "users"."id" ASC LIMIT $2 [["first_name", "Random"], ["LIMIT", 1]]
# User's ID is 2
# User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 AND ("users"."id" > 2) ORDER BY "users"."id" ASC LIMIT $2 [["first_name", "Random"], ["LIMIT", 1]]
# => nil
Your query is not executed until the view, where it will now load only 1,000 records (configurable) into memory at a time. Once it reaches the end of those 1,000 records, it will automatically run another query to fetch the next 1,000 records. So your memory is much more sane, at the cost of extra database queries (which are usually pretty quick)
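Tying that back to the question (the model and association names are assumed from the post, as in the answer above), the controller/view split might look roughly like this:
# Controller: build the relation only; no rows are loaded here.
@network_hosts = NetworkHost.where(
  location: Location.where(company: @company, grace_enabled: [nil, false])
)
@host_count = @network_hosts.count # one SELECT COUNT(*), still no rows loaded

# View (or CSV export, etc.): iterate in batches of 1,000 by default.
@network_hosts.find_each do |host|
  # render or export each host without holding all 124,000 records in memory
end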

Rails Console always returning Nil

After RAILS_ENV=development rails c
rails console development
etc...
My console can't return any values for my queries. And of course, I have data.
I am using Vagrant and port forwarding.
[20] pry(main)> User.first
Empresa::Empresa Load (1.1ms) SELECT "empresas".* FROM "empresas" WHERE "empresas"."id" = $1 LIMIT 1 [["id", nil]]
User Load (1.1ms) SELECT "usuarios".* FROM "usuarios" INNER JOIN "pessoas" ON "pessoas"."id" = "usuarios"."pessoa_id" AND "pessoas"."empresa_id" IS NULL WHERE (pessoas.empresa_id = NULL) ORDER BY "usuarios"."id" ASC LIMIT 1
=> nil
No matter what, it always returns nil.
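A hedged aside (not part of the original thread): the logged condition WHERE (pessoas.empresa_id = NULL) can never match, because SQL equality against NULL is never true, so any relation carrying it always comes back empty, and User.first then returns nil even though the table has data. A string condition interpolating nil produces exactly that; a hash condition produces IS NULL instead. A minimal illustration (the scope below is a guess, not the poster's actual code):
class User < ActiveRecord::Base
  belongs_to :pessoa

  # String condition: nil is quoted as NULL, yielding "pessoas.empresa_id = NULL",
  # which matches no rows.
  # default_scope { joins(:pessoa).where('pessoas.empresa_id = ?', nil) }

  # Hash condition: generates "pessoas"."empresa_id" IS NULL, which works.
  default_scope { joins(:pessoa).where(pessoas: { empresa_id: nil }) }
end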

Rails: How to step-by-step debug a request

I have an ajax request that is causing problems in my Rails 3.0.9 app. I can see the problem in the logs, but I don't have any idea what is triggering it between the ajax call and the render. Here's the log, and the event I don't want with ** beside it:
Started DELETE "/notifications/13" for 127.0.0.1 at 2011-06-21 22:08:39 -0500
Processing by NotificationsController#destroy as JS
Parameters: {"id"=>"13"}
SQL (0.4ms) SELECT name
FROM sqlite_master
WHERE type = 'table' AND NOT name = 'sqlite_sequence'
SQL (0.3ms) SELECT name
FROM sqlite_master
WHERE type = 'table' AND NOT name = 'sqlite_sequence'
User Load (0.8ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 1 AND "slugs".sluggable_type = 'User') ORDER BY id DESC LIMIT 1
****AREL (0.3ms) UPDATE "users" SET "remember_token" = NULL, "remember_created_at" = NULL, "updated_at" = '2011-06-22 03:08:40.084049', "preferences" = '---
:email_notifications: ''true''
' WHERE "users"."id" = 1
Notification Load (0.2ms) SELECT "notifications".* FROM "notifications" WHERE "notifications"."id" = 13 LIMIT 1
User Load (0.9ms) SELECT "users".* FROM "users" WHERE "users"."id" = 1 LIMIT 1
AREL (0.3ms) UPDATE "users" SET "notifications_count" = COALESCE("notifications_count", 0) - 1 WHERE "users"."id" = 1
AREL (0.1ms) DELETE FROM "notifications" WHERE "notifications"."id" = 13
Completed 200 OK in 1334ms
I'd like to somehow step through this request, sort of like the way you can step through a function in JavaScript using Firebug.
Is there a way to debug like this so I can see how that specific AREL command is getting called?
Have you looked at the Ruby on Rails guides on debugging? You can step through code just like in gdb.
This RailsCast is also quite useful.
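For the step-through workflow specifically, here is a rough sketch for a Rails 3.x app (assuming gem 'ruby-debug19' in the Gemfile and the server started with rails server --debugger; the action body below is a guess, since the real NotificationsController is not shown in the question):
class NotificationsController < ApplicationController
  def destroy
    debugger # execution pauses here; use next, step, continue, and p some_variable
    @notification = current_user.notifications.find(params[:id])
    @notification.destroy

    respond_to do |format|
      format.js
    end
  end
end
From the paused prompt you can step line by line and watch the log to see exactly which statement triggers the unwanted UPDATE "users" query.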
