Is there a way to order the blobs in ActiveStorage?
The following works
class Project < ApplicationRecord
  has_many :attachments, as: :attachable

  scope :with_attached_files, -> {
    includes(:attachments)
      .merge(Attachment.with_attached_file.order('active_storage_blobs.filename'))
  }
end
However, it is case sensitive.
I have tried
scope :with_attached_files, -> {
  includes(:attachments)
    .merge(Attachment.with_attached_file.order(Arel.sql('lower(active_storage_blobs.filename)')))
}
but it generates a SQL error.
PG::UndefinedTable - ERROR: missing FROM-clause entry for table
"active_storage_blobs"
The generated query is
SELECT "projects".* FROM "projects" WHERE "projects"."slug" = $1 ORDER BY lower(active_storage_blobs.filename) LIMIT $2 [["slug", "aa0001-18"], ["LIMIT", 1]]
My current idea is to override the default scope on ActiveStorage::Blob, but that hasn't worked so far.
It seems that ActiveRecord optimizes the loading of associated models: when the counter cache column is 0, it assumes there are no associated models and therefore does not execute a SELECT, and immediately returns an empty CollectionProxy.
This causes an annoyance in a model test, where fixtures are INSERTed into the database outside of the ActiveRecord lifecycle - all counter caches are 0. One workaround is to explicitly define values for counter cache attributes in the fixture files. Another is to invoke update_counters in a test.
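For example, the fixture-file workaround might look like this in customer_types.yml (assuming a type referenced by exactly one customer fixture; the value must match the number of customer fixtures pointing at it):

```yaml
one:
  name: Fake Customer Type 1
  customers_count: 1  # keep in sync with the customers fixtures
```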
But, is there a way to "force" ActiveRecord to execute an association query when a counter cache column is 0? (And, why the heck does it execute the query within the debugger but not in the test? See below for the suspense on that...)
Here are the details of a scenario to illustrate.
After adding a counter cache column to a model, a deletion test is now failing, because the fixture data does not seem to load associated objects the same way it did before the counter cache was added.
Given existing Customer and CustomerType models:
class Customer < ApplicationRecord
  belongs_to :customer_type
  # ...
end

class CustomerType < ApplicationRecord
  has_many :customers, dependent: :restrict_with_error
  # ...
end
I have the following fixture in customers.yml:
one:
  name: Foo
  customer_type: one
And the following fixture in customer_types.yml:
one:
  name: Fake Customer Type 1
Prior to the addition of the counter cache, this test passes:
test 'cannot be deleted if it has associated customers' do
  customer_type = customers.first.customer_type
  assert_not_empty customer_type.customers

  customer_type.destroy
  refute customer_type.destroyed?
end
I run the following migration:
class AddCustomersCountToCustomerType < ActiveRecord::Migration[7.0]
  def change
    add_column :customer_types, :customers_count, :integer, default: 0, null: false

    reversible do |dir|
      dir.up { update_counter }
    end
  end

  def update_counter
    execute <<-SQL.squish
      UPDATE customer_types
      SET customers_count = (SELECT count(1)
                             FROM customers
                             WHERE customers.customer_type_id = customer_types.id);
    SQL
  end
end
And update the belongs_to declaration in Customer:
class Customer < ApplicationRecord
  belongs_to :customer_type, counter_cache: true
  # ...
end
And then the test fails on the first assertion!
test 'cannot be deleted if it has associated customers' do
  customer_type = customers.first.customer_type
  assert_not_empty customer_type.customers
Now, when I add a binding.break before the assertion, and I evaluate:
customer_type.customers
The collection has a customer in it and is not empty. Continuing from the breakpoint then passes the assert_not_empty assertion!
But if I rerun the test and, at the breakpoint, only call assert_not_empty customer_type.customers, the assertion fails, and calling customer_type.customers returns an empty list. (!)
What is maddening is that, if I invoke customer_type.customers in the test before the assertion, it still fails, despite seeing different behavior when I drop into the breakpoint!
This fails:
customer_type = customers.first.customer_type
customer_type.customers
binding.break # in the debugger, immediately continue
assert_not_empty customer_type.customers
But this passes:
customer_type = customers.first.customer_type
customer_type.customers
binding.break # in the debugger, call customer_type.customers
assert_not_empty customer_type.customers
There are no other callbacks or interesting persistence behaviors in these models - they are a very vanilla 1:M.
Here are some observations of the test log.
Here is the test.
test 'cannot be deleted if it has associated customers' do
  Rails.logger.info("A")
  customer_type = customers.first.customer_type
  Rails.logger.info("B")
  customer_type.customers
  Rails.logger.info("C")
  binding.break
  Rails.logger.info("D")
  assert_not_empty customer_type.customers
  Rails.logger.info("E")

  customer_type.destroy
  refute customer_type.destroyed?
end
Now, without the counter_cache option, the test passes and I see this in the log:
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
Customer Load (0.5ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 298486374], ["LIMIT", 1]]
Customer Load (0.3ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 338564009], ["LIMIT", 1]]
CustomerType Load (0.5ms) SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
B
C
D
Customer Exists? (0.9ms) SELECT 1 AS one FROM "customers" WHERE "customers"."customer_type_id" = $1 LIMIT $2 [["customer_type_id", 980190962], ["LIMIT", 1]]
E
CACHE Customer Exists? (0.0ms) SELECT 1 AS one FROM "customers" WHERE "customers"."customer_type_id" = $1 LIMIT $2 [["customer_type_id", 980190962], ["LIMIT", 1]]
TRANSACTION (0.8ms) ROLLBACK
Makes sense.
Now, I re-enable the counter_cache option. When I hit the breakpoint, I immediately continue. Test fails. Here is the log output:
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
Customer Load (0.2ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 298486374], ["LIMIT", 1]]
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 338564009], ["LIMIT", 1]]
CustomerType Load (0.6ms) SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
B
C
D
TRANSACTION (0.6ms) ROLLBACK
So it seems that, when an association has a counter_cache column and the value is 0 (it is zero because fixtures do not trigger counter incrementing), ActiveRecord is "optimizing" by not even executing a query. (Can anyone confirm this in the source / changelog?)
Now, here is the messed up thing. Same test, but when I hit the breakpoint, in the debugger I invoke customer_type.customers. Test passes. Here is the log.
-----------------------------------------------------------------------
CustomerTypeTest: test_cannot_be_deleted_if_it_has_associated_customers
-----------------------------------------------------------------------
A
Customer Load (0.6ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
Customer Load (0.4ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 298486374], ["LIMIT", 1]]
Customer Load (0.3ms) SELECT "customers".* FROM "customers" WHERE "customers"."id" = $1 LIMIT $2 [["id", 338564009], ["LIMIT", 1]]
CustomerType Load (0.8ms) SELECT "customer_types".* FROM "customer_types" WHERE "customer_types"."id" = $1 ORDER BY name ASC LIMIT $2 [["id", 980190962], ["LIMIT", 1]]
B
C
Customer Load (48.1ms) SELECT "customers".* FROM "customers" WHERE "customers"."customer_type_id" = $1 ORDER BY last_name ASC [["customer_type_id", 980190962]]
D
E
TRANSACTION (1.6ms) ROLLBACK
Why the heck is an explicit customer_type.customers invocation in the debugger causing a query, but that exact same statement in my test is not?
After I upgraded Rails from 5.1 to 5.2, I started getting the following error:
NoMethodError: undefined method `expr' for nil:NilClass
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency/join_association.rb:47:in `block in join_constraints'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency/join_association.rb:33:in `reverse_each'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency/join_association.rb:33:in `join_constraints'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency.rb:167:in `make_constraints'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency.rb:177:in `make_join_constraints'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency.rb:104:in `block in join_constraints'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency.rb:103:in `each'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency.rb:103:in `flat_map'
from /gems_path/activerecord-5.2.0/lib/active_record/associations/join_dependency.rb:103:in `join_constraints'
from /gems_path/activerecord-5.2.0/lib/active_record/relation/query_methods.rb:1026:in `build_join_query'
from /gems_path/activerecord-5.2.0/lib/active_record/relation/query_methods.rb:1008:in `build_joins'
from /gems_path/activerecord-5.2.0/lib/active_record/relation/query_methods.rb:928:in `build_arel'
from /gems_path/activerecord-5.2.0/lib/active_record/relation/query_methods.rb:903:in `arel'
from /gems_path/activerecord-5.2.0/lib/active_record/relation.rb:554:in `block in exec_queries'
from /gems_path/activerecord-5.2.0/lib/active_record/relation.rb:578:in `skip_query_cache_if_necessary'
from /gems_path/activerecord-5.2.0/lib/active_record/relation.rb:542:in `exec_queries'
from /gems_path/activerecord-5.2.0/lib/active_record/relation.rb:414:in `load'
from /gems_path/activerecord-5.2.0/lib/active_record/relation.rb:200:in `records'
from /gems_path/activerecord-5.2.0/lib/active_record/relation/delegation.rb:41:in `[]'
The code that causes the error looks like this:
class Post
  has_many :comments

  has_one :last_comment, -> {
    joins("LEFT JOIN posts ON posts.id = comments.post_id")
      .where("
        comments.id = (
          SELECT MAX(comments.id) FROM comments
          WHERE comments.post_id = posts.id
        )"
      )
  }, class_name: "Comment"

  scope :with_last_comment, -> { joins(:last_comment) }
end
I created this gist, which contains the full code needed to reproduce the bug. Download issue_with_joins_in_scopes_in_rails_5_3.rb and run it with
ruby issue_with_joins_in_scopes_in_rails_5_3.rb
You can look at this GitHub issue for more details.
What is the difference between joins in Rails 5.1 and 5.2 that causes Post.with_last_comment to fail with this error in Rails 5.2?
How can I change the last_comment association and the with_last_comment scope in the Post model so they work in Rails 5.2?
Posting a partial solution for preloading the last comment for a collection of posts.
First, you will need the following last_comment association:
class Post < ApplicationRecord
  has_many :comments, dependent: :destroy
  has_one :last_comment, -> { order(id: :desc) }, class_name: "Comment"
end
It works with preload and includes in Rails 5.2 and generates the following SQL queries:
Post.includes(:last_comment)
Post Load (0.2ms) SELECT "posts".* FROM "posts" LIMIT ? [["LIMIT", 11]]
Comment Load (0.2ms) SELECT "comments".* FROM "comments" WHERE "comments"."post_id" = ? ORDER BY "comments"."id" DESC [["post_id", 1]]
The problem with this solution is that when I use joins, it ignores the association scope and generates the following SQL query:
Post.joins(:last_comment)
Post Load (0.2ms) SELECT "posts".* FROM "posts" INNER JOIN "comments" ON "comments"."post_id" = "posts"."id" LIMIT ? [["LIMIT", 11]]
I was able to solve it by changing the original last_comment association as follows:
class Post < ApplicationRecord
  has_many :comments, dependent: :destroy

  has_one :last_comment, -> {
    joins(:post) # <--- Note this change
      .where("
        comments.id = (
          SELECT MAX(comments.id) FROM comments
          WHERE comments.post_id = posts.id
        )"
      )
  }, class_name: "Comment"

  scope :with_last_comment, -> { joins(:last_comment) }
end
And now Post.with_last_comment generates the following SQL:
Post Load (0.3ms) SELECT "posts".* FROM "posts" INNER JOIN "comments" ON "comments"."post_id" = "posts"."id" INNER JOIN "posts" "posts_comments" ON "posts_comments"."id" = "comments"."post_id" AND (
comments.id = (
SELECT MAX(comments.id) FROM comments
WHERE comments.post_id = posts.id
)) LIMIT ? [["LIMIT", 11]]
The question of how joins in Rails 5.2 differs from Rails 5.1 is still open.
I have a simple many-to-many relation that doesn't work, and I cannot understand why. I'm sure it's something obvious... but...
class Content < ApplicationRecord
  has_many :content_brands
  has_many :brands, through: :content_brands
end

class ContentBrand < ApplicationRecord
  belongs_to :content
  belongs_to :brand
end

class Brand < ApplicationRecord
  establish_connection Rails.application.config.brands_database_configuration

  has_many :content_brands
  has_many :contents, through: :content_brands
end
But
irb(main):002:0> Content.first.brands
ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation "content_brands" does not exist
LINE 1: SELECT "brands".* FROM "brands" INNER JOIN "content_brands"...
^
: SELECT "brands".* FROM "brands" INNER JOIN "content_brands" ON "brands"."id" = "content_brands"."brand_id" WHERE "content_brands"."content_id" = $1 ORDER BY "brands"."name" ASC LIMIT $2
The table exists; I can query it:
irb(main):006:0> ContentBrand.first.brand
ContentBrand Load (0.5ms) SELECT "content_brands".* FROM "content_brands" ORDER BY "content_brands"."id" ASC LIMIT $1 [["LIMIT", 1]]
Brand Load (27.4ms) SELECT "brands".* FROM "brands" WHERE "brands"."id" = $1 ORDER BY "brands"."name" ASC LIMIT $2 [["id", 1], ["LIMIT", 1]]
=> #<Brand id: 1, name: "Nokia", logo: "nokia.jpeg", created_at: "2016-12-08 15:50:48", updated_at: "2017-02-02 15:51:43", web_site: "http://www.nokia.it">
Why? It's driving me crazy, because the inverse relation works:
Brand.first.contents
Brand Load (25.8ms) SELECT "brands".* FROM "brands" ORDER BY "brands"."name" ASC LIMIT $1 [["LIMIT", 1]]
Content Load (0.7ms) SELECT "contents".* FROM "contents" INNER JOIN "content_brands" ON "contents"."id" = "content_brands"."content_id" WHERE "content_brands"."brand_id" = $1 ORDER BY "contents"."published_at" DESC LIMIT $2 [["brand_id", 391], ["LIMIT", 11]]
=> #<ActiveRecord::Associations::CollectionProxy []>
Update: I forgot to tell you that Brand is on another database...
You can't set up associations to a model stored in another database in ActiveRecord. That makes sense, since you can't join another database in a single query in Postgres without jumping through some pretty serious hoops (postgres_fdw). And given the polyglot nature of ActiveRecord, this would be too much complexity for a very limited use case.
If it's at all possible, I would switch to a single-database setup, even if it means duplicating data.
If you look at the "inverse query", you can see that it works because it's not a single query:
# queries the "brands" database
Brand Load (25.8ms) SELECT "brands".* FROM "brands" ORDER BY "brands"."name" ASC LIMIT $1 [["LIMIT", 1]]
# queries your main database
Content Load (0.7ms) SELECT "contents".* FROM "contents" INNER JOIN "content_brands" ON "contents"."id" = "content_brands"."content_id" WHERE "content_brands"."brand_id" = $1 ORDER BY "contents"."published_at" DESC LIMIT $2 [["brand_id", 391], ["LIMIT", 11]]
However, this does not mean that the association concept as a whole is feasible across databases.
I have a following setup:
class Product < ApplicationRecord
  has_many :variants
end

class Variant < ApplicationRecord
  belongs_to :product
end

Types::QueryType = GraphQL::ObjectType.define do
  connection :products, Types::ProductType.connection_type do
    resolve -> (obj, _, _) do
      Product.all.includes(:variants)
    end
  end
end

Types::ProductType = GraphQL::ObjectType.define do
  connection :variants, Types::VariantType.connection_type do
    resolve -> (obj, _, _) { obj.variants }
  end
end
And running the following query:
{
  products {
    edges {
      node {
        variants {
          edges {
            node {
              id
            }
          }
        }
      }
    }
  }
}
produces the following SQL queries:
Product Load (2.7ms) SELECT "products".* FROM "products" LIMIT $1 [["LIMIT", 25]]
Variant Load (8.6ms) SELECT "variants".* FROM "variants" WHERE "variants"."product_id" IN (1, 2, 3)
Variant Load (19.0ms) SELECT "variants".* FROM "variants" WHERE "variants"."product_id" = $1 LIMIT $2 [["product_id", 1], ["LIMIT", 25]]
Variant Load (13.6ms) SELECT "variants".* FROM "variants" WHERE "variants"."product_id" = $1 LIMIT $2 [["product_id", 2], ["LIMIT", 25]]
Variant Load (2.4ms) SELECT "variants".* FROM "variants" WHERE "variants"."product_id" = $1 LIMIT $2 [["product_id", 3], ["LIMIT", 25]]
As we can see in the SQL output, includes works, but GraphQL doesn't care and makes N+1 queries anyway. Is that normal behaviour, and am I forced to use solutions like graphql-batch to fix it, or is something not right with my setup? From everything I've seen, using includes should be enough for such a simple scenario, and GraphQL should use the eager-loaded data instead of producing the N+1. Have I done anything wrong here?
I'm on graphql-ruby 1.7.9.
I just received a reply on the graphql-ruby issue tracker:
Hey, I noticed that LIMIT 25 is being applied to those queries. Do you
know where that's being applied? If you want to use the result from
the initial query, you should remove the LIMIT clause. (I'm guessing
that if you ask for .limit(25), ActiveRecord won't use a cached
relation.) Maybe you have a default_max_page_size? What happens if you
remove it?
So, long story short, I removed the default_max_page_size config from my schema and it resolved the issue.
Given the following functional snippet I'm having trouble reducing the database queries:
class User < ApplicationRecord
  belongs_to :account

  def self.do_something
    self.find_each do |user|
      puts "#{self.new.account.name}:#{user.name} did something"
    end
  end
end

class Account < ApplicationRecord
  has_many :users
end
a = Account.first
puts 'starting'
a.users.do_something
Account Load (0.4ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
starting
Account Load (0.3ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
Test:User did something
Account Load (0.3ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
Test:User did something
Account Load (0.3ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
Test:User did something
Account Load (0.3ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
Test:User did something
You can see that the Account model is being fetched from the database per user!
I was hoping to use something like self.account in the class method to reference the original account, but that relationship obviously doesn't exist by default, which is why I'm currently using self.new.account.
Is there anywhere else I can fetch the original Account model (saved in a) from inside self.do_something? I can obviously pass the account as a parameter, but that seems tedious, especially if I may add arguments later...
Inside your find_each loop, you should be able to use user.account.
Outside that loop, I don't believe there's a documented / supported / won't-disappear-without-warning way to find the object. Depending on your Rails version, something like self.current_scope.proxy_association.owner might give you the answer you need... but do prefer user.account if at all possible: the more you use private APIs, the harder future upgrades can be.
Alternatively, consider using association extensions to define your do_something method inside the has_many association only -- if it's not suited to be called as User.do_something or User.where(name: "Bob").do_something (because those don't have an associated account), maybe it shouldn't be a top-level method after all.