Rails: Find record by created_at beginning_of_week - ruby-on-rails

Is there a way to find a record by created_at: beginning_of_week?
Something like:
Message.find_by(created_at: Date.today.beginning_of_week)
But it doesn't work.

Because created_at is a datetime, which includes both a date and a time component, your
Message.find_by(created_at: Date.today.beginning_of_week)
# Message Load (6.2ms) SELECT "messages".* FROM "messages" WHERE "messages"."created_at" = $1 LIMIT $2 [["created_at", "2018-01-29"], ["LIMIT", 1]]
...will look for a record created at exactly 2018-01-29 00:00:00, i.e. a Message created precisely at midnight, rather than any record created on 2018-01-29. You do not want that; you want ANY record created on that day (as far as I understood your question). So try the following instead:
date_beginning_this_week = Date.today.beginning_of_week
Message.where(created_at: date_beginning_this_week..(date_beginning_this_week + 1.day))
# Message Load (0.2ms) SELECT "messages".* FROM "messages" WHERE ("messages"."created_at" BETWEEN $1 AND $2) LIMIT $3 [["created_at", "2018-01-29"], ["created_at", "2018-01-30"], ["LIMIT", 11]]
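If you actually want every message created during that week (not just on its first day), the same range approach extends naturally. ActiveSupport also ships all_day and all_week helpers that build those ranges for you; a quick sketch against the same Message model (untested, but the helpers are standard ActiveSupport):
# Every message created on the day the week begins:
Message.where(created_at: Date.today.beginning_of_week.all_day)
# Every message created at any point during the current week:
Message.where(created_at: Date.today.all_week)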

Related

Rails - Devise's registration controller create action seems to trigger twice

I added these lines of code to the create action:
def create
  super
  @card = Card.find(params[:card_id])
  @card.update(user_id: current_user)
end
Everything works fine: the user gets created and the card gets updated, but after the redirect this happens:
Couldn't find Card with 'id'=
Extracted source (around line #14):
def create
  super
  @card = Card.find(params[:card_id])
  @card.update(user_id: current_user)
end
I checked my terminal to find out the reason why this happens, and it seems that create action triggers twice for no reason:
Started POST "/users" for ::1 at 2020-08-12 11:04:34 +0300
Processing by Users::RegistrationsController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"q1W0+ZhzK85uHTcp1x4jKHvCG0ukIgj2JxZuAy6vuLQl/vPqJVu6eXSEWviYTnWC4cXAJk2xCJhl8mgoWzXIAA==", "user"=>{"name"=>"Терл Кабот", "email"=>"tafff1#gmail.com", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "card_id"=>"2000012606"}, "commit"=>"Sign up"}
Card Load (1.0ms) SELECT "cards".* FROM "cards" WHERE "cards"."id" = $1 LIMIT $2 [["id", 2000012606], ["LIMIT", 1]]
(0.0ms)
BEGIN
User Exists (1.0ms) SELECT 1 AS one FROM "users" WHERE "users"."email" = $1 LIMIT $2 [["email", "tafff1@gmail.com"], ["LIMIT", 1]]
SQL (1.0ms) INSERT INTO "users" ("email", "encrypted_password", "name", "created_at", "updated_at") VALUES ($1, $2, $3, $4, $5) RETURNING "id" [["email", "tafff1@gmail.com"], ["encrypted_password", "$2a$12$qTrv/zFUxULi9sqWgYlY/uPjQoJsZxB8PJK2ae/e6YfAFT40ci47e"], ["name", "Терл Кабот"], ["created_at", "2020-08-12 08:04:35.174621"], ["updated_at", "2020-08-12 08:04:35.174621"]]
SQL (1.0ms) UPDATE "cards" SET "user_id" = $1, "updated_at" = $2 WHERE "cards"."id" = $3 [["user_id", 17], ["updated_at", "2020-08-12 08:04:35.178626"], ["id", 2000012606]]
(1.0ms) COMMIT
Redirected to http://localhost:3000/
Card Load (0.0ms) SELECT "cards".* FROM "cards" WHERE "cards"."id" = $1 LIMIT $2 [["id", nil], ["LIMIT", 1]]
Completed 404 Not Found in 378ms (ActiveRecord: 6.0ms)
ActiveRecord::RecordNotFound (Couldn't find Card with 'id'=):
Is there any solution for this?
EDIT: I gave up and just changed the card and user logic; now the user belongs to the card, so I don't have to update the card's user_id from Devise's create action.
The card_id is nested inside the user key, so it will be: params[:user][:card_id]
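For reference, a minimal sketch of the corrected action under that assumption (same controller as in the question; untested):
def create
  super
  # card_id arrives nested under the user key in the sign-up params
  @card = Card.find(params[:user][:card_id])
  @card.update(user_id: current_user.id)
end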

Something wrong with Rails logger. DB query params are not shown

For some time I have noticed that I can't properly see database query params, either in the plain log or through profiling gems.
(Rails 5, Ruby 2.5.1, PostgreSQL 10)
Example (rack-mini-profiler, rails_panel):
SELECT "items"."id" FROM "items" WHERE "item"."category_id" IN ($1,$2,$3) LIMIT $4;
Earlier it was:
SELECT "items"."id" FROM "items" WHERE "item"."category_id" IN (11,12,13) LIMIT 20;
Controller action:
Item.select(:id).where(category_id: @ids).limit(per_page)
# let @ids = [11, 12, 13] and per_page = 20
The output of the plain log in the terminal:
Item Load (2.6ms) SELECT "items"."id" FROM "items" WHERE "item"."category_id" IN ($1,$2,$3) LIMIT $4
[["category_id", 11], ["category_id", 12],["category_id", 13], ["LIMIT", 20]]
I tried rolling back many steps of changes, but nothing helped.
What can I do to get the normal output back for debugging?
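No answer was posted for this one, but one likely explanation (an assumption based only on the versions listed above): on Rails 5 the PostgreSQL adapter uses prepared statements, so the log shows $1, $2, ... placeholders with the bind values in a separate array, which is exactly the output above. Disabling prepared statements per environment in config/database.yml brings back inlined values:
# config/database.yml (development section). "myapp_development" is a
# placeholder name. With prepared_statements: false, ActiveRecord logs
# literal values instead of numbered placeholders.
development:
  adapter: postgresql
  database: myapp_development
  prepared_statements: false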

Handling a massive query in Rails

What's the best way to handle a large result set with Rails and Postgres? I didn't have a problem until today, but now I'm trying to return 124,000 records into @network_hosts, which has effectively DoS'd my development server.
My ActiveRecord code isn't the prettiest, but I'm pretty sure cleaning it up won't help much with performance.
@network_hosts = []
@host_count = 0
@company.locations.each do |l|
  if l.grace_enabled == nil || l.grace_enabled == false
    l.network_hosts.each do |h|
      @host_count += 1
      @network_hosts.push(h)
      @network_hosts.sort! { |x, y| x.ip_address <=> y.ip_address }
      @network_hosts = @network_hosts.first(5)
    end
  end
end
In the end, I need to be able to return @network_hosts to the controller for processing into the view.
Is this something that Sidekiq would be able to help with, or would it take just as long? If Sidekiq is the path to take, how do I handle not having the @network_hosts object on page load, since the job runs asynchronously?
I believe you want to (1) get rid of all that looping (you've got a lot of queries going on) and (2) do your sorting with your AR query instead of in the array.
Perhaps something like:
NetworkHost.
  where(location: Location.where(grace_enabled: [nil, false]).where(company: @company)).
  order(ip_address: :asc).
  tap do |network_hosts|
    @network_hosts = network_hosts.limit(5)
    @host_count = network_hosts.count
  end
Something like that ought to do it in a single DB query.
I had to make some assumptions about how your associations are set up and that you're looking for locations where grace_enabled isn't true (nil or false).
I haven't tested this, so it may well be buggy. But, I think the direction is correct.
Something to remember: Rails won't execute an SQL query until the result of the query is actually needed. (I'll be using User instead of NetworkHost so I can show you the console output as I go.)
@users = User.where(first_name: 'Random'); nil # no query run
=> nil
@users # the query runs now because the results are needed (they are being printed in the IRB window)
# User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 LIMIT $2 [["first_name", "Random"], ["LIMIT", 11]]
# => #<ActiveRecord::Relation [...]>
@users = User.where(first_name: 'Random') # the query runs because the result is printed in the IRB window
# User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 LIMIT $2 [["first_name", "Random"], ["LIMIT", 11]]
# => #<ActiveRecord::Relation [...]>
Why is this important? It allows you to store the query you want to run in an instance variable and not execute it until you get to a view, where you can use some of the nice methods from ActiveRecord::Batches. In particular, if you have some view (or export function, etc.) that iterates over @network_hosts, you can use find_each.
# Controller
@users = User.where(first_name: 'Random') # no query run

# View
@users.find_each(batch_size: 1) do |user|
  puts "User's ID is #{user.id}"
end
# User Load (0.5ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["first_name", "Random"], ["LIMIT", 1]]
# User's ID is 1
# User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 AND ("users"."id" > 1) ORDER BY "users"."id" ASC LIMIT $2 [["first_name", "Random"], ["LIMIT", 1]]
# User's ID is 2
# User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."first_name" = $1 AND ("users"."id" > 2) ORDER BY "users"."id" ASC LIMIT $2 [["first_name", "Random"], ["LIMIT", 1]]
# => nil
Your query is not executed until the view, where it will now load only 1,000 records (the default batch size, which is configurable) into memory at a time. Once it reaches the end of those 1,000 records, it automatically runs another query to fetch the next 1,000. So your memory use is much saner, at the cost of extra database queries (which are usually pretty quick).
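One caveat if you combine this with the first snippet (a general ActiveRecord behaviour, not something stated in the original answer): find_each batches by primary key and ignores any order you set, so the ip_address ordering would be lost:
# find_each forces ordering by id for batching; the ip_address sort
# is dropped (Rails logs a warning about the ignored order).
NetworkHost.order(ip_address: :asc).find_each { |h| puts h.ip_address }
# => rows arrive in id order, not ip_address order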

Does Rails Really Cache Database Query?

Does Rails actually cache query results? The documentation says the same query will never be executed twice during the same request:
1.7 SQL Caching
The second time the same query is run against the database, it's not actually going to hit the database. The first time the result is returned from the query it is stored in the query cache (in memory) and the second time it's pulled from memory.
I did an experiment to prove that Rails actually caches the query:
def test
  data = ""
  User.find(1).update(first_name: 'Suwir Suwirr')
  data << User.find(1).first_name
  data << "\n"
  User.find(1).update(first_name: 'Pengguna')
  data << User.find(1).first_name
  data << "\n"
  render plain: data
end
If the result were cached, I would get the same result from each User.find(1). However, the output suggested that Rails does not actually cache the query; I was expecting the update not to be reflected in the result, since it was "cached":
Suwir Suwirr
Pengguna
But the console says that it was cached (note the CACHE entry):
Started GET "/diag/test" for 10.0.2.2 at 2017-02-21 10:30:16 +0700
Processing by DiagController#test as HTML
User Load (0.7ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 4], ["LIMIT", 1]]
User Load (0.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
(0.1ms) BEGIN
SQL (0.4ms) UPDATE "users" SET "first_name" = $1, "updated_at" = $2 WHERE "users"."id" = $3 [["first_name", "Suwir Suwirr"], ["updated_at", 2017-02-21 03:30:16 UTC], ["id", 1]]
(16.5ms) COMMIT
User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
CACHE (0.0ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
(0.1ms) BEGIN
SQL (0.3ms) UPDATE "users" SET "first_name" = $1, "updated_at" = $2 WHERE "users"."id" = $3 [["first_name", "Pengguna"], ["updated_at", 2017-02-21 03:30:16 UTC], ["id", 1]]
(0.9ms) COMMIT
User Load (0.5ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
Rendering text template
Rendered text template (0.0ms)
Completed 200 OK in 380ms (Views: 3.5ms | ActiveRecord: 21.9ms)
So my question: does Rails actually cache the query result, or only some query results on some requests?
Update: using the batch method #update_all
I made another experiment to "fool" the query logic. Now Rails does not "cache" the query. Why does this behaviour happen?
# Controller
def test
  data = ""
  User.where(id: 1).update_all(first_name: 'Suwir Suwirr')
  data << User.find(1).first_name
  data << "\n"
  User.where(id: 1).update_all(first_name: 'Pengguna')
  data << User.find(1).first_name
  data << "\n"
  logger.info 'hi'
  render plain: data
end
# Console
Started GET "/diag/test" for 10.0.2.2 at 2017-02-21 10:45:43 +0700
Processing by DiagController#test as HTML
User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 4], ["LIMIT", 1]]
SQL (13.8ms) UPDATE "users" SET "first_name" = 'Suwir Suwirr' WHERE "users"."id" = $1 [["id", 1]]
User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
SQL (2.9ms) UPDATE "users" SET "first_name" = 'Pengguna' WHERE "users"."id" = $1 [["id", 1]]
User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
hi
Rendering text template
Rendered text template (0.0ms)
Completed 200 OK in 28ms (Views: 0.8ms | ActiveRecord: 17.8ms)
# Browser result
Suwir Suwirr
Pengguna
I was stupid.
Yes, Rails does actually cache the query, but any write issued through ActiveRecord invalidates the query cache; update and destroy do, and so does update_all, which does not iterate over the records but still sends its single UPDATE through the connection adapter, clearing the cache.
So I tried the experiment again while really "fooling" the ActiveRecord query mechanism, issuing the updates as raw SQL that bypasses the cache invalidation. And yes, it works:
# Controller
def test
  data = ""
  ActiveRecord::Base.connection.execute('UPDATE "users" SET "first_name" = \'Suwir Suwirr\' WHERE "users"."id" = 1')
  data << User.find(1).first_name
  data << "\n"
  ActiveRecord::Base.connection.execute('UPDATE "users" SET "first_name" = \'Pengguna\' WHERE "users"."id" = 1')
  data << User.find(1).first_name
  data << "\n"
  render plain: data
end
# Browser
Suwir Suwirr
Suwir Suwirr
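As an aside (not part of the original experiments): if the goal is to read fresh values rather than to defeat the cache with raw SQL, the query cache can also be switched off explicitly for a block. A small sketch:
# uncached disables ActiveRecord's per-request query cache inside
# the block, so both finds hit the database.
ActiveRecord::Base.uncached do
  User.find(1).first_name # hits the database
  User.find(1).first_name # hits the database again; no CACHE line
end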

Ruby on Rails: dependent object destroyed when transfered from guest user to registered user

Here is my problem:
I'm using Devise's guest_user, which contains a logging_in method that transfers the guest_user's data to the registered user when they log in. In my case, the user has_many :periods, dependent: :destroy, so here is the logging_in method:
def logging_in
  guest_periods = guest_user.periods.all
  guest_periods.each do |p|
    p.user_id = current_user.id
    p.save!
  end
  current_user.latest_entry = guest_user.latest_entry
  current_user.is_in_zone = guest_user.is_in_zone
  current_user.save
end
However, when a guest_user logs in, their periods get destroyed instead of being transferred. Here is the log:
Started GET "/" for ::1 at 2015-05-11 00:18:03 +0300
Processing by WelcomeController#index as HTML
User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT 1 [["id", 24]]
User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT 1 [["id", 23]]
Period Load (0.3ms) SELECT "periods".* FROM "periods" WHERE "periods"."user_id" = $1 [["user_id", 23]]
(0.2ms) BEGIN
CACHE (0.0ms) SELECT "periods".* FROM "periods" WHERE "periods"."user_id" = $1 [["user_id", 23]]
SQL (0.8ms) UPDATE "periods" SET "user_id" = $1, "updated_at" = $2 WHERE "periods"."id" = $3 [["user_id", 24], ["updated_at", "2015-05-10 21:18:03.863162"], ["id", 170]]
(0.9ms) COMMIT
(0.2ms) BEGIN
SQL (2.1ms) UPDATE "users" SET "is_in_zone" = $1, "latest_entry" = $2, "updated_at" = $3 WHERE "users"."id" = $4 [["is_in_zone", "t"], ["latest_entry", "2015-05-04"], ["updated_at", "2015-05-10 21:18:03.875572"], ["id", 24]]
(15.8ms) COMMIT
(0.5ms) BEGIN
SQL (0.3ms) DELETE FROM "periods" WHERE "periods"."id" = $1 [["id", 170]]
SQL (0.7ms) DELETE FROM "users" WHERE "users"."id" = $1 [["id", 23]]
(1.2ms) COMMIT
So we can see that the transfer is done, but then at the end the periods are destroyed anyway. They should not be, as they no longer belong to the user being destroyed.
Why is this happening?
Even though Period#user_id has changed, guest_user.periods is still loaded in memory, and that in-memory collection is what gets destroyed when you destroy the guest user. If you guest_user.reload, its associations will be cleared out and it becomes safe to destroy. You could also use guest_user.periods(true) to force a reload of just the periods (that form was current when this was asked; in Rails 5+ it is guest_user.periods.reload).
Another option is:
guest_user.periods.update_all(user_id: current_user.id)
This executes a single query to perform the update, which is nice if there are a lot of periods, and it also doesn't load the guest_user.periods association, so the destroy will load it fresh and find the (correctly) empty set.
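Putting that together, a sketch of the revised logging_in using the single-query approach (same names as in the question; untested):
def logging_in
  # Reassign every period in one UPDATE without loading the
  # guest_user.periods association into memory, so the later
  # dependent: :destroy sees an empty set for the guest user.
  guest_user.periods.update_all(user_id: current_user.id)

  current_user.latest_entry = guest_user.latest_entry
  current_user.is_in_zone = guest_user.is_in_zone
  current_user.save
end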
