Rails 5 not saving dates correctly - ruby-on-rails

I'm having a weird issue where Rails is not saving a date correctly. I've tried formatting the date a number of ways, including as an actual Date object as well as various string formats, but it swaps the month and day regardless. When this results in an invalid date (e.g. 28/08/2019 read as mm/dd/yyyy), it omits the parameter altogether when saving.
If the date is still valid when swapped (e.g. 2019-09-03 ==> 2019-03-09), it saves the incorrect (swapped) date.
params[:shift_date] = Date.strptime(shift_params[:shift_date], '%m/%d/%Y').to_date
@shift.save(params)
Results in...
<ActionController::Parameters {"shift_date"=>Tue, 03 Sep 2019, "start_time"=>"00:00:00", "end_time"=>"00:00:00"} permitted: true>
(0.1ms) BEGIN
↳ app/controllers/api/shifts_controller.rb:25
Shift Create (0.4ms) INSERT INTO "shifts" ("shift_date", "start_time", "end_time") VALUES ($1, $2, $3) RETURNING "id" [["shift_date", "2019-03-09"], ["start_time", "00:00:00"], ["end_time", "00:00:00"]]
With a date like '08/29/2019', again correctly parsed into a Date object, the value is omitted on save because 2019-29-08 is not a valid date. I've also tried converting to a string ('2019-08-29', '2019-08-29 00:00:00', etc.), and it always gets it wrong no matter what I do. I'm very surprised that Rails would change a correctly set, valid Date object at all...
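For illustration, the controller flow in question looks roughly like this (a sketch only; shift_params, Shift and the create action are assumed from the snippets above, and Date#iso8601 is used simply to hand ActiveRecord an unambiguous yyyy-mm-dd string):

def create
  parsed = Date.strptime(shift_params[:shift_date], '%m/%d/%Y')       # => Wed, 28 Aug 2019
  @shift = Shift.new(shift_params.merge(shift_date: parsed.iso8601))  # "2019-08-28"
  @shift.save
end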
Note this only happens in the controller. Via the console everything works as expected...
2.6.3 :076 > s.shift_date = Date.strptime('08/28/2019', '%m/%d/%Y')
=> Wed, 28 Aug 2019
2.6.3 :077 > s.save
(0.2ms) BEGIN
Shift Update (0.4ms) UPDATE "shifts" SET "shift_date" = $1, "updated_at" = $2 WHERE "shifts"."id" = $3 [["shift_date", "2019-08-28"], ["updated_at", "2019-08-28 20:48:31.944877"], ["id", 54]]
(0.3ms) COMMIT
=> true

Related

Rails controller looks like it is saving all params, but fields are missing when I pull the record - Like post function

I am keeping an eye on my rails server and my rails console. Everything looks great in my server.
When I go to create a Like, my server saves all the stuff I'm looking to save.
Notice that I am passing in an author_id, a likeable_id, and a likeable_type. This association is polymorphic.
Like Create (0.6ms) INSERT INTO "likes" ("author_id", "likeable_type", "likeable_id", "created_at", "updated_at") VALUES ($1, $2, $3, $4, $5) RETURNING "id" [["author_id", 1], ["likeable_type", "Pin"], ["likeable_id", 34], ["created_at", "2020-12-18 04:52:49.973371"], ["updated_at", "2020-12-18 04:52:49.973371"]]
↳ app/controllers/api/likes_controller.rb:8
(1.9ms) COMMIT
↳ app/controllers/api/likes_controller.rb:8
Rendering api/likes/show.json.jbuilder
Rendered api/likes/_like.json.jbuilder (0.4ms)
Rendered api/likes/show.json.jbuilder (2.4ms)
Completed 200 OK in 75ms (Views: 8.1ms | ActiveRecord: 20.6ms)
Great! I get the 200 OK! Awesome.
However, when I check Like.all in the console, it looks like the Like is being created with none of the params I passed into it above.
[55] pry(main)> Like.all
Like Load (0.3ms) SELECT "likes".* FROM "likes"
=> [#<Like:0x00007fbef776c7e0
id: 6, #<--- Only the id for the like is being shown :(
created_at: Fri, 18 Dec 2020 04:30:50 UTC +00:00,
updated_at: Fri, 18 Dec 2020 04:30:50 UTC +00:00>]
[56] pry(main)>
I found the answer. For whatever reason, my schema was bugged out and somehow had extra text in it. I dropped the table and re-migrated.
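For reference, the create action and strong parameters implied by the question would typically look something like this (a sketch only; the controller name and param names are assumed from the log above):

class Api::LikesController < ApplicationController
  def create
    @like = Like.new(like_params)   # author_id, likeable_id and likeable_type come from the request
    if @like.save
      render :show
    else
      render json: @like.errors, status: :unprocessable_entity
    end
  end

  private

  def like_params
    params.require(:like).permit(:author_id, :likeable_id, :likeable_type)
  end
end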

How to set timezone for rails and sqlite?

The test app:
$ rails new ra1
$ cd ra1
$ ./bin/rails g model m1 dt1:datetime
$ ./bin/rake db:migrate
Then I add config.time_zone = 'Europe/Kiev' to config/application.rb and run the console:
irb(main):001:0> M1.create dt1: Time.now
(0.1ms) begin transaction
SQL (0.3ms) INSERT INTO "m1s" ("dt1", "created_at", "updated_at") VALUES (?, ?, ?) [["dt1", "2015-03-30 11:11:43.346991"], ["created_at", "2015-03-30 11:11:43.360987"], ["updated_at", "2015-03-30 11:11:43.360987"]]
(33.0ms) commit transaction
=> #<M1 id: 3, dt1: "2015-03-30 11:11:43", created_at: "2015-03-30 11:11:43", updated_at: "2015-03-30 11:11:43">
irb(main):002:0> Time.now
=> 2015-03-30 14:12:27 +0300
irb(main):003:0> Rails.configuration.time_zone
=> "Europe/Kiev"
What am I doing wrong?
Values in the database are always stored in UTC, no matter the time_zone.
The time zone configuration affects only the Ruby environment. The data is fetched from the database and the dates are converted into the selected timezone. The same applies to new time instances, as you noticed using Time.now.
The main reason the time is normalized in the database is that it lets you easily convert the same value to multiple timezones, or change the application timezone later in the project without having to reconvert all the stored dates. It's good practice to use UTC in the database.
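To see what that means in practice, something along these lines can be tried in the console (a sketch using the M1 model from the question; the exact timestamps are illustrative):

# config/application.rb
config.time_zone = 'Europe/Kiev'

# In the console: Time.zone.now is zone-aware, Time.now is plain system time
m = M1.create(dt1: Time.zone.now)
m.dt1        # => Mon, 30 Mar 2015 14:11:43 EEST +03:00  (converted when read back)
m.dt1.utc    # => 2015-03-30 11:11:43 UTC                 (what the database actually stores)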

Why does pg gem automatically cast 'A' to 0 for an integer type

I have a column defined as integer in Postgresql and wrote a test to assign the character 'A' and expected a database error to be thrown.
However, it appears that pg automatically maps characters to '0'.
I can't think of any good reason for this behaviour, and I'm inclined to think it is a bad idea to override the implicit constraint that an integer column should in fact contain an integer.
Am I missing some big picture here?
Updating:
Thanks for the quick response (mu_is_too_short). You are right in that 'A'.to_i returns 0, and 0 is a valid value, so a CHECK constraint would not help. I will have numeric validation in the model, so I suppose this doesn't really matter. It's just that I was trying to check that the database column was correctly configured as an integer. I suppose RSpec tests are really targeted at the layers above the database.
Updating Again...
I created the following class:
class Notaninteger
  def initialize(par)
    @par = par
  end

  def to_i
    @par
  end
end
I thought this might counter AR's attempts to call to_i. However, this doesn't seem to work. Drat, foiled again!
Updating for the last time...
The following works:
class Notaninteger
  def to_i
    self
  end
end
I suspect AR may call to_i several times and the above always returns a Notaninteger object.
2.1.4 :002 > range.start_of_range = 'A'
=> "A"
2.1.4 :003 > range.save!
(0.4ms) BEGIN
SQL (11.6ms) INSERT INTO "coderanges" ("comment", "created_at", "end_of_range", "groupname", "name", "protection", "start_of_range", "updated_at") VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING "id" [["comment", nil], ["created_at", Wed, 11 Mar 2015 21:46:52 UTC +00:00], ["end_of_range", nil], ["groupname", nil], ["name", nil], ["protection", nil], ["start_of_range", 0], ["updated_at", Wed, 11 Mar 2015 21:46:52 UTC +00:00]]
That's not the pg gem doing the cast, that's ActiveRecord calling to_i on the string when you do the range.start_of_range = 'A' assignment. AR is calling to_i because, presumably, the column is an integer.
If zero isn't a valid value for start_of_range then you should add a validation to check it (and, if you're properly paranoid like me, add a CHECK constraint inside the database to match your validation).
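A sketch of that combination, assuming the coderanges table from the log and that zero is not an acceptable value (the model name and the exact rule are illustrative):

class Coderange < ActiveRecord::Base
  validates :start_of_range, numericality: { only_integer: true, greater_than: 0 }, allow_nil: true
end

# Matching database-level constraint (hypothetical migration)
class AddStartOfRangeCheckToCoderanges < ActiveRecord::Migration
  def up
    execute "ALTER TABLE coderanges ADD CONSTRAINT start_of_range_positive CHECK (start_of_range > 0)"
  end

  def down
    execute "ALTER TABLE coderanges DROP CONSTRAINT start_of_range_positive"
  end
end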

Rails 3 and Rspec: counter cache column being updated to 2 when expected 1

I'm testing with RSpec a model named Solution, which has many Likes. Solution stores how many Likes it has (counter_cache). It has a "likes_count" attribute (and a corresponding db field).
When I create a Like record associated with a Solution, I expect the solution's "likes_count" attribute to be updated from nil to 1. When I do that in the console, it works.
But when I run the spec, doing the SAME THING I do in the console, it updates the "likes_count" field TWICE, setting it to 2.
Take a look (in console) WORKING:
irb(main):001:0> solution = Factory(:solution)
irb(main):004:0> solution.likes_count
=> nil
irb(main):006:0> like = Factory(:like, :likeable => solution)
=> #<Like id: 1, user_id: 2, likeable_id: 1, likeable_type: "Solution",
created_at: "2011-11-23 19:31:23", updated_at: "2011-11-23 19:31:23">
irb(main):007:0> solution.reload.likes_count
=> 1
Take a look at the spec result NOT WORKING:
1) Solution counter cache should be increased when a like is created
Failure/Error: subject.reload.likes_count.should be 1
expected #<Fixnum:3> => 1
got #<Fixnum:5> => 2
Compared using equal?, which compares object identity,
but expected and actual are not the same object. Use
'actual.should == expected' if you don't care about
object identity in this example.
# ./spec/models/solution_spec.rb:45:in `block (3 levels) in <top (required)>'
Here is the spec:
describe "counter cache" do
let(:solution) { Factory(:solution) }
it "should be increased when a like is created" do
Factory(:like, :likeable => solution)
solution.reload.likes_count.should be 1
end
end
I took a look at test.log and realized that the db query that updates the counter cache column was run twice in the test.
SQL (0.5ms) INSERT INTO "likes" ("created_at", "likeable_id", "likeable_type", "updated_at", "user_id") VALUES (?, ?, ?, ?, ?) [["created_at", Wed, 23 Nov 2011 19:38:31 UTC +00:00], ["likeable_id", 121], ["likeable_type", "Solution"], ["updated_at", Wed, 23 Nov 2011 19:38:31 UTC +00:00], ["user_id", 204]]
SQL (0.3ms) UPDATE "solutions" SET "likes_count" = COALESCE("likes_count", 0) + 1 WHERE "solutions"."id" IN (SELECT "solutions"."id" FROM "solutions" WHERE "solutions"."id" = 121 ORDER BY id DESC)
SQL (0.1ms) UPDATE "solutions" SET "likes_count" = COALESCE("likes_count", 0) + 1 WHERE "solutions"."id" IN (SELECT "solutions"."id" FROM "solutions" WHERE "solutions"."id" = 121 ORDER BY id DESC)
Solution Load (0.3ms) SELECT "solutions".* FROM "solutions" WHERE "solutions"."id" = ? LIMIT 1 [["id", 121]]
I had the same problem. It turned out that my spec_helper.rb was loading the models a second time and therefore creating a second callback to update the counters. Make sure your Solution model isn't being reloaded by another process.
The other answer here is also correct: you need to use == instead of be for the comparison, but that will not fix the multiple updates you are seeing in your log file.
You have the answer in the failure message itself:
When you use be, it compares object identity (object_id), which is fixed for immediate objects like true and 1. In MRI, Fixnum object_ids are 2n + 1, so the object_id of 1 is 3 — hence the #<Fixnum:3> in the failure. Try it in the console: 1.object_id #=> 3
So replace your test with: solution.reload.likes_count.should eql 1 or even solution.reload.likes_count.should == 1
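For completeness, the counter cache wiring being tested is typically declared like this (a sketch; the model and column names are taken from the question):

class Like < ActiveRecord::Base
  belongs_to :likeable, :polymorphic => true, :counter_cache => :likes_count
end

class Solution < ActiveRecord::Base
  has_many :likes, :as => :likeable   # solutions.likes_count is the cache column
end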

Delayed_job suddenly doesn't seem to do anything?

I have a scraper set up to use delayed_job so that it runs in the background.
class Scraper
  def do_scrape
    # do some scraping stuff
  end
  handle_asynchronously :do_scrape
end
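For context, handle_asynchronously replaces do_scrape with a version that enqueues a job, and keeps the original method available under a _without_delay suffix (which is why the serialized handler below references a _without_delay method name). Usage looks roughly like this (a sketch):

scraper = Scraper.new
scraper.do_scrape                 # enqueues a Delayed::Job instead of running immediately
scraper.do_scrape_without_delay   # runs the original method synchronously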
Now I can comment out the handle_asynchronously line, open the console and run the scraper just fine. It does exactly what I expect it to do.
However, when I try to fire the scrape as a delayed job, it doesn't seem to do anything at all. Further to that, it doesn't seem to log anything important either.
Here's how my log looks from enqueueing a job to running rake jobs:work.
County Load (1.0ms) SELECT "counties".* FROM "counties" WHERE "counties"."name" = 'Fermanagh' LIMIT 1
(0.1ms) BEGIN
SQL (20.5ms) INSERT INTO "delayed_jobs" ("attempts", "created_at", "failed_at", "handler", "last_error", "locked_at", "locked_by", "priority", "run_at", "updated_at") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) RETURNING "id" [["attempts", 0], ["created_at", Mon, 30 May 2011 21:19:25 UTC +00:00], ["failed_at", nil], ["handler", "---
# serialized object omitted for conciseness
nmethod_name: :refresh_listings_in_the_county_without_delay\nargs: []\n\n"], ["last_error", nil], ["locked_at", nil], ["locked_by", nil], ["priority", 0], ["run_at", Mon, 30 May 2011 21:19:25 UTC +00:00], ["updated_at", Mon, 30 May 2011 21:19:25 UTC +00:00]]
(0.9ms) COMMIT
Delayed::Backend::ActiveRecord::Job Load (0.4ms) SELECT "delayed_jobs".* FROM "delayed_jobs" WHERE (locked_by = 'host:David-Tuites-MacBook-Pro.local pid:7743' AND locked_at > '2011-05-30 17:19:32.116511') LIMIT 1
(0.1ms) BEGIN
SQL (0.3ms) DELETE FROM "delayed_jobs" WHERE "delayed_jobs"."id" = $1 [["id", 42]]
(0.4ms) COMMIT
As you can see, it seems to just insert a job and then delete it straight away? This scraping method should take at least a few minutes.
The worst part is, it was working perfectly last night and I can't think of a single thing I'm doing differently. I tried pinning the gem to a previous version in case it had been updated recently, but that doesn't seem to have fixed the problem.
Any ideas?
Have you configured your delayed job to delete failed jobs? Look for the following setting in your initializer:
Delayed::Worker.destroy_failed_jobs = true
If so, set it to false, then look in the delayed_jobs table for the exception that caused the job to fail and debug from there.
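A sketch of the relevant initializer plus a quick way to inspect failures from the console (the initializer path is assumed):

# config/initializers/delayed_job_config.rb
Delayed::Worker.destroy_failed_jobs = false   # keep failed jobs so last_error can be inspected
Delayed::Worker.max_attempts = 3

# Console: inspect whatever errored instead of letting it silently disappear
Delayed::Job.where("last_error IS NOT NULL").each { |job| puts job.last_error }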
