Rails/Ruby: TimeWithZone comparison inexplicably failing for equivalent values

I am having a terrible time (no pun intended) with DateTime comparison in my current project, specifically comparing two instances of ActiveSupport::TimeWithZone. The issue is that both my TimeWithZone instances have the same value, but all comparisons indicate they are different.
Pausing during execution for debugging (using RubyMine), I can see the following information:
timestamp = {ActiveSupport::TimeWithZone} 2014-08-01 10:33:36 UTC
started_at = {ActiveSupport::TimeWithZone} 2014-08-01 10:33:36 UTC
timestamp.inspect = "Fri, 01 Aug 2014 10:33:36 UTC +00:00"
started_at.inspect = "Fri, 01 Aug 2014 10:33:36 UTC +00:00"
Yet a comparison indicates the values are not equal:
timestamp <=> started_at = -1
The closest answer I found while searching (Comparison between two ActiveSupport::TimeWithZone objects fails) describes the same issue, and I tried the solutions that were applicable without any success (I ran db:test:prepare, and I don't run Spring).
Moreover, even if I convert to explicit types, the values still compare as unequal.
to_time:
timestamp.to_time = {Time} 2014-08-01 03:33:36 -0700
started_at.to_time = {Time} 2014-08-01 03:33:36 -0700
timestamp.to_time <=> started_at.to_time = -1
to_datetime:
timestamp.to_datetime = {Time} 2014-08-01 03:33:36 -0700
started_at.to_datetime = {Time} 2014-08-01 03:33:36 -0700
timestamp.to_datetime <=> started_at.to_datetime = -1
The only "solution" I've found thus far is to convert both values using to_i, then compare, but that's extremely awkward to code everywhere I wish to do comparisons (and moreover, seems like it should be unnecessary):
timestamp.to_i = 1406889216
started_at.to_i = 1406889216
timestamp.to_i <=> started_at.to_i = 0
Any advice would be very much appreciated!

Solved
As indicated by Jon Skeet above, the comparison was failing because of hidden millisecond differences in the times:
timestamp.strftime('%Y-%m-%d %H:%M:%S.%L') = "2014-08-02 10:23:17.000"
started_at.strftime('%Y-%m-%d %H:%M:%S.%L') = "2014-08-02 10:23:17.679"
This discovery led me down a strange path to what was ultimately causing the issue: a combination of the problem occurring only during testing and of using MySQL as my database.
The issue showed up only in testing because the test where it cropped up exercises a couple of associated models that contain the fields above. One model's instance must be saved to the database during the test -- the model that houses the timestamp value. The other model, however, performs the processing and therefore references the in-memory instance of itself created in the test code.
That led to the second culprit: I'm using MySQL as the database, and MySQL does not store millisecond information when storing datetime values (unlike, say, PostgreSQL).
What this means is that the timestamp value, read back after its ActiveRecord was retrieved from MySQL, had effectively been rounded and stripped of its millisecond data, while the started_at value was simply retained in memory during the test and so still carried its original milliseconds.
My own (sub-par) solution is to force both models in my test (rather than just one) to reload themselves from the database, as sketched below.
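A minimal sketch of that workaround, using hypothetical Session and Event models (the names are illustrative, not from my actual code):
now = Time.current
# Both records share the same in-memory time, but only a value that has
# round-tripped through MySQL loses its millisecond data.
session = Session.create!(started_at: now)
event = Event.create!(timestamp: now)
# Reload BOTH records so each side has been truncated by the database in
# the same way before comparing.
session.reload
event.reload
event.timestamp <=> session.started_at # => 0 once both sides are truncated alike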
TL;DR: Use PostgreSQL if you can!

This seems to happen when you compare a time generated in Ruby with a time loaded from the database.
For example:
time = Time.zone.now
Record.create!(mark: time)
record = Record.last
In this case record.mark == time will fail because Ruby keeps time down to the nanosecond, while databases have varying precision.
For Postgres's datetime (timestamp) columns it's microseconds.
You can see this if you check: record.mark.sec == time.sec, while record.mark.nsec != time.nsec.
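To make that concrete, here is a sketch using the Record model from the example above (the exact results depend on your database and column precision):
time = Time.zone.now # Ruby keeps nanosecond precision here
Record.create!(mark: time)
record = Record.last # precision is now limited by the database column
record.mark == time # usually false: the sub-second digits were lost
record.mark.to_i == time.to_i # usually true: whole seconds survive the round trip
record.mark.nsec == time.nsec # usually false: this is the hidden difference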

Related

Datetime comparison query doesn't return any results

I'm trying to get a simple date-time comparison to work, but the query doesn't return any results.
The query is
MATCH (n:Event) WHERE n.start_datetime > datetime("2019-06-01T18:40:32.142+0100") RETURN n.start_datetime
According to this documentation page, this type of comparison should work. I've also tried creating the datetime object explicitly, for instance with datetime({year: 2019, month: 7}).
I've checked that start_datetime is in fact well formed, for example by verifying that start_datetime.year returns the correct value, and couldn't find any error.
Given that all the records in the database are from 2021, the above query should return every event, yet is returning nothing.
Doing the query using only the year comparison instead of doing full datetime comparison works:
MATCH (n:Event) WHERE n.start_datetime.year > datetime("2019-06-01T18:40:32.142+0100").year RETURN n.start_datetime
Double-check the data type of start_datetime. It can be stored as either epoch seconds or epoch milliseconds. You need to convert the epoch value to a datetime so that both sides of the comparison have the same data type. The reason your 2nd query works (.year) is that .year returns an integer value.
Run below to get samples:
MATCH (n:Event)
RETURN distinct n.start_datetime LIMIT 5
Then, if you see that it is 10 digits, it is in epochSeconds. If so, run the query below:
MATCH (n:Event)
WHERE n.start_datetime is not null
AND datetime({epochSeconds: n.start_datetime}) > datetime("2019-06-01T18:40:32.142+0100")
RETURN n.start_datetime
LIMIT 25
It turns out the error was due to the timezone. Neo4j had saved the properties as LocalDateTime, which apparently can't be compared to ZonedDateTime.
I used py2neo for most of the node management, and the solution was to give the Python property a specific timezone. In my case this was done using:
datetime.datetime.fromtimestamp(kwargs["end"], pytz.UTC)
After that, I was able to do the comparisons.
Hope this saves future developers a couple of hours.

ORA-01857 when executing to_timestamp_tz()

When I execute this query from SQL Developer connected to my database:
select to_timestamp_tz('05/22/2016 10:18:01 PDT', 'MM/DD/YYYY HH24:MI:SS TZD') from dual;
I get the following error:
ORA-01857: "not a valid time zone"
01857. 00000 - "not a valid time zone"
*Cause:
*Action:
But, I'm able to execute the query without any error directly from sqlplus on the host where the database is located, getting the expected result:
TO_TIMESTAMP_TZ('05/22/201610:18:01PDT','MM/DD/YYYYHH24:MI:SSTZD')
---------------------------------------------------------------------------
22-MAY-16 10.18.01.000000000 AM -07:00
So, I'm trying to figure out if I'm doing something incorrectly. I have read that the error could be caused by multiple TZABBREVs existing for a time zone, but that doesn't explain why sqlplus runs the query correctly, since I can see the multiple TZABBREVs for the different time regions on both the host and in SQL Developer (queried from v$timezone_names).
The real issue is that our application uses this query, and we've noticed the problem reproduces sometimes, even when the application is deployed on the same host as the database.
I added 2 new lines to sqldeveloper\sqldeveloper\bin\sqldeveloper.conf
AddVMOption -Doracle.jdbc.timezoneAsRegion=false
AddVMOption -Duser.timezone=CET
and this fixed the problem.
Updated
To eliminate the ambiguity of boundary cases when the time switches from Standard Time to Daylight Saving Time, use both the TZR format element and the corresponding TZD format element.
To make your query work without changing anything in the JVM configuration, you should provide the time zone region:
select to_timestamp_tz('05/22/2016 10:18:01 PDT US/Pacific', 'MM/DD/YYYY HH24:MI:SS TZD TZR') from dual;
Because you didn't provide the time zone region, it falls back to the default one. Let's look at the first parameter, 'oracle.jdbc.timezoneAsRegion'. This is defined by the JDBC driver as follows:
CONNECTION_PROPERTY_TIMEZONE_AS_REGION
Use JVM default timezone as specified rather than convert to a GMT offset. Default is true.
So without defining this property, you force your query to use the default time zone region defined by the 'user.timezone' property. But you actually haven't set that yet either. So the solution is either to set 'oracle.jdbc.timezoneAsRegion' to false (in which case the database session's current time zone region will be used) or to provide the region explicitly with the 'user.timezone' property.

Querying in MongoDb

I have a model named "Class Sessions" with :scheduled_at as one of its fields. I need to extract the ClassSessions whose :scheduled_at is later than a specific date.
P.S.: scheduled_at stores the date in UTC.
Try something like ClassSessions.where(:scheduled_at.gte => Time.now.utc), where Time.now.utc returns the current time in UTC.
gte, lte, lt, etc. are the comparisons you're looking for here. Please refer to the Mongoid docs for further info.
Rails (Mongoid) by default combines all the conditions separated by commas in where with AND, so for a range use that logic (see the sketch after these notes).
Note: where returns a Mongoid::Criteria object, and you might have to call .results on it to return the result set.
Note 2: you might have to call .to_s on Time.now.utc before passing it to the MongoDB where, but I'm not sure; try both, and if it works without to_s, don't call it.
For Time, you can have a look at the Ruby Time docs.
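For instance, a range query sketch in Mongoid (model and field names taken from the question; the dates are purely illustrative):
# Sessions scheduled later than a specific date:
ClassSessions.where(:scheduled_at.gte => Time.utc(2014, 8, 1))
# A range: Mongoid combines both conditions with AND.
ClassSessions.where(
  :scheduled_at.gte => Time.utc(2014, 8, 1),
  :scheduled_at.lte => Time.utc(2014, 9, 1)
)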

Rails date equality not working in where clause

Say I have ModelA and ModelB. When I save an instance of ModelA to the db, it also creates/saves an instance of ModelB. In the db I end up with UTC created_at values that show up exactly the same, e.g.:
puts ModelA.first.created_at # Wed, 31 Aug 2011 22:49:28 UTC +00:00
puts ModelB.first.created_at # Wed, 31 Aug 2011 22:49:28 UTC +00:00
So I'd expect a query like the following to return matching records (but it doesn't)
# model_b_instance is an instance of ModelB
ModelA.where(created_at: model_b_instance.created_at) # returns []
But something like this, using to_s(:db), does work:
ModelA.all.each do |m|
  if m.created_at.to_s(:db) == model_b_instance.created_at.to_s(:db)
    ... # found matches here
  end
end
Can someone explain what I am doing wrong here? I want to be able to write queries like ModelA.where(created_at: ... ) but I'm currently stuck having to iterate and match against to_s(:db).
Two things to try:
See what the created_at values are in the db -- maybe there's something off, like the time zone (maybe Rails is inferring a time zone for one but not the other).
Look in the Rails log at the actual SQL query being generated by your ModelA.where invocation -- maybe it's doing something unexpected (see the sketch below).
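A quick way to do both checks from a Rails console (a sketch; to_sql and the *_before_type_cast readers are standard ActiveRecord methods):
# Print the SQL Rails generates for the comparison, to see exactly what
# value ends up in the WHERE clause:
puts ModelA.where(created_at: model_b_instance.created_at).to_sql
# Look at the raw stored value, before ActiveRecord type-casts it:
puts ModelA.first.created_at_before_type_cast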

Rails Time inconsistencies with rspec

I'm working with Time in Rails and using the following code to set up the start date and end date of a project:
start_date ||= Time.now
end_date = start_date + goal_months.months
I then clone the object and I'm writing rspec tests to confirm that the attributes match in the copy. The end dates match:
original[end_date]: 2011-08-24 18:24:53 UTC
clone[end_date]: 2011-08-24 18:24:53 UTC
but the spec gives me an error on the start dates:
expected: Wed Aug 24 18:24:53 UTC 2011,
got: Wed, 24 Aug 2011 18:24:53 UTC +00:00 (using ==)
It's clear the dates are the same, just formatted differently. How is it that they end up getting stored differently in the database, and how do I get them to match? I've tried it with DateTime as well with the same results.
Correction: The end dates don't match either. They print out the same, but rspec errors out on them as well. When I print out the start date and end date, the values come out in different formats:
start date: 2010-08-24T19:00:24+00:00
end date: 2011-08-24 19:00:24 UTC
This usually happens because RSpec tries to match objects of different types: Time and DateTime, for instance. Also, otherwise-equal times can differ by a few milliseconds.
In the second case, the correct approach is to use stubbing and mocks. Also see the Timecop gem.
In the first case, a possible solution is to compare timestamps:
actual_time.to_i.should == expected_time.to_i
I use a simple matcher for such cases:
# ./spec/spec_helper.rb
#
# Dir[File.dirname(__FILE__) + "/support/**/*.rb"].each {|f| require f}
#
#
# Usage:
#
# its(:updated_at) { should be_the_same_time_as updated_at }
#
#
# Will pass or fail with message like:
#
# Failure/Error: its(:updated_at) { should be_the_same_time_as 2.days.ago }
# expected Tue, 07 Jun 2011 16:14:09 +0300 to be the same time as Mon, 06 Jun 2011 13:14:09 UTC +00:00
RSpec::Matchers.define :be_the_same_time_as do |expected|
  match do |actual|
    expected.to_i == actual.to_i
  end
end
You should mock Time's now method to make sure it always matches the date in the spec. You never know when a delay will make the spec fail because of a few milliseconds. This approach also makes sure that the time in the real code and in the spec are the same.
If you're using the default RSpec mock lib, try something like:
t = Time.parse("01/01/2010 10:00")
Time.should_receive(:now).and_return(t)
I totally agree with the previous answers about stubbing Time.now. That said, there is one other thing going on here. When you compare datetimes from a database, you lose some of the fractional time that can be present in a Ruby DateTime object. The best way to compare dates that way in RSpec is:
database_start_date.should be_within(1).of(start_date)
One gotcha is that Ruby Time objects have nanosecond precision, but most databases have at most microsecond precision. The best way to get around this is to stub Time.now (or use timecop) with a round number. Read the post I wrote about this: http://blog.solanolabs.com/rails-time-comparisons-devil-details-etc/
Depending on your specs, you might be able to use Rails-native travel helpers:
# in spec_helper.rb
config.include ActiveSupport::Testing::TimeHelpers
start_date ||= Time.current.change(usec: 0)
end_date = start_date + goal_months.months
travel_to start_date do
  # Clone here
end
expect(clone.start_date).to eq(start_date)
Without Time.current.change(usec: 0) it's likely to complain about the difference between time zones, or between the microseconds, since the helper resets the passed value internally (Timecop has a similar issue, so reset usec with it too).
My initial guess would be that the value of Time.now is formatted differently from your database value.
Are you sure that you are using == (eq) and not eql or be? eql additionally requires the objects to be of the same class, and be compares object identity rather than values.
From the output it looks like the expected value is a Time, while the value being tested is a DateTime. This could also be an issue, though I'd hesitate to guess how to fix it given the almost pathological nature of Ruby's date and time libraries ...
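For reference, a sketch of how the different equality flavours behave for time values (RSpec's eq uses ==, eql uses eql?, and be/equal compare object identity):
require "date" # for Time#to_datetime outside Rails
t1 = Time.utc(2011, 8, 24, 18, 24, 53)
t2 = Time.utc(2011, 8, 24, 18, 24, 53)
t1 == t2                # true: same value (what eq checks)
t1.eql?(t2)             # true: same value and same class (what eql checks)
t1.equal?(t2)           # false: distinct objects (what be/equal check)
t1.eql?(t1.to_datetime) # false, even though the instant is identical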
One solution I like is to just add the following to spec_helper:
class Time
  def ==(time)
    self.to_i == time.to_i
  end
end
That way it's entirely transparent even in nested objects.
Adding .to_datetime to both variables will coerce the datetime values to be equivalent and respect timezones and Daylight Saving Time. For just date comparisons, use .to_date.
An example spec with two variables:
actual_time.to_datetime.should == expected_time.to_datetime
A better spec with clarity:
actual_time.to_datetime.should eq 1.month.from_now.to_datetime
.to_i produces ambiguity regarding its meaning in the specs.
+1 for using the Timecop gem in specs. Just make sure to test Daylight Saving Time in your specs if your app is affected by DST.
Our current solution is to have a freeze_time method that handles the rounding:
def freeze_time(time = Time.zone.now)
  # round time to get rid of nanosecond discrepancies between ruby time and
  # postgres time
  time = time.round
  Timecop.freeze(time) { yield(time) }
end
And then you can use it like:
freeze_time do
  perform_work
  expect(message.reload.sent_date).to eq(Time.now)
end
