ORA-01857 when executing to_timestamp_tz() - timezone

Why, when executing this query from SQL Developer connected to my database:
select to_timestamp_tz('05/22/2016 10:18:01 PDT', 'MM/DD/YYYY HH24:MI:SS TZD') from dual;
I get the following error:
ORA-01857: "not a valid time zone"
01857. 00000 - "not a valid time zone"
*Cause:
*Action:
But, I'm able to execute the query without any error directly from sqlplus on the host where the database is located, getting the expected result:
TO_TIMESTAMP_TZ('05/22/201610:18:01PDT','MM/DD/YYYYHH24:MI:SSTZD')
---------------------------------------------------------------------------
22-MAY-16 10.18.01.000000000 AM -07:00
So, I'm trying to figure out if I'm doing something incorrectly. I have read that the error could be caused by multiple tzabbrevs for a time zone, but this does not explain why the query runs correctly in sqlplus, since I can see the multiple tzabbrevs for different time regions on both the host and SQL Developer (querying v$timezone_names).
The real issue is that our application uses this query, and we noticed that this issue reproduces sometimes, even though the application is deployed on the same host as the database.

I added two new lines to sqldeveloper\sqldeveloper\bin\sqldeveloper.conf:
AddVMOption -Doracle.jdbc.timezoneAsRegion=false
AddVMOption -Duser.timezone=CET
and this fixed the problem.
Updated
To eliminate the ambiguity of boundary cases when the time switches from Standard Time to Daylight Saving Time, use both the TZR format element and the corresponding TZD format element.
To make your query work without changing anything in the JVM configuration, you should provide the time zone region:
select to_timestamp_tz('05/22/2016 10:18:01 PDT US/Pacific', 'MM/DD/YYYY HH24:MI:SS TZD TZR') from dual;
Because you didn't provide the time zone region, it will get the default one. Let's look at the first parameter, 'oracle.jdbc.timezoneAsRegion'. This is defined by the JDBC driver as follows:
CONNECTION_PROPERTY_TIMEZONE_AS_REGION
Use JVM default timezone as specified rather than convert to a GMT offset. Default is true.
So without defining this property, you force your query to use the default time zone region defined by the property 'user.timezone'. But actually you haven't set that either. So the solution is either to set the property 'oracle.jdbc.timezoneAsRegion' to false (and the database current session time zone region will be used) or to provide it explicitly with the 'user.timezone' property.
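To see why an abbreviation alone is ambiguous, here is a small Ruby sketch (illustrative only, unrelated to the Oracle client itself): an abbreviation such as PDT pins down only a fixed UTC offset, while a region name such as US/Pacific also carries the DST rules, so resolving an abbreviation needs a default region from somewhere (the session or the JVM).

```ruby
require 'time'

# Ruby's Time.zone_offset maps an abbreviation to a fixed UTC offset only.
pdt = Time.zone_offset("PDT")  # => -25200, i.e. UTC-7
pst = Time.zone_offset("PST")  # => -28800, i.e. UTC-8

# The abbreviation carries no region information: several regions share
# "PST"/"PDT", which is why a default time zone region is needed.
puts pdt
puts pst
```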

Influx QL Variables Integer and Variable Embedding Not working

I was trying to write a simple InfluxQL query in a Grafana dashboard that uses a variable m1 (of type constant) which contains the name of the measurement.
I created the variable m1 in grafana dashboard variables
m1 = my-measurement
and tried to run the following queries, but none of them worked; they either say "expression request error" or show "No Data":
SELECT count("fails") FROM "/^${m1:raw}$/"
SELECT count("fails") FROM "/^${m1}$/"
SELECT count("fails") FROM $m1" (expression request error)
SELECT count("fails") FROM "$m1"
SELECT count("fails") FROM "${m1}"
The only query that worked was without dashboard variables:
SELECT count("fails") FROM "my-measurement"
How can I get the variables to work for that query?
Along similar lines, I tried to make a custom variable (myVar) that takes integer input values from the user, on which the WHERE clause should work, but the same error occurs: either no data or expression request error.
What I tried was
SELECT count(*) from "my-measurement-2" WHERE ("value" > $myVar)
How should I solve these issues? Please help.
You may have a problem with
1.) syntax
SELECT count("fails")
FROM "${m1:raw}"
2.) data
You may have correct query syntax, but the query can still be very inefficient. Query execution may take a lot of time, so it's better to have a time filter, which will use the selected dashboard time range (make sure you have some data in that time range):
SELECT count("fails")
FROM "${m1:raw}"
WHERE $timeFilter
3.) Grafana panel configuration
Make sure you are using a suitable panel - for the query above, the Stat panel is a good option (that query returns only a single value, not a timeseries, so time series panel types may have a problem with it).
Generally, use the query inspector to see how variables are interpolated - there can be "magic" which is not obvious, e.g. quotes added around numeric variables, so that it becomes string filtering and not numeric filtering on the InfluxDB level.
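As a rough illustration of the quoting pitfall in point 3, here is a hypothetical Ruby sketch (the measurement and variable names are just the example values from the question) of how an interpolated numeric variable changes meaning once quotes end up around it:

```ruby
# Simulated interpolation of a dashboard variable into an InfluxQL string.
my_var = 5  # hypothetical numeric dashboard variable

numeric_filter = %Q{SELECT count(*) FROM "my-measurement-2" WHERE ("value" > #{my_var})}
string_filter  = %Q{SELECT count(*) FROM "my-measurement-2" WHERE ("value" > '#{my_var}')}

puts numeric_filter  # numeric comparison on the InfluxDB side
puts string_filter   # string comparison: likely "no data"
```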

Neo4j- convert Time String into Integer in query

How can I convert a String time into an integer in Neo4j? I have been trying for quite a long time, but there does not seem to be any clear solution for it. The only solution I found is to add a new property to my node while loading it from CSV, which I don't want to do.
I have Time in the following format:
"18:11:00"
and I want to do some subtraction on them.
I tried doing the following but to no avail:
match (st1:Stoptime)-[r:PRECEDES]->(st2:Stoptime)
return st1.arrival_time, toInt(st1.arrival_time)
limit 10
But I get the following output:
"18:11:00" null
You can install APOC procedures and do it using the function apoc.date.parse:
return apoc.date.parse('18:11:00','s','HH:mm:ss')
Running this example the output will be:
╒════════════════════════════════════════════╕
│"apoc.date.parse("18:11:00",'s','HH:mm:ss')"│
╞════════════════════════════════════════════╡
│65460                                       │
└────────────────────────────────────────────┘
The first parameter is the date/time to be parsed. The second parameter is the target time unit. In this case I have specified seconds (s). The third parameter indicates the date/time format of the first parameter.
Note: remember to install the APOC procedures release corresponding to the version of Neo4j you are using. Take a look at the version compatibility matrix.
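For reference, the arithmetic behind that result can be sketched in a few lines of Ruby (a plain re-implementation, not part of APOC):

```ruby
# Convert "HH:mm:ss" into seconds since midnight, as apoc.date.parse
# does for the example above: 18*3600 + 11*60 + 0 = 65460.
def time_string_to_seconds(str)
  h, m, s = str.split(":").map(&:to_i)
  h * 3600 + m * 60 + s
end

puts time_string_to_seconds("18:11:00")  # => 65460
```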

What formats are accepted in VSTS logging commands?

When using the logging commands (documentation here), there are many string-only properties, but there are also several GUID and time properties especially for the task.logdetail command.
What format are these properties supposed to be when written to the output? i.e. Should GUIDs be included in "N" (no hyphens) or "D" (hyphens), and should time values include dates?
You just need to specify the value of the GUID.
> should time values include dates?
Both date-only and full datetime values work fine, for example: 2016/11/28; 2016/11/28 3:00:05; 11/28/2016; 11/28/2016 4:00:10
Simple code:
Write-Host "##vso[task.logdetail id=a804d160-f69f-4e8a-bdd2-0076d716a01f;name=project1;type=build;order=1;starttime=2016/11/27]create new timeline record"
Write-Host "##vso[task.logdetail id=a804d160-f69f-4e8a-bdd2-0076d716a01f;progress=100;state=Completed;finishtime=2016/11/29]update"
Result
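If you generate the GUID programmatically, a hyphenated ("D" format) value like the ones above is what standard UUID helpers already produce. A Ruby sketch (the task name and date are just the example values from above):

```ruby
require 'securerandom'

# SecureRandom.uuid returns a hyphenated GUID, e.g. "a804d160-f69f-4e8a-...".
guid = SecureRandom.uuid

# Emit the logging command on stdout, as the Write-Host examples above do.
puts "##vso[task.logdetail id=#{guid};name=project1;type=build;order=1;starttime=2016/11/27]create new timeline record"
```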

Rails/Ruby: TimeWithZone comparison inexplicably failing for equivalent values

I am having a terrible time (no pun intended) with DateTime comparison in my current project, specifically comparing two instances of ActiveSupport::TimeWithZone. The issue is that both my TimeWithZone instances have the same value, but all comparisons indicate they are different.
Pausing during execution for debugging (using RubyMine), I can see the following information:
timestamp = {ActiveSupport::TimeWithZone} 2014-08-01 10:33:36 UTC
started_at = {ActiveSupport::TimeWithZone} 2014-08-01 10:33:36 UTC
timestamp.inspect = "Fri, 01 Aug 2014 10:33:36 UTC +00:00"
started_at.inspect = "Fri, 01 Aug 2014 10:33:36 UTC +00:00"
Yet a comparison indicates the values are not equal:
timestamp <=> started_at = -1
The closest answer I found in searching (Comparison between two ActiveSupport::TimeWithZone objects fails) describes the same issue, and I tried the solutions that were applicable, without any success (I tried db:test:prepare, and I don't run Spring).
Moreover, even if I convert to explicit types, they are still not equivalent when compared.
to_time:
timestamp.to_time = {Time} 2014-08-01 03:33:36 -0700
started_at.to_time = {Time} 2014-08-01 03:33:36 -0700
timestamp.to_time <=> started_at.to_time = -1
to_datetime:
timestamp.to_datetime = {Time} 2014-08-01 03:33:36 -0700
started_at.to_datetime = {Time} 2014-08-01 03:33:36 -0700
timestamp.to_datetime <=> started_at.to_datetime = -1
The only "solution" I've found thus far is to convert both values using to_i, then compare, but that's extremely awkward to code everywhere I wish to do comparisons (and moreover, seems like it should be unnecessary):
timestamp.to_i = 1406889216
started_at.to_i = 1406889216
timestamp.to_i <=> started_at.to_i = 0
Any advice would be very much appreciated!
Solved
As indicated by Jon Skeet above, the comparison was failing because of hidden millisecond differences in the times:
timestamp.strftime('%Y-%m-%d %H:%M:%S.%L') = "2014-08-02 10:23:17.000"
started_at.strftime('%Y-%m-%d %H:%M:%S.%L') = "2014-08-02 10:23:17.679"
This discovery led me down a strange path to finally discover what was ultimately causing the issue. It was a combination of this issue occurring only during testing and of using MySQL as my database.
The issue was showing up only in testing because, within the test where this cropped up, I'm running some tests against a couple of associated models that contain the above fields. One model's instance must be saved to the database during the test -- the model that houses the timestamp value. The other model, however, was performing the processing and thus was self-referencing the instance of itself that was created in the test code.
This led to the second culprit, which is the fact that I'm using MySQL as the database, which, when storing datetime values, does not store millisecond information (unlike, say, PostgreSQL).
Invariably, what this means is that the timestamp variable read after its ActiveRecord was retrieved from the MySQL database was effectively rounded and shaved of its millisecond data, while the started_at variable was simply retained in memory during testing, so its original milliseconds were still present.
My own (sub-par) solution is to essentially force both models (rather than just one) in my test to retrieve themselves from the database.
TLDR; If at all possible, use PostgreSQL if you can!
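The effect is easy to reproduce in plain Ruby, no Rails required (a sketch; the epoch value is the to_i value from above):

```ruby
# Two times that are equal to the second but differ in the hidden fraction,
# mirroring the in-memory value vs. the one reloaded from MySQL.
in_memory = Time.at(1406889216.679)  # fraction kept in memory
from_db   = Time.at(1406889216)      # MySQL DATETIME drops the fraction

puts in_memory == from_db            # false
puts(in_memory <=> from_db)          # 1
puts in_memory.to_i == from_db.to_i  # true, the to_i workaround
```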
This seems to happen when you compare a time generated in Ruby with a time loaded from the database.
For example:
time = Time.zone.now
Record.create!(mark: time)
record = Record.last
In this case record.mark == time will fail, because Ruby keeps time down to nanoseconds, while different databases have different precision.
In the case of the Postgres DateTime type, it'll be milliseconds.
You can see this when you check: record.mark.sec == time.sec, while record.mark.nsec != time.nsec.

Timestamp can't be saved - postgresql

I have the below code snippet:
class Foo
  include DataMapper::Resource

  property :id, Serial
  property :timestamp, DateTime
end
I just want to convert the current time to ms:
class Time
  def to_ms
    (self.to_f * 1000.0).to_i
  end
end

def current_time
  time = Time.now
  return time.to_ms
end
time = current_time # => 1352633569151
but when I try to save Foo with the above timestamp, it can't be saved to the database, and I don't get any error message.
foo = Foo.new
foo.timestamp = time
foo.save
Any idea?
Are you using a correct format for your DateTime property?
should be like:
DateTime.now.to_s
=> "2012-11-11T14:04:02+02:00"
or a "native" DateTime object, without any conversions.
DataMapper will take care of converting it to the appropriate values based on the adapter used.
Also, to have exceptions raised when saving items:
DataMapper::Model.raise_on_save_failure = true
That's a global setting, i.e. all models will raise exceptions. To make only some models behave like this:
YourModel.raise_on_save_failure = true
http://datamapper.org/docs/create_and_destroy.html
See "Raising an exception when save fails" chapter
By the way, to see what's wrong with your item before saving it, use item.valid? and item.errors:
foo = Foo.new
foo.timestamp = time
if foo.valid?
  foo.save
else
  p foo.errors
end
I replicated your code and got following error:
#errors={:timestamp=>["Timestamp must be of type DateTime"]}
See live demo here
The PostgreSQL data types would be timestamp or timestamp with time zone. But that contradicts what you are doing. You take the epoch value and multiply by 1000. You'd have to save that as integer or some numeric type.
More about the handling of timestamps in Postgres in this related answer.
I would save the value as timestamp with time zone as is (no multiplication). You can always extract ms out of it if need should be.
If you need to translate the Unix epoch value back to a timestamp, use:
SELECT to_timestamp(1352633569.151);
--> timestamptz 2012-11-11 12:32:49.151+01
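The same round trip can be checked from the Ruby side (a sketch; the microseconds are passed explicitly to avoid float rounding in the example):

```ruby
# Epoch seconds plus 151 ms, converted back to a UTC point in time,
# matching the to_timestamp(1352633569.151) result above.
t = Time.at(1352633569, 151_000).utc  # second argument is microseconds
puts t.strftime('%Y-%m-%d %H:%M:%S.%L')  # => "2012-11-11 11:32:49.151"
```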
Just save "now"
If you actually want to save "now", i.e. the current point in time, then let Postgres do it for you. Just make sure the database server has a reliable local time - install ntp. This is generally more reliable, accurate and simple.
Set the DEFAULT of the timestamp column to now() or CURRENT_TIMESTAMP.
If you want timestamp instead of timestamptz, you can still use now(), which is translated to "local" time according to the server's timezone setting. Or, to get the time for a given time zone:
now() AT TIME ZONE 'Europe/Vienna' -- your time zone here
Or, in your particular case, since you seem to want only three fractional digits: now()::timestamp(3) or CURRENT_TIMESTAMP(3) or CURRENT_TIMESTAMP(3) AT TIME ZONE 'Europe/Vienna'.
Or, if you define the type of the column as timestamp(3), all timestamp values are coerced to the type and rounded to 3 fractional decimal digits automatically.
So this would be all you need:
CREATE TABLE tbl (
-- other columns
,ts_column timestamp(3) DEFAULT now()
);
The value is set automatically on INSERT, you don't even have to mention the column.
If you want to update it ON UPDATE, add a TRIGGER like this:
Trigger function:
CREATE OR REPLACE FUNCTION trg_up_ts()
  RETURNS trigger AS
$BODY$
BEGIN
  NEW.ts_column := now();
  RETURN NEW;
END
$BODY$ LANGUAGE plpgsql VOLATILE;
Trigger:
CREATE TRIGGER log_up_ts
BEFORE UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_up_ts();
Now, everything works automatically.
If that's not what you are after, #slivu's answer seems to cover the Ruby side just nicely.
I'm not familiar with PostgreSQL, but why are you assigning a Fixnum (time) to timestamp (which is a DateTime)? Your model must be failing to convert time to a proper DateTime value before generating the SQL.
Try foo.save!. I'm pretty sure you'll see an error, either reported from PostgreSQL, saying 1352633569151 is not a valid value for the table column, or your model will say it can't parse 1352633569151 to a valid DateTime.
foo.timestamp = Time.now or foo.timestamp = '2012-11-11 00:00:00' is something that'll work.
