While loop working with Coalesce function - stored-procedures

I have a situation where the COALESCE function is used in a stored procedure with a WHILE loop, but whenever I try to debug it, execution keeps returning to that WHILE condition.
Here is the condition:
WHILE COALESCE(#date, '01 Jan 2001') <> '01 Jan 2001'
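COALESCE simply substitutes the sentinel date when #date is NULL, so the condition only becomes false once #date is NULL or equal to '01 Jan 2001'; if the loop body never advances #date, control returns to the WHILE condition forever. A minimal Python sketch of that logic (the function and data are illustrative, not taken from the procedure):

```python
SENTINEL = "01 Jan 2001"

def process_dates(dates):
    """Mimic WHILE COALESCE(#date, sentinel) <> sentinel.

    Loops until the current date is NULL (None) or equals the sentinel;
    the loop body must advance #date itself, or it never terminates.
    """
    current = dates.pop(0) if dates else None
    iterations = 0
    while (current if current is not None else SENTINEL) != SENTINEL:
        iterations += 1                             # the real loop body goes here
        current = dates.pop(0) if dates else None   # advance #date
    return iterations

print(process_dates(["02 Jan 2001", "03 Jan 2001"]))  # 2
print(process_dates([None]))                          # 0: NULL coalesces to the sentinel
```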


Datetime comparison query doesn't return any results

I'm trying to get a simple date-time comparison to work, but the query doesn't return any results.
The query is
MATCH (n:Event) WHERE n.start_datetime > datetime("2019-06-01T18:40:32.142+0100") RETURN n.start_datetime
According to this documentation page, this type of comparison should work. I've also tried creating the datetime object explicitly, for instance with datetime({year: 2019, month: 7}).
I've checked that the start_datetime is in fact well formatted, by checking if the values start_datetime.year for example was correct, and couldn't find any error.
Given that all the records in the database are from 2021, the above query should return every event, yet it returns nothing.
Using only the year comparison instead of the full datetime comparison works:
MATCH (n:Event) WHERE n.start_datetime.year > datetime("2019-06-01T18:40:32.142+0100").year RETURN n.start_datetime
Double-check the data type of start_datetime. It can be in either epoch seconds or epoch milliseconds. You need to convert the epoch format to datetime, so that both sides are of the same data type. The reason your 2nd query works (.year) is that .year returns an integer value.
Run the query below to get some samples:
MATCH (n:Event)
RETURN distinct n.start_datetime LIMIT 5
If you see that the values are 10 digits long, they are in epochSeconds. If so, run the query below:
MATCH (n:Event)
WHERE n.start_datetime is not null
AND datetime({epochSeconds: n.start_datetime}) > datetime("2019-06-01T18:40:32.142+0100")
RETURN n.start_datetime
LIMIT 25
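The conversion is easy to sanity-check outside the database; here is a Python sketch of what datetime({epochSeconds: ...}) does, with an illustrative 10-digit timestamp from 2021 (matching the question's data):

```python
from datetime import datetime, timezone

epoch_seconds = 1625000000   # illustrative 10-digit value: seconds since 1970
converted = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

threshold = datetime(2019, 6, 1, 18, 40, 32, tzinfo=timezone.utc)

# Once both sides are proper datetimes, the comparison behaves as expected:
print(converted)               # a timestamp in 2021
print(converted > threshold)   # True
```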
It turns out the error was due to the timezone. Neo4j had saved the properties as LocalDateTime, which apparently can't be compared to ZonedDateTime.
I used py2neo for most of the node management, and the solution was to give a specific timezone to the Python property. This was done (in my case) using:
datetime.datetime.fromtimestamp(kwargs["end"], pytz.UTC)
After that, I was able to do the comparisons.
Hopefully this saves future developers a couple of hours.
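The underlying rule is easy to reproduce in plain Python: a naive datetime (no timezone, which is what a LocalDateTime amounts to) cannot be ordered against an aware one (a ZonedDateTime). A minimal sketch using only the standard library (datetime.timezone.utc stands in for pytz.UTC):

```python
from datetime import datetime, timezone

naive = datetime(2019, 7, 1, 12, 0, 0)                       # "LocalDateTime"
aware = datetime(2019, 7, 1, 12, 0, 0, tzinfo=timezone.utc)  # "ZonedDateTime"

try:
    naive > aware
except TypeError as exc:
    # can't compare offset-naive and offset-aware datetimes
    print("comparison failed:", exc)

# Attaching a timezone, as the answer above does, makes the values comparable:
fixed = naive.replace(tzinfo=timezone.utc)
print(fixed == aware)  # True
```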

Are there only 16 digits in a sum of decimal

create table t (mnt decimal(20,2));
insert into t values (111340534626262);
insert into t values (0.56);
select sum(mnt) from t;
select sum(mnt::decimal(20,2))::decimal(20,2) from t;
I can't get more than 16 digits. Any idea?
Using IDS 12.10FC10.
When I run the code shown in my sqlcmd program, I get the output:
111340534626262.56
111340534626262.56
When I run the code shown in Informix's DB-Access program, I get this output (slightly altered):
(sum)
111340534626263
1 row(s) retrieved.
(expression)
111340534626263
1 row(s) retrieved.
The problem, therefore, is probably in the display mechanism in DB-Access rather than in the server itself.
If you're writing your own code, it is relatively straightforward to ensure that the display is accurate and complete. Using DB-Access is not necessarily the best way to go.
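The 16-digit cutoff is what you would expect if the display path pushes the DECIMAL through a double-precision float (roughly 15-16 significant decimal digits) instead of printing the stored value directly. A Python sketch of the difference, using the question's numbers:

```python
from decimal import Decimal

# Exact decimal arithmetic keeps every digit of the sum:
total = Decimal("111340534626262") + Decimal("0.56")
print(total)  # 111340534626262.56

# A binary double cannot hold all 17 significant digits; the nearest
# double differs from the true sum in the low-order digits:
print(Decimal(float(total)) == total)  # False
```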

Query Execution Time Varies - IBM Informix - Data Studio

I am executing one SQL statement in Informix Data Studio 12.1. It takes around 50 to 60 ms to execute (for one day's dates).
SELECT
sum( (esrt.service_price) * (esrt.confirmed_qty + esrt.pharmacy_confirm_quantity) ) AS net_amount
FROM
episode_service_rendered_tbl esrt,
patient_details_tbl pdt,
episode_details_tbl edt,
ms_mat_service_header_sp_tbl mmshst
WHERE
esrt.patient_id = pdt.patient_id
AND edt.patient_id = pdt.patient_id
AND esrt.episode_id = edt.episode_id
AND mmshst.material_service_sp_id = esrt.material_service_sp_id
AND mmshst.bill_heads_id = 1
AND esrt.delete_flag = 1
AND esrt.customer_sp_code != '0110000006'
AND pdt.patient_category_id IN(1001,1002,1003,1004,1005,1012,1013)
AND edt.episode_type ='ipd'
AND esrt.generated_date BETWEEN '2017-06-04' AND '2017-06-04';
When I try to execute the same statement by creating a function, it takes around 35 to 40 seconds.
Please find the code below.
CREATE FUNCTION sb_pharmacy_account_summary_report_test1(START_DATE DATE,END_DATE DATE)
RETURNING VARCHAR(100),DECIMAL(10,2);
DEFINE v_sale_credit_amt DECIMAL(10,2);
BEGIN
SELECT
sum( (esrt.service_price) * (esrt.confirmed_qty +
esrt.pharmacy_confirm_quantity) ) AS net_amount
INTO
v_sale_credit_amt
FROM
episode_service_rendered_tbl esrt,
patient_details_tbl pdt,
episode_details_tbl edt,
ms_mat_service_header_sp_tbl mmshst
WHERE
esrt.patient_id = pdt.patient_id
AND edt.patient_id = pdt.patient_id
AND esrt.episode_id = edt.episode_id
AND mmshst.material_service_sp_id = esrt.material_service_sp_id
AND mmshst.bill_heads_id = 1
AND esrt.delete_flag = 1
AND esrt.customer_sp_code != '0110000006'
AND pdt.patient_category_id IN(1001,1002,1003,1004,1005,1012,1013)
AND edt.episode_type ='ipd'
AND esrt.generated_date BETWEEN START_DATE AND END_DATE;
RETURN 'SALE CREDIT','' with resume;
RETURN 'IP SB Credit Amount',v_sale_credit_amt;
END
END FUNCTION;
Can someone tell me what is the reason for this time variation?
In very simple words:
If you create a function, the SQL is parsed and stored in the database together with some optimization information. When you call the function, the optimizer already knows about the SQL and executes it, so optimization is done only once, when the function is created.
If you run the SQL directly, the optimizer parses it, optimizes it and then executes it, every time you run it.
This explains the time difference.
I would say the difference in time is due to the parameterized query.
The first SQL has hardcoded date values; the one in the SPL has parameters. That may cause a different query plan (e.g. which index to follow) to be applied to the query in the SPL than to the one executed from Data Studio.
You can try getting the query plan (using SET EXPLAIN) for the first SQL and then use directives in the SPL to force the engine to use that same path.
Have a look at:
https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.perf.doc/ids_prf_554.htm
It explains how to use optimizer directives to speed up queries.

Is it possible to combine active record query with postgreSQL raw query

Is it possible to combine active record query with postgreSQL raw query ?
Czce.where("time > ? ", "2014-02-09".to_datetime).raw_query
def self.raw_query
raw_q = 'SELECT cast(ticktime as timestamp(1)) AS ticktime
,max(bid_price) as price, max(bid_volume) as volume
From czces
Group BY 1
ORDER BY 1
limit 1000;'
ActiveRecord::Base.connection.select_all(raw_q)
end
If I do this with find_by_sql, the result from the database is missing many columns.
And the conditional .where("ticktime > ? ", "2014-02-09".to_datetime) still does not work.
Here's the query expression: Czce.where("ticktime > ? ", "2014-02-09".to_datetime).find_by_sql(raw_q)
[998] #<Czce:0x007fc881443080> {
:id => nil,
:ticktime => Fri, 07 Feb 2014 01:16:41 UTC +00:00
},
[999] #<Czce:0x007fc881442d38> {
:id => nil,
:ticktime => Fri, 07 Feb 2014 01:16:42 UTC +00:00
}
But the expected result should contain price and volume:
from (pry):3:in `block (2 levels) in <top (required)>'
[4] pry(main)> result[0]
{
"ticktime" => "2014-02-28 07:00:00",
"price" => "7042",
"volume" => "2"
}
[5] pry(main)> result[1]
{
"ticktime" => "2014-02-28 06:59:59",
"price" => "18755",
"volume" => "525"
}
In short, no.
Raw select_all queries are done as pure SQL sent to the server, and records sent back as raw data.
From the Rails Guide for select_all (emphasis mine):
select_all will retrieve objects from the database using custom SQL
just like find_by_sql but will not instantiate them. Instead, you will
get an array of hashes where each hash indicates a record.
You could iterate over the resulting records and do something with those, perhaps store them in your own class and then use that information in subsequent calls via ActiveRecord, but you can't actually directly chain the two. If you're going to drop down into raw SQL (and certainly there are myriad reasons you may want to do this), you might as well grab everything else you would need in that same context at the same time, in the same raw query.
There's also find_by_sql, which will return an array of instantiated ActiveRecord objects.
From the guide:
The find_by_sql method will return an array of objects even if the
underlying query returns just a single record.
And:
find_by_sql provides you with a simple way of making custom calls to
the database and retrieving instantiated objects.
However, those are actual instantiated objects which, while perhaps easier to work with in many respects (since they are mapped to instances of the model and not simply to hashes), do not chain the same way: calling, say, where on the result is not the same as a chained call on the base model class, as would normally be done.
I would recommend doing everything you can in the SQL itself, all server side, and then any further touch-up filtering you want to do can be done client-side in Rails by iterating over the records that are returned.
Try not to use the base connection itself; try expanding the standard Rails SQL form, like:
Czces.select("cast(ticktime as timestamp(1)) AS ticktime,max(bid_price) as price, max(bid_volume) as volume")
.group("1").order("1").limit(1000)
But if you explain the condition, i.e. what you really want to get from the query, we can try to write a proper SQL statement.

Rails/Ruby: TimeWithZone comparison inexplicably failing for equivalent values

I am having a terrible time (no pun intended) with DateTime comparison in my current project, specifically comparing two instances of ActiveSupport::TimeWithZone. The issue is that both my TimeWithZone instances have the same value, but all comparisons indicate they are different.
Pausing during execution for debugging (using RubyMine), I can see the following information:
timestamp = {ActiveSupport::TimeWithZone} 2014-08-01 10:33:36 UTC
started_at = {ActiveSupport::TimeWithZone} 2014-08-01 10:33:36 UTC
timestamp.inspect = "Fri, 01 Aug 2014 10:33:36 UTC +00:00"
started_at.inspect = "Fri, 01 Aug 2014 10:33:36 UTC +00:00"
Yet a comparison indicates the values are not equal:
timestamp <=> started_at = -1
The closest answer I found in searching (Comparison between two ActiveSupport::TimeWithZone objects fails) indicates the same issue here, and I tried the solutions that were applicable without any success (tried db:test:prepare and I don't run Spring).
Moreover, even if I try converting to explicit types, they still are not equivalent when comparing.
to_time:
timestamp.to_time = {Time} 2014-08-01 03:33:36 -0700
started_at.to_time = {Time} 2014-08-01 03:33:36 -0700
timestamp.to_time <=> started_at.to_time = -1
to_datetime:
timestamp.to_datetime = {Time} 2014-08-01 03:33:36 -0700
started_at.to_datetime = {Time} 2014-08-01 03:33:36 -0700
timestamp.to_datetime <=> started_at.to_datetime = -1
The only "solution" I've found thus far is to convert both values using to_i, then compare, but that's extremely awkward to code everywhere I wish to do comparisons (and moreover, seems like it should be unnecessary):
timestamp.to_i = 1406889216
started_at.to_i = 1406889216
timestamp.to_i <=> started_at.to_i = 0
Any advice would be very much appreciated!
Solved
As indicated by Jon Skeet above, the comparison was failing because of hidden millisecond differences in the times:
timestamp.strftime('%Y-%m-%d %H:%M:%S.%L') = "2014-08-02 10:23:17.000"
started_at.strftime('%Y-%m-%d %H:%M:%S.%L') = "2014-08-02 10:23:17.679"
This discovery led me down a strange path to finally discover what was ultimately causing the issue. It was a combination of this issue occurring only during testing and of using MySQL as my database.
The issue was showing only in testing because, within the test where this cropped up, I'm running some tests against a couple of associated models that contain the above fields. One model's instance must be saved to the database during the test -- the model that houses the timestamp value. The other model, however, performs the processing and thus references the instance of itself that was created in the test code.
This led to the second culprit: the fact that I'm using MySQL as the database, which, when storing datetime values, does not store millisecond information (unlike, say, PostgreSQL).
Invariably, what this means is that the timestamp variable that was being read after its ActiveRecord was retrieved from the MySQL database was effectively being rounded and shaved of the millisecond data, while the started_at variable was simply retained in memory during testing and thus the original milliseconds were still present.
My own (sub-par) solution is to essentially force both models (rather than just one) in my test to retrieve themselves from the database.
TL;DR: if at all possible, use PostgreSQL!
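The mechanism is easy to reproduce outside Rails. A Python sketch of the same situation: one timestamp kept in memory with sub-second precision, and its round-trip through a seconds-only column (as with MySQL's classic DATETIME), using the values from the question:

```python
from datetime import datetime

started_at = datetime(2014, 8, 2, 10, 23, 17, 679000)  # kept in memory
stored     = started_at.replace(microsecond=0)         # round-tripped through MySQL

print(stored == started_at)               # False: hidden sub-second difference
print(started_at.strftime("%H:%M:%S"),    # both display identically
      stored.strftime("%H:%M:%S"))        # when sub-seconds are not printed

# The to_i workaround amounts to truncating both sides to whole seconds:
print(int(started_at.timestamp()) == int(stored.timestamp()))  # True
```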
This seems to happen if you're comparing a time generated in Ruby with a time loaded from the database.
For example:
time = Time.zone.now
Record.create!(mark: time)
record = Record.last
In this case record.mark == time will fail, because Ruby keeps time down to nanoseconds, while different databases have different precision.
In the case of the Postgres DateTime type, it will be microseconds.
You can see this when you inspect the values: the coarser components match, while record.mark.nsec != time.nsec.
