So I'm having a join issue with a SnowSQL query. I have three separate query styles below, and I cannot get the AUDIT_LOGGIN_TABLE_NAME to show up for this specific table name. I have even hard-coded the table name in the 'Named Query' and 'CTE' styles, but no luck. When I run each piece of code alone I get the dataset. There are two fields to join on: Table_Name and ETL date.
For the second round I thought it might be a whitespace issue, hence TRIM was also added, but still no luck.
Would anyone be able to point me in the right general direction on this? Pick whichever style of query you prefer.
The expected result set is below as well. Sadly, this is going into a VIEW, so I do not think I can use temp tables within the view definition.
select a.Full_Table_Name
,replace(UPPER(Full_Table_Name),'RAW_DB.JA.','' ) as Short_Table_Name
,log.TABLE_NAME as AUDIT_LOGGIN_TABLE_NAME
,a.ROWS_INSERTED
,log.RECORD_COUNT AS AUDIT_LOGGING_RECORD_COUNT
,a.ETL_DATE
, to_date(concat(substring(log.ETL_DATE,0,4),'-',substring(log.ETL_DATE,5,2),'-',substring(log.ETL_DATE,7,2) ) ) as AUDIT_LOGGING_ETL_DATE
,a.START_TIME
,a.END_TIME
,a.DURATION_IN_SECONDS
,a.EXECUTION_STATUS
,case when a.ROWS_INSERTED = log.RECORD_COUNT then 1 else 0 end VALIDATION_RECORD_COUNT_INSERT
from (
select UPPER(substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12)) as Full_Table_Name
,ROWS_INSERTED
,to_date(START_TIME) as ETL_DATE
,START_TIME
,END_TIME
,datediff(second,START_TIME,END_TIME) as Duration_in_seconds
,EXECUTION_STATUS
from VW_QUERY_HISTORY as vw
where substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12) like '%JA%'
and QUERY_TYPE = 'COPY'
and UPPER(substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12)) like '%RAW_DB.JA%'
and to_date(START_TIME) >= dateadd(day,-8,current_date() )
) as a
LEFT JOIN SOURCE_TABLE_COUNTS as log on UPPER(replace(UPPER(log.TABLE_NAME),'MGR.','')) = replace(UPPER(Full_Table_Name),'RAW_DB.JA.','' )
and
to_date(a.ETL_DATE) = to_date(concat(substring(log.ETL_DATE,0,4),'-',substring(log.ETL_DATE,5,2),'-',substring(log.ETL_DATE,7,2) ) )
order by ETL_DATE
///////NAMED QUERY STYLE /////////////////////////////////////////////
select Full_Table_Name
,Short_Table_Name
,AUDIT_LOGGIN_TABLE_NAME
,ROWS_INSERTED
,AUDIT_LOGGING_RECORD_COUNT
,ETL_DATE
,AUDIT_LOGGING_ETL_DATE
,START_TIME
,END_TIME
,DURATION_IN_SECONDS
,case when ROWS_INSERTED = AUDIT_LOGGING_RECORD_COUNT then 1 else 0 end as VALIDATION_RECORD_COUNT_INSERT
from (
select * from (
select TRIM(UPPER(substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12))) as Full_Table_Name
,TRIM(replace(UPPER(Full_Table_Name),'RAW_DB.JA.','' )) as Short_Table_Name
,ROWS_INSERTED
,to_date(START_TIME) as ETL_DATE
,START_TIME
,END_TIME
,datediff(second,START_TIME,END_TIME) as Duration_in_seconds
,EXECUTION_STATUS
from VW_QUERY_HISTORY as vw
where substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12) like '%JA%'
and QUERY_TYPE = 'COPY'
and UPPER(substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12)) like '%RAW_DB.JA%'
and to_date(START_TIME) >= dateadd(day,-8,current_date() )
) as sub_qury where trim(Short_Table_Name) like '%UDT_SKU%'
) as a
left join
( select trim(UPPER(replace(UPPER(log.TABLE_NAME),'MGR.',''))) as AUDIT_LOGGIN_TABLE_NAME
,log.RECORD_COUNT AS AUDIT_LOGGING_RECORD_COUNT
, to_date(concat(substring(log.ETL_DATE,0,4),'-',substring(log.ETL_DATE,5,2),'-',substring(log.ETL_DATE,7,2) ) ) as AUDIT_LOGGING_ETL_DATE
from SOURCE_TABLE_COUNTS as log
where UPPER(replace(UPPER(log.TABLE_NAME),'MGR.','')) = 'UDT_SKU'
) as audit_log_query
on trim(a.Short_Table_Name) = trim(audit_log_query.AUDIT_LOGGIN_TABLE_NAME) and
audit_log_query.AUDIT_LOGGING_ETL_DATE = ETL_DATE
//////////// CTE style
WITH audit_log_query as
( select trim(UPPER(replace(UPPER(log.TABLE_NAME),'MGR.',''))) as AUDIT_LOGGIN_TABLE_NAME
,log.RECORD_COUNT AS AUDIT_LOGGING_RECORD_COUNT
, to_date(concat(substring(log.ETL_DATE,0,4),'-',substring(log.ETL_DATE,5,2),'-',substring(log.ETL_DATE,7,2) ) ) as AUDIT_LOGGING_ETL_DATE
from SOURCE_TABLE_COUNTS as log
where UPPER(replace(UPPER(log.TABLE_NAME),'MGR.','')) = 'UDT_SKU'
)
select Full_Table_Name
,Short_Table_Name
,AUDIT_LOGGIN_TABLE_NAME
,ROWS_INSERTED
,AUDIT_LOGGING_RECORD_COUNT
,ETL_DATE
,AUDIT_LOGGING_ETL_DATE
,START_TIME
,END_TIME
,DURATION_IN_SECONDS
,case when ROWS_INSERTED = AUDIT_LOGGING_RECORD_COUNT then 1 else 0 end as VALIDATION_RECORD_COUNT_INSERT
from (
select * from (
select TRIM(UPPER(substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12))) as Full_Table_Name
,TRIM(replace(UPPER(Full_Table_Name),'RAW_DB.JA.','' )) as Short_Table_Name
,ROWS_INSERTED
,to_date(START_TIME) as ETL_DATE
,START_TIME
,END_TIME
,datediff(second,START_TIME,END_TIME) as Duration_in_seconds
,EXECUTION_STATUS
from VW_QUERY_HISTORY as vw
where substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12) like '%JA%'
and QUERY_TYPE = 'COPY'
and UPPER(substring(QUERY_TEXT,11,charindex('from',QUERY_TEXT)-12)) like '%RAW_DB.JA%'
and to_date(START_TIME) >= dateadd(day,-8,current_date() )
) as sub_qury where trim(Short_Table_Name) like '%UDT_SKU%'
) as a
left join audit_log_query on trim(a.Short_Table_Name) = trim(audit_log_query.AUDIT_LOGGIN_TABLE_NAME)
and audit_log_query.AUDIT_LOGGING_ETL_DATE = ETL_DATE
Expected DATA
FULL_TABLE_NAME|SHORT_TABLE_NAME|AUDIT_LOGGIN_TABLE_NAME|ROWS_INSERTED|AUDIT_LOGGING_RECORD_COUNT|ETL_DATE|AUDIT_LOGGING_ETL_DATE|START_TIME|END_TIME|DURATION_IN_SECONDS|VALIDATION_RECORD_COUNT_INSERT
RAW_DB.JDA.UDT_SKU|UDT_SKU||19697||2021-04-01||2021-04-01 07:59:39.101 -0700|2021-04-01 07:59:40.048 -0700|1|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||27144||2021-04-05||2021-04-05 08:03:37.907 -0700|2021-04-05 08:03:39.377 -0700|2|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||16536||2021-03-31||2021-03-31 08:03:05.626 -0700|2021-03-31 08:03:06.921 -0700|1|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||19182||2021-04-02||2021-04-02 08:03:33.296 -0700|2021-04-02 08:03:34.803 -0700|1|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||885||2021-04-03||2021-04-03 08:04:15.123 -0700|2021-04-03 08:04:16.071 -0700|1|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||0||2021-04-04||2021-04-04 07:30:23.213 -0700|2021-04-04 07:30:23.862 -0700|0|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||1262||2021-04-04||2021-04-04 17:35:01.110 -0700|2021-04-04 17:35:02.500 -0700|1|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||197899||2021-04-06||2021-04-06 08:00:56.860 -0700|2021-04-06 08:00:59.798 -0700|3|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||107433||2021-03-30||2021-03-30 08:02:34.231 -0700|2021-03-30 08:02:36.846 -0700|2|0
RAW_DB.JDA.UDT_SKU|UDT_SKU||17794||2021-04-07||2021-04-07 08:00:40.782 -0700|2021-04-07 08:00:41.590 -0700|1|0
So, as your expected data shows, the schema for these tables is JDA, not JA, so your filters need to be changed.
There are a number of places where you SELECT the output you want and then reapply the same logic in the WHERE clause, slightly differently; you can just reuse the aliased SELECT columns in your WHERE.
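For example, something along these lines works in Snowflake (just a sketch built from the expressions in your query), because an alias defined in the SELECT list can be reused later in the same list and in the WHERE clause:
SELECT TRIM(UPPER(SUBSTRING(query_text, 11, CHARINDEX('from', query_text) - 12))) AS full_table_name
    ,REPLACE(full_table_name, 'RAW_DB.JDA.', '') AS short_table_name
FROM vw_query_history
WHERE full_table_name LIKE '%RAW_DB.JDA%'
AND short_table_name LIKE '%UDT_SKU%';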
Also, your date parsing can just use the TO_DATE format version and save all the string splicing:
TO_DATE(log.etl_date, 'YYYYMMDD') AS audit_logging_etl_date
Also, it would have been nice to show the input data, as trying to guess the audit format was rather painful.
WITH source_table_counts AS (
SELECT * FROM VALUES
('mgr.UDT_SKU',123, '20210401')
v(table_name, record_count, etl_date)
), vw_query_history AS (
SELECT * FROM VALUES
('0123456789 RAW_DB.JDA.UDT_SKU from',19697,'2021-04-01 07:59:39.101 -0700','2021-04-01 07:59:40.048 -0700',NULL,'COPY'),
('0123456789 RAW_DB.JDA.UDT_SKU from',27144,'2021-04-05 08:03:37.907 -0700','2021-04-05 08:03:39.377 -0700',NULL,'COPY'),
('0123456789 RAW_DB.JDA.UDT_SKU from',16536,'2021-03-31 08:03:05.626 -0700','2021-03-31 08:03:06.921 -0700',NULL,'COPY'),
('0123456789 RAW_DB.JDA.UDT_SKU from',19182,'2021-04-02 08:03:33.296 -0700','2021-04-02 08:03:34.803 -0700',NULL,'COPY'),
('01234567891RAW_DB.JDA.UDT_SKU from',885,'2021-04-03 08:04:15.123 -0700','2021-04-03 08:04:16.071 -0700',NULL,'COPY'),
('01234567891RAW_DB.JDA.UDT_SKU from',0,'2021-04-04 07:30:23.213 -0700','2021-04-04 07:30:23.862 -0700',NULL,'COPY'),
('01234567891RAW_DB.JDA.UDT_SKU from',1262,'2021-04-04 17:35:01.110 -0700','2021-04-04 17:35:02.500 -0700',NULL,'COPY'),
('01234567891RAW_DB.JDA.UDT_SKU from',197899,'2021-04-06 08:00:56.860 -0700','2021-04-06 08:00:59.798 -0700',NULL,'COPY'),
('01234567891RAW_DB.JDA.UDT_SKU from',107433,'2021-03-30 08:02:34.231 -0700','2021-03-30 08:02:36.846 -0700',NULL,'COPY'),
('01234567891RAW_DB.JDA.UDT_SKU from',17794,'2021-04-07 08:00:40.782 -0700','2021-04-07 08:00:41.590 -0700',NULL,'COPY')
v(query_text, rows_inserted, start_time, end_time, execution_status,query_type)
)
, audit_log_query AS (
select
TRIM(REPLACE(UPPER(log.table_name),'MGR.','')) AS audit_loggin_table_name
,log.record_count AS audit_logging_record_count
,TO_DATE(log.etl_date, 'YYYYMMDD') AS audit_logging_etl_date
--,TO_DATE(CONCAT(SUBSTRING(log.etl_date,0,4),'-',SUBSTRING(log.etl_date,5,2),'-', SUBSTRING(log.etl_date,7,2) ) ) AS audit_logging_etl_date
FROM source_table_counts AS log
WHERE audit_loggin_table_name = 'UDT_SKU'
)
SELECT
a.full_table_name
,a.short_table_name
,al.audit_loggin_table_name
,a.rows_inserted
,al.audit_logging_record_count
,a.etl_date
,al.audit_logging_etl_date
,a.start_time
,a.end_time
,a.duration_in_seconds
,IFF(a.rows_inserted = al.audit_logging_record_count, 1, 0) AS validation_record_count_insert
FROM (
SELECT *
FROM (
SELECT
TRIM(UPPER(SUBSTRING(query_text, 11, CHARINDEX('from', query_text)-12))) AS full_table_name
,REPLACE(full_table_name,'RAW_DB.JDA.','' ) AS short_table_name
,rows_inserted
,TO_DATE(start_time) AS etl_date
,start_time
,end_time
,DATEDIFF('second', start_time, end_time) AS duration_in_seconds
,execution_status
FROM vw_query_history AS vw
WHERE full_table_name like '%JDA%'
AND query_type = 'COPY'
AND full_table_name like '%RAW_DB.JDA%'
AND to_date(START_TIME) >= dateadd(day,-8,current_date() )
) as sub_qury
where short_table_name like '%UDT_SKU%'
) as a
LEFT JOIN audit_log_query AS al
on a.short_table_name = al.audit_loggin_table_name
and al.audit_logging_etl_date = a.etl_date;
I am using find_by_sql to do a query on my Conversationalist model using Postgres as the DB server:
Conversationalist.find_by_sql(['
SELECT * FROM (
SELECT * FROM conversationalists
WHERE conversable_id = ? AND conversable_type = ?
) t1
LEFT JOIN (
SELECT * FROM conversationalists
WHERE conversable_id = ? AND conversable_type = ?
) t2
ON t1.chat_id = t2.chat_id',
recipient_id, recipient_type, sender_id, sender_type])
It works fine if there is a result. But if there is no result then I get an array with an empty Conversationalist object: [#<Conversationalist id: nil, conversable_type: nil...>]
What I am expecting is an empty array, since no rows should be returned, but instead I get a result. How would I get an empty array if no results are returned?
ADDITIONAL CONTEXT
What I am trying to do is essentially a chat. When someone messages another user, the code above first checks to see if those two people are already chatting. If they are, the message gets added to the chat. If not, a new Chat gets created and the message gets added:
class MessagesController < ApplicationController
def create
message = new_message
conversation = already_conversing?
if conversation.empty? || conversation.first.id.nil?
chat = Chat.new
chat.messages << message
chat.conversationalists << sender
chat.conversationalists << recipient
chat.save!
else
chat = Chat.find(conversation.first.chat_id)
chat.messages << message
end
head :ok
end
private
def new_message
Message.new(
sender_id: params[:sender_id],
sender_type: params[:sender_type],
recipient_id: params[:recipient_id],
recipient_type: params[:recipient_type],
message: params[:message]
)
end
def already_conversing?
Conversationalist.conversing?(
params[:recipient_id],
params[:recipient_type],
params[:sender_id],
params[:sender_type]
)
end
end
The Model:
class Conversationalist < ApplicationRecord
def self.conversing?(recipient_id, recipient_type, sender_id, sender_type)
Conversationalist.find_by_sql(['
SELECT * FROM (
SELECT * FROM conversationalists
WHERE conversable_id = ? AND conversable_type = ?
) t1
LEFT JOIN (
SELECT * FROM conversationalists
WHERE conversable_id = ? AND conversable_type = ?
) t2
ON t1.chat_id = t2.chat_id',
recipient_id, recipient_type, sender_id, sender_type])
end
end
So I was able to figure it out with the help of @Beartech in the comments above. Essentially the issue was happening because of the LEFT JOIN: if there are any results in t1, then Rails returns an array with an empty object. Similarly, if it were a RIGHT JOIN and t2 had a result, Rails would do the same. So the fix, in order to get an empty array, is to change the join to an INNER JOIN:
Conversationalist.find_by_sql(['
SELECT * FROM (
SELECT * FROM conversationalists
WHERE conversable_id = ? AND conversable_type = ?
) t1
INNER JOIN (
SELECT * FROM conversationalists
WHERE conversable_id = ? AND conversable_type = ?
) t2
ON t1.chat_id = t2.chat_id',
recipient_id, recipient_type, sender_id, sender_type])
So, I'm getting an error that looks like: [ERROR] addons/itemstore/lua/itemstore/lua/itemsure/vgui/container.lua:43 'for' limit must be a number
Here is container.lua
local PANEL = {}
AccessorFunc( PANEL, "ContainerID", "ContainerID" )
AccessorFunc( PANEL, "Rows", "Rows" )
AccessorFunc( PANEL, "Columns", "Columns" )
function PANEL:Init()
self.Items = {}
table.insert( itemstore.containers.Panels, self )
end
function PANEL:Refresh()
local container = itemstore.containers.Get( self:GetContainerID() )
if ( container ) then
for i = 1, container.Size do
if ( not self.Items[ i ] ) then
self.Items[ i ] = self:Add( "ItemStoreSlot" )
end
local panel = self.Items[ i ]
panel:SetItem( container:GetItem( i ) )
panel:SetContainerID( self:GetContainerID() )
panel:SetSlot( i )
panel:InvalidateLayout()
end
self:InvalidateLayout()
end
end
function PANEL:SetContainerID( containerid )
self.ContainerID = containerid
self:Refresh()
end
function PANEL:PerformLayout()
self:SetSpaceX( 1 )
self:SetSpaceY( 1 )
local container = itemstore.containers.Get( self:GetContainerID() )
if ( container ) then
for i = 1, container.Size do
local panel = self.Items[ i ]
if ( panel ) then
panel:SetSize( unpack( itemstore.config.SlotSize ) )
end
end
end
self.BaseClass.PerformLayout( self )
end
vgui.Register( "ItemStoreContainer", PANEL, "DIconLayout" )
Any solutions anybody can think of? I can't think of anything because, to me, it should be working fine.
The error is pretty clear. In line 43 you have a for statement that uses container.Size as its limit, which in your case is not a number.
Solution:
Use a number as the for limit. If you have to use container.Size and it comes from "outside", find out why it is not a number and what you can do about it. If you cannot make sure it's a number, then you cannot use it as your for limit.
So put your for loop inside an if type(container.Size) == "number" then statement or similar.
I have a query which looks like this:
#inventory = Pack.find_by_sql("SELECT Packs.id, "+
" (SELECT COUNT(*) FROM Stocks WHERE (Stocks.pack_id = Packs.id AND Stocks.status = 'online' AND Stocks.user_id = #{current_user.id})) AS online,"+
" (SELECT COUNT(*) FROM Stocks WHERE (Stocks.pack_id = Packs.id AND Stocks.status = 'offline' AND Stocks.user_id = #{current_user.id})) AS offline,"+
" (SELECT COUNT(*) FROM Stocks WHERE (Stocks.pack_id = Packs.id AND Stocks.status = 'depositing' AND Stocks.user_id = #{current_user.id})) AS depositing,"+
" (SELECT COUNT(*) FROM Stocks WHERE (Stocks.pack_id = Packs.id AND Stocks.status = 'withdrawing' AND Stocks.user_id = #{current_user.id})) AS withdrawing,"+
" (SELECT COUNT(*) FROM Stocks WHERE (Stocks.pack_id = Packs.id AND Stocks.status = 'selling' AND Stocks.user_id = #{current_user.id})) AS selling,"+
" (SELECT COUNT(*) FROM Transactions WHERE (Transactions.pack_id = Packs.id AND Transactions.status = 'buying' AND Transactions.buyer_id = #{current_user.id})) AS buying"+
" FROM Packs WHERE disabled = false")
I am thinking there's a way to make a new sub-query so that instead of
SELECT FROM Stocks
the query selects from a stored table
SELECT FROM (Stocks WHERE (Stocks.pack_id = Packs.id AND Stocks.user_id = #{current_user.id}))
which would only be queried once. Then the WHERE Stocks.status = ? stuff would be applied to that stored table.
Any help guys?
The best query depends on data distribution and other details.
This is very efficient as long as most pack_id from the subqueries are actually used in the join to packs (most packs are NOT disabled):
SELECT p.id
, s.online, s.offline, s.depositing, s.withdrawing, s.selling, t.buying
FROM packs p
LEFT JOIN (
SELECT pack_id
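-- count(condition OR NULL) counts only rows where the condition is TRUE: FALSE OR NULL yields NULL, which count() ignores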
, count(status = 'online' OR NULL) AS online
, count(status = 'offline' OR NULL) AS offline
, count(status = 'depositing' OR NULL) AS depositing
, count(status = 'withdrawing' OR NULL) AS withdrawing
, count(status = 'selling' OR NULL) AS selling
FROM stocks
WHERE user_id = #{current_user.id}
AND status = ANY('{online,offline,depositing,withdrawing,selling}'::text[])
GROUP BY 1
) s ON s.pack_id = p.id
LEFT JOIN (
SELECT pack_id, count(*) AS buying
FROM transactions
WHERE status = 'buying'
AND buyer_id = #{current_user.id}
GROUP BY 1
) t ON t.pack_id = p.id
WHERE NOT p.disabled;
In pg 9.4 you can use the aggregate FILTER clause:
SELECT pack_id
, count(*) FILTER (WHERE status = 'online') AS online
, count(*) FILTER (WHERE status = 'offline') AS offline
, count(*) FILTER (WHERE status = 'depositing') AS depositing
, count(*) FILTER (WHERE status = 'withdrawing') AS withdrawing
, count(*) FILTER (WHERE status = 'selling') AS selling
FROM stocks
WHERE ...
Details:
How can I simplify this game statistics query?
Use crosstab() for the pivot table to make that faster, yet:
SELECT p.id
, s.online, s.offline, s.depositing, s.withdrawing, s.selling, t.buying
FROM packs p
LEFT JOIN crosstab(
$$
SELECT pack_id, status, count(*)::int AS ct
FROM stocks
WHERE user_id = $$ || #{current_user.id} || $$
AND status = ANY('{online,offline,depositing,withdrawing,selling}'::text[])
GROUP BY 1, 2
ORDER BY 1, 2
$$
,$$SELECT unnest('{online,offline,depositing,withdrawing,selling}'::text[])$$
) s (pack_id int
, online int
, offline int
, depositing int
, withdrawing int
, selling int
) ON s.pack_id = p.id
LEFT JOIN (
SELECT pack_id, count(*) AS buying
FROM transactions
WHERE status = 'buying'
AND buyer_id = #{current_user.id}
GROUP BY 1
) t ON t.pack_id = p.id
WHERE NOT p.disabled;
Details here:
PostgreSQL Crosstab Query
If most packs are disabled, LATERAL joins will be faster (requires pg 9.3 or later):
SELECT p.id
, s.online, s.offline, s.depositing, s.withdrawing, s.selling, t.buying
FROM packs p
LEFT JOIN LATERAL (
SELECT pack_id
, count(status = 'online' OR NULL) AS online
, count(status = 'offline' OR NULL) AS offline
, count(status = 'depositing' OR NULL) AS depositing
, count(status = 'withdrawing' OR NULL) AS withdrawing
, count(status = 'selling' OR NULL) AS selling
FROM stocks
WHERE user_id = #{current_user.id}
AND status = ANY('{online,offline,depositing,withdrawing,selling}'::text[])
AND pack_id = p.id
GROUP BY 1
) s ON TRUE
LEFT JOIN LATERAL (
SELECT pack_id, count(*) AS buying
FROM transactions
WHERE status = 'buying'
AND buyer_id = #{current_user.id}
AND pack_id = p.id
GROUP BY 1
) t ON TRUE
WHERE NOT p.disabled;
Why LATERAL? And are there alternatives in pg 9.1?
Record returned from function has columns concatenated
If what you're after is a count of the various types, something like the following would be much less code and easier to read/maintain, IMO...
You could split them up into the different tables, so, for stocks, something like this:
#inventory = Pack.find_by_sql(["SELECT status, count(*)
FROM stocks
WHERE user_id = ?
GROUP BY status
ORDER BY status", current_user.id])
Note the importance of using ? to prevent SQL injection. Also, Ruby supports multiline strings, so there's no need to quote and concatenate every line.
You can do something similar for the other tables.
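For example, for the transactions table the underlying query would be roughly this (a sketch reusing buyer_id from your original query; bind it with ? through find_by_sql in the same way):
SELECT status, count(*)
FROM transactions
WHERE buyer_id = ?
GROUP BY status
ORDER BY status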
I have built a Grails application and am using HQL in my controller to pass parameters to my "index.gsp" using g:select tags. There is a very simple issue I am facing: when the values come to the front end (browser/client side), the numbers are rounded off.
I was not facing this before while using SQL in my controller, but now that my index.gsp and controller communicate through "params" and g:select, HQL had to be used, and it rounds off all numbers (basic metrics, calculated metrics, etc.).
Query example (controller):
In this query, I am taking revenue and its YoY and WoW from my backend table, and even this is rounded off (all values shown in the visualization).
def com = Com.executeQuery("
SELECT p.date_hour
,p.total_revenue
,CASE
WHEN total_revenue_ly IN (
0
,NULL
)
THEN 0
ELSE ((total_revenue / total_revenue_ly - 1) * 100)
END AS yoy
,CASE
WHEN total_revenue_lw IN (
0
,NULL
)
THEN 0
ELSE ((total_revenue / total_revenue_lw - 1) * 100)
END AS wow
FROM Com p
WHERE p.department = ?
AND p.device = ?
AND p.browser = ?
AND p.platform = ?
AND p.mv = ?
AND p.time_period = ?
ORDER BY col_0_0_ ASC",
[params.department, params.device, params.browser,
params.platform, params.mv, params.time_period])
render com as JSON
I also have to write queries for "conversion rate", etc. (calculated metrics):
def com = Tablev1.executeQuery("
SELECT p.date_hour
,CASE
WHEN visits IN (0, NULL)
THEN 0
ELSE ((p.orders / p.visits) * 100)
END AS metric
,CASE
WHEN visits IN (0, NULL)
THEN 0
WHEN orders_ly IN (0, NULL)
THEN 0
WHEN visits_ly IN (0, NULL)
THEN 0
ELSE ((((orders / visits) / (orders_ly / visits_ly)) - 1) * 100)
END AS yoy
,CASE
WHEN visits IN (0, NULL)
THEN 0
WHEN orders_lw IN (0, NULL)
THEN 0
WHEN visits_lw IN (0, NULL)
THEN 0
ELSE ((((orders / visits) / (orders_lw / visits_lw)) - 1) * 100)
END AS wow
FROM Tablev1 p
WHERE p.platform = ?
AND p.mv = ?
AND p.time_period = ?
ORDER BY col_0_0_ ASC",
[params.platform, params.mv, params.time_period])
render com as JSON
Even these values are rounded off. I am printing the values in the browser console, and they are already rounded off in the array itself. My visualization is a graph using highcharts.js, but I don't think the issue is with highcharts.js, as the array fed to Highcharts is already rounded.
The data type of revenue in the first query is also float, but it is still rounded off.
The problem lies in HQL or in the index/controller communication.
In a different application, using CASE WHEN rounds off the numbers while not using CASE WHEN displays the decimals; I don't understand this. Can someone please explain it?
How do I resolve this issue?
Any approaches/suggestions are most welcome.
UPDATE:
Ignore the entire CASE WHEN; even a simple query like this passes rounded values:
def com = Com.executeQuery("
SELECT p.date_hour
,p.total_revenue
FROM Com p
WHERE p.department = ?
AND p.device = ?
AND p.browser = ?
AND p.platform = ?
AND p.mv = ?
AND p.time_period = ?
ORDER BY col_0_0_ ASC",
[params.department, params.device, params.browser,
params.platform, params.mv, params.time_period])
render com as JSON
Try using a float value when multiplying:
... ELSE ((total_revenue / total_revenue_ly - 1) * 100.0)
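Applied to the YoY branch of your first query, that would look roughly like this (a sketch; it also swaps IN (0, NULL) for an explicit IS NULL check, since IN never matches NULL):
CASE
    WHEN total_revenue_ly IS NULL OR total_revenue_ly = 0
        THEN 0
    ELSE ((total_revenue / total_revenue_ly - 1) * 100.0)
END AS yoy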
I have made this query to select posts from a WordPress blog, filtering by category, tag and custom fields.
SELECT wp_posts.*
FROM wp_posts
WHERE wp_posts.post_type = 'post' AND wp_posts.post_status = 'publish'
AND ( SELECT COUNT(*)
FROM wp_term_relationships
LEFT JOIN wp_term_taxonomy ON ( wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id )
LEFT JOIN wp_terms ON ( wp_term_taxonomy.term_id = wp_terms.term_id )
WHERE wp_posts.ID = wp_term_relationships.object_id
AND ( wp_terms.name = 'collaborazioni' && wp_term_taxonomy.taxonomy = 'category' )
||
( wp_terms.name = 'jammin' && wp_term_taxonomy.taxonomy = 'post_tag' )
) >= 1
AND ( SELECT COUNT(*) FROM wp_postmeta
WHERE wp_postmeta.post_id = wp_posts.ID
AND wp_postmeta.meta_key = 'Product-code'
AND wp_postmeta.meta_value = 'xxxxxx'
) >= 1
But I think that it is a little heavy. Do you have a better solution?
Thanks, Pietro.
What exactly would you like to make shorter? It already looks fairly well shortened.
P.S. You might want to use {$wpdb->prefix} instead of wp_ in your queries.