SELECT
    cntdpts."PROJECT_SID",
    cntdpts."USER_SID",
    "CNTDPTS",
    "CNTQUERIES"
FROM (
    SELECT
        "PROJECT_SID",
        "USER_SID",
        COUNT("DATA_POINT_SID") AS "CNTDPTS"
    FROM
        CNTDPTS
    GROUP BY
        "PROJECT_SID",
        "USER_SID" WITH HINT(RESULT_CACHE) ) cntdpts
INNER JOIN (
    SELECT
        "PROJECT_SID",
        "USER_SID",
        COUNT("QUERY_SID") AS "CNTQUERIES"
    FROM
        CNTQUERIES
    GROUP BY
        "PROJECT_SID",
        "USER_SID" WITH HINT(RESULT_CACHE) ) cntqueries ON
    cntdpts."PROJECT_SID" = cntqueries."PROJECT_SID"
    AND cntdpts."USER_SID" = cntqueries."USER_SID" WITH HINT(RESULT_CACHE)
I am having trouble using cached table functions. If I run the two subqueries "cntdpts" and "cntqueries" individually, they return their results within <100 ms (because they use the cache of the table functions CNTDPTS and CNTQUERIES). However, if I run the full query joining the two subqueries, it takes >5 s and HANA does not seem to take advantage of the cached results from the subqueries. Is there any HINT I still need to add, maybe?
You will need to add WITH HINT(RESULT_CACHE_NON_TRANSACTIONAL) to your outermost query.
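For example, a sketch of just the outer part of your query with that hint applied (the two cached subqueries stay exactly as they are):
SELECT
    cntdpts."PROJECT_SID",
    cntdpts."USER_SID",
    "CNTDPTS",
    "CNTQUERIES"
FROM ( ... ) cntdpts
INNER JOIN ( ... ) cntqueries ON
    cntdpts."PROJECT_SID" = cntqueries."PROJECT_SID"
    AND cntdpts."USER_SID" = cntqueries."USER_SID" WITH HINT(RESULT_CACHE_NON_TRANSACTIONAL)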
See also https://help.sap.com/viewer/9de0171a6027400bb3b9bee385222eff/2.0.05/en-US/3ad0e93de0aa408e9238fa862e4780df.html
Related
We are currently running a number of hand-crafted and optimized OData queries on Exact Online using Python. This runs across several thousand divisions. However, I want to migrate them to Invantive SQL for ease of maintenance.
But some of the optimizations, like an explicit orderby in the OData query, are not forwarded to Exact Online by Invantive SQL; it just retrieves all data or the top x and then does an orderby.
Especially for maximum-value determination, that can be a lot slower.
Simple sample on small table:
https://start.exactonline.nl/api/v1/<<division>>/financial/Journals?$select=BankAccountIBAN,BankAccountDescription&$orderby=BankAccountIBAN desc&$top=5
Is there an alternative to optimize the actual OData queries executed by Invantive SQL?
You can either use the Data Replicator or send the hand-crafted OData query through a native platform request, such as:
insert into NativePlatformScalarRequests
( url
, orig_system_group
)
select replace('https://start.exactonline.nl/api/v1/{division}/financial/Journals?$select=BankAccountIBAN,BankAccountDescription&$orderby=BankAccountIBAN desc&$top=5', '{division}', code)
, 'MYSTUFF-' || code
from systempartitions#datadictionary
limit 100 /* First 100 divisions. */
create or replace table exact_online_download_journal_top5#inmemorystorage
as
select jte.*
from ( select npt.result
from NativePlatformScalarRequests npt
where npt.orig_system_group like 'MYSTUFF-%'
and npt.result is not null
) npt
join jsontable
( null
passing npt.result
columns BankAccountDescription varchar2 path 'd[0].BankAccountDescription'
, BankAccountIBAN varchar2 path 'd[0].BankAccountIBAN'
) jte
From here on you can use the in-memory table, such as:
select * from exact_online_download_journal_top5#inmemorystorage
But of course you can also 'insert into sqlserver'.
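For the latter, a rough sketch (assuming a SQL Server data container is configured in your database under the alias sqlserver, and that a target table journal_top5 with matching columns exists there; these names are illustrative only):
insert into journal_top5@sqlserver
( BankAccountDescription
, BankAccountIBAN
)
select BankAccountDescription
,      BankAccountIBAN
from   exact_online_download_journal_top5#inmemorystorage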
I have an update query in a stored procedure which is the main cause of a deadlock.
This stored procedure is used in an SSIS package in a foreach loop.
It looks like the stored procedure hits the SalesPreprocessing table and goes into a deadlock state. This occurs when we make calls to this SSIS package simultaneously. Here is my SQL query:
UPDATE SPP
SET SPP.Promotion_Id = T.PromotionID
FROM staging.SalesPreProcessing SPP WITH(INDEX(staging_CIDXSalesPreprocessing1))
INNER JOIN #WithConcatenatedPromotionID T
ON SPP.DocLineNo = T.BillItem
AND SPP.DocNum = T.BillNumber
AND SPP.Cust_Code = T.CustomerCode
AND SPP.ZCS_EAN_CODE = T.ProductCode
AND SPP.BILLING_REPORTING_DATE = T.PricingDate
WHERE SPP.InterfaceStatusTrackingID = @in_InterfaceStatusTrackingId AND SPP.SetupId = @in_SetupId
I have created clustered index for setupid and a non-clustered indexes for rest of the columns of the table.
Here is my non-clustered Index
CREATE NONCLUSTERED INDEX [staging_CIDXSalesPreprocessing] on salespreprocessing
(
[SetupId] ASC,
[InterfaceStatusTrackingID] ASC
) INCLUDE
([DocLineNo] ,
[DocNum] ,
[Cust_Code] ,
[ZCS_EAN_CODE] ,
[Billing_Reporting_Date]
)
I am still getting a deadlock.
Firstly, the non-clustered index seems pointless, as its first column is SetupId, which you say is the column for the clustered index. Thus, assuming that the SetupId values are sufficiently variegated, queries will always use the clustered index over and above the non-clustered one. What is the primary key?
In terms of avoiding the deadlock you need to:
1) Ensure that the locks are taken in the same order each time the SP is called within the foreach loop. What are you looping round? The results of another SP/query? If so, ensure that there is an ORDER BY in that.
2) Is the foreach loop within a transaction? If it is does it need to be? Could you release the locks after each call to the SP by calling it from a non-transactional environment?
3) Take as few locks as possible within the SP. I can't see what query is used to create the temporary table you join to, but that may be the issue. You need to use SQL Profiler to find out exactly which object the deadlock is occurring on, but using hints such as ROWLOCK may help; see the sketch below.
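For example, a minimal sketch of point 3 applied to your update (the same statement as in the question, with only a ROWLOCK hint added; whether it actually removes the deadlock depends on what the other session is locking):
UPDATE SPP
SET SPP.Promotion_Id = T.PromotionID
FROM staging.SalesPreProcessing SPP WITH (ROWLOCK, INDEX(staging_CIDXSalesPreprocessing1))
INNER JOIN #WithConcatenatedPromotionID T
    ON SPP.DocLineNo = T.BillItem
    AND SPP.DocNum = T.BillNumber
    AND SPP.Cust_Code = T.CustomerCode
    AND SPP.ZCS_EAN_CODE = T.ProductCode
    AND SPP.BILLING_REPORTING_DATE = T.PricingDate
WHERE SPP.InterfaceStatusTrackingID = @in_InterfaceStatusTrackingId
    AND SPP.SetupId = @in_SetupId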
I have a SQLite database that I'm trying to build a query against. The column I need to retrieve is iEDLID from the table below:
Right now all I have to go on is a known iEventID from the table below:
And the nClientLocationID from the table below.
So the requirement is: I need to get the current iEDLID to write, looked up from tblEventDateLocations by dEventDate and tblLocations.nClientLocationID, based on the tblLocations.iLocationID I already have and the event selected on this screen.
So I would need a query that does a "SELECT DISTINCT table EventDateLocations.iEDLID FROM tblEventDateLocations ...."
So basically, from another query I have the iEventID I need, but where dEventDate = (select date('now')) I need to retrieve the iEventDateID from tblEventDates.iEventDateID to use against tblEventDateLocations.
This is the point where I'm trying to wrap my head around the joins and the syntax for this query...
It seems like you want this:
select distinct edl.iEDLID
from
    tblEventDateLocations edl
    join tblEventDates ed on edl.iEventDateID = ed.iEventDateID
where
    ed.iEventID = ?
    and ed.dEventDate = date('now')
    and edl.nClientLocationID = ?
where the ? of course represent the known event ID and location ID parameters.
Since nClientLocationID appears on table tblEventDateLocations, you do not need to join table tblLocations unless you want to filter out results whose location ID does not appear in that table.
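If you did want that extra filter, a sketch of the variant (assuming tblLocations carries the iLocationID and nClientLocationID columns mentioned in the question) would be:
select distinct edl.iEDLID
from
    tblEventDateLocations edl
    join tblEventDates ed on edl.iEventDateID = ed.iEventDateID
    join tblLocations loc on edl.nClientLocationID = loc.nClientLocationID
where
    ed.iEventID = ?
    and ed.dEventDate = date('now')
    and loc.iLocationID = ?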
I'm stuck trying to create a query that pulls results from at least three different tables with many-to-many relationships.
I want to end up with a table that lists cases, the outcomes and the complaints.
Each case may have none, one or multiple outcomes, and the same relationship applies to the complaints. I want the case listed once, with subsequent columns listing all the outcomes and complaints related to that case.
I have tried GROUP_CONCAT to get the outcomes in one column instead of repeating the cases, but when I use UNION to combine the outcomes and complaints, one column header overwrites the other.
Any help appreciated and here's the link to the fiddle http://sqlfiddle.com/#!2/d111e/2/0
I suggest you START with this query structure:
SELECT
c.caseID, c.caseTitle, c.caseSynopsis /* if more columns ... add to group by also */
, group_concat(co.concern)
, group_concat(re.resultText)
FROM caseSummaries AS c
LEFT JOIN JNCT_CONCERNS_CASESUMMARY AS JCC ON c.caseID = JCC.caseSummary_FK
LEFT JOIN CONCERNS AS co ON JCC.concerns_FK = co.concernsID
LEFT JOIN JNCT_RESULT_CASESUMMARY AS JRC ON c.caseID = JRC.caseSummary_FK
LEFT JOIN RESULTS AS re ON JRC.result_FK = re.result_ID
GROUP BY
c.caseID, c.caseTitle, c.caseSynopsis /* add more ... here also */
;
Treat the table caseSummaries as the most important and then everything else "hangs off" that.
Please note that although MySQL will allow it, you should place EVERY non-aggregating column that you include in the select clause into the group by clause also.
also see: http://sqlfiddle.com/#!2/2d1a79/7
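One caveat to check against your data (not covered above): because the concerns and results joins multiply rows per case, the concatenated lists can repeat values. If that happens, GROUP_CONCAT accepts a DISTINCT modifier, e.g.:
, group_concat(DISTINCT co.concern)
, group_concat(DISTINCT re.resultText)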
I have two tables - tool_downloads and tool_configurations. I am trying to retrieve the most recent build date for each tool in my database. The layout of the DB is simple. One table called tool_downloads keeps track of when a tool is downloaded. Another table is called tool_configurations and stores the actual data about the tool. They are linked together by the tool_conf_id.
If I run the following query which omits dates, I get back 200 records.
SELECT DISTINCT a.tool_conf_id, b.tool_conf_id
FROM tool_downloads a
JOIN tool_configurations b
ON a.tool_conf_id = b.tool_conf_id
ORDER BY a.tool_conf_id
When I try to add in date information I get back hundreds of thousands of records! Here is the query that fails horribly.
SELECT DISTINCT a.tool_conf_id, max(a.configured_date) as config_date, b.configuration_name
FROM tool_downloads a
JOIN tool_configurations b
ON a.tool_conf_id = b.tool_conf_id
ORDER BY a.tool_conf_id
I know the problem has something to do with group-bys/aggregate data and joins. I can't really search Google since I don't know the name of the problem I'm encountering. Any help would be appreciated.
The solution is to group by the non-aggregated columns:
SELECT b.tool_conf_id, b.configuration_name, max(a.configured_date) as config_date
FROM tool_downloads a
JOIN tool_configurations b
ON a.tool_conf_id = b.tool_conf_id
GROUP BY b.tool_conf_id, b.configuration_name