very slow insert into SQL temp table in stored procedure - stored-procedures

I found a terrible bottleneck in this one procedure. No idea why this little block of code is running so slowly here. If I comment this one block out, the entire thing takes around 7 seconds to do its job. This block adds over a minute and a half.
Here's the definition of #TempFC:
CREATE TABLE #TempFC (
NoticeID int,
AttyID int,
AFN9 varchar(9),
FirstPubDate smalldatetime,
MortgagorName varchar(255),
PropAddress varchar(255)
)
Here's the definition of #NoticeIDs:
CREATE TABLE #NoticeIDs (
NoticeID INT,
CircuitCourtPubDateID INT,
CircuitCourtAdjournmentPublicationDate SMALLDATETIME
)
At the point where this runs, #TempFC is empty and there are only 2 rows in #NoticeIDs. A minute and 1/2 to insert 2 narrow rows into an empty table.
INSERT INTO #TempFC (
NoticeID,
AttyID,
AFN9,
FirstPubDate,
MortgagorName,
PropAddress
)
SELECT
tN.NoticeID,
tN.AttyID,
tN.AttyFileNum9Chars as AFN9,
tN.FirstPubDate,
tN.MortgagorName,
tN.PropAddress
FROM
dbo.tblNotices tN
INNER JOIN #NoticeIDs ON #NoticeIDs.NoticeID = tN.NoticeID
INNER JOIN dbo.tblAttorneys tA ON tN.AttyID = tA.AttyID
INNER JOIN dbo.tblParentAttorneys tPA on tA.ParentAttorneyID = tPA.ParentAttorneyID
WHERE
tPA.PubAffTiming = 2
If I comment out the "INSERT" line and just run the select (with a RETURN after it), it takes a few seconds to run all the code above and then the select. If I comment out the SELECT and add a "VALUES" line to the INSERT statement, that runs fast too.
I also put all of the above into a separate query window and ran it as-is; it ran very fast, sub-second. In addition, I put it into a small stored procedure all by itself and it ran lightning fast as well.
I'm lost as to why this would slow down so drastically here. Any ideas?

Try to replace
CREATE TABLE #TempFC & INSERT INTO #TempFC
with SELECT ... INTO #TempFC,
and also try to replace the literal '2' with a local variable (e.g. @x).
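A minimal sketch of that rewrite, using the same tables and columns as above (assuming SQL Server 2008 or later for the inline DECLARE default; @PubAffTiming is a hypothetical local variable standing in for the literal 2):
DECLARE @PubAffTiming int = 2;

SELECT
    tN.NoticeID,
    tN.AttyID,
    tN.AttyFileNum9Chars AS AFN9,
    tN.FirstPubDate,
    tN.MortgagorName,
    tN.PropAddress
INTO #TempFC
FROM dbo.tblNotices tN
INNER JOIN #NoticeIDs ON #NoticeIDs.NoticeID = tN.NoticeID
INNER JOIN dbo.tblAttorneys tA ON tN.AttyID = tA.AttyID
INNER JOIN dbo.tblParentAttorneys tPA ON tA.ParentAttorneyID = tPA.ParentAttorneyID
WHERE tPA.PubAffTiming = @PubAffTiming;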

Related

How to pass a table name as a parameter in BigQuery procedure?

I am trying to build bigquery stored procedure where I need to pass the table name as a parameter. My code is:
CREATE OR REPLACE PROCEDURE `MKT_DS.PXV2DWY_CREATE_PROPERTY_FEATURES` (table_name STRING)
BEGIN
----step 1
CREATE OR REPLACE TABLE `MKT_DS.PXV2DWY_CREATE_PROPERTY_FEATURES_01` AS
SELECT DISTINCT XX.HH_ID, A.ECR_PRTY_ID, XX.ANCHOR_DT
FROM table_name XX
LEFT JOIN
(
SELECT DISTINCT HH_ID, ECR_PRTY_ID
FROM `analytics-mkt-cleanroom.Master.EDW_ECR_ECR_MAPPING`
WHERE HH_ID NOT LIKE 'U%'
AND ECR_PRTY_ID IS NOT NULL
)A
ON XX.HH_ID = A.HH_ID----one (HH) to many (ecr)
;
END;
CALL MKT_DS.PXV2DWY_CREATE_PROPERTY_FEATURES(`analytics-mkt-cleanroom.MKT_DS.Home_Services_Multi_Class_Aesthetic_Baseline_Final_Training_Sample`);
I followed a couple of similar questions here and here, tried writing an EXECUTE IMMEDIATE version of the above but not able to work out the right syntax.
I think the issue is that the SELECT statement in my code is selecting multiple columns XX.HH_ID, A.ECR_PRTY_ID, XX.ANCHOR_DT and the EXECUTE IMMEDIATE setup is meant to work only for one column. But I'm not sure. Please advise. Thank you.
I am basically trying to write stored procedures for data pipeline building.
Hope the below is helpful.
Pass the parameter as a string:
CALL MKT_DS.PXV2DWY_CREATE_PROPERTY_FEATURES(`analytics-mkt-cleanroom.MKT_DS.Home_Services_Multi_Class_Aesthetic_Baseline_Final_Training_Sample`);
-->
CALL MKT_DS.PXV2DWY_CREATE_PROPERTY_FEATURES('analytics-mkt-cleanroom.MKT_DS.Home_Services_Multi_Class_Aesthetic_Baseline_Final_Training_Sample');
Use EXECUTE IMMEDIATE, since a table name can't be parameterized as a variable in a query:
----step 1
EXECUTE IMMEDIATE FORMAT("""
CREATE OR REPLACE TABLE `MKT_DS.PXV2DWY_CREATE_PROPERTY_FEATURES_01` AS
SELECT DISTINCT XX.HH_ID, A.ECR_PRTY_ID, XX.ANCHOR_DT
FROM `%s` XX
LEFT JOIN
(
SELECT DISTINCT HH_ID, ECR_PRTY_ID
FROM `analytics-mkt-cleanroom.Master.EDW_ECR_ECR_MAPPING`
WHERE HH_ID NOT LIKE 'U%%'
AND ECR_PRTY_ID IS NOT NULL
)A
ON XX.HH_ID = A.HH_ID----one (HH) to many (ecr)
;
""", table_name);
Escape % in a format string with an additional %:
LIKE 'U%'
-->
LIKE 'U%%'
see PARSE_DATE not working in FORMAT() in BigQuery

esper query is half executed - only part of it is actually running

We have a strange issue where an Esper query is only partially executed...
select cseShutDownAlarm.INSTANCEID as INSTANCEID, swtDownAlarm.source
from Alarm(severity.getValue()=5, eventType.getValue() = 'SWT_SWITCH_DOWN').win:time(60 sec) as swtDownAlarm,
sql:stormdb['select id as INSTANCEID, source as SOURCE
from Alarm where EVENTTYPE_VALUE =\'cseShutDownNotify\' and source = ${swtDownAlarm.source} and severity !=5'] as cseShutDownAlarm,
sql:stormdb['insert into wcsdba.dyinggasp (parameter, value) VALUES (\'cseShutDownAlarm.SOURCE\', ${cseShutDownAlarm.SOURCE}) '],
sql:stormdb['insert into wcsdba.dyinggasp (parameter, value) VALUES (\'swtDownAlarm.source\', ${swtDownAlarm.source}) '] where swtDownAlarm.source = cseShutDownAlarm.SOURCE
and in the DB, we see that:
SQL> /
PARAMETER VALUE
cseShutDownAlarm.SOURCE 172.16.148.48
cseShutDownAlarm.SOURCE 172.16.148.48
SQL>
but the second source (swtDownAlarm.source) is not printed...
If I switch the order, then only the other one is inserted.
Any reason why it does not execute both inserts? Also, the rest of the condition is not checked: the source on both is identical, but the condition is not fulfilled.
Thanks,
fcbman
Joins are meant to be the place for joining an event stream against an SQL database for the purpose of querying. The "insert into" isn't a query and would never return rows, and if an inner join doesn't return rows it ends early (an outer join would be the right choice in that case).

Return 2 resultset from cursor based on one query (nested cursor)

I'm trying to obtain 2 different resultset from stored procedure, based on a single query. What I'm trying to do is that:
1.) return query result into OUT cursor;
2.) from this cursor results, get all longest values in each column and return that as second OUT
resultset.
I'm trying to avoid doing the same thing twice with this - getting the data and after that getting the longest column values of that same data. I'm not sure if this is even possible, but if it is, can somebody show me how?
This is an example of what I want to do (just for illustration):
CREATE OR REPLACE PROCEDURE MySchema.Test(RESULT OUT SYS_REFCURSOR,MAX_RESULT OUT SYS_REFCURSOR)
AS
BEGIN
OPEN RESULT FOR SELECT Name,Surname FROM MyTable;
OPEN MAX_RESULT FOR SELECT Max(length(Name)),Max(length(Surname)) FROM RESULT; --error here
END Test;
This example fails to compile with "ORA-00942: table or view does not exist".
I know it's a silly example, but I've been investigating and testing all sorts of things (implicit cursors, fetching cursors, nested cursors, etc.) and found nothing that would help me, especially when working with stored procedures returning multiple resultsets.
My overall goal with this is to shorten data export time for Excel. Currently I have to run same query twice - once for calculating data size to autofit Excel columns, and then for writing data into Excel.
I believe that manipulating first resultset in order to get second one would be much faster - with less DB cycles made.
I'm using Oracle 11g. Any help is much appreciated.
Each row of data from a cursor can be read exactly once; once the next row (or set of rows) is read from the cursor, the previous row (or set of rows) cannot be returned to, and the cursor cannot be re-used. So what you are asking is impossible: if you read the cursor to find the maximum values (ignoring that you can't use a cursor as the source in a SELECT statement, though you could read it in a PL/SQL loop instead), then the cursor's rows would have been "used up" and the cursor closed, so it could not be read from when it is returned from the procedure.
You would need to use two separate queries:
CREATE PROCEDURE MySchema.Test(
RESULT OUT SYS_REFCURSOR,
MAX_RESULT OUT SYS_REFCURSOR
)
AS
BEGIN
OPEN RESULT FOR
SELECT Name,
Surname
FROM MyTable;
OPEN MAX_RESULT FOR
SELECT MAX(LENGTH(Name)) AS max_name_length,
MAX(LENGTH(Surname)) AS max_surname_length
FROM MyTable;
END Test;
/
Just for theoretical purposes, it is possible to read from the table only once if you bulk collect the data into a collection and then select from a table-collection expression. However, it is going to be more complicated to code and maintain, it requires that the rows from the table are stored in memory (which your DBA might not appreciate if the table is large), and it may not be more performant than just querying the table twice, as you'll end up with three SELECT statements instead of two.
Something like:
CREATE TYPE test_obj IS OBJECT(
name VARCHAR2(50),
surname VARCHAR2(50)
);
CREATE TYPE test_obj_table IS TABLE OF test_obj;
CREATE PROCEDURE MySchema.Test(
RESULT OUT SYS_REFCURSOR,
MAX_RESULT OUT SYS_REFCURSOR
)
AS
t_names test_obj_table;
BEGIN
SELECT Name,
Surname
BULK COLLECT INTO t_names
FROM MyTable;
OPEN RESULT FOR
SELECT * FROM TABLE( t_names );
OPEN MAX_RESULT FOR
SELECT MAX(LENGTH(Name)) AS max_name_length,
MAX(LENGTH(Surname)) AS max_surname_length
FROM TABLE( t_names );
END Test;
/
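For completeness, a minimal sketch of calling either version of the procedure and inspecting both cursors from SQL*Plus (assuming the procedure above compiles; the bind variable names are arbitrary):
VARIABLE result_cur REFCURSOR
VARIABLE max_cur REFCURSOR
EXECUTE MySchema.Test( :result_cur, :max_cur );
PRINT result_cur
PRINT max_cur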

select for update in stored procedure (concurrently increment a field)

I want to retrieve the value of a field and increment it safely in Informix 12.1 when multiple users are connected.
What I want in C terms is lastnumber = counter++; in a concurrent environment.
The documentation mentions one way of doing this which is to make everyone connect with a wait parameter, lock the row, read the data, increment it and release the lock.
So this is what I tried:
begin work;
select
lastnum
from tbllastnums
where id = 1
for update;
And I can see that the row is locked until I commit or end my session.
However when I put this in a stored procedure:
create procedure "informix".select_for_update_test();
define vLastnum decimal(15);
begin work;
select
lastnum
into vLastnum
from tbllastnums
where id = 1
for update;
commit;
end procedure;
The database gives me a syntax error (tried with different editors). So why is it a syntax error to write a FOR UPDATE clause within a stored procedure? Is there an alternative to this?
Edit
Here's what I ended up with:
DROP TABLE if exists tstcounter;
^!^
CREATE TABLE tstcounter
(
id INTEGER NOT NULL,
counter INTEGER DEFAULT 0 NOT NULL
)
EXTENT SIZE 16
NEXT SIZE 16
LOCK MODE ROW;
^!^
ALTER TABLE tstcounter
ADD CONSTRAINT PRIMARY KEY (id)
CONSTRAINT tstcounter00;
^!^
insert into tstcounter values(1, 0);
^!^
select * from tstcounter;
^!^
drop function if exists tstgetlastnumber;
^!^
create function tstgetlastnumber(pId integer)
returning integer as lastCounter
define vCounter integer;
foreach curse for
select counter into vCounter from tstcounter where id = pId
update tstcounter set counter = vCounter + 1 where current of curse;
return vCounter with resume;
end foreach;
end function;
^!^
SPL and cursors 'FOR UPDATE'
If you manage to find the right bit of the manual — Updating or Deleting Rows Identified by Cursor Name under the FOREACH statement in the SPL (Stored Procedure Language) section of the Informix Guide to SQL: Syntax manual — then you'll find the magic information:
Specify a cursor name in the FOREACH statement if you intend to use the WHERE CURRENT OF cursor clause in UPDATE or DELETE statements that operate on the current row of cursor within the FOREACH loop. Although you cannot include the FOR UPDATE keywords in the SELECT ... INTO segment of the FOREACH statement, the cursor behaves like a FOR UPDATE cursor.
So, you'll need to create a FOREACH loop with a cursor name and take it from there.
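In outline, the pattern looks like this (a minimal sketch reusing the table and variable names from the question; the OP's edit above shows a complete working function):
FOREACH curs FOR
    SELECT lastnum INTO vLastnum FROM tbllastnums WHERE id = 1
    UPDATE tbllastnums SET lastnum = vLastnum + 1 WHERE CURRENT OF curs;
END FOREACH;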
Access to the manuals
Incidentally, if you go to the IBM Informix Knowledge Center and the table of contents is hidden, press the 'show table of contents' icon to see the useful information for navigating to the manuals; if the 'hide table of contents' icon is showing instead, you should already be able to see the contents down the left side. It took me a while to find out this trick. And I've no idea why the contents were hidden by default for me, but I think that was a UX design mistake if other people also suffer from it.

MySQL stored procedure causing problems?

EDIT:
I've narrowed my mysql wait timeout down to this line:
IF @resultsFound > 0 THEN
INSERT INTO product_search_query (QueryText, CategoryId) VALUES (keywords, topLevelCategoryId);
END IF;
Any idea why this would cause a problem? I can't work it out!
I've written a stored proc to search for products in certain categories, due to certain constraints I came across, I was unable to do what I wanted (limiting, but whilst still returning the total number of rows found, with sorting, etc..)
It's meant to split up a string of category Ids, such as 1,2,3, into a temporary table, then build the full-text search query based on sorting options and limits, execute the query string and then select out the total number of results.
Now, I know I'm no MySQL guru, very far from it. I've got it working, but I keep getting timeouts with product searches etc., so I'm thinking this may be causing some kind of problem?
Does anyone have any ideas how I can tidy this up, or even do it in a much better way that I probably don't know about?
Thanks.
DELIMITER $$
DROP PROCEDURE IF EXISTS `product_search` $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `product_search`(keywords text, categories text, topLevelCategoryId int, sortOrder int, startOffset int, itemsToReturn int)
BEGIN
declare foundPos tinyint unsigned;
declare tmpTxt text;
declare delimLen tinyint unsigned;
declare element text;
declare resultingNum int unsigned;
drop temporary table if exists categoryIds;
create temporary table categoryIds
(
`CategoryId` int
) engine = memory;
set tmpTxt = categories;
set foundPos = instr(tmpTxt, ',');
while foundPos <> 0 do
set element = substring(tmpTxt, 1, foundPos-1);
set tmpTxt = substring(tmpTxt, foundPos+1);
set resultingNum = cast(trim(element) as unsigned);
insert into categoryIds (`CategoryId`) values (resultingNum);
set foundPos = instr(tmpTxt,',');
end while;
if tmpTxt <> '' then
insert into categoryIds (`CategoryId`) values (tmpTxt);
end if;
CASE
WHEN sortOrder = 0 THEN
SET @sortString = "ProductResult_Relevance DESC";
WHEN sortOrder = 1 THEN
SET @sortString = "ProductResult_Price ASC";
WHEN sortOrder = 2 THEN
SET @sortString = "ProductResult_Price DESC";
WHEN sortOrder = 3 THEN
SET @sortString = "ProductResult_StockStatus ASC";
END CASE;
SET @theSelect = CONCAT(CONCAT("
SELECT SQL_CALC_FOUND_ROWS
supplier.SupplierId as Supplier_SupplierId,
supplier.Name as Supplier_Name,
supplier.ImageName as Supplier_ImageName,
product_result.ProductId as ProductResult_ProductId,
product_result.SupplierId as ProductResult_SupplierId,
product_result.Name as ProductResult_Name,
product_result.Description as ProductResult_Description,
product_result.ThumbnailUrl as ProductResult_ThumbnailUrl,
product_result.Price as ProductResult_Price,
product_result.DeliveryPrice as ProductResult_DeliveryPrice,
product_result.StockStatus as ProductResult_StockStatus,
product_result.TrackUrl as ProductResult_TrackUrl,
product_result.LastUpdated as ProductResult_LastUpdated,
MATCH(product_result.Name) AGAINST(?) AS ProductResult_Relevance
FROM
product_latest_state product_result
JOIN
supplier ON product_result.SupplierId = supplier.SupplierId
JOIN
category_product ON product_result.ProductId = category_product.ProductId
WHERE
MATCH(product_result.Name) AGAINST (?)
AND
category_product.CategoryId IN (select CategoryId from categoryIds)
ORDER BY
", #sortString), "
LIMIT ?, ?;
");
set @keywords = keywords;
set @startOffset = startOffset;
set @itemsToReturn = itemsToReturn;
PREPARE TheSelect FROM @theSelect;
EXECUTE TheSelect USING @keywords, @keywords, @startOffset, @itemsToReturn;
SET @resultsFound = FOUND_ROWS();
SELECT @resultsFound as 'TotalResults';
IF @resultsFound > 0 THEN
INSERT INTO product_search_query (QueryText, CategoryId) VALUES (keywords, topLevelCategoryId);
END IF;
END $$
DELIMITER ;
Any help is very very much appreciated!
There is little you can do with this query.
Try this:
Create a PRIMARY KEY on categoryIds (CategoryId) - see the sketch after this list.
Make sure that supplier (SupplierId) is a PRIMARY KEY.
Make sure that category_product (ProductId, CategoryId) (in this order) is a PRIMARY KEY, or you have an index with ProductId leading.
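A minimal sketch of the first suggestion, applied to the temporary table definition from the procedure (same column name as above; MEMORY tables back the key with a hash index by default):
create temporary table categoryIds
(
`CategoryId` int,
primary key (`CategoryId`)
) engine = memory;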
Update:
If it's the INSERT that causes the problem and product_search_query is a MyISAM table, the issue can be with MyISAM locking.
MyISAM locks the whole table if it decides to insert a row into a free block in the middle of the table, which can cause the timeouts.
Try using INSERT DELAYED instead:
IF @resultsFound > 0 THEN
INSERT DELAYED INTO product_search_query (QueryText, CategoryId) VALUES (keywords, topLevelCategoryId);
END IF;
This will put the records into the insertion queue and return immediately. The record will be added later asynchronously.
Note that you may lose information if the server dies after the command is issued but before the records are actually inserted.
Update:
Since your table is InnoDB, it may be an issue with locking: INSERT DELAYED is not supported on InnoDB.
Depending on the nature of the query, DML queries on an InnoDB table may place gap locks which will block the inserts.
For instance:
CREATE TABLE t_lock (id INT NOT NULL PRIMARY KEY, val INT NOT NULL) ENGINE=InnoDB;
INSERT
INTO t_lock
VALUES
(1, 1),
(2, 2);
This query performs ref scans and places the locks on individual records:
-- Session 1
START TRANSACTION;
UPDATE t_lock
SET val = 3
WHERE id IN (1, 2)
-- Session 2
START TRANSACTION;
INSERT
INTO t_lock
VALUES (3, 3)
-- Success
This query, while doing the same, performs a range scan and places a gap lock after key value 2, which will not let key value 3 be inserted:
-- Session 1
START TRANSACTION;
UPDATE t_lock
SET val = 3
WHERE id BETWEEN 1 AND 2
-- Session 2
START TRANSACTION;
INSERT
INTO t_lock
VALUES (3, 3)
-- Locks
Try wrapping your EXECUTE with the following:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
EXECUTE TheSelect USING @keywords, @keywords, @startOffset, @itemsToReturn;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
I do something similar in TSQL for all report stored procs and searches where repeatable reads aren't important, to reduce locking/blocking issues with other processes running on the database.
Turn on the slow query log; that will give you an idea of what is taking so long to execute and causing the timeout.
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
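A minimal sketch of turning it on at runtime (the variable names below are from MySQL 5.1+; adjust the threshold to taste, or set the equivalent options in my.cnf so they survive a restart):
SET GLOBAL slow_query_log = 'ON';                 -- enable the slow query log
SET GLOBAL long_query_time = 1;                   -- log statements slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON';  -- also log statements that use no index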
Pick the slowest query and optimise that, then run for a while and repeat.
There is some excellent information and tools here http://hackmysql.com/nontech
DC
UPDATE:
Either you have a network problem causing the timeout (if you are using a local MySQL instance then that is unlikely), or something is locking a table for far too long and causing a timeout. The process that is locking the table or tables for far too long will be listed in the slow query log as a slow query. You can also get the slow query log to record any queries that fail to use an index, resulting in an inefficient query.
If you can get the problem to occur while you are present, then you can also use a tool like phpMyAdmin or the command line to run "SHOW PROCESSLIST\G"; this will give you a list of what queries are running while the problem is occurring.
You think the problem is in your insert statement, therefore something is locking that table; therefore you need to find what is locking that table; therefore you need to find what is running so slowly that it's locking the table for far too long. The slow query log is one way to do that.
Other things to look at:
CPU - is it idle or running at full pelt?
IO - is IO causing holdups?
RAM - are you swapping all the time (which will cause excessive IO)?
Does the table product_search_query use an index?
What is the primary key?
If your index uses strings that are too long, you may build a huge index file that causes very slow inserts (the slow query log will also show that).
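If that turns out to be the case, one common mitigation is to index only a prefix of the string column rather than the whole value. A hedged sketch (the index name and prefix length here are assumptions, not something from the question):
-- index only the first 32 characters of QueryText instead of the full string
ALTER TABLE product_search_query ADD INDEX idx_querytext (QueryText(32));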
And yes, the problem may be elsewhere, but you must start somewhere, mustn't you?
DC
