Deadlock in update query

I have an update query in a stored procedure which is the main cause of a deadlock.
This stored procedure is used in an SSIS package in a foreach loop.
It looks like the stored procedure hits the SalesPreProcessing table and goes into a deadlock state. This occurs when we make calls to this SSIS package simultaneously. Here is my SQL query:
UPDATE SPP
SET SPP.Promotion_Id = T.PromotionID
FROM staging.SalesPreProcessing SPP WITH(INDEX(staging_CIDXSalesPreprocessing1))
INNER JOIN #WithConcatenatedPromotionID T
ON SPP.DocLineNo = T.BillItem
AND SPP.DocNum = T.BillNumber
AND SPP.Cust_Code = T.CustomerCode
AND SPP.ZCS_EAN_CODE = T.ProductCode
AND SPP.BILLING_REPORTING_DATE = T.PricingDate
WHERE SPP.InterfaceStatusTrackingID = @in_InterfaceStatusTrackingId AND SPP.setupid=@in_SetupId
I have created a clustered index on setupid and a non-clustered index on the rest of the columns of the table.
Here is my non-clustered index:
CREATE NONCLUSTERED INDEX [staging_CIDXSalesPreprocessing] on salespreprocessing
(
[SetupId] ASC,
[InterfaceStatusTrackingID] ASC
) INCLUDE
([DocLineNo] ,
[DocNum] ,
[Cust_Code] ,
[ZCS_EAN_CODE] ,
[Billing_Reporting_Date]
)
I am still getting deadlocks.

Firstly, the non-clustered index seems pointless, as its first column is setupId, which you say is the column for the clustered index. Thus, assuming that the setupId values are sufficiently varied, queries will always use the clustered index over and above the non-clustered one. What is the primary key?
In terms of avoiding the deadlock you need to:
1) Ensure that the locks are taken in the same order each time the SP is called within the foreach loop. I don't know what you're looping round - the results of another SP/query? If so, ensure that there is an ORDER BY in it.
2) Is the foreach loop within a transaction? If it is, does it need to be? Could you release the locks after each call to the SP by calling it from a non-transactional environment?
3) Take as few locks as possible within the SP. I can't see what query is used to create the temporary table you join to, but that may be the issue. You need to use SQL Profiler to find out exactly which object the deadlock is occurring on, but hints such as ROWLOCK may help; see the sketch below.
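As an illustration of point 3, here is the update from the question with a ROWLOCK hint in place of the index hint. This is only a sketch - whether it helps depends on which object the deadlock graph actually points at:
UPDATE SPP
SET SPP.Promotion_Id = T.PromotionID
FROM staging.SalesPreProcessing SPP WITH (ROWLOCK) -- ask for row-level locks rather than forcing an index
INNER JOIN #WithConcatenatedPromotionID T
ON SPP.DocLineNo = T.BillItem
AND SPP.DocNum = T.BillNumber
AND SPP.Cust_Code = T.CustomerCode
AND SPP.ZCS_EAN_CODE = T.ProductCode
AND SPP.BILLING_REPORTING_DATE = T.PricingDate
WHERE SPP.InterfaceStatusTrackingID = @in_InterfaceStatusTrackingId AND SPP.setupid = @in_SetupId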

How to use result cache while using joins in SAP HANA?

SELECT
cntdpts."PROJECT_SID",
cntdpts."USER_SID",
"CNTDPTS",
"CNTQUERIES"
FROM (
SELECT
"PROJECT_SID",
"USER_SID",
COUNT("DATA_POINT_SID") AS "CNTDPTS"
FROM
CNTDPTS
GROUP BY
"PROJECT_SID",
"USER_SID" WITH HINT(RESULT_CACHE) ) cntdpts
INNER JOIN (
SELECT
"PROJECT_SID",
"USER_SID",
COUNT("QUERY_SID") AS "CNTQUERIES"
FROM
CNTQUERIES
GROUP BY
"PROJECT_SID",
"USER_SID" WITH HINT(RESULT_CACHE) ) cntqueries ON
cntdpts."PROJECT_SID" = cntqueries."PROJECT_SID"
AND cntdpts."USER_SID" = cntqueries."USER_SID" WITH HINT(RESULT_CACHE)
I am having trouble using cached table functions. If I run the two subqueries "cntdpts" and "cntqueries" individually, they return their results within <100 ms (because they use the cache of the table functions CNTDPTS and CNTQUERIES). However, if I run the full query joining the two subqueries, it takes >5 s and HANA does not seem to take advantage of the cached results from the subqueries. Is there any HINT I still need to add?
You will need to add WITH HINT(RESULT_CACHE_NON_TRANSACTIONAL) to your outermost query.
See also https://help.sap.com/viewer/9de0171a6027400bb3b9bee385222eff/2.0.05/en-US/3ad0e93de0aa408e9238fa862e4780df.html
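Applied to the query from the question, the hint goes at the very end of the outermost SELECT, while the inner RESULT_CACHE hints stay as they are (a sketch):
SELECT
cntdpts."PROJECT_SID",
cntdpts."USER_SID",
"CNTDPTS",
"CNTQUERIES"
FROM (
SELECT
"PROJECT_SID",
"USER_SID",
COUNT("DATA_POINT_SID") AS "CNTDPTS"
FROM
CNTDPTS
GROUP BY
"PROJECT_SID",
"USER_SID" WITH HINT(RESULT_CACHE) ) cntdpts
INNER JOIN (
SELECT
"PROJECT_SID",
"USER_SID",
COUNT("QUERY_SID") AS "CNTQUERIES"
FROM
CNTQUERIES
GROUP BY
"PROJECT_SID",
"USER_SID" WITH HINT(RESULT_CACHE) ) cntqueries ON
cntdpts."PROJECT_SID" = cntqueries."PROJECT_SID"
AND cntdpts."USER_SID" = cntqueries."USER_SID" WITH HINT(RESULT_CACHE_NON_TRANSACTIONAL)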

TFDQuery failing to update?

I'm having a synchronisation issue... I have a source table (mtAllowanceCategory) which I want to sync to a copy (qryAllowanceCategory) of it. To make sure records in the copy are deleted if they are no longer present in the source, the copy has a "StillHere" boolean field, which is set to on when the record is added or updated, and otherwise stays off. Afterwards, all records with StillHere=false are deleted.
That's the idea, anyway... in practice, the flag field isn't turned on when posting updates. When I trace the code, the statement is executed; when I look in Access, it stays off. Hence the delete SQL afterwards clears the entire table.
I've been trying to figure this out for hours now; what am I missing??
mtAllowanceCategory:TFDMemTable (filled from an API call, this works fine)
qryAllowanceCategory:TFDQuery
conn:TFDConnection to a local Access database (also used for qryAllowanceCategory)
conn.ExecSQL('UPDATE AllowanceCategory SET StillHere=false;');
while not mtAllowanceCategory.Eof do
begin
  if qryAllowanceCategory.Locate('WLPid', mtAllowanceCategory.FieldByName('Id').AsString, [loCaseInsensitive]) then
  begin
    Updating := true;
    qryAllowanceCategory.Edit;
  end
  else
  begin
    Updating := false;
    qryAllowanceCategory.Insert;
  end;
  qryAllowanceCategory.FieldByName('createdBy').AsString := mtAllowanceCategory.FieldByName('createdBy').AsString;
  qryAllowanceCategory.FieldByName('createdOn').AsString := mtAllowanceCategory.FieldByName('createdOn').AsString;
  qryAllowanceCategory.FieldByName('description').AsString := mtAllowanceCategory.FieldByName('description').AsString;
  qryAllowanceCategory.FieldByName('WLPid').AsString := mtAllowanceCategory.FieldByName('id').AsString;
  qryAllowanceCategory.FieldByName('isDeleted').AsBoolean := mtAllowanceCategory.FieldByName('isDeleted').AsBoolean;
  qryAllowanceCategory.FieldByName('isInUse').AsBoolean := mtAllowanceCategory.FieldByName('isInUse').AsBoolean;
  qryAllowanceCategory.FieldByName('modifiedBy').AsString := mtAllowanceCategory.FieldByName('modifiedBy').AsString;
  qryAllowanceCategory.FieldByName('modifiedOn').AsString := mtAllowanceCategory.FieldByName('modifiedOn').AsString;
  qryAllowanceCategory.FieldByName('WLPname').AsString := mtAllowanceCategory.FieldByName('name').AsString;
  qryAllowanceCategory.FieldByName('number').AsInteger := mtAllowanceCategory.FieldByName('number').AsInteger;
  qryAllowanceCategory.FieldByName('percentage').AsFloat := mtAllowanceCategory.FieldByName('number').AsFloat;
  qryAllowanceCategory.FieldByName('remark').AsString := mtAllowanceCategory.FieldByName('remark').AsString;
  qryAllowanceCategory.FieldByName('LocalEdited').AsBoolean := false;
  qryAllowanceCategory.FieldByName('LocalInserted').AsBoolean := false;
  qryAllowanceCategory.FieldByName('LocalDeleted').AsBoolean := false;
  qryAllowanceCategory.FieldByName('StillHere').AsBoolean := true;
  qryAllowanceCategory.Post;
  mtAllowanceCategory.Next;
end;
conn.commit;
conn.ExecSQL('DELETE FROM AllowanceCategory WHERE StillHere=false;');
When I read your question, I was struck by two thoughts: one was that I couldn't immediately see the cause of your problem, and the other was that you could probably avoid the problem anyway if you used SQL rather than table traversals in code.
It seemed to me that you might be able to do most, if not all, of what you need in terms of synchronising the two tables using Access SQL, rather than traversing the qryAllowanceCategory table with a while not EOF loop.
(Btw, in the following I'm going to use 'mtAC' and 'qryAC' to reduce typing and typos.)
Using Access SQL
Initially, I did not have much luck, as Access rejected my attempts to refer to both tables in an Update statement against the qryAC one using a Join or Outer Join, but then I came across a reference that showed that Access does support an Inner Join syntax. These SQL statements execute successfully by calling ExecSQL on the FireDAC connection to the database:
update qryAC set qryAC.StillHere = True
where exists(select mtAC.* from mtAC inner join qryAC on mtAC.WLPid = qryAC.WLPid)
and
update qryAC inner join mtAC on mtAC.WLPid = qryAC.WLPid set qryAC.AValue = mtAC.AValue
The first of these obviously provides a way to update the StillHere field to set it to True, or to False with a trivial modification.
The second shows a way to update a set of fields in qryAC from the matching rows in mtAC, and this could, of course, be limited to a subset of rows with a suitable Where clause.
Access SQL also supports checking whether a row in one table exists in the other, as in
select * from qryAC q where exists (select * from mtac m where q.wlpid = m.wlpid)
and for deleting rows in one table which do not exist in the other
delete from qryAC q where not exists (select * from mtac m where q.wlpid = m.wlpid)
Using FireDAC's LocalSQL
I also mentioned LocalSQL in a comment. This supports a far broader range of SQL statements than native Access SQL and can operate on any TDataSet descendant, so if you find something that Access SQL syntax doesn't support, it is worth considering using LocalSQL instead. Its main downside is that it operates on the datasets using traversals, so it is not quite as "instant" as native SQL. It can be a bit tricky to set up, so here are the settings from the DFM which show how the components need connecting up. You would use it by feeding what you want to FDQuery1.
object AccessConnection: TFDConnection
Params.Strings = (
'Database=D:\Delphi\Code\FireDAC\LocalSQL\Allowance.accdb'
'DriverID=MSAcc')
Connected = True
LoginPrompt = False
end
object mtAC: TFDQuery
AfterOpen = mtACAfterOpen
Connection = AccessConnection
SQL.Strings = (
'select * from mtAC')
end
object qryAC: TFDQuery
Connection = AccessConnection
end
object LocalSqlConnection: TFDConnection
Params.Strings = (
'DriverID=SQLite')
Connected = True
LoginPrompt = False
end
object FDLocalSQL1: TFDLocalSQL
Connection = LocalSqlConnection
DataSets = <
item
DataSet = mtAC
end
item
DataSet = qryAC
end>
end
object FDGUIxWaitCursor1: TFDGUIxWaitCursor
Provider = 'Forms'
end
object FDPhysSQLiteDriverLink1: TFDPhysSQLiteDriverLink
end
object FDQuery1: TFDQuery
Connection = LocalSqlConnection
end
If anyone is interested: the problem was in not refreshing qryAllowanceCategory after the initial SQL that set StillHere to false. The in-memory version of the record in qryAllowanceCategory didn't get that update, so as far as the dataset was concerned the flag was still on; after the field assignments it appeared there were no changes (all the other fields were unchanged as well), so the Post was ignored. In the actual table the flag was off, though, so the final delete SQL removed the record.
The problem was solved by adding a Refresh after the first UPDATE SQL statement.

PostgreSQL and ActiveRecord subselect for race condition

I'm experiencing a race condition in ActiveRecord with PostgreSQL where I'm reading a value then incrementing it and inserting a new record:
num = Foo.where(bar_id: 42).maximum(:number)
Foo.create!({
bar_id: 42,
number: num + 1
})
At scale, multiple threads will simultaneously read then write the same value of number. Wrapping this in a transaction doesn't fix the race condition because the SELECT doesn't lock the table. I can't use an auto increment, because number is not unique; it's only unique for a given bar_id. I see 3 possible fixes:
Explicitly use a postgres lock (a row-level lock?)
Use a unique constraint and retry on fails (yuck!)
Override save to use a subselect, i.e.
INSERT INTO foo (bar_id, number) VALUES (42, (SELECT MAX(number) + 1 FROM foo WHERE bar_id = 42));
All these solutions seem like I'd be reimplementing large parts of ActiveRecord::Base#save! Is there an easier way?
UPDATE:
I thought I found the answer with Foo.lock(true).where(bar_id: 42).maximum(:number), but that uses SELECT FOR UPDATE, which isn't allowed on aggregate queries.
UPDATE 2:
I've just been informed by our DBA that even if we could do INSERT INTO foo (bar_id, number) VALUES (42, (SELECT MAX(number) + 1 FROM foo WHERE bar_id = 42));, it wouldn't fix anything, since the SELECT runs under a different lock than the INSERT.
Your options are:
Run in SERIALIZABLE isolation. Interdependent transactions will be aborted on commit as having a serialization failure. You'll get lots of error log spam, and you'll be doing lots of retries, but it'll work reliably.
Define a UNIQUE constraint and retry on failure, as you noted. Same issues as above.
If there is a parent object, you can SELECT ... FOR UPDATE the parent object before doing your max query. In this case you'd SELECT 1 FROM bar WHERE bar_id = $1 FOR UPDATE. You are using bar as a lock for all foos with that bar_id. You can then know that it's safe to proceed, so long as every query that's doing your counter increment does this reliably. This can work quite well.
This still does an aggregate query for each call, which (per the next option) is unnecessary, but at least it doesn't spam the error log like the above options.
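A minimal sketch of this parent-lock approach, using the table names from the question (the COALESCE is an assumption to handle the first foo for a bar):
BEGIN;
-- Serialise all writers for this bar_id; the row lock is held until commit.
SELECT 1 FROM bar WHERE bar_id = 42 FOR UPDATE;
INSERT INTO foo (bar_id, number)
SELECT 42, COALESCE(MAX(number), 0) + 1
FROM foo
WHERE bar_id = 42;
COMMIT;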
Use a counter table. This is what I'd do. Either in bar, or in a side table like bar_foo_counter, acquire the next counter value using
UPDATE bar_foo_counter SET counter = counter + 1
WHERE bar_id = $1 RETURNING counter
or the less efficient option if your framework can't handle RETURNING:
SELECT counter FROM bar_foo_counter
WHERE bar_id = $1 FOR UPDATE;
UPDATE bar_foo_counter SET counter = counter + 1 WHERE bar_id = $1;
Then, in the same transaction, use the generated counter value as the number. When you commit, the counter table row for that bar_id gets unlocked for the next query to use. If you roll back, the change is discarded.
I recommend the counter approach, using a dedicated side table for the counter instead of adding a column to bar. That's cleaner to model, and means you create less update bloat in bar, which can slow down queries to bar.
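A minimal sketch of that side-table approach, assuming one pre-seeded counter row per bar_id (names are illustrative):
CREATE TABLE bar_foo_counter (
  bar_id integer PRIMARY KEY REFERENCES bar (bar_id),
  counter integer NOT NULL DEFAULT 0
);

BEGIN;
-- The UPDATE locks the counter row until commit, so concurrent callers queue up here.
WITH next_num AS (
  UPDATE bar_foo_counter
  SET counter = counter + 1
  WHERE bar_id = 42
  RETURNING counter
)
INSERT INTO foo (bar_id, number)
SELECT 42, counter FROM next_num;
COMMIT;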

Deal with PostgreSQL concurrency using Rails find_or_create

For some reason this code can create duplicated games if different users run it at the same moment:
game = Game.find_or_create_by(
status: Game::STATUS[:waiting],
category_id: params[:category_id],
private: 0
) do |g|
is_new = true
g.user = current_user
end
I can't figure out clearly what the matter is, but it's probably about different Unicorn processes using different database connections, so transactions can run in parallel.
If so, I need the right way to avoid it. Maybe I should use Rails transactions or Postgres locks, but I really need an example of how to use them.
Thank you.
It can happen at high concurrency levels.
According to the Rails docs, these queries will run:
SELECT * FROM games WHERE status = 'waiting' AND ... LIMIT 1;
INSERT INTO games (status, ...) VALUES ('waiting', ...);
The second one only runs when the first hasn't returned a row.
It is possible, if two (or more) connections start the first query within a few microseconds of each other, that multiple processes will create multiple entries. To prevent that, you can use an ACCESS EXCLUSIVE lock on that table, or a custom advisory lock.
You can use a unique index too, to prevent multiple entries from being inserted into your database, but if it's used on its own, it will cause SQL exceptions in this situation.
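For example, a partial unique index could enforce at most one waiting public game per category (a sketch; it assumes that is really the invariant you want):
CREATE UNIQUE INDEX index_games_one_waiting_per_category
ON games (category_id)
WHERE status = 'waiting' AND private = 0;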
EDIT:
An ACCESS EXCLUSIVE lock can be acquired through the LOCK command.
An advisory lock can be acquired by using the pg_advisory_lock(id) function.
Both require you to run arbitrary SQL commands.
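For example, a sketch using the transaction-scoped variant pg_advisory_xact_lock (the key 42 is an arbitrary application-chosen number, e.g. derived from category_id):
BEGIN;
-- Only one session at a time can hold this lock; it is released at COMMIT/ROLLBACK.
SELECT pg_advisory_xact_lock(42);
-- ... run the find-or-create logic here ...
COMMIT;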
Another way would be to use custom queries that:
insert only if it does not exist (and return all fields)
select only if nothing was inserted
Something like:
INSERT INTO games (status, category_id, private)
SELECT 'waiting', 2, 0
WHERE NOT EXISTS (
SELECT 1
FROM games
WHERE status = 'waiting'
AND category_id = 2
AND private = 0
)
RETURNING *;
-- only select when the insert did not insert anything
SELECT *
FROM games
WHERE status = 'waiting'
AND category_id = 2
AND private = 0

MySQL stored procedure causing problems?

EDIT:
I've narrowed my MySQL wait timeout down to this line:
IF @resultsFound > 0 THEN
INSERT INTO product_search_query (QueryText, CategoryId) VALUES (keywords, topLevelCategoryId);
END IF;
Any idea why this would cause a problem? I can't work it out!
I've written a stored proc to search for products in certain categories. Due to certain constraints I came across, I was unable to do what I wanted in a single query (limiting, but whilst still returning the total number of rows found, with sorting, etc.).
It's meant to split up a string of category IDs, such as 1,2,3, into a temporary table, then build the full-text search query based on the sorting options and limits, execute the query string, and then select out the total number of results.
Now, I know I'm no MySQL guru, very far from it, and I've got it working, but I keep getting timeouts with product searches etc. So I'm thinking this may be causing some kind of problem?
Does anyone have any ideas how I can tidy this up, or even do it in a much better way that I probably don't know about?
Thanks.
DELIMITER $$
DROP PROCEDURE IF EXISTS `product_search` $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `product_search`(keywords text, categories text, topLevelCategoryId int, sortOrder int, startOffset int, itemsToReturn int)
BEGIN
declare foundPos tinyint unsigned;
declare tmpTxt text;
declare delimLen tinyint unsigned;
declare element text;
declare resultingNum int unsigned;
drop temporary table if exists categoryIds;
create temporary table categoryIds
(
`CategoryId` int
) engine = memory;
set tmpTxt = categories;
set foundPos = instr(tmpTxt, ',');
while foundPos <> 0 do
set element = substring(tmpTxt, 1, foundPos-1);
set tmpTxt = substring(tmpTxt, foundPos+1);
set resultingNum = cast(trim(element) as unsigned);
insert into categoryIds (`CategoryId`) values (resultingNum);
set foundPos = instr(tmpTxt,',');
end while;
if tmpTxt <> '' then
insert into categoryIds (`CategoryId`) values (tmpTxt);
end if;
CASE
WHEN sortOrder = 0 THEN
SET @sortString = "ProductResult_Relevance DESC";
WHEN sortOrder = 1 THEN
SET @sortString = "ProductResult_Price ASC";
WHEN sortOrder = 2 THEN
SET @sortString = "ProductResult_Price DESC";
WHEN sortOrder = 3 THEN
SET @sortString = "ProductResult_StockStatus ASC";
END CASE;
SET @theSelect = CONCAT(CONCAT("
SELECT SQL_CALC_FOUND_ROWS
supplier.SupplierId as Supplier_SupplierId,
supplier.Name as Supplier_Name,
supplier.ImageName as Supplier_ImageName,
product_result.ProductId as ProductResult_ProductId,
product_result.SupplierId as ProductResult_SupplierId,
product_result.Name as ProductResult_Name,
product_result.Description as ProductResult_Description,
product_result.ThumbnailUrl as ProductResult_ThumbnailUrl,
product_result.Price as ProductResult_Price,
product_result.DeliveryPrice as ProductResult_DeliveryPrice,
product_result.StockStatus as ProductResult_StockStatus,
product_result.TrackUrl as ProductResult_TrackUrl,
product_result.LastUpdated as ProductResult_LastUpdated,
MATCH(product_result.Name) AGAINST(?) AS ProductResult_Relevance
FROM
product_latest_state product_result
JOIN
supplier ON product_result.SupplierId = supplier.SupplierId
JOIN
category_product ON product_result.ProductId = category_product.ProductId
WHERE
MATCH(product_result.Name) AGAINST (?)
AND
category_product.CategoryId IN (select CategoryId from categoryIds)
ORDER BY
", #sortString), "
LIMIT ?, ?;
");
set @keywords = keywords;
set @startOffset = startOffset;
set @itemsToReturn = itemsToReturn;
PREPARE TheSelect FROM @theSelect;
EXECUTE TheSelect USING @keywords, @keywords, @startOffset, @itemsToReturn;
SET @resultsFound = FOUND_ROWS();
SELECT @resultsFound as 'TotalResults';
IF @resultsFound > 0 THEN
INSERT INTO product_search_query (QueryText, CategoryId) VALUES (keywords, topLevelCategoryId);
END IF;
END $$
DELIMITER ;
Any help is very very much appreciated!
There is little you can do with this query.
Try this:
Create a PRIMARY KEY on categoryIds (categoryId)
Make sure that supplier (SupplierId) is a PRIMARY KEY
Make sure that category_product (ProductID, CategoryID) (in this order) is a PRIMARY KEY, or you have an index with ProductID leading.
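A sketch of the corresponding DDL, assuming those keys don't already exist (the temporary table can get its key at creation time):
create temporary table categoryIds
(
`CategoryId` int,
primary key (`CategoryId`)
) engine = memory;
ALTER TABLE supplier ADD PRIMARY KEY (SupplierId);
ALTER TABLE category_product ADD PRIMARY KEY (ProductId, CategoryId);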
Update:
If it's the INSERT that causes the problem and product_search_query is a MyISAM table, the issue can be with MyISAM locking.
MyISAM locks the whole table if it decides to insert a row into a free block in the middle of the table which can cause the timeouts.
Try using INSERT DELAYED instead:
IF @resultsFound > 0 THEN
INSERT DELAYED INTO product_search_query (QueryText, CategoryId) VALUES (keywords, topLevelCategoryId);
END IF;
This will put the records into the insertion queue and return immediately. The record will be added later asynchronously.
Note that you may lose information if the server dies after the command is issued but before the records are actually inserted.
Update:
Since your table is InnoDB, it may be an issue with table locking. INSERT DELAYED is not supported on InnoDB.
Depending on the nature of the query, DML queries on an InnoDB table may place gap locks which will block the inserts.
For instance:
CREATE TABLE t_lock (id INT NOT NULL PRIMARY KEY, val INT NOT NULL) ENGINE=InnoDB;
INSERT
INTO t_lock
VALUES
(1, 1),
(2, 2);
This query performs ref scans and places the locks on individual records:
-- Session 1
START TRANSACTION;
UPDATE t_lock
SET val = 3
WHERE id IN (1, 2)
-- Session 2
START TRANSACTION;
INSERT
INTO t_lock
VALUES (3, 3)
-- Success
This query, while doing the same, performs a range scan and places a gap lock after key value 2, which will not allow key value 3 to be inserted:
-- Session 1
START TRANSACTION;
UPDATE t_lock
SET val = 3
WHERE id BETWEEN 1 AND 2
-- Session 2
START TRANSACTION;
INSERT
INTO t_lock
VALUES (3, 3)
-- Locks
Try wrapping your EXECUTE with the following:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
EXECUTE TheSelect USING @keywords, @keywords, @startOffset, @itemsToReturn;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
I do something similar in T-SQL for all report stored procs and searches where repeatable reads aren't important, to reduce locking/blocking issues with other processes running on the database.
Turn on the slow query log; that will give you an idea of what is taking so long to execute that there is a timeout.
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
Pick the slowest query and optimise that. Then run for a while and repeat.
There are some excellent resources and tools at http://hackmysql.com/nontech
DC
UPDATE:
Either you have a network problem causing the timeout (if you are using a local MySQL instance, that is unlikely), or something is locking a table for far too long, causing the timeout. The process that is locking the table or tables for too long will be listed in the slow query log as a slow query. You can also get the slow query log to show any queries that fail to use an index, resulting in an inefficient query.
If you can get the problem to occur while you are present, you can also use a tool like phpMyAdmin or the command line to run SHOW PROCESSLIST\G; this will give you a list of which queries are running while the problem is occurring.
You think the problem is in your INSERT statement, so something is locking that table. Therefore you need to find what is locking it, which means finding what is running so slowly that it locks the table for far too long. The slow query log is one way to do that.
Other things to look at
CPU - is it idle or running at full pelt?
IO - is I/O causing holdups?
RAM - are you swapping all the time (this will cause excessive I/O)?
Does the table product_search_query use an index?
What is the primary key?
If your index uses strings that are too long, you may build a huge index file that causes very slow inserts (the slow query log will also show that); see the prefix-index sketch below.
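For example, if product_search_query has an index on QueryText, limiting it to a prefix keeps the index file small (a sketch; the index name and the 32-character prefix are arbitrary choices):
ALTER TABLE product_search_query ADD INDEX idx_query_text_prefix (QueryText(32));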
And yes, the problem may be elsewhere, but you must start somewhere, mustn't you?
DC
