I have a question about the stack concept.
For the insertion and deletion operations, why do we insert and delete at the top of the stack, and not at the tail? (That is, why not insert a new element at the tail, and delete from the tail as well?)
I have the following fields in table1:
db, schema, jobnm, status, runtime, ins_tstmp, upd_tstmp.
A stream has been created on table1.
A stored procedure loops through another table's dataset (4 records) and writes all 4 records to table1 if they don't already exist, otherwise it updates them (using a MERGE here; ins_tstmp gets populated via the INSERT branch of the MERGE, while upd_tstmp gets updated via the UPDATE branch).
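Roughly, the MERGE looks like this (the source table name and match keys here are assumptions; what matters is which branch stamps which timestamp):
MERGE INTO table1 t
USING source_table s                        -- stand-in name for the other table
    ON t.db = s.db AND t.jobnm = s.jobnm    -- assumed match keys
WHEN MATCHED THEN UPDATE SET
    t.status = s.status,
    t.runtime = s.runtime,
    t.upd_tstmp = CURRENT_TIMESTAMP()       -- UPDATE branch sets upd_tstmp
WHEN NOT MATCHED THEN INSERT (db, jobnm, status, runtime, ins_tstmp)
    VALUES (s.db, s.jobnm, s.status, s.runtime, CURRENT_TIMESTAMP());  -- INSERT branch sets ins_tstmp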
As expected, table1 has all 4 records, and the stream also has 4 records with METADATA$ACTION as INSERT. UPD_TSTMP is null here.
Now on the 2nd run, the same 4 records were retrieved. Since they were a match, upd_tstmp got populated in both table1 and the stream, but why is METADATA$ACTION still INSERT? I'm not seeing 2 entries for an update. Could someone please explain what I am missing here?
Thanks
Since they were a match, upd_tstmp got populated in both table 1 and
stream but why metadata$action is INSERT only?
The METADATA$ACTION column can have only two possible values: INSERT and DELETE. So you can't see "UPDATE" in this column.
METADATA$ISUPDATE is an extra column indicating whether the operation was part of an UPDATE statement. In your case you should see FALSE there as well, because streams record the differences between two offsets: if a row is added and then updated within the current offset, the delta change is a single new row, and its METADATA$ISUPDATE column records FALSE.
https://docs.snowflake.com/en/user-guide/streams-intro.html#stream-columns
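If it helps, you can project the metadata columns directly from the stream to check this (the stream name table1_stream is an assumption):
SELECT metadata$action, metadata$isupdate, jobnm, ins_tstmp, upd_tstmp
FROM table1_stream;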
I have an UPDATE query in a stored procedure which is the main cause of a deadlock.
This stored procedure is used in an SSIS package inside a foreach loop.
It looks like the stored procedure hits the SalesPreProcessing table and ends up in a deadlock. This occurs when we invoke this SSIS package concurrently. Here is my SQL query:
UPDATE SPP
SET SPP.Promotion_Id = T.PromotionID
FROM staging.SalesPreProcessing SPP WITH(INDEX(staging_CIDXSalesPreprocessing1))
INNER JOIN #WithConcatenatedPromotionID T
ON SPP.DocLineNo = T.BillItem
AND SPP.DocNum = T.BillNumber
AND SPP.Cust_Code = T.CustomerCode
AND SPP.ZCS_EAN_CODE = T.ProductCode
AND SPP.BILLING_REPORTING_DATE = T.PricingDate
WHERE SPP.InterfaceStatusTrackingID = @in_InterfaceStatusTrackingId AND SPP.SetupId = @in_SetupId
I have created a clustered index on SetupId and a non-clustered index covering the rest of the columns of the table.
Here is my non-clustered index:
CREATE NONCLUSTERED INDEX [staging_CIDXSalesPreprocessing] on salespreprocessing
(
[SetupId] ASC,
[InterfaceStatusTrackingID] ASC
) INCLUDE
([DocLineNo] ,
[DocNum] ,
[Cust_Code] ,
[ZCS_EAN_CODE] ,
[Billing_Reporting_Date]
)
I am still getting a deadlock.
Firstly, the non-clustered index seems pointless, as its first column is SetupId, which you say is the clustered index's column. Assuming the SetupId values are sufficiently selective, queries will therefore always choose the clustered index over the non-clustered one. What is the primary key?
In terms of avoiding the deadlock you need to:
1) Ensure that the locks are taken in the same order each time the SP is called within the foreach loop. What are you looping over? The results of another SP or query? If so, ensure that it has an ORDER BY.
2) Is the foreach loop inside a transaction? If it is, does it need to be? Could you release the locks after each call to the SP by calling it from a non-transactional context?
3) Take as few locks as possible within the SP. I can't see the query used to create the temporary table you join to, but that may be the issue. You need to use SQL Profiler to find out exactly which object the deadlock occurs on; hints such as ROWLOCK may also help, as in the sketch below.
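For example, a hypothetical variant of your UPDATE that requests row-level locks rather than forcing the index (whether this helps depends on what the deadlock graph actually shows):
UPDATE SPP
SET SPP.Promotion_Id = T.PromotionID
FROM staging.SalesPreProcessing SPP WITH (ROWLOCK)  -- take row locks instead of page locks
INNER JOIN #WithConcatenatedPromotionID T
ON SPP.DocLineNo = T.BillItem
AND SPP.DocNum = T.BillNumber
AND SPP.Cust_Code = T.CustomerCode
AND SPP.ZCS_EAN_CODE = T.ProductCode
AND SPP.BILLING_REPORTING_DATE = T.PricingDate
WHERE SPP.InterfaceStatusTrackingID = @in_InterfaceStatusTrackingId AND SPP.SetupId = @in_SetupId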
I found a terrible bottleneck in this one procedure. No idea why this little block of code is running so slowly here. If I comment this one block out, the entire thing takes around 7 seconds to do its job. This block adds over a minute and a half.
Here's the definition of #TempFC:
CREATE TABLE #TempFC (
NoticeID int,
AttyID int,
AFN9 varchar(9),
FirstPubDate smalldatetime,
MortgagorName varchar(255),
PropAddress varchar(255)
)
Here's the definition of #NoticeIDs:
CREATE TABLE #NoticeIDs (
NoticeID INT,
CircuitCourtPubDateID INT,
CircuitCourtAdjournmentPublicationDate SMALLDATETIME
)
At the point where this runs, #TempFC is empty and there are only 2 rows in #NoticeIDs. A minute and a half to insert 2 narrow rows into an empty table.
INSERT INTO #TempFC (
NoticeID,
AttyID,
AFN9,
FirstPubDate,
MortgagorName,
PropAddress
)
SELECT
tN.NoticeID,
tN.AttyID,
tN.AttyFileNum9Chars as AFN9,
tN.FirstPubDate,
tN.MortgagorName,
tN.PropAddress
FROM
dbo.tblNotices tN
INNER JOIN #NoticeIDs ON #NoticeIDs.NoticeID = tN.NoticeID
INNER JOIN dbo.tblAttorneys tA ON tN.AttyID = tA.AttyID
INNER JOIN dbo.tblParentAttorneys tPA on tA.ParentAttorneyID = tPA.ParentAttorneyID
WHERE
tPA.PubAffTiming = 2
If I comment out the INSERT line and just run the SELECT (with a RETURN after it), it takes a few seconds to run all the code above plus the SELECT. If I comment out the SELECT and add a VALUES line to the INSERT statement, that runs fast too.
I also put all of the above into a separate query window and ran it as-is; it ran very fast, sub-second. In addition, I put it into a single small stored procedure all by itself and it ran lightning fast as well.
I'm lost as to why this would slow down so drastically here. Any ideas?
Try replacing
CREATE TABLE #TempFC plus INSERT INTO #TempFC
with a single SELECT ... INTO #TempFC,
and also try replacing the literal '2' with a variable such as @x.
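As a sketch, assuming the CREATE TABLE #TempFC statement is removed (SELECT ... INTO creates the table itself) and with @x as an illustrative variable name:
DECLARE @x int = 2;
SELECT
    tN.NoticeID,
    tN.AttyID,
    tN.AttyFileNum9Chars AS AFN9,
    tN.FirstPubDate,
    tN.MortgagorName,
    tN.PropAddress
INTO #TempFC   -- creates and fills #TempFC in one statement
FROM dbo.tblNotices tN
INNER JOIN #NoticeIDs ON #NoticeIDs.NoticeID = tN.NoticeID
INNER JOIN dbo.tblAttorneys tA ON tN.AttyID = tA.AttyID
INNER JOIN dbo.tblParentAttorneys tPA ON tA.ParentAttorneyID = tPA.ParentAttorneyID
WHERE tPA.PubAffTiming = @x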
Team, consider that I have a stored procedure that declares a global temp table and inserts a single row into it, then returns to Java; I then call the same SP again and insert another row.
Finally I call another SP to fetch the rows from the global temp table.
But the issue is that I am getting only the last row from the temp table. Meaning: it's replacing.
Stored Procedure:
DECLARE GLOBAL TEMPORARY TABLE T_LOAD(
.
.
.
) NOT LOGGED WITH REPLACE ON COMMIT PRESERVE ROWS;
INSERT INTO T_LOAD(...) VALUES(...);
Please suggest a way to proceed.
I'm moving from MySQL to Postgres, and I noticed that when you delete rows from MySQL, the unique IDs for those rows are re-used when you make new ones. With Postgres, if you create rows and delete them, the unique IDs are not used again.
Is there a reason for this behaviour in Postgres? Can I make it act more like MySQL in this case?
Sequences have gaps to permit concurrent inserts. Attempting to avoid gaps or to re-use deleted IDs creates horrible performance problems. See the PostgreSQL wiki FAQ.
PostgreSQL SEQUENCEs are used to allocate IDs. These only ever increase, and they're exempt from the usual transaction rollback rules to permit multiple transactions to grab new IDs at the same time. This means that if a transaction rolls back, those IDs are "thrown away"; there's no list of "free" IDs kept, just the current ID counter. Sequences are also usually incremented if the database shuts down uncleanly.
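A minimal demonstration (table and column names are illustrative):
CREATE TABLE demo (id serial PRIMARY KEY, val text);
BEGIN;
INSERT INTO demo(val) VALUES ('a');  -- allocates id 1 from demo_id_seq
ROLLBACK;                            -- the row vanishes, but id 1 is not handed back
INSERT INTO demo(val) VALUES ('b');  -- allocated id 2, leaving a permanent gap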
Synthetic keys (IDs) are meaningless anyway. Their order is not significant, their only property of significance is uniqueness. You can't meaningfully measure how "far apart" two IDs are, nor can you meaningfully say if one is greater or less than another. All you can do is say "equal" or "not equal". Anything else is unsafe. You shouldn't care about gaps.
If you need a gapless sequence that re-uses deleted IDs, you can have one, you just have to give up a huge amount of performance for it - in particular, you cannot have any concurrency on INSERTs at all, because you have to scan the table for the lowest free ID, locking the table for write so no other transaction can claim the same ID. Try searching for "postgresql gapless sequence".
The simplest approach is to use a counter table and a function that gets the next ID. Here's a generalized version that uses a counter table to generate consecutive gapless IDs; it doesn't re-use IDs, though.
CREATE TABLE thetable_id_counter ( last_id integer not null );
INSERT INTO thetable_id_counter VALUES (0);
CREATE OR REPLACE FUNCTION get_next_id(countertable regclass, countercolumn text) RETURNS integer AS $$
DECLARE
    next_value integer;
BEGIN
    EXECUTE format('UPDATE %s SET %I = %I + 1 RETURNING %I', countertable, countercolumn, countercolumn, countercolumn) INTO next_value;
    RETURN next_value;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION get_next_id(regclass, text) IS 'Increment and return value from integer column $2 in table $1';
Usage:
INSERT INTO dummy(id, blah)
VALUES ( get_next_id('thetable_id_counter','last_id'), 42 );
Note that when one open transaction has obtained an ID, all other transactions that try to call get_next_id will block until the first transaction commits or rolls back. This is unavoidable; for gapless IDs it is by design.
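You can see the serialization from two concurrent sessions (re-using the counter table and dummy table from the usage example above):
-- Session 1:
BEGIN;
INSERT INTO dummy(id, blah) VALUES ( get_next_id('thetable_id_counter','last_id'), 42 );
-- Session 2 now blocks inside get_next_id on the counter-row lock
-- until session 1 runs COMMIT or ROLLBACK:
INSERT INTO dummy(id, blah) VALUES ( get_next_id('thetable_id_counter','last_id'), 43 );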
If you want to store multiple counters for different purposes in a table, just add a parameter to the above function, add a column to the counter table, and add a WHERE clause to the UPDATE that matches the parameter to the added column. That way you can have multiple independently-locked counter rows. Do not just add extra columns for new counters.
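An untested sketch of that multi-counter variant (function, table, and column names are illustrative; the counter-name column is hardcoded here for brevity):
CREATE TABLE id_counters ( counter_name text PRIMARY KEY, last_id integer NOT NULL );
INSERT INTO id_counters VALUES ('invoice', 0), ('order', 0);
CREATE OR REPLACE FUNCTION get_next_named_id(countername text) RETURNS integer AS $$
DECLARE
    next_value integer;
BEGIN
    -- The WHERE clause means only the matching counter row is locked,
    -- so counters for different purposes don't block each other.
    UPDATE id_counters SET last_id = last_id + 1
    WHERE counter_name = countername
    RETURNING last_id INTO next_value;
    RETURN next_value;
END;
$$ LANGUAGE plpgsql;
-- Usage: SELECT get_next_named_id('invoice');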
This function does not re-use deleted IDs, it just avoids introducing gaps.
To re-use IDs I advise ... not re-using IDs.
If you really must, you can do so by adding an ON INSERT OR UPDATE OR DELETE trigger on the table of interest that adds deleted IDs to a free-list side table and removes them from the free-list table when they're INSERTed. Treat an UPDATE as a DELETE followed by an INSERT. Now modify the ID generation function above so that it does a SELECT free_id INTO next_value FROM free_ids FOR UPDATE LIMIT 1 and, if a row is found, DELETEs that row; IF NOT FOUND, it gets a new ID from the generator table as normal. Here's an untested extension of the prior function to support re-use:
CREATE OR REPLACE FUNCTION get_next_id_reuse(countertable regclass, countercolumn text, freelisttable regclass, freelistcolumn text) RETURNS integer AS $$
DECLARE
    next_value integer;
BEGIN
    -- Try to grab (and lock) a previously freed ID first
    EXECUTE format('SELECT %I FROM %s FOR UPDATE LIMIT 1', freelistcolumn, freelisttable) INTO next_value;
    IF next_value IS NOT NULL THEN
        -- Claim the freed ID by removing it from the free-list
        EXECUTE format('DELETE FROM %s WHERE %I = %L', freelisttable, freelistcolumn, next_value);
    ELSE
        -- No freed IDs available; fall back to the gapless counter
        EXECUTE format('UPDATE %s SET %I = %I + 1 RETURNING %I', countertable, countercolumn, countercolumn, countercolumn) INTO next_value;
    END IF;
    RETURN next_value;
END;
$$ LANGUAGE plpgsql;
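And an untested sketch of the free-list plumbing it relies on, assuming the table of interest is thetable with ID column id (all names illustrative):
CREATE TABLE free_ids ( free_id integer PRIMARY KEY );
CREATE OR REPLACE FUNCTION track_free_ids() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('DELETE', 'UPDATE') THEN
        INSERT INTO free_ids VALUES (OLD.id);         -- the old ID becomes reusable
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        DELETE FROM free_ids WHERE free_id = NEW.id;  -- a claimed ID is no longer free
    END IF;
    RETURN NULL;  -- result is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER thetable_free_ids
AFTER INSERT OR UPDATE OR DELETE ON thetable
FOR EACH ROW EXECUTE PROCEDURE track_free_ids();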