I am developing with Microsoft SQL Server 2008 R2 Express.
I have a stored procedure that uses temp tables and outputs some processed data usually within 1 second.
Over a few months, my DB has gathered a lot of data, almost reaching the 10 GB limit. At this point, this particular stored procedure started taking as much as 5 minutes for the same input parameters. After I emptied some of the huge tables in the DB, it got back to normal.
After this incident, I am worried if my stored procedure needs more than necessary space in DB. How can I be sure? Any leads?
Thanks already
Jyotsna
Follow this article
Another old-school way is to run sp_who2, find the SPID connected to your database, and check its CPU and DiskIO columns.
To see what that SPID is actually executing, run DBCC INPUTBUFFER(spid).
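A minimal sketch of that check (the SPID value 53 below is just a placeholder for whatever sp_who2 reports for your database):
-- List current sessions; note the SPID with high CPUTime / DiskIO against your database
EXEC sp_who2
-- Show the last statement submitted by that session (53 is the placeholder SPID)
DBCC INPUTBUFFER(53)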
Also check the STATISTICS output of the SP in the original scenario, without purging data from the tables:
SET STATISTICS IO ON
EXEC [YourSPName]
See the logical reads, and also refer to the article.
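If you want a cumulative view rather than a single run, another hedged option on 2008 R2 is the sys.dm_exec_procedure_stats DMV (figures reset whenever the plan leaves the cache):
-- Cumulative reads, CPU and elapsed time per procedure in the current database
SELECT OBJECT_NAME(object_id) AS proc_name,
       execution_count,
       total_logical_reads,
       total_worker_time,
       total_elapsed_time
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID()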
Related
Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works? Further info: if I remove that statement from the stored procedure, the SP runs properly. How can this sort of thing be debugged?
(One more piece of info: running as a different user on a different schema, the SP works as intended.)
Update: running the SP on a different warehouse worked, so it might be a problem with the warehouse, not the schema.
Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works?
There can be multiple reasons: the query gets queued due to lack of resources, it is waiting for a lock to be freed (if it's a transactional query), etc.
How can this sort of thing be debugged?
Check the Query History UI page on Snowflake. If your procedure-executed statement is showing a queued status, you're likely running into a warehouse size limit or a maximum concurrency limit, which can be resolved by reconfiguring your warehouse (via auto-scaling and/or using higher warehouse sizes).
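A hedged sketch of that check in SQL (the warehouse name my_wh is a placeholder, and MAX_CLUSTER_COUNT needs a multi-cluster-capable edition; raising WAREHOUSE_SIZE is the single-cluster alternative):
-- Look for recent statements that spent time queued on the warehouse
SELECT query_id, query_text, warehouse_name, execution_status,
       queued_overload_time, total_elapsed_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE queued_overload_time > 0
ORDER BY start_time DESC;
-- If queuing is the cause, let the warehouse scale out
ALTER WAREHOUSE my_wh SET MAX_CLUSTER_COUNT = 3;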
We have several data feeds that run every evening using SSIS packages with SQL table data sources. Part of this standard process is a data engine we've built using stored procedures that run for each data feed and return that customer's data based on their specific parameters. That engine dumps the data into a SQL table, where it is retrieved by the package and then remains until the next evening's run.
About two weeks ago we started to intermittently get the following error when running these stored procedures (which are executed using SQL Agent):
"INSERT EXEC failed because the stored procedure altered the schema of the target table. [SQLSTATE 42000] (Error 556). The step failed."
These stored procedures have been running for months, some even years, without being changed; these errors just started occurring intermittently. Inside the stored procedure we have a temporary table that receives the engine data, and a table that is dropped and re-created using a statement like this:
SELECT field1, field2 INTO sqlTable FROM #tempTable
I looked for a SQL update or something that may have changed to cause these errors all of a sudden, but can't find anything. It has occurred to several different stored procedures, intermittently, that all have this same kind of structure, but I can't identify any particular reason. It will happen one night and not another, to one stored procedure and not another just like it. Any idea what could cause this?
We are running Microsoft SQL Server 2016 Standard 64-bit (13.0.4604.0) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor). This is all on a VM in the Azure environment.
If you are using "INSERT ... EXEC" and have enabled the Query Store, that might be the reason.
The Query Store periodically runs an automatic cleanup.
This has turned out to cause problems when a stored procedure calls another stored procedure using the "INSERT ... EXEC" syntax.
This is only an issue with SQL Server 2016
For more details and possible solution, see: https://support.microsoft.com/en-us/help/4465511/error-556-insert-exec-failed-stored-procedure-altered-table-schema
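Until you can apply the cumulative update referenced there, one possible stopgap, following the reasoning above that the automatic cleanup is the trigger, is to switch that cleanup off (YourDatabase is a placeholder):
-- Temporarily disable Query Store's automatic cleanup
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (SIZE_BASED_CLEANUP_MODE = OFF,
                 CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 0));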
I wrote a stored procedure in Firebird server. The procedure is used on several different servers and databases. On one of them, the procedure runs very slowly (a few hours), whereas on the other servers it finishes in 3-5 seconds. The indices in each database are the same.
Have any of you encountered such a problem? We made a backup and restored the database, but it did not help.
When I had such problems, it was always either a corrupted database (a SELECT on a table with 10 records took a few minutes) or the index statistics just needed recalculating. Try to check and fix the database with gfix. If recalculating the index statistics helps, consider adding a PLAN to your SQL statement.
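The index statistics part can be done in SQL; a minimal sketch, with the index name as a placeholder for your own:
-- Recompute the selectivity Firebird's optimizer uses for this index
SET STATISTICS INDEX IDX_MYTABLE_MYCOL;
The gfix validation itself (for example gfix -v -full against the database file) is run from the command line rather than from SQL.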
We have recently converted our JD Edwards EnterpriseOne system from an AS400/DB2 platform to Windows & SQL Server. In the old system we had an RPG/CL program that would transfer data from an AS400 library to the accounting system for further processing. The end users needed to initiate this process, so it was executed via a menu command.
To replicate this behavior after the conversion I created a stored procedure in SQL Server 2008 R2 that inserts records into the SQL Server database from the as400 via a linked server and then updates the records on the as400 to indicate that the records have been processed. To allow the end users the ability to execute this process, I created a SSRS (2005) report that executes the stored procedure.
When the SSRS report is executed interactively, we intermittently get the error 'For security reasons DTD is prohibited in this XML document', which from my research is caused by SSRS running out of memory.
Does anyone know of another/better way to transfer the data?
The transfer/update in the stored procedure is essentially:
INSERT INTO [SQL DEST TABLE]
SELECT *
FROM [AS400 Linked Server/Table];

UPDATE OPENQUERY (AS400_LINK, 'MY SELECT QUERY')
SET FLAG = PROCESSED;
You should get better database performance if you allow a server to perform work with its own data, where possible, rather than having to transmit it back and forth.
I will make a few guesses about the circumstances, and if you correct me, I will gladly adjust the answer to fit your situation. This sounds like you are extracting data from an accounting transaction table in DB2, and that when done, you want to update the flag in those same records. That might indicate that the records could stay in that table essentially forever, or perhaps that some other process clears them out. There is no WHERE on your SELECT, so I will assume they do get removed. I will assume that we don't know if more records might get added to the transaction table at any time, including any period between extracting the info and updating the flag.
I wonder if you could update the flag immediately upon extraction, before the records have actually been processed by SQL Server? Would this be allowed logistically, and within your business requirements?
Suppose you...
1. extract the DB2 unprocessed transaction data into a workfile,
2. transfer the workfile to SQL Server,
3. perform whatever processing you want to do in SQL Server,
4. tell DB2 to update the transaction table based on the workfile.
So in DB2, #1 might look like
INSERT INTO workfile
SELECT *
FROM transactions
WHERE flag = unprocessed
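Step 2 could then be driven from the SQL Server side over the same linked server; a hedged sketch reusing the placeholder names from the question:
-- Pull only the workfile rows across the linked server into the destination table
INSERT INTO [SQL DEST TABLE]
SELECT *
FROM OPENQUERY(AS400_LINK, 'SELECT * FROM workfile')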
During step 3, your SQL Server job could update the flag in the workfile to an error status, for any records that SQL Server cannot process properly.
Step 4 on DB2 could be
UPDATE transactions
SET flag = processed
WHERE transid IN (SELECT transid
                  FROM workfile
                  WHERE flag <> error)
Hopefully, processing errors on SQL Server would be fairly rare. If that process updates transactions in the workfile only on an error, this should be faster than transmitting an update for each success. The UPDATE statement above should also run faster on DB2, since it is driven by the workfile on the same server, rather than the data being transmitted back up to DB2.
I am trying to import a table from an old database (MS Access) to MySQL server using CRBatchMove using Delphi 2007.
The program fetches data from the legacy database over an ODBC connection and stores it on the local hard drive using TADOTable.SaveToFile(). The second part of the program reads this file into another TADOTable and uses TCRBatchMove to transfer it to a MySQL server (via DevArt's TMyTable). In this process the batch move appears to be extremely slow for some reason.
Amount of data in the following trial is about 100,000 records each with about 120 fields. Most of the fields are integers and VARCHAR (each of VARCHAR less than 32 chars).
The performance figures I obtained are:
Time taken to bring data to local file over ODBC connection: 17 seconds
Time taken to load data from local file into TADOTable: 3 seconds
Time taken by TCRBatchMove to move data from TADOTable to TMyTable: > 30 minutes
The MySQL server is running locally on the development machine (which is an i7-2.8GHz), and the database is otherwise very snappy.
Why is it so slow for the batch move to push data to the MySQL server? Is there a way to speed up this task? Or is there a better way to accomplish this?
Not really an answer, but I'm running out of space in the comments.
MySQL has a statement called LOAD DATA INFILE,
see: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
You can use that to time the fastest possible insert. This will give you a baseline for the insert time into MySQL and allow you to pinpoint whether the delay is in MySQL or in Delphi. If you have the source for TMyTable, you can use a profiler as well.
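A minimal sketch of such a baseline test (the file path, table name and delimiters are placeholders; you would first export the 100,000 rows to a CSV file):
-- Bulk-load the exported rows to measure the fastest insert path MySQL offers
LOAD DATA INFILE '/tmp/legacy_export.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';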
Another option is to download ZEOS data access components at:
http://sourceforge.net/projects/zeoslib/
If there is some snafu in the component you're using, a change of toolset might fix the problem. (Devart's components are usually excellent, though.)
On the MySQL side you can disable index updates before the bulk insert and re-enable them afterwards. If you have a lot of inserts, that usually works out faster.
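For a MyISAM table that could look like the sketch below (my_table is a placeholder; on InnoDB tables DISABLE KEYS has no effect, so there the settings from the linked answer that follows are the relevant ones):
ALTER TABLE my_table DISABLE KEYS;
-- run the bulk insert here
ALTER TABLE my_table ENABLE KEYS;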
See: https://stackoverflow.com/a/9524988/650492
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
-- your insert here
SET autocommit=1;
SET unique_checks=1;
SET foreign_key_checks=1;