The stored procedure was working before the DB defrag. After the successful defrag, one of the stored procedures stopped working (it runs very slowly without producing any output). The indexes are complete. Something is going wrong in the stored procedure, but I just can't nail it down. All other stored procedures are working fine. Any idea what could have gone wrong?
Try regenerating statistics for the tables used in the procedure:
update statistics TableName
or on the indexes:
update index statistics TableName
update index statistics TableName IndexName
Furthermore, you can see which statement in the SP is the problem from master..sysprocesses.
Find the process running the SP: sysprocesses has id and dbid columns, and you can use object_name(id, dbid) to identify yours; then you'll see stmtnum and linenum.
Get a DBA or someone with sa_role to run sp_showplan on the running spid; that will show the query plan.
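For example, a minimal sketch in Sybase ASE syntax; the procedure name my_proc and the spid value 123 are placeholders, not anything from the question:
-- find the spid, statement number and line currently being executed by the procedure
select spid, dbid, id, stmtnum, linenum
from master..sysprocesses
where object_name(id, dbid) = 'my_proc'
-- with sa_role, show the query plan for that spid (replace 123 with the spid found above)
sp_showplan 123, null, null, null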
If you've changed no indexes or data volumes at all, then the above answer must be right: statistics need updating. If this is Sybase 15, you should normally run UPDATE INDEX STATISTICS, or it's quite likely you'll get some bad query plans (if you're allowing MERGE JOINs and HASH JOINs).
I was given the batch work of researching our 200 stored procedures and finding out a bunch of different information about them. Is there any way in SQL Server 2012 to pull execution history on stored procedures? Also, is there any way to tell what application might be calling a stored procedure? Even an IP address would be helpful, because we have several servers that do various processing.
Any information you can provide me about this would be extremely helpful. I am relatively new to this type of thing in SQL. Thanks!
Is there any way in SQL Server 2012 to pull execution history on stored procedures?
You can use sys.dm_exec_procedure_stats to find stored procedure execution times, as well as the most time-consuming and CPU-intensive ones:
SELECT TOP 10
d.object_id, d.database_id,
OBJECT_NAME(object_id, database_id) 'proc name',
d.cached_time, d.last_execution_time, d.total_elapsed_time,
d.total_elapsed_time/d.execution_count AS [avg_elapsed_time],
d.last_elapsed_time, d.execution_count
FROM
sys.dm_exec_procedure_stats AS d
ORDER BY
[total_worker_time] DESC;
Also, is there any way to tell what application might be calling the stored procedure? Even an IP address would be helpful, because we have several servers that do various processing.
The answer to both of the above questions is NO, unless you monitor them in real time using the query below. You can run the query from SQL Server Agent at predefined intervals and capture the output in a table (see the sketch after the query). Also note that this gives you the individual statements inside a stored procedure.
SELECT
    r.session_id,
    s.login_name,
    c.client_net_address,
    s.host_name,
    s.program_name,
    st.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
LEFT JOIN sys.dm_exec_connections c ON r.session_id = c.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) st
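As a hedged sketch of capturing that output on a schedule, an Agent job step could insert the result into a logging table; the table name dbo.ProcCallLog and its layout are assumptions, not anything the question defines:
-- hypothetical logging table; create it once
CREATE TABLE dbo.ProcCallLog (
    capture_time       datetime      NOT NULL DEFAULT GETDATE(),
    session_id         int           NULL,
    login_name         nvarchar(128) NULL,
    client_net_address varchar(48)   NULL,
    host_name          nvarchar(128) NULL,
    program_name       nvarchar(128) NULL,
    sql_text           nvarchar(max) NULL
);
-- Agent job step: snapshot who is running what right now
INSERT INTO dbo.ProcCallLog
    (session_id, login_name, client_net_address, host_name, program_name, sql_text)
SELECT
    r.session_id, s.login_name, c.client_net_address,
    s.host_name, s.program_name, st.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
LEFT JOIN sys.dm_exec_connections c ON r.session_id = c.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) st;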
I have written a stored procedure that includes a SELECT on a number of tables, applies logic to calculate values, and transforms some of the data.
I have been asked whether I can exclude records from the result set in the stored procedure and write each excluded record to a separate log table. I was looking to loop through the result set from the SELECT statement and delete each record I want to exclude once I have written it to a table. At the moment I am struggling to find the syntax for deleting from the result set of a SELECT statement in a stored procedure, and can only find how to use the cursor reference to delete from the original database table.
I need to remove the records in the same stored procedure, and I want to avoid duplicating the logic: running part of it to find the records to include and then repeating part of it to find the records to exclude. The only other alternative I can think of is a temporary table, but I think what I am trying to do should be possible.
Any help appreciated.
When you have an open cursor in a stored procedure (or in an application), you can perform positioned deletes by executing the statement:
DELETE FROM table_name WHERE CURRENT OF cursor_name;
Please be aware that by default issuing a COMMIT statement will close any open cursors, so if you plan to have this delete operation spread over multiple transactions you will need to declare your cursors using WITH HOLD.
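As a minimal sketch (DB2-style SQL PL, since the question doesn't name the database; every table and column name here is hypothetical), the held cursor, the log insert and the positioned delete fit together like this:
CREATE PROCEDURE log_and_remove_excluded()
LANGUAGE SQL
BEGIN
    DECLARE v_id INTEGER;
    DECLARE v_reason VARCHAR(100);
    DECLARE at_end INT DEFAULT 0;
    DECLARE not_found CONDITION FOR SQLSTATE '02000';
    -- WITH HOLD keeps the cursor open across COMMITs
    DECLARE c1 CURSOR WITH HOLD FOR
        SELECT id, reason FROM src_table WHERE exclude_flag = 1 FOR UPDATE;
    DECLARE CONTINUE HANDLER FOR not_found SET at_end = 1;

    OPEN c1;
    FETCH c1 INTO v_id, v_reason;
    WHILE at_end = 0 DO
        INSERT INTO excluded_log (id, reason) VALUES (v_id, v_reason);
        DELETE FROM src_table WHERE CURRENT OF c1;  -- positioned delete
        FETCH c1 INTO v_id, v_reason;
    END WHILE;
    CLOSE c1;
END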
Quick question (hopefully)
I have a large dataset (>100,000 records) that I would like to use as a lookup to determine the existence or non-existence of multiple keys. The purpose of this is to find FK violations before trying to commit the records to the database, so as to avoid the resulting EDatabaseError messing up my transaction.
I had been using TClientDataSet/TDataSetProvider with the FindKey method, as this allowed a client-side index to be set up and was faster (2s to scan each key rather than 10s for ADO). However, moving to larger datasets, populating the CDS is starting to take far more time than the local index saves.
I see that I have a few options for alternatives:
client cursor with TADOQuery.locate method
ADO SELECT statements for each check (no client cache)
ADO SEEK method
Extend TADOQuery to mimic FindKey
The Locate method seems easiest and doesn't spam the server the way the SELECT/SEEK methods do. I like the idea of extending TADOQuery, but I was wondering whether anyone knew of any ready-made solutions for this, rather than having to create my own?
I would create a temporary table on the database server and insert all 100,000 records into it, doing bulk inserts of, say, 3,000 records at a time to minimise round trips to the server. Then run SELECT statements against this temp table to check for foreign key violations and the like. If all is okay, run an INSERT from the temp table into the main table.
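A hedged sketch of that server-side check in T-SQL-style SQL; the table and column names (#staging, dbo.Customers, dbo.Orders, order_id, customer_id) are made up for illustration:
-- staging table, filled from the client in batches of ~3,000 rows
CREATE TABLE #staging (
    order_id    int NOT NULL,
    customer_id int NOT NULL
);
-- rows whose customer_id has no parent row, i.e. would violate the FK
SELECT s.order_id, s.customer_id
FROM #staging s
LEFT JOIN dbo.Customers c ON c.customer_id = s.customer_id
WHERE c.customer_id IS NULL;
-- if the check returns nothing, move the rows into the real table
INSERT INTO dbo.Orders (order_id, customer_id)
SELECT order_id, customer_id
FROM #staging;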
I have a Delphi application with 3 forms. I'm using Access 2003 and Microsoft.Jet.OLEDB.4.0, and I have an ADOConnection in the main form that I use in all the forms.
I use two .mdb files, where my.mdb has links to the tables in org.mdb.
Everything works, but very slowly. So after long hours of searching I came to this.
I don't know why, but after I run this query, all other queries speed up dramatically (from 10 seconds to under 1 second), even queries that don't include linked tables.
Table tb_odsotnost is in my.mdb
Table Userinfo is linked.
with rQueries.ADOQuery1 do
begin
Close;
SQL.Clear;
SQL.Add('SELECT DISTINCT tb_odsotnost.UserID, Userinfo.Name FROM tb_odsotnost');
SQL.Add('LEFT JOIN Userinfo ON Userinfo.UserID = tb_odsotnost.UserID');
SQL.Add('WHERE datum BETWEEN '+startDate+' AND '+endDate);
SQL.Add('ORDER BY Userinfo.Name ASC');
Open;
end;
I tried to run my app on another computer with Win7 and MS Access 2007, and the result was the same.
OK, for now I just run this query in OnFormActivate, but this is not a permanent solution.
When you run a query against a linked table, Access (or Jet, or ADO) has to acquire a lock on the database via the .ldb file. If you close the query, that lock has to be re-acquired the next time you query the linked table. The recommended way around this is to always keep a background dataset open, so that the lock doesn't have to be obtained each time (it forces the lock to remain in effect).
See http://office.microsoft.com/en-us/access-help/improve-performance-of-an-access-database-HP005187453.aspx and look at the "Improve performance of linked tables" section.
If that doesn't help, look at your table definitions in Access to see if you have subdatasheets defined for your table fields in one-to-many relationships.
Let's say I have 'myStoredProcedure' that takes in an Id as a parameter, and returns a table of information.
Is it possible to write a SQL statement similar to this?
SELECT
MyColumn
FROM
Table-ify('myStoredProcedure ' + @MyId) AS [MyTable]
I get the feeling that it's not, but it would be very beneficial in a scenario I have with legacy code and linked server tables.
Thanks!
You can use a table-valued function in this way.
Here are a few tricks...
No, it is not (at least not in any official or documented way), unless you change your stored procedure to a TVF.
There are, however, ways (read: hacks) to do it. They all basically involve a linked server and OPENQUERY; for example, see here. Do note that this is quite fragile, as you need to hardcode the name of the server, so it can be problematic if you have multiple SQL Server instances with different names.
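For illustration, a hedged sketch of the loopback linked-server trick; the linked server name LOOPBACK, the provider, and MyDatabase.dbo.myStoredProcedure are all assumptions, and the provider name varies by SQL Server version:
-- one-time setup: a linked server that points back at this instance
DECLARE @srv sysname
SET @srv = @@SERVERNAME
EXEC sp_addlinkedserver @server = N'LOOPBACK', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = @srv
EXEC sp_serveroption 'LOOPBACK', 'data access', 'true'
-- the procedure's result set can then be queried like a table
SELECT MyColumn
FROM OPENQUERY(LOOPBACK, 'EXEC MyDatabase.dbo.myStoredProcedure 42') AS MyTable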
Here is a pretty good summary of the ways of sharing data between stored procedures: http://www.sommarskog.se/share_data.html
Basically, it depends on what you want to do. The most common approaches are creating a temporary table before calling the stored procedure and having the procedure fill it, or having one permanent table that the stored procedure dumps its data into, which also contains the process id.
Table-valued functions have been mentioned, but there are a number of restrictions when you create a function as opposed to a stored procedure, so they may or may not be right for you. The link provides a good guide to what is available.
SQL Server 2005 and SQL Server 2008 change the options a bit. SQL Server 2005+ makes working with XML much easier, so XML can be passed as an output variable and fairly easily "shredded" into a table using the XML methods nodes() and value(). I believe SQL Server 2008 allows table variables to be passed into stored procedures (although read-only). Since you cited SQL 2000, the 2005+ enhancements don't apply to you, but I mention them for completeness.
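To illustrate the XML route, a minimal sketch (SQL Server 2005+); the procedure dbo.MyProcReturningXml and the element names are hypothetical:
DECLARE @x xml
-- hypothetical procedure that returns its rows in an XML OUTPUT parameter
EXEC dbo.MyProcReturningXml @Result = @x OUTPUT
-- shred the XML into rows and columns with nodes() and value()
SELECT
    n.value('(Id)[1]',   'int')          AS Id,
    n.value('(Name)[1]', 'nvarchar(50)') AS Name
FROM @x.nodes('/rows/row') AS t(n)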
Most likely you'll go with a table-valued function, or with creating the temporary table before calling the stored procedure and then having the procedure populate it.
While working on a project, I used the following to insert the results of xp_readerrorlog (which, afaik, returns a table) into a temporary table created ahead of time.
INSERT INTO [tempdb].[dbo].[ErrorLogsTMP]
EXEC master.dbo.xp_readerrorlog
From the temporary table, select the columns you want.
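For completeness, a hedged sketch of the pre-created table and a follow-up query; the column layout (LogDate, ProcessInfo, Text) is the usual xp_readerrorlog output, but verify it on your version before relying on it:
CREATE TABLE [tempdb].[dbo].[ErrorLogsTMP] (
    LogDate     datetime,
    ProcessInfo nvarchar(100),
    [Text]      nvarchar(max)
)
-- after the INSERT ... EXEC above, pick out just what you need
SELECT LogDate, [Text]
FROM [tempdb].[dbo].[ErrorLogsTMP]
WHERE [Text] LIKE '%error%'
ORDER BY LogDate DESC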