I was given the task of researching our 200 stored procedures and finding out various information about them. Is there any way in SQL Server 2012 to pull execution history on stored procedures? Also, is there any way to tell what application might be calling a stored procedure? Even an IP address would be helpful, because we have several servers that do various processing.
Any information you can provide about this would be extremely helpful. I am relatively new to this type of thing in SQL. Thanks!
Is there any way in SQL Server 2012 to pull execution history on stored procedures?
You can use sys.dm_exec_procedure_stats to find stored procedure execution statistics, including the most time-consuming and most CPU-intensive procedures:
-- Note: only procedures whose plans are currently in the plan cache appear here
SELECT TOP 10
    d.object_id, d.database_id,
    OBJECT_NAME(d.object_id, d.database_id) AS [proc name],
    d.cached_time, d.last_execution_time, d.total_elapsed_time,
    d.total_elapsed_time / d.execution_count AS [avg_elapsed_time],
    d.last_elapsed_time, d.execution_count
FROM
    sys.dm_exec_procedure_stats AS d
ORDER BY
    d.total_worker_time DESC; -- most CPU-intensive first
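If you only care about the database you are currently connected to, you could filter on database_id and sort by average duration instead (a small variation on the same query, offered as a sketch):
SELECT
    OBJECT_NAME(d.object_id, d.database_id) AS [proc name],
    d.execution_count,
    d.total_elapsed_time / d.execution_count AS [avg_elapsed_time],
    d.last_execution_time
FROM
    sys.dm_exec_procedure_stats AS d
WHERE
    d.database_id = DB_ID()
ORDER BY
    [avg_elapsed_time] DESC;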
Also, is there any way to tell what application might be calling the stored procedure? Even an IP address would be helpful, because we have several servers that do various processing.
The answer to both of the above questions is no, unless you monitor them in real time using the query below. You can run it from a SQL Server Agent job at predefined intervals and capture the output in a table. Note that this returns the individual statements currently executing inside a stored procedure.
SELECT
    r.session_id,
    s.login_name,
    c.client_net_address,
    s.host_name,
    s.program_name,
    st.text
FROM
    sys.dm_exec_requests r
INNER JOIN
    sys.dm_exec_sessions s ON r.session_id = s.session_id
LEFT JOIN
    sys.dm_exec_connections c ON r.session_id = c.session_id
OUTER APPLY
    sys.dm_exec_sql_text(r.sql_handle) st
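For example, a minimal sketch of capturing that output into a table from an Agent job (the table name and column sizes here are placeholders, so adjust them to your needs):
-- One-time setup: a table to collect the snapshots
CREATE TABLE dbo.ProcExecutionCapture (
    capture_time       datetime      NOT NULL DEFAULT GETDATE(),
    session_id         int,
    login_name         nvarchar(128),
    client_net_address varchar(48),
    host_name          nvarchar(128),
    program_name       nvarchar(128),
    sql_text           nvarchar(max)
);
-- Scheduled Agent job step: snapshot whatever is running right now
INSERT INTO dbo.ProcExecutionCapture
    (session_id, login_name, client_net_address, host_name, program_name, sql_text)
SELECT
    r.session_id, s.login_name, c.client_net_address,
    s.host_name, s.program_name, st.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
LEFT JOIN sys.dm_exec_connections c ON r.session_id = c.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) st;
Keep in mind this only catches requests that happen to be running at the moment the job fires, so short-lived calls can slip between snapshots.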
I am trying to connect to my organisation's SQL database using Power Query to create some reports. I need to delete/edit some tables and join multiple tables to come up with the desired report output...
I don't want the changes or edits I make in Excel Power Query to be reflected in the live database, only in Excel.
The short answer is no: no button you press in the Power Query Editor interface modifies the source database. I must admit that I have not found any page in the Microsoft Docs on Power Query that states this clearly. The page What is Power Query? states that:
Power Query is a data transformation and data preparation engine. Power Query comes with a graphical interface for getting data from sources and a Power Query Editor for applying transformations.
Other pages contain similarly general and vague descriptions but let me reassure you that any data transformation you carry out by using the Power Query Editor interface will not modify your SQL database. All you see in Power Query is a view of the source database.
Seeing as you are connecting to a SQL database, it is likely that query folding is active. This means that when you remove a column (or row), this updates the SQL query used to extract the data from the database. That query is written as a single SELECT statement that can contain multiple clauses like GROUP BY and WHERE. Transformations that add data (e.g. Add Custom Column, Fill Down) are not included in the query; they are carried out only within the Power Query engine. You can read more about this in the docs.
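As a rough illustration (the server, database, table, and column names here are made up), a filter step like the one below would typically be folded into the WHERE clause of the generated SELECT statement:
let
    Source = Sql.Database("servername", "dbname"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // This row filter can be folded into the SQL sent to the server
    Filtered = Table.SelectRows(Orders, each [Status] = "Open")
in
    Filtered
You can check whether a step folds by right-clicking it and seeing whether View Native Query is available.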
How to edit a database with Power Query when native SQL queries are supported
That being said, you can actually edit a database from within Power Query if the database supports the use of native SQL queries, if you have write permission for the database, and if you edit and run one of the two M functions that let you write native SQL queries. Here is an example using the Sql.Database function:
Sql.Database("servername", "dbname", [Query = "DROP TABLE tablename"])
And here is an example using the Value.NativeQuery function:
let
    Source = Sql.Databases("servername"){[Name = "dbname"]}[Data],
    #"Native Query" = Value.NativeQuery(Source, "DROP TABLE tablename")
in
    #"Native Query"
Unless you have changed the default Query Options, these functions raise a warning message requiring you to permit running the native database query. This prevents you from modifying the database without confirmation, so a database modification cannot happen purely by accident.
I verified this using Excel Microsoft 365 (Version 2108) on Windows 10 64-bit connected to a local SQL Server 2019 (15.x) database.
I have been running a production website on Azure SQL for 4 years.
With the help of the 'Top Slow Request' query from alexsorokoletov on GitHub, I found one super slow query according to the Azure query stats.
The one at the top is the one that uses a lot of CPU. When looking at the LINQ query and the execution plans / live query statistics, I can't find the bottleneck yet.
The join from results to project is not direct; there is a projectsession table in between, which is not visible in the query but may be added under the hood by Entity Framework.
Might I be affected by parameter sniffing? Can I reset the cached plan? Maybe the query plan was optimized back in 2014, and now the result table holds about 4 million rows and the plan is far from optimal?
If I run this query in Management Studio, it's very fast!
Is it just the stats that are wrong?
Regards
Vincent - The Netherlands.
I would suggest you try adding OPTION (HASH JOIN) at the end of the query, if possible. Once you get into large row counts, a nested loops join is not particularly efficient. That would prove out whether there is a more efficient plan (likely yes).
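For instance, assuming the query can take a hint at its outermost level, it could look something like this (the table, column, and parameter names are placeholders based on your description):
SELECT r.ResultId, p.ProjectName
FROM dbo.Results AS r
INNER JOIN dbo.ProjectSession AS ps ON ps.SessionId = r.SessionId
INNER JOIN dbo.Project AS p ON p.ProjectId = ps.ProjectId
WHERE r.CreatedOn >= @from
OPTION (HASH JOIN); -- forces a hash join for every join in the query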
Without seeing more of the details (your screenshots are helpful, but they cut off whether simple or forced parameterization has kicked in and auto-parameterized your query), it is hard to confirm or deny this explicitly. You can read more about parameter sniffing in a blog post I wrote a bit longer ago than I care to admit ;) :
https://blogs.msdn.microsoft.com/queryoptteam/2006/03/31/i-smell-a-parameter/
Ultimately, if you update statistics, run DBCC FREEPROCCACHE, or otherwise cause this plan to recompile, your odds of getting a faster plan in the cache are higher if this particular query + parameter values are executed often enough to be sniffed during plan compilation. Your other option is to add an OPTIMIZE FOR UNKNOWN hint, which disables sniffing and directs the optimizer to use an average value for the selectivity of any filters over parameter values. This will likely encourage more hash or merge joins instead of loops joins, since the cardinality estimates of the operators in the tree will likely increase.
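Sketches of both options (the table, column, and parameter names are hypothetical):
-- Option 1: clear the plan cache so the next execution compiles a fresh plan
-- (this clears the entire cache, so use with care on a busy server;
-- on Azure SQL Database, ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE
-- is the database-scoped equivalent)
DBCC FREEPROCCACHE;

-- Option 2: disable sniffing for this query and optimize for an "average" value
SELECT r.ResultId
FROM dbo.Results AS r
WHERE r.ProjectId = @projectId
OPTION (OPTIMIZE FOR UNKNOWN);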
Can you please share any links/sample source code for generating a graph in neo4j from Oracle database table data?
My use case is: Oracle schema table names as nodes, and columns as properties. I also need to generate the graph in a tree structure.
Make sure you commit the transaction after creating the nodes by calling tx.success() before the transaction is closed (older Neo4j versions use tx.finish(); with try-with-resources the close happens automatically).
If you still don't see the nodes, please post your code and/or any exceptions.
Use JDBC to extract your Oracle DB data. Then use the Java API to build the corresponding nodes:
// A label per table name, e.g.:
enum Labels implements Label { TABLENAME }

GraphDatabaseService db = ...; // your embedded database instance

try (Transaction tx = db.beginTx()) {
    Node datanode = db.createNode(Labels.TABLENAME);
    datanode.setProperty("column name", "column value"); // do this for each column
    tx.success(); // mark the transaction as successful so it commits on close
}
Also remember to batch your transactions. I tend to use around 1500 creates per transaction and it works fine for me, but you might have to play with it a little bit.
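As a rough sketch of that batching idea (the rows collection, its element type, and the Labels enum are placeholders; the transaction API shown is the Neo4j 2.x embedded style):
int batchSize = 1500; // tune this to your heap size
Transaction tx = db.beginTx();
try {
    int count = 0;
    for (Map<String, Object> row : rows) { // rows previously fetched via JDBC
        Node node = db.createNode(Labels.TABLENAME);
        for (Map.Entry<String, Object> col : row.entrySet()) {
            node.setProperty(col.getKey(), col.getValue());
        }
        if (++count % batchSize == 0) { // commit and start a fresh transaction
            tx.success();
            tx.close();
            tx = db.beginTx();
        }
    }
    tx.success(); // commit the final partial batch
} finally {
    tx.close();
}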
To page through the source table, run something like SELECT * FROM table OFFSET X*1000 ROWS FETCH NEXT 1000 ROWS ONLY (Oracle 12c syntax; older Oracle versions need a ROWNUM construct instead), with X being the number of times you've run the query before. Then keep those 1000 records in a collection so you can build your nodes from them. Repeat this until you've handled every record in your database.
Not sure what you mean by "I also need to generate the graph in a tree structure", but if you mean you'd like to convert foreign keys into relationships: instead of adding the FK as a property, create a relationship to the node it refers to. You can find that node via an index lookup on the key, or you could just create your own little in-memory index with a HashMap. But since you're already storing 1000 SQL records in memory, plus you are building the transaction, you need to be a bit careful with your memory depending on your JVM settings.
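For example, a minimal sketch of the HashMap approach (the label, key names, and RelTypes enum, which would implement RelationshipType, are all hypothetical):
Map<Object, Node> nodesByPk = new HashMap<>(); // primary key -> created node

// While creating a node, remember it by its primary key
Node rowNode = db.createNode(Labels.TABLENAME);
rowNode.setProperty("Id", idValue);
nodesByPk.put(idValue, rowNode);

// Later, instead of storing the FK value as a property,
// look up the referenced node and create a relationship to it
Node referenced = nodesByPk.get(fkValue);
rowNode.createRelationshipTo(referenced, RelTypes.REFERENCES);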
You need to code this ETL process yourself. Follow the steps below:
Write your first Neo4j example by following this article.
Understand how to model with graphs.
There are multiple ways of talking to Neo4j from Java. Choose the one that suits your needs.
I am limited to SQL Server 2000 on a very big project. For one module I have to create 3 to 10 stored procedures. To make this manageable, I'm writing one stored procedure that runs different SQL queries based on a condition, like:
IF @QueryId = 'SelAllEmp'
    SELECT EmpId, EmpName FROM EMP
ELSE IF @QueryId = 'SelEmpById'
    SELECT EmpId, EmpName FROM EMP WHERE EmpId = @EmpId
ELSE IF @QueryId = 'EMPDept'
    SELECT EmpId, DeptId, DeptName FROM EMPDept
...
My question is: are there any hidden consequences or impacts of using this technique?
I don't think the way you are approaching this is manageable at all. For the cases you've shown in the question, you should strive to make this a single query. Let the client decide whether or not to use the DeptName column; the client has the option to ignore it, and knows to do so because it had to pass the EMPDept argument. If your client can ignore that column, then your three queries can become one:
SELECT EmpId, EmpName, DeptName
FROM dbo.EMP
WHERE EmpId = CASE
    WHEN @QueryId = 'SelEmpById' THEN @EmpId ELSE EmpId END;
This query solves all three of your conditions. To avoid getting stuck with a bad plan, you can add OPTION (RECOMPILE) to the statement or WITH RECOMPILE to the procedure (on SQL Server 2000, only the procedure-level option exists). Yes, this can cause overhead (not as bad as Joon makes it sound), but I'll take a little compilation every time over getting sucked into a horrible plan every other day. By default, SQL Server 2000 can't optimize all of the paths of a single stored procedure.
Another option is to build the query you need with dynamic SQL. This can cause plan cache bloat, but it shouldn't be too bad if all of the options are used frequently. On SQL Server 2008+ you can also mitigate plan cache bloat with the optimize for ad hoc workloads server setting.
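For example, a minimal sketch of the dynamic SQL approach with sp_executesql (reusing the parameter names from the question; this is an outline, not a complete procedure):
DECLARE @sql nvarchar(4000);
SET @sql = N'SELECT EmpId, EmpName FROM dbo.EMP';
IF @QueryId = 'SelEmpById'
    SET @sql = @sql + N' WHERE EmpId = @EmpId';
-- Parameterized execution keeps the plan reusable and avoids injection
EXEC sp_executesql @sql, N'@EmpId int', @EmpId = @EmpId;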
Two very valuable reads by Erland Sommarskog:
Dynamic Search Conditions in T-SQL (Version for SQL 2005 and Earlier)
The Curse and Blessings of Dynamic SQL
Basically, don't be afraid of dynamic SQL, but be aware of the potential issues.
Sorry, came back and edited since my answer was geared toward newer versions of SQL Server. It's hard to remember that people out there are still using SQL Server 2000 for some reason.
When a stored proc gets above a certain complexity, it will recompile whenever it is called from the client.
This places overhead on the server, and in busy apps can cause overall performance degradation if it happens enough.
That is one potential negative consequence of following this technique.
Also, your result set changes based on the input to your stored proc. That will potentially break clients that expect a certain field to be present or not.
Let's say I have 'myStoredProcedure' that takes in an Id as a parameter, and returns a table of information.
Is it possible to write a SQL statement similar to this?
SELECT
MyColumn
FROM
Table-ify('myStoredProcedure ' + @MyId) AS [MyTable]
I get the feeling that it's not, but it would be very beneficial in a scenario I have with legacy code & linked server tables
Thanks!
You can use a table-valued function in this way.
Here are a few tricks...
No, it is not - at least not in any official or documented way - unless you change your stored procedure to a TVF.
However, there are ways (read: hacks) to do it. All of them basically involve a linked server and using OPENQUERY - for example, see here. Do note, however, that this is quite fragile, as you need to hardcode the name of the server, so it can be problematic if you have multiple SQL Server instances with different names.
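For example, with a loopback linked server the hack looks roughly like this (the [LOOPBACK] name is hypothetical and must be hardcoded, which is exactly what makes it fragile; the parameter also has to be embedded in the query string):
-- [LOOPBACK] is a linked server configured to point back at this same instance
SELECT MyColumn
FROM OPENQUERY([LOOPBACK], 'EXEC dbo.myStoredProcedure 42') AS [MyTable];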
Here is a pretty good summary of the ways of sharing data between stored procedures: http://www.sommarskog.se/share_data.html
Basically it depends on what you want to do. The most common approaches are creating a temporary table prior to calling the stored procedure and having the procedure fill it, or having one permanent table that the stored procedure dumps its data into, which also contains the process ID.
Table Valued functions have been mentioned, but there are a number of restrictions when you create a function as opposed to a stored procedure, so they may or may not be right for you. The link provides a good guide to what is available.
SQL Server 2005 and SQL Server 2008 change the options a bit. SQL Server 2005+ makes working with XML much easier, so XML can be passed as an output variable and fairly easily "shredded" into a table using the XML methods nodes() and value(). I believe SQL Server 2008 allows table variables to be passed into stored procedures (although read-only). Since you cited SQL 2000, the 2005+ enhancements don't apply to you, but I mention them for completeness.
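For completeness, a sketch of that XML route on SQL Server 2005+ (the procedure signature and element names are hypothetical):
DECLARE @x xml;
EXEC dbo.myStoredProcedure @MyId, @result = @x OUTPUT; -- proc returns XML via an output parameter
-- "Shred" the XML back into rows and columns
SELECT n.value('(MyColumn)[1]', 'varchar(50)') AS MyColumn
FROM @x.nodes('/row') AS t(n);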
Most likely you'll go with a table-valued function, or with creating a temporary table prior to calling the stored procedure and then having the procedure populate it.
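A minimal sketch of that temp-table pattern (the column list is hypothetical and must match the procedure's result set exactly):
CREATE TABLE #Results (
    MyColumn varchar(50) -- must mirror the proc's output columns
);
INSERT INTO #Results
EXEC dbo.myStoredProcedure @MyId;
SELECT MyColumn FROM #Results;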
While working on a project, I used the following to insert the results of xp_readerrorlog (which, as far as I know, returns a table) into a temporary table created ahead of time.
INSERT INTO [tempdb].[dbo].[ErrorLogsTMP]
EXEC master.dbo.xp_readerrorlog
From the temporary table, select the columns you want.