Why am I getting a SqlCommand timeout in the application?

I am getting SqlCommand timeout errors when I debug the application, even though the stored procedure runs in less than 25 seconds in Management Studio. I set the timeout attribute to 180 seconds and still get the error. Any suggestions?

Where did you set the timeout attribute? I used to run into the same issue when I was setting the timeout in the SqlConnection string, but it turned out I also needed to set CommandTimeout on the SqlCommand itself.

25 seconds is a long time for a stored proc to be running. I'd suggest optimizing the query further.

Are you setting "Connect Timeout" in the connection string or CommandTimeout on the command? It's easy to mix the two up.
--EDIT
Also search your code to check whether your application is locking a table used by the SP.
It won't solve the problem, but logging the start/end of the procedure (on the database side, by putting the SP inside another SP) can tell you whether the problem occurs before the SP starts (because of the network, web server load, etc.) or while it is executing.
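A minimal sketch of such a wrapper, assuming the original procedure is called dbo.MyProc and that you add a small log table for it (both names are placeholders):

CREATE TABLE dbo.ProcTimingLog (
    LogId INT IDENTITY(1,1) PRIMARY KEY,
    Step VARCHAR(10) NOT NULL,
    LoggedAt DATETIME NOT NULL DEFAULT GETDATE()
);
GO
CREATE PROCEDURE dbo.MyProc_Logged
AS
BEGIN
    -- a row here proves the call actually reached the database
    INSERT INTO dbo.ProcTimingLog (Step) VALUES ('start');
    EXEC dbo.MyProc;  -- the original stored procedure
    -- a missing 'end' row means the timeout hit while the procedure was still running
    INSERT INTO dbo.ProcTimingLog (Step) VALUES ('end');
END
GO

Point the application at dbo.MyProc_Logged; comparing the start/end timestamps against the application log shows whether the 180 seconds are spent before the procedure starts or inside it.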
I hope it helps

Try running sp_recompile against that stored procedure, then try again from your application.
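For example (the procedure name here is just a placeholder):

EXEC sp_recompile N'dbo.MyStoredProc';
-- marks the procedure so a fresh execution plan is built on its next run,
-- which helps when the cached plan was built for untypical parameter values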

Related

MySQL Error 2013: Lost connection to MySQL server during query

I've read all the posts with the same or a very similar headline, but I still can't find a proper solution or explanation for my problem.
I'm working with MySQL Workbench 6.3 CE. I have been able to create a database with several tables, and to create a connection from Python to write data to it. Still, I had a problem with a VARCHAR field that needed to hold more than 45 characters. When I try to set it to a bigger limit, like VARCHAR(70), I get error 2013 saying my connection was closed during the query, no matter how many times I try or how high I set the timeout limits.
I'm using the above version of Workbench on Windows 10, and I'm trying to modify that field from Workbench. After that first failure, I can't drop a table either, nor can I connect from Python.
What is happening?
OK, apparently what was happening is that I had a lock, and there were a lot of queries stuck in the state "Waiting for table metadata lock".
I did the following in the Workbench query console:
SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE user = 'root';
That generates a KILL statement for each of those processes. I copied the list into a new tab and executed it, killing all of them. After that it worked again.
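A narrower check first (a sketch using the same information_schema.processlist view) shows only the sessions stuck in that state, so you can kill just the one holding things up:

SELECT id, user, state, time, info
FROM information_schema.processlist
WHERE state LIKE 'Waiting for table metadata lock';
-- the blocker is usually an older connection with an open transaction that touched the table;
-- running KILL on that one id releases the metadata lock without killing every root session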
Can anybody explain how I got into that situation, and what precautions I should take in my Python scripts to avoid it?
Thank you

Mysterious time out on .NET MVC application with SQL Server

I have a very peculiar problem and I'm looking for suggestions that might help me get to the bottom of it.
I have an application in .NET 3.5 (MVC3) on a SQL Server 2008 R2 database.
Locally and on two other servers, it runs fine. But on the live server there is a stored procedure that always times out after 30 seconds.
If I run the stored procedure on the database directly, it takes a couple of seconds. But if the stored procedure is called from the application, Profiler says it took over 30 seconds.
The same query that Profiler captures runs immediately if we run it directly on the DB.
Furthermore, the same problem doesn't occur on any of the other 3 local servers.
As you can understand, it's driving me nuts and I don't even have a clue how to diagnose this.
The event logs just show the timeout as a warning.
Has anyone had anything like this before and where could I start looking for a fix?
Many thanks
You probably have some locking taking place in your application that doesn't occur when running the query on the server.
To test this, run your query from the application using READ UNCOMMITTED or the NOLOCK hint. If that works, you need to review your sequence of calls or check whether your isolation level is too aggressive.
These can be tricky to nail down.
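A minimal version of that test, purely as a diagnostic (the procedure and table names are placeholders; don't leave this in production code):

-- run the same call the application makes, but read past other sessions' locks
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
EXEC dbo.MySlowProc;
-- or add the hint to the individual query inside the procedure:
-- SELECT ... FROM dbo.SomeTable WITH (NOLOCK) WHERE ...

If the timeout disappears with this in place, blocking rather than the query itself is the likely culprit.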

Talend - Lock wait timeout exceeded

I use the ETL tool Talend Open Studio (TOS). I want to transfer database A into database B, using a tMap component. When I use a tLogRow to look at the results, everything is OK: TOS shows the data correctly. But when I run the actual transfer, TOS reports "Lock wait timeout exceeded; try restarting transaction".
I don't understand this problem... reading the data works fine, but writing it fails.
Can you help me, please?
Try running your job using a single connection to MySQL (I assume you are using MySQL, since the error is a MySQL error).
The error above can occur when you attempt to insert/update/delete from two or more connections concurrently.
To create a single connection and have all components share it, you will need a pair of components: "tMysqlConnection" and "tMysqlCommit".
The Connection component should be placed before you attempt to query the database. Once you have it in the job, you can link the tMysqlInput components to it by selecting "use existing connection".
The Commit component will issue the commit command and close the transaction.
You will need Connection components for each separate DB server you are working with.
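Independently of the Talend job design, you can also look at the MySQL side while the job is hanging to see who holds the lock (a sketch; the 120-second value is arbitrary):

SHOW ENGINE INNODB STATUS;   -- the TRANSACTIONS section shows which transaction is waiting and which one blocks it
SHOW FULL PROCESSLIST;       -- lists every open connection and the statement it is running
SET GLOBAL innodb_lock_wait_timeout = 120;  -- raises the 50-second default, but only papers over the real locking issue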
Database A contains 300 articles. I think this problem is caused by Talend Open Studio: TOS couldn't process more than 100 articles at once. I tried "cutting" database A into three parts and then ran TOS again. The error disappeared. It's strange... but it works.

PostgreSQL: Session Timeout?

I am looking for a way to control the session timeout of a PostgreSQL (9.0) client (Windows).
When does a session die? What happens to it after it dies?
How can I force a session to die? (For example, it is "locked" on some wrong, long-running query, and I want to force the server to release its resources.)
Thanks for it:
dd
I am extending this to make it clearer:
The database needs to know which sessions are dead.
A dead session must be released, because it only holds resources; if that is not done, we end up with many locks, or we run out of available connections (reach the maximum).
Other databases (Firebird, EDB) define a timeout parameter for this.
When it is reached, the session is marked dead and the user's connection is aborted.
To avoid that, the client needs to periodically do something that extends the period.
There are three ways to reach the timeout:
1.) the client program hangs, freezes, or is closed;
2.) the network connection is broken;
3.) the client sends some very long query/stored procedure that doesn't finish.
If the timeout is not handled by the server, somebody's transaction, locks, etc. may stay alive for X hours, and you have only one way to remove them: restarting the DB server service.
Other databases treat dead sessions as no longer able to interact with the server, so the client gets an error and has to restart the client software.
Some databases support returning to an "inactive" but "not dead" session, and the client can then continue its work.
So, with this preface I ask my question again:
How can I control the client's session timeout under PostgreSQL? Via a system variable, an SQL parameter, etc.?
How can I extend this time?
What happens if a long query exceeds the period?
When does the PostgreSQL server release the resources held by the client?
Thanks:
dd
I don't understand the first part of your question, but to kill a running session you can use pg_terminate_backend().
To cancel just the query of a running session, use pg_cancel_backend().
Both functions are explained in the manual:
http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE
There are three ways to reach the timeout: 1.) the client program hangs, freezes, or is closed; 2.) the network connection is broken; 3.) the client sends some very long query/stored procedure that doesn't finish.
For 2, the tcp_keepalives_* settings might be useful: http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
For 3, there is a statement_timeout setting: http://www.postgresql.org/docs/8.4/static/runtime-config-client.html but this will only terminate the statement, not the connection.
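Putting those together, a sketch against a 9.0 server (on 9.0 the pg_stat_activity columns are procpid and current_query; from 9.2 on they are pid and query; 12345 is a placeholder process id):

-- see what every session is doing and whether it is waiting on a lock
SELECT procpid, usename, waiting, current_query FROM pg_stat_activity;

-- cancel only the running statement of one session (the session itself stays alive)
SELECT pg_cancel_backend(12345);

-- terminate the whole session and release everything it holds
SELECT pg_terminate_backend(12345);

-- abort any statement in the current session that runs longer than 5 minutes
SET statement_timeout = '5min';

As a side note, 9.0 has no built-in idle-session timeout; idle_in_transaction_session_timeout only arrived in 9.6, so abandoned sessions have to be cleaned up with the functions above or caught by the tcp_keepalives settings.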

BDE, Delphi, ODBC, SQL Native Client & Deadlock

We have some Delphi code that uses the BDE to access SQL Server 2008 through the SQL Server Native Client ODBC driver (2005 version). Our issue is that we're experiencing deadlocks in a loop doing inserts into multiple tables.
The whole loop is done within a [TDatabase].StartTransaction. Looking at SQL Server Profiler we clearly see that at one point during the loop the SPID (session ID?) changes, and then we naturally end up with a deadlock (both SPIDs doing inserts into the same table).
It seems like the BDE at some point opens a second connection to the DB...
(Although I would love to skip the BDE, it's currently not possible.)
Anyone with experiences to share?
In case your app is multithreaded: BDE is not threadsafe. You have to use a separate BDE session (explicitly created instance of TSession) for each thread; the global Session created automatically for the main thread is not sufficient. Also, all database access components (TDatabase, TQuery, etc.) can only be used in the context of the thread where their corresponding instance of TSession has been created.
Check in the ODBC configuration whether the SQL Server driver is set up to do connection pooling.
It appears that the Native Client installation activates it by default... (at least, my installation had connection pooling active and I never turned it on).
This probably comes too late for the asker, but maybe it helps others.
Every time there is a cursor that doesn't get closed, the BDE/ODBC combination will establish a new connection for subsequent queries. The "SPID change" is probably the result of a non-closed cursor.
To solve this problem you have to find the BDE component that caused this still-open cursor. Then you call a method that will eventually close the cursor (TTable.Close, TTable.Last, ...).
After that the "SPID change" should be gone, and with it the deadlock.
Some tips to find that component:
During the lock, execute the following statement (for example using Management Studio):
EXEC sp_who2.
Look in column BlkBy. The blocked connection has a number in it.
This number is the spid (Server Process ID) of the blocking connection.
Then you execute DBCC INPUTBUFFER(spid).
In the EventInfo column you will find the SQL statement that was issued by your program.
With that information you should be able to find the BDE-component that causes your trouble.
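Put together, the diagnosis session looks roughly like this (57 is only an example SPID):

EXEC sp_who2;               -- for the blocked SPID, the BlkBy column holds the SPID of the blocker
DBCC INPUTBUFFER(57);       -- replace 57 with the blocking SPID taken from BlkBy
-- the EventInfo column shows the last statement that connection sent,
-- which points at the BDE component holding the open cursor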
