Persistent/super procedure when WebSpeed is down

I have a question regarding persistent/super procedures in OpenEdge. If I run a procedure persistently, will that procedure still remain in memory when my WebSpeed is down, or do I have to run it again?

A persistent procedure will end when the agent ends. So if you restart WebSpeed, and therefore the agents, you will need to re-initialize the persistent procedure.
FWIW, starting a persistent procedure on an agent might not be a very good idea to begin with, especially if you do not understand its life cycle.

Related

Stored procedure hangs on statement.execute()

Why would a Snowflake stored procedure hang on a statement that works when executed outside the stored procedure? Further info: if I remove that statement from the stored procedure, the SP runs properly. How can this sort of thing be debugged?
(One more piece of info: running as a different user on a different schema, the SP works as intended.)
Update: running the SP on a different warehouse worked, so it might be a problem with the warehouse, not the schema.
Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works?
There can be multiple reasons: the query gets queued due to lack of resources, it's waiting for a lock to be released (if it's a transactional query), etc.
How can this sort of thing be debugged?
Check the Query History page in the Snowflake UI. If the statement executed by your procedure shows a queued status, you're likely running into a warehouse size limit or a maximum concurrency limit, which can be resolved by reconfiguring your warehouse (via multi-cluster auto-scaling and/or a larger warehouse size); see the sketch below.
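A minimal sketch of that check in Snowflake SQL, assuming you can query the INFORMATION_SCHEMA.QUERY_HISTORY table function and alter the warehouse; the warehouse name my_wh is a placeholder, and multi-cluster settings require an edition that supports them:

    -- Look for the statement the stored procedure issued and check its status.
    SELECT query_id, query_text, execution_status, start_time
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
    WHERE execution_status IN ('QUEUED', 'BLOCKED', 'RUNNING')
    ORDER BY start_time DESC;

    -- If statements sit in QUEUED, let the warehouse scale out or size up.
    -- my_wh is a placeholder warehouse name.
    ALTER WAREHOUSE my_wh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 3;
    ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';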

How to run a time-consuming Aurora MySQL stored procedure serverlessly, beyond the 300-second limit

We are running stored procedures on AWS Aurora-MySQL from AWS Lambda.
Some stored procedures take more than 300 seconds, so Lambda cannot usefully wait and keep the connection to MySQL alive until the process is done.
Is there any other serverless way to run a routine on Aurora without the time restriction?
If a stored procedure takes more than 300 seconds, I'd look into:
1) Optimizing the stored procedure itself.
2) Redesigning around asynchronous processing principles using SQS or a similar messaging service, and also looking into Aurora invoking Lambda once the stored procedure has finished (see the sketch after this list).
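A rough sketch of the second idea in Aurora MySQL SQL, assuming the cluster has an IAM role that allows it to invoke Lambda and that your Aurora MySQL version provides mysql.lambda_async; the procedure names, function ARN, and payload are placeholders:

    DELIMITER //
    CREATE PROCEDURE run_long_job_and_notify()
    BEGIN
      -- The slow work happens here, outside any Lambda timeout.
      -- long_running_job is a placeholder for your existing procedure.
      CALL long_running_job();
      -- Notify a Lambda function when the job is done instead of keeping a caller waiting.
      -- The ARN and payload below are placeholders.
      CALL mysql.lambda_async(
        'arn:aws:lambda:us-east-1:123456789012:function:job_done_handler',
        '{"job": "long_running_job", "status": "finished"}'
      );
    END //
    DELIMITER ;

Note that a plain CALL still blocks the caller's connection until the procedure returns, so the kick-off itself should come from something that is not subject to the 300-second limit (for example, a worker consuming from SQS).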

Stored procedure in Firebird executes very slowly

I wrote a stored procedure on a Firebird server. The procedure is used on several different servers and databases. On one of them, the procedure runs very slowly (a few hours), whereas on the other servers it takes 3-5 seconds. The indices in each database are the same.
Have any of you encountered such a problem? We made a backup and restored the database, but it did not help.
When I had such problems, it was always either a corrupted database (a SELECT on a table with 10 records took a few minutes) or index statistics that needed recalculation. Try checking and fixing the database with gfix. If recalculating the index statistics helps, consider adding a PLAN clause to your SQL statement (see the sketch below).
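A minimal sketch of the statistics and plan part in Firebird SQL; the index, table, and column names are placeholders, and the check/repair step itself is run from the shell with gfix rather than from SQL:

    -- Recalculate statistics for one suspect index (placeholder name).
    SET STATISTICS INDEX IDX_CUSTOMER_NAME;

    -- Or generate a SET STATISTICS statement for every user index and run the output.
    SELECT 'SET STATISTICS INDEX ' || TRIM(RDB$INDEX_NAME) || ';'
    FROM RDB$INDICES
    WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0;

    -- If fresh statistics fix it, you can also pin the plan explicitly
    -- (CUSTOMER / NAME / IDX_CUSTOMER_NAME are placeholders).
    SELECT C.NAME
    FROM CUSTOMER C
    WHERE C.NAME STARTING WITH 'A'
    PLAN (C INDEX (IDX_CUSTOMER_NAME));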

Editing Iron Speed files

Can I edit or update the content of a stored procedure in Iron Speed or not? If I update it through SQL Server Management Studio and then rebuild my application in Iron Speed, will my updated stored procedure be deleted or not? Please help me with this; I badly need your ideas. Thank you.
As long as the stored proc name stays the same, it has to work. I've done this before with custom stored procedures: I edit the content of the stored proc in SQL Server Management Studio and then resync the database with ISD. Just keep the stored proc name the same (see the sketch below).
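A minimal T-SQL sketch of that workflow, with a hypothetical procedure and table; ALTER PROCEDURE changes the body while keeping the name, so a later resync or rebuild in Iron Speed Designer still finds the same procedure:

    -- Edit the body in SSMS without dropping or renaming the procedure.
    -- dbo.GetCustomerOrders and dbo.Orders are placeholder names.
    ALTER PROCEDURE dbo.GetCustomerOrders
        @CustomerId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Updated logic goes here; the name and signature stay the same.
        SELECT OrderId, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;
    END;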

Mysterious timeout in a .NET MVC application with SQL Server

I have a very peculiar problem and I'm looking for suggestions that might help me get to the bottom of it.
I have an application in .NET 3.5 (MVC3) on a SQL Server 2008 R2 database.
Locally and on two other servers, it runs fine. But on the live server there is a stored procedure that always times out after 30 seconds.
If I run the stored procedure directly on the database, it takes a couple of seconds. But if the stored procedure is called from the application, the profiler says it took over 30 seconds.
The very same query that the profiler captures runs immediately if we run it directly on the DB.
Furthermore, the same problem doesn't occur on any of the other three servers.
As you can understand, it's driving me nuts and I don't even have a clue how to diagnose this.
The event logs just show the timeout as a warning.
Has anyone had anything like this before and where could I start looking for a fix?
Many thanks
You probably have some locking taking place in your application that doesn't occur when running the query on the server.
To test this, run your query from your application using READ UNCOMMITTED or the NOLOCK hint (see the sketch below). If it works, you need to check your sequence of calls, or check whether your isolation level is too aggressive.
These can be tricky to nail down.
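A minimal T-SQL sketch of that test; the procedure, parameters, table, and dates are placeholders:

    -- Option 1: relax the isolation level just for the test call.
    -- dbo.usp_SlowReport and its parameters are placeholders.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    EXEC dbo.usp_SlowReport @FromDate = '2012-01-01', @ToDate = '2012-12-31';

    -- Option 2: hint individual reads in a copy of the query.
    -- dbo.Orders is a placeholder table.
    SELECT o.OrderId, o.Total
    FROM dbo.Orders AS o WITH (NOLOCK)
    WHERE o.OrderDate >= '2012-01-01';

If the timeout disappears, blocking rather than the query plan is the likely culprit; treat NOLOCK as a diagnostic step, not a fix to leave in production code.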
