Stored procedure in Firebird executes very slowly - stored-procedures

I wrote a stored procedure for Firebird server. The procedure is used on several different servers and databases. On one of them the procedure runs very slowly (a few hours), whereas on the other servers it finishes in 3-5 seconds. The indices in each database are the same.
Have any of you encountered such a problem? We made a backup and restored the database, but it did not help.

When I had such problems, it was always either a corrupted database (a SELECT on a table with 10 records took a few minutes) or index statistics that simply needed recalculation. Try to check and fix the database with gfix. If recalculating the index statistics helped, consider adding a PLAN clause to your SQL statement.
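A minimal sketch of that sequence, assuming the database file is C:\data\mydb.fdb and the slow procedure relies on an index named IDX_MYTABLE_FIELD1 (both names are hypothetical): validate the file with gfix from the command line, then recalculate index selectivity from isql.
gfix -v -full C:\data\mydb.fdb -user SYSDBA -password masterkey
SET STATISTICS INDEX IDX_MYTABLE_FIELD1;
COMMIT;
Repeat the SET STATISTICS INDEX statement for each index the procedure uses.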

Related

Stored procedure hangs on statement.execute()

Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works? Further info: if I remove that statement from the stored procedure, the SP runs properly. How can this sort of thing be debugged?
(One more piece of info: running as a different user on a different schema, the SP works as intended.)
Update: running the SP on a different warehouse worked, so it might be a problem with the warehouse, not the schema.
Why would a Snowflake stored procedure hang on a statement that, when executed outside the stored procedure, works?
There can be multiple reasons: the query gets queued due to a lack of resources, it is waiting for a lock to be released (if it's a transactional query), etc.
How can this sort of thing be debugged?
Check the Query History UI page on Snowflake. If your procedure-executed statement is showing a queued status, you're likely running into a warehouse size limit or a maximum concurrency limit, which can be resolved by reconfiguring your warehouse (via auto-scaling and/or using higher warehouse sizes).
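If you prefer SQL to the UI, a rough equivalent of that check is the QUERY_HISTORY table function; this is only a sketch, and the RESULT_LIMIT and filters should be adjusted to your case:
SELECT query_id, query_text, warehouse_name, execution_status, queued_overload_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 100))
WHERE execution_status IN ('QUEUED', 'RUNNING')
ORDER BY start_time DESC;
A high queued_overload_time on the statement issued by the procedure points to warehouse contention rather than a problem with the SP itself.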

Receiving a SQL stored procedure error about changing schema but no changes have been made

We have several data feeds that run every evening using SSIS packages with SQL table data sources. Part of this standard process is a data engine we've built using stored procedures that run for each data feed and return that customer's data based on their specific parameters. The engine dumps the data into a SQL table, where it is retrieved by the package and then remains until the next evening's run.
About two weeks ago we started to intermittently get the following error when running these stored procedures (which are executed using SQL Agent):
"INSERT EXEC failed because the stored procedure altered the schema of the target table. [SQLSTATE 42000] (Error 556). The step failed."
These stored procedures have been running for months, some even years, without being changed. The errors just started occurring intermittently. Inside the stored procedure we do have a temporary table that receives the engine data, and a table that is dropped and re-created using a statement like this:
SELECT field1, field2 INTO sqlTable FROM #tempTable
I looked for a SQL update or something else that may have changed and caused these errors all of a sudden, but I can't find anything. It has occurred intermittently in several different stored procedures that all have this same kind of structure, but I can't identify any particular reason. It will happen one night and not another, to one stored procedure and not another just like it. Any idea what could cause this?
We are running Microsoft SQL Server 2016 Standard 64-bit (13.0.4604.0) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor). This is all on a VM in the Azure environment.
If you are using "INSERT ... EXEC" and have Query Store enabled, that might be the reason.
Query Store periodically runs an automatic cleanup.
This has turned out to cause problems when a stored procedure makes a call to another stored procedure by using "INSERT…EXEC" syntax.
This is only an issue with SQL Server 2016
For more details and possible solution, see: https://support.microsoft.com/en-us/help/4465511/error-556-insert-exec-failed-stored-procedure-altered-table-schema
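A quick way to check whether Query Store is active on the affected database, and, if you accept losing Query Store data until the fix from that article is applied, to switch it off as a stopgap (YourDatabase is a placeholder):
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;

ALTER DATABASE [YourDatabase] SET QUERY_STORE = OFF;
Run the SELECT in the context of the database that raises error 556.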

Editing Iron Speed files

Can I edit or update the content of a stored procedure in Iron Speed or not? If I update it through SQL Server Management Studio and then rebuild my application in Iron Speed, will my updated stored procedure be deleted or not? Please help me with this; I badly need your ideas. Thank you.
As long as the stored proc name stays the same, it has to work. I've done this before with custom stored procedures: I edit the content of the stored proc in SQL Server Management Studio and then resync the database with ISD. Just keep the stored proc name the same.

SQL Server Express: How to determine SP memory usage

I am developing with Microsoft SQL Server 2008 R2 Express.
I have a stored procedure that uses temp tables and outputs some processed data usually within 1 second.
Over a few months, my DB has gathered a lot of data, almost reaching the 10 GB limit. At that point, this particular stored procedure started taking as long as 5 minutes for the same input parameters. After I emptied some of the huge tables in the DB, it got back to normal.
After this incident, I am worried that my stored procedure needs more space in the DB than necessary. How can I be sure? Any leads?
Thanks already
Jyotsna
Follow this article
Another old-school way is to run sp_who2, find the SPID related to your database, and look at its CPU and IO usage.
To validate, run DBCC INPUTBUFFER(spid).
Also check the STATISTICS of the SP in the original scenario, without purging data from the tables:
SET STATISTICS IO ON
EXEC [YourSPName]
Look at the logical reads; also refer to the article.
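Put together, a rough diagnostic pass looks like this (53 is just an example SPID, and [YourSPName] is the placeholder used above; substitute your own values):
EXEC sp_who2;                 -- find the SPID for your database, note CPUTime and DiskIO
DBCC INPUTBUFFER(53);         -- confirm which statement that SPID is running
SET STATISTICS IO ON;
EXEC [YourSPName];            -- compare logical reads before and after purging data
SET STATISTICS IO OFF;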

Batch Move Data from TADOTable to MySQL TMyTable

I am trying to import a table from an old database (MS Access) to a MySQL server using TCRBatchMove in Delphi 2007.
The program fetches data from the legacy database over an ODBC connection and stores it on the local hard drive using TADOTable.SaveToFile(). The second part of the program reads this file into another TADOTable and uses TCRBatchMove to transfer it to a MySQL server (via DevArt's TMyTable). In this process the batch move appears to be extremely slow for some reason.
The amount of data in the following trial is about 100,000 records, each with about 120 fields. Most of the fields are integers and VARCHARs (each VARCHAR less than 32 chars).
The performance figures I obtained are:
Time taken to bring data to local file over ODBC connection: 17 seconds
Time taken to load data from local file into TADOTable: 3 seconds
Time taken by TCRBatchMove to move data from TADOTable to TMyTable: > 30 minutes
The MySQL server is running locally on the development machine (an i7 at 2.8 GHz), and the database is otherwise very snappy.
Why is the batch move so slow at pushing data to the MySQL server? Is there a way to speed up this task? Or is there a better way to accomplish this?
Not really an answer, but I'm running out of space in the comments.
MySQL has a statement called LOAD DATA INFILE.
see: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
You can use that to time the fastest time possible to insert data. This will give you a baseline for the insert time into MySQL and allow you to pinpoint whether the delay is in MySQL or Delphi. If you have the source for TMyTable, you can use a profiler as well.
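A minimal sketch of such a baseline test, assuming the legacy data has been exported to a CSV file whose columns match the target table (the file path and table name here are placeholders):
LOAD DATA INFILE '/tmp/legacy_export.csv'
INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
If this loads the 100,000 rows in seconds, the bottleneck is on the Delphi/component side rather than in MySQL.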
Another option is to download ZEOS data access components at:
http://sourceforge.net/projects/zeoslib/
If there is some snafu in the component you're using, a change of toolset might fix the problem. (Devart's components are usually excellent, though.)
On the MySQL side you can disable index updates before the bulk insert and re-enable the index afterwards. If you have a lot of inserts, that usually works out faster (see the sketch after the settings below).
See: https://stackoverflow.com/a/9524988/650492
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
your insert here
SET autocommit=1;
SET unique_checks=1;
SET foreign_key_checks=1;
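For the index-update point above, the explicit form on a MyISAM table is sketched here (target_table is a placeholder; InnoDB largely ignores DISABLE KEYS, so there the session settings above are the main lever):
ALTER TABLE target_table DISABLE KEYS;
-- your bulk insert here
ALTER TABLE target_table ENABLE KEYS;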
