I'm migrating a currently functional SSIS package to another server that is similar to the one it currently runs on.
However, the stored procedures I use to select distinct rows from the source table and insert them into the destination table stop responding when I run the package on the new server.
The sp_who2 results show CPUTime and DiskIO increasing (meaning the query is not blocked), and SELECT COUNT(*) doesn't return (it keeps running), so I use the sp_spaceused procedure to check the destination table's row count, which sometimes exceeds the number of rows in the source table!
Stopping the package doesn't stop the query, and I have to ask the DBA to manually kill the process, which incurs a long rollback. Letting the package run continues the execution for days!
I've tried using a Data Flow Task for the same operation, but it takes so much longer than the stored procedure that I'm trying to avoid it.
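For reference, the procedure is essentially doing something like this (table and column names are placeholders, not my real schema):

-- Placeholder names; the real procedure follows this INSERT ... SELECT DISTINCT pattern
INSERT INTO dbo.DestinationTable (ColA, ColB, ColC)
SELECT DISTINCT ColA, ColB, ColC
FROM dbo.SourceTable;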
Related
We see an issue occasionally: a stored procedure running a SQL statement runs very slowly, while the same SQL run from the command line runs very fast. It seems the stored procedure uses a different access path. The workaround for us is to drop and recreate the procedure, after which it picks up the right plan.
Is there a way to execute a stored procedure with an instruction to regenerate the execution plan at run time, so as to get the best plan every time?
You probably don't want to recompile plans every time you call your procedure, as you lose the performance benefit of having the procedure in the first place.
When you do need to recompile it, which shouldn't happen too frequently in a stable environment, you can use the REBIND_ROUTINE_PACKAGE system stored procedure:
-- 'P' indicates a procedure; 'YOUR_SP' is the procedure to rebind
call SYSPROC.REBIND_ROUTINE_PACKAGE('P', 'YOUR_SP', '')
If you do decide that you want the plan to be recreated each time the procedure is called, you can set the REOPT ALWAYS bind option when you create the procedure, using one of the many ways described in the manual, for example by executing call SYSPROC.SET_ROUTINE_OPTS('REOPT ALWAYS') before creating the procedure.
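Putting the two statements together, a minimal sketch (MY_PROC and MY_TABLE are placeholder names; when running from the CLP you will need an alternate statement terminator for the procedure body):

-- Make SQL routines created later in this session bind with REOPT ALWAYS
CALL SYSPROC.SET_ROUTINE_OPTS('REOPT ALWAYS')

-- Every statement in this procedure now has its access plan
-- re-optimized at each execution
CREATE PROCEDURE MY_PROC (IN p_status VARCHAR(10))
LANGUAGE SQL
BEGIN
  UPDATE MY_TABLE SET processed = 1 WHERE status = p_status;
END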
I'm coming here because I've exhausted my knowledge and search skills.
I have a very simple application with Delphi XE2 and FireDAC (version 8.0.3) which uses TADQuery to insert a record into a table that contains 800k rows (10 columns); the primary key is an incremental BigInt.
The first insert runs very slowly and subsequent inserts are almost instantaneous, until I restart the application; after restarting, the first insert runs very slowly again.
I tried running the same statement in IBExpert; all statements are instantaneous.
So it must be something in the component, right?
I'm using TADConnection + TADQuery to run all statements.
Maybe some properties in the TADConnection or TADQuery?
Any help will be welcome.
I have a piece of SQL that takes around 8 secs to load (pretty chunky).
When I use this as a proc in a new report, it hangs while trying to use preview mode.
I have restarted Reporting Services and deleted and re-added the dataset, but it completely kills everything for a good 5-10 minutes.
There is nothing to strip back, as this is the only dataset and there are no subreports running off it.
I would look at converting the stored procedure to populate a table instead of running on the fly in SSRS (assuming that is what happens), so that when you run the SSRS report it queries a flat table, as sketched below. Schedule the stored procedure to run at regular intervals to keep the report up to date.
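A rough T-SQL sketch of that approach, assuming placeholder names (dbo.ReportData for the flat table, dbo.usp_BuildReportData for your existing procedure); the refresh can be scheduled with a SQL Server Agent job:

-- Refresh the flat table from the slow procedure's result set
TRUNCATE TABLE dbo.ReportData;
INSERT INTO dbo.ReportData (ColA, ColB, ColC)
EXEC dbo.usp_BuildReportData;

-- The SSRS dataset then just reads the flat table
SELECT ColA, ColB, ColC FROM dbo.ReportData;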
Also check how many rows are being returned and whether the SSRS report has paging set up (where it limits how many rows it will display).
I have a very strange problem with transactions in Interbase 7.5 which seem to be stuck.
I can track the problem with IBConsole -> right click DB -> Performance Monitor -> Transactions
Usually this list should show only a few active transactions, but I get several hundred active transactions when I start my application (a web module for an Apache webserver using Delphi 7 Interbase components, e.g. IBQuery, IBTransaction, ...).
Transaction type is always listed as snapshot, if this is of relevance.
I have already triple checked all sql statements and cannot find anything that should produce such problems...
Is there any way to get the SQL statements of a specific transaction?
Any other suggestion how to find such a problem would be very welcome.
Is there any way to get the SQL statements of a specific transaction?
Yes, you can SELECT from TMP$STATEMENTS WHERE TRANSACTION_ID = .... That's from memory, but should get you started.
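Something along these lines; as noted, this is from memory, so verify the TMP$STATEMENTS column names against the InterBase 7.5 system-table documentation:

SELECT *
FROM TMP$STATEMENTS
WHERE TMP$TRANSACTION_ID = 123 /* ID from Performance Monitor; the column may be
                                  named TRANSACTION_ID depending on version */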
In IB Performance Monitor, you can locate the transaction from the statements tab, using the button on the toolbar. Can't remember if you can go the other way in that app. It's been a long time since I wrote it!
Active IBX data-sets require an active transaction all the time. If you don't have active data-sets, just don't forget to commit all the active transactions.
If you have active data-sets, you can configure all your components to use the same TIBTransaction object, and you can also configure that single TIBTransaction to commit or roll back after an idle time-out period via the IdleTimer and DefaultAction properties.
Terminating the transaction (by manually or automatically committing or rolling back) will close all the linked datasets (TIBQuery, TIBTable and the like).
You may be tempted to use the CommitRetaining or RollbackRetaining methods to terminate the transaction without closing the related data-sets, but this may affect the performance of the server, and my advice is to always avoid them.
If you want to improve your application, consider changing your database connection layer or introducing an in-memory-capable dataset over IBX, for example Delphi's TClientDataSet. It allows you to retrieve data and keep it in memory while closing all the underlying datasets (and transactions), and still lets you use the traditional Insert/Append/Edit/Delete methods to modify the data and then apply those changes to the database in a new short-lived transaction.
I'm using SSIS to synchronize data between two databases. I've used SSIS and DTS in the past, but I generally write an application for things of this nature (I'm a coder and it just comes easier to me).
In my package I use a SQL Task that returns about 15,000 rows. I’ve hooked that up to a Foreach Container, and within that I assign the resultset column values to variables, and then map those variables to parameters that are fed to another SQL Task.
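For context, the statement in the inner SQL Task looks roughly like this (placeholder names; the ? markers are the OLE DB parameter placeholders the variables are mapped to):

UPDATE dbo.DestinationTable
SET    ColA = ?, ColB = ?
WHERE  KeyId = ?;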
The problem I’m having is with debugging, and not just more complicated debugging like breakpoints and evaluating values at runtime. I simply mean that if I run this with debugging rather than without, it takes hours to complete.
I ended up rewriting the process in Delphi, and the following is what I came up with:
Full Push of Data:
This pulls 15,000 rows, updates a destination table for each row, then pulls 11,000 rows and updates a destination table for each row.
Debugging:
Delphi App: 139s
SSIS: 4 hours, 46 minutes
Not Debugging:
Delphi App: 132s
SSIS: 384s
Update of Data:
This pulls 3,000 rows, but no updates are needed or made to the destination table. It then pulls 11,000 rows but, again, no updates are needed or made to the destination table.
Debugging:
Delphi App: 42s
SSIS: 1 hour, 10 minutes
Not Debugging:
Delphi App: 34s
SSIS: 205s
The odd thing is, I get the feeling that most of this time spent debugging is just updating UI elements in Visual Studio. If I watch the progress tab, a node is added to a tree for each iteration (thousands total), and this gets slower and slower as the process goes on. Trying to stop debugging usually doesn't work, as Visual Studio seems caught in a loop updating the UI. If I check the SQL Server profiler, no actual work is being done. I'm not sure if the machine matters, but it should be more than up to the job (quad core, 4 GB of RAM, 512 MB video card).
Is this sort of behavior normal? As I’ve said I’m a coder by trade, so I have no problem writing an app for this sort of thing (in fact it takes much less time for me to code an application than “draw” it in SSIS, but I figure that margin will shrink with more work done in SSIS), but I’m trying to figure out where something like SSIS and DTS would fit into my toolbox. So far nothing about it has really impressed me. Maybe I’m misusing or abusing SSIS in some way?
Any help would be greatly appreciated, thanks in advance!
SSIS control flow and loops are not very high-performance, and are not designed for processing these amounts of data, especially during debugging: before and after each task execution, the debugger sends notifications to the designer process, which updates the colors of the shapes, and this can be slow.
You could get much better performance using a data flow. A data flow does not operate on single rows; it works with buffers of rows, which is much faster, and the debugger is only notified at the beginning/end of each buffer, so its impact is less noticeable.
SSIS is not designed to do a foreach like that. If you are doing something for each row coming in, you probably want to read those rows into a data flow and then, using a lookup or merge join, determine whether to do an INSERT (these happen in bulk) or use a database command object for the SQL UPDATE commands (a better-performing option is to batch these into a staging table and do a single UPDATE).
In another typical sync situation, you read all the data into a staging table and do a SQL Server UPDATE on the existing rows (INNER JOIN) and INSERT on the new rows (LEFT JOIN, rhs IS NULL), as sketched below. There is also the possibility of using linked servers, but joins over those can be slow, since all (or a lot of) the data may have to come across the network.
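A minimal T-SQL sketch of that staging-table pattern; Staging, Target, Id, and Payload are placeholder names:

-- Update rows that already exist in the target
UPDATE t
SET    t.Payload = s.Payload
FROM   dbo.Target AS t
INNER JOIN dbo.Staging AS s ON s.Id = t.Id;

-- Insert rows that are new (no match on the target side)
INSERT INTO dbo.Target (Id, Payload)
SELECT s.Id, s.Payload
FROM   dbo.Staging AS s
LEFT JOIN dbo.Target AS t ON t.Id = s.Id
WHERE  t.Id IS NULL;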
I have SSIS packages that regularly import 24 million rows, including handling data conversion and validation and slowly changing dimensions using the TableDifference component, and they perform relatively quickly for that large amount of data versus a separate client program.
I have noticed this behavior too. I had an SSIS package for moves that handled somewhere in the neighborhood of 3 million entries; it was not possible to debug it, as it would run for about 3-4 days.
SSIS is still the way I did it; I just don't "debug" with SSIS when working with the full datasets, I simply run the packages. If I must debug, I use very small datasets.