I have a piece of SQL that takes around 8 seconds to run (pretty chunky).
When using this as a stored procedure in a new report, it hangs while trying to use preview mode.
I have restarted Reporting Services and deleted and re-added the dataset, but it completely kills everything for a good 5-10 minutes.
There is nothing I can strip back, as this is the only dataset and there are no subreports running off it.
I would look at converting the stored procedure to populate a table instead of running the query on the fly in SSRS (assuming that is what happens), so that when you run the SSRS report it queries a flat table. Schedule the stored procedure to run at regular intervals to keep the report up to date (see the sketch below).
Also check how many rows are being returned and whether the SSRS report has paging set up (where it limits how many rows it will display).
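As a rough illustration of the flattened-table approach, here is a minimal sketch assuming SQL Server; dbo.ReportData, dbo.usp_RefreshReportData, and the column names are hypothetical:

```sql
-- Hypothetical refresh procedure: repopulate a flat reporting table so the
-- SSRS dataset only ever does a cheap SELECT against dbo.ReportData.
CREATE PROCEDURE dbo.usp_RefreshReportData
AS
BEGIN
    SET NOCOUNT ON;

    TRUNCATE TABLE dbo.ReportData;

    INSERT INTO dbo.ReportData (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM   dbo.SomeExpensiveSource;   -- the slow 8-second query goes here
END;
```

Schedule the procedure with a SQL Server Agent job, and point the SSRS dataset at a plain SELECT from dbo.ReportData.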
I'm coming here because I've exhausted my knowledge and search skills.
I have a very simple application with Delphi XE2 and FireDAC (version 8.0.3) which uses a TADQuery to insert a record into a table that contains 800k rows (10 columns); the primary key is an incremental BigInt.
The first insert runs very slowly and the subsequent inserts are almost instantaneous until I restart the application; after restarting the application, the first insert runs very slowly again.
I tried running the same statement in IBExpert, and all statements are instantaneous.
So it must be something in the component, right?
I'm using TADConnection + TADQuery to run all statements.
Maybe some property on the TADConnection or TADQuery?
Any help will be welcome.
I'm migrating a currently functional SSIS package onto another server which is similar to the one it currently runs on.
However, the stored procedure I use to select distinct rows from the source table and insert them into the destination table stops responding when I try to run the package on the new server.
The sp_who2 results show CPUTime and DiskIO increasing (meaning the query is not blocked) and SELECT COUNT(*) doesn't work (it keeps on running), so I use the sp_spaceused procedure to see the number of rows, which sometimes exceeds the number of rows in the source table!
Stopping the package doesn't stop the query, and I have to ask the DBA to manually kill the process, which incurs a long rollback. Letting the package run continues the execution for days!
I've tried using a Data Flow task for the same job, but it takes so much more time compared to the stored procedure that I'm trying to avoid using it.
This is very strange. I have a CR that takes over 30 minutes to run. It uses 5 large tables and queries the server. I made a View on the server, which is IBM i, to gather the data there. For some reason it is not giving me data in the CR past 08/12. When I query past that date on the server, it does have data, and even if I make a quick report in CR it will show all the data, including 2013.
The reason may possibly be this:
When I made the View, I mistakenly used a mix of databases, and one of the two databases was being used as part of a data purge, so it may not have had data past 8/12.
But since then, I have also modified the View to add some new columns, and it does pick these up and even shows them in the data that it does return (up to 8/12).
So this would tell me that the CR is fully using the new View.
So I could recreate the CR, but this is rather tedious. Perhaps there is one thing I am not doing?
Crystal Reports generally does better at reporting than at processing a query. For a faster and easier-to-debug report, it's often better to create a procedure in your database that joins together the data from the various sources. Once you have the data you want, use Crystal to display that data.
In other words, try to avoid doing any more work in Crystal than you have to. Sure, the grouping and headers and pretty formatting will be done there. But all of the querying, joining, and sorting is better done in your database. If the query is slow there, then you can optimize there. If the wrong data is returned, you fix your procedure until it is returning what you want.
An additional benefit is when the report needs to change. If the data needs to come from a different location, you can modify the procedure and never touch Crystal. If the formatting needs to change, you can modify the Crystal and never touch the procedure. You're changing less and thus don't have to test everything.
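As a rough illustration of pushing the joins into a procedure, here is a minimal sketch assuming a SQL Server-style dialect; the procedure, table, and column names are all hypothetical:

```sql
-- Hypothetical reporting procedure: do the joining, filtering, and sorting
-- in the database and hand Crystal a single, ready-to-format result set.
CREATE PROCEDURE dbo.usp_SalesReportData
    @FromDate DATE,
    @ToDate   DATE
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.OrderID,
           o.OrderDate,
           c.CustomerName,
           d.Quantity,
           d.LineTotal
    FROM   dbo.Orders       AS o
    JOIN   dbo.Customers    AS c ON c.CustomerID = o.CustomerID
    JOIN   dbo.OrderDetails AS d ON d.OrderID    = o.OrderID
    WHERE  o.OrderDate BETWEEN @FromDate AND @ToDate
    ORDER  BY o.OrderDate;
END;
```

Crystal then only has to group and format the rows the procedure returns.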
Is the Crystal report attached to a scratch server?
If you are using SQL Server, you can modify the SQL that constitutes your view by qualifying the table names like this: databasename..tablename. I'm not certain how to do the equivalent in other DBMSs.
If you modify your view like that, so that it is querying tables from the correct, non-purged database, and you are still not getting data more recent than 8/12, then check whether there are constraints in the WHERE and/or HAVING clauses, or implicit/explicit constraints in the ON clauses of the JOINs.
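For example, a view that pulls from the live, non-purged database might look like this; this is a minimal sketch assuming SQL Server, and ReportDB plus the table and column names are hypothetical:

```sql
-- Hypothetical view with a database-qualified table name; the two dots mean
-- "use the default schema (dbo) in that database".
ALTER VIEW dbo.vw_ReportData
AS
SELECT s.OrderID,
       s.OrderDate,
       s.Amount
FROM   ReportDB..SalesOrders AS s;   -- live database, not the purged copy
```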
I have two applications (server and client) that use a TQuery connected to a TClientDataSet through a TDCOMConnection,
and in some cases the ClientDataSet opens about 300,000 records and then the application throws the exception "Temporary table resource limit".
Is there any workaround for this (other than "do not open such a huge dataset")?
Update: oops, I'm sorry, it is 300K records, not 3 million.
The error might be from the TQuery rather than the TClientDataSet. A TQuery creates a temporary table, and it might be this limit that you are hitting. That said, loading 3,000,000 records into a TClientDataSet is a bad idea as well, since it will try to load every record into memory - which may be possible if they are only a few bytes each, but it is probably still going to kill your machine (at 1 KB each you are going to need a minimum of 3 GB of RAM).
You should try to break your data into smaller chunks. If it is the TQuery failing, this will mean adjusting the SQL (fewer fields / fewer records) or moving to a better database (the BDE is getting a little tired, after all).
You have the answer already. Don't open such a huge dataset in a ClientDataSet (CDS).
Three million rows in a CDS is a huge memory load (depending on the size of each row, it can be gigantic).
The whole purpose of using a CDS is to work quickly with small datasets that can be manipulated in memory. Adding that many rows is ridiculous; use a real dataset instead, or redesign things so you don't need to retrieve so many rows at a time.
Over 3 million records is way too many to handle at once. My guess is that you are performing an export or something similar that requires that many records to be sent down the wire. One method you could use to reduce this issue would be to have the middle tier generate an export file and then deliver that file to the client (preferably compressing it first using ZLIB or something similar).
If you are pulling data back to the client for viewing purposes, then consider sending summary information only and allowing the client to dig their way through the data a portion at a time. Your users will thank you, because performance will go way up and they won't have to dig through records they don't care about.
EDIT
Even 300,000 records is way too many to handle at once. If you had that many pennies, would you be able to carry them all? But if you converted them into larger denominations, you could. If you're sending data to the client for a report, then I strongly suggest a summary approach: give them the big picture and let them drill slowly into the data. Send grouped data and then let them open it up gradually.
If this is a search results screen, then set a limit on the number of records to be returned, plus one. For example, to display 100 records, set the limit to 101. Still display only 100; the presence of the last record means that there were MORE than 100 records, so the customer needs to adjust their search criteria to return a smaller subset (see the sketch below).
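A minimal sketch of the "limit plus one" pattern, assuming SQL Server syntax and a hypothetical Customers table and search parameter:

```sql
-- Fetch one more row than we intend to display; the extra row only tells us
-- that more matches exist beyond the current page.
DECLARE @PageSize   INT           = 100;
DECLARE @SearchTerm NVARCHAR(100) = N'Smi';

SELECT TOP (@PageSize + 1)
       CustomerID,
       CustomerName
FROM   dbo.Customers
WHERE  CustomerName LIKE @SearchTerm + '%'
ORDER  BY CustomerName;

-- If @PageSize + 1 rows come back, show only the first @PageSize and prompt
-- the user to narrow their search criteria.
```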
"Temporary table resource limit" is not a limit for a single query; it is the limit for all open queries together. So one solution may be to close all other queries at that point.
If it is not possible for you to use an ADO connection, you can also design a paging mechanism to query the data page by page (see the sketch below).
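A minimal sketch of keyset paging, assuming SQL Server syntax and a hypothetical Orders table with an incremental key; the client remembers the last key it fetched and asks for the next page:

```sql
-- Keyset paging: fetch the next 100 rows after the last key the client has
-- already seen. @LastID starts at 0 for the first page.
DECLARE @LastID BIGINT = 0;

SELECT TOP (100)
       OrderID,
       OrderDate,
       CustomerID
FROM   dbo.Orders
WHERE  OrderID > @LastID
ORDER  BY OrderID;

-- After displaying the page, set @LastID to the largest OrderID returned and
-- rerun the query to get the next page.
```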
GOOD LUCK
I’m using SSIS to synchronize data between two databases. I’ve used SSIS and DTS in the past, but I generally write an application for things of this nature (I’m a coder and it just comes easier to me).
In my package I use a SQL Task that returns about 15,000 rows. I’ve hooked that up to a Foreach Container, and within that I assign the resultset column values to variables, and then map those variables to parameters that are fed to another SQL Task.
The problem I’m having is with debugging, and not just more complicated debugging like breakpoints and evaluating values at runtime. I simply mean that if I run this with debugging rather than without, it takes hours to complete.
I ended up rewriting the process in Delphi, and the following is what I came up with:
Full Push of Data:
This pulls 15,000 rows, updates a destination table for each row, then pulls 11,000 rows and updates a destination table for each row.
Debugging:
Delphi App: 139s
SSIS: 4 hours, 46 minutes
Not Debugging:
Delphi App: 132s
SSIS: 384s
Update of Data:
This pulls 3,000 rows, but no updates are needed or made to the destination table. It then pulls 11,000 rows but, again, no updates are needed or made to the destination table.
Debugging:
Delphi App: 42s
SSIS: 1 hour, 10 minutes
Not Debugging:
Delphi App: 34s
SSIS: 205s
The odd thing is, I get the feeling that most of the time spent debugging is just updating UI elements in Visual Studio. If I watch the Progress tab, a node is added to a tree for each iteration (thousands in total), and this gets slower and slower as the process goes on. Trying to stop debugging usually doesn’t work, as Visual Studio seems caught in a loop updating the UI. If I check the SQL Server profiler, no actual work is being done. I'm not sure if the machine matters, but it should be more than up to the job (quad core, 4 GB of RAM, 512 MB video card).
Is this sort of behavior normal? As I’ve said, I’m a coder by trade, so I have no problem writing an app for this sort of thing (in fact, it takes much less time for me to code an application than to “draw” it in SSIS, though I figure that margin will shrink as I do more work in SSIS), but I’m trying to figure out where something like SSIS or DTS would fit into my toolbox. So far nothing about it has really impressed me. Maybe I’m misusing or abusing SSIS in some way?
Any help would be greatly appreciated, thanks in advance!
SSIS control flow and loops are not very high-performance and are not designed for processing these amounts of data, especially during debugging: before and after each task execution, the debugger sends notifications to the designer process, which updates the colors of the shapes, and this can be slow.
You would get much better performance using a data flow. A data flow does not operate on single rows; it works with buffers of rows - much faster - and the debugger is only notified about the beginning/end of each buffer, so its impact is far less noticeable.
SSIS is not designed to do a foreach like that. If you are doing something for each incoming row, you probably want to read those rows into a data flow and then, using a Lookup or Merge Join, determine whether to do an INSERT (these happen in bulk) or send the row to a database command object that issues individual SQL UPDATE commands (a better-performing option is to batch these into a staging table and do a single UPDATE).
In another typical sync situation, you read all the data into a staging table and do a SQL Server UPDATE on the existing rows (INNER JOIN) and an INSERT for the new rows (LEFT JOIN, right-hand side IS NULL), as in the sketch below. There is also the possibility of using linked servers, but joins over those can be slow, since all (or a lot of) the data may have to come across the network.
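A minimal sketch of the staging-table sync pattern, assuming SQL Server and hypothetical dbo.Target and dbo.Staging tables keyed on ID:

```sql
-- Update rows that already exist in the target (INNER JOIN).
UPDATE t
SET    t.Name  = s.Name,
       t.Value = s.Value
FROM   dbo.Target AS t
INNER JOIN dbo.Staging AS s
        ON s.ID = t.ID;

-- Insert rows that are new (LEFT JOIN, right-hand side IS NULL).
INSERT INTO dbo.Target (ID, Name, Value)
SELECT s.ID, s.Name, s.Value
FROM   dbo.Staging AS s
LEFT JOIN dbo.Target AS t
       ON t.ID = s.ID
WHERE  t.ID IS NULL;
```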
I have SSIS packages that regularly import 24 million rows, including handling data conversion, validation, and slowly changing dimensions using the TableDifference component, and they perform relatively quickly for that amount of data compared with a separate client program.
I have noticed this behavior too. I had an SSIS package for moves that processed somewhere in the neighborhood of 3 million entries, and it was not possible to debug it as it would run for about 3-4 days.
SSIS is still the way I did it; I just don't "debug" in SSIS - I run the packages normally when working with the full datasets. If I must debug, I use very small datasets.