BIDS SSRS report query timeout while using a stored procedure, with timeout settings set appropriately

I've run into a timeout issue while executing a stored procedure for an SSRS report I've created in Business Intelligence Development Studio (BIDS). My stored procedure is pretty large and on average takes nearly 4 minutes to execute in SQL Server Management Studio. So I've accommodated for this by increasing the dataset's "Time out (in seconds)" setting to 600 seconds (10 minutes). I've also increased both the Query Timeout and the Connection Timeout under Tools -> Options -> Business Intelligence Designers to 600 seconds.
Lastly, I've since created two other reports that use stored procedures with no problems (they are a lot smaller and take roughly 30 seconds to execute). For my dataset properties, I always use Query type "Text" and call the stored procedure with the EXEC command.
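(For context, the dataset query text is just a plain EXEC call along these lines - the procedure and parameter names below are placeholders, not the real ones from my report:)
EXEC dbo.usp_MyLargeReport @StartDate, @EndDate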
Any ideas as to why my stored procedure of interest is still timing out?
Below is the error message I receive after clicking "Refresh Fields":
"Could not create a list of fields for the query. Verify that you can connect to the data source and that your query syntax is correct."
Details
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated."
Thank you for your time.

Check the Add Key="DatabaseQueryTimeout" Value="120" setting in your rsreportserver.config file. You may need to increase it there also.
More info on that file:
http://msdn.microsoft.com/en-us/library/ms157273.aspx
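If you do need to change it there, the entry sits in the <Service> section of rsreportserver.config and looks roughly like the sketch below (value is in seconds; check the linked documentation for the exact placement in your version):
<Service>
    <!-- other entries omitted -->
    <Add Key="DatabaseQueryTimeout" Value="600"/>
</Service>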
Also, in addition to what the first commenter on your post stated, in my experience if you are rendering to PDF, those can time out as well. Your large dataset is returned within a reasonable amount of time, but the rendering of the PDF can take forever. Try rendering to Excel. The BIDS results will render rather quickly, but exporting the results is what can cause an issue.

Related

Getting actual memory usage per user session in SSAS tabular model

I'm trying to build a report which would show actual memory usage per user session when working with a particular SSAS tabular in-memory model. The model itself is relatively big (~100 GB in memory) and the test queries are relatively heavy: no filters, lowest granularity level, a couple of SUM measures, plus exporting 30k rows to CSV.
First, I tried querying the following DMV:
select SESSION_SPID
,SESSION_CONNECTION_ID
,SESSION_USER_NAME
,SESSION_CURRENT_DATABASE
,SESSION_USED_MEMORY
,SESSION_WRITES
,SESSION_WRITE_KB
,SESSION_READS
,SESSION_READ_KB
from $system.discover_sessions
where SESSION_USER_NAME='username'
and SESSION_SPID=29445
and got the following results:
[screenshot: $system.discover_sessions result]
I was expecting SESSION_USED_MEMORY to show at least several hundred MB, but the biggest value I got was 11 KB (the official Microsoft documentation for this DMV indicates that SESSION_USED_MEMORY is in kilobytes).
I've also tried querying two more DMVs:
SELECT SESSION_SPID
,SESSION_COMMAND_COUNT
,COMMAND_READS
,COMMAND_READ_KB
,COMMAND_WRITES
,COMMAND_WRITE_KB
,COMMAND_TEXT FROM $system.discover_commands
where SESSION_SPID=29445
and
select CONNECTION_ID
,CONNECTION_USER_NAME
,CONNECTION_BYTES_SENT
,CONNECTION_DATA_BYTES_SENT
,CONNECTION_BYTES_RECEIVED
,CONNECTION_DATA_BYTES_RECEIVED from $system.discover_connections
where CONNECTION_USER_NAME='username'
and CONNECTION_ID=2047
But I also got quite underwhelming results: 0 used memory from $system.discover_commands, and 4.8 MB for CONNECTION_DATA_BYTES_SENT from $system.discover_connections, which still seems smaller than what the actual session would take.
These results don't seem to correspond to a very blunt test, where users send similar queries via Power BI and we observe a ~40 GB spike in RAM allocation on the SSAS server per 4 users (so roughly 10 GB per user session).
Has anyone used these (or any other DMVs or methods) to get actual user session memory consumption? Using a SQL trace dump would be the last resort, since it would require parsing and loading the result into a DB, and my goal is to have a real-time report showing active user sessions.

influxdb CLI import failed inserts when using a huge file

I am currently working on parsing NASDAQ data and inserting it into an InfluxDB database. I have taken care of all the data insertion rules (escaping special characters and organizing the data according to the line-protocol format: <measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,<field2-key>=<field2-value>...] [unix-nano-timestamp]).
Below is a sample of my data:
apatel17#*****:~/output$ head S051018-v50-U.csv
# DDL
CREATE DATABASE NASDAQData
# DML
# CONTEXT-DATABASE:NASDAQData
U,StockLoc=6445,OrigOrderRef=22159,NewOrderRef=46667 TrackingNum=0,Shares=200,Price=73.7000 1525942800343419608
U,StockLoc=6445,OrigOrderRef=20491,NewOrderRef=46671 TrackingNum=0,Shares=200,Price=73.7800 1525942800344047668
U,StockLoc=952,OrigOrderRef=65253,NewOrderRef=75009 TrackingNum=0,Shares=400,Price=45.8200 1525942800792553625
U,StockLoc=7092,OrigOrderRef=51344,NewOrderRef=80292 TrackingNum=0,Shares=100,Price=38.2500 1525942803130310652
U,StockLoc=7092,OrigOrderRef=80292,NewOrderRef=80300 TrackingNum=0,Shares=100,Price=38.1600 1525942803130395217
U,StockLoc=7092,OrigOrderRef=82000,NewOrderRef=82004 TrackingNum=0,Shares=300,Price=37.1900 1525942803232492698
I have also created the database: NASDAQData inside influx.
The problem I am facing is this:
The file has approximately 13 million rows (12,861,906 to be exact). I am trying to insert this data using the CLI import command as below:
influx -import -path=S051118-v50-U.csv -precision=ns -database=NASDAQData
I usually get up to 5,000,000 lines before I start getting insertion errors. I have run this command multiple times, and sometimes I get the error at 3,000,000 lines as well. To figure out this error, I ran the same command on parts of the file: I divided the data into files of 500,000 lines each, and the import succeeded for all of the smaller files (all 26 files of 500,000 rows).
Has this happened to anyone else, or does anyone know a fix for this problem, where a huge file produces errors during the import but the same data, broken down into smaller files, imports perfectly?
Any help is appreciated. Thanks
As recommended by the InfluxDB documentation, it may be necessary to split your data file into several smaller ones, as the HTTP request used for issuing your writes can time out after 5 seconds (a splitting sketch follows the quoted documentation below).
If your data file has more than 5,000 points, it may be necessary to split that file into several files in order to write your data in batches to InfluxDB. We recommend writing points in batches of 5,000 to 10,000 points. Smaller batches, and more HTTP requests, will result in sub-optimal performance. By default, the HTTP request times out after five seconds. InfluxDB will still attempt to write the points after that time out but there will be no confirmation that they were successfully written.
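A rough sketch of how you could split the file from the shell while keeping the # DML header in each chunk (the chunk size and file names are illustrative, and this assumes GNU split):
# skip the 4 header lines, then split the line-protocol rows into 500,000-line chunks
tail -n +5 S051118-v50-U.csv | split -l 500000 - chunk_
for f in chunk_*; do
    # prepend the DML header so each chunk is a valid import file, then import it
    { printf '# DML\n# CONTEXT-DATABASE:NASDAQData\n'; cat "$f"; } > "$f.txt"
    influx -import -path="$f.txt" -precision=ns -database=NASDAQData
done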
Alternatively, you can set a limit on how many points to write per second using the pps option. This should relieve some of the stress on your InfluxDB instance (see the example command after the link below).
See:
https://docs.influxdata.com/influxdb/v1.7/tools/shell/#import-data-from-a-file-with-import
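For example, a rate-limited run of the same import could look like this (25000 is only an illustrative value; tune it for your server):
influx -import -path=S051118-v50-U.csv -precision=ns -database=NASDAQData -pps 25000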

Talend - Memory issues working with big files

Before admins start eating me alive, I would like to say in my defense that I cannot comment on the original posts because I do not have enough reputation, so I have to ask about this again.
I have issues running a job in Talend (Open Studio for Big Data). I have a 3 GB file. I do not consider this too much, since I have a computer with 32 GB of RAM.
While trying to run my job, first I got an error related to a heap memory issue, then it changed to a garbage collector error, and now it doesn't even give me an error (it just does nothing and then stops).
I found these solutions:
a) Talend performance
#Kailash commented that parallelization is only available on the condition that I am subscribed to one of the Talend Platform solutions. My comment/question: so there is no other similar option to parallelize a job with a 3 GB file?
b) Talend 10 GB input and lookup out of memory error
#54l3d mentioned that one option is to split the lookup file into manageable chunks (maybe 500 MB), then perform the join in many stages, one for each chunk. My comment/cry for help/question: how can I do that? I do not know how to split the lookup; can someone explain this to me a little more graphically?
c) How to push a big file data in talend?
Just to mention that I also went through (c), but I don't have any comment about it.
The job I am performing (thanks to #iMezouar) looks like this:
1) I have an input, MySQLInput, coming from a DB in MySQL (3 GB)
2) I used tFirstRows to make it easier for the process (not working)
3) I used tSplitRow to transform the data from many similar columns into only one column.
4) MySQLOutput
Thanks again for reading me and double thanks for answering.
From what I understand, your query returns a lot of data (3 GB), and that is causing an error in your job. I suggest the following:
1. Filter the data on the database side: replace tSampleRow with a WHERE clause in your tMysqlInput query in order to retrieve fewer rows in Talend (see the sketch after this list).
2. The MySQL JDBC driver retrieves all data into memory by default, so you need to use the stream option in tMysqlInput's advanced settings in order to stream rows.
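For point 1, a minimal sketch of what the filtered query inside tMysqlInput could look like (the table, columns, and cutoff value are hypothetical placeholders):
SELECT id, col_a, col_b
FROM my_table
WHERE created_at >= '2018-01-01'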

Reading large Excel files in VB.NET is slow.

I do not know if the heading is correct, so feel free to edit it to make it proper.
Problem: using VB.NET code, when I read an Excel file of 100,000 records using a connection string and an SQL query, it takes 3 minutes to complete (too long for me; I want a solution, please).
But when I submit another Excel file of 300,000 records (my requirement is to read 50 million records), the time taken was more than 30 minutes (I could not tolerate it and killed the program).
Please help me understand this disparity and why it takes so long to read.
(I did not give any code samples because thousands of such samples are available on the net showing how to establish a connection to an Excel file (Office 2010) and how to run a query to read a record.)
Thanks in advance for your help and time. As a solution, I thought of chopping the 300,000-record file into files of 10,000 records each - but how do I do that without wasting opening and reading time?
Sabya
P.S. - using a Core 2 Duo with 8 GB RAM, on Windows Server 2008 and Windows 7
So, I don't work with VB.NET, but if you are familiar with Java I can recommend the Apache POI library. POI loads all the data into memory, and for my cases it works perfectly; after that you can store it in MySQL or anything else. I have read hundreds of files with POI and it has helped me greatly.
Here I found a question which looks similar to yours.
And here you can find a POI performance discussion.
Another solution can be to export the Excel file to CSV and read that instead; I think it will also be fast.
You could temporarily disable macros that run as soon as Excel loads.
Memory limitation is another reason, as Excel can use a very large amount of memory. I would max out the memory banks to 16 GB if I were running a spreadsheet this large (100K records).
Make sure the Excel file and the hard drive are defragmented (you can see a real impact).
If you never shut down the PC, try shutting it down and restarting. This can let processes unload unused DLLs.
Increase the pagefile.sys size to at least 2.5 times the RAM so that data transfers occur smoothly.
Ishikawa asked if VB.NET is essential - my answer is yes, because it is part of an application written in VB.NET on Framework 4.0. He also talked about exporting the Excel file to CSV and trying that - but I am afraid that if opening and reading already takes so many hours (it took 9 hours!!), converting will not help. The user will kill the process, I am sure.
Soandos asked for the query - it is "Select top 1 * from excel-file" - I am reading one record at a time. I think the problem is not this query, because this same query reads 100,000 records quite well.
KronoS supported Soandos, and I have answered above. To his/her 2nd point, the answer is: I have to use Excel, as this is what the user provides. I cannot change it.
I do not see who answered this, but the idea of disabling macros is a very good point. Should I not disable all macros and all filters, and unhide everything, to read all the data in a simple way? I will try that.
The total size of the 300,000-record Excel file is 61 MB - that is not very large!! Is that enough to create a memory problem?
I found that the speed of simply reading records from Excel is not linear. It reads 10,000 records in 4 sec, but reads 50,000 in 27 sec and 100,000 in 60 sec, etc. I wish someone could tell me how to index an Excel file for reading large files. I do not know how big the problem will be when I get an Excel file of 50 million rows.
I had similar problems with updating a large Excel file. My solution: update part of it, close it, kill the Excel process, reopen, and update again:
' oexcel, obook, osheet and fnExcNew are declared earlier in the original program
oexcel.DisplayAlerts = False   ' suppress "overwrite file?" prompts on SaveAs
obook.SaveAs(fnExcNew, 1)
obook.Close()
obook = Nothing
KillExcel()                    ' make sure no Excel process is left behind
oexcel = CreateObject("Excel.Application")
oexcel.Workbooks.Open(fnExcNew)
obook = oexcel.ActiveWorkbook
osheet = oexcel.Worksheets(1)
Private Sub KillExcel()
    ' Kill all running Excel processes
    Dim pList() As System.Diagnostics.Process = System.Diagnostics.Process.GetProcesses()
    For Each pExcelProcess As System.Diagnostics.Process In pList
        If pExcelProcess.ProcessName.ToUpper() = "EXCEL" Then
            pExcelProcess.Kill()
        End If
    Next
End Sub

How to make LR wait only as long as needed?

Our application performs the query and then selects one of the results. I would like to automate this in order to measure the system under load, but the main problem I have is: the more users there are, the longer it takes for the backend to return the results. Hence I need LoadRunner to perform the query and then perform the action as soon as the results have been returned. Or does LR do this automatically?
LoadRunner will automatically wait until the time specified in the client timeout before entering an error state. If you have no wait time between your query and your next statement, and your query finishes within your client timeout window, then LoadRunner will continue automatically with your next statement as soon as the current statement is complete.
This is a question normally covered in training; if not in training, then as part of your post-training mentoring/internship period.

Resources