Export SSRS report directly without rendering it on ReportViewer - asp.net-mvc

I have a set of RDL reports hosted on a Report Server instance. Some of the reports render more than 100,000 records in the ReportViewer, so they take quite a long time to display. We therefore decided to export the content directly from the server, based on the user's input parameters for the report as well as the chosen export file format.
The main thing is that I do not want the user to wait until the export file is available for download. Instead, the user should be able to submit the request and carry on with other work. In the background, the program has to export the file to some physical location, and once the download is available the user is informed with a notification about the exported file.
I found an approach in this link. I need to know what the ways are to achieve the functionality described above, as well as how to pass the input parameters for the report. Please advise.
Note: I am using XML as the data source for the RDL reports.
EDIT
I found something useful and wrote the following code:
// Build the URL-access request, asking the report server to render the report as PDF.
string path = ServerURL + "?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF";
WebRequest req = WebRequest.Create(path);
req.Credentials = CredentialCache.DefaultNetworkCredentials;
WebResponse response = req.GetResponse();
Stream stream = response.GetResponseStream();

string enCodeFileName = HttpUtility.UrlEncode("fileName.pdf", System.Text.Encoding.UTF8);
// "attachment" in the Content-Disposition header makes the browser show the save dialog.
Response.AddHeader("content-disposition", "attachment; filename=" + enCodeFileName);
Response.BufferOutput = false; // stream the response instead of buffering it
Response.ContentType = response.ContentType;

// Copy the report server's response to the client in 1 KB chunks.
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
Response.End();
I am able to download the exported file, but I need to save it to a physical location instead of streaming it to the browser, and I don't know how to do that.

Both of these are easy to do. You essentially just pass the parameters in the URL that you're calling; for example, for a parameter called "LearnerList" you add &LearnerList=12345 to the URL. For exporting, add an additional parameter, rs:Format=PDF (or whatever format you want the file in), to make the report export as a PDF instead of rendering in the Report Viewer.
Here's an example URL:
https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx?/Users+Folders/User/My+Reports/Learner+Details&rs:Format=PDF&LearnerList=202307
Read these two pages, and you should be golden:
https://msdn.microsoft.com/en-us/library/ms155391.aspx
https://msdn.microsoft.com/en-us/library/ms154040.aspx
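For the remaining part of the question, saving the exported file to a physical location on the server instead of streaming it to the browser: copy the response stream to a file. A minimal sketch, reusing the URL-access request from the question's EDIT; the output path is a hypothetical example:
// Request the report rendered as PDF, exactly as in the question.
WebRequest req = WebRequest.Create(path);
req.Credentials = CredentialCache.DefaultNetworkCredentials;

using (WebResponse response = req.GetResponse())
using (Stream reportStream = response.GetResponseStream())
// Hypothetical target path; pick whatever location and naming scheme you need.
using (FileStream fileStream = System.IO.File.Create(@"C:\Exports\fileName.pdf"))
{
    reportStream.CopyTo(fileStream);
}
Because the file is written on the server, this work can also be moved off the request thread (for example with HostingEnvironment.QueueBackgroundWorkItem on .NET 4.5.2+, or a job scheduler such as Hangfire), so the user can continue working and be notified once the file is ready.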

Related

Not able to save manually edited data after filling pdf using iTextSharp

I succeeded in filling out a PDF form with database data using the iTextSharp DLL, but my code breaks Adobe's extended features: once I've filled the form using iTextSharp, the resulting document is a flat form and we can't fill it out manually again.
I already resolved the flattening problem using the following line of code.
pdfStamper.FormFlattening = false;
Now, when I open the PDF file with the database data using the following code, I am able to edit the form manually:
public ActionResult ViewFile()
{
    string fileName = "I9 Form.pdf";

    // Open the form template from App_Data.
    PdfReader pdfReader = new PdfReader(
        Server.MapPath("~/App_Data/TempletePDF/" + fileName));

    // Fill the form fields in memory.
    MemoryStream stream = new MemoryStream();
    PdfStamper pdfStamper = new PdfStamper(pdfReader, stream);
    AcroFields formFields = pdfStamper.AcroFields;
    formFields.SetField("LastName", "John");
    pdfStamper.FormFlattening = false;      // keep the form fillable
    pdfStamper.Writer.CloseStream = false;  // keep the MemoryStream open after Close()
    pdfStamper.Close();

    // Copy the filled PDF into a fresh stream for the response.
    byte[] file = stream.ToArray();
    MemoryStream output = new MemoryStream();
    output.Write(file, 0, file.Length);
    output.Position = 0;

    HttpContext.Response.AddHeader(
        "content-disposition", "inline; filename=form.pdf");

    // Return the output stream as a PDF.
    return File(output, "application/pdf");
}
I am able to print the file with the manually entered data using the PDF print button, but I'm no longer able to save the file with the manually entered data.
When I try to open such a saved file normally, it gives me the following error message:
"This document enabled extended features in Adobe Acrobat Reader DC. The
document has been changed since it was created and use of extended features
is no longer available. Please contact the author for the original version
of this document."
It sounds as if you're filling out a Reader-enabled form. In the comments, I referred to the concept of Reader-enabling:
Can I create a Reader-enabled PDF using iText? (The answer is: no, of course not!)
How can I create a Reader enabled PDF that can be signed in Adobe Reader? (The answer is: this can only be done with Adobe software.)
From these answers, you know that Reader-enabling is achieved by introducing a digital signature that uses a private key owned by Adobe.
You fill out the form using a PdfStamper that is created like this:
PdfStamper pdfStamper = new PdfStamper(pdfReader, stream);
This alters the file and breaks the digital signature. As a result, the Reader-enabling is lost, and if usage rights are defined (such as saving the file manually), those usage rights are no longer valid.
You can work around this by creating the PdfStamper in append mode:
PdfStamper stamper = new PdfStamper(pdfReader, stream, '\0', true);
Now the original file (the bytes that are signed using Adobe's private key) remains unaltered; you just add some extra bytes. This preserves Reader-enabling.
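Applied to the action in the question, the only change needed is the stamper construction. A sketch, reusing the template path and field name from the question:
// Open the Reader-enabled template.
PdfReader pdfReader = new PdfReader(
    Server.MapPath("~/App_Data/TempletePDF/I9 Form.pdf"));

MemoryStream stream = new MemoryStream();

// '\0' keeps the original PDF version; 'true' switches on append mode,
// so the signed bytes stay untouched and the usage rights survive.
PdfStamper pdfStamper = new PdfStamper(pdfReader, stream, '\0', true);
pdfStamper.AcroFields.SetField("LastName", "John");
pdfStamper.FormFlattening = false;
pdfStamper.Writer.CloseStream = false;
pdfStamper.Close();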

pdf.js to display output of file created with tcpdf

I really hope you will be able to help me out on this one.
I am new to pdf.js, so for the moment I am playing around with the pre-built version to see if I can integrate it into my web app.
My problem:
I am using TCPDF to generate a PDF file which I would like to display using pdf.js, without having to save it to a file on the server.
I have a php file (generate_document.php) that I use to generate the pdf. The file ends with the following:
$pdf->Output('test.pdf', 'I');
According to the TCPDF documentation, the second parameter can be used to produce the following output modes:
I: send the file inline to the browser (default). The plug-in is used if available. The name given by name is used when one selects the "Save as" option on the link generating the PDF.
D: send to the browser and force a file download with the name given by name.
F: save to a local server file with the name given by name.
S: return the document as a string (name is ignored).
FI: equivalent to F + I option
FD: equivalent to F + D option
E: return the document as base64 mime multi-part email attachment (RFC 2045)
Then I would like to view the PDF using pdf.js without creating a file on the server (i.e. not using 'F' as the second parameter and then passing the file name to pdf.js).
So, I thought I could simply create an iframe and call the pdf.js viewer pointing to the php file:
<iframe width="100%" height="100%" src="/pdf.js_folder/web/viewer.html?file=get_document.php"></iframe>
However, this is not working at all. Do you have any idea what I am overlooking? Or is this option not available in pdf.js?
I have done some research and seen some posts here on converting a base64 stream to a typed array, but I did not see how that would solve this problem.
Many thanks for your help!!!
EDIT
@async, thanks for your answer.
I got it figured out in the meantime, so I thought I'd share my solution with you guys.
1) In my get_document.php, I changed the output statement to return the document as a string and convert it directly to base64:
$pdf_output = base64_encode($pdf->Output('test_file.pdf', 'S'));
2) In viewer.js, I use an XHR to call the get_document.php and put the return in a variable (pdf_from_XHR)
3) Next, I convert what came in from the XHR request using the solution that was already mentioned in a few other posts (e.g. Pdf.js and viewer.js. Pass a stream or blob to the viewer)
pdf_converted = convertDataURIToBinary(pdf_from_XHR)
var BASE64_MARKER = ';base64,'; // marker that ends the data-URI header

function convertDataURIToBinary(dataURI) {
    // Strip everything up to and including the base64 marker.
    var base64Index = dataURI.indexOf(BASE64_MARKER) + BASE64_MARKER.length;
    var base64 = dataURI.substring(base64Index);
    var raw = window.atob(base64);
    var rawLength = raw.length;
    var array = new Uint8Array(new ArrayBuffer(rawLength));
    // Copy each decoded byte into the typed array.
    for (var i = 0; i < rawLength; i++) {
        array[i] = raw.charCodeAt(i);
    }
    return array;
}
et voilĂ  ;-)
Now I can inject what comes out of that function into the getDocument call:
PDFJS.getDocument(pdf_converted).then(function (pdf) {
    pdfDocument = pdf;
    PDFView.load(pdfDocument, 1.5);
});

How do I send file back after Kendo upload

I have the following requirement: I need to upload an Excel file to an MVC-based site. For this I am using Kendo Upload. In the controller action that handles the upload, I need to make a slight modification to the Excel file and then send it back as a file stream. I am using Aspose for the Excel modifications. My question: can I achieve all of this within a single controller action, without the Excel file ever hitting the disk of the web server?
I managed to get this to work by using the synchronous upload mode. My controller action looks like this:
[POST("SaveExcelFile")]
public FileStreamResult Save(IEnumerable<HttpPostedFileBase> files)
{
// The Name of the Upload component is "files"
if (files != null)
{
foreach (var file in files)
{
// Some browsers send file names with full path.
// We are only interested in the file name.
var fileName = Path.GetFileName(file.FileName);
//var physicalPath = Path.Combine(Server.MapPath("~/App_Data"), fileName);
Workbook excel2 = new Workbook(file.InputStream);
excel2.Worksheets.Add("TEST");
Stream stream = new MemoryStream();
excel2.Save(stream, SaveFormat.Excel97To2003);
stream.Position = 0;
return File(stream, "application/vnd.ms-excel", "junk.xls");
// The files are not actually saved in this demo
// file.SaveAs(physicalPath);
}
}
// Return an empty string to signify success
return null;
}
This is only proof-of-concept code, but it gives the idea of what I was trying to achieve: upload a file, manipulate it, and send the modified workbook back down to the client as a stream.
I don't think you can. I have used Kendo UI's upload control, and it seems that you only get to manipulate the file after it has been written on the server side.
What you can do is first save the file, perform your modification, then overwrite it, as in the sketch below.
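A minimal sketch of that save-modify-overwrite flow, assuming the same Aspose Workbook API used in the answer above (the folder and the modification are illustrative):
foreach (var file in files)
{
    var fileName = Path.GetFileName(file.FileName);
    var physicalPath = Path.Combine(Server.MapPath("~/App_Data"), fileName);

    // 1) Save the upload to disk first.
    file.SaveAs(physicalPath);

    // 2) Open and modify it.
    Workbook workbook = new Workbook(physicalPath);
    workbook.Worksheets.Add("TEST");

    // 3) Overwrite the original file with the modified version.
    workbook.Save(physicalPath, SaveFormat.Excel97To2003);
}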

Web server to save a file, then open it and save as different type, then prompt user to download

I have an MVC Razor application where I am returning a view.
I have overloaded my action to accept a nullable "export" bool, which changes the action by adding headers while still returning the same view as a file (in a new window).
// If a value was passed in, set the flag and add the download headers.
if (export.HasValue)
{
    ViewBag.Exporting = true;
    var uniqueFileName = string.Format("{0}.xls", Guid.NewGuid());
    Response.AddHeader("content-disposition", "attachment; filename=" + uniqueFileName);
    Response.ContentType = "application/ms-excel";
}
As the file is generated from a view, it's not a real .xls file, so when opening it I get the message "the file format and extension don't match". After a Google search, I found THIS POST on SO, where one of the answers uses VBA to open the file on the server (which includes the HTML mark-up) and then save it again (as a proper .xls).
I am hoping to do the same: call the controller action, which will render the view and create the .xls file on the server, then have the server open it, save it, and return it as a download.
What I don't want to do is create a new view to return a clean file, as the current view is an extremely complex page with a lot of logic that would otherwise need to be duplicated (and maintained).
What I have done in the view is to wrap everything except the table in an if statement so that only the table is exported and not the rest of the page layout.
Is this possible?
You can implement the VBA approach in .NET:
private void ConvertToExcel(string srcPath, string outputPath, XlFileFormat format)
{
    if (srcPath == null) { throw new ArgumentNullException("srcPath"); }
    if (outputPath == null) { throw new ArgumentNullException("outputPath"); }

    // Requires the Microsoft Excel XX.0 Object Library COM reference
    // (Microsoft.Office.Interop.Excel) plus System.Runtime.InteropServices for Marshal.
    var excelApp = new Application();
    try
    {
        // Open the HTML "Excel" file and save it as a real workbook.
        var wb = excelApp.Workbooks.Open(srcPath);
        try
        {
            wb.SaveAs(outputPath, format);
        }
        finally
        {
            wb.Close();
            Marshal.ReleaseComObject(wb);
        }
    }
    finally
    {
        excelApp.Quit();
        Marshal.ReleaseComObject(excelApp);
    }
}
You must install Microsoft.Office.Interop and add a reference to a COM object named Microsoft Excel XX.0 Object Library.
Sample usage:
// Generate an Excel file from the HTML output of the GenerateHtml action.
var generateHtmlUri = new Uri(this.Request.Url, Url.Action("GenerateHtml"));
ConvertToExcel(generateHtmlUri.AbsoluteUri, @"D:\output.xlsx", XlFileFormat.xlOpenXMLStrictWorkbook);
However, I discourage this solution because:
You have to install MS Excel on your web server.
MS Excel may sometimes misbehave, for example by prompting a dialog box.
You must find a way to delete the generated Excel file afterwards.
It's an ugly design.
I suggest generating the Excel file directly, since there don't seem to be better ways to convert HTML to Excel than using Excel itself or DocRaptor.
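For instance, with the same Aspose API that appears in the Kendo answer above (a sketch, not a full action; the cell values are placeholders, and any spreadsheet library would do):
// Build the workbook directly instead of converting HTML.
Workbook workbook = new Workbook();
Worksheet sheet = workbook.Worksheets[0];
sheet.Cells["A1"].PutValue("Learner");
sheet.Cells["B1"].PutValue("Score");
sheet.Cells["A2"].PutValue("John");
sheet.Cells["B2"].PutValue(42);

// Stream it to the client without touching the disk.
Stream output = new MemoryStream();
workbook.Save(output, SaveFormat.Excel97To2003);
output.Position = 0;
return File(output, "application/vnd.ms-excel", "export.xls");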

Uploading a File: MemoryStream vs. File System

In a business app I am creating, we allow our administrators to upload a CSV file with certain data that gets parsed and entered into our databases (all appropriate error handling is in place, etc.).
As part of an upgrade to .NET 4.5, I had to update a few aspects of this code, and while I was doing so I ran across this answer, where someone uses a MemoryStream to handle uploaded files rather than temporarily saving them to the file system. There's no real reason for me to change (and maybe it's even a bad idea), but I wanted to give it a shot to learn a bit. So I quickly swapped out this code (from a strongly-typed model, due to the upload of other metadata):
HttpPostedFileBase file = model.File;
var fileName = Path.GetFileName(file.FileName);
var path = Path.Combine(Server.MapPath("~/App_Data/Uploads"), fileName);
file.SaveAs(path);
CsvParser csvParser = new CsvParser();
Product product = csvParser.Parse(path);
this.repository.Insert(product);
this.repository.Save();
return View("Details", product);
to this:
using (MemoryStream memoryStream = new MemoryStream())
{
model.File.InputStream.CopyTo(memoryStream);
CsvParser csvParser = new CsvParser();
Product product = csvParser.Parse(memoryStream);
this.repository.Insert(product);
this.repository.Save();
return View("Details", product);
}
Unfortunately, things break when I do this: all my data comes out with null values, and it seems as though there is nothing actually in the MemoryStream (though I'm not positive about that). I know this may be a long shot, but is there anything obvious I'm missing, or something I can do to debug this better?
You need to add the following:
model.File.InputStream.CopyTo(memoryStream);
memoryStream.Position = 0;
...
Product product = csvParser.Parse(memoryStream);
When you copy the file into the MemoryStream, the stream's position is left at the end, so when you then try to read it you immediately hit end-of-stream and get no data. You just need to reset the position to the start, i.e. 0.
I believe the problem is that your memoryStream has its position set to the end, and your CSVParser presumably only processes from that point onwards, where there is no data.
To fix this, simply set the memoryStream position to 0 before you parse it with your csvParser.
memoryStream.Position = 0;
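Putting it together, the corrected version of the question's code, unchanged except for the position reset:
using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);

    // Rewind: CopyTo leaves the position at the end of the stream.
    memoryStream.Position = 0;

    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}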
