Uploading a File: MemoryStream vs. File System - asp.net-mvc

In a business app I am creating, we allow our administrators to upload a CSV file with certain data that gets parsed and entered into our databases (all appropriate error handling is occurring, etc.).
As part of an upgrade to .NET 4.5, I had to update a few aspects of this code, and while I was doing so I ran across an answer from someone who uses a MemoryStream to handle uploaded files instead of temporarily saving them to the file system. There's no real reason for me to change (and maybe it's even a bad idea), but I wanted to give it a shot to learn a bit. So I quickly swapped out this code (taken from a strongly-typed model, since other metadata is uploaded with the file):
HttpPostedFileBase file = model.File;
var fileName = Path.GetFileName(file.FileName);
var path = Path.Combine(Server.MapPath("~/App_Data/Uploads"), fileName);
file.SaveAs(path);
CsvParser csvParser = new CsvParser();
Product product = csvParser.Parse(path);
this.repository.Insert(product);
this.repository.Save();
return View("Details", product);
to this:
using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);
    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}
Unfortunately, things break when I do this - all my data is coming out with null values and it seems as though there is nothing actually in the MemoryStream (though I'm not positive about this). I know this may be a long shot, but is there anything obvious that I'm missing here or something I can do to better debug this?

You need to add the following:
model.File.InputStream.CopyTo(memoryStream);
memoryStream.Position = 0;
...
Product product = csvParser.Parse(memoryStream);
When you copy the file into the MemoryStream, the stream's position is moved to the end, so when you then try to read from it there is no data left to read. You just need to reset the position to the start, i.e. 0.
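Putting it together, the corrected version of the block from the question would look like this (a sketch; CsvParser.Parse is assumed to accept a Stream, as in the question's code):

using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);

    // CopyTo leaves the position at the end of the stream;
    // rewind so the parser reads from the beginning.
    memoryStream.Position = 0;

    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}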

The problem, I believe, is that your memoryStream has its position set to the end, and I'm guessing that your CsvParser only processes from that point onwards, where there is no data.
To fix you can simply set the memoryStream position to 0 before you parse it with your csvParser.
memoryStream.Position = 0;

Related

Not able to save manually edited data after filling pdf using iTextSharp

I succeeded in filling out a PDF form with database data using the iTextSharp DLL, but my code breaks Adobe's extended features: once I've filled the form using iTextSharp, the resulting document is a flat form and we can't fill it out manually again.
I already resolved the flattening problem using the following line of code.
pdfStamper.FormFlattening = false;
Now, when I open the PDF file with the DB data using the following code, I am able to edit the form manually:
public ActionResult ViewFile()
{
    string fileName = "I9 Form.pdf";
    string filenames = string.Concat(Guid.NewGuid().ToString(), ".pdf");

    PdfReader pdfReader = new PdfReader(
        Server.MapPath("~/App_Data/TempletePDF/") + fileName);

    MemoryStream stream = new MemoryStream();
    PdfStamper pdfStamper = new PdfStamper(pdfReader, stream);

    AcroFields formFields = pdfStamper.AcroFields;
    formFields.SetField("LastName", "John");

    pdfStamper.FormFlattening = false;
    pdfStamper.Writer.CloseStream = false;
    pdfStamper.Close();

    byte[] file = stream.ToArray();
    MemoryStream output = new MemoryStream();
    output.Write(file, 0, file.Length);
    output.Position = 0;

    HttpContext.Response.AddHeader(
        "content-disposition", "inline; filename=form.pdf");

    // Return the output stream
    return File(output, "application/pdf");
}
I am able to print the file with the manually entered data using the PDF print button, but I'm no longer able to save the file with the manually entered data.
When I try to open the saved file normally, it gives me the following error message:
"This document enabled extended features in Adobe Acrobat Reader DC. The
document has been changed since it was created and use of extended features
is no longer available. Please contact the author for the original version
of this document."
It sounds as if you're filling out a Reader-enabled form. In the comments, I referred to the concept of Reader-enabling:
Can I create a Reader-enabled PDF using iText? (The answer is: no, of course not!)
How can I create a Reader enabled PDF that can be signed in Adobe Reader? (The answer is: this can only be done with Adobe software.)
From these answers, you know that Reader-enabling is achieved by introducing a digital signature that uses a private key owned by Adobe.
You fill out the form using a PdfStamper that is created like this:
PdfStamper pdfStamper = new PdfStamper(pdfReader, stream);
This alters the file and breaks the digital signature. As a result, the Reader-enabling is lost and if usage rights are defined (such as saving the file manually), then these usage rights are no longer valid.
You can work around this by creating the PdfStamper in append mode:
PdfStamper stamper = new PdfStamper(pdfReader, stream, '\0', true);
Now the original file (the bytes that are signed using Adobe's private key) remains unaltered; you just append some extra bytes. This preserves the Reader-enabling.
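Applied to the ViewFile action from the question, the only change is how the stamper is constructed (a sketch; the rest of the action stays the same):

// Append mode: the original, Adobe-signed bytes are left untouched,
// so the usage rights (e.g. saving manually in Reader) survive.
PdfStamper pdfStamper = new PdfStamper(pdfReader, stream, '\0', true);

AcroFields formFields = pdfStamper.AcroFields;
formFields.SetField("LastName", "John");
pdfStamper.FormFlattening = false;
pdfStamper.Writer.CloseStream = false;
pdfStamper.Close();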

Export SSRS report directly without rendering it on ReportViewer

I have a set of RDL reports hosted on a Report Server instance. Some of the reports render more than 100,000 records in the ReportViewer, so they take quite a long time to render. We therefore decided to export the content directly from the server, based on the user's input parameters for the report as well as the chosen export file format.
The main thing here is that I do not want the user to wait until the export file is available for download. Rather, the user should be able to submit the action and carry on with other work; in the background, the program exports the file to some physical location, and when the download becomes available, the user is notified about the exported file.
I found one approach in this link. I need to know what ways there are to achieve the functionality described above, as well as how to pass the input parameters for the report. Please advise.
Note: I am using XML as the data source for the RDL reports.
EDIT
I found something useful and wrote the following code:
string path = ServerURL + "?" + _reportFolder + "ReportName&rs:Command=Render&rs:Format=PDF";

WebRequest req = WebRequest.Create(path);
string reportParametersQT = String.Empty;
req.Credentials = CredentialCache.DefaultNetworkCredentials;

WebResponse response = req.GetResponse();
Stream stream = response.GetResponseStream();
//screen.Response.Clear();

string enCodeFileName = HttpUtility.UrlEncode("fileName.pdf", System.Text.Encoding.UTF8);

// The word "attachment" in AddHeader is used to directly show the save dialog box in the browser
Response.AddHeader("content-disposition", "attachment; filename=" + enCodeFileName);
Response.BufferOutput = false; // to prevent buffering
Response.ContentType = response.ContentType;

byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    Response.OutputStream.Write(buffer, 0, bytesRead);
}
Response.End();
I am able to download the exported file, but I need to save it to a physical location instead of downloading it, and I don't know how to do that.
Both of these are very easy to do. You essentially just pass the parameters in the URL that you're calling; for example, for a parameter called "LearnerList" you add &LearnerList=12345 to the URL. For exporting, add the additional parameter rs:Format=PDF (or whatever format you want the file in) to have the report exported as a PDF instead of rendering in Report Viewer.
Here's an example URL:
https://reporting.MySite.net/ReportServer/Pages/ReportViewer.aspx?/Users+Folders/User/My+Reports/Learner+Details&rs:Format=PDF&LearnerList=202307
Read these two pages, and you should be golden:
https://msdn.microsoft.com/en-us/library/ms155391.aspx
https://msdn.microsoft.com/en-us/library/ms154040.aspx
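As for saving the export to a physical location instead of streaming it to the browser (the open point in the question's edit): the URL-access response is just a byte stream, so you can write it to a file rather than to Response.OutputStream. A sketch reusing the question's variables; the target path is an assumption:

string path = ServerURL + "?" + _reportFolder
    + "ReportName&rs:Command=Render&rs:Format=PDF&LearnerList=202307";

WebRequest req = WebRequest.Create(path);
req.Credentials = CredentialCache.DefaultNetworkCredentials;

using (WebResponse response = req.GetResponse())
using (Stream stream = response.GetResponseStream())
// Hypothetical target location; point this wherever the exports should live.
using (FileStream file = System.IO.File.Create(@"C:\Exports\fileName.pdf"))
{
    stream.CopyTo(file);
}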

Save SpreadsheetDocument in the DB

I have a valid SpreadsheetDocument object created from a stream. I can manipulate it (e.g. add a new row). After my changes I need to save the changed document in SQL Server as varbinary and later read it back from SQL Server to manipulate it further.
Could you provide an example of how to achieve this?
I know how to put/read data from SQL Server. What I'm looking for is a way to convert the SpreadsheetDocument to a byte array, and to recreate the SpreadsheetDocument from a byte array read back from SQL Server.
I'm using the Open XML SDK 2.0.
Thanks a lot,
Alexander
Not quite the same, but I needed to load an Excel template into memory, modify it, and send it over HTTP using IIS. I did it by loading the data into a memory stream and then making the modifications (that seems to be the way Microsoft recommends here):
http://msdn.microsoft.com/en-us/library/ee945362%28v=office.11%29.aspx
This might help you:
MemoryStream ms = new MemoryStream();
byte[] byteArray = System.IO.File.ReadAllBytes("document.xlsm");
ms.Write(byteArray, 0, byteArray.Length);
ms.Position = 0;

using (SpreadsheetDocument doc = SpreadsheetDocument.Open(ms, true))
{
    // <Do stuff>
}

return File(ms.ToArray(), "application/vnd.ms-excel.sheet.macroEnabled.12", "output.xlsm");
Obviously the last line is what I needed for my case; in yours, you would save the stream's contents to the database instead.
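For the round trip the question actually asks about (varbinary from SQL Server → SpreadsheetDocument → varbinary back), the same pattern applies with the file I/O swapped for byte arrays. A sketch; the method name is a placeholder, and the database access is omitted since the asker already has that part:

// bytes is the varbinary value read from SQL Server.
public static byte[] ModifyAndSerialize(byte[] bytes)
{
    using (MemoryStream ms = new MemoryStream())
    {
        ms.Write(bytes, 0, bytes.Length);

        // Open the package for editing; the changes are written
        // back into ms when the document is disposed.
        using (SpreadsheetDocument doc = SpreadsheetDocument.Open(ms, true))
        {
            // ... manipulate the document, e.g. add a new row ...
        }

        // This byte array can be stored back into the varbinary column.
        return ms.ToArray();
    }
}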

How do I send file back after Kendo upload

I have the following requirement: I need to upload an Excel file to an MVC-based site, and for this I am using Kendo Upload. In the controller action that handles the upload I need to make a slight modification to the Excel file and then send it back as a file stream. I am using Aspose for the Excel modifications. My question is: can I achieve all of this within the one controller action, without the Excel file ever hitting the disk of the web server?
I managed to get this to work by using the synchronous upload mode. My controller action looks like this:
[POST("SaveExcelFile")]
public FileStreamResult Save(IEnumerable<HttpPostedFileBase> files)
{
// The Name of the Upload component is "files"
if (files != null)
{
foreach (var file in files)
{
// Some browsers send file names with full path.
// We are only interested in the file name.
var fileName = Path.GetFileName(file.FileName);
//var physicalPath = Path.Combine(Server.MapPath("~/App_Data"), fileName);
Workbook excel2 = new Workbook(file.InputStream);
excel2.Worksheets.Add("TEST");
Stream stream = new MemoryStream();
excel2.Save(stream, SaveFormat.Excel97To2003);
stream.Position = 0;
return File(stream, "application/vnd.ms-excel", "junk.xls");
// The files are not actually saved in this demo
// file.SaveAs(physicalPath);
}
}
// Return an empty string to signify success
return null;
}
This is only proof-of-concept code, but you can get the idea of what I was trying to achieve: upload a file, manipulate it, and send the modified workbook back down to the client as a stream.
I don't think you can. I have used KendoUI's upload control, and it seems that you'll only get to manipulate the file after it's written on the server side.
What you can do is to first save the file, perform your modification, then overwrite it.
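A minimal sketch of that save-modify-overwrite flow, assuming Aspose.Cells as in the accepted answer (the path and file name are illustrative):

// 1. Save the upload to disk.
var physicalPath = Path.Combine(Server.MapPath("~/App_Data"), fileName);
file.SaveAs(physicalPath);

// 2. Perform the modification.
Workbook workbook = new Workbook(physicalPath);
workbook.Worksheets.Add("TEST");

// 3. Overwrite the original file with the modified workbook.
workbook.Save(physicalPath);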

DBF Large Char Field

I have a database file that I believe was created with Clipper, but I can't say for sure (I have .ntx files for the indexes, which I understand is what Clipper uses). I am trying to create a C# application that will read this database using the System.Data.OleDb namespace.
For the most part I can successfully read the contents of the tables, but there is one field that I cannot: a field called CTRLNUMS, defined as CHAR(750). I have read various articles found through Google searches suggesting that fields larger than 255 chars have to be read through a different process than the normal assignment to a string variable, but so far none of the approaches I've found have been successful.
The following is a sample code snippet I am using to read the table; it includes the two options I used to read the CTRLNUMS field. Both options returned 238 characters, even though there are 750 characters stored in the field.
Here is my connection string:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\datadir;Extended Properties=DBASE IV;
Can anyone tell me the secret to reading larger fields from a DBF file?
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    using (OleDbCommand cmd = new OleDbCommand())
    {
        cmd.Connection = conn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = string.Format("SELECT ITEM,CTRLNUMS FROM STUFF WHERE ITEM = '{0}'", stuffId);

        using (OleDbDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                stuff.StuffId = dr["ITEM"].ToString();

                // OPTION 1
                string ctrlNums = dr["CTRLNUMS"].ToString();

                // OPTION 2
                char[] buffer = new char[750];
                int index = 0;
                int readSize = 5;
                while (index < 750)
                {
                    long charsRead = dr.GetChars(dr.GetOrdinal("CTRLNUMS"), index, buffer, index, readSize);
                    index += (int)charsRead;
                    if (charsRead < readSize)
                    {
                        break;
                    }
                }
            }
        }
    }
}
You can find a description of the DBF structure here: http://www.dbf2002.com/dbf-file-format.html
What I think Clipper used to do was modify the field structure so that, in Character fields, the Decimal Places byte held the high-order byte of the size, so Character field sizes were really 256 * Decimals + Size.
I may have a C# class that reads dbfs (natively, not ADO/DAO), it could be modified to handle this case. Let me know if you're interested.
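For reference, the hack is easy to account for if you read the header yourself. Each field descriptor in a DBF header is a 32-byte record (the first one starts at file offset 32); byte 11 is the field type, byte 16 the field length, and byte 17 the decimal count. A sketch under the Clipper convention described above:

// fieldDescriptor is one 32-byte record from the DBF header.
byte fieldType = fieldDescriptor[11]; // 'C' for Character
byte size     = fieldDescriptor[16]; // documented one-byte length
byte decimals = fieldDescriptor[17]; // decimal count

// Clipper's long-character-field convention: for 'C' fields the
// decimal byte holds the high-order byte of the length, so a
// CHAR(750) is stored as size = 238, decimals = 2 (2 * 256 + 238 = 750),
// which would also explain why only 238 characters came back above.
int actualLength = (fieldType == (byte)'C')
    ? 256 * decimals + size
    : size;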
Are you still looking for an answer? Is this a one-off job or something that needs doing regularly?
I have a Python module that is primarily intended to extract data from all kinds of DBF files ... it doesn't yet handle the length_high_byte = decimal_places hack, but it's a trivial change. I'd be quite happy to (a) share this with you and/or (b) get a copy of such a DBF file for testing.
Added later: Extended-length feature added, and tested against files I've created myself. Offer to share code with anyone who would like to test it still stands. Still interested in getting some "real" files myself for testing.
Three suggestions that might be worth a shot...
1 - Use Access to create a linked table to the DBF file, then use .NET to hit the table in the Access database instead of going directly to the DBF.
2 - Try the FoxPro OLE DB provider.
3 - Parse the DBF file by hand. Example is here.
My guess is that #1 will be the easiest to get working, and #3 will give you the opportunity to fine-tune your cussing skills. :)
