Stream multiple Excel files as one file - asp.net-mvc

I want to deliver large Excel files using a webservice or httphandler.
As the Excel files can be very big in size, I want to split them up into smaller files, to decrease the memory footprint.
So I will have a master Excel file that contains the column headers and data, and further files that contain only data.
During download, I want to stream the master Excel file first and then append all other related Excel files as one download stream.
I don't want to zip them! It should be a single file in the end.
Is this possible?
The master Excel file has the column headers; all other files look the same but contain only data rows (no headers).
The following attempt simply concatenates the byte streams and indeed returns garbage:
void Main()
{
    CombineMultipleFilesIntoSingleFile();
}

// Define other methods and classes here
private static void CombineMultipleFilesIntoSingleFile(string outputFilePath = @"C:\exceltest\main.xlsx", string inputDirectoryPath = @"C:\exceltest", string inputFileNamePattern = "*.xlsx")
{
    string[] inputFilePaths = Directory.GetFiles(inputDirectoryPath, inputFileNamePattern);
    Console.WriteLine("Number of files: {0}.", inputFilePaths.Length);
    using (var outputStream = File.Create(outputFilePath))
    {
        foreach (var inputFilePath in inputFilePaths)
        {
            using (var inputStream = File.OpenRead(inputFilePath))
            {
                // Buffer size can be passed as the second argument.
                inputStream.CopyTo(outputStream);
            }
            Console.WriteLine("The file {0} has been processed.", inputFilePath);
        }
    }
}

When the user requests the files, do not download them on that first request.
Instead, request the names of the files to be downloaded in an AJAX request.
For each file name received, prepare its path on the server.
Create a hidden iframe for each file path and set each iframe's src to that path.
When an iframe's src attribute is set it navigates to the file path, so each iframe downloads a single file and multiple iframes download multiple files; a minimal server-side sketch of this is shown below.
You cannot download multiple files in a single request: if you simply append the streams of multiple files, you will create a single garbage file, because each .xlsx is itself a complete ZIP package.
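To make the server side of that idea concrete, here is a minimal ASP.NET MVC sketch; the controller name, action names, and folder path are placeholders, not part of the original answer. One action returns the file names for the AJAX call, the other streams a single file for each hidden iframe.
using System.IO;
using System.Linq;
using System.Web.Mvc;

public class ExcelDownloadController : Controller
{
    private readonly string _folder = @"C:\exceltest"; // placeholder folder

    // Called via AJAX: returns the names of the files to download.
    public JsonResult FileNames()
    {
        var names = Directory.GetFiles(_folder, "*.xlsx")
                             .Select(Path.GetFileName)
                             .ToArray();
        return Json(names, JsonRequestBehavior.AllowGet);
    }

    // Each hidden iframe sets its src to this action with one file name.
    public ActionResult Download(string fileName)
    {
        // Path.GetFileName strips any directory parts to avoid path traversal.
        var path = Path.Combine(_folder, Path.GetFileName(fileName));
        return File(path,
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
            fileName);
    }
}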

Where is the downloaded file?

I'm using the Upload component in a Vaadin8 project to get a file up on the server, as shown in the source code on this page:
https://demo.vaadin.com/sampler/#ui/data-input/other/upload
After I choose the file on my PC and click upload, the window opens up just like in the sampler and the progress bar goes all the way to the end, but the file is nowhere to be found in the project file system. Is there another step I'm supposed to be doing? How do I configure the destination folder for the uploaded files?
Taken from the official documentation, Receiving upload:
The uploaded files are typically stored as files in a file system, in a database, or as temporary objects in memory. The upload component writes the received data to an java.io.OutputStream so you have plenty of freedom in how you can process the upload content.
So in your case, the uploaded file is stored as a temporary object. In the V8 documentation the example is cut off, but it is shown in full in the V7 docs: Receiving Upload Data
// Note: "file" and "image" are fields of the enclosing class in the Vaadin example.
public OutputStream receiveUpload(String filename,
                                  String mimeType) {
    // Create upload stream
    FileOutputStream fos = null; // Stream to write to
    try {
        // Open the file for writing.
        file = new File("/tmp/uploads/" + filename);
        fos = new FileOutputStream(file);
    } catch (final java.io.FileNotFoundException e) {
        new Notification("Could not open file<br/>",
                e.getMessage(),
                Notification.Type.ERROR_MESSAGE)
                .show(Page.getCurrent());
        return null;
    }
    return fos; // Return the output stream to write to
}

public void uploadSucceeded(SucceededEvent event) {
    // Show the uploaded file in the image viewer
    image.setVisible(true);
    image.setSource(new FileResource(file));
}
So the idea is that you create the file yourself; once the upload has succeeded, the uploaded data has been written to it.

How to zip PDF and XML files which are in memorystreams

I'm working with VS2015 and ASP.Net on a webservice application which is installed in the AWS cloud.
In one of my methods I have two files, a PDF and an XML file.
These files exist only as instances of type MemoryStream.
Now I have to compress these two "files" into a ZIP file before adding the ZIP as an attachment to an e-mail (class MailMessage).
It seems that I have to save the MemoryStreams to files before adding them as entries to the ZIP.
Is this true, or is there another way to add the streams as entries to the ZIP?
Thanks in advance!
The answer is no.
It is not necessary to save the files before adding them to the stream for the ZIP file.
I have found a solution with the NuGet package DotNetZip.
Here is a code example of how to use it.
In this example the two files exist only in MemoryStream objects, not on a local disk.
It is important to reset the Position property of the streams to zero before adding them to the ZIP stream.
Finally, I save the ZIP stream as a file in a local folder to check the result.
// DotNetZip from NuGet
// http://shahvaibhav.com/create-zip-file-in-memory-using-dotnetzip/
string zipFileName = System.IO.Path.GetFileNameWithoutExtension(xmlFileName) + ".zip";
var zipMemStream = new MemoryStream();
using (Ionic.Zip.ZipFile zip = new Ionic.Zip.ZipFile())
{
    // Reset the position of each source stream before handing it to DotNetZip.
    textFileStream.Position = 0;
    zip.AddEntry(System.IO.Path.GetFileNameWithoutExtension(xmlFileName) + ".txt", textFileStream);
    xmlFileStream.Position = 0;
    zip.AddEntry(xmlFileName, xmlFileStream);
    zip.Save(zipMemStream);

    // Save the ZIP stream as a file in a local folder to check the result.
    using (var zipFs = new FileStream(zipFileName, FileMode.Create))
    {
        zipMemStream.Position = 0;
        zipMemStream.CopyTo(zipFs);
    }
}
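As an aside (not part of the original answer), the ZIP stream can also be attached to the e-mail entirely in memory. This sketch continues from the zipMemStream and zipFileName variables above; the addresses and SMTP host are placeholders.
// Requires: using System.Net.Mail;
// Attach the in-memory ZIP to a MailMessage without touching the disk.
zipMemStream.Position = 0; // rewind so the attachment reads from the start
using (var mail = new MailMessage("sender@example.com", "recipient@example.com"))
using (var smtp = new SmtpClient("smtp.example.com")) // placeholder host
{
    mail.Subject = "PDF and XML report";
    mail.Attachments.Add(new Attachment(zipMemStream, zipFileName, "application/zip"));
    smtp.Send(mail);
}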

How to generate multiple reports in Grails using Export plugin?

I am using the Export plugin in Grails for generating PDF/Excel reports. I am able to generate a single report on a PDF/Excel button click. Now, however, I want to generate multiple reports on a single button click. I tried for loops and method calls, but no luck.
Reference links are fine; I don't expect the entire code, just a pointer.
If you take a look at the source code of the ExportService in the plugin, you will notice there are various export methods, two of which accept an OutputStream. Using either of those (depending on your requirements for the other parameters) will allow you to render a report to an output stream. Then, using those output streams, you can create a zip file which you can deliver to the HTTP client.
Here is a very rough example, which was written off the top of my head so it's really just an idea rather than working code:
// Needs: import java.util.zip.ZipOutputStream, java.util.zip.ZipEntry, java.util.zip.Deflater
// Assumes you have a list of maps.
// Each map will have two keys:
// outputStream and fileName
List files = []

// call the exportService and populate your list of files
// ByteArrayOutputStream outputStream = new ByteArrayOutputStream()
// exportService.export('pdf', outputStream, ...)
// files.add([outputStream: outputStream, fileName: 'whatever.pdf'])
// ByteArrayOutputStream outputStream2 = new ByteArrayOutputStream()
// exportService.export('pdf', outputStream2, ...)
// files.add([outputStream: outputStream2, fileName: 'another.pdf'])

// create a temporary file for the zip file
File tempZipFile = File.createTempFile("temp", ".zip")
ZipOutputStream out = new ZipOutputStream(new FileOutputStream(tempZipFile))
// set the compression ratio
out.setLevel(Deflater.BEST_SPEED)

// Iterate through the list of files, adding them to the ZIP file
files.each { file ->
    // Associate an input stream with the current file
    ByteArrayInputStream input = new ByteArrayInputStream(file.outputStream.toByteArray())
    // Add a ZIP entry to the output stream.
    out.putNextEntry(new ZipEntry(file.fileName))
    // Transfer bytes from the current file to the ZIP file
    org.apache.commons.io.IOUtils.copy(input, out)
    // Close the current entry
    out.closeEntry()
    // Close the current input stream
    input.close()
}
// close the ZIP file
out.close()

// next you need to deliver the zip file to the HTTP client
response.setContentType("application/zip")
response.setHeader("Content-disposition", "attachment;filename=WhateverFilename.zip")
org.apache.commons.io.IOUtils.copy(new FileInputStream(tempZipFile), response.outputStream)
response.outputStream.flush()
response.outputStream.close()
That should give you an idea of how to approach this. Again, the above is just for demonstration purposes and isn't production ready code, nor have I even attempted to compile it.

How to download all files in an Azure Container Directory?

I have an ASP.NET app from which I upload files to Azure blob storage. I know that Azure doesn't create real directory structures in the containers, just blobs, but you can emulate directories by putting a "/" in the URI.
i.e.
I upload a list of files and my URIs look like this:
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName01.jpg
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName02.jpg
http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/FileName03.jpg
My download method:
public RemoteFile Download(DownloadRequest request)
{
    var fileFinal = string.Format("{0}/{1}/{2}", request.IDProtocol, request.IDDocument, request.FileName);
    var blobBlock = InitializeDownload(fileFinal);
    if (!blobBlock.Exists())
    {
        throw new FileNotFoundException("Error");
    }
    var stream = new MemoryStream();
    blobBlock.DownloadToStream(stream);
    return File(request.FileName);
}

private CloudBlob InitializeDownload(string uri)
{
    var blobBlock = _blobClient.GetBlobReference(uri);
    return blobBlock;
}
This way I'm getting just one file, but I need to list and download all the files inside http://myaccount.windowsazure.blob.net/MyProtocolID-01/MyDocumentID-01/
Thanks
To add more detail: you will need to use one of the listing APIs provided by the client library: CloudBlobContainer.ListBlobs(), CloudBlobContainer.ListBlobsSegmented(), or CloudBlobContainer.ListBlobsSegmentedAsync() (and their various overloads). You can specify the directory prefix, and the service will only enumerate blobs matching that prefix. You can then download each blob. You may also want to look at the 'useFlatBlobListing' argument, depending on your scenario.
http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storage.blob.cloudblobcontainer.listblobs.aspx
In addition AzCopy (see http://blogs.msdn.com/b/windowsazurestorage/archive/2012/12/03/azcopy-uploading-downloading-files-for-windows-azure-blobs.aspx) also supports this scenario of downloading all blobs in a given directory path.
Since each blob is a separate web resource, the function above will download only one file. One thing you could do is list all blobs using the prefix logic above, download those blobs to your server first, zip them, and then return that zip file to your end user; a rough sketch of the listing and downloading part follows.
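A minimal sketch of that listing-and-downloading step, assuming the Microsoft.WindowsAzure.Storage client library (the one that exposes CloudBlobContainer.ListBlobs as mentioned above); the container name, prefix, and local folder are placeholders taken from the question's URLs, and the concrete blob type may differ by SDK version.
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

public void DownloadDirectory(CloudBlobClient blobClient, string localFolder)
{
    CloudBlobContainer container = blobClient.GetContainerReference("MyProtocolID-01"); // placeholder

    // With useFlatBlobListing: true the service returns the blobs themselves
    // (no CloudBlobDirectory placeholders), filtered by the "directory" prefix.
    foreach (IListBlobItem item in container.ListBlobs("MyDocumentID-01/", useFlatBlobListing: true))
    {
        var blob = (CloudBlockBlob)item;
        string localPath = Path.Combine(localFolder, Path.GetFileName(blob.Name));
        using (var fileStream = File.OpenWrite(localPath))
        {
            blob.DownloadToStream(fileStream);
        }
    }
}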
Use AzCopy; it now has a lot of support for this scenario.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
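For example, with AzCopy v10 a whole virtual directory can be pulled down with a single recursive copy along these lines (the account, container, SAS token, and local path are placeholders; check the linked docs for the exact syntax of your version):
azcopy copy "https://myaccount.blob.core.windows.net/MyProtocolID-01/MyDocumentID-01?<SAS-token>" "C:\downloads\MyDocumentID-01" --recursive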

Open XML SDK: opening a Word template and saving to a different file-name

This is one very simple thing for which I can't find the right technique. What I want is to open a .dotx template, make some changes, and save it under the same name but with a .docx extension. I can save a WordprocessingDocument, but only to the place it was loaded from. I've tried manually constructing a new document from the WordprocessingDocument with the changes made, but nothing has worked so far; I tried MainDocumentPart.Document.WriteTo(XmlWriter.Create(targetPath)); and just got an empty file.
What's the right way here? Is a .dotx file special at all, or is it just another document as far as the SDK is concerned? Should I simply copy the template to the destination, then open that, make changes, and save? I do have some concerns about whether, if my app is called by two clients at once, it can open the same .dotx file twice... in that case creating a copy would be sensible anyway, but for my own curiosity I still want to know how to do "Save As".
I would suggest just using File.IO to copy the dotx file to a docx file and make your changes there, if that works for your situation. There's also a ChangeDocumentType function you'll have to call to prevent an error in the new docx file.
File.Copy(@"\path\to\template.dotx", @"\path\to\template.docx");

using (WordprocessingDocument newdoc = WordprocessingDocument.Open(@"\path\to\template.docx", true))
{
    newdoc.ChangeDocumentType(WordprocessingDocumentType.Document);
    // manipulate document....
}
While M_R_H's answer is correct, there is a faster, less IO-intensive method:
Step #1: Read the template or document into a MemoryStream.
Step #2: Within a using statement, open the template or document from the MemoryStream. If you opened a template (.dotx) and you want to store it as a document (.docx), you must change the document type to WordprocessingDocumentType.Document; otherwise, Word will complain when you try to open the document. Then manipulate your document.
Step #3: Write the contents of the MemoryStream to a file.
For the first step, we can use the following method, which reads a file into a MemoryStream:
public static MemoryStream ReadAllBytesToMemoryStream(string path)
{
    byte[] buffer = File.ReadAllBytes(path);
    var destStream = new MemoryStream(buffer.Length);
    destStream.Write(buffer, 0, buffer.Length);
    destStream.Seek(0, SeekOrigin.Begin);
    return destStream;
}
Then, we can use that in the following way (replicating as much of M_R_H's code as possible):
// Step #1 (note the using declaration)
using MemoryStream stream = ReadAllBytesToMemoryStream(@"\path\to\template.dotx");

// Step #2
using (WordprocessingDocument newdoc = WordprocessingDocument.Open(stream, true))
{
    // You must do the following to turn a template into a document.
    newdoc.ChangeDocumentType(WordprocessingDocumentType.Document);
    // Manipulate document (completely in memory now) ...
}

// Step #3
// ToArray() copies only the bytes actually written to the stream.
File.WriteAllBytes(@"\path\to\template.docx", stream.ToArray());
See this post for a comparison of methods for cloning (or duplicating) Word documents or templates.
