System.OutOfMemoryException while writing to Excel - asp.net-mvc

Public Function GenerateReportAsExcel() As MemoryStream
    Dim workbook = WorkbookFactory.Create(template)
    template.Close()
    Dim worksheet = workbook.GetSheetAt(0)
    ' ... write the records to the worksheet ...
    Dim workbookStream = New MemoryStream()
    workbookStream.Flush()
    workbookStream.Position = 0
    workbook.Write(workbookStream) ' throws OutOfMemoryException for 500,000 records; runs fine for 400,000
    Return New MemoryStream(workbookStream.ToArray())
End Function
WorkbookFactory comes from NPOI.SS.UserModel.
Is there a way to increase the MemoryStream capacity? I get a System.OutOfMemoryException while writing 500,000 records to the Excel file, but up to 400,000 records works fine.
I found a couple of similar issues but no solid solution to this problem.
Someone suggested using
workbookStream.Flush()
workbookStream.Position = 0
but that was of no help.
Thanks for the help.

What environment are you running in?
If it's 32-bit, you will get an OutOfMemoryException at an approximately 500 MB memory stream.
static void Main(string[] args)
{
    var buffer = new byte[1024 * 1024]; // 1 MB buffer
    Console.WriteLine(IntPtr.Size);     // 4 = 32-bit process, 8 = 64-bit process
    using (var memoryStream = new MemoryStream())
    {
        for (var i = 0; i < 100000000; i++)
        {
            try
            {
                // write 1 MB per iteration, so i is the total size in MB when it fails
                memoryStream.Write(buffer, 0, buffer.Length);
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Out of memory at {0} meg", i);
                break;
            }
        }
    }
    Console.ReadKey();
}
If you run on a 64-bit OS, make sure you build with the 'Prefer 32-bit' switch turned off.
You can turn the switch off on the Build tab of the project properties (it maps to the Prefer32Bit setting in the project file).
I would recommend using a FileStream instead of MemoryStream here.
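For example, a minimal sketch of that idea (assuming workbook is the NPOI IWorkbook built above, and that returning a file from disk is acceptable for the MVC action):
// Write the workbook straight to a temporary file instead of an in-memory buffer.
var tempPath = Path.GetTempFileName();
using (var fileStream = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
{
    workbook.Write(fileStream);
}
// The MVC action can then stream the file back to the client, e.g.:
// return File(tempPath, "application/vnd.ms-excel", "report.xls");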

The following code adds nothing, so you can let it go:
workbookStream.Flush() ' Does nothing
workbookStream.Position = 0 ' Does nothing
But the rest is a matter of memory. You need more working memory (RAM) in order to do what you are trying to do, so if you add RAM to the machine you should be good to go... unless you have a 32-bit machine and run into the ~3 GB practical RAM limit. In that case you would need to upgrade to a 64-bit machine, where this memory limit is not an issue.
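As a quick sanity check (a minimal sketch; Environment.Is64BitProcess and Environment.Is64BitOperatingSystem are available from .NET 4.0 onwards, and in a web app you would write this to a log rather than the console), you can confirm whether the code is actually running as a 64-bit process:
// Confirms how much address space the process actually has to work with.
Console.WriteLine("64-bit OS: {0}", Environment.Is64BitOperatingSystem);
Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);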
But if you are generating Excel files, you may want to look at ClosedXML instead of using the Excel object model. This is a library that doesn't require Excel on your machine. Have a look at http://www.campusmvp.net/blog/generating-excel-files-like-a-pro-with-closedxml.
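As a rough illustration of what that looks like (a minimal sketch, assuming the ClosedXML package is referenced; the sheet name and cell values here are made up):
// ClosedXML builds the .xlsx entirely in code, no Excel installation required.
using (var workbook = new XLWorkbook())
{
    var worksheet = workbook.Worksheets.Add("Report");
    worksheet.Cell(1, 1).Value = "Id";
    worksheet.Cell(1, 2).Value = "Name";
    // ... fill the data rows here ...
    workbook.SaveAs("report.xlsx");
}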

Related

IBM HP5Si Print Stream to XPS print driver

I am hoping someone has suggestions about this issue.
We have a custom driver taken from https://learn.microsoft.com/en-us/samples/microsoft/windows-driver-samples/xpsdrv-driver-and-filter-sample/
The print driver works well and outputs XPS when the documents are opened in MS Word or PDF. But when a document is printed from an HP5Si series printer, the driver returns 0 bytes. The job is sent from the HP5Si printer to the XPS driver. Why is the driver rejecting this input when the source is an HP series printer? What can I do to fix it?
The printer on the AS400 is set up with an IBM HP5Si driver and sends the job to a Windows service on a server. This Windows service routes the job to the XPS driver as if it were an HP series printer. The XPS driver processes this job and returns XPS to the Windows service. The Windows service then converts it to a TIFF file.
For some reason, if printing is done using this workflow, the XPS driver returns 0 bytes.
If the same document is opened in Word or Notepad, or comes from anything other than the AS400 + HP path, it works and XPS is returned.
To prove my theory, I sent a PCL file in C# code to the driver and it returned 0 bytes.
// OpenPrinter, StartDocPrinter, WritePrinter, EndDocPrinter, ClosePrinter and DOCINFOA
// are assumed to be the usual winspool.drv P/Invoke declarations (not shown here).
public static void SendBytesToPrinterPCL(string printerName, string szFileName) {
    IntPtr lhPrinter;
    OpenPrinter(printerName, out lhPrinter, new IntPtr(0));
    if (lhPrinter.ToInt32() == 0) return; // Printer not found!!
    var rawPrinter = new DOCINFOA() {
        pDocName = "My Document",
        pDataType = "RAW"
    };
    StartDocPrinter(lhPrinter, 1, rawPrinter);
    using (var b = new BinaryReader(File.Open(szFileName, FileMode.Open))) {
        var length = (int) b.BaseStream.Length;
        const int bufferSize = 8192;
        var numLoops = length / bufferSize;
        var leftOver = length % bufferSize;
        for (int i = 0; i < numLoops; i++) {
            var buffer = new byte[bufferSize];
            int dwWritten;
            b.Read(buffer, 0, bufferSize);
            IntPtr unmanagedPointer = Marshal.AllocHGlobal(buffer.Length);
            Marshal.Copy(buffer, 0, unmanagedPointer, buffer.Length);
            WritePrinter(lhPrinter, unmanagedPointer, bufferSize, out dwWritten);
            Marshal.FreeHGlobal(unmanagedPointer);
        }
        if (leftOver > 0) {
            var buffer = new byte[leftOver];
            int dwWritten;
            b.Read(buffer, 0, leftOver);
            IntPtr unmanagedPointer = Marshal.AllocHGlobal(buffer.Length);
            Marshal.Copy(buffer, 0, unmanagedPointer, buffer.Length);
            WritePrinter(lhPrinter, unmanagedPointer, leftOver, out dwWritten);
            Marshal.FreeHGlobal(unmanagedPointer);
        }
    }
    EndDocPrinter(lhPrinter);
    ClosePrinter(lhPrinter);
}
string filePath = #"C:\Users\tom\Desktop\form.PCL";
string szPrinterName = #"\\server\xpsdrv";
Print.SendBytesToPrinterPCL(szPrinterName, filePath);
Then I sent a regular text file to the driver and it successfully converted to XPS.
public static void SendToPrinterNonPCL(string filePath)
{
    ProcessStartInfo info = new ProcessStartInfo();
    info.Verb = "print";
    info.FileName = filePath;
    info.CreateNoWindow = true;
    info.WindowStyle = ProcessWindowStyle.Hidden;
    Process p = new Process();
    p.StartInfo = info;
    p.Start();
    p.WaitForInputIdle();
    System.Threading.Thread.Sleep(3000);
    if (false == p.CloseMainWindow())
        p.Kill();
}
string filePath = #"C:\Users\tom\Desktop\form.txt";
string szPrinterName = #"\\server\xpsdrv";
Print.SendToPrinterNonPCL(filePath);
Why doesn't the driver in the Microsoft samples accept PCL? What should I do? I am not a driver developer; this project was given to me.
EDIT:
Initially I didn't know about this printing from the AS400. Our legacy driver was built 15 years ago. The developer wrote a custom print driver to PCL and a custom converter to TIFF, but the driver only supported monochrome. I am not a driver expert, a PCL expert, or a converter expert. In order to support color and a less pixelated feel for the final TIFF, I decided to change it to an XPS driver. It also means less custom code, and I can use Microsoft's XPS conversion in WPF. It is not a very big learning curve for a non-driver developer, compared to learning PCL and then changing the converter to accommodate color TIFF. But I guess it is falling apart since the users also print from the AS400, which sends PCL.
Do you know of any good products we could purchase a license for? We need a PCL driver and a converter to TIFF.
Thank you

Xamarin - WCF Upload Large Files Report progress via UIProgressView

I have created a WCF service that allows uploading large files via BasicHttpBinding using streaming, and it is working great! I would like to extend this to show a progress bar (UIProgressView) so that when a large file is being uploaded in 65k chunks, the user can see that it is actively working.
The client code calling the WCF Service is:
BasicHttpBinding binding = CreateBasicHttp();
BTSMobileWcfClient _client = new BTSMobileWcfClient(binding, endPoint);
_client.UploadFileCompleted += ClientUploadFileCompleted;
byte[] b = File.ReadAllBytes(zipFileName);
using (new OperationContextScope(_client.InnerChannel)) {
    OperationContext.Current.OutgoingMessageHeaders.Add(System.ServiceModel.Channels.MessageHeader.CreateHeader("SalvageId", "", iBTSSalvageId.ToString()));
    OperationContext.Current.OutgoingMessageHeaders.Add(System.ServiceModel.Channels.MessageHeader.CreateHeader("FileName", "", Path.GetFileName(zipFileName)));
    OperationContext.Current.OutgoingMessageHeaders.Add(System.ServiceModel.Channels.MessageHeader.CreateHeader("Length", "", b.LongLength));
    _client.UploadFileAsync(b);
}
On the server side, I read the file stream in 65k chunks and report back to the calling routine "bytes read", etc. A snippet of that code is:
using (FileStream targetStream = new FileStream(filePath, FileMode.CreateNew, FileAccess.Write)) {
    // read from the input stream in 65536-byte (64 KB) chunks
    const int chunkSize = 65536;
    byte[] buffer = new byte[chunkSize];
    do {
        // read bytes from input stream
        int bytesRead = request.FileData.Read(buffer, 0, chunkSize);
        if (bytesRead == 0) break;
        // write bytes to output stream
        targetStream.Write(buffer, 0, bytesRead);
    } while (true);
    targetStream.Close();
}
But I don't know how to hook into the callback on the Xamarin side to receive the "bytes read" versus "total bytes to send" so I can update the UIProgressView.
Has anyone tried this or is this even possible?
Thanks In Advance,
Bo

Video streaming to iPad does not work with Tapestry5

I want to stream a video to my iPad via the HTML5 video tag, with Tapestry5 (5.3.5) on the backend. Usually the server-side framework shouldn't even play a role in this, but somehow it does.
Anyway, hopefully someone here can help me out. Please keep in mind that my project is very much a prototype and that what I describe is simplified / reduced to the relevant parts. I would very much appreciate it if people didn't respond with the obligatory "you want to do the wrong thing" or security/performance nitpicks that aren't relevant to the problem.
So here it goes:
Setup
I have a video taken from the Apple HTML5 showcase so I know that format isn't an issue. I have a simple tml page "Play" that just contains a "video" tag.
Problem
I started by implementing a RequestFilter that handles the request from the video control by opening the referenced video file and streaming it to the client. That's basically "if path starts with 'file' then copy file input stream to response output stream". This works very well with Chrome but not with the iPad. Fine, I thought, I must be missing some headers, so I looked at the Apple showcase again and included the same headers and content type, but no joy.
Next, I thought, well, let's see what happens if I let T5 serve the file. I copied the video to the webapp context, disabled my request filter and put the simple filename in the video's src attribute. This works in Chrome AND on the iPad.
That surprised me and prompted me to look at how T5 handles static files / context requests. Thus far I've only gotten as far as feeling that there are two different code paths, which I've confirmed by switching out the hardwired "video src" for an Asset with a @Path("context:"). This, again, works on Chrome but not on the iPad.
So I'm really lost here. What's the secret juice in the "simple context" requests that allows them to work on the iPad? There is nothing special going on, and yet it's the only way this works. Problem is, I can't really serve those videos from my webapp context ...
Solution
So, it turns out that there is an HTTP header called "Range" and that the iPad, unlike Chrome, uses it for video. The "secret sauce", then, is that the servlet handler for static resource requests knows how to deal with range requests, while T5's doesn't. Here is my custom implementation:
OutputStream os = response.getOutputStream("video/mp4");
InputStream is = new BufferedInputStream( new FileInputStream(f));
try {
String range = request.getHeader("Range");
if( range != null && !range.equals("bytes=0-")) {
logger.info("Range response _______________________");
String[] ranges = range.split("=")[1].split("-");
int from = Integer.parseInt(ranges[0]);
int to = Integer.parseInt(ranges[1]);
int len = to - from + 1 ;
response.setStatus(206);
response.setHeader("Accept-Ranges", "bytes");
String responseRange = String.format("bytes %d-%d/%d", from, to, f.length());
logger.info("Content-Range:" + responseRange);
response.setHeader("Connection", "close");
response.setHeader("Content-Range", responseRange);
response.setDateHeader("Last-Modified", new Date().getTime());
response.setContentLength(len);
logger.info("length:" + len);
byte[] buf = new byte[4096];
is.skip(from);
while( len != 0) {
int read = is.read(buf, 0, len >= buf.length ? buf.length : len);
if( read != -1) {
os.write(buf, 0, read);
len -= read;
}
}
} else {
response.setStatus(200);
IOUtils.copy(is, os);
}
} finally {
os.close();
is.close();
}
I want to post my refined solution from above; hopefully it will be useful to someone.
So basically the problem was that I was disregarding the "Range" HTTP request header, which the iPad didn't like. In a nutshell, this header means that the client only wants a certain part (in this case a byte range) of the response.
This is what an iPad HTML5 video request looks like:
[INFO] RequestLogger Accept:*/*
[INFO] RequestLogger Accept-Encoding:identity
[INFO] RequestLogger Connection:keep-alive
[INFO] RequestLogger Host:mars:8080
[INFO] RequestLogger If-Modified-Since:Wed, 10 Oct 2012 22:27:38 GMT
[INFO] RequestLogger Range:bytes=0-1
[INFO] RequestLogger User-Agent:AppleCoreMedia/1.0.0.9B176 (iPad; U; CPU OS 5_1 like Mac OS X; en_us)
[INFO] RequestLogger X-Playback-Session-Id:BC3B397D-D57D-411F-B596-931F5AD9879F
It means that the iPad only wants the first two bytes. If you disregard this header and simply send a 200 response with the full body, then the video won't play. So you need to send a 206 response (partial content) and set the following response headers:
[INFO] RequestLogger Content-Range:bytes 0-1/357772702
[INFO] RequestLogger Content-Length:2
This means "I'm sending you byte 0 through 1 of 357772702 total bytes available".
When you actually start playing the video, the next request will look like this (everything except the range header omitted):
[INFO] RequestLogger Range:bytes=0-357772701
So my refined solution looks like this:
OutputStream os = response.getOutputStream("video/mp4");
try {
String range = request.getHeader("Range");
/** if there is no range requested we will just send everything **/
if( range == null) {
InputStream is = new BufferedInputStream( new FileInputStream(f));
try {
IOUtils.copy(is, os);
response.setStatus(200);
} finally {
is.close();
}
return true;
}
requestLogger.info("Range response _______________________");
String[] ranges = range.split("=")[1].split("-");
int from = Integer.parseInt(ranges[0]);
/**
* some clients, like chrome will send a range header but won't actually specify the upper bound.
* For them we want to send out our large video in chunks.
*/
int to = HTTP_DEFAULT_CHUNK_SIZE + from;
if( to >= f.length()) {
to = (int) (f.length() - 1);
}
if( ranges.length == 2) {
to = Integer.parseInt(ranges[1]);
}
int len = to - from + 1 ;
response.setStatus(206);
response.setHeader("Accept-Ranges", "bytes");
String responseRange = String.format("bytes %d-%d/%d", from, to, f.length());
response.setHeader("Content-Range", responseRange);
response.setDateHeader("Last-Modified", new Date().getTime());
response.setContentLength(len);
requestLogger.info("Content-Range:" + responseRange);
requestLogger.info("length:" + len);
long start = System.currentTimeMillis();
RandomAccessFile raf = new RandomAccessFile(f, "r");
raf.seek(from);
byte[] buf = new byte[IO_BUFFER_SIZE];
try {
while( len != 0) {
int read = raf.read(buf, 0, buf.length > len ? len : buf.length);
os.write(buf, 0, read);
len -= read;
}
} finally {
raf.close();
}
logger.info("r/w took:" + (System.currentTimeMillis() - start));
} finally {
os.close();
}
This solution is better than my first one because it handles all cases of "Range" requests, which seems to be a prerequisite for clients like Chrome to support skipping within the video (at which point they'll issue a range request for that point in the video).
It's still not perfect, though. Further improvements would be setting the "Last-Modified" header correctly and properly handling clients that request an invalid range, or a range in something other than bytes.
I suspect this is more about iPad than about Tapestry.
I might invoke Response.disableCompression() before writing the stream to the response; Tapestry may be trying to GZIP your stream, and the iPad may not be prepared for that, as video and image formats are usually already compressed.
Also, I don't see a content type header being set; again the iPad may simply be more sensitive to that than Chrome.

Node.js is running out of memory on large bit-by-bit file read

I'm attempting to write a bit of JS that will read a file and write it out to a stream. The deal is that the file is extremely large, and so I have to read it bit by bit. It seems that I shouldn't be running out of memory, but I do. Here's the code:
var size = fs.statSync("tmpfile.tmp").size;
var fp = fs.openSync("tmpfile.tmp", "r");
for(var pos = 0; pos < size; pos += 50000){
var buf = new Buffer(50000),
len = fs.readSync(fp, buf, 0, 50000, (function(){
console.log(pos);
return pos;
})());
data_output.write(buf.toString("utf8", 0, len));
delete buf;
}
data_output.end();
For some reason it hits 264900000 and then throws FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory. I'd figure that the data_output.write() call would force it to write the data out to data_output, and then discard it from memory, but I could be wrong. Something is causing the data to stay in memory, and I've no idea what it would be. Any help would be greatly appreciated.
I had a very similar problem. I was reading in a very large CSV file with 10M lines and writing out its JSON equivalent. I saw in the Windows Task Manager that my process was using > 2GB of memory. Eventually I figured out that the output stream was probably slower than the input stream, and that the outstream was buffering a huge amount of data. I was able to fix this by pausing the instream every 100 writes to the outstream and waiting for the outstream to empty. This gives the outstream time to catch up with the instream. I don't think it matters for the sake of this discussion, but I was using 'readline' to process the CSV file one line at a time.
I also figured out along the way that if, instead of writing every line to the outstream, I concatenated 100 or so lines together and then wrote them out together, this also improved the memory situation and made for faster operation.
In the end, I found that I could do the file transfer (csv -> json) using just 70M of memory.
Here's a code snippet for my write function:
var write_counter = 0;
var out_string = "";
function myWrite(inStream, outStream, string, finalWrite) {
    out_string += string;
    write_counter++;
    if ((write_counter === 100) || (finalWrite)) {
        // pause the instream until the outstream clears
        inStream.pause();
        outStream.write(out_string, function () {
            inStream.resume();
        });
        write_counter = 0;
        out_string = "";
    }
}
You should be using pipes, such as:
var fp = fs.createReadStream("tmpfile.tmp");
fp.pipe(data_output);
For more information, check out: http://nodejs.org/docs/v0.5.10/api/streams.html#stream.pipe
EDIT: the problem in your implementation, btw, is that by doing it in chunks like that, the write buffer isn't going to get flushed, and you're going to read in the entire file before writing much of it back out.
According to the documentation, data_output.write(...) will return true if the string has been flushed, and false if it has not (due to the kernel buffer being full). What kind of stream is this?
Also, I'm (fairly) sure this isn't the problem, but: how come you allocate a new Buffer on each loop iteration? Wouldn't it make more sense to initialize buf before the loop?
I don't know how the synchronous file functions are implemented, but have you considered using the asynchronous ones? That would be more likely to allow garbage collection and I/O flushing to happen. So instead of a for loop, you would trigger the next read in the callback function of the previous read.
Something along these lines (note also that, per other comments, I'm reusing the Buffer):
var buf = new Buffer(50000);
var pos = 0;
function readNextChunk() {
    fs.read(fp, buf, 0, 50000, pos, function (err, bytesRead) {
        if (err) {
            // handle error
        } else {
            data_output.write(buf.toString("utf8", 0, bytesRead));
            pos += bytesRead;
            if (pos < size)
                readNextChunk();
        }
    });
}
readNextChunk();

Can we compress a large file in chunks with GZIP on BlackBerry?

I saw the sample API below:
public static byte[] compress( byte[] data )
{
    // baos must be declared outside the try block so it is still in scope for the return
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try
    {
        GZIPOutputStream gzipStream = new GZIPOutputStream( baos, 6, GZIPOutputStream.MAX_LOG2_WINDOW_LENGTH );
        gzipStream.write( data );
        gzipStream.close();
    }
    catch (IOException ioe)
    {
        return null;
    }
    return baos.toByteArray();
}
But when I tried to compress a large file on a Curve 8900 with OS 4.6, I got an "OutOfMemoryError", so I would like to know how to compress it in small chunks.
I already tried the code below, but it doesn't work; the compressed file cannot be decompressed...
file = (FileConnection) Connector.open(_fileOutputPath, Connector.READ_WRITE);
if (!file.exists()) {
    file.create();
}
os = file.openOutputStream();
is = FileUtil.getInputStream(_fileInputPath, 0);
int tmpSize = 1024;
byte[] tmp = new byte[tmpSize];
int len = -1;
gzipStream = new GZIPOutputStream( os, 6, GZIPOutputStream.MAX_LOG2_WINDOW_LENGTH );
while ((len = is.read(tmp, 0, tmpSize)) != -1) {
    gzipStream.write(tmp, 0, len);
}
GZIPOutputStream does not produce a file suitable for use with the gzip command line tool. This is because it doesn't produce the necessary file headers. How did you test decompressing it? You should write a similar Java program that makes use of GZIPInputStream to test, as 'gunzip' is not going to recognize the input.
The problem with the first code sample is that the ByteArrayOutputStream gets too big for the limited memory of a mobile device.
An option could be to first write to a file on the SD card, for instance.
The second code sample seems fine, but see Michael's answer.
