An exception of type 'System.OutOfMemoryException' occurred in itextsharp.dll but was not handled in user code - asp.net-mvc

I am using iTextSharp to create a PDF. I have 100k records, but I am getting the following exception:
An exception of type 'System.OutOfMemoryException' occurred in itextsharp.dll but was not handled in user code
at the line:
bodyTable.AddCell(currentProperty.GetValue(lst, null).ToString());
The code is:
var doc = new Document(pageSize);
PdfWriter.GetInstance(doc, stream);
doc.Open();
//Get exportable count
int columns = 0;
Type currentType = list[0].GetType();
//PREPARE HEADER
//for each visible column, check if the current object has the property;
//otherwise search the inner properties
foreach (var visibleColumn in visibleColumns)
{
if (currentType.GetProperties().FirstOrDefault(p => p.Name == visibleColumn.Key) != null)
{
columns++;
}
else
{
//check child property objects
var childProperties = currentType.GetProperties();
foreach (var prop in childProperties)
{
if (prop.PropertyType.BaseType == typeof(BaseEntity))
{
if (prop.PropertyType.GetProperties().FirstOrDefault(p => p.Name == visibleColumn.Key) != null)
{
columns++;
break;
}
}
}
}
}
//header
var headerTable = new PdfPTable(columns);
headerTable.WidthPercentage = 100f;
foreach (var visibleColumn in visibleColumns)
{
if (currentType.GetProperties().FirstOrDefault(p => p.Name == visibleColumn.Key) != null)
{
//headerTable.AddCell(prop.Name);
headerTable.AddCell(visibleColumn.Value);
}
else
{
//check child property objects
var childProperties = currentType.GetProperties();
foreach (var prop in childProperties)
{
if (prop.PropertyType.BaseType == typeof(BaseEntity))
{
if (prop.PropertyType.GetProperties().FirstOrDefault(p => p.Name == visibleColumn.Key) != null)
{
//headerTable.AddCell(prop.Name);
headerTable.AddCell(visibleColumn.Value);
break;
}
}
}
}
}
doc.Add(headerTable);
var bodyTable = new PdfPTable(columns);
bodyTable.Complete = false;
bodyTable.WidthPercentage = 100f;
//PREPARE DATA
foreach (var lst in list)
{
int col = 1;
foreach (var visibleColumn in visibleColumns)
{
var currentProperty = currentType.GetProperties().FirstOrDefault(p => p.Name == visibleColumn.Key);
if (currentProperty != null)
{
if (currentProperty.GetValue(lst, null) != null)
bodyTable.AddCell(currentProperty.GetValue(lst, null).ToString());
else
bodyTable.AddCell(string.Empty);
col++;
}
else
{
//check child property objects
var childProperties = currentType.GetProperties().Where(p => p.PropertyType.BaseType == typeof(BaseEntity));
foreach (var prop in childProperties)
{
currentProperty = prop.PropertyType.GetProperties().FirstOrDefault(p => p.Name == visibleColumn.Key);
if (currentProperty != null)
{
var currentPropertyObjectValue = prop.GetValue(lst, null);
if (currentPropertyObjectValue != null)
{
bodyTable.AddCell(currentProperty.GetValue(currentPropertyObjectValue, null).ToString());
}
else
{
bodyTable.AddCell(string.Empty);
}
break;
}
}
}
}
}
doc.Add(bodyTable);
doc.Close();

A back-of-the-envelope computation of the memory requirements, using the numbers you provided, gives 100000 * 40 * (2*20+4) = ~167 MB. That is well within your memory limit, but it is only a lower bound. I imagine each cell object is fairly big; if each cell carried a 512-byte overhead you could easily be looking at 2 GB, and I reckon it might be even more, as PDF is a complex beast.
So you might realistically be looking at a situation where you are actually running out of memory - if not your computer's, then at least the portion the .NET runtime has set aside for itself.
I would do one thing first - check memory consumption like here. You might even do well to try with 10, 100, 1000, 10000, and 100000 rows and see up to what number of rows the program still works.
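As a rough way to take those measurements, you could log the managed heap size and the process working set at each row-count step. A minimal sketch using standard .NET APIs (GC.GetTotalMemory and Process.WorkingSet64); where you log it is up to you:
// Rough sketch: log memory usage while generating the PDF at increasing row counts.
// Requires: using System; using System.Diagnostics;
long managedBytes = GC.GetTotalMemory(true);                       // managed heap after a full collection
long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;   // total memory held by the process
Console.WriteLine(string.Format("Managed: {0} MB, working set: {1} MB",
    managedBytes / (1024 * 1024), workingSetBytes / (1024 * 1024)));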
You could perhaps try a different approach altogether. If you are trying to print a nicely formatted table with a lot of data, you could output an HTML document instead, which can be written incrementally by simply appending to a file rather than going through a third-party library. You can then "print" that HTML document to PDF. StackOverflow to the rescue again with this problem.
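Alternatively, staying with iTextSharp: the posted code already sets bodyTable.Complete = false, but it only adds the table to the document once, at the very end, so every row is still held in memory. A hedged sketch of the incremental pattern that property is meant for, reusing the names from the question (the flush interval of 1000 rows is an arbitrary assumption):
// Sketch: flush the large table to the document periodically so completed rows
// are written to the output stream and released instead of accumulating in memory.
var bodyTable = new PdfPTable(columns);
bodyTable.WidthPercentage = 100f;
bodyTable.Complete = false;              // allow the table to be rendered in parts

int row = 0;
foreach (var lst in list)
{
    // ... add one cell per visible column, as in the original loop ...
    row++;
    if (row % 1000 == 0)
    {
        doc.Add(bodyTable);              // writes out the rows completed so far
    }
}

bodyTable.Complete = true;               // mark the final chunk
doc.Add(bodyTable);
doc.Close();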

Related

MVC 5 MultiThreading

I'm trying to set up multithreading in a .NET MVC application. I am processing a large amount of data, so my hope is that splitting it across multiple threads will speed up the processing time. The method isn't waiting for the tasks to complete before it returns the data, though, so it's currently returning an empty list. I want to wait for all of the tasks to complete before returning the final result.
If there's a better way to do this, I'm open to it. I'm loading a datatable and basically want to take the roster variable and run the CheckCompliance method for each individual. Without multithreading the page can take minutes to load, and I want the page to load with almost no delay.
public async Task<Tuple<List<RosterListing>, int, int>> Search(int employerId, DataTableAjaxPostModel model)
{
List<RosterListing> roster = _service.Search(employerId, searchBy, columnSearches, take, skip, sortBy, sortDir, out int filteredResultsCount, out int totalResultsCount);
IEnumerable<Mandate> enforcedMandates = _mandateService.GetEnforcedMandates();
IEnumerable<IndividualMandateData> individualMandateData = _individualMandateDataService.GetAllForPtbIds(roster.Select(x => Convert.ToInt32(x.PtbId)));
List<RosterListing> newRoster = new List<RosterListing>();
// Split main list into sections for multithreading
List<List<RosterListing>> chunks = roster.ChunkBy(5);
var tasks = new List<Task<List<RosterListing>>>();
foreach (var chunk in chunks)
{
tasks.Add(Task.Run(async () =>
{
List<RosterListing> updatedRoster = new List<RosterListing>();
updatedRoster.AddRange(await CheckCompliance(chunk, enforcedMandates, individualMandateData));
return updatedRoster;
}));
}
var results = Task.WhenAll(tasks);
if (results.Status == TaskStatus.RanToCompletion)
{
foreach (var result in results.Result)
{
newRoster.AddRange(result);
}
}
// This doesn't wait until all tasks are complete
return await Task.Run(() => Tuple.Create(newRoster, filteredResultsCount, totalResultsCount));
}
private async Task<List<RosterListing>> CheckCompliance(List<RosterListing> roster, IEnumerable<Mandate> enforcedMandates, IEnumerable<IndividualMandateData> individualMandateData)
{
foreach (var officer in roster)
{
foreach (var mandate in enforcedMandates)
{
IndividualMandateData data = individualMandateData.FirstOrDefault(x => x.MandateId == mandate.MandateId && x.PtbId == Convert.ToInt32(officer.PtbId));
if (data == null)
{
officer.IsInCompliance = false;
break;
}
if (mandate.MandateRequirementUnit == "Course")
{
if (data.HoursAttended < 0)
{
officer.IsInCompliance = false;
break;
}
}
else
{
if (data.HoursAttended < mandate.Requirement)
{
officer.IsInCompliance = false;
break;
}
}
officer.IsInCompliance = true;
}
}
return await Task.Run(() => roster);
}
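The core issue in Search above is that Task.WhenAll(tasks) is never awaited - its Status is checked immediately, before the tasks have finished, so the RanToCompletion branch is usually skipped and newRoster stays empty. A hedged sketch of the tail of Search, keeping the names from the question, that awaits the combined task before assembling the result:
// Sketch: await Task.WhenAll so every chunk is processed before combining results.
var tasks = new List<Task<List<RosterListing>>>();
foreach (var chunk in chunks)
{
    // Task.Run(Func<Task<T>>) unwraps the inner task, so each entry is a Task<List<RosterListing>>.
    tasks.Add(Task.Run(() => CheckCompliance(chunk, enforcedMandates, individualMandateData)));
}

// Await completion of all chunks; results holds one List<RosterListing> per chunk.
List<RosterListing>[] results = await Task.WhenAll(tasks);
foreach (var result in results)
{
    newRoster.AddRange(result);
}

return Tuple.Create(newRoster, filteredResultsCount, totalResultsCount);
As a design note, CheckCompliance is CPU-bound, so it could be a plain synchronous method returning List<RosterListing>; the Task.Run in the caller already moves the work off the request thread, and the trailing return await Task.Run(() => roster) adds nothing.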

Entity Framework 5 - read a record then delete it in loop

I am having issues with my application. I have a db table for a print queue. When I read from that table in a loop, once I have added a record to the view model I want to delete it from the database - that would be the most efficient way to do it - but EF barks:
An entity object cannot be referenced by multiple instances of IEntityChangeTracker.
I've tried using multiple contexts, but that didn't seem to work either. I've seen articles like Rick Strahl's, but frankly it was above my level of understanding, and I'm not sure it addresses my issue here; it seemed quite an in-depth solution for something as simple as this.
Is there a simple way to accomplish what I am trying to achieve here?
Here is my code:
public List<InventoryContainerLabelViewModel> CreateLabelsViewModel(int intFacilityId)
{
var printqRep = new Repository<InventoryContainerPrintQueue>(new InventoryMgmtContext());
var printqRepDelete = new Repository<InventoryContainerPrintQueue>(new InventoryMgmtContext());
IQueryable<InventoryContainerPrintQueue> labels =
printqRep.SearchFor(x => x.FacilityId == intFacilityId);
List<InventoryContainerLabelViewModel> labelsViewModel = new List<InventoryContainerLabelViewModel>();
if (labels.Count() > 0)
{
//Get printq record
foreach (InventoryContainerPrintQueue label in labels)
{
IEnumerable<InventoryContainerDetail> icDtls =
label.InventoryContainerHeader.InventoryContainerDetails;
//Get print details
foreach (InventoryContainerDetail icDtl in icDtls)
{
labelsViewModel.Add(new InventoryContainerLabelViewModel()
{
...
populate view model here
}
);//Add label to view model
} //for each IC detail
//Delete the printq record
printqRepDelete.Delete(label); <======== Error Here
} //foreach label loop
}//label count > 0
return labelsViewModel.ToList();
}
In the end, I added a status column to the printq table, updated it to printed inside the loop, and then called a separate method to delete the printed records.
public List<InventoryContainerLabelViewModel> CreateLabelsViewModel(int intFacilityId)
{
InventoryMgmtContext dbContext = new InventoryMgmtContext();
var printqRep = new Repository<InventoryContainerPrintQueue>(dbContext);
IEnumerable<InventoryContainerPrintQueue> unprintedPrtqRecs =
printqRep.SearchFor(x => x.FacilityId == intFacilityId && x.Printed == false);
List<InventoryContainerLabelViewModel> labelsViewModel = new List<InventoryContainerLabelViewModel>();
if (unprintedPrtqRecs.Count() > 0)
{
//Get printq record
foreach (InventoryContainerPrintQueue unprintedPrtqRec in unprintedPrtqRecs)
{
IEnumerable<InventoryContainerDetail> icDtls =
unprintedPrtqRec.InventoryContainerHeader.InventoryContainerDetails;
//Get container details to print
foreach (InventoryContainerDetail icDtl in icDtls)
{
labelsViewModel.Add(new InventoryContainerLabelViewModel()
{
...
}
);//Get IC details and create view model
} //for each IC detail
unprintedPrtqRec.Printed = true;
printqRep.Update(unprintedPrtqRec, unprintedPrtqRec, false);
} //foreach label loop
//Commit updated to Printed status to db
dbContext.SaveChanges();
}//label count > 0
return labelsViewModel;
}
public ActionConfirmation<int> DeletePrintQRecs(int intFacilityId)
{
InventoryMgmtContext dbContext = new InventoryMgmtContext();
var printqRep = new Repository<InventoryContainerPrintQueue>(dbContext);
IEnumerable<InventoryContainerPrintQueue> printedPrtqRecs =
printqRep.SearchFor(x => x.FacilityId == intFacilityId && x.Printed == true);
foreach (InventoryContainerPrintQueue printedPrtqRec in printedPrtqRecs)
{
//Delete the printq record
printqRep.Delete(printedPrtqRec, false);
}
//Save Changes on all deletes
ActionConfirmation<int> result;
try
{
dbContext.SaveChanges();
result = ActionConfirmation<int>.CreateSuccessConfirmation(
"All Label Print Q records deleted successfully.",
1);
}
catch (Exception ex)
{
result = ActionConfirmation<int>.CreateFailureConfirmation(
string.Format("An error occured attempting to {0}. The error was: {2}.",
"delete Label Print Q records",
ex.Message),
1
);
}
return result;
}
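For completeness, the original "multiple instances of IEntityChangeTracker" error comes from loading the entities with one context (printqRep) and deleting them with another (printqRepDelete). A hedged sketch of the simpler single-context approach, assuming the Repository type is a thin wrapper over the context as in the code above:
var dbContext = new InventoryMgmtContext();
var printqRep = new Repository<InventoryContainerPrintQueue>(dbContext);

// Materialize the results first so we are not deleting from the query we are still iterating.
List<InventoryContainerPrintQueue> labels =
    printqRep.SearchFor(x => x.FacilityId == intFacilityId).ToList();

foreach (var label in labels)
{
    // ... build the view model rows from label.InventoryContainerHeader.InventoryContainerDetails ...

    // Delete through the same repository/context that loaded the entity,
    // so only one change tracker ever references it.
    printqRep.Delete(label);
}

// One commit for all the deletes.
dbContext.SaveChanges();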

Writing a list of strings to a file

From the API page, I gather there's no function for what I'm trying to do. I want to read text from a file, store it as a list of strings, manipulate the text, and save the file. The first part is easy using the function:
abstract List<String> readAsLinesSync([Encoding encoding = Encoding.UTF_8])
However, there is no function that lets me write the contents of the list directly to the file, e.g.
abstract void writeAsLinesSync(List<String> contents, [Encoding encoding = Encoding.UTF_8, FileMode mode = FileMode.WRITE])
Instead, I've been using:
abstract void writeAsStringSync(String contents, [Encoding encoding = Encoding.UTF_8, FileMode mode = FileMode.WRITE])
by reducing the list to a single string. I'm sure I could also use a for loop and write to a stream line by line. I was wondering two things:
Is there a way to just hand the file a list of strings for writing?
Why is there a readAsLinesSync but no writeAsLinesSync? Is this an oversight or a design decision?
Thanks
I just made my own export class that handles writing the data to a file or sending it to a websocket.
Usage:
exportToWeb(mapOrList, 'local', 8080);
exportToFile(mapOrList, 'local/data/data.txt');
Class:
//Save data to a file.
void exportToFile(var data, String filename) =>
new _Export(data).toFile(filename);
//Send data to a websocket.
void exportToWeb(var data, String host, int port) =>
new _Export(data).toWeb(host, port);
class _Export {
HashMap mapData;
List listData;
bool isMap = false;
bool isComplex = false;
_Export(var data) {
// Check whether the input is a List or a Map data structure.
if (data.runtimeType == HashMap) {
isMap = true;
mapData = data;
} else if (data.runtimeType == List) {
listData = data;
if (data.every((element) => element is Complex)) {
isComplex = true;
}
} else {
throw new ArgumentError("input data is not valid.");
}
}
// Save to a file using an IOSink. Handles Map, List and List<Complex>.
void toFile(String filename) {
List<String> tokens = filename.split(new RegExp(r'\.(?=[^.]+$)'));
if (tokens.length == 1) tokens.add('txt');
if (isMap) {
mapData.forEach((k, v) {
File fileHandle = new File('${tokens[0]}_k$k.${tokens[1]}');
IOSink dataFile = fileHandle.openWrite();
for (var i = 0; i < mapData[k].length; i++) {
dataFile.write('${mapData[k][i].real}\t'
'${mapData[k][i].imag}\n');
}
dataFile.close();
});
} else {
File fileHandle = new File('${tokens[0]}_data.${tokens[1]}');
IOSink dataFile = fileHandle.openWrite();
if (isComplex) {
for (var i = 0; i < listData.length; i++) {
listData[i] = listData[i].cround2;
dataFile.write("${listData[i].real}\t${listData[i].imag}\n");
}
} else {
for (var i = 0; i < listData.length; i++) {
dataFile.write('${listData[i]}\n');
}
}
dataFile.close();
}
}
// Set up a websocket to send data to a client.
void toWeb(String host, int port) {
//connect with ws://localhost:8080/ws
//for echo - http://www.websocket.org/echo.html
if (host == 'local') host = '127.0.0.1';
HttpServer.bind(host, port).then((server) {
server.transform(new WebSocketTransformer()).listen((WebSocket webSocket) {
webSocket.listen((message) {
var msg = json.parse(message);
print("Received the following message: \n"
"${msg["request"]}\n${msg["date"]}");
if (isMap) {
webSocket.send(json.stringify(mapData));
} else {
if (isComplex) {
List real = new List(listData.length);
List imag = new List(listData.length);
for (var i = 0; i < listData.length; i++) {
listData[i] = listData[i].cround2;
real[i] = listData[i].real;
imag[i] = listData[i].imag;
}
webSocket.send(json.stringify({"real": real, "imag": imag}));
} else {
webSocket.send(json.stringify({"real": listData, "imag": null}));
}
}
},
onDone: () {
print('Connection closed by client: Status - ${webSocket.closeCode}'
' : Reason - ${webSocket.closeReason}');
server.close();
});
});
});
}
}
I asked Mads Agers about this (he works on the io module). He said that he decided not to add writeAsLines because he didn't find it useful: for one thing, it is trivial to write the for loop yourself, and for another, you would have to parameterize it with the kind of line separator you want to use. He said he can add it if there is a strong feeling that it would be valuable, but he didn't immediately see a lot of value in it.

Entity Framework Newbie - Save to DB

I have 3 joined tables: a ValidationRun has many Results, and each Result has many Errors.
The following code succeeds in saving to the Result and Error tables, but not to ValidationRun.
Can you see the problem, please?
private void WriteResultsToDB(SqlDataReader dr, XMLValidator validator)
{
using (var context = new ValidationResultsEntities())
{
var run = new ValidationRun { DateTime = DateTime.Now, XSDPath = this.txtXsd.Text };
//loop through table containing the processed XML
while (dr.Read())
{
var result = new Result
{
AddedDateTime = (DateTime)dr["Added"],
CustomerAcc = (string)dr["CustomerAcc"],
CustomerRef = (string)dr["CustomerRef"]
};
if (this.rdoRequest.Checked)
{
result.XMLMsg = (string)dr["RequestMSG"];
}
else
{
result.XMLMsg = (string)dr["ReplyMSG"];
}
if (validator.Validate(result.XMLMsg))
{
foreach (string error in validator.Errors)
{
result.Errors.Add(new Error { ErrorDescription = error });
}
}
else
{
//validator caught an error
result.Errors.Add(new Error { ErrorDescription = "XML could not be parsed" });
}
if (result.Errors.Count == 0) result.ValidFile = true; else result.ValidFile = false;
context.AddToResults(result);
context.SaveChanges();
}
}
}
You don't appear to be adding the run to any part of the context. If it were referenced by the result you are adding, the change tracker might know it was supposed to be saved, but as written it is just an orphaned object that never gets attached to anything.
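A minimal sketch of the fix, assuming the generated context exposes an AddToValidationRuns method by analogy with the AddToResults call already in the code, and that Result is reachable from ValidationRun through a Results navigation collection (both names are assumptions, not confirmed by the question):
var run = new ValidationRun { DateTime = DateTime.Now, XSDPath = this.txtXsd.Text };
context.AddToValidationRuns(run);   // assumed generated method, analogous to AddToResults

while (dr.Read())
{
    var result = new Result { /* ... populate as above ... */ };

    // Attaching the result to the run (instead of adding it to the context directly)
    // lets the change tracker discover the run through the relationship.
    run.Results.Add(result);
}

context.SaveChanges();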

ASP.NET MVC and jqGrid: Persisting Multiselection

I have a jqGrid in an ASP.NET MVC view with the option multiselect: true. There are over 200 records displayed in the grid, so I have paging enabled. This works great, but the selections are lost when I navigate from page to page.
Is there a good, clean way to persist the selections so that they are maintained while paging?
I managed it with some JavaScript trickery:
var pages = [];
onSelectRow: function(rowid, status) {
var pageId = $('#grdApplications').getGridParam('page');
var selRows = [];
if (status) {
//item selected, add index to array
if (pages[pageId] == null) {
pages[pageId] = [];
}
selRows = pages[pageId];
if (selRows.indexOf(rowid) == -1)
{ selRows.push(rowid); }
}
else {
//item deselected, remove from array
selRows = pages[pageId];
var index = selRows.indexOf(rowid)
if (index != -1) {
pages[pageId].splice(index, 1);
}
}
},
loadComplete: function() {
if (pages[$('#grdApplications').getGridParam('page')] != null) {
var selRows = pages[$('#grdApplications').getGridParam('page')];
var i;
var limit = selRows.length;
for (i = 0; i < limit; i++) {
$('#grdApplications').setSelection(selRows[i], true);
}
}
},
user279248 (I know it's an old post, but it's a good question) - all of the row ids are stored in the selRows arrays inside the pages array, so just iterate through them, i.e.:
for (j=0;j<pages.length;j++) {
var selRow = pages[j];
for (k=0;k<selRow.length;k++) {
alert('RowID:'+selRow[k]);
}
}
Hope this helps someone.
Dave - your solution is still going strong two years later! Thanks for the code. My only tweak was lifting the code into functions, which is useful when applying it to multiple grids on the same page.
function maint_chkbxs_oSR(obj_ref, rowid, status, pages) {
var pageId = $(obj_ref).jqGrid('getGridParam','page');
var selRows = [];
if (status) {
//item selected, add index to array
if (pages[pageId] == null) {
pages[pageId] = [];
}
selRows = pages[pageId];
//if (selRows.indexOf(rowid) == -1)
if ($.inArray(""+rowid,selRows) == -1)
{ selRows.push(rowid); }
}
else {
//item deselected, remove from array
selRows = pages[pageId];
var index = $.inArray(""+rowid,selRows);
if (index != -1) {
pages[pageId].splice(index, 1);
}
}
}
function maint_ckbxs_lC(obj_ref, pages) {
if (pages[$(obj_ref).jqGrid('getGridParam','page')] != null) {
var selRows = pages[$(obj_ref).jqGrid('getGridParam','page')];
var i;
var limit = selRows.length;
for (i = 0; i < limit; i++) {
//$('#grid_bucket').setSelection(selRows[i], true);
$(obj_ref).jqGrid('setSelection',selRows[i],true);
}
}
}
You just have to remember to create a dedicated page array for each grid.
