I am building a web application with MVC. For one of my views I need to fetch a large string and then bind it to jsTree. However, I frequently receive an OutOfMemoryException. The string that is returned has a length of 55,582,723 characters. I dynamically create a JArray containing JObjects.
Here is how I originally had it:
var jArray = GetJArray();
return jArray.ToString();
With that code I received the exception very frequently. I then did some research and found out that I can write to a stream and then return the stream. So I changed the method to this:
var jArray = GetJArray();
var serializer = new JsonSerializer();
var ms = new MemoryStream();
var sw = new StreamWriter(ms);
var writer = new JsonTextWriter(sw);
serializer.Serialize(writer, jArray);
writer.Flush();
ms.Seek(0, SeekOrigin.Begin);
return new FileStreamResult(ms, "text/plain");
With this code things have improved a lot; however, in some cases I still get an OutOfMemoryException. The string that is returned will always have the same length mentioned above.
Any insight to this will be much appreciated.
EDIT: To add a little more information, in my original implementation the exception happened at:
return jArray.ToString();
In my second implementation the exception occurs at:
serializer.Serialize(writer, jArray);
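For context, here is a minimal sketch of taking the stream idea one step further and serializing straight to the response output, so the full JSON never has to live in a single string or MemoryStream (the JsonStreamResult class name is illustrative and not from the original code; whether this actually avoids the OOM also depends on response buffering):

// Sketch: a custom ActionResult that writes JSON directly to the response output stream
// instead of building an intermediate buffer first.
public class JsonStreamResult : ActionResult
{
    private readonly JArray _data;
    public JsonStreamResult(JArray data) { _data = data; }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "text/plain";
        var serializer = new JsonSerializer();
        using (var sw = new StreamWriter(response.OutputStream))
        using (var writer = new JsonTextWriter(sw))
        {
            serializer.Serialize(writer, _data);
        }
    }
}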
I use NReco.Data in my ASP.NET Core application to make DB calls, because I don't want to use EF and DataTable isn't supported yet.
Now I need to call a stored procedure and get multiple RecordSets (or dictionary lists).
At the moment I call this:
dbAdapter.Select($"STOREDNAME #{nameof(SQLPARAMETER)}", SQLPARAMETER).ToRecordSet()
But the stored procedure returns more than one recordset; can anyone help me get the others?
Currently NReco.Data.DbDataAdapter has no API for processing multiple result sets returned by a single IDbCommand.
You can compose IDbCommand by yourself, execute data reader and read multiple result sets in the following way:
IDbCommand spCmd; // let's assume that this is the DB command for 'STOREDNAME'
RecordSet rs1 = null;
RecordSet rs2 = null;
spCmd.Connection.Open();
try {
    using (var rdr = spCmd.ExecuteReader()) {
        rs1 = RecordSet.FromReader(rdr);
        if (rdr.NextResult())
            rs2 = RecordSet.FromReader(rdr);
    }
} finally {
    spCmd.Connection.Close();
}
As the NReco.Data author, I think support for multiple result sets could easily be added to the DbDataAdapter API (I've just created an issue for that on GitHub).
-- UPDATE --
Starting from NReco.Data v.1.0.2 it is possible to handle multiple result sets in the following way:
(var companies, var contacts) = DbAdapter.Select("exec STOREDNAME").ExecuteReader(
    (rdr) => {
        var companiesRes = new DataReaderResult(rdr).ToList<CompanyModel>();
        rdr.NextResult();
        var contactsRes = new DataReaderResult(rdr).ToList<ContactModel>();
        return (companiesRes, contactsRes);
    });
In the same manner DataReaderResult can map results to dictionaries or RecordSet if needed.
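For example, here is a sketch of the same call mapped to RecordSets instead of typed models (I'm assuming DataReaderResult exposes a ToRecordSet() method, based on the sentence above):

// Sketch: same pattern as above, but materialize each result set as a RecordSet.
var (companiesRs, contactsRs) = DbAdapter.Select("exec STOREDNAME").ExecuteReader(
    (rdr) => {
        var rs1 = new DataReaderResult(rdr).ToRecordSet();
        rdr.NextResult();
        var rs2 = new DataReaderResult(rdr).ToRecordSet();
        return (rs1, rs2);
    });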
In our ASP.NET MVC web application we send emails as part of scheduled tasks handled by Hangfire, for which I am using Postal as described here.
This works fine and we are able to send HTML/text emails. Now we need to generate and attach PDF files as well. The attached PDF needs to be generated dynamically from a Razor template. First I tried to use Rotativa to generate the PDF. However, I ran into the problem that the BuildPdf method needs a ControllerContext, which is not available in the background Hangfire process. I tried to fake the ControllerContext like this:
using (var memWriter = new StringWriter(sb))
{
    var fakeResponse = new HttpResponse(memWriter);
    var fakeRequest = new HttpRequest(null, "http://wwww.oururl.com", null);
    var fakeHttpContext = new HttpContext(fakeRequest, fakeResponse);
    var emailController = new BackgroundEmailController();
    var fakeControllerContext = new ControllerContext(new HttpContextWrapper(fakeHttpContext), new RouteData(), emailController);
    var attachment = emailController.BillAttachment(email);
    var pdf = attachment.BuildPdf(fakeControllerContext);
    if (pdf != null && pdf.Count() > 0)
    {
        using (MemoryStream ms = new MemoryStream(pdf))
        {
            var contentType = new System.Net.Mime.ContentType(System.Net.Mime.MediaTypeNames.Application.Pdf);
            email.Attach(new System.Net.Mail.Attachment(ms, contentType));
        }
    }
}
However, this raised a NullReferenceException inside Rotativa.
Then I tried to first compile the template view to HTML with RazorEngine (and then convert the HTML to PDF by some other means), like this:
var engineService = RazorEngineService.Create();
engineService.AddTemplate(cache_name, File.ReadAllText(billAttachmentTemplatePath));
engineService.Compile(cache_name, modelType: typeof(BillEmail));
var html = engineService.Run(cache_name, null, email);
using (var ms = CommonHelper.GenerateStreamFromString(html))
{
    var contentType = new System.Net.Mime.ContentType(System.Net.Mime.MediaTypeNames.Text.Html);
    email.Attach(new System.Net.Mail.Attachment(ms, contentType));
}
And it throws another NullReferenceException in the RazorEngine dynamic DLL:
System.NullReferenceException: Object reference not set to an instance of an object.
at CompiledRazorTemplates.Dynamic.RazorEngine_bb2b366aaef64f2bbc2997353f88cc9e.Execute()
at RazorEngine.Templating.TemplateBase.RazorEngine.Templating.ITemplate.Run(ExecuteContext context, TextWriter reader)
I was wondering if anybody has suggestions for generating PDF files from a template in a Hangfire process?
If you are open to commercial solutions, you can try Telerik Reporting and export the report as PDF programmatically. You define your report, then invoke it to generate the PDF on the server side, and finally send the byte[] as an email attachment. You can kick off this process from a Hangfire job.
Here is pseudo code assuming you have defined the structure of your report. Please look here for more details on how to create your report programmatically.
public void GenerateAndEmailReport()
{
    var reportSource = new InstanceReportSource();
    Telerik.Reporting.Report report = new MyReport();
    // populate data into the report here
    reportSource.ReportDocument = report;
    var reportProcessor = new ReportProcessor();
    var info = new Hashtable();
    var result = reportProcessor.RenderReport("PDF", reportSource, info);
    byte[] reportBytes = result.DocumentBytes;
    SendEmail(reportBytes, "myreport.pdf"); // a method that takes the bytes and attaches them to an email
}
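For reference, a rough sketch of what SendEmail could look like (the addresses, subject, and SMTP configuration are placeholders, not part of the original answer):

// Sketch only: attach the rendered PDF bytes to a mail message and send it.
public void SendEmail(byte[] reportBytes, string fileName)
{
    using (var ms = new System.IO.MemoryStream(reportBytes))
    using (var message = new System.Net.Mail.MailMessage("noreply@example.com", "user@example.com"))
    {
        message.Subject = "Your report";
        message.Body = "Please find the report attached.";
        var contentType = new System.Net.Mime.ContentType(System.Net.Mime.MediaTypeNames.Application.Pdf);
        message.Attachments.Add(new System.Net.Mail.Attachment(ms, contentType) { Name = fileName });
        // SMTP settings are assumed to come from web.config
        using (var client = new System.Net.Mail.SmtpClient())
        {
            client.Send(message);
        }
    }
}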
Additional references from Telerik:
send report as email
Generating PDF in console application
Saving a report programmatically
I have a need to access the encoded stream in OpenRasta before it gets sent to the client. I have tried using a PipelineContributor and registering it before KnownStages.IEnd, after KnownStages.IOperationExecution, and after KnownStages.AfterResponseCoding, but in all instances the context.Response.Entity stream is null or empty.
Anyone know how I can do this?
Also, I want to find out the requested codec fairly early on, yet when I register after KnownStages.ICodecRequestSelection it returns null. I just get the feeling I am missing something about these pipeline contributors.
Without writing your own Codec (which, by the way, is really easy), I'm unaware of a way to get the actual stream of bytes sent to the browser. The way I'm doing this is serializing the ICommunicationContext.Response.Entity before the IResponseCoding known stage. Pseudo code:
class ResponseLogger : IPipelineContributor
{
    public void Initialize(IPipeline pipelineRunner)
    {
        pipelineRunner
            .Notify(LogResponse)
            .Before<KnownStages.IResponseCoding>();
    }

    PipelineContinuation LogResponse(ICommunicationContext context)
    {
        string content = Serialize(context.Response.Entity);
        // do something with 'content' here, e.g. write it to your log
        return PipelineContinuation.Continue;
    }

    string Serialize(IHttpEntity entity)
    {
        if ((entity == null) || (entity.Instance == null))
            return String.Empty;

        try
        {
            using (var writer = new StringWriter())
            {
                using (var xmlWriter = XmlWriter.Create(writer))
                {
                    Type entityType = entity.Instance.GetType();
                    XmlSerializer serializer = new XmlSerializer(entityType);
                    serializer.Serialize(xmlWriter, entity.Instance);
                }
                return writer.ToString();
            }
        }
        catch (Exception exception)
        {
            return exception.ToString();
        }
    }
}
This ResponseLogger is registered the usual way:
ResourceSpace.Uses.PipelineContributor<ResponseLogger>();
As mentioned, this doesn't necessarily give you the exact stream of bytes sent to the browser, but it is close enough for my needs, since the stream of bytes sent to the browser is basically just the same serialized entity.
By writing your own codec, you can, with no more than 100 lines of code, tap into the IMediaTypeWriter.WriteTo() method, which I would guess is the last line of defense before your bytes are transferred out to the client. Within it, you basically just do something simple like this:
public void WriteTo(object entity, IHttpEntity response, string[] parameters)
{
    using (var writer = XmlWriter.Create(response.Stream))
    {
        XmlSerializer serializer = new XmlSerializer(entity.GetType());
        serializer.Serialize(writer, entity);
    }
}
If, instead of writing directly to IHttpEntity.Stream, you write to a StringWriter and call ToString() on it, you'll have the serialized entity, which you can log and do whatever you want with before writing it to the output stream.
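For illustration, a sketch of that variant (the Log call is a placeholder for whatever logging mechanism you use):

public void WriteTo(object entity, IHttpEntity response, string[] parameters)
{
    // Serialize to a string first so it can be logged, then copy it to the response stream.
    string serialized;
    using (var stringWriter = new StringWriter())
    {
        using (var xmlWriter = XmlWriter.Create(stringWriter))
        {
            var serializer = new XmlSerializer(entity.GetType());
            serializer.Serialize(xmlWriter, entity);
        }
        serialized = stringWriter.ToString();
    }

    Log(serialized); // placeholder: log or inspect the serialized entity here

    var bytes = Encoding.UTF8.GetBytes(serialized);
    response.Stream.Write(bytes, 0, bytes.Length);
}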
While all of the above example code is based on XML serialization and deserialization, the same principle should apply no matter what format your application is using.
Over the weekend I realized that an application I'm working on, which uses NHibernate as an ORM over a SQLite database, has a concurrency issue.
I'm essentially looping through a collection in javascript and executing the following:
var item = new Item();
item.id = 1;
item.name = 2;
$.post("Item/Save", $.toJSON(item), function(data, testStatus) {
/*User can be notified that the item was saved successfully*/
}, "text");
And my server code looks like this:
public ActionResult Save()
{
    string json = Request.Form[0];
    var serializer = new DataContractJsonSerializer(typeof(JsonItem));
    var memoryStream = new MemoryStream(Encoding.Unicode.GetBytes(json));
    JsonItem item = (JsonItem)serializer.ReadObject(memoryStream);
    memoryStream.Close();
    SaveItem(item);
    return Content("success");
}
The concurrency issue obviously occurs because the loop calls Save() for each element iterated, but I'm not sure how to account for and prevent this. Any advice is appreciated.
What is the concurrency issue?
I didn't understand your problem with concurrency.
Comment: if you iterate the collection AND the postback reloads the window... hmmm... there is a potential problem here. The first postback will throw away any pending work, completely refreshing the page.
Suggestion: don't iterate, send the complete collection in one Ajax call.
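A minimal sketch of that single-call approach on the server side (the SaveAll action name and the list deserialization are assumptions; JsonItem and SaveItem are reused from the question):

// Sketch: accept the whole collection in one request and save each item server-side.
public ActionResult SaveAll()
{
    string json = Request.Form[0];
    var serializer = new DataContractJsonSerializer(typeof(List<JsonItem>));
    using (var memoryStream = new MemoryStream(Encoding.Unicode.GetBytes(json)))
    {
        var items = (List<JsonItem>)serializer.ReadObject(memoryStream);
        foreach (var item in items)
            SaveItem(item); // same per-item save as in the question
    }
    return Content("success");
}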
I have the following code, which I stripped of any non-essential lines to leave the minimum reproducible case. What I expect is for it to return the image, but it doesn't. As far as I can see it returns an empty file:
public ActionResult Thumbnail(int id) {
    var question = GetQuestion(db, id);
    var image = new Bitmap(question.ImageFullPath);
    MemoryStream stream = new MemoryStream();
    image.Save(stream, ImageFormat.Jpeg);
    return new FileStreamResult(stream, "image/jpeg");
}
Can you identify what's wrong with this code? In the debugger I can see that the stream grows in size so it seems to be getting the data although I haven't been able to verify it's the correct data. I have no idea how to debug the FileStreamResult itself.
You need to insert
stream.Seek(0, SeekOrigin.Begin);
after the call to
image.Save()
This will rewind the stream to the beginning of the saved image. Otherwise the stream is positioned at the end, and nothing is sent to the receiver.
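For completeness, the corrected action would look like this (the same code as in the question, with the seek added):

public ActionResult Thumbnail(int id) {
    var question = GetQuestion(db, id);
    var image = new Bitmap(question.ImageFullPath);
    MemoryStream stream = new MemoryStream();
    image.Save(stream, ImageFormat.Jpeg);
    stream.Seek(0, SeekOrigin.Begin); // rewind so FileStreamResult reads from the start
    return new FileStreamResult(stream, "image/jpeg");
}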
Try rewinding the MemoryStream. The "cursor" is left at the end of the stream and there is nothing to read until you "rewind" the stream to the beginning.
image.Save( stream, ImageFormat.Jpeg );
stream.Seek( 0, SeekOrigin.Begin );
return new FileStreamResult( stream, "image/jpeg" );