Not enough storage is available for `Console.ReadLine`

I am using a dual service/console model to test a service of mine. The code in the spotlight is:
static void Main(string[] args)
{
    // Seems important to use the same service instance, regardless of debug or runtime.
    var service = new HostService();
    service.EventLog.EntryWritten += EventLogEntryWritten;
    if (Environment.UserInteractive)
    {
        service.OnStart(args);
        Console.WriteLine("Host Service is running. Press any key to terminate.");
        Console.ReadLine();
        service.OnStop();
    }
    else
    {
        var servicesToRun = new ServiceBase[] { service };
        Run(servicesToRun);
    }
}
When I run the app under the debugger (F5), the line Console.ReadLine(); throws a System.IO.IOException: "Not enough storage is available to process this command."
The only purpose of the ReadLine is to wait until someone presses a key to end the app, so I can't imagine where the data is coming from that needs so much storage.

This is a service, so the project's output type is likely set to Windows Application. Change the output type to Console Application and the error should go away.

I'm having the same problem. I found the setting under project properties, but I am creating a Windows application, so I cannot change the application type.
This is the code I use:
Dim t As Task = New Task(AddressOf DownloadPageAsync)
t.Start()
Console.WriteLine("Downloading page...")
Console.ReadLine()

Async Sub DownloadPageAsync()
    Using client As HttpClient = New HttpClient()
        Using response As HttpResponseMessage = Await client.GetAsync(page)
            Using content As HttpContent = response.Content
                ' Get contents of page as a String.
                Dim result As String = Await content.ReadAsStringAsync()
                ' If data exists, print a substring.
                ' AndAlso short-circuits, so result.Length is not evaluated when result is Nothing.
                If result IsNot Nothing AndAlso result.Length > 50 Then
                    Console.WriteLine(result.Substring(0, 50) + "...")
                End If
            End Using
        End Using
    End Using
End Sub
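If the project type really cannot be changed to Console Application, one possible workaround (a C# sketch only, untested in this exact setup) is to attach a console to the process explicitly via the Win32 AllocConsole function before touching Console.ReadLine():

```csharp
using System;
using System.Runtime.InteropServices;

static class Program
{
    // AllocConsole is a documented kernel32 API that attaches a new console
    // window to a process built as a Windows Application.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AllocConsole();

    static void Main()
    {
        AllocConsole(); // without a console, Console.ReadLine() can fail with an IOException
        Console.WriteLine("Press Enter to exit.");
        Console.ReadLine();
    }
}
```

This only applies on Windows; on a Console Application project the call is unnecessary.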


Can I abort a large FileStreamResult download on server side?

I have an action that simply downloads a file. Sometimes the user wants to abort the download rather than wait for it to finish when the file is large.
private IActionResult Download(string path)
{
    var length = new FileInfo(path).Length;
    Response.Headers.Add("size", length.ToString());
    Stream str = System.IO.File.OpenRead(path);
    return File(str, "application/x-zip-compressed");
}
What I would like to know is whether we can abort that process on the backend, perhaps by keeping a task in memory that can be cancelled from another request.
EDITED for better understanding: the file is downloaded by a different device, which calls the download action after I order, from my browser, that the device should download the file. So the task needs to be aborted on the server side by calling an action from my browser.
On the server side, you can get notification when the request is aborted via HttpContext.RequestAborted, which is a CancellationToken that is cancelled when the underlying connection for the request is aborted. So you could pass it down into asynchronous methods that you call if they support cancellation, or you could hook a callback to process cancellation logic via CancellationToken.Register().
However, in your code sample, this is not necessary. All you are doing is creating a FileStream without reading the contents of the file. The actual reading of the file content is performed by the framework, when it processes the FileStreamResult which is created by the call to File(str, "application/x-zip-compressed"). As you can see from the source code for FileResultExecutorBase, the framework will automatically cancel the file reading when HttpContext.RequestAborted is cancelled.
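For cases where you do read the stream yourself, honoring the token could be sketched roughly like this (assuming ASP.NET Core; the 81920 buffer size and action shape are illustrative):

```csharp
[HttpGet]
public async Task<IActionResult> DownloadManually(string path)
{
    CancellationToken token = HttpContext.RequestAborted;
    // Optional: run cleanup logic the moment the client disconnects.
    token.Register(() => Console.WriteLine("Client aborted the download."));

    Response.ContentType = "application/x-zip-compressed";
    using (var source = System.IO.File.OpenRead(path))
    {
        // CopyToAsync observes the token and stops copying once it is cancelled.
        await source.CopyToAsync(Response.Body, 81920, token);
    }
    return new EmptyResult();
}
```

The key point is simply that the same token is threaded through every asynchronous call that supports cancellation.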
Well, I finally solved this issue by keeping a global collection of HttpContexts. I store the download request's HttpContext when my download action is called and the download starts. Then, in a different action/method, I look up the stored context and call its .Abort() method.
private void CancelDownload(string key) // action, method...
{
    string uniqueKeyDownload = externalDevice.ID + "#" + fileID;
    log.Information("Cancelling download 1/3..." + uniqueKeyDownload);
    if (Startup.MyDictionaryContexts.ContainsKey(uniqueKeyDownload))
    {
        log.Information("Cancelling download 2/3..." + uniqueKeyDownload);
        if (!Startup.MyListDownloadsCancelled.Contains(uniqueKeyDownload))
        {
            log.Information("Cancelling download 3/3..." + uniqueKeyDownload);
            Startup.MyListDownloadsCancelled.Add(uniqueKeyDownload);
            Startup.MyDictionaryContexts[uniqueKeyDownload].Abort();
        }
    }
}
public IActionResult Download(string path)
{
    string uniqueKeyDownload = externalDevice.ID + "#" + fileID;
    if (!Startup.MyDictionaryContexts.ContainsKey(uniqueKeyDownload))
    {
        Startup.MyDictionaryContexts.Add(uniqueKeyDownload, HttpContext);
    }
    HttpContext.Response.OnCompleted(async () =>
    {
        // Remove the context from the dictionary.
        if (Startup.MyDictionaryContexts.ContainsKey(uniqueKeyDownload))
            Startup.MyDictionaryContexts.Remove(uniqueKeyDownload);
        // Log whether the event fired on completion or on an abort call.
        if (Startup.MyListDownloadsCancelled.Contains(uniqueKeyDownload))
        {
            Startup.MyListDownloadsCancelled.Remove(uniqueKeyDownload);
            log.Information("DOWNLOAD CANCELLED!!!");
        }
        else
            log.Information("DOWNLOAD COMPLETED!!!");
    });
    // If you are downloading in a browser and it re-requests the action,
    // guard this block with conditions and return NoContent() or similar.
    var length = new FileInfo(path).Length;
    Response.Headers.Add("size", length.ToString());
    Stream str = System.IO.File.OpenRead(path);
    log.Information("DOWNLOAD STARTED..." + uniqueKeyDownload);
    return File(str, "application/x-zip-compressed");
}
I've realized that the behavior differs depending on the client that is downloading the file. If the client is an HTTP client in Java (my case), there will be a socket exception at the client and that's all. But if you are downloading in a browser (tested with Chrome), after calling .Abort() the browser will request the download action one more time, and in that case you will have to manage some flags to return NoContent() or similar instead of returning File(...) again.

Response Header issue on Azure Web Application

I am not sure what is happening here.
When I run my web application locally and click a button to download a file, the file downloads fine, and you can see the Response headers in the attached screenshot where it says "local".
But when I publish the application to an Azure web app, the download button stops working. I checked the Response headers and you can see the difference.
What would cause this problem? The code is the same. Are there any settings that I should be setting for the web app in the Azure portal?
Updated to add code
I have debugged remotely to figure out what is going on, as @Amor suggested.
It is so strange: when I debug on my local machine, the ExportTo action gets hit first, which prepares the TempData; then the Download action gets called via an ajax call once the first action completes.
However, this is not the case when I debug remotely. Somehow the ExportTo action never gets called; it directly calls the Download action. As a result the TempData null check always finds null.
But why? Why on earth, and how is that possible? Is there something cached somewhere?
I have wiped the content of the web application on the remote and re-published everything to ensure everything is updated. But still no success.
here is the code:
[HttpPost]
public virtual ActionResult ExportTo(SearchVm searchVm)
{
    var data = _companyService.GetCompanieBySearchTerm(searchVm).Take(150).ToList();
    string handle = Guid.NewGuid().ToString();
    TempData[handle] = data;
    var fileName = $"C-{handle}.xlsx";
    var locationUrl = Url.Action("Download", new { fileGuid = handle, fileName });
    var downloadUrl = Url.Action("Download");
    return Json(new { success = true, locationUrl, guid = handle, downloadUrl }, JsonRequestBehavior.AllowGet);
}

[HttpGet]
public ActionResult Download(string fileGuid, string fileName)
{
    if (TempData[fileGuid] != null)
    {
        var fileNameSafe = $"C-{fileGuid}.xlsx";
        var data = TempData[fileGuid] as List<Company>;
        using (MemoryStream ms = new MemoryStream())
        {
            GridViewExtension.WriteXlsx(GetGridSettings(fileNameSafe), data, ms);
            MVCxSpreadsheet mySpreadsheet = new MVCxSpreadsheet();
            ms.Position = 0;
            mySpreadsheet.Open("myDoc", DocumentFormat.Xlsx, () =>
            {
                return ms;
            });
            mySpreadsheet.Document.Worksheets.Insert(0);
            var image = Server.MapPath("~/images/logo.png");
            var worksheet = mySpreadsheet.Document.Worksheets[0];
            worksheet.Name = "Logo";
            worksheet.Pictures.AddPicture(image, worksheet.Cells[0, 0]);
            byte[] result = mySpreadsheet.SaveCopy(DocumentFormat.Xlsx);
            DocumentManager.CloseDocument("myDoc");
            Response.Clear();
            //Response.AppendHeader("Set-Cookie", "fileDownload=true; path=/");
            Response.ContentType = "application/force-download";
            Response.AddHeader("content-disposition", $"attachment; filename={fileNameSafe}");
            Response.BinaryWrite(result);
            Response.End();
        }
    }
    return new EmptyResult();
}
Here is the JavaScript:
var exportData = function (urlExport) {
    console.log('Export to link in searchController: ' + urlExport);
    ExportButton.SetEnabled(false);
    var objData = new Object();
    var filterData = companyFilterData(objData);
    console.log(filterData);
    $.post(urlExport, filterData)
        .done(function (data) {
            console.log(data.locationUrl);
            window.location.href = data.locationUrl;
        });
};
When the Export button is clicked, the exportData function is called:
var exportToLink = '#Url.Action("ExportTo")';
console.log('Export to link in index: '+exportToLink);
SearchController.exportData(exportToLink);
As I mentioned, this code works perfectly on the local machine. Something weird is happening on the Azure web app: the ExportTo action's breakpoint never gets hit.
I am not sure what else I could change to get the ExportTo action hit.
Based on the Response headers of the Azure Web App, we find that the value of Content-Length is 0. It means that no data has been sent from the web app server side.
In ASP.NET MVC, we can send a file in the response in the following ways.
The first way: send a file that is hosted on the server. For this way, please check whether the Excel file has been uploaded to the Azure Web App. You could use Kudu or FTP to check whether the file exists in the folder.
string fileLocation = Server.MapPath("~/Content/myfile.xlsx");
string contentType = System.Net.Mime.MediaTypeNames.Application.Octet;
string fileName = "file.xlsx";
return File(fileLocation, contentType, fileName);
The second way: read the file from any location (database, server, or Azure storage) and send the file content to the client side. For this way, please check whether the file has been read successfully. You can remote debug your Azure web app to check whether the file content is being read the right way.
byte[] fileContent = GetFileContent();
string contentType = System.Net.Mime.MediaTypeNames.Application.Octet;
string fileName = "file.xlsx";
return File(fileContent, contentType, fileName);
5/27/2017 Update
Somehow the ExportTo action never gets called. It directly calls the Download action. As a result the TempData null checking is always null.
How many instances does your Web App have assigned? If your Web App has multiple instances, the ExportTo request is handled by one instance and the Download request by another. Since TempData is stored in the memory of a single instance, it can't be retrieved from another instance. From the remote debugging documentation, I found out why the ExportTo action never appears to be called:
If you do have multiple web server instances, when you attach to the debugger you'll get a random instance, and you have no way to ensure that subsequent browser requests will go to that instance.
To solve this issue, I suggest you respond with the data directly from the ExportTo action, or save the temp data in Azure Blob storage, which can be accessed from all instances.
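Responding directly from the ExportTo action could look roughly like this (a sketch only; BuildXlsx is a hypothetical helper wrapping the spreadsheet code currently in Download, and the ajax/TempData round trip is dropped):

```csharp
[HttpPost]
public virtual ActionResult ExportTo(SearchVm searchVm)
{
    var data = _companyService.GetCompanieBySearchTerm(searchVm).Take(150).ToList();
    // Hypothetical helper that produces the xlsx bytes in this same request.
    byte[] xlsxBytes = BuildXlsx(data);
    var fileName = $"C-{Guid.NewGuid()}.xlsx";
    // Both "prepare" and "download" now happen in one request on one instance,
    // so no per-instance TempData is involved.
    return File(xlsxBytes, System.Net.Mime.MediaTypeNames.Application.Octet, fileName);
}
```

The trade-off is that the browser must treat the POST response as a file download instead of navigating to a second URL.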

DisplayModeProvider issues

Using DisplayModeProvider to choose between views for "Desktop", "Tablet" and "Phone" in an MVC5 web application. It is my understanding that this class checks the providers in order and uses the first provider that returns True. However, when I step through the code, I find there is a repeated cycle through the code (it goes through multiple times, sometimes over 10 cycles) before deciding on the proper mode. I'm using WURFL Cloud for device detection, and I've started caching WURFL results in a Session variable. I think there must be something wrong with my code and/or logic. It's in VB.NET since it's an evolution of a legacy project. The first block of code is in Application_Start in global.asax; before, it was in a separate class, but I moved it to global.asax in an attempt to solve this problem.
DisplayModeProvider.Instance.Modes.Clear()
DisplayModeProvider.Instance.Modes.Add(New DefaultDisplayMode("Phone") With {.ContextCondition = Function(c) c.Request.IsPhone})
DisplayModeProvider.Instance.Modes.Add(New DefaultDisplayMode("Tablet") With {.ContextCondition = Function(c) c.Request.IsTablet})
DisplayModeProvider.Instance.Modes.Add(New DefaultDisplayMode("") With {.ContextCondition = Function(c) c.Request.IsDesktop})
My understanding is the function would check each context condition and stop at the first one that returns True. However, as mentioned above, the code executes repeatedly even though one of the functions returns True. Here are the extension methods I'm using. They reside in a module; the error-handling code was added after a perceived outage of the WURFL Cloud. Each is decorated with the following: System.Runtime.CompilerServices.Extension
Public Function IsPhone(request As HttpRequestBase) As Boolean
    Dim ans As Boolean
    Try
        If Not HttpContext.Current.Session("IsPhone") Is Nothing Then
            ans = HttpContext.Current.Session("IsPhone")
        Else
            wsm = New WURFLServiceModel(New HttpContextWrapper(HttpContext.Current))
            ans = wsm.IsPhone
            HttpContext.Current.Session("IsPhone") = ans
        End If
    Catch ex As Exception
        ...
    End Try
    Return ans
End Function

Public Function IsTablet(request As HttpRequestBase) As Boolean
    Dim ans As Boolean
    Try
        If Not HttpContext.Current.Session("IsTablet") Is Nothing Then
            ans = HttpContext.Current.Session("IsTablet")
        Else
            wsm = New WURFLServiceModel(New HttpContextWrapper(HttpContext.Current))
            ans = wsm.IsTablet
            HttpContext.Current.Session("IsTablet") = ans
        End If
    Catch ex As Exception
        ...
    End Try
    Return ans
End Function

Public Function IsDesktop(request As HttpRequestBase) As Boolean
    Return True
End Function
Here is the code for the WURFLServiceModel:
Imports ScientiaMobile.WurflCloud.Device

Public Class WURFLServiceModel
    Private mIsIOS As Boolean
    Private mIsTablet As Boolean
    Private mIsPhone As Boolean
    Private mBrandName As String
    Private mModelName As String
    Private mResponse As String
    Private mErrors As Dictionary(Of String, String)
    Private api_Key As String = "xxxxxxxxxxxxxxxxxxxxxxxxxx"

    Public Sub New(ByVal request As HttpContextBase)
        GetDataByRequest(request)
    End Sub

    Public Sub GetDataByRequest(context As HttpContextBase)
        Dim config = New DefaultCloudClientConfig(api_Key)
        Dim manager = New CloudClientManager(config)
        Dim info = manager.GetDeviceInfo(context)
        mIsIOS = info.Capabilities("is_ios")
        mIsPhone = info.Capabilities("is_smartphone")
        mIsTablet = info.Capabilities("is_tablet")
        mBrandName = info.Capabilities("brand_name")
        mModelName = info.Capabilities("model_name")
        mErrors = info.Errors
        mResponse = info.ResponseOrigin
    End Sub

    Public ReadOnly Property IsDesktop As Boolean
        Get
            Return True
        End Get
    End Property

    Public ReadOnly Property IsIOS As Boolean
        Get
            Return mIsIOS
        End Get
    End Property

    Public ReadOnly Property IsTablet As Boolean
        Get
            Return mIsTablet
        End Get
    End Property

    Public ReadOnly Property IsPhone As Boolean
        Get
            Return mIsPhone
        End Get
    End Property
End Class
Although the application runs without error, I can't believe this cycling through the routine should be happening. I would like to clear it up, if possible. What am I doing wrong? Many thanks in advance!
As I see it, the issue has more to do with the internal implementation of MVC display modes than with the WURFL API. The code bound to the display mode delegate is called back by ASP.NET MVC for each request to render a view, including partial views. This obviously results in multiple calls being made to the WURFL API.
In addition, the WURFL Cloud API takes a while to respond because it has to make an HTTP request to the cloud, parse a cookie, and figure out details. The WURFL Cloud is clearly slower than the on-premise WURFL API, which uses a direct-access memory cache to pick up details of the user agent. I have used WURFL and MVC in a number of web sites and just went through this; for most of those sites I managed to get an on-premise license.
As for the cloud, some per-request internal caching, perhaps within your WURFLServiceModel class, would be helpful so that you end up making a single cloud request per rendering of the view. I don't particularly like the use of Session, but that could just be me; Session is still an acceptable way of doing the "internal caching" I was suggesting above.
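The per-request caching idea could be sketched like this (in C# for brevity; HttpContext.Items is scoped to a single request, unlike Session, so repeated display-mode callbacks within one request reuse the first lookup):

```csharp
using System;
using System.Web;

public static class WurflPerRequestCache
{
    // Caches a computed value once per request. MVC may invoke the
    // display-mode delegates many times per request (including for partial
    // views), so this bounds the work to one WURFL lookup per request.
    public static T GetOrAdd<T>(HttpContextBase context, string key, Func<T> factory)
    {
        if (!context.Items.Contains(key))
        {
            context.Items[key] = factory();
        }
        return (T)context.Items[key];
    }
}

// Hypothetical usage inside an extension method:
// bool isPhone = WurflPerRequestCache.GetOrAdd(
//     new HttpContextWrapper(HttpContext.Current), "IsPhone",
//     () => new WURFLServiceModel(new HttpContextWrapper(HttpContext.Current)).IsPhone);
```

Session caching, as in the question, persists across requests per user; Items caching avoids keeping device flags alive for the whole session.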
Luca Passani, ScientiaMobile CTO here. I have instructed the support and engineering teams to reach out to you offline and work with you to get to the bottom of the issue you are experiencing. For the SO admins: we will report a summary here when the issue is identified and resolved. Thanks.

Getting 401 Unauthorized from Google API Resumable Update

I am trying to integrate upload of arbitrary files to Google Docs into an existing application. This used to work before using resumable upload became mandatory. I am using Java client libraries.
The application is doing the upload in 2 steps:
- get the resourceId of the file
- upload the data
To get the resourceId I am uploading a 0-size file (i.e. Content-Length=0). I am passing ?convert=false in the resumable URL (i.e. https://docs.google.com/feeds/upload/create-session/default/private/full?convert=false).
I am passing "application/octet-stream" as the content type. This seems to work, though I do get different resourceIds: "file:..." resourceIds for things like images, but "pdf:..." resourceIds for PDFs.
The second step constructs a URL based on the resourceId obtained previously and performs a search (getEntry). The URL is in the form https://docs.google.com/feeds/default/private/full/file%3A.....
Once the entry is found, ResumableGDataFileUploader is used to update the content (the 0-byte file) with the actual data from the file being uploaded. This operation fails with a 401 Unauthorized response when building the ResumableGDataFileUploader instance.
I've tried with ?convert=false as well as ?new-revision=true, and both of these at the same time. The result is the same.
The relevant piece of code:
MediaFileSource mediaFile = new MediaFileSource(tempFile, "application/octet-stream");
final ResumableGDataFileUploader.Builder builder =
    new ResumableGDataFileUploader.Builder(client, mediaFile, documentListEntry);
builder.executor(MoreExecutors.sameThreadExecutor());
builder.requestType(ResumableGDataFileUploader.RequestType.UPDATE);
// This is where it fails
final ResumableGDataFileUploader resumableGDataFileUploader = builder.build();
resumableGDataFileUploader.start();
return tempFile.length();
The "client" is an instance of DocsService, configured to use OAuth. It is used to find "documentListEntry" immediately before the given piece of code.
I had to explicitly specify the request type, since it seems the client library code contains a bug causing a NullPointerException for the "update existing entry" case.
I have a suspicion that the issue is specifically in the sequence of actions (upload 0-byte file to get the resourceId, then update with actual file) but I can't figure out why it doesn't work.
Please help?
This code snippet works for me using OAuth 1.0 and OAuth 2.0:
static void uploadDocument(DocsService client) throws IOException, ServiceException,
    InterruptedException {
  ExecutorService executor = Executors.newFixedThreadPool(10);
  File file = new File("<PATH/TO/FILE>");
  String mimeType = DocumentListEntry.MediaType.fromFileName(file.getName()).getMimeType();
  DocumentListEntry documentEntry = new DocumentListEntry();
  documentEntry.setTitle(new PlainTextConstruct("<DOCUMENT TITLE>"));
  int DEFAULT_CHUNK_SIZE = 2 * 512 * 1024;
  ResumableGDataFileUploader.Builder builder =
      new ResumableGDataFileUploader.Builder(
          client,
          new URL(
              "https://docs.google.com/feeds/upload/create-session/default/private/full?convert=false"),
          new MediaFileSource(file, mimeType), documentEntry).title(file.getName())
          .requestType(RequestType.INSERT).chunkSize(DEFAULT_CHUNK_SIZE).executor(executor);
  ResumableGDataFileUploader uploader = builder.build();
  Future<ResponseMessage> msg = uploader.start();
  while (!uploader.isDone()) {
    try {
      Thread.sleep(100);
    } catch (InterruptedException ie) {
      throw ie; // rethrow
    }
  }
  DocumentListEntry uploadedEntry = uploader.getResponse(DocumentListEntry.class);
  // Print the document's ID.
  System.out.println(uploadedEntry.getId());
  System.out.println("Upload is done!");
}

Execute .NET application (no-install) from webpage (intranet) and pass argument(s)?

I built an intranet on .NET MVC. I'm also building a separate planning tool in WinForms (a performance choice). I would now like to 'open' the planning tool from the intranet (IE7) and pass an argument (e.g. a work order number) so I can display the planning for that specific item. Is this possible?
I have a .application file for the WinForms application. I'm also able to change everything on both the .NET MVC intranet and the WinForms planning tool.
You can't simply call the application from the HTML; that would be a security hole. However, you can have the application register to be able to handle these requests via the registry. You say "no-install", so this might be a problem. Maybe your app could modify the registry on the first load.
Anyway, the app would register to handle a specific protocol (like when you click on an itunes:// or ftp:// link).
Instead you'd have something like:
View workflow #3472
which then launches your app with the argument specified.
See http://msdn.microsoft.com/en-us/library/aa767914(VS.85).aspx . You say IE7, but this should work with other browsers too, once the protocol is registered.
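Registering such a protocol follows the pluggable-protocol layout from the MSDN page above. A sketch of the registry entries (the protocol name "planningtool" and the exe path are illustrative, not from the original post):

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\planningtool]
@="URL:Planning Tool Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\planningtool\shell\open\command]
@="\"C:\\Path\\To\\PlanningTool.exe\" \"%1\""
```

A link such as planningtool://3472 would then launch the exe with the full URL as its first argument, which the WinForms app can parse to extract the work order number.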
Yes, you can do it.
private string _output = "";

public string Execute()
{
    try
    {
        Process process = new Process();
        process.OutputDataReceived += new DataReceivedEventHandler(process_OutputDataReceived);
        process.StartInfo.FileName = "path to exe";
        process.StartInfo.Arguments = "here you can pass arguments to exe";
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.RedirectStandardOutput = true;
        // The child process runs under the current user's context by default.
        process.Start();
        process.BeginOutputReadLine();
        process.WaitForExit();
        return _output;
    }
    catch (Exception error)
    {
        return "ERROR : " + error.Message;
    }
}

private void process_OutputDataReceived(object sender, DataReceivedEventArgs e)
{
    if (e.Data != null)
    {
        _output += e.Data + Environment.NewLine;
    }
}
It's a simple example. You can also read the error stream from the exe alongside the output stream.
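Capturing stderr works the same way via RedirectStandardError; a minimal sketch (the exe path is a placeholder, and the tuple-returning helper shape is illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Text;

class ProcessRunner
{
    public static (string Output, string Errors) Run(string exePath, string args)
    {
        var output = new StringBuilder();
        var errors = new StringBuilder();
        var process = new Process();
        process.StartInfo.FileName = exePath;
        process.StartInfo.Arguments = args;
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        // The DataReceived events fire on thread-pool threads,
        // so no manually created threads are needed.
        process.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
        process.ErrorDataReceived += (s, e) => { if (e.Data != null) errors.AppendLine(e.Data); };
        process.Start();
        process.BeginOutputReadLine();
        process.BeginErrorReadLine();
        process.WaitForExit();
        return (output.ToString(), errors.ToString());
    }
}
```

Reading both streams asynchronously also avoids the classic deadlock where the child blocks on a full stderr pipe while the parent waits on stdout.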
