Response Header issue on Azure Web Application - asp.net-mvc

I am not sure what is happening here.
When I run my web application locally and click a button to download a file, the file downloads fine and the Response header looks correct (see the "local" screenshot).
But when I publish the application to an Azure web app, the download button stops working. I checked the Response header and you can see the difference.
What would cause this problem? The code is the same. Are there any settings I should be configuring for the web app in the Azure portal?
Updated to add code
I have debugged remotely to figure out what is going on, as @Amor suggested.
It is strange: when I debug on my local machine, the ExportTo action gets hit first and prepares the TempData, then the Download action is called via the AJAX callback once the first action completes.
However, this is not the case when I debug remotely. Somehow the ExportTo action never gets called; the request goes directly to the Download action. As a result the TempData null check always finds null.
But why? How is that even possible? Is something cached somewhere?
I have wiped the content of the web application on the remote and re-published everything to ensure it is all up to date, but still no success.
Here is the code:
[HttpPost]
public virtual ActionResult ExportTo(SearchVm searchVm)
{
    var data = _companyService.GetCompanieBySearchTerm(searchVm).Take(150).ToList();
    string handle = Guid.NewGuid().ToString();
    TempData[handle] = data;

    var fileName = $"C-{handle}.xlsx";
    var locationUrl = Url.Action("Download", new { fileGuid = handle, fileName });
    var downloadUrl = Url.Action("Download");

    return Json(new { success = true, locationUrl, guid = handle, downloadUrl }, JsonRequestBehavior.AllowGet);
}
[HttpGet]
public ActionResult Download(string fileGuid, string fileName)
{
    if (TempData[fileGuid] != null)
    {
        var fileNameSafe = $"C-{fileGuid}.xlsx";
        var data = TempData[fileGuid] as List<Company>;

        using (MemoryStream ms = new MemoryStream())
        {
            GridViewExtension.WriteXlsx(GetGridSettings(fileNameSafe), data, ms);

            MVCxSpreadsheet mySpreadsheet = new MVCxSpreadsheet();
            ms.Position = 0;
            mySpreadsheet.Open("myDoc", DocumentFormat.Xlsx, () =>
            {
                return ms;
            });
            mySpreadsheet.Document.Worksheets.Insert(0);

            var image = Server.MapPath("~/images/logo.png");
            var worksheet = mySpreadsheet.Document.Worksheets[0];
            worksheet.Name = "Logo";
            worksheet.Pictures.AddPicture(image, worksheet.Cells[0, 0]);

            byte[] result = mySpreadsheet.SaveCopy(DocumentFormat.Xlsx);
            DocumentManager.CloseDocument("myDoc");

            Response.Clear();
            //Response.AppendHeader("Set-Cookie", "fileDownload=true; path=/");
            Response.ContentType = "application/force-download";
            Response.AddHeader("content-disposition", $"attachment; filename={fileNameSafe}");
            Response.BinaryWrite(result);
            Response.End();
        }
    }
    return new EmptyResult();
}
Here is the JavaScript:
var exportData = function (urlExport) {
    console.log('Export to link in searchController: ' + urlExport);
    ExportButton.SetEnabled(false);

    var objData = new Object();
    var filterData = companyFilterData(objData);
    console.log(filterData);

    $.post(urlExport, filterData)
        .done(function (data) {
            console.log(data.locationUrl);
            window.location.href = data.locationUrl;
        });
};
When the Export button is clicked, the exportData function is called:
var exportToLink = '@Url.Action("ExportTo")';
console.log('Export to link in index: ' + exportToLink);
SearchController.exportData(exportToLink);
As I mentioned, this code works perfectly on my local machine. Something weird is happening on the Azure web app: the breakpoint in the ExportTo action never gets hit.
I am not sure what else I could change to get the ExportTo action hit.

Based on the Response Header of the Azure Web App, we find that the value of Content-Length is 0. That means no data has been sent from the web app server side.
In ASP.NET MVC, we can return a file in the following ways.
The first way: send a file that is hosted on the server. For this way, please check whether the Excel file has been uploaded to the Azure Web App. You could use Kudu or FTP to browse to the folder and check whether the file exists.
string fileLocation = Server.MapPath("~/Content/myfile.xlsx");
string contentType = System.Net.Mime.MediaTypeNames.Application.Octet;
string fileName = "file.xlsx";
return File(fileLocation, contentType, fileName);
The second way: read the file from any location (database, server, or Azure Storage) and send the file content to the client side. For this way, please check whether the file has been read successfully. You can remote debug your Azure web app to verify that the file content is being read correctly.
byte[] fileContent = GetFileContent();
string contentType = System.Net.Mime.MediaTypeNames.Application.Octet;
string fileName = "file.xlsx";
return File(fileContent, contentType, fileName);
5/27/2017 Update
Somehow the ExportTo action never gets called; the request goes directly to the Download action. As a result the TempData null check always finds null.
How many instances are assigned to your Web App? If your Web App has multiple instances, the ExportTo request may be handled by one instance while the Download request is handled by another. Since TempData is stored in the memory of a single instance, it can't be read from another instance. According to the remote debugging documentation, I found out the reason why the ExportTo action never gets called:
If you do have multiple web server instances, when you attach to the debugger you'll get a random instance, and you have no way to ensure that subsequent browser requests will go to that instance.
To solve this issue, I suggest you either return the data directly from the ExportTo action, or save the temp data in Azure Blob Storage, which can be accessed from all instances.
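For the first option, a minimal sketch of returning the file directly from ExportTo, reusing the GridViewExtension.WriteXlsx call from the question. Note the button would then need to submit a normal form POST or navigate to the URL instead of using $.post, since the browser will not save a file that comes back to an AJAX call:

[HttpPost]
public virtual ActionResult ExportTo(SearchVm searchVm)
{
    var data = _companyService.GetCompanieBySearchTerm(searchVm).Take(150).ToList();
    var fileNameSafe = $"C-{Guid.NewGuid()}.xlsx";

    using (var ms = new MemoryStream())
    {
        // Same DevExpress export call as in the question's Download action.
        GridViewExtension.WriteXlsx(GetGridSettings(fileNameSafe), data, ms);

        // Return the bytes in the same request; no TempData, so it works with any number of instances.
        return File(ms.ToArray(),
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
            fileNameSafe);
    }
}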

Related

Can I abort a large FileStreamResult download on server side?

I have an action from which I simply download a file. Sometimes the user wants to abort the download rather than wait for it to finish if the file is large.
private IActionResult Download(string path)
{
    var length = new FileInfo(path).Length;
    Response.Headers.Add("size", length.ToString());

    Stream str = System.IO.File.OpenRead(path);
    return File(str, "application/x-zip-compressed");
}
What I would like to know is whether we can abort that process on the backend, maybe by storing a task in memory that can be cancelled from another process.
EDITED for better understanding: the file is downloaded by a different device, which calls the download method after I order, from my browser, that this device has to download the file. So the task needs to be aborted on the server side by calling an action from my browser.
On the server side, you can get notification when the request is aborted via HttpContext.RequestAborted, which is a CancellationToken that is cancelled when the underlying connection for the request is aborted. So you could pass it down into asynchronous methods that you call if they support cancellation, or you could hook a callback to process cancellation logic via CancellationToken.Register().
However, in your code sample, this is not necessary. All you are doing is creating a FileStream without reading the contents of the file. The actual reading of the file content is performed by the framework, when it processes the FileStreamResult which is created by the call to File(str, "application/x-zip-compressed"). As you can see from the source code for FileResultExecutorBase, the framework will automatically cancel the file reading when HttpContext.RequestAborted is cancelled.
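If you do end up streaming the content yourself, a minimal sketch of honoring HttpContext.RequestAborted; the file path and the logging call are just placeholders:

public async Task<IActionResult> DownloadManually(string path)
{
    var token = HttpContext.RequestAborted;

    // Optional: run custom logic the moment the client disconnects.
    token.Register(() => Console.WriteLine("Client aborted the download."));

    Response.ContentType = "application/x-zip-compressed";
    using (var stream = System.IO.File.OpenRead(path))
    {
        // CopyToAsync stops as soon as the token is cancelled.
        await stream.CopyToAsync(Response.Body, 81920, token);
    }
    return new EmptyResult();
}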
Well, I've finally solved this issue by keeping a global collection of HttpContexts. I store the download request's HttpContext when my download action is called and the download starts. Then, from a different action/method/whatever, I read the context and call its .Abort() method.
private void CancelDownload(string key) //action, method...
{
    string uniqueKeyDownload = externalDevice.ID + "#" + fileID;
    log.Information("Cancelling download 1/3..." + uniqueKeyDownload);

    if (Startup.MyDictionaryContexts.ContainsKey(uniqueKeyDownload))
    {
        log.Information("Cancelling download 2/3..." + uniqueKeyDownload);
        if (!Startup.MyListDownloadsCancelled.Contains(uniqueKeyDownload))
        {
            log.Information("Cancelling download 3/3..." + uniqueKeyDownload);
            Startup.MyListDownloadsCancelled.Add(uniqueKeyDownload);
            Startup.MyDictionaryContexts[uniqueKeyDownload].Abort();
        }
    }
}
public IActionResult Download(string path)
{
    string uniqueKeyDownload = externalDevice.ID + "#" + fileID;

    if (!Startup.MyDictionaryContexts.ContainsKey(uniqueKeyDownload))
    {
        Startup.MyDictionaryContexts.Add(uniqueKeyDownload, HttpContext);
    }

    HttpContext.Response.OnCompleted(async () =>
    {
        // Remove context from dictionary
        if (Startup.MyDictionaryContexts.ContainsKey(uniqueKeyDownload))
            Startup.MyDictionaryContexts.Remove(uniqueKeyDownload);

        // Log whether the event was fired on completion or on an abort call
        if (Startup.MyListDownloadsCancelled.Contains(uniqueKeyDownload))
        {
            Startup.MyListDownloadsCancelled.Remove(uniqueKeyDownload);
            log.Information("DOWNLOAD CANCELLED!!!");
        }
        else
            log.Information("DOWNLOAD COMPLETED!!!");
    });

    // If you are downloading in a browser and it re-requests the file, manage this block
    // with conditions and return NoContent() or similar
    var length = new FileInfo(path).Length;
    Response.Headers.Add("size", length.ToString());

    Stream str = System.IO.File.OpenRead(path);
    log.Information("DOWNLOAD STARTED..." + uniqueKeyDownload);
    return File(str, "application/x-zip-compressed");
}
I've realized that the behavior differs depending on the client that is downloading the file. If the client is an HTTP client from Java (my case), there is a socket exception on the client side and that's all. But if you are downloading in a browser (tested with Chrome), after calling .Abort() the browser will request the download action one more time, and in that case you will have to manage some flags to return NoContent() or similar instead of returning File(..) again.
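A minimal sketch of that guard at the start of the Download action, reusing the Startup collections from the snippets above:

// If this download was already cancelled, refuse the browser's automatic retry.
if (Startup.MyListDownloadsCancelled.Contains(uniqueKeyDownload))
{
    Startup.MyListDownloadsCancelled.Remove(uniqueKeyDownload);
    return NoContent();
}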

Save file to path desktop for current user

I have an ASP.NET Core 2.0 MVC project running on IIS.
I want to export some information from a data grid to Excel and save it from the web page to the desktop of the current user.
string fileName = "SN-export-" + DateTime.Now + ".xlsx";
Regex rgx = new Regex("[^a-zA-Z0-9 -]");
fileName = rgx.Replace(fileName, ".");
string path = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
string fileName2 = Path.Combine(path, fileName);
FileInfo excelFile = new FileInfo(fileName2);
excel.SaveAs(excelFile);
This works perfectly locally in Visual Studio, but not after publishing to IIS.
Using a simple path such as string path = @"C:\WINDOWS\TEMP"; saves the export file to the server's temp folder, not to the current web page user's machine.
How can I achieve this?
ASP.NET MVC is a framework for web applications, so you have frontend and backend parts. This code is executed on the server side of your application; even if you use Razor pages, they are also generated on the backend. So there are a couple of ways to save the data on the user's computer:
use JS to iterate over the data and save it, but I'm not sure that saving to Excel with JS is easy;
send the desired data to the backend, save it to Excel, and then return it to the client.
For the second way you can use the following code:
[Route("api/[controller]")]
public class DownloadController : Controller {
//GET api/download/12345abc
[HttpGet("{id}"]
public async Task<IActionResult> Download(YourData data) {
Stream stream = await {{__get_stream_based_on_your_data__}}
if(stream == null)
return NotFound();
return File(stream, "application/octet-stream"); // returns a FileStreamResult
}
}
And for security reasons, the browser can only save the file to the user's downloads directory.
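If you also want to suggest a file name for the download (the file still goes to the user's Downloads folder), a minimal sketch using the File overload that sets the Content-Disposition header; the name here is just an example:

// The third argument becomes the suggested download file name in the browser.
return File(stream,
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "SN-export.xlsx");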

Read HTTP Status Code from MVC controller

I am uploading a file to the server. To do that I am using the SOAP web service that the developers of the site created.
This CustomerWebService contains only one void method, transferFile. About 20-30 seconds after the upload I get the confirmation email from the site.
It works fine; however, I would like to get at least an HTTP 200 status right after the execution (like I see right away in Fiddler).
How can I return one in code?
public ActionResult UploadFile(string fileName)
{
    StreamReader sr = new StreamReader(fileName);
    String line = sr.ReadToEnd();

    CustomerWebService ws = new CustomerWebService();
    ws.transferFile("user", "password", line);

    return Json("{Success}", JsonRequestBehavior.AllowGet);
}
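A minimal sketch, assuming classic ASP.NET MVC 5, of making the returned status explicit (the Json result above already responds with HTTP 200; HttpStatusCodeResult simply states it directly):

public ActionResult UploadFile(string fileName)
{
    using (var sr = new StreamReader(fileName))
    {
        var ws = new CustomerWebService();
        ws.transferFile("user", "password", sr.ReadToEnd());
    }

    // Requires using System.Net; this returns an explicit 200 once transferFile has completed.
    return new HttpStatusCodeResult(HttpStatusCode.OK, "File transferred");
}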

Kendo UI Async Upload not working in Internet Explorer

I'm trying to use the Kendo UI Upload (MVC wrapper) in async mode. Things seem to work fine in Chrome, but no such luck in IE (so far only tested in IE 9). When it initiates the upload, I can see it hitting my action method, and the request contains the data I expect, but nothing is actually being saved.
Code samples are below:
_EditForm.cshtml (where the upload is)
@(Html.Kendo().Upload()
    .Name(string.Format("upload{0}", "background"))
    .Multiple(true)
    .Events(evt => evt.Success("refreshBackgroundImages"))
    .Messages(msg => msg.DropFilesHere("drag and drop images from your computer here")
        .StatusUploaded("Files have been uploaded"))
    .Async(a => a.AutoUpload(true)
        .SaveField("files")
        .Save("UploadImage", "Packages", new { siteId = Model.WebsiteId, type = "background" })))
Controller ActionMethod
[HttpPost]
public ActionResult UploadImage(IEnumerable<HttpPostedFileBase> files, Guid siteId, string type)
{
    var site = _websiteService.GetWebsite(siteId);
    var path = Path.Combine(_fileSystem.OutletVirtualPath, site.Outlet.AssetBaseFolder);

    if (type == "background")
    {
        path = Path.Combine(path, _backgroundImageFolder);
    }
    else if (type == "image")
    {
        path = Path.Combine(path, _foregroundImageFolder);
    }

    foreach (var file in files)
    {
        _fileSystem.SaveFile(path, file.FileName, file.InputStream, file.ContentType, true);
    }

    // Return empty string to signify success
    return Content("");
}
Well, as another post said, "Welcome to episode 52,245,315 of 'Why Does Internet Explorer Suck So Badly'":
It turns out that when you read file.FileName on an HttpPostedFileBase in Internet Explorer, it gives you the whole path of the file on the local machine. It's obviously an IE-only thing, as Chrome and Firefox get it right.
Make sure to do the following when you only want the actual file name:
var filename = Path.GetFileName(file.FileName);
The problem is when you actually try to save a file and send back a success response from the server. I don't think your demos are doing any of that. The iframe in IE 9 does not receive the response from the server; the browser thinks the response is a download even though it's just a plain-text JSON response. I debugged it down to the fact that the load event on the iframe never fires, so the onload handler that needs to process this response never runs. In all other browsers this works.
Source: http://www.kendoui.com/forums/kendo-ui-web/upload/async-uploader-and-ie-9-not-working.aspx
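A commonly suggested workaround for the IE iframe case (an assumption here, not something confirmed in this thread) is to send the success response with a text/plain content type so IE does not treat it as a download:

// Empty body with an explicit text/plain content type; the IE iframe transport
// accepts this instead of offering it as a file download.
return Content("", "text/plain");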

Getting 401 Unauthorized from Google API Resumable Update

I am trying to integrate uploading of arbitrary files to Google Docs into an existing application. This used to work before using resumable upload became mandatory. I am using the Java client libraries.
The application does the upload in 2 steps:
- get the resourceId of the file
- upload the data
To get the resourceId I upload a 0-size file (i.e. Content-Length=0). I am passing ?convert=false in the resumable URL (i.e. https://docs.google.com/feeds/upload/create-session/default/private/full?convert=false).
I am passing "application/octet-stream" as the content type. This seems to work, though I do get different resourceIds - "file:..." resourceIds for things like images, but "pdf:..." resourceIds for PDFs.
The second step constructs a URL based on the resourceId obtained previously and performs a search (getEntry). The URL is of the form https://docs.google.com/feeds/default/private/full/file%3A.....
Once the entry is found, a ResumableGDataFileUploader is used to update the content (the 0-byte file) with the actual data from the file being uploaded. This operation fails with a 401 Unauthorized response when building the ResumableGDataFileUploader instance.
I've tried ?convert=false as well as ?new-revision=true, and both of these at the same time. The result is the same.
The relevant piece of code:
MediaFileSource mediaFile = new MediaFileSource(tempFile, "application/octet-stream");

final ResumableGDataFileUploader.Builder builder =
        new ResumableGDataFileUploader.Builder(client, mediaFile, documentListEntry);
builder.executor(MoreExecutors.sameThreadExecutor());
builder.requestType(ResumableGDataFileUploader.RequestType.UPDATE);

// This is where it fails
final ResumableGDataFileUploader resumableGDataFileUploader = builder.build();
resumableGDataFileUploader.start();

return tempFile.length();
The "client" is an instance of DocsService, configured to use OAuth. It is used to find "documentListEntry" immediately before the given piece of code.
I had to explicitly specify the request type, since the client library code seems to contain a bug that causes a NullPointerException in the "update existing entry" case.
I have a suspicion that the issue is specifically in the sequence of actions (upload a 0-byte file to get the resourceId, then update with the actual file), but I can't figure out why it doesn't work.
Please help?
This code snippet works for me using OAuth 1.0 and OAuth 2.0:
static void uploadDocument(DocsService client) throws IOException, ServiceException,
        InterruptedException {
    ExecutorService executor = Executors.newFixedThreadPool(10);

    File file = new File("<PATH/TO/FILE>");
    String mimeType = DocumentListEntry.MediaType.fromFileName(file.getName()).getMimeType();

    DocumentListEntry documentEntry = new DocumentListEntry();
    documentEntry.setTitle(new PlainTextConstruct("<DOCUMENT TITLE>"));

    int DEFAULT_CHUNK_SIZE = 2 * 512 * 1024;
    ResumableGDataFileUploader.Builder builder =
        new ResumableGDataFileUploader.Builder(
            client,
            new URL(
                "https://docs.google.com/feeds/upload/create-session/default/private/full?convert=false"),
            new MediaFileSource(file, mimeType), documentEntry).title(file.getName())
            .requestType(RequestType.INSERT).chunkSize(DEFAULT_CHUNK_SIZE).executor(executor);

    ResumableGDataFileUploader uploader = builder.build();
    Future<ResponseMessage> msg = uploader.start();

    while (!uploader.isDone()) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException ie) {
            throw ie; // rethrow
        }
    }

    DocumentListEntry uploadedEntry = uploader.getResponse(DocumentListEntry.class);

    // Print the document's ID.
    System.out.println(uploadedEntry.getId());
    System.out.println("Upload is done!");
}
