I get this error:
The process cannot access the file '..\Images\Temp\6574_1212665562989_1419270107_30610848_6661938_n.jpg' because it is being used by another process.
When I tried this:
try
{
    var file = Request.Files["FileProfilePicture"];
    file.SaveAs(Server.MapPath("~/Images/Temp/" + file.FileName));

    Bitmap imageOrj = new Bitmap(System.Web.HttpContext.Current.Server.MapPath("~/Images/Temp/" + file.FileName));
    Image imageBig = ResizeImage.Resize(imageOrj, 100, 100);
    imageBig.Save(System.Web.HttpContext.Current.Server.MapPath("~/Images/ProfilePicBig/" + file.FileName));
    Image imageSmall = ResizeImage.Resize(imageOrj, 50, 50);
    imageSmall.Save(System.Web.HttpContext.Current.Server.MapPath("~/Images/ProfilePicSmall/" + file.FileName));

    string[] files = System.IO.Directory.GetFiles(Server.MapPath("~/Images/Temp/"));
    foreach (string pathFile in files)
    {
        System.IO.File.Delete(pathFile);
    }

    return RedirectToAction("Index", "Author");
}
catch (Exception e)
{
    // Turkish: "An error occurred while updating the user information. Please try again later."
    ModelState.AddModelError("", "Kullanıcı bilgileri güncellenirken bir hata oluştu. Lütfen daha sonra tekrar deneyin." + e.Message);
}
How can I fix it? Or is there a better way to keep images as temporary files? Should I keep files in a temp folder at all?
Thanks
Did you make sure you do not have the file in temp open somewhere else?
This error also occurs when a previous run raised an error and the file was never closed. In that case you can try a different file name, or simply delete the orphaned file from the OS side.
I hope this helps...
Otherwise, I usually use a similar way of doing temp files...
Edit: Judging by the comments, the resolution for the issue above was the following: inside a try-catch block, file-backed objects do not get closed automatically if the code fails before they are disposed.
In this specific case the imageOrj object held the temp file open, so call imageOrj.Dispose() once the bitmap editing is finished (or wrap the objects in using blocks).
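For illustration, a minimal sketch of that fix with using blocks, reusing the question's code (it assumes ResizeImage.Resize returns a new Image that the caller owns):

var file = Request.Files["FileProfilePicture"];
string tempPath = Server.MapPath("~/Images/Temp/" + file.FileName);
file.SaveAs(tempPath);

// Dispose the Bitmap and the resized copies before touching the file on disk.
using (Bitmap imageOrj = new Bitmap(tempPath))
using (Image imageBig = ResizeImage.Resize(imageOrj, 100, 100))
using (Image imageSmall = ResizeImage.Resize(imageOrj, 50, 50))
{
    imageBig.Save(Server.MapPath("~/Images/ProfilePicBig/" + file.FileName));
    imageSmall.Save(Server.MapPath("~/Images/ProfilePicSmall/" + file.FileName));
}

// The file lock is released when the using block ends, so the delete now succeeds.
System.IO.File.Delete(tempPath);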
You don't need to save the temp file at all. You can create the bitmap in memory and populate it straight from the request stream; there is no need to write it to disk.
Also note that your deletion loop removes more files than you create: you are attempting to delete every single file under ~/Images/Temp/, which can cause conflicts (i.e. data races between concurrent requests). Delete only the files you just created.
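For illustration, a minimal sketch of the in-memory approach, reusing the question's ResizeImage helper (no temp folder, so nothing needs deleting):

var file = Request.Files["FileProfilePicture"];

// Build the bitmap straight from the upload stream; nothing is written to Temp.
using (Bitmap imageOrj = new Bitmap(file.InputStream))
using (Image imageBig = ResizeImage.Resize(imageOrj, 100, 100))
using (Image imageSmall = ResizeImage.Resize(imageOrj, 50, 50))
{
    imageBig.Save(Server.MapPath("~/Images/ProfilePicBig/" + file.FileName));
    imageSmall.Save(Server.MapPath("~/Images/ProfilePicSmall/" + file.FileName));
}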
Related
I have a NativeScript 6.8 JavaScript app that downloads newer data files. I'm discovering that on iOS I cannot create files within the app folder. (At least, in release builds; in debug builds I can.) I can change my code to read data files from the Documents folder, but how can I pre-populate the Documents folder at build time with the original data files? I'd rather not copy all the data files at run time.
Or, have I misinterpreted the restriction that files cannot be created in the app folder (or subfolders) in iOS release builds?
Live-updating files on iOS is more involved than one might expect. So, yes: you need to access the live-updated files from the Documents folder, exclude those files from iCloud backup, and handle numerous timing conditions, such as a live update running just before what would seem to be the initial copy of the file to the Documents folder (it seems unlikely, but I've seen it happen while testing).
I've included the function I developed below. For context, when I find a file online to be live-updated, I use an appSetting to save the file's date as a string (storing it as a numeric value loses precision).
The function isn't perfect, but for now, it gets the job done. I call this from app.js in a NativeScript 6.8 JavaScript project.
// NativeScript 6.x core modules used below
const appSettings = require("tns-core-modules/application-settings");
const fs = require("tns-core-modules/file-system");
const platform = require("tns-core-modules/platform");

/**
 * Copy files in the app/files folder to the Documents folder so they can be updated.
 */
async function copyFilesToDocuments() {
    let filesCopied = appSettings.getBoolean("copyFilesToDocuments", false);
    if (!filesCopied) { // only copy files on first invocation
        let filesFolder = fs.knownFolders.currentApp().getFolder("files"); // Folder object
        let documentsFolder = fs.knownFolders.documents(); // Folder object
        let fileEntities = await filesFolder.getEntities();
        for (const entity of fileEntities) {
            let sourceFile = fs.File.fromPath(entity.path);
            let targetFilePath = fs.path.join(documentsFolder.path, entity.name);
            let targetDate = parseInt(appSettings.getString(entity.name, "0"), 10); // live-update date or 0
            if (fs.File.exists(targetFilePath) && targetDate > global.dataDate) { // if file has been live-updated
                console.log("app.js copyFilesToDocuments: file '" + entity.name + "' skipped to avoid overwrite.");
                continue; // don't overwrite newer file
            }
            appSettings.remove(entity.name); // remove any live-update timestamp
            let targetFile = fs.File.fromPath(targetFilePath);
            let content = await sourceFile.read();
            try {
                await targetFile.write(content);
                if (platform.isIOS) {
                    // Prevent file from being backed up to iCloud.
                    // See https://stackoverflow.com/questions/58363089/using-nsurlisexcludedfrombackupkey-in-nativescript
                    // See https://stackoverflow.com/questions/26080120/cfurlcopyresourcepropertyforkey-failed-because-passed-url-no-scheme
                    NSURL.fileURLWithPath(targetFilePath).setResourceValueForKeyError(true, NSURLIsExcludedFromBackupKey);
                }
                // console.log("app.js copyFilesToDocuments file copied: " + entity.name);
            } catch (e) {
                console.warn("app.js copyFilesToDocuments error: " + e);
                //TODO: app will fail at this point with some files not found :-(
            } // end catch
        } // end for
        appSettings.setBoolean("copyFilesToDocuments", true);
    } // end files not yet copied
} // end copyFilesToDocuments
I am trying to export some data to Excel and download the file via the browser. I have a method that saves the active workbook and returns it as a byte array, called in the following way:
byte[] doc = be.GetActiveWorkbook(excelApp);
The method GetActiveWorkBook looks like this:
public byte[] GetActiveWorkbook(Application app)
{
    string path = Path.GetTempFileName();
    try
    {
        app.ActiveWorkbook.SaveCopyAs(path);
        return File.ReadAllBytes(path);
    }
    finally
    {
        if (File.Exists(path))
            File.Delete(path);
    }
}
Lastly, the file is returned like this:
var file = File(doc, "application/vnd.ms-excel");
file.FileDownloadName = filename + " " + id + ".xlsx";
return file;
The Excel file is indeed downloaded to the browser; however, it seems that Excel processes remain active in the background even after I close the Excel file on my desktop. Why is this?
Excel Interop is a nightmare to dispose of properly. It's probably worth checking the code outside of the method you've posted to see how you're cleaning up the Excel Application that's passed in:
How do I properly clean up Excel interop objects?
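For illustration, a rough cleanup sketch along the lines of the linked answer; the exact objects to release depend on how the Application is created and used in your calling code:

using System;
using System.Runtime.InteropServices;
using Microsoft.Office.Interop.Excel;

static void CleanUpExcel(Application app, Workbook wb)
{
    // Close and release the workbook before quitting Excel.
    wb.Close(false);
    Marshal.ReleaseComObject(wb);

    app.Quit();
    Marshal.ReleaseComObject(app);

    // Collect the runtime-callable wrappers so the EXCEL.EXE process can exit.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    GC.WaitForPendingFinalizers();
}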
I have the following requirements. I need to upload an Excel file to an MVC-based site. For this I am using Kendo Upload. In the controller action that handles the upload, I need to make a slight modification to the Excel file and then send it back as a file stream. I am using Aspose for the Excel modifications. My question is: can I achieve all of this within one controller action, without the Excel file ever hitting the disk of the web server?
I managed to get this to work by using the synchronous upload mode. My controller action looks like this:
[POST("SaveExcelFile")]
public FileStreamResult Save(IEnumerable<HttpPostedFileBase> files)
{
// The Name of the Upload component is "files"
if (files != null)
{
foreach (var file in files)
{
// Some browsers send file names with full path.
// We are only interested in the file name.
var fileName = Path.GetFileName(file.FileName);
//var physicalPath = Path.Combine(Server.MapPath("~/App_Data"), fileName);
Workbook excel2 = new Workbook(file.InputStream);
excel2.Worksheets.Add("TEST");
Stream stream = new MemoryStream();
excel2.Save(stream, SaveFormat.Excel97To2003);
stream.Position = 0;
return File(stream, "application/vnd.ms-excel", "junk.xls");
// The files are not actually saved in this demo
// file.SaveAs(physicalPath);
}
}
// Return an empty string to signify success
return null;
}
This is only proof-of-concept code, but it gives the idea of what I was trying to achieve: upload a file, manipulate it, and send the modified workbook back down to the client as a stream.
I don't think you can. I have used KendoUI's upload control, and it seems that you only get to manipulate the file after it's written on the server side.
What you can do is first save the file, perform your modification, and then overwrite it; see the sketch below.
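A rough sketch of that approach, assuming Aspose.Cells as in the question and reusing the physicalPath and fileName variables from the answer above:

// 1. Save the upload to disk.
file.SaveAs(physicalPath);

// 2. Load it, modify it, and overwrite the original.
Workbook workbook = new Workbook(physicalPath);
workbook.Worksheets.Add("TEST");
workbook.Save(physicalPath);

// 3. Return the overwritten file to the client.
return File(physicalPath, "application/vnd.ms-excel", fileName);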
I have an MVC Razor application where I am returning a view.
I have overloaded my action to accept a nullable "export" bool which changes the action by adding headers, while still returning the same view as a file (in a new window).
// if there is a value passed in, set the bool to true
if (export.HasValue)
{
    ViewBag.Exporting = true;
    var UniqueFileName = string.Format(@"{0}.xls", Guid.NewGuid());
    Response.AddHeader("content-disposition", "attachment; filename=" + UniqueFileName);
    Response.ContentType = "application/ms-excel";
}
As the file was generated from a view, it's not a real .xls file, so when opening it I get the message "the file format and extension of don't match". So after a Google search, I found THIS POST on SO, where one of the answers uses VBA to open the file on the server (which includes the HTML mark-up) and then save it again (as .xls).
I am hoping to do the same, call the controller action which will call the view and create the .xls file on the server, then have the server open it, save it then return it as a download.
What I don't want to do is to create a new view to return a clean file as the current view is an extremely complex page with a lot of logic which would only need to be repeated (and maintained).
What I have done in the view is to wrap everything except the table in an if statement so that only the table is exported and not the rest of the page layout.
Is this possible?
You can implement the VBA approach in .NET:
private void ConvertToExcel(string srcPath, string outputPath, XlFileFormat format)
{
    if (srcPath == null) { throw new ArgumentNullException("srcPath"); }
    if (outputPath == null) { throw new ArgumentNullException("outputPath"); }

    var excelApp = new Application();
    try
    {
        var wb = excelApp.Workbooks.Open(srcPath);
        try
        {
            wb.SaveAs(outputPath, format);
        }
        finally
        {
            Marshal.ReleaseComObject(wb);
        }
    }
    finally
    {
        excelApp.Quit();
    }
}
You must install Microsoft.Office.Interop and add a reference to the COM object named Microsoft Excel XX.0 Object Library.
Sample usage:
// generate an Excel file from the HTML output of the GenerateHtml action
var generateHtmlUri = new Uri(this.Request.Url, Url.Action("GenerateHtml"));
ConvertToExcel(generateHtmlUri.AbsoluteUri, @"D:\output.xlsx", XlFileFormat.xlOpenXMLStrictWorkbook);
However, I discourage this solution because:
You have to install MS Excel on your web server.
MS Excel may sometimes misbehave, e.g. by prompting a dialog box.
You must find a way to delete the generated Excel file afterwards.
It's an ugly design.
I suggest generating the Excel file directly, because there don't seem to be better ways to convert HTML to Excel than using Excel itself or DocRaptor.
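For instance, a rough sketch of direct generation with ClosedXML (my example library choice; EPPlus or similar would work just as well):

using System.IO;
using ClosedXML.Excel;

public FileContentResult Export()
{
    using (var workbook = new XLWorkbook())
    using (var stream = new MemoryStream())
    {
        var sheet = workbook.Worksheets.Add("Report");
        sheet.Cell(1, 1).Value = "Hello"; // populate from your model/view data instead
        workbook.SaveAs(stream);
        return File(stream.ToArray(),
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
            "output.xlsx");
    }
}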
In a business app I am creating, we allow our administrators to upload a CSV file with certain data that gets parsed and entered into our databases (all appropriate error handling is occurring, etc.).
As part of an upgrade to .NET 4.5, I had to update a few aspects of this code, and while I was doing so, I ran across this answer from someone who uses a MemoryStream to handle uploaded files instead of temporarily saving them to the file system. There's no real reason for me to change (and maybe it's even a bad idea), but I wanted to give it a shot to learn a bit. So I quickly swapped out this code (which reads from a strongly-typed model because other metadata is uploaded along with the file):
HttpPostedFileBase file = model.File;
var fileName = Path.GetFileName(file.FileName);
var path = Path.Combine(Server.MapPath("~/App_Data/Uploads"), fileName);
file.SaveAs(path);
CsvParser csvParser = new CsvParser();
Product product = csvParser.Parse(path);
this.repository.Insert(product);
this.repository.Save();
return View("Details", product);
to this:
using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);
    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}
Unfortunately, things break when I do this: all my data comes out with null values, and it seems as though there is nothing actually in the MemoryStream (though I'm not positive about this). I know this may be a long shot, but is there anything obvious that I'm missing here, or something I can do to better debug this?
You need to add the following:
model.File.InputStream.CopyTo(memoryStream);
memoryStream.Position = 0;
...
Product product = csvParser.Parse(memoryStream);
When you copy the file into the MemoryStream, the position is moved to the end of the stream, so when you then try to read it, you get nothing instead of your stream data. You just need to reset the position to the start, i.e. 0.
The problem, I believe, is that your memoryStream has its position set to the end, and I'm guessing that your CsvParser only processes from that point onwards, where there is no data.
To fix it, simply set the memoryStream position to 0 before you parse it with your csvParser:
memoryStream.Position = 0;
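Putting it together, the corrected block from the question would look like this; only the Position line is new:

using (MemoryStream memoryStream = new MemoryStream())
{
    model.File.InputStream.CopyTo(memoryStream);
    memoryStream.Position = 0; // rewind, or the parser starts at the end and sees no data
    CsvParser csvParser = new CsvParser();
    Product product = csvParser.Parse(memoryStream);
    this.repository.Insert(product);
    this.repository.Save();
    return View("Details", product);
}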