How to read into memory the lines of a text file from an IFormFile in ASP.NET Core?

Say you have this Action:
public List<string> Index(IFormFile file){
//extract list of strings from the file
return new List<string>();
}
I've found plenty of examples of saving the file to the drive, but what if I instead want to skip this and just read the lines of text into an array in memory, directly from the IFormFile?

The IFormFile abstraction has an OpenReadStream method.
To prevent a ton of undesirable and potentially large allocations, we should read a single line at a time and build up our list from each line that we read. Additionally, we could encapsulate this logic in an extension method. The Index action ends up looking like this:
public List<string> Index(IFormFile file) => file.ReadAsList();
The corresponding extension method looks like this:
public static List<string> ReadAsList(this IFormFile file)
{
var result = new List<string>();
using (var reader = new StreamReader(file.OpenReadStream()))
{
while (reader.Peek() >= 0)
result.Add(reader.ReadLine());
}
return result;
}
Likewise, you could have an async version as well; note that this one returns the file's contents as a single string:
public static async Task<string> ReadAsStringAsync(this IFormFile file)
{
var result = new StringBuilder();
using (var reader = new StreamReader(file.OpenReadStream()))
{
while (reader.Peek() >= 0)
result.AppendLine(await reader.ReadLineAsync());
}
return result.ToString();
}
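If you would rather keep the List<string> shape from the original question in the async version too, a minimal variation (not part of the original answer, just a sketch) could look like this:
public static async Task<List<string>> ReadAsListAsync(this IFormFile file)
{
    var result = new List<string>();
    using (var reader = new StreamReader(file.OpenReadStream()))
    {
        string line;
        // ReadLineAsync returns null at end of stream
        while ((line = await reader.ReadLineAsync()) != null)
            result.Add(line);
    }
    return result;
}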
Alternatively, you could use an ObjectPool<StringBuilder> and modern C# 8 features.
public static async Task<string> ReadAsStringAsync(
this IFormFile file, ObjectPool<StringBuilder> pool)
{
var builder = pool.Get();
try
{
using var reader = new StreamReader(file.OpenReadStream());
while (reader.Peek() >= 0)
{
builder.AppendLine(await reader.ReadLineAsync());
}
return builder.ToString();
}
finally
{
pool.Return(builder);
}
}
Then you could use this version like so:
public Task<string> Index(
IFormFile file, [FromServices] ObjectPool<StringBuilder> pool) =>
file.ReadAsStringAsync(pool);
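Note that [FromServices] only works if the pool is registered with dependency injection. A minimal registration sketch, assuming the Microsoft.Extensions.ObjectPool package and its CreateStringBuilderPool extension:
// In Startup.ConfigureServices (requires Microsoft.Extensions.ObjectPool)
services.AddSingleton<ObjectPoolProvider, DefaultObjectPoolProvider>();
services.AddSingleton(serviceProvider =>
{
    var provider = serviceProvider.GetRequiredService<ObjectPoolProvider>();
    return provider.CreateStringBuilderPool(); // yields ObjectPool<StringBuilder>
});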

ASP.NET Core 3.0 - Reading a form file's content into string
public static async Task<string> ReadFormFileAsync(IFormFile file)
{
if (file == null || file.Length == 0)
{
return null;
}
using (var reader = new StreamReader(file.OpenReadStream()))
{
return await reader.ReadToEndAsync();
}
}
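To tie this back to the original question (lines in memory), the returned string can simply be split on line breaks. A short usage sketch, with the action shape assumed rather than taken from the answer:
public async Task<List<string>> Index(IFormFile file)
{
    var content = await ReadFormFileAsync(file);
    if (content == null)
        return new List<string>();

    // Handle both Windows and Unix line endings
    return content
        .Split(new[] { "\r\n", "\n" }, StringSplitOptions.None)
        .ToList();
}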

Related

ASP.Net 6 Core CSOM ExecuteQuery when getting a non-available user in SPO

Using ASP.NET Core 6 and the CSOM library, I'm trying to add a permission to an SPO file with the following code.
public async Task<IActionResult> AddPermission(Guid guid, String[] emailList)
{
using (var authenticationManager = new AuthenticationManager())
using (var context = authenticationManager.GetContext(_site, _user, _securePwd))
{
File file= context.Web.GetFileById(guid);
SetFilePermission(context, file, emailList);
file.ListItemAllFields.SystemUpdate();
context.Load(file.ListItemAllFields);
await context.ExecuteQueryAsync();
return NoContent();
}
}
private static void SetFilePermission(ClientContext context, File file, string[] emailList)
{
if (emailList != null)
{
file.ListItemAllFields.BreakRoleInheritance(true, false);
var role = new RoleDefinitionBindingCollection(context);
role.Add(context.Web.RoleDefinitions.GetByType(permissionLevel));
foreach (string email in emailList)
{
User user = context.Web.SiteUsers.GetByEmail(email);
file.ListItemAllFields.RoleAssignments.Add(user, role);
}
}
}
This works only if the user is available in SPO; otherwise an exception occurs. To avoid the exception for a non-available user, I tried moving Load() and ExecuteQuery() into the SetFilePermission method.
public async Task<IActionResult> AddPermission(Guid guid, String[] emailList)
{
using (var authenticationManager = new AuthenticationManager())
using (var context = authenticationManager.GetContext(_site, _user, _securePwd))
{
File file= context.Web.GetFileById(guid);
SetFilePermission(context, file, emailList);
// file.ListItemAllFields.SystemUpdate();
// context.Load(file.ListItemAllFields);
// await context.ExecuteQueryAsync();
return NoContent();
}
}
private static void SetFilePermission(ClientContext context, File file, string[] emailList)
{
if (emailList != null)
{
file.ListItemAllFields.BreakRoleInheritance(true, false);
var role = new RoleDefinitionBindingCollection(context);
role.Add(context.Web.RoleDefinitions.GetByType(permissionLevel));
foreach (string email in emailList)
{
User user = context.Web.SiteUsers.GetByEmail(email);
file.ListItemAllFields.RoleAssignments.Add(user, role);
// Move load and executequery method to here.
file.ListItemAllFields.SystemUpdate();
context.Load(file.ListItemAllFields);
context.ExecuteQueryAsync();
}
}
}
Suddenly the exception disappears, even though the user is not available in SPO!
But the other, available emails in emailList also fail to get the permission added; the action just returns NoContent in the end. Does anyone know what is going on behind the scenes and can explain it to me?
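One observation about the second version, offered as an assumption rather than a confirmed answer: context.ExecuteQueryAsync() returns a Task that is never awaited inside the synchronous SetFilePermission, so the batched request runs in the background and any "user not found" failure surfaces on an unobserved Task instead of inside the action, which would explain both the vanished exception and the missing permissions. A sketch of awaiting each batch per email so that only the unresolved user fails (names and exception handling are illustrative, not from the thread):
private static async Task SetFilePermissionAsync(ClientContext context, File file, string[] emailList)
{
    if (emailList == null) return;

    file.ListItemAllFields.BreakRoleInheritance(true, false);
    var role = new RoleDefinitionBindingCollection(context);
    role.Add(context.Web.RoleDefinitions.GetByType(permissionLevel));

    foreach (string email in emailList)
    {
        try
        {
            User user = context.Web.SiteUsers.GetByEmail(email);
            file.ListItemAllFields.RoleAssignments.Add(user, role);
            file.ListItemAllFields.SystemUpdate();
            context.Load(file.ListItemAllFields);

            // Awaiting here means a failure affects only the current email
            await context.ExecuteQueryAsync();
        }
        catch (ServerException)
        {
            // This email does not resolve to a user in SPO; skip it and continue
        }
    }
}
The calling action would then await SetFilePermissionAsync(...) instead of the current fire-and-forget call.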

How to change below code from asp.net to razor page

I am new to Razor Pages but have been working in aspx. Below is my code - please help me convert it to a Razor page:
void Page_Load(object sender, EventArgs e)
{
foreach(string f in Request.Files.AllKeys)
{
HttpPostedFile file = Request.Files[f];
file.SaveAs("C:\\e_data\\WorkPage\\IMS18\\ALBAB_Dynamic\\20008\\Case_Manager\\" + file.FileName);
}
}
I want to change this to Razor Pages code.
Here's what I use for uploading a single file and storing the path to the file in a database. It covers the bits that Microsoft left out of its docs (for instance, the path to the base directory in .NET Core 2.2). Note that security is not much of a concern for me, as this is a small company intranet... but there are bits in there about getting the filename without the extension, and you may want to store the file without its extension for security reasons (or remove it and then add your own extension):
public async Task<IActionResult> OnPostAsync()
{
if (id == null)
{
return NotFound();
}
Kit = await _context.Kits.FirstOrDefaultAsync(m => m.ID == id);
if (Kit == null)
{
return NotFound();
}
if (Request.Form.Files.Count > 0)
{
IFormFile file = Request.Form.Files[0];
string folderName = "UploadedOriginalBOMs";
string OrgBOMRootPath = Path.Combine(AppContext.BaseDirectory, folderName);
if (!Directory.Exists(OrgBOMRootPath))
{
Directory.CreateDirectory(OrgBOMRootPath);
}
string sFileExtension = Path.GetExtension(file.FileName).ToLower();
string fullPath = Path.Combine(OrgBOMRootPath, file.FileName);
// StringBuilder sb = new StringBuilder();
if (file.Length > 0)
{
String cleanFilename = Path.GetFileNameWithoutExtension(file.FileName);
using (var stream = new FileStream(fullPath, FileMode.Create))
{
file.CopyTo(stream);
}
Kit.PathToOriginalBOM = "UploadedOriginalBOMs/" + file.FileName;
_context.Kits.Attach(Kit).State = EntityState.Modified;
await _context.SaveChangesAsync();
}
}
else
{
if (!ModelState.IsValid)
{
return Page();
}
}
return RedirectToPage("./Index");
}
You'll notice that you can still loop over the posted files much as you did in your .aspx file; a rough Razor Pages equivalent of that loop is sketched below.
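For completeness, here is a rough Razor Pages equivalent of the original Page_Load loop. This is a sketch only: the page class name is made up, the hard-coded folder comes from the question, and in a real app you would save under the content root (and validate file names) instead.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class UploadFilesModel : PageModel
{
    public async Task<IActionResult> OnPostAsync()
    {
        // Request.Form.Files plays the role of Request.Files from WebForms
        foreach (IFormFile file in Request.Form.Files)
        {
            var targetPath = Path.Combine(
                @"C:\e_data\WorkPage\IMS18\ALBAB_Dynamic\20008\Case_Manager",
                Path.GetFileName(file.FileName)); // strip any client-supplied directory part

            using (var stream = new FileStream(targetPath, FileMode.Create))
            {
                await file.CopyToAsync(stream);
            }
        }

        return RedirectToPage();
    }
}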

uploading and reading from an excel file in asp.net core 2

Previously, ASP.NET MVC had a third-party library called Excel Data Reader that easily allowed uploading and reading from an Excel file. We didn't need to have the file on the local disk, which was great because my application needs to run on Azure.
However, we are now porting this functionality to ASP.NET Core 2, and from searching it seems that this is not possible. Does anybody know of any libraries that would allow me to do this? Please note, I am not looking for solutions that read from disk. I want to upload an Excel file and read data from the stream directly.
I could read an Excel file in ASP.NET Core with the code below. It imports and exports the data using EPPlus.Core.
[HttpPost]
public async Task<IActionResult> ReadExcelFileAsync(IFormFile file)
{
if (file == null || file.Length == 0)
return Content("File Not Selected");
string fileExtension = Path.GetExtension(file.FileName);
if (fileExtension != ".xls" && fileExtension != ".xlsx")
return Content("File Not Selected");
var rootFolder = @"D:\Files";
var fileName = file.FileName;
var filePath = Path.Combine(rootFolder, fileName);
var fileLocation = new FileInfo(filePath);
using (var fileStream = new FileStream(filePath, FileMode.Create))
{
await file.CopyToAsync(fileStream);
}
if (file.Length <= 0)
return BadRequest(GlobalValidationMessage.FileNotFound);
using (ExcelPackage package = new ExcelPackage(fileLocation))
{
ExcelWorksheet workSheet = package.Workbook.Worksheets["Table1"];
//var workSheet = package.Workbook.Worksheets.First();
int totalRows = workSheet.Dimension.Rows;
var DataList = new List<Customers>();
for (int i = 2; i <= totalRows; i++)
{
DataList.Add(new Customers
{
CustomerName = workSheet.Cells[i, 1].Value.ToString(),
CustomerEmail = workSheet.Cells[i, 2].Value.ToString(),
CustomerCountry = workSheet.Cells[i, 3].Value.ToString()
});
}
_db.Customers.AddRange(DataList);
_db.SaveChanges();
}
return Ok();
}
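Worth noting, since the question explicitly asks not to touch the disk: EPPlus can also open the workbook straight from the upload stream. A minimal sketch under that assumption, reusing the Customers type and column layout from the answer above (the action name is made up):
[HttpPost]
public async Task<IActionResult> ReadExcelFromStreamAsync(IFormFile file)
{
    if (file == null || file.Length == 0)
        return Content("File Not Selected");

    using (var stream = new MemoryStream())
    {
        // Buffer the upload in memory; nothing is written to disk
        await file.CopyToAsync(stream);
        stream.Position = 0;

        using (var package = new ExcelPackage(stream))
        {
            ExcelWorksheet workSheet = package.Workbook.Worksheets["Table1"];
            int totalRows = workSheet.Dimension.Rows;

            var dataList = new List<Customers>();
            for (int i = 2; i <= totalRows; i++)
            {
                dataList.Add(new Customers
                {
                    CustomerName = workSheet.Cells[i, 1].Value?.ToString(),
                    CustomerEmail = workSheet.Cells[i, 2].Value?.ToString(),
                    CustomerCountry = workSheet.Cells[i, 3].Value?.ToString()
                });
            }
            // dataList now holds the rows, read entirely from the in-memory stream
        }
    }

    return Ok();
}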
I tried the code below (it uses NPOI's XSSFWorkbook) for ASP.NET Core and it worked:
public ActionResult OnPostUpload(List<IFormFile> files)
{
try
{
var file = files.FirstOrDefault();
var inputstream = file.OpenReadStream();
XSSFWorkbook workbook = new XSSFWorkbook(inputstream);
var FIRST_ROW_NUMBER = {{firstRowWithValue}};
ISheet sheet = workbook.GetSheetAt(0);
// Example: var firstCellRow = (int)sheet.GetRow(0).GetCell(0).NumericCellValue;
for (int rowIdx = 2; rowIdx <= sheet.LastRowNum; rowIdx++)
{
IRow currentRow = sheet.GetRow(rowIdx);
if (currentRow == null || currentRow.Cells == null || currentRow.Cells.Count() < FIRST_ROW_NUMBER) break;
var df = new DataFormatter();
for (int cellNumber = {{firstCellWithValue}}; cellNumber < {{lastCellWithValue}}; cellNumber++)
{
//business logic & saving data to DB
}
}
}
catch(Exception ex)
{
throw new FileFormatException($"Error on file processing - {ex.Message}");
}
return Ok();
}
If we are talking about Razor Pages, here's a simple sample that I tested today.
Environment: .NET Core 3.1, VS 2019
A simple class
public class UserModel
{
public string Name { get; set; }
public string City { get; set; }
}
Index.cshtml.cs
usings..
using ExcelDataReader;
public void OnPost(IFormFile file)
{
List<UserModel> users = new List<UserModel>();
System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
using (var stream = new MemoryStream())
{
file.CopyTo(stream);
stream.Position = 0;
using (var reader = ExcelReaderFactory.CreateReader(stream))
{
while (reader.Read()) //Each row of the file
{
users.Add(new UserModel { Name = reader.GetValue(0).ToString(), City = reader.GetValue(1).ToString()});
}
}
}
//users // you got the values here
}
Mark up in View
<form id="form1" method="post" enctype="multipart/form-data">
<div class="text-center">
<input type="file" id="file1" name="file" />
</div>
</form>
<script>
document.getElementById('file1').onchange = function () {
document.getElementById('form1').submit();
};
</script>
You will need the ExcelDataReader NuGet package; I used version 3.6.0.
A working example of this code is available on GitHub.
Latest versions of ExcelDataReader support netstandard2.0, thus work with ASP.NET Core 2. It also targets netstandard1.3, so works with ASP.NET Core 1.x as well.
(not sure what you searched that said it is not possible, but that is clearly wrong)
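To make that concrete, here is a minimal sketch of reading an uploaded file with ExcelDataReader without ever touching the disk. It assumes the extra ExcelDataReader.DataSet package for the AsDataSet helper; the action name is illustrative:
[HttpPost]
public IActionResult Upload(IFormFile file)
{
    // Needed once per process for legacy .xls encodings
    System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);

    using (var stream = new MemoryStream())
    {
        // ExcelDataReader needs a seekable stream, so buffer the upload in memory
        file.CopyTo(stream);
        stream.Position = 0;

        using (var reader = ExcelDataReader.ExcelReaderFactory.CreateReader(stream))
        {
            // AsDataSet comes from the ExcelDataReader.DataSet package
            var dataSet = reader.AsDataSet(new ExcelDataReader.ExcelDataSetConfiguration
            {
                ConfigureDataTable = _ => new ExcelDataReader.ExcelDataTableConfiguration
                {
                    UseHeaderRow = true // treat the first spreadsheet row as column names
                }
            });

            var table = dataSet.Tables[0];
            return Ok($"Read {table.Rows.Count} rows and {table.Columns.Count} columns.");
        }
    }
}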
First upload your Excel file, then read its records, using ASP.NET Core 3.1.
using System;
using Microsoft.AspNetCore.Mvc;
using ExcelFileRead.Models;
using Microsoft.AspNetCore.Hosting;
using System.IO;
using OfficeOpenXml;
using System.Linq;
namespace ExcelFileRead.Controllers
{
public class HomeController : Controller
{
private readonly IHostingEnvironment _hostingEnvironment;
public HomeController(IHostingEnvironment hostingEnvironment)
{
_hostingEnvironment = hostingEnvironment;
}
public ActionResult File()
{
FileUploadViewModel model = new FileUploadViewModel();
return View(model);
}
[HttpPost]
public ActionResult File(FileUploadViewModel model)
{
string rootFolder = _hostingEnvironment.WebRootPath;
string fileName = Guid.NewGuid().ToString() + model.XlsFile.FileName;
FileInfo file = new FileInfo(Path.Combine(rootFolder, fileName));
using (var stream = new MemoryStream())
{
model.XlsFile.CopyTo(stream);
stream.Position = 0; // rewind before handing the stream to EPPlus
using (var package = new ExcelPackage(stream))
{
package.SaveAs(file);
}
}
using (ExcelPackage package = new ExcelPackage(file))
{
ExcelWorksheet worksheet = package.Workbook.Worksheets.FirstOrDefault();
if (worksheet == null)
{
//return or alert message here
}
else
{
var rowCount = worksheet.Dimension.Rows;
for (int row = 2; row <= rowCount; row++)
{
model.StaffInfoViewModel.StaffList.Add(new StaffInfoViewModel
{
FirstName = (worksheet.Cells[row, 1].Value ?? string.Empty).ToString().Trim(),
LastName = (worksheet.Cells[row, 2].Value ?? string.Empty).ToString().Trim(),
Email = (worksheet.Cells[row, 3].Value ?? string.Empty).ToString().Trim(),
});
}
}
}
return View(model);
}
}
}
For more details (step by step):
https://findandsolve.com/articles/how-to-read-column-value-from-excel-in-aspnet-core-or-best-way-to-read-write-excel-file-in-dotnet-core

Raven paging queries in a specific way

I'm developing an ASP.NET MVC application using RavenDB 3. I don't have a lot of experience with raven.
In general, when executing queries to display data, the first 128 items are returned on the page. More records are added in an "infinite scroll"-manner by using paged queries.
Now, however, I have the requirement that items are loaded in 'groups'.
Assume the following class:
public class Item {
public string Id { get; set; }
public int Group { get; set; }
public string text { get; set; }
}
Assume the database contains 40 items having group='1', 40 items having group='2' and 50 items having group='3'.
This is a total of 130 items. This would mean that the last 'group' fetched would not be complete. It would be missing 2 items.
I would like a mechanism that is aware of this, so that it would fetch at least 128 AND would fetch 'extra' if the last group is not completely included.
Afterwards, when I fetch the next page, I would like it to start with the next group.
Is there any way I can make this work without 'fabricating' a single page myself by doing more than one call?
EDIT: I cannot assume the groups are perfectly equal in size, but I can assume the sizes will be 'similar'.
Also, I cannot change the design to store all items in a single 'group'-object for instance.
Okay, so basically what you will need to do is calculate the number of results contained in the previous pages and the number of results in the current page. Below is a quick example app to give you an idea of how to do it. One caveat: if the number of results for the current group range exceeds MaxNumberOfRequestsPerSession, an error will be thrown, so you might want to put some handling in there for that.
Note for running this example:
Make sure your platform is set to x64 in Visual Studio if you are using the most recent versions of RavenDB. Otherwise, this example will throw an error about Voron not being stable in 32-bit mode.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Raven.Client;
using Raven.Client.Embedded;
using Raven.Client.Listeners;
namespace ConsoleApplication
{
internal class Program
{
private static void Main(string[] args)
{
using (var gdm = new GroupDataManager())
{
Console.WriteLine("Started Seeding");
gdm.Seed().Wait();
Console.WriteLine("Finished Seeding");
Console.WriteLine("===============================================================");
Console.WriteLine("Get First Page");
Console.WriteLine("===============================================================");
var firstPage = gdm.GetPagedGroupResults(1, 3).Result;
foreach (var item in firstPage)
{
Console.WriteLine(item.Text);
}
Console.WriteLine("===============================================================");
Console.WriteLine("Get Second Page");
Console.WriteLine("===============================================================");
var secondPage = gdm.GetPagedGroupResults(2, 3).Result;
foreach (var item in secondPage)
{
Console.WriteLine(item.Text);
}
}
Console.ReadLine();
}
}
public class GroupDataManager : IDisposable
{
private readonly IDocumentStore _documentStore = new EmbeddedRavenDatabase().Store;
public void Dispose()
{
_documentStore.Dispose();
}
public async Task Seed()
{
var rnd = new Random();
using (var session = _documentStore.OpenAsyncSession())
{
for (var groupNumber = 1; groupNumber < 15; groupNumber++)
{
for (var i = 0; i < rnd.Next(5, 25); i++)
{
var item = new Item
{
Group = groupNumber,
Text = string.Format("Group: {0} Item:{1}", groupNumber, i)
};
await session.StoreAsync(item);
}
}
await session.SaveChangesAsync();
}
}
public async Task<IList<Item>> GetPagedGroupResults(int page, int numberOfGroupsPerPage)
{
var startingGroup = ((page - 1) * numberOfGroupsPerPage) + 1;
using (var session = _documentStore.OpenAsyncSession())
{
// Calculate the number of items that were contained in the previous groups
var numberToSkip = await session.Query<Item>().CountAsync(item => item.Group < startingGroup);
var endGroup = startingGroup + numberOfGroupsPerPage;
// Calculate the number of items that are in the current range of groups
var numberToTake = await session.Query<Item>().CountAsync(item => item.Group >= startingGroup && item.Group < endGroup);
var results = await session.Query<Item>().Skip(numberToSkip).Take(numberToTake).ToListAsync();
return results;
}
}
}
public class Item
{
public string Id { get; set; }
public int Group { get; set; }
public string Text { get; set; }
}
/// <summary>
/// For Testing Only. Prevents stale queries
/// </summary>
public class NoStaleQueriesAllowed : IDocumentQueryListener
{
public void BeforeQueryExecuted(IDocumentQueryCustomization queryCustomization)
{
queryCustomization.WaitForNonStaleResults();
}
}
public class EmbeddedRavenDatabase
{
private static bool _configured = false;
private static readonly Lazy<IDocumentStore> _lazyDocStore = new Lazy<IDocumentStore>(() =>
{
var docStore = new EmbeddableDocumentStore
{
RunInMemory = true
};
docStore.RegisterListener(new NoStaleQueriesAllowed());
docStore.Initialize();
return docStore;
});
public IDocumentStore Store
{
get { return _lazyDocStore.Value; }
}
}
}

Using memorystream and DotNetZip in MVC gives "Cannot access a closed Stream"

I'm trying to create a zip file in an MVC action method using the DotNetZip components.
Here is my code:
public FileResult DownloadImagefilesAsZip()
{
using (var memoryStream = new MemoryStream())
{
using (var zip = new ZipFile())
{
zip.AddDirectory(Server.MapPath("/Images/"));
zip.Save(memoryStream);
return File(memoryStream, "gzip", "images.zip");
}
}
}
When I run it I get a "Cannot access a closed Stream" error, and I'm not sure why.
Don't dispose the MemoryStream; the FileStreamResult will take care of that once it has finished writing it to the response:
public ActionResult DownloadImagefilesAsZip()
{
var memoryStream = new MemoryStream();
using (var zip = new ZipFile())
{
zip.AddDirectory(Server.MapPath("~/Images"));
zip.Save(memoryStream);
memoryStream.Position = 0; // rewind so the FileStreamResult reads from the start
return File(memoryStream, "application/zip", "images.zip");
}
}
By the way, I would recommend writing a custom action result to handle this instead of putting plumbing code inside your controller action. Not only do you get a reusable action result, but bear in mind that your code is hugely inefficient: you are performing the ZIP operation in memory, and thus loading the whole ~/Images directory content plus the zip file into memory. If you have many users and lots of files inside this directory, you will very quickly run out of memory.
A much more efficient solution is to write directly to the response stream:
public class ZipResult : ActionResult
{
public string Path { get; private set; }
public string Filename { get; private set; }
public ZipResult(string path, string filename)
{
Path = path;
Filename = filename;
}
public override void ExecuteResult(ControllerContext context)
{
if (context == null)
{
throw new ArgumentNullException("context");
}
var response = context.HttpContext.Response;
response.ContentType = "application/gzip";
using (var zip = new ZipFile())
{
zip.AddDirectory(Path);
zip.Save(response.OutputStream);
var cd = new ContentDisposition
{
FileName = Filename,
Inline = false
};
response.Headers.Add("Content-Disposition", cd.ToString());
}
}
}
and then:
public ActionResult DownloadImagefilesAsZip()
{
return new ZipResult(Server.MapPath("~/Images"), "images.zip");
}
Couldn't comment, so adding this as an answer. Darin's answer is great! I still received a memory exception, though, so I had to add response.BufferOutput = false; and, because of that, had to move the Content-Disposition code higher.
So you have:
...
var response = context.HttpContext.Response;
response.ContentType = "application/zip";
response.BufferOutput = false;
var cd = new ContentDisposition
{
FileName = ZipFilename,
Inline = false
};
response.Headers.Add("Content-Disposition", cd.ToString());
using (var zip = new ZipFile())
{
...
Just in case it wasn't obvious :)
