ASP.NET MVC 5: sent email attachments are damaged

I am trying to send an email using the method described in this tutorial, with a model structure from this tutorial, and I'm partially successful in doing so. The only issue I am having is that files sent as attachments are damaged. I have tried to get it working in so many ways that I lost count. Obviously I haven't been trying hard enough, since I haven't found the answer, but I decided to ask while I continue looking.
My controller action is as follows:
public async Task<ActionResult> Index([Bind(Include = "column names..")] Contact contact, HttpPostedFileBase upload)
{
    if (ModelState.IsValid && status)
    {
        var message = new MailMessage();
        if (upload != null && upload.ContentLength > 0)
        {
            // 4MB -> 4000 * 1024
            const int maxFileSize = 4096000;
            if (upload.ContentLength < maxFileSize)
            {
                var document = new File
                {
                    FileName = System.IO.Path.GetFileName(upload.FileName),
                    FileType = FileType.Document,
                    ContentType = upload.ContentType
                };
                var supportedTypes = new[] { "doc", "docx", "pdf", "jpg" };
                var extension = System.IO.Path.GetExtension(document.FileName);
                if (extension != null)
                {
                    var fileExt = extension.Substring(1);
                    if (!supportedTypes.Contains(fileExt))
                    {
                        ModelState.AddModelError("document", "Wrong format");
                        return View();
                    }
                    // this is the line that sends damaged attachments,
                    // I believe I should be using document in some way (using reader bit below),
                    // but whatever I use the code complains or crashes.
                    message.Attachments.Add(new Attachment(upload.InputStream, Path.GetFileName(upload.FileName)));
                    using (var reader = new System.IO.BinaryReader(upload.InputStream))
                    {
                        document.Content = reader.ReadBytes(upload.ContentLength);
                        //message.Attachments.Add(new Attachment(document.Content, document.FileName));
                    }
                    contact.Files = new List<File> { document };
                }
            }
            else
            {
                ModelState.AddModelError("document", "File too big. Max 4MB.");
            }
        }
EDIT: A lot of the time the code cannot find the file. How do I make sure I pass it the correct path each time?
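A plausible explanation, as a sketch rather than a confirmed fix: new Attachment(upload.InputStream, ...) does not copy the stream, and the BinaryReader below it then reads that same stream to its end, so by the time the message is actually sent the attachment is read from an exhausted stream and arrives truncated. Buffering the upload once and giving the attachment its own MemoryStream avoids the conflict (assuming the same Contact/File model as above):

// Sketch: read the upload exactly once into a buffer...
byte[] fileBytes;
using (var reader = new System.IO.BinaryReader(upload.InputStream))
{
    fileBytes = reader.ReadBytes(upload.ContentLength);
}
document.Content = fileBytes;
// ...then give the attachment its own stream over that buffer.
// Do not dispose this stream before the message is sent; disposing the
// MailMessage disposes its attachments (and this stream) with it.
var attachmentStream = new System.IO.MemoryStream(fileBytes);
message.Attachments.Add(new Attachment(attachmentStream, document.FileName, document.ContentType));

As for the EDIT: working from the buffered bytes also sidesteps the path problem, since nothing needs to be re-read from disk when the message is sent.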

Related

How to get date-wise email faster using the Gmail API in ASP.NET MVC with an OAuth token

Here I have written code against the Gmail API to fetch mail with a date filter.
I am able to fetch MessageId and ThreadId using the first API call. I put each messageId into a List and, in a foreach loop, pass it to the second API call to fetch the email body for that messageId. But the process is very slow for fetching messages from Gmail.
public async Task<ActionResult> DisplayEmailWithFilter(string fromDate, string toDate)
{
    Message messageObj = new Message();
    Example exampleObj = new Example();
    List<GmailMessage> gmailMessagesList = new List<GmailMessage>();
    GmailMessage gmailMessage = new GmailMessage();
    var responseData = "";
    // dateFilter string parameter created from the date values
    string dateFilter = "in:Inbox after:" + fromDate + " before:" + toDate;
    try
    {
        // calling the Gmail API to get MessageID details by date filter
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(scheme: "Bearer",
            parameter: Session["Token"].ToString());
        HttpResponseMessage responseMessage = await client.GetAsync("https://www.googleapis.com/gmail/v1/users/me/messages?q=" + dateFilter);
        if (responseMessage.IsSuccessStatusCode)
        {
            var data = responseMessage.Content;
        }
        try
        {
            responseData = responseMessage.Content.ReadAsStringAsync().Result;
            // this JSON data is converted into a list object
            var msgList = JsonConvert.DeserializeObject<Root1>(responseData);
            // loop for fetching EmailMessageData by MessageID
            if (msgList.resultSizeEstimate != 0)
            {
                foreach (var msgItem in msgList.messages)
                {
                    messageObj.id = msgItem.id;
                    // calling the API with the MessageID parameter to fetch the respective message data
                    HttpResponseMessage responseMessageList = await client.GetAsync("https://www.googleapis.com/gmail/v1/users/userId/messages/id?id=" + messageObj.id.ToString() + "&userId=me&format=full");
                    if (responseMessageList.IsSuccessStatusCode)
                    {
                        var dataNew = responseMessageList.Content;
                        var responseDataNew = responseMessageList.Content.ReadAsStringAsync().Result;
                        // converting the JSON string into an object
                        exampleObj = JsonConvert.DeserializeObject<Example>(responseDataNew);
                        gmailMessage.Body = exampleObj.snippet;
                        // fetching header values by comparing names to get the data
                        for (int i = 1; i < exampleObj.payload.headers.Count; i++)
                        {
                            if (exampleObj.payload.headers[i].name.ToString() == "Date")
                            {
                                gmailMessage.RecievedDate = exampleObj.payload.headers[i].value;
                            }
                            if (exampleObj.payload.headers[i].name.ToString() == "Subject")
                            {
                                gmailMessage.Subject = exampleObj.payload.headers[i].value;
                            }
                            if (exampleObj.payload.headers[i].name.ToString() == "Message-ID")
                            {
                                gmailMessage.SenderEmailID = exampleObj.payload.headers[i].value;
                            }
                            if (exampleObj.payload.headers[i].name.ToString() == "From")
                            {
                                gmailMessage.SenderName = exampleObj.payload.headers[i].value;
                            }
                        }
                        // adding these values to the GmailMessage list object
                        gmailMessagesList.Add(new GmailMessage
                        {
                            Body = exampleObj.snippet,
                            SenderEmailID = gmailMessage.SenderEmailID,
                            RecievedDate = gmailMessage.RecievedDate,
                            SenderName = gmailMessage.SenderName,
                            Subject = gmailMessage.Subject,
                        });
                    }
                }
            }
        }
        catch (Exception e)
        {
            string errorMgs = e.Message.ToString();
            throw;
        }
    }
    catch (Exception e)
    {
        string errorMgs = e.Message.ToString();
        throw;
    }
    return View(gmailMessagesList);
}
I can fetch Gmail email date-wise, but it takes a long time. How can I improve my code and make it faster?
The query seems like the most you can do. If you know more about those emails, like a specific subject or that they always come from the same sender, you can try to filter on that too, like you would in the Gmail interface.
Otherwise you are kind of out of luck. You are limited by the messages retrieved from users.messages.list.
If you need to escape the API limitations, retrieving the messages some other way may be the correct way to go. Consider writing a small piece of code to retrieve messages over the IMAP protocol. Several questions on this topic may help you:
Reading Gmail messages using Python IMAP
Reading Gmail Email in Python
How can I get an email message's text content using Python?
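If you stay with the REST API, another angle (a sketch only, reusing the client, Root1, and Example types from the question, plus using System.Linq and System.Threading.Tasks) is that the foreach pays one blocking round-trip per message; firing the per-message requests concurrently and awaiting them together should cut most of the wall-clock time. The URL below uses the documented endpoint form users/{userId}/messages/{id}:

// Sketch only: issue all per-message requests at once and await them together,
// instead of one blocking round-trip per loop iteration.
var tasks = msgList.messages
    .Select(m => client.GetStringAsync(
        "https://www.googleapis.com/gmail/v1/users/me/messages/" + m.id + "?format=full"))
    .ToList();
string[] responses = await Task.WhenAll(tasks);
foreach (var json in responses)
{
    var msg = JsonConvert.DeserializeObject<Example>(json);
    // map msg.payload.headers to a new GmailMessage exactly as in the original loop
}

Two smaller wins while you are at it: await ReadAsStringAsync() instead of blocking on .Result, and build a new GmailMessage per iteration rather than mutating one shared instance.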

Large file upload to ASP.NET Core 3.0 Web API fails due to Request Body Too Large

I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow me to post large audio files. I have followed the following directions from MS docs to set up the endpoint.
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per advice from the above article, I have incrementally updated the Kestrel configuration with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseKestrel((ctx, options) =>
    {
        var config = ctx.Configuration;
        options.Limits.MaxRequestBodySize = 6000000000;
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.RequestHeadersTimeout = TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
});
I also configured FormOptions to accept files up to 6,000,000,000 bytes:
services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 6000000000;
});
And also set up the API controller with the following attributes, per advice from the article
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
    if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
    {
        throw new ArgumentException("The media file could not be processed.");
    }
    string mediaId = string.Empty;
    string instructorId = string.Empty;
    try
    {
        // process file first
        KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
        var streamedFileContent = new byte[0];
        var boundary = MultipartRequestHelper.GetBoundary(
            MediaTypeHeaderValue.Parse(Request.ContentType),
            _defaultFormOptions.MultipartBoundaryLengthLimit);
        var reader = new MultipartReader(boundary, Request.Body);
        var section = await reader.ReadNextSectionAsync();
        while (section != null)
        {
            var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
                section.ContentDisposition, out var contentDisposition);
            if (hasContentDispositionHeader)
            {
                if (MultipartRequestHelper.HasFileContentDisposition(contentDisposition))
                {
                    streamedFileContent =
                        await FileHelpers.ProcessStreamedFile(section, contentDisposition,
                            _permittedExtensions, _fileSizeLimit);
                }
                else if (MultipartRequestHelper.HasFormDataContentDisposition(contentDisposition))
                {
                    var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
                    var encoding = FileHelpers.GetEncoding(section);
                    if (encoding == null)
                    {
                        return BadRequest($"The request could not be processed: Bad Encoding");
                    }
                    using (var streamReader = new StreamReader(
                        section.Body,
                        encoding,
                        detectEncodingFromByteOrderMarks: true,
                        bufferSize: 1024,
                        leaveOpen: true))
                    {
                        // The value length limit is enforced by
                        // MultipartBodyLengthLimit
                        var value = await streamReader.ReadToEndAsync();
                        if (string.Equals(value, "undefined",
                            StringComparison.OrdinalIgnoreCase))
                        {
                            value = string.Empty;
                        }
                        formAccumulator.Append(key, value);
                        if (formAccumulator.ValueCount > _defaultFormOptions.ValueCountLimit)
                        {
                            return BadRequest($"The request could not be processed: Key Count limit exceeded.");
                        }
                    }
                }
            }
            // Drain any remaining section body that hasn't been consumed and
            // read the headers for the next section.
            section = await reader.ReadNextSectionAsync();
        }
        var form = formAccumulator;
        var file = streamedFileContent;
        var results = form.GetResults();
        instructorId = results["instructorId"];
        string title = results["title"];
        string firstName = results["firstName"];
        string lastName = results["lastName"];
        string durationInMinutes = results["durationInMinutes"];
        //mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);
        string fileExtension = "m4a";
        // Generate container name - instructor-specific
        string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
        string contentType = "audio/mp4";
        FileType fileType = FileType.audio;
        string authorName = $"{firstName} {lastName}";
        string authorShortName = $"{firstName[0]}{lastName}";
        string description = $"{authorShortName} - {title}";
        long duration = (Convert.ToInt32(durationInMinutes) * 60000);
        // Generate new filename
        string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
        DateTime recordingDate = DateTime.UtcNow;
        DateTime uploadDate = DateTime.UtcNow;
        long blobSize = long.MinValue;
        try
        {
            // Update file properties in storage
            Dictionary<string, string> fileProperties = new Dictionary<string, string>();
            fileProperties.Add("ContentType", contentType);
            // Update file metadata in storage
            Dictionary<string, string> metadata = new Dictionary<string, string>();
            metadata.Add("author", authorShortName);
            metadata.Add("title", title);
            metadata.Add("description", description);
            metadata.Add("duration", duration.ToString());
            metadata.Add("recordingDate", recordingDate.ToString());
            metadata.Add("uploadDate", uploadDate.ToString());
            var fileNameWExt = $"{fileName}.{fileExtension}";
            var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
            try
            {
                MemoryStream fileContent = new MemoryStream(streamedFileContent);
                fileContent.Position = 0;
                using (fileContent)
                {
                    await blobContainer.UploadFromStreamAsync(fileContent);
                }
            }
            catch (StorageException e)
            {
                if (e.RequestInformation.HttpStatusCode == 403)
                {
                    return BadRequest(e.Message);
                }
                else
                {
                    return BadRequest(e.Message);
                }
            }
            try
            {
                foreach (var key in metadata.Keys.ToList())
                {
                    blobContainer.Metadata.Add(key, metadata[key]);
                }
                await blobContainer.SetMetadataAsync();
            }
            catch (StorageException e)
            {
                return BadRequest(e.Message);
            }
            blobSize = await StorageUtils.GetBlobSize(blobContainer);
        }
        catch (StorageException e)
        {
            return BadRequest(e.Message);
        }
        Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension,
            recordingDate, uploadDate,
            ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()),
            StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue),
            Manifest.Create(new Dictionary<string, string>()));
        // upload to MongoDB
        if (media != null)
        {
            var mapper = new Mapper(_mapperConfiguration);
            var dao = mapper.Map<ContentDAO>(media);
            try
            {
                await _db.Content.InsertOneAsync(dao);
            }
            catch (Exception)
            {
                mediaId = string.Empty;
            }
            mediaId = dao.Id.ToString();
        }
        else
        {
            // metadata wasn't stored, remove blob
            await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
            return BadRequest($"An issue occurred during media upload: rolling back storage change");
        }
        if (string.IsNullOrEmpty(mediaId))
        {
            return BadRequest($"Could not add instructor media");
        }
    }
    catch (Exception ex)
    {
        return BadRequest(ex.Message);
    }
    var result = new { MediaId = mediaId, InstructorId = instructorId };
    return Ok(result);
}
I reiterate: this all works great locally. I do not run it in IIS Express; I run it as a console app.
I submit large audio files via my SPA app and Postman and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps are. I have refactored this code a few times, but I suspect that it's environment-related.
I cannot find anything that mentions the tier of the App Service Plan as the culprit, so before I go out spending more money I wanted to see if anyone here had encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with Azure App Service for Linux. Its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that doesn't appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands, but I need to move on with development.
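One more debugging lever, as a sketch (same UseKestrel block as in the question): Kestrel's cap can be removed outright by setting MaxRequestBodySize to null, which at least establishes whether Kestrel itself or something in front of it is rejecting the body.

webBuilder.UseKestrel((ctx, options) =>
{
    // null removes Kestrel's request body size cap entirely (debugging aid);
    // prefer a finite limit once the failing layer has been identified.
    options.Limits.MaxRequestBodySize = null;
}).UseStartup<Startup>();

The per-action [DisableRequestSizeLimit] attribute is the endpoint-level equivalent of the null limit.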

How to make file uploading optional using MVC 4

I am uploading 3 types of files: 1) video, 2) image, 3) document.
If I upload all three files at once, they upload and display successfully, but if I want to skip one of the files, it gives me errors on the following lines. Please help me here:
httpPostedFile.SaveAs(fileSavePath);
db.SaveChanges();
One of the errors is because of sending the path to the db, I guess.
[HttpPost]
public ActionResult AddSKU(SKU_Det skufiles, IEnumerable<HttpPostedFileBase> files)
{
    var httpPostedFile = Request.Files[0];
    if (httpPostedFile != null)
    {
        var uploadFilesDir = System.Web.HttpContext.Current.Server.MapPath("~/Content/Videos");
        if (!Directory.Exists(uploadFilesDir))
        {
            Directory.CreateDirectory(uploadFilesDir);
        }
        var fileSavePath = Path.Combine(uploadFilesDir, httpPostedFile.FileName);
        httpPostedFile.SaveAs(fileSavePath);
    }
    foreach (var file in files)
    {
        if (file != null && file.ContentLength > 0)
        {
            file.SaveAs(HttpContext.Server.MapPath("~/Areas/Admin/Images/") + file.FileName);
        }
    }
    SKU_Det sku = new SKU_Det();
    sku.SKU = skufiles.SKU;
    sku.VideoPath = Request.Files[0].FileName;
    sku.Imagepath = sku.FilePath = Request.Files[1].FileName;
    sku.FilePath = sku.FilePath = Request.Files[2].FileName;
    db.SKU_Det.Add(sku);
    db.SaveChanges();
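No answer is attached here, but the failure mode is visible in the code: a skipped file input still posts an entry in Request.Files whose FileName is empty, so SaveAs is called with an invalid path and an empty name goes to the database. A hedged sketch of one way to make every slot optional, assuming the three inputs keep a fixed order (video, image, document) and the corresponding columns are nullable:

// Sketch: a helper that treats an upload slot as optional.
// Returns the stored file name, or null when the slot was skipped.
private static string SaveIfPresent(HttpPostedFileBase file, string serverDir)
{
    if (file == null || file.ContentLength == 0 || string.IsNullOrEmpty(file.FileName))
    {
        return null; // nothing was uploaded in this slot
    }
    if (!Directory.Exists(serverDir))
    {
        Directory.CreateDirectory(serverDir);
    }
    string fileName = Path.GetFileName(file.FileName);
    file.SaveAs(Path.Combine(serverDir, fileName));
    return fileName;
}

// Usage inside AddSKU, assuming Request.Files arrives as video, image, document:
// sku.VideoPath = SaveIfPresent(Request.Files[0], Server.MapPath("~/Content/Videos"));
// sku.Imagepath = SaveIfPresent(Request.Files[1], Server.MapPath("~/Areas/Admin/Images"));
// sku.FilePath  = SaveIfPresent(Request.Files[2], Server.MapPath("~/Areas/Admin/Images"));
// db.SKU_Det.Add(sku); db.SaveChanges();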

OData Service not returning complete response

I am reading SharePoint list data (>20,000 entries) using the OData RESTful service, as detailed here: http://blogs.msdn.com/b/ericwhite/archive/2010/12/09/getting-started-using-the-odata-rest-api-to-query-a-sharepoint-list.aspx
I am able to read data, but I get only the first 1000 records. I also checked that List View Throttling is set to 5000 on the SharePoint server. Kindly advise.
Update:
@Turker: Your answer is spot on!! Thank you very much. I was able to get the first 2000 records in the first iteration. However, I am getting the same records in each iteration of the while loop. My code is as follows:
...initial code...
int skipCount = 0;
while (((QueryOperationResponse)query).GetContinuation() != null)
{
    // query for the next partial set of customers
    query = dc.Execute<CATrackingItem>(
        ((QueryOperationResponse)query).GetContinuation().NextLinkUri);
    // add the next set of customers to the full list
    caList.AddRange(query.ToList());
    var results = from d in caList.Skip(skipCount)
                  select new
                  {
                      Actionable = d.Actionable,
                      Created = d.Created,
                  };
    foreach (var res in results)
    {
        structListColumns.Actionable = res.Actionable;
        structListColumns.Created = res.Created;
    }
    skipCount = caList.Count;
} // close of while loop
Do you see a <link rel="next"> element at the end of the feed?
For example, if you look at
http://services.odata.org/Northwind/Northwind.svc/Customers/
you will see
<link rel="next" href="http://services.odata.org/Northwind/Northwind.svc/Customers/?$skiptoken='ERNSH'" />
at the end of the feed which means the service is implementing server side paging and you need to send the
http://services.odata.org/Northwind/Northwind.svc/Customers/?$skiptoken='ERNSH'
query to get the next set of results.
I don't see anything particularly wrong with your code. You can try to dump the URLs being requested (either from the code, or using something like Fiddler) to see if the client really sends the same queries (and is thus getting the same responses).
In any case, here is a sample code which does work (using the sample service):
DataServiceContext ctx = new DataServiceContext(new Uri("http://services.odata.org/Northwind/Northwind.svc"));
QueryOperationResponse<Customer> response = (QueryOperationResponse<Customer>)ctx.CreateQuery<Customer>("Customers").Execute();
do
{
    foreach (Customer c in response)
    {
        Console.WriteLine(c.CustomerID);
    }
    DataServiceQueryContinuation<Customer> continuation = response.GetContinuation();
    if (continuation != null)
    {
        response = ctx.Execute(continuation);
    }
    else
    {
        response = null;
    }
} while (response != null);
I had the same problem, and wanted a generic solution.
So I've extended DataServiceContext with a GetAlltems method.
public static List<T> GetAlltems<T>(this DataServiceContext context)
{
    return context.GetAlltems<T>(null);
}

public static List<T> GetAlltems<T>(this DataServiceContext context, IQueryable<T> queryable)
{
    List<T> allItems = new List<T>();
    DataServiceQueryContinuation<T> token = null;
    EntitySetAttribute attr = (EntitySetAttribute)typeof(T).GetCustomAttributes(typeof(EntitySetAttribute), false).First();
    // Execute the query for all customers and get the response object.
    DataServiceQuery<T> query = null;
    if (queryable == null)
    {
        query = context.CreateQuery<T>(attr.EntitySet);
    }
    else
    {
        query = (DataServiceQuery<T>)queryable;
    }
    QueryOperationResponse<T> response = query.Execute() as QueryOperationResponse<T>;
    // With a paged response from the service, use a do...while loop
    // to enumerate the results before getting the next link.
    do
    {
        // If nextLink is not null, then there is a new page to load.
        if (token != null)
        {
            // Load the new page from the next link URI.
            response = context.Execute<T>(token);
        }
        allItems.AddRange(response);
    }
    // Get the next link, and continue while there is a next link.
    while ((token = response.GetContinuation()) != null);
    return allItems;
}
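A quick usage sketch (assuming Customer is an entity type carrying the [EntitySet] attribute the reflection above requires, and that System.Linq is imported for the filtered overload):

// Usage sketch against the public Northwind sample service.
DataServiceContext ctx = new DataServiceContext(new Uri("http://services.odata.org/Northwind/Northwind.svc"));
// Pull every Customer across all server-side pages in one call.
List<Customer> allCustomers = ctx.GetAlltems<Customer>();
// Or seed it with an existing query (it must still be a DataServiceQuery<T> underneath).
List<Customer> filtered = ctx.GetAlltems(ctx.CreateQuery<Customer>("Customers").Where(c => c.Country == "Germany"));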

"A generic error occurred in GDI+" error while showing uploaded images

I am using the following code to show an image that has been saved in my database, from my ASP.NET MVC (C#) application:
public ActionResult GetSiteHeaderLogo()
{
    SiteHeader _siteHeader = new SiteHeader();
    Image imgImage = null;
    long userId = Utility.GetUserIdFromSession();
    if (userId > 0)
    {
        _siteHeader = this.siteBLL.GetSiteHeaderLogo(userId);
        if (_siteHeader.Logo != null && _siteHeader.Logo.Length > 0)
        {
            byte[] _imageBytes = _siteHeader.Logo;
            if (_imageBytes != null)
            {
                using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes))
                {
                    imgImage = Image.FromStream(imageStream);
                }
            }
            string sFileExtension = _siteHeader.FileName.Substring(_siteHeader.FileName.IndexOf('.') + 1,
                _siteHeader.FileName.Length - (_siteHeader.FileName.IndexOf('.') + 1));
            Response.ContentType = Utility.GetContentTypeByExtension(sFileExtension.ToLower());
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.BufferOutput = false;
            if (imgImage != null)
            {
                ImageFormat _imageFormat = Utility.GetImageFormat(sFileExtension.ToLower());
                imgImage.Save(Response.OutputStream, _imageFormat);
                imgImage.Dispose();
            }
        }
    }
    return new EmptyResult();
}
It works fine when I upload the original image, but when I upload any downloaded image, it throws the following error:
System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.
at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
at System.Drawing.Image.Save(Stream stream, ImageFormat format)
For example: when I upload the original image, it shows as the logo on my site. I downloaded that logo from the site, and when I re-upload the same downloaded image, it throws the above error. It seems very weird to me, and I am not able to find out why it's happening. Any ideas on this?
I'd guess that your problem lies here:
using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes))
{
    imgImage = Image.FromStream(imageStream);
}
Because after using .FromStream, the Image owns the stream and might be very upset if you close it. To verify that this is the problem, you can just try:
using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream(_imageBytes))
{
    imgImage = new Bitmap(Image.FromStream(imageStream));
}
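If you don't actually need to re-encode the image, a simpler route is to skip System.Drawing entirely and return the stored bytes as-is; a sketch reusing the helpers from the question (siteBLL, Utility):

// Sketch: serve the stored logo bytes directly; no GDI+ involved, so no
// "generic error in GDI+" and no stream-lifetime issues.
public ActionResult GetSiteHeaderLogo()
{
    SiteHeader header = this.siteBLL.GetSiteHeaderLogo(Utility.GetUserIdFromSession());
    if (header == null || header.Logo == null || header.Logo.Length == 0)
    {
        return new EmptyResult();
    }
    // Reuse the question's helper to map the extension to a MIME type.
    string ext = System.IO.Path.GetExtension(header.FileName).TrimStart('.').ToLower();
    // FileContentResult streams the byte[] as-is; nothing is decoded or re-encoded.
    return File(header.Logo, Utility.GetContentTypeByExtension(ext));
}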
I've found that error usually comes from a file access problem. Sounds obvious, I realize, but double-check that the file path is correct and that the file exists, and also that the IIS process has permissions to that file.
