I'm trying to write a WebApi service that receives a file, does a trivial manipulation, and sends the file back. I'm having issues sending and/or receiving the file from the service.
The issue I'm having is that the file returned from the service is roughly a third larger (~1.33x) than the manipulated file, e.g. it comes back as 337754 bytes when the manipulated file is 253312 bytes.
I assume it's being wrapped and/or manipulated somehow, and I'm unsure how to receive it properly. The code for the WebAPI service and the method that calls it are included below.
In the WebApi service, when I hit the line return Ok(bufferResult), the file is a byte[253312].
In the method that calls the web service, after the file is manipulated and returned, following the line var content = stream.Result;, the stream has a length of 337754 bytes.
Web API service code
public class ConversionController : ApiController
{
public async Task<IHttpActionResult> TransformImage()
{
if (!Request.Content.IsMimeMultipartContent())
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
var provider = new MultipartMemoryStreamProvider();
await Request.Content.ReadAsMultipartAsync(provider);
var file = provider.Contents.First();
var filename = file.Headers.ContentDisposition.FileName.Trim('\"');
var buffer = await file.ReadAsByteArrayAsync();
var stream = new MemoryStream(buffer);
// [file manipulations omitted;]
// [the result is populated into a MemoryStream named response ]
// debug: save the memory stream to disk to make sure the transformation is successful
/*response.Position = 0;
var path = @"C:\temp\file.ext";
using (var fileStream = System.IO.File.Create(path))
{
    response.CopyTo(fileStream);
}*/
// Note: ToArray() copies only the bytes written; GetBuffer() returns the
// whole backing buffer, which may include unused capacity.
var bufferResult = response.ToArray();
return Ok(bufferResult);
}
}
Method Calling the Service
public async Task<ActionResult> AsyncConvert()
{
var url = "http://localhost:49246/api/conversion/transformImage";
var filepath = "drive/file/path.ext";
HttpContent fileContent = new ByteArrayContent(System.IO.File.ReadAllBytes(filepath));
using (var client = new HttpClient())
{
using (var formData = new MultipartFormDataContent())
{
formData.Add(fileContent, "file", "fileName");
// call the service (await instead of .Result to avoid blocking the thread)
var response = await client.PostAsync(url, formData);
if (!response.IsSuccessStatusCode)
{
throw new Exception();
}
else
{
if (response.Content.GetType() != typeof(System.Net.Http.StreamContent))
throw new Exception();
var stream = response.Content.ReadAsStreamAsync();
var content = stream.Result;
var path = #"drive\completed\name.ext";
using (var fileStream = System.IO.File.Create(path))
{
content.CopyTo(fileStream);
}
}
}
}
return null;
}
I'm still new to streams and WebApi, so I may be missing something quite obvious. Why are the file streams different sizes? (e.g. is it wrapped, and how do I unwrap and/or receive the stream?)
Okay, to receive the file correctly, I needed to replace the line
var stream = response.Content.ReadAsStreamAsync();
with
var content = await response.Content.ReadAsAsync<byte[]>();
to provide the correct type for the binding.
This also explains the size difference: Ok(bufferResult) passes the byte[] through the JSON formatter, which serializes it as a base64 string. Base64 encodes every 3 bytes as 4 characters, so 253312 bytes become ceil(253312 / 3) × 4 = 337752 characters, which plus the two surrounding quote characters is exactly the 337754 bytes observed. ReadAsAsync<byte[]> decodes that base64 payload back into the original bytes.
so the latter part of the method that calls the service looks something like
var content = await response.Content.ReadAsAsync<Byte[]>();
var saveStream = new MemoryStream(content);
saveStream.Position = 0;
//Debug: save converted file to disk
/*
var path = #"drive\completed\name.ext";
using (var fileStream = System.IO.File.Create(path))
{
saveStream.CopyTo(fileStream);
}*/
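If you'd rather avoid the base64 round trip entirely, an alternative is to have the service return the bytes as raw binary content instead of through Ok(). A minimal sketch, assuming Web API 2's ApiController (the octet-stream content type is a placeholder; use whatever matches your file):
// Return the manipulated file as raw bytes rather than JSON, so the client
// can consume response.Content.ReadAsStreamAsync() directly.
var result = new HttpResponseMessage(HttpStatusCode.OK)
{
    Content = new ByteArrayContent(response.ToArray())
};
result.Content.Headers.ContentType =
    new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");
return ResponseMessage(result);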
I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow me to post large audio files. I followed these directions from the MS docs to set up the endpoint:
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per the advice from the above article, I have incrementally updated Kestrel with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseKestrel((ctx, options) =>
{
var config = ctx.Configuration;
options.Limits.MaxRequestBodySize = 6000000000;
options.Limits.MinRequestBodyDataRate =
new MinDataRate(bytesPerSecond: 100,
gracePeriod: TimeSpan.FromSeconds(10));
options.Limits.MinResponseDataRate =
new MinDataRate(bytesPerSecond: 100,
gracePeriod: TimeSpan.FromSeconds(10));
options.Limits.RequestHeadersTimeout =
TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
});
I also configured FormOptions to accept files up to 6000000000 bytes:
services.Configure<FormOptions>(options =>
{
options.MultipartBodyLengthLimit = 6000000000;
});
And I also set up the API controller with the following attributes, per the advice from the article:
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written, but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
{
throw new ArgumentException("The media file could not be processed.");
}
string mediaId = string.Empty;
string instructorId = string.Empty;
try
{
// process file first
KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
var streamedFileContent = new byte[0];
var boundary = MultipartRequestHelper.GetBoundary(
MediaTypeHeaderValue.Parse(Request.ContentType),
_defaultFormOptions.MultipartBoundaryLengthLimit
);
var reader = new MultipartReader(boundary, Request.Body);
var section = await reader.ReadNextSectionAsync();
while (section != null)
{
var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
section.ContentDisposition, out var contentDisposition);
if (hasContentDispositionHeader)
{
if (MultipartRequestHelper
.HasFileContentDisposition(contentDisposition))
{
streamedFileContent =
await FileHelpers.ProcessStreamedFile(section, contentDisposition,
_permittedExtensions, _fileSizeLimit);
}
else if (MultipartRequestHelper
.HasFormDataContentDisposition(contentDisposition))
{
var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
var encoding = FileHelpers.GetEncoding(section);
if (encoding == null)
{
return BadRequest($"The request could not be processed: Bad Encoding");
}
using (var streamReader = new StreamReader(
section.Body,
encoding,
detectEncodingFromByteOrderMarks: true,
bufferSize: 1024,
leaveOpen: true))
{
// The value length limit is enforced by
// MultipartBodyLengthLimit
var value = await streamReader.ReadToEndAsync();
if (string.Equals(value, "undefined",
StringComparison.OrdinalIgnoreCase))
{
value = string.Empty;
}
formAccumulator.Append(key, value);
if (formAccumulator.ValueCount >
_defaultFormOptions.ValueCountLimit)
{
return BadRequest($"The request could not be processed: Key Count limit exceeded.");
}
}
}
}
// Drain any remaining section body that hasn't been consumed and
// read the headers for the next section.
section = await reader.ReadNextSectionAsync();
}
var form = formAccumulator;
var file = streamedFileContent;
var results = form.GetResults();
instructorId = results["instructorId"];
string title = results["title"];
string firstName = results["firstName"];
string lastName = results["lastName"];
string durationInMinutes = results["durationInMinutes"];
//mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);
string fileExtension = "m4a";
// Generate Container Name - InstructorSpecific
string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
string contentType = "audio/mp4";
FileType fileType = FileType.audio;
string authorName = $"{firstName} {lastName}";
string authorShortName = $"{firstName[0]}{lastName}";
string description = $"{authorShortName} - {title}";
long duration = (Convert.ToInt32(durationInMinutes) * 60000);
// Generate new filename
string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
DateTime recordingDate = DateTime.UtcNow;
DateTime uploadDate = DateTime.UtcNow;
long blobSize = long.MinValue;
try
{
// Update file properties in storage
Dictionary<string, string> fileProperties = new Dictionary<string, string>();
fileProperties.Add("ContentType", contentType);
// update file metadata in storage
Dictionary<string, string> metadata = new Dictionary<string, string>();
metadata.Add("author", authorShortName);
metadata.Add("tite", title);
metadata.Add("description", description);
metadata.Add("duration", duration.ToString());
metadata.Add("recordingDate", recordingDate.ToString());
metadata.Add("uploadDate", uploadDate.ToString());
var fileNameWExt = $"{fileName}.{fileExtension}";
var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
try
{
MemoryStream fileContent = new MemoryStream(streamedFileContent);
fileContent.Position = 0;
using (fileContent)
{
await blobContainer.UploadFromStreamAsync(fileContent);
}
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode == 403)
{
return BadRequest(e.Message);
}
else
{
return BadRequest(e.Message);
}
}
try
{
foreach (var key in metadata.Keys.ToList())
{
blobContainer.Metadata.Add(key, metadata[key]);
}
await blobContainer.SetMetadataAsync();
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
blobSize = await StorageUtils.GetBlobSize(blobContainer);
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension, recordingDate, uploadDate, ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()), StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue), Manifest.Create(new Dictionary<string, string>()));
// upload to MongoDB
if (media != null)
{
var mapper = new Mapper(_mapperConfiguration);
var dao = mapper.Map<ContentDAO>(media);
try
{
    await _db.Content.InsertOneAsync(dao);
    mediaId = dao.Id.ToString();
}
catch (Exception)
{
    // insert failed; leave mediaId empty so the failure is reported below
    mediaId = string.Empty;
}
}
else
{
// metadata wasn't stored, remove blob
await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
return BadRequest($"An issue occurred during media upload: rolling back storage change");
}
if (string.IsNullOrEmpty(mediaId))
{
return BadRequest($"Could not add instructor media");
}
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
var result = new { MediaId = mediaId, InstructorId = instructorId };
return Ok(result);
}
I reiterate: this all works great locally. I do not run it in IIS Express; I run it as a console app.
I submit large audio files via my SPA app and Postman and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps are. I have refactored this code a few times, but I suspect that it's environment-related.
I cannot find anywhere that mentions that the level of App Service Plan is the culprit so before I go out spending more money I wanted to see if anyone here had encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with Azure App Service for Linux. Its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that doesn't appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands, but I need to move on with development.
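If you do land on the Windows plan, keep in mind that IIS sits in front of Kestrel there and enforces its own request size cap (around 30 MB by default), so that limit has to be raised as well. A hedged web.config sketch; note that maxAllowedContentLength is a 32-bit value, so it tops out at 4294967295 bytes (about 4 GB), short of the 6 GB targeted above:
<!-- Sketch: raise the IIS request-filtering limit (value is in bytes). -->
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="4294967295" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>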
I would like to upload small files with metadata (DriveItem) attached so that the LastModifiedDateTime property is set properly.
First, my current workaround is this:
var graphFileSystemInfo = new Microsoft.Graph.FileSystemInfo()
{
CreatedDateTime = fileSystemInfo.CreationTimeUtc,
LastAccessedDateTime = fileSystemInfo.LastAccessTimeUtc,
LastModifiedDateTime = fileSystemInfo.LastWriteTimeUtc
};
using (var stream = new System.IO.File.OpenRead(localPath))
{
if (fileSystemInfo.Length <= 4 * 1024 * 1024) // file.Length <= 4 MB
{
var driveItem = new DriveItem()
{
File = new File(),
FileSystemInfo = graphFileSystemInfo,
Name = Path.GetFileName(item.Path)
};
try
{
var newDriveItem = await graphClient.Me.Drive.Root.ItemWithPath(item.Path).Content.Request().PutAsync<DriveItem>(stream);
await graphClient.Me.Drive.Items[newDriveItem.Id].Request().UpdateAsync(driveItem);
}
catch (Exception ex)
{
throw;
}
}
else
{
// large file upload
}
}
This code works by first uploading the content via PutAsync and then updating the metadata via UpdateAsync. I tried to do it the other way around (as suggested here), but then I get the error that a file cannot be created without content. If I then add content to the DriveItem.Content property, the next error is that the stream's ReadTimeout and WriteTimeout properties cannot be read. With a wrapper class for the FileStream, I can overcome this, but then I get the next error: A stream property 'content' has a value in the payload. In OData, stream property must not have a value, it must only use property annotations.
By googling, I found that there is another way to upload data, called multipart upload (link). With this description, I tried to use the GraphServiceClient to create such a request, but it seems that this is only fully implemented for OneNote items. I took this code as a template and created the following function to mimic the OneNote behavior:
public static async Task UploadSmallFile(GraphServiceClient graphClient, DriveItem driveItem, Stream stream)
{
var jsondata = JsonConvert.SerializeObject(driveItem);
// Create the metadata part.
StringContent stringContent = new StringContent(jsondata, Encoding.UTF8, "application/json");
stringContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("related");
stringContent.Headers.ContentDisposition.Name = "Metadata";
stringContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
// Create the data part.
var streamContent = new StreamContent(stream);
streamContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("related");
streamContent.Headers.ContentDisposition.Name = "Data";
streamContent.Headers.ContentType = new MediaTypeHeaderValue("text/plain");
// Put the multiparts together
string boundary = "MultiPartBoundary32541";
MultipartContent multiPartContent = new MultipartContent("related", boundary);
multiPartContent.Add(stringContent);
multiPartContent.Add(streamContent);
var requestUrl = graphClient.Me.Drive.Items["F4C4DC6C33B9D421!103"].Children.Request().RequestUrl;
// Create the request message and add the content.
HttpRequestMessage hrm = new HttpRequestMessage(HttpMethod.Post, requestUrl);
hrm.Content = multiPartContent;
// Send the request and get the response.
var response = await graphClient.HttpProvider.SendAsync(hrm);
}
With this code, I get the error Entity only allows writes with a JSON Content-Type header.
What am I doing wrong?
I'm not sure why that error occurs; your example appears to be valid and corresponds to the request body example in the documentation.
But an alternative option could be considered for this matter: since Microsoft Graph supports JSON batching, the following example demonstrates how to upload a file and update its metadata within a single request:
POST https://graph.microsoft.com/v1.0/$batch
Accept: application/json
Content-Type: application/json
{
"requests": [
{
"id":"1",
"method":"PUT",
"url":"/me/drive/root:/Sample.docx:/content",
"headers":{
"Content-Type":"application/octet-stream"
}
},
{
"id":"2",
"method":"PATCH",
"url":"/me/drive/root:/Sample.docx:",
"headers":{
"Content-Type":"application/json; charset=utf-8"
},
"body":{
"fileSystemInfo":{
"lastModifiedDateTime":"2019-08-09T00:49:37.7758742+03:00"
}
},
"dependsOn":["1"]
}
]
}
Here is a C# example:
var bytes = System.IO.File.ReadAllBytes(path);
var stream = new MemoryStream(bytes);
var batchRequest = new BatchRequest();
//1.1 construct upload file query
var uploadRequest = graphClient.Me
.Drive
.Root
.ItemWithPath(System.IO.Path.GetFileName(path))
.Content
.Request();
batchRequest.AddQuery(uploadRequest, HttpMethod.Put, new StreamContent(stream));
//1.2 construct update driveItem query
var updateRequest = graphClient.Me
.Drive
.Root
.ItemWithPath(System.IO.Path.GetFileName(path))
.Request();
var driveItem = new DriveItem()
{
FileSystemInfo = new FileSystemInfo()
{
LastModifiedDateTime = DateTimeOffset.UtcNow.AddDays(-1)
}
};
var jsonPayload = new StringContent(graphClient.HttpProvider.Serializer.SerializeObject(driveItem), Encoding.UTF8, "application/json");
batchRequest.AddQuery(updateRequest, new HttpMethod("PATCH"), jsonPayload, true, typeof(Microsoft.Graph.DriveItem));
//2. execute Batch request
var result = await graphClient.SendBatchAsync(batchRequest);
var updatedDriveItem = result[1] as DriveItem;
Console.WriteLine(updatedDriveItem.LastModifiedDateTime);
where SendBatchAsync is an extension method which implements JSON batching support for the Microsoft Graph .NET Client Library.
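For reference, later versions of the Graph .NET client ship batching support built in (BatchRequestContent in Microsoft.Graph.Core), so the extension method may not be needed. A rough sketch under that assumption, reusing uploadRequest, updateRequest, jsonPayload, and stream from the example above (check your SDK version; the exact API surface may differ):
// Build the two raw requests from the request builders above.
var uploadMessage = new HttpRequestMessage(HttpMethod.Put, uploadRequest.RequestUrl)
{
    Content = new StreamContent(stream)
};
var updateMessage = new HttpRequestMessage(new HttpMethod("PATCH"), updateRequest.RequestUrl)
{
    Content = jsonPayload
};
var batch = new BatchRequestContent();
var uploadId = batch.AddBatchRequestStep(uploadMessage);
// Run the metadata PATCH only after the upload step succeeds.
batch.AddBatchRequestStep(new BatchRequestStep("2", updateMessage,
    new List<string> { uploadId }));
var batchResponse = await graphClient.Batch.Request().PostAsync(batch);
var patchResponse = await batchResponse.GetResponseByIdAsync("2");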
I am using the Microsoft Graph SDK to upload a file in chunks to OneDrive. I am using the code below to upload the file:
try
{
GraphServiceClient graphClient = this.GetGraphServiceClient(accessToken);
string fileName = Path.GetFileName(srcFilePath);
using (var fileContentStream = System.IO.File.Open(srcFilePath, System.IO.FileMode.Open))
{
var uploadSession = await graphClient.Me.Drive.Root.ItemWithPath(fileName).CreateUploadSession().Request().PostAsync();
var maxChunkSize = 5 * 1024 * 1024;
var provider = new ChunkedUploadProvider(uploadSession, graphClient, fileContentStream, maxChunkSize);
var chunkRequests = provider.GetUploadChunkRequests();
var readBuffer = new byte[maxChunkSize];
var trackedExceptions = new List<Exception>();
Microsoft.Graph.DriveItem itemResult = null;
foreach (var request in chunkRequests)
{
var result = await provider.GetChunkRequestResponseAsync(request, readBuffer, trackedExceptions);
if (result.UploadSucceeded)
{
itemResult = result.ItemResponse;
}
}
}
}
catch (Microsoft.Graph.ServiceException e)
{
}
catch (Exception ex)
{
}
The above code works fine with normal file names. However, when I try to upload a file named Test#123.pdf, an "Object reference not set to an instance of an object" exception is thrown at the line var provider = new ChunkedUploadProvider(uploadSession, graphClient, fileContentStream, maxChunkSize);
Is this a limitation of OneDrive SDK, or am I not passing the parameters correctly?
The # sign has a special meaning in a URL. Before you can use it, you'll need to URL-encode the file name: Test%23123.pdf.
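For example, a small sketch using Uri.EscapeDataString (fileName comes from the question's code; the rest of the call mirrors it):
// Percent-encode reserved characters such as '#' before building the path,
// so "Test#123.pdf" becomes "Test%23123.pdf".
string encodedName = Uri.EscapeDataString(fileName);
var uploadSession = await graphClient.Me.Drive.Root
    .ItemWithPath(encodedName)
    .CreateUploadSession()
    .Request()
    .PostAsync();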
I'm struggling with an easy problem. I want to download an image from the web using this code:
WebRequest requestPic = WebRequest.Create(@"http://something.com/" + id + ".jpg");
WebResponse responsePic = await requestPic.GetResponseAsync();
Now I want to write the WebResponse's stream to a StorageFile (e.g. create a file id.jpg in the app's storage), but I haven't found any way to achieve that. I searched the web for it, but with no success: every approach ran into incompatible Stream types and so on.
Could you please help?
I have found the following solution, which works and is not too complicated.
public async static Task<StorageFile> SaveAsync(
Uri fileUri,
StorageFolder folder,
string fileName)
{
var file = await folder.CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);
var downloader = new BackgroundDownloader();
var download = downloader.CreateDownload(
fileUri,
file);
var res = await download.StartAsync();
return file;
}
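Usage might look something like this (a sketch; the URL pattern comes from the question):
// Download the image straight into the app's local folder.
var file = await SaveAsync(
    new Uri("http://something.com/" + id + ".jpg"),
    ApplicationData.Current.LocalFolder,
    id + ".jpg");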
You will need to read the response stream into a buffer and then write the data to a StorageFile. The following code shows an example:
var fStream = responsePic.GetResponseStream();
var file = await ApplicationData.Current.LocalFolder.CreateFileAsync("testfile.txt");
using (var ostream = await file.OpenStreamForWriteAsync())
{
int count = 0;
do
{
var buffer = new byte[1024];
count = fStream.Read(buffer, 0, 1024);
await ostream.WriteAsync(buffer, 0, count);
}
while (fStream.CanRead && count > 0);
}
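As a side note, Stream.CopyToAsync wraps the same read/write loop, so under the same assumptions the body can shrink to:
// CopyToAsync handles buffering and the read/write loop internally.
using (var fStream = responsePic.GetResponseStream())
using (var ostream = await file.OpenStreamForWriteAsync())
{
    await fStream.CopyToAsync(ostream);
}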
That can also be done using the C++ REST SDK in Windows Store apps; it's explained in the HTTP Client Tutorial.
I am trying to use WebClient.UploadFile in my project to post a file to a server. WebClient.UploadFile accepts a file name URI as a parameter, but I would like to pass a file stream instead. Is that possible with WebClient?
Here are some examples that show how to write a stream to the specified resource using the WebClient class:
Using WebClient.OpenWrite:
using (var client = new WebClient())
{
var fileContent = System.IO.File.ReadAllBytes(fileName);
using (var postStream = client.OpenWrite(endpointUrl))
{
postStream.Write(fileContent, 0, fileContent.Length);
}
}
Using WebClient.OpenWriteAsync:
using (var client = new WebClient())
{
client.OpenWriteCompleted += (sender, e) =>
{
var fileContent = System.IO.File.ReadAllBytes(fileName);
using (var postStream = e.Result)
{
postStream.Write(fileContent, 0, fileContent.Length);
}
};
client.OpenWriteAsync(new Uri(endpointUrl));
}
You should be able to use the methods WebClient.OpenWrite and OpenWriteAsync to send a stream back to your server.
If you use the latter, subscribe to OpenWriteCompleted and use e.Result as the stream to CopyTo.
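Since the question asks about an existing stream rather than a file on disk, here is a minimal sketch along the same lines (sourceStream and endpointUrl are assumed to be supplied by the caller):
// Copy an arbitrary Stream into the request body instead of reading a file path.
using (var client = new WebClient())
using (var postStream = client.OpenWrite(endpointUrl))
{
    sourceStream.CopyTo(postStream);
}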