So, I am trying to save an image file from the camera and then upload it to Firebase. Unfortunately, the Firebase Storage plugin has some errors and doesn't return errors in the catch, so I need to check the Wi-Fi connection before I do so, just in case I need to cache the image and upload it later. Once the image file has uploaded, I then create some JSON and send that to the Firebase database, where the app pulls down the relevant information.
Note: sometimes this code works and the image is not empty; other times the image comes back empty, so I'm guessing it's a timing issue?
Future saveImageFile(UploadImage image) async {
  await storage.init();
  var imageFile = storage.saveImageFile(image.file, image.getName());
  instance.setInt("ImageCount", imageCount + 1);
  checkConnectionThenUploadImage(imageFile);
}

//From storage class
File saveImageFile(File toBeSaved, String fileName) {
  final filePath = '$imageDirectory$fileName';
  var file = new File(filePath)
    ..createSync(recursive: true)
    ..writeAsBytes(toBeSaved.readAsBytesSync());
  return file;
}

checkConnectionThenUploadImage(File image) {
  checkConnectivity().then((isConnected) async {
    if (!isConnected) {
      instance.setBool("hasImagesToUpload", true);
    } else {
      await saveImageToStorage(image);
    }
  }).catchError((error) {
    print("Error getting connectivity status, was error: $error");
  });
}

saveImageToStorage(File imageFile) async {
  final fileName = getNameFromFile(imageFile);
  final StorageReference ref = FirebaseStorage.instance.ref().child("AllUsers").child(uuid).child(fileName);
  final StorageUploadTask uploadTask = ref.putFile(imageFile, const StorageMetadata(contentLanguage: "en"));
  final url = (await uploadTask.future).downloadUrl;
  if (url != null) { // Normally you could catch an error here but the plugin has a bug so it needs to be checked in other ways
    final fireImage = new FireImage(getNameFromFile(imageFile), storage.getDateFromFileName(fileName), imageCount, "", url.toString());
    saveImageJsonToDatabase(fireImage);
    storage.deleteImageFile(fileName);
  } else {
    checkConnectionThenUploadImage(imageFile);
  }
}

saveImageJsonToDatabase(FireImage image) async {
  await storage.init();
  storage.saveJsonFile(image);
  checkConnectivity().then((isConnected) {
    if (!isConnected) {
      instance.setBool("hasJsonToUpload", true);
    } else {
      final DatabaseReference dataBaseReference = FirebaseDatabase.instance.reference().child("AllUsers").child(uuid);
      dataBaseReference.child("images").push().set(image.toJson()).whenComplete(() {
        storage.deleteJsonFile(basename(image.name));
      }).catchError((error) { // catching errors works with firebase database
        saveImageJsonToDatabase(image);
      });
    }
  });
}

//From storage class
deleteImageFile(String fileName) async {
  final filePath = '$imageDirectory$fileName';
  File(filePath).delete();
}
The image gets uploaded and the JSON is created, but when I try to view the image using the download URL from Firebase Storage it says the image is empty. The only clue I have is that this is a timing issue, because it only happens occasionally.
Can anyone see where I'm going wrong?
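One thing worth noting when reading saveImageFile in the storage class above: writeAsBytes returns a Future that the cascade discards, so the upload can start before the bytes are actually on disk. Purely as a sketch of how to rule that race out (assuming it really is the cause), the write can be made synchronous, or the helper can be made async and awaited by its caller:

// Sketch only - assuming the unawaited writeAsBytes is the race.
// Option 1: write synchronously so the bytes are on disk before returning.
File saveImageFile(File toBeSaved, String fileName) {
  final filePath = '$imageDirectory$fileName';
  final file = File(filePath)..createSync(recursive: true);
  file.writeAsBytesSync(toBeSaved.readAsBytesSync());
  return file;
}

// Option 2: make the helper async and await the write (name is illustrative).
Future<File> saveImageFileAsync(File toBeSaved, String fileName) async {
  final filePath = '$imageDirectory$fileName';
  final file = await File(filePath).create(recursive: true);
  return file.writeAsBytes(await toBeSaved.readAsBytes());
}

With the async variant, the caller would do final imageFile = await storage.saveImageFileAsync(image.file, image.getName()); before handing the file to checkConnectionThenUploadImage.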
I'd like to write a multi-stage process linearly (as shown below) that starts with a file download with progress:
/// Processes the database, from download to prices processing.
Future<void> updateDatabase() async {
  //final fontText = await File('./example/cosmic.flf').readAsString();
  //print(art.renderFiglet('updateDatabase'));

  // == 1) download
  print('1 ======== DOWNLOADING ==========');
  try {
    await startDownloading();
  } catch (err) {}

  // == 2) Decompress: Whatever download was ok or not, we decompress the last downloaded zip file we have locally
  print('2 ======== DECOMPRESSING ========');
  try {
    await startDecompressing();
  } catch (err) {}

  // == i) Stage i, etc.
But something does not work in my download stage, as it starts stage 2 before stage 1 has completed.
My stage one (download) is like so:
/// Starts download process
Future<void> startDownloading() async {
  print("startDownloading…");
  _state = DownloadState.downloading;
  _progress = 0;
  notifyListeners();

  /// Database string url
  final databaseUrlForInstantData = "https://XXX";

  try {
    final request = Request('GET', Uri.parse(databaseUrlForInstantData));
    final StreamedResponse response = await Client().send(request);
    final contentLength = response.contentLength;

    // == Start from zero
    _progress = 0;
    notifyListeners();

    /// The currently downloaded file as an array of bytes
    List<int> bytes = [];

    response.stream.listen(
      /// = Closure listener for newly downloaded bytes
      (List<int> newBytes) {
        bytes.addAll(newBytes);
        final downloadedLength = bytes.length;
        if (contentLength == null) {
          _progress = 0;
        } else {
          _progress = downloadedLength / contentLength;
        }
        notifyListeners();
        print('Download in progress $_progress%: ${bytes.length} bytes so far');
      },

      /// = Download successfully completed
      onDone: () async {
        _state = DownloadState.downloaded;
        notifyListeners();

        /// The resulting local copy of the database
        final file = await _getDownloadedZipFile();

        // == Write to file
        await file.writeAsBytes(bytes);
        print('Download complete: ${bytes.length} bytes');
      },

      /// = Download error
      onError: (e) {
        _state = DownloadState.error;
        _error = e.message;
        print('Download error at $_progress%: $e');
      },
      cancelOnError: true,
    );
  }
  // == Catches potential error
  catch (e) {
    _state = DownloadState.error;
    _error = 'Could not download the database: $e';
    print('Download error at $_progress%: $e');
  }
}
Your startDownloading function returns after it registers callbacks to listen to the Stream and does not wait for the Stream to complete.
To wait for the Stream to complete, you can save the StreamSubscription returned by .listen and then await the Future from StreamSubscription.asFuture:
var streamSubscription = response.stream.listen(...);
await streamSubscription.asFuture();
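Applied to the startDownloading above (reusing its response and bytes variables), that could look roughly like the following sketch, with the existing callback bodies elided:

// Sketch: keep the existing callbacks, but hold on to the subscription...
final subscription = response.stream.listen(
  (List<int> newBytes) {
    bytes.addAll(newBytes);
    // ...progress bookkeeping as before...
  },
  onDone: () async {
    // ...write the downloaded bytes to the zip file as before...
  },
  onError: (e) {
    // ...error state as before...
  },
  cancelOnError: true,
);

// ...and await it, so updateDatabase's `await startDownloading()` really
// waits for stage 1 to finish before moving on to stage 2.
await subscription.asFuture();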
Is there a way to convert a dart:io File object to a dart:html File? I tried html.File file = dartFile as html.File and it isn't working.
No. The two file objects are completely unrelated.
I am not aware of any platform which has both the dart:io and the dart:html library available, so even being able to import both in the same program is surprising.
No way. But you can handle it by having different code. See below:
final _photos = <File>[];
final _photosWeb = <html.File>[];

if (kIsWeb == false) { // if it's not web, handle a dart:io File
  final pickedFile = await _picker.getImage(
    source: ImageSource.gallery,
    imageQuality: 100,
  );
  if (pickedFile != null) {
    _photos.insert(_numberOfImage, File(pickedFile.path));
  }
} else { // if it's web, handle an html.File
  final temp = await ImagePicker()
      .getImage(source: ImageSource.camera, imageQuality: 80);
  final pickedBytes = await temp!.readAsBytes();
  // Build the html.File from the picked bytes rather than the path's code units
  var image = html.File([pickedBytes], temp.path);
  _photosWeb.insert(_numberOfImage, image);
}
I have an ASP.NET Core 3.0 Web API endpoint that I have set up to allow me to post large audio files. I followed the directions in the MS docs below to set up the endpoint.
https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-3.0#kestrel-maximum-request-body-size
When an audio file is uploaded to the endpoint, it is streamed to an Azure Blob Storage container.
My code works as expected locally.
When I push it to my production server in Azure App Service on Linux, the code does not work and errors with
Unhandled exception in request pipeline: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
Per advice from the above article, I have incrementally updated the Kestrel configuration with the following:
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseKestrel((ctx, options) =>
    {
        var config = ctx.Configuration;
        options.Limits.MaxRequestBodySize = 6000000000;
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100,
                gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.RequestHeadersTimeout =
            TimeSpan.FromMinutes(2);
    }).UseStartup<Startup>();
});
I also configured FormOptions to accept files up to 6000000000 bytes:
services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 6000000000;
});
I also set up the API controller with the following attributes, per the article's advice:
[HttpPost("audio", Name="UploadAudio")]
[DisableFormValueModelBinding]
[GenerateAntiforgeryTokenCookie]
[RequestSizeLimit(6000000000)]
[RequestFormLimits(MultipartBodyLengthLimit = 6000000000)]
Finally, here is the action itself. This giant block of code is not indicative of how I want the code to be written but I have merged it into one method as part of the debugging exercise.
public async Task<IActionResult> Audio()
{
if (!MultipartRequestHelper.IsMultipartContentType(Request.ContentType))
{
throw new ArgumentException("The media file could not be processed.");
}
string mediaId = string.Empty;
string instructorId = string.Empty;
try
{
// process file first
KeyValueAccumulator formAccumulator = new KeyValueAccumulator();
var streamedFileContent = new byte[0];
var boundary = MultipartRequestHelper.GetBoundary(
MediaTypeHeaderValue.Parse(Request.ContentType),
_defaultFormOptions.MultipartBoundaryLengthLimit
);
var reader = new MultipartReader(boundary, Request.Body);
var section = await reader.ReadNextSectionAsync();
while (section != null)
{
var hasContentDispositionHeader = ContentDispositionHeaderValue.TryParse(
section.ContentDisposition, out var contentDisposition);
if (hasContentDispositionHeader)
{
if (MultipartRequestHelper
.HasFileContentDisposition(contentDisposition))
{
streamedFileContent =
await FileHelpers.ProcessStreamedFile(section, contentDisposition,
_permittedExtensions, _fileSizeLimit);
}
else if (MultipartRequestHelper
.HasFormDataContentDisposition(contentDisposition))
{
var key = HeaderUtilities.RemoveQuotes(contentDisposition.Name).Value;
var encoding = FileHelpers.GetEncoding(section);
if (encoding == null)
{
return BadRequest($"The request could not be processed: Bad Encoding");
}
using (var streamReader = new StreamReader(
section.Body,
encoding,
detectEncodingFromByteOrderMarks: true,
bufferSize: 1024,
leaveOpen: true))
{
// The value length limit is enforced by
// MultipartBodyLengthLimit
var value = await streamReader.ReadToEndAsync();
if (string.Equals(value, "undefined",
StringComparison.OrdinalIgnoreCase))
{
value = string.Empty;
}
formAccumulator.Append(key, value);
if (formAccumulator.ValueCount >
_defaultFormOptions.ValueCountLimit)
{
return BadRequest($"The request could not be processed: Key Count limit exceeded.");
}
}
}
}
// Drain any remaining section body that hasn't been consumed and
// read the headers for the next section.
section = await reader.ReadNextSectionAsync();
}
var form = formAccumulator;
var file = streamedFileContent;
var results = form.GetResults();
instructorId = results["instructorId"];
string title = results["title"];
string firstName = results["firstName"];
string lastName = results["lastName"];
string durationInMinutes = results["durationInMinutes"];
//mediaId = await AddInstructorAudioMedia(instructorId, firstName, lastName, title, Convert.ToInt32(duration), DateTime.UtcNow, DateTime.UtcNow, file);
string fileExtension = "m4a";
// Generate Container Name - InstructorSpecific
string containerName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{instructorId}";
string contentType = "audio/mp4";
FileType fileType = FileType.audio;
string authorName = $"{firstName} {lastName}";
string authorShortName = $"{firstName[0]}{lastName}";
string description = $"{authorShortName} - {title}";
long duration = (Convert.ToInt32(durationInMinutes) * 60000);
// Generate new filename
string fileName = $"{firstName[0].ToString().ToLower()}{lastName.ToLower()}-{Guid.NewGuid()}";
DateTime recordingDate = DateTime.UtcNow;
DateTime uploadDate = DateTime.UtcNow;
long blobSize = long.MinValue;
try
{
// Update file properties in storage
Dictionary<string, string> fileProperties = new Dictionary<string, string>();
fileProperties.Add("ContentType", contentType);
// update file metadata in storage
Dictionary<string, string> metadata = new Dictionary<string, string>();
metadata.Add("author", authorShortName);
metadata.Add("tite", title);
metadata.Add("description", description);
metadata.Add("duration", duration.ToString());
metadata.Add("recordingDate", recordingDate.ToString());
metadata.Add("uploadDate", uploadDate.ToString());
var fileNameWExt = $"{fileName}.{fileExtension}";
var blobContainer = await _cloudStorageService.CreateBlob(containerName, fileNameWExt, "audio");
try
{
MemoryStream fileContent = new MemoryStream(streamedFileContent);
fileContent.Position = 0;
using (fileContent)
{
await blobContainer.UploadFromStreamAsync(fileContent);
}
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode == 403)
{
return BadRequest(e.Message);
}
else
{
return BadRequest(e.Message);
}
}
try
{
foreach (var key in metadata.Keys.ToList())
{
blobContainer.Metadata.Add(key, metadata[key]);
}
await blobContainer.SetMetadataAsync();
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
blobSize = await StorageUtils.GetBlobSize(blobContainer);
}
catch (StorageException e)
{
return BadRequest(e.Message);
}
Media media = Media.Create(string.Empty, instructorId, authorName, fileName, fileType, fileExtension, recordingDate, uploadDate, ContentDetails.Create(title, description, duration, blobSize, 0, new List<string>()), StateDetails.Create(StatusType.STAGED, DateTime.MinValue, DateTime.UtcNow, DateTime.MaxValue), Manifest.Create(new Dictionary<string, string>()));
// upload to MongoDB
if (media != null)
{
var mapper = new Mapper(_mapperConfiguration);
var dao = mapper.Map<ContentDAO>(media);
try
{
await _db.Content.InsertOneAsync(dao);
}
catch (Exception)
{
mediaId = string.Empty;
}
mediaId = dao.Id.ToString();
}
else
{
// metadata wasn't stored, remove blob
await _cloudStorageService.DeleteBlob(containerName, fileName, "audio");
return BadRequest($"An issue occurred during media upload: rolling back storage change");
}
if (string.IsNullOrEmpty(mediaId))
{
return BadRequest($"Could not add instructor media");
}
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
var result = new { MediaId = mediaId, InstructorId = instructorId };
return Ok(result);
}
I reiterate, this all works great locally. I do not run it in IIS Express; I run it as a console app.
I submit large audio files via my SPA app and Postman and it works perfectly.
I am deploying this code to an Azure App Service on Linux (as a Basic B1).
Since the code works in my local development environment, I am at a loss as to what my next steps are. I have refactored this code a few times, but I suspect that it's environment-related.
I cannot find anywhere that mentions that the level of App Service Plan is the culprit so before I go out spending more money I wanted to see if anyone here had encountered this challenge and could provide advice.
UPDATE: I attempted upgrading to a Production App Service Plan to see if there was an undocumented gate for incoming traffic. Upgrading didn't work either.
Thanks in advance.
-A
Currently, as of 11/2019, there is a limitation with Azure App Service for Linux. Its CORS functionality is enabled by default and cannot be disabled, AND it has a file size limitation that doesn't appear to be overridden by any of the published Kestrel configurations. The solution is to move the Web API app to an Azure App Service for Windows, where it works as expected.
I am sure there is some way to get around it if you know the magic combination of configurations, server settings, and CLI commands but I need to move on with development.
I'm new to Xamarin development.
I'm building an app where the user can click a DOWNLOAD button.
This button downloads a video from the server and saves it to the photo library.
Here is how I implemented this (maybe it's the incorrect way?):
public bool SaveVideo(byte[] videoData, int id)
{
try
{
CreateCustomAlbum();
// Save file to application folder
string local_path = SaveFileToApplicationFolder(videoData);
_lib.WriteVideoToSavedPhotosAlbum(new Foundation.NSUrl(local_path), (t, u) =>
{
DeleteLocalFile(local_path); // Delete the local file so it does not increase the application size
_local_file_path = t.AbsoluteUrl.ToString(); // global variable
_lib.Enumerate(ALAssetsGroupType.Album, HandleALAssetsLibraryGroupsEnumerationResultsDelegate, (obj) => { });
});
return true;
}
catch (Exception ex)
{
return false;
}
}
void DeleteLocalFile(string local_path)
{
try
{
if (File.Exists(local_path))
{
File.Delete(local_path);
if (!File.Exists(local_path))
{
Console.WriteLine("Deleted");
}
}
}
catch (Exception ex)
{
}
}
string SaveFileToApplicationFolder(byte[] videoData)
{
try
{
string file_path = String.Empty;
var doc = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
string filename = "MY-APP-" + id + ".mp4"; // id global variable
file_path = Path.Combine(doc, filename); // global variable
File.WriteAllBytes(file_path, videoData);
return file_path;
}
catch (Exception ex)
{
return String.Empty;
}
}
void HandleALAssetsLibraryGroupsEnumerationResultsDelegate(ALAssetsGroup group, ref bool stop)
{
try
{
if (group == null)
{
stop = true;
return;
}
if (group.Name == "MY-APP-ALBUM-NAME")
{
stop = true;
_current_album = group;
SaveFileToCustomAlbum();
}
}
catch (Exception ex)
{
}
}
void SaveFileToCustomAlbum()
{
try
{
if (_current_album != null && !String.IsNullOrEmpty(_local_file_path))
{
_lib.AssetForUrl(new Foundation.NSUrl(_local_file_path), delegate (ALAsset asset)
{
if (asset != null)
{
_current_album.AddAsset(asset);
}
else
{
Console.WriteLine("ASSET == NULL.");
}
}, delegate (NSError assetError)
{
Console.WriteLine(assetError.ToString());
});
}
}
catch (Exception ex)
{
}
}
So this code does the following:
1) Saves the video to a local folder - method SaveFileToApplicationFolder
2) Then saves the video file to the photo library - method SaveVideo
3) Then deletes the file from the app folder (in order not to increase the application folder size (app size)) --- if that's the correct logic?
4) Then puts the asset into a custom album for my app
So everything here works well for me... BUT!
I need, every time the user opens an item, to check whether they already have the video for that item in the photo library or not.
And here I'm stuck... I just don't understand how I can check whether the user has a specific video. I can't find how to set a NAME for an asset, or how to look up assets by name, so I don't know how to find this asset. Metadata? A key/value on the object?
Refer to Obj-C Check if image already exists in Photos gallery
In short:
Store the assetUrl with NSUserDefaults when you save the video.
Check whether the video exists in the photo library using that assetUrl the next time the item is opened (see the sketch below).
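A minimal sketch of that idea in Xamarin.iOS terms, reusing the _lib (ALAssetsLibrary) field from the question; the "video-asset-" key prefix and both helper names are made up for illustration, and the stored assetUrl would be the t.AbsoluteUrl.ToString() value already captured in the WriteVideoToSavedPhotosAlbum callback:

// Sketch only: remember the asset URL per item id, then look it up later.
// NSUserDefaults and NSUrl come from the Foundation namespace.
void RememberAssetUrl(int id, string assetUrl)
{
    NSUserDefaults.StandardUserDefaults.SetString(assetUrl, "video-asset-" + id);
    NSUserDefaults.StandardUserDefaults.Synchronize();
}

void CheckVideoExists(int id, Action<bool> onResult)
{
    var assetUrl = NSUserDefaults.StandardUserDefaults.StringForKey("video-asset-" + id);
    if (string.IsNullOrEmpty(assetUrl))
    {
        onResult(false);
        return;
    }

    // Same ALAssetsLibrary instance (_lib) as in the question.
    _lib.AssetForUrl(new Foundation.NSUrl(assetUrl),
        asset => onResult(asset != null),   // found (or not) in the photo library
        error => onResult(false));          // lookup failed
}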
You may just want to use the Xam.Plugin.Media NuGet package. It makes it very easy to take and store videos, as well as access the default video picker for selecting existing videos.
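For completeness, here is a rough sketch of what the plugin route might look like. It assumes the Plugin.Media (xam.plugin.media) API of that era (CrossMedia.Current, TakeVideoAsync, PickVideoAsync); the method name and option values are only illustrative:

// Sketch only - assumes the xam.plugin.media (Plugin.Media) package.
using System;
using System.Threading.Tasks;
using Plugin.Media;
using Plugin.Media.Abstractions;

public async Task<string> TakeOrPickVideoAsync(bool takeNew)
{
    await CrossMedia.Current.Initialize();

    MediaFile video;
    if (takeNew && CrossMedia.Current.IsTakeVideoSupported)
    {
        // Records a new video; SaveToAlbum asks the plugin to store it in the photo library.
        video = await CrossMedia.Current.TakeVideoAsync(new StoreVideoOptions
        {
            Name = "MY-APP-video.mp4",   // illustrative name
            SaveToAlbum = true
        });
    }
    else
    {
        // Opens the default video picker for existing videos.
        video = await CrossMedia.Current.PickVideoAsync();
    }

    return video?.Path;                  // null if the user cancelled
}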
I am trying to improve a GNOME Shell extension by allowing it to retrieve a remote image (JPG) and set it as the icon for a certain widget.
Here is what I have so far, but it does not work due to a data type mismatch:
// allow remote album art url
const GdkPixbuf = imports.gi.GdkPixbuf;
const Soup = imports.gi.Soup;
const _httpSession = new Soup.SessionAsync();
Soup.Session.prototype.add_feature.call(_httpSession, new Soup.ProxyResolverDefault());
function getAlbumArt(url, callback) {
    var request = Soup.Message.new('GET', url);
    _httpSession.queue_message(request, function(_httpSession, message) {
        if (message.status_code !== 200) {
            callback(message.status_code, null);
            return;
        } else {
            var albumart = request.response_body_data;
            // this line gives error message:
            // JS ERROR: Error: Expected type guint8 for Argument 'data'
            // but got type 'object'
            // getAlbumArt/<#~/.local/share/gnome-shell/extensions
            // /laine#knasher.gmail.com/streamMenu.js:42
            var icon = GdkPixbuf.Pixbuf.new_from_inline(albumart, true);
            callback(null, icon);
        }
    });
}
Here is the callback:
....
log('try retrieve albumart: ' + filePath);
if (GLib.file_test(iconPath, GLib.FileTest.EXISTS)) {
    let file = Gio.File.new_for_path(iconPath)
    let icon = new Gio.FileIcon({file: file});
    this._albumArt.gicon = icon;
} else if (filePath.indexOf('http') == 0) {
    log('try retrieve from url: ' + filePath);
    getAlbumArt(filePath, function(code, icon) {
        if (code) {
            this._albumArt.gicon = icon;
        } else {
            this._albumArt.hide();
        }
    });
}
....
My question is: how do I parse the response, which is a JPEG image, so that I can set the widget icon with it?
Thank you very much!
I was able to achieve this by simply doing:
const St = imports.gi.St;
const Gio = imports.gi.Gio;
// ...
this.icon = new St.Icon()
// ...
let url = 'https://some.url'
let icon = Gio.icon_new_for_string(url);
this.icon.set_gicon(icon);
And it will automatically download it.
I had been struggling for hours with this issue until I finally figured out a way to do it with a local image cache (downloading the image and storing it in an icons/ folder). Then I tried this approach for fun (just to see what would happen, expecting it to fail miserably), and guess what? It just worked. This is not mentioned anywhere in the very scarce documentation I was able to find.
For anyone still having the same problem, here is my solution:
_httpSession.queue_message(request, function(_httpSession, message) {
    let buffer = message.response_body.flatten();
    let bytes = buffer.get_data();
    let gicon = Gio.BytesIcon.new(bytes);
    // your code here
});