I'm implementing a Xamarin.Forms control. The problem I'm currently experiencing is that an overridden Draw() method of a custom renderer blocks the UI (at least on iOS). I've googled but with no success. Is it possible to perform the drawing in the background without blocking the UI?
Here is the code of a simple renderer for iOS that demonstrates the issue.
public class MyCustomRenderer : ViewRenderer
{
    protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        base.OnElementPropertyChanged(sender, e);
        SetNeedsDisplay();
    }

    public override void Draw(CoreGraphics.CGRect rect)
    {
        var myControl = (MyControl)this.Element;
        if (!myControl.IsRendered)
        {
            using (var context = UIGraphics.GetCurrentContext())
            {
                var token = CancellationToken.None;
                var task = Task.Factory.StartNew(() => TimeConsumingRendering(context, token), token);
                // task.Wait() blocks the UI but draws the desired graphics.
                // When task.Wait() is commented out, the desired graphics don't get drawn, but the UI isn't blocked.
                task.Wait();
            }
        }
    }

    private void TimeConsumingRendering(CGContext context, CancellationToken token)
    {
        try
        {
            for (int i = 0; i <= 100; i++)
            {
                token.ThrowIfCancellationRequested();
                var delay = Task.Delay(50);
                delay.Wait();
            }
            context.ScaleCTM(1f, -1f);
            context.TranslateCTM(0, -Bounds.Height);
            context.SetTextDrawingMode(CGTextDrawingMode.FillStroke);
            context.SelectFont("Helvetica-Bold", 16f, CGTextEncoding.MacRoman);
            context.SetFillColor(new CoreGraphics.CGColor(1f, 0f, 0f));
            context.ShowTextAtPoint(0, 0, "Finished");
        }
        catch
        { }
    }
}
It looks like the only solution is to separate the time-consuming rendering from the drawing on the actual control. The solution is:
1. generate an image in the background (triggered by an event), then
2. use the generated image within the overridden Draw method.
At least it works for me. A minimal sketch of this two-step approach follows.
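For illustration, here is a sketch of that approach (my assumptions, not tested production code: a _cache field, rendering kicked off from OnElementChanged, and reuse of the TimeConsumingRendering method above; UIGraphics image contexts can be used off the main thread since iOS 4, unlike the context handed to Draw()):
public class MyCustomRenderer : ViewRenderer
{
    private UIImage _cache; // holds the pre-rendered image (assumed field)

    protected override void OnElementChanged(ElementChangedEventArgs<View> e)
    {
        base.OnElementChanged(e);
        if (e.NewElement != null)
        {
            Task.Run(() =>
            {
                // Render into an off-screen image context on a worker thread.
                UIGraphics.BeginImageContext(Bounds.Size);
                TimeConsumingRendering(UIGraphics.GetCurrentContext(), CancellationToken.None);
                _cache = UIGraphics.GetImageFromCurrentImageContext();
                UIGraphics.EndImageContext();
                // Invalidate on the UI thread; Draw() then just blits the image.
                InvokeOnMainThread(SetNeedsDisplay);
            });
        }
    }

    public override void Draw(CoreGraphics.CGRect rect)
    {
        _cache?.Draw(rect); // cheap: no blocking work on the UI thread
    }
}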
I have a small app that reads QR codes for a login and alternatively offers the possibility to hand-type the code and log in.
The app starts and heads directly to the login view. When I try to scan a QR code, it does not work - the delegate is never called / the event is never raised.
I adapted the approach from Larry OBrien (http://www.knowing.net/index.php/2013/10/09/natively-recognize-barcodesqr-codes-in-ios-7-with-xamarin-ios/)
and created my own ScannerView class for that purpose:
public sealed partial class ScannerView : UIView
{
    private readonly AVCaptureVideoPreviewLayer _layer;
    public AVCaptureSession Session { get; }
    private readonly AVCaptureMetadataOutput _metadataOutput;

    public event EventHandler<AVMetadataMachineReadableCodeObject> MetadataFound = delegate { };

    public ScannerView(IntPtr handle) : base(handle)
    {
        Session = new AVCaptureSession();
        var camera = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
        var input = AVCaptureDeviceInput.FromDevice(camera);
        Session.AddInput(input);

        // Add the metadata output channel
        _metadataOutput = new AVCaptureMetadataOutput { RectOfInterest = Bounds };
        var metadataDelegate = new MetadataOutputDelegate();
        var dispatchQueue = new DispatchQueue("scannerQueue");
        _metadataOutput.SetDelegate(metadataDelegate, dispatchQueue);
        Session.AddOutput(_metadataOutput);

        _layer = new AVCaptureVideoPreviewLayer(Session)
        {
            MasksToBounds = true,
            VideoGravity = AVLayerVideoGravity.ResizeAspectFill,
            Frame = Bounds
        };
        Layer.AddSublayer(_layer);

        // Hand event over to subscriber
        metadataDelegate.MetadataFound += (s, e) => MetadataFound(s, e);
    }

    public override void LayoutSubviews()
    {
        base.LayoutSubviews();
        _layer.Frame = Bounds;
        _metadataOutput.RectOfInterest = Bounds;
    }

    public void SetMetadataType(AVMetadataObjectType type)
    {
        // Confusing! *After* adding to session, tell output what to recognize...
        _metadataOutput.MetadataObjectTypes = type;
    }
}
And in my LoginView I do the following:
public override void ViewWillAppear(bool animated)
{
    base.ViewWillAppear(animated);
    // Manipulate navigation stack
    NavigationController.SetViewControllers(
        NavigationController.ViewControllers.Where(
            viewController => viewController is LoginView).ToArray(), false);

    ScannerView.MetadataFound += (s, e) =>
    {
        Console.WriteLine($"Found: [{e.Type.ToString()}] {e.StringValue}");
        LoginViewModel.BarCode = e.StringValue;
        if (LoginViewModel.DoneCommand.CanExecute())
        {
            ScannerView.Session.StopRunning();
            LoginViewModel.DoneCommand.Execute();
        }
    };
}

public override void ViewDidAppear(bool animated)
{
    base.ViewDidAppear(animated);
    ScannerView.Session.StartRunning();
    ScannerView.SetMetadataType(AVMetadataObjectType.QRCode | AVMetadataObjectType.EAN13Code);
}
The funny thing is that this works once I've logged in with the manual input and logged out again, so that I'm on the same screen again (possibly not the same instance but a new one, as the GC may destroy the view once it is removed from the navigation stack?).
I have put the ScannerView as a subview on the LoginView in the storyboard. For navigation I use MvvmCross (just for info).
So: What am I doing wrong? What do I need to do to make it work on the first load? (I got it to work once - with the same code... maybe it is a timing issue?)
Obviously this is a timing issue.
I solved it by adding a "Tap to scan" paradigm.
When tapping I execute the following code:
public override void TouchesBegan(NSSet touches, UIEvent evt)
{
    base.TouchesBegan(touches, evt);
    Console.WriteLine($"Current types to scan: {this.MetadataOutput.MetadataObjectTypes}");
    this.SetMetadataType(this.MetadataObjectType);
    Console.WriteLine($"New types to scan: {this.MetadataOutput.MetadataObjectTypes}");
}

public void SetMetadataType(AVMetadataObjectType type)
{
    // Confusing! *After* adding to session, tell output what to recognize...
    this.Session.BeginConfiguration();
    this.MetadataOutput.MetadataObjectTypes = type;
    this.Session.CommitConfiguration();
}
Where MetadataObjectType is set beforehand to the codes we're looking for.
That solves the problem - the scanning now works every time.
I think the magic part is the Begin-/CommitConfiguration pair, as the fix also works if I do not use the tap-to-scan paradigm.
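For completeness, an untested sketch of applying the same idea during initial setup in the ScannerView constructor above, so the whole capture graph is committed atomically before the session starts (the QRCode-only type is my placeholder):
// Sketch: wrap the initial graph construction in a configuration
// transaction, so input, output and metadata types take effect together.
Session.BeginConfiguration();
Session.AddInput(input);
Session.AddOutput(_metadataOutput);
_metadataOutput.MetadataObjectTypes = AVMetadataObjectType.QRCode; // must come after AddOutput
Session.CommitConfiguration();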
I'm working on a Xamarin.Forms (PCL) project. I want to convert a StackLayout to an image/buffer and send it to a printer for a hard copy.
Can anyone suggest how to do it in Xamarin.Android and Xamarin.iOS?
You can't. Xamarin does not have that kind of feature built in. You should write a renderer for your UI component.
Fortunately there is an Objective-C iOS implementation, and an Android one as well. You can take inspiration from them.
Taken from this link, which I have personally used (quite a while back, though), the following code will take a screenshot of the entire page.
I ended up modifying the code to only capture a specific view on the page, and changed a few other things, but this example is what I based it on - so let me know if you would rather see that code, or if something below does not work for you.
First you create an interface in your Forms project, IScreenshotManager.cs for example:
public interface IScreenshotManager
{
    Task<byte[]> CaptureAsync();
}
Now we need to implement our interface in Android, ScreenshotManager.cs for example:
public class ScreenshotManager : IScreenshotManager
{
    public static Activity Activity { get; set; }

    public async System.Threading.Tasks.Task<byte[]> CaptureAsync()
    {
        if (Activity == null)
        {
            throw new Exception("You have to set ScreenshotManager.Activity in your Android project");
        }

        var view = Activity.Window.DecorView;
        view.DrawingCacheEnabled = true;
        Bitmap bitmap = view.GetDrawingCache(true);

        byte[] bitmapData;
        using (var stream = new MemoryStream())
        {
            bitmap.Compress(Bitmap.CompressFormat.Png, 0, stream);
            bitmapData = stream.ToArray();
        }
        return bitmapData;
    }
}
Then set ScreenshotManager.Activity in MainActivity:
public class MainActivity : Xamarin.Forms.Platform.Android.FormsApplicationActivity
{
    protected override async void OnCreate(Android.OS.Bundle bundle)
    {
        ...
        ScreenshotManager.Activity = this; // There are better ways to do this, but this is what the example from the link suggests
        ...
    }
}
Finally we implement this on iOS, ScreenshotManager.cs:
public class ScreenshotManager : IScreenshotManager
{
    public async System.Threading.Tasks.Task<byte[]> CaptureAsync()
    {
        var view = UIApplication.SharedApplication.KeyWindow.RootViewController.View;

        UIGraphics.BeginImageContext(view.Frame.Size);
        view.DrawViewHierarchy(view.Frame, true);
        var image = UIGraphics.GetImageFromCurrentImageContext();
        UIGraphics.EndImageContext();

        using (var imageData = image.AsPNG())
        {
            var bytes = new byte[imageData.Length];
            System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, bytes, 0, Convert.ToInt32(imageData.Length));
            return bytes;
        }
    }
}
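From the shared Forms project you would then resolve whichever platform implementation is active. A typical way to do that is Xamarin.Forms' DependencyService; this assumes each platform class is registered with the Dependency attribute, which the original example does not show:
// Assumes each platform project declares:
// [assembly: Dependency(typeof(ScreenshotManager))]
var screenshots = DependencyService.Get<IScreenshotManager>();
byte[] pngBytes = await screenshots.CaptureAsync();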
I have a MapControl working just creating my route. Now, I just need to figure out a way to print it out. Using the UWP printing sample, I get a black box where the control should be. The map and route are being built, just not rendered correctly in the print preview. I thought I saw a MapControl.Print... but I think that was in the Bing.Maps stuff. Any pointers would be appreciated. Thanks.
Using the UWP printing sample, I get a black box where the control should be.
It seems the MapControl cannot be printed directly.
As a workaround, we can use RenderTargetBitmap to get an image from the MapControl. Then we can print the image.
Using a RenderTargetBitmap, you can accomplish scenarios such as applying image effects to a visual that originally came from a XAML UI composition, generating thumbnail images of child pages for a navigation system, or enabling the user to save parts of the UI as an image source and then share that image with other apps.
Because RenderTargetBitmap is a subclass of ImageSource, it can be used as the image source for Image elements or an ImageBrush brush.
For more info, see RenderTargetBitmap.
For example:
RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
await renderTargetBitmap.RenderAsync(MyMap);
MyImage.Source = renderTargetBitmap;
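If your printing path needs raw bytes rather than an ImageSource, RenderTargetBitmap can also hand back its pixel buffer (BGRA8). A small sketch, assuming the same MyMap element:
// Sketch: extract raw BGRA8 pixels from the rendered map.
var renderTargetBitmap = new RenderTargetBitmap();
await renderTargetBitmap.RenderAsync(MyMap);
IBuffer pixelBuffer = await renderTargetBitmap.GetPixelsAsync();
byte[] pixels = pixelBuffer.ToArray(); // needs System.Runtime.InteropServices.WindowsRuntime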
The printing code:
public sealed partial class MainPage : Page
{
    private PrintManager printmgr = PrintManager.GetForCurrentView();
    private PrintDocument printDoc = null;
    private PrintTask task = null;

    public MainPage()
    {
        this.InitializeComponent();
        printmgr.PrintTaskRequested += Printmgr_PrintTaskRequested;
    }

    private void Printmgr_PrintTaskRequested(PrintManager sender, PrintTaskRequestedEventArgs args)
    {
        var deferral = args.Request.GetDeferral();
        task = args.Request.CreatePrintTask("Print", OnPrintTaskSourceRequested);
        task.Completed += PrintTask_Completed;
        deferral.Complete();
    }

    private void PrintTask_Completed(PrintTask sender, PrintTaskCompletedEventArgs args)
    {
        // The PrintTask is completed.
    }

    private async void OnPrintTaskSourceRequested(PrintTaskSourceRequestedArgs args)
    {
        var def = args.GetDeferral();
        await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal,
            () =>
            {
                args.SetSource(printDoc?.DocumentSource);
            });
        def.Complete();
    }

    private async void appbar_Printer_Click(object sender, RoutedEventArgs e)
    {
        if (printDoc != null)
        {
            printDoc.GetPreviewPage -= OnGetPreviewPage;
            printDoc.Paginate -= PrintDoc_Paginate;
            printDoc.AddPages -= PrintDoc_AddPages;
        }
        this.printDoc = new PrintDocument();
        printDoc.GetPreviewPage += OnGetPreviewPage;
        printDoc.Paginate += PrintDoc_Paginate;
        printDoc.AddPages += PrintDoc_AddPages;
        bool showPrint = await PrintManager.ShowPrintUIAsync();
    }

    private void PrintDoc_AddPages(object sender, AddPagesEventArgs e)
    {
        printDoc.AddPage(this);
        printDoc.AddPagesComplete();
    }

    private void PrintDoc_Paginate(object sender, PaginateEventArgs e)
    {
        PrintTaskOptions opt = task.Options;
        printDoc.SetPreviewPageCount(1, PreviewPageCountType.Final);
    }

    private void OnGetPreviewPage(object sender, GetPreviewPageEventArgs e)
    {
        printDoc.SetPreviewPage(e.PageNumber, this);
    }
}
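To tie the two pieces together, instead of printing `this` (the live page, where the MapControl renders as a black box), you could hand the print document an element hosting the snapshot. A sketch, with BuildPrintPageAsync as a hypothetical helper whose result you would pass to AddPage and SetPreviewPage:
// Hypothetical glue: build a printable element from the map snapshot.
private async Task<FrameworkElement> BuildPrintPageAsync()
{
    var bitmap = new RenderTargetBitmap();
    await bitmap.RenderAsync(MyMap); // MyMap is the MapControl from above
    return new Image { Source = bitmap, Stretch = Stretch.Uniform };
}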
So, I'm working on my paint application. Every time I make changes, the current screen state is copied and saved as a bitmap image on disk (so I can use it in my paint event).
The problem occurs when I minimize the window, return it to its normal state, and then try to draw. This triggers my event reacting to changes, the program tries to save the image ---->>> kaboom.
It says "A generic error occurred in GDI+". I've been surfing through various forums in search of the answer, but none of them gave me a true answer; they all mention wrong paths etc., and I'm pretty sure that's not the problem. Do I have to dispose the bitmap, or do something with the stream?
int width = pictureBox1.Size.Width;
int height = pictureBox1.Size.Height;
Point labelOrigin = new Point(0, 0); // this is referencing the control
Point screenOrigin = pictureBox1.PointToScreen(labelOrigin);
int x = screenOrigin.X;
int y = screenOrigin.Y;
Rectangle bounds = this.Bounds;

using (Bitmap bitmap = new Bitmap(width, height))
{
    using (Graphics g = Graphics.FromImage(bitmap))
    {
        g.CopyFromScreen(new Point(x, y), Point.Empty, bounds.Size);
    }
    bitmap.Save(_brojFormi + ".bmp", System.Drawing.Imaging.ImageFormat.Bmp);
}
You're saving an image to disk so you can use it in another event? Wow.
Why not just use a class-global variable to store the bitmap?
class MyForm
{
    Bitmap currentImage = null;
    Graphics gfx = null;

    private void btnLoad_Click(object sender, EventArgs e)
    {
        // ...
        currentImage = new Bitmap(fileName);
        gfx = Graphics.FromImage(currentImage);
    }

    private void pbEditor_Paint(object sender, PaintEventArgs e)
    {
        if (currentImage != null && gfx != null)
        {
            lock (currentImage) e.Graphics.DrawImage(currentImage, ...);
        }
    }

    // Wired to MouseClick (the plain Click event passes EventArgs, not MouseEventArgs).
    private void pbEditor_MouseClick(object sender, MouseEventArgs e)
    {
        // Quick example to show bitmap drawing.
        if (currentImage != null && e.Button == MouseButtons.Left)
            lock (currentImage) currentImage.SetPixel(e.Location.X, e.Location.Y, Color.Black);
    }
}
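If you do still need to persist the state to disk, a sketch of a safer save (my addition, not part of the answer above): saving a clone means the on-screen bitmap is never tied up by the encoder, which is one common source of "A generic error occurred in GDI+".
// Sketch: save a clone so the live bitmap is never locked by Save().
private void SaveSnapshot(string fileName)
{
    if (currentImage == null) return;
    lock (currentImage)
    {
        using (var copy = new Bitmap(currentImage))
        {
            copy.Save(fileName, System.Drawing.Imaging.ImageFormat.Bmp);
        }
    }
}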
I'm trying to create an app to read QR codes using MonoTouch and the C# port of ZXing, but I'm hitting memory issues. While the app processes captured screen frames, it receives memory warnings and is then shut down. I have removed the call to ZXing to track down where the memory issue stems from, and I can reproduce the issue just by capturing the screen image in a loop.
Here is the code:
using System;
using System.Drawing;
using System.Collections.Generic;
using System.Threading;
using MonoTouch.UIKit;
using MonoTouch.Foundation;
using MonoTouch.CoreGraphics;
using com.google.zxing;
using com.google.zxing.common;
using System.Collections;
using MonoTouch.AudioToolbox;
using iOS_Client.Utilities;

namespace iOS_Client.Controllers
{
    public class CameraOverLayView : UIView
    {
        private Thread _thread;
        private CameraViewController _parentViewController;
        private Hashtable hints;
        private static com.google.zxing.MultiFormatReader _multiFormatReader = null;
        private static RectangleF picFrame = new RectangleF(0, 146, 320, 157);
        private static UIImage _theScreenImage = null;

        public CameraOverLayView(CameraViewController parentController) : base()
        {
            Initialize();
            _parentViewController = parentController;
        }

        private void Initialize()
        {
        }

        private bool Worker()
        {
            Result resultb = null;
            if (DeviceHardware.Version == DeviceHardware.HardwareVersion.iPhone4
                || DeviceHardware.Version == DeviceHardware.HardwareVersion.iPhone4S)
            {
                picFrame = new RectangleF(0, 146 * 2, 320 * 2, 157 * 2);
            }
            if (hints == null)
            {
                var list = new ArrayList();
                list.Add(com.google.zxing.BarcodeFormat.QR_CODE);
                hints = new Hashtable();
                hints.Add(com.google.zxing.DecodeHintType.POSSIBLE_FORMATS, list);
                hints.Add(com.google.zxing.DecodeHintType.TRY_HARDER, true);
            }
            if (_multiFormatReader == null)
            {
                _multiFormatReader = new com.google.zxing.MultiFormatReader();
            }

            using (var screenImage = CGImage.ScreenImage.WithImageInRect(picFrame))
            {
                using (_theScreenImage = UIImage.FromImage(screenImage))
                {
                    Bitmap srcbitmap = new System.Drawing.Bitmap(_theScreenImage);
                    LuminanceSource source = null;
                    BinaryBitmap bitmap = null;
                    try
                    {
                        source = new RGBLuminanceSource(srcbitmap, screenImage.Width, screenImage.Height);
                        bitmap = new BinaryBitmap(new HybridBinarizer(source));
                        try
                        {
                            _multiFormatReader.Hints = hints;
                            resultb = null;
                            //_multiFormatReader.decodeWithState(bitmap);
                            if (resultb != null && resultb.Text != null)
                            {
                                InvokeOnMainThread(() => _parentViewController.BarCodeScanned(resultb));
                            }
                        }
                        catch (ReaderException re)
                        {
                            //continue;
                        }
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                    finally
                    {
                        if (bitmap != null)
                            bitmap = null;
                        if (source != null)
                            source = null;
                        if (srcbitmap != null)
                        {
                            srcbitmap.Dispose();
                            srcbitmap = null;
                        }
                    }
                }
            }
            return resultb != null;
        }

        public void StartWorker()
        {
            if (_thread == null)
            {
                _thread = new Thread(() =>
                {
                    bool result = false;
                    while (result == false)
                    {
                        result = Worker();
                        Thread.Sleep(67);
                    }
                });
            }
            _thread.Start();
        }

        public void StopWorker()
        {
            if (_thread != null)
            {
                _thread.Abort();
                _thread = null;
            }
            // Just in case
            _multiFormatReader = null;
            hints = null;
        }

        protected override void Dispose(bool disposing)
        {
            StopWorker();
            base.Dispose(disposing);
        }
    }
}
Interestingly, I took a look at http://blog.reinforce-lab.com/2010/02/monotouchvideocapturinghowto.html to see how others were capturing and processing video, and that code suffers from the same problem as mine, quitting after about 40 seconds with memory warnings.
Hopefully the QR codes will be scanned in less than 40 seconds, but I'm not sure the memory ever gets released, so the problem may crop up after many codes have been scanned. Either way, it should be possible to capture a video feed continuously without memory issues, right?
This is somewhat counter-intuitive, but the ScreenImage property will create a new CGImage instance every time you call it, so you must call Dispose on that object as well:
using (var img = CGImage.ScreenImage)
{
    using (var screenImage = img.WithImageInRect(picFrame))
    {
    }
}
I will just add the actual solution that worked for me, which combines information from the previous answers. The code inside the loop looks like:
using (var pool = new NSAutoreleasePool())
{
    using (var img = CGImage.ScreenImage)
    {
        using (var screenImage = img.WithImageInRect(picFrame))
        {
            using (_theScreenImage = UIImage.FromImage(screenImage))
            {
            }
        }
    }
}
GC.Collect();
The original System.Drawing.Bitmap from zxing.MonoTouch suffered from a lack of Dispose, which meant it never released the unmanaged memory it allocated.
The more recent one (from your link) does free the unmanaged memory when Dispose is called (that's better). However, it creates a bitmap context in its constructor and does not dispose it deterministically (e.g. with a using). So it relies on the garbage collector (GC) to do it later.
In many cases this is not a big issue, since the GC will eventually free the context instance and reclaim the associated memory. However, if you're doing this in a loop, it's possible you'll run out of (unmanaged) memory before the GC kicks in. That will get you memory warnings, and iOS can decide to kill your application (or it could crash by itself).
but I'm not sure if the memory ever gets released
Yes, it should be - but maybe not as fast as you need the memory back. Implementing (and using) IDisposable correctly will solve this.
Either way it should be possible to capture a video feed continuously without memory issues right?
Yes. Make sure you're releasing your memory as soon as possible, e.g. with using (var ...) { }, and ensure the 3rd party code you use does the same.
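A minimal sketch of the missing pattern described above (illustrative names, not the actual zxing.MonoTouch types): create the CGBitmapContext in the constructor, as the port does, but dispose it deterministically instead of leaving it to the GC.
// Sketch: deterministic disposal of the unmanaged bitmap context.
public sealed class PixelGrabber : IDisposable
{
    CGBitmapContext _context;

    public PixelGrabber(CGImage image)
    {
        using (var colorSpace = CGColorSpace.CreateDeviceRGB())
        {
            // IntPtr.Zero lets CoreGraphics allocate the pixel buffer.
            _context = new CGBitmapContext(IntPtr.Zero,
                image.Width, image.Height, 8, image.Width * 4,
                colorSpace, CGImageAlphaInfo.PremultipliedLast);
        }
        _context.DrawImage(new RectangleF(0, 0, image.Width, image.Height), image);
    }

    public void Dispose()
    {
        if (_context != null)
        {
            _context.Dispose(); // frees the unmanaged buffer now, not at GC time
            _context = null;
        }
    }
}
Inside the capture loop you would then wrap each frame in using (var grabber = new PixelGrabber(screenImage)) { ... } so the buffer is released every iteration.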