How to perform an affine transform on an image from a stream? - ios

I want to rotate an image that I get from a MemoryStream.
I've tried to use a UIImageView:
private Stream TransformImageFromStream(MemoryStream stream, CGAffineTransform transform)
{
    var bytes = stream.ToBytes ();
    var data = NSData.FromArray (bytes);
    var image = UIImage.LoadFromData (data);
    var uiImage = new UIImageView (image);
    uiImage.Transform = transform;
    var result = uiImage.Image.AsPNG ().AsStream ();
    var testBytes = result.ToBytes ();
    if (testBytes [0] == bytes [0])
    {
        // throw new Exception ("test failed");
    }
    return result;
}
But the transform is never applied to the image data, the way it always is when a UIView is rendered on screen.
I've read somewhere that it should work by applying the transform directly to a CIImage:
private Stream TransformImageFromStream(MemoryStream stream, CGAffineTransform transform)
{
    var bytes = stream.ToBytes ();
    using (var data = NSData.FromArray (bytes))
    using (var ciImage = CIImage.FromData (data))
    using (var transformedImage = ciImage.ImageByApplyingTransform (transform))
    using (var uiImage = UIImage.FromImage (ciImage))
    using (var uiImageView = new UIImageView (uiImage))
    {
        var result = uiImage.AsPNG ();
        return null;
    }
}
But the UIImage won't convert via AsPNG(), so I can't turn it into a stream. In this case I've noticed that its CGImage is empty, with most properties set to 0. Perhaps there is some way to convert the CIImage itself, without any wrapping?
I have no more clues as to what to do.
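For reference, a minimal (untested) sketch of the CIImage route that avoids UIImageView entirely: a UIImage created directly from a CIImage has no backing CGImage, which is likely why AsPNG() produces nothing, so the idea is to rasterize the transformed CIImage through a CIContext first. This uses MemoryStream.ToArray() in place of the ToBytes() extension from the question:

```csharp
private Stream TransformImageFromStream (MemoryStream stream, CGAffineTransform transform)
{
    using (var data = NSData.FromArray (stream.ToArray ()))
    using (var ciImage = CIImage.FromData (data))
    using (var transformed = ciImage.ImageByApplyingTransform (transform))
    using (var context = CIContext.FromOptions (null))
    // CreateCGImage rasterizes the CIImage, so the UIImage gets a real backing CGImage
    using (var cgImage = context.CreateCGImage (transformed, transformed.Extent))
    using (var uiImage = UIImage.FromImage (cgImage))
    {
        return uiImage.AsPNG ().AsStream ();
    }
}
```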

Related

How to get PNG image data from a Canvas in Flutter?

I have a Flutter widget that accepts user input and draws to a canvas using a custom painter:
class SPPoint {
  final Point point;
  final double size;
  SPPoint(this.point, this.size);
  String toString() => "SPPoint $point $size";
}

class SignaturePadPainter extends CustomPainter {
  final List<SPPoint> allPoints;
  final SignaturePadOptions opts;
  Canvas _lastCanvas;
  Size _lastSize;

  SignaturePadPainter(this.allPoints, this.opts);

  ui.Image getPng() {
    if (_lastCanvas == null) {
      return null;
    }
    if (_lastSize == null) {
      return null;
    }
    var recorder = new ui.PictureRecorder();
    var origin = new Offset(0.0, 0.0);
    var paintBounds = new Rect.fromPoints(_lastSize.topLeft(origin), _lastSize.bottomRight(origin));
    var canvas = new Canvas(recorder, paintBounds);
    paint(canvas, _lastSize);
    var picture = recorder.endRecording();
    return picture.toImage(_lastSize.width.round(), _lastSize.height.round());
  }

  paint(Canvas canvas, Size size) {
    _lastCanvas = canvas;
    _lastSize = size;
    for (var point in this.allPoints) {
      var paint = new Paint()..color = colorFromColorString(opts.penColor);
      paint.strokeWidth = 5.0;
      var path = new Path();
      var offset = new Offset(point.point.x, point.point.y);
      path.moveTo(point.point.x, point.point.y);
      var pointSize = point.size;
      if (pointSize == null || pointSize.isNaN) {
        pointSize = opts.dotSize;
      }
      canvas.drawCircle(offset, pointSize, paint);
      paint.style = PaintingStyle.stroke;
      canvas.drawPath(path, paint);
    }
  }

  bool shouldRepaint(SignaturePadPainter oldDelegate) {
    return true;
  }
}
Currently getPng() returns a dart:ui Image object, but I can't tell how to get the bytes from the image data (if this is even possible).
Here's the solution I came up with, now that toByteData() was added to the SDK:
var picture = recorder.endRecording();
var image = picture.toImage(lastSize.width.round(), lastSize.height.round());
ByteData data = await image.toByteData(format: ui.ImageByteFormat.png);
return data.buffer.asUint8List();
This solution is working and has now been published to pub as part of the signature_pad_flutter package: https://github.com/apptreesoftware/signature-pad-dart/blob/master/signature_pad_flutter/lib/src/painter.dart#L17

CIDetector.RectDetector bounds to view bounds coordinates

So, I am trying to display a rectangle around a detected document (A4).
I am using an AVCaptureSession for the feed along with the AVCaptureStillImageOutput Output.
NSError Error = null;
Session = new AVCaptureSession();
AVCaptureDevice Device = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
AVCaptureDeviceInput DeviceInput = AVCaptureDeviceInput.FromDevice(Device, out Error);
Session.AddInput(DeviceInput);
AVCaptureStillImageOutput CaptureOutput = new AVCaptureStillImageOutput();
CaptureOutput.OutputSettings = new NSDictionary(AVVideo.CodecKey, AVVideo.CodecJPEG) ;
Session.AddOutput(CaptureOutput);
I have a timer that takes the output and passes that to my handler
NSTimer.CreateRepeatingScheduledTimer (TimeSpan.Parse ("00:00:02"), delegate
{
    CaptureImageWithMetadata (CaptureOutput, CaptureOutput.Connections [0]);
});
I also have an AVCaptureVideoPreviewLayer with its bounds set to full screen (iPad Mini, portrait):
PreviewLayer = new AVCaptureVideoPreviewLayer(Session);
PreviewLayer.Frame = this.View.Frame;
PreviewLayer.VideoGravity = AVLayerVideoGravity.ResizeAspectFill;
this.View.Layer.AddSublayer(PreviewLayer);
PreviewLayer.ZPosition = (PreviewLayer.ZPosition - 1);
Below is the handler
private async void CaptureImageWithMetadata (AVCaptureStillImageOutput output, AVCaptureConnection connection)
{
    var sampleBuffer = await output.CaptureStillImageTaskAsync (connection);
    var imageData = AVCaptureStillImageOutput.JpegStillToNSData (sampleBuffer);
    var image = CIImage.FromData (imageData);
    var metadata = image.Properties.Dictionary.MutableCopy () as NSMutableDictionary;
    CIContext CT = CIContext.FromOptions (null);
    CIDetectorOptions OP = new CIDetectorOptions ();
    OP.Accuracy = FaceDetectorAccuracy.High;
    OP.AspectRatio = 1.41f;
    CIDetector CI = CIDetector.CreateRectangleDetector (CT, OP);
    CIFeature[] HH = CI.FeaturesInImage (image, CIImageOrientation.BottomRight);
    CGAffineTransform Transfer = CGAffineTransform.MakeScale (1, -1);
    Transfer = CGAffineTransform.Translate (Transfer, 0, -this.View.Bounds.Size.Height);
    if (HH.Length > 0)
    {
        CGRect RECT = CGAffineTransform.CGRectApplyAffineTransform (HH [0].Bounds, Transfer);
        Console.WriteLine ("start");
        Console.WriteLine ("IMAGE : " + HH [0].Bounds.ToString ());
        Console.WriteLine ("SCREEN :" + RECT.ToString ());
        Console.WriteLine ("end");
        BB.Frame = RECT;
        BB.Hidden = false;
    }
}
However, despite following a guide that suggested I need to convert the coordinates, my highlighter (green) does not surround the document, and I can't figure out why.
I am using CIImageOrientation.BottomRight just as a test, but no matter what I put here, the result is always the same. See images.
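One likely source of the mismatch: CIDetector reports feature bounds in the captured image's pixel coordinate space with a bottom-left origin, while a UIView frame uses the preview layer's point coordinates with a top-left origin, so the rect needs to be scaled between the two sizes as well as flipped, not just flipped against the view height. A hedged sketch (the helper name is my own; it also ignores the extra cropping that ResizeAspectFill introduces):

```csharp
CGRect ImageRectToViewRect (CGRect imageRect, CGSize imageSize, CGSize viewSize)
{
    nfloat scaleX = viewSize.Width / imageSize.Width;
    nfloat scaleY = viewSize.Height / imageSize.Height;

    // Scale into view coordinates and flip the Y axis in one transform.
    // The translation is applied to the point first, then the scale, so a
    // point (x, y) maps to (x * scaleX, (imageHeight - y) * scaleY).
    var transform = CGAffineTransform.MakeScale (scaleX, -scaleY);
    transform = CGAffineTransform.Translate (transform, 0, -imageSize.Height);

    return CGAffineTransform.CGRectApplyAffineTransform (imageRect, transform);
}
```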

Artifacts in UIImage when loaded in background

I'm trying to load images for GMGridView cells. The issue is that the image loading process is not that fast, so I decided to go multithreaded. I created a good all-in-one class for background image loading. Here are its contents:
public void LoadImageIntoView (string imageURL, UIImageView imageView, int index)
{
    rwl.AcquireReaderLock (Timeout.Infinite);
    if (disposed)
        return;
    UIImage image;
    lock (locker) {
        cache.TryGetValue (imageURL, out image);
    }
    if (image != null)
        imageView.Image = image;
    else {
        new Thread (() => {
            if (MediaLoader.IsFileCached (imageURL))
                LoadImage (index, imageURL);
            else {
                MediaLoader loader = new MediaLoader ();
                loader.OnCompleteDownload += (object sender, OnCompleteDownloadEventArgs e) => {
                    if (e.Success)
                        LoadImage (index, e.FileURL);
                };
                loader.GetFileAsync (imageURL, false, DownloadPriority.Low);
            }
        }).Start ();
    }
    rwl.ReleaseReaderLock ();
}

private void LoadImage (int index, string imageURL)
{
    rwl.AcquireReaderLock (Timeout.Infinite);
    if (disposed)
        return;
    string pathToFile = MediaLoader.GetCachedFilePath (imageURL);
    // Load the image
    UIImage uiImage = UIImage.FromFile (pathToFile);
    if (uiImage != null) {
        lock (locker) {
            cache [imageURL] = uiImage;
        }
        BeginInvokeOnMainThread (() => InsertImage (false, index, uiImage));
    }
    rwl.ReleaseReaderLock ();
}

private void InsertImage (bool secondTime, int index, UIImage image)
{
    rwl.AcquireReaderLock (Timeout.Infinite);
    if (disposed)
        return;
    UIImageView imageView = FireGetImageViewCallback (index);
    if (imageView != null) {
        CATransition transition = CATransition.CreateAnimation ();
        transition.Duration = 0.3f;
        transition.TimingFunction = CAMediaTimingFunction.FromName (CAMediaTimingFunction.EaseInEaseOut);
        transition.Type = CATransition.TransitionFade;
        imageView.Layer.AddAnimation (transition, null);
        imageView.Image = image;
    } else {
        if (!secondTime) {
            new Thread (() => {
                Thread.Sleep (150);
                BeginInvokeOnMainThread (() => InsertImage (true, index, image));
            }).Start ();
        }
    }
    rwl.ReleaseReaderLock ();
}
I have also tried this code for image loading inside the LoadImage method:
UIImage loadedImage = UIImage.FromFile (pathToFile);
CGImage image = loadedImage.CGImage;
if (image != null) {
    CGColorSpace colorSpace = CGColorSpace.CreateDeviceRGB ();
    // Create a bitmap context from the image's specifications
    CGBitmapContext bitmapContext = new CGBitmapContext (null, image.Width, image.Height, image.BitsPerComponent, image.Width * 4, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
    bitmapContext.ClearRect (new System.Drawing.RectangleF (0, 0, image.Width, image.Height));
    // Draw the image into the bitmap context and retrieve the
    // decompressed image
    bitmapContext.DrawImage (new System.Drawing.RectangleF (0, 0, image.Width, image.Height), image);
    CGImage decompressedImage = bitmapContext.ToImage ();
    // Create a UIImage
    uiImage = new UIImage (decompressedImage);
    // Release everything
    colorSpace.Dispose ();
    decompressedImage.Dispose ();
    bitmapContext.Dispose ();
    image.Dispose ();
}
When I build and run my app, it appears that from time to time the images returned by my ImageLoader have artifacts in them. Sometimes it's white rectangles at random locations, sometimes unexpectedly colored pixels. I'd be very happy to hear a solution to this problem, as the app is about to go to the App Store and this issue is a big headache.
P.S. FireGetImageViewCallback returns a UIImageView via a delegate which I set in the class's constructor. cache is a Dictionary, locker is just an object, and rwl is a ReaderWriterLock instance.
The problem was solved by using GCD instead of the usual C# threading. It puts the tasks into a queue that runs them one after another rather than simultaneously. This worked perfectly except that when you scroll down a huge list of images and all of them go into the queue, it takes a long time for the currently visible rows to be filled with images. That's why I applied some optimization: when my ImageLoader's LoadImageIntoView method is called, it is also given an index, so ImageLoader knows which row was acquired last. In the task I check whether the cell whose image is about to be downloaded is currently visible, and if not, I simply return, allowing the next task to execute. Here's some code that illustrates this approach:
private void LoadImage (int index, string imageURL)
{
    DispatchQueue.GetGlobalQueue (DispatchQueuePriority.Low).DispatchAsync (() => {
        rwl.AcquireReaderLock (Timeout.Infinite);
        if (disposed)
            return;
        bool shouldDownload = false;
        lastAcquiredIndexRwl.AcquireReaderLock (Timeout.Infinite);
        shouldDownload = index <= (lastAcquiredIndex + visibleRange) && index >= (lastAcquiredIndex - visibleRange);
        lastAcquiredIndexRwl.ReleaseReaderLock ();
        if (shouldDownload) {
            string pathToFile = MediaLoader.GetCachedFilePath (imageURL);
            UIImage uiImage = null;
            // Load the image
            CGDataProvider dataProvider = new CGDataProvider (pathToFile);
            CGImage image = null;
            if (pathToFile.IndexOf (".png") != -1)
                image = CGImage.FromPNG (dataProvider, null, false, CGColorRenderingIntent.Default);
            else
                image = CGImage.FromJPEG (dataProvider, null, false, CGColorRenderingIntent.Default);
            if (image != null) {
                CGColorSpace colorSpace = CGColorSpace.CreateDeviceRGB ();
                // Create a bitmap context from the image's specifications
                CGBitmapContext bitmapContext = new CGBitmapContext (null, image.Width, image.Height, image.BitsPerComponent, image.Width * 4, colorSpace, CGImageAlphaInfo.PremultipliedFirst | (CGImageAlphaInfo)CGBitmapFlags.ByteOrder32Little);
                colorSpace.Dispose ();
                bitmapContext.ClearRect (new System.Drawing.RectangleF (0, 0, image.Width, image.Height));
                // Draw the image into the bitmap context and retrieve the
                // decompressed image
                bitmapContext.DrawImage (new System.Drawing.RectangleF (0, 0, image.Width, image.Height), image);
                image.Dispose ();
                CGImage decompressedImage = bitmapContext.ToImage ();
                bitmapContext.Dispose ();
                uiImage = new UIImage (decompressedImage);
                decompressedImage.Dispose ();
            }
            if (uiImage != null) {
                lock (locker) {
                    cache [imageURL] = uiImage;
                }
                DispatchQueue.MainQueue.DispatchAsync (() => InsertImage (false, index, uiImage));
            }
        }
        rwl.ReleaseReaderLock ();
    });
}

Converting a saved photo into a WritableBitmap in WinRT

I have taken a photo and saved it, but I need to do image processing on it. I tried using the following code, but WriteableBitmap does not accept a bitmap; it needs a stream.
var writeableBitmap = new WriteableBitmap(bitmap);
Here is the code:
CameraCaptureUI cam = new CameraCaptureUI();
var capturedImage = await cam.CaptureFileAsync(CameraCaptureUIMode.Photo);
if (capturedImage != null)
{
    var img = new BitmapImage();
    img.SetSource(await capturedImage.OpenReadAsync());
}
This should do the trick - you need to use OpenAsync instead.
var dialog = new CameraCaptureUI();
var file = await dialog.CaptureFileAsync(CameraCaptureUIMode.Photo);
if (file != null)
{
    var stream = await file.OpenAsync(FileAccessMode.Read);
    var img = new BitmapImage();
    img.SetSource(stream);
    AccountPictureImage.Source = img;
}

Save UIImage to personal folder and then load it via UIImage.FromFile

I've built a picture selector via UIImagePickerController. Because of the memory issues this one has, I want to save the selected image to disk and, when needed, load it from the file path. But I can't manage to get it working.
If I bind the original image directly, it is displayed with no problems.
File.Exists in the code below returns true, but image in the last line is null when inspected in the debugger. Thank you very much for your help!
NSData data = originalImage.AsPNG();
string path = Environment.GetFolderPath (Environment.SpecialFolder.Personal);
string pathTempImage = Path.Combine(path, "tempImage.png");
byte[] tempImage = new byte[data.Length];
File.WriteAllBytes(pathTempImage, tempImage);
if (File.Exists(pathTempImage))
{
    int i = 0;
}
UIImage image = UIImage.FromFile(pathTempImage);
Update
This is the code that works for me:
void HandleFinishedPickingMedia (object sender, UIImagePickerMediaPickedEventArgs e)
{
    _view.DismissModalViewControllerAnimated (true);
    BackgroundWorker bw = new BackgroundWorker ();
    bw.DoWork += delegate (object bwsender, DoWorkEventArgs e2) {
        // determine what was selected, video or image
        bool isImage = false;
        switch (e.Info [UIImagePickerController.MediaType].ToString ()) {
        case "public.image":
            Console.WriteLine ("Image selected");
            isImage = true;
            break;
        case "public.video":
            Console.WriteLine ("Video selected");
            break;
        }
        // get common info (shared between images and video)
        NSUrl referenceURL = e.Info [new NSString ("UIImagePickerControllerReferenceUrl")] as NSUrl;
        if (referenceURL != null)
            Console.WriteLine ("Url:" + referenceURL.ToString ());
        // if it was an image, get the other image info
        if (isImage) {
            // get the original image
            originalImage = e.Info [UIImagePickerController.OriginalImage] as UIImage;
            if (originalImage != null) {
                NSData data = originalImage.AsPNG ();
                _picture = new byte[data.Length];
                ImageResizer resizer = new ImageResizer (originalImage);
                resizer.RatioResize (200, 200);
                string path = Environment.GetFolderPath (Environment.SpecialFolder.Personal);
                string pathTempImage = Path.Combine (path, "tempImage.png");
                string filePath = Path.Combine (path, "OriginalImage.png");
                NSData dataTempImage = resizer.ModifiedImage.AsPNG ();
                byte[] tempImage = new byte[dataTempImage.Length];
                System.Runtime.InteropServices.Marshal.Copy (dataTempImage.Bytes, tempImage, 0, Convert.ToInt32 (tempImage.Length));
                // OriginalImage
                File.WriteAllBytes (filePath, _picture);
                // TempImage
                File.WriteAllBytes (pathTempImage, tempImage);
                UIImage image = UIImage.FromFile (pathTempImage);
                _view.InvokeOnMainThread (delegate {
                    templateCell.BindDataToCell (appSelectPicture.Label, image);
                });
                _picture = null;
            }
        } else { // if it's a video
            // get video url
            NSUrl mediaURL = e.Info [UIImagePickerController.MediaURL] as NSUrl;
            if (mediaURL != null) {
                Console.WriteLine (mediaURL.ToString ());
            }
        }
        // dismiss the picker
    };
    bw.RunWorkerAsync ();
    bw.RunWorkerCompleted += HandleRunWorkerCompleted;
}
byte[] tempImage = new byte[data.Length];
File.WriteAllBytes(pathTempImage, tempImage);
You're not copying the image data into the array you allocated before saving it. That results in a file full of zero bytes, which is not a valid image.
Try using one of the NSData.Save overloads, like:
NSError error;
data.Save (pathTempImage, NSDataWritingOptions.FileProtectionNone, out error);
That will allow you to avoid allocating the byte[] array.
