Show PKCanvasView on macOS in SwiftUI

Is it possible to display a PKCanvasView drawing on macOS that was previously created on an iOS device (data transfer takes place with Core Data and CloudKit)?

You can initialize a new PKDrawing object from your drawing data and generate an NSImage from it:
import PencilKit

do {
    let pkDrawing = try PKDrawing(data: drawingData)
    let nsImage = pkDrawing.image(from: pkDrawing.bounds, scale: view.window?.backingScaleFactor ?? 1)
} catch {
    print(error)
}
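To show the result in SwiftUI on macOS you can wrap the generated NSImage in an Image view. A minimal sketch, assuming drawingData is the Data blob synced via Core Data / CloudKit:

import AppKit
import PencilKit
import SwiftUI

// Sketch only: renders stored PKDrawing data as a SwiftUI Image on macOS.
struct DrawingImageView: View {
    let drawingData: Data

    var body: some View {
        if let drawing = try? PKDrawing(data: drawingData) {
            // PKDrawing.image(from:scale:) returns an NSImage on macOS
            Image(nsImage: drawing.image(from: drawing.bounds,
                                         scale: NSScreen.main?.backingScaleFactor ?? 1))
                .resizable()
                .aspectRatio(contentMode: .fit)
        } else {
            Text("Could not decode drawing")
        }
    }
}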

Related

Fixing Error EXC_RESOURCE RESOURCE_TYPE_MEMORY limit=200mb in File Provider Extension

In my File Provider Extension I have defined a custom action that displays a ViewController with a collectionView getting data from DiffableDataSource.
Each cell is configured to adjust export settings for a PDF file. In the process of preparing the PDF files to be exported, I convert them to images using a renderer. The code I use is this:
if let page = document.page(at: pageIndex) {
    let pageRect = page.getBoxRect(.mediaBox)
    let renderer = UIGraphicsImageRenderer(size: pageRect.size)
    var img = renderer.image { context in
        // Fill with white, flip the coordinate system, then draw the PDF page
        UIColor.white.set()
        context.fill(pageRect)
        context.cgContext.translateBy(x: 0.0, y: pageRect.size.height)
        context.cgContext.scaleBy(x: 1.0, y: -1.0)
        context.cgContext.drawPDFPage(page)
    }
    img = MUtilities.shared.imageRotatedByDegrees(oldImage: img, deg: CGFloat(page.rotationAngle))
    let image = img.jpegData(compressionQuality: quality)
    do {
        try FileManager.default.createDirectory(at: imagePath.deletingLastPathComponent(), withIntermediateDirectories: true)
        try image?.write(to: imagePath)
    } catch {
        fatalError("Unable to write jpg to file, \(error.localizedDescription)")
    }
}
} // closes the enclosing loop over pages (not shown above)
The code works fine in the Simulator and displays the collectionView without any issue. When I test the extension on my device with iOS 16.0 I get the error:
Thread 1: EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit=200 MB, unused=0x0)
on the line:
var img = renderer.image { context in
How can I fix this error?
The error occurs within the context of the File Provider UI custom action (FPUIActionExtensionViewController), so I investigated all the code running in that context for memory leaks or excessive memory usage.
I found a call to the Realm database that fetched all objects of a certain type outside the prepare function. I moved the call into the prepare function and limited it with a .filter on the returned objects. That fixed the problem for me.
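A minimal sketch of that change, with hypothetical Realm model and property names (ExportItem, documentID), would look roughly like this:

import FileProvider
import FileProviderUI
import RealmSwift

// Hypothetical Realm model, used only for illustration
class ExportItem: Object {
    @Persisted var documentID: String
}

class ExportActionViewController: FPUIActionExtensionViewController {
    override func prepare(forAction actionIdentifier: String,
                          itemIdentifiers: [NSFileProviderItemIdentifier]) {
        // Query Realm here rather than at class scope, and narrow the result
        // set so the extension stays under the 200 MB memory limit.
        guard let realm = try? Realm() else { return }
        let ids = itemIdentifiers.map { $0.rawValue }
        let items = realm.objects(ExportItem.self)
            .filter("documentID IN %@", ids)
        // ... build the diffable data source snapshot from `items` ...
    }
}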

Mock Core Data object in SwiftUI Preview

Xcode's preview canvas keeps on crashing with no error message when I try to pass in a preview Core Data object like so:
import SwiftUI
import CoreData

struct BookView: View {
    let book: Book

    var body: some View {
        Text("Hello, World!")
    }
}

// ^^^ This stuff is fine ^^^
// vvv This stuff is not vvv

struct BookView_Previews: PreviewProvider {
    static let moc = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)

    static var previews: some View {
        let book = Book(context: moc)
        book.title = "Test book"
        book.author = "Test author"
        book.genre = "Fantasy"
        book.rating = 4
        book.review = "This was a great book; I really enjoyed it."

        return NavigationView {
            BookView(book: book)
        }
    }
}
I'm following a Hacking with Swift tutorial on Core Data and SwiftUI and am at this step.
This appears to be the standard way to add preview objects into the SwiftUI canvas, but I'm unable to get it to work. FYI the app runs fine in the simulator, I'm just trying to get it to also work in the preview canvas. I'm using Xcode 13.2.1 on macOS 12.
Thank you!
Instead of creating an NSManagedObjectContext, use
static let context = PersistenceController.preview.container.viewContext
That variable is provided in the standard Xcode project with Core Data.
Also, if you have been using the real store for previews, it might be corrupted somehow, so you might have to destroy it.
Add the code below
do {
    try container.persistentStoreCoordinator.destroyPersistentStore(
        at: container.persistentStoreDescriptions.first!.url!,
        type: .sqlite,
        options: nil
    )
} catch {
    print(error)
}
Right under
container = NSPersistentCloudKitContainer(name: "YourAppName")
Before you load the store.
This destroys the store, and it then gets recreated when you call loadPersistentStores. Be sure to remove that piece of code after you clear the preview device so you don't accidentally destroy a store you don't mean to destroy.
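For reference, this is roughly the preview controller the Xcode Core Data template generates (a sketch; names can differ slightly in your project):

import CoreData

struct PersistenceController {
    static let shared = PersistenceController()

    // In-memory store for SwiftUI previews, so the real database is never touched
    static let preview = PersistenceController(inMemory: true)

    let container: NSPersistentCloudKitContainer

    init(inMemory: Bool = false) {
        container = NSPersistentCloudKitContainer(name: "YourAppName")
        if inMemory {
            container.persistentStoreDescriptions.first!.url = URL(fileURLWithPath: "/dev/null")
        }
        container.loadPersistentStores { _, error in
            if let error = error {
                fatalError("Unresolved error \(error)")
            }
        }
    }
}

With that in place, the preview provider can use static let moc = PersistenceController.preview.container.viewContext and build the Book object as before.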

CoreML Output labels NSCFString - Labels not showing correctly

I am working on an iOS app where I need to use a CoreML model to perform image classification.
I used Google Cloud Platform AutoML Vision to train the model. Google provides a CoreML version of the model and I downloaded it to use in my app.
I followed Google's tutorial and everything appeared to be going smoothly. However, when it was time to start using the model, I got a very strange prediction: I got the confidence of the prediction, and then a very strange string that I didn't recognize.
<VNClassificationObservation: 0x600002091d40> A7DBD70C-541C-4112-84A4-C6B4ED2EB7E2 requestRevision=1 confidence=0.332127 "CICAgICAwPmveRIJQWdsYWlzX2lv"
The string I am referring to is CICAgICAwPmveRIJQWdsYWlzX2lv.
After some research and debugging I found out that this is an NSCFString.
https://developer.apple.com/documentation/foundation/1395135-nsclassfromstring
Apparently this is part of the Foundation API. Does anyone have any experience with this?
With the CoreML file also comes a dict.txt file with the correct labels. Do I have to convert this string to the labels? How do I do that?
This is the code I have so far.
//
//  Classification.swift
//  Lepidoptera
//
//  Created by Tomás Mamede on 15/09/2020.
//  Copyright © 2020 Tomás Santiago. All rights reserved.
//

import Foundation
import SwiftUI
import Vision
import CoreML
import ImageIO

class Classification {

    private lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: AutoML().model)
            let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                if let classifications = request.results as? [VNClassificationObservation] {
                    print(classifications.first ?? "No classification!")
                }
            })
            request.imageCropAndScaleOption = .scaleFit
            return request
        } catch {
            fatalError("Error! Can't use Model.")
        }
    }()

    func classifyImage(receivedImage: UIImage) {
        let orientation = CGImagePropertyOrientation(rawValue: UInt32(receivedImage.imageOrientation.rawValue))

        if let image = CIImage(image: receivedImage) {
            DispatchQueue.global(qos: .userInitiated).async {
                let handler = VNImageRequestHandler(ciImage: image, orientation: orientation!)
                do {
                    try handler.perform([self.classificationRequest])
                } catch {
                    fatalError("Error classifying image!")
                }
            }
        }
    }
}
The labels are stored in your mlmodel file. If you open the mlmodel in the Xcode 12 model viewer, it will display what those labels are.
My guess is that instead of actual labels, your mlmodel file contains "CICAgICAwPmveRIJQWdsYWlzX2lv" and so on.
It looks like Google's AutoML does not put the correct class labels into the Core ML model.
You can make a dictionary in the app that maps "CICAgICAwPmveRIJQWdsYWlzX2lv" and so on to the real labels.
Or you can replace these labels inside the mlmodel file by editing it using coremltools. (My e-book Core ML Survival Guide has a chapter on how to replace the labels in the model.)
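A minimal sketch of the dictionary approach; the value below is a placeholder, and the real names come from the dict.txt file that ships with the model:

import Vision

// Sketch only: map the opaque identifiers the AutoML model emits to
// human-readable labels taken from dict.txt.
let labelMap: [String: String] = [
    "CICAgICAwPmveRIJQWdsYWlzX2lv": "<real label from dict.txt>"
    // ... one entry per encoded label ...
]

func readableLabel(for observation: VNClassificationObservation) -> String {
    return labelMap[observation.identifier] ?? observation.identifier
}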

Flutter camera image to swift UIimage

I'm developing a custom Flutter plugin where I send the Flutter camera image to Swift and create a UIImage, using the Flutter camera plugin (https://pub.dev/packages/camera).
For that, I send the camera image bytes using this method:
startImageStream((CameraImage img) {
  sendFrameBytes(bytesList: img.planes.map((plane) {
    return plane.bytes;
  }).toList());
});
Planes contains a single array containing the RGBA bytes of the image.
In the Swift code, I get the RGBA bytes as an NSArray and create a UIImage like this:
func detectFromFrame1(args: NSDictionary, result: FlutterResult) {
    let rgbaPlan = args["bytesList"] as! NSArray
    let rgbaTypedData = rgbaPlan[0] as! FlutterStandardTypedData
    let rgbaUint8 = [UInt8](rgbaTypedData.data)
    let data = NSData(bytes: rgbaUint8, length: rgbaUint8.count)
    let uiimage = UIImage(data: data as Data)
    print(uiimage)
}
The problem is that rgbaTypedData, rgbaUint8, and data are not empty, yet the created uiimage is always nil; I don't understand where the problem is.
I have the same issue. A workaround I use is to convert the image to JPEG in Flutter and pass those bytes to the iOS / native code.
The downside is that it's slow and not usable for real-time use.
Update:
Code Sample (Flutter & TFLite package)
Packages:
https://pub.dev/packages/image and
https://pub.dev/packages/tflite
CODE:
_cameraController.startImageStream((_availableCameraImage) {
  imglib.Image img = imglib.Image.fromBytes(
      _availableCameraImage.planes[0].width,
      _availableCameraImage.planes[0].height,
      _availableCameraImage.planes[0].bytes);
  Uint8List imgByte = imglib.encodeJpg(img);
  Tfliteswift.detectObjectOnBinary(binary: _availableCameraImage.planes[0].bytes);
});
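On the Swift side of that workaround, the JPEG bytes can be decoded directly into a UIImage. A sketch, where handleFrame and the "jpegBytes" argument key are assumed names, not part of the camera plugin:

import Flutter
import UIKit

// Sketch: decode JPEG bytes sent over the method channel into a UIImage.
func handleFrame(args: NSDictionary, result: FlutterResult) {
    guard let typedData = args["jpegBytes"] as? FlutterStandardTypedData,
          let image = UIImage(data: typedData.data) else {
        result(nil)
        return
    }
    // UIImage(data:) succeeds here because JPEG is a format it can parse,
    // unlike the raw RGBA plane bytes from the original attempt.
    print(image.size)
    result(nil)
}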

Take snapshot of PDF viewer on iOS12

Good afternoon,
I am trying to take a snapshot of a PDF, but I am facing some difficulties accessing the view's content on iOS 12.
In order to display the PDF's content, I've already used two different approaches:
UIDocumentInteractionController
Webview
Already on iOS 11 I couldn't take a snapshot of a UIDocumentInteractionController view, and the best answer I could find was this one: https://stackoverflow.com/a/13332623/2568889. In short, it says that the document viewer runs in an external process and in its own window, which the main app process doesn't have access to.
The WebView was the solution at that time, until iOS 12 came. While testing on real devices running iOS 12, I had the same issue of not being able to access the viewer's content while taking the snapshot. Inspecting the view hierarchy, it looks like there is a child view controller (PDFHostViewController) that renders the actual view.
Please take into account that this issue only happens for PDFs; regular webpages work fine!
Code used to take snapshot:
private extension UIView {
    func takeSnapshot() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.opaque = self.isOpaque
        let renderer = UIGraphicsImageRenderer(size: self.frame.size, format: format)
        return renderer.image { context in
            self.drawHierarchy(in: self.frame, afterScreenUpdates: true)
        }
    }
}
Note: I have also tried the native WKWebView.takeSnapshot(with:completionHandler:) method of the web view, but it only works for regular webpages, not PDFs.
Maybe it works with Apple's PDFKit. As far as I know, PDFView is a subclass of UIView.
import PDFKit

@IBOutlet var pdfView: PDFView!

let pdfDocument = PDFDocument(url: url)
pdfView.document = pdfDocument
And then use your snapshot extension as a PDFView extension.
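Putting both pieces together, a minimal sketch (assuming url points to a valid PDF and the takeSnapshot() extension from the question is visible here):

import PDFKit
import UIKit

// Sketch: render the PDF with PDFKit, then snapshot the PDFView itself.
func snapshotPDF(at url: URL, in pdfView: PDFView) -> UIImage? {
    guard let document = PDFDocument(url: url) else { return nil }
    pdfView.document = document
    pdfView.autoScales = true
    // PDFView is an ordinary UIView subclass, so drawHierarchy-based
    // snapshotting works, unlike the out-of-process document viewers.
    return pdfView.takeSnapshot()
}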
