I'm developing a QLThumbnailProvider extension to display thumbnails for my document type. My extension does not appear to be called: my thumbnails are not appearing, and the logging I've added never shows up in any log files.
I have a UIDocumentBrowserViewController-based app that defines a new document type and exports a UTI (com.latenightsw.Eureka.form). My app is able to browse, create and open documents, but the thumbnails are blank.
I've added a Thumbnail Extension target to my project. The code looks like this:
class ThumbnailProvider: QLThumbnailProvider {
    override func provideThumbnail(for request: QLFileThumbnailRequest, _ handler: @escaping (QLThumbnailReply?, Error?) -> Void) {
        // Third way: Set an image file URL.
        print("provideThumbnail: \(request)")
        handler(QLThumbnailReply(imageFileURL: Bundle.main.url(forResource: "EurekaForm", withExtension: "png")!), nil)
    }
}
I've confirmed that EurekaForm.png is part of the target and being copied to the extension's bundle (as well as the host app's bundle).
And I've confirmed that my UTI is declared:
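For reference, a declaration along these lines is what a Thumbnail Extension's Info.plist typically contains: the UTI goes in QLSupportedContentTypes under NSExtensionAttributes. This is a sketch from memory, so treat the exact values as assumptions to check against your own target:

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>QLSupportedContentTypes</key>
        <array>
            <string>com.latenightsw.Eureka.form</string>
        </array>
    </dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.quicklook.thumbnail</string>
    <key>NSExtensionPrincipalClass</key>
    <string>$(PRODUCT_MODULE_NAME).ThumbnailProvider</string>
</dict>
```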
Does anyone have any suggestions?
It appears that logging and breakpoints sometimes do not work inside app extensions; even fatalErrors can occur silently.
In my project I could not get the initialiser QLThumbnailReply(imageFileURL:) to work. However, the other initialisers seem to work better.
Drawing the image into a context
When using the context initialiser you have to use a context size that lies between request.minimumSize and request.maximumSize.
Below I've written some code which takes an image and draws it into the context while keeping the above conditions.
override func provideThumbnail(for request: QLFileThumbnailRequest, _ handler: @escaping (QLThumbnailReply?, Error?) -> Void) {
    let imageURL = // ... put your own code here
    let image = UIImage(contentsOfFile: imageURL.path)!

    // size calculations
    let maximumSize = request.maximumSize
    let imageSize = image.size

    // calculate `newImageSize` and `contextSize` such that the image fits perfectly
    // and respects the constraints
    var newImageSize = maximumSize
    var contextSize = maximumSize
    let aspectRatio = imageSize.height / imageSize.width
    let proposedHeight = aspectRatio * maximumSize.width

    if proposedHeight <= maximumSize.height {
        newImageSize.height = proposedHeight
        contextSize.height = max(proposedHeight.rounded(.down), request.minimumSize.height)
    } else {
        newImageSize.width = maximumSize.height / aspectRatio
        contextSize.width = max(newImageSize.width.rounded(.down), request.minimumSize.width)
    }

    handler(QLThumbnailReply(contextSize: contextSize, currentContextDrawing: { () -> Bool in
        // Draw the thumbnail here.

        // draw the image in the upper left corner
        // image.draw(in: CGRect(origin: .zero, size: newImageSize))

        // draw the image centered
        image.draw(in: CGRect(x: contextSize.width/2 - newImageSize.width/2,
                              y: contextSize.height/2 - newImageSize.height/2,
                              width: newImageSize.width,
                              height: newImageSize.height))

        // Return true if the thumbnail was successfully drawn inside this block.
        return true
    }), nil)
}
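To sanity-check the sizing math, the same logic can be pulled out into a standalone helper and run against concrete numbers. The function name and tuple return are my own for this demo; they are not part of the QuickLookThumbnailing API:

```swift
import Foundation

// Mirrors the sizing logic above: scale the image to fit within
// maximumSize, then clamp the context to at least minimumSize.
func thumbnailSizes(imageSize: CGSize,
                    minimumSize: CGSize,
                    maximumSize: CGSize) -> (image: CGSize, context: CGSize) {
    var newImageSize = maximumSize
    var contextSize = maximumSize
    let aspectRatio = imageSize.height / imageSize.width
    let proposedHeight = aspectRatio * maximumSize.width

    if proposedHeight <= maximumSize.height {
        newImageSize.height = proposedHeight
        contextSize.height = max(proposedHeight.rounded(.down), minimumSize.height)
    } else {
        newImageSize.width = maximumSize.height / aspectRatio
        contextSize.width = max(newImageSize.width.rounded(.down), minimumSize.width)
    }
    return (newImageSize, contextSize)
}

// A 2:1 landscape image fitting into a 100x100 request:
let sizes = thumbnailSizes(imageSize: CGSize(width: 200, height: 100),
                           minimumSize: CGSize(width: 40, height: 40),
                           maximumSize: CGSize(width: 100, height: 100))
// sizes.image == (100, 50); sizes.context == (100, 50)
```

Note how a very tall image ends up with a context wider than the scaled image (clamped by minimumSize), which is why the drawing block centers the image rather than filling the context.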
I've gotten the Thumbnail Extension rendering, but as far as I can tell it only shows its renders in the Files app (other apps fall back to the app icon).
It is important to note the debugging caveat with extensions: print statements and breakpoints may not fire even though the extension is running.
I see that you have QLSupportedContentTypes set with your UTI, but you may also want to change your UTI to something new; that is when it started working for me. After some testing I suspect the UTI registration can get corrupted: while it was supposedly working, a breakpoint I had set was never hit.
In my case, the extension didn't work in the simulator (Xcode 11.1). Everything works as expected on a real device (iOS 13.1.2).
Related
In my File Provider extension I have defined a custom action that displays a view controller with a collection view backed by a diffable data source.
Each cell is configured to adjust export settings for a PDF file. While preparing the PDF files for export, I convert them to images using a renderer. This is the code I use:
if let page = document.page(at: pageIndex) {
    let pageRect = page.getBoxRect(.mediaBox)
    let renderer = UIGraphicsImageRenderer(size: pageRect.size)
    var img = renderer.image { context in
        UIColor.white.set()
        context.fill(pageRect)
        context.cgContext.translateBy(x: 0.0, y: pageRect.size.height)
        context.cgContext.scaleBy(x: 1.0, y: -1.0)
        context.cgContext.drawPDFPage(page)
    }
    img = MUtilities.shared.imageRotatedByDegrees(oldImage: img, deg: CGFloat(page.rotationAngle))
    let image = img.jpegData(compressionQuality: quality)
    do {
        try FileManager.default.createDirectory(at: imagePath.deletingLastPathComponent(), withIntermediateDirectories: true)
        try image?.write(to: imagePath)
    } catch {
        fatalError("Unable to write jpg to file, \(error.localizedDescription)")
    }
}
The code works fine in the Simulator and displays the collectionView without any issue. When I test the extension on my device with iOS 16.0 I get the error:
Thread 1: EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit=200 MB, unused=0x0)
on the line:
var img = renderer.image { context in
How can I fix this error?
The error occurs within the context of the File Provider UI custom action (FPUIActionExtensionViewController). This led me to investigate all the code running in that context for memory leaks and excessive memory usage.
I found a call to the Realm database fetching all objects of a certain type outside the prepare function. I moved the call into prepare and limited it with a .filter on the returned results. That fixed the problem for me.
I have standard iOS app, with a standard app icon contained in Assets.
I'd like to display the app icon within the app (using SwiftUI). Note that I am not asking how to set the app icon, or change the icon dynamically. I just want to show the app icon within the app's own Settings view.
It would appear the App Icon asset should just be like any other, and I could include it using the following (note there is no space between App and Icon in the default icon naming),
Image("AppIcon")
I've also tried experimenting with,
Image("icon_60pt@3x.png") // Pick out a specific icon by filename
Image("icon_60pt@3x") // Maybe it assumes it's a .png
Image("icon_60pt") // Maybe it auto picks the most appropriate resolution, like UIKit
...but none of these work.
How do I include the app's own icon within the app, without duplicating it as a separate Image Set (which I have tried, and which does work)?
Thanks.
The following works if the app icon is correctly set for the device in use (i.e. iPhone icons for iPhone, etc.).
Note: the sizes of the app icons must match exactly!
Tested with Xcode 11.4
Image(uiImage: UIImage(named: "AppIcon") ?? UIImage())
This works:
extension Bundle {
    var iconFileName: String? {
        guard let icons = infoDictionary?["CFBundleIcons"] as? [String: Any],
              let primaryIcon = icons["CFBundlePrimaryIcon"] as? [String: Any],
              let iconFiles = primaryIcon["CFBundleIconFiles"] as? [String],
              let iconFileName = iconFiles.last
        else { return nil }
        return iconFileName
    }
}

struct AppIcon: View {
    var body: some View {
        Bundle.main.iconFileName
            .flatMap { UIImage(named: $0) }
            .map { Image(uiImage: $0) }
    }
}
You can then use this in any view as just:
AppIcon()
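What makes the Bundle extension work is just a chain of conditional casts through the Info.plist dictionary. The same chain can be exercised against a plain dictionary shaped like a typical Info.plist; the icon names below are made up for the demo:

```swift
import Foundation

// A stand-in for Bundle.main.infoDictionary (values are hypothetical).
let infoDictionary: [String: Any] = [
    "CFBundleIcons": [
        "CFBundlePrimaryIcon": [
            "CFBundleIconFiles": ["AppIcon20x20", "AppIcon29x29", "AppIcon60x60"]
        ] as [String: Any]
    ] as [String: Any]
]

// The same guard-let chain as the Bundle extension above.
func iconFileName(in info: [String: Any]) -> String? {
    guard let icons = info["CFBundleIcons"] as? [String: Any],
          let primaryIcon = icons["CFBundlePrimaryIcon"] as? [String: Any],
          let iconFiles = primaryIcon["CFBundleIconFiles"] as? [String]
    else { return nil }
    return iconFiles.last   // the last entry is usually the largest variant
}
```

If any key is missing or has an unexpected type, the chain collapses to nil instead of crashing, which is why the SwiftUI view can map over the result safely.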
Good afternoon,
I am trying to take a snapshot of a PDF, but I am facing difficulties accessing the view's content on iOS 12.
In order to display the PDF's content, I've used two different approaches:
UIDocumentInteractionController
Webview
On iOS 11 I already couldn't take a snapshot of a UIDocumentInteractionController view, and the best answer I could find was this one: https://stackoverflow.com/a/13332623/2568889. In short, it says that the document viewer runs in an external process and in its own window, which the main app process doesn't have access to.
The WebView was the solution at that time, until iOS 12 came along. While testing on real devices running iOS 12, I hit the same issue of not being able to access the viewer's content when taking the snapshot. Inspecting the view hierarchy, it looks like there is a child view controller (PDFHostViewController) doing the actual rendering.
Please take into account that this issue only happens for PDFs; regular webpages work fine!
Code used to take snapshot:
private extension UIView {
    func takeSnapshot() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.opaque = self.isOpaque
        let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: format)
        return renderer.image { context in
            // draw in the view's own coordinate space (bounds, not frame)
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        }
    }
}
Note: I have also tried the web view's native takeSnapshot(with:completionHandler:), but it too only works for regular webpages, not PDFs.
It might work with Apple's PDFKit; as far as I know, PDFView is a subclass of UIView.
import PDFKit
@IBOutlet var pdfView: PDFView!
let pdfDocument = PDFDocument(url: url)
pdfView.document = pdfDocument
And then use your takeSnapshot extension as a PDFView extension.
I'm working with the Photos framework, specifically I'd like to keep track of the current camera roll status, thus updating it every time assets are added, deleted or modified (mainly when a picture is edited by the user - e.g a filter is added, image is cropped).
My first implementation would look something like the following:
private var lastAssetFetchResult: PHFetchResult<PHAsset>?

func photoLibraryDidChange(_ changeInstance: PHChange) {
    guard let fetchResult = lastAssetFetchResult,
          let details = changeInstance.changeDetails(for: fetchResult) else { return }
    let modified = details.changedObjects
    let removed = details.removedObjects
    let added = details.insertedObjects
    // update fetch result
    lastAssetFetchResult = details.fetchResultAfterChanges
    // do stuff with modified, removed, added
}
However, I soon found out that details.changedObjects would not contain only the assets that have been modified by the user, so I moved to the following implementation:
let modified = modifiedAssets(changeInstance: changeInstance)
with:
func modifiedAssets(changeInstance: PHChange) -> [PHAsset] {
    var modified: [PHAsset] = []
    lastAssetFetchResult?.enumerateObjects { (obj, _, _) in
        if let detail = changeInstance.changeDetails(for: obj),
           detail.assetContentChanged,
           let updatedObj = detail.objectAfterChanges {
            modified.append(updatedObj)
        }
    }
    return modified
}
So I'm relying on the PHObjectChangeDetails.assetContentChanged property which, as the documentation states, indicates whether the asset's photo or video content has changed.
This brought the results closer to the ones I was expecting, but I still don't entirely understand its behavior.
On some devices (e.g. iPad Mini 3) I get the expected result (assetContentChanged = true) in all the cases that I tested, whereas on others (e.g. iPhone 6s Plus, iPhone 7) it's hardly ever matching my expectation (assetContentChanged is false even for assets that I cropped or added filters to).
All the devices share the latest iOS 11.2 version.
Am I getting anything wrong?
Do you think I could achieve my goal some other way?
Thank you in advance.
Alright, I am not familiar with structs or the problem I am dealing with in Swift, but what I need to do is create an iMessage in my iMessage app extension with a sticker in it, meaning the image part of the iMessage is set to the sticker.
I have pored over Apple's docs and https://www.captechconsulting.com/blogs/ios-10-imessages-sdk-creating-an-imessages-extension but I do not understand how to do this or really how structs work. I read up on structs but that has not helped me accomplishing what Apple does in their sample code (downloadable at Apple)
What Apple does is first compose a message, which I understood, taking their struct as a parameter, but I take a sticker instead:
guard let conversation = activeConversation else { fatalError("Expected a conversation") }

// Create a new message with the same session as any currently selected message.
let message = composeMessage(with: MSSticker, caption: "sup", session: conversation.selectedMessage?.session)

// Add the message to the conversation.
conversation.insert(message) { error in
    if let error = error {
        print(error)
    }
}
They then do this (this is directly from sample code) to compose the message:
fileprivate func composeMessage(with iceCream: IceCream, caption: String, session: MSSession? = nil) -> MSMessage {
    var components = URLComponents()
    components.queryItems = iceCream.queryItems

    let layout = MSMessageTemplateLayout()
    layout.image = iceCream.renderSticker(opaque: true)
    layout.caption = caption

    let message = MSMessage(session: session ?? MSSession())
    message.url = components.url!
    message.layout = layout

    return message
}
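As an aside, the sample's composeMessage carries the message state through URLComponents.queryItems, and that round trip can be seen in isolation. The scheme, host, and key names below are invented for the demo; the real sample only sets queryItems:

```swift
import Foundation

// Encode some state as query items, the way composeMessage does.
var components = URLComponents()
components.scheme = "https"        // added so the example forms an absolute URL
components.host = "example.com"
components.queryItems = [URLQueryItem(name: "flavor", value: "vanilla"),
                         URLQueryItem(name: "topping", value: "sprinkles")]
let url = components.url!

// Decode it back, as the receiving extension would when it reads message.url.
let decoded = URLComponents(url: url, resolvingAgainstBaseURL: false)?
    .queryItems?
    .reduce(into: [String: String]()) { $0[$1.name] = $1.value ?? "" }
// decoded == ["flavor": "vanilla", "topping": "sprinkles"]
```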
Basically this line is what Im having the problem with as I need to set my sticker as the image:
layout.image = iceCream.renderSticker(opaque: true)
Apple does a whole complicated thing in renderSticker that I don't understand to pull the image part out of their stickers. I have tried their way, but I think this is better:
let img = UIImage(contentsOfURL: square.imageFileURL)
layout.image = img
layout.image needs a UIImage, and I can get the imageFileURL from the sticker; I just can't get it into a UIImage. I get an error that it does not match available overloads.
What can I do here? How can I insert the image from my sticker into a message? How can I get an image from its imageFileURL?
I'm not sure what exactly the question is, but I'll try to address as much as I can:
As rmaddy mentioned, if you want to create an image given a file location, simply use the UIImage constructor he specified.
As for sending just a sticker (which you asked about in the comments on rmaddy's answer), you can insert just a sticker into an iMessage conversation. This functionality is available as part of MSConversation. Here is a link to the documentation:
https://developer.apple.com/reference/messages/msconversation/1648187-insert
The active conversation can be accessed from your MSMessagesAppViewController.
There is no init(contentsOfURL:) initializer for UIImage. The closest one is init(contentsOfFile:).
To use that one with your file URL you can do:
let img = UIImage(contentsOfFile: square.imageFileURL.path)
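The .path conversion matters because a file URL's string form and its filesystem path are different things, and init(contentsOfFile:) wants the latter. A quick illustration with a made-up path:

```swift
import Foundation

let stickerURL = URL(fileURLWithPath: "/tmp/sticker.png")   // hypothetical location

let urlString = stickerURL.absoluteString   // "file:///tmp/sticker.png" (a URL string, not a path)
let path = stickerURL.path                  // "/tmp/sticker.png" (what init(contentsOfFile:) expects)
```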