How can I get selected text from ImageAnalysisInteraction on UIImageView?

I work on an iOS app that displays images that often contain text, and I'm adding support for ImageAnalysisInteraction as described in this WWDC 2022 session. I have gotten as far as making the interaction show up and being able to select text and get the system selection menu, and even add my own action to the menu via the buildMenuWithBuilder API. But what I really want to do with my custom action is get the selected text and do a custom lookup-like thing to check the text against other content in my app.
So how do I get the selected text from an ImageAnalysisInteraction on a UIImageView? The docs show methods to check if there is selected text, but I want to know what the text is.

I was trying to solve the same problem. However, there doesn't currently seem to be any straightforward way to get selected text from ImageAnalysisInteraction. The closest thing seems to be the ImageAnalysis.transcript property, but it contains all the OCR text, not just what the user selected.
My solution was to capture the text whenever the user taps the Copy button in the selection menu. You can do this by observing pasteboard changes and reading the selected text from the clipboard whenever a change is detected (a sketch follows after the links below).
See:
Get notified on clipboard change in swift
How to copy text to clipboard/pasteboard with Swift
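A minimal sketch of that clipboard approach, assuming you start observing before presenting the image (SelectionCapturer and onCopy are hypothetical names). The notification is only delivered while your app is in the foreground, which is fine here since the copy happens in your own UI; also be aware that reading UIPasteboard.general can surface the system paste notice in some cases:
import UIKit

final class SelectionCapturer {
    private var observer: NSObjectProtocol?

    // Calls onCopy with whatever text the user copies from the Live Text menu.
    func startObserving(onCopy: @escaping (String) -> Void) {
        observer = NotificationCenter.default.addObserver(
            forName: UIPasteboard.changedNotification,
            object: UIPasteboard.general,
            queue: .main
        ) { _ in
            if let copied = UIPasteboard.general.string {
                onCopy(copied) // the text the user selected and copied
            }
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}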

Hope this helps you:
// Step 1: import the Vision framework
import Vision

// Step 2: convert the displayed image into a CGImage
guard let cgImage = imageWithText.image?.cgImage else { return }

// Step 3: create a request handler with the CGImage
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])

// Step 4: create the text recognition request
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation],
          error == nil else { return }
    let text = observations.compactMap {
        $0.topCandidates(1).first?.string
    }.joined(separator: ", ")
    print(text) // text we get from the image
}

// Step 5: set the recognition level and perform the request
request.recognitionLevel = .accurate
try? handler.perform([request])
For reference and more details, see Apple's documentation for VNRecognizeTextRequest.

Related

How to get selected text on an image with live text enabled?

I implemented Live Text on images with the following code:
let config = ImageAnalyzer.Configuration([.text, .machineReadableCode])
Task {
    do {
        let analysis = try await analyzer.analyze(image, configuration: config)
        interaction.analysis = analysis
        interaction.preferredInteractionTypes = .automatic
    } catch {
        // handle or log the analysis error
    }
}
I am able to select text, but I'm not able to do much after that. I would like to be able to get the selected text along with the position of the selected text relative to the image. How would I do that?
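ImageAnalysisInteraction doesn't expose the selection directly here, but one workaround is to run the same image through Vision, which reports every recognized string together with a normalized bounding box that you can convert to image coordinates and intersect with wherever the user selected. A minimal sketch; recognizedTextWithPositions is a hypothetical helper name:
import UIKit
import Vision

func recognizedTextWithPositions(in image: UIImage,
                                 completion: @escaping ([(String, CGRect)]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }
    let request = VNRecognizeTextRequest { request, _ in
        let observations = (request.results as? [VNRecognizedTextObservation]) ?? []
        let results: [(String, CGRect)] = observations.compactMap { observation in
            guard let candidate = observation.topCandidates(1).first else { return nil }
            // boundingBox is normalized (0...1, origin at bottom-left);
            // convert it to pixel coordinates in the image.
            let rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                                    cgImage.width,
                                                    cgImage.height)
            return (candidate.string, rect)
        }
        completion(results)
    }
    request.recognitionLevel = .accurate
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }
}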

How to highlight selected text in a PDF using PDFKit?

I have set up a PDF viewer and would like to add highlight functionality so that when a user selects text, they can highlight it. When you select text in Notes, iMessage, etc., you get options such as Select All, Copy, and Paste. How would you extend this so that you could have a highlight option as well? Also, how would the application save the highlighting so that when a user closed and reopened the app, they would still see the highlighted text? Would this involve using Core Data or something else? Thanks!
Here is a screenshot of the default options that Apple provides; I would like to add an additional highlight option.
let selections = pdfView.currentSelection?.selectionsByLine()
// Assuming a single-page PDF.
guard let page = selections?.first?.pages.first else { return }
selections?.forEach { selection in
    let highlight = PDFAnnotation(bounds: selection.bounds(for: page), forType: .highlight, withProperties: nil)
    highlight.endLineStyle = .square
    highlight.color = UIColor.orange.withAlphaComponent(0.5)
    page.addAnnotation(highlight)
}
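To the saving part of the question: annotations added with addAnnotation(_:) live inside the PDFDocument itself, so no Core Data is needed; writing the document back to disk persists the highlights. A minimal sketch, where highlighted.pdf is a hypothetical destination in the app's Documents directory:
import PDFKit

func saveHighlightedPDF(from pdfView: PDFView) {
    guard let document = pdfView.document else { return }
    let destinationURL = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("highlighted.pdf")
    // The PDFAnnotation objects are embedded in the file when it is written.
    _ = document.write(to: destinationURL)
    // On the next launch, create the PDFDocument from destinationURL to restore them.
}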

iOS loading table data with nested request

I am working with a UITableView which contains an image and some labels.
The text is loaded from one server and the image is downloaded from another. The image URL depends on the text response, but I have to show both in one cell. What I need to do is combine the data once both have loaded and then display it.
What could be the right approach?
You can combine the responses of the two requests using DispatchGroup:
let group = DispatchGroup()
var text: String?
var image: UIImage?

group.enter()
requestText(completion: { response in
    text = // extract text from response
    group.leave()
})

group.enter()
requestImage(completion: { response in
    image = // extract image from response
    group.leave()
})

// Runs once both leave() calls have balanced both enter() calls.
group.notify(queue: DispatchQueue.main, execute: {
    let textWithImage = (text, image)
    // show data in table view
})
You can also simply display the textual data first; then, as soon as an image is downloaded, map that image to its textual data via a common id in both responses and reload that particular cell (a sketch follows below). This way the user sees the textual data immediately, and moments later the images show up nicely.
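A minimal sketch of that text-first approach, assuming a hypothetical Item model and image cache (adapt the names to your own code):
import UIKit

struct Item {
    let id: String
    let title: String
    let imageURL: URL
}

final class FeedDataSource {
    var items: [Item] = []
    var imageCache: [String: UIImage] = [:]

    // Called after the cell is configured with its text; fills in the image later.
    func loadImage(at indexPath: IndexPath, in tableView: UITableView) {
        let item = items[indexPath.row]
        URLSession.shared.dataTask(with: item.imageURL) { [weak self] data, _, _ in
            guard let self = self, let data = data, let image = UIImage(data: data) else { return }
            DispatchQueue.main.async {
                self.imageCache[item.id] = image
                // Reload only the affected row once its image is ready.
                tableView.reloadRows(at: [indexPath], with: .fade)
            }
        }.resume()
    }
}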

Problems with structs in Swift and making a UIImage from url?

Alright, I am not familiar with structs or the ordeal I am dealing with in Swift, but what I need to do is create an iMessage in my iMessage app extension with a sticker in it, meaning the image part of the iMessage is set to the sticker.
I have pored over Apple's docs and https://www.captechconsulting.com/blogs/ios-10-imessages-sdk-creating-an-imessages-extension but I do not understand how to do this or really how structs work. I read up on structs, but that has not helped me accomplish what Apple does in their sample code (downloadable at Apple).
What Apple does is first compose a message, which I understood; they pass their struct in, but I want to pass a sticker instead:
guard let conversation = activeConversation else { fatalError("Expected a conversation") }
// Create a new message with the same session as any currently selected message.
let message = composeMessage(with: MSSticker, caption: "sup", session: conversation.selectedMessage?.session)
// Add the message to the conversation.
conversation.insert(message) { error in
    if let error = error {
        print(error)
    }
}
They then do this (this is directly from sample code) to compose the message:
fileprivate func composeMessage(with iceCream: IceCream, caption: String, session: MSSession? = nil) -> MSMessage {
    var components = URLComponents()
    components.queryItems = iceCream.queryItems
    let layout = MSMessageTemplateLayout()
    layout.image = iceCream.renderSticker(opaque: true)
    layout.caption = caption
    let message = MSMessage(session: session ?? MSSession())
    message.url = components.url!
    message.layout = layout
    return message
}
Basically this line is what Im having the problem with as I need to set my sticker as the image:
layout.image = iceCream.renderSticker(opaque: true)
Apple does a whole complicated function thing that I don't understand in renderSticker to pull the image part out of their stickers, and I have tried their way but I think this is better:
let img = UIImage(contentsOfURL: square.imageFileURL)
layout.image = img
layout.image needs a UIImage, and I can get the imageFileURL from the sticker, I just cant get this into a UIImage. I get an error it does not match available overloads.
What can I do here? How can I insert the image from my sticker into a message? How can I get an image from its imageFileURL?
I'm not sure what exactly the question is, but I'll try to address as much as I can --
As rmaddy mentioned, if you want to create an image given a file location, simply use the UIImage constructor he specified.
As far as sending just a sticker (which you asked about in the comments on rmaddy's answer), you can insert just a sticker into an iMessage conversation. This functionality is available as part of an MSConversation. Here is a link to the documentation:
https://developer.apple.com/reference/messages/msconversation/1648187-insert
The active conversation can be accessed from your MSMessagesAppViewController.
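A minimal sketch of inserting just the sticker from inside your MSMessagesAppViewController (where activeConversation is available), assuming sticker is the MSSticker you want to send:
activeConversation?.insert(sticker) { error in
    if let error = error {
        print(error.localizedDescription)
    }
}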
There is no init(contentsOfURL:) initializer for UIImage. The closest one is init(contentsOfFile:).
To use that one with your file URL you can do:
let img = UIImage(contentsOfFile: square.imageFileURL.path)
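Putting it together with the composeMessage pattern above, a minimal sketch (assuming sticker is your MSSticker and conversation is the active conversation):
import Messages
import UIKit

let layout = MSMessageTemplateLayout()
layout.image = UIImage(contentsOfFile: sticker.imageFileURL.path)
layout.caption = "sup"

let message = MSMessage(session: conversation.selectedMessage?.session ?? MSSession())
message.layout = layout
conversation.insert(message) { error in
    if let error = error {
        print(error)
    }
}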

Method for User Input in Swift for iOS App Development

I have referred to Apple's Swift Programming Language book, and it is of no help.
var fh = NSFileHandle.fileHandleWithStandardInput()
if let data = fh.availableData
{
    var str = NSString(data: data, encoding: NSUTF8StringEncoding)
}
There is more to it than that. Typically in iOS development, you'll have a UITextView become first responder. A responder is an event-handling object that can respond to and handle events. Once you make a UI element the first responder, the keyboard appears and the user can enter text.
Once that's done, you can resign the first responder and look at the text and use it however you want. Some rough code for this process looks like this:
// Create a text view
let tv = UITextView(frame: CGRect(x: 0, y: 0, width: 200, height: 100))
self.view.addSubview(tv)
// Tell iOS we want this view to handle text input
tv.becomeFirstResponder()
// Once the user has entered text, stop handling input events and print the input
tv.resignFirstResponder()
print(tv.text ?? "")
A good resource for input in iOS: User Input in iOS
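If a single-line field fits better than a UITextView, here is a minimal sketch using a UITextField delegate, reading the input when the user taps Return (the class and field names are hypothetical):
import UIKit

final class InputViewController: UIViewController, UITextFieldDelegate {
    private let field = UITextField(frame: CGRect(x: 20, y: 100, width: 280, height: 40))

    override func viewDidLoad() {
        super.viewDidLoad()
        field.borderStyle = .roundedRect
        field.delegate = self
        view.addSubview(field)
        field.becomeFirstResponder() // show the keyboard immediately
    }

    func textFieldShouldReturn(_ textField: UITextField) -> Bool {
        print(textField.text ?? "") // use the input here
        textField.resignFirstResponder()
        return true
    }
}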
