I have a UIImage generated from a CanvasView.
I want to use the featureprintObservationForImage function on it. However, it seems to take a URL, and I am trying to provide a UIImage. How can I get around this?
Here is my code:
//getting image
UIGraphicsBeginImageContextWithOptions(theCanvasView.bounds.size, false, UIScreen.main.scale)
theCanvasView.drawHierarchy(in: theCanvasView.bounds, afterScreenUpdates: true)
let image2 = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
//setting up observation for image
let drawing = featureprintObservationForImage(atURL: Bundle.main.url(image2)!)
I am getting the error on the last line that says:
Cannot convert value of type 'UIImage?' to expected argument type 'String'
Any ideas?
Assuming that you need to get a VNFeaturePrintObservation instance, you can pass an image instead of a URL by using a different VNImageRequestHandler initializer.
Assuming your featureprintObservationForImage method looks something like this:
func featureprintObservationForImage(atURL url: URL) -> VNFeaturePrintObservation? {
    let requestHandler = VNImageRequestHandler(url: url, options: [:])
    let request = VNGenerateImageFeaturePrintRequest()
    do {
        try requestHandler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Vision error: \(error)")
        return nil
    }
}
You could write a different version like this:
func featureprintObservationForImage(_ image: CIImage?) -> VNFeaturePrintObservation? {
    guard let ciImage = image else {
        return nil
    }
    let requestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    let request = VNGenerateImageFeaturePrintRequest()
    do {
        try requestHandler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Vision error: \(error)")
        return nil
    }
}
The differences in the second one are:
The signature of the method: it takes an optional CIImage instead of a URL.
The initializer of the requestHandler.
Therefore:
let drawing = featureprintObservationForImage(image2?.ciImage)
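One caveat worth noting (not in the original answer): UIImage.ciImage is only non-nil when the UIImage was built from a CIImage, and an image captured with UIGraphicsGetImageFromCurrentImageContext() is usually CGImage-backed, so image2?.ciImage is likely to be nil. A safer sketch, assuming the second version of the method above:
// Sketch: image2 comes from UIGraphicsGetImageFromCurrentImageContext() and is
// CGImage-backed, so image2?.ciImage would be nil. Build the CIImage explicitly instead.
let drawing = featureprintObservationForImage(image2.flatMap { CIImage(image: $0) })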
I'm attempting to crop a UIImage in iOS using saliency via VNGenerateObjectnessBasedSaliencyImageRequest().
I'm following the documentation provided by Apple here https://developer.apple.com/documentation/vision/2908993-vnimagerectfornormalizedrect and
working off of this tutorial https://betterprogramming.pub/cropping-areas-of-interest-using-vision-in-ios-e83b5e53440b.
I'm also referencing this project https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency.
This is the code I currently have in place.
static func cropImage(_ image: UIImage, completionHandler: @escaping (UIImage?, String?) -> Void) -> Void {
    guard let originalImage = image.cgImage else { return }
    let saliencyRequest = VNGenerateObjectnessBasedSaliencyImageRequest()
    let requestHandler = VNImageRequestHandler(cgImage: originalImage, orientation: .right, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try requestHandler.perform([saliencyRequest])
            guard let results = saliencyRequest.results?.first else { return }
            if let observation = results as VNSaliencyImageObservation? {
                let salientObjects = observation.salientObjects
                if let ciimage = CIImage(image: image) {
                    let salientRect = VNImageRectForNormalizedRect((salientObjects?.first!.boundingBox)!,
                                                                   Int(ciimage.extent.size.width),
                                                                   Int(ciimage.extent.size.height))
                    let croppedImage = ciimage.cropped(to: salientRect)
                    let cgImage = iOSVisionHelper.convertCIImageToCGImage(inputImage: croppedImage)
                    if cgImage != nil {
                        let thumbnail = UIImage(cgImage: cgImage!)
                        completionHandler(thumbnail, nil)
                    } else {
                        completionHandler(nil, "Unable to crop image")
                    }
                }
            }
        } catch {
            completionHandler(nil, error.localizedDescription)
        }
    }
}
static func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
This is working pretty well, except it seems like it's not adjusting the height of the image. It crops in the sides perfectly, but not the top or bottom.
Here are examples of the original image and it being cropped.
This is what the iOS demo app found at https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency generates.
Any help would be very much appreciated.
What I do is simple:
let pasteboard = UIPasteboard.general
let base64EncodedImageString = "here_base_64_string_image"
let data = Data(base64Encoded: base64EncodedImageString)
let url = data?.write(withName: "image.jpeg")
pasteboard.image = UIImage(url: url) // and now when I try to paste that image somewhere, for example in iMessage, it is rotated... why?
What may be important:
It happens only for images created by the camera.
However, if I use exactly the same process (!) to create activityItems for a UIActivityViewController and try to use the iMessage app, then it works... why? What makes the difference?
I use the two simple extensions below for UIImage and Data:
extension Data {
    func write(withName name: String) -> URL {
        let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(name)
        do {
            try write(to: url, options: NSData.WritingOptions.atomic)
            return url
        } catch {
            return url
        }
    }
}
extension UIImage {
    convenience init?(url: URL?) {
        guard let url = url else {
            return nil
        }
        do {
            self.init(data: try Data(contentsOf: url))
        } catch {
            return nil
        }
    }
}
Before the server returns the base64EncodedString, I upload an image from the camera like this:
func imagePickerController(
    _ picker: UIImagePickerController,
    didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]
) {
    let image = info[.originalImage] as? UIImage
    let encodedBase64 = image?.jpegData(compressionQuality: 0.9)?.base64EncodedString() ?? ""
    // upload encodedBase64 to the server... that is all
}
I am not sure, but I think UIPasteboard converts your image to PNG and discards its orientation. You can explicitly tell it the kind of data you are adding to the pasteboard, but I am not sure if this would work for your scenario.
extension Data {
    var image: UIImage? { UIImage(data: self) }
}
Setting your pasteboard data:
UIPasteboard.general.setData(jpegData, forPasteboardType: "public.jpeg")
Loading the data from the pasteboard:
if let pbImage = UIPasteboard.general.data(forPasteboardType: "public.jpeg")?.image {
    // use pbImage here
}
Or redraw your image before setting the pasteboard's image property:
extension UIImage {
    func flattened(isOpaque: Bool = true) -> UIImage? {
        if imageOrientation == .up { return self }
        UIGraphicsBeginImageContextWithOptions(size, isOpaque, scale)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
UIPasteboard.general.image = image.flattened()
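The two approaches can also be combined (a sketch, not from the original answer): flatten the image first so the pixel data already has the correct orientation, then put explicit JPEG data on the pasteboard. This reuses jpegData(compressionQuality:), which the question already used when uploading.
// Sketch combining both suggestions: bake the orientation into the pixels,
// then hand the pasteboard explicit JPEG data.
if let flattened = image.flattened(),
   let jpegData = flattened.jpegData(compressionQuality: 0.9) {
    UIPasteboard.general.setData(jpegData, forPasteboardType: "public.jpeg")
}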
I am trying to detect a barcode from a user-selected image. I am able to detect a QR code from the image but cannot find anything related to barcode scanning from an image. The code I am using to detect a QR code from an image is below:
func detectQRCode(_ image: UIImage?) -> [CIFeature]? {
    if let image = image, let ciImage = CIImage.init(image: image) {
        var options: [String: Any]
        let context = CIContext()
        options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let qrDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: context, options: options)
        if ciImage.properties.keys.contains((kCGImagePropertyOrientation as String)) {
            options = [CIDetectorImageOrientation: ciImage.properties[(kCGImagePropertyOrientation as String)] ?? 1]
        } else {
            options = [CIDetectorImageOrientation: 1]
        }
        let features = qrDetector?.features(in: ciImage, options: options)
        return features
    }
    return nil
}
When I go into the documentation of CIDetectorTypeQRCode, it says:
/* Specifies a detector type for barcode detection. */
#available(iOS 8.0, *)
public let CIDetectorTypeQRCode: String
Though this is the QR code type, the documentation says it can detect barcodes as well.
Fine. But when I use the same function to decode a barcode, it returns an empty array of features. Even if it returned some features, how would I convert them to the barcode equivalent of CIQRCodeFeature? I do not see any barcode alternative in the documentation of CIQRCodeFeature. I know this can be done with the ZBar SDK, but I am trying not to use any third-party library here. Or is it mandatory to use one in this case?
Please help. Thanks a lot.
You can use the Vision framework.
Barcode detection request code:
var vnBarCodeDetectionRequest: VNDetectBarcodesRequest {
    let request = VNDetectBarcodesRequest { (request, error) in
        if let error = error as NSError? {
            print("Error in detecting - \(error)")
            return
        } else {
            guard let observations = request.results as? [VNDetectedObjectObservation] else {
                return
            }
            print("Observations are \(observations)")
        }
    }
    return request
}
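If you also need the decoded value of each barcode (not part of the original answer, but a common next step), the results can be cast to VNBarcodeObservation, which exposes payloadStringValue and symbology. A minimal sketch:
// Sketch: reading the decoded payload from barcode observations.
// Assumes this runs inside the VNDetectBarcodesRequest completion handler above.
if let barcodes = request.results as? [VNBarcodeObservation] {
    for barcode in barcodes {
        // payloadStringValue is nil when the payload cannot be decoded as a string.
        print("Symbology: \(barcode.symbology), payload: \(barcode.payloadStringValue ?? "nil")")
    }
}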
The function to which you need to pass the image:
func createVisionRequest(image: UIImage) {
    guard let cgImage = image.cgImage else {
        return
    }
    let requestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: image.cgImageOrientation, options: [:])
    let vnRequests = [vnBarCodeDetectionRequest]
    DispatchQueue.global(qos: .background).async {
        do {
            try requestHandler.perform(vnRequests)
        } catch let error as NSError {
            print("Error in performing Image request: \(error)")
        }
    }
}
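Note that cgImageOrientation is not a built-in UIImage property; the snippet above assumes a helper that maps UIImage.Orientation to the CGImagePropertyOrientation value Vision expects. A sketch of such an extension (the name is taken from the code above):
// Hypothetical helper assumed by the snippet above: maps UIKit's image
// orientation to the CGImagePropertyOrientation value that Vision expects.
extension UIImage {
    var cgImageOrientation: CGImagePropertyOrientation {
        switch imageOrientation {
        case .up: return .up
        case .down: return .down
        case .left: return .left
        case .right: return .right
        case .upMirrored: return .upMirrored
        case .downMirrored: return .downMirrored
        case .leftMirrored: return .leftMirrored
        case .rightMirrored: return .rightMirrored
        @unknown default: return .up
        }
    }
}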
Reference Link
I need to load images from a URL and store them locally so they don't have to be reloaded over and over. I have this extension I am working on:
extension UIImage {
    func load(image imageName: String) -> UIImage {
        // declare image location
        let imagePath: String = "\(NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0])/\(imageName).png"
        let imageUrl: URL = URL(fileURLWithPath: imagePath)
        // check if the image is stored already
        if FileManager.default.fileExists(atPath: imagePath),
            let imageData: Data = try? Data(contentsOf: imageUrl),
            let image: UIImage = UIImage(data: imageData, scale: UIScreen.main.scale) {
            return image
        }
        // image has not been created yet: create it, store it, return it
        do {
            let url = URL(string: eventInfo!.bannerImage)!
            let data = try Data(contentsOf: url)
            let loadedImage: UIImage = UIImage(data: data)!
        }
        catch {
            print(error)
        }
        let newImage: UIImage =
            try? UIImagePNGRepresentation(loadedImage)?.write(to: imageUrl)
        return newImage
    }
}
I am running into a problem where the "loadedImage" in the UIImagePNGRepresentation comes back with an error "Use of unresolved identifier loadedImage". My goal is to store a PNG representation of the image locally. Any suggestions on this error would be appreciated.
It's a simple matter of variable scope. You declare loadedImage inside the do block but then attempt to use it outside (after) that block.
Move the use of loadedImage to be inside the do block.
You also need better error handling and better handling of optional results. And your load method should probably return an optional image in case all attempts to get the image fail. Or return some default image.
Here's your method rewritten using better APIs and better handling of optionals and errors.
extension UIImage {
    func load(image imageName: String) -> UIImage? {
        // declare image location
        guard let imageUrl = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first?.appendingPathComponent(imageName).appendingPathExtension("png") else {
            return nil // or create and return some default image
        }
        // check if the image is stored already
        if FileManager.default.fileExists(atPath: imageUrl.path) {
            if let imageData = try? Data(contentsOf: imageUrl), let image = UIImage(data: imageData) {
                return image
            }
        }
        // image has not been created yet: create it, store it, return it
        do {
            let url = URL(string: eventInfo!.bannerImage)! // two force-unwraps - consider better handling of this
            let data = try Data(contentsOf: url) // Data(contentsOf:) throws; it does not return an optional
            if let loadedImage = UIImage(data: data) {
                try data.write(to: imageUrl)
                return loadedImage
            }
        }
        catch {
            print(error)
        }
        return nil // or create and return some default image
    }
}
If eventInfo!.bannerImage is a remote URL, then you must never run this code on the main queue.
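As a rough illustration of that last point (the property and image names here are only for the sketch), you could call the method off the main queue and hop back to update the UI:
// Sketch: keep the synchronous Data(contentsOf:) download off the main queue.
// `imageView` and the "banner" cache name are illustrative, not from the question.
DispatchQueue.global(qos: .utility).async {
    let image = UIImage().load(image: "banner")
    DispatchQueue.main.async {
        imageView.image = image
    }
}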
I am using Core Data in an app. I have set the image attribute as Binary Data in the data model. But I have fetched the image from the server as a UIImage, and it throws this error:
cannot assign value of type 'UIImage?' to type 'NSData?
I searched but couldn't find any similar solution for it. Can anyone help me with Swift 3?
My code is:
let url1: URL = URL(string: self.appDictionary.value(forKey: "image") as! String)!
let picture = "http://54.243.11.100/storage/images/news/f/"
let strInterval = String(format: "%@%@", picture as CVarArg, url1 as CVarArg) as String
let url = URL(string: strInterval as String)
SDWebImageManager.shared().downloadImage(with: url, options: [], progress: nil, completed: { [weak self] (image, error, cached, finished, url) in
    if self != nil {
        task.imagenews = image // Error: cannot assign value of type 'UIImage?' to type 'NSData?'
    }
})
The error message is pretty clear - you cannot assign a UIImage object to a variable of NSData type.
To convert a UIImage to Swift's Data type, use UIImagePNGRepresentation (note that it returns an optional):
let data: Data? = UIImagePNGRepresentation(image)
Note that if you're using Swift, you should be using Swift's type Data instead of NSData.
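Applied to the code in the question, the completion block would look roughly like this (a sketch; whether the generated imagenews property is Data? or NSData? depends on the model's code-generation settings):
SDWebImageManager.shared().downloadImage(with: url, options: [], progress: nil, completed: { [weak self] (image, error, cached, finished, url) in
    guard self != nil, let image = image else { return }
    // Convert the UIImage to Data before storing it in the Binary Data attribute.
    if let data = UIImagePNGRepresentation(image) {
        task.imagenews = data // use `data as NSData?` if the generated property is NSData?
    }
})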
You must convert the image into Data (or NSData) to match the imagenews data type.
Try this:
let url1: URL = URL(string: self.appDictionary.value(forKey: "image") as! String)!
let picture = "http://54.243.11.100/storage/images/news/f/"
let strInterval = String(format: "%@%@", picture as CVarArg, url1 as CVarArg) as String
let url = URL(string: strInterval as String)
SDWebImageManager.shared().downloadImage(with: url, options: [], progress: nil, completed: { [weak self] (image, error, cached, finished, url) in
    if self != nil {
        if let data = image?.pngRepresentationData { // If image type is PNG
            task.imagenews = data
        } else if let data = image?.jpegRepresentationData { // If image type is JPG/JPEG
            task.imagenews = data
        }
    }
})
// UIImage extension that helps convert an image into Data
extension UIImage {
    var pngRepresentationData: Data? {
        return UIImagePNGRepresentation(self)
    }
    var jpegRepresentationData: Data? {
        return UIImageJPEGRepresentation(self, 1.0)
    }
}