I want to detect a QR code in a still image. Here is the code:
let ciimage = <Load image from asset and covert to CIImage>
let detector = CIDetector(ofType: CIDetectorTypeQRCode, context:nil, options:nil)
let features = detector.featuresInImage(ciimage)
print(features.count)
The image is a valid CIImage instance containing a QR code, but I always get zero features. Has anyone had the same issue?
Thank you very much.
I think I found the answer.
That code only works in a Playground or on a real iPhone; it never works on the simulator.
I generated the QR code image using the CIQRCodeGenerator Core Image filter.
I hope someone can check my answer here and give me advice.
Below is the source code I used:
enum QRCodeUtilError: ErrorType {
    case GenerationFailed
}

extension UIImage {
    /**
     This function tries to return a CIImage even if the image is CGImageRef based.
     - Returns: CIImage equal to the current image
     */
    func ciImage() -> CoreImage.CIImage? {
        // If the backing CIImage is not nil, return it directly.
        if self.CIImage != nil {
            return self.CIImage
        }
        guard let cgImage = self.CGImage else {
            return nil
        }
        return CoreImage.CIImage(CGImage: cgImage)
    }
}
/**
 QRCode utility class
 */
class QRCodeUtil {
    /**
     Generates an 'H' correction level QR code image.
     - Parameter qrCode: QR code string
     - Returns: UIImage instance of the QR code
     */
    class func imageForQrCode(qrCode: String) throws -> UIImage {
        guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
            throw QRCodeUtilError.GenerationFailed
        }
        guard let data = qrCode.dataUsingEncoding(NSISOLatin1StringEncoding) else {
            throw QRCodeUtilError.GenerationFailed
        }
        filter.setValue(data, forKey: "inputMessage")
        filter.setValue("H", forKey: "inputCorrectionLevel")
        guard let outputImage = filter.outputImage else {
            throw QRCodeUtilError.GenerationFailed
        }
        /*
         Convert the image to a CGImage-backed UIImage because
         UIImagePNGRepresentation / UIImageJPEGRepresentation
         crash if the image is CIImage based.
         */
        let context = CIContext(options: nil)
        let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent)
        return UIImage(CGImage: cgImage)
    }

    /**
     Detects a QR code in an image.
     - Parameter qrImage: UIImage containing a QR code
     - Returns: The detected QR code string, or "" if none was found
     */
    class func qrCodeFromImage(qrImage: UIImage) -> String {
        let detector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        guard let ciImage = qrImage.ciImage() else {
            return ""
        }
        guard let feature = detector.featuresInImage(ciImage).last as? CIQRCodeFeature else {
            return ""
        }
        return feature.messageString
    }
}
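For reference, a round-trip usage of the class above might look like this (a sketch in the same Swift 2 syntax; the URL string is just an example, and per the answer above detection only succeeds in a Playground or on a real device, not in the simulator):

```swift
do {
    // Generate a QR code image, then immediately try to detect it again.
    let image = try QRCodeUtil.imageForQrCode("https://example.com")
    let decoded = QRCodeUtil.qrCodeFromImage(image)
    print(decoded) // on a device this should print the original string
} catch {
    print("QR code generation failed: \(error)")
}
```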
I'm attempting to crop a UIImage in iOS using saliency via VNGenerateObjectnessBasedSaliencyImageRequest().
I'm following the documentation provided by Apple here https://developer.apple.com/documentation/vision/2908993-vnimagerectfornormalizedrect and
working off of this tutorial https://betterprogramming.pub/cropping-areas-of-interest-using-vision-in-ios-e83b5e53440b.
I'm also referencing this project https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency.
This is the code I currently have in place.
static func cropImage(_ image: UIImage, completionHandler: @escaping (UIImage?, String?) -> Void) -> Void {
    guard let originalImage = image.cgImage else { return }
    let saliencyRequest = VNGenerateObjectnessBasedSaliencyImageRequest()
    let requestHandler = VNImageRequestHandler(cgImage: originalImage, orientation: .right, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try requestHandler.perform([saliencyRequest])
            guard let results = saliencyRequest.results?.first else { return }
            if let observation = results as? VNSaliencyImageObservation {
                let salientObjects = observation.salientObjects
                if let ciimage = CIImage(image: image) {
                    let salientRect = VNImageRectForNormalizedRect((salientObjects?.first!.boundingBox)!,
                                                                   Int(ciimage.extent.size.width),
                                                                   Int(ciimage.extent.size.height))
                    let croppedImage = ciimage.cropped(to: salientRect)
                    let cgImage = iOSVisionHelper.convertCIImageToCGImage(inputImage: croppedImage)
                    if cgImage != nil {
                        let thumbnail = UIImage(cgImage: cgImage!)
                        completionHandler(thumbnail, nil)
                    } else {
                        completionHandler(nil, "Unable to crop image")
                    }
                }
            }
        } catch {
            completionHandler(nil, error.localizedDescription)
        }
    }
}
static func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
This is working pretty well, except it seems like it's not adjusting the height of the image. It crops in the sides perfectly, but not the top or bottom.
Here are examples of the original image and it being cropped.
This is what the iOS demo app found at https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency generates.
Any help would be very much appreciated.
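One thing that may be worth checking (an assumption, not a verified fix): the request handler is given orientation: .right, so the normalized bounding box is relative to the rotated image, while ciimage.extent comes from the unrotated CIImage, which would leave the vertical axis mismatched. Denormalizing against an orientation-consistent image might look like:

```swift
// Hypothetical adjustment: rotate the CIImage the same way the
// request handler was told to, so the extent used to denormalize
// matches the coordinate space of the observation.
if let ciimage = CIImage(image: image)?.oriented(.right),
   let box = salientObjects?.first?.boundingBox {
    let salientRect = VNImageRectForNormalizedRect(box,
                                                   Int(ciimage.extent.size.width),
                                                   Int(ciimage.extent.size.height))
    let croppedImage = ciimage.cropped(to: salientRect)
    // ...render croppedImage exactly as before
}
```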
I've been having this problem for a while now and looked at dozens of answers here and can't seem to find anything that helps.
Scenario
I am generating a QR Code on the iOS side of my app and want this QR code to be sent to the WatchKit Extension that I am currently developing.
How I am generating the QR Code
func createQR(with string: String) -> UIImage? {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // set the data to the contact data
        filter.setValue(string, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            return UIImage(ciImage: codeImage)
        }
    }
    return nil
}
What I want next
I want to get the data from the QR image so that I can send it to the Apple Watch app, like so:
let data = UIImagePNGRepresentation(QRCodeImage)
But this always returns nil, because there is no image data backing the output from the filter.
Note: I know there is no data associated with the CIImage because it hasn't been rendered; it's just the output from the filter. I don't know how to get around this because I'm pretty new to image processing and such. :/
What I've Tried
Creating a cgImage from the filter.outputImage
func createQR(with string: String) {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // set the data to the contact data
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.QRCode = UIImage(cgImage: cgImage)
            }
        }
    }
}
But this doesn't seem to work, because the image in the view is blank.
Creating a blank CIImage as Input Image
func update(with string: String) {
    let blankCiImage = CIImage(color: .white) // This probably isn't right...
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        filter.setValue(blankCiImage, forKey: kCIInputImageKey)
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.contactCode = UIImage(cgImage: cgImage)
                print(self.contactCode!)
                print(UIImagePNGRepresentation(self.contactCode!))
            }
        }
    }
}
This doesn't work either - my thought was to add a blank image to it and then do the filter on top of it, but I am probably not doing this right.
My Goal
Literally, just to get the data from the generated QR code. Most threads suggest UIImage(ciImage: output), but this doesn't have any backing data.
If anyone could help me out with this, that'd be great. And any explanation on how it works would be wonderful too.
Edit: I don't believe this is the same as the marked duplicate. The marked duplicate is about editing an existing image using CI filters and getting that data, while this is about an image created solely through a CI filter with no input image (QR codes). The other answer did not fully relate.
You have a couple of issues in your code. You need to convert your string to data using the isoLatin1 string encoding before passing it to the filter. Another issue is that to convert your CIImage to data you need to redraw/render the CIImage, and to prevent the image from blurring when scaled you need to apply a transform that increases its size:
extension StringProtocol {
    var qrCode: UIImage? {
        guard
            let data = data(using: .isoLatin1),
            let outputImage = CIFilter(name: "CIQRCodeGenerator",
                parameters: ["inputMessage": data, "inputCorrectionLevel": "M"])?.outputImage
        else { return nil }
        let size = outputImage.extent.integral
        let output = CGSize(width: 250, height: 250)
        let format = UIGraphicsImageRendererFormat()
        format.scale = UIScreen.main.scale
        return UIGraphicsImageRenderer(size: output, format: format).image { _ in
            outputImage
                .transformed(by: .init(scaleX: output.width / size.width, y: output.height / size.height))
                .image
                .draw(in: .init(origin: .zero, size: output))
        }
    }
}

extension CIImage {
    var image: UIImage { .init(ciImage: self) }
}
Playground testing:
let link = "https://stackoverflow.com/questions/51178573/swift-image-data-from-ciimage-qr-code-how-to-render-cifilter-output?noredirect=1"
let image = link.qrCode!
let data = image.jpegData(compressionQuality: 1) // 154785 bytes
I am having a problem getting a CIImage from my UIImage in an extension.
I am trying to apply a CIWhitePointAdjust filter to the UIImage.
Here is my code:
extension UIImage {
    enum ImageError: Error {
        case filterLookupError(String)
        case filterError(CIFilter)
        case ciImageError(CIImage?)
        case cgImageError(CIImage, CGRect)
    }

    func applyWhitePointAdjustment(_ whiteColor: UIColor) -> UIImage? {
        let context = CIContext(options: nil)
        let filterName = "CIWhitePointAdjust"
        guard let wpaFilter = CIFilter(name: filterName) else {
            ImageError.filterLookupError("filter not found: " + filterName).handle()
            return nil
        }
        guard let inputImage = self.ciImage else {
            ImageError.ciImageError(self.ciImage).handle()
            return self
        }
        wpaFilter.setValue(inputImage, forKey: kCIInputImageKey)
        wpaFilter.setValue(whiteColor, forKey: kCIInputColorKey)
        guard let output = wpaFilter.outputImage else {
            ImageError.filterError(wpaFilter).handle()
            return nil
        }
        guard let cgImage = context.createCGImage(output, from: output.extent) else {
            ImageError.cgImageError(output, output.extent).handle()
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}
Your help would be highly appreciated. I checked that the UIImage is there, and it is.
The filter also doesn't work, but I haven't had time to look into that. If anyone has a quick solution for it, that would be appreciated as well.
Update:
I replaced
guard let inputImage = self.ciImage
with:
guard let inputImage = CIImage(image: self)
and it seems to work. But I don't really understand the difference. Could anyone please give an explanation?
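For what it's worth, the difference can be shown directly (a sketch; the asset name is hypothetical). UIImage.ciImage is not a conversion: it only returns a CIImage when the UIImage was originally created from one, while CIImage(image:) builds a new CIImage from the CGImage backing:

```swift
let cgBacked = UIImage(named: "photo")!  // hypothetical asset; CGImage-backed
cgBacked.ciImage                         // nil: this image was not created from a CIImage
CIImage(image: cgBacked)                 // non-nil: converted from the CGImage backing

let ciBacked = UIImage(ciImage: CIImage(color: .black))
ciBacked.ciImage                         // non-nil: wraps the CIImage it was created from
ciBacked.cgImage                         // nil
```

Separately (an untested assumption): kCIInputColorKey on CIWhitePointAdjust expects a CIColor, so passing the UIColor directly may be why the filter appears to do nothing; wrapping it as CIColor(color: whiteColor) would be worth trying.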
I get nil on an iOS device (it works OK in the Simulator or a Playground) when I create a barcode UIImage with generateBarcode("ABCDEF") and call UIImageJPEGRepresentation or UIImagePNGRepresentation.
There is clearly something still wrong with the UIImage, as the debugger cannot display it either. The image exists and its cgImage property is set, but UIImageJPEGRepresentation doesn't like it.
I have tried to solve it as per: UIImageJPEGRepresentation returns nil
class func generateBarcode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        if let ciImage = filter.outputImage {
            if let cgImage = convertCIImageToCGImage(inputImage: ciImage) {
                return UIImage(cgImage: cgImage)
            }
        }
    }
    return nil
}

class func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
I would like to pixelate and unpixelate a UIImage or a UIImageView using Swift, but I have no idea how to do that.
Maybe using effects, layers, or something like that?
This is a very easy task on iOS.
Pixelation
You can use the CIPixellate Core Image filter.
func pixellated(image: UIImage) -> UIImage? {
    guard let
        ciImage = CIImage(image: image),
        filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: "inputImage")
    guard let output = filter.outputImage else { return nil }
    return UIImage(CIImage: output)
}
Result
The default inputScale value is 8, but you can increase or decrease the effect by setting the parameter manually.
filter.setValue(8, forKey: "inputScale")
// ^
// change this
Extension
You can also define the following extension
extension UIImage {
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let
            ciImage = UIKit.CIImage(image: self),
            filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: "inputImage")
        filter.setValue(scale, forKey: "inputScale")
        guard let output = filter.outputImage else { return nil }
        return UIImage(CIImage: output)
    }
}
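Usage of the extension is then a one-liner (the asset name is just an example):

```swift
let source = UIImage(named: "mushroom")     // example asset name
let subtle = source?.pixellated()           // default inputScale of 8
let strong = source?.pixellated(scale: 24)  // larger, more visible blocks
```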
Unpixelation
The mechanism is exactly the same; you just need to use a different filter. You can find the full list of filters here (but check which parameters are available/required for each filter). I think CIGaussianBlur can do the job.
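A blur-based counterpart along the same lines might look like this (a sketch using CIGaussianBlur, written in the same Swift 2 syntax as the extension above; note that the blur pads the image's extent, so the output is cropped back to the original rect):

```swift
extension UIImage {
    func blurred(radius: Double = 8.0) -> UIImage? {
        guard let
            ciImage = UIKit.CIImage(image: self),
            filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(ciImage, forKey: "inputImage")
        filter.setValue(radius, forKey: "inputRadius")
        guard let output = filter.outputImage else { return nil }
        // CIGaussianBlur expands the extent; crop back to the original image rect.
        return UIImage(CIImage: output.imageByCroppingToRect(ciImage.extent))
    }
}
```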
Of course, don't expect to feed in a low-resolution, heavily pixellated image and get a high-definition one back. That technology is only available in The X-Files :D
The Mushroom image has been taken from here.