Swift CoreImage Retaining Memory (iOS)

I'm using ReactiveCocoa and CoreImage/CoreVideo to process videos: CoreImage to transform and filter each CMSampleBuffer from the video, and ReactiveCocoa to process the buffers in sequence.
My filtering function is fairly simple; all I'm doing is detecting if there's a face in the given image, and cropping the image to the face bounds.
static func process(input: CIImage) -> SignalProducer<CIImage?, Types.Error> {
    return SignalProducer { observer, disposable in
        let context = CIContext()
        let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: nil)
        guard let firstFeature = detector.featuresInImage(input, options: [CIDetectorImageOrientation: NSNumber(integer: 6)]).first else {
            observer.sendNext(nil)
            observer.sendCompleted()
            return
        }
        let cropFilter = CIFilter(name: "CICrop")
        let cropRect: CIVector = CIVector(CGRect: firstFeature.bounds)
        cropFilter?.setValue(input, forKey: "inputImage")
        cropFilter?.setValue(cropRect, forKey: "inputRectangle")
        guard let output = cropFilter?.outputImage else {
            observer.sendNext(nil)
            observer.sendCompleted()
            return
        }
        observer.sendNext(output)
        observer.sendCompleted()
        disposable.addDisposable {
            cropFilter?.setValue(nil, forKey: "inputImage")
        }
    }
}
However, somewhere in this function memory is being unintentionally retained. If I wrap the inner SignalProducer block in an autoreleasepool statement, everything works fine and memory usage never climbs above 50 MB. But if I don't wrap my filtering code in that, memory jumps from 30 MB to 200 MB and then the app crashes.
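For reference, a minimal sketch of the autoreleasepool wrapping described above (the body inside the pool is the same detection/cropping code, elided here):

static func process(input: CIImage) -> SignalProducer<CIImage?, Types.Error> {
    return SignalProducer { observer, disposable in
        // Draining an autorelease pool per buffer releases the temporary
        // objects created by CIContext/CIDetector/CIFilter right away.
        autoreleasepool {
            let context = CIContext()
            let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: nil)
            // ... same face detection, cropping and observer events as above ...
        }
    }
}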
What's being retained and which line is causing it?
EDIT:
I discovered shortly thereafter that it was this line:
guard let firstFeature = detector.featuresInImage(input, options: [CIDetectorImageOrientation: NSNumber(integer: 6)]).first else {
    observer.sendNext(nil)
    observer.sendCompleted()
    return
}
Although as to why it's this line, I have no clue. Could this perhaps be a bug in CoreImage itself?

Related

Blur image performance issues on iOS simulator 10.2.1

I use this code to blur my UIImage
extension UIImage {
    func blurred(radius: CGFloat) -> UIImage {
        let ciContext = CIContext(options: nil)
        guard let cgImage = cgImage else { return self }
        let inputImage = CIImage(cgImage: cgImage)
        guard let ciFilter = CIFilter(name: "CIGaussianBlur") else { return self }
        ciFilter.setValue(inputImage, forKey: kCIInputImageKey)
        ciFilter.setValue(radius, forKey: "inputRadius")
        guard let resultImage = ciFilter.value(forKey: kCIOutputImageKey) as? CIImage else { return self }
        guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
        return UIImage(cgImage: cgImage2)
    }
}
But it takes very long to return an image from this operation.
Actually this operation takes about 2 seconds:
guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
I have not tested it on a real device, but I'm not sure if the code is efficient.
That code looks fine-ish, though you should cache the image it returns rather than calling it repeatedly if at all possible; as Matt points out in the comments below, you should also use a shared CIContext rather than setting a new one up every time.
The performance issue you’re seeing is due to the simulator having very different performance characteristics from real hardware. It sounds like Core Image is either using the simulator’s emulated OpenGL ES interface (which is slow) or the CPU (which is slower). Testing it on an iOS device will give you a much better idea of the performance you should expect.
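As a rough sketch of those two suggestions (the shared context property here, sharedCIContext, is a made-up name, not an existing API):

extension UIImage {
    // One context reused across calls; creating a CIContext is expensive.
    private static let sharedCIContext = CIContext(options: nil)

    func blurred(radius: CGFloat) -> UIImage {
        guard let cgImage = cgImage else { return self }
        let inputImage = CIImage(cgImage: cgImage)
        guard let ciFilter = CIFilter(name: "CIGaussianBlur") else { return self }
        ciFilter.setValue(inputImage, forKey: kCIInputImageKey)
        ciFilter.setValue(radius, forKey: "inputRadius")
        guard let resultImage = ciFilter.outputImage,
              let cgImage2 = UIImage.sharedCIContext.createCGImage(resultImage, from: inputImage.extent)
        else { return self }
        return UIImage(cgImage: cgImage2)
    }
}

Caching the blurred result (for example in a property, or an NSCache keyed by radius) then avoids re-running the filter on every call.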

How to fix IOAF code GPU errors while using ARKit2 & Vision (VNDetectFaceRectanglesRequest) on iPhone XS

While running ARKit on iPhone XS (with iOS 12.1.2 and Xcode 10.1), I'm getting errors and crashes/hangs while running Vision code to detect face bounds.
Errors I'm getting are:
2019-01-04 03:03:03.155867-0800 ARKit Vision Demo[12969:3307770] Execution of the command buffer was aborted due to an error during execution. Caused GPU Timeout Error (IOAF code 2)
2019-01-04 03:03:03.155786-0800 ARKit Vision Demo[12969:3307850] Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (IOAF code 5)
[SceneKit] Error: display link thread seems stuck
This happens on iPhone XS while running the following proof of concept code to reproduce the error (always happens within a few seconds of running the app) - https://github.com/xta/ARKit-Vision-Demo
The relevant ViewController.swift contains the problematic methods:
func classifyCurrentImage() {
    guard let buffer = currentBuffer else { return }
    let image = CIImage(cvPixelBuffer: buffer)
    let options: [VNImageOption: Any] = [:]
    let imageRequestHandler = VNImageRequestHandler(ciImage: image, orientation: self.imageOrientation, options: options)
    do {
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}

func handleFaces(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNFaceObservation] else { return }
        // TODO - something here with results
        print(results)
        self.currentBuffer = nil
    }
}
What is the correct way to use Apple's ARKit + Vision with VNDetectFaceRectanglesRequest? Getting mysterious IOAF code errors is not correct.
Ideally, I'd also like to use VNTrackObjectRequest & VNSequenceRequestHandler to track requests.
There is decent online documentation for using VNDetectFaceRectanglesRequest with Vision (and without ARKit). Apple has a page here (https://developer.apple.com/documentation/arkit/using_vision_in_real_time_with_arkit) which I've followed, but I'm still getting the errors/crashes.
For anyone else who goes through the pain I just did trying to fix this exact error for VNDetectRectanglesRequest, here was my solution:
It seems that using a CIImage:
let imageRequestHandler = VNImageRequestHandler(ciImage: image, orientation: self.imageOrientation, options: options)
caused Metal to retain a large amount of "Internal Functions" in my memory graph.
I noticed that Apple's example projects all use this instead:
let handler: VNImageRequestHandler! = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                            orientation: orientation,
                                                            options: requestHandlerOptions)
Switching to use cvPixelBuffer instead of a CIImage fixed all of my random GPU timeout errors!
I used these functions to get the orientation (I'm using the back camera. I think you may have to mirror for the front camera depending on what you're trying to do):
func exifOrientationForDeviceOrientation(_ deviceOrientation: UIDeviceOrientation) -> CGImagePropertyOrientation {
    switch deviceOrientation {
    case .portraitUpsideDown:
        return .right
    case .landscapeLeft:
        return .down
    case .landscapeRight:
        return .up
    default:
        return .left
    }
}

func exifOrientationForCurrentDeviceOrientation() -> CGImagePropertyOrientation {
    return exifOrientationForDeviceOrientation(UIDevice.current.orientation)
}
and the following as the options:
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

let cameraIntrinsicData = CMGetAttachment(pixelBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}
Hopefully this saves someone the week that I lost!
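Putting those pieces together, a revised classifyCurrentImage along the lines of the question's code might look roughly like this (currentBuffer and requests are the properties from the question; the orientation helper is the one above):

func classifyCurrentImage() {
    guard let pixelBuffer = currentBuffer else { return }

    var requestHandlerOptions: [VNImageOption: Any] = [:]
    if let cameraIntrinsicData = CMGetAttachment(pixelBuffer,
                                                 key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                                 attachmentModeOut: nil) {
        requestHandlerOptions[.cameraIntrinsics] = cameraIntrinsicData
    }

    // Hand the pixel buffer to Vision directly instead of wrapping it in a CIImage.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: exifOrientationForCurrentDeviceOrientation(),
                                        options: requestHandlerOptions)
    do {
        try handler.perform(self.requests)
    } catch {
        print(error)
    }
}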
You need to call the perform method asynchronously, just as is done in the link you've shared.
Try below code:
func classifyCurrentImage() {
    guard let buffer = currentBuffer else { return }
    let image = CIImage(cvPixelBuffer: buffer)
    let options: [VNImageOption: Any] = [:]
    let imageRequestHandler = VNImageRequestHandler(ciImage: image, orientation: self.imageOrientation, options: options)
    DispatchQueue.global(qos: .userInteractive).async {
        do {
            try imageRequestHandler.perform(self.requests)
        } catch {
            print(error)
        }
    }
}
Update: from what I can tell, the issue was retain cycles (or the lack of [weak self]) in my demo repo. In Apple's sample project, they properly use [weak self] to avoid retain cycles, and the ARKit + Vision app runs fine on the iPhone XS.
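As an illustration of that fix, the completion handler that feeds handleFaces can capture the view controller weakly (a sketch, not the demo repo's exact code):

lazy var faceDetectionRequest = VNDetectFaceRectanglesRequest { [weak self] request, error in
    // Capturing self weakly keeps the request from retaining the view
    // controller (and its pixel buffers) beyond its lifetime.
    guard let self = self else { return }
    self.handleFaces(request: request, error: error)
}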

Swift - Image Data From CIImage QR Code / How to render CIFilter Output

I've been having this problem for a while now and looked at dozens of answers here and can't seem to find anything that helps.
Scenario
I am generating a QR Code on the iOS side of my app and want this QR code to be sent to the WatchKit Extension that I am currently developing.
How I am generating the QR Code
func createQR(with string: String) -> UIImage? {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // set the data to the contact data
        filter.setValue(string, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            return UIImage(ciImage: codeImage)
        }
    }
    return nil
}
What I want next
I want to get the data from the QR image so that I can send it to the Apple Watch app, like so:
let data = UIImagePNGRepresentation(QRCodeImage);
But this always returns nil because there is no image data backing the output from the filter.
Note: I know that there is no data associated with the CIImage because it hasn't been rendered; it's just the output from the filter. I don't know how to get around this because I'm pretty new to image processing and such. :/
What I've Tried
Creating a cgImage from the filter.outputImage
func createQR(with string: String) {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // set the data to the contact data
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.QRCode = UIImage(cgImage: cgImage)
            }
        }
    }
}
But this doesn't seem to work, because the image on the view is blank.
Creating a blank CIImage as Input Image
func update(with string: String) {
    let blankCiImage = CIImage(color: .white) // This probably isn't right...
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        filter.setValue(blankCiImage, forKey: kCIInputImageKey)
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.contactCode = UIImage(cgImage: cgImage)
                print(self.contactCode!)
                print(UIImagePNGRepresentation(self.contactCode!))
            }
        }
    }
}
This doesn't work either - my thought was to add a blank image to it and then do the filter on top of it, but I am probably not doing this right.
My Goal
Literally, just to get the data from the generated QR code. Most threads suggest UIImage(ciImage: output), but this doesn't have any backing data.
If anyone could help me out with this, that'd be great. And any explanation on how it works would be wonderful too.
Edit: I don't believe this is the same as the marked duplicate. The marked duplicate is about editing an existing image using CI filters and getting that data, whereas this is about an image created solely through a CI filter with no input image (QR codes). The other answer did not fully relate.
You have a couple of issues in your code. You need to convert your string to data using String.Encoding.isoLatin1 before passing it to the filter. Another issue is that to convert your CIImage to data you need to redraw/render your CIImage, and to prevent blurring when scaled you need to apply a transform to the image to increase its size:
extension StringProtocol {
    var qrCode: UIImage? {
        guard
            let data = data(using: .isoLatin1),
            let outputImage = CIFilter(name: "CIQRCodeGenerator",
                                       parameters: ["inputMessage": data, "inputCorrectionLevel": "M"])?.outputImage
        else { return nil }
        let size = outputImage.extent.integral
        let output = CGSize(width: 250, height: 250)
        let format = UIGraphicsImageRendererFormat()
        format.scale = UIScreen.main.scale
        return UIGraphicsImageRenderer(size: output, format: format).image { _ in
            outputImage
                .transformed(by: .init(scaleX: output.width / size.width, y: output.height / size.height))
                .image
                .draw(in: .init(origin: .zero, size: output))
        }
    }
}

extension CIImage {
    var image: UIImage { .init(ciImage: self) }
}
Playground testing:
let link = "https://stackoverflow.com/questions/51178573/swift-image-data-from-ciimage-qr-code-how-to-render-cifilter-output?noredirect=1"
let image = link.qrCode!
let data = image.jpegData(compressionQuality: 1) // 154785 bytes
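Since the renderer returns a bitmap-backed UIImage, PNG data (what the question originally asked for) works the same way:

let pngData = image.pngData() // no longer nil, ready to send to the WatchKit extension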

How can I fix a Core Image's CILanczosScaleTransform filter border artifact?

I want to implement an image downscaling algorithm for iOS. After reading that Core Image's CILanczosScaleTransform was a great fit for it, I implemented it the following way:
public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)
    let scale = Double(targetWidth) / Double(image.size.width)
    guard let ciImage = CIImage(image: image) else {
        fatalError("Couldn't create CIImage from image in input")
    }
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else {
        fatalError("The filter CILanczosScaleTransform is unavailable on this device.")
    }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let result = filter.outputImage else {
        fatalError("No output on filter.")
    }
    guard let cgImage = context.createCGImage(result, from: result.extent) else {
        fatalError("Couldn't create CG Image")
    }
    return UIImage(cgImage: cgImage)
}
It works well, but I get a classic border artifact, probably due to the pixel-neighborhood basis of the algorithm. I couldn't find anything about this in Apple's docs. Is there something smarter than rendering a bigger image and then cropping the border to solve this issue?
You can use clampedToExtent() (imageByClampingToExtent in Objective-C).
Calling this method ... creates an image of infinite extent by repeating
pixel colors from the edges of the original image.
You could use it like this:
...
guard let ciImage = CIImage(image: image)?.clampedToExtent() else {
    fatalError("Couldn't create CIImage from image in input")
}
See more information in Apple's documentation for clampedToExtent().
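Applied to the resize function above, that could look roughly like this. Note that clamping makes the filter output infinite in extent, so the result is rendered from the scaled image rect instead of result.extent (the crop rect below is derived from the scale factor, and a local CIContext stands in for the context property used in the question):

public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)
    let scale = Double(targetWidth) / Double(image.size.width)
    guard let ciImage = CIImage(image: image) else { return nil }

    // Clamp so Lanczos samples edge colors instead of transparent border pixels.
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    filter.setValue(ciImage.clampedToExtent(), forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let result = filter.outputImage else { return nil }

    // The clamped input makes the output extent infinite; render only the scaled rect.
    let targetRect = CGRect(x: 0, y: 0,
                            width: ciImage.extent.width * CGFloat(scale),
                            height: ciImage.extent.height * CGFloat(scale))
    let context = CIContext()
    guard let cgImage = context.createCGImage(result, from: targetRect) else { return nil }
    return UIImage(cgImage: cgImage)
}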

How to pixelize and unpixelize a UIImage or UIImageView?

I would like to pixelize and unpixelize a UIImage or a UIImageView using Swift, but I have no idea how to do that.
Maybe using effects, layers, or something like that?
This is a very easy task on iOS.
Pixelation
You can use the CIPixellate Core Image filter.
func pixellated(image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: "inputImage")
    guard let output = filter.outputImage else { return nil }
    return UIImage(ciImage: output)
}
Result
The default inputScale value is 8. However, you can increase or decrease the effect by manually setting the parameter.
filter.setValue(8, forKey: "inputScale")
//              ^
//              change this
Extension
You can also define the following extension
extension UIImage {
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
              let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: "inputImage")
        filter.setValue(scale, forKey: "inputScale")
        guard let output = filter.outputImage else { return nil }
        return UIImage(ciImage: output)
    }
}
Unpixelation
The mechanism is exactly the same; you just need to use a different filter. You can find the full list of filters in Apple's Core Image Filter Reference (but take a look at which parameters are available/required for each filter). I think CIGaussianBlur can do the job.
Of course, don't expect to be able to input a low-resolution, super-pixellated image and get a high-definition one back. That technology is available only in The X-Files :D
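For the blur direction, a counterpart to the pixellated(scale:) extension above might look something like this (the default radius is arbitrary):

extension UIImage {
    func blurred(radius: Double = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
              let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = filter.outputImage else { return nil }
        // Gaussian blur grows the extent; crop back to the original image rect.
        return UIImage(ciImage: output.cropped(to: ciImage.extent))
    }
}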
