Memory Management - CVPixelBuffer to UIImage, difference between using CIImage and VTCreateCGImageFromCVPixelBuffer - ios

I have two solutions for converting a CVPixelBuffer to a UIImage.
Using CIImage and its context
The first solution uses CIImage and CIContext.
func readAssetAndCache(completion: @escaping () -> Void) {
    /** Setup AVAssetReader **/
    reader.startReading()
    let context = CIContext()
    while let sampleBuffer = generator.trackoutput.copyNextSampleBuffer() {
        autoreleasepool {
            guard let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            let ciImage = CIImage(cvImageBuffer: imageBuffer)
            guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
            let image = UIImage(cgImage: cgImage)
            CacheManager.default.cache(image: image, forKey: "CachedImages") {
                completion()
            }
        }
    }
}
Using VTCreateCGImageFromCVPixelBuffer
The code looks like this.
func readAssetAndCache(completion: @escaping () -> Void) {
    /** Setup AVAssetReader **/
    reader.startReading()
    while let sampleBuffer = generator.trackoutput.copyNextSampleBuffer() {
        autoreleasepool {
            guard let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            if let image = UIImage(pixelBuffer: imageBuffer,
                                   scale: 1.0,
                                   orientation: self.imageOrientation) {
                CacheManager.default.cache(image: image, forKey: "CachedImages") {
                    completion()
                }
            }
        }
    }
}
In the second solution I use this extension method.
import UIKit
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer, scale: CGFloat, orientation: UIImageOrientation) {
        var image: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &image)
        guard let cgImage = image else { return nil }
        self.init(cgImage: cgImage, scale: scale, orientation: orientation)
    }
}
The whole process of extracting images from the video is as follows.
Whole Process
Extract all frames serially.
Cache each image to disk concurrently, using an OperationQueue (see the queue sketch just below).
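The operationQueue used by the caching code below isn't shown in the question; a minimal sketch of how it might be configured (the name and limits are assumptions) would be:
// Hypothetical configuration of the queue used by CacheManager; the real
// setup is not shown in the question.
let operationQueue: OperationQueue = {
    let queue = OperationQueue()
    queue.name = "com.example.imageCacheQueue"   // assumed label
    queue.qualityOfService = .utility
    queue.maxConcurrentOperationCount = 4        // cache several images at once
    return queue
}()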
Question
The problem occurs when images are cached concurrently using CIContext and its createCGImage(_:from:). In the first scenario, the memory footprint keeps growing until the whole caching process finishes. In the second, the memory is released as each work item finishes.
I really don't know why.
Here is my caching method related to this problem.
func cache(image: UIImage, forKey key: String, completion: @escaping () -> Void) {
    let operation = BlockOperation(block: { [weak self] in
        guard let `self` = self else { return }
        self.diskCache(image: image)
    })
    operation.completionBlock = completion
    operationQueue.addOperation(operation)
}

func diskCache(image: UIImage) {
    let data = UIImageJPEGRepresentation(image, 1.0)
    let key = "\(self.identifierPrefix).jpg"
    FileManager.default.createFile(atPath: self.cachePath.appendingPathComponent(key).path,
                                   contents: data, attributes: nil)
}

Related

iOS Object Based Saliency Image Request Not Cropping Correctly

I'm attempting to crop a UIImage in iOS using saliency via VNGenerateObjectnessBasedSaliencyImageRequest().
I'm following the documentation provided by Apple here https://developer.apple.com/documentation/vision/2908993-vnimagerectfornormalizedrect and
working off of this tutorial https://betterprogramming.pub/cropping-areas-of-interest-using-vision-in-ios-e83b5e53440b.
I'm also referencing this project https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency.
This is the code I currently have in place.
static func cropImage(_ image: UIImage, completionHandler: @escaping (UIImage?, String?) -> Void) {
    guard let originalImage = image.cgImage else { return }
    let saliencyRequest = VNGenerateObjectnessBasedSaliencyImageRequest()
    let requestHandler = VNImageRequestHandler(cgImage: originalImage, orientation: .right, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try requestHandler.perform([saliencyRequest])
            guard let results = saliencyRequest.results?.first else { return }
            if let observation = results as VNSaliencyImageObservation? {
                let salientObjects = observation.salientObjects
                if let ciimage = CIImage(image: image) {
                    let salientRect = VNImageRectForNormalizedRect((salientObjects?.first!.boundingBox)!,
                                                                   Int(ciimage.extent.size.width),
                                                                   Int(ciimage.extent.size.height))
                    let croppedImage = ciimage.cropped(to: salientRect)
                    let cgImage = iOSVisionHelper.convertCIImageToCGImage(inputImage: croppedImage)
                    if cgImage != nil {
                        let thumbnail = UIImage(cgImage: cgImage!)
                        completionHandler(thumbnail, nil)
                    } else {
                        completionHandler(nil, "Unable to crop image")
                    }
                }
            }
        } catch {
            completionHandler(nil, error.localizedDescription)
        }
    }
}
static func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
This works pretty well, except that it doesn't seem to adjust the height of the image. It crops the sides perfectly, but not the top or bottom.
Here are examples of the original image and it being cropped.
This is what the iOS demo app found at https://developer.apple.com/documentation/vision/highlighting_areas_of_interest_in_an_image_using_saliency generates.
Any help would be very much appreciated.
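One thing worth checking, sketched below as an assumption rather than a confirmed fix: the request handler is given a .right orientation, but the normalized bounding box is then mapped onto the un-oriented CIImage, so the width and height passed to VNImageRectForNormalizedRect may effectively be swapped. Orienting the CIImage the same way first keeps the two coordinate spaces consistent (CIImage.oriented(_:) requires iOS 11+).
// Sketch (assumption, not a verified fix): orient the CIImage to match the
// orientation given to VNImageRequestHandler before mapping the rect.
if let boundingBox = observation.salientObjects?.first?.boundingBox,
   var ciimage = CIImage(image: image) {
    ciimage = ciimage.oriented(.right)   // same orientation the handler used
    let salientRect = VNImageRectForNormalizedRect(boundingBox,
                                                   Int(ciimage.extent.width),
                                                   Int(ciimage.extent.height))
    let croppedImage = ciimage.cropped(to: salientRect)
    // ... continue exactly as in the original code
}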

UIImage is rotated 90 degrees when creating from url and set to the pasteboard

Here is simply what I do:
let pasteboard = UIPasteboard.general
let base64EncodedImageString = "here_base_64_string_image"
let data = Data(base64Encoded: base64EncodedImageString)
let url = data?.write(withName: "image.jpeg")
pasteboard.image = UIImage(url: url) //and now when I try to paste somewhere that image for example in imessage, it is rotated... why?
What may be important:
It happens only for images created by the camera.
However, if I use exactly the same process (!) to create activityItems for UIActivityViewController and share to the iMessage app, then it works... why? What makes the difference?
I use the two simple extensions below for UIImage and Data:
extension Data {
    func write(withName name: String) -> URL {
        let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(name)
        do {
            try write(to: url, options: NSData.WritingOptions.atomic)
            return url
        } catch {
            return url
        }
    }
}

extension UIImage {
    convenience init?(url: URL?) {
        guard let url = url else {
            return nil
        }
        do {
            self.init(data: try Data(contentsOf: url))
        } catch {
            return nil
        }
    }
}
Before the server returns the base64-encoded string, I upload an image from the camera like this:
func imagePickerController(
    _ picker: UIImagePickerController,
    didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]
) {
    let image = info[.originalImage] as? UIImage
    let encodedBase64 = image?.jpegData(compressionQuality: 0.9)?.base64EncodedString() ?? ""
    // upload encodedBase64 to the server... that is all
}
I am not sure, but I think UIPasteboard converts your image to PNG and discards its orientation. You can explicitly tell it the kind of data you are adding to the pasteboard, but I am not sure if this would work for your scenario.
extension Data {
var image: UIImage? { UIImage(data: self) }
}
Setting your pasteboard data:
UIPasteboard.general.setData(jpegData, forPasteboardType: "public.jpeg")
Loading the data from the pasteboard:
if let pbImage = UIPasteboard.general.data(forPasteboardType: "public.jpeg")?.image {
}
Or redraw your image before setting the pasteboard's image property:
extension UIImage {
    func flattened(isOpaque: Bool = true) -> UIImage? {
        if imageOrientation == .up { return self }
        UIGraphicsBeginImageContextWithOptions(size, isOpaque, scale)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
UIPasteboard.general.image = image.flattened()
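Combining the two ideas above, a short sketch (not verified against the asker's exact setup): flatten the camera image first, then put explicit JPEG bytes on the pasteboard so nothing gets re-encoded to PNG along the way.
// Sketch: normalize the orientation, then set explicit JPEG data.
if let flattened = image.flattened(),
   let jpegData = flattened.jpegData(compressionQuality: 0.9) {
    UIPasteboard.general.setData(jpegData, forPasteboardType: "public.jpeg")
}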

UIImageJPEGRepresentation returns nil (on a device - OK on Simulator)

I get nil on an iOS device (it works OK in the Simulator or a Playground) when I create a barcode UIImage with generateBarcode("ABCDEF") and call UIImageJPEGRepresentation or UIImagePNGRepresentation.
There is clearly something still wrong in the UIImage, as the debugger cannot display the UIImage either. The image exists and the cgImage property is set, but UIImageJPEGRepresentation doesn't like it.
I have tried to solve it as per: UIImageJPEGRepresentation returns nil
class func generateBarcode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        if let ciImage = filter.outputImage {
            if let cgImage = convertCIImageToCGImage(inputImage: ciImage) {
                return UIImage(cgImage: cgImage)
            }
        }
    }
    return nil
}

class func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
        return cgImage
    }
    return nil
}
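A workaround worth trying, sketched below as an assumption rather than a verified fix: re-draw the generated barcode into a plain bitmap-backed UIImage with UIGraphicsImageRenderer before calling UIImageJPEGRepresentation or UIImagePNGRepresentation, so the encoder gets an ordinary bitmap CGImage.
// Sketch: rasterize the barcode into a fresh bitmap context (iOS 10+).
class func rasterized(_ image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}

// Hypothetical usage:
// let barcode = generateBarcode(from: "ABCDEF").map(rasterized)
// let jpegData = barcode.flatMap { UIImageJPEGRepresentation($0, 1.0) }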

Not detecting QR Code from a static image

I want to detect a QR code from a still image.
Here is the code:
let ciimage = <Load image from asset and convert to CIImage>
let detector = CIDetector(ofType: CIDetectorTypeQRCode, context:nil, options:nil)
let features = detector.featuresInImage(ciimage)
print(features.count)
The image is a valid CIImage instance that contains a QR code, but I always get no features. Has anyone had the same issue?
Thank you very much.
I think I found the answer.
That code only works in a Playground or on a real iPhone; it never works on the simulator.
I generated the QR code image using the CIQRCodeGenerator Core Image filter.
I hope someone can check my answer here and give me advice.
Below is the source code I used.
enum QRCodeUtilError: ErrorType {
    case GenerationFailed
}

extension UIImage {
    /**
     This function tries to return a CIImage even if the image is CGImageRef based.
     - Returns: CIImage equal to the current image
     */
    func ciImage() -> CoreImage.CIImage? {
        // if CIImage is not nil then return it
        if self.CIImage != nil {
            return self.CIImage
        }
        guard let cgImage = self.CGImage else {
            return nil
        }
        return CoreImage.CIImage(CGImage: cgImage)
    }
}

/**
 QRCode utility class
 */
class QRCodeUtil {
    /**
     Generates an 'H' correction level QR code image
     - Parameter qrCode: QRCode String
     - Returns: UIImage instance of the QRCode
     */
    class func imageForQrCode(qrCode: String) throws -> UIImage {
        guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
            throw QRCodeUtilError.GenerationFailed
        }
        guard let data = qrCode.dataUsingEncoding(NSISOLatin1StringEncoding) else {
            throw QRCodeUtilError.GenerationFailed
        }
        filter.setValue(data, forKey: "inputMessage")
        filter.setValue("H", forKey: "inputCorrectionLevel")
        guard let outputImage = filter.outputImage else {
            throw QRCodeUtilError.GenerationFailed
        }
        /*
         Convert the image to CGImage based because
         it crashes when saving the image using
         UIImagePNGRepresentation or UIImageJPEGRepresentation
         if the image is CIImage based.
         */
        let context = CIContext(options: nil)
        let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent)
        return UIImage(CGImage: cgImage)
    }

    /**
     Detects a QRCode in an image
     - Parameter qrImage: UIImage which contains a QRCode
     - Returns: QRCode detected
     */
    class func qrCodeFromImage(qrImage: UIImage) -> String {
        let detector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        guard let ciImage = qrImage.ciImage() else {
            return ""
        }
        guard let feature = detector.featuresInImage(ciImage).last as? CIQRCodeFeature else {
            return ""
        }
        return feature.messageString
    }
}
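The code above is Swift 2. For reference, a minimal sketch of the same CIDetector-based detection in current Swift (assuming the UIImage has a usable CIImage or CGImage backing) would look like this:
import CoreImage
import UIKit

// Current-Swift sketch of the same detection.
func qrCode(from image: UIImage) -> String? {
    guard let ciImage = image.ciImage ?? image.cgImage.map({ CIImage(cgImage: $0) }) else {
        return nil
    }
    let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: ciImage) ?? []
    return (features.last as? CIQRCodeFeature)?.messageString
}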

Convert UIImage to NSData and convert back to UIImage in Swift?

I'm trying to save a UIImage to NSData and then read the NSData back to a new UIImage in Swift. To convert the UIImage to NSData I'm using the following code:
let imageData: NSData = UIImagePNGRepresentation(myImage)
How do I convert imageData (i.e., NSData) back to a new UIImage?
UIImage(data: imageData, scale: 1.0), presuming the image's scale is 1.
In Swift 4.2, use the code below to get the Data:
image.pngData()
Thanks. Helped me a lot. Converted to Swift 3 and it worked.
To save: let data = UIImagePNGRepresentation(image)
To load: let image = UIImage(data: data)
Use the imageWithData: method, which gets translated to Swift as UIImage(data:):
let image = UIImage(data: imageData)
Now in Swift 4.2 you can use the new pngData() instance method of UIImage to get the data from the image:
let profileImage = UIImage(named:"profile")!
let imageData = profileImage.pngData()
Details
Xcode 10.2.1 (10E1001), Swift 5
Solution 1
guard let image = UIImage(named: "img") else { return }
let jpegData = image.jpegData(compressionQuality: 1.0)
let pngData = image.pngData()
Solution 2.1
extension UIImage {
    func toData (options: NSDictionary, type: CFString) -> Data? {
        guard let cgImage = cgImage else { return nil }
        return autoreleasepool { () -> Data? in
            let data = NSMutableData()
            guard let imageDestination = CGImageDestinationCreateWithData(data as CFMutableData, type, 1, nil) else { return nil }
            CGImageDestinationAddImage(imageDestination, cgImage, options)
            CGImageDestinationFinalize(imageDestination)
            return data as Data
        }
    }
}
Usage of solution 2.1
// about properties: https://developer.apple.com/documentation/imageio/1464962-cgimagedestinationaddimage
let options: NSDictionary = [
    kCGImagePropertyOrientation: 6,
    kCGImagePropertyHasAlpha: true,
    kCGImageDestinationLossyCompressionQuality: 0.5
]
// https://developer.apple.com/documentation/mobilecoreservices/uttype/uti_image_content_types
guard let data = image.toData(options: options, type: kUTTypeJPEG) else { return }
let size = CGFloat(data.count)/1000.0/1024.0
print("\(size) mb")
Solution 2.2
extension UIImage {
    func toJpegData (compressionQuality: CGFloat, hasAlpha: Bool = true, orientation: Int = 6) -> Data? {
        guard cgImage != nil else { return nil }
        let options: NSDictionary = [
            kCGImagePropertyOrientation: orientation,
            kCGImagePropertyHasAlpha: hasAlpha,
            kCGImageDestinationLossyCompressionQuality: compressionQuality
        ]
        return toData(options: options, type: .jpeg)
    }

    func toData (options: NSDictionary, type: ImageType) -> Data? {
        guard cgImage != nil else { return nil }
        return toData(options: options, type: type.value)
    }

    // about properties: https://developer.apple.com/documentation/imageio/1464962-cgimagedestinationaddimage
    func toData (options: NSDictionary, type: CFString) -> Data? {
        guard let cgImage = cgImage else { return nil }
        return autoreleasepool { () -> Data? in
            let data = NSMutableData()
            guard let imageDestination = CGImageDestinationCreateWithData(data as CFMutableData, type, 1, nil) else { return nil }
            CGImageDestinationAddImage(imageDestination, cgImage, options)
            CGImageDestinationFinalize(imageDestination)
            return data as Data
        }
    }

    // https://developer.apple.com/documentation/mobilecoreservices/uttype/uti_image_content_types
    enum ImageType {
        case image                  // abstract image data
        case jpeg                   // JPEG image
        case jpeg2000               // JPEG-2000 image
        case tiff                   // TIFF image
        case pict                   // Quickdraw PICT format
        case gif                    // GIF image
        case png                    // PNG image
        case quickTimeImage         // QuickTime image format (OSType 'qtif')
        case appleICNS              // Apple icon data
        case bmp                    // Windows bitmap
        case ico                    // Windows icon data
        case rawImage               // base type for raw image data (.raw)
        case scalableVectorGraphics // SVG image
        case livePhoto              // Live Photo

        var value: CFString {
            switch self {
            case .image: return kUTTypeImage
            case .jpeg: return kUTTypeJPEG
            case .jpeg2000: return kUTTypeJPEG2000
            case .tiff: return kUTTypeTIFF
            case .pict: return kUTTypePICT
            case .gif: return kUTTypeGIF
            case .png: return kUTTypePNG
            case .quickTimeImage: return kUTTypeQuickTimeImage
            case .appleICNS: return kUTTypeAppleICNS
            case .bmp: return kUTTypeBMP
            case .ico: return kUTTypeICO
            case .rawImage: return kUTTypeRawImage
            case .scalableVectorGraphics: return kUTTypeScalableVectorGraphics
            case .livePhoto: return kUTTypeLivePhoto
            }
        }
    }
}
Usage of solution 2.2
let compressionQuality: CGFloat = 0.4
guard let data = image.toJpegData(compressionQuality: compressionQuality) else { return }
printSize(of: data)
let options: NSDictionary = [
    kCGImagePropertyHasAlpha: true,
    kCGImageDestinationLossyCompressionQuality: compressionQuality
]
guard let data2 = image.toData(options: options, type: .png) else { return }
printSize(of: data2)
Problems
Encoding an image takes a lot of CPU and memory. So, in this case, it is better to follow a few rules:
- do not run jpegData(compressionQuality:) on the main queue
- run only one jpegData(compressionQuality:) at a time
Wrong:
for i in 0...50 {
    DispatchQueue.global(qos: .utility).async {
        let quality = 0.02 * CGFloat(i)
        //let data = image.toJpegData(compressionQuality: quality)
        let data = image.jpegData(compressionQuality: quality)
        let size = CGFloat(data!.count)/1000.0/1024.0
        print("\(i), quality: \(quality), \(size.rounded()) mb")
    }
}
Right:
let serialQueue = DispatchQueue(label: "queue", qos: .utility, attributes: [], autoreleaseFrequency: .workItem, target: nil)
for i in 0...50 {
    serialQueue.async {
        let quality = 0.02 * CGFloat(i)
        //let data = image.toJpegData(compressionQuality: quality)
        let data = image.jpegData(compressionQuality: quality)
        let size = CGFloat(data!.count)/1000.0/1024.0
        print("\(i), quality: \(quality), \(size.rounded()) mb")
    }
}
Links
UTI Image Content Types
CGImageDestinationAddImage(_:_:_:)
Thinking about Memory: Converting UIImage to Data in Swift
Different resize techniques
To save as data:
If you want to save the "image" data shown in the imageView of the main storyboard, the following code will work.
let image = UIImagePNGRepresentation(imageView.image!) as NSData?
To load the "image" back into the imageView:
Pay close attention to the "!" and "?" marks and whether they match this exactly.
imageView.image = UIImage(data: image as! Data)
The "NSData" type is bridged to the "Data" type automatically during this process.
Image to Data:
if let img = UIImage(named: "xxx.png") {
    let pngdata = img.pngData()
}
if let img = UIImage(named: "xxx.jpeg") {
    let jpegdata = img.jpegData(compressionQuality: 1)
}
Data to Image:
guard let image = UIImage(data: pngData) else { return }
For safe execution, use an if-let block with the Data to prevent a crash, since UIImagePNGRepresentation returns an optional value.
if let img = UIImage(named: "TestImage.png") {
    if let data: Data = UIImagePNGRepresentation(img) {
        // Handle operations with data here...
    }
}
Note: Data is a Swift 3+ type. Use Data instead of NSData with Swift 3+.
Generic image operations (working for both PNG and JPG):
if let img = UIImage(named: "TestImage.png") { //UIImage(named: "TestImage.jpg")
    if let data: Data = UIImagePNGRepresentation(img) {
        handleOperationWithData(data: data)
    } else if let data: Data = UIImageJPEGRepresentation(img, 1.0) {
        handleOperationWithData(data: data)
    }
}
*******
func handleOperationWithData(data: Data) {
    // Handle operations with data here...
    if let image = UIImage(data: data) {
        // Use image...
    }
}
By using extension:
extension UIImage {
    var pngRepresentationData: Data? {
        return UIImagePNGRepresentation(self)
    }
    var jpegRepresentationData: Data? {
        return UIImageJPEGRepresentation(self, 1.0)
    }
}
*******
if let img = UIImage(named: "TestImage.png") { //UIImage(named: "TestImage.jpg")
    if let data = img.pngRepresentationData {
        handleOperationWithData(data: data)
    } else if let data = img.jpegRepresentationData {
        handleOperationWithData(data: data)
    }
}
*******
func handleOperationWithData(data: Data) {
    // Handle operations with data here...
    if let image = UIImage(data: data) {
        // Use image...
    }
}
Swift 5
Let the UIImage you create be called image:
image.pngData() as NSData?
Use this for a simple solution:
static var userProfilePhoto = UIImage()
guard let image = UIImage(named: "Photo") else { return }
guard let pngData = image.pngData() else { return }
userProfilePhoto = UIImage(data: pngData)!
