Reduce memory consumption while loading gif images in UIImageView - ios

I want to show a GIF image in a UIImageView, and with the code below (source: https://iosdevcenters.blogspot.com/2016/08/load-gif-image-in-swift_22.html; I do not understand all of the code), I am able to display GIF images. However, the memory consumption seems high (tested on a real device). Is there any way to modify the code below to reduce the memory consumption?
@IBOutlet weak var imageView: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    let url = "https://cdn-images-1.medium.com/max/800/1*oDqXedYUMyhWzN48pUjHyw.gif"
    let gifImage = UIImage.gifImageWithURL(url)
    imageView.image = gifImage
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}
fileprivate func < <T: Comparable>(lhs: T?, rhs: T?) -> Bool {
    switch (lhs, rhs) {
    case let (l?, r?):
        return l < r
    case (nil, _?):
        return true
    default:
        return false
    }
}
extension UIImage {
    public class func gifImageWithData(_ data: Data) -> UIImage? {
        guard let source = CGImageSourceCreateWithData(data as CFData, nil) else {
            print("image doesn't exist")
            return nil
        }
        return UIImage.animatedImageWithSource(source)
    }

    public class func gifImageWithURL(_ gifUrl: String) -> UIImage? {
        // Note: the original `guard let bundleURL: URL? = URL(string: gifUrl)` bound an
        // optional and could never fail; binding the non-optional URL is the intended check.
        guard let bundleURL = URL(string: gifUrl) else {
            return nil
        }
        guard let imageData = try? Data(contentsOf: bundleURL) else {
            return nil
        }
        return gifImageWithData(imageData)
    }

    public class func gifImageWithName(_ name: String) -> UIImage? {
        guard let bundleURL = Bundle.main
            .url(forResource: name, withExtension: "gif") else {
                return nil
        }
        guard let imageData = try? Data(contentsOf: bundleURL) else {
            return nil
        }
        return gifImageWithData(imageData)
    }
    class func delayForImageAtIndex(_ index: Int, source: CGImageSource!) -> Double {
        var delay = 0.1
        let cfProperties = CGImageSourceCopyPropertiesAtIndex(source, index, nil)
        let gifProperties: CFDictionary = unsafeBitCast(
            CFDictionaryGetValue(cfProperties,
                Unmanaged.passUnretained(kCGImagePropertyGIFDictionary).toOpaque()),
            to: CFDictionary.self)
        var delayObject: AnyObject = unsafeBitCast(
            CFDictionaryGetValue(gifProperties,
                Unmanaged.passUnretained(kCGImagePropertyGIFUnclampedDelayTime).toOpaque()),
            to: AnyObject.self)
        if delayObject.doubleValue == 0 {
            delayObject = unsafeBitCast(CFDictionaryGetValue(gifProperties,
                Unmanaged.passUnretained(kCGImagePropertyGIFDelayTime).toOpaque()),
                to: AnyObject.self)
        }
        delay = delayObject as! Double
        if delay < 0.1 {
            delay = 0.1
        }
        return delay
    }
    class func gcdForPair(_ a: Int?, _ b: Int?) -> Int {
        var a = a
        var b = b
        if b == nil || a == nil {
            if b != nil {
                return b!
            } else if a != nil {
                return a!
            } else {
                return 0
            }
        }
        if a < b {
            let c = a
            a = b
            b = c
        }
        var rest: Int
        while true {
            rest = a! % b!
            if rest == 0 {
                return b!
            } else {
                a = b
                b = rest
            }
        }
    }

    class func gcdForArray(_ array: Array<Int>) -> Int {
        if array.isEmpty {
            return 1
        }
        var gcd = array[0]
        for val in array {
            gcd = UIImage.gcdForPair(val, gcd)
        }
        return gcd
    }
    class func animatedImageWithSource(_ source: CGImageSource) -> UIImage? {
        let count = CGImageSourceGetCount(source)
        var images = [CGImage]()
        var delays = [Int]()
        for i in 0..<count {
            if let image = CGImageSourceCreateImageAtIndex(source, i, nil) {
                images.append(image)
            }
            let delaySeconds = UIImage.delayForImageAtIndex(Int(i),
                source: source)
            delays.append(Int(delaySeconds * 1000.0)) // Seconds to ms
        }
        let duration: Int = {
            var sum = 0
            for val: Int in delays {
                sum += val
            }
            return sum
        }()
        let gcd = gcdForArray(delays)
        var frames = [UIImage]()
        var frame: UIImage
        var frameCount: Int
        for i in 0..<count {
            frame = UIImage(cgImage: images[Int(i)])
            frameCount = Int(delays[Int(i)] / gcd)
            for _ in 0..<frameCount {
                frames.append(frame)
            }
        }
        let animation = UIImage.animatedImage(with: frames,
            duration: Double(duration) / 1000.0)
        return animation
    }
}
When I render the image as a normal PNG, the consumption is around 10 MB.

The GIF in question has a resolution of 480×288 and contains 10 frames.
Considering that UIImageView stores frames as 4-byte RGBA, this GIF occupies 4 × 10 × 480 × 288 = 5,529,600 bytes in RAM, which is more than 5 megabytes.
There are numerous ways to mitigate that, but only one of them puts no additional strain on the CPU; the others are mere CPU-for-RAM trade-offs.
The method I'm talking about is subclassing UIImageView and loading your GIFs by hand, preserving their internal representation (indexed image + palette). It would allow you to cut the memory usage fourfold.
N.B.: even though a GIF may store a full image for each frame (which is the case for the GIF in question), many do not. Instead, most frames contain only the pixels that changed since the previous one. Thus, in general, the internal GIF representation only allows frames to be displayed in order.
Other methods of saving RAM include, for example, re-reading every frame from disk just before displaying it, which is certainly not good for battery life.
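For illustration, here is a minimal sketch of that decode-on-demand trade-off, assuming iOS 10+; the StreamingGIFView name, the fixed 10 fps pacing, and the play/stop API are inventions for the example, not code from this thread. Only the compressed GIF data stays resident, and each frame is decoded with ImageIO just before display:
import UIKit
import ImageIO

final class StreamingGIFView: UIImageView {
    private var source: CGImageSource?
    private var frameCount = 0
    private var frameIndex = 0
    private var displayLink: CADisplayLink?

    // Keep only the compressed GIF bytes in memory; decode one frame at a time.
    func play(gifData: Data) {
        source = CGImageSourceCreateWithData(gifData as CFData, nil)
        frameCount = source.map(CGImageSourceGetCount) ?? 0
        guard frameCount > 0 else { return }
        displayLink = CADisplayLink(target: self, selector: #selector(step))
        displayLink?.preferredFramesPerSecond = 10 // assumption: ~0.1 s per frame
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func step() {
        guard let source = source else { return }
        // Decode exactly one frame; the previous frame's bitmap can be freed
        // as soon as `image` is replaced.
        if let cgImage = CGImageSourceCreateImageAtIndex(source, frameIndex, nil) {
            image = UIImage(cgImage: cgImage)
        }
        frameIndex = (frameIndex + 1) % frameCount
    }

    // CADisplayLink retains its target, so call stop() to break the cycle.
    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}
This trades CPU and battery for RAM, as the answer above notes: every frame is re-decoded each time it comes around.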

To display GIFs with less memory consumption, try BBWebImage.
BBWebImage decides how many image frames to decode and cache depending on the current memory usage. If free memory is insufficient, only some of the frames are decoded and cached.
For Swift 4:
// BBAnimatedImageView (a UIImageView subclass) displays animated images
imageView = BBAnimatedImageView(frame: frame)
// Load and display the GIF
imageView.bb_setImage(with: url,
                      placeholder: UIImage(named: "placeholder")) { (image: UIImage?, data: Data?, error: Error?, cacheType: BBImageCacheType) in
    // Do something when loading finishes
}

Related

UILabel Not Updating Consistently

I have a very simple update method, where I've included the debugging lines.
@IBOutlet weak var meterLabel: UILabel!

func updateMeter(string: String) {
    if Thread.isMainThread {
        meterLabel.text = string
    } else {
        DispatchQueue.main.sync {
            meterLabel.text = string
        }
    }
    print(string)
}
Obviously string is never nil. The function updateMeter is called about three times a second, yet currently in the simulator I do not see the UILabel change (it does change during calls to the same updateMeter from elsewhere). Is there any reason why changing a UILabel's text would not have a visible result on the main thread?
Called here:
public func startRecording() {
    let recordingPeriod = TimeInterval(Float(Constants.windowSize) / Float(Constants.sampleFrequency))
    var index = 0
    Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { timer in
        let audioRecorder = self.AudioRecorders[index]!
        audioRecorder.deleteRecording()
        audioRecorder.record()
        DispatchQueue.main.asyncAfter(deadline: .now() + recordingPeriod) {
            if let pitch = self.finishSampling(audioRecorder: audioRecorder, index: self.AudioRecorders.index(of: audioRecorder)) {
                self.meterViewController?.updateMeter(string: String(pitch))
            }
        }
        index = index + 1
        if index == 4 { index = 0 }
        if !(self.keepRecording ?? false) { timer.invalidate() }
    }
}
Other methods called:
private func finishSampling(audioRecorder: AVAudioRecorder?, index: Int?) -> Float? {
    audioRecorder?.stop()
    if let index = index, var (data, _, _) = loadAudioSignal(audioURL: getDirectory(for: index)) {
        let pitch = getPitch(&data, Int32(data.count), Int32(Constants.windowSize), Int32(Constants.sampleFrequency))
        return Float(pitch)
    }
    return nil
}

private func loadAudioSignal(audioURL: URL) -> (signal: [Float], rate: Double, frameCount: Int)? {
    guard
        let file = try? AVAudioFile(forReading: audioURL),
        let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: file.fileFormat.sampleRate, channels: file.fileFormat.channelCount, interleaved: false),
        let buf = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(file.length))
    else {
        return nil
    }
    try? file.read(into: buf)
    let floatArray = Array(UnsafeBufferPointer(start: buf.floatChannelData?[0], count: Int(buf.frameLength)))
    return (signal: floatArray, rate: file.fileFormat.sampleRate, frameCount: Int(file.length))
}
Where getPitch does some simple processing and runs relatively quickly.
By calling usleep you are blocking the main thread. The main thread is the thread that updates the UI. Since it is blocked, it cannot do that.
You should use an alternate approach, such as a Timer, to periodically update the label.
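A minimal sketch of that Timer-based approach (currentMeterString() is a hypothetical helper standing in for whatever produces the text; it is not from the original question):
var meterTimer: Timer?

func startMeterUpdates() {
    // Timers scheduled on the main run loop fire on the main thread,
    // so it is safe to touch UIKit directly in the handler.
    meterTimer = Timer.scheduledTimer(withTimeInterval: 1.0 / 3.0, repeats: true) { [weak self] _ in
        guard let self = self else { return }
        self.meterLabel.text = self.currentMeterString() // hypothetical helper
    }
}

func stopMeterUpdates() {
    meterTimer?.invalidate()
    meterTimer = nil
}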

Unwrapping optional value (returned by data.withUnsafeBytes(_:)) sometimes does not work with guard let

I have an issue with a guard let statement which behaves strangely. The whole code is below. The else block of the statement guard let data = readData, let size = sizeOfData else ... in the method readActivity(subdata: Data) is wrongly executed even though readData and sizeOfData are not nil.
Code
import Foundation

enum ActivityDataReaderError: Error {
    case activityIsReadingOtherCentral
    case bluetooth(Error?)
    case staleData
}

protocol ActivityDataReaderDelegate: class {
    func didReadActivity(data: Data)
    func didFailToReadActivity(error: ActivityDataReaderError)
}

final class ActivityDataReader {
    private var sizeOfData: Int?
    private var isOtherDeviceReading: Bool {
        // 0xFFFF
        return sizeOfData == 65535
    }
    private var readData: Data?
    var isEmpty: Bool {
        return sizeOfData == nil
    }
    weak var delegate: ActivityDataReaderDelegate?

    static func timestampValue(_ timestamp: UInt32) -> Data {
        var value = timestamp
        return Data(buffer: UnsafeBufferPointer(start: &value, count: 1))
    }

    func reset() {
        readData = nil
        sizeOfData = nil
        NSLog("reset() -- \(Thread.current)")
    }

    func readActivity(data: Data?, error: Error? = nil) {
        guard let data = data else {
            delegate?.didFailToReadActivity(error: .bluetooth(error))
            return
        }
        let isFirstChunk = readData == nil
        if isFirstChunk {
            let sizeData = data.subdata(in: 0..<2)
            sizeOfData = sizeData.withUnsafeBytes { $0.pointee }
            guard !isOtherDeviceReading else {
                delegate?.didFailToReadActivity(error: .activityIsReadingOtherCentral)
                return
            }
            NSLog(String("readActivity() Size of data: \(String(describing: sizeOfData))"))
            let subdata = data.subdata(in: 2..<data.count)
            readActivity(subdata: subdata)
        } else {
            readActivity(subdata: data)
        }
    }

    private func readActivity(subdata: Data) {
        if let lastReadData = readData {
            readData = lastReadData + subdata
        } else {
            readData = subdata
        }
        guard let data = readData, let size = sizeOfData else {
            NSLog("WTF? data:\(String(describing: readData)), "
                + "sizeOfData: \(String(describing: sizeOfData)), "
                + "thread: \(Thread.current)")
            assertionFailure("WTF")
            return
        }
        NSLog("subdata: \(String(describing: subdata)), "
            + "totalReadBytes: \(data.count), "
            + "size: \(size)")
        if data.count == size {
            delegate?.didReadActivity(data: data)
            reset()
        }
    }
}
Test
A test which sometimes passes and sometimes crashes because of assertionFailure("WTF").
class ActivityDataServiceReaderTests: XCTestCase {
    var service: ActivityDataReader?

    override func setUp() {
        super.setUp()
        service = ActivityDataReader()
    }

    override func tearDown() {
        service = nil
        super.tearDown()
    }

    func testBufferIsNotEmpty() {
        NSLog("testBufferIsNotEmpty thread: \(Thread.current)")
        guard let service = service else { fatalError() }
        let firstDataBytes = [UInt8.min]
        let data1 = Data(bytes: [7, 0] + firstDataBytes)
        service.readActivity(data: data1)
        XCTAssertFalse(service.isEmpty)
        service.reset()
        XCTAssertTrue(service.isEmpty)
    }
}
Console log in case of a crash
2018-10-25 14:53:30.033573+0200 GuardBug[84042:11188210] WTF? data:Optional(1 bytes), sizeOfData: Optional(7), thread: <NSThread: 0x600003399d00>{number = 1, name = main}
Environment
Xcode 10
Swift 4.1 with the legacy build system
Swift 4.2
In my opinion, there is no possible way to execute the code in the else block of that guard let in readActivity(subdata: Data). Everything is running on the main thread. Am I missing something? How is it possible that the test sometimes passes and sometimes crashes?
Thank you for any help.
Edit:
A narrower reproduction of the guard let + data.withUnsafeBytes problem:
func testGuardLet() {
    let data = Data(bytes: [7, 0, UInt8.min])
    let sizeData = data.subdata(in: 0 ..< 2)
    let size: Int? = sizeData.withUnsafeBytes { $0.pointee }
    guard let unwrappedSize = size else {
        NSLog("failure: \(size)")
        XCTFail()
        return
    }
    NSLog("success: \(unwrappedSize)")
}
Log:
2018-10-25 16:32:19.497540+0200 GuardBug[90576:11351167] failure: Optional(7)
Thanks to help at https://forums.swift.org/t/unwrapping-value-with-guard-let-sometimes-does-not-work-with-result-from-data-withunsafebytes-0-pointee/17357, the problem was with the line:
let size: Int? = sizeData.withUnsafeBytes { $0.pointee }
where the read data was cast to an optional Int (8 bytes long) even though sizeData itself was just 2 bytes long. I have no idea how it sometimes worked, but the solution, which seems to work properly, is to use withUnsafeBytes in the following way:
let size = sizeData.withUnsafeBytes { (pointer: UnsafePointer<UInt16>) in pointer.pointee }
The returned value is not optional and has the proper type, UInt16 (2 bytes long).
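For comparison, a minimal sketch of the same fix using the raw-buffer variant of withUnsafeBytes from later Swift toolchains (an assumption about Swift 5, not part of the original fix):
// load(as:) reads exactly MemoryLayout<UInt16>.size bytes and traps if the
// 2-byte buffer is too small, instead of silently reading past its end.
let size = Int(sizeData.withUnsafeBytes { $0.load(as: UInt16.self) })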
If you check the documentation, there is a warning:
Warning: The byte pointer argument should not be stored and used outside of the lifetime of the call to the closure.
It seems like you should deal with the size inside the closure body:
func testGuardLet() {
    let data = Data(bytes: [7, 0, UInt8.min])
    var sizeData = data.subdata(in: 0 ..< 2)
    withUnsafeBytes(of: &sizeData) { bytes in
        print(bytes.count)
        for byte in bytes {
            print(byte)
        }
    }
    let bytes = withUnsafeBytes(of: &sizeData) { bytes in
        return bytes // BUGS ☠️☠️☠️
    }
}

Swift 4 - How to update progressView in loop

I have created this progressView:
progress = UIProgressView(progressViewStyle: .default)
progress.center = view.center
progress.setProgress(0.5, animated: true)
view.addSubview(progress)
In my viewDidLoad method I call getLandGradingImages, which loops over its results and calls getLandGradingImage for each one. What would be the best way to update the progressView I created during this entire process?
getLandGradingImages(jobNo: jobNo) { result in
    // Define axis variables
    var x = 25
    var y = 80
    // Define image counter variables
    var counterImages = 0
    var actualImageCounter = 0
    // For each lot image
    for item in result {
        self.getLandGradingImage(image: item["imageBytes"] as! String) { data in
            // Create an image from the data
            let image = UIImage(data: data)
            // Add the image to an image view
            let imageView = UIImageView(image: image!)
            // Set the image view frame
            imageView.frame = CGRect(x: x, y: y, width: 100, height: 100)
            // Define a UITapGestureRecognizer
            let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(self.imageTapped(tapGestureRecognizer:)))
            // Enable user interaction
            imageView.isUserInteractionEnabled = true
            // Assign the tap gesture to the image view
            imageView.addGestureRecognizer(tapGestureRecognizer)
            // Add the image view to the view
            self.view.addSubview(imageView)
            // Increase the x axis
            x = x + 115
            // Increase the image counters
            counterImages = counterImages + 1
            actualImageCounter = actualImageCounter + 1
            // If the image counter is equal to 3, start a new line
            if counterImages == 3 {
                // Increase the y axis
                y = y + 115
                // Reset the x axis
                x = 25
                // Reset the image counter
                counterImages = 0
            }
            if actualImageCounter == result.count {
                //self.stopIndicator()
            }
        }
    }
}
Here are those two methods I am calling in this process:
func getLandGradingImages(jobNo: String, completionHandler: @escaping (_ result: Array<Dictionary<String, Any>>) -> Void) {
    // Define an array for the returned data
    var returnedResults = Array<Dictionary<String, Any>>()
    // Call the API
    WebService().GetLandGradingImages(jobNo: jobNo) { (result: Array<Dictionary<String, Any>>) in
        DispatchQueue.main.async {
            // Return our results
            returnedResults = result
            completionHandler(returnedResults)
        }
    }
}
func getLandGradingImage(image: String, completionHandler: @escaping (_ result: Data) -> Void) {
    // Define a variable for the returned data
    var returnedResults = Data()
    // Call the API
    WebService().GetLandGradingImage(image: image) { (result: Data) in
        DispatchQueue.main.async {
            // Return our results
            returnedResults = result
            completionHandler(returnedResults)
        }
    }
}
For these calls, I am using Alamofire:
func GetLandGradingImage(image: String, completion: @escaping (_ result: Data) -> Void) {
    let imagePath: String = image
    let url = URL(string: webserviceImages + imagePath)!
    Alamofire.request(url).authenticate(user: self.appDelegate.username!, password: self.appDelegate.password!).responseData { response in
        let noData = Data()
        if response.error == nil {
            if let data = response.data {
                completion(data)
            }
        } else {
            completion(noData)
        }
    }
}
I have tried this:
var counter: Int = 0 {
    didSet {
        let fractionalProgress = Float(counter) / 100.0
        let animated = counter != 0
        progress.setProgress(fractionalProgress, animated: animated)
    }
}
and inside the for loop:
DispatchQueue.global(qos: .background).async {
    sleep(1)
    DispatchQueue.main.async {
        self.counter = self.counter + 1
    }
}
Still nothing: the progress view appears but does not get updated.
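A minimal sketch of one possible fix (not from the original thread), assuming the wrappers above keep dispatching completions onto the main queue: derive the progress from the real completed/total ratio instead of a counter divided by a hard-coded 100:
getLandGradingImages(jobNo: jobNo) { result in
    var completed = 0
    for item in result {
        self.getLandGradingImage(image: item["imageBytes"] as! String) { data in
            // This closure runs on the main queue (see the wrappers above),
            // so the UI can be updated directly.
            completed += 1
            self.progress.setProgress(Float(completed) / Float(result.count),
                                      animated: true)
            // ... existing image view layout code ...
        }
    }
}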

Toggle flash in ios swift

I am building an image classifier app. On the camera screen I have a switch which I want to use to toggle the flash, so that the user can turn it on in low light.
Here is my code:
import UIKit
import AVFoundation
import Vision

// controlling the pace of the machine vision analysis
var lastAnalysis: TimeInterval = 0
var pace: TimeInterval = 0.33 // in seconds; classification will not repeat faster than this value
// performance tracking
let trackPerformance = false // use "true" for performance logging
var frameCount = 0
let framesPerSample = 10
var startDate = NSDate.timeIntervalSinceReferenceDate
var flash = 0

class ImageDetectionViewController: UIViewController {
    var callBackImageDetection: (State) -> Void = { state in
    }

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var stackView: UIStackView!
    @IBOutlet weak var lowerView: UIView!

    @IBAction func swithch(_ sender: UISwitch) {
        if sender.isOn == true {
            stopActiveSession()
            let captureSession = AVCaptureSession()
            let captureDevice: AVCaptureDevice?
            setupCamera(flash: 1)
        }
    }

    var previewLayer: AVCaptureVideoPreviewLayer!
    let bubbleLayer = BubbleLayer(string: "")
    let queue = DispatchQueue(label: "videoQueue")
    var captureSession = AVCaptureSession()
    var captureDevice: AVCaptureDevice?
    let videoOutput = AVCaptureVideoDataOutput()
    var unknownCounter = 0 // used to track how many unclassified images in a row
    let confidence: Float = 0.8

    // MARK: Load the Model
    let targetImageSize = CGSize(width: 227, height: 227) // must match model data input

    lazy var classificationRequest: [VNRequest] = {
        do {
            // Load the Custom Vision model.
            // To add a new model, drag it to the Xcode project browser making sure that the "Target Membership" is checked.
            // Then update the following line with the name of your new model.
            // let model = try VNCoreMLModel(for: Fruit().model)
            let model = try VNCoreMLModel(for: CodigocubeAI().model)
            let classificationRequest = VNCoreMLRequest(model: model, completionHandler: self.handleClassification)
            return [classificationRequest]
        } catch {
            fatalError("Can't load Vision ML model: \(error)")
        }
    }()
    // MARK: Handle image classification results
    func handleClassification(request: VNRequest, error: Error?) {
        guard let observations = request.results as? [VNClassificationObservation]
            else { fatalError("unexpected result type from VNCoreMLRequest") }
        guard let best = observations.first else {
            fatalError("classification didn't return any results")
        }
        // Use results to update user interface (includes basic filtering)
        print("\(best.identifier): \(best.confidence)")
        if best.identifier.starts(with: "Unknown") || best.confidence < confidence {
            if self.unknownCounter < 3 { // a bit of a low-pass filter to avoid flickering
                self.unknownCounter += 1
            } else {
                self.unknownCounter = 0
                DispatchQueue.main.async {
                    self.bubbleLayer.string = nil
                }
            }
        } else {
            self.unknownCounter = 0
            DispatchQueue.main.async { [weak self] in
                guard let strongSelf = self else {
                    return
                }
                // Trimming labels because they sometimes have unexpected line endings which show up in the GUI
                let identifierString = best.identifier.trimmingCharacters(in: CharacterSet.whitespacesAndNewlines)
                strongSelf.bubbleLayer.string = identifierString
                let state: State = strongSelf.getState(identifierStr: identifierString)
                strongSelf.stopActiveSession()
                strongSelf.navigationController?.popViewController(animated: true)
                strongSelf.callBackImageDetection(state)
            }
        }
    }
    func getState(identifierStr: String) -> State {
        var state: State = .none
        if identifierStr == "entertainment" {
            state = .entertainment
        } else if identifierStr == "geography" {
            state = .geography
        } else if identifierStr == "history" {
            state = .history
        } else if identifierStr == "knowledge" {
            state = .education
        } else if identifierStr == "science" {
            state = .science
        } else if identifierStr == "sports" {
            state = .sports
        } else {
            state = .none
        }
        return state
    }
    // MARK: Lifecycle
    override func viewDidLoad() {
        super.viewDidLoad()
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewView.layer.addSublayer(previewLayer)
    }

    override func viewDidAppear(_ animated: Bool) {
        self.edgesForExtendedLayout = UIRectEdge.init(rawValue: 0)
        bubbleLayer.opacity = 0.0
        bubbleLayer.position.x = self.view.frame.width / 2.0
        bubbleLayer.position.y = lowerView.frame.height / 2
        lowerView.layer.addSublayer(bubbleLayer)
        setupCamera(flash: 2)
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer.frame = previewView.bounds
    }

    // MARK: Camera handling
    func setupCamera(flash: Int) {
        let deviceDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
        if let device = deviceDiscovery.devices.last {
            if flash == 1 {
                if device.hasTorch {
                    do {
                        try device.lockForConfiguration()
                        if device.isTorchAvailable {
                            do {
                                try device.setTorchModeOn(level: 0.2)
                            } catch {
                                print(error)
                            }
                            device.unlockForConfiguration()
                        }
                    } catch {
                        print(error)
                    }
                }
            }
            captureDevice = device
            beginSession()
        }
    }

    func beginSession() {
        do {
            videoOutput.videoSettings = [((kCVPixelBufferPixelFormatTypeKey as NSString) as String): (NSNumber(value: kCVPixelFormatType_32BGRA) as! UInt32)]
            videoOutput.alwaysDiscardsLateVideoFrames = true
            videoOutput.setSampleBufferDelegate(self, queue: queue)
            captureSession.sessionPreset = .hd1920x1080
            captureSession.addOutput(videoOutput)
            let input = try AVCaptureDeviceInput(device: captureDevice!)
            captureSession.addInput(input)
            captureSession.startRunning()
        } catch {
            print("error connecting to capture device")
        }
    }

    func stopActiveSession() {
        if captureSession.isRunning == true {
            captureSession.stopRunning()
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        self.stopActiveSession()
    }

    deinit {
        print("deinit called")
    }
}
// MARK: Video Data Delegate
extension ImageDetectionViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    // called for each frame of video
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let currentDate = NSDate.timeIntervalSinceReferenceDate
        // control the pace of the machine vision to protect battery life
        if currentDate - lastAnalysis >= pace {
            lastAnalysis = currentDate
        } else {
            return // don't run the classifier more often than we need
        }
        // keep track of performance and log the frame rate
        if trackPerformance {
            frameCount = frameCount + 1
            if frameCount % framesPerSample == 0 {
                let diff = currentDate - startDate
                if diff > 0 {
                    if pace > 0.0 {
                        print("WARNING: Frame rate of image classification is being limited by \"pace\" setting. Set to 0.0 for fastest possible rate.")
                    }
                    print("\(String.localizedStringWithFormat("%0.2f", (diff / Double(framesPerSample))))s per frame (average)")
                }
                startDate = currentDate
            }
        }
        // Crop and resize the image data.
        // Note, this uses a Core Image pipeline that could be appended with other pre-processing.
        // If we don't want to do anything custom, we can remove this step and let the Vision framework handle
        // crop and resize as long as we are careful to pass the orientation properly.
        guard let croppedBuffer = croppedSampleBuffer(sampleBuffer, targetSize: targetImageSize) else {
            return
        }
        do {
            let classifierRequestHandler = VNImageRequestHandler(cvPixelBuffer: croppedBuffer, options: [:])
            try classifierRequestHandler.perform(classificationRequest)
        } catch {
            print(error)
        }
    }
}
let context = CIContext()
var rotateTransform: CGAffineTransform?
var scaleTransform: CGAffineTransform?
var cropTransform: CGAffineTransform?
var resultBuffer: CVPixelBuffer?

func croppedSampleBuffer(_ sampleBuffer: CMSampleBuffer, targetSize: CGSize) -> CVPixelBuffer? {
    guard let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        fatalError("Can't convert to CVImageBuffer.")
    }
    // Only doing these calculations once for efficiency.
    // If the incoming images could change orientation or size during a session, this would need to be reset when that happens.
    if rotateTransform == nil {
        let imageSize = CVImageBufferGetEncodedSize(imageBuffer)
        let rotatedSize = CGSize(width: imageSize.height, height: imageSize.width)
        guard targetSize.width < rotatedSize.width, targetSize.height < rotatedSize.height else {
            fatalError("Captured image is smaller than image size for model.")
        }
        let shorterSize = (rotatedSize.width < rotatedSize.height) ? rotatedSize.width : rotatedSize.height
        rotateTransform = CGAffineTransform(translationX: imageSize.width / 2.0, y: imageSize.height / 2.0).rotated(by: -CGFloat.pi / 2.0).translatedBy(x: -imageSize.height / 2.0, y: -imageSize.width / 2.0)
        let scale = targetSize.width / shorterSize
        scaleTransform = CGAffineTransform(scaleX: scale, y: scale)
        // Crop input image to output size
        let xDiff = rotatedSize.width * scale - targetSize.width
        let yDiff = rotatedSize.height * scale - targetSize.height
        cropTransform = CGAffineTransform(translationX: xDiff / 2.0, y: yDiff / 2.0)
    }
    // Convert to CIImage because it is easier to manipulate
    let ciImage = CIImage(cvImageBuffer: imageBuffer)
    let rotated = ciImage.transformed(by: rotateTransform!)
    let scaled = rotated.transformed(by: scaleTransform!)
    let cropped = scaled.transformed(by: cropTransform!)
    // Note that the above pipeline could be easily appended with other image manipulations.
    // For example, to change the image contrast. It would be most efficient to handle all of
    // the image manipulation in a single Core Image pipeline because it can be hardware optimized.
    // Only need to create this buffer one time and then we can reuse it for every frame
    if resultBuffer == nil {
        let result = CVPixelBufferCreate(kCFAllocatorDefault, Int(targetSize.width), Int(targetSize.height), kCVPixelFormatType_32BGRA, nil, &resultBuffer)
        guard result == kCVReturnSuccess else {
            fatalError("Can't allocate pixel buffer.")
        }
    }
    // Render the Core Image pipeline to the buffer
    context.render(cropped, to: resultBuffer!)
    // For debugging
    // let image = imageBufferToUIImage(resultBuffer!)
    // print(image.size) // set breakpoint to see image being provided to CoreML
    return resultBuffer
}

// Only used for debugging.
// Turns an image buffer into a UIImage that is easier to display in the UI or debugger.
func imageBufferToUIImage(_ imageBuffer: CVImageBuffer) -> UIImage {
    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    let quartzImage = context!.makeImage()
    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let image = UIImage(cgImage: quartzImage!, scale: 1.0, orientation: .right)
    return image
}
I am getting the error 'An AVCaptureOutput instance may not be added to more than one session'.
Now I want to give the user the ability to toggle the flash. How can I destroy the active camera session and open a new one with the flash on?
Can anyone help me, or suggest any other way to achieve this?
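A minimal sketch of one possible approach (an assumption, not an accepted answer from this thread): reconfigure the torch on the already-running session's device instead of tearing the session down, which avoids re-adding videoOutput and the 'may not be added to more than one session' error:
@IBAction func toggleFlash(_ sender: UISwitch) {
    // Reconfigure the torch on the existing device; the session keeps running.
    guard let device = captureDevice, device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        if sender.isOn {
            try device.setTorchModeOn(level: 0.2)
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print(error)
    }
}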

Is there a way to check if a UIImage has been decompressed?

I'm sure most of you have dealt with forced decompression on a background thread to enhance rendering performance. My question is whether there is a way to check if an image has been decompressed.
The technique below helped me check whether an image has been decompressed. It is simple code to understand:
import UIKit

class ViewController: UIViewController {
    var compressedImage: NSString?
    var decompressedImage: NSString?

    override func viewDidLoad() {
        super.viewDidLoad()
        let image = compressImage()
        var imageView = UIImageView(image: image)
        //self.view.addSubview(imageView)
        let decompressImage = deCompressImage(image: image)
        let imageData = Data(UIImagePNGRepresentation(decompressImage)!)
        print("***** Size after decompression \(imageData.description) **** ")
        imageView = UIImageView(image: decompressImage)
        decompressedImage = imageData.description as NSString?
        let decompressed = checkImageBeenDecompressed(decompressedImage: decompressedImage!, compressedImage: compressedImage!)
        print(decompressed)
        //self.view.addSubview(imageView)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func checkImageBeenDecompressed(decompressedImage: NSString, compressedImage: NSString) -> Bool {
        let decompressedSize = Int(decompressedImage.getNumFromString()!)
        let compressedSize = Int(compressedImage.getNumFromString()!)
        if decompressedSize! > compressedSize! {
            print("Image has been decompressed")
            return true
        }
        print("Image has not been decompressed")
        return false
    }

    func compressImage() -> UIImage {
        let oldImage = UIImage(named: "background.jpg")
        var imageData = Data(UIImagePNGRepresentation(oldImage!)!)
        print("***** Original Uncompressed Size \(imageData.description) **** ")
        imageData = UIImageJPEGRepresentation(oldImage!, 0.025)!
        print("***** Compressed Size \(imageData.description) **** ")
        compressedImage = imageData.description as NSString?
        let image = UIImage(data: imageData)
        return image!
    }

    func deCompressImage(image: UIImage) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
        image.draw(at: CGPoint.zero)
        let decompressedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return decompressedImage!
    }
}

extension NSString {
    func getNumFromString() -> String? {
        var numberString: NSString?
        let thisScanner = Scanner(string: self as String)
        let numbers = NSCharacterSet(charactersIn: "0123456789")
        thisScanner.scanUpToCharacters(from: numbers as CharacterSet, into: nil)
        thisScanner.scanCharacters(from: numbers as CharacterSet, into: &numberString)
        return numberString as? String
    }
}
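For reference, a minimal sketch of the same force-decompression step using the newer UIGraphicsImageRenderer API (assuming iOS 10+; not part of the original answer):
func forceDecompress(_ image: UIImage) -> UIImage {
    // Drawing the image into a bitmap context forces the compressed
    // data to be decoded into a bitmap up front.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)
    }
}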
