iOS 13 PHImageManager.default().requestImage and Fusuma

I am using the Fusuma pod in my project, but on iOS 13 there is a bug with images selected from the gallery.
Specifically, if the selected image comes from the gallery (on an iPhone running iOS 13, not the simulator), the returned image dimensions are width: 39 and height: 39. The functions below are Fusuma's, located in FusumaViewController:
private func requestImage(with asset: PHAsset, cropRect: CGRect, completion: @escaping (PHAsset, UIImage) -> Void) {
DispatchQueue.global(qos: .default).async(execute: {
let options = PHImageRequestOptions()
options.deliveryMode = .highQualityFormat
options.isNetworkAccessAllowed = true
options.normalizedCropRect = cropRect
options.resizeMode = .exact
let targetWidth = floor(CGFloat(asset.pixelWidth) * cropRect.width)
let targetHeight = floor(CGFloat(asset.pixelHeight) * cropRect.height)
let dimensionW = max(min(targetHeight, targetWidth), 1024 * UIScreen.main.scale)
let dimensionH = dimensionW * self.getCropHeightRatio()
let targetSize = CGSize(width: dimensionW, height: dimensionH)
PHImageManager.default().requestImage(for: asset, targetSize: targetSize, contentMode: .aspectFill, options: options) { result, info in
guard let result = result else { return }
DispatchQueue.main.async(execute: {
completion(asset, result)
})
}
})
}
private func fusumaDidFinishInMultipleMode() {
guard let view = albumView.imageCropView else { return }
let normalizedX = view.contentOffset.x / view.contentSize.width
let normalizedY = view.contentOffset.y / view.contentSize.height
let normalizedWidth = view.frame.width / view.contentSize.width
let normalizedHeight = view.frame.height / view.contentSize.height
let cropRect = CGRect(x: normalizedX,
y: normalizedY,
width: normalizedWidth,
height: normalizedHeight)
var images = [UIImage]()
var metaData = [ImageMetadata]()
for asset in albumView.selectedAssets {
requestImage(with: asset, cropRect: cropRect) { asset, result in
images.append(result)
metaData.append(self.getMetaData(asset: asset))
if asset == self.albumView.selectedAssets.last {
self.doDismiss {
self.delegate?.fusumaMultipleImageSelected(images, source: self.mode, metaData: metaData)
}
}
}
}
}
The requestImage(...) function returns an image with an incorrect size.

The documentation says PHImageManager may call the requestImage completion handler multiple times, and the final call should deliver an image at the requested targetSize.
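If intermediate (degraded) deliveries are the cause, one way to check is to inspect the info dictionary in the result handler and only forward the final image. A minimal sketch, reusing asset, targetSize, options, and completion from Fusuma's requestImage(with:cropRect:completion:) above:

PHImageManager.default().requestImage(for: asset,
                                      targetSize: targetSize,
                                      contentMode: .aspectFill,
                                      options: options) { result, info in
    // PHImageResultIsDegradedKey is true for the fast, low-quality delivery
    let isDegraded = (info?[PHImageResultIsDegradedKey] as? Bool) ?? false
    guard let result = result, !isDegraded else { return }
    DispatchQueue.main.async {
        completion(asset, result)
    }
}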

Related

How to get a thumbnail and the original image from UIImagePickerController?

After capturing a photo with the camera, I was compressing the image (to roughly 400 KB and 1 MB); this took almost 3 seconds on an iPhone 6 and less than a second on an iPhone 6s.
Is there any way to get a thumbnail and the original image without doing manual compression?
Code used for image compression
Extension for UIImage
extension UIImage {
// MARK: - UIImage+Resize
func compressTo(_ expectedSizeInMb:Int) -> Data? {
let sizeInBytes = expectedSizeInMb * 1024 * 1024
var needCompress:Bool = true
var imgData:Data?
var compressingValue:CGFloat = 1.0
while (needCompress && compressingValue > 0.0) {
if let data: Data = jpegData(compressionQuality: compressingValue) {
if data.count < sizeInBytes {
needCompress = false
imgData = data
} else {
compressingValue -= 0.1
}
} else {
// jpegData can return nil; bail out instead of looping forever
break
}
}
if let data = imgData {
if (data.count < sizeInBytes) {
return data
}
}
return nil
}
}
usage:
if let imageData = image.compressTo(1) {
print(imageData)
}
For images saved in the Photos library, try:
let phAsset = info[UIImagePickerController.InfoKey.phAsset] as! PHAsset
let options = PHImageRequestOptions()
options.deliveryMode = .fastFormat
options.isSynchronous = false
// you can change the target size to any CGSize(width:height:) you want
PHImageManager.default().requestImage(for: phAsset, targetSize: PHImageManagerMaximumSize, contentMode: .default, options: options, resultHandler: { image , _ in
let thumbnail = image
// use your thumbnail
})
For images captured with the camera, you can get the pixel dimensions without recalculating the data count:
let image = info[UIImagePickerController.InfoKey.originalImage] as! UIImage
// pixels are the same on each device’s camera
let widthPixels = image.size.width * image.scale
let heightPixels = image.size.height * image.scale
let sizeInBytes = 1024 * 1024
var thumbnail : UIImage! = nil
if Int(widthPixels * heightPixels) > sizeInBytes {
// assign custom width and height you need
let rect = CGRect(x: 0.0, y: 0.0, width: 100, height: 100)
UIGraphicsBeginImageContextWithOptions(rect.size, false, 1)
let context = UIGraphicsGetCurrentContext()
context?.interpolationQuality = .low
image.draw(in: rect)
let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
thumbnail = resizedImage
} else {
thumbnail = image
}

How to sync AVPlayer and MTKView

I have a project where users can take a video and later add filters to it or change basic settings like brightness and contrast. To accomplish this, I use BBMetalImage, which basically returns the video in an MTKView (named BBMetalView in the project).
Everything works great - I can play the video, add filters and the desired effects, but there is no audio. I asked the author about this, who recommended using an AVPlayer (or AVAudioPlayer) for this. So I did. However, the video and audio are out of sync. Possibly because of different bitrates in the first place, and the author of the library also mentioned the frame rate can differ because of the filter process (the time this consumes is variable):
The render view FPS is not exactly the same to the actual rate.
Because the video source output frame is processed by filters and the
filter process time is variable.
First, I crop my video to the desired aspect ratio (4:5). I save this file (480x600) locally, using AVVideoProfileLevelH264HighAutoLevel as AVVideoProfileLevelKey. My audio configuration, using NextLevelSessionExporter, has the following setup: AVEncoderBitRateKey: 128000, AVNumberOfChannelsKey: 2, AVSampleRateKey: 44100.
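For reference, the video and audio settings described above would look roughly like the following dictionaries (a sketch of the assumed key/value pairs, not the exact NextLevelSessionExporter configuration):

import AVFoundation

// Assumed video compression settings for the 480x600 (4:5) export
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 480,
    AVVideoHeightKey: 600,
    AVVideoCompressionPropertiesKey: [
        AVVideoProfileLevelKey: AVVideoProfileLevelH264HighAutoLevel
    ]
]

// Audio settings as described above
let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVEncoderBitRateKey: 128_000,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44_100
]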
Then, the BBMetalImage library takes this saved file and provides an MTKView (BBMetalView) to display the video, allowing me to add filters and effects in real time. The setup looks roughly like this:
self.metalView = BBMetalView(frame: CGRect(x: 0, y: self.view.center.y - ((UIScreen.main.bounds.width * 1.25) / 2), width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.width * 1.25))
self.view.addSubview(self.metalView)
self.videoSource = BBMetalVideoSource(url: outputURL)
self.videoSource.playWithVideoRate = true
self.videoSource.audioConsumer = self.metalAudio
self.videoSource.add(consumer: self.metalView)
self.videoSource.add(consumer: self.videoWriter)
self.audioItem = AVPlayerItem(url: outputURL)
self.audioPlayer = AVPlayer(playerItem: self.audioItem)
self.playerLayer = AVPlayerLayer(player: self.audioPlayer)
self.videoPreview.layer.addSublayer(self.playerLayer!)
self.playerLayer?.frame = CGRect(x: 0, y: 0, width: 0, height: 0)
self.playerLayer?.backgroundColor = UIColor.black.cgColor
self.startVideo()
And startVideo() goes like this:
audioPlayer.seek(to: .zero)
audioPlayer.play()
videoSource.start(progress: { (frameTime) in
print(frameTime)
}) { [weak self] (finish) in
guard let self = self else { return }
self.startVideo()
}
This is all probably pretty vague because of the external library/libraries. However, my question is pretty simple: is there any way I can sync the MTKView with my AVPlayer? It would help me a lot and I'm sure Silence-GitHub would also implement this feature into the library to help a lot of other users. Any ideas on how to approach this are welcome!
I customized BBMetalVideoSource as follows, and then it worked:
Create a delegate in BBMetalVideoSource to get the current time of the audio player with which we want to sync
In private func processAsset(progress:completion:), I replaced the block if useVideoRate { //... } with:
if useVideoRate {
if let playerTime = delegate.getAudioPlayerCurrentTime() {
let diff = CMTimeGetSeconds(sampleFrameTime) - playerTime
if diff > 0.0 {
sleepTime = diff
if sleepTime > 1.0 {
sleepTime = 0.0
}
usleep(UInt32(1000000 * sleepTime))
} else {
sleepTime = 0
}
}
}
This code helps resolve both problems: 1. no audio when previewing video effects, and 2. audio out of sync with video.
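For completeness, the delegate mentioned in step 1 might look something like this (the protocol and view controller names are assumptions; BBMetalVideoSource does not ship with this delegate):

import AVFoundation

// Hypothetical delegate so the video source can ask the audio player for its current time
protocol BBMetalVideoSourceAudioSyncDelegate: AnyObject {
    func getAudioPlayerCurrentTime() -> Double?
}

// In the view controller that owns the AVPlayer:
extension MyViewController: BBMetalVideoSourceAudioSyncDelegate {
    func getAudioPlayerCurrentTime() -> Double? {
        guard audioPlayer.currentItem != nil else { return nil }
        return CMTimeGetSeconds(audioPlayer.currentTime())
    }
}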
Given your circumstances, you seem to need to try one of two things:
1) Try and apply some sort of overlay that has the desired effect for your video. I could attempt something like this, but I have personally not done this.
2) This takes a little more time beforehand - in the sense that the program would have to take a few moments (depending on your filtering, time varies), to recreate a new video with the desired effects. You can try this out and see if it works for you.
I have made my own VideoCreator using some source code from SO.
//Recreates a new video with applied filter
public static func createFilteredVideo(asset: AVAsset, completionHandler: @escaping (_ asset: AVAsset) -> Void) {
let url = (asset as? AVURLAsset)!.url
let snapshot = url.videoSnapshot()
guard let image = snapshot else { return }
let fps = Int32(asset.tracks(withMediaType: .video)[0].nominalFrameRate)
let writer = VideoCreator(fps: Int32(fps), width: image.size.width, height: image.size.height, audioSettings: nil)
let timeScale = asset.duration.timescale
let timeValue = asset.duration.value
let frameTime = 1/Double(fps) * Double(timeScale)
let numberOfImages = Int(Double(timeValue)/Double(frameTime))
let queue = DispatchQueue(label: "com.queue.queue", qos: .utility)
let composition = AVVideoComposition(asset: asset) { (request) in
let source = request.sourceImage.clampedToExtent()
//This is where you create your filter and get your filtered result.
//Here is an example
let filter = CIFilter(name: "CIBlendWithMask")
filter!.setValue(maskImage, forKey: "inputMaskImage")
filter!.setValue(regCIImage, forKey: "inputImage")
let filteredImage = filter!.outputImage!.clamped(to: source.extent)
request.finish(with: filteredImage, context: nil)
}
var i = 0
getAudioFromURL(url: url) { (buffer) in
writer.addAudio(audio: buffer, time: .zero)
if i == 0 { writer.startCreatingVideo(initialBuffer: buffer, completion: {}) }
i += 1
}
let group = DispatchGroup()
for i in 0..<numberOfImages {
group.enter()
autoreleasepool {
let time = CMTime(seconds: Double(Double(i) * frameTime / Double(timeScale)), preferredTimescale: timeScale)
let image = url.videoSnapshot(time: time, composition: composition)
queue.async {
writer.addImageAndAudio(image: image!, audio: nil, time: time.seconds)
group.leave()
}
}
}
group.notify(queue: queue) {
writer.finishWriting()
let url = writer.getURL()
//Now create exporter to add audio then do completion handler
completionHandler(AVAsset(url: url))
}
}
static func getAudioFromURL(url: URL, completionHandlerPerBuffer: @escaping ((_ buffer: CMSampleBuffer) -> Void)) {
let asset = AVURLAsset(url: url, options: [AVURLAssetPreferPreciseDurationAndTimingKey: NSNumber(value: true as Bool)])
guard let assetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else {
fatalError("Couldn't load AVAssetTrack")
}
guard let reader = try? AVAssetReader(asset: asset)
else {
fatalError("Couldn't initialize the AVAssetReader")
}
reader.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
let outputSettingsDict: [String : Any] = [
AVFormatIDKey: Int(kAudioFormatLinearPCM),
AVLinearPCMBitDepthKey: 16,
AVLinearPCMIsBigEndianKey: false,
AVLinearPCMIsFloatKey: false,
AVLinearPCMIsNonInterleaved: false
]
let readerOutput = AVAssetReaderTrackOutput(track: assetTrack,
outputSettings: outputSettingsDict)
readerOutput.alwaysCopiesSampleData = false
reader.add(readerOutput)
reader.startReading()
while reader.status == .reading {
guard let readSampleBuffer = readerOutput.copyNextSampleBuffer() else { break }
completionHandlerPerBuffer(readSampleBuffer)
}
}
extension URL {
func videoSnapshot(time:CMTime? = nil, composition:AVVideoComposition? = nil) -> UIImage? {
let asset = AVURLAsset(url: self)
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero
generator.videoComposition = composition
let timestamp = time == nil ? CMTime(seconds: 1, preferredTimescale: 60) : time
do {
let imageRef = try generator.copyCGImage(at: timestamp!, actualTime: nil)
return UIImage(cgImage: imageRef)
}
catch let error as NSError
{
print("Image generation failed with error \(error)")
return nil
}
}
}
Below is the VideoCreator
//
// VideoCreator.swift
// AKPickerView-Swift
//
// Created by Impression7vx on 7/16/19.
//
import UIKit
import AVFoundation
import UIKit
import Photos
@available(iOS 11.0, *)
public class VideoCreator: NSObject {
private var settings:RenderSettings!
private var imageAnimator:ImageAnimator!
public override init() {
self.settings = RenderSettings()
self.imageAnimator = ImageAnimator(renderSettings: self.settings)
}
public convenience init(fps: Int32, width: CGFloat, height: CGFloat, audioSettings: [String:Any]?) {
self.init()
self.settings = RenderSettings(fps: fps, width: width, height: height)
self.imageAnimator = ImageAnimator(renderSettings: self.settings, audioSettings: audioSettings)
}
public convenience init(width: CGFloat, height: CGFloat) {
self.init()
self.settings = RenderSettings(width: width, height: height)
self.imageAnimator = ImageAnimator(renderSettings: self.settings)
}
func startCreatingVideo(initialBuffer: CMSampleBuffer?, completion: @escaping (() -> Void)) {
self.imageAnimator.render(initialBuffer: initialBuffer) {
completion()
}
}
func finishWriting() {
self.imageAnimator.isDone = true
}
func addImageAndAudio(image:UIImage, audio:CMSampleBuffer?, time:CFAbsoluteTime) {
self.imageAnimator.addImageAndAudio(image: image, audio: audio, time: time)
}
func getURL() -> URL {
return settings!.outputURL
}
func addAudio(audio: CMSampleBuffer, time: CMTime) {
self.imageAnimator.videoWriter.addAudio(buffer: audio, time: time)
}
}
@available(iOS 11.0, *)
public struct RenderSettings {
var width: CGFloat = 1280
var height: CGFloat = 720
var fps: Int32 = 2 // 2 frames per second
var avCodecKey = AVVideoCodecType.h264
var videoFilename = "video"
var videoFilenameExt = "mov"
init() { }
init(width: CGFloat, height: CGFloat) {
self.width = width
self.height = height
}
init(fps: Int32) {
self.fps = fps
}
init(fps: Int32, width: CGFloat, height: CGFloat) {
self.fps = fps
self.width = width
self.height = height
}
var size: CGSize {
return CGSize(width: width, height: height)
}
var outputURL: URL {
// Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
// Using the CachesDirectory ensures the file won't be included in a backup of the app.
let fileManager = FileManager.default
if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
}
fatalError("URLForDirectory() failed")
}
}
@available(iOS 11.0, *)
public class ImageAnimator {
// Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
static let kTimescale: Int32 = 600
let settings: RenderSettings
let videoWriter: VideoWriter
var imagesAndAudio:SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)> = SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)>()
var isDone:Bool = false
let semaphore = DispatchSemaphore(value: 1)
var frameNum = 0
class func removeFileAtURL(fileURL: URL) {
do {
try FileManager.default.removeItem(atPath: fileURL.path)
}
catch _ as NSError {
// Assume file doesn't exist.
}
}
init(renderSettings: RenderSettings, audioSettings:[String:Any]? = nil) {
settings = renderSettings
videoWriter = VideoWriter(renderSettings: settings, audioSettings: audioSettings)
}
func addImageAndAudio(image: UIImage, audio: CMSampleBuffer?, time:CFAbsoluteTime) {
self.imagesAndAudio.append((image, audio, time))
// print("Adding to array -- \(self.imagesAndAudio.count)")
}
func render(initialBuffer: CMSampleBuffer?, completion: @escaping () -> Void) {
// The VideoWriter will fail if a file exists at the URL, so clear it out first.
ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)
videoWriter.start(initialBuffer: initialBuffer)
videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
//ImageAnimator.saveToLibrary(self.settings.outputURL)
completion()
}
}
// This is the callback function for VideoWriter.render()
func appendPixelBuffers(writer: VideoWriter) -> Bool {
//Don't stop while images are NOT empty
while !imagesAndAudio.isEmpty || !isDone {
if(!imagesAndAudio.isEmpty) {
let date = Date()
if writer.isReadyForVideoData == false {
// Inform writer we have more buffers to write.
// print("Writer is not ready for more data")
return false
}
autoreleasepool {
//This should help but truly doesn't suffice - still need a mutex/lock
if(!imagesAndAudio.isEmpty) {
semaphore.wait() // requesting resource
let imageAndAudio = imagesAndAudio.first()!
let image = imageAndAudio.0
// let audio = imageAndAudio.1
let time = imageAndAudio.2
self.imagesAndAudio.removeAtIndex(index: 0)
semaphore.signal() // releasing resource
let presentationTime = CMTime(seconds: time, preferredTimescale: 600)
// if(audio != nil) { videoWriter.addAudio(buffer: audio!) }
let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
if success == false {
fatalError("addImage() failed")
}
else {
// print("Added image # frame \(frameNum) with presTime: \(presentationTime)")
}
frameNum += 1
let final = Date()
let timeDiff = final.timeIntervalSince(date)
// print("Time: \(timeDiff)")
}
else {
// print("Images was empty")
}
}
}
}
print("Done writing")
// Inform writer all buffers have been written.
return true
}
}
@available(iOS 11.0, *)
public class VideoWriter {
let renderSettings: RenderSettings
var audioSettings: [String:Any]?
var videoWriter: AVAssetWriter!
var videoWriterInput: AVAssetWriterInput!
var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
var audioWriterInput: AVAssetWriterInput!
static var ci:Int = 0
var initialTime:CMTime!
var isReadyForVideoData: Bool {
return (videoWriterInput == nil ? false : videoWriterInput!.isReadyForMoreMediaData )
}
var isReadyForAudioData: Bool {
return (audioWriterInput == nil ? false : audioWriterInput!.isReadyForMoreMediaData)
}
class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize, alpha:CGImageAlphaInfo) -> CVPixelBuffer? {
var pixelBufferOut: CVPixelBuffer?
let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
if status != kCVReturnSuccess {
fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
}
let pixelBuffer = pixelBufferOut!
CVPixelBufferLockBaseAddress(pixelBuffer, [])
let data = CVPixelBufferGetBaseAddress(pixelBuffer)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: alpha.rawValue)
context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))
let horizontalRatio = size.width / image.size.width
let verticalRatio = size.height / image.size.height
//aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
let cgImage = image.cgImage != nil ? image.cgImage! : image.ciImage!.convertCIImageToCGImage()
context!.draw(cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
return pixelBuffer
}
@available(iOS 11.0, *)
init(renderSettings: RenderSettings, audioSettings:[String:Any]? = nil) {
self.renderSettings = renderSettings
self.audioSettings = audioSettings
}
func start(initialBuffer: CMSampleBuffer?) {
let avOutputSettings: [String: AnyObject] = [
AVVideoCodecKey: renderSettings.avCodecKey as AnyObject,
AVVideoWidthKey: NSNumber(value: Float(renderSettings.width)),
AVVideoHeightKey: NSNumber(value: Float(renderSettings.height))
]
let avAudioSettings = audioSettings
func createPixelBufferAdaptor() {
let sourcePixelBufferAttributesDictionary = [
kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.width)),
kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.height))
]
pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
}
func createAssetWriter(outputURL: URL) -> AVAssetWriter {
guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mov) else {
fatalError("AVAssetWriter() failed")
}
guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
fatalError("canApplyOutputSettings() failed")
}
return assetWriter
}
videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)
// if(audioSettings != nil) {
audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
audioWriterInput.expectsMediaDataInRealTime = true
// }
if videoWriter.canAdd(videoWriterInput) {
videoWriter.add(videoWriterInput)
}
else {
fatalError("canAddInput() returned false")
}
// if(audioSettings != nil) {
if videoWriter.canAdd(audioWriterInput) {
videoWriter.add(audioWriterInput)
}
else {
fatalError("canAddInput() returned false")
}
// }
// The pixel buffer adaptor must be created before we start writing.
createPixelBufferAdaptor()
if videoWriter.startWriting() == false {
fatalError("startWriting() failed")
}
self.initialTime = initialBuffer != nil ? CMSampleBufferGetPresentationTimeStamp(initialBuffer!) : CMTime.zero
videoWriter.startSession(atSourceTime: self.initialTime)
precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
}
func render(appendPixelBuffers: @escaping (VideoWriter) -> Bool, completion: @escaping () -> Void) {
precondition(videoWriter != nil, "Call start() to initialize the writer")
let queue = DispatchQueue(label: "mediaInputQueue")
videoWriterInput.requestMediaDataWhenReady(on: queue) {
let isFinished = appendPixelBuffers(self)
if isFinished {
self.videoWriterInput.markAsFinished()
self.videoWriter.finishWriting() {
DispatchQueue.main.async {
print("Done Creating Video")
completion()
}
}
}
else {
// Fall through. The closure will be called again when the writer is ready.
}
}
}
func addAudio(buffer: CMSampleBuffer, time: CMTime) {
if(isReadyForAudioData) {
print("Writing audio \(VideoWriter.ci) of a time of \(CMSampleBufferGetPresentationTimeStamp(buffer))")
let duration = CMSampleBufferGetDuration(buffer)
let offsetBuffer = CMSampleBuffer.createSampleBuffer(fromSampleBuffer: buffer, withTimeOffset: time, duration: duration)
if(offsetBuffer != nil) {
print("Added audio")
self.audioWriterInput.append(offsetBuffer!)
}
else {
print("Not adding audio")
}
}
VideoWriter.ci += 1
}
func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
precondition(pixelBufferAdaptor != nil, "Call start() to initialize the writer")
//1
let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size, alpha: CGImageAlphaInfo.premultipliedFirst)!
return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime + self.initialTime)
}
}
I was looking a little further into this, and while I could have updated my answer, I'd rather open this tangent in a new area to separate these ideas. Apple states that we can use an AVVideoComposition: "To use the created video composition for playback, create an AVPlayerItem object from the same asset used as the composition’s source, then assign the composition to the player item’s videoComposition property. To export the composition to a new movie file, create an AVAssetExportSession object from the same source asset, then assign the composition to the export session’s videoComposition property."
https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest
So, what you COULD try is using the AVPlayer for the ORIGINAL URL. Then try applying your filter.
let asset = AVAsset(url: originalURL)
let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
// Clamp to avoid blurring transparent pixels at the image edges
let source = request.sourceImage.clampedToExtent()
filter.setValue(source, forKey: kCIInputImageKey)
// Vary filter parameters based on video timing
let seconds = CMTimeGetSeconds(request.compositionTime)
filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)
// Crop the blurred output to the bounds of the original image
let output = filter.outputImage!.cropped(to: request.sourceImage.extent)
// Provide the filter output to the composition
request.finish(with: output, context: nil)
})
let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)
I'm sure you know what to do from here. This may allow you to do a "real-time" version of your filtering. One potential issue is that this runs into the same problem as your original approach: each frame still takes some time to filter, which could again delay the audio relative to the video. However, this may not happen. If you do get this working, then once the user selects their filter, you can use AVAssetExportSession to export the specific videoComposition.
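A rough sketch of that export step, assuming the asset and composition from the snippet above and an output URL of your choosing (the preset and file type here are just reasonable defaults):

import AVFoundation

func exportFilteredVideo(asset: AVAsset,
                         composition: AVVideoComposition,
                         to outputURL: URL,
                         completion: @escaping (Bool) -> Void) {
    guard let exporter = AVAssetExportSession(asset: asset,
                                              presetName: AVAssetExportPresetHighestQuality) else {
        completion(false)
        return
    }
    exporter.videoComposition = composition
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously {
        completion(exporter.status == .completed)
    }
}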
More here if you need help!

Resize UIImage before uploading to Firebase Storage in Swift 3

I have set up my application so that when I press the "cambiaimmagineutente" button, a picker controller appears and I can choose an image, which I then upload to FIRStorage using UIImagePickerControllerReferenceURL. I cannot find a way to resize the image before uploading it, both to save space and to fit it into a smaller image view.
Here is the code:
@IBAction func cambiaImmagineUtente(_ sender: UIButton) {
imagePicker.allowsEditing = false
imagePicker.sourceType = .photoLibrary
present(imagePicker, animated: true, completion:nil)
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
picker.dismiss(animated: true, completion:nil)
// if it's a photo from the library, not an image from the camera
if #available(iOS 8.0, *), let referenceUrl = info[UIImagePickerControllerReferenceURL] as? URL {
let assets = PHAsset.fetchAssets(withALAssetURLs: [referenceUrl], options: nil)
let asset = assets.firstObject
asset?.requestContentEditingInput(with: nil, completionHandler: { (contentEditingInput, info) in
let imageFile = contentEditingInput?.fullSizeImageURL
let filePath = FIRAuth.auth()!.currentUser!.uid +
"/\(Int(Date.timeIntervalSinceReferenceDate * 1000))/\(imageFile!.lastPathComponent)"
// [START uploadimage]
self.storageRef.child(filePath)
.putFile(imageFile!, metadata: nil) { (metadata, error) in
if let error = error {
// an error occurred
print("Error uploading: \(error)")
return
}
self.uploadSuccess(metadata!, storagePath: filePath)
}
// [END uploadimage]
})
} else {
guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
guard let imageData = UIImageJPEGRepresentation(image, 0.8) else { return }
let imagePath = FIRAuth.auth()!.currentUser!.uid +
"/\(Int(Date.timeIntervalSinceReferenceDate * 1000)).jpg"
let metadata = FIRStorageMetadata()
metadata.contentType = "image/jpeg"
self.storageRef.child(imagePath)
.put(imageData, metadata: metadata) { (metadata, error) in
if let error = error {
// an error occurred
print("Error uploading: \(error)")
return
}
self.uploadSuccess(metadata!, storagePath: imagePath)
}
}
}
func uploadSuccess(_ metadata: FIRStorageMetadata, storagePath: String) {
print("Upload Succeeded!")
//self.urlTextView.text = metadata.downloadURL()?.absoluteString
UserDefaults.standard.set(storagePath, forKey: "storagePath")
UserDefaults.standard.synchronize()
//self.downloadPicButton.isEnabled = true
}
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
picker.dismiss(animated: true, completion:nil)
}
You can use this:
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
let size = image.size
let widthRatio = targetSize.width / image.size.width
let heightRatio = targetSize.height / image.size.height
var newSize: CGSize
if(widthRatio > heightRatio) {
newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
} else {
newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
}
let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
image.draw(in: rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
Use:
let resizedImage = resizeImage(image: selectedImage, targetSize: CGSize.init(width: 300, height: 300))
Make sure you also add a write rule to your Storage security rules that caps uploads at a maximum size!
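Putting the two together, a sketch of resizing before uploading, reusing the resizeImage helper above and the same FIRStorage calls from the question (the 300x300 target and 0.8 JPEG quality are placeholders):

guard let image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
let resizedImage = resizeImage(image: image, targetSize: CGSize(width: 300, height: 300))
guard let imageData = UIImageJPEGRepresentation(resizedImage, 0.8) else { return }
let imagePath = FIRAuth.auth()!.currentUser!.uid +
    "/\(Int(Date.timeIntervalSinceReferenceDate * 1000)).jpg"
let metadata = FIRStorageMetadata()
metadata.contentType = "image/jpeg"
self.storageRef.child(imagePath)
    .put(imageData, metadata: metadata) { (metadata, error) in
        if let error = error {
            print("Error uploading: \(error)")
            return
        }
        self.uploadSuccess(metadata!, storagePath: imagePath)
    }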

Loading image from gallery by local URL (Swift)

I've encountered a strange behavior:
An image picker returns a PHAsset, and I do the following and manage to present an image view with an image built from the data:
asset.requestContentEditingInput(with: PHContentEditingInputRequestOptions()) { (input, _) in
let url = input?.fullSizeImageURL
let imgV = UIImageView()
let test_url = URL(string: (url?.absoluteString)!)
print("><><><^^^><><>\(test_url)")
//prints: ><><><^^^><><>Optional(file:///var/mobile/Media/DCIM/107APPLE/IMG_7242.JPG)
let data = NSData(contentsOf: test_url! as URL)
imgV.image = UIImage(data: data! as Data)
imgV.backgroundColor = UIColor.cyan
att.imageLocalURL = url?.absoluteString//// saving the string to use in the other class
imgV.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
self.view.addSubview(imgV) /// just to test that the file exists and can produce an image
However when I do the following in another class:
if((NSURL( string: self.attachment.imageLocalURL! ) as URL!).isFileURL)// checking if is Local URL
{
let test_url = URL(string: self.attachment.imageLocalURL!) // reading the value stored from before
print("><><><^^^><><>\(test_url)")
//prints :><><><^^^><><>Optional(file:///var/mobile/Media/DCIM/107APPLE/IMG_7242.JPG)
let data = NSData(contentsOf: test_url! as URL)
imageView.image = UIImage(data: data! as Data)
}
The data is nil! What am I doing wrong? The URL string is identical in both cases!
PHAsset objects should be accessed via the PHImageManager class. If you want to load the image synchronously I recommend you do something like this:
func getImage(assetUrl: URL) -> UIImage? {
let asset = PHAsset.fetchAssets(withALAssetURLs: [assetUrl], options: nil)
guard let result = asset.firstObject else {
return nil
}
var assetImage: UIImage?
let options = PHImageRequestOptions()
options.isSynchronous = true
PHImageManager.default().requestImage(for: result, targetSize: UIScreen.main.bounds.size, contentMode: PHImageContentMode.aspectFill, options: options) { image, info in
assetImage = image
}
return assetImage
}
You could even write a UIImageView extension to load the image directly from a PHAsset url.
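A minimal sketch of such an extension, assuming you keep passing the asset's file URL around (it uses the same fetchAssets(withALAssetURLs:) call as above, which is deprecated but still available):

import Photos
import UIKit

extension UIImageView {
    // Hypothetical helper: load the image for a Photos asset URL into this image view
    func setImage(fromAssetURL assetURL: URL, targetSize: CGSize = UIScreen.main.bounds.size) {
        let fetchResult = PHAsset.fetchAssets(withALAssetURLs: [assetURL], options: nil)
        guard let asset = fetchResult.firstObject else { return }
        let options = PHImageRequestOptions()
        options.isNetworkAccessAllowed = true
        PHImageManager.default().requestImage(for: asset,
                                              targetSize: targetSize,
                                              contentMode: .aspectFill,
                                              options: options) { [weak self] image, _ in
            DispatchQueue.main.async {
                self?.image = image
            }
        }
    }
}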
var images:NSMutableArray = NSMutableArray() //hold the fetched images
func fetchPhotos ()
{
images = NSMutableArray()
//totalImageCountNeeded = 3
self.fetchPhotoAtIndexFromEnd(0)
}
func fetchPhotoAtIndexFromEnd(index:Int)
{
let status : PHAuthorizationStatus = PHPhotoLibrary.authorizationStatus()
if status == PHAuthorizationStatus.Authorized
{
let imgManager = PHImageManager.defaultManager()
let requestOptions = PHImageRequestOptions()
requestOptions.synchronous = true
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key:"creationDate", ascending: true)]
if let fetchResult: PHFetchResult = PHAsset.fetchAssetsWithMediaType(PHAssetMediaType.Image, options: fetchOptions)
{
if fetchResult.count > 0
{
imgManager.requestImageForAsset(fetchResult.objectAtIndex(fetchResult.count - 1 - index) as! PHAsset, targetSize: CGSizeMake(self.img_CollectionL.frame.size.height/3, self.img_CollectionL.frame.size.width/3), contentMode: PHImageContentMode.AspectFill, options: requestOptions, resultHandler: { (image, _) in
//self.images.addObject(image!)
if image != nil
{
self.images.addObject(image!)
}
if index + 1 < fetchResult.count && self.images.count < 20 //self.totalImageCountNeeded
{
self.fetchPhotoAtIndexFromEnd(index + 1)
}
else
{
}
})
self.img_CollectionL.reloadData()
}
}
}
}

How to extract selected images from bs_presentImagePickerController [duplicate]

I'm attempting to create a UIImage (like a thumbnail or something) from a PHAsset so that I can pass it into something that takes a UIImage. I've tried adapting solutions I found on SO (since they all just directly pass it into say a tableview or something), but I have no success (likely because I'm not doing it right).
func getAssetThumbnail(asset: PHAsset) -> UIImage {
var retimage = UIImage()
println(retimage)
let manager = PHImageManager.defaultManager()
manager.requestImageForAsset(asset, targetSize: CGSize(width: 100.0, height: 100.0), contentMode: .AspectFit, options: nil, resultHandler: {(result, info)->Void in
retimage = result
})
println(retimage)
return retimage
}
The printlns are telling me that the manager.request line isn't doing anything right now. How do I get it to give me the asset as a UIImage?
Thanks.
This did what I needed it to do, in case anyone also needs this.
func getAssetThumbnail(asset: PHAsset) -> UIImage {
let manager = PHImageManager.defaultManager()
let option = PHImageRequestOptions()
var thumbnail = UIImage()
option.synchronous = true
manager.requestImageForAsset(asset, targetSize: CGSize(width: 100.0, height: 100.0), contentMode: .AspectFit, options: option, resultHandler: {(result, info)->Void in
thumbnail = result!
})
return thumbnail
}
Edit: Swift 3 update
func getAssetThumbnail(asset: PHAsset) -> UIImage {
let manager = PHImageManager.default()
let option = PHImageRequestOptions()
var thumbnail = UIImage()
option.isSynchronous = true
manager.requestImage(for: asset, targetSize: CGSize(width: 100, height: 100), contentMode: .aspectFit, options: option, resultHandler: {(result, info)->Void in
thumbnail = result!
})
return thumbnail
}
try this it works for me, hope it helps you too,
func getUIImage(asset: PHAsset) -> UIImage? {
var img: UIImage?
let manager = PHImageManager.default()
let options = PHImageRequestOptions()
options.version = .original
options.isSynchronous = true
manager.requestImageData(for: asset, options: options) { data, _, _, _ in
if let data = data {
img = UIImage(data: data)
}
}
return img
}
Simple Solution (Swift 4.2)
Method 1:
extension PHAsset {
var image : UIImage {
var thumbnail = UIImage()
let imageManager = PHCachingImageManager()
imageManager.requestImage(for: self, targetSize: CGSize(width: 100, height: 100), contentMode: .aspectFit, options: nil, resultHandler: { image, _ in
thumbnail = image!
})
return thumbnail
}
}
let image = asset.image
Use this method if you only need UIImage from PHAsset.
OR
extension PHAsset {
func image(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?) -> UIImage {
var thumbnail = UIImage()
let imageManager = PHCachingImageManager()
imageManager.requestImage(for: self, targetSize: targetSize, contentMode: contentMode, options: options, resultHandler: { image, _ in
thumbnail = image!
})
return thumbnail
}
}
let image = asset.image(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?)
Use this method for your desired UIImage.
OR
extension PHAsset {
func image(completionHandler: @escaping (UIImage) -> ()) {
var thumbnail = UIImage()
let imageManager = PHCachingImageManager()
imageManager.requestImage(for: self, targetSize: CGSize(width: 100, height: 100), contentMode: .aspectFit, options: nil, resultHandler: { img, _ in
thumbnail = img!
})
completionHandler(thumbnail)
}
}
let image = asset.image(completionHandler: {(img) in
print("Finished")
})
Use this method for notify after completion.
Method 2:
extension PHAsset {
var data : (UIImage, [AnyHashable : Any]) {
var img = UIImage(); var information = [AnyHashable : Any](); let imageManager = PHCachingImageManager()
imageManager.requestImage(for: self, targetSize: CGSize(width: 100, height: 100), contentMode: .aspectFit, options: nil, resultHandler: { image,info in
img = image!
information = info!
})
return (img,information)
}
}
let image_withData : (UIImage, [AnyHashable : Any]) = asset.data
Use this method if you want UIImage And Result Info of PHAsset
OR
extension PHAsset {
func data(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?) -> (UIImage, [AnyHashable : Any]) {
var img = UIImage(); var information = [AnyHashable : Any](); let imageManager = PHCachingImageManager()
imageManager.requestImage(for: self, targetSize: targetSize, contentMode: contentMode, options: options, resultHandler: { image,info in
img = image!
information = info!
})
return (img,information)
}
}
let data = asset?.data(targetSize: CGSize, contentMode: PHImageContentMode, options: PHImageRequestOptions?)
Use this method for your desired Data.
Swift 5
extension PHAsset {
func getAssetThumbnail() -> UIImage {
let manager = PHImageManager.default()
let option = PHImageRequestOptions()
var thumbnail = UIImage()
option.isSynchronous = true
manager.requestImage(for: self,
targetSize: CGSize(width: self.pixelWidth, height: self.pixelHeight),
contentMode: .aspectFit,
options: option,
resultHandler: {(result, info) -> Void in
thumbnail = result!
})
return thumbnail
}
}
Swift 4.
resizeMode, deliveryMode: set these according to your requirements.
isNetworkAccessAllowed: set this to true to fetch images from iCloud.
imageSize: the required image size.
func getImageFromAsset(asset: PHAsset, imageSize: CGSize, callback: @escaping (_ result: UIImage) -> Void) {
let requestOptions = PHImageRequestOptions()
requestOptions.resizeMode = PHImageRequestOptionsResizeMode.fast
requestOptions.deliveryMode = PHImageRequestOptionsDeliveryMode.highQualityFormat
requestOptions.isNetworkAccessAllowed = true
requestOptions.isSynchronous = true
PHImageManager.default().requestImage(for: asset, targetSize: imageSize, contentMode: PHImageContentMode.default, options: requestOptions, resultHandler: { (currentImage, info) in
callback(currentImage!)
})
}
I'd suggest using Apple's PHCachingImageManager (that inherits from PHImageManager):
A PHCachingImageManager object fetches or generates image data for photo or video assets
Also, PHCachingImageManager supports a better caching mechanism.
Example of fetching a thumbnail synchronously:
let options = PHImageRequestOptions()
options.deliveryMode = .HighQualityFormat
options.synchronous = true // Set it to false for async callback
let imageManager = PHCachingImageManager()
imageManager.requestImageForAsset(YourPHAssetVar,
targetSize: CGSizeMake(CGFloat(160), CGFloat(160)),
contentMode: .AspectFill,
options: options,
resultHandler: { (resultThumbnail : UIImage?, info : [NSObject : AnyObject]?) in
// Assign your thumbnail which is the *resultThumbnail*
})
In addition, you can use PHCachingImageManager to cache your images for faster UI response:
To use a caching image manager:
1. Create a PHCachingImageManager instance. (This step replaces using the shared PHImageManager instance.)
2. Use PHAsset class methods to fetch the assets you’re interested in.
3. To prepare images for those assets, call the startCachingImagesForAssets:targetSize:contentMode:options: method with the target size, content mode, and options you plan to use when later requesting images for each individual asset.
4. When you need an image for an individual asset, call the requestImageForAsset:targetSize:contentMode:options:resultHandler: method, and pass the same parameters you used when preparing that asset.
If the image you request is among those already prepared, the PHCachingImageManager object immediately returns that image. Otherwise, Photos prepares the image on demand and caches it for later use.
In our example:
var phAssetArray : [PHAsset] = []
for i in 0..<assets.count
{
phAssetArray.append(assets[i] as! PHAsset)
}
let options = PHImageRequestOptions()
options.deliveryMode = .Opportunistic
options.synchronous = false
self.imageManager.startCachingImagesForAssets(phAssetArray,
targetSize: CGSizeMake(CGFloat(160), CGFloat(160)),
contentMode: .AspectFill,
options: options)
For Swift 3.0.1:
func getAssetThumbnail(asset: PHAsset, size: CGFloat) -> UIImage {
let retinaScale = UIScreen.main.scale
let retinaSquare = CGSize(width: size * retinaScale, height: size * retinaScale)//(size * retinaScale, size * retinaScale)
let cropSizeLength = min(asset.pixelWidth, asset.pixelHeight)
let square = CGRect(x:0, y: 0,width: CGFloat(cropSizeLength),height: CGFloat(cropSizeLength))
let cropRect = square.applying(CGAffineTransform(scaleX: 1.0/CGFloat(asset.pixelWidth), y: 1.0/CGFloat(asset.pixelHeight)))
let manager = PHImageManager.default()
let options = PHImageRequestOptions()
var thumbnail = UIImage()
options.isSynchronous = true
options.deliveryMode = .highQualityFormat
options.resizeMode = .exact
options.normalizedCropRect = cropRect
manager.requestImage(for: asset, targetSize: retinaSquare, contentMode: .aspectFit, options: options, resultHandler: {(result, info)->Void in
thumbnail = result!
})
return thumbnail
}
Resource : https://gist.github.com/lvterry/f062cf9ae13bca76b0c6#file-getassetthumbnail-swift
The problem is that requestImageForAsset's resultHandler runs later, after your function has already printed and returned the value you were expecting. I made some changes to show this happening and also to suggest some simple solutions.
func getAssetThumbnail(asset: PHAsset) {
var retimage = UIImage()
println(retimage)
let manager = PHImageManager.defaultManager()
manager.requestImageForAsset(asset, targetSize: CGSize(width: 100.0, height: 100.0), contentMode: .AspectFit, options: nil, resultHandler: {
(result, info)->Void in
retimage = result
println("This happens after")
println(retimage)
callReturnImage(retimage) // <- create this method
})
println("This happens before")
}
Learn more about closures, completion handlers, and async functions in Apple's documentation.
I hope that helps you!
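If you would rather not block the thread with a synchronous request, a completion-handler version is a small change. A sketch in current Swift (the handler may be called more than once, first with a degraded image and then with the final one):

func getAssetThumbnail(asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let manager = PHImageManager.default()
    manager.requestImage(for: asset,
                         targetSize: CGSize(width: 100, height: 100),
                         contentMode: .aspectFit,
                         options: nil) { result, _ in
        completion(result)
    }
}

// Usage:
getAssetThumbnail(asset: someAsset) { thumbnail in
    // update your UI with the thumbnail here
}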
Objective-C version of the code, based on dcheng's answer.
-(UIImage *)getAssetThumbnail:(PHAsset * )asset {
PHImageRequestOptions *options = [[PHImageRequestOptions alloc]init];
options.synchronous = true;
__block UIImage *image;
[PHCachingImageManager.defaultManager requestImageForAsset:asset targetSize:CGSizeMake(100, 100) contentMode:PHImageContentModeAspectFit options:options resultHandler:^(UIImage * _Nullable result, NSDictionary * _Nullable info) {
image = result;
}];
return image;
}
Swift 5 working function
func getImageForAsset(asset: PHAsset) -> UIImage {
let manager = PHImageManager.default()
let option = PHImageRequestOptions()
var thumbnail = UIImage()
option.isSynchronous = true
manager.requestImage(for: asset, targetSize: CGSize(width: 100.0, height: 100.0), contentMode: .aspectFit, options: option, resultHandler: {(result, info) -> Void in
thumbnail = result!
})
return thumbnail
}
I have a different solution which worked really nicely when I wanted to get the memory down in my collectionView:
First I get the URL from the asset:
func getImageUrlFrom(asset: PHAsset, completion: @escaping ((URL?) -> ())) {
asset.requestContentEditingInput(with: nil, completionHandler: { (input, info) in
if let input = input {
completion(input.fullSizeImageURL)
}
})
}
Then, instead of requesting an image, I downsample it to make it memory efficient for the size at which it will be displayed: https://developer.apple.com/videos/play/wwdc2018/219
func downsample(imageAt imageURL: URL?,
to pointSize: CGSize,
scale: CGFloat = UIScreen.main.scale) -> UIImage? {
guard let imageURL = imageURL else { return nil }
// Create a CGImageSource that represents the image
let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
guard let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, imageSourceOptions) else {
return nil
}
// Calculate the desired dimension
let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
// Perform downsampling
let downsampleOptions = [
kCGImageSourceCreateThumbnailFromImageAlways: true,
kCGImageSourceShouldCacheImmediately: true,
kCGImageSourceCreateThumbnailWithTransform: true,
kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
] as CFDictionary
guard let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
return nil
}
// Return the downsampled image as UIImage
return UIImage(cgImage: downsampledImage)
}
Since I can't use PHCachingImageManager, I just use NSCache and the localIdentifier of the asset as the reference for caching.
And remember to use DispatchQueue.global(qos: .userInitiated).async { } when you call both methods.
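For example, a sketch tying the pieces together for a collection view cell (the cache and function names are assumptions):

// Cache keyed by the asset's localIdentifier
let thumbnailCache = NSCache<NSString, UIImage>()

func loadThumbnail(for asset: PHAsset, into imageView: UIImageView, pointSize: CGSize) {
    let key = asset.localIdentifier as NSString
    if let cached = thumbnailCache.object(forKey: key) {
        imageView.image = cached
        return
    }
    getImageUrlFrom(asset: asset) { url in
        DispatchQueue.global(qos: .userInitiated).async {
            let image = downsample(imageAt: url, to: pointSize)
            DispatchQueue.main.async {
                if let image = image {
                    thumbnailCache.setObject(image, forKey: key)
                }
                imageView.image = image
            }
        }
    }
}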
