I have stumbled across the following piece of code and I can't understand exactly how it works.
There is the following property, which is populated when a method of AVCapturePhotoCaptureDelegate is called:
var photoCaptureCompletionBlock: ((UIImage?, Error?) -> Void)?
The delegate method is triggered by the following piece of code:
func captureImage(completion: @escaping (UIImage?, Error?) -> Void) {
    let settings = AVCapturePhotoSettings()
    self.photoOutput?.capturePhoto(with: settings, delegate: self)
    self.photoCaptureCompletionBlock = completion
}
The line that triggers the delegate is:
self.photoOutput?.capturePhoto(with: settings, delegate: self)
and immediately after that the completion variable is assigned to self.photoCaptureCompletionBlock
Conceptually I would understand the opposite, i.e. assigning self.photoCaptureCompletionBlock to completion and not the other way around (which is not possible without an inout parameter, since completion is a let).
What are the mechanics behind this assignment? How does it work?
EDIT: For context, the delegate method that is called is the following:
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?,
                 previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
    if let error = error {
        self.photoCaptureCompletionBlock?(nil, error)
    } else if let buffer = photoSampleBuffer,
              let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil) {
        let image = UIImage(data: data)
        self.photoCaptureCompletionBlock?(image, nil)
    } else {
        self.photoCaptureCompletionBlock?(nil, CameraControllerError.unknown)
    }
}
Your method captureImage(completion: @escaping (UIImage?, Error?) -> Void) is not part of the AVCapturePhotoCaptureDelegate protocol. It is a custom method on the object that implements this protocol.
Since there is no full code for that object, I can only guess. In this method you start photo capturing and pass in a completion block, which will be triggered when the photo capture finishes.
The completion block is stored in a property of the object, and another delegate method, for example func photoOutput(AVCapturePhotoOutput, didFinishProcessingPhoto: AVCapturePhoto, error: Error?), will call the stored block once photo capturing has finished. Assigning completion to self.photoCaptureCompletionBlock does not execute the closure; it merely stores a reference to it so that it can be invoked later, after the asynchronous capture completes.
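To make the mechanics concrete, here is a minimal, self-contained sketch of the same pattern with the AVFoundation specifics stripped out (the Worker type and its names are illustrative, not from the original code):
import Foundation

class Worker {
    // The closure is stored here until the asynchronous work finishes.
    private var completionBlock: ((String?, Error?) -> Void)?

    func doWork(completion: @escaping (String?, Error?) -> Void) {
        // Kick off the asynchronous work first (mirroring the order in the question)...
        DispatchQueue.global().asyncAfter(deadline: .now() + 1) { [weak self] in
            self?.workDidFinish(result: "done")
        }
        // ...then store the closure. This assignment executes nothing;
        // it only keeps a reference so the closure can be called later.
        self.completionBlock = completion
    }

    // Plays the role of the delegate callback.
    private func workDidFinish(result: String) {
        completionBlock?(result, nil)
        completionBlock = nil
    }
}

// Usage: keep a strong reference to the worker until the callback fires.
let worker = Worker()
worker.doWork { result, error in
    print(result ?? "no result")
}
Arguably it would be safer to store the block before starting the work (i.e. swap the two lines), so there is no window in which the callback could fire before the block has been stored; with capturePhoto the delegate callbacks arrive later, so in practice both orders work.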
I am currently using base64 encoding to convert and send multiple images in a JSON file from my Swift app to my API, using:
guard let imageData = image.jpegData(compressionQuality: 1.0) else { return }
let sSideL = imageData.base64EncodedString(options: .lineLength64Characters)
While extending my API, I would now like to use the rich EXIF data provided by most smartphones, such as lens information, field of view, or the device model. Most important for my current purpose is the "Image Model" tag, in order to identify the device that took the picture.
I noticed that there is some EXIF data left in the base64 data coming through my API, but it is limited to very basic information such as the orientation. Also, when I directly print the base64 string in Xcode and analyze it, it contains very little EXIF information. Technically it should be possible: converting the same image in an online base64 converter and analyzing the resulting string, I am able to see EXIF information like "Image Model", etc.
Is there a way to convert my UIImage to a base64 string keeping all EXIF details?
The API represents the main part of my system, so I would like to keep it as simple as possible and not add additional upload parameters.
EDIT
Here is my code to capture the UIImage:
extension CameraController: AVCapturePhotoCaptureDelegate {
    public func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                            resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Swift.Error?) {
        if let error = error {
            // ERROR
        } else if let buffer = photoSampleBuffer,
                  let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil),
                  let image = UIImage(data: data) {
            // SEND IMAGE TO SERVER
        } else {
            // UNKNOWN ERROR
        }
    }
}
You can use the newer (iOS 11+) delegate method:
public func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let error = error {
        // ERROR
    } else if let data = photo.fileDataRepresentation() {
        // SEND IMAGE DATA TO SERVER
    } else {
        // UNKNOWN ERROR
    }
}
or the method you are using:
public func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                        resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Swift.Error?) {
    if let error = error {
        // ERROR
    } else if let buffer = photoSampleBuffer,
              let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil) {
        // SEND IMAGE DATA TO SERVER
    } else {
        // UNKNOWN ERROR
    }
}
As leo-dabus mentioned, you need to send the image data to the server; that data has the metadata in it. If you first create a UIImage and then convert it back to data, you lose the metadata.
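Tying this back to the base64 question: a minimal sketch (variable names are illustrative) would base64-encode the data straight from the capture callback, with no UIImage round trip:
// Inside photoOutput(_:didFinishProcessingPhoto:error:), iOS 11+:
if let data = photo.fileDataRepresentation() {
    // Encoding the original file data keeps the EXIF tags (model, lens, ...) intact.
    let base64String = data.base64EncodedString(options: .lineLength64Characters)
    // send base64String to the server
}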
I am currently working on implementing in-app purchases in my app, and after restoring purchases I would like to call a completion handler that displays an alert to the user. I was doing it the way shown below, but found a post that says the completion might not even be executed. How can I structure this properly?
func restoreIAPPurchases(completion: (() -> Void)) {
    if !self.canMakePayments {
        return
    }
    self.paymentQueue.restoreCompletedTransactions()
    completion()
}
let alertController = UIAlertController.vy_alertControllerWithTitle(nil, message: "Restore will reprocess your existing subscription. You will not be charged", actionSheet: false)
alertController.addAction("Ok")
alertController.addActionWithTitle("Restore", style: .default) {
    IAPService.shared.restoreIAPPurchases {
        UIAlertController.vy_showAlertFrom(self, title: "Restore complete", message: "Successfully restored purchase")
    }
}
alertController.presentFrom(self)
"I was doing it this way and found a post that says it might not even be executed"
It might not be executed because you don't call the completion handler on all paths.
As Sh_Khan mentioned in his answer, you don't really need a completion handler here, you need to use the delegate methods to be informed when it completes and whether it was successful or not. But your particular issue with your specific code is that you are not calling completion in the if statement.
if !self.canMakePayments {
    return
}
Should probably be
guard canMakePayments else {
    completion()
    return
}
In the code you had, if canMakePayments is false then your completion code will not execute.
The result is asynchronous here; it is delivered through these SKPaymentTransactionObserver methods:
func paymentQueueRestoreCompletedTransactionsFinished(_ queue: SKPaymentQueue)
or
func paymentQueue(_ queue: SKPaymentQueue,
                  restoreCompletedTransactionsFailedWithError error: Error)
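If you do want to keep a completion-handler API on top of StoreKit, a minimal sketch of wiring the stored handler to those observer callbacks might look like this (the IAPService shape here is an assumption, not the asker's full class):
import StoreKit

class IAPService: NSObject, SKPaymentTransactionObserver {
    static let shared = IAPService()
    // The handler is stored until StoreKit reports success or failure.
    private var restoreCompletion: ((Error?) -> Void)?

    func restoreIAPPurchases(completion: @escaping (Error?) -> Void) {
        restoreCompletion = completion
        SKPaymentQueue.default().add(self)
        SKPaymentQueue.default().restoreCompletedTransactions()
    }

    func paymentQueue(_ queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
        // Handle purchased/restored/failed transactions and call queue.finishTransaction(_:) as appropriate.
    }

    func paymentQueueRestoreCompletedTransactionsFinished(_ queue: SKPaymentQueue) {
        restoreCompletion?(nil)   // success path
        restoreCompletion = nil
    }

    func paymentQueue(_ queue: SKPaymentQueue, restoreCompletedTransactionsFailedWithError error: Error) {
        restoreCompletion?(error) // failure path
        restoreCompletion = nil
    }
}
This way the handler fires exactly once, on success or on failure, rather than immediately after restoreCompletedTransactions() returns.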
Suppose I'm writing code for login and need a completion handler to be called back after the request completes.
// MARK: - Properties
var signInCompletionHandler : ((_ result : AnyObject?, _ error : NSError?) -> Void)?
var viewController : UIViewController?

// MARK: - Call the login method with a completion handler.
func login(withViewController viewController : UIViewController, completionHandler : @escaping (_ result : AnyObject?, _ error : NSError?) -> Void) {
    // Write your logic here.
}
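As a sketch of how the stored property and the parameter could connect, assuming a URLSession-based request (the endpoint and response handling here are placeholders, not part of the original code):
func login(withViewController viewController: UIViewController, completionHandler: @escaping (_ result: AnyObject?, _ error: NSError?) -> Void) {
    self.viewController = viewController
    // Store the handler so it can also be invoked from delegate callbacks if needed.
    self.signInCompletionHandler = completionHandler

    let url = URL(string: "https://example.com/login")!   // placeholder endpoint
    URLSession.shared.dataTask(with: url) { data, _, error in
        DispatchQueue.main.async {
            // Call back exactly once, on the main thread, on every path.
            self.signInCompletionHandler?(data as AnyObject?, error as NSError?)
            self.signInCompletionHandler = nil
        }
    }.resume()
}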
I'm trying to implement a simple didReceiveTrust in XMPPStreamDelegate, but Xcode shows a warning on the method definition:
func xmppStream(_ sender: XMPPStream!, didReceiveTrust trust: SecTrust, completionHandler: XMPPStreamCompletionHandler) {
    completionHandler(true)
}
The warning is the following:
Instance method 'xmppStream(sender:didReceiveTrust:completionHandler:)' nearly matches optional requirement 'xmppStream(_:didReceive:completionHandler:)' of protocol 'XMPPStreamDelegate'
When testing the app I'm getting the following in the output:
2018-06-12 23:10:11:239 MyMessages[55145:3561831] XMPPStream: Stream secured with (GCDAsyncSocketManuallyEvaluateTrust == YES), but there are no delegates that implement xmppStream:didReceiveTrust:completionHandler:. This is likely a mistake.
Please help.
The following function definition works as expected, because its Swift selector matches the protocol's optional requirement xmppStream(_:didReceive:completionHandler:) (note didReceive rather than didReceiveTrust as the argument label):
func xmppStream(_ sender: XMPPStream?, didReceive trust: SecTrust?, completionHandler: @escaping (_ shouldTrustPeer: Bool) -> Void) {
    completionHandler(true)
}
I need access to my device's camera data (not image data; I already have that), such as the pinhole fx and fy and anything else I can possibly get.
Currently, I'm using AVFoundation's AVCaptureSession with a custom UI. But previously I used UIImagePickerController, which has a delegate method called
imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any])
I was able to retrieve the taken photograph from the info dictionary. It also gave me very detailed information regarding the camera and its capabilities. As it stands, I don't know how to retrieve this same detailed information during AVCaptureSession photography. Please help.
You need to use AVCapturePhotoOutput and not AVCaptureStillImageOutput.
First, the code below needs the following variables in your class:
private let photoOutput = AVCapturePhotoOutput()
private var inProgressPhotoCaptureDelegates = [Int64 : MyAVPhotoCaptureDelegate]()
private let sessionQueue = DispatchQueue(label: "session queue", attributes: [], target: nil) // Communicate with the session
private var videoDeviceOrientation : AVCaptureVideoOrientation = .portrait // this needs updating as the device orientation changes
Add the photo output during AVCaptureSession setup:
// Add photo output.
if session.canAddOutput(photoOutput) {
    session.addOutput(photoOutput)
    self.photoOutput.isHighResolutionCaptureEnabled = true
}
Your capturePhoto() function should be set up as follows:
func capturePhoto(aspectRatio : Float, metaData : NSDictionary?) {
    sessionQueue.async {
        // Update the photo output's connection to match the video orientation of the video preview layer.
        if let photoOutputConnection = self.photoOutput.connection(with: AVMediaType.video) {
            photoOutputConnection.videoOrientation = self.videoDeviceOrientation
        }

        // Capture a JPEG photo with flash set to off and high resolution photo enabled.
        let photoSettings = AVCapturePhotoSettings()
        photoSettings.flashMode = .off
        photoSettings.isHighResolutionPhotoEnabled = true
        if photoSettings.availablePreviewPhotoPixelFormatTypes.count > 0 {
            photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String : photoSettings.availablePreviewPhotoPixelFormatTypes.first!]
        }

        // Use a separate object for the photo capture delegate to isolate each capture life cycle.
        let photoCaptureDelegate = MyAVPhotoCaptureDelegate(requestedPhotoSettings: photoSettings, completed: { [unowned self] photoCaptureDelegate in
            // When the capture is complete, remove the reference to the photo capture delegate so it can be deallocated.
            self.sessionQueue.async { [unowned self] in
                self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
            }
        })

        /*
         The photo output keeps a weak reference to the photo capture delegate, so
         we store it in a dictionary to maintain a strong reference to this object
         until the capture is completed.
        */
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = photoCaptureDelegate
        self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)
    }
}
The MyAVPhotoCaptureDelegate class referenced in the capturePhoto() function above needs to be set up as follows:
import AVFoundation
import ImageIO

class MyAVPhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {

    let requestedPhotoSettings: AVCapturePhotoSettings
    private let completed : (MyAVPhotoCaptureDelegate) -> ()
    private(set) var photoData: Data? = nil

    init(requestedPhotoSettings: AVCapturePhotoSettings, completed: @escaping (MyAVPhotoCaptureDelegate) -> ()) {
        self.requestedPhotoSettings = requestedPhotoSettings
        self.completed = completed
        super.init()
    }

    private func didFinish() {
        completed(self)
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let photoSampleBuffer = photoSampleBuffer {
            if let exif = CMGetAttachment(photoSampleBuffer, kCGImagePropertyExifDictionary as NSString, nil) {
                if let exifDictionary = exif as? NSMutableDictionary {
                    // view exif data
                }
            }
            photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        } else {
            print("Error capturing photo: \(String(describing: error))")
            return
        }
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings, error: Error?) {
        // Use PHPhotoLibrary to save photoData to the photo library
        ...
        // Signal completion so the owner can release this delegate (as in Apple's AVCam sample).
        didFinish()
    }
}
Most of this code comes from my version of AVCam, with the specifics of my implementation removed. I have left out the code for saving to the photo library, but you can extract that from the sample code. You can view the EXIF data at the point where I have commented "view exif data".
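For example, at the "view exif data" point you could pull out individual tags. A minimal sketch, assuming the standard ImageIO kCGImagePropertyExif* keys (which tags are actually populated depends on the device and capture pipeline):
if let exifDictionary = exif as? NSMutableDictionary {
    // Focal length and lens model are the closest EXIF gets to pinhole fx/fy.
    let focalLength = exifDictionary[kCGImagePropertyExifFocalLength as String]
    let lensModel = exifDictionary[kCGImagePropertyExifLensModel as String]
    print("focal length: \(String(describing: focalLength)), lens model: \(String(describing: lensModel))")
}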
I'm trying to use this:
https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html
I'm trying to access the photoData after the photo has been taken so I can upload it to a server.
When the capture button is pressed, a ton of setup code gets run, and then this is called at the very bottom:
self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)
photoCaptureDelegate is defined in another file that comes with the project, and inside of that is this:
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    if let photoSampleBuffer = photoSampleBuffer {
        photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        print("Got photo Data...this is where I want to capture/use the data")
    } else {
        print("Error capturing photo: \(String(describing: error))")
        return
    }
}
So in this photoCaptureDelegate, photoData is getting set to a JPEG of whatever picture is being taken. I then want to be able to use that data back in the view controller that the capture button lives in, so I can upload it to the server using functions I've defined in the main view controller.
How do I grab that photo data and use it back in the other view controller that called self.photoOutput.capturePhoto?
Alternatively, would it be bad practice to just run the post-to-server code directly inside didFinishProcessingPhoto? I could make it so I had access to the variables I need from inside there, but this seems incorrect.
There is no need to do that in the delegate.
In CameraViewController, when you create the PhotoCaptureDelegate (line 533), it takes 3 callbacks, the last one being completed. In that callback you receive the photoCaptureDelegate responsible for the photo, so you can do what you want with its data there:
completed: { [unowned self] photoCaptureDelegate in
    // Save the capture here. The image data is in photoCaptureDelegate.photoData
    self.sessionQueue.async { [unowned self] in
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
    }
}
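As a sketch, assuming the view controller exposes an upload helper (uploadToServer(_:) is a hypothetical name, not part of the AVCam sample), the callback could hand the data straight to it:
completed: { [unowned self] photoCaptureDelegate in
    if let data = photoCaptureDelegate.photoData {
        DispatchQueue.main.async {
            self.uploadToServer(data)   // hypothetical upload function on the view controller
        }
    }
    self.sessionQueue.async { [unowned self] in
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
    }
}
This keeps the networking code in the view controller and leaves the delegate responsible only for the capture life cycle.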