Swift base64 image encoding with EXIF data - ios

I am currently using base64 encoding to convert and send multiple images in a JSON file from my Swift app to my API, using:
let imageData = image.jpegData(compressionQuality: 1.0)
let sSideL = imageData.base64EncodedString(options: .lineLength64Characters)
While extending my API, I would now like to use the rich EXIF data provided by most smartphones, such as lens information, field of view, or the device model. Most important for my current purpose is the "Image Model" tag, in order to identify the device that took the picture.
I noticed that some EXIF data is left in the base64 data coming through my API, but it is limited to very basic information such as the orientation. Also, when I directly print the base64 string in Xcode and analyze it, it contains very little EXIF information. Technically it should be possible, because when I convert the same image with an online base64 converter and analyze the resulting string, I can see EXIF information like "Image Model", etc.
Is there a way to convert my UIImage to a base64 string keeping all EXIF details?
The API represents the main part of my system, so I would like to keep it as simple as possible and not add additional upload parameters.
EDIT
Here is my code to capture the UIImage:
extension CameraController: AVCapturePhotoCaptureDelegate {
    public func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                            resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Swift.Error?) {
        if let error = error {
            // ERROR
        }
        else if let buffer = photoSampleBuffer,
                let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil),
                let image = UIImage(data: data) {
            // SEND IMAGE TO SERVER
        }
        else {
            // UNKNOWN ERROR
        }
    }
}

You can use the newer (iOS 11+) delegate method:
public func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let error = error {
        // ERROR
    } else if let data = photo.fileDataRepresentation() {
        // SEND IMAGE DATA TO SERVER
    } else {
        // UNKNOWN ERROR
    }
}
or the method you are using:
public func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                        resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Swift.Error?) {
    if let error = error {
        // ERROR
    } else if let buffer = photoSampleBuffer,
              let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil) {
        // SEND IMAGE DATA TO SERVER
    } else {
        // UNKNOWN ERROR
    }
}
As leo-dabus mentioned, you need to send the image data itself to the server, because that data is what contains the metadata. If you first create a UIImage and then convert it back to data, you lose the metadata.
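If you still want to send a base64 string inside your JSON payload, a minimal sketch (assuming the newer iOS 11+ delegate shown above, inside the same CameraController extension) is to base64-encode the file data directly instead of round-tripping through UIImage:
import AVFoundation
import Foundation

extension CameraController: AVCapturePhotoCaptureDelegate {
    public func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        if let error = error {
            // ERROR
            return
        }
        // fileDataRepresentation() is the JPEG written by the capture pipeline,
        // EXIF/TIFF block included, so encode it as-is. Going through
        // UIImage(data:) + jpegData(compressionQuality:) re-encodes the image
        // and strips most of that metadata.
        guard let data = photo.fileDataRepresentation() else {
            // UNKNOWN ERROR
            return
        }
        let base64String = data.base64EncodedString(options: .lineLength64Characters)
        // put base64String into the JSON payload exactly as before
    }
}
This keeps the upload format unchanged, so no extra upload parameter is needed; the server just receives a JPEG whose EXIF block is still intact.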

Related

Assignment of completion handler to closure variable in Swift

I have stumbled across the following piece of code and I can't understand exactly how it works.
There is the following property which is populated when a method of AVCapturePhotoCaptureDelegate is called:
var photoCaptureCompletionBlock: ((UIImage?, Error?) -> Void)?
The delegate method is triggered by the following piece of code:
func captureImage(completion: @escaping (UIImage?, Error?) -> Void) {
    let settings = AVCapturePhotoSettings()
    self.photoOutput?.capturePhoto(with: settings, delegate: self)
    self.photoCaptureCompletionBlock = completion
}
The line that triggers the delegate is:
self.photoOutput?.capturePhoto(with: settings, delegate: self)
and immediately after that the completion variable is assigned to self.photoCaptureCompletionBlock
Conceptually I would understand the opposite, i.e. to assign self.photoCaptureCompletionBlock to completion and not the other way around (which is not possible without an inout variable since completion is a let).
What are the mechanics behind this assignment? How does it work?
EDIT: For context, the delegate method that is called is the following:
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?,
                 previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
    if let error = error {
        self.photoCaptureCompletionBlock?(nil, error)
    } else if let buffer = photoSampleBuffer,
              let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buffer, previewPhotoSampleBuffer: nil) {
        let image = UIImage(data: data)
        self.photoCaptureCompletionBlock?(image, nil)
    } else {
        self.photoCaptureCompletionBlock?(nil, CameraControllerError.unknown)
    }
}
Your method captureImage(completion: @escaping (UIImage?, Error?) -> Void) is not part of the AVCapturePhotoCaptureDelegate protocol. It is a custom method in the API of the object that implements that protocol.
Since the full code of that object is not shown, I can only guess, but in this method you start the photo capture and pass in the completion block that should be triggered when the capture finishes.
That completion block is stored in a property of the object, and another delegate method, for example func photoOutput(AVCapturePhotoOutput, didFinishProcessingPhoto: AVCapturePhoto, error: Error?), calls the stored completion block once the photo capture has finished.
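In other words, the assignment just stashes the caller's closure in a property so that the delegate callback, which runs later, can hand the result back. A minimal, self-contained sketch of the pattern (class and property names are illustrative, and it uses the newer AVCapturePhoto delegate method for brevity):
import AVFoundation
import UIKit

final class CameraController: NSObject, AVCapturePhotoCaptureDelegate {

    private let photoOutput = AVCapturePhotoOutput()

    // The caller's closure is parked here until the capture finishes.
    private var photoCaptureCompletionBlock: ((UIImage?, Error?) -> Void)?

    func captureImage(completion: @escaping (UIImage?, Error?) -> Void) {
        // capturePhoto(with:delegate:) returns immediately; the photo is not
        // ready yet, so we cannot call `completion` here. We store it instead.
        photoCaptureCompletionBlock = completion
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // Called by AVFoundation later, once the capture has actually finished.
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        if let error = error {
            photoCaptureCompletionBlock?(nil, error)
        } else if let data = photo.fileDataRepresentation(), let image = UIImage(data: data) {
            photoCaptureCompletionBlock?(image, nil)
        } else {
            photoCaptureCompletionBlock?(nil, nil)
        }
        photoCaptureCompletionBlock = nil
    }
}
The caller then writes cameraController.captureImage { image, error in /* update UI */ }; the closure it passes in is exactly the one that eventually runs inside the delegate method. So the assignment is not copying a value out of the property, it is putting the caller's closure into it.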

Get used exposure duration and ISO values after the capture is complete from the AVCapturePhotoOutput

Background
I am using AVCaptureSession with AVCapturePhotoOutput to save captures as JPEG images.
let captureSession = AVCaptureSession()
let stillImageOutput = AVCapturePhotoOutput()
var captureDevice : AVCaptureDevice?
...
func setupCamera() {
    captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
    if (captureDevice != nil) {
        captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice!))
        if captureSession.canAddOutput(stillImageOutput) {
            captureSession.addOutput(stillImageOutput)
        }
    }
}
The AVCaptureDevice is set to automatically and continuously adjust exposure settings
func configureCamera() {
    do {
        try captureDevice?.lockForConfiguration()
        captureDevice?.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice?.unlockForConfiguration()
    } catch let error as NSError {
        // Errors handled here...
    }
}
The capture is started by
func capture() {
    // Get an instance of AVCapturePhotoSettings class
    let photoSettings = AVCapturePhotoSettings()
    // Set photo settings
    photoSettings.isAutoStillImageStabilizationEnabled = true
    photoSettings.flashMode = .off
    // Call capturePhoto method by passing photo settings and a
    // delegate implementing AVCapturePhotoCaptureDelegate
    stillImageOutput.capturePhoto(with: photoSettings, delegate: self)
}
The parent class is set as an AVCapturePhotoCaptureDelegate and the photoOutput is handled by it
// Delegate
func photoOutput(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?,
                 previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
    // Make sure there is a photo sample buffer
    guard error == nil,
          let photoSampleBuffer = photoSampleBuffer else {
        // Errors handled here
        return
    }
    // Convert the photo sample buffer to JPEG image data by using AVCapturePhotoOutput
    guard let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
        return
    }
    let capturedImage = UIImage.init(data: imageData, scale: 1.0)
    if let image = capturedImage {
        // save photo ...
    }
}
And everything works as it should, however...
Problem
I need to know the exposure duration and ISO values that were used for each capture. The values vary because the camera is set to adjust exposure automatically, and it has to stay that way.
I know the metadata of the capture holds these values, but I can't figure out how to access them.
The exposure duration and ISO values are needed for fine-tuning the exposure to achieve optimal results. After fine-tuning, the capture is started with these manual exposure values:
captureDevice?.setExposureModeCustom(duration: customTime, iso: customISO, completionHandler: nil)
Instead of getting the used ISO and exposure duration from the capture metadata, I read these values just before capturing a photo. When doing it this way it is important to check that the exposure has finished adjusting.
Just before calling the capture:
Check that the auto exposure is not adjusting:
while ((captureDevice?.isAdjustingExposure)!) {
    usleep(100000) // wait 100 msec
}
Read the current exposure parameters:
let current_exposure_duration : CMTime = (captureDevice?.exposureDuration)!
let current_exposure_ISO : Float = (captureDevice?.iso)!
And then take a photo
stillImageOutput.capturePhoto(with: photoSettings, delegate: self)
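If you would rather read the values the capture actually used instead of sampling them just before the capture, they are also present in the EXIF attachment of the photo sample buffer, using the same CMGetAttachment approach shown in the next question's answer. A rough sketch of a helper you could call from inside didFinishProcessingPhoto (keys are standard ImageIO constants):
import AVFoundation
import ImageIO

// Sketch: read the exposure the capture actually used from the sample buffer's
// EXIF attachment; call it with the photoSampleBuffer passed to the delegate.
func logUsedExposure(from photoSampleBuffer: CMSampleBuffer) {
    guard let exif = CMGetAttachment(photoSampleBuffer, kCGImagePropertyExifDictionary, nil) as? [String: Any] else {
        return
    }
    let exposureTime = exif[kCGImagePropertyExifExposureTime as String] as? Double // seconds
    let isoValues = exif[kCGImagePropertyExifISOSpeedRatings as String] as? [Int]  // EXIF stores ISO as an array
    print("Used exposure: \(exposureTime ?? 0) s, ISO: \(isoValues?.first ?? 0)")
}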

Device camera data (not images but actual camera properties)

I need access to my device's camera data (not image data; I already have that), such as the "pinhole" fx and fy and anything else I can possibly get.
Currently, I'm using AVFoundation's 'AVCaptureSession' with a custom UI. But previously I used 'UIImagePickerController' which has a delegate method called
imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any])
I was able to retrieve the taken photograph from the "info" dictionary. It also gave me very detailed information regarding the camera and its capabilities. As it stands, I don't know how to retrieve this same detailed information during 'AVCaptureSession' photography. Please help.
You need to use AVCapturePhotoOutput and not AVCaptureStillImageOutput.
First, the code below needs the following variables in your class:
private let photoOutput = AVCapturePhotoOutput()
private var inProgressPhotoCaptureDelegates = [Int64: MyAVPhotoCaptureDelegate]()
private let sessionQueue = DispatchQueue(label: "session queue", attributes: [], target: nil) // Communicate with the session
private var videoDeviceOrientation: AVCaptureVideoOrientation = .portrait // this needs to be updated as the device orientation changes
Add the photo output during AVCaptureSession setup:
// Add photo output.
if session.canAddOutput(photoOutput) {
    session.addOutput(photoOutput)
    self.photoOutput.isHighResolutionCaptureEnabled = true
}
Your capturePhoto() function should be set up as follows:
func capturePhoto(aspectRatio: Float, metaData: NSDictionary?) {
    sessionQueue.async {
        // Update the photo output's connection to match the video orientation of the video preview layer.
        if let photoOutputConnection = self.photoOutput.connection(with: AVMediaType.video) {
            photoOutputConnection.videoOrientation = self.videoDeviceOrientation
        }
        // Capture a JPEG photo with flash set to off and high resolution photo enabled.
        let photoSettings = AVCapturePhotoSettings()
        photoSettings.flashMode = .off
        photoSettings.isHighResolutionPhotoEnabled = true
        if photoSettings.availablePreviewPhotoPixelFormatTypes.count > 0 {
            photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoSettings.availablePreviewPhotoPixelFormatTypes.first!]
        }
        // Use a separate object for the photo capture delegate to isolate each capture life cycle.
        let photoCaptureDelegate = MyAVPhotoCaptureDelegate(with: photoSettings, completed: { [unowned self] photoCaptureDelegate in
            // When the capture is complete, remove the reference to the photo capture delegate so it can be deallocated.
            self.sessionQueue.async { [unowned self] in
                self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
            }
        })
        /*
         The photo output keeps a weak reference to the photo capture delegate, so
         we store it in a dictionary to maintain a strong reference to this object
         until the capture is completed.
         */
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = photoCaptureDelegate
        self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)
    }
}
The MyAVPhotoCaptureDelegate class referenced in the capturePhoto() function above will need to be set up as follows:
class MyAVPhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {

    let requestedPhotoSettings: AVCapturePhotoSettings
    private(set) var photoData: Data? = nil
    private let completed: (MyAVPhotoCaptureDelegate) -> ()

    init(with requestedPhotoSettings: AVCapturePhotoSettings, completed: @escaping (MyAVPhotoCaptureDelegate) -> ()) {
        self.requestedPhotoSettings = requestedPhotoSettings
        self.completed = completed
    }

    private func didFinish() {
        completed(self)
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhoto photoSampleBuffer: CMSampleBuffer?, previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let photoSampleBuffer = photoSampleBuffer {
            if let exif = CMGetAttachment(photoSampleBuffer, kCGImagePropertyExifDictionary, nil) {
                if let exifDictionary = exif as? NSMutableDictionary {
                    // view exif data
                }
            }
            photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        }
        else {
            print("Error capturing photo: \(String(describing: error))")
            return
        }
    }

    func photoOutput(_ captureOutput: AVCapturePhotoOutput, didFinishCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings, error: Error?) {
        // Use PHPhotoLibrary to save photoData to photo library
        ...
        // Signal completion so the owner can release this delegate instance
        didFinish()
    }
}
Most of this code comes from my version of AVCam with the specifics of my implementation removed. I have left out the code for saving to the photo library but you can extract that from the sample code. You can view the exif data at the point I have commented "view exif data"
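As a rough illustration of what "view exif data" can look like in practice, here is a sketch of a helper that pulls a few concrete fields out of the attachments on the photo sample buffer (keys are standard ImageIO constants; which fields are present depends on the device and capture settings):
import AVFoundation
import ImageIO

// Sketch: inspect a few camera-related fields attached to the sample buffer.
func inspectCameraMetadata(of photoSampleBuffer: CMSampleBuffer) {
    if let exif = CMGetAttachment(photoSampleBuffer, kCGImagePropertyExifDictionary, nil) as? [String: Any] {
        let focalLength = exif[kCGImagePropertyExifFocalLength as String] as? Double // millimetres
        let lensModel = exif[kCGImagePropertyExifLensModel as String] as? String
        let iso = (exif[kCGImagePropertyExifISOSpeedRatings as String] as? [Int])?.first
        print("focal length: \(focalLength ?? 0) mm, lens: \(lensModel ?? "?"), ISO: \(iso ?? 0)")
    }
    // The device make/model usually live in the TIFF dictionary, if the buffer carries one.
    if let tiff = CMGetAttachment(photoSampleBuffer, kCGImagePropertyTIFFDictionary, nil) as? [String: Any] {
        let model = tiff[kCGImagePropertyTIFFModel as String] as? String
        print("device model: \(model ?? "?")")
    }
}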

Swift 3 - Custom camera view - display still image of photo taken after button click

I am using Swift 3, Xcode 8.2.
I have a custom camera view which displays the video feed fine and a button that I want to act as a shutter. When the user taps on the button, I want a picture taken and it to be displayed on the screen. (e.g. like a Snapchat or Facebook Messenger style camera behavior)
Here is my code:
import UIKit
import AVFoundation
class CameraVC: UIViewController, AVCapturePhotoCaptureDelegate {

    // this is where the camera feed from the phone is going to be displayed
    @IBOutlet var cameraView: UIView!

    var shutterButton: UIButton = UIButton.init(type: .custom)

    // manages capture activity and coordinates the flow of data from input devices to capture outputs.
    var capture_session = AVCaptureSession()

    // a capture output for use in workflows related to still photography.
    var session_output = AVCapturePhotoOutput()

    // preview layer that we will have on our view so users can see the photo we took
    var preview_layer = AVCaptureVideoPreviewLayer()

    // still picture image is what we show as the picture taken, frozen on the screen
    var still_picture_image: UIImage!

    ... // more code in viewWillAppear that sets up the camera feed

    // called when the shutter button is pressed
    func shutterButtonPressed() {
        // get the actual video feed and take a photo from that feed
        session_output.capturePhoto(with: AVCapturePhotoSettings.init(format: [AVVideoCodecKey: AVVideoCodecJPEG]), delegate: self as AVCapturePhotoCaptureDelegate)
    }

    func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        // take the session output, get the buffer, and create an image from that buffer
        if let sampleBuffer = photoSampleBuffer,
           let previewBuffer = previewPhotoSampleBuffer,
           let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
            print("Here") // doesn't get here
        }
    }
It doesn't seem to print "Here" when running this code and I can't find any Swift 3 tutorials on how to display this image. I'm guessing I want to take the imageData and assign it to my still_picture_image and overlay that over the camera feed somehow.
Any help or a point in the right direction would be a great help.
EDIT
After adding the following to my code:
if let error = error {
print(error.localizedDescription)
}
But I still don't get any error printed.
Add the following code into your delegate method to print out the error being thrown:
if let error = error {
print(error.localizedDescription)
}
Once you get your error resolved, I think this post should help you to extract the image: Taking photo with custom camera Swift 3
Okay, I figured out my problem:
First, drag a UIImageView onto the Storyboard and have it take up the entire screen. This is where the still picture will be displayed after pressing the shutter button.
Create that variable in the code and link it:
@IBOutlet weak var stillPicture: UIImageView!
Then, in viewDidLoad make sure that you insert the UIImageView on top of the camera view.
self.view.insertSubview(stillPicture, aboveSubview: your_camera_view)
This is the function that is called when the shutter button is clicked:
func shutterButtonPressed() {
    let settings = AVCapturePhotoSettings()
    let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
    let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
                         kCVPixelBufferWidthKey as String: 160,
                         kCVPixelBufferHeightKey as String: 160]
    settings.previewPhotoFormat = previewFormat
    session_output.capturePhoto(with: settings, delegate: self)
}
Then, in your capture delegate:
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    if let error = error {
        print(error.localizedDescription)
    }
    // take the session output, get the buffer, and create an image from that buffer
    if let sampleBuffer = photoSampleBuffer,
       let previewBuffer = previewPhotoSampleBuffer,
       let dataImage = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewBuffer) {
        // this is the image that the user has taken!
        let takenImage: UIImage = UIImage(data: dataImage)!
        stillPicture?.image = takenImage
    } else {
        print("Error setting up photo capture")
    }
}

Apple's AVCamera photo capture

I'm trying to use this:
https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html
I'm trying to access the photoData after the photo has been taken so I can upload it to a server.
When the capture button is pressed, a ton of setup code gets run, and then this is called at the very bottom:
self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)
photoCaptureDelegate is another file that comes along with the project, and inside of that is this:
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    if let photoSampleBuffer = photoSampleBuffer {
        photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        print("Got photo Data...this is where I want to capture/use the data")
    }
    else {
        print("Error capturing photo: \(error)")
        return
    }
}
So in this photoCaptureDelegate, "photoData" is getting set to a JPEG of whatever picture was taken. I then want to be able to use that data back in the view controller that holds the capturePhoto button, so I can upload it to the server using functions I've defined in the main view controller.
How do I grab that photo data and use it back in the other view controller that called self.photoOutput.capturePhoto?
Alternatively, would it be bad practice to just run the posting-to-server code directly inside didFinishProcessingPhoto? I could make it so I had access to the variables I need from inside there, but this seems incorrect.
There is no need to do that in the delegate.
In the call in CameraViewController where you create the PhotoCaptureDelegate (line 533), it has 3 callbacks, the last one being completed. In that callback you receive the photoCaptureDelegate responsible for the photo, and it holds your data, so you can do what you want there:
completed: { [unowned self] photoCaptureDelegate in
    // Save capture here. Image data is in here:
    // photoCaptureDelegate.photoData
    self.sessionQueue.async { [unowned self] in
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
    }
}
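For example, a minimal sketch of that completed callback with an upload wired in (uploadToServer(_:) is a hypothetical helper on your view controller; photoData is the property the delegate filled in didFinishProcessingPhoto):
completed: { [unowned self] photoCaptureDelegate in
    if let data = photoCaptureDelegate.photoData {
        // Hop off the session queue before kicking off UI work or networking callbacks.
        DispatchQueue.main.async {
            self.uploadToServer(data) // hypothetical upload function defined on the view controller
        }
    }
    self.sessionQueue.async { [unowned self] in
        self.inProgressPhotoCaptureDelegates[photoCaptureDelegate.requestedPhotoSettings.uniqueID] = nil
    }
}
Keeping the networking in this closure, rather than inside PhotoCaptureDelegate, leaves the delegate reusable and keeps the upload logic where the rest of your server code already lives.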
