I have successfully implemented an Image Picker in my app - it pulls pictures from the library and you can select specific images.
Now, after the user selects the image, I want to store it in a variable in my code so that I can upload it to a database.
Here's what I currently have in my code:
public class PhotoPickerService : IPhotoPickerService
{
TaskCompletionSource<Stream> taskCompletionSource;
UIImagePickerController imagePicker;
public Task<Stream> GetImageStreamAsync()
{
//Create and define UIImagePickerController
imagePicker = new UIImagePickerController
{
SourceType = UIImagePickerControllerSourceType.PhotoLibrary,
MediaTypes = UIImagePickerController.AvailableMediaTypes(UIImagePickerControllerSourceType.PhotoLibrary)
};
//Set Event Handlers
imagePicker.FinishedPickingMedia += OnImagePickerFinishedPickingMedia;
imagePicker.Canceled += OnImagePickerCancelled;
//Present UIImagePickerController
UIWindow window = UIApplication.SharedApplication.KeyWindow;
var viewController = window.RootViewController;
viewController.PresentViewController(imagePicker, true, null);
//Return Task Object
taskCompletionSource = new TaskCompletionSource<Stream>();
return taskCompletionSource.Task;
}
void OnImagePickerFinishedPickingMedia(object sender, UIImagePickerMediaPickedEventArgs args)
{
//assigns var image to the edited image if there is one, otherwise it'll assign it to the original image
UIImage image = args.EditedImage ?? args.OriginalImage;
if (image != null)
{
//Convert UIImage to .NET stream object
NSData data;
if (args.ReferenceUrl.PathExtension.Equals("png", StringComparison.OrdinalIgnoreCase))
{
data = image.AsPNG();
//Console.WriteLine(data);
}
else
{
data = image.AsJPEG(1);
Console.WriteLine(data);
}
Stream stream = data.AsStream();
UnregisterEventHandlers();
taskCompletionSource.SetResult(stream);
}
else
{
UnregisterEventHandlers();
taskCompletionSource.SetResult(null);
}
imagePicker.DismissModalViewController(true);
}
void OnImagePickerCancelled(object sender, EventArgs args)
{
UnregisterEventHandlers();
taskCompletionSource.SetResult(null);
imagePicker.DismissModalViewController(true);
}
void UnregisterEventHandlers()
{
imagePicker.FinishedPickingMedia -= OnImagePickerFinishedPickingMedia;
imagePicker.Canceled -= OnImagePickerCancelled;
}
}
Here is what I'm wondering: I need to upload this image to a database column with the datatype varbinary. The image currently has the datatype NSData, which, as I've researched, is used to store raw binary data such as pictures. Am I storing this correctly right now to upload to a database, or do I need to do some sort of conversion?
You should make your ViewController conform to UIImagePickerControllerDelegate (and UINavigationControllerDelegate, which the picker's delegate property also requires):
import UIKit
import AVFoundation
import MobileCoreServices
class ViewController: UIViewController {
var chosenImageData: Data?
fileprivate lazy var imagePicker: UIImagePickerController = {
let imagePicker = UIImagePickerController()
imagePicker.delegate = self
imagePicker.mediaTypes = [(kUTTypeImage as String)]
imagePicker.allowsEditing = false
return imagePicker
}()
@IBAction func cameraButtonPressed(_ sender: Any) {
self.imagePicker.sourceType = .camera // .photoLibrary
self.present(self.imagePicker, animated: true, completion: nil)
}
}
extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
if let image = info[UIImagePickerController.InfoKey.originalImage] as? UIImage {
self.chosenImageData = image.jpeg(.lowest)
}
picker.dismiss(animated: true, completion: nil)
}
}
Use this extension to convert UIImage to Data
import Foundation
// MARK: - Compress UIImage Quality
extension UIImage {
enum JPEGQuality: CGFloat {
case lowest = 0
case low = 0.25
case medium = 0.5
case high = 0.75
case highest = 1
}
/// Returns the data for the specified image in JPEG format.
/// If the image object’s underlying image data has been purged, calling this function forces that data to be reloaded into memory.
/// - returns: A data object containing the JPEG data, or nil if there was a problem generating the data. This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format.
func jpeg(_ quality: JPEGQuality) -> Data? {
return self.jpegData(compressionQuality: quality.rawValue)
}
}
After you have successfully retrieved the selected image data, you can save or upload it to your online or local DB.
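As an illustration, here is a minimal sketch of uploading that Data with URLSession; the endpoint URL and content type are hypothetical placeholders for whatever your backend actually expects:
import Foundation
func upload(imageData: Data, completion: @escaping (Error?) -> Void) {
// Hypothetical endpoint; replace with your real upload URL
var request = URLRequest(url: URL(string: "https://example.com/upload")!)
request.httpMethod = "POST"
request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
// Send the raw JPEG bytes as the request body
URLSession.shared.uploadTask(with: request, from: imageData) { _, _, error in
completion(error)
}.resume()
}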
This answer is an alternative that you might consider instead of using UIImagePickerController.
The function below fetches the photos from the phone's library, and its result, an array of [UIImage], can be stored in a variable as you described. To let the user choose which image to upload, you would build, for example, a UICollectionView that displays these images; as the user selects photos in the collection view, you can save their indexes to determine which pictures to upload.
Note: you can specify the request options (the delivery mode defines the quality of the image; you can use .fastFormat, .opportunistic, or .highQualityFormat):
requestOptions.isSynchronous = true
requestOptions.deliveryMode = .opportunistic
This is the function; copy-pasting it into your project will do the trick (it requires import Photos and photo library authorization):
private func getPhotos() -> [UIImage]? {
// Image array declaration
var imageArray = [UIImage]()
// Image manager and request options
let imageManager = PHImageManager.default()
let requestOptions = PHImageRequestOptions()
requestOptions.isSynchronous = true
requestOptions.deliveryMode = .opportunistic
// Sorted by creation date, newest first
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
// Perform the fetch and append each result to the image array
// (fetchAssets returns a non-optional PHFetchResult, so no unwrapping is needed)
let fetchResult = PHAsset.fetchAssets(with: .image, options: fetchOptions)
if fetchResult.count > 0 {
for i in 0..<fetchResult.count {
// With isSynchronous = true, this handler runs before requestImage returns
imageManager.requestImage(for: fetchResult.object(at: i), targetSize: CGSize(width: 280, height: 280), contentMode: .aspectFill, options: requestOptions) { image, _ in
if let image = image {
imageArray.append(image)
}
}
}
return imageArray
} else {
print("You don't have any photos!")
}
return nil
}
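A possible call site, sketched under the assumption that getPhotos() lives in a view controller: request photo library authorization first, then fetch.
import Photos
// Hypothetical usage from within the same class as getPhotos()
PHPhotoLibrary.requestAuthorization { status in
guard status == .authorized else { return }
DispatchQueue.main.async {
if let photos = self.getPhotos() {
print("Fetched \(photos.count) thumbnails")
}
}
}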
A GIF image is loaded into a UIImageView (by using this extension) and another UIImageView is overlaid on it. Everything works fine, but the problem is that when I combine the two via the code below, the result is a still image (.jpg). I want to combine them so that the result is still an animated image (.gif).
let bottomImage = gifPlayer.image
let topImage = topImageView.image // the overlay image view's image
let size = CGSize(width: (bottomImage?.size.width)!, height: (bottomImage?.size.height)!)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage!.draw(in: areaSize)
topImage!.draw(in: areaSize, blendMode: .normal, alpha: 0.8)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
When an animated GIF is loaded into a UIImageView, it becomes an array of UIImage frames.
We can set that array with (for example):
imageView.animationImages = arrayOfImages
imageView.animationDuration = 1.0
or, we can set the .image property to an animatedImage -- that's how the GIF-Swift code you are using works:
if let img = UIImage.gifImageWithName("funny") {
bottomImageView.image = img
}
in that case, the image itself also contains the duration:
img.duration
So, to generate a new animated GIF with the border/overlay image, you need to get that array of images and generate each "frame" with the border added to it.
Here's a quick example...
This assumes:
you are using GIF-Swift
you have added bottomImageView and topImageView in Storyboard
you have a GIF in the bundle named "funny.gif" (edit the code if yours is different)
you have a "border.png" in assets (again, edit the code as needed)
and you have a button to connect to the @IBAction:
import UIKit
import ImageIO
import UniformTypeIdentifiers
class animImageViewController: UIViewController {
@IBOutlet var bottomImageView: UIImageView!
@IBOutlet var topImageView: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
if let img = UIImage.gifImageWithName("funny") {
bottomImageView.image = img
}
if let img = UIImage(named: "border") {
topImageView.image = img
}
}
@IBAction func saveButtonTapped(_ sender: Any) {
generateNewGif(from: bottomImageView, with: topImageView)
}
func generateNewGif(from animatedImageView: UIImageView, with overlayImageView: UIImageView) {
var images: [UIImage]!
var delayTime: Double!
guard let overlayImage = overlayImageView.image else {
print("Could not get top / overlay image!")
return
}
if let imgs = animatedImageView.image?.images {
// the image view is using .image = animatedImage
// unwrap the duration
if let dur = animatedImageView.image?.duration {
images = imgs
delayTime = dur / Double(images.count)
} else {
print("Image view is using an animatedImage, but could not get the duration!" )
return
}
} else if let imgs = animatedImageView.animationImages {
// the image view is using .animationImages
images = imgs
delayTime = animatedImageView.animationDuration / Double(images.count)
} else {
print("Could not get images array!")
return
}
// we now have a valid [UIImage] array, and
// a valid inter-frame duration, and
// a valid "overlay" UIImage
// generate unique file name
let destinationFilename = String(NSUUID().uuidString + ".gif")
// create empty file in temp folder to hold gif
let destinationURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(destinationFilename)
// metadata for gif file to describe it as an animated gif
let fileDictionary = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFLoopCount : 0]]
// create the file and set the file properties
guard let animatedGifFile = CGImageDestinationCreateWithURL(destinationURL as CFURL, UTType.gif.identifier as CFString, images.count, nil) else {
print("error creating file")
return
}
CGImageDestinationSetProperties(animatedGifFile, fileDictionary as CFDictionary)
let frameDictionary = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFDelayTime: delayTime]]
// use original size of gif
let sz: CGSize = images[0].size
let renderer: UIGraphicsImageRenderer = UIGraphicsImageRenderer(size: sz)
// loop through the images
// drawing the top/border image on top of each "frame" image with 80% alpha
// then writing the combined image to the gif file
images.forEach { img in
let combinedImage = renderer.image { ctx in
img.draw(at: .zero)
overlayImage.draw(in: CGRect(origin: .zero, size: sz), blendMode: .normal, alpha: 0.8)
}
guard let cgFrame = combinedImage.cgImage else {
print("error creating cgImage")
return
}
// add the combined image to the new animated gif
CGImageDestinationAddImage(animatedGifFile, cgFrame, frameDictionary as CFDictionary)
}
// done writing
CGImageDestinationFinalize(animatedGifFile)
print("New GIF created at:")
print(destinationURL)
print()
// do something with the newly created file...
// maybe move it to documents folder, or
// upload it somewhere, or
// save to photos library, etc
}
}
Notes:
the code is based on this article: How to Make an Animated GIF Using Swift
this should be considered Example Code Only!!! -- a starting-point for you, not a "production ready" solution.
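As one concrete follow-up to the "do something with the newly created file" comment, here is a sketch (my addition, not part of the referenced article) that saves the generated GIF to the Photos library; it assumes the app has the photo library "add" usage description in Info.plist:
import Photos
func saveGifToPhotos(at url: URL) {
PHPhotoLibrary.shared().performChanges({
// Create a new Photos asset from the GIF file on disk
_ = PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: url)
}) { success, error in
print(success ? "Saved GIF to Photos" : "Save failed: \(String(describing: error))")
}
}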
We are writing an app which analyzes real-world 3D data by using the TrueDepth camera on the front of an iPhone, and an AVCaptureSession configured to produce AVDepthData along with image data. This worked great on iPhone 12, but the same code on iPhone 13 produces an unwanted "smoothing" effect which makes the scene impossible to process and breaks our app. We are unable to find any information on this effect, from Apple or otherwise, much less how to avoid it, so we are asking you experts.
At the bottom of this post (Figure 3) is our code which configures the capture session, using an AVCaptureDataOutputSynchronizer, to produce frames of 640x480 image and depth data. I boiled it down as much as possible, sorry it's so long. The main two parts are the configure function, which sets up our capture session, and the dataOutputSynchronizer function, near the bottom, which fires when a synced set of data is available. In the latter function I've included my code which extracts the information from the AVDepthData object, including looping through all 640x480 depth data points (in meters). I've excluded further processing for brevity (believe it or not :)).
On an iPhone 12 device, the PNG data and the depth data merge nicely. The front view and side view of the merged pointcloud are below (Figure 1) . The angles visible in the side view are due to the application of the focal length which "de-perspectives" the data and places them in their proper position in xyz space.
The same code on an iPhone 13 produces depth maps that result in the point cloud further below (Figure 2 -- straight-on view, angled view, and side view). There is no longer any clear distinction between objects and the background because the depth data appears to be "smoothed" between the mannequin and the background -- i.e., there are seven or eight points between the subject and background that are not realistic and make it impossible to do any meaningful processing such as segmenting the scene.
Has anyone else encountered this issue, or have any insight into how we might change our code to avoid it? Any help or ideas are MUCH appreciated, since this is a definite showstopper (we can't tell people to only run our App on older phones :)). Thank you!
Figure 1 -- Merged depth data and image into point cloud, from iPhone 12
Figure 2 -- Merged depth data and image into point cloud, from iPhone 13; unwanted smoothing effect visible
Figure 3 -- Our configuration code and capture handler; edited to remove downstream processing of captured data (which was basically formatting it into an XML file and uploading to the cloud)
import Foundation
import Combine
import AVFoundation
import Photos
import UIKit
import FirebaseStorage
public struct AlertError {
public var title: String = ""
public var message: String = ""
public var primaryButtonTitle = "Accept"
public var secondaryButtonTitle: String?
public var primaryAction: (() -> ())?
public var secondaryAction: (() -> ())?
public init(title: String = "", message: String = "", primaryButtonTitle: String = "Accept", secondaryButtonTitle: String? = nil, primaryAction: (() -> ())? = nil, secondaryAction: (() -> ())? = nil) {
self.title = title
self.message = message
self.primaryButtonTitle = primaryButtonTitle
self.secondaryButtonTitle = secondaryButtonTitle
self.primaryAction = primaryAction
self.secondaryAction = secondaryAction
}
}
///////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////
//
//
// this is the CameraService class, which configures and runs a capture session
// which acquires syncronized image and depth data
// using an AVCaptureDataOutputSynchronizer
//
//
///////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////
public class CameraService: NSObject,
AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureDepthDataOutputDelegate,
AVCaptureDataOutputSynchronizerDelegate,
MyFirebaseProtocol,
ObservableObject{
@Published public var shouldShowAlertView = false
@Published public var shouldShowSpinner = false
public var labelStatus: String = "Ready"
var images: [UIImage?] = []
public var alertError: AlertError = AlertError()
public let session = AVCaptureSession()
var isSessionRunning = false
var isConfigured = false
var setupResult: SessionSetupResult = .success
private let sessionQueue = DispatchQueue(label: "session queue") // Communicate with the session and other session objects on this queue.
@objc dynamic var videoDeviceInput: AVCaptureDeviceInput!
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInTrueDepthCamera], mediaType: .video, position: .front)
var videoCaptureDevice : AVCaptureDevice? = nil
let videoDataOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput() // Define frame output.
let depthDataOutput = AVCaptureDepthDataOutput()
var outputSynchronizer: AVCaptureDataOutputSynchronizer? = nil
let dataOutputQueue = DispatchQueue(label: "video data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
var scanStateCounter: Int = 0
var m_DepthDatasetsToUpload = [AVCaptureSynchronizedDepthData]()
var m_FrameBufferToUpload = [AVCaptureSynchronizedSampleBufferData]()
var firebaseDepthDatasetsArray: [String] = []
@Published var firebaseImageUploadCount = 0
@Published var firebaseTextFileUploadCount = 0
public func configure() {
/*
Setup the capture session.
In general, it's not safe to mutate an AVCaptureSession or any of its
inputs, outputs, or connections from multiple threads at the same time.
Don't perform these tasks on the main queue because
AVCaptureSession.startRunning() is a blocking call, which can
take a long time. Dispatch session setup to the sessionQueue, so
that the main queue isn't blocked, which keeps the UI responsive.
*/
sessionQueue.async {
self.configureSession()
}
}
// MARK: Checks for user's permissions
public func checkForPermissions() {
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
// The user has previously granted access to the camera.
break
case .notDetermined:
/*
The user has not yet been presented with the option to grant
video access. Suspend the session queue to delay session
setup until the access request has completed.
*/
sessionQueue.suspend()
AVCaptureDevice.requestAccess(for: .video, completionHandler: { granted in
if !granted {
self.setupResult = .notAuthorized
}
self.sessionQueue.resume()
})
default:
// The user has previously denied access.
setupResult = .notAuthorized
DispatchQueue.main.async {
self.alertError = AlertError(title: "Camera Access", message: "SwiftCamera doesn't have access to use your camera, please update your privacy settings.", primaryButtonTitle: "Settings", secondaryButtonTitle: nil, primaryAction: {
UIApplication.shared.open(URL(string: UIApplication.openSettingsURLString)!,
options: [:], completionHandler: nil)
}, secondaryAction: nil)
self.shouldShowAlertView = true
}
}
}
// MARK: Session Management
// Call this on the session queue.
/// - Tag: ConfigureSession
private func configureSession() {
if setupResult != .success {
return
}
session.beginConfiguration()
session.sessionPreset = AVCaptureSession.Preset.vga640x480
// Add video input.
do {
var defaultVideoDevice: AVCaptureDevice?
let frontCameraDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)
// Use the front TrueDepth camera as the default video device.
defaultVideoDevice = frontCameraDevice
videoCaptureDevice = defaultVideoDevice
guard let videoDevice = defaultVideoDevice else {
print("Default video device is unavailable.")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
if session.canAddInput(videoDeviceInput) {
session.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
} else if session.inputs.isEmpty == false {
self.videoDeviceInput = videoDeviceInput
} else {
print("Couldn't add video device input to the session.")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
} catch {
print("Couldn't create video device input: \(error)")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
// MARK: add video output to session
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
videoDataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA)] as [String : Any]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera_frame_processing_queue"))
if session.canAddOutput(self.videoDataOutput) {
session.addOutput(self.videoDataOutput)
} else if session.outputs.contains(videoDataOutput) {
} else {
print("Couldn't create video device output")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
guard let connection = self.videoDataOutput.connection(with: AVMediaType.video),
connection.isVideoOrientationSupported else { return }
connection.videoOrientation = .portrait
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
// MARK: add depth output to session
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Add a depth data output
if session.canAddOutput(depthDataOutput) {
session.addOutput(depthDataOutput)
depthDataOutput.isFilteringEnabled = false
depthDataOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth_frame_processing_queue"))
if let connection = depthDataOutput.connection(with: .depthData) {
connection.isEnabled = true
} else {
print("No AVCaptureConnection")
}
} else if session.outputs.contains(depthDataOutput){
} else {
print("Could not add depth data output to the session")
session.commitConfiguration()
return
}
// Search for highest resolution with half-point depth values
let depthFormats = videoCaptureDevice!.activeFormat.supportedDepthDataFormats
let filtered = depthFormats.filter({
CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16
})
let selectedFormat = filtered.max(by: {
first, second in CMVideoFormatDescriptionGetDimensions(first.formatDescription).width < CMVideoFormatDescriptionGetDimensions(second.formatDescription).width
})
do {
try videoCaptureDevice!.lockForConfiguration()
videoCaptureDevice!.activeDepthDataFormat = selectedFormat
videoCaptureDevice!.unlockForConfiguration()
} catch {
print("Could not lock device for configuration: \(error)")
session.commitConfiguration()
return
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Use an AVCaptureDataOutputSynchronizer to synchronize the video data and depth data outputs.
// The first output in the dataOutputs array, in this case the AVCaptureVideoDataOutput, is the "master" output.
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [videoDataOutput, depthDataOutput])
outputSynchronizer!.setDelegate(self, queue: dataOutputQueue)
session.commitConfiguration()
self.isConfigured = true
//self.start()
}
// MARK: Device Configuration
/// - Tag: Stop capture session
public func stop(completion: (() -> ())? = nil) {
sessionQueue.async {
//print("entered stop")
if self.isSessionRunning {
//print(self.setupResult)
if self.setupResult == .success {
//print("entered success")
DispatchQueue.main.async{
self.session.stopRunning()
self.isSessionRunning = self.session.isRunning
if !self.session.isRunning {
DispatchQueue.main.async {
completion?()
}
}
}
}
}
}
}
/// - Tag: Start capture session
public func start() {
// We use our capture session queue to ensure our UI runs smoothly on the main thread.
sessionQueue.async {
if !self.isSessionRunning && self.isConfigured {
switch self.setupResult {
case .success:
self.session.startRunning()
self.isSessionRunning = self.session.isRunning
if self.session.isRunning {
}
case .configurationFailed, .notAuthorized:
print("Application not authorized to use camera")
DispatchQueue.main.async {
self.alertError = AlertError(title: "Camera Error", message: "Camera configuration failed. Either your device camera is not available or its missing permissions", primaryButtonTitle: "Accept", secondaryButtonTitle: nil, primaryAction: nil, secondaryAction: nil)
self.shouldShowAlertView = true
}
}
}
}
}
// ------------------------------------------------------------------------
// MARK: CAPTURE HANDLERS
// ------------------------------------------------------------------------
public func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer, didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
//printWithTime("Capture")
guard let syncedDepthData: AVCaptureSynchronizedDepthData =
synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData else {
return
}
guard let syncedVideoData: AVCaptureSynchronizedSampleBufferData =
synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
return
}
///////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////
//
//
// Below is the code that extracts the information from depth data
// The depth data is 640x480, which matches the size of the synchronized image
// I save this info to a file, upload it to the cloud, and merge it with the image
// on a PC to create a pointcloud
//
//
///////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////
let depth_data : AVDepthData = syncedDepthData.depthData
let cvpixelbuffer : CVPixelBuffer = depth_data.depthDataMap
let height : Int = CVPixelBufferGetHeight(cvpixelbuffer)
let width : Int = CVPixelBufferGetWidth(cvpixelbuffer)
let quality : AVDepthData.Quality = depth_data.depthDataQuality
let accuracy : AVDepthData.Accuracy = depth_data.depthDataAccuracy
let pixelsize : Float = depth_data.cameraCalibrationData!.pixelSize
let camcaldata : AVCameraCalibrationData = depth_data.cameraCalibrationData!
let intmat : matrix_float3x3 = camcaldata.intrinsicMatrix
let cal_lensdistort_x : CGFloat = camcaldata.lensDistortionCenter.x
let cal_lensdistort_y : CGFloat = camcaldata.lensDistortionCenter.y
let cal_matrix_width : CGFloat = camcaldata.intrinsicMatrixReferenceDimensions.width
let cal_matrix_height : CGFloat = camcaldata.intrinsicMatrixReferenceDimensions.height
let intrinsics_fx : Float = camcaldata.intrinsicMatrix.columns.0.x
let intrinsics_fy : Float = camcaldata.intrinsicMatrix.columns.1.y
let intrinsics_ox : Float = camcaldata.intrinsicMatrix.columns.2.x
let intrinsics_oy : Float = camcaldata.intrinsicMatrix.columns.2.y
let pixelformattype : OSType = CVPixelBufferGetPixelFormatType(cvpixelbuffer)
CVPixelBufferLockBaseAddress(cvpixelbuffer, CVPixelBufferLockFlags(rawValue: 0))
// The buffer holds 16-bit half-precision floats (kCVPixelFormatType_DepthFloat16)
let float16Buffer = unsafeBitCast(CVPixelBufferGetBaseAddress(cvpixelbuffer), to: UnsafeMutablePointer<Float16>.self)
let float16PerRow = CVPixelBufferGetBytesPerRow(cvpixelbuffer) / 2
for row in 0..<height
{
for col in 0..<width
{
let depth = float16Buffer[row * float16PerRow + col]
/////////////////////////
// SAVE DEPTH VALUE 'depth' to FILE FOR PROCESSING
}
}
CVPixelBufferUnlockBaseAddress(cvpixelbuffer, CVPixelBufferLockFlags(rawValue: 0))
}
Please don't judge me, I'm just learning Swift.
Recently I installed the MetalPetal framework and followed the instructions:
https://github.com/MetalPetal/MetalPetal#example-code
But I get an error because of MTIContext. Maybe I have to declare something more for MetalPetal?
My Code:
import UIKit
import MetalPetal
import CoreGraphics
class ViewController: UIViewController {
@IBOutlet weak var image1: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
weak var image: UIImage?
image = image1.image
var ciImage = CIImage(image: image!)
var cgImage1 = convertCIImageToCGImage(inputImage: ciImage!)
let imageFromCGImage = MTIImage(cgImage: cgImage1!)
let inputImage = imageFromCGImage
let filter = MTISaturationFilter()
filter.saturation = 1
filter.inputImage = inputImage
let outputImage = filter.outputImage
let context = MTIContext()
do {
try context.render(outputImage, to: pixelBuffer)
var image3: CIImage? = try context.makeCIImage(from: outputImage!)
//context.makeCIImage(from: image)
//context.makeCGImage(from: image)
} catch {
print(error)
}
// Do any additional setup after loading the view, typically from a nib.
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
let context = CIContext(options: nil)
if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
return cgImage
}
return nil
}
}
@YuAo
Input Image
A UIImage is backed by either an underlying Quartz image (which can be retrieved with cgImage) or an underlying Core Image (which can be retrieved from the UIImage with ciImage).
MTIImage offers constructors for both types.
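Sketched briefly (assuming image is the UIImage being filtered and filter is the MTISaturationFilter from the test code below; the ciImage initializer is taken from MetalPetal's documented creation options):
// Pick the constructor that matches the UIImage's backing store
if let cgImage = image.cgImage {
filter.inputImage = MTIImage(cgImage: cgImage) // Quartz-backed
} else if let ciImage = image.ciImage {
filter.inputImage = MTIImage(ciImage: ciImage) // Core Image-backed
}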
MTIContext
An MTIContext must be initialized with a device, which can be retrieved by calling MTLCreateSystemDefaultDevice().
Rendering
Rendering to a pixel buffer is not needed. We can get the result by calling makeCGImage.
Test
I've taken your source code from above and slightly adapted it to the aforementioned points.
I also added a second UIImageView to see the result of the filtering, and changed the saturation to 0 to verify that the filter works.
If the GPU or shaders are involved, it makes sense to test on a real device rather than on the simulator.
The result looks like this:
In the upper area you see the original jpg, in the lower area the filter is applied.
Swift
The simplified Swift code that produces this result looks like this:
override func viewDidLoad() {
super.viewDidLoad()
guard let image = UIImage(named: "regensburg.jpg") else { return }
guard let cgImage = image.cgImage else { return }
imageView1.image = image
let filter = MTISaturationFilter()
filter.saturation = 0
filter.inputImage = MTIImage(cgImage: cgImage)
if let device = MTLCreateSystemDefaultDevice(),
let outputImage = filter.outputImage {
do {
let context = try MTIContext(device: device)
let filteredImage = try context.makeCGImage(from: outputImage)
imageView2.image = UIImage(cgImage: filteredImage)
} catch {
print(error)
}
}
}
I've been wrestling with this issue ad nauseam but still can't figure out what is causing the crash. I instantiate the camera from a button; thereafter, when the user chooses the photo, I add it to my images array and reload the collectionView. After approximately 20 times I get the output "Connection to assets was interrupted or assets died" in the console, subsequently followed by a crash. When I remove the collectionView?.reloadData() call from within the delegate method didFinishPickingMediaWithInfo, it doesn't crash. This has left me extremely confused. See code below:
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
if let image = info["UIImagePickerControllerOriginalImage"] as? UIImage {
images.append(image)
// when I remove this method the app doesn't crash!
collectionView?.reloadData()
collectionView?.layoutIfNeeded()
dismiss(animated: true, completion: nil)
}
}
This is the solution. Note that the compression size is hard-coded; you may want to adapt the code to your needs, or refactor and improve the method.
func imageCompression(_ imageDict: [String : UIImage]) -> [String : Data] {
var compressedImages = [String : Data]()
for (key, value) in imageDict {
var resizeAttempts = 5
var compressionRatio: CGFloat = 1
var imageData = UIImageJPEGRepresentation(value, compressionRatio)
if let data = imageData, data.count <= 90000 {
compressedImages[key] = data
} else {
// Halve the quality until the data fits or we run out of attempts
while let data = imageData, data.count > 90000, resizeAttempts > 0 {
resizeAttempts -= 1
compressionRatio = compressionRatio * 0.5
imageData = UIImageJPEGRepresentation(value, compressionRatio)
compressedImages[key] = imageData
print("image size now is \(imageData?.count ?? 0)")
}
}
}
return compressedImages
}
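A hypothetical call site (profileImage and coverImage are placeholder UIImage values, not from the original post):
let compressed = imageCompression(["profile": profileImage, "cover": coverImage])
for (key, data) in compressed {
print("\(key): \(data.count) bytes")
}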
I am making a custom camera view, in which I take photos from the gallery and from the camera. The photo gallery works OK, but the camera does not. I'm using UIImagePickerController for it, and after taking 3-4 pictures it causes memory leaks and shuts down the app. I'm properly presenting and dismissing the view controller, but it creates memory leak issues anyway. I used the Leaks instrument to track down the issue and found that UIImagePickerController creates a new instance every time it appears to take a photo:
AVFoundation -[AVCapturePhotoOutput init]
NSMutableArray AVFoundation -[AVCapturePhotoOutput init]
Please guide me on how I can resolve it, because I'm not good at managing memory leaks.
Edit:
This is the didFinishPickingMediaWithInfo delegate method:
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]){
if let image = info[UIImagePickerControllerOriginalImage] as? UIImage{
self.delegate?.didFinishTakingPhoto(image)
picker.dismissViewControllerAnimated(true, completion: { () -> Void in
self.popMe(false)
})
}
}
func didFinishTakingPhoto(image: UIImage)
{
self.imageView.image = image;
self.startActivity("", detailMsg: "")
self.view.userInteractionEnabled = false
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) { () -> Void in
if let chokusItem = self.item {
var size = CGSizeMake(600.0, CGFloat.max)
if Global.shared.highQualityPhotoEnables {
size.width = 900.0
}
let scaledImage = self.imageView.image!.resizedImageWithContentMode(UIViewContentMode.ScaleAspectFit, bounds: size, interpolationQuality: CGInterpolationQuality.High)
let thumbSize = CGSizeMake(80.0, CGFloat.max)
self.thumbImage = self.imageView.image!.resizedImageWithContentMode(UIViewContentMode.ScaleAspectFit, bounds: thumbSize, interpolationQuality: CGInterpolationQuality.High)
self.photo = PhotoViewModel(image: scaledImage, parent: chokusItem)
let delay = 0 * Double(NSEC_PER_SEC)
let time = dispatch_time(DISPATCH_TIME_NOW, Int64(delay))
dispatch_after(time, dispatch_get_main_queue(), {
self.imageView.image = scaledImage
self.stopActivity()
self.removeCommentTableViews()
self.removeCommentViews()
self.view.userInteractionEnabled = true
self.showPhotoLimitAlertIfRequired()
})
if Global.shared.shouldSavePhotoToGallery {
let assetsLibrary = ALAssetsLibrary()
assetsLibrary.saveImage(scaledImage, toAlbum: "Inspection Images", completion: { (url, error) -> Void in
print("success", terminator: "")
}, failure: { (error) -> Void in
print("failure", terminator: "")
})
}
}
}
}
This appears to be some kind of bug in iOS, though I am not sure. I got pointed to this thread after posting my own question, since I didn't find this one.
I am initializing my code in a completely different way than you are, but the leaked object appears to be the same.
I have opened a radar to this on:
https://openradar.appspot.com/29495120
You can also see my question here:
https://stackoverflow.com/questions/41111899/avcapturephotooutput-memory-leak-ios
Hope this answer at least saves you some headache and wasted time while I await confirmation.