Here is my problem: I want to display a pixel buffer that I computed in an MTKView. I searched MTLTexture, MTLBuffer, and the other Metal objects, but I can't find any way to simply present a pixel buffer.
Every tutorial I've seen is about presenting 3D objects with vertex and fragment shaders.
I think the buffer has to be presented within the draw(in:) delegate function (maybe with an MTLRenderCommandEncoder), but again, I can't find any information about this.
I hope I'm not asking an obvious question.
Thanks
Welcome!
I recommend you use Core Image for rendering the content of the pixel buffer into the view. This requires the least manual Metal setup.
Set up the MTKView and some required objects as follows (assuming you have a view controller and a storyboard set up):
import UIKit
import MetalKit
import CoreImage
import CoreVideo

class PreviewViewController: UIViewController {

    @IBOutlet weak var metalView: MTKView!

    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var ciContext: CIContext!

    var pixelBuffer: CVPixelBuffer?

    override func viewDidLoad() {
        super.viewDidLoad()

        self.device = MTLCreateSystemDefaultDevice()
        self.commandQueue = self.device.makeCommandQueue()

        self.metalView.delegate = self
        self.metalView.device = self.device
        // this allows us to render into the view's drawable
        self.metalView.framebufferOnly = false

        self.ciContext = CIContext(mtlDevice: self.device)
    }

}
In the delegate method, you use Core Image to transform the pixel buffer so that it fits the contents of the view (this is a bonus; adapt it to your use case) and render it using the CIContext:
extension PreviewViewController: MTKViewDelegate {

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // required by MTKViewDelegate; nothing to do here
    }

    func draw(in view: MTKView) {
        guard let pixelBuffer = self.pixelBuffer,
              let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }

        // turn the pixel buffer into a CIImage so we can use Core Image for rendering into the view
        let image = CIImage(cvPixelBuffer: pixelBuffer)

        // bonus: transform the image to aspect-fit the view's bounds
        let drawableSize = view.drawableSize
        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scale = min(scaleX, scaleY)
        let scaledImage = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        // center in the view
        let originX = max(drawableSize.width - scaledImage.extent.size.width, 0) / 2
        let originY = max(drawableSize.height - scaledImage.extent.size.height, 0) / 2
        let centeredImage = scaledImage.transformed(by: CGAffineTransform(translationX: originX, y: originY))

        // Create a render destination that allows us to lazily fetch the target texture,
        // which allows the encoder to process all CI commands _before_ the texture is actually available.
        // This gives a nice speed boost because the CPU doesn't need to wait for the GPU to finish
        // before starting to encode the next frame.
        // Also note that we don't pass a command buffer here, because according to Apple:
        // "Rendering to a CIRenderDestination initialized with a commandBuffer requires encoding all
        // the commands to render an image into the specified buffer. This may impact system responsiveness
        // and may result in higher memory usage if the image requires many passes to render."
        let destination = CIRenderDestination(width: Int(drawableSize.width),
                                              height: Int(drawableSize.height),
                                              pixelFormat: view.colorPixelFormat,
                                              commandBuffer: nil,
                                              mtlTextureProvider: { () -> MTLTexture in
                                                  // fetch the drawable's texture as late as possible
                                                  return view.currentDrawable!.texture
                                              })

        // render into the view's drawable
        let _ = try! self.ciContext.startTask(toRender: centeredImage, to: destination)

        // present the drawable
        if let drawable = view.currentDrawable {
            commandBuffer.present(drawable)
        }
        commandBuffer.commit()
    }

}
There is a slightly simpler way to render into the drawable texture than using a CIRenderDestination, but the approach above is recommended if you want to achieve high frame rates (see the comments in the code).
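For completeness, a minimal sketch of that simpler path inside draw(in:), assuming the same ciContext and commandQueue as above and an already transformed centeredImage:

func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }

    // ... build centeredImage from the pixel buffer as shown above ...

    // render straight into the drawable's texture; unlike the CIRenderDestination
    // variant, the drawable must be available before encoding can start
    self.ciContext.render(centeredImage,
                          to: drawable.texture,
                          commandBuffer: commandBuffer,
                          bounds: CGRect(origin: .zero, size: view.drawableSize),
                          colorSpace: CGColorSpaceCreateDeviceRGB())

    commandBuffer.present(drawable)
    commandBuffer.commit()
}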
I think I found a solution: https://developer.apple.com/documentation/metal/creating_and_sampling_textures?language=objc.
In this example, they show how to render an image to a Metal view, using just a few vertices and a fragment shader to draw the texture onto a 2D square.
I'll go from there. I'm not sure whether there's a better (simpler?) way to do it, but I guess that's how Metal wants us to do this.
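In case it helps anyone going the same route: besides the sample's vertex and fragment shaders, the other piece you need is getting your computed pixels into an MTLTexture. A minimal sketch, assuming the pixel buffer is a plain [UInt8] array of BGRA values (the helper name and parameters are placeholders):

import Metal

// Hypothetical helper: wraps a CPU-side BGRA8 pixel buffer in an MTLTexture
// that the sample's fragment shader can then sample from.
func makeTexture(device: MTLDevice, pixels: [UInt8], width: Int, height: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width,
                                                              height: height,
                                                              mipmapped: false)
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }

    // copy the pixel data into the texture (4 bytes per BGRA pixel)
    pixels.withUnsafeBytes { rawBuffer in
        texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                        mipmapLevel: 0,
                        withBytes: rawBuffer.baseAddress!,
                        bytesPerRow: width * 4)
    }
    return texture
}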
I want to detect a ball and have an AR model interact with it. I used OpenCV for ball detection and send the center of the ball, which I can use in hitTest to get coordinates in the sceneView. I have been converting the CVPixelBuffer to a UIImage using the following function:
static func convertToUIImage(buffer: CVPixelBuffer) -> UIImage?{
let ciImage = CIImage(cvPixelBuffer: buffer)
let temporaryContext = CIContext(options: nil)
if let temporaryImage = temporaryContext.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(buffer), height: CVPixelBufferGetHeight(buffer)))
{
let capturedImage = UIImage(cgImage: temporaryImage)
return capturedImage
}
return nil
}
This gave me a rotated image:
Then I found out about changing the orientation using:
let capturedImage = UIImage(cgImage: temporaryImage, scale: 1.0, orientation: .right)
This gives the correct orientation while the device is in portrait, but rotating the device to landscape again gives a rotated image.
Now I am thinking about handling it using viewWillTransition. But before that I want to know:
Is there another way to convert the image with the correct orientation?
Why does this happen?
1. Is there another way to convert the image with the correct orientation?
You may try to use snapshot() of ARSCNView (inherited from SCNView), which:
Draws the contents of the view and returns them as a new image object
so if you have an object like:
@IBOutlet var arkitSceneView: ARSCNView!
you only need to do so:
let imageFromArkitScene:UIImage? = arkitSceneView.snapshot()
2. Why does this happen?
It's because the CVPixelBuffer comes from ARFrame, which is:
captured (continuously) from the device camera, by the running AR session.
Well, since the camera orientation does not change with the rotation of the device (they are separate), you need to re-orient the image captured from the camera to match the current view. You do this by applying the affine transform returned by displayTransform(for:viewportSize:):
Returns an affine transform for converting between normalized image coordinates and a coordinate space appropriate for rendering the camera image onscreen.
You may find good documentation here; usage example:
let orient = UIApplication.shared.statusBarOrientation
let viewportSize = yourSceneView.bounds.size
let transform = frame.displayTransform(for: orient, viewportSize: viewportSize).inverted()
var finalImage = CIImage(cvPixelBuffer: pixelBuffer).transformed(by: transform)
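To actually get a UIImage out of the re-oriented CIImage, you still have to render it through a CIContext, roughly like this (a sketch; the context should be created once and reused rather than rebuilt per frame):

// create once and reuse; building a CIContext per frame is expensive
let ciContext = CIContext()

if let cgImage = ciContext.createCGImage(finalImage, from: finalImage.extent) {
    let orientedImage = UIImage(cgImage: cgImage)
    // hand orientedImage to the ball-detection code
}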
I have the following code where I create a sprite node that displays an animated GIF. I want to create another function that darkens the GIF when called upon. I should still be able to watch the animation, but the content would be visibly darker. I'm not sure how to approach this. Should I individually darken every texture or frame used to create the animation? If so, how do I darken a texture or frame in the first place?
// Extract frames and duration
guard let imageData = try? Data(contentsOf: url as URL) else {
return
}
let source = CGImageSourceCreateWithData(imageData as CFData, nil)
var images = [CGImage]()
let count = CGImageSourceGetCount(source!)
var delays = [Int]()
// Fill arrays
for i in 0..<count {
// Add image
if let image = CGImageSourceCreateImageAtIndex(source!, i, nil) {
images.append(image)
}
// Add its delay in ms
let delaySeconds = UIImage.delayForImageAtIndex(Int(i),
source: source)
delays.append(Int(delaySeconds * 1000.0)) // Seconds to ms
}
// Calculate full duration
let duration: Int = {
var sum = 0
for val: Int in delays {
sum += val
}
return sum
}()
// Get frames
let gcd = SKScene.gcdForArray(delays)
var frames = [SKTexture]()
var frame: SKTexture
var frameCount: Int
for i in 0..<count {
frame = SKTexture(cgImage: images[Int(i)])
frameCount = Int(delays[Int(i)] / gcd)
for _ in 0..<frameCount {
frames.append(frame)
}
}
let gifNode = SKSpriteNode.init(texture: frames[0])
gifNode.position = CGPoint(x: skScene.size.width / 2.0, y: skScene.size.height / 2.0)
gifNode.name = "content"
// Add animation
let gifAnimation = SKAction.animate(with: frames, timePerFrame: ((Double(duration) / 1000.0)) / Double(frames.count))
gifNode.run(SKAction.repeatForever(gifAnimation))
skScene.addChild(gifNode)
I would recommend using the colorize(with:colorBlendFactor:duration:) method. It is an SKAction that animates changing the color of a whole node. That way you don't have to get into darkening the individual textures or frames, and it also adds a nice transition from a non-darkened to a darkened color. Once the action ends, the node stays darkened until you undarken it, so any changes to the node's texture will also appear darkened to the user.
Choose whatever color and colorBlendFactor will work best for you to have the darkened effect you need, e.g. you could set the color to .black and colorBlendFactor to 0.3. To undarken, just set the color to .clear and colorBlendFactor to 0.
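For illustration, a minimal sketch using the gifNode from the question (the blend factor and duration values are just examples):

// darken: blend the node's rendered texture 30% toward black over 0.3 seconds
let darken = SKAction.colorize(with: .black, colorBlendFactor: 0.3, duration: 0.3)
gifNode.run(darken)

// later, to undarken: fade the blend back out
let undarken = SKAction.colorize(with: .clear, colorBlendFactor: 0.0, duration: 0.3)
gifNode.run(undarken)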
Documentation here.
Hope this helps!
Is there a simple way to render a rotated tiled image as a view background? Something to the effect of UIColor(patternImage:) but where the image is rotated at a certain angle?
There is no simple way to achieve this, at least not in vanilla Swift. I would use another UIView as a subview of the original view, set its background to the tiled image, and apply a CGAffineTransform to that particular view, as sketched below.
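A minimal sketch of that idea (containerView, the image name, and the angle are assumptions; the subview is oversized so the rotated pattern still covers the parent):

// oversized subview so the rotated pattern still fills the parent after rotation
let patternView = UIView(frame: containerView.bounds.insetBy(dx: -containerView.bounds.width,
                                                             dy: -containerView.bounds.height))
patternView.backgroundColor = UIColor(patternImage: UIImage(named: "sample")!)
patternView.transform = CGAffineTransform(rotationAngle: .pi / 6)
patternView.center = CGPoint(x: containerView.bounds.midX, y: containerView.bounds.midY)
containerView.insertSubview(patternView, at: 0)
containerView.clipsToBounds = true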
Turns out Core Image filter CIAffineTile does exactly what I want.
extension UIImage {
func tile(angle: CGFloat) -> CIImage? {
return CIImage(image: self)?.applyingFilter(
"CIAffineTile",
withInputParameters: [
kCIInputTransformKey: NSValue(
cgAffineTransform: CGAffineTransform(rotationAngle: angle)
)
]
)
}
}
This function creates a CIImage with infinite extent, which can be cropped and converted to a real image.
let v = UIImageView()
// ...
let source = UIImage(named: "sample")!
let tiled = source.tile(angle: CGFloat.pi / 6)!
let result = UIImage(ciImage: tiled.cropping(to: v.bounds))
v.image = result
I'm creating an app that requires real-time application of filters to images. Converting the UIImage to a CIImage and applying the filters are both extremely fast operations, yet it takes too long to convert the created CIImage back to a CGImageRef and display the image (1/5 of a second, which is actually a lot if editing needs to be real-time).
The image is about 2500 by 2500 pixels, which is most likely part of the problem.
Currently, I'm using
let image: CIImage //CIImage with applied filters
let eagl = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
let context = CIContext(EAGLContext: eagl, options: [kCIContextWorkingColorSpace : NSNull()])
//this line takes too long for real-time processing
let cg: CGImage = context.createCGImage(image, fromRect: image.extent)
I've looked into using CIContext.drawImage()
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
Yet I can't find any solid documentation on exactly how this is done, or if it would be any faster
Is there any faster way to display a CIImage to the screen (either in a UIImageView, or directly on a CALayer)? I would like to avoid decreasing the image quality too much, because this may be noticeable to the user.
It may be worth considering Metal and displaying with an MTKView.
You'll need a Metal device which can be created with MTLCreateSystemDefaultDevice(). That's used to create a command queue and Core Image context. Both these objects are persistent and quite expensive to instantiate, so ideally should be created once:
lazy var commandQueue: MTLCommandQueue =
{
return self.device!.newCommandQueue()
}()
lazy var ciContext: CIContext =
{
return CIContext(MTLDevice: self.device!)
}()
You'll also need a color space:
let colorSpace = CGColorSpaceCreateDeviceRGB()!
When it comes to rendering a CIImage, you'll need to create a short lived command buffer:
let commandBuffer = commandQueue.commandBuffer()
You'll want to render your CIImage (let's call it image) to the currentDrawable?.texture of an MTKView. If that's bound to targetTexture, the rendering syntax is:
ciContext.render(image,
toMTLTexture: targetTexture,
commandBuffer: commandBuffer,
bounds: image.extent,
colorSpace: colorSpace)
commandBuffer.presentDrawable(currentDrawable!)
commandBuffer.commit()
I have a working version here.
Hope that helps!
Simon
I ended up using the context.drawImage(image, inRect: destinationRect, fromRect: image.extent) method. Here's the image view class that I created:
import Foundation
//GLKit must be linked and imported
import GLKit
class CIImageView: GLKView{
var image: CIImage?
var ciContext: CIContext?
//initialize with the frame, and CIImage to be displayed
//(or nil, if the image will be set using .setRenderImage)
init(frame: CGRect, image: CIImage?){
super.init(frame: frame, context: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
self.image = image
//Set the current context to the EAGLContext created in the super.init call
EAGLContext.setCurrentContext(self.context)
//create a CIContext from the EAGLContext
self.ciContext = CIContext(EAGLContext: self.context)
}
//for usage in Storyboards
required init?(coder aDecoder: NSCoder){
super.init(coder: aDecoder)
self.context = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
EAGLContext.setCurrentContext(self.context)
self.ciContext = CIContext(EAGLContext: self.context)
}
//set the current image to image
func setRenderImage(image: CIImage){
self.image = image
//tell the processor that the view needs to be redrawn using drawRect()
self.setNeedsDisplay()
}
//called automatically when the view is drawn
override func drawRect(rect: CGRect){
//unwrap the current CIImage
if let image = self.image{
//multiply the frame by the screen's scale (ratio of points : pixels),
//because the following .drawImage() call uses pixels, not points
let scale = UIScreen.mainScreen().scale
let newFrame = CGRectMake(rect.minX, rect.minY, rect.width * scale, rect.height * scale)
//draw the image
self.ciContext?.drawImage(
image,
inRect: newFrame,
fromRect: image.extent
)
}
}
}
Then, to use it, simply
let myFrame: CGRect //frame in self.view where the image should be displayed
let myImage: CIImage //CIImage with applied filters
let imageView: CIImageView = CIImageView(frame: myFrame, image: myImage)
self.view.addSubview(imageView)
Resizing the UIImage to the screen size before converting it to a CIImage also helps. It speeds things up a lot in the case of high quality images. Just make sure to use the full-size image when actually saving it.
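For reference, a rough sketch of such a downscale step, written in current Swift syntax (the function name is just a placeholder):

// downscale a UIImage to (roughly) the screen size before filtering;
// keep the original around for the final full-resolution export
func resizedForScreen(_ image: UIImage) -> UIImage {
    let screenSize = UIScreen.main.bounds.size
    let scale = min(screenSize.width / image.size.width,
                    screenSize.height / image.size.height,
                    1.0) // never upscale
    let targetSize = CGSize(width: image.size.width * scale,
                            height: image.size.height * scale)
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}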
That's it! Then, to update the image in the view:
imageView.setRenderImage(newCIImage)
//note that imageView.image = newCIImage won't work because
//the view won't be redrawn
You can use a GLKView and render, as you said, with context.drawImage():
let glView = GLKView(frame: superview.bounds, context: EAGLContext(API: .OpenGLES2))
let context = CIContext(EAGLContext: glView.context)
After your processing, render the image:
glView.bindDrawable()
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
glView.display()
That is a pretty big image, so that's definitely part of it. I'd recommend looking at GPUImage for doing single-image filters. You can skip using Core Image altogether.
let inputImage:UIImage = //... some image
let stillImageSource = GPUImagePicture(image: inputImage)
let filter = GPUImageSepiaFilter()
stillImageSource.addTarget(filter)
filter.useNextFrameForImageCapture()
stillImageSource.processImage()
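The snippet above stops after processImage(); to actually get the filtered result back, you capture it from the filter's framebuffer, roughly like this (imageView is an assumed UIImageView):

// after processImage() has run, pull the filtered result out of the framebuffer
let filteredImage = filter.imageFromCurrentFramebuffer()
imageView.image = filteredImage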
I am trying to take an image snapshot, crop it, and save it to a UIImageView.
I have tried this from a few dozen different directions but here is the general setup.
First, I am running this under ARC, Xcode 7.2, testing on an iPhone 6 Plus running iOS 9.2.
Here is how the delegate is set up:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
NSLog(@"CameraViewController : imagePickerController");
//Get the Image Data
NSData *getDataImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 0.9);
// Turn it into a UI image
UIImage *getCapturedImage = [[UIImage alloc] initWithData:getDataImage];
// Figure out the size and build the rectangle we are going to put the image into
CGSize imageSize = getCapturedImage.size;
CGFloat imageScale = getCapturedImage.scale;
int yCoord = (imageSize.height - ((imageSize.width*2)/3))/2;
CGRect getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width*2)/3));
CGRect rect = CGRectMake(getRect.origin.x*imageScale,
getRect.origin.y*imageScale,
getRect.size.width*imageScale,
getRect.size.height*imageScale);
//Resize the image and store it
CGImageRef imageRef = CGImageCreateWithImageInRect([getCapturedImage CGImage], rect);
//Stick the resulting image into an image variable
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
//Release that reference
CGImageRelease(imageRef);
//Save the newly cropped image to a UIImageView property
_imageView.image = cropped;
_saveBtn.hidden = NO;
[picker dismissViewControllerAnimated:YES completion:^{
// After we are finished with dismissing the picker, run the below to close out the camera tool
[self dismissCameraViewFromImageSelect];
}];
}
When I run the above I get the below image.
At this point I am viewing the image in the previously set _imageView.image. And the image data has gobbled up 30MB. But when I back out of this view, the image data is still retained.
If I try to go through the process of capturing a new image this is what I get.
And when I bypass resizing the image and just assign it to the image view, no 30 MB is gobbled up.
I have looked at all the advice on this, and nothing suggested makes a dent, but let's go over what I tried that didn't work.
Did not work:
Putting it in an @autoreleasepool block.
This never seems to work. Maybe I am not doing it right, but having tried this a few different ways, nothing released the memory.
CGImageRelease(imageRef);
I am doing that but I have tried this a number of different ways. Still no luck.
CFRelease(imageRef);
Also doesn't work.
Setting imageRef = nil;
Still retains. Even the combination of that and CGImageRelease didn't work for me.
I have tried separating the cropping aspect into its own function and returning the results but still no luck.
I haven't found anything particularly helpful online and all references to similar issues have advice (as mentioned above) that doesn't seem to work.
Thanks for your advice in advance.
Alright, after much time thinking about this, I decided to just start from scratch, and since most of my recent work has been in Swift, I put together a Swift class that can be called, controls the camera, and passes the image up to the caller through a delegate.
The end result is that I no longer have this memory leak where some variable holds on to the memory of the previous image, and I can use it in my current project by bridging the Swift class file to my Obj-C view controllers.
Here is the Code for the class that does the fetching.
//
// CameraOverlay.swift
// CameraTesting
//
// Created by Chris Cantley on 3/3/16.
// Copyright © 2016 Chris Cantley. All rights reserved.
//
import Foundation
import UIKit
import AVFoundation
//We want to pass an image up to the parent class once the image has been taken so the easiest way to send it up
// and trigger the placing of the image is through a delegate.
protocol CameraOverlayDelegate: class {
func cameraOverlayImage(image:UIImage)
}
class CameraOverlay: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
//MARK: Internal Variables
//Setting up the delegate reference to be used later on.
internal var delegate: CameraOverlayDelegate?
//Variables for setting the camera view
internal var returnImage : UIImage!
internal var previewView : UIView!
internal var boxView:UIView!
internal let myButton: UIButton = UIButton()
//Setting up Camera Capture required properties
internal var previewLayer:AVCaptureVideoPreviewLayer!
internal var captureDevice : AVCaptureDevice!
internal let session=AVCaptureSession()
internal var stillImageOutput: AVCaptureStillImageOutput!
//When we put up the camera preview and the button we have to reference a parent view so this will hold the
// parent view passed into the class so that other methods can work with it.
internal var view : UIView!
//When this class is instantiated, we want to require that the calling class passes us
//some view that we can tie the camera previewer and button to.
//MARK: - Instantiation Methods
init(parentView: UIView){
//Instantiate the reference to the passed-in UIView
self.view = parentView
//We are doing the following here because this only needs to be setup once per instantiation.
//Create the output container with settings to specify that we are getting a still Image, and that it is a JPEG.
stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
//Now we are sticking the image into the above formatted container
session.addOutput(stillImageOutput)
}
//MARK: - Public Functions
func showCameraView() {
//This handles showing the camera previewer and button
self.setupCameraView()
//This sets up the parameters for the camera and begins the camera session.
self.setupAVCapture()
}
//MARK: - Internal Functions
//When the user clicks the button, this gets the image, sends it up to the delegate, and shuts down all the Camera related views.
internal func didPressTakePhoto(sender: UIButton) {
//Create a media connection...
if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
//Setup the orientation to be locked to portrait
videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
//capture the still image from the camera
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {(sampleBuffer, error) in
if (sampleBuffer != nil) {
//Get the image data
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
let dataProvider = CGDataProviderCreateWithCFData(imageData)
let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
//The 2.0 scale halves the scale of the image, whereas 1.0 gives you the full size.
let image = UIImage(CGImage: cgImageRef!, scale: 2.0, orientation: UIImageOrientation.Up)
// What size is this image.
let imageSize = image.size
let imageScale = image.scale
let yCoord = (imageSize.height - ((imageSize.width*2)/3))/2
let getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width*2)/3))
let rect = CGRectMake(getRect.origin.x*imageScale, getRect.origin.y*imageScale, getRect.size.width*imageScale, getRect.size.height*imageScale)
let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)
//let newImage = UIImage(CGImage: imageRef!)
//This app forces the user to use landscape to take pictures, so this simply rotates the image so that it looks correct when we take it.
let newImage: UIImage = UIImage(CGImage: imageRef!, scale: image.scale, orientation: UIImageOrientation.Down)
//Pass the image up to the delegate.
self.delegate?.cameraOverlayImage(newImage)
//stop the session
self.session.stopRunning()
//Remove the views.
self.previewView.removeFromSuperview()
self.boxView.removeFromSuperview()
self.myButton.removeFromSuperview()
//By this point the image has been handed off to the caller through the delegate and memory has been cleaned up.
}
})
}
}
internal func setupCameraView(){
//Add a view that is big as the frame that acts as a background.
self.boxView = UIView(frame: self.view.frame)
self.boxView.backgroundColor = UIColor(red: 255, green: 255, blue: 255, alpha: 1.0)
self.view.addSubview(self.boxView)
//Add Camera Preview View
// This sets up the previewView to be a 3:2 aspect ratio
let newHeight = UIScreen.mainScreen().bounds.size.width / 2 * 3
self.previewView = UIView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.size.width, newHeight))
self.previewView.backgroundColor = UIColor.cyanColor()
self.previewView.contentMode = UIViewContentMode.ScaleToFill
self.view.addSubview(previewView)
//Add the button.
myButton.frame = CGRectMake(0,0,200,40)
myButton.backgroundColor = UIColor.redColor()
myButton.layer.masksToBounds = true
myButton.setTitle("press me", forState: UIControlState.Normal)
myButton.setTitleColor(UIColor.whiteColor(), forState: UIControlState.Normal)
myButton.layer.cornerRadius = 20.0
myButton.layer.position = CGPoint(x: self.view.frame.width/2, y:(self.view.frame.height - myButton.frame.height ) )
myButton.addTarget(self, action: "didPressTakePhoto:", forControlEvents: .TouchUpInside)
self.view.addSubview(myButton)
}
internal func setupAVCapture(){
session.sessionPreset = AVCaptureSessionPresetPhoto;
let devices = AVCaptureDevice.devices();
// Loop through all the capture devices on this phone
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Finally check the position and confirm we've got the front camera
if(device.position == AVCaptureDevicePosition.Back) {
captureDevice = device as? AVCaptureDevice
if captureDevice != nil {
//-> Now that we have the back of the camera, start a session.
beginSession()
break;
}
}
}
}
}
// Sets up the session
internal func beginSession(){
var err : NSError? = nil
var deviceInput:AVCaptureDeviceInput?
//See if we can get input from the Capture device as defined in setupAVCapture()
do {
deviceInput = try AVCaptureDeviceInput(device: captureDevice)
} catch let error as NSError {
err = error
deviceInput = nil
}
if err != nil {
print("error: \(err?.localizedDescription)")
}
//If we can add input into the AVCaptureSession() then do so.
if self.session.canAddInput(deviceInput){
self.session.addInput(deviceInput)
}
//Now show layers that were setup in the previewView, and mask it to the boundary of the previewView layer.
let rootLayer :CALayer = self.previewView.layer
rootLayer.masksToBounds=true
//put a live video capture based on the current session.
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session);
// Determine how to fill the previewLayer. In this case, I want to fill out the space of the previewLayer.
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.previewLayer.frame = rootLayer.bounds
//Put the sublayer into the previewLayer
rootLayer.addSublayer(self.previewLayer)
session.startRunning()
}
}
Here is how I am using this class in a view controller.
//
// ViewController.swift
// CameraTesting
//
// Created by Chris Cantley on 2/26/16.
// Copyright © 2016 Chris Cantley. All rights reserved.
//
import UIKit
import AVFoundation
class ViewController: UIViewController, CameraOverlayDelegate{
//Setting up the class reference.
var cameraOverlay : CameraOverlay!
//Connected to the UIViewController main view.
@IBOutlet var getView: UIView!
//Connected to an ImageView that will display the image when it is passed back to the delegate.
@IBOutlet weak var imgShowImage: UIImageView!
//Connected to the button that is pressed to bring up the camera view.
@IBAction func btnPictureTouch(sender: AnyObject) {
//Remove the image from the UIImageView and take another picture.
self.imgShowImage.image = nil
self.cameraOverlay.showCameraView()
}
override func viewDidLoad() {
super.viewDidLoad()
//Pass in the target UIView which in this case is the main view
self.cameraOverlay = CameraOverlay(parentView: getView)
//Make this class the delegate for the instantiated class.
//That way it knows to receive the image when the user takes a picture
self.cameraOverlay.delegate = self
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
//Nothing here, but if you run out of memory you might want to do something here.
}
override func shouldAutorotate() -> Bool {
if (UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeLeft ||
UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeRight ||
UIDevice.currentDevice().orientation == UIDeviceOrientation.Unknown) {
return false;
}
else {
return true;
}
}
//This references the delegate from CameraOveralDelegate
func cameraOverlayImage(image: UIImage) {
//Put the image passed up from the CameraOverlay class into the UIImageView
self.imgShowImage.image = image
}
}
Here is a link to the project where I put that together.
GitHub - Boiler plate get image from camera