I am writing a small app that takes a photo and makes modifications to it. One feature places stickers (images) on top of the taken photo. I want the user to be able to pinch, rotate, and drag the stickers, so I used a UIImageView to contain each sticker image and attached gesture recognizers to modify it.
Here is the problem: after the user finishes modifying the stickers, how can I save the photo with the stickers? They are in different views, and the only thing I can think of is to keep track of the modifications to the stickers and draw them onto the photo after the user finishes editing. Is there an easier way? What should I do?
func addSticker(name: String)
{
    let stickerModView = UIImageView(frame: CGRect(blah blah))
    let sticker = UIImage(named: "blah blah.png")
    stickerModView.image = sticker
    self.view.addSubview(stickerModView) // was "stickerMod", which is undefined

    let tapRec = UITapGestureRecognizer()
    let pinchRec = UIPinchGestureRecognizer()
    let rotateRec = UIRotationGestureRecognizer()
    let panRec = UIPanGestureRecognizer()
    pinchRec.addTarget(self, action: Selector("pinchedView:"))
    rotateRec.addTarget(self, action: Selector("rotatedView:"))
    panRec.addTarget(self, action: Selector("draggedView:"))
    tapRec.addTarget(self, action: Selector("tappedView:"))
    stickerModView.addGestureRecognizer(tapRec) // tapRec was configured but never attached
    stickerModView.addGestureRecognizer(pinchRec)
    stickerModView.addGestureRecognizer(rotateRec)
    stickerModView.addGestureRecognizer(panRec)
    stickerModView.userInteractionEnabled = true
    stickerModView.multipleTouchEnabled = true
}
After adding your complete UIImageView with editing, you can try this:

let rect: CGRect = CGRect() // the frame of the view you want to render into a UIImage
UIGraphicsBeginImageContext(rect.size)
let context: CGContextRef = UIGraphicsGetCurrentContext()! // safe to unwrap: a context was just begun
self.view.layer.renderInContext(context)
let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// your image is now ready to save in img

Hope it helps!
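Since renderInContext(_:) draws the entire layer tree, any sticker subviews are baked into the captured image automatically. A minimal sketch of wrapping this up and saving the result (Swift 2 style to match the snippet above; `photoContainerView` is a hypothetical name for the view holding the photo and its sticker subviews):

```swift
// Render a view hierarchy (photo plus sticker subviews) into one UIImage.
func snapshotView(view: UIView) -> UIImage {
    // Scale 0 uses the screen's scale, so the capture stays sharp on Retina.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0)
    view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

// Usage: composite the container and write it to the photo library.
let composited = snapshotView(photoContainerView)
UIImageWriteToSavedPhotosAlbum(composited, nil, nil, nil)
```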
Hi there,
I’ve been trying to solve the problem of how to create two progressViews in my viewController for the past week, with little to no success. Please see the diagram I have attached to get an idea of the question I am asking.
I am creating a game with a timer which elapses with each question. As the time elapses, I would like a UIProgressView to cascade down the entire screen, transitioning from blue to white as it goes (please see number 1 in the diagram, which indicates the white trackTintColor).
Number 2 in the diagram (the cyan part) represents the progressTintColor. The diagram should hopefully make clear that I am hoping to customise the progressView so that it tracks downwards, which is one of the main issues at the moment. (I can only seem to find walkthroughs for small customisable progress views which move sideways, not up and down.)
The hardest part of what I am trying to achieve (number 3 in the diagram) is customising a UIImageView so that it slowly drains downwards with the inverse colours to the background (so the dog will be white, with cyan flooding downwards to coincide with the progressView elapsing behind it).
These are the options I have tried to solve this issue, thus far, to no avail.
I have tried using a progressView for the backgroundColor, but I cannot find any examples anywhere of anyone else doing this (downwards), so I'm not sure if it's even possible.
For the image, I have tried drawing a CAShapeLayer, but it appears the dog is too difficult a shape for a novice like myself to draw effectively. And upon realising that I would not be able to set a layer of a different colour behind the dog to move downwards (as the screen will also be changing colour), I abandoned all hope of using this option.
I have tried a UIView transition for the dog image; however, the only option I could find that was anywhere close was transitionCrossDissolve, which did not give the downward effect I was hoping for, but instead just faded from a white dog to a cyan dog. Should I somehow be using progressImage? If so, is there anywhere I can find help with the syntax for that? I can't seem to find any.
I currently have 55 images in my assets folder, each with slightly more cyan than the last, progressively moving downwards (the app animates through an array of the images). Although this works, it is not exactly seamless and looks a little like the user is waiting for an image to load on dial-up.
If anyone has any ideas, or could spare the time to walk me through how I would go about doing this, I would very much appreciate it. I am very much still a beginner, so the more detail the better! Oh yes, to make matters more difficult, so far I have managed to build the app programmatically, so answers in that form would be great.
Thanks in advance!
I hope you have done number 1 and number 2 perfectly. I have tried number 3.
I tried it with two UIViews, and it's working fine. I hope it gives you some idea of how to achieve yours.
I have two images.
With the help of a Timer, I put together a sample for this progress view.
Initially, cyanDogView's height should be zero. Once the timer starts, the height is increased by 2 px on each tick. Once cyanDogView's height becomes greater than blackDogView's, the timer stops.
Coding
@IBOutlet weak var blackDogView: UIView!
@IBOutlet weak var blackDogImgVw: UIImageView!
@IBOutlet weak var cyanDogView: UIView!
@IBOutlet weak var cyanDogImgVw: UIImageView!

var getHeight: CGFloat = 0.0
var progressTime = Timer()

override func viewDidAppear(_ animated: Bool) {
    cyanDogView.frame.size.height = 0
    getHeight = blackDogView.frame.height
}

@IBAction func startAnimateButAcn(_ sender: UIButton) {
    progressTime = Timer.scheduledTimer(timeInterval: 0.2, target: self, selector: #selector(self.update), userInfo: nil, repeats: true)
}

@objc func update() {
    cyanDogView.frame.size.height = cyanDogView.frame.size.height + 2
    if cyanDogView.frame.size.height >= getHeight {
        progressTime.invalidate()
        cyanDogView.frame.size.height = 0
    }
}
Story Board
Output
I'm going to give you some Frankenstein answers here, part Obj-C, part Swift. I hope it helps.
First, you could mask your image with the template mask image you're using:
- (UIImage *)createImageFromImage:(UIImage *)image
             withMaskImage:(UIImage *)mask {
    CGImageRef imageRef = image.CGImage;
    CGImageRef maskRef = mask.CGImage;
    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                             CGImageGetHeight(maskRef),
                                             CGImageGetBitsPerComponent(maskRef),
                                             CGImageGetBitsPerPixel(maskRef),
                                             CGImageGetBytesPerRow(maskRef),
                                             CGImageGetDataProvider(maskRef),
                                             NULL,
                                             YES);
    CGImageRef maskedReference = CGImageCreateWithMask(imageRef, imageMask);
    CGImageRelease(imageMask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}

UIImage *image = [UIImage imageNamed:@"Photo.png"];
UIImage *mask = [UIImage imageNamed:@"Mask.png"];
self.imageView.image = [self createImageFromImage:image withMaskImage:mask];
Credit to Keenle
Next you can create a custom progress view based on a path :
func drawProgressLayer() {
    let bezierPath = UIBezierPath(roundedRect: viewProg.bounds, cornerRadius: viewCornerRadius)
    bezierPath.closePath()
    borderLayer.path = bezierPath.CGPath
    borderLayer.fillColor = UIColor.blackColor().CGColor
    borderLayer.strokeEnd = 0
    viewProg.layer.addSublayer(borderLayer)
}
// Make sure the value passed to `rectProgress`, which defines the width of your
// progress bar, is in the range 0 ... viewProg.bounds.width - 10; this keeps the
// layer inside the view with a small border to spare.
// If you receive progress values in the 0.00 -- 1.00 range, just multiply them by
// viewProg.bounds.width - 10 and pass the result as the *incremented:* parameter.
func rectProgress(incremented: CGFloat) {
    print(incremented)
    if incremented <= viewProg.bounds.width - 10 {
        progressLayer.removeFromSuperlayer()
        let bezierPathProg = UIBezierPath(roundedRect: CGRectMake(5, 5, incremented, viewProg.bounds.height - 10), cornerRadius: viewCornerRadius)
        bezierPathProg.closePath()
        progressLayer.path = bezierPathProg.CGPath
        progressLayer.fillColor = UIColor.whiteColor().CGColor
        borderLayer.addSublayer(progressLayer)
    }
}
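If your timer reports progress as a 0.0 - 1.0 fraction, the mapping described in the comments might look like this (Swift 2 style to match; `viewProg` is the view from the snippets above and `timerTicked` is an illustrative name):

```swift
// Maps a 0.0 - 1.0 progress fraction into the 0 ... width-10 range
// expected by rectProgress(_:).
func timerTicked(fraction: CGFloat) {
    let maxWidth = viewProg.bounds.width - 10
    rectProgress(min(fraction, 1.0) * maxWidth)
}
```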
Credit to Dravidian
Please click the blue links and explore their answers in full to get a grasp of what is possible.
Ok, so using McDonal_11's answer I have managed to get the progressView working. However, I am still experiencing some problems. I cannot add anything on top of the progressView; it just blanket-covers everything underneath, and before the dog begins its animation into a cyan dog there is a brief flash of the entire cyan dog image.
Code below
private let contentView = UIView(frame: .zero)
private let backgroundImageView = UIImageView(frame: .zero)
private let progressView = ProgressView(frame: .zero)
private let clearViewOverProgress = UIView(frame: .zero)
private let blackDogView = UIView(frame: .zero)
private let blackDogViewImage = UIImageView(frame: .zero)
private let cyanDogView = UIView(frame: .zero)
private let cyanDogViewImage = UIImageView()
var timer = Timer()
var startHeight : CGFloat = 0.0
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    self.progressView.setProgress(10.0, animated: true)
    startHeight = cyanDogViewImage.frame.height
    self.cyanDogViewImage.frame.size.height = 0
    self.timer = Timer.scheduledTimer(timeInterval: 0.01, target: self, selector: #selector(self.updateImage), userInfo: nil, repeats: true)
}

override func viewDidLoad() {
    super.viewDidLoad()
    setupViews()
}
func setupViews() {
    self.view.addSubview(backgroundImageView)
    self.backgroundImageView.addSubview(progressView)
    self.progressView.addSubview(clearViewOverProgress)
    self.clearViewOverProgress.addSubview(blackDogView)
    self.blackDogView.addSubview(blackDogViewImage)
    self.blackDogViewImage.addSubview(cyanDogView)
    self.cyanDogView.addSubview(cyanDogViewImage)

    // Setting up constraints of both labels (has been omitted for brevity)

    self.blackDogViewImage.image = UIImage(named: "BlackDogImage")
    self.cyanDogViewImage.contentMode = UIViewContentMode.top
    self.cyanDogViewImage.clipsToBounds = true
    self.cyanDogViewImage.image = UIImage(named: "CyanDogImage")
}
func updateImage() {
    cyanDogViewImage.frame.size.height = cyanDogViewImage.frame.size.height + 0.07
    if cyanDogViewImage.frame.size.height >= blackDogViewImage.frame.size.height {
        timer.invalidate()
        cyanDogViewImage.frame.size.height = blackDogViewImage.frame.size.height
    }
}

func outOfTime() {
    timer.invalidate()
}
I currently have an image set into my UIImageView the following way:
art_image.image = UIImage(named:(artworkPin.title!))
where art_image is the image view and artworkPin.title refers to the name of the image. However, I want to add a second image to the UIImageView if it exists. I thought of programming it as
art_image.image = UIImage(named: artworkPin.title! + "1")
Would this work? In this case I would name the second image after the first, but with a '1' on the end. Example: 'Photo.jpeg' and 'Photo1.jpeg' would both be in the image view if Photo1.jpeg existed.
Thanks for your help.
I came across a similar task myself once. What I did was create a UIScrollView with the frame of the UIImageView; its contentSize would be imageView.frame.size.width * numberOfImages.
let scrollView = UIScrollView(frame: view.bounds)
view.addSubview(scrollView)
scrollView.contentSize = CGSize(width: scrollView.bounds.size.width * CGFloat(numberOfImages), height: scrollView.bounds.size.height)

for i in 0..<numberOfImages {
    let imageView = UIImageView(frame: CGRect(x: scrollView.bounds.size.width * CGFloat(i), y: scrollView.bounds.origin.y, width: scrollView.bounds.size.width, height: scrollView.bounds.size.height))
    imageView.image = UIImage(named: "artwork" + "\(i)")
    scrollView.addSubview(imageView)
}
You can animate it to scroll with a Timer if you want.
scrollView.setContentOffset(scrollPoint, animated: true)
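A minimal sketch of that timer-driven scrolling (using iOS 10's block-based Timer; `scrollView` and `numberOfImages` are the names from the snippet above, and `currentPage` would live as a property in your view controller):

```swift
var currentPage = 0

// Advance one page every 2 seconds, wrapping back to the first image.
let scrollTimer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: true) { _ in
    currentPage = (currentPage + 1) % numberOfImages
    let scrollPoint = CGPoint(x: scrollView.bounds.size.width * CGFloat(currentPage), y: 0)
    scrollView.setContentOffset(scrollPoint, animated: true)
}
```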
You can only show one image inside the image view at a time, so if you have the following lines:
art_image.image = UIImage(named: artworkPin.title!)
art_image.image = UIImage(named: artworkPin.title! + "1")
art_image would contain only Photo1, provided such an image exists and the unwrapped artworkPin.title is not nil; otherwise you could see different results.
If you do want to add multiple images to an image view for the purpose of animation, you need to use the animationImages property of UIImageView, which takes an array of UIImages. For example:
art_image.animationImages = [UIImage.init(named:"Photo")!,
UIImage.init(named:"Photo1")!]
Hope this helps
EDIT
var imagesListArray = [UIImage]()
for position in 1...5
{
if let image = UIImage.init(named: ("Photo\(position)"))
{
imagesListArray.append(image)
}
}
art_image.animationImages = imagesListArray
art_image.animationDuration = 3.0
art_image.startAnimating()
This would be a safer way to add the images, as it will only add an image if it is not nil, adding Photo1, Photo2, ..., Photo5.
With regards to your other questions:
If you want the user to be able to
What do you mean by animation?
I have added two more lines of code, and it gives this result:
art_image.animationDuration = 1.0
art_image.startAnimating()
It will give you something like this:
If you want the user to swipe, then you need to make some changes, such as using a scroll view or collection view (the easiest), or using a gesture recognizer and changing the image on swipe.
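If you go the gesture-recognizer route, a rough sketch might look like this (assuming `art_image` and an `imagesListArray` like the one above; `swipedImage` is an illustrative name):

```swift
var currentIndex = 0

// In viewDidLoad (or wherever you configure art_image):
let swipeRec = UISwipeGestureRecognizer(target: self, action: #selector(swipedImage))
swipeRec.direction = .left
art_image.addGestureRecognizer(swipeRec)
art_image.isUserInteractionEnabled = true

// Elsewhere in the view controller:
@objc func swipedImage() {
    // Step to the next image, wrapping around at the end.
    currentIndex = (currentIndex + 1) % imagesListArray.count
    art_image.image = imagesListArray[currentIndex]
}
```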
EDIT
Have a look at this example. Imagine each button is your annotation so when I tap it, the image changes.
I have 3 images named Photo11.png, Photo21.png and Photo31.png and this is my code inside the button handler
@IBAction func buttonTapped(_ sender: UIButton)
{
    if let image = UIImage.init(named: sender.currentTitle! + "1")
    {
        art_image.image = image
    }
}
As you can see, I am setting the image using the title of my button + "1", so it displays either Photo11.png or Photo21.png, etc.
I have my background image set by self.view.insertSubview. I'm trying to create a UISwipeGestureRecognizer that advances through the background Image on swipe, while another image array cycles on tap. The tap image works fine but the swipe image only works on the first swipe.
Here's the gesture recognizer:
let gestureRecognizerBackground = UISwipeGestureRecognizer(target: self, action: #selector(changeBackground))
dragon2View.addGestureRecognizer(gestureRecognizerBackground)
and here's the changeBackground func:
func changeBackground() {
    let backgroundImageArray = [#imageLiteral(resourceName: "artic.png"), #imageLiteral(resourceName: "beach.png"), #imageLiteral(resourceName: "mountain.jpg"), #imageLiteral(resourceName: "spring.png")]
    let randomBackground = Int(arc4random_uniform(UInt32(backgroundImageArray.count)))
    let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
    backgroundImage.image = backgroundImageArray[randomBackground]
    self.view.insertSubview(backgroundImage, at: 0)
}
Not sure why it's not advancing. Thanks in advance for your comments.
I was trying to set the background image with self.view.insertSubview because I didn't realize I could just have two views and use 'send to back'. No problem with the array now.
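For reference, the reason the original changeBackground only appeared to work once is that it creates a brand-new UIImageView and inserts it at index 0 on every swipe, so each new background lands behind the previous one and stays hidden. A sketch of the reuse-one-view alternative (names are illustrative):

```swift
// Created once (e.g. in viewDidLoad) and inserted behind everything a single time.
let backgroundImageView = UIImageView(frame: UIScreen.main.bounds)

func changeBackground() {
    let backgroundImageArray = [#imageLiteral(resourceName: "artic.png"), #imageLiteral(resourceName: "beach.png"), #imageLiteral(resourceName: "mountain.jpg"), #imageLiteral(resourceName: "spring.png")]
    let randomBackground = Int(arc4random_uniform(UInt32(backgroundImageArray.count)))
    // Reuse the same image view instead of inserting a new one each swipe,
    // so the new background is not hidden behind the old one.
    backgroundImageView.image = backgroundImageArray[randomBackground]
}
```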
I have this asset as the background of a view, assigned using the code described below.
The goal of the animation is to have the diagonal rows move from left to right while the loading is happening.
Any pointers on how to get this done?
let view = UIImageView()
view.translatesAutoresizingMaskIntoConstraints = false
view.image = UIImage(assetIdentifier: "background-view")
view.layer.cornerRadius = 8.0
view.layer.masksToBounds = true
view.contentMode = .ScaleAspectFill
view.clipsToBounds = true
view.backgroundColor = UIColor.whiteColor()
return view
"background-view" is here
I guess the best approach would be to gather all the images needed (all the frames) to create the animated image you want, and then put those images in UIImageView's animationImages property.
For instance, if you get a loading bar gif loading_bar.gif, you can get all the different images in that gif ( c.f. this tutorial among others : http://www.macobserver.com/tmo/article/preview-extracting-frames-from-animated-gifs ).
Load all the images in your code ( in the assets folder for instance ) and then do something like :
func getAnimatedImages() -> Array<UIImage>
{
    var animatedImages = Array<UIImage>()
    var allImagesLoaded = false
    var i = 0
    while !allImagesLoaded
    {
        if let image = UIImage(named: "background_" + String(i))
        {
            animatedImages.append(image)
            i++
        }
        else
        {
            allImagesLoaded = true
        }
    }
    return animatedImages
}
( if you called your images background_0, background_1, etc... )
and then
self.yourBackgroundImageView.animationImages = self.getAnimatedImages()
self.yourBackgroundImageView.startAnimating()
I would use the method class func animatedImageNamed(_ name: String, duration: NSTimeInterval) -> UIImage?
From the Apple doc:
this method would attempt to load images from files with the names ‘image0’, ‘image1’ and so on all the way up to ‘image1024’. All images included in the animated image should share the same size and scale.
If you create an animated image you can assign it to your UIImageView and it will animate automatically.
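A minimal sketch of that approach (assuming frames named background_0, background_1, ... as in the previous answer; the method appends the index to the base name for you):

```swift
// Loads background_0, background_1, ... and cycles through them over one second.
if let animated = UIImage.animatedImageNamed("background_", duration: 1.0) {
    yourBackgroundImageView.image = animated
}
```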
As for the image creation, @Randy had a pretty good idea :)
I am trying to take an image snapshot, crop it, and save it to a UIImageView.
I have tried this from a few dozen different directions but here is the general setup.
First, I am running this under ARC, Xcode 7.2, testing on a 6 Plus phone with iOS 9.2.
Here is how the delegate is set up:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSLog(@"CameraViewController : imagePickerController");

    //Get the image data
    NSData *getDataImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 0.9);

    //Turn it into a UIImage
    UIImage *getCapturedImage = [[UIImage alloc] initWithData:getDataImage];

    //Figure out the size and build the rectangle we are going to put the image into
    CGSize imageSize = getCapturedImage.size;
    CGFloat imageScale = getCapturedImage.scale;
    int yCoord = (imageSize.height - ((imageSize.width * 2) / 3)) / 2;
    CGRect getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width * 2) / 3));
    CGRect rect = CGRectMake(getRect.origin.x * imageScale,
                             getRect.origin.y * imageScale,
                             getRect.size.width * imageScale,
                             getRect.size.height * imageScale);

    //Crop the image
    CGImageRef imageRef = CGImageCreateWithImageInRect([getCapturedImage CGImage], rect);

    //Stick the resulting image into an image variable
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];

    //Release that reference
    CGImageRelease(imageRef);

    //Save the newly cropped image to a UIImageView property
    _imageView.image = cropped;
    _saveBtn.hidden = NO;

    [picker dismissViewControllerAnimated:YES completion:^{
        //After dismissing the picker, close out the camera tool
        [self dismissCameraViewFromImageSelect];
    }];
}
When I run the above I get the below image.
At this point I am viewing the image in the previously set _imageView.image, and the image data has gobbled up 30 MB. When I back out of this view, that memory is still retained.
If I try to go through the process of capturing a new image this is what I get.
And when I bypass resizing the image and assign it to the ImageView there is no 30MB gobbled.
I have looked at all the advice on this, and everything suggested doesn't make a dent, but let's go over what I tried that didn't work.
Did not work.
Putting it in an @autoreleasepool block.
This never seems to work. Maybe I am not doing it right but having tried this a few different ways, nothing released the memory.
CGImageRelease(imageRef);
I am doing that but I have tried this a number of different ways. Still no luck.
CFRelease(imageRef);
Also doesn't work.
Setting imageRef = nil;
Still retains. Even the combination of that and CGImageRelease didn't work for me.
I have tried separating the cropping aspect into its own function and returning the results but still no luck.
I haven't found anything particularly helpful online and all references to similar issues have advice (as mentioned above) that doesn't seem to work.
Thanks for your advice in advance.
Alright, after much time thinking on this, I decided to just start from scratch, and since most of my recent work has been in Swift, I put together a Swift class that can be called, controls the camera, and passes the image up to the caller through a delegate.
The end result is that I don't have the memory leak where some variable holds on to the memory of the previous image, and I can use it in my current project by bridging the Swift class file to my Obj-C view controllers.
Here is the Code for the class that does the fetching.
//
// CameraOverlay.swift
// CameraTesting
//
// Created by Chris Cantley on 3/3/16.
// Copyright © 2016 Chris Cantley. All rights reserved.
//
import Foundation
import UIKit
import AVFoundation
//We want to pass an image up to the parent class once the image has been taken so the easiest way to send it up
// and trigger the placing of the image is through a delegate.
protocol CameraOverlayDelegate: class {
    func cameraOverlayImage(image: UIImage)
}
class CameraOverlay: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
//MARK: Internal Variables
//Setting up the delegate reference to be used later on.
internal var delegate: CameraOverlayDelegate?
//Varibles for setting the camera view
internal var returnImage : UIImage!
internal var previewView : UIView!
internal var boxView:UIView!
internal let myButton: UIButton = UIButton()
//Setting up Camera Capture required properties
internal var previewLayer:AVCaptureVideoPreviewLayer!
internal var captureDevice : AVCaptureDevice!
internal let session=AVCaptureSession()
internal var stillImageOutput: AVCaptureStillImageOutput!
//When we put up the camera preview and the button we have to reference a parent view so this will hold the
// parent view passed into the class so that other methods can work with it.
internal var view : UIView!
//When this class is instantiated, we want to require that the calling class passes us
//some view that we can tie the camera previewer and button to.
//MARK: - Instantiation Methods
init(parentView: UIView) {
    //Instantiate the reference to the passed-in UIView
    self.view = parentView

    //We are doing the following here because this only needs to be set up once per instantiation.
    //Create the output container with settings specifying a still image in JPEG format.
    stillImageOutput = AVCaptureStillImageOutput()
    stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

    //Now add the formatted container to the session's outputs.
    session.addOutput(stillImageOutput)
}
//MARK: - Public Functions
func showCameraView() {
    //This handles showing the camera previewer and button
    self.setupCameraView()

    //This sets up the parameters for the camera and begins the camera session.
    self.setupAVCapture()
}
//MARK: - Internal Functions
//When the user clicks the button, this gets the image, sends it up to the delegate, and shuts down all the Camera related views.
internal func didPressTakePhoto(sender: UIButton) {
    //Create a media connection...
    if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
        //Lock the orientation to portrait
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait

        //Capture the still image from the camera
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: { (sampleBuffer, error) in
            if (sampleBuffer != nil) {
                //Get the image data
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

                //The 2.0 scale halves the scale of the image, whereas 1.0 gives you the full size.
                let image = UIImage(CGImage: cgImageRef!, scale: 2.0, orientation: UIImageOrientation.Up)

                //Work out the crop rectangle (a centered 3:2 band) in pixel coordinates.
                let imageSize = image.size
                let imageScale = image.scale
                let yCoord = (imageSize.height - ((imageSize.width * 2) / 3)) / 2
                let getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width * 2) / 3))
                let rect = CGRectMake(getRect.origin.x * imageScale, getRect.origin.y * imageScale, getRect.size.width * imageScale, getRect.size.height * imageScale)
                let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)

                //This app forces the user to use landscape to take pictures, so this simply turns the image so that it looks correct when we take it.
                let newImage: UIImage = UIImage(CGImage: imageRef!, scale: image.scale, orientation: UIImageOrientation.Down)

                //Pass the image up to the delegate.
                self.delegate?.cameraOverlayImage(newImage)

                //Stop the session
                self.session.stopRunning()

                //Remove the views.
                self.previewView.removeFromSuperview()
                self.boxView.removeFromSuperview()
                self.myButton.removeFromSuperview()

                //By this point the image has been handed off to the caller through the delegate and memory has been cleaned up.
            }
        })
    }
}
internal func setupCameraView() {
    //Add a view as big as the frame to act as a background.
    self.boxView = UIView(frame: self.view.frame)
    self.boxView.backgroundColor = UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0) // component values are 0-1, not 0-255
    self.view.addSubview(self.boxView)

    //Add the camera preview view.
    //This sets up the previewView with a 3:2 aspect ratio.
    let newHeight = UIScreen.mainScreen().bounds.size.width / 2 * 3
    self.previewView = UIView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.size.width, newHeight))
    self.previewView.backgroundColor = UIColor.cyanColor()
    self.previewView.contentMode = UIViewContentMode.ScaleToFill
    self.view.addSubview(previewView)

    //Add the button.
    myButton.frame = CGRectMake(0, 0, 200, 40)
    myButton.backgroundColor = UIColor.redColor()
    myButton.layer.masksToBounds = true
    myButton.setTitle("press me", forState: UIControlState.Normal)
    myButton.setTitleColor(UIColor.whiteColor(), forState: UIControlState.Normal)
    myButton.layer.cornerRadius = 20.0
    myButton.layer.position = CGPoint(x: self.view.frame.width / 2, y: (self.view.frame.height - myButton.frame.height))
    myButton.addTarget(self, action: "didPressTakePhoto:", forControlEvents: .TouchUpInside)
    self.view.addSubview(myButton)
}
internal func setupAVCapture() {
    session.sessionPreset = AVCaptureSessionPresetPhoto

    let devices = AVCaptureDevice.devices()

    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera
            if (device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    //-> Now that we have the back camera, start a session.
                    beginSession()
                    break
                }
            }
        }
    }
}
// Sets up the session
internal func beginSession() {
    var err: NSError? = nil
    var deviceInput: AVCaptureDeviceInput?

    //See if we can get input from the capture device as defined in setupAVCapture()
    do {
        deviceInput = try AVCaptureDeviceInput(device: captureDevice)
    } catch let error as NSError {
        err = error
        deviceInput = nil
    }
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }

    //If we can add input into the AVCaptureSession(), then do so.
    if self.session.canAddInput(deviceInput) {
        self.session.addInput(deviceInput)
    }

    //Now show layers that were set up in the previewView, and mask to the boundary of the previewView layer.
    let rootLayer: CALayer = self.previewView.layer
    rootLayer.masksToBounds = true

    //Put up a live video capture based on the current session.
    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)

    //Determine how to fill the previewLayer. In this case, I want to fill out the space of the previewLayer.
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.previewLayer.frame = rootLayer.bounds

    //Add the preview layer to the root layer.
    rootLayer.addSublayer(self.previewLayer)

    session.startRunning()
}
}
Here is how I am using this class in a view controller.
//
// ViewController.swift
// CameraTesting
//
// Created by Chris Cantley on 2/26/16.
// Copyright © 2016 Chris Cantley. All rights reserved.
//
import UIKit
import AVFoundation
class ViewController: UIViewController, CameraOverlayDelegate{
//Setting up the class reference.
var cameraOverlay : CameraOverlay!
//Connected to the UIViewController main view.
@IBOutlet var getView: UIView!
//Connected to an ImageView that will display the image when it is passed back to the delegate.
@IBOutlet weak var imgShowImage: UIImageView!
//Connected to the button that is pressed to bring up the camera view.
@IBAction func btnPictureTouch(sender: AnyObject) {
    //Remove the image from the UIImageView and take another picture.
    self.imgShowImage.image = nil
    self.cameraOverlay.showCameraView()
}
override func viewDidLoad() {
    super.viewDidLoad()

    //Pass in the target UIView, which in this case is the main view.
    self.cameraOverlay = CameraOverlay(parentView: getView)

    //Make this class the delegate for the instantiated class,
    //so it receives the image when the user takes a picture.
    self.cameraOverlay.delegate = self
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    //Nothing here, but if you run out of memory you might want to do something.
}

override func shouldAutorotate() -> Bool {
    if (UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeLeft ||
        UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeRight ||
        UIDevice.currentDevice().orientation == UIDeviceOrientation.Unknown) {
        return false
    } else {
        return true
    }
}

//This implements the delegate method from CameraOverlayDelegate.
func cameraOverlayImage(image: UIImage) {
    //Put the image passed up from the CameraOverlay class into the UIImageView.
    self.imgShowImage.image = image
}
}
Here is a link to the project where I put that together.
GitHub - Boiler plate get image from camera