I am working on a Swift-based macOS application. The user chooses a video file and an NSViewController plays it using AVPlayerView. The user can draw a rectangle over the AVPlayerView, and I have that part working. How do I show the video from the selected area in a camera preview layer at the bottom right?
Here is the output:
Here is the Code:
var videoURL = NSURL() // the video file URL from the file chooser
var videoSize = NSSize() // the video resolution, e.g. 720x480
var player: AVPlayerView! // the AVPlayerView that plays the video
var cameraPreviewLayer: AVPlayerLayer! // the bottom-right camera preview layer
var rectangleLayer: AVPlayerLayer! // the rectangle drawn over the AVPlayerView while the mouse drags
var isClicked = false
var startPoint = NSPoint() // the mouse pointer location when the drag starts
var rect: NSRect!
override func viewDidLoad() {
super.viewDidLoad()
self.player = AVPlayerView()
self.player.frame.origin = CGPoint(x: 0, y: 0)
self.player.setFrameSize(self.videoSize)
self.player.player = AVPlayer(URL: videoURL)
self.view.addSubview(self.player)
setupRectangle()
setupCameraPreviewLayer()
}
// initially set up the white rectangle that will be dragged over the AVPlayerView
func setupRectangle() {
self.rectangleLayer = AVPlayerLayer()
self.rectangleLayer.backgroundColor = NSColor.clearColor().CGColor
self.rectangleLayer.borderColor = NSColor.whiteColor().CGColor
self.rectangleLayer.borderWidth = 1.5
self.rectangleLayer.frame = self.player.bounds
}
// initially set up the camera preview layer that will be shown at the bottom right of the AVPlayerView
func setupCameraPreviewLayer(){
cameraPreviewLayer = AVPlayerLayer(player: self.player.player)
//place the camera preview layer at the bottom right
cameraPreviewLayer.frame.origin.x = self.player.frame.size.width - 100
cameraPreviewLayer.frame.origin.y = self.player.frame.origin.y
cameraPreviewLayer.frame.size.width = 100
cameraPreviewLayer.frame.size.height = 100
}
override func mouseDown(theEvent: NSEvent) {
startPoint = theEvent.locationInWindow
isClicked = true
removeCameraPreviewLayer()
}
override func mouseDragged(theEvent: NSEvent) {
let endPoint = theEvent.locationInWindow
if( isClicked ){
rect = NSRect(x: startPoint.x, y: startPoint.y, width: endPoint.x - startPoint.x, height: endPoint.y - startPoint.y)
drawCustomRect(rect)
}
}
override func mouseUp(theEvent: NSEvent) {
var endPoint = theEvent.locationInWindow
if( isClicked ){
// I think we have to do some magic code here
addCameraPreviewLayer()
}
isClicked = false;
}
// redraw the white rectangle over the AVPlayerView
func drawCustomRect(rect: NSRect) {
self.rectangleLayer.frame = rect
self.player.layer?.addSublayer(self.rectangleLayer)
}
// add the camera preview layer to the AVPlayerView
func addCameraPreviewLayer() {
self.player.layer?.addSublayer(self.cameraPreviewLayer)
}
// remove the camera preview layer from the AVPlayerView
func removeCameraPreviewLayer() {
self.cameraPreviewLayer.removeFromSuperlayer()
}
Here is a picture of the desired output.
Suppose the video has a size of 720x480 and the user has drawn a rectangle with corner points (x1, y1), (x2, y2), (x3, y3), (x4, y4). How can I crop the video shown in the camera preview layer (at the bottom right) so it shows the same area as the rectangle selected by the user?
Can anyone help me achieve this functionality? I have spent many days on this and am exhausted.
Note: I can do it in OpenCV by processing the video with an ROI, but the requirement is to do it natively in Swift.
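One approach that seems workable without OpenCV is not to crop the video at all, but to clip a second AVPlayerLayer: put a preview AVPlayerLayer (driven by the same AVPlayer) inside a small container CALayer at the bottom right with masksToBounds enabled, then scale and offset the player layer so the selected rectangle exactly fills the container. A minimal, untested sketch in the same Swift 2 era style, where selectionRect is assumed to be the user's rectangle in the player view's coordinate space and cameraPreviewContainer is a hypothetical property kept around so the preview can be removed again:
func showCroppedPreview(selectionRect: NSRect) {
    let previewSize = CGSize(width: 100, height: 100)

    // Container that clips its contents; this sits at the bottom right of the player.
    let container = CALayer()
    container.frame = CGRect(x: self.player.frame.size.width - previewSize.width,
                             y: 0,
                             width: previewSize.width,
                             height: previewSize.height)
    container.masksToBounds = true

    // Second player layer driven by the same AVPlayer, scaled so that the
    // selected rectangle exactly fills the container.
    let croppedLayer = AVPlayerLayer(player: self.player.player)
    croppedLayer.videoGravity = AVLayerVideoGravityResize // stretch so the coordinate mapping holds
    let scaleX = previewSize.width / selectionRect.size.width
    let scaleY = previewSize.height / selectionRect.size.height
    croppedLayer.frame = CGRect(x: -selectionRect.origin.x * scaleX,
                                y: -selectionRect.origin.y * scaleY,
                                width: self.player.bounds.size.width * scaleX,
                                height: self.player.bounds.size.height * scaleY)
    container.addSublayer(croppedLayer)

    self.player.layer?.addSublayer(container)
    self.cameraPreviewContainer = container // hypothetical property, removed again in removeCameraPreviewLayer()
}
This only repositions layers, so it stays in sync with playback; the math maps the selection's origin to the container's origin and the selection's size to the container's size.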
When adding a text MeshResource, with no angle and with a fixed world position, it looks fine from the camera perspective.
However, when the user walks to the other side of the text entity and turns around, it looks mirrored.
I don't want to use the look(at:) API since I only want to rotate it 180 degrees around the Y-axis, and reset the angle to 0 when the user passes it again.
First we have to put the text in an anchor that will stay in the same orientation even when we rotate the text. Then add a textIsMirrored variable that handles the rotation when it changes:
class TextAnchor: Entity, HasAnchoring {
let textEntity = ModelEntity(mesh: .generateText("text"))
var textIsMirrored = false {
willSet {
if newValue != textIsMirrored {
if newValue == true {
textEntity.setOrientation(.init(angle: .pi, axis: [0,1,0]), relativeTo: self)
} else {
textEntity.setOrientation(.init(angle: 0, axis: [0,1,0]), relativeTo: self)
}
}
}
}
required init() {
super.init()
textEntity.scale = [0.01,0.01,0.01]
anchoring = AnchoringComponent(.plane(.horizontal, classification: .any, minimumBounds: [0.3,0.3]))
addChild(textEntity)
}
}
Then in your ViewController you can create an anchor that has the camera as its target, so we can track the camera position, and create our textAnchor:
let cameraAnchor = AnchorEntity(.camera)
let textAnchor = TextAnchor()
For it to work you have to add both as children of your scene (preferably in viewDidLoad):
arView.scene.addAnchor(cameraAnchor)
arView.scene.addAnchor(textAnchor)
Now in the ARSessionDelegate function you can check the camera position relative to your text and rotate the text if its Z value drops below 0:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
if cameraAnchor.position(relativeTo: textAnchor).z < 0 {
textAnchor.textIsMirrored = true
} else {
textAnchor.textIsMirrored = false
}
}
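Note that session(_:didUpdate:) is only called if the view controller adopts ARSessionDelegate and is registered as the session's delegate, so viewDidLoad would look roughly like this (a sketch, assuming arView is your ARView):
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self // required for session(_:didUpdate:) to fire
    arView.scene.addAnchor(cameraAnchor)
    arView.scene.addAnchor(textAnchor)
}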
I'm testing the capabilities of the tile editor that comes with Xcode 8 (8.2.2), and I've created a Pac-Man-like map as shown above. There's a game character at the top-left corner in a rectangle. I wonder if there's an easy way of making the game character stay inside the blue borders? So far, I've set a (red) wall to the left through the scene editor, like the following, and I have the following lines of code.
struct PhysicsCategory {
static let None: UInt32 = 0
static let Player: UInt32 = 0b1 // 1
static let Edge: UInt32 = 0b10 // 2
static let Wall: UInt32 = 0b100 // 4
}
class GameScene: SKScene {
// MARK: - Variables
var background: SKTileMapNode! // background
var player: SKNode! // player
// MARK: - DidMove
override func didMove(to view: SKView) {
setupNodes()
}
func setupNodes() {
background = self.childNode(withName: "World") as! SKTileMapNode
background.physicsBody = SKPhysicsBody(edgeLoopFrom: background.frame)
background.physicsBody?.categoryBitMask = PhysicsCategory.Edge
let wall = self.childNode(withName: "Wall")
wall?.physicsBody = SKPhysicsBody(rectangleOf: (wall?.frame.size)!)
wall?.physicsBody?.isDynamic = false
wall?.physicsBody?.categoryBitMask = PhysicsCategory.Wall
player = self.childNode(withName: "Player")
player.physicsBody = SKPhysicsBody(circleOfRadius: 32)
player.physicsBody?.categoryBitMask = PhysicsCategory.Player
player.physicsBody?.collisionBitMask = PhysicsCategory.Wall
player.physicsBody?.allowsRotation = false
}
}
The user will control the player's position with Core Motion. For now, the game character does respect the left edge. But if the map gets complicated, I could end up placing a lot of walls here and there, and that kind of kills the fun since it would be time-consuming. So, again, is there a simpler way of making the game character collide with the map borders?
Take a look at GameplayKit's pathfinding classes, specifically GKGridGraph and GKGridGraphNode, which allow you to specify a graph that only has these kinds of rectilinear paths.
There is a good tutorial from Apple here: https://developer.apple.com/library/content/documentation/General/Conceptual/GameplayKit_Guide/Pathfinding.html
And a demo app here: https://developer.apple.com/library/content/samplecode/Pathfinder_GameplayKit/Introduction/Intro.html#//apple_ref/doc/uid/TP40016461
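As a rough, untested sketch of that idea in Swift 3: build a GKGridGraph the same size as the tile map and remove every node whose tile is a wall, so movement and pathfinding are restricted to the open corridors. The tile-definition name "Wall" is an assumption; use whatever your tile set actually calls its wall tiles.
import GameplayKit

func buildGraph(from map: SKTileMapNode) -> GKGridGraph<GKGridGraphNode> {
    let graph = GKGridGraph(fromGridStartingAt: int2(0, 0),
                            width: Int32(map.numberOfColumns),
                            height: Int32(map.numberOfRows),
                            diagonalsAllowed: false)
    var wallNodes = [GKGridGraphNode]()
    for column in 0..<map.numberOfColumns {
        for row in 0..<map.numberOfRows {
            // "Wall" is an assumed tile-definition name
            if map.tileDefinition(atColumn: column, row: row)?.name == "Wall",
                let node = graph.node(atGridPosition: int2(Int32(column), Int32(row))) {
                wallNodes.append(node)
            }
        }
    }
    graph.remove(wallNodes)
    return graph
}
With the graph in place you can constrain the player's moves to neighboring graph nodes (or run findPath(from:to:)) instead of scattering physics walls around the map.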
I have a video view set to full screen. However, while playing in the simulator, it is not running in full screen.
This issue is only for iPads and not iPhones.
Here is my code:
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
let bundle: Bundle = Bundle.main
let videoPlayer: String = bundle.path(forResource: "normalnewer", ofType: "mp4")!
let movieUrl : NSURL = NSURL.fileURL(withPath: videoPlayer) as NSURL
// var fileURL = NSURL(fileURLWithPath: "/Users/Mantas/Desktop/123/123/video-1453562323.mp4.mp4")
playerView = AVPlayer(url: movieUrl as URL)
NotificationCenter.default.addObserver(self,selector: #selector(playerItemDidReachEnd),name: NSNotification.Name.AVPlayerItemDidPlayToEndTime,object: self.playerView.currentItem) // Add observer
let playerLayer = AVPlayerLayer(player: playerView)
// self.avPlayerLayer.playerLayer = AVLayerVideoGravityResizeAspectFill
playerLayer.frame = viewVideo.bounds
// self.playerLayer.frame = self.videoPreviewLayer.bounds
viewVideo.layer.addSublayer(playerLayer)
playerView.play()
UserDefaults.standard.set("normalnewer", forKey: "video")
}
I have tried checking landscape mode in the target's General settings.
Gone through the threads below:
how to play a video in fullscreen mode using swift ios?
Play video fullscreen in landscape mode, when my entire application is in lock in portrait mode
Setting device orientation in Swift iOS
But these did not resolve my issue.
Here is a screenshot.
Can somebody help me resolve this?
I see you solved your issue, but I noticed you were using an AVPlayerLayer. Orientation rotation is handled by AVPlayerViewController, but not by a custom view using a player layer. It is useful to be able to put the layer in fullscreen without rotating the device. I answered this question elsewhere, but I will put it here as well.
Transforms and frame manipulation can solve this issue:
extension CGAffineTransform {
static let ninetyDegreeRotation = CGAffineTransform(rotationAngle: CGFloat(M_PI / 2))
}
extension AVPlayerLayer {
var fullScreenAnimationDuration: TimeInterval {
return 0.15
}
func minimizeToFrame(_ frame: CGRect) {
UIView.animate(withDuration: fullScreenAnimationDuration) {
self.setAffineTransform(.identity)
self.frame = frame
}
}
func goFullscreen() {
UIView.animate(withDuration: fullScreenAnimationDuration) {
self.setAffineTransform(.ninetyDegreeRotation)
self.frame = UIScreen.main.bounds
}
}
}
Setting the frame of the AVPlayerLayer changes its parent's frame. Save the original frame in your view subclass to minimize the AVPlayerLayer back to where it was. This allows for Auto Layout.
IMPORTANT - This only works if the player is in the center of your view subclass.
Incomplete example:
class AVPlayerView: UIView {
fileprivate var avPlayerLayer: AVPlayerLayer {
return layer as! AVPlayerLayer
}
fileprivate var hasGoneFullScreen = false
fileprivate var isPlaying = false
fileprivate var originalFrame = CGRect.zero
func togglePlayback() {
if !hasGoneFullScreen {
originalFrame = frame
hasGoneFullScreen = true
}
isPlaying = !isPlaying
if isPlaying {
avPlayerLayer.goFullscreen()
avPlayerLayer.player?.play()
} else {
avPlayerLayer.player?.pause()
avPlayerLayer.minimizeToFrame(originalFrame)
}
}
}
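One detail the incomplete example above leaves out: for `layer as! AVPlayerLayer` to succeed, the view has to use AVPlayerLayer as its backing layer, which means overriding layerClass in the same subclass, roughly:
override class var layerClass: AnyClass {
    // Back this view with an AVPlayerLayer instead of a plain CALayer.
    return AVPlayerLayer.self
}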
After spending some time taking a good look at the Storyboard, I tried changing the fill to Aspect fill & fit and that resolved the problem :)
The best thing you can do is put this code in viewDidLayoutSubviews and enjoy it!
DispatchQueue.main.async {
self.playerLayer?.frame = self.videoPlayerView.bounds
}
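For context, that would sit in the view controller roughly like this (assuming playerLayer and videoPlayerView are properties of your view controller, as in the snippet above):
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Re-apply the layer frame whenever layout changes (rotation, split view, etc.).
    DispatchQueue.main.async {
        self.playerLayer?.frame = self.videoPlayerView.bounds
    }
}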
I am trying to take an image snapshot, crop it, and save it to a UIImageView.
I have tried this from a few dozen different directions but here is the general setup.
First, I am running this under ARC, Xcode 7.2, testing on an iPhone 6 Plus running iOS 9.2.
Here is how the delegate is set up:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
NSLog(@"CameraViewController : imagePickerController");
//Get the Image Data
NSData *getDataImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 0.9);
// Turn it into a UI image
UIImage *getCapturedImage = [[UIImage alloc] initWithData:getDataImage];
// Figure out the size and build the rectangle we are going to put the image into
CGSize imageSize = getCapturedImage.size;
CGFloat imageScale = getCapturedImage.scale;
int yCoord = (imageSize.height - ((imageSize.width*2)/3))/2;
CGRect getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width*2)/3));
CGRect rect = CGRectMake(getRect.origin.x*imageScale,
getRect.origin.y*imageScale,
getRect.size.width*imageScale,
getRect.size.height*imageScale);
//Resize the image and store it
CGImageRef imageRef = CGImageCreateWithImageInRect([getCapturedImage CGImage], rect);
//Stick the resulting image into an image variable
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
//Release that reference
CGImageRelease(imageRef);
//Save the newly cropped image to a UIImageView property
_imageView.image = cropped;
_saveBtn.hidden = NO;
[picker dismissViewControllerAnimated:YES completion:^{
// After we are finished with dismissing the picker, run the below to close out the camera tool
[self dismissCameraViewFromImageSelect];
}];
}
When I run the above I get the below image.
At this point I am viewing the image in the previously set _imageView.image. And the image data has gobbled up 30MB. But when I back out of this view, the image data is still retained.
If I try to go through the process of capturing a new image this is what I get.
And when I bypass resizing the image and assign it directly to the image view, there is no 30 MB gobbled up.
I have looked at all the advice on this, and everything suggested doesn't make a dent, but let's go over what I tried that did not work.
Putting it in an @autoreleasepool block.
This never seems to work. Maybe I am not doing it right but having tried this a few different ways, nothing released the memory.
CGImageRelease(imageRef);
I am doing that but I have tried this a number of different ways. Still no luck.
CFRelease(imageRef);
Also doesn't work.
Setting imageRef = nil;
Still retains. Even the combination of that and CGImageRelease didn't work for me.
I have tried separating the cropping aspect into its own function and returning the results but still no luck.
I haven't found anything particularly helpful online and all references to similar issues have advice (as mentioned above) that doesn't seem to work.
Thanks for your advice in advance.
Alright, after much time thinking about this, I decided to just start from scratch, and since most of my recent work has been in Swift, I put together a Swift class that can be called, controls the camera, and passes the image up to the caller through a delegate.
The end result is that I don't have this memory leak where some variable is holding on to the memory of the previous image, and I can use it in my current project by bridging the Swift class file to my Obj-C view controllers.
Here is the code for the class that does the fetching.
//
// CameraOverlay.swift
// CameraTesting
//
// Created by Chris Cantley on 3/3/16.
// Copyright © 2016 Chris Cantley. All rights reserved.
//
import Foundation
import UIKit
import AVFoundation
//We want to pass an image up to the parent class once the image has been taken so the easiest way to send it up
// and trigger the placing of the image is through a delegate.
protocol CameraOverlayDelegate: class {
func cameraOverlayImage(image:UIImage)
}
class CameraOverlay: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
//MARK: Internal Variables
//Setting up the delegate reference to be used later on.
internal var delegate: CameraOverlayDelegate?
//Variables for setting up the camera view
internal var returnImage : UIImage!
internal var previewView : UIView!
internal var boxView:UIView!
internal let myButton: UIButton = UIButton()
//Setting up Camera Capture required properties
internal var previewLayer:AVCaptureVideoPreviewLayer!
internal var captureDevice : AVCaptureDevice!
internal let session=AVCaptureSession()
internal var stillImageOutput: AVCaptureStillImageOutput!
//When we put up the camera preview and the button we have to reference a parent view so this will hold the
// parent view passed into the class so that other methods can work with it.
internal var view : UIView!
//When this class is instantiated, we want to require that the calling class passes us
//some view that we can tie the camera previewer and button to.
//MARK: - Instantiation Methods
init(parentView: UIView){
//Instantiate the reference to the passed-in UIView
self.view = parentView
//We are doing the following here because this only needs to be setup once per instantiation.
//Create the output container with settings to specify that we are getting a still Image, and that it is a JPEG.
stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
//Attach the still-image output to the capture session.
session.addOutput(stillImageOutput)
}
//MARK: - Public Functions
func showCameraView() {
//This handles showing the camera previewer and button
self.setupCameraView()
//This sets up the parameters for the camera and begins the camera session.
self.setupAVCapture()
}
//MARK: - Internal Functions
//When the user clicks the button, this gets the image, sends it up to the delegate, and shuts down all the Camera related views.
internal func didPressTakePhoto(sender: UIButton) {
//Create a media connection...
if let videoConnection = stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
//Setup the orientation to be locked to portrait
videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
//capture the still image from the camera
stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {(sampleBuffer, error) in
if (sampleBuffer != nil) {
//Get the image data
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
let dataProvider = CGDataProviderCreateWithCFData(imageData)
let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
//The 2.0 scale halves the size of the image, whereas 1.0 gives you the full size.
let image = UIImage(CGImage: cgImageRef!, scale: 2.0, orientation: UIImageOrientation.Up)
// What size is this image.
let imageSize = image.size
let imageScale = image.scale
let yCoord = (imageSize.height - ((imageSize.width*2)/3))/2
let getRect = CGRectMake(0, yCoord, imageSize.width, ((imageSize.width*2)/3))
let rect = CGRectMake(getRect.origin.x*imageScale, getRect.origin.y*imageScale, getRect.size.width*imageScale, getRect.size.height*imageScale)
let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)
//let newImage = UIImage(CGImage: imageRef!)
//This app forces the user to use landscape to take pictures, so this simply turns the image so that it looks correct when we take it.
let newImage: UIImage = UIImage(CGImage: imageRef!, scale: image.scale, orientation: UIImageOrientation.Down)
//Pass the image up to the delegate.
self.delegate?.cameraOverlayImage(newImage)
//stop the session
self.session.stopRunning()
//Remove the views.
self.previewView.removeFromSuperview()
self.boxView.removeFromSuperview()
self.myButton.removeFromSuperview()
//By this point the image has been handed off to the caller through the delegate and memory has been cleaned up.
}
})
}
}
internal func setupCameraView(){
//Add a view that is big as the frame that acts as a background.
self.boxView = UIView(frame: self.view.frame)
self.boxView.backgroundColor = UIColor.whiteColor()
self.view.addSubview(self.boxView)
//Add Camera Preview View
// This sets up the previewView to be a 3:2 aspect ratio
let newHeight = UIScreen.mainScreen().bounds.size.width / 2 * 3
self.previewView = UIView(frame: CGRectMake(0, 0, UIScreen.mainScreen().bounds.size.width, newHeight))
self.previewView.backgroundColor = UIColor.cyanColor()
self.previewView.contentMode = UIViewContentMode.ScaleToFill
self.view.addSubview(previewView)
//Add the button.
myButton.frame = CGRectMake(0,0,200,40)
myButton.backgroundColor = UIColor.redColor()
myButton.layer.masksToBounds = true
myButton.setTitle("press me", forState: UIControlState.Normal)
myButton.setTitleColor(UIColor.whiteColor(), forState: UIControlState.Normal)
myButton.layer.cornerRadius = 20.0
myButton.layer.position = CGPoint(x: self.view.frame.width/2, y:(self.view.frame.height - myButton.frame.height ) )
myButton.addTarget(self, action: "didPressTakePhoto:", forControlEvents: .TouchUpInside)
self.view.addSubview(myButton)
}
internal func setupAVCapture(){
session.sessionPreset = AVCaptureSessionPresetPhoto;
let devices = AVCaptureDevice.devices();
// Loop through all the capture devices on this phone
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Finally check the position and confirm we've got the front camera
if(device.position == AVCaptureDevicePosition.Back) {
captureDevice = device as? AVCaptureDevice
if captureDevice != nil {
//-> Now that we have the back of the camera, start a session.
beginSession()
break;
}
}
}
}
}
// Sets up the session
internal func beginSession(){
var err : NSError? = nil
var deviceInput:AVCaptureDeviceInput?
//See if we can get input from the Capture device as defined in setupAVCapture()
do {
deviceInput = try AVCaptureDeviceInput(device: captureDevice)
} catch let error as NSError {
err = error
deviceInput = nil
}
if err != nil {
print("error: \(err?.localizedDescription)")
}
//If we can add input into the AVCaptureSession() then do so.
if self.session.canAddInput(deviceInput){
self.session.addInput(deviceInput)
}
//Now show layers that were setup in the previewView, and mask it to the boundary of the previewView layer.
let rootLayer :CALayer = self.previewView.layer
rootLayer.masksToBounds=true
//put a live video capture based on the current session.
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session);
// Determine how to fill the previewLayer. In this case, I want to fill out the space of the previewLayer.
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.previewLayer.frame = rootLayer.bounds
//Put the sublayer into the previewLayer
rootLayer.addSublayer(self.previewLayer)
session.startRunning()
}
}
Here is how I am using this class in a view controller.
//
// ViewController.swift
// CameraTesting
//
// Created by Chris Cantley on 2/26/16.
// Copyright © 2016 Chris Cantley. All rights reserved.
//
import UIKit
import AVFoundation
class ViewController: UIViewController, CameraOverlayDelegate{
//Setting up the class reference.
var cameraOverlay : CameraOverlay!
//Connected to the UIViewController main view.
@IBOutlet var getView: UIView!
//Connected to an ImageView that will display the image when it is passed back to the delegate.
@IBOutlet weak var imgShowImage: UIImageView!
//Connected to the button that is pressed to bring up the camera view.
@IBAction func btnPictureTouch(sender: AnyObject) {
//Remove the image from the UIImageView and take another picture.
self.imgShowImage.image = nil
self.cameraOverlay.showCameraView()
}
override func viewDidLoad() {
super.viewDidLoad()
//Pass in the target UIView which in this case is the main view
self.cameraOverlay = CameraOverlay(parentView: getView)
//Make this class the delegate for the instantiated class.
//That way it knows to receive the image when the user takes a picture
self.cameraOverlay.delegate = self
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
//Nothing here, but if you run out of memory you might want to do something here.
}
override func shouldAutorotate() -> Bool {
if (UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeLeft ||
UIDevice.currentDevice().orientation == UIDeviceOrientation.LandscapeRight ||
UIDevice.currentDevice().orientation == UIDeviceOrientation.Unknown) {
return false;
}
else {
return true;
}
}
//This references the delegate from CameraOveralDelegate
func cameraOverlayImage(image: UIImage) {
//Put the image passed up from the CameraOverlay class into the UIImageView
self.imgShowImage.image = image
}
}
Here is a link to the project where I put that together.
GitHub - Boiler plate get image from camera
I'm having an issue with some animations in a Swift iOS application. I am trying to allow a user to grab a UIImageView and drag (pan) it to a different point. Then if they push "animate" it shows the animation of the imageview along a path from the first point to the second.
Here is what I have so far, which is more so me just trying to hammer an early solution. I'm getting an error when the "animate" button is pressed that says:
CGPathAddLineToPoint(CGMutablePathRef, const CGAffineTransform *,
CGFloat, CGFloat): no current point.
Here is my code:
// There was some global stuff set up earlier, such as pathPlayer1 which is an
// array of CGPoints I am using to store the path; they are commented
override func viewDidLoad() {
super.viewDidLoad()
var panRecognizer1 = UIPanGestureRecognizer(target: self, action: "handlePanning1:")
playerWithBall.addGestureRecognizer(panRecognizer1)
pathPlayer1.append(playerWithBall.center)
}
func handlePanning1(recognizer: UIPanGestureRecognizer) {
var newTranslation: CGPoint = recognizer.translationInView(playerWithBall)
recognizer.view?.transform = CGAffineTransformMakeTranslation(lastTranslation1.x + newTranslation.x, lastTranslation1.y + newTranslation.y)
if recognizer.state == UIGestureRecognizerState.Ended {
// lastTranslation1 is a global
lastTranslation1.x += newTranslation.x
lastTranslation1.y += newTranslation.y
// another global to get the translation from imageview center in main view
// to the new point in main view
playerWithBallPos.x = playerWithBall.center.x + lastTranslation1.x
playerWithBallPos.y = playerWithBall.center.y + lastTranslation1.y
// add this point to the path to animate along
pathPlayer1.append(playerWithBallPos)
//This was to make sure the append was working
println(pathPlayer1)
}
}
#IBAction func animatePlay(sender: UIButton) {
var path = CGPathCreateMutable()
var i: Int = 0
for (i = 0; i < pathPlayer1.count; i++) {
var location: CGPoint! = pathPlayer1[i]
// I think if its the first time you need to call CGPathMoveToPoint?
if firstTime {
CGPathMoveToPoint(path, nil, location.x, location.y)
firstTime = false
} else {
CGPathAddLineToPoint(path, nil, location.x, location.y)
}
}
var pathAnimation: CAKeyframeAnimation = CAKeyframeAnimation(keyPath: "pos")
pathAnimation.path = path
pathAnimation.duration = 1.0
}
The panning is working just fine and it appears to be correctly getting the new point, but I have no experience using Objective-C, only a computer science class's worth of knowledge of Swift/iOS, and I'm not familiar with these types of animations.
I would like to make my solution work so that I could extend it from a single image view to multiple, and animate each one simultaneously (think of a sports playbook and animating a play, or something like that).
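For what it's worth, here is a minimal, untested sketch (in the same early-Swift style as the question) of what the animation block could look like once it works: the first point gets a move rather than a line, the key path is "position" rather than "pos", and the animation has to be added to the image view's layer or nothing happens:
@IBAction func animatePlay(sender: UIButton) {
    let path = CGPathCreateMutable()
    for (index, location) in enumerate(pathPlayer1) {
        if index == 0 {
            // The path needs a current point before any lines can be added.
            CGPathMoveToPoint(path, nil, location.x, location.y)
        } else {
            CGPathAddLineToPoint(path, nil, location.x, location.y)
        }
    }
    let pathAnimation = CAKeyframeAnimation(keyPath: "position")
    pathAnimation.path = path
    pathAnimation.duration = 1.0
    // Attach the animation to the layer, otherwise it never runs.
    playerWithBall.layer.addAnimation(pathAnimation, forKey: "movePlayer")
}
Keep in mind a CAKeyframeAnimation only animates the presentation layer; if the image view should stay at the end point, its actual center still has to be updated separately.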