I would like to get some help with a SceneKit-related problem.
I have a DAE (Collada) file. The scene contains a mesh with bones and one animation. The bones move during the animation, and I would like to draw a line from a fixed point to a selected moving bone. The expected result should look like this: img 1, img 2. One end of the line is fixed and the other end should follow the animated bone, like a rubber band. I tried to implement this in a sample project but I was not able to get the line to follow the animation. I don't know how to track the animated position per frame, and I don't know how to update the line on screen. The code:
#import "ViewController.h"
@import SceneKit;
#import <OpenGLES/ES2/gl.h>
@interface ViewController () <SCNSceneRendererDelegate>
@property (weak, nonatomic) IBOutlet SCNView *scnView;
@property (strong, nonatomic) SCNNode *topBone;
@property (strong, nonatomic) SCNNode *lineNode;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.scnView.allowsCameraControl = YES;
self.scnView.playing = YES;
self.scnView.delegate = self; // Set the delegate to myself, because during the animation I would like to redraw the line at the correct position
// I have an animation with bones.
SCNScene *mainScene = [SCNScene sceneNamed:@"art.scnassets/test.dae"]; // Loading the test scene
self.topBone = [mainScene.rootNode childNodeWithName:@"TopBone" recursively:YES]; // Find the top bone. I would like to draw a line between the top of the bone and a fixed point (5,5,0)
self.lineNode = [self lineFrom:self.topBone.presentationNode.position to:SCNVector3Make(5.0, 5.0, 0.0)]; // Create a node for the line and draw the line
self.scnView.scene = mainScene; // Set the scene
[mainScene.rootNode addChildNode:self.lineNode]; // Add the line node to the scene
}
// I would like to use the render delegate to update the line position. Maybe this is not the right solution...
- (void)renderer:(id<SCNSceneRenderer>)aRenderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time {
SCNVector3 v = self.topBone.presentationNode.position; // Problem 1: The position of the top bone never changes. It is always the same value. It does not follow the animation
// Problem 2: if I could get the correct bone position in the animation frame, how could I redraw the line in the Node in every frame?
glLineWidth(20); // Set the line width
}
- (SCNNode *)lineFrom:(SCNVector3)p1 to:(SCNVector3)p2 { // Draw a line between two points and return it as a node
int indices[] = {0, 1};
SCNVector3 positions[] = { p1, p2 };
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData primitiveType:SCNGeometryPrimitiveTypeLine primitiveCount:1 bytesPerIndex:sizeof(int)];
SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = [UIColor redColor];
line.materials = @[material]; // Attach a material explicitly; a custom geometry's materials array starts out empty, so materials.firstObject would be nil
SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
return lineNode;
}
@end
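For reference, here is a minimal, untested sketch of one possible approach: implement the delegate's per-frame update callback, read the bone's animated position from the presentation tree, convert it to world space, and rebuild the line geometry every frame. lineGeometryFrom:to: is assumed to be a variant of the lineFrom: method above that returns the SCNGeometry instead of wrapping it in a node.
- (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time
{
    // The presentation node reflects the bone's animated state; converting
    // with toNode:nil yields world (scene) coordinates.
    SCNVector3 boneWorldPosition = [self.topBone.parentNode.presentationNode convertPosition:self.topBone.presentationNode.position toNode:nil];
    // Swap in fresh geometry so one end of the line tracks the bone each frame.
    self.lineNode.geometry = [self lineGeometryFrom:boneWorldPosition to:SCNVector3Make(5.0, 5.0, 0.0)];
}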
I am aware that this question has been asked many times, but I have only found a partial resolution.
I have a custom class GraphView in which I have several sliders which change the graph parameters and instigate a redraw using [self setNeedsDisplay]. The only way I can get the setNeedsDisplay to work is to have the view of type GraphView just under the View Controller and the slider just under (and inside) the GraphView (in the storyboard hierarchy). This is problematic since the slider must be inside the graph.
Here is an MCVE @interface:
#import <UIKit/UIKit.h>
@interface GraphView : UIView
@property float red;
@property __weak IBOutlet UITextField *redout;
@property UIBezierPath *aPath;
@property CGPoint aPoint;
- (void)drawRect:(CGRect)rect;
- (id) initWithFrame:(CGRect)frameRect;
- (IBAction)red_rabi:(id)sender;
@end
Here is the MCVE @implementation:
#import "GraphView.h"
@implementation GraphView
- (id) initWithFrame:(CGRect)frameRect
{
if ((self = [super initWithFrame:frameRect]) != nil)
{
_red=1.0;
}
return self;
}
- (void)drawRect:(CGRect)rect
{
[super drawRect:rect];
int i;
NSString *redstr;
float width, height;
width = rect.size.width;
height = rect.size.height;
_aPath =[UIBezierPath bezierPathWithRect:rect] ;
[_aPath setLineWidth:1.0];
_aPoint.x=0.0;
_aPoint.y=0.0;
[_aPath moveToPoint:_aPoint];
redstr = [NSString localizedStringWithFormat:@"%6.2f", _red];
for (i=1;i<400;i++)
{
_aPoint.x=i*width/400.0;
_aPoint.y=height-height*(sin(i*_red/30.)+1.0)/2.0;
[_aPath addLineToPoint:_aPoint];
}
[_aPath stroke];
}
- (IBAction)red_rabi:(id)sender
{
NSString *redstr;
UISlider *slider = (UISlider *)sender;
_red= slider.value;
redstr = [NSString localizedStringWithFormat:@"%6.2f", _red];
_redout.text = redstr;
[self setNeedsDisplay];
}
@end
If you place a generic View just underneath the View Controller (which I didn't touch), change the generic View's class to GraphView, and place a slider and TextField inside the GraphView (connecting them to the outlet and action), this app will generate a few cycles of a sine wave with the frequency controlled by the slider and its value displayed in the TextField.
If you want the slider and TextField in another view, one must use an enveloping view for all three items (GraphView, slider, text), and one cannot connect the slider and TextField to the GraphView using Ctrl-drag to the GraphView.h file. To remedy this, I placed a generic Object at the highest level and renamed it GraphView - I could then connect the slider and TextField. Although the TextField reads correctly, the slider doesn't update the GraphView.
By the way, essentially the same code with the GraphView and slider in separate views works perfectly in OS X.
Sorry for the length of this query and thanks!
Problem solved! Due to an interesting SO post from three years ago (about connecting to subviews of UIView), I discovered that one merely drags (not Ctrl-drag!) from the action or outlet circle (in the .h file) to the control and that's it. Works perfectly even when the controls are in a different view from the subclassed UIView. Works equally well with outlets as with actions, though you always drag away from the circle.
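For completeness, a code-only alternative that sidesteps the storyboard wiring entirely is to add the target in code; slider and graphView here are hypothetical outlets on the view controller:
// In the owning view controller, e.g. in viewDidLoad:
// forwards value changes straight to the GraphView's existing action method.
[self.slider addTarget:self.graphView action:@selector(red_rabi:) forControlEvents:UIControlEventValueChanged];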
Thanks to all for their help.
I'm following along with the Stanford iOS 7 course, chapter 8, where the instructor builds a simplified Tetris game with colored blocks dropping from above, where you have to fill in rows. After adding a gravity behavior to make the blocks fall, he adds a collision behavior (note the property below), lazily instantiates it, and, while doing that, sets the bounds like this:
_collider.translatesReferenceBoundsIntoBoundary = YES;
which makes the blocks collide with the bottom of the screen (rather than falling through) so they can stack on top of each other. He then adds the collision behavior to the animator property and, as a final step, in the drop method, he adds the dropView (the blocks) to the collision behavior. When he runs it, the blocks hit the bottom and stack on top of each other. When I run it, using the code below, the blocks continue to fall through the bottom of the screen (in the simulator). In other words, there is no stacking.
Can you see why the collision behavior might not be working?
ViewController
@property (strong, nonatomic) UIDynamicAnimator *animator;
@property (strong, nonatomic) UIGravityBehavior *gravity;
@property (strong, nonatomic) UICollisionBehavior *collider;
@end
@implementation DropItViewController
static const CGSize DROP_SIZE = { 40, 40 };
- (IBAction)tap:(UITapGestureRecognizer *)sender {
[self drop];
}
- (UICollisionBehavior *)collider
{
if (!_collider){
_collider = [[UICollisionBehavior alloc] init];
_collider.translatesReferenceBoundsIntoBoundary = YES;
[self.animator addBehavior:_collider];
}
return _collider;
}
- (UIDynamicAnimator *)animator
{
if (!_animator) {
_animator = [[UIDynamicAnimator alloc] init];
}
return _animator;
}
-(UIGravityBehavior *)gravity
{
if (!_gravity) {
_gravity = [[UIGravityBehavior alloc] init];
[self.animator addBehavior:_gravity];
}
return _gravity;
}
Add the dropView to the collider in the drop method
-(void)drop
{
CGRect frame;
frame.origin = CGPointZero;
frame.size = DROP_SIZE;
int x = (arc4random() % (int)self.gameView.bounds.size.width) / DROP_SIZE.width;
frame.origin.x = x * DROP_SIZE.width;
UIView *dropView = [[UIView alloc] initWithFrame:frame];
dropView.backgroundColor = self.randomColor;
[self.gameView addSubview:dropView];
[self.gravity addItem:dropView];
[self.collider addItem:dropView];
}
When you instantiate your UIDynamicAnimator, use initWithReferenceView: instead of init. Then when you use translatesReferenceBoundsIntoBoundary, it will know what reference bounds to use.
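Applied to the lazy getter from the question, that would look something like this (gameView being the view the drops are added to):
- (UIDynamicAnimator *)animator
{
    if (!_animator) {
        // Anchoring the animator to gameView gives translatesReferenceBoundsIntoBoundary
        // real bounds to turn into a collision boundary.
        _animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.gameView];
    }
    return _animator;
}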
I'm in the same course and thought I was seeing the same issue. However, try running in the 4.0 inch simulator. Your squares are probably collecting just offscreen (outside the bounds of a 3.5 inch screen).
I'm trying out a spring joint in SpriteKit (i.e. SKPhysicsJointSpring) with this simple scene. Pretty much, I've got a red sprite acting as the "ceiling", and then an orange sprite acting as a mass "block" that is supposed to be suspended from it by a spring (note: I did not draw anything to connect the two squares, but just imagine there was a spring there).
With the default gravity, I would expect that the orange block would begin to bounce up and down, but in fact, it just sits there. To further my confusion, if I uncomment the application of some force at the end of the scene's -didMoveToView: method, the x direction of the vector seems to actually be affecting the orange block (it begins to act as a pendulum), but the y direction vector doesn't seem to affect anything. It's as if the spring is really acting like a rigid rod. Is that supposed to happen?
And finally, why does the pendulum-like motion eventually dampen out? It seems that the default friction is 0.0, and I have not applied any friction myself. Can someone help me better understand this SKPhysicsJointSpring?
#import "XYZMainScene.h"
@interface XYZMainScene ()
@property (nonatomic, strong) SKSpriteNode *ceiling;
@property (nonatomic, strong) SKSpriteNode *block;
@end
@implementation XYZMainScene
- (void)didMoveToView:(SKView *)view {
SKSpriteNode *ceiling = self.ceiling;
[self addChild:ceiling];
SKSpriteNode *block = self.block;
[self addChild:block];
SKPhysicsJointSpring *spring = [SKPhysicsJointSpring jointWithBodyA:ceiling.physicsBody
bodyB:block.physicsBody
anchorA:ceiling.position
anchorB:block.position];
[self.physicsWorld addJoint:spring];
// [block.physicsBody applyForce:CGVectorMake(60, -100)];
}
- (SKSpriteNode *)ceiling {
if (!_ceiling) {
_ceiling = [SKSpriteNode spriteNodeWithColor:[SKColor redColor]
size:CGSizeMake(30, 30)];
_ceiling.position = CGPointMake(self.frame.size.width/2, 400);
_ceiling.physicsBody = [SKPhysicsBody bodyWithEdgeFromPoint:_ceiling.position
toPoint:_ceiling.position];
}
return _ceiling;
}
- (SKSpriteNode *)block {
if (!_block) {
_block = [SKSpriteNode spriteNodeWithColor:[SKColor orangeColor]
size:CGSizeMake(50,50)];
_block.position = CGPointMake(self.ceiling.position.x, self.ceiling.position.y - 200);
_block.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:_block.frame.size];
}
return _block;
}
@end
You need to change the frequency and damping properties of the SKPhysicsJointSpring.
SKPhysicsJointSpring *spring = [SKPhysicsJointSpring jointWithBodyA:ceiling.physicsBody
bodyB:block.physicsBody
anchorA:ceiling.position
anchorB:block.position];
spring.frequency = 1.0; //gives the spring some elasticity.
spring.damping = 0.0; //Will remove damping to create the 'pendulum'
[self.physicsWorld addJoint:spring];
Read up on the SKPhysicsJointSpring class reference here.
Recently I used UIDynamics to animate an image view into place. However, because its autolayout y-pos constraint was set to off-screen, when navigating away from the screen and then returning to it, my image view was being placed off-screen again. The animation took about 3 seconds, so after three seconds I just reset the constraint. That feels a little hacky.
So my question is this: what is the proper way to handle autolayout and UIDynamics at the same time?
This is not really a dynamics problem. Autolayout is incompatible with any view animation, or any manual setting of the frame: when layout comes along, it is the constraints that will be obeyed. It is up to you, if you move a view manually in any way, to update the constraints to match its new position/size/whatever.
Having said that: with UIKit Dynamics, when the animation ends, the animator will pause, and the animator's delegate is notified:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIDynamicAnimatorDelegate_Protocol/Reference/Reference.html#//apple_ref/occ/intfm/UIDynamicAnimatorDelegate/dynamicAnimatorDidPause:
So that is the moment to update the constraints.
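For example, a sketch (yPositionConstraint is a hypothetical outlet for the constraint that positions the image view; how you map the frame back to the constant depends on how that constraint is defined):
- (void)dynamicAnimatorDidPause:(UIDynamicAnimator *)animator
{
    // Sync the constraint with where dynamics actually left the view,
    // so the next layout pass doesn't snap it back off-screen.
    self.yPositionConstraint.constant = CGRectGetMinY(self.imageView.frame);
    [self.view layoutIfNeeded];
}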
You have a nice solution provided by Geppy Parziale in this tutorial.
Basically you can create an object that conforms to UIDynamicItem:
@interface DynamicHub : NSObject <UIDynamicItem>
@property (nonatomic, readonly) CGRect bounds;
@property (nonatomic, readwrite) CGPoint center;
@property (nonatomic, readwrite) CGAffineTransform transform;
@end
Its init needs to set the bounds, or it will crash:
- (id)init {
self = [super init];
if (self) {
_bounds = CGRectMake(0, 0, 100, 100);
}
return self;
}
And then you use UIDynamics on that object and use the intermediate values to update your constraints:
DynamicHub *dynamicHub = [[DynamicHub alloc] init];
UISnapBehavior *snapBehavior = [[UISnapBehavior alloc] initWithItem:dynamicHub
snapToPoint:CGPointMake(50.0, 150.0)];
[snapBehavior setDamping:.1];
snapBehavior.action = ^{
self.firstConstraint.constant = [dynamicHub center].y;
self.secondConstraint.constant = [dynamicHub center].x;
};
[self.animator addBehavior:snapBehavior];
I'm building a photo filter app (like Instagram, Camera+ and many more...); my main screen is a UIImageView presenting the image to the user, and a bottom bar with some filters and other options.
One of the options is blur, where the user can pinch or move a circle that represents the non-blurred part (radius and position); all the pixels outside of this circle will be blurred.
When the user touches the screen I want to add a semi-transparent layer above my image that represents the blurred part, with a fully transparent circle that represents the non-blurred part.
So my question is, how do I add this layer? I suppose I need to use some view above my image view, and to use some mask to get my circle shape? I would really appreciate a good tip here.
One More Thing
I need the circle not to be cut off sharply, but to have a kind of gradient fade, something like Instagram:
And it's very important to achieve this effect with good performance. I did get it working with drawRect:, but the performance was very bad on old devices (iPhone 4, iPod).
Sharp Mask
Whenever you want to draw a path that consists of a shape (or series of shapes) as a hole in another shape, the key is almost always using an 'even odd winding rule'.
From the Winding Rules section of the Cocoa Drawing Guide:
A winding rule is simply an algorithm that tracks information about each contiguous region that makes up the path's overall fill area. A ray is drawn from a point inside a given region to any point outside the path bounds. The total number of crossed path lines (including implicit lines) and the direction of each path line are then interpreted using rules which determine if the region should be filled.
I appreciate that description isn't really helpful without the rules as context and diagrams to make it easier to understand so I urge you to read the links I've provided above. For the sake of creating our circle mask layer the following diagrams depict what an even odd winding rule allows us to accomplish:
Non Zero Winding Rule
Even Odd Winding Rule
Now it's simply a matter of creating the translucent mask using a CAShapeLayer that can be repositioned and expanded and contracted through user interaction.
Code
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@property (strong, nonatomic) IBOutlet UIImageView *imageView;
@property (strong) CAShapeLayer *blurFilterMask;
@property (assign) CGPoint blurFilterOrigin;
@property (assign) CGFloat blurFilterDiameter;
@end
@implementation ViewController
// begin the blur masking operation.
- (void)beginBlurMasking
{
self.blurFilterOrigin = self.imageView.center;
self.blurFilterDiameter = MIN(CGRectGetWidth(self.imageView.bounds), CGRectGetHeight(self.imageView.bounds));
CAShapeLayer *blurFilterMask = [CAShapeLayer layer];
// Disable implicit animations for the blur filter mask's path property.
blurFilterMask.actions = [[NSDictionary alloc] initWithObjectsAndKeys:[NSNull null], @"path", nil];
blurFilterMask.fillColor = [UIColor blackColor].CGColor;
blurFilterMask.fillRule = kCAFillRuleEvenOdd;
blurFilterMask.frame = self.imageView.bounds;
blurFilterMask.opacity = 0.5f;
self.blurFilterMask = blurFilterMask;
[self refreshBlurMask];
[self.imageView.layer addSublayer:blurFilterMask];
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[self.imageView addGestureRecognizer:tapGesture];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
[self.imageView addGestureRecognizer:pinchGesture];
}
// Move the origin of the blur mask to the location of the tap.
- (void)handleTap:(UITapGestureRecognizer *)sender
{
self.blurFilterOrigin = [sender locationInView:self.imageView];
[self refreshBlurMask];
}
// Expand and contract the clear region of the blur mask.
- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
// Use some combination of sender.scale and sender.velocity to determine the rate at which you want the circle to expand/contract.
self.blurFilterDiameter += sender.velocity;
[self refreshBlurMask];
}
// Update the blur mask within the UI.
- (void)refreshBlurMask
{
CGFloat blurFilterRadius = self.blurFilterDiameter * 0.5f;
CGMutablePathRef blurRegionPath = CGPathCreateMutable();
CGPathAddRect(blurRegionPath, NULL, self.imageView.bounds);
CGPathAddEllipseInRect(blurRegionPath, NULL, CGRectMake(self.blurFilterOrigin.x - blurFilterRadius, self.blurFilterOrigin.y - blurFilterRadius, self.blurFilterDiameter, self.blurFilterDiameter));
self.blurFilterMask.path = blurRegionPath;
CGPathRelease(blurRegionPath);
}
...
(This diagram may help understand the naming conventions in the code)
Gradient Mask
The Gradients section of Apple's Quartz 2D Programming Guide details how to draw radial gradients, which we can use to create a mask with a feathered edge. This involves drawing a CALayer's content directly by subclassing it or implementing its drawing delegate. Here we subclass it to encapsulate the related data, i.e. origin and diameter.
Code
BlurFilterMask.h
#import <QuartzCore/QuartzCore.h>
@interface BlurFilterMask : CALayer
@property (assign) CGPoint origin; // The centre of the blur filter mask.
@property (assign) CGFloat diameter; // The diameter of the clear region of the blur filter mask.
@end
BlurFilterMask.m
#import "BlurFilterMask.h"
// The width in points the gradated region of the blur filter mask will span over.
CGFloat const GRADIENT_WIDTH = 50.0f;
@implementation BlurFilterMask
- (void)drawInContext:(CGContextRef)context
{
CGFloat clearRegionRadius = self.diameter * 0.5f;
CGFloat blurRegionRadius = clearRegionRadius + GRADIENT_WIDTH;
CGColorSpaceRef baseColorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat colours[8] = { 0.0f, 0.0f, 0.0f, 0.0f, // Clear region colour.
0.0f, 0.0f, 0.0f, 0.5f }; // Blur region colour.
CGFloat colourLocations[2] = { 0.0f, 0.4f };
CGGradientRef gradient = CGGradientCreateWithColorComponents (baseColorSpace, colours, colourLocations, 2);
CGContextDrawRadialGradient(context, gradient, self.origin, clearRegionRadius, self.origin, blurRegionRadius, kCGGradientDrawsAfterEndLocation);
CGColorSpaceRelease(baseColorSpace);
CGGradientRelease(gradient);
}
@end
ViewController.m (wherever you are implementing the blur filter masking functionality)
#import "ViewController.h"
#import "BlurFilterMask.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@property (strong, nonatomic) IBOutlet UIImageView *imageView;
@property (strong) BlurFilterMask *blurFilterMask;
@end
@implementation ViewController
// Begin the blur filter masking operation.
- (void)beginBlurMasking
{
BlurFilterMask *blurFilterMask = [BlurFilterMask layer];
blurFilterMask.diameter = MIN(CGRectGetWidth(self.imageView.bounds), CGRectGetHeight(self.imageView.bounds));
blurFilterMask.frame = self.imageView.bounds;
blurFilterMask.origin = self.imageView.center;
blurFilterMask.shouldRasterize = YES;
[self.imageView.layer addSublayer:blurFilterMask];
[blurFilterMask setNeedsDisplay];
self.blurFilterMask = blurFilterMask;
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[self.imageView addGestureRecognizer:tapGesture];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
[self.imageView addGestureRecognizer:pinchGesture];
}
// Move the origin of the blur mask to the location of the tap.
- (void)handleTap:(UITapGestureRecognizer *)sender
{
self.blurFilterMask.origin = [sender locationInView:self.imageView];
[self.blurFilterMask setNeedsDisplay];
}
// Expand and contract the clear region of the blur mask.
- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
// Use some combination of sender.scale and sender.velocity to determine the rate at which you want the mask to expand/contract.
self.blurFilterMask.diameter += sender.velocity;
[self.blurFilterMask setNeedsDisplay];
}
...
(This diagram may help understand the naming conventions in the code)
Note
Ensure the multipleTouchEnabled property of the UIImageView hosting your image is set to YES/true, e.g.:
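self.imageView.multipleTouchEnabled = YES;
self.imageView.userInteractionEnabled = YES; // worth noting: UIImageView disables user interaction by default, and gesture recognizers require it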
Note
For the sake of clarity in answering the OP's question, this answer continues to use the naming conventions originally used. This may be slightly misleading to others: 'mask' in this context does not refer to an image mask but to a mask in the more general sense. This answer doesn't use any image masking operations.
Sounds like you want to use GPUImageGaussianSelectiveBlurFilter, which is contained in the GPUImage framework. It should be a faster, more efficient way to achieve what you want.
You can hook up the excludeCircleRadius property to a UIPinchGestureRecognizer in order to allow the user to change the size of the non-blurred circle. Then use the excludeCirclePoint property in conjunction with a UIPanGestureRecognizer to allow the user to move the center of the non-blurred circle.
Read more about how to apply the filter here:
https://github.com/BradLarson/GPUImage#processing-a-still-image
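A rough sketch of still-image usage (untested; the capture calls are from recent GPUImage versions, and the exclude-circle coordinates are normalized to the image):
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageGaussianSelectiveBlurFilter *filter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
filter.excludeCirclePoint = CGPointMake(0.5, 0.5); // centre of the sharp circle, normalized
filter.excludeCircleRadius = 0.25;                 // radius of the sharp circle, normalized
[source addTarget:filter];
[filter useNextFrameForImageCapture];
[source processImage];
UIImage *result = [filter imageFromCurrentFramebuffer];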
In Swift if anyone needs it (added pan gesture as well):
BlurFilterMask.swift
import Foundation
import QuartzCore
class BlurFilterMask : CALayer {
private let GRADIENT_WIDTH : CGFloat = 50.0
var origin : CGPoint?
var diameter : CGFloat?
override init() {
super.init()
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func drawInContext(ctx: CGContext) {
let clearRegionRadius : CGFloat = self.diameter! * 0.5
let blurRegionRadius : CGFloat = clearRegionRadius + GRADIENT_WIDTH
let baseColorSpace = CGColorSpaceCreateDeviceRGB();
let colours : [CGFloat] = [0.0, 0.0, 0.0, 0.0, // Clear region
0.0, 0.0, 0.0, 0.5] // blur region color
let colourLocations : [CGFloat] = [0.0, 0.4]
let gradient = CGGradientCreateWithColorComponents (baseColorSpace, colours, colourLocations, 2)
CGContextDrawRadialGradient(ctx, gradient, self.origin!, clearRegionRadius, self.origin!, blurRegionRadius, .DrawsAfterEndLocation);
}
}
ViewController.swift
func addMaskOverlay(){
imageView!.userInteractionEnabled = true
imageView!.multipleTouchEnabled = true
let blurFilterMask = BlurFilterMask()
blurFilterMask.diameter = min(CGRectGetWidth(self.imageView!.bounds), CGRectGetHeight(self.imageView!.bounds))
blurFilterMask.frame = self.imageView!.bounds
blurFilterMask.origin = self.imageView!.center
blurFilterMask.shouldRasterize = true
self.imageView!.layer.addSublayer(blurFilterMask)
self.blurFilterMask = blurFilterMask
self.blurFilterMask!.setNeedsDisplay()
self.imageView!.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: "handlePinch:"))
self.imageView!.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "handleTap:"))
self.imageView!.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: "handlePan:"))
}
func donePressed(){
//save photo and add to textview
let parent : LoggedInContainerViewController? = self.parentViewController as? LoggedInContainerViewController
let vc : OrderFlowCareInstructionsTextViewController = parent?.viewControllers[(parent?.viewControllers.count)!-2] as! OrderFlowCareInstructionsTextViewController
vc.addImageToTextView(imageView?.image)
parent?.popViewController()
}
//MARK: Mask Overlay
func handleTap(sender : UITapGestureRecognizer){
self.blurFilterMask!.origin = sender.locationInView(self.imageView!)
self.blurFilterMask!.setNeedsDisplay()
}
func handlePinch(sender : UIPinchGestureRecognizer){
self.blurFilterMask!.diameter = self.blurFilterMask!.diameter! + sender.velocity*3
self.blurFilterMask!.setNeedsDisplay()
}
func handlePan(sender : UIPanGestureRecognizer){
let translation = sender.translationInView(self.imageView!)
let center = CGPoint(x:self.imageView!.center.x + translation.x,
y:self.imageView!.center.y + translation.y)
self.blurFilterMask!.origin = center
self.blurFilterMask!.setNeedsDisplay()
}