Is it possible to change the bounds of a UIView (which is attached to some other UIViews using UIAttachmentBehaviors) and have the UICollisionBehavior in combination with the UIAttachmentBehavior respond to it (like the sample movie here: http://www.netwalkapps.com/ani.mov, whereby upon touch the ball UIView grows and the other ball UIViews move out of the way)?
Thanks!
Tom.
I got this to work, but it was pretty hacky: I had to remove all behaviors from my animator object and then re-add them.
- (void)_tickleBehaviors
{
    // Snapshot the current behaviors first, then iterate over the copy
    // (removing while enumerating the animator's live array would mutate it mid-loop).
    NSArray *behaviors = [self.animator.behaviors copy];
    for (UIDynamicBehavior *behavior in behaviors) {
        [self.animator removeBehavior:behavior];
    }
    // Re-adding them makes the animator pick up the view's new bounds.
    for (UIDynamicBehavior *behavior in behaviors) {
        [self.animator addBehavior:behavior];
    }
}
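For context, here's roughly how I call it. The ball view and the growth factor below are placeholders for illustration, not code from my project:
- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    // Grow the tapped ball (placeholder scale factor)...
    UIView *ballView = recognizer.view;
    CGRect bounds = ballView.bounds;
    bounds.size = CGSizeMake(bounds.size.width * 1.5, bounds.size.height * 1.5);
    ballView.bounds = bounds;

    // ...then force the animator to notice the new bounds by
    // removing and re-adding every behavior.
    [self _tickleBehaviors];
}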
I have a UILabel that I have attached a touch, pinch, and rotate gesture to. The problem I am having is that there are only a few characters in it which can make it hard to rotate/pinch/touch. Is there a way that I can add a margin to the UILabel or add some area around the UILabel that will fire the rotate/pinch/touch action to make it easier to manipulate?
Here is an example of the method:
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    currentAction = TEXT_STRETCHING;
    recognizer.scale = 1;
}
There are a number of solutions for your issue:
1. Add the gesture to another, bigger view.
2. If your label text is centered, you can make the label itself wider and the appearance will remain as it was.
3. You can subclass UILabel and override pointInside:withEvent:. This way you can increase the area of gesture recognition for your custom view:
CustomLabel.m:
@implementation CustomLabel

// Treat touches within 10 points outside the label's bounds as inside it.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGRect bigRect = CGRectInset(self.bounds, -10, -10);
    return CGRectContainsPoint(bigRect, point);
}

@end
You can obviously customize the inset values for dx and dy to suit your needs.
More details about this method here:
https://developer.apple.com/reference/uikit/uiview/1622533-pointinside?language=objc
Storyboard/xib:
Set the class of the label to CustomLabel instead of UILabel.
ViewController.m:
Leave everything as is; your handlePinch: method is just fine.
P.S. Perhaps it would be better to stick with UIButton instead of UILabel, but this technique may still be helpful.
I have an image as a background and I want to make certain parts of this image clickable, with zooming in and out. Is there any way to do something like that?
Don't create a new view for your gesture recognizer. The recognizer implements a locationInView: method. Set it up for the view that contains the sensitive region. On the handleGesture, hit-test the region you care about like this:
0) Do all this on the view that contains the region you care about. Don't add a special view just for the gesture recognizer.
1) Setup mySensitiveRect
@property (assign, nonatomic) CGRect mySensitiveRect;
@synthesize mySensitiveRect = _mySensitiveRect;
self.mySensitiveRect = CGRectMake(0.0, 240.0, 320.0, 240.0);
2) Create your gestureRecognizer:
gr = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[self.view addGestureRecognizer:gr];
// if not using ARC, you should [gr release];
// mySensitiveRect coords are in the coordinate system of self.view
- (void)handleGesture:(UIGestureRecognizer *)gestureRecognizer {
    CGPoint p = [gestureRecognizer locationInView:self.view];
    if (CGRectContainsPoint(self.mySensitiveRect, p)) {
        // Add your zooming code here
    } else {
        NSLog(@"got a tap, but not where i need it");
    }
}
The sensitive rect should be initialized in myView's coordinate system, the same view to which you attach the recognizer.
Apple has a demo app called PhotoScroller that implements a zoomable, scrollable set of images (in a page view controller, but you don't need that.) That would be a good starting point for what you need.
Their sample apps used to be built into the Xcode docs. Since Xcode 6 I haven't seen them linked in the docs any more.
You can download PhotoScroller from Apple's online iOS Developer Library. (link)
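If you only need basic pinch-zooming of a single background image (rather than PhotoScroller's tiled, paged setup), a UIScrollView wrapping one image view is often enough. A minimal sketch, assuming a zoomedImageView property and a view controller that adopts UIScrollViewDelegate (both placeholder names, not taken from PhotoScroller):
- (void)viewDidLoad {
    [super viewDidLoad];

    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    scrollView.minimumZoomScale = 1.0;
    scrollView.maximumZoomScale = 4.0;
    scrollView.delegate = self; // self adopts UIScrollViewDelegate

    // zoomedImageView is assumed to be a UIImageView property on this controller
    self.zoomedImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"background"]];
    [scrollView addSubview:self.zoomedImageView];
    scrollView.contentSize = self.zoomedImageView.bounds.size;

    [self.view addSubview:scrollView];
}

// Tell the scroll view which subview to zoom when the user pinches.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return self.zoomedImageView;
}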
I'm getting into SpriteKit and would like to know how to create a motion effect on an SKNode object.
For UIView I use the following method:
+ (void)registerEffectForView:(UIView *)aView
                        depth:(CGFloat)depth
{
    UIInterpolatingMotionEffect *effectX;
    UIInterpolatingMotionEffect *effectY;
    effectX = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x"
                                                              type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
    effectY = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.y"
                                                              type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];

    effectX.maximumRelativeValue = @(depth);
    effectX.minimumRelativeValue = @(-depth);
    effectY.maximumRelativeValue = @(depth);
    effectY.minimumRelativeValue = @(-depth);

    [aView addMotionEffect:effectX];
    [aView addMotionEffect:effectY];
}
I haven't found anything similar for SKNode. So my question is: is it possible? And if not, how can I implement it?
UIInterpolatingMotionEffect works at a deeper level, and you can't point it at an arbitrary keyPath like "cloudX". Even after adding motion effects, the actual value of the center property won't change.
So the answer is: you can't add a motion effect to anything other than a UIView, and driving an arbitrary property rather than a particular property such as center or frame is not possible either.
UIInterpolatingMotionEffect just maps the device tilt to properties of the view it's applied to -- it's all about what keyPaths you set it up with, and what the setters for those key paths do.
The example you posted maps horizontal tilt to the x coordinate of the view's center property. When the device is tilted horizontally, UIKit automatically calls setCenter: on the view (or sets view.center =, if you prefer your syntax that way), passing a point whose X coordinate is offset proportionally to the amount of horizontal tilt.
You can just as well define custom properties on a custom UIView subclass. Since you're working with Sprite Kit, you can subclass SKView to add properties.
For example... say you have a cloud sprite in your scene that you want to move as the user tilts the device. Name it as a property in your SKScene subclass:
@interface MyScene : SKScene
@property SKSpriteNode *cloud;
@end
And add properties and accessors in your SKView subclass that move it:
@implementation MyView // (excerpt)

- (CGFloat)cloudX {
    return ((MyScene *)self.scene).cloud.position.x;
}

- (void)setCloudX:(CGFloat)x {
    SKSpriteNode *cloud = ((MyScene *)self.scene).cloud;
    cloud.position = CGPointMake(x, cloud.position.y);
}

@end
Now, you can create a UIInterpolatingMotionEffect whose keyPath is cloudX, and it should* automagically move the sprite in your scene.
(* totally untested code)
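For completeness, the wiring for that last step might look like this (equally untested; it assumes your view controller's view is an instance of the MyView subclass above, and the relative values are arbitrary):
// self.view is assumed to be the MyView (SKView subclass) defined above.
MyView *skView = (MyView *)self.view;

UIInterpolatingMotionEffect *cloudEffect =
    [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"cloudX"
                                                    type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
cloudEffect.minimumRelativeValue = @(-50); // tune to taste
cloudEffect.maximumRelativeValue = @(50);

[skView addMotionEffect:cloudEffect];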
I'm having trouble getting a repeatable background to work in my game menu.
The user can slide a finger across the screen to select a character to play.
I have a parallax effect working with various backgrounds as the characters slide into view.
Sample below.
- (void)didMoveToView:(SKView *)view
{
    self.pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(dragScene:)];
    self.pan.minimumNumberOfTouches = 1;
    self.pan.delegate = self;
    [self.view addGestureRecognizer:self.pan];
}
- (void)dragScene:(UIPanGestureRecognizer *)gesture
{
    CGPoint trans = [gesture translationInView:self.view];
    SKAction *moveSky = [SKAction moveByX:trans.x * 0.03 y:0 duration:0];
    [_skyBackground runAction:moveSky];
}
I would like to repeat the backgrounds. I know how to do this with automatically scrolling backgrounds but I can't seem to get it to work here. It needs to repeat in both directions, left and right.
Thanks for any help!
You can create two more background nodes - one to the left of your current background node and one to the right. Move them as well any time you move your existing _skyBackground node.
Then, in the update method, check if any of the three nodes needs to be "shifted" - either to behind the other two or in front. You're basically swapping the three nodes' positions if needed.
- (void)update:(NSTimeInterval)currentTime {
    // get the left background node (or if using an ivar just use _leftNode)
    SKSpriteNode *leftNode = (SKSpriteNode *)[self childNodeWithName:@"leftNode"];

    // my positioning might be off but you'll get the idea
    if (leftNode.position.x < -leftNode.size.width * 2)
    {
        leftNode.position = CGPointMake(leftNode.size.width, leftNode.position.y);
    }
    if (leftNode.position.x > leftNode.size.width * 2)
    {
        leftNode.position = CGPointMake(-leftNode.size.width, leftNode.position.y);
    }

    // repeat the same for _skyBackground and _rightNode
}
You may need more than 3 images if there's a slight gap between images as they're shifted.
I've been trying to figure this out for hours, completely at a loss here. I'm trying to implement a UIPinchGestureRecognizer for some of the custom UIImageViews in my game, but it doesn't work. Everything I've researched says it should work, yet it doesn't. Pinch works fine if I add it to my view controller, or to a custom UIView, but not to the UIImageViews. I've tried all the common fixes and tweaks, without success. I have userInteractionEnabled and multipleTouchEnabled set to YES. I have the delegate and selectors set up properly. I have shouldRecognizeSimultaneouslyWithGestureRecognizer set to return YES.
The gesture recognizer is getting added to the UIImageView, I've been able to access its properties later in my update loop, but the NSLog in the selector never gets called for the UIImageView when I try to pinch. I've adjusted the z-position of the views to ensure they are on top but no dice.
My UIImageViews are stored in a NSMutableDictionary and are updated by looping through it during each update loop of the game. Could this have an effect on the UIPinchGestureRecognizer not getting called?... I can't think of anything else and posting the code probably won't help - because the same exact code works when it's used for the UIView or view controller.
I do have touch handling code in the view controller's touchesBegan and touchesMoved events... I've turned that off, but the problem still persists, and the pinch worked for other elements with it on anyway.
Any ideas what could prevent a gesture selector from firing on an UIImageView? The dictionary? Something to do with being constantly updated in the game loop? Any ideas would be welcome, this seems so simple to implement...
Edit: Here's the code for the UIImageView and what I'm doing with it... not sure if this helps.
Extended UIImageView class Paper.m (prp is a struct of properties used to initialize my custom variables):
NSString *tName = [NSString stringWithUTF8String:prp.imagePath];
UIImage *tImage = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", tName]];
self = [self initWithImage:tImage];
self.userInteractionEnabled = YES;
self.multipleTouchEnabled = YES;
self.center = CGPointMake(prp.spawnX, prp.spawnY);
if (prp.zPos != 0) { self.layer.zPosition = prp.zPos; }
// other initialization excised
Then I have a custom class called ObjManager that holds the NSMutableDictionary and initializes all UIImageView objects like so, where addObj is called in a loop to add each object:
- (ObjManager *)initWithBlank {
    // create a dictionary for our objects
    self = [super init];
    if (self) {
        objects = [[NSMutableDictionary alloc] init];
        spawnID = 100; // start of counter for dynamically spawned object IDs
    }
    return self;
}

- (void)addObj:(Paper *)paperPiece wasSpawned:(BOOL)spawned {
    // add each paper piece, assign spawnID if dynamically spawned
    NSNumber *newID;
    if (spawned) { newID = [NSNumber numberWithInt:spawnID]; spawnID++; }
    else { newID = [NSNumber numberWithInt:paperPiece.objID]; }
    [objects setObject:paperPiece forKey:newID];
}
My view controller calls the initialization of the ObjManager (called _world in my VC). Then it loops through _world like so:
// Populate additional object managers and add all subviews
for (NSNumber *key in _world.objects) {
    _eachPiece = [_world.objects objectForKey:key];

    // Populate collision object manager
    if (_eachPiece.collision) {
        [_world_collisions addObj:_eachPiece wasSpawned:NO];
    }

    // only add pinch gesture if the object flag is set
    if (_eachPiece.pinch) {
        UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchPaper:)];
        pinchGesture.delegate = self;
        [_eachPiece addGestureRecognizer:pinchGesture];
        NSLog(@"Added pinch recognizer scale: %@", pinchGesture.view.description);
    }

    // Add each object as a subview
    [self.view addSubview:_eachPiece];
}
_eachPiece is an object in my view controller, declared in the .h file (as is _world):
@property (nonatomic, strong) ObjManager *world;
@property (nonatomic, strong) Paper *eachPiece;
Then I have an NSTimer object that updates all moveable Paper objects (the UIImageViews) in _world (ObjManager) every frame like so:
// loop through each piece and update
for (NSNumber *key in _world.objects) {
    eachPiece = [_world.objects objectForKey:key];

    // only update moveable pieces
    if ((eachPiece.moveType == Move_Touch) || (eachPiece.moveType == Move_Auto)) {
        CGPoint paperCenter;
        paperCenter = eachPiece.center;

        // a bunch of code to update paperCenter x & y for the object's new position based on velocity and user input

        // determine image direction and transformation matrix
        [_world updateDirection:eachPiece];
        CGAffineTransform transformPiece = [_world imageTransform:eachPiece];
        if (transformEnabled) {
            eachPiece.transform = transformPiece;
        }

        // finally move it
        [eachPiece setCenter:paperCenter];
    }
}
And the pinch selector:
- (void)pinchPaper:(UIPinchGestureRecognizer *)recognizer {
    NSLog(@"Pinch scale: %f", recognizer.scale);
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}
As far as I can tell, the pinch should work. If I take the same pinch gesture code and set it to add to the view controller, it works for the entire view. I also have a custom UIView class that acts as a border (simply a rectangle drawn around the view), and moving the pinch gesture code to that allows me to pinch the border only.
Alright, so apparently gesture recognizers don't fire on views where the position is being animated. So to make it work I had to put the recognizer on the view controller, then perform a hit test and apply pinch/zoom on the touched view if it's one I want to pinch/zoom. Info on that here:
http://iphonedevsdk.com/forum/iphone-sdk-tutorials/100982-caanimation-tutorial.html
For my particular case, I kept track of which animated views I wanted to pinch, in a variable/array at the View Controller level. Then I used this code in the selector (essentially from the link above, all credit to them):
- (void)pinchPaper:(UIPinchGestureRecognizer *)recognizer {
    CALayer *pinchLayer;
    id layerDelegate;

    // Hit-test the presentation layer, since the views are mid-animation.
    CGPoint touchPoint = [recognizer locationInView:self.view];
    pinchLayer = [self.view.layer.presentationLayer hitTest:touchPoint];
    layerDelegate = [pinchLayer delegate];

    // _pinchView is the UIView I want to pinch
    if (layerDelegate == _pinchView) {
        _pinchView.transform = CGAffineTransformScale(_pinchView.transform, recognizer.scale, recognizer.scale);
        recognizer.scale = 1;
    }
}
The only tricky thing is that if you have other scale transforms (like changing directions in mine) going on as part of the existing UIView animation, you have to account for that by using the current transform during each update loop, roughly as in the sketch below.
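For illustration only (not verbatim from my project): keep the accumulated pinch in its own variable, then compose it with whatever transform the game loop computes each frame instead of overwriting it. _pinchScale is a placeholder ivar name and imageTransform: is the per-frame transform from the earlier update loop.
// In pinchPaper:, accumulate the user's pinch into an ivar
_pinchScale *= recognizer.scale;
recognizer.scale = 1;

// In the game-loop update, compose the pinch scale with the frame's own
// direction/flip transform instead of replacing it.
CGAffineTransform transformPiece = [_world imageTransform:eachPiece];
eachPiece.transform = CGAffineTransformScale(transformPiece, _pinchScale, _pinchScale);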
For any gesture recognizer to work on an image view, user interaction must be enabled on it.
So, it should be,
yourImageView.userInteractionEnabled = YES;
Or, if you are using storyboards, you can check that option in storyboard's inspector window too.
Hope it helps..:)