I'm getting into SpriteKit and would like to know how to create a motion effect on an SKNode object.
For a UIView I use the following method:
+ (void)registerEffectForView:(UIView *)aView depth:(CGFloat)depth
{
    UIInterpolatingMotionEffect *effectX;
    UIInterpolatingMotionEffect *effectY;
    effectX = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x"
                                                              type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
    effectY = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.y"
                                                              type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
    effectX.maximumRelativeValue = @(depth);
    effectX.minimumRelativeValue = @(-depth);
    effectY.maximumRelativeValue = @(depth);
    effectY.minimumRelativeValue = @(-depth);
    [aView addMotionEffect:effectX];
    [aView addMotionEffect:effectY];
}
I haven't found anything similar for SKNode. So my question is: is it possible? And if not, how can I implement it?
UIInterpolatingMotionEffect works at a deep level, and you can't use an arbitrary key path like "cloudX". Even after you add motion effects, the actual value of the center property won't change.
So the answer is: you can't add a motion effect to anything other than a UIView. Using an arbitrary property, rather than a supported one such as center or frame, isn't possible either.
UIInterpolatingMotionEffect just maps the device tilt to properties of the view it's applied to -- it's all about what keyPaths you set it up with, and what the setters for those key paths do.
The example you posted maps horizontal tilt to the x coordinate of the view's center property. When the device is tilted horizontally, UIKit automatically calls setCenter: on the view (or sets view.center =, if you prefer your syntax that way), passing a point whose X coordinate is offset proportionally to the amount of horizontal tilt.
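That mapping can be sketched in plain C. This is an illustration of the interpolation idea only, not UIKit's actual implementation; the function name and the clamping behavior are assumptions:

```c
#include <assert.h>

/* Hypothetical sketch of how an interpolating motion effect maps a
 * normalized tilt value in [-1, 1] onto the
 * [minimumRelativeValue, maximumRelativeValue] range. */
static double interpolated_offset(double tilt, double min_value, double max_value) {
    if (tilt < -1.0) tilt = -1.0; /* clamp to the documented tilt range */
    if (tilt >  1.0) tilt =  1.0;
    /* linear map: tilt -1 -> min_value, tilt +1 -> max_value */
    return min_value + (tilt + 1.0) / 2.0 * (max_value - min_value);
}
```

With the symmetric `@(-depth)`/`@(depth)` range from the question, a tilt of zero produces no offset, and the extremes produce exactly ±depth.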
You can just as well define custom properties on a custom UIView subclass. Since you're working with Sprite Kit, you can subclass SKView to add properties.
For example... say you have a cloud sprite in your scene that you want to move as the user tilts the device. Expose it as a property in your SKScene subclass:
@interface MyScene : SKScene
@property SKSpriteNode *cloud;
@end
And add properties and accessors in your SKView subclass that move it:
@implementation MyView // (excerpt)

- (CGFloat)cloudX {
    return ((MyScene *)self.scene).cloud.position.x;
}

- (void)setCloudX:(CGFloat)x {
    SKSpriteNode *cloud = ((MyScene *)self.scene).cloud;
    cloud.position = CGPointMake(x, cloud.position.y);
}

@end
Now you can create a UIInterpolatingMotionEffect whose keyPath is @"cloudX", and it should* automagically move the sprite in your scene.
(* totally untested code)
Two likely related things here:
1) I can draw a box and add it as a child from my SKScene implementation file with self, self.scene, and myWorld, but not with an SKSpriteNode's scene property.
SKSpriteNode *bla = [SKSpriteNode spriteNodeWithColor:[UIColor redColor] size:CGSizeMake(100, 100)];
[self.scene addChild:bla]; // If I use bla.scene, it doesn't work. self, self.scene, and myWorld all do work, though.
bla.position = CGPointMake(0, 0);
bla.zPosition = 999;
2) I've seen the related questions here and here, but I'm trying to add a joint during gameplay (grabbing a rope). This method gets called after doing some sorting in `didBeginContact:`:
- (void)monkey:(SKPhysicsBody *)monkeyPhysicsBody didCollideWithRope:(SKPhysicsBody *)ropePhysicsBody atPoint:(CGPoint)contactPoint
{
    if (monkeyPhysicsBody.joints.count == 0) {
        // Create a new joint between the player and the rope segment
        CGPoint convertedRopePosition = [self.scene convertPoint:ropePhysicsBody.node.position fromNode:ropePhysicsBody.node.parent];
        SKPhysicsJointPin *jointPin = [SKPhysicsJointPin jointWithBodyA:monkeyPhysicsBody bodyB:ropePhysicsBody anchor:convertedRopePosition];
        jointPin.upperAngleLimit = M_PI/4;
        jointPin.shouldEnableLimits = YES;
        [self.scene.physicsWorld addJoint:jointPin];
    }
}
I've got showsPhysics enabled on the scene, so I can see that the joint is ending up in a totally wacky place. Unfortunately, I don't know how to apply the linked solutions, since I'm not adding the SKSpriteNodes in this method; they already exist, so I can't just flip the order of position and physicsBody.
I've tried every permutation I could for both of the convertPoint methods. Nothing worked. My best guess is that physicsWorld is using some wacky coordinate system.
Methods of SKPhysicsWorld that take a position (CGPoint) or frame (CGRect) expect scene coordinates. Scene coordinates place {0,0} at the bottom-left corner, and this convention is consistent throughout SpriteKit.
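As a quick illustration of the flip between the UIKit and SpriteKit conventions (plain C, not the SpriteKit API; the struct and function are made up for this sketch, and it assumes the scene fills the view with no scaling):

```c
#include <assert.h>

/* Convert a point from UIKit view coordinates (origin top-left, y grows
 * downward) to SpriteKit scene coordinates (origin bottom-left, y grows
 * upward), assuming an unscaled scene that fills the view. */
typedef struct { double x, y; } point2d;

static point2d view_to_scene(point2d view_point, double view_height) {
    point2d scene_point = { view_point.x, view_height - view_point.y }; /* flip y */
    return scene_point;
}
```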
The scene property of your object named bla will be nil when bla is first created; it's set by the scene when the node is added to it.
[bla.scene addChild:bla]; // this won't do anything, as scene is nil when the node is first created
It looks as though convertedRopePosition is being assigned an incorrect value because the second argument you're passing to - (CGPoint)convertPoint:(CGPoint)point fromNode:(SKNode *)node is the scene, when it should be another node in the same node tree as the caller (in this case the SKScene subclass).
Try replacing the line:
CGPoint convertedRopePosition = [self.scene convertPoint:ropePhysicsBody.node.position fromNode:ropePhysicsBody.node.parent];
with:
CGPoint convertedRopePosition = [self convertPoint:ropePhysicsBody.node.position fromNode:ropePhysicsBody.node];
I came up with a janky workaround for this problem. It turns out that the coordinate-system offset for physicsWorld was likely due to an anchor difference. Changing the anchors of everything related made no difference, and you can't change the anchor of physicsWorld directly, so I ended up adding half the scene width and half the scene height to the anchor position of my joint. That got it to show in the right place and behave normally.
This problem persisted once side scrolling was factored in. I've posted other questions here with this same problem, but I'll include the workaround here as well.
I added the following convenience method to my GameScene.m. It essentially takes the place of the built-in convert methods, which seemed useless here.
- (CGPoint)convertSceneToFrameCoordinates:(CGPoint)scenePoint
{
    CGFloat xDiff = myWorld.position.x - self.position.x;
    CGFloat yDiff = myWorld.position.y - self.position.y;
    return CGPointMake(scenePoint.x + self.frame.size.width/2 + xDiff,
                       scenePoint.y + self.frame.size.height/2 + yDiff);
}
I use this method when adding joints; it handles all of the coordinate-system transformations that lead to the issue raised in this question. For example, here's how I add joints:
CGPoint convertedRopePosition = [self convertSceneToFrameCoordinates:ropePhysicsBody.node.position];
SKPhysicsJointPin *jointPin = [SKPhysicsJointPin jointWithBodyA:monkeyPhysicsBody bodyB:ropePhysicsBody anchor:convertedRopePosition];
jointPin.upperAngleLimit = M_PI/4;
jointPin.shouldEnableLimits = YES;
[self.scene.physicsWorld addJoint:jointPin];
I am creating a UI where we have a deck of cards that you can swipe off the screen.
What I had hoped to be able to do was create a subclass of UIView which would represent each card and then to modify the transform property to move them back (z-axis) and a little up (y-axis) to get the look of a deck of cards.
Reading up on it I found I needed to use a CATransformLayer instead of the normal CALayer in order for the z-axis to not get flattened. I prototyped this by creating a CATransformLayer which I added to the CardDeckView's layer, and then all my cards are added to that CATransformLayer. The code looks a little bit like this:
In init:
// Initialize the CATransformSublayer
_rootTransformLayer = [self constructRootTransformLayer];
[self.layer addSublayer:_rootTransformLayer];
constructRootTransformLayer (the angle method is redundant; I was going to angle the deck but later decided not to):
CATransformLayer* transformLayer = [CATransformLayer layer];
transformLayer.frame = self.bounds;
// Angle the transform layer so we can see all of the cards
CATransform3D rootRotateTransform = [self transformWithZRotation:0.0];
transformLayer.transform = rootRotateTransform;
return transformLayer;
Then the code to add the cards looks like:
// Set up a CardView as a wrapper for the contentView
RVCardView* cardView = [[RVCardView alloc] initWithContentView:contentView];
cardView.layer.cornerRadius = 6.0;
if (cardView != nil) {
    [_cardArray addObject:cardView];
    //[self addSubview:cardView];
    [_rootTransformLayer addSublayer:cardView.layer];
    [self setNeedsLayout];
}
Note that what I originally wanted was simply to add the RVCardView directly as a subview, since I want to preserve touch events, which adding just the layer doesn't do. Unfortunately, what ends up happening is the following:
If I add the cards to the rootTransformLayer I end up with the right look which is:
Note that I tried using the layerClass on the root view (CardDeckView) which looks like this:
+ (Class) layerClass
{
return [CATransformLayer class];
}
I've confirmed that the root layer type is now CATransformLayer but I still get the flattened look. What else do I need to do in order to prevent the flattening?
When you use views, you see a flat scene because there is no perspective set in place. To make a comparison with 3D graphics, like OpenGL, in order to render a scene you must set the camera matrix, the one that transforms the 3D world into a 2D image.
It's the same here: sublayer content is transformed using CATransform3D in 3D space, but when the parent CALayer displays it, by default it projects onto x and y, ignoring the z coordinate.
See Adding Perspective to Your Animations in Apple's documentation. This is the code you're missing:
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / eyePosition; // ...on the z axis
myParentDeckView.layer.sublayerTransform = perspective;
Note that for this you don't need to use CATransformLayer; a simple CALayer suffices.
Here is the transformation applied to the subviews in the picture (eyePosition = -0.1):
// (from ViewController-viewDidLoad)
for (UIView *v in self.view.subviews) {
CGFloat dz = (float)(arc4random() % self.view.subviews.count);
CATransform3D t = CATransform3DRotate(CATransform3DMakeTranslation(0.f, 0.f, dz),
0.02,
1.0, 0.0, 0.0);
v.layer.transform = t;
}
The reason for using CATransformLayer is pointed out in this question. CALayer "rasterizes" its transformed sublayers and then applies its own transformation, while CATransformLayer preserves the full hierarchy and draws each sublayer independently; it is useful only if you have more than one level of 3D-transformed sublayers. In your case, the scene tree has only one level: the deck view (which itself has the identity matrix as transformation) and the card views, the children (which are instead moved in the 3D space). So CATransformLayer is superfluous in this case.
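For intuition, the effect of the m34 entry reduces to a one-line scale factor. This is a plain-C sketch of the projection math, with `d` standing in for eyePosition (the function is made up for illustration): a homogeneous point (x, y, z, 1) transformed with m34 = -1/d picks up w = 1 - z/d, so its projected size is scaled by 1 / (1 - z/d), and layers translated toward the eye (larger z) render larger.

```c
#include <assert.h>

/* Scale factor applied to a layer at depth z by a sublayerTransform
 * whose m34 entry is -1.0 / d (the perspective divide). */
static double perspective_scale(double z, double d) {
    return 1.0 / (1.0 - z / d);
}
```

So at z = 0 nothing changes, and a layer halfway to the eye appears twice as large.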
I am subclassing SKSpriteNode to make a custom sprite, and I would like to keep everything as self-contained as possible.
Is there a way for an SKSpriteNode to know when it is being used in a scene? I mean, suppose another class does this:
MySprite *sprite = [[MySprite alloc] init];
and ages later does this
[self addChild:sprite];
Can the sprite know, on its own, when it is added as a child of a scene or of another node?
SKNode has a property called scene. If this property returns nil, the node isn't part of any scene. You can check it like this:
if (!myNode.scene) {
    // Do something
}
You can also check the SKNode documentation:
https://developer.apple.com/library/ios/documentation/SpriteKit/Reference/SKNode_Ref/Reference/Reference.html
Is it possible to change the bounds of a UIView (which is attached to some other UIViews using UIAttachmentBehaviors) and have the UICollisionBehavior in combination with the UIAttachmentBehavior respond to it (like the sample movie here: http://www.netwalkapps.com/ani.mov, whereby upon touch the ball UIView grows and the other ball UIViews move out of the way)?
Thanks!
Tom.
I got this to work, but it was pretty hacky: I had to remove all behaviors from my animator object and re-add them.
- (void)_tickleBehaviors
{
    // Enumerate a copy so we aren't mutating the animator's behavior
    // list while iterating over it
    NSArray *behaviors = [self.animator.behaviors copy];
    for (UIDynamicBehavior *behavior in behaviors) {
        [self.animator removeBehavior:behavior];
    }
    for (UIDynamicBehavior *behavior in behaviors) {
        [self.animator addBehavior:behavior];
    }
}
I've been trying to figure this out for hours, completely at a loss here. I'm trying to implement a UIPinchGestureRecognizer for some of the custom UIImageViews in my game, but it doesn't work. Everything I've researched says it should work, yet it doesn't. Pinch works fine if I add it to my view controller, or to a custom UIView, but not to the UIImageViews. I've tried all the common fixes and tweaks, to no success. I have userInteractionEnabled and multipleTouchEnabled set to YES. I have the delegate and selectors set up properly. I have shouldRecognizeSimultaneouslyWithGestureRecognizer set to return YES.
The gesture recognizer is getting added to the UIImageView, I've been able to access its properties later in my update loop, but the NSLog in the selector never gets called for the UIImageView when I try to pinch. I've adjusted the z-position of the views to ensure they are on top but no dice.
My UIImageViews are stored in a NSMutableDictionary and are updated by looping through it during each update loop of the game. Could this have an effect on the UIPinchGestureRecognizer not getting called?... I can't think of anything else and posting the code probably won't help - because the same exact code works when it's used for the UIView or view controller.
I do have touch handling code in the view controller's touchesBegan and touchedMoved events... but I've turned that off but the problem still persists, and the pinch worked for other elements with it on anyway.
Any ideas what could prevent a gesture selector from firing on an UIImageView? The dictionary? Something to do with being constantly updated in the game loop? Any ideas would be welcome, this seems so simple to implement...
Edit: Here's the code for the UIImageView and what I'm doing with it; not sure if this helps.
Extended UIImageView class Paper.m (prp is a struct of properties used to initialize my custom variables):
NSString *tName = [NSString stringWithUTF8String:prp.imagePath];
UIImage *tImage = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", tName]];
self = [self initWithImage:tImage];
self.userInteractionEnabled = YES;
self.multipleTouchEnabled = YES;
self.center = CGPointMake(prp.spawnX, prp.spawnY);
if (prp.zPos != 0) { self.layer.zPosition = prp.zPos; }
// other initialization excised
Then I have a custom class called ObjManager that holds the NSMutableDictionary and initializes all UIImageView objects like so, where addObj is called in a loop to add each object:
- (ObjManager *)initWithBlank {
    // create a dictionary for our objects
    self = [super init];
    if (self) {
        objects = [[NSMutableDictionary alloc] init];
        spawnID = 100; // start of counter for dynamically spawned object IDs
    }
    return self;
}
- (void)addObj:(Paper *)paperPiece wasSpawned:(BOOL)spawned {
    // add each paper piece, assigning a spawnID if dynamically spawned
    NSNumber *newID;
    if (spawned) { newID = [NSNumber numberWithInt:spawnID]; spawnID++; }
    else { newID = [NSNumber numberWithInt:paperPiece.objID]; }
    [objects setObject:paperPiece forKey:newID];
}
My view controller calls the initialization of the ObjManager (called _world in my VC). Then it loops through _world like so:
// Populate additional object managers and add all subviews
for (NSNumber *key in _world.objects) {
    _eachPiece = [_world.objects objectForKey:key];
    // Populate collision object manager
    if (_eachPiece.collision) {
        [_world_collisions addObj:_eachPiece wasSpawned:NO];
    }
    // only add a pinch gesture if the object's flag is set
    if (_eachPiece.pinch) {
        UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchPaper:)];
        pinchGesture.delegate = self;
        [_eachPiece addGestureRecognizer:pinchGesture];
        NSLog(@"Added pinch recognizer: %@", pinchGesture.view.description);
    }
    // Add each object as a subview
    [self.view addSubview:_eachPiece];
}
_eachPiece is an object in my view controller, declared in the .h file (as is _world):
#property (nonatomic, strong) ObjManager *world;
#property (nonatomic, strong) Paper *eachPiece;
Then I have an NSTimer that updates all moveable Paper objects (the UIImageViews) in _world every frame, like so:
// loop through each piece and update
for (NSNumber *key in _world.objects) {
    eachPiece = [_world.objects objectForKey:key];
    // only update moveable pieces
    if ((eachPiece.moveType == Move_Touch) || (eachPiece.moveType == Move_Auto)) {
        CGPoint paperCenter = eachPiece.center;
        // a bunch of code to update paperCenter x & y for the object's new position based on velocity and user input
        // determine image direction and transformation matrix
        [_world updateDirection:eachPiece];
        CGAffineTransform transformPiece = [_world imageTransform:eachPiece];
        if (transformEnabled) {
            eachPiece.transform = transformPiece;
        }
        // finally move it
        [eachPiece setCenter:paperCenter];
    }
}
And the pinch selector:
- (void)pinchPaper:(UIPinchGestureRecognizer *)recognizer {
    NSLog(@"Pinch scale: %f", recognizer.scale);
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}
As far as I can tell, the pinch should work. If I take the same pinch gesture code and set it to add to the view controller, it works for the entire view. I also have a custom UIView class that acts as a border (simply a rectangle drawn around the view), and moving the pinch gesture code to that allows me to pinch the border only.
Alright, so apparently gesture recognizers don't fire on views whose position is being animated. To make it work, I had to put the recognizer on the view controller, then perform a hit test and apply the pinch/zoom to the touched view if it's one I want to pinch/zoom. Info on that here:
http://iphonedevsdk.com/forum/iphone-sdk-tutorials/100982-caanimation-tutorial.html
For my particular case, I kept track of which animated views I wanted to pinch, in a variable/array at the View Controller level. Then I used this code in the selector (essentially from the link above, all credit to them):
- (void)pinchPaper:(UIPinchGestureRecognizer *)recognizer {
    CALayer *pinchLayer;
    id layerDelegate;
    CGPoint touchPoint = [recognizer locationInView:self.view];
    pinchLayer = [self.view.layer.presentationLayer hitTest:touchPoint];
    layerDelegate = [pinchLayer delegate];
    // _pinchView is the UIView I want to pinch
    if (layerDelegate == _pinchView) {
        _pinchView.transform = CGAffineTransformScale(_pinchView.transform, recognizer.scale, recognizer.scale);
        recognizer.scale = 1;
    }
}
The only tricky thing is that if you have other scale transforms going on as part of the existing UIView animation (like the direction changes in mine), you have to account for them by using the current transform during each update loop.
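As a sanity check on that composition (a plain-C sketch with an invented struct, tracking only the diagonal scale entries of the affine matrix): concatenating two scale transforms multiplies their factors, which is what CGAffineTransformScale(currentTransform, s, s) does, so the pinch must multiply into the current transform rather than replace it.

```c
#include <assert.h>

/* Diagonal (scale) entries of a 2D affine transform matrix. */
typedef struct { double a, d; } scale_xform;

/* Concatenate a scale onto an existing transform: factors multiply. */
static scale_xform concat_scale(scale_xform t, double sx, double sy) {
    scale_xform out = { t.a * sx, t.d * sy };
    return out;
}
```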
For any gesture recognizer to work on an image view, user interaction must be enabled on it:
yourImageView.userInteractionEnabled = YES;
Or, if you are using storyboards, you can check that option in the storyboard's inspector window.
Hope it helps. :)