UICollisionBehavior collide with background view only - iOS

In my ViewController, I have a background view and a bunch of collision views. I have added a gravity behavior to those collision views so that they fall from their original positions, but I want these views to stop falling when they reach the bottom of the screen. So I added a collision behavior to each collision view. Meanwhile, I don't want them to collide with each other; in other words, each collision view can overlap the other collision views. I have tried
[_collision setTranslatesReferenceBoundsIntoBoundaryWithInsets:NO]
But it doesn't work, and I guess I'm using the wrong method. So how can I achieve this?
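For reference, here is a minimal sketch of one way to set this up, assuming the animator is kept alive as a property and a hypothetical collisionViews array holds the falling views. The relevant piece is collisionMode = .boundaries, which makes the items collide with the reference bounds only and not with each other.

import UIKit

class FallingViewsController: UIViewController {
    // The animator must be retained (e.g. as a property) or the behaviors stop.
    var animator: UIDynamicAnimator!
    var collisionViews: [UIView] = []   // hypothetical: the views that should fall

    override func viewDidLoad() {
        super.viewDidLoad()
        animator = UIDynamicAnimator(referenceView: view)

        let gravity = UIGravityBehavior(items: collisionViews)

        let collision = UICollisionBehavior(items: collisionViews)
        collision.translatesReferenceBoundsIntoBoundary = true  // the reference view's edges become walls
        collision.collisionMode = .boundaries                   // collide with boundaries only, not with each other

        animator.addBehavior(gravity)
        animator.addBehavior(collision)
    }
}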

Related

SpriteKit: Overlayed nodes cannot be tapped

I have multiple SKEffectNodes that I use as layers to represent different screens, e.g. a main menu screen, a game screen, and a game result screen. I attach child nodes representing buttons, labels, etc. to their respective layers.
When one layer is visible, the previous one is blurred with a low alpha in the background so it's slightly visible. When transitioning, the foreground blurs and tweens its alpha to 0 while the background blurs and tweens its alpha to 1. The foreground does not get recycled (I realize this may be part of the issue).
When the Game Screen is behind the Main Menu Screen, the Play button (an SKSpriteNode with isUserInteractionEnabled = true) on the main menu overlays an SKSpriteNode with isUserInteractionEnabled = false from the Game Screen. When I tap the middle of the Play button, it does not trigger the touchesEnded(...) method, but when I tap anywhere the Game Screen's SKSpriteNode is not behind it, the Play button's touchesEnded(...) method fires.
I have transitionInComplete and transitionOutComplete methods that I use to determine when a layer is fully visible/invisible and active/inactive. I thought this issue had something to do with zPositioning, so I set the current layer's zPosition to 100 when transitionInComplete is called and to 0 when transitionOutComplete is called, but that did not solve the problem. I do not touch any of the layers' components' (buttons, labels, etc.) zPositions at this point.
Any ideas?
Edit (09/25/2018 @ 3:23 PM PST):
It's worth noting that I do have swipe gestures enabled on the scene. The SKSpriteNodes react to swipe gestures.
It was an issue with zPositioning.
The following post made me realize that there was overlap because of how zPositioning is calculated from parent to child:
zPosition of SKNode relative to its parent?
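To illustrate the cause, here is a small hypothetical sketch (not the asker's actual code): a node's effective depth is its own zPosition added to its ancestors', so a decorative node parented to the "background" layer can still end up rendered above, and therefore intercepting touches meant for, a button on the "foreground" layer.

import SpriteKit

let menuLayer = SKEffectNode()   // "foreground" layer
menuLayer.zPosition = 100

let gameLayer = SKEffectNode()   // "background" layer
gameLayer.zPosition = 0

let playButton = SKSpriteNode(color: .green, size: CGSize(width: 200, height: 60))
playButton.isUserInteractionEnabled = true
playButton.zPosition = 10        // effective depth: 100 + 10 = 110
menuLayer.addChild(playButton)

let decoration = SKSpriteNode(color: .gray, size: CGSize(width: 300, height: 300))
decoration.isUserInteractionEnabled = false
decoration.zPosition = 150       // effective depth: 0 + 150 = 150 -> above the button
gameLayer.addChild(decoration)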

How to tie together gestures and animation

I'm as beginner as you can get when it comes to iOS animation.
I know you can do fixed (non-gesture-controlled) animations, where you animate a property of a view over a fixed period of time. That, however, is entirely different from using a gesture to control the animation.
You know how you fold a sheet of 8x11 paper to put it in an envelope and mail it... you fold it in thirds. Well basically, my boss wants an interface such that 2/3rds of it is shown on the screen at a time, and the other third is slid on/off screen with a gesture. So basically, the screen would show either thirds 1 and 2, or thirds 2 and 3 depending on whether you swipe left or right.
Now this also means doing things like snapping/rubber-banding, bouncing, acceleration/deceleration, and sticky behavior. I have no clue where to even start on something like this. I'm assuming those types of motions are not already built into any of the iOS frameworks, and if you want snapping/rubber-banding, bouncing, or acceleration/deceleration, you'd have to program it entirely from scratch.
Like, how would a view know where my artificial snap/bounce points are, and how would sticky behavior work, where you lift your finger from the screen before an arbitrary position is reached and the view bounces back to its previous position?
Where would you suggest I start on researching how to drive animations with gestures?
I suggest you take a look at Core Animation. You can do some really complex animations, including acceleration, deceleration, bounce, and others.
You could easily create a UIPanGestureRecognizer to track when someone drags their finger across the screen. Attach an action, e.g. wasDragged, to your gesture recognizer.
From the documentation:
A panning gesture is continuous. It begins (UIGestureRecognizerStateBegan) when the minimum number of fingers allowed (minimumNumberOfTouches) has moved enough to be considered a pan. It changes (UIGestureRecognizerStateChanged) when a finger moves while at least the minimum number of fingers are pressed down. It ends (UIGestureRecognizerStateEnded) when all fingers are lifted.
In wasDragged, check what state the gesture recognizer is in. If the state is UIGestureRecognizerStateChanged, you can adjust the size or position of your UIView so that it appears you are dragging it out. If the state is UIGestureRecognizerStateEnded, check whether the point at which the gesture ended is past your threshold point (e.g. halfway across the screen). If it isn't, snap the view back with an animation; if it is, snap the view into where you want it.
Hope this makes sense.
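A rough sketch of that approach, assuming a hypothetical panelView and a halfway-across-the-screen threshold (the names and numbers are placeholders, not from the answer):

import UIKit

class SlidingPanelViewController: UIViewController {
    let panelView = UIView()   // the view holding the "thirds" of the interface

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(wasDragged(_:)))
        panelView.addGestureRecognizer(pan)
    }

    @objc func wasDragged(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: view)

        switch gesture.state {
        case .changed:
            // Follow the finger while the drag is in progress.
            panelView.transform = CGAffineTransform(translationX: translation.x, y: 0)
        case .ended:
            // Past the threshold: snap to the next position. Otherwise: bounce back.
            let passedThreshold = abs(translation.x) > view.bounds.width / 2
            let direction: CGFloat = translation.x < 0 ? -1 : 1
            let targetX = passedThreshold ? direction * view.bounds.width / 3 : 0
            // The spring parameters give the rubber-band / bounce feel.
            UIView.animate(withDuration: 0.4, delay: 0,
                           usingSpringWithDamping: 0.7, initialSpringVelocity: 0.5,
                           options: [.allowUserInteraction],
                           animations: {
                self.panelView.transform = CGAffineTransform(translationX: targetX, y: 0)
            })
        default:
            break
        }
    }
}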

Scroll in spritekit game

I am building a SpriteKit game with two screens. On the first screen the player should pick one hangar out of 6-7 by scrolling horizontally. When one is picked, a new SKScene with the actual gameplay will appear. For the scrolling, one hangar should be centered while two others are partly visible at the sides.
Can this be done with a UIScrollView on top of the SKScene, or is it better to use sprite nodes for it?
I am just not sure about the best way to handle the user interface with SpriteKit.
I would implement this by making the hangars children of an SKNode. Swiping would then move this SKNode around together with all of its children.
For the positioning you described: when the swiping has stopped, I would use an SKAction to center the hangar closest to the middle of the screen (see the sketch after the list below).
I would do it like this because I think you should only mix in UIKit when necessary, because:
It is easier to port to OSX
You don't have to convert between different types of coordinate systems
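A rough sketch of that idea (the names, hangar count, and spacing are placeholders): the hangars are children of one container node, dragging moves the container, and on touch-up an SKAction snaps the nearest hangar to the centre.

import SpriteKit

class HangarSelectScene: SKScene {
    let hangarContainer = SKNode()
    let hangarSpacing: CGFloat = 250
    let hangarCount = 6
    private var lastTouchX: CGFloat = 0

    override func didMove(to view: SKView) {
        anchorPoint = CGPoint(x: 0.5, y: 0.5)   // put the scene origin at the screen centre
        addChild(hangarContainer)
        // Lay the hangars out in a horizontal row inside the container.
        for i in 0..<hangarCount {
            let hangar = SKSpriteNode(color: .gray, size: CGSize(width: 200, height: 150))
            hangar.position = CGPoint(x: CGFloat(i) * hangarSpacing, y: 0)
            hangarContainer.addChild(hangar)
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lastTouchX = touches.first?.location(in: self).x ?? lastTouchX
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let x = touches.first?.location(in: self).x else { return }
        // Dragging moves the container, and therefore every hangar with it.
        hangarContainer.position.x += x - lastTouchX
        lastTouchX = x
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Snap so the hangar nearest the centre of the screen ends up centred.
        let nearestIndex = (-hangarContainer.position.x / hangarSpacing).rounded()
        let clamped = min(max(nearestIndex, 0), CGFloat(hangarCount - 1))
        let snap = SKAction.moveTo(x: -clamped * hangarSpacing, duration: 0.25)
        snap.timingMode = .easeOut
        hangarContainer.run(snap)
    }
}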

Multiple UIView transformation animations, one repeats, the other does not

I have a UIImageView whose instance has a repeating pulse-like animation.
I want this UIImageView to move with the force of the accelerometer. To do this I have added another animation, this time a translation.
However, once this translation happens, it stops the pulse, even when it's not animating.
As these two animations are separate, I cannot use CGAffineTransformConcat to combine them.
I have tried adding one to the UIView and one to the layer property of the same UIView, but it does not work.
I have also tried applying the movement directly to the frame of the view; however, this just causes my view to disobey the constraint pinning it to the center X of the superview.
Does anyone use/know of a workaround for this?
Thanks as always!
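There is no accepted answer here, but one arrangement sometimes used for this kind of setup (an assumption on my part, with placeholder names and values) is to run the pulse and the translation against different transform key paths of the layer, so that setting one does not cancel the other:

import UIKit

// Repeating pulse: a CABasicAnimation on "transform.scale" only.
func addPulse(to imageView: UIImageView) {
    let pulse = CABasicAnimation(keyPath: "transform.scale")
    pulse.fromValue = 1.0
    pulse.toValue = 1.15
    pulse.duration = 0.6
    pulse.autoreverses = true
    pulse.repeatCount = .infinity
    imageView.layer.add(pulse, forKey: "pulse")
}

// Accelerometer-driven movement: write a different transform sub-key on the
// model layer. The running "pulse" animation composes with it at render time.
func tilt(_ imageView: UIImageView, toOffsetX offsetX: CGFloat) {
    imageView.layer.setValue(offsetX, forKeyPath: "transform.translation.x")
}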

Using Cocos2d, to detect whether a sprite is tapped, do we need to do all the calculations ourselves?

For example, suppose we have 10 rectangle sprites, generated with random width, height, position, and z-index. Now the method
-(void) ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
is called. How do we know which sprite was tapped? I know one technique checks whether the tapped point is within the bounds of the sprite's rectangle, but in the case described above, what if rect A sits on top of rect B at its top-left corner? When that top-left corner of rect B is tapped, it could be rect A that was tapped -- the tap point is actually inside both rects.
Do we have to do it manually, and even consider the z-index...? (Possibly looping through all sprites from the highest z-index to the lowest.)
What if the sprite is a triangle, or is rotating? Isn't there a built-in way in Cocos2d that handles that?
(I ask because I went through some Core Graphics sample code a few days ago... it seems that in that case there will be two tap events, one for the main view and one for the sub-view, and we can check which view the user tapped without doing any calculation.)
A possible solution would be a subclass of CCSprite that declares itself a delegate for CCStandardTouchDelegate or CCTargetedTouchDelegate. Then perform the appropriate operations on the sprite in those delegate methods.
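For the manual route the question describes, here is a sketch in plain Swift (not the Cocos2d API; Sprite and its fields are hypothetical stand-ins for CCSprite's properties): walk the sprites from the highest z-index down, transform the tap into each sprite's local rotated space, and return the first sprite that contains it.

import Foundation
import CoreGraphics

struct Sprite {
    var position: CGPoint   // centre of the rectangle
    var size: CGSize
    var rotation: CGFloat   // radians
    var zIndex: Int
}

func topmostSprite(at tap: CGPoint, in sprites: [Sprite]) -> Sprite? {
    // Highest z-index first, so an overlapped sprite never steals the tap.
    for sprite in sprites.sorted(by: { $0.zIndex > $1.zIndex }) {
        // Rotate the tap into the sprite's local, axis-aligned space.
        let angle = Double(-sprite.rotation)
        let dx = Double(tap.x - sprite.position.x)
        let dy = Double(tap.y - sprite.position.y)
        let localX = dx * cos(angle) - dy * sin(angle)
        let localY = dx * sin(angle) + dy * cos(angle)

        if abs(localX) <= Double(sprite.size.width) / 2 && abs(localY) <= Double(sprite.size.height) / 2 {
            return sprite   // first hit wins because we scan top-down
        }
    }
    return nil
}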
