Move ("wibble") a UIView attached to a spring, using the accelerometer

Say you have a simple square UIView sitting in the view of a view controller:
@IBOutlet var redSquare: UIView!
Imagine it can move up/down and left/right slightly. To be clear, it stays in the same, totally normal plane; it remains an ordinary "square of red" on the glass screen and does not even rotate.
In iOS, (1) you can access the motion manager along these lines ...
let mm: CMMotionManager = CMMotionManager()
mm.startAccelerometerUpdates()
and you can (2) make an object "be attached to a spring" along the lines...
sumthin.physicsBody?.isDynamic = true
sumthin.physicsBody?.affectedByGravity = false
sumthin.physicsBody?.mass = 1
finally, (3) you can apply a force to an object based on the accelerometer with something like ...
if let f = mm.accelerometerData {
    sumthin.physicsBody?.applyForce(
        CGVector(dx: CGFloat(f.acceleration.x), dy: CGFloat(f.acceleration.y)))
}
GOAL: As you shake the phone around ...
the red square will simply "wibble" up/down and left/right a bit (perhaps 10 points or so, just to be clear); when you stop moving your hand it will settle back to its original position as if on a spring.
How do you do this, with an ordinary UIView?
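One way to get this behaviour with a plain UIView is UIKit Dynamics driven by Core Motion. This is only a minimal sketch, not a definitive implementation; the class name, the multiplier, and the spring values are illustrative:
import UIKit
import CoreMotion

class WibbleViewController: UIViewController {
    @IBOutlet var redSquare: UIView!

    let motionManager = CMMotionManager()
    var animator: UIDynamicAnimator!
    var push: UIPushBehavior!

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        animator = UIDynamicAnimator(referenceView: view)

        // (2) "attach to a spring": anchor the square to its current centre.
        let spring = UIAttachmentBehavior(item: redSquare, attachedToAnchor: redSquare.center)
        spring.frequency = 3.0   // spring stiffness
        spring.damping = 0.5     // how quickly it settles back
        animator.addBehavior(spring)

        // (3) a continuous push whose direction is updated from the accelerometer.
        push = UIPushBehavior(items: [redSquare], mode: .continuous)
        animator.addBehavior(push)

        // (1) feed accelerometer values into the push behaviour.
        motionManager.accelerometerUpdateInterval = 1.0 / 60.0
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let a = data?.acceleration else { return }
            // Scale the raw acceleration into a push; tune the multiplier until the
            // wibble is roughly 10 points. UIKit's y axis grows downward, hence the sign flip.
            self?.push.pushDirection = CGVector(dx: CGFloat(a.x) * 5.0, dy: CGFloat(-a.y) * 5.0)
        }
    }
}
The attachment behaviour supplies the spring-back to the original position, and the push behaviour supplies the shake-driven displacement.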

Related

Swift SpriteKit vertical infinite background image

I have 3 images:
topBg.png
midBg.png
botBg.png
I want topBg.png at the top of the scene with height = 200, midBg.png to stretch or repeat vertically to fill the remaining space, and botBg.png at the bottom with height = 200.
I have the following code:
override func didMove(to view: SKView) {
    self.bgTopSpriteNode = self.childNode(withName: "//bgTopNode") as? SKSpriteNode
    self.bgMiddleSpriteNode = self.childNode(withName: "//bgMiddleNode") as? SKSpriteNode
    self.bgBottomSpriteNode = self.childNode(withName: "//bgBottomNode") as? SKSpriteNode

    if let bgTopSpriteNode = self.bgTopSpriteNode,
       let bgMiddleSpriteNode = self.bgMiddleSpriteNode,
       let bgBottomSpriteNode = self.bgBottomSpriteNode {

        bgTopSpriteNode.size.width = self.frame.width
        bgTopSpriteNode.size.height = 200
        bgTopSpriteNode.position.x = 0

        bgMiddleSpriteNode.size.width = self.frame.width
        bgMiddleSpriteNode.size.height = self.frame.height - 400
        bgMiddleSpriteNode.position.x = 0

        bgBottomSpriteNode.size.width = self.frame.width
        bgBottomSpriteNode.size.height = 200
        bgBottomSpriteNode.position.x = 0
    }
}
But how do I set the Y position of the images? The coordinates begin at the center of the screen, not the top left, and I don't know how to convert them.
There are a couple of different ways to achieve what you're looking to do.
First, you can compute the y positions of the top and bottom of the screen simply as ±size.height / 2, if the anchorPoint of your scene is at (0.5, 0.5). (Don't use frame; use size. That way, you take into account the scaleMode of the scene.)
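For example, here is a sketch of that first approach using the sprites from the question, dropped into the existing if let block in didMove(to:) and assuming the default (0.5, 0.5) scene anchorPoint:
let halfHeight = size.height / 2

// Top strip is 200 pt tall, so its centre sits 100 pt below the top edge.
bgTopSpriteNode.position = CGPoint(x: 0, y: halfHeight - 100)

// Bottom strip: its centre sits 100 pt above the bottom edge.
bgBottomSpriteNode.position = CGPoint(x: 0, y: -halfHeight + 100)

// Middle strip fills whatever is left, centred on the scene.
bgMiddleSpriteNode.size.height = size.height - 400
bgMiddleSpriteNode.position = CGPoint(x: 0, y: 0)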
It sounds like you are frustrated that the origin of the scene is in the center. If you'd like to move it to the corner, you can easily do so by setting the scene's anchorPoint property, say, to (0.0, 0.0) for the lower left corner. Then, your y-values are 0 and size.height. If you are using the .sks editor, this is exposed in the interface - you can just set it there. Otherwise, you can set it programmatically.
Finally, you can set the scaleMode of your scene to something like .aspectFill, set the size of the scene directly (say, to 1024x768 for an iPad), and just place the images wherever they need to go. This approach works particularly well with .sks files, if you are using them; when you load up a scene, you can set the size of the scene based on the aspect ratio of the view it's in to accommodate different aspect ratios. For instance, you could adopt a 320x480 "reference size" for your iPhone scenes. Whenever you load up the scene, you could set the size of the scene to be 320 points wide and however many points tall to match the aspect ratio of the device. Then, all your graphics would be produced at 320pt wide, and you could slide them up or down proportionally across the scene's size for layout. This is a little more complicated, but it's a lot easier than trying to deal with separate layout considerations for multiple devices.
I should also point out a couple of things.
You can use the anchorPoint property of a sprite to dictate where the sprite's coordinates are measured from. This is handy for cases where you want images to be flush up against something. For instance, if you want an image flush against the left side of the screen, set its position to be exactly the left side of the screen, and then set its anchorPoint.x to 0.0; this will put the left edge of the sprite against the left edge of the screen. This also works for scenes, as you encountered - moving the anchorPoint of the scene moves everything in the scene relative to its size.
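A quick sketch of that flush-left idea, assuming a centre-anchored scene and an illustrative sprite named edgeSprite:
edgeSprite.anchorPoint = CGPoint(x: 0.0, y: 0.5)        // measure position from the sprite's left edge
edgeSprite.position = CGPoint(x: -size.width / 2, y: 0) // the scene's left edge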
You don't need three images for what you're describing. You can use a single sprite and just set its centerRect property to tell it to use the top and bottom of an image and stretch the center part vertically. You have to do a little math to set the right xScale and yScale (not width and height, IIRC), but then you can draw all of that with one sprite instead of three. This would be really handy in your case, because you could just leave the sprite at (0,0), set its scale to match the size of the entire scene, and set the centerRect property - you wouldn't have to do any positioning math at all.
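A rough sketch of the centerRect approach, assuming a single combined texture (bgTexture is an illustrative name) whose top and bottom 200 pt should stay unstretched:
let bg = SKSpriteNode(texture: bgTexture)
let texSize = bgTexture.size()

// Keep the 200 pt caps at the top and bottom fixed; stretch only the middle band.
let cap = 200 / texSize.height
bg.centerRect = CGRect(x: 0, y: cap, width: 1.0, height: 1.0 - 2 * cap)

// centerRect works with scale, not size, so scale the sprite to cover the scene.
bg.xScale = size.width / texSize.width
bg.yScale = size.height / texSize.height
bg.position = .zero   // scene centre with the default (0.5, 0.5) anchorPoint
addChild(bg)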

How to create a User Generated Animation in SceneKit with .dae (COLLADA) file from Blender

I’m trying to animate specific parts of a SCNScene object in SceneKit (in my case I want to animate fingers of a hand). I import the .dae (COLLADA) file easily from Blender with the respective bones to generate articulation on the model.
override func viewDidLoad() {
    super.viewDidLoad()
    var scene = SCNScene(named: "hand.dae")!
    sceneView.scene = scene
    sceneView.allowsCameraControl = true
    sceneView.autoenablesDefaultLighting = true
    sceneView.backgroundColor = UIColor.lightGrayColor()
}
My goal is to animate those bones on iOS with user generated values between 0 and 1. Imagine a UISlider where you scroll back and forth and see the specific finger move depending on the value of the slider.
Here is a screenshot of the intended animation:
I've tried to animate the model by loading an animation file, as in Apple's Fox example:
private var indexFingerAnimation: CAAnimation!

indexFingerAnimation = CAAnimation.animationWithSceneNamed("move_index_finger.dae")
indexFingerAnimation.removedOnCompletion = false
indexFingerAnimation.fadeInDuration = 0.3
indexFingerAnimation.fadeOutDuration = 0.3
indexFingerAnimation.repeatCount = Float.infinity
The problem is that this is a global animation of the whole hand rather than just the index finger. Besides, it's always a 'pre-defined' animation instead of an animation controlled by user input. Ultimately I want to mix animations (e.g. move the index finger and thumb at the same time to form gestures).
Is this possible? I’m struggling because I can’t figure out how to manipulate specific parts of the mesh. I’m starting to study MetalKit but it’s not clear to me that’s the solution.
Any help would be really appreciated.
I have never tried two animations at the same time, but I can rotate an SCNNode in a .dae file with two or more animations. You must set the pivot point and group them together.
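If the goal is just to drive one finger from a slider, another avenue (not from this answer, and the bone name below is only an assumption about the Blender export) is to grab the bone's SCNNode by name and set its rotation directly. A sketch, meant to slot into the question's view controller:
// Somewhere in the view controller, after loading hand.dae:
var indexFingerBone: SCNNode?

override func viewDidLoad() {
    super.viewDidLoad()
    let scene = SCNScene(named: "hand.dae")!
    sceneView.scene = scene
    // Bones exported from Blender appear as ordinary SCNNodes in the node tree.
    indexFingerBone = scene.rootNode.childNode(withName: "index_finger_1", recursively: true)
}

// UISlider configured with a 0...1 range.
@IBAction func sliderChanged(_ sender: UISlider) {
    // Map 0...1 to a curl of 0...90 degrees around the bone's local x axis.
    indexFingerBone?.eulerAngles.x = Float(sender.value) * .pi / 2
}
Driving several such bone nodes at once (index finger plus thumb, say) is then just a matter of setting several rotations, which also gives you the mixing the question asks about.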

Swift: Make control buttons not move with camera

I'm building a platform game, and I made the camera follow the player when he walks:
let cam = SKCameraNode()

override func didMoveToView(view: SKView) {
    self.camera = cam
    ...
}

override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    cam.position = Player.player.position
    ...
}
But, when the camera moves, the control buttons move as well
What should I do to keep the control buttons static?
See this note in the SKCameraNode docs:
A camera’s descendants are always rendered relative to the camera node’s origin and without applying the camera’s scaling or rotation to them. For example, if your game wants to display scores or other data floating above the gameplay, the nodes that render these elements should be descendants of the current camera node.
If you want HUD elements that stay fixed relative to the screen even as the camera moves/scales/rotates, make them child nodes of the camera.
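In code that might look like this (a minimal sketch inside didMoveToView; the button node and asset name are illustrative):
let cam = SKCameraNode()
camera = cam
addChild(cam)   // the camera node must itself be part of the scene's node tree

let jumpButton = SKSpriteNode(imageNamed: "jumpButton")
// The position is relative to the camera's origin, i.e. the centre of the screen.
jumpButton.position = CGPoint(x: 0, y: -size.height / 2 + 80)
cam.addChild(jumpButton)   // stays put on screen while the camera follows the player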
By the way, you don't need to change the camera's position on every update(). Instead, just constrain the camera's position to match that of the player:
let constraint = SKConstraint.distance(SKRange(constantValue: 0), toNode: player)
camera.constraints = [ constraint ]
Then, SpriteKit will automatically keep the camera centered on the player without any per-frame work from you. You can even add more than one constraint — say, to follow the player but keep the camera from getting too close to the edge of the world (and showing empty space).
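For instance, here is a sketch of that idea using Swift 4-style names; worldFrame is an assumed CGRect describing the playable area:
let follow = SKConstraint.distance(SKRange(constantValue: 0), to: player)

// Keep the camera's centre far enough inside the world that the view
// never shows empty space beyond the edges.
let xRange = SKRange(lowerLimit: worldFrame.minX + size.width / 2,
                     upperLimit: worldFrame.maxX - size.width / 2)
let yRange = SKRange(lowerLimit: worldFrame.minY + size.height / 2,
                     upperLimit: worldFrame.maxY - size.height / 2)
let stayInsideWorld = SKConstraint.positionX(xRange, y: yRange)

// Constraints are applied in order, so the edge limit is enforced after the follow.
cam.constraints = [follow, stayInsideWorld]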
Add the buttons as children of the camera, e.g. cam.addChild(yourButton).
Building on rickster's answer, I made these constraints so the camera only moves horizontally, even when the player jumps. The order in which they are added is important. In case somebody else finds them useful:
Swift 4.2
let camera = SKCameraNode()
scene.addChild(camera)
camera.constraints = [SKConstraint.distance(SKRange(upperLimit: 200), to: player),
                      SKConstraint.positionY(SKRange(constantValue: 0))]

SpriteKit: What's up with the coordinate system?

I'm teaching myself how to do SpriteKit programming by coding up a simple game that requires that I lay out a square "game field" on the left side of a landscape-oriented scene. I'm just using the stock 1024x768 view you get when creating a new SpriteKit "Game" project in Xcode - nothing fancy. When I set up the game field in didMoveToView(), however, I'm finding the coordinate system to be a little weird. First of all, I expected I would have to place the board at (0, 0) for it to appear in the lower-left. Not so -- it turns out the game board has to be bumped up about 96 pixels in the y direction to work. So I end up with this weird code:
let gameFieldOrigin = CGPoint(x:0, y:96) // ???
let gameFieldSize = CGSize(width:560, height: 560)
let gameField = CGRect(origin: gameFieldOrigin, size: gameFieldSize)
gameBorder = SKShapeNode(rect: gameField)
gameBorder.strokeColor = UIColor.redColor()
gameBorder.lineWidth = 0.1
self.addChild(gameBorder) // "self" is the SKScene subclass GameScene
Furthermore, when I add a child to it (a ball that bounces inside the field), I assumed I would just use relative coordinates to place it in the center. However, I ended up having to use "absolute" coordinates, and I had to offset the y-coordinate by 96 again.
Another thing I noticed is that when I called touch.locationInNode(gameBorder), the coordinates were again not relative to the border, and started at (0, 96) at the bottom of the border instead of (0, 0) as I would have guessed.
So what am I missing here? Am I misunderstanding something fundamental about how coordinates work?
[PS: I wanted to add the tag "SpriteKit" to this question, but I don't have enough rep. :/]
You want to use the whole screen as your coordinate system, but you're actually placing everything on a scene loaded from GameScene.sks. The right way to do it is to modify one line in GameViewController.swift so your scene size matches the screen size. Initialize the scene like this instead of unarchiving it from the .sks file:
let scene = GameScene(size: view.bounds.size)
Don't forget to remove the if-statement as well, because we don't need it any more. This way, (0, 0) is at the lower-left corner.
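For reference, viewDidLoad in the stock template might end up looking roughly like this (a sketch using the same Swift 2-era API as the question):
override func viewDidLoad() {
    super.viewDidLoad()
    // Create the scene directly instead of unarchiving GameScene.sks, so the
    // scene size matches the view and (0, 0) is the lower-left corner.
    let scene = GameScene(size: view.bounds.size)
    scene.scaleMode = .ResizeFill   // keep the scene matched to the view

    let skView = self.view as! SKView
    skView.ignoresSiblingOrder = true
    skView.presentScene(scene)
}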
To put something, e.g. aNode, in the center of the scene, you can set its position like:
aNode.position = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame));

SKEffectNode - CIFilter Blur Size Limit - Big Black Box

I am trying to blur multiple SKNode objects. I do this by having a parent SKEffectNode with a CIFilter set to @"CIGaussianBlur". Like so:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    blurNode.shouldRasterize = YES;
    [blurNode setShouldEnableEffects:NO];
    [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
                                   keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
This works fine for a bunch of nodes currently onscreen. But when I space these nodes far away from each other (about 3000 pixels), the blurring no longer happens and I get a big black box. This happens regardless of whether the SKNodes I'm blurring are SKShapeNodes or SKSpriteNodes. Here's a sample project with this issue: Sample Project. (By the way, thanks to BobMoff for the initial version found here):
Here's happy blur (when nodes are less than 3000 pixels away from each other):
Sad blur (when nodes are more than 3000 pixels away from each other):
UPDATE
This behavior occurs whenever an SKEffectNode is the parent. It doesn't matter if it's enabling effects, blurring, etc. If the parent node is an SKNode, it's fine. i.e. Even if the parent blur node is created like it is below, you will get the blackness:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    // blurNode.shouldRasterize = YES;
    // [blurNode setShouldEnableEffects:NO];
    // [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
    //                                keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
I had a similar problem, with a very wide, panning scene that I wanted to blur.
To get the blur effect to work, I removed any nodes that were sticking out too far past the edges of the scene:
// Property declarations, elsewhere in the class:
var blurNode: SKEffectNode
var mainScene: SKScene
var exParents: [SKNode : SKNode] = [:]

/**
 * Remove outlying nodes from the scene and activate the SKEffectNode
 */
func blurScene() {
    let FILTER_MARGIN: CGFloat = 100
    let widthMax: CGFloat = mainScene.size.width + FILTER_MARGIN
    let heightMax: CGFloat = mainScene.size.height + FILTER_MARGIN

    // Recursively iterate through all blurNode's children
    blurNode.enumerateChildNodesWithName(".//*", usingBlock: {
        [unowned self]
        node, stop in

        if node.parent != nil && node.scene != nil { // Ignore nodes we already removed
            if let sprite = node as? SKSpriteNode {
                // Calculate sprite node position in scene coordinates
                let sceneOrig = sprite.scene!.convertPoint(sprite.position, fromNode: sprite.parent!)

                // Find left, right, bottom and top edges of sprite
                let l = sceneOrig.x - sprite.size.width*sprite.anchorPoint.x
                let r = l + sprite.size.width
                let b = sceneOrig.y - sprite.size.height*sprite.anchorPoint.y
                let t = b + sprite.size.height

                if l < -FILTER_MARGIN || r > widthMax || b < -FILTER_MARGIN || t > heightMax {
                    self.exParents[sprite] = sprite.parent!
                    sprite.removeFromParent()
                }
            }
        }
    })
    blurNode.shouldEnableEffects = true
}

/**
 * Disable blur and reparent nodes we removed earlier
 */
func removeBlur() {
    self.blurNode.shouldEnableEffects = false
    for (kid, parent) in exParents {
        parent.addChild(kid)
    }
    exParents = [:]
}
NOTES:
This does remove content from your effect node, so extremely wide nodes won't show up in the final result:
You can see the mountain highlighted in red stuck out too far and was removed from the resulting blur.
This code only considers SKSpriteNodes. Empty SKNodes don't seem to break the effect node, but if you're using other visible nodes like SKShapeNodes or SKLabelNodes, you'll have to modify this code to include them.
If you have ignoreSiblingOrder = false, this code might mess up your z-ordering since you can't guarantee what order the nodes are added back to the scene.
Stuff I tried that didn't work
Simply saying node.hidden = true instead of using removeFromParent() doesn't work. That would be WAY too easy ;)
Using an SKCropNode to crop out outlying content didn't work for me. I tried having the SKEffectNode parent the SKCropNode and the other way around, but the black square appeared no matter how small I made the cropped area. This might still be worth looking into if you're desperate for a cleaner solution.
As noted here, SKScenes are secretly SKEffectNodes and you can set their filter just like our blurNode above. SKScenes don't show a black screen when their content is too big. Unfortunately, they seem to just silently disable the filter instead. Again, I might have missed something, so you could explore this option further if you're trying to apply an effect across the entire scene.
Alternate Solutions
You can capture an image of the whole screen and apply a filter to that, as suggested here. I ended up going with an even simpler solution; I took a generic screenshot of the stuff I wanted to blur, then applied a very heavy blur so you can't see the precise details. I used that as the blurred background and you can hardly tell it's not the real thing ;) This also saves a healthy chunk of memory and avoids a small UI hiccup.
Musings
This is a pretty nasty bug, and I hope Apple comes up with a solution soon. You can click this cute picture of a camera to get a GPU trace and some insight on what's happening:
The device seems to be discarding the framebuffer for the effect node because it takes up too much memory. This is affirmed by the fact that when there's more memory pressure on the device, it's easier to get the 'black square' on smaller content in the SKEffectNode.
I used a method that worked for my game but it requires the blurred area to be static without movement.
On iOS 10, using Swift 3, I used SKSpriteNode, SKView, SKEffectNode, and CIFilter. I created a sprite from a texture returned by the SKView method texture(from:), passing the current scene as the parameter because it inherits from SKNode. So essentially I was taking a "screenshot" of the scene and creating a sprite from it. I then put it in an SKEffectNode with a blur filter (setting shouldRasterize to true for better performance, as I only needed to blur once). Finally, I added the new node to the scene. From there you could add sprites to the scene and place them above the blurred node.
let blurFilter = CIFilter(name: "CIGaussianBlur")!
let blurAmount = 15.0
blurFilter.setValue(blurAmount, forKey: kCIInputRadiusKey)
let blurEffect = SKEffectNode()
blurEffect.shouldRasterize = true
let screenshotNode = SKSpriteNode(texture: gameScene.view!.texture(from: gameScene))
blurEffect.addChild(screenshotNode)
blurEffect.filter = blurFilter
gameScene.addChild(blurEffect)
Possible workaround for the bug:
Use a camera, zoom WAY out so you can see most of your background, and take a screenshot-style rendering of it. Crop it to your needs, then blur it, then rasterise it.
Then scale this image back up, slice it up if need be, and place it accordingly.
SKEffectNode renders into a texture. In most iOS systems the maximum size for a texture is 2048x2048. If an SKEffectNode is trying to render content larger than that, it will just use a 2048x2048 texture and anything outside of it will just not appear in the texture. It won't give you any error or warning about this happening; it simply does it silently.
And no, there is no way to tell SKEffectNode to use a texture of a specific size and pan and clamp the content into it. It always uses a texture that will cover all the child nodes, and if that texture would be too large, it just silently uses the 2048x2048 texture.
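Given that limit, one pragmatic guard (a hypothetical sketch; there is no official API for querying the limit) is to measure the effect node's accumulated content before enabling the effect and fall back to one of the workarounds above when it is too big:
let maxTextureSide: CGFloat = 2048
let contentFrame = blurNode.calculateAccumulatedFrame()

if contentFrame.width <= maxTextureSide && contentFrame.height <= maxTextureSide {
    blurNode.shouldEnableEffects = true
} else {
    // Content won't fit in a single texture; skip the live blur
    // (or use the screenshot/cropping approaches described above).
    blurNode.shouldEnableEffects = false
}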
