Creating the effect of sprites jumping in and out of objects - iOS

I was trying to achieve the following functionality and wanted to ask for some help:
There are n barrels on the screen.
Randomly, a monster will pop up out of the barrel, and then go back down. I got this working, but I want to make a revision where sometimes instead of going up/down, a monster will jump out of the barrel along an arc, and land somewhere on the ground nearby. After some time, he will jump back and follow the path in reverse back into the barrel.
My original code worked by simply creating a barrel, i.e. addChild(barrel) on the scene, adding a cropping node over it, i.e. addChild(croppingNode), where croppingNode's position is over the barrel, and then adding the monster to the cropping node as a child (croppingNode.addChild(monster)).
This worked perfectly - the monsters come up, and then slide back down and the node hides their lower half so they appear to have gone back inside the barrel.
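For concreteness, a minimal sketch of that setup; the node names, texture names and sizes are placeholders:

import SpriteKit

func addBarrel(to scene: SKScene, at position: CGPoint) {
    let barrel = SKSpriteNode(imageNamed: "barrel")
    barrel.position = position
    scene.addChild(barrel)

    // The crop node sits over the barrel; its mask only reveals a window above the rim.
    let croppingNode = SKCropNode()
    croppingNode.position = position
    let mask = SKSpriteNode(color: .white, size: CGSize(width: 80, height: 120))
    mask.position = CGPoint(x: 0, y: 60)   // visible region above the barrel's rim
    croppingNode.maskNode = mask
    scene.addChild(croppingNode)

    let monster = SKSpriteNode(imageNamed: "monster")
    monster.position = .zero               // starts hidden inside the barrel
    croppingNode.addChild(monster)

    // Pop up, pause, slide back down; the mask hides the lower half on the way down.
    monster.run(SKAction.sequence([SKAction.moveBy(x: 0, y: 60, duration: 0.3),
                                   SKAction.wait(forDuration: 0.8),
                                   SKAction.moveBy(x: 0, y: -60, duration: 0.3)]))
}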
My revision was to have a monster randomly jump out of the barrel at an angle and land on the ground, but the problem is that because the monster is added to the cropping node, it is clipped if I try to move it outside the bounds of the cropping node.
If I add the monster outside of the cropping node, I believe he will no longer be cropped, correct?
What would be a way to achieve something like this? Use a cropping node mask that is larger than the barrel and accounts for his arc onto the ground?
Thanks!
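One possible direction (my assumption, not something confirmed in the question): keep the crop node for the plain up/down pops, and re-parent the monster onto the scene just for the jump, converting its position between coordinate spaces so it does not visibly teleport. A minimal sketch, using the question's own names (croppingNode, monster):

import SpriteKit

func jumpOut(monster: SKSpriteNode, from croppingNode: SKCropNode, in scene: SKScene) {
    // Preserve the monster's on-screen position while changing its parent, so leaving
    // the crop node does not make it visibly jump.
    let scenePosition = croppingNode.convert(monster.position, to: scene)
    monster.removeFromParent()
    monster.position = scenePosition
    scene.addChild(monster)   // no longer clipped by the crop node's mask

    // A crude two-part arc onto the ground; SKAction.follow(_:asOffset:orientToPath:speed:)
    // with a curved CGPath would give a smoother jump.
    monster.run(SKAction.sequence([SKAction.moveBy(x: 60, y: 40, duration: 0.25),
                                   SKAction.moveBy(x: 60, y: -80, duration: 0.25)]))
    // To go back, run the reverse moves, then re-parent the monster into croppingNode
    // the same way (convert the position back, removeFromParent, addChild).
}

So yes, a node that is not a descendant of the crop node is not affected by its mask; the alternative of a larger mask that also covers the arc and the landing spot would work too, at the cost of a bigger mask.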

Related

How to temporarily freeze a node in front of the camera using ARKit, SceneKit in Swift

I built a complete structure as a node (with its child nodes) and the user will walk through it using ARKit.
At some point, if the user cannot continue because of some real obstacle in the real world, I added a "pause" button which should freeze whatever the user currently sees in front of the camera. The user can then move freely to some other open space, and when they release the pause button they will be able to resume where they left off (only someplace else in the real world).
A while ago I asked about it in the Apple Developer forum and an Apple Frameworks Engineer gave the following reply:
For "freezing" the scene, you could transform the anchor's position (in world coordinates) to camera coordinates, and then anchor your content to the camera. This will give you the effect that the scene is "frozen", i.e., does not move relative to the camera.
I'm currently not using an anchor because I don't necessarily need to find a flat surface. Rather, my node is placed at a certain position relative to where we start at (0,0,0).
My question is how do I exactly do what the Apple engineer told me to do?
I have the following code which I'm still stuck with. When I add the node to the camera (pointOfView, last line of the code below), it does freeze in place, but I can't get it to freeze in the same position and orientation as it was before it was frozen.
@IBAction func pauseButtonClicked(_ sender: UIButton) {
    let currentPosition = sceneView.pointOfView?.position
    let currentEulerAngles = sceneView.pointOfView?.eulerAngles
    var internalNodeTraversal = lastNodeRootPosition - currentPosition! // for now, lastNodeRootPosition is (0,0,0)
    internalNodeTraversal.y = lastNodeRootPosition.y + 20 // just so it's positioned a little higher in front of the camera
    myNode?.removeFromParentNode() // remove the node from the real-world view. This line seems to have no effect, and just adding the node as a child of the camera (pointOfView) is enough, but it feels more correct to do it anyway.
    myNode?.position = internalNodeTraversal // the whole node is moved in the opposite direction, from the root to where I'm standing, to reposition the camera at my current position inside the node
    // myNode?.eulerAngles = (currentEulerAngles! * -1) // this put the whole node in weird positions, so I removed it
    myNode?.eulerAngles.y = currentEulerAngles!.y * -1 // opposite orientation of the node so the camera will be oriented in the same direction
    myNode?.eulerAngles.x = 0.3 // just tilting it up a little bit for a better view, more similar to the view before it was locked to the camera
    // I don't think I need to change eulerAngles.z
    myNode!.convertPosition(internalNodeTraversal, to: sceneView.pointOfView) // I'm not sure I wrote this correctly; also, this line doesn't seem to change anything (note: its return value is never used)
    sceneView.pointOfView?.addChildNode(myNode!) // attaching the node to the camera so it stays stuck while the user moves around, until the button is released
}
So I first calculate where in the node I'm currently standing and then I change the position of the node in the opposite direction so that the camera will now be in that position. That seems to be correct.
Now I need to change the orientation of the node so that it will point in the right direction and here things get funky. I've been trying so many things for days now.
I use the eulerAngles for the orientation. If I set the whole vector multiplied by -1, it shows weird orientations. I ended up using only eulerAngles.y, which is the left/right orientation, and I hardcoded the x orientation (up/down).
Ultimately what I have in the code above is the closest that I was able to get. If I'm pointing straight, the freeze will be correct. If I turn just a little bit, the freeze will be pretty close as well. Almost the same as what the user saw before the freeze. But the more I turn, the more the frozen image is off and more slanted. At some point (say I turn 50 or 60 degrees to the side) the whole node is off the camera and cannot be seen.
Somehow I have a feeling that there must be an easier and more correct way to achieve the above.
The Apple engineer wrote to "transform the anchor's position (in world coordinates) to camera coordinates". For that reason I added the "convertPosition" function in my code, but a) I'm not sure I used it correctly and b) it doesn't seem to change anything in my code if I have that line or not.
What am I doing wrong?
Any help would be very much appreciated.
Thanks!
I found the solution!
Actually, the problem I had was caused by something I hadn't even described, because I didn't think it was relevant. I built the AR nodes 2 meters in front of the origin (-2 for the z-coordinate) while the center of my node was still at the origin. So when I changed the rotation or eulerAngles, everything rotated around the origin, so my nodes moved along a large curve and in fact also changed their position as a result.
The solution was to use simdPivot. Instead of changing the position and rotation of the node itself, I created a translation matrix and a rotation matrix at the point of the camera (where the user is standing) and then multiplied both matrices. Now, when I added the node as a child of the camera (pointOfView), this froze the image and in effect showed exactly what the user was seeing before, since the position is the same and the rotation happens exactly around the user's standing position.
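For reference, a hedged Swift sketch of that pivot approach, assuming (as described above) that myNode is a direct child of the root node and its own transform is left untouched; sceneView and myNode are the question's properties, and the exact matrix handling is my assumption rather than the poster's exact code:

import ARKit
import simd

@IBAction func pauseButtonClicked(_ sender: UIButton) {
    guard let pov = sceneView.pointOfView, let node = myNode else { return }

    // Camera pose in world coordinates at the moment of the freeze.
    let camera = pov.simdWorldTransform

    // Translation part of the pose: where the user is standing.
    var translation = matrix_identity_float4x4
    translation.columns.3 = camera.columns.3

    // Rotation part of the pose: where the user is looking.
    var rotation = camera
    rotation.columns.3 = SIMD4<Float>(0, 0, 0, 1)

    // Multiplying both gives the camera pose; using it as the pivot means the rotation
    // happens around the user's standing position, not around the scene origin.
    node.simdPivot = translation * rotation

    node.removeFromParentNode()
    pov.addChildNode(node)   // the content now stays "frozen" in front of the camera
}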

Removing SKNodes When Not Visible On Screen

In my game, the size of the level can be larger than the screen of the phone and the camera will follow the player around the level, so there can be a decent amount of content (such as SKEmitterNodes) in the scene that is not visible at any given time. I've been reading through some of the SpriteKit documentation and found this quote in the SKEmitterNode section:
"Consider removing a particle emitter from the scene when it is not
visible onscreen. Add it just before it becomes visible."
Is this something that can be done in my type of game design? I don't want the nodes to be completely removed since they will eventually be put back on the screen, but is there a good way for me to add/remove the SKEmitterNodes (or other SpriteNodes) that are a certain distance from the screen, and is this a good idea to do? I'm looking to improve my frame rate and don't want costly nodes like SKEmitterNodes working while they're not even being displayed, but will adding/removing them as the player moves around reduce performance?
Here is the idea I currently have: create a rectangle that extends a certain distance around the screen and detect when a node comes into that rectangle, and if it's not already added to the scene, go ahead and add it. Thank you for any suggestions.
SKNodes really aren't a problem, because when they are off screen they are not being rendered anyway, just evaluated. So the main thing to worry about with SKNodes is any physics bodies attached to them.
SKEmitterNodes, however, require some processing power, and that is why Apple recommends not having them emit when they are not on screen. I would just subclass my SKScene class and check only the SKEmitterNodes for whether or not they are in frame, and emit based on that.
So I would throw all your SKEmitterNodes into a container like an array, and have a loop function do a CGRectIntersectsRect check based on your camera location and viewable screen size. If they intersect, add them to the scene; if not, remove them from the scene. The array will keep a strong reference, so you do not have to worry about them being deinitialized on you.
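A hedged sketch of that idea (the class, property names and padding are my assumptions): the array keeps strong references to the emitters, and every frame an expanded rect around the camera is intersected with each emitter's stored frame:

import SpriteKit

class GameScene: SKScene {
    // Each entry keeps the emitter plus the rect it occupies in scene coordinates;
    // the array itself is the strong reference that keeps removed emitters alive.
    var emitters: [(node: SKEmitterNode, frame: CGRect)] = []

    override func update(_ currentTime: TimeInterval) {
        guard let cam = camera else { return }

        // Viewable area around the camera, padded so emitters are added just before
        // they scroll into view, as the documentation quote suggests.
        let padding: CGFloat = 100
        let visible = CGRect(x: cam.position.x - size.width / 2 - padding,
                             y: cam.position.y - size.height / 2 - padding,
                             width: size.width + 2 * padding,
                             height: size.height + 2 * padding)

        for (node, frame) in emitters {
            if visible.intersects(frame) {
                if node.parent == nil { addChild(node) }   // coming on screen: start emitting
            } else if node.parent != nil {
                node.removeFromParent()                    // off screen: stop paying the cost
            }
        }
    }
}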

How to correctly use SKConstraints

I'd like to build a menu of 'tiles' using SpriteKit. For simplicity, the tiles are SKSpriteNodes of all the same size, and are housed in a larger 'container' SKSpriteNode. The container node is a mask node, so only a subset of the menu tiles are shown at a given time. Dragging up or down on a tile should scroll the list of tiles up or down, respectively.
Right now, when a drag is detected over a tile, I just reposition all tiles up or down--easy. There are two other constraints on the problem, though. The top tile should never scroll below the top of the mask/container node. And, the bottom tile should never scroll above the bottom of the mask/container node. Taken together, this keeps the list of tiles from ever being completely dragged to a place where they are hidden/unreachable.
This problem seemed like it would be elegantly solved with either SKConstraints or SKPhysicsJoints. I've tried a lot of combinations, but nothing seems to give me the desired effect.
For instance, I've used an SKPhysicsJointFixed to pin each pair of neighboring tiles together. This has two problems--first, the tiles are 'sluggish' when dragged, so the finger drags more quickly than the tile to the point that it is no longer over the tile, and the drag stops being recognized as on the node. Second, this pin allows the tiles to rotate freely about the anchor point. I added an SKConstraint to restrict the z-rotation of every tile, but now the tiles can barely be dragged at all.
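For reference, a hedged sketch of that joint setup (the tiles array is an assumption, and each tile is assumed to already have a physics body); turning off allowsRotation is one way to stop the free spinning without a separate constraint:

import SpriteKit

func pinNeighbouringTiles(_ tiles: [SKSpriteNode], in scene: SKScene) {
    // Stop the tiles spinning about the joint anchors at the body level.
    tiles.forEach { $0.physicsBody?.allowsRotation = false }

    for (upper, lower) in zip(tiles, tiles.dropFirst()) {
        guard let bodyA = upper.physicsBody, let bodyB = lower.physicsBody else { continue }
        // Anchor each fixed joint midway between the two tiles. The anchor is expected in
        // scene coordinates, so this assumes the tiles' positions are in (or converted to)
        // scene space.
        let anchor = CGPoint(x: (upper.position.x + lower.position.x) / 2,
                             y: (upper.position.y + lower.position.y) / 2)
        let joint = SKPhysicsJointFixed.joint(withBodyA: bodyA, bodyB: bodyB, anchor: anchor)
        scene.physicsWorld.add(joint)
    }
}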
I tried implementing everything with just SKConstraints, so I wouldn't have to fuss with setting masses correctly, etc., in order to make the physics approach feel more usable. I added a constraint on the x position of every tile so that they could only be dragged vertically. Then, I added another with SKConstraint.distance(_:toNode:) on every pair of tiles to keep them separated by the correct vertical distance. The problem with this is, given two tiles, if I add this distance constraint on each, only the last tile to be given the constraint is allowed to move. It moves and the other tile follows this tile correctly. That other tile can't be moved at all, though.
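And a hedged sketch of the constraint-only setup just described (tiles, tileHeight and columnX are assumptions); note that a constraint only ever adjusts the node it is attached to, which fits the one-way behavior described above:

import SpriteKit

func constrainTiles(_ tiles: [SKSpriteNode], tileHeight: CGFloat, columnX: CGFloat) {
    for (index, tile) in tiles.enumerated() {
        var constraints: [SKConstraint] = [
            SKConstraint.positionX(SKRange(constantValue: columnX)),   // vertical-only dragging
            SKConstraint.zRotation(SKRange(constantValue: 0))          // no rotation
        ]
        // Keep each tile a fixed distance from the previous one.
        if index > 0 {
            constraints.append(SKConstraint.distance(SKRange(constantValue: tileHeight),
                                                     to: tiles[index - 1]))
        }
        tile.constraints = constraints
    }
}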
Then there comes the problem of keeping at least some part of the set of tiles 'in bounds', so that they are never dragged completely outside of the mask node and thus left invisible/unreachable. A constraint/joint seemed like it might work here--add some constraint/joint limiting the bottom tile relative to the bottom of the mask node, and similarly for the top tile. But how do I use a constraint/joint that limits how far the bottom tile can move past the bottom of the mask node while still letting it move freely in the other direction, and vice versa for the top tile?
Am I going about this all wrong? Is using constraints/joints not the correct approach? Obviously, I can hand code all of these restrictions, but it seems like constraints/joints would solve this problem so much more elegantly, while also letting me rely on the physics world for some springiness, flicking, etc. If there's a good way to do what I'm trying to do, would someone please provide a suggestion?
Many thanks!

SpriteKit and combining multiple drags for a single rotation

What I'm attempting to recreate is the draggable arrow used in the popular iOS game "Fragger", complete with adjusting both the rotation AND the strength of the pull at the same time based on a finger drag, but using SpriteKit - I believe they did it in Cocos2D.
I'll start by saying that I've honestly spent many weeks relearning trigonometry (http://www.mathsisfun.com/ is a great resource) and combing through Ray Wenderlich's tutorials (http://www.raywenderlich.com/35866/trigonometry-for-game-programming-part-1) but I can't find a solution for executing a "collective" drag rotation AND strength on a SpriteKit object.
Yes, it's relatively easy to set an anchor point and rotate that object (I'm using an arrow) so that it's pointing at your finger and thanks to Ray, I've got a really smooth action utilizing the following:
float angleRadians = atan2f(firstFingerTouchY - _arrow.position.y, firstFingerTouchX - _arrow.position.x);
float angleDegrees = RADIANS_TO_DEGREES(angleRadians);
float rotateDegreesPerSecond = 180 / 0.5; // Would take 0.5 seconds to rotate 180 degrees, or half a circle
float degreesDiff = (angleCurrent - angleDegrees) * 1.0;
float rotateDuration = fabs(degreesDiff / rotateDegreesPerSecond);
however, that's only part of the equation ...
Again, the first drag is fine as the angle of rotation is derived from the fulcrum of the arrow (its anchor point) and wherever my finger ends up. However, let's say that due to the limit of the screen size I can't get the angle I desire in a single drag, so I lift my finger and then place it back down to continue the drag. Well, I can't use the above-mentioned code, because as soon as I place my finger down for the second time, the arrow's rotation jumps over to where my finger now starts - again ending the same way.
So I thought the solution would be to not use the anchor point of the arrow at all, but instead use the initial finger touchpoint (as the new anchor) and calculate the arrow's angle based on that touchpoint and wherever my finger moves to. That works in theory; however, when you then try to factor in the "strength draw" aspect that's represented in "Fragger" (where pulling the finger closer to or farther away from the arrow adjusts the strength of the grenade throw), you're no longer moving your finger towards the arrow to weaken the pull, but instead towards wherever you initially touched - which is not only visually awkward but also creates a lot of rotational jerkiness as your finger gets closer to the origin.
I've been working on it for about a month now (did I mention that I hate math) trying to create a homogenized method (SIN ... COS ... TAN ... I hate you all!) to do the following:
1. The first pull will rotate the arrow based on the direction of the pull but NOT automatically point it at your finger (initial angle).
2. The ability to lift my finger up, place it back down and drag to ADD TO or SUBTRACT FROM the current rotation of the arrow (delta angle).
3. The drag of my finger towards or away from the anchor of the arrow (relative to the current angle of the arrow) will adjust the strength of the "pull" accordingly. For example, if the arrow is already pointing straight up, then continuing the drag straight up would increase the strength and dragging down would decrease it (hypotenuse derived from the arrow's anchor point).
I also need to be able to use another finger (2 finger touch) to assist in the rotation/strength calculation, so that adds even more chaos into my cluster ...
If you've ever played the 'Fragger' game, you'll recognize the complexity of this single-finger action, which is also its functional beauty. I can get pieces of it working - I can rotate it, I can adjust a color mask to indicate strength - but not all of it working together. Perhaps I'm going about it completely wrong; however, every example I find online stops at simply rotating an object to point at your finger, with every new touch drag ending with the same result.
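This is not from the question, but a hedged Swift sketch of the single-finger part, covering items 1 and 2 (accumulate a delta angle across separate drags) and item 3 (strength from the distance to the arrow's anchor); the class, node and asset names are assumptions, and the two-finger case is not handled:

import SpriteKit

class ArrowScene: SKScene {
    let arrow = SKSpriteNode(imageNamed: "arrow")   // placeholder asset
    private var currentRotation: CGFloat = 0        // accumulated arrow rotation
    private var rotationAtDragStart: CGFloat = 0
    private var touchAngleAtDragStart: CGFloat = 0

    override func didMove(to view: SKView) {
        arrow.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(arrow)
    }

    private func angle(to point: CGPoint) -> CGFloat {
        atan2(point.y - arrow.position.y, point.x - arrow.position.x)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        touchAngleAtDragStart = angle(to: point)
        rotationAtDragStart = currentRotation
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Only the change since touch-down is applied, so lifting the finger and dragging
        // again adds to (or subtracts from) the existing rotation instead of snapping.
        currentRotation = rotationAtDragStart + (angle(to: point) - touchAngleAtDragStart)
        arrow.zRotation = currentRotation
        // Strength is measured from the arrow's anchor, not from the first touch point.
        let strength = hypot(point.x - arrow.position.x, point.y - arrow.position.y)
        _ = strength   // map this to throw power / a color mask as needed
    }
}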

Controlling movement of flying a vehicle with SceneKit

I currently have a scene which contains a central node at the root of the scene with earth-like geometry and a node representing a flying vehicle.
However I cannot find the right approach to control the movement of the vehicle. I need the ability to turn left and right while orbiting at a static altitude and speed.
I have tried many combinations of animations and physics body forces all leading to undesirable results.
The closest I've come is:
Setting the pivot property of the vehicle to the centre of the scene
Then setting an Action like below to control moving forward
[_vehicleNode runAction:[SCNAction repeatActionForever:[SCNAction rotateByX:-1 y:0 z:0 duration:10.0]]];
Then finally applying forces for turning left and right with
[_vehicleNode.physicsBody applyTorque:SCNVector4Make(0, 1, 0, 1) impulse:YES];
However I cannot seem to set the pivot and/or position to the right value to get the desired result.
Edit: It appears as if the above method would be the solution I'm looking for; however, for some reason, when I add geometry to the vehicle node, its position in the scene graph changes dramatically. When I add hardcoded buttons to change its position to where it belongs, it appears correct for only that single frame, then goes straight back to being in the middle of nowhere.
Edit 2: After replacing all geometry with a primitive sphere for testing, the node now rotates as intended but is unaffected by physics forces, appearing to ignore its declaration as a dynamicBody.
If I understand what you are trying to achieve correctly, you can try this:
1. Add a first node to your scene, positioned at (0,0,0), and make it rotate forever along the Y axis using an SCNAction.
2. Add your ship node as a child of the first node. Position it at (X,0,0) so that it orbits around the earth.
3. Rotate your ship node along the X axis with an action or an animation.
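A hedged Swift sketch of those three steps (the function name, radius and durations are placeholders; the original question is in Objective-C, but the structure is the same):

import SceneKit

func addOrbitingShip(to scene: SCNScene) -> SCNNode {
    // 1. A pivot node at the earth's centre that rotates forever around the Y axis.
    let orbitPivot = SCNNode()
    scene.rootNode.addChildNode(orbitPivot)
    orbitPivot.runAction(SCNAction.repeatForever(
        SCNAction.rotateBy(x: 0, y: CGFloat.pi * 2, z: 0, duration: 10)))

    // 2. The ship is a child of the pivot, offset along X, so the pivot's spin carries it
    //    around the earth at a constant altitude and speed.
    let ship = SCNNode(geometry: SCNSphere(radius: 0.5))   // stand-in geometry
    ship.position = SCNVector3(10, 0, 0)
    orbitPivot.addChildNode(ship)

    // 3. Turning left/right can then be layered on top, e.g. by rotating the ship (or the
    //    pivot) around the X axis with another action or animation, as suggested above.
    return ship
}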
