SceneKit Calculate Viewable Bounds? - ios

As per the title, I am trying to figure out whether there is a good way to calculate the viewable bounds of a scene, since the usual frame/bounds properties don't really apply in this context.
I basically need a way to check whether an object has moved out of the viewable screen, based on the camera's xFov/yFov/zNear/zFar settings. So far I haven't found a good way to do so. Have I overlooked any API methods here, or does this need to be calculated manually?
I hope I have made sense here; if not, please tell me and I will clarify further.

SCNView conforms to the SCNSceneRenderer protocol, which declares a method called isNodeInsideFrustum:withPointOfView:; that is what you are looking for. According to the documentation, it returns:
YES if the bounding box of the tested node intersects the view frustum defined by the pointOfView node; otherwise, NO.
Using it looks something like this:
BOOL isInside = [sceneView isNodeInsideFrustum:nodeToTest
                               withPointOfView:sceneView.pointOfView];
if (!isInside) {
    // the bounding box of nodeToTest is not in the viewport ...
}
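If you are working in Swift, the same check looks like this (a minimal sketch, assuming sceneView is your SCNView and nodeToTest is the node you want to check):

import SceneKit

if let pov = sceneView.pointOfView,
   !sceneView.isNode(nodeToTest, insideFrustumOf: pov) {
    // the bounding box of nodeToTest is not in the viewport
}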

Related

Swift - Detect the user drawing through defined points/areas.

My app already lets the user draw on the screen. How can I define certain points on the screen and detect when the user draws through them? I have read about a few Swift methods but can't quite grasp whether they are applicable for what I need; I also can't find any "collision" methods.
You can use the contains(_:) method. However, I would recommend using a rectangle rather than a single point; it's very difficult to draw over one exact point. So you could define a CGRect for each area and then call contains(_:) with the point the user touched.
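For example, a minimal sketch (the targetAreas constant and the checkDrawing(through:) helper are hypothetical names; the touch point would come from your existing drawing code):

import UIKit

let targetAreas: [CGRect] = [
    CGRect(x: 100, y: 100, width: 40, height: 40),
    CGRect(x: 200, y: 300, width: 40, height: 40)
]

func checkDrawing(through point: CGPoint) {
    // Report every defined area the user's stroke passes through
    for (index, area) in targetAreas.enumerated() where area.contains(point) {
        print("User drew through area \(index)")
    }
}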

Tracking camera's position and rotation

I have the allowsCameraControl property set to true. I need the camera to tell me what its position and rotation are while I move it around with pinch and pan gestures, so I can later restore the camera to that position. Is there some function that is called on every render pass so I can put a println statement in it? The other option I could think of was to add a didSet observer on the camera's position and rotation properties, but I have no idea how to do that when I'm not the one defining the property in the first place.
Found a way around it using custom buttons (moveLeft, moveRight, rotateLeft, etc.) to move the camera around 3D space and report its current position. Works great. Can't tell if mnuages's suggestion works, but it looks all right.
You can use delegate methods such as SCNSceneRendererDelegate's -renderer:didRenderScene:atTime:, and you can access the "free" camera through the view's pointOfView property.
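In Swift, a minimal sketch of that delegate approach (assuming you set sceneView.delegate to an instance of this class):

import SceneKit

class CameraLogger: NSObject, SCNSceneRendererDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
        // pointOfView is the node driving the user-controlled camera
        guard let pov = renderer.pointOfView else { return }
        print("position: \(pov.position), rotation: \(pov.rotation)")
    }
}

Keep in mind this fires once per rendered frame, so you may want to throttle the logging.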

How to change center of gravity in sprite kit game?

I have been trying to find this, but did not succeed. Is there a way to change the physics world's gravity property in such a way that objects would not be attracted towards the bottom of the screen, but towards the centre of it instead?
Use an SKFieldNode for that. Just place it at the center of your scene and disable the default gravity by setting the physics world's gravity to zero. In Objective-C that would be:
self.physicsWorld.gravity = CGVectorMake(0, 0);
EDIT:
The asker requested an example of SKFieldNode in Swift, so here it is.
For the question asked, you would create a radial gravity field node at the center. This code goes in GameScene.swift (or the .m file if you're using Objective-C, in which case make sure you adapt the syntax).
let gravField = SKFieldNode.radialGravityField() // Create the gravity field
gravField.position = CGPoint(x: size.width / 2, y: size.height / 2) // Center it on the screen
addChild(gravField) // Add it to the scene
You are probably looking for an SKFieldNode. There are a couple of different field types, so you will have to read the docs; the one you probably want here is radialGravityField.
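Putting both answers together, here is a minimal sketch in Swift (assuming a standard SpriteKit GameScene subclass):

import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        // Turn off the default downward gravity
        physicsWorld.gravity = CGVector(dx: 0, dy: 0)
        // Pull physics bodies towards the center of the scene instead
        let gravField = SKFieldNode.radialGravityField()
        gravField.position = CGPoint(x: size.width / 2, y: size.height / 2)
        addChild(gravField)
    }
}

Bodies are only affected by the field if their fieldBitMask overlaps the field's categoryBitMask (both default to all bits set).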

Advanced custom control features in Swift

I'm working on building a custom control. Basically I want to allow the application to generate rectangles (positioned at x = 0 with a variable y value that increases as each rectangle is added).
I'd like them to respond to gestures, with two positions: closed, in which the rectangle is mostly hidden, and open, in which it is fully expanded so that the entire rectangle is visible but still tethered to the side.
I've already designed an application with this in mind. Seeing as the rectangles will be generated by the users, I assume Core Graphics would be best for the job. Also, I want the rectangles to display different information based on their gesture-related position.
Is it possible to combine Core Graphics with these types of controls? I know this is asking a lot.
It's just that I'm having trouble determining how to combine each component in code.
Any advice would be greatly appreciated. Thanks!
Clearly, we're not here to write code for you, but a few thoughts:
You say that you assume Core Graphics would be best for the job. You definitely could use it, but you could also use CAShapeLayer.
So you might create a gesture recognizer whose handler:
Creates a CAShapeLayer when the gesture's state is UIGestureRecognizerStateBegan and adds it as a sublayer of the view's layer.
Replaces that shape layer's path property with the CGPath of a UIBezierPath, built from the updated location that the handler captures when the gesture's state is UIGestureRecognizerStateChanged.
I'd suggest you take a crack at that (googling "CAShapeLayer tutorial" or "UIPanGestureRecognizer example" or what have you, if any of these concepts are unfamiliar).
If you really want to use Core Graphics instead, you would have a custom UIView subclass whose drawRect: draws all of the rectangles. Conceptually it's very similar to the above, but you also have to write your own rectangle-drawing code in drawRect:, rather than letting CAShapeLayer do that for you.
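To make the CAShapeLayer approach concrete, here is a minimal sketch in Swift (the view class, the fixed 44-point height, and the path logic are all hypothetical; adapt them to your open/closed positions):

import UIKit

class RectangleCanvasView: UIView {
    private var activeLayer: CAShapeLayer?
    private var startPoint: CGPoint = .zero

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        let point = gesture.location(in: self)
        switch gesture.state {
        case .began:
            // Create a shape layer and add it as a sublayer of the view's layer
            let shape = CAShapeLayer()
            shape.fillColor = UIColor.blue.cgColor
            layer.addSublayer(shape)
            activeLayer = shape
            startPoint = point
        case .changed:
            // Replace the layer's path as the finger moves
            let rect = CGRect(x: 0, y: startPoint.y, width: point.x, height: 44)
            activeLayer?.path = UIBezierPath(rect: rect).cgPath
        default:
            activeLayer = nil
        }
    }
}

You would attach the recognizer yourself, e.g. addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))) in the view's initializer.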

Simple algorithm for annotations request from server on map

What is a good enough method of knowing when to go out to the server and request annotations?
i.e. knowing when the area on the screen has not yet been exposed by the user?
If I have LAT1,LON1 and LAT2,LON2 specifying the screen boundaries, or maybe the screen's center as LAT,LON, how can I know that the area the user has moved to has never been exposed, or that even just a part of it hasn't?
Strangely, I can't find any ideas online. Any methods would be welcome!
Thanks!
Store the set of MKMapRect areas that the map has shown, perhaps in an NSMutableSet (wrapping each rect in an NSValue, since MKMapRect is a struct); you then have all the areas that were visible previously. (Combine rects when it makes sense, to keep the set a reasonable size.)
When you get a new visibleMapRect (after a scroll or zoom of the map view; the delegate is informed of this), check whether the new rect lies entirely within an old one, merely intersects one, or is not inside any stored rect at all.
An MKMapRect can be treated almost like a CGRect :)
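For example, a minimal sketch in Swift (assuming this object is the map view's delegate; it uses a plain array instead of an NSMutableSet for simplicity, and the server call is left as a placeholder):

import MapKit

class AnnotationLoader: NSObject, MKMapViewDelegate {
    private var seenRects: [MKMapRect] = []

    func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
        let visible = mapView.visibleMapRect
        // If a stored rect fully contains the visible rect, the area was already exposed
        let alreadySeen = seenRects.contains { $0.contains(visible) }
        if !alreadySeen {
            seenRects.append(visible)
            // request annotations from the server for `visible` here
        }
    }
}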
