I'm hoping to overlay a UIView (specifically a highlight box with some text, etc.) over an object rendered in SceneKit, but I'm encountering an issue: I don't know exactly where the object will be onscreen at the time.
Is there a way to get the CGRect frame of a SCNNode's current position within the SCNView? Even just a center point for the node would be helpful, but ideally it would give the whole frame, indicating how much vertical and horizontal space the geometry was taking up onscreen as well.
I've searched in the documentation and online for various references to "frames" and "bounds" relative to an SCNNode, but all I'm finding is stuff about the coordinate system within the scene.
Is there no way of translating a SCNNode's position in a scene into the frame coordinates of the view, or the app window?
Check the post about calculating the projected size of an object.
Performance-wise, it is much better to use SpriteKit within SceneKit if you want to overlay 2D content. Check the overlaySKScene property of SCNSceneRenderer.
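For the projected frame itself, here is a minimal sketch (screenRect is an illustrative helper name, not a SceneKit API; it assumes an iOS SCNView where SCNVector3 components are Float): project the eight corners of the node's bounding box into view coordinates with projectPoint and take the enclosing CGRect.

import SceneKit

// Sketch: project the corners of a node's bounding box into view
// coordinates and return the enclosing CGRect.
func screenRect(for node: SCNNode, in sceneView: SCNView) -> CGRect {
    let (minB, maxB) = node.boundingBox
    let corners = [
        SCNVector3(minB.x, minB.y, minB.z), SCNVector3(maxB.x, minB.y, minB.z),
        SCNVector3(minB.x, maxB.y, minB.z), SCNVector3(maxB.x, maxB.y, minB.z),
        SCNVector3(minB.x, minB.y, maxB.z), SCNVector3(maxB.x, minB.y, maxB.z),
        SCNVector3(minB.x, maxB.y, maxB.z), SCNVector3(maxB.x, maxB.y, maxB.z),
    ]
    let points = corners.map { corner -> CGPoint in
        // Node-local -> world space, then world -> view coordinates.
        let world = node.convertPosition(corner, to: nil)
        let projected = sceneView.projectPoint(world)
        return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
    }
    let xs = points.map { $0.x }
    let ys = points.map { $0.y }
    return CGRect(x: xs.min()!, y: ys.min()!,
                  width: xs.max()! - xs.min()!,
                  height: ys.max()! - ys.min()!)
}

Note that this is the projection of the axis-aligned bounding box, so the rect can be somewhat larger than the geometry itself, but it is usually good enough for placing a highlight overlay.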
MKAnnotationView's documentation says:
Managing Collisions Between Annotation Views
var collisionMode: MKAnnotationView.CollisionMode
The collision mode to use when interpreting the collision frame rectangle.
enum MKAnnotationView.CollisionMode
Constants indicating how to interpret the collision frame rectangle of an annotation view.
I'd like to debug some collision behaviour that I don't understand.
So how do I get the collision frame rectangle that is referenced in the MapKit documentation? I'll probably try to draw this rectangle for visual debugging.
How do I set the collision frame rectangle? Maybe not directly, but which of the many involved views determines this rectangle?
This is the only reference to this term that I found in MapKit.
Edit
Is this collision frame rectangle only used to make clusters or is it also used to hide the cluster with a lower display priority?
I have two AnnotationViews visually drawn on top of each other. One has displayPriority = .required, one has displayPriority = .defaultHigh. One should disappear. But where are their collision frame rectangles? Do they really overlap?
I found an explanation here. It says:
collisionMode: An MKAnnotationView.CollisionMode. Two annotation views with the same clusteringIdentifier will be replaced by a cluster annotation if the map is zoomed out so far that they collide.
But what constitutes a collision between two annotation views? To know that, we need a collision edge. It might be:
.rectangle: The edge is the view’s frame.
.circle: The edge is the largest circle inscribable in and centered within the view’s frame.
EDIT:
The docs say: "The most efficient way to provide the content for an annotation view is to set its image property. The annotation view sizes itself automatically to the image you specify and draws that image for its contents." Additionally, there are other properties that may influence the frame property. So it is this automatically adjusted frame that determines the collision frame.
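MapKit does not expose the collision frame rectangle directly, but since the docs tie the .rectangle collision mode to the view's frame, one crude way to debug it visually is to outline each annotation view's frame. A sketch, using the standard MKMapViewDelegate callback (this is only a visualization aid, not an official collision API):

import MapKit

// Debug sketch: outline each annotation view's frame after it is added.
// With collisionMode == .rectangle, the docs say this frame is what is
// used for collisions; MapKit never hands you the rectangle itself.
func mapView(_ mapView: MKMapView, didAdd views: [MKAnnotationView]) {
    for view in views {
        view.layer.borderColor = UIColor.red.cgColor
        view.layer.borderWidth = 1
    }
}

For .circle you would instead visualize the largest circle inscribed in that frame, e.g. by setting a corner radius of half the shorter side.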
I am looking to show some helper text on the GLSurfaceView, but I want to show it only when the object is visible in the camera's frame, not before. How can I detect whether the 3D object is visible in the camera's frame or not?
Rather than having to check every time what objects are in the camera frame, it might be easier, depending on your particular application, to simply attach the helper text below or above the object you want it to apply to, on the same anchor.
This would also have the advantage that the text would be centred only when the object is centred - i.e. you would not have the text suddenly fully appear when just a corner of the object is in the camera view.
ViewRenderables allow you to render 'a 2D Android view in 3D space by attaching it to a Node with setRenderable(Renderable)'.
(https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/rendering/ViewRenderable)
I have an iPhone 3D model in my SceneKit application. It has a material that I retrieve, called iScreen (the image that "is on" the screen of the iPhone):
var iScreen: SCNMaterial!
iScreen = iphone.geometry?.material(named: "Screen")
I decided to somehow project a webView there instead of an image.
Therefore I need the frame / position / size of the screen where iScreen "draws" to set the UIWebView's frame. Is that possible?
Note
Of course I tried position, frame, size, etc., but none of that was available :/
You will first have to know which SCNGeometryElement the material applies to (an SCNGeometry is made of one or more SCNGeometryElements).
That geometry element is essentially a list of indices to retrieve vertex data contained in the geometry's SCNGeometrySources. Here you are interested in the position source (it gives you the 3D coordinates of the vertices).
By iterating over these positions you'll be able to find the element's width and height.
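A minimal sketch of that iteration, assuming the position source stores three Float components per vertex (the common case; the function name is illustrative):

import SceneKit

// Sketch: scan a geometry's position (vertex) source and return the
// extent of the vertices along x and y. Assumes Float components.
func extents(of geometry: SCNGeometry) -> (width: Float, height: Float)? {
    guard let source = geometry.sources(for: .vertex).first else { return nil }
    var minX = Float.greatestFiniteMagnitude, maxX = -Float.greatestFiniteMagnitude
    var minY = Float.greatestFiniteMagnitude, maxY = -Float.greatestFiniteMagnitude
    source.data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        for i in 0..<source.vectorCount {
            // dataOffset/dataStride describe how vertices are packed.
            let offset = source.dataOffset + i * source.dataStride
            let x = raw.load(fromByteOffset: offset, as: Float.self)
            let y = raw.load(fromByteOffset: offset + source.bytesPerComponent, as: Float.self)
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    return (maxX - minX, maxY - minY)
}

This sketch scans the whole geometry; to restrict it to the screen element only, you would first collect the indices from the matching SCNGeometryElement and only visit those vertices.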
What are the coordinates for the bottom of the screen... or how can I create a "floor" at the bottom of the screen in spritekit?
Sorry, but I don't understand screen coordinates that well in spritekit.
You need to understand the Sprite Kit coordinate system as explained in Apple's Documentation here.
Here's how you create a floor at the bottom of the screen in SpriteKit:
SKNode *floor = [SKNode node];
// Edge loop spanning the bottom edge of the scene, 1 point tall.
floor.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:CGRectMake(CGRectGetMinX(self.frame), CGRectGetMinY(self.frame), CGRectGetWidth(self.frame), 1.0)];
[self addChild:floor];
You need a universal approach to get the coordinates of the screen's corners.
Using the code from that answer you can get a CGRect with the necessary information.
Example:
let screenRect = getVisibleScreen(
    sceneRect: self.scene!.frame,
    viewRect: self.view!.frame)
And then you can use it:
screenRect.minX
screenRect.maxX
screenRect.minY
screenRect.maxY
screenRect.width
screenRect.height
This is more than enough to calculate the coordinates of a "floor" or any other relative position.
The location of the bottom of the screen will depend on what coordinate system you are using for your scene.
Out of the box, the bottom of the screen will be at y coordinate zero, but there are a few things that can happen that will affect that.
For instance, if you are using the scene editor in Xcode and your scene's anchorPoint property is something other than y=0, then the "origin" of your scene will not be at the bottom of the screen. In a recent Xcode beta, the default behavior changed to put the scene's origin at the center of the scene instead of the lower left corner, so that would explain why you might be seeing things in the center of the screen when you expect them to be at the bottom.
Also, the "bottom of the screen" will be relative to whatever parenting structure you have in your scene. For instance, if you place a background sprite in your scene, and want to attach a floor sprite to that which is at the bottom of the screen, you'll have to do some computing to figure out where to place it because you are going to inherit the translation and rotation of the floor's parent node (and any parents that node has).
To keep things simple, you can just place everything directly on the stage and manage their z-order manually. This will let you, basically, use the same coordinate system for everything. This is often fine; as long as you're not trying to do anything complex with your sprites, you don't need a complicated "tree" of nodes.
But even with this approach, the metrics of your scene are going to have to be handled dynamically. The width and height of your scene are going to depend on how you approach displaying your scene on different devices with different sizes. For instance, the top right of an iPhone 4 is going to be in a different place than the top right of an iPad Pro. A full discussion of how to deal with that is beyond the scope of your question, but generally, you'll probably want to use a "reference width" or a "reference height" for your scene, use .AspectFit or .AspectFill for the scaleMode, and set your scene's size accordingly. (I.e., inspect the view's frame to get the actual aspect ratio of your scene and set your scene size to match your reference metric on one axis and scale the other axis to match the device's aspect ratio.) This will let you use the same metrics for all devices (although one of your two axes will be fluid).
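As a concrete sketch of that reference-metric approach (the 1024-point reference width, the GameScene class, and the skView variable are all assumed names, not from the original answer):

// Size the scene from a fixed reference width and the device's
// actual aspect ratio, so one axis is constant across devices.
let referenceWidth: CGFloat = 1024   // assumed design width
let viewSize = skView.bounds.size
let sceneSize = CGSize(width: referenceWidth,
                       height: referenceWidth * viewSize.height / viewSize.width)
let scene = GameScene(size: sceneSize)
scene.scaleMode = .aspectFit   // scene now matches the view exactly
skView.presentScene(scene)

Because the scene's aspect ratio is derived from the view's, .aspectFit fills the screen with no letterboxing; only the scene's height varies between devices.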
I have a SpriteKit scene in which I want the effect of a camera zoom and pan. Does anyone know of any libraries or easy methods of doing this?
It was very easy to do in other 2D engines, but it does not seem simple here.
I was thinking of doing it from the app delegate and using the window to zoom, since my character stays around the same position.
The desired effect I would like to accomplish is like the start of an Angry Birds level, when the camera pans into the level and then to the launch dock.
http://www.youtube.com/watch?v=_iQbZ3KNGWQ This is an example of the camera zoom and pans I am talking about.
Thanks for the help.
If you add an SKNode to the SKScene and make your scene content children of that node instead of direct children of the scene, then you can zoom and pan all of the contained content just by adjusting the xScale, yScale, and position properties of that node. (Any content you do not want scrolled, e.g. scores, could then be added to a different SKNode or directly to the scene.)
The adjustment could be done by overriding one of update:, didEvaluateActions, or didSimulatePhysics in your SKScene subclass. The choice depends on whether you are moving your character around yourself in update:, or whether it also gets moved by running SKActions or by simulated physics.
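A minimal sketch of that world-node approach (the world and player names are illustrative):

import SpriteKit

class GameScene: SKScene {
    // All zoomable/scrollable content goes under `world`;
    // HUD nodes would be added directly to the scene instead.
    let world = SKNode()
    let player = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))

    override func didMove(to view: SKView) {
        addChild(world)
        world.addChild(player)
        world.setScale(2.0)   // "zoom" by scaling the container
    }

    override func didSimulatePhysics() {
        // "Pan" by shifting the world so the player sits at the scene's center.
        let playerInScene = convert(player.position, from: world)
        world.position.x += frame.midX - playerInScene.x
        world.position.y += frame.midY - playerInScene.y
    }
}

Newer SpriteKit versions also provide SKCameraNode, which packages this pattern: assign one to the scene's camera property and move or scale the camera node instead of a container.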