I'm reasonably new to iOS's SceneKit and have come across a dilemma with regard to user interaction in a 3D scene:
I have a set of SCNNode cubes in an SCNView, and I'd like to be able to pinpoint where a user touches the mesh of a given cube as a 3D coordinate (so that I can later manipulate the scene according to touch vectors). At present I've been using a UIGestureRecognizer to achieve basic hit-testing, but this seems to be limited to returning 2D points.
This isn't a problem when I want to hit-test a whole node, as that can be achieved by passing the gesture recognizer's location to the SCNView's hitTest method. However, does anybody have any suggestions as to how to precisely locate where a touch landed on a node, in terms of coordinates (i.e. an SCNVector3)?
Thanks!
You are on the right track with calling hitTest:options: on the SCNView. As you have probably seen, it returns an array of SCNHitTestResult objects.
The hit test result can tell you many things about the hit, one of them being what node was hit. What you are looking for is either the localCoordinates or the worldCoordinates.
The local coordinate is relative to the node that was hit. Since you are asking "how to precisely locate where a touch landed on a node" this is probably the one you are looking for.
The world coordinate is relative to the root node.
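For example, a minimal sketch of that flow, assuming a UITapGestureRecognizer has been attached to the SCNView (handleTap is just an illustrative name):

import SceneKit
import UIKit

// Assumes a UITapGestureRecognizer was added to the SCNView elsewhere.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    guard let scnView = gesture.view as? SCNView else { return }
    let point = gesture.location(in: scnView)        // 2D point in view coordinates

    let hits = scnView.hitTest(point, options: nil)  // [SCNHitTestResult]
    guard let hit = hits.first else { return }

    let node = hit.node                              // which cube was touched
    let local = hit.localCoordinates                 // SCNVector3 relative to that node
    let world = hit.worldCoordinates                 // SCNVector3 relative to the root node
    print(node, local, world)
}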
Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit we do have a method for that particular functionality: nodesInsideFrustum(of:).
Our internal solution is to create a big fake bounding box in front of the camera. We then check intersections between the "frustum" bounding box and each entity's bounding box. That, of course, is a bit cumbersome and inaccurate. I wonder if someone can come up with a better solution and is willing to share it.
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know if it's visible on screen
But that's not enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to have the local position
the sign of localPosition.z shows if it's in front or behind the camera
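A rough sketch combining those checks (the function name isVisible and the cameraAnchor parameter are mine; the AnchorEntity(.camera) is assumed to already be added to the scene):

import RealityKit
import UIKit

func isVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // 1. Reject anything behind the camera: in camera space the view direction is -z,
    //    so entities in front of the camera have a negative local z.
    let localPosition = entity.position(relativeTo: cameraAnchor)
    guard localPosition.z < 0 else { return false }

    // 2. Project the entity's world position into screen space and check it lands on screen.
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else { return false }
    return arView.bounds.contains(screenPoint)
}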
The gear joint in Box2D is great, but I don't know how to implement that in Sprite Kit. Are there any solutions for implementing a gear joint in Sprite Kit?
Thanks.
Here are the available Sprite Kit joints: https://developer.apple.com/reference/spritekit/skphysicsjoint
As far as I can understand, there does not appear to be a direct equivalent of Box2D's gear joint, which seems to make one body rotate when another body is rotated.
In that case, you might want to investigate overriding the didSimulatePhysics or didFinishUpdate methods to manually set the rotation of one object based upon the rotation of another object:
https://developer.apple.com/reference/spritekit/skscene/1519965-didsimulatephysics
https://developer.apple.com/reference/spritekit/skscene/1520269-didfinishupdate
It might be as simple as:
wheel2.zRotation = wheel1.zRotation
but if the gears have different numbers of teeth (thus different ratios), you'll have to do some calculations.
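For example, a minimal sketch of that idea in an SKScene subclass (the wheel nodes and tooth counts are placeholders; the negative sign models meshed gears turning in opposite directions):

import SpriteKit

class GearScene: SKScene {
    // Placeholders; configure these in didMove(to:).
    let wheel1 = SKSpriteNode()
    let wheel2 = SKSpriteNode()
    let teeth1: CGFloat = 20
    let teeth2: CGFloat = 10

    override func didSimulatePhysics() {
        // Drive wheel2 from wheel1, scaled by the gear ratio.
        wheel2.zRotation = -wheel1.zRotation * (teeth1 / teeth2)
    }
}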
In my game, the size of the level can be larger than the screen of the phone and the camera will follow the player around the level, so there can be a decent amount of content (such as SKEmitterNodes) in the scene that is not visible at any given time. I've been reading through some of the SpriteKit documentation and found this quote in the SKEmitterNode section:
"Consider removing a particle emitter from the scene when it is not
visible onscreen. Add it just before it becomes visible."
Is this something that can be done in my type of game design? I don't want the nodes to be completely removed, since they will eventually be put on the screen, but is there a good way for me to add/remove the EmitterNodes (or other SpriteNodes) that are a certain distance from the screen, and is this a good idea to do? I'm looking to improve my frame rate and don't want costly nodes like SKEmitterNodes working while they're not even being displayed, but will adding/removing them as the player moves around reduce performance?
Here is the idea I currently have: create a rectangle that extends a certain distance around the screen and detect when a node comes into that rectangle, and if it's not already added to the scene, go ahead and add it. Thank you for any suggestions.
SKNodes really aren't a problem, because when they are off screen they are not being rendered anyway, just evaluated. So the main things to worry about with SKNodes are any physics bodies attached to them.
SKEmitterNodes, however, require some processing power, and that is why Apple recommends not having them emit if they are not on screen. I would just subclass SKScene, check only the SKEmitterNodes for whether or not they are in frame, and emit based on that.
So I would throw all your SKEmitterNodes into a container like an array, and have a loop that does a CGRectIntersectsRect check for each node based on your camera location and viewable screen size. If they intersect, add the node to the scene; if not, remove it from the scene. The array keeps a strong reference, so you do not have to worry about the node being deallocated on you.
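A rough sketch of that approach, assuming the emitters are added directly to the scene and a camera node is being used (the margin value and the names are placeholders):

import SpriteKit

class GameScene: SKScene {
    var emitters: [SKEmitterNode] = []   // strong references, so removed emitters aren't deallocated
    let cameraNode = SKCameraNode()

    override func update(_ currentTime: TimeInterval) {
        // Rectangle around the camera, padded slightly beyond the visible screen.
        let margin: CGFloat = 100
        let visible = CGRect(x: cameraNode.position.x - size.width / 2 - margin,
                             y: cameraNode.position.y - size.height / 2 - margin,
                             width: size.width + margin * 2,
                             height: size.height + margin * 2)

        for emitter in emitters {
            let onScreen = visible.contains(emitter.position)
            if onScreen && emitter.parent == nil {
                addChild(emitter)
            } else if !onScreen && emitter.parent != nil {
                emitter.removeFromParent()
            }
        }
    }
}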
I am making a Sprite Kit game where the player (basically a stickman) has a running animation and a parallax scrolling background.
Now I have enemies that come near my player. To destroy these enemies I sometimes have to touch the enemy's node to launch a rocket, attack them with an attack button, or just jump over them.
Everything is working fine, but I want to add some extra moves to destroy them. I want some enemies that you can only destroy if you have drawn a whole circle around them. So imagine they come, you make a circle, and then my player launches a laser or something. The problem is I have no idea where to start.
I haven't found anything on the internet. If it's too complicated or almost impossible, how about touching my player node and dragging to the enemy?
EDIT: I think I have to create a custom GestureRecognizer that recognizes if a circle is drawn around a sprite and then runs the code, but I don't know how this works.
Yes, it's too complex. Not just from a coding point of view, but also from that of the player's experience.
Anything that requires complex gestures over a large amount of glass is annoying for the player because they're never going to have the same experience. Their finger's moisture and oil content always changes, as does the ambient temperature and cleanliness of their screen.
So big gestures required to be performed quickly (a gaming input like this) will sometimes be fun and smooth, and other times degrade as an experience based on the nature of the above properties.
Best to avoid them for a game's best possible experience.
If you must do it, there are two ways to research how.
Seek out "custom gesture" creation and utilisation through documentation and google, etc.
Think about using some kind of array to store all the points the player's finger moves through during that circle gesture, attempt to discern whether an enemy is within that space, and then act accordingly (sketched after this list).
--- probably other ways, too. But these jump to mind.
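A rough sketch of that second idea, collecting touch points while the finger moves and then testing whether an enemy lies inside the traced path (the enemy node and the minimum point count are placeholders, and this doesn't check how circular the path really is):

import SpriteKit

class CircleGestureScene: SKScene {
    var touchPoints: [CGPoint] = []
    let enemy = SKSpriteNode()   // placeholder enemy node

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        touchPoints.removeAll()
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let location = touches.first?.location(in: self) {
            touchPoints.append(location)
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard touchPoints.count > 10 else { return }
        // Close the traced path and test whether the enemy's position falls inside it.
        let path = CGMutablePath()
        path.addLines(between: touchPoints)
        path.closeSubpath()
        if path.contains(enemy.position) {
            // The enemy was encircled: fire the laser here.
        }
    }
}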
I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps on one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; 3D is NOT required (it would be really cool though). A representation that satisfies the functional needs will suffice.
The info about the parts to make the drawing comes from an API (size, position, etc.).
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of small interactable rectangles.
So, as I mentioned, there are (as I see it) two opposite approaches: Realistic or Simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, or frameworks should I look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps on any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit and UIView and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but it involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt to perspective if needed). You could draw this with CoreGraphics in a UIView drawRect.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder but in the same direction, as in the following diagram (sorry for the imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
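A small sketch of that drawing step, assuming the two circles share a radius and the second centre is offset along the cylinder's axis (the class name and all the geometry values are placeholders):

import UIKit

class SliceView: UIView {
    // Placeholder geometry describing one slice.
    let insideCenter = CGPoint(x: 100, y: 200)
    let outsideCenter = CGPoint(x: 160, y: 180)   // offset along the cylinder's axis
    let radius: CGFloat = 80
    let startAngle = -CGFloat.pi / 4
    let endAngle = CGFloat.pi / 4

    override func draw(_ rect: CGRect) {
        // Arc along the inside circle, arc back along the outside circle, then close the path.
        let path = UIBezierPath(arcCenter: insideCenter, radius: radius,
                                startAngle: startAngle, endAngle: endAngle, clockwise: true)
        path.addArc(withCenter: outsideCenter, radius: radius,
                    startAngle: endAngle, endAngle: startAngle, clockwise: false)
        path.close()
        UIColor.red.setStroke()
        path.stroke()
    }
}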
To check if a touch is inside one of these square slices:
Check if the angle of the touch point, measured from the circle's center, is between the start and end angles of the slice.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
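A sketch of those three checks, assuming both circles have the same radius, the second circle's centre is offset along the cylinder's axis, and the angles are in radians as measured by atan2 (all the parameter names are mine):

import CoreGraphics

// True if `touch` lies inside the slice bounded by the two offset circles and the two angles.
func sliceContains(touch: CGPoint,
                   insideCenter: CGPoint, outsideCenter: CGPoint,
                   radius: CGFloat,
                   startAngle: CGFloat, endAngle: CGFloat) -> Bool {
    // 1. Angle check: is the touch between the slice's start and end angles?
    let angle = atan2(touch.y - insideCenter.y, touch.x - insideCenter.x)
    guard angle >= startAngle && angle <= endAngle else { return false }

    // 2. Outside the inside circle's radius.
    guard hypot(touch.x - insideCenter.x, touch.y - insideCenter.y) >= radius else { return false }

    // 3. Inside the outside circle's radius.
    return hypot(touch.x - outsideCenter.x, touch.y - outsideCenter.y) <= radius
}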
To find a point at which to display the popover, you could average the end points of the slice, or find the middle angle between the two edges and offset by half the distance.
Theoretically, doing this in Scene Kit with either SpriteKit or UIKit Popovers is ideal.
However, Scene Kit (and Sprite Kit) seem to be in a state of flux wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant Sprite Kit in iOS 8.4 to a lot of lost performance in iOS 9 seems common. Scene Kit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object, and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate it's the currently selected.
When wanting a smooth, well rounded cylinder as per your example, start with a cylinder that's made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
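In code, the relevant hook is SCNHitTestResult's geometryIndex property, which reports the index of the geometry element the ray struck. A hedged sketch, assuming one material per element so the index also identifies the touched section:

import SceneKit
import UIKit

// Given a hit test result (e.g. from SCNView.hitTest(_:options:) on a tap),
// use geometryIndex to find which element, and therefore which section, was touched.
func handleSelection(of hit: SCNHitTestResult) {
    let elementIndex = hit.geometryIndex
    guard let geometry = hit.node.geometry,
          elementIndex < geometry.materials.count else { return }

    // For example, change the material's state to mark the section as selected.
    geometry.materials[elementIndex].emission.contents = UIColor.red
}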
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to
collectionView:layout:sizeForItemAtIndexPath:
Then each cell of the collection could be a small rectangle representing a touchable part of the cylinder.
And use
collectionView:(UICollectionView *)collectionView didSelectItemAtIndexPath:(NSIndexPath *)indexPath
to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
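Putting those pieces together, a small Swift sketch of the selection handler (CylinderViewController is a placeholder for whatever view controller owns the collection view, and the popover's content controller is left empty):

import UIKit

class CylinderViewController: UIViewController, UICollectionViewDelegate {
    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        guard let attributes = collectionView.layoutAttributesForItem(at: indexPath) else { return }

        let popover = UIViewController()   // placeholder content controller
        popover.modalPresentationStyle = .popover
        popover.popoverPresentationController?.sourceView = collectionView
        popover.popoverPresentationController?.sourceRect = attributes.frame   // the tapped rectangle's frame
        // On iPhone, a UIPopoverPresentationControllerDelegate is needed to keep this as a popover.
        present(popover, animated: true)
    }
}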
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch, you know the 2D coordinate on the screen, so your only decision is whether or not to show a popover, even if no 3D model existed.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
If I understand you correctly, the purpose of the 3D model in your case is to determine which piece of data to show. In that case, the SCNView's hit test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node, and you can then calculate which segment was touched and make the decision.
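A rough sketch of that calculation, assuming the cylinder's long axis is the node's local y axis and segmentCount is the number of touchable rectangles around the circumference (both assumptions, not something SceneKit dictates):

import SceneKit

// Map the local hit coordinate on the cylinder to a segment index around its circumference.
func segmentIndex(for hit: SCNHitTestResult, segmentCount: Int) -> Int {
    let local = hit.localCoordinates
    // Angle around the local y axis, normalized to 0..<2π.
    var angle = atan2(Double(local.z), Double(local.x))
    if angle < 0 { angle += 2 * .pi }
    return Int(angle / (2 * .pi) * Double(segmentCount)) % segmentCount
}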
Now, how to draw the surface of the cylinder would be the only remaining question, right? There are various ways to do it: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the material for the cylinder...
I think the problem would be basically solved.