How to correctly use SKConstraints - iOS

I'd like to build a menu of 'tiles' using SpriteKit. For simplicity, the tiles are SKSpriteNodes of all the same size, and are housed in a larger 'container' SKSpriteNode. The container node is a mask node, so only a subset of the menu tiles are shown at a given time. Dragging up or down on a tile should scroll the list of tiles up or down, respectively.
Right now, when a drag is detected over a tile, I just reposition all tiles up or down--easy. There are two other constraints on the problem, though. The top tile should never scroll below the top of the mask/container node. And, the bottom tile should never scroll above the bottom of the mask/container node. Taken together, this keeps the list of tiles from ever being completely dragged to a place where they are hidden/unreachable.
This problem seemed like it would be elegantly solved with either SKConstraints or SKPhysicsJoints. I've tried a lot of combinations, but nothing seems to give me the desired effect.
For instance, I've used an SKPhysicsJointFixed to pin each pair of neighboring tiles together. This has two problems--first, the tiles are 'sluggish' when dragged, so the finger drags more quickly than the tile to the point that it is no longer over the tile, and the drag stops being recognized as on the node. Second, this pin allows the tiles to rotate freely about the anchor point. I added an SKConstraint to restrict the z-rotation of every tile, but now the tiles can barely be dragged at all.
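A simplified sketch of that setup (the function and variable names are illustrative, and it assumes the tiles already have physics bodies):

import SpriteKit

// Sketch of the joint-based attempt: pin each pair of neighboring tiles with a
// fixed joint, then clamp zRotation with an SKConstraint. Assumes the tiles are
// ordered, live in `container`, and already have physics bodies.
func pinTiles(_ tiles: [SKSpriteNode], in scene: SKScene, container: SKNode) {
    for (tile, next) in zip(tiles, tiles.dropFirst()) {
        // Joint anchors are specified in scene coordinates.
        let anchor = scene.convert(tile.position, from: container)
        let joint = SKPhysicsJointFixed.joint(withBodyA: tile.physicsBody!,
                                              bodyB: next.physicsBody!,
                                              anchor: anchor)
        scene.physicsWorld.add(joint)
    }
    // The zRotation clamp that stops the spinning, but also makes dragging feel stuck.
    for tile in tiles {
        tile.constraints = [SKConstraint.zRotation(SKRange(constantValue: 0))]
    }
}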
I tried implementing everything with just SKConstraints, so I wouldn't have to fuss with setting masses correctly, etc., in order to make the physics approach feel more usable. I added a constraint on the x position of every tile so that they could only be dragged vertically. Then, I added another with SKConstraint.distance(_:toNode:) on every pair of tiles to keep them separated by the correct vertical distance. The problem with this is, given two tiles, if I add this distance constraint on each, only the last tile to be given the constraint is allowed to move. It moves and the other tile follows this tile correctly. That other tile can't be moved at all, though.
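A simplified sketch of that constraint setup (names and spacing are illustrative):

import SpriteKit

// Sketch of the constraint-only attempt: lock each tile's x, then add a
// distance constraint between every pair of vertical neighbors (both ways).
// `tileSpacing` is the fixed center-to-center distance.
func constrainTiles(_ tiles: [SKSpriteNode], tileSpacing: CGFloat) {
    for tile in tiles {
        tile.constraints = [SKConstraint.positionX(SKRange(constantValue: tile.position.x))]
    }
    for (tile, next) in zip(tiles, tiles.dropFirst()) {
        // As described above, only the node that receives its constraint last
        // stays freely draggable; the other one ends up pinned.
        tile.constraints = (tile.constraints ?? []) +
            [SKConstraint.distance(SKRange(constantValue: tileSpacing), to: next)]
        next.constraints = (next.constraints ?? []) +
            [SKConstraint.distance(SKRange(constantValue: tileSpacing), to: tile)]
    }
}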
Then there is the problem of keeping at least some part of the set of tiles 'in bounds', so that they can never be dragged completely outside of the mask node and become invisible/unreachable. A constraint/joint seemed like it might work here too: add some constraint/joint to the bottom tile, and similarly to the top tile. But how do I express a one-sided limit like that, i.e. a constraint/joint that keeps the bottom tile from ever moving above the bottom edge of the mask node while still letting it move freely below that point, and vice versa for the top tile?
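The closest thing I can see in the API is a one-sided SKRange. Below is a sketch of what I have in mind (placeholder values, assuming centered anchor points), though I don't know whether this is the intended approach or how it would interact with the distance constraints:

import SpriteKit

// Hypothetical sketch: clamp only the end tiles with one-sided ranges.
// The exact limit values depend on anchor points and on the coordinate space
// the tiles live in, so treat these expressions as placeholders.
func clampEndTiles(topTile: SKSpriteNode, bottomTile: SKSpriteNode,
                   maskTopY: CGFloat, maskBottomY: CGFloat) {
    // Top tile: free to move up, but its top edge never drops below the mask's top.
    let topClamp = SKConstraint.positionY(SKRange(lowerLimit: maskTopY - topTile.size.height / 2))
    topTile.constraints = (topTile.constraints ?? []) + [topClamp]

    // Bottom tile: free to move down, but its bottom edge never rises above the mask's bottom.
    let bottomClamp = SKConstraint.positionY(SKRange(upperLimit: maskBottomY + bottomTile.size.height / 2))
    bottomTile.constraints = (bottomTile.constraints ?? []) + [bottomClamp]
}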
Am I going about this all wrong? Is using constraints/joints not the correct approach? Obviously, I can hand code all of these restrictions, but it seems like constraints/joints would solve this problem so much more elegantly, while also letting me rely on the physics world for some springiness, flicking, etc. If there's a good way to do what I'm trying to do, would someone please provide a suggestion?
Many thanks!

Related

Swift SceneKit - I am trying to figure out if a Node object goes off the screen

Using SceneKit
I want to make the gray transparent box disappear and only show the colored boxes when the user zooms in.
So I want to detect when that box's edges are starting to fall off the screen as I zoom, so I can hide the gray box accordingly.
First thoughts, but there may be better solutions:
You could project the node's position into screen coordinates (projectPoint rather than unprojectPoint, since you are going from the scene to the view), do the +/- math on the object's size, and skip Z. I "think" that would work; see the sketch after this list.
You could do physics-based collision detection against invisible box or plane geometries that act as your screen edges. That has some complexity if your view is changing, but testing would be easy: just leave the geometries visible until you get the behavior you want, then set isHidden = true.
isNode(_:insideFrustumOf:) returns a Boolean for whether the node "might" be visible. I'm assuming "might" means it could still be obscured by other geometry, which in your case shouldn't matter. (Edit: on second thought, that doesn't solve your problem, but I'll leave it here for reference.)
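A minimal sketch of the first option, assuming you have a reference to your SCNView (the function and parameter names are placeholders):

import SceneKit

// Sketch of option 1: project the node's bounding-box corners into view
// coordinates and check whether they all fall inside the view's bounds.
func isFullyOnScreen(_ node: SCNNode, in scnView: SCNView) -> Bool {
    let (lo, hi) = node.boundingBox
    var corners: [SCNVector3] = []
    for x in [lo.x, hi.x] {
        for y in [lo.y, hi.y] {
            for z in [lo.z, hi.z] {
                corners.append(SCNVector3(x: x, y: y, z: z))
            }
        }
    }
    return corners.allSatisfy { corner in
        // Local -> world -> view (screen) coordinates; ignore the projected z.
        let world = node.convertPosition(corner, to: nil)
        let projected = scnView.projectPoint(world)
        return scnView.bounds.contains(CGPoint(x: CGFloat(projected.x),
                                               y: CGFloat(projected.y)))
    }
}

The gray box can then be hidden as soon as this starts returning false while zooming.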

How can I get boxes to snap into position when I drag and drop them into certain regions of the screen?

I am currently building a game in Swift, using Storyboards. The game revolves around generating income from fishing lobsters. Users have lobster pots, which they can place into either inshore or offshore regions of the water. I have no prior experience and minimal knowledge of how to code in Swift.
My problem at the moment is understanding collision detection. There are three regions of the screen into which the users can drag their pots. The first region is the starting position of the lobster pots, from which the player must drag the pots into either inshore or offshore locations. Currently, I have managed to code the action of dragging and dropping the pots, so they can be placed at any point on the screen. What I hope to do is have the pots snap into position when they are dropped within the regions of either the inshore or offshore boxes. Furthermore, when the pots are dropped into place, I would like them to be organized in a row, equally spaced, wrapping into a row below as the box fills up.
I think I should also mention that the background is an image view, taken as a screenshot of the view when the game is running. I did this to avoid layering, as some pots would sometimes move behind the boxes when dragging them.
Thanks in advance.
Here are some ideas:
You already have the code to move the pots; that's good. All you need now is some math.
Although your background is an image, you also need a data model that keeps track of where your stuff is (or which region each pot belongs to). It is important to know whether a pot is in "My Pots", "Inshore", or "Offshore". This information has to be kept in some objects, like "myPots", "inshore", and so on.
So dragging doesn't only move the pots on screen; it also changes which region a pot belongs to.
Hint: each area (myPots, ...) can be represented by an invisible rectangle. Invisible, because you already have the background image; but an invisible rectangle gives you the ability to resize the UI without complicated recalculations.
I would divide the area into such invisible rectangles; the coordinates used below are just examples, for better understanding.
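A minimal sketch of that data model (names and numbers are made up):

import CoreGraphics

// Each area is a named, invisible rectangle. The split matches the decision
// further down: y = 100 separates "My Pots" from the water, x = 300 separates
// inshore from offshore.
enum Region {
    case myPots, inshore, offshore
}

let regionRects: [Region: CGRect] = [
    .myPots:   CGRect(x: 0,   y: 0,   width: 600, height: 100),
    .inshore:  CGRect(x: 0,   y: 100, width: 300, height: 400),
    .offshore: CGRect(x: 300, y: 100, width: 300, height: 400),
]

// When a drag ends, look up which region the pot was dropped in.
func region(at dropPoint: CGPoint) -> Region? {
    regionRects.first { $0.value.contains(dropPoint) }?.key
}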
Keep in mind where your coordinate origin is: UIKit views put (0, 0) at the top left, while SpriteKit scenes put it at the bottom left. The examples here assume a top-left origin.
So when you drag and release a pot, calculate the end point of the drag and compare it with your areas. No complicated collision detection is necessary, because you are only testing whether a point is inside an area. But if you do want collision detection, search for AABB collision detection (for example https://studiofreya.com/3d-math-and-physics/simple-aabb-vs-aabb-collision-detection/).
In your case it would be enough to have the decision:
if draggedPot.endCoordinate.y > 100 {
    // inshore or offshore
    if draggedPot.endCoordinate.x > 300 {
        // offshore
    } else {
        // inshore
    }
} else {
    // still in myPots
}
I hope you get the idea :)
Arranging the pots in a row is also just some math: loop over the pots in an area and place them one by one, each time advancing x by the width of a pot plus some space. When that would exceed the width of the area, move y down by the height of a pot plus some space and start x at the beginning of the area again; see the sketch below.
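A sketch of that layout loop, with made-up names and spacing, assuming a top-left origin as in the examples above:

import CoreGraphics

// Compute a position for each pot, left to right, wrapping to a new row when
// the current row would overflow the area.
func potOrigins(count: Int, potSize: CGSize, in area: CGRect,
                spacing: CGFloat = 8) -> [CGPoint] {
    var origins: [CGPoint] = []
    var x = area.minX
    var y = area.minY
    for _ in 0..<count {
        if x + potSize.width > area.maxX {
            // Row is full: start the next row below and reset x.
            x = area.minX
            y += potSize.height + spacing
        }
        origins.append(CGPoint(x: x, y: y))
        x += potSize.width + spacing
    }
    return origins
}

Each pot's frame origin (or position, adjusted for its anchor point) can then be set or animated to the corresponding point.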

Built-in way to convert from screen coordinates to image coordinates?

I have an app where users can scale and position images in a number of ways. They can drag an entire layer of images around, scale that layer, drag around individual images inside the layer, and scale those individual images.
For some unrelated functionality, I need to generate the image coordinates that a user is pointing to on a given image (i.e. (0,0) for the top left and (width, height) for the bottom right), independent of how much it has been moved around and scaled. Is there a built-in method for transforming an absolute mouse position to its relative position on an image (and vice versa) that takes into account any scaling/panning? I have started building my own methods for this transformation, but before I got too deep I wanted to see if it was already built in somewhere that I'm not seeing.
Konva doesn't have such methods yet. You have to implement them manually.
You can subscribe to this related issue: https://github.com/konvajs/konva/issues/303

How to detect intersecting nodes of children of other SKSpriteNodes

I hope you all can help with this. I'm working on an app version of a board game. I have hex-shaped tiles which are drawn randomly and laid out at the start of the game. Each of these tiles has four sides with a value of 1, and the other two sides have values of 2 and 3.
Each tile is an SKSpriteNode with transparent rectangle nodes on its edges. There are 5 different types of tiles, and they need to be separate sprites with child nodes because, in addition to being randomly laid out, they are also randomly rotated. So I need to know programmatically which tile edges are touching which edges of other tiles.
Like this:
https://app.box.com/s/nnym97st3xmrsx979zchowdq1qwsmpoo
(I tried to post an image of what I'm trying to accomplish, but apparently I don't have a high enough of a rating.) ;-)
For example: If a "2" is touching a "3", etc.
I first tried collision detection, but of course that only works with dynamic, moving objects.
I tried an IF statement to compare whether the other nodes were touching, and then remembered that the coordinates were specific to the parent node, so that didn't work.
I then tried intersectsNode, but that seems to only work with nodes under the same parent.
I am currently working with convertPoint in order to get the coordinates to match the scene and thus be comparable, but I can't seem to get it to work the way I need.
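A simplified sketch of the direction I'm going (it assumes the only children of each tile are the transparent edge nodes, and the tolerance value is arbitrary):

import SpriteKit

// Convert every edge marker's position into scene coordinates and compare
// positions across tiles to find pairs of edges that are touching.
func touchingEdgePairs(tiles: [SKSpriteNode], in scene: SKScene,
                       tolerance: CGFloat = 4) -> [(SKNode, SKNode)] {
    var pairs: [(SKNode, SKNode)] = []
    for (i, tileA) in tiles.enumerated() {
        for tileB in tiles.dropFirst(i + 1) {
            for edgeA in tileA.children {
                // Convert from the edge's parent (the tile) into scene space.
                let a = scene.convert(edgeA.position, from: tileA)
                for edgeB in tileB.children {
                    let b = scene.convert(edgeB.position, from: tileB)
                    let dx = a.x - b.x, dy = a.y - b.y
                    if dx * dx + dy * dy < tolerance * tolerance {
                        pairs.append((edgeA, edgeB))
                    }
                }
            }
        }
    }
    return pairs
}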
There must be something simple that I am not seeing. Any ideas?
Certainly not simple.
One solution would be to start all your shapes slightly spaced out from each other. Add invisible child nodes with physics bodies to all six sides and give each physics body an appropriate category based on its value (1, 2 or 3).
When you start the game, move all the outer nodes into their proper position (sides touching) by using whatever movement method your prefer. This will give you contact messages as each hex side touches another. The contact messages will tell you what side number is touching its neighbor.
The exact coding of this idea depends on your current code, game play, etc...
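A rough sketch of that setup (the sensor size, positions and bitmask values are placeholders):

import SpriteKit

// Each edge gets an invisible sensor child whose physics body category
// encodes its value (1, 2 or 3).
let edgeValue1: UInt32 = 0x1 << 0
let edgeValue2: UInt32 = 0x1 << 1
let edgeValue3: UInt32 = 0x1 << 2

func addEdgeSensor(to tile: SKSpriteNode, at position: CGPoint,
                   zRotation: CGFloat, category: UInt32) {
    let sensor = SKSpriteNode(color: .clear, size: CGSize(width: 40, height: 6))
    sensor.position = position
    sensor.zRotation = zRotation
    let body = SKPhysicsBody(rectangleOf: sensor.size)
    body.isDynamic = true              // contacts require at least one dynamic body
    body.affectedByGravity = false
    body.categoryBitMask = category
    body.collisionBitMask = 0          // no physical response, contact messages only
    body.contactTestBitMask = edgeValue1 | edgeValue2 | edgeValue3
    sensor.physicsBody = body
    tile.addChild(sensor)
}

The scene's SKPhysicsContactDelegate then receives didBegin(_:) as the tiles are moved into their touching positions, and the two category bitmasks in the contact tell you which edge values met.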

How to drag and drop multiple SpriteKit Nodes with the same parent?

I have multiple SKSpriteNodes (some rectangles) which are draggable (I followed the tutorial on Sprite Kit Tutorial: How To Drag and Drop Sprites). When a collision happens between them, I group them (by making one rectangle a parent and the other a child). No matter how many rectangles I combine, I always end up with one parent and multiple rectangles that belong to it. I am doing this because I want to move rectangles belonging to a group together, and I observed that if I move the parent, I move all of its children. What I do to achieve this is to restructure the group in touchesBegan, making the touched node the parent and all the other nodes of the group children of this new parent. I believe that the following image may make things a little bit more clear.
The problem I am facing is that I can drag the group even if I touch the white space (shown with a red circle) between the horizontal and vertical rectangles. As all rectangles shown in the image have the same parent, I guess that there is a bounding box that includes them all, and this is why the white space in the middle can trigger a drag event.
Does anyone have any idea how I can deal with this issue?
Is it possible to have a bounding box as shown in the following image?
Thanks in advance.
You need to write custom hit testing to perform this kind of trick.
For every click -> for every box (within a certain range of the touch) -> for every other box (within a certain range of the touch) -> combine the two box frames into one (CGRectUnion(rect1, rect2), or rect1.union(rect2) in Swift) and see if your finger is within the resulting rect.
This can still give hits for widely dispersed rectangles, so limit your initial search to boxes within a given range of the touch itself.
Apart from that it's just simple code.
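A rough sketch of that hit test, assuming the group's rectangles are all direct children of one parent node (the names and search range are illustrative):

import SpriteKit

// A touch counts as "on the group" only if it falls inside an individual
// rectangle, or inside the combined frame of two rectangles near the touch,
// rather than inside the parent's overall bounding box.
func groupHit(at point: CGPoint, in groupParent: SKNode, maxDistance: CGFloat = 120) -> Bool {
    // `point` is expected in groupParent's coordinate space, where the
    // children's frames are expressed.
    let nearbyFrames = groupParent.children
        .map { $0.frame }
        .filter { $0.insetBy(dx: -maxDistance, dy: -maxDistance).contains(point) }

    if nearbyFrames.contains(where: { $0.contains(point) }) {
        return true   // direct hit on a single rectangle
    }
    for (i, r1) in nearbyFrames.enumerated() {
        for r2 in nearbyFrames.dropFirst(i + 1) {
            if r1.union(r2).contains(point) {
                return true   // hit inside the combined frame of two nearby rectangles
            }
        }
    }
    return false
}

You would call something like this from touchesBegan, with the touch location converted into the group parent's coordinate space, and only start the drag when it returns true.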
