What objects is my physics object touching? - coronasdk

I expect a display object that has physics added to it (via physics.addBody) to provide me a table of the objects it's touching, but it doesn't.
Is there a simple mechanism for getting a table of all the other objects that my display object is touching? Or do I have to keep track of them during collision events?
Edit for clarity:
I looked into region queries; the problem is I need each object to be accessible recursively. Think of the game Bejeweled. If I have a ton of physics objects falling to the 'ground', and a few of them share like properties (like colors in Bejeweled), I'd like to know which of those physics objects are all touching at the same time. And if x or more are touching, I'd like to remove those objects from the game at the same time.
So if A is touching B and C, and A, B, and C all share the same color, remove them all at once. But perhaps A and B have come to rest for a while before C shows up. There's no easy way to check the state of A if C collides with B. How can I see that objects A, B, and C all connect with one another, are still touching, and all share a like property, so I can make them disappear?
I tried adding a property to my falling objects, and then, on a collision event, checking whether event.other.color matches my self.color. If it does, I add the pair to another table indicating they're currently touching.
But my method of keeping track of what is "touching" only works until event.phase == "ended". When that happens, I remove the data from the table. Unfortunately, objects don't always get registered as "no longer touching", which I take as a sign that my method isn't working very well.
I can't be the first to want to do something like this, but how is everyone else doing it? Where have I gone wrong?

Perhaps a "Region Query" would help. Look at the RegionQuery sample app in SampleCode/Physics/RegionQuery

Related

How to find if collision between two scene nodes ended in ARKit/SceneKit on iOS 11?

I implemented the physics contact delegate in order to find out when two nodes are in contact. Upon contact, the delegate is called correctly and I change the color of the nodes. But the contact didEnd method is called again and again, so I can't tell when the contact has actually ended.
For example, if I move one object to overlap the other, I change the color of the objects. When I then move the second object away so there is no contact, I have to change the color back to the original. How do I achieve this?
There is a method that tells me there is some contact, but no method that reliably tells me the contact is over.
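One common workaround is to treat the delegate callbacks as updates to a set of currently-contacting pairs and make the color decision from that set, so repeated didEnd calls are harmless. A rough sketch, assuming the tracker is installed as the scene's physicsWorld.contactDelegate:

    import SceneKit

    final class ContactTracker: NSObject, SCNPhysicsContactDelegate {
        // Pairs of nodes currently in contact, keyed order-independently.
        private var contacts = Set<Set<ObjectIdentifier>>()

        private func key(_ contact: SCNPhysicsContact) -> Set<ObjectIdentifier> {
            [ObjectIdentifier(contact.nodeA), ObjectIdentifier(contact.nodeB)]
        }

        func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
            contacts.insert(key(contact))
        }

        func physicsWorld(_ world: SCNPhysicsWorld, didEnd contact: SCNPhysicsContact) {
            // didEnd may fire repeatedly; removing from a set is idempotent,
            // so the extra calls do no harm.
            contacts.remove(key(contact))
        }

        // Query this once per frame (e.g. in renderer(_:updateAtTime:))
        // and recolor nodes based on the answer.
        func isTouching(_ a: SCNNode, _ b: SCNNode) -> Bool {
            contacts.contains([ObjectIdentifier(a), ObjectIdentifier(b)])
        }
    }

If contacts flicker (a didEnd immediately followed by a new didBegin), you can additionally refresh entries from physicsWorld(_:didUpdate:) and only treat a pair as separated once it has been absent for a frame or two.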

SpriteKit prevent more than one touch at a time

I am making a SpriteKit game and was wondering if there is a way to prevent more than one touch at a time.
In my game an object gets added on every touch, and I kind of don't want that (even though it's hilarious). If there is a way, how do I do it? What would I use? Could you point me in the right direction? I know there are ways to do it, as I have seen multiple games with this feature.
Would I put something into 'appdelegate.swift' to prevent it, or does it have something to do with the 'touches began' function? I have tried several methods, but none seem to work, and I have searched all over Google to no avail.
If somebody could help me with this I would appreciate it, but it's not really that important, as it doesn't upset the balance of the game at all.
You can use the multipleTouchEnabled property of UIView:
"When set to YES, the view receives all touches associated with a multi-touch sequence and starting within the view's bounds. When set to NO, the view receives only the first touch event in a multi-touch sequence that starts within the view's bounds. The default value of this property is NO."
Use it like this: self.view.multipleTouchEnabled = false, where self is the scene.
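In a SpriteKit scene that might look like the following (isMultipleTouchEnabled is the current Swift spelling of the same property):

    import SpriteKit

    class GameScene: SKScene {
        override func didMove(to view: SKView) {
            // Only the first touch of a multi-touch sequence is delivered,
            // so touchesBegan fires once even if several fingers land together.
            view.isMultipleTouchEnabled = false
        }
    }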

SpriteKit Objective-C – Detect how much finger has moved in -touchesMoved

I'd simply like to know if there is a way to detect how many pixels the finger has moved during the -touchesMoved function?
EDIT:
This is what I've tried. I made two instance variables called _previousPosition and _currentPosition. In -touchesBegan, I set them both to the current finger location in the scene. In -touchesMoved, I set _currentPosition to the current finger location again; during -touchesMoved, _currentPosition is constantly updated, while _previousPosition is not. Finally, in -touchesEnded, I create another variable (private, not global) called pixelsMoved and set it equal to _currentPosition - _previousPosition. Right after that, still in -touchesEnded, I reset _previousPosition to the current finger location. It's all rather convoluted, so I'm almost positive I've made a mistake somewhere. Any help would be appreciated.
-touchesMoved:withEvent: provides an event, and from the event you can get individual touch objects, each of which have an associated location that you get with -[UITouch locationInView:]. You don't get information about how far the touch has moved since the last time you looked, but you can keep track of the location of each touch and do the comparison yourself.
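A minimal sketch of that bookkeeping in a SpriteKit scene; note that UITouch also exposes previousLocation(in:), which gives you the movement since the previous event directly:

    import SpriteKit

    class GameScene: SKScene {
        private var startLocation: CGPoint = .zero

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let touch = touches.first else { return }
            startLocation = touch.location(in: self)
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let touch = touches.first else { return }
            let current = touch.location(in: self)

            // Movement since the previous touchesMoved event:
            let previous = touch.previousLocation(in: self)
            let frameDelta = CGPoint(x: current.x - previous.x,
                                     y: current.y - previous.y)

            // Total movement since the finger went down:
            let totalDelta = CGPoint(x: current.x - startLocation.x,
                                     y: current.y - startLocation.y)
            _ = (frameDelta, totalDelta) // use whichever you need
        }
    }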

Gesture pattern recognizer in iOS?

I am trying to implement a special login for developers without making any changes to the UI. For example, suppose that to log in as a developer, I draw a "D" shape over the UI and it opens a developer mode for me. How can I achieve this functionality? Is there any third-party library that can recognize the shape I am trying to make, or any other suggestion?
Yeah, you can do that with a subclass of UIGestureRecognizer; the tutorial I am linking shows all the tools you will need to build your own. You want to look at the Custom Gesture Recognizer part of the tutorial towards the bottom.
Basically, you will want to write a gesture that can evaluate if the user made a "D" shape over whichever view has your gesture recognizer. This can be done by keeping track of the last point, and seeing if the current point at any given time fits in the gesture. Or, you could keep track of every point the gesture has ever recorded and write a function that would evaluate if the points you've recorded qualify as a "D" in your gesture.
This may get complicated, as there are multiple ways to draw a D. However, you could start with two: one looking for a vertical line followed by a backwards C; the other, a backwards C followed by a vertical line.
Here is a good tutorial:
http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more
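For reference, the skeleton of such a recognizer is small; the hard part is the shape test, which is left here as a hypothetical classifyAsD(_:) placeholder:

    import UIKit
    import UIKit.UIGestureRecognizerSubclass // needed to set `state`

    class DShapeGestureRecognizer: UIGestureRecognizer {
        private var points: [CGPoint] = []

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
            super.touchesBegan(touches, with: event)
            points = []
            if let touch = touches.first {
                points.append(touch.location(in: view))
            }
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
            super.touchesMoved(touches, with: event)
            if let touch = touches.first {
                points.append(touch.location(in: view))
            }
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
            super.touchesEnded(touches, with: event)
            // .ended is the "recognized" state for a discrete gesture.
            state = classifyAsD(points) ? .ended : .failed
        }

        // Placeholder: examine the recorded points and decide whether they
        // trace a "D" (e.g. a vertical stroke followed by a backwards C).
        private func classifyAsD(_ points: [CGPoint]) -> Bool {
            return false
        }
    }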
While searching here and there, I found it could be a great idea to divide the screen into 9 areas and assign each one a digit, the same as a mobile phone keypad. When the user pans, take the touch coordinate, find the region it falls in, and append that region's digit to an array.
That sequence of values works like a unique pin for you.
For example, to check whether the letter is "L", check whether the order of array elements is 1->4->7->8->9; to check for "U", check whether the order is 1->4->7->8->9->6->3.
Is there any other way to recognize a character by touch on the phone?
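A sketch of that grid idea: map each sampled touch point to a keypad cell 1-9, collapse consecutive duplicates, and compare the resulting sequence against known patterns (the patterns table below holds just the two examples from above):

    import UIKit

    // Map a point to a phone-keypad cell (1...9) within `bounds`.
    func cell(for point: CGPoint, in bounds: CGRect) -> Int {
        let col = min(2, max(0, Int(3 * (point.x - bounds.minX) / bounds.width)))
        let row = min(2, max(0, Int(3 * (point.y - bounds.minY) / bounds.height)))
        return row * 3 + col + 1
    }

    // Feed points sampled from a pan gesture; consecutive duplicates collapse.
    func sequence(for points: [CGPoint], in bounds: CGRect) -> [Int] {
        var result: [Int] = []
        for p in points {
            let c = cell(for: p, in: bounds)
            if result.last != c { result.append(c) }
        }
        return result
    }

    // The two example patterns from above.
    let patterns: [String: [Int]] = [
        "L": [1, 4, 7, 8, 9],
        "U": [1, 4, 7, 8, 9, 6, 3],
    ]

    func letter(for points: [CGPoint], in bounds: CGRect) -> String? {
        let seq = sequence(for: points, in: bounds)
        return patterns.first { $0.value == seq }?.key
    }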

Design pattern for large quantity of synchronized animations

I have over 150 UIImageViews. I need them to behave like a large ballet. I also need to be able to dynamically choreograph them. Some spin, some duck, some jump (each currently a different UIImageView subclass).
I'm assuming I need to put them all into an NSDictionary, then grab each object by key and say "you spin", "you jump", "you duck", and perform those moves in sync while the music is playing (no real music, but instead data coming in from an external source). I have no idea what that music (data) will be until it arrives, and no one's heard the song before. I've set up a while (music) {} loop with a large switch statement inside.
I'd like to place all the ballerinas on the view in the nib by hand so that they line up correctly with the subview. None of them change x or y. They might only change z, swap an image, fade opacity, etc. (which I was calling spin, duck, and jump). And there is no user interaction here; you're just in the audience.
I'm also assuming I'll need to use UIView's beginAnimations:context:, setAnimation..., and commitAnimations methods.
Am I on the right track? What's the best way to achieve this? Any optimizations I should consider? Apologies for all the analogies; it's the easiest way to explain what I'm trying to achieve.
I recommend you look at UICollectionView. There was a great WWDC session about it. I'm not exactly sure what you're trying to achieve, but with UICollectionView you can set up custom layouts and animate between different layouts automatically. So if your images have to move in sync, this might be a good option.
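If you stay with plain UIImageViews, the dictionary-dispatch idea from the question pairs naturally with the block-based animation API rather than beginAnimations/commitAnimations. A sketch, with the Move commands and Choreographer type invented for illustration:

    import UIKit

    enum Move { case spin, duck, jump } // hypothetical choreography commands

    final class Choreographer {
        // Dancers keyed by name, as the question suggests.
        var dancers: [String: UIImageView] = [:]

        func perform(_ move: Move, by name: String, duration: TimeInterval = 0.5) {
            guard let dancer = dancers[name] else { return }
            UIView.animate(withDuration: duration) {
                switch move {
                case .spin:
                    dancer.transform = dancer.transform.rotated(by: .pi)
                case .duck:
                    dancer.alpha = 0.3 // stand-in for a "duck" effect
                case .jump:
                    dancer.transform = dancer.transform.scaledBy(x: 1.2, y: 1.2)
                }
            }
        }
    }

    // As each beat of "music" data arrives:
    // choreographer.perform(.spin, by: "dancer42")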
