ARKit: only detect the floor when placing objects - iOS

In ARKit I can use hitTest:tapPoint with types:ARHitTestResultTypeExistingPlaneUsingExtent.
This works well if you want to place objects on e.g. tables, as types:ARHitTestResultTypeExistingPlaneUsingExtent will only detect hits within the extent of the detected plane.
It is less useful if you want to place objects on the floor, because you have to walk around until ARKit has placed (or extended) a lot of planes across your floor.
ARHitTestResultTypeExistingPlane solves that issue, because you only need to have detected a small patch of your floor and can then place objects anywhere. The problem, however, is that as soon as ARKit has detected another plane that doesn't correspond to the floor (e.g. a table), every object will be placed on that higher surface.
Is it possible to control which planes are used for the hittest?

The hit testing methods return multiple results, sorted by distance from the camera. If you're hit testing against existing planes with infinite extent, you should see at least two results in the situation you describe: first the table/desk/etc, then the floor.
If you specifically want the floor, there are a couple of ways to find it:
If you already know which ARPlaneAnchor is the floor from earlier in your session, search the array of hit test results for one whose anchor matches.
Assume the floor is always the plane farthest from the camera (the last in the array). Probably a safe assumption in most cases, but watch out for balconies, genkan, etc.
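The two strategies can be sketched language-agnostically like this (a minimal illustration with hypothetical names; in real ARKit code you would inspect each hit test result's anchor and its distance from the camera):

```python
def pick_floor_hit(results, floor_anchor_id=None):
    """Pick the floor from hit-test results sorted near-to-far.

    `results` is a list of (distance, anchor_id) tuples, already sorted
    by distance from the camera, mirroring what the hit test returns.
    """
    if floor_anchor_id is not None:
        # Strategy 1: match the plane anchor already identified as the floor.
        for distance, anchor_id in results:
            if anchor_id == floor_anchor_id:
                return (distance, anchor_id)
    # Strategy 2: assume the farthest plane (the last result) is the floor.
    return results[-1] if results else None
```

With a table in front of the floor, strategy 1 skips the nearer table hit, while strategy 2 simply takes the last entry of the sorted array.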


FEM Integrating Close to Integration Points

I am working on a program that can essentially determine the electrostatic field of some arbitrarily shaped mesh with some surface charge. To test my program I make use of a cube whose left and right faces are oppositely charged.
I use a finite element method (FEM) that discretizes the object's surface into triangles and assigns each triangle 3 integration points (see the figure below, bottom-left and -right). To obtain the field I then simply sum over all these points, while taking into account a weight factor (because not all triangles have the same size).
In principle this all works fine, until I get too close to a triangle. Since three individual points are not the same as a triangular surface, the program breaks down and produces these weird black spots, precisely between two integration points.
Below you see a figure showing the simulation of the field (top left), the discretized surface mesh (bottom left). The picture in the middle depicts what you see when you zoom in on the surface of the cube. The right-most picture shows qualitatively how the integration points are distributed on a triangle.
Because the electric field of one integration point always points away from that point, two neighbouring points will cancel each other out, since their vectors point in exactly opposite directions. What I need instead is for both vectors to point away from the surface.
I have tried many solutions, mostly around the following points:
Patching the regions near an integration point with a theoretically correct uniform field pointing away from the surface.
Reorienting the vectors only nearby the integration point to manually put them in the right direction.
Applying a sigmoid or other decay function to make the above look smoother.
However, none of the methods above lets me properly connect the nearby and faraway regions.
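For reference, the sigmoid-blend idea from the attempts above can be sketched like this (hypothetical names and constants; the near field would come from the analytic uniform-patch solution and the far field from the point-quadrature sum):

```python
import math

def blend_weight(d, d_switch, sharpness):
    # Sigmoid in distance to the surface: ~0 very close (use the uniform
    # near-field patch), ~1 far away (use the point-quadrature sum).
    return 1.0 / (1.0 + math.exp(-sharpness * (d - d_switch)))

def blended_field(e_far, e_near, d, d_switch=0.05, sharpness=200.0):
    # Component-wise blend of the two field estimates at distance d.
    w = blend_weight(d, d_switch, sharpness)
    return tuple(w * f + (1.0 - w) * n for f, n in zip(e_far, e_near))
```

The difficulty described in the question is exactly that no choice of d_switch and sharpness makes the two regimes agree at the crossover.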
I guess some method to extrapolate the correct value from the surroundings might work. However, because of the large number of computations, I moved the simulation to my GPU, which means I have to be careful about letting two pixels write to each other.
Either way, my question here is as follows:
What would be a good way to smooth out my results? That is, I need a more accurate description of my model when I get closer to a triangle.
As a final note I want to add that it is not my goal to simply obtain a smooth image. Later in the program I need this data to determine the response of a conducting material, which is where these black dots internally become a real pain...
Thank you for your help!

Collision with objects

I am new to Stack Overflow and my first question here is: how do I detect collisions with objects in a game, remove them from the scene, and add 1 to the score?
To help you understand this question, here is an example: if the player collides with a coin, the coin is removed from the scene and 1 is added to the score.
The only code I have generates the diamonds and that's it; I am not sure how to approach this question. I think I need to write the code in beginContact or something similar. I would be really glad if someone could help me with this issue. Thanks!
Object collision can be done in several ways depending on the complexity of the game.
The simplest method is to track the entire field in a 2d or 3d matrix and if the user moves into the same coordinates as an obtainable object remove the object and increment the score. This has obvious issues when it comes to large maps or complex systems that would run the hardware out of memory. So this works for something like a chess/checkers board but not a driving simulation.
The second method is to keep a linked list of objects visible on the field, with each object's central coordinates and its drawing instructions. An object might look like coords (1007.2053, 489.2111) shape (box), where box is a function that generates all the border coordinates. Then detect collisions by checking whether the primary target overlaps any of the objects in the list. You'll probably have to write a collision function for each shape; the simpler the shape, the easier the collision function. For more complex objects it's often easier to simplify the shape to a box, no matter what it actually looks like. This is why 3D games often have clipping errors, and why you can shoot the edge of an object in an FPS and still not be considered to have hit it.
Your question is too broad for a better answer, but here is a very basic article that discusses the second method.
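As a sketch of the second method with boxes, an axis-aligned overlap test plus coin collection might look like this (hypothetical names; boxes are (x, y, width, height) tuples):

```python
def aabb_overlap(a, b):
    # Each box is (x, y, width, height), origin at the top-left corner.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def collect_coins(player, coins, score):
    # Remove every coin the player overlaps; add 1 to the score per coin.
    remaining = [c for c in coins if not aabb_overlap(player, c)]
    score += len(coins) - len(remaining)
    return remaining, score
```

In SpriteKit you would instead set categoryBitMask/contactTestBitMask on the physics bodies and do the removal and scoring in the contact callback, but the overlap logic is the same idea.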
More info: the game is apparently a 2D endless runner.
So you could set up a 2d matrix that acts as a queue. Say your board is 3 high so your character can walk, jump, or high jump.
highjump
jump
walk
This would be a simple matrix, like a 10 x 3 (x, y) board.
With each frame the board removes the front column and adds a new column to the back (think of the front of the matrix as the left side and the back as the right side). When adding to the back, you randomly decide which box to put the coin in. In each frame the user must be in one of the 3 positions; if the player's position is the same as the object's, collect it and gain points. Or if the object is undesirable, lose a life and start the game over, etc.
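The column-queue idea above can be sketched as follows (a minimal illustration with assumed names; a real game would also scroll sprites, not just data):

```python
import random
from collections import deque

ROWS = 3   # walk = 0, jump = 1, high jump = 2
COLS = 10

def new_column():
    # One random row in the incoming column holds a coin.
    col = [None] * ROWS
    col[random.randrange(ROWS)] = "coin"
    return col

def step(board, player_row, score):
    # Advance one frame: drop the front column, append a fresh one,
    # then collect whatever now sits in the player's cell (front column).
    board.popleft()
    board.append(new_column())
    if board[0][player_row] == "coin":
        board[0][player_row] = None
        score += 1
    return score
```

The deque keeps both ends O(1), which matches the "remove front, add back" description.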
More links:
Information specific to endless runners here and another stack overflow question similar to yours here

How to find the normal vector of the contact point between two sprites

I am creating a game where the character can jump in the direction opposite to the surface he lands on. For example, if he is on the ground he can only jump up; if he is on a right edge he can only jump left, etc. Currently I have set up my SKNodes into leftWall, rightWall, bottomSurface, topSurface and have a huge if/else. This works, but it will become far more complicated as I add different surfaces with different angles.
I thought a better way to implement this would be to find the normal vector direction at the point of contact between the character sprite and a general wallNode sprites.
Can anyone help me determine A) the point of contact between the sprites, assuming the character has a circle physics body and the walls are always straight edges, and B) the normal at the contact point?
thank you!
I don't think the vector or angle of impact will do you much good, as your player's jump would likely follow an arc. As such, hitting the same surface at different points of the player's jump arc would yield different results.
I suggest you use unique category bit masks for your surfaces. A more advanced version of that would be to use only one category and store the angle value in the SKSpriteNode's userData dictionary.
Based on the contact data you can then set the correctly angled jump for your player.
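To directly address parts A and B of the question, for a circle against a straight edge the contact point is the closest point on the segment to the circle's center, and the normal points from that point toward the center. A minimal sketch (assumed names, plain 2D math, no SpriteKit):

```python
import math

def closest_point_on_segment(p, a, b):
    # Project p onto segment ab, clamping to the endpoints.
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))
    return (a[0] + t * abx, a[1] + t * aby)

def contact_normal(center, a, b):
    # The normal points from the wall toward the circle's center,
    # i.e. the direction the character should jump.
    cx, cy = closest_point_on_segment(center, a, b)
    nx, ny = center[0] - cx, center[1] - cy
    length = math.hypot(nx, ny)
    return (nx / length, ny / length)
```

This generalizes to angled surfaces with no extra cases, which is the whole point of replacing the if/else chain.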

Surface Detection in 2d Game?

I'm working on a 2D Platform game, and I was wondering what's the best (performance-wise) way to implement Surface (Collision) Detection.
So far I'm thinking of constructing a list of level objects, each built from a list of lines, and drawing tiles along the lines.
alt text http://img375.imageshack.us/img375/1704/lines.png
I'm thinking every object holds the ID of the surface that he walks on, in order to easily manipulate his y position while walking up/downhill.
Something like this:
// Player/MovableObject class
void MoveLeft()
{
    // Snap the Y position to the surface the object is registered on.
    this.Position.Y = Helper.GetSurfaceById(this.SurfaceId).GetYWhenXIs(this.Position.X);
}
So the logic I use to detect "dropping onto / walking on a surface" is a simple point (the player's lower legs)-touches-line (surface) check, with some safety margin (let's say 1-2 pixels over the line).
Is this approach OK?
I've been having difficulty finding reading material for this problem, so feel free to drop links/advice.
Having worked with polygon-based 2D platformers for a long time, let me give you some advice:
Make a tile-based platformer.
Now, to directly answer your question about collision-detection:
You need to make your world geometry "solid" (you can get away with making your player object a point, but making it solid is better). By "solid" I mean - you need to detect if the player object is intersecting your world geometry.
I've tried "does the player cross the edge of this world geometry" and in practice it doesn't work (even though it might seem to work on paper - floating-point precision issues will not be your only problem).
There are lots of instructions online on how to do intersection tests between various shapes. If you're just starting out I recommend using Axis-Aligned Bounding Boxes (AABBs).
It is much, much, much, much, much easier to make a tile-based platformer than one with arbitrary geometry. So start with tiles, detect intersections with AABBs, and then once you get that working you can add other shapes (such as slopes).
Once you detect an intersection, you have to perform collision response. Again, a tile-based platformer is easiest: move the player just outside the tile that was collided with (do you move it above the tile, or to the side? That will depend on the collision - I will leave how to decide as an exercise).
(PS: you can get terrific results with just square tiles - look at Knytt Stories, for example.)
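A minimal sketch of that response step for axis-aligned tiles (a common "minimum penetration axis" approach, not necessarily what any particular engine does; boxes are (x, y, w, h), the boxes are assumed to already overlap, and y grows downward):

```python
def resolve_tile_collision(player, tile):
    # Push the player out of the tile along the axis of least penetration.
    px, py, pw, ph = player
    tx, ty, tw, th = tile
    # Penetration depth on each axis (valid because the boxes overlap).
    dx = min(px + pw - tx, tx + tw - px)
    dy = min(py + ph - ty, ty + th - py)
    if dx < dy:
        # Shallower horizontally: push out toward the side the player is on.
        px += -dx if px < tx else dx
    else:
        # Shallower vertically: push up if the player is above, else down.
        py += -dy if py < ty else dy
    return (px, py, pw, ph)
```

Picking the shallower axis is what prevents the "landing on top pushes you sideways" bug described in a later answer.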
Check out how it is done in XNA's Platformer Starter Kit project. Basically, the tiles have an enum for determining whether the tile is passable, impassable, etc.; then in your level you call GetBounds on the tiles, check for intersections with the player, and determine what to do.
I've had wonderful fun times dealing with 2D collision detection. What seems like a simple problem can easily become a nightmare if you do not plan it out in advance.
The best way to do this in an OO sense would be to make a generic object, e.g. class MapObject. This has a position coordinate and a slope. From this, you can extend it to include other shapes, etc.
From that, let's work with collisions with a Solid object. Assuming just a block, say 32x32, you can hit it from the left, right, top and bottom. Or, depending on how you code it, hit it from the top and from the left at the same time. So how do you determine which way the character should go? For instance, if the character hits the block from the top, in order to stand on it, incorrectly written code might inadvertently push the character off to the side instead.
So, what should you do? For my 2D game, I looked at the character's prior position before deciding how to react to the collision. If the character's Y position + height is above the block and the character is moving west, I would check for the top collision first and then the left collision. However, if the character's Y position + height is below the top of the block, I would check the left collision.
Now let's say you have a block that has incline. The block is 32 pixels wide, 32 pixels tall at x=32, 0 pixels tall at x=0. With this, you MUST assume that the character can only hit and collide with this block from the top to stand on. With this block, you can return a FALSE collision if it is a left/right/bottom collision, but if it is a collision from the top, you can state that if the character is at X=0, return collision point Y=0. If X=16, Y=16 etc.
Of course, this is all relative. You'll be checking against multiple blocks, so what you should do is store all of the possible changes to the character's position in a temporary variable. So, if the character overlaps a block by 5 in the X direction, subtract 5 from that variable. Accumulate all of the possible changes in the X and Y directions, apply them to the character's current position, and reset them to 0 for the next frame.
Good luck. I could provide more samples later, but I'm on my Mac (my code is on a Windows PC). This is the same type of collision detection used in the classic Mega Man games, IIRC. Here's a video of this in action too: http://www.youtube.com/watch?v=uKQM8vCNUTM
You can try using one of the physics engines, like Box2D or Chipmunk. They have their own advanced collision detection systems and a lot of other bonuses. Of course they won't speed your game up, but they are suitable for most games on any modern device.
It is not that easy to create your own collision detection algorithm. One simple example of a difficulty: what if your character is moving at a high enough velocity that between two frames it travels from one side of a line to the other? Then your algorithm won't have had a chance to run in between, and the collision will never be detected.
I would agree with Tiendil: use a library!
I'd recommend Farseer Physics. It's a great and powerful physics engine that should be able to take care of anything you need!
I would do it this way:
Strictly no lines for collision. Only solid shapes (boxes and triangles, maybe circles).
2D BSP or 2D space partitioning to store all level shapes, OR a "sweep and prune" algorithm. Each of those is very powerful. Sweep and prune, combined with insertion sort, can easily handle thousands of potentially colliding objects (if not hundreds of thousands), and 2D space partitioning will let you quickly get all nearby potentially colliding shapes on demand.
The easiest way to make objects walk on surfaces is to make them fall down a few pixels every frame, then get the list of surfaces the object collides with, and move the object in the direction of the surface normal (in 2D, the perpendicular to the surface). This approach will cause objects to slide down non-horizontal surfaces, but you can fix that by altering the normal slightly.
Also, you'll have to run the collision detection and "push objects away" routine several times per frame, not just once, to handle situations where objects are in a heap or contact multiple surfaces.
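The sweep and prune idea mentioned above can be sketched like this (a simplified single-axis version with assumed names; production versions keep the sorted list incrementally between frames, which is where insertion sort pays off):

```python
def sweep_and_prune(boxes):
    # boxes: list of (id, min_x, max_x, min_y, max_y) tuples.
    # Sort by min_x, then only compare boxes whose x-intervals overlap.
    pairs = []
    boxes = sorted(boxes, key=lambda b: b[1])
    for i, a in enumerate(boxes):
        for b in boxes[i + 1:]:
            if b[1] > a[2]:
                break  # everything after b starts past a's right edge
            if a[3] < b[4] and b[3] < a[4]:  # y-intervals overlap too
                pairs.append((a[0], b[0]))
    return pairs
```

Because frame-to-frame movement is small, the list stays nearly sorted, so re-sorting is close to O(n) and most pairs are pruned by the early break.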
I have used a limited collision detection approach that worked on a very different basis, so I'll throw it out here in case it helps:
Use a secondary image that's black and white, where impassable pixels are white. Construct a mask of the character that is simply the set of pixels it currently covers. To evaluate a prospective move, read the pixels of that mask from the secondary image and see if a white one comes back.
To detect collisions with other objects, use the same sort of approach, but instead of booleans use enough bit depth to cover all possible objects. Draw each object into the secondary image entirely in the "color" of its object number. When you read through the mask and get a non-zero pixel, the "color" is the number of the object you hit.
This resolves all possible collisions in O(n) time rather than the O(n^2) of calculating interactions.
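A sketch of that mask lookup (assumed representation: the "secondary image" is a 2D array of object numbers, 0 meaning passable):

```python
def move_blocked(mask, world, dx, dy, width, height):
    # mask: set of (x, y) pixels currently occupied by the character.
    # world: 2D array where 0 = passable and nonzero = an object number.
    # Returns the object number hit at the prospective position, or 0.
    for x, y in mask:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height and world[ny][nx]:
            return world[ny][nx]
    return 0
```

Each prospective move costs one lookup per mask pixel, independent of how many objects exist, which is the O(n) claim above.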

Image Processing: What are occlusions?

I'm developing an image processing project and I keep coming across the word occlusion in many scientific papers. What does occlusion mean in the context of image processing? The dictionary only gives a general definition. Can anyone describe it using an image as context?
Occlusion means that there is something you want to see, but can't due to some property of your sensor setup, or some event.
Exactly how it manifests itself or how you deal with the problem will vary due to the problem at hand.
Some examples:
If you are developing a system which tracks objects (people, cars, ...) then occlusion occurs if an object you are tracking is hidden (occluded) by another object. Like two persons walking past each other, or a car that drives under a bridge.
The problem in this case is what you do when an object disappears and reappears again.
If you are using a range camera, then occlusion is areas where you do not have any information. Some laser range cameras work by transmitting a laser beam onto the surface you are examining and then having a camera setup that identifies the point of impact of that laser in the resulting image. That gives the 3D coordinates of that point. However, since the camera and the laser are not necessarily aligned, there can be points on the examined surface which the camera can see but the laser cannot hit (occlusion).
The problem here is more a matter of sensor setup.
The same can occur in stereo imaging if there are parts of the scene which are only seen by one of the two cameras. No range data can obviously be collected from these points.
There are probably more examples.
If you specify your problem, then maybe we can define what occlusion is in that case and what problems it entails.
The problem of occlusion is one of the main reasons why computer vision is hard in general. Specifically, this is much more problematic in Object Tracking. See the below figures:
Notice, how the lady's face is not completely visible in frames 0519 & 0835 as opposed to the face in frame 0005.
And here's one more picture where the face of the man is partially hidden in all three frames.
Notice in the below image how the tracking of the couple in red & green bounding box is lost in the middle frame due to occlusion (i.e. partially hidden by another person in front of them) but correctly tracked in the last frame when they become (almost) completely visible.
Picture courtesy: Stanford, USC
Occlusion is whatever blocks our view. In the image shown here, we can easily see the people in the front row, but the second row is only partly visible, and the third row is much less visible. Here we say that the second row is partly occluded by the first row, and the third row is occluded by the first and second rows.
We see such occlusions in classrooms (students sitting in rows), at traffic junctions (vehicles waiting for the signal), in forests (trees and plants), etc., whenever there are a lot of objects.
In addition to what has been said, I want to add the following:
For object tracking, an essential part of dealing with occlusions is writing an efficient cost function that can discriminate between the occluded object and the object that is occluding it. If the cost function is not good, the object instances (IDs) may swap and the object will be incorrectly tracked. There are numerous ways in which cost functions can be written; some methods use CNNs [1], while some prefer more control and aggregate features [2]. The disadvantage of CNN models is that if you are tracking objects that are in the training set in the presence of objects which are not, and the first ones get occluded, the tracker can latch onto the wrong object and may never recover. Here is a video showing this. The disadvantage of aggregated features is that you have to manually engineer the cost function, which can take time and sometimes knowledge of advanced mathematics.
In the case of dense stereo vision reconstruction, occlusion happens when a region is seen by the left camera but not by the right (or vice versa). In the disparity map this occluded region appears black, because the corresponding pixels in that region have no equivalent in the other image. Some techniques use so-called background-filling algorithms, which fill the occluded black region with pixels coming from the background. Other reconstruction methods simply leave those pixels without values in the disparity map, because pixels produced by background filling may be incorrect in those regions. Below you can see the 3D projected points obtained using a dense stereo method; the points were rotated a bit to the right (in 3D space). In the presented scenario the occluded values in the disparity map are left unreconstructed (black), and for this reason we see that black "shadow" behind the person in the 3D image.
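A much-simplified sketch of background filling on one scanline of a disparity map (assumed convention: 0 marks an occluded pixel; real implementations typically pick the smaller, i.e. farther, of the valid disparities on either side of the hole):

```python
def fill_occlusions(disparity_row):
    # Replace occluded (zero) disparities with the last valid value seen
    # to their left, which usually belongs to the background surface.
    filled = list(disparity_row)
    last_valid = 0
    for i, d in enumerate(filled):
        if d == 0:
            filled[i] = last_valid
        else:
            last_valid = d
    return filled
```

As the answer notes, such filled values are guesses, which is why some pipelines prefer to leave the holes empty.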
As the other answers have explained occlusion well, I will only add to that. Basically, there is a semantic gap between us and computers.
A computer actually sees every image as a sequence of values, typically in the range 0-255 for every color channel in an RGB image. These values are indexed by (row, col) for every point in the image. So if an object changes its position with respect to the camera such that some part of it becomes hidden (say, a person's hands are no longer visible), the computer sees different numbers (or edges, or any other features), which makes it harder for the algorithm to detect, recognize, or track the object.