XNA Farseer: raycast is going through shapes

I'm making a game in XNA.
I'm doing a raycast from the enemy to the player to determine if the enemy can see the player.
Here's the code:
private float RayCallBack(Fixture fixture, Vector2 point, Vector2 normal, float fraction)
{
    rayhit = fixture.Body.UserData.ToString();
    if (fixture.Body.UserData == "player")
    {
        //AIawake = true;
    }
    return 0f;
}
_world.RayCast(RayCallBack, _blocklist[0]._floor.Position, ConvertUnits.ToSimUnits(playerpos));
My problem is that in the situation in the picture, where I have procedurally generated caves made out of blocks, the rays seem to go through the blocks, so the enemy can see through the walls.
--
UPDATE
OK, the following code works, but I have no idea why!
private float RayCallBack(Fixture fixture, Vector2 point, Vector2 normal, float fraction)
{
    rayhit = fixture.Body.UserData.ToString();
    if (fixture.Body.UserData == "player")
    {
        return fraction;
    }
    else
    {
        return 0f;
    }
}
Then, in a separate update method in this class, I have the code to awaken the enemy:
if (rayhit == "player") AIawake = true;
I obviously do not understand how RayCast and the callback work. If someone could explain why this method works, that'd be great. I am planning on doing a lot more raycasting to stop the enemies crashing into things and so on.

There are four scenarios, depending on what you return from your callback:
return -1: the fixture is ignored and the ray continues
return 0: the ray terminates
return fraction: the ray is clipped so that it ends at this hit point
return 1: the ray continues to its full length
The first scenario is used when you want to ignore certain fixtures.
The second scenario is used when you want to know whether your ray hit anything at all (not necessarily the closest object).
The third scenario is used to find the nearest object.
And the fourth scenario is used when you wish to know about all objects in the ray's path.
In general, your callback logic will have a part that checks for fixtures that should be ignored (returning -1 for them), followed by a block that returns 0, fraction, or 1.
The difference between the second and third scenarios is possibly the hardest to understand. The way I understand it: if you return 0, none of the other potential hits are evaluated. But if you return fraction (which clips the ray), other potential hits within the new, clipped ray length are still evaluated. In other words, your callback is executed for all of them, and the ray may be clipped even further, which eventually leaves you with the nearest object.
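Returning fraction converges on the nearest hit because each reported hit clips the ray before later candidates are tested. Here is a toy Python model of that contract (this sketches the Box2D/Farseer callback semantics, not the actual Farseer API; the fixture names and fractions are invented):

```python
def ray_cast(callback, hits):
    """hits: list of (name, fraction) pairs, reported in arbitrary order."""
    max_fraction = 1.0
    for name, fraction in hits:
        if fraction > max_fraction:
            continue  # this fixture lies beyond the (possibly clipped) ray
        max_fraction = callback(name, fraction)
        if max_fraction == 0.0:
            break  # returning 0 terminates the ray

def make_closest_hit_callback():
    closest = {"name": None, "fraction": 1.0}
    def callback(name, fraction):
        closest["name"], closest["fraction"] = name, fraction
        return fraction  # clip the ray at this hit
    return callback, closest

callback, closest = make_closest_hit_callback()
# Wall at 40% of the ray length, player at 90%, reported player-first.
ray_cast(callback, [("player", 0.9), ("wall", 0.4)])
print(closest["name"])  # wall: the nearest fixture wins despite the ordering
```

Even though the player is reported first, returning its fraction merely clips the ray to 90%; the wall at 40% is still inside the clipped ray, gets reported, and clips it again, so the nearest fixture is what remains at the end.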
See the Box2D documentation about ray casting.
One thing that documentation does say is that the order of evaluation is not guaranteed. In your case it may be that the player was being evaluated before the wall blocks.
tl;dr: return fraction, as you did in your second code sample.

Related

Interpolating and predicting CLLocationManager

I need to get an updated user location at a rate of at least 10 Hz to animate the location smoothly in MapBox for iOS while driving. Since Core Location only provides one point every second, I believe I need to do some prediction.
I have tried ikalman, but it doesn't seem to make any difference when updated once a second and queried at 10 Hz.
How do I tackle this, please?
What you're looking for is extrapolation, not interpolation.
I'm really, really surprised that there are so few resources on extrapolation on the internet. If you want to know more, you should read a numerical methods/math book and implement the algorithm yourself.
Maybe simple linear extrapolation will suffice?
// You need the two most recent points to extrapolate.
// X is time; Y is either longitude or latitude.
- (double)getExtrapolatedValueAt:(double)x withPointA:(Point *)A andPointB:(Point *)B
{
    return A.y + (x - A.x) / (B.x - A.x) * (B.y - A.y);
}

- (Coord *)getExtrapolatedPointAtTime:(double)X fromLatitudeA:(Point *)latA andLatitudeB:(Point *)latB andLongitudeA:(Point *)longA andLongitudeB:(Point *)longB
{
    double extrapolatedLatitude = [self getExtrapolatedValueAt:X withPointA:latA andPointB:latB];
    double extrapolatedLongitude = [self getExtrapolatedValueAt:X withPointA:longA andPointB:longB];
    Coord *extrapolatedPoint = [Coord new];
    extrapolatedPoint.longitude = extrapolatedLongitude;
    extrapolatedPoint.latitude = extrapolatedLatitude;
    return extrapolatedPoint;
}
I'm not sure if I got the function exactly right, but you can check here:
http://en.wikipedia.org/wiki/Extrapolation
It's really easy.
You should implement the linear extrapolation.
If you find out that linear extrapolation isn't enough (for curves, for example), you should just iterate and replace it with some other extrapolation algorithm.
Another approach would be to have a 1 sec delay in animation and animate between two known points using interpolation. I don't know if that's acceptable for your use case.
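The linear extrapolation above can also be sketched in a few lines of Python (the times and coordinates here are invented for illustration; x is time, y is latitude or longitude):

```python
def extrapolate(x, a, b):
    """Linearly extrapolate the value at time x from the two most
    recent fixes a and b, each given as a (time, value) tuple."""
    ax, ay = a
    bx, by = b
    return ay + (x - ax) / (bx - ax) * (by - ay)

# Two fixes one second apart; predict half a second past the last fix.
lat = extrapolate(2.5, (1.0, 48.100), (2.0, 48.104))
print(round(lat, 6))  # 48.106
```

Calling this at 10 Hz with the two latest GPS fixes gives an intermediate predicted position between the one-second Core Location updates.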
This problem is typically solved with something called "Dead Reckoning". And you're right on track with trying to use a Kalman filter for doing this. If iKalman isn't working for you, you can try to resort to a simpler approach.
There's a lot of this sort of problem solving when dealing with games and network latency, so you can likely reuse an algorithm developed for this purpose.
This seems like a pretty thorough example.
The wiki on Kalman filters may help out as well.
I ended up solving this by using long UIView animations instead (2-3 seconds) with easing that start from the current state. This gives the impression of smooth position and heading following "for free".

How to optimize math with 2D vectors?

I’m working on iOS app that performs some calculations with an array of thousand objects. The objects have properties with x and y coordinates, velocity for x and y axises and couple other x-y properties. There is some math to calculate an interaction between physical objects represented by objects in the array. The math is pretty straight forward, basically it is calculation of the forces applied to the objects, speed and change in position (x,y) for each objects. I wrote a code using regular scalar math in Objective C, it worked fine on iPhone 5s, however too slow on other devices like iPhone 4, 4s, 5 and iPad mini. I found that the most time consuming operations were the calculations of the distance between 2 points and calculations of the length of a vector like shown below which involves taking a square root:
float speed = sqrtf(powf(self.dynamic.speed_x, 2) + powf(self.dynamic.speed_y, 2));
So I had to do something to make the calculations quicker. I rewrote the code so that properties such as the objects' coordinates and velocity, previously represented by X and Y components, became vectors of GLKVector2 type. I was hoping this would make calculations such as the distance between two vectors (or points), and vector addition and subtraction, significantly faster, thanks to special vector functions like GLKVector2Distance, GLKVector2Normalize, GLKVector2Add, etc. However, it didn't help much in terms of performance because, as I believe, to put an object with GLKVector2 properties into the array I had to use NSValue, and likewise to decode the GLKVector2 values back out of the array to perform vector calculations. Below is the code from the calculation method in the object's implementation:
GLKVector2 currentPosition;
[self.currentPosition getValue:&currentPosition];
GLKVector2 newPosition;
// calculations with vectors. Result is in newPosition.
self.currentPosition = [NSValue value:&newPosition withObjCType:@encode(GLKVector2)];
Moreover, when I rewrote the code to use GLKVector2, I got memory warnings and after some time of running the applications sometimes crashes.
I spent several days trying to find a better way to do the calculations faster. I looked at vecLib and OpenGL, but have not found a solution that was understandable to me. I have a feeling that I might have to write code in C and integrate it somehow into Objective-C, but I don't understand how to integrate it with the array of objects without using NSValue thousands of times.
I would greatly appreciate any advice on which direction I should look. Maybe there is a library available that can easily be used in Objective-C with groups of objects stored in arrays?
Learn how to use Instruments. Any performance optimisation is totally pointless unless you measure speed before and after your changes. An array of 1000 objects is nothing, so you might be worrying about nothing, or the slowdowns might be in a totally different place. Use Instruments.
x * x is a multiplication. powf(x, 2.0) is an expensive function call that probably takes anywhere between 20 and 100 times longer.
GLKVector2 is a primitive (it is a union). Stashing it into an NSValue is totally pointless and probably wastes 100 times more time than just storing it directly.
Here is an answer to your question about integrating physics calculations in C with your current Objective-C class. Suppose you have a fragment of C code in your .m file which may look like
static CGPoint point[MAX_OBJ];
static int n_points = 0;
with a corresponding function in plain C, as you suggested, for simulating physical interactions that acts on point to update object positions, as in
void tick_world() {
    for (int k = 0; k < n_points; k++) {
        float speed = sqrtf((point[k].x * point[k].x) + (point[k].y * point[k].y));
        ...
    }
}
then, your Objective-C class Moving for the moving object could contain a pointer to a particular CGPoint in point that you would define in the interface (probably the corresponding *.h file):
@interface Moving : NSObject {
    ...
    CGPoint *pos;
}
When handling the init message, you can then grab and initialize the next available element in point. If your objects persist throughout run time, this can be done very simply by
@implementation
...
- (id)initAtX:(float)x0 Y:(float)y0 {
    self = [super init];
    if (self) {
        if (n_points == MAX_OBJ) {
            [self release];
            return nil;
        }
        pos = point + n_points++;
        pos->x = x0;
        pos->y = y0;
    }
    return self;
}
If your Moving objects do not persist, you might want to think of a smart way to recycle slots after destruction. For example, you could initialize the x of every point with NAN and use this as a way to locate a free slot. In your dealloc, you would then set pos->x = NAN.
It sounds like you're up against a couple of common problems: 1) fast math in a high-level language, and 2) a meta-problem: whether to get the benefit of others' work (OpenGL, and several ideas listed here) in exchange for a steep learning curve as the developer.
Especially for this subject matter, I think the trade is pretty good in favor of using a library. For many (e.g. Eigen), most of the learning curve is about integrating Objective-C and C++, which you can quickly put behind you.
(As an aside, computing the square of the distance between objects is often sufficient for making comparisons. If that works in your app, you can save cycles by replacing distance(a,b) with distanceSquared(a,b).)
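The squared-distance trick works because squaring preserves the ordering of non-negative distances, so comparisons never need the square root. A quick language-agnostic sketch (plain Python, made-up coordinates):

```python
def distance_squared(a, b):
    """Squared Euclidean distance between 2D points a and b: no sqrt."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    return dx * dx + dy * dy

player = (0.0, 0.0)
enemy_a = (3.0, 4.0)   # true distance 5
enemy_b = (6.0, 8.0)   # true distance 10
# Same ordering as comparing the real distances, without any sqrt call.
print(distance_squared(player, enemy_a) < distance_squared(player, enemy_b))  # True
```

Note that dx * dx is used rather than a pow-style call, echoing the point above about powf.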

Check if user is near route checkpoint with GPS

Here's the situation:
I have a predetermined GPS route that the user will run. The route has some checkpoints and the user should pass near all of them (think of them as a racing game checkpoint, that prevents the user from taking shortcuts). I need to ensure that the user passes through all the checkpoints.
I want to determine an area that will be considered inside a checkpoint's radius, but I don't want it to be just a radial area, it should be an area taking into consideration the form of the path.
Didn't understand it? Neither did I. Look at this poorly drawn image to understand it better:
The black lines represents the pre-determined path, the blue ball is the checkpoint and the blue polygon is the wanted area. The green line is a more precise user, and the red line is a less accurate user (a drunk guy driving maybe? lol). Both lines should be inside the polygon, but a user who skips totally the route shouldn't.
I already saw somewhere here a function to check if the user is inside a polygon like this, but I need to know how to calculate the polygon.
Any suggestions?
EDIT:
I'm considering using the simple distanceTo() function to just draw an imaginary circle and check if the user is there. That's good because it is so much simpler to implement and understand, and bad because, to make sure the most erratic user passes within the checkpoint, I would need a big radius, making the accurate user enter the checkpoint area sooner than expected.
And just so you guys understand the situation better, this is for an app that is supposed to be used in traffic (car or bus), and the checkpoints should be landmarks or spots that divides your route, for example, somewhere where traffic jam starts or stops.
You could just check the distance between the two, assuming you know the GeoLocation of the checkpoint.
Use the distanceTo function and setup a threshold of however many meters the user needs to be from the checkpoint to continue on.
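That radius check can be sketched in a few lines (Python for illustration; the checkpoint coordinates, the 50 m threshold, and the equirectangular distance helper are all assumptions here; on Android, Location.distanceTo plays the role of the helper):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation: accurate enough at checkpoint scale."""
    r = 6371000.0  # mean Earth radius in metres
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

CHECKPOINT = (48.8584, 2.2945)  # hypothetical checkpoint
THRESHOLD_M = 50.0              # how close counts as "at the checkpoint"

def at_checkpoint(lat, lon):
    return distance_m(lat, lon, *CHECKPOINT) <= THRESHOLD_M

print(at_checkpoint(48.8585, 2.2946))  # True: a few metres away
print(at_checkpoint(48.8600, 2.3000))  # False: well outside the radius
```

This is exactly the trade-off described in the question: a single threshold is trivial to implement, but it has to be large enough to catch inaccurate users.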
Edit
Since you want to avoid distanceTo, here is a small function I wrote a while back to check if a point is in a polygon:
public boolean PIP(Point point, List<Point> polygon) {
    boolean nodepolarity = false;
    int sides = polygon.size();
    int j = sides - 1;
    for (int i = 0; i < sides; i++) {
        if ((polygon.get(i).y < point.y && polygon.get(j).y >= point.y)
                || (polygon.get(j).y < point.y && polygon.get(i).y >= point.y)) {
            if (polygon.get(i).x + (point.y - polygon.get(i).y) / (polygon.get(j).y - polygon.get(i).y)
                    * (polygon.get(j).x - polygon.get(i).x) < point.x) {
                nodepolarity = !nodepolarity;
            }
        }
        j = i;
    }
    return nodepolarity; // FALSE = outside, TRUE = inside
}
List<Point> polygon is a list of the points that make up a polygon.
This uses the Ray casting algorithm to determine how many intersections a ray makes through the polygon.
All you would need to do is create the 'boundary' around the area you need with GeoPoints being translated into pixels using the toPixels method.
Store those points into a List<> of points, and you should be all set.
Check a few algorithms for doing this at the link below:
http://geospatialpython.com/2011/01/point-in-polygon.html
I know this is an old question, but maybe it would be useful for someone.
This is a simpler method, with much less computation needed. It would not trigger the first time the user comes inside a threshold area; instead, it registers the closest point at which the user passed near the checkpoint AND came close enough.
The idea is to maintain a 3 item list of distances for every checkpoint, with the last three distances in it (so it would be [d(t), d(t-1), d(t-2)]). This list should be rotated on every distance calculation.
If on any distance calculation the previous distance d(t-1) is smaller than both the current one d(t) and the preceding one d(t-2), then the moving point has passed the checkpoint. Whether this was a real pass or only a glitch can be decided by checking the actual distance d(t-1).
private long DISTANCE_THRESHOLD = 2000;

private Checkpoint calculateCheckpoint(Map<Checkpoint, List<Double>> checkpointDistances)
{
    Map<Checkpoint, Double> candidates = new LinkedHashMap<Checkpoint, Double>();
    for (Checkpoint checkpoint : checkpointDistances.keySet())
    {
        List<Double> distances = checkpointDistances.get(checkpoint);
        if (distances == null || distances.size() < 3)
            continue;
        if (distances.get(0) > distances.get(1) && distances.get(1) < distances.get(2)
                && distances.get(1) < DISTANCE_THRESHOLD) // TODO: make this depend on current speed
            candidates.put(checkpoint, distances.get(1));
    }
    List<Entry<Checkpoint, Double>> list = new LinkedList<Entry<Checkpoint, Double>>(candidates.entrySet());
    Collections.sort(list, comp);
    if (list.size() > 0)
        return list.get(0).getKey();
    else
        return null;
}

Comparator<Entry<Checkpoint, Double>> comp = new Comparator<Entry<Checkpoint, Double>>()
{
    @Override
    public int compare(Entry<Checkpoint, Double> o1, Entry<Checkpoint, Double> o2)
    {
        return o1.getValue().compareTo(o2.getValue());
    }
};
The function gets one parameter - a Map<Checkpoint, List<Double>> with the checkpoints and the list of the last three distances. It outputs the closest Checkpoint passed or null (if there were none).
The DISTANCE_THRESHOLD should be chosen wisely.
The Comparator is just to be able to sort the Checkpoints based on their distance to the user to get the closest one.
Naturally this has some minor flaws: e.g. if the moving point moves criss-cross, or if the error from GPS precision is comparable to the actual speed of the user, then this would give multiple pass marks, but that would trip up almost any algorithm.
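The local-minimum test at the core of the code above can be sketched in a few lines (Python for brevity; the threshold and the sample distances are invented):

```python
DISTANCE_THRESHOLD = 2000.0  # metres; tune for your use case

def passed_checkpoint(distances):
    """distances = [d(t), d(t-1), d(t-2)], newest first.
    A pass is a local minimum d(t-1) that is also under the threshold."""
    if len(distances) < 3:
        return False
    d_now, d_prev, d_before = distances
    return d_prev < d_now and d_prev < d_before and d_prev < DISTANCE_THRESHOLD

print(passed_checkpoint([120.0, 35.0, 90.0]))  # True: local minimum under threshold
print(passed_checkpoint([30.0, 35.0, 90.0]))   # False: still approaching
```

Rotating the three-item distance list on every GPS update and running this test per checkpoint reproduces the candidate-selection step of the Java function above.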

How do I detect a collision with a polyline in the Lua LÖVE engine?

I am using the Lua LÖVE engine to make a simple game. However, I'm having a little trouble with collision.
I have a polyline between a set of points (representing the rocky ground) and a box that needs to collide with it, but I can't think of an easy implementation.
love.graphics.line( 0,60, 100,70, 150,300, 200,230, 250,230, 300,280, 350,220, 400,220, 420,150, 440,140, 480,340 )
If anyone could help with code snippets or advice, it would be much appreciated.
You need to create a more abstract representation of the ground, from which you can generate both the line for the graphics, and the body for the physics.
So for example:
--ground represented as a set of points
local ground = {0,60, 100,70, 150,300, 200,230, 250,230, 300,280, 350,220, 400,220, 420,150, 440,140, 480,340}
This may not be the best representation, but I will continue with my example.
You now have a representation of the ground; next you need a way of converting it into something that your physics can understand (to do the collision) and something that your graphical display can understand (to draw the ground). So you declare two functions:
--converts a table representing the ground into something that
--can be understood by your physics.
--If you are using love.physics, this would create
--a Body with a set of PolygonShapes attached to it.
local function createGroundBody(ground)
    --you may need to pass some additional arguments,
    --such as the World in which you want to create the ground.
end

--converts a table representing the ground into something
--that you are able to draw.
local function createGroundGraphic(ground)
    --ground is already represented in a way that love.graphics.line can handle,
    --so just pass it through.
    return ground
end
Now, putting it all together:
local groundGraphic = nil
local groundPhysics = nil

function love.load()
    groundGraphic = createGroundGraphic(ground)
    physics = makePhysicsWorld()
    groundPhysics = createGroundBody(ground)
end

function love.draw()
    --Draws groundGraphic; could be implemented as
    --love.graphics.line(groundGraphic), for example.
    draw(groundGraphic)
    --You also need to draw the box and everything else here.
end

function love.update(dt)
    --where `box` is the box you have to collide with the ground
    runPhysics(dt, groundPhysics, box)
    --Now you need to update the representation of your box graphic
    --to match the new position of the box. This could be done implicitly
    --by using the physics representation when drawing the box.
end
Without knowing exactly how you want to implement the physics and the drawing, it is hard to give more detail than this. I hope that you use this as an example of how this code could be structured. The code that I have presented is only a very approximate structure, so it will not come close to actually working. Please ask about anything that I have been unclear about!
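The answer above deliberately leaves the collision test itself to the physics engine. If you wanted to roll it yourself, one common approach is to clip each segment of the polyline against the box (a slab test). Here is a language-agnostic sketch in Python rather than Lua, using the same flat point list as the love.graphics.line call; the box coordinates are invented:

```python
def segment_intersects_aabb(p1, p2, box_min, box_max):
    """Slab test: clip segment p1->p2 against the box on each axis."""
    t0, t1 = 0.0, 1.0
    for axis in (0, 1):
        d = p2[axis] - p1[axis]
        if abs(d) < 1e-12:
            # Segment is parallel to this axis: must already lie in the slab.
            if p1[axis] < box_min[axis] or p1[axis] > box_max[axis]:
                return False
        else:
            ta = (box_min[axis] - p1[axis]) / d
            tb = (box_max[axis] - p1[axis]) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:
                return False  # the clipped interval is empty: no overlap
    return True

def box_hits_ground(box_min, box_max, ground):
    """ground is a flat list {x1,y1, x2,y2, ...} like the line call uses."""
    pts = list(zip(ground[0::2], ground[1::2]))
    return any(segment_intersects_aabb(a, b, box_min, box_max)
               for a, b in zip(pts, pts[1:]))

ground = [0, 60, 100, 70, 150, 300, 200, 230]
print(box_hits_ground((90, 60), (110, 80), ground))  # True: box straddles the line
print(box_hits_ground((0, 0), (10, 10), ground))     # False: box is above the ground
```

In LÖVE itself you would normally let love.physics do this, but the same per-segment idea underlies most polyline-vs-box tests.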

Are the key frames in a CAKeyframeAnimation always hit exactly?

Can anyone tell me if the key frames in a CAKeyframeAnimation are always guaranteed to be hit with their exact values when the animation runs? Or... do they only act as interpolation guides? e.g. If I specify, say, 3 points on a path for some arbitrary property to follow - let's call it 'position' - and I specify an execution time of 0.3f seconds, whilst (obviously) points 1 and 3 must be hit (as they are the terminal points) can I guarantee that point 2 will be evaluated exactly as specified in the key frame array? Surprisingly, I haven't found a single document that gives an adequate answer. I ask this because I'm writing an OpenAL sound-effect synchroniser that uses a keyframe animation's path to trigger various short sounds along its length and whilst most of them get executed, now and again a few don't and I don't know if it's my logic that's wrong or my code.
Thanks in advance.
In general, relying on the "exactness" of a floating-point value that is the result of a calculation is fraught with danger. So for example the following code:
CGFloat x1 = some_function();
CGFloat x2 = some_other_function();
if (x1 == x2)
{
    // do something
}
without even knowing what the functions do, is most likely incorrect. Even if the functions do very similar calculations, the optimizer may have reordered operations, causing small rounding errors sufficient to make the equality test fail.
It should be:
CGFloat x1 = some_function();
CGFloat x2 = some_other_function();
CGFloat tolerance = 0.1; // or some tolerance suitable for the calculation
if (fabsf(x1 - x2) < tolerance)
{
    // do something
}
where tolerance is some value suitable for the calculation being performed.
So, without knowing the internals of CAKeyframeAnimation, I can tell you that any code that expects exact values is inherently fragile. This is not to say that you won't get exact values (you might), but it will depend a lot on the input data.
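The classic illustration of why exact equality fails (shown in Python here, but the same holds for CGFloat arithmetic):

```python
import math

a = 0.1 + 0.2  # not exactly 0.3 in binary floating point
b = 0.3

print(a == b)                               # False: the sum is 0.30000000000000004
print(math.isclose(a, b, abs_tol=1e-9))     # True: compare with a tolerance instead
```

This is the same fabsf-with-tolerance pattern as the Objective-C snippet above, which is why keyframe triggers should be tested against a window rather than an exact value.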
I hope this helps.
