Converting GPS coordinates to the OpenGL world in AR - iOS

I have a list of GPS coordinates (long, lat) and my current position (long, lat).
I found that by subtracting the two coordinates I get coordinates relative to my position, and those are the coordinates I use in my AR app to draw the POIs in the OpenGL world.
The problem is that far-away coordinates will still be too far away to "see", so I want an equation that pulls everything close to my position while preserving the original relative positions.
double kGpsToOpenglCoorRatio = 1000;

- (void)convertGlobalCoordinatesToLocalCoordinates:(double)latitude x_p:(double *)x_p longitude:(double)longitude y_p:(double *)y_p {
    *x_p = ((latitude - _userLocation.coordinate.latitude) * kGpsToOpenglCoorRatio);
    *y_p = ((longitude - _userLocation.coordinate.longitude) * kGpsToOpenglCoorRatio);
}
I tried applying a square root to impose a "distance limit", but the positions got scrambled relative to their original arrangement.

This might be because GPS uses a spherical(ish) coordinate system and you're trying to map it directly onto a Cartesian coordinate system (a plane).
What you could do instead is convert your GPS coordinates to a local reference plane rather than map them directly. If you treat your own location as the origin of the coordinate system, you can get the polar coordinates of the points on the ground plane, relative to the origin and true north, by computing the great-circle distance (r) and bearing (theta) between your location and the remote coordinate, and then convert that to Cartesian coordinates using (x, y) = (r*cos(theta), r*sin(theta)).
Better still for your case: once you have the great-circle bearing, you can simply foreshorten r (the distance). That will drag the points closer to you in both x and y, but they'll still be at the correct relative bearing; you'll just need to indicate the compression somehow.
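For example, here is a rough Python sketch of that distance/bearing approach (the formulas are the standard haversine/bearing ones from the page linked below; the function name and the simple clamping used for foreshortening are my own choices, not part of the original answer):

import math

EARTH_RADIUS_M = 6371000.0

def gps_to_local_xy(user_lat, user_lon, poi_lat, poi_lon, max_radius=50.0):
    phi1, phi2 = math.radians(user_lat), math.radians(poi_lat)
    dphi = math.radians(poi_lat - user_lat)
    dlmb = math.radians(poi_lon - user_lon)

    # great-circle (haversine) distance from the user to the POI, in metres
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    r = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # initial bearing from the user to the POI, clockwise from true north
    theta = math.atan2(math.sin(dlmb) * math.cos(phi2),
                       math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))

    # foreshorten r so every POI lands within max_radius of the origin while
    # keeping its bearing (any monotonic compression, e.g. log, works too)
    r = min(r, max_radius)

    # polar -> cartesian; here +x points north and +y points east
    return r * math.cos(theta), r * math.sin(theta)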
Another approach is to scale the size of the objects you're visualizing so that they get larger with distance to compensate for perspective. This way you can just directly use the correct position and orientation.
This page has the bearing/distance algorithms: http://www.movable-type.co.uk/scripts/latlong.html

I ended up solving it by intersecting the line from my position towards the GPS coordinate with the circle I want all the POIs to appear on, and it works perfectly. I didn't use bearings anywhere.
Here is the code if anyone is interested:
- (void)convertGlobalCoordinatesToLocalCoordinates:(double)latitude x_p:(double *)x_p longitude:(double)longitude y_p:(double *)y_p {
    double x = (latitude - _userLocation.coordinate.latitude) * kGpsToOpenglCoorRatio;
    double y = (longitude - _userLocation.coordinate.longitude) * kGpsToOpenglCoorRatio;

    // avoid division by zero below
    x = (x == 0 ? 0.0001 : x);
    y = (y == 0 ? 0.0001 : y);

    // project the point onto the circle x^2 + y^2 = kPoiRadius, preserving its direction
    double slope = x / ABS(y);
    double outY = sqrt(kPoiRadius / (1 + pow(slope, 2)));
    double outX = slope * outY;

    if (y < 0) {
        outY = -outY;
    }

    *x_p = outX;
    *y_p = outY;
}
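For reference, the same projection onto the POI circle takes only a few lines of Python; it's worth noting that kPoiRadius is effectively the squared display radius, since outX*outX + outY*outY works out to exactly kPoiRadius (the constant values below are placeholders, not from the original code):

import math

K_GPS_TO_OPENGL = 1000.0   # kGpsToOpenglCoorRatio
K_POI_RADIUS = 25.0        # kPoiRadius: squared radius of the circle the POIs sit on

def project_to_circle(user_lat, user_lon, poi_lat, poi_lon):
    x = (poi_lat - user_lat) * K_GPS_TO_OPENGL
    y = (poi_lon - user_lon) * K_GPS_TO_OPENGL
    x = x if x != 0 else 0.0001   # avoid division by zero
    y = y if y != 0 else 0.0001
    slope = x / abs(y)
    out_y = math.sqrt(K_POI_RADIUS / (1 + slope ** 2))
    out_x = slope * out_y
    return out_x, (-out_y if y < 0 else out_y)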

Related

How can I get the heading of the device with CMDeviceMotion in iOS 5

I'm developing an AR app using the gyro. I've used an Apple code sample, pARk. It uses the rotation matrix to calculate the position of the coordinate and it works really well, but now I'm trying to implement a "radar" and I need to rotate it according to the device heading. I'm using the CLLocationManager heading, but it's not correct.
The question is: how can I get the heading of the device using CMAttitude so that it reflects exactly what I see on the screen?
I'm new to rotation matrices and that kind of thing.
This is part of the code used to calculate the AR coordinates. The cameraTransform is updated with the attitude:
CMDeviceMotion *d = motionManager.deviceMotion;
if (d != nil) {
    CMRotationMatrix r = d.attitude.rotationMatrix;
    transformFromCMRotationMatrix(cameraTransform, &r);
    [self setNeedsDisplay];
}
and then in the drawRect code:
mat4f_t projectionCameraTransform;
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

int i = 0;
for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
    vec4f_t v;
    multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

    float x = (v[0] / v[3] + 1.0f) * 0.5f;
    float y = (v[1] / v[3] + 1.0f) * 0.5f;
I also rotate the view with the pitch angle.
The motion updates are started using the true-north reference frame:
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical];
So I think it must be possible to get the "roll"/heading of the device in any position (with any pitch and yaw...), but I don't know how.
There are a few ways to calculate a heading from the rotation matrix returned by CMDeviceMotion. This assumes you use the same convention as Apple's compass, where the +y direction (the top of the iPhone) pointing due north gives a heading of 0, and rotating the iPhone to the right increases the heading, so East is 90, South is 180, and so forth.
First, when you start updates, be sure to check that the reference frame you need is available:
if (([CMMotionManager availableAttitudeReferenceFrames] & CMAttitudeReferenceFrameXTrueNorthZVertical) != 0) {
...
}
Next, when you start the motion manager, ask for attitude as a rotation from X pointing true North (or Magnetic North if you need that for some reason):
[motionManager startDeviceMotionUpdatesUsingReferenceFrame: CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue: self.motionQueue
                                               withHandler: dmHandler];
When the motion manager reports a motion update, you want to find out how much the device has rotated in the X-Y plane. Since we are interested in the top of the iPhone, we'll pick a point in that direction and rotate it using the returned rotation matrix to get the point after rotation:
[m11 m12 m13] [0] [m12]
[m21 m22 m23] [1] = [m22]
[m31 m32 m33] [0] [m32]
The funky brackets are matrices; it's the best I can do using ASCII. :)
The heading is the angle between the rotated point and true North. We can use the X and Y coordinates of the rotated point to extract the arc tangent, which gives the angle between the point and the X axis. This is actually 180 degrees off from what we want, so we have to adjust accordingly. The resulting code looks like this:
CMDeviceMotionHandler dmHandler = ^(CMDeviceMotion *aMotion, NSError *error) {
    // Check for an error.
    if (error) {
        // Add error handling here.
    } else {
        // Get the rotation matrix.
        CMAttitude *attitude = self.motionManager.deviceMotion.attitude;
        CMRotationMatrix rm = attitude.rotationMatrix;

        // Get the heading.
        double heading = M_PI + atan2(rm.m22, rm.m12);
        heading = heading * 180.0 / M_PI;
        printf("Heading: %5.0f\n", heading);
    }
};
There is one gotcha: if the top of the iPhone is pointed straight up or straight down, the direction is undefined. The result is that m12 and m22 are zero, or very close to it. You need to decide what this means for your app and handle the condition accordingly. You might, for example, switch to a heading based on the -Z axis (behind the iPhone) when m12*m12 + m22*m22 is close to zero.
This all assumes you want the rotation in the X-Y plane, as Apple usually does for its compass. It works because you are using the rotation matrix returned by the motion manager to rotate a vector pointed along the Y axis, i.e. this column vector:
[0]
[1]
[0]
To rotate a different vector--say, one pointed along -Z--use a different column vector, like
[0]
[0]
[-1]
Of course, you also have to take the arc tangent in a different plane, so instead of
double heading = M_PI + atan2(rm.m22, rm.m12);
you would use
double heading = M_PI + atan2(-rm.m33, -rm.m13);
to get the rotation in the X-Z plane.
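If it helps to see the index juggling end to end, here is a rough Python sketch of the same calculation (a plain nested list stands in for CMRotationMatrix; the function name and the identity-matrix example are mine, not Apple API code):

import math

def heading_from_rotation_matrix(m):
    # m[row][col] mirrors CMRotationMatrix, so m[0][1] is m12 and m[1][1] is m22
    x, y = m[0][1], m[1][1]   # X and Y components of the rotated +Y axis (top of the phone)
    if x * x + y * y < 1e-6:
        # top of the phone points straight up or down: heading is undefined here,
        # so switch to another reference axis (e.g. -Z) as described above
        raise ValueError("heading undefined: device is pointing straight up or down")
    return math.degrees(math.pi + math.atan2(y, x))

# With the identity matrix the device axes coincide with the reference frame, i.e.
# +X points true north, so the top of the phone faces west and this prints 270.0:
print(heading_from_rotation_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))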

Use of maths in the Apple pARk sample code

I've studied the pARk example project (http://developer.apple.com/library/IOS/#samplecode/pARk/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011083) so I can apply some of its fundamentals in an app I'm working on. I understand nearly everything, except:
The way it decides whether a point of interest should appear or not. It gets the attitude, multiplies it with the projection matrix (to get the rotation in GL coordinates?), then multiplies that matrix with the coordinates of the point of interest and, finally, looks at the last coordinate of that vector to find out whether the point of interest should be shown. What are the mathematical fundamentals of this?
Thanks a lot!!
I assume you are referring to the following method:
- (void)drawRect:(CGRect)rect
{
    if (placesOfInterestCoordinates == nil) {
        return;
    }

    mat4f_t projectionCameraTransform;
    multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

    int i = 0;
    for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
        vec4f_t v;
        multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);

        float x = (v[0] / v[3] + 1.0f) * 0.5f;
        float y = (v[1] / v[3] + 1.0f) * 0.5f;

        if (v[2] < 0.0f) {
            poi.view.center = CGPointMake(x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height);
            poi.view.hidden = NO;
        } else {
            poi.view.hidden = YES;
        }

        i++;
    }
}
This is performing an OpenGL like vertex transformation on the places of interest to check if they are in a viewable frustum. The frustum is created in the following line:
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
This sets up a frustum with a 60 degree field of view, a near clipping plane of 0.25 and a far clipping plane of 1000. Any point of interest that is further away than 1000 units will then not be visible.
So, to step through the code, first the projection matrix that sets up the frustum, and the camera view matrix, which simply rotates the object so it is the right way up relative to the camera, are multiplied together. Then, for each place of interest, its location is multiplied by the viewProjection matrix. This will project the location of the place of interest into the view frustum, applying rotation and perspective.
The next two lines then convert the transformed location of the place into what's known as normalized device coordinates. The 4-component vector needs to be collapsed into 3-dimensional space; this is achieved by projecting it onto the plane w == 1, i.e. dividing the vector by its w component, v[3]. It is then possible to determine whether the point lies within the projection frustum by checking whether its coordinates lie in the cube of side length 2 centred on the origin [0, 0, 0]. In this case, the x and y coordinates are biased from the range [-1, 1] to [0, 1] to match the UIKit coordinate system, by adding 1 and dividing by 2.
Next, the v[2] component, z, is checked to see if it is greater than 0. This is actually incorrect, as it has not been biased; it should be checked to see if it is greater than -1. This detects whether the place of interest is in the first half of the projection frustum; if it is, the object is deemed visible and displayed.
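To see the whole pipeline in one place, here is a small numpy sketch of a standard OpenGL-style projection followed by the perspective divide and visibility test (pARk's own matrix code may differ in sign conventions; all names here are mine):

import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # standard OpenGL-style perspective matrix (row-major for numpy)
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0,                          0.0],
        [0.0,        f,   0.0,                          0.0],
        [0.0,        0.0, (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                         0.0],
    ])

def poi_screen_position(projection, camera, poi, viewport_w, viewport_h):
    v = projection @ camera @ poi            # poi is homogeneous: [x, y, z, 1]
    x_ndc, y_ndc, z_ndc = v[0] / v[3], v[1] / v[3], v[2] / v[3]   # perspective divide
    visible = v[3] > 0 and all(-1 <= c <= 1 for c in (x_ndc, y_ndc, z_ndc))
    # bias [-1, 1] to [0, 1] and flip y to match UIKit's top-left origin
    x = (x_ndc + 1.0) * 0.5 * viewport_w
    y = viewport_h - (y_ndc + 1.0) * 0.5 * viewport_h
    return visible, (x, y)

# Example: a POI 10 units straight ahead of an identity camera (-Z is "forward" here)
proj = perspective(60.0, 320.0 / 480.0, 0.25, 1000.0)
print(poi_screen_position(proj, np.eye(4), np.array([0.0, 0.0, -10.0, 1.0]), 320, 480))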
If you are unfamiliar with vertex projection and coordinate systems, this is a huge topic with a fairly steep learning curve. There is however a lot of material online covering it, here are a couple of links to get you started:
http://www.falloutsoftware.com/tutorials/gl/gl0.htm
http://www.opengl.org/wiki/Vertex_Transformation
Good luck!

How to calculate distance between two rectangles? (Context: a game in Lua.)

Given two rectangles with x, y, width, height in pixels and a rotation value in degrees -- how do I calculate the closest distance of their outlines toward each other?
Background: In a game written in Lua I'm randomly generating maps, but want to ensure certain rectangles aren't too close to each other -- this is needed because maps become unsolvable if the rectangles get into certain close-distance positions, as a ball needs to pass between them. Speed isn't a huge issue, as I don't have many rectangles and the map is only generated once per level. Previous links I found on StackOverflow are this and this.
Many thanks in advance!
Not in Lua, but here is Python code based on M Katz's suggestion:
from math import dist  # Euclidean distance between two points (Python 3.8+)

def rect_distance(rect1, rect2):
    x1, y1, x1b, y1b = rect1
    x2, y2, x2b, y2b = rect2
    left = x2b < x1
    right = x1b < x2
    bottom = y2b < y1
    top = y1b < y2
    if top and left:
        return dist((x1, y1b), (x2b, y2))
    elif left and bottom:
        return dist((x1, y1), (x2b, y2b))
    elif bottom and right:
        return dist((x1b, y1), (x2, y2b))
    elif right and top:
        return dist((x1b, y1b), (x2, y2))
    elif left:
        return x1 - x2b
    elif right:
        return x2 - x1b
    elif bottom:
        return y1 - y2b
    elif top:
        return y2 - y1b
    else:  # rectangles intersect
        return 0.
where
dist is the euclidean distance between points
rect. 1 is formed by points (x1, y1) and (x1b, y1b)
rect. 2 is formed by points (x2, y2) and (x2b, y2b)
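For example (rectangles given as (xmin, ymin, xmax, ymax)):

print(rect_distance((0, 0, 2, 2), (5, 6, 8, 9)))   # 5.0 -- the gap is (3, 4) diagonally
print(rect_distance((0, 0, 4, 4), (2, 1, 6, 3)))   # 0.0 -- the rectangles intersect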
Edit: As OK points out, this solution assumes all the rectangles are upright. To make it work for rotated rectangles as the OP asks, you'd also have to compute the distance from the corners of each rectangle to the closest side of the other rectangle. But you can avoid doing that computation in most cases, if the point is above or below both end points of the line segment and to the left or right of both of them (in telephone positions 1, 3, 7, or 9 with respect to the line segment).
Agnius's answer relies on a DistanceBetweenLineSegments() function. Here is a case analysis that does not:
(1) Check if the rects intersect. If so, the distance between them is 0.
(2) If not, think of r2 as the center of a telephone key pad, #5.
(3) r1 may be fully in one of the extreme quadrants (#1, #3, #7, or #9). If so, the distance is the distance from one rect corner to another (e.g., if r1 is in quadrant #1, the distance is the distance from the lower-right corner of r1 to the upper-left corner of r2).
(4) Otherwise r1 is to the left, right, above, or below r2 and the distance is the distance between the relevant sides (e.g., if r1 is above, the distance is the distance between r1's low y and r2's high y).
Actually there is a fast mathematical solution:
Length(Max((0, 0), Abs(Center - otherCenter) - (Extent + otherExtent)))
where Center = ((Maximum - Minimum) / 2) + Minimum and Extent = (Maximum - Minimum) / 2.
Basically, the expression above zeroes out the axes that overlap, so the remaining per-axis gaps always give the correct distance.
It's also preferable to keep rectangles in this centre/extent format, as it is convenient in many other situations (e.g. rotations are much easier).
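In Python, that one-liner might look like this (axis-aligned rectangles given as (min_x, min_y, max_x, max_y); the names are mine):

import math

def aabb_distance(rect_a, rect_b):
    ax1, ay1, ax2, ay2 = rect_a
    bx1, by1, bx2, by2 = rect_b
    # per-axis gap: |center - otherCenter| - (extent + otherExtent), clamped to 0
    # so overlapping axes drop out of the final length
    dx = max(0.0, abs((ax1 + ax2) - (bx1 + bx2)) / 2 - ((ax2 - ax1) + (bx2 - bx1)) / 2)
    dy = max(0.0, abs((ay1 + ay2) - (by1 + by2)) / 2 - ((ay2 - ay1) + (by2 - by1)) / 2)
    return math.hypot(dx, dy)

For example, aabb_distance((0, 0, 2, 2), (5, 6, 8, 9)) gives 5.0, and any overlapping pair gives 0.0.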
Pseudo-code:
distance_between_rectangles = some_scary_big_number;
For each edge1 in Rectangle1:
    For each edge2 in Rectangle2:
        distance = calculate shortest distance between edge1 and edge2
        if (distance < distance_between_rectangles)
            distance_between_rectangles = distance
There are many algorithms to solve this, and Agnius's algorithm works fine. However, I prefer the one below since it seems more intuitive (you can do it on a piece of paper) and it doesn't rely on finding the smallest distance between lines, but rather on the distance between a point and a line.
The hard part is implementing the mathematical functions to find the distance between a line and a point, and to find whether a point is facing a line. You can solve all of this with simple trigonometry, though. The methodologies for doing so are below.
For polygons (triangles, rectangles, hexagons, etc.) in arbitrary angles
If polygons overlap, return 0
Draw a line between the centres of the two polygons.
Choose the intersecting edge from each polygon. (Here we reduce the problem)
Find the smallest distance between these two edges. (You could just loop through each of the 4 points and look for the smallest distance to the edge of the other shape.)
These algorithms work as long as no two edges of a shape create an angle of more than 180 degrees. The reason is that if something is above 180 degrees, it means some corners are inflated inwards, like in a star.
Smallest distance between an edge and a point
If the point is not facing the edge, return the smaller of the two distances between the point and the edge's corners.
Draw a triangle from the three points (the edge's two points plus the solo point).
We can easily get the lengths of the three sides with the Pythagorean theorem.
Get the area of the triangle with Heron's formula.
Now calculate the height from Area = (1/2) ⋅ base ⋅ height, with the base being the edge's length.
Check to see if a point faces an edge
As before, you make a triangle from an edge and a point. Using the cosine law, you can find all the angles knowing only the side lengths. As long as each of the two angles at the edge is at most 90 degrees, the point is facing the edge.
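To make those two steps concrete, here is a rough Python sketch (points and edge endpoints as (x, y) tuples; the function name is mine):

import math

def point_edge_distance(p, a, b):
    # distance from point p to the edge a-b, using the triangle method above
    ab, ap, bp = math.dist(a, b), math.dist(a, p), math.dist(b, p)
    if ab == 0:
        return ap  # degenerate edge
    # "facing" check via the cosine law: both angles at the edge must be <= 90 degrees
    if ap**2 + ab**2 < bp**2 or bp**2 + ab**2 < ap**2:
        return min(ap, bp)
    # Heron's formula for the area, then height = 2 * area / base
    s = (ab + ap + bp) / 2
    area = math.sqrt(max(0.0, s * (s - ab) * (s - ap) * (s - bp)))
    return 2 * area / ab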
I have an implementation in Python for all this here if you are interested.
This depends on what kind of distance you want: distance between centers, distance between edges, or distance between the closest corners?
I assume you mean the last one. If the X and Y values indicate the center of the rectangle, then you can find each of the corners by applying this trick:
//Pseudo code
Vector2 BottomLeftCorner = new Vector2(width / 2, height / 2);
BottomLeftCorner = BottomLeftCorner * Matrix.CreateRotation(MathHelper.ToRadians(degrees));
//If Lua has no built-in Vector/Matrix calculus, search for "rotate vector" on the web.
//This helps: http://www.kirupa.com/forum/archive/index.php/t-12181.html
BottomLeftCorner += new Vector2(X, Y); //add the origin so that we have the world position
Do this for all corners of all rectangles, then just loop over all corners and calculate the distance (just abs(v1 - v2)).
I hope this helps you
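A rough Python sketch of that corner approach (rectangles as (center_x, center_y, width, height, degrees); note the minimum corner-to-corner distance is only an approximation of the true outline distance, and the names are mine):

import math

def corners(cx, cy, width, height, degrees):
    # the four corners of a rectangle centred at (cx, cy), rotated by `degrees`
    rad = math.radians(degrees)
    cos_r, sin_r = math.cos(rad), math.sin(rad)
    pts = []
    for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        x, y = sx * width / 2, sy * height / 2
        # rotate the corner offset, then translate by the rectangle's origin
        pts.append((cx + x * cos_r - y * sin_r, cy + x * sin_r + y * cos_r))
    return pts

def closest_corner_distance(rect1, rect2):
    return min(math.dist(p, q) for p in corners(*rect1) for q in corners(*rect2))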
I just wrote the code for that in n-dimensions. I couldn't find a general solution easily.
// considering a rectangle object that contains two points (min and max)
double distance(const rectangle& a, const rectangle& b) const {
    // whatever type you are using for points
    point_type closest_point;
    for (size_t i = 0; i < b.dimensions(); ++i) {
        // pick the vertex of a that lies closest to b along this dimension
        closest_point[i] = b.min[i] > a.min[i] ? a.max[i] : a.min[i];
    }
    // usual Euclidean rectangle-to-point distance (see below); note it is measured
    // from b to a's closest vertex, not from a itself
    return distance(b, closest_point);
}
For calculating the distance between a rectangle and a point you can:
double distance(const rectangle& a, const point_type& p) const {
    double dist = 0.0;
    for (size_t i = 0; i < dimensions(); ++i) {
        // per-dimension gap between p and the rectangle's extent (0 if p is inside)
        double di = std::max(std::max(a.min[i] - p[i], p[i] - a.max[i]), 0.0);
        dist += di * di;
    }
    return sqrt(dist);
}
If you want to rotate one of the rectangles, you need to rotate the coordinate system.
If you want to rotate both rectangles, you can rotate the coordinate system for rectangle a. Then we have to change this line:
closest_point[i] = b.min[i] > a.min[i] ? a.max[i] : a.min[i];
because this considers there is only one candidate as the closest vertex in b. You have to change it to check the distance to all vertexes in b. It's always one of the vertexes.
See: https://i.stack.imgur.com/EKJmr.png
My approach to solving the problem:
Combine the two rectangles into one large rectangle.
Subtract from the large rectangle the first rectangle and the second rectangle.
What is left after the subtraction is a rectangle between the two rectangles; the diagonal of this rectangle is the distance between the two rectangles.
Here is an example in C#
public static double GetRectDistance(this System.Drawing.Rectangle rect1, System.Drawing.Rectangle rect2)
{
    if (rect1.IntersectsWith(rect2))
    {
        return 0;
    }

    var rectUnion = System.Drawing.Rectangle.Union(rect1, rect2);
    rectUnion.Width -= rect1.Width + rect2.Width;
    rectUnion.Width = Math.Max(0, rectUnion.Width);
    rectUnion.Height -= rect1.Height + rect2.Height;
    rectUnion.Height = Math.Max(0, rectUnion.Height);
    return rectUnion.Diagonal();
}

public static double Diagonal(this System.Drawing.Rectangle rect)
{
    return Math.Sqrt(rect.Height * rect.Height + rect.Width * rect.Width);
}
Please check this for Java; it has the constraint that all rectangles are axis-aligned, and it returns 0 for all intersecting rectangles:
public static double findClosest(Rectangle rec1, Rectangle rec2) {
    double x1, x2, y1, y2;
    double w, h;
    if (rec1.x > rec2.x) {
        x1 = rec2.x; w = rec2.width; x2 = rec1.x;
    } else {
        x1 = rec1.x; w = rec1.width; x2 = rec2.x;
    }
    if (rec1.y > rec2.y) {
        y1 = rec2.y; h = rec2.height; y2 = rec1.y;
    } else {
        y1 = rec1.y; h = rec1.height; y2 = rec2.y;
    }
    double a = Math.max(0, x2 - x1 - w);
    double b = Math.max(0, y2 - y1 - h);
    return Math.sqrt(a * a + b * b);
}
Another solution, which samples a number of points on each polygon's outline and chooses the pair with the smallest distance.
Pros: works for all polygons.
Cons: a little less accurate and slower.
import numpy as np
import math

POINTS_PER_LINE = 100

# get points on polygon outer lines
# format of polygons: ((x1, y1), (x2, y2), ...)
def get_points_on_polygon(poly, points_per_line=POINTS_PER_LINE):
    all_res = []
    for i in range(len(poly)):
        a = poly[i]
        if i == 0:
            b = poly[-1]
        else:
            b = poly[i - 1]
        res = list(np.linspace(a, b, points_per_line))
        all_res += res
    return all_res

# compute minimum distance between two polygons
# format of polygons: ((x1, y1), (x2, y2), ...)
def min_poly_distance(poly1, poly2, points_per_line=POINTS_PER_LINE):
    poly1_points = get_points_on_polygon(poly1, points_per_line=points_per_line)
    poly2_points = get_points_on_polygon(poly2, points_per_line=points_per_line)
    distance = min([math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2) for a in poly1_points for b in poly2_points])
    # slower:
    # distance = min([np.linalg.norm(a - b) for a in poly1_points for b in poly2_points])
    return distance

OpenGL: How to lathe a 2D shape into 3D?

I have an OpenGL program (written in Delphi) that lets the user draw a polygon. I want to automatically revolve (lathe) it around an axis (say, the Y axis) to get a 3D shape.
How can I do this?
For simplicity, you could force at least one point to lie on the axis of rotation. You can do this easily by adding/subtracting the same value to all the x values, and the same value to all the y values, of the points in the polygon. It will retain the original shape.
The rest isn't really that hard. Pick an angle that is fairly small, say one or two degrees, and work out the coordinates of the polygon vertices as it spins around the axis. Then just join up the points with triangle fans and triangle strips.
Rotating a point around an axis is just basic trigonometry. At 0 degrees of rotation you have the points at their 2D coordinates, with a value of 0 in the third dimension.
Let's assume the points are in X and Y and we are rotating around Y. The original X coordinate represents the hypotenuse. At 1 degree of rotation, we have:
sin(1) = z/hypotenuse
cos(1) = x/hypotenuse
(assuming degree-based trig functions)
To rotate a point (x, y) by angle T around the Y axis to produce a 3d point (x', y', z'):
y' = y
x' = x * cos(T)
z' = x * sin(T)
So for each point on the edge of your polygon you produce a circle of 360 points centered on the axis of rotation.
Now make a 3d shape like so:
create a GL 'triangle fan' by using your center point and the first array of rotated points
for each successive array, create a triangle strip using the points in the array and the points in the previous array
finish by creating another triangle fan centered on the center point and using the points in the last array
One thing to note is that the kinds of trig functions I've used usually measure angles in radians, whereas OpenGL uses degrees. The conversion is:
degrees = radians / pi * 180 (and conversely, radians = degrees * pi / 180)
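Here's a rough Python sketch of generating the rings of rotated vertices (feeding them to GL_TRIANGLE_STRIP / GL_TRIANGLE_FAN is left to your GL binding; the names are mine):

import math

def lathe(profile, steps=360):
    # revolve a 2D profile [(x0, y0), (x1, y1), ...] around the Y axis;
    # returns one ring of 3D points per rotation step
    rings = []
    for i in range(steps):
        t = math.radians(i * 360.0 / steps)
        rings.append([(x * math.cos(t), y, x * math.sin(t)) for (x, y) in profile])
    return rings

Adjacent rings are then joined with triangle strips, and the points on the axis close the ends with triangle fans, as described above.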
Essentially the strategy is to sweep the profile given by the user around the given axis and generate a series of triangle strips connecting adjacent slices.
Assume that the user has drawn the polygon in the XZ plane. Further, assume that the user intends to sweep around the Z axis (i.e. the line X = 0) to generate the solid of revolution, and that one edge of the polygon lies on that axis (you can generalize later once you have this simplified case working).
For simple enough geometry, you can treat the perimeter of the polygon as a function x = f(z), that is, assume there is a unique X value for every Z value. When we go to 3D, this function becomes r = f(z), that is, the radius is unique over the length of the object.
Now, suppose we want to approximate the solid with M "slices" each spanning 2 * Pi / M radians. We'll use N "stacks" (samples in the Z dimension) as well. For each such slice, we can build a triangle strip connecting the points on one slice (i) with the points on slice (i+1). Here's some pseudo-ish code describing the process:
double dTheta = 2.0 * pi / M;
double dZ = (zMax - zMin) / N;

// Iterate over "slices"
for (int i = 0; i < M; ++i) {
    double theta = i * dTheta;
    double theta_next = (i + 1) * dTheta;

    // Iterate over "stacks":
    for (int j = 0; j <= N; ++j) {
        double z = zMin + j * dZ;   // note: j (the stack index), not i

        // Get cross-sectional radius at this Z location from your 2D model (was the
        // X coordinate in the 2D polygon):
        double r = f(z); // See above definition

        // Convert 2D to 3D by sweeping by the angle represented by this slice:
        double x = r * cos(theta);
        double y = r * sin(theta);

        // Get coordinates of the next slice over so we can join them with a triangle strip:
        double xNext = r * cos(theta_next);
        double yNext = r * sin(theta_next);

        // Add these two points to your triangle strip (heavy pseudocode):
        strip.AddPoint(x, y, z);
        strip.AddPoint(xNext, yNext, z);
    }
}
That's the basic idea. As sje697 said, you'll possibly need to add end caps to keep the geometry closed (i.e. a solid object, rather than a shell). But this should give you enough to get you going. This can easily be generalized to toroidal shapes as well (though you won't have a one-to-one r = f(z) function in that case).
If you just want it to rotate, then:
glRotatef(angle,0,1,0);
will rotate it around the Y-axis. If you want a lathe, then this is far more complex.

Get map position from WGS-84 lat/lon when the upper-left and lower-right corners' lat/lon are given

Suppose I have a map, for example from openstreetmaps.org.
I know the WGS-84 lat/lon of the upper left and lower right corner of the map.
How can I find other positions on the map from given WGS-84 lat/lon coordinates?
If the map is roughly street/city level, uses a mercator projection (as openstreetmap.org seems to), and isn't too close to the poles, linear interpolation may be accurate enough. Assuming the following:
TL = lat/lon of top left corner
BR = lat/lon of bottom right corner
P = lat/lon of the point you want to locate on the map
(w,h) = width and height of the map you have (pixels?)
the origin of the map image, (0,0), is at its top-left corner
then we can interpolate the (x, y) position corresponding to P as:
x = w * (P.lon - TL.lon) / (BR.lon - TL.lon)
y = h * (P.lat - TL.lat) / (BR.lat - TL.lat)
Common gotchas:
The lat/lon notation convention lists the latitude first and the longitude second, i.e. "vertical" before "horizontal". This is the opposite of the common x,y notation of image coordinates.
Latitude values increase going northward ("up"), whereas y coordinates in your map image most likely increase going down.
If the map covers a larger area, linear interpolation becomes less accurate for latitudes. For a map that spans one degree of latitude and sits in the earth's habitable latitudes (e.g. the Bay Area), the center latitude will be off by 0.2% or so, which is likely to be less than a pixel (depending on the image size).
If that's precise enough for your needs, you can stop here!
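If that linear approximation is all you need, a direct Python translation of the formulas above might look like this (tl and br as (lat, lon) tuples; the names are mine):

def latlon_to_pixel(lat, lon, tl, br, w, h):
    # origin at the image's top-left corner; y grows downward automatically,
    # because the top-left latitude is larger than the bottom-right latitude
    x = w * (lon - tl[1]) / (br[1] - tl[1])
    y = h * (lat - tl[0]) / (br[0] - tl[0])
    return x, y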
The more precise math for getting from P's latitude to a pixel y position starts with the Mercator math. We know that for a latitude P.lat, the Y position on a projection starting at the equator would be as follows (I'll use a capital Y because, unlike the y value we're looking for, Y starts at the equator and increases towards the north):
Y = k * ln((1 + sin(P.lat)) / (1 - sin(P.lat)))
The constant k depends on the vertical scaling of the map, which we may not know. Luckily, it can be deduced by observing that Y(TL.lat) - Y(BR.lat) = h. That gets us:
k = h / (ln((1 + sin(TL.lat)) / (1 - sin(TL.lat))) - ln((1 + sin(BR.lat)) / (1 - sin(BR.lat))))
(yikes! that's four levels of brackets!) With k known, we now have the formula to find out the Y position of any latitude. We just need to correct for: (1) our y value starts at TL.lat, not the equator, and (2) y grows towards the south, rather than to the north. This gets us:
Y(TL.lat) = k * ln((1 + sin(TL.lat)) / (1 - sin(TL.lat)))
Y(P.lat) = k * ln((1 + sin(P.lat )) / (1 - sin(P.lat )))
y(P.lat) = -(Y(P.lat) - Y(TL.lat))
So this gets you:
x = w * (P.lon - TL.lon) / (BR.lon - TL.lon) // like before
y = -(Y(P.lat) - Y(TL.lat)) // where Y(anything) depends just on h, TL.lat and BR.lat
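Putting the precise version together as a small Python sketch (same assumptions as above; mercator_y is my name for the unscaled Y formula):

import math

def mercator_y(lat_deg):
    # unscaled Mercator Y for a latitude in degrees; grows towards the north
    lat = math.radians(lat_deg)
    return math.log((1 + math.sin(lat)) / (1 - math.sin(lat)))

def latlon_to_pixel_mercator(lat, lon, tl, br, w, h):
    # tl, br = (lat, lon) of the top-left / bottom-right corners
    x = w * (lon - tl[1]) / (br[1] - tl[1])
    k = h / (mercator_y(tl[0]) - mercator_y(br[0]))   # vertical scale of this particular map
    y = k * (mercator_y(tl[0]) - mercator_y(lat))     # y grows towards the south
    return x, y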
