Tile to CGPoint conversion with Retina display - iOS

I have a project that uses a tilemap. I have a separate tilemap for low-res (29x29 tile size) and high-res (58x58). I have these methods to convert a tile coordinate to a position and back again:
- (CGPoint)tileCoordForPosition:(CGPoint)position {
    int x = position.x / _tileMap.tileSize.width;
    int y = ((_tileMap.mapSize.height * _tileMap.tileSize.height) - position.y) / _tileMap.tileSize.height;
    return ccp(x, y);
}

- (CGPoint)positionForTileCoord:(CGPoint)tileCoord {
    int x = (tileCoord.x * _tileMap.tileSize.width) + _tileMap.tileSize.width/2;
    int y = (_tileMap.mapSize.height * _tileMap.tileSize.height) - (tileCoord.y * _tileMap.tileSize.height) - _tileMap.tileSize.height/2;
    return ccp(x, y);
}
I got this from RayWenderlich and I honestly do not understand how it works or why it has to be so complicated. But this doesn't work when I use retina tilemaps; it only works at 480x320. Can someone clever come up with a way to make this work for HD? It does not have to work on low-res, as I do not plan on supporting anything below iOS 7.
I want the output to be in the low-res coordinate scale, though; as you might know, cocos2d does the scaling to HD for you (by multiplying by two).

I think this will work:
- (CGPoint)tileCoordForPosition:(CGPoint)position {
    // 29 = tile size in points on the low-res map, 11 = map height in tiles
    int x = position.x / 29;
    int y = ((11 * 29) - position.y) / 29;
    return ccp(x, y);
}

- (CGPoint)positionForTileCoord:(CGPoint)tileCoord {
    // 14.5 = half a tile, to land on the tile's centre
    double x = tileCoord.x * 29 + 14.5;
    double y = (11 * 29) - (tileCoord.y * 29) - 14.5;
    return ccp(x, y);
}

Here you're trying to compute your map X coordinate:
int x = position.x / _tileMap.tileSize.width;
The problem here is that (as of v0.99.5-rc0) cocos2d generally uses points for positions, but CCTMXTiledMap always uses pixels for tileSize. On a low-res device, 1 point = 1 pixel, but on a Retina device, 1 point = 2 pixels. Thus on a Retina device, you need to multiply by 2.
You can use the CC_CONTENT_SCALE_FACTOR() macro to fix this:
int x = CC_CONTENT_SCALE_FACTOR() * position.x / _tileMap.tileSize.width;
Here you're trying to compute your map Y coordinate:
int y = ((_tileMap.mapSize.height * _tileMap.tileSize.height) - position.y) / _tileMap.tileSize.height;
The extra math here is trying to account for the difference between Cocos2D's normal coordinate system and your map's flipped coordinate system. In standard Cartesian coordinates, the origin is at the lower left and Y coordinates increase as you move up. In a flipped coordinate system, the origin is at the upper left and Y coordinates increase as you move down. Thus you must subtract your position's Y coordinate from the height of the map (in scene units, which are points) to flip it to map coordinates.
The problem again is that _tileMap.tileSize is in pixels, not points. You can again fix that by using CC_CONTENT_SCALE_FACTOR():
CGFloat tileHeight = _tileMap.tileSize.height / CC_CONTENT_SCALE_FACTOR();
int y = ((_tileMap.mapSize.height * tileHeight) - position.y) / tileHeight;
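Putting both fixes together, a sketch of the corrected pair of methods (same _tileMap ivar as in the question; the only change is converting tileSize from pixels to points via CC_CONTENT_SCALE_FACTOR() before using it):
- (CGPoint)tileCoordForPosition:(CGPoint)position {
    // tileSize is in pixels; divide by the scale factor to get points
    CGFloat tileWidth = _tileMap.tileSize.width / CC_CONTENT_SCALE_FACTOR();
    CGFloat tileHeight = _tileMap.tileSize.height / CC_CONTENT_SCALE_FACTOR();
    int x = position.x / tileWidth;
    int y = ((_tileMap.mapSize.height * tileHeight) - position.y) / tileHeight;
    return ccp(x, y);
}

- (CGPoint)positionForTileCoord:(CGPoint)tileCoord {
    CGFloat tileWidth = _tileMap.tileSize.width / CC_CONTENT_SCALE_FACTOR();
    CGFloat tileHeight = _tileMap.tileSize.height / CC_CONTENT_SCALE_FACTOR();
    int x = (tileCoord.x * tileWidth) + tileWidth / 2;
    int y = (_tileMap.mapSize.height * tileHeight) - (tileCoord.y * tileHeight) - tileHeight / 2;
    return ccp(x, y);
}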

Related

How to calculate camera ray position for use with XMVector3Unproject(), DirectX11?

I'm trying to create a ray-casting camera in DirectX11 using XMVector3Unproject(). From my understanding, I will be passing in the (Vector3) position of the pixel on the near plane and, in a separate call, a corresponding position on the far plane. Then I would subtract these vectors to get the direction of the ray. The origin would then be the unprojected coordinate on the near plane. My problem here is calculating the origin of the ray to be passed in.
Example
// assuming screenHeight and screenWidth are the number of pixels
const uint32_t screenHeight = 768;
const uint32_t screenWidth = 1024;

struct Ray
{
    XMFLOAT3 origin;
    XMFLOAT3 direction;
};

Ray rays[screenWidth * screenHeight];

for (uint32_t i = 0; i < screenHeight; ++i)
{
    for (uint32_t j = 0; j < screenWidth; ++j)
    {
        // 1. ***calculate and store the current pixel position on the near plane***
        // 2. ***calculate the corresponding point on the far plane***
        // 3. ***pass both positions separately into XMVector3Unproject() (2 total calls to the function)***
        // 4. ***store the returned vectors' difference into rays[i * screenWidth + j].direction***
        // 5. ***store the near plane pixel position's returned vector into rays[i * screenWidth + j].origin***
    }
}
Hopefully I'm understanding this correctly. Any help in determining the ray origins, or any corrections, would be greatly appreciated.
According to the documentation, the XMVector3Unproject function takes a vector in screen space and returns its coordinates in object space (given your projection, view, and world matrices).
To generate your camera rays, consider your camera as a pinhole (all the light passes through one point, which is your camera at (0, 0, 0)); then you choose your ray direction. Let's say you want to generate W*H camera rays; your loop might look like this:
Vector3 ray_origin = Vector3(0, 0, 0);
for (float x = -1.f; x <= 1.f; x += 2.f / W) {
    for (float y = -1.f; y <= 1.f; y += 2.f / H) {
        // direction from the pinhole through the point (x, y) on the near plane
        Vector3 ray_direction = Normalize(Vector3(x, y, -1.f) - ray_origin);
        Vector3 ray_in_model = Unproject(ray_direction, 0.f, 0.f,
                                         width, height, znear, zfar,
                                         proj, view, model);
    }
}
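If you'd rather follow the near/far-plane recipe from the question literally, here is a sketch of the idea. It uses GLKit's GLKMathUnproject purely for illustration, since the rest of this page is Objective-C; XMVector3Unproject takes the equivalent viewport and matrix parameters, with z = 0 unprojecting onto the near plane and z = 1 onto the far plane. The modelview/projection matrices and viewport size are assumed to come from the caller.
#import <GLKit/GLKit.h>

// Sketch: unproject a pixel onto the near and far planes, then derive the ray.
static GLKVector3 rayThroughPixel(float px, float py,
                                  GLKMatrix4 modelview, GLKMatrix4 projection,
                                  int width, int height,
                                  GLKVector3 *originOut) {
    int viewport[4] = {0, 0, width, height};
    bool ok = false;
    GLKVector3 nearPt = GLKMathUnproject(GLKVector3Make(px, py, 0.0f),
                                         modelview, projection, viewport, &ok);
    GLKVector3 farPt  = GLKMathUnproject(GLKVector3Make(px, py, 1.0f),
                                         modelview, projection, viewport, &ok);
    *originOut = nearPt;  // the ray origin is the near-plane point
    return GLKVector3Normalize(GLKVector3Subtract(farPt, nearPt));  // direction
}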
You might also want to have a look at this link which sounds interesting

How can I track a point on a texture in OpenGL ES1?

In my iOS application I have a texture applied to a sphere rendered in OpenGLES1. The sphere can be rotated by the user. How can I track where a given point on the texture is in 2D space at any given time?
For example, given point (200, 200) on a texture that's 1000px x 1000px, I'd like to place a UIButton on top of my OpenGL view that tracks the point as the sphere is manipulated.
What's the best way to do this?
On my first attempt, I tried to use a color-picking technique where I have a separate sphere in an off-screen framebuffer that uses a black texture with a red square at point (200, 200). Then I used glReadPixels() to track the position of the red square and moved my button accordingly. Unfortunately, grabbing all the pixel data and iterating it 60 times a second just isn't possible for obvious performance reasons. I tried a number of ways to optimize this hack (e.g. iterating only the red pixels, iterating every 4th red pixel, etc.), but it just didn't prove to be reliable.
I'm an OpenGL noob, so I'd appreciate any guidance. Is there a better solution? Thanks!
I think it's easier to keep track of where your ball is instead of searching for it with pixels. Then just have a couple of functions to translate your ball's coordinates to your view's coordinates (and back), then set your subview's center to the translated coordinates.
CGPoint translatePointFromGLCoordinatesToUIView(CGPoint coordinates, UIView *myGLView){
    //if your drawing coordinates were between (horizontal {-1.0 -> 1.0} vertical {-1 -> 1})
    CGFloat leftMostGLCoord = -1;
    CGFloat rightMostGLCoord = 1;
    CGFloat bottomMostGLCoord = -1;
    CGFloat topMostGLCoord = 1;

    CGPoint scale;
    scale.x = (rightMostGLCoord - leftMostGLCoord) / myGLView.bounds.size.width;
    scale.y = (topMostGLCoord - bottomMostGLCoord) / myGLView.bounds.size.height;

    coordinates.x -= leftMostGLCoord;
    coordinates.y -= bottomMostGLCoord;

    CGPoint translatedPoint;
    translatedPoint.x = coordinates.x / scale.x;
    translatedPoint.y = coordinates.y / scale.y;

    //flip y for iOS coordinates
    translatedPoint.y = myGLView.bounds.size.height - translatedPoint.y;
    return translatedPoint;
}
CGPoint translatePointFromUIViewToGLCoordinates(CGPoint pointInView, UIView *myGLView){
    //if your drawing coordinates were between (horizontal {-1.0 -> 1.0} vertical {-1 -> 1})
    CGFloat leftMostGLCoord = -1;
    CGFloat rightMostGLCoord = 1;
    CGFloat bottomMostGLCoord = -1;
    CGFloat topMostGLCoord = 1;

    CGPoint scale;
    scale.x = (rightMostGLCoord - leftMostGLCoord) / myGLView.bounds.size.width;
    scale.y = (topMostGLCoord - bottomMostGLCoord) / myGLView.bounds.size.height;

    //flip y for iOS coordinates
    pointInView.y = myGLView.bounds.size.height - pointInView.y;

    CGPoint translatedPoint;
    translatedPoint.x = leftMostGLCoord + (pointInView.x * scale.x);
    translatedPoint.y = bottomMostGLCoord + (pointInView.y * scale.y);
    return translatedPoint;
}
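For example, to pin a UIButton to the tracked point, you might update it whenever the sphere moves (ballPosition and trackingButton are assumptions here, standing in for wherever you store the ball's GL-space position and the overlay button):
// Reposition the overlay button so it follows the tracked GL-space point.
CGPoint screenPoint = translatePointFromGLCoordinatesToUIView(ballPosition, glView);
trackingButton.center = screenPoint;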
In my app I chose to use the iOS coordinate system for my drawing too. I just apply a projection matrix to my whole glkView that reconciles the coordinate systems.
static GLKMatrix4 GLKMatrix4MakeIOSCoordsWithSize(CGSize screenSize){
    GLKMatrix4 matrix4 = GLKMatrix4MakeScale(
        2.0 / screenSize.width,
        -2.0 / screenSize.height,
        1.0);
    matrix4 = GLKMatrix4Translate(matrix4, -screenSize.width / 2.0, -screenSize.height / 2.0, 0);
    return matrix4;
}
This way you don't have to translate anything.
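For example, with a GLKBaseEffect you might install it once during setup (the effect property is an assumption here; transform.projectionMatrix is GLKit's):
// Install the flipped projection so all subsequent drawing can use
// UIKit-style coordinates directly.
self.effect.transform.projectionMatrix =
    GLKMatrix4MakeIOSCoordsWithSize(self.view.bounds.size);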

How to get the degrees for a specific X position of an object?

This is my method to move squares around a circle, like satellites around planets:
-(CGPoint)circularMovement:(float)degrees radius:(CGFloat)radius{
    float x = (planet.position.x + planet.radius) * cos(degrees);
    float y = (planet.position.y + planet.radius) * sin(degrees);
    CGPoint posicion = CGPointMake(x, y);
    return posicion;
}
As you can see, I get an x and y position for my satellite, and by calling this method with degrees++ I get a circular movement around the planet.
But my problem with this movement system is that I need the degrees of satelite.position.x + satelite.size.width/2 to detect collisions with another object moving around with the same movement system.
Does anybody know how to get this value?
Just do the same calculations, but backwards.
In your example you knew planet.position, planet.radius, and degrees, and you had to find x and y for that CGPoint.
Now you know planet.position, planet.radius, and that CGPoint, and you need to find degrees.
From your formula:
float x = (planet.position.x + planet.radius) *cos(degrees);
you can find your degrees:
cos(degrees) = x / (planet.position.x + planet.radius);
For example:
cos(x) = 1 / 2;
then
x = acos(1/2);
x = 60 degrees, or π/3 radians
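A sketch of that inverse step in Objective-C (degreesForX: is a hypothetical helper; it simply inverts the question's formula, so it inherits that formula's quirks):
// Inverts x = (planet.position.x + planet.radius) * cos(degrees)
- (float)degreesForX:(float)x {
    float amplitude = planet.position.x + planet.radius;
    float radians = acosf(x / amplitude);  // acosf returns 0..π; NaN if |x/amplitude| > 1
    return radians * 180.0f / M_PI;        // back to degrees, matching the method's parameter
}
Note that acos alone cannot tell the top half of the circle from the bottom half; if you also know the y position, atan2f(y, x) gives the full range of angles.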

convertCoordinate toPointToView returning bad results with tilted maps

I have a UIView overlaid on a map, and I'm drawing some graphics in screen space between two of the coordinates using
- (CGPoint)convertCoordinate:(CLLocationCoordinate2D)coordinate toPointToView:(UIView *)view
The problem is that when the map is very zoomed in and tilted (3D-like), the pixel position of a coordinate that is way off-screen stops being consistent. Sometimes the function returns NaN, sometimes it returns the right number, and other times it jumps to the other side of the screen.
Not sure how I can explain it better. Has anyone run into this?
While researching this I found several solutions; one of them might work for you.
Solution 1:
int x = (int) ((MAP_WIDTH/360.0) * (180 + lon));
int y = (int) ((MAP_HEIGHT/180.0) * (90 - lat));
Solution 2:
func addLocation(coordinate: CLLocationCoordinate2D)
{
    // max MKMapPoint values
    let maxY = Double(267995781)
    let maxX = Double(268435456)

    let mapPoint = MKMapPointForCoordinate(coordinate)
    let normalizatePointX = CGFloat(mapPoint.x / maxX)
    let normalizatePointY = CGFloat(mapPoint.y / maxY)

    print(normalizatePointX)
    print(normalizatePointY)
}
Solution 3:
x = (total width of image in px) * (180 + longitude) / 360
y = (total height of image in px) * (90 - latitude) / 180
Note: when using a negative longitude or latitude, make sure to add or subtract the negative number, i.e. +(-92) or -(-35), which would actually be -92 and +35.
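As a sketch, Solution 1 wrapped in a small Objective-C helper (pointForCoordinate is a hypothetical name; note the formulas assume an equirectangular image, not MKMapView's Mercator projection, so they are only a rough fit for the original question):
#import <CoreLocation/CoreLocation.h>
#import <UIKit/UIKit.h>

// Map a latitude/longitude to a pixel in an equirectangular image.
static CGPoint pointForCoordinate(CLLocationCoordinate2D coord,
                                  CGFloat mapWidth, CGFloat mapHeight) {
    CGFloat x = (mapWidth / 360.0) * (180.0 + coord.longitude);
    CGFloat y = (mapHeight / 180.0) * (90.0 - coord.latitude);
    return CGPointMake(x, y);
}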

Algorithm for creating a circular path around a center mass?

I am attempting to simply make objects orbit around a center point. The green and blue objects in my diagram represent objects which should keep their distance to the center point while rotating, based on an angle which I pass into the method.
I have attempted to create a function in Objective-C, but it doesn't work right without a static number. (It rotates around the center, but not from the true starting point or distance from the object.)
-(void) rotateGear: (UIImageView*) view heading:(int)heading
{
    // int distanceX = 160 - view.frame.origin.x;
    // int distanceY = 240 - view.frame.origin.y;
    float x = 160 - view.image.size.width / 2 + (50 * cos(heading * (M_PI / 180)));
    float y = 240 - view.image.size.height / 2 + (50 * sin(heading * (M_PI / 180)));
    view.frame = CGRectMake(x, y, view.image.size.width, view.image.size.height);
}
My magic numbers 160 and 240 are the center of the canvas onto which I'm drawing the images. 50 is a static number (and the problem), which allows the function to work only partially: it doesn't maintain the starting position of the object or the correct distance. I don't know what to put here, unfortunately.
heading is a parameter that passes in a degree, from 0 to 359. It is calculated by a timer and increments outside of this class.
Essentially, I would like to be able to drop any image onto my canvas and, based on the starting point of the image, have it rotate around the center of my circle. This means that if I were to drop an image at point (10,10), the distance to the center of the circle would persist, using (10,10) as a starting point. The object would rotate 360 degrees around the center and reach its original starting point.
The expected result would be to pass, for instance, (10,10) into the method at zero degrees, and get back out, say, (15,25) (not real values) at 5 degrees.
I know this is very simple (and this problem description is entirely overkill), but I'm going cross eyed trying to figure out where I'm hosing things up. I don't care about what language examples you use, if any. I'll be able to decipher your meanings.
Failure Update
I've gotten farther, but I still cannot get the right calculation. My new code looks like the following:
heading is set to 1 degree.
-(void) rotateGear: (UIImageView*) view heading:(int)heading
{
    float y1 = view.frame.origin.y + (view.frame.size.height/2); // 152
    float x1 = view.frame.origin.x + (view.frame.size.width/2); // 140.5
    float radius = sqrtf(powf(160 - x1, 2.0f) + powf(240 - y1, 2.0f)); // 90.13

    // I know that I need to calculate 90.13 pixels from my center, at 1 degree.
    float x = 160 + radius * (cos(heading * (M_PI / 180.0f))); // 250.12
    float y = 240 + radius * (sin(heading * (M_PI / 180.0f))); // 241.57

    // The numbers are very skewed.
    view.frame = CGRectMake(x, y, view.image.size.width, view.image.size.height);
}
I'm getting results that are nowhere close to where the point should be. The problem is with the assignment of x and y. Where am I going wrong?
You can find the distance of the point from the centre pretty easily:
radius = sqrt((160 - x)^2 + (240 - y)^2)
where (x, y) is the initial position of the centre of your object. Then just replace 50 by the radius.
http://en.wikipedia.org/wiki/Pythagorean_theorem
You can then figure out the initial angle using trigonometry (tan = opposite / adjacent, so draw a right-angled triangle using the centre mass and the centre of your orbiting object to visualize this):
angle = arctan((y - 240) / (x - 160))
if x > 160, or:
angle = arctan((y - 240) / (x - 160)) + 180
if x < 160
http://en.wikipedia.org/wiki/Inverse_trigonometric_functions
Edit: bear in mind I don't actually know any Objective-C but this is basically what I think you should do (you should be able to translate this to correct Obj-C pretty easily, this is just for demonstration):
// Your object gets created here somewhere
float x1 = view.frame.origin.x + (view.frame.size.width/2); // 140.5
float y1 = view.frame.origin.y + (view.frame.size.height/2); // 152
float radius = sqrtf(powf(160 - x1 ,2.0f) + powf(240 - y1, 2.0f)); // 90.13
// Calculate the initial angle here, as per the first part of my answer
float initialAngle = atan((y1 - 240) / (x1 - 160)) * 180.0f / M_PI;
if(x1 < 160)
initialAngle += 180;
// Calculate the adjustment we need to add to heading
int adjustment = (int)(initialAngle - heading);
So we only execute the code above once (when the object gets created). We need to remember radius and adjustment for later. Then we alter rotateGear to take an angle and a radius as inputs instead of heading (this is much more flexible anyway):
-(void) rotateGear: (UIImageView*) view radius:(float)radius angle:(int)angle
{
    float x = 160 + radius * (cos(angle * (M_PI / 180.0f)));
    float y = 240 + radius * (sin(angle * (M_PI / 180.0f)));
    // (x, y) is the gear's centre point, so set center rather than frame.origin
    view.center = CGPointMake(x, y);
}
And each time we want to update the position we make a call like this:
[objectName rotateGear:view radius:radius angle:(adjustment + heading)];
Btw, once you manage to get this working, I'd strongly recommend converting all your angles so you're using radians all the way through; it makes things much neater and easier to follow!
The formula for x and y coordinates of a point on a circle, based on radians, radius, and center point:
x = cos(angle) * radius + center_x
y = sin(angle) * radius + center_y
You can find the radius with HappyPixel's formula.
Once you figure out the radius and the center point, you can simply vary the angle to get all the points on the circle that you'd want.
If I understand correctly, you want to do InitObject(x,y) followed by UpdateObject(angle), where angle sweeps from 0 to 360 (but use radians instead of degrees for the math).
So you need to track the angle and radius for each object:
InitObject(x, y)
    relative_x = x - center.x
    relative_y = y - center.y
    object.radius = sqrt(relative_x^2 + relative_y^2)
    object.initial_angle = atan2(relative_y, relative_x)
And
UpdateObject(angle)
    newangle = (object.initial_angle + angle) % (2*PI)
    object.x = cos(newangle) * object.radius + center.x
    object.y = sin(newangle) * object.radius + center.y
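Sketched in Objective-C under the same assumptions (center, radius, and initialAngle as ivars of a hypothetical orbiting-object class; angles in radians throughout):
- (void)initObjectAtPoint:(CGPoint)p {
    float rx = p.x - center.x;
    float ry = p.y - center.y;
    radius = sqrtf(rx * rx + ry * ry);
    initialAngle = atan2f(ry, rx); // atan2 handles every quadrant, rx == 0 included
}

- (CGPoint)positionForAngle:(float)angle {
    float a = fmodf(initialAngle + angle, 2.0f * M_PI);
    return CGPointMake(cosf(a) * radius + center.x,
                       sinf(a) * radius + center.y);
}
Each tick you would then do something like view.center = [self positionForAngle:sweptAngle];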
dx = dropx - centerx;            // target - source
dy = -(dropy - centery);         // minus inverts screen coords to Cartesian coords
radius = sqrt(dy*dy + dx*dx);    // dy*dy avoids pow(); faster if your compiler's optimizer is bad
if (dx == 0) dx = 0.000001;      // hack/patch/fudge/nudge*
angle = atan(dy/dx);             // set this as the start angle for the angle incrementer
Then go with the code you have and you'll be fine. You seem to be calculating the radius from the current position each time, though? This, like the angle, should only be done once, when the object is dropped, or else the radius might not stay constant.
*Instead of the dx = 0 nudge, you can handle the three special cases for dx = 0 explicitly; if you need better than 1/100-degree precision for the start angle, go with those instead (google "polar arctan").
