How to support larger tiles - iOS

I've been working on a small platformer and decided to try new collision detection. I followed Ray Wenderlich's tutorial on how to make an iOS platformer, and a couple of questions came up. In the tutorial it's set up to support a very specific tile size, and I was wondering how to modify it correctly to support a tile size of 80x80. These are the methods used to get the tile coordinates and bounding boxes:
- (CGPoint)tileCoordForPosition:(CGPoint)position
{
    float x = floor(position.x / map.tileSize.width);
    // Flip the y-axis: cocos2d's origin is at the bottom-left,
    // but tile coordinates count down from the top of the map.
    float levelHeightInPixels = map.mapSize.height * map.tileSize.height;
    float y = floor((levelHeightInPixels - position.y) / map.tileSize.height);
    return ccp(x, y);
}

- (CGRect)tileRectFromTileCoords:(CGPoint)tileCoords
{
    float levelHeightInPixels = map.mapSize.height * map.tileSize.height;
    CGPoint origin = ccp(tileCoords.x * map.tileSize.width,
                         levelHeightInPixels - ((tileCoords.y + 1) * map.tileSize.height));
    return CGRectMake(origin.x, origin.y, map.tileSize.width, map.tileSize.height);
}

Since this code uses the values from the tilemap, you don't need to change the code as long as the tilemap itself is set up correctly.
I certainly don't see any magic numbers, which is good. Only fools and wizards use magic numbers. :)
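In other words, as long as the TMX file is authored with 80x80 tiles, map.tileSize will already report 80x80 and both methods above keep working unchanged. A minimal sketch, assuming cocos2d's CCTMXTiledMap and a hypothetical level80.tmx exported from Tiled with 80x80 tiles:

// Load a map whose tiles were authored as 80x80 in Tiled;
// tileSize is read straight from the TMX file, so no code changes are needed.
CCTMXTiledMap *map = [CCTMXTiledMap tiledMapWithTMXFile:@"level80.tmx"];
NSLog(@"tile size: %.0f x %.0f", map.tileSize.width, map.tileSize.height); // 80 x 80

// On a 10-tile-high map, a position of (200, 120) lands in tile (2, 8):
// x = floor(200 / 80) = 2, y = floor((10 * 80 - 120) / 80) = 8
CGPoint tileCoord = [self tileCoordForPosition:ccp(200, 120)];
CGRect tileRect = [self tileRectFromTileCoords:tileCoord];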

Related

CorePlot allow user drag rotate PieChart

(I have the solution now; it is shared at the bottom.)
The fact is, I have been struggling with this for a while, and I believe most of the discussions I found relate to older versions of CorePlot or are unanswered.
Firstly, I am using CorePlot 1.5.1.
I can already plot a pie chart, and now I would like the user to be able to rotate it by dragging on the screen (it doesn't really matter whether they touch the pie chart directly or the host view).
Using these delegates at the moment:
@interface MyChartViewController : UIViewController <CPTPieChartDataSource, CPTPlotSpaceDelegate, CPTPieChartDelegate>
Got a host view,
@property (strong, nonatomic) IBOutlet CPTGraphHostingView *hostView;
made a graph, set it as self.hostView.hostedGraph = graph,
and made a pie chart and put it into the graph: [graph addPlot:self.mainPieChart];
(I gave the pie chart a strong property so I can refer to it at any time.)
So here is my first attempt. The fact is, it does respond, though not in a desirable way:
CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *) self.hostView.hostedGraph.defaultPlotSpace;
[plotSpace setDelegate:self];
(It only works by setting the plot space delegate to self; I'm not sure why. I guess it's about finding a way to receive the user's interaction. Anyway, I then override these two methods.)
Using this value:
static float deltaAngle;
- (BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDownEvent:(UIEvent *)event atPoint:(CGPoint)point
{
    float dx = point.x - self.mainPieChart.centerAnchor.x;
    float dy = point.y - self.mainPieChart.centerAnchor.y;
    deltaAngle = atan2(dy, dx);
    return YES;
}
This saves the angle of the first touch point.
Then, while dragging, I use the difference to drive the rotation
(at least, that was the intent):
- (BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDraggedEvent:(UIEvent *)event atPoint:(CGPoint)point
{
    int x = self.mainPieChart.centerAnchor.x;
    int y = self.mainPieChart.centerAnchor.y;
    float dx = point.x - x;
    float dy = point.y - y;
    double a = atan2(dx, dy);
    float angleDifference = deltaAngle - a;
    self.mainPieChart.startAngle = -angleDifference;
    return YES;
}
And here is an image of it (in landscape mode), though I think I've covered most of the details already:
http://postimg.org/image/bey0fosqj/
The fact is, I think this would be the most appropriate method to implement, but somehow I cannot get it called (I'm pretty sure I already set self.mainPieChart's delegate and data source to self):
- (BOOL)pointingDeviceDraggedEvent:(id)event atPoint:(CGPoint)interactionPoint
(After further testing)
Interesting: after printing out different values from the shouldHandlePointingDevice... methods (by simply clicking), I think I have some ideas now.
The self.mainPieChart.centerAnchor.x / y values always return 0.5 (both).
However, point.x and point.y return values varying from 1 to 500+.
It seems I am comparing two things that sit on top of each other but are measured from different perspectives.
Likely the plot space delegate setup messed that up.
============================================================
So, as of now I still don't know how to get -(BOOL)pointingDeviceDraggedEvent:(id)event atPoint:(CGPoint)interactionPoint called. I tried putting it in an if statement like
if ([self.mainPieChart pointingDeviceDownEvent:event atPoint:self.mainPieChart.centerAnchor] == YES)
inside my touch handler, but nothing happened. Never mind.
Back to the point: my current solution works well now, even after applying padding.
float x = (self.hostView.bounds.size.width + self.hostView.hostedGraph.paddingLeft) * self.mainPieChart.centerAnchor.x;
float y = self.hostView.bounds.size.height * self.mainPieChart.centerAnchor.y;
float dx = point.x - x;
float dy = point.y - y;
double a = atan2(dx, dy);
These lines are the same for both the press and drag handlers. For the drag handler,
float angleDifference = deltaAngle - a;
self.mainPieChart.startAngle = angleDifference;
are added before the end.
However, the case is slightly different when the pie chart is not in the middle, or, in other words, when the graph holding the pie chart is padded
(my example happens to be centred just to make it easy).
You simply have to modify the x and y float values above; it's easier than I expected.
For example, if I have
graph.paddingLeft = -300.0f;
the value of float x in both the press and drag handlers becomes
float x = (self.hostView.bounds.size.width + self.hostView.hostedGraph.paddingLeft) * self.mainPieChart.centerAnchor.x;
The pie chart's centerAnchor is given as fractions of the graph's width and height. Be sure to multiply the anchor values by the corresponding dimensions of the graph before computing dx and dy.
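Putting the pieces together, here is a minimal sketch of the two delegate methods with the anchor converted to view coordinates first. The pieCenterInViewCoordinates helper is just illustrative, and this assumes an unpadded, centred graph; fold in the padding terms as described above:

static float deltaAngle;

- (CGPoint)pieCenterInViewCoordinates
{
    // centerAnchor is a fraction (0..1) of the graph's size,
    // so scale it by the hosting view's bounds to get pixel coordinates.
    return CGPointMake(self.hostView.bounds.size.width * self.mainPieChart.centerAnchor.x,
                       self.hostView.bounds.size.height * self.mainPieChart.centerAnchor.y);
}

- (BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDownEvent:(UIEvent *)event atPoint:(CGPoint)point
{
    CGPoint center = [self pieCenterInViewCoordinates];
    deltaAngle = atan2(point.x - center.x, point.y - center.y); // remember the starting angle
    return YES;
}

- (BOOL)plotSpace:(CPTPlotSpace *)space shouldHandlePointingDeviceDraggedEvent:(UIEvent *)event atPoint:(CGPoint)point
{
    CGPoint center = [self pieCenterInViewCoordinates];
    double a = atan2(point.x - center.x, point.y - center.y);
    self.mainPieChart.startAngle = deltaAngle - a; // rotate by the drag delta
    return YES;
}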

CGPointEqualToPoint not working

I am making an app in which I want something to happen when an image moves over another point.
First, I have an image moving horizontally across the screen:
self.ball.center = CGPointMake(self.ball.center.x + self.gravity.x / 8.0 * 200.0, 9);
Then, when it gets to a certain place, another image moves down from that spot:
CGPoint a = CGPointMake(9, 9);
if (CGPointEqualToPoint(ball.center, a)) {
    balla.hidden = NO;
    self.balla.center = CGPointMake(self.balla.center.x, self.balla.center.y - (self.gravity.y / 8.0 * 200.0));
}
The first time it works OK, but when I add the next statement to move another image down from another spot, nothing happens:
CGPoint b = CGPointMake(86, 9);
if (CGPointEqualToPoint(ball.center, b)) {
    ball2a.hidden = NO;
    self.ball2a.center = CGPointMake(self.ball2a.center.x, self.ball2a.center.y - (self.gravity.y / 8.0 * 200.0));
}
Any ideas as to why this isn't working?
If you're moving the ball by adding some floating-point offset, you might be "skipping over" the point b: you may never land exactly on it, but rather appear slightly before it and then slightly after it.
Rather than testing whether you're exactly equal to the point, you would be better off computing the distance between the ball and the point and checking whether it is within some small radius. A simple Euclidean distance works:
CGFloat euclid_dist(CGPoint a, CGPoint b)
{
    return sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y));
}
You could then use this to see if you've "hit" the point:
if (euclid_dist(ball.center, b) < 0.1)
{
    // React appropriately
}
In general it's problematic to test for equality between floating point numbers.
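As a side note, if this check runs every frame you can skip the sqrt entirely by comparing squared distances; a small sketch of the same test (the 0.1 radius is just an example value, tune it to your movement step):

CGFloat squared_dist(CGPoint a, CGPoint b)
{
    // Same comparison as euclid_dist(...) < r, but without the sqrt:
    // dist < r  <=>  dist^2 < r^2  (for non-negative r)
    return (b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y);
}

// Usage: compare against the radius squared.
if (squared_dist(ball.center, b) < 0.1 * 0.1)
{
    // React appropriately
}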

OpenCV: How to get corners of CvBox2D?

I need to find the corner positions of a CvBox2D (or MCvBox2D) to map found contours onto a game object in XNA. I have a problem with correct translation of the rotation angle. I thought this was a kind of basic operation, but I can't find any solution on the Internet.
I tried:
rotationAngle = box.angle * (180.0 / CV_PI);
angle = box.angle;
box.angle = rotationAngle;
alien.X = box.center.X - box.Width / 2;
alien.Y = box.center.Y - box.Height / 2;
alien.angle = angle;
but it doesn't translate correctly.
Has anyone ever tried to get the corners of this kind of structure?
In EmguCV you just need to call
PointF[] corners = box.GetVertices();
if box is an MCvBox2D.
The simplest way to get the vertices of a CvBox2D is to convert it to a RotatedRect:
CvBox2D box = ...
cv::RotatedRect rr(box);
cv::Point2f vertices[4];
rr.points(vertices);
// vertices now holds the four corners you seek
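If you ever need the corner math yourself (say, to position the XNA object directly), the vertices can also be computed from the box's center, size, and angle. A minimal C sketch; the Point2Df type and boxCorners helper are illustrative only, and it assumes the angle is in degrees, as with cv::RotatedRect:

#include <math.h>

typedef struct { float x, y; } Point2Df;

/* Illustrative helper: computes the four corners of a rotated box
   given its center, size, and rotation angle in degrees. */
static void boxCorners(Point2Df center, float width, float height,
                       float angleDeg, Point2Df out[4])
{
    float a = angleDeg * (float)M_PI / 180.0f;
    float c = cosf(a), s = sinf(a);
    float hw = width / 2.0f, hh = height / 2.0f;
    /* Half-extent offsets of the four corners before rotation */
    float ox[4] = { -hw,  hw,  hw, -hw };
    float oy[4] = { -hh, -hh,  hh,  hh };
    for (int i = 0; i < 4; i++) {
        out[i].x = center.x + ox[i] * c - oy[i] * s;
        out[i].y = center.y + ox[i] * s + oy[i] * c;
    }
}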

UIImage transform/scaling issues

Finally, I have a reason to ask something, instead of scouring endless hours of the joys of Stack Overflow.
Here's my situation: I have a UIImageView with one UIImage inside it. I'm transforming the entire UIImageView via CGAffineTransforms, to scale it height-wise and keep it at a specific angle.
I'm feeding it this transform data through two CGPoints, so it's essentially just calculating the angle and scale between these two points and transforming.
Now, the transforming works like a charm, but I recently came across the UIImage method resizableImageWithCapInsets:, which works just fine if you set the frame of the image manually (i.e. scale using a frame). It seems that using transforms overrides this, which I guess is to be expected, since it's Core Graphics doing its thing.
My question is: how would I go about either a) adding cap insets after transforming the image, or b) doing the angle and scaling via a frame?
Note that the two points providing the data are touch points, so they can differ a great deal, which is why creating a scaled rectangle at a specific angle is tricky at best.
To keep you code-hungry geniuses happy, here's a snippet of how I currently handle scaling (cap insets are only applied when creating the UIImage):
float xDiff = (PointB.x - PointA.x) / 2;
float yDiff = (PointB.y - PointA.y) / 2;
float angle = [self getRotatingAngle:PointA secondPoint:PointB];

// Centre the view midway between the two touch points
CGPoint pDiff = CGPointMake(PointA.x + xDiff, PointA.y + yDiff);
self.center = pDiff;

// Set up a new transform with a scale and an angle
double distance = sqrt(pow((PointB.x - PointA.x), 2.0) + pow((PointB.y - PointA.y), 2.0));
float scale = 1.0 * (distance / self.image.size.height);
CGAffineTransform transformer = CGAffineTransformConcat(CGAffineTransformMakeScale(1.0, scale),
                                                        CGAffineTransformMakeRotation(angle));

// Apply the transform
self.transform = transformer;
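(For reference, getRotatingAngle:secondPoint: isn't shown in the snippet; a hypothetical implementation that matches the usage above, aligning the image's height axis with the line between the two touch points, might look like this:)

- (float)getRotatingAngle:(CGPoint)pointA secondPoint:(CGPoint)pointB
{
    // Hypothetical helper: angle of the A->B line from the positive x-axis,
    // shifted by -90 degrees so a vertical drag maps to zero rotation
    // (the scale above stretches the image along its height).
    float dx = pointB.x - pointA.x;
    float dy = pointB.y - pointA.y;
    return atan2f(dy, dx) - (float)M_PI_2;
}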
Adding a proper answer to this. The answer to the problem can be found here.

XNA isometric tiles rendering issue

I'm currently working on an XNA game prototype. I'm trying to achieve an isometric view of the game world (or is it orthographic? I'm not sure which is the right term for this projection; see the pictures).
The world should be a tile-based world made of cubic tiles (e.g. similar to Minecraft's world), and I'm trying to render it in 2D by using sprites.
So I have a sprite sheet with the top face of the cube, the front face, and the (visible) side face. I draw the tiles using three separate calls to spriteBatch.Draw: one for the top, one for the side, and one for the front, using a source rectangle to pick the face I want to draw and a destination rectangle to set the position on the screen, according to a formula that converts from 3D world coordinates to isometric (orthographic?) screen coordinates.
(sample sprite image not included)
This works well as long as I just draw the faces, but if I try to draw the fine edges of each block (as a tile grid) I get a seemingly random rendering pattern, in which some lines are overwritten by the face itself and some are not.
Please note that for my world representation, X runs left to right, Y runs from inside the screen outwards, and Z runs top to bottom.
In this example I'm working only with the top-face edges. Here is what I get (picture):
I don't understand why some of the lines are shown and some are not.
The rendering code I use is as follows (note that in this example I'm only drawing the topmost layer in each dimension):
/// <summary>
/// Draws the world
/// </summary>
/// <param name="spriteBatch"></param>
public void draw(SpriteBatch spriteBatch)
{
    Texture2D tex = null;

    // DRAW TILES
    for (int z = numBlocks - 1; z >= 0; z--)
    {
        for (int y = 0; y < numBlocks; y++)
        {
            for (int x = numBlocks - 1; x >= 0; x--)
            {
                myTextures.TryGetValue(myBlockManager.getBlockAt(x, y, z), out tex);
                if (tex != null)
                {
                    // TOP FACE
                    if (z == 0)
                    {
                        drawTop(spriteBatch, x, y, z, tex);
                        drawTop(spriteBatch, x, y, z, outlineTexture);
                    }

                    // FRONT FACE
                    if (y == numBlocks - 1)
                        drawFront(spriteBatch, x, y, z, tex);

                    // SIDE FACE
                    if (x == 0)
                        drawSide(spriteBatch, x, y, z, tex);
                }
            }
        }
    }
}

private void drawTop(SpriteBatch spriteBatch, int x, int y, int z, Texture2D tex)
{
    // Project 3D block coordinates to 2D screen coordinates
    int pX = OffsetX + (int)(x * TEXTURE_TOP_X_OFFRIGHT + y * TEXTURE_SIDE_X);
    int pY = OffsetY + (int)(y * TEXTURE_TOP_Y + z * TEXTURE_FRONT_Y);
    topDestRect.X = pX;
    topDestRect.Y = pY;
    spriteBatch.Draw(tex, topDestRect, TEXTURE_TOP_RECT, Color.White);
}
I tried a different approach: creating a second three-tier nested for loop after the first one, keeping the top-face drawing in the first loop and the edge highlighting in the second (I know this is inefficient, and I should probably also avoid a method call per tile, but I'm just trying to get it working for now).
The results are somewhat better but still not as expected; the top rows are missing, see picture:
Any idea why I'm having this problem? In the first approach it might be a sort of z-fighting, but I'm drawing the sprites in a precise order, so shouldn't they overwrite what's already there?
Thanks everyone
Whoa, sorry guys, I'm an idiot :) I started the batch with SpriteBatch.Begin(SpriteSortMode.BackToFront) but didn't set any depth value in the Draw calls.
I should have used SpriteSortMode.Deferred! It's working fine now. Thanks everyone!
Try tweaking the sizes of your source and destination rectangles by 1 or 2 pixels. I have a sneaking suspicion this has something to do with the way these rectangles are handled as 'outlines' of the area to be rendered, a sort of off-by-one problem. This is not expert advice, just a fellow coder's intuition.
It looks like a sub-pixel precision or scaling issue. Also try to ensure your texture/tile width and height are powers of two (32, 64, 128, etc.), as that could lessen the effect as well. It's really hard to tell just from those pictures.
I don't know how (or whether) you scale everything, but you should avoid rounding wherever possible (especially inside your drawTop() method). Every time you round a position or coordinate, chances are good you introduce error or random offsets. Try using floating-point coordinates (e.g. the Vector2 overloads of SpriteBatch.Draw) instead of integers.
