I'm programmatically using SKTileMapNode. The code is C# (Xamarin.iOS) but should be readable by every Swift/ObjC developer.
The problem is that the sorting of tiles in isometric projection seems to be incorrect and I cannot see why. To test, the map has 1 row and 5 columns.
See the screenshot:
The tiles at 0|0 and 2|0 are in front of the others. The pyramid-styled tile at 4|0, however, is drawn correctly in front of the one at 3|0.
I'm using two simple tiles:
The first one has a resolution of 133x83px and the second one is 132x131px.
This is what it looks like in Tiled and what I am trying to reproduce:
The tile map is setup and added to the scene using the following code:
var tileDef1 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_014"));
var tileDef2 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_036"));
var tileGroup1 = new SKTileGroup (tileDef1);
var tileGroup2 = new SKTileGroup (tileDef2);
var tileSet = new SKTileSet (new [] { tileGroup1, tileGroup2 }, SKTileSetType.Isometric);
var tileMap = SKTileMapNode.Create(tileSet, 5, 2, new CGSize (128, 64));
tileMap.Position = new CGPoint (0, 0);
tileMap.SetTileGroup (tileGroup1, 0, 0);
tileMap.SetTileGroup (tileGroup2, 1, 0);
tileMap.SetTileGroup (tileGroup1, 2, 0);
tileMap.SetTileGroup (tileGroup2, 3, 0);
tileMap.SetTileGroup (tileGroup2, 4, 0);
tileMap.AnchorPoint = new CGPoint (0, 0);
Add (tileMap);
I first suspected an incorrect tile size. The tile size used to initialise the tile map (128|64) is the size of the diamond-shaped base of the tile. For a flat tile, this is identical to the texture size; for tiles with a height, it differs. However, changing the tile size affects the alignment of the tiles, and the size I'm using is the same as in Tiled, where it gives the correct result, so that cannot be the culprit.
What am I doing wrong or where am I thinking wrong?
I don't have an answer for the main issue, but the tile size you use to initialise the map (128|64) is the size of the base tile, used to convert isometric coordinates into the actual orthogonal space.
If you change this size in Tiled, it'll affect the size of the positioning grid. You can see a similar effect in Xcode's built-in tile map editor (or in any other isometric tile map editor I guess).
I'm trying to create something like canvas in SceneKit using an SCNBox, with a UIImage "wrapped" around from one surface and onto the four others adjacent to it.
The only way I can currently think to do this would be to chop up the UIImage into five separate images and put those onto the sides as materials, but I'm sure there must be an easier way.
Can anyone steer me in the right direction here? The box will have a separate texture/material on the side opposite the "front".
The easiest way would probably be to create a custom geometry with matching texture coordinates using +geometryWithSources:elements:
You can use the contentsTransform property of SCNMaterialProperty to map the needed region of your image onto each face of the SCNBox.
Here's an explanation with a simplified example:
Let's suppose you are using a cube and have a texture like this:
Dividing it into rectangles, you get:
You want to skip rectangles 1, 3, 7 and 9, and cover your cube with the rest of the texture.
To do this, normalize the size of a side of your SCNBox to the range 0 to 1, and use it to set the scale and translation in the contentsTransform matrix.
My cube has equal sides in this example, so each side covers a third of the whole texture. To take rectangle 5 from the texture:
let normalizedWidth: Float = 1.0 / 3.0  // floating-point division; a literal 1/3 would be integer 0
let normalizedHeight: Float = 1.0 / 3.0
let xOffset: Float = 1  // skip the first column (rectangles 1, 4, 7)
let yOffset: Float = 1  // skip the first row (rectangles 1, 2, 3)
let sideMaterial = SCNMaterial()
sideMaterial.diffuse.contents = textureImage
let scaleMatrix = SCNMatrix4MakeScale(normalizedWidth, normalizedHeight, 0.0)
sideMaterial.diffuse.contentsTransform = SCNMatrix4Translate(scaleMatrix,
    normalizedWidth * xOffset, normalizedHeight * yOffset, 0.0)
You can fill five sides with configured materials and the last one (on the back) with just a color, and set them to the materials property of your SCNBox, as sketched below.
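A minimal sketch of that assembly, assuming the 3x3 texture grid and the textureImage variable from above (the helper name and the face-to-rectangle mapping are illustrative):

func sideMaterial(xOffset: Float, yOffset: Float) -> SCNMaterial {
    let material = SCNMaterial()
    material.diffuse.contents = textureImage
    // Show one third of the texture per face...
    let scale = SCNMatrix4MakeScale(1.0 / 3.0, 1.0 / 3.0, 0.0)
    // ...shifted to the wanted grid cell
    material.diffuse.contentsTransform = SCNMatrix4Translate(scale, xOffset / 3.0, yOffset / 3.0, 0.0)
    return material
}

let backMaterial = SCNMaterial()
backMaterial.diffuse.contents = UIColor.grayColor()  // plain color for the back

let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
// SCNBox applies materials in the order front, right, back, left, top, bottom
box.materials = [
    sideMaterial(1, yOffset: 1),  // center cell (rectangle 5)
    sideMaterial(2, yOffset: 1),  // cell to the right of center
    backMaterial,                 // back face: plain color
    sideMaterial(0, yOffset: 1),  // cell to the left of center
    sideMaterial(1, yOffset: 2),  // cell above center (with a bottom-left texture origin)
    sideMaterial(1, yOffset: 0)   // cell below center
]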
The result looks like this:
In SpriteKit, I can use touch locations to record "hits" on a target, where the center of the target (the bull's eye) has the coordinates (0, 0). After plenty of shooting, I fetch all hits as an array of CGPoints. Since the target is 500 x 500 points (an SKScene loaded from an .sks file), every hit can have an x position from -250 to +250, and likewise for the y position.
In the attached photo, the hits are registered as points at around (150, 150).
The problem arises when I will use the famous LFHeatMap https://github.com/gpolak/LFHeatMap.
+ (UIImage *)heatMapWithRect:(CGRect)rect
boost:(float)boost
points:(NSArray *)points
weights:(NSArray *)weights;
The LFHeatMap generates a UIImage based on these arrays, which I add to a UIImageView. The problem is that UIViews arrange coordinates differently from SKScenes: in UIKit the y-axis grows downwards from the top-left corner, while in SpriteKit it grows upwards.
func setHeatMap() {
    let points = getPointsFromCoreData()
    let weights = getWeightsFromCoreData()
    let rect = CGRectMake(-250, -250, 500, 500)
    let image = LFHeatMap.heatMapWithRect(rect, boost: 1, points: points, weights: weights)
    heatMapView.contentMode = UIViewContentMode.ScaleAspectFit
    heatMapView.image = image
}
Shots that hit lower on the target show up higher in the heat map, and vice versa.
How can I solve this? Either all points have to be converted to fit the other coordinate system, or the coordinates of the CGRect used to build the heat map must be changed. How can this be done?
This was embarrassingly easy once the solution occurred to me.
Run a loop through the points array and multiply each point.y by -1.
Then all the values on the y-axis are correct.
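A minimal sketch of that flip, assuming points is a Swift array of NSValue-wrapped CGPoints (the form LFHeatMap expects):

let flippedPoints = points.map { value -> NSValue in
    var p = value.CGPointValue()
    p.y *= -1  // mirror about the x-axis: SpriteKit's y-up becomes UIKit's y-down
    return NSValue(CGPoint: p)
}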
For resolution reasons, I resize the OpenGL view to twice its original size, like this:
NSInteger Dimension = 2;
self.glView = [[WQPaintGLView alloc] initWithFrame:CGRectMake(0, 0, width*Dimension, height*Dimension)];
CGAffineTransform tScale = CGAffineTransformMakeScale((float)1/Dimension, (float)1/Dimension);
CGAffineTransform tTranslate = CGAffineTransformTranslate(tScale, -width, -height);
self.glView.transform = tTranslate;
[self.canvasContainerView addSubview:self.glView];
But I get a strange issue, see:
I can only draw stuff in the left bottom 1/4 area.
What am I doing wrong?
UIView transforms and OpenGL do not mix very well. Resizing the view after the OpenGL initialization can also be troublesome; in most cases a new render buffer must be created from the view.
Anyway, since you scaled the view to have a larger surface, you should check the following calls:
glViewport defines what part of the buffer you are writing to. Usually it is set to (0, 0, viewWidth, viewHeight). In your case it must include the scale as well (see the sketch after this list).
glOrtho (or glFrustum) defines your coordinate system, if used. It should most likely stay the same no matter the view scale.
Check any other matrix usage or scissor rects that may be derived from the view's frame.
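For the first point, a minimal sketch in Swift (the view size here is an assumed placeholder; 2 is the scale factor from the question):

import OpenGLES

// Assumed logical size of the view and the question's 2x factor
let width = 512, height = 384, scale = 2
// The drawable is scale times the logical view size, so the viewport must cover all of it
glViewport(0, 0, GLsizei(width * scale), GLsizei(height * scale))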
By all means, if possible, remove the transform on the view and try to find a better solution.
I use an image view:
@IBOutlet weak var imageView: UIImageView!
to paint an image, plus another image which has been rotated. It turns out that the rotated image has very bad quality. In the following image, the glasses in the yellow box are not rotated; the glasses in the red box are rotated by 4.39 degrees.
Here is the code I use to draw the glasses:
UIGraphicsBeginImageContext(imageView.image!.size)
imageView.image!.drawInRect(CGRectMake(0, 0, imageView.image!.size.width, imageView.image!.size.height))
var drawCtxt = UIGraphicsGetCurrentContext()
var glassImage = UIImage(named: "glasses.png")
let yellowRect = CGRect(...)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.yellowColor().CGColor)
CGContextStrokeRect(drawCtxt, yellowRect)
CGContextDrawImage(drawCtxt, yellowRect, glassImage!.CGImage)
// paint the rotated glasses in the red square
CGContextSaveGState(drawCtxt)
CGContextTranslateCTM(drawCtxt, centerX, centerY)
CGContextRotateCTM(drawCtxt, 4.398 * CGFloat(M_PI) / 180)
var newRect = yellowRect
newRect.origin.x = -newRect.size.width / 2
newRect.origin.y = -newRect.size.height / 2
CGContextAddRect(drawCtxt, newRect)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.redColor().CGColor)
CGContextSetLineWidth(drawCtxt, 1)
// draw the red rect
CGContextStrokeRect(drawCtxt, newRect)
// draw the image
CGContextDrawImage(drawCtxt, newRect, glassImage!.CGImage)
CGContextRestoreGState(drawCtxt)
How can I rotate and paint the glasses without losing quality or getting a distorted image?
You should use UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) to create the initial context. Passing in 0.0 as the scale will default to the scale of the current screen (e.g., 2.0 on an iPhone 6 and 3.0 on an iPhone 6 Plus).
See this note on UIGraphicsBeginImageContext():
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
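For example, the first line of the drawing code above would become:

// false = non-opaque; 0.0 = use the device's screen scale, giving a retina-sized context
UIGraphicsBeginImageContextWithOptions(imageView.image!.size, false, 0.0)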
As others have pointed out, you need to set up your context to allow for retina displays.
Aside from that, you might want to use a source image that is larger than the target display size and scale it down. (2X the pixel dimensions of the target image would be a good place to start.)
Rotating to odd angles is destructive. The graphics engine has to map a grid of source pixels onto a different grid where they don't line up. Perfectly straight lines in the source image are no longer straight in the destination image, etc. The graphics engine has to do some interpolation, and a source pixel might be spread over several pixels, or less than a full pixel, in the destination image.
By providing a larger source image you give the graphics engine more information to work with. It can better slice and dice those source pixels into the destination grid of pixels.
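A sketch of that idea applied to the question's drawing code (the larger asset name is hypothetical):

// Draw a higher-resolution source into the same destination rect; downsampling
// during the rotated draw preserves more detail than rotating a 1x image
let largeGlasses = UIImage(named: "glasses_large.png")  // hypothetical asset at 2x the drawn size
CGContextDrawImage(drawCtxt, newRect, largeGlasses!.CGImage)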
When you have to display a series of visual components (sprites) within a game, each taking a literal height and width that needs to be relative to the height and width of the viewport (not necessarily the aspect ratio) of the target device:
Is there a scaling class to help come up with a scaling ratio dynamically, based on the current device's viewport size?
Will I need to roll my own scaling-ratio algorithm?
Any cross platform issues I should be aware of?
This is not a question about loading assets based on the target device, nor about how to perform the scaling of a sprite (which is described here: http://msdn.microsoft.com/en-us/library/bb194913.aspx); rather, it is a question of how to determine the scale of sprites based on the viewport size.
You can always create your own implementation of scaling.
For example, the default target viewport dimensions are:
const int defaultWidth = 1280, defaultHeight = 720;
And your current screen dimensions are 800×600, which gives you (let's use a Vector2 instead of two floats):
int currentWidth = GraphicsDevice.Viewport.Width,
    currentHeight = GraphicsDevice.Viewport.Height;
// Cast to float to avoid integer division, which would truncate to 0
Vector2 scale = new Vector2((float)currentWidth / defaultWidth,
                            (float)currentHeight / defaultHeight);
This gives you {0.625; 0.83333}. You can now use this in the handy SpriteBatch.Draw() overload that takes a Vector2 scaling parameter:
public void Draw (
Texture2D texture,
Vector2 position,
Nullable<Rectangle> sourceRectangle,
Color color,
float rotation,
Vector2 origin,
Vector2 scale,
SpriteEffects effects,
float layerDepth
)
Alternatively, you can draw all your stuff to a RenderTarget2D and copy the resulting image from there to a stretched texture on the main screen. That still requires the SpriteBatch.Draw() overload above, but it might save you time if you have lots of draw calls.
Another option to generate the scale would be to leverage:
var scaleMatrix = Matrix.CreateScale(
    (float)GraphicsDevice.Viewport.Width / View.Width,
    (float)GraphicsDevice.Viewport.Height / View.Height, 1f);
http://msdn.microsoft.com/en-gb/library/bb195692.aspx.
But this did not meet my needs, as I would then have had to roll my own transform to map touch-input locations to the 'transformed' sprites (which respond to user touch input by knowing their own position and size).
In the end I used a percentage based approach.
I basically got the viewport height and width...
GraphicsDevice.Viewport.Width
GraphicsDevice.Viewport.Height
...then calculated the height and width of my sprites (note: as mentioned in the question, they take a literal height and width) based on their size relative to the screen, using percentages.
// I want the button's height and width to be 20% of the viewport
var x = GraphicsDevice.Viewport.Width * 0.2f;  // 20% of screen width
var y = GraphicsDevice.Viewport.Height * 0.2f; // 20% of screen height
var btnSize = new Vector2(x, y);
var button = new GameButton(btnSize);
Then, once I have the size of the button, I can calculate its position on the screen based on the size of the button and the available viewport size, again working with relative positions based on percentages.