Body Type = Alpha Mask Scaling Issue in Sprite Kit - iOS

I am creating a Sprite Kit game where it's important for me to be able to exclude transparent parts of SKNode textures when listening for touches (touchesBegan).
The solution that works for me is to set the physics body of the nodes to
Body Type = Alpha mask.
In the touchesBegan method I can then iterate over all physics bodies located at the point of the touch.
The PROBLEM is that, while this works, the area of the masked body seems to be bigger than the actual size of the node, although the shape of the body is correct.
In fact, the physics body seems to be as big as the original image was before I set the node to be, let's say, half the size of the original image.
I have seen this behaviour previously in Sprite Kit.
My scene in Interface Builder is 1024 points wide and I use AspectFill as the scaling option. What I can see is that if I print the scene's size it actually reports 375 when running on the iPhone 6s (matching the point width of the iPhone 6s).
But if, for example, I want a node to be as wide as the screen I have to set its width to 1024.
Can someone please help me understand this, as it seems to be a scaling problem?
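
For reference, here is a minimal sketch (in current Swift; the class and method names are assumptions, not the original project's code) of the programmatic equivalent of this setup. The key detail is that a texture-based physics body keeps whatever size it was created with, so it should be created from the node's final, scaled size:

import SpriteKit

class TouchScene: SKScene {

    func addMaskedSprite(imageNamed name: String, at position: CGPoint) {
        let texture = SKTexture(imageNamed: name)
        let sprite = SKSpriteNode(texture: texture)
        // Resize the node first...
        sprite.size = CGSize(width: texture.size().width / 2,
                             height: texture.size().height / 2)
        // ...then build the alpha-mask body from the node's *final* size.
        // Building it from texture.size() reproduces the oversized body.
        sprite.physicsBody = SKPhysicsBody(texture: texture, size: sprite.size)
        sprite.physicsBody?.isDynamic = false
        sprite.position = position
        addChild(sprite)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }
        // Only bodies whose alpha-masked shape contains the point are reported.
        physicsWorld.enumerateBodies(at: location) { body, _ in
            print("touched:", body.node?.name ?? "unnamed node")
        }
    }
}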

Related

Sprite Kit Size Change

I have found this bizarre thing in SpriteKit. When I create a sprite node it has a width and height of 100.
However, when I make that sprite node the child of another node, its width and height change bizarrely.
What is this bizarre behaviour, and how can I prevent it from happening? It has started messing with many of my projects, as programmatically created squares become rectangles.
The sprite's scales are thrown out of whack. In reality the sprite still has a size of 100 x 100; it just doesn't have a scale of (1, 1) anymore. I think this is a bug in the Scene Editor, and I find it mostly occurs when I resize a sprite visually rather than by typing in size changes. To fix it, just change the scales back to 1 and then manually change the size back to what it should be in the Attributes panel.
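
A hedged sketch of that fix done in code (the helper name and the 100 x 100 target size are just illustrations of this question's example) would walk the scene's children and normalize any sprite whose scale has drifted:

import SpriteKit

func normalizeSprites(in scene: SKScene, intendedSize: CGSize) {
    for case let sprite as SKSpriteNode in scene.children {
        if sprite.xScale != 1 || sprite.yScale != 1 {
            sprite.setScale(1)             // reset scale back to (1, 1)
            sprite.size = intendedSize     // then restore the intended size
        }
    }
}

// Usage, matching the 100 x 100 squares above:
// normalizeSprites(in: self, intendedSize: CGSize(width: 100, height: 100))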

Dealing with different iOS device resolutions in SpriteKit

I'm playing around with SpriteKit in Xcode 6, iOS 8 beta 5. Everything is laid out and working perfectly on the iPhone 4S simulator; however, when switching to the 5S, the elements at the bottom of the screen are cut off.
It was my understanding that the bottom left corner of the iPhone screen should be CGPoint(0, 0), but after checking the location by printing the coordinates to the console, I found that the lowest point of the left corner I could tap was around (5, 44). Is there something wrong in my scene setup that's causing this?
No changes have been made to the GameViewController file and even after I strip the GameScene file the problem persists.
Can anyone at least point me in the right direction with this?
Adding the following code will fix your problem (code is in Swift):
scene.scaleMode = SKSceneScaleMode.ResizeFill
Now if you want to know why this fixes your problem, what your problem actually is, and how to handle multiple resolutions – I suggest you continue reading.
There are three things that can impact the position of nodes in your scene.
1) Anchor Point
Make sure your scene's anchor point is set to (0,0), bottom left. By default the scene's anchor point starts at (0,0), so I'm assuming that is not causing the issue.
2) Size
Check the size of your scene. I typically make my scene size match the size of the device (i.e. iPad, iPhone 4-inch, iPhone 3.5-inch), then I place another layer in the scene for storing my nodes. This lets me do a scrolling effect for devices with smaller resolutions, but it depends on your game of course. My guess is that your scene size might be set to 320 x 480, which could be causing the positioning problems on your iPhone 5s.
3) Scale Mode
The scale mode has a huge effect on the positioning of nodes in your scene. Make sure you set the scale mode to something that makes sense for your game. The scale mode kicks in when your scene size does not match the size of the view, so its purpose is to let Sprite Kit know how to deal with this situation. My guess is that you have the scene size set to 320 x 480 and the scene is being scaled to match the iPhone 5 view, which will cause positioning problems identical to what you described. Below are the various scale modes you can set for your scene.
SKSceneScaleMode.AspectFill
The scaling factor of each dimension is calculated and the larger of the two is chosen. Each axis of the scene is scaled by the same scaling factor. This guarantees that the entire area of the view is filled, but may cause parts of the scene to be cropped.
SKSceneScaleMode.AspectFit
The scaling factor of each dimension is calculated and the smaller of the two is chosen. Each axis of the scene is scaled by the same scaling factor. This guarantees that the entire scene is visible, but may require letterboxing in the view.
SKSceneScaleMode.Fill
Each axis of the scene is scaled independently so that each axis in the scene exactly maps to the length of that axis in the view.
SKSceneScaleMode.ResizeFill
The scene is not scaled to match the view. Instead, the scene is automatically resized so that its dimensions always match those of the view.
Conclusion
It looks like you want to remove the scaling of your scene so that positions in the scene match the actual positions in the view. You can either set your scene's size to match the view size, in which case no scaling will take place, or you can set your scene's scale mode to ResizeFill, which will always make the scene's size match your view's size and won't scale anything. In general I would stay away from any scaling and instead adjust the interface and the scene size to best suit each device. You may also want to add zoom and/or scrolling to allow devices with smaller resolutions to achieve the same field of view.
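
As a concrete illustration of the no-scaling option, a minimal sketch (assuming the template's GameScene and SKView; written in current Swift, where ResizeFill is spelled .resizeFill) could look like this:

import UIKit
import SpriteKit

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        guard let skView = view as? SKView else { return }
        // Size the scene to the view so scene coordinates map 1:1 to points.
        let scene = GameScene(size: skView.bounds.size)
        scene.scaleMode = .resizeFill   // keeps them matched if the view resizes
        skView.presentScene(scene)
    }
}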
But what if I want to scale my scene?
If however you need to scale your scene, but you still want positions to be relative to the view (i.e. You want (0,0) to be the bottom left of screen even when scene is cutoff) then see my answer here
Additional Info
See answer here for sample code showing how I layout nodes dynamically.
See answer here for more details about scaling to support multiple devices.
If you want to preserve the size of your scene (usually desired when you work with a fixed size and coordinate system), you might want to add padding to either side of your scene. This would remove the letterboxing and preserve all the physics and dynamics of your app on any platform.
I created a small Framework to help with this:
https://github.com/Tokuriku/tokuriku-framework-stash
Just:
Download the ZIP file for the Repository
Open the "SceneSizer" sub-folder
Drag the SceneSizer.framework "lego block" into your project
Make sure that the Framework is Embedded and not just Linked
Import the Framework somewhere in your code: import SceneSizer
And you're done; you can now call the sizer class with:
SceneSizer.calculateSceneSize(#initialSize: CGSize, desiredWidth: CGFloat, desiredHeight: CGFloat) -> CGSize
Just in case, try pressing Cmd+1; it worked for me. Some of the elements were cut off because they were simply not displayed in the Simulator - I stress this, it is just a Simulator feature (and a bug if you ask me; I wasted hours solving this). The Cmd+2 and Cmd+3 views can sometimes hide parts of the scene.

SpriteKit - Bottom of screen coordinates

What are the coordinates for the bottom of the screen... or how can I create a "floor" at the bottom of the screen in SpriteKit?
Sorry, but I don't understand screen coordinates that well in SpriteKit.
You need to understand the Sprite Kit coordinate system as explained in Apple's Documentation here.
Here's how you create a floor at the bottom of the screen in SpriteKit:
SKNode *floor = [SKNode node];
// Edge loop along the bottom edge of the scene, spanning its full width.
floor.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:CGRectMake(CGRectGetMinX(self.frame), CGRectGetMinY(self.frame), CGRectGetWidth(self.frame), 1)];
[self addChild:floor];
You need a universal approach to get the coordinates of the screen corners.
Using the code from that answer, you can get a CGRect with the necessary information.
Example:
let screenRect = getVisibleScreen(
    sceneRect: self.scene!.frame,
    viewRect: self.view!.frame)
And then you can use it:
screenRect.minX
screenRect.maxX
screenRect.minY
screenRect.maxY
screenRect.width
screenRect.height
This is more than enough to calculate the coordinates of a "floor" or any other relative positions.
The location of the bottom of the screen will depend on what coordinate system you are using for your scene.
Out of the box, the bottom of the screen will be at y coordinate zero, but there are a few things that can happen that will affect that.
For instance, if you are using the scene editor in Xcode and your scene's anchorPoint property is something other than y=0, then the "origin" of your scene will not be at the bottom of the screen. In a recent Xcode beta, the default behavior changed to put the scene's origin at the center of the scene instead of the lower left corner, so that would explain why you might see things in the center of the screen when you expect them to be at the bottom.
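A quick way to rule that out (a one-line sketch; (0, 0) is the bottom-left default this answer refers to) is to reset the anchor point in code:
// Force the scene's origin back to the bottom-left corner.
scene.anchorPoint = CGPoint(x: 0, y: 0)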
Also, the "bottom of the screen" will be relative to whatever parenting structure you have in your scene. For instance, if you place a background sprite in your scene, and want to attach a floor sprite to that which is at the bottom of the screen, you'll have to do some computing to figure out where to place it because you are going to inherit the translation and rotation of the floor's parent node (and any parents that node has).
To keep things simple, you can just place everything directly on the stage and manage their z-order manually. This will let you, basically, use the same coordinate system for everything. This is often fine; as long as you're not trying to do anything complex with your sprites, you don't need a complicated "tree" of nodes.
But even with this approach, the metrics of your scene are going to have to be handled dynamically. The width and height of your scene are going to depend on how you approach displaying your scene on different devices with different sizes. For instance, the top right of an iPhone 4 is going to be in a different place than the top right of an iPad Pro. A full discussion of how to deal with that is beyond the scope of your question, but generally, you'll probably want to use a "reference width" or a "reference height" for your scene, use .AspectFit or .AspectFill for the scaleMode, and set your scene's size accordingly. (I.e., inspect the view's frame to get the actual aspect ratio of your scene and set your scene size to match your reference metric on one axis and scale the other axis to match the device's aspect ratio.) This will let you use the same metrics for all devices (although one of your two axes will be fluid).
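
That reference-metric approach can be sketched as follows (the helper name and the 1024-point reference width are assumptions for illustration): fix one axis at the reference value and derive the other from the device's actual aspect ratio.

import SpriteKit

func sceneSize(for view: SKView, referenceWidth: CGFloat = 1024) -> CGSize {
    // Width is fixed at the reference metric; height follows the device.
    let aspect = view.bounds.height / view.bounds.width
    return CGSize(width: referenceWidth, height: referenceWidth * aspect)
}

// Usage:
// let scene = GameScene(size: sceneSize(for: skView))
// scene.scaleMode = .aspectFill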

Why do gaps between tiles in an orthogonal tilemap cocos2d game appear when running on iPhone?

I'm trying to make a tilemap-based game using cocos2d 2.1 and Tiled 0.9.1. The game runs perfectly in the simulator, but I get gaps (artifact lines) between the tiles when running on the device.
Please see the screenshot.
The diff is the difference (made in Photoshop) between the original tile (taken straight from the PNG of the tileset) and the tile as rendered by cocos2d. As you can see, in the simulator they are 100% identical. However, on the device it seems that cocos2d shrinks the tile texture vertically by just a little bit. The 1-pixel stripe is actually the texture above the troublesome tile in the tileset.
Any idea what caused this and how to fix it?
While using this answer, in my case enabling CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL was not enough.
I also added the following code to the AppDelegate::applicationDidFinishLaunching() function and rounded the values passed to setPosition(x, y) to the nearest integer.
Director::getInstance()->setProjection(Director::Projection::_2D);
I use cocos2d-x 3.4.
I'm not certain why this happens on devices only, but you should look in ccConfig.h at the parameter CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL. It is in itself a bad kludge, but it gives you a hint as to where to look.
Basically, you should make certain that all your positions land on an exact pixel boundary: on non-retina devices cast them to int, and on retina devices round to the nearest exact multiple of 0.5. The best way to ensure that is to make all your texture widths and heights even numbers; the onus is on the artist for anything that will not move. If you move things and the final position is calculated (for example in a ccTouches move/end handler), make certain you do this rounding there. Beware of batch nodes: the node itself and all its children should be on pixel boundaries.

How to apply full-screen SKEffectNode for post-processing in SpriteKit

I'm trying out SpriteKit with the following setup:
An SKScene with two child nodes used merely for grouping other nodes: foreground and background.
background is really empty as of now, but would eventually hold some type of background sprite / layers.
foreground is an SKEffectNode, and whenever the user taps on the screen, a new instance of an SKNode subclass which represents a game element is added as a child to it.
This SKNode subclass basically creates three SKShapeNodes (an outer circumference, an inner circumference, and an inner quarter circumference) and two labels. The inner quarter circumference has an SKAction that makes it rotate forever about its origin / center.
Now here's the issue: as long as foreground doesn't have any CIFilter or has shouldEnableEffects = NO, everything is fine. That is, I can tap on the screen and my game elements are instantiated and added to the main scene. But the minute I try adding a CIGaussianBlur or CIBloom to the foreground, I notice two things:
The framerate drops to about 2 fps. Mind you, this happens even with as few as 6 nodes alive in the scene.
The effect seems to be constantly cropping its contents or adjusting its frame. That is, if I have one node, the "full screen" effect seems to constantly crop or adjust its bounds to the minimum area required to hold all nodes. This is for one node:
And this is for 2 nodes:
In OpenGL ES 2, one would do a post-process blur / bloom by rendering the whole framebuffer (all objects) to a texture, then doing at least one more pass to blur, etc. on that texture, and then either presenting that in the framebuffer attached to the display or compositing it with the original render back into the framebuffer. I'd expect SKEffectNode to work in a similar way. However, the cropping and the poor performance make me think I might be using the effect node the wrong way. Any thoughts?
It seems to be a bug with SKEffectNode applying a filter to child SKShapeNodes, as far as I can tell. I played around with this and reproduced your results, but when I switched out the SKShapeNodes for SKSpriteNodes (using a simple PNG of a circle), the cropping no longer appeared. It's a bug in that SKEffectNode doesn't handle the stroke of the SKShapeNode very well. If you take off the stroke (lineWidth = 0) and give it a fill color, you'll see that there is no cropping.
As for the frame rate, SKShapeNodes perform poorly. Making the switch to SKSpriteNodes mentioned earlier boosted my fps from 40 to 50 when I had 35 nodes on the screen (iPhone 5) with the filter applied.
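
To make that workaround concrete, here is a minimal sketch (current Swift; the "circle" image name stands in for the simple circle PNG mentioned above) of a full-screen SKEffectNode blurring SKSpriteNode children instead of SKShapeNodes:

import SpriteKit
import CoreImage

class BlurScene: SKScene {
    let foreground = SKEffectNode()

    override func didMove(to view: SKView) {
        // Group everything that should be post-processed under one effect node.
        let blur = CIFilter(name: "CIGaussianBlur")
        blur?.setValue(10.0, forKey: kCIInputRadiusKey)
        foreground.filter = blur
        foreground.shouldEnableEffects = true
        addChild(foreground)

        // Sprites (not shape nodes) sidestep the stroke-related cropping.
        let sprite = SKSpriteNode(imageNamed: "circle")
        sprite.position = CGPoint(x: frame.midX, y: frame.midY)
        foreground.addChild(sprite)
    }
}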

Resources