I have a fairly simple animation with 8 identically sized images. I'm not using the built-in animation methods because I want to manually control the speed of the animation on the fly. I'm using preloaded SKTextures and calling [object setTexture:texture]; inside the update: method.
The problem is that sometimes the texture gets really distorted/stretched. After a lot of debugging, I have narrowed it down to only happening when the node is stationary. In fact, if I move the node a pixel and move it back like this, the problem never occurs:
[self setTexture:texture];
CGPoint currentPosition = self.position;
self.position = CGPointMake(currentPosition.x + 1, currentPosition.y + 1);
self.position = currentPosition;
This feels extremely hacky to me. I think that, under the hood, it's triggering a redraw on the parent node. Has anyone else experienced this? I have two major questions: 1) what is the cause, and 2) how can I resolve this without resorting to a hack?
Here is a normal frame and a stretched version (I apologize for the quality, placeholder art...)
Edit: After a few comments, I realized that I forgot to mention that I scaled the size of the node smaller than the size of the texture. Even though the textures are the same size, applying a new texture to a node with a smaller size causes the bug.
It seems that when you set the texture with setTexture:, the sprite node doesn't update its size until it is moved, resized, etc.
You can resolve this by manually setting the size after setting the texture:
[spriteNode setTexture:texture];
[spriteNode setSize:texture.size];
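If you swap textures in several places, a small wrapper keeps those two calls together. A rough sketch, assuming a hypothetical category of your own (not part of SpriteKit):
#import <SpriteKit/SpriteKit.h>

// Hypothetical convenience category: resize the sprite whenever its texture changes.
@interface SKSpriteNode (TextureSwap)
- (void)swapTexture:(SKTexture *)texture;
@end

@implementation SKSpriteNode (TextureSwap)
- (void)swapTexture:(SKTexture *)texture {
    [self setTexture:texture];
    [self setSize:texture.size]; // keep the node's size in sync with the new texture
}
@end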
I am using SpriteKit to render a large (20 x 20) dot grid that looks like this:
I'd like to highlight rows or columns based on user input. For example, I'd like to change rows 1-10 to a red color, or columns 5-15 to a blue color.
What is the most performant way to do this?
I've tried:
Naming each GridNode after the column it's in (e.g. @"column-4"), then using enumerateChildNodesWithName: with the string @"column-n" and changing each node's color (via SKShapeNode's setFillColor:) in the enumeration block (sketched below).
Giving each column a parent node, then telling that parent node to change its alpha (thus changing the alpha of all its children).
Making arrays for the different columns, then looping through each node and changing its color or alpha.
I've tried making the GridDot class an SKEffectNode with shouldRasterize: set to YES. I've tried both an SKShapeNode and an SKSpriteNode as its child. I've also tried taking away the SKEffectNode parent and just rendering an SKSpriteNode.
Each of these options makes my whole app lag and makes my framerate drop to ~10 FPS. What is the correct way to change the color/alpha of many nodes (without dropping frames)?
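For reference, a minimal sketch of the enumeration attempt from the first bullet (the parent gridNode, the node name, and the color are placeholders):
// Attempt 1, sketched: recolor every dot that was named for column 4.
[gridNode enumerateChildNodesWithName:@"column-4" usingBlock:^(SKNode *node, BOOL *stop) {
    ((SKShapeNode *)node).fillColor = [SKColor redColor];
}];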
At its heart, the issue is rendering this many nodes, yes?
When I faced similar performance problems while using SKShapeNode, I came up with this solution:
Create an SKShapeNode with the required path and color.
Use SKView's textureFromNode:crop: method to convert the SKShapeNode into an SKTexture.
Repeat steps 1 and 2 to create all the textures the node needs.
Create an SKSpriteNode from a texture.
Use that SKSpriteNode in your scene instead of the SKShapeNode.
Change the node's texture when needed via SKSpriteNode's texture property.
If you have a limited set of colors for your dots, I think this approach will fit your task fine.
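A minimal sketch of those steps, assuming an SKView reference named skView and placeholder dot sizes/colors:
// Steps 1-2: build a shape with the required path and color, then bake it into a texture.
SKShapeNode *dot = [SKShapeNode shapeNodeWithCircleOfRadius:6.0];
dot.strokeColor = [SKColor clearColor];

dot.fillColor = [SKColor whiteColor];
SKTexture *normalDotTexture = [skView textureFromNode:dot];

// Step 3: repeat for every color you need.
dot.fillColor = [SKColor redColor];
SKTexture *redDotTexture = [skView textureFromNode:dot];

// Steps 4-5: the grid is built from cheap sprites that share the cached textures.
SKSpriteNode *dotSprite = [SKSpriteNode spriteNodeWithTexture:normalDotTexture];

// Step 6: highlighting a dot is just a texture swap, not a re-stroked path.
dotSprite.texture = redDotTexture;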
In contrast to @amobi's statement, 400 nodes is not a lot. For instance, I have a scene with ~400 nodes and a render time of 9.8ms and 9 draw calls.
If you have 400 draw calls though, you should try to reduce that number. To determine the number of draw calls needed for each rendered frame, add (some of) the following code. It is taken from the view controller of my own SpriteKit app, which contains the SpriteKit scene.
skView.showsFPS = YES;
skView.showsNodeCount = YES;
skView.showsDrawCount = YES;
Proposed solution
I recommend using SKView's ignoresSiblingOrder. This way, SKSpriteNodes with equal zPosition are drawn in one draw call, which (for as many nodes/draws as you appear to have) is hugely more efficient. Set this in the -viewDidLoad method of the SKView's view controller.
skView.ignoresSiblingOrder = YES;
I see no reason to burden the GPU with SKEffectNodes in this scenario. They are usually a great way to tank your frame rate.
Final thoughts
Basic performance issues mean you have either a CPU or a GPU bottleneck. It is difficult to guess which one you're hitting from the current information. You could profile with Instruments, but Xcode itself also provides valuable information while your app runs on an attached device. FPS in the Simulator is not representative of device performance.
I have an iOS app that uses sprite kit, and I am ready to add my artwork. The artwork is pixel-art and is inherently very small. I am trying to find the best way to display this in way where:
All of the art is at the same scale, meaning one image pixel takes up exactly the same number of on-screen pixels in every image.
There is no blurring from attempts to make the textures look smoother, which often happens when images are scaled up.
I have tried solving the second one like so:
self = [super init];
if (self) {
    self.size = size;
    self.texture = [SKTexture textureWithImageNamed:@"ForestTree1.png"];
    // Nearest filtering should stop the pixel art from being smoothed when scaled.
    self.texture.filteringMode = SKTextureFilteringNearest;
    [self.texture size]; // note: this only reads the texture's size; the result is discarded
}
return self;
The above code is in the initialization of the SKSpriteNode which will have the texture.
This is my original image (scaled up for easy reference):
The problem is that my result always looks like this:
(The bottom of the trunk being offset is not part of this question.) I am not using any motion blur or anything like it. I'm not sure why it isn't displaying correctly.
Edit 1:
I failed to mention above that the trees were constantly animating when the screenshots were taken. When they are still they look like this:
The image above shows two trees overlapping, with one of them flipped because of a bug to be fixed later. My question now is: how can I prevent the image from blurring while the animation is running?
Edit 2:
I am adding multiple instances of the tree, each one loading the same texture. I know it has nothing to do with the animation, because I changed the code to add just one tree and animate it, and its pixels rendered perfectly.
You need to use "nearest" filtering:
self.texture.filteringMode = SKTextureFilteringNearest;
The pixels in your image must correspond with pixels on the screen perfectly.
If your image is 100x100 and you display it over a whole screen that is 105x105, the renderer will interpolate to figure out how to stretch it.
If you display it at a scaled resolution of some multiple of 2 (which should work properly), I think you still have to tell the renderer not to interpolate pixels when it does the scaling.
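A short sketch of both points together, using the tree texture name from the question and a whole-number scale factor:
SKTexture *tex = [SKTexture textureWithImageNamed:@"ForestTree1.png"]; // small pixel-art image
tex.filteringMode = SKTextureFilteringNearest;                         // no smoothing when scaled

SKSpriteNode *tree = [SKSpriteNode spriteNodeWithTexture:tex];
// Scale by a whole number so each image pixel maps to an exact block of screen pixels.
[tree setScale:4.0];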
I've solved the problem... but it's really a hack. I have an SKScene which is the parent node of all of the "trees" (SKSpriteNodes), and it adds multiple trees to itself. At first I thought that was somehow the problem, because if I only added one tree, it displayed the image correctly.

The answer to this question led me to believe that I needed to programmatically create an SKTextureAtlas singleton in the scene (the texture lives in an SKTextureAtlas) and pass it to the tree class, which would pull the texture from it in its init method. I made a property on the SKScene to hold the texture atlas so that I could pass it to the tree class every time I made a new one. I tried loading the texture from the atlas (in the tree class) using the textureNamed: method. This still did not work. I switched back to loading the texture with SKTexture's textureWithImageNamed: method and it worked. Furthermore, I then changed the code back so that the tree subclass is not sent the SKTextureAtlas singleton at all, and it still worked.
In the SKScene I get the texture atlas using:
[SKTextureAtlas atlasNamed:@"Textures"]; // Textures is the atlas name.
and set the return value as the SKTextureAtlas property described above. I thought that maybe the atlas just had to be initialized at some point in the code, so I tried this:
SKTextureAtlas *myAtlas = [SKTextureAtlas atlasNamed:@"Textures"];
and the following alone on one line:
[SKTextureAtlas atlasNamed:@"Textures"];
but neither worked. Apparently I need a property on the tree's parent class holding the SKTextureAtlas that contains the texture, even though the tree itself never references an SKTextureAtlas at all... Is this a glitch or something? It's working now, but it feels like a hack.
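Aside: if the goal is just to guarantee the atlas is loaded before any tree asks for a texture, SKTextureAtlas can also be preloaded explicitly. A sketch (not the hack above), using the atlas name from this post:
SKTextureAtlas *atlas = [SKTextureAtlas atlasNamed:@"Textures"];
[SKTextureAtlas preloadTextureAtlases:@[atlas] withCompletionHandler:^{
    // The atlas is loaded; it is now safe to create trees that pull textures from it.
}];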
[self setScaleMode:SKSceneScaleModeAspectFill];
SKTexture* texture = [SKTexture textureWithImageNamed:@"image"];
[texture setFilteringMode:SKTextureFilteringNearest];
SKSpriteNode* imageNode = [SKSpriteNode spriteNodeWithTexture:texture];
[self addChild:imageNode];
Works perfectly for me; there's no blur with the animation.
For my game I am trying to create custom textures from two other textures. This is to allow for a variety of colours, etc. in my sprites.
To do this, I'm creating a sprite by adding both textures together, then applying this to a new SKTexture by using
SKTexture *texture = [self.view textureFromNode:newSprite];
This works great on the whole and I get a nice custom texture, except when I try my game on Retina devices, where the texture is the correct size on the screen but is clearly at a lower resolution.
The textures are all there and properly named so I don't believe that that is an issue.
Has anyone encountered this, or know how I can create the proper #2x texture?
I finally (accidentally) figured out how to fix this. The node which you are creating a texture from has to be added to the scene. Otherwise you will get a non-retina size for your texture.
It's not ideal as it would be nice to create textures without having to add them onto the screen.
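A sketch of the workaround, with scene standing in for the presented SKScene and newSprite/self.view taken from the question:
// The composite node has to be in the presented scene before it is captured;
// otherwise textureFromNode: produces a non-retina-sized texture.
[scene addChild:newSprite];
SKTexture *texture = [self.view textureFromNode:newSprite];
[newSprite removeFromParent]; // remove the on-screen copy once the texture is captured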
I've discovered another way of improving the fidelity of textures created from SKShapeNodes. It's not quite related to this question, but it's useful intel.
Create your shape at twice its final width and height.
Create all the fonts and other shapes at the same oversized ratio.
Make sure your positioning is relative to this overall size (e.g. don't use absolute sizes; use sizes relative to the container).
When you create the texture as a sprite it'll be huge, but then apply
sprite.scale = 0.5; // if you were using 2x
I've found this makes it look much higher resolution, no graininess, no fuzziness on fonts, sharp corners.
I also used tex.filteringMode = SKTextureFilteringNearest;
Thus: it doesn't have to be added to the scene and then removed.
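A condensed sketch of the oversized-render trick, with made-up sizes and an assumed skView reference:
// Build the shape at twice its intended on-screen size (target here is 100x100).
SKShapeNode *shape = [SKShapeNode shapeNodeWithRectOfSize:CGSizeMake(200.0, 200.0)];
shape.lineWidth = 4.0; // twice the intended 2-point stroke

// Bake it to a texture, then scale the sprite back down to the real size.
SKTexture *tex = [skView textureFromNode:shape];
tex.filteringMode = SKTextureFilteringNearest;
SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithTexture:tex];
[sprite setScale:0.5]; // because everything was drawn at 2x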
I have a pretty large CAShapeLayer that I'm rendering. The layer is completely static, but it's contained in a UIScrollView so it can move around and be zoomed -- basically, it must be redrawn every now and then. In an attempt to improve the framerate of this scrolling, I set shouldRasterize = YES on the layer, which worked perfectly. Because I never change any property of the layer it never has a rasterization miss and I get a solid 60 fps. High fives all around, right?
Until the layer gets a little bigger. Eventually -- and it doesn't take long -- the rasterized image gets too large for the GPU to handle. According to my console, <Notice>: CoreAnimation: surface 2560 x 4288 is too large, and it just doesn't draw anything on the screen. I don't really blame it -- 2560 x 4288 is pretty big -- but I spent a while scratching my head before I noticed this in the device console.
Now, my question is: how can I work around this limitation? How can I rasterize a really large layer?
The obvious solution seems to be to break the layer up into multiple sublayers, say one for each quadrant, and rasterize each one independently. Is there an "easy" way to do this? Can I create a new layer that renders a rectangular area from another layer? Or is there some other solution I should explore?
Edit
Creating a tiled composite seems to have really bad performance because the layers are re-rasterized every time they enter the screen, making for a very jerky scrolling experience. Is there some way to cache those rasterizations? Or is this the wrong approach altogether?
Edit
Alright, here's my current solution: render the layer once to a CGImageRef. Create multiple tile layers using sub-rectangles from that image, and actually put those on the screen.
- (CALayer *)getTiledLayerFromLayer:(CALayer *)sourceLayer withHorizontalTiles:(int)horizontalTiles verticalTiles:(int)verticalTiles
{
    CALayer *containerLayer = [CALayer layer];
    CGFloat tileWidth = sourceLayer.bounds.size.width / horizontalTiles;
    CGFloat tileHeight = sourceLayer.bounds.size.height / verticalTiles;

    // make sure these are integral, otherwise you'll have image alignment issues!
    NSLog(@"tileWidth:%f height:%f", tileWidth, tileHeight);

    UIGraphicsBeginImageContextWithOptions(sourceLayer.bounds.size, NO, 0);
    CGContextRef tileContext = UIGraphicsGetCurrentContext();
    [sourceLayer renderInContext:tileContext];
    CGImageRef image = CGBitmapContextCreateImage(tileContext);
    UIGraphicsEndImageContext();

    for (int horizontalIndex = 0; horizontalIndex < horizontalTiles; horizontalIndex++) {
        for (int verticalIndex = 0; verticalIndex < verticalTiles; verticalIndex++) {
            CGRect frame = CGRectMake(horizontalIndex * tileWidth, verticalIndex * tileHeight, tileWidth, tileHeight);
            CGRect visibleRect = CGRectMake(horizontalIndex / (CGFloat)horizontalTiles, verticalIndex / (CGFloat)verticalTiles, 1.0f / horizontalTiles, 1.0f / verticalTiles);

            CALayer *tile = [CALayer layer];
            tile.frame = frame;
            tile.contents = (__bridge id)image;
            tile.contentsRect = visibleRect;
            [containerLayer addSublayer:tile];
        }
    }

    CGImageRelease(image);
    return containerLayer;
}
This works great... sort of. On the one hand, I get 60fps panning and zooming of a 1980 x 3330 layer on a retina iPad. On the other hand, it takes 20 seconds to start up! So while this solution solves my original problem, it gives me a new one: how can I generate the tiles faster?
Literally all of the time is spent in the [sourceLayer renderInContext:tileContext]; call. This seems weird to me, because if I just add that layer directly I can render it about 40 times per second, according to the Core Animation Instrument. Is it possible that creating my own image context causes it to not use the GPU or something?
Breaking the layer into tiles is the only solution. You can, however, implement it in many different ways. I suggest doing it manually (creating the layers and sublayers on your own), but many recommend using CATiledLayer (http://www.mlsite.net/blog/?p=1857), which is how maps are usually implemented; zooming and rotating are quite easy with it. The tiles of a CATiledLayer are loaded (drawn) on demand, just after they are put on the screen. This implies a short delay (blink) before a tile is fully drawn, and AFAIK it is quite hard to get rid of this behaviour.
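For reference, a minimal CATiledLayer-backed view looks roughly like this (the tile size, detail bias, and placeholder drawing are assumptions, not values from the question):
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Hypothetical host view whose backing layer is a CATiledLayer.
@interface TiledDrawingView : UIView
@end

@implementation TiledDrawingView

+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(512.0, 512.0); // tiles are drawn on demand
        tiledLayer.levelsOfDetailBias = 2;              // extra detail levels when zoomed in
    }
    return self;
}

// Called once per tile, possibly on background threads; draw only what the tile needs.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect tileRect = CGContextGetClipBoundingBox(ctx); // the tile being requested
    // Placeholder drawing: fill the tile so the on-demand loading is visible.
    CGContextSetFillColorWithColor(ctx, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(ctx, tileRect);
}

@end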
I am rendering a simple line drawing (a line with some text in the middle) in a CALayer subclass via drawInContext(). I update this layer as the user is performing a gesture by calling setNeedsDisplay on it. The effect that I am seeing is what I might expect if there were no double buffering going on... i.e. I see parts of new rendering overlapping parts of old rendering. When I stop updating (complete the gesture) the system "catches up" and I always see the correct final result, but during the updates I see inconsistent results... This effect is not subtle and sometimes it is extreme... e.g. if I keep updating fast enough I can keep stale parts of the drawing on the screen for seconds while the new parts are drawing ahead...
I don't understand this at all. If Quartz is doing buffering then it seems that it is not blitting the result to the screen in its entirety or it is miscalculating the affected area.
Things I've tried:
1) I am disabling implicit animations and doing all of the drawing within a CATransaction
2) I am not making a mistake in my drawing... It's literally just two lines with some text in between... there is no way that I'm rendering the intermediate artifacts.
3) I have tried limiting the rate of updates by skipping most of them... but even at the lower rate I see artifacts until I stop updating and let the system catch up.
4) BTW, this happens identically in the simulator and on the device (iPad).
Is it necessary for me to draw into an offscreen buffer myself and copy it to the screen in its entirety? I thought that I had read that Quartz does this for me.
Update:
As usual, after hours of banging my head against the wall I find the (partial) answer 5 minutes after posting the question. I realized that I was using a CATiledLayer in order to get my layer re-rendered on zoom. If I switch it back to a regular CALayer the glitches go away. So I guess what I am seeing is artifacts of the separate tiles rendering. Now I am trying to figure out how to deal with this...
So, it turns out that I had three problems:
1) CATiledLayer explicitly fades in new tile content with a default time of 0.25 seconds... This was causing havoc with my drawing. I overrode this in my CATiledLayer subclass:
+ (CFTimeInterval)fadeDuration {
    NSLog(@"got fade duration");
    return 0;
}
2) I also had to adjust the maximum tile size up (I set it to 1024x1024 though I don't know what size it is actually using).
3) I was making adjustments to my layer's frame periodically during the updates and that seemed to cause additional problems for the tiled layer. I am making changes to stop that.
With all of those changes the performance seems acceptable now.