Cocos2d: particle.autoRemoveOnFinish not releasing memory

I create a particle effect in the following way:
CCParticleSun *p = [[CCParticleSun alloc] initWithTotalParticles:5000];
p.autoRemoveOnFinish = YES;
//more parameters
p.duration = 1;
and add it to my scene:
[self addChild:p z:self.zOrder+1];
Every time I create this particle effect, 3MB of memory are allocated, but never released.
What am I doing wrong? Do I have to manually release the particle system?
NSZombies are disabled, so it's not kept in memory by accident.

Everything you alloc (or retain) you have to release as well. For Cocos2D it's easiest to turn it into an autorelease object like this:
CCParticleSun *p = [[CCParticleSun alloc] initWithTotalParticles:5000];
[p autorelease];
p.autoRemoveOnFinish = YES;
p.duration = 1;
Then it will be released after Cocos2D cleans up your scene.
PS: 5000 particles is a GIGANTIC number of particles! No wonder you're seeing allocations several megabytes in size. Try 500 at most, or 100 or fewer if you're using particle textures that are around 32x32 pixels or larger.
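Equivalently, the alloc and autorelease can be collapsed into a single expression (shown here with the smaller particle count suggested above):

```objc
// Same effect as the separate [p autorelease] call above: once the scene
// cleans up (and autoRemoveOnFinish removes the node), the last reference
// goes away and the memory is returned.
CCParticleSun *p = [[[CCParticleSun alloc] initWithTotalParticles:500] autorelease];
p.autoRemoveOnFinish = YES;
p.duration = 1;
[self addChild:p z:self.zOrder+1];
```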

Related

SpriteKit FPS low with PhysicsBody collisions

In my SpriteKit game, I have many tile nodes (>100) arranged randomly and I need to be able to detect collisions between the tiles and the character node. To do this, I use SKPhysicsBody.
I find that if I enable the SKPhysicsBody code, my frame rate drops to around 40fps, but if I comment it out, it goes back up to 60fps. I guess this has something to do with the engine trying to simulate physics for 100+ nodes each frame. Is there a way I can prevent this from happening but still detect collisions between my character and the tiles?
For the tile physics I'm using the following code for my tiles:
self.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:size];
self.physicsBody.affectedByGravity = NO;
self.physicsBody.categoryBitMask = WallCategory;
self.physicsBody.collisionBitMask = 0;
self.physicsBody.contactTestBitMask = CharacterCategory;
and for my character:
self.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:size];
self.physicsBody.usesPreciseCollisionDetection = YES;
self.physicsBody.restitution = 0;
self.physicsBody.friction = 0;
self.physicsBody.linearDamping = 0;
self.physicsBody.categoryBitMask = CharacterCategory;
self.physicsBody.contactTestBitMask = 0xFFFFFFFF;
self.physicsBody.collisionBitMask = BoundaryCategory;
Your low frame rate could be caused by other contributing factors, but without seeing your entire code it's hard to localize the main culprit.
If feasible for your game structure, you could consider disabling the physicsBody and running a contact check yourself. You would have to create an array of all your tile objects and then use intersectsNode: from the SKNode Class to check for any contact between player and object.
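As a rough sketch of that approach (the `tiles` array, `character` property, and contact handler are hypothetical names, not from the question):

```objc
// Manual contact checking without SKPhysicsBody: run this from the scene's
// update loop. -intersectsNode: compares the two nodes' accumulated frames.
- (void)update:(NSTimeInterval)currentTime
{
    for (SKSpriteNode *tile in self.tiles) {          // hypothetical ivar
        if ([self.character intersectsNode:tile]) {   // hypothetical ivar
            [self handleContactWithTile:tile];        // your own handler
        }
    }
}
```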
There are a few reasons this could be happening.
First, the simulator does not simulate collisions as a real iPhone would. If you are using the simulator alone, I strongly suggest that you try on an actual device.
Secondly, you should verify that your app's assets are not loaded inefficiently, for example lazily during gameplay. Assets should be loaded within an asynchronous call on a background thread at launch. For more detail on this, check out an example project by Apple.
Third, you have not posted your - (void)didBeginContact:(SKPhysicsContact *)contact method. If implemented inefficiently or incorrectly, it can cause laggy collisions.
You should also profile your app in Instruments, so you can determine exactly where and when the drop in FPS is happening.
Note: the property usesPreciseCollisionDetection may be contributing to a significant amount of overhead, given that you are creating >100 nodes on screen.
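For completeness, a minimal sketch of such a contact handler, reusing the category names from the question (the flag is illustrative). The key point is to keep this callback cheap and defer any heavy work to the update pass:

```objc
- (void)didBeginContact:(SKPhysicsContact *)contact
{
    uint32_t pair = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask;
    if (pair == (CharacterCategory | WallCategory)) {
        // Just record the hit; resolve it later in -update: so the
        // physics callback stays as cheap as possible.
        self.characterHitWall = YES; // illustrative flag
    }
}
```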

Preloading Sprite Sheets with cocos2d

I have a game which has many unique units, and each unit has its own sprite sheet (because each unit has several animations), meaning there may be upwards of a dozen sprite sheets used by the game on any given scene. When the scene with the units is created, the initialization of all these assets takes a lot of time (a couple seconds). Specifically, the UI is locked while this code is run:
NSString *unitSheetName = [self getUnitSheetName];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:unitSheetName];
CCSprite *frame = [CCSprite spriteWithSpriteFrameName:kDefaultFrameName];
// etc...
However, if I then pop the scene, and then create a new version of the same scene (and push it), the loading takes a fraction of the time (the UI does not lock). Thus, it seems the UI locking is due to the initial caching of the sprite sheets...
My hope was that I'd be able to avoid this lag by calling the CCSpriteFrameCache's addSpriteFramesWithFile method while the game is starting up, but this does not seem to work. Despite doing this, the initial load still takes a long time.
Is there any good, effective way to preload the assets? I don't mind adding a second or two to my startup loading screen, if only I can be sure that the UI will later not be locked when the scene is pushed...
Per @LearnCocos2D's reply above, here's how to properly and fully preload a sprite sheet (fpSheet is the file path of the sheet's .plist):
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:fpSheet];
NSDictionary *spriteSheet = [NSDictionary dictionaryWithContentsOfFile:fpSheet];
NSString *fnTexture = spriteSheet[@"metadata"][@"textureFileName"];
[[CCTextureCache sharedTextureCache] addImage:fnTexture];
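If even this preload would block the UI for too long, cocos2d's texture cache also has an asynchronous variant; a sketch (the callback name is illustrative):

```objc
// Load the sheet's texture on cocos2d's background loading thread, then
// register the frames once the texture is already cached.
[[CCTextureCache sharedTextureCache] addImageAsync:fnTexture
                                            target:self
                                          selector:@selector(sheetTextureLoaded:)];

- (void)sheetTextureLoaded:(CCTexture2D *)texture
{
    // The frames reference the now-cached texture, so this call is cheap.
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:fpSheet];
}
```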

SpriteKit max number of nodes

I am creating a SpriteKit game with a tiled map. Each tile is an SKSpriteNode. With about 800 tiles there are no problems, but if I increase the size of the map to around 2000 tiles, my FPS drops from 60 to 20. The number of tile nodes on screen doesn't change (about 80), only the number of nodes off-screen. Any ideas what could be causing this, or how to remedy it?
There doesn't appear to be a defined maximum number of nodes; it really depends on the amount of free memory on your device. For example, consider the following code:
int NODE_LIMIT = 375000;
// ...
for (int i = 0; i < NODE_LIMIT; i++) {
    SKNode *node = [SKNode node];
    [self addChild:node];
}
I can create 375000 nodes in my sprite kit game. But as I increase the number above that, my device runs out of memory. The amount of free memory on your device will vary depending on a number of factors. As mentioned in the comments, the reason your frame rate slows down, is because the physics simulation runs even for nodes which are not visible on screen.
To maintain a high frame rate, get rid of physics bodies which are not visible, or which do not need to be simulated every frame. You could do this by adding sprites / physics bodies only when they are in the viewable part of the screen, and removing them when they are not.
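One hedged way to sketch that on-demand approach (the margin, the `tiles` array, and the coordinate assumptions are illustrative; tile frames are assumed to be comparable against the scene's frame):

```objc
// Give tiles a physics body only while they are near the visible area.
- (void)update:(NSTimeInterval)currentTime
{
    CGRect activeRect = CGRectInset(self.frame, -64, -64); // small margin
    for (SKSpriteNode *tile in self.tiles) {               // hypothetical ivar
        BOOL active = CGRectIntersectsRect([tile calculateAccumulatedFrame], activeRect);
        if (active && tile.physicsBody == nil) {
            tile.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:tile.size];
            tile.physicsBody.dynamic = NO; // static bodies are much cheaper
        } else if (!active && tile.physicsBody != nil) {
            tile.physicsBody = nil;        // stop simulating off-screen tiles
        }
    }
}
```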

How to load animation in background in cocos2d?

I am currently developing a cocos2d game for iPad that contains a lot of animations. I have previously used Zwoptex to create sprite sheets and add animations to my projects.
My game has 10 levels, and after each level completes an animation plays. The animation is different for each level, and each animation frame is the same size as the device screen, so I am not creating a sprite sheet file; instead I am loading the images directly. My problem is that the animation takes too much time to play.
How do I fix this? Can anyone guide me? Is it even possible to load full-screen-size images (1024x768) from a plist? There are 10 levels with 20 animation frames each, so 10x20 = 200 images would need to be loaded into sprite sheets.
I am using this code for the animation:
CCAnimation *animation = [CCAnimation animation];
for (int i = 1; i <= 20; i++)
{
    [animation addFrameWithFile:[NSString stringWithFormat:@"Level%dAni%d.png", level, i]];
}
animation.delayPerUnit = 0.3f;
animation.restoreOriginalFrame = YES;
id action = [CCAnimate actionWithAnimation:animation];
My question: is it possible to load full-screen animations with a sprite sheet? And the animation loading time varies and takes too long; how do I fix that?
Any guidance is appreciated.
Do the math and figure out the amount of memory you would need for 200 pics at 1024x768 pixels: at 4 bytes per pixel that's 3 MB per frame, or roughly 600 MB for all 200. Way too much memory for any iSomething device.
If you have a performance problem (ie are you running this on a device?), then there are two things you can do to improve image load speed:
Convert your images to .pvr.gz (I recommend TexturePacker for this). They load significantly faster than .png.
Use an RGBA4444 pixel format (again you can do this with TexturePacker), and set the texture format in cocos just prior to loading the images. Images will be smaller, take much less memory.
In your code, where you run the animation:
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
// load your images and do your anim
...
// at the completion of the anim
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
Using Cocos2D, you can animate sprite sheet images easily.
Try the following examples that use Cocos2D:
http://www.smashious.com/cocos2d-sprite-sheet-tutorial-animating-a-flying-bird/309/ http://www.raywenderlich.com/1271/how-to-use-animations-and-sprite-sheets-in-cocos2d
Try the following examples that don't use Cocos2D:
http://www.dalmob.org/2010/12/07/ios-sprite-animations-with-notifications/
http://developer.glitch.com/blog/2011/11/10/avatar-animations-in-ios/
Use NSThread and load your animation in this thread.
NSThread *thread = [[[NSThread alloc] initWithTarget:self selector:@selector(preloadFireWorkAnimation) object:nil] autorelease];
[thread start];
-(void)preloadFireWorkAnimation
{
// load your animation here;
}

Painfully slow software vectors, particularly CoreGraphics vs. OpenGL

I'm working on an iOS app that requires drawing Bézier curves in real time in response to the user's input. At first, I decided to try using CoreGraphics, which has a fantastic vector drawing API. However, I quickly discovered that performance was painfully, excruciatingly slow, to the point where the framerate started dropping severely with just ONE curve on my retina iPad. (Admittedly, this was a quick test with inefficient code. For example, the curve was getting redrawn every frame. But surely today's computers are fast enough to handle drawing a simple curve every 1/60th of a second, right?!)
After this experiment, I switched to OpenGL and the MonkVG library, and I couldn't be happier. I can now render HUNDREDS of curves simultaneously without any framerate drop, with only a minimal impact on fidelity (for my use case).
Is it possible that I misused CoreGraphics somehow (to the point where it was several orders of magnitude slower than the OpenGL solution), or is performance really that terrible? My hunch is that the problem lies with CoreGraphics, based on the number of StackOverflow/forum questions and answers regarding CG performance. (I've seen several people state that CG isn't meant to go in a run loop, and that it should only be used for infrequent rendering.) Why is this the case, technically speaking?
If CoreGraphics really is that slow, how on earth does Safari work so smoothly? I was under the impression that Safari isn't hardware-accelerated, and yet it has to display hundreds (if not thousands) of vector characters simultaneously without dropping any frames.
More generally, how do applications with heavy vector use (browsers, Illustrator, etc.) stay so fast without hardware acceleration? (As I understand it, many browsers and graphics suites now come with a hardware acceleration option, but it's often not turned on by default.)
UPDATE:
I have written a quick test app to more accurately measure performance. Below is the code for my custom CALayer subclass.
With NUM_PATHS set to 5 and NUM_POINTS set to 15 (5 curve segments per path), the code runs at 20fps in non-retina mode and 6fps in retina mode on my iPad 3. The profiler lists CGContextDrawPath as having 96% of the CPU time. Yes — obviously, I can optimize by limiting my redraw rect, but what if I really, truly needed full-screen vector animation at 60fps?
OpenGL eats this test for breakfast. How is it possible for vector drawing to be so incredibly slow?
#import "CGTLayer.h"
@implementation CGTLayer
- (id) init
{
self = [super init];
if (self)
{
self.backgroundColor = [[UIColor grayColor] CGColor];
displayLink = [[CADisplayLink displayLinkWithTarget:self selector:@selector(updatePoints:)] retain];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
initialized = false;
previousTime = 0;
frameTimer = 0;
}
return self;
}
- (void) updatePoints:(CADisplayLink*)displayLink
{
for (int i = 0; i < NUM_PATHS; i++)
{
for (int j = 0; j < NUM_POINTS; j++)
{
points[i][j] = CGPointMake(arc4random()%768, arc4random()%1024);
}
}
for (int i = 0; i < NUM_PATHS; i++)
{
if (initialized)
{
CGPathRelease(paths[i]);
}
paths[i] = CGPathCreateMutable();
CGPathMoveToPoint(paths[i], &CGAffineTransformIdentity, points[i][0].x, points[i][0].y);
for (int j = 0; j < NUM_POINTS; j += 3)
{
CGPathAddCurveToPoint(paths[i], &CGAffineTransformIdentity, points[i][j].x, points[i][j].y, points[i][j+1].x, points[i][j+1].y, points[i][j+2].x, points[i][j+2].y);
}
}
[self setNeedsDisplay];
initialized = YES;
double time = CACurrentMediaTime();
if (frameTimer % 30 == 0)
{
NSLog(@"FPS: %f\n", 1.0f/(time-previousTime));
}
previousTime = time;
frameTimer += 1;
}
- (void)drawInContext:(CGContextRef)ctx
{
// self.contentsScale = [[UIScreen mainScreen] scale];
if (initialized)
{
CGContextSetLineWidth(ctx, 10);
for (int i = 0; i < NUM_PATHS; i++)
{
UIColor* randomColor = [UIColor colorWithRed:(arc4random()%RAND_MAX/((float)RAND_MAX)) green:(arc4random()%RAND_MAX/((float)RAND_MAX)) blue:(arc4random()%RAND_MAX/((float)RAND_MAX)) alpha:1];
CGContextSetStrokeColorWithColor(ctx, randomColor.CGColor);
CGContextAddPath(ctx, paths[i]);
CGContextStrokePath(ctx);
}
}
}
@end
You really should not compare Core Graphics drawing with OpenGL; you are comparing completely different features designed for very different purposes.
In terms of image quality, Core Graphics and Quartz are going to be far superior to OpenGL with less effort. The Core Graphics framework is designed for optimal appearance: naturally antialiased lines and curves, and the polish associated with Apple UIs. But this image quality comes at a price: rendering speed.
OpenGL, on the other hand, is designed with speed as a priority. High-performance, fast drawing is hard to beat with OpenGL. But this speed comes at a cost: it is much harder to get smooth and polished graphics with OpenGL. There are many different strategies for doing something as "simple" as antialiasing in OpenGL, something which is handled far more easily by Quartz/Core Graphics.
First, see Why is UIBezierPath faster than Core Graphics path? and make sure you're configuring your path optimally. By default, CGContext adds a lot of "pretty" options to paths that can add a lot of overhead. If you turn these off, you will likely find dramatic speed improvements.
The next problem I've found with Core Graphics Bézier curves is when you have many components in a single curve (I was seeing problems when I went over about 3000-5000 elements). I found very surprising amounts of time spent in CGPathAdd.... Reducing the number of elements in your path can be a major win. From my talks with the Core Graphics team last year, this may have been a bug in Core Graphics and may have been fixed. I haven't re-tested.
EDIT: I'm seeing 18-20FPS in Retina on an iPad 3 by making the following changes:
Move the CGContextStrokePath() outside the loop. You shouldn't stroke every path. You should stroke once at the end. This takes my test from ~8FPS to ~12FPS.
Turn off anti-aliasing (which is probably turned off by default in your OpenGL tests):
CGContextSetShouldAntialias(ctx, false);
That gets me to 18-20FPS (Retina) and up to around 40FPS non-Retina.
I don't know what you're seeing in OpenGL. Remember that Core Graphics is designed to make things beautiful; OpenGL is designed to make things fast. Core Graphics relies on OpenGL; so I would always expect well-written OpenGL code to be faster.
Disclaimer: I'm the author of MonkVG.
The biggest reason MonkVG is so much faster than CoreGraphics is actually not so much that it is implemented with OpenGL ES as a render backing, but that it "cheats" by tessellating the contours into polygons before any rendering is done. The contour tessellation is actually painfully slow, and if you were to dynamically generate contours you would see a big slowdown. The great benefit of an OpenGL backing (versus CoreGraphics' direct bitmap rendering) is that a transform such as a translation, rotation, or scaling does not force a complete re-tessellation of the contours -- it's essentially "free".
Your slowdown is because of this line of code:
[self setNeedsDisplay];
You need to change this to:
[self setNeedsDisplayInRect:changedRect];
It's up to you to calculate what rectangle has changed every frame, but if you do this properly, you will likely see over an order of magnitude performance improvement with no other changes.
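A hedged sketch of computing that rectangle for the test layer above, using the paths' bounding boxes (the previous frame's rect must be included too, so the old strokes get erased; `previousDirtyRect` is a hypothetical ivar):

```objc
// Invalidate only where the curves actually are: union the bounding boxes
// of the new paths, pad by the stroke width, and include last frame's rect.
CGRect dirty = CGRectNull;
for (int i = 0; i < NUM_PATHS; i++) {
    dirty = CGRectUnion(dirty, CGPathGetBoundingBox(paths[i]));
}
dirty = CGRectInset(dirty, -10, -10);             // account for 10pt line width
[self setNeedsDisplayInRect:CGRectUnion(dirty, previousDirtyRect)];
previousDirtyRect = dirty;                        // hypothetical ivar
```

Note that in the random-points stress test above the paths cover most of the screen, so the win appears once your real curves occupy a bounded region.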
