I'm developing an OS X / iOS crossword game which uses SKLabelNode to display the crossword letters, each as a child of an SKNode subclass.
So for each letter there is an SKNode containing an SKLabelNode. Without the SKLabelNodes the draw count is in the range of 6-8. With them it goes up to the number of children in the scene, which can be almost 100.
I am now looking for a way to avoid that and came up with the idea of rasterizing each SKLabelNode to a texture, but this does not lower the draw count since there are still lots of different textures.
My idea now is to rasterize these SKNode subclasses and put the textures into a texture atlas.
So the question is: is it possible to create a texture atlas at runtime? What do I do if a single texture changes? Is it possible to exchange a single texture in the atlas, or do I have to rebuild it?
And maybe there is a "best way" to handle lots of different SKLabelNodes!?
I would go with a letter class which is a subclass of SKSpriteNode and an atlas called "letters". That way you will draw all letters in a single draw pass (100 draw passes are unnecessary in this situation). Or you don't even have to make a subclass of SKSpriteNode... You can just do this:
SKSpriteNode *letterSprite = [SKSpriteNode spriteNodeWithTexture:[atlas textureNamed:[NSString stringWithFormat:@"%@", character]]];
The limitation of this approach is that letters would have a pre-determined size. I doubt that you would need differently sized letters in your crossword game. If you need to change the size of the letters you can still scale them, but I guess some quality loss will theoretically occur because of scaling bitmaps. I say theoretically because in most cases the quality loss is not noticeable.
Here is an example of my TextNode, which parses a given string (numbers in my case) and creates sprites which are drawn in a single pass (instead of using an SKLabelNode for every single character). Images in atlases should be named like a@2x.png, b@2x.png, or if using numbers, 1@2x.png, 2@2x.png etc.
#import "TextNode.h"

static const float kCharacterDistance = 6.0f;

@interface TextNode ()
@property (nonatomic, strong) NSMutableArray *characters;
@end

@implementation TextNode

- (instancetype)initWithPosition:(CGPoint)position andText:(NSString *)text
{
    if (self = [super init]) {
        self.characters = [[NSMutableArray alloc] initWithCapacity:[text length]];
        SKTextureAtlas *atlas = [SKTextureAtlas atlasNamed:@"numbers"];
        for (NSUInteger i = 0; i < [text length]; i++) {
            NSString *character = [text substringWithRange:NSMakeRange(i, 1)];
            SKSpriteNode *characterSprite = [SKSpriteNode spriteNodeWithTexture:[atlas textureNamed:[NSString stringWithFormat:@"%@", character]]];
            characterSprite.color = [SKColor yellowColor];
            characterSprite.colorBlendFactor = 1.0f;
            characterSprite.position = CGPointMake(i * kCharacterDistance, 0);
            [self.characters addObject:characterSprite];
            [self addChild:characterSprite];
        }
        self.position = position;
    }
    return self;
}

@end
Hope this helps and gives you the basic idea of how to draw all letters in a single draw pass. And note how I colorize the letters: the images in the texture atlas are white, but I can easily tint them to any desired color.
As long as your FPS is good, don't worry about anything else. Also, remember that Xcode builds texture atlases for you automatically at compile time (from .atlas folders), so creating one is not normally something you have to do yourself.
You can set skView.ignoresSiblingOrder = YES; to improve your performance a bit.
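If you do want to build an atlas at runtime (your original question), SKTextureAtlas can create one from a dictionary of images on iOS 8 / OS X 10.10 and later. A minimal sketch, assuming letterImageA and letterImageB are letter images you rendered yourself:

// Build a texture atlas at runtime from pre-rendered images.
// The dictionary keys become the texture names.
NSDictionary *letterImages = @{ @"A" : letterImageA, @"B" : letterImageB };
SKTextureAtlas *runtimeAtlas = [SKTextureAtlas atlasWithDictionary:letterImages];
SKTexture *letterA = [runtimeAtlas textureNamed:@"A"];

As far as I know the atlas is immutable once created, so to change a single letter you would rebuild the dictionary and create a new atlas rather than swap one texture in place.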
Related
I've now filed a bug for the issue below. Anyone with a good workaround?
I try to save an SKTexture to file, and load it back again, but I don't succeed. The following code snippet can be copied to GameScene.m in the Xcode startup project.
I use textureFromNode in generateTexture, and that seems to be the root cause of my problem. If I use a texture from a sprite, the code works, and two spaceships are visible.
This code worked in iOS 8 but stopped working in Xcode 7 & iOS 9. I just want to verify that this is a bug before I file a bug report. My worry is that I'm doing something wrong with NSKeyedArchiver.
It happens both in simulator and on device.
#import "GameScene.h"
#implementation GameScene
// Generates a texture
- (SKTexture *)generateTexture
{
SKScene *scene = [[SKScene alloc] initWithSize:CGSizeMake(100, 100)];
SKShapeNode *shapeNode = [SKShapeNode shapeNodeWithRectOfSize:CGSizeMake(50, 50)];
shapeNode.position = CGPointMake(50, 50);
shapeNode.strokeColor = SKColor.redColor;
shapeNode.lineWidth = 10;
[scene addChild:shapeNode];
SKTexture *texture = [self.view textureFromNode:scene];
//SKTexture *texture = [SKSpriteNode spriteNodeWithImageNamed:#"Spaceship"].texture; // This works!
return texture;
}
// Just generate a path
- (NSString *)fullDocumentsPath
{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *yourFileName = [documentsDirectory stringByAppendingPathComponent:#"fileName"];
return yourFileName;
}
- (void)didMoveToView:(SKView *)view
{
self.scaleMode = SKSceneScaleModeResizeFill;
// Verify that the generateTexture method indeed produces a valid texture.
SKSpriteNode *s1 = [SKSpriteNode spriteNodeWithTexture:[self generateTexture]];
s1.position = CGPointMake(100, 100);
[self addChild:s1];
// Start with saving the texture.
NSString *fullName = [self fullDocumentsPath];
NSError *error;
NSFileManager *fileMgr = [NSFileManager defaultManager];
if ([fileMgr fileExistsAtPath:fullName])
{
[fileMgr removeItemAtPath:fullName error:&error];
assert(error == nil);
}
NSDictionary *dict1 = [NSDictionary dictionaryWithObject:[self generateTexture] forKey:#"object"];
bool ok = [NSKeyedArchiver archiveRootObject:dict1 toFile:fullName];
assert(ok);
// Read back the texture and place it in a sprite. This sprite is not shown. Why?
NSData *data = [NSData dataWithContentsOfFile:fullName];
NSDictionary *dict2 = [NSKeyedUnarchiver unarchiveObjectWithData:data];
SKTexture *loadedTexture = [dict2 objectForKey:#"object"];
SKSpriteNode *s2= [SKSpriteNode spriteNodeWithTexture:loadedTexture];
NSLog(#"t(%f, %f)", loadedTexture.size.width, loadedTexture.size.height); // Size of sprite & texture is zero. Why?
s2.position = CGPointMake(200, 100);
[self addChild:s2];
}
#end
Update for Yudong:
This might be a more relevant example: imagine that the scene consists of 4 layers with lots of sprites. When gameplay is over, I want to store a thumbnail image of the final scene of the match. The image will be used as a texture on a button, and pressing that button will start a replay movie of the match. There will be lots of buttons with images of old games, so I need to store each image in a file.
- (SKTexture *)generateTexture
{
    SKScene *scene = [[SKScene alloc] initWithSize:CGSizeMake(100, 100)];
    SKSpriteNode *ship = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"];
    ship.position = CGPointMake(50, 50);
    [scene addChild:ship];
    SKTexture *texture = [self.view textureFromNode:scene];
    NSLog(@"texture: %@", texture);
    return texture;
}
The solution/workaround:
Inspired by Russell's code I did the following. It works!
CGImageRef cgImg = texture.CGImage;
SKTexture *newText = [SKTexture textureWithCGImage:cgImg];
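In context, the save path from didMoveToView would look roughly like this with the workaround applied (same names as in the snippet above):

SKTexture *texture = [self generateTexture];
// Round-trip through CGImage so the texture data is actually realized
// in memory before NSKeyedArchiver gets hold of it.
SKTexture *newText = [SKTexture textureWithCGImage:texture.CGImage];
NSDictionary *dict1 = [NSDictionary dictionaryWithObject:newText forKey:@"object"];
BOOL ok = [NSKeyedArchiver archiveRootObject:dict1 toFile:[self fullDocumentsPath]];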
I've done a lot of experimenting/hacking with SKTextures. My game relies heavily on SKTextures and is written in Swift. Specifically, I've had many problems with textureFromNode and textureFromNode:crop: and with creating SKPhysicsBodies from textures. These methods worked fine in iOS 8, but Apple completely broke them when they released iOS 9.0: the textures were coming back as nil, and those nil textures broke the SKPhysicsBodies built from them.
I recently worked on serialization/deserialization of SKTextures.
Some key ideas/clues you might investigate are:
Run iOS 9.2. Apple staff mentioned that a lot of issues have been fixed. https://forums.developer.apple.com/thread/17463 I've found iOS 9.2 helps with SKTextures but didn't solve every issue, especially the serialization issues.
Try PrefersOpenGL (set it to "YES" as a Boolean custom property in your Info.plist). Here is a post about PrefersOpenGL in the Apple Dev Forums by Apple staff: https://forums.developer.apple.com/thread/19683 I've observed that iOS 9.x seems to use Metal by default rather than OpenGL. I've found PrefersOpenGL helps with SKTexture issues, but it still doesn't make my SKShaders (written in GLSL) work.
When I tried to serialize/deserialize nodes with SKTextures on iOS 9.2, I got white boxes instead of visible textures. I was inspired by the Apple SKTexture docs, which say: "The texture data is loaded when:
The size method on the texture object is called.
Another method is called that requires the texture’s size, such as creating a new SKSpriteNode object that uses the texture object.
One of the preload methods is called (See Preloading the Texture Data.)
The texture data is prepared for rendering when:
A sprite or particle that uses the texture is part of a node tree that is being rendered."
... I've hacked a workaround that creates a secondary texture from the CGImage() call:
// iOS 9.2 workaround for white boxes on serialization
let img = texture!.CGImage()
let uimg = UIImage(CGImage: img)
let ntex = SKTexture(image: uimg)
let sprite = SKSpriteNode(texture: ntex, size: texture!.size())
So now my SKSpriteNodes created this way seem to serialize/deserialize fine. BTW, just invoking size() or creating an SKSpriteNode with the original texture does not seem to be enough to reify the texture into memory.
You didn't ask about textureFromNode:crop:, but I'm adding my observations anyway in case they help. In iOS 8 this method worked (although the crop parameters were very tricky and seemed to require normalization with UIScreen.mainScreen().scale). In iOS 9.0 the method didn't work at all (it returned nil). In iOS 9.2 it works again (it returns a non-nil texture), but nodes subsequently created from the texture no longer need the size normalization. Furthermore, to make serialization/deserialization work, I found you ultimately have to apply the CGImage workaround from #3 above.
I hope this helps you. I imagine I've struggled more than most with SKTextures since my app is so dependent on them.
I tested your code in Xcode 7 and found that the texture returned by generateTexture was null. That's why you can't load anything from the file: you haven't actually saved anything.
Try using NSLog to log the description of your texture or sprite, e.g. add this line in generateTexture:
NSLog(#"texture: %#", texture);
What you will get in the console:
texture: '(null)' (300 x 300)
And the same for s1 and dict1 in your code:
s1: name:'(null)' texture:['(null)' (300 x 300)] position:{100, 100} scale:{1.00, 1.00} size:{100, 100} anchor:{0.5, 0.5} rotation:0.00
dict1: {
    object = "'(null)' (300 x 300)";
}
You may do these tests on both iOS 8 and iOS 9 and you will probably get different results.
I'm not sure why you add the SKShapeNode to a scene and then save the texture from the scene. One workaround is to set a texture for your SKShapeNode directly; then your code should work fine:
shapeNode.fillTexture = [SKTexture textureWithImageNamed:@"Spaceship"];
SKTexture *texture = shapeNode.fillTexture;
return texture;
Update:
It's quite annoying that textureFromNode doesn't work as expected in iOS 9. I tried to solve it by trial and error, but had no luck in the end. That is why I asked whether you would consider taking a snapshot of the whole screen and using that as your thumbnail. Here's the progress I made today; I hope you can take some inspiration from it.
I created a scene containing an SKLabelNode and an SKSpriteNode in didMoveToView. When I clicked anywhere on the screen, snapshot was invoked and the down-scaled screenshot was saved in the documents folder. I used the code here.
- (UIImage *)snapshot
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.5);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshotImage;
}
Since the thumbnail is saved as a UIImage, loading it back as the sprite's texture should be easy. A sample project demonstrates the whole process; it works on both iOS 8 and 9.
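For completeness, the save and load steps might look like this (the file name is just an example):

// Save the down-scaled screenshot as PNG in the documents folder.
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *thumbPath = [documentsDirectory stringByAppendingPathComponent:@"thumbnail.png"];
[UIImagePNGRepresentation([self snapshot]) writeToFile:thumbPath atomically:YES];

// Later: load it back and wrap it in a texture for the button sprite.
UIImage *thumb = [UIImage imageWithContentsOfFile:thumbPath];
SKSpriteNode *buttonSprite = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImage:thumb]];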
I'm currently messing around with Sprite Kit on iOS to figure out whether it would be a fitting framework for a relatively simple 2D game.
Due to my ActionScript background, I am very comfortable working with Sprite Kit code-wise.
But there is something I just can't figure out: animated nodes that use a texture atlas as a resource are incredibly memory-heavy. I've imported an atlas into my project (the textures total about 35 MB). Preloading the textures into RAM seems OK, but the moment I run the actual animation the heap size explodes (from about 80 MB to 780 MB).
Here goes my code:
self.noahFrames = [[NSMutableArray alloc] init];
SKTextureAtlas *noahAtlas = [SKTextureAtlas atlasNamed:@"noahAnimati"];
int imgCount = (int)noahAtlas.textureNames.count;
for (int i = 1; i <= imgCount; i++) {
    NSString *textureName = [NSString stringWithFormat:@"NoahMainMenuAnimation_%d", i];
    SKTexture *temp = [noahAtlas textureNamed:textureName];
    [self.noahFrames addObject:temp];
}

SKSpriteNode *noahNode = [self createSpriteWithName:@"noah" imagePath:@"Noah_main_menu_hd" positionXPath:@"MainMenu.Noah.x" positionYPath:@"MainMenu.Noah.y" scalePath:@"MainMenu.Noah.scale"];
[self addChild:noahNode];

// up to this point everything goes fine
[noahNode runAction:[SKAction repeatActionForever:
                     [SKAction animateWithTextures:self.noahFrames
                                      timePerFrame:0.1f
                                            resize:YES
                                           restore:YES]] withKey:@"animatedNoah"];
So I guess my actual question is: why does the application become so insanely memory-heavy after starting the SKAction animation? I must be missing something rather obvious...
I do know that when a texture is loaded into graphics memory it is stored uncompressed, but I don't think Xcode monitors graphics memory, so this is really strange to me.
I usually load and run animations just like you do and I don't see such memory behaviour, but I have noticed it when testing on the simulator. Are you using the iOS simulator for your tests? Does your application crash when you reach those memory levels?
I was going through the SpriteKit documentation by Apple and came across a really useful feature that I could use when programming my UI. The problem is I can't get it to work.
Please see this page and scroll down to "Resizing a Sprite" - Apple Docs
I have literally copied the image dimensions and used the same code in case I was doing something wrong, but I always end up with a stretched-looking image rather than the "end caps" keeping their original scale.
I am referring to this code:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"stretchable_button.png"];
button.centerRect = CGRectMake(12.0/28.0, 12.0/28.0, 4.0/28.0, 4.0/28.0);
What am I doing wrong? Is there a step I have missed?
EDIT:
Here is the code I have been using. I stripped out my button class and tried it with a plain SKSpriteNode, but the problem persists. I also changed the image just to make sure it wasn't that. The image I'm using is 32x32 at normal size.
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
[self addChild:button];
button.position = CGPointMake(200, 200);
button.size = CGSizeMake(128, 64);
button.centerRect = CGRectMake(9/32, 9/32, 14/32, 14/32);
The .centerRect property works as documented if you adjust the sprite's .scale property.
Try:
SKTexture *texture = [SKTexture textureWithImageNamed:@"Button.png"];
SKSpriteNode *button = [[SKSpriteNode alloc] initWithTexture:texture];
button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0);
[self addChild:button];
button.xScale = 128.0/texture.size.width;
button.yScale = 64.0/texture.size.height;
9/32 is integer division, so the result passed to CGRectMake is zero. Ditto the other three parameters. If you use floating point literals like the example you cite, you might get better results.
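In other words, the centerRect line from the question should read:

button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0);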
Here's a refresher on how exactly this works. By the way, my image is 48 pixels wide and 52 pixels high, but that doesn't matter at all; any image can be used:

SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
// (x, y, width, height). The first two values mark off the corners of the
// image that you DON'T want touched/resized (they will just be moved).
// The last two values specify how much of the image's width & height is
// used as stretching material, cut out from the center of the image.
button.centerRect = CGRectMake(20/button.frame.size.width, 20/button.frame.size.height, 5/button.frame.size.width, 15/button.frame.size.height);
button.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2); // Positions the sprite in the middle of the screen.
button.xScale = 4; // Resizes width (this is all I needed).
//button.yScale = 2; // Resizes height (commented out because I didn't need it; uncomment if the button needs to be taller).
[self addChild:button];
Read the section called "Resizing a Sprite" in this document: https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html#//apple_ref/doc/uid/TP40013043-CH9-SW10
'Figure 2-4 A stretchable button texture' demonstrates how the (x, y, width, height) works.
Based on rwr's answer, here is a working init method for an SKSpriteNode. I use this in my own game. Basically you make insets of 10px all around the output image, and then call it like this:
[[HudBoxScalable alloc] initWithTexture:[atlas textureNamed:@"hud_box_9grid.png"] inset:10 size:CGSizeMake(300, 100) delegate:(id<HudBoxDelegate>)clickedObject];

- (id)initWithTexture:(SKTexture *)texture inset:(float)inset size:(CGSize)size
{
    if (self = [super initWithTexture:texture]) {
        self.centerRect = CGRectMake(inset/texture.size.width,
                                     inset/texture.size.height,
                                     (texture.size.width - inset*2)/texture.size.width,
                                     (texture.size.height - inset*2)/texture.size.height);
        self.xScale = size.width/texture.size.width;
        self.yScale = size.height/texture.size.height;
    }
    return self;
}
I've got a simple UIView class that draws some text in its drawRect routine:
[mString drawInRect:theFrameRect withFont:theFont];
That looks OK at regular resolution, but when zoomed, it's fuzzy:
[image removed, not enough posts]
So, I added some tiling:
CATiledLayer *theLayer = (CATiledLayer *) self.layer;
theLayer.levelsOfDetailBias = 8;
theLayer.levelsOfDetail = 8;
theLayer.tileSize = CGSizeMake(1024,1024);
(plus the requisite layerClass routine; see the snippet at the end of this question)
but now the text draws twice when zoomed, whenever the frame is larger than the tile size:
[image removed, not enough posts]
I'm not clear on the solution for this. Drawing the text is an atomic operation. I could figure out how to calculate what portion of the text to draw based on the rect that's passed in... but is that really the way to go? Older code examples use drawLayer:inContext:, but that seems to have been obviated by iOS 5, and it is clearly more cumbersome than a straight drawRect: call.
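For reference, the layerClass routine mentioned above is the standard UIView override:

// Back this view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass
{
    return [CATiledLayer class];
}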
I'm trying to make an application and I have to calculate the brightness of the camera, like this application: http://itunes.apple.com/us/app/megaman-luxmeter/id455660266?mt=8
I found this document : http://b2cloud.com.au/tutorial/obtaining-luminosity-from-an-ios-camera
But I don't know how to adapt it to the live camera rather than a single image. Here is my code:
Image = [[UIImagePickerController alloc] init];
Image.delegate = self;
Image.sourceType = UIImagePickerControllerCameraCaptureModeVideo;
Image.showsCameraControls = NO;
[Image setWantsFullScreenLayout:YES];
Image.view.bounds = CGRectMake(0, 0, 320, 480);
[self.view addSubview:Image.view];

NSArray *dayArray = [NSArray arrayWithObjects:Image, nil];
for (NSString *day in dayArray)
{
    for (int i = 1; i <= 2; i++)
    {
        UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%@%d.png", day, i]];
        unsigned char *pixels = [image rgbaPixels];
        double totalLuminance = 0.0;
        for (int p = 0; p < image.size.width*image.size.height*4; p += 4)
        {
            totalLuminance += pixels[p]*0.299 + pixels[p+1]*0.587 + pixels[p+2]*0.114;
        }
        totalLuminance /= (image.size.width*image.size.height);
        totalLuminance /= 255.0;
        NSLog(@"%@ (%d) = %f", day, i, totalLuminance);
    }
}
Here are the issues :
"Instance method '-rgbaPixels' not found (return type defaults to 'id')"
&
"Incompatible pointer types initializing 'unsigned char *' with an expression of type 'id'"
Thanks a lot! =)
Rather than doing expensive CPU-bound processing of each pixel in an input video frame, let me suggest an alternative approach. My open source GPUImage framework has a luminosity extractor built into it, which uses GPU-based processing to give live luminosity readings from the video camera.
It's relatively easy to set this up. You simply need to allocate a GPUImageVideoCamera instance to represent the camera, allocate a GPUImageLuminosity filter, and add the latter as a target for the former. If you want to display the camera feed to the screen, create a GPUImageView instance and add that as another target for your GPUImageVideoCamera.
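A minimal sketch of that wiring (the session preset and camera position are just examples):

// The camera feeds the luminosity filter; add a GPUImageView as a
// second target if you also want an on-screen preview.
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageLuminosity *luminosityFilter = [[GPUImageLuminosity alloc] init];
[videoCamera addTarget:luminosityFilter];
[videoCamera startCameraCapture];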
Your luminosity extractor will use a callback block to return luminosity values as they are calculated. This block is set up using code like the following:
[(GPUImageLuminosity *)filter setLuminosityProcessingFinishedBlock:^(CGFloat luminosity, CMTime frameTime) {
// Do something with the luminosity
}];
I describe the inner workings of this luminosity extraction in this answer, if you're curious. This extractor runs in ~6 ms for a 640x480 frame of video on an iPhone 4.
One thing you'll quickly find is that the average luminosity from the iPhone camera is almost always around 50% when automatic exposure is enabled. This means that you'll need to supplement your luminosity measurements with exposure values from the camera metadata to obtain any sort of meaningful brightness measurement.
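As an illustration of the kind of correction meant here (not code from the framework): on iOS 8 and later you can read the exposure settings from the AVCaptureDevice that GPUImageVideoCamera exposes as inputCamera, and fold them into the reading. The fixed aperture value is a per-device assumption.

// Hypothetical brightness estimate combining the measured luminosity
// with the camera's current exposure settings.
AVCaptureDevice *camera = videoCamera.inputCamera;
Float64 shutterSeconds = CMTimeGetSeconds(camera.exposureDuration);
float iso = camera.ISO;
double aperture = 2.8; // assumed fixed f-number; varies per device
// Rough light-meter relation: scene brightness ~ luminosity * N^2 / (t * ISO)
double sceneBrightness = luminosity * aperture * aperture / (shutterSeconds * iso);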
Why do you place the camera image into an NSArray *dayArray? Five lines later you take it out of that array but treat the object as an NSString. An NSString does not have rgbaPixels. The example you copy-pasted has an array of filenames corresponding to pictures taken at different times of the day. It then opens those image files and performs the luminosity analysis.
In your case there is no file to read. Both outer for loops, i.e. the ones over day and i, will have to go away. You already have access to the image provided through the UIImagePickerController. Right after adding the subview, you could in principle access the pixels as in unsigned char *pixels = [Image rgbaPixels]; where Image is the image you got from the UIImagePickerController.
However, this may not be what you want to do. I imagine that your goal is rather to show the UIImagePickerController in capture mode and then to measure luminosity continuously. To this end, you could turn Image into a member variable, and then access its pixels repeatedly from a timer callback.
You can import the class below from GitHub to resolve this issue:
https://github.com/maxmuermann/pxl
Add the UIImage+Pixels.h & .m files to your project, then try to run.
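After adding the category, the rgbaPixels call from the question should compile; a minimal usage sketch:

#import "UIImage+Pixels.h"

// Returns the raw RGBA bytes of the image, 4 bytes per pixel,
// which the luminance loop in the question iterates over.
unsigned char *pixels = [image rgbaPixels];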