The reason for the question is: is it better to composite a texture at runtime (and make a sprite from it), or just to use multiple sprites?
Example: Say you have a small image source for a repeating pattern, where you need many sprites to fill the view (as in a background).
An SKTexture is just image data; an SKSpriteNode is a display object.
The quick answer is no, you cannot draw an SKTexture to the screen without an SKSpriteNode.
This answer goes over that limitation: How do you set a texture to tile in Sprite Kit
However, I want to give you an option for achieving your ultimate goal.
What you could do is use an SKNode as a container for however many SKSpriteNodes you need to create your background. Then, using the SKView method textureFromNode:, you can render that SKNode into a single SKTexture and use it to create a single SKSpriteNode for your background.
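A minimal sketch of that approach (the tile image name "tile" and the 4x4 grid here are placeholders, not from the question):
SKNode *container = [SKNode node];
for (int row = 0; row < 4; row++) {
    for (int col = 0; col < 4; col++) {
        // assumed tile image; the positions lay the tiles out edge to edge
        SKSpriteNode *tile = [SKSpriteNode spriteNodeWithImageNamed:@"tile"];
        tile.position = CGPointMake(col * tile.size.width, row * tile.size.height);
        [container addChild:tile];
    }
}
// textureFromNode: renders the whole node tree into one SKTexture
SKTexture *flattened = [self.view textureFromNode:container];
SKSpriteNode *background = [SKSpriteNode spriteNodeWithTexture:flattened];
background.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
[self addChild:background];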
Hopefully the upcoming version of Sprite Kit for iOS 8 will have better tiling options.
Update
Also, while doing some research tonight (I had a need for this same functionality), I found this:
http://spritekitlessons.wordpress.com/2014/02/07/tile-a-background-image-with-sprite-kit/
It does something similar to what I was pondering. I'll copy the code here in case that page disappears:
CGSize coverageSize = CGSizeMake(2000,2000); //the size of the entire image you want tiled
CGRect textureSize = CGRectMake(0, 0, 100, 100); //the size of the tile.
CGImageRef backgroundCGImage = [UIImage imageNamed:@"image_to_tile"].CGImage; //change the string to your image name
UIGraphicsBeginImageContext(CGSizeMake(coverageSize.width, coverageSize.height));
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawTiledImage(context, textureSize, backgroundCGImage);
UIImage *tiledBackground = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
SKTexture *backgroundTexture = [SKTexture textureWithCGImage:tiledBackground.CGImage];
SKSpriteNode *backgroundTiles = [SKSpriteNode spriteNodeWithTexture:backgroundTexture];
backgroundTiles.yScale = -1; //upon closer inspection, I noticed my source tile was flipped vertically, so this just flipped it back.
backgroundTiles.position = CGPointMake(0,0);
[self addChild:backgroundTiles];
Related
I am developing a game for the App Store with Sprite Kit in Xcode. I have some single-colored shapes that I want to 'splice' together into one, and have the user be able to drag and drop this spliced image as they would the single-colored shapes.
In more detail: I have the following two images:
http://s1381.photobucket.com/user/shahmeen/media/CircleYellow_zps11feede7.png.html
http://s1381.photobucket.com/user/shahmeen/media/CircleRed_zps49eb1802.png.html
SKSpriteNode *spriteA;
spriteA = [SKSpriteNode spriteNodeWithImageNamed:@"CircleYellow"];
[self addChild:spriteA];
SKSpriteNode *spriteB;
spriteB = [SKSpriteNode spriteNodeWithImageNamed:@"CircleRed"];
[self addChild:spriteB];
I would like to have a third sprite that looks as it does below (forgive my crude Photoshop skills; if the links aren't working: what I want to create is an image, spriteC, whose left half is the left half of spriteA and whose right half is the right half of spriteB):
http://s1381.photobucket.com/user/shahmeen/media/CircleRed_zps9ca710cb.png.html
(some code that crops spriteA and spriteB and then)
SKSpriteNode *spriteC;
spriteC = (the output of spriteA and spriteB cropped and spliced together);
[self addChild:spriteC];
I know I can do something like this using SKShapeNodes with shapes as simple as the ones above, but I intend to do this with much more complex figures. Also, I don't think it is practical to load in several .pngs, because the permutations will run into the hundreds. I'll be happy to clarify anything. Thanks.
I would create the final combination of the two images in code and have an SKSpriteNode use the combined image as its texture. You can do it as follows, assuming the two images and the final one are the same size:
- (UIImage*)combineImage:(UIImage*)leftImage withImage:(UIImage*)rightImage{
CGSize finalImageSize = leftImage.size;
CGFloat scale = leftImage.scale;
UIGraphicsBeginImageContextWithOptions(finalImageSize, NO, scale);
CGContextRef context = UIGraphicsGetCurrentContext();
[leftImage drawAtPoint:CGPointZero];
CGContextClipToRect(context, CGRectMake(finalImageSize.width/2, 0, finalImageSize.width/2, finalImageSize.height)); //use clipToMask with a maskImage if you have some more complicated images
[rightImage drawAtPoint:CGPointZero];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
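To get the third sprite from the combined image, something like this should work (using the image names from the question and SKTexture's textureWithImage:):
UIImage *combined = [self combineImage:[UIImage imageNamed:@"CircleYellow"]
                             withImage:[UIImage imageNamed:@"CircleRed"]];
SKTexture *combinedTexture = [SKTexture textureWithImage:combined];
SKSpriteNode *spriteC = [SKSpriteNode spriteNodeWithTexture:combinedTexture];
[self addChild:spriteC];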
I'm trying to use an SKEmitterNode to create a shadow effect, kind of like in Pokémon when you are in a cave:
http://www.serebii.net/pokearth/maps/johto-hgss/38-route31.png
Here is the code I have so far:
NSString *burstPath =
[[NSBundle mainBundle] pathForResource:@"MyParticle" ofType:@"sks"];
SKNode *area = [[SKNode alloc] init];
SKSpriteNode *background = [SKSpriteNode spriteNodeWithColor:[SKColor blackColor] size:CGSizeMake(self.frame.size.width, self.frame.size.height)];
background.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
SKEmitterNode *burstNode =
[NSKeyedUnarchiver unarchiveObjectWithFile:burstPath];
burstNode.position = CGPointMake(CGRectGetMidX(self.frame),CGRectGetMidY(self.frame));
burstNode.particleBlendMode = SKBlendModeSubtract;
[area addChild:background];
[area addChild:burstNode];
[self addChild:area];
Here is the SKEmitterNode: http://postimg.org/image/60zflqjzt/
I've had two ideas.
The first one was to create a rectangular SKSpriteNode and subtract the SKEmitterNode's output from it. That way we have a black rectangle with a "hole" in the center that we can see through.
The second one was to add the rectangular SKSpriteNode and the SKEmitterNode to another SKNode (area), then set the particleBlendMode of the SKEmitterNode, and finally set the alpha of the SKNode (area) based on the color: for example, if a pixel's color is black, that pixel's alpha value will be 1.0; if another pixel is white, that pixel's alpha value will be 0.0.
This question is in some ways a possible duplicate of How to create an alpha mask in iOS using sprite kit, but since no good answer was given there, I assume that isn't a problem.
Thank you very much.
These are not the nodes you are looking for! ;)
Particles can't be used to make a fog of war; even if you could make them behave well enough to generate one, it would be prohibitively slow.
Based on the linked screenshot you really only need an image with a "hole" in it, a transparent area. The image should be screen-sized and just cover up the borders to whichever degree you need it. This will be a non-revealing fog of war, or rather just the effect of darkness surrounding the player.
A true fog of war implementation where you uncover the world's area typically uses a pattern, in its simplest form it would just be removing (fading out) rectangular black sprites.
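A minimal sketch of that overlay, assuming a screen-sized image named "darkness" that is opaque black at the edges and transparent in the middle:
SKSpriteNode *darkness = [SKSpriteNode spriteNodeWithImageNamed:@"darkness"];
darkness.size = self.frame.size;
darkness.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
darkness.zPosition = 100; // keep the overlay above everything else
[self addChild:darkness];
If the effect should follow the player, update darkness.position in -update:.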
Now, with the powerful devices of this era (iPhone 12), it is possible to use SKEmitterNode without losing too many frames per second.
You must build an SKS (SpriteKit Particle File) with this image:
Then set its variables as in this picture:
Finally, add the particle in your code with something like this:
let fog = SKEmitterNode(fileNamed: "fog")! // fileNamed: returns an optional
fog.zPosition = 6
self.addChild(fog)
fog.position.y = self.frame.midY
fog.particlePositionRange.dx = self.size.width * 2.5
I am trying to create a conveyor belt effect using SpriteKit like so
My first instinct would be to create a conveyor belt image larger than the screen and then move it repeatedly forever with actions. But this does not seem right, because it depends on the screen size.
Is there a better way to do this?
Also, I obviously want to put things on the conveyor belt (which would move independently), so the node is an SKNode with a child sprite node that moves.
Update: I would like the conveyor belt to move "visually", i.e. the lines move in one direction, giving the impression of movement.
Apply a physicsBody to all the sprites which you need to move on the conveyor belt, and set their affectedByGravity property to NO.
In this example, I am assuming that the sprite node representing your conveyor belt is called conveyor. Also, all the sprite nodes which need to be moved have the string "moveable" as their name property, as in the setup sketch below.
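For example, setting up one movable sprite might look like this (the image name "crate" is just a placeholder):
SKSpriteNode *crate = [SKSpriteNode spriteNodeWithImageNamed:@"crate"];
crate.name = @"moveable"; // matched by enumerateChildNodesWithName: below
crate.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:crate.size];
crate.physicsBody.affectedByGravity = NO;
[self addChild:crate];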
Then, in your -update: method,
-(void)update:(CFTimeInterval)currentTime
{
[self enumerateChildNodesWithName:@"moveable" usingBlock:^(SKNode *node, BOOL *stop) {
if ([node intersectsNode:conveyor])
{
[node.physicsBody applyForce:CGVectorMake(-1, 0)];
//edit the vector to get the right force. This will be too fast.
}
}];
}
After this, just add the desired sprites on the correct positions and you will see them moving by themselves.
For the belt's own animation, it would be better to use an array of textures which you can loop on the sprite with an animate action.
Alternatively, you can add and remove a series of small sprites with a sectional image and move them like you do the sprites which are travelling on the conveyor.
@akashg has pointed out a solution for moving objects across the conveyor belt; I am giving my answer on how to make the conveyor belt itself look as if it is moving.
One suggestion, and my initial intuition, was to place a rectangle larger than the screen on the scene and move it repeatedly. On reflection, I think this is not a nice solution: if we wanted to place a conveyor belt in the middle of the screen, so that both of its ends are visible, this would not be possible without an extra clipping mask.
The ideal solution would be to tile the SKTexture on the SKSpriteNode and just offset that texture, but this does not seem to be possible with Sprite Kit (it has no tiling mechanism).
So basically what I'm doing is creating subtextures from a texture laid out as [tile][tile] (a repeatable tile, twice), and showing these subtextures one after the other to create the animation.
Here is the code:
- (SKSpriteNode *) newConveyor
{
SKTexture *conveyorTexture = [SKTexture textureWithImageNamed:@"testTextureDouble"];
SKTexture *halfConveyorTexture = [SKTexture textureWithRect:CGRectMake(0.5, 0.0, 0.5, 1.0) inTexture:conveyorTexture];
SKSpriteNode *conveyor = [SKSpriteNode spriteNodeWithTexture:halfConveyorTexture size:CGSizeMake(conveyorTexture.size.width/2, conveyorTexture.size.height)];
NSArray *textureArray = [self horizontalTextureArrayForTexture:conveyorTexture];
SKAction *moveAction = [SKAction animateWithTextures:textureArray timePerFrame:0.01 resize:NO restore:YES];
[conveyor runAction:[SKAction repeatActionForever:moveAction]];
return conveyor;
}
- (NSArray *) horizontalTextureArrayForTexture:(SKTexture *)texture
{
CGFloat deltaOnePixel = 1.0 / texture.size.width;
int countSubtextures = texture.size.width / 2;
NSMutableArray *textureArray = [[NSMutableArray alloc] initWithCapacity:countSubtextures];
CGFloat offset = 0;
for (int i = 0; i < countSubtextures; i++)
{
offset = i * deltaOnePixel;
SKTexture *subTexture = [SKTexture textureWithRect:CGRectMake(offset, 0.0, 0.5, 1.0) inTexture:texture];
[textureArray addObject:subTexture];
}
return [NSArray arrayWithArray:textureArray];
}
Now, this is still not ideal, because you have to make the image with two tiles manually. A CIFilter transform applied to an SKTexture, or plain Core Graphics, could potentially generate the two-tile texture at runtime.
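For instance, a rough Core Graphics sketch (instead of a CIFilter), assuming the single tile is an image named "testTexture":
UIImage *tile = [UIImage imageNamed:@"testTexture"];
CGSize doubleSize = CGSizeMake(tile.size.width * 2, tile.size.height);
UIGraphicsBeginImageContextWithOptions(doubleSize, NO, tile.scale);
// draw the tile twice, side by side, to build the [tile][tile] image
[tile drawAtPoint:CGPointZero];
[tile drawAtPoint:CGPointMake(tile.size.width, 0)];
UIImage *doubleTile = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
SKTexture *conveyorTexture = [SKTexture textureWithImage:doubleTile];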
Apart from this, I think this solution is better because it does not depend on the size of the screen and is memory efficient. But to cover the whole screen I would have to create more SKSpriteNode objects sharing the same moveAction, since tiling is not possible with Sprite Kit according to this source: How do you set a texture to tile in Sprite Kit.
I will try to update the code to make it possible to tile by using multiple SKSpriteNode objects.
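A rough sketch of that multi-sprite idea, reusing the newConveyor method from above (untested):
SKSpriteNode *first = [self newConveyor];
CGFloat segmentWidth = first.size.width;
// enough segments to span the scene, plus one spare
int segments = (int)ceil(self.frame.size.width / segmentWidth) + 1;
for (int i = 0; i < segments; i++) {
    SKSpriteNode *segment = (i == 0) ? first : [self newConveyor];
    segment.position = CGPointMake(i * segmentWidth, CGRectGetMidY(self.frame));
    [self addChild:segment];
}
Each segment runs its own copy of the animation, but since they are added in the same frame they stay visually in sync.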
I'm setting up an OpenGL iOS project. I've read a lot about the projection matrix and gluUnProject. I want to draw on a 3D model dynamically, so I need the points on my window that correspond to the 3D object.
From gluUnProject I get a ray through my 3D scene. After that I can find the collision point with iterative algorithms (raytracing)...
Now the problems:
How do I get the corresponding texture?
How do I get the corresponding vertices/pixels?
How can I write on that perspective texture/pixel?
How do I get the corresponding texture?
Getting the texture should be easy enough if you are using an object-based approach to the objects in your scene. Just store the texture file name in the class, then iterate through your scene objects in your raycasting method and grab the texture name when you get a match.
How do I get the corresponding vertices/pixels?
Again, this should be easy if you have used an object-based approach for your object drawing (i.e. an instantiation of a custom object class for each object in the scene). Assuming all your scene objects are in an NSMutableArray, you can just iterate through the array until you find a match with the raycasted object.
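A hedged sketch of what such an object-based lookup might look like; the SceneObject class and its members are assumptions for illustration, not an existing API:
#import <GLKit/GLKit.h>

@interface SceneObject : NSObject
@property (nonatomic, copy) NSString *textureName; // texture file used by this object
// ... vertex data, transform, etc. would live here too
- (BOOL)intersectsRayFrom:(GLKVector3)origin direction:(GLKVector3)direction;
@end

// in the raycasting method, find the first object the ray hits
for (SceneObject *object in self.sceneObjects) {
    if ([object intersectsRayFrom:rayOrigin direction:rayDirection]) {
        NSString *hitTexture = object.textureName; // the corresponding texture
        // the matched object's vertex data gives the corresponding vertices
        break;
    }
}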
How can I write on that perspective texture/pixel?
If you are looking at writing text on a new texture, one way of doing this is to use the layer of a UILabel as a texture (see below), but if you are looking at drawing on an existing texture, that is much more difficult (and, to be honest, best avoided).
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, width, height)];
label.text = text;
label.font = [UIFont fontWithName:@"Helvetica" size:12];
UIGraphicsBeginImageContext(label.bounds.size);
CGContextRef myContext = UIGraphicsGetCurrentContext();
// Flip the coordinate system so the rendered layer is oriented for OpenGL,
// whose texture origin is bottom-left rather than top-left.
CGContextTranslateCTM(myContext, 0, height);
CGContextScaleCTM(myContext, 1.0, -1.0);
[label.layer renderInContext:myContext];
UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
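The resulting layerImage can then be uploaded as an OpenGL texture, for example with GLKTextureLoader from GLKit:
NSError *error = nil;
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:layerImage.CGImage
                                                           options:nil
                                                             error:&error];
// textureInfo.name is the GL texture handle to bind when drawing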
I am a bit confused about changing the texture of a CCSprite.
I have:
aTexture[NUM_WALLS+11] = [[CCTexture2D alloc] initWithImage:[UIImage imageNamed:@"shop1.png"]];
[aSprite setTexture:aTexture[NUM_WALLS+11]];
and
aTexture[NUM_WALLS+9] = [[CCTexture2D alloc] initWithImage:[UIImage imageNamed:@"bush2.png"]];
[aSprite setTexture:aTexture[NUM_WALLS+9]];
The two images have two different sizes. However, the sprite does NOT change size when I change the texture. Instead, the image scales to the size of the sprite. I thought the sprite was supposed to change size.
Can someone please clarify?
The image files may have different sizes, but the textures may end up the same size, since texture dimensions are usually padded up to the next power of two (NPOT texture support is disabled by default in cocos2d).
Meaning: if one image is 150x150 and the other is 250x250 their textures will both have 256x256 dimensions. If you load a sprite from an image file, cocos2d adjusts the actual part of the texture that is drawn to the size of the image (contentSize). If you change the texture, cocos2d will simply use the size of the texture regardless of the size of the image stored in the texture - because that information is lost after a texture has been created.
In that case you will have to manually call setTextureRect: on the sprite to draw only the area of the actual image in the texture.
The better solution is to create a texture atlas with both textures and then just change the sprite frame displayed by the sprite. It's a lot easier and saves memory, too.
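For example, with cocos2d 2.x sprite frames (the atlas file name "buildings.plist" is assumed):
// load the atlas once; it registers every frame in it by name
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"buildings.plist"];
// swapping the displayed image is then just a frame change, and the
// sprite's contentSize updates to match the new frame
[aSprite setDisplayFrame:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"bush2.png"]];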
For those who still want to use setTexture, I found that this works:
CCTexture2D* tex2d = [[CCTexture2D alloc] initWithImage:[UIImage imageNamed:_texture]];
CGSize texSize = [tex2d contentSize];
[sprite setTexture:tex2d];
[sprite setTextureRect:CGRectMake(0, 0, texSize.width, texSize.height)];
The order of setTexture and setTextureRect is important; in the other order it does not work.
As far as I know, CCSprite uses its contentSize property and position to calculate the array of vertices to be drawn with OpenGL. So the sprite will not change its visible size until you change its contentSize property.
CCTexture2D* tex2d = [[CCTexture2D alloc] initWithImage:[UIImage imageNamed:_texture]];
CGSize texSize = [tex2d contentSize];
[sprite setTexture:tex2d];
[sprite setTextureRect:CGRectMake(0, 0, texSize.width, texSize.height)];
WARNING
This code creates a texture that may leak memory, because its dealloc method is never called. You can verify this by subclassing the CCTexture2D class; at least with alloc-init, it consistently behaves this way for CCSprite objects.