It seems that when pulling textures from a texture atlas on iOS 10, I now get new texture instances instead of the same texture being reused across different sprites. On iOS 9 this works as expected. Is anyone else experiencing this issue? Perhaps there is a step I missed that is now required in iOS 10.
Notes: I created a sample project with a new atlas, then just dragged Spaceship into the 1x slot. I have also tried preloading, and that did nothing as well.
Code:
let atlas = SKTextureAtlas(named: "Sprites")
var texture = atlas.textureNamed("Spaceship")
print("\(Unmanaged.passUnretained(texture)),\(Unmanaged.passUnretained(texture).toOpaque())")
texture = atlas.textureNamed("Spaceship")
print("\(Unmanaged.passUnretained(texture)),\(Unmanaged.passUnretained(texture).toOpaque())")
Edit: To get around the comparison issue, I use the description property to check whether 2 textures are equal. For this to work, though, you can't be using 2 atlases that each contain a texture with the exact same name and size. I will never hit this situation, but for anybody out there looking for help, keep this in mind.
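For anyone wanting to see what that description-based comparison looks like, here is a minimal sketch. It assumes the (undocumented) description format looks like `<SKTexture> 'Spaceship' (128 x 128)`, so treat it as fragile; `textureName(fromDescription:)` is a hypothetical helper, not SpriteKit API.

```swift
// Hypothetical helper: pulls the quoted texture name out of an SKTexture's
// description string, assumed to look like "<SKTexture> 'Spaceship' (128 x 128)".
// The description format is undocumented and may change between iOS releases.
func textureName(fromDescription description: String) -> String? {
    guard let first = description.firstIndex(of: "'"),
          let last = description.lastIndex(of: "'"),
          first < last
    else { return nil }
    return String(description[description.index(after: first)..<last])
}

// Comparing whole descriptions treats two textures as "equal" when the name
// and size match, which is exactly why two atlases each containing a texture
// with the same name and size would break this scheme.
```

With real textures the comparison itself is just `texture.description == other.description`.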
I've run the same test and I get the same results.
I'm not 100% sure, but it seems that during the development of Swift 3 there was a proposal to change Unmanaged to use UnsafePointer.
But if you try this:
func address<T: AnyObject>(o: T) -> String {
    let addr = unsafeBitCast(o, to: Int.self)
    return NSString(format: "%p", addr) as String
}
Usage:
print(address(o: texture))
in iOS 9 you get correct values; in iOS 10, wrong results.
I think you're right, we are facing a bug (another one...).
Is having a different physical address for a texture referencing the "same texture" really a problem?
I've run the default sample game project, but set up for Obj-C. I have a texture atlas that would be something like the image below. Note, however, that I ran it through TexturePacker, so the actual atlas generated by Xcode is different.
I did as you said and created 2 textures with the same name.
self.myTextureAtlas = [SKTextureAtlas atlasNamed:@"MyTexture"];
self.tex0 = [self.myTextureAtlas textureNamed:@"tex0"];
self.tex1 = [self.myTextureAtlas textureNamed:@"tex0"];
As you said, the pointers for tex0 and tex1 are different. So at least there is consistency between Swift and Obj-C.
However, I don't think this is a problem/bug. What I suspect is that they changed the implementation so the returned SKTexture is a new "instance", while the underlying texture is still the same.
I'll talk OpenGL, since that is what I write my engines in. Metal will still have similarities. A basic sub-texture really has only 2 important properties: a texture name (this is the OpenGL texture name) and UVs. If you were thinking about what would be considered the "equality" for conforming to Equatable, it would most likely be testing for equality against those 2 items. The texture name is the atlas texture name, and the UVs are the UVs within the atlas which represent the area of the particular sub-texture.
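As a sketch of that idea (not SpriteKit or OpenGL API; the type and field names here are made up for illustration), value equality for a sub-texture would compare the atlas texture name plus the UVs, ignoring object identity:

```swift
// Illustrative sub-texture: equality is the atlas texture it points into
// (the GL texture name) plus the UV rectangle, not the wrapper's address.
struct SubTexture: Equatable {
    let atlasTextureName: Int      // e.g. the OpenGL texture name of the atlas
    let u, v: Double               // UV origin within the atlas
    let width, height: Double      // UV extents within the atlas

    static func == (lhs: SubTexture, rhs: SubTexture) -> Bool {
        return lhs.atlasTextureName == rhs.atlasTextureName
            && lhs.u == rhs.u && lhs.v == rhs.v
            && lhs.width == rhs.width && lhs.height == rhs.height
    }
}

// Two distinct instances, same atlas + UVs: equal by value even though they
// are different objects in memory.
let a = SubTexture(atlasTextureName: 3, u: 0.001618, v: 0.793765, width: 0.139159, height: 0.203837)
let b = SubTexture(atlasTextureName: 3, u: 0.001618, v: 0.793765, width: 0.139159, height: 0.203837)
```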
To test this hypothesis, I ran a GPU frame capture on it. GPU frame capture in Xcode 8 seems pretty buggy: using Metal, it crashed 100% of the time. I forced it to use OpenGL and managed to get a frame capture. As expected, when I looked at all texture resources, I saw only one texture for my atlas.
Texture #3 is MyTexture.
If I dump the textureRect, which appears to be the UVs, I can see they are the same:
Tex0 Rect 0.001618 0.793765 0.139159 0.203837
Tex1 Rect 0.001618 0.793765 0.139159 0.203837
Based on this, it would seem that both self.tex0 and self.tex1, although having different physical addresses, still both point to the same sub-texture.
Note that I no longer use SpriteKit. My current renderer uses handles for textures; when retrieved, you can get handle objects with different physical addresses, but they all still dereference to the same underlying texture.
I guess I don't really see getting different pointers as a problem, provided they still reference the same underlying texture (i.e. no more texture memory is allocated).
To get around this issue, I had to come up with a way to cache the textures so they don't get duplicated:
private var textureCache = [String: SKTexture]()

extension SKTextureAtlas
{
    func texturesWithNames(_ names: [String]) -> [SKTexture]
    {
        var textures = [SKTexture]()
        names.forEach { textures.append(textureNamed($0)) }
        return textures
    }

    func cachedTextureWithName(_ name: String) -> SKTexture
    {
        if textureCache[name] == nil
        {
            textureCache[name] = textureNamed(name)
        }
        return textureCache[name]!
    }

    func cachedTexturesWithNames(_ names: [String]) -> [SKTexture]
    {
        var textures = [SKTexture]()
        names.forEach { textures.append(cachedTextureWithName($0)) }
        return textures
    }

    func clearCache()
    {
        textureCache = [String: SKTexture]()
    }
}

extension SKTexture
{
    var name: String
    {
        // `slice(start:to:)` is a custom String helper (not standard library)
        // that returns the substring between the two delimiters; here, the
        // texture name quoted in the description.
        return self.description.slice(start: "'", to: "'")!
    }
}
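The caching pattern in the extension above, stripped of SpriteKit so the mechanics stand alone (names here are illustrative): a memoizing lookup where the loader runs once per key and later lookups return the cached instance.

```swift
// Generic memoizing cache: `load` runs at most once per key; subsequent
// lookups for the same key return the stored value.
final class LoaderCache<Value> {
    private var storage = [String: Value]()
    private let load: (String) -> Value

    init(load: @escaping (String) -> Value) {
        self.load = load
    }

    func value(for name: String) -> Value {
        if let cached = storage[name] { return cached }
        let fresh = load(name)
        storage[name] = fresh
        return fresh
    }

    func clear() { storage.removeAll() }
}
```

With SKTexture as Value and atlas.textureNamed as the loader, this is the same idea as cachedTextureWithName(_:).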
Related
I know that I can add sequences of individual images in Xcode, and let it create the texture atlas. This process is well-described in easily-searched venues.
I am getting atlases from designers, created with Adobe Animate (v18.0) already combined into the full sheet, and moreover with the accompanying XML file describing the animation. (In which sub-images and display frames do not match 1:1, so it's hard to see how Xcode would figure that out.)
It's not clear to me from the SpriteKit docs whether/how to use these pre-defined Texture Atlases. Is this possible?
If you're getting pre-baked texture atlases, with externally-generated descriptions of where the sprites should get their textures, you'll probably have to create your sprites from SKTextures built with SKTexture's init(rect:in:) initializer.
You'll need to read the sprite's extents out of the XML file, and then create a texture out of the atlas. Then you can create a new SKTexture object that represents a part of the larger texture to act as your sprite's texture.
This is untested pseudocode, but it shows the process:
let spriteRect = (get the rect from the XML, in unit coordinates)
let atlas = SKTexture(imageNamed: "myTextureAtlas")
let spriteTexture = SKTexture(rect: spriteRect, in: atlas)
let sprite = SKSpriteNode(texture: spriteTexture)
Once you have this process in place, you can animate the sprites using the usual methods, like setting up SKActions with a list of textures out of the texture atlas.
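One detail worth calling out: init(rect:in:) takes a rect in unit coordinates with SpriteKit's bottom-left origin, while atlas XML (Adobe Animate's included, as far as I've seen) typically gives pixel coordinates with a top-left origin. A sketch of the conversion, with made-up type and field names:

```swift
// Converts a pixel rect from an atlas description (top-left origin, as in
// typical packer/Animate XML) to the unit-coordinate, bottom-left-origin
// rect that SKTexture(rect:in:) expects.
struct PixelRect { let x, y, width, height: Double }           // as read from XML
struct UnitRect: Equatable { let x, y, width, height: Double } // for SKTexture(rect:in:)

func unitRect(for r: PixelRect, atlasWidth: Double, atlasHeight: Double) -> UnitRect {
    return UnitRect(
        x: r.x / atlasWidth,
        // flip vertically: XML y grows downward from the top, while
        // SpriteKit's texture space grows upward from the bottom
        y: (atlasHeight - r.y - r.height) / atlasHeight,
        width: r.width / atlasWidth,
        height: r.height / atlasHeight
    )
}
```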
I've done some research and I can't seem to find anything that clearly explains how to go about preloading both single textures and textures within animations. I'm currently using atlases in Assets.xcassets to group related animation images. Does having my images in an atlas mean that they are preloaded? As for single images, does it make sense to declare the texture before GameScene like this: let laserImage = SKTexture(imageNamed: "Sprites/laser.jpg") and then (for example) within one of my SKSpriteNode subclasses just pass laserImage through?
I ultimately wanted to know if there was a well defined way of going about this or if I should just store each texture as a constant before GameScene. Any advice on the proper (and most efficient) way of going about this would be great.
I tried to implement appzYourLife's answer, but it increased the time to load every texture. So I ended up putting all the most used sprites in one single atlas and putting it in a singleton.
class Assets {
    static let sharedInstance = Assets()
    let sprites = SKTextureAtlas(named: "Sprites")

    func preloadAssets() {
        sprites.preloadWithCompletionHandler {
            print("Sprites preloaded")
        }
    }
}
I call Assets.sharedInstance.preloadAssets() in menuScene. And:
let bg1Texture = Assets.sharedInstance.sprites.textureNamed("background1")
bg1 = SKSpriteNode(texture: bg1Texture)
to reference a texture already loaded in memory.
One Single Texture Atlas
Put all your assets into a single sprite atlas. If they don't fit, at least try to put all the assets of a single scene into a single sprite atlas.
Preloading
If you want, you can preload the texture atlas into memory with this code:
SKTextureAtlas(named: "YourTextureAtlasName").preloadWithCompletionHandler {
// Now everything you put into the texture atlas has been loaded in memory
}
Automatic caching
You don't need to save a reference to the texture atlas; SpriteKit has an internal caching system. Let it do its job.
Which name should I use to reference my image?
Forget the name of the image file; the name you assign to the image in the Asset Catalog is the only name you will need.
How can I create a sprite from an image into a texture atlas?
let texture = SKTextureAtlas(named:"croc").textureNamed("croc_walk01")
let croc = SKSpriteNode(texture: texture)
I have a kernel function (compute shader) that reads nearby pixels of a pixel from a texture and based on the old nearby-pixel values updates the value of the current pixel (it's not a simple convolution).
I've tried creating a copy of the texture using a BlitCommandEncoder and feeding the kernel function 2 textures: one read-only and another write-only. Unfortunately, this approach is time-consuming on the GPU.
What is the most efficient (GPU- and memory-wise) way of reading old values from a texture while updating its content?
(Bit late but oh well)
There is no way you could make it work with only one texture, because the GPU is a highly parallel processor: the kernel you wrote for a single pixel is called in parallel on all pixels, and you can't tell which one runs first.
So you definitely need 2 textures. The way you probably should do it is by using 2 textures where one is the "old" one and the other the "new" one. Between passes, you switch the role of the textures, now old is new and new is old. Here is some pseudoswift:
var currentText = MTLTexture()   // pseudocode: real textures come from device.makeTexture(descriptor:)
var nextText = MTLTexture()
let semaphore = dispatch_semaphore_create(1)

func update() {
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER) // Wait until the previous update is done
    let commands = commandQueue.commandBuffer()
    let encoder = commands.computeCommandEncoder()
    encoder.setTexture(currentText, atIndex: 0) // read-only: the "old" values
    encoder.setTexture(nextText, atIndex: 1)    // write-only: the "new" values
    encoder.dispatchThreadgroups(...)
    encoder.endEncoding()
    // When the update is done, swap the textures and signal completion
    commands.addCompletedHandler { _ in
        swap(&currentText, &nextText)
        dispatch_semaphore_signal(semaphore)
    }
    commands.commit()
}
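To see why the role swap works, here is the same ping-pong pattern reduced to plain Swift on arrays. The averaging rule is just a stand-in for a real kernel, but the structure (read only from the old buffer, write only into the new one, then swap) is the point:

```swift
// Ping-pong update: each pass reads exclusively from `current` and writes
// exclusively into `next`; with a single buffer, parallel threads would read
// neighbor values that other threads had already overwritten.
func step(current: [Double], into next: inout [Double]) {
    for i in current.indices {
        let left  = i == 0 ? current[i] : current[i - 1]
        let right = i == current.count - 1 ? current[i] : current[i + 1]
        next[i] = (left + current[i] + right) / 3.0   // stand-in neighbor rule
    }
}

var current: [Double] = [0, 0, 3, 0, 0]
var next = current
step(current: current, into: &next)
swap(&current, &next)   // the "new" buffer becomes the "old" one for the next pass
// current is now [0, 1, 1, 1, 0]
```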
I have written plenty of iOS Metal code that samples (or reads) from the same texture it is rendering into. I am using the render pipeline, setting my texture as the render target attachment, and also loading it as a source texture. It works just fine.
To be clear, a more efficient approach is to use the [[color(n)]] attribute in your fragment shader, but that is only suitable if all you need is the value of the current fragment, not any other nearby positions. If you need to read from other positions in the render target, I would just load the render target as a source texture into the fragment shader.
I am making a SpriteKit game and I am using a plist file to set the properties of each level. One of the properties in my plist file is a dictionary called patterns, which contains n items, where each item is a block with hand-typed x and y positions. This model works perfectly well for the kind of game I am making, as it makes setting up levels quick and convenient. However, I am facing one drawback I can't solve myself due to my lack of coding experience: some of the levels have as many as 290 blocks, so when the app tries to read the level, it freezes for about 5 seconds. This is very annoying for the user.
At the beginning my approach was: read the plist file, and for each item call the method which creates the block as an SKSpriteNode using its imageNamed: initializer. The fact that I am trying to load 300 sprites at runtime seemed a promising cause of the problem. Then I tried the following: I made a method which loads a pool of blocks initially, when the game starts for the first time. This is my method for that:
func addObsticles1ToPool() {
    for _ in 0..<300 {
        let element = SKSpriteNode(imageNamed: "obsticle1")
        element.hidden = true
        obsticle1Pool.append(element)
    }
}
Then my code reads the plist file and, for each block, calls the following:
func block(x: CGFloat, y: CGFloat, movingUp: Bool, movingSide: Bool, spin: Bool, type: Int16) {
    var block: SKSpriteNode!
    for obs in obsticle1Pool {
        if obs.hidden {
            block = obs
            break
        }
    }
    block.hidden = false
    // Further down, the properties of the block are set, such as the actions it
    // should perform depending on the input values; its physics body is also set.
}
I also have methods handling the fact that new elements should be added to the pool as the game proceeds, and all of that works just fine. The lag time dropped to around 3.5 to 4 seconds, but that is still not good enough, obviously. I would like a game with no lag. However, I am not sure if there is another, more efficient way to do what I am trying to do than using the sprite pool.
Does anyone know how to reduce this lag time?
I have had the same problem! The issue is in this line...
let element = SKSpriteNode(imageNamed: "obsticle1")
SpriteKit isn't smart enough to know that a texture was already created from that image, so it creates the texture over and over again, and that is expensive.
Instead create a texture outside of the loop first and then create the sprite node with a texture. Something like this...
let elementTexture = SKTexture(imageNamed: "obsticle1")
for _ in 0..<300 {
    let element = SKSpriteNode(texture: elementTexture)
    element.hidden = true
    obsticle1Pool.append(element)
}
Not only will this be a ton faster, it will also decrease your app's memory usage a ton... assuming it was the same issue I was having. Hopefully that helps.
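The effect is easy to demonstrate outside SpriteKit. In this sketch, FakeTexture is a stand-in for an expensive-to-create texture (decoding an image file, uploading to the GPU), and counting creations shows what sharing buys you:

```swift
// Stand-in for an expensive-to-create texture; counting how many times the
// initializer runs makes the cost of each approach visible.
final class FakeTexture {
    static var createdCount = 0
    init(imageNamed name: String) { FakeTexture.createdCount += 1 }
}

struct FakeSprite { let texture: FakeTexture }

// Anti-pattern: the equivalent of SKSpriteNode(imageNamed:) per pool entry
// creates a texture for every single sprite.
FakeTexture.createdCount = 0
_ = (0..<300).map { _ in FakeSprite(texture: FakeTexture(imageNamed: "obsticle1")) }
let perSpriteCount = FakeTexture.createdCount        // 300

// Fix: create the texture once and share it across every sprite in the pool.
FakeTexture.createdCount = 0
let shared = FakeTexture(imageNamed: "obsticle1")
_ = (0..<300).map { _ in FakeSprite(texture: shared) }
let sharedCount = FakeTexture.createdCount           // 1
```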
I'm making a multilevel game based on SpriteKit.
Everything works well except one thing: when the user plays for a long time, changes many levels, etc., SpriteKit starts losing textures.
I mean there is no big red cross like when an image fails to load, just empty space like nothing is there.
Hours of debugging and googling have produced no results.
How can I deal with that bug?
I think I might be having a related issue, except the loss of textures occurs when I am rapidly running actions on a SKSpriteNode. In my code, I run an action each time I get a touch and when the touches are rapid and the animations are firing quickly, the base texture of the SKSpriteNode seemingly disappears. No memory warnings, not a peep from the console; the SKSpriteNode's texture is magically set to nil.
I get the impression from your question that this isn't your exact cause, but you are having the same symptoms. Unfortunately I don't know what is causing it. What I've done to work around the issue has been to constantly check if the texture on my SKSprite node has been set to nil immediately after I run an SKAction and then re-assign it if needed.
So, an abridged version (in Swift) of what I'm doing looks like this :
func doAnimation() {
    _character.runAction(someSKAction, withKey: "animation")

    // Whoops! We lost our base texture again!
    if _character.texture == nil {
        let atlas = SKTextureAtlas(named: "someAtlasName")
        let texture = atlas.textureNamed("idleFrame")
        _character.texture = texture
    }
}
This is not really a solution so much as a workaround, but it might be adaptable to your situation until you (or someone else on SO) figures it out. I won't argue that it's not ugly.
BTW, you are not alone with the disappearing texture issue; I wrote a similar response to a similar question here.