SpriteKit red X - iOS

I need some help or advice with SpriteKit. My app/game passed Apple review, but later I got some complaints about the graphics (a big red X). I am using spriteNodeWithImageNamed: to load pictures. Is there any way to find out whether a picture was loaded? I'm sure the picture exists; the problem is probably related to memory or something else. If I put a wrong picture name in Xcode I see error output and the red X, but how can I catch that error? @try/@catch does not help in this case. Any input or hints are much appreciated.
Xcode 5, iPad Air

Are you trying to load any very large textures? The maximum texture size varies between iDevice models - perhaps that's what you're running into. If you try to load a texture that's too large for a particular device, it might fall back to that X graphic.
As for detecting whether the texture failed to load, I don't know of a way to do that, but you can make sure it loads in advance by using SKTexture's preloadWithCompletionHandler: method.
Also, Apple recommends not loading textures on the fly, especially if you're loading many of them in a short time span, and carefully managing texture memory by discarding SKTextures that you no longer need. Do some preloading, and check that you're not keeping textures around that you don't need, and see if the X icons go away.
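For illustration, a minimal Swift sketch of that preloading approach (the texture name and helper function are made up for the example, not from the original post):

import SpriteKit

func preloadBossTexture(into scene: SKScene) {
    // Hypothetical asset name - substitute your own image.
    let bossTexture = SKTexture(imageNamed: "boss_idle_01")

    // Force the texture into memory before it is first drawn, so the sprite
    // shows up immediately instead of falling back to the placeholder X.
    bossTexture.preload {
        let boss = SKSpriteNode(texture: bossTexture)
        scene.addChild(boss)
    }
}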

Related

Fixing or avoiding memory leak in default third party library

I developed an app that includes the ability to preview the subdivision results of a 3D model on the fly. I have my own Catmull-Clark subdivision functions to permanently modify the geometry, but I use the .subdivisionLevel property of SCNGeometry to temporarily subdivide the model as a preview. In most cases previewing does not automatically mean the user will go for the permanent option.
.subdivisionLevel (just like MDLMesh's subdivision, which I tried as a workaround) uses Pixar's OpenSubdiv to do the actual subdivision and smoothing. It works faster than my own code, but more importantly it doesn't permanently modify the vertex data I provide through an SCNGeometry source.
The problem is, I can't get it to stop leaking memory. I first noticed this a long time ago and figured it was something in my code. I don't think it's specific to one iOS version, and it happens in both Swift and Objective-C. Eventually I set up a small example, adding just one line to the SceneKit game template in Xcode to set the ship's subdivisionLevel to 1. Instruments shows that this immediately results in memory leaks:
I submitted a bug report to Apple a week ago, but I'm not sure I can expect a reply or a fix anytime soon, or at all. The screenshot is from a test with a very small model, but even with small models (hundreds to a couple of thousand vertices) it leaks a lot, and fast, and will eventually crash the app.
To reproduce, create a new project in Xcode based on the SceneKit game template and add the following lines to handleTap(_:):
if result.node.geometry!.subdivisionLevel == 3 {
    result.node.geometry!.subdivisionLevel = 0
} else {
    result.node.geometry!.subdivisionLevel = 3
}
(Remove the ! for Objective-C.)
Tap the ship to leak megabytes; tap it a few more times and it quickly adds up.
OpenSubdiv is used in 3ds Max and other packages as well, obviously, and the problem appears to be in Apple's implementation. So my question is: is there a way to fix/avoid this problem without giving up on the subdivision features of SceneKit entirely, or is a response from Apple my only chance?
Going through the WWDC videos to get an idea of how committed Apple is to OpenSubdiv (and thus the chance of them fixing the leaks), I found that since the latest SceneKit update the subdivision can be performed on the GPU by Metal.
Here are the required two lines (Swift) if you want to use subdivision in SceneKit or Model IO:
let tess = SCNGeometryTessellator()
geometry.tessellator = tess
(from WWDC 2017's "What's New in SceneKit", 23:45 into the video)
This will cause the subdivision to be performed on the GPU (and thus faster, especially at higher levels), use less memory, and, most importantly, release the memory when you set the subdivision level lower or back to zero.
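Putting it together, a rough Swift sketch (assuming the ship geometry from the SceneKit game template; the helper name is mine):

import SceneKit

func enableGPUSubdivision(for geometry: SCNGeometry) {
    // Attach a tessellator so the subdivision/smoothing runs on the GPU via Metal.
    geometry.tessellator = SCNGeometryTessellator()
    // With the tessellator in place, raising the level previews the smoothed
    // mesh, and setting it back to 0 releases the memory again.
    geometry.subdivisionLevel = 3
}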

Sprite Animation file sizes in SpriteKit

I looked into inverse kinematics as a way of doing animation, but overall thought I might want to proceed with sprite texture atlases to create the animations instead. The only thing is I'm concerned about size.
I wanted to ask for some help in the "overall global solution":
I will have 100 monsters. Each has 25 frames of animation for an attack, idle, and spawning animation. Thus 75 frames in total per monster.
I'd imagine I want 3x, 2x and 1x animations, so that means even more frames (75 x 3 images per monster), unless I use PDF vectors, in which case it's just one size.
Is this approach just too much in terms of size? 25 frames of animation alone was 4MB on disk, but I'm not sure what happens in terms of compression when you load that into Xcode and a texture atlas.
Does anyone know whether the approach I'm embarking on will take up a lot of space and potentially be a poor decision long term if I want even more monsters? Right now I only have a few monsters and some other images, and the app already shows ~150MB of storage on the phone, so it's hard to tell what would happen with many more monsters, but I suspect it would be prohibitively large (rough math: 25 frames ≈ 4MB, so 75 frames ≈ 12MB per monster per scale; times 3 scales and 100 monsters is well over 3GB).
To me, this sounds like the wrong approach, and yet everywhere I read, people encourage using sprites and atlases. What am I doing wrong? Too many frames of animation? Too many monsters?
Thanks!
So, you are correct that you will run into a problem. In general, the tutorials you find online simply ignore the issue of download size and memory use on device. When building a real game you need to consider the total download size and the amount of memory on the actual device when rendering multiple animations on screen at the same time. There are three approaches: store everything as PNGs, use an animation format that compresses better than PNG, or encode things as H264. Each of these approaches has issues. If you would like to take a look at my solution to the runtime memory use issue, have a peek at the SpriteKitFireAnimation link in this question. If you want to roll your own approach with H264, you can get lots of compression, but you will have issues with alpha channel support. The lazy thing to do is use PNGs; it will work and supports the alpha channel, but PNGs will bloat your app and runtime memory use is heavy.
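If you do end up on the plain PNG/atlas route, the playback side in SpriteKit is straightforward; here's a minimal Swift sketch (the atlas and frame names are hypothetical):

import SpriteKit

func runAttackAnimation(on monster: SKSpriteNode) {
    // Frames live in a texture atlas so SpriteKit can pack and batch them.
    let atlas = SKTextureAtlas(named: "Monster01")
    let frames = (1...25).map { atlas.textureNamed("attack_\($0)") }

    // 25 frames at 25 fps gives a one-second attack loop.
    let animate = SKAction.animate(with: frames, timePerFrame: 1.0 / 25.0)
    monster.run(SKAction.repeatForever(animate))
}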

Graphics on iOS device are all distorted (Unity)

Right now I'm developing an application for iOS with Unity3D, and I have no idea what's happening to my graphics. All the textures are absolutely distorted and I've been struggling with this for a long time with no success.
I believe Unity compresses textures when converting the project to iOS, but I really can't figure out how to change this setting or turn it off. I'd appreciate any help from you guys. Here's the screenshot of my texture settings; I set it up according to a Ray Wenderlich lesson.
P.S.: I've tried using "Point" for Filter Mode, but that made the texture even worse.
I would suggest disabling mipmaps to see if that solves your problem, and making sure you have the best quality settings in Edit > Player Settings > Quality.
As an aside, how large is your button image? You're overriding the max size to be 2048 for what seems to be a button. That sounds a little excessive.
By "distorted" you mean "blurred"? Try to uncheck "Generate Mip Maps".

How to prevent pixel bleeding when rendering a sprite-sheet generated with Zwoptex on older iOS devices?

I packed several individual sprites into a big 2048*2048 sprite-sheet generated with Zwoptex, and I scale it down to match each iOS device, e.g. 2048*2048 for iPad HD, 512*512 for iPhone, etc.
I found out that the "Spacing Pixel" option in Zwoptex affects how the sprites render on device. That value is the space (in pixels) between the individual sprites packed into the sprite-sheet. If I set that value too low, there's a greater chance that pixel bleeding will occur, on newer/better devices as well as older ones. If I increase the value, the chance drops, and with a value that is high enough, pixel bleeding (hopefully) won't happen at all.
Anyway, I set the value to around 17-20, which is really high and consumes valuable space on the sprite-sheet. Even so, there's still a problem on the iPhone simulator.
We can only restrict some devices from installing the game based on iOS version, but an iPhone 3GS can still update to the newest iOS version, so I need to solve this properly.
So I want to know how to prevent the pixel bleeding problem across all iOS devices, from iPhone to iPad (retina included).
It would be great to know any best practice or practical guidance for choosing the "Spacing Pixel" value between sprites so that the rendering artifacts go away.
If only the Simulator shows those artifacts, then by all means ignore them! None of your users will ever run your app in the Simulator, will they? The Simulator isn't perfect.
A spacing of 2 pixels around each texture atlas sprite frame is enough (and generally recommended) to kill all artifacts caused by pixel bleeding. If you still see artifacts, they're not a direct cause from too little spacing. They can't be.
I'm not sure about Zwoptex - do you actually have to manually create each scaled-down version of the texture atlas? You may be doing something wrong there. Try TexturePacker; I wouldn't be surprised if the artifacts went away just like that.
For example, one type of artifact is caused by not placing objects at integer positions. You may see a gap (usually a black line) between two objects if their position is something like (1.23545, 10.0) and (41.23545, 10.0). Using integer coordinates (1,10) and (41,10) would fix the issues. The difficulty is that this goes all the way up the hierarchy, if these object's parent node is also on a non-integer position you can still experience this line gap artifact.
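A small Swift sketch of that fix (the same idea applies whether the node is a SpriteKit SKNode or a cocos2d CCNode; the helper name is made up):

import CoreGraphics

// Snap a position to whole-pixel coordinates so sprite edges land exactly on
// texel boundaries and neighbouring atlas pixels don't bleed in.
func pixelAligned(_ p: CGPoint) -> CGPoint {
    return CGPoint(x: p.x.rounded(), y: p.y.rounded())
}

// Usage: node.position = pixelAligned(node.position)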
If you search around you'll find numerous cause and effect discussions for cocos2d artifacts. One thing to keep in mind: do not use the CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL macro. It's not a fix, it doesn't even come close. It kinda fixes the non-integer position artifact and introduces another (much worse IMHO): aliasing/flickering during movement.

Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames, as seen in the picture below. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap creation worker (in a separate process, or perhaps in the GPU) and returning.
Is this possible, or am I barking up the wrong tree?
Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply, you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context. If already using a shared context, there are still some synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose would be to switch to GL_LINEAR_MIPMAP_LINEAR. If it's genuinely a problem with lower resolution mip maps not arriving until later then you'll see the troublesome areas on the ceiling blend into one another rather than the current black-or-correct effect.
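That diagnostic is a one-line change wherever the texture's filtering is set up (a Swift sketch of the GL call; where you place it depends on your own texture loading code):

import OpenGLES

// Trilinear filtering: if lower mip levels really are arriving late, the
// troublesome areas will blend rather than show up as hard black regions.
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR_MIPMAP_LINEAR)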
A second guess, based on the output, would be some sort of depth buffer clearing issue.
I followed @Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. Now the black-or-correct effect has changed to a fade between correct and black.
I guess that although we all know that OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (By the way, my draw loop runs in the foreground and my texture loading loop - because it is so time consuming and would impair interactivity - runs in the background. Also, since this may vary between platforms: I'm using iOS 5 on an iPad 2.)
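For what it's worth, a rough Swift sketch of that fix; the upload helper is hypothetical, and the key point is the glFinish() at the end of the background loading work:

import OpenGLES

func uploadMipmappedTexture(_ pixels: UnsafeRawPointer, width: GLsizei, height: GLsizei) -> GLuint {
    var tex: GLuint = 0
    glGenTextures(1, &tex)
    glBindTexture(GLenum(GL_TEXTURE_2D), tex)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    glGenerateMipmap(GLenum(GL_TEXTURE_2D))
    // Block the background loading thread until the GPU has finished the upload
    // and mipmap generation, so the draw loop never samples a half-built texture.
    glFinish()
    return tex
}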
