As far as I know, since Cocos2D 2.0 a 1025*1025 texture does NOT use 4 times more memory than a 1024*1024 texture, just proportionally more.
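To put rough numbers on that (assuming uncompressed 32-bit RGBA and no mipmaps): a 1024*1024 texture takes 1024 × 1024 × 4 bytes = 4 MB, a 1025*1025 texture takes only marginally more (just over 4 MB), whereas padding it up to the next power of two (2048*2048) would cost 16 MB - roughly 4 times as much.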
If I put my textures into an atlas, there is some unused space almost all the time. This is wasted. (Not to mention the iOS 5 POT texture memory bug, which makes POT texture atlases waste 33% more memory.) But if I just use my textures the way they are, then no memory is wasted. The only advantage of texture atlases, in my opinion, is the ability to use a SpriteBatchNode.
But my app is heavily memory-limited, and I only support devices which support NPOT textures. I know that NPOT texture handling is a bit slower, but saving memory is the most important thing for me.
I might be wrong; please confirm this, or show me why I am wrong. Thank you! :)
You should design for the worst case. Assume the bug always exists, and design your app's memory usage accordingly. There's no telling whether the bug will go away, reappear, or whether an even worse bug will be introduced with a newer iOS version.
Riding on the brink of full memory usage is not a good idea; you always have to leave some headroom to allow for the occasional oddity. A new iOS version might introduce another bug or take more memory, the user might have apps running in the background that use up more memory, there may be a tiny memory leak adding up over time, etc.
Also, CCSpriteBatchNode can be used with any texture, not just texture atlases.
OK, so I'd like to place a large number of SKSpriteNodes on screen. In the game I'm working on, and even in the sample twirling-spaceship game, the CPU usage runs high in the Simulator. I'm not sure how to test my game on an actual device (unless I submit to Apple), but I'd like to know whether having something like 50-100 nodes on screen would use too much CPU time.
I've tested putting out large numbers of SKSpriteNodes and the CPU usage reads 90% or more. Is this normal? Will I get laughed at if I hand Apple this game, given the extremely high (and growing) amount of CPU usage?
Lastly, is there a way to avoid lag at different points in the game? Arching? Preloading textures? I don't know, something like that.
Performance results seen in the Simulator are not relevant at all. If you are interested in real results, you should test on actual devices.
From the docs:
Rendering performance of OpenGL ES in Simulator has no relation to the performance of OpenGL ES on an actual device. Simulator provides an optimized software rasterizer that takes advantage of the vector-processing capabilities of your Macintosh computer. As a result, your OpenGL ES code may run faster or slower in iOS simulator (depending on your computer and what you are drawing) than on an actual device. Always profile and optimize your drawing code on a real device, and never assume that Simulator reflects real-world performance.
On the other hand, SpriteKit is capable of rendering hundreds of sprites at 60 fps if you use texture atlases, so that many nodes are drawn in a single draw pass.
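As a minimal Swift sketch of that idea (the atlas name "Sprites", the texture name "ship" and the scene size are placeholders, not real assets):

    import SpriteKit

    // Placeholder scene; in a real game this would be your existing SKScene.
    let scene = SKScene(size: CGSize(width: 320, height: 480))

    // All sprites share textures from one atlas, so SpriteKit can batch
    // them into far fewer draw passes than 100 individually loaded textures.
    let atlas = SKTextureAtlas(named: "Sprites")
    let shipTexture = atlas.textureNamed("ship")

    for _ in 0..<100 {
        let ship = SKSpriteNode(texture: shipTexture)
        ship.position = CGPoint(x: CGFloat.random(in: 0...320),
                                y: CGFloat.random(in: 0...480))
        scene.addChild(ship)
    }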
For preloading textures into memory, check the "Preload Texture Atlas Data" section of the documentation and the + preloadTextureAtlases:withCompletionHandler: method.
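A minimal Swift sketch of preloading (the atlas name "Level1" is a placeholder):

    import SpriteKit

    // Load the atlas data into memory before presenting the scene that
    // needs it, so the first frame doesn't stall on texture uploads.
    let atlas = SKTextureAtlas(named: "Level1")
    SKTextureAtlas.preloadTextureAtlases([atlas]) {
        // The texture data is now in memory; present the scene from here
        // (hop back to the main queue before touching UI).
        print("Atlas preloaded")
    }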
Hope this helps.
I have a model with lots of high-quality textures and I try hard to keep the overall memory usage down. One of the things I tried is to remove the mipmaps after they got pushed to the GPU, in order to release the texture data from common RAM. When doing so, the model is still rendered with the previously uploaded mipmapped texture. So that's fine, but the memory doesn't drop at all.
material.mipmaps.length = 0;
So my question is:
Is there a reference to the mipmaps kept by Three.js, so that the garbage collector can't release the memory? Or is the texture referenced by WebGL itself? That seems kind of strange, as WebGL makes me think textures are always used in dedicated memory and must therefore be copied. If WebGL keeps a reference to the original texture in RAM, would it behave differently on a desktop with a dedicated graphics card than on a laptop with an onboard graphics card sharing common RAM?
I would be really glad if someone could explain to me what's going on inside Three.js/WebGL with regard to texture references.
That's a good question.
Let's go down there...
So normally you'd dispose() a texture when you want it to be kicked out of the VRAM.
Tracing what that does might bring us to an answer. So what does dispose do?
https://github.com/mrdoob/three.js/blob/2d59713328c421c3edfc3feda1b116af13140b94/src/textures/Texture.js#L103-L107
Alright, so it dispatches an event. Alright. Where's that handled?
https://github.com/mrdoob/three.js/blob/2d59713328c421c3edfc3feda1b116af13140b94/src/renderers/WebGLRenderer.js#L654-L665
Aha, so finally:
https://github.com/mrdoob/three.js/blob/2d59713328c421c3edfc3feda1b116af13140b94/src/renderers/WebGLRenderer.js#L834-L837
And that suggests that we're leaving THREE.js and entering the world of raw WebGL.
Digging a bit into the WebGL spec (sections 3.7.1 / 3.7.2) and a couple of tutorials on raw WebGL shows that WebGL keeps a reference in memory, but that reference isn't a public property of the THREE.js texture.
Now, why that goes into RAM and not the VRAM I don't know... did you test that on a machine with dedicated or shared GPU RAM?
On one hand, I can understand the mipmapping argument (being able to halve the size recursively).
Is there anything else?
I'm just investigating whether, if I have no intention of resizing my textured sprites at runtime, I can ignore this power-of-two consideration, or whether I should still care about it for performance or other reasons.
It used to be that GPUs required such texture sizes. Then they improved and you could use other sizes, but at a performance penalty. Nowadays those restrictions have been relaxed to the point where you can forget all about it.
Also see https://gamedev.stackexchange.com/questions/26187/why-are-textures-always-square-powers-of-two-what-if-they-arent
I'm developing a 2D game on iOS, but I'm finding it difficult to get drawing running fast (60 FPS on a Retina display).
I first used UIKit for drawing, which is of course not suitable for a game. I couldn't draw a couple of sprites without slowdown.
Then I moved on to OpenGL, because I read it's the closest I can get to the GPU (which I take to mean it's the fastest option). I was using glDrawArrays(). When I ran it in the Simulator, the FPS dropped once I got past about 200 triangles. People said it was because the Simulator and the computer are not optimized to run iOS OpenGL. Then I tested it on a real device and, to my surprise, the performance difference was really small. It still couldn't run that few triangles smoothly - and I know other games on iOS use a lot more polygons, shaders, 3D graphics, etc.
When I ran it through Instruments to check OpenGL performance, it told me I could speed things up by using VBOs. So I rewrote my code to use a VBO, updating all vertices each frame. Performance increased very little, and I still can't surpass 200 triangles at a consistent 60 FPS. And that is 2D drawing alone, without context changes/transformations. I also haven't written the game yet - there are no objects performing CPU-intensive tasks.
Everyone I ask says OpenGL is top performance. What could I possibly be doing wrong? I am assuming OpenGL can handle LOTS of polygons that are updated each frame - is that right? What method do other games use that I see running fine, like Infinity Blade (which is 3D) or even Angry Birds (which has lots of constantly updating sprites)? What is recommended when making a game?
OpenGL is definitely going to be your fastest option. Even on the oldest iOS devices you can run about 20,000 polygons at 30+ fps.
Sounds like you must be doing something wrong or extra. It is impossible to guess what that might be without seeing your source code.
Generally speaking though, you want to make sure you create your VBO and do all your loading outside of your drawing loop.
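As a rough sketch of that idea (Swift calling the OpenGL ES C API; shader, attribute and context setup are omitted, and the vertex data is a placeholder):

    import OpenGLES

    var vbo: GLuint = 0
    var vertices: [GLfloat] = [
         0.0,  0.5,   // placeholder triangle, x/y pairs
        -0.5, -0.5,
         0.5, -0.5
    ]

    // Setup, done once -- NOT every frame.
    glGenBuffers(1, &vbo)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vbo)
    glBufferData(GLenum(GL_ARRAY_BUFFER),
                 vertices.count * MemoryLayout<GLfloat>.size,
                 vertices,
                 GLenum(GL_DYNAMIC_DRAW))  // DYNAMIC because the data changes per frame

    // Per frame: reuse the same buffer, only update its contents, then draw.
    // No buffer creation or asset loading happens here.
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vbo)
    glBufferSubData(GLenum(GL_ARRAY_BUFFER), 0,
                    vertices.count * MemoryLayout<GLfloat>.size,
                    vertices)
    glDrawArrays(GLenum(GL_TRIANGLES), 0, 3)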
I am learning OpenGL and recently discovered glGenTextures.
Although several sites explain what it does, I still wonder how it behaves in terms of speed and, particularly, memory.
Exactly what should I consider when calling glGenTextures? Should I consider unloading and reloading textures for better speed? How many textures should a standard game need? What workarounds are there to get around any limitations memory and speed may bring?
According to the manual, glGenTextures only allocates texture "names" (i.e. IDs) with no "dimensionality". So you are not actually allocating texture memory as such, and the overhead here is negligible compared to actual texture memory allocation.
glTexImage will actually control the amount of texture memory used per texture. Your application's best usage of texture memory will depend on many factors, including the maximum working set of textures used per frame, the available dedicated texture memory of the hardware, and the bandwidth of texture memory.
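To make the distinction concrete, here is a small sketch (Swift calling the GL ES C API; context setup is omitted and the pixel buffer is placeholder data):

    import OpenGLES

    var textureID: GLuint = 0

    // glGenTextures only reserves an ID ("name") -- essentially free.
    glGenTextures(1, &textureID)
    glBindTexture(GLenum(GL_TEXTURE_2D), textureID)

    // glTexImage2D is what actually allocates storage: here
    // 1024 * 1024 * 4 bytes of RGBA8888 (about 4 MB), plus roughly a
    // third more if mipmaps are generated later.
    let pixels = [UInt8](repeating: 0, count: 1024 * 1024 * 4)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                 1024, 1024, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)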
As for your question about a typical game - what sort of game are you creating? Console games are starting to fill Blu-ray disc capacity (I've worked on a PS3 title that was initially not projected to fit on a Blu-ray). A large portion of this space is textures. On the other hand, downloadable web games are much more constrained.
Essentially, you need to work from a reasonable game design and come up with an estimate of:
1. The total number of textures used by your game.
2. The maximum number of textures used at any one time.
Then you need to look at your target hardware and decide how to make it all fit.
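As a purely illustrative estimate (assuming uncompressed RGBA8888): a working set of 50 textures at 512*512 is 50 × 512 × 512 × 4 bytes = 50 MB, plus roughly a third more if they are mipmapped - comfortable on a desktop or console, but a lot for an older mobile GPU.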
Here's a link to an old Game Developer article that should get you started:
http://number-none.com/blow/papers/implementing_a_texture_caching_system.pdf