Hi, I am using PIXI and I am destroying textures whenever necessary. In the Chrome task manager I can see that the image cache and other resources are consuming very little memory. My only problem is that as new screens and textures get loaded, the GPU memory keeps increasing (by about 5-10 MB per activity). I am using texture.destroy(true) to destroy resources, and I am destroying base textures as well. I am also logging PIXI.utils.TextureCache and PIXI.utils.BaseTextureCache, and the number of objects in each is the bare minimum my app requires.
I am also making use of PIXI Animated Sprite. If it consumes any extra resources which I should be worried about, please let me know.
I am not sure where the WebGL memory is increasing. I am using WebGL because I make extensive use of filters, which is not possible with the canvas renderer. Can anyone help with how to debug the memory usage and delete unnecessary textures from WebGL? I am on a tight timeline, so any help is very much appreciated.
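One cheap way to find leaked textures is to snapshot the cache keys before entering a screen and diff them after leaving it. The helper names below (`snapshotCacheKeys`, `diffCacheKeys`) are hypothetical, not part of PIXI; in a real app you would pass `PIXI.utils.TextureCache` or `PIXI.utils.BaseTextureCache` as the `cache` argument, but the helpers work on any plain key/value object:

```javascript
// Hypothetical leak-audit helpers: snapshot a cache's keys before a screen
// loads, then diff after the screen is torn down. Any key still present that
// wasn't in the snapshot is a candidate for a texture that was never destroyed.
function snapshotCacheKeys(cache) {
  return new Set(Object.keys(cache));
}

function diffCacheKeys(before, cache) {
  return Object.keys(cache).filter((key) => !before.has(key));
}

// Usage sketch with a plain object standing in for the texture cache:
const cache = { bg: 1, hero: 1 };
const before = snapshotCacheKeys(cache);
cache.enemy = 1; // a texture loaded by the new screen
console.log(diffCacheKeys(before, cache)); // → ["enemy"]
```

Any key reported by the diff after a screen is destroyed is one to chase down with texture.destroy(true).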
I am using an MTLBuffer in Metal that I created by allocating several memory pages (using vm_allocate) with
device.makeBuffer(bytesNoCopy:length:options:deallocator:).
I write the buffer with CPU and the GPU only reads it. I know that generally I need to synchronise between CPU and GPU.
However, I know more about where in the MTLBuffer the writes (by the CPU) and reads (by the GPU) happen, and in my case the writes go to different memory pages than the reads (in a given time interval).
My question: do I need to sync between CPU and GPU even if the relevant data being written and read are on different memory pages (but in the same MTLBuffer)? Intuitively I would think not, but MTLBuffer is a bit opaque and I don't really know what kind of processing the GPU actually does with it, or what requirements it has.
Additional info: This is a question for iOS and MTLStorageMode is shared.
Thank you very much for help!
Assuming the buffer was created with MTLStorageModeManaged, you can use the function didModifyRange to sync CPU to GPU for only a portion (a page for example) of the buffer.
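A sketch of what that looks like, assuming a macOS buffer created with `.storageModeManaged`; `device`, `src`, `pageSize`, `pageIndex`, and `pageCount` are placeholders for your own values:

```swift
import Metal

// Sketch: flush only the page the CPU just wrote, not the whole buffer.
let buffer = device.makeBuffer(length: pageSize * pageCount,
                               options: .storageModeManaged)!
buffer.contents()
    .advanced(by: pageIndex * pageSize)
    .copyMemory(from: src, byteCount: pageSize)
// Tell Metal which byte range the CPU modified so only that range is synced:
buffer.didModifyRange((pageIndex * pageSize)..<((pageIndex + 1) * pageSize))
```

Note that didModifyRange exists only for managed storage, which is a macOS concept. With .storageModeShared on iOS (as in the question) there is no separate GPU copy to flush, but you still must make sure the GPU is not reading a region while the CPU is writing it, typically by waiting on command buffer completion or double-buffering.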
I have 8GB of VRAM (GPU) & 16GB of normal RAM. When allocating (creating) many large textures, say 4096x4096, I eventually run out of VRAM; however, from what I can see it then creates them in RAM instead. Whenever you need to render (with or to) one of them, it seems to transfer the render context from RAM to VRAM in order to do so. When normally accessing many render contexts over and over every frame (60 fps etc.), the PC lags out as it tries to transfer very high amounts back and forth. However, as long as the number of not-recently-used render contexts (i.e. still in RAM, not VRAM) referenced each second stays low, there should not be an issue performance-wise. The question is whether this information is correct?
DirectX will allocate DEFAULT pool resources from video RAM and/or the PCIe aperture RAM, both of which can be accessed by the GPU directly. Often render targets must be in video RAM, and video RAM is generally faster, although it depends greatly on the exact architecture of the graphics card.
What you are describing is the 'over-commit' scenario where you have allocated more resources than actually fit in the GPU-accessible resources. In this case, DirectX 11 makes a 'best-effort' which generally involves changing virtual memory mapping to get the scene to render, but the performance is obviously quite poor compared to the more normal situation.
DirectX 12 leaves dealing with 'over-commit' up to the application, much like everything else about DirectX 12, where "runtime magic behavior" has generally been removed. See the docs for details on this behavior, as well as this sample.
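In D3D12 terms, that application-side management is done with the residency API. A sketch, assuming `device` is an `ID3D12Device*` and `heap` is an `ID3D12Heap*` holding resources the scene can temporarily live without:

```cpp
// Manual residency management in D3D12 (sketch only).
ID3D12Pageable* pageables[] = { heap };
device->Evict(1, pageables);        // let the OS page the heap out of VRAM
// ... later, before any command list touches resources in that heap:
device->MakeResident(1, pageables); // ensure the heap is GPU-accessible again
```

The application decides what to evict and when, which is exactly the policy decision DirectX 11 used to make for you.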
By pressing F12 and then Esc in Chrome, you can see a few options to tick. One of them is 'Show FPS meter', which lets you see GPU memory usage in real time.
I have a few questions regarding this GPU memory usage:
This GPU memory means the memory the webpage needs to store its code: variables, methods, images, cached videos, etc. Is this right?
Is there a reason why it has an upper bound of 512 MB? Is there a way to reduce or increase it?
How much GPU memory usage is enough to see considerable slowdown on browser navigation?
If I have an array with millions of elements (just hypothetically), and I splice all the elements in the array, will it free the memory that was in use? Or will it not "really" free the memory, requiring an additional step to actually wipe it out?
1. What is stored in GPU memory
Although there are no hard-set rules on the type of data that can be stored in GPU memory, the bulk of it generally contains single-frame resources like textures, multi-frame resources like vertex buffer and index buffer data, and compiled programmable-shader code fragments. So while in theory it is possible to store videos in GPU memory, as well as all kinds of other bulk data, in practice only a handful of frames of any streamed video will ever be in GPU RAM.
The main reason for this soft-selection of texture-like data sets is that a GPU is a parallel hardware architecture, and it expects the data to be compatible with that philosophy, which means that there are no inter-dependencies between sets of data (i.e. pixels). Decoding images from a video stream is more or less the same as resolving interdependence between data-blocks.
2. Is 512MB enough for everyone?
No. It's probably based on your hardware.
3. When does GPU memory become slow?
You have to know that some parts of the GPU memory are so fast you can't even start to appreciate the speed. There is nothing wrong with the speed of a GPU card. What matters is the time it takes to get the data IN that memory in the first place. That is called bandwidth, and the operations usually need to be synchronized. In that case, the driver will lock the Northbridge bus so that data can flow from main memory into GPU memory, and this locking + transfer takes quite some time.
So to answer the question, once it is uploaded, the GUI will remain fast, no matter how much more memory is used on the GPU card. The only thing that can slow it down, are changes to the GUI, and other GPU processes taking time to complete that may interfere with rendering operations.
4. Splicing ram memory frees it up?
I'm not quite sure what you mean by splicing. GPU memory is freed by applications that release it via the appropriate API calls. If you want to render your GPU memory blank, you'd have to grab the GPU handles of the resources first, upload 'clear' data into them, and then release the handles again; but (for normal single-threaded GPU applications) you can only do that in your own process context.
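If "splicing" refers to JavaScript's Array.prototype.splice on the CPU side, then removing all the elements drops the array's references to them; the engine's garbage collector reclaims the memory later, on its own schedule, and there is no extra "wipe" step you can (or need to) perform:

```javascript
const big = new Array(1_000_000).fill(0);

// splice(0) removes every element and returns them; after this the original
// array holds no references, so its backing store is eligible for GC.
const removed = big.splice(0);
console.log(big.length);     // → 0
console.log(removed.length); // → 1000000
removed.length = 0;          // drop the returned copy too, or it still pins the memory
```

None of this touches GPU memory, though: GPU-side resources such as WebGL textures and buffers are freed with explicit calls like gl.deleteTexture, not by the garbage collector.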
Is it good practice to call the following "purge" methods at the beginning of each Scene?
And if not, when should I call them? Is there any tutorial explaining when to use each call? Am I missing something?
[CCTextureCache purgeSharedTextureCache];
[CCSpriteFrameCache purgeSharedSpriteFrameCache];
[CCAnimationCache purgeSharedAnimationCache];
[CCShaderCache purgeSharedShaderCache];
[[CCFileUtils sharedFileUtils] purgeCachedEntries];
(I am using Cocos2d 2.0 and ARC is enabled, don't think it is relevant but still thought that was worth mentioning)
IMO it's bad practice to purge cocos2d's caches at all! Purging caches is as useful a tool as a hammer is in repairing electronic devices.
It's counterproductive to purge everything only to have the characters reload their purged animations and textures in the next frame because they frickin' need them! If you purge caches unconditionally, you haven't done your asset memory-management job (well).
There can still be uses to purge everything but they should be few and far between (like shutting down cocos2d altogether in a UIKit app) - because why would you ever want to remove all of the cached resources? You remove what you've added, and if that isn't enough, you've done as much as you could.
As an app developer you should be aware of your app's total memory usage and what assets are in use and which aren't. Including the resources handled by cocos2d. Release only those assets you know aren't in use currently and will also not be needed in the near future. It's really that simple, yet it's more work than simply purging caches - which seems to do the job but really is just a terrible thing to do.
One problem with purging caches specifically during memory warnings is that memory warnings may occur when you're currently preloading assets. Now when you purge the caches while you're loading assets, you're shooting yourself in the foot because the already preloaded assets will be removed and then need to be loaded again as soon as they're needed. At worst this can actually cause an unrecoverable memory warning if the reload happens instantly due to the additional memory needed to load assets in the first place (ie textures use 2x memory while loading for a short period of time!).
In most cases purging caches will only delay the memory warning related termination, while adding lag to the game in the meantime. In the remaining cases it will simply make for a bad experience as the game stutters to recover, possibly over a longer period of time.
Cocos2D purges the caches during memory warnings only as a last resort measure, and mainly for developers who won't concern themselves with such nonsense as memory usage. It's a solution that works for taking the first few app development steps, perhaps even for a developer's first app, but is really pretty much useless for any serious/ambitious efforts.
As an ambitious app developer with an eye on quality you should react to memory warnings in a more graceful manner. Meaning you first and foremost work hard to minimize memory warnings altogether (here are some tips) and when they do occur, you need to make sure of two things:
Your app may be terminated very soon - be sure to save the app's state.
You release any memory you are absolutely sure is not needed at this point in time.
Regarding #2: If you're programming close to the edge, you may want to fall back to some sort of "memory safety" mode where you tune down your app's memory usage altogether, for example by displaying fewer sprites, particles, combining the use of two textures, removing certain textures at the expense of additional loading times, etc.
If you can't free up enough memory with step #2 then #1 will happen almost inevitably, regardless of whether you purge cocos2d's caches or not. Purging the "unused" textures of CCTextureCache can help, but since sprite frames retain the texture it usually doesn't do much (or nothing) if you're using texture atlases without first releasing the corresponding sprite frames. Keep that in mind.
The process for handling a memory warning thus is:
Know what assets are in use, and which are merely cached to reduce loading times.
Remove the "not currently needed" sprite frames and then call CCTextureCache's removeUnusedTextures. This has the greatest chance of releasing most of the memory your app is using while not really needing it anymore.
Remove any other extraneous memory you may have allocated in your code but isn't currently in use - or fall back to a "memory safe" mode (however you implement that).
Hope for the best.
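In Cocos2d 2.x terms, the middle steps might look like the sketch below; saveGameState and releaseNonEssentialBuffers are hypothetical methods of your own, while removeUnusedSpriteFrames and removeUnusedTextures are the actual cache calls:

```objc
- (void)didReceiveMemoryWarning
{
    // The app may be terminated soon, so persist state first (hypothetical).
    [self saveGameState];

    // Drop sprite frames you know aren't needed right now, then let
    // CCTextureCache release the textures those frames were retaining.
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeUnusedSpriteFrames];
    [[CCTextureCache sharedTextureCache] removeUnusedTextures];

    // Release any other expendable memory your own code holds (hypothetical).
    [self releaseNonEssentialBuffers];
}
```

The ordering matters: removeUnusedTextures frees nothing for an atlas whose sprite frames still retain it, which is why the sprite frames go first.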
Do not purge cocos2d's caches unconditionally! It's not helping, it will probably only make things worse.
Considering all of the cocos2d caches, 99% of the memory will be retained by CCTextureCache. So it's pretty much pointless to purge any of the other caches anyway, just ignore them.
You really only need to look at which texture atlases you're currently using; for those you don't, remove the sprite frames and their textures.
If you're using .pvr.ccz textures to begin with, you can even ignore "caching to reduce load times" altogether and remove every texture atlas from memory whenever you stop using it, because .pvr.ccz files load so fast it makes barely any difference as far as switching scenes is concerned. That also helps avoid memory warnings in the first place.
Is it possible to store the display list data on the video card memory?
I want to use only video memory, like a Vertex Buffer Object (VBO), to store the display list.
But when I try it, it always uses main memory instead of video memory.
I tested on an NVIDIA GeForce 8600 GTS and a GTX 260.
Display lists are a very old feature, dating back to OpenGL 1.0. They were deprecated a long time ago. Anyhow, you can still use them for compatibility reasons.
The way OpenGL works prevents display lists from being held in GPU memory only. The graphics server (as OpenGL calls it) is a purely abstract thing, and the specification guarantees that what you put in a display list is always available. However, modern GPUs have only a limited amount of memory, so payload data may be swapped in and out as needed.
Effectively, GPU memory is a cache for data in system RAM (the same way system RAM should be treated as a cache for storage).
Even more so, modern GPUs may crash, and the drivers will perform a full reset, giving the user the impression that everything works normally. But after the reset, all the data in GPU memory must be reinitialized.
So it is necessary for OpenGL to keep copies of all payload data in system memory to support smooth operation.
Hence it is perfectly normal for your data to show up as consuming system RAM as well. It is though very likely, that the display lists are also cached in GPU memory.
Display lists are deprecated. You can use a VBO with vertex indices to use graphics memory, and draw it with glDrawElements.
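A minimal sketch of that replacement, assuming a legacy (fixed-function) GL context with the GL 1.5+ buffer-object entry points available, and that `verts`, `indices`, and `indexCount` are defined elsewhere; error handling omitted:

```c
/* Upload vertex and index data into buffer objects once... */
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

/* ...then draw from them each frame; the driver is free to keep the
   buffers resident in video memory. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)0);
```

Unlike a display list, the spec does not require the driver to keep a system-RAM copy of a buffer object's contents on your behalf, which is what makes VRAM-only residency possible.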