Are the video memory and the GPU memory the same?

In Chrome's task manager, there is a column called GPU memory. In GPU-Z, I can see the memory size of the video card, which I take to be the video memory. Is it the same as GPU memory?

Yes, that is the same as the GPU memory.
The only exception is that some lower-end computers use a technique called shared graphics memory, in which an integrated graphics card uses part of the system RAM as video memory. Since your graphics card is not integrated, this does not apply in your case.

Related

How to understand iOS memory physical footprint

According to this WWDC iOS Memory Deep Dive talk https://developer.apple.com/videos/play/wwdc2018/416, memory footprint equals the dirty and swapped sizes combined. However, when I use vmmap -summary [memgraph] to inspect my memgraphs, many times they don't add up. In this particular case, the memory footprint is 118.5M while the dirty size is 123.3M. How can the footprint be smaller than the dirty size?
In the same WWDC talk, it's mentioned that heap --sortBySize [memgraph] can be used to inspect heap allocations, and I see from my memgraph that the heap size is about 55M (All zones: 110206 nodes (55357344 bytes)), which is much smaller than the MALLOC regions in the vmmap result. Doesn't malloc allocate space in the heap?

How to allocate OpenCV Mat/Image on CUDA pinned memory?

So I'm using OpenCV cv::Mat to read and write files. But since these allocate ordinary pageable memory, transferring data to the GPU is slow.
Is there any way to make OpenCV use pinned memory (cudaMallocHost or cudaHostAlloc) by default? Memory size consumption is not a concern.
You can use this:
cv::Mat::setDefaultAllocator(cv::cuda::HostMem::getAllocator(cv::cuda::HostMem::AllocType::PAGE_LOCKED));
which will change the default allocator to use CUDA pinned memory.
Or, for shared memory, use:
cv::Mat::setDefaultAllocator(cv::cuda::HostMem::getAllocator(cv::cuda::HostMem::AllocType::SHARED));
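A minimal sketch of how this might fit together, assuming OpenCV built with CUDA support (input.png is a placeholder file name):

#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    // Route all subsequent cv::Mat allocations through page-locked (pinned)
    // host memory so host-to-device copies can use fast DMA transfers.
    cv::Mat::setDefaultAllocator(
        cv::cuda::HostMem::getAllocator(cv::cuda::HostMem::AllocType::PAGE_LOCKED));

    cv::Mat img = cv::imread("input.png");  // this Mat is now backed by pinned memory

    cv::cuda::GpuMat d_img;
    d_img.upload(img);  // upload from pinned memory avoids an extra staging copy
    return 0;
}

Keep in mind that pinned memory cannot be paged out, so making it the process-wide default trades system memory flexibility for transfer speed.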

Does a texture with data and a texture with no data consume the same amount of memory?

In my application I create textures, render to them and delay reading from them until absolutely needed by the CPU.
I'd like to know (and I still don't know how to determine this, beyond guessing and monitoring consumed GPU memory): would calling readPixels() reduce the GPU's memory consumption by transferring the data to the CPU? Or would that memory still be occupied until I destroy the texture?
readPixels just copies the data. It does not remove it from the GPU.
Textures that you don't pass data to (you passed null) take the same amount of memory as textures you do pass data to. The browser just fills the texture with zeros for you.
The only way for a texture to stop using memory is for you to delete it with gl.deleteTexture. You also need to remove every reference to it (unbind it from any texture units it's still on, and remove it from any framebuffer attachments or delete the framebuffers it's attached to).
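For illustration, the teardown sequence might look like this. The sketch uses the desktop OpenGL C API, whose calls map one-to-one onto the WebGL calls named above; fbo and tex are assumed to be an existing framebuffer and texture:

// Detach the texture from any framebuffer attachment
// (WebGL: gl.framebufferTexture2D(..., null, 0)).
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Unbind it from any texture unit it is still bound to
// (WebGL: gl.bindTexture(gl.TEXTURE_2D, null)).
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);

// Only once no references remain does deletion actually free the GPU memory
// (WebGL: gl.deleteTexture(tex)).
glDeleteTextures(1, &tex);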

Total Texture memory size iOS / OpenGL ES

My team is running into an issue where the amount of texture memory allocated via glTexImage2D is high enough that it crashes the app (at about 400 MB on an iPhone 5). We're taking steps to minimize texture allocation (via compression, using fewer bits per channel, doing procedural shaders for VFX, etc.).
Since the app crashed on glTexImage2D, I suspect it's running out of texture memory (as opposed to virtual memory). Is there any documentation or guideline on the recommended texture memory usage for an app (not just "optimize your texture memory")?
AFAIK, on iOS devices (and many Android devices) there's no dedicated VRAM, and our app process is still well within the virtual memory limit. Is this somehow related to the size of physical RAM? My searches so far have turned up only information on maximum texture sizes and tricks for optimizing texture usage. Any information is appreciated.

glGenTextures speed and memory concerns

I am learning OpenGL and recently discovered glGenTextures.
Although several sites explain what it does, I can't help wondering how it behaves in terms of speed and, particularly, memory.
Exactly what should I consider when calling glGenTextures? Should I consider unloading and reloading textures for better speed? How many textures does a typical game need? What workarounds are there for the limitations that memory and speed may impose?
According to the manual, glGenTextures only allocates texture "names" (i.e. IDs) with no "dimensionality". So you are not actually allocating texture memory as such, and the overhead here is negligible compared to an actual texture memory allocation.
glTexImage will actually control the amount of texture memory used per texture. Your application's best usage of texture memory will depend on many factors: including the maximum working set of textures used per frame, the available dedicated texture memory of the hardware, and the bandwidth of texture memory.
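To make the distinction concrete, here is a minimal sketch (pixels is assumed to point at 1024x1024 RGBA data):

GLuint tex;
glGenTextures(1, &tex);             // cheap: reserves a name (ID), allocates no storage
glBindTexture(GL_TEXTURE_2D, tex);

// This is where GPU memory is actually committed: one 1024x1024 RGBA8 level
// is 1024 * 1024 * 4 bytes = 4 MB (plus roughly a third more if you later
// generate a full mipmap chain).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);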
As for your question about a typical game: what sort of game are you creating? Console games are starting to fill Blu-ray disc capacity (I've worked on a PS3 title that initially was not projected to fit on Blu-ray), and a large portion of that space is textures. Downloadable web games, on the other hand, are much more constrained.
Essentially, you need to work with reasonable game design and come up with an estimate of:
1. The total textures used by your game.
2. The maximum textures used at any one time.
Then you need to look at your target hardware and decide how to make it all fit.
Here's a link to an old Game Developer article that should get you started:
http://number-none.com/blow/papers/implementing_a_texture_caching_system.pdf
