So I'm using OpenCV cv::Mat to read and write files. But since it allocates ordinary pageable memory, transferring the data to the GPU is slow.
Is there any way to make OpenCV use pinned memory (cudaMallocHost or cudaHostAlloc) by default? Memory consumption is not a concern.
You can use this:
cv::Mat::setDefaultAllocator(cv::cuda::HostMem::getAllocator(cv::cuda::HostMem::AllocType::PAGE_LOCKED));
which changes the default cv::Mat allocator to use CUDA pinned (page-locked) memory.
Or, for shared (CUDA-mapped) memory, use:
cv::Mat::setDefaultAllocator(cv::cuda::HostMem::getAllocator(cv::cuda::HostMem::AllocType::SHARED));
According to this WWDC iOS Memory Deep Dive talk https://developer.apple.com/videos/play/wwdc2018/416, the memory footprint equals the dirty size plus the swapped size. However, when I use vmmap -summary [memgraph] to inspect my memgraphs, the numbers often don't add up. In this particular case, the vmmap output shows memory footprint = 118.5M while the dirty size is 123.3M. How can the footprint be smaller than the dirty size?
In the same WWDC talk, it's mentioned that heap --sortBySize [memgraph] can be used to inspect heap allocations, and I see from my memgraph that the heap size is about 55M (All zones: 110206 nodes (55357344 bytes)), which is much smaller than the MALLOC regions in the vmmap result. Doesn't malloc allocate space on the heap?
I'm confused about logical and physical memory alignment.
To use vector instructions such as AVX and SSE efficiently, we may need the data to be aligned.
Does this mean that data aligned in virtual memory is also properly aligned in physical memory?
If so, how does the compiler ensure that?
In my application I create textures, render to them, and delay reading from them until the CPU absolutely needs the data.
What I'd like to know (I can only guess at and monitor the consumed GPU memory) is: would the call to readPixels() reduce the GPU's memory consumption by transferring the data to the CPU, or would that memory still be occupied until I destroy the texture?
readPixels just copies the data. It does not remove it from the GPU.
Textures that you don't pass data to (you passed null) take the same amount of memory as textures you do pass data to. The browser just fills the texture with zeros for you.
The only way for a texture to stop using memory is for you to delete it with gl.deleteTexture. You also need to remove every reference to it (unbind it from any texture units it's still bound to, and remove it from any framebuffer attachments or delete the framebuffers it's attached to).
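For concreteness, here's a minimal sketch of that cleanup, assuming a WebGL context gl, a texture tex, and a framebuffer fb that it was attached to as a color attachment (these names are placeholders, not from the question):
// Detach the texture from the framebuffer attachment it was rendered into.
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, null, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

// Unbind it from any texture unit it is still bound to.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, null);

// Only after the references are gone can deleting it actually free the GPU memory.
gl.deleteTexture(tex);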
In the Chrome task manager, there is a column called GPU memory. In GPU-Z, I can see the memory size of the video card, which I suppose is the video memory. Is that the same as GPU memory?
Yes, that is the same as the GPU memory.
The only exception is that some lower-end computers use a technique called shared graphics memory, in which an integrated graphics card uses part of the system RAM as video memory. Since your graphics card is not integrated, this does not apply to you.
I have a model with lots of high-quality textures, and I try hard to keep the overall memory usage down. One of the things I tried is removing the mipmaps after they have been pushed to the GPU, in order to release the texture data from main RAM. When I do so, the model is still rendered with the previously uploaded mipmap textures. So that's fine, but the memory doesn't drop at all.
material.mipmaps.length = 0;
So my question is:
Does Three.js keep a reference to the mipmaps, so that the garbage collector can't release the memory? Or is the texture referenced by WebGL itself? That would seem strange, since WebGL makes me think textures always live in dedicated memory and must therefore be copied. And if WebGL does keep a reference to the original texture in RAM, would it behave differently on a desktop with a dedicated graphics card than on a laptop with an onboard graphics card that shares main RAM?
I'd be really glad if someone could explain what's going on inside Three.js/WebGL with respect to texture references.
That's a good question.
Let's dig in...
So normally you'd call dispose() on a texture when you want it to be kicked out of VRAM.
Tracing what that does might bring us to an answer. So what does dispose() do?
https://github.com/mrdoob/three.js/blob/2d59713328c421c3edfc3feda1b116af13140b94/src/textures/Texture.js#L103-L107
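For context, the body at that permalink boils down to a single event dispatch (paraphrased, not a verbatim copy of the source at that commit):
Texture.prototype.dispose = function () {
  // Just notify any listeners that this texture is being disposed.
  this.dispatchEvent( { type: 'dispose' } );
};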
Alright, so it dispatches an event. Where's that handled?
https://github.com/mrdoob/three.js/blob/2d59713328c421c3edfc3feda1b116af13140b94/src/renderers/WebGLRenderer.js#L654-L665
Aha, so finally:
https://github.com/mrdoob/three.js/blob/2d59713328c421c3edfc3feda1b116af13140b94/src/renderers/WebGLRenderer.js#L834-L837
And that suggests we're leaving THREE.js and entering the world of raw WebGL.
Digging a bit into the WebGL spec here (sections 3.7.1 / 3.7.2) and a tutorial on raw WebGL here and here shows that WebGL keeps its own reference in memory, but that reference isn't a public property of the THREE.js texture.
Now, why that goes into RAM and not VRAM, I don't know... did you test this on a machine with dedicated or shared GPU RAM?
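To tie it back to your question: the pattern that reliably releases the GPU copy is to drop your own references and call dispose(). A minimal sketch, assuming the texture is assigned to material.map (variable names are placeholders):
material.map = null;         // stop referencing the texture from the material
material.needsUpdate = true; // switching map between a texture and null requires a material update
texture.dispose();           // fires the 'dispose' event traced above, which ends in gl.deleteTexture
texture = null;              // drop your own reference so the GC can reclaim the JS-side data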