Can a VM with more VRAM than RAM do machine learning efficiently? [closed] - machine-learning

I'm working on a virtual machine that has been given 12 GB of RAM and has a Quadro RTX 6000 with 24 GB of VRAM. I'm trying to do machine learning on this virtual machine.
My intuition is that, with this limited amount of RAM, the VM is not using the video card as efficiently as it could, and that it would work better with more RAM. Is this correct?
The following question suggests that this is so, but is not very clear about it:
OpenCL - what happens if GPU memory is larger than system RAM
In short: how much RAM should I typically expect to need for machine learning and computer vision with this video card?

It very much depends on the software you are using. In some cases, GPU software can use significantly more VRAM than RAM, when the model runs only on the GPU and there is no need to have a copy of it in RAM.
As an example (CFD rather than ML), the FluidX3D software uses between 3.2x and 5.4x more VRAM than RAM; in that regime, your 24 GB of VRAM would still be the limiting factor.
If the allocation is closer to 1:1 RAM:VRAM, then you're limited by the 12 GB of RAM. In the end, you have to test your software and check the allocation ratio with tools like top/htop and nvidia-smi.
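If you want to measure the ratio for your own workload rather than guess, a small monitoring helper is enough. Below is a minimal sketch, assuming a CUDA-enabled PyTorch workload and that psutil is installed; adapt the calls for other frameworks.

```python
import psutil   # host-side process memory (RSS)
import torch    # assumes a CUDA-enabled PyTorch build

def report_memory(tag=""):
    """Print host RSS next to the GPU memory PyTorch currently holds."""
    rss_gib = psutil.Process().memory_info().rss / 2**30
    alloc_gib = torch.cuda.memory_allocated() / 2**30
    reserved_gib = torch.cuda.memory_reserved() / 2**30
    print(f"{tag:<24} host RSS {rss_gib:5.2f} GiB | "
          f"VRAM allocated {alloc_gib:5.2f} GiB (reserved {reserved_gib:5.2f} GiB)")

# Call it around the expensive steps of your pipeline, e.g.
# report_memory("after model load")
# report_memory("after one training step")
```

In practice, host RAM in ML pipelines tends to be consumed by data loading (DataLoader workers, pinned buffers, preprocessing) rather than by a mirror of the model, so the ratio really is workload-specific.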

Related

REDIS high memory usage [closed]

We have written 6.7M keys with a value size of 600 bytes. We expected a total memory usage of about 4.2 GB, but we are seeing an RSS of 5.7 GB.
Why is Redis using roughly 1.5 GB of extra memory?
Output of the info memory command:
info memory
# Memory
used_memory:5913620368
used_memory_human:5.51G
used_memory_rss:6065446912
used_memory_rss_human:5.65G
used_memory_peak:5913639024
used_memory_peak_human:5.51G
used_memory_peak_perc:100.00%
used_memory_overhead:338769120
used_memory_startup:1018080
used_memory_dataset:5574851248
used_memory_dataset_perc:94.29%
allocator_allocated:5913554768
allocator_active:6065409024
allocator_resident:6065409024
total_system_memory:34359738368
total_system_memory_human:32.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.03
allocator_frag_bytes:151854256
allocator_rss_ratio:1.00
allocator_rss_bytes:0
rss_overhead_ratio:1.00
rss_overhead_bytes:37888
mem_fragmentation_ratio:1.03
mem_fragmentation_bytes:151892144
mem_not_counted_for_evict:3738
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:17440
mem_aof_buffer:4096
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0
Number of keys in Redis:
info keyspace
# Keyspace
db6:keys=6765516,expires=0,avg_ttl=0
Please check this https://redis.io/topics/faq :
64-bit systems will use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64-bit systems. But of course the advantage is that you can have a lot of memory in 64-bit systems, so in order to run large Redis servers a 64-bit system is more or less required. The alternative is sharding.
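As a rough sanity check, you can derive the per-key overhead directly from the numbers above. This is a back-of-the-envelope sketch; key lengths aren't given in the question, so the "overhead" here also includes the key strings themselves.

```python
# Back-of-the-envelope check using the figures from INFO memory / INFO keyspace.
used_memory = 5_913_620_368          # used_memory (bytes)
keys = 6_765_516                     # db6 key count
value_size = 600                     # bytes per value, per the question

raw_values = keys * value_size                        # raw value payload
overhead_per_key = (used_memory - raw_values) / keys  # everything else, per key

print(f"raw values:       {raw_values / 2**30:.2f} GiB")   # ~3.78 GiB
print(f"overhead per key: {overhead_per_key:.0f} bytes")   # ~274 bytes
# Those ~274 bytes/key cover the key string, the dict entry, the robj header,
# SDS headers and allocator rounding -- all pointer-heavy structures on a
# 64-bit build, which is exactly the FAQ's point.
```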

Which GPU model/brand is optimal for Neural Networks? [closed]

This is not an unreasonable question. Nvidia and ATI architectures differ, enough so that for certain tasks (such as bitcoin mining) ATI is vastly better than Nvidia.
The same could be true for neural-network-related processing. I have tried to find comparisons of the two GPU brands in this context, but failed to do so.
My expectation is that the most important thing for Neural Network processing in a GPU is the number of cores. Would that be correct?
Almost all ML software that uses the GPU works (or works best) with CUDA, so Nvidia GPUs are preferable.
Take a look at this discussion. There is also an article about which GPU to get for deep learning (modern neural networks). Relevant quote:
So what kind of GPU should I get? NVIDIA or AMD?
NVIDIA’s standard libraries made it very easy to establish the first deep learning libraries in CUDA, while there were no such powerful standard libraries for AMD’s OpenCL. Right now, there are just no good deep learning libraries for AMD cards – so NVIDIA it is. Even if some OpenCL libraries would be available in the future I would stick with NVIDIA: The thing is that the GPU computing or GPGPU community is very large for CUDA and rather small for OpenCL. Thus in the CUDA community good open source solutions and solid advice for your programming is readily available.
The reason NVIDIA rocks is that it has invested a lot of effort into supporting scientific computing (see cuDNN, for example); this shows it acknowledges the field and is moving towards these applications.
So, NVIDIA has lots of GPUs. Which one should you get?
Short answer, based on the article cited above (I strongly suggest reading it!): GTX 980.
Actually, the number of cores is not that significant. GPUs don't have tons of memory, so communication with the host (your RAM) is inevitable. What matters is the amount of on-board memory (so that you can load and process more at once) and the bandwidth (so that you don't spend a lot of time waiting on transfers).
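If you want to check those two figures for a card you already have, the device properties are easy to query. A minimal sketch, assuming a CUDA-enabled PyTorch install (the CUDA toolkit's deviceQuery sample reports the same information):

```python
import torch  # assumes a CUDA-enabled PyTorch build

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}")
        print(f"  total memory:    {props.total_memory / 2**30:.1f} GiB")
        print(f"  multiprocessors: {props.multi_processor_count}")
else:
    print("No CUDA device visible")
```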

Memory breakdown based on its speed [closed]

In a technical discussion I was asked what I look at when buying a laptop.
I was then asked to sort the different types of memory (RAM, etc.) by speed; in other words, he wanted the memory hierarchy.
Technically speaking a processor's registers are the fastest memory a computer has. The size is very small and people generally don't include those numbers when talking about a CPU.
The quickest memory in a computer that would be advertised is the memory that is directly attached to the CPU. It's called cache, and in modern processors you have 3 levels - L1, L2, and L3 - where the first level is the fastest but also the smallest (it's expensive to produce and power). Cache typically ranges from several kilobytes to a few megabytes and is typically made from SRAM.
After that there is RAM. Today's computers use DDR3 for main memory. It's much larger and cheaper than cache, and you'll find sticks upwards of 1 gigabyte in size. The most common type of RAM today is DRAM.
Lastly storage space, such as a hard drive or flash drive, is a form of memory but in general conversation it's grouped separately from the previous types of memory. E.g. you would ask how much "memory" a computer has - meaning RAM - and how much "storage" it has - meaning hard drive space.
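If you want to see the hierarchy rather than just recite it, a crude NumPy sketch like the one below times random accesses over increasingly large working sets; the per-access cost climbs as the array outgrows each cache level. The absolute numbers are noisy (Python and NumPy add fixed overhead), so treat them as qualitative only.

```python
import time
import numpy as np

def ns_per_access(n_bytes, reps=2_000_000):
    """Average time to gather `reps` random elements from an array of `n_bytes`."""
    n = n_bytes // 8                              # 64-bit integers
    data = np.zeros(n, dtype=np.int64)
    idx = np.random.randint(0, n, size=reps)      # random indices defeat prefetching
    start = time.perf_counter()
    data[idx].sum()                               # gather + reduce
    return (time.perf_counter() - start) / reps * 1e9

for size in (32 * 1024, 256 * 1024, 8 * 1024**2, 256 * 1024**2):
    print(f"{size // 1024:>8} KiB: {ns_per_access(size):6.2f} ns/access")
```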

What factors determine the maximum amount of physical memory a system can have? [closed]

What factors determine the maximum amount of physical memory a system can have? I know the operating system and the hardware both play a role. On the hardware side, does it come down to the number of lines in the address bus?
According to this website, different versions of Windows 7 have different caps on the usable memory, but what I don't get is why they don't correlate with the 32-bit/64-bit version. For example, why does Home Premium support up to 16 GB and Professional up to 192 GB when both are 64-bit?
Is the maximum the lower of the two limits, hardware and operating system? For example, what would happen if you had a 32-bit address bus, installed 8 GB of RAM, and ran 64-bit Windows 7?
A 32-bit system will not recognise anything more than 4 GB. You can install whatever you want, but only 4 GB (actually a little less) will be usable.
Regarding the Windows editions, it is merely a marketing restriction: if Home Premium supported 192 GB, who would buy Professional?
It's like asking: if your Toyota Yaris could do 300 km/h in 10 seconds, why would you buy a Ferrari? :-)
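For the hardware side of the question, the arithmetic is simple: with byte-addressable memory and N address bits you can reach 2^N bytes. A quick sketch (real limits also depend on the chipset, the firmware, and the OS caps discussed above):

```python
# Addressable memory as a function of address-bus width (byte-addressable).
for bits in (32, 36, 48):
    gib = 2**bits / 2**30
    print(f"{bits}-bit address space: {gib:,.0f} GiB")
# 32-bit: 4 GiB; 36-bit (PAE): 64 GiB; 48-bit: 262,144 GiB (256 TiB)
```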

How to use graphics memory as RAM? [closed]

Since graphics cards provide large amounts of RAM (0.5 GiB to 2 GiB), and API access to the GPU is not that difficult with CUDA, Stream, and the more portable OpenCL, I wondered whether it is possible to use graphics memory as RAM. Graphics RAM may have higher latency (from the CPU) than real RAM, but it is definitely faster than an HDD, so it could be well suited for caching.
Is it possible to access graphics memory directly, or at least through a thin memory-management layer, from my own applications (rather than making it freely usable by the OS)? If so, what is the preferred way to do this?
Yes, you can use it as swap memory on Linux. Refer to the link here for more details.
With Linux, it's possible to use it as swap space, or even as a RAM disk.
Be warned: it's nice to have fast swap or a RAM disk on your home computer, but if a binary driver is loaded for X, it may freeze the whole system or create graphical glitches. Usually there is no way to tell the driver how much memory may be used, so it won't know the upper limit. The VESA driver can be used, however, because it provides the possibility to set the video RAM size. So: Direct Rendering or fast swap. Your choice.
Unlike motherboard RAM and hard drives, there aren't any known video cards that have ECC memory. This may not be a big deal for graphics rendering, but you definitely don't want to put critical data in it or use this feature on servers.
