Can I use a **Sapphire Nitro+ Radeon RX 5500 XT 8GB graphics card (local GPU)** in Google Colab or a Jupyter notebook? Is there any way to run image-processing code on a local GPU without an Nvidia GPU?
Whenever I use a Jupyter notebook, it uses 99–100% of my CPU (according to Task Manager). I am using Windows 10.
Is there any way I can limit this CPU usage? I want to use my PC for other tasks while keeping the Jupyter kernel running.
Thanks in advance.
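One commonly suggested approach (a minimal sketch, not from this thread; it assumes the psutil package is installed and the cell is run inside the notebook whose kernel you want to throttle) is to lower the kernel's priority and pin it to a subset of cores:

```python
# Sketch: lower the current Jupyter kernel's priority and restrict it to two
# cores so the rest of the machine stays responsive. Assumes `pip install psutil`
# and Windows (the priority constant below is Windows-specific).
import psutil

kernel = psutil.Process()  # the current process, i.e. this notebook's kernel

kernel.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)  # de-prioritise the kernel
kernel.cpu_affinity([0, 1])                      # leave the other cores free
```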
I've created a CNN from scratch using only PyTorch tensors and matrix-operation functions in the hope of utilizing the GPU. To my surprise, the GPU stays at 0% utilization and my training doesn't seem to be any faster than running on the CPU.
(Screenshots of GPU utilization before and during training omitted.)
I've double-checked that CUDA is available and have already installed it.
Graphics card: NVIDIA GeForce RTX 2070 SUPER
Processor: Intel Core i5-10400
Coding environment: Jupyter Notebook
CUDA & cuDNN version: 11.0
PyTorch version: 1.6.0
You have to move your model and data to the GPU using
model.cuda()  # move the model's parameters to the GPU once, before training
# and, inside the training loop, move each batch as well:
x = x.cuda()
y = y.cuda()
You seem to be doing this within the forward and backward calls. To make sure the model is actually going to the GPU, monitor GPU usage continuously with the shell command
watch -n 5 nvidia-smi
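For completeness, here is a small self-contained sketch of the same pattern (toy model and data, not the asker's code) that moves both the model and each batch to the GPU before computing the loss:

```python
import torch
import torch.nn as nn

# Pick the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model and optimiser, just to illustrate the .to(device)/.cuda() pattern.
model = nn.Linear(10, 2).to(device)              # move parameters once, up front
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 10).to(device)               # move inputs alongside the model
y = torch.randint(0, 2, (32,)).to(device)        # move labels as well

loss = loss_fn(model(x), y)                      # runs on the GPU when available
loss.backward()
optimizer.step()
```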
I have a laptop running Ubuntu 18.04 with both Intel and NVIDIA graphics cards:
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM204M [GeForce GTX 970M] (rev a1)
I would like to use the Intel card for my actual graphics display, and my NVIDIA card for simultaneously running GPGPU things (e.g. TensorFlow models, other CUDA stuff, OpenCL). Is this possible? How would I go about this?
Ideally, I'd be able to turn the NVIDIA GPU on and off easily, so that I can just turn it on when I need to run something on it, and turn it off after to save power.
Currently, I have it set up with nvidia-prime so that I can switch between the two cards (I need to reboot in between). However, if I've loaded the Intel card for graphics (prime-select intel), then the NVIDIA kernel drivers never get loaded and I can't access the NVIDIA GPU (nvidia-smi doesn't work).
I tried loading the NVIDIA kernel module with sudo modprobe nvidia when running the graphics on Intel, but I get ERROR: could not insert 'nvidia': No such device.
Yes, this is indeed possible. It is called "Nvidia Optimus" and means that the integrated Intel GPU is used by default to save power and the dedicated Nvidia GPU is used only for high-performance applications. Here are guides on how to set it up in Linux:
The Ultimate Guide to Setting Up Nvidia Optimus on Linux
archlinux: Nvidia Optimus
Short answer: you can try my modified version of prime-select, which adds a 'hybrid' profile (graphics on Intel, TensorFlow and other CUDA stuff on the Nvidia GPU): https://github.com/lperez31/prime-select-hybrid
Long answer:
I ran into the same issue and found several blogs describing different solutions, but I wanted something more straightforward: I didn't want to switch profiles every time I needed TensorFlow to run on the Nvidia GPU.
When setting the 'intel' profile, prime-select blacklists three modules: nvidia, nvidia-drm and nvidia-modeset. It also removes the three aliases to these modules. Thus, when running in the 'intel' profile, the sudo modprobe nvidia command fails; if the aliases had not been removed, that command would do the trick.
In order to use Intel for graphics and the Nvidia GPU for TensorFlow, the 'hybrid' profile in the modified version of prime-select above blacklists the nvidia-drm and nvidia-modeset modules, but not the nvidia module. Thus the Nvidia drivers are loaded, but because nvidia-drm (Direct Rendering Manager) is not, graphics stay on the Intel GPU.
If you don't want to use my version of prime-select, you can just edit /usr/bin/prime-select and comment out the following two lines:
blacklist nvidia
alias nvidia off
With these lines commented out, the nvidia-smi command should work even in the 'intel' profile; you will be able to run CUDA workloads on the Nvidia GPU while your graphics use the Intel card.
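To confirm the result (a sketch, not part of the original answer; it assumes PyTorch with CUDA support is installed), you can check from Python that the Nvidia GPU is reachable for compute while the display stays on the Intel card:

```python
import torch

# Should print True and the Nvidia card's name (e.g. the GTX 970M) even
# though the desktop itself is rendered by the Intel GPU.
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no CUDA device")
```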
I just moved from AWS to Google Cloud Platform because of its lower GPU price. I followed the instructions on the website to create a Compute Engine instance with a K80 GPU and installed the latest TensorFlow, Keras, CUDA driver and cuDNN; everything went smoothly. However, when I try to train my model, training still runs on the CPU.
NVIDIA-SMI 387.26 Driver Version: 387.26
Cuda compilation tools, release 9.1, V9.1.85
TensorFlow version: 1.4.1
cuDNN: cudnn-9.1-linux-x64-v7
Ubuntu 16.04
Perhaps you installed the CPU-only version of TensorFlow?
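One way to check (a sketch, assuming TensorFlow is importable on the instance) is to list the devices TensorFlow can see; with the plain tensorflow 1.x package only the CPU appears, because GPU support lives in the separate tensorflow-gpu package (e.g. pip install tensorflow-gpu==1.4.1):

```python
# Lists the devices visible to TensorFlow 1.x; a working GPU setup shows a
# GPU entry for the K80 in addition to the CPU.
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())
```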
Note that Google Cloud's Compute Engine now has a VM OS image with all the needed software installed, for an easier/faster way to get started: https://cloud.google.com/deep-learning-vm/
I have a machine with several GPUs. My idea is to attach them to different Docker instances in order to use those instances for CUDA (or OpenCL) calculations.
My goal is to set up a Docker image with a fairly old Ubuntu and fairly old AMD video drivers (13.04). The reason is simple: upgrading to a newer driver version breaks my OpenCL program (due to buggy AMD Linux drivers).
So the question is: is it possible to run a Docker image with an old Ubuntu, an old kernel (3.14, for example) and an old AMD (fglrx) driver on a fresh Arch Linux setup with a fresh kernel 4.2 and the newer AMD (fglrx) drivers from the repository?
P.S. I tried this answer (with the Nvidia cards) and unfortunately deviceQuery inside the Docker image doesn't see any CUDA devices (as happened to some commenters on the original answer)...
P.P.S. My setup:
CPU: Intel Xeon E5-2670
GPUs:
1 x Radeon HD 7970
$ lspci -nn | grep Rad
83:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]
83:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]
2 x GeForce GTX Titan Black
With Docker you rely on operating-system-level virtualization, which means every container shares the host kernel. If you wish to run a different kernel for each container, you will probably have to use system-level virtualization, e.g., KVM or VirtualBox. If your setup supports Intel's VT-d, you can pass the GPU through as a PCIe device to the container (better terminology in this case: virtual machine).
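As a quick illustration of the shared-kernel point (a sketch; assumes Docker is installed and can pull ubuntu:14.04), a container reports the host's kernel version, which is why a driver built for an old kernel cannot simply live inside a container on a newer host kernel:

```python
# Compare the kernel seen on the host with the kernel seen inside a container;
# they are the same, because Docker containers share the host kernel.
import subprocess

host_kernel = subprocess.check_output(["uname", "-r"], text=True).strip()
container_kernel = subprocess.check_output(
    ["docker", "run", "--rm", "ubuntu:14.04", "uname", "-r"], text=True
).strip()

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)  # same as the host
```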