Hybrid graphics in Linux - Nvidia

I came across an Nvidia Optimus implementation for Linux called the Bumblebee project: https://github.com/Bumblebee-Project
I installed Bumblebee on my laptop, which has an Nvidia graphics card. The catch is that applications which need the discrete GPU have to be run through a special command, optirun. Only then is the discrete GPU powered on; otherwise it stays powered off to conserve power.
Is there a way to identify whether an application needs the discrete GPU to run, or could run on the normal on-chip graphics processor? Can this be done in Linux?

I don't think so. I also have a laptop with an Optimus card, and even on Windows it keeps a list of the applications you want to run on the Nvidia chip versus the Intel one. I believe that list ships with the driver when you install it.
In theory you could profile each application that uses the video card for how much GPU time and memory it uses: if it exceeds some limit, tag the app to run on the Nvidia chip; if it is running on the Nvidia chip but only using a small amount, tag it to use the Intel chip.
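As a very rough sketch of that profiling idea in C: poll overall GPU utilization while the application runs and tag it against a threshold. This assumes the discrete GPU is active (e.g. the application was launched through optirun; under Bumblebee, nvidia-smi may itself need to be run the same way), and the 30-second window and 20% threshold are arbitrary example values:

    #include <stdio.h>
    #include <unistd.h>

    /* Poll overall GPU utilization once per second for 30 s via nvidia-smi
       and report the peak. 20% is an arbitrary example threshold. */
    int main(void) {
        int peak = 0;
        for (int i = 0; i < 30; ++i) {
            FILE *p = popen("nvidia-smi --query-gpu=utilization.gpu "
                            "--format=csv,noheader,nounits", "r");
            if (!p)
                return 1;
            int util = 0;
            if (fscanf(p, "%d", &util) == 1 && util > peak)
                peak = util;
            pclose(p);
            sleep(1);
        }
        printf("peak GPU utilization: %d%%\n", peak);
        puts(peak > 20 ? "tag: run via optirun (discrete GPU)"
                       : "tag: integrated GPU is enough");
        return 0;
    }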

Related

Comparison between USB and Mini PCIe Interfaces

I'm deciding between the Mini PCIe and USB accelerators for a home Linux CCTV project. The host has both USB 3 and a Mini PCIe socket. The host's physical environment will range from an ambient 20 °C up to a potential 35 °C (during the summer).
I'm struggling to determine the pros and cons of each. I have gotten this far, although many entries are guesses:
USB:
Supports Windows and macOS as well as Linux
Appears to have greater mindshare/use/community support on the internet
External, so it can be placed to optimise heat dissipation
Has a heatsink
Two manual performance modes; the highest requires an ambient temperature of at most 25 °C
Can use up to 4.5 W (900 mA @ 5 V)
Mini PCIe:
Cheaper (by about 25%)
Lower power consumption (1.4 W for 416 fps)
Automatic thermal throttling via the driver
Relies on the host system for active cooling
Will maintain maximum operation up to 85 °C
There are probably many I've missed. In particular, I can't determine whether there are any limitations on throughput/capacity with USB versus PCIe. If there is no difference, then I suspect the USB form factor is the better option, if only for the mindshare, although the power usage and heat generated may be a concern.
To whittle this down to an actual question: in what cases would the Mini PCIe interface be preferable to the USB one?
If you are looking for a plug-and-play solution, then I definitely suggest the USB Accelerator. Overall, as long as you meet the system requirements, it'll always work (maybe with some modifications to the standard Linux configs, like adding your user to the plugdev group). The software for the CCTV is then all up to you :)
The PCIe version sometimes needs extra work, like adding kernel arguments and modules to keep the PCIe device happy. If you are looking to launch a product where volume is expected, then it is worth investigating, since it's cheaper and more compact. However, power usage is a must for consideration: the USB Accelerator can draw up to 900 mA, so that could play a factor.
May I know what host you are trying to attach the accelerators to?
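Incidentally, if these are the Coral Edge TPU accelerators (which the mention of the plugdev group and PCIe kernel modules suggests), the application code is essentially identical for both form factors; only the delegate's device type changes. A minimal sketch using the TensorFlow Lite C API and libedgetpu's edgetpu_c.h, with a hypothetical model file name:

    #include <stdio.h>
    #include "tensorflow/lite/c/c_api.h"
    #include "edgetpu_c.h"

    int main(void) {
        /* Any .tflite model compiled for the Edge TPU (hypothetical name). */
        TfLiteModel *model = TfLiteModelCreateFromFile("model_edgetpu.tflite");
        if (!model)
            return 1;

        /* The only interface-specific line: use EDGETPU_APEX_PCI instead of
           EDGETPU_APEX_USB when running on the Mini PCIe card. */
        TfLiteDelegate *delegate =
            edgetpu_create_delegate(EDGETPU_APEX_USB, NULL, NULL, 0);

        TfLiteInterpreterOptions *options = TfLiteInterpreterOptionsCreate();
        TfLiteInterpreterOptionsAddDelegate(options, delegate);

        TfLiteInterpreter *interpreter = TfLiteInterpreterCreate(model, options);
        if (TfLiteInterpreterAllocateTensors(interpreter) == kTfLiteOk &&
            TfLiteInterpreterInvoke(interpreter) == kTfLiteOk)
            printf("inference ran on the Edge TPU\n");

        TfLiteInterpreterDelete(interpreter);
        TfLiteInterpreterOptionsDelete(options);
        edgetpu_free_delegate(delegate);
        TfLiteModelDelete(model);
        return 0;
    }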

How to install FreeRTOS on a laptop without the Win32 or Linux port, to get real-time behavior?

I'm getting started with FreeRTOS. I went through the documentation provided on FreeRTOS.org and practiced with some demo projects. My question is: how do I install FreeRTOS without using the Win32 port (since it is only an emulator that doesn't provide real-time behaviour)? Is it possible to install FreeRTOS as a standalone OS, or is it necessary to use the Linux kernel or Windows?
FreeRTOS is a real-time operating system kernel. It's not a full-blown OS; it's just the kernel. You don't "install" FreeRTOS the way you would Windows or an Ubuntu distro on an x86 PC. You build a project and use FreeRTOS to schedule tasks, manage memory resources, etc. In general, you need a different microcontroller/processor than the one you're developing on as your target platform.
If you want to use only your laptop, then you'll need to simulate a "target" processor (that's what the Win32 port is). You won't be able to achieve "real-time" results (Windows will get in the way), but you can get pretty close.
The first thing I'd do is get an eval kit for whatever microcontroller you actually want to target and develop on.
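For concreteness, here is a minimal sketch of what such a project looks like: plain C linked against the kernel, with tasks created before the scheduler is started (the LED toggle is a hypothetical board-specific call):

    #include "FreeRTOS.h"
    #include "task.h"

    /* A trivial task: do some work, then block for 500 ms so that
       lower-priority tasks (and the idle task) get to run. */
    static void vBlinkTask(void *pvParameters) {
        (void)pvParameters;
        for (;;) {
            /* board_toggle_led();  hypothetical board-specific function */
            vTaskDelay(pdMS_TO_TICKS(500));
        }
    }

    int main(void) {
        /* Task function, name, stack depth in words, parameter, priority,
           handle. */
        xTaskCreate(vBlinkTask, "Blink", configMINIMAL_STACK_SIZE, NULL,
                    tskIDLE_PRIORITY + 1, NULL);
        vTaskStartScheduler();  /* never returns if startup succeeded */
        for (;;) {}             /* reached only if the kernel ran out of heap */
    }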

Is it possible to disable GPU in kivy app?

I have developed a Windows desktop application using the Kivy framework. Running standalone on a Windows 10 desktop, the overall performance is OK. However, running the same app in a VMware VDI client, the graphical performance is very bad. The resources assigned to the GPU are limited, as you can see in the attached report.
Would it be possible to disable the GPU and render the graphics on the CPU? And if so, how?
Thanks in advance for your help.

nvidia-smi could not communicate with the NVIDIA driver on a Microsoft Azure DSVM

Right after creating and starting up a Data Science Virtual Machine and connecting through SSH, I tried to use nvidia-smi to see if the built-in NVIDIA driver and CUDA were working properly. The returned message read:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA
driver. Make sure that the latest NVIDIA driver is installed and
running.
These were supposed to be part of the VM, yet when I tried to run the program I created, my local computer's CPU was used instead of the VM's GPU. The ultimate goal of my project is to run an object detection model faster than my lousy 11 sec/image, so I figured I would use a VM and take advantage of its computing power. Yet it seems like this may not be the best option, so if anyone has some advice there, I would appreciate it.
The issue you are seeing is because you are using a D-series VM. Only the N-series VMs have GPUs, so in order to utilize the GPU you need to select one of the following sizes:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu
For this size family, the vCPU (core) quota in your subscription is initially set to 0 in each region. You will need to request a vCPU quota increase for this family in an available region.

How do I increase the "global memory" available to the Intel CPU OpenCL driver?

My system has 32 GB of RAM, but the device information for the Intel OpenCL implementation says "CL_DEVICE_GLOBAL_MEM_SIZE: 2147352576" (~2 GB).
I was under the impression that on a CPU platform the global memory is the "normal" RAM, and thus something like 30+ GB should be available to the OpenCL CPU implementation (of course I'm using the 64-bit version of the SDK).
Is there some sort of secret setting to tell the Intel OpenCL driver to increase the global memory and use all the system memory?
SOLVED: Got it working by recompiling everything as 64-bit. As stupid as it seems, I thought that OpenCL worked similarly to OpenGL, where you can easily allocate e.g. 8 GB of texture memory from a 32-bit process and the driver handles the details for you (of course you can't allocate 8 GB in one sweep, but you can e.g. transfer multiple textures that add up to more than 4 GB).
I still think that limiting the OpenCL memory abstraction to the address space of the process (at least for the Intel/AMD drivers) is irritating, but maybe there are subtle details or performance trade-offs behind this implementation choice.
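For anyone hitting the same wall, a quick sanity check is to query the CPU device from a binary built for the intended address width; compiled as 64-bit (e.g. gcc -m64 ... -lOpenCL), the reported size should be close to the full system RAM. A minimal sketch:

    #include <stdio.h>
    #include <CL/cl.h>

    /* Print CL_DEVICE_GLOBAL_MEM_SIZE for every CPU device on the system. */
    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);
        for (cl_uint p = 0; p < nplat; ++p) {
            cl_device_id devs[8];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_CPU, 8, devs,
                               &ndev) != CL_SUCCESS)
                continue;
            for (cl_uint d = 0; d < ndev; ++d) {
                cl_ulong mem = 0;
                clGetDeviceInfo(devs[d], CL_DEVICE_GLOBAL_MEM_SIZE,
                                sizeof mem, &mem, NULL);
                printf("CL_DEVICE_GLOBAL_MEM_SIZE: %llu\n",
                       (unsigned long long)mem);
            }
        }
        return 0;
    }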
