How do I check which of my NVIDIA GPUs is used for display? - nvidia

I'm on a system with multiple NVIDIA GPUs. One or more of them may - or may not - be used to drive a physical monitor. In my compute work, I want to avoid using that one (or more).
How can I, programmatically, check which GPUs are used for display?
If there's no robust way of doing that, I'll settle for getting those GPUs which are used by an Xorg process (which is what nvidia-smi gives me on the command line).

In case you want to use the same approach programmatically, you can check the NVML API functions nvmlDeviceGetDisplayActive and nvmlDeviceGetDisplayMode.
Specifically,
nvmlReturn_t nvmlDeviceGetDisplayMode ( nvmlDevice_t device, nvmlEnableState_t* display ) can be used to detect if a physical display is connected to a device.
nvmlReturn_t nvmlDeviceGetDisplayActive ( nvmlDevice_t device, nvmlEnableState_t* isActive ) can be used to check whether a display is initialized on a device, e.g. whether an X server is attached to it; it is possible for an X server to be running on a GPU without a physical display being connected.
Link to documentation
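For example, here is a minimal sketch (untested, and assuming the NVML header and library shipped with the driver/CUDA toolkit are available) that walks all devices and prints both flags:

// Minimal sketch: report, for every GPU, whether a physical display is
// connected and whether a display (e.g. an X server) is active on it.
// Build with something like: gcc check_display.c -o check_display -lnvidia-ml
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlEnableState_t connected = NVML_FEATURE_DISABLED;
        nvmlEnableState_t active = NVML_FEATURE_DISABLED;

        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
            continue;

        // Physical display connected to this GPU?
        nvmlDeviceGetDisplayMode(dev, &connected);
        // Display initialized on this GPU (e.g. an X server attached)?
        nvmlDeviceGetDisplayActive(dev, &active);

        printf("GPU %u: display connected=%s, display active=%s\n", i,
               connected == NVML_FEATURE_ENABLED ? "yes" : "no",
               active == NVML_FEATURE_ENABLED ? "yes" : "no");
    }

    nvmlShutdown();
    return 0;
}

GPUs where both flags come back disabled should be safe to reserve for compute work.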

Try the following on a terminal
nvidia-smi --format=csv --query-gpu=index,display_mode,display_active
For more information, check the nvidia-smi documentation and nvidia-smi --help-query-gpu.
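On a machine where, say, only the first GPU drives a monitor, the output would look roughly like this (illustrative values only):

index, display_mode, display_active
0, Enabled, Enabled
1, Disabled, Disabled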

Related

Comparison between USB and Mini PCIe Interfaces

I'm deciding between the MiniPCIe and USB accelerators for a home Linux CCTV project. The host has both USB3 and a MiniPCIe socket. The host's physical environment will range from an ambient 20C up to a potential 35C (during the summer).
I'm struggling to determine the pros and cons for each. I have gotten this far, although many are guesses:
USB:
Supports Windows and MacOS as well as Linux
Appears to have greater mindshare/use/community support on the Internet
External so can be placed to optimise heat dissipation
Heatsink
Two manual performance modes, highest requires ambient temp of max 25C
Can use up to 4.5W (900mA @ 5V)
Mini PCIe:
Cheaper (25%)
Lower power consumption (1.4W for 416 fps)
Automatic thermal throttling via driver
Relies on host system for active cooling
Will maintain max operation at 85C
There are probably many I've missed. In particular, I can't determine whether there are any limitations on throughput/capacity using USB vs PCIe. If there is no difference, then I suspect the USB form factor is the better option, if only for the mindshare, although the power usage/heat generated may be a concern.
To whittle this down to an actual question: in what cases would the Mini PCIe interface be a preferred option to the USB one?
If you are looking for a plug&play solution, then I definitely suggest the USB Accelerator. Overall, as long as you meet the system requirements, it'll always work (maybe with some modifications to the standard Linux configs, like adding your user to the plugdev group, ...). Then the software for the CCTV is all up to you :)
The PCIe versions sometimes need extra work, like adding extra kernel arguments and modules to keep the PCIe device happy. If you are looking to launch a product where high volumes are expected, then it is worth investigating, since it's cheaper and more compact. However, power usage is a must for consideration, as the USB Accelerator can use up to 900mA, so that could play a factor.
May I know what host you are trying to attach the accelerators to?

nvidia-smi command could not communicate with the NVIDIA driver on a Microsoft Azure DSVM

Right after creating and starting up a Data Science Virtual Machine and connecting through SSH, I tried to use nvidia-smi to see if the built-in NVIDIA driver and CUDA were working properly. The returned message read
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA
driver. Make sure that the latest NVIDIA driver is installed and
running.
These were supposed to be part of the VM, yet when I tried to run the program I created, my local computer's default CPU was used instead of the VM's GPU. The ultimate goal of my project is to run an object detection model with the performance sped up from my lousy 11 sec/image, so I figured I would use a VM and take advantage of its computing power. Yet it seems like this may not be the best option, so if anyone else has some advice there, I would appreciate it.
The issue you are seeing is because you are using a D-series VM. Only the N-series VMs have GPUs. So in order to utilize a GPU you need to select one of the following sizes:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu
For this size family, the vCPU (core) quota in your subscription is initially set to 0 in each region. You will need to request a vCPU quota increase for this family in an available region.
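Once the VM has been recreated or resized to an N-series size and nvidia-smi can talk to the driver, a quick sanity check from code (a minimal sketch, assuming the CUDA toolkit on the DSVM image is installed) is to ask the CUDA runtime how many devices it can see:

// Minimal sketch: confirm the CUDA runtime can see the VM's GPU(s).
// Build with something like: nvcc devcount.cu -o devcount
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices visible: %d\n", n);

    // Print the name of each device the runtime reports.
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess)
            printf("  device %d: %s\n", i, prop.name);
    }
    return 0;
}

If this still reports zero devices on an N-series size, the driver setup on the VM is the next thing to check.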

What exactly determines what’s in the radiotap header when capturing on WLAN?

I'm doing a study project on wifi signal quality. What I want to do is use Raspberry Pis to monitor as many metrics as possible at the packet level. I want to do this by putting wifi adapters into monitor mode (using airmon-ng) and then capturing data about the packets using a wireless network protocol analyzer, like tshark.
From what I understand of wireless networks, you mainly have three parts. First, a frame part that has the same information independent of what you're capturing on, which contains things such as frame number, frame length and arrival time. (Want to upload images but don't have 10 reputation yet...)
Then there is the IEEE 802.11 data, which contains the necessary stuff for the network to work. When capturing on WLAN this contains the MAC addresses.
And then we have the radiotap header, which contains all kinds of information (signal strength in dB and dBm, noise level, signal quality, TX value, and much more). This one is a bit different, since this information is actually filled in or injected by the wifi adapter you use to capture the data.
In the present flags you can find which values are actually being injected by the wifi adapter. Now my problem is that for my research I really need as many values as possible. I've been working for hours, but I didn't succeed in finding a way to capture anything more than dBm signal strength (if even that is available). So this is what I have tried so far:
The adapters I have used so far are the Edimax EW7811UN, the AirPcap Classic, the AirPcap Tx and two similar Alfa adapters with the Atheros AR9271 chipset. The AR9271 adapters worked out of the box on Raspbian (Debian for Raspberry Pi) with the ath9k_htc driver. Putting them into monitor mode and capturing works fine, but only dBm signal strength is given (as in the screenshots above) in the capture. The Edimax was working out of the box on the 8192cu driver, however it clearly doesn't support monitor mode. I could put it into monitor mode when booting it on the zd1211rw driver, but that didn't even give the dBm signal strength. The strange thing, however, is that a friend tried the exact same Edimax adapter and he could capture, and the only difference we could find is that his lsmod says rtl8192cu and not 8192cu. Strangely, forums say that 8192cu is the newer version; however, this friend had the newest Arch Linux kernel installed (newer than the Raspbian one). So I installed Arch Linux on the Pi, but I still wasn't able to put the Edimax on the 8192cu driver in monitor mode. Then I found a package in the AUR repos, dkms-8192cu, which was supposed to have a patched version. However, after installing it, it still didn't work. Downloading the driver from the Realtek website didn't work either. There is some material on patching on the aircrack-ng website, but it is actually about injection of frames and doesn't really look like what I need.
Then I bought the AirPcap Classic and the AirPcap Tx to see what they could do. First of all, they have zero Linux support, so that is already a big drawback since I need to use them from the Pis. However, even in Windows the AirPcaps only capture dB and dBm noise and signal quality. They do receive some data for the dBm noise level, but it's worthless since it always sits at -100. The AirPcap Classic and Tx have the ZD1211B chipset, so I can boot them on the zd1211rw driver, but this also gives no dBm signal value or anything else.
So my question is: what exactly determines what's in the radiotap header? I guess it would all be in the driver, but I need to be sure before I write off every ath9k_htc-based adapter. I'm about to purchase another adapter which runs on the carl9170 driver, however I can't find any guarantee anywhere that it will give me those values. What I did find in the literature is that the madwifi driver gives (or was giving) noise levels; however, it was acquired by Atheros, so the project stopped and all websites suggest just using the ath9k or ath5k drivers. I tried to install it but failed, because it seems to be really outdated software since the project stopped.
It would be a really big help if someone could explain to me what exactly determines what's inside the radiotap headers, and also if someone could share any experience of capturing more than only dBm signal strength values from Linux.

How to view the output of OutputDebugString() across the network?

Further to my previous question, I find that I cannot use the GExpertsDebugWindow on a PC which did not previously have Delphi installed.
If I have the following (not unusual, so probably of interest to others) requirements, do I need to roll my own code or is there an existing and free solution?
Must be able to read across the network (i.e., PC 1 monitors PC 2's debug output) by specifying PC 2's IP address
If possible, I would like to be able to filter by process name
Thanks in advance for any help
Microsoft's DebugView tool has those features. It can display OutputDebugString output, even from remote systems. Depending on other factors, it can even install itself remotely.
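For reference, DebugView captures whatever is sent through the Win32 OutputDebugString API; a minimal sketch in C (the same call is exposed to Delphi via the Windows/Winapi.Windows unit) looks like this:

// Minimal sketch: emit a message that DebugView (locally, or from a remote
// DebugView session pointed at this machine) can capture.
#include <windows.h>

int main(void)
{
    OutputDebugString(TEXT("Hello from my application, visible in DebugView\n"));
    return 0;
}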

Understanding the Android emulator: Testing images? Network connectivity dependencies?

To better clarify my generic question:
I have gotten the Android emulator to work by running a full "make full-eng" build, as per the Google documentation. However, I wanted to debug it, so once I ran the emulator, and called "$ adb shell dmesg" and routed that to an output text file, I found a couple of strange lines:
...
<4>goldfish_new_pdev goldfish_interrupt_controller at ff000000 irq -1
<4>goldfish_new_pdev goldfish_device_bus at ff001000 irq 1
<4>goldfish_new_pdev goldfish_timer at ff003000 irq 3
<4>goldfish_new_pdev goldfish_rtc at ff01000
So when you run the full Android build, it gives you Goldfish as the system image? I want to know if it's testing the things I want for the Galaxy Nexus. The kernel was a modified maguro kernel (OMAP project) for the Galaxy Nexus that I put into the build tree. But the platform I want to be testing is IceCreamSandwich. Is the emulator testing this platform? (Because the output in this log is leading me to believe it isn't.) Or is the emulator testing a "generic" image?
Also, an important further question: I modified the kernel's "socket.h" file to override the INET protocol with an undefined protocol (FINS). In theory the phone should boot up, but with NO internet access. Does the phone emulator care what you do to the internet protocols? Does it use your host computer's networking capabilities?
One further follow-up: which processes/system services/events of the phone (those involved in booting to a stable state) DEPEND on the internet protocols of the traditional underlying network stack? (Protocols here being those used to set up the network sockets.)
At the time I wrote the question I did not understand a few things, and I think I've learned a little while messing with the emulator at the "kernel level". First of all, the emulator tests the "goldfish" kernel (Linux version 2.6.29, ARM architecture) of a "generic" phone brand. It's almost as if the emulator is a type of phone in and of itself, and you cannot mix these image kernels. For example, I tried building a Nexus S (crespo) phone image with the goldfish kernel (so, in other words, no crespo kernel) and the phone just "hangs" at the Google splash screen (at least it's not a boot loop).
My research (FINS) worked on this emulator, but did not work on any of the three platforms supported on actual hardware: Nexus S, Galaxy Nexus, and Motorola Xoom. I am not sure why, given that Google does not seem to give users the ability to debug at the lowest level of a phone (I'm sure the actual developers use such tools in building and testing these phones). This leads to one major issue which answers my last follow-up: the Android Debug Bridge depends upon the INET protocol. My emulator boots up successfully and runs as I want (no internet, because there is no INET), but the actual phones do NOT. My hypothesis is that if INET is overridden with a protocol that is empty (in this case FINS, which intends to deal with INET at the userspace level, but this appears to be too late for the phone system to be satisfied), the ADB daemon (perhaps classified as a type of system service) cannot work or be connected to, and the Android hardware will crash because of this. The emulator, I believe, is more flexible than a real phone, as its hardware is virtually represented and does not have the same limitations as physical hardware.
You can consult my wiki/documentation (part of my research team's larger site) of my struggle with the Android phone boot process for more details and my various attempts: http://finsframework.org/mediawiki/index.php/Alexander_G._Ororbia_II
If anyone ever figures out how to get a working boot log from a Nexus S, Galaxy Nexus, or Motorola Xoom that gets stuck in a "boot loop" (without ADB), please let me know, as I will be working on this problem for a while to come (and I will update my other Stack Overflow Android questions to reflect this correction). Any corrections to my understanding would also be appreciated.
NOTE: This answer is editable, as I still think there is some way of getting the phone to produce boot logs on the host machine without the ADB daemon.
