In a SUSE machine with multiple GPUs, is it possible to quickly and programmatically tell which GPU (or GPUs) are rendering displays?
The goal is to automatically detect a card eligible/available for use in debugging.
(One cannot use cuda-gdb on a card that is rendering a display, and guessing is... inelegant.)
Non-programmatically, you can use the NVIDIA control panel to determine which GPU is connected to, and/or rendering, which display. If you have a proper NVIDIA Linux driver loaded for your GPUs, you should be able to run nvidia-settings at a terminal to launch the control panel.
Programmatically, it's a bit more complicated, because you have to define what you mean (programmatically) by "the display". But as an example, if you have only one display (so there is no confusion about which one you have in mind), you can use the API that nvidia-settings is built on (NVCtrl) to get at the information programmatically.
And with CUDA 5.5, you can use cuda-gdb on a GPU that is rendering a display, but it requires a cc 3.5 or better GPU and some extra setup.
I suppose another approach (possibly the simplest, programmatically) would be to use the NVML function nvmlDeviceGetDisplayMode.
NVML is the API that the nvidia-smi utility is built on, so you can manually query the display mode of devices that way as well.
Since you've edited to indicate a programmatic approach, the first method I would recommend is the NVML one. If you have no other selection criteria, simply cycle through the GPUs until you find one for which the display mode is disabled. If you want to be sure that a particular GPU has its display mode disabled, exclude it from your X configuration for your specific distro (e.g. on many Linux distros, make sure it is not referenced in xorg.conf).
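For illustration, here is a minimal sketch of that NVML approach in C: it initializes NVML, walks the devices, and prints whether each one is driving a display. Link against the NVML library (e.g. -lnvidia-ml); error handling is kept to a minimum, and note that NVML's device ordering is not guaranteed to match the CUDA runtime's device ordering.

```c
/* Minimal sketch: list GPUs via NVML and report their display mode.
 * Build example: gcc display_check.c -o display_check -lnvidia-ml */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int count, i;

    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "failed to initialize NVML\n");
        return 1;
    }
    if (nvmlDeviceGetCount(&count) != NVML_SUCCESS) {
        nvmlShutdown();
        return 1;
    }
    for (i = 0; i < count; i++) {
        nvmlDevice_t dev;
        nvmlEnableState_t display;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE] = "unknown";

        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
            continue;
        nvmlDeviceGetName(dev, name, sizeof(name));
        if (nvmlDeviceGetDisplayMode(dev, &display) == NVML_SUCCESS)
            printf("GPU %u (%s): display %s\n", i, name,
                   display == NVML_FEATURE_ENABLED ? "enabled" : "disabled");
    }
    nvmlShutdown();
    return 0;
}
```

A GPU that reports its display mode as disabled is a candidate for debugging under the pre-CUDA-5.5 restriction described above.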
Update:
Based on the downvotes I got, I decided to explain why I am asking this question.
I am writing a C# application that uses the default printer.
The problem is that when printing on Windows 10 through my application, the default printer is not the one I expect.
After some research, I found that this is a new Windows 10 feature: the last printer used becomes the default one. However, it is possible to turn this feature off.
Now, back to the original post:
I have Windows 10 installed and I am trying to turn off the "Let Windows manage my default printer" but I cannot find this option.
According to this answer, I tried to turn it off through regedit, but the LegacyDefaultPrinterMode value is not there either.
Any idea why I cannot find the option to turn it off?
Is there any other way to turn this flag off?
How to disable the automatic default printer management:
1. Use the Windows key + I keyboard shortcut to open Settings.
2. Navigate to Devices, then go to Printers & scanners, and disable "Let Windows manage my default printer".
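If you prefer to flip the setting programmatically rather than through the Settings app, the LegacyDefaultPrinterMode value mentioned in the question can be written directly. Below is a minimal sketch using the Win32 registry API; the key path and the meaning of the value (1 = Windows does not manage the default printer) follow the commonly cited registry workaround and should be verified on your own system before relying on them.

```c
/* Sketch: write the per-user LegacyDefaultPrinterMode value so Windows 10
 * stops switching the default printer automatically.
 * The key path and value semantics (1 = legacy behaviour, the user manages
 * the default printer) are assumptions taken from commonly cited
 * documentation of this setting. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    HKEY key;
    DWORD value = 1; /* 1 = the user manages the default printer */
    LONG rc = RegCreateKeyExW(HKEY_CURRENT_USER,
                              L"Software\\Microsoft\\Windows NT\\CurrentVersion\\Windows",
                              0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegCreateKeyExW failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExW(key, L"LegacyDefaultPrinterMode", 0, REG_DWORD,
                        (const BYTE *)&value, sizeof(value));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```

The same value can of course be set from the C# application itself (for example via the Microsoft.Win32.Registry class) if you prefer to keep everything in managed code.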
I need to run a 16-bit application on 64-bit Windows without virtualization or XP Mode.
Do you have a solution?
My application has a user interface and needs to print and to access the disk.
It's an old monster: we don't have the source code (Delphi), and it's very specific (custom-built).
I am thinking about some sort of encapsulation, or a "translator" between the OS and the binary.
Any ideas?
"I need to run a 16-bit application on a 64-bit Windows system without virtualization."
That is not possible. The only way to run this application on such a system is via a virtualized environment of one form or another.
You wonder about some form of translator or adapter, but that is of course exactly what virtualization is. A 64-bit system cannot run a 16-bit process natively, so you need a virtualized environment in order to run it.
I have a .NET Micro Framework app that is failing to write bytes to a microSD card. When I take the card out of the device and look at it on my PC using a microSD-to-SD adapter, the PC cannot write to it either: the card appears to be locked. I'm trying to work out whether the lock is due to a faulty adapter (the switch on the side of the adapter is set to the unlocked position) or to the state of the microSD card itself.
MicroSD cards have no visible way of locking and unlocking them, but is there any setting in the card itself that locks it?
I have tried searching but most threads I can find (e.g. this one and this one, to choose two SO ones) talk about the adapter. Is there locking in the microSD specification?
Duskwuff gave an answer on the SuperUser SE, pointing out that:
"most computer-based SD card adapters are unable to execute arbitrary commands on an SD card" but that there are commands "available to (and used by) embedded devices"
Commands such as CMD27 (PROGRAM_CSD) "can be used to set bits which control temporary or even permanent write protection" and CMD42 (LOCK_UNLOCK) "can even be used to turn on and off password-based read protection".
There are more details about these commands in Appendix C.1, "SD Mode Command List", of Part E1 of the SDIO Simplified Specification.
We just received the stable version of CUDA 5. There are some new terms like Kepler, better-performing MPI support, and the ability to run up to 32 applications on the same card at the same time. I am a bit confused, though, and am looking for answers to these questions:
Which cards and compute capabilities are required to fully utilize CUDA 5's features?
Are the new features, like GPUDirect, Dynamic Parallelism, and Hyper-Q, only available for the Kepler architecture?
If we have Fermi-architecture cards, what are the benefits of using CUDA 5? Does it bring benefits other than the ability to use Nsight on Linux with Eclipse? I think the most important feature is the ability to build libraries?
Did you see any performance improvements just by moving from CUDA 4 to CUDA 5? (I got some speed-ups on Linux machines.)
I found some documents such as:
http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/Kepler_Compatibility_Guide.pdf
http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
http://blog.cuvilib.com/2012/03/28/nvidia-cuda-kepler-vs-fermi-architecture/
However, a better, short description would make things clearer.
PS: Please do not limit the answer to the questions above. I might be missing some similar questions.
Compute capability 3.5 (GK110, for example) is required for dynamic parallelism because earlier GPUs do not have the hardware required for threads to launch kernels or directly inject other API calls into the hardware command queue.
Compute capability 3.5 is required for Hyper-Q.
SHFL (warp shuffle) intrinsics require CC 3.0 (GK104, for example).
Device code linking, Nsight Eclipse Edition, nvprof, and the performance improvements and bug fixes in CUDA 5 benefit Fermi and earlier GPUs as well.
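As a quick programmatic check, a small host-side sketch using the CUDA runtime API can print each device's compute capability and, from that, whether it meets the thresholds listed above (CC 3.0 for the shuffle intrinsics, CC 3.5 for dynamic parallelism and Hyper-Q). This is just an illustrative helper, not part of any particular toolkit sample.

```c
/* Sketch: report compute capability and which Kepler-era features it allows.
 * Build example: nvcc cc_check.cu -o cc_check */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess)
            continue;
        int cc = prop.major * 10 + prop.minor;
        printf("Device %d: %s (CC %d.%d)\n", i, prop.name, prop.major, prop.minor);
        printf("  __shfl() intrinsics:             %s\n", cc >= 30 ? "yes" : "no");
        printf("  dynamic parallelism and Hyper-Q: %s\n", cc >= 35 ? "yes" : "no");
    }
    return 0;
}
```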
I have a program that reads about a million rows and groups them; the client computer is not stressed at all: no more than 5% CPU usage, and the network card is used at about 10% or less.
If I run four copies of the program on the same client machine, usage grows at the same rate; with the four programs running, I get about 20% CPU usage and about 40% network usage. That makes me think I could improve performance by using threads to read the information from the database, but I don't want to introduce that complexity if a configuration change could achieve the same thing.
Client: Windows 7, CSDK 3.50.TC7
Server: AIX 5.3, IBM Informix Dynamic Server Version 11.50.FC3
There are a few tweaks you can try, most notably setting the fetch buffer size. The environment variable FET_BUF_SIZE can be set to a value such as 32767. This may help you get closer to saturating the client and the network.
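For illustration, here is a minimal C sketch of setting FET_BUF_SIZE from inside the client process. It assumes the variable is placed in the environment before the Informix client library opens its connection; the value 32767 and the placeholder connection step are just examples.

```c
/* Sketch: make sure FET_BUF_SIZE is in the environment before the first
 * database connection is opened, so the Informix client library (CSDK)
 * can pick it up.  The value and the connection step are placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* On Windows (CSDK 3.50.TC7) use _putenv; on Unix clients use setenv(). */
    if (_putenv("FET_BUF_SIZE=32767") != 0) {
        fprintf(stderr, "failed to set FET_BUF_SIZE\n");
        return 1;
    }

    /* ... connect to the server and run the query as usual (ESQL/C, ODBC, ...) ... */

    return 0;
}
```

Setting the variable in the environment of the shell that launches the program works just as well; the key point is simply that it must be present in the client process's environment.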
Multiple threads sharing a single connection will not help. Multiple threads using multiple connections might help - they'd each be running a separate query, of course.
If the client program is grouping the rows, we have to ask "why?". It is generally best to leave the server (DBMS) to do that. That said, if the server is compute bound and the client PC is wallowing in idle cycles, it may make sense to do the grunt work on the client instead of the server. Just make sure you minimize the data to be relayed over the network.