I'm looking at buying a new gaming PC on which to do some XNA development. Before I buy a machine with SLI graphics cards, does anyone know if XNA has any problems taking advantage of an SLI setup?
XNA is abstracted away from the hardware. I don't believe you can write code specific to certain cards (and if you can, you shouldn't). If you had that kind of code, you wouldn't be able to move the project over to an Xbox 360. I believe you would want to go the DirectX route if you plan on using card-specific features.
That being said, if you have a working SLI setup, you would surpass any minimum requirements.
The only consideration you need to make with XNA is the shader profile you code to ... that is, if you write custom shaders at all :-) Aside from that, as long as the video card is DX9+ compatible, you should usually be fine.
Edit: According to the list of Supported Operating Systems and Hardware for XNA Game Studio 3.1:
To run XNA Framework games on a computer running a Windows operating system, you need a graphics card that supports, at a minimum, Shader Model 1.1, and DirectX 9.0c. We recommend using a graphics card that supports Shader Model 2.0 because some samples and starter kits may require it.
Furthermore, according to NVIDIA's marketing material, the following cards are SLI-capable and support up to Shader Model 3.0:
NVIDIA GeForce 7900 GPUs
NVIDIA GeForce 7800 GTX
NVIDIA GeForce 7800 GT
NVIDIA GeForce 7600 GPUs
NVIDIA GeForce 6800 Ultra
NVIDIA GeForce 6800 GT
NVIDIA GeForce 6800 GS
NVIDIA GeForce 6800
NVIDIA GeForce 6800 XT
NVIDIA GeForce 6600 GT
NVIDIA GeForce 6600
NVIDIA GeForce 6600 LE
I've been tormented by this question for a long time, so I'm asking for advice on which direction to move in. The objective is to develop a universal YOLO application on Windows that can use the computing power of an AMD/Nvidia/Intel GPU or an AMD/Intel CPU (one of the devices will be used). As far as I know, the OpenCV DNN module is the leader in CPU computation; I planned a DNN + CUDA combination for Nvidia graphics cards and a DNN + OpenCL combination for Intel GPUs. But while testing an AMD RX 580 GPU with DNN + OpenCL, I ran into the following problem: https://github.com/opencv/opencv/issues/17656. Does this module not support AMD GPU computing at all? If so, could you please let me know on what platform this is possible, preferably as efficiently as possible? A possible solution might be Tencent's ncnn, but I'm not sure of its performance on the desktop. By output I mean the coordinates of detected objects and their names (in the OpenCV DNN module I got them with cv::dnn::Net::forward()). Also, correct me if I'm wrong somewhere. Any feedback would be appreciated.
I tried the OpenCV DNN + OpenCL module and expected high performance, but this combination does not work.
I believe OpenCV doesn't support AMD GPUs for DNN acceleration. If you're interested in running DL models on non-Nvidia GPUs, I suggest looking into PlaidML, YOLO-OpenCL, and DeepCL.
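For reference, the backend/target combinations the question describes look roughly like this in OpenCV's Python DNN API. This is a sketch only: the config/weights/image paths are placeholders, and the OpenCL target is the one that hits the AMD issue linked above.

```python
import cv2

# Placeholder model files, not the asker's actual paths.
net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")

# NVIDIA GPU (requires an OpenCV build with CUDA support):
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# OpenCL target (works on Intel GPUs; AMD is where the linked issue appears):
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)

# Plain CPU fallback:
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

blob = cv2.dnn.blobFromImage(cv2.imread("frame.jpg"), 1 / 255.0, (416, 416), swapRB=True)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())  # detections to post-process
```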
I need to build a standalone module which records video from a Raspberry Pi camera to an SD card (through an external module) when motion is detected in the video.
So, I need to run OpenCV, which I will use for motion detection. Is it possible to run it on board a Raspberry Pi Pico? How many FPS would it get for, e.g., background subtraction?
The RPi Pico uses an RP2040.
RP2040 is a dual-core ARM Cortex-M0+. It comes with "264kB on-chip SRAM". You shouldn't expect this to have any power that's useful for image processing. It doesn't even run Linux. Were those 264 kB fully available to you, you could fit a single grayscale image of size 593x445 in there.
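A quick back-of-the-envelope check of that figure (Python; assuming 1 byte per grayscale pixel, kB = 1000 bytes, and no other allocations):

```python
# Rough capacity estimate for the RP2040's 264 kB of SRAM.
sram_bytes = 264 * 1000               # 264 kB, using kB = 1000 bytes
width, height = 593, 445
print(width * height)                 # 263885 bytes for one grayscale frame
print(width * height <= sram_bytes)   # True: it just barely fits, with nothing left over
```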
OpenCV can target ARM but not such tiny microcontrollers.
Here are some evaluations by OpenCV itself: https://opencv.org/arm/
You should investigate "OpenVX".
Is there any noticeable difference in TensorFlow performance if using Quadro GPUs vs GeForce GPUs?
E.g., does it use double-precision operations or something else that would cause a performance drop on GeForce cards?
I am about to buy a GPU for TensorFlow, and wanted to know if a GeForce would be OK. Thanks, and I appreciate your help.
I think GeForce TITAN is great and is widely used in Machine Learning (ML). In ML, single precision is enough in most cases.
More detail on the performance of the GTX line (currently GeForce 10) can be found in Wikipedia, here.
Other sources around the web support this claim. Here is a quote from doc-ok in 2013 (permalink).
For comparison, an “entry-level” $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance-equivalent to a GeForce GTX 680 I could find was a Quadro 6000 for a whopping $3660.
Specific to ML, including deep learning, there is a Kaggle forum discussion dedicated to this subject (Dec 2014, permalink), which goes over comparisons between the Quadro, GeForce, and Tesla series:
Quadro GPUs aren't for scientific computation, Tesla GPUs are. Quadro cards are designed for accelerating CAD, so they won't help you to train neural nets. They can probably be used for that purpose just fine, but it's a waste of money.

Tesla cards are for scientific computation, but they tend to be pretty expensive. The good news is that many of the features offered by Tesla cards over GeForce cards are not necessary to train neural networks. For example, Tesla cards usually have ECC memory, which is nice to have but not a requirement. They also have much better support for double precision computations, but single precision is plenty for neural network training, and they perform about the same as GeForce cards for that.

One useful feature of Tesla cards is that they tend to have a lot more RAM than comparable GeForce cards. More RAM is always welcome if you're planning to train bigger models (or use RAM-intensive computations like FFT-based convolutions).

If you're choosing between Quadro and GeForce, definitely pick GeForce. If you're choosing between Tesla and GeForce, pick GeForce, unless you have a lot of money and could really use the extra RAM.
NOTE: Be careful what platform you are working on and what its default precision is. For example, here in the CUDA forums (August 2016), one developer owns two Titan X's (GeForce series) and doesn't see a performance gain in any of their R or Python scripts. This is diagnosed as a result of R defaulting to double precision, which performs worse on the new GPUs than on their CPU (a Xeon processor). Tesla GPUs are cited as the best performers for double precision. In this case, converting all numbers to float32 increases performance from 12.437s with nvBLAS to 0.324s with gmatrix+float32s on one TITAN X (see first benchmark). Quoting from this forum discussion:
Double precision performance of Titan X is pretty low.
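To see the same effect in TensorFlow terms (the forum example above used R with nvBLAS/gmatrix), here is a rough microbenchmark sketch comparing float32 and float64 matrix multiplication on whatever GPU is visible. Exact timings will vary widely by card; on consumer GeForce parts the float64 run is typically much slower.

```python
import time
import tensorflow as tf

def time_matmul(dtype, n=4096, repeats=10):
    """Average time of an n x n matrix multiplication in the given dtype."""
    a = tf.random.normal((n, n), dtype=dtype)
    b = tf.random.normal((n, n), dtype=dtype)
    tf.matmul(a, b)  # warm-up (kernel selection, memory allocation)
    start = time.perf_counter()
    for _ in range(repeats):
        c = tf.matmul(a, b)
    _ = c.numpy()  # force the queued work to finish before stopping the clock
    return (time.perf_counter() - start) / repeats

print("GPUs visible:", tf.config.list_physical_devices("GPU"))
print("float32:", time_matmul(tf.float32), "s per matmul")
print("float64:", time_matmul(tf.float64), "s per matmul")
```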
As said here, OpenCV uses IPP, which itself can use the GPU:
It turned out that OpenCV was using IPP and IPP itself can use GPU nowadays.
Just in case someone else googles for "opencv gpu slower" and didn't know about the IPP GPU support ;)
Also, I found this:
Optimizing an Augmented Reality Pipeline using Intel® IPP Asynchronous
Using Intel® GPUs to Optimize the Performance and Power Consumption of Total Immersion's D'Fusion* Augmented Reality Pipeline
And none of these keywords appears anywhere: OpenCL, OpenACC, CUDA, nVidia, ...
The only GPU-related keyword present is OpenGL.
Does this mean that Intel IPP supports only Intel GPUs? Or does Intel IPP support any GPU (nVidia GeForce, AMD Radeon) that supports OpenGL?
Intel IPP doesn't support GPUs. It was a kind of "preview" product - the Intel 8.0 Preview - which has since been discontinued.
Intel integrated graphics is supported only through OpenCL. Intel IPP is focused on CPUs only.
Regards!
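For what it's worth, in current OpenCV the split looks like this from Python: IPP-style optimizations are CPU-side switches, while GPU acceleration goes through OpenCL. This is a sketch using the current bindings, not the discontinued IPP Asynchronous preview discussed above.

```python
import cv2

# CPU-side optimized code paths (which include IPP where available) are a global toggle:
print("Optimized CPU code in use:", cv2.useOptimized())
cv2.setUseOptimized(True)

# GPU acceleration inside OpenCV itself goes through OpenCL, not IPP:
print("OpenCL available:", cv2.ocl.haveOpenCL())
cv2.ocl.setUseOpenCL(True)
print("OpenCL enabled:", cv2.ocl.useOpenCL())
```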
I was the author of the posting you linked...
I didn't try to find out IPP's capabilities because it wasn't my project but a colleague's...
Googling turns up an "IPP Asynchronous" library and config stuff like HPP_ACCEL_TYPE_GPU_VIA_DX9. It looks like there are options to use OpenCL and DX9, but no guarantees from my side that they are supported or that this list is complete...
I am using OpenCV 2.4.3 and my graphics card is ATI, but I keep reading that CUDA is Nvidia-only. Does this mean I can't use the GPU functions as long as I have an ATI graphics card?
Indeed, CUDA technology is exclusive to NVIDIA devices, so ATI video cards don't support it.
However, OpenCV 2.4.3 was the first version to support OpenCL. There have been a considerable number of changes to the ocl module since it was first released, so I suggest you upgrade to a more recent version.
You might be able to enjoy OpenCV's GPU processing if your ATI video card supports OpenCL.
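As a rough sketch of what that looks like in a recent OpenCV (3.x/4.x) from Python: operations on cv2.UMat are dispatched through OpenCL when a supported device (including AMD/ATI cards with OpenCL drivers) is available, and silently fall back to the CPU otherwise. "frame.jpg" is a placeholder input.

```python
import cv2

cv2.ocl.setUseOpenCL(True)
print("OpenCL available:", cv2.ocl.haveOpenCL())

# Upload the image into a UMat; subsequent operations may run on the OpenCL device.
img = cv2.UMat(cv2.imread("frame.jpg"))       # "frame.jpg" is a placeholder
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # OpenCL path if a device is available
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

result = blurred.get()  # download back into a regular numpy array
print(result.shape)
```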