TensorRT installation - NVIDIA

I am trying to install TensorRT on Windows 11. I have installed the required CUDA and cuDNN versions, but the installer reports that the system is not supported. Is TensorRT compatible with Windows 11?
I have tried every installation route I could find and failed.
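Not part of the original question, but a minimal sanity check along these lines, assuming the tensorrt Python wheel that ships with the TensorRT package was installed into the active environment, shows whether the bindings load and whether TensorRT can use the local CUDA driver/runtime:

# Minimal sanity check (sketch): verify the TensorRT Python bindings import
# and that a builder can be created on this machine. Assumes the tensorrt
# wheel shipped with the TensorRT download was installed in this environment.
import tensorrt as trt

print("TensorRT version:", trt.__version__)

# Creating a Builder fails early if the CUDA driver/runtime on the machine
# is not usable by this TensorRT build.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("Builder created; platform has fast FP16:", builder.platform_has_fast_fp16)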

Related

EasyOCR disables CUDA GPU when installed

Hello, after installing EasyOCR with pip install easyocr in the Anaconda Prompt (base), I receive this error:
OpenCV(4.5.4)
D:\a\opencv-python\opencv-python\opencv\modules\core\include\opencv2/core/private.cuda.hpp:106: error: (-216:No CUDA support) The library is compiled without CUDA
support in function 'throw_no_cuda'
Before installing EasyOCR, CUDA is active and working with my GPU, but it is disabled after installing EasyOCR.
I compiled my build via CMake using OpenCV 4.5.4 and opencv-contrib 4.5.4 with CUDA 11.3.1 and cuDNN 8.4.0, as well as upgrading to the most recent NumPy, 1.22.3.
It seems that installing EasyOCR replaces whatever OpenCV build I am using with "opencv-python-headless-4.5.4.60", and I believe that is where the issue lies.
This is the version of EasyOCR installed:
Downloading opencv_python_headless-4.5.4.60-cp39-cp39-win_amd64.whl (35.0 MB)
Successfully installed easyocr-1.4.2 opencv-python-headless-4.5.4.60 python-bidi-0.4.2
Any idea or help on how I can get EasyOCR working with CUDA and my GPU?
Solved: install EasyOCR first, then build OpenCV. That way the pip-installed opencv-python-headless package does not shadow the CUDA-enabled build.
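Not from the original thread: a quick diagnostic along these lines shows which cv2 module Python is actually importing and whether that build was compiled with CUDA, which is useful when a pip wheel may be shadowing a source build.

# Diagnostic sketch: confirm which cv2 package is being imported and whether
# it has CUDA support compiled in.
import cv2

print("cv2 imported from:", cv2.__file__)   # site-packages wheel vs. custom build
print("OpenCV version:", cv2.__version__)

# The build summary contains a "NVIDIA CUDA: YES/NO" line; the pip
# opencv-python-headless wheel reports NO, a CUDA-enabled source build YES.
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line:
        print(line.strip())

print("CUDA devices visible to OpenCV:", cv2.cuda.getCudaEnabledDeviceCount())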

getCudaEnabledDeviceCount() returning -1 : OpenCV [cuda] built with vcpkg

I am building an OpenCV CUDA app on Windows 10 with MSVC 2017, using the opencv[cuda] package installed by vcpkg. To check for usable devices, I call getCudaEnabledDeviceCount() and it returns -1, which OpenCV documents as meaning CUDA support is enabled but the CUDA driver is incompatible.
Re-installing opencv[cuda] with vcpkg did not help.
Can you suggest a way to diagnose or fix?
Answer: The machine is an old gaming laptop whose most recent recommended NVIDIA display driver is 425.31. I had recently installed CUDA toolkit 10.2, whose runtime is incompatible with that driver. Rolling back to toolkit 10.1 resolved the problem.
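Not from the original post, but the return value can be interpreted directly; this sketch uses the Python bindings, and the C++ cv::cuda::getCudaEnabledDeviceCount() returns the same codes.

# Interpreting getCudaEnabledDeviceCount() (sketch):
#   -1 -> built with CUDA, but the driver is missing or incompatible
#    0 -> OpenCV was built without CUDA support
#   >0 -> number of usable CUDA devices
import cv2

count = cv2.cuda.getCudaEnabledDeviceCount()
if count > 0:
    cv2.cuda.printShortCudaDeviceInfo(0)  # name, compute capability, driver/runtime versions
elif count == 0:
    print("This OpenCV build has no CUDA support")
else:
    print("CUDA build present, but the installed driver cannot run its CUDA runtime")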

Tiny yolo v4 dnn module opencv no detection

I have trained YOLOv4-tiny on Colab and the detection works well on Colab.
Then I tried to load the YOLOv4-tiny model this way in Visual Studio, integrated with Gazebo/ROS:
No error appears, but the detection fails (no object is detected; the output of the detection is a vector of NaN).
I'm using OpenCV version 4.2.0 and Python 2.7.17 in Visual Studio.
Any idea?
Try compiling OpenCV >= v4.5.0 from sources.
Compiling version 4.5.0 from sources solved the issue for me in Python 3 and I checked it also works in Python 2.7.
I initially got the same issue with YOLOv4-tiny and Python 3.7, both on Raspberry Pi 4 and Windows 7, with OpenCV installed via pip install opencv-contrib-python (which seems not to be available for Python 2.7?).
I tried different versions iteratively, got from pip or recompiled from sources (latest version available via pip on Raspbian was 4.1.0.25):
opencv-contrib-python==3.4.10.37: no detections (tested on Windows)
opencv-contrib-python==4.1.0.25: no detections (tested on Raspbian Buster and Windows)
opencv-contrib-python==4.2.0.34: no detections (tested on Windows)
opencv-contrib-python==4.3.0.38: no detections (tested on Windows)
opencv 4.4.0 compiled from sources: no detections (tested on Raspbian Buster)
opencv-contrib-python==4.4.0.40: ok (tested on Windows)
opencv-contrib-python==4.4.0.46: ok (tested on Windows)
opencv 4.5.0 compiled from sources: ok (tested on Raspbian Buster)
Versions from opencv-contrib-python==4.4.0.40 onward seemed to work, so the "next" version available on Raspbian at the time, 4.5.0, had to be compiled from sources.
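Not from the original answers: a minimal way to reproduce the check is to run the trained Darknet model through cv2.dnn and look for NaN in the raw outputs. The file names and the 0.25 confidence threshold below are placeholders.

# Sketch: run a Darknet YOLOv4-tiny model through cv2.dnn and inspect the output.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")  # placeholder paths
img = cv2.imread("test.jpg")                                                # placeholder image

blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each row is [cx, cy, w, h, objectness, class scores...]. On the affected
# OpenCV versions these rows come back as NaN for YOLOv4-tiny models.
for out in outputs:
    if np.isnan(out).any():
        print("NaN in dnn output -> use OpenCV >= 4.4.0.40 or 4.5.0")
        continue
    for det in out:
        scores = det[5:]
        confidence = det[4] * scores.max()
        if confidence > 0.25:  # placeholder threshold
            print("class", int(scores.argmax()), "confidence", float(confidence))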

Can I run a Docker container with CUDA 10 when host has CUDA 9?

I'm deploying an application in a Docker container that requires CUDA 10. This is necessary for some of the underlying PyTorch functionality the application uses.
However, the host server is running Docker CE 17 and nvidia-docker v1.0 with CUDA version 9, and I will not be able to upgrade the host.
I’m under the impression that I’m handcuffed to the v1 nvidia docker runtime and CUDA version available on the host.
Is there a way to run CUDA 10 on the container so I can leverage the functionality of this toolkit?
In the general case, any specific CUDA version will require a minimum GPU driver version. That is covered in places like here and here (table 1). So to use CUDA 9.0 you would need at least a GPU driver version that supports CUDA 9.0, such as a R384 driver. To use CUDA 10.0 you would need at least a GPU driver version that supports CUDA 10.0, such as a R410 driver.
The usage of containers doesn't fundamentally change this. If you want to use a container that has CUDA 10 code in it, your base machine needs a driver that supports CUDA 10.
NVIDIA did start publishing compatibility libraries that allow modifications to the above statements. These compatibility libraries are available but not installed by default with a CUDA toolkit install. These compatibility libraries only work in certain cases, and they have certain requirements to be usable. The compatibility libraries are documented here.
One of the specific requirements for use of these compatibility libraries is that the GPU(s) in use must be Tesla-brand GPUs. GeForce, Quadro, Jetson, and Titan family GPUs are not supported by these compatibility libraries.
Furthermore, the libraries only work with certain combinations of CUDA toolkit versions and GPU driver versions installed on the base machine. This "compatibility matrix" is documented here (Table 3). Only the specific combinations of CUDA toolkit versions with installed driver versions will be usable for compatibility. To pick one example, if you wish to use CUDA 10.0, and your base machine has a Tesla GPU with a R396 driver installed, there is no compatibility support. In the same setup, however, if you wish to use CUDA 10.1, there is compatibility support for that.
If you have satisfied the requirements for compatibility usage, then the remaining step would be to install the compatibility libraries (or build your container from a base container that has the compatibility libraries already installed).
For a package manager CUDA install method, the method to install the compatibility libraries is simple (example on Ubuntu, installing the CUDA 10.1 compatibility to match CUDA 10.1 toolkit install):
sudo apt-get install cuda-compat-10.1
Make sure to match the version to the CUDA toolkit version that you are using (that you installed with the package manager method, or that was already installed in your container).
This compatibility "path" only began in the CUDA 9.0 timeframe. Systems that are equipped with drivers that predate CUDA 9.0 will not be usable in any way for this compatibility path. There are also various functional limitations and restrictions, which are covered in the documentation.
When this "compatibility path" is correctly installed and in use, the overall system configuration can "appear" to be violating the rules indicated at the top of this answer. For example a CUDA 10.1 application could possibly be running on a machine that had only a R396 driver installed.
For the specific question in view here, OP eventually indicated that the base machine had a Quadro GPU, so this "compatibility path" does not apply, and the only way to run e.g. a CUDA 10.0 container would be if a CUDA 10.0-capable driver is installed in the base machine, e.g. R410 or later driver.
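Not part of the original answer: once a container is running, a quick PyTorch check along these lines shows whether the CUDA runtime shipped in the container can actually work with the host's driver.

# In-container sketch: can the container's CUDA runtime talk to the host driver?
import torch

print("PyTorch built against CUDA runtime:", torch.version.cuda)
print("CUDA usable through the host driver:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # A tiny kernel launch fails fast if the driver is too old for this runtime.
    x = torch.ones(4, device="cuda")
    print("Kernel launch OK, sum =", (x + x).sum().item())
else:
    print("Host driver is too old for this container's CUDA runtime, "
          "or the NVIDIA container runtime is not exposing the GPU.")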

CUDA driver version is insufficient for CUDA runtime version - OpenCV - GPU Toolkit

I am trying to run OpenCV 3.1.0 built with the CUDA 7.5 toolkit.
My graphics card is an NVIDIA Quadro FX 5800, driver version 341.92 (the latest available for this card).
NVIDIA classifies my graphics card in the legacy category, with compute capability 1.3.
I keep getting the error in the title and can understand the driver mismatch.
I updated to the latest driver for the graphics card.
My question is: what version of the CUDA toolkit should I build OpenCV with that would also be compatible with the VS 2013 C++ environment? I tried building it with CUDA toolkit 6.0 and it is not compatible with VS 2013.
Sticky situation; any advice would be appreciated.
This was fixed by building OpenCV for compute capability 1.3. Don't let CMake choose it automatically; CUDA_ARCH_PTX was set to 1.3 (the compute capability of my legacy graphics card).
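Not from the original answer: after rebuilding, the architectures the binaries were actually compiled for can be confirmed from the build summary. This sketch assumes the rebuilt OpenCV's Python bindings are importable; the same text is printed by cv::getBuildInformation() in C++.

# Sketch: confirm which CUDA architectures the rebuilt OpenCV targets.
import cv2

for line in cv2.getBuildInformation().splitlines():
    # With CUDA enabled, the summary contains lines such as
    # "NVIDIA CUDA: YES (ver 7.5)", "NVIDIA GPU arch: 13", and "NVIDIA PTX archs:".
    if "NVIDIA" in line or "CUDA" in line:
        print(line.strip())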
