I am trying to use OpenCV with the OpenCL target on an Ubuntu 16.04 system with Intel UHD 620 graphics. I have installed ocl-icd-opencl-dev for OpenCL, but cv::ocl::haveOpenCL() tells me that I do not have OpenCL.
clinfo gives me
Number of platforms 0
Then I tried installing beignet, as this answer proposes. cv::ocl::haveOpenCL() still tells me that I do not have OpenCL, and now clinfo says
Number of platforms 1
Platform Name Intel Gen OCL Driver
Platform Vendor Intel
Platform Version OpenCL 1.2 beignet 1.1.1
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_spir cl_khr_icd
Platform Extensions function suffix Intel
beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware
Can anybody help?
ocl-icd-opencl-dev contains the development files for the OCL-ICD loader. You'll need it if you want to develop (compile) against libOpenCL. If you don't want to develop, only run OpenCL programs, then you just need ocl-icd-libopencl1.
cv::ocl::haveOpenCL() tells me that I do not have OpenCL
ocl-icd is just a loader; you need an actual implementation. As explained on Khronos:
The OpenCL Installable Client Driver (ICD) is a mechanism to allow OpenCL implementations from multiple vendors to coexist on a system
Then I tried installing beignet
beignet is an OpenCL implementation, but it's too old for your GPU. You need either Intel's proprietary implementation or Intel NEO.
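If you want a concrete starting point: the ICD loader discovers implementations through the .icd files in /etc/OpenCL/vendors, so you can check what is registered, remove beignet, and install NEO instead. A rough sketch; on Ubuntu 16.04 the NEO packages usually have to come from the .deb files published on Intel's compute-runtime GitHub releases page, so the exact file names below are illustrative:

# See which OpenCL implementations the loader currently knows about
ls /etc/OpenCL/vendors

# Remove beignet, which does not support Gen9.5 GPUs like the UHD 620
sudo apt-get remove beignet-opencl-icd

# Install the NEO packages downloaded from
# https://github.com/intel/compute-runtime/releases
sudo dpkg -i intel-*.deb

# Verify that a platform and device are now reported
clinfo | grep -i "platform name"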
Related
I am trying to venture into accelerating my Fortran 2003 programs with OpenACC directives on my Ubuntu 18.04 workstation with an Nvidia GeForce RTX 2070 card. To that end, I have installed Nvidia HPC-SDK version 20.7, which comes with the compilers I need (Fortran 2003 from the Portland Group and Nvidia, both version 20.7-0) as well as profilers (nvprof and Nvidia Nsight Systems 2020.3.1).
After a few post-installation glitches, and owing mostly to the help of Robert Crovella (https://stackoverflow.com/users/1695960/robert-crovella) and Mat Colgrove (https://stackoverflow.com/users/3204484/mat-colgrove), I managed to get things going, which made me very happy.
My workflow looks like this:
Compile my program:
pgfortran -acc -Minfo=accel -o my_program ./my_program.f90
I run it through the profiler:
nsys profile ./my_program
And then I import the result into nsight-sys with File -> Open and choose report1.qdrep.
I believe this to be a proper workflow. However, while opening the report file, nsight-sys gives me the warning: "OpenACC injection initialization failed. Is the PGI runtime version greater than 15.7?" That's quite unfortunate, because I use OpenACC to accelerate my programs.
I am not quite sure what the PGI runtime is, nor do I know how to check it or change it. I assume it is something to do with the Portland Group (compiler), but I use the suite of compilers shipped with Nvidia's HPC-SDK, so I wouldn't expect incompatibilities with the profiler tools shipped in the same package.
Is it an option, or possible at all, to update the PGI runtime thing?
Any advice, please?
Cheers
Same answer as your previous post: there's a known issue with Nsight-Systems version 2020.3 which may sometimes cause an injection error when profiling OpenACC. I've been told that this was fixed in version 2020.4, hence the workaround would be to download and install 2020.4, or to use a prior release.
https://developer.nvidia.com/nsight-systems
Version 2020.3 is what we shipped with the NVHPC 20.7 SDK. I'm not sure we have enough time to update to 2020.4 in our upcoming 20.9 release, but if not, we'll bundle it in a later release.
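As a quick sanity check of which release is currently on your path (assuming nsys was installed as part of the SDK):

which nsys
nsys --version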
Thanks Mat,
In the meanwhile I managed to have everything running. I did as follows:
First I installed the CUDA toolkit, version 11.1 to be precise, which came with the latest driver for my Nvidia RTX 2070 card. It needed a reboot, but that's OK. For the CUDA toolkit to work, I had to set LD_LIBRARY_PATH to its libraries.
Then I installed the Nvidia HPC-SDK, which I needed for the Fortran 2003 compiler.
The HPC-SDK is built for CUDA version 11.0 and comes with its own libraries; LD_LIBRARY_PATH is supposed to point to those libraries, which are different from the CUDA toolkit's.
But I kept LD_LIBRARY_PATH pointing to the CUDA toolkit's libraries, and the compilers and profilers work in perfect harmony :-)
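For reference, the relevant line in my shell setup looks roughly like this (the install path is illustrative; adjust it to wherever the toolkit actually lives on your system):

export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH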
Thanks again, you and Robert helped me big time to get things running.
I'm deploying an application in a Docker container that requires CUDA 10. This is necessary to run some of the underlying PyTorch functionality that the application uses.
However, the host server is running Docker CE 17 and nvidia-docker v1.0 with CUDA version 9, and I will not be able to upgrade the host.
I'm under the impression that I'm handcuffed to the v1 nvidia-docker runtime and the CUDA version available on the host.
Is there a way to run CUDA 10 on the container so I can leverage the functionality of this toolkit?
In the general case, any specific CUDA version will require a minimum GPU driver version. That is covered in places like here and here (table 1). So to use CUDA 9.0 you would need at least a GPU driver version that supports CUDA 9.0, such as a R384 driver. To use CUDA 10.0 you would need at least a GPU driver version that supports CUDA 10.0, such as a R410 driver.
The usage of containers doesn't fundamentally change this. If you want to use a container that has CUDA 10 code in it, your base machine needs a driver that supports CUDA 10.
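If you are unsure what the base machine has, nvidia-smi reports the installed driver version (and, on R410 and newer drivers, the top line of its default output also shows the highest CUDA version that driver supports):

nvidia-smi --query-gpu=driver_version --format=csv,noheader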
NVIDIA did start publishing compatibility libraries that relax the above statements. These compatibility libraries are available, but not installed by default, with a CUDA toolkit install. These compatibility libraries only work in certain cases, and they have certain requirements to be usable. The compatibility libraries are documented here.
One of the specific requirements for use of these compatibility libraries is that the GPU(s) in use must be Tesla-brand GPUs. GeForce, Quadro, Jetson, and Titan family GPUs are not supported by these compatibility libraries.
Furthermore, the libraries only work with certain combinations of CUDA toolkit versions and GPU driver versions installed on the base machine. This "compatibility matrix" is documented here (Table 3). Only the specific combinations of CUDA toolkit versions with installed driver versions are usable for compatibility. To pick one example: if you wish to use CUDA 10.0, and your base machine has a Tesla GPU with an R396 driver installed, there is no compatibility support. In the same setup, however, if you wish to use CUDA 10.1, there is compatibility support for that.
If you have satisfied the requirements for compatibility usage, then the remaining step would be to install the compatibility libraries (or build your container from a base container that has the compatibility libraries already installed).
For a package-manager CUDA install, the way to install the compatibility libraries is simple (example on Ubuntu, installing the CUDA 10.1 compatibility package to match a CUDA 10.1 toolkit install):
sudo apt-get install cuda-compat-10-1
Make sure to match the version to the CUDA toolkit version that you are using (that you installed with the package manager method, or that was already installed in your container).
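If your container does not already have them, the compatibility package can be layered in when building the image. With an nvidia/cuda base image (which already has NVIDIA's apt repository configured), the install step is the same apt-get command run inside the container; the CUDA 10.1 package name below matches the example above:

# In the image build (or an interactive container), matching CUDA 10.1:
apt-get update && apt-get install -y --no-install-recommends cuda-compat-10-1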
This compatibility "path" only began in the CUDA 9.0 timeframe. Systems that are equipped with drivers that predate CUDA 9.0 will not be usable in any way for this compatibility path. There are also various functional limitations and restrictions, which are covered in the documentation.
When this "compatibility path" is correctly installed and in use, the overall system configuration can "appear" to be violating the rules indicated at the top of this answer. For example a CUDA 10.1 application could possibly be running on a machine that had only a R396 driver installed.
For the specific question in view here, OP eventually indicated that the base machine had a Quadro GPU, so this "compatibility path" does not apply, and the only way to run e.g. a CUDA 10.0 container would be if a CUDA 10.0-capable driver is installed in the base machine, e.g. R410 or later driver.
I am trying to run OpenCV 3.1.0 built with the CUDA GPU Toolkit 7.5.
My graphics card is an Nvidia Quadro FX 5800, driver version 341.92 (the latest available version for this card).
Nvidia classifies my graphics card in the legacy category, with compute capability 1.3.
I keep getting the error in the title and can understand the driver mismatch.
I updated to the latest driver for the graphics card.
My question is: what version of the CUDA toolkit should I build OpenCV with that would also be compatible with the VS 2013 C++ environment? I tried building it with CUDA toolkit 6.0 and it's not compatible with VS 2013.
Sticky situation; any advice would be appreciated.
This was fixed by building OpenCV with compute capability 1.3. Don't let CMake choose it automatically: CUDA_ARCH_PTX was set to 1.3 (the compute capability of my legacy graphics card).
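For anyone hitting the same thing, a minimal sketch of the corresponding CMake invocation (CUDA_ARCH_BIN and CUDA_ARCH_PTX are the standard OpenCV CUDA options; run it from your build directory against the OpenCV source tree):

cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="1.3" -D CUDA_ARCH_PTX="1.3" ..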
In the post OpenCV 2.4.3rc and CUDA 4.2: "OpenCV Error: No GPU support", it is said that the C:\opencv\build\gpu\x86... libs must be added instead of the C:\opencv\build\x86... ones. But there is no gpu folder in the 2.4.4 release. I added the opencv_gpu244.lib file for release mode and opencv_gpu244d.lib for debug mode in the VS 2010 configuration; they reside in C:\opencv\build\x64\vc10\lib. But I get the OpenCV error (no GPU support): the library is compiled without CUDA support. By the way, I'm using CUDA toolkit 5.0.
The procedure described in the given answer still applies to the current distribution of OpenCV. There is just one small difference: the pre-built distribution of OpenCV 2.4.4 does not contain GPU binaries. To add GPU support, you have to build the library yourself using CMake.
OpenCV 2.4.4 is optimized for Kepler-architecture GPUs. In version 2.4.3, the GPU binaries alone are approximately 1.4 GB, so you can guess that adding code for compute capabilities 3.0 and 3.5 would make this even larger. It is therefore not feasible to ship these binaries, and that is why the gpu folder is not present in the 2.4.4 prebuilt distribution.
You should compile the OpenCV libraries using CMake with CUDA support (there is a checkbox for it in cmake-gui). Earlier releases included pre-compiled GPU files.
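On the command line, the equivalent of ticking that checkbox is the WITH_CUDA option (the path placeholder below stands in for your OpenCV source tree; on Windows you can set the same option in cmake-gui):

cmake -D WITH_CUDA=ON <path-to-opencv-source>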
The Accelerate framework is a Mac-specific framework that provides things like image convolutions and LAPACK, supposedly optimized to be as fast as possible on Macs. My question: Does OpenCV take advantage of this? Specifically, does the function "filter2D" use Accelerate?
It does not use the Accelerate framework, but it looks like it has been sped up using the CUDA support in 2.2.
The relevant files in OpenCV 2.2 ...
/modules/gpu/include/opencv2/gpu/gpu.hpp
/modules/gpu/src/filtering.cpp
and
modules/imgproc/src/filter.cpp
for the non-gpu stuff
Not a Mac expert, but AFAIK OpenCV uses IPP (if installed), TBB (build option), and Nvidia CUDA (build option).
If you use the MacPorts version, you can specify the options
$ port variants opencv
opencv has the variants:
debug: Enable debug binaries
python26: Add Python 2.6 bindings
* conflicts with python27
python27: Add Python 2.7 bindings
* conflicts with python26
tbb: Use Intel TBB
universal: Build for multiple architectures
I have used
sudo port install py26-numpy
sudo port install opencv +python26 +tbb
with success. Concerning Accelerate.framework specifically, this blog entry says "# Add Accelerate.framework which is used internally from OpenCV library.", but I have no clue whether that is the case here.
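One way to check against your own install is to inspect what the OpenCV libraries actually link to (the path below is the standard MacPorts location, and the library name assumes the OpenCV 2.2-style modular naming; adjust both for other installs):

otool -L /opt/local/lib/libopencv_imgproc.dylib | grep -i accelerate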