Running OpenCV on an AMD processor

First off: there is a similar question, but it is ten years old, and obviously hardware has changed in that time span ;) Is OpenCV 2.0 optimized for AMD processors?
My intention is to wipe Windows 10 on an AMD machine (Ryzen 7 3750H CPU) and install Ubuntu 18.04 LTS. Will OpenCV work, i.e., will it compile and run? What should I expect in terms of performance versus a somewhat older Core i7?
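For what it's worth, OpenCV builds on AMD CPUs like any other x86-64 target; its SIMD optimizations key off instruction sets (SSE4.2, AVX2), not the CPU vendor, and Ryzen supports them. A minimal build sketch on Ubuntu 18.04 (package names and the CPU_BASELINE value are illustrative, not the only valid choice):

```shell
# Install build prerequisites (Ubuntu 18.04 package names)
sudo apt-get install -y build-essential cmake git libgtk-3-dev

# Fetch and configure OpenCV; CPU_BASELINE selects the SIMD level
# to compile for, which a Ryzen 7 3750H supports regardless of vendor
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCPU_BASELINE=AVX2 ..
make -j"$(nproc)"
```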

Related

NVIDIA Nsight warning: OpenACC injection initialization failed. Is the PGI runtime version greater than 15.7?

I am trying to venture into accelerating my Fortran 2003 programs with OpenACC directives on my Ubuntu 18.04 workstation with an Nvidia GeForce RTX 2070 card. To that end, I have installed Nvidia HPC-SDK version 20.7, which comes with the compilers I need (Fortran 2003 from the Portland Group and Nvidia, both version 20.7-0) as well as profilers (nvprof and Nvidia Nsight Systems 2020.3.1).
After a few post-installation glitches, and owing mostly to help from Robert Crovella (https://stackoverflow.com/users/1695960/robert-crovella) and Mat Colgrove (https://stackoverflow.com/users/3204484/mat-colgrove), I managed to get things going, which made me very happy.
My workflow looks like this:
Compile my program:
pgfortran -acc -Minfo=accel -o my_program ./my_program.f90
Run it through the profiler:
nsys profile ./my_program
And then import into nsight-sys via File -> Open, choosing report1.qdrep.
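The steps above can be collected into a single script (assuming pgfortran and nsys from HPC-SDK 20.7 are on PATH; report1.qdrep is nsys's default output name, and nsight-sys can typically be given a report file directly instead of using the File -> Open dialog):

```shell
#!/bin/sh
# Compile with OpenACC offloading; -Minfo=accel prints what was accelerated
pgfortran -acc -Minfo=accel -o my_program ./my_program.f90

# Profile the run; nsys writes report1.qdrep into the working directory
nsys profile ./my_program

# Open the report in the Nsight Systems GUI
nsight-sys report1.qdrep
```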
I believe this to be a proper workflow. However, while opening the report file, nsight-sys gives me the warning: "OpenACC injection initialization failed. Is the PGI runtime version greater than 15.7?" That's quite unfortunate, because I use OpenACC to accelerate my programs.
I am not quite sure what the PGI runtime is, nor how to check or change it. I assume it is something from the Portland Group (compiler), but I use the suite of compilers shipped with Nvidia's HPC-SDK, so I wouldn't expect incompatibilities with the profiler tools shipped in the same package.
Is it an option, or possible at all, to update this PGI runtime?
Any advice, please?
Cheers
Same answer as your previous post. There's a known issue with Nsight Systems version 2020.3 which can sometimes cause an injection error when profiling OpenACC. I've been told that this was fixed in version 2020.4, hence the workaround is to download and install 2020.4, or to use a prior release.
https://developer.nvidia.com/nsight-systems
Version 2020.3 is what we shipped with the NVHPC 20.7 SDK. I'm not sure we have enough time to update to 2020.4 in our upcoming 20.9 release, but if not, we'll bundle it in a later release.
Thanks Mat,
In the meanwhile I managed to have everything running. I did as follows:
First I installed the CUDA toolkit, which came with the latest driver for my Nvidia RTX 2070 card (11.1, to be precise). It needed a reboot, but that's OK. For the CUDA toolkit to work, I had to set LD_LIBRARY_PATH to its libraries.
Then I installed the Nvidia HPC-SDK, which I needed for the Fortran 2003 compiler.
The HPC-SDK is built for CUDA version 11.0 and comes with its own libraries, so LD_LIBRARY_PATH would normally point to those rather than to the CUDA toolkit's.
But I kept LD_LIBRARY_PATH pointing to the CUDA toolkit libraries, and the compilers and profilers work in perfect harmony :-)
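The environment setup described above might look like this in ~/.bashrc (the paths below assume the default install locations for CUDA 11.1 and HPC-SDK 20.7; adjust them to wherever your packages actually landed):

```shell
# CUDA toolkit runtime libraries (default install prefix)
export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH

# HPC-SDK compilers (pgfortran, nvfortran) and profilers on PATH
NVHPC=/opt/nvidia/hpc_sdk/Linux_x86_64/20.7
export PATH=$NVHPC/compilers/bin:$NVHPC/profilers/bin:$PATH
```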
Thanks again, you and Robert helped me big time to get things running.

Running armhf Debian 9 (Stretch) inside Docker on an arm64 Debian 9 system

I recently tried to compile a Debian package natively on a low-end ARM board with only 2 GB RAM using dpkg-buildpackage. The CMake build runs for quite some time, but it gets slower and slower until it breaks (after some hours) due to low memory. This is because the application's code is quite complex C++ that includes a lot of headers and statically links nearly everything. This unfortunately cannot be changed.
My intention is now to run the build on a large-scale cloud ARM server (96 cores, 128 GB RAM), but that runs arm64 Debian 9 (Stretch).
Is it possible to run a Debian 9 armhf system on some Debian 9 arm64 server?
If yes, how would it look like to set it up?
On the ARMv8 high-end servers (e.g., Cavium ThunderX), it is not possible to run armhf code, since these SoCs are AArch64-only, but QEMU does not seem to be aware of this. If you try to run a chroot with QEMU (e.g., with qemu-debootstrap), it also fails. I believe QEMU could be improved to detect this situation and emulate 32-bit ARM correctly, but obviously it doesn't.
On low-end ARMv8 SoCs (e.g., Cortex-A53) supporting both AArch32 and AArch64, I believe it should work out of the box. Examples of such single-board computers are the Raspberry Pi 3 and the Pine64.
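On a host that can execute AArch32 code (or with working qemu-user-static emulation), running an armhf Debian 9 userland under Docker can be sketched like this (the image name arm32v7/debian:stretch is Docker Hub's official 32-bit ARM variant; verify it is still available for your setup):

```shell
# Pull the 32-bit ARM (armhf) Debian 9 image
docker pull arm32v7/debian:stretch

# Confirm the container sees an armhf userland (should print "armhf")
docker run --rm arm32v7/debian:stretch dpkg --print-architecture

# Start an interactive build container with the source tree mounted,
# so dpkg-buildpackage can run inside the 32-bit environment
docker run --rm -it -v "$PWD":/src -w /src arm32v7/debian:stretch bash
```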

OpenCV 3.4 build with CUDA 9.1 on Windows - traincascade not use GPU

I built OpenCV 3.4 with CUDA, Intel TBB, and Intel MKL in VS 2015 like this.
When I run traincascade for classifier training, 100% of the CPU is used, but the GPU is not used at all (0%).
Does the OpenCV-traincascade use the functions of the library CUDA for calculations on the GPU?
No
https://devtalk.nvidia.com/default/topic/951477/jetson-tk1/are-tools-like-opencv_traincascade-gpu-accelerated-in-opencv4tegra-/
The traincascade tool is meant to be used as an offline tool to create a cascade detector; you should try using a powerful desktop system for training, and then use OpenCV4Tegra on the Jetson to run the trained classifier on the device.
There is a CUDA accelerated version of the cascade training tool available in the Ubuntu Desktop x64 version of the OpenCV4Tegra package, which can be downloaded here:
http://developer.nvidia.com/embedded/dlc/l4t-24-1-opencv4tegra-ubuntu
Which sums it up more eloquently than I could.
Also no; answered here.
In Summary
The opencv_traincascade functionality is not implemented with GPU code, for reasons I do not know. The tool, however, is meant to be run offline, and the results of this training are then used in your actual detection run-time code, which can be GPU-optimised.

Which would be best suited for Ruby on Rails Dev Projects - 32 bit or 64 bit Ubuntu 11.10

I have a 4GB Ram, 500 gb hdd with a 64 bit OS running on an intel core 2 duo 2.2 Ghz processor.
I just have one clarification: would all the tools (different editors, etc.) and software (like the RubyMine IDE) related to Ruby on Rails projects support/run on the 64-bit OS version, or should I go with the recommended 32-bit installation of Ubuntu 11.10 as given at http://www.ubuntu.com/download/ubuntu/download?
Also, is the 64-bit version only suited for AMD processors? (Just a doubt, as the download file name of this version has "amd" tagged to it.) Would it work with the aforementioned Intel processor?
Kindly suggest which bit version of the OS I should install. I'm planning to go ahead with the Desktop version.
Thank you for your inputs.
AFAIK there is no reason not to use the 64-bit OS. Take the OS that works best with your computer. I have one machine running 32-bit and another running 64-bit without any problems (both on 11.10).
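Regarding the "amd" in the file name: the architecture is called amd64 because AMD designed the x86-64 extension, but it runs on 64-bit Intel CPUs just as well. You can check what your machine supports with standard Linux commands:

```shell
# Kernel architecture: x86_64 means a 64-bit kernel, Intel or AMD alike
uname -m

# The 'lm' (long mode) CPU flag indicates 64-bit capability
grep -q ' lm ' /proc/cpuinfo && echo "CPU supports 64-bit" || echo "32-bit only CPU"
```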

CUDA x64 + openCV 2.1

The previous tutorials have not shown anybody else having this problem: compiling OpenCV and CUDA projects in VS2008 on Windows 7 x64. But I have been stuck on it for over a week.
I have zero problems building the OpenCV samples, my own code, and CUDA within their own projects. I cannot get them to build in a single project together, no matter what I try to do in VS.
Here's a good guide; I'm sure it will help you: How to Build OpenCV 2.2 with GPU (CUDA) on Windows 7
self solve
This is NOT possible in Windows, don't bother trying... I have since changed to Ubuntu with no problems.
It is possible when compiling OpenCV in x64 mode with WITH_CUDA checked in CMake. You also need the x64 CUDA Toolkit with the Nvidia Performance Primitives and the GPU Computing SDK.
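Concretely, a CMake configure line for such a build might look like the following (the generator name targets VS2008 x64 as in the question; WITH_CUDA and WITH_TBB are OpenCV's standard option names, but the exact set of flags you need may differ):

```shell
# Configure a 64-bit OpenCV build with CUDA enabled, from a build/
# subdirectory of the OpenCV source tree (Windows cmd line continuation)
cmake -G "Visual Studio 9 2008 Win64" ^
      -DWITH_CUDA=ON ^
      -DWITH_TBB=ON ^
      -DCMAKE_BUILD_TYPE=Release ..
```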