How portable is OpenCV GPU code?

I have written a DLL in C++ which uses OpenCV. It is called from LabVIEW. I found I can easily move it to other systems and use it with LabVIEW by simply including the necessary OpenCV DLLs in the same folder as my own DLL.
If I wrote a DLL that uses the OpenCV GPU capability on the first computer, could I transfer it as easily, or would I need to recompile OpenCV for that particular system?

Compute capability differs from GPU to GPU. When you build OpenCV with CUDA, you build it for a particular range of compute capabilities and GPU architectures. As long as the other machine has a GPU covered by that build configuration, your code will work fine; if its architecture falls outside the build configuration, you will get OpenCV GPU errors at runtime.
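A quick way to check this on a target machine is to ask OpenCV itself whether the detected device matches the binaries. The sketch below uses the OpenCV 2.4-era gpu module (in OpenCV 3.x and later the header and namespace moved to opencv2/core/cuda.hpp and cv::cuda); isCompatible() reports whether the device's compute capability is covered by the build:

#include <opencv2/gpu/gpu.hpp>
#include <iostream>

int main() {
    // Zero means either no CUDA device, or OpenCV was built without CUDA.
    int count = cv::gpu::getCudaEnabledDeviceCount();
    if (count == 0) {
        std::cout << "No CUDA-enabled device / no CUDA support in this build\n";
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cv::gpu::DeviceInfo info(i);
        std::cout << "Device " << i << ": " << info.name()
                  << " (compute " << info.majorVersion() << "."
                  << info.minorVersion() << ") "
                  << (info.isCompatible() ? "is" : "is NOT")
                  << " compatible with this OpenCV build" << std::endl;
    }
    return 0;
}

Running a check like this when your DLL loads lets you fail with a clear message instead of an obscure runtime GPU error.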

Related

Can we run DIGITS or Caffe on a Mac without a GPU?

I have seen the Caffe installation instructions for Mac, but I have a question. If my Mac does not have a GPU, do I have no chance of using GPU mode, and do I have to use CPU-only?
Or do I have the chance of using a (virtual!) GPU via the NVIDIA web driver?
Moreover, can I have DIGITS on my Mac? When I try to download it, there are no options for a Mac download, just for Ubuntu!
I am very confused about these questions! Can you please clarify them for me?
The difference in architecture between a CPU and a GPU does not allow simple translation of code written for one to the other. GPU drivers are written specifically for the GPU architecture and cannot easily be virtualized. On the other hand, some software supports both; this includes OpenGL and Caffe (http://caffe.berkeleyvision.org/). NVIDIA DIGITS is based on Caffe and can therefore work without a dedicated GPU (here is the thread on how to install it on Macs: https://github.com/NVIDIA/DIGITS/issues/88).
According to https://www.github.com/NVIDIA/DIGITS/issues/251, CUDA cannot run on computers that do not have a dedicated NVIDIA GPU, but according to "How to run my CUDA application on ATI or Intel card in software mode?" there is a program, gpuocelot, that accepts CUDA instructions and can run on NVIDIA GPUs, AMD GPUs, and x86 CPUs.
In scientific shared computing, separate programs are written for different devices; e.g. Einstein@Home has four separate programs to search for gravitational waves: CPU, NVIDIA GPU (CUDA), AMD GPU, and ARM.
To make DIGITS work you need to build Caffe with CPU_ONLY and tell DIGITS not to use any GPUs by running digits-devserver with the --config flag (https://github.com/NVIDIA/caffe/blob/v0.13.2/Makefile.config.example#L9-L10, https://github.com/NVIDIA/DIGITS/issues/251).
Other possibility: you can still use the --config flag with the web installer. Try this:
./runme.sh --config
Choose "N" to select none.
Also a possibility:
I am trying to answer how you can choose CPU or GPU. Within the caffe folder there is a Makefile.config.example file. Copy the contents of this file into a new file and rename it "Makefile.config". If you want to use the CPU, then:
1. comment out USE_CUDNN := 1 in the "Makefile.config" file,
2. uncomment CPU_ONLY := 1,
3. issue the make all command again within the caffe folder.
And if nothing helps, you can run through the procedure a second time; that worked for someone at the end of the thread.
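As a complement to the build-time CPU_ONLY switch above, here is a minimal sketch of forcing CPU mode at run time through Caffe's C++ API (the model file names are placeholders, and exact method names vary slightly across Caffe versions):

#include <caffe/caffe.hpp>

int main() {
    // Force Caffe onto the CPU; in a CPU_ONLY build this is the only
    // valid mode anyway.
    caffe::Caffe::set_mode(caffe::Caffe::CPU);

    // Load a network definition and trained weights (placeholder names).
    caffe::Net<float> net("deploy.prototxt", caffe::TEST);
    net.CopyTrainedLayersFrom("model.caffemodel");

    // The forward pass now executes entirely on the CPU.
    net.Forward();
    return 0;
}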

OpenCV code in Code Composer Studio (CCStudio)

I'm using CCStudio v5 to implement a vision system and want to use OpenCV functions in my code, but I don't know whether it is possible to use OpenCV in CCStudio or not!
How can I import the OpenCV library into my CCStudio project? Does this depend on my hardware?
There is no official release of OpenCV for systems without an OS. The OpenCV library is available for the Windows, Linux, macOS, Android, and iOS operating systems.
Here you can find a link which explains the challenges of getting OpenCV running on microcontrollers.

OpenCV 2.4.3 prebuilt binaries seem not to use TBB/IPP

I am using OpenCV 2.4.3. I just downloaded it from their site and used the build they provide; I did not want the headache of building it from source myself. Anyway, on my machine the Haar classifier is very slow at detecting faces. On another machine my friend runs it fine (he built from source with TBB and IPP support enabled in CMake).
Though in the release notes they say: "You do not need TBB anymore on MacOSX, iOS and Windows. BTW, the binary package for Windows is now built without TBB support. Libraries and DLLs for Visual Studio 2010 use the Concurrency framework."
I do not know much about TBB and IPP. The only thing I understand is that enabling them makes multi-threading and parallelism possible, which should speed up my program.
Do I need to compile the source with CMake, TBB, IPP and so on, or is there something else that I am missing? Any ideas?
What they are saying is that the pre-built binaries are compiled in a way that does not need TBB, because they use another concurrency framework. So if you don't want to meddle with the library's settings, you can use the pre-built version without sacrificing performance. But that applies to Windows, iOS and MacOS.
The performance might also depend on the machine's parameters (you know, cascades are power-hungry), so if your friend has a stronger machine he will probably get better results, and on the OS you are running; I cannot tell you which is best, as I have only tried OpenCV on Linux.
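If you want to see what your prebuilt binaries were actually compiled with, cv::getBuildInformation() (available since OpenCV 2.4) dumps the build configuration; look for the parallel framework and IPP entries:

#include <opencv2/core/core.hpp>
#include <iostream>

int main() {
    // Prints the compile-time configuration of the OpenCV libraries you
    // link against, including whether TBB/Concurrency and IPP were enabled.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}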

Cross compiling for several systems at once

Question: How do I write a single makefile that compiles for several different systems, environments, and sets of libraries at once?
Info:
I'm a student, and as such most of my work is done for the sake of learning how these things work. Right now I'm building a game engine from the ground up. I'd like to make it cross-platform in terms of OS, but also across environments. My target environments are both 32-bit and 64-bit (my desktop as well as my netbook), with a graphics card and with Mesa, and Linux and Windows; so overall it should output 8 binaries.
I'm still very new to make, as well as to the whole concept of cross compiling. I imagine that the process of compiling more than one binary isn't hard, but where I'm kind of stuck is: how do I get it to link the right libraries? The Ubuntu Linux vs. the WinAPI libraries, 32-bit vs. 64-bit libraries, etc. Is it even possible to get libraries in such a manner?
If you need me to clarify further I can. Thanks.
Addendum: Basically I want to know how to compile against headers for drivers I may not have. For example, I want to compile all the files on my netbook, including the OpenCL ones; I don't want to run them, as my netbook has no GPU, I just want to compile. Conversely, I want to use my desktop to compile for my netbook, which uses Ocelot and Mesa for its GPU work, but my desktop does not have Mesa or Ocelot on it. That sort of thing. Thanks.
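One common approach to the addendum (a sketch, not from the thread): guard the driver-specific code with a preprocessor define that each makefile target sets or omits, so every machine only needs the vendor headers, not the hardware, to compile that configuration. The USE_OPENCL macro below is a made-up name:

#include <iostream>

#if defined(USE_OPENCL)
#include <CL/cl.h>  // only the header is needed to compile this path

static const char* backend_name() {
    cl_uint platforms = 0;
    // Ask how many OpenCL platforms exist; zero is fine on a machine
    // with no GPU, since we only need to compile, not run.
    clGetPlatformIDs(0, NULL, &platforms);
    return platforms > 0 ? "OpenCL" : "OpenCL (no platforms found)";
}
#else
static const char* backend_name() { return "software fallback"; }
#endif

int main() {
    std::cout << "Using backend: " << backend_name() << std::endl;
    return 0;
}

Each of the 8 makefile targets would then pass its own defines (e.g. -DUSE_OPENCL) and linker flags; you need the SDK headers and a stub or import library on the build machine, but the GPU itself is only required at run time.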

Vala: reducing the size of dependencies

I am developing small command-line utilities using Vala on win32. Programs compiled using Vala depend on the following DLLs:
libgobject-2.0-0.dll
libgthread-2.0-0.dll
libglib-2.0-0.dll
They take up 1500 kbytes of space. Is there a way to reduce the size of these dependencies (besides compressing them with UPX and the like)? I can't imagine a simple hello-world-like app using all the features provided by GLib.
Thanks!
If your Vala source is fairly simple, you may be able to compile it with the posix profile:
valac --profile posix hello.vala
Then your binary will not have any dependencies outside of the standard C library. However, the posix profile may still be experimental.
