How to determine an execution device for the OpenCV transparent API

I have code that I wrote using UMat in OpenCV 3.1.
I have several devices on my system: an NVIDIA Tesla K20 GPU and Intel HD Graphics 4600. I'd like to run my UMat OpenCL code on the Intel HD Graphics and, on a different thread, run my CUDA code on the NVIDIA device.
How can I determine which device the UMat code executes on?

You can select the desired device via the OPENCV_OPENCL_DEVICE environment variable. Here are some examples:
OPENCV_OPENCL_DEVICE=NVIDIA:GPU
OPENCV_OPENCL_DEVICE=AMD:GPU
OPENCV_OPENCL_DEVICE=AMD:Pitcairn
(If you have several AMD devices, Pitcairn is the device name.)
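In your case, something like OPENCV_OPENCL_DEVICE=Intel:GPU should point the transparent API at the Intel HD Graphics while your CUDA code keeps using the Tesla (the exact platform string depends on your OpenCL driver). To verify which device UMat operations actually ended up on, you can query cv::ocl at run time; a minimal sketch for OpenCV 3.x, assuming it was built with OpenCL support:

    #include <opencv2/core.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <iostream>

    int main()
    {
        if (!cv::ocl::haveOpenCL())
        {
            std::cout << "OpenCL is not available" << std::endl;
            return 1;
        }
        cv::ocl::setUseOpenCL(true);

        // The device that UMat operations will run on in this thread
        cv::ocl::Device dev = cv::ocl::Device::getDefault();
        std::cout << "OpenCL device: " << dev.name()
                  << " (vendor: " << dev.vendorName() << ")" << std::endl;
        return 0;
    }

Note that OPENCV_OPENCL_DEVICE is read when OpenCV initializes its OpenCL context, so set it before the first UMat operation.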

Related

UE5 lumen lighting on Mac

I own an M1 Mac with Apple's in-house chip. I'm also a beginner in Unreal Engine 5. I was following some very basic tutorials on lighting and Lumen. However, Lumen doesn't seem to work for me. After setting the global illumination method to Lumen, nothing seems to happen or change. Is there a fix for this? Or is it just the fact that I'm using a Mac and Lumen isn't supported on it?
The recommended system specs are a GeForce RTX 2080 / AMD Radeon 5700 XT or higher.
I'm on a 14" M1 MacBook Pro and Lumen works great. This is a completely dynamic scene with Lumen dynamic global illumination and no baked lighting: a red cube in a white room with a single point light.
Try using Parallels and check on Windows 10 ARM.

Can we run DIGITS or Caffe on a Mac without a GPU?

I have seen the Caffe installation instructions for Mac, but I have a question. If my Mac does not have a GPU, do I have no chance of using a GPU and have to use CPU-only mode?
Or do I have a chance of using a (virtual!) GPU via the NVIDIA web driver?
Moreover, can I get DIGITS on my Mac? When I try to download it, there are no options for a Mac download, just Ubuntu!
I am very confused about these questions. Can you please clarify them for me?
The difference in architecture between CPUs and GPUs does not allow a simple transformation of code written for one into code for the other. GPU drivers are specifically written for the GPU architecture and cannot be easily virtualized. On the other hand, some software supports both. This includes OpenGL and Caffe (http://caffe.berkeleyvision.org/). NVIDIA DIGITS is based on Caffe and therefore can work without a dedicated GPU (here is the thread on how to install it on Macs: https://github.com/NVIDIA/DIGITS/issues/88).
According to https://www.github.com/NVIDIA/DIGITS/issues/251, CUDA cannot be run on computers that do not have a dedicated NVIDIA GPU, but according to "How to run my CUDA application on ATI or Intel card in software mode?" there is a program, gpuocelot, that receives CUDA instructions and can run them on NVIDIA GPUs, AMD GPUs, and x86.
In scientific shared computing, separate programs are written for different devices; for example, Einstein@Home has four separate programs to search for gravitational waves: CPU, NVIDIA GPU (CUDA), AMD GPU, and ARM.
To make DIGITS work you need to build Caffe with CPU_ONLY and tell DIGITS not to use any GPUs by running digits-devserver with the --config flag (https://github.com/NVIDIA/caffe/blob/v0.13.2/Makefile.config.example#L9-L10, https://github.com/NVIDIA/DIGITS/issues/251).
Another possibility: you can still use the --config flag with the web installer. Try this: ./runme.sh --config, and choose "N" to select none.
Also a possibility, to answer how you can choose between CPU and GPUs: within the caffe folder there is a Makefile.config.example file. Copy the contents of this file into a new file named "Makefile.config". If you want to use the CPU, then
1. comment out the line USE_CUDNN := 1 in the "Makefile.config" file,
2. uncomment the line CPU_ONLY := 1,
3. issue the make all command again within the caffe folder.
And if nothing helps, you can go through the procedure a second time; that worked for someone at the end of the thread.

How portable is OpenCV GPU code?

I have written a DLL in C++ which uses OpenCV. It is called from LabVIEW. I found I can easily move it to other systems and use it with LabVIEW just by including the necessary OpenCV DLLs in the folder of the actual DLL.
If I wrote a DLL that uses the OpenCV GPU capability on the first computer, could I transfer it as easily or would I need to recompile OpenCV for that particular system?
The compute capability of different GPUs differs. When you build OpenCV with CUDA, you build it for a range of compute capabilities and particular GPU architectures. As long as the other machine has a GPU of a supported architecture, your code will work fine; if it differs from the build configuration, you will get OpenCV GPU errors.
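You can check this at run time: cv::cuda::DeviceInfo::isCompatible() reports whether the OpenCV CUDA module you shipped was built for the compute capability of the GPU it is running on. A minimal sketch, assuming OpenCV 3.x built with CUDA (in 2.x the equivalent calls live in the cv::gpu namespace):

    #include <opencv2/core/cuda.hpp>
    #include <iostream>

    int main()
    {
        // 0 if there is no device or the build has no CUDA support
        int n = cv::cuda::getCudaEnabledDeviceCount();
        std::cout << "CUDA devices visible to OpenCV: " << n << std::endl;

        for (int i = 0; i < n; ++i)
        {
            cv::cuda::DeviceInfo info(i);
            std::cout << info.name()
                      << "  compute capability " << info.majorVersion() << "." << info.minorVersion()
                      << "  compatible with this build: " << (info.isCompatible() ? "yes" : "no")
                      << std::endl;
        }
        return 0;
    }

Running something like this when your DLL loads gives a clearer error than waiting for the first GPU routine to fail on the target machine.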

Point Grey Bumblebee2 firewire 1394 with Nvidia Jetson TK1 board

I have successfully interfaced a Point Grey Bumblebee2 FireWire 1394 camera with an NVIDIA Jetson TK1 board; I get video using Coriander, and the Video4Linux loopback device is working as well. But when I try to access the camera with OpenCV and Coriander at the same time, I get conflicts. When I close Coriander and access the video from the camera, I can get the video, but in that case I am not able to change the mode and format of the video. Can anyone help me resolve this problem? Can I change the video mode of the camera from OpenCV?
You will have to install the FlyCapture SDK for ARM if you want to do it manually (by code). I don't believe the FlyCap UI software works on ARM, let alone Ubuntu 14.04; it only supports Ubuntu 12.04 x86. If you have access, what I usually do is plug the camera into my Windows machine and use the FlyCap software to change its configuration.
I found this question completely randomly, but coincidentally I am trying to interface the Bumblebee2 with the Jetson right now as well. Would you care to share which FireWire mini-PCIe card you used and how you went about any configuration (stock or Grinch kernel, which L4T version)?
Also, although not fully complete, you can view a code example of how to interface with the camera using the FlyCapture SDK here: https://github.com/ros-drivers/pointgrey_camera_driver. It is a ROS driver, but you can just reference the PointGreyCamera.cpp file for examples if you're not using ROS.
Hope this helps
This is not well advertised, but Point Grey does not support FireWire on ARM (page 4):
Before installing FlyCapture, you must have the following prerequisites: ... A Point Grey USB 3.0 camera (Blackfly, Grasshopper3, or Flea3)
Other Point Grey imaging cameras (FireWire, GigE, or Camera Link) are NOT supported
However, as you have seen, it is possible to use the camera (e.g. in Coriander) via the standard FireWire tools.
libdc1394 or the videography library should do what you need.
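If you want to set the mode from your own code rather than from Coriander, you can talk to the camera through libdc1394 directly and only hand the frames to OpenCV afterwards. A minimal sketch using the libdc1394 v2 API; the mode and frame rate below are placeholders, so query the camera for the modes the Bumblebee2 actually supports:

    #include <dc1394/dc1394.h>
    #include <cstdio>

    int main()
    {
        dc1394_t* d = dc1394_new();
        dc1394camera_list_t* list = nullptr;
        if (dc1394_camera_enumerate(d, &list) != DC1394_SUCCESS || list->num == 0)
        {
            std::fprintf(stderr, "no IEEE 1394 camera found\n");
            dc1394_free(d);
            return 1;
        }

        dc1394camera_t* cam = dc1394_camera_new(d, list->ids[0].guid);
        dc1394_camera_free_list(list);

        // Pick a video mode and frame rate before starting capture;
        // dc1394_video_get_supported_modes() lists what the camera offers.
        dc1394_video_set_mode(cam, DC1394_VIDEO_MODE_640x480_MONO8);
        dc1394_video_set_framerate(cam, DC1394_FRAMERATE_15);

        dc1394_capture_setup(cam, 4, DC1394_CAPTURE_FLAGS_DEFAULT);
        dc1394_video_set_transmission(cam, DC1394_ON);

        // ... grab frames with dc1394_capture_dequeue() and wrap the buffer in a cv::Mat ...

        dc1394_video_set_transmission(cam, DC1394_OFF);
        dc1394_capture_stop(cam);
        dc1394_camera_free(cam);
        dc1394_free(d);
        return 0;
    }

Because only one process can own the camera at a time, this also explains the conflict you saw when OpenCV and Coriander were running simultaneously.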

x86 Windows 8 Tablet running Pro can't access Integrated Cameras with Emgu OpenCV

I can't get Emgu CV to work with the cameras on the new Atom-based Windows 8 x86 tablets. These are not the ARM-based tablets running Windows RT; these are running full-blown Windows 8 Pro on x86 Atom CPUs. I've tried working code on a release version of the Samsung XE500T1C (Ativ?) and on a pre-release version of the HP ElitePad 900.
Emgu CV tells me: "Error: Unable to create capture from camera 0". The problem is probably related to the fact that the new Atom chipset is handling some of the camera functions. I've attached a screenshot of the device manager with the offending cameras highlighted.
Under Imaging devices we have:
Intel(R) Imaging Signal Processor 2300
Under System devices we have:
Camera Sensor OV2720
Camera Sensor OV8830
Flash LM3554
I've searched the Internet high and low and can't find anything useful. I've contacted HP and they're contacting their engineering team. I tried Intel and the best I got was this: http://software.intel.com/en-us/articles/sample-windows-store-app-for-camera-picture-taking which is actually for Windows Store apps, although it does work.
Does anyone have any ideas? Needless to say, I'm in a bind. One other thing: Emgu CV works fine on my Samsung Slate Series 7 running Windows 8 Pro. It also runs fine in a 32-bit Windows 8 Pro VM. It just appears to be these new tablets with the new Atoms and the Intel Imaging Signal Processor 2300.
Thanks everyone!
Hal
I rolled Emgu back to 2.1.1 from December of 2010. This appears to work fine with the new Intel Clover Trail Atoms.
FYI: I also tried the 2.4.9 Alpha dated 2013-01-14, but that would not work.
