`RuntimeError: Failed to allocate tensors` When using retrained object detection model on Google Coral - object-detection-api

I am trying to deploy an object detection model on Google Coral. I trained the model with a config file that I tried to match closely to the demo config file from the Docker image described here.
Training succeeded, and I then ran the script ./convert_checkpoint_to_edgetpu_tflite.sh, which appeared to succeed with the following output:
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/models/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py:183: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-09-12 11:15:11.539092: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-12 11:15:11.707588: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x68382b0 executing computations on platform CUDA. Devices:
2019-09-12 11:15:11.707625: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2019-09-12 11:15:11.728473: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3298290000 Hz
2019-09-12 11:15:11.729431: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x68a1b90 executing computations on platform Host. Devices:
2019-09-12 11:15:11.729473: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-09-12 11:15:11.729783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:05:00.0
totalMemory: 10.73GiB freeMemory: 10.34GiB
2019-09-12 11:15:11.729823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-12 11:15:11.732474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-12 11:15:11.732509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-09-12 11:15:11.732523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-09-12 11:15:11.732730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10057 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-09-12 11:15:15.451695: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-12 11:15:15.451741: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-12 11:15:15.451748: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-09-12 11:15:15.451753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-09-12 11:15:15.451857: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10057 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py:232: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /media/wwang/WorkDir/projects/SANATA/.venv/lib/python3.5/site-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2019-09-12 11:15:17.880135: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying strip_unused_nodes
CONVERTING frozen graph to TF Lite file...
2019-09-12 11:15:19.959403: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-12 11:15:20.105331: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3f28f50 executing computations on platform CUDA. Devices:
2019-09-12 11:15:20.105370: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2019-09-12 11:15:20.124476: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3298290000 Hz
2019-09-12 11:15:20.125267: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3f92630 executing computations on platform Host. Devices:
2019-09-12 11:15:20.125297: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-09-12 11:15:20.125542: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.635
pciBusID: 0000:05:00.0
totalMemory: 10.73GiB freeMemory: 10.34GiB
2019-09-12 11:15:20.125569: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-12 11:15:20.127390: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-12 11:15:20.127411: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-09-12 11:15:20.127420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-09-12 11:15:20.127553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10057 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
TFLite graph generated at model_exported/output_tflite_graph.tflite
Then I ran edgetpu_compiler output_tflite_graph.tflite, which also appeared to succeed, with the following output:
Edge TPU Compiler version 2.0.258810407
INFO: Initialized TensorFlow Lite runtime.
Model compiled successfully in 383 ms.
Input model: model_exported/output_tflite_graph.tflite
Input size: 1.65MiB
Output model: output_tflite_graph_edgetpu.tflite
Output size: 2.33MiB
On-chip memory available for caching model parameters: 7.00MiB
On-chip memory used for caching model parameters: 2.11MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 115
Operation log: output_tflite_graph_edgetpu.log
Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 114
Number of operations that will run on CPU: 1
See the operation log file for individual operation details.
It also produced the following output_tflite_graph_edgetpu.log file:
Edge TPU Compiler version 2.0.258810407
Input: output_tflite_graph.tflite
Output: output_tflite_graph_edgetpu.tflite
Operator           Count  Status
DEPTHWISE_CONV_2D  33     Mapped to Edge TPU
RESHAPE            13     Mapped to Edge TPU
LOGISTIC           1      Mapped to Edge TPU
CUSTOM             1      Operation is working on an unsupported data type
ADD                10     Mapped to Edge TPU
CONCATENATION      2      Mapped to Edge TPU
CONV_2D            55     Mapped to Edge TPU
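The counts in the log can be tallied to confirm where the single CPU fallback comes from; the unsupported CUSTOM op is most likely the detection post-processing op that SSD exports leave on the CPU (an assumption, not stated in the log). A quick sketch using the rows above:

```python
# A small sketch (not part of the Edge TPU tooling) that tallies the
# compiler's operation log to spot CPU fallbacks.
def summarize_op_log(rows):
    """rows: (operator, count, status) tuples as printed by edgetpu_compiler."""
    on_tpu = sum(c for _, c, s in rows if s == "Mapped to Edge TPU")
    on_cpu = sum(c for _, c, s in rows if s != "Mapped to Edge TPU")
    return on_tpu, on_cpu

rows = [
    ("DEPTHWISE_CONV_2D", 33, "Mapped to Edge TPU"),
    ("RESHAPE", 13, "Mapped to Edge TPU"),
    ("LOGISTIC", 1, "Mapped to Edge TPU"),
    ("CUSTOM", 1, "Operation is working on an unsupported data type"),
    ("ADD", 10, "Mapped to Edge TPU"),
    ("CONCATENATION", 2, "Mapped to Edge TPU"),
    ("CONV_2D", 55, "Mapped to Edge TPU"),
]
print(summarize_op_log(rows))  # (114, 1)
```

The totals match the compiler's summary: 114 operations on the Edge TPU, 1 on the CPU.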
Finally, I put my converted output_tflite_graph_edgetpu.tflite on the Coral, and got the following error:
Traceback (most recent call last):
  File "main.py", line 224, in <module>
    main()
  File "main.py", line 221, in main
    run_app(add_render_gen_args, render_gen)
  File "/home/mendel/projects/DARTS/object_detection/edge_tpu_vision/edgetpuvision/apps.py", line 75, in run_app
    display=args.displaymode):
  File "/home/mendel/projects/DARTS/object_detection/edge_tpu_vision/edgetpuvision/gstreamer.py", line 243, in run_gen
    inference_size = render_overlay_gen.send(None) # Initialize.
  File "main.py", line 154, in render_gen
    engines, titles = utils.make_engines(args.model, DetectionEngine)
  File "/home/mendel/projects/DARTS/object_detection/edge_tpu_vision/edgetpuvision/utils.py", line 53, in make_engines
    engine = engine_class(model_path)
  File "/usr/lib/python3/dist-packages/edgetpu/detection/engine.py", line 55, in __init__
    super().__init__(model_path)
  File "/usr/lib/python3/dist-packages/edgetpu/swig/edgetpu_cpp_wrapper.py", line 300, in __init__
    this = _edgetpu_cpp_wrapper.new_BasicEngine(*args)
RuntimeError: Failed to allocate tensors.
What am I doing wrong?
Thanks!
PS: I realize this may be more suited for a GitHub issue, but I am not sure where to post google-coral issues on GitHub...

I had the exact same problem with the Coral board after following the tutorial to retrain an object detection model.
For me, the issue was that the compiler was targeting a different runtime version than the one installed on the Coral board. On the board, you can check the runtime version like this:
python3 -c "import edgetpu.basic.edgetpu_utils; print(edgetpu.basic.edgetpu_utils.GetRuntimeVersion())"
On my board the runtime version was 10, while the compiler's default target was 12. If you have the same issue, you can compile the model targeting the board's version using:
edgetpu_compiler --min_runtime_version 10 your_model.tflite
Source:
https://coral.withgoogle.com/docs/edgetpu/compiler/#compiler-and-runtime-versions
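In other words, allocation fails when the compiler's minimum runtime target is newer than the runtime on the board. The decision reduces to a simple comparison (a sketch with illustrative version numbers, not an edgetpu API):

```python
def needs_recompile(board_runtime: int, compiler_min_runtime: int) -> bool:
    """A compiled model loads only if the board's runtime is at least the
    compiler's minimum target; otherwise tensor allocation fails."""
    return board_runtime < compiler_min_runtime

# Runtime 10 on the board vs. the compiler's default target of 12:
print(needs_recompile(board_runtime=10, compiler_min_runtime=12))  # True
```

If this returns True for your setup, re-run edgetpu_compiler with --min_runtime_version set to the board's version, as shown above.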

Could you try with the newest compiler and check back? I know there were a few bugs in the compiler itself which have since been fixed. The guide is here:
https://coral.withgoogle.com/news/updates-09-2019/

Related

Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory

I am doing a Docker build on my computer, where no NVIDIA GPU is available. I use the tensorflow/tensorflow Docker image (CPU build) as the base image.
Dockerfile
FROM tensorflow/tensorflow
WORKDIR /project
COPY /app .
RUN python3 main.py
But it fails with the following error:
2020-06-12 20:06:56.822576: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-06-12 20:06:56.825090: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-06-12 20:06:56.827746: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (735abddf4141): /proc/driver/nvidia/version does not exist
2020-06-12 20:06:56.837312: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-12 20:06:57.040593: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2400000000 Hz
2020-06-12 20:06:57.045853: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f377c000b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-12 20:06:57.045913: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-06-12 20:07:07.017642: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 614400000 exceeds 10% of free system memory.
Killed
while running this code:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')])
I don't want to use the GPU version; I need to run this on the CPU.
I was using TensorFlow Serving and hit this problem. I found that the TFS Dockerfile.gpu is based on the nvidia/cuda Docker image, so after installing nvidia-docker the problem was solved. Hope this helps.
As Dr. Snoopy has commented, this is a memory error: the large allocation exceeds available memory and the process gets Killed. You should increase the memory available to Docker, for example by following this post.
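For context, the 614,400,000-byte allocation in the warning is exactly what a CIFAR-10-sized training set occupies as a single float32 array (assuming 50,000 images of the 32x32x3 input shape above; the dataset itself is not shown in the question). Streaming batches, e.g. via tf.data or a generator, keeps only one batch in memory per step:

```python
BYTES_F32 = 4  # bytes per float32 element
n_images, h, w, c = 50_000, 32, 32, 3  # assumed CIFAR-10-like dataset

# One monolithic float32 array -- this is the allocation the log warns about:
full_array = n_images * h * w * c * BYTES_F32
print(full_array)  # 614400000 bytes, matching the warning

# Streaming in batches (as tf.data or model.fit with a generator would)
# allocates only one batch at a time:
batch = 32
per_step = batch * h * w * c * BYTES_F32
print(per_step)  # 393216 bytes, well under a megabyte per step
```

So even without more Docker memory, restructuring the input pipeline can avoid the spike that kills the container.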

Windows Driver : Install legacy hardware with command line tool

Operating system: Windows Server 2016 R2
I have a RAMDisk driver can be installed with "Add Legacy Hardware Wizard" (hdwwiz.exe) successfully. From the output of devcon hwids *, the device can be seen as below.
ROOT\UNKNOWN\0000
Name: RAMDrive [ QSoft ] Enterprise (x64)
Hardware IDs:
ramdriv
However, I need to complete the installation via Ansible, so hdwwiz.exe cannot be used; it has to be done from the command line without interaction.
I tried several approaches, and none of them works.
Approach 1 : DevCon.exe (Windows Device Console)
C:\Ramdisk64_inst>devcon.exe install RAMDriv.inf ramdriv
Device node created. Install is complete when drivers are installed...
Updating drivers for ramdriv from C:\Ramdisk64_inst\RAMDriv.inf.
devcon.exe failed.
C:\Ramdisk64_inst>devcon.exe install RAMDriv.inf ROOT\UNKNOWN\0000
Device node created. Install is complete when drivers are installed...
Updating drivers for ROOT\UNKNOWN\0000 from C:\Ramdisk64_inst\RAMDriv.inf.
devcon.exe failed.
Here is the log from C:\Windows\INF\setupapi.dev.log
>>> [Device Install (UpdateDriverForPlugAndPlayDevices) - ramdriv]
>>> Section start 2018/12/20 07:10:35.670
cmd: C:\Ramdisk64_inst\devcon.exe install C:\Ramdisk64_inst\RAMDriv.inf ramdriv
ndv: INF path: C:\Ramdisk64_inst\RAMDriv.inf
ndv: Install flags: 0x00000001
! ndv: Unable to find any matching devices.
<<< Section end 2018/12/20 07:10:35.717
<<< [Exit status: FAILURE(0xe000020b)]
>>> [Device Install (UpdateDriverForPlugAndPlayDevices) - ROOT\UNKNOWN\0000]
>>> Section start 2018/12/20 07:11:50.687
cmd: devcon.exe install RAMDriv.inf ROOT\UNKNOWN\0000
ndv: INF path: C:\Ramdisk64_inst\RAMDriv.inf
ndv: Install flags: 0x00000001
! ndv: Unable to find any matching devices.
<<< Section end 2018/12/20 07:11:50.734
<<< [Exit status: FAILURE(0xe000020b)]
Approach 2 : DPInst.exe (Driver Package Installer)
C:\Ramdisk64_inst>dpinst.exe /PATH C:\Ramdisk64_inst /Q /C /LM
INFO: Option set: dumping log info to console.
INFO: Current working directory: 'C:\Ramdisk64_inst'
INFO: Running on path 'C:\Ramdisk64_inst'
INFO: No valid 'dpinst.xml' file provided.
INFO: Install option set: Running in quiet mode. Suppressing Wizard and OS popups.
INFO: Install option set: legacy mode on.
INFO: Found driver package: 'C:\Ramdisk64_inst\RAMDriv.inf'.
INFO: Preinstalling 'c:\ramdisk64_inst\ramdriv.inf' ...
INFO: ENTER: DriverPackagePreinstallW
INFO: Driver package is already preinstalled 'c:\ramdisk64_inst\ramdriv.inf'.
SUCCESS:c:\ramdisk64_inst\ramdriv.inf is preinstalled.
INFO: RETURN: DriverPackagePreinstallW (0xB7)
INFO: ENTER: DriverPackageGetPathW
INFO: RETURN: DriverPackageGetPathW (0x0)
INFO: ENTER: DriverPackageInstallW
WARNING:DRIVER_PACKAGE_LEGACY_MODE flag set but not supported on Plug and Play driver on VISTA. Flag will be ignored.
INFO: Installing INF file 'c:\ramdisk64_inst\ramdriv.inf' (Plug and Play).
INFO: Looking for Model Section [DiskDevice.NTamd64]...
INFO: No matching devices found in INF "C:\Windows\System32\DriverStore\FileRepository\ramdriv.inf_amd64_fcc99ac0622d865b\ramdriv.inf" on the Machine.
INFO: No drivers installed. No devices found that match driver(s) contained in 'C:\Windows\System32\DriverStore\FileRepository\ramdriv.inf_amd64_fcc99ac0622d865b\ramdriv.inf'.
INFO: RETURN: DriverPackageInstallW (0xE000020B)
INFO: No matching device was found for 'c:\ramdisk64_inst\ramdriv.inf'. Driver will be installed when plugged in.
INFO: Returning with code 0x100
Approach 3 : rundll32 calls SetupAPI
C:\Ramdisk64_inst>rundll32.exe setupapi.dll,InstallHinfSection DiskInstall 128 C:\Ramdisk64_inst\RAMDriv.inf
It ends without any error, but the driver is not installed.
Approach 4 : PnPUtil
C:\Ramdisk64_inst>pnputil.exe /add-driver C:\Ramdisk64_inst\RAMDriv.inf /install /subdirs /restart
Microsoft PnP Utility
Adding driver package: RAMDriv.inf
Driver package added successfully.
Published Name: oem7.inf
Driver package installed on matching devices.
Total driver packages: 1
Added driver packages: 1
It reported success, but in fact the driver is not installed.
None of the approaches works: DPInst.exe says "No matching devices found in INF", and DevCon.exe says "Unable to find any matching devices". It seems to be the same underlying reason.
However, the driver can be installed manually with hdwwiz.exe. Does anyone know what the secret inside hdwwiz.exe is?
Legacy drivers can't be installed with pnputil; you have to use a LaunchINFSectionEx call.
I tested the following and it works with several drivers from Windows 2000 up to Windows 10, 2012R2, 2016, 2019.
rundll32.exe advpack.dll,LaunchINFSectionEx ykmd.inf,Yubico64_Install.NT,,4,N
Pay attention to use the correct section.
When there is no [DefaultInstall] section, the correct section of the driver's .inf file must be used; this detail is missing from most answers. Look it up in your driver's .inf file and use the correct section (in my example, "Yubico64_Install.NT"). Using the wrong section won't prompt an error. In my example I use quiet mode with no UI (4) and never reboot (N) so the driver installs automatically via GPO. All options are documented in detail here:
https://learn.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/aa768006(v%3Dvs.85)
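For automation (e.g. from Ansible via a script module), the same call can be assembled programmatically. A minimal sketch reusing the ykmd.inf example above; the helper name is mine, not part of any tool:

```python
def launch_inf_section_cmd(inf_name, section, flags="4", reboot="N"):
    """Build the rundll32/advpack command line for LaunchINFSectionEx.
    flags="4" means quiet mode with no UI; reboot="N" means never reboot."""
    args = f"{inf_name},{section},,{flags},{reboot}"
    return ["rundll32.exe", "advpack.dll,LaunchINFSectionEx", args]

cmd = launch_inf_section_cmd("ykmd.inf", "Yubico64_Install.NT")
print(" ".join(cmd))
# On the target Windows machine, run it with subprocess.run(cmd, check=True).
```

The comma-separated argument string must keep the empty third field, exactly as in the documented LaunchINFSectionEx syntax.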

Raspbian wifi driver for 0bda:f179

I recently bought my Raspberry Pi 3 Model B+ and my WiFi adapter, which is a Realtek RTL8188.
It doesn't work when I plug it into the Raspberry Pi, but it works instantly on my Windows 10 laptop.
(More information)
Terminal:
lsusb
Bus 001 Device 009: ID 0bda:f179 Realtek Semiconductor Corp.
Bus 001 Device 005: ID 046d:c52f Logitech, Inc. Unifying Receiver
Bus 001 Device 007: ID 0424:7800 Standard Microsystems Corp.
Bus 001 Device 003: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 002: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
uname -r
4.14.34-v7+
dmesg
[ 3063.390743] usb 1-1.3: new high-speed USB device number 10 using dwc_otg
[ 3063.521627] usb 1-1.3: New USB device found, idVendor=0bda, idProduct=f179
[ 3063.521643] usb 1-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 3063.521652] usb 1-1.3: Product: 802.11n
[ 3063.521661] usb 1-1.3: Manufacturer: Realtek
[ 3063.521669] usb 1-1.3: SerialNumber: 000F004035AE
sudo lshw
*-usb:1 UNCLAIMED
description: Generic USB device
product: 802.11n
vendor: Realtek
physical id: 3
bus info: usb#1:1.3
version: 0.00
serial: 000F004035AE
capabilities: usb-2.00
configuration: maxpower=500mA speed=480Mbit/s
I have already done my fair share of research on it.
In this thread, someone also tries to install the driver; their kernel version is 3.8.0-30-generic:
https://ubuntuforums.org/showthread.php?t=2172877
sudo apt-get install --reinstall linux-headers-$(uname -r) linux-headers-generic build-essential dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package linux-headers-generic is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Unable to locate package linux-headers-4.14.34-v7
E: Couldn't find any package by glob 'linux-headers-4.14.34-v7'
E: Couldn't find any package by regex 'linux-headers-4.14.34-v7'
E: Package 'linux-headers-generic' has no installation candidate
wget https://realtek-8188cus-wireless-drivers-3444749-ubuntu-1304.googlecode.com/files/rtl8192cu-tjp-dkms_1.6_all.deb
--2018-06-11 13:27:39-- https://realtek-8188cus-wireless-drivers-3444749-ubuntu-1304.googlecode.com/files/rtl8192cu-tjp-dkms_1.6_all.deb
Resolving realtek-8188cus-wireless-drivers-3444749-ubuntu-1304.googlecode.com (realtek-8188cus-wireless-drivers-3444749-ubuntu-1304.googlecode.com)... 74.125.200.82, 2404:6800:4003:c00::52
Connecting to realtek-8188cus-wireless-drivers-3444749-ubuntu-1304.googlecode.com (realtek-8188cus-wireless-drivers-3444749-ubuntu-1304.googlecode.com)|74.125.200.82|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-06-11 13:27:40 ERROR 404: Not Found.
After trying to make it work for 3 hours on my Raspberry Pi 2, I found the correct driver:
0bda:f179 uses the Realtek 8188FU chipset -> look for this particular driver for your version of Raspbian. Generic versions and the 8188EU driver will not work.
There is a script by MrEngman in https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=62371 that detects the correct version and downloads it automatically.
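The lesson here is to match the driver by USB ID rather than by the name printed on the adapter. As an illustration (a hypothetical helper, not part of MrEngman's script), the vendor:product pairs can be pulled straight from lsusb output:

```python
import re

def usb_ids(lsusb_output):
    """Extract (vendor, product) ID pairs from lsusb output lines."""
    return re.findall(r"ID ([0-9a-f]{4}):([0-9a-f]{4})", lsusb_output)

sample = "Bus 001 Device 009: ID 0bda:f179 Realtek Semiconductor Corp."
print(usb_ids(sample))  # [('0bda', 'f179')]
```

The extracted ID is what you then search for, since two adapters with the same marketing name can use different chipsets.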

How to use Orbbec Astra sensor/camera on ROS to view pointcloud?

I have installed ROS indigo, openni2, and plugged Orbbec Astra sensor in.
When I try to execute:
roslaunch openni2_launch openni2.launch
I get the warning that no device is connected:
No matching device found.... waiting for devices.
Reason: std::string openni2_wrapper::OpenNI2Driver::resolveDeviceURI(const string&) # /tmp/buildd/ros-indigo-openni2-camera-0.2.3-0trusty-20150327-0611/src/openni2_driver.cpp # 623 : Invalid device number 1, there are 0 devices connected.
How to view simple pointcloud in rviz using Orbbec Astra camera?
Install these two packages: ros_astra_camera and ros_astra_launch.
Then follow the instructions from the ros_astra_camera README.

How to run basic OpenCL face detection code

I tried running the basic OpenCL face detection code in "samples/ocl/facedetect.cpp". Before compiling it, I followed the steps below to build OpenCV with OpenCL enabled.
I downloaded OpenCL SDK by Intel from Intel SDK for OpenCL Applications 2013 and installed it.
Using CMake, I enabled "WITH_OPENCL" and built OpenCV successfully. The "libopencv_ocl245.dll" got created.
I tried compiling the face detection code and it compiled successfully. But int devnums = cv::ocl::getDevice(oclinfo, ocl::CVCL_DEVICE_TYPE_DEFAULT); always returns "0".
I changed the following flag, and I got the below error:
Flag:
int devnums = cv::ocl::getDevice(oclinfo,
ocl::CVCL_DEVICE_TYPE_DEFAULT);
Error:
> 0x06670A02 (0x08730080 0x00000000 0x00000018 0x0A3121D8)0x06670A02 (0x08F70080 0x00000000 0x00000024 0x0A0DC008)0x06670A02 (0x097B0080
> 0x00000000 0x00000000 0x0028E48C) <unknownmodule>
> <unknown module>
> 0x630E6900 (0x0A3121C8 0x00000000 0x00000000 0x00000018)0x630E6900
> (0x0A0DBFF8 0x00000000 0x00000000 0x00000024) <unknown module>
> 0x630E6900 (0x0028E47C 0x00000000 0x00000000 0x00000000),
> opencl_mic_printf() + 0x12130 bytes(s), opencl_mic_printf() + 0x12130
> bytes(s), opencl_mic_printf() + 0x12130 bytes(s) ....
Is there any hardware dependency for running OpenCL face detection code? I am running the code on an Intel CPU.
What are the specifications needed to run the OpenCL face detection sample code?
Am I wrong anywhere in the above steps?
Is only installing the OpenCL SDK enough? Or do I need to install something else?
You are possibly using OpenCL 1.1 instead of 1.2, or perhaps an Intel CPU that is only compatible with OpenCL 1.1. OpenCV needs the OpenCL 1.2 files and an OpenCL 1.2-compatible device to run.
Judging by the fact that you're using an Intel CPU, I'm probably wrong on this, since most Intel CPUs are compatible with OpenCL 1.2, but it's worth a look. Another possibility is that your default OpenCL device is your GPU, which, if it's from Nvidia, probably isn't compatible with OpenCL 1.2.
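The version check this answer describes boils down to parsing the platform version string a device reports (CL_PLATFORM_VERSION has the form "OpenCL <major>.<minor> <vendor info>") and comparing it against 1.2. A sketch of just that comparison; the helper is illustrative, not an OpenCV or OpenCL API:

```python
import re

def opencl_version(version_string):
    """Parse (major, minor) from a CL_PLATFORM_VERSION string
    such as 'OpenCL 1.2 CUDA 11.0'."""
    m = re.match(r"OpenCL (\d+)\.(\d+)", version_string)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(opencl_version("OpenCL 1.1 Intel") >= (1, 2))  # False: too old for OpenCV's OCL module
print(opencl_version("OpenCL 1.2 Intel") >= (1, 2))  # True
```

If the default device reports less than (1, 2), that would explain getDevice returning 0.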
