Does anyone know how to get Chromium to be hardware accelerated for WebGL if you start with Buster Lite?
Hardware:
Raspberry Pi 4 w/ 2GB
Test1:
Buster w/ Desktop 2019-09-26
chrome://gpu shows WebGL: Hardware accelerated, three.js renders fine, and Chromium shows minimal CPU usage.
Test2:
Buster Lite 2019-09-26
install:
$ sudo apt-get install --no-install-recommends xserver-xorg x11-xserver-utils xinit openbox chromium-browser
Then create an autostart entry that launches chromium-browser, and run $ startx.
chrome://gpu shows WebGL: Software only, hardware acceleration unavailable, three.js renders very slowly, and Chromium shows > 200% CPU.
I think the issue might be related to Mesa. In the 'desktop' version, Chromium reports that it's using Mesa; in the 'lite' version, it does not. Mesa shows as installed on the 'lite' image if I query for it in the console, and the gears demo renders just fine there.
I have the 'desktop' version implemented as a temporary solution, but I would really like to go back to using 'lite' with just chromium.
I additionally installed libgl1-mesa-dri, libgl1-mesa-glx, libgles2 and libgles2-mesa, and according to the chrome://gpu page, hardware-accelerated WebGL became available.
Update:
I checked it a second time, and it seems libgles2 alone is enough to enable WebGL hardware acceleration.
Related
I want to install gcc-9 to run XLA (Accelerated Linear Algebra) on the CPU.
XLA requires GLIBCXX_3.4.26, which is only available with gcc > 8.
Debian's testing package source does not seem to work, since it cannot find gcc-9 either.
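Before installing gcc-9, it may be worth checking whether the libstdc++ already on the system ships the required symbol. A small sketch of that check (the helper name and the example path in the comment are illustrative, not from the question):

```python
import re

def glibcxx_at_least(tags, required=(3, 4, 26)):
    # 'tags' would come from e.g.:
    #   strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
    # Non-version tags like GLIBCXX_DEBUG_MESSAGE_LENGTH are ignored.
    for tag in tags:
        m = re.fullmatch(r"GLIBCXX_(\d+(?:\.\d+)+)", tag)
        if m and tuple(int(p) for p in m.group(1).split(".")) >= required:
            return True
    return False

print(glibcxx_at_least(["GLIBCXX_3.4.25"]))  # False: libstdc++ shipped with gcc 8
print(glibcxx_at_least(["GLIBCXX_3.4.26"]))  # True: new enough for XLA
```

If the check returns False, the libstdc++ on the system predates gcc 9 and a newer toolchain (or at least a newer libstdc++) is indeed needed.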
I have trained yolo-tiny-v4 on Colab and the detection works well there.
Then I tried to load yolo-tiny-v4 in this way in Visual Studio, integrated with Gazebo/ROS:
No error appears, but the detection fails (no object is detected; the output of the detection is a vector of NaN).
I'm using OpenCV 4.2.0 and Python 2.7.17 in Visual Studio.
Any idea?
Try compiling OpenCV >= v4.5.0 from sources.
Compiling version 4.5.0 from sources solved the issue for me in Python 3 and I checked it also works in Python 2.7.
I initially got the same issue with Yolo Tiny v4 and Python 3.7, both on Raspberry Pi 4 and Windows 7, with OpenCV installed via pip install opencv-contrib-python (which seems not to be available for Python 2.7?).
I tried different versions iteratively, obtained from pip or recompiled from sources (the latest version available via pip on Raspbian was 4.1.0.25):
opencv-contrib-python==3.4.10.37: no detections (tested on Windows)
opencv-contrib-python==4.1.0.25: no detections (tested on Raspbian Buster and Windows)
opencv-contrib-python==4.2.0.34: no detections (tested on Windows)
opencv-contrib-python==4.3.0.38: no detections (tested on Windows)
opencv 4.4.0 compiled from sources: no detections (tested on Raspbian Buster)
opencv-contrib-python==4.4.0.40: ok (tested on Windows)
opencv-contrib-python==4.4.0.46: ok (tested on Windows)
opencv 4.5.0 compiled from sources: ok (tested on Raspbian Buster)
Versions from opencv-contrib-python==4.4.0.40 onward seemed to work, and the "next" version available to build on Raspbian at the time was v4.5.0 from sources.
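The pattern in the list above (every pip build below 4.4.0.40 fails, everything at or above works) can be expressed as a simple threshold check; a sketch using the tested pip versions from the list:

```python
def parse(v):
    # Turn a dotted version string into a comparable tuple of ints.
    return tuple(int(p) for p in v.split("."))

THRESHOLD = parse("4.4.0.40")  # first pip build where YOLOv4-tiny detections worked

results = {
    "3.4.10.37": "no detections",
    "4.1.0.25": "no detections",
    "4.2.0.34": "no detections",
    "4.3.0.38": "no detections",
    "4.4.0.40": "ok",
    "4.4.0.46": "ok",
}
for version, observed in results.items():
    predicted = "ok" if parse(version) >= THRESHOLD else "no detections"
    assert predicted == observed, (version, predicted, observed)
print("threshold 4.4.0.40 is consistent with all tested pip versions")
```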
I am unable to open a camera on the network using Catalina, Python 3.7 and OpenCV 4.1.2.
I am running an IP Webcam app on a phone that exposes an endpoint as: http://192.168.87.26:8080/video. The following command fails:
import cv2
cap = cv2.VideoCapture('http://192.168.87.26:8080/video')
and the error message is:
OpenCV: Couldn't read video stream from file "http://192.168.87.26:8080/video"
At the same time, an mp4 video file plays fine. I have also granted permissions in macOS so that the default webcam works.
I have tried both pip install opencv-python and building OpenCV from source, but the error for the video stream does not go away.
FFmpeg is installed on the system, and ffplay plays http://192.168.87.26:8080/video without problems.
$ brew info ffmpeg
ffmpeg: stable 4.2.1 (bottled), HEAD
Play, record, convert, and stream audio and video
https://ffmpeg.org/
/usr/local/Cellar/ffmpeg/4.2.1_2 (287 files, 56.6MB) *
What am I missing?
After some more digging around, I was able to make it work. Looks like I had an old version of Intel OpenVINO around from Mojave days that was interfering with any local version of OpenCV that I would install.
During the exercise, I also figured that building OpenCV from scratch is far better than installing it from pip.
I am trying to follow: https://github.com/jetsonhacks/installTensorFlowTX2
to install TensorFlow on my TX2. After running ./setTensorFlowEV.sh I get the following error:
Invalid path to cuDNN toolkit. Neither of the following two files can be found:
/usr/lib/aarch64-linux-gnu/lib64/libcudnn.so.6.0.21
/usr/lib/aarch64-linux-gnu/libcudnn.so.6.0.21
This suggests I do not have cuDNN 6 installed on my TX2. Since the TX2 is aarch64 and not x86, I am a bit stuck, as NVIDIA only provides binaries for x86 and not for aarch64. I understand I can flash my device with the newest JetPack to get cuDNN.
Is there any other simpler way (without flashing my device) to install cudnn6 on tx2?
You can use JetPack to install cuDNN without flashing the device. Just:
open JetPack, click Next until you reach the screen showing all the available packages, and set everything to "no action"
select cuDNN, set it to "install", and click Next
a screen will show up asking for the IP, username and password of your Jetson; fill that out and click Next
JetPack will then SSH into your Jetson and install cuDNN for you
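After JetPack finishes, you can confirm the library landed where the TensorFlow setup script looks; the directory comes from the error message above, while the helper name is illustrative:

```python
from pathlib import Path

def find_cudnn(libdir):
    # List any libcudnn shared objects in the given directory,
    # returning an empty list if the directory does not exist.
    d = Path(libdir)
    return sorted(str(p) for p in d.glob("libcudnn.so*")) if d.is_dir() else []

found = find_cudnn("/usr/lib/aarch64-linux-gnu")
print("cuDNN libraries:", found or "none found")
```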
I have compiled OpenCV 2.4.6 on my Raspberry Pi using the Sourceforge repository. I used the following commands to install it:
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.6/opencv-2.4.6.tar.gz
tar zxvf opencv-2.4.6.tar.gz
cd opencv-2.4.6
cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_gpu=OFF -DBUILD_opencv_ocl=OFF .
make
make install
I get no errors when I compile. I am using the Face Recognition API to recognize faces in video captured through the Raspberry Pi camera module. I am using a C++ library called RaspiCam to capture frames from the camera; it is compatible with OpenCV and lets you store captured frames as an OpenCV Mat object. The documentation for the library is at http://www.uco.es/investiga/grupos/ava/node/40, and the source code for building the RaspiCam library is at http://sourceforge.net/projects/raspicam/files/?source=navba.
Most of the time my face recognition application runs fine. But every now and then it becomes unresponsive after an unpredictable amount of time, with no error. The task manager shows that the program is still running, but at very low CPU usage, around 2% instead of the usual 70-80%. I placed OpenCV try blocks for error handling to catch any OpenCV errors that might arise, but none of them are ever invoked. I have noticed that the program hangs less often when I don't use the OpenCV highgui window to display frames, particularly if I run it through ssh. Has anyone had similar problems?
I was experiencing the same problem with raspicam-0.1.1. For me, downgrading the Raspberry Pi firmware resolved the problem:
sudo rpi-update 8660fe5152f6353dec61422808835dbcb49fc8b2
I found this firmware version mentioned when I was browsing the RPi-Cam-Web-Interface