Piping video from Raspberry Pi to Desktop running OpenCV - opencv

I'm looking for some hints. I've got my Pi running OpenCV, but I'm about to take on a project which will need several IP cameras, all piping video to OpenCV. I'm curious whether it's possible to use the Pi + webcam in place of an IP camera.
I was attempting this by using Gstreamer on the Pi to pipe the video to a desktop PC, where I would use Python and OpenCV to process the images, then ship back answers to the Pi. The Pi is connected to actuators, so the described setup would save me the purchase of a few ip cams.
I've set up FFmpeg to capture the video and stream it; I just can't seem to find an appropriate GStreamer pipeline to get it pulled up in OpenCV on the desktop.
I hope this is clear.

First of all, I strongly recommend the latest GStreamer code you can get to compile on the RPi.
Some recent builds of GStreamer can be found in a third-party APT repository:
add
deb http://vontaene.de/raspbian-updates/ . main
to
/etc/apt/sources.list
and run
apt-get update && apt-get upgrade
as superuser.
Hope that helps. If not, you may find some useful info at:
http://pi.gbaman.info/?p=150
http://sanjosetech.blogspot.de/2013/03/web-cam-streaming-from-raspberry-pi-to.html
or even https://raspberrypi.stackexchange.com/
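Once GStreamer is working on both ends, the desktop side needs a pipeline string that ends in appsink, which is what OpenCV's GStreamer backend reads frames from. Here is a minimal sketch of building such a string; the port number, H.264 encoding, and payload type are illustrative assumptions, not details taken from the question:

```python
# Sketch: build a GStreamer pipeline string for the desktop (receiving) side.
# Assumes the Pi streams RTP-wrapped H.264 over UDP to port 5000; adjust the
# caps to match whatever your Pi-side sender pipeline actually emits.

def receiver_pipeline(port=5000):
    """Return a GStreamer pipeline string ending in appsink, which is
    where OpenCV's GStreamer backend pulls decoded frames from."""
    return (
        f"udpsrc port={port} caps=\"application/x-rtp, media=video, "
        f"encoding-name=H264, payload=96\" "
        "! rtph264depay ! h264parse ! avdec_h264 "
        "! videoconvert ! appsink"
    )

print(receiver_pipeline())
# An OpenCV build compiled with GStreamer support would then open it as:
#   cap = cv2.VideoCapture(receiver_pipeline(), cv2.CAP_GSTREAMER)
```

Note that the stock `pip` OpenCV wheels are often built without GStreamer support, in which case `VideoCapture` will silently fail to open the pipeline.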

I recommend the UV4L driver for the Pi. This driver exposes a URL where you can view the Pi camera stream, so you can process the images on the desktop with cv2.VideoCapture("http://raspberrypi-ip/live"). That way you don't need to process anything on the Pi, which is very limited compared with your PC, and you will get much nicer results.

Related

Can I run NVIDIA DeepStream SDK in Windows Server 2019?

System: I have Windows Server 2019 installed with an NVIDIA Tesla T4 Tensor Core GPU.
Goal: To read real-time streaming video from an IP camera and process it frame by frame. The plan is to leverage the NVIDIA DeepStream SDK, but the issue is that it isn't available for Windows. So I'm thinking along Docker lines; since I am very new to Docker containers, I would like to know if I can install Docker on Windows and run this DeepStream Docker image on it.
If not, is there any way I can run this Linux-based DeepStream Docker image on Windows? Any help will be greatly appreciated.
I have never worked with Windows Server before, but it should be the same as running Docker in a Linux VM.
First, you need to pull the Docker image for DeepStream:
docker pull nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton
and then try to run the sample apps provided in the Docker image.
Refer to this for the procedure.
If you are interested in Python apps, you can check the sample apps here.
Note: make sure you are able to access the display from inside the container, because DeepStream uses eglsink in its sample apps, which will try to open a display window on your screen. Alternatively, you can change the sink type to filesink if you want to save the output to a file.
Refer to this for the available plugins and their attributes.
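To make the display accessible from inside the container, the usual approach is to share the host's X11 socket and DISPLAY variable with `docker run`. As a sketch, here is the typical invocation assembled in Python; the flags shown (`--gpus all`, the X11 bind mount, `-e DISPLAY`) are the ones NVIDIA's container instructions commonly use, and you should adjust them for your own setup:

```python
# Sketch: assemble a typical `docker run` command for the DeepStream container
# with X11 forwarding so eglsink can open a window on the host display.
import shlex

def deepstream_run_cmd(image="nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton"):
    """Return the docker invocation as an argument list (subprocess-style)."""
    return [
        "docker", "run", "--gpus", "all", "-it", "--rm",
        "-v", "/tmp/.X11-unix:/tmp/.X11-unix",  # share the host X socket
        "-e", "DISPLAY=:0",                     # point at the host display
        image,
    ]

# Print the equivalent shell command line for copy-pasting:
print(" ".join(shlex.quote(arg) for arg in deepstream_run_cmd()))
```

On the host you may also need `xhost +local:docker` (or a narrower rule) so the container is allowed to connect to the X server.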
According to a post on the NVIDIA forum, Windows is not supported.
As an alternative, I wonder if anyone has used the NVIDIA Graph Composer on Windows.

Can I use Tensorflow on Orange pi 4G IOT with Ubuntu?

I am trying to build an imaging system, and I want to use TensorFlow with the Orange Pi 4G-IoT. Does anyone know if there are limitations? Is this possible?
As far as I can see, the Orange Pi 4G-IoT is still not compatible with Ubuntu, but I hope it will be in the near future. I'd be grateful for any information you can give me.
The official CI server for TensorFlow has some nightly builds with Python wheels for Raspberry Pi armv7l. It is not officially supported by TensorFlow yet (they officially support only 64-bit architectures so far), but I managed to get YOLO-Keras working on an Orange Pi PC Plus using their nightly-build wheel file.
You can also find the scripts they used for building the wheel (it's actually cross-built using a Docker container) in the directory tensorflow/tensorflow/tools/ci_build inside the source tree.
Some people have also provided guides for native building, but that generally requires more effort to get working.
I suggest you start by trying the Python wheel file for TensorFlow v1.8.0 for the Raspberry Pi armv7l architecture, found here.
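Since wheels are tagged by machine architecture and pointer width, it's worth sanity-checking the target interpreter before downloading: a 64-bit OS (aarch64) or a non-ARM box will reject an armv7l build. A small sketch of that check, using only the standard library:

```python
# Sketch: check whether this interpreter matches an armv7l (32-bit ARM) wheel.
import platform
import struct

def wheel_hint(machine=None, bits=None):
    """Report whether an armv7l wheel should install on this interpreter.
    `machine` and `bits` default to the running system's values."""
    machine = machine or platform.machine()   # e.g. 'armv7l', 'aarch64', 'x86_64'
    bits = bits or struct.calcsize("P") * 8   # 32 on armv7l, 64 on aarch64/x86_64
    if machine == "armv7l" and bits == 32:
        return "armv7l wheel should install"
    return f"armv7l wheel will NOT fit ({machine}, {bits}-bit)"

print(wheel_hint())
```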

How much space is required to install OpenCV on Raspberry Pi 3

I am new user to the Raspberry Pi 3.
How much space is required to install OpenCV on Raspberry?
For my Raspberry Pi 3B+ / Raspberry Pi 4B, an 8 GB SD card was too small; I would recommend at least 16 GB. But it really depends on which version and which operating system you use (PiTop, Twister OS, Raspbian, Raspberry Pi OS...). Maybe you should try running your Pi from a USB flash drive or a small SSD?
Installing OpenCV on your Raspberry Pi can be done in two ways, both having different space requirements:
You can use the Debian repositories with the sudo apt-get install libopencv-dev command. This is the easiest way to install OpenCV on your Pi, and it also takes the least amount of space (if that's a concern for you): around 80 MB when installed. The downsides of this approach are that you get OpenCV 2.4.9 (there isn't an upgrade to OpenCV 3.0 yet) and that you can't customize the installation.
The second and more difficult option is to compile the sources yourself. To compile the code you will need more than 4 GB of disk space, as the compiled code takes much more room. However, the installed libraries (.lib) come to under 100 MB. If you go this route, I recommend connecting a USB stick or external HDD (with more than 4 GB of free space) to your Pi and using it to compile and build OpenCV. After installing OpenCV you can delete this directory again, as you only need the library files (of course, this changes when you actually want to develop code on your Pi).
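Before starting a source build, it's cheap to verify the build location actually has the headroom the answer above describes. A minimal standard-library sketch (the 4 GB threshold is taken from the answer; treat it as a rough floor, not an exact figure):

```python
# Sketch: check free space on the intended build directory before compiling
# OpenCV from source, which needs roughly 4 GB or more of scratch space.
import shutil

def enough_space_to_build(path=".", needed_bytes=4 * 1024**3):
    """Return True if the filesystem holding `path` has more free space
    than `needed_bytes` (default: 4 GiB)."""
    return shutil.disk_usage(path).free > needed_bytes

if enough_space_to_build("."):
    print("Enough free space here to build OpenCV from source.")
else:
    print("Too little space: mount a USB stick or external HDD and build there.")
```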
Mmmm... how long is a piece of string?
It depends on what/how you install and what/how you count. The following factors, and others, will affect the answer:
debug or release versions
examples installed or not
documentation installed or not
contrib code installed or not
It also depends on whether you count the fact that, to build it, you will need a compiler and all its associated tooling, CMake, and a bunch of V4L, video-format and image-format libraries.
Also, you can build it and install it and then delete the source yet continue to use the product.
FWIW, my build area on a Raspberry Pi amounts to 2.1GB - that is the source and a release build without contrib.
OpenCV takes around 5.5 GB of space on your SD card.
From experience: I used a 64 GB card with Raspbian Lite on it. I recommend a 32 GB or larger card for your projects; once you start installing a lot of packages for future projects, you will otherwise run out of space. Under 32 GB might work, but it is not recommended. One tip: install the latest OpenCV version on your Raspberry Pi.
Here is a tutorial which I have personally followed which works. https://linuxize.com/post/how-to-install-opencv-on-raspberry-pi/

How to import an image to IBM Streams?

Apparently, in order to import an image in IBM InfoSphere Streams (I am currently using VMware-streams4.1.1), I have to download the OpenCV libraries, and I have followed this guide: http://ibmstreams.github.io/streamsx.opencv/doc/html/InstallingToolkit.html
I get to the point where I have to use CMake; after that, a makefile is created, but it is not "made" when I type make on the command line...
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$OPENCV_INSTALL_PATH -D OPENCV_EXTRA_C_FLAGS="-DHAVE_CAMV4L -DHAVE_CAMV4L2" -D WITH_OPENCL=OFF ../opencv-3.1.0
make
sudo make install
Is it necessary to download OpenCV to upload images to IBM Streams, or are there any built-in libraries already there? There is apparently not much information regarding the installation of these libraries.
The main task here is to take the average of an image's pixels in IBM Streams.
No, IBM Streams does not have any built-in image processing libraries. However, there is a toolkit for Streams that shows how OpenCV functions can be used in Streams operators. The toolkit contains several dozen examples of operators and applications for processing video streams. The toolkit depends on the OpenCV libraries, which must be installed separately, and they in turn depend on the FFmpeg libraries, which must also be installed.
There are step-by-step instructions for installing FFmpeg and OpenCV libraries, here:
http://ibmstreams.github.io/streamsx.opencv/doc/html/InstallingToolkit.html
After installing the toolkit and libraries, there are suggestions for verifying that everything was installed correctly by executing some of the sample applications, here:
http://ibmstreams.github.io/streamsx.opencv/doc/html/UsingToolkit.html
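The underlying computation the asker describes (the average of an image's pixels) is simple regardless of where it runs. As a plain-Python sketch with a grayscale image represented as a list of rows (inside Streams this logic would live in an SPL/OpenCV operator, but the arithmetic is the same):

```python
# Sketch: average pixel value of a 2-D grayscale image (list of row lists).
def mean_pixel(image):
    """Return the mean of all pixel values in `image`."""
    total = sum(sum(row) for row in image)
    count = sum(len(row) for row in image)
    return total / count

image = [
    [0, 64, 128],
    [255, 255, 64],
]
print(mean_pixel(image))  # → 127.66666666666667
```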

Is it possible to setup a WebRTC Twilio client on Raspberry PI?

I'm working on a prototype on which I need to create a peer-to-peer video chat between a Raspberry Pi equipped with a Raspberry Cam and an iOS device using Twilio. The iOS part was easy but I can't find a way to do the same on the Raspberry. Is that even possible?
Thanks.
I've not tried this, but it seems like you would have to rely on the browser capabilities of the Pi. The current standard there seems to be the Epiphany browser, which you'd get with the following commands:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install epiphany-browser
Then you can check whether that browser would support using Twilio Client:
https://www.twilio.com/docs/quickstart/php/client
Alternatively, if for whatever reason it would not work with Twilio Client, you could still use the Pi as a WebRTC device via other methods as modeled in this blog post.
I don't have a Raspberry Pi 2 or camera to test this with so let me know if this helps at all.
