How much space is required to install OpenCV on Raspberry Pi 3?

I am a new user of the Raspberry Pi 3.
How much space is required to install OpenCV on it?

For my Raspberry Pi 3B+ and Raspberry Pi 4B, an 8 GB SD card was too small; I would recommend at least 16 GB. But it really depends on which version and which operating system you use (Pi-Top OS, Twister OS, Raspbian, Raspberry Pi OS...). Maybe you should try running your Pi from a USB flash drive or a small SSD?

Installing OpenCV on your Raspberry Pi can be done in two ways, each with different space requirements:
You can use the Debian repositories with the sudo apt-get install libopencv-dev command. This is the easiest way to install OpenCV on your Pi, and it also takes the least amount of space (if that's a concern for you): around 80 MB when installed. The downsides of this approach are that you get OpenCV 2.4.9 (there isn't an upgrade to OpenCV 3.0 yet) and that you can't customize the installation.
The second and more difficult option is to compile the sources yourself. To compile the code you will need more than 4 GB of disk space, as the compiled code takes a lot more space. However, the installed libraries themselves come to under 100 MB. If you want to use this option, I recommend connecting a USB stick or external HDD to your Pi. Use the USB drive or HDD (with more than 4 GB of free space) to compile and build OpenCV. After installing OpenCV you can delete this directory again, as you only need the library files (of course, this changes when you actually want to develop code on your Pi).
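In command form, the two routes above look roughly like this on Raspbian (the package names and CMake flags are the standard ones, but treat the source build as a sketch rather than a full walkthrough):

```shell
# Option 1: prebuilt packages from the Debian/Raspbian repositories (~80 MB)
sudo apt-get update
sudo apt-get install libopencv-dev

# Option 2: build from source (needs >4 GB free during the build)
sudo apt-get install build-essential cmake git pkg-config
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j4           # takes hours on a Pi; build on a USB drive if the SD card is tight
sudo make install
# the source/build tree can be deleted afterwards; only the installed libs are needed
```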

Mmmm... how long is a piece of string?
It depends on what/how you install and what/how you count. The following factors, and others, will affect the answer:
debug or release versions
examples installed or not
documentation installed or not
contrib code installed or not
It also depends on whether you count the fact that to build it, you will need a compiler and all its associated tooling, CMake, and a bunch of V4L, video-format and image-format libraries.
Also, you can build it and install it and then delete the source yet continue to use the product.
FWIW, my build area on a Raspberry Pi amounts to 2.1 GB - that is the source and a release build without contrib.
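The factors listed above map onto standard OpenCV CMake switches, so a minimal-footprint configure might look like this (flag names are OpenCV's own build options; adjust to taste):

```shell
cmake -D CMAKE_BUILD_TYPE=Release \
      -D BUILD_EXAMPLES=OFF \
      -D BUILD_DOCS=OFF \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      ..
# add -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules only if you need contrib
```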

OpenCV takes around 5.5 GB of space on your SD card.
From experience: I used a 64 GB card with Raspbian Lite on it. I recommend you use a 32 GB or larger card for your projects. Just know that when you install a lot of packages for your future projects, you will run out of space. Under 32 GB might work, but it is not recommended. One tip: install the latest OpenCV version on your Raspberry Pi.
Here is a tutorial which I have personally followed and which works: https://linuxize.com/post/how-to-install-opencv-on-raspberry-pi/

Related

How to install multiple Tensorflow versions?

I'm trying to run the code from this repository: https://github.com/danielgordon10/thor-iqa-cvpr-2018
It has the following requirements
Python 3.5
CUDA 8 or 9
cuDNN
Tensorflow 1.4 or 1.5
Ubuntu 16.04, 18.04
an installation of darknet
My system satisfies none of these requirements. I don't want to reinstall tf/cuda/cudnn on my machine (especially if I have to do that every time I try to run deep learning code with different tensorflow requirements).
I'm looking for a way to install the requirements and run the code regardless of the host.
To my knowledge that is exactly what Docker is for.
Looking into this, there exist Docker images from NVIDIA, for example one called "nvidia/cuda:9.1-cudnn7-runtime". Based on the name I assumed that any image built with this as its base comes with CUDA installed. This does not seem to be the case: if I try to install darknet, it fails with the error that "cuda_runtime.h" is missing.
So my question basically boils down to: how do I keep multiple different versions of CUDA and tensorflow on the same machine? Ideally with Docker (or similar), so I won't have to repeat the process too many times.
It feels like I'm missing or misunderstanding something obvious, because I can't imagine it can be so hard to run tensorflow code with different version requirements without reinstalling things from scratch all the time.
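On the cuda_runtime.h problem specifically: NVIDIA publishes each CUDA version in several flavours, and only the devel images ship the compiler and headers needed to build code such as darknet. A sketch of picking the right tag (tag names follow NVIDIA's scheme; check Docker Hub for the tags still published):

```shell
# runtime images only contain the shared libraries; compiling against CUDA fails there
docker pull nvidia/cuda:9.1-cudnn7-runtime

# devel images additionally ship nvcc and the headers (cuda_runtime.h etc.)
docker pull nvidia/cuda:9.1-cudnn7-devel

# different CUDA/tensorflow combinations then live in separate images side by side
docker run --runtime=nvidia -it nvidia/cuda:9.1-cudnn7-devel bash
```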

How do I pull Docker images with specific library versions installed in them?

I have an outdated Python 2.7 neural-network training script which uses Keras 2.1 on top of tensorflow 1.4; I want it to train on my NVIDIA GPU; and I have CUDA SDK 10.2 installed on Linux. I thought Docker Hub was exactly for publishing frozen software packages that just work, but it seems there is no way to find a container with a specific software set.
I know Docker >= 19.03 has native GPU support, and that the nvidia-docker utility has a CUDA-agnostic layer; but the problem is I cannot install both keras-gpu and tensorflow-gpu of the required versions, cannot find wheels, and this legacy script does not work with other versions.
Where did you get the idea that Docker Hub hosts images with all possible library combinations?
If you cannot find an image that suits you, simply build your own.
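"Build your own" for that stack could look something like the sketch below. The base image tag and the pip pins are assumptions to verify (tensorflow-gpu 1.4 was built against CUDA 8 / cuDNN 6 and still has Python 2.7 wheels), and train.py stands in for your actual script:

```shell
# write a minimal Dockerfile (contents are a sketch; verify the versions you need)
cat > Dockerfile <<'EOF'
FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04
RUN apt-get update && apt-get install -y python python-pip && \
    pip install tensorflow-gpu==1.4.1 keras==2.1.6
EOF

docker build -t tf14-keras21 .
# --gpus all needs Docker >= 19.03; use --runtime=nvidia on older setups
docker run --gpus all -v "$PWD:/work" -w /work tf14-keras21 python train.py
```

The point is that the container carries its own CUDA 8 userspace, so the CUDA 10.2 SDK on the host doesn't conflict with it; only the host driver needs to be recent enough.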

Looking for Coral M.2 Accelerator + RHEL/Centos 8 Drivers on x86_64

I'm a little lost (and admit I'm pretty green at all this). I am looking for the M.2 accelerator drivers for RHEL/CentOS 8 on x86_64 architecture. Previously I was successful installing the drivers under Ubuntu following the Getting Started guide on the Coral website (https://coral.ai/docs/m2/get-started), but I need to run CentOS 8 for other reasons. So I know that the board works and that it can be supported on Linux; I just don't know how to convert the instructions for CentOS.
My M.2 board is connected to my server using an M.2 to PCIe adapter.
Thanks in advance!
ben
I also believe that you should be able to get this working.
Couple things that you'll need:
libedgetpu.so - You can download the latest runtime from here: https://github.com/google-coral/edgetpu/tree/master/libedgetpu/direct/k8
apex/gasket modules - These are required kernel modules for talking to the PCIe device. This is going to be very tricky: first you'll need to make sure that you don't already have the apex/gasket modules installed. If you do, blacklist them and load our modules. Our modules cannot be installed with apt-get since you are on CentOS, so your only option is to download the source code and compile it yourself: https://coral.googlesource.com/linux-imx/+/refs/heads/release-day/drivers/staging/gasket
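The check-and-blacklist step above might look like this on CentOS 8 (module and file names are the usual ones for this driver; treat it as a sketch, not verified instructions):

```shell
# check whether in-tree apex/gasket modules are already loaded
lsmod | grep -E 'apex|gasket'

# if so, blacklist the in-tree versions before loading the Coral ones
echo "blacklist gasket" | sudo tee -a /etc/modprobe.d/blacklist-apex.conf
echo "blacklist apex"   | sudo tee -a /etc/modprobe.d/blacklist-apex.conf

# build the modules from the downloaded source (needs headers and a toolchain)
sudo dnf install kernel-devel gcc make
make -C /lib/modules/$(uname -r)/build M=$PWD modules

# gasket must be loaded before apex, since apex depends on it
sudo insmod gasket.ko && sudo insmod apex.ko
```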
Cheers

Can I use Tensorflow on Orange pi 4G IOT with Ubuntu?

I am trying to build an imaging system and I want to use Tensorflow with the Orange Pi 4G. Does anyone know if there are limitations, and is this possible?
As far as I can see, the Orange Pi 4G-IoT is still not compatible with Ubuntu, but I hope it will be in the near future. I will be happy with any information you can give me.
The official CI server for Tensorflow has some nightly builds with Python wheels for Raspberry Pi (armv7l). It is not officially supported by tensorflow yet (they officially support only 64-bit architectures so far), but I managed to get YOLO on Keras working on an Orange Pi PC Plus using their nightly-build wheel file.
You can also find the scripts they used for building the wheel (it is actually cross-built using a Docker container) in the directory tensorflow/tensorflow/tools/ci_build inside the source code.
Some people have also provided guides for native building, but that generally requires more effort to get working.
I suggest you start by trying the Python wheel file for tensorflow v1.8.0 for the Raspberry Pi armv7l architecture, found here.
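Installing one of those wheels is just a pip call once you've downloaded it; the filename below is illustrative (actual wheel names vary by build):

```shell
sudo apt-get install python-pip
# filename is an example; use the actual wheel you downloaded from the CI server
pip install tensorflow-1.8.0-cp27-none-linux_armv7l.whl
# quick smoke test
python -c "import tensorflow as tf; print(tf.__version__)"
```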

How to install Torch on windows 8.1?

Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
Q:
Is there a way to install torch on MS Windows 8.1?
I got it installed and running on Windows (although not 8.1, but I don't expect the process to be different) following the instructions in this repository; it's now deprecated, but wasn't deprecated a few months ago when I built it. The new instructions point to the torch/torch7 repository, but it has a different structure and I haven't been able to build it on Windows yet.
There are instructions on how to install Torch7 from luarocks, but you may run into issues on windows as well; I haven't tried this process. It seems like there is no official support for Windows yet, but some work is being done by contributors (there is a link to a pull request in that thread).
Based on my experience, compiling that deprecated repo may be your best option on Windows at the moment.
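For completeness, the luarocks route mentioned above boils down to commands like these (taken from the Torch7 instructions of that era; untested on Windows, as noted):

```shell
# assumes Lua/LuaJIT and luarocks are already installed and on PATH
luarocks install torch
luarocks install nn
```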
Update (7/9/2015): I've recently submitted several changes that fix compilation issues with mingw, so you may try the most recent version of torch7 and follow the build instructions in the ticket. Note that the changes only apply to the core lib and additional libraries may need similar changes.
This webpage hosted by New York University recommends installing a Linux virtual machine in order to run Torch7 on Windows through Linux. Another option would of course be to install a Linux distribution in parallel with Windows 8.
Otherwise, if you don't mind running an older version of Torch, there is a Windows installer for Torch5 at SourceForge.
I think that to use a GPU from inside a virtual machine, the processor and the motherboard must support not only VT-x but also VT-d.
But the question is: if I use a CPU with VT-d support, do you think there will be a significant loss in PCIe connection efficiency?
From what I understand, VT-d is important if I want to give virtual machines direct access to my hardware components (like PCI Express cards), e.g. attaching a graphics card directly to the VM instead of the host machine. Doesn't that mean the PCIe connection efficiency will be the same as if it were the host?
