Can I use TensorFlow on Orange Pi 4G IoT with Ubuntu? - image-processing

I am trying to build an imaging system and I want to use TensorFlow with the Orange Pi 4G. Does anyone know if there are limitations, or whether this is possible at all?
As far as I can see, the Orange Pi 4G IoT is still not compatible with Ubuntu, but I hope it will be in the near future. I would be happy with any information you can give me.

The official CI server for TensorFlow publishes nightly builds with Python wheels for the Raspberry Pi (armv7l). This architecture is not officially supported by TensorFlow yet - they officially support only 64-bit architectures so far - but I managed to get YOLO on Keras working on an Orange Pi PC Plus using their nightly-build wheel file.
You can also find the scripts they used for building the wheel (it is actually cross-built inside a Docker container) in the tensorflow/tensorflow/tools/ci_build directory of the source code.
Some people have also written guides for building natively on the board, but that generally takes more effort to get working.
I suggest you start by trying the Python wheel file for TensorFlow v1.8.0 for the Raspberry Pi armv7l architecture, found here.
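For reference, getting the wheel running looked roughly like the sketch below; the wheel filename is a placeholder for whichever armv7l build you actually download, and the extra packages are just what my board happened to need.

# Illustrative sketch: the wheel filename below is a placeholder for the
# armv7l wheel downloaded from the TensorFlow CI server.
sudo apt-get update
sudo apt-get install -y python-pip libatlas-base-dev
pip install tensorflow-1.8.0-cp27-none-linux_armv7l.whl

# Quick smoke test
python -c "import tensorflow as tf; print(tf.__version__)"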

Related

How do I pull Docker images with specific library versions installed in them?

I have an outdated neural-network training script for Python 2.7 which uses Keras 2.1 on top of TensorFlow 1.4, and I want to train it on my NVIDIA GPU; I have the CUDA SDK 10.2 installed on Linux. I thought Docker Hub was exactly the place for publishing frozen software packages that just work, but it seems there is no way to find a container with a specific software set.
I know Docker >= 19.03 has native GPU support, and that the nvidia-docker utility has a CUDA-agnostic layer; but the problem is that I cannot install both keras-gpu and tensorflow-gpu in the required versions, I cannot find wheels, and this legacy script does not work with other versions.
Where did you get the idea that Docker Hub hosts images with all possible library combinations?
If you cannot find an image that suits you, simply build your own.
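As a rough sketch of that approach: pin an old TensorFlow GPU base image and layer the Keras release your script needs on top of it. The image tag, Keras version and script name below are assumptions; check the available tags on Docker Hub and adjust to what your code actually imports.

# Sketch only: adapt the tag, Keras version and script name to your setup.
cat > Dockerfile <<'EOF'
# TF 1.4 GPU image (Python 2.7 based); verify this tag exists on Docker Hub
FROM tensorflow/tensorflow:1.4.1-gpu
# Pin the Keras 2.1.x release the script was written against
RUN pip install keras==2.1.6
WORKDIR /workspace
COPY train.py /workspace/train.py
CMD ["python", "train.py"]
EOF

docker build -t legacy-train .
# Docker >= 19.03 is needed for --gpus; older setups can use nvidia-docker instead
docker run --gpus all legacy-train

Because the CUDA toolkit lives inside the image, the host only needs a recent enough NVIDIA driver; the container does not use the host's CUDA 10.2 installation.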

Looking for Coral M.2 Accelerator + RHEL/Centos 8 Drivers on x86_64

I'm a little lost (and admit that I'm pretty green at all this). I am looking for the Coral M.2 accelerator drivers for RHEL/CentOS 8 on the x86_64 architecture. Previously I was successful installing the drivers under Ubuntu by following the Getting Started guide on the Coral website (https://coral.ai/docs/m2/get-started), but I need to run CentOS 8 for other reasons. So I know that the board works and that it can be supported in Linux; I just don't know how to translate the instructions for CentOS.
My M.2 board is connected to my server using a M.2 to PCIe adapter.
Thanks in advance!
ben
I also believe that you should be able to get this working.
A couple of things you'll need:
libedgetpu.so - you can download the latest runtime from here: https://github.com/google-coral/edgetpu/tree/master/libedgetpu/direct/k8
apex/gasket modules - these are the kernel modules required for talking to the device over PCIe. This is going to be the tricky part: first make sure you don't already have the apex/gasket modules installed, and if you do, blacklist them and load our modules instead. Our modules cannot be installed with apt-get since you are on CentOS, so your only option is to download the source and compile it on your own (a rough sketch is below): https://coral.googlesource.com/linux-imx/+/refs/heads/release-day/drivers/staging/gasket
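Very roughly, the CentOS side might look like the sketch below. The package names, and the assumption that the driver source builds as a standard out-of-tree kernel module, are mine rather than from the Coral docs, so treat this as a starting point only.

# Sketch, untested on CentOS 8: keep any in-tree apex/gasket from auto-loading
# (an explicit modprobe later still works despite the blacklist).
echo "blacklist gasket" | sudo tee -a /etc/modprobe.d/blacklist-apex.conf
echo "blacklist apex"   | sudo tee -a /etc/modprobe.d/blacklist-apex.conf

# Build prerequisites (RHEL/CentOS equivalents of build-essential + headers).
sudo dnf install -y gcc make kernel-devel-$(uname -r)

# From inside the downloaded gasket driver source directory, build it as an
# out-of-tree module against the running kernel, then load it.
make -C /lib/modules/$(uname -r)/build M=$PWD modules
sudo make -C /lib/modules/$(uname -r)/build M=$PWD modules_install
sudo depmod -a
sudo modprobe gasket
sudo modprobe apex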
Cheers

How much space is required to install OpenCV on Raspberry Pi 3

I am new user to the Raspberry Pi 3.
How much space is required to install OpenCV on Raspberry?
For me, with my Raspberry Pi 3B+ / Raspberry Pi 4B, an 8 GB SD card was too small; I would recommend at least 16 GB. But it really depends on which version and which operating system you use (pi-top OS, Twister OS, Raspbian, Raspberry Pi OS...). Maybe you should try running your Pi from a USB flash drive or a small SSD?
Installing OpenCV on your Raspberry Pi can be done in two ways, both having different space requirements:
You can use the Debian repositories with the sudo apt-get install libopencv-dev command. This is the easiest way to install OpenCV on your Pi and also takes the least amount of space (if that's a concern for you): it takes around 80 MB when installed. The downside of this approach is that you get OpenCV 2.4.9; there isn't an upgrade to OpenCV 3.0 in the repositories yet. Also, you can't customize the installation.
The second and more difficult option is to compile the sources yourself. To compile the code you will need more than 4 GB of disk space, as the compiled code takes up a lot more room; the installed libraries themselves, however, are under 100 MB. If you want to use this option, I recommend connecting a USB stick or external HDD to your Pi and using it (with more than 4 GB of free space) to compile and build OpenCV. After installing OpenCV you can delete this build directory again, as you only need the library files (of course, this changes when you actually want to develop code on your Pi).
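For the second option, the build typically looks something like the sketch below; the mount point and cmake flags are placeholders, and switching off examples, docs and tests keeps the footprint down.

# Rough sketch of a source build done on an external drive mounted at /mnt/usb
# (mount point, OpenCV version and flags are placeholders -- adjust to your setup).
cd /mnt/usb
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build

cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D BUILD_EXAMPLES=OFF \
      -D BUILD_DOCS=OFF \
      -D BUILD_TESTS=OFF ..

make -j4            # slow on a Pi; -j4 uses all four cores
sudo make install
sudo ldconfig

# The whole /mnt/usb/opencv tree can now be deleted; only the installed
# libraries under /usr/local are needed to run OpenCV programs.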
Mmmm... how long is a piece of string?
It depends on what/how you install and what/how you count. The following factors, and others, will affect the answer:
debug or release versions
examples installed or not
documentation installed or not
contrib code installed or not
It also depends on whether you count the fact that, to build it, you will need a compiler and all its associated tooling, CMake, and a bunch of V4L, video-format and image-format libraries.
Also, you can build it and install it and then delete the source yet continue to use the product.
FWIW, my build area on a Raspberry Pi amounts to 2.1GB - that is the source and a release build without contrib.
OpenCV takes around 5.5 GB of space on your SD card.
From experience: I used a 64 GB card with Raspbian Lite on it. I recommend a 32 GB or larger card for your projects; just know that once you start installing a lot of packages for future projects, you will run out of space. Anything under 32 GB might work, but it is not recommended. Here is a tip: install the latest OpenCV version on your Raspberry Pi.
Here is a tutorial which I have personally followed which works. https://linuxize.com/post/how-to-install-opencv-on-raspberry-pi/

Emulating Raspberry Pi with Docker on OS X

I've been doing a lot of Raspberry Pi work, but that means I have to carry my Pi around (or SSH home), and, well, the Pi isn't the fastest machine in the world. I've been using Docker for running things like Postgres, and was thinking it would be awesome to just download a Docker image of the ARM build of Debian Jessie and have everything function as if it were actually running on a real rPi. Even better if I could then somehow quickly mirror this to an SD card and throw it into a real rPi.
Has anyone explored this? Everything I'm finding is about running Docker on the rPi, not running Docker to emulate an rPi.
Based on the answers and comments to similar questions - such as this one on the Raspberry Pi Stack Exchange site - I think the short answer is "no" (or at least not without a lot of effort).
The problem is that, as mentioned in the comments, Docker doesn't do full-on virtualisation (that's kind of the point of it), so you can't take an ARM Raspbian Docker image and run it on an x86 VirtualBox host - which is what it sounds like you'd like to do.
The Docker image needs to be built for the same architecture as the host system. You get the same problem if you try to run x86 Docker images on a Raspberry Pi acting as a Docker host.
By way of a solution - what I'd suggest is running a Debian VM on your Mac. Raspbian is close enough to Debian that you'll have a fairly "Pi-like" environment to develop in and can copy your code to an SD card when you're done.
If you want an easy way to manage the configuration so that the number of cores, RAM, disk space etc matches your Pi, then Vagrant may be a good solution.
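A minimal sketch of that Vagrant route is below; the box name and sizing are assumptions (roughly a Pi 3: 1 GB RAM, four cores), so swap in whichever Debian box and resources match your actual Pi.

# Sketch: a Debian VM sized roughly like a Raspberry Pi 3 (1 GB RAM, 4 cores).
# The box name is a placeholder -- pick the Debian release you actually target.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
    vb.cpus   = 4
  end
end
EOF

vagrant up
vagrant ssh

Remember this is still an x86 Debian VM, not ARM, so it is only "Pi-like" at the OS level, as described above.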

How to install Torch on windows 8.1?

Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
Q:
Is there a way to install torch on MS Windows 8.1?
I got it installed and running on Windows (although not 8.1, but I don't expect the process to be different) by following the instructions in this repository; it's now deprecated, but it wasn't deprecated a few months ago when I built it. The new instructions point to the torch/torch7 repository, but it has a different structure and I haven't been able to build it on Windows yet.
There are instructions on how to install Torch7 from luarocks, but you may run into issues on Windows there as well; I haven't tried this process. It seems there is no official support for Windows yet, but some work is being done by contributors (there is a link to a pull request in that thread).
Based on my experience, compiling that deprecated repo may be your best option on Windows at the moment.
Update (7/9/2015): I've recently submitted several changes that fix compilation issues with mingw, so you may try the most recent version of torch7 and follow the build instructions in the ticket. Note that the changes only apply to the core lib and additional libraries may need similar changes.
This webpage hosted by New York University recommends installing a Linux virtual machine in order to run Torch7 on Windows through Linux. Another option would of course be to install a Linux distribution alongside Windows 8.
Otherwise, if you don't mind running an older version of Torch, there is a Windows installer for Torch5 at SourceForge.
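If you go the Linux VM route, installing Torch7 inside the VM should follow the usual getting-started steps, roughly as sketched below (the ~/torch location is just the conventional default, not a requirement).

# Inside the Linux VM: the self-contained Torch7 install from the distro repo.
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
bash install-deps      # installs build dependencies via the distro's script
./install.sh           # builds LuaJIT, luarocks and Torch, and updates your shell profile

# Open a new shell (or source your profile), then launch the Torch REPL:
th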
I think that to use a GPU from inside the virtual machine, the processor and the motherboard need to support not only VT-x but also VT-d.
But the question is: if I use a CPU with VT-d support, do you think there will be a significant loss in PCIe connection efficiency?
From what I understand, VT-d is what lets me give the virtual machine direct access to hardware components (like PCI Express cards) - for example, attaching the graphics card directly to the VM instead of the host machine. Doesn't that mean the PCIe connection efficiency will be the same as if the card were attached to the host?
