How to fetch TPU utilization details on the Coral Dev Board using tflite_runtime or a Linux command - google-coral

I am just trying to fetch the TPU utilization details on the Coral Dev Board, but I can't find anything relevant on this!
Please share your findings / suggestions!
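As far as I can tell, the Edge TPU runtime does not expose a utilization counter, either through tflite_runtime or through a standard Linux command. A rough proxy is to time invoke() calls on the device. A minimal sketch, assuming an Edge-TPU-compiled model at model_edgetpu.tflite (a placeholder path):

# Rough proxy for TPU load: time each invoke() through tflite_runtime.
import time
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder: any Edge-TPU-compiled model
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

start = time.perf_counter()
for _ in range(100):
    interpreter.invoke()  # input buffer contents don't matter for timing
elapsed = time.perf_counter() - start
print("mean inference time: %.2f ms" % (elapsed / 100 * 1000))

Comparing this latency against the model's expected per-inference time at least gives a relative sense of how busy the TPU is.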

Related

How to develop a model on a CPU before migrating

I am building a multivariate LSTM to model longitudinal data with PyTorch.
I have installed Graphcore PyTorch (3.1, which includes Poplar and PopART) and the tools from Docker. Rather than installing an IPU immediately, can I develop the model on the CPU to start with, before adding or migrating to an IPU? When I issue any gc-* command it reports no IPU available, which I know is true!
I generally prefer to run on bare metal [Ubuntu 20.04 LTS, AMD 1950X Threadripper] rather than via VMs. Do I need a Graphcore account to do this, so I can sign the licence agreement etc.? I guess that is implied by the Docker application.
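For what it's worth, a plain PyTorch LSTM runs on the CPU with no IPU attached (the gc-* tools only report on physical devices), so the model itself can be developed first and wrapped for an IPU later. A minimal CPU-only sketch; all names and shapes are illustrative:

import torch
import torch.nn as nn

class LongitudinalLSTM(nn.Module):
    # Multivariate LSTM: predicts one value from the last time step.
    def __init__(self, n_features=8, hidden=64, n_out=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_out)

    def forward(self, x):  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])

model = LongitudinalLSTM()
x = torch.randn(32, 50, 8)  # 32 sequences, 50 time steps, 8 features
print(model(x).shape)       # torch.Size([32, 1])

Once an IPU is available, the same module can in principle be handed to PopTorch for compilation.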

Neo4j 4.4 Community scalability

My Neo4j Community database has reached nearly 1 TB and I have only 64 GB of RAM. Can I use Fabric to scale the database horizontally and share the load between different servers, or is Fabric only for Enterprise-licensed Neo4j versions?
From the Getting Started with Neo4j Fabric blog:
"Fabric is an Enterprise-only feature, meaning it is not available for the Neo4j Community Edition."
The licensing FAQ says:
"Neo4j Enterprise Edition is also available for free for a number of uses."
You could perhaps state your use case and ask them for a free Enterprise license.
Either way, the Community Edition doesn't have an option for scaling/sharding.

Cannot infer on Movidius (NCS2) using OpenVINO Workbench through Docker: Drivers setup failed?

I am trying to run some inferences using the OpenVINO Workbench Docker image (https://hub.docker.com/r/openvino/workbench). Everything works well using my CPU as the target device (Configuration -> Select Environment), but I get the following error when I select my Intel Movidius Myriad X VPU (a Neural Compute Stick 2):
"Cannot infer this model on Intel(R) Movidius(TM) Neural Compute Stick 2 (NCS 2). Possible causes: Drivers setup failed. Update the drivers or run inference on a CPU." (cf. attached screenshot)
I did not change the start_workbench.sh script. Here are my execution params:
./start_workbench.sh -IMAGE_NAME openvino/workbench -TAG latest -ENABLE_MYRIAD -DETACHED -ASSETS_DIR /hdd-raid0/openvino_workbench
However, I can play with the NCS2 using the classification or cross-check commands provided by https://hub.docker.com/r/openvino/ubuntu18_dev.
Any idea? Thanks!
This is how you can use a Docker* image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_docker_linux.html
Navigate to the relevant topic there; you will find that there are a few additional steps to complete before the NCS2 can be used with Docker.
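In particular, the container needs access to the USB device (the documented run options include mounting /dev/bus/usb and adding a --device-cgroup-rule for the USB character devices). As a quick sanity check that the stick is visible from inside the container, a hedged sketch (03e7 is the Intel Movidius USB vendor ID; assumes lsusb is installed in the container):

import subprocess

# List USB devices and look for the Movidius vendor ID.
usb = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
movidius = [line for line in usb.splitlines() if "03e7" in line.lower()]
if movidius:
    print("NCS2 visible:")
    print("\n".join(movidius))
else:
    print("No Movidius device visible; check the /dev/bus/usb mount and cgroup rule.")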

Can I use Tensorflow on Orange pi 4G IOT with Ubuntu?

I am trying to build an imaging system and I want to use TensorFlow with the Orange Pi 4G-IoT. Does anyone know whether this is possible, and what the limitations are?
As far as I can see, the Orange Pi 4G-IoT is still not compatible with Ubuntu, but I hope it will be in the near future. I would be happy with any information you could give me.
The official CI server for TensorFlow has nightly builds with Python wheels for Raspberry Pi (armv7l). It is not officially supported by TensorFlow yet (they officially support only 64-bit architectures so far), but I managed to get YOLO-Keras working on an Orange Pi PC Plus using their nightly-build wheel file.
You can also find the scripts they used for building the wheel (it's actually cross-built using a Docker container) in the tensorflow/tensorflow/tools/ci_build directory of the source code.
Some people have also provided guides for native building, but that generally requires more effort to get working.
I suggest you start by trying the Python wheel file for TensorFlow v1.8.0 for the Raspberry Pi armv7l architecture, found here.
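After installing the wheel with pip on the board, a short TF 1.x smoke test confirms the build actually works on armv7l; a minimal sketch:

import tensorflow as tf

print(tf.__version__)  # should report 1.8.0 for that wheel

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
with tf.Session() as sess:  # TF 1.x graph-mode API
    print(sess.run(tf.matmul(a, b)))  # [[11.]]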

Hardware requirements for PlasticSCM server

I'm evaluating PlasticSCM on a VMware machine with 4 GB of RAM and a 4-core CPU. Since I ported our trunk into the server (about 6 GB of data), the service has run out of memory (started swapping). I've increased the VM's RAM to 6 GB, which is actually more than I'd like to load the host system with, since I've also got VMs for the PlasticSCM client, the TeamCity server, and a TeamCity agent.
I was trying to find a spec with details on the hardware requirements for running a PlasticSCM server that accounts for scaling. So far, I've only found the minimum requirements (512 MB RAM etc.) and the system information from your heavy-load and scale tests. As far as I can see, it's all about RAM. :)
Anyway, is there a detailed spec with recommendations for the hardware to use?
P.S.: Of course, in case of switching to Plastic, we'd run the service on a real machine instead of a VM.
