Google AutoML Vision Exported Model on NVIDIA Jetson Nano - docker

I would like to run the exported model from Google AutoML Vision on an NVIDIA Jetson Nano. Since it looked like the easiest route, I wanted to use the pre-built containers for predictions, following the official Edge containers tutorial.
The problem is that the pre-built CPU container stored in Google Container Registry (gcr.io/automl-vision-ondevice/gcloud-container-1.12.0:latest) is built for the amd64 architecture, while the NVIDIA Jetson Nano uses arm64 (Ubuntu 18.04). That is why 'docker run ...' returns:
docker image error: standard_init_linux.go:211: exec user process caused "exec format error"
What can I do? Should I build a container similar to the pre-built one, but compatible with arm64?

There are two approaches that could help you achieve your goal:
[Idea 1] Export the model in *.tflite format and run detection directly on the device with the TensorFlow Lite runtime.
[Idea 2] Deploy the model as an API service on Google AutoML Vision and call it from Python or any other supported language (see the sketch below).
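For Idea 2, here is a minimal sketch of calling a deployed AutoML Vision model over REST, assuming the v1 endpoint, a model deployed in us-central1, and gcloud available for authentication; YOUR_PROJECT_ID, YOUR_MODEL_ID, and image.jpg are placeholders:

# requires the model to be deployed first; base64 -w0 is the GNU coreutils flag
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://automl.googleapis.com/v1/projects/YOUR_PROJECT_ID/locations/us-central1/models/YOUR_MODEL_ID:predict \
  -d '{"payload": {"image": {"imageBytes": "'"$(base64 -w0 image.jpg)"'"}}}'

Note that Idea 2 trades the architecture problem for a network dependency: the Nano only needs an HTTP client, but every prediction is a round trip to Google's servers.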

Related

Drake/Meshcat Visualizer example on Ubuntu 22.04 with apt installation

I am trying to use Drake C++ on Ubuntu 22.04 and have installed it using a nightly build. The installation lands in the /opt folder, and the examples in drake-external-examples run fine.
drake-visualizer is not supported on Ubuntu 22.04, and I could not find the visualizer binary in the /opt folder. I was struggling to find the equivalent when Drake is installed via apt/tar and used with CMake outside the Drake source tree. I have been able to run the Meshcat visualizer in the Drake source examples.
I would appreciate a link to an example that uses the Meshcat visualizer with an apt/tar installation.
If you were previously using /opt/drake/bin/drake-visualizer, you'll probably be best suited to use Meldis as the next step. Meldis will listen for LCM visualizer traffic just like drake-visualizer used to do.
Here's the command when using the apt installation into /opt/drake on Ubuntu 22.04, as of Drake v1.12.0:
env PYTHONPATH=${PYTHONPATH}:/opt/drake/lib/python3.10/site-packages python3 -m pydrake.visualization.meldis -w
Sorry, I know that's a mouthful. We'll work on adding a shortcut and better documentation in a future release.
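Until that shortcut lands, one workaround is to wrap the command in a shell alias; a sketch, assuming bash and the apt install location above (the alias name meldis is arbitrary):

# add to ~/.bashrc so it persists across sessions
alias meldis='env PYTHONPATH=${PYTHONPATH}:/opt/drake/lib/python3.10/site-packages python3 -m pydrake.visualization.meldis -w'

The single quotes matter: they defer the ${PYTHONPATH} expansion until the alias is actually invoked.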

Cannot infer on Movidius (NCS2) using OpenVINO Workbench through Docker: Drivers setup failed?

I am trying to run some inferences using the OpenVINO Workbench Docker image (https://hub.docker.com/r/openvino/workbench). Everything works well using my CPU as the target device (Configuration -> Select Environment). But I get the following error when I select my Intel Movidius Myriad X VPU (a Neural Compute Stick 2):
"Cannot infer this model on Intel(R) Movidius(TM) Neural Compute Stick 2 (NCS 2). Possible causes: Drivers setup failed. Update the drivers or run inference on a CPU."
I did not change the start_workbench.sh script. Here are my execution params:
./start_workbench.sh -IMAGE_NAME openvino/workbench -TAG latest -ENABLE_MYRIAD -DETACHED -ASSETS_DIR /hdd-raid0/openvino_workbench
However, I can play with the NCS2 using the classification or cross check commands provided by https://hub.docker.com/r/openvino/ubuntu18_dev.
Any ideas? Thanks!
This is how you can use a Docker* Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs: https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_docker_linux.html
Kindly navigate to the specific topic. You will find that there are a few additional steps required before the NCS2 can be used with Docker.
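For reference, the gist of those extra steps, as of the OpenVINO docs linked above, is that the container needs raw access to the USB bus, because the Myriad stick re-enumerates as a different USB device when it boots its firmware. A sketch with the plain dev image (the exact flags may differ per release):

# c 189:* is the USB character-device major number; the cgroup rule lets the
# container reopen the stick after it re-enumerates
docker run -it --rm \
  --device-cgroup-rule='c 189:* rmw' \
  -v /dev/bus/usb:/dev/bus/usb \
  openvino/ubuntu18_dev

The Workbench's -ENABLE_MYRIAD flag is supposed to apply the same settings, so if it still fails it is worth checking that the host's Myriad udev rules are installed.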

How do I pull Docker images with specific library versions installed in them?

I have an outdated neural-network training Python 2.7 script which uses Keras 2.1 on top of TensorFlow 1.4, and I want to train it on my NVIDIA GPU; I have the CUDA SDK 10.2 installed on Linux. I thought Docker Hub was exactly for publishing frozen software packages that just work, but it seems there is no way to find a container with a specific software set.
I know Docker >= 19.03 has native GPU support, and that the nvidia-docker utility provides a CUDA-agnostic layer; but the problem is that I cannot install both keras-gpu and tensorflow-gpu in the required versions, cannot find wheels, and this legacy script does not work with other versions.
Where did you get the idea that Docker Hub hosts images with all possible library combinations?
If you cannot find an image that suits you, simply build your own.
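In this particular case, the official TensorFlow images on Docker Hub do go back far enough. A minimal sketch, assuming the tensorflow/tensorflow:1.4.0-gpu tag (Python 2.7 with the matching CUDA/cuDNN runtime bundled) is still pullable and a Keras 2.1.x wheel is still on PyPI; train.py is a placeholder for the legacy script:

docker build -t legacy-trainer - <<'EOF'
# the TF 1.4 GPU image ships Python 2.7 plus the CUDA/cuDNN it was built against
FROM tensorflow/tensorflow:1.4.0-gpu
# pin Keras to the 2.1 series the script expects
RUN pip install 'keras==2.1.*' h5py
EOF
docker run --gpus all -v "$PWD":/work -w /work legacy-trainer python train.py

The host only needs a recent NVIDIA driver plus the NVIDIA container toolkit; the CUDA userspace inside the image is independent of the CUDA 10.2 SDK installed on the host.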

Install Intel IPP (libipps.so) in Docker image

I'm creating a Docker image, based on ubuntu:latest, to run my Node.js app, and I need the Intel Integrated Performance Primitives (IPP) installed in it.
I've looked online for how to do this but haven't really found anything.
Any ideas?
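One option worth trying is Intel's oneAPI apt repository, which packages IPP for Ubuntu. A sketch, assuming the repository layout and package name from Intel's oneAPI install docs (both may change between releases):

docker build -t node-ipp - <<'EOF'
FROM ubuntu:latest
# register Intel's oneAPI apt repository, then install the IPP development package
RUN apt-get update && apt-get install -y --no-install-recommends wget gnupg ca-certificates \
 && wget -qO- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
    | gpg --dearmor -o /usr/share/keyrings/oneapi.gpg \
 && echo "deb [signed-by=/usr/share/keyrings/oneapi.gpg] https://apt.repos.intel.com/oneapi all main" \
    > /etc/apt/sources.list.d/oneAPI.list \
 && apt-get update && apt-get install -y intel-oneapi-ipp-devel \
 && rm -rf /var/lib/apt/lists/*
# the shared libraries (libipps.so and friends) land under the oneAPI install tree
ENV LD_LIBRARY_PATH=/opt/intel/oneapi/ipp/latest/lib/intel64
EOF

If the Node.js app loads libipps.so through a native addon, making the library directory visible via LD_LIBRARY_PATH (as above) or ldconfig should be enough.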

Public Windows Docker image for Azure Machine Learning

Our machine learning workflow requires the use of a custom Windows .pyc file. Where can I find a Windows Docker image?
I am puzzled by this statement from https://learn.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-custom-docker-image#create-a-custom-base-image. Is it really true that Azure cannot use Windows images?
Image requirements: Azure Machine Learning only supports Docker images that provide the following software:
Ubuntu 16.04 or greater.
Conda 4.5.# or greater.
Python 3.5.# or 3.6.#.
Searching on Docker Hub also did not turn up anything promising.
That is correct. The Azure ML Service currently only supports Linux for dockerized execution.
