Deploying Apple GPU trained TensorFlow model - Docker

I'm very new to AI and ML, but very much not new to web dev.
I have trained a TensorFlow implementation of pix2pix on my M1 GPU. I've wrapped it in a Flask server and I want to deploy it. I've got it running locally in a Docker container, but when I deploy it to Google Cloud Run there seem to be issues related to training it on ARM and then deploying it on something different (I assume x86, but I can't find docs to confirm).
I also notice that the image I get back from the local Docker instance is very different from running directly on localhost: the output is much lower quality in Docker (from an AI perspective, not image resolution).
I'm wondering if there are specific things that need to be done when training on Apple Silicon and then deploying on more traditional cloud hardware?
Should I just train and develop in the cloud? Seems like a waste of a great local GPU.
I appreciate this is vague, my understanding of this area is low.
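One way to confirm the ARM-vs-x86 suspicion is to report the architecture from inside the running container; a minimal stdlib sketch (exposing it on a Flask debug route is just one illustrative option):

```python
import platform
import sys

def runtime_report() -> dict:
    """Return the architecture and Python version this process runs under.

    On an M1 Mac, platform.machine() reports 'arm64'; inside a container
    built for linux/amd64 it should report 'x86_64'. A mismatch between the
    training and serving environments shows up here.
    """
    return {
        "machine": platform.machine(),   # e.g. 'arm64' vs 'x86_64'
        "system": platform.system(),     # 'Linux' inside the container
        "python": sys.version.split()[0],
    }

print(runtime_report())
```

If the deployed container reports x86_64 while training happened on arm64, rebuilding explicitly for the target platform (e.g. `docker buildx build --platform linux/amd64 .`) makes the local and deployed images match; differing model output between environments can also come from a different TensorFlow build being installed per architecture.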

Related

Problem with Reinforcement Learning Algorithm in AWS Sagemaker using custom Docker Image

I am not using any of the recommended algorithms from the AWS SageMaker examples. My question is simply whether this error occurs because of the way I create the Docker image.
I use a MacBook Pro M1 to create the Docker image, and I am launching it on an ml.m5.xlarge instance. (First, I want to know whether this is the problem.)
I should also mention that my algorithm is a bit unusual, in the sense that it is a raw RL job, not optimised with Ray or Stable Baselines. When I tried to use RLEstimator in SageMaker instead of the Estimator class, I wasn't able to execute the training. However, I do not believe this error is due to that. I'm hoping to get insight from anyone with experience in AWS SageMaker and Docker.
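For what it's worth, ml.m5.xlarge is an x86_64 instance, while Docker on an M1 builds linux/arm64 images by default, so an architecture mismatch is a plausible cause. A hedged sketch of pinning the platform (file names and paths below are placeholders):

```dockerfile
# Pin the base image to x86_64 so it matches ml.m5.xlarge,
# even when building on an Apple Silicon Mac.
FROM --platform=linux/amd64 python:3.10-slim
COPY train.py /opt/ml/code/train.py
ENTRYPOINT ["python", "/opt/ml/code/train.py"]
```

Equivalently, keep the Dockerfile as-is and build with `docker buildx build --platform linux/amd64 -t my-rl-image .` before pushing to ECR.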

I would like to deploy an API using PyTorch. Is there any good deployment location?

I made an API using PyTorch, FastAPI and OpenCV, but when I deploy it to Heroku, it's too big to deploy. So I'm thinking of deploying it on another site. Are there any good options? Ideally one that is free and doesn't force me to worry about the size of PyTorch, FastAPI and OpenCV.
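Most of the image size usually comes from the default CUDA build of PyTorch; if inference is CPU-only, the CPU wheels plus headless OpenCV cut the footprint considerably. A sketch of a slimmer image (the module path `app.main:app` is an assumption):

```dockerfile
FROM python:3.10-slim
# CPU-only PyTorch wheels are far smaller than the default CUDA build
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu && \
    pip install --no-cache-dir fastapi uvicorn opencv-python-headless
COPY app /app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]
```

A container platform without Heroku's slug-size limit (e.g. a free-tier container service) can then run this image directly.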

Migrate from running ML training and testing locally to Google Cloud

I currently have a simple machine learning workflow running locally and I want to migrate it all onto Google Cloud. I simply fetch the data I need from a database, build my model, and then test the model on test data. This is all done locally in PyCharm.
I want to migrate this so that it can all be done on Google Cloud, while keeping the flexibility to make local changes that also apply when run in the cloud. There are many Google Cloud resources relating to this, so I am looking for the best practices people follow for such a setup.
Thanks and please let me know if there are any clarifications needed.
I highly suggest you take a look at this machine learning workflow in the cloud, which consists of:
Data Ingestion and Collection
Storing the data
Processing data
ML training
ML deployment
Data Ingestion and Collection
There are multiple resources you can use to ingest data on Google Cloud Platform. The simplest solutions I can recommend are Google Compute Engine or an App Engine app (for example, a form where a user fills in some data).
Nonetheless, if you would like to ingest data in real time, you can also use Cloud Pub/Sub.
Storing the data
As you mentioned, you are retrieving all the information from a database. If you are used to working with SQL or NoSQL, I highly suggest you go with Cloud SQL. Not only does it provide a good interface when building your instance, it also lets you access it securely and very quickly.
If that is not the case, you can also use Google Cloud Storage or BigQuery; of those two, I would pick BigQuery, since it can also work with streaming data.
Processing data
For processing data before feeding it to the model you can use either:
Cloud Dataflow: Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness -- no more complex workarounds or compromises needed.
Cloud Dataproc: Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way.
Cloud Dataprep: Cloud Dataprep by Trifacta is an intelligent data service for visually exploring, cleaning, and preparing structured and unstructured data for analysis, reporting, and machine learning.
ML training & ML deployment
For training and deploying your ML model, I would suggest using AI Platform.
AI Platform makes it easy for machine learning developers, data scientists, and data engineers to take their ML projects from ideation to production and deployment, quickly and cost-effectively.
If you have to work with huge datasets, the best practice is to run the model as a TensorFlow job on AI Platform so you can use a training cluster.
Finally, for deploying your models with AI Platform, you can take a look here.
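Submitting a training job to AI Platform typically looks like the following CLI sketch (the job name, region, package layout, and versions are placeholders; check the current docs for supported runtime versions):

```shell
# Submit a packaged trainer as a managed AI Platform training job
gcloud ai-platform jobs submit training my_training_job \
    --region=us-central1 \
    --module-name=trainer.task \
    --package-path=./trainer \
    --runtime-version=2.1 \
    --python-version=3.7 \
    --scale-tier=STANDARD_1
```

The `--scale-tier` flag is what turns a single-machine run into a training cluster for large datasets.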

How to choose a parallel computing framework for machine learning?

How do I choose a parallel computing framework for machine learning? I am a beginner; I saw there are Spark, Hadoop, OpenMP... What should I consider besides the language?
Look up Horovod from Uber. It's specifically designed for machine learning and is available for several frameworks such as TensorFlow and PyTorch. It's also available in the Docker image repository on AWS.

Splitting OpenCV operations between frontend and backend processors

Is it possible to split an OpenCV application into frontend and backend modules, such that the frontend runs on thin clients that have very limited processing power (Intel Atom dual-core processors with 1-2 GB RAM), and the backend does most of the computational heavy lifting, e.g. using Google Compute Engine?
Is this possible with the additional constraint that the network communication between frontend and backend is not fast, e.g. limited to 128-256 kbps?
Are there any precedents of this kind? Is there any such open-source project?
Are there some common architectural patterns that could help in such a design?
Additional clarification:
The frontend node need NOT be purely a frontend, as in only running the user interface. I would imagine that certain OpenCV algorithms could run on the frontend node, which would be especially useful in reducing the amount of data that needs to be sent to the backend for processing (e.g. colour-space transformation, conversion to grayscale, histograms, etc.). I've successfully tested real-time face detection (Haar cascade) on this low-end machine, so the frontend node can pull some workload. In fact, I'd prefer to do most of the work on the frontend and only push those computation-heavy aspects to the backend that are clearly and definitely beyond the computational power of the frontend computer.
What I am looking for are suggestions/ideas on the kinds of algorithms that are best run on Google Compute Engine, and tried-and-tested architectural patterns for use with OpenCV to achieve such a split.
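Whether the 128-256 kbps constraint is workable can be sanity-checked with back-of-the-envelope arithmetic; the frame sizes below are assumptions for illustration:

```python
def transfer_seconds(num_bytes: int, link_kbps: float = 128.0) -> float:
    """Seconds needed to push num_bytes over a link of link_kbps kilobits/s."""
    return (num_bytes * 8) / (link_kbps * 1000)

# Illustrative 640x480 frame sizes in bytes (the JPEG figure is an assumption)
RAW_RGB = 640 * 480 * 3    # uncompressed colour frame
GRAYSCALE = 640 * 480      # after frontend grayscale conversion
JPEG_EST = 30_000          # heavily compressed grayscale JPEG

print(transfer_seconds(RAW_RGB))    # 57.6 seconds per frame at 128 kbps
print(transfer_seconds(JPEG_EST))   # 1.875 seconds per frame
```

Even an aggressively compressed frame takes close to two seconds on the slow link, which is why sending compact frontend results (histograms, detection boxes, feature vectors) rather than pixels tends to be the only viable split at these bitrates.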
