Using Artifactory to manage machine learning models - machine-learning

Is there any guideline or best practice for storing machine learning models? We can store them as binary files. However, a machine learning model is more than the model artifact: it also involves data, code, hyperparameters, and metrics. I wonder if there is any practice around integrating Artifactory with a CI/CD process (using it to manage ML model artifacts and metadata, and supporting both automated and human-in-the-loop model promotion)?
The only article I have found that touches on this topic, and only lightly, is:
https://towardsdatascience.com/who-moved-my-binaries-7c4d797cd783
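For concreteness, the kind of workflow I have in mind looks roughly like the sketch below: the model binary is pushed to a generic Artifactory repository with searchable properties carrying the metadata, and a later CI/CD step promotes it by copying it to a release repository. Repository names, paths, properties, and credentials here are made up for illustration.

```python
# Hypothetical sketch: upload a trained model to a generic Artifactory repo,
# with matrix-parameter properties holding the metadata (git commit, dataset
# version, metrics). All names and credentials below are placeholders.
import requests

ARTIFACTORY = "https://artifactory.example.com/artifactory"  # placeholder host
REPO = "ml-models-staging"                                   # placeholder generic repo
PATH = "churn-model/1.4.0/model.pkl"

properties = ";".join([
    "git.commit=abc1234",
    "dataset.version=2024-01-15",
    "metrics.auc=0.91",
])

with open("model.pkl", "rb") as f:
    resp = requests.put(
        f"{ARTIFACTORY}/{REPO}/{PATH};{properties}",
        data=f,
        auth=("ci-user", "ci-password"),  # or an access token
    )
resp.raise_for_status()
print(resp.json().get("downloadUri"))
```

Promotion (automated or human-approved) could then be a copy/move of the artifact from the staging repository to a release repository, or a property update, triggered from the CI/CD pipeline.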

Related

Best practices for serving user-specific large ML/DL models in a web application?

First, excuse any naive statements you may find below; I'm a newcomer to the ML/DL field.
How do web applications that integrate fine-tuning of large machine learning/deep learning models handle the storage and retrieval of these models for inference?
I'm trying to implement a web app that allows users to fine-tune a Stable Diffusion model on their own images with DreamBooth. The fine-tuned model is quite large, reaching several gigabytes. After the model is trained and saved, the app should retrieve and use it for inference each time a user visits the site and requests one.
The current approach I am considering is to store the fine-tuned model in a compressed format in a S3 or R2 bucket. Each time a user visits the web app and requests an inference, I would retrieve the model from the bucket, decompress it, and run the inference.
That being said, adding the overhead of fetching and decompression to every inference is obviously not a good idea.
I'm fairly sure there is a standard approach the community follows for handling such scenarios. What are those approaches, if they exist? How are these scenarios typically handled?
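For reference, the approach I'm considering looks roughly like the sketch below; the bucket name and cache directory are placeholders, and I'm assuming the diffusers library for loading the fine-tuned weights.

```python
# Rough sketch of the fetch-decompress-cache idea (placeholder names throughout).
import os
import tarfile
import boto3
from diffusers import StableDiffusionPipeline

BUCKET = "user-finetuned-models"   # placeholder S3/R2 bucket
CACHE_DIR = "/tmp/model-cache"

_pipelines = {}  # user_id -> loaded pipeline kept in memory

def get_pipeline(user_id: str) -> StableDiffusionPipeline:
    if user_id in _pipelines:
        return _pipelines[user_id]

    local_dir = os.path.join(CACHE_DIR, user_id)
    if not os.path.isdir(local_dir):
        os.makedirs(local_dir, exist_ok=True)
        archive = os.path.join(local_dir, "model.tar.gz")
        boto3.client("s3").download_file(BUCKET, f"{user_id}/model.tar.gz", archive)
        with tarfile.open(archive) as tar:
            tar.extractall(local_dir)          # decompress once, reuse afterwards

    pipe = StableDiffusionPipeline.from_pretrained(local_dir)
    _pipelines[user_id] = pipe
    return pipe
```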

Does AutoML accept external models?

I used random search and got the best hyperparameters for my model; can I pass that model to AutoML?
Does AutoML do the random search for the best hyperparameters by itself, or is there something I need to pass?
I presume you're referring to Google Cloud AutoML. It is a cloud-based machine learning (ML) platform that offers a no-code approach to building data-driven solutions. AutoML was designed to build custom models for both newcomers and experienced machine learning engineers.
For newcomers, you could use Vertex AI (fully automated) to build an ML model.
For experienced ML engineers, you could also use AutoML Tabular to build a custom model, with the ability to select a model and input the selected hyperparameters.
You can read more details here.

Deploy Machine Learning Model

I created a machine learning pipeline that goes from training the model to deploying it as a web service. I put everything on GitHub, but I did not include the training dataset because GitHub limits file sizes to 100 MB. After training, I save the model and the necessary files into a .pkl file. The model file itself is ~300 MB, so I can't upload it to GitHub either. I connected my repo to Heroku and tried to send a request, but then I realized that I have neither the model nor the training dataset, so I can't make a request.
Is there any best practice for deploying a machine learning model given these GitHub limitations?
Please advise.
GitHub is a version control system. Technically, your repository should not contain training data or trained models.
Most real-life machine learning systems store trained models in file or object storage, for instance S3.
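To make that concrete, a minimal sketch of the pattern could look like the following, assuming boto3 and joblib and placeholder bucket/key names: the model is uploaded once after training and pulled from S3 when the web service starts, so nothing large ever lives in the Git repo.

```python
# Minimal sketch (placeholder bucket/key): keep trained models out of Git and
# in object storage instead.
import boto3
import joblib

BUCKET = "my-ml-artifacts"
KEY = "models/model-v1.pkl"

def upload_model(model) -> None:
    """Run once after training: persist the fitted model and push it to S3."""
    joblib.dump(model, "model.pkl")
    boto3.client("s3").upload_file("model.pkl", BUCKET, KEY)

def load_model():
    """Run once at web-service startup (e.g. on Heroku): fetch and load the model."""
    boto3.client("s3").download_file(BUCKET, KEY, "/tmp/model.pkl")
    return joblib.load("/tmp/model.pkl")
```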

Kubernetes Machine Learning Model Serving

Is there a suggested way to serve hundreds of machine learning models in Kubernetes?
Solutions like KFServing seem to be more suitable for cases where there is a single trained model, or a few versions of it, and this model serves all requests. For instance, a typeahead model that is universal across all users.
But is there a suggested way to serve hundreds or thousands of such models? For example, a typeahead model trained specifically on each user's data.
The most naive way to achieve something like that would be for each typeahead serving container to maintain a local cache of models in memory (roughly like the sketch below). But then scaling to multiple pods becomes a problem, because each cache is local to its pod, so each request would need to be routed to the specific pod that has the model loaded.
Also, having to maintain a registry of which pod has loaded which model, and updating it on model eviction, seems like a lot of work.
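To make the naive approach concrete, here is roughly what I mean; load_user_model is a placeholder for whatever actually reads the per-user model from storage.

```python
# Sketch of a bounded in-memory LRU cache of per-user models inside one
# serving pod. load_user_model() is a placeholder loader.
from collections import OrderedDict

class ModelCache:
    def __init__(self, max_models: int = 50):
        self.max_models = max_models
        self._cache = OrderedDict()

    def get(self, user_id: str):
        if user_id in self._cache:
            self._cache.move_to_end(user_id)    # mark as most recently used
            return self._cache[user_id]
        model = load_user_model(user_id)        # placeholder: fetch from storage
        self._cache[user_id] = model
        if len(self._cache) > self.max_models:
            self._cache.popitem(last=False)     # evict the least recently used model
        return model
```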
You can use Catwalk, Grab's self-serve machine learning model serving platform.
Grab has a tremendous amount of data that we can leverage to solve complex problems such as fraudulent user activity, and to provide our customers personalized experiences on our products. One of the tools we are using to make sense of this data is machine learning (ML). That is how Catwalk is created: an easy-to-use, self-serve, machine learning model serving platform for everyone at Grab.
More information about Catwalk can be found here: Catwalk.
You can serve multiple Machine Learning models using TensorFlow and Google Cloud.
The reason the field of machine learning is experiencing such an epic boom is because of its real potential to revolutionize industries and change lives for the better. Once machine learning models have been trained, the next step is to deploy these models into usage, making them accessible to those who need them, be they hospitals, self-driving car manufacturers, high-tech farms, banks, airlines, or everyday smartphone users. In production, the stakes are high and one cannot afford to have a server crash, connection slow down, etc. As our customers increase their demand for our machine learning services, we want to seamlessly meet that demand, be it at 3AM or 3PM. Similarly, if there is a decrease in demand we want to scale down the committed resources so as to save cost, because as we all know, cloud resources are very expensive.
More information can be found here: machine-learning-serving.
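As a rough illustration of the multi-model setup, a single TensorFlow Serving instance can host several models declared in its model config file, and each one is addressable by name over the REST API. The host, port, and model names below are placeholders.

```python
# Sketch: call one of several models hosted by TensorFlow Serving via its REST
# API (placeholder Kubernetes service name and model name).
import requests

SERVING = "http://tf-serving.default.svc.cluster.local:8501"

def predict(model_name: str, instances):
    resp = requests.post(
        f"{SERVING}/v1/models/{model_name}:predict",
        json={"instances": instances},
    )
    resp.raise_for_status()
    return resp.json()["predictions"]

# e.g. predict("typeahead_user_123", [[0.1, 0.5, 0.3]])
```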
Also you can use Seldon.
Seldon Core is an open source platform for deploying machine learning models on a Kubernetes cluster.
Features:
deploying machine learning models in the cloud or on-premise.
gaining metrics ensuring proper governance and compliance for your running machine learning models.
creating inference graphs made up of multiple components.
providing a consistent serving layer for models built using heterogeneous ML toolkits.
Useful documentation: Kubernetes-Machine-Learning.
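As a sketch of what that "consistent serving layer" looks like in practice, Seldon's Python wrapper expects a model class with a predict method; seldon-core then packages it into a container that a SeldonDeployment resource can run on the cluster. The class name and artifact path below are illustrative.

```python
# Illustrative model class for Seldon Core's Python wrapper. The artifact path
# and class name are placeholders; the wrapper calls predict() for each request.
import joblib

class ChurnClassifier:
    def __init__(self):
        # Load the trained artifact once, when the serving container starts.
        self.model = joblib.load("/mnt/models/model.pkl")

    def predict(self, X, features_names=None):
        # X is the request payload as an array; return the model's output.
        return self.model.predict_proba(X)
```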

Migrate from running ML training and testing locally to Google Cloud

I currently have a simple machine learning setup running locally and I want to migrate all of it to Google Cloud. I simply fetch the data I need from a database, build my model, and then test the model on test data. This is all done locally in PyCharm.
I want to migrate this so it can all be done on Google Cloud, while keeping the flexibility to make local changes that also apply when run in the cloud. There are many Google Cloud resources related to this, so I am looking for the best practices people follow for such a procedure.
Thanks and please let me know if there are any clarifications needed.
I highly suggest you take a look at this machine learning workflow in the cloud, which consists of:
Data Ingestion and Collection
Storing the data
Processing data
ML training
ML deployment
Data Ingestion and Collection
There are multiple resources you can use if you would like to ingest data into Google Cloud Platform. The simplest solutions I can recommend are Google Compute Engine or an App Engine app (for example, a form where a user fills in some data).
Nonetheless, if you would like to ingest data in real time, you can also use Cloud Pub/Sub.
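For real-time ingestion, a publisher can be as small as the sketch below, assuming the google-cloud-pubsub client library; the project and topic IDs are placeholders.

```python
# Sketch: publish an event to a Cloud Pub/Sub topic (placeholder project/topic).
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "raw-events")

event = {"user_id": 42, "action": "clicked"}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print(future.result())  # message ID once Pub/Sub has acknowledged the publish
```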
Storing the data
As you mentioned, you are retrieving all the information from a database. If you are used to working with SQL or NoSQL, I highly suggest you go with Cloud SQL. Not only does it provide a good interface when building your instance, it also lets you access the data securely and very quickly.
If that is not the case, you can also use Google Cloud Storage or BigQuery; of those two, I would pick BigQuery, since it can also work with streaming data.
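Either way, pulling training data out of BigQuery into a DataFrame is short, assuming the google-cloud-bigquery client library; the project, dataset, and table names below are placeholders.

```python
# Sketch: read training data from BigQuery into pandas (placeholder names).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

sql = """
    SELECT feature_a, feature_b, label
    FROM `my-gcp-project.ml_dataset.training_data`
"""
df = client.query(sql).to_dataframe()
print(df.shape)
```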
Processing data
For processing data before feeding it to the model, you can use one of the following:
Cloud DataFlow: Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness -- no more complex workarounds or compromises needed.
Cloud Dataproc: Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way.
Cloud Dataprep: Cloud Dataprep by Trifacta is an intelligent data service for visually exploring, cleaning, and preparing structured and unstructured data for analysis, reporting, and machine learning.
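To give an idea of what the Dataflow option involves, pipelines for it are written with Apache Beam; the sketch below reads raw CSV from Cloud Storage, applies a placeholder cleaning step, and writes the result back. Paths and the transformation are illustrative, and running it on Dataflow would additionally require the DataflowRunner options.

```python
# Sketch of a Beam pipeline (what Cloud Dataflow executes); paths and the
# cleaning logic are placeholders. Runs locally with the default DirectRunner.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def clean_row(line: str) -> str:
    # Placeholder transformation: trim whitespace around each CSV field.
    return ",".join(field.strip() for field in line.split(","))

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/data.csv")
        | "Clean" >> beam.Map(clean_row)
        | "Write" >> beam.io.WriteToText("gs://my-bucket/processed/data")
    )
```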
ML training & ML deployment
For training/deploying your ML model, I would suggest using AI Platform.
AI Platform makes it easy for machine learning developers, data scientists, and data engineers to take their ML projects from ideation to production and deployment, quickly and cost-effectively.
If you have to work with huge datasets, the best practice is to run the training as a TensorFlow job on AI Platform so you can use a training cluster.
Finally, for deploying your models using AI Platform, you can take a look here.
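As a rough sketch of the training side, a job could be submitted to AI Platform Training through its REST API with google-api-python-client; the project, bucket, package, and version numbers below are placeholders, and Vertex AI now offers a newer path for the same thing.

```python
# Sketch: submit a training job to AI Platform Training via its v1 REST API
# (placeholder project/bucket/package; assumes google-api-python-client and
# application default credentials).
from googleapiclient import discovery

ml = discovery.build("ml", "v1")

job_spec = {
    "jobId": "churn_training_20240101",
    "trainingInput": {
        "region": "us-central1",
        "scaleTier": "STANDARD_1",   # managed training cluster instead of a single machine
        "packageUris": ["gs://my-bucket/packages/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "runtimeVersion": "2.11",
        "pythonVersion": "3.7",
    },
}

response = ml.projects().jobs().create(
    parent="projects/my-gcp-project", body=job_spec
).execute()
print(response)
```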
