Upload a locally pre-trained model into Databricks - machine-learning

Is it possible to upload a pre-trained machine learning model that was trained in a different environment into Databricks and serve it, or is that impossible on Databricks?

The best way to use a model trained in another environment is MLflow. You can save several models with different versions and load them in any Databricks environment. I advise you to consult the documentation here.
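As a minimal sketch of that workflow, assuming a scikit-learn model and a registry-capable MLflow tracking server reachable from Databricks (the registry name "my_pretrained_model" is a placeholder):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Stand-in for a model trained in another environment.
X, y = load_iris(return_X_y=True)
trained_model = LogisticRegression(max_iter=200).fit(X, y)

# Log the model to MLflow and register a new version of it.
with mlflow.start_run():
    mlflow.sklearn.log_model(
        sk_model=trained_model,
        artifact_path="model",
        registered_model_name="my_pretrained_model",  # placeholder registry name
    )

# Later, inside Databricks, load any registered version for batch scoring or serving.
loaded = mlflow.pyfunc.load_model("models:/my_pretrained_model/1")
print(loaded.predict(X[:5]))
```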

Related

How to deploy machine learning model saved as pickle file on AWS SageMaker

I have built an XGBoost classifier and a RandomForest classifier model for an audio classification project. I want to deploy these models, which are saved in pickle (.pkl) format, on AWS SageMaker. From what I have observed, there aren't a lot of resources available online. Can anyone guide me through the steps and, if possible, also provide the code? I already have the models built and am just left with deploying them on SageMaker.
By saying that you want to deploy to SageMaker, I assume you mean a SageMaker endpoint.
The answer is the SageMaker inference toolkit. It is basically about teaching SageMaker how to load your model and run inference. More details here: https://github.com/aws/sagemaker-inference-toolkit, and here is an example implementation: https://github.com/aws/amazon-sagemaker-examples/tree/master/advanced_functionality/multi_model_bring_your_own
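As a rough sketch, the toolkit (and the prebuilt scikit-learn/XGBoost containers) expects an inference script that defines a few well-known handler functions; the file name model.pkl and the JSON format below are assumptions for illustration:

```python
# inference.py -- handler script packaged alongside the model artifact
import json
import os
import pickle

import numpy as np


def model_fn(model_dir):
    """Load the pickled model that was bundled into model.tar.gz."""
    with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def input_fn(request_body, content_type):
    """Deserialize the incoming request into a NumPy array."""
    if content_type == "application/json":
        return np.array(json.loads(request_body))
    raise ValueError(f"Unsupported content type: {content_type}")


def predict_fn(input_data, model):
    """Run inference with the loaded model."""
    return model.predict(input_data)


def output_fn(prediction, accept):
    """Serialize the prediction back to the caller."""
    return json.dumps(prediction.tolist())
```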

What is the difference between Deploying and Serving ML model?

Recently I developed an ML model for a classification problem and now would like to put it into production to classify actual production data. While exploring, I came across two terms, deploying and serving an ML model. What is the basic difference between them?
Based on my own readings and understanding, here's the difference:
Deploying = it means that you want to create a server/API (e.g. a REST API) so that it will be able to predict on new unlabelled data.
Serving = it acts as a server that is specialized for serving model predictions. The idea is that it can serve multiple models for different requests.
Basically, if your use case requires deploying multiple ML models, you might want to look at a serving framework like TorchServe. But if it's just one model, for me, Flask is already good enough (a minimal sketch follows the references below).
Reference:
Pytorch Deploying using flask
TorchServe
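
A minimal Flask sketch for the single-model case, assuming a pickled scikit-learn-style model saved as model.pkl and a JSON request format chosen here for illustration:

```python
# app.py -- minimal single-model prediction API
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[...], [...]]}
    features = request.get_json()["features"]
    predictions = model.predict(features)
    return jsonify({"predictions": predictions.tolist()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```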

Deploy Machine Learning Model

I created a Machine Learning pipeline, from training the model to deploying it as a web service. I put everything on GitHub, but I did not include the training dataset because GitHub limits file size to 100 MB. After training, I save the model and the necessary files into a .pkl file. The model file itself is ~300 MB, so I can't upload it to GitHub either. I connected my repo to Heroku and tried to send a request, but then I realized that I have neither the model nor the training dataset, so the request can't be served.
Is there any best practice for deploying a Machine Learning model given these GitHub limitations?
Please advise.
GitHub is a version control system. Technically, your repository should not contain training data or trained models.
Most real-life Machine Learning systems store trained models in file storage, for instance S3.
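For example, a common pattern is to pull the pickled model from S3 when the web service starts instead of committing it to the repository; the bucket and key names below are placeholders:

```python
# Download the trained model from S3 at application startup
# (bucket and key names are illustrative).
import pickle

import boto3

s3 = boto3.client("s3")
s3.download_file("my-model-bucket", "models/model.pkl", "/tmp/model.pkl")

with open("/tmp/model.pkl", "rb") as f:
    model = pickle.load(f)
```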

Using Artifactory to manage machine learning model

Is there any guideline or best practice for storing machine learning models? We can store them as binary files; however, a machine learning model is more than a model artifact (it also includes data, code, hyperparameters, and metrics). I wonder if there is any established practice around integrating Artifactory into a CI/CD process (using it to manage ML model artifacts/metadata and to support automated as well as human-in-the-loop model promotion)?
The only article I have found that touches on this topic, and only lightly, is:
https://towardsdatascience.com/who-moved-my-binaries-7c4d797cd783

Data processing while using tensorflow serving (Docker/Kubernetes)

I am looking to host 5 deep learning models where data preprocessing/postprocessing is required.
It seems straightforward to host each model using TF serving (and Kubernetes to manage the containers), but if that is the case, where should the data pre and post-processing take place?
I'm not sure there's a single definitive answer to this question, but I've had good luck deploying models at scale by bundling the data pre- and post-processing code into fairly vanilla Go or Python (e.g., Flask) applications that are connected to my persistent storage for other operations.
For instance, to take the movie recommendation example, on the predict route it's pretty performant to pull the 100 films a user has watched from the database, dump them into a NumPy array of the appropriate size and encoding, dispatch to the TensorFlow serving container, and then do the minimal post-processing (like pulling the movie name, description, cast from a different part of the persistent storage layer) before returning.
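To make that layout concrete, here is a hedged sketch of such a wrapper: a Flask route does the pre/post-processing and forwards the encoded tensor to TensorFlow Serving's REST API. The service host, model name, and the two storage-layer helpers are placeholders:

```python
# Thin pre/post-processing layer in front of a TensorFlow Serving container
import numpy as np
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder host/model name for the TF Serving container.
TF_SERVING_URL = "http://tf-serving:8501/v1/models/recommender:predict"


def fetch_watch_history(user_id):
    # Placeholder: pull the user's last 100 watched films from your database
    # and encode them into the shape the model expects.
    return [[0.0] * 100]


def enrich_with_metadata(scores):
    # Placeholder: join raw scores with movie names/descriptions from storage.
    return {"scores": scores}


@app.route("/recommend/<user_id>", methods=["GET"])
def recommend(user_id):
    # Pre-processing: build the input tensor for this user.
    history = fetch_watch_history(user_id)
    payload = {"instances": np.asarray(history, dtype=np.float32).tolist()}

    # Forward the encoded tensor to TF Serving and read back the predictions.
    response = requests.post(TF_SERVING_URL, json=payload)
    scores = response.json()["predictions"]

    # Post-processing: map raw scores back to human-readable metadata.
    return jsonify(enrich_with_metadata(scores))
```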
In addition to josephkibe's answer, you can:
Implement the processing in the model itself (see signatures for Keras models and input receivers for Estimators in the SavedModel guide); a sketch follows this list.
Install Seldon Core. It is a whole framework for serving that handles building images and networking. It builds the service as a graph of pods with different APIs, one of which is transformers that pre-/post-process data.
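
As a rough sketch of the first option, preprocessing can be baked into the exported SavedModel through a custom serving signature, so TensorFlow Serving applies it automatically; the tiny model, the normalization step, and the export path below are stand-ins:

```python
import tensorflow as tf

# Stand-in for an already-trained tf.keras model expecting 100 features.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(100,))])


@tf.function(input_signature=[tf.TensorSpec([None, 100], tf.float32, name="raw_features")])
def serve(raw_features):
    # Pre-processing lives inside the graph, so TF Serving applies it too.
    normalized = (raw_features - 0.5) * 2.0  # illustrative normalization
    return {"scores": model(normalized)}


# Export version 1 of the model with the custom serving signature.
tf.saved_model.save(model, "export/1", signatures={"serving_default": serve})
```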
