TensorFlow Model Question about saved model - object-detection-api

What is the difference between
the inference graph / saved model (inference graph = python exporter_main_v2.py ... --output_directory) and
the ssd_... download, i.e. SSD MobileNet V2 FPNLite 320x320... -tpu/saved_model?
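
Both directories should contain a TF2 SavedModel: exporter_main_v2.py writes your own trained checkpoint out as a SavedModel for inference, while the saved_model folder inside the downloaded SSD MobileNet V2 FPNLite archive is the pretrained model in the same format. One way to see that they are the same kind of artifact is to load each and inspect its serving signature; a minimal sketch, with placeholder paths that are not taken from the question:

import tensorflow as tf

# Placeholder paths - point these at the two saved_model directories
exported = tf.saved_model.load("my_export/saved_model")
pretrained = tf.saved_model.load("ssd_mobilenet_v2_fpnlite_320x320/saved_model")

# Both should expose a 'serving_default' signature with detection outputs
print(exported.signatures["serving_default"].structured_outputs)
print(pretrained.signatures["serving_default"].structured_outputs)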

Related

PyTorch loading pretrained weights

I am trying to load a pretrained model from a resnet_18.pth file into PyTorch. The online documentation suggested loading it like so:
weights = torch.load("resnet_18.pth")
When I print the output of weights, it gives something like the following:
('module.layer4.1.bn2.running_mean', tensor([ 9.1797e+01, -2.4204e+02, 5.6480e+01, -2.0762e+02, 4.5270e+01,
-3.2356e+02, 1.8662e+02, -1.4498e+02, -2.3701e+02, 3.2354e+01,
...
All of the tutorials mentioned loading weights using a base model:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
I want to apply the weights to a default ResNet-18 model, but the resnet18 from torchvision does not have a load_state_dict function. Help is appreciated.
from torchvision.models import resnet18
resnet18.load_state_dict(torch.load("resnet_18.pth"))
# 'function' object has no attribute 'load_state_dict'
resnet18 is itself a function that returns a ResNet-18 model instance; you are calling load_state_dict on the function, not on a model. To load your own pretrained weights, use:
model = resnet18()
model.load_state_dict(torch.load("resnet_18.pth"))
Note that load_state_dict(...) loads the weights in-place and does not return the model itself.
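One more thing worth checking: the keys printed above start with module., which usually means the checkpoint was saved from a model wrapped in nn.DataParallel. If load_state_dict complains about missing or unexpected keys, stripping that prefix first should help; a minimal sketch (the prefix handling is an assumption based on the printed keys):

import torch
from torchvision.models import resnet18

state_dict = torch.load("resnet_18.pth")

# Strip the "module." prefix that nn.DataParallel adds to every key
state_dict = {
    (k[len("module."):] if k.startswith("module.") else k): v
    for k, v in state_dict.items()
}

model = resnet18()
model.load_state_dict(state_dict)
model.eval()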

How to load a trained BlazingText model

I have trained a text classification model using BlazingText on AWS SageMaker. I can load the trained model and deploy an inference endpoint:
import json
from sagemaker.serializers import JSONSerializer

model = bt_model.deploy(initial_instance_count=1, endpoint_name=endpoint_name, instance_type='ml.m5.xlarge', serializer=JSONSerializer())
payload = {"instances": terms}
response = model.predict(payload)
predictions = json.loads(response)
and it works fine. Now I need to load the model's bin file using an entry_point so I can run some logic before and after predictions in the input_fn and output_fn.
I extracted the bin file from the model.tar.gz and I can load it, but I get a segmentation fault when I try to run a prediction:
from gensim.models import FastText
from gensim.models.fasttext import load_facebook_model, load_facebook_vectors

model = FastText.load('model.bin')
model.predict('hello world')  # crashes with a segmentation fault
As per the BlazingText documentation:
For both supervised (text classification) and unsupervised (Word2Vec) modes, the binaries (*.bin) produced by BlazingText can be cross-consumed by fastText and vice versa. You can use binaries produced by BlazingText by fastText. Likewise, you can host the model binaries created with fastText using BlazingText.
Here is an example of how to use a model generated with BlazingText with fastText:
# Download the model artifact from S3
aws s3 cp s3://<YOUR_S3_BUCKET>//model.tar.gz model.tar.gz

# Unzip the model archive
tar -xzf model.tar.gz

# Use the model archive with fastText
fasttext predict ./model.bin test.txt
but for some reason it's not working as expected
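
One likely culprit: gensim's FastText.load expects a model saved with gensim's own save(), not a Facebook-format .bin (that format is handled by load_facebook_model), and even then gensim only reconstructs the unsupervised embedding part, not the supervised classifier. For classification in Python, the official fasttext package matches the CLI usage the docs show; a minimal sketch, assuming that package is installed:

import fasttext

# Load the Facebook-format binary extracted from model.tar.gz
model = fasttext.load_model('model.bin')

# Supervised models expose predict(); it returns (labels, probabilities)
labels, probs = model.predict('hello world')
print(labels, probs)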

How to load pretrained weights in npz format into tensorflow 1.x model

I am unable to restore FlowNet 2 weights, which are provided in npz format, into my TensorFlow 1.x implementation. I need to do this without TensorLayer.
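
The question is left open here, but the usual TF 1.x pattern is to build the graph first and then assign each npz array into the matching variable by name. A minimal sketch, assuming the npz keys correspond to variable names (for FlowNet 2 checkpoints a manual name-mapping step is typically needed; the file name below is hypothetical):

import numpy as np
import tensorflow as tf  # TF 1.x

# ... build your FlowNet 2 graph here ...

weights = np.load('flownet2.npz')  # hypothetical file name

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for var in tf.global_variables():
        key = var.op.name  # assumed key scheme; adapt to the archive's actual keys
        if key in weights:
            sess.run(var.assign(weights[key]))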

Salvaging a pickled XGBClassifier from an older major version

TL;DR How to import a pickled XGBoost model from an older major version?
I trained an XGBoost model using version 0.6 via their scikit-learn API, so the classifier is of class xgboost.XGBClassifier. I saved that trained model in pickle format.
However, I need to move my model to an updated version, XGBoost 1.0.
I've tried following their guide on loading/saving models (https://xgboost.readthedocs.io/en/latest/tutorials/saving_model.html), but it seems the old XGBClassifier model doesn't have any of those methods.
What do I do with this trained xgboost.XGBClassifier object so I can convert it to be loadable in XGBoost 1.0?
In the old environment (with the old xgboost version) you load the pickled model normally, then call save_model on the hidden _Booster attribute:
import pickle as pkl

with open('model.pkl', 'rb') as f:  # path to your pickled model
    clf = pkl.load(f)
clf._Booster.save_model('clf.model')
Then in the updated environment (here: with xgboost==1.0) you load the model using the new load_model method:
import xgboost as xgb

clf = xgb.XGBClassifier()
clf.load_model('clf.model')
This relies on XGBoost's guarantee of backward compatibility for models saved in its own format (as opposed to the lack of such a guarantee for pickled objects) - see the docs.

Use tested machine learning model on new unlabeled single observation or dataset?

How can I use a trained and tested algorithm (e.g. a machine learning classifier), after it has been saved, on a new observation or dataset whose class I do not know (e.g. ill vs. healthy), based on the predictors used for model training?
I use caret but can't find any lines of code for this.
many thanks
After training and testing any machine learning model, you can save it as an .rds file and load it back like this:
# Save the fitted model as an .rds file
saveRDS(model_fit, "model.rds")

# Load it back
my_model <- readRDS("model.rds")
Create a new observation from the same dataset (or use a new dataset):
new_obs <- iris[100, ]  # using the built-in iris dataset, sample no. 100
Make a prediction on the new observation and evaluate it:
predicted_new <- predict(my_model, new_obs)
confusionMatrix(reference = new_obs$Species, data = predicted_new)  # confusionMatrix() comes from caret
table(new_obs$Species, predicted_new)
