Where to get the Model for EdgeBoxes in OpenCV Python - opencv

The link below provides the Python sample implementation of EdgeBoxes:
https://github.com/opencv/opencv_contrib/blob/96ea9a0d8a2dee4ec97ebdec8f79f3c8a24de3b0/modules/ximgproc/samples/edgeboxes_demo.py
However, I do not understand this part:
model = sys.argv[1]
Where can I get this model?

model = sys.argv[1]
means that the model path is the first argument you pass when calling the script from the shell.
Usage:
edgeboxes_demo.py [<model>] [<input_image>]
You can use the example model provided in the opencv_extra repository.
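For reference, here is a minimal sketch based on the linked demo, assuming the structured-edge model file model.yml.gz from opencv_extra (it ships under testdata/cv/ximgproc) and an arbitrary input image:
import cv2 as cv
import numpy as np

model = "model.yml.gz"        # the <model> argument: path to the structured-edge model from opencv_extra
im = cv.imread("input.jpg")   # the <input_image> argument

edge_detection = cv.ximgproc.createStructuredEdgeDetection(model)
rgb_im = cv.cvtColor(im, cv.COLOR_BGR2RGB)
edges = edge_detection.detectEdges(np.float32(rgb_im) / 255.0)
orimap = edge_detection.computeOrientation(edges)
edges = edge_detection.edgesNms(edges, orimap)

edge_boxes = cv.ximgproc.createEdgeBoxes()
edge_boxes.setMaxBoxes(30)
boxes = edge_boxes.getBoundingBoxes(edges, orimap)   # newer OpenCV versions return (boxes, scores) here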

Related

How can I read pytorch model file via cv2.dnn.readNetFromTorch()?

I am able to save a PyTorch custom model (this works with any PyTorch version above 1.0).
However, I am not able to read the saved model back. I am trying to read it via cv2.dnn.readNetFromTorch() so as to use the model in the OpenCV framework (4.1.0).
I saved the PyTorch model in several different ways, as follows, to see whether the saving method affects how cv2.dnn reads it:
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.pt')
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.pth')
None of these saved files can be read via cv2.dnn.readNetFromTorch().
The error I get is always the same:
cv2.error: OpenCV(4.1.0) ../modules/dnn/src/torch/torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'readObject'
Do you have any idea how to solve this issue?
The OpenCV documentation states that readNetFromTorch() can only read models in the Torch7 framework format; there is no mention of the .pt or .pth files saved by PyTorch.
This post mentions that PyTorch does not save in the .t7 format:
.t7 was used in Torch7 and is not used in PyTorch. If I’m not mistaken
the file extension does not change the behavior of torch.save.
An alternate method is to export the model as onnx, then read the model in opencv using readNetFromONNX.
Yes, we have tested these methods (saving the model as .pt or .pth), and we couldn't load the resulting files with OpenCV's readNetFromTorch. We had to use LibTorch or ONNX as the intermediate model format to read the model from C++.
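A minimal sketch of the ONNX route (the input shape and file names here are assumptions; adjust them to your model):
import torch
import cv2

# Export the PyTorch model to ONNX using a dummy input of the expected shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "pidxx.onnx")

# Read the exported model with OpenCV's DNN module.
net = cv2.dnn.readNetFromONNX("pidxx.onnx")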

Spacy-Transformers: Access GPT-2?

I'm using Spacy-Transformers to build some NLP models.
The Spacy-Transformers docs say:
spacy-transformers
spaCy pipelines for pretrained BERT, XLNet and GPT-2
The sample code on that page shows:
import spacy
nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple shares rose on the news. Apple pie is delicious.")
Based on what I've learned from this video, "en_core_web_trf" appears to be the package to pass to spacy.load() in order to use a BERT model. I've searched the Spacy-Transformers docs and haven't yet found an equivalent package for GPT-2. Is there a specific package to pass to spacy.load() in order to use a GPT-2 model?
The en_core_web_trf pipeline uses a specific Transformers model, but you can specify arbitrary ones using the TransformerModel wrapper class from spacy-transformers; see the docs for that. An example config:
[model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "roberta-base" # this can be the name of any hub model
tokenizer_config = {"use_fast": true}
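To target GPT-2, the same wrapper should accept the corresponding Hugging Face hub name; an untested sketch of the config:
[model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "gpt2" # assumption: the GPT-2 hub model name; any hub model name should work here
tokenizer_config = {"use_fast": true}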

Serializing Drake objects with Python

Is there any way to serialize and de-serialize objects (such as
pydrake.trajectories.PiecewisePolynomial, Expression ...) using pickle
or some other way?
It does not complain when I serialize it, but when trying to load from file
it complains:
TypeError: pybind11_object.__new__(pydrake.trajectories.PiecewisePolynomial) is not safe, use object.__new__()
Is there a list of classes you would like to serialize / pickle?
I can create an issue for you, or you can create one if you have a list already in mind.
More background:
Pickling for pybind11 (which is what pydrake uses) has to be defined manually:
https://pybind11.readthedocs.io/en/stable/advanced/classes.html#pickling-support
At present, we don't have a roadmap in Drake to make everything serializable, so pickling support is added on a per-class basis.
For example, for pickling RigidTransform: issue link and PR link
A simpler pickling example for CameraInfo: PR link
(FTR, if an object is easily recoverable from its construction arguments, it should be trivial to define pickling.)
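Until a class gains native pickling support, one workaround along those lines is to pickle the construction data rather than the pybind11 object itself; a minimal sketch for a PiecewisePolynomial built with FirstOrderHold (the breaks and samples here are illustrative):
import pickle
import numpy as np
from pydrake.trajectories import PiecewisePolynomial

breaks = [0.0, 1.0, 2.0]
samples = np.array([[0.0, 1.0, 4.0]])   # one row per output dimension, one column per break

# Pickle the data needed to rebuild the trajectory, not the trajectory object itself.
with open("traj.pkl", "wb") as f:
    pickle.dump({"breaks": breaks, "samples": samples}, f)

with open("traj.pkl", "rb") as f:
    data = pickle.load(f)
traj = PiecewisePolynomial.FirstOrderHold(data["breaks"], data["samples"])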

coremltools Pipeline input node not connected to rest of model

I've built an Encoder/Decoder model (in PyTorch), saved as two separate mlmodel objects. I want to put these together in a coremltools.models.pipeline, for efficiency purposes. With the two input models saved to disk, this is what I use to build the pipeline:
import coremltools
from coremltools.models.pipeline import *
from coremltools.models import datatypes
input_features = [('distorted_input', datatypes.Array(28*28))]
output_features = ['z_distribution', 'rectified_input']
pipeline = Pipeline(input_features, output_features)
pipeline.add_model(enc_mlmodel)
pipeline.add_model(dec_mlmodel)
pipeline_model = coremltools.models.MLModel(pipeline.spec)
pipeline_model.save('inputFixerPipeline.mlmodel')
The creation of the pipeline runs fine, but the model that's saved fails to connect the input -- i.e., looking at the model in Netron, I see that the distorted_input node is just hanging on its own. The rest of the pipeline appears to be correct.
Any thoughts?
Answering my own question: I had passed image_input_names for the 2nd model in my pipeline. In fact, it doesn't take an image, just a tensor, so I suppose that was confusing the pipeline builder. Removing the image_input_names entry corrected the pipeline model right away.
Hopefully this helps someone save some time in the future.
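For anyone hitting the same thing, a hypothetical sketch of the decoder conversion, assuming the onnx-coreml converter was used (file and input names are placeholders):
from onnx_coreml import convert

# The decoder takes a plain tensor, not an image, so image_input_names must not be set.
dec_mlmodel = convert("decoder.onnx")
# Previously: convert("decoder.onnx", image_input_names=["z_distribution"])  # this broke the pipeline wiring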

How can I deploy a scikit learn python model with Watson Studio and Machine Learning?

Suppose that I already have a scikit-learn model and I want to save this to my Watson Machine Learning and deploy it using the python client.
The python client docs: http://wml-api-pyclient.mybluemix.net
I have something like this:
from sklearn import svm

clf = svm.SVC(kernel='rbf')
clf.fit(train_data, train_labels)
# Evaluate your model.
predicted = clf.predict(test_data)
What I want to do is to deploy this model as a web service accessible via REST API.
I read the Watson Machine Learning documentation here: https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-ai.html?audience=wdp&context=analytics
but I'm having trouble deploying the model.
You can also deploy it as a Python function. What you need is to wrap all your functionality into a single deployable function (see Python closures).
The way you use the credentials is the same with this method.
Step 1 : Define the function
Step 2 : Store the function in the repository
After that, you can deploy the function and access it in two ways:
using the Python client
using the REST API
This has been explained in detail in this post; a rough sketch of step 1 follows below.
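As an illustration of step 1, the deployable function is a closure that captures everything it needs; the function name and payload layout below are assumptions, not part of the WML API:
def make_deployable_function(trained_model):
    """Return a closure that scores incoming payloads with the captured model."""
    def score(payload):
        values = payload["values"]                       # assumed payload layout
        predictions = trained_model.predict(values).tolist()
        return {"predictions": predictions}
    return score

# Usage with the classifier and test data from the question above:
score_fn = make_deployable_function(clf)
print(score_fn({"values": test_data}))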
With a scikit-learn model, Watson Machine Learning expects a pipeline object instead of just a fitted model object, so that you also deploy the data transformation and preprocessing logic to the same endpoint. For example, try changing your code to:
from sklearn import preprocessing, svm
from sklearn.pipeline import Pipeline

scaler = preprocessing.StandardScaler()
clf = svm.SVC(kernel='rbf')
pipeline = Pipeline([('scaler', scaler), ('svc', clf)])
model = pipeline.fit(train_data, train_labels)
Then you will be able to deploy the model by following the docs here: http://wml-api-pyclient.mybluemix.net/#deployments
From your Notebook in Watson Studio, you can just
from watson_machine_learning_client import WatsonMachineLearningAPIClient
wml_credentials = {
    "url": "https://ibm-watson-ml.mybluemix.net",
    "username": "*****",
    "password": "*****",
    "instance_id": "*****"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
and then use the client to deploy the model after saving the model first to the repository.
You can see how to accomplish all of this in this tutorial notebook: https://dataplatform.cloud.ibm.com/exchange/public/entry/view/168e65a9e8d2e6174a4e2e2765aa4df1
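For reference, a rough sketch of the save-then-deploy steps with that client; the meta_props keys and the deployment name are assumptions and may differ between client versions:
# Save the fitted sklearn Pipeline from above to the repository, then deploy it.
model_details = client.repository.store_model(
    model=model,
    meta_props={client.repository.ModelMetaNames.NAME: "svc-pipeline"},
    training_data=train_data,
    training_target=train_labels
)
model_uid = client.repository.get_model_uid(model_details)

deployment = client.deployments.create(model_uid, name="svc-pipeline-deployment")
scoring_url = client.deployments.get_scoring_url(deployment)   # REST endpoint for scoring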