Neo compilation job failed on YOLOv5/v7 model

I was trying to use AWS SageMaker Neo compilation to convert a YOLO model (trained on our custom data) to Core ML format, but got an error on the input config:
ClientError: InputConfiguration: Unable to determine the type of the model, i.e. the source framework. Please provide the value of argument "source", from one of ["tensorflow", "pytorch", "mil"]. Note that model conversion requires the source package that generates the model. Please make sure you have the appropriate version of source package installed.
It seems Neo cannot recognize the YOLO model. Are there any special requirements for models in AWS SageMaker Neo?
I've tried both the latest YOLOv7 and YOLOv5 models, with both .pt and .pth file extensions, but still get the same error. I also tried downgrading PyTorch to version 1.8, which didn't help either.
However, when I use the YOLOv4 model from this tutorial post, it works fine: https://aws.amazon.com/de/blogs/machine-learning/speed-up-yolov4-inference-to-twice-as-fast-on-amazon-sagemaker/
Any idea whether Neo compilation can work with YOLOv5/v7 models?
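For what it's worth, Neo's PyTorch path generally expects a TorchScript archive saved with torch.jit.save after tracing, rather than a pickled nn.Module or a bare state_dict, which would explain the "unable to determine the source framework" error. A minimal sketch, assuming a stock YOLOv5 checkpoint (the hub model name, input shape, and file names are illustrative, not from the original post):

import torch

# Load a YOLOv5 model; swap in your custom-trained weights here.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model.eval()

# Trace with a dummy input matching the shape Neo will compile for.
dummy = torch.zeros(1, 3, 640, 640)
traced = torch.jit.trace(model, dummy, strict=False)

# Save the TorchScript archive; this file (packed into a tar.gz) is
# what Neo's PyTorch framework option expects.
torch.jit.save(traced, 'model.pth')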

Related

Use sentence transformers models with Apache Beam

I have an Apache Beam pipeline that works flawlessly with a DirectRunner, but not with a DataflowRunner:
When using a DataflowRunner I get an "Error 413 (Request entity too large)".
From what I understand, it is because the pipeline file is too large (I obtained it with the following option: --dataflow_job_file=gs://...).
And this is caused by the model I use:
embedding_model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
Has anyone already experienced something similar?
You are correct in the assumption that the pipeline file is too large. The direct runner doesn't have that limitation, but I believe Dataflow limits the job JSON to something like 20 MB.
I'm guessing you're embedding the model into that JSON? You'll probably be better off loading it from an external source. For example, RunInference in the Python SDK allows for loading custom models.
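A common way to keep the model out of the serialized pipeline is to load it in a DoFn's setup() method, so each worker fetches it at startup instead. A minimal sketch, assuming one input string per element (the DoFn name is made up):

import apache_beam as beam

class EmbedDoFn(beam.DoFn):
    def setup(self):
        # Imported and loaded on the worker, so the model weights never
        # end up inside the pipeline's JSON representation.
        from sentence_transformers import SentenceTransformer
        self.model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')

    def process(self, text):
        # Yield the embedding vector for a single input string.
        yield self.model.encode(text)

RunInference with a custom ModelHandler gives you the same separation, plus batching and metrics.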

How can I read a PyTorch model file via cv2.dnn.readNetFromTorch()?

I am able to save a custom PyTorch model (it works with any PyTorch version above 1.0).
However, I am not able to read the saved model. I am trying to read it via cv2.dnn.readNetFromTorch() so as to use the model in the OpenCV framework (4.1.0).
I saved the PyTorch model with different methods, as follows, to see whether the saving method affects how cv2.dnn reads the file.
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.pt')
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.pth')
None of these saved files can be read via cv2.dnn.readNetFromTorch(). The error I get is always the same, shown below.
cv2.error: OpenCV(4.1.0) ../modules/dnn/src/torch/torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'readObject'
Do you have any idea how to solve this issue?
The OpenCV documentation states that it can only read models in the Torch7 framework format. There is no mention of .pt or .pth files saved by PyTorch.
This post mentions that PyTorch does not save as .t7:
.t7 was used in Torch7 and is not used in PyTorch. If I'm not mistaken, the file extension does not change the behavior of torch.save.
An alternative method is to export the model as ONNX, then read the model in OpenCV using readNetFromONNX.
Yes, we have tested these methods (saving the model as .pt or .pth), and we couldn't load those model files using OpenCV's readNetFromTorch. We ended up using LibTorch or ONNX as the intermediate model format to be read from C++.
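A minimal sketch of the ONNX route; the tiny placeholder network and the 224x224 input shape below are assumptions standing in for your own trained model:

import cv2
import torch
import torch.nn as nn

# Placeholder for your trained model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.eval()

# Export to ONNX with a dummy input of the shape your network expects.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, 'model.onnx', opset_version=11)

# The ONNX file is readable by OpenCV's dnn module.
net = cv2.dnn.readNetFromONNX('model.onnx')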

Save a TensorFlow 2.0 model and use it in OpenCV 4

I currently write my models with TensorFlow 2.0 and I want to run them with OpenCV 4 (I want to compare performance), but I can't find a way to convert my TensorFlow model for OpenCV.
For running in OpenCV I want to use:
cv2.dnn.readNetFromTensorflow('saved_model.pb', 'saved_model.pbtxt')
but when I save my model with:
model.save('./')
I obtain these files:
saved_model.pb
variables/variables.index
variables/variables.data-00000-of-00002
variables/variables.data-00001-of-00002
I have my .pb but not my .pbtxt. How is it possible to write this file? According to the OpenCV documentation, this file is the text graph definition. I already tried to write a .pbtxt with
model.to_json()
but it didn't work :/
Do you have any ideas?
Thanks in advance!
Tanguy
Additionally, OpenCV requires an extra configuration file based on the .pb, the .pbtxt. It is possible to import your own models and generate your own .pbtxt files by using one of the following files from the OpenCV GitHub repository...
Here is a link to the tutorial: https://jeanvitor.com/tensorflow-object-detecion-opencv/
I haven't tried it myself, but it seems legit.
For example, tf_text_graph_ssd.py gets the job done.
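A rough outline of that route (file names are illustrative; tf_text_graph_ssd.py lives under samples/dnn in the OpenCV repository and is run against your frozen graph and training pipeline config):

# First generate the .pbtxt from the frozen graph, e.g.:
#   python tf_text_graph_ssd.py --input frozen_inference_graph.pb \
#       --config pipeline.config --output graph.pbtxt
import cv2

# Then load the frozen graph together with the generated text graph definition.
net = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')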

Error loading faster_rcnn models into OpenCV

I followed every step of the TensorFlow Object Detection API exactly and trained the faster_rcnn_resnet50 model. Then I followed the wiki to generate the .pbtxt file for cv2 to read the net.
When I ran the model using OpenCV, it gave no error most of the time, but sometimes this error:
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:495: error: (-2:Unspecified error) Input layer not found: CropAndResize/Reshape/shape in function 'cv::dnn::experimental_dnn_34_v7::anonymous-namespace'::TFImporter::connect'
I tried the optimization tool and regenerated the .pbtxt file, but still experienced the same error.
Any suggestions to make the model work? Thanks in advance!
I got an answer from @dkurt at https://github.com/opencv/opencv/issues/13050.
I hope that helps if you are experiencing the same problem.

NiftyNet 'evaluation' action output is incorrect

I'm trying to use the new 'evaluation' action after inference to generate some metrics for my output. However, the .csv files just show scores of '0' for average_distance and '1' for Jaccard and Dice for each of my data volumes. I can't seem to find any documentation for the evaluation action, so I'm not sure what I'm doing wrong. Also, the --dataset_to_infer=Validation option doesn't seem to work; both inference and evaluation are applied to all data rather than just the validation set.
Thanks!
For the evaluation issue, we're working on the documentation. The dataset_to_infer option is only tested for the applications in NiftyNet/niftynet/application; applications from the model zoo have not been upgraded to support it yet (please file an issue with more details at https://github.com/NifTK/NiftyNet/issues if you believe it's a bug).
For the time being, pointing directly to the inference result in the config.ini worked for me, e.g.:
[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv
I believe this file is currently not being found, so evaluation defaults to comparing labels to labels. See the issue on GitHub.
